\section{Introduction}\label{Sec_Intro}
Random fields under local maps are defined and analyzed in various areas of probability and statistics. In these studies a random field is typically a stochastic process whose variables take values in a subset of the real numbers, and which has a geometric index set such as the infinite lattice, a graph with infinite vertex set, or Euclidean space. It is of interest to understand the behavior of a given random field with distribution $\mu$ under the application of a map $T$ that acts on infinite-volume realizations of the process, and to investigate the resulting properties of the image process, whose distribution we denote by $\mu'$.
Relevant deterministic maps $T$ typically have local-dependence windows. They may also be generalized to stochastic kernels that act locally. In this general setup let us call $\mu$ the {\em first-layer measure}, and the measure $\mu'$, in which we shall mostly be interested, the {\em second-layer measure}.
This setup is of theoretical interest, but also occurs in many applications in the natural sciences, engineering, and statistics. We mention thinning transformations of Poisson point processes (in which points of a realization are omitted according to local rules)~\cite{bremaud1979optimal,brown1979position,isham1980dependent,rolski1991stochastic,last1993dependent,ball2005poisson,moller2010thinning,blaszczyszyn2019determinantal}, transformations in image analysis~\cite{MR1100283,MR1344683}, and renormalization group transformations in statistical mechanics (where a physical system like a ferromagnet is considered on increasingly large scales by means of maps which forget details on small scales)~\cite{griffiths1978position,griffiths1979mathematical,EnFeSo93}. Processes obtained via local maps in this way can also be viewed as generalizations of the widely used {\em hidden Markov models}~\cite{MR1323178}.
Hidden Markov models are images under local kernels of an underlying {\em first-layer} Markov chain and appear when noisy observations are to be modeled; here the generalization is to spatial index sets and more complex dependence structures.
It was discovered first in the context of such renormalization group transformations that strictly local transformations acting on a spatially Markovian random field $\mu$ indexed by a lattice may result in singularities in the image measure $\mu'$. Two concrete examples are provided by the low-temperature Ising model under a block-average transformation, or under the projection to a sublattice~\cite{MR1012855,EnFeSo93,bricmont1998renormalization,le2013almost,MR3830302}. The original motivation of such renormalization group transformations, suggested by heuristic schemes of theoretical physics, was to understand the iterated {\em coarse-graining dynamics} on the level of Hamiltonians in order to investigate critical behavior~\cite{wilson1974renormalization}.
The singularity in the second-layer measure $\mu'$ means that $\mu'$ loses not only the spatial Markov property of $\mu$ under the map $T$ (which is less surprising), but even the more general {\em Gibbs property}. This is more severe: it means that the infinite second-layer system acquires internal long-range dependence, and in particular no longer possesses a well-behaved Hamiltonian with good summability properties of its interaction potentials. The singular long-range dependence appears on the level of finite-volume conditional probabilities of the image measure, which are not quasilocal functions of their conditioning. Put equivalently, finite-volume subsystems depend on their boundary conditions arbitrarily far away, and their behavior cannot be described by kernels that are continuous in the product topology. This may cause standard theory of infinite-volume states, including the variational principle, to fail, see the examples in~\cite{kulske2004relative}.
A variety of examples has been studied ever since, in which non-Gibbsian behavior was proved to occur via different mechanisms, but always in regimes of sufficiently strong coupling, where the first-layer measure differs much from independence. (Having said this, there are known examples showing that the range of temperatures where non-Gibbsian behavior occurs in the image system may extend beyond the critical temperature of the first-layer system~\cite{haggstrom2003gibbs}.) Moreover, examples have been found where the set of discontinuity points is even of full measure w.r.t.~$\mu'$ itself, which is the strongest form of singularity~\cite{kulske2004relative,EnErIaKu12,JaKu16,bergmann2020dynamical}.
The main subclasses of transformations that have been studied are projections in terms of a variety of deterministic maps~\cite{van2017decimation,le2013almost,kulske2017fuzzy,MR3648046,MR3961240} and stochastic time evolutions~\cite{le2002short,van2002possible,kulske2007spin,EnErIaKu12,roelly2013propagation,FeHoMa14,JaKu16,kraaij2021hamilton}, in various underlying geometries of lattice models, mean-field models, Kac models, and models in the continuum~\cite{JaKu16}.
\subsubsection*{Informal result: Even independent fields may become non-Gibbs under projections}
In the present paper we provide a new and simple example which shows that a natural local transformation of range $1$ can produce singularities, even when it is applied to an {\em independent field}. In our example we choose as the first-layer field the i.i.d.~Bernoulli lattice field $\mu_p$ on the integer lattice, with state space $\{0,1\}^{{\Bbb Z}^d}$ and occupation probability $p\in [0,1)$. The Bernoulli lattice field in itself is studied in site percolation, where one asks for the existence of infinite clusters and refined connectedness properties~\cite{grimmett1999percolation}. It also drives more complex processes, in the statistical mechanics of disordered systems~\cite{grimmett1997percolation,MR1766342,MR2252929}, and elsewhere in probability~\cite{adler1991bootstrap,MR2283880,MR3161674,MR3156983,jahnel2020probabilistic} and its applications.
We then study the second-layer measure $\mu_p'$ that appears as an image under application of the concrete range-one map $T$ that is defined by removing from a realization of occupied sites the {\em occupied isolated sites}. $T$ is a projection map as it satisfies $T^2=T$, and we will call it {\em the projection to non-isolates}. Hence it keeps from a realization of occupied sites only the occupied clusters of size at least two. This includes the infinite cluster, in case there is one, i.e., in the percolation regime of large enough $p$. We may also view $T$ as a simple smoothing transformation, as isolated ``dust'' of occupied sites (or ``pixels'') is forgotten under the map.
What should one expect for the second-layer measure $\mu_p'$? As the $\mu_p$-probability that a given site is isolated equals
$p(1-p)^{2d}$, the map $T$ seems non-invasive, in particular for probabilities $p$ close to $1$. In particular, the removed sites do not percolate in this regime. So one might naively conjecture that the second-layer measure should not be much affected and should be well-behaved, and in particular be representable as a Gibbsian distribution with quasilocal conditional probabilities.
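For orientation, in dimension $d=2$ at $p=0.9$ this gives
\begin{equation*}
p(1-p)^{2d}=0.9\cdot 0.1^{4}=9\cdot 10^{-5},
\end{equation*}
so only about one in ten thousand sites is removed by $T$.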
As a main message of this paper, we prove that this is not the case: We show that $\mu'_p$ is spatially {\em non-Markovian} and {\em non-quasilocal} when $p<1$ is large, see Theorem~\ref{thm_large_p}. We hope that this result is of some interest for the percolation community. We complement this result by proving regularity of the projected measure when $p\geq 0$ is small enough, see Theorem~\ref{thm_small_p}. This implies the existence of a Gibbs-non-Gibbs transition driven by $p$.
Let us finally discuss our map $T$ that projects to non-isolates (and removes the isolates) from a dual perspective. Namely, $T$ has a natural companion map $T^*$ that is again a projection map. $T^*$ does precisely the opposite, it projects to the isolated sites (and removes the non-isolates).
There is independent interest in the action of $T^*$ on the i.i.d.~Bernoulli lattice field, for the reason that it produces the {\em thinned Bernoulli lattice field}, in which all occupied sites are separated. This thinned Bernoulli lattice field
is relevant, as it is the lattice analogue of the well-known and much studied Mat\'ern process in the continuum~\cite{matern1960spatial,moller2010perfect,baccelli2012extremal}. The latter by definition is derived from a first-layer Poisson process in Euclidean space by removing all points of the realization that have at least one other point of the realization within Euclidean distance one.
Clearly, the second-layer measures of both maps $T,T^*$ acting jointly on the same Bernoulli lattice-field realization appear in a natural coupling. As a first guess one may conjecture from this that either both second-layer measures are Gibbs, or both are non-Gibbs. We warn the reader that this is too naive: not only on the level of proofs, but the statement itself may also be false. We leave the analysis of the companion process, the thinned Bernoulli lattice field,
to another study.
The paper is organized as follows. In Section~\ref{Sec_Setting} we present the setting and our main results on non-Gibbsianness and Gibbsianness for the Bernoulli field under the removal-of-isolates transformation. In Section~\ref{Sec_Proofs} we present the corresponding proofs.
\section{Setting and main results}\label{Sec_Setting}
To define our process we start from the $\Omega=\{0,1\}^{{\Bbb Z}^d}$-valued i.i.d.~Bernoulli field $\mu_p$ with parameter $p\in [0,1]$. We consider realizations of the Bernoulli field under the application of the transformation
$T:\Omega\rightarrow \Omega$
given by
$$
(T\omega)_x:=\omega'_x=\omega_x\Bigl(1-\prod_{y\in \partial x}(1-\omega_y)\Bigr),
$$
where $\partial x$ denotes the set of nearest neighbors of $x\in{\Bbb Z}^d$ in ${\Bbb Z}^d$, equipped with the usual neighborhood structure. In words, $T$ is the projection to the non-isolates, see Figure~\ref{Pix_Trans} for an illustration. The image measure under the transformation $$\mu'_p:=\mu_p\circ T^{-1}$$ is supported on the subset $\Omega':=T(\Omega)$ of configurations that obey the non-isolation constraint.
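As a computational illustration of the transformation (a minimal Python sketch, assuming numpy; we use a finite grid with free boundary in $d=2$, so edge sites have fewer neighbors, and all names are ours), the following implements $T$ and checks empirically that the density of removed sites is close to $p(1-p)^{2d}$.
\begin{verbatim}
import numpy as np

def T(omega):
    # keep an occupied site only if at least one nearest neighbor
    # is occupied (free boundary on a finite grid)
    occ_nbr = np.zeros_like(omega)
    occ_nbr[1:, :] |= omega[:-1, :]
    occ_nbr[:-1, :] |= omega[1:, :]
    occ_nbr[:, 1:] |= omega[:, :-1]
    occ_nbr[:, :-1] |= omega[:, 1:]
    return omega & occ_nbr

p, n = 0.9, 2000
rng = np.random.default_rng(0)
omega = (rng.random((n, n)) < p).astype(int)
removed = omega - T(omega)
print(removed.mean(), p * (1 - p) ** 4)  # both approximately 9e-5
\end{verbatim}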
\begin{figure}[!htpb]
\centering
\begin{subfigure}{0.45\textwidth}
\input{Pix_Bern.tex}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\input{Pix_Trans.tex}
\end{subfigure}
\caption{Realization of a Bernoulli field (left) and its image under the transformation $T$ (right).}
\label{Pix_Trans}
\end{figure}
Intuitively the application of $T$ should not change the measure very much at large $p$ close to $1$, where a typical configuration consists of a large percolating cluster and very few isolated sites. In this regime one may view $T$ as a cleansing transformation that wipes away the smallest dust of isolated sites, and keeps the apparent main parts. From the definition of $T$ as a local map it is also obvious that variables at sites of graph distance at least $3$ are independent under $\mu'_p$, so that one could expect that $\mu'_p$ is a nicely behaved measure.
Recall that a {\em specification} $\gamma=(\gamma_\Lambda)_{\Lambda\Subset{\Bbb Z}^d}$ is a consistent and proper family of conditional probabilities, i.e., for all $\Lambda\subset\Delta\Subset{\Bbb Z}^d$, $\omega_\Lambda\in \Omega_\Lambda:=\{0,1\}^\Lambda$ and $\hat\omega\in \Omega$, we have that $\int_{\Omega}\gamma_\Delta(\d \tilde\omega|\hat\omega)\gamma_\Lambda(\omega_\Lambda|\tilde\omega)=\gamma_\Delta(\omega_\Lambda|\hat\omega)$, and for all $\omega_{\Lambda^{\rm c}}\in \Omega_{\Lambda^{\rm c}}$ we have $\gamma_\Lambda(\omega_{\Lambda^{\rm c}}|\hat\omega)={\mathds 1}\{\omega_{\Lambda^{\rm c}}=\hat\omega_{\Lambda^{\rm c}}\}$. A specification is called {\em quasilocal}, if for all $\Lambda\Subset{\Bbb Z}^d$ and $\hat\omega_\Lambda \in\Omega_\Lambda$, the mapping $\omega\mapsto\gamma_\Lambda(\hat\omega_\Lambda|\omega)$ is continuous with respect to the product topology on $\Omega$. We say that $\gamma$ is a specification for some random field $\mu$ on $\Omega$, if it satisfies the DLR equations, i.e., for all $\Lambda\Subset{\Bbb Z}^d$ and $\omega_\Lambda\in \Omega_\Lambda$, we have that $\int_{\Omega}\mu(\d \tilde\omega)\gamma_\Lambda(\omega_\Lambda|\tilde\omega)=\mu(\omega_\Lambda)$.
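As a simple illustration, the independent field $\mu_p$ itself admits the quasilocal specification given by the product kernels
\begin{equation*}
\gamma_\Lambda(\omega_\Lambda|\hat\omega)=\prod_{x\in\Lambda}p^{\omega_x}(1-p)^{1-\omega_x}
\end{equation*}
(extended properly off $\Lambda$ as above), which do not depend on the boundary condition $\hat\omega$ at all and are therefore trivially continuous in the product topology. The question for $\mu'_p$ is whether any such continuous family of conditional probabilities survives the map $T$.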
Our present result shows that this is not the case in the whole parameter regime, but the following is true: For large $p$, $\mu'_p$ is not spatially Markovian, it is not even a Gibbs measure in the sense of existence of a version of its finite-volume conditional probabilities which is continuous with respect to the product topology on $\Omega'$. More precisely we have the following theorem.
\begin{thm}(Non-Gibbsianness for large $p$)\label{thm_large_p} Consider the image measure $\mu'_p$ of the Bernoulli field on ${\Bbb Z}^d$ under the map to the non-isolates in lattice dimensions $d\geq 2$. Then, there is $p_c(d)<1$ such that for $p\in (p_c(d),1)$, there is no quasilocal specification
$\gamma'$ for $\mu'_p$.
\end{thm}
The result shows that Gibbsian descriptions of thinning processes of various types derived from Bernoulli or Poissonian fields are by no means obvious, and that more research on such processes in discrete and continuous setups is necessary.
In order to complement the above non-Gibbsianness result, let us also present the following theorem on the existence of a quasilocal specification for $\mu'_p$ for small values of $p$.
\begin{thm}(Gibbsianness for small $p$)\label{thm_small_p} Consider the image measure $\mu'_p$ of the Bernoulli field on ${\Bbb Z}^d$ under the map to the non-isolates in lattice dimensions $d\geq 1$. Then,
for $p< 1/(2d)$, there exists a quasilocal specification $\gamma'$ for $\mu'_p$.
\end{thm}
As can be seen from the proofs, for $p< 1/(2d)$, the specification $\gamma'$ is in fact continuous with an exponential rate.
In summary, the statements of Theorems~\ref{thm_large_p} and~\ref{thm_small_p} indicate a phase diagram of Gibbsianness of thinned Bernoulli fields under the local non-isolation constraint as exhibited in Figure~\ref{fig_1}.
\begin{figure}[!htpb]
\centering
\begin{tikzpicture}[scale=1]
\draw (-0.1,0) -- (10,0);
\draw (-0.1,1) -- (0,1);
\draw (-0.1,2) -- (0,2);
\draw (-0.1,3) -- (0,3);
\draw (0,0) -- (0,3);
\draw [fill=green,opacity=0.3](0,0) rectangle (10,1);
\draw [fill=red,opacity=0.3](0,2) rectangle (10,3);
\draw [fill=green,opacity=0.3](0,2.98) rectangle (10,3);
\node at (-0.3,3) {$1$};
\node at (-0.6,2) {$p_c(d)$};
\node at (-0.6,1) {$\frac{1}{2d}$};
\node at (-0.3,0) {$0$};
\node at (5,0.5) {Gibbsianness};
\node at (5,2.5) {non-Gibbsianness};
\end{tikzpicture}
\caption{Illustration of Gibbs-non-Gibbs transitions in $p$ for the thinned Bernoulli field under non-isolation constraint.}
\label{fig_1}
\end{figure}
In the following section, we present the proofs.
\section{Proofs}\label{Sec_Proofs}
The key idea of the proof is to first re-express conditional probabilities of $\mu'_p$ in finite volumes in terms of a first-layer constraint model in which occupied sites have to be isolated. Indeed, for large $p$, there are two distinct groundstates given by the (shifted) checkerboard configurations. We leverage a Peierls argument in order to show that the first-layer constraint model exhibits a phase transition with breaking of translational symmetry. We note that the argument works even though there is {\em no} spin-flip symmetry in the system. The translational symmetry breaking gives rise to a point of discontinuity, for which we subsequently show that it is present for any system of finite-volume conditional probabilities for $\mu'_p$, i.e., it is an essential discontinuity. The complementary case of small $p$ can be handled by arguments using Dobrushin-uniqueness techniques.
\subsection{Proof of Theorem~\ref{thm_large_p}: Non-Gibbsianness}
\label{Sec_Proofs_High_p_s}
The main ingredient for the proof is to exhibit one non-removable bad configuration for conditional probabilities. For this, we will use the so-called two-layer view, in which one needs to understand
the Bernoulli field conditional on a fixed image configuration. We choose as the image configuration the all empty configuration, for which the first-layer measure becomes
the Bernoulli field $\mu_p$ {\em conditional on isolation}.
We proceed as follows. In Section~\ref{Sec_Pei} we exhibit a phase transition for the latter model at large $p$, in which translation symmetry is broken and the broken phase can be selected via suitable shapes of loophole volumes. The technique is based on a (slightly non-standard) Peierls argument.
In Section~\ref{Sec_Non} we then show how this implies non-Gibbsianness of the image measure. This is based on the proof that jumps of conditional probabilities occur for certain suitably chosen local patterns, which allow us to make a transparent connection to the first-layer model in suitable connected boxes, where the Peierls argument from Section~\ref{Sec_Pei} was made to work.
\subsubsection{Translational-symmetry breaking via a Peierls argument for the conditional first-layer model}\label{Sec_Pei}
For the purpose of showing that the empty configuration is bad for any specification, we will analyze the following particular finite-volume first-layer measures, and we will restrict to particular volumes $\Lambda$. Namely, let us consider finite volumes $\Lambda$ that have a {\em shape of type $0$}, and impose the fully occupied boundary condition (all $1$) outside of $\Lambda$. By this we mean that $\Lambda$ has a shape which allows one to place inside $\Lambda$ the checkerboard groundstate of zeros and ones in which the origin obtains the value $0$, such that one obtains a configuration compatible with the boundary condition, see Figure~\ref{Pix_Types} for an illustration. We define
\begin{equation*}
\begin{split}
\nu_{\Lambda}( \omega_{\Lambda} ):=
\frac{\mu_{p,\Lambda}( \omega_{\Lambda} ){\mathds 1}\{T(\omega_{\Lambda}1_{\Lambda^c})|_{\Lambda}=0_{\Lambda}\}}{
\mu_{p,\Lambda}\bigl(T(\sigma_{\Lambda}1_{\Lambda^c})|_{\Lambda}=0_{\Lambda} \bigr)
},
\end{split}
\end{equation*}
where $\mu_{p,\Lambda}$ is the Bernoulli product measure in $\Lambda$. Hence, by definition $\nu_{\Lambda}$ is the Bernoulli measure conditioned on isolation of ones inside $\Lambda$, where the isolation constraint remembers also the fully occupied boundary condition.
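As a toy illustration of this definition, the following brute-force enumeration (a minimal Python sketch; the box, the value of $p$ and all names are ours and purely illustrative) computes $\nu_\Lambda(\omega_0=1)$ for the plain $3\times 3$ box with fully occupied boundary. For this tiny plain box the constraint forces all sites touching the boundary to be empty, so only the empty and the center-only configurations survive, and one finds $\nu_\Lambda(\omega_0=1)=p$; this illustrates that the decay in~\eqref{eq_0} below is a genuinely shape-dependent, large-volume effect of the loophole volumes.
\begin{verbatim}
from itertools import product

p = 0.9
sites = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]

def feasible(cfg):
    # every occupied site must be isolated; sites outside the box count
    # as occupied (fully occupied boundary condition)
    for (x, y), v in cfg.items():
        if v == 1:
            for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if cfg.get(nb, 1) == 1:
                    return False
    return True

num = den = 0.0
for vals in product((0, 1), repeat=len(sites)):
    cfg = dict(zip(sites, vals))
    if feasible(cfg):
        w = 1.0
        for v in vals:
            w *= p if v else 1.0 - p
        den += w
        num += w * cfg[(0, 0)]

print(num / den)  # equals p: only the empty and the center-only
                  # configurations are feasible in this plain box
\end{verbatim}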
A similar definition is made for volumes of type $1$. For us, large boxes $B_L$ centered at the origin with sidelength $2L$ and with a loophole boundary will be useful, see the illustration in Figure~\ref{Pix_Loop}.
For such type-$0$ boxes $B_L$ we will show in this section that
\begin{equation}\label{eq_0}
\begin{split}
&\sup_{L}\nu_{B_L}(\omega_0=1)\leq \epsilon(p),\qquad \text{ with }\qquad\lim_{p\uparrow 1}\epsilon(p)=0.
\end{split}
\end{equation}
\begin{figure}[!htpb]
\centering
\input{Pix_Loop.tex}
\caption{Illustration of a type-0 volume with loophole boundary.
}
\label{Pix_Loop}
\end{figure}
This means that, with large probability, the origin copies the information from the boundary condition. Similarly, we will prove that the spin at the origin for the box shifted by a lattice unit vector $e$ satisfies
\begin{equation}\label{eq_1}
\begin{split}
&\sup_{L}\nu_{B_L+e}(\omega_0=0)\leq \epsilon(p),\qquad \text{ with }\qquad\lim_{p\uparrow 1}\epsilon(p)=0.
\end{split}
\end{equation}
This is an essential step as it proves that the shape of the volume $B_L$ induces a phase transition for the first-layer constrained model, and there is breaking of translational symmetry. To complete the proof of essential badness of the empty configuration on the second layer, we will however need to go one step further, and connect
to the measure on the second layer. This will be done in Lemma~\ref{lem_NonGibbs} below.
Note that configurations of the model are energetically equivalent under a lattice shift. They are not equivalent under the site-wise {\em spin-flip} that exchanges zeros and ones, much unlike the Ising antiferromagnet in zero external field. Therefore, the Peierls argument we are about to give has to be different from the one for the Ising ferromagnet or antiferromagnet.
Namely, the Peierls argument we will present involves suitable lattice shifts of parts of configurations, while the standard more straightforward Peierls argument for the Ising model involves spin-flips.
Consider the nearest-neighbor graph with vertex set ${\Bbb Z}^d$. Consider, for a spin configuration $\omega$, the set of sites
$$\Gamma(\omega):=\{x \in {\Bbb Z}^d\colon \text{ there exists } y\in \partial x \text{ such that }\omega_x=\omega_y=0 \}.$$
Note that there is a one-to-one correspondence between configurations $\omega$ that satisfy the neighborhood constraint, and sets $\Gamma(\omega)$. Note that outside of $\Gamma(\omega)$, the configuration $\omega$ looks like one of the two groundstates formed by the two possible checkerboard configurations of zeros and ones. Indeed, each site $x\not\in \Gamma(\omega)$ has the property that either $\omega_x=0$ and all the neighbors are $1$ (by definition of a contour),
or $\omega_x=1$ and all the neighbors are $0$ (as the model contains the isolation-constraint).
Further note that not all possible subsets of ${\Bbb Z}^d$ can occur as $\Gamma$, because of the isolation constraint of ones.
The connected (in the sense of graph-distance) components $\gamma$ of these sets $\Gamma$ are called {\em contours}.
To visualize this, consider a star-shaped contour that is built by flipping a single site from one to zero, starting from
a checkerboard configuration, see Figure~\ref{Pix_MinCont}. This yields the minimal contour, which has $2d+1$ sites. In two dimensions, e.g., it is possible that different contours reach each other via the diagonal. Note that each $\gamma$ that is a contour of a configuration must be surrounded by ones in the nearest-neighbor sense in the configuration. These ones must be surrounded by nearest neighbors which carry zeros, by the isolation constraint of ones. Hence the contour specifies the configuration up to sites within graph distance two.
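The set $\Gamma(\omega)$ and its decomposition into contours are straightforward to compute on a finite grid. The following sketch (plain Python, free boundary; all function names are ours) extracts $\Gamma(\omega)$ and its nearest-neighbor-connected components, and reproduces the minimal contour of $2d+1=5$ sites in $d=2$ when a single one is flipped to zero in a checkerboard.
\begin{verbatim}
def gamma_set(omega):
    # Gamma(omega): zeros that have at least one zero nearest neighbor
    n, m = len(omega), len(omega[0])
    out = set()
    for x in range(n):
        for y in range(m):
            if omega[x][y] == 0:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = x + dx, y + dy
                    if 0 <= u < n and 0 <= v < m and omega[u][v] == 0:
                        out.add((x, y))
    return out

def contours(gamma):
    # nearest-neighbor-connected components of Gamma(omega)
    gamma, comps = set(gamma), []
    while gamma:
        stack, comp = [gamma.pop()], set()
        while stack:
            x, y = stack.pop()
            comp.add((x, y))
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (x + dx, y + dy) in gamma:
                    gamma.remove((x + dx, y + dy))
                    stack.append((x + dx, y + dy))
        comps.append(comp)
    return comps

omega = [[(x + y) % 2 for y in range(7)] for x in range(7)]  # checkerboard
omega[3][4] = 0                    # flip one occupied site to zero
print([len(c) for c in contours(gamma_set(omega))])  # [5]: minimal contour
\end{verbatim}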
The complement of a finite contour $\gamma$ has one infinite component, and finitely many finite components (the internal ones). Each of these components is labelled by one of the two labels $1$ (or $0$ respectively), determined by whether, given $\gamma$, the component admits a configuration obtained by substituting the infinite-volume checkerboard configuration in which the origin in ${\Bbb Z}^d$ obtains a $1$ (or a $0$ respectively), see Figure~\ref{Pix_Types}.
\begin{figure}[!htpb]
\centering
\begin{subfigure}{0.45\textwidth}
\input{Pix_Type0.tex}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\input{Pix_Type1.tex}
\end{subfigure}
\caption{Illustration of the type-0 (left) and type-1 (right) groundstates, i.e., checkerboard configurations. Dots indicate occupied sites. The origin is indicated as $o$. }
\label{Pix_Types}
\end{figure}
A configuration is uniquely determined by its set of compatible contours. Contours $\gamma$ themselves are labelled by $1$ (or $0$) according to the two possible labels of their outer connected components. Contours are compatible when they arise from an allowed configuration, which is the case when the types of checkerboards on shared connected components of the complements match.
\begin{figure}[!htpb]
\centering
\input{Pix_MinCont.tex}
\caption{Illustration of a minimal contour in blue containing the origin. }
\label{Pix_MinCont}
\end{figure}
Now, suppose that $\gamma$ is a contour in a finite volume $\Lambda$. We decompose the volume in terms of the contour and the connected components of its complement
$$\Lambda=\gamma\cup V_0\cup\bigcup_{i=1,\dots,k}V_i.$$
Here $V_0$ is the outer connected component of the complement of $\gamma$ intersected with $\Lambda$. (Note that the intersection with $\Lambda$ may produce several connected components, but this poses no difficulty.) The sets $(V_i)_{i\geq 1}$ are the interior connected components of the complement of $\gamma$. We also write
$$\overline\gamma=\gamma\cup\bigcup_{i=1,\dots,k}V_i$$
for the sites that are contained in the support of $\gamma$ or surrounded by $\gamma$.
\medskip
We now prove the statements~\eqref{eq_0} and~\eqref{eq_1} via a Peierls estimate. It suffices
to treat the first case~\eqref{eq_0}, as the case~\eqref{eq_1} is similar. We start with a union bound
over contours surrounding the origin
\begin{equation*}
\begin{split}
\nu_{B_L}(\omega_0=1)&\leq\nu_{B_L}(\omega\colon \exists \gamma \text{ such that }
\Gamma(\omega)\ni \gamma \text{ and } \overline\gamma \ni 0)\leq \sum_{\gamma\colon \overline\gamma \ni 0 }
\nu_{B_L}(\omega\colon\Gamma(\omega)\ni \gamma ).
\end{split}
\end{equation*}
With a slight abuse of notation, we here write $\gamma \in \Gamma$ to indicate that $\gamma$ is a contour in $\Gamma$.
The main point for the Peierls estimate in our non-flip invariant situation, which is formulated in the following lemma,
will be the construction of compatible configurations after removal of contours. Let $|\gamma|$ denote the number of vertices in $\gamma$.
\begin{lem}\label{lem_Pei}
There exists a Peierls constant $\tau=\tau(p)$ with $\lim_{p\uparrow 1}\tau(p)=\infty$, such that for all $\gamma$ we have that
\begin{equation*}
\begin{split}
&\nu_{B_L}(\omega\colon \Gamma(\omega)\ni \gamma)\leq e^{-\tau |\gamma|}.
\end{split}
\end{equation*}
\end{lem}
\begin{proof}Define the activity of a contour $\gamma\subset B_L$
to be the natural weight of the zeros prescribed by it in the Bernoulli measure, i.e.,
$$
\rho(\gamma):=(1-p)^{|\gamma|}.
$$
For any configuration $\omega_U$ in a finite volume $U$
we write for its weight in the Bernoulli field
$$
R(\omega_U):=\prod_{x\in U }p^{\omega_x}(1-p)^{1-\omega_x}.
$$
In particular $\rho(\gamma)=R(0_\gamma)$, where we use the short-hand notation $0_B$ to indicate the all-zero configuration in the volume $B$. Then, we may write
\begin{equation*}
\begin{split}
&\nu_{B_L}(\omega: \Gamma(\omega)\ni \gamma )=\frac{\rho(\gamma)Z_{V_0}\prod_{i=1}^k Z_{V_i}}{Z_{B_L}},
\end{split}
\end{equation*}
where $Z_{V_i}$, for $i=0,1,\dots,k$, denote the partition functions over all configurations in the volumes $V_i$ which are compatible with $\gamma$ and with the isolation constraint on the ones, with weights provided by the Bernoulli measure for all sites in $V_i$.
\paragraph{Case 1.} Suppose that $\gamma$ only contains interior connected components of type $0$. This means that there is no type change when going from the outside to the inside. Then we may remove $\gamma$, i.e., continue the checkerboard configuration outside of $\gamma$ to where $\gamma$ used to be.
This means that we will assign to each $\omega$ for which $\Gamma(\omega)\ni \gamma$ the reference
configuration $(\omega_{B_L \backslash\gamma}\omega^0_{\gamma})$, which appears in the partition function $Z_{B_L}$ and can be used to lower bound the latter. Here $\omega^0_{\gamma}$ denotes the type-0 checkerboard configuration on $\gamma$.
It is important to note that this removal keeps all other contributions from exterior and interior components compatible.
Hence, we immediately arrive at a lower bound
\begin{equation*}
\begin{split}
&Z_{B_L}\geq R(\omega^0_\gamma)Z_{V_0}\prod_{i=1}^k Z_{V_i}.
\end{split}
\end{equation*}
We may write $\rho(\gamma)=R(\omega^0_\gamma)((1-p)/p)^{N^\text{repl}}$, where $N^\text{repl}$ denotes the number of replacements of a zero by a one on the support of the contour. By definition of a contour, each site in $\gamma$ has a neighbor which is zero. Therefore it will itself be replaced by a one, or a neighbor of it will be replaced by a one. Hence $N^\text{repl}\geq|\gamma|/(2d +1)$. Thus, we have the desired estimate
\begin{equation*}
\begin{split}
&\nu_{B_L}(\omega: \Gamma(\omega)\ni \gamma )\leq \big((1-p)/p\big)^{|\gamma|/(2d +1)}.
\end{split}
\end{equation*}
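In particular, in Case 1 we may read off an explicit exponential rate: the bound takes the form $e^{-\tau_1(p)|\gamma|}$ with
\begin{equation*}
\tau_1(p)=\frac{1}{2d+1}\log\frac{p}{1-p},
\end{equation*}
which is positive for $p>1/2$ and indeed diverges as $p\uparrow 1$. The constant $\tau(p)$ of Lemma~\ref{lem_Pei} is then obtained by combining this rate with the rate $c_d\log(p/(1-p))$ arising from Case 2 below.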
\paragraph{Case 2.} Suppose now that $\gamma$ additionally contains interior volumes of type $1$ (the bad type that does not agree with the boundary condition outside $B_L$), which we will denote by $W_j$, $j=1,\dots,l$. Writing $V_i$, $i=l+1,\dots,k$, for the interior components of type $0$, we have
\begin{equation*}
\begin{split}
&\nu_{B_L}(\omega: \Gamma(\omega)\ni \gamma )=\frac{\rho(\gamma)Z_{V_0}\prod_{i=l+1}^k Z_{V_i}\prod_{j=1}^l Z_{W_j}}{Z_{B_L}},
\end{split}
\end{equation*}
where all partition functions are sums over compatible configurations in the respective connected components, such that the total configuration contains the contour $\gamma$.
The difficulty of this case is that the removal of $\gamma$ does not immediately create compatible configurations. However, it does so after shifting each of the internal volumes $W_j$ of the wrong checkerboard type in one of the $2d$ possible (positive or negative) lattice directions $e$. Let us explain the details now.
Our comparison configuration will now be equal to the type-$0$ checkerboard on the following set $\gamma_e$ which describes the appropriate modification of $\gamma$, obtained by shifts of the internal components,
\begin{equation}\label{Con_Mov}
\begin{split}
\gamma_e:=\Bigl(\gamma \backslash\bigcup_{j=1}^l(W_j+e) \Bigr)\cup
\bigcup_{j=1}^l\bigl(W_j \backslash (W_j+e)\bigr).
\end{split}
\end{equation}
Note that the volume is preserved, $|\gamma_e|=|\gamma|$. For each $\omega$ for which $\Gamma(\omega)\ni \gamma$, the reference configuration will then be
\begin{equation}\label{Ref_Con}
\begin{split}
(\omega^0_{\gamma_e},\omega_{V_0\cup\bigcup_{i=l+1}^k V_i},(\theta_e \omega)_{\bigcup_{j=1}^l (W_j+e)}),
\end{split}
\end{equation}
where $\theta_e$ represents the shift of the configuration by $e$. We see from the definition that $\omega$ will not be modified on the external component and the internal components of good type. It will however be shifted by $e$ on the internal components of bad type, and it will be a checkerboard of good type on the modified contour $\gamma_e$. Note that this configuration really occurs in $Z_{B_L}$ as it satisfies the isolation constraint on the ones. Hence it can be used to lower bound the partition function, which gives us
\begin{equation*}
\begin{split}
&Z_{B_L}\geq R(\omega^0_{\gamma_e})
Z_{V_0}\prod_{i=l+1}^k Z_{V_i}\prod_{j=1}^l Z_{W_j},
\end{split}
\end{equation*}
where we have used the shift invariance for the internal partition functions $Z_{W_j}$ (those with the bad types).
Now, the proof is finished once we show that there is a dimension-dependent constant $c_d>0$ such that
\begin{equation}\label{cd}
\begin{split}
&\rho(\gamma)\leq R(\omega^0_{\gamma_e})\big((1-p)/p\big)^{c_d |\gamma|}.
\end{split}
\end{equation}
We denote by $S^0$ the occupied sites of $\omega^0$ (the good checkerboard configuration).
The idea is to use the fact that any connected (in graph distance) subset of ${\Bbb Z}^d$ intersects $S^0$ in at least a fixed positive fraction of its sites, in order to conclude the inequality
\begin{equation}\label{cd2}
\begin{split}
&|\gamma_e \cap S^0|\geq c_d |\gamma_e|=c_d |\gamma|.
\end{split}
\end{equation}
Inequality~\eqref{cd} would then follow immediately.
However, there is a small problem with this argument: while $\gamma$ by definition is connected (w.r.t.~graph distance), the modified set $\gamma_e$ may have acquired isolated sites, see Figure~\ref{Pix_Iso}.
\begin{figure}[!htpb]
\centering
\begin{subfigure}{0.325\textwidth}
\input{Pix_Iso_l.tex}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\input{Pix_Iso_m.tex}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\input{Pix_Iso_r.tex}
\end{subfigure}
\caption{Illustration of a configuration with one contour $\gamma$ (left in green), where the outside configuration is of type 0 and the inside configuration is of (bad) type 1. Moving the inside configuration by $-e_1$ (middle) creates a (good) configuration also inside the contour $\gamma_e$ as described in~\eqref{Con_Mov} (middle in green), however the shift also creates isolated zeros. On the right, in the large connected component of $\gamma_e$, sites are indicated in dashed blue, which can be flipped from unoccupied to occupied, as in the configuration presented in~\eqref{Ref_Con}, and therefore create an energetically preferable configuration.}
\label{Pix_Iso}
\end{figure}
This may occur if the only neighbor of such a site is eaten up by a translated interior component. We will now explain that this is not a real problem as the number of those unwanted sites is too small to spoil our desired bound~\eqref{cd2}.
Indeed, consider the case ${\Bbb Z}^2$ and $e=-e_1$ for visualization. Then, in each fixed row, the sets $\gamma$ and $\gamma_e$ contain the same number of sites. To each lost site (on the left) there corresponds a site in the set $\bigcup_{j=1}^l\bigl(W_j \backslash (W_j+e)\bigr)$ (which is added to the contour) and which is not isolated but has a nearest neighbor in $\gamma$. From this it follows that $\gamma_e$ still has a nearest-neighbor-connected component $\tilde\gamma_e$ that is at least of size $|\gamma|/2$. Now, $\tilde\gamma_e$ being connected, it intersects $S^0$ in at least a fraction (which we call $\tilde c_d>0$) of its sites, and we arrive at
\begin{equation}\label{cd3}
\begin{split}
&|\gamma_e \cap S^0|\geq |\tilde \gamma_e \cap S^0|
\geq \tilde c_d |\tilde \gamma_e| \geq c_d|\gamma|,
\end{split}
\end{equation}
where $c_d=\tilde c_d/2$. Note that we may regard $\tilde\gamma_e$, as obtained in this process, as a contour, and treat its removal along the lines of Case 1.
This proves the claim in the general form and hence proves the Peierls estimate.
\end{proof}
\subsubsection{Non-removable discontinuities for $\mu_p'$ via the relation to Bernoulli fields conditioned on isolation}\label{Sec_Non}
\begin{proof}[Proof of Theorem~\ref{thm_large_p}] Suppose that $\gamma'$ is any specification for $\mu_p'$.
Consider a cube $Q$ of sidelength $3$ centered at the origin. It will play the role of an observation window. Recall that we write $\omega'$ for the second-layer variables, and we write $\omega$ for the first-layer variables.
We will derive a contradiction from the assumption that
$$\omega'_{Q^c}\mapsto \gamma'_Q(\d \omega'_Q|\omega'_{Q^c})$$
is continuous at $\omega'_{Q^c}=0'_{Q^c}$, by exhibiting a finite jump size.
We use the notation $1'_{B}$ for the all-one configuration in the volume $B$. A single-site observation window would not show the phenomenon, as a conditioning $0'_{0^c}$ for the model conditional on non-isolation forces the origin to be $0'$ due to the non-isolation constraint.
For the purpose of showing the persistence of jumps on $Q$, we consider the loophole volumes $B_L$ and $B_L+e$ as above, and choose for each $L$ cubes $C_L$ that contain both $B_L$ and $B_L+e$ with a layer of sufficiently large finite thickness. Thickness two will do.
In the first step, we
consider conditional probabilities of the second-layer measure $\mu_p'=T\mu_p$ of the form
\begin{equation*}
\begin{split}
&\mu_p'(\omega'_Q| \omega'_{C_L\backslash Q}).
\end{split}
\end{equation*}
We want to show that they are essentially discontinuous at the empty conditioning, so that they cannot come from a quasilocal specification $\gamma'$. The latter argument will be discussed below in the second step; let us now discuss how to obtain the essential discontinuity. There is a slightly tricky part, as we need to go to volumes larger than single sites, and need to take care of the constraints very carefully.
In order to do so, consider
\begin{equation*}
\begin{split}
&\mu_p'(\omega'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})
\end{split}
\end{equation*}
where $\omega'_Q\in \{0,1\}^Q$.
It is useful to compare with the empty configuration on the second layer
\begin{equation*}
\begin{split}
&\frac{\mu_p'(\omega'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_p'(0'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}=\frac{\mu_p'(\omega'_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_p'(0'_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}.
\end{split}
\end{equation*}
The denominators never vanish, as the configuration that appears in the denominator on the right-hand side obeys the non-isolation constraint.
Given the boundary condition $0'_{B_L\backslash Q}$, the non-isolation constraint on the second layer limits the configurations to non-isolated configurations on the cube $Q$ in order to have a non-zero measure.
We will take as a useful local pattern on $Q$ the following second-layer reference configuration, which is most easily visualized in two dimensions, where it looks as follows:
$$\omega'^*_Q=
\begin{array}{rrr}
0 & 1 & 0 \\
1 & 1 & 1 \\
0 & 1 & 0 \\
\end{array}.$$
It is clearly compatible with the second-layer constraint as $\omega'^*_Q 0'_{Q^c}$ contains
no isolated ones.
In general dimensions $d$ we choose it analogously, namely as the checkerboard groundstate of type-0 but with an additional $1$ at the origin, i.e.,
\begin{equation}\label{Eq_Om}
\begin{split}
(\omega'^*_Q)_i=
\begin{cases}
\omega^0_i & i \in Q\backslash \{0\} \\
1 & i=0.
\end{cases}
\end{split}
\end{equation}
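The pattern $\omega'^*_Q$ is easy to generate and check by computer. The following sketch (plain Python, dimension $d=2$; all names are ours) builds the configuration from~\eqref{Eq_Om} and verifies that, padded by zeros, it contains no isolated ones, i.e., it is compatible with the second-layer constraint.
\begin{verbatim}
from itertools import product

Q = list(product((-1, 0, 1), repeat=2))
omega_star = {x: sum(x) % 2 for x in Q}  # type-0 checkerboard: origin -> 0
omega_star[(0, 0)] = 1                   # additional 1 at the origin

def isolated_ones(cfg):
    # occupied sites without an occupied nearest neighbor (zeros outside)
    iso = []
    for (x, y), v in cfg.items():
        if v == 1:
            nbrs = ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
            if all(cfg.get(nb, 0) == 0 for nb in nbrs):
                iso.append((x, y))
    return iso

for y in (1, 0, -1):                     # prints the 3x3 pattern shown above
    print(*(omega_star[(x, y)] for x in (-1, 0, 1)))
print(isolated_ones(omega_star))         # [] : no isolated ones
\end{verbatim}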
We have the following useful observation. Given $\omega'^*_Q$, the underlying Bernoulli-field configuration $\omega_Q$ from which it appears as a $T$-image must take the same values on $Q$, independently of $\omega'_{Q^c}$.
To understand the following steps it will be helpful to make a notational distinction and write $\omega^*_Q=\omega'^*_Q$ when we refer to the same configuration as a {\em first-layer} configuration. With this we have
\begin{equation*}
\begin{split}
&\frac{\mu_p'(\omega'^*_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_p'(0'_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}=\frac{\mu_{p,C_L}(T\sigma_{C_L}=\omega'^*_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_{p,C_L}(T\sigma_{C_L}=0'_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}.
\end{split}
\end{equation*}
Next, we are aiming for a reformulation that involves only quantities of the first-layer model with isolation constraint in the {\em whole volume} $B_L$ including $Q$. For this we perform some manipulations.
We split the numerator as follows,
\begin{equation*}
\begin{split}
&\mu_{p,C_L}(T\sigma_{C_L}=\omega'^*_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})=\mu_{p,Q}(\omega^*_Q) \mu_{p,C_L \backslash Q}\Bigl(T(\omega^*_Q,\sigma_{C_L\backslash Q})\big|_{C_L\backslash Q} =0'_{B_L\backslash Q}1'_{C_L\backslash B_L}\Bigr).
\end{split}
\end{equation*}
Now, we change the middle site on $Q$ on the right-hand side from $1$ to $0$ to obtain a first-layer configuration that obeys the isolation constraint on $Q$. For this, we first write the simple identity
$$\mu_{p,Q}(\omega^*_Q)=\tfrac{p}{1-p}
\mu_{p,Q}(\omega^0_Q).$$
It is important to note that we may also replace
$$T_{C_L}(\omega^*_Q,\sigma_{C_L\backslash Q})\big|_{C_L\backslash Q}
=T_{C_L}(\omega^0_Q,\sigma_{C_L\backslash Q})\big|_{C_L\backslash Q},$$
which is possible as the middle site of $Q$ has no influence on the values of the restriction of
$T_{C_L}(\omega^0_Q,\sigma_{C_L\backslash Q})$ to $Q^c$.
So we arrive
at
\begin{equation*}
\begin{split}
&\mu_{p,C_L}(T\sigma_{C_L}=\omega'^*_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})=\frac{p}{1-p}
\mu_{p,C_L}(\sigma_Q=\omega^0_Q, T_{C_L}(\omega^0_Q,\sigma_{C_L\backslash Q})\big|_{C_L\backslash Q}
=0'_{B_L\backslash Q}1'_{C_L\backslash B_L}).
\end{split}
\end{equation*}
Now we have achieved our goal, as we may recognize that
\begin{equation*}
\begin{split}
&\frac{\mu_{p,C_L}(\sigma_Q=\omega^0_Q,
T_{C_L}(\omega^0_Q,\sigma_{C_L\backslash Q})\big|_{C_L\backslash Q}
=0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_{p,C_L}(T\sigma_{C_L}=0'_Q 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}=\nu_{B_L}(\sigma_Q=\omega^0_Q),
\end{split}
\end{equation*}
with the conditional first-layer measure $\nu_{B_L}$ that is conditioned on isolation, in the whole volume $B_L$, as defined in the beginning of Section~\ref{Sec_Pei}. Indeed, the denominator on the l.h.s.~is the partition function of the conditional measure, up to the terms on $C_L\backslash B_L$ on which the first-layer configuration is frozen, and which cancel against those in the numerator. In particular there is no $C_L$-dependence.
We have thus proved the following {\em unfixing lemma.}
\begin{lem}\label{lem_NonGibbs}
The second-layer conditional probabilities and the first-layer model under the non-isolation constraint satisfy the following relation
\begin{equation*}
\begin{split}
&\frac{\mu_p'(\sigma_Q'=\omega'^*_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_p'(\sigma_Q'=0'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}=\frac{p}{1-p}\nu_{B_L}(\sigma_Q=\omega^0_Q),
\end{split}
\end{equation*}
where $\omega^0_Q$ denotes the checkerboard groundstate and $\omega'^*_Q$ is defined in~\eqref{Eq_Om}.
The same relation holds for the shifted volume $B_L+e$.
\end{lem}
Note that this representation is particularly nice, as we are reduced to the discussion of the first-layer model in the full volume $B_L$ (and not a volume reduced by $Q$), where we have the Peierls estimate at our disposal.
In particular, by the Peierls estimate we have that
$$\nu_{B_L}(\sigma_Q=\omega^0_Q)\geq 1-\sum_{j\in Q}
\nu_{B_L}(\sigma_j\neq \omega^0_j)\geq 1-|Q|\epsilon(p),$$
while
$$\nu_{B_L+e}(\sigma_Q=\omega^0_Q)\leq \epsilon(p).$$
In the final step, we bring the arbitrarily chosen specification $\gamma'$ into play, with the aim to show that it must inherit a discontinuity at the empty configuration, too. For any pattern $\omega'_Q$ in the observation window $Q$, we bound the infimum over perturbations of the empty configuration outside the volume $\Delta_L:=B_L\cap (B_L+e)$ via
\begin{equation*}
\begin{split}
\mu_p'(\omega'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})&=\int \gamma'_Q(\omega'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L}
\tilde\omega'_{C_L^c})\,\,\mu_p'(\d\tilde\omega'_{C_L^c}| \omega'_{C_L\backslash Q}=0'_{B_L\backslash Q}1'_{C_L\backslash B_L})\cr
&\geq \inf_{\omega'_{\Delta_L^c}}
\gamma'_Q(\omega'_Q| 0'_{\Delta_L\backslash Q}
\omega'_{\Delta_L^c})=:a_L(\omega'_Q).
\end{split}
\end{equation*}
Similar arguments give that
\begin{equation*}
\begin{split}
\mu_p'(\omega'_Q| 0'_{(B_L+e)\backslash Q}1'_{C_L \backslash (B_L+e)}) &\geq a_L(\omega'_Q)\cr
\mu_p'(\omega'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L}) &\leq \sup_{\omega'_{\Delta_L^c}}
\gamma'_Q(\omega'_Q| 0'_{\Delta_L\backslash Q}
\omega'_{\Delta_L^c})=:b_L(\omega'_Q)\cr
\mu_p'(\omega'_Q| 0'_{(B_L+e)\backslash Q}1'_{C_L\backslash (B_L+e)})&\leq b_L(\omega'_Q).
\end{split}
\end{equation*}
Now consider specifically the patterns $0'_Q$ and $\omega'^*_Q$ and note that
\begin{equation}
\begin{split}\label{uppereast}
&\frac{p}{1-p}\nu_{B_L}(\sigma_Q=\omega^0_Q)=\frac{\mu_p'(\omega'^*_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}
{\mu_p'(0'_Q| 0'_{B_L\backslash Q}1'_{C_L\backslash B_L})}\leq \frac{b_L(\omega'^*_Q)}{a_L(0_Q)}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}\label{lowereast}
&\frac{p}{1-p}\nu_{B_L+e}(\sigma_Q=\omega^0_Q)=\frac{\mu_p'(\omega'^*_Q| 0'_{(B_L+e)\backslash Q}1'_{C_L\backslash (B_L+e)})}
{\mu_p'(0'_Q| 0'_{(B_L+e)\backslash Q}1'_{C_L\backslash (B_L+e)})}
\geq \frac{a_L(\omega'^*_Q)}{b_L(0_Q)},
\end{split}
\end{equation}
and remark that the denominators are uniformly bounded away from zero.
Now, by the Peierls estimate presented in Lemma~\ref{lem_Pei} in the previous section, we have lower bounds on the left-hand side of~\eqref{uppereast} and upper bounds on the left-hand side of~\eqref{lowereast}. These contradict the continuity assumption on the specification $\gamma'$, under which the right-hand sides would have the same limit as $L\uparrow\infty$.
This proves the discontinuity of the specification kernel $\gamma'_Q$, for an arbitrary specification $\gamma'$, at the fully empty configuration.
\end{proof}
\subsection{Proof of Theorem~\ref{thm_small_p}: Gibbsianness}
\label{Sec_Proofs_Low_p_s}
In this section, we construct a continuous specification $\gamma'$ for $\mu_p'$ for small $p$. The main ingredients are an application of the Dobrushin-uniqueness bound and martingale convergence (L\'evy's zero-one law). In the first step, we construct the conditional probabilities in finite volumes.
For this we use the following notation. For $\Lambda\subset{\Bbb Z}^d$ we denote by $\Lambda^c:={\Bbb Z}^d\setminus \Lambda$ its {\em complement} and by $\partial_-\Lambda:=\{x\in \Lambda\colon \text{there exists }y\in \Lambda^c\text{ with }y\sim_{{\Bbb Z}^d} x\}$ its {\em interior boundary}. The set $\Lambda^o:=\Lambda\setminus\partial_-\Lambda$ denotes the {\em interior} and $\bar\Lambda:=((\Lambda^c)^o)^c$ the {\em extension} of $\Lambda$. Moreover, $\partial_+\Lambda:=\bar\Lambda\setminus \Lambda$ then denotes the {\em outer boundary} of $\Lambda$.
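For example, in $d=1$ with $\Lambda=\{0,\dots,n\}$, $n\geq 2$, these definitions give $\partial_-\Lambda=\{0,n\}$, $\Lambda^o=\{1,\dots,n-1\}$, $\bar\Lambda=\{-1,\dots,n+1\}$ and $\partial_+\Lambda=\{-1,n+1\}$.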
\subsubsection{The specification}
Let us consider, for any (large) finite volume $\Delta\Subset{\Bbb Z}^d$, conditional probabilities of $\mu'_p$ inside $\Delta$ also given a (first-layer) boundary condition $\omega$ outside $\Delta$. More precisely, let $\Lambda\subset\Delta$, $\omega'=\omega'_\Lambda\omega'_{\Delta\setminus\Lambda}\omega'_{\Delta^c}\in \Omega'$ and let $\omega$ be such that $\omega_{\Delta^c}\in T^{-1}(\omega'_{(\Delta^o)^c})$, then
\begin{equation}\label{Eq0}
\begin{split}
\gamma'_{\omega_{\Delta^c}, \Lambda}&(\omega'_\Lambda|\omega'_{\Delta\setminus \Lambda})
:=\frac{
\sum_{\tilde\omega_{\Delta}}\mu_p(\tilde\omega_{\Delta}){\mathds 1}\{T_\Delta(\tilde\omega_{\Delta} \omega_{\Delta^c})=\omega'_\Delta\}}{
\sum_{\tilde\omega_{\Delta\setminus\Lambda^o}}\mu_p(\tilde\omega_{\Delta\setminus\Lambda^o}){\mathds 1}\{T_{\Delta \setminus \Lambda}(\tilde\omega_{\Delta\setminus\Lambda^o} \omega_{\Delta^c})=\omega'_{\Delta\setminus\Lambda}\}
}\\
=&\frac{
\sum_{\tilde\omega_{\Delta\setminus\Lambda^o}}\mu_p(\tilde\omega_{\Delta\setminus\Lambda^o}){\mathds 1}\{T_{\Delta \setminus \Lambda}(\tilde\omega_{\Delta\setminus\Lambda^o} \omega_{\Delta^c})=\omega'_{\Delta\setminus\Lambda}\}
\sum_{\tilde\omega_{\Lambda^o}}\mu_p(\tilde\omega_{\Lambda^o}){\mathds 1}\{T_\Lambda(\tilde\omega_{\Lambda^o}\tilde\omega_{\Delta \setminus \Lambda^o} )=\omega'_{ \Lambda} \}}{
\sum_{\tilde\omega_{\Delta\setminus\Lambda^o}}\mu_p(\tilde\omega_{\Delta\setminus\Lambda^o}){\mathds 1}\{T_{\Delta \setminus \Lambda}(\tilde\omega_{\Delta\setminus\Lambda^o} \omega_{\Delta^c})=\omega'_{\Delta\setminus\Lambda}\}
}\\
=&\frac{
\sum_{\tilde\omega_{\Delta\setminus\Lambda^o}}\mu_p(\tilde\omega_{\Delta\setminus\Lambda^o}){\mathds 1}\{T_{\Delta \setminus \Lambda}(\tilde\omega_{\Delta\setminus\Lambda^o} \omega_{\Delta^c})=\omega'_{\Delta\setminus\Lambda}\}
f_{\omega'_\Lambda}(\tilde\omega_{\partial_- \Lambda\cup\partial_+ \Lambda }) }{
\sum_{\tilde\omega_{\Delta\setminus\Lambda^o}}\mu_p(\tilde\omega_{\Delta\setminus\Lambda^o}){\mathds 1}\{T_{\Delta \setminus \Lambda}(\tilde\omega_{\Delta\setminus\Lambda^o} \omega_{\Delta^c})=\omega'_{\Delta\setminus\Lambda}\}
},
\end{split}
\end{equation}
where we wrote $T_\Lambda(\omega)$ instead of $(T(\omega))_\Lambda$ and
$$
f_{\omega'_\Lambda}(\omega_{\partial_- \Lambda\cup\partial_+ \Lambda }):= \sum_{\tilde\omega_{\Lambda^o}}\mu_p(\tilde\omega_{\Lambda^o}){\mathds 1}\{T_{\Lambda}(\tilde\omega_{\Lambda^o}\omega_{\partial_- \Lambda\cup\partial_+ \Lambda })=\omega'_{ \Lambda} \}
$$
is a local function. We have the following consistency result.
\begin{lem}\label{lem_specification}
Assume that, given $\Lambda \Subset {\Bbb Z}^d$ and $\omega' \in \Omega'$, the limit $\lim_{\Delta \uparrow {{\Bbb Z}^d}} \gamma'_{\omega_{\Delta^c}, \Lambda}(\omega'_\Lambda|\omega'_{\Delta\setminus \Lambda})=:\gamma'_\Lambda(\omega'_\Lambda | \omega'_{\Lambda^c})$ exists and is independent of the choice of $\omega_{\Delta^c} \in T^{-1}(\omega'_{(\Delta^o)^c})$. Then, $\gamma'$ is a specification for $\mu'_p$.
\end{lem}
\begin{proof}
First note that for any $\omega'\in \Omega'$ we can estimate,
\begin{equation}\label{Eq_o}
\begin{split}
\bigl\lvert \mu'_p(\omega'_\Lambda | \omega'_{\Delta\setminus \Lambda}) - \gamma'_\Lambda(\omega'_\Lambda | \omega'_{\Lambda^c})\bigr \rvert
&\leq \sup_{\omega_{\partial_+\Delta}} \bigl \lvert \gamma'_{\omega_{\partial_+\Delta}, \Delta}(\omega'_\Lambda | \omega'_{\Delta\setminus \Lambda}) - \gamma'_\Lambda(\omega'_\Lambda | \omega'_{\Lambda^c})\bigr \rvert,
\end{split}
\end{equation}
where the supremum is taken over suitable boundary configurations compatible with $\omega'$, since $\mu'_p(\omega'_\Lambda | \omega'_{\Delta\setminus \Lambda})$ can be written as an integral with respect to $\gamma'_{\cdot, \Delta}(\omega'_\Lambda | \omega'_{\Delta\setminus \Lambda})$. In particular, under our assumptions, the right-hand side of~\eqref{Eq_o} tends to zero as $\Delta$ tends to ${\Bbb Z}^d$.
Now, consider a cofinal sequence $\Delta_n \uparrow \Lambda^c$ and let $({\cal F}'_{\Delta_n})_{n \in {\Bbb N}}$ denote the canonical filtration on $\Omega'$. Note that the sequence of random variables $\mu'_p(\omega'_\Lambda | {\cal F}'_{\Delta_n})$ that are $\mu'_p$-almost surely defined as $\mu'_p(\omega'_\Lambda | {\cal F}'_{\Delta_n})(\omega') = \mu'_p(\omega'_\Lambda | \omega'_{\Delta_n \setminus \Lambda})$, is a uniformly-integrable martingale adapted to $({\cal F}'_{\Delta_n})_{n \in {\Bbb N}}$. Since $\sigma \left( \bigcup_{n} {\cal F}'_{\Delta_n} \right) = {\cal F}'_{\Lambda^c}$, by L\'evy's zero-one law, $(\mu'_p(\omega'_\Lambda | {\cal F}'_{\Delta_n}))_{n \in {\Bbb N}}$ converges $\mu'_p$-almost surely and in $L^1$ towards $\mu'_p(\omega'_\Lambda | {\cal F}'_{\Lambda^c})$ as $n$ tends to infinity. But this implies that for any $\tilde\omega'_\Lambda$, we can pick $n$ sufficiently large such that
\begin{equation*}
\begin{split}
&\int\mu'_p(\omega')|\mu'_p(\tilde\omega'_\Lambda|\omega'_{\Lambda^c})-\gamma'_\Lambda(\tilde\omega'_\Lambda|\omega'_{\Lambda ^c})|\\
&\le \int\mu'_p(\omega')|\mu'_p(\tilde\omega'_\Lambda|\omega'_{\Lambda^c})-\mu'_p(\tilde\omega'_\Lambda|\omega'_{\Delta_n\setminus\Lambda})|+ \int\mu'_p(\omega')|\mu'_p(\tilde\omega'_\Lambda|\omega'_{\Delta_n\setminus\Lambda})-\gamma'_\Lambda(\tilde\omega'_\Lambda|\omega'_{\Lambda ^c})|<\varepsilon,
\end{split}
\end{equation*}
where we used L\'evy's zero-one law in the first summand and the bound~\eqref{Eq_o} in the second summand on the right-hand side. But this implies that $\int\mu'_p(\omega'_{\Lambda^c})\gamma'_\Lambda(\tilde\omega'_\Lambda|\omega'_{\Lambda ^c})=\mu'_p(\tilde\omega_\Lambda')$ and hence $\gamma'$ is a specification for $\mu'_p$.
\end{proof}
\subsubsection{Transformations into first-layer constraint models}
In order to establish the conditions of Lemma~\ref{lem_specification} for sufficiently small $p$, we employ the Dobrushin uniqueness theorem for the first-layer constraint model as defined in~\eqref{eq_2nd layer1}.
For this, first note that we can uniquely identify $\omega'$ with the subset of its occupied sites in ${\Bbb Z}^d$, and, with some notational abuse, the extension $\bar\omega'\subset{\Bbb Z}^d$ of $\omega'$ is a {\em fixed area} in the sense that, under $T$, there is no choice for the Bernoulli field in how to realize $\omega'$. Recall that $\omega'$ consists of clusters of size at least two, and $\bar\omega'$ then consists of clusters of size at least two surrounded by unoccupied sites, see Figure~\ref{Pix_Fix} for an illustration.
\begin{figure}[!htpb]
\centering
\input{Pix_Fix.tex}
\caption{Illustration of the fixed area (black and white dots) based on a thinned configuration (black dots). The thinned configuration is surrounded by unoccupied sites (white dots).}
\label{Pix_Fix}
\end{figure}
In view of this, we introduce the following specification associated to the {\em first-layer constraint model} on $\Omega$
\begin{equation}\label{eq_2nd layer1}
\gamma^{S}_{\Delta}(\omega_\Delta|\omega_{\Delta^c})
:=\frac{
\mu_p(\omega_{\Delta\cap S}){\mathds 1}\{\omega_{\Delta\cap S}\omega_{\Delta^c\cap S}\text{ is $T$-feasible on } \Delta\cap S\}}{
\sum_{\tilde{\omega}_{\Delta\cap S}}\mu_p(\tilde{\omega}_{\Delta\cap S}){\mathds 1}\{\tilde{\omega}_{\Delta\cap S}\omega_{\Delta^c\cap S}\text{ is $T$-feasible on } \Delta\cap S \}}.
\end{equation}
Here, $S\subset{\Bbb Z}^d$ is an {\em unfixed area} that is arbitrary at this stage, $\Delta\Subset {\Bbb Z}^d$ and any configuration $\omega\in \Omega$ is called {\em $T$-feasible} on a set $\Delta\cap S$ if all occupied sites of $\omega$ in $\Delta\cap S$ have no neighboring occupied sites in $\bar\Delta\cap S$. In particular, with this definition,
\begin{equation*}
\gamma'_{\omega_{\Delta^c}, \Lambda}(\omega'_\Lambda|\omega'_{\Delta\setminus \Lambda})=\gamma^{S}_{\Delta}(f_{\omega'_\Lambda}|\omega_{\Delta^c})
\end{equation*}
for the particular choice of the unfixed area given by
$S=S(\omega'_{\Delta\setminus \Lambda})=(\Delta\setminus \Lambda^o)\setminus \bar\omega_{\Delta\setminus \Lambda}'$.
Here we used that, in the fixed area $\bar\omega_{\Delta\setminus \Lambda}'$, the Bernoulli field is completely determined by $\omega_{\Delta\setminus \Lambda}'$ and hence the corresponding factor cancels in~\eqref{Eq0}. The following result verifies the conditions of Lemma~\ref{lem_specification} for sufficiently small $p$.
\begin{lem}\label{lem_Dob}
Let $p<1/(2d)$. Then, for any $\Lambda \Subset {\Bbb Z}^d$ and $\omega' \in \Omega'$, the limit $\lim_{\Delta \uparrow {{\mathbb Z}^d}} \gamma^{S(\omega'_{\Delta\setminus\Lambda})}_\Delta(f_{\omega'_\Lambda}|\omega_{\Delta^c})$ exists and is independent of $\omega_{\Delta^c} \in T^{-1}(\omega'_{(\Delta^o)^c})$.
\end{lem}
\begin{proof}
We use the Dobrushin-uniqueness approach for the specification $\gamma_\Delta^S$ as defined in~\eqref{eq_2nd layer1}, where $S\subset{\Bbb Z}^d$ is any unfixed area.
Consider the Dobrushin matrix
$$
C_{ij}(p) = \max_{\omega_{j^c} = \tilde{\omega}_{j^c}}\|\gamma^S_i(\cdot |\omega_{i^c})
-\gamma^S_i(\cdot |\tilde{\omega}_{i^c})\|_{\text{TV}}
$$ for $i,j\in S$, where complements are defined in $S$. Note that the exterior boundary $\partial_+S$ of $S$ consists of unoccupied sites, see Figure~\ref{Pix_Fix}. We have $C_{ij}(p)=0$ unless $i$ and $j$ are neighbors in $S$. Otherwise,
\begin{align*}
C_{ij}(p)=\tfrac{1}{2}\max_{\omega_{j^c} = \tilde{\omega}_{j^c}}(|\gamma^S_i(0|\omega_{i^c})
-\gamma^S_i(0 |\tilde{\omega}_{i^c})|+ |\gamma^S_i(1|\omega_{i^c})
-\gamma^S_i(1 |\tilde{\omega}_{i^c})|)=p,
\end{align*}
where the maximum is realized when all neighbors of $i$ other than $j$ are unoccupied, while $\omega_{j}$ is unoccupied and $\tilde{\omega}_{j}$ is occupied.
In particular, for the Dobrushin criterion, we have
$$
c(p)=\sup_{i\in S}\sum_{j \sim i}C_{ij}(p)\le 2dp,
$$
independent of $S$. By~\cite[Theorem 8.7]{Ge11}, for all $p< 1/(2d)$ and $S$, $\gamma_\Delta^S$ admits a unique infinite-volume Gibbs measure $\mu^S$. Finally, using the remark made above~\cite[Equation 8.25]{Ge11}, $\gamma^{S}_\Delta(f_{\omega'_\Lambda}|\omega_{\Delta^c})$ converges uniformly in $\omega$ towards $\mu^{S}(f_{\omega'_\Lambda})$, which finishes the proof.
\end{proof}
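The single-site computation behind the bound $c(p)\le 2dp$ can be checked by direct enumeration. The following sketch (plain Python; for simplicity it assumes that all $2d$ neighbors of a site lie in the unfixed area $S$, and all names are ours) enumerates the single-site kernels of the constraint model over all neighborhood configurations and evaluates the resulting Dobrushin constant.
\begin{verbatim}
from itertools import product

def kernel(p, nbrs):
    # single-site kernel of the first-layer constraint model: occupation
    # is infeasible if some neighbor is occupied, else Bernoulli(p)
    if any(nbrs):
        return {0: 1.0, 1: 0.0}
    return {0: 1.0 - p, 1: p}

def dobrushin_c(p, d):
    # c(p) = sup_i sum_{j ~ i} C_ij, with C_ij the maximal total-variation
    # distance of the kernel under a flip of the single neighbor j
    c_ij = 0.0
    for nbrs in product((0, 1), repeat=2 * d):
        for j in range(2 * d):
            flipped = list(nbrs)
            flipped[j] = 1 - flipped[j]
            g, h = kernel(p, nbrs), kernel(p, flipped)
            tv = 0.5 * sum(abs(g[s] - h[s]) for s in (0, 1))
            c_ij = max(c_ij, tv)
    return 2 * d * c_ij

for d in (1, 2, 3):
    p = 1.0 / (2 * d) - 0.01
    print(d, dobrushin_c(p, d))  # equals 2*d*p < 1 in the uniqueness regime
\end{verbatim}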
\subsubsection{Quasilocality of the specification}
What remains to be done in order to finish the proof of Theorem~\ref{thm_small_p} is to establish quasilocality for the specification. Let $s$ denote the $\ell_\infty$ metric on ${\Bbb Z}^d$ and define $s(\Lambda,\Delta)=\inf\{s(i,j)\colon i\in \Lambda, j\in \Delta\}$.
\begin{lem}\label{lem_Quasi}
For $p<1/(2d)$ there exist constants $C,c>0$ such that for all $\Lambda\subset\Delta\Subset{\Bbb Z}^d$ and all configurations $\omega'$ and $\tilde\omega'$ with $\omega'_\Delta=\tilde\omega'_\Delta$ we have that
\begin{equation*}
|\gamma'_\Lambda(\omega'_\Lambda|\omega'_{\Lambda^c})-\gamma'_\Lambda(\omega'_\Lambda|\tilde\omega'_{\Lambda^c})|\leq C |\Lambda| e^{-c s(\Lambda, \Delta^c)}.
\end{equation*}
In particular, the specification $\gamma'$ is quasilocal.
\end{lem}
\begin{proof}
We use the representation of $\gamma'_\Lambda(\omega'_\Lambda|\omega'_{\Lambda^c})$ in terms of the unique infinite-volume Gibbs measure $\mu^{S}(f_{\omega'_\Lambda})$ as presented in the proof of Lemma~\ref{lem_Dob}. This representation exists since we work in the Dobrushin-uniqueness regime. Now, for the quasilocality, we use the criterion~\cite[Remark 8.26]{Ge11} applied to~\cite[Theorem 8.20]{Ge11}. More precisely, since $p<1/(2d)$, by~\cite[Theorem 8.20]{Ge11}, for $S\cap\Delta=S'\cap \Delta$, we have that
\begin{equation*}
|\mu^{S}(f_{\omega'_\Lambda})-\mu^{S'}(f_{\omega'_\Lambda})|\le D(\Lambda,\Delta),
\end{equation*}
where $D(\Lambda,\Delta)=\sum_{i\in \Lambda, j\in \Delta^c}\big(\sum_{n\ge 0}C^n\big)_{i,j}$ with $C^n=C^n(p)$ the $n$-th power of the Dobrushin matrix as presented in the proof of Lemma~\ref{lem_Dob}. Now choose $c>0$ sufficiently small such that $p \mathrm e^c<1/(2d)$, then, by~\cite[Remark 8.26]{Ge11},
\begin{equation*}
D(\Lambda,\Delta)\le |\Lambda|(1-2dp\mathrm e^c)^{-1}\mathrm e^{-c\, s(\Lambda,\Delta^c)}.
\end{equation*}
This finishes the proof.
\end{proof}
\section*{Acknowledgements}
We thank Nils Engler for inspiring discussions. This work was funded by the German Research Foundation under Germany's Excellence Strategy MATH+: The Berlin Mathematics Research Center, EXC-2046/1 project ID: 390685689 and the German Leibniz Association via the Leibniz Competition 2020.
\section{Introduction}\label{intro}
This paper is part of a series~\cite{MMS,MSMS,Bxu} aiming at an asymptotic enumeration of finite Cayley graphs. However, the main players in this paper are not finite Cayley graphs, but finite transitive groups. Our results on finite transitive groups can then be used to make a considerable step towards the enumeration problem for Cayley graphs, bringing us closer to solving an outstanding question of Babai and Godsil, see~\cite{BaGo} or~\cite[Conjecture~3.13]{Go2}.
Let $G$ be a finite transitive group on $\Omega$, let $\alpha\in \Omega$ and let $G_\alpha$ be the stabilizer in $G$ of the point $\alpha$. The orbits of $G_\alpha$ on $\Omega$ are said to be the \textbf{\textit{suborbits}} of $G$ and their cardinalities are said to be the \textbf{\textit{subdegrees}} of $G$. In this paper, we are concerned with finite transitive groups having many subdegrees equal to $1$ or $2$. In particular, we are interested in the ratio
$${\bf I}_\Omega(G):=\frac{|\{\omega\in \Omega\mid \omega \textrm{ lies in a }G_\alpha\textrm{-orbit of cardinality at most two}\}|}{|\Omega|}.$$
As $G$ is transitive on $\Omega$, the value of ${\bf I}_\Omega(G)$ does not depend on $\alpha$. Clearly, $0<{\bf I}_\Omega(G)\le 1$.
\begin{theorem}\label{thrm:main1}
Let $G$ be a finite transitive group on $\Omega$, let $\alpha\in \Omega$ and let $G_\alpha$ be the stabilizer in $G$ of the point $\alpha$. If
${\bf I}_\Omega(G)>\frac{5}{6},$
then ${\bf I}_G(G_\alpha)=1$, that is, each suborbit of $G$ has cardinality at most $2$.
\end{theorem}
It turns out that finite transitive groups $G$ with ${\bf I}_\Omega(G)=1$ are classified by a classical result of Bergman and Lenstra~\cite{BL}. The result of Bergman and Lenstra is rather general and applies to arbitrary (i.e.\ not necessarily finite) groups. The proof of~\cite[Theorem~1]{BL} is very beautiful and is based on certain equivalence relations; also the strengthening by Isaacs~\cite{isaacs} of the theorem of Bergman and Lenstra has a remarkably ingenious proof.
From~\cite[Theorem~1]{BL}, finite transitive groups with ${\bf I}_\Omega(G)=1$ can be partitioned into three families:
\begin{enumerate}[(a)]
\item finite transitive groups $G$ where the stabilizer $G_\alpha$ has order $1$,
\item finite transitive groups $G$ where the stabilizer $G_\alpha$ has order $2$,
\item finite transitive groups $G$ admitting an elementary abelian normal $2$-subgroup $N$ with $|N:G_\alpha|=2$.
\end{enumerate}
In the first family, each suborbit of $G$ has cardinality $1$, that is, $G$ acts regularly on $\Omega$. In the second family, since $G_\alpha$ has cardinality $2$, each orbit of $G_\alpha$ has cardinality at most $2$. In the third family, since $N\unlhd G$, the orbits of $N$ on $\Omega$ form a system of imprimitivity for the action of $G$; as $|N:G_\alpha|=2$, the blocks of this system of imprimitivity have cardinality $2$ and hence all orbits of $G_\alpha$ have cardinality at most $2$.
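For small permutation groups, the ratio ${\bf I}_\Omega(G)$ can be computed by brute force. The following sketch (a hypothetical helper, not taken from the papers cited above) generates $G$ from a set of permutations, computes the suborbits relative to $\alpha=0$, and returns ${\bf I}_\Omega(G)$; the two examples illustrate the second family above and a group with ${\bf I}_\Omega(G)<1$.
\begin{verbatim}
def closure(gens, n):
    # the subgroup generated by gens inside Sym({0,...,n-1})
    ident = tuple(range(n))
    group, frontier = {ident}, {ident}
    while frontier:
        frontier = {tuple(g[p[i]] for i in range(n))
                    for p in frontier for g in gens} - group
        group |= frontier
    return group

def I_ratio(gens, n, alpha=0):
    G = closure(gens, n)
    stab = [g for g in G if g[alpha] == alpha]          # G_alpha
    suborbit_size = lambda w: len({g[w] for g in stab})
    return sum(1 for w in range(n) if suborbit_size(w) <= 2) / n

# Sym(3) on 3 points: |G_alpha| = 2, so every suborbit has size <= 2.
print(I_ratio([(1, 0, 2), (1, 2, 0)], 3))        # 1.0
# Alt(4) on 4 points: suborbits of sizes 1 and 3.
print(I_ratio([(1, 2, 0, 3), (0, 2, 3, 1)], 4))  # 0.25
\end{verbatim}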
Theorem~\ref{thrm:main1} shows that, with respect to the quantity ${\bf I}_\Omega(G)$, there is a gap between $5/6$ and $1$. The value $5/6$ is special: there exist finite transitive groups attaining it.
\begin{theorem}\label{thrm:main2}
Let $G$ be a finite transitive group on $\Omega$, let $\alpha\in \Omega$ and let $G_\alpha$ be the stabilizer in $G$ of the point $\alpha$. If
${\bf I}_\Omega(G)=\frac{5}{6},$
then there exists an elementary abelian normal $2$-subgroup $V$ of $G$ with $|V:G_\alpha|=|G_\alpha|=4$.
Moreover, let $e_1,e_2,e_3,e_4$ be a basis of $V$, regarded as a $4$-dimensional vector space over the field with $2$ elements, with $W:=G_\alpha=\langle e_1,e_2\rangle$, let $H:=G/{\bf C}_G(V)$ where ${\bf C}_G(V)$ is the centralizer of $V$ in $G$, and let $K$ be the stabilizer of the subspace $W$ in $\mathrm{GL}(V)$. Then, $H$ is $K$-conjugate to one of the following two groups:
$$
\left\langle
\begin{pmatrix}
0& 0& 0& 1\\
1& 1& 0& 0\\
0& 0& 1& 0\\
1& 0& 0& 1
\end{pmatrix},
\begin{pmatrix}
1& 1& 1& 1\\
0& 0& 1& 0\\
0& 1& 0& 0\\
0& 0& 0& 1
\end{pmatrix}
\right\rangle,\,\,\,
\left\langle
\begin{pmatrix}
0& 0& 0& 1\\
1& 1& 0& 0\\
0& 0& 1& 0\\
1& 0& 0& 1
\end{pmatrix},
\begin{pmatrix}
1& 1& 1& 1\\
0& 0& 1& 0\\
0& 1& 0& 0\\
0& 0& 0& 1
\end{pmatrix},
\begin{pmatrix}
1& 0& 0& 0\\
0& 1& 0& 0\\
1& 1& 0& 1\\
1& 1& 1& 0
\end{pmatrix}
\right\rangle.
$$
The first group has order $12$ and is isomorphic to the alternating group of degree $4$ and the second group has order $24$ and is isomorphic to the symmetric group of degree $4$.
Conversely, if $G$ is a finite group containing an elementary abelian normal $2$-subgroup $V:=\langle e_1,e_2,e_3,e_4\rangle$ of order $16$ and $H:=G/{\bf C}_G(V)$ is as above, then the action of $G$ on the set $\Omega$ of the right cosets of $\langle e_1,e_2\rangle$ gives rise to a finite permutation group of degree $4|G:V|$ with ${\bf I}_\Omega(G)=5/6$.
\end{theorem}
Theorem~\ref{thrm:main2} classifies the finite transitive groups attaining the bound $5/6$.
Before discussing our motivation for proving Theorems~\ref{thrm:main1} and~\ref{thrm:main2}, we make some speculations. A computer search among the transitive groups $G$ of degree at most $48$ with the computer algebra system \texttt{magma}~\cite{magma} reveals that, if ${\bf I}_\Omega(G)>1/2$, then ${\bf I}_\Omega(G)=(q+1)/2q$, for some $q\in\mathbb{Q}$ with $2q\in\mathbb{N}$. We pose this as a conjecture.
\begin{conjecture}\label{conj}
{\rm Let $G$ be a finite transitive group on $\Omega$. If ${\bf I}_\Omega(G)>1/2$, then ${\bf I}_\Omega(G)=(q+1)/2q$, for some $q\in\mathbb{Q}$ with $2q\in\mathbb{N}$.}
\end{conjecture}
If true, Conjecture~\ref{conj} establishes a permutation analogue of a classical problem in finite group theory. Let $G$ be a finite group and let $${\bf I}(G):=\{x\in G\mid x \textrm{ has order at most }2\}.$$ Miller~\cite{Miller} showed in 1905 that, if $|{\bf I}(G)|>3|G|/4$, then each element of $G$ has order at most $2$ and hence $G$ is an elementary abelian $2$-group. In this regard, Theorem~\ref{thrm:main1} can be seen as a permutation analogue of the theorem of Miller, with the only difference that the ratio $3/4$ in the context of abstract groups has to be bumped up to $5/6$ in the context of permutation groups. Miller also classified the finite groups $G$ with $|{\bf I}(G)|=3|G|/4$. Therefore, Theorem~\ref{thrm:main2} can be seen as a permutation analogue of the classification of Miller. The theorem of Miller has stimulated a lot of research; for instance, Wall~\cite{Wall} has classified all finite groups $G$ with $|{\bf I}(G)|>|G|/2$. In his proof, Wall uses the Frobenius-Schur formula for counting involutions. An application of this classification shows that, if $|{\bf I}(G)|>|G|/2$, then $|{\bf I}(G)|/|G|=(q+1)/2q$, for some positive integer $q$. Therefore, in Conjecture~\ref{conj}, we believe that the same type of result holds for the permutation analogue ${\bf I}_\Omega(G)$, but allowing $q$ to be an element of $\{x/2\mid x\in\mathbb{N}\}$. As wishful thinking, we also pose the following problem.
\begin{problem}\label{problema}
{\rm
Classify the finite transitive groups $G$ acting on $\Omega$ with ${\bf I}_\Omega(G)>1/2$.}
\end{problem}
Liebeck and MacHale~\cite{LieMac} have generalized the results of Miller and Wall in yet another direction: they have classified the finite groups $G$ admitting an automorphism inverting more than half of the elements of $G$. (The classical results of Miller and Wall can be recovered by considering the identity automorphism.) This classification has been pushed even further by Fitzpatrick~\cite{fitzpatrick} and by Hegarty and MacHale~\cite{hegarty}, who classified the finite groups $G$ admitting an automorphism inverting exactly half of the elements of $G$. An application of this classification shows that, if $\alpha$ is an automorphism of $G$ inverting more than half of the elements of $G$, then the proportion of elements inverted by $\alpha$ is $(q+1)/2q$, for some positive integer $q$. This provides yet another analogue of Theorems~\ref{thrm:main1} and~\ref{thrm:main2}, of Conjecture~\ref{conj} and of Problem~\ref{problema}. We observe that a partial generalization of this type of result in the context of association schemes is in~\cite{MZ}.
We now discuss our original motivation for proving Theorems~\ref{thrm:main1} and~\ref{thrm:main2}. A \textbf{\textit{digraph}} $\Gamma$ is an ordered pair $(V,E)$, where $V$ is a finite non-empty set of vertices and $E$ is a subset of $V\times V$ representing the arcs. A \textbf{\textit{graph}} $\Gamma$ is a digraph $(V,E)$ where the binary relation $E$ is symmetric. An automorphism of a (di)graph is a permutation of $V$ that preserves the set $E$.
\begin{definition}{\rm
Let $R$ be a group and let $S$ be a subset of $R$. The \textbf{\emph{Cayley digraph}} $\mathop{\Gamma}(R,S)$ is the digraph with $V=R$ and $(r,t) \in E$ if and only if $tr^{-1} \in S$.
The Cayley digraph is a graph if and only if $S=S^{-1}$, that is, $S$ is an inverse-closed subset of $R$.}
\end{definition}
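As a concrete illustration of the definition (our own example, not from the sources cited below), take $R=\mathbb{Z}_7$ written additively, so that $(r,t)$ is an arc if and only if $t-r\in S$:
\begin{verbatim}
R = range(7)
S = {1, 2}              # not inverse-closed: -1 = 6 and -2 = 5 are missing
arcs = {(r, t) for r in R for t in R if (t - r) % 7 in S}
print(all((t, r) in arcs for (r, t) in arcs))   # False: a digraph only
S = {1, 6}              # inverse-closed, so Gamma(R, S) is a graph
arcs = {(r, t) for r in R for t in R if (t - r) % 7 in S}
print(all((t, r) in arcs for (r, t) in arcs))   # True
\end{verbatim}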
The problem of finding graphical regular representations (GRRs) for groups has a long history. Mathematicians have studied graphs with specified automorphism groups at least as far back as the 1930s, and in the 1970s there were many papers devoted to the topic of finding GRRs (see for example \cite{babai11,Het,Im1, Im2,Im3,NW1,NW2,NW3,Wat}), although the ``GRR" terminology was coined somewhat later.
\begin{definition}{\rm
A \textbf{\emph{digraphical regular representation}} (DRR) for a group $R$ is a digraph whose full automorphism group is the group $R$ acting regularly on the vertices of the digraph.
Similarly, a \textbf{\emph{graphical regular representation}} (GRR) for a group $R$ is a graph whose full automorphism group is the group $R$ acting regularly on the vertices of the graph.}
\end{definition}
It is an easy observation that when $\mathop{\Gamma}(R,S)$ is a Cayley (di)graph, the group $R$ acts regularly on the vertices as a group of graph automorphisms. A DRR (or GRR) for $R$ is therefore a Cayley (di)graph on $R$ that admits no other automorphisms.
The main thrust of much of the work through the 1970s was to determine which groups admit GRRs. This question was ultimately answered by Godsil in~\cite{God}. The corresponding result for DRRs was proved by a much simpler argument by Babai~\cite{babai11}.
Babai and Godsil made the following conjecture. (Given a finite group $R$, $2^{{\bf c}(R)}$ denotes the number of inverse-closed subsets of $R$. See Definition~\ref{defeq:2} for the definition of generalized dicyclic group.)
\begin{conjecture}[\cite{BaGo}; \cite[Conjecture~3.13]{Go2}]
{\rm
If $R$ is not generalised dicyclic or abelian of exponent greater than $2$, then for almost all inverse-closed subsets $S$ of $R$, $\mathop{\Gamma}(R,S)$ is a GRR. In other words,
$$\lim_{|R| \to \infty} \min\left\{ \frac{|\{S \subseteq R: S=S^{-1},\,\mathop{\mathrm{Aut}}(\mathop{\Gamma}(R,S))=R\}|}{2^{{\bf c}(R)}}: R\text{ admits a GRR}\right\} =1.$$}
\end{conjecture}
From Godsil's theorem~\cite{God}, as $|R|\to \infty$, the condition ``$R$ admits a GRR" is equivalent to ``$R$ is neither a generalised dicyclic group, nor abelian of exponent greater than $2$."
The corresponding conjecture for Cayley digraphs (which does not require any families of groups to be excluded) was proved by Morris and the author in~\cite{MSMS}. Our current strategy for proving the conjecture of Babai and Godsil is to use the proof of the corresponding conjecture for Cayley digraphs as a template and extend the work in~\cite{MSMS} in the context of undirected Cayley graphs. This strategy so far has been rather successful and in~\cite{MMS,Bxu} the authors have already adapted some of the arguments in~\cite{MSMS} for undirected graphs.
One key tool in~\cite{MSMS} is an elementary observation of Babai.
\begin{lemma}\label{lemma1}
Let $G$ be a finite transitive group on $\Omega$ properly containing a regular subgroup $R$. Then there are at most $2^{\frac{3|\Omega|}{4}}$ Cayley digraphs $\Gamma$ on $R$ with $G\leq \mathop{\mathrm{Aut}}(\Gamma)$.
\end{lemma}
The proof of this fact is elementary, see for instance~\cite[Lemma~1.8]{MSMS}. Observe that the number of Cayley digraphs on $R$ is the number of subsets of $R$, that is, $2^{|R|}$. Therefore, Lemma~\ref{lemma1} says that, given $G$ properly containing $R$, only at most $2^{|R|-\frac{|R|}{4}}$ of these Cayley digraphs admit $G$ as a group of automorphisms. This gain of $|R|/4$ is one of the tools in~\cite{MSMS} for proving the Babai-Godsil conjecture on Cayley digraphs.
To continue our project of proving the Babai-Godsil conjecture for Cayley graphs, we need an analogue of Lemma~\ref{lemma1} for Cayley graphs. Observe that the number of Cayley graphs on $R$ is the number of inverse-closed subsets of $R$. We denote this number with $2^{{\bf c}(R)}$. It is not hard to prove (see for instance~\cite[Lemma~$1.12$]{MMS}) that $${\bf c}(R)=\frac{|R|+|{\bf I}(R)|}{2},$$
where ${\bf I}(R)=\{x\in R\mid x^2=1\}$.
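For instance, for the cyclic group $R=\mathbb{Z}_6$ we have ${\bf I}(R)=\{0,3\}$ and ${\bf c}(R)=(6+2)/2=4$, so there are $2^4=16$ inverse-closed subsets; a brute-force check (illustration only):
\begin{verbatim}
from itertools import combinations

n = 6
R = range(n)
I = [x for x in R if (2 * x) % n == 0]          # I(R) = {0, 3}
count = sum(1 for k in range(n + 1) for S in combinations(R, k)
            if all((-x) % n in S for x in S))   # S inverse-closed
c = (n + len(I)) // 2
print(count, 2 ** c)                            # 16 16
\end{verbatim}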
To obtain this analogue one needs to investigate finite transitive groups having many suborbits of cardinality at most $2$. Therefore, our investigation leads to the following result.
\begin{theorem}\label{thrm:main3}
Let $G$ be a finite transitive group properly containing a regular subgroup $R$. Then one of the following holds
\begin{enumerate}[(a)]
\item the number of Cayley graphs $\Gamma$ on $R$ with $G\leq \mathop{\mathrm{Aut}}(\Gamma)$ is at most $2^{{\bf c}(R)-\frac{|R|}{96}}$,
\item $R$ is abelian of exponent greater than $2$,
\item $R$ is generalized dicyclic (see Definition~$\ref{defeq:2}$).
\end{enumerate}
\end{theorem}
\subsection{Notation}\label{notation}
In this section, we establish some notation that we use throughout the rest of the paper.
Given a set $X$ of permutations of $\Omega$, we use exponential notation for the action on $\Omega$; in particular, given $\omega\in \Omega$, we let
$$\omega^X:=\{\omega^x\mid x\in X\},$$
where $\omega^x$ is the image of $\omega$ under the permutation $x$.
Similarly, we let
$$\mathrm{Fix}_\Omega(X):=\{\omega\in \Omega\mid \omega^x=\omega,\,\forall x\in X\}.$$
Let $G$ be a transitive permutation group on $\Omega$.
For each positive integer $i$ and for each $\omega\in \Omega$, we let
\begin{equation}\label{notation:1}
\Omega_{\omega,i}:=\{\delta\in \Omega\mid |\delta^{G_\omega}|=i\}.
\end{equation}
Clearly,
\begin{equation}\label{eq:omega0}
\Omega=\Omega_{\omega,1}\cup\Omega_{\omega,2}\cup\Omega_{\omega,3}\cup\cdots
\end{equation} and the non-empty sets in this union form a partition of $\Omega$.
When $i=1$, we have $$\Omega_{\omega,1}=\{\delta\in \Omega\mid G_\omega\textrm{ fixes }\delta\},$$ that is, $\Omega_{\omega,1}$ is the set of fixed points of $G_\omega$ on $\Omega$. It is well-known that $\Omega_{\omega,1}$ is a block of imprimitivity for the action of $G$ on $\Omega$, see for instance~\cite[1.6.5]{dixonmortimer}. Since this fact will play a role in what follows, we prove it here; this will also be helpful for setting up some additional notation. Let ${\bf N}_{G}(G_\omega)$ be the normalizer of $G_\omega$ in $G$. As ${\bf N}_G(G_\omega)$ contains $G_\omega$, the ${\bf N}_G(G_\omega)$-orbit containing $\omega$ is a block of imprimitivity for the action of $G$ on $\Omega$. Therefore it suffices to prove that $\Omega_{\omega,1}$ is the ${\bf N}_G(G_\omega)$-orbit containing $\omega$, that is, $\Omega_{\omega,1}=\omega^{{\bf N}_G(G_\omega)}=\{\omega^g\mid g\in {\bf N}_G(G_\omega)\}$. If $g\in {\bf N}_G(G_\omega)$, then $G_\omega=G_\omega^g=G_{\omega^g}$ and hence $G_\omega$ fixes $\omega^g$, that is, $\omega^g\in \Omega_{\omega,1}$. Conversely, let $\alpha\in \Omega_{\omega,1}$. As $G$ is transitive on $\Omega$, there exists $g\in G$ with $\alpha=\omega^g$. Thus $\omega^g\in \Omega_{\omega,1}$ and $G_\omega$ fixes $\omega^g$. This yields $G_\omega=G_{\omega^g}=G_\omega^g$ and $g\in{\bf N}_G(G_\omega)$. Therefore, $\alpha=\omega^g$ lies in the ${\bf N}_G(G_\omega)$-orbit containing $\omega$.
We let
\begin{equation*}
d:=|\Omega_{\omega,1}|.
\end{equation*}
As $G$ is transitive on $\Omega$, $d$ does not depend on $\omega$. From the previous paragraph, we deduce that $d$ divides $|\Omega_{\omega,i}|$ for each positive integer $i$: any two points in the same block $\Omega_{\delta,1}$ have the same stabilizer, so $\Omega_{\omega,i}$ is a union of blocks of the form $\Omega_{\delta,1}$, each of cardinality $d$. We define
\begin{equation}\label{def:xi}
x_i:=\frac{|\Omega_{\omega,i}|}{|\Omega_{\omega,1}|}=\frac{|\Omega_{\omega,i}|}{d}\in\mathbb{N}.
\end{equation}
In particular, $x_1=1$ and, from~\eqref{eq:omega0} and~\eqref{def:xi}, we have
\begin{equation*}
|\Omega|=d\sum_{i}x_i.
\end{equation*}
\begin{definition}\label{defeq:2}{\rm
Let $A$ be an abelian group of even order and of exponent greater than $2$, and let $y$ be an involution of $A$. The generalised dicyclic group ${\rm Dic}(A, y, x)$ is the group $\langle A, x\mid x^2=y, a^x=a^{-1},\forall a\in A\rangle$. A group is called \textit{\textbf{generalised dicyclic}} if it is isomorphic to some ${\rm Dic}(A, y, x)$. When $A$ is cyclic, ${\rm Dic}(A, y, x)$ is called a dicyclic or generalised quaternion group.
}
\end{definition}
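A minimal sketch of this definition (illustration only): writing the elements of ${\rm Dic}(A,y,x)$ as pairs $(a,e)=ax^e$ with $a\in A$ and $e\in\{0,1\}$, the relations $x^2=y$ and $a^x=a^{-1}$ determine the multiplication. For $A=\mathbb{Z}_4$ (additive) and $y=2$ one recovers the quaternion group $Q_8$:
\begin{verbatim}
n, y = 4, 2                     # A = Z_4, y = 2

def mult(p, q):                 # (a, e) stands for a * x^e
    (a, e), (b, f) = p, q
    if e == 0:
        return ((a + b) % n, f)           # a * (b x^f) = (a + b) x^f
    if f == 0:
        return ((a - b) % n, 1)           # (a x) * b = (a - b) x
    return ((a - b + y) % n, 0)           # (a x)(b x) = (a - b) + y

def order(p):
    q, k = p, 1
    while q != (0, 0):
        q, k = mult(q, p), k + 1
    return k

G = [(a, e) for a in range(n) for e in (0, 1)]
print(sorted(order(p) for p in G))  # [1, 2, 4, 4, 4, 4, 4, 4]: this is Q_8
\end{verbatim}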
\section{Lemmata}\label{sec:lemmata}
In this section we use the notation established in Section~\ref{notation}.
\begin{lemma}\label{lemma:-4}
Let $G$ be a finite permutation group on a set $\Omega$ and let $\alpha\in \Omega$. If
$$\frac{|\Omega|}{2}<|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|<|\Omega|,$$
then
\begin{enumerate}[(a)]
\item\label{eq:lemma-41}$\Omega=\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup\Omega_{\alpha,4}$ (in particular, $\Omega_{\alpha,i}=\emptyset$, for every positive integer $i$ with $i\notin\{1,2,4\}$);
\item\label{eq:lemma-42} for every $\beta\in \Omega_{\alpha,4}$, $\Omega_{\alpha,2}\cap\Omega_{\beta,2}\ne\emptyset$;
\item\label{eq:lemma-43}for every $\beta\in\Omega_{\alpha,4}$ and for every $\omega\in \Omega_{\alpha,2}\cap\Omega_{\beta,2}$, we have
$G_\omega=(G_\alpha\cap G_\omega)(G_\beta\cap G_\omega)$.
\end{enumerate}
\end{lemma}
\begin{proof}
As $|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|<|\Omega|$, by~\eqref{eq:omega0}, we get that $\Omega_{\alpha,1}\cup\Omega_{\alpha,2}$ is strictly contained in $\Omega$. Therefore,
let $\beta\in \Omega\setminus(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})$.
Since $\beta\notin \Omega_{\alpha,1}\cup\Omega_{\alpha,2}$, we have
\begin{equation}\label{eye-1}
|G_\alpha:G_\alpha\cap G_\beta|=|G_\beta:G_\alpha\cap G_\beta|>2.
\end{equation} See Figure~\ref{figureeye0}.
\begin{figure}[!ht]
\begin{tikzpicture}[node distance=1.3cm]
\node at (0,0) (A0) {$G_\alpha$};
\node[right of=A0] (A1) {};
\node[right of=A1] (A2) {$G_\beta$};
\node[below of=A1] (A3) {$G_\alpha\cap G_\beta$};
\draw[-] (A0) -- node[left]{$>2$}(A3);
\draw[-] (A2) -- node[right]{$>2$}(A3);
\end{tikzpicture}
\caption{}\label{figureeye0}
\end{figure}
From this we deduce
\begin{equation}\label{eye:2}\Omega_{\alpha,1}\cap \Omega_{\beta,1}=\Omega_{\alpha,2}\cap \Omega_{\beta,1}=\Omega_{\alpha,1}\cap \Omega_{\beta,2}=\emptyset.
\end{equation}
Indeed, if for instance $\omega\in \Omega_{\alpha,1}\cap\Omega_{\beta,2}$, then $|\omega^{G_\alpha}|=1$ and $|\omega^{G_\beta}|=2$. Therefore, $|G_\alpha:G_\alpha\cap G_\omega|=1$ and $|G_\beta:G_\beta\cap G_\omega|=2$. As $|G_\alpha:G_\alpha\cap G_\omega|=1$, we get $G_\alpha=G_\omega$. Now, as $|G_\beta:G_\beta\cap G_\omega|=2$ and $G_\alpha=G_\omega$, we get $2=|G_\beta:G_\beta\cap G_\omega|=|G_\beta:G_\beta\cap G_\alpha|$, which contradicts~\eqref{eye-1}. Therefore $\Omega_{\alpha,1}\cap \Omega_{\beta,2}=\emptyset$. The proof for all other equalities in~\eqref{eye:2} is similar.
From~\eqref{eye:2}, we obtain
\begin{equation}\label{eye:3}
(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})\cap (\Omega_{\beta,1}\cup\Omega_{\beta,2})=\Omega_{\alpha,2}\cap\Omega_{\beta,2}.
\end{equation}
Recall that, by hypothesis, $|\Omega_{\alpha,1}\cup\Omega_{\alpha,2}|>|\Omega|/2$. Using this together with~\eqref{eye:3}, we get
\begin{align}\label{eye:4}
|\Omega_{\alpha,2}\cap\Omega_{\beta,2}|&=|(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})\cap (\Omega_{\beta,1}\cup\Omega_{\beta,2})|\\\nonumber
&=|\Omega_{\alpha,1}\cup\Omega_{\alpha,2}|+ |\Omega_{\beta,1}\cup\Omega_{\beta,2}|-|(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})\cup (\Omega_{\beta,1}\cup\Omega_{\beta,2})|\\\nonumber
&\ge |\Omega_{\alpha,1}\cup\Omega_{\alpha,2}|+ |\Omega_{\beta,1}\cup\Omega_{\beta,2}|-|\Omega|\\\nonumber
&>\frac{|\Omega|}{2}+\frac{|\Omega|}{2}-|\Omega|=0.
\end{align}
From~\eqref{eye:4}, we deduce $\Omega_{\alpha,2}\cap\Omega_{\beta,2}\ne\emptyset$. Let $\omega\in \Omega_{\alpha,2}\cap \Omega_{\beta,2}$. In particular, $|\omega^{G_\alpha}|=|\omega^{G_\beta}|=2$. This means that $|G_\alpha:G_\alpha\cap G_\omega|=|G_\beta:G_\beta\cap G_\omega|=2$. Since $|G_\alpha|=|G_\beta|=|G_\omega|$, we get that $G_\alpha\cap G_\omega$ and $G_\beta\cap G_\omega$ have both index $2$ in $G_\omega$. Suppose $G_\alpha\cap G_\omega=G_\beta\cap G_\omega$. Then
$$G_\alpha\cap G_\omega=G_\beta\cap G_\omega=G_\alpha\cap G_\beta\cap G_\omega\le G_\alpha\cap G_\beta$$
and hence
$$|G_\alpha:G_\alpha\cap G_\beta|\le |G_\alpha:G_\alpha\cap G_\omega|=2.$$
However, this contradicts~\eqref{eye-1}. Therefore, $G_\alpha\cap G_\omega$ and $G_\beta\cap G_\omega$ are two distinct subgroups of $G_\omega$ having index $2$. This yields
\begin{equation}\label{eye:5}G_\omega=(G_\alpha\cap G_\omega)(G_\beta\cap G_\omega),
\end{equation}
for each $\omega\in \Omega_{\alpha,2}\cap\Omega_{\beta,2}$.
From~\eqref{eye:5} and from the fact that $|G_\omega:G_\alpha\cap G_\omega|=|G_\omega:G_\beta\cap G_\omega|=2$, we see that $(G_\alpha\cap G_\omega)\cap (G_\beta\cap G_\omega)=G_\alpha\cap G_\beta\cap G_\omega$ has index $4$ in $G_\omega$. Since $|G_\omega|=|G_\alpha|=|G_\beta|$, we get that $G_\alpha\cap G_\beta\cap G_\omega$ has also index $4$ in $G_\alpha$ and in $G_\beta$.
Since $G_\alpha\cap G_\beta\cap G_\omega\le G_\alpha\cap G_\beta$, we get that $|G_\alpha:G_\alpha\cap G_\beta|=|G_\beta:G_\alpha\cap G_\beta|$ divides $|G_\alpha:G_\alpha\cap G_\beta\cap G_\omega|=4$. As $|G_\alpha:G_\alpha\cap G_\beta|=|G_\beta:G_\alpha\cap G_\beta|>2$, we get
$G_\alpha\cap G_\beta\cap G_\omega= G_\alpha\cap G_\beta$
and
$$|G_\alpha:G_\alpha\cap G_\beta|=|G_\beta:G_\alpha\cap G_\beta|=4.$$ We have summarized this paragraph in Figure~\ref{figureeye1}. In other words, $\beta\in \Omega_{\alpha,4}$.
\begin{figure}[!ht]
\begin{tikzpicture}[node distance=2.5cm]
\node at (0,0) (A0) {$G_\omega=(G_\alpha\cap G_\omega)(G_\beta\cap G_\omega)$};
\node[left of=A0] (A1) {};
\node[left of=A1](AA1){$G_\alpha$};
\node[right of=A0] (A2) {};
\node[right of=A2](AA2){$G_\beta$};
\node[below of=A1] (A3) {$G_\alpha\cap G_\omega$};
\node[below of=A2] (A4){$G_\beta\cap G_\omega$};
\node[below of=A0] (B){};
\node[below of=B] (A5){$G_\alpha\cap G_\beta$};
\draw[-] (A0) -- node[left]{2}(A3);
\draw[-] (AA1) -- node[left]{2}(A3);
\draw[-] (A0) -- node[right]{2}(A4);
\draw[-] (AA2) -- node[right]{2}(A4);
\draw[-] (A3) -- node[left]{2}(A5);
\draw[-] (A4) -- node[right]{2}(A5);
\end{tikzpicture}
\caption{}\label{figureeye1}
\end{figure}
Since $\beta$ is an arbitrary element in $\Omega\setminus(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})$, we have proven part~\eqref{eq:lemma-41}. Now, as $\Omega_{\alpha,4}=\Omega\setminus(\Omega_{\alpha,1}\cup\Omega_{\alpha,2})$, part~\eqref{eq:lemma-42} follows from~\eqref{eye:4} and part~\eqref{eq:lemma-43} follows from~\eqref{eye:5}.
\end{proof}
\begin{lemma}\label{lemma:4esteso}
Let $G$ be a finite permutation group on a set $\Omega$ and let $\alpha\in \Omega$. If $\Omega=\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup\Omega_{\alpha,4}$ and $|\Omega_{\alpha,1}|=|\Omega_{\alpha,4}|$, then $\Omega_{\alpha,1}\cup\Omega_{\alpha,4}$ is a block of imprimitivity for $G$. Moreover, ${\bf N}_G(G_\alpha)={\bf N}_G(G_\beta)$, for every $\beta\in\Omega_{\alpha,4}$.
\end{lemma}
\begin{proof}
Let $\beta\in\Omega_{\alpha,4}$. As $\Omega_{\beta,1}\subseteq \Omega_{\alpha,4}$ and as $\Omega_{\beta,1}$ and $\Omega_{\alpha,4}$ have the same cardinality, we deduce $\Omega_{\alpha,4}=\Omega_{\beta,1}$. Analogously, $\Omega_{\beta,4}=\Omega_{\alpha,1}$.
Let $g\in G$ with $\beta=\alpha^g$. Now, we have
$$(\Omega_{\alpha,4})^g=\Omega_{\alpha^g,4}=\Omega_{\beta,4}=\Omega_{\alpha,1}.$$
Analogously, $\Omega_{\alpha,1}^g=\Omega_{\alpha,4}$.
So,
$$\Omega_{\alpha,1}^g=\Omega_{\alpha,4}\hbox{ and }\Omega_{\alpha,4}^g=\Omega_{\alpha,1}.$$
Therefore, $(\Omega_{\alpha,1}\cup\Omega_{\alpha,4})^g=\Omega_{\alpha,1}\cup\Omega_{\alpha,4}$ and $g^2$ fixes setwise $\Omega_{\alpha,1}$ and $\Omega_{\alpha,4}$.
Since $\Omega_{\alpha,1}$ is a block of imprimitivity for $G$ with setwise stabilizer ${\bf N}_G(G_\alpha)$, we deduce $g^2\in {\bf N}_G(G_\alpha)$. Set $T:=\langle {\bf N}_G(G_\alpha),g\rangle$.
Observe that $\Omega_{\alpha,4}=\{\delta\in\Omega\mid |\delta^{G_\alpha}|=4\}$ depends only on $G_\alpha$, and hence every element of ${\bf N}_G(G_\alpha)$ fixes setwise $\Omega_{\alpha,4}=\Omega_{\beta,1}$. Now, for every $x\in {\bf N}_G(G_\alpha)$, we have
$$\Omega_{\alpha,1}^{g^{-1}xg}=(\Omega_{\alpha,1}^{g^{-1}})^{xg}=\Omega_{\beta,1}^{xg}=(\Omega_{\beta,1}^x)^g=\Omega_{\beta,1}^g=\Omega_{\alpha,1}.$$
Thus $g^{-1}xg$ fixes setwise $\Omega_{\alpha,1}$ and hence $g^{-1}xg\in {\bf N}_G(G_\alpha)$. This yields $${\bf N}_G(G_\beta)={\bf N}_G(G_{\alpha^g})=({\bf N}_G(G_\alpha))^g={\bf N}_G(G_\alpha).$$
As $g$ normalizes ${\bf N}_G(G_\alpha)$, we have $T={\bf N}_G(G_\alpha)\langle g\rangle$ and
$$\alpha^T=(\alpha^{{\bf N}_G(G_\alpha)})^{\langle g\rangle}=\Omega_{\alpha,1}^{\langle g\rangle}=\Omega_{\alpha,1}\cup\Omega_{\alpha,4}.$$
Now, since $T$ is an overgroup of $G_\alpha$ and since $\Omega_{\alpha,1}\cup\Omega_{\alpha,4}$ is the $T$-orbit containing $\alpha$, we deduce that $\Omega_{\alpha,1}\cup\Omega_{\alpha,4}$ is a block of imprimitivity for $G$.
\end{proof}
We now need two rather technical lemmas; at first sight they may seem out of context, but they are pivotal in the proof of Lemma~\ref{lemma:-3}. We could phrase Lemma~\ref{lemma:44esteso} in purely group-theoretic terminology, but in our opinion it is easier to state using some terminology from graph theory.
\begin{lemma}\label{lemma:44esteso}
Let $G$ be a group, let $X$ be an elementary abelian $2$-subgroup of $G$ and let $Y$ be a $G$-conjugate of $X$ such that $Z:=X\cap Y$ has index $4$ in both $X$ and $Y$. Let $\Lambda_X:=\{X_1,X_2,X_3\}$ and $\Lambda_{Y}:=\{Y_1,Y_2,Y_3\}$ be the collections of the proper subgroups of $X$ and of $Y$, respectively, properly containing $Z$.
Let $\Gamma$ be the bipartite graph having vertex set $\Lambda_X\cup\Lambda_Y$, where a pair $\{X_i,Y_j\}$
is declared to be adjacent if $X_iY_j$ is a subgroup of $G$ conjugate to $X$ via an element of $G$.
If $\Gamma$ has at least $6$ edges, then $X$ commutes with $Y$.
\end{lemma}
\begin{proof}
Suppose that
\begin{center}
$(\ast)\quad$ there exist two distinct vertices of $\Gamma$ having valency at least $2$.
\end{center} By symmetry, without loss of generality, we suppose that these two vertices are in $\Lambda_X$. Thus suppose that $X_i,X_j\in \Lambda_X$ have valency at least $2$ in $\Gamma$.
Let $Y_{i_1}$ and $Y_{i_2}$ be two neighbours of $X_i$ in $\Gamma$. Then, by definition, $X_iY_{i_1}$ and $X_iY_{i_2}$ are both subgroups of $G$ conjugate to $X$. Therefore, $X_iY_{i_1}$ and $X_iY_{i_2}$ are elementary abelian $2$-groups and hence $X_i$ commutes with both $Y_{i_1}$ and $Y_{i_2}$. Since $\langle Y_{i_1},Y_{i_2}\rangle=Y$, we deduce that $X_i$ commutes with $Y$.
Arguing as in the paragraph above with $X_i$ replaced by $X_j$, we deduce that $X_j$ commutes with $Y$. Therefore, $X=\langle X_i,X_j\rangle$ commutes with $Y$.
Now, it is elementary to see that every bipartite graph on six vertices, with parts of cardinality $3$ and with at least $6$ edges, has the property $(\ast)$: the sum of the valencies within one part equals the number of edges, which is at least $6$, and since each valency is at most $3$, at least two vertices in that part must have valency at least $2$.
\end{proof}
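The final claim also admits a quick exhaustive verification (illustration only): among all bipartite graphs with parts of cardinality $3$ and at least $6$ edges, there are always two vertices of valency at least $2$ lying in a common part.
\begin{verbatim}
from itertools import combinations

cells = [(i, j) for i in range(3) for j in range(3)]  # possible edges X_i Y_j
ok = True
for m in range(6, 10):
    for E in combinations(cells, m):
        rows = [sum(1 for (i, j) in E if i == r) for r in range(3)]
        cols = [sum(1 for (i, j) in E if j == c) for c in range(3)]
        if sum(v >= 2 for v in rows) < 2 and sum(v >= 2 for v in cols) < 2:
            ok = False
print(ok)   # True
\end{verbatim}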
Recall that a graph $\Gamma$ is said to be vertex-transitive if its automorphism group acts transitively on the vertices of $\Gamma$. Given a vertex $\omega$ of $\Gamma$, we denote by $\Gamma(\omega)$ the neighbourhood of $\omega$ in $\Gamma$.
\begin{lemma}\label{lemma:444esteso}
Let $\Gamma$ be a finite vertex-transitive graph having valency $2$, let $V$ be the set of vertices of $\Gamma$, let $\omega_1,\omega_2$ be two adjacent vertices of $\Gamma$ and let $W$ be a subset of $V$ containing $\omega_1$ and $\omega_2$ and with the property that, for any two distinct vertices $\delta_1,\delta_2$ in $W$, $V\setminus (\Gamma(\delta_1)\cup\Gamma(\delta_2))\subseteq W$. Then either $W=V$ or $|V|\le 6$.
\end{lemma}
\begin{proof}Since $\Gamma$ is vertex-transitive of valency $2$, $\Gamma$ is a disjoint union of $s$ cycles, all of the same length $\ell$. If $\ell\ge 7$, or if $\Gamma$ is disconnected, that is, $s\ge 2$, it can be easily checked that $W=V$. Otherwise, $s=1$ and $\ell\le 6$, so $|V|=s\ell\le 6$.
\end{proof}
\begin{lemma}\label{lemma:-3}
Let $G$ be a finite permutation group on a set $\Omega$ and let $\alpha\in \Omega$. If
$$\frac{|\Omega|}{2}<|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|<|\Omega|,$$
then one of the following holds
\begin{enumerate}[(a)]
\item\label{lemma:-30}$|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|< 5|\Omega|/6$, or
\item\label{lemma:-31}
\begin{enumerate}[(i)]
\item\label{lemma:-322}$|\Omega_{\alpha,4}|\le 2|\Omega_{\alpha,1}|$, and
\item\label{lemma:-32}$G_\alpha$ is an elementary abelian $2$-group, and
\item\label{lemma:-33}$G_\alpha$ commutes with $G_\beta$, for every $\beta\in \Omega_{\alpha,4}$, and
\item\label{lemma:-34}$\langle G_\alpha,G_\beta\rangle=G_\alpha\times G_\beta$ is an elementary abelian normal $2$-subgroup of $G$ of order $16$, for every $\beta\in \Omega_{\alpha,4}$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma:-4}, $\Omega=\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup \Omega_{\alpha,4}$. Moreover, for each $\beta\in \Omega_{\alpha,4}$, we have shown that $G_\alpha$ contains a proper subgroup (namely, $G_\alpha\cap G_\omega$, for each $\omega\in \Omega_{\alpha,2}\cap\Omega_{\beta,2}$) strictly containing $G_\alpha\cap G_\beta$. This implies that the permutation group, $P$ say, induced by $G_\alpha$ in its action on the suborbit $\beta^{G_\alpha}$ is a $2$-group. (Indeed, if $G_\alpha$ induces the alternating group $\mathrm{Alt}(4)$ or the symmetric group $\mathrm{Sym}(4)$ on $\beta^{G_\alpha}$, then $G_\alpha$ acts primitively on $\beta^{G_\alpha}$ and hence $G_\alpha\cap G_\beta$ is maximal in $G_\alpha$, a contradiction.) Clearly, this $2$-group $P$ must be either cyclic of order $4$, or elementary abelian of order $4$, or dihedral of order $8$.
We have drawn in Figure~\ref{figureeye-2} the lattice of subgroups of the cyclic group of order $4$, the elementary abelian group of order $4$ and the dihedral group of order $8$: the dark colored nodes indicate the lattice of subgroups between the whole group and the stabilizer of a point.
Figure~\ref{figureeye-2} shows that, given $G_\alpha$ and $G_{\alpha}\cap G_\beta$, we only have one choice for $G_\alpha\cap G_\omega$ when $P$ is cyclic of order $4$ or dihedral of order $8$, whereas we have at most three choices for $G_\alpha\cap G_\omega$ when $P$ is elementary abelian of order $4$.
\begin{figure}[!ht]
\begin{tikzpicture}[node distance=1cm,inner sep=0pt]
\node[minimum size=2mm,circle, fill=black] at (-1,0) (A0) {};
\node[minimum size=2mm,circle,fill=black,below of=A0] (A1) {};
\node[minimum size=2mm,circle,fill=black,below of=A1] (A2) {};
\node[minimum size=2mm,circle, fill=black] at (3,0) (B0) {};
\node[minimum size=2mm,circle,fill=black,below of=B0] (B1) {};
\node[minimum size=2mm,circle,fill=black,below of=B1] (B2) {};
\node[minimum size=2mm,circle,fill=black,left of=B1] (B3) {};
\node[minimum size=2mm,circle,fill=black,right of=B1] (B4) {};
\node[minimum size=2mm,circle, fill=black] at (7,0) (C0) {};
\node[minimum size=2mm,circle,fill=lightgray,below of=C0] (C1) {};
\node[minimum size=2mm,circle,fill=black,left of=C1] (C2) {};
\node[minimum size=2mm,circle,fill=lightgray,right of=C1] (C3) {};
\node[minimum size=2mm,circle,fill=lightgray,below of=C1] (C4) {};
\node[minimum size=2mm,circle,fill=black,left of=C4] (C5) {};
\node[minimum size=2mm,circle,fill=lightgray,left of=C5] (C6) {};
\node[minimum size=2mm,circle,fill=lightgray,right of=C4] (C7) {};
\node[minimum size=2mm,circle,fill=lightgray,right of=C7] (C8) {};
\node[minimum size=2mm,circle,fill=lightgray,below of=C4] (C9) {};
\draw[-] (A0) -- (A1);
\draw[-] (A1) -- (A2);
\draw[-] (B0) -- (B1);
\draw[-] (B0) -- (B3);
\draw[-] (B0) -- (B4);
\draw[-] (B2) -- (B1);
\draw[-] (B2) -- (B3);
\draw[-] (B2) -- (B4);
\draw[-] (C0) -- (C1);
\draw[-] (C0) -- (C2);
\draw[-] (C0) -- (C3);
\draw[-] (C4) -- (C1);
\draw[-] (C4) -- (C2);
\draw[-] (C4) -- (C3);
\draw[-] (C9) -- (C5);
\draw[-] (C9) -- (C6);
\draw[-] (C9) -- (C7);
\draw[-] (C9) -- (C8);
\draw[-] (C9) -- (C4);
\draw[-] (C2) -- (C5);
\draw[-] (C2) -- (C6);
\draw[-] (C3) -- (C7);
\draw[-] (C3) -- (C8);
\end{tikzpicture}
\caption{}\label{figureeye-2}
\end{figure}
Given $\beta\in \Omega_{\alpha,4}$, let $$\mathcal{S}_{\alpha,\beta}:=\{G_\omega\mid \omega\in \Omega_{\alpha,2}\cap \Omega_{\beta,2}\}.$$
Observe that in the set $\mathcal{S}_{\alpha,\beta}$ we are collecting point stabilizers and not elements of $\Omega$ and hence different elements $\omega_1,\omega_2$ of $\Omega$ can give rise to the same element of $\mathcal{S}_{\alpha,\beta}$ when $G_{\omega_1}=G_{\omega_2}$.
We claim that
\begin{equation}\label{eye:6}
|\mathcal{S}_{\alpha,\beta}|\le
\begin{cases}
3&\textrm{when the permutation group induced by $G_\alpha$ on $\beta^{G_\alpha}$ or by $G_\beta$ on $\alpha^{G_\beta}$}\\
&\textrm{ is not an elementary abelian $2$-group of order $4$},\\
9&\textrm{otherwise}.
\end{cases}
\end{equation}
This claim follows from the paragraphs above and from Figure~\ref{figureeye-2}. Indeed, from Lemma~\ref{lemma:-4} part~\eqref{eq:lemma-43}, for each $X\in \mathcal{S}_{\alpha,\beta}$, there exists a proper subgroup $A$ of $G_\alpha$ and a proper subgroup $B$ of $G_\beta$ with $G_\alpha\cap G_\beta<A$, $G_\alpha\cap G_\beta<B$ and $X=AB$. Observe that we have at most $3$ choices for $A$ and at most $3$ choices for $B$ and hence at most $9$ choices for $X$. Moreover, as long as the permutation group induced on the corresponding orbit is not elementary abelian, we actually have only one choice for either $A$ or $B$ yielding at most $3$ choices for $X$.
\smallskip
For each $X\in\mathcal{S}_{\alpha,\beta}$, let $\mathcal{S}_X:=\{\omega\in \Omega_{\alpha,2}\cap\Omega_{\beta,2}\mid G_\omega=X\}$. From Section~\ref{notation} and from the notation therein, we have $|\mathcal{S}_X|=|\Omega_{\omega,1}|=d$. From this and from the definition of $\mathcal{S}_{\alpha,\beta}$, we obtain
\begin{equation}\label{eye:11}
|\Omega_{\alpha,2}\cap\Omega_{\beta,2}|=\left|\bigcup_{X\in\mathcal{S}_{\alpha,\beta}}\mathcal{S}_X\right|= \sum_{X\in\mathcal{S}_{\alpha,\beta}}|\mathcal{S}_X|= |\mathcal{S}_{\alpha,\beta}|d.
\end{equation}
From part~\eqref{eq:lemma-41} of Lemma~\ref{lemma:-4}, we have $\Omega=\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup\Omega_{\alpha,4}$. From this, we immediately get $\Omega_{\beta,2}\subseteq \Omega\setminus\Omega_{\alpha,1}=\Omega_{\alpha,2}\cup\Omega_{\alpha,4}$ and hence
$\Omega_{\alpha,2}\cup\Omega_{\beta,2}\subseteq \Omega_{\alpha,2}\cup\Omega_{\alpha,4}$. Therefore,
\begin{align}\label{eye:10}
|\Omega_{\alpha,2}\cap\Omega_{\beta,2}|&=|\Omega_{\alpha,2}|+|\Omega_{\beta,2}|-|\Omega_{\alpha,2}\cup\Omega_{\beta,2}|\\\nonumber
&\ge |\Omega_{\alpha,2}|+|\Omega_{\beta,2}|-|\Omega_{\alpha,2}\cup\Omega_{\alpha,4}|\\\nonumber
&= |\Omega_{\alpha,2}|+|\Omega_{\beta,2}|-|\Omega_{\alpha,2}|-|\Omega_{\alpha,4}|\\\nonumber
&=|\Omega_{\alpha,2}|-|\Omega_{\alpha,4}|.
\end{align}
Now, dividing both sides of~\eqref{eye:11} and~\eqref{eye:10} by $|\Omega_{\alpha,1}|=d$, by recalling~\eqref{def:xi} and by rearranging the terms, we obtain
\begin{equation}\label{eye:30}
x_2\le |\mathcal{S}_{\alpha,\beta}|+x_4.
\end{equation}
\smallskip
We now suppose that part~\eqref{lemma:-30} does not hold and we show that part~\eqref{lemma:-322},~\eqref{lemma:-32},~\eqref{lemma:-33} and~\eqref{lemma:-34} are satisfied. In particular, we work under the assumption that
$$|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|\ge \frac{5|\Omega|}{6}.$$
As $|\Omega|=d(x_1+x_2+x_4)$, $|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|=d(x_1+x_2)$ and $x_1=1$, the inequality $|\Omega_{\alpha,1}|+|\Omega_{\alpha,2}|\ge 5|\Omega|/6$ gives
$$5x_4\le 1+x_2.$$
Now,~\eqref{eye:30} yields $5x_4\le 1+x_2\le 1+|\mathcal{S}_{\alpha,\beta}|+x_4$, that is, $4x_4\le 1+|\mathcal{S}_{\alpha,\beta}|$. From~\eqref{eye:6}, we deduce that $x_4\le 2$. This already shows part~\eqref{lemma:-322}.
When $x_4=2$, we deduce $|\mathcal{S}_{\alpha,\beta}|\ge 7$ and hence~\eqref{eye:6} yields that the permutation groups induced by $G_\alpha$ on $\beta^{G_\alpha}$ and by $G_\beta$ on $\alpha^{G_\beta}$ are both elementary abelian $2$-groups of order $4$. Since this argument does not depend upon $\beta\in\Omega_{\alpha,4}$, we have shown that $G_\alpha$ acts as an elementary abelian group on each of its orbits of cardinality $4$. Since all other orbits of $G_\alpha$ have cardinality $1$ or $2$, we deduce that $G_\alpha$ acts as an elementary abelian $2$-group on each of its orbits and hence $G_\alpha$ is an elementary abelian $2$-group. This shows part~\eqref{lemma:-32}, under the additional assumption that $x_4=2$. Moreover, as $|\mathcal{S}_{\alpha,\beta}|\ge 7$, Lemma~\ref{lemma:44esteso} applied with $X:=G_\alpha$ and $Y:=G_\beta$ gives that $G_\alpha$ and $G_\beta$ commute with each other. This shows that part~\eqref{lemma:-33} is satisfied.

To prove part~\eqref{lemma:-34} we use Lemma~\ref{lemma:444esteso}. Let $\Gamma$ be the graph having vertex set $V$, the set of conjugates of $G_\alpha$ in $G$, that is, $$V:=\{G_\omega\mid\omega\in \Omega\}.$$ Then $|V|=1+x_2+x_4$. We declare two vertices $G_{\omega_1}$ and $G_{\omega_2}$ of $\Gamma$ adjacent if $G_{\omega_1}\cap G_{\omega_2}$ has index $4$ in $G_{\omega_1}$ (and hence also in $G_{\omega_2}$). Clearly, the action of $G$ by conjugation gives rise to a vertex-transitive action of $G$ on $\Gamma$. As $x_4=2$, $\Gamma$ has valency $2$. Let $W$ be the collection of all vertices $G_\omega$ of $\Gamma$ with $G_\alpha \cap G_\beta\le G_\omega$. Clearly, $G_\alpha,G_\beta\in W$ and, from Lemma~\ref{lemma:-4} part~\eqref{eq:lemma-43}, for any two distinct vertices $G_{\delta_1}$ and $G_{\delta_2}$ of $\Gamma$ contained in $W$, we have that $$\Omega_{\delta_1,2}\cap\Omega_{\delta_2,2}=V\setminus (\Gamma(G_{\delta_1})\cup\Gamma(G_{\delta_2}))\subseteq W.$$ From this, Lemma~\ref{lemma:444esteso} gives that either $W=V$ or $|V|\le 6$. The second alternative gives $x_2=|V|-1-x_4\le 3$, which contradicts the fact that $5x_4\le 1+x_2$. Therefore, $W=V$ and hence $G_\alpha\cap G_\beta\le G_\omega$, for every $\omega\in \Omega$. Thus $G_\alpha\cap G_\beta=1$ and hence $G_\alpha G_\beta=G_\alpha\times G_\beta$ is an elementary abelian $2$-group of order $16$. To prove that $G_\alpha\times G_\beta\unlhd G$ it suffices to apply again this argument to the collection $W$ of all vertices $G_\omega$ of $\Gamma$ with $G_\omega\le G_\alpha \times G_\beta$.
In particular, in the rest of the proof we work under the assumption $x_4=1$.
When $x_4=1$, we may refine some of the inequalities above. Indeed, when $x_4=1$, we have $\Omega_{\alpha,4}=\Omega_{\beta,1}$, because both sets have the same cardinality and $\Omega_{\beta,1}\subseteq \Omega_{\alpha,4}$. From this it follows $\Omega_{\alpha,2}=\Omega_{\beta,2}$. Therefore, from~\eqref{eye:11}, we get
$$dx_2=|\Omega_{\alpha,2}|=|\Omega_{\alpha,2}\cap\Omega_{\beta,2}|=d|\mathcal{S}_{\alpha,\beta}|.$$
Now, the inequality $5=5x_4\le 1+x_2$ implies $|\mathcal{S}_{\alpha,\beta}|=x_2\ge 4$. Again, we may use~\eqref{eye:6} to deduce that the permutation groups induced by $G_\alpha$ on $\beta^{G_\alpha}$ and by $G_\beta$ on $\alpha^{G_\beta}$ are both elementary abelian $2$-groups of order $4$.
This, as above, yields that $G_\alpha$ is an elementary abelian $2$-group, that is, part~\eqref{lemma:-32} holds.
From Lemma~\ref{lemma:-4} part~\eqref{eq:lemma-43}, $G_\alpha\cap G_\beta\le G_\omega$, for every $\omega\in \Omega_{\alpha,2}\cap \Omega_{\beta,2}$. In particular, $G_\alpha\cap G_\beta$ fixes pointwise $\Omega_{\alpha,2}\cap\Omega_{\beta,2}$. As $\Omega_{\alpha,2}\cap \Omega_{\beta,2}=\Omega_{\alpha,2}$, we deduce that $G_\alpha\cap G_\beta$ fixes pointwise $\Omega_{\alpha,2}$. Since $G_\alpha\cap G_\beta$ fixes pointwise also $\Omega_{\alpha,1}$ and $\Omega_{\beta,1}=\Omega_{\alpha,4}$, we obtain that $G_\alpha\cap G_\beta$ fixes pointwise $\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup\Omega_{\alpha,4}=\Omega$. Thus $G_\alpha\cap G_\beta=1$ and $|G_\alpha|=4$. Observe also that when $x_4=1$, the hypotheses of Lemma~\ref{lemma:4esteso} are satisfied and hence ${\bf N}_G(G_\alpha)={\bf N}_G(G_\beta)$. Therefore $G_\beta$ normalizes $G_\alpha$. This gives that the commutator subgroup $[G_\alpha,G_\beta]$ lies in $G_\alpha\cap G_\beta=1$, that is, $G_\alpha$ commutes with $G_\beta$. This shows that part~\eqref{lemma:-33} is satisfied. Now, as $\Omega_{\alpha,2}=\Omega_{\beta,2}$, Lemma~\ref{lemma:-4} part~\eqref{eq:lemma-43} yields $G_\omega\le G_\alpha\times G_\beta$, for every $\omega\in \Omega_{\alpha,2}$. Therefore, $G_{\alpha}\times G_\beta$ contains $G_\omega$, for every $\omega\in \Omega$. Thus $$G_\alpha\times G_\beta=\langle G_\omega\mid\omega\in \Omega\rangle\unlhd G$$
and $G_\alpha\times G_\beta$ has order $16$. Thus part~\eqref{lemma:-34} is satisfied.
\end{proof}
We need one final preliminary lemma, with a somewhat different flavour. We denote by $C_2$ and $C_4$ the cyclic groups of order $2$ and $4$, respectively, we denote by $Q_8$ the quaternion group of order $8$ and we denote by $D_8$ the dihedral group of order $8$.
\begin{lemma}\label{lemma:diff}
Let $R$ be a finite group, let $U$ be a proper subgroup of $R$ and let $r\in U$ be a central involution of $R$. Let $\tau:R\to R$ be the permutation defined by
$$
x\mapsto x^\tau:=
\begin{cases}
x&\textrm{when }x\in U,\\
xr&\textrm{when }x\in R\setminus U.
\end{cases}
$$
Then one of the following holds
\begin{enumerate}[(a)]
\item\label{eq:diff1}the number of inverse-closed subsets $S$ of $R$ with $S^\tau=S$ is at most $2^{{\bf c}(R)-\frac{|R|}{48}}$,
\item\label{eq:diff2}$R$ is generalized dicyclic,
\item\label{eq:diff22}$R\cong C_4\times C_2^\ell$, for some non-negative integer $\ell$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\iota:R\to R$ be the permutation defined by $x^\iota=x^{-1}$, for every $x\in R$, and let $T:=\langle\iota,\tau\rangle$. Observe that a subset $S$ of $R$ is inverse-closed and $\tau$-invariant if and only if $S$ is $T$-invariant. In particular, the number of inverse-closed subsets $S$ of $R$ with $S^\tau=S$ is $2^\kappa$, where $\kappa$ is the number of orbits of $T$ on $R$. To compute $\kappa$ we use the orbit-counting lemma, which says that
\begin{equation}\label{ocl}
\kappa=\frac{1}{|T|}\sum_{t\in T}|\mathrm{Fix}_R(t)|.
\end{equation}
Observe that
\begin{align}\label{anuw}
\mathrm{Fix}_R(1)&:=R,\\\nonumber
\mathrm{Fix}_R(\iota)&:={\bf I}(R),\\\nonumber
\mathrm{Fix}_R(\tau)&:=U,\\\nonumber
\mathrm{Fix}_R(\iota\tau)&:={\bf I}(U)\cup\{x\in R\setminus U\mid x^2=r\}.\nonumber
\end{align}
Observe that $\iota\tau=\tau\iota$ and $\iota^2=\tau^2=1$. Therefore $T$ is an elementary abelian $2$-group of order at most $4$.
Observe that $\tau\ne 1$, because $U$ is a proper subgroup of $R$ and $r\ne 1$. If $\iota=1$, then $R$ is an elementary abelian $2$-group and $T=\langle \tau\rangle$. Thus~\eqref{ocl} and~\eqref{anuw} yield
\begin{align*}
\kappa&=\frac{1}{2}\left(|R|+|U|\right)\le\frac{|R|}{2}+\frac{|R|}{4}= \frac{3|R|}{4}\\
&=|R|-\frac{|R|}{4}={\bf c}(R)-\frac{|R|}{4}.
\end{align*}
Therefore, part~\eqref{eq:diff1} holds and the proof follows in this case. Suppose now $\iota=\tau$. This means that $U$ is an elementary abelian $2$-subgroup of $R$ and $x^{-1}=xr$, for every $x\in R\setminus U$. In other words, all elements in $U$ square to $1$ and all elements in $R\setminus U$ square to $r$. Let $\bar{R}:=R/\langle r\rangle$ and let us use the ``bar'' notation for the subgroups and for the elements of $\bar{R}$. Consider the function
$$( \cdot,\cdot):\bar{R}\times\bar{R}\to \langle r\rangle$$
defined by $( x\langle r\rangle, y\langle r\rangle)=x^{-1}y^{-1}xy$, for every $x,y\in R$. Similarly, consider the function
$$q:\bar{R}\to\langle r\rangle$$
defined by $q(x\langle r\rangle)=x^2$. It is not hard to see that, regarding $\bar{R}$ as a vector space over the field with $2$ elements, $(\cdot,\cdot)$ is a bilinear form and $q$ is a quadratic form polarizing to $(\cdot,\cdot)$, that is, $$q(\bar{x}\bar{y})q(\bar{x})q(\bar{y})=(\bar{x},\bar{y}),$$
for every $\bar{x},\bar{y}\in \bar{R}$.
Using this terminology, we have that each element of $\bar{U}$ is singular and each element of $\bar{R}\setminus\bar{U}$ is non-singular. From the classification of the quadratic forms over finite fields, we have $|\bar{R}:\bar{U}|\in \{2,4\}$. When $|\bar{R}:\bar{U}|=2$, we deduce that
$R$ is an abelian group isomorphic to the direct product $C_4\times C_2^\ell$, for some $\ell\ge0$. In particular, part~\eqref{eq:diff22} holds. When $|\bar{R}:\bar{U}|=4$, we deduce that
$R\cong Q_8\times C_2^\ell$, for some $\ell\ge0$. In particular, $R$ is generalized dicyclic and part~\eqref{eq:diff2} holds. For the rest of our argument, we may suppose that
$\tau\ne\iota\ne1$.
The paragraph above can be summarized by saying that $T=\langle\iota,\tau\rangle$ has order $4$ and hence, from~\eqref{anuw},~\eqref{ocl} becomes
\begin{align}\label{ocl1}
\kappa&=
\frac{1}{4}
\left(
|\mathrm{Fix}_R(1)|+
|\mathrm{Fix}_R(\iota)|+
|\mathrm{Fix}_R(\tau)|+
|\mathrm{Fix}_R(\tau\iota)|
\right)\\\nonumber
&=\frac{1}{4}\left(
|R|+|{\bf I}(R)|+|U|+|{\bf I}(U)|+|\{x\in R\setminus U\mid x^2=r\}|\right)\\\nonumber
&\le
\frac{1}{4}\left(
|R|+|{\bf I}(R)|+|U|+|{\bf I}(R)|+|\{x\in R\setminus U\mid x^2=r\}|\right)\\\nonumber
&=\frac{|R|+|{\bf I}(R)|}{2}-\left(
\frac{|R|}{4}-\frac{|U|}{4}-\frac{|\{x\in R\setminus U\mid x^2=r\}|}{4}
\right)\\\nonumber
&={\bf c}(R)-\left(
\frac{|R|}{4}-\frac{|U|}{4}-\frac{|\{x\in R\setminus U\mid x^2=r\}|}{4}
\right).\nonumber
\end{align}
Set $\mathcal{S}:=\{x\in R\setminus U\mid x^2=r\}$.
If $\mathcal{S}=\emptyset$, then the proof follows immediately from~\eqref{ocl1}, indeed, part~\eqref{eq:diff1} holds true. Therefore, for the rest of the proof we suppose $$\mathcal{S}\ne\emptyset.$$
To conclude we divide the proof in various cases.
Suppose first that $|R:U|=2$. Let $x\in \mathcal{S}$ and observe that $R=U\cup Ux$. Now, a computation yields
$$\mathcal{S}=\{ux\mid u\in U, u^x=u^{-1}\}.$$
When $\mathcal{S}=Ux$, the action of $x$ on $U$ by conjugation is an automorphism of $U$ inverting each element of $U$. Therefore $U$ is abelian and $R$ is generalized dicyclic. Hence part~\eqref{eq:diff2} holds. When $\mathcal{S}\subsetneq Ux$, the result of Liebeck and MacHale~\cite{LieMac} shows that the automorphism $x$ can invert at most $3/4$ of the elements of $U$ and hence $|\mathcal{S}|\le 3|U|/4=3|R|/8$. Now,~\eqref{ocl1} gives $\kappa\le {\bf c}(R)-|R|/32$; hence part~\eqref{eq:diff1} holds and the proof follows. Therefore, for the rest of the proof we may suppose
\begin{equation}\label{bound}|R:U|\ge3.
\end{equation}
When $|\mathcal{S}|\le 3|R|/4-|U|/2$, from~\eqref{ocl1} and~\eqref{bound}, we deduce
\begin{align*}
\kappa&\le {\bf c}(R)-\left(
\frac{|R|}{4}-\frac{|U|}{4}-\frac{3|R|}{16}+\frac{|U|}{8}\right)\\
&= {\bf c}(R)-\left(
\frac{|R|}{16}-\frac{|U|}{8}\right)\\
&\le{\bf c}(R)-\left(
\frac{|R|}{16}-\frac{|R|}{24}\right)={\bf c}(R)-
\frac{|R|}{48}
\end{align*} and the proof follows. Therefore, for the rest of the proof, we suppose $$|\mathcal{S}|> 3|R|/4-|U|/2.$$
Let $u$ be an arbitrary element of $U$. Then $u\mathcal{S}\subseteq R\setminus U$ and hence $\mathcal{S}\cup u\mathcal{S}\subseteq R\setminus U$. Therefore
\begin{align}\label{ocl2}
|\mathcal{S}\cap u\mathcal{S}|&=
|\mathcal{S}|+|u\mathcal{S}|-|\mathcal{S}\cup u\mathcal{S}|=2|\mathcal{S}|-|\mathcal{S}\cup u\mathcal{S}|\\\nonumber
&\ge 2|\mathcal{S}|-(|R|-|U|)>\frac{3|R|}{2}-|U|-(|R|-|U|)=\frac{|R|}{2}.\nonumber
\end{align}
Now, let $ux\in\mathcal{S}\cap u\mathcal{S}$. Then $x\in\mathcal{S}$ and hence
$$r=(ux)^2=uxux=uu^xx^2=uu^xr.$$
Therefore $u^x=u^{-1}$. Now, repeating the argument above with $y\in \mathcal{S}\cap u\mathcal{S}$, we deduce $u^y=u^{-1}$ and hence $xy^{-1}\in {\bf C}_R(u)$. Since we have $|\mathcal{S}\cap u\mathcal{S}|$ choices for $y$,~\eqref{ocl2} implies $|{\bf C}_R(u)|>|R|/2$ and hence $R={\bf C}_R(u)$. Since $u$ is an arbitrary element of $U$, we deduce that $U$ is a central subgroup of $R$.
Since $u^x=u^{-1}$, for every $u\in U$ and for every $ux\in \mathcal{S}\cap u\mathcal{S}$, and since $U$ is contained in the center of $R$, we deduce that $U$ has exponent $2$. Since $U$ is a central subgroup of $R$ of exponent $2$, we now have an easier description of $\mathcal{S}$, namely
$$\mathcal{S}=\{x\in R\mid x^2=r\}.$$
Now that we know that $U$ has exponent $2$, we consider the quotient group $\bar{R}:=R/\langle r\rangle$. Observe that each element of $\bar U$ is an involution. Assume that $\bar{R}$ is not an elementary abelian $2$-group. Then, the theorem of Miller~\cite{Miller} yields $|{\bf I}(\bar{R})|\le 3|\bar{R}|/4$. In particular, the number of involutions in $\bar{R}\setminus \bar{U}$ is at most $3|\bar{R}|/4-|\bar{U}|$. Since each element in $\bar{\mathcal{S}}$ is an involution and since $\bar{\mathcal{S}}\subseteq \bar{R}\setminus \bar{U}$, we deduce $|\mathcal{S}|\le 3|R|/4-|U|$. Using this inequality in~\eqref{ocl1}, we get
$$\kappa\le {\bf c}(R)-\frac{|R|}{16},$$
part~\eqref{eq:diff1} holds
and the proof follows in this case. It remains to consider the case that $\bar{R}$ is an elementary abelian $2$-group.
For this remaining case, we consider the bilinear form
$$( \cdot,\cdot):\bar{R}\times\bar{R}\to \langle r\rangle$$
defined by $( x\langle r\rangle, y\langle r\rangle)=x^{-1}y^{-1}xy$, for every $x,y\in R$, and its quadratic form
$$q:\bar{R}\to\langle r\rangle$$
defined by $q(x\langle r\rangle)=x^2$. Again, we use the classification of the quadratic forms over finite fields. Using this terminology, $\bar{U}$ is totally isotropic and contained in the kernel of the bilinear form $(\cdot,\cdot)$, and the elements of $\bar{\mathcal{S}}$ are non-singular. Using this information we obtain that $R$ is isomorphic to one of the following groups
\begin{itemize}
\item $C_4\times C_2^\ell$, for some $\ell\ge 0$,
\item $\underbrace{D_8\circ D_8\circ \cdots \circ D_8}_{t\textrm{ times}}\times C_2^\ell$, for some $\ell\ge 0$ and $t\ge 1$,
\item $Q_8\circ\underbrace{D_8\circ D_8\circ \cdots \circ D_8}_{(t-1)\textrm{ times}}\times C_2^\ell$, for some $\ell\ge 0$ and some $t\ge 1$,
\item $C_4\circ \underbrace{D_8\circ D_8\circ \cdots \circ D_8}_{t\textrm{ times}}\times C_2^\ell$, for some $\ell\ge 0$ and some $t\ge 1$.
\end{itemize}
In the first case, an explicit computation gives $|\mathcal{S}|=|R|/2$. Hence~\eqref{ocl1} gives
\begin{align*}
\kappa&\le{\bf c}(R)-\left(\frac{|R|}{4}-\frac{|U|}{4}-\frac{|R|}{8}\right)=
{\bf c}(R)-\left(\frac{|R|}{8}-\frac{|U|}{4}\right)\\
&\le{\bf c}(R)-\left(\frac{|R|}{8}-\frac{|R|}{16}\right)
={\bf c}(R)-\frac{|R|}{16}
\end{align*}
and part~\eqref{eq:diff1} holds. In the second case, an explicit computation gives $|\mathcal{S}|=(2^t-1)|R|/2^{t+1}\le |R|/2$. Therefore we may argue as in the previous case and we obtain that part~\eqref{eq:diff1} holds. In the third case, an explicit computation gives $|\mathcal{S}|=(2^t+1)|R|/2^{t+1}$. When $t=1$, $R\cong Q_8\times C_2^\ell$ is generalized dicyclic and hence part~\eqref{eq:diff2} holds. When $t\ge 2$, we have $|\mathcal{S}|\le 5|R|/8$ and hence~\eqref{ocl1} gives
\begin{align*}
\kappa&\le{\bf c}(R)-\left(\frac{|R|}{4}-\frac{|U|}{4}-\frac{5|R|}{32}\right)=
{\bf c}(R)-\left(\frac{3|R|}{32}-\frac{|U|}{4}\right)\\
&\le{\bf c}(R)-\left(\frac{3|R|}{32}-\frac{|R|}{16}\right)
={\bf c}(R)-\frac{|R|}{32}.
\end{align*}
Thus, we obtain that part~\eqref{eq:diff1} holds. In the fourth (and last) case, an explicit computation gives $|\mathcal{S}|=|R|/2$. Therefore we may argue as in the first case and we obtain that part~\eqref{eq:diff1} holds.
\end{proof}
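The orbit-counting step in the proof of Lemma~\ref{lemma:diff} is easy to test on a toy example (illustration only; the specific group below is our choice, not one occurring in the proof): take $R=\mathbb{Z}_8$ written additively, $r=4$ and $U=\{0,2,4,6\}$.
\begin{verbatim}
from itertools import combinations

n, r = 8, 4
R = list(range(n))
U = {0, 2, 4, 6}                 # a proper subgroup containing r
tau = {x: x if x in U else (x + r) % n for x in R}
iota = {x: (-x) % n for x in R}
# T = <iota, tau> is elementary abelian of order 4; its orbits on R:
orbits = {frozenset({x, iota[x], tau[x], iota[tau[x]]}) for x in R}
kappa = len(orbits)
# the inverse-closed tau-invariant subsets are the unions of T-orbits,
# so their number is 2^kappa; brute-force check:
count = sum(1 for k in range(n + 1) for S in combinations(R, k)
            if all(iota[x] in S and tau[x] in S for x in S))
print(kappa, count, 2 ** kappa)  # 4 16 16
\end{verbatim}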
\section{Proof of Theorems~\ref{thrm:main1} and~\ref{thrm:main2}}
In this section, using Section~\ref{sec:lemmata} we prove both Theorems~\ref{thrm:main1} and~\ref{thrm:main2}. Thus, let $G$ be a finite transitive permutation group on $\Omega$ with $${\bf I}_\Omega(G)\ge \frac{5}{6}.$$
If ${\bf I}_\Omega(G)=1$, then there is nothing to prove and hence we may suppose that ${\bf I}_\Omega(G)<1$. Let $\alpha\in \Omega$. From Lemma~\ref{lemma:-4}, we have
$$\Omega=\Omega_{\alpha,1}\cup\Omega_{\alpha,2}\cup\Omega_{\alpha,4}.$$
Since ${\bf I}_\Omega(G)<1$, $\Omega_{\alpha,4}\ne\emptyset$. Let $\beta\in \Omega_{\alpha,4}$. From Lemma~\ref{lemma:-3},
$$V:=G_\alpha\times G_\beta$$
is an elementary abelian normal $2$-subgroup of $G$ of order $16$. Let $e_1,e_2,e_3,e_4$ be a basis of $V$, regarded as a vector space over the field with $2$ elements, and with $G_\alpha=\langle e_1,e_2\rangle$. Let $H:=G/{\bf C}_G(V)$ and $W:=G_\alpha$. Clearly, $H\le \mathrm{GL}(V)\cong\mathrm{GL}_4(2)$. Now, consider the action of $H$ on the $2$-dimensional subspaces of $V$ and consider $O:=\{W^h\mid h\in H\}$, the $H$-orbit containing $W$. Clearly,
$$\frac{|\Omega_{\alpha,1}\cup\Omega_{\alpha,2}|}{|\Omega|}=\frac{|\{U\in O\mid |W:W\cap U|\le 2\}|}{|O|}.$$
Observe that the right hand side of this equality can be easily computed with the help of a computer. With the computer algebra system~\texttt{magma}~\cite{magma}, we have computed all the subgroups of $\mathrm{GL}_4(2)$. Then, we have selected only the subgroups $H$ with the property that
$$V=\langle W^h\mid h\in H\rangle\hbox{ and }\bigcap_{h\in H}W^h=0.$$
(This selection is due to the fact that $V=\langle G_\alpha^g\mid g\in G\rangle$ and that $G_\alpha$ is core-free in $G$.)
Then, for each such subgroup $H$, we have computed the orbit $O=W^H$ and we have computed the ratio $\frac{|\{U\in O\mid |W:W\cap U|\le 2\}|}{|O|}$. We have checked that in all cases this ratio is at most $5/6$. In particular, Theorem~\ref{thrm:main1} is proved. Moreover, we have checked that this ratio is $5/6$ if and only if $H$ is given in the statement of Theorem~\ref{thrm:main2}. Since this construction can be reversed, we also obtain the converse implication for Theorem~\ref{thrm:main2}.
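For the reader who prefers not to rerun the full \texttt{magma} search, the ratio for a single candidate $H$ can be recomputed directly; the following sketch (our own illustration, not the \texttt{magma} code we used) does this for the first group displayed in Theorem~\ref{thrm:main2}, letting the matrices act on column vectors (depending on the convention, the transposed generators may be needed):
\begin{verbatim}
def act(M, v):      # matrix times column vector over GF(2)
    return tuple(sum(M[i][j] * v[j] for j in range(4)) % 2 for i in range(4))

def mul(M, N):      # matrix product over GF(2)
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(4)) % 2
                       for j in range(4)) for i in range(4))

gens = [((0,0,0,1), (1,1,0,0), (0,0,1,0), (1,0,0,1)),
        ((1,1,1,1), (0,0,1,0), (0,1,0,0), (0,0,0,1))]
ident = tuple(tuple(int(i == j) for j in range(4)) for i in range(4))
H, frontier = {ident}, {ident}
while frontier:                        # closure: H = <gens>
    frontier = {mul(g, h) for h in frontier for g in gens} - H
    H |= frontier

W = frozenset({(0,0,0,0), (1,0,0,0), (0,1,0,0), (1,1,0,0)})  # <e_1, e_2>
orbit = {frozenset(act(h, w) for w in W) for h in H}
good = sum(1 for Usub in orbit if len(W & Usub) >= 2)  # |W : W cap U| <= 2
print(len(H), len(orbit), good / len(orbit))
\end{verbatim}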
\section{Proof of Theorem~\ref{thrm:main3}}
Let $G$ be a finite transitive group properly containing a regular subgroup $R$. Since $R$ acts regularly, we may identify the domain of $G$ with $R$. Now, the number of Cayley graphs $\mathop{\Gamma}(R,S)$ on $R$ with $G\le \mathrm{Aut}(\mathop{\Gamma}(R,S))$ is the number of inverse-closed subsets $S$ of $R$ left invariant by $G_1$, where $G_1$ is the stabilizer of the point $1\in R$ in $G$. In particular, to prove Theorem~\ref{thrm:main3}, we need to estimate the number of inverse-closed subsets of $R$ that are union of $G_1$-orbits.
Suppose first that $${\bf I}_R(G)=1.$$ Since $R$ is properly contained in $G$, from the theorem of Bergman and Lenstra mentioned in Section~\ref{intro}, we have two cases to consider
\begin{itemize}
\item $|G_1|=2$,
\item $G$ contains an elementary abelian normal $2$-subgroup $N$ with $|N:G_1|=2$.
\end{itemize}
Assume first that $|G_1|=2$. Let $\varphi\in G_1\setminus\{1\}$. From the Frattini argument, $G=RG_1$ and hence $|G:R|=2$. This gives $R\unlhd G$ and hence $\varphi$ acts by conjugation on $R$ as a group automorphism. Now, from~\cite[Lemma~$2.7$]{Bxu} or~\cite[Theorem~1.13]{MMS}, we have that
\begin{enumerate}[(a)]
\item the number of $\varphi$-invariant inverse-closed subsets of $R$ is at most $2^{{\bf c}(R)-\frac{|R|}{96}}$, or
\item $R$ is abelian of exponent greater than $2$ and $\varphi$ is the automorphism of $R$ mapping each element to its inverse, or
\item $R$ is generalized dicyclic and $\varphi$ is an automorphism of $R$ with $x^\varphi\in \{x,x^{-1}\}$, for every $x\in R$.
\end{enumerate}
In particular, the proof of Theorem~\ref{thrm:main3} follows in this case.
Assume next that $G$ contains an elementary abelian normal $2$-subgroup $N$ with $|N:G_1|=2$. Since $R$ acts transitively, $G=RN$. Moreover, since $R$ acts regularly, $G=RG_1$ and $R\cap G_1=1$. Thus $|R\cap N|=|N|/|G_1|=2$. Let $r$ be a generator of $R\cap N$. Since $\langle r\rangle=R\cap N\unlhd R$, $r$ is a central involution of $R$. Let $U:={\bf N}_R(G_1)$. Under the identification of the domain of $G$ with $R$, the set ${\bf N}_R(G_1)$ is a block of imprimitivity for $G$; hence $U={\bf N}_R(G_1)$ is also a block of imprimitivity for the regular action of $R$ and so $U$ is a subgroup of $R$. Observe that $r\in U$: as $|N:G_1|=2$, $N$ normalizes $G_1$ and hence $r\in R\cap N\le{\bf N}_R(G_1)=U$. Since $R$ is properly contained in $G$, we have $G_1\ne1$; moreover, if $U$ were equal to $R$, then $G_1$ would be normal in $RG_1=G$, against the fact that the non-trivial group $G_1$ is core-free. Therefore $U$ is a proper subgroup of $R$. Now, $G_1$ fixes pointwise $U$ and, for every $x\in R\setminus U$, we have
$$x^{G_1}=\{x,xr\}.$$
Let $\tau:R\to R$ be the permutation defined by
$$
x\mapsto x^\tau:=
\begin{cases}
x&\textrm{when }x\in U,\\
xr&\textrm{when }x\in R\setminus U.
\end{cases}
$$
We have shown that $S\subseteq R$ is $G_1$-invariant if and only if $S$ is $\langle\tau\rangle$-invariant. Therefore, the proof of this case follows from Lemma~\ref{lemma:diff}.
To conclude the proof of Theorem~\ref{thrm:main3}, it remains to consider the case that $${\bf I}_R(G)\ne 1.$$ From Theorem~\ref{thrm:main1}, we have ${\bf I}_R(G)\le 5/6$. Recall that ${\bf I}(R)=\{x\in R\mid x^2=1\}.$ We define
\begin{align*}
&a:=|\Omega_{R,1}\cap {\bf I}(R)|,&&b:=|\Omega_{R,1}\cap (R\setminus {\bf I}(R))|,\\
&c:=|\Omega_{R,2}\cap {\bf I}(R)|,&&d:=|\Omega_{R,2}\cap (R\setminus {\bf I}(R))|,\\
&e:=|(R\setminus (\Omega_{R,1}\cup \Omega_{R,2}))\cap {\bf I}(R)|,&&f:=|(R\setminus (\Omega_{R,1}\cup \Omega_{R,2}))\cap (R\setminus {\bf I}(R))|.
\end{align*}
As ${\bf I}_R(G)\le 5/6$, we deduce
\begin{equation}\label{final?}
\frac{|R|}{6}\le|R\setminus(\Omega_{R,1}\cup\Omega_{R,2})|=e+f.
\end{equation}
Let $\iota:R\to R$ be the permutation defined by $x^\iota:=x^{-1}$, for every $x\in R$, and let $T:=\langle \iota,G_1\rangle$. Now, the number of $G_1$-invariant inverse-closed subsets of $R$ is exactly the number of $T$-invariant subsets of $R$. Moreover, the number of $T$-invariant subsets of $R$ is $2^\kappa$, where $\kappa$ is the number of orbits of $T$ on $R$.
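Since a subset of $R$ is $T$-invariant precisely when it is a union of $T$-orbits, the identity ``number of $T$-invariant subsets $=2^{\kappa}$'' can be spot-checked by brute force. The following Python fragment is a toy illustration (not part of the proof) for the hypothetical case $R=\mathbb{Z}_8$ with $T$ generated by the inversion $\iota$ alone, where $\kappa={\bf c}(R)=5$.
\begin{verbatim}
# Toy check: the number of inverse-closed subsets of Z_8 equals
# 2^(number of orbits of inversion), i.e. 2^c(R) = 32.
R = range(8)
inv = {x: (-x) % 8 for x in R}
orbits = {frozenset({x, inv[x]}) for x in R}   # {0},{4},{1,7},{2,6},{3,5}
count = 0
for r in range(1 << 8):                        # subsets of Z_8 as bitmasks
    S = {x for x in R if (r >> x) & 1}
    if all(inv[x] in S for x in S):            # is S inverse-closed?
        count += 1
print(count, 2 ** len(orbits))                 # both print 32
\end{verbatim}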
The group $T$ has
\begin{itemize}
\item orbits of cardinality $1$ on $\Omega_{R,1}\cap {\bf I}(R)$,
\item orbits of cardinality $2$ on $\Omega_{R,1}\cap (R\setminus {\bf I}(R))$,
\item orbits of cardinality $2$ on $\Omega_{R,2}\cap {\bf I}(R)$,
\item orbits of cardinality at least $2$ on $\Omega_{R,2}\cap (R\setminus {\bf I}(R))$,
\item orbits of cardinality at least $3$ on $(R\setminus(\Omega_{R,1}\cup\Omega_{R,2}))\cap {\bf I}(R)$,
\item orbits of cardinality at least $4$ on $(R\setminus(\Omega_{R,1}\cup\Omega_{R,2}))\cap (R\setminus {\bf I}(R))$.
\end{itemize}
All of these assertions are trivial except, possibly, the last one. Indeed, if $x\in (R\setminus(\Omega_{R,1}\cup\Omega_{R,2}))\cap (R\setminus {\bf I}(R))$, then $x$ is not an involution and the $G_1$-orbit $x^{G_1}$ has cardinality at least $3$. As
$$(x^{G_1})^{-1}=(x^{-1})^{G_1},$$
we deduce that the orbit $x^T$ has even cardinality and hence $|x^T|$ is at least $4$.
Summing up, we have
\begin{align*}
\kappa&\le a+\frac{b}{2}+\frac{c}{2}+\frac{d}{2}+\frac{e}{3}+\frac{f}{4}=a+c+e+\frac{b}{2}+\frac{d}{2}+\frac{f}{2}-\left(\frac{c}{2}+\frac{2e}{3}+\frac{f}{4}\right)\\
&=\frac{|R|+|{\bf I}(R)|}{2}-\left(\frac{c}{2}+\frac{2e}{3}+\frac{f}{4}\right)
={\bf c}(R)-\left(\frac{c}{2}+\frac{2e}{3}+\frac{f}{4}\right)\\
&\le{\bf c}(R)-\left(\frac{2e}{3}+\frac{f}{4}\right)\le {\bf c}(R)-\left(\frac{e}{4}+\frac{f}{4}\right)\\
&\le{\bf c}(R)-\frac{|R|}{24},
\end{align*}
where in the last inequality we have used~\eqref{final?}. This concludes the proof of Theorem~\ref{thrm:main3}.
\thebibliography{10}
\bibitem{babai11}L.~Babai, Finite digraphs with given regular automorphism groups, \textit{Periodica Mathematica
Hungarica} \textbf{11} (1980), 257--270.
\bibitem{BaGo}L.~Babai, C.~D.~Godsil, On the automorphism groups of almost all Cayley graphs, \textit{European J. Combin.} \textbf{3} (1982), 9--15.
\bibitem{BL}G.~M.~Bergman, H.~W.~Lenstra Jr., Subgroups close to normal subgroups, \textit{J. Algebra} \textbf{127} (1989), 80--97.
\bibitem{magma} W.~Bosma, J.~Cannon, C.~Playoust,
The Magma algebra system. I. The user language,
\textit{J. Symbolic Comput.} \textbf{24} (3-4) (1997), 235--265.
\bibitem{dixonmortimer}J. D. Dixon, B. Mortimer, \textit{Permutation groups}, Graduate Texts in Mathematics, Springer-Verlag, New York, 1996.
\bibitem{fitzpatrick}P.~Fitzpatrick, Groups in which an automorphism inverts precisely half the elements, \textit{Proc. Roy. Irish Acad. Sect. A} \textbf{86} (1986) 81--89.
\bibitem{Go2}C.~D.~Godsil, On the full automorphism group of a graph, \textit{Combinatorica} \textbf{1} (1981), 243--256.
\bibitem{God}C.~D. Godsil, GRRs for nonsolvable groups, \textit{Algebraic Methods in Graph Theory,} (Szeged, 1978), 221--239, \textit{Colloq. Math. Soc. J\'{a}nos Bolyai} \textbf{25}, North-Holland, Amsterdam-New York, 1981.
\bibitem{hegarty}P.~Hegarty, D.~MacHale, Two-groups in which an automorphism inverts precisely half the elements, \textit{Bull. London Math. Soc.} \textbf{30} (1998), 129--135.
\bibitem{Het} D. Hetzel, \"{U}ber regul\"{a}re graphische Darstellung von aufl\"{o}sbaren Gruppen. Technische Universit\"{a}t, Berlin, 1976.
\bibitem{Im1} W. Imrich, Graphen mit transitiver Automorphismengruppen, \textit{Monatsh. Math.} \textbf{73} (1969), 341--347.
\bibitem{Im2} W. Imrich, Graphs with transitive abelian automorphism group, \textit{Combinat. Theory} (Proc. Colloq. Balatonf\"{u}red, 1969), Budapest, 1970, 651--656.
\bibitem{Im3} W. Imrich, On graphs with regular groups, \textit{J. Combinatorial Theory Ser. B.} \textbf{19} (1975), 174--180.
\bibitem{isaacs}I.~M.~Isaacs,
Subgroups close to all of their conjugates, \textit{Arch. Math. (Basel)} \textbf{55} (1990), 1--4.
\bibitem{LieMac}H.~Liebeck, D.~MacHale, Groups with Automorphisms Inverting most Elements, \textit{Math. Z.} \textbf{124} (1972), 51--63.
\bibitem{Miller}G.~A.~Miller, Groups containing the largest possible number of operators of order two, \textit{Amer.
Math. Monthly }\textbf{12} (1905), 149--151.
\bibitem{MMS}J.~Morris, M.~Moscatiello, P.~Spiga, Asymptotic enumeration of Cayley graphs, \textit{Ann. Mat. Pura Appl.}, to appear.
\bibitem{MSV}J. Morris, P. Spiga, G. Verret, Automorphisms of Cayley graphs on generalised dicyclic groups, \textit{European J. Combin. }\textbf{43} (2015), 68--81.
\bibitem{MSMS}J.~Morris, P.~Spiga, Asymptotic enumeration of Cayley digraphs, \textit{Israel J. Math.} \textbf{242} (2021), 401--459.
\bibitem{MZ}M.~Muzychuk, P.~H.~Zieschang, On association schemes all elements of which have valency $1$ or $2$, \textit{Discrete Math. }\textbf{308} (2008), 3097--3103.
\bibitem{NW1}L. A. Nowitz, M. Watkins, Graphical regular representations of direct product of groups, \textit{Monatsh. Math. }\textbf{76} (1972), 168--171.
\bibitem{NW2}L. A. Nowitz, M. Watkins, Graphical regular representations of non-abelian groups, II, \textit{Canad. J. Math. }\textbf{24} (1972), 1009--1018.
\bibitem{NW3}L. A. Nowitz, M. Watkins, Graphical regular representations of non-abelian groups, I, \textit{Canad. J. Math. }\textbf{24} (1972), 993--1008.
\bibitem{Bxu}P.~Spiga, On the equivalence between a conjecture of Babai-Godsil and a conjecture of Xu concerning the enumeration
of Cayley graphs, \textit{The Art of Discrete and Applied Mathematics} \textbf{4} (2021), \href{https://doi.org/10.26493/2590-9770.1338.0b2s}{https://doi.org/10.26493/2590-9770.1338.0b2s}.
\bibitem{Wall}C.~T.~C.~Wall, On groups consisting mostly of involutions, \textit{Proc. Cambridge Philos. Soc.} \textbf{67} (1970),
251--262.
\bibitem{Wat} M. E. Watkins, On the action of non-abelian groups on graphs, \textit{J. Combin. Theory} \textbf{11} (1971), 95--104.
\end{document}
\section{Introduction}
The scattering between kinks has become a very popular research topic in recent decades because of its astonishing properties \cite{Manton2004, Shnir2018, Kevrekidis2019}. The study of the collisions between kinks and antikinks in the $\phi^4$ model was initially addressed in the seminal references \cite{Sugiyama1979, Campbell1983, Anninos1991}. As is well known, only two different scattering channels arise: \textit{bion formation} (where kink and antikink collide and bounce back over and over, emitting radiation in every impact) and \textit{kink reflection} (where kink and antikink collide and bounce back a finite number of times before moving away). These two channels are predominant, respectively, for low and large values of the initial collision velocity. From these studies emerges the fascinating property that the two channels are infinitely interlaced in the transition between these regimes, giving rise to a fractal structure embedded in the final versus initial velocity diagram. The kink reflection windows included in this region involve scattering processes where kink and antikink collide and bounce back a finite number of times before finally escaping. This kink dynamics could have important consequences for physical applications where the presence of these topological defects allows the understanding of certain non-linear phenomena. Kinks (and topological defects in general) have been employed in a wide variety of physical disciplines, such as Condensed Matter \cite{Eschenfelder1981,Jona1993,Strukov}, Cosmology \cite{Vilenkin1994,Vachaspati2006}, Optics \cite{Mollenauer2006,Schneider2004,Agrawall1995}, molecular systems \cite{Davydov1985,Bazeia1999}, Biochemistry \cite{Yakushevich2004}, etc.
The appearance of a fractal structure in the velocity diagram describing the kink scattering in the $\phi^4$ model relies on the existence of an internal vibrational mode (the shape mode) associated with the kink solutions. The presence of this massive mode together with the zero mode triggers the \textit{resonant energy transfer mechanism}, which allows the redistribution of the energy between the kinetic and vibrational energy pools when the kinks collide. In a usual scattering event the kink and the antikink approach each other and collide. A certain amount of kinetic energy is transferred to the shape mode, such that kink and antikink become wobblers (kinks whose shape modes are excited), which try to escape from each other. If the kinetic energy of each wobbler is not large enough, both of them end up approaching and colliding again. This process can continue indefinitely or finish after a finite number of collisions. In the latter case, enough vibrational energy is returned to the zero mode as kinetic energy, which allows the wobblers to escape. This mechanism and other related phenomena have been thoroughly analyzed in a large variety of models \cite{Shiefman1979, Peyrard1983, Goodman2005, Gani1999, Malomed1989, Gani2018, Gani2019,Simas2016,Gomes2018,Bazeia2017b,Bazeia2017a, Bazeia2019, Adam2019, Romanczukiewicz2018, Adam2020, Mohammadi2020, Yan2020,Romanczukiewicz2017, Weigel2014, Gani2014, Bazeia2018b, Lima2019, Marjaheh2017, Belendryasova2019, Zhong2020, Bazeia2020c, Christov2019, Christov2019b, Christov2020,Halavanau2012, Romanczukiewicz2008, Alonso2018, Alonso2018b,Alonso2017, Alonso2019, Alonso2020, Alonso2021, Ferreira2019, Goodman2002, Goodman2004,Malomed1985,Malomed1992, Saadatmand2015,Saadatmand2018, Manton1997,Adam2018,Adam2019b,Adam2020b,Dorey2011,Dorey2018,Mohammadi2021b,Campos2020,Blanco-Pillado2021}, revealing the enormous complexity of these events and the difficulty in explaining this phenomenon analytically. The \textit{collective coordinate approach} has been used to accomplish this task for decades, reducing the field theory to a finite-dimensional mechanical system, where the separation between the kinks and the wobbling amplitudes associated with the shape modes are promoted to dynamical variables. This method has been progressively improved, see for example \cite{Sugiyama1979,Takyi2016, Kevrekidis2019,Pereira2020} and references therein; recently, a reliable description of the kink scattering in the $\phi^4$ model has been achieved in the reflection-symmetric case \cite{Manton2021} by removing a coordinate singularity in the moduli space and choosing appropriate initial conditions.
As previously mentioned, after the first collision the initially unexcited kink and antikink become wobblers, so in an $n$-bounce scattering process the subsequent $n-1$ collisions can be understood as scattering processes between two wobblers. This observation justifies an intrinsic interest in the collision between these objects. The evolution of a single wobbler has been studied by different authors employing perturbation expansion schemes, see \cite{Barashenkov2009,Barashenkov2018,Segur1983} and references therein. The scattering between wobblers in the $\phi^4$ model has been discussed in \cite{Alonso2021b} for a space reflection symmetric scenario. This situation is relevant in the original kink scattering problem, where the mirror symmetry is preserved. The goal of these investigations is to bring insight into the resonant energy transfer mechanism by means of numerical analysis of the scattering solutions derived from the corresponding Klein-Gordon partial differential equations. In this context it is worthwhile mentioning that the scattering of wobblers in the double sine-Gordon model has been studied by Campos and Mohammadi \cite{Campos2021}.
In this paper we shall continue with this line of research by investigating the asymmetric scattering between wobblers in two different scenarios, which are considered representative of this context. The scattering processes addressed in previous works involve wobblers which evolve with the same phase. This implies that a constructive interference between the shape modes associated to each wobbler takes place at the collision. In this work we propose the analysis of the scattering between wobblers with opposite phases, such that now a destructive interference between the vibrational modes occurs at the impact. The second scenario is described by the collision between a wobbler and an unexcited kink. This allows us to monitor the transfer of the vibrational energy from the wobbler to the kink. We will show that the fractal structures ruled by the resonance phenomenon in these two cases display very different patterns.
The organization of this paper is as follows: in Section \ref{sec:2} the theoretical background of the $\phi^4$ model, together with the analytical description of kinks and wobblers, is introduced. The kink-antikink scattering is also discussed, which allows us to describe the numerical setting employed to study the problem. Section~\ref{sec:3} is dedicated to the study of the scattering between wobblers with opposite phases, whereas the collision between a wobbler and an unexcited kink is addressed in Section \ref{sec:4}. Finally, some conclusions are drawn in Section~\ref{sec:5}.
\section{The $\phi^4$ model: kinks and wobblers}
\label{sec:2}
The dynamics of the $\phi^4$ model in (1+1) dimensions is governed by the action
\begin{equation}\label{action}
S=\int d^2 x \,\, {\cal{L}}(\partial_{\mu}\phi, \phi) \hspace{0.5cm},
\end{equation}
where the Lagrangian density ${\cal{L}}(\partial_{\mu}\phi, \phi)$ is of the form
\begin{equation}\label{lagrangiandensity}
{\cal{L}}(\partial_{\mu}\phi, \phi) = \frac{1}{2} \,\partial_\mu \phi \, \partial^\mu \phi - V(\phi) \hspace{0.5cm} \mbox{with} \hspace{0.5cm} V(\phi) = \frac{1}{2} (\phi^2 -1)^2 \hspace{0.5cm}.
\end{equation}
The use of dimensionless field and coordinates, as well as the Einstein summation convention, is assumed in expressions (\ref{action}) and (\ref{lagrangiandensity}). Here, the Minkowski metric $g_{\mu\nu}$ has been set as $g_{00}=-g_{11}= 1$ and $g_{01}=g_{10}=0$. Therefore, the non-linear Klein-Gordon partial differential equation
\begin{equation}
\frac{\partial^2 \phi}{\partial t^2} - \frac{\partial^2 \phi}{\partial x^2} = -2\phi(\phi^2-1)
\label{pde}
\end{equation}
characterizes the time-dependent solutions of this model. The energy-momentum conservation laws imply that the total energy and momentum
\begin{equation}
E[\phi] = \int dx \Big[ \frac{1}{2} \Big( \frac{\partial \phi}{\partial t} \Big)^2 + \frac{1}{2} \Big( \frac{\partial \phi}{\partial x} \Big)^2 + V(\phi) \Big] \hspace{0.5cm}, \hspace{0.5cm}
P[\phi] = - \int dx\, \frac{\partial \phi}{\partial t} \, \frac{\partial \phi}{\partial x} \hspace{0.5cm}, \label{invariants}
\end{equation}
are system invariants. The kinks/antikinks ($+/-$)
\begin{equation}
\phi_{\rm K}^{(\pm)}(t,x;x_0,v_0) = \pm \tanh \left[\frac{x-x_0-v_0 t}{\sqrt{1-v_0^2}}\right] \label{travelingkink}
\end{equation}
are travelling solutions of (\ref{pde}), whose energy density is localized around the kink center $x_C=x_0+v_0 t$ (the value where the field profile vanishes). The parameter $v_0$ can be interpreted as the kink velocity. As is well known, the solutions (\ref{travelingkink}) are topological defects because they asymptotically connect the two elements of the set of vacua ${\cal M}=\{-1,1\}$. These solutions have a normal mode of vibration. When this mode is excited, the width of these solutions (called \textit{wobbling kinks} or \textit{wobblers}) oscillates periodically with frequency $\omega=\sqrt{3}$. This fact has been checked numerically and proved analytically in the linear regime. The spectral problem
\[
{\cal H} \psi_{\omega^2}(x) = \omega^2 \psi_{\omega^2}(x)
\]
of the second order small fluctuation operator associated with the static kink/antikink,
\begin{equation}
{\cal H} = - \frac{d^2}{dx^2} + 4-6\,{\rm sech}^2 (x-x_0), \label{hessian}
\end{equation}
involves the shape mode
\[
\psi_{\omega^2=3}(x;x_0)= \, {\rm sinh}\, (x-x_0) \, {\rm sech}^2 (x-x_0)
\]
with eigenvalue $\omega^2=3$. The discrete spectrum of the operator (\ref{hessian}) is completed by the zero mode
\[
\psi_{\omega^2=0}(x;x_0)= \, {\rm sech}^2 (x-x_0) = \left. \frac{\partial \phi_K^{(+)}}{\partial x}\right|_{t=0,v_0=0} \hspace{0.5cm},
\]
whereas the continuous spectrum emerges at the threshold value $\omega^2=4$.
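These spectral statements can be verified numerically. The following Python sketch (with illustrative grid parameters and $x_0=0$) applies a second-order discretization of the operator (\ref{hessian}) to the shape mode and confirms that the corresponding eigenvalue is close to $3$.
\begin{verbatim}
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)       # illustrative grid, x_0 = 0
dx = x[1] - x[0]
psi = np.sinh(x) / np.cosh(x) ** 2       # shape mode
lap = np.zeros_like(psi)
lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx ** 2
H_psi = -lap + (4 - 6 / np.cosh(x) ** 2) * psi
mask = np.abs(psi) > 1e-3                # avoid the node of psi at x = 0
ratio = H_psi[mask] / psi[mask]
print(ratio.min(), ratio.max())          # both values close to 3
\end{verbatim}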
As a result of this linear analysis, the expression
\begin{equation}
\phi_{\rm W}^{(\pm)}(t,x;x_0,v_0,\omega,a,\delta) = \pm \tanh \left[ \frac{x-x_0 - v_0 t}{\sqrt{1-v_0^2}} \right] + a \sin(\omega t+\delta){\rm{sech}} \left[ \frac{x-x_0 - v_0 t}{\sqrt{1-v_0^2}} \right] \tanh\left[ \frac{x-x_0 - v_0 t}{\sqrt{1-v_0^2}} \right] \label{wobbler}
\end{equation}
can be considered a good approximation of a traveling wobbler in the linear regime $a\ll 1$. Note that $\phi_{\rm W}^{(-)}(t,x)$ describes a \textit{wobbling antikink} (or \textit{antiwobbler}).
The maximum deviation of the wobbler (\ref{wobbler}) from the kink (\ref{travelingkink}) takes place at the points
\begin{equation}
x_M^{(\pm)} = x_C \pm \sqrt{1-v_0^2} \,\,{\rm arccosh}\, \sqrt{2} \hspace{0.5cm} , \label{points}
\end{equation}
where the relation
\[
\left| \phi_{\rm W} (x_M^{(\pm)}) - \phi_{\rm K} (x_M^{(\pm)})\right| = \frac{1}{2} \, |a|
\]
holds. An effective strategy to measure the wobbling amplitude of a traveling wobbler in a numerical scheme is therefore to monitor the profile of these solutions at the points (\ref{points}). By using fourth-order perturbation theory in the expansion parameter $a$, it has been proved that $a$ depends on time, $a=a(t)$, and decays following the expression
\begin{equation}
|a(t)|^2 = \frac{|a(0)|^2}{1+\omega \,\xi_I\, |a(0)|^2 t}, \label{amplitude}
\end{equation}
where $\xi_I$ is a constant. However, when the initial wobbling amplitude $a(0)$ is small, the decay is very slow and becomes appreciable only after a long time $t\sim |a(0)|^{-2}$ \cite{Barashenkov2009,Barashenkov2018}.
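As an elementary illustration of this measurement strategy, the following Python check (with arbitrary illustrative parameter values, static case $v_0=0$, $x_0=0$) confirms that the deviation of the ansatz (\ref{wobbler}) from the kink at $x_M^{(+)}$ equals $\frac{1}{2}\, a\sin(\omega t+\delta)$, so that twice the maximal deviation recorded over one period yields $|a|$.
\begin{verbatim}
import numpy as np

a, omega, delta, t = 0.1, np.sqrt(3.0), 0.3, 1.2   # illustrative values
u = np.arccosh(np.sqrt(2.0))          # x_M^(+) - x_C for v_0 = 0, x_0 = 0
dev = a * np.sin(omega * t + delta) * np.tanh(u) / np.cosh(u)
print(dev, 0.5 * a * np.sin(omega * t + delta))    # identical values
\end{verbatim}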
The scattering between a kink and an antikink has been thoroughly analyzed in the physical and mathematical literature during the last decades. In this case, a kink and an antikink which are well separated are pushed together with initial collision velocity $v_0$. Taking into account the spatial reflection symmetry of the system, the kink can be located to the left of the antikink or vice versa. For times well before the impact, the previous scenario is characterized by the concatenation
\begin{equation}
\Phi_{KK}(t,x;x_0,v_0) = \phi_K^{(\pm)}(t,x;x_0,v_0) \cup \phi_K^{(\mp)}(t,x;-x_0,-v_0) \label{configuration01}
\end{equation}
for $x_0\gg 0$, where we have introduced the notation
\begin{equation}
\phi_K^{(\pm)}(t,x;x_0,v_0) \cup \phi_K^{(\mp)}(t,x;-x_0,-v_0)\equiv \left\{ \begin{array}{ll} \phi_K^{(\pm)}(t,x;x_0,v_0) & \mbox{if } x\leq 0, \\[0.2cm] \phi_K^{(\mp)}(t,x;-x_0,-v_0) & \mbox{if } x> 0 . \end{array} \right. \label{concadef}
\end{equation}
The initial separation distance between the kink and the antikink is equal to $2x_0$. The configuration (\ref{configuration01}) defines the initial conditions of the scattering problem. As is well known, there exist two different scattering channels in this case: (1) \textit{bion formation}, where kink and antikink end up colliding and bouncing back over and over, and (2) \textit{kink reflection}, where kink and antikink collide, bounce, and finally recede with respective final velocities $v_{f,L}$ and $v_{f,R}$ in the directions opposite to those in which they were initially traveling. These scattering regimes are predominant, respectively, for low and high values of the initial velocity $v_0$. In Figure \ref{fig:VelDiaAmp000} the two previously mentioned final velocities $v_{f,L}$ and $v_{f,R}$ are plotted as a function of the initial collision velocity $v_0$.
\begin{figure}[htb]
\centerline{\includegraphics[height=3.5cm]{veldiaamp000}}
\caption{\small Final versus initial velocity diagram for the kink-antikink scattering. The final velocity of the bion is assumed to be zero in this context. The color code is used to specify the number of bounces suffered by the kinks before escaping.} \label{fig:VelDiaAmp000}
\end{figure}
From the spatial reflection symmetry exhibited by the initial configuration (\ref{configuration01}) it is clear that $v_{f,L}=-v_{f,R}$ and that the velocity of a bion must be zero. Therefore, the velocity diagram in Figure \ref{fig:VelDiaAmp000} is symmetric with respect to the $v_0$-axis. In the next sections we shall address asymmetric scattering events where this symmetry is lost and $|v_{f,L}|\neq |v_{f,R}|$ in general. The fascinating property found in this scattering problem is that the transition between the two previously mentioned regimes is ruled by a fractal structure where the bion formation and the kink reflection regimes are infinitely interlaced. The kink reflection windows included in this initial velocity interval involve scattering processes where kink and antikink collide and bounce back a finite number of times, exchanging energy between the zero and shape modes, before finally moving away. These processes involve the so-called \textit{resonant energy transfer mechanism}.
For the previously mentioned $n$-bounce processes (with $n\geq 2$) it is clear that after the first impact the subsequent collisions correspond to scattering processes between wobblers because, in general, the collision between kinks causes the excitation of their shape modes. Taking into account the spatial reflection symmetry of the problem, the wobbling amplitudes and phases of the colliding wobblers are equal. Therefore, these events are characterized by an initial configuration of the form
\begin{equation}
\Phi_{WW}(t,x;x_0,v_0,\omega,a,\delta) =
\phi_W^{(\pm)}(t,x; x_0,v_0,\omega,a,\delta) \cup
\phi_W^{(\mp)}(t,x; -x_0,-v_0,\omega,a,\delta) \,. \label{configuration02}
\end{equation}
This scattering problem has been numerically studied in \cite{Alonso2021b}. By mirror symmetry, it can be assumed that the phases of the shape modes of the traveling wobblers are also the same at the impact time, so a constructive interference takes place in the collision. As a consequence, it was found that the fractal pattern enlarges and becomes more complex as the value of the initial wobbling amplitude $a$ increases. Another interesting property in this context is the emergence of isolated 1-bounce windows, which are not present in the original kink-antikink scattering. It is clear that the scattering between wobblers characterized by the initial configuration (\ref{configuration02}) is extremely relevant for studying the resonant energy transfer mechanism in this problem. However, because of the spatial reflection symmetry of this type of processes, the wobblers transfer the same amount of energy to each other at the collision, that is, the scattered wobblers travel away with the same final speeds and wobbling amplitudes. In this work we are interested in analyzing more general scattering events where the energy transfer mechanism becomes asymmetric with respect to the traveling wobblers.
The first type of processes which could involve novel properties in this framework is the collision between two wobblers with opposite phase. This scenario can be characterized by the initial configuration
\begin{equation}
\Phi_{W \widetilde{W}}(t,x;x_0,v_0,\omega,a,\delta) =
\phi_W^{(\pm)}(t,x; x_0,v_0,\omega,a,\delta) \cup
\phi_W^{(\mp)}(t,x; -x_0,-v_0,\omega,a,\pi+\delta) \, . \label{configuration04}
\end{equation}
We have employed the notation $W\widetilde{W}$ as subscript of $\Phi$ in (\ref{configuration04}) simply to emphasize that the wobblers have different initial phases and to distinguish this configuration from (\ref{configuration02}). In this case it is assumed that the wobblers evolve preserving a phase difference of $\pi$, giving rise to a destructive interference in the excitation of the shape modes of each wobbler when they collide. It is expected that the final versus initial velocity diagrams associated with these scattering events will be affected by this fact and that they will be very different from those found in the constructive interference scenario (\ref{configuration02}), analyzed in \cite{Alonso2021b}.
Another important situation which deserves attention is the scattering between a wobbler and a kink. These asymmetric events can be characterized by the initial configuration
\begin{equation}
\Phi_{WK}(t,x;x_0,v_0,\omega,a,\delta) =
\phi_W^{(\pm)}(t,x; x_0,v_0,\omega,a,\delta) \cup
\phi_K^{(\mp)}(t,x; -x_0,-v_0) \, , \label{configuration03}
\end{equation}
where, without loss of generality, the non-excited antikink/kink $\phi_K^{(\mp)}(t,x; -x_0,-v_0)$ has been placed to the right of the wobbler/antiwobbler. This situation allows us to analyze how the vibrational energy is transferred to the non-excited kink more directly than in the previous contexts.
In order to study the scattering between kinks and wobblers in the two previously described scenarios, in the present work we shall employ numerical approaches based on the discretization of the partial differential equation (\ref{pde}) with different initial conditions determined by the configurations (\ref{configuration04}) and (\ref{configuration03}). The particular numerical scheme used here is a fourth-order explicit finite difference algorithm implemented with fourth-order Mur boundary conditions, which has been designed to address non-linear Klein-Gordon equations, see the Appendix in \cite{Alonso2021}. In this numerical scheme the linear plane waves are absorbed at the boundaries, preventing radiation from being reflected at the simulation contours. To rule out the presence of spurious phenomena attributable to the use of a particular numerical algorithm, a second numerical procedure is used to validate the results. This double checking has been carried out by means of an energy-conservative second-order finite difference algorithm with Mur boundary conditions.
As previously mentioned, the initial settings for our scattering simulations are described by single solutions (kinks or wobblers) which are initially well separated and are pushed together with initial collision velocity $v_0$. This situation is characterized by the concatenation (\ref{configuration04}) for the scattering between wobblers with opposite phase and by (\ref{configuration03}) for the scattering between a wobbler and a kink, both of them with $x_0\gg 0$. These configurations satisfy the partial differential equation (\ref{pde}) to a very good approximation at early times when $x_0\gg 0$ and $a\ll 1$. Therefore, $\Phi(t=0)$ and $\frac{\partial \Phi}{\partial t}(t=0)$ provide the initial conditions of our scattering problem.
In particular, our numerical simulations have been carried out in a spatial interval $x\in [-100,100]$ where the centers of the single solutions are initially separated by a distance $d=2x_0=30$. Simulations have been performed for $v_0\in [0.04,0.9]$ with initial velocity step $\Delta v_0=0.001$, which is decreased to $\Delta v_0=0.00001$ in the resonance interval.
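For readers who wish to reproduce the qualitative behavior described above, the following self-contained Python sketch integrates (\ref{pde}) with a plain second-order leapfrog scheme supplemented by first-order Mur absorbing boundaries. It is a simplified stand-in for the fourth-order algorithm of \cite{Alonso2021} actually employed in this work, with illustrative grid parameters and a kink at $-x_0$ approaching an antikink at $+x_0$; the total energy (\ref{invariants}) is printed before and after the run as a sanity check (it should decrease only by the radiation absorbed at the boundaries).
\begin{verbatim}
import numpy as np

N = 4001
x = np.linspace(-100.0, 100.0, N)      # illustrative simulation interval
dx = x[1] - x[0]
dt = 0.9 * dx                          # CFL-stable step for unit wave speed
x0, v0 = 15.0, 0.3                     # half-separation and collision speed
g = 1.0 / np.sqrt(1.0 - v0 ** 2)

def pair(t):   # kink at -x0 moving right, antikink at +x0 moving left
    return np.where(x <= 0.0,
                    np.tanh(g * (x + x0 - v0 * t)),
                    -np.tanh(g * (x - x0 + v0 * t)))

def energy(phi, phi_t):
    dens = 0.5 * phi_t ** 2 + 0.5 * np.gradient(phi, dx) ** 2 \
           + 0.5 * (phi ** 2 - 1.0) ** 2
    return dens.sum() * dx

phi_old, phi = pair(-dt), pair(0.0)    # two time levels start the leapfrog
print("E(0)   =", energy(phi, (phi - phi_old) / dt))

c = (dt - dx) / (dt + dx)              # first-order Mur factor
for _ in range(int(120.0 / dt)):       # run through the collision, t ~ x0/v0
    phi_new = np.empty_like(phi)
    phi_new[1:-1] = (2.0 * phi[1:-1] - phi_old[1:-1]
        + dt ** 2 * ((phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
                     - 2.0 * phi[1:-1] * (phi[1:-1] ** 2 - 1.0)))
    phi_new[0] = phi[1] + c * (phi_new[1] - phi[0])    # absorbing ends
    phi_new[-1] = phi[-2] + c * (phi_new[-2] - phi[-1])
    phi_old, phi = phi, phi_new

print("E(end) =", energy(phi, (phi - phi_old) / dt))
\end{verbatim}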
At this point it is worthwhile mentioning that the expression (\ref{wobbler}) is only an approximation of the exact wobbler solution. When this expression is employed as initial condition in the Klein-Gordon equation (\ref{pde}), a small amount of radiation is emitted during a very short period of time. In this time interval the approximate solution (\ref{wobbler}) decays to the exact wobbler. When considering a traveling wobbler, this radiation emission can cause a very small change in its velocity. This effect takes place when $\delta\neq 0,\pi$ in the expression (\ref{wobbler}) and it is maximized for $\delta= \pm \pi/2$. In order to avoid this effect we shall implement initial conditions by setting $\delta=0$ in the configurations (\ref{configuration04}) and (\ref{configuration03}). With this restriction we guarantee that the traveling wobbler involved in (\ref{configuration03}) continues to move with velocity $v_0$ after the initial radiation emission. As mentioned above, this effect is very small and unnoticeable in the final versus initial velocity diagrams. However, we shall analyze the velocity difference of the resulting wobblers, and in this context it is better to avoid this influence. On the other hand, for the values $\delta=0$ or $\delta=\pi$ the decay of the approximation (\ref{wobbler}) to the real wobbler induces a very small variation in its wobbling amplitude. This effect is also very small and does not affect the global properties of the scattering processes discussed in this paper. An alternative scheme to implement initial configurations (\ref{configuration03}) with non-vanishing initial phases is to find an approximately equivalent configuration with vanishing phase. This can be obtained, for example, by taking into account that
\[
\phi_W^{(\pm)}(t,x; x_0,v_0,\omega,a,\delta) = \phi_W^{(\pm)}(t+\textstyle\frac{\delta}{\omega},x; x_0-\frac{\delta v_0}{\omega},v_0,\omega,a,0)\hspace{0.3cm} .
\]
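A quick numerical spot-check of this identity at an arbitrary spacetime point (with illustrative parameter values) reads as follows.
\begin{verbatim}
import numpy as np

def wob(t, x, x0, v0, w, a, d):        # the ansatz, "+" sign choice
    u = (x - x0 - v0 * t) / np.sqrt(1.0 - v0 ** 2)
    return np.tanh(u) + a * np.sin(w * t + d) * np.tanh(u) / np.cosh(u)

t, x, x0, v0 = 0.7, 1.3, 15.0, 0.3     # illustrative values
w, a, d = np.sqrt(3.0), 0.1, 0.9
lhs = wob(t, x, x0, v0, w, a, d)
rhs = wob(t + d / w, x, x0 - d * v0 / w, v0, w, a, 0.0)
print(lhs - rhs)                       # vanishes to machine precision
\end{verbatim}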
\section{Scattering between wobblers with opposite phases}
\label{sec:3}
In this section we shall analyze the asymmetric scattering between two wobblers whose shape modes have the same amplitude but opposite phases with respect to our inertial system, which is located at the center of mass. In this context, a wobbler and an anti-wobbler approach each other with initial velocities $v_0$ and $-v_0$, respectively. They evolve preserving the phase difference of $\pi$ and collide, giving rise to a destructive interference between the shape modes of the involved wobblers. Figure~\ref{fig:VelDiaContrafaseAmp020} shows the final versus initial velocity diagrams for three representative values of the initial wobbling amplitude, $a=0.04$, $a=0.1$, and $a=0.2$.
\begin{figure}[h]
\centerline{\includegraphics[height=3.5cm]{veldiacontrafaseamp004}}
\medskip
\centerline{\includegraphics[height=3.5cm]{veldiacontrafaseamp010}}
\medskip
\centerline{\includegraphics[height=3.5cm]{veldiacontrafaseamp020}}
\caption{\small Final versus initial velocity diagram for the scattering between two wobblers with opposite phase for the values of the wobbling amplitude $a=0.04$, $a=0.1$ and $a=0.2$. The color code is used to specify the number of bounces suffered by the kinks before escaping. The black curves determine the final velocity of the bion formed for low initial velocities.} \label{fig:VelDiaContrafaseAmp020}
\end{figure}
Unsurprisingly, the only scattering channels to emerge in this new scenario are still bion formation and kink reflection. As before, the former is predominant for small values of the initial velocity $v_0$, while the latter is found for large values. However, these velocity diagrams display some important differences with respect to the scattering of wobblers addressed in \cite{Alonso2021b}, where the corresponding shape modes have the same phase and a constructive interference occurs in the collision. In this new context, the destructive interference prevents the emergence of isolated 1-bounce windows (at least for non-extreme values of $a$), as can be observed in Figures~\ref{fig:VelDiaContrafaseAmp020} and \ref{fig:VelDiaContrafaseFractal}. The suppression of this mechanism implies that the width of the fractal structure does not grow.
In Figure \ref{fig:VelDiaContrafaseFractal} the evolution of the fractal pattern can be visualized as the value of the initial amplitude $a$ increases. First, we can observe that the value of the critical velocity $v_c$ varies very slowly as the initial amplitude $a$ grows. For instance, $v_c \approx 0.2601$ for $a=0.02$, whereas $v_c\approx 0.2681$ for $a=0.2$, following a linear dependence on $a$ for intermediate values. Second, it can be seen that the 2-bounce windows are deformed as the value of $a$ increases and break up into smaller 2-bounce windows. The first 2-bounce window shown in Figure \ref{fig:VelDiaContrafaseFractal} for $a=0.04$ can be used to illustrate this mechanism. This window gets distorted when $a=0.12$ and splits into two pieces for $a=0.14$. In turn, one of these pieces is divided again into two new 2-bounce windows for $a=0.20$. Third, spontaneous generation of $n$-bounce windows with $n\geq 2$ can also be identified in the sequence of graphics included in Figure~\ref{fig:VelDiaContrafaseFractal}. For instance, for $a=0.12$ a small 3-bounce window spontaneously emerges in the interval $[0.21585,0.2174]$, which was occupied by the bion formation regime for previous values of $a$. Subsequently, this window is split into two parts, resulting in a 2-bounce window in the middle for $a=0.14$, which is surrounded by new $n$-bounce windows. This new 2-bounce window gets bigger as $a$ increases and finally splits into two new 2-bounce windows once more, as can be seen in the graphics for $a=0.20$. This window generation mechanism could explain the clustering of 2-bounce windows that arises around the value $v_0=0.2566$ for $a=0.20$.
\begin{figure}[htb]
\centerline{\includegraphics[height=2.5cm]{veldiacontrafasefractalamp004}}
\medskip
\centerline{\includegraphics[height=2.5cm]{veldiacontrafasefractalamp010}}
\medskip
\centerline{\includegraphics[height=2.5cm]{veldiacontrafasefractalamp012}}
\medskip
\centerline{\includegraphics[height=2.5cm]{veldiacontrafasefractalamp014}}
\medskip
\centerline{\includegraphics[height=2.5cm]{veldiacontrafasefractalamp020}}
\caption{\small Evolution of the fractal pattern found in the velocity diagrams associated to the scattering between wobblers with opposite phases as a function of the initial wobbling amplitude $a$.} \label{fig:VelDiaContrafaseFractal}
\end{figure}
Another important characteristic of this type of scattering processes is that the final velocities of the scattered wobblers are different. This behavior is not surprising because the initial configuration (\ref{configuration04}) is not symmetric. Recall that the initial wobbling phases of the colliding wobblers are different. This velocity difference is very small, and therefore not noticeable in the velocity diagrams shown in Figure~\ref{fig:VelDiaContrafaseAmp020}. In order to emphasize this feature we define the magnitude
\begin{equation}
\Delta v_f = |v_{f,R}|-|v_{f,L}| \, , \label{deltavf}
\end{equation}
as the difference between the final speed $|v_{f,R}|$ of the rightward traveling wobbler and the final speed $|v_{f,L}|$ of the leftward traveling wobbler. Positive values of $\Delta v_f$ imply that the wobbler scattered to the right travels faster than the wobbler scattered to the left, whereas negative values describe the reverse situation. In Figure~\ref{fig:VelocityDifferenceContrafase}, the magnitude $\Delta v_f$ is plotted as a function of the initial velocity $v_0$ and the wobbling amplitude $a$. There, we can see that $\Delta v_f$ exhibits an oscillating behavior, which means that there are alternating initial velocity windows in which the wobbler scattered to the left travels faster than the wobbler scattered to the right and vice versa. The amplitudes of the oscillations exhibited by $\Delta v_f$ grow as the value of the parameter $a$ increases. This is reasonable because the vibrational energy stored in the shape mode is greater for bigger values of $a$, and the resonant energy transfer mechanism may deflect a greater amount of this energy to the kinetic energy pool. However, the most remarkable property exhibited by Figure~\ref{fig:VelocityDifferenceContrafase} is that the zeroes of $\Delta v_f$, the initial velocity values for which the two wobblers disperse with the same velocity, are approximately independent of the initial amplitude $a$. This behavior is precisely followed for sufficiently large values of $v_0$, where the effect of the resonance regime is not noticed (approximately for $v_0\geq 0.3$ in Figure~\ref{fig:VelocityDifferenceContrafase}).
\begin{figure}[htb]
\centerline{\includegraphics[height=3.3cm]{velocitydifferencecontrafase}}
\caption{\small Final velocity difference $\Delta v_f$ of the scattered wobblers as a function of the collision velocity $v_0$ and the initial wobbling amplitude $a$ for the scattering of two wobblers with opposite phase. Recall that $\Delta v_f=0$ for $a=0$ due to spatial reflection symmetry. For the sake of clarity, $n$-bounce processes with $n\geq 2$ have not been included in the plot. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$.} \label{fig:VelocityDifferenceContrafase}
\end{figure}
In Table \ref{ZerosContrafase}, the zeros $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$ (explicitly computed for the case $a=0.04$) are shown in the non-resonance regime. The values $\widetilde{v}_k$ correspond to the nodes of the oscillations found in Figure~\ref{fig:VelocityDifferenceContrafase}, which have been marked by means of vertical dashed lines.
The location of these points seems to depend mainly on the value of the wobbling phase when the collision between the wobblers occurs. This conjecture is heuristically supported by the following simple argument. Remember that $x_0$ denotes the initial position of the kink center, while $\omega$ represents the wobbling frequency. As previously discussed, the values $x_0=15$ and $\omega=\sqrt{3}$ have been implemented for our numerical simulations. Let $v_0$ be the initial velocity at which the wobblers are initially approaching. In the point particle approximation the collision would happen at the time $t_I=\frac{x_0}{v_0}$. We must bear in mind that there are several factors in the real dynamics which break the precision of this assumption. For example, the interaction between the kinks and/or wobblers can make the collision velocity vary (it is not a constant velocity $v_0$). We shall assume that the phase of the wobbler at the instant $t_I$ can be expressed as
\[
\varphi(v_0)= c(x_0) \, \frac{x_0}{v_0} \omega \sqrt{1-v_0^2} + \delta \, ,
\]
where $c(x_0)$ is a correction factor introduced to account for the effects mentioned above. The main assumption in this case is that $c(x_0)$ does not depend on $v_0$. If we think of the initial impact velocity as a variable $v$, then it makes sense to consider $\varphi(v)=c(x_0) \, \frac{x_0}{v} \omega \sqrt{1-v^2} + \delta$.
Those phenomena depending only on the wobbling phase must exhibit a periodicity based on the relation
\begin{equation}
\varphi(v_0) - \varphi(v)= T\, k\, , \hspace{0.6cm} k\in \mathbb{Z}\,, \label{fase}
\end{equation}
where $T$ is the periodicity associated with our problem. In general $T=2\pi$, but in the present scenario, where we are interested in the zeroes $\widetilde{v}_k$ of $\Delta v_f$, the symmetry of the initial configuration leads to the choice $T=\pi$. From (\ref{fase}) we conclude that the discrete set of velocities
\begin{equation}
f_k(v_0,T)=\frac{v_0 \, x_0 \, \omega}{\sqrt{\frac{k^2 \, T^2\, v_0^2}{4 \,c(x_0)^2} - \frac{T}{c(x_0)} \, k \, v_0 \, x_0 \, \omega \sqrt{1-v_0^2} + x_0^2 \, \omega^2}} \,,\hspace{0.6cm} k\in \mathbb{Z},
\label{velocities}
\end{equation}
must share similar features. The nodes $\widetilde{v}_k$ of $\Delta v_f$ can be approximately determined by using equation (\ref{velocities}). In Table \ref{ZerosContrafase} (third column) the values $V_{k}=f_k(v_0,\pi)$, obtained by using the formula (\ref{velocities}) with the initial input $v_0=\widetilde{v}_0=0.301538$, are included. The value $c(x_0)$ has been adjusted to $c(x_0)= 0.465$. The comparison between the data allows us to conclude that the previous conjecture is satisfied, at least for intermediate values of the initial velocity. Of course, the nonlinear nature of the problem makes the argument only an approximation to the actual behavior. This is clear for very large values of the collision velocity. In this regime the amplitudes of the oscillations of $\Delta v_f$ are very attenuated compared to intermediate values of $v_0$. For these cases radiation emission can play a predominant role in the scattering processes.
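The third column of Table~\ref{ZerosContrafase} can be regenerated directly from (\ref{velocities}). The short Python script below uses the values quoted in the text, $x_0=15$, $\omega=\sqrt{3}$, $c(x_0)=0.465$, $T=\pi$, and the seed $v_0=\widetilde{v}_0=0.301538$.
\begin{verbatim}
import numpy as np

x0, omega, c, T = 15.0, np.sqrt(3.0), 0.465, np.pi
v0 = 0.301538                          # seed value, the zero tilde-v_0

def f(k):                              # equation (velocities) with T = pi
    disc = (k * T * v0) ** 2 / (4.0 * c ** 2) \
         - (T / c) * k * v0 * x0 * omega * np.sqrt(1.0 - v0 ** 2) \
         + (x0 * omega) ** 2
    return v0 * x0 * omega / np.sqrt(disc)

for k in range(19):
    print(k, round(f(k), 6))           # k = 1 gives 0.313224, etc.
\end{verbatim}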
\begin{table}[htb]
\centerline{\begin{tabular}{|c|c|c|} \hline
$k$ & $\widetilde{v}_k$ & $V_k$ \\ \hline
0 & 0.301538 & 0.301538 \\
1 & 0.313991 & 0.313224\\
2 & 0.326057 & 0.325797 \\
3 & 0.340542 & 0.339354 \\
4 & 0.354757 & 0.354006 \\
5 & 0.371738 & 0.369877 \\
6 & 0.388575 & 0.387110 \\
\hline
\end{tabular} \hspace{0.2cm} \begin{tabular}{|c|c|c|} \hline
$k$ & $\widetilde{v}_k$ & $V_k$ \\ \hline
7 & 0.408291 & 0.405864 \\
8 & 0.428219 & 0.426323 \\
9 & 0.451464 & 0.448689 \\
10 & 0.475173 & 0.473188 \\
11 & 0.502713 & 0.500069 \\
12 & 0.531044 & 0.529591 \\
13 & 0.563882 & 0.562022 \\
\hline
\end{tabular} \hspace{0.2cm}
\begin{tabular}{|c|c|c|} \hline
$k$ & $\widetilde{v}_k$ & $V_k$ \\ \hline
14 & 0.597459 & 0.597606 \\
15 & 0.636259 & 0.636531 \\
16 & 0.675558 & 0.678857 \\
17 & 0.727740 & 0.724418 \\
18 & 0.770909 & 0.772667 \\ && \\ && \\ \hline
\end{tabular}}
\caption{Comparison between the zeros $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$ and the values $V_k=f_k(v_0,\pi)$ obtained by using equation (\ref{velocities}) for the scattering between wobblers with opposite phase and initial wobbling amplitude $a=0.04$. } \label{ZerosContrafase}
\end{table}
At this point it is worthwhile mentioning that the zeroes $\widetilde{v}_k$ introduced in Table \ref{ZerosContrafase} have been computed when $\delta=0$ in the initial configuration (\ref{configuration04}). The particular location of these points depends on the initial phase $\delta$ introduced in (\ref{configuration04}), although it is clear that the same pattern is periodically reproduced for the values $\delta + k T$ with $k\in \mathbb{Z}$.
\begin{figure}[h]
\centerline{\includegraphics[height=4.2cm]{amplitudfinalcontrafase020kink1y2}}
\caption{\small Graphics of the final wobbling amplitudes of the wobblers scattered to the left and to the right as a function of the initial velocity $v_0$ and the initial amplitude $a$. For the sake of clarity, $n$-bounce processes with $n\geq 2$ have not been included. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$. The letters $L$ and $R$ label the smooth amplitude functions associated with wobblers traveling left and right, respectively. } \label{fig:AmplitudFinalContrafaseKink1y2}
\end{figure}
Once the final velocities of the scattered wobblers have been examined, we shall now analyze the behavior of the wobbling amplitude of these evolving topological defects.
In Figure~\ref{fig:AmplitudFinalContrafaseKink1y2}, the oscillation amplitudes of the wobblers moving to the left and to the right are represented as a function of the initial velocity $v_0$ and the initial amplitude $a$. There it can be seen that this magnitude oscillates around the values found for the kink-antikink scattering. The amplitude of these oscillations grows as the parameter $a$ increases. Furthermore, the amplitudes of the resulting wobblers follow an antagonistic behavior. When the oscillation amplitude of the wobbler moving to the left reaches a maximum as a function of the initial velocity $v_0$, the oscillation amplitude of the wobbler moving to the right is minimized and vice versa. The asymmetry of the initial configuration (\ref{configuration04}) causes the wobblers to vibrate with different amplitudes in general. On the other hand, there are some points in the graphs shown in Figure~\ref{fig:AmplitudFinalContrafaseKink1y2} where the amplitudes of the two wobblers coincide. Surprisingly, these points coincide with the zeroes $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$ (as can be observed by means of the vertical dashed lines plotted in Figure~\ref{fig:AmplitudFinalContrafaseKink1y2}). In conclusion, for the initial velocities $\widetilde{v}_k$ the scattered wobblers travel with the same velocity and vibrate with the same wobbling amplitude.
In order to explore the relation between the final velocity and the final wobbling amplitude of the scattered wobblers, we define the amplitude difference
\begin{equation}
\Delta a = \frac{1}{2} \left[ a_{f,R} - a_{f,L} \right]\, ,
\end{equation}
where $a_{f,R}$ and $a_{f,L}$ are, respectively, the final oscillation amplitudes of the wobblers moving to the right and to the left. $\Delta a > 0$ means that the wobbler scattered to the right vibrates more strongly than that moving to the left, whereas $\Delta a < 0$ describes the opposite situation. Figure~\ref{fig:AmplitudYVelocidadContrafase} shows simultaneously the final velocity and amplitude differences $\Delta v_f$ and $\Delta a$ as functions of the initial velocity $v_0$ for the particular value $a=0.10$. It can be seen that when a scattered wobbler gains more kinetic energy than the other, it obtains less vibrational energy, and vice versa. The values $\widetilde{v}_k$ are interpreted as the collision velocities for which the final velocities and the wobbling amplitudes of the scattered wobblers are the same.
\begin{figure}[htb]
\centerline{\includegraphics[height=3.4cm]{amplitudyvelocidadcontrafase}}
\caption{\small Graphics of $\Delta v_f$ (final velocity difference) and $\Delta a$ (final wobbling amplitude difference) as functions of the initial collision velocity $v_0$ for the scattering between wobblers with opposite phase with $a=0.10$. $n$-bounce processes with $n\geq 2$ have not been included. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of $\Delta v_f$. } \label{fig:AmplitudYVelocidadContrafase}
\end{figure}
Finally, another consequence of the asymmetry of these scattering events is that the bion (formed as a bound state of the two colliding wobblers) can now move with a certain non-vanishing final velocity after the impact. This velocity is very small, and for this reason it is sometimes difficult to compute its magnitude numerically. In~Figure~\ref{fig:VelDiaContrafaseBionAmp010} the region of the velocity diagram introduced in Figure~\ref{fig:VelDiaContrafaseAmp020} for $a=0.10$ with $v_0\in [0.10,0.18]$ has been enlarged to illustrate the behavior of the bion velocity.
Again, we find an oscillating pattern, clearly seen in Figure~\ref{fig:VelDiaContrafaseBionAmp010} for the interval $v_0\in [0.13,0.16]$. Also, it turns out that the formula (\ref{velocities}) still governs this oscillating behavior. In the previously mentioned range of $v_0$, vertical dashed lines have been plotted to approximately mark the location of the nodes of the bion velocity. The values used correspond to the initial velocities $v_1 \approx 0.134$, $v_2 \approx 0.1364$, $v_3 \approx 0.1388$, $v_4 \approx 0.1415$, $v_5 \approx 0.1443$, $v_6 \approx 0.1475$, $v_7 \approx 0.1506$, $v_8 \approx 0.1538$, $v_9 \approx 0.1572$ and $v_{10} \approx 0.161$, which can be approximately reproduced by (\ref{velocities}).
\begin{figure}[htb]
\centerline{\includegraphics[height=3.4cm]{veldiacontrafasebionamp010}}
\caption{\small Final bion velocity as a function of the initial velocity $v_0$ in the interval $v_0\in [0.10,0.18]$ for the scattering between two wobblers with opposite phase and the initial wobbling amplitude $a=0.10$. The vertical dashed lines mark some of the nodes of the curve.} \label{fig:VelDiaContrafaseBionAmp010}
\end{figure}
\section{Scattering between a kink and a wobbler}
\label{sec:4}
In this section we shall study the scattering between a wobbler and a kink. This scenario is characterized by the concatenation (\ref{configuration03}). With the first choice of signs, this configuration describes a wobbler and an antikink which travel respectively with velocities $v_0$ and $-v_0$. The rightward traveling wobbler and the leftward traveling antikink approach each other, collide, and bounce back. As usual, the \textit{formation of a bion} and the \textit{reflection of the solutions} complete the list of possible scattering channels. In the reflection regime, the initially unexcited antikink becomes an anti-wobbler after the collision because, in general, the shape mode of this solution is excited. Therefore, after the impact two wobblers emerge moving away with different final velocities in our inertial system. The goal of this study is to analyze the transfer of the vibrational and kinetic energies between the resulting wobblers. The dependence of the final velocities of the scattered extended particles on the initial velocity $v_0$ has been graphically represented in Figure~\ref{fig:VelDiaAmp020} for the cases $a=0.04$, $a=0.1$, and $a=0.2$.
\begin{figure}[htb]
\centerline{\includegraphics[height=3.5cm]{veldiaamp004}}
\medskip
\centerline{\includegraphics[height=3.5cm]{veldiaamp010}}
\medskip
\centerline{\includegraphics[height=3.5cm]{veldiaamp020}}
\caption{\small Final versus initial velocity diagram for the wobbler-antikink scattering for the values of the wobbling amplitude $a=0.04$, $a=0.10$ and $a=0.20$. The color code is used to specify the number of bounces suffered by the kinks before escaping.} \label{fig:VelDiaAmp020}
\end{figure}
\begin{figure}[bht]
\centerline{\includegraphics[height=3cm]{velodiaisolated}}
\caption{\small Velocity diagrams for the wobbler-antikink scattering showing the emergence and location of the isolated 1-bounce windows as the value of the wobbling amplitude increases. The vertical dashed lines mark the values of $v_0$ at the center of the 1-bounce windows for the extreme case $a=0.2$. For the sake of clarity, $n$-bounce processes with $n\geq 2$ have not been included. } \label{fig:VelDiaIsolated}
\end{figure}
Some of the most relevant characteristics described in \cite{Alonso2021b} for the scattering between wobbling kinks are also found in this framework, such as the emergence of isolated 1-bounce windows and the growing complexity of the fractal pattern as the initial wobbling amplitude $a$ of the originally rightward-traveling wobbler increases. It is also worthwhile mentioning the presence of oscillations in the 1-bounce tail arising for large values of the initial velocity. However, these features are less accentuated in this scenario. The reason for this behavior lies in the fact that the constructive interference is maximized when the wobblers collide with the same wobbling phase. In particular, we can observe the existence of two isolated 1-bounce windows for the case $a=0.04$. They occupy approximately the region $[0.2458,0.2522] \cup [0.2607,0.2717]$. For $a=0.1$ six of these windows can be identified in $[0.2068,0.2085] \cup [0.2183,0.2214] \cup [0.2310,0.2358] \cup [0.2452,0.2521] \cup [0.2610,0.2709] \cup [0.2787,0.2925]$. Finally, the number of these windows explodes as the initial amplitude $a$ grows. This can be observed in the velocity diagram for $a=0.2$ in Figure \ref{fig:VelDiaAmp020}. Some of the widest 1-bounce windows in this case arise in the set of intervals $[0.2067,0.2108]\cup [0.2185,0.2234] \cup [0.2315,0.2375] \cup [0.2460,0.2535] \cup [0.2621,0.2717] \cup [0.2804,0.2928] \cup [0.3012,0.3179]$. From the previous list of 1-bounce windows, it can be verified that once an isolated 1-bounce window emerges, its location remains approximately fixed (although its width slightly grows) as the initial wobbling amplitude $a$ increases. This behavior can be checked in Figure \ref{fig:VelDiaIsolated}. Note that the deviation from the rule described above is a small translation of the center of these windows. In Figure \ref{fig:VelDiaIsolated} the vertical dashed lines mark the values of the initial velocity which determine the centers of the 1-bounce windows for the extreme case $a=0.2$. Once again, these velocities approximately follow relation (\ref{velocities}), which reveals that the role of the phase of the evolving shape mode is predominant in this phenomenon.
The velocity diagrams shown in Figure~\ref{fig:VelDiaAmp020} also have some distinctive properties of their own.
Because the scattering processes introduced in this section are asymmetric, the final velocities of the resulting wobblers are different, as are their wobbling amplitudes. In order to illustrate this feature more clearly, the difference $\Delta v_f$ between the final speeds of the scattered wobblers is plotted for different values of the wobbling amplitude $a$ in Figure~\ref{fig:VelocityDifference}. For the sake of simplicity, only 1-bounce events have been included in Figure~\ref{fig:VelocityDifference}. As in the case of the scattering between wobblers with opposite phase discussed in Section \ref{sec:3}, the zeros of the function $\Delta v_f$ are approximately independent of the initial amplitude $a$ and, indeed, coincide with the zeroes $\widetilde{v}_k$ introduced in Table~\ref{ZerosContrafase} in Section~\ref{sec:3}. This behavior reflects the fact that the initially rightward-traveling wobbler defined in the configuration (\ref{configuration03}) has the same initial conditions as those given by the configuration (\ref{configuration04}).
\begin{figure}[h]
\centerline{\includegraphics[height=3.3cm]{velocitydifference}}
\caption{\small Final velocity difference $\Delta v_f$ of the scattered wobblers as a function of the collision velocity $v_0$ and the initial wobbling amplitude $a$ for the scattering of a wobbler and a kink. $n$-bounce processes with $n\geq 2$ have not been included. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of $\Delta v_f$ displayed in Table~\ref{ZerosContrafase}. } \label{fig:VelocityDifference}
\end{figure}
In Figure~\ref{fig:AmplitudFinalTotalKink} the final wobbling amplitudes of the scattered wobblers are plotted as a function of the initial velocity $v_0$ and the initial wobbling amplitude $a$. Recall that $a_L(v_0,a)$ and $a_R(v_0,a)$ represent, respectively, the final wobbling amplitudes of the resulting leftward and rightward traveling wobblers after the collision. We can observe that the shape modes of the scattered wobblers become excited and that their amplitudes behave similarly as functions of the initial velocity, oscillating around the values found for the kink-antikink scattering events (with $a=0$). However, the amplitude of these oscillations is much larger for the final rightward traveling wobbler.
\begin{figure}[htb]
\centerline{\includegraphics[height=3.5cm]{amplitudfinaltotalkink1}}\medskip
\centerline{\includegraphics[height=3.5cm]{amplitudfinaltotalkink2}}
\caption{\small Final wobbling amplitudes $a_L$ and $a_R$ of the wobblers scattered to the left (top) and to the right (bottom) as a function of the initial velocity $v_0$ and the initial wobbling amplitude $a$ of the colliding wobbler. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of $\Delta v_f$ displayed in Table \ref{ZerosContrafase}.} \label{fig:AmplitudFinalTotalKink}
\end{figure}
To illustrate the role of the zeroes $\widetilde{v}_k$ of the final velocity difference $\Delta v_f$ shown in Table~\ref{ZerosContrafase} in this scenario, the functions $\Delta v_f$ and $\Delta a$ have been represented simultaneously for the case $a=0.10$ in Figure~\ref{fig:AmplitudFinal020Kink1y2}. As in the scattering between wobblers with opposite phase, the values $\widetilde{v}_k$ determine the initial velocities for which the final velocities and the final wobbling amplitudes are the same for both scattered wobblers.
\begin{figure}[h]
\centerline{\includegraphics[height=3.5cm]{amplitudyvelocidad}}
\caption{\small Graphics of $\Delta v_f$ (final velocity difference) and $\Delta a$ (final wobbling amplitude difference) as a function of the initial collision velocity $v_0$ for the scattering between a wobbler and an antikink with $a=0.10$. $n$-bounce processes with $n\geq 2$ have not been included. The vertical dashed lines mark the zeroes $\widetilde{v}_k$ of $\Delta v_f$.} \label{fig:AmplitudFinal020Kink1y2}
\end{figure}
\section{Conclusions}
\label{sec:5}
This paper delves into the study of the scattering between wobbling kinks initially addressed in \cite{Alonso2021b}. Here, we have investigated the asymmetric scattering between kinks and wobblers (kinks whose shape mode is excited) in the standard $\phi^4$ model. In particular, two different scenarios in this context have been considered: (a) the scattering between wobblers with opposite phases, and (b) the scattering between a wobbler and an unexcited antikink. Both cases exhibit the usual bion formation and reflection regimes, which are infinitely interlaced, forming a fractal structure embedded in the final versus initial velocity diagram. However, the first case involves a destructive interference of the shape modes in the collision. As a consequence, the growth in the complexity of the fractal pattern is smaller than that found in \cite{Alonso2021b}, where the colliding wobbling kinks travel with the same phase, leading to a constructive interference at the impact. For example, the emergence of isolated 1-bounce windows is not found in this new case (at least for moderate values of the initial wobbling amplitude $a$), although the splitting of $n$-bounce windows is present. On the other hand, the kink scattering in the second scenario displays features similar (although more attenuated) to those found in \cite{Alonso2021b}.
Due to the asymmetry of the initial configurations (\ref{configuration04}) and (\ref{configuration03}), the final velocities and wobbling amplitudes of the scattered wobblers are different in general. However, there is a sequence of initial velocities for which both the final velocities and the wobbling amplitudes coincide. These values are almost independent of the initial wobbling amplitude $a$ when the initial wobbling phase considered in (\ref{configuration04}) and (\ref{configuration03}) is fixed. Moreover, these velocities follow the expression (\ref{velocities}) to a very good approximation. This means that the phase associated with the shape modes of the evolving wobblers at the collision instant plays a predominant role in the scattering properties of these objects.
Indeed, (\ref{velocities}) allows us to obtain values of the initial velocities which share similar features. For example, this expression has been used in the second scenario to predict the location of the maxima of the isolated 1-bounce windows. Finally, it is also worthwhile mentioning the results displayed in Figures \ref{fig:AmplitudYVelocidadContrafase} and \ref{fig:AmplitudFinal020Kink1y2}. There it can be verified that, systematically, when one scattered wobbler gains more kinetic energy than the other, it acquires less vibrational energy, and vice versa.
The research introduced in the present work opens up some possibilities for future work. For example, the $\phi^6$ model exhibits a resonance regime similar to that of the $\phi^4$ model, even though its second-order small-fluctuation operator has no vibrational eigenstates. The characteristics of scattered wobbling kinks can be analyzed to study their influence on the resonant energy transfer mechanism. Alternatively, one can construct a twin of the $\phi^6$ model that does involve internal modes. By doing this, the scattering processes of the twin model could be compared with those of the standard $\phi^6$ model. In this way, it will be possible to examine the role that shape modes play in the collision process.
Furthermore, many other different topological defects (kinks in the double sine-Gordon model, deformed $\phi^4$ models, hybrid and hyperbolic models, etc.) could be studied in the new perspective presented here. Work in these directions is in progress.
\section*{Acknowledgments}
A. Alonso-Izquierdo acknowledges Spanish MCIN financial support under grant PID2020-113406GB-I0. He also acknowledges the Junta de Castilla y Le\'on for financial support under grant SA067G19. L.M. Nieto acknowledges Spanish MCIN financial support under grant PID2020-113406GB-I0. This research has made use of the high performance computing resources of the Castilla y Le\'on Supercomputing Center (SCAYLE, www.scayle.es), financed by the European Regional Development Fund (ERDF).
\section{Introduction and Context}
\subsection{Historical Context}
Circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them: the \textit{circuit-size complexity} of a Boolean function $f$ is the minimal size of any circuit computing $f$, and the \textit{circuit-depth complexity} of $f$ is the minimal depth of any circuit computing $f$. Boolean functions and the complexity of computing them have been under thorough study since the publication of \textit{Claude Shannon's} seminal paper in $1949$, which demonstrated that almost all Boolean functions on $n$ variables require circuits of size $\Omega(2^n / n)$.
Our motivations for studying circuit complexity are multi-faceted. Circuit complexity is a reasonable measure of difficulty for functions that take only finitely many inputs; i.e., through the lens of circuit complexity we are able to deduce very meaningful conclusions about finite-input problems, and we have more control over the study of the \textit{program size} of algorithms on finite bit instances than if we were simply analyzing the same problem through the lens of Turing computation. There are also close connections between circuit complexity, de-randomisation and, by extension, the complexity class BPP.
The best known lower bound on the circuit size of a problem in NP is currently $4.5n - o(n)$ \cite{AB07}; this is not very strong and much lower than expected, yet we are still stuck at this stage. For the Polynomial Hierarchy, better lower bounds are known: for example, for every $k > 0$, some language in the Polynomial Hierarchy requires circuits of size $\Omega(n^k)$.
Whilst proving circuit lower bounds is notoriously difficult, super-polynomial lower bounds have been proved under certain restrictions on the family of circuits used. The first function for which such bounds were obtained was the \textit{parity function}, which computes the sum of the input bits modulo $2$. The proof that PARITY is not contained in the complexity class $AC^0$ was first established independently by \textit{Ajtai} in $1982$ and by \textit{Furst, Saxe} and \textit{Sipser} in $1984$. In $1987$, Hastad proved the even stronger result that any family of constant-depth circuits computing the parity function requires exponential size, applying his famous and powerful switching lemma and utilising the key idea of random restrictions, which Rossman, Servedio and Tan later extended to random \textit{projections} to prove an average-case depth hierarchy theorem for Boolean circuits \cite{RST15}.
\subsection{Setup and Definitions}
\begin{definition}
$t$-CNF is an AND of clauses of width at most $t$, where a clause of width $t$ is an OR of $t$ literals. For example, $x_1 \lor \bar{x}_2 \lor x_4$ is a clause of width $3$, and $(x_1 \lor x_2) \land (\bar{x}_1 \lor \bar{x}_2)$ is a $2$-CNF; another example, this time of a $3$-CNF, is:
\end{definition}
$$(A \lor \neg B \lor \neg C) \land (\neg D \lor E \lor F)$$
\begin{definition}
$s$-DNF is an OR of disjuncts of width at most $s$, where a disjunct of width $s$ is an AND of $s$ literals, with an example below:
\end{definition}
$$(A \land \neg B \land \neg C) \lor (\neg D \land E \land F)$$
\begin{definition}
A restriction $\rho$ is a mapping from $\{x_1, \cdots, x_n\}$ to $\{0, 1, *\}$. For $0 \le p, q \le 1$, a random restriction $\rho \in R(p, q)$ is a restriction chosen at random so that:
$$P_{\rho}[\rho(x_i) = *] = p$$
$$P_{\rho}[\rho(x_i) = 0] = (1 - p)q$$
$$P_{\rho}[\rho(x_i) = 1] = (1 - p)(1 - q)$$
independently, for each $i$.
\end{definition}
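As a concrete illustration of this definition, the following Python sketch samples a restriction from $R(p, q)$; the function name and the use of the character \texttt{*} for unset variables are our own choices.
\begin{verbatim}
# A minimal sketch of sampling rho from R(p, q): each variable is
# independently left free ('*') with probability p, set to 0 with
# probability (1 - p) * q, and set to 1 with probability (1 - p)(1 - q).
import random

def sample_restriction(n, p, q):
    rho = {}
    for i in range(n):
        if random.random() < p:
            rho[i] = '*'        # the variable survives the restriction
        elif random.random() < q:
            rho[i] = 0
        else:
            rho[i] = 1
    return rho

print(sample_restriction(8, 0.1, 0.5))
\end{verbatim}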
\begin{definition}
$AC^k$ is the class of functions computable by polynomial size and $O((\log n)^k)$ depth circuits over unbounded fan-in AND and OR gates, together with NOT gates.
\end{definition}
\begin{definition}
$NC^k$ is the class of functions computable by polynomial size and $O((\log n)^k)$ depth circuits over fan-in-$2$ AND and OR gates and NOT gates.
\end{definition}
In particular we know that $\text{AC}^k \subseteq \text{NC}^{k + 1} \subseteq \text{AC}^{k + 1}$.
\begin{definition}
The complexity class $P_{/\text{poly}}$ is defined as the set of languages recognised by a polynomial-time Turing Machine with a polynomially-bounded \textit{advice}\footnote{We say that a Turing Machine has access to an advice family $\{a_n\}_{n \ge 0}$, where each $a_n$ is a string, if while computing on an input of size $n$ the machine is allowed to examine $a_n$. The advice is said to have polynomial size if there is a $c \ge 0$ such that $|a_n| \le n^c$.} function, and its relation with circuit complexity is encoded in the fact that:
$$P_{/\text{poly}} = \bigcup_{c \in \mathbb{N}} \text{SIZE}(n^c)$$
\end{definition}
\subsection{Shannon's Proof}
We present Shannon's beautiful non-constructive proof below.
\textbf{Shannon (1949)} There is a constant $c$ such that every $n$-ary Boolean function has circuit complexity at most $c \cdot 2^n / n$.
\begin{proof}
Fix an $n$-ary function $F : \{0, 1\}^n \to \{0, 1\}$. Let $x = (x_1, x_2, \cdots, x_n)$ denote the input vector and, for some integer $k$ to be specified later, define $y = (x_1, \cdots, x_k)$ and $z = (x_{k + 1}, \cdots, x_n)$.
We begin by constructing a $2^k \times 2^{n - k}$ truth table for $F$, where each row is specified by a possible value of $y$ and each column by a possible value of $z$. Suppose that there are only $t$ different column vectors in this matrix. The inequality $t \le 2^{n- k}$ is obvious; less obvious but still trivial is the inequality $t \le 2^{2^k}$, since each column is a vector in $\{0,1\}^{2^k}$. Let $F_i(z) = 1$ if column $z$ has the $i$-th pattern and $0$ otherwise. Similarly, let $G_i(y)$ be the function specified by the $i$-th column pattern. Then we can write:
$$F(x) = F(y, z) = \bigvee_{i = 1}^t G_i(y) \land F_i(z)$$
Suppose for the moment that all the $1$s in this table are restricted to $s$ of the $2^k$ rows. This immediately implies that $t \le 2^s$. Under this assumption, we can build a circuit for $F$ as follows. The inputs are connected to two binary-to-positional converters $B_k$ and $B_{n - k}$, which requires $2^k + 2^{n - k}$ gates. For each $i$ between $1$ and $t$, we add two trees of ORs connected by a single AND to compute $G_i(y) \land F_i(z)$; over all $i$ this requires $2^k + 2^{n - k} + 1$ gates, since the OR trees for distinct patterns are disjoint. Finally, we connect all of these with another tree of $t \le 2^s$ OR gates. The total number of gates used in this special case is at most $2(2^k + 2^{n - k}) + 2^s + 1$.
Now back to the general case. We can write any $n$-ary function $F$ as a disjunction of $2^k / s$ different $n$-ary functions, each of which has a truth table with at most $s$ non-zero rows. To construct a circuit for $F$, we build a circuit for each of its $s$-row components, sharing a single pair of binary-to-positional converters. The total gate count for this construction is at most:
$$2^k + 2^{n - k} + \frac{2^k}{s}\left(2^k + 2^{n - k} + 2^s + 1\right) = 2^k + 2^{n - k} + \frac{2^n + 2^k(2^k + 2^s + 1)}{s}$$
Finally, we are free to choose the parameters $k$ and $s$ to minimize this gate count. If we take $k = 2 \log n$ and $s = n - 2 \log n$, then the total gate count is at most:
$$n^2 + \frac{2^n}{n^2} + \frac{2^n + n^2(2^n / n^2 + n^2 + 1)}{n - 2 \log n} = n^2 + \frac{2^n}{n^2} + \frac{2^{n + 1} + n^4 + n^2}{n - 2 \log n} = O(\frac{2^n}{n})$$
\end{proof}
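As a quick numerical sanity check of the last step (not part of the proof), one may evaluate the displayed gate-count expression with $k = 2\log n$ and $s = n - 2\log n$ and compare it against $2^n/n$; the ratio stays bounded, as the following sketch suggests.
\begin{verbatim}
# Evaluate the gate-count bound with k = 2*log2(n), s = n - 2*log2(n)
# and compare against 2^n / n; the ratio remains a small constant.
import math

def shannon_bound(n):
    return (n**2 + 2**n / n**2
            + (2**(n + 1) + n**4 + n**2) / (n - 2 * math.log2(n)))

for n in [16, 32, 64, 128]:
    print(n, shannon_bound(n) / (2**n / n))
\end{verbatim}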
\subsection{Karp-Lipton Theorem}
The Karp-Lipton Theorem justifies why proving circuit lower bounds is a viable way to attack the $P$ versus $NP$ problem.
\begin{center}
\includegraphics[scale = 0.5]{Second_Level_PH.png}
\end{center}
\textit{Karp-Lipton Theorem:} $\text{NP} \subseteq P_{/\text{poly}} \Rightarrow \text{PH} = \Sigma_2$
\begin{proof}
Suppose $\text{NP} \subseteq P_{/\text{poly}}$. We will show that this implies $\text{PH} \subseteq \Sigma_2$. Note that $\text{PH} \subseteq \Sigma_2$ and $\Pi_2 \subseteq \Sigma_2$ are equivalent; if the latter holds then, by swapping quantifier order, we have that $\Sigma_3 = \Sigma_2$, collapsing the Polynomial Hierarchy.
By the assumption that $\text{NP} \subseteq P_{/\text{poly}}$, there exists a polynomial-size circuit family $\{C_n\}$ which decides SAT. This implies that there exists a polynomial-size circuit family $\{C^*_n\}$ which, given a Boolean formula $\phi$, finds a satisfying assignment for it.
We construct $\{C^*_n\}$ in the following way: we choose a free variable and substitute $1$ or $0$ for it, then run the decider. If the decider says that one of the resulting Boolean formulas is satisfiable, we keep that substitution and repeat the process on another free variable. When there are no more free variables, we will have substituted a satisfying assignment into $\phi$.
Let $L \in \Pi_2$. Then $x \in L \iff \forall u\, \exists v \text{ s.t. } \phi(x, u, v) = 1$.
Then $x \in L \iff \exists C^*\, \forall u\;\, \phi(x, u, C^*(x, u)) = 1$. That is, there exists a circuit which, for every $u$, outputs a satisfying assignment of $\phi(x, u, \cdot)$ whenever one exists. Hence $L \in \Sigma_2$ and the theorem is proven.
\end{proof}
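The search-to-decision self-reduction used above is easy to make concrete. The sketch below assumes a hypothetical decision oracle \texttt{sat\_decider} standing in for the circuit family $\{C_n\}$; the DIMACS-style clause encoding and all helper names are our own, and the brute-force oracle is exponential and only for illustration.
\begin{verbatim}
# Hedged sketch of the self-reduction: fix variables one at a time,
# querying a SAT *decision* oracle (a stand-in for the family {C_n}).
import itertools

def simplify(clauses, var, val):
    out = []
    for c in clauses:
        if (var if val else -var) in c:
            continue                      # clause satisfied, drop it
        reduced = [l for l in c if abs(l) != var]
        if not reduced:
            return None                   # clause falsified
        out.append(reduced)
    return out

def find_assignment(clauses, n, sat_decider):
    if not sat_decider(clauses):
        return None
    assignment, current = {}, clauses
    for var in range(1, n + 1):
        for val in (True, False):
            trial = simplify(current, var, val)
            if trial is not None and sat_decider(trial):
                assignment[var], current = val, trial
                break
    return assignment

def brute_decider(clauses):               # exponential stand-in oracle
    vs = sorted({abs(l) for c in clauses for l in c})
    idx = {v: i for i, v in enumerate(vs)}
    return any(all(any((l > 0) == bits[idx[abs(l)]] for l in c)
                   for c in clauses)
               for bits in itertools.product([False, True], repeat=len(vs)))

print(find_assignment([[1, -2], [-1, 2], [2, 3]], 3, brute_decider))
\end{verbatim}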
\section{Hastad's Switching Lemma \cite{AB07}}
Arguments using restrictions (partial assignments to input variables) to simplify unbounded fan-in Boolean circuits have been quite successful for obtaining lower bounds on circuit size and depth, oracles to separate complexity classes, lower bounds on time, processors and memory of PRAMs as well as on the complexity of proofs in bounded-depth proof systems.
Ultimately the key intuition behind Hastad's switching lemma is to show that an AND of small ORs can be written as an OR of small ANDs if an appropriate restriction is applied. In particular, we can reduce the depth of formulas by $1$ at the expense of reducing the number of input variables.
The core of their effectiveness is that they simplify the formulas without completely trivialising the functions that are being computed. Therefore, there is a certain element of creativity attributed to the choice of restrictions, since one chooses a family of restrictions that is tailored to the function being computed and argues that some member of the family has the desired properties. As long as the random restriction is likely to kill a clause, the switching lemma should work.
The history of switching lemmas traces back to Furst, Saxe and Sipser, but in this project we will be looking at the most powerful of these switching lemmas, which is attributed to Hastad.
\begin{center}
\includegraphics[scale = 0.5]{Hastad_Before.png}
$$\downarrow$$
\includegraphics[scale = 0.5]{Hastad_After.png}
\end{center}
\subsection{Statement of the Lemma}
\begin{theorem}
Let $f$ be some Boolean Function which can be written as some $t$-CNF. Then, for any integer $s \ge 1$, any $p \in [0, 1]$ we have that:
$$P_{\rho \in R(p, 1/2)}[f|_{\rho} \text{ is not an } s\text{-DNF}] \le (5pt)^s$$
Here ``$f|_{\rho}$ is not an $s$-DNF'' means that $f|_{\rho}$ cannot be written as an $s$-DNF.
\end{theorem}
The argument of Hastad's switching lemma involves the probabilistic method; in particular, we argue that the probability that a restriction from the family fails to have the desired properties is strictly less than $1$. For example, we can consider an OR of small ANDs and examine each term in turn. Essentially, a term that is falsified by the restriction does not contribute any variables to the resulting AND of small ORs, and for each term that is not falsified, it is more likely that the term is satisfied than that any variable is contributed to the AND of small ORs.
Note that one very important feature of this lemma is that the probability on the right hand side of the inequality $(5pt)^s$ does \textit{not} depend on the number of variables. The notion of the random restriction is actually key to Hastad's Switching Lemma. Indeed if we consider the threshold function $\text{Th}_k^n(x)$, which is $1$ if and only if:
$$x_1 + x_2 + \cdots + x_n \ge k$$
We can readily see that $\text{Th}_k^n(x)$ can be written as a $k$-DNF and as an $(n - k + 1)$-CNF, but not as an $(n - k)$-CNF. Indeed, if $k$ is fixed and $n$ goes to infinity, this example shows that, in general, a $t$-CNF cannot be written as an $s$-DNF with $s = s(t)$ depending only on $t$.
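For small $n$, these widths can be verified exhaustively. The following sketch checks that $\text{Th}_k^n$ agrees with the natural $k$-DNF (``some $k$ variables are all $1$'') and the natural $(n-k+1)$-CNF (``every $n-k+1$ variables contain a $1$''); the helper name is ours.
\begin{verbatim}
# Brute-force check (small n only) of Th_k^n against its k-DNF and
# (n - k + 1)-CNF representations.
from itertools import combinations, product

def check_threshold(n, k):
    for x in product([0, 1], repeat=n):
        th = int(sum(x) >= k)
        dnf = int(any(all(x[i] for i in S)
                      for S in combinations(range(n), k)))
        cnf = int(all(any(x[i] for i in S)
                      for S in combinations(range(n), n - k + 1)))
        assert th == dnf == cnf
    return True

print(check_threshold(6, 3))   # True
\end{verbatim}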
\subsection{Lower-bound on PARITY}
\textbf{Theorem:} For any $d \ge 2$, $\text{PAR}_n$ cannot be computed by a depth-$d$, size-$2^{cn^{1 / (d - 1)}}$ circuit when $n$ is sufficiently large, where $c$ is a reasonable constant, say $1 / 11$.
\begin{proof}
Assume for the sake of contradiction that $\text{PAR}_n$ can be computed by some depth-$d$ size-$S$ circuit $C(x)$, where $$S < 2^{cn^{1/(d - 1)}}$$ and the constant $c$ will be determined later. Without loss of generality, assume that our circuit $C(x)$ is already in standard form (a depth-$d$ size-$S$ circuit can be transformed into a standard-form circuit of depth $d$ and size $dS$, for which we only need to make the constant $c$ slightly larger, that is, $c + \epsilon$ for any $\epsilon > 0$).
Now view $C(x)$ as a depth-$(d + 1)$ circuit by adding a dummy bottom layer consisting of AND or OR gates of fan-in $1$, alternating with the layer above. The motivation is to apply the switching lemma with $t = 1$. Indeed, we apply the switching lemma to this circuit with parameters $t = 1$, $s = (1 + \delta) \log S$ and $p = 1 / (2c_H)$, where $\delta > 0$ is an arbitrarily small constant and $c_H$ denotes the constant $5$ in Hastad's Switching Lemma. That is, we apply the random restriction $\rho_0 \in R(1/(2c_H), 1/2)$. Each gate in the \textit{second-to-bottom} layer is a $1$-CNF or $1$-DNF, so with probability at least:
$$1 - (c_H pt)^s = 1 - \left(\frac{1}{2}\right)^{(1 + \delta) \log S} = 1 - S^{-(1 + \delta)}$$
that gate can be written as an $s$-DNF (or $s$-CNF). If all the bottom gates can be \textit{switched}, we get a depth-$d$ circuit of bottom fan-in $s = (1 + \delta) \log S$.
Now apply the random restriction $\rho_i \in R(p, 1/2)$ where:
$$p = 1/(2c_H s) = 1 / (2(1 + \delta)c_H \log S)$$
for $i = 1, 2, \cdots, d - 2$. For $\rho_1$: for each gate in the second-to-bottom layer, by the switching lemma, with probability at least:
$$1 - (c_H pt)^s \ge 1 - S^{-(1 + \delta)}$$
this $t$-CNF can be converted into an $s$-DNF (or vice versa), where $t = s = (1 + \delta) \log S$. With each $\rho_i$, the depth of the circuit is reduced by $1$. Finally, it remains to count the number of times we have applied the switching lemma, which is at most $S$, the total number of gates in the original circuit. Applying a union bound, with probability $\ge 1 - S^{-\delta} \to 1$, the circuit $C(x)|_{\rho}$ can be converted into a depth-$2$ size-$S$ circuit, where:
$$\rho = \rho_{d - 2} \circ \cdots \circ \rho_1 \circ \rho_0 \in R\left(\frac{1}{2c_H\,(2c_H(1 + \delta) \log S)^{d - 2}},\ \frac{1}{2}\right)$$
Now that we have proved that $C(x)|_{\rho}$ can be written as a depth-$2$ size-$S$ circuit with high probability, we observe that parity is still a parity function, or the negation of one, after applying \textit{any} restriction, in a possibly smaller number of variables. Let us count the number of free variables after applying $\rho$. The expected number of free variables is:
$$\frac{n}{2c_H(2c_H(1 + \delta)\log S)^{d - 2}}$$
By the Chernoff bound, we claim:
$$m = |\rho^{-1}(*)| > \frac{n}{2c_H(2c_H(1 + 2\delta)\log S)^{d - 2}}$$
with high probability. It is not difficult to prove that any depth-$2$ circuit computing $\text{PAR}_m$ must have bottom fan-in $m$, which implies that:
$$(1 + \delta)\log S \ge m \ge n / (2c_H(2c_H(1 + 3\delta) \log S)^{d - 2})$$
This implies that $S \ge 2^{n^{1/(d - 1)} / ((2 + 3\delta)c_H)}$. Therefore, as long as the constant $c < 1 / (2c_H)$, our theorem holds.
\end{proof}
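The observation that parity survives any restriction, which the proof exploits, can be made concrete in a few lines; the representation of restrictions below matches the earlier sampling sketch.
\begin{verbatim}
# After *any* restriction, PAR_n becomes parity or negated parity on the
# free variables: the fixed bits only contribute a constant offset mod 2.
def restricted_parity(rho, n):
    free = [i for i in range(n) if rho.get(i, '*') == '*']
    fixed_parity = sum(v for v in rho.values() if v != '*') % 2
    return ('parity' if fixed_parity == 0 else 'negated parity'), free

print(restricted_parity({0: 1, 3: 1, 5: 0}, 6))  # ('parity', [1, 2, 4])
\end{verbatim}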
\subsection{Matching Upper Bound on Parity}
Now we present a matching upper bound for PARITY, up to a constant factor in the exponent. Indeed, for depth $3$, we observe that:
$$\text{PAR}_n(x_1, \cdots, x_n) = \text{PAR}_m(\text{PAR}_m(y_1), \cdots, \text{PAR}_m(y_m))$$
where $m = \sqrt{n}$ and $y_1 = (x_1, \cdots, x_m)$, $y_2 = (x_{m + 1}, \cdots, x_{2m})$, etc. For the outer $\text{PAR}_m$, write it as an $m$-CNF of size $2^{m-1} + 1$, and for each inner $\text{PAR}_m$, write it as an $m$-DNF of size $2^{m-1} + 1$; this yields a depth-$4$ circuit of size
$$(1 + m)(2^{m - 1} + 1) \le n2^{\sqrt{n}}$$
To make it depth $3$, simply merge the two layers in the middle, both of which consist of OR gates; the size will not increase. Using the same argument we can prove:
\begin{theorem}
For any $d \ge 2$, $\text{PAR}_n$ can be computed by depth $d$ circuit of size at most $$n2^{n^{1/(d - 1)}}$$
\end{theorem}
Now, if the depth is $d \ge \log n$, it turns out that there exist linear-size circuits computing $\text{PAR}_n$.
\begin{theorem}
$\text{PAR}_n$ can be computed by depth $\lceil \log n \rceil$ circuit of size $2n - 1$
\end{theorem}
\begin{proof}
Build a complete binary tree with $n$ leaves, corresponding to variables $x_1, \cdots, x_n$; all non-leaf nodes are $\text{XOR} = \text{PAR}_2$ gates. Since the depth of the binary tree is $d = \lceil \log n \rceil$, the number of non-leaf nodes is:
$$(n - 2^{d - 1}) + 2^{d - 2} + 2^{d - 3} + \cdots + 1 = n - 1$$
For each $\text{XOR}$ gate we need $2$ gates to implement it (recall that XOR can be written either as a $2$-CNF or as a $2$-DNF, and apply the gate-merging technique again), except for the top gate, for which $3$ gates are needed. The total number of gates is $2(n - 1) + 1 = 2n - 1$.
\end{proof}
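The counting in this proof is easy to tabulate; the sketch below encodes the gate and depth formulas (our own encoding of the argument, not a circuit construction).
\begin{verbatim}
# Gate and depth counts for the balanced XOR-tree construction above.
import math

def parity_tree_gates(n):
    xor_gates = n - 1                # internal nodes of a tree with n leaves
    return 2 * (xor_gates - 1) + 3   # 2 gates per XOR, 3 for the top: 2n - 1

def parity_tree_depth(n):
    return math.ceil(math.log2(n))   # depth measured in XOR levels

for n in [4, 8, 100]:
    print(n, parity_tree_gates(n), parity_tree_depth(n))
\end{verbatim}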
\subsection{Proof of the Switching Lemma}
Let the DNF be $F = T_1 \lor T_2 \lor \cdots \lor T_m$. We restrict the variables in two stages.
\textbf{Stage 1:} Restrict the variables, leaving each one free independently with probability $\sqrt{p}$, $f \to f|_{\rho_1}$:
$$x_i = \begin{cases} 0 \text{ with probability } \frac{1 - \sqrt{p}}{2} \\ x_i \text{ with probability } \sqrt{p} \\ 1 \text{ with probability } \frac{1 - \sqrt{p}}{2} \end{cases}$$
First, consider the terms with fan-in $\ge 4 \log S$. We have that:
$$P[\text{a given term with fan-in} \ge 4 \log S \text{ does not become } 0] \le \left(\frac{1 + \sqrt{p}}{2}\right)^{4 \log S} \le \left(\frac{2}{3}\right)^{4 \log S} \le \frac{1}{S^3}$$
Therefore, we have that:
$$P[\exists \text{ term with fan-in} \ge 4 \log S \text{ that does not become } 0] \le \frac{1}{S^2}$$
Second, consider the terms with fan-in $\le 4 \log S$:
$$P[T_i \text{ depends on} \ge c_0 \text{ variables}] \le (4 \log S)^{c_0}(\sqrt{p})^{c_0} \le \frac{1}{S^3}$$
Therefore it must be the case that:
$$P[\exists \text{ term with fan-in} \le 4 \log S \text{ that depends on} \ge c_0 \text{ variables}] \le \frac{1}{S^2}$$
where $c_0$ is a suitable constant, so after Stage 1 the DNF is (with high probability) also a $c_0$-DNF.
\textbf{Stage 2:} Restrict the variables of $f|_{\rho_1}$, again leaving each one free with probability $\sqrt{p}$, giving $f|_{\rho_1 \cup \rho_2}$:
$$x_i = \begin{cases} 0 \text{ with probability } \frac{1 - \sqrt{p}}{2} \\ x_i \text{ with probability } \sqrt{p} \\ 1 \text{ with probability } \frac{1 - \sqrt{p}}{2} \end{cases}$$
First, suppose that there are many pairwise disjoint terms $T_1, \cdots, T_l$, with $l \ge 3^{c_0}\, 4 \log S$. Then:
$$P[T_i = 1] \ge \left(\frac{1}{3}\right)^{c_0}$$
$$P[T_i \neq 1] \le 1 - \left(\frac{1}{3}\right)^{c_0}$$
$$P[\exists i:\ T_i = 1] \ge 1 - \left(1 - \left(\tfrac{1}{3}\right)^{c_0}\right)^l \ge 1 - 2^{-l / 3^{c_0}} \ge 1 - \frac{1}{S^2}$$
Otherwise, the maximum number of pairwise disjoint $T_i$'s is $\le 4 \log S$. Selecting disjoint $T_i$'s greedily, we find that there exists a set $H$ with $4c_0 3^{c_0} \log S$ variables such that $H \cap T_i \neq \emptyset$ for all $i$.
To finish the proof, we first restrict the variables of $H$, and we can find a constant $b$ such that the number of unset variables in $H$ is $\le b$. We then restrict the variables not in $H$ and use induction on $c_0$.
Razborov later provided a simpler version of this proof, which is presented in \cite{AB07}:
\begin{proof} We will present a simplified proof of the Switching Lemma due to Razborov. Let $R_t$ denote the set of all restrictions that fix exactly $t$ of the $n$ variables, where $t \ge n / 2$. Then:
$$|R_t| = {n \choose t} 2^t$$
The set of \textit{bad restrictions} $\rho$, namely those for which $D(f|_{\rho}) > s$, is a subset of these. To show that this subset is small, we give a one-to-one mapping from it to the Cartesian product of three sets: $R_{t + s}$, the set of restrictions fixing $(t + s)$ variables; a set $\text{code}(k, s)$ of size $k^{O(s)}$ (explained below); and the set $\{0, 1\}^s$. This Cartesian product has size ${n \choose t + s}2^{t + s} k^{O(s)} 2^s$. Thus the probability of picking a bad restriction is bounded by:
$$\frac{{n \choose t + s}2^{t + s}k^{O(s)}2^s}{{n \choose t}2^t}$$
Intuitively, this ratio is small because $k$ and $s$ are to be thought of as constants and $t > n / 2$, and therefore:
$${n \choose t}2^t \gg {n \choose t + s}2^{t + s}$$
Formally, by using the correct constants as well as the approximation ${n \choose a} \sim (ne / a)^a$, we can upper-bound the ratio by:
$$\left(\frac{7(n - t)k}{n}\right)^s$$
Therefore, to prove the Switching Lemma it suffices to describe the one-to-one mapping mentioned above. This uses the notion of a \textit{canonical decision tree} for $f$. We take the $k$-DNF circuit for $f$ and order its terms (i.e., its $\land$ gates) arbitrarily, and within each term we order the variables. The canonical decision tree queries all the variables in the first term in order, then all the variables in the second term, and so on, until the function value is determined.
Suppose that restriction $\rho$ is bad, that is, $D(f|_{\rho}) > s$. The canonical decision tree for $f|_{\rho}$ is defined in the same way as for $f$, using the same order for terms and variables. Since the decision-tree complexity of $f|_{\rho}$ is greater than $s$, there is a path of length at least $s$ from the root to a leaf. This path defines a partial assignment to the input variables; denote it by $\pi$. The rough intuition is that the one-to-one mapping takes $\rho$ to itself plus $\pi$.
Let us reason about the restriction $\rho$. None of the $\land$ gates outputs $1$ under $\rho$; otherwise $f|_{\rho}$ would be determined and would not require a decision tree. Some terms output $0$, but not all, since that would also fix the overall output. Imagine walking down the path $\pi$. Let $t_1$ be the first term that is not set to zero under $\rho$. Then $\pi$ must query all the unfixed variables in $t_1$. Denote the part of the path $\pi$ that deals with $t_1$ by $\pi_1$; that is, $\pi_1$ is an assignment to the variables of $t_1$. Since $f$ is not determined even after $s$ steps in $\pi$, we conclude that $\pi_1$ sets $t_1$ to zero.
Let $t_2$ be the next term not yet set to zero by $\rho$ and $\pi_1$; again, the path must set $t_2$ to zero. Let $\pi_2$ denote the assignment to the variables of $t_2$ along the path; then $\pi_2$ also sets $t_2$ to zero. This process continues until we have dealt with $m$ terms, at which point $\pi$ has reached depth $s$. Each of these terms was not set by $\rho$ and, except perhaps for $\pi_m$, is set to zero after $s$ queries in $\pi$. The disjoint union of the $\pi_i$ contains assignments to at least $s$ variables that were unfixed in $\rho$.
Our mapping will map $\rho$ to:
$$([\rho \cup \sigma_1 \cup \cdots \cup \sigma_m], c, z)$$
where $\sigma_i$ is the unique assignment that makes the term $t_i$ true, $c = c_1 c_2 \cdots c_m$ is in $\text{code}(k, s)$ and $z \in \{0, 1\}^s$. In defining this mapping we are crucially relying on the fact that there is only one way to make a term true, namely to set all of its literals to $1$.
To show that the mapping is one-to-one, we show how to invert it uniquely. This is harder than it looks since \textit{a priori} there is no way to identify $\rho$ from $\rho \cup \sigma_1 \cup \cdots \cup \sigma_m$. The main idea is that the information in $c$ and $z$ allows us to extract $\rho$ from the union.
Suppose that we are given the assignment $\rho \cup \sigma_1 \cdots \cup \sigma_m$. We can plug this assignment into $f$ and then infer which term serves as $t_1$. It is the first one to be set true. The first $k$ bits of $c$, say $c_1$, are an indicator string showing which variables in $t_1$ are set by $\sigma_1$. We can reconstruct $\pi_1$ from $\sigma_1$ using the string $z$, which indicates which of the $s$ bits fixed in the decision tree differ between the $\pi$ assignments and the $\sigma$ assignments.
Having reconstructed $\pi_1$, we can work out which term is $t_2$: it is the first \textit{true} term under the restriction $\rho \cup \pi_1 \cup \sigma_2 \cup \cdots \cup \sigma_m$. The next $k$ bits of $c$, denoted $c_2$, give us $\sigma_2$, and we continue this process until we have processed all $m$ terms and figured out what $\sigma_1, \cdots, \sigma_m$ are. Thus we have figured out $\rho$, so the mapping is one-to-one.
Finally, we define the set $\text{code}(k, s)$: this is the set of all sequences of $k$-bit binary strings in which each string has at least one $1$ bit and the total number of $1$ bits is at most $s$. It can be shown by induction on $s$ that:
$$|\text{code}(k, s)| \le (\frac{k}{\ln 2})^s$$
\end{proof}
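The decision-tree complexity $D(f)$ appearing in this argument can be computed by brute force for tiny $n$, which is a convenient way to experiment with the lemma; representing $f$ as a full truth table is our own choice.
\begin{verbatim}
# Brute-force decision-tree depth (exponential; tiny n only). f is a dict
# mapping each input tuple in {0,1}^n to its value.
from itertools import product

def dt_depth(n, f, fixed=None):
    fixed = fixed or {}
    inputs = [x for x in product([0, 1], repeat=n)
              if all(x[i] == v for i, v in fixed.items())]
    if len({f[x] for x in inputs}) == 1:
        return 0                       # the value is already determined
    return min(1 + max(dt_depth(n, f, {**fixed, i: b}) for b in (0, 1))
               for i in range(n) if i not in fixed)

par3 = {x: sum(x) % 2 for x in product([0, 1], repeat=3)}
print(dt_depth(3, par3))               # 3: parity needs all variables
\end{verbatim}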
\subsection{Proof that PARITY $\notin$ $AC^0$}
We will prove this theorem in two steps: in the first, we show that for any given $AC^0$ circuit there is a low-degree polynomial that approximates the circuit; in the second, we show that parity cannot be approximated by a low-degree polynomial.
\textbf{Lemma:} Every function $f: \mathbb{F}_p^n \to \mathbb{F}_p$ is computed by a unique polynomial of degree at most $p - 1$ in each variable.
\begin{proof}
Given any $a \in \mathbb{F}_p^n$, consider the polynomial:
$$1_a = \prod_{i = 1}^n \prod_{z \in \mathbb{F}_p,\ z \neq a_i} \frac{X_i - z}{a_i - z}$$
We have that:
$$1_a(b) = \begin{cases} 1 \text{ if } a = b \\ 0 \text{ else} \end{cases}$$
Furthermore, this polynomial has degree at most $p - 1$ in each variable. Now, given any function $f$, we can represent $f$ using the polynomial:
$$f(X_1, \cdots, X_n) = \sum_{a \in \mathbb{F}_p^n} f(a) \cdot 1_a$$
To prove that this polynomial is unique, note that the space of polynomials whose degree is at most $p - 1$ in each variable is spanned by the monomials in which the degree of each variable is at most $p - 1$, so it is a space of dimension $p^n$ (i.e., there are $p^n$ such monomials).
Similarly, the space of functions $f$ is also of dimension $p^n$. Therefore the correspondence must be one-to-one.
\end{proof}
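For intuition, the interpolation formula can be evaluated directly. The sketch below computes $1_a(b)$ over $\mathbb{F}_p$ using Python's built-in modular inverse (\texttt{pow(x, -1, p)}, available from Python 3.8, with $p$ prime); the function name is ours.
\begin{verbatim}
# Evaluate the indicator polynomial 1_a at a point b in F_p^n.
def indicator(a, b, p):
    val = 1
    for ai, bi in zip(a, b):
        for z in range(p):
            if z != ai:
                val = val * (bi - z) * pow(ai - z, -1, p) % p
    return val

p = 3
print(indicator((1, 2), (1, 2), p))  # 1: b equals a
print(indicator((1, 2), (0, 2), p))  # 0: b differs from a
\end{verbatim}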
Suppose that we are given a circuit $C \in \text{AC}^0$; we build an approximating polynomial gate by gate. The input gates are relatively straightforward: $x_i$ is a good approximation to the $i$-th input. Similarly, the negation of $f_i$ is the same as the polynomial $1 - f_i$.
The difficult case is a function like $f_1 \lor f_2 \lor \cdots \lor f_t$, which can be computed by a single gate in the circuit. The naive approach would be to use the polynomial $1 - \prod_{i = 1}^t (1 - f_i)$. However, this gives a polynomial whose degree may be as large as the fan-in of the gate, which is too large for our purposes.
We will use the following trick: let $S \subseteq [t]$ be a uniformly random subset, and consider the function $\sum_{i \in S} f_i$. Then we have the following claim.
\textbf{Claim:} If there is some $j$ such that $f_j \neq 0$, then $P_S[\sum_{i \in S} f_i = 0] \le 1 /2$
\begin{proof}
Observe that for every set $T \subseteq [t] \setminus \{j\}$, it cannot be that both:
$$\sum_{i \in T} f_i = 0$$
and:
$$f_j + \sum_{i \in T} f_i = 0$$
Therefore at most half of the subsets can give a zero sum.
\end{proof}
Now note that squaring turns non-zero values into $1$, so let us select independent uniformly random sets $S_1, \cdots, S_l \subseteq [t]$ and use the approximation:
$$g = 1 - \prod_{k = 1}^l(1 - (\sum_{i \in S_k} f_i)^2)$$
\textbf{Claim:} If each $f_i$ has degree at most $r$, then $g$ has degree at most $2lr$ and:
$$P[g \neq f_1 \lor f_2 \lor \cdots \lor f_t] \le 2^{-l}$$
Overall, if the circuit is of depth $h$ and has $s$ gates, then this process produces a polynomial of degree at most $(2l)^h$ that agrees with the circuit on any fixed input except with probability $s2^{-l}$, by the union bound. Therefore, in expectation, the polynomial we produce will compute the correct value on a $1 - s2^{-l}$ fraction of all inputs.
Setting $l = \log^2 n$ we obtain a polynomial of degree $\text{polylog}(n)$ that agrees with the circuit on all but one percent of the inputs.
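A quick Monte Carlo experiment illustrates the construction over $\mathbb{F}_3$. In this sketch the $f_i$ are simply taken to be input bits, so the experiment checks only the single-gate claim; the empirical failure rate should be bounded by roughly $2^{-l}$.
\begin{verbatim}
# Probabilistic polynomial for OR over F_3:
# g = 1 - prod_k (1 - (sum_{i in S_k} f_i)^2) with l random subsets S_k.
import random

def approx_or(bits, l):
    prod = 1
    for _ in range(l):
        S = [i for i in range(len(bits)) if random.random() < 0.5]
        s = sum(bits[i] for i in S) % 3
        prod = prod * (1 - s * s) % 3    # each factor is 0 or 1 over F_3
    return (1 - prod) % 3

t, l, trials, fails = 10, 5, 2000, 0
for _ in range(trials):
    x = [random.randint(0, 1) for _ in range(t)]
    if approx_or(x, l) != int(any(x)):
        fails += 1
print(fails / trials, "vs. the bound", 2 ** -l)
\end{verbatim}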
Now it remains to prove the following theorem.
\textbf{Theorem:} Let $f$ be any polynomial over $\mathbb{F}_3$ in $n$ variables whose degree is $d$. Then $f$ can compute the parity on at most $1/2 + O(d / \sqrt{n})$ fraction of all inputs.
\begin{proof}
Consider the polynomial:
$$g(Y_1, \cdots, Y_n) = f(Y_1 - 1, Y_2 - 1, \cdots, Y_n - 1) + 1$$
The key point is that when $Y_1, \cdots, Y_n \in \{1, -1\}$, if $f$ computes the parity of $n$ bits, then $g$ computes the product $\prod_i Y_i$. Therefore, we have found a degree $d$ polynomial that can compute the same quantity as the product of $n$ variables. We shall show that this computation cannot work on a large fraction of inputs using a counting argument.
Let $T \subseteq \{1, -1\}^n$ denote the set of inputs for which $g(y) = \prod_i y_i$. To complete the proof, it will suffice to show that $T$ consists of at most $1/2 + O(d / \sqrt{n})$ fraction of all strings.
Consider the set of all functions $q : T \to \mathbb{F}_3$. This is a space of dimension $|T|$. We shall show how to compute every such function using a low-degree polynomial.
By the lemma above, every such function $q$ can be computed by a polynomial. Note that in any such polynomial, since $y_i \in \{-1, 1\}$, we have $y_i^2 = 1$, so we can assume that each variable has degree at most $1$. Now suppose $I \subseteq [n]$ is a set of size more than $n / 2$; then for $y \in T$:
$$\prod_{i \in I} y_i = (\prod_{i = 1}^n y_i)(\prod_{i \notin I} y_i) = g(y)(\prod_{i \notin I} y_i)$$
In this way, we can express every monomial of $q$ with low degree terms, and so obtain a polynomial of degree at most $n / 2 + d$ that computes $q$.
The space of all such polynomials is spanned by $\sum_{i = 0}^{n/2 + d} {n \choose i}$ monomials. Therefore we get that:
$$|T| \le \sum_{i = 0}^{n / 2 + d} {n \choose i}$$
$$\le 2^n / 2 + \sum_{i = n/2 + 1}^{n/2 + d} {n \choose i}$$
$$\le 2^n / 2 + O(d \cdot 2^n / \sqrt{n}) = 2^n(1 / 2 + O(d / \sqrt{n}))$$
\end{proof}
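The change of variables at the start of this proof can be checked exhaustively for small $n$:
\begin{verbatim}
# Verify over F_3 that, with y_i in {1, -1} and x_i = y_i - 1 (mod 3),
# PAR(x) + 1 equals the product of the y_i.
from itertools import product

n = 4
for y in product([1, -1], repeat=n):
    x = [(yi - 1) % 3 for yi in y]   # y_i = 1 -> x_i = 0, y_i = -1 -> x_i = 1
    par = sum(x) % 2                 # parity of the 0/1 inputs
    prod_y = 1
    for yi in y:
        prod_y = prod_y * yi % 3
    assert (par + 1) % 3 == prod_y
print("checked all", 2 ** n, "inputs for n =", n)
\end{verbatim}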
\section{Beyond Canonical Hastad \cite{Bea94}}
Whilst uniform restrictions set the surviving variables to $0$ or $1$ with equal probability, we can also consider restrictions where the input variables are set to these values with imbalanced probabilities. This method is used to obtain circuit lower bounds for the CLIQUE problem.
The key idea is to assign weights to restrictions which reflect the probability of each restriction being chosen as a random member of the set. Indeed, suppose that, in the context of Hastad's switching lemma, we wanted to argue that there is a restriction which is strongly biased towards setting bits to $1$ and which keeps the decision-tree height small. The basic switching lemma does not give us this information, because as $n$ gets large it is much more unlikely to get a restriction with even a constant-factor bias than it is to encounter the lemma's failure event.
\subsection{Other Restriction Methods}
The $\text{Stars}_m$ restriction is defined as a map from $\{x_1, \cdots, x_n\}$ to $\{0, 1, *\}$ with exactly $m$ stars, and it behaves similarly to $R_{m / n}$. The corresponding switching lemma is:
$$P[\text{DT}_{\text{depth}}(k\text{-DNF}|_{\text{Stars}_m}) \ge t] \le O((m / n)k)^t$$
Consider the $q$-biased $p$-restriction $R_{p, q}$:
$$R_{p, q}(x_i) = \begin{cases} * \text{ w.p. } p \\ 1 \text{ w.p. } (1 - p)q \\ 0 \text{ w.p. } (1 - p)(1 - q) \end{cases}$$
Now the corresponding switching lemma (for $q \le 0.5$) is as follows:
$$P[\text{DT}_{\text{depth}}(k\text{-DNF}|_{R_{p, q}}) \ge t] \le O(pk / q)^t$$
This is typically used for average case lower bounds under $q$-biased distributions on $\{0, 1\}^n$.
Another switching lemma forms the foundation of the so-called ``clique switching lemma'': here the random restriction is on ${n \choose 2}$ variables, the stars are the edges of a clique on a $p$-random set of vertices, and the non-stars are set to $1$ with probability $q$ and to $0$ with probability $1 - q$.
The corresponding switching lemma for $q \le 0.5$ is as follows:
$$P[\text{DT}_{\text{depth}}(k\text{-DNF}|_{\text{Clique}_{p, q}}) \ge t] \le O(pk / q^{O(k + t)})^t$$
In particular, this provides an $n^{\Omega(k / d^2)}$ lower bound for $k$-$\text{CLIQUE}_n$ on depth-$d$ circuits. Readers are encouraged to consult \cite{Bea94} for a more in-depth analysis of these biased restrictions, their applications and their motivations.
\subsection{Consequences of Parity not in $AC^0$}
The fact that $\text{PARITY} \notin \text{AC}^0$ yields an oracle that separates PSPACE from the Polynomial Hierarchy. Hence, no proof that relativises can be used to show that PSPACE equals the Polynomial Hierarchy. There are many consequences of this result threaded throughout the complexity theory literature; three of the key results are highlighted below.
\textbf{Fourier Concentration:} The Fourier expansion of a Boolean function is its representation as a real multilinear polynomial. Using Hastad's switching lemma, one can show that the Fourier expansion of any function in $AC^0$ has almost all of its mass concentrated on its low-order Fourier coefficients; consequently, $AC^0$ functions can be approximated by low-degree polynomials.
\textbf{Pseudo-Random Generators for $AC^0$:} De-randomization studies the possibility of removing or reducing the amount of randomness used by randomized algorithms while still maintaining their efficiency and correctness. Nisan and Wigderson proved that the randomized analogues of $AC^0$, namely $RAC^0$ and $BPAC^0$, can be de-randomized in poly-logarithmic space and quasi-polynomial time.
\textbf{$AC^0$-Circuit SAT and $\#$SAT Algorithms:} Given an $AC^0$ circuit, determine whether there exists an input $x$ which makes the circuit evaluate to $1$. Impagliazzo, Matthews and Paturi demonstrated that Hastad's Switching Lemma can be used to provide non-trivial algorithms for $AC^0$-circuit SAT.
\subsection{Comparison with Polynomial Method}
The polynomial method for proving lower bounds on PARITY is similar to the method of random restrictions in the sense that we take advantage of a property which is common to all circuits of small size and constant depth but which PARITY does not have. This property is that circuits of small size and constant depth can be represented by low-degree polynomials with high probability.
In particular, we use the following fact and two lemmas to prove the result.
\textbf{Fact:} Any function $g : \{0, 1\}^n \to \mathbb{R}$ has degree at most $d$ if and only if $\hat{g}_{\alpha} = 0$ for all $\alpha$ such that $|\alpha| > d$
\textbf{Lemma 1:} For every circuit $C$ of size $S$ and depth $d$, there is a function $g : \{0, 1\}^n \to \mathbb{R}$ of degree $O((\log S)^{2d})$ such that $g$ and $C$ agree on at least a $3/4$ fraction of $\{0, 1\}^n$.
\begin{proof}
Given a circuit $C$ of size $S$ and depth $d$, for every gate we pick independently an approximating function $g_i$ with parameter $\epsilon = \frac{1}{4S}$ and replace the gate by $g_i$. Then, for a given input, the probability that the new function so defined computes $C(x)$ correctly is at least the probability that the results of all the gates are correctly computed, which is at least $\frac{3}{4}$. In particular, there is a function among those generated this way that agrees with $C(x)$ on at least a $3/4$ fraction of inputs. Each $g_i$ has degree at most $O((\log S)^2)$, because the fan-in of each gate is at most $S$, and the degree of the function defined by the construction is at most $O((\log S)^{2d})$.
\end{proof}
\textbf{Lemma 2:} Let $g : \{0, 1\}^n \to \mathbb{R}$ be a function that agrees with PARITY on at least $3/4$ fraction of $\{0, 1\}^n$. Then the degree of $g$ is $\Omega(\sqrt{n})$
\begin{proof}
Let $g : \{0, 1\}^n \to \mathbb{R}$ be a function of degree at most $t$ that agrees with PARITY on at least $3/4$ fraction of inputs. Let $G : \{-1, 1\}^n \to \mathbb{R}$ be defined as:
$$G(x) = 1 - 2g(\frac{1}{2} - \frac{1}{2} x_1, \cdots, \frac{1}{2} - \frac{1}{2} x_n)$$
Now note that $G$ is still of degree at most $t$, and $G$ agrees with the function $\prod(x_1, \cdots, x_n) = x_1 \cdot x_2 \cdots x_n$ on at least $3/4$ fraction of $\{-1, 1\}^n$.
Define $A$ to be the set of $x \in \{-1, 1\}^n$ such that $G(x) = \prod(x)$:
$$A = \{x : G(x) = \prod_{i = 1}^n x_i \}$$
Then $|A| \ge \frac{3}{4} \cdot 2^n$ by our initial assumption. Now consider the set $F$ of all functions $f : A \to \mathbb{R}$. These form a vector space of dimension $|A|$ over the reals. We know that any function $f$ in this set can be written as:
$$f(x) = \sum_{\alpha} \hat{f}_{\alpha} \prod_{i \in \alpha} x_i$$
Over $A$, $G(x) = \prod_{i = 1}^n x_i$ and therefore for $x \in A$:
$$\prod_{i \in \alpha} x_i = G(x) \prod_{i \notin \alpha} x_i$$
Now, by our initial assumption, $G(x)$ is a polynomial of degree at most $t$. Therefore, for every $\alpha$ such that $|\alpha| \ge \frac{n}{2}$, we can replace $\prod_{i \in \alpha} x_i$ by a polynomial of degree less than or equal to $t + \frac{n}{2}$. Hence every function $f$ belonging to $F$ can be written as a polynomial of degree at most $t + \frac{n}{2}$, and so the set $\{\prod_{i \in \alpha} x_i\}_{|\alpha| \le t + \frac{n}{2}}$ spans $F$. As there must be at least $|A|$ such monomials, this implies that:
$$\sum_{k = 0}^{t + \frac{n}{2}} {n \choose k} \ge \frac{3}{4} \cdot 2^n$$
And in particular:
$$\sum_{k = \frac{n}{2}}^{t + \frac{n}{2}} {n \choose k} \ge \frac{1}{4} \cdot 2^n$$
Now we know from Stirling's approximation that every binomial coefficient ${n \choose k}$ is at most $O(2^n / \sqrt{n})$, and therefore we obtain:
$$O(\frac{t}{\sqrt{n}} \cdot 2^n) \ge \frac{1}{4} \cdot 2^n$$
This implies that $t = \Omega(\sqrt{n})$.
\end{proof}
And the theorem is subsequently proved as follows.
\begin{proof}
From Lemma $1$, we have that there is a function $g : \{0, 1\}^n \to \mathbb{R}$ that agrees with PARITY on a $3/4$ fraction of $\{0, 1\}^n$, whose degree is at most $O((\log S)^{2d})$. From Lemma $2$, we can deduce that the degree of $g$ must be at least $\Omega(\sqrt{n})$ so that:
$$(\log S)^{2d} = \Omega(\sqrt{n})$$
which is equivalent to:
$$S = 2^{\Omega(n^{1/4d})}$$
\end{proof}
In comparison with the polynomial method, the method of random restrictions proves a stronger and tighter lower bound. Furthermore, it uses a property of the parity function which is shared by other functions; it is therefore truly a general method for attacking circuit lower bounds.
The polynomial method can also accomplish this, and it has the further advantage that it can be applied to any circuit model in which gates can be approximated by low-degree polynomials. For example, consider the $\text{AC}^0[3]$ model, in which we have NOT gates, unbounded fan-in AND and OR gates, and also MOD3 gates that, given Boolean inputs $x_1, \cdots, x_n$, output $1$ if and only if $\sum_i x_i \equiv 1 \text{ or } 2 \pmod 3$.
The method of random restrictions cannot be applied to such circuits, because a MOD3 gate has a value that remains undetermined as long as at least three variables are not fixed, and it requires a CNF of size exponential in the number of non-fixed variables. One can prove that every $\text{AC}^0[3]$ circuit of depth $d$ that computes PARITY must have size at least $2^{\Omega(n^{1/(4d)})}$.
\section*{References}
\beginrefs
\bibentry{AB07} {\sc S. Arora, B. Barak}, Computational Complexity: A Modern Approach, \textit{Princeton University} (2007) \url{https://theory.cs.princeton.edu/complexity/book.pdf}
\bibentry{ASWZ20} {\sc R. Alweiss, S. Lovett, K. Wu, J. Zhang}, Improved Bounds for the Sunflower Lemma, \textit{Princeton University, UCSD, Peking University, Harvard University} (2019) \url{https://dl.acm.org/doi/pdf/10.1145/3357713.3384234}
\bibentry{Bea94} {\sc P. Beame}, A Switching Lemma Primer, \textit{University of Washington} (1994) \url{https://www.cs.toronto.edu/~toni/Courses/Complexity2015/handouts/primer.pdf}
\bibentry{Bor72} {\sc A. Borodin}, Computational Complexity and the Existence of Complexity Gaps, \textit{University of Toronto} (1972) \url{https://dl.acm.org/doi/pdf/10.1145/321679.321691}
\bibentry{BH09} {\sc P. Beame, D. Huynh-Ngoc}, Multiparty Communication Complexity and Threshold Circuit Size of $AC^0$, \textit{The University of Washington Department of Computer Science} (2009) \url{https://homes.cs.washington.edu/~beame/papers/multiac0j.pdf}
\bibentry{Coh12} {\sc G. Cohen}, A Taste of Circuit Complexity Pivoted at $\text{NEXP} \subsetneq \text{ACC}^0$ (and more), \textit{Weizmann Institute of Science} (2012)
\bibentry{CRTY19} {\sc L. Chen, R. Rothblum, R. Tell, E. Yogev}, On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds, \textit{Electronic Colloquium on Computational Complexity} (2019) \url{https://eccc.weizmann.ac.il/report/2019/169/}
\bibentry{Fur08} {\sc J. Furtado}, An Introduction to Computational Complexity, \textit{MIT 6.080: Great Ideas in Theoretical Computer Science}, \textit{Massachusetts Institute of Technology}
\bibentry{GNW99} {\sc O. Goldreich, N. Nisan, A. Wigderson}, On Yao's XOR-Lemma, \textit{Electronic Colloquium on Computational Complexity} (1999), pp. 10-29 \url{http://www.wisdom.weizmann.ac.il/~oded/COL/yao.pdf}
\bibentry{Gol05} {\sc O. Goldreich}, Texts in Computational Complexity: $\text{P}/\text{Poly}$ and $\text{PH}$, \textit{Department of Computer Science and Applied Mathematics, Weizmann Institute Israel} (2005)
\bibentry{GP14} {\sc J. Grochow, T. Pitassi}, Circuit Complexity, Proof Complexity and Polynomial Identity Testing, (2014)
\bibentry{GW04} {\sc O. Goldreich, A. Wigderson}, Computational Complexity, \textit{Weizmann Institute of Science}, \textit{Institute of Advanced Study} (2004)
\bibentry{Jun12} {\sc S. Jukna}, Boolean Function Complexity: Advances and Frontiers, \textit{Algorithms and Combinatorics}, Vol. $27$ (2012) ISBN: $978$-$3$-$642$-$24507$-$7$
\bibentry{L20} {\sc S. Lovett}, CSE200: Complexity Theory, Circuit Lower Bounds, \textit{UC San Diego Department of Computer Science}
\bibentry{Mos16} {\sc D. Moshkovitz}, Derandomization Implies Circuit Lower Bounds, \textit{MIT: Advanced Complexity Theory} (2016) \url{https://ocw.mit.edu/courses/mathematics/18-405j-advanced-complexity-theory-spring-2016/lecture-notes/MIT18_405JS16_CircuitLower.pdf}
\bibentry{Mor19} {\sc H. Morizumi}, Some Results on the Circuit Complexity of Bounded Width Circuits and Non-deterministic Circuits, \textit{Shimane University} (2019) \url{https://arxiv.org/pdf/1811.01347.pdf}
\bibentry{RST15} {\sc B. Rossman, R. A. Servedio, L.-Y. Tan}, An Average-Case Depth Hierarchy Theorem for Boolean Circuits, \textit{Proceedings of the 56th Annual Symposium on Foundations of Computer Science} (2015) \url{https://arxiv.org/pdf/1504.03398.pdf}
\bibentry{TTV09} {\sc L. Trevisan, M. Tulsiani, S. Vadhan}, Regularity, Boosting and Efficiently Simulating Every High-Entropy Distribution, \textit{Harvard SEAS} (2009), pp. 1-11 \url{https://people.seas.harvard.edu/~salil/research/regularity-ccc09.pdf}
\bibentry{Wig19} {\sc A. Wigderson}, Mathematics and Computation: A Theory Revolutionizing Technology and Science, \textit{Princeton University Press} (2019) \url{https://www.math.ias.edu/files/Book-online-Aug0619.pdf}
\bibentry{Weg87} {\sc I. Wegener}, The Complexity of Boolean Functions, \textit{Johan Wolfgang Goethe-Universitat} (1987)
\bibentry{Wil10} {\sc R. Williams}, Non-uniform $ACC$ Circuit Lower Bounds, \textit{IBM Almaden Research Center} (2010) \url{https://www.cs.cmu.edu/~ryanw/acc-lbs.pdf}
\bibentry{Wil14} {\sc R. Williams}, Algorithms for Circuits and Circuits for Algorithms: Connecting the Tractable and Intractable, \textit{IEEE Conference on Computational Complexity} (2014) \url{https://people.csail.mit.edu/rrw/ICM-survey.pdf}
\end{list}
\end{document}
\section{Introduction}
We study the bit complexity of finding approximate solutions to the following problems, where the input is assumed to be given {exactly} as a rational matrix, collection of matrices, or polynomial.
\begin{enumerate}
\item {\bf Jordan Normal Form.} Given a complex matrix $A$, find a similarity $V$ such
that $A=VJV^{-1}$ where $J$ is a direct sum of {\em Jordan blocks}, i.e., matrices of type
$$ J_\lambda:=\bm{ \lambda & 1 & 0 & 0 &\ldots \\ 0 & \lambda & 1 & 0 & \ldots\\
0 & 0 & \lambda & 1 & \ldots\\ \vdots \\ 0 & 0 & 0 &\ldots & \lambda}$$
for eigenvalues $\lambda\in {\mathbb C}$ of $A$.
Here $J$
is unique up to permutations but $V$ is not if there are
eigenspaces of dimension greater than one. This theorem is taught in
undergraduate linear algebra courses and has myriad
applications throughout science and mathematics.
\item {\bf Spectral Factorization.} Given an $n\times n$ monic matrix polynomial
$$P(x)=x^{2d}I+\sum_{i\le 2d-1} x^i P_i$$ with Hermitian coefficients
$P_i\in{\mathbb C}^{n\times n}$ satisfying $P(x)\succeq 0$ for $x\in{\mathbb R}$, find a monic
matrix polynomial $Q(x)=x^dI+\sum_{i\le d-1}x^iQ_i$ such that
$P(x)=Q^*(x)Q(x)$ and $\det(Q(x))$ has all of its zeros in the
closed upper half complex plane (where $Q^*(x)=x^dI+\sum_{i\le d-1} x^{i}Q_i^*$). Such a $Q(x)$ is guaranteed to exist and is unique \cite{rosenblatt1958multi, yakubovich1970factorization}.
This fact has been rediscovered several times and goes under many names
(such as matrix F\'ejer-Riesz/Wiener-Hopf factorization and matrix polynomial sum of squares) in different fields.
Note that the $d=0$ case is just the Cholesky factorization if we do not insist that
$P_{2d}=Q_d=I$, and the $n=1$ case is the fact that a univariate scalar polynomial nonnegative on ${\mathbb R}$ may be expressed as a sum of squares (which can
be obtained by considering the real and imaginary parts of $Q(x)$). A small numerical illustration of the identity $P(x) = Q^*(x)Q(x)$ is given after this list.
\end{enumerate}
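The following numerical sketch (using \texttt{numpy}, with randomly generated coefficients $Q_i$ of our own choosing) illustrates the easy direction of the identity referred to in item 2: for real $x$, any product $Q^*(x)Q(x)$ is automatically Hermitian and positive semidefinite. It does not implement the factorization itself.
\begin{verbatim}
# For real x, P(x) = Q^*(x) Q(x) is Hermitian positive semidefinite,
# where Q^*(x) carries the conjugate-transposed coefficients of Q(x).
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 2
Q = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
     for _ in range(d)]              # Q_0, ..., Q_{d-1}; leading coefficient I

def eval_Q(x):
    return x**d * np.eye(n) + sum(x**i * Qi for i, Qi in enumerate(Q))

for x in rng.uniform(-5, 5, size=4):
    P = eval_Q(x).conj().T @ eval_Q(x)           # equals Q^*(x) Q(x), x real
    assert np.allclose(P, P.conj().T)            # Hermitian
    assert np.linalg.eigvalsh(P).min() >= -1e-9  # positive semidefinite
print("P(x) = Q^*(x) Q(x) is Hermitian PSD at the sampled real points")
\end{verbatim}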
Both of the above problems have generated a large literature and
several proposed methods for solving them (see Section \ref{sec:related} for a thorough
discussion). Roughly speaking, these methods range on a spectrum between
symbolic (relying on algebraic reasoning, performing exact computations with
rational numbers, polynomials, field extensions, etc.) and numerical (relying on
analytic reasoning, semidefinite optimization, homotopy continuation, etc.). With one
exception in the case of problem (1) \cite{cai1994computing}, to the best of our knowledge none of
these methods has been rigorously shown to yield a polynomial time algorithm.
This paper provides the first polynomial time bit complexity bounds for problem (2), and
significantly improves the best known bound for (1). The algorithms we study
are simple and the algorithmic ideas employed are not essentially new; rather, our main contribution is to synthesize ideas
from both the symbolic and numerical approaches to these problems, which have
in the past developed largely separately across different fields over several decades, in a way which enables good bit complexity estimates.
At a technical level, the main task is to find good bounds on both the bit lengths of
rational numbers and on the condition numbers of matrices appearing during the execution
of the algorithms. A key theme of our proofs is that bit length bounds can be used to
obtain condition number bounds and {\em vice versa}, and that carefully passing between the two
is more effective than either one alone.
Our two main results, advertised in the abstract, appear in Sections \ref{sec:jnf} and \ref{sec:specfact} as Theorems \ref{thm:jnfmain} and \ref{thm:specfactmain}. Additional preliminaries for each result are included in its section, and further history and context for our contributions is discussed in Section \ref{sec:related}. Two notable common features of our results are:
\begin{itemize}
\item Our algorithms have good {\em forward error} bounds, i.e., they compute approximations to the exact solution of the given instance (as opposed to backward error, computing exact solutions of nearby instances, which is the standard notion in scientific computing). This notion of error is appropriate for mathematical (as opposed to scientific) applications where discontinuous quantities in the input (such as the size of a Jordan block) can be meaningful, but typically comes at the cost of higher running times resulting from the use of numbers with large bit length\footnote{The most well-known example of this phenomenon is that performing Gaussian elimination on an integer matrix exactly takes $O(n^{4}a)$ time \cite{grotschel2012geometric} for an invertible integer matrix with $a-$bit entries.}.
\item The running times of our algorithms are bounded solely in terms of the number of bits used to specify the input. This type of result is easier to use than bounds depending on difficult to compute condition numbers, especially for such ill-conditioned problems. As such, the key phenomenon enabling our results is that {\em instances of controlled bit length cannot be arbitrarily ill-conditioned} in an appropriate sense.
\end{itemize}
We conclude with a discussion and open problems in Section \ref{sec:discussion}.
\subsection{Comparison to Related Work}\label{sec:related}
\paragraph{Jordan Normal Form.} As far as we are aware, the only known polynomial bit complexity algorithm for approximately computing the JNF $A=VJV^{-1}$ of a general square complex matrix $A\in{\mathbb C}^{n\times n}$ with $a-$bit entries is \cite{cai1994computing}, obtaining a runtime of $O(\mathrm{poly}(n,a))$ where the degree of the polynomial is not specified but is seen to be at least twelve\footnote{The related paper \cite{ar1994reliable} proposed using JNF as an ``uncheatable benchmark'' for certifying that a device has high computational power.}.
In the symbolic computation community, the works \cite{kaltofen1986fast, ozello1987calcul, gil1992computation, giesbrecht1995nearly, roch1996fast,li1997determining} gave polynomial {\em arithmetic} complexity\footnote{The works \cite{ozello1987calcul, gil1992computation} derived bit complexity bounds for certain special cases of input matrices, but not in general.} bounds for computing the ``rational Jordan form'' of a matrix over any field. Roughly speaking, the rational Jordan form involves a symbolic representation of the matrix $J$ where the eigenvalues are represented in terms of their minimal polynomials over the field. These results are not adequate for our application to spectral factorization, which requires inverting submatrices of the similarity $V$, an operation which becomes difficult in the symbolic representation. Nonetheless our JNF algorithm is heavily inspired by the ideas in these works, relying on the same reduction to Frobenius canonical form (expressing $A$ as a direct sum of companion matrices) used in essentially all of them. The main difference is that we compute the eigenvalues approximately using numerical techniques \cite{pan2002univariate}, and are able to bound the condition number of $V$ by controlling the minimum gap between distinct eigenvalues as a function of the bit length of the input matrix.
Methods for computing the JNF must inherently involve a symbolic component since the Jordan structure can be changed by infinitesimal perturbations. It is worth mentioning that JNF is still not a solved problem ``in practice'', as trying to compute the JNF of a $50\times 50$ matrix using standard software packages reveals.
\paragraph{Spectral Factorization.} Polynomial spectral factorization has been rediscovered many times. The earliest references we are aware of are \cite{rosenblatt1958multi,helson1958prediction, yakubovich1970factorization,rosenblum1971factorization, choi1995sums}; the reader may consult any of the excellent surveys \cite{sayed2001survey,aylward2007explicit,dritschel2010operator,janashia2013matrix} for a detailed discussion of the history. More recently, several constructive proofs of the spectral factorization theorem have been proposed e.g. \cite{hardin2004matrix,aylward2007explicit,ephremidze2009simple,janashia2011new,ephremidze2014elementary,ephremidze2017algorithmization}, \cite[\S 2]{bakonyi2011matrix} (this list is not meant to be comprehensive). While these may be considered constructive from a mathematical standpoint, bit complexity bounds are not pursued and are not readily evident from the techniques used\footnote{This is due in each case to one or more of the following operations: inverting a linear system without bounding its condition number, computing eigenvalues or roots of a univariate polynomial or system of multivariate polynomials ``exactly'', solving a semidefinite program without controlling the volume of its feasible region, computing a Schur or Jordan form of a matrix ``exactly'', using an iterative scheme with no rigorous proof of convergence, and assuming arithmetic is carried out in infinite precision.}. Two particularly simple algorithms on this list are \cite{aylward2007explicit} (which requires exactly computing the Schur form of a certain matrix and inverting some of its submatrices) and \cite[\S 2]{bakonyi2011matrix} (which requires solving a semidefinite program). We remark that unlike JNF, spectral factorization is actually a problem that is frequently solved in practice, with several of the papers above including numerical experiments.
The work most relevant to this paper is \cite{gohberg1980spectral} (see also \cite{langer1976factorization}), which reduces spectral factorization to computing the JNF of a block companion matrix (see Section \ref{sec:specfact} for a definition), and to inverting and multiplying some matrices derived from it. Our contribution is to analyze the conditioning of this approach and combine it with our JNF algorithm, yielding concrete bit complexity bounds.
One notable advantage of our algorithm is that it works even when the input is degenerate --- i.e., $P(x)$ is only positive semidefinite rather than positive definite. This is in contrast to almost all of the works mentioned above, which only consider the strictly positive definite case (or even require all roots of $\det(P(x))$ to be distinct) and appeal to nonconstructive limiting arguments to handle the degenerate case.
A more stringent variant of the problem is to find a real factorization $P(x)=Q^T(x)Q(x)$ in the case when $P(x)$ is real symmetric, possibly allowing $Q(x)$ to be rectangular. The recent works \cite{blekherman2019low,hanselka2019positive} have obtained optimal bounds on the dimensions of $Q(x)$. In this paper, we restrict our attention to the Hermitian setting.
\begin{remark}[Applications to Real Algebraic Geometry] Besides control theory, spectral factorization has several applications in real algebraic geometry --- in particular it is a key step in a constructive proof of the existence of a definite determinantal representation for every hyperbolic polynomial in three variables \cite{grinshpan2016stable}. \end{remark}
\subsection{Preliminaries}\label{sec:prelim}
\renewcommand{\b}{\mathsf{dy}}
\newcommand{\mathsf{round}}{\mathsf{round}}
{\em Asymptotic Notation.} We will use $\O(\cdot)$ to suppress logarithmic factors in the input parameters
$n$ (dimension), $a$ (bit length of input numbers), $d$ (degree), and $b$ (desired bits of accuracy). \\
\noindent {\em Numbers and Arithmetic.} We say that $x\in \mathbb Z\bit{a}$ if $x$ is an integer with bit length at most $\O(a)$ (logarithmic factors are not the focus of this paper and can be safely ignored everywhere). We use ${\mathbb Q}\bit{a/c}$ to denote the rationals $p/q$ with $p\in\mathbb Z\bit{a},q\in\mathbb Z\bit{c}$, and ${\mathbb Q}_\b\bit{a/c}$ to denote the elements of ${\mathbb Q}\bit{a/c}$ with denominator equal to a power of two; the latter will sometimes be useful since adding rationals with dyadic denominators does not increase the bit length of the denominator. For a rational $x$, let $\mathsf{round}_c(x)$ denote the nearest rational with denominator $2^c$, which clearly satisfies
\begin{equation}\label{eqn:round}
|x-\mathsf{round}_c(x)|\le 2^{-c}\end{equation}
and can be computed in time nearly linear in the bit length of $x$.
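For example, $\mathsf{round}_2(1/3)=1/4$: among the multiples of $2^{-2}$, the value $1/4$ is nearest to $1/3$, and indeed $|1/3-1/4|=1/12\le 2^{-2}$.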
This notation extends to complex numbers with rational real and imaginary parts in the natural way. The bit complexity of arithmetic with rational numbers is nearly linear in the bit length (see e.g. \cite{grotschel2012geometric}).\\
\newcommand{\mathbb{K}}{\mathbb{K}}
\noindent{\em Matrices.} We use $\mathbb Z^{n\times n}\bit{a}$ to denote integer matrices with entries of bit length $\O(a)$, and ${\mathbb Q}^{n\times n}\bit{a/c}$ (similarly ${\mathbb Q}^{n\times n}_\b\bit{a/c}$, ${\mathbb C}^{n\times n}\bit{a/c}$, and ${\mathbb C}^{n\times n}_\b\bit{a/c}$) to denote matrices with entries in ${\mathbb Q}\bit{a/c}$ having a {\em common denominator}. We record the following easy facts about inverses as well as products and sums of pairs of matrices\footnote{We do not rely on matrix arithmetic with a superconstant number of matrices in this paper, for which the bit length bounds necessarily depend on the number of matrices.}, which follow from the adjugate formula for the inverse and the assumption on common denominators\footnote{Allowing distinct denominators in the entries of $A,B$ could increase the bit lengths of $AB$ and $A+B$ by a factor of $n$ if the denominators are, say, relatively prime.}:
\begin{fact}[Bit Length of Matrix Arithmetic]\label{fact:arithmetic}
\begin{enumerate}
\item If $A\in\mathbb Z^{n\times n}\bit{a}$ then $A^{-1}\in {\mathbb Q}^{n\times n}\bit{an/an}$.
\item If $A\in\mathbb{K}^{n\times n}\bit{a/c}$ then $A^{-1}\in \mathbb{K}^{n\times n}\bit{c+an/an}$, for $\mathbb{K}={\mathbb Q},{\mathbb Q}_\b,{\mathbb C},{\mathbb C}_\b$.
\item If $A,B\in\mathbb{K}^{n\times n}\bit{a/c}$ then
$$A+B\in \mathbb{K}^{n\times n}\bit{a/c}\quad\textrm{and}\quad AB\in \mathbb{K}^{n\times n}\bit{a/c},$$
for $\mathbb{K}={\mathbb Q},{\mathbb Q}_\b,{\mathbb C},{\mathbb C}_\b$.
\end{enumerate}
\end{fact}
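As a sanity check of Item 3: writing $A=\tilde A/q$ and $B=\tilde B/q$ with $\tilde A,\tilde B\in\mathbb Z^{n\times n}\bit{a}$ and common denominator $q\in\mathbb Z\bit{c}$, we have $A+B=(\tilde A+\tilde B)/q$ and $AB=\tilde A\tilde B/q^2$, where each entry of $\tilde A\tilde B$ is a sum of $n$ products of $\O(a)$-bit integers and hence has bit length $\O(a)$, while $q^2$ has bit length $\O(c)$.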
We use $\|\cdot\|$ to denote the operator norm and $\maxn{\cdot}$ to denote the entrywise maximum norm, noting that $\maxn{M}\le \|M\|\le n\maxn{M}$ for an $n\times n$ matrix $M$.
We use $\kappa(M):=\|M\|\|M^{-1}\|$ to denote the condition number of an invertible matrix. We will frequently use the elementary fact:
\begin{equation}\label{eqn:kappainv}
\|(M+E)^{-1}-M^{-1}\| \le \frac{\|E\|\|M^{-1}\|}{1-\|E\|\|M^{-1}\|}\cdot \|M^{-1}\|
\end{equation}
provided $\|E\|\|M^{-1}\|<1$, which follows from a Neumann series argument, as well as
\begin{equation}\label{eqn:kappakappa}
\kappa(M+E) \le \kappa(M)\frac{1+\|E\|\|M^{-1}\|}{1-\|E\|\|M^{-1}\|}
\end{equation}
whenever $\|E\|\|M^{-1}\|<1$. We will also appeal to the bound
\begin{equation}\label{eqn:invbound}\|A^{-1}\|\le n!2^{an}\quad\textrm{whenever $A\in \mathbb Z^{n\times n}\bit{a}$ is invertible},\end{equation}
which is easily seen from the adjugate formula for the inverse: each entry of the adjugate is an $(n-1)\times(n-1)$ minor of $A$, bounded in magnitude by $(n-1)!\,2^{a(n-1)}$, while $|\det(A)|\ge 1$ since $A$ is an invertible integer matrix.\\
\noindent {\em Polynomials.} We use $\mathrm{mingap}(\cdot)$ to indicate the minimum gap between {\em distinct} roots of a polynomial. We use $\maxn{P(\cdot)-Q(\cdot)}$ to denote the coefficient-wise $\maxn{\cdot}$ norm of the difference of two matrix polynomials.
\section{Jordan Normal Form}\label{sec:jnf}
The {\em companion matrix} of a scalar monic polynomial $p(x) = x^d+\sum_{i<d} p_ix^i$ is the $d\times d$ matrix:
\begin{equation}\label{eqn:scalarcomp}
C_p:=\bm{ 0& 1 & & & & \\
0& 0& 1& & & \\
& & \vdots & & \\
& & & & &1 \\
-p_0& -p_1& -p_2& -p_3& \ldots& -p_{d-1}}^T
\end{equation}
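For instance, for $p(x)=x^2-3x+2=(x-1)(x-2)$ we have $p_0=2$, $p_1=-3$, and
\begin{equation*}
C_p=\bm{ 0& 1\\ -2& 3}^T=\bm{ 0& -2\\ 1& 3},
\end{equation*}
whose eigenvalues are $1$ and $2$, the roots of $p$.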
It is easily seen that $\det(xI-C_p)=p(x)$. The high level idea of our algorithm is to use symbolic techniques to reduce the input matrix to a direct sum of companion matrices, and then use explicit formulas and root finding algorithms to compute the JNF of the companion matrices. We will rely on the following tools.
\begin{theorem}[Exact Frobenius Canonical Form, \cite{giesbrecht2002computing} {Theorems 2.2 \& 3.2}]\label{thm:storjohann} There is a randomized Las Vegas algorithm which given $A\in \mathbb Z^{n\times n}\bit{a}$, outputs a matrix $F\in \mathbb Z^{n\times n}\bit{an}$ which is a direct sum of companion matrices and an invertible $U\in\mathbb Z^{n\times n}\bit{an^2}$ satisfying $A=UFU^{-1}$, with an expected running time of $\O(n^5a+n^4a^2)$ bit operations.
\end{theorem}
\begin{theorem}[Approximate Polynomial Roots in the Unit Disk, \cite{pan2002univariate} Corollary 2.1.2\footnote{The parameter $b'$ here corresponds to $b/n$ in \cite{pan2002univariate}}] \label{thm:pan} There is an algorithm which given the coefficients of a polynomial $p\in{\mathbb Q}[x]$ of degree $n$ with all roots $z_1,\ldots,z_n\in{\mathbb C}$ satisfying $|z_i|\le 1$ and a parameter $b'\ge \log n$, computes numbers $z_1',\ldots,z_n'\in {\mathbb C}_\b\bit{b'}$ such that $|z_i'-z_i|\le 2^{2-b'}$ for all $i\le n$, using at most $\O(n^2b')$ bit operations.\end{theorem}
\begin{theorem}[Minimum Gap of Integer Polynomials, \cite{mahler1964inequality}] \label{thm:mignotte} If $p\in \mathbb Z[x]\bit{a}$ then $$\mathrm{mingap}(p)\ge 2^{-an-2n\log n}.$$
\end{theorem}
\begin{corollary}[Approximate Roots and Multiplicities of Integer Polynomials]\label{cor:rootmult} There is an algorithm which given an integer polynomial $p\in \mathbb Z[x]\langle a\rangle$ of degree $n\ge 2$ with roots $z_1,\ldots,z_n\in{\mathbb C}$ and a parameter $b'\ge an+4n\log n$, computes numbers $z_1',\ldots,z_n'\in {\mathbb C}_\b\bit{(a+b')/b'}$ such that $|z_i'-z_i|<2^{-b'}$ for all $i\le n$, using at most $\O(n^2(b'+a))$ bit operations. Each $z_i'$ appears a number of times exactly equal to the multiplicity of $z_i$ in $p(x)$.
\end{corollary}
\begin{proof} The largest root of $p(x)$ has magnitude at most the sum of the absolute values of its coefficients, which is at most $M=n2^a$. Apply Theorem \ref{thm:pan} to the polynomial $p(Mx)$, which has roots in the unit disk, with error parameter $b'+\lg(M)=b'+\lg(n)+a+1$ to obtain numbers $\tilde{z_1},\ldots,\tilde{z_n}\in {\mathbb C}\langle (a+b')/(a+b')\rangle={\mathbb C}\langle b'/b'\rangle$ (since $b'\ge a$) with common denominator. Then for $i=1,\ldots n$ we have $z_i':=\tilde{z_i} M \in {\mathbb C}\langle (a+b')/b'\rangle$ with common denominator and $|z_i'-z_i|\le 2^{-b'}$. By Theorem \ref{thm:mignotte} the minimum gap
between distinct $z_i$ is at least $2^{-b'+1}$ since $2n\log n \ge 1$ so this is sufficient
to correctly determine the multiplicity of each $z_i$ and replace all $z_i'$ corresponding to a root with the same value.
\end{proof}
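The multiplicity-determination step in the proof above amounts to clustering the approximate roots at a scale below half the minimum gap. A minimal illustrative sketch (ours, in floating point rather than the dyadic arithmetic used by the actual algorithm):
\begin{verbatim}
def cluster_roots(approx_roots, gap):
    # Greedy clustering of approximate roots: two approximations
    # correspond to the same exact root precisely when they lie within
    # gap/2 of each other, since the approximation error is much
    # smaller than the minimum gap between distinct roots.
    clusters = []   # list of (representative, multiplicity) pairs
    for z in approx_roots:
        for i, (r, m) in enumerate(clusters):
            if abs(z - r) < gap / 2:
                clusters[i] = (r, m + 1)
                break
        else:
            clusters.append((z, 1))
    return clusters
\end{verbatim}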
We will also use an explicit formula for the JNF of a companion matrix, in which the similarity is a confluent Vandermonde matrix in the roots of the corresponding polynomial (see e.g. \cite{gautschi1962inverses, batenkov2012norm} for a discussion).
\begin{theorem}[\cite{brand1964companion}] \label{thm:brand} If $C\in{\mathbb C}^{n\times n}$ is a companion matrix with distinct eigenvalues $\lambda_1,\ldots \lambda_k\in{\mathbb C}$ of multiplicities $m_1,\ldots,m_k$, then $C=WJW^{-1}$ with
$$J = \oplus_{j\le k} J_{\lambda_j}$$
$$W = [W_{\lambda_1}, W_{\lambda_2},\ldots, W_{\lambda_k}]$$
where $J_{\lambda_j}$ is an $m_j\times m_j$ Jordan block with eigenvalue $\lambda_j$ and $W_{\lambda_j}$ is the $n\times m_j$ matrix:
\begin{equation}\label{eqn:brand} W_{\lambda_j}:=\bm{ 1 & 0 & 0 & \ldots & 0\\
\lambda_j & 1 & 0 & \ldots & 0\\
\lambda_j^2 & 2\lambda_j & 1 & \ldots & 0\\
\lambda_j^3 & 3\lambda_j^2 & \binom{3}{2}\lambda_j & \ldots & 0\\
& & \vdots & &\\
\lambda_j^{n-1} & (n-1)\lambda_j^{n-2} & \binom{n-1}{2}\lambda_j^{n-3} & \ldots & \binom{n-1}{m_j-1}\lambda_j^{n-m_j}},
\end{equation}
so that $W$ is a confluent Vandermonde matrix.
\end{theorem}
Note that the entries of $W_\lambda$ in \eqref{eqn:brand} are univariate polynomials of degree $n$ in the $\lambda_i$ with coefficients in $\mathbb Z\bit{n}$.
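As a sanity check of Theorem \ref{thm:brand}, the following sketch (ours, in floating point) assembles $W$ and $J$ from a list of eigenvalues with multiplicities and verifies the similarity numerically; note that we use the bottom-row orientation of the companion matrix, i.e., the matrix in \eqref{eqn:scalarcomp} before transposition, for which the identity holds with upper-triangular Jordan blocks:
\begin{verbatim}
import numpy as np
from math import comb

def brand_blocks(eigs):
    # eigs: list of (eigenvalue, multiplicity) pairs with multiplicities
    # summing to n.  Returns (W, J) with W the confluent Vandermonde
    # matrix of Brand's formula and J the direct sum of Jordan blocks.
    n = sum(m for _, m in eigs)
    W = np.zeros((n, n), dtype=complex)
    J = np.zeros((n, n), dtype=complex)
    col = 0
    for lam, m in eigs:
        for k in range(m):          # k-th generalized eigenvector column
            for i in range(k, n):
                W[i, col + k] = comb(i, k) * lam ** (i - k)
        J[col:col + m, col:col + m] = lam * np.eye(m) + np.eye(m, k=1)
        col += m
    return W, J

# Sanity check on p(x) = (x-2)^2 (x-5) = x^3 - 9x^2 + 24x - 20,
# using the bottom-row orientation of the companion matrix:
C = np.array([[0, 1, 0], [0, 0, 1], [20, -24, 9]], dtype=complex)
W, J = brand_blocks([(2, 2), (5, 1)])
assert np.allclose(C, W @ J @ np.linalg.inv(W))
\end{verbatim}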
\begin{figure}[ht]
\begin{boxedminipage}{\textwidth}
Algorithm $\mathsf{JNF}$.\\
Input: $A\in \mathbb Z^{n\times n}\bit{a}$, desired bits of accuracy $b$.\\
Output: $\ax{J}, \ax{V}\in {\mathbb C}_\b^{n\times n}\bit{an^3+b/(an^3+b)}.$\\
Guarantee: $\|J-\ax{J}\|\le 2^{-b}\|J\|, \|V-\ax{V}\|\le 2^{-b}\|V\|$ for some exact JNF $A=VJV^{-1}$ and $\kappa(\ax{V})\le 2^{\O(an^3)}$.
\begin{enumerate}
\item Exactly compute the Frobenius Normal Form $A=UFU^{-1}$ with
$F\in\mathbb Z^{n\times n}\bit{an}$ and $U\in\mathbb Z^{n\times
n}\bit{an^2}$ using Theorem \ref{thm:storjohann}. Let
$F=\oplus_{i\le \ell} C_i$ for companion matrices
$C_i\in\mathbb Z^{n_i\times n_i}\bit{an}$.
\item For $i=1\ldots \ell$, apply Corollary \ref{cor:rootmult} to the
characteristic polynomial $\chi_{C_i}(x)\in \mathbb Z[x]\bit{an}$
with accuracy \begin{equation}\label{eqn:bset} b'=b+\O(an^3)\end{equation} to obtain
approximations $\ax{\lambda_{i1}},\ldots,
\ax{\lambda_{ik_i}}\in{\mathbb C}_\b\bit{b'/b'}$ to the distinct
eigenvalues $\lambda_{i1},\ldots,\lambda_{ik_i}$ of $C_i$, with
error $|\lambda_{ij}-\ax{\lambda_{ij}}|\le 2^{-b'}$, as
well as their multiplicities.
\item For $i=1,\ldots,\ell$, compute approximate eigenvalue powers $\ax{\lambda_{i1}^p},\ldots,\ax{\lambda_{ik_{i}}^p}\in{\mathbb C}_\b\bit{b'/b'}$ satisfying
$|\ax{\lambda_{ij}^p}-\lambda_{ij}^p|\le 2^{-\Omega^*(b')}$ for $p=1,\ldots n$ using Lemma \ref{lem:approxpowers}. Let
$$\ax{J_i}:=\oplus_{j\le k_i} J_{\ax{\lambda_{ij}}}\in{\mathbb C}_\b^{n_i\times n_i}\bit{b'/b'}\qquad \textrm{and } \qquad \ax{W_i}:=[W_{\ax{\lambda_{i1}}}, \ldots, W_{\ax{\lambda_{ik_i}}}]\in{\mathbb C}_\b^{n_i\times n_i}\bit{b'/b'}$$
as in \eqref{eqn:brand}, i.e., substitute the approximate powers $\ax{\lambda_{ij}^p}$ into the appropriate polynomials $J_\lambda, W_{\lambda}$.
\item Output $\ax{J}$ and $\ax{V}=U\ax{W}$.
\end{enumerate}
\end{boxedminipage}
\end{figure}
The key condition number bound is the following, obtained via the minimum eigenvalue gap.
\begin{lemma}[Condition Numbers from Gaps]\label{lem:jnfcond} If $A\in\mathbb Z^{n\times n}\bit{a}$ and $A=UFU^{-1}=(UW)J(UW)^{-1}=VJV^{-1}$ for exact Frobenius and Jordan forms as above, then
$$ \kappa(U)\le 2^{\O(an^3)}, \qquad \kappa(W)\le 2^{\O(an^3)}, \quad\quad \kappa(V)\le 2^{\O(an^3)},\quad\textrm{and }\|V\|^{-1}\le 2^{\O(an^3)}.$$
\end{lemma}
\begin{proof} Since $U\in\mathbb Z^{n\times n}\bit{an^2}$, we have $\|U^{-1}\|\le 2^{\O(an^3)}$ and consequently $\|U\|\|U^{-1}\|\le 2^{\O(an^3)}$. For $W$, we note that it is a confluent Vandermonde matrix in the eigenvalues $\lambda_{ij}$ which have a minimum gap of $\delta\ge 2^{-\O(an^2)}$ by Theorem \ref{thm:mignotte}, so \cite[Theorem 1]{batenkov2012norm} implies that $\|W^{-1}\|\le n!(1/\delta)^n\le 2^{\O(an^3)}$. On the other hand, the formula \eqref{eqn:brand} reveals that $\|W\|\le n\cdot 2^{\O(an)}$ since $|\lambda_{ij}|\le n2^a$. Multiplying these two bounds yields $\kappa(W)\le 2^{\O(an^3)}$. Finally, we have $\kappa(V)\le \kappa(U)\kappa(W)\le 2^{\O(an^3)}$, and $\|V\|^{-1}\le \|V^{-1}\|\le \|W^{-1}\|\|U^{-1}\|\le 2^{\O(an^3)}$, where the first inequality holds since $1=\|VV^{-1}\|\le\|V\|\|V^{-1}\|$.
\end{proof}
\begin{theorem}\label{thm:jnfmain} The algorithm $\mathsf{JNF}$ satisfies its guarantees and runs in expected $\O(n^{\omega+3}a+n^4a^2+n^\omega b)$ bit operations.
\end{theorem}
\begin{proof}
{\em Bit Size of the Output.} The bit length assertions in Steps 1 and 2 are immediate from Theorem \ref{thm:storjohann} and Corollary \ref{cor:rootmult}. The bit length of $\ax{J},\ax{W}$ in Step 3 is guaranteed by Lemma \ref{lem:approxpowers}. The bit length of the product in Step 4 is implied by Fact \ref{fact:arithmetic}.\\
\noindent {\em Error Bounds.} Step 1 is exact.
Lemma \ref{lem:approxpowers} implies that the matrices
$\ax{J_i},\ax{W_i}$ in Step 3 satisfy
$$ \maxn{J_i-\ax{J_i}}\le 2^{-(\Omega^*(an^3)+b)},\qquad \maxn{W_i-\ax{W_i}}\le 2^{-(\Omega^*(an^3)+b)}.$$
This additive bound is preserved under taking direct sums. To obtain the multiplicative bound, we observe that $\norm{W}\ge 1$ by \eqref{eqn:brand}; if there is a Jordan block of size at least two then $\norm{J}\ge 1$ also, otherwise since $A$ is integral we have
$$\prod_{\textrm{nonzero } \lambda_{ij}}\lambda_{ij}^{\mathrm{mult}(\lambda_{ij})}=e_k(A)=e_k(J)\ge 1$$
for the last nonzero elementary symmetric function $e_k$ of $A$, so one of the eigenvalues $\lambda_{ij}$ must be at least $2^{-\O(an)}$ and we have $\|J\|\ge 2^{-\O(an)}$. In either case, we conclude after passing to the operator norm that
$$\norm{J-\ax{J}}\le 2^{-(\Omega^*(an^3)+b)}\norm{J}\quad\textrm{ and }\quad
\norm{W-\ax{W}}\le 2^{-(\Omega^*(an^3)+b)}\norm{W}.$$
To obtain the final error bound on $\ax{V}$ in Step 4, we observe that
\begin{equation}\label{eqn:vapprox1}
\|V-\ax{V}\|=\|UW-U\ax{W}\|\le \|U\|2^{-(\Omega^*(an^3)+b)}\le 2^{-(\Omega^*(an^3)+b)}
\end{equation}
since $\|U\|\le 2^{\O(an^2)}$. Since $\|V\|\ge \|W\|/\|U^{-1}\|\ge 2^{-\O(an^3)}$, we obtain the conclusion
\begin{equation}\label{eqn:vapprox}
\|V-\ax{V}\|\le 2^{-(\Omega^*(an^3)+b)}\|V\|.
\end{equation}
\noindent {\em Condition of $\ax{V}$.}
Assuming that the implicit constant in the definition of $b'$ in Step 2 of the algorithm is chosen appropriately large, the bound \eqref{eqn:vapprox1} together with Lemma \ref{lem:jnfcond} implies $\|V-\ax{V}\|\|V^{-1}\|<1/2$. It follows from
\eqref{eqn:kappakappa} that
\begin{equation}\label{eqn:kappavax}\kappa(\ax{V})\le \kappa(V)\frac{1+1/2}{1-1/2}\le 3\kappa(V)\le 2^{\O(an^3)},\end{equation}
as desired.
\noindent {\em Complexity.}
Step 1 takes $\O(n^5a+n^4a^2)$ bit operations by Theorem \ref{thm:storjohann}.
Step 2 takes $\O(n^2(an^3+b))$ bit operations by Theorem \ref{thm:pan}.
Step 3 takes $\O(an^5+bn^2)$ bit operations by Lemma \ref{lem:approxpowers}.
The matrix multiplication in Step 4 takes $\O(n^\omega (b'+an^2))$ bit operations.
The total running time is therefore $\O(n^\omega b'+n^4a^2)=\O(n^{\omega+3}a+n^{4}a^2+n^{\omega}b)$, as advertised.
\end{proof}
\begin{lemma}[Rounded Approximate Eigenvalue Powers]\label{lem:approxpowers} The approximate powers of the eigenvalues $\ax{\lambda_{ij}^p}\in {\mathbb C}_\b\bit{b'/b'}$ required in Step 3 may be computed in $\O(n^2b')$ bit operations and satisfy
$$ |\ax{\lambda_{ij}^p} - {\lambda_{ij}}^p|\le 2^{-(\Omega^*(an^3)+b)}$$
for every $i,j$.
\end{lemma}
\begin{proof}
Suppose we wish to compute powers $\lambda,\lambda^2,\ldots,\lambda^r$ for some nonzero eigenvalue $\lambda=\lambda_{ij}$ appearing in $W$. Let $\ax{\lambda}$ be the approximate eigenvalue produced in Step 2, satisfying $|\lambda-\ax{\lambda}|\le 2^{-b'}$. We use the following inductive scheme for $p=2,\ldots,r$:
$$ \ax{\lambda^p} = \mathsf{round}_{b'}(\ax{\lambda^{p-1}}\cdot \ax{\lambda}).$$
First, observe that this error estimate implies that for every $p\le n$:
$$ |\lambda^p-(\ax{\lambda})^p|\le 2^{-b'}\cdot p\cdot|\lambda^{p-1}|\le 2^{-b'+\O(an)}$$
since $|\lambda|\le \|A\|\le n2^a$.
Thus, it suffices to show that
$$ |\ax{\lambda^p}-(\ax{\lambda})^p|\le 2^{-(\Omega^*(an^3)+b)}.$$
Notice that $|\lambda|\ge 2^{-\O(an^2)}$ since $e_k(A)\ge 1$ for the last nonzero elementary symmetric function of $A$, and each eigenvalue of $A$ is bounded by $2^{\O(an)}$. Consequently, we have $|\ax{\lambda}|\ge 2^{-\O(an^2)}$ and thereby $|(\ax{\lambda})^p|\ge 2^{-\O(an^3)}$ for every $p=1,\ldots n$. Choosing the constant in the definition \eqref{eqn:bset} of $b'$ in Step 2 to be appropriately large, it follows by induction that:
$$|\mathsf{round}_{b'}(\ax{\lambda^{p-1}}\cdot \ax{\lambda})-(\ax{\lambda})^p|\le O(p)\cdot 2^{-(\Omega^*(an^3)+b)}|(\ax{\lambda})^p|$$
for every $p=2,\ldots,r$; here the inductive hypothesis is that in each step the rounding incurs a {\em multiplicative} error of at most $2^{-(\Omega^*(an^3)+b)}$, and we observe that the multiplicative errors simply add up since they are sufficiently smaller than one. Since we also have the upper bound $|(\ax{\lambda})^p|\le 2^{\O(an)}$, the advertised absolute error bound follows.
The total bit complexity for one eigenvalue is $n$ times the cost of one step of the induction, which is $\O(nb')$. Since there are $n$ eigenvalues, the total cost is $\O(n^2b')$.
\end{proof}
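A minimal sketch of this rounded powering scheme for a real eigenvalue, using exact dyadic arithmetic via Python's fractions module (the complex case applies the same recursion to real and imaginary parts; the function names are ours):
\begin{verbatim}
from fractions import Fraction

def round_dyadic(x, c):
    # Nearest rational with denominator 2^c, as in the definition of round_c.
    return Fraction(round(x * 2**c), 2**c)

def rounded_powers(lam_hat, r, c):
    # lam_hat: a dyadic rational approximating an eigenvalue.  Returns
    # approximations of lam_hat^1, ..., lam_hat^r, re-rounding to c bits
    # after every multiplication so that bit lengths stay bounded.
    powers = [Fraction(lam_hat)]
    for _ in range(2, r + 1):
        powers.append(round_dyadic(powers[-1] * powers[0], c))
    return powers
\end{verbatim}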
\begin{corollary}[JNF of Rational Matrices with Common Denominator]\label{rem:commonjnf} The algorithm $\mathsf{JNF}$ can easily be used to compute the JNF of $A/q$ for integer $A$ and $q$: if $\ax{J},\ax{V}$ is an approximate JNF of $A$ with $b$ bits of accuracy, then $\ax{J}/q,\ax{V}$ is an approximate JNF of $A/q$ with $b-\lg(q)$ bits of accuracy. This fact will be useful in the spectral factorization algorithm in the following section.
\end{corollary}
\section{Spectral Factorization}\label{sec:specfact}
We briefly review some aspects of the theory of matrix polynomials (the reader may consult \cite{gohberg2005matrix} for a comprehensive introduction). Given a
monic matrix polynomial $L(x)=x^dI+\sum_{i\le d-1}x^iL_i$ with
$L_i\in{\mathbb C}^{n\times n}$, its adjoint is $L^*(x)=x^dI+\sum_{i\le d-1}x^iL_i^*$,
and its latent roots are the $\lambda\in{\mathbb C}$ such that $L(\lambda)$ is singular.
The {\em block companion matrix}\footnote{This is a ``row'' companion matrix as opposed to the ``column'' companion matrices
in Section \ref{sec:jnf}. This is customary in the theory of matrix polynomials.} of $L$ is the $dn\times dn$ matrix:
\begin{equation}\label{eqn:blockcomp}
C_L:=\bm{ 0& 1 & & & & \\
0& 0& 1& & & \\
& & \vdots & & \\
& & & & &1 \\
-L_0& -L_1& -L_2& -L_3& \ldots& -L_{d-1}}
\end{equation} The following important theorem states the existence of spectral factorizations of positive definite monic matrix polynomials,
and gives a way of computing them using the block companion matrix.
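For instance, in the quadratic case $L(x)=x^2I+xL_1+L_0$ one has
\begin{equation*}
C_L=\bm{0 & I\\ -L_0 & -L_1},
\end{equation*}
a $2n\times 2n$ matrix whose eigenvalues are precisely the latent roots of $L$, since $\det(xI-C_L)=\det L(x)$.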
\begin{theorem}[Theorems 5.1, 5.4 of \cite{gohberg1980spectral}] \label{thm:sfglr1} Suppose $P(x)=P^*(x)=x^{2d}I+\sum_{i\le 2d-1} x^{i}P_i\in{\mathbb C}^{n\times n}[x]$ is monic of degree $2d$ and satisfies $P(x)\succeq 0$ for all $x\in{\mathbb R}$. Then:
\begin{enumerate}
\item There is a unique monic $Q(x)\in{\mathbb C}^{n\times n}[x]$ of degree $d$ such that $P(x)=Q^*(x)Q(x)$ and $Q$ has all of its latent roots in the closed upper half plane.
\item The complex eigenvalues of $C_P$ occur in conjugate pairs, and each Jordan block in the JNF of $C_P$ corresponding to a real eigenvalue has even size.
\item Let $C_P=VJV^{-1}$ be a Jordan Form of the block companion matrix of $P$ with block decomposition
$$ J=:\bm{ J_+ & &\\ & J_0 & \\ & & J_-},\qquad V=:\bm{V_+ & V_0 & V_-\\ Z_+ & Z_0 & Z_-}$$
for $J_{\pm}$ corresponding to eigenvalues in the open upper/lower half plane and $J_0$ corresponding to the real eigenvalues and $V_\pm, V_0$ having $dn$ rows.
Then
\begin{equation}\label{eqn:cqformula}C_Q = V_{\ge 0} J_{\ge 0} V_{\ge 0}^{-1},\end{equation}
where
\begin{equation}\label{def:vgeq} V_{\ge 0}=[V_+, V_0^{(1/2)}]\in{\mathbb C}^{dn\times dn}\quad\textrm{and}\quad J_{\ge 0}=J_+\oplus J_0^{(1/2)}\in{\mathbb C}^{dn\times dn}.\end{equation}
Here, for each Jordan block of size $2s$ in $J_{0}$, $J_0^{(1/2)}$ contains a Jordan block of size $s$ with the same eigenvalue, and $V_0^{(1/2)}$ contains as columns the first $s$ of the corresponding $2s$ columns of $V_0$.
\end{enumerate}
\end{theorem}
The formula \eqref{eqn:cqformula} gives a one line algorithm for computing $Q$ given access to the exact Jordan form of $P$. The key issue
is that in order to use an approximate Jordan form $\ax{V}\ax{J}\ax{V^{-1}}$ in the formula, we must have a good bound on the condition number of $V_{\ge 0}$
in order to control the error incurred during inversion. Note that while Lemma \ref{lem:jnfcond} guarantees a bound on $\kappa(V)$,
this does not in general imply a bound on its submatrices; indeed, it is known that there can be square submatrices of $V$ which are singular.
The main technical contribution of this section is to prove a bound on $\kappa(V_{\ge 0})$ in terms of $\kappa(V)$ by exploiting the special structure of $V$
which arises from the structure of $C_P$. This is encapsulated in the following fact, which may be found in any reference on matrix polynomials (e.g., \cite[\S 1]{gohberg2005matrix}).
\begin{fact} If $C_P=VJV^{-1}$ is the Jordan normal form of the block companion matrix of an $n\times n$ complex matrix polynomial $P$ of degree $2d$, then there is a matrix $X\in{\mathbb C}^{n\times 2dn}$ such that:
\begin{equation}\label{eqn:xjstack}V=\bm{ X\\ XJ\\ XJ^2\\ \vdots \\ XJ^{2d-1}}.\end{equation}
\end{fact}
We show that the least singular value of a column submatrix of any matrix of type \eqref{eqn:xjstack} may be related to the least singular values of certain block submatrices.
\begin{lemma}[Condition of Submatrices of Companion JNF]\label{lem:sfcond} Given any $Y\in{\mathbb C}^{n\times D}$ and $K\in {\mathbb C}^{D\times D}$ with $\|K\|\ge 1$, define for $k=1,2,\ldots$ the $nk\times D$ matrices:
$$ W_k:=\bm{ Y\\ YK\\ YK^2\\ \vdots \\ YK^{k-1}}.$$
Then
$$\sigma_D(W_D)\ge \frac{\sigma_D(W_{k})}{\sqrt{k}(4\|K\|)^{D(k-D+1)}}$$
for every $k\ge D$.
\end{lemma}
\begin{proof} Suppose $x\in{\mathbb C}^{D}$ is a unit vector satisfying $\|W_Dx\|=\sigma_D(W_D)=:\sigma$. We will show that
\begin{equation}\label{eqn:yksigma}\|W_kx\|\le \sigma\cdot \sqrt{k} (2D^{1/D}\|K\|)^{D(k-D+1)},\end{equation}
which yields the Lemma by using $D^{1/D}\le 2$.
Let $q$ be the characteristic polynomial of $K$. By the Cayley-Hamilton theorem, we have
$$q(K) = K^D + \sum_{0\le i\le D-1} c_i K^{i} = 0,$$
for some complex coefficients $c_i$ crudely bounded as
$$\max_{i\le D-1} |c_i| \le 2^D\|K\|^D=:\alpha,$$
by considering their expansion as elementary symmetric functions in the eigenvalues of $K$. Using this expression, we obtain the identity:
\begin{align*}
YK^{j}x = YK^{j-D}K^Dx = -\sum_{0\le i\le D-1} c_i YK^{j-D}K^ix,
\end{align*}
for every $j\ge D$.
By the triangle inequality, this yields:
\begin{align*}
\|YK^j x\|\le \alpha D\cdot \max_{i<j}\|YK^i x\|,
\end{align*}
which applied recursively gives: $$\|YK^jx\|\le (\alpha D)^{j-D+1}\cdot \max_{i<D}\|YK^ix\|\le (\alpha D)^{j-D+1}\sigma.$$
Summing over all $j\le k$, we have:
$$\|W_kx\|^2 \le \sigma^2 + \sum_{j=D}^k (\alpha D)^{2(j-D+1)}\sigma^2 \le k(\alpha D)^{2(k-D+1)}\sigma^2.$$
Taking a square root establishes \eqref{eqn:yksigma} and finishes the proof.
\end{proof}
\begin{remark} Lemma \ref{lem:sfcond} is a quantitative version of the main claim of \cite[\S 2.3]{gohberg1980spectral} showing that $V_{\ge 0}$ is invertible whenever $V$ is invertible, which is central to the theory of matrix polynomials.
The argument above is arguably simpler, and may be of independent interest. The original proof of \cite{gohberg1980spectral} relies on a delicate analysis of a certain indefinite quadratic form.\end{remark}
Finally, we are able to bound $\kappa(V_{\ge 0})$.
\begin{lemma}\label{lem:kappavge} In the setting of Theorem \ref{thm:sfglr1},
$$ \|V_{\ge 0}^{-1}\|\le \|V^{-1}\|\cdot \sqrt{2dn} (4+4\|C_P\|)^{dn(dn+1)}$$
and
$$\kappa(V_{\ge 0})\le \kappa(V)\cdot \sqrt{2dn} (4+4\|C_P\|)^{dn(dn+1)}.$$
\end{lemma}
\begin{proof}
Letting $C_P=VJV^{-1}$, the similarity $V$ has the form \eqref{eqn:xjstack} for some $X\in{\mathbb C}^{n\times 2dn}$. Let $X_{\ge 0}$ be the $n\times dn$ submatrix of $X$ with columns corresponding to the columns in $V_{\ge 0}$. Apply Lemma \ref{lem:sfcond} with $D=dn, k=2dn, K = J_{\ge 0}, Y = X_{\ge 0}$, noting that $\|K\|\le 1+\|C_P\|$ since all of the diagonal entries of $J_{\ge 0}$ are eigenvalues of $C_P$ and bounded by its norm. This yields:
$$ \sigma_{dn} (V_{\ge 0}) \ge \frac{\sigma_{dn}\left(\bm{ V_{\ge 0} \\ Z_{\ge 0}}\right)}{\sqrt{2dn} (4+4\|C_P\|)^{dn(dn+1)}},$$
where $Z_{\ge 0}$ has the obvious meaning. But $\bm{ V_{\ge 0} \\ Z_{\ge 0}}$ is a column submatrix of $V$, so $$\sigma_{dn}\left(\bm{ V_{\ge 0} \\ Z_{\ge 0}}\right)\ge \sigma_{2dn}(V)=1/\|V^{-1}\|,$$ yielding the first claim. Combining this with $\sigma_1(V_{\ge 0})\le \sigma_1(V)$, we obtain the second claim.
\end{proof}
We now present the algorithm $\mathsf{SF}$ which approximately computes the $Q(\cdot)$ guaranteed by Theorem \ref{thm:sfglr1} using an approximate Jordan normal form computation and exact inversion. We rely on the following tool from symbolic computation.
\begin{theorem}[Fast Exact Inversion, \cite{storjohann2015complexity}]\label{thm:storinv} There is a randomized algorithm which given an invertible matrix $A\in\mathbb Z^{n\times n}\bit{a}$ exactly computes its inverse $A^{-1}\in{\mathbb Q}^{n\times n}\bit{an/an}$ in time $\O(n^3a+n^3\log\kappa(A))$.
\end{theorem}
\begin{figure}[ht]
\begin{boxedminipage}{\textwidth}
{\bf Algorithm $\mathsf{SF}$}: \quad Input: Coefficients $P_0,\ldots,P_{2d-1}\in{\mathbb Q}^{n\times n}\bit{a/a}$ (with a common denominator) of a monic matrix polynomial $P(x)$, desired bits of accuracy $b\in\mathbb{N}$.\\
Output: $\ax{Q_0},\ldots,\ax{Q_{d-1}}\in{\mathbb C}_\b^{n\times n}\bit{a(dn)^3+b}$, or a certificate that $P(x)\nsucceq 0$ for some $x\in{\mathbb R}$.\\
Guarantee: If $P(x)\succeq 0$ then $\maxn{\ax{Q}(\cdot)-Q(\cdot)}\le 2^{-b}\maxn{Q(\cdot)}$ for $P(x)=Q^*(x)Q(x)$.
\begin{enumerate}
\item Compute an approximate Jordan Normal Form $(\ax{V},\ax{J})=\mathsf{JNF}(C_P,b'')$ of $C_P$ using Corollary \ref{rem:commonjnf}, with $\ax{J},\ax{V}\in {\mathbb C}_\b^{2dn\times 2dn}\bit{b''/b''}$ where \begin{equation}\label{eqn:b2set}
b''=\O(a(dn)^3)+b.
\end{equation} Determine which of its eigenvalues lie on, below, and above the real line\footnote{This can be determined by testing if there are any $\ax{\lambda}$ with $|\ax{\lambda}-\overline{\ax{\lambda}}|\ll 2^{-an^2d^2}$ by Theorem \ref{thm:mignotte}.}. If any Jordan block corresponding to a real eigenvalue has odd size, output ``$P(x)\nsucceq 0$''.
\item Let
$$ \ax{J}=:\bm{ \ax{J_+} & &\\ & \ax{J_0} & \\ & & \ax{J_-}},\qquad \ax{V}=:\bm{\ax{V_+} & \ax{V_0} & \ax{V_-}\\ * & * & *}$$
be a block decomposition such that $\ax{J_+}$ corresponds to eigenvalues of $C_P$ in the open upper half plane and $\ax{J_0}$ corresponds to real eigenvalues of $C_P$.
\item Output the negative of the last block row of \begin{equation}\ax{C_Q} := \ax{V_{\ge 0}} \ax{J_{\ge 0}} \ax{V_{\ge 0}^{-1}},\end{equation}
where
\begin{equation} \ax{V_{\ge 0}}:=[\ax{V_+}, \ax{V_0}^{(1/2)}]\in{\mathbb C}^{dn\times dn}\quad\textrm{and}\quad \ax{J_{\ge 0}}:=\ax{J_+}\oplus \ax{J_0}^{(1/2)}\in{\mathbb C}^{dn\times dn}\end{equation}
and $(\cdot)^{(1/2)}$ is defined as in Theorem \ref{thm:sfglr1}. The approximate inverse $\ax{V_{\ge 0}^{-1}}$ is computed by exactly computing $(\ax{V_{\ge 0}})^{-1}$ using Theorem \ref{thm:storinv} and letting
$\ax{V_{\ge 0}^{-1}}=\mathsf{round}_{b''}((\ax{V_{\ge 0}})^{-1})$.
\end{enumerate}
\end{boxedminipage}
\end{figure}
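To make the formula \eqref{eqn:cqformula} concrete, the following minimal floating-point sketch (ours, for illustration only) implements it in the generic case where $C_P$ is diagonalizable with no real eigenvalues, so that $V_{\ge 0}=V_+$; it does not control precision or handle real eigenvalues, which is precisely what the rigorous algorithm $\mathsf{SF}$ above adds:
\begin{verbatim}
import numpy as np

def spectral_factor_generic(P_coeffs):
    # P_coeffs = [P_0, ..., P_{2d-1}]: n x n arrays with
    # P(x) = x^{2d} I + sum_i x^i P_i, assumed positive definite on R
    # with no real latent roots, so exactly d*n eigenvalues of C_P lie
    # in the open upper half plane.
    n = P_coeffs[0].shape[0]
    two_d = len(P_coeffs)
    d = two_d // 2
    C = np.zeros((two_d * n, two_d * n), dtype=complex)
    for i in range(two_d - 1):          # superdiagonal identity blocks
        C[i*n:(i+1)*n, (i+1)*n:(i+2)*n] = np.eye(n)
    for i in range(two_d):              # last block row: -P_i
        C[-n:, i*n:(i+1)*n] = -P_coeffs[i]
    evals, evecs = np.linalg.eig(C)
    sel = evals.imag > 0                # open upper half plane
    V_plus = evecs[:d*n, sel]           # top d*n rows, selected columns
    C_Q = V_plus @ np.diag(evals[sel]) @ np.linalg.inv(V_plus)
    # Q_0, ..., Q_{d-1} are the negated last block row of C_Q; Q is monic.
    return [-C_Q[-n:, i*n:(i+1)*n] for i in range(d)]
\end{verbatim}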
\begin{theorem}\label{thm:specfactmain} The algorithm $\mathsf{SF}$ satisfies its guarantees and runs in $\O((dn)^6a+(dn)^4a^2+(dn)^3b)$ bit operations.
\end{theorem}
\begin{proof}
Item (2) of Theorem \ref{thm:sfglr1} shows that $P(x)\nsucceq 0$ if there is an odd size Jordan block with real eigenvalue.
Assuming this is not the case, that theorem shows that the exact spectral factor $Q$ is given by the negative of the last block row of $C_Q=V_{\ge 0}J_{\ge 0}V_{\ge 0}^{-1}$. We now prove that the quantities $\ax{V_{\ge 0}}, \ax{J_{\ge 0}}, \ax{V_{\ge 0}^{-1}}$ computed by $\mathsf{SF}$ are close to the true quantities. This is a consequence of the following estimates. The arguments are essentially identical to those in the proof of Lemma \ref{lem:jnfcond} and Theorem \ref{thm:jnfmain} (the key point being that the inverse of a well-conditioned matrix is stable under small enough perturbations) so we describe them tersely.
\newcommand{J_{\ge 0}}{J_{\ge 0}}
\newcommand{V_{\ge 0}}{V_{\ge 0}}
\begin{enumerate}
\item $\|J_{\ge 0} - \ax{J_{\ge 0}}\|\le 2^{-b''}\|J_{\ge 0}\|$ and $\|V_{\ge 0}-\ax{V_{\ge 0}}\|\le 2^{-b''} \|V_{\ge 0}\|$ by the guarantees of $\mathsf{JNF}$. Moreover $\|\ax{V_{\ge 0}}\|\le 2^{\O(b'')}$ by the guarantee on the bit length of entries of $\ax{V}$.
\item $\|V_{\ge 0}^{-1}\|, \kappa(V_{\ge 0})\le 2^{\O(a(dn)^3+a(dn)^2)}=2^{\O(a(dn)^3)}$ by Lemma \ref{lem:kappavge} and Lemma \ref{lem:jnfcond}.
\item $\|(\ax{V_{\ge 0}})^{-1}\|, \kappa(\ax{V_{\ge 0}})\le 2^{\O(a(dn)^3)}$ by \eqref{eqn:kappakappa} and the previous two items in this list, where we easily verify the condition
$$ \|\ax{V_{\ge 0}}-V_{\ge 0}\|\|V_{\ge 0}^{-1}\|\le 2^{-b''}2^{\O(a(dn)^3)}\le 1/2$$
by setting the implicit constant in the definition \eqref{eqn:b2set} of $b''$ appropriately large.
\item $\|\mathsf{round}_{b''}((\ax{V_{\ge 0}})^{-1})-V_{\ge 0}^{-1}\|\le 2^{-(\Omega^*(a(dn)^3)+b)}\|V_{\ge 0}^{-1}\|$ by \eqref{eqn:kappainv}, the previous item on this list, another application of \eqref{eqn:kappainv} to handle the rounding error, and the triangle inequality
$$\|\mathsf{round}_{b''}((\ax{V_{\ge 0}})^{-1})-V_{\ge 0}^{-1}\| \le \|\mathsf{round}_{b''}((\ax{V_{\ge 0}})^{-1})-(\ax{V_{\ge 0}})^{-1}\| + \| (\ax{V_{\ge 0}})^{-1} - V_{\ge 0}^{-1}\|.$$
\end{enumerate}
Combining these bounds and applying the triangle inequality three times, we conclude that
$$\|\ax{V_{\ge 0}}\ax{J_{\ge 0}}\ax{V_{\ge 0}^{-1}}-V_{\ge 0} J_{\ge 0} V_{\ge 0}^{-1}\|\le 2^{-(\Omega^*(a(dn)^3)+b)}\|C_Q\|,$$
which yields the advertised guarantee on $\maxn{\ax{Q}(\cdot)-Q(\cdot)}$ by changing norms and incurring a polynomial loss.
{\em Complexity.} The running time of $\mathsf{JNF}$ in Step 1 is $\O((dn)^{\omega+3}a+(dn)^4a^2+(dn)^\omega b'')$. Step 2 does not involve any computation. The time taken to exactly invert $\ax{V_{\ge 0}}$ in Step 3 using Theorem \ref{thm:storinv} (after pulling out the common dyadic denominator to obtain an integer matrix) is $\O((dn)^3\cdot (b''+a(dn)^3+b))$ by the estimate on $\kappa(\ax{V_{\ge 0}})$ above. The time taken to round down the entries of $\ax{V_{\ge 0}}^{-1}$ to $b''$ bits is $\O((dn)^2b'')$. The time taken to multiply together the three matrices is $\O((dn)^\omega b'')$. Thus, the total number of bit operations is dominated by $$\O((dn)^{\omega+3}a+(dn)^4a^2+(dn)^3 b'') = \O((dn)^6a+(dn)^4a^2+(dn)^3b),$$ as advertised.
\end{proof}
\begin{corollary}[Spectral Factorization of Non-Monic Polynomials]\label{cor:nonmon} Suppose $P(x)=VV^*x^{2d}+\sum_{i=0}^{2d-1} x^iP_i$ is a positive semidefinite Hermitian matrix polynomial with $V, P_i\in{\mathbb Q}^{n\times n}\bit{a/a}$ with a common denominator and $V$ invertible. Then an approximate spectral factorization of $P(x)$ accurate to $b-\O(a)$ bits (as in Theorem \ref{thm:specfactmain}) can be computed in expected $\O((dn)^6an+(dn)^4(an)^2+(dn)^3b)$ bit operations.
\end{corollary}
\begin{proof}
The rescaled polynomial $\tilde{P}(x):= x^{2d}I + \sum_{i=0}^{2d-1} x^i V^{-1}P_i V^{-*}$ is also positive semidefinite, has coefficients in ${\mathbb Q}^{n\times n}\bit{an/an}$, and is monic; note that $P(x)=V\tilde{P}(x)V^*$. Applying Theorem \ref{thm:specfactmain} yields an approximate spectral factor $\tilde{Q}$, for which $Q(x) = \tilde{Q}(x)V^*$ is an approximate spectral factor of $P(x)$, since $Q^*(x)Q(x)=V\tilde{Q}^*(x)\tilde{Q}(x)V^*=V\tilde{P}(x)V^*=P(x)$, with at most a loss of $\O(a)$ bits of accuracy.
\end{proof}
\section{Discussion and Future Work}
\label{sec:discussion}
For historical context, proving bit complexity bounds on fundamental linear algebra computations
(such as inversion, polynomial matrix inversion, Hermite/Smith/Frobenius normal forms \cite{kannan1979polynomial,kannan1985solving,storjohann1997fast,storjohann1998n,storjohann2001deterministic, gupta2011computing,zhou2015deterministic,kaltofen2015complexity}) has been
a vibrant topic in theoretical computer science and symbolic computation since the 70's, with near-optimal arithmetic and bit complexity
bounds being obtained for several of these problems within the last decade (e.g. \cite{storjohann2015complexity}). However,
this program did not reach the same level of completion for problems of a spectral nature,
such as the ones studied in this paper. While the polynomial time bounds obtained in this paper are modest,
we hope they will stimulate further work on these fundamental problems, as well as the important special case of efficiently diagonalizing a diagonalizable matrix in the forward error model, which remains unresolved (the recent work \cite{banks2020pseudospectral} considers the backward error formulation of the problem).
Some concrete questions left open by this work are:
\begin{enumerate}
\item Improve the running time for computing the JNF of a general matrix. The best known running time for computing the eigenvalues of a matrix is roughly $O(n^{\omega+1}a)$ \cite{pan1999complexity}, so this seems like a reasonable goal to shoot for. The current bottleneck is the bound of $2^{\O(an^3)}$ on the condition number of the similarity $V$, which could conceivably be improved to $2^{\O(an^2)}$.
\item Improve the running time for computing the JNF of the block companion matrix of a matrix polynomial by exploiting its special structure, particularly \eqref{eqn:xjstack}. This would yield faster algorithms for spectral factorization.
\end{enumerate}
\subsubsection*{Acknowledgments}
We thank Bill Helton, Cl\'ement Pernet, Pablo Parrilo, Mario Kummer, Rafael Oliveira, and Rainer Sinn for helpful discussions, as well as the Simons Institute for the Theory of Computing, where a large part of this work was carried out during the ``Geometry of Polynomials'' program.
\bibliographystyle{alpha}
\input{jnf.bbl}
\end{document}
\section{Introduction}
The most generic family of regular, stationary, asymptotically flat, electrovacuum solutions in Einstein-Maxwell theory is the Kerr-Newman (KN) family of black holes (BHs)~\cite{1965JMP.....6..918N, Mazur_1982}.
This solution extends the Kerr metric~\cite{Kerr:1963ud} and is uniquely characterised by the mass $M$, the dimensionless spin $\chi$, and charge-to-mass ratio $\bar{q}$, typically identified with the electric charge-to-mass ratio of the BH.
Astrophysical BHs are expected to carry negligible electric charge~\cite{Gibbons, Znajek, Palenzuela:2011es}. Although a rotating BH embedded in a magnetic field can selectively accrete electric charge, the maximum amount accreted through this effect is negligible for astrophysical values of magnetic fields~\cite{PhysRevD.10.1680}.
Additionally, mechanisms such as vacuum polarization, breakdown pair production, and neutralisation from surrounding material prevent a stellar-mass BH from sustaining a large amount of electric charge~\cite{Znajek, Gibbons}.
Even if a significant amount of charge is acquired, it is dissipated on a time scale much shorter than the one probed by gravitational-wave (GW) observations~\cite{Znajek}.
These dissipation mechanisms have roots in the large charge-to-mass-ratio of the electron~\cite{2016JCAP...05..054C}.
As a consequence, GW searches, parameter estimation (PE), and population studies~\cite{O3a_catalog, Abbott:2020gyp} are routinely carried out by \textit{assuming} that the BHs giving rise to the signals observed in the LIGO-Virgo interferometers~\cite{AdvLIGO, AdvVirgo} can be accurately described by the Kerr metric.
Even though all these arguments rely on well-understood physical principles, so that in standard astrophysical scenarios the neutral-BH approximation is a reasonable one, a robust and direct observational verification of charge neutrality for the population of BHs observed by LIGO and Virgo is still missing.
The confirmation of small or null electrical charge would also constrain more exotic scenarios, where the charge parameter of the KN family can be identified with the magnetic charge (due to primordial magnetic monopoles~\cite{monopoles, Bozzola:2020mjx, Liu:2020cds}), vector charge in theories mediated by a gravitational vector field~\cite{Bozzola:2020mjx}, a hidden electromagnetic charge in models of minicharged dark matter~\cite{2016JCAP...05..054C}, or a topologically induced charge~\cite{Kim:2020bhg}.
Models of minicharged dark matter would evade the aforementioned discharge mechanisms due to their different charge-to-mass-ratio, while charge effects arising from modified gravity scenarios would be due to the presence of an additional gravitational field.
Given that at the scale of BH mergers all these effects can be parametrised with the same parameter appearing in Einstein-Maxwell theory, we will simply refer to this parameter as \textit{charge}, bearing in mind its different meanings depending on the context in which this parameter is interpreted.
Charged BHs have also recently gained interest as a possible explanation of ultra high energy cosmic ray particles~\cite{Liberati:2021uom}.
GWs constitute a unique probe of these exotic scenarios for stellar-mass binary black hole (BBH) mergers, since the corresponding electromagnetic (EM) signal emitted by such sources would lie in the kHz range, where plasma absorption and reflection by the interstellar medium would prevent the detection of an EM counterpart~\cite{2016JCAP...05..054C, Cardoso:2020nst}.
For these reasons, we will not discuss possible EM counterparts to GWs that would be present if BHs possess a charge; in the recent past this topic received considerable attention due to a putative EM counterpart to GW150914~\cite{Liebling:2016orx, Loeb:2016fzn, Fraschetti:2016bpm, Liu:2016olx, Zhang:2016rli}.
Finally, in addition to probing effects due to new physics or uncommon astrophysical scenarios, KN BHs provide an excellent opportunity to test current phenomenological paradigms to search for violations of the Kerr hypothesis in a plausible and well understood scenario.
The KN case is in fact an extensively studied modification to Kerr BHs in GR, stemming from a well-posed extension of Einstein's equations, the Einstein-Maxwell theory. It will also include some of the effects one would find in Einstein-Maxwell-dilaton theory, which is also well-posed; see~\cite{Hirschmann:2017psw} for simulations of binary black holes in the theory starting from approximate initial data, \cite{Khalil:2018aaj,Julie:2018lfp} for post-Newtonian calculations, and~\cite{Ferrari:2000ep} for computations of quasinormal mode (QNM) frequencies for nonspinning black holes in this theory.
Conversely, most alternative theories of gravity that could leave detectable imprints in GW signals from binary BHs are often not known to have a well-posed formulation, or their effects on observable quantities have been computed only approximately, e.g.,~\cite{Yunes:2009ke, Nair:2019iur, Perkins:2021mhb, Wagle:2021tam, Pierini:2021jxd, Okounkova:2019dfo,Okounkova:2019zjf,Okounkova:2020rqw, Shiralilou:2020gah, Shiralilou:2021mfl, Srivastava:2021imr}. However, see~\cite{Kovacs:2020pns,Kovacs:2020ywu} for well-posed formulations of some theories, though still assuming weak coupling, and~\cite{East:2020hgw, East:2021bqk} for initial numerical simulations using these formulations.
Investigating the impact of the KN scenario on GW measurements is of paramount importance to explore complications that may arise when considering non-perturbative beyond-GR effects in a self-consistent manner.
These complications include the excitation of additional modes not present in GR, and the correlations among beyond-GR parameters and BH intrinsic parameters in the different phases of the coalescence.
The LIGO-Virgo Collaboration (LVC) routinely applies a battery of tests to GR~\cite{TGR-LVC2016, LIGOScientific:2019fpa, O3a_catalog} on confident GW detections, aimed at detecting deviations from GR predictions in the observed signals.
A variety of effects are tested with different methodologies, including modifications to the generation or propagation of GWs, the nature of the merging objects, or the presence of additional polarizations, absent in GR.
Residuals in the interferometer strain, obtained by subtracting a representative best-fit waveform, are also tested for the presence of additional coherent power not modeled by GR templates~\cite{Ghonge:2020suv, O3a_TGR}.
Some of these tests are in principle sensitive to the presence of charge. Examples are the parametrised family of tests targeting the emission of dipole radiation during the early inspiral~\cite{Abbott:2018lct, LIGOScientific:2019fpa, O3a_TGR}, the parametrised ringdown tests~\cite{O3a_TGR, Brito:2018rfr, ghosh2021constraints, CARULLO-DELPOZZO-VEITCH, ISI-GIESLER-FARR-SCHEEL-TEUKOLSKY, Isi:2021iql} and the inspiral-merger-ringdown (IMR) consistency test~\cite{IMR_consistency_test1, IMR_consistency_test2}.
Nevertheless, the aforementioned tests all follow a phenomenological approach, meaning that they do not assume a specific form for the modification to GR.
The un-modelled approach is thus non-committal to a specific alternative scenario. On the one hand, this is a desirable feature, given the extraordinarily large number of possible alternatives to GR~\cite{Clifton:2011jh}.
On the other hand, ignoring predictions from specific theories implies a loss in sensitivity when looking for deviations from the GR Kerr predictions.
In this work, we improve on the aforementioned agnostic tests and search for signatures of astrophysical or exotic charges in the merger-ringdown signal of BBH coalescences detected by the LIGO-Virgo interferometers.
We do so by tabulating the QNM frequencies of a KN BH for arbitrary values of charge and spin, building on the work of Ref.~\cite{Dias:2015wqa}, and constructing a GW template implementing these predictions.
We then use this template to perform an observational analysis on all confident post-merger BBH observations, deriving a bound on the maximum amount of charge compatible with current observations. Additionally, we present a study of the detectability of charge using the projected design sensitivity of the current detector network.
We employ a robust statistical framework and, for the first time, a non-perturbative treatment of the effects of charge and spin in the gravitational ringdown modes, without relying on small-charge or WKB approximations, as used in previous analyses~\cite{2016JCAP...05..054C, Wang:2021uuh}.
We also take into account possible modifications in the amplitude of the waveform, in addition to the modifications to the phase.
We additionally compute analytic fits for the QNM frequencies as a function of charge and spin. Such fits are a crucial ingredient for the construction of complete inspiral-plunge-merger-ringdown \textit{analytical} templates for charged binary black holes (generalizing the ones available for uncharged black holes) aimed at routinely extracting all the possible available information on BH charges from current LVC observations. Such templates will also require input from post-Newtonian calculations~\cite{Khalil:2018aaj,Julie:2018lfp} and numerical relativity simulations~\cite{Bozzola:2020mjx,Bozzola:2021elc} in the charged case.
The paper is structured as follows: in Sec.~\ref{sec:QNM_NR} we summarise the results obtained in a companion study~\cite{QNMsKN}, discussing a large dataset of QNM numerical solutions as a function of the charge and spin of the remnant BH.
In Sec.~\ref{sec:QNM_fits} we use the numerical data to construct parametric fits in an analytical form. We give the fit coefficients in the Appendix. Sec.~\ref{sec:LVC_DA} deals with the construction of a suitable waveform template describing a KN BH resulting from a BBH coalescence and the analysis of all available merger-ringdown observations from the LVC.
Sec.~\ref{sec:Injections} discusses the prospects of extracting the BH charge from upcoming merger-ringdown observations using ground-based interferometers. Finally, we conclude and discuss future developments in Sec.~\ref{sec:Conclusions}.
Throughout the manuscript we use both $c=G=1$ units and Gaussian units in the electromagnetic sector. The charge-to-mass ratio, the parameter entering QNM computations directly, is defined by $\bar{q}:= |Q|/M$, with $Q$ the BH charge in Gaussian units; the absolute value is quoted because, in Einstein-Maxwell theory, the QNM spectrum is invariant under a sign flip of the charge. Additionally, for statistical quantities, we quote the median and $90\%$ credible levels (CL) -- or credible regions (CR) when discussing multidimensional distributions -- unless explicitly stated otherwise. When discussing observations, we always quote BH redshifted masses, as observed in a geocentric reference frame.
\section{Numerical QNM computation}\label{sec:QNM_NR}
\begin{figure*}
\includegraphics[width=.46\textwidth]{Figs/qnml2m2n0n1i-rp.pdf}
\hspace{1.5cm}
\includegraphics[width=.44\textwidth]{Figs/qnml2m2n0n1r-rp.pdf}
\caption{Imaginary (left panel) and real (right panel) part of the frequency for the $Z_2$ $\ell=m=2$ KN QNMs.
The orange surface (top of both figures) represents the $\mathrm{PS}$ family of the $n=0$ mode. The green surface (right side of both figures) represents the $\mathrm{NH}$ family of the $n=0$ mode.
The dark-red surface (below the orange surface) corresponds to the $\mathrm{PS}$ family of the $n=1$ mode.
Finally, the blue surface (almost on the top of the green surface) corresponds to the $\mathrm{NH}$ family of the $n=1$ mode.
The point at $a=0=Q$ in the orange (dark-red) surfaces matches the gravitational QNM of Schwarzschild~\cite{Chandra:1983,Leaver:1985ax} for the $n=0$ ($n=1$) modes, while the brown curve marks the extremal limit.
In this and the following figures we plot the NH surface only up to $Q/r_+=0.99$, which explains the small gap between the green (blue) surface and the extremal brown curve. These highly charged configurations can be computed with an analytical formula derived in~\cite{QNMsKN,ExtendedQNMsKN} that provides an excellent approximation to the numerical solution. We display $\mathrm{NH}_n$ (i.e., the green and blue surfaces) only for large charge, where they can dominate over the $\mathrm{PS}_n$ sub-families; for smaller charge they are very strongly damped.
}\label{Fig:Z2l2m2n0n1-rp}
\end{figure*}
In the 1980s, Chandrasekhar, in his seminal textbook~\cite{Chandra:1983}, identified the coupled system of two partial differential equations (PDEs) that, under a particular gauge choice, describe the most general gravito-electromagnetic perturbations of a KN BH (excluding the perturbations that change the mass, angular momentum and/or charge of the solution). See also Ref.~\cite{Mark:2014aja}.
Since then, different studies have attempted to solve these PDEs using certain approximations. In a first attempt,~\cite{Kokkotas_charge, Berti:2005eb} studied perturbations described by the so-called Dudley-Finley equations. This is a decoupled system of Teukolsky-like equations that describes exactly the spin $0$ (scalar field) perturbations of KN and are expected to be a reasonably good approximation for the higher spin gravito-electromagnetic perturbation equations. Later, Chandrasekhar's equations for KN were solved perturbatively in a small rotation ($a$) expansion about the Reissner-Nordstr\"om BH~\cite{Pani:2013ija,Pani:2013wsa}, and in a small charge ($Q$) expansion about the Kerr background~\cite{Mark:2014aja}. The calculation of QNMs, within a WKB and/or near-horizon approximation, in the KN extremal limit was also considered in Ref.~\cite{Zimmerman:2015trm}.
More recently, it was shown that the most general gravito-electromagnetic perturbations of KN can be described by a coupled system of two PDEs
for two gauge invariant Newman-Penrose fields~\cite{Dias:2015wqa} that,
upon gauge fixing, reduce to the PDE system originally found by Chandrasekhar~\cite{Chandra:1983,Mark:2014aja}. These equations were then solved numerically using numerical methods reviewed in~\cite{Dias:2015nua}, in relevant ranges of the KN parameter space (notably for KN with $a=Q$), to establish strong evidence in favour of linear mode stability of the KN BH against gravito-electromagnetic perturbations \cite{Dias:2015wqa} (further supported by the non-linear time evolution study of~\cite{Zilhao:2014wqa}).
However, it was only recently~\cite{QNMsKN,ExtendedQNMsKN} that the long-sought task of obtaining the full frequency spectra of the QNMs with slowest decay rate (and others of physical interest) was finally completed. In the rest of this section, we will borrow and discuss results from our companion papers~\cite{QNMsKN,ExtendedQNMsKN} that form the theoretical basis for the present study.
\begin{figure*}
\includegraphics[width=.46\textwidth]{Figs/qnml2m2n0i.pdf}
\hspace{1.5cm}
\includegraphics[width=.44\textwidth]{Figs/qnml2m2n0r.pdf}
\caption{Imaginary (left panel) and real (right panel) parts of the frequency for the $Z_2$, $\ell=m=2, n=0$ KN QNM with lowest $|\mathrm{Im}\,\hat{\omega}|$.
At each ($a/M$, $q/M$) point, only the ``dominant'' QNM family (i.e., the one with the larger damping time between the PS and NH families) is shown.
The orange (green) surface denotes the region where the PS (NH) family is dominant.
The yellow area indicates the region where the two families of modes trade dominance.
At extremality, the dominant mode always starts at $\mathrm{Im}\,\hat{\omega}=0$ and $\mathrm{Re}\,\hat{\omega}=m\hat{\Omega}_H^{\hbox{\footnotesize ext}}$ (brown curve).
The dark-red point ($a=0=Q$), $\hat{\omega}\simeq 0.37367168 - 0.08896232\, i $, is the gravitational QNM of Schwarzschild ~\cite{Chandra:1983,Leaver:1985ax}.
In the right panel, the yellow and green regions are so close to the extremal brown curve that they are not visible.}
\label{Fig:Z2l2m2n0}
\end{figure*}
\begin{figure*}
\includegraphics[width=.45\textwidth]{Figs/qnml2m2n1i.png}
\hspace{1.5cm}
\includegraphics[width=.45\textwidth]{Figs/qnml2m2n1r.png}
\caption{Imaginary (left panel) and real (right panel) parts of the frequency for the $Z_2$, $\ell=m=2, n=1$ KN QNM with lowest $|\mathrm{Im}\,\hat{\omega}|$.
At each ($a/M$, $q/M$) point, only the ``dominant'' QNM family (i.e., the one with the larger damping time between the PS and NH families) is shown.
The dark-red (blue) surface denotes the region where the PS (NH) family is dominant.
The magenta area indicates the region where the two families of modes trade dominance.
At extremality, the dominant mode always starts at $\mathrm{Im}\,\hat{\omega}=0$ and $\mathrm{Re}\,\hat{\omega}=m\hat{\Omega}_H^{\hbox{\footnotesize ext}}$ (brown curve).
The red point ($a=0=Q$), $\hat{\omega}\simeq 0.34671099 - 0.27391488\, i $, is the gravitational QNM of Schwarzschild ~\cite{Chandra:1983,Leaver:1985ax}.
In the right panel, the magenta and blue regions are so close to the extremal brown curve that they are not visible.}
\label{Fig:Z2l2m2n1}
\end{figure*}
\begin{figure*}
\includegraphics[width=.45\textwidth]{Figs/qnml3m3n0i.png}
\hspace{1.5cm}
\includegraphics[width=.45\textwidth]{Figs/qnml3m3n0r.png}
\caption{Imaginary (left panel) and real (right panel) parts of the frequency for the $Z_2$, $\ell=m=3, n=0$ KN QNM with lowest $|\mathrm{Im}\,\hat{\omega}|$.
At each ($a/M$, $q/M$) point, only the ``dominant'' QNM family (i.e., the one with the larger damping time between the PS and NH families) is shown.
The magenta (blue) surface denotes the region where the PS (NH) family is dominant.
The orange area indicates the region where the two families of modes trade dominance.
At extremality, the dominant mode always starts at $\mathrm{Im}\,\hat{\omega}=0$ and $\mathrm{Re}\,\hat{\omega}=m\hat{\Omega}_H^{\hbox{\footnotesize ext}}$ (brown curve).
The dark-red point ($a=0=Q$), $\hat{\omega}\simeq 0.59944329 - 0.09270305\, i $, is the gravitational QNM of Schwarzschild ~\cite{Chandra:1983,Leaver:1985ax}.
In the right panel, the orange and light-blue regions are so close to the extremal brown curve that they are not visible.}
\label{Fig:Z2l3m3n0}
\end{figure*}
For the astrophysical investigations considered in this work, we wish to identify the families of gravito-electromagnetic QNMs that dominate the ringdown emission following a BBH merger, focusing on the perturbations with spin weight $-2$.
Here by dominant we mean the families that have the slowest decay rates for all KN BHs parametrized by the $\{a,Q\}$ pairs\footnote{We use the notation of~\cite{QNMsKN,ExtendedQNMsKN}. In Boyer-Lindquist coordinates, the outer and inner horizon radii $r_\pm$ are related to the KN mass $M$ and charge $Q$ by $r_\pm=M\pm\sqrt{M^2-a^2-Q^2}$ and the event horizon angular velocity and temperature are $\Omega_H= \frac{a}{r_+^2+a^2}$ and $T_H = \frac{1}{4 \pi r_+}\frac{r_+^2-a^2-Q^2}{r_+^2+a^2 }$. At $r_-=r_+$, i.e., $a=a_{\hbox{\footnotesize ext}}=\sqrt{M^2-Q^2}$, the KN BH has a regular extremal (``ext") configuration with $T_H^{\hbox{\footnotesize ext}} =0$, and maximum angular velocity $\Omega_H^{\hbox{\footnotesize ext}} =a_{\hbox{\footnotesize ext}}/(M^2+a_{\hbox{\footnotesize ext}}^2)$.}.
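For concreteness, the horizon quantities defined in the preceding footnote can be evaluated directly; the following is a minimal Python sketch (the function name is ours, for illustration only), in $G=c=1$ units:
\begin{verbatim}
import math

def kn_horizon(M, a, Q):
    # Outer horizon radius, angular velocity and temperature of a
    # Kerr-Newman black hole, following the conventions of the footnote.
    disc = M**2 - a**2 - Q**2
    if disc < 0:
        raise ValueError("over-extremal parameters: a^2 + Q^2 > M^2")
    rp = M + math.sqrt(disc)
    Omega_H = a / (rp**2 + a**2)
    T_H = (rp**2 - a**2 - Q**2) / (4 * math.pi * rp * (rp**2 + a**2))
    return rp, Omega_H, T_H
\end{verbatim}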
Not surprisingly, the dominant family of QNMs is the one that, in the $a=Q=0$ limit and using Chandrasekhar's notation \cite{Chandra:1983}, reduces to the Schwarzschild gravitational $Z_2$ $\{ \ell=m=2, n=0 \}$ mode. Here $\ell$ is the harmonic number that gives the number of zeros of the QNM eigenfunction along the polar direction and $n$ is the radial overtone (related to the number of zeros of the QNM eigenfunction along the radial direction).
The second family of interest is the one that reduces to the gravitational $Z_2$ $\{ \ell=m=2, n=1 \}$ mode in the Schwarzschild limit~\cite{Chandra:1983}. Although this mode has a short lifetime, in the uncharged case it contributes significantly to the emission soon after the peak of the GW waveform~\cite{Giesler:2019uxc} due to its large excitation. These QNM spectra were obtained in our companion paper \cite{QNMsKN} and further detailed in the associated extended study~\cite{ExtendedQNMsKN}.
Finally, we will also need information about the spectra of the QNM family that reduces to the Schwarzschild gravitational $Z_2$ $\{ \ell=m=3, n=0 \}$ mode in the $a=Q=0$ limit. This mode makes a significant contribution to the emission for BBH mergers where the progenitor's mass ratio is significantly different from unity.
The task of identifying the most dominant modes within each of these families of QNMs is made less trivial by the fact that for each family $\{\ell=m,n\}$ there are not one but actually two sub-families of QNMs \cite{QNMsKN,ExtendedQNMsKN}. These can be denoted as
1) the {\it photon sphere} ($\mathrm{PS}_n$), and 2) the {\it near-horizon} ($\mathrm{NH}_n$) sub-families, although this sharp distinction is unambiguous only for small rotation parameters, i.e., close to the Reissner-Nordstr\"om family. To classify them in the Kerr-Newman background, we start by identifying them in the Reissner-Nordstr\"om limit and then we follow these two sub-families as the rotation parameter increases.
In the Reissner-Nordstr\"om case, the PS family of QNMs is the one whose frequency spectrum, in the eikonal or geometric optics limit $-$ i.e. the WKB limit $m=\ell\to \infty$ $-$ is closely connected to the properties of unstable equatorial circular photon orbits: the real and imaginary parts of the PS frequency are proportional to the Keplerian frequency and to the Lyapunov exponent of the orbit, respectively. The latter describes how quickly radial deformations increase the cross section of a null geodesic congruence around the orbit. On the other hand, the NH family of QNMs is characterized by a wavefunction that, near extremality, is sharply localized around the horizon and quickly decays to zero away from it. It is further characterized by the fact that its QNM spectrum has an imaginary part that vanishes in the extremal limit and, in the Reissner-Nordstr\"om case, a vanishing real part (unlike the PS modes).
Starting from the Reissner-Nordstr\"om solution, as the rotation increases and we move across the KN parameter space, these PS and NH sub-families define two surfaces (in a $\{Q,a,\omega\}$ plot) that either intersect (with a simple crossover) or exhibit the interesting phenomenon of eigenvalue repulsion, as detailed in~\cite{QNMsKN,ExtendedQNMsKN}. Typically, this happens for very large values of $Q/M$, in a region of the parameter space that is difficult to probe with observations. When eigenvalue repulsion occurs, instead of a simple intersection, it becomes harder to draw a clear distinction between the PS and NH sub-families.
Additional difficulties emerge from the fact that non-trivial intersections with eigenvalue repulsions can also happen between different (sub-)families, e.g., between $\{\ell=m=2,n=0\}$ and $\{\ell=m=2,n=1\}$ modes.
This requires a careful analysis of the data to identify to which (sub-)family a particular QNM belongs; see Refs.~\cite{QNMsKN,ExtendedQNMsKN} for additional details.
For each KN BH parametrized by $\{a,Q\}$, we derive all the relevant $\mathrm{PS}_n$ and $\mathrm{NH}_n$ QNMs for a given $\{\ell=m,n\}$ family, and identify the modes that have the slowest decay rate within that particular $\{\ell=m,n\}$ QNM family.
An overview of four (out of the six) sub-families of QNMs needed for our study is presented in Fig.~\ref{Fig:Z2l2m2n0n1-rp}, to give a general picture of their relative positions in the frequency plane. Namely, in this figure we focus on the $\{\ell=m=2,n=0\}$ and $\{\ell=m=2,n=1\}$ families and, for each of them, we display the spectra of their two sub-families, namely $\mathrm{PS}_n$ and $\mathrm{NH}_n$.
We plot the imaginary (left panel) and real (right panel) part of the frequency for these KN QNMs as a function of the KN rotation and charge.
In this particular figure we use dimensionless quantities in units of the horizon radius $r_+$ (instead of units of $M$), namely $\hat{a}:= a/r_+, \hat{Q}:= Q/r_+$, because it turns out that the distinction between the four sub-families is better seen in these units. For the frequency we always use units of $M$: $\hat{\omega} :=\omega M$, $\hat{\Omega} := \Omega M$. The brown curve with $\mathrm{Im}\,\hat{\omega}=0$ and $\mathrm{Re}\,\hat{\omega}=2\hat{\Omega}_H^{\mathrm{ext}}$ corresponds to extremality (`ext') where $\hat{a}_\mathrm{ext}=\sqrt{1-\hat{Q}^2}$.
Note that, as expected, $\mathrm{PS}_{0}$ always has smaller $|\mathrm{Im}\,\hat{\omega}|$ than $\mathrm{PS}_{1}$, and $\mathrm{NH}_{0}$ always has smaller $|\mathrm{Im}\,\hat{\omega}|$ than $\mathrm{NH}_{1}$.
Focusing our attention on the families with the slowest decay rate, the $\mathrm{PS}_{0}$ and $\mathrm{NH}_{0}$ curves intersect at large charge (with simple crossovers or with intricate eigenvalue repulsions, not visible in this overview figure but identified in \cite{QNMsKN,ExtendedQNMsKN}).
From $Q=0$ up to a critical charge $\hat{Q}=\hat{Q}_c(\hat{a})$, the $\mathrm{PS}_{0}$ mode dominates the QNM spectrum and terminates on the brown curve at extremality, while for $\hat{Q}_c(\hat{a})<\hat{Q}\leq \hat{Q}_{\mathrm{ext}}$ it is instead the $\mathrm{NH}_{0}$ mode (which always terminates on the brown curve) that has the slowest decay rate. It should be noted that the $\mathrm{NH}_{0}$ green surface also intersects the $\mathrm{PS}_{1}$ dark-red surface, often with eigenvalue repulsions that are not clearly visible in Fig.~\ref{Fig:Z2l2m2n0n1-rp} but are detailed in \cite{QNMsKN,ExtendedQNMsKN}. These repulsions also leave an imprint on the dark-red $\mathrm{PS}_1$ surface that is partially visible in the left panel of Fig.~\ref{Fig:Z2l2m2n1}, just to the left of the magenta region.
For reference, although not shown in Fig.~\ref{Fig:Z2l2m2n0n1-rp}, the $\mathrm{PS}_0$ surface of the $\{\ell=m=3,n=0\}$ QNMs would be in between the orange and dark-red surfaces, and the $\mathrm{NH}_0$ surface of the $\{\ell=m=3,n=0\}$ QNMs would be in between the green and blue surfaces.
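Since both sets of dimensionless units appear in our figures, we note that converting from units of $M$ to horizon-radius units only requires the expression for $r_+$; a minimal sketch (setting $M=1$; function name ours):
\begin{verbatim}
import numpy as np

def mass_to_horizon_units(chi, qbar):
    """Map (a/M, Q/M) to (a/r_+, Q/r_+), setting M = 1."""
    r_plus = 1.0 + np.sqrt(1.0 - chi**2 - qbar**2)  # r_+/M
    return chi / r_plus, qbar / r_plus
\end{verbatim}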
After this generic overview of two of the main QNM families of interest for our study, we now give the QNM spectra of the {\it slowest decaying mode} for each of the three main families, namely $\{\ell=m=2,n=0\}$, $\{\ell=m=2,n=1\}$ and $\{\ell=m=3,n=0\}$, used in our study. This time we parametrize the KN BH by $\chi= a/M, \bar{q}= Q/M$.
In Fig.~\ref{Fig:Z2l2m2n0}, we show the result for the $\{\ell=m=2,n=0\}$ mode. For small and intermediate charge, the spectrum is dominated by the $\mathrm{PS}_{0}$ mode (orange surface). On the other hand, for very large charge (up to $Q/M=1$), the spectrum is instead dominated by the $\mathrm{NH}_{0}$ mode (green surface). In between these two, there is a yellow area in the left panel where the $\mathrm{PS}_{0}$ and $\mathrm{NH}_{0}$ surfaces intersect, either with a simple crossover or with eigenvalue repulsions, and trade dominance. In the yellow area we plot the frequency of the mode with the smallest $|\mathrm{Im}\,\hat{\omega}|$. In the right plot of Fig.~\ref{Fig:Z2l2m2n0}, the yellow and green areas are not visible because, in units of $M$, the real part of their frequency lies extremely close to the extremal brown curve.\footnote{A similar discussion applies to Figs.~\ref{Fig:Z2l2m2n1} and \ref{Fig:Z2l3m3n0}.}
These surfaces are, however, clearly visible when horizon-radius units are used: see Fig.~\ref{Fig:Z2l2m2n0n1-rp}.
In Fig.~\ref{Fig:Z2l2m2n1}, we repeat the exercise for the $\{\ell=m=2,n=1\}$ mode. The $\mathrm{PS}_{1}$ mode (dark-red) dominates for small and intermediate charges, while the $\mathrm{NH}_{1}$ mode (blue) dominates for very large charges. In between, there is a small window with a magenta area where the $\mathrm{PS}_{1}$ and $\mathrm{NH}_{1}$ modes trade dominance, and there we display the mode with the smallest $|\mathrm{Im}\,\hat{\omega}|$. On the rightmost side of the dark-red $\mathrm{PS}_{1}$ surface one can identify a small region that is strongly deformed by the eigenvalue repulsion between this $\ell=m=2$ $\mathrm{PS}_{1}$ family and the $\ell=m=2$ $\mathrm{NH}_{0}$ family, as detailed in \cite{QNMsKN,ExtendedQNMsKN}.
Finally, in Fig.~\ref{Fig:Z2l3m3n0} we give the spectra for the $\{\ell=m=3,n=0\}$ mode. The $\mathrm{PS}_{0}$ mode (magenta surface) dominates for small and intermediate charges.
The light-blue surface corresponds to the $\mathrm{NH}_{0}$ mode, which dominates for very large charges. In between, the orange area marks the region where the $\mathrm{PS}_{0}$ and $\mathrm{NH}_{0}$ modes trade dominance, and there we show the mode with the smallest $|\mathrm{Im}\,\hat{\omega}|$.
\section{Fitting formulae for the numerical solutions}\label{sec:QNM_fits}
In this section, after introducing our fitting algorithm and testing it on previous Kerr results, we construct analytical fits for the real and imaginary part of KN PS QNM frequencies as a function of the BH charge and spin parameters.
\subsection{Bayesian fitting method}
We formulate the problem in the language of Bayesian inference, an extension of classical logic in the absence of complete information~\cite{Jaynes2003}.
Our fitting templates will be characterised by a set of coefficients, collectively labeled by $\vec{\theta}$, possibly different for each analytical form chosen. We infer the optimal (best-fit) values and related uncertainties for the coefficients by computing their probability distribution, the \textit{posterior distribution} $p(\vec{\theta} | d, \mathcal{H}, I)$, conditioned on the available numerical data $d$. The distribution is obtained through Bayes' theorem:
\begin{equation}
p(\vec{\theta} | d, \mathcal{H}, I) = \frac{p(\vec{\theta} | \mathcal{H}, I) \cdot p(d | \vec{\theta}, \mathcal{H}, I)}{p(d | \mathcal{H}, I)} \,,
\end{equation}
where $\mathcal{H}$ constitutes the parametric model describing the data (hypothesis), while $I$ denotes all the available background information.
The distribution $p(\vec{\theta} | \mathcal{H}, I)$ is the \textit{prior distribution}, encoding all the available information on the coefficients before the start of the inference process (e.g., the bounds within which we allow the coefficients to vary).
If no \emph{a priori} information is available, the prior can be chosen to be uniform on $\vec{\theta}$ within a given range of interest.
The last key ingredient in the numerator is the \textit{likelihood} function $p(d | \vec{\theta}, \mathcal{H}, I)$, which is fixed by the error distribution of the numerical data.
For the numerical fits discussed in this section, we assume a likelihood given by a zero-mean Gaussian distribution with a standard deviation equal to the numerical uncertainty, together with uniform priors on the template coefficients.
The overall normalisation $\mathcal{Z} := p(d | \mathcal{H}, I)$, known as the \textit{evidence}, encodes the probability that the data $d$ can be described by the chosen model.
This approach allows one to compute the full multi-dimensional probability distribution of the coefficient set, improving upon one-dimensional error estimates on each of the coefficients, and avoiding convergence issues in high-dimensional problems.
To explore the posterior probability distribution we use the nested sampling~\cite{skilling2006} algorithm \texttt{CPNest}~\cite{cpnest}.
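Schematically, the two ingredients supplied to the sampler are the log-likelihood and the log-prior; the following is a minimal sketch of their structure (the model function, data arrays and bounds are placeholders, and the posterior exploration itself is delegated to \texttt{CPNest}):
\begin{verbatim}
import numpy as np

def log_likelihood(theta, x, y, sigma, model):
    """Zero-mean Gaussian likelihood with standard deviation equal
    to the numerical uncertainty sigma of the data y."""
    residuals = y - model(x, theta)
    return -0.5 * np.sum((residuals / sigma)**2
                         + np.log(2.0 * np.pi * sigma**2))

def log_prior(theta, bounds):
    """Uniform prior: flat inside the coefficient bounds, zero outside."""
    theta, bounds = np.atleast_1d(theta), np.atleast_2d(bounds)
    inside = np.all((bounds[:, 0] <= theta) & (theta <= bounds[:, 1]))
    return 0.0 if inside else -np.inf
\end{verbatim}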
\subsection{The Kerr case}
Before constructing a QNM template to model the KN case, we first test our fitting procedure by reproducing known results from the literature. As test cases, we choose the models of Berti et al.~\cite{Berti_fits}, Nagar et al.~\cite{Nagar_fits}, and London et al.~\cite{London_fits}. We start from the widely used analytical representation of Kerr BH spectra as a function of the BH spin from Berti et al.~\cite{Berti_fits}. It has the general form:
\begin{equation} \label{eq:Berti_etal}
X = c_0 +c_1 \cdot (1-\chi)^{c_2} \,,
\end{equation}
where $c_i \in \mathbb{R}$ and, defining the complex QNM frequency $\tilde{\omega} = \omega + i/\tau$, $X$ corresponds to $\omega$ or to the quality factor of each QNM mode, $Q= \omega \tau /2$.
The second model, employed in the construction of the effective one body model from Nagar et al.~\cite{Nagar_fits}, provides an improved representation of the spectrum with respect to Eq.~\eqref{eq:Berti_etal}, by assuming a rational function:
\begin{equation} \label{eq:Nagar_etal}
X = Y_0 \, \left( \frac{1+\sum_{j=1}^{3} b_j \, \chi^j}{1+\sum_{j=1}^{3} c_j \, \chi^j} \right) \,,
\end{equation}
where $b_j, c_j \in \mathbb{R}$, $X$ corresponds to $\omega$ or $\tau^{-1}$ and $Y_0$ is the Schwarzschild value of the parameter under consideration.
The last model considered, from London et al.~\cite{London_fits}, has a precision comparable to that of Eq.~\eqref{eq:Nagar_etal}, with the additional advantage of providing a smooth modeling of the near-extremal behaviour.
It directly models the \textit{complex} QNM frequency, first smoothing the spectrum behaviour through a $\kappa$-transformation defined by:
\begin{equation}
\kappa := \left(\log_3(2-\chi)\right)^{\frac{1}{2+\ell-|m|}} \,,
\end{equation}
and subsequently modeling the QNM frequencies as:
\begin{equation} \label{eq:London_etal}
\omega + i/{\tau} = \sum_{j=0}^{5} \kappa^j \, A_j \, e^{ip_j} \,,
\end{equation}
for each $(\ell,m,n)$ with $A_j\in \mathbb{R}, p_j\in [0, 2\pi]$.
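For reference, the three templates above can be implemented in a few lines; a minimal sketch (function names ours; \texttt{b}, \texttt{c} are length-3 coefficient arrays and \texttt{A}, \texttt{p} length-6 arrays):
\begin{verbatim}
import numpy as np

def berti_template(chi, c0, c1, c2):
    """Eq. (Berti et al.): X = c0 + c1 (1 - chi)^c2."""
    return c0 + c1 * (1.0 - chi)**c2

def nagar_template(chi, Y0, b, c):
    """Eq. (Nagar et al.): rational function of the spin."""
    num = 1.0 + sum(bj * chi**(j + 1) for j, bj in enumerate(b))
    den = 1.0 + sum(cj * chi**(j + 1) for j, cj in enumerate(c))
    return Y0 * num / den

def london_template(chi, ell, m, A, p):
    """Eq. (London et al.): complex frequency omega + i/tau as a
    kappa-expansion, kappa = (log_3(2-chi))^(1/(2+ell-|m|))."""
    kappa = (np.log(2.0 - chi) / np.log(3.0))**(1.0 / (2 + ell - abs(m)))
    return sum(Aj * np.exp(1j * pj) * kappa**j
               for j, (Aj, pj) in enumerate(zip(A, p)))
\end{verbatim}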
We apply our fitting algorithm to the $(\ell,m,n)=(2,2,0)$ $\omega$ numerical data, publicly available from Ref.~\cite{Berti_website}, assuming each of the above templates, seeking to reproduce the results obtained in the original studies~\cite{Berti_fits,Nagar_fits,London_fits}.
Fig.~\ref{fig:Kerr_fits} shows the comparison of each template to the data both using the coefficients given in the original work and the ones obtained with our algorithm using the maximum \emph{a posteriori} values.
The fractional error is computed as the residual $(\omega^\text{data} - \omega^\text{fit}) / \omega^\text{data}$.
As expected, Eqs.~\eqref{eq:Nagar_etal} and \eqref{eq:London_etal} provide a more accurate description of the QNM frequency, with residuals around the $0.1\%$ level. Eq.~\eqref{eq:London_etal} proves to be the most faithful to the numerical data, especially in the extremal limit ($\chi \rightarrow 1$).
The overall agreement of each result with the data set employed is quantified by the $\mathbb{L}^2$ distance between the numerical data and the analytical formula. In the Nagar et al.\ case, our fits perform an order of magnitude better in terms of residuals and norm. However, it has to be noted\footnote{Alessandro Nagar, private communication.} that the fit of Nagar et al.\ was calibrated on values of the remnant spin corresponding to the SXS catalog simulations employed in Ref.~\cite{Nagar_fits}. These also include some negative spin values, and only a subset of the data points for positive spin values considered here.
This dataset discrepancy might explain some of the difference in the residuals we observe. In the London et al.\ case, there is a general improvement in the non-extremal region, while the original fit provides a smoother behaviour around the extremality limit, although our result still provides a faithful representation, below the $0.1 \%$ residual level, for all the considered values of spin. This can be explained by the fact that we choose not to fix the extremal limit, in order to minimise the global residuals, contrary to what was done in the original fit. Since our aim is simply to validate our numerical algorithm, which returns results compatible with those of the original works, we do not attempt to resolve the small differences we observe in the residuals of these last two models, which are probably due to the aforementioned discrepancies in training data sets or fitting choices.
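The two figures of merit used above are straightforward to compute; a minimal sketch (the overall normalisation of the $\mathbb{L}^2$ distance is our convention):
\begin{verbatim}
import numpy as np

def fractional_error(omega_data, omega_fit):
    """Residuals as defined in the text."""
    return (omega_data - omega_fit) / omega_data

def l2_distance(omega_data, omega_fit):
    """Overall agreement between data and analytical formula."""
    return np.sqrt(np.sum((omega_data - omega_fit)**2))
\end{verbatim}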
\begin{figure*}[!tb]
\includegraphics[width=0.85\textwidth]{Figs/Kerr_fits_comparison}
\caption{Test of the fitting method (with the maximum-posterior point estimate for the fit parameters) against results from the literature for the Kerr $(\ell,m,n)=(2,2,0)$ frequency, using the same ans\"atze as in the original works. The fractional error is computed as the residual $(\omega^\text{data} - \omega^\text{fit}) / \omega^\text{data}$. The $\mathbb{L}^2$ norm quantifies the overall agreement with the data set. The open circles representing the numerical data have been down-sampled for visualisation purposes.}
\label{fig:Kerr_fits}
\end{figure*}
\subsection{The KN case}
We now turn to the task of building a parametric function capable of modeling the spectrum of the gravitational KN QNMs discussed in Sec.~\ref{sec:QNM_NR}, the ones reducing to the Schwarzschild QNMs in the non-spinning, uncharged case.
We initially considered a generalization of the model used in Refs.~\cite{Pani:2013ija, Pani:2013wsa} to fit the small-charge case, including higher-order terms. However, we found that this is ineffective at modeling the spectrum in the large charge limit.
We instead choose to fit the numerical data presented in Sec.~\ref{sec:QNM_NR}, using a generalisation of the Nagar et al.\ model, Eq.~\eqref{eq:Nagar_etal}:
\begin{equation} \label{eq:Nagar_etal_ext}
X = Y_0 \, \left(\frac{\sum_{k,j=0}^{N_\text{max}} b_{k,j} \, \chi^k \, \bar{q}^j}{\sum_{k,j=0}^{N_\text{max}} c_{k,j} \, \chi^k \, \bar{q}^j}\right) \,.
\end{equation}
Here $X$ corresponds to $\omega$ or $\tau^{-1}$, $Y_0$ stands for the Schwarzschild value of the corresponding fitted quantity, and $b_{k,j}, c_{k,j} \in \mathbb{R}$, with $b_{0,0}=c_{0,0}=1$.
This template contains $2 \cdot (N_\text{max}+1)^2 - 2$ free coefficients, implying that, already when truncating the expression at the same order used in the Kerr case, $N_\text{max}=3$, the number of coefficients increases to $30$, compared to the $6$ coefficients used in the uncharged case.
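A minimal sketch of Eq.~\eqref{eq:Nagar_etal_ext} for scalar inputs, with the coefficients stored as $(N_\text{max}+1)\times(N_\text{max}+1)$ matrices (function name ours):
\begin{verbatim}
import numpy as np

def kn_template(chi, qbar, Y0, b, c):
    """Bivariate rational ansatz, with b[0, 0] = c[0, 0] = 1
    fixed by convention."""
    b, c = np.asarray(b), np.asarray(c)
    N = b.shape[0] - 1
    chi_pow, q_pow = chi**np.arange(N + 1), qbar**np.arange(N + 1)
    return Y0 * (chi_pow @ b @ q_pow) / (chi_pow @ c @ q_pow)
\end{verbatim}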
We apply the nested sampling algorithm as described above, choosing uniform priors $\mathcal{U}(-10,10)$ for all the coefficients appearing in Eq.~\eqref{eq:Nagar_etal_ext}, and setting $N_\text{max}=3$ to limit the number of free parameters.
We restrict our attention to the QNMs of interest for analysing observational data, hence we only consider data points respecting the sub-extremality condition $\chi^2 + \bar{q}^2 < 0.99$.
In this region, the PS family is always dominant (longer damping time) compared to the NH one. The NH family has damping times comparable to (or larger than) those of the PS family only very close to the extremal regime.
Thus, in what follows, we only consider PS QNMs.
We split the data into a training set, which constitutes $90\%$ of the full data set, and a validation set containing the remainder of the data. During the fit, we only employ the training set to find the values of the template coefficients, and use the validation set in a post-processing phase to evaluate the residuals on values which were not used to construct the fit.
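A minimal sketch of the split (the random seed and shuffling scheme shown here are our choices, not specified in the text):
\begin{verbatim}
import numpy as np

def train_validation_split(n_points, train_fraction=0.9, seed=0):
    """Random split of data indices into training/validation sets."""
    idx = np.random.default_rng(seed).permutation(n_points)
    n_train = int(train_fraction * n_points)
    return idx[:n_train], idx[n_train:]
\end{verbatim}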
Fig.~\ref{fig:omega_220_KN_fit} shows the maximum a posteriori (which coincides with the maximum likelihood, since all priors are uniform) fitting model against the validation data points for the fundamental $(\ell, m, n) = (2,2,0)$ QNM frequency.
The residuals are centered around zero, spanning the range $\pm 0.2 \%$, indicating the same level of agreement of the best Kerr templates available.
We achieve comparable residuals on the frequencies and damping times of the other modes, except for the damping time of the $(\ell, m, n) = (2,2,1)$ mode which shows residuals as high as $1\%$ in the corners of the parameter space, though the residuals drop below $0.5\%$ for $\chi^2 + \bar{q}^2 < 0.9$. This level of agreement is acceptable given the current and expected measurement precision obtainable on the damping time~\cite{O3a_TGR}.
The maximum-posterior values of both the frequency and damping-time coefficients of the fits for the $(\ell, m,n) = \{ (2,2,0), (2,2,1), (3,3,0) \}$ modes, together with the median and $90\%$ CL on these coefficients, are reported in the Appendix. As expected, residuals on the training set are of the same order of magnitude, albeit with smaller tails.
\begin{figure*}[!tb]
\includegraphics[width=0.95\textwidth]{Figs/KN_Nagar_ext_omega_r_220_median_model_residuals_3D}
\caption{Maximum \emph{a posteriori} fitting model for the $(\ell, m, n) = (2,2,0)$ QNM frequency, in red. Black points correspond to validation data points, excluded from the original fit, computed according to the methods described in Sec.~\ref{sec:QNM_NR}. The shading encodes the corresponding residuals. The training dataset shows qualitatively identical behaviour, albeit with smaller residuals, as expected.}
\label{fig:omega_220_KN_fit}
\end{figure*}
\section{Analysis of GW data}
\label{sec:LVC_DA}
In this section, after reviewing previous constraints and sensitivity predictions on BH charges from GW observations, we introduce our time-domain formalism and GW emission model used to infer the remnant object properties from GW data. This model is then applied to high confidence detections of GW transients with a sufficiently loud ringdown, presenting observational constraints on the maximum charge-to-mass ratio compatible with gravitational-wave data.
\subsection{Previous constraints on BH $U(1)$ charges}
Effects induced by the presence of a BH charge on GW signals were previously investigated in Refs.~\cite{2016JCAP...05..054C, Bozzola:2020mjx,Bozzola:2021elc, Wang:2020fra, Christiansen:2020pnv, Yunes:2016jcc, Wang:2021uuh, Gupta:2021rod} under different approximations.
Ref.~\cite{Wang:2020fra} considered modifications to the early inspiral phase, although neglecting the effects of spins, and found no evidence for the presence of a BH charge. The impact of charge in the inspiral phase and the related PE were also analysed in Ref.~\cite{Christiansen:2020pnv} in simplified settings.
Ref.~\cite{2016JCAP...05..054C} used the observation of GW150914 to place constraints on the dipolar emission, similarly to what was done in Ref.~\cite{Yunes:2016jcc} (for future prospects of constraining the dipolar emission using GW observations, see Refs.~\cite{Chamberlain:2017fjl, Perkins:2020tra}). A Bayesian study of the measurability of BH charges in the inspiral phase, considering the effects of charge up to first post-Newtonian order in the waveform phase was presented in Ref.~\cite{Gupta:2021rod}. This model was applied to GWTC-2 low-mass detections, providing the bound $\bar{q} < 0.2-0.3$ at $1$-$\sigma$ credibility.
A recent study on how some of these constraints from the inspiral phase could be affected by the presence of plasma surrounding the binary was presented in Ref.~\cite{Cardoso:2020nst}.
The detectability of charge in the ringdown emission was studied in Ref.~\cite{2016JCAP...05..054C} in the small charge limit, while recently Ref.~\cite{Wang:2021uuh} analysed the ringdown signal of GW150914 by including the effect of a BH charge using a WKB approximation. The results we find are in contrast with the bound obtained in this latter work. We attribute this difference to the KN spectrum of this latter reference being approximated using an ansatz based on the eikonal limit. This ansatz was further calibrated only on $\bar{q}=\chi$ numerical data, neglecting the full structure of the two-dimensional parameter space.
The limitations of the eikonal approximation for low-$\ell$ values (contributing to our analyses) are discussed in our companion paper~\cite{QNMsKN}.
Finally, a major step towards the full characterisation of waveforms sourced by KN metrics was taken in Refs.~\cite{Bozzola:2019aaw, Bozzola:2020mjx, Bozzola:2021elc}, where a set of complete numerical solutions of the inspiral, merger and ringdown of two charged non-spinning BHs in quasi-circular orbits was computed. The accuracy of different analytical approximations was evaluated against numerical results, pointing to a poor agreement of quantities estimated from a quadrupolar approximation in Newtonian models, while a much better agreement was found on remnant quantities estimates from the test particle limit.
The simulations were used to perform a mismatch analysis between charged and uncharged numerical solutions, allowing them to predict a constraint on the charge-to-mass-ratio of GW150914: $\bar{q} \leq 0.3$.
This is the first prediction on the BH charge to stem from a full IMR comparison, although it has not yet been directly validated against observational data. The prediction was also obtained for a fixed mass ratio and neglecting the effect of spins, thus not taking into account the full correlation structure of the BBHs parameter space, an important point in an observational analysis, as will be discussed in the remainder of the paper.
The detectability predictions of Refs.~\cite{Bozzola:2020mjx, Bozzola:2021elc}, where applicable, are in good agreement with the results we obtain.
\subsection{Methods}
\textit{pyRing --} We investigate the KN hypothesis in LIGO-Virgo data by employing the \texttt{pyRing}~\cite{CARULLO-DELPOZZO-VEITCH, ISI-GIESLER-FARR-SCHEEL-TEUKOLSKY, Carullo:2021dui} software, a \texttt{python}~\cite{python} package specifically tailored to the estimation of ringdown parameters. \texttt{pyRing} implements a Bayesian approach (see Sec.~\ref{sec:QNM_fits}), formulating the problem completely in the time domain, both for the likelihood and the waveform, in order to exclude any contribution from the pre-merger emission. Similarly to the numerical fits, the underlying stochastic sampling is performed by the \texttt{CPNest}~\cite{cpnest} algorithm. A convenient feature supported by the software is the possibility of generating synthetic data streams obtained by adding -- \emph{injecting}, in LVK jargon -- simulated signals to real or simulated detector noise. This functionality will be exploited in the next section to predict constraints on BH charges obtainable with future detector upgrades. The \texttt{pyRing} package has been used to produce the first ringdown-only catalog of remnant properties, together with constraints on deviations from GR QNM spectra, using data from the first three observing runs of the LIGO-Virgo interferometers; see Tables VIII-IX of Ref.~\cite{O3a_TGR}. Moreover, it has been employed to explore possible signatures~\cite{FOIT-KLEBAN, CARDOSO-FOIT-KLEBAN, AGULLO2020} of area quantisation in the BH ringdown emission in Ref.~\cite{Laghi:2020rgl} and to obtain bounds~\cite{Carullo:2021dui} on a possible new-physics length scale entering QNM spectra, in a linearized perturbative scheme~\cite{ParSpec}.
\textit{GW model --} To construct our model for a charged BH, we start from a standard $\mathrm{Kerr}$ template~\cite{Berti_fits, Lim:2019xrb}:
\begin{equation} \label{eq:Kerr_model}
h_+ - i h_{\times} = \frac{M_f}{D_L} \sum_{\ell=2}^{\infty} \sum_{m=-\ell}^{+\ell} \sum_{n=0}^{\infty}\, \, (h^{+}_{\ell m n} + h^{-}_{\ell m n})
\end{equation}
with:
\begin{subequations}
\begin{gather}
h^{+}_{\ell m n} = \mathcal{A}^{+}_{\ell m n} \, S_{\ell m n}( \iota, \varphi) \, e^{-i(t-t_{\ell m n})\tilde{\omega}_{\ell m n}+i\phi^{+}_{\ell m n}} \\
h^{-}_{\ell m n} = \mathcal{A}^{-}_{\ell m n} \, S^{*}_{\ell m n}(\pi-\iota, \varphi) \, e^{+i(t-t_{\ell m n})\tilde{\omega}^*_{\ell m n}+i\phi^{-}_{\ell m n}}
\end{gather}
\end{subequations}
where $\tilde{\omega}_{\ell m n} = {\omega}_{\ell m n} - i/{\tau_{\ell m n}}$ (a * denotes complex conjugation) is the complex ringdown frequency, determined in the Kerr case by the remnant BH mass $M_f$ and spin $\chi_{\scriptscriptstyle f}$,\footnote{The ``f" subscript on BH parameters indicates that these values refer to the remnant BH.} $\tilde{\omega}_{\ell m n} = \tilde{\omega}_{\ell m n}(M_f, \chi_{\scriptscriptstyle f})$.
The amplitudes $\mathcal{A}^{+/-}_{\ell m n}$ and phases $\phi^{+/-}_{\ell m n}$ characterise the excitation of each mode and are inferred from the data. The inclination of the BH final spin relative to the observer's line of sight is denoted by $\iota$, while $\varphi$ corresponds to the azimuthal angle of the line of sight in the BH frame, which without loss of generality we set to zero given the complete degeneracy with the single-mode phases.
$S_{\ell m n}$ are the spin-weighted spheroidal harmonics~\cite{Berti:2014fga} and $t_{\ell m n}=t_0$ is a reference start time.
In writing Eq.~(\ref{eq:Kerr_model}), we follow the convention of Ref.~\cite{Lim:2019xrb} (see their Section III), for which $m>0$ indices denote co-rotating modes, while counter-rotating modes are labeled by $m<0$. In the remainder of this work, we will only consider co-rotating modes, since counter-rotating modes are predicted to be hardly excited in the post-merger phase for the binaries analysed in this work. For a discussion about the possible relevance of counter-rotating modes see Refs.~\cite{Lim:2019xrb, Dhani:2020nik, Dhani:2021vac}.
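For illustration, a single $(\ell,m,n)$ contribution to Eq.~(\ref{eq:Kerr_model}) can be coded as follows (a minimal sketch; the $M_f/D_L$ prefactor, the sum over modes, and the spheroidal-harmonic factors are supplied externally, and the function name is ours):
\begin{verbatim}
import numpy as np

def ringdown_mode(t, t0, omega, tau, A_p, A_m, phi_p, phi_m, S_p, S_m):
    """h+_{lmn} + h-_{lmn} for one mode, with
    S_p = S_{lmn}(iota, phi) and S_m = conj(S_{lmn}(pi - iota, phi))."""
    dt = t - t0
    omega_c = omega - 1j / tau   # complex QNM frequency
    h_p = A_p * S_p * np.exp(-1j * dt * omega_c + 1j * phi_p)
    h_m = A_m * S_m * np.exp(+1j * dt * np.conj(omega_c) + 1j * phi_m)
    return np.where(dt >= 0, h_p + h_m, 0.0)
\end{verbatim}
Both terms decay as $e^{-(t-t_0)/\tau}$ for $t>t_0$, as required of a ringdown signal.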
We restrict this template to a superposition of the quadrupolar fundamental (longest-lived) mode and its first corresponding overtone ($\ell=m=2, n=0,1$), considering all the amplitudes and phases as independent numbers. We refer to the template constructed in this manner using the Kerr QNM frequencies as $\mathrm{Kerr_{221}}$~\cite{Giesler:2019uxc, ISI-GIESLER-FARR-SCHEEL-TEUKOLSKY, O3a_TGR}.
The template is then modified by replacing the Kerr QNM complex frequencies, functions of the remnant mass and spin, $\tilde{\omega}_{\ell m n}(M_f, \chi_{\scriptscriptstyle f})$, with the corresponding KN frequencies $\tilde{\omega}_{\ell m n} = \tilde{\omega}_{\ell m n}(M_f, \chi_{\scriptscriptstyle f}, \bar{q}_{\scriptscriptstyle f})$. In the following applications, we interpolate the numerical values obtained in Sec.~\ref{sec:QNM_NR}. This modified template, used in the remainder of this work, is labeled $\mathrm{KN_{221}}$.
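A minimal sketch of this interpolation step (the interpolation scheme and variable names are our choices; the text does not prescribe a specific interpolant):
\begin{verbatim}
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_kn_frequency_interpolant(chi_grid, qbar_grid, omega_hat_grid):
    """Return omega(M_f, chi_f, qbar_f) from tabulated dimensionless
    frequencies omega_hat = omega * M on a (chi, qbar) grid."""
    table = LinearNDInterpolator(
        np.column_stack([chi_grid, qbar_grid]), omega_hat_grid)
    return lambda M_f, chi_f, qbar_f: table(chi_f, qbar_f) / M_f
\end{verbatim}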
The assumption lying behind the construction of our template is that the post-merger signal of a BBH coalescence giving rise to a KN BH can be described by the superposition of the fundamental QNM and its first overtone. We stress that the amplitudes and phases of the modes considered in this model do not assume the predictions for a Kerr BH arising from a BBH coalescence, a key ingredient to avoid biases in the remnant PE in alternative scenarios.
Due to the high flexibility of our template, our modeling hypothesis appears robust. Nevertheless, in the future it would be interesting to directly test this assumption by comparing to numerical simulations~\cite{Bozzola:2020mjx, Bozzola:2021elc}, which would also allow one to predict the values of the post-merger amplitudes and phases as a function of the binary parameters, improving the sensitivity of the model to charge effects.
Due to the coupling of EM fields to the gravitational field, the coupling of the $s=-1$ modes to the GW spectrum should, in principle, also be considered. However, as shown in Ref.~\cite{2016JCAP...05..054C} for simplified settings, the contribution of these modes to the gravitational emission is subdominant for non-extremal cases. Thus, we will neglect the contribution of such modes, leaving investigations of their contribution to the GW signal to future work.
\textit{Analysis details --}
The event selection criteria (a positive Bayes factor for the hypothesis of a signal being present in the data compared to the noise-only hypothesis, and informative parameters distributions), strain data, data conditioning methods, and sampler settings are chosen to be identical to those of Ref.~\cite{O3a_TGR}, which are publicly available from the accompanying data release~\cite{O3a_TGR_data_release}.
Additionally, for completeness we include in our analysis GW170729~\cite{Chatziioannou:2019dsz}, which was included in the testing GR analyses of the first LVC catalog~\cite{LIGOScientific:2019fpa}, but did not pass the stricter threshold imposed for the testing GR analysis of the later GWTC-2 catalog~\cite{O3a_TGR}.
The dataset thus consists of 18 BBH events listed in Table~\ref{tab:O1O2O3_events} (out of a total of 46, mainly due to the limited sensitivity of GW detectors to high frequencies) detected by the LVC.\footnote{The selection criterion should in principle be revised in light of the new physics present in our model.
Nevertheless, we checked that none of the excluded events passes the Bayes factor threshold applied in Ref.~\cite{O3a_TGR} or provides any significant constraint on the presence of a BH charge.}
The dataset is conservatively restricted in order to minimise the effect of noise transients, which could mimic a GW event and contaminate our analysis.
The time origin of the strain for the analysis is set by the peak of $h_+^2 + h_{\times}^2$ in each of the detectors, as computed a posteriori from an IMR analysis, and assuming the maximum likelihood value of the event sky location~\cite{O3a_TGR}.
The adopted prior distributions are also identical to the ones chosen in Ref.~\cite{O3a_TGR}, in particular uniform on the remnant mass and spin, the latter spanning the range $[0, 0.99]$. The prior distribution on the charge parameter is also uniform in the interval $[0, 0.99]$. Finally, we impose an \emph{a priori} joint limit on the charge and spin parameters, $\chi_{\scriptscriptstyle f}^2 + \bar{q}_{\scriptscriptstyle f}^2 < 0.99$, excluding near-extremal BH configurations consistently with the numerical fits discussed in Sec.~\ref{sec:QNM_fits}.
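Schematically, the joint prior constraint on spin and charge reads (a minimal sketch; the mass prior is omitted since its bounds are event-dependent, and the function name is ours):
\begin{verbatim}
import numpy as np

def log_prior_spin_charge(chi_f, qbar_f):
    """Uniform priors on [0, 0.99] with the joint sub-extremality cut."""
    if not (0.0 <= chi_f <= 0.99 and 0.0 <= qbar_f <= 0.99):
        return -np.inf
    if chi_f**2 + qbar_f**2 >= 0.99:
        return -np.inf
    return 0.0
\end{verbatim}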
\subsection{Analysis of the GW transient catalog}
\textit{Full analysis --} We apply the KN waveform model described above to the available LIGO-Virgo events selected in the previous section.
The results are presented in Fig.~\ref{fig:GWTC-2_results}, where we show the $90\%$ CL of the two-dimensional posterior distributions on remnant spin and charge-to-mass ratio, for a representative set of four events showing the strongest constraints on these parameters.
One-dimensional posteriors on the charge-to-mass ratio are uninformative, while those on the spin parameter are consistent with the result from the $\mathrm{Kerr}_{221}$ analysis, with a corresponding broadening due to the increased number of parameters included in the analysis presented here.
Remnant masses, showing very weak correlations with the charge, are always consistent with the values inferred without assuming the presence of a charge~\cite{O3a_TGR}, with a broadening analogous to the one of the spin.
Current events allow us to exclude a large portion of the spin-charge parameter space, although a strong correlation is present, due to the similar effect those two parameters have on increasing QNM frequencies.
In fact, the 90$\%$ contour roughly corresponds to an iso-frequency region, containing the inferred values of spin and charge-to-mass ratio needed to reproduce the dominant -- slowly evolving -- frequency content observed in the post-merger signal.
The typical value of the remnant spin $\chi_{\scriptscriptstyle f}$ generated by the coalescence of close-to-equal-mass, mildly spinning BHs on a quasi-circular orbit, when assuming the absence of charge, is $\chi_{\scriptscriptstyle f} \sim 0.7$~\cite{PhysRevD.77.026004, Baker:2008mj}. Around this value, the results show consistency with $\bar{q}_{\scriptscriptstyle f} \sim 0$, although the wide distribution does not allow us to strongly constrain a specific value of normalised charge and spin.
We compute a global figure of merit comparing the Kerr and KN hypotheses against the GW data, the Bayes factor. In the Kerr case we assume the same template, but now using Kerr predictions for the QNM spectra as a function of the remnant parameters. The results are reported in the first column of Table~\ref{tab:O1O2O3_events}, indicating that current data do not allow us to meaningfully distinguish between the two hypotheses within current statistical uncertainties, according to criteria such as the Jeffreys scale~\cite{Jeffreys}.
\begin{table}[t]
\caption{Summary of the signal-to-noise Bayes factors between the $\mathrm{KN}$ and $\mathrm{Kerr}$ models, for both the full and null analyses, together with upper bounds at $90\%$ credibility on the remnant BH charge $\bar{q}_{\scriptscriptstyle f}$ from the null analysis. The numerical statistical error on each ln B is $\pm \, 0.1$. No significant evidence for or against charged black holes is present.}
\vspace{0.2cm}
\resizebox{1.0\columnwidth}{!}{\begin{tabular}{@{}lccc}
\hline\hline
\multicolumn{3}{c}{\qquad \qquad \qquad \qquad \qquad GWTC-2 ringdown events} \\
\hline
\hline
Event & ln B$^{\mathrm{KN}}_{\mathrm{Kerr}}$ & ln B$^{\mathrm{KN}}_{\mathrm{Kerr}}$ (null) & $\bar{q}_{\scriptscriptstyle f}$ bound at $90 \%$ (null)\\
\hline
GW150914 & $-0.6$ & $-0.7$ & $0.33$ \\
GW170104 & $-0.2$ & $-0.6$ & $0.45$ \\
GW170729 & $-0.7$ & $-0.3$ & $0.44$ \\
GW170814 & $-0.1$ & $-0.3$ & $0.45$ \\
GW170823 & $-0.1$ & $-0.6$ & $0.45$ \\
GW190408\_181802 & $-0.1$ & $-0.6$ & $0.48$ \\
GW190512\_180714 & \,\,\,\,$0.5$ & $-0.1$ & $0.56$ \\
GW190513\_205428 & $-0.5$ & $-0.8$ & $0.43$ \\
GW190519\_153544 & \,\,\,\,$0.3$ & $-0.8$ & $0.37$ \\
GW190521 & $-0.2$ & $-0.2$ & $0.47$ \\
GW190521\_074359 & $-0.2$ & $-0.8$ & $0.41$ \\
GW190602\_175927 & \,\,\,\,$0.3$ & $-0.4$ & $0.51$ \\
GW190706\_222641 & $-0.2$ & $-0.8$ & $0.43$ \\
GW190708\_232457 & \,\,\,\,$0.2$ & $-0.4$ & $0.54$ \\
GW190727\_060333 & $-0.2$ & $-0.5$ & $0.51$ \\
GW190828\_063405 & $-0.3$ & $-0.3$ & $0.47$ \\
GW190910\_112807 & \,\,\,\,$0.1$ & $-0.6$ & $0.43$ \\
GW190915\_235702 & $-0.7$ & $-0.9$ & $0.47$ \\
\hline\hline
\end{tabular}}
\label{tab:O1O2O3_events}
\end{table}
\begin{figure*}[!tb]
\includegraphics[width=0.48\textwidth]{Figs/Charge_spin_PROD1_KN_PSNH_event_GW150914}
\includegraphics[width=0.48\textwidth]{Figs/Charge_spin_PROD1_KN_PSNH_event_GW170729}\\
\vspace{0.3cm}
\includegraphics[width=0.48\textwidth]{Figs/Charge_spin_PROD1_KN_PSNH_event_GW190521A}
\includegraphics[width=0.48\textwidth]{Figs/Charge_spin_PROD1_KN_PSNH_event_GW190521B}
\caption{Credible region ($90 \%$ confidence) of the two-dimensional posterior probability density function of the spin and charge-to-mass ratio, for the subset of GWTC-2 events showing the tightest constraints (GW190521\_07 stands for GW190521\_074359). Crosses mark the median of the two-dimensional distribution. The gray region marks charge-spin values above the extremal limit, excluded in the analysis. Most of the two-dimensional plane is excluded by the data, although the strong correlation between the two parameters results in an iso-frequency contour, with one-dimensional projections extending over most of the charge and spin ranges.}
\label{fig:GWTC-2_results}
\end{figure*}
\textit{Null analysis --} As a null test, we repeat the analysis described above, restricting the mass and spin uniform prior ranges to the $90\%$ CL obtained by the LVC collaboration from a full IMR analysis~\cite{O3a_catalog}, hence restricting them to within roughly $10$--$20\%$ of their median measured values. The outcome of such an analysis is an upper bound on the maximum amount of charge compatible with LIGO-Virgo observations.
Such a test allows a comparison of our analysis with the ones discussed in the literature that ignore the correlation of the charge with the remnant spin~\cite{Bozzola:2020mjx}.
Indeed, by restricting the available parameter space, we neglect the full correlation structure of the problem. Consequently, in the presence of an actual violation of the Kerr hypothesis, the parameter estimation resulting from this analysis could not be interpreted as the correct value of the BH charge. Nevertheless, this sort of analysis can still be used to \textit{detect} a violation of the Kerr hypothesis. In fact, if the Kerr metric is a correct description of the BH remnant, the result would yield charge values consistent with zero. By increasing the amount of information present in our inference model, this test achieves an increased sensitivity to a Kerr violation, compared to the full analysis.
Results on the charge-to-mass ratio obtained under these assumptions are presented in Fig.~\ref{fig:GWTC-2_res_results}.
In this case, the $\bar{q}_{\scriptscriptstyle f}$ posterior support is significantly reduced with respect to its prior range, the latter taking into account the sub-extremality condition $\chi_{\scriptscriptstyle f}^2 + \bar{q}_{\scriptscriptstyle f}^2 < 0.99$.
For the most favorable case of GW150914 (highlighted in the figure), we obtain an upper bound of $\bar{q}_{\scriptscriptstyle f} < 0.33$ at 90\% credibility, consistent with the analysis presented in Ref.~\cite{Bozzola:2020mjx}. Upper bounds for the other events are reported in the rightmost column of Table~\ref{tab:O1O2O3_events}.
We recompute the Bayes factors against a Kerr hypothesis where the $M_f, \chi_{\scriptscriptstyle f}$ parameters are also restricted to the same prior bounds.
The results are shown in the central column of Table~\ref{tab:O1O2O3_events}, again indicating that no significant evidence is present in the data for or against the KN hypothesis, as compared to the Kerr hypothesis.
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\columnwidth]{Figs/KN_GWTC-2_res}
\caption{Posterior distribution on the charge-to-mass ratio for the null test with GWTC-2 events with detectable ringdown. The distribution is obtained from a null analysis, breaking the full correlation structure of the problem. We highlight the event showing the tightest constraint (GW150914) and the prior distribution on the charge, which incorporates the sub-extremality condition.}
\label{fig:GWTC-2_res_results}
\end{figure}
\section{Future measurement prospects}
\label{sec:Injections}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{Figs/Waveform_KN}
\caption{Plus polarisation of the post-merger KN$_{221}$ model corresponding to the parameters reported in Table~\ref{tab:injection_parameters} for different values of the charge up to $\bar{q}_{\scriptscriptstyle f}=0.6$ (chosen for visualisation purposes), together with its uncharged limit, Kerr$_{221}$. The color scale is set by the charge or equivalently by the ratio of the corresponding $(\ell,m,n)=(2,2,0)$ frequencies.}
\label{fig:Waveform_KN}
\end{figure*}
Given the limited information that can be extracted from current observations, it is natural to ask whether the LIGO-Virgo network at its design sensitivity can allow us to distinguish a KN BH from a Kerr BH, using the templates considered in this work. We explore this question by addressing both the charge measurability when assuming a charged BH remnant and the sensitivity of current ringdown tests of GR when assuming uncharged BHs.
\subsection{Charge measurability}
To assess the values of the charge that can be measured by the current GW detector network, we simulate charged ringdown signals, using the KN$_{221}$ template,\footnote{Our model neglects the presence of additional overtones, which we expect to be subdominant compared to the amplitude corrections induced by charged progenitors.} with increasing charge-to-mass ratio $\bar{q}_{\scriptscriptstyle f}=\{ 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 3 \cdot 10^{-1}, 5 \cdot 10^{-1}\}$, while the rest of the BH parameters are fixed to fiducial values close to the ones estimated for GW150914, listed in Table~\ref{tab:injection_parameters}. To reduce the number of free parameters, in our set of injections we impose the conjugate symmetry, see Ref.~\cite{Berti_fits}, $\mathcal{A}^{-}_{\ell m n} = (-1)^\ell \mathcal{A}^{+}_{\ell m n}, \phi^{-}_{\ell m n} = - \phi^{+}_{\ell m n}$. The values of the amplitudes and phases are chosen from the corresponding uncharged case, by fitting the post-merger waveform of a BBH coalescence with the same intrinsic parameters, generated using the TEOBResumS model~\cite{Nagar_fits}. We note that, to obtain a good agreement between the waveforms, it is necessary to choose the relative phase $\Delta\phi$ of the fundamental mode and first overtone to be in opposition, $\Delta\phi \simeq \pi$.
Such a requirement can be deduced by noting that, when the fundamental mode (whose amplitude is fixed by the late-time signal) is extrapolated back to the peak of the waveform, the corresponding peak amplitude exceeds that of a BBH remnant. The additional modes thus have to be chosen such that the total amplitude is reduced.
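The conjugate symmetry and the phase opposition can be checked explicitly; a minimal sketch using the phase values of Table~\ref{tab:injection_parameters} (function name ours):
\begin{verbatim}
import numpy as np

def conjugate_symmetric_mode(ell, A_plus, phi_plus):
    """Impose A^- = (-1)^ell A^+ and phi^- = -phi^+ on a single mode."""
    return ((-1)**ell) * A_plus, -phi_plus

# Relative phase of fundamental mode and overtone, from injected values:
phi_220, phi_221 = -2.0, 1.14
delta_phi = (phi_221 - phi_220) % (2.0 * np.pi)
print(delta_phi)   # ~3.14, i.e. the two modes are in opposition
\end{verbatim}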
Our ringdown-only reference signal has a signal-to-noise ratio (SNR) of $36$, computed by assuming the design sensitivity power spectral density of the LIGO-Virgo detector network~\cite{Aasi:2013wya}.
Fig.~\ref{fig:Waveform_KN} shows the $h_+$ polarisation corresponding to the KN$_{221}$ template for different values of charge, represented by the color scale, and the parameters reported in Table~\ref{tab:injection_parameters}.
For each value of the charge we also indicate the ratio of the KN and Kerr fundamental frequencies.
For a given BH spin and moderate values of the BH charge, Fig.~\ref{fig:Waveform_KN} shows a KN waveform morphology similar to Kerr, suggesting that a high SNR might be needed to distinguish the modulation of the signal due to the presence of the charge (apart from more extreme cases). The differences between the two models are further blurred by the strong correlation, which we fully account for in the analysis, between the BH spin and charge.
\begin{table*}[!tb]
\centering
\caption{
BH parameters of the KN BH ringdown signals employed in the simulation study. The table reports the injected values of final mass $M_f$, final spin $\chi_{\scriptscriptstyle f}$, real amplitudes $\mathcal{A}_{\ell mn}$ and phases $\phi_{\ell mn}$, cosine of the inclination of the BH final spin relative to the line of sight $\cos \iota$, global phase $\phi$, polarization angle $\psi$, luminosity distance $D_L$, right ascension $\alpha$, declination $\delta$ and the resulting signal-to-noise ratio, when assuming the LIGO-Virgo design sensitivity noise power spectrum.
\label{tab:injection_parameters}
}
\vspace{0.2cm}
\begin{tabular}{ccccccccccccc}
\hline
\hline
\multicolumn{13}{c}{Injected values} \\
\hline
\hline
$M_f \,(M_{\odot})$ & \,\,\,\, $\chi_{\scriptscriptstyle f}$ & \,\,\,\, $\mathcal{A}_{220}$ & \,\,\,\, $\mathcal{A}_{221}$ & \,\,\,\, $\phi_{220}$ & \,\,\,\, $\phi_{221}$ & \,\,\,\, $\cos \iota$ & \,\,\,\, $\phi$ & \,\,\,\, $\psi$ & \,\,\,\, $D_L \,(\text{Mpc})$ & \,\,\,\, $\alpha$ & \,\,\,\, $\delta$ & \,\,\,\, SNR \\
\hline
\hline
67.0 & \,\,\,\, 0.67 & \,\,\,\, 1.1 & \,\,\,\, 0.95 & \,\,\,\, -2.0 & \,\,\,\, 1.14 & \,\,\,\, 1.0 & \,\,\,\, 0.0 & \,\,\,\, 1.12 & \,\,\,\, 403 & \,\,\,\, 1.16 & \,\,\,\, -1.19 & \,\,\,\, 36 \\
\end{tabular}
\end{table*}
We perform injections of the KN templates into zero-noise, while the computation of the likelihood includes the LIGO-Virgo design sensitivity curves~\cite{Aasi:2013wya}.
This procedure is commonly adopted in the study of new physical effects in simulated LIGO-Virgo data to avoid shifts in the posteriors due to a specific noise realization.
Each of these simulated events is then recovered with different templates, corresponding to the charged and uncharged assumptions: KN$_{221}$ and Kerr$_{221}$. The first template reduces to the second in the limit $\bar{q}_{\scriptscriptstyle f}=0$.
Analysing a KN signal with a Kerr template has the purpose of understanding the bias we would incur when ignoring a priori the presence of the BH charge, as in standard GW observational analyses.
In fact, we expect to recover a biased value of the BH spin for injections with sufficiently large values of $\bar{q}_{\scriptscriptstyle f}$, given its strong correlation with the charge parameter, as observed in the previous analysis.
This effect is illustrated in Fig.~\ref{fig:injections_Mf_chif_qbar_chif_posteriors}, where in the left panel we report posteriors (90\% CL) for mass and spin, inferred assuming the KN$_{221}$ (solid filled) and Kerr$_{221}$ (dashed line) templates for different injected values of $\bar{q}_{\scriptscriptstyle f}$, while the right panel illustrates the posteriors (90\% CL) for the charge-to-mass ratio and spin assuming the KN$_{221}$ template. Results for injections with $\bar{q}_{\scriptscriptstyle f}$ below 0.1 are very similar, so they are not shown.
Concerning the inference with the KN$_{221}$ template, we find that the one-dimensional (marginalised) posterior for $\bar{q}_{\scriptscriptstyle f}$ is in general uninformative even for high injected values of $\bar{q}_{\scriptscriptstyle f}$, as one can also deduce from the right panel of Fig.~\ref{fig:injections_Mf_chif_qbar_chif_posteriors}, where the 90\% CL posterior extends over the whole range of $\bar{q}_{\scriptscriptstyle f}$ in the parameter space. Interestingly though, the left panel of Fig.~\ref{fig:injections_Mf_chif_qbar_chif_posteriors} suggests that the effect of a moderately large ($\bar{q} \gtrsim$ 0.3) charge-to-mass ratio on the signal could be indirectly detected: the assumption of the Kerr$_{221}$ template, i.e., excluding the presence of charge, results in a reconstructed final spin $\chi_{\scriptscriptstyle f}$ which gets increasingly biased with the value of $\bar{q}_{\scriptscriptstyle f}$. This could potentially be detected using the IMR consistency test~\cite{IMR_consistency_test1, IMR_consistency_test2}, one of the standard tests performed by the LVC.
However, the Bayes factors are not informative enough to prefer either of the two templates, making the unique identification of such an effect (as compared to another modification of the Kerr scenario) with a BH charge difficult to obtain.
Thus, we conclude that the strong degeneracy between spin and charge does not allow for an independent measurement of the BH charge from the LIGO-Virgo network with the model considered. A similar spin-charge degeneracy is observed in the $(\ell,m,n)=(3,3,0)$ mode, suggesting that an extension of the current model considering such a mode would not strongly affect this conclusion.
As discussed in Ref.~\cite{Bozzola:2021elc}, such a correlation would instead be broken when including information from the previous stages of the coalescence, consistently modelling also the progenitors as KN BHs.
An analysis of the signal with an IMR waveform model for charged BBHs will be able to give its own estimates of the remnant charge, but an independent measurement from the ringdown will be useful to check waveform systematics and to bolster the evidence for a charged BBH detection being real and not a noise artefact.
To mimic this scenario, in Fig.~\ref{fig:injections_charge_posteriors_gaussian_prior} we report the marginalised posterior for $\bar{q}_{\scriptscriptstyle f}$ assuming a Gaussian prior constraining the final spin around its simulated value, $p(\chi_{\scriptscriptstyle f} | I) = \mathcal{N}(0.67, 0.05)$, where the width is estimated from the uncertainty associated with the $90\%$ CL of the final spin obtained from the IMR analysis of GW150914~\cite{LIGOScientific:2019fpa}.
The posteriors show that for $\bar{q}_{\scriptscriptstyle f} \gtrsim 0.5$ a robust measurement of the charge can be achieved, while for other values it will only be possible to place an upper bound. Our result is in agreement with the analysis of Ref.~\cite{Bozzola:2021elc}, pointing to a weak measurability up to $\bar{q}_{\scriptscriptstyle f} \sim 0.3$ at the considered SNRs.
\begin{figure*}[!tb]
\includegraphics[width=0.95\textwidth]{Figs/Charge_injection_Mf_chif_Q_chif_EXP7.pdf}
\caption{Mass-spin and charge-spin plots for analyses of our KN$_{221}$ injections (see main text for details). Solid filled and dashed contours are KN$_{221}$ and Kerr$_{221}$ posteriors (90\% CL), respectively, with plus symbols representing the median values of the KN$_{221}$ posteriors. Dotted lines represent the injected values. The grey region in the right panel is excluded by the sub-extremality condition.}
\label{fig:injections_Mf_chif_qbar_chif_posteriors}
\end{figure*}
\subsection{Tests of GR}
Another question that naturally arises is whether the standard tests of GR routinely performed by the LVC~\cite{O3a_TGR} would signal the presence of the additional BH parameter (assuming the Kerr metric). To answer this question for the \texttt{pyRing} analysis, we consider a Kerr$_{221}$ template where now the QNM parameters are allowed to deviate with respect to the Kerr values. We consider parametric deviations of the form:
\begin{equation}
\mathrm{X} = \mathrm{X}_\text{Kerr} \cdot (1+\delta \mathrm{X}) \,,
\end{equation}
where $\mathrm{X}$ = $\omega_{221},\tau_{221}$. We only consider deviations in the overtone to reduce the strong degeneracy between deviations and intrinsic parameters of the BH. The fundamental mode, generally better constrained, determines the mass and spin values, while the overtone degrees of freedom are employed to constrain the deviation parameters. This allows for a much less prior-dependent determination of the deviation parameters~\cite{O3a_TGR,ghosh2021constraints}.
We define three different modified Kerr$_{221}$ templates by adding parametrised deviations either to the frequency $\{ \delta \omega_{221} \}$ or the damping time $\{ \delta \tau_{221}\}$, or to both simultaneously $\{ \delta \omega_{221}, \delta \tau_{221} \}$.
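Schematically, the three modified templates correspond to switching on one or both fractional deviations (a minimal sketch; function name ours):
\begin{verbatim}
def deviated_221_spectrum(omega_kerr, tau_kerr, d_omega=0.0, d_tau=0.0):
    """Fractional deviations X = X_Kerr (1 + dX), applied to the
    (2,2,1) frequency and/or damping time only."""
    return omega_kerr * (1.0 + d_omega), tau_kerr * (1.0 + d_tau)
\end{verbatim}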
The recovered deviations on the frequency peak around the null value for all the injected values of $\bar{q}_{\scriptscriptstyle f}$ considered, and thus do not signal any departure from the Kerr scenario. Instead, for the highest $\bar{q}_{\scriptscriptstyle f}$ values considered, the deviations on the damping times tend to be overestimated compared to the Kerr value, although the Kerr case always lies inside the 90\% CL, thus making the test inconclusive. Similarly, the Bayes factors are uninformative, not allowing one to discriminate between templates with or without deviation parameters.
We defer further investigations to future work, possibly using more information from previous stages of the coalescence, since this should help increase the sensitivity of the test and hence its conclusiveness.
\begin{figure}[!tb]
\includegraphics[width=0.99
\columnwidth]{Figs/Charge_injection_all_events_EXP7_chif_gaussian_prior_no_pyRing.pdf}
\caption{Posterior distribution on the charge-to-mass ratio recovered analysing KN$_{221}$ signals with different values of charge-to-mass ratio (vertical dashed lines) assuming a Gaussian prior on the final spin (see the main text). For the highest charge case, we also plot the bounds on the $90\%$ CL as dotted lines.}
\label{fig:injections_charge_posteriors_gaussian_prior}
\end{figure}
\section{Conclusions}
\label{sec:Conclusions}
In this work, we discussed extensive computations of the QNM spectrum of a KN BH for the $(\ell,m,n)=\{(2,2,0), (2,2,1), (3,3,0)\}$ modes, obtained in a companion paper~\cite{QNMsKN}, characterising the spectrum's dependence on arbitrary values of the BH charge and spin. These results were used to construct the first analytical fits of KN QNM frequencies for arbitrary values of the BH charge and spin. By extrapolating known results for the Kerr metric, we then constructed an analytical template to model the post-merger emission process of a BBH merger giving rise to a KN remnant.
We applied this model to all available LIGO-Virgo observations, showing that current data do not allow for a direct measurement of the BH charge from the post-merger emission, mainly due to the strong correlation of the charge with the remnant spin. A null test showed that the maximum value of the charge-to-mass ratio compatible with current LIGO-Virgo observations, for the most favorable event GW150914, is $\bar{q}_{\scriptscriptstyle f} < 0.33$. This is the first self-consistent observational analysis of charged remnant BHs with GWs, employing a robust statistical framework and taking the full correlation structure of the problem into account.
Finally, we performed a study aimed at exploring the sensitivity of current detectors to the remnant BH charge, finding that unless information from previous stages of the coalescence is introduced in the template, the LIGO-Virgo network at its design sensitivity will be unable to measure the charge of the remnant BH from a post-merger analysis alone. Also, current tests of GR using only the ringdown emission, routinely performed by the LVK collaboration, are unable to confidently point to a deviation from the Kerr hypothesis. However, for sufficiently large charges ($\bar{q} \gtrsim$ 0.3) a consistent overestimation of the remnant damping time (with respect to the Kerr value) could signal the presence of BH charges within the IMR consistency test.
Our results have implications for tests of general relativity and beyond Standard Model physics, since charge observations constrain the presence of magnetic monopoles, models of minicharged dark matter and alternative theories of gravity predicting the presence of an additional BH charge (through either a topological coupling or the presence of additional gravitational vector fields) degenerate with the electric charge at the scales of BH mergers.
In the future, the recent availability of full IMR simulations of charged BBHs~\cite{Bozzola:2020mjx, Bozzola:2021elc} could allow us to characterise the accuracy of the present template without relying on extrapolations of the known Kerr behaviour. Our work also provides one of the required elements for the construction of analytical templates able to model the complete signal coming from a charged BBH merger, along with the aforementioned numerical simulations and the post-Newtonian calculations in Refs.~\cite{Khalil:2018aaj,Julie:2018lfp}.
\begin{acknowledgments}
The authors would like to thank John Veitch for useful discussions, together with Arnab Dhani and Stanislav Babak for helpful comments on the manuscript. This work greatly benefited from discussions within the \textit{Testing GR} working group of the LIGO-Virgo-KAGRA collaboration.
We gratefully acknowledge computational resources provided by the US National Science Foundation.
The authors also acknowledge the use of the cluster `Baltasar-Sete-S\'ois', and associated support services at CENTRA/IST, in the completion of this work. The authors further acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work.
This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes.
N.~K.~J.-M.\ acknowledges support from STFC Consolidator Grant No.~ST/L000636/1.
O.~C.~D.\ acknowledges financial support from the STFC ``Particle Physics Grants Panel (PPGP) 2016" Grant No.~ST/P000711/1 and the STFC ``Particle Physics Grants Panel (PPGP) 2018" Grant No.~ST/T000775/1. M.~G.\ is supported by a Royal Society University Research Fellowship. J.~E.~S.\ has been partially supported by STFC consolidated grants ST/P000681/1, ST/T000694/1. The research leading to these results has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. [247252].
\textit{Software} LIGO-Virgo data are interfaced through \texttt{GWpy}~\cite{gwpy} and some of the post-processing steps are handled through \texttt{PESummary}~\cite{pesummary}, a sampler-agnostic \texttt{python} package providing interactive webpages to simplify results visualisation. Moreover, \texttt{PESummary} meta-files are used to store the complete information (both of the internal \texttt{pyRing} parameters and of the software environment) for the analysis to be completely reproducible. The \texttt{pyRing} package is publicly available at: \href{https://git.ligo.org/lscsoft/pyring}{https://git.ligo.org/lscsoft/pyring}, where example configuration files using the KN spectrum are also provided.
Other open-software \texttt{python} packages, accessible through \texttt{PyPi}, internally used by \texttt{pyRing} are: \texttt{corner, matplotlib, numba, numpy, seaborn}~\cite{corner, matplotlib, numba, numpy, seaborn}.
\end{acknowledgments}
\section*{Fit coefficients}\label{sec:Appendix}
In Tables~\ref{tab:coeffs_table_r}, \ref{tab:coeffs_table_i} we report the numerical coefficients obtained by fitting the data presented in Sec.~\ref{sec:QNM_NR}, using the template of Eq.~(\ref{eq:Nagar_etal_ext}) with the Bayesian method described in Sec.~\ref{sec:QNM_fits}.
Point estimates correspond to the maximum of the posterior distribution (coinciding with the maximum of the likelihood, since the priors on all coefficients are uniform) and should be preferred in applications requiring a single value. We also report the median and $90\%$ CIs of the full probability distribution.
\begin{table*}[t]
\caption{Numerical results for the coefficients of the real QNM frequency, using as a template the rational expression considered in Eq.~(\ref{eq:Nagar_etal_ext}) with $N_\text{max}=3$. The first column of each mode reports the maximum of the posterior, while the second reports median and $90\%$ CL from the full probability distribution. For applications in which a single point estimate is used, the maximum of the posterior yields a more faithful representation of the numerical data. The Schwarzschild value is given by: $Y_0 = \{0.37367168, 0.34671099, 0.59944329 \}$ for the $(\ell,m,n) = \{ (2,2,0), (2,2,1), (3,3,0) \}$ modes respectively, while b$_{0,0}=$ c$_{0,0}=1$ by definition.}
\begin{ruledtabular}
\resizebox{1.0\textwidth}{!}{\begin{tabular}{c|ccc}
& & $\omega$ & \\
\hline \hline
$(\ell,m,n)$ & $(2,2,0)$ & $(2,2,1)$ & $(3,3,0)$ \\
\hline \hline
& $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ & \hspace{0.1cm} $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ & \hspace{0.1cm} $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ \\
\hline \hline
b$_{0,1}$ & \,\,\,\,\,${0.537583}$ \quad \quad \,\,\,\,\,${0.541}^{+0.045}_{-0.050}$ & ${-2.918987}$ \quad \quad ${-2.918}^{+0.001}_{-0.001}$ & ${-0.311963}$ \quad \quad ${-0.299}^{+0.019}_{-0.017}$\\
b$_{0,2}$ & ${-2.990402}$ \quad \quad ${-2.997}^{+0.084}_{-0.077}$ & \,\,\,\,\,${2.866252}$ \quad \quad \,\,\,\,\,${2.865}^{+0.002}_{-0.001}$ & ${-1.457057}$ \quad \quad ${-1.478}^{+0.028}_{-0.031}$\\
b$_{0,3}$ & \,\,\,\,\,${1.503421}$ \quad \quad \,\,\,\,\,${1.507}^{+0.032}_{-0.035}$ & ${-0.944554}$ \quad \quad ${-0.944}^{+0.001}_{-0.001}$ & \,\,\,\,\,${0.825692}$ \quad \quad \,\,\,\,\,${0.834}^{+0.013}_{-0.012}$\\
b$_{1,0}$ & ${-1.899567}$ \quad \quad ${-1.895}^{+0.005}_{-0.007}$ & ${-1.850299}$ \quad \quad ${-1.853}^{+0.004}_{-0.003}$ & ${-1.928277}$ \quad \quad ${-1.926}^{+0.003}_{-0.003}$\\
b$_{1,1}$ & ${-2.128633}$ \quad \quad ${-2.143}^{+0.120}_{-0.109}$ & \,\,\,\,\,${7.321955}$ \quad \quad \,\,\,\,\,${7.320}^{+0.005}_{-0.008}$ & ${-0.026433}$ \quad \quad ${-0.060}^{+0.040}_{-0.048}$\\
b$_{1,2}$ & \,\,\,\,\,${6.626680}$ \quad \quad \,\,\,\,\,${6.649}^{+0.163}_{-0.183}$ & ${-8.783456}$ \quad \quad ${-8.775}^{+0.020}_{-0.007}$ & \,\,\,\,\,${3.139427}$ \quad \quad \,\,\,\,\,${3.190}^{+0.071}_{-0.063}$\\
b$_{1,3}$ & ${-2.903790}$ \quad \quad ${-2.914}^{+0.069}_{-0.064}$ & \,\,\,\,\,${3.292966}$ \quad \quad \,\,\,\,\,${3.288}^{+0.004}_{-0.011}$ & ${-1.484557}$ \quad \quad ${-1.504}^{+0.026}_{-0.026}$\\
b$_{2,0}$ & \,\,\,\,\,${1.015454}$ \quad \quad \,\,\,\,\,${1.009}^{+0.010}_{-0.008}$ & \,\,\,\,\,${0.944088}$ \quad \quad \,\,\,\,\,${0.948}^{+0.005}_{-0.005}$ & \,\,\,\,\,${1.044039}$ \quad \quad \,\,\,\,\,${1.041}^{+0.004}_{-0.004}$\\
b$_{2,1}$ & \,\,\,\,\,${2.147094}$ \quad \quad \,\,\,\,\,${2.162}^{+0.087}_{-0.094}$ & ${-5.584876}$ \quad \quad ${-5.583}^{+0.010}_{-0.009}$ & \,\,\,\,\,${0.545708}$ \quad \quad \,\,\,\,\,${0.575}^{+0.037}_{-0.034}$\\
b$_{2,2}$ & ${-4.672847}$ \quad \quad ${-4.692}^{+0.129}_{-0.116}$ & \,\,\,\,\,${7.675096}$ \quad \quad \,\,\,\,\,${7.666}^{+0.010}_{-0.027}$ & ${-2.188569}$ \quad \quad ${-2.229}^{+0.048}_{-0.051}$\\
b$_{2,3}$ & \,\,\,\,\,${1.891731}$ \quad \quad \,\,\,\,\,${1.900}^{+0.044}_{-0.046}$ & ${-3.039132}$ \quad \quad ${-3.035}^{+0.012}_{-0.005}$ & \,\,\,\,\,${0.940019}$ \quad \quad \,\,\,\,\,${0.956}^{+0.019}_{-0.018}$\\
b$_{3,0}$ & ${-0.111430}$ \quad \quad ${-0.109}^{+0.003}_{-0.004}$ & ${-0.088458}$ \quad \quad ${-0.089}^{+0.001}_{-0.002}$ & ${-0.112303}$ \quad \quad ${-0.111}^{+0.002}_{-0.001}$\\
b$_{3,1}$ & ${-0.581706}$ \quad \quad ${-0.585}^{+0.022}_{-0.020}$ & \,\,\,\,\,${1.198758}$ \quad \quad \,\,\,\,\,${1.198}^{+0.004}_{-0.003}$ & ${-0.226402}$ \quad \quad ${-0.234}^{+0.008}_{-0.009}$\\
b$_{3,2}$ & \,\,\,\,\,${1.021061}$ \quad \quad \,\,\,\,\,${1.025}^{+0.028}_{-0.029}$ & ${-1.973222}$ \quad \quad ${-1.971}^{+0.009}_{-0.004}$ & \,\,\,\,\,${0.482482}$ \quad \quad \,\,\,\,\,${0.493}^{+0.012}_{-0.012}$\\
b$_{3,3}$ & ${-0.414517}$ \quad \quad ${-0.416}^{+0.011}_{-0.011}$ & \,\,\,\,\,${0.838109}$ \quad \quad \,\,\,\,\,${0.837}^{+0.002}_{-0.004}$ & ${-0.204299}$ \quad \quad ${-0.209}^{+0.005}_{-0.004}$\\
\hline \hline
c$_{0,1}$ & \,\,\,\,\,${0.548651}$ \quad \quad \,\,\,\,\,${0.552}^{+0.046}_{-0.050}$ & ${-2.941138}$ \quad \quad ${-2.940}^{+0.001}_{-0.001}$ & ${-0.299153}$ \quad \quad ${-0.286}^{+0.019}_{-0.017}$\\
c$_{0,2}$ & ${-3.141145}$ \quad \quad ${-3.148}^{+0.087}_{-0.079}$ & \,\,\,\,\,${2.907859}$ \quad \quad \,\,\,\,\,${2.907}^{+0.002}_{-0.001}$ & ${-1.591595}$ \quad \quad ${-1.613}^{+0.029}_{-0.033}$\\
c$_{0,3}$ & \,\,\,\,\,${1.636377}$ \quad \quad \,\,\,\,\,${1.640}^{+0.034}_{-0.037}$ & ${-0.964407}$ \quad \quad ${-0.964}^{+0.001}_{-0.001}$ & \,\,\,\,\,${0.938987}$ \quad \quad \,\,\,\,\,${0.948}^{+0.014}_{-0.012}$\\
c$_{1,0}$ & ${-2.238461}$ \quad \quad ${-2.235}^{+0.005}_{-0.006}$ & ${-2.250169}$ \quad \quad ${-2.253}^{+0.003}_{-0.003}$ & ${-2.265230}$ \quad \quad ${-2.263}^{+0.003}_{-0.003}$\\
c$_{1,1}$ & ${-2.291933}$ \quad \quad ${-2.307}^{+0.134}_{-0.124}$ & \,\,\,\,\,${8.425183}$ \quad \quad \,\,\,\,\,${8.423}^{+0.005}_{-0.008}$ & \,\,\,\,\,${0.058508}$ \quad \quad \,\,\,\,\,${0.022}^{+0.045}_{-0.054}$\\
c$_{1,2}$ & \,\,\,\,\,${7.695570}$ \quad \quad \,\,\,\,\,${7.718}^{+0.188}_{-0.208}$ & ${-9.852886}$ \quad \quad ${-9.844}^{+0.021}_{-0.007}$ & \,\,\,\,\,${3.772084}$ \quad \quad \,\,\,\,\,${3.828}^{+0.082}_{-0.071}$\\
c$_{1,3}$ & ${-3.458474}$ \quad \quad ${-3.470}^{+0.082}_{-0.072}$ & \,\,\,\,\,${3.660289}$ \quad \quad \,\,\,\,\,${3.655}^{+0.004}_{-0.011}$ & ${-1.852247}$ \quad \quad ${-1.874}^{+0.030}_{-0.031}$\\
c$_{2,0}$ & \,\,\,\,\,${1.581677}$ \quad \quad \,\,\,\,\,${1.575}^{+0.011}_{-0.009}$ & \,\,\,\,\,${1.611393}$ \quad \quad \,\,\,\,\,${1.616}^{+0.005}_{-0.006}$ & \,\,\,\,\,${1.624332}$ \quad \quad \,\,\,\,\,${1.621}^{+0.005}_{-0.005}$\\
c$_{2,1}$ & \,\,\,\,\,${2.662938}$ \quad \quad \,\,\,\,\,${2.682}^{+0.115}_{-0.124}$ & ${-7.869432}$ \quad \quad ${-7.867}^{+0.013}_{-0.008}$ & \,\,\,\,\,${0.533096}$ \quad \quad \,\,\,\,\,${0.569}^{+0.050}_{-0.043}$\\
c$_{2,2}$ & ${-6.256090}$ \quad \quad ${-6.281}^{+0.170}_{-0.157}$ & \,\,\,\,\,${9.999751}$ \quad \quad \,\,\,\,\,${9.988}^{+0.011}_{-0.032}$ & ${-3.007197}$ \quad \quad ${-3.056}^{+0.061}_{-0.067}$\\
c$_{2,3}$ & \,\,\,\,\,${2.494264}$ \quad \quad \,\,\,\,\,${2.506}^{+0.055}_{-0.060}$ & ${-3.737205}$ \quad \quad ${-3.731}^{+0.014}_{-0.005}$ & \,\,\,\,\,${1.285026}$ \quad \quad \,\,\,\,\,${1.303}^{+0.024}_{-0.023}$\\
c$_{3,0}$ & ${-0.341455}$ \quad \quad ${-0.338}^{+0.004}_{-0.005}$ & ${-0.359285}$ \quad \quad ${-0.361}^{+0.002}_{-0.002}$ & ${-0.357651}$ \quad \quad ${-0.356}^{+0.002}_{-0.002}$\\
c$_{3,1}$ & ${-0.930069}$ \quad \quad ${-0.937}^{+0.037}_{-0.034}$ & \,\,\,\,\,${2.392321}$ \quad \quad \,\,\,\,\,${2.391}^{+0.003}_{-0.005}$ & ${-0.300599}$ \quad \quad ${-0.311}^{+0.012}_{-0.015}$\\
c$_{3,2}$ & \,\,\,\,\,${1.688288}$ \quad \quad \,\,\,\,\,${1.697}^{+0.042}_{-0.046}$ & ${-3.154979}$ \quad \quad ${-3.151}^{+0.012}_{-0.005}$ & \,\,\,\,\,${0.810387}$ \quad \quad \,\,\,\,\,${0.824}^{+0.018}_{-0.017}$\\
c$_{3,3}$ & ${-0.612643}$ \quad \quad ${-0.616}^{+0.015}_{-0.014}$ & \,\,\,\,\,${1.129776}$ \quad \quad \,\,\,\,\,${1.128}^{+0.002}_{-0.005}$ & ${-0.314715}$ \quad \quad ${-0.320}^{+0.006}_{-0.006}$\vspace{0.05cm}\\
\end{tabular}}
\end{ruledtabular}
\label{tab:coeffs_table_r}
\end{table*}
\begin{table*}[t]
\caption{Numerical results for the coefficients of the QNM inverse damping time, using as a template the rational expression considered in Eq.~(\ref{eq:Nagar_etal_ext}) with $N_\text{max}=3$. The first column of each mode reports the maximum of the posterior, while the second reports median and $90\%$ CL from the full probability distribution. For applications in which a single point estimate is used, the maximum of the posterior yields a more faithful representation of the numerical data. The Schwarzschild value is given by: $Y_0 = \{0.08896232, 0.27391488, 0.09270305 \}$ for the $(\ell,m,n) = \{ (2,2,0), (2,2,1), (3,3,0) \}$ modes respectively, while b$_{0,0}=$ c$_{0,0}=1$ by definition.}
\begin{ruledtabular}
\resizebox{1.0\textwidth}{!}{\begin{tabular}{c|ccc}
& & $\tau^{-1}$ & \\
\hline \hline
$(\ell,m,n)$ & $(2,2,0)$ & $(2,2,1)$ & $(3,3,0)$ \\
\hline \hline
& $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ & \hspace{0.1cm} $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ & \hspace{0.1cm} $\mathrm{max \mathcal{P}}$ \quad \quad \quad \quad \hspace{0.1cm} $\mathrm{Prob}$ \\
\hline \hline
b$_{0,1}$ & ${-2.721789}$ \quad \quad ${-2.723}^{+0.016}_{-0.014}$ & ${-3.074983}$ \quad \quad ${-3.073}^{+0.005}_{-0.005}$ & ${-2.813977}$ \quad \quad ${-2.817}^{+0.018}_{-0.017}$\\
b$_{0,2}$ & \,\,\,\,\,${2.472860}$ \quad \quad \,\,\,\,\,${2.476}^{+0.028}_{-0.031}$ & \,\,\,\,\,${3.182195}$ \quad \quad \,\,\,\,\,${3.179}^{+0.009}_{-0.009}$ & \,\,\,\,\,${2.666759}$ \quad \quad \,\,\,\,\,${2.672}^{+0.033}_{-0.033}$\\
b$_{0,3}$ & ${-0.750015}$ \quad \quad ${-0.752}^{+0.015}_{-0.014}$ & ${-1.105297}$ \quad \quad ${-1.103}^{+0.005}_{-0.004}$ & ${-0.850618}$ \quad \quad ${-0.853}^{+0.016}_{-0.017}$\\
b$_{1,0}$ & ${-2.533958}$ \quad \quad ${-2.519}^{+0.024}_{-0.022}$ & \,\,\,\,\,${0.366066}$ \quad \quad \,\,\,\,\,${0.343}^{+0.046}_{-0.048}$ & ${-2.163575}$ \quad \quad ${-2.161}^{+0.035}_{-0.035}$\\
b$_{1,1}$ & \,\,\,\,\,${7.181110}$ \quad \quad \,\,\,\,\,${7.173}^{+0.062}_{-0.061}$ & \,\,\,\,\,${4.296285}$ \quad \quad \,\,\,\,\,${4.328}^{+0.067}_{-0.065}$ & \,\,\,\,\,${6.934304}$ \quad \quad \,\,\,\,\,${6.969}^{+0.095}_{-0.093}$\\
b$_{1,2}$ & ${-6.870324}$ \quad \quad ${-6.898}^{+0.099}_{-0.109}$ & ${-9.700146}$ \quad \quad ${-9.696}^{+0.011}_{-0.012}$ & ${-7.425335}$ \quad \quad ${-7.499}^{+0.147}_{-0.160}$\\
b$_{1,3}$ & \,\,\,\,\,${2.214689}$ \quad \quad \,\,\,\,\,${2.236}^{+0.053}_{-0.049}$ & \,\,\,\,\,${5.016955}$ \quad \quad \,\,\,\,\,${5.004}^{+0.026}_{-0.027}$ & \,\,\,\,\,${2.640936}$ \quad \quad \,\,\,\,\,${2.679}^{+0.077}_{-0.072}$\\
b$_{2,0}$ & \,\,\,\,\,${2.102750}$ \quad \quad \,\,\,\,\,${2.075}^{+0.043}_{-0.047}$ & ${-3.290350}$ \quad \quad ${-3.247}^{+0.091}_{-0.088}$ & \,\,\,\,\,${1.405496}$ \quad \quad \,\,\,\,\,${1.401}^{+0.068}_{-0.067}$\\
b$_{2,1}$ & ${-6.317887}$ \quad \quad ${-6.300}^{+0.092}_{-0.093}$ & ${-0.844265}$ \quad \quad ${-0.904}^{+0.119}_{-0.123}$ & ${-5.678573}$ \quad \quad ${-5.739}^{+0.149}_{-0.157}$\\
b$_{2,2}$ & \,\,\,\,\,${6.206452}$ \quad \quad \,\,\,\,\,${6.249}^{+0.126}_{-0.117}$ & \,\,\,\,\,${9.999863}$ \quad \quad \,\,\,\,\,${9.999}^{+0.001}_{-0.002}$ & \,\,\,\,\,${6.621826}$ \quad \quad \,\,\,\,\,${6.739}^{+0.226}_{-0.204}$\\
b$_{2,3}$ & ${-1.980749}$ \quad \quad ${-2.007}^{+0.052}_{-0.062}$ & ${-5.818349}$ \quad \quad ${-5.802}^{+0.034}_{-0.031}$ & ${-2.345713}$ \quad \quad ${-2.401}^{+0.092}_{-0.101}$\\
b$_{3,0}$ & ${-0.568636}$ \quad \quad ${-0.555}^{+0.022}_{-0.021}$ & \,\,\,\,\,${1.927196}$ \quad \quad \,\,\,\,\,${1.906}^{+0.041}_{-0.043}$ & ${-0.241561}$ \quad \quad ${-0.240}^{+0.033}_{-0.032}$\\
b$_{3,1}$ & \,\,\,\,\,${1.857404}$ \quad \quad \,\,\,\,\,${1.851}^{+0.040}_{-0.041}$ & ${-0.401520}$ \quad \quad ${-0.376}^{+0.054}_{-0.052}$ & \,\,\,\,\,${1.555843}$ \quad \quad \,\,\,\,\,${1.584}^{+0.072}_{-0.068}$\\
b$_{3,2}$ & ${-1.820547}$ \quad \quad ${-1.836}^{+0.047}_{-0.050}$ & ${-3.537667}$ \quad \quad ${-3.537}^{+0.003}_{-0.003}$ & ${-1.890365}$ \quad \quad ${-1.942}^{+0.085}_{-0.087}$\\
b$_{3,3}$ & \,\,\,\,\,${0.554722}$ \quad \quad \,\,\,\,\,${0.564}^{+0.021}_{-0.018}$ & \,\,\,\,\,${2.077991}$ \quad \quad \,\,\,\,\,${2.072}^{+0.012}_{-0.013}$ & \,\,\,\,\,${0.637480}$ \quad \quad \,\,\,\,\,${0.659}^{+0.035}_{-0.032}$\\
\hline \hline
c$_{0,1}$ & ${-2.732346}$ \quad \quad ${-2.734}^{+0.016}_{-0.014}$ & ${-3.079686}$ \quad \quad ${-3.078}^{+0.005}_{-0.005}$ & ${-2.820763}$ \quad \quad ${-2.823}^{+0.017}_{-0.016}$\\
c$_{0,2}$ & \,\,\,\,\,${2.495049}$ \quad \quad \,\,\,\,\,${2.498}^{+0.027}_{-0.029}$ & \,\,\,\,\,${3.191889}$ \quad \quad \,\,\,\,\,${3.188}^{+0.009}_{-0.009}$ & \,\,\,\,\,${2.680557}$ \quad \quad \,\,\,\,\,${2.686}^{+0.031}_{-0.033}$\\
c$_{0,3}$ & ${-0.761581}$ \quad \quad ${-0.763}^{+0.014}_{-0.013}$ & ${-1.110140}$ \quad \quad ${-1.108}^{+0.004}_{-0.004}$ & ${-0.857462}$ \quad \quad ${-0.860}^{+0.016}_{-0.016}$\\
c$_{1,0}$ & ${-2.498341}$ \quad \quad ${-2.484}^{+0.024}_{-0.022}$ & \,\,\,\,\,${0.388928}$ \quad \quad \,\,\,\,\,${0.366}^{+0.046}_{-0.048}$ & ${-2.130446}$ \quad \quad ${-2.128}^{+0.035}_{-0.035}$\\
c$_{1,1}$ & \,\,\,\,\,${7.089542}$ \quad \quad \,\,\,\,\,${7.080}^{+0.062}_{-0.060}$ & \,\,\,\,\,${4.159242}$ \quad \quad \,\,\,\,\,${4.192}^{+0.068}_{-0.066}$ & \,\,\,\,\,${6.825101}$ \quad \quad \,\,\,\,\,${6.858}^{+0.095}_{-0.091}$\\
c$_{1,2}$ & ${-6.781334}$ \quad \quad ${-6.807}^{+0.096}_{-0.104}$ & ${-9.474149}$ \quad \quad ${-9.472}^{+0.010}_{-0.010}$ & ${-7.291058}$ \quad \quad ${-7.361}^{+0.142}_{-0.157}$\\
c$_{1,3}$ & \,\,\,\,\,${2.181880}$ \quad \quad \,\,\,\,\,${2.201}^{+0.051}_{-0.046}$ & \,\,\,\,\,${4.904881}$ \quad \quad \,\,\,\,\,${4.893}^{+0.024}_{-0.025}$ & \,\,\,\,\,${2.583282}$ \quad \quad \,\,\,\,\,${2.619}^{+0.074}_{-0.070}$\\
c$_{2,0}$ & \,\,\,\,\,${2.056918}$ \quad \quad \,\,\,\,\,${2.030}^{+0.041}_{-0.045}$ & ${-3.119527}$ \quad \quad ${-3.077}^{+0.087}_{-0.085}$ & \,\,\,\,\,${1.394144}$ \quad \quad \,\,\,\,\,${1.390}^{+0.065}_{-0.065}$\\
c$_{2,1}$ & ${-6.149334}$ \quad \quad ${-6.132}^{+0.090}_{-0.089}$ & ${-0.914668}$ \quad \quad ${-0.974}^{+0.117}_{-0.119}$ & ${-5.533669}$ \quad \quad ${-5.589}^{+0.143}_{-0.151}$\\
c$_{2,2}$ & \,\,\,\,\,${6.010021}$ \quad \quad \,\,\,\,\,${6.048}^{+0.120}_{-0.113}$ & \,\,\,\,\,${9.767356}$ \quad \quad \,\,\,\,\,${9.768}^{+0.005}_{-0.005}$ & \,\,\,\,\,${6.393699}$ \quad \quad \,\,\,\,\,${6.504}^{+0.213}_{-0.193}$\\
c$_{2,3}$ & ${-1.909275}$ \quad \quad ${-1.933}^{+0.050}_{-0.058}$ & ${-5.690517}$ \quad \quad ${-5.676}^{+0.033}_{-0.030}$ & ${-2.254239}$ \quad \quad ${-2.306}^{+0.087}_{-0.097}$\\
c$_{3,0}$ & ${-0.557557}$ \quad \quad ${-0.545}^{+0.021}_{-0.020}$ & \,\,\,\,\,${1.746957}$ \quad \quad \,\,\,\,\,${1.728}^{+0.038}_{-0.040}$ & ${-0.261229}$ \quad \quad ${-0.260}^{+0.030}_{-0.030}$\\
c$_{3,1}$ & \,\,\,\,\,${1.786783}$ \quad \quad \,\,\,\,\,${1.780}^{+0.038}_{-0.039}$ & ${-0.240680}$ \quad \quad ${-0.216}^{+0.049}_{-0.050}$ & \,\,\,\,\,${1.517744}$ \quad \quad \,\,\,\,\,${1.543}^{+0.067}_{-0.064}$\\
c$_{3,2}$ & ${-1.734461}$ \quad \quad ${-1.749}^{+0.046}_{-0.047}$ & ${-3.505359}$ \quad \quad ${-3.505}^{+0.004}_{-0.004}$ & ${-1.810579}$ \quad \quad ${-1.857}^{+0.079}_{-0.081}$\\
c$_{3,3}$ & \,\,\,\,\,${0.524997}$ \quad \quad \,\,\,\,\,${0.533}^{+0.018}_{-0.018}$ & \,\,\,\,\,${2.049254}$ \quad \quad \,\,\,\,\,${2.044}^{+0.011}_{-0.013}$ & \,\,\,\,\,${0.608393}$ \quad \quad \,\,\,\,\,${0.628}^{+0.034}_{-0.030}$\vspace{0.05cm}\\
\end{tabular}}
\end{ruledtabular}
\label{tab:coeffs_table_i}
\end{table*}
\clearpage
\section{Introduction}
Biological imaging that precisely labels microscopic structures offers a range of insights: Single Molecule Localization Microscopy (SMLM) \cite{storm_09} visualizes the microarchitecture of the cellular cytoskeleton; single-molecule Fluorescence \emph{in situ} Hybridization \cite{smfish} reveals the spatial distribution of gene expression; and synaptic immunofluorescence \cite{dogNet} maps the distribution of neuronal connections in the brain. To succeed, each of these imaging methods must be accompanied by analysis tools that identify and localize the desired signal.
In localization problems, regardless of the imaging system used, analysis pipelines attempt to undo the blurring effect of the system's imperfect impulse response, or point-spread function (PSF). The PSF always has finite width, leading to limited image resolution. If small objects, like the two thin microtubules seen in Fig. \ref{fig:smlm}, are nearer to each other than the width of the PSF, they may be difficult to distinguish. The biological signal in the field of view (FOV) of the imaging system can be represented as a high-resolution matrix, where each element's value represents the intensity of the signal at that physical location. The imaging process may be thought of as a 2D convolution of the \say{true} object being imaged with an array representing the PSF. The goal of a localization algorithm is to \say{undo} this convolution.
The localization problem is much more tractable if the images have predictable structure, because the solution space can be constrained. For example, some biological images are comprised of similarly-sized cells, elongated fibers of known width, or small, scattered fluorescent spots representing individual molecules with dimensions below the diffraction limit. This knowledge can be combined with an understanding of the physical parameters of the imaging system, such as the numerical aperture and magnification, in analysis pipelines, to identify cell centers \cite{ecnncs}, trace long fibers \cite{storm_09}, or localize fluorophores which have been bound to biologically relevant molecules \cite{MERFISH}. The results of these algorithms may then be used to achieve higher level biological goals, from generating tissue atlases, to determining genetic expression patterns or diagnosing pathologies.
Here, we focus specifically on biological localization problems on sparse images, meaning that the high-resolution information to be recovered has relatively few nonzero values. Some signals, such as fluorescently tagged messenger RNA (mRNA), clusters of proteins, or micro-bubbles in ultrasound imaging are naturally sparse. In other cases, experimental techniques may be used to induce sparsity \cite{storm_09}, or sparsity in bases other than the spatial domain may be exploited.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth,height=.2\textheight,keepaspectratio]{SMLM.png}
\caption{Simulated microscopic imaging of microtubules from \cite{LSPARCOM}. LEFT: True localization of microtubule structure. RIGHT: Imaged microtubule, ``blurred'' by the microscope PSF.}
\label{fig:smlm}
\end{figure}
Iterative optimization techniques have emerged as one of the most powerful approaches for localizing sparse signal emitters. For example, in SMLM, a super-resolution technique which relies on sub-pixel localization of scattered fluorophores, the original peak-finding algorithms have been outperformed by approaches using iterative convex optimization-based algorithms in terms of localization accuracy, signal-to-noise ratio (SNR), and resolution \cite{smlm_rev}. These advantages of iterative optimization techniques for sparse recovery go beyond SMLM to other biological imaging problems, providing high-accuracy localization in many settings \cite{JSIT}\cite{ULM}\cite{CISTA}. However, they also suffer some disadvantages. They require adjustment of optimization parameters and explicit knowledge of the impulse response of the imaging system, which restricts their use when the imaging system is not well-characterized. They are computationally expensive and converge slowly, which limits their use in real-time, live-cell imaging. Finally, they are relatively inflexible: a given algorithm is designed to take advantage of a particular structure (here, signal sparsity), but ignores other context which may be important (e.g., cell size or density).
Many of these disadvantages can be overcome by replacing the iterations of these algorithms with trained neural networks which perform the same mathematical operation, a process known as algorithmic unrolling \cite{LISTA} (alternatively, \say{unfolding}). By doing so, parameters which would have to be specified explicitly or tuned empirically are learned automatically, and relevant context ignored by the algorithm may be incorporated into the learned model. Since its introduction a decade ago, a wide variety of techniques have been adapted using learned unrolling, enabling improvements in performance across a variety of settings \cite{Unrolling_rev}.
In the rest of this paper, we review how learned unrolling is applied to the localization of sparse sources in biological imaging data. We first formulate biological localization as a sparse recovery problem, and discuss the advantages and disadvantages of iterative approaches to sparse recovery. We then describe how algorithmic unrolling addresses some of the shortcomings, and how the general sparse recovery problem may be adapted to the unrolling framework. Next, we show in detail how unrolling has been used to achieve fast, accurate super-resolution in the optical microscopy technique SMLM. We then review a number of additional biological imaging analysis problems to which unrolling has been applied to improve performance: Ultrasound Localization Microscopy (ULM), Light-Field Microscopy (LFM), and cell center localization in fluorescence microscopy. Throughout, we discuss a number of additional data analysis problems in sparse optical microscopy and propose that algorithmic unrolling be applied to achieve the same benefits obtained in the reviewed techniques.
\section{Sparse Recovery in Biological Imaging}
The localization of biological objects, from microtubules to neural synapses, can be approached effectively as a convex optimization problem. For ease of notation, we first reframe the imaging process, typically thought of as a convolution, as a matrix-vector multiplication. The high-resolution signal is \say{vectorized} and then multiplied by a matrix representing the PSF. We note that the problem can be formulated and solved with 2D convolutions equally well, and the techniques described throughout the paper may be applied to a 2D formulation, as in \cite{LSPARCOM} and \cite{CISTA}.
Formally, we consider the FOV as a high-resolution square grid with side length $n_{h}$. The total number of locations in this grid is $N_{h} = n_{h}^{2}$. This grid is \say{vectorized} to form a vector $\mathbf{x}\in\mathds{R}^{N_{h}}$. The locations of emitters in the sample may be modeled by assigning each element of $\mathbf{x}$ a value related to the number of photons emitted from that location within the FOV. If the FOV is imaged using a sensor with $N_{l}$ pixels (with $N_{h}>N_{l}$), then we can model the imaging process as multiplication by a matrix $\mathbf{A}\in\mathds{R}^{N_{l}\times N_{h}}$, in which element $(i,j)$ is the proportion of signal emitted from location $j$ on the high-resolution grid that will be detected at pixel $i$ of the sensor. Thus defined, the columns of $\mathbf{A}$ represent the PSF of the imaging system, such that column $j$ of $\mathbf{A}$ is the PSF of the system for a point source at location $j$. The (vectorized) measured image is then $\mathbf{y} = \mathbf{Ax}$, with $\mathbf{y}\in\mathds{R}^{N_{l}}$. The goal of the analysis pipeline is to infer the value of $\mathbf{x}$, given $\mathbf{y}$ and $\mathbf{A}$.
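To make this forward model concrete, the following minimal \texttt{numpy} sketch builds $\mathbf{A}$ from a shifted PSF and simulates $\mathbf{y}=\mathbf{Ax}$; the Gaussian PSF shape, grid sizes, and emitter positions are illustrative assumptions rather than values from any particular system.
\begin{verbatim}
# Minimal sketch of the forward model y = A x (assumed Gaussian PSF).
import numpy as np

n_h, n_l = 32, 16                  # high-res and sensor grid side lengths
N_h, N_l = n_h**2, n_l**2
sigma = 2.0                        # PSF width, in high-res pixels (assumed)
scale = n_h // n_l                 # high-res pixels per sensor pixel

yy, xx = np.meshgrid(np.arange(n_h), np.arange(n_h), indexing="ij")
A = np.zeros((N_l, N_h))
for j in range(N_h):               # column j: PSF centered at location j
    cy, cx = divmod(j, n_h)
    psf = np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))
    # bin the high-res PSF into sensor pixels (downsampling)
    A[:, j] = psf.reshape(n_l, scale, n_l, scale).sum(axis=(1, 3)).ravel()

x = np.zeros(N_h)
x[[100, 505, 880]] = 1.0           # three point emitters (arbitrary)
y = A @ x                          # vectorized measured image
\end{verbatim}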
This inference problem can be formulated as a least-squares optimization problem: we seek to find
\begin{equation}
\label{eqn:ls}
\mathbf{\hat{x}} = \argmin_{\mathbf{x}}\|\mathbf{y}-\mathbf{Ax}\|^{2}_{2}.
\end{equation}
Even if $\mathbf{A}$ is known perfectly, as long as $N_{h}>N_{l}$, $\mathbf{A}$ will have a nontrivial null space, so that the optimization problem is underdetermined. Leveraging knowledge of the biological structure of $\mathbf{x}$ can resolve this issue. If, as discussed above, it is known that $\mathbf{x}$ is sparse, then we may choose a sparse optimization technique, such as the well-known LASSO \cite{LASSO}, to recover $\mathbf{x}$:
\begin{equation}
\label{eqn:sls}
\mathbf{\hat{x}} = \argmin_{\mathbf{x}}\|\mathbf{y}-\mathbf{Ax}\|^{2}_{2}+\lambda\|\mathbf{x}\|_{1}.
\end{equation}
In particular, by correctly tuning $\lambda$, the minimizer $\mathbf{\hat{x}}$ of (\ref{eqn:sls}) will provide accurate locations of each signal-emitting object in the FOV.
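As a hedged illustration, (\ref{eqn:sls}) can be solved off the shelf; for instance, \texttt{scikit-learn}'s \texttt{Lasso} minimizes $\frac{1}{2N_l}\|\mathbf{y}-\mathbf{Ax}\|_2^2+\alpha\|\mathbf{x}\|_1$, so its $\alpha$ corresponds to $\lambda/(2N_l)$ in our normalization. The matrices below are random stand-ins, not a realistic PSF model, and the \texttt{positive=True} flag additionally enforces the physical nonnegativity of emitter intensities.
\begin{verbatim}
# Minimal sketch: recover a sparse x from y = A x by solving (2).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N_l, N_h = 100, 400
A = rng.random((N_l, N_h))                       # stand-in for the PSF matrix
x_true = np.zeros(N_h)
x_true[rng.choice(N_h, 5, replace=False)] = 1.0  # five emitters
y = A @ x_true

model = Lasso(alpha=0.01, positive=True, fit_intercept=False,
              max_iter=10000)                    # alpha ~ lambda / (2 N_l)
model.fit(A, y)
print("recovered support:", np.flatnonzero(model.coef_ > 1e-3))
\end{verbatim}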
\section{Algorithmic Unrolling for Sparse Localization}
Once a problem is framed as a sparse optimization of the form (\ref{eqn:sls}), a number of algorithms may be used to find the minimizer $\mathbf{\hat{x}}$. Examples include the Alternating Direction Method of Multipliers (ADMM) \cite{ADMM}, the Iterative Shrinkage-Thresholding Algorithm (ISTA) \cite{ISTA}, and the Half-Quadratic Splitting algorithm (HQS) \cite{HQS-net}. These methods converge to the correct minimizer $\mathbf{\hat{x}}$, but have some limiting disadvantages, as discussed above: slow convergence, the requirement of parameter tuning and explicit knowledge of the imaging system \cite{LSPARCOM}, and mathematical inflexibility.
Deep learning approaches have overcome some of these disadvantages. In the analysis of SMLM data, convolutional neural network models have achieved fast, accurate super-resolution \cite{deepSTORM}, improving recovery by incorporating structure not specified by the user. Deep learning, however, comes with disadvantages of its own. In particular, deep learning is typically thought of as a \say{black box} process: it is difficult to interpret the way the model transforms input to obtain a result. Because of this, when inaccurate results are produced, it can be difficult to understand how to improve the model. Typical deep learning approaches are strongly dependent on the available training data, causing a lack of model robustness to new examples. Finally, when using generic network architectures, many layers and parameters are typically required for good performance.
In 2010, Gregor and LeCun proposed a method to create neural networks based on iterative methods used for sparse recovery \cite{LISTA}, known as algorithm unrolling. The goal is to take advantage of both the interpretability of iterative techniques and the flexibility of learned methods. In learned unrolling, the transformation applied to the input by each iteration of the algorithm is replaced with a neural network layer which applies the same type of function: for instance, matrix multiplication can be replaced by a fully-connected layer, and thresholding can be replaced by an activation function representing an appropriate regularizer. These iteration-layers are concatenated together, and the resulting model-based neural network is optimized using supervised learning, with training data consisting of paired examples of the signal vector $\mathbf{x}$ and measurement vector $\mathbf{y}$ from (\ref{eqn:sls}). Training data may be obtained, for example, from measurement simulations with known ground truth as in \cite{LSPARCOM}. A forward pass through the optimized network will then perform the same operations as the iterative algorithm, with the parameters of each transformation optimized to map the training input $\mathbf{y}$ to its paired signal $\mathbf{x}$.
Gregor and LeCun applied the unrolling framework to ISTA, calling the ISTA-inspired network \say{Learned ISTA}, or LISTA. For a given number of iterations/layers, the trained LISTA network obtains lower prediction error than ISTA, and even achieves faster convergence and higher accuracy than the accelerated version of ISTA, FISTA \cite{LISTA}. In the \say{From ISTA to LISTA} box below, we detail the process of constructing the LISTA network based on ISTA.
\vspace{0.5em}
\begin{tcolorbox}[breakable,title={From ISTA to LISTA}]
\setstretch{1}
Here, we detail ISTA and use it as a case study to describe the process of algorithm unrolling. Given a problem with the form of (\ref{eqn:sls}), ISTA estimates $\mathbf{x}$, taking as inputs the measurement matrix $\mathbf{A}$, the measurement vector $\mathbf{y}$, the regularization parameter $\lambda$, and $L$, a Lipschitz constant of $\nabla \|\mathbf{Ax-y}\|_{2}^{2}$.
\begin{algorithm}[H]
\setstretch{1}
\caption{ISTA}
\label{alg:ISTA}
\begin{algorithmic}[1]
\Require{$\mathbf{y}$, $\mathbf{A}$, $\lambda$, $L$, number of iterations $k_{max}$ }
\Ensure{$\mathbf{\hat{x}}$}
\State $\mathbf{\hat{x}_{1}} = 0$, $k = 1$.
\While{$k<k_{max}$}
\State $\mathbf{\hat{x}_{k+1}} = \mathbf{\mathcal{T}}_{\frac{\lambda}{L}}(\mathbf{\hat{x}_{k}}-\frac{2}{L}\mathbf{A^{T}}(\mathbf{A\hat{x}_{k}-y}))$
\State $k\leftarrow k+1$
\EndWhile
\State $\mathbf{\hat{x}} = \mathbf{\hat{x}_{k_{max}}}$
\end{algorithmic}
\end{algorithm}
Here, $\mathbf{\mathcal{T}}_{\frac{\lambda}{L}}$ represents the soft thresholding operator after which ISTA is named,
\begin{equation}
\label{eqn:th_op}
\mathcal{T}_{\alpha}(x) = \max\{|x|-\alpha,0\}\cdot\sign(x),
\end{equation}
where $\sign(\cdot)$ is the sign operator,
\begin{equation}
\sign(x) =
\begin{cases}
-1, & x<0\\
0, & x=0\\
1, & x>0.
\end{cases}
\end{equation}
The iterative step of ISTA is given in line 3 of Algorithm \ref{alg:ISTA}. The argument of $\mathbf{\mathcal{T}}_{\frac{\lambda}{L}}(\cdot)$ in the iterative step can be rewritten as the sum of matrix-vector products with $\mathbf{y}$ and $\mathbf{\hat{x}_{k}}$:
\begin{equation}
\mathbf{\hat{x}_{k}}-\frac{2}{L}\mathbf{A}^{T}(\mathbf{A\hat{x}_{k}-y}) = \frac{2}{L}\mathbf{A}^{T}\mathbf{y} + \left(\mathbf{I}-\frac{2}{L}\mathbf{A}^{T}\mathbf{A}\right)\mathbf{\hat{x}_{k}} = \mathbf{W_{0k}y} + \mathbf{W_{k}\hat{x}_{k}}.
\label{eqn:ISTA_step}
\end{equation}
This step can be modeled by the sum of fully-connected neural network layers and an activation function with learned threshold, as depicted in Fig. \ref{fig:lista}. By stringing several of these layers together, the resulting deep neural network, LISTA, has the same form as the operation performed by running ISTA over multiple iterations. Differently from ISTA, the weights of each layer are trained independently, providing greater flexibility.
\vspace{1em}
\centering
\includegraphics[scale=0.65]{lista_5.png}
\captionof{figure}{\footnotesize Unrolling of ISTA into LISTA. \textbf{(a)}: Diagram of the operation of ISTA as a feedback loop. \textbf{(b)}: Diagram of LISTA; matrix multiplications by $\frac{2}{L}\mathbf{A}^{T}$ are replaced by the weight matrices $\mathbf{W_{0k}}$, and multiplication by $\mathbf{I}-\frac{2}{L}\mathbf{A}^{T}\mathbf{A}$ is replaced by $\mathbf{W_{k}}$, which, along with $\mathbf{W_{0k}}$, may be optimized with supervised learning.}
\label{fig:lista}
\end{tcolorbox}
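For concreteness, Algorithm \ref{alg:ISTA} admits a direct \texttt{numpy} transcription; this is a minimal sketch, assuming $\mathbf{A}$, $\mathbf{y}$, and $\lambda$ are given, with $L$ computed as the Lipschitz constant of $\nabla\|\mathbf{Ax-y}\|_2^2$.
\begin{verbatim}
# Minimal numpy transcription of Algorithm 1 (ISTA).
import numpy as np

def soft_threshold(x, alpha):
    # The soft-thresholding operator T_alpha: shrink each entry by alpha.
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

def ista(A, y, lam, k_max=200):
    L = 2 * np.linalg.norm(A.T @ A, 2)   # Lipschitz const. of grad ||Ax-y||^2
    x = np.zeros(A.shape[1])
    for _ in range(k_max):
        x = soft_threshold(x - (2.0 / L) * (A.T @ (A @ x - y)), lam / L)
    return x
\end{verbatim}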
This framework provides several key advantages: first, algorithm parameters, such as $\lambda$ in (\ref{eqn:sls}), are learned automatically. Second, with an unrolled model, the part of the model corresponding to $\mathbf{A}$ in (\ref{eqn:sls}), is learned, removing the need to explicitly model the PSF. Finally, while these iterative algorithms are designed to solve a specific problem, sparse recovery, the approach is general, solving all problems of this type equally well. With a neural network, the model can learn to analyze data that may have additional structure not explainable by sparsity, and thereby obtain higher-accuracy results more quickly. Because the underlying structure of the algorithm remains intact, the network is also less prone to overfitting, improving robustness.
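As a minimal sketch of the network in Fig. \ref{fig:lista}, the following \texttt{PyTorch} module implements a stack of LISTA layers with independently trained $\mathbf{W_{0k}}$, $\mathbf{W_{k}}$, and thresholds; the layer count, initialization, and training loss are illustrative assumptions, not the exact configuration of \cite{LISTA}.
\begin{verbatim}
# Minimal PyTorch sketch of LISTA; sizes and initialization are assumed.
import torch
import torch.nn as nn

class LISTA(nn.Module):
    def __init__(self, n_low, n_high, n_layers=10):
        super().__init__()
        # W0k plays the role of (2/L) A^T; Wk plays I - (2/L) A^T A.
        self.W0 = nn.ModuleList(
            nn.Linear(n_low, n_high, bias=False) for _ in range(n_layers))
        self.W = nn.ModuleList(
            nn.Linear(n_high, n_high, bias=False) for _ in range(n_layers))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))  # lambda/L

    def forward(self, y):
        x = y.new_zeros(y.shape[0], self.W0[0].out_features)
        for W0, W, th in zip(self.W0, self.W, self.theta):
            z = W0(y) + W(x)
            x = torch.sign(z) * torch.relu(z.abs() - th)  # soft threshold
        return x

# Supervised training on paired (y, x) examples, as described in the text:
#   loss = ((net(y_batch) - x_batch) ** 2).mean(); loss.backward()
\end{verbatim}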
The unrolling framework also has a few drawbacks. The data-driven approach requires a substantial quantity of training data, which may be difficult to obtain. If the data used to train the network is generated differently from that being analyzed (for example, if a different microscope is used, with a substantially different PSF), recovery performance will degrade. However, it has been found that learned unrolled networks are much more robust than traditional learned neural networks to changes in the distribution of signal (for example, studying a different type of subcellular structure \cite{LSPARCOM}). The learned weights of the network may also be less interpretable than an algorithm's iterative step, in which each component has explicit physical meaning. In \cite{LSPARCOM}, however, it is shown that the LISTA-based neural network used for the superresolution task learns transformations which are closely related to the operation of the iterative step (for instance, convolution filters are learned with shapes similar to the PSF) and are easier to understand than those learned by the purely data-driven approach of typical neural networks. It is also important to note that, being partially data-driven, learned unrolled networks no longer explicitly solve the optimization problems they are based on (such as sparse recovery). While the structure of the algorithm is maintained, the transformation applied by the network does not follow the exact steps that are guaranteed to find the minimizer of (\ref{eqn:sls}), and may not be applied in the image domain. So, while learned unrolled networks have been shown to be successful in localizing spatially sparse sources, they do not explicitly solve the sparse recovery problem.
Throughout the rest of the paper, we will concentrate on applications of deep unrolling to the recovery of sparse biological data, specifically using unrolled networks based on ISTA. Importantly, the learned unrolling strategy is not restricted to ISTA, nor is it restricted to problems with a sparse prior: any algorithm whose iterative step may be carried out by a learnable neural network layer may be unrolled. Gregor and LeCun developed a learned version of the coordinate descent algorithm, finding that the learned version again obtained much lower prediction error than the iterative version \cite{LISTA}. Other authors have applied the unrolling framework to a variety of algorithms for biological data processing tasks, including ADMM \cite{ADMM-net} and robust PCA \cite{rPCA}, which were shown to obtain lower errors in MRI signal recovery and ultrasound clutter suppression, respectively, in less time than the then state-of-the-art algorithms, consistent with learned unrolled networks converging more quickly. Outside the realm of biology, unrolling of the half-quadratic splitting algorithm has been shown to achieve both high-quality denoising \cite{iminpaint} and super-resolution \cite{HQS-net} in natural images. Many additional example applications are provided in a recent review \cite{Unrolling_rev}.
\section{Unrolling in Optical Localization Microscopy}
In this section, we will focus on the domain of optical localization microscopy. First, we will give a detailed example of how unrolling enhances the capabilities of one optical imaging technique: Single-Molecule Localization Microscopy. Then, we will discuss how the concept of unrolling can be applied to other sparse biological optical imaging problems.
\subsection{Unrolling in Single-Molecule Localization Microscopy}
Visualization of sub-cellular features and organelles within biological cells requires imaging techniques with nanometer resolution. In the case of optical imaging systems, from the 19\textsuperscript{th} century until the recent development of super-resolution microscopy, the resolution limit was considered to be set by Abbe's diffraction limit for a microscope:
\begin{equation}
\label{eqn:difflim}
d=\frac{\beta}{2\,\mathrm{NA}},
\end{equation}
where $d$ is the minimal distance below which two point sources cannot be distinguished, $\beta$ is the wavelength of the emitted photons, and $\mathrm{NA}$ is the numerical aperture of the microscope. In fluorescence microscopy, the sample is stained with fluorophores which can be excited with one color of light and emit photons of a longer wavelength for subsequent detection. Since most cells are not naturally fluorescent, this allows specific imaging of the stained biomolecules. If the number of photons emitted is sufficiently high, and the background is sufficiently low, single molecules can be detected in this way. However, biological structures of interest are typically made of multitudes of the same biomolecule type in close apposition, obscuring details finer than the diffraction limit of the emitted photons when all fluorophores are emitting at the same time.
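As a concrete illustration (with representative, assumed numbers): for green emission at $\beta \approx 520$ nm collected through a high-numerical-aperture oil objective with $\mathrm{NA} \approx 1.4$, (\ref{eqn:difflim}) gives $d \approx 186$ nm, which sets the $\sim$200 nm scale referred to below.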
One may overcome the diffraction limit by distinguishing between the photons coming from two neighboring fluorophores \cite{sub_diff}. One way to distinguish neighboring molecules is to utilize photo-activated or photo-switching fluorophores to separate fluorescent emission in time; this is the basis for SMLM techniques such as Photo-Activated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) \cite{SMLM1, SMLM2}. Optical, physical, or chemical means are used to ensure that at any given moment only a small subset of all fluorophores are emitting photons. Then a large number of diffraction-limited images is collected, each containing just a few active isolated fluorophores. The imaging sequence is long enough that each fluorophore is stochastically activated from a non-emissive state to a bright state, and back to a non-emissive (or bleached) state. During each cycle, the density of activated molecules is kept low enough that emission profiles of individual fluorophores do not overlap.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth,height=.3\textheight,keepaspectratio]{SMLM_results.png}
\caption{\footnotesize Sample experimental results from \cite{SMLM2}, comparing diffraction-limited and STORM-generated images of RecA-coated circular plasmid DNA. (a) Illustration of the DNA construct, with linked fluorophores (via immunohistochemistry). (b) Diffraction-limited frames taken by a total internal reflection microscope (top), and the reconstructed STORM images of the same frames (bottom). Scale bars, 300 nm.}
\label{fig:storm}
\end{figure}
High-resolution fluorophore localization can be framed as a linear inverse problem. Let us denote the collected sequence of diffraction-limited frames as $\mathbf{Y}\in \mathds{R}^{M^2 \times T}$, where every column is the $M^2$ vector stacking of the corresponding $M \times M$ frame. Our goal is to reconstruct an image of size $N \times N$, consisting of fluorophore locations on a fine grid ($N>M$). We can model the generation of $\mathbf{Y}$ as:
\begin{equation}
\label{eqn:sc1}
\mathbf{Y=AX},
\end{equation}
where $\mathbf{X}\in \mathds{R}^{N^2 \times T}$ is the sequence of vector-stacked high-resolution frames, and the non-zero entries in each frame (i.e., columns in the matrix) correspond to the locations of activated fluorophores. The matrix $\mathbf{A}\in \mathds{R}^{M^2 \times N^2}$ is the measurement matrix, where each column of $\mathbf{A}$ is defined as the system's PSF shifted by a single pixel on the high-resolution grid.
The simplest way to retrieve $\mathbf{X}$ without leveraging knowledge of its biological structure is to fit the observed emission profiles in $\mathbf{Y}$ to the PSF of the system, which is typically modelled as a 2D Gaussian function. This yields localizations with precision well beyond the diffraction limit (accurate to within a few to tens of nm, versus a diffraction limit of $\sim$200 nm), allowing for imaging at a molecular scale within cells. Fig. \ref{fig:storm} illustrates the enhanced resolution of SMLM: STORM reveals the underlying structure of a circular DNA construct, which was completely unseen in its diffraction-limited images.
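A minimal sketch of this Gaussian-fitting step, assuming an isolated spot has already been cropped into a small region of interest, might use \texttt{scipy}'s nonlinear least squares; the model and initial guesses below are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: sub-pixel localization by fitting a 2D Gaussian PSF model.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
    return g.ravel()

def localize_spot(roi):
    # roi: small 2D array containing one diffraction-limited spot
    ny, nx = roi.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = [nx / 2, ny / 2, 1.5, roi.max() - roi.min(), roi.min()]  # guesses
    popt, _ = curve_fit(gaussian2d, (x, y), roi.ravel(), p0=p0)
    return popt[0], popt[1]   # sub-pixel (x, y) emitter coordinates
\end{verbatim}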
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=.5\textheight,keepaspectratio]{__SPARCOM_results_2.PNG}
\caption{\footnotesize Results from \cite{SPARCOM}, showing the simulation and reconstruction of microtubules from a movie \cite{smlm_rev} of 361 high-density frames. (a) Simulated ground truth of the image with sub-wavelength features. (b) Diffraction-limited image, obtained by summing all 361 frames in the movie. (c) Single-molecule localization reconstruction from a low-density movie of 12,000 frames of the same simulated microtubules with the same number of emitters (the image is constructed using the ThunderSTORM plugin \cite{ThunderSTORM} for ImageJ). SPARCOM recoveries for movies with 361 and 60 high-density frames (same simulated microtubules and number of emitters) are given in (d) and (e), respectively.}
\label{fig:sparcom}
\end{figure}
While achieving excellent resolution, standard SMLM methods have one main drawback: they require lengthy imaging times to achieve both full coverage of the imaged specimen and minimal overlap between PSFs. Thus, in its classical form this technique has low temporal resolution, preventing its application to fast-changing specimens in live-cell imaging. To circumvent the long acquisition periods required by SMLM methods, a variety of techniques have emerged which enable the use of a smaller number of frames for reconstructing the 2-D super-resolved image \cite{falcon,CSSTORM,SOFI,SPARCOM,deepSTORM}. These techniques take advantage of prior information regarding the optical setup, the geometry of the sample, or the statistics of the emitters. One such technique is SPARCOM \cite{SPARCOM_MATH, SPARCOM}, which exploits sparsity in the correlation domain, while assuming that the blinking emitters are uncorrelated over time and space. This allows re-formulation of the localization task as a sparse recovery problem which can be solved using ISTA (see \say{Learned Sparsity-Based Super-Resolution Correlation Microscopy} and Fig. \ref{fig:LSPARCOM} for more details).
SPARCOM yields excellent results when compared to a standard STORM reconstruction (using ThunderSTORM \cite{ThunderSTORM}), as illustrated in Fig. \ref{fig:sparcom}. SPARCOM achieves similar spatial resolution with as few as 361 and even 60 frames, compared with the 12,000 frames needed for ThunderSTORM to produce a reliable recovery, corresponding to a 33- or 200-times faster acquisition rate. Thus, SPARCOM improves temporal resolution while retaining the spatial resolution of PALM/STORM. Gaining these benefits comes with tradeoffs: SPARCOM requires prior knowledge of the PSF of the optical setup for the calculation of the measurement matrix, which is not always available, and a careful choice of the regularization factor $\lambda$, which is generally made heuristically.
As shown in the previous section, these shortcomings can be overcome by learning from data using an algorithm unrolling approach. This was done recently by Dardikman-Yoffe et al. \cite{LSPARCOM}, who introduced Learned SPARCOM (LSPARCOM), a 10-layer deep network resulting from unfolding SPARCOM, detailed in the box below.
\vspace{0.5em}
\begin{tcolorbox}[breakable,title={Learned Sparsity-Based Super-Resolution Correlation Microscopy}]
\setstretch{1}
In SPARCOM, we start by observing the temporal covariance matrices of $\mathbf{X}$ and $\mathbf{Y}$, $\mathbf{M}_X$ and $\mathbf{M}_Y$. According to (\ref{eqn:sc1}), we can write the following:
\begin{equation}
\label{eqn:sc3}
\mathbf{M}_Y=\mathbf{AM}_X\mathbf{A}^T.
\end{equation}
We assume that different emitters are uncorrelated over time and space. Thus, $\mathbf{M}_X$ is a diagonal matrix, whose diagonal, denoted $\mathbf{m}$, collects the variances of the emitter fluctuations on the high-resolution grid. Since non-zero variance can only exist where there is fluctuation in emission, the support of the diagonal corresponds to the emitters' locations on the high-resolution grid. Therefore, recovering $\mathbf{m}$ and reshaping it as a matrix yields the desired high-resolution image. For this purpose, let us re-write (\ref{eqn:sc3}) as:
\begin{equation}
\label{eqn:sc4}
\mathbf{M}_Y=\sum_{i=1}^{N^2} \mathbf{A}_i\mathbf{A}^T_i\mathbf{m}_i,
\end{equation}
where $\mathbf{A}_i$ is the $i$-th column in $\mathbf{A}$ and $\mathbf{m}_i$ is the $i$-th entry in $\mathbf{m}$. Following (\ref{eqn:sls}), we can exploit the sparsity of emitters, and compute $\mathbf{m}$ by solving the following sparse recovery problem:
\begin{equation}
\label{eqn:sc5}
\min_{\mathbf{m}\geq0}{\hspace{0.3em}\lambda\|\mathbf{m}\|_1 + \frac{1}{2}\|\mathbf{M}_Y - \sum_{i=1}^{N^2} \mathbf{A}_i\mathbf{A}^T_i\mathbf{m}_i\|^2_2},
\end{equation}
where $\lambda\geq0$ is the regularization parameter. ISTA can be used to solve this optimization problem, as shown in Fig. \ref{fig:LSPARCOM}.
To apply unrolling to SPARCOM, we need to replace the operations performed in a single iteration with neural network layers, and choose the input of the unrolled algorithm. The unrolling process is illustrated in Fig. \ref{fig:LSPARCOM}: to start, $\mathbf{G}$, the $N \times N$ matrix-shaped resized version of the diagonal of $\mathbf{M}_Y$, is taken as input. The matrix-multiplication operations performed in each iteration are replaced with convolutional filters $W^{(k)}_p$, $k=0,\ldots,9$, and the positive soft-thresholding operator is replaced with a differentiable, sigmoid-based approximation of the positive hard-thresholding operator \cite{l0-relu}, denoted as $S^{+}_{\alpha_0, \beta_0}(\cdot)$. The unrolling process results in LSPARCOM, a deep neural network which performs the same operation as running SPARCOM over multiple iterations. LSPARCOM can be trained on a single sequence of frames taken from one FOV with a known underlying structure, which can be generated using simulations (like the one offered by ThunderSTORM \cite{ThunderSTORM}). The model is then trained on overlapping small patches taken from multiple frames of that sequence.
\includegraphics[width=.9\textwidth,height=.9\textheight,keepaspectratio]{LSPARCOM.png}
\captionof{figure}{\footnotesize Unrolling of SPARCOM to LSPARCOM, from \cite{LSPARCOM}.
(a) Block diagram of SPARCOM (via ISTA), recovering the vector-stacked super-resolved image $\mathbf{x}^{(k)}$ (which corresponds to $\mathbf{m}$). The input is $\mathbf{g}_Y$, the diagonal of $\mathbf{M}_Y$. The block with the blue graph is $\mathcal{T}_{\frac{\lambda}{L_f}}$, the positive soft-thresholding operator with parameter $\frac{\lambda}{L_f}$, where $L_f$ is the Lipschitz constant of the gradient of (\ref{eqn:sc5}). The other blocks denote matrix multiplication (from the left side), where $\mathbf{\Tilde{A}}=\mathbf{A}^2$ (element-wise power) and $\mathbf{M} = |\mathbf{A}^T\mathbf{A}|^2$ (absolute-value and power operations performed element-wise).
(b) LSPARCOM, recovering the super-resolved image $\mathbf{X}^{(k)}$. The input \textbf{G} is the matrix-shaped resized version of $\mathbf{g}_Y$. The blocks with the blue graph apply the smooth activation function $S^{+}_{\alpha_0, \beta_0}(\cdot)$ with two trainable parameters: $0\leq\alpha_0^{(k)}\leq1$ and $\beta_0^{(k)}$, $k=0,\ldots,10$. The other blocks denote convolutional layers, where $I_f$ is a non-trainable identity filter and $W_i$, $W^{(k)}_p$, $k=0,\ldots,9$ are trainable filters.}
\label{fig:LSPARCOM}
\end{tcolorbox}
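To make the box concrete, the following is a minimal \texttt{numpy} sketch of the SPARCOM iteration of Fig. \ref{fig:LSPARCOM}(a), using only the per-pixel temporal variance (the diagonal of $\mathbf{M}_Y$) as in the figure; the iteration count and $\lambda$ are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of SPARCOM via ISTA, driven by the diagonal of M_Y.
import numpy as np

def sparcom(Y, A, lam, k_max=100):
    g = np.var(Y, axis=1)          # g_Y: per-pixel temporal variance
    A2 = A**2                      # element-wise square (the matrix A-tilde)
    M = np.abs(A.T @ A)**2         # |A^T A|^2, element-wise
    Lf = np.linalg.norm(M, 2)      # Lipschitz constant of the gradient
    b = A2.T @ g
    m = np.zeros(A.shape[1])
    for _ in range(k_max):
        v = m - (M @ m - b) / Lf               # gradient step
        m = np.maximum(v - lam / Lf, 0.0)      # positive soft threshold
    return m                       # reshape to N x N for the final image
\end{verbatim}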
The results shown in Fig. \ref{fig:LSPARCOM_results} illustrate that inference through the 10 unfolded layers of LSPARCOM is comparable to running SPARCOM for 100 iterations with a carefully-chosen regularization parameter. Both methods succeed in reconstructing the underlying tubulin structure from a sequence of 350 high-density frames. Moreover, if a shorter, denser sequence is constructed by summing groups of 14 frames of the original sequence, the resolution of the SPARCOM reconstruction degrades while the LSPARCOM reconstruction remains excellent. Thus, even with 25 extremely dense frames as input, LSPARCOM yields excellent reconstruction of sub-wavelength features, which allows for substantially higher temporal resolution (compared to the hundreds of frames needed for SPARCOM). LSPARCOM is also faster to use, with an approximately $5\times$ improvement over SPARCOM in execution time \cite{LSPARCOM}. LSPARCOM enables efficient and accurate imaging well below the diffraction limit, without prior knowledge of the PSF or imaging parameters.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth,height=.5\textheight,keepaspectratio]{__LSPARCOM_results_2_2.PNG}
\caption{\footnotesize Sample results from \cite{LSPARCOM}, reconstructed from a simulated biological tubulins dataset \cite{smlm_rev}, composed of 350 high-density (b,c) or 25 very high-density frames (d,e). (a): simulated ground truth tubulin structure. (b,d): SPARCOM reconstruction, executed over 100 iterations with $\lambda = 0.25$ for 350 frames (b), and $\lambda = 0.05$ for 25 frames (d). (c,e): LSPARCOM reconstruction, given 350 frames (c) and 25 frames (e) as input.}
\label{fig:LSPARCOM_results}
\end{figure}
Given its enhanced capabilities, LSPARCOM has great potential for localization of biological structures. Meeting the temporal and spatial resolution requirements for imaging dynamic cellular processes at a molecular scale, it may replace its iterative counterpart as a robust, efficient method for live-cell imaging. The success of LSPARCOM further suggests that unrolling may benefit other sparse biological imaging problems, as we discuss in the next section.
\subsection{Optical Microscopy Extensions}
In the previous section, learned unrolling was shown to achieve fast, highly accurate results in SMLM. Here we touch on two applications that may benefit from unrolling: imaging transcriptomics and synapse detection.
\subsubsection{Imaging Transcriptomics}
Imaging Transcriptomics (IT) is a family of fluorescence microscopy techniques studying the spatial distribution of messenger RNA transcripts in cells. This can allow classification of individual cells by their gene expression in the context of their location in a tissue, yielding insight about the function of the whole system \cite{MERFISH}, or revealing sub-cellular spatial organization of mRNA transcripts. Many IT methods are based on single-molecule Fluorescence \emph{in situ} hybridization (smFISH) \cite{smfish}, in which fluorophore-labeled probes bind to complementary regions of messenger RNA. While smFISH localizes transcripts of one gene at a time, in many experiments it is desirable to study multiple genes at once, up to tens of thousands. To achieve this goal, combinatorial IT techniques, like MERFISH \cite{MERFISH}, assign a distinct binary barcode of length $F$ to each transcript. The barcodes are chosen to be distinct entries in a \say{codebook}: $F$ rounds of FISH imaging are performed, with a transcript appearing as a spot in round $f$ if the $f$-th bit of its barcode is 1, as depicted in Fig. \ref{fig:MERFISH}. Using this technique, up to $2^{F}$ genes may be studied in only $F$ rounds of imaging.
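The decoding step itself is easy to prototype. The following minimal sketch (with an assumed random stand-in codebook; real codebooks use error-correcting designs) thresholds a spot's intensities across the $F$ rounds into bits and matches them to the nearest barcode.
\begin{verbatim}
# Minimal sketch of combinatorial barcode decoding for one detected spot.
import numpy as np

F, G = 16, 140
rng = np.random.default_rng(1)
C = (rng.random((G, F)) < 0.25).astype(int)   # stand-in G x F codebook

def decode_spot(intensities, C, threshold=0.5, max_errors=1):
    bits = (np.asarray(intensities) > threshold).astype(int)
    hamming = np.abs(C - bits).sum(axis=1)    # distance to every barcode
    g = int(np.argmin(hamming))
    return g if hamming[g] <= max_errors else None   # None: unmatched spot
\end{verbatim}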
\begin{figure}
\centering
\includegraphics[scale=0.85]{merfish_snips.PNG}
\caption{Depiction of MERFISH multiplexed Imaging Transcriptomics, from \cite{MERFISH}. LEFT: Fluorophores are hybridized to mRNA transcripts if the bit of the associated barcode is equal to 1, and mRNA appears as a spot. RIGHT: Acquisition and decoding of MERFISH data. }
\label{fig:MERFISH}
\end{figure}
Once these $F$ rounds of imaging are performed, images are processed to produce a set of localizations for the mRNA of each gene. The problem of translating images into such a list is a sparse recovery problem: the fluorophores, scattered sparsely across the sample, appear in the images, modulated by the codebook and blurred by the PSF of the microscope. The goal, similarly to SMLM, is to locate these sparsely scattered fluorescent emitters. Currently used processing techniques analyze image data with a heuristic approach in which each location is separately checked for signal. In \cite{JSIT}, we formalized this system analogously to the sparse optimization problem (\ref{eqn:sls}), in a method called the Joint Sparse method for Imaging Transcriptomics (JSIT). For IT data with $F$ rounds of imaging, studying $G$ genes, with $N_{h}$ locations on the high-resolution location grid, and $N_{l}$ pixels in measurement images, we can set up an optimization problem similar to (\ref{eqn:sls}). We vectorize and concatenate images into a matrix $\mathbf{Y}\in\mathds{R}^{N_{l}\times F}$. Then, we take $\mathbf{Y}$ to be generated as a product of three matrices,
\begin{equation}
\label{eqn:JSIT}
\mathbf{Y} = \mathbf{AXC}.
\end{equation}
Here, $\mathbf{A}\in\mathds{R}^{N_{l}\times N_{h}}$ is the same as in (\ref{eqn:ls}), the columns of $\mathbf{X}\in\mathds{R}^{N_{h}\times G}$ are signal vectors like $\mathbf{x}$ in (\ref{eqn:ls}), with each column representing a specific gene, and $\mathbf{C}\in\mathds{R}^{G\times F}$ is the set of barcodes. Given (\ref{eqn:JSIT}), we recover $\mathbf{X}$ from the measurement $\mathbf{Y}$ and the known matrices $\mathbf{A}$ and $\mathbf{C}$, using an optimization-based approach constrained by assumptions of sparsity: that only one mRNA will be present at each location, and that relatively few mRNAs will be present in the FOV. This constrained optimization problem can be solved with an iterative algorithm. In addition to being more interpretable than the currently used heuristic, this method has achieved more accurate mRNA localization, especially in low-magnification imaging.
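As a minimal sketch of this iterative approach, one ISTA-style update for (\ref{eqn:JSIT}) might look as follows; the step size and $\lambda$ are illustrative, and the one-transcript-per-location constraint is omitted for brevity.
\begin{verbatim}
# One proximal-gradient update for recovering X from Y = A X C.
import numpy as np

def jsit_step(X, Y, A, C, step, lam):
    grad = A.T @ (A @ X @ C - Y) @ C.T       # gradient of 0.5 ||Y - A X C||^2
    V = X - step * grad
    return np.maximum(V - step * lam, 0.0)   # nonnegative soft threshold
\end{verbatim}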
A natural extension of this formulation is the application of learned unrolling. Based on our experience with other applications, unrolling may improve performance obtaining more accurate genetic expression levels in fewer iterations, while eliminating parameter-tuning requirements and explicit knowledge of the optical PSF $\mathbf{A}$.
\subsubsection{Synapse Detection}
Many other biological settings involve sparse emitters, but have not necessarily been framed as sparse recovery problems to improve performance. For example, sparse biological images are encountered in neuronal synapse detection for characterization of the neurophysiological consequences of genetic and pharmacological perturbation screens. Synapses are localized by identifying positions in which fluorescently tagged pre- and post-synaptic proteins are located in close proximity. These proteins cluster into puncta which appear as point sources, similar to fluorophores in SMLM data. Sparse analysis could identify sub-pixel locations for each punctum, and, from the presence of both pre- and post-synaptic proteins, infer accurate synapse locations. While sparse algorithm unrolling has not yet been applied in this context, model-based learning strategies have already shown good results \cite{dogNet} for this setting.
\section{Unrolling in other imaging modalities}
Sparse emitters arise in other biological imaging modalities beyond epifluorescent microscopy, and the algorithmic unrolling method has achieved fast, highly accurate localization in several such settings. Here we will review three such cases: Ultrasound Localization Microscopy (ULM), Light Field Microscopy (LFM), and cell center localization in non-spatially-sparse histology images.
\subsection{Unrolling in Ultrasound Localization Microscopy}
The attainable resolution of ultrasonography is fundamentally limited by wave diffraction, i.e., the minimum distance between separable scatterers is half a wavelength. Due to this limit, conventional ultrasound techniques are bound to a tradeoff between resolution and penetration depth: increasing the transmit frequency shortens the wavelength (thus increasing resolution) but comes at the cost of reduced penetration depth, since higher-frequency waves suffer from stronger absorption. This tradeoff particularly hinders deep high-resolution microvascular imaging, which is crucial for many diagnostic applications.
A decade ago, this tradeoff was circumvented by the introduction of Ultrasound Localization Microscopy (ULM) \cite{ULM, ULM2}, which leverages the principles of SMLM and adapts these to ultrasound imaging. In SMLM, stochastic “blinking” of subsets of fluorophores is exploited to provide sparse point sources; in ULM, lipid-shelled gas microbubbles fulfill this role. A sequence of diffraction-limited ultrasonic scans is acquired, each containing just a few active isolated sources. Thus, each received image frame can be written as:
\begin{equation}
\label{eqn:ulm1}
\mathbf{y = Ax + w},
\end{equation}
where $\mathbf{x}$ is a vector that describes the sparse microbubble distribution on a high-resolution image grid, $\mathbf{y}$ is a vectorized image frame from the ultrasound sequence, $\mathbf{A}$ is the measurement matrix (defined by the system's PSF), and $\mathbf{w}$ is a noise vector. As in SMLM, this formulation enables precise localization of the microbubble centers on a subdiffraction grid. The accumulation of many such localizations over time yields a super-resolved image. This approach achieves a resolution up to ten times finer than the wavelength \cite{ufULM}, showing that ultrasonography at sub-diffraction scale is possible.
Similarly to SMLM, the quality of ULM imaging is dependent on the number of localized microbubbles and the localization accuracy; thus, it gives rise to a new tradeoff between microbubble density and acquisition time. To achieve the desired signal sparsity for straightforward isolation of the backscattered echoes, ULM is typically performed using a very dilute solution of microbubbles. On regular ultrasound systems, this constraint leads to tediously long acquisition times to cover the full vascular bed. Ultrafast plane-wave ultrasound (uULM) imaging has managed to lower acquisition time \cite{ufULM} by taking many snapshots of individual microbubbles as they travel through the vasculature, thereby facilitating high-fidelity reconstruction of the larger vessels. Nevertheless, mapping the full capillary bed still requires microbubbles to pass through each capillary, which caps the achievable acquisition times at tens of minutes \cite{ULM_TRADEOFF}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth,height=.3\textheight,keepaspectratio]{__ULM_results.PNG}
\caption{\footnotesize Sample results from \cite{ULM_IMAGE}: reconstruction of a highly dense sequence of 300 frames acquired clinically \textit{in-vivo} from a human prostate. (A) Maximum intensity projection (MIP) image of the sequence; (B) a selected area in the image; (C) the sparsity-driven super-resolution ultrasound reconstruction of the same area.}
\label{fig:ULM_results}
\end{figure}
As with SMLM, uULM can be extended by using the sparsity of the measured signal (whether spatially sparse or in any transform domain \cite{SUSHI}). Sparse recovery again enables improved localization precision and recall for high microbubble concentrations \cite{SPARSE_ULM}. Fig. \ref{fig:ULM_results} illustrates this, showing how the sparse recovery method produces fine visualization of the human prostate from a high-density sequence of \textit{in-vivo} ultrasound scans. However, as in the case of SPARCOM, solving the ULM sparse recovery problem requires iterative algorithms, such as ISTA. Unfortunately, as was previously noted, these algorithms are not computationally efficient, and their effectiveness is strongly dependent on good approximation of the system's PSF and careful tuning of the optimization parameters. With the unrolling approach, these challenges can be met in a similar fashion to SMLM above (see \say{Deep Unrolled ULM} for details).
\vspace{0.5em}
\begin{tcolorbox}[breakable,title={Deep Unrolled ULM}]
\setstretch{1}
Under the assumption of spatial sparsity of microbubbles in the high-resolution grid, $\mathbf{x}$ (as defined in (\ref{eqn:ulm1})) corresponds to the solution of the $l_1$-regularized inverse problem which was previously presented in (\ref{eqn:sls}); thus, it can be computed using ISTA. After estimating $\mathbf{x}$ for each frame, the estimates are summed across all frames to yield the final super-resolution image, which describes the microbubble distribution throughout the entire sequence.
To apply unrolling in this case, LISTA can replace ISTA as in section 2. Van Sloun et al. \cite{DEEP_ULM} have implemented such a model, resulting in a 10-layer feed-forward neural network. Each layer consists of trainable $5\times5$ convolutional filters $\mathbf{W}_{0k}$ and $\mathbf{W}_k$, along with a trainable shrinkage parameter $\lambda^k$ ($k = 0, ..., 9$). The convolutional filters replace the fully-connected layers which appear in the original unrolled version of ISTA (see Fig. \ref{fig:lista}). Replacing the proximal soft-thresholding operator $T_{\lambda}$ (see (\ref{eqn:th_op})) with a smooth sigmoid-based soft-thresholding operation \cite{l0-relu} helped avoid vanishing gradients. Similarly to LSPARCOM, this network is trained on simulated ultrasound scans of point sources, with a variety of PSF and noise realizations. Overlapping small patches are taken from multiple frames of each simulated scan sequence and given to the network as training samples.
\end{tcolorbox}
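As a concrete illustration of the box above, the following is a minimal sketch of such a convolutional unrolled network in Python (PyTorch). It follows the structure described in \cite{DEEP_ULM}, ten layers of trainable $5\times5$ convolutions with per-layer thresholds, but the particular smooth thresholding operator, its sharpness parameter, the initialization, and the assumption that the input frame has already been interpolated onto the high-resolution grid are our own simplifications rather than the exact published implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class UnrolledULM(nn.Module):
    def __init__(self, n_layers=10):
        super().__init__()
        # W_0k acts on the measured frame, W_k on the running estimate
        self.W0 = nn.ModuleList(
            [nn.Conv2d(1, 1, kernel_size=5, padding=2)
             for _ in range(n_layers)])
        self.W = nn.ModuleList(
            [nn.Conv2d(1, 1, kernel_size=5, padding=2)
             for _ in range(n_layers)])
        # One trainable threshold per layer
        self.lam = nn.Parameter(torch.full((n_layers,), 0.1))

    @staticmethod
    def smooth_threshold(z, lam, beta=25.0):
        # Sigmoid-based relaxation of soft-thresholding
        # (keeps gradients from vanishing at the threshold)
        return z * torch.sigmoid(beta * (z.abs() - lam))

    def forward(self, y):
        # y: frame, assumed interpolated to the high-resolution grid
        x = torch.zeros_like(y)
        for k in range(len(self.W)):
            x = self.smooth_threshold(
                self.W0[k](y) + self.W[k](x), self.lam[k])
        return x
\end{verbatim}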
\begin{figure}
\centering
\includegraphics[width=.7\textwidth,height=.25\textheight,keepaspectratio]{unfolded_ulm_results.png}
\caption{\footnotesize Performance comparison of standard ULM, sparse-recovery (FISTA) and deep unrolled ULM on simulations (taken from \cite{DEEP_ULM}).}
\label{fig:unfolded_ULM_results}
\end{figure}
Tests on synthetic data show that deep unrolled ULM significantly outperforms standard ULM and sparse recovery through FISTA for high microbubble concentrations (see Fig. \ref{fig:unfolded_ULM_results}), offering better recall (measured by the recovered density) and lower localization error. In addition, when tested on \textit{in-vivo} ultrasound data, Van Sloun et al. \cite{DEEP_ULM} observed that deep unrolled ULM yields super-resolution images with higher fidelity, implying improved robustness and better generalization capabilities. Furthermore, Bar-Shira et al. \cite{breastULM} demonstrated how the use of deep unrolled ULM for \emph{in vivo} human super-resolution imaging allows for better diagnosis of breast pathologies. The unrolled method is also highly efficient, requiring just over 1,000 FLOPs and containing only 506 parameters (compared to millions of parameters in other deep-learning models).
In sum, deep unrolled ULM can serve as an efficient, robust, and parameter-free method for ultrasonic imaging, with resolution comparable (or superior) to that of other ULM methods, standard and sparsity-based. Given the ability of deep unrolled ULM to perform precise reconstructions at high microbubble concentrations, deep-tissue ultrasound imaging becomes feasible. High microbubble concentrations dramatically shorten the required acquisition times, allowing deep high-resolution imaging to be performed much faster. Thus, intricate ultrasonography tasks (like microvascular imaging), which play a key role in non-invasive, \textit{in-vivo} diagnosis of many medical conditions such as cancer, arteriosclerosis, stroke, and diabetes, become simpler to execute.
\subsection{CISTA}
Another imaging domain dealing with spatially sparse data is Light Field Microscopy (LFM) \cite{LFM}. Obtaining 3D information from a single acquisition is valuable, enabling real-time volumetric neural imaging. LFM enables single-shot 3D imaging by placing a micro-lens array between the microscope objective and the camera sensor. This configuration captures both lateral and angular information from each light ray emitted from the sample, so deconvolution with the system's PSF recovers 3D emitter locations. Since each spatial location is imaged in multiple pixels on the detector, LFM faces a tradeoff between depth and lateral resolution. However, if the sample is composed of spatially sparse emitters, localization on a high-resolution, 3D grid can be performed, as in SMLM and ULM.
In \cite{CISTA}, the problem of localizing neurons in 3D space with LFM images is presented: in LFM, neurons are small enough to be considered point sources, and are distributed in a spatially sparse manner. To localize neurons, measured images are converted to a structure called an Epipolar Plane Image (EPI); the system PSF in this domain varies strongly with depth, as shown in Fig. \ref{fig:EPI}. By performing sparse optimization, the authors are able to achieve fast, accurate neuron localization.
\begin{figure}
\centering
\includegraphics[scale=0.75]{LFM_dict.PNG}
\caption{Epipolar Plane Images (EPI) derived from LFM images of emitters at different depths (from \cite{LFM_CS} under Creative Commons License 4.0). By matching an observed EPI, the depth of sources may be determined.}
\label{fig:EPI}
\end{figure}
By framing 3D neuron localization as a sparse optimization problem, the problem is opened to unrolling. In \cite{CISTA}, Song et al. first use a convolutional variant of ISTA (CISTA) to solve the localization problem, then create an unrolled network based on that algorithm, called CISTA-net. The unrolled network recovers neuron locations with higher accuracy in all dimensions and performs the recovery task more than 10,000 times faster than ISTA. This increase in speed expands the applicability of LFM: it could enable, for instance, live 3D imaging of whole nervous systems in small model organisms like \emph{C. elegans}, or of activity across large volumes of the mammalian cortex.
\subsection{Non-Spatially Sparse Imaging}
The most obvious way of thinking about sparse recovery in biological imaging is in the domain of spatially sparse sources, but there are other methods which leverage sparse coding in other aspects, and unrolling can achieve accurate results in these situations as well.
One example is cell center localization in histology slides. While cell centers are scattered sparsely in a FOV, cell shapes are irregular, so there is no single \say{impulse response} transforming cell center locations into images of cells, as in the sparse recovery form of (\ref{eqn:sls}). In \cite{ecnncs}, a traditional CNN is combined with a LISTA-like network to localize cell centers. In this framework, the locations of the centers of cells in a 2-dimensional FOV of dimension $h \times w$ are represented by a binary matrix $\mathbf{X}\in\{0,1\}^{h\times w}$. The matrix $\mathbf{X}$ is Radon transformed to represent the cell centers in polar coordinates, $\mathbf{X}_{p} = \mathcal{R}(\mathbf{X})$. A measurement matrix $\mathbf{A}$ is generated as a random Gaussian projection matrix, and the product $\mathbf{AX_{p}} = \mathbf{Y}$ is formed. Xue et al. found that although $\mathbf{Y}$ cannot be measured directly, a CNN may be trained to infer $\mathbf{Y}$ from an image of the FOV. A two-stage neural network is designed: the first stage is a traditional CNN that transforms the images into an estimate $\mathbf{\hat{Y}}$ of the matrix $\mathbf{Y}$; the second stage is a LISTA-like network used to obtain an estimate $\mathbf{\hat{X}_{p}}$ of the sparse matrix $\mathbf{X_{p}}$, which, after inverse Radon transformation, gives the cell center locations $\mathbf{X}$. The network, called End-to-end Convolutional Neural Network and Compressed Sensing (ECNNCS), is trained by penalizing the differences both between $\mathbf{\hat{Y}}$ and $\mathbf{Y}$ and between $\mathbf{\hat{X}_{p}}$ and $\mathbf{X_{p}}$. The ECNNCS model achieved better localization accuracy than the state-of-the-art algorithms used as comparison \cite{ecnncs}, showing that unrolling can improve performance outside problems of strict spatial sparsity. A sketch of the forward model follows.
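The forward model above can be simulated in a few lines of Python; the dimensions, random seed, and toy cell-center positions below are illustrative choices of ours, and \texttt{skimage} stands in for whichever Radon implementation \cite{ecnncs} used.
\begin{verbatim}
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)

# Toy binary map of cell centers in an h-by-w field of view
h, w = 64, 64
X = np.zeros((h, w))
X[12, 40] = X[30, 9] = X[51, 33] = 1.0

# Sparse polar-coordinate representation via the Radon transform
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
X_p = radon(X, theta=theta)

# Random Gaussian projection of the sinogram: Y = A @ X_p
A = rng.standard_normal((32, X_p.shape[0]))
Y = A @ X_p
\end{verbatim}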
Another technique in which sparse recovery may be applied to biological imaging data that is not spatially sparse is Compressed \emph{in situ} Imaging (CISI) \cite{cisi}, and we propose that algorithm unrolling will improve performance in this method as well. Like Imaging Transcriptomics (IT), CISI is a microscopy technique which evaluates expression levels of genes at single-cell resolution. Unlike IT, CISI data are not spatially sparse. Instead, CISI takes advantage of genetic co-expression patterns to infer single cells' transcriptomes from a few measurements of multiple genes at once (\say{composite measurements}). In CISI, single-cell transcriptomes are conceptualized as sparse linear combinations of \say{modules}, themselves linear combinations of co-expressed genes. The sparse recovery problem is to infer the sparse set of active co-expression modules from composite measurements of genes. Currently, modules are defined before the experiment, but with algorithmic unrolling, optimal co-expression modules could be learned, enabling improved transcriptome inference.
\section{Conclusion}
New biological imaging techniques are constantly being developed, and with them, computational pipelines to identify and characterize the imaged biological structures. We have described a few of these techniques and their accompanying pipelines. In many cases, these pipelines consist of heuristic strategies which have limited accuracy and are difficult to interpret. As computational power has increased and the methods have matured, more powerful, interpretable processing techniques have been created by incorporating biological and physical assumptions into constrained optimization problems solved with iterative methods. These in turn require parameter tuning and explicit knowledge of the experimental setup. A natural next step in pipeline development is model-based learning methods, including algorithmic unrolling. We have shown how, in many imaging modalities requiring source localization, unrolling achieves fast, accurate results with robust models, and we have proposed that unrolling be extended widely to other similar problems, including methods that exploit biological structure other than sparsity. We hope this work will inspire methods extending unrolling to further biological imaging modalities and experimental settings.
\bibliographystyle{IEEEbib}
\setstretch{0.8}
\section{Introduction} \label{sec:intro}
Characterizing young planetary systems is key to improving our understanding of their formation and evolution. Young transiting systems in particular offer a means to directly probe the radii, and together with masses from precise radial-velocity (RV) measurements, the bulk densities of the planets. RV observations are also crucial to constrain the eccentricity of the orbit to understand the kinematic history and stability of the system. A precision of 20\% for the mass determination is further recommended for enabling detailed atmospheric characterization, particularly for terrestrial-mass planets \citep{2019ApJ...885L..25B}.
Unfortunately, searches for planets orbiting young stars have been limited by stellar activity signals comparable in amplitude to that of typical Keplerian signals. Stellar surface inhomogeneities (e.g., cool spots, hot plages) driven by the dynamic stellar magnetic field rotate in and out of view, leading to photometric variations over time. The presence of such active regions breaks the symmetry between the approaching and receding limbs of the star, introducing RV variations over time as well \citep{desort2007etal}. These active regions further affect the integrated convective blue-shift over the stellar disk, and will therefore manifest as an additional net red- or blue-shift \citep{meunier2013, 2014ApJ...796..132D}. Various techniques have been introduced to lift the degeneracy between activity- and planetary-induced signals in RV datasets such as line-by-line analyses \citep{dumusque2018, 2018AJ....156..180W, cretignier2020} and Gaussian process (GP) modeling \citep[e.g.,][]{2014MNRAS.443.2517H, 2015ApJ...808..127G, 2016AJ....152..204L, 2021arXiv210209441T, Robertson_2020, P20, K21}, but such measurements remain challenging due to the sparse cadence of typical RV datasets compared to the activity timescales.
AU Mic is a young \citep[22 Myr;][]{2014MNRAS.445.2169M}, nearby \citep[$\beta$ Pictoris moving group, $\sim$ 10 pc;][]{gaiadr2}, and active pre-main-sequence M1 dwarf \citep[][hereafter referred to as P20]{P20}. AU Mic hosts an edge-on debris disk \citep{2013ApJS..208....9P}, and therefore the probability for planets to transit is greater than for other systems. Using photometric observations from \textit{TESS} \citep{2015JATIS...1a4003R} in Sector 1 (2018-July-25 to 2018-August-22), \citetalias{P20} discovered an $\approx8.46$ day Neptune-size ($R_{b}=4.38_{-0.18}^{+0.18}\ R_{\oplus}$) transiting planet (hereafter referred to as AU Mic b), whose transits were further confirmed with \textit{Spitzer} observations. \citetalias{P20} also reported the detection of a single-transit event in the \textit{TESS} Sector 1 light curve, but could not constrain the period from an isolated event. With high cadence RVs from SPIRou, \cite{K21} (hereafter referred to as K21) measured the mass of AU Mic b and confirmed it to be consistent with a Neptune-mass planet ($M_{b}=17.1_{-4.5}^{+4.7}\ M_{\oplus}$). With more observations of AU Mic from the \textit{TESS} extended mission in Sector 27 (2020-July-04 to 2020-July-30), \cite{M21} (hereafter referred to as M21) determined AU Mic c to be a smaller Neptune-sized planet ($R_{c}=3.51_{-0.16}^{+0.16}\ R_{\oplus}$) with a period of $\approx$ 18.86 days.
In this paper, we present and discuss analyses of several years of multi-wavelength RV observations of AU Mic that further elucidate this planetary system. In section \ref{sec:data}, we summarize the visible and near-infrared (NIR) RV observations, as well as photometric observations which are used to inform our RV model. In section \ref{sec:rv_fitting}, we introduce two joint quasi-periodic Gaussian process kernels which are first steps in taking into account the expected wavelength dependence of stellar activity through simple scaling relations between wavelengths. We then apply our model to RVs of AU Mic and present results in section \ref{sec:results}. In section \ref{sec:discussion}, we assess the sensitivity of our RV model through planet injection and recovery tests. We then briefly discuss the utility of ``RV-color'' between two wavelengths in isolating Keplerian from stellar activity induced signals in Section \ref{sec:disc_utility}. We finally note the assumptions and caveats in this work in section \ref{sec:caveats_future_work}. A summary of this work is provided in section \ref{sec:conclusion}.
\section{Observations} \label{sec:data}
\subsection{RVs} \label{sec:data_rvs}
Our analyses make use of new and archival high-resolution echelle spectra from a variety of facilities, which are summarized in Table \ref{tab:spectrographs}. We briefly detail new spectroscopic observations and the corresponding RVs from observing programs primarily intended to characterize the AU Mic planetary system.
\subsubsection{CARMENES} \label{sec:carmenes}
The CARMENES (Calar Alto high-Resolution search for M dwarfs with Exo-earths with Near-infrared and optical echelle Spectrographs) instrument \citep{2018SPIE10702E..0WQ} is a pair of high-resolution spectrographs installed at the 3.5\,m telescope at the Calar Alto Observatory in Spain. The visual (VIS) and near-infrared (NIR) arms cover wavelength ranges of 520--960\,nm and 960--1710\,nm, with resolving powers of R=94,600 and R=80,400, respectively. AU Mic was observed 100 times with CARMENES during two campaigns, between 14 July and 9 October 2019 and between 19 July and 16 November 2020. The latter observing period was partially contemporaneous with \textit{TESS} observations of AU Mic in Sector 27 (04 July -- 30 July 2020). One or two exposures of 295\,s were obtained per epoch with typical $S/N$ of 70--100 or higher, and at airmasses larger than 2.5 due to the low declination of the target as seen from the Calar Alto observatory. CARMENES data were processed by the {\it caracal} pipeline \citep{2016SPIE.9910E..0EC}, which includes bias, flat-field, and dark correction, tracing of the echelle orders on the detector, optimal extraction of the one-dimensional spectra, and initial wavelength calibration using U-Ar, U-Ne, and Th-Ne lamps. The RVs were obtained with the \texttt{serval} pipeline \citep{serval} by cross-correlating each observed spectrum with a reference template constructed from all observed spectra of the same star. In addition, the \texttt{serval} pipeline computes corrections for barycentric motion, secular acceleration, instrumental drift (using simultaneous observations of Fabry-P\'erot etalons), and nightly zero-points (using RV standards observed during the night) \citep{trifinov2018}.
\subsubsection{CHIRON} \label{sec:chiron}
We obtained 14 nightly observations of AU Mic with the CHIRON spectrometer \citep{2018SPIE10702E..11K} on the SMARTS 1.5\,m telescope at the Cerro Tololo Inter-American Observatory (CTIO) between UT dates 2019-09-14 and 2019-11-10. Observations were recorded in narrow slit mode (R$\sim$136,000) using the iodine cell to simultaneously calibrate the wavelength scale and instrument profile. As with iSHELL observations \citep[see][]{caleetal2019}, exposure times ($t_{exp}$) were limited to 5 minutes due to the uncertainties of barycenter corrections scaling as $t_{exp}^{2}$ \citep{2019MNRAS.489.2395T}, and the variability of telluric absorption over a single exposure. We initially recorded 22 exposures per night, and later increased this to 42 as the cumulative $S/N$ within a night was insufficient ($\sim100$).\footnote{Unlike iSHELL (and like many modern echelle spectrographs), CHIRON makes use of an exposure meter in order to calculate the proper (flux-weighted) exposure midpoint, so longer exposure times are less impacted by the uncertainty in computing the exposure midpoint. Further, tellurics at visible wavelengths are far more sparse than at the K-band wavelengths used by iSHELL. We therefore recommend significantly longer exposure times ($\geq$30 minutes) for future observations of AU Mic (or targets of similar brightness) with CHIRON in narrow slit mode.} Raw CHIRON observations are reduced via the \texttt{REDUCE} package \citep{Piskunov2002}, and the corresponding RVs are computed using \texttt{pychell}. Unfortunately, a significant fraction of the extracted one-dimensional spectra are too noisy to robustly measure precise RVs (peak $S/N\approx 20-30$ per spectral pixel). We therefore flag clear outliers in the RV measurements and re-compute the nightly (binned) RVs, resulting in 12 epochs to be included in our analyses.
\subsubsection{HIRES} \label{sec:hires}
We include 60 Keck-HIRES \citep{1994SPIE.2198..362V} observations of AU Mic in our analyses. The majority of these observations took place in the second half of 2020, with several nights yielding contemporaneous observations with other facilities. Exposure times range from 204--500 seconds, yielding a median $S/N\approx234$ at 550 nm per spectral pixel. HIRES spectra are processed and RVs computed via methods described in \cite{2010ApJ...721.1467H}.
\subsubsection{{\textsc{Minerva}}-Australis} \label{sec:minerva_aus}
Spectroscopic observations of AU Mic were carried out using the {\textsc{Minerva}}-Australis facility situated at the Mount Kent Observatory in Queensland, Australia \citep{2018arXiv180609282W, 2019PASP..131k5003A, 2021MNRAS.502.3704A} between 2019 July 18 and 2019 November 5. {\sc {\textsc{Minerva}}}-Australis consists of an array of four independently operated 0.7\,m CDK700 telescopes, three of which were used in observing AU Mic. Each telescope simultaneously feeds stellar light via fiber optic cables to a single KiwiSpec R4-100 high-resolution ($R\sim80,000$) spectrograph \citep{2012SPIE.8446E..88B} with wavelength coverage from 480 to 620\,nm. In total, we obtained 31 observations with telescope 3 (M-A Tel3), 35 observations with telescope 4 (M-A Tel4), and 33 observations with telescope 6 (M-A Tel6). Exposure times for these observations were set to 1800\,s, providing a signal-to-noise ratio between 15 and 35 per spectral pixel. RVs are derived for each telescope by using the least-squares shift and fit technique \citep{2012ApJS..200...15A}, where the template being matched is the mean spectrum of each telescope. Spectrograph drifts are corrected for using simultaneous thorium-argon (ThAr) arc lamp observations.
\subsubsection{TRES}
We include 85 observations (archival and new) of AU Mic observed with the Tillinghast Reflector Echelle Spectrograph \citep[TRES;][]{tres_paper,furesz:2008} in our analyses. The majority of these observations took place in the second half of 2019 with several nights yielding contemporaneous observations with other facilities. Typical exposure times range from 600--1200 seconds, with a median $S/N\approx60$\ per resolution element. Spectra are processed using methods outlined in \cite{buchhave:2010} and \cite{quinn:2014}, with the exception of the cross-correlation template, for which we use the high-$S/N$ median observed spectrum.
\subsubsection{IRD} \label{sec:ird}
We obtained near-infrared, high-resolution spectra of AU Mic using the InfraRed Doppler (IRD) instrument \citep[e.g.,][]{2018SPIE10702E..11K} on the Subaru 8.2\,m telescope. The observations were carried out between June and October 2019, and we obtained a total of 430 frames with integration times of 30--60 seconds. Half of these frames were taken on the transit night (UT 2019 June 17) with the goal of measuring the stellar obliquity for AU Mic b; those RVs were already presented in \citet{2020ApJ...899L..13H}. The raw data are reduced in a standard manner using our custom code as well as \texttt{IRAF} \citep{1993ASPC...52..173T}, and the extracted one-dimensional spectra are analyzed by the RV-analysis pipeline for IRD as described in \citet{2020PASJ...72...93H}. The typical precision of the derived RVs is 9--13\,m\,s$^{-1}$.
\subsubsection{iSHELL} \label{sec:ishell}
We obtained 46 out-of-transit observations of AU Mic with iSHELL on the NASA Infrared Telescope Facility \citep{2016SPIE.9908E..84R} from October 2016 to October 2020. The exposure times varied from 20--300 seconds, and the exposures were repeated 2--23 times within a night to reach a cumulative $S/N$ per spectral pixel $>$ 200 (the approximate center of the blaze for the middle order, 2.35 $\mu \mathrm{m}$) for most nights. Raw iSHELL spectra are processed in \texttt{pychell} using methods outlined in \cite{caleetal2019}.
The corresponding iSHELL RVs are computed in \texttt{pychell} using updated versions of the methods described in \cite{caleetal2019}. Instead of starting from an unknown (flat) stellar template, we start with a BT-Settl \citep{2012RSPTA.370.2765A} stellar template with $T_{eff}=3700\ \mathrm{K}$ and solar values for $\log g$ and Fe/H. We further Doppler-broaden the template using the \texttt{rotBroad} routine from \texttt{PyAstronomy} \citep{pya} with $v \sin i = 8.8\ \mathrm{km\,s^{-1}}$. Qualitatively, this broadened template matches the iSHELL observations well. We also ``iterate'' the template by co-adding residuals in a quasi-inertial reference frame with respect to the star according to the barycenter velocities ($v_{BC}$); however, the stellar RVs for subsequent iterations tend to be highly correlated with $v_{BC}$ and exhibit significantly larger scatter than those from the first iteration. We therefore use RVs from the first iteration only and leave the cause of this correlation as a subject for future work.
\subsection{Photometry from \textit{TESS}} \label{sec:data_photometry}
The NASA \textit{TESS} mission \citep{2015JATIS...1a4003R} observed AU Mic in Sectors 1 (2018-July-25 to 2018-August-22) and 27 (2020-July-04 to 2020-July-30). We download the light curves from the Mikulski Archive for Space Telescopes \citep[MAST;][]{2018SPIE10704E..15S}. We use the Science Processing Operations Center \citep[SPOC;][]{jenkinsSPOC2016} ``Presearch Data Conditioning'' light curves built on ``Simple Aperture Photometry'' \citep[PDCSAP;][]{2012PASP..124..985S, 2014PASP..126..100S, 2012PASP..124.1000S} to inform our model in section \ref{sec:kernel_parameter_estimation}.
\begin{table*}
\caption{A summary of the RV datasets used in this work. The nightly-binned measurements are provided in appendix \ref{app:rvs_all}. $\mathrm{N_{nights}}$ and $\mathrm{N_{used}}$ refer to the number of nightly-binned epochs and the number of those used in our primary analyses, respectively. The median intrinsic error bars $\sigma_{\mathrm{RV}}$ consider all observations.}
\begin{center}
\begin{tabular}{ | p{2.8cm} | p{1cm} | p{.8cm} | p{.7cm} | p{1.5cm} | p{1.2cm} | p{2.8cm} | p{2.8cm} | }
\hline
Spectrograph/ \newline Facility & $\lambda/\Delta \lambda$ \newline [$\times 10^{3}$] & $\mathrm{N_{nights}}$ & $\mathrm{N_{used}}$ & Median \newline $\sigma_{\scaleto{RV}{2pt}}$ [m\,s$^{-1}$] & Adopted \newline $\lambda$ [nm] & Pipeline \newline & Comm. Paper \\
\hline
HIRES/Keck & 85 & 60 & 41 & 2.6 & 565 & -- & \cite{1994SPIE.2198..362V} \\
Tillinghast/TRES & 44 & 85 & 55 & 24.2 & 650 & -- & \cite{tres_paper} \\
CARMENES-VIS/ \newline Calar Alto 3.5\,m & 94.6 & 63 & 60 & 11.4 & 750 & \texttt{caracal} \citep{2016SPIE.9910E..0EC} \newline \texttt{serval} \citep{serval} & \cite{carmcomm} \\
CARMENES-NIR/ \newline Calar Alto 3.5m & 80.4 & 62 & 49 & 32.6 & 1350 & -- & -- \\
SPIRou/CFHT & 75 & 27 & 27 & 5.0 & 1650 & \citetalias{K21} & \cite{2018haex.bookE.107D} \\
iSHELL/IRTF & 85 & 46 & 31 & 5.0 & 2350 & \texttt{pychell} \newline \cite{caleetal2019} & \cite{2016SPIE.9908E..84R} \\
HARPS-S/ \newline La Silla 3.6m & 115 & 34 & 0 & 2.2 & 565 & ESO DRS \citep{2010Msngr.142...42C} \newline \texttt{HARPS-TERRA} \citep{2012ApJS..200...15A} & \cite{2003Msngr.114...20M} \\
{\sc {\textsc{Minerva}}}- \newline Australis-T3 & 80 & 13 & 0 & 9.5 & 565 & \citep{2012ApJS..200...15A} & \cite{2018arXiv180609282W} \newline \cite{2019PASP..131k5003A} \newline \cite{2021MNRAS.502.3704A} \\
{\sc {\textsc{Minerva}}}-\newline Australis-T4 & 80 & 13 & 0 & 9.5 & 565 & -- & -- \\
{\sc {\textsc{Minerva}}}-\newline Australis-T6 & 80 & 13 & 0 & 9.5 & 565 & -- & -- \\
CHIRON/CTIO & 136 & 12 & 0 & 46 & 565 & \cite{reduce2002} \newline \cite{caleetal2019} & \cite{chiron2013} \\
IRD/Subaru & 70 & 6 & 0 & 3.0 & 1350 & \texttt{IRAF}; \citep{1993ASPC...52..173T} \newline \cite{2020PASJ...72...93H} & \cite{2018SPIE10702E..11K} \\
NIRSPEC/Keck & 25 & 14 & 0 & 50 & 2350 & \cite{2012ApJ...749...16B} & \cite{1998SPIE.3354..566M} \\
CSHELL/IRTF & 36 & 21 & 0 & 26 & 2350 & \cite{gagne2016}, \cite{Gao2016} & \cite{1993SPIE.1946..313G} \\
\hline
\end{tabular}
\end{center}
\label{tab:spectrographs}
\end{table*}
\section{Radial Velocity Fitting} \label{sec:rv_fitting}
\subsection{Bayesian Inference for Radial-Velocities} \label{sec:bayesian_inference}
We primarily seek to utilize a global (joint) Gaussian process model with multiple realizations that give rise to the data we observe with all of the above instruments simultaneously. To implement our desired framework, we have developed two \textit{Python} packages. We leave the description of \texttt{optimize}, a high-level Bayesian inference framework, to appendix \ref{app:optimize}.
To provide RV-specific routines, we extend the \texttt{optimize} package within the \texttt{orbits} sub-module of the \texttt{pychell} \citep{caleetal2019} package\footnote{Documentation: https://pychell.readthedocs.io/en/latest/}. We define classes specific to RV data, models, and likelihoods, with much of the ``boiler-plate'' code handled through \texttt{optimize}. A top-level ``RVProblem'' further defines a pool of RV-specific methods for pre- and post-optimization routines, such as plotting phased RVs, periodogram tools, model comparison tests, and propagation of MCMC chains for deterministic Keplerian parameters (e.g., planet masses, semi-major axes, and densities).
\subsection{Two Chromatic Gaussian Processes} \label{sec:gp_kernel}
A Gaussian process kernel defines a square matrix, $\mathbf{K}$ (also called the covariance matrix), in which each entry describes the covariance between two measurements\footnote{See \cite{haywoodthesis} for a thorough discussion of Gaussian processes.}. We introduce two GP kernels as extensions of the quasi-periodic (QP) kernel, which has been demonstrated in numerous cases to model rotationally modulated stellar activity in both photometric and RV observations (see Section \ref{sec:intro})\footnote{Other parameterizations are also common.}:
\begin{gather}
\mathbf{K_{QP}}(t_{i},t_{j}) = \eta_{\sigma}^{2} \exp \bigg[-\frac{\Delta t^{2}}{2 \eta_{\tau}^{2}} - \frac{1}{2 \eta_{\ell}^{2}} \sin^{2} \bigg( \pi \frac{\Delta t}{\eta_{p}} \bigg) \bigg] \label{eq:gp_qp} \\ \nonumber \\
\mathrm{where}\ \ \Delta t = |t_{i} - t_{j}| \nonumber
\end{gather}
\noindent Here, $\eta_{P}$ typically represents the stellar-rotation period, $\eta_{\tau}$ the mean spot lifetime, and $\eta_{\ell}$ is the relative contribution of the periodic term, which may be interpreted as a smoothing parameter (larger is smoother). $\eta_{\sigma}$ is the amplitude of the auto-correlation of the activity signal.
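For concreteness, eq. \ref{eq:gp_qp} can be evaluated as follows (a minimal Python sketch of ours, not the \texttt{pychell} implementation):
\begin{verbatim}
import numpy as np

def qp_kernel(t1, t2, eta_sigma, eta_tau, eta_ell, eta_p):
    # Quasi-periodic covariance (eq. K_QP) between two sets of times
    dt = np.abs(t1[:, None] - t2[None, :])
    decay = dt**2 / (2.0 * eta_tau**2)
    periodic = np.sin(np.pi * dt / eta_p)**2 / (2.0 * eta_ell**2)
    return eta_sigma**2 * np.exp(-(decay + periodic))
\end{verbatim}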
We seek to use a fully-inclusive QP-like kernel that accounts for the wavelength-dependence of the stellar activity present in our multi-wavelength dataset. In this work, we only modify the amplitude parameter, $\eta_{\sigma}$; we leave further chromatic modifications (namely convective blue-shift and limb-darkening, see Section \ref{sec:intro}), as subjects for future work. To first order, we expect the amplitude from activity to be linearly proportional to frequency (or inversely proportional to wavelength). This approximation is a direct result of the spot-contrast scaling with the photon frequency (or inversely with wavelength) from the ratio of two black-body functions with different effective temperatures \citep{2010ApJ...710..432R}.
We first re-parametrize the amplitude through a linear kernel as follows:
\begin{gather}
\mathbf{K_{J1}}(t_{i}, t_{j}) = \eta_{\sigma,\mathrm{s}(i)} \eta_{\sigma,\mathrm{s}(j)} \times \exp[...] \label{eq:gp_j1}
\end{gather}
\noindent Here, $\eta_{\sigma,s(i)}$ and $\eta_{\sigma,s(j)}$ are the effective amplitudes for the spectrographs used at times $t_{i}$ and $t_{j}$, respectively, where $s(i)$ maps the observation at time $t_{i}$ to its spectrograph $s$.\footnote{Truly simultaneous measurements (i.e., $t_{i} = t_{j}$) would necessitate a more sophisticated indexing set.} Each amplitude is a free parameter.
We also consider a variation of this kernel which further enforces the expected inverse relationship between the amplitude with wavelength. We rewrite the kernel to become:
\begin{gather}
\mathbf{K_{J2}}(t_{i}, t_{j}, \lambda_{i}, \lambda_{j}) = \eta_{\sigma,0}^{2} \Bigg( \frac{\lambda_{0}}{\sqrt{\lambda_{i} \lambda_{j}}} \Bigg) ^{2 \eta_{\lambda}}
\times \exp[...] \label{eq:gp_j2}
\end{gather}
\noindent Here, $\eta_{\sigma,0}$ is the effective amplitude at $\lambda = \lambda_{0}$, and $\eta_{\lambda}$ is an additional power-law scaling parameter with wavelength to allow for a more flexible non-linear (with frequency) relation. $\lambda_{i}$ and $\lambda_{j}$ are the ``effective'' wavelengths for observations at times $t_{i}$ and $t_{j}$, respectively. For both eqs. \ref{eq:gp_j1} and \ref{eq:gp_j2}, the expression within square brackets is identical to that in eq. \ref{eq:gp_qp}.
To make predictions from $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}), we follow \cite{2006gpml.book.....R} (eqs. 2.23 and 2.24). We construct the matrix $\mathbf{K_{J2}}(t_{i,*}, t_{j}, \lambda_{*}, \lambda_{j})$, which denotes the $n_{*} \times n$ matrix of the covariances evaluated at all pairs of test points and training points (the data). Wavelengths in the * dimension are identical, and therefore each realization corresponds to a unique wavelength. This formulation allows us to realize the GP with high accuracy for all wavelengths so long as at least one wavelength is sampled near $t_{i,*}$. Predictions with kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) are found in a similar fashion, where each realization corresponds to a particular spectrograph.
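The chromatic kernel and its conditional mean can be sketched by extending \texttt{qp\_kernel} from the snippet above; this is again an illustration of ours (showing only the predictive mean, eq. 2.23 of \cite{2006gpml.book.....R}), not the \texttt{pychell} implementation.
\begin{verbatim}
def j2_kernel(t1, t2, lam1, lam2, eta0, lam0, eta_lam,
              eta_tau, eta_ell, eta_p):
    # Kernel K_J2: QP kernel with a wavelength-dependent amplitude
    amp = eta0**2 * (lam0 / np.sqrt(lam1[:, None]
                                    * lam2[None, :]))**(2 * eta_lam)
    return amp * qp_kernel(t1, t2, 1.0, eta_tau, eta_ell, eta_p)

def gp_mean(t_star, lam_star, t, lam, r, err, pars):
    # Conditional GP mean at test times t_star, all at one wavelength
    # lam_star; (t, lam, r, err) are the training times, wavelengths,
    # residuals, and data errors
    K_o = j2_kernel(t, t, lam, lam, *pars) + np.diag(err**2)
    K_s = j2_kernel(t_star, t, np.full_like(t_star, lam_star),
                    lam, *pars)
    return K_s @ np.linalg.solve(K_o, r)
\end{verbatim}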
\subsection{Primary RV Analyses} \label{sec:primary_analyses}
We first bin out-of-transit RV observations from each night (per spectrograph). While not negligible, we expect changes from rotationally modulated activity to be small within a night, so binning mitigates activity on shorter timescales that our model is not intended to capture (e.g., p-mode oscillations, granulation). The median RV for each spectrograph is also subtracted. We choose to ignore regions that are poorly sampled with respect to our adopted mean spot lifetime $\eta_{\tau}$ (100 days, see Section \ref{sec:kernel_parameter_estimation}); each instance of a covariance matrix represents a \textit{family} of functions, and therefore the GP regression may be too flexible (and thus poorly constrained) in regions of low-cadence observations. We also ignore regions with only low-precision measurements (median errors $\gtrsim$ 10 m\,s$^{-1}$). This limits our analyses to all observations between September 2019 -- December 2020 from the spectrographs HIRES, TRES, CARMENES-VIS and NIR, SPIRou, and iSHELL. We do not include the six binned IRD or thirteen binned {\sc {\textsc{Minerva}}}-Australis observations in our primary analyses, as we expect their offsets to be poorly constrained in the presence of stellar activity. Finally, we discard 3 CARMENES-VIS and 13 CARMENES-NIR measurements, primarily near the beginning of each season, due to residuals $>$ 100 m\,s$^{-1}$ that are inconsistent with our other datasets. We suspect that telluric contamination, further exacerbated by the high airmass of the observations, may have degraded these CARMENES observations. For completeness, we present fit results including all spectrographs in appendix \ref{app:full_fit}. A summary of measurements is provided in Table \ref{tab:spectrographs}.
Our RV model consists of two Keplerian components for the known transiting planets, a GP model for the stellar activity, and per-instrument zero points. The zero points are each initialized to 1 m\,s$^{-1}$ with a uniform prior of $\pm$ 300 m\,s$^{-1}$. We further adopt a normal prior of $\mathcal{N}(0, 100)$ to keep each offset well-behaved. When using multiple priors, the composite prior probability for such a parameter will not integrate to unity. For the combination of a uniform $+$ normal prior, this is not a concern; the normal prior is properly normalized and takes on a continuous range of values, whereas the uniform prior will either result in a constant term added to the likelihood function if the parameter is in bounds, or $-\infty$ if out of bounds.
Analyses of the \textit{TESS} transits in \citetalias{M21} found $P_{b}=8.4629991 \pm 0.0000024$ days, $TC_{b}=2458330.39046 \pm 0.00016$, $P_{c}=18.858991 \pm 0.00001$ days, and $TC_{c}=2458342.2243 \pm 0.0003$. For all of our analyses, we fix $P$ and $TC$ for planets b and c; the uncertainties in these measurements are insignificant even over our full baseline of $\approx$ 17 years. The semi-amplitudes of each planet start at $K_{b}=8.5$ m\,s$^{-1}$ and $K_{c}=5$ m\,s$^{-1}$, and are only enforced to be positive. Preliminary analyses of a secondary eclipse observed in \textit{Spitzer} observations support a moderately eccentric orbit for AU Mic b, with $e_{b}=0.189 \pm 0.04$ (Collins et al., in prep.), which is somewhat larger than the eccentricity determined from the duration of the primary transits observed with \textit{TESS} ($e_{b}=0.12 \pm 0.04$, Gilbert et al., submitted to \textit{Astrophysical Journal}). We assume a circular orbit for AU Mic c, and further examine eccentric cases in section \ref{sec:disc_eccentricity}. The Keplerian component of our RV model in \texttt{pychell} is nearly identical to that used in \texttt{RadVel} \citep{radvel}. The solver for Kepler's equation is written in \textit{Python} and makes use of the \texttt{@njit} decorator from \texttt{numba} \citep{numba} for optimal performance. We exclusively use the orbit basis $\{P,$ $TC,$ $e,$ $\omega,$ $K\}$.
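For reference, a minimal version of the Keplerian component in the $\{P,$ $TC,$ $e,$ $\omega,$ $K\}$ basis might look as follows; this is a sketch of ours (a standard Newton solver for Kepler's equation plus the usual true-anomaly relations), not the \texttt{pychell} code itself.
\begin{verbatim}
import numpy as np
from numba import njit

@njit
def solve_kepler(M, ecc, tol=1e-10, max_iter=100):
    # Newton iteration for E - ecc*sin(E) = M (eccentric anomaly E)
    E = M.copy()
    for _ in range(max_iter):
        dE = (E - ecc * np.sin(E) - M) / (1.0 - ecc * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def keplerian_rv(t, per, tc, ecc, w, k):
    # Mean anomaly at transit center (true anomaly there is pi/2 - w)
    nu_tc = np.pi / 2.0 - w
    E_tc = 2.0 * np.arctan(np.tan(nu_tc / 2.0)
                           * np.sqrt((1.0 - ecc) / (1.0 + ecc)))
    M_tc = E_tc - ecc * np.sin(E_tc)
    M = np.mod(2.0 * np.pi * (t - tc) / per + M_tc, 2.0 * np.pi)
    E = solve_kepler(M, ecc)
    nu = 2.0 * np.arctan(np.tan(E / 2.0)
                         * np.sqrt((1.0 + ecc) / (1.0 - ecc)))
    return k * (np.cos(nu + w) + ecc * np.cos(w))
\end{verbatim}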
Our optimizer seeks to maximize the natural-logarithm of the a posteriori probability (MAP) under the assumption of normally distributed errors:
\begin{equation} \label{eq:map}
\ln \mathcal{L} = -\frac{1}{2} \bigg[\vec{r}^{\mathsf{T}} \mathbf{K_{o}}^{-1} \vec{r} + \ln |\mathbf{K_{o}}| + N \ln(2 \pi) \bigg] + \sum_{i} \ln \mathcal{P}_{i}
\end{equation}
\noindent Here, $\vec{r}$ is the vector of residuals between the observations and model. $\mathbf{K_{o}}$ is the covariance matrix sampled at the same observations, $N$ is the number of data points, and $\{\mathcal{P}_{i}\}$ is the set of prior knowledge. We maximize eq. \ref{eq:map} using the iterative Nelder-Mead algorithm described in \cite{caleetal2019}, which is included as part of the \texttt{optimize} package. We also sample the posterior distributions using the \texttt{emcee} package \citep{emcee} for a subset of models to determine parameter uncertainties, always starting from the MAP-derived parameters. In all cases, we use twice the number of chains as varied parameters. We perform a burn-in phase of 1000 steps followed by a full MCMC analysis for $\approx$ 50$\times$ the median auto-correlation time (steps) of all chains.
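In practice, eq. \ref{eq:map} is best evaluated through a Cholesky factorization of $\mathbf{K_{o}}$ rather than an explicit inverse and determinant. A minimal sketch of ours, omitting the prior terms $\sum_{i} \ln \mathcal{P}_{i}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ln_likelihood(r, K_o):
    # Gaussian log-likelihood (eq. map, without the prior sum),
    # evaluated stably via the Cholesky factor of K_o
    cho = cho_factor(K_o, lower=True)
    ln_det = 2.0 * np.sum(np.log(np.diag(cho[0])))
    n = len(r)
    return -0.5 * (r @ cho_solve(cho, r) + ln_det
                   + n * np.log(2.0 * np.pi))
\end{verbatim}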
\subsubsection{Estimation of Kernel Parameters} \label{sec:kernel_parameter_estimation}
We briefly analyze both Sectors of \textit{TESS} photometry in order to estimate the GP kernel parameters $\eta_{\tau}, \eta_{\ell}$, and $\eta_{P}$. We note that the rotationally modulated structure in both Sectors is consistent (fig. \ref{fig:light_curves}). If we assume spots are spatially static in the rest-frame of the stellar-surface (i.e., spots do not migrate), this suggests a similar spot configuration and contrast for each Sector. We first determine $\eta_{P}$ by qualitatively analyzing both \textit{TESS} Sectors phased up to periods close to that used in \citetalias{M21} ($4.862\pm0.032$ days) with a step size of 0.001 days (see fig. \ref{fig:light_curves}). We find $\eta_{P} \approx 4.836$ \textit{or} $\eta_{P} \approx 4.869$ days from our range of periods tested; no periods between these two values are consistent with our assumption of an identical spot configuration. The difference in these two periods further corresponds to one additional period between the two sectors (i.e., $|1/\eta_{P,1} - 1/\eta_{P,2}| \approx 1/700$ days$^{-1}$). The smaller of these two values implies AU Mic b is in a 7:4 resonance with the stellar rotation period \citep{2021arXiv210802149S}, potentially indicating tidal interactions between the planet and star. We adopt $\eta_{P} \sim \mathcal{N}(4.836, 0.001)$ in all our analyses where the uncertainty is a conservative estimate determined by our step size.
Although the \textit{TESS} light curve itself can provide insight into $\eta_{\tau}$ and $\eta_{\ell}$, we instead try to estimate these values directly from the predicted spot-induced RV variability via the $FF'$ technique \citep{2012MNRAS.419.3147A}:
\begin{equation} \label{eq:ffp}
\Delta RV_{\mathrm{spots}}(t) = - F(t) F'(t) R_{\star} / f
\end{equation}
\noindent Here, $F$ is the photometric flux and $f$ represents the relative flux drop for a spot at the center of the stellar disk. To compute $F$ and $F'$ (the derivative of $F$ with respect to time), we first fit the \textit{TESS} light curve via cubic spline regression \citep[\texttt{scipy.interpolate.LSQUnivariateSpline};][]{scipy} for each Sector individually, with knots spaced every 0.5 days ($\approx$ 10\% of one rotation period) to average over transits and the majority of flare events (fig. \ref{fig:light_curves}). The nominal cubic splines are then used to directly compute both $F$ and $F'$ on a down-sampled grid of 100 evenly-spaced points for each Sector. We then divide the resulting joint-Sector curve by its standard deviation for normalization; we make no attempt to directly fit for the chromatic parameter $f$. We further assume $f$ to be constant in time (i.e., spots are well-dispersed on the stellar surface). We then perform both MAP and MCMC analyses for this curve using a standard QP kernel (eq. \ref{eq:gp_qp}) with loose uniform priors of $\eta_{\tau} \sim \mathcal{U}(10, 2000)$ (days) and $\eta_{\ell} \sim \mathcal{U}(0.05, 0.6)$. We set the intrinsic error bars of the curve to zero but include an additional ``jitter'' (white noise) term in the model with a Jeffrey's prior \citep{1946RSPSA.186..453J} with the knee at zero, which discourages large jitter values unless they significantly improve the fit quality. The amplitude of the model is drawn from a wide uniform distribution of $\mathcal{U}(0.3, 3.0)$. The posterior distributions are provided in fig. \ref{fig:ffprime_corner}. A minimal sketch of this procedure is given below.
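The following sketch (ours) evaluates eq. \ref{eq:ffp} for a single Sector; the knot spacing matches the 0.5 day value above, while the variable names and output grid are illustrative.
\begin{verbatim}
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def ffprime_curve(t, flux, r_star_over_f,
                  knot_spacing=0.5, n_out=100):
    # t must be sorted; knots every 0.5 days average over transits
    # and most flares
    knots = np.arange(t.min() + knot_spacing,
                      t.max() - knot_spacing, knot_spacing)
    spl = LSQUnivariateSpline(t, flux, knots, k=3)
    t_out = np.linspace(t.min(), t.max(), n_out)
    F = spl(t_out)
    Fp = spl.derivative(1)(t_out)
    # Predicted spot-induced RV (eq. ffp), before the normalization
    # applied afterwards in the text
    return t_out, -F * Fp * r_star_over_f
\end{verbatim}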
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{aumic_light_curve_s1.png}
\includegraphics[width=0.68\textwidth]{aumic_light_curve_s27.png}
\includegraphics[width=0.3\textwidth]{aumic_light_curve_phased.png}
\caption{The \textit{TESS} PDCSAP light curves of AU Mic from Sectors 1 (top) and 27 (bottom). The lower right plot shows both Sectors phased to 4.836 days. Although the two Sectors exhibit nearly identical periodic signals, Sector 27 shows moderate evolution. The least-squares cubic spline fit for each Sector is shown in pink.}
\label{fig:light_curves}
\end{figure*}
A fit to the $FF'$ curve suggests the mean activity timescale is $\eta_{\tau}\approx92_{-23}^{+29}$ days. Although our interpretation implies $\eta_{\tau}$ should be comparable to the gap between the two Sectors ($\sim$ 700 days), we do not have photometric measurements between the two Sectors, and therefore cannot speak to evolution on intermediate timescales, which will be important for our 2019 observations. We further note that the \textit{TESS} Sector 27 light curve exhibits moderate evolution whereas Sector 1 appears more stable (fig. \ref{fig:light_curves}). The posterior distributions are also consistent with a relatively smooth GP, with the periodic length scale $\eta_{\ell}\approx 0.45 \pm 0.06$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{ffprime_corner.png}
\caption{Posterior distributions from fits to the predicted RV variability from the $FF'$ technique (eq. \ref{eq:ffp}).}
\label{fig:ffprime_corner}
\end{figure}
Before making use of our joint kernels, we first assess the performance of the standard QP kernel (eq. \ref{eq:gp_qp}) for each instrument individually. Here, each spectrograph makes use of a unique QP kernel and amplitude term, but the remaining three GP parameters are shared across all kernels. Each amplitude is drawn from a normal prior with mean equal to the standard deviation of the dataset and a conservative width of 30 m\,s$^{-1}$. The expected semi-amplitudes for AU Mic b and c ($\lesssim 10$ m\,s$^{-1}$) will negligibly affect this estimation. We also apply a Jeffrey's prior with the knee at zero to help keep the amplitude well-behaved\footnote{Although the composite prior for the GP amplitudes is not proper (i.e., does not integrate to unity, see Section \ref{sec:primary_analyses}), we find nearly identical results with only the normal prior.}. For $\eta_{\tau}$ and $\eta_{\ell}$, we first make use of the same priors used to model the $FF'$ curve. We further include a fixed jitter term of 3 m\,s$^{-1}$ added in quadrature along the diagonal of the covariance matrix $\mathbf{K_{o}}$ for the HIRES observations only; HIRES observations provide the smallest intrinsic uncertainties but are most impacted by activity (largest in amplitude), so we choose to moderately down-weight them. Given the flexibility of GP regression at nightly cadence, we choose not to fit for (nor include) jitter terms for the other spectrographs, and further discuss this decision in Section \ref{sec:caveats_future_work}. This is the most flexible model we employ for the RVs, and we therefore use these results to flag the aforementioned CARMENES-VIS and CARMENES-NIR measurements.
We find normally distributed posteriors for $\eta_{\tau}$ and $\eta_{\ell}$ (fig. \ref{fig:corner_2planets_disjoint_float}), but the reduced $\chi^{2}$ statistic of $0.32$ indicates the model overfits the data. The per-spectrograph amplitudes are reasonably consistent with their respective priors, so we assert this is a result of $\eta_{\tau}$ ($\approx43$ days) and/or $\eta_{\ell}$ ($\approx0.23$) taking on values that are too small, indicating our RV model is insufficient to constrain these parameters from the RV observations alone, either due to insufficient cadence and/or an inadequate model. We therefore fix $\eta_{\tau}=100$ days to let each season have a mostly distinct activity model while minimizing the flexibility within each season, consistent with what the $FF'$ curve suggests. As a compromise between the $FF'$ and RV analyses, we also fix $\eta_{\ell}=0.28$. Our adopted value of $\eta_{\tau}$ is larger than that used in \citetalias{K21} ($\approx$70 days\footnote{In \citetalias{K21}, the hyperparameters $\eta_{\tau}$ and $\eta_{\ell}$ absorb the factors of two present in the formulation used in this work (eq. \ref{eq:gp_qp}).}), while $\eta_{\ell}$ is nearly identical. We further explore these decisions and their impact on our derived semi-amplitudes in section \ref{sec:disc_kernel_parameters}. With fixed values for $\eta_{\tau}$ and $\eta_{\ell}$, we re-run MAP and MCMC fits with disjoint kernels, yielding a reduced $\chi^{2}$ of 0.86, indicating the model is now only slightly overfit.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{rvs_full_disjoint.png}
\caption{RVs of AU Mic zoomed in on a window with high-cadence, multi-wavelength observations from 2019. Here, we use a disjoint QP GP kernel (eq. \ref{eq:gp_qp}) to model the stellar activity. Each plotted dataset is only corrected according to the best-fit zero points. Data errors are computed by adding the intrinsic errors in quadrature with any uncorrelated noise terms (i.e., 3 m\,s$^{-1}$ for HIRES, see Table \ref{tab:pars_priors}). Although each GP makes use of the same parameters, each still exhibits unique features. This indicates either an activity model that is insufficient at our cadence, or yet-to-be-characterized chromatic effects of activity that are not consistent with a simple scaling relation across wavelength regimes.}
\label{fig:rvs_zoom_disjoint}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{rvs_full_2019.png}
\caption{The 2019 RVs using kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity. Although there is only one HIRES observation in early 2019, we are still able to make predictions for the HIRES GP for the entire baseline by using joint kernels.}
\label{fig:rvs_2019}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{rvs_full_2020.png}
\caption{Same as fig. \ref{fig:rvs_2019}, but for our 2020 observations which overlap with the \textit{TESS} Sector 27 photometry. In red we show the generated $FF'$ curve for spot-induced activity signals (eq. \ref{eq:ffp}, arbitrarily scaled) generated from the \textit{TESS} light-curve (section \ref{sec:kernel_parameter_estimation}).}
\label{fig:rvs_2020}
\end{figure*}
\subsubsection{Joint Kernel RV Fitting}
We use results from the disjoint case to inform our primary joint-kernel models. Although the different GPs appear similar (fig. \ref{fig:rvs_zoom_disjoint}), each still exhibits unique features, suggesting a simple scaling is not valid, and/or insufficient sampling for each kernel individually. Regardless, our two joint kernels will enforce a perfect scaling between any two spectrographs.
We run MAP and MCMC fits using the joint kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}), again making use of the same normal and Jeffrey's priors for each amplitude. We then fit the resulting set of best-fit amplitudes with our proposed power law relation (see eq. \ref{eq:gp_j2}), $\eta_{\sigma}(\lambda) = \eta_{\sigma,0} (\lambda_{0} / \lambda)^{\eta_{\lambda}}$, using \texttt{scipy.optimize.curve\_fit} \citep{jones2001scipy} (fig. \ref{fig:gp_amps_wavelength}). We arbitrarily anchor $\lambda_{0}$ at $\lambda=565$ nm. The effective mean wavelength of each spectrograph should consider the RV information content (stellar and calibration) and ignore regions with dense telluric features. For gas-cell calibrated spectrographs (HIRES, CHIRON, and iSHELL), we limit the range to regions with gas cell features. For all other spectrographs, we take the effective RV information content to be uniform over the full spectral range as a ``zeroth-order'' approximation \citep{2020ApJS..247...11R}. We further do not consider regions of tellurics which may have been masked (e.g., CARMENES RVs generated with \texttt{serval}). Although these estimations are imperfect, they are only relevant to kernel $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}). The adopted wavelengths for each spectrograph are listed in Table \ref{tab:spectrographs}.
We find $\eta_{\sigma,0} \approx 221$ m\,s$^{-1}$ and $\eta_{\lambda} \approx 1.17$. This amplitude is significantly larger than the intrinsic scatter of our observations (namely HIRES) suggests, so we adopt a tight normal prior of $\mathcal{N}(221, 10)$ to prevent it from growing further. We apply only a loose uniform prior of $\eta_{\lambda}\sim\mathcal{U}(0.2, 2)$. We then run corresponding MAP and MCMC fits with kernel $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}). A summary of all parameters is provided in Table \ref{tab:pars_priors}. We present and discuss fit results from both joint kernels in Section \ref{sec:results}.
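The power-law fit amounts to a short call to \texttt{scipy.optimize.curve\_fit}. The sketch below uses the adopted wavelengths and the initial per-spectrograph amplitude values from Tables \ref{tab:spectrographs} and \ref{tab:pars_priors} for illustration; the numbers quoted above come from the actual best-fit amplitudes rather than these initial values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

lam0 = 565.0  # anchor wavelength [nm]

def amp_power_law(lam, eta_sigma0, eta_lam):
    # eta_sigma(lam) = eta_sigma0 * (lam0/lam)**eta_lam (cf. eq. K_J2)
    return eta_sigma0 * (lam0 / lam)**eta_lam

# HIRES, TRES, CARMENES-VIS, CARMENES-NIR, SPIRou, iSHELL
lams = np.array([565.0, 650.0, 750.0, 1350.0, 1650.0, 2350.0])
amps = np.array([130.0, 103.0, 98.0, 80.0, 42.0, 40.0])

(eta_sigma0, eta_lam), pcov = curve_fit(
    amp_power_law, lams, amps, p0=[200.0, 1.0])
\end{verbatim}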
\begin{table*}
\caption{The model parameters and prior distributions used in our primary fitting routines. \faLock\ indicates the parameter is fixed. We run models utilizing $\mathbf{K_{J1}}$ and $\mathbf{K_{J2}}$. We also list the radii of AU Mic b and c (measured in \citetalias{P20} and \citetalias{M21}, respectively), which we use to compute the corresponding densities of each planet.}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
Parameter [units] & Initial Value ($P_{0}$) & Priors & Citation\\
\hline
\hline
$P_{b}$ [days] & 8.4629991 & \faLock & Primary transit; \citetalias{M21} \\
$TC_{b}$ [days] & 2458330.39046 & \faLock & Primary transit; \citetalias{M21} \\
$e_{b}$ & 0.189 & $\mathcal{N}(P_{0}, 0.04)$ & Secondary eclipse; Collins et al. in prep \\
$\omega_{b}$ [rad] & 1.5449655 & $\mathcal{N}(P_{0}, 0.004)$ & Secondary eclipse; Collins et al. in prep \\
$K_{b}$ [$\mathrm{ms}^{-1}$] & 8.5 & Positive & \citetalias{K21} \\
\hline
$P_{c}$ [days] & 18.858991 & \faLock & Primary transit; \citetalias{M21} \\
$TC_{c}$ [days] & 2458342.2243 & \faLock & Primary transit; \citetalias{M21} \\
$e_{c}$ & 0 & \faLock & -- \\
$\omega_{c}$ [rad] & $\pi$ & \faLock & -- \\
$K_{c}$ [$\mathrm{ms}^{-1}$] & 5 & Positive & \citetalias{M21} \\
\hline
$\eta_{\sigma,0}$ [m\,s$^{-1}$] & 216 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 10)$ & RVs; this work \\
$\eta_{\lambda}$ & 1.18 & $\mathcal{U}(0.3, 2)$ & RVs; this work \\
$\eta_{\sigma,HIRES}$ [m\,s$^{-1}$] & 130 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\sigma,TRES}$ [m\,s$^{-1}$] & 103 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\sigma,CARM-VIS}$ [m\,s$^{-1}$] & 98 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\sigma,CARM-NIR}$ [m\,s$^{-1}$] & 80 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\sigma,SPIRou}$ [m\,s$^{-1}$] & 42 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\sigma,iSHELL}$ [m\,s$^{-1}$] & 40 & $\mathcal{J}(1, 600)$, $\mathcal{N}(P_{0}, 30)$ & RVs; this work \\
$\eta_{\tau}$ [days] & 100 & \faLock & \textit{TESS} light curve and RVs; this work \\
$\eta_{\ell}$ & 0.28 & \faLock & \textit{TESS} light curve and RVs; this work \\
$\eta_{p}$ [days] & 4.836 & $\mathcal{N}(P_{0}, 0.001)$ & \textit{TESS} light curve; this work \\
$\gamma$ (per-spectrograph) [m\,s$^{-1}$] & 1 & $\mathcal{U}(-300, 300)$, $\mathcal{N}(0, 100)$ & RVs; this work \\
\hline
$\sigma_{HIRES}$ [m\,s$^{-1}$] & 3 & \faLock & -- \\
$\sigma_{TRES}$ [m\,s$^{-1}$] & 0 & \faLock & -- \\
$\sigma_{CARM-VIS}$ [m\,s$^{-1}$] & 0 & \faLock & -- \\
$\sigma_{CARM-NIR}$ [m\,s$^{-1}$] & 0 & \faLock & -- \\
$\sigma_{SPIRou}$ [m\,s$^{-1}$] & 0 & \faLock & -- \\
$\sigma_{iSHELL}$ [m\,s$^{-1}$] & 0 & \faLock & -- \\
\hline
$M_{\star}$ [$M_{\odot}$] & $0.5_{-0.03}^{+0.03}$ & -- & \citetalias{P20} \\
$R_{b}$ [$R_{\oplus}$] & $4.38_{-0.18}^{+0.18}$ & -- & \citetalias{P20} \\
$R_{c}$ [$R_{\oplus}$] & $3.51_{-0.16}^{+0.16}$ & -- & \citetalias{M21} \\
\hline
\end{tabular}
\end{center}
\label{tab:pars_priors}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{gp_amps.png}
\caption{The best-fit GP amplitudes and uncertainties from kernels without enforcing any dependence on wavelength. We consider cases which let $\eta_{\tau}$ and $\eta_{\ell}$ float as well as our fixed values (see Table \ref{tab:pars_priors}). The solid line is a least-squares solution to the amplitudes for kernel $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}) for the joint-kernel fixed case (pink markers). Horizontal bars correspond to the adopted spectral range of each instrument.}
\label{fig:gp_amps_wavelength}
\end{figure}
\section{Results} \label{sec:results}
The best-fit parameters and corresponding uncertainties from the MAP and MCMC analyses with a two-planet model using joint kernels $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) and $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}) are provided in Table \ref{tab:pars_results_2planets}. We compute planet masses, densities, and orbital semi-major axes by propagating the appropriate MCMC chains. The uncertainties in $M_{\star}$ and the planetary radii from Table \ref{tab:pars_priors} are added in quadrature where appropriate. Corner plots presenting the posterior distributions of each varied parameter are provided in figs. \ref{fig:corner_2planets_j1} and \ref{fig:corner_2planets_j2} for kernels $\mathbf{K_{J1}}$ and $\mathbf{K_{J2}}$, respectively. All chains are well-converged, with posteriors resembling Gaussian distributions. We find the offsets for each spectrograph are highly correlated with one another; we note this is unique to the cases leveraging a joint kernel, and is strongest when datasets overlap, but we do not explore this result further.
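As a minimal sketch of the mass and semi-major-axis propagation described above, assuming the usual semi-amplitude relation in the $M_{p} \ll M_{\star}$ limit and hypothetical posterior draws:
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy import constants as const

def planet_mass(K, P, M_star, e=0.0, inc=90.0 * u.deg):
    # M_p from the RV semi-amplitude, assuming M_p << M_star.
    return (K * np.sqrt(1.0 - e**2) / np.sin(inc)
            * (P / (2.0 * np.pi * const.G))**(1.0 / 3.0)
            * M_star**(2.0 / 3.0)).to(u.earthMass)

def semi_major_axis(P, M_star):
    # Kepler's third law, neglecting the planet mass.
    return ((const.G * M_star * P**2
             / (4.0 * np.pi**2))**(1.0 / 3.0)).to(u.AU)

# Hypothetical posterior draws for AU Mic b:
K = np.random.normal(10.23, 0.90, 10000) * u.m / u.s
Ms = np.random.normal(0.50, 0.03, 10000) * u.Msun
Mb = planet_mass(K, 8.4629991 * u.day, Ms, e=0.186)
print(np.percentile(Mb.value, [15.9, 50.0, 84.1]))
\end{verbatim}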
Unlike kernel $\mathbf{K_{J2}}$, $\mathbf{K_{J1}}$ enforces only a scaling relation between the different spectrographs, with no explicit wavelength dependence, so we adopt the $\mathbf{K_{J1}}$ results as our primary results, although the results for $K_{b}$ and $K_{c}$ are moderately consistent between kernels. With kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}), we report the median semi-amplitudes of AU Mic b and c to be $10.23_{-0.91}^{+0.88}$ m\,s$^{-1}$ and $3.68_{-0.86}^{+0.87}$ m\,s$^{-1}$, corresponding to masses of $20.12_{-1.72}^{+1.57}\ M_{\oplus}$ and $9.60_{-2.31}^{+2.07}\ M_{\oplus}$, respectively. The phased-up RVs for AU Mic b and c are shown in fig. \ref{fig:2planets_phased}. With kernel $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}), we find $K_{b}=8.92_{-0.85}^{+0.85}$ m\,s$^{-1}$ and $K_{c}=5.21_{-0.87}^{+0.90}$ m\,s$^{-1}$. Both of our findings for $K_{b}$ are larger than, but within $1\sigma$ of, the semi-amplitude reported in \citetalias{K21} ($8.5_{-2.2}^{+2.3}$ m\,s$^{-1}$). The mass of AU Mic c is also consistent with a Chen-Kipping mass-radius relation \citep[$\approx 12.1\ M_{\oplus}$;][]{chen_kipping}. The posterior distributions for $e_{b}$ and $\omega_{b}$ are also consistent with their respective priors. Our finding for $K_{b}$ is nearly twice as large as that obtained when using disjoint QP kernels (5.58 m\,s$^{-1}$, fig. \ref{fig:corner_2planets_disjoint_locked}), although the uncertainties are similar. With disjoint kernels, we find no evidence in the RVs for AU Mic c.
We further validate our results by computing the Bayesian information criterion (BIC) and the small-sample corrected Akaike information criterion \citep[AICc;][]{akaike1974, aicc2002}. We compute the relevant quantities for the power set of planet models. We are not attempting to independently detect the eccentricity of AU Mic b and therefore do not include cases with $e_{b}=0$. Prior probabilities are not included in the calculation of the corresponding $\ln \mathcal{L}$ (eq. \ref{eq:map}) in order to maintain normalization between different models. The results are summarized in Table \ref{tab:model_selection} and are consistent with the relative precisions of each derived semi-amplitude.
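A sketch of these information-criterion calculations, using the $\ln\mathcal{L}$ values and free-parameter counts from Table \ref{tab:model_selection}; the number of RVs $n$ below is a placeholder, not the size of our actual dataset:
\begin{verbatim}
import numpy as np

def aicc_bic(lnL, k, n):
    # AICc = AIC + 2k(k+1)/(n-k-1); BIC = k ln(n) - 2 lnL.
    aic = 2.0 * k - 2.0 * lnL
    return (aic + 2.0 * k * (k + 1.0) / (n - k - 1.0),
            k * np.log(n) - 2.0 * lnL)

models = {"b, c": (-1753.1, 17), "b": (-1762.0, 16),
          "c": (-1816.4, 14), "None": (-1828.8, 13)}
n = 400  # placeholder for the number of RV measurements
scores = {m: aicc_bic(lnL, k, n) for m, (lnL, k) in models.items()}
best_aicc = min(a for a, _ in scores.values())
best_bic = min(b for _, b in scores.values())
for m, (a, b) in scores.items():
    print(f"{m:5s} dAICc={a - best_aicc:7.1f} dBIC={b - best_bic:7.1f}")
\end{verbatim}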
Lastly, we compute and present the reduced chi-squared statistic ($\chi_{red}^{2}$) for each spectrograph individually to assess their respective goodness of fit (Table \ref{tab:redchi2s_per_spectrograph}). We add the intrinsic error bars in quadrature with any additional uncorrelated noise (i.e., 3 m\,s$^{-1}$ for HIRES, see Table \ref{tab:pars_priors}). We find the HIRES observations are moderately over-fit ($\chi_{red}^{2}=0.64$), whereas the other spectrographs are under-fit. We suspect this is because the activity amplitude for HIRES is significantly larger than those of the other spectrographs despite a similar overall dispersion. Although we include an additional 3 m\,s$^{-1}$ white-noise term for the HIRES observations, they still yield the smallest overall error bars and are therefore given the most weight in the GP regression. Although a more flexible uncorrelated-noise model may yield a more accurate weighting scheme for the different spectrographs (i.e., a varied ``jitter'' parameter for each spectrograph), we favor the model without them for the variety of reasons discussed in Section \ref{sec:caveats_future_work}. Finally, we note that we find broadly similar results when excluding the HIRES RVs altogether ($K_{b}=12.95 \pm 1.1$ m\,s$^{-1}$, $K_{c}=3.5 \pm 1.0$ m\,s$^{-1}$).
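The per-spectrograph statistic above amounts to the following sketch (with stand-in residuals; in practice the residuals are the RVs minus the zero point, planet signals, and GP):
\begin{verbatim}
import numpy as np

def reduced_chi2(resid, err, jitter=0.0, n_free=0):
    # Intrinsic errors and any extra uncorrelated noise (e.g.,
    # 3 m/s for HIRES) are added in quadrature.
    sig = np.hypot(err, jitter)
    return np.sum((resid / sig)**2) / (resid.size - n_free)

# Stand-in residuals and error bars [m/s]:
resid = np.random.normal(0.0, 4.0, 60)
err = np.full(60, 2.5)
print(reduced_chi2(resid, err, jitter=3.0))
\end{verbatim}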
\begin{table*}
\caption{The best-fit parameters and corresponding Keplerian variables for our primary two-planet fits using joint kernels $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) and $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}). The MCMC values correspond to the 15.9, 50, and 84.1 percentiles. Planet masses, densities, and semi-major axes are computed by propagating the appropriate MCMC chains. We also add in quadrature the uncertainties in $M_{\star}$ and the planetary radii from Table \ref{tab:pars_priors} where relevant.}
\begin{center}
\begin{tabular}{ | c | c | c | c | c | }
\hline
Name [units] & MAP (J1) & MCMC (J1) & MAP (J2) & MCMC (J2) \\
\hline
\hline
$P_{b}$ [days] & 8.4629991 & -- & -- & -- \\
$TC_{b}$ [days; BJD] & 2458330.39046 & -- & -- & -- \\
$e_{b}$ & 0.187 & $0.186_{-0.035}^{+0.036}$ & 0.182 & $0.181_{-0.035}^{+0.035}$ \\
$\omega_{b}$ [radians] & 1.5452 & $1.5451_{-0.0038}^{+0.0038}$ & 1.5453 & $1.5454_{-0.0041}^{+0.0041}$ \\
$K_{b}$ [m\,s$^{-1}$] & 10.21 & $10.23_{-0.91}^{+0.88}$ & 8.94 & $8.92_{-0.85}^{+0.85}$ \\
$M_{b}$ [$M_{\oplus}$] & 20.14 & $20.12_{-1.72}^{+1.57}$ & 17.66 & $17.73_{-1.62}^{+1.68}$ \\
$a_{b}$ [AU] & 0.0645 & $0.0645_{-0.0013}^{+0.0013}$ & -- & -- \\
$\rho_{b}$ [g/cm$^{3}$] & 1.32 & $1.32_{-0.20}^{+0.19}$ & 1.16 & $1.16_{-0.18}^{+0.18}$ \\
\hline
$P_{c}$ [days] & 18.858991 & -- & -- & -- \\
$TC_{c}$ [days; BJD] & 2458342.2243 & -- & -- & -- \\
$e_{c}$ & 0 & -- & -- & -- \\
$\omega_{c}$ [radians] & $\pi$ & -- & -- & -- \\
$K_{c}$ [m\,s$^{-1}$] & 3.62 & $3.68_{-0.86}^{+0.87}$ & 5.23 & $5.21_{-0.87}^{+0.90}$ \\
$M_{c}$ [$M_{\oplus}$] & 9.50 & $9.60_{-2.31}^{+2.07}$ & 13.71 & $14.12_{-2.71}^{+2.48}$ \\
$a_{c}$ [AU] & 0.1101 & $0.1101_{-0.002}^{+0.002}$ & -- & -- \\
$\rho_{c}$ [g/cm$^{3}$] & 1.21 & $1.22_{-0.29}^{+0.26}$ & 1.75 & $1.80_{-0.34}^{+0.31}$ \\
\hline
$\gamma_{HIRES}$ [m\,s$^{-1}$] & 2.9 & $4.1_{-57.0}^{+55.6}$ & -19.4 & $-8.7_{-42.9}^{+43.3}$ \\
$\gamma_{TRES}$ [m\,s$^{-1}$] & 11.4 & $12.1_{-27.8}^{+27.4}$ & -0.2 & $9.3_{-38.0}^{+38.4}$ \\
$\gamma_{CARM-VIS}$ [m\,s$^{-1}$] & 3.7 & $4.3_{-26.6}^{+26.0}$ & -12.1 & $-3.4_{-33.9}^{+34.5}$ \\
$\gamma_{CARM-NIR}$ [m\,s$^{-1}$] & 2.6 & $2.9_{-21.8}^{+21.7}$ & -6.8 & $-1.5_{-21.2}^{+21.2}$ \\
$\gamma_{SPIRou}$ [m\,s$^{-1}$] & 5.5 & $5.6_{-12.4}^{+12.3}$ & 0.72 & $5.1_{-17.3}^{+17.8}$ \\
$\gamma_{iSHELL}$ [m\,s$^{-1}$] & -2.8 & $-2.4_{-12.4}^{+12.1}$ & -7.5 & $-4.3_{-12.9}^{+13.0}$ \\
\hline
$\eta_{\sigma,0}$ [m\,s$^{-1}$] & -- & -- & 242.4 & $243.1_{-9.1}^{+8.8}$ \\
$\eta_{\lambda}$ & -- & -- & 0.843 & $0.845_{-0.024}^{+0.024}$ \\
$\eta_{\sigma,HIRES}$ [m\,s$^{-1}$] & 269.4 & $275.7_{-16.4}^{+17.4}$ & -- & -- \\
$\eta_{\sigma,TRES}$ [m\,s$^{-1}$] & 132.3 & $135.4_{-9.5}^{+10.6}$ & -- & -- \\
$\eta_{\sigma,CARM-VIS}$ [m\,s$^{-1}$] & 125.1 & $128.2_{-8.2}^{+8.8}$ & -- & -- \\
$\eta_{\sigma,CARM-NIR}$ [m\,s$^{-1}$] & 103.0 & $105.5_{-8.7}^{+9.1}$ & -- & -- \\
$\eta_{\sigma,SPIRou}$ [m\,s$^{-1}$] & 58.5 & $60.1_{-4.0}^{+4.3}$ & -- & -- \\
$\eta_{\sigma,iSHELL}$ [m\,s$^{-1}$] & 58.5 & $60.0_{-3.9}^{+4.2}$ & -- & -- \\
$\eta_{P}$ [days] & 4.8384 & $4.8384_{-0.0009}^{+0.0008}$ & 4.8376 & $4.8376_{-0.0009}^{+0.0009}$ \\
\hline
\end{tabular}
\end{center}
\label{tab:pars_results_2planets}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{2planets_phased.png}
\caption{The phased RVs for AU Mic b (left column) and c (right column), and the corresponding best-fit Keplerian models, generated from our nominal two-planet model. For each spectrograph, we subtract the unique zero-points, all other planet signals, and the appropriate GP. Corresponding data errors are computed by adding the intrinsic error in quadrature with the additional uncorrelated noise (i.e., 3 m\,s$^{-1}$ for HIRES, see Table \ref{tab:pars_priors}). The dark red points are generated by binning the phased RVs using a window size of 0.1, weighted by $1/\sigma_{\mathrm{RV}}^{2}$, where $\sigma_{\mathrm{RV}}$ are the data errors. In the top row, we plot all data used in the fit. In the bottom row, we only show HIRES, iSHELL, and SPIRou. Although the HIRES cadence in 2020 was relatively dense with respect to the activity timescales $\eta_{\tau}$ and $\eta_{P}$, the data still appear to be over-fit.}
\label{fig:2planets_phased}
\end{figure*}
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Spectrograph & $\chi_{red}^{2}$ \\
\hline
HIRES & 1.09 \\
TRES & 6.86 \\
CARMENES-VIS & 5.89 \\
CARMENES-NIR & 4.39 \\
SPIRou & 16.70 \\
iSHELL & 22.43 \\
\hline
\end{tabular}
\caption{The reduced chi-squared for each spectrograph from our nominal two-planet model using kernel $\mathbf{K_{J1}}$. Unlike when using quasi-disjoint kernels (Section \ref{sec:kernel_parameter_estimation}), we find the model is overall under-fit with the joint kernel. We suspect this is primarily due to an inadequate stellar-activity model (i.e., a simple scaling relation between spectrographs is insufficient) and/or the exclusion of per-spectrograph jitter terms, and discuss these details further in Section \ref{sec:caveats_future_work}.}
\label{tab:redchi2s_per_spectrograph}
\end{table}
\subsection{Evidence For Additional Candidates?}
We compute periodograms to further assess the relative statistical confidence of the two transiting planets and to search for other planets in the system. We first compute a series of generalized Lomb-Scargle \citep[GLS;][]{2018ascl.soft07019Z, pya} periodograms out to 500 days after removing the nominal zero-points, the appropriate GPs, and the two planets, all generated using parameters from our nominal two-planet model (Table \ref{tab:pars_results_2planets}) with kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity. We also compute an activity-filtered periodogram from a planet-free model to assess how much planetary signal the GP model absorbs, which informs our interpretation of other peaks present in the periodogram. We further plot the normalized power levels for false alarm probabilities (FAPs) of 10\%, 1\%, and 0.1\%.
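A minimal sketch of such a GLS periodogram and its FAP power levels, using \texttt{astropy} on synthetic residual RVs (stand-ins for our zero-point-, GP-, and planet-corrected RVs):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 500.0, 120))
rv = (3.0 * np.sin(2.0 * np.pi * t / 18.86)
      + rng.normal(0.0, 2.0, t.size))
err = np.full(t.size, 2.0)

ls = LombScargle(t, rv, err)
freq, power = ls.autopower(minimum_frequency=1.0 / 500.0,
                           maximum_frequency=1.0 / 1.1)
# Normalized power levels for FAPs of 10%, 1%, and 0.1%.
fap_levels = ls.false_alarm_level([0.1, 0.01, 0.001])
print(1.0 / freq[np.argmax(power)], fap_levels)
\end{verbatim}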
We also compute ``brute-force'' periodograms by performing MAP fits over a wide range of fixed orbital periods for a user-defined ``test-planet'' under various assumptions for the other model parameters \citep[see][]{2021MNRAS.502.3704A}. Given the time complexity of GP regression, we only consider periods out to 100 days. We first run two searches with no other planets in the model, first allowing the test-planet's $TC$ to float, and second fixing $TC$ to the nominal value for AU Mic b (Table \ref{tab:pars_results_2planets}). We then run searches for a second planet, this time including a planetary model to account for the orbit of AU Mic b, with $K_{b}\sim\mathcal{N}(8.5, 2.5)$, consistent with the semi-amplitude found in \citetalias{K21}. We again consider the case of letting the test-planet's $TC$ float, then run three cases fixing the test-planet's $TC$ to each time of transit for AU Mic c from the \textit{TESS} Sector 1 and 27 light curves (\citetalias{M21}). Lastly, we perform a search for a third planet, letting its $TC$ float and including models for AU Mic b and c ($K_{b}\sim\mathcal{N}(8.5, 2.5)$, $K_{c}>0$).
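To illustrate the idea, the sketch below replaces the full GP-plus-Keplerian MAP fit with a trivial weighted least-squares sinusoid fit at each fixed trial period; the real analysis instead maximizes the full MAP objective at each period. The \texttt{t}, \texttt{rv}, and \texttt{err} arrays are assumed to hold the (activity-filtered) RVs.
\begin{verbatim}
import numpy as np

def delta_lnL(t, rv, err, period):
    # Fit a fixed-period sinusoid + offset by weighted linear least
    # squares; return the lnL improvement over a constant model.
    w = 1.0 / err**2
    ph = 2.0 * np.pi * t / period
    A = np.column_stack([np.sin(ph), np.cos(ph), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                               rv * np.sqrt(w), rcond=None)
    chi2 = np.sum(w * (rv - A @ coef)**2)
    chi2_flat = np.sum(w * (rv - np.average(rv, weights=w))**2)
    return 0.5 * (chi2_flat - chi2)

periods = np.geomspace(1.2, 100.0, 2000)
# power = [delta_lnL(t, rv, err, p) for p in periods]
\end{verbatim}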
Both the GLS (fig. \ref{fig:gls_periodograms}) and brute-force (fig. \ref{fig:brute_force_periodograms}) periodograms exhibit clear aliasing with a frequency of $\approx 0.00281$ days$^{-1}$ (or 356 days), which we attribute to having two seasons of observations separated by $\approx 200$ days. Given the respective power of AU Mic b in both the GLS and brute-force planet-free periodograms, we briefly explore other peaks with similar power, even though all other peaks are below all three FAPs after removing the nominal two-planet model (fig. \ref{fig:gls_periodograms}, row 3; fig. \ref{fig:brute_force_periodograms}, row 7). Both the two- and zero-planet periodograms (GLS and brute-force alike) show power between the orbits of AU Mic b and c near 12.72 and 13.19 days, as well as power near 66.7 days in the residual RVs. Although these peaks are comparable in power to AU Mic b in both planet-free periodograms, they may be spurious. We further discuss the confirmation of AU Mic b and c, as well as the validation of such additional potential candidates, in Section \ref{sec:injection_recovery}. A mass-radius diagram is shown in fig. \ref{fig:mass_radius} to place the masses and radii of AU Mic b and c in context with other known exoplanets, including the subset of young exoplanets shown in \citetalias{P20}. The plotted masses for AU Mic b and c are from our nominal two-planet model using kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}).
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{pgrams_gls}
\caption{GLS periodograms for AU Mic. Rows 1--4 are generated from our nominal two-planet MAP fit result using $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity. From top to bottom, each step applies an additional ``correction'': 1. zero-point-corrected RVs, 2. activity-filtered RVs, 3. planet b-filtered RVs, 4. planet c-filtered RVs. Annotated from left to right in green are the periods of AU Mic c and b. In the top row, we also annotate in orange (from left to right) potential aliases of the stellar rotation period $3\eta_{P}$, $2\eta_{P}$, and $3\eta_{P}/2$, followed by the first three harmonics. In the bottom row, we compute a periodogram from an activity-filtered and trend-corrected zero-planet model to indicate how power from planets is absorbed by the GP. In each periodogram, we also identify the false alarm probability (FAP) power levels corresponding to 0.1\% (highest), 1\%, and 10\% (lowest). The clear alias present in all periodograms is caused by the large gap between the two seasons of observations. In the bottom panel, we also plot in pink a Lomb-Scargle periodogram (arbitrarily scaled) of our window function (i.e., identical yet arbitrary RVs at each observation).}
\label{fig:gls_periodograms}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{pgrams_bruteforce}
\caption{``Brute-force'' periodograms for AU Mic with different assumptions for the planetary models, all making use of kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity. In each row, we perform a MAP fit for a wide range of fixed periods for a particular ``test''-planet. In row 1, we include no other planets in our model and allow the test-planet's $TC$ to float. In row 2, we perform the same search but fix $TC$ to the nominal value for AU Mic b (Table \ref{tab:pars_priors}). In row 3, we include a model for AU Mic b (with $K_{b}\sim\mathcal{N}(8.5, 2.5)$, see \citetalias{K21}) and search for a second planet, again letting $TC$ float. In rows 4--6, we perform the same search but fix the test-planet's $TC$ to one of the three times of transit for AU Mic c from \textit{TESS} (in chronological order). In the bottom row, we include nominal models for AU Mic b and c ($K_{b}\sim\mathcal{N}(8.5, 2.5)$, $K_{c}>0$). We also annotate the same potential aliases of the stellar-rotation period (orange) and planetary periods (green) as in fig. \ref{fig:gls_periodograms}.}
\label{fig:brute_force_periodograms}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{mass_radius.png}
\caption{Mass vs. radius for all exoplanets with measured radii and masses from the NASA Exoplanet Archive \citep{nasa_exoplanet_archive}. For AU Mic b and c, we plot (maroon markers) the masses determined from our two-planet model with kernel $\mathbf{K_{J1}}$. We also indicate with an arrow the $5\sigma$ upper limit on the mass of AU Mic c determined from the posterior of $K_{c}$. The radii for b and c are those reported in \citetalias{M21}. In blue, we plot a piece-wise Chen-Kipping mass-radius relation \citep{chen_kipping}. We also annotate (cyan markers) the masses and radii of a sample of young planets (estimated stellar ages $\lesssim 400$ Myr).}
\label{fig:mass_radius}
\end{figure}
\begin{table}
\caption{Model information criterion for AU Mic b and c using kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity.}
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | }
\hline
Planets & $\ln\mathcal{L}$ & $\chi_{red}^{2}$ & N free & $\Delta$AICc & $\Delta$BIC \\
\hline
b, c & -1753.1 & 4.73 & 17 & 0 & 0 \\
b & -1762.0 & 4.77 & 16 & 15.5 & 12.2 \\
c & -1816.4 & 5.14 & 14 & 119.8 & 109.8 \\
None & -1828.8 & 5.23 & 13 & 142.4 & 129.1 \\
\hline
\end{tabular}
\end{center}
\label{tab:model_selection}
\end{table}
\section{Discussion} \label{sec:discussion}
\subsection{Constraints on Eccentricity} \label{sec:disc_eccentricity}
Here we briefly explore eccentric orbits for the two transiting planets b and c. For each planet, we take $e \sim \mathcal{U}(0, 0.7)$ and $\omega\sim \mathcal{U}(0, 2\pi)$. We only use kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) to model the stellar activity. Posterior distributions are presented in fig. \ref{fig:corner_2planets_j1_ecc}. We find $e_{b}=0.30\pm0.04$, which is $\approx 50\%$ larger than indicated by our prior, which is informed by a secondary eclipse event. The corresponding finding of $\omega_{b}=3.01\pm0.27$ is also inconsistent with our adopted prior for $\omega_{b}$. The posterior distribution for $e_{c}$ is concentrated at the upper bound (0.7), implying an orbit overlapping with that of AU Mic b. Orbital stability calculations presented in \citetalias{M21} indicate $e_{c}<0.2$, so we assert our model is unable to accurately constrain its eccentricity. The behavior of $e_{c}$ further indicates our detection of $K_{c}$ may not be significant.
\subsection{Sensitivity to Kernel Hyperparameters} \label{sec:disc_kernel_parameters}
Our analyses in Section \ref{sec:kernel_parameter_estimation} make use of a fixed mean spot lifetime $\eta_{\tau}=100$ days and smoothing parameter $\eta_{\ell}=0.28$. Here we determine how sensitive the recovered semi-amplitudes of AU Mic b and c are to these two parameters. We consider $\eta_{\tau} \in$ \{40, 70, 100, 200, 300\} days and $\eta_{\ell} \in$ \{0.15, 0.2, 0.25, 0.3, 0.35\}. We perform MAP and MCMC fits for all pairs of these two fixed parameters using $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) for a two-planet model. All other parameters adopt the initial values and priors from Table \ref{tab:pars_priors}. Results are summarized in Table \ref{tab:kernel_param_search}.
We find $K_{b}$ is only moderately sensitive to the value of each hyperparameter, ranging from $\sim$ 7--11 m\,s$^{-1}$. With a larger spot lifetime, $K_{b}$ tends towards larger values, indicating the GP likely absorbs power from planet b when the model is more flexible (smaller $\eta_{\tau}$). However, $K_{b}$ is relatively insensitive to the value of $\eta_{\ell}$. The range of values for $K_{c}$ is larger, changing by nearly a factor of three. Unlike $K_{b}$, $K_{c}$ is more unstable and tends towards larger values when using a more flexible (smaller) spot lifetime. The reduced chi-squared statistic indicates the model is not over-fit in any of the cases performed, but it is also several times larger than unity in most cases, indicating our modeling is inadequate.
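For reference, a common form of the quasi-periodic kernel with these hyperparameters, together with the sensitivity grid loop, might be sketched as follows; our parameterization here is an assumption, and the exact form we use is given by eq. \ref{eq:gp_qp}.
\begin{verbatim}
import numpy as np

def qp_kernel(t1, t2, eta_sigma, eta_tau, eta_ell, eta_P):
    # Quasi-periodic covariance between epoch vectors t1 and t2:
    # squared-exponential decay over eta_tau times a periodic term
    # with period eta_P and smoothing parameter eta_ell.
    dt = t1[:, None] - t2[None, :]
    decay = np.exp(-0.5 * dt**2 / eta_tau**2)
    periodic = np.exp(-0.5 * np.sin(np.pi * dt / eta_P)**2
                      / eta_ell**2)
    return eta_sigma**2 * decay * periodic

# Sensitivity grid: rebuild the covariance and rerun the fits
# for each fixed (eta_tau, eta_ell) pair.
for eta_tau in (40.0, 70.0, 100.0, 200.0, 300.0):
    for eta_ell in (0.15, 0.20, 0.25, 0.30, 0.35):
        K = qp_kernel(np.arange(10.0), np.arange(10.0),
                      200.0, eta_tau, eta_ell, 4.836)
\end{verbatim}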
\begin{table}[]
\centering
\begin{tabular}{| c | c | c | c | c |}
\hline
$\eta_{\tau}$ [days] & $\eta_{\ell}$ & $K_{b}$ [m\,s$^{-1}$] & $K_{c}$ [m\,s$^{-1}$] & $\chi^{2}_{red}$ \\
\hline
40 & 0.15 & $8.79\pm1.47$ & $7.38\pm1.65$ & 1.58 \\
40 & 0.2 & $8.84\pm1.30$ & $8.51\pm1.33$ & 2.10 \\
40 & 0.25 & $8.23\pm1.17$ & $9.05\pm1.17$ & 2.45 \\
40 & 0.3 & $7.41\pm1.08$ & $9.13\pm1.13$ & 2.68 \\
40 & 0.35 & $6.95\pm0.98$ & $9.23\pm1.05$ & 2.85 \\
\hline
70 & 0.15 & $8.74\pm1.24$ & $6.88\pm1.30$ & 1.99 \\
70 & 0.2 & $10.32\pm1.13$ & $5.90\pm1.07$ & 2.69 \\
70 & 0.25 & $10.45\pm1.04$ & $4.76\pm0.95$ & 3.25 \\
70 & 0.3 & $9.61\pm0.91$ & $4.16\pm0.89$ & 3.65 \\
70 & 0.35 & $9.18\pm0.82$ & $3.94\pm0.83$ & 3.90 \\
\hline
100 & 0.15 & $9.28\pm1.17$ & $5.88\pm1.08$ & 2.46 \\
100 & 0.2 & $10.85\pm1.00$ & $4.73\pm0.98$ & 3.20 \\
100 & 0.25 & $10.78\pm0.95$ & $3.78\pm0.87$ & 3.76 \\
100 & 0.3 & $9.81\pm0.85$ & $3.63\pm0.80$ & 4.12 \\
100 & 0.35 & $9.22\pm0.80$ & $3.60\pm0.77$ & 4.39 \\
\hline
200 & 0.15 & $9.38\pm0.98$ & $4.01\pm0.96$ & 3.44 \\
200 & 0.2 & $11.04\pm0.89$ & $3.35\pm0.84$ & 4.16 \\
200 & 0.25 & $11.09\pm0.87$ & $3.38\pm0.77$ & 4.73 \\
200 & 0.3 & $10.14\pm0.84$ & $4.51\pm0.76$ & 5.28 \\
200 & 0.35 & $9.06\pm0.73$ & $4.77\pm0.66$ & 5.59 \\
\hline
300 & 0.15 & $9.32\pm0.92$ & $3.84\pm0.87$ & 3.82 \\
300 & 0.2 & $10.60\pm0.88$ & $3.78\pm0.81$ & 4.68 \\
300 & 0.25 & $10.42\pm0.80$ & $4.99\pm0.74$ & 5.59 \\
300 & 0.3 & $10.51\pm0.76$ & $4.52\pm0.68$ & 5.96 \\
300 & 0.35 & $9.89\pm0.75$ & $4.41\pm0.68$ & 6.27 \\
\hline
\end{tabular}
\caption{MCMC results with different assumptions for the mean spot lifetime $\eta_{\tau}$ and smoothing parameter $\eta_{\ell}$ using kernel $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}). For each row, we fix the values of $\eta_{\tau}$ and $\eta_{\ell}$. All other model parameters take on the initial values and priors from Table \ref{tab:pars_priors} for a two-planet model. We perform a MAP fit followed by MCMC sampling for each case. We report the nominal values and uncertainties for the semi-amplitudes of AU Mic b and c from the MCMC fitting, as well as the reduced chi-squared statistic, $\chi^{2}_{red}$, computed using the MAP-derived parameters. The uncertainties reported here for $K_{b}$ and $K_{c}$ are the average of the upper and lower uncertainties.}
\label{tab:kernel_param_search}
\end{table}
\subsection{Planet Injection and Recovery} \label{sec:injection_recovery}
Here we assess the fidelity of our RV model applied to the AU Mic system through planetary injection and recovery tests. We first inject planetary signals into the RV data with well-defined semi-amplitudes, periods, and ephemerides ($TC$). We arbitrarily choose $TC=2457147.36589$ for all injected cases. We consider 40 unique periods between 5.12345--100.12345 days, uniformly distributed in log-space. For the semi-amplitude $K$, we consider values from 1--10 m\,s$^{-1}$ with a step size of 1 m\,s$^{-1}$, as well as values between 10--100 m\,s$^{-1}$ which are uniformly distributed in log-space (20 total values). In all cases, we include a model for AU Mic b with fixed $P$ and $TC$ such that $K_{b}\sim\mathcal{N}(8.5, 2.5)$. We first assess our recovery capabilities using a Gaussian prior for $P$ such that $P\sim\mathcal{N}(P_{\mathrm{inj}}, P_{\mathrm{inj}}/50)$ and a uniform prior for $TC$ such that $TC \sim \mathcal{U}(TC_\mathrm{inj} \pm P_{\mathrm{inj}}/2)$. For each injected planet (one at a time), we run our MAP and MCMC analyses to determine the recovered $K$ and corresponding uncertainty. The starting values for $K$ and $TC$ are always the injected values. We also consider the same injection and recovery test but with $P$ and $TC$ fixed to the injected values. We finally determine how susceptible our RV model is to picking out ``fake'' planets by running these same two trials with no injected planets; although no planets are injected, we still run the same trials as in the injected case, with different initial values for $K$. Two-dimensional histograms of the recovered $K$ as a fraction of the injected $K$, as well as the associated uncertainty (also as a fraction of the injected $K$), are shown in figs. \ref{fig:injrec_real} and \ref{fig:injrec_fake} for the injected and non-injected cases, respectively.
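A sketch of the injection grid and the injected circular Keplerian follows; the sine phase convention is our assumption, and the split of the $K$ grid into 10 linear plus 20 log-spaced values reflects our reading of the grid described above.
\begin{verbatim}
import numpy as np

TC_INJ = 2457147.36589  # arbitrary injected ephemeris [BJD]

def circular_keplerian(t, K, P, TC):
    # RV of a circular orbit; zero RV and decreasing at TC
    # (this phase convention is an assumption).
    return -K * np.sin(2.0 * np.pi * (t - TC) / P)

# 40 log-uniform periods and the K grid described in the text.
periods = np.geomspace(5.12345, 100.12345, 40)
amps = np.concatenate([np.arange(1.0, 11.0),
                       np.geomspace(10.0, 100.0, 20)])

# For each (P_inj, K_inj), inject into the measured RVs and rerun
# the MAP/MCMC fits:
# rv_inj = rv + circular_keplerian(t_bjd, K_inj, P_inj, TC_INJ)
\end{verbatim}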
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{injrec_real.png}
\caption{Histograms depicting our injection and recovery-test results. In the top row, we show the relative confidence interval of the recovered semi-amplitudes ($K_{\mathrm{rec}}$) derived from the MCMC analysis in the case of letting the ephemeris ($P$, $TC$) float (left) and fixing the ephemeris to the injected values (right). In the bottom row, we compare the recovered semi-amplitude to the injected value ($K_{\mathrm{inj}}$).}
\label{fig:injrec_real}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{injrec_fake}
\caption{Histograms depicting the recovery of planetary signals without having injected any into the data. In panels 1 and 2, we show the relative confidence interval of the recovered semi-amplitudes ($K_{\mathrm{rec}}$) derived from the MCMC analysis in the case of letting the ephemeris ($P$, $TC$) float (left) and fixing the ephemeris to the arbitrary $TC=2457147.36589$ (middle). On the right, we show the recovered semi-amplitudes for each case.}
\label{fig:injrec_fake}
\end{figure*}
In the case of injected planets, we find our RV model is able to confidently recover semi-amplitudes down to a few m\,s$^{-1}$ in this data set with a relative precision of $\gtrsim 4\sigma$. However, a closer inspection reveals the recovered semi-amplitudes are typically larger than the injected $K$, particularly for smaller injected values (1--5 m\,s$^{-1}$), a range that includes our measured semi-amplitude for AU Mic c. Even when the ephemeris is fixed, we tend to poorly measure the smallest values of $K$, indicating the recovered $TC$ in the non-fixed case is unlikely to match what we injected. In the 6--13 m\,s$^{-1}$ range, which covers the recovered semi-amplitude of AU Mic b, we find that the accuracy of the recovered semi-amplitudes is $\sim$50\%. So, while we quote a formal precision on the mass of AU Mic b of $M_{b}=20.12_{-1.72}^{+1.57}\ M_{\oplus}$ ($\sim$9\% precision), our injection and recovery tests indicate that the accuracy of the mass of AU Mic b is only known to a factor of two.
Unfortunately, attempts to recover non-injected planets are ``unsuccessful'', in that our modeling finds strong evidence for planets we did not inject (fig. \ref{fig:injrec_fake}) in the case of allowing $P$ and $TC$ to float. A deeper investigation into the posteriors of such fits indicates certain parameters (primarily $P$ and $TC$) are typically not well-behaved and yield non-Gaussian distributions. When fixing $P$ and $TC$ to ``nominal'' values, our modeling does not tend to find such non-existent planets (fig. \ref{fig:injrec_fake}).
The confident recoveries of ``fake'' planets in our tests indicate our GP model is flexible enough to find relatively (quasi-)stable islands in probability space with high confidence for $K$ specifically. Although several peaks stand out in our periodogram analyses (figs. \ref{fig:gls_periodograms} and \ref{fig:brute_force_periodograms}), more observations and/or more sophisticated modeling are needed to robustly claim these periods as statistically validated planets. We further note that the recovered values of $K$ for the smallest injected values are inaccurate, indicating our measurement of $K_{c}=3.68$ m\,s$^{-1}$ is also moderately unconvincing, and is likely an overestimate given the behavior of all recoveries at this level of $K$. We finally note this analysis is limited by planets we do not account for in the model, which may impact our ability to recover certain combinations of $P$ and $TC$. Further tests using several values for the injected $TC$ may also yield different results. With these limitations in mind, we also provide an estimate of the upper limit on the mass of AU Mic c. We find a $5\sigma$ upper limit on the semi-amplitude of AU Mic c of $\leq7.68$ m\,s$^{-1}$, corresponding to a mass of $\leq20.13\ M_{\oplus}$.
\subsection{Utility of RV-Color} \label{sec:disc_utility}
The chromatic kernel used in this work is an initial step toward exploiting the expected correlation of stellar activity with wavelength by introducing a scaling relation between wavelengths (eqs. \ref{eq:gp_j1} and \ref{eq:gp_j2}). Here we examine the ``RV-color'' of our multi-wavelength dataset in order to further assess the correlation between our RVs and the expected activity:
\begin{equation}
\label{eq:rv_color}
\mathrm{RV}_{\mathrm{color}}(t, \lambda, \lambda') = \mathrm{RV}(t, \lambda) - \mathrm{RV}(t, \lambda').
\end{equation}
We first determine which nights contain nearly simultaneous measurements at unique wavelengths. We require observations to be within 0.3 days ($\approx 6\%$ of one rotation period) of each other, which minimizes differences from rotationally modulated activity while still yielding a useful number of pairs. For each nearly simultaneous chromatic pair (ordered such that $\lambda' > \lambda$), we compute the ``data-color'' directly from the difference between the two measured RVs, and the ``GP-color'' from the difference between the two GPs sampled at the same times. This calculation requires knowledge of the model parameters in order to remove the per-instrument zero points and realize each appropriate GP, so we make use of the MAP-derived parameters in Table \ref{tab:pars_results_2planets} with kernel $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}). The correlation between the data- and GP-color is shown in fig. \ref{fig:rv_color_1_to_1}. The agreement between the data and the model (weighted $R^{2}\approx0.71$) indicates that our chromatic GP technique reproduces the RV-color phenomenon well for multiple wavelength pairs.
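A sketch of this pairing computation (function and argument names are ours):
\begin{verbatim}
import numpy as np

def rv_color_pairs(times, rvs, lams, max_dt=0.3):
    # Collect nearly simultaneous cross-instrument pairs and return
    # the data-color RV(t, lam) - RV(t, lam') with lam' > lam.
    # times/rvs: dicts of numpy arrays keyed by spectrograph
    # (zero points already removed); lams: effective wavelengths.
    pairs = []
    names = list(times)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if lams[a] == lams[b]:
                continue
            blue, red = (a, b) if lams[a] < lams[b] else (b, a)
            for j, tj in enumerate(times[blue]):
                k = np.argmin(np.abs(times[red] - tj))
                if abs(times[red][k] - tj) < max_dt:
                    pairs.append((tj, rvs[blue][j] - rvs[red][k]))
    return np.array(pairs)
\end{verbatim}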
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{rvcolor_onetoone.png}
\caption{The observed ``RV-color'' $= \mathrm{RV}(t, \lambda) - \mathrm{RV}(t, \lambda')$ ($\lambda' > \lambda$) from our 2019 and 2020 nights with nearly simultaneous measurements at unique wavelengths, plotted against the same RV-color difference predicted by our chromatic GP model using kernel $\mathbf{K_{J2}}$. Pairs consisting of CARMENES-VIS and CARMENES-NIR measurements are nearly transparent to make other pairs more visible. We do not plot pairs of SPIRou and iSHELL because they are tightly centered near zero. A dashed one-to-one line is also shown. The weighted coefficient of determination ($R^{2}$) is $\approx 0.68$.}
\label{fig:rv_color_1_to_1}
\end{figure}
With a sufficient model for stellar activity, we expect the data and GP RV-colors to match (up to white noise). Therefore, the RV-color difference between the data and GP may be used in future analyses to further constrain the model (and thereby prevent over-fitting) by including an effective L2 regularization penalty:
\begin{equation}
\ln \mathcal{L}\ \textrm{+=}\ -\Lambda \sum_{t} r_{\mathrm{col}}(t)^{2}
\end{equation}
\noindent Here, $\vec{r}_{\mathrm{col}}$ is the vector of residuals between the GP and data RV-colors, $\Lambda>0$ is a tunable hyperparameter whose value reflects the relative importance and confidence of the stellar activity model, and $+=$ represents the standard ``addition assignment'' operator. The vector $\vec{r}_{\mathrm{col}}$ may be computed for all pairs of wavelengths with (nearly) simultaneous measurements, and each pair may use an identical or a unique value of $\Lambda$. We finally note this regularization term is not limited to our assumed simple scaling relation, and could also be used in the case of disjoint kernels.
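In code, the penalty amounts to the following sketch (names are ours):
\begin{verbatim}
import numpy as np

def penalized_lnL(lnL_base, rcol_by_pair, lam_by_pair):
    # Apply the RV-color L2 penalty, summed over all nearly
    # simultaneous wavelength pairs (a per-pair Lambda is allowed).
    penalty = sum(lam_by_pair[p] * np.sum(r**2)
                  for p, r in rcol_by_pair.items())
    return lnL_base - penalty

# e.g., one shared Lambda = 0.1 for a single pair:
# lnL = penalized_lnL(lnL, {"VIS-NIR": rcol}, {"VIS-NIR": 0.1})
\end{verbatim}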
\subsection{Additional Caveats and Future Work} \label{sec:caveats_future_work}
Kernels $\mathbf{K_{J1}}$ (eq. \ref{eq:gp_j1}) and $\mathbf{K_{J2}}$ (eq. \ref{eq:gp_j2}) make use of a scaling relation between stellar activity models at different wavelengths (spectrographs), where each activity model is drawn from a Gaussian process characterized by a covariance matrix utilizing all observations. Using such joint kernels yields fits with larger scatter than cases using disjoint QP kernels (one per spectrograph, eq. \ref{eq:gp_qp}). In the latter case, we find that although each activity model appears to be ``in phase'' with the others, each GP exhibits unique features that are inconsistent with a simple scaling relation (fig. \ref{fig:rvs_zoom_disjoint}). With nightly sampling, it is difficult to determine whether the observed differences between disjoint GPs are indicative of inadequate sampling or an inadequate RV model (activity + planets). Further, all activity models used in this work share identical kernel hyperparameters (excluding the amplitude), which may itself be an inadequate assumption. We expect the stellar rotation period ($\eta_{P}$) to be identical (or nearly so) across wavelengths; however, it is not clear whether the mean activity timescale ($\eta_{\tau}$) or the period length scale ($\eta_{\ell}$) in particular should be achromatic hyperparameters.
Our work further excludes per-spectrograph uncorrelated ``jitter'' terms. We suspect this may be the source of our model's ability to find planets we did not inject (Section \ref{sec:injection_recovery}); we defer exploring such terms to future work. The reduced-$\chi^2$ values in Tables \ref{tab:redchi2s_per_spectrograph} and \ref{tab:kernel_param_search} quantify the degree to which our models fail to capture signals from possible additional planets; incorrect values of eccentricity and/or $\omega$; per-spectrograph systematics not included in the formal measurement uncertainties; and stellar activity such as p-mode oscillations, convective noise, or longer time-scale variations. Therefore, although our specific likelihood function (eq. \ref{eq:map}) assumes normally distributed errors, we choose not to fold any remaining (i.e., unaccounted for by the provided error bars) potentially correlated noise into an additional uncorrelated jitter term, in order to keep our model simple.
More accurately characterizing the masses and orbits of AU Mic b and c may require a more sophisticated stellar activity model and more intensive multi-wavelength cadence. Our work further does not make use of activity indicators (e.g., Ca \textsc{II} H and K, H$\alpha$) or asymmetries in the cross-correlation function (e.g., the bisector inverse slope, BIS, or the differential line width, dLW; \citealp{serval}) to help constrain the activity model \citep[see][]{rajpaul2015a}. The \texttt{serval} pipeline in particular provides a measure of the chromaticity (CRX) for both the CARMENES-VIS and NIR datasets, which we do not use in our modeling. For AU Mic, we expect each spectrograph to be precise enough to resolve first-order chromatic effects within its respective spectral grasp, which will unfortunately enlarge the formal uncertainties of each spectrograph. Further, our QP-based kernels are primarily intended to capture rotationally modulated activity induced by temperature inhomogeneities on the stellar surface. Although the flexibility of disjoint GPs likely captures other rotationally modulated effects such as convective blueshift and limb darkening, it will not capture short-term activity such as flares. We finally note that more seasons of high-cadence RVs will help mitigate the strong one-year alias present in our dataset, and will help determine the correct periods of potential non-transiting planets.
\section{Conclusion} \label{sec:conclusion}
In this work, we have developed two joint Gaussian process kernels which begin to take into account the expected wavelength dependence of stellar activity through a simple scaling relation. We apply our kernels to a dataset of AU Mic composed of RVs from multiple facilities, at wavelengths ranging from the visible to the K band. With our analyses, we report a refined mass of AU Mic b of $M_{b}=20.12_{-1.72}^{+1.57}\ M_{\oplus}$, and provide a $4.2\sigma$ mass measurement for the recently validated transiting planet AU Mic c of $M_{c}=9.60_{-2.31}^{+2.07}\ M_{\oplus}$, corresponding to a $5\sigma$ upper limit of $M_{c}\leq20.13\ M_{\oplus}$. We also identify additional peaks present in the activity-filtered RVs, but such periods require more evidence for a robust validation given the overall flexibility of our RV model with an unknown ephemeris.
In Section \ref{sec:disc_eccentricity}, we find our model is unable to robustly constrain the eccentricity of AU Mic b or c. In Section \ref{sec:disc_kernel_parameters}, we find the derived planetary semi-amplitudes for AU Mic b and c are moderately sensitive to the choice of kernel parameters, indicating careful attention must be paid when interpreting planetary masses from such a flexible model. Through injection and recovery tests in Section \ref{sec:injection_recovery}, we further validate our RV model by demonstrating our ability to recover planets down to $\approx$ 10 m\,s$^{-1}$ when the orbit's ephemeris is known. However, we find that the accuracy of the recovered semi-amplitudes is $\sim$50\% at 10 m\,s$^{-1}$. In Section \ref{sec:disc_utility}, we introduce a method to further leverage the ``RV-color'' correlation between the observations and the activity model by penalizing the objective function with an effective L2 regularization term.
\section{Acknowledgments} \label{sec:acknowledgements}
All data processed with \texttt{pychell} (iSHELL and CHIRON) were reduced on ARGO, a research computing cluster provided by the Office of Research Computing, and on the exo computer cluster, both at George Mason University, VA.
We thank all support astronomers, observers, and engineers at all facilities for helping enable the collection of the data presented in this paper.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community, where the iSHELL, HIRES, IRD, and SPIRou observations were recorded. We are most fortunate to have the opportunity to conduct observations from this mountain.
This work is supported by grants to Peter Plavchan from NASA (awards 80NSSC20K0251 and 80NSSC21K0349), the National Science Foundation (Astronomy and Astrophysics grants 1716202 and 2006517), and the Mount Cuba Astronomical Foundation.
Emily A. Gilbert also wishes to thank the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant \#1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work. Emily is thankful for support from GSFC Sellers Exoplanet Environments Collaboration (SEEC), which is funded by the NASA Planetary Science Division’s Internal Scientist Funding Model. The material is based upon work supported by NASA under award number 80GSFC21M0002.
This work is partly supported by JSPS KAKENHI Grant Number JP18H05439, JST PRESTO Grant Number JPMJPR1775, the Astrobiology Center of National Institutes of Natural Sciences (NINS) (Grant Number AB031010).
M.T. is supported by JSPS KAKENHI grant Nos. 18H05442, 15H02063, and 22000005.
The authors also wish to acknowledge funding from the Agencia Estatal de Investigaci\'on del Ministerio de Ciencia e Innovaci\'on (AEI-MCINN) under grant PID2019-109522GB-C53.
The authors also wish to thank the California Planet Search (CPS) collaboration for carrying out the HIRES observations recorded in 2020 presented in this work.
{\textsc{Minerva}}-Australis is supported by Australian Research Council LIEF Grant LE160100001, Discovery Grant DP180100972, Mount Cuba Astronomical Foundation, and institutional partners University of Southern Queensland, UNSW Australia, MIT, Nanjing University, George Mason University, University of Louisville, University of California Riverside, University of Florida, and The University of Texas at Austin.
CARMENES is an instrument at the Centro Astron\'omico Hispano-Alem\'an de Calar Alto (CAHA, Almer\'{\i}a, Spain).
CARMENES is funded by the German Max-Planck-Gesellschaft (MPG), the Spanish Consejo Superior de Investigaciones Cient\'{\i}ficas (CSIC), the European Union through FEDER/ERF FICTS-2011-02 funds, and the members of the CARMENES Consortium (Max-Planck-Institut f\"ur Astronomie, Instituto de Astrof\'{\i}sica de Andaluc\'{\i}a, Landessternwarte K\"{o}nigstuhl, Institut de Ci\`encies de l'Espai, Institut f\"ur Astrophysik G\"{o}ttingen, Universidad Complutense de Madrid, Th\"uringer Landessternwarte Tautenburg, Instituto de Astrof\'{\i}sica de Canarias, Hamburger Sternwarte, Centro de Astrobiolog\'{\i}a and Centro Astron\'omico Hispano-Alem\'an), with additional contributions by the Spanish Ministry of Economy, the German Science Foundation through the Major Research Instrumentation Programme and DFG Research Unit FOR2544 ``Blue Planets around Red Stars'', the Klaus Tschira Stiftung, the states of Baden-W\"urttemberg and Niedersachsen, and by the Junta de Andaluc\'{\i}a.
We acknowledge financial support from the Agencia Estatal de Investigaci\'on of the Ministerio de Ciencia, Innovaci\'on y Universidades and the ERDF through projects PID2019-109522GB-C5[1:4]/AEI/10.13039/501100011033, PGC2018-098153-B-C33, and the Centre of Excellence ``Severo Ochoa'' and ``Mar\'ia de Maeztu'' awards to the Instituto de Astrof\'isica de Canarias (CEX2019-000920-S), Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709), and Centro de Astrobiolog\'ia (MDM-2017-0737), and the Generalitat de Catalunya/CERCA programme.
This paper includes data collected by the NASA \textit{TESS} mission that are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the \textit{TESS} mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public \textit{TESS} data from pipelines at the \textit{TESS} Science Office and at the TESS Science Processing Operations Center \citep{jenkinsSPOC2016}.
Baptiste Klein acknowledges funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 865624, GPRV).
Eder Martioli acknowledges funding from the French National Research Agency (ANR)
under contract number ANR-18-CE31-0019 (SPlaSH).
\textit{Software:} pychell \citep{caleetal2019}, optimize\footnote{\url{https://optimize.readthedocs.io/en/latest/}}, Matplotlib \citep{matplotlib}, SciPy \citep{scipy}, NumPy \citep{numpy}, Numba \citep{numba}, corner \citep{2020zndo...3937526F}, plotly \citep{plotly}, Gadfly Matplotlib theme (\url{https://gist.github.com/JonnyCBB/c464d302fefce4722fe6cf5f461114ea}), emcee \citep{emcee}.
\section{Introduction}\label{intro}
During the last years, a large number of young and intermediate-age stellar clusters (with ages up to around two billion years) have been discovered in the Magellanic
Clouds (MCs) exhibiting extended main-sequence turnoffs \citep[eMSTOs,][]{Mackey07,Milone09,Li17,Milone18}. Among them, the youngest ones ($\tau$\,$\leq$\,700\,Ma)
also display split main sequences \citep[MSs,][]{Bastian17,Correnti17,Li17,Milone18}, similar to those observed in the old globular clusters of the Milky Way (MW).
These features are not peculiar to the MC clusters: they have recently been found in Galactic open clusters as well \citep{Marino18a,Cordoni18,Piatti19,Li19,Sun19}.
This phenomenon, which appears to be quite common, forces us to critically reconsider the assumption, held until now, that colour-magnitude diagrams (CMDs) of open clusters
can be reproduced by a single isochrone, as the signature of a unique and homogeneous stellar population. This has led, for instance,
to the use of the so-called isochrone cloud to fit the CMDs of clusters displaying eMSTOs \citep{Johnston19}.
It has been observed that the magnitude of the eMSTO/split MS phenomenon is related to the cluster age \citep{Niederhofer15,Cordoni18}, which would imply that an
evolutionary effect lies behind it. Stellar rotation is accepted as its main cause \citep{Marino18b,Sun19}. By comparing observed and synthetic CMDs, split MSs have been explained by the
coexistence of two stellar populations with different rotation rates \citep{D'Antona15,Milone16}. One of them, which includes around two-thirds of the total MS stars, consists of fast rotators
and forms the so-called red MS (rMS), while the other one, the blue MS (bMS), is composed of the slow-rotating stars. Additionally, in the area of the CMDs around the MSTO, fast rotators
are brighter than the slow ones. This picture has been confirmed directly from the measurement of projected rotational velocities ($v$\,sin\,i) among eMSTO stars
in both MCs \citep{Dupree17, Marino18b} and MW open clusters \citep{Sun19}.
However, rotation alone is not always able to explain the observed behaviour, and in certain situations
an age spread, resulting from a prolonged star formation history or multiple star formation episodes,
is also required \citep{Goudfrooij17,Gossage19}. Nonetheless, this is not the case for open clusters, whose mass is well below that
considered necessary to give rise to multiple populations \citep{Krumholz2019,Gratton19}.
Alternatively, according to \citet{D'Antona17}, the rotational braking due to tidal interactions between the components of close binaries
drawn from a single population of coeval stars may also produce a distribution of rotational velocities capable of reproducing the eMSTOs and split MSs observed in the CMDs.
More observations are necessary to elucidate and constrain the role of each of these mechanisms, or of any other yet to be identified, and thus
to fully understand this phenomenon.
Here we report the analysis of a large sample of stars, both on the MS and among the giants, in the nearby and poorly studied open cluster Stock\,2.
It is a dispersed cluster, discovered by \citet{Stock56}, located in the Orion spiral arm [$\alpha$(2000)\,=\,2h15m,
$\delta$(2000)\,=\,+59$^{\circ}$16$'$, $\ell$\,=\,133.334$^{\circ}$, $b$\,=\,-1.694$^{\circ}$\footnote{nominal coordinates according to the WEBDA database,
\url{https://webda.physics.muni.cz/}}],
roughly in the same line of sight as the double cluster $h$ $\&$ $\chi$
Persei, but considerably closer to the Sun. However, despite its proximity, physical parameters for this cluster such as age or chemical composition are not precisely known.
According to the literature \citep{Stock56,Krzeminski1967,Robichon1999,Spagna09} the distance to Stock\,2 ranges between 300 and 350 pc, although the most recent
studies, based on the second $Gaia$ data release, place it
at about 400 pc \citep{Cantat2018,Reddy2019}. The average reddening is $E(B-V)$\,$\approx$\,0.35, but it seems to
be variable across the cluster field \citep{Krzeminski1967,Spagna09,Ye21}.
Regarding the age, it is still not precisely known. On the one hand, the cluster might be coeval with or slightly older than the Pleiades \citep[100--275 Ma, e.g.][]{Krzeminski1967,Robichon1999,Reddy2019,Ye21}
but, on the other hand, \citet{Sciortino2000}, from the analysis of the cluster X-ray luminosity function, found it to have an age similar to that of the Hyades ($\tau\simeq$\,625 Ma). \citet{Spagna09}, based on
the shape of the TO region and the distribution of the giants on the CMD, reported an age within the 200--500 Ma range. Thus, the age of Stock\,2 is still a debated
issue and its determination remains a challenging task.
Recently, \citet{Reddy2019} performed the first detailed spectroscopic analysis of this cluster. They took high-resolution spectra of three red giants, from which they estimated
a solar-like mean metallicity ([Fe/H]=$-$0.06$\pm0.03$) and the chemical abundances for 23 elements.
\citet{Ye21} obtained a similar value ([Fe/H]=$-$0.04$\pm0.15$) from LAMOST mid-resolution spectra of almost 300 likely members. They also found that Stock\,2 is a massive
cluster ($\approx$\,4000\,M$_{\sun}$).
The present paper is part of the Stellar Population Astrophysics (SPA) project, an ongoing Large Programme running on the 3.6-m Telescopio Nazionale Galileo (TNG) at
the Roque de los Muchachos Observatory (La Palma, Spain). SPA is an ambitious project whose aim is to reveal the star formation and chemical enrichment history of
the Galaxy, obtaining an age-resolved chemical map of the solar neighbourhood and the Galactic thin disc. More than 500 nearby, representative stars are being observed
at high resolution in the optical and near-infrared bands by combining the High Accuracy Radial velocity Planet Searcher for the Northern hemisphere (HARPS-N) and GIANO-B spectrographs
\citep[see][for more details on SPA]{Origlia19}.
In this work, we combine high-resolution spectroscopy, archival photometry and the $Gaia$ early third data release \citep[$Gaia$-eDR3,][]{eDR3} in order to investigate the properties
of Stock\,2, paying special attention to the upper MS and the MSTO. The analysis of the stellar parameters, the CMDs, and the lithium abundance is of great importance to
constrain the cluster age. The paper is structured as follows.
In Sect.~\ref{sec_targets} we present our observations and explain the criteria followed to select our targets. Then, in Sect.~\ref{sec_spec_anal} we describe our
spectral analysis and display the results derived: radial velocities, atmospheric parameters and chemical abundances. The determination of the extinction and the
analysis of the CMDs are detailed in Sect.~\ref{sec_redd} and Sect.~\ref{sec_CMD}, respectively. The discussion and comparison of our results with the literature are conducted in
Sect.~\ref{sec_discuss}. Finally, we summarise our results and present our conclusions in Sect.~\ref{sec_concl}.
\section{Observations and targets selection}\label{sec_targets}
With the aim of studying the cluster and determining its properties, we observed a sample of representative stars among the bona-fide members (with an
assigned membership probability of $P$=1) from \citet{Cantat2018}. The only exception is the brightest giant, star g1, for which \citet{Cantat2018} report a
membership probability of $P$=0.8.
We initially targeted the giants, to determine the cluster metallicity and
detailed abundances, as we did for other clusters in SPA, for which we mainly selected red clump stars in order to have a sample as homogeneous as possible
\citep[see][]{Casali20,Zhang21}. These stars, orange circles in Fig.~\ref{fig_targets}, are labelled as `g' in Table~\ref{tab_obs}. By examining the $Gaia$-DR2 CMD
(since the $Gaia$-eDR3 was not available when we prepared our observations) we realised that the cluster exhibited an eMSTO/split MS, something that was not clearly
visible in pre-existing photometry due to field contamination. In order to study it, we also selected as targets the brightest stars in the upper MS, close to the turn-off (TO)
point (green triangles in Fig.~\ref{fig_targets} and labelled as `to' in Table~\ref{tab_obs}) as well as MS stars following three different sequences to sample the blue MS (bMS,
blue circles and `b'), red MS (rMS, red circles and `r') and the upper envelope of the main sequence, which is the region mostly populated by binary and multiple stars
(black circles and `u'). Throughout this paper, the stars in each of these series are numbered sequentially, beginning with the brightest one.
In total, we acquired high-resolution spectra for 46 stars in several observational runs which are described below (see Table~\ref{tab_obs}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./CMD_targets_paper_rev.pdf}
\caption{$G/(G_{\textrm{BP}}-G_{\textrm{RP}})$ diagram for Stock\,2. Members from \citet{Cantat2018} are marked with light brown dots.
Stars observed with CAOS in this work are represented with green triangles while those observed with HARPS-N appear as circles with different
colours, as explained in the text.}
\label{fig_targets}
\end{figure}
\subsection{Spectroscopy}
We used HARPS-N \citep{HARPS-N} to observe the ten cluster giants on November 5 and 6, 2018. HARPS-N is an \'{e}chelle spectrograph mounted at the 3.6-m TNG telescope at El Roque de los Muchachos Observatory
(La Palma, Spain). It is fibre-fed from the Nasmyth B focus and covers the wavelength range from 3870\,\AA{} to 6910\,\AA{} providing a resolving power of
$R=115\,000$. Later, still with the same equipment, we took spectra of 24 MS stars from 16 to 19 December 2018 and from 13 to 15 January 2019\footnote{
We used GIARPS, i.e. the combination of GIANO and HARPS-N; however, we use only the HARPS-N spectra here, as they are more suitable for the warm MS stars. The GIANO spectra
will be used in forthcoming papers.}. The instrument's pipeline was used to reduce these spectra.
We completed the TNG observations by collecting additional spectra for the 14 brightest stars of the upper MS around the TO point. Observations were
carried out between 29 and 31 October 2020 with the Catania Astrophysical Observatory Spectrograph \citep[CAOS,][]{CAOS1,CAOS2}.
CAOS is an \'{e}chelle spectrograph mounted on the 0.91-m telescope at the {\it M. G. Fracastoro station} (Serra La Nave, Mt Etna, Italy), which provides a resolution of $R=55\,000$. It is fibre-fed from the
Cassegrain focus and covers, in 81 orders, the wavelength range from 3875\,\AA{} to 6910\,\AA{}. These spectra were reduced by employing the {\scshape iraf}\footnote{{\scshape iraf}
is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under the cooperative agreement
with the National Science Foundation.} packages following standard procedures.
The log of the observations can be found in Table~\ref{tab_obs}. This table displays the spectrograph used, the heliocentric Julian day at mid exposure (HJD), the exposure time ($t_{\textrm{exp}}$, which is the sum of all exposures of the same star),
an estimate of the average signal-to-noise ratio per pixel achieved at 6500\,\AA\ ($S/N$) and the HD (or Tycho, or 2MASS) designation (Name).
\begin{table}[thb!]
\caption{Observation log. }
\begin{center}
\begin{tabular}{lcccc}
\hline\hline
Star & Name & HJD & $t_{\textrm{exp}}$ (s) & $S/N^{a}$ \\
\hline
\noalign{\smallskip}
\multicolumn{5}{c}{\bf HARPS-N}\\
\noalign{\smallskip}
\hline
b1 & HD\,13967 & 58469.459 & 3000 & 111 \\
b2 & HD\,13100 & 58469.420 & 3000 & 99 \\
b3 & TYC\,3698-2381-1 & 58471.426 & 3600 & 98 \\
b4 & TYC\,3699-1132-1 & 58472.519 & 7200 & 90 \\
b5 & TYC\,3698-2224-1 & 58499.390 & 3800 & 82 \\
b6 & TYC\,3698-483-1 & 58498.383 & 5400 & 58 \\
b7 & {\tiny J02192173+5927303$^{b}$} & 58470.375 & 9600 & 64 \\
b8 & {\tiny J02204032+5923204$^{b}$} & 58497.360 & 5400 & 30 \\
& & & & \\
r1 & HD\,12920 & 58499.473 & 1900 & 99 \\
r2 & TYC\,3698-861-1 & 58469.498 & 3000 & 93 \\
r3 & TYC\,3698-645-1 & 58471.506 & 3600 & 103 \\
r4 & TYC\,3698-2739-1 & 58471.644 & 4800 & 67 \\
r5 & TYC\,3697-479-1 & 58499.438 & 3800 & 78 \\
r6 & {\tiny J02134650+5923569$^{b}$} & 58498.450 & 5400 & 74 \\
r7 & TYC\,3697-1499-1 & 58470.514 & 9600 & 61 \\
r8 & {\tiny J02131100+5945191$^{b}$} & 59178.387 & 6300 & 46 \\
& & & & \\
u1 & HD\,13699 & 58469.381 & 2400 & 147 \\
u2 & TYC\,3698-1363-1 & 58469.537 & 3000 & 108 \\
u3 & TYC\,3698-1420-1 & 58471.562 & 4800 & 95 \\
u4 & TYC\,3698-1703-1 & 59131.598 & 3680 & 65 \\
u5 & {\tiny J02134467+5933039$^{b}$} & 59131.687 & 5520 & 73 \\
u6 & {\tiny J02162746+5954309$^{b}$} & 59131.748 & 4200 & 28 \\
& & & & \\
g1 & HD\,15498 & 58428.407 & 700 & 264 \\
g2 & HD\,14346 & 58428.390 & 700 & 232 \\
g3 & HD\,13437 & 58428.468 & 1400 & 346 \\
g4 & HD\,13207 & 58428.450 & 1400 & 255 \\
g5 & HD\,14403 & 58428.487 & 1400 & 282 \\
g6 & HD\,12650 & 58428.423 & 1400 & 248 \\
g7 & HD\,15665 & 58429.341 & 1400 & 242 \\
g8 & HD\,14415 & 58428.505 & 1400 & 255 \\
g9 & HD\,13655 & 58429.359 & 1400 & 211 \\
g10 & HD\,13134 & 58429.378 & 1400 & 192 \\
\hline
\noalign{\smallskip}
\multicolumn{5}{c}{\bf CAOS}\\
\noalign{\smallskip}
\hline
to1 & HD\,14183 & 59152.498 & 2400 & 164 \\
to2 & HD\,14161 & 59152.567 & 2700 & 153 \\
to3 & HD\,12184 & 59152.529 & 2700 & 164 \\
to4 & HD\,14025 & 59152.601 & 2700 & 89 \\
to5 & HD\,13518 & 59153.510 & 2400 & 114 \\
to6 & HD\,15240 & 59152.474 & 3000 & 70 \\
to7 & HD\,13591 & 59153.541 & 2700 & 97 \\
to8 & HD\,14946 & 59154.361 & 3000 & 69 \\
to9 & HD\,14579 & 59153.615 & 2700 & 81 \\
to10 & HD\,13909 & 59154.489 & 3000 & 153 \\
to11 & HD\,13688 & 59153.576 & 2700 & 80 \\
to12 & HD\,15315 & 59154.404 & 3000 & 115 \\
to13 & HD\,13899 & 59154.526 & 3000 & 153 \\
to14 & HD\,13606 & 59154.323 & 3000 & 45 \\
\hline
\end{tabular}
\end{center}
{\bf Notes.} $^{a}$ Signal-to-noise ratio per pixel at 6500\,\AA. $^{b}$ 2MASS designation.
\label{tab_obs}
\end{table}
\subsection{Archival data}
As mentioned above, we started our investigation based on the work conducted by \citet{Cantat2018}. From the analysis of $Gaia$-DR2 data they identified 1209 members
of Stock\,2. In the astrometric space, they located the cluster at ($\mu_{\alpha*}$, $\mu_{\delta}$, $\varpi$) = (15.966, $-$13.627, 2.641) $\pm$ (0.650, 0.591, 0.076), in units of mas\,yr$^{-1}$ for the proper motions and mas for the parallax,
clearly standing out from the background (as seen in Fig.~\ref{fig_mp}, which highlights the stars observed in this work).
According to the spatial distribution of its members (Fig.~\ref{fig_distribution}) \citet{Cantat2018} placed the cluster centre at $\alpha$(2000)\,=\,2h15m25.44s,
$\delta$(2000)\,=\,+59$^{\circ}$31$'$19.2$''$, at a distance $\Delta(\alpha,\delta)$=(25.4$^s$,15.3$^{\arcmin}$) from the nominal value. Stock\,2 is a dispersed cluster
and half of its members ($r_{\textrm{50}}$) are found within a radius of 1.03$^{\circ}$ around the centre, with the most distant ones positioned almost 4$^{\circ}$ away.
As a result, none of the photometric datasets existing in the literature cover its entire extension. For this reason, to complement our spectroscopy and the $Gaia$ data
we resorted to all-sky photometric surveys. We used $JHK_{\textrm{S}}$ magnitudes from the 2MASS catalogue in the near infrared wavelength \citep{2MASS}
as well as $BVg'r'i'$ optical bands from the APASS catalogue \citep{apass}. In some cases, for the brightest stars, for which the APASS photometry is not reliable, we
also made use of the values listed in the ASCC2.5 catalogue \citep{ASCC25}.
The combination of all these data allowed us to analyse the CMDs of the cluster, as will be explained later in Sect.~\ref{sec_CMD}.
All the astrometric and photometric data available for the stars observed in this work are summarised in Tables~\ref{tab_mp} and \ref{tab_fotom} in the appendix of the paper.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./mp_paper_rev.png}
\caption{Proper-motion diagram in the field of Stock\,2.
The ellipse (brown dashed line) is centred on the average proper motions of the cluster and has semi-axes equal to 4 times the standard deviations of the $\mu_{\alpha*}$ and $\mu_{\delta}$
distributions of the cluster members according to \citet{Cantat2018}. It represents the cluster extent in the astrometric space.
Grey dots are field stars whereas the rest of the symbols are the same as those in Fig.~\ref{fig_targets}.}
\label{fig_mp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./distrib_paper_rev.png}
\caption{Sky region around Stock\,2. Grey dots are the sources with $G$\,$\leq$\,16\,mag within a radius of 240\,$\arcmin$ around the cluster nominal
centre (magenta cross). Cluster members identified by \citet{Cantat2018} are represented by black points whereas the cluster centre derived from them is
the white cross. Coloured circles and green triangles are the objects observed in this work (see Fig.~\ref{fig_targets}) with the HARPS-N and CAOS
spectrographs, respectively. The overdensities visible at RA$\sim35^{\circ}$ and DEC$\sim57^{\circ}$ correspond to the $h$ \& $\chi$ Per double cluster.}
\label{fig_distribution}
\end{figure}
\section{Spectral analysis}\label{sec_spec_anal}
\subsection{Radial velocity}
We started the spectroscopic analysis by measuring the heliocentric radial velocity (RV) of the observed objects. For this purpose we cross-correlated our spectra
against synthetic templates by employing the task {\scshape fxcor} contained in the {\scshape iraf} packages.
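For illustration, the core of the method can be sketched in a few lines of Python (a schematic stand-in for {\scshape fxcor}, with placeholder arrays; the actual task also fits the CCF peak profile and applies the heliocentric correction):
\begin{verbatim}
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def ccf_rv(wave, flux, t_wave, t_flux, v_grid):
    """Velocity (km/s) maximizing the cross-correlation between an
    observed spectrum and a synthetic template."""
    ccf = np.empty_like(v_grid)
    for i, v in enumerate(v_grid):
        # Doppler-shift the template and resample it on the observed grid
        shifted = np.interp(wave, t_wave * (1.0 + v / C_KMS), t_flux)
        ccf[i] = np.sum((flux - flux.mean()) * (shifted - shifted.mean()))
    return v_grid[np.argmax(ccf)]

# e.g. rv = ccf_rv(wave, flux, t_wave, t_flux, np.arange(-200., 200., 0.5))
\end{verbatim}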
When examining the cross-correlation function (CCF) we identified some multiple systems (SB2 or SB3) among the stars forming our sample, namely r4, u1, and u2. Therefore,
in the upper sequence we found only two binaries out of the six candidates, although the remaining four could be single-lined systems (SB1).
Additionally, star u3 might also have a close companion since it shows a discrepant {\scshape RUWE\footnote{\url{https://www.cosmos.esa.int/web/gaia/II-124}}} $Gaia$ parameter for a single
source ($\approx$\,3.3).
For the remaining single stars, the results are listed in the last column of Table~\ref{tab_params}. As can be seen, the RVs show a large dispersion, with values ranging from $-$16.5
to $+$15.7\,km\,s$^{-1}$. This is likely a consequence of the
$v$\,sin\,$i$ distribution. Indeed, while for slow rotators (e.g. giants and stars in the lower main sequence) it is possible to determine
precise RVs, for rapid rotators it is not. This is especially relevant for the hottest stars in our sample, located on the upper MS close to the TO point.
These stars, with spectral types A, in addition to rotating rapidly, display far fewer features in their spectra, which broadens and reduces the intensity of the CCF peak.
For this reason, to calculate the average RV of the cluster we only took into account the stars with $v$\,sin\,$i$<50\,km\,s$^{-1}$. In this way, from 21 members, we derived
an average value of $RV$=7.5$\pm$3.3\,km\,s$^{-1}$. On the other hand, $Gaia$-DR2 (since the eDR3 does not provide new values) gives RVs for 194 objects among the members listed in \citet{Cantat2018}.
The average value, after applying a 3$\sigma$-clipping filter to ignore outliers, is $RV_{\textrm{GDR2}}$=9.5$\pm$3.3\,km\,s$^{-1}$ (which becomes 8.0\,km\,s$^{-1}$ if, instead,
the error-weighted mean is calculated). If we consider only the giants, the weighted average of our values is $RV$=7.9$\pm$1.4\,km\,s$^{-1}$ (where we have assumed the weighted standard deviation
as uncertainty), which is in close agreement with the above estimate.
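Both averaging recipes used above can be sketched as follows (a minimal example assuming arrays \verb|rv| and \verb|rv_err| of member velocities and errors):
\begin{verbatim}
import numpy as np

def clipped_mean(x, nsigma=3.0, max_iter=10):
    """Iterative sigma-clipped mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        m, s = x.mean(), x.std()
        keep = np.abs(x - m) <= nsigma * s
        if keep.all():
            break
        x = x[keep]
    return x.mean(), x.std()

def weighted_mean(x, err):
    """Inverse-variance weighted mean and weighted standard deviation."""
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(err, dtype=float) ** 2
    mean = np.sum(w * x) / np.sum(w)
    return mean, np.sqrt(np.sum(w * (x - mean) ** 2) / np.sum(w))
\end{verbatim}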
\subsection{Atmospheric parameters}\label{sec_params}
To determine the stellar atmospheric parameters of our targets we used the {\scshape rotfit} code \citep{Rotfit} adapted to the SPA project workframe, as
previously done \citep[see e.g.][]{ASCC123, Casali20}. The code provides us not only with atmospheric parameters such as effective temperature ($T_{\textrm{eff}}$),
surface gravity ($\log\,g$) and iron abundance ([Fe/H], as a proxy of the metallicity) but also with an estimate of the spectral type (SpT) and the projected rotational
velocity ($v\,\sin\,i$). It should be noted that the latter is a key parameter for the research conducted in this work.
{\scshape rotfit} is based on a $\chi^2$ minimization of the difference between the target spectrum and a grid of templates.
This difference is evaluated in 28 spectral segments of 100\,\AA{} each. Then, the final parameters are obtained by averaging the results of the individual regions, weighting them
according to the $\chi^2$ and the information contained in each spectral segment. As template spectra we selected a collection of high-resolution spectra of real stars
with well-known parameters taken with ELODIE ($R$\,=\,42\,000). This grid of templates is the same as that used in the $Gaia$-ESO Survey by the Catania node
\citep{Smiljanic14, Frasca15}. A more detailed description of our methodology can be found in \citet{ASCC123}.
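Schematically, the combination of the per-segment results can be written as below (a simplified sketch with a plain $1/\chi^2$ weighting; the actual {\scshape rotfit} weights also account for the information content of each segment):
\begin{verbatim}
import numpy as np

def combine_segments(values, chi2):
    """Average a parameter (e.g. Teff) measured in individual spectral
    segments, weighting each segment by the inverse of its chi-square."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(chi2, dtype=float)
    return np.sum(w * values) / np.sum(w)

# e.g. teff = combine_segments(teff_per_segment, chi2_per_segment)
\end{verbatim}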
For all the single stars, the results are displayed in Table~\ref{tab_params}.
We obtained for this cluster an average solar metallicity of [Fe/H]=0.00$\pm$0.08, calculated as the weighted mean of the values for the spectra analysed with {\scshape rotfit}. The
error reflects the standard deviation of the individual values around the cluster mean.
{\scshape rotfit} is optimised to be used with FGK-type targets. Therefore, for hotter stars we used a different approach based on a grid of synthetic spectra computed as described in
Sect.~\ref{sec_abund}, for which we adopted an Opacity Distribution Function (ODF) computed for solar abundances. To determine $T_{\rm eff}$\ and $\log g$\ we used the wings and the cores of the Balmer lines,
while a region around the \ion{Mg}{ii}$\lambda$4481 line was used to derive $v\sin i$. Owing to the rapid stellar rotation, spectral lines are very broad and shallow, and hence very difficult
to measure; we therefore chose to adopt [Fe/H]\,=\,0.
\begin{table*}[ht!]
\caption{Stellar parameters derived for the single stars.}
\label{tab_params}
\begin{center}
\begin{tabular}{lrcrlrr}
\hline\hline
Star & $T_{\textrm{eff}}$ (K)~~~~ & $\log\,g$ & [Fe/H]~~~~ & \ Sp T & $v \sin\,i$ (km\,s$^{-1}$) & RV (km\,s$^{-1}$) \\
\hline
b1 & 8500 $\pm$ 300 & 4.10 $\pm$ 0.20 & 0.00$^{a}$~~~~ & A1\,V$^{b}$ & 140 $\pm$ ~15 & 13.44 $\pm$ 2.69 \\
b2 & 8700 $\pm$ 200 & 4.00 $\pm$ 0.20 & 0.00$^{a}$~~~~ & A0\,V$^{b}$ & 34.3 $\pm$ 4.7 & 9.37 $\pm$ 0.68 \\
b3 & 7700 $\pm$ 300 & 4.07 $\pm$ 0.23 & $-$0.19 $\pm$ 0.18 & A7\,V & 280 $\pm$ ~30 & 15.65 $\pm$ 9.16 \\
b4 & 7800 $\pm$ 300 & 4.09 $\pm$ 0.21 & $-$0.21 $\pm$ 0.16 & A7\,V & 120 $\pm$ ~15 & 8.03 $\pm$ 2.85 \\
b5 & 7289 $\pm$ 252 & 4.05 $\pm$ 0.22 & $-$0.16 $\pm$ 0.13 & A9\,IV & 220 $\pm$ ~20 & 6.43 $\pm$ 8.04 \\
b6 & 6132 $\pm$ ~~91 & 4.11 $\pm$ 0.14 & 0.00 $\pm$ 0.10 & F9\,IV-V & 21.9 $\pm$ 0.7 & 8.47 $\pm$ 0.23 \\
b7 & 6092 $\pm$ ~~73 & 4.20 $\pm$ 0.10 & 0.07 $\pm$ 0.10 & F8\,V & 4.0 $\pm$ 1.0 & $-$4.27 $\pm$ 0.10 \\
b8 & 5841 $\pm$ ~~86 & 4.42 $\pm$ 0.12 & 0.06 $\pm$ 0.08 & G1\,V & 2.2 $\pm$ 1.7 & 5.74 $\pm$ 0.11 \\
& & & & & & \\
r1 & 8000 $\pm$ 250 & 3.90 $\pm$ 0.20 & 0.00$^{a}$~~~~ & A1\,V$^b$ & 250 $\pm$ 30 & $-$1.42 $\pm$ 6.35 \\
r2 & 8300 $\pm$ 300 & 3.80 $\pm$ 0.30 & 0.00$^{a}$~~~~ & A1\,V$^b$ & 230 $\pm$ 30 & 0.62 $\pm$ 3.09 \\
r3 & 8800 $\pm$ 300 & 3.90 $\pm$ 0.20 & 0.00$^{a}$~~~~ & A0$^b$ & 40 $\pm$ ~~9 & $-$3.69 $\pm$ 0.54 \\
r5 & 7607 $\pm$ 279 & 4.11 $\pm$ 0.20 & $-$0.09 $\pm$ 0.12 & F0\,III & 135 $\pm$ 15 & 13.75 $\pm$ 5.81 \\
r6 & 6851 $\pm$ 138 & 4.14 $\pm$ 0.11 & $-$0.07 $\pm$ 0.09 & F4\,V & 13.3 $\pm$1.0 & 9.65 $\pm$ 0.24 \\
r7 & 6332 $\pm$ 163 & 4.04 $\pm$ 0.15 & $-$0.06 $\pm$ 0.11 & F7\,IV & 42 $\pm$ ~~2 & 3.55 $\pm$ 0.83 \\
r8 & 6086 $\pm$ ~~73 & 4.20 $\pm$ 0.10 & 0.07 $\pm$ 0.10 & F8\,V & 8.5 $\pm$0.8 & 9.02 $\pm$ 0.14 \\ \\
& & & & & & \\
u3 & 7603 $\pm$ 299 & 4.01 $\pm$ 0.21 & $-$0.13 $\pm$ 0.12 & A8\,V & 53 $\pm$ ~~6 & 9.26 $\pm$ 0.86 \\
u4 & 8300 $\pm$ 300 & 4.10 $\pm$ 0.20 & 0.00$^{a}$~~~~ & B8\,V$^b$ & 85 $\pm$ 10 & $-$4.96 $\pm$ 0.29 \\
u5 & 6449 $\pm$ 152 & 4.09 $\pm$ 0.16 & $-$0.07 $\pm$ 0.11 & F6\,IV & 44 $\pm$ ~~1 & 3.38 $\pm$ 0.75 \\
u6 & 6534 $\pm$ 131 & 4.11 $\pm$ 0.15 & $-$0.05 $\pm$ 0.10 & F8\,V & 41 $\pm$ ~~1 & 2.27 $\pm$ 0.76 \\
& & & & & & \\
g1 & 4530 $\pm$ ~~86 & 2.14 $\pm$ 0.10 & 0.01 $\pm$ 0.09 & K1\,III & 1.6 $\pm$ 1.5 & 9.78 $\pm$ 0.12 \\
g2 & 4760 $\pm$ 111 & 2.69 $\pm$ 0.14 & 0.02 $\pm$ 0.10 & K0\,III & 7.6 $\pm$ 0.6 & 8.11 $\pm$ 0.13 \\
g3 & 4937 $\pm$ 114 & 2.51 $\pm$ 0.35 & 0.04 $\pm$ 0.08 & G8\,III & 6.1 $\pm$ 0.7 & 8.36 $\pm$ 0.13 \\
g4 & 4977 $\pm$ 117 & 2.82 $\pm$ 0.18 & 0.04 $\pm$ 0.08 & G8\,III & 1.7 $\pm$ 1.5 & 8.37 $\pm$ 0.11 \\
g5 & 5061 $\pm$ ~~56 & 2.99 $\pm$ 0.19 & 0.04 $\pm$ 0.07 & G8\,III & 5.4 $\pm$ 1.2 & 9.20 $\pm$ 0.12 \\
g6 & 5002 $\pm$ 110 & 2.96 $\pm$ 0.20 & 0.03 $\pm$ 0.07 & G8\,III & 1.9 $\pm$ 1.6 & 7.10 $\pm$ 0.10 \\
g7 & 5058 $\pm$ ~~56 & 2.97 $\pm$ 0.20 & 0.03 $\pm$ 0.07 & G8\,III & 2.7 $\pm$ 1.6 & 8.45 $\pm$ 0.11 \\
g8 & 5065 $\pm$ ~~56 & 3.00 $\pm$ 0.19 & $-$0.03 $\pm$ 0.09 & G8\,III & 5.2 $\pm$ 1.3 & 7.86 $\pm$ 0.11 \\
g9 & 5062 $\pm$ ~~56 & 3.00 $\pm$ 0.19 & 0.00 $\pm$ 0.09 & G8\,III & 4.6 $\pm$ 1.4 & 4.38 $\pm$ 0.11 \\
g10 & 5066 $\pm$ ~~56 & 3.01 $\pm$ 0.19 & $-$0.03 $\pm$ 0.09 & G8\,III & 4.5 $\pm$ 0.9 & 8.66 $\pm$ 0.11 \\
& & & & & & \\
to1$^c$ & 9300 $\pm$ 300 & 4.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A1\,V$^{b}$ & 80 $\pm$ 10 & 4.0 $\pm$ 11.7 \\
to2 & 9100 $\pm$ 300 & 4.3 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A2\,IV$^{b}$ & 60 $\pm$ 10 & 6.7 $\pm$ ~~7.0 \\
to3 & 9000 $\pm$ 300 & 4.1 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A2\,V$^{b}$ & 133 $\pm$ 10 & 2.8 $\pm$ ~~7.7 \\
to4 & 8300 $\pm$ 400 & 3.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A1\,V$^{b}$ & 199 $\pm$ 20 & 4.0 $\pm$ 10.0 \\
to5 & 9000 $\pm$ 400 & 4.0 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A1\,V$^{b}$ & 108 $\pm$ 10 & 3.0 $\pm$ ~~9.6 \\
to6 & 9100 $\pm$ 300 & 4.3 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,V$^{b}$ & 245 $\pm$ 25 & 9.3 $\pm$ 10.6 \\
to7 & 8800 $\pm$ 400 & 4.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A1\,IV$^{b}$ & 165 $\pm$ 15 & 10.3 $\pm$ ~~1.7 \\
to8 & 8800 $\pm$ 300 & 4.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A1\,V$^{b}$ & 94 $\pm$ 10 & 5.6 $\pm$ ~~5.9 \\
to9 & 8000 $\pm$ 400 & 3.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A3\,V$^{b}$ & 211 $\pm$ 20 & 0.4 $\pm$ 21.5 \\
to10 & 9100 $\pm$ 300 & 4.4 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,IV$^{b}$ & 83 $\pm$ 10 & 7.4 $\pm$ ~~4.8 \\
to11 & 8800 $\pm$ 300 & 3.9 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,IV$^{b}$ & 11 $\pm$ ~~5 & 8.3 $\pm$ ~~0.1 \\
to12 & 8800 $\pm$ 400 & 4.0 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,V$^{b}$ & 228 $\pm$ 20 & 4.5 $\pm$ ~~8.3 \\
to13 & 8500 $\pm$ 400 & 3.6 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,V$^{b}$ & 236 $\pm$ 25 & 8.2 $\pm$ ~~8.3 \\
to14 & 8800 $\pm$ 300 & 4.5 $\pm$ 0.2 & 0.00$^{a}$~~~~ & A0\,V$^{b}$ & 144 $\pm$ 14 & 7.5 $\pm$ ~~6.5 \\
\hline
\end{tabular}
\end{center}
{\bf Notes.} $^{a}$ Solar ODF adopted. $^{b}$ Spectral types adopted from SIMBAD. $^{c}$ Possible SB2 system.
\end{table*}
\subsection{Chemical abundances}\label{sec_abund}
In order to calculate the elemental abundances of our (single) targets we made use of the spectral synthesis technique \citep{Catanzaro11,Catanzaro13}, as we already did
within the SPA project previously \citep{ASCC123}.
As a starting point, we took the atmospheric parameters obtained with {\scshape rotfit} to compute 1D Local Thermodynamic Equilibrium (LTE) atmospheric models with the {\scshape atlas9} code \citep{Kurucz93a,Kurucz93b}.
Then, we generated the corresponding synthetic spectra by using the radiative transfer code {\scshape synthe} \citep{Kurucz81}. As an optimization code we exploited ad hoc {\scshape idl} routines
based on the {\it amoeba} minimization algorithm
to find the best solution by minimizing the $\chi^2$ of the differences between the synthetic and observed spectra. At this point, to check the validity of the input parameters, we let them vary.
We always found that the best solution is consistent with the {\scshape rotfit} values reported in
Table~\ref{tab_params}, so we adopted them for the subsequent analysis. Once the parameters were checked, we proceeded to determine the abundances. We focused our analysis on 39 spectral regions of
50\,\AA{} each between 4400 and 6800\,\AA{}. In this way we derived the chemical abundances of 22 elements of atomic number up to 56, namely, C, O, Na, Mg, Al, Si, S, Ca, Sc, Ti, V, Cr, Mn,
Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, and Ba. For the hottest stars, those around the TO point observed with CAOS, it was impossible to derive reliable abundances. These A-type stars, with effective temperatures
above 8000\,K, rotate at moderate-to-high velocities, which prevents the analysis of the few spectral lines observed in their spectra. In fact, the bluest part of the spectra is not sufficiently well exposed even
for classification purposes, so we took the spectral types from the SIMBAD database.
Individual abundances for each star are listed, according to the standard notation $A($X$)= \log\,$[n(X)/n(H)]\,+\,12, in Tables~\ref{abund_ms} and \ref{abund_gig} for MS stars and giants, respectively. Additionally,
the cluster mean abundances for each element, in terms of [X/H], are reported in Table~\ref{tab_abund}. They have been calculated by means
of the weighted average of each star, using the individual errors as weight. The abundances are expressed referring to the solar value
that we obtained by applying the same procedure to a HARPS-N spectrum of Ganymede \citep[see table 5 in][]{ASCC123}.
As far as iron is concerned, excluding the hottest and fast-rotating stars, for which we cannot measure its abundance, we found an average [Fe/H]=$-$0.13$\pm$0.08. This value is slightly lower than that
derived by using {\scshape rotfit} but still compatible within the errors. In any case, for clarity's sake,
hereinafter we adopted the weighted mean of both values (obtained from {\scshape rotfit} and {\scshape synthe}, respectively)
as the iron content of the cluster, i.e. [Fe/H]=$-$0.07$\pm$0.06.
We find that abundances derived from giants and dwarfs are compatible within the errors for all the elements except for Ba and Sr, which are clearly overabundant in giants
(0.48 and 0.38\,dex, respectively), and Co, which is only marginally overabundant. For the remaining elements no significant discrepancies are seen: only for Na, V, and Cu are the differences $\geq$\,0.15\,dex, and even these values are still consistent with each other.
Stock\,2 shows solar weighted-mean ratios for $\alpha$-elements ([$\alpha$/Fe]=0.04$\pm$0.05, without including the O) and iron-group elements ([X/Fe]=0.03$\pm$0.03)
while for the heaviest elements, without taking into account Sr and Ba, the cluster exhibits a supersolar ratio ([$s$/Fe]=0.17$\pm$0.04).
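In terms of bookkeeping, the quantities reported in Tables~\ref{abund_ms}--\ref{tab_abund} are related as follows (a trivial sketch; the example values are hypothetical):
\begin{verbatim}
def x_over_h(a_star, a_sun):
    """[X/H] from absolute abundances A(X) = log n(X)/n(H) + 12."""
    return a_star - a_sun

def x_over_fe(xh, feh):
    """[X/Fe] = [X/H] - [Fe/H]."""
    return xh - feh

# e.g., with hypothetical A(Ba) values for a star and the Ganymede
# reference: ba_h = x_over_h(2.1, 2.2); ba_fe = x_over_fe(ba_h, -0.13)
\end{verbatim}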
\begin{table}[ht]
\caption{Average chemical abundances ([X/H]) for Stock\,2 obtained with {\scshape synthe}.} \label{tab_abund}
\begin{center}
\begin{tabular}{lccc}
\hline\hline
Element & Total & MS stars & Giants \\
\hline
C & $-$0.08 $\pm$ 0.05 & $-$0.08 $\pm$ 0.05 & \dots \\
O & $-$0.20 $\pm$ 0.03 & $-$0.20 $\pm$ 0.03 & \dots \\
Na & $+$0.14 $\pm$ 0.14 & $+$0.08 $\pm$ 0.14 & $+$0.23 $\pm$ 0.14 \\
Mg & $-$0.20 $\pm$ 0.10 & $-$0.25 $\pm$ 0.10 & $-$0.15 $\pm$ 0.10 \\
Al & $-$0.13 $\pm$ 0.15 & $-$0.18 $\pm$ 0.16 & $-$0.12 $\pm$ 0.15 \\
Si & $+$0.05 $\pm$ 0.08 & $+$0.03 $\pm$ 0.09 & $+$0.07 $\pm$ 0.09 \\
S & $+$0.05 $\pm$ 0.10 & $+$0.00 $\pm$ 0.11 & $+$0.14 $\pm$ 0.11 \\
Ca & $-$0.04 $\pm$ 0.09 & $-$0.02 $\pm$ 0.09 & $-$0.09 $\pm$ 0.10 \\
Sc & $+$0.01 $\pm$ 0.13 & $+$0.00 $\pm$ 0.13 & $+$0.03 $\pm$ 0.14 \\
Ti & $-$0.06 $\pm$ 0.12 & $-$0.08 $\pm$ 0.12 & $-$0.01 $\pm$ 0.13 \\
V & $+$0.06 $\pm$ 0.10 & $+$0.14 $\pm$ 0.11 & $-$0.03 $\pm$ 0.11 \\
Cr & $+$0.02 $\pm$ 0.15 & $-$0.04 $\pm$ 0.15 & $+$0.09 $\pm$ 0.15 \\
Mn & $-$0.07 $\pm$ 0.15 & $-$0.09 $\pm$ 0.16 & $-$0.05 $\pm$ 0.15 \\
Fe & $-$0.13 $\pm$ 0.08 & $-$0.15 $\pm$ 0.09 & $-$0.10 $\pm$ 0.09 \\
Co & $+$0.01 $\pm$ 0.05 & $+$0.08 $\pm$ 0.06 & $-$0.09 $\pm$ 0.06 \\
Ni & $-$0.04 $\pm$ 0.10 & $-$0.01 $\pm$ 0.10 & $-$0.07 $\pm$ 0.11 \\
Cu & $-$0.22 $\pm$ 0.10 & $-$0.16 $\pm$ 0.10 & $-$0.31 $\pm$ 0.11 \\
Zn & $-$0.16 $\pm$ 0.09 & $-$0.20 $\pm$ 0.10 & $-$0.13 $\pm$ 0.09 \\
Sr & $+$0.09 $\pm$ 0.15 & $-$0.02 $\pm$ 0.15 & $+$0.46 $\pm$ 0.16 \\
Y & $+$0.11 $\pm$ 0.04 & $+$0.12 $\pm$ 0.04 & $+$0.06 $\pm$ 0.06 \\
Zr & $+$0.00 $\pm$ 0.14 & $+$0.01 $\pm$ 0.15 & $-$0.01 $\pm$ 0.14 \\
Ba & $-$0.11 $\pm$ 0.09 & $-$0.20 $\pm$ 0.09 & $+$0.18 $\pm$ 0.09 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Reddening and SED fitting}\label{sec_redd}
With the aim of determining the interstellar extinction ($A_V$) of our sources, as well as the luminosity ($L$), we resorted to the spectral energy distribution (SED)
fitting method. From optical and NIR photometric data publicly available we built the corresponding SED, which was fitted with BT-Settl synthetic spectra \citep{Allard2014}. For each target, we assumed
its $Gaia$-eDR3 parallax as well as the atmospheric parameters ($T_{\textrm{eff}}$ and $\log\,g$) obtained in
Sect.~\ref{sec_params}, leaving the stellar radius ($R$) and $A_V$ as free parameters. These parameters were then obtained by $\chi^2$ minimization and the stellar luminosity was calculated as
$L$=4\,$\pi$\,$R^2$\,$\sigma$\,$T_{\textrm{eff}}^4$. An example of this fitting is shown in Fig.~\ref{fig_sed}.
The errors on $A_V$ and $R$ are found by the minimization procedure considering the 1-$\sigma$ confidence level of the $\chi^2$ map, but we have also taken the error on $T_{\rm eff}$\ into account.
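A minimal sketch of this scheme is shown below (the model-flux function is a placeholder for the reddened, scaled BT-Settl fluxes integrated over the photometric bands):
\begin{verbatim}
import numpy as np

SIGMA_SB = 5.670374e-8            # W m^-2 K^-4
R_SUN, L_SUN = 6.957e8, 3.828e26  # m, W

def luminosity(r_rsun, teff):
    """L = 4 pi R^2 sigma Teff^4, returned in solar units."""
    r = r_rsun * R_SUN
    return 4.0 * np.pi * r ** 2 * SIGMA_SB * teff ** 4 / L_SUN

def sed_fit(obs_flux, obs_err, model_flux, r_grid, av_grid):
    """Brute-force chi^2 minimization over radius and extinction;
    model_flux(r, av) must return fluxes on the observed bands."""
    chi2 = np.array([[np.sum(((obs_flux - model_flux(r, av)) / obs_err) ** 2)
                      for av in av_grid] for r in r_grid])
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return r_grid[i], av_grid[j], chi2
\end{verbatim}
As a consistency check, \verb|luminosity(12.12, 5065)| returns $\approx$87\,L$_{\sun}$, in line with the values listed for star g8 in Tables~\ref{tab_params} and \ref{tab_sed}.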
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./sed3.png}
\caption{{\it Top:} Example of a SED fitting (star g8). {\it Bottom:} $\chi^2$-contour map of the fitting. The red contour corresponds to the 1-$\sigma$
confidence level.}
\label{fig_sed}
\end{figure}
The $A_V$ values thus obtained are reported in Table~\ref{tab_sed}. In total, we provide results for 42 stars, whose $A_V$ range from 0.37 to 1.93 mag, with an average
of $A_V$=0.84$\pm$0.34, where the error is the standard deviation. This extinction corresponds to $E(B-V)$=0.27$\pm$0.11 when assuming a standard reddening law with $R_V$=3.1.
The high dispersion confirms the existence of noticeable differential reddening across the observed field, as described in previous studies \citep{Krzeminski1967,Spagna09}. Indeed, our
value is compatible within the errors with the most commonly accepted one for the cluster, $E(B-V)\approx$0.35 \citep{Ye21}.
Alternatively, we evaluated the reddening from the colour excess definition, that is, by comparing observed and intrinsic colours for each star.
For this purpose we used the 2MASS photometric data shown in Table~\ref{tab_fotom}, since they are more suitable than the optical ones as they are less affected by the extinction.
The intrinsic colours were adopted from the spectral types (Table~\ref{tab_params}) according to the calibrations of \citet{St09}. In this way, from 43 stars, we obtained
an average cluster reddening of $E(B-V)$=0.26$\pm$0.11, which shows an excellent agreement with the value derived from the SED fitting.
This agreement is especially remarkable considering that photometric calibrations do not take the effect of the rotational velocity on the colour
into account.
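Schematically, and assuming a standard $R_V$=3.1 law for which $E(J-K_{\textrm{S}})/E(B-V)\approx0.52$ (the exact ratio depends on the adopted extinction coefficients), this second estimate reads:
\begin{verbatim}
def ebv_from_jk(j_obs, k_obs, jk_intrinsic, ratio=0.52):
    """E(B-V) from the 2MASS colour excess E(J-Ks); the default ratio
    E(J-Ks)/E(B-V) ~ 0.52 assumes a standard R_V = 3.1 law."""
    e_jk = (j_obs - k_obs) - jk_intrinsic
    return e_jk / ratio
\end{verbatim}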
\begin{table}
\caption{Results of the SED fitting.\label{tab_sed}}
\begin{center}
\begin{tabular}{lccr}
\hline\hline
Star & $A_V$ (mag) & $R$ (R$_{\sun}$) & \multicolumn{1}{c}{$L$ (L$_{\sun}$)} \\
\hline
b1 & 0.65 $\pm$ 0.17 & 2.23 $\pm$ 0.03 & 23.4 $\pm$ 3.2 \\
b2 & 0.77 $\pm$ 0.08 & 2.12 $\pm$ 0.04 & 23.2 $\pm$ 2.1 \\
b3 & 0.42 $\pm$ 0.22 & 1.85 $\pm$ 0.02 & 10.9 $\pm$ 1.7 \\
b4 & 0.73 $\pm$ 0.24 & 1.80 $\pm$ 0.03 & 10.7 $\pm$ 1.6 \\
b5 & 0.77 $\pm$ 0.25 & 1.64 $\pm$ 0.03 & 6.8 $\pm$ 0.9 \\
b6 & 0.45 $\pm$ 0.10 & 1.15 $\pm$ 0.02 & 1.7 $\pm$ 0.1 \\
b7 & 0.47 $\pm$ 0.09 & 1.08 $\pm$ 0.02 & 1.4 $\pm$ 0.1 \\
b8 & 0.46 $\pm$ 0.11 & 0.98 $\pm$ 0.02 & 1.0 $\pm$ 0.1 \\
& & & \\
r1 & 0.53 $\pm$ 0.25 & 2.84 $\pm$ 0.05 & 29.7 $\pm$ 4.4 \\
r2 & 0.85 $\pm$ 0.19 & 2.45 $\pm$ 0.03 & 25.7 $\pm$ 3.7 \\
r3 & 1.28 $\pm$ 0.09 & 2.37 $\pm$ 0.03 & 30.2 $\pm$ 4.1 \\
r5 & 1.06 $\pm$ 0.22 & 1.68 $\pm$ 0.03 & 8.5 $\pm$ 1.2 \\
r6 & 0.94 $\pm$ 0.17 & 1.55 $\pm$ 0.02 & 4.8 $\pm$ 0.3 \\
r7 & 0.86 $\pm$ 0.16 & 1.21 $\pm$ 0.02 & 2.1 $\pm$ 0.2 \\
r8 & 0.82 $\pm$ 0.07 & 1.06 $\pm$ 0.01 & 1.4 $\pm$ 0.1 \\
& & & \\
u3 & 1.29 $\pm$ 0.23 & 2.55 $\pm$ 0.04 & 19.6 $\pm$ 3.0 \\
u4 & 1.93 $\pm$ 0.20 & 1.91 $\pm$ 0.03 & 15.5 $\pm$ 2.2 \\
u5 & 1.23 $\pm$ 0.16 & 1.77 $\pm$ 0.02 & 4.9 $\pm$ 0.4 \\
u6 & 1.51 $\pm$ 0.10 & 1.23 $\pm$ 0.02 & 2.5 $\pm$ 0.2 \\
& & & \\
g1 & 0.40 $\pm$ 0.30 & 29.84 $\pm$ 1.55 & 337.6 $\pm$ 31.1 \\
g2 & 0.58 $\pm$ 0.31 & 24.85 $\pm$ 0.48 & 285.4 $\pm$ 27.2 \\
g3 & 0.66 $\pm$ 0.27 & 21.27 $\pm$ 0.30 & 242.6 $\pm$ 22.5 \\
g4 & 0.82 $\pm$ 0.26 & 17.36 $\pm$ 0.23 & 166.9 $\pm$ 15.7 \\
g5 & 0.53 $\pm$ 0.13 & 14.99 $\pm$ 0.20 & 132.7 $\pm$ 5.9 \\
g6 & 1.15 $\pm$ 0.25 & 18.30 $\pm$ 0.21 & 188.5 $\pm$ 16.6 \\
g7 & 0.85 $\pm$ 0.12 & 15.62 $\pm$ 0.24 & 144.0 $\pm$ 6.7 \\
g8 & 0.37 $\pm$ 0.11 & 12.12 $\pm$ 0.17 & 86.8 $\pm$ 3.9 \\
g9 & 1.43 $\pm$ 0.08 & 15.78 $\pm$ 0.26 & 147.0 $\pm$ 6.8 \\
g10 & 0.90 $\pm$ 0.08 & 12.33 $\pm$ 0.13 & 90.5 $\pm$ 3.9 \\
& & & \\
to2 & 0.78 $\pm$ 0.14 & 4.58 $\pm$ 0.05 & 129.3 $\pm$ 16.9 \\
to3 & 0.59 $\pm$ 0.15 & 4.38 $\pm$ 0.08 & 113.3 $\pm$ 15.1 \\
to4 & 0.65 $\pm$ 0.34 & 4.50 $\pm$ 0.09 & 86.4 $\pm$ 16.7 \\
to5 & 1.00 $\pm$ 0.20 & 4.42 $\pm$ 0.09 & 115.2 $\pm$ 20.5 \\
to6 & 0.37 $\pm$ 0.17 & 3.11 $\pm$ 0.07 & 59.8 $\pm$ 7.9 \\
to7 & 1.04 $\pm$ 0.22 & 4.39 $\pm$ 0.06 & 104.0 $\pm$ 18.9 \\
to8 & 0.48 $\pm$ 0.17 & 3.54 $\pm$ 0.05 & 67.6 $\pm$ 9.1 \\
to9 & 0.77 $\pm$ 0.36 & 4.59 $\pm$ 0.08 & 77.8 $\pm$ 15.5 \\
to10 & 0.90 $\pm$ 0.13 & 3.96 $\pm$ 0.07 & 96.9 $\pm$ 12.7 \\
to11 & 1.06 $\pm$ 0.19 & 4.22 $\pm$ 0.12 & 95.9 $\pm$ 13.3 \\
to12 & 0.61 $\pm$ 0.24 & 3.62 $\pm$ 0.10 & 70.7 $\pm$ 12.9 \\
to13 & 0.93 $\pm$ 0.27 & 4.13 $\pm$ 0.11 & 80.2 $\pm$ 15.1 \\
to14 & 1.23 $\pm$ 0.22 & 4.03 $\pm$ 0.15 & 87.8 $\pm$ 12.4 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Colour-magnitude diagrams}\label{sec_CMD}
With the aim of investigating the age of the cluster we combined archival photometry with the spectroscopy obtained in this work.
We made use of the most widespread procedure, the so-called isochrone-fitting method, which consists of finding the age-dependent model (i.e. the isochrone) that best
reproduces the evolutionary snapshot of the cluster reflected in its CMD. As a first step, it was necessary to construct the CMD. We did so in three different
photometric systems (optical, 2MASS and $Gaia$-eDR3), highlighting our targets in Fig.~\ref{CMD} according to the criteria described in Sect.~\ref{sec_targets}.
We took advantage of the reddening previously obtained
($E(B-V)$=0.27, Sect.~\ref{sec_redd}) to draw the following diagrams: $M_V$/$(B-V)_0$, $M_{K_{\textrm{S}}}$/$(J-K_{\textrm{S}})_0$ and $G$/$(G_{BP}-G_{RP})$.
Individual distances,
derived from the inversion of their parallaxes, were also taken into account. Individual zero-point offset corrections, with an average value around $-$33\,$\mu$as,
were applied to the published $Gaia$-eDR3 parallaxes following the recommendations outlined by \citet{Lindegren21}.
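A minimal sketch of the conversion to absolute magnitudes is given below; for simplicity it uses the average zero-point offset of $-$33\,$\mu$as in place of the individual corrections:
\begin{verbatim}
import numpy as np

def abs_mag(m_app, plx_mas, extinction, zp_mas=-0.033):
    """Absolute magnitude from an apparent magnitude, a parallax (mas)
    corrected for the zero-point offset, and an extinction term."""
    d_pc = 1000.0 / (plx_mas - zp_mas)  # distance from corrected parallax
    return m_app - extinction - 5.0 * np.log10(d_pc) + 5.0
\end{verbatim}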
Then, in a second step, we drew PARSEC isochrones \citep{PARSEC} for different ages computed at the metallicity found in this work ([Fe/H]=$-$0.07, see Sect.~\ref{sec_abund}).
With the intention of ensuring the reliability of the fit, we selected, among the list of members identified by \citet{Cantat2018}, only those with a membership probability
sufficiently high (i.e. $P\geq$0.7). Additionally, on this sample we imposed a quality cut, taking only the objects whose parallax error is below 0.1 mas, i.e. with a relative uncertainty
of less than 5$\%$. In total we considered 1016 cluster
members with $Gaia$-eDR3 photometry. We did a cross-match of our member list with the APASS \citep{apass} and 2MASS \citep{2MASS} catalogues and then we selected
only the stars with good-quality photometry. In the first case this meant stars with errors on both $V$ and $(B-V)$ <\,0.1 mag, while in the second case,
just the stars without any `$U$' photometric flag assigned. In total 409 and 955 objects were retrieved, respectively. The resulting diagrams are displayed in
Fig.~\ref{CMD}.
When building the first CMD ($M_V$/$(B-V)_0$) we immediately noticed the anomalous position of the brightest stars, among which were many of our targets.
For these stars the APASS photometry provides errors above one magnitude, or even leaves them unquantified.
To remedy this we resorted to the ASCC2.5 catalogue \citep{ASCC25}, from which we took $V$ and $(B-V)$ for stars brighter than $V$=10,
after scaling both photometric datasets.\footnote{By employing almost a hundred stars with good-quality photometry in both catalogues, we found average
differences (ASCC2.5 minus APASS) of $\Delta\,V$=$-$0.040 and $\Delta(B-V)$=$-$0.005 mag.}
Then, we dereddened the CMD (left panel of Fig.~\ref{CMD}) by applying individual corrections to the stars for which we have spectra and the average
value to the rest of stars.
Finally, we plotted the isochrone that best reproduces the CMD based on a visual inspection, from which we obtained for the cluster log\,$\tau$=8.65$\pm$0.15 (equivalent to an age of 450$\pm$150\,Ma). In this case, the error reflects the interval of
isochrones that gives a good fit. With this age the MSTO stellar mass is $\approx$2.8\,M$_{\sun}$. In general, the stars occupy positions close to the isochrone, and only the TO
stars seem to lie slightly away from it.
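The correspondence between log\,$\tau$ and the quoted age is straightforward:
\begin{verbatim}
log_tau, dlog = 8.65, 0.15
age = 10.0 ** log_tau / 1e6          # ~447 Ma
lo = 10.0 ** (log_tau - dlog) / 1e6  # ~316 Ma
hi = 10.0 ** (log_tau + dlog) / 1e6  # ~631 Ma
# i.e. ~450 Ma, with a (slightly asymmetric) range rounded to +/- 150 Ma
\end{verbatim}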
Regarding the 2MASS CMD, the fit is quite good and all stars match the isochrone rather well, with the exception of star g1. It is the brightest in the cluster and
shows a position away from the rest of the giants. Being so bright, it is close to saturation, and its photometry, flagged in the catalogue as `$EDD$', has errors of around 0.2 mag in each
band. Therefore, its anomalous $(J-K)$ colour could simply be an instrumental effect. Some residual dispersion is still observed
for the MS stars, although the correction for reddening has been applied; moreover, in the NIR the reddening is lower than at optical wavelengths and should play a minor role in the CMD.
After the reddening correction, no clear eMSTO/split MS is apparent in the CMD. The giants show a dispersion in magnitude greater than would be expected from their
atmospheric parameters, which are very similar to one another.
In the last diagram, the $Gaia$-eDR3 CMD, since the dereddening of the $Gaia$ photometry is not a trivial task, the isochrone (and not the stars as in the previous CMDs)
was reddened using the average extinction obtained in Sect.~\ref{sec_redd}.
A distance modulus of 7.87, which corresponds to the distance derived by \citet{Cantat2018}, was applied.
The fit is also good and stars lie along the isochrone.
\begin{figure*}
\centering
\includegraphics[width=18cm]{./CMD_paper_ov2.pdf}
\caption{Colour-magnitude diagrams for Stock\,2 in three different photometric systems: \textbf{Left:} $M_V$/$(B-V)_0$, photometric data from the APASS
catalogue; \textbf{Centre:} $M_{K_{\textrm{S}}}$/$(J-K_{\textrm{S}})_0$ (2MASS) and \textbf{Right:} $G$/$(G_{BP}-G_{RP})$ ($Gaia$-eDR3). Colours and symbols are the
same as those in Fig.~\ref{fig_targets}. The green line and the shaded area are the best-fitting isochrone within the uncertainties (log\,$\tau$=8.65$\pm$0.15).}
\label{CMD}
\end{figure*}
\section{Discussion}\label{sec_discuss}
One of the objectives of this research was to determine the age of the cluster.
Based on the $Gaia$-eDR3 individual parallaxes of the cluster members and the extinction derived from the SED fitting, we were able to build
suitable CMDs, from which the cluster age was obtained via the isochrone-fitting method.
By analysing the dereddened 2MASS CMD, which is less affected by interstellar dust than the optical ones used in past works, we can assess that Stock\,2 is a
moderately young open cluster of 450$\pm$150\,Ma. It is therefore somewhat younger than the Hyades and clearly older than the Pleiades. This supports the results of
\citet{Spagna09} and \citet{Sciortino2000} over those of older studies \citep[e.g.][]{Krzeminski1967}.
The RVs obtained by us are, in general, compatible within the errors with those found in the literature, as shown in Table~\ref{tab_comp_rv} for the stars
in common with \citet{Me08}, who measured RVs for red giants in open clusters, and \citet{Reddy2019}. Although \citet{Me08} claimed binarity for g3 and
g9, we have not seen any feature in their spectra that might confirm it, as \citet{Reddy2019} also concluded.
However, given the discrepancies for the latter star, it might perhaps be a long-period variable.
Figure~\ref{fig_vrad_gaia} shows the
stars for which we derived the RV, compared, when possible, with the values obtained by $Gaia$-DR2. We note the excellent agreement for the slow
rotators, especially in the case of the giant stars. For fast rotators, instead, as already noted, our errors are very large and the results are not very reliable; for most
of them $Gaia$-DR2 does not provide any RV.
\begin{table}
\caption{Comparison of the RV (km\,s$^{-1}$) derived in this work and in the literature.}
\label{tab_comp_rv}
\begin{center}
\begin{tabular}{lcccc}
\hline\hline
Star & Me08 & $Gaia$-DR2 & Reddy19 & This work \\
\hline
g3 & 9.6 $\pm$ 1.3 & 8.5 $\pm$ 0.2 & 8.8 $\pm$ 0.1 & 8.4 $\pm$ 0.1 \\
g4 & 8.1 $\pm$ 0.4 & 8.4 $\pm$ 0.1 & 8.6 $\pm$ 0.1 & 8.4 $\pm$ 0.1 \\
g9 & 7.8 $\pm$ 0.8 & 4.7 $\pm$ 0.2 & 4.4 $\pm$ 0.1 & 4.4 $\pm$ 0.1 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./rv_gaia_ov2.pdf}
\caption{Comparison of the RVs obtained in this work (symbols and colours as in previous figures) with those of $Gaia$-DR2 (open squares). The dashed line
shows the cluster average value, $RV$=8.0 km\,s$^{-1}$.}
\label{fig_vrad_gaia}
\end{figure}
Regarding the atmospheric parameters, as already mentioned, \citet{Reddy2019} conducted the only spectroscopy-based study devoted to Stock\,2. Their work is
based on high-resolution spectra ($R$=60\,000) of three of the cluster giants. These stars, which have also been observed by us, are g3 (numbered 43 in their work),
g4 (1011) and g9 (1082). Our temperatures and metallicities are slightly higher than, but still in agreement with, their values within the errors.
Gravities, instead, are only marginally compatible. Both datasets are compared in Table~\ref{tab_comp_par}. These discrepancies can probably be explained by the
different methodologies followed: in this work we employed spectral fitting, while their approach was based on equivalent width (EW) analysis.
\begin{table*}[ht]
\caption{Comparison of the atmospheric parameters derived in this work with those of the literature. }
\label{tab_comp_par}
\begin{center}
\begin{tabular}{l|ccc|ccc}
\hline\hline
\multirow{2}{*}{Star} & \multicolumn{3}{c}{This work} & \multicolumn{3}{c}{\citet{Reddy2019}} \\
& $T_{\textrm{eff}}$ (K) & $\log\,g$ & [Fe/H] & $T_{\textrm{eff}}$ (K) & $\log\,g$ & [Fe/H] \\
\hline
g3 & 4937 $\pm$ 114 & 2.51 $\pm$ 0.35 & 0.04 $\pm$ 0.08 & 4925 $\pm$ 50 & 2.0 $\pm$ 0.1 & $-$0.07 $\pm$ 0.03 \\
g4 & 4977 $\pm$ 117 & 2.82 $\pm$ 0.35 & 0.04 $\pm$ 0.08 & 4900 $\pm$ 50 & 2.3 $\pm$ 0.1 & $-$0.05 $\pm$ 0.04 \\
g9 & 5062 $\pm$ 56 & 2.98 $\pm$ 0.18 & 0.00 $\pm$ 0.09 & 5050 $\pm$ 50 & 2.6 $\pm$ 0.1 & $-$0.06 $\pm$ 0.03 \\
\hline
\end{tabular}
\end{center}
\end{table*}
With the aim of checking the consistency of our results, we plot the Kiel and HR diagrams in Fig.~\ref{fig_phr}. The former is a reddening-free diagnostic,
whereas in the latter extinction has been taken into account when calculating the luminosity. The location of the stars in the HR diagram is better than in the
Kiel diagram, where gravities deviate by around 0.2 dex from those of the isochrone, as already emerged in the comparison with the results of \citet{Reddy2019}. Additionally,
the TO stars show a large dispersion in this diagram. This is very likely a consequence of the poor accuracy of the gravity determinations for these A-type stars,
which rotate at moderate or fast rates.
In the HR diagram, on the contrary, these stars cluster more tightly around the TO point, as expected.
The fit is also better for the MS stars and especially good for the giants, which fall on the isochrone.
\begin{figure}
\centering
\includegraphics[width=7cm]{./pHR_HR_paper_ov3.pdf}
\caption{Kiel and HR diagrams for Stock\,2. Symbols and colours are the same as those in Fig.~\ref{CMD}.}
\label{fig_phr}
\end{figure}
\subsection{Chromospheric emission and lithium abundance}
\label{Sec:chrom_lithium}
\begin{figure}
\begin{center}
\hspace{-.5cm}
\includegraphics[width=9.0cm]{fig_subtract_ew.pdf}
\vspace{-1.5cm}
\caption{Subtraction of the non-active, lithium-poor template (red line) from the spectrum of Stock2~r8 (black dots), which reveals the chromospheric emission in the H$\alpha$ core (blue
line in the {\it bottom panel}) and emphasizes the \ion{Li}{i} $\lambda$6708\,\AA\ absorption line,
removing the nearby blended lines ({\it top panel}). The green hatched areas represent the excess H$\alpha$
emission ({\it bottom panel}) and \ion{Li}{i} absorption ({\it top panel}) that were integrated to
obtain $EW_{\rm H\alpha}^{em}$ and $EW_{\rm Li}$, respectively.}
\label{fig:subtraction}
\end{center}
\end{figure}
For stars cooler than about 6500\,K and with an age from a few tens to a few hundreds of Ma, the level of magnetic activity (e.g. the emission in the cores of lines formed in the chromosphere)
and the atmospheric lithium abundance can be used to estimate the age \citep[see, e.g.,][and references therein]{Jeffries2014, Frasca2018}.
The best diagnostics of chromospheric emission in the wavelength range covered by HARPS-N are \ion{Ca}{ii} H\&K and Balmer H$\alpha$ lines.
However, the S/N ratio at 3900\,\AA\ is very low, so we can only use H$\alpha$ for this purpose.
The templates produced by {\scshape rotfit} with rotationally broadened spectra of non-active, lithium-poor stars were
subtracted from the observed spectra of the targets to measure the excess emission in the core of the H$\alpha$ line ($EW_{\rm H\alpha}^{em}$) and the equivalent
width of the \ion{Li}{i} $\lambda$6708\,\AA\ absorption line ($EW_{\rm Li}$), removing the blends with nearby lines.
\setlength{\tabcolsep}{5pt}
\begin{table}[htb]
\caption{H$\alpha$, \ion{Li}{i}$\lambda$6708\,\AA\ equivalent widths and lithium abundance for the targets cooler than 7000\,K.}
\begin{center}
\begin{tabular}{lcrrrrl}
\hline
\hline
\noalign{\smallskip}
Star & $T_{\rm eff}$ & $EW_{\rm H\alpha}^{em}$ & err & $EW_{\rm Li}$ & err & $A$(Li) \\
& (K) & \multicolumn{2}{c}{(m\AA)} & \multicolumn{2}{c}{(m\AA)} & {(dex)} \\
\hline
\noalign{\smallskip}
b6 & 6132 & 143 & 24 & 63 & 6 & $2.68^{+0.10}_{-0.10}$ \\
b7 & 6092 & $\dots$ & $\dots$ & <3 & $\dots$ & $<$1.27 \\
b8 & 5841 & 72 & 31 & 145 & 12 & $2.93^{+0.11}_{-0.10}$ \\
r6 & 6851 & $\dots$ & $\dots$ & 54 & 5 & $3.03^{+0.12}_{-0.13}$ \\
r7 & 6332 & 50 & 15 & 9 & 6 & $1.91^{+0.31}_{-0.56}$ \\
r8 & 6086 & 110 & 17 & 89 & 10 & $2.88^{+0.10}_{-0.11}$ \\
u5 & 6449 & 38 & 13 & 32 & 6 & $2.55^{+0.18}_{-0.20}$ \\
u6 & 6534 & 93 & 37 & 15 & 10 & $2.23^{+0.32}_{-0.55}$ \\
\hline
\end{tabular}
\end{center}
\label{Tab:Halpha_Lithium}
\end{table}
Figure~\ref{fig:subtraction} shows an example of the subtraction procedure used to measure the equivalent width of H$\alpha$ and lithium lines,
$EW_{\rm H\alpha}^{em}$ and $EW_{\rm Li}$. These quantities were measured on the subtracted spectra by integrating the residual emission and absorption
profiles, as shown by the green hatched areas in Fig.~\ref{fig:subtraction}, and are reported in Table~\ref{Tab:Halpha_Lithium}.
A simple method to get an estimate of a star's age independent of that derived from isochrones is to compare its position in a diagram that plots lithium abundance,
$A$(Li), versus $T_{\rm eff}$\ with the upper envelopes of clusters with a known age.
We calculated the lithium abundance, $A$(Li), from our values of $T_{\rm eff}$, $\log g$, and $EW_{\rm Li}$ by interpolating the curves of growth of \citet{Lind2009}, which span
the $T_{\rm eff}$\ range 4000--8000\,K and $\log g$\ from 1.0 to 5.0 and include non-LTE corrections. In Fig.\,\ref{Fig:NLi} we show the lithium abundance as a function of $T_{\rm eff}$
along with the upper envelopes of the distributions of some young open clusters shown by \citet{Sestito2005}. Apart from the large errors of $A$(Li), which take into account both
the $T_{\rm eff}$ and $EW_{\rm Li}$ errors,
Fig.\,\ref{Fig:NLi} shows that all the targets are located close to or below the Hyades upper envelope, compatible with an age of $\approx$\,600\,Ma. The only exception is the coolest
target, b8, which lies between the upper envelopes of the Pleiades ($\approx$\,100\,Ma) and NGC\,6475 ($\approx$\,300\,Ma), suggesting an age $\lesssim 300$\,Ma for this star.
However, for stars with $T_{\rm eff}$>6000\,K the upper envelopes are very close to each other, which hampers the estimation of the cluster's age with this method.
Lithium abundances for cooler stars, where the envelopes separate more clearly, would be extremely useful to clarify this point. Unfortunately, the combination of the very high resolution and the telescope size
did not permit us to reach the lower main sequence. Hopefully, large samples of fainter stars will be acquired, e.g. by the WEAVE survey \citep{Dalton20}, due to start soon
at the 4.2-m William Herschel Telescope.
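For illustration only, the interpolation step can be sketched as follows; the grid values below are placeholders, whereas the actual curves of growth of \citet{Lind2009} also depend on $\log g$\ and include the non-LTE corrections:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder curve-of-growth grid: axes are Teff (K) and EW_Li (mA),
# values are A(Li) in dex. Purely illustrative numbers.
teff_ax = np.array([5500.0, 6000.0, 6500.0])
ew_ax = np.array([10.0, 50.0, 100.0, 150.0])
ali_grid = np.array([[1.5, 2.2, 2.6, 2.9],
                     [1.7, 2.4, 2.8, 3.1],
                     [1.9, 2.6, 3.0, 3.3]])

a_li = RegularGridInterpolator((teff_ax, ew_ax), ali_grid)
print(a_li([[6086.0, 89.0]]))  # A(Li) at, e.g., star r8's Teff and EW
\end{verbatim}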
\begin{figure}[]
\includegraphics[width=9.0cm]{NLi_Stock2_Lind.pdf}
\caption{Lithium abundance as a function of $T_{\rm eff}$.
The upper envelopes of $A$(Li) for IC~2602 ($age\approx$\,30 Ma), Pleiades ($\approx$\,100 Ma), NGC\,6475 ($\approx$\,300 Ma), and Hyades ($\approx$\,600 Ma) clusters adapted from \citet{Sestito2005} are overplotted.
}
\label{Fig:NLi}
\end{figure}
\subsection{Galactic metallicity gradient}
Open clusters are good tracers of the radial metallicity distribution of the Galaxy (i.e. the so-called Galactic gradient).
To see how the metallicity derived for Stock\,2 in this work compares with the general gradient, we collected a sample of homogeneously
analysed clusters from the $Gaia$-ESO iDR5 and iDR6 \citep{Baratella20,Magrini21} and the APOGEE DR16 surveys \citep{Donor20}. From
the latter we only took clusters with data derived from two or more stars and closer than 15 kpc. In addition, open clusters from
\citet{6067,3105,2345,3OC} are also added to the sample along with those previously investigated within the SPA project \citep{ASCC123, D'Orazi20, Casali20, Zhang21}.
In total, for this comparison we gathered more than a hundred clusters, ten of which are in common among
different datasets. Figure~\ref{grad} shows the location of Stock\,2 in the Galactic gradient. Galactocentric distances have been taken from \citet{Cantat2018},
who obtained them from the $Gaia$-DR2 parallaxes, adopting $R_{\sun}$=8.34\,kpc as the reference solar value. The metallicity, in terms of iron
abundance, was referenced to $A$(Fe)=7.45\,dex \citep{Grevesse07}. The metallicity found in this work is compatible with that expected for its
position.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{./grad_paper_feh-007.pdf}
\caption{Radial metallicity gradient from open clusters studied in the framework of the $Gaia$-ESO \citep[][red circles]{Baratella20, Magrini21} and
APOGEE \citep[][green circles]{Donor20} surveys. Other similar clusters analysed by \citet[][blue circles]{6067, 3105, 2345, 3OC} are also added, along with those
previously investigated in the SPA project \citep[][orange circles]{ASCC123, D'Orazi20, Casali20, Zhang21}. Black lines link
results for the same cluster provided by different authors. The star-symbol represents Stock\,2.}
\label{grad}
\end{figure}
\subsection{Chemical composition and Galactic trends}
Regarding the abundances, we compared our results (separately for MS stars and giants) to those of \citet{Reddy2019}, with which we have 17 chemical elements in common.
For the comparison, the values from \citet{Reddy2019} have been scaled to our solar references. In Fig.~\ref{comp_reddy} the differences of the abundance ratios ([X/H]),
this work minus literature, are displayed. As expected, differences are smaller for giants (on average, $\Delta$[X/H]=0.07\,dex) than for MS stars (0.12 dex).
With the sole exception of Y, the chemical composition of all the giants is fully compatible with that obtained by \citet{Reddy2019}. On the other hand,
for MS stars, abundances for Na, V, Co, Zn, Y and Ba are somewhat different.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./comp_redd_esc_ov2.pdf}
\caption{Differences between our mean abundances, for giants and MS stars, and those by \citet{Reddy2019}. The error bars are the quadratic sum of the
uncertainties reported in both studies for each element.}
\label{comp_reddy}
\end{figure}
Finally, as we did above for the metallicity gradient, we contrast the abundances obtained in this work with those of the
comparison clusters selected before. We completed the sample by adding the $Gaia$-ESO DR4 abundances \citep{Magrini17, Magrini18} for the clusters in
common with \citet{Magrini21}.
In total, we have up to 18 chemical elements in common with them; the ratios [X/Fe] versus [Fe/H] for 16 of these are displayed in Fig.~\ref{trends}.
The remaining two, O and Ba, were discarded from the comparison because the measurement of their abundances depends on the evolutionary state of the stars (see Sect.~\ref{sec_abund}).
In general, Stock\,2 shows a chemical composition compatible with that of the Galactic thin disc, as supported by the agreement with the observed chemical trends
traced by more than a hundred open clusters. Only the abundance of Cu lies slightly below these trends, but it is still compatible with them.
\begin{figure*}[ht]
\centering
\includegraphics[width=16cm]{./trends_paper_ov2.pdf}
\caption{Abundance ratios [X/Fe] vs. [Fe/H]. Symbols and colours are the same as in Fig.~\ref{grad}. The dashed lines indicate the solar value.}
\label{trends}
\end{figure*}
\subsection{Rotational velocity, reddening and eMSTO}
We investigated the relationship between $v$\,sin\,$i$ and the eMSTO phenomenon. As mentioned in Sect.~\ref{sec_targets}, we selected our targets
in the CMD of Fig.~\ref{fig_targets} following three different sequences along the MS: blue, red, and the upper envelope. Around 40$\%$ of all the stars observed
in this work rotate rapidly ($v$\,sin\,$i$\,>\,100\,km\,s$^{-1}$). As can be seen in Table~\ref{tab_params}, the
fastest rotators are generally found among the brightest stars in each sequence, although a large scatter of velocities is also present.
According to the literature \citep{Dupree17,Marino18b,Sun19}, the bMS should be populated by stars that rotate more slowly than those in the rMS.
However, this is not what we observe in this work: no significant differences are found in
the mean $v$\,sin\,$i$ of the two sequences. In addition, the single stars in the group where we expected to find binaries (the upper envelope sequence)
have smaller $v$\,sin\,$i$ than those in the other two sequences, despite being even redder than the rMS stars (see Table~\ref{tab_vsini}).
\setlength{\tabcolsep}{7pt}
\begin{table}
\caption{Mean projected rotational velocities and reddening along the MS sequences. N is the number of stars in each category.}
\label{tab_vsini}
\begin{center}
\begin{tabular}{lrc}
\hline\hline
\multirow{2}{*}{MS sequence (N)} & \multicolumn{1}{c}{$v$\,sin\,$i$} & $A_V$ \\
& (km\,s$^{-1}$) & (mag) \\
\hline
bMS (8) & 103 $\pm$ 106 & 0.59 $\pm$ 0.15 \\
rMS (7) & 100 $\pm$ ~~98 & 0.91 $\pm$ 0.23 \\
uMS (4) & 57 $\pm$ ~~22 & 1.49 $\pm$ 0.32 \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{5pt}
To interpret this phenomenon, the contribution of the reddening should not be ignored. The cluster average value obtained in this work is compatible within
the errors with that expected for its position according to the extinction maps of \citet{Lallement19}. However, as noted above, its value varies
considerably across the cluster field. For illustrative purposes only, in Fig.~\ref{fig_extinct} we mapped the distribution of $A_G$ in the cluster region
from its members.
Since $Gaia$-eDR3 does not provide these values, we took them from $Gaia$-DR2. $A_G$ values were available for slightly more than half of the members identified by \citet{Cantat2018},
specifically for 673 stars. For the remaining objects we estimated individual values as the distance-weighted
average of the values of the five closest members. Once $A_G$ was estimated for all the members, we constructed the map in two steps. First, a grid of points
covering the spatial distribution of the cluster members was generated, with nodes spaced every 30$\arcsec$ in both RA and DEC. Second, the $A_G$
of all members within 3$\arcmin$ of each node was averaged. The resulting spatial distribution of the cluster members, colour-coded according
to their $A_G$, is shown in Fig.~\ref{fig_extinct}. It shows how variable the
reddening is across the cluster field, likely a result of the low Galactic latitude of the cluster and the large area it occupies on the sky.
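For illustration only, the two-step mapping just described can be sketched in a few lines of Python (array names are ours, and angular separations are treated in a flat-sky approximation that ignores the $\cos(\mathrm{DEC})$ factor; this is not the analysis code actually used):
\begin{verbatim}
import numpy as np

def fill_missing_ag(coords, ag, k=5):
    """Distance-weighted A_G for members without a Gaia-DR2 value.
    coords: (N, 2) RA/DEC in degrees; ag: length-N array with np.nan
    where A_G is unavailable (illustrative names)."""
    known = np.where(~np.isnan(ag))[0]
    filled = ag.copy()
    for i in np.where(np.isnan(ag))[0]:
        d = np.hypot(coords[known, 0] - coords[i, 0],
                     coords[known, 1] - coords[i, 1])
        nearest = known[np.argsort(d)[:k]]          # k closest members
        w = 1.0 / np.maximum(np.sort(d)[:k], 1e-6)  # inverse-distance weights
        filled[i] = np.sum(w * ag[nearest]) / np.sum(w)
    return filled

def extinction_map(coords, ag, step=30.0 / 3600, radius=3.0 / 60):
    """Average A_G of all members within `radius` (3 arcmin) of each
    node of a grid spaced `step` (30 arcsec) in RA and DEC."""
    ra = np.arange(coords[:, 0].min(), coords[:, 0].max(), step)
    dec = np.arange(coords[:, 1].min(), coords[:, 1].max(), step)
    grid = np.full((dec.size, ra.size), np.nan)
    for j, d0 in enumerate(dec):
        for i, r0 in enumerate(ra):
            sep = np.hypot(coords[:, 0] - r0, coords[:, 1] - d0)
            if (sep < radius).any():
                grid[j, i] = ag[sep < radius].mean()
    return ra, dec, grid
\end{verbatim}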
For each of the sequences in which we grouped our MS stars, we calculated the average $v$\,sin\,$i$ and $A_V$.
These quantities, together with their standard deviations, are listed in Table~\ref{tab_vsini}. Although our sample is not statistically large, our data
suggest that rotational velocity cannot explain the observed eMSTO, whereas reddening is most likely responsible for it.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./map_extinction.png}
\caption{Interstellar extinction ($A_G$) towards Stock\,2, as traced by the cluster members.}
\label{fig_extinct}
\end{figure}
\section{Conclusions}\label{sec_concl}
We have conducted this research in the framework of the SPA project with the aim of continuing to improve our knowledge of the solar neighbourhood.
This work is focused on Stock\,2, a nearby and little-studied open cluster. We performed its detailed study from high-resolution spectroscopy complemented
with archival photometry and $Gaia$-eDR3 data. Our sample, by far the largest to date, is composed of 46 bona-fide members, including both giants
and MS stars. Among the latter, in order to study the eMSTO phenomenon, we selected the brightest stars around the TO point and many others following three
different sequences to cover the spread observed in the CMDs.
We found three double-lined spectroscopic binaries in our sample. For the remaining stars we measured radial and projected rotational velocities and derived the extinction and
atmospheric parameters. In addition, we carried out the chemical analysis of the 29 stars observed with HARPS-N, providing abundances for 22 elements.
We found that half of the MS stars are fast rotators, with $v$\,sin\,$i$\,>\,100\,km\,s$^{-1}$. However, the distribution of slow and fast rotators along the
bMS, rMS and uMS sequences is random, which rules out rotational velocity as the cause of the observed eMSTO. Additionally, cluster members
are disseminated over a wide region of the sky (up to $\approx$13$^{\circ}$\,$\times$\,8$^{\circ}$) and differential reddening plays an important role in shaping
the CMDs. We found an average reddening in the cluster field of $E(B-V)$=0.27$\pm$0.11. Its large dispersion (consistent with the $Gaia$-DR2 value,
$E(G_{\textrm{BP}}-G_{\textrm{RP}})$=0.40$\pm$0.18) confirms the existence of a variable reddening across the field of Stock\,2.
The reddening also makes it difficult to obtain an accurate age for the cluster. However, from isochrone fitting on the dereddened 2MASS CMD, the one
least affected by extinction, we derived a value of 450$\pm$150\,Ma. This age implies a mass at the MSTO of $\approx$2.8\,M$_{\sun}$.
The analysis of the abundance of lithium indicates an age similar to the Hyades ($\sim 600$\,Ma), although the coolest observed member could be as young as 300\,Ma.
Spectroscopic observations of a larger sample of members with lower $T_{\rm eff}$\ are needed to settle this point. We expect very useful data from large spectroscopic surveys
starting in the near future, such as WEAVE. The cluster RV derived from the giants is $\approx$8.0\,km\,s$^{-1}$. Stock\,2 shows a solar-like metallicity, [Fe/H]=$-$0.07$\pm$0.06,
fully compatible within the errors with that expected for its Galactocentric distance.
Finally, we performed a detailed study of the cluster chemical composition by determining the abundances of C, odd-Z elements (Na, Al), $\alpha$-elements (O,
Mg, Si, S, Ca, Ti), iron-peak elements (Sc, V, Cr, Mn, Co, Ni, Cu, Zn) and $s$-elements (Sr, Y, Zr, Ba). MS stars exhibit a chemical composition
compatible within the errors with that of the giants. Only for Co, and particularly for Ba and Sr, are the differences significant, with the abundances of Ba and Sr being clearly higher in the giants.
We conclude that the chemical composition of the cluster is consistent with that of the thin disc. This is supported by its [X/Fe] ratios, which lie on
the Galactic trends displayed by open clusters in the $Gaia$-ESO and APOGEE surveys. Finally, the cluster shows solar-like mean ratios for the $\alpha$ ([$\alpha$/Fe]=0.04$\pm$0.05) and
iron-peak ([iron-peak/Fe]=0.03$\pm$0.03) elements, while the heaviest elements (excluding Ba and Sr) exhibit a mild overabundance
with respect to the Sun, [$s$/Fe]=0.17$\pm$0.04.
\bibliographystyle{aa}
\section{Introduction}
\label{sec:introduction}
The so-called \textit{cooling function} $\Lambda$ describes the energy loss of a gas cloud per unit volume and time and is an important consideration in cosmological simulations (e.g.\ \cite{draine:2011}).
Besides density and temperature, $\Lambda$ depends on many additional, local properties, such as the chemical composition of the gas cloud and various spectral energy distributions that shape the radiation background. As a result, $\Lambda$ is time-consuming to compute and has a complicated functional form, including large gradients that can span more than three orders of magnitude. At the same time, a cosmological simulation may require its calculation on the order of a billion times or more.
As such, $\Lambda$ is usually interpolated from regular grids of precalculated values, which typically omit higher dimensions to avoid memory issues and only provide approximate results.
In \cite{lueders:21} we improved on this with the Cloudy-based Heuristic and Iterative Parameterspace Sampler (CHIPS, \url{https://github.com/Vetinar1/CHIPS}), which eschews regular grids in favor of amorphous sample distributions that take the shape of $\Lambda$ into account. This both reduced the number of required samples and increased the interpolation accuracy. A variety of methods exist for interpolating the resulting unstructured data \cite{hoschek:92}. We achieved the best results using a simple Delaunay tessellation based method that executes a directed search for the simplex containing the target point and then interpolates using barycentric coordinates \cite{moebius:1827}. This resulted in our Delaunay Interpolation Program DIP (\url{https://github.com/Vetinar1/DIP}).
In cosmology, tessellation based interpolation has previously been applied to velocity fields and density distributions (e.g.~\cite{bernardeau:1996}, \cite{weygaert:2001}, \cite{schaap:2007}, \cite{Cautun:2011}). The simplest approach is Voronoi Tessellation Field Estimation (VTFE), in which the functional value of a point is used to approximate the function within the entire corresponding cell. This zeroth-order interpolation is equivalent to a nearest neighbor estimation and produces sufficient approximations for slowly varying functions, but is not satisfactory for the highly varied cooling function.
In contrast, Delaunay Tessellation Field Estimation (DTFE) is a first-order approach, equivalent to linear interpolation in higher dimensions. Unlike VTFE it does not produce discontinuities at cell edges. DIP is similar to existing DTFE software \cite{Cautun:2011}, but does not consider the location of the points themselves as data, and has been developed specifically to be integrated into cosmological simulation software.
The main downside of a Delaunay based approach is the space complexity of the triangulation, which scales strongly with sample count and dimensionality, making it infeasible for high dimensional parameter spaces. While space-efficient Delaunay algorithms such as \Verb@Del_graph@ \cite{boissonnat:2009} can alleviate this issue, they cannot completely remedy it, as sample counts may reach millions depending on the desired accuracy. Since building small-scale triangulations near the target is impractical due to the associated runtime, we developed a novel simplex construction heuristic that can be used instead, allowing for much higher sample counts and dimensions. We call it the Projective Simplex Algorithm and implement it in the Projective Simplex Interpolation (PSI) package, which is made available as part of DIP.
\section{Description of the Algorithm}
\label{sec:description}
Let $f$ be some function known at points $P \in \mathbb{R}^D$ and let the target point $t$ be a point within the convex hull of $P$. Our goal is to approximate $f(t)$ without building a triangulation on $P$ or a subset of $P$.
We apply the following three step process:
\begin{enumerate}
\item Identify $P_k$, the $k$ nearest neighbors of $t$ in $P$.
\item Construct a simplex $S$ out of the points in $P_k$ such that $t$ is contained within $S$.
\item Calculate the barycentric coordinates of $t$ relative to $S$ and use them to interpolate.
\end{enumerate}
Finding the $k$ nearest neighbors of a given point is a well-researched problem; we chose to use ball trees for this purpose (see \cite{omohundro:1989} for further information).
The barycentric coordinates $\lambda_i$ of a simplex are homogeneous coordinates relative to its $D+1$ vertices. They refer to the point in space that would be the center of mass (``barycenter'') of the simplex if each vertex $v_i$ carried a mass equal to the associated barycentric coordinate $\lambda_i$. If their sum is normalized to unity, they can be used to linearly interpolate on the $D$-dimensional hyperplane spanned by the simplex. A good introduction to the subject can be found in \cite{vince:2017}.
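As a concrete illustration (a minimal sketch, not code taken from DIP), the normalized barycentric coordinates follow from a small linear system:
\begin{verbatim}
import numpy as np

def barycentric_interpolate(vertices, values, t):
    """vertices: (D+1, D) simplex vertices; values: f at the vertices;
    t: target point. Solves sum_i lam_i v_i = t with sum_i lam_i = 1."""
    D = vertices.shape[1]
    A = np.vstack([vertices.T, np.ones(D + 1)])    # (D+1, D+1) system
    b = np.append(t, 1.0)
    lam = np.linalg.solve(A, b)                    # barycentric coords
    # t lies inside the simplex iff all lam_i >= 0
    return lam @ values, lam
\end{verbatim}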
The contribution of this paper is a new heuristic to directly select points for the simplex, the Projective Simplex Algorithm. The underlying idea is that the nearest neighbor of the target point is likely to be part of a small, high quality simplex containing $t$.
The dimension along the vector between $t$ and the nearest neighbor is then eliminated through projection and the problem is reduced to an equivalent but lower dimensional one.
Algorithm \ref{alg:psa} shows the steps of the algorithm in detail.
\begin{algorithm}
\caption{The Projective Simplex Algorithm.}
\label{alg:psa}
\begin{algorithmic}[1]
\REQUIRE Set of points $P_k$ in $\mathbb{R}^D$, target point $t$, $S = \emptyset$
\STATE Let $d$ equal the number of dimensions
\STATE $\mathcal{P} := P_k$, $t' := t$
\WHILE{$d > 1$}
\STATE Let $a$ be the nearest neighbor of $t'$ in $\mathcal{P}$
\STATE $S \leftarrow S \cup \{p \in P_k \mid a \text{ is projection of } p \}$
\STATE $\mathcal{P} \leftarrow \mathcal{P} \setminus \{a\}$
\STATE $\vec{n} := a - t'$, $\mathcal{P}' := \emptyset$
\FORALL{$\vec{p} \in \mathcal{P}$}
\STATE Project $\vec{p}$ on $\vec{n}$: $\vec{p}_\perp := \frac{\vec{p} \cdot \vec{n}}{\vec{n} \cdot \vec{n}} \cdot \vec{n}$
\STATE Shift to origin: $\vec{p}_{\perp,0} := \vec{p}_\perp - t'$
\IF{$\vec{p}_{\perp,0} \cdot \vec{n} < 0$}
\STATE Add projected point: $\mathcal{P}' \leftarrow \mathcal{P}' \cup \{\vec{p} - \vec{p}_\perp\}$
\ENDIF
\ENDFOR
\STATE Project target point: $t' \leftarrow t' - \frac{t' \cdot \vec{n}}{\vec{n} \cdot \vec{n}} \cdot \vec{n}$
\STATE $\mathcal{P} \leftarrow \mathcal{P}'$
\STATE $d \leftarrow d - 1$
\ENDWHILE
\STATE The remaining projected points are on a line in $\mathbb{R}^D$. Add the nearest neighbor in each direction on the line to $S$, if they exist
\RETURN $S$
\end{algorithmic}
\end{algorithm}
\begin{figure*}[h]
\centering
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{03_alg1.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{04_alg2.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{05_alg3.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{06_alg4.pdf}
\caption{}
\end{subfigure}
\caption{This 3-dimensional example illustrates the core idea of the Projective Simplex Algorithm. (a) The red arrow represents a vector pointing from the target point $t$ to the nearest neighbor. Interpreted as a normal vector it defines a plane. Points below this plane remain (blue dots), while points above the plane are removed (orange crosses). (b) The remaining points are projected into the plane (pink points). (c) We can construct a simplex containing $t$ in $\mathbb{R}^3$ by finding a triangle in the projection plane that contains $t$ and adding the nearest neighbor to form a ``pyramid''. The problem of finding this triangle in the 2-dimensional subspace of the projection plane is equivalent to the original problem of finding a simplex. (d) If the simplex found using the projections in (c) contains $t$ then the simplex using the original points represented by these projections must also contain $t$ since all the original points are below the plane.}
\label{fig:alg}
\end{figure*}
First, working copies $\mathcal{P}$ and $t'$ are made that the algorithm can modify in place. We find the current nearest neighbor of $t'$ in $\mathcal{P}$, add it to our simplex $S$, and remove it from further consideration.
Next, the vector $\vec{n}$ pointing from $t'$ to the nearest neighbor is calculated. This vector is the normal vector defining a $(d-1)$-dimensional hyperplane through $t'$ that splits $\mathcal{P}$ into two sets. The points ``above'' the plane are in the half space that contains the nearest neighbor and the normal vector. All other points are considered to be below the plane.
If $\vec{n}$ points to the tip of a pyramid-like simplex $\mathcal{S}_d$ with its base inside the plane, then this base face is itself a $(d-1)$-dimensional simplex $\mathcal{S}_{d-1}$ consisting of $d$ points. If $\mathcal{S}_{d-1}$ contains $t'$ then $t'$ is on the surface of $\mathcal{S}_d$, which we consider to be ``inside''. It is now easy to see that if any of the vertices of the base face were moved perpendicularly below the plane, the resulting $\mathcal{S}_d'$ would still contain the target point, since this deformation can never lead to any of the faces crossing $t'$.
This property is the core idea of the algorithm. However, instead of picking points on the plane and then moving their position, the points in $\mathcal{P}$ are projected onto the plane and selected to form a valid simplex face containing $t'$.
First, each $p \in \mathcal{P}$ is projected onto $\vec{n}$, giving the component of $p$ perpendicular to the plane. It is shifted to the origin in order to compare it to $\vec{n}$. If the scalar product with $\vec{n}$ is positive, $p$ is above the plane and is discarded. If it is negative, the projection is completed and the projected point is added to a new set $\mathcal{P}'$. After all points have been processed, $\mathcal{P}$ is updated to $\mathcal{P}'$. Finally, $t'$ is also updated through projection.
Now that the dimension along $\vec{n}$ has been eliminated, the goal is to select $d$ points on the $(d-1)$-dimensional projection plane such that they contain $t'$, i.e.\ to find a simplex containing $t'$ using the points in $\mathcal{P}$. This is equivalent to the original problem, and thus the process can repeat. Figure \ref{fig:alg} visualizes these steps using a small example distribution.
At $d = 1$ all points lie on a line in $\mathbb{R}^D$ and the procedure cannot be applied any further. However, the simplex can be completed easily by finding the nearest neighbor in either direction on this line, e.g.\ by taking the scalar product of all difference vectors $p_i - t'$ with a vector parallel to the line and then picking, on each side of $t'$, the $p_i$ whose result is smallest in absolute value.
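For concreteness, the loop described above can be sketched as follows. This is our own compact rendering rather than the PSI source; it uses an equivalent formulation that keeps all vectors relative to $t$, so the working copy of the target never needs to be updated:
\begin{verbatim}
import numpy as np

def projective_simplex(points, t):
    """Select D+1 indices into `points` forming a simplex around t.
    Returns None if too few candidates survive a projection step."""
    D = points.shape[1]
    P = points - t            # work relative to t (t' is the origin)
    idx = np.arange(len(P))   # original indices of the working copy
    simplex = []
    for d in range(D, 1, -1):
        a = np.argmin(np.einsum('ij,ij->i', P, P))  # nearest neighbor
        n = P[a]
        simplex.append(idx[a])
        P, idx = np.delete(P, a, axis=0), np.delete(idx, a)
        s = P @ n / (n @ n)       # signed component along n
        below = s < 0             # keep only points below the plane
        P = P[below] - np.outer(s[below], n)  # project into the plane
        idx = idx[below]
        if len(P) < d:            # not enough points left: abort
            return None
    # d = 1: survivors lie on a line; take the nearest on each side
    v = P[0] / np.linalg.norm(P[0])
    s = P @ v
    pos, neg = s > 0, s < 0
    if not (pos.any() and neg.any()):
        return None
    simplex.append(idx[np.where(pos)[0][np.argmin(s[pos])]])
    simplex.append(idx[np.where(neg)[0][np.argmax(s[neg])]])
    return np.array(simplex)
\end{verbatim}
On success the returned indices can be passed to the barycentric step sketched earlier; on failure the caller restarts with a larger $k$, as discussed below.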
Sometimes one or both of these neighbors may not exist. In general, if there are fewer than $d+1$ points left after the projection step, the algorithm should abort or restart with a larger $k$. There are several reasons why this might occur:
\begin{itemize}
\item If $t$ is near the edges of $P$ there may not be enough points in the vicinity for the algorithm to work at all.
\item Heterogeneous data may lead to the neighbors not being distributed evenly enough around $t$.
\item $k$ is too small.
\end{itemize}
The first case can be avoided by padding the edges of the point cloud. The optimal amount of padding depends on the sampling density.
If heterogeneities are an issue one might select the nearest neighbors using a radius criterion instead of choosing the $k$ nearest.
In the latter two cases, increasing $k$ and re-running the algorithm usually leads to a valid solution\footnote{We also tried ``rewinding'' the algorithm and choosing the second nearest neighbor for the simplex, in order to get a better split. This did not significantly improve results.}. The size of $k$ carries a tradeoff between simplex quality and runtime: smaller $k$ leads to better simplices but longer runtimes due to re-runs. We recommend doubling $k$ after each failed run and choosing the initial value such that the algorithm runs 1.2--1.5 times on average.
In case neither of these approaches succeeds, an alternative way of choosing the first vertex can eliminate most of the remaining failures, at the cost of relaxing the nearest neighbor condition. Instead of choosing the nearest neighbor, choose the point for which the most points remain in the point set after the first iteration. This can be achieved by determining the mean difference vector between the target point and its neighbors, which describes the overall directional bias of the neighbor distribution, and using it to select the point that lies farthest in the opposite direction.
This approach incurs additional runtime cost (but no additional complexity) over the regular algorithm and leads to slightly poorer numerical results. It is, however, more reliable. We thus recommend using this option as a backup. Choosing additional vertices this way does not seem to provide any further advantages.
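The resulting retry logic is a thin wrapper around the neighbor query and the simplex construction. A sketch using the ball tree implementation from scikit-learn together with the hypothetical helpers sketched above:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import BallTree

def interpolate(points, values, t, k0=32, max_doublings=4):
    """Query k neighbors, attempt the simplex build, and double k
    on failure, as recommended above (illustrative wrapper)."""
    tree = BallTree(points)        # built once, reused per query
    k = k0
    for _ in range(max_doublings + 1):
        k = min(k, len(points))
        ind = tree.query(t[None, :], k=k, return_distance=False)[0]
        simplex = projective_simplex(points[ind], t)
        if simplex is not None:
            verts = points[ind][simplex]
            vals = values[ind][simplex]
            return barycentric_interpolate(verts, vals, t)[0]
        k *= 2                     # failed: double k and retry
    return None                    # give up (or use a fallback)
\end{verbatim}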
Thus, the algorithm is not guaranteed to find a solution. However, in all of our tests the solutions it did find were valid (i.e.\ they contain $t$), and the simplex construction succeeds in the overwhelming majority of cases (see Sect.\ \ref{sec:performance}).
Overall, the algorithm has a time complexity of $O(kD^2)$. Projecting and filtering the points takes $kD$ operations, and finding the nearest neighbor by brute force also takes $kD$ operations\footnote{Building a $k$d-tree here did not improve performance since $k$ is usually small, see Table \ref{tab:data}.}. Both are executed $D$ times. In practice, significant time can be saved in the first iteration since the nearest neighbor is already known from finding $P_k$.
Moreover, $k$ merely represents an upper limit, since the number of points in the working copy shrinks with each iteration.
To find the barycentric coordinates of $t$ after the simplex has been constructed, a linear equation system needs to be solved. This adds an additional term $O(D^3)$. The time required for finding $P_k$ with the ball tree is negligible. The space complexity for storing $n$ $D$-dimensional points is $O(Dn)$.
By comparison, a directed search through a full Delaunay triangulation has a time complexity of $O(FD^3)$, where $F$ is the dimension dependent number of simplices evaluated before the simplex containing $t$ is found. In general, $F \ll k$ for all $D$ if the initial simplex is chosen well.
The space complexity of a Delaunay triangulation is at least $O(Dn^{\lceil D / 2 \rceil})$.
\section{Performance of the Algorithm}
\label{sec:performance}
We implemented the algorithms described in the previous section in the Projective Simplex Interpolation program (PSI). We compare the performance of PSI with our previous Delaunay Interpolation Program (DIP). To this end we run both algorithms on different data sets of varying dimensionalities\footnote{All data sets available at \url{https://www.usm.uni-muenchen.de/~dolag/Data/PSI}}.
The non-uniform data sets (NU) were generated using CHIPS \cite{lueders:21} and provide real-world scenarios. They are characterized by an overdense slice of samples through the parameter space corresponding to the ionization temperature of hydrogen (cf.~Fig.~\ref{fig:exampledist}). We intend to use PSI with larger versions of data sets like these. The uniform data sets (U) were generated using a uniform random number generator and are supposed to provide PSI with ideal conditions. Both cover realistic parameter spaces (comparable to \cite{lueders:21}) and contain accurate values for the cooling function $\log \Lambda$, calculated using version 17.02 of \textsc{Cloudy} \cite{cloudy:17}. Typical values of $\log \Lambda$ range from $-40$ to $-15$. We tested against evaluation sets containing 1000 known values at randomly distributed points.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{07_exampledist.pdf}
\caption{Two 2D projections of the 3D data set NU\_3D. As a result of the adaptive sampling algorithm CHIPS, the hydrogen ionization feature is visible in the point distribution as an overdensity at $T \approx 10^4$\,K. Interpolating accurately is particularly difficult here. The effects are more pronounced at high densities ($n_H \gtrsim 1\,\mathrm{cm}^{-3}$) and less pronounced at high metallicities ($Z \gtrsim 0.5\,Z_\odot$). Note that the values of $\Lambda$ span 14 orders of magnitude.}
\label{fig:exampledist}
\end{figure}
The metrics we use to evaluate PSI and DIP are memory use, runtime, simplex quality, and interpolation error. There are multiple ways to judge the quality of a simplex (e.g.\ \cite{partha:93}). Here, we use the ratio of the inradius $\rho$ to the longest edge $h_\mathrm{max}$ \cite{george:1998}:
\begin{equation}
\label{eq:quality}
Q = \alpha \rho / h_\mathrm{max}
\end{equation}
with the dimension dependent normalization factor $\alpha = \sqrt{2d(d+1)}$. The values of this measure range from 0 for degenerate simplices to 1 for the regular $d$-simplex.
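In two dimensions, for instance, the inradius is $\rho = 2A/\mathrm{perimeter}$ and eq.\ \ref{eq:quality} reduces to a few lines; the regular triangle indeed yields $Q = 1$ (a standalone check of the measure, not code from PSI):
\begin{verbatim}
import numpy as np

def quality_2d(v):
    """Q = alpha * rho / h_max for a triangle with vertices v (3, 2)."""
    e = [v[1] - v[0], v[2] - v[1], v[0] - v[2]]
    lengths = [np.linalg.norm(x) for x in e]
    area = 0.5 * abs(np.cross(e[0], -e[2]))
    rho = 2.0 * area / sum(lengths)        # inradius of a triangle
    alpha = np.sqrt(2 * 2 * (2 + 1))       # sqrt(2 d (d+1)) for d = 2
    return alpha * rho / max(lengths)

equilateral = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2]])
print(quality_2d(equilateral))             # -> 1.0
\end{verbatim}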
\begin{table*}[b]
\centering
\begin{tabularx}{\textwidth}{lXXXXllllll}
Data & $N$ & $S_\mathrm{PSI}$ & $S_\mathrm{DIP}$ & $t_\mathrm{PSI}$ & $t_\mathrm{DIP}$ & $Q_\mathrm{PSI}$ & $Q_\mathrm{DIP}$ & $|\Delta_\mathrm{PSI}|$ & $|\Delta_\mathrm{DIP}|$ & $k$ \\
\hline
NU\_2D & 489 & 0.02 & 0.04 & 74 & {10} & 0.53$\pm$0.05 & 0.66$\pm$0.03 & 0.066$\pm$0.008 & 0.061$\pm$0.008 & 15 \\
NU\_3D & 2692 & 0.11 & 0.78 & 787 & {13} & 0.37$\pm$0.04 & 0.49$\pm$0.03 & 0.11$\pm$0.02 & 0.098$\pm$0.016 & 50 \\
NU\_4D & 16950 & 0.83 & 29.8 & 3568 & {55} & 0.35$\pm$0.03 & 0.45$\pm$0.03 & 0.32$\pm$0.15 & 0.31$\pm$0.16 & 100 \\
NU\_5D & 26047 & 1.40 & 312 & 12957 & {172} & {0.34$\pm$0.03} & 0.40$\pm$0.02 & {0.35$\pm$0.15} & 0.35$\pm$0.15 & {200} \\
{NU\_6D} & 37519 & 2.1 & 3500 & 12157 & 1296 & 0.31$\pm$0.02 & 0.42$\pm$0.01 & 0.44$\pm$0.18 & 0.42$\pm$0.2 & 250 \\
{NU\_7D} & 67970 & 4.2 & 56000 & 38680 & DNF & 0.31$\pm$0.02 & DNF & 0.53$\pm$0.23 & DNF & 350 \\
\hline
U\_2D & 500 & 0.03 & 0.05 & 68 & {10} & 0.59$\pm$0.05 & 0.68$\pm$0.03 & 0.066$\pm$0.013 & 0.068$\pm$0.015 & 10 \\
U\_3D & 2500 & 0.10 & 0.76 & 351 & {15} & 0.49$\pm$0.04 & 0.57$\pm$0.02 & 0.095$\pm$0.022 & 0.089$\pm$0.017 & 20 \\
U\_4D & 15000 & 0.69 & 27.7 & 1853 & {65} & 0.42$\pm$0.03 & 0.51$\pm$0.02 & 0.14$\pm$0.04 & 0.13$\pm$0.04 & 40 \\
U\_5D & 25000 & 1.10 & 314 & 5794 & {197} & 0.38$\pm$0.02 & 0.47$\pm$0.02 & 0.19$\pm$0.07 & 0.17$\pm$0.05 & 80 \\
U\_6D & 40000 & 2.90 & 3920 & 12563 & {623} & 0.35$\pm$0.02 & 0.43$\pm$0.01 & 0.24$\pm$0.10 & 0.22$\pm$0.08 & 160 \\
{U\_7D} & 80000 & 4.6 & 67000 & 38477 & DNF & 0.32$\pm$0.02 & DNF & 0.26$\pm$0.1 & DNF & 250 \\
\end{tabularx}
\caption{Results for DIP and PSI on several uniform and non-uniform data sets. Shown are the number of points $N$, the choice of $k$ for PSI, the storage requirements $S$ of both algorithms in megabytes, the runtime $t$ of both algorithms in milliseconds, the resulting simplex qualities according to eq.\ \ref{eq:quality}, and the mean absolute interpolation errors $\Delta$. Data sets NU\_6D and NU\_7D required modifying our previously Delaunay based sampling code to use the projective simplex algorithm.}
\label{tab:data}
\end{table*}
Using our data sets, we found that choosing $k$ such that the algorithm runs between 1.2 and 1.5 times on average produces a good tradeoff between runtime and simplex quality. Better quality simplices correspond to higher interpolation quality, but for our data this correlation was weaker than we expected. We set PSI to double $k$ if it failed to build a valid simplex, with a maximum of four doublings. If a valid simplex was found or the maximum was reached, $k$ was reset. With this configuration all but 20 evaluations found valid simplices (5 in NU\_5D, 2 in NU\_6D, 9 in NU\_7D, 4 in U\_7D).
Across all runs, every simplex that PSI did construct contained its target point. The results of our tests are shown in Table \ref{tab:data}.
The memory requirements of both algorithms are comparable at low dimensions. However, the space complexity of the Delaunay triangulation becomes an issue past five dimensions. The triangulation alone takes up several gigabytes of data. Since PSI only needs to save the points themselves it is much more efficient here, and can easily scale to both high dimensions and point counts in comparison to DIP.
Across all dimensions DIP is consistently faster than PSI by an order of magnitude. It not only loads the triangulation itself into memory, but also caches a lot of intermediate information about the simplices to speed up calculations\footnote{Such as centroids, face midpoints, transformation matrices, and normal vectors.}. Such caching is not possible with PSI. The gap is most pronounced between 3D and 5D. We suspect that at higher dimensions the number of points removed in each iteration of the projective simplex algorithm counteracts the increasing complexity. However, we have no explanation for the fact that PSI took more time to complete for NU\_5D than for NU\_6D. The runtimes in Table \ref{tab:data} do not include the setup time of DIP, which depends on the number of dimensions rather than the number of interpolations, and could be two hours or more at 6D and above. The setup time of PSI is negligible in comparison.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{01_qdist.pdf}
\caption{Simplex quality distributions. The upper row shows the uniform data sets while the lower row shows the non-uniform data sets. The left column shows PSI, while the right column shows DIP. Both the Delaunay and PSA simplices suffer in quality as dimension increases.
}
\label{fig:qdist}
\end{figure}
The simplex quality is a more abstract measure and is closely related to the interpolation quality. A low quality simplex is long and thin; using such a simplex to interpolate may produce worse results than using a more compact one, since the vertices are further away from the target point.
A Delaunay triangulation avoids long, thin simplices as much as possible (in two dimensions it maximizes the minimum interior angle over all triangles). The Projective Simplex Algorithm makes no such guarantee. Instead, distance information is lost with each projection, which can favor elongated simplices, particularly for large $k$. As such, the quality of the simplices used in PSI is worse on average and less consistent than that of the ones used in DIP (cf.\ Fig.\ \ref{fig:qdist}). In the 5D non-uniform case a worrying number of simplices have a quality of zero. However, as the dimensionality increases to 7D the mean quality appears to converge to 0.3. DIP shows a similar effect at 0.4. This indicates that PSI could produce reasonable simplices even at much higher dimensions.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{02_errcumsum.pdf}
\caption{Cumulative sums of the absolute interpolation errors of DIP and PSI.
}
\label{fig:cumsums}
\end{figure}
Despite the lower quality simplices, the absolute interpolation error $\Delta$ of PSI is similar to that of DIP. Fig.\ \ref{fig:cumsums} shows the cumulative sums of the error distributions of both DIP and PSI for the different data sets. For the non-uniform data sets PSI performs as well as DIP, except in the 3D case, where it performs slightly worse. For the uniform data sets it performs almost as well as DIP in all cases except U\_2D, where it performs even better.
In practical applications the accuracy can be improved further by increasing sample counts. Due to its lower space complexity, PSI can potentially achieve higher accuracy than DIP, which cannot support as many samples.
Overall, PSI performs better than we expected against its predecessor, and we intend to integrate it into our version of the cosmological simulation software \textsc{OpenGadget3} \cite{Springel:2005} soon.
\section{Conclusion}
\label{sec:conclusion}
We developed a new algorithm that constructs a simplex around a target point using iterative dimensionality reduction through projection, bypassing the need to build a full triangulation. We successfully implemented this algorithm in the Projective Simplex Interpolation program (PSI) and used it to interpolate the cooling function $\Lambda$ needed in galaxy evolution simulations. PSI completely resolves the excessive memory requirements of the previous implementation with acceptable losses in accuracy, allowing us to interpolate the cooling function in higher dimensions than previously possible.
However, the loss of distance information associated with the projection step can impact the point selection in later iterations, leading to lower quality simplices.
This might be explored in future work to further improve the algorithm.
\section*{Acknowledgments}
This research was supported by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 860744.
\bibliographystyle{elsarticle-num}
\section{GALAXY MASS THRESHOLD}\label{gal_mass_threshold}
In this study, we considered only galaxies with stellar mass $M_{\rm gal} \ge \SI{3e10}{\text{M}_{\odot}}$. The motivation behind this choice is twofold. On the one hand, the vast majority of stars (and compact remnants) in the Universe are hosted in galaxies above this mass threshold; on the other hand, simulating large light cones down to a much lower mass threshold requires significant computing time and memory resources. It is, however, important to investigate to what extent our results depend on this specific choice. \par
To this end, we ran a test light cone considering all galaxies with $M_{\rm gal} \ge \SI{e10}{\text{M}_{\odot}}$. We built the error boxes with the same procedure outlined in the main text, constructing \num{20} independent realizations of the above-gap SBHB population. The main result of this exercise is shown in \cref{fig:lower_threshold_results}. We note that, although the results are consistent with those obtained with the higher mass cut, the precision in the determination of the cosmological parameters is slightly degraded. In particular, the width of the marginalized posterior on $h$ is about $30\%$ larger, with the average error on its determination increasing from $\approx 1.5\%$ to $\gtrsim 2\%$. Similarly, constraints on $\Omega_{m}$ are slightly looser, with a typical precision of $\approx 30\%$. The reason for this precision loss may lie in the different clustering properties of different galaxy populations. For example, specific classes of dwarf galaxies tend to be less clustered and more evenly distributed on the sky \citep{2021ApJS..252...18T}, which is expected to somewhat reduce the effectiveness of our methodology. Despite this, we note that the main results of our study are robust, and we defer a more comprehensive investigation of effects such as catalog completeness and selection effects to future work.
\begin{figure}[b]
\centering
\includegraphics[width=0.5\textwidth]{different_mass_threshold_results_averaged_posterior.pdf}
\caption{Averaged joint posterior distribution of \num{20} different realizations, assuming a lower stellar mass threshold in the galaxy selection.}
\label{fig:lower_threshold_results}
\end{figure}
\section{Introduction} \label{intro}
Recent analyses relying on Type Ia supernovae (SNe) \cite{riess2019large} and Planck measurements \cite{aghanim2020planck} revealed a $4.4\sigma$ tension in the determination of the Hubble constant, $\text{H}_{0}$. While the former reported $\text{H}_{0} = 74.03 \pm 1.42 \, \si{\kilo\meter\per\second\per\mega\parsec}$, the latter derived $\text{H}_{0} = 67.4 \pm 0.5 \, \si{\kilo\meter\per\second\per\mega\parsec}$. Although a number of possible solutions have been proposed (see, e.g.,~\cite{verde2019tensions}), the tension still persists. \par
The first gravitational wave (GW) detection \cite{abbott2016observation} brightened the future of astrophysical observations. Among the possible theoretical \cite{PhysRevD.103.122002} and cosmological applications \cite{Abbott_2021}, GWs offer a unique opportunity to provide an independent constraint on $\text{H}_{0}$, thus shedding light on the evolution history of our Universe. Coalescing compact object systems are ideal standard sirens for the determination of cosmological distances, since their signals carry direct information about the source's luminosity distance.
However, no information about the redshift is carried by the GW signal itself. If an electromagnetic (EM) counterpart is detected, the identification of the host galaxy can provide the redshift information necessary to build the $d_L - z$ relation and constrain $\text{H}_{0}$ \cite{GW170817_H0}. \par
Even if no information about the source redshift is gathered, a measurement of $\text{H}_{0}$ can be performed by exploiting the statistical properties of the inhomogeneous redshift distribution of possible galaxy hosts \cite{schutz1986determining}. This method has been developed in the last decade \citep[e.g.,][]{holz2005using, 2011ApJ...732...82P,mukherjee2021accurate,del2018stellar} and has been successfully applied to recent LIGO-Virgo observations, although yielding only mild constraints \cite{Abbott_2021}. \par
The future \emph{Laser Interferometer Space Antenna} \citep[LISA,][]{amaro2017laser} will extend the frequency band currently explored by ground-based detectors to the \si{\milli\hertz} region. Moreover, the third-generation interferometers Einstein Telescope \citep[ET,][]{Punturo2010} and Cosmic Explorer \citep[CE,][]{Reitze2019Cosmic} will remarkably enhance the GW sensitivity from the ground, thus enabling the exploration of a larger portion of the Universe. LISA will detect the early inspiral of stellar black hole binaries (SBHBs) \cite{PhysRevLett.116.231102}. A fraction of these systems will coalesce within only a few years of the first LISA observation, becoming observable by ground-based detectors and thus fostering a multiband approach. \par
Combining information from ground and space detectors might provide unique scientific outcomes. This is particularly true in the case of multiband SBHB exploitation for cosmological measurements. While LISA will determine the sky position of the GW source to great accuracy, due to the long persistence of the signal in band, ET will pin down its luminosity distance, thanks to the high signal-to-noise ratio ($S/N$). By combining these two measurements, the origin of the signal can be constrained to a small 3D volume, encompassing only a small number of candidate galaxy hosts. Therefore, multiband GW sources might prove to be a particularly powerful class of ``dark'' standard sirens. \par
The idea of multiband GW sources was first proposed in the context of intermediate mass black holes (IMBHs) \cite{2010ApJ...722.1197A} and later revised in light of the observation of GW150914 \cite{PhysRevLett.116.231102}. Although the idea is appealing, recent estimates of the SBHB merger rate and mass function, coupled with the current LISA sensitivity curve, result in rather pessimistic detection prospects. In fact, when considering only BHs below the pair-instability mass gap, LISA might detect only a handful of such multiband sources \cite{2019PhRvD..99j3004G}. However, things change when the BH mass function is extended beyond the mass gap. As shown in \cite{mangiagli2019merger}, depending on the details of the SBHB population and on the duration of the LISA mission, ``above-gap'' SBHBs might dominate the population of multiband sources \cite{Ezquiaga_2021}, with up to about a hundred detected systems. \par
Here we exploit SBHBs above the pair-instability gap jointly observed by LISA and ET (which will be our 3G detector default choice) as dark standard sirens to infer the cosmological parameters via statistical methods. We simulate a population of inspiralling SBHBs in LISA, focusing on the systems that are going to coalesce during the LISA time mission. We perform parameter estimation of the multiband binaries under the Fisher matrix formalism with LISA and ground-based detectors. Combining the sky localization uncertainty provided by LISA and the estimate on the luminosity distance by ET, we construct error boxes in the sky. We populate these volumes with realistic galaxy catalogs \cite{izquierdo2020galactic}, and we infer $\text{H}_{0}$, as well as other cosmological parameters, by applying a statistical nested sampling algorithm. \par
A very similar approach was taken in \cite{del2018stellar}. Their study focused on the inference of cosmological parameters via LISA-only observations of SBHBs below the pair-instability mass gap. Assuming different detector configurations, they found that the Hubble constant is determined to between $\sim$5\% and $\sim$2\%. In the past few years, however, the LISA sensitivity has been downgraded by 50\% at the high frequency end \cite{2019CQGra..36j5011R}, and the latest observing run by LIGO/Virgo reported an intrinsic merger rate of $\sim 24 \,\rm Gpc^{-3}\, yr^{-1}$ for SBHBs \cite{Abbott_2021}. This will inevitably impact the results of \cite{del2018stellar}, yielding a worse estimate of $\text{H}_{0}$. Therefore, above-gap SBHBs could very well represent the only possibility to perform precision cosmology in this mass range, unless EM counterparts are found to be associated with SBHB mergers \cite{2017ApJ...835..165B}. \par
Even if EM-based measurements improve by the time LISA and ET start observing, we stress that GW-based observations rely on completely independent assumptions and systematics, and their distance measurements do not require the complex calibrations involved in the construction of a cosmic distance ladder. \par
The paper is structured as follows: in \cref{pre} we present the assumptions under which we perform our analyses. In particular, in \cref{binary_pop} we describe the astrophysical GW sources that we consider in our work and their observational properties. In \cref{par_est} we present the analytical framework adopted to perform the SBHB parameter estimation. \Cref{errbox} is devoted to the description of the error box construction and population, and \cref{bayes}, describes the statistical framework and the numerical implementation of the inference of cosmological parameters from SBHB observations. Our main results are presented and extensively discussed in \cref{results}, while \cref{end} summarizes the main features of our work and lays out future plans.
\section{Preliminaries} \label{pre}
\subsection{Binary population} \label{binary_pop}
A multiband approach requires multiband sources. In this work we explore the realm of SBHBs above the pair-instability mass gap. Current stellar evolution models predict the lack of stellar black holes in the mass range between $\sim$\SI{60}{\text{M}_{\odot}} and $\sim$\SI{120}{\text{M}_{\odot}} \cite{spera2017very}. Depending on the metallicity, the hot core of very massive stars might undergo electron-positron pair production. This process softens the star's equation of state and causes the star to collapse. The rising temperatures from the contraction then trigger thermonuclear runaway reactions that completely disrupt the progenitor in a luminous \emph{pair-instability supernova} (PISN) event, leaving no remnant \cite{woosley2002evolution}. On the contrary, no physical process can halt the collapse triggered in the last life stages of still more massive stars, and the progenitor leaves a BH remnant with mass greater than $\sim$\SI{120}{\text{M}_{\odot}} \cite{spera2017very}. In this study we consider SBHBs composed of such ``above-gap'' objects, i.e.~BHs with masses in the range \SI{120}{\text{M}_{\odot}}-\SI{300}{\text{M}_{\odot}}, and we assume the mass gap has a sharp cutoff at $[60,\, 120] \rm \, M_{\odot}$ (but see also \cite{2020ApJ...888...76M} for the effects of stellar rotation and compactness on the BH mass distribution). \par
When bound in coalescing binaries, above-gap BHs emit a GW signal that crosses multiple frequency bands: from the \si{\milli\hertz} regime, where LISA observes the long-lasting inspiral phase of the system, up to $\mathcal{O}(10^2)$\,\si{\hertz}, where ET detects the last inspiral cycles, the merger, and the ringdown. A multiband signal must be detected (i.e.~revealed with $S/N$ greater than a fixed threshold) by both interferometers, with LISA being the first of them. \par
These loud GW sources were extensively investigated in \cite{mangiagli2019merger}, and in this work we rely on those results to perform our analysis. Assuming the optimistic scenario developed by the authors, the LISA merger rate of above-gap GW sources is estimated to be between $R \simeq \SI{10}{\per\year}$ and $R \simeq \SI{14}{\per\year}$. A LISA mission time of \num{4} (\num{10}) years would then reveal $\sim \num{40}$ ($\sim \num{140}$) multiband events.
However, as in \cite{mangiagli2019merger}, we first consider three different subpopulations of SBHBs: below-gap (above-gap) binaries, where both components are below (above) \SI{60}{\text{M}_{\odot}} (\SI{120}{\text{M}_{\odot}}), and across-gap binaries with the primary (secondary) BH above (below) \SI{120}{\text{M}_{\odot}} (\SI{60}{\text{M}_{\odot}}).
We consider the same models described in that study, i.e.~an optimistic (sSFR-sZ) and a pessimistic (mSFR-mZ) model, plus two intermediate ones (mSFR-sZ and sSFR-mZ). The optimistic model features a higher star formation rate and lower metallicity, hence leading to more massive SBHBs than the pessimistic one. We refer to the original paper for extensive details on how the population of SBHBs is built and summarize here only the main properties of our binary population model. \par
We consider two models for the SFR and for the evolution of the average metallicity as a function of redshift. The pessimistic SFR and metallicity are both taken from \cite{Madau_2017}, while the optimistic SFR and metallicity are adopted from \cite{Strolger_2004} and \cite{Madau_2014}, respectively. We assume a power-law stellar initial mass function (IMF) $\xi(M_\star, \alpha)\propto M_\star^{-\alpha}$ extending over the range $[8, \, 350] \, \rm M_{\odot}$, with $\alpha = 2.7$ for the pessimistic SFR and $\alpha = 2.35$ for the optimistic one. Single stars are evolved with the code \texttt{SEVN} \cite{spera2017very}, and the resulting BHs are paired assuming a flat mass-ratio distribution in $[0.1, 1]$ and a log-flat time delay distribution in [$50 \, \rm Myr$, $t_{\rm Hubble}$]. The frequency distribution in LISA is computed in the quadrupole approximation for circular orbits following \cite{PhysRev.136.B1224}. \par
We stress that our population model is simplified in many ways. Perhaps most importantly, the resulting BH mass function is obtained by evolving individual stars only, neglecting the effect of binary interaction. In fact, binary evolution models have generally been implemented for stars up to $\approx 150\, \rm M_{\odot}$ \cite{10.1093/mnras/sty1613, 2021MNRAS.508.5028B}, preventing the study of above-gap remnants in binaries. Moreover, we consider only the standard field formation channel, neglecting alternatives such as dynamical capture \cite{2019ApJ...871...91Z}, hierarchical mergers \cite{2021MNRAS.502.2049L}, and accretion and mergers in AGN disks \cite{2021ApJ...908..194T}. In this respect, our BH binary population can be considered a ``toy model'' providing a proof-of-principle demonstration of the multiband approach. \par
In our analysis there are two important differences with respect to~\cite{mangiagli2019merger}: we normalized the overall rate to $\sim 25 \,\rm Gpc^{-3}\, yr^{-1}$ to best match the updates from the latest LIGO/Virgo results, and for internal consistency we changed the cosmology, choosing the same cosmological values used for the generation of the light cones (see \cref{errbox} for more details). For each subpopulation and each model, we generated 30 Monte Carlo realizations of the expected SBHB population. At this step, each binary is characterized by the two component masses, the merger redshift, and the initial frequency. The redshift is sampled between $10^{-3}$ and $\sim$19 in log scale, while the initial frequency is in the interval $[\num{2e-4}, 1] \, \rm Hz$. We checked that extending the frequency range to lower frequencies did not affect the number of observable systems. The frequency is then converted into a coalescence timescale $t_c$ via the standard quadrupole approximation \cite{PhysRev.131.435}, assuming circular binaries. Being interested in multiband events, we selected only systems with $t_c < 20 \, \rm years$~\footnote{Even though LISA will be able to detect systems much further from coalescence, these SBHBs will be characterized by large errors (especially on the luminosity distance) and will therefore add little information to the measurement of the cosmological parameters.}. Assuming a LISA mission lifetime of 10 years, we expect exquisite estimates of the coalescence time for these systems, of the order of $\Delta t_c \simeq [1 - 10] \, \rm s$ \cite{PhysRevLett.116.231102}, allowing the unequivocal identification of the binary by ground-based detectors.
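For reference, at quadrupole order the time to coalescence of a circular binary observed at GW frequency $f$ is $t_c = \frac{5}{256}\,(\pi f)^{-8/3}\,(G\mathcal{M}/c^3)^{-5/3}$, with $\mathcal{M}$ the chirp mass \cite{PhysRev.131.435}. A direct transcription follows (a minimal sketch with illustrative values; at cosmological distances the detector-frame, i.e.~redshifted, chirp mass should be used with the observed frequency):
\begin{verbatim}
import numpy as np

G, C, MSUN = 6.674e-11, 2.998e8, 1.989e30    # SI units

def t_coal(m1, m2, f_gw):
    """Quadrupole-order time to coalescence (s) of a circular binary
    emitting at GW frequency f_gw (Hz); masses in solar masses."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2      # chirp mass
    tau = G * mc * MSUN / C ** 3                  # G M_c / c^3 in s
    return (5.0 / 256.0) * (np.pi * f_gw) ** (-8.0 / 3.0) \
        * tau ** (-5.0 / 3.0)

# e.g. an above-gap 150+150 Msun binary observed at 5 mHz:
print(t_coal(150.0, 150.0, 5.0e-3) / 3.15e7, "yr")
\end{verbatim}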
\subsection{Parameter estimation formalism} \label{par_est}
To simulate observations with LISA and ET, we evaluate the error on the estimated parameters of each source by means of the Fisher information matrix. The general output of a detector is a time series $s(t)$ given by the superposition of the noise contribution $n(t)$ and a GW signal $h(t)$, if present:
\begin{equation}
s(t) = n(t) + h(t) \, .
\end{equation}
Assuming stationary, Gaussian noise, the $S/N$ produced by the GW signal is given by
\begin{equation}
\biggl( \frac{S}{N} \biggr)^2 = \inner{h}{h} \, ,
\end{equation}
where the round brackets $(\,|\,)$ refer to the inner product between two real functions $A(t)$ and $B(t)$ defined as
\begin{equation}
\inner{A}{B} = 4 \, \text{Re} \int_{0}^{+\infty} df \, \frac{\tilde A^{*}(f) \tilde B(f)}{S_{n} (f)} \, .
\label{eqn:inner}
\end{equation}
Here the tilde labels Fourier-transformed quantities, while the star denotes complex conjugation. The term $S_{n}$ is the (one-sided) \emph{power spectral density} (PSD) of the detector noise, in units of \si{\per\hertz}. \par
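Numerically, \cref{eqn:inner} is a single frequency-domain quadrature. A minimal sketch (not the pipeline code; the waveform and PSD arrays are placeholders evaluated on a common frequency grid):
\begin{verbatim}
import numpy as np

def inner(Af, Bf, Sn, f):
    """Noise-weighted inner product (A|B) = 4 Re int A* B / S_n df,
    evaluated on a common frequency grid f by trapezoidal quadrature."""
    return 4.0 * np.real(np.trapz(np.conj(Af) * Bf / Sn, f))

def snr(hf, Sn, f):
    """S/N of a frequency-domain signal hf against the PSD Sn."""
    return np.sqrt(inner(hf, hf, Sn, f))
\end{verbatim}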
The GW signal $h(t,\Theta)$ produced by a spinning, precessing, and eccentric binary in a detector is characterized by a set of \num{17} parameters ${\bf{\Theta}}= \{\Theta_{1}, \, ... \,, \Theta_{17}\}$. In the limit of high $S/N$, the probability (or likelihood) that the observed signal is described by $\bf{\Theta}$, given an output $s(t)$, is
\begin{equation}
p({\bf{\Theta}}|s) \propto \exp \biggl[ -\frac{1}{2}\inner{\de{i}h}{\de{j}h} \Delta\Theta_{i}\Delta\Theta_{j} \biggr] \, ,
\label{eqn:error_distribution}
\end{equation}
where $\de{i}$ denotes the derivative of the signal $h(t,\Theta)$ with respect to the parameter $\Theta_i$. Equation \eqref{eqn:error_distribution} describes a multivariate Gaussian distribution centered in $\bf{\Theta}$ and with covariance matrix $\Sigma = \inner{\de{i}h}{\de{j}h}^{-1}$. The Fisher information matrix $\Gamma$ is then defined as
\begin{equation}
\Gamma_{ij} = \inner{\de{i}h}{\de{j}h} \, ,
\label{eqn:FM}
\end{equation}
and the parameter uncertainties and covariances are thus contained in its inverse $\Sigma$. Moreover, since LISA and ET are independent detectors, we can construct a more informative Fisher matrix by simply adding the individual ones:
\begin{equation}
\label{eqn:LISA_ET}
\Gamma_{\text{ET+LISA}} = \Gamma_{\text{ET}} + \Gamma_{\text{LISA}} \, .
\end{equation}
The new matrix contains the features of both detectors, therefore yielding better constraints on source parameters, in particular on sky location and luminosity distance. \par
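As a schematic illustration of how \cref{eqn:FM} and \cref{eqn:LISA_ET} can be put together, the sketch below assembles a Fisher matrix from central finite-difference derivatives of a frequency-domain waveform, reusing the \texttt{inner} helper sketched above; \texttt{waveform\_model} is our own placeholder for an arbitrary approximant, not the actual pipeline interface:
\begin{verbatim}
import numpy as np

def fisher(waveform_model, theta, Sn, f, rel_step=1e-6):
    """Gamma_ij = (d_i h | d_j h) via central differences in theta."""
    n = len(theta)
    dh = []
    for i in range(n):
        dt = np.zeros(n)
        dt[i] = rel_step * max(abs(theta[i]), 1.0)
        dh.append((waveform_model(theta + dt, f)
                   - waveform_model(theta - dt, f)) / (2.0 * dt[i]))
    gamma = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            gamma[i, j] = gamma[j, i] = inner(dh[i], dh[j], Sn, f)
    return gamma

# Independent detectors add information (eqn:LISA_ET):
# gamma_tot = fisher(h, theta, Sn_ET, f) + fisher(h, theta, Sn_LISA, f)
# sigma = np.linalg.inv(gamma_tot)   # covariance matrix
\end{verbatim}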
For each binary sampled from our distributions as described in \cref{binary_pop}, we compute the $S/N$ and the Fisher matrix in LISA. We adopt the sensitivity curve described in \cite{PhysRevD.102.084056}. The signal in LISA is described by the inspiral-only precessing waveform presented in \cite{PhysRevD.90.124029}, assuming random spin magnitudes in $[0,1]$ aligned with the binary angular momentum, for consistency with the waveform adopted for ground-based detectors, as discussed below. The sky position and the direction of the binary orbital angular momentum are randomly sampled from a uniform distribution on the sphere. We choose $S/N = 8$ as the threshold for LISA detection, and for each detected binary we compute the $15 \times 15$ Fisher matrix according to \cref{eqn:FM}. Due to the numerical nature of the problem, we must check the outcome of the subsequent inversion process. We quantify the discrepancy between the matrix product $(\Gamma \cdot \Sigma)_{ij}$ and the Kronecker symbol $\delta_{ij}$ through
\begin{equation}
\varepsilon_{\text{inv}} = \max_{i, \, j} |(\Gamma \cdot \Sigma)_{ij} - \delta_{ij} | \, ,
\label{eqn:epsinv}
\end{equation}
and we consider the inversion successful if $\varepsilon_{\text{inv}} \le \num{e-3}$ (see the appendix in \cite{berti2005estimating} for details on the procedure). \par
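The check of \cref{eqn:epsinv} is then a one-liner on top of the numerical inverse (a trivial sketch):
\begin{verbatim}
import numpy as np

def inversion_error(gamma):
    """eps_inv = max_ij |(Gamma Sigma)_ij - delta_ij|."""
    sigma = np.linalg.inv(gamma)
    return np.max(np.abs(gamma @ sigma - np.eye(len(gamma))))

# keep the event only if inversion_error(gamma) <= 1e-3
\end{verbatim}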
All binaries detected by LISA are then analyzed with a pipeline for ground-based detectors. The ET Fisher matrix [again defined through \cref{eqn:FM}] is computed using the \texttt{PYTHON} library \texttt{PyCBC} \cite{PyCBC}, which can perform the inner products in \cref{eqn:inner} for a number of waveform approximants. Since ET will also detect the merger and ringdown, we adopt the \emph{PhenomD} waveform \cite{khan2016frequency} to model the full GW signal. Due to its domain of definition, the waveform only allows us to study nonprecessing, spin-aligned binaries, as mentioned before. Therefore, of the original \num{17} source parameters we neglect the \num{4} corresponding to the $x$ and $y$ spin components of each BH. Two of the remaining source parameters are related to the eccentricity. However, GW emission eventually circularizes any eccentric orbit (but see \cite{2021ApJ...907L..20T} for alternative scenarios): we choose to assume only circular binaries, and we therefore ignore these parameters. The binaries and their signals are evolved through the high frequency band probed by ET, starting from \SI{3}{\hertz}, and we set an $S/N$ threshold of \num{12} for detection with the 3G interferometer.
\begin{table}[h]
\centering
\begin{tabular}{ l c }
\hline
\hline
\textbf{Quantity} & \textbf{Parameter} \\
\hline
Mass 1 & $\ln M_1$ \\
Mass 2 & $\ln M_2$ \\
Luminosity distance & $\ln d_{L}$ \\
Spin 1 & $\chi_1$ \\
Spin 2 & $\chi_2$ \\
RA & $\varphi_N$ \\
DEC & $\mu_{N} = \cos \theta_N$ \\
Inclination & $\iota$ \\
\hline
\hline
\end{tabular}
\caption{Parameters that characterize the Fisher matrix. We denote with a capital $N$ subscript the celestial coordinates defined through $\theta_N = \frac{\pi}{2} - \text{DEC}$ and $\varphi_N = \text{RA}$. Furthermore, we take the natural logarithm of $M_1$, $M_2$ and $d_{L}$ so to deal with relative errors.}
\label{tab:selected_parameters}
\end{table}
Even considering circular binaries with aligned spins, we are still left with an $11\times11$ matrix that keeps track of the two BH masses and dimensionless spins, the luminosity distance, the right ascension and declination of the source, the inclination and polarization angles, and the merger time and phase. However, we find that in the case of ET the matrix is either ill-conditioned or singular \cite{vallisneri2008use}, and the algorithm struggles with the matrix inversion~\footnote{To invert the matrices we adopt the LU decomposition.}. Thus, we are forced to select the largest subset of parameters that leads to acceptable $\varepsilon_{\text{inv}}$ values. For the purposes of this work, we choose to exclude the polarization angle, the merger phase, and the merger time. We do not expect the two angles to have a strong impact on our parameter estimation, and LISA will in any case be able to provide accurate estimates of the coalescence time early in the inspiral phase. In other words, our Fisher matrices are computed for the following subset of 8 parameters: the two BH masses $M_1$ and $M_2$ (with the condition $M_1> M_2$), the luminosity distance $d_{L}$, the two BH spins $\chi_1$ and $\chi_2$, the right ascension RA and declination DEC of the binary, and the inclination $\iota$~\footnote{Although the ET $11\times11$ matrix turns out to be ill-conditioned, the LISA one is not. As a sanity check supporting the use of a reduced matrix, we computed the errors on the sky position and luminosity distance for LISA-only observations both for an $11\times11$ and an $8\times8$ Fisher matrix. We found comparable results in the two cases, with the latter providing slightly more accurate numbers (by a factor $\approx 1.5$) on average.}. In \cref{tab:selected_parameters} we show how we model the remaining quantities. We therefore reduce the Fisher matrices computed for LISA and ET to the 8 aforementioned parameters, add them, and invert the sum to get the covariance matrix. In this sense, we are not just looking at what each individual detector can achieve, but we are summing the information matrices of the two detectors as in \cref{eqn:LISA_ET}. The diagonal of the covariance matrix $\Sigma$ contains the variance of each parameter, while the off-diagonal entries represent their correlations. The error on $d_{L}$ can be read directly off the diagonal, whereas the sky position uncertainty area, in units of \si{\steradian}, is recovered through \cite{lang2006measuring}
\begin{equation}
\Delta \Omega = 2 \pi \sqrt{\Sigma_{\varphi_{N} \varphi_{N}}\Sigma_{\mu_{N} \mu_{N}} - (\Sigma_{\mu_{N} \varphi_{N}})^2} \, ,
\label{eqn:sky_loc_error}
\end{equation}
where $\Sigma_{\mu_{N} \mu_{N}}$ and $\Sigma_{\varphi_{N} \varphi_{N}}$ are the diagonal elements for the declination and right ascension of the source, and $\Sigma_{\mu_{N} \varphi_{N}}$ is their covariance.
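Schematically, assuming the two reduced Fisher matrices are available as \texttt{NumPy} arrays (all names below are hypothetical), the combination and localization steps read:
\begin{verbatim}
import numpy as np

def combine_and_localize(F_lisa, F_et, i_phi, i_mu):
    # Sum the two 8x8 Fisher matrices (information adds, as in
    # eqn:LISA_ET), invert to the covariance matrix, and evaluate
    # eqn:sky_loc_error. i_phi, i_mu are the indices of varphi_N
    # and mu_N = cos(theta_N) in the parameter ordering.
    F = F_lisa + F_et
    Sigma = np.linalg.inv(F)           # covariance matrix
    dOmega = 2.0 * np.pi * np.sqrt(
        Sigma[i_phi, i_phi] * Sigma[i_mu, i_mu]
        - Sigma[i_mu, i_phi] ** 2)
    return Sigma, dOmega               # dOmega in steradians
\end{verbatim}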
\subsection{Error box construction and population} \label{errbox}
The uncertainties on the luminosity distance and the sky location allow us to constrain the volume the GW signal comes from, and the galaxies inside it represent possible host candidates. We build the error boxes of the events following the procedure outlined in \cite{laghi2021gravitational}. For a given cosmology, a $d_{L} \pm \sigma_{d_{L}}$ measurement translates into a $z \pm \sigma_z$ interval, which is obtained by inverting the relation
\begin{equation}
d_{L}(z) = c \, (1 + z) \int_{0}^{z} \frac{dz'}{\text{H}(z')} \, .
\label{eqn:dl_z}
\end{equation}
Here $\text{H}(z)$ represents the Hubble parameter as a function of redshift; its expression in a flat $\Lambda \text{CDM}$ Universe is
\begin{equation}
\text{H}(z) = \text{H}_{0} \sqrt{\Omega_{m}(1+z)^3 + \Omega_{\Lambda}} \, .
\end{equation}
Since each set of cosmological parameters yields a different $d_L$-$z$ relation, the $z \pm \sigma_z$ redshift interval is extended in order to take into account the prior ranges of the cosmological parameters one is willing to infer. From a luminosity distance measurement, we therefore construct a redshift range $[z^{-}, \, z^{+}]$, whose bounds represent the lowest and highest $z$ obtained when the cosmology varies within the prior ranges of the cosmological parameters, which will be specified in \cref{bayes}. Furthermore, due to the peculiar velocity $v_p$ of galaxies, the \emph{cosmological} and \emph{apparent} redshifts of a GW host might differ from each other. This is an additional source of uncertainty that we must account for. We characterize this error with $\sigma_{pv}(z) = (1 + z)\sigma_{v_{p}}/c$, with $\sigma_{v_p} = \SI{500}{\kilo\meter\per\second}$, which is consistent with the standard deviation of the radial peculiar velocity distribution observed in the Millennium run \cite{springel2005simulations}. In conclusion, each $d_{L} \pm \sigma_{d_{L}}$ measurement translates into a [$z^{-} - \sigma_{pv}(z^{-}), \, z^{+} + \sigma_{pv}(z^{+})$] redshift range. This corresponds to the redshift shell of the Universe encompassing all galaxies with a redshift consistent with the GW measured luminosity distance, once the cosmological prior and peculiar velocities are taken into account. \par
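A minimal sketch of this shell construction, assuming flat $\Lambda$CDM and scanning only the corners of the flat priors (which suffices here since $d_L$ is monotonic in each parameter; all function names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light in km/s

def lum_dist(z, h, Om):
    # Flat LCDM luminosity distance in Mpc, eqn:dl_z.
    H0 = 100.0 * h
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(Om * (1 + zp)**3 + 1 - Om))
    return C * (1 + z) * quad(integrand, 0.0, z)[0]

def z_of_dl(dl, h, Om, z_hi=2.0):
    # Invert d_L(z) by bisection (d_L is monotonically increasing in z).
    lo, hi = 0.0, z_hi
    while hi - lo > 1e-6:
        mid = 0.5 * (lo + hi)
        if lum_dist(mid, h, Om) < dl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def redshift_shell(dl, sig_dl, h_prior=(0.6, 0.86),
                   Om_prior=(0.04, 0.5), sig_vp=500.0):
    # Widest [z-, z+] over the prior corners, padded by the
    # peculiar-velocity term sigma_pv(z) = (1+z) sigma_vp / c.
    zs = [z_of_dl(d, h, Om) for d in (dl - sig_dl, dl + sig_dl)
          for h in h_prior for Om in Om_prior]
    zm, zp = min(zs), max(zs)
    return zm - (1 + zm) * sig_vp / C, zp + (1 + zp) * sig_vp / C
\end{verbatim}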
The next step is to populate those redshift shells with realistic galaxy catalogs. To this end, we make use of a custom light cone built specifically for this purpose by the authors. In particular, the light cone is assembled following the methodology presented in \cite{izquierdo2020galactic}, which applies the semianalytical model \texttt{L-Galaxies} to the Millennium dark matter merger trees. Specifically, the light cone generated for this work is complete up to $z = 1$ and contains several physical properties, such as mass, magnitudes, and observed and geometrical redshifts, for all the galaxies included in it. Given that we are mainly interested in the low redshift Universe, where our results might be affected by cosmic variance caused by narrow angular apertures, we set the light cone aperture to one-eighth of the full sky. We refer the reader to \cite{izquierdo2020galactic} for further details about the light cone construction. As an illustrative example, \cref{fig:light_cone} displays the spatial distribution of the $M_{\rm gal} > \SI{e10}{\text{M}_{\odot}}$ galaxies~\footnote{The mass resolution of the simulation is $M_{*} = \SI{e10}{\text{M}_{\odot}}$.} inside a ${\sim}\SI{1}{\degree}$ declination slice.
\begin{figure}[b]
\centering
\includegraphics[width=0.5\textwidth]{LightCone_cut.pdf}
\caption{Visual representation of a slice of the light cone adopted in this work. The major panel shows the galaxy distribution in the $z$-RA plane, while the top-right corner plot displays the galaxy distribution in the RA-DEC plane.}
\label{fig:light_cone}
\end{figure}
We fix the parameters of the simulation so that $\text{H}_{0} = \SI{73}{\kilo\meter\per\second\per\mega\parsec}$, $\Omega_{m} = 0.25$ and $\Omega_{\Lambda} = 0.75$. Even though these values do not reflect state-of-the-art estimates, this has no impact on the analysis. Our work aims to show the potential of this GW-based measurement for a given Universe, which we are free to customize within our pipeline. \par
To locate and populate the error boxes, for each GW event we adopt the following procedure (a schematic implementation is sketched right after the list):
\begin{enumerate}
\item We list all the galaxies within our light cone whose cosmological redshift lies within the $z \pm \sigma_z$ interval consistent with the ET+LISA $d_{L} \pm \sigma_{d_{L}}$ measurement and the true (i.e.~the Millennium) cosmology.
\item We randomly~\footnote{However, we reject galaxies selected near the boundary of the light cone, so as to avoid cutting the edges of the $\Delta\Omega$ ellipses.} select a galaxy among them and denote it as the ``true host'' of the GW event. The position of this galaxy is described by the set ${\bf{\Theta}}^{\text{th}}=\{d_{L}^{\text{th}}, \mu_{N}^{\text{th}}, \varphi_{N}^{\text{th}}\}$.
\item We draw a pair of celestial coordinates $(\mu_N', \varphi_N')$ from the uncertainty distribution in \cref{eqn:error_distribution} centered on ${\bf{\Theta}}^{\text{th}}$.
\item We consider a 3$\sigma$ region in $\Delta\Omega$, computed according to \cref{eqn:sky_loc_error} and centered on $(\mu_N', \varphi_N')$. This procedure ensures that the volume is not artificially centered on the true host, but on a nearby point in the sky consistent with the sky location error of the GW measurement.
\item We select all the galaxies with $M_{\rm gal} \ge \SI{3e10}{\text{M}_{\odot}}$~\footnote{See \cref{gal_mass_threshold} for a discussion of the impact of the adopted mass threshold.} within $\Delta\Omega \times [z^{-} - \sigma_{pv}(z^{-}), \, z^{+} + \sigma_{pv}(z^{+})]$, and we associate to each one a hosting probability consistent with the marginalized sky location error given by the GW measurement (see \cref{bayes}). These represent all the possible host candidates for the GW event.
\end{enumerate}
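A schematic implementation of the five steps above, where the layout of the galaxy array and the \texttt{error\_dist} helper (drawing sky positions from the uncertainty distribution) are hypothetical:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def populate_error_box(gals, z_minus, z_plus, z_lo_true, z_hi_true,
                       dOmega, error_dist):
    # `gals` is a structured array with fields z_cosmo, mu, phi, mass.
    # 1. candidates consistent with the true cosmology
    in_true = (gals["z_cosmo"] > z_lo_true) & (gals["z_cosmo"] < z_hi_true)
    # 2. random true host
    host = gals[in_true][rng.integers(in_true.sum())]
    # 3. relocate the box center around the true host
    mu_c, phi_c = error_dist(host["mu"], host["phi"])
    # 4.-5. mass cut plus 3-sigma sky region inside the redshift shell;
    #       a crude isotropic circle of radius 3*sigma, ignoring the
    #       mu-phi correlation, with sigma^2 ~ dOmega / (2 pi)
    r2 = (gals["mu"] - mu_c)**2 + (gals["phi"] - phi_c)**2
    keep = ((gals["mass"] >= 3e10)
            & (gals["z_cosmo"] > z_minus) & (gals["z_cosmo"] < z_plus)
            & (r2 < 9.0 * dOmega / (2.0 * np.pi)))
    return host, gals[keep]
\end{verbatim}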
The outcome of such a method is displayed in \cref{fig:error_boxes}, where we present different error boxes to show their main features and how they may affect the inference of cosmological parameters. In particular, the left column shows an error box with optimal galaxy clustering in the true cosmology region, which helps the inference of cosmological parameters. The right column, instead, depicts a noninformative event, owing to the misleading information provided by the large galaxy cluster at higher redshift. The middle column shows an average error box, which becomes useful to the inference when cross-correlated with other GW events.
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth]{ERRORBOX_EVENT_10263.pdf}
\includegraphics[width=0.3\textwidth]{ERRORBOX_EVENT_10276.pdf}
\includegraphics[width=0.3\textwidth]{ERRORBOX_EVENT_10043.pdf}
\caption{Three different error boxes are displayed column-wise. The top row depicts the distribution of galaxies in the $(\theta,\varphi)$ plane, with the color scale denoting the associated hosting probability, from dark blue (low) to yellow (high). The histograms in the bottom row represent the galaxy distributions over redshift, weighted by the hosting probability. The dark green solid line denotes the best ET+LISA $z$ measurement according to the true cosmology, while the light green dotted lines represent the $2\sigma$ redshift interval. The red color in each panel denotes the selected ``true host''.
}
\label{fig:error_boxes}
\end{figure*}
\subsection{Bayesian inference} \label{bayes}
The problem of measuring quantities in nonrepeatable experiments is Bayesian by nature. The entire framework relies on Bayes' theorem, which states that, given a set of data $D$ and a model hypothesis $H$, the probability of $H$ given $D$ is
\begin{equation}
p(H|D) = \frac{\mathcal{L}(D|H) \pi(H)}{\mathcal{Z}} \, ,
\label{eqn:bayes}
\end{equation}
where
\begin{itemize}
\item $p(H|D)$ is the \emph{posterior distribution}, stating the degree of belief in $H$ after the measurement.
\item $\mathcal{L}(D|H)$ is the \emph{likelihood function}, which is known once we assume a model hypothesis.
\item $\pi(H)$ is the \emph{prior distribution}, which encodes our degree of belief in $H$ before the measurement.
\item $\mathcal{Z}$ is the \emph{evidence}, a central quantity in model selection studies. Since here we are interested only in the posterior distribution, we can neglect this factor and simply renormalize the posterior at the end of the computation.
\end{itemize}
In our case, $D$ represents the detected GW events, while $H$ denotes a particular assumed cosmological model that defines the parameter space to be explored. The aim of Bayesian inference is to determine the posterior distribution. To sample the posterior distribution $p(H|D)$ we first need to define the model (which fixes the likelihood function $\mathcal{L}$) and the prior distribution.
\subsubsection{Single GW event likelihood} \label{single_GW_likelihood}
In the next derivation, we follow the arguments detailed in \cite{del2018stellar}. Consider a set of $n$ GW events $\vector{gw} = \{gw_1, \, ... \, , gw_n\}$ and let $\mathcal{S} = \{\text{H}_{0}, \Omega_{m}, \, ... \}$ be the set of cosmological parameters to be inferred. Each GW event is reasonably independent of the others, therefore the likelihood in \cref{eqn:bayes} can be rewritten as the product of the single GW event likelihoods~\footnote{\label{txt:quasilikelihood}This quantity is more rigorously called a \emph{quasilikelihood}, since it is obtained through the marginalization of the likelihood over \emph{nuisance} parameters.}:
\begin{equation}
\mathcal{L}(\vector{gw}|\mathcal{S}) = \prod_{i = 1}^{n} \mathcal{L}(gw_i|\mathcal{S}) \, .
\label{eqn:total_GW_events_likelihood}
\end{equation}
The single GW event quasilikelihood is obtained through the marginalization over the source parameters. By defining $\vector{x}=\{d_{L}, \, z, \, \theta, \, \varphi \}$, we can write
\begin{equation}
\mathcal{L}(gw_i|\mathcal{S}) = \int d\vector{x} \, \mathcal{L}(gw_i|\vector{x}, \mathcal{S}) \pi(\vector{x}|\mathcal{S}) \, .
\end{equation}
As in \cite{del2018stellar}, we assume that the integral over the sky position can be performed analytically. Hence we are left with
\begin{equation}
\mathcal{L}(gw_i|\mathcal{S}) = \int dd_{L} dz \, \mathcal{L}(gw_i|d_{L}, z, \mathcal{S}) \pi(d_{L}|z, \mathcal{S}) \pi(z|\mathcal{S}) \, .
\end{equation}
As shown in \cref{eqn:dl_z}, a given cosmological model fixes a $d_{L}$-$z$ relation. Therefore, among $d_{L}$, $z$ and $\mathcal{S}$, only two are independent quantities. We choose $d_{L}$ to be the dependent one, and the prior on the luminosity distance becomes, as in \cite{del2018stellar,del2012inference}, the following Dirac delta
\begin{equation}
\pi(d_{L}|z, \mathcal{S}) = \delta(d_{L} - d_{L}(z, \mathcal{S})) \, ,
\end{equation}
where $d_{L}(z, \mathcal{S})$ is computed through \cref{eqn:dl_z}. Under the Fisher information matrix formalism, the likelihood of a GW event is a Gaussian distribution in luminosity distance, whose mean value $\braket{d_{L}}$ and width $\sigma_{d_{L}}$ are given by the matched filtering technique. In addition to the instrumental uncertainty on the luminosity distance, we also account for the systematic error due to weak lensing, modeled through $\sigma_{WL}$ (see eq.~(7.3) of~\cite{Tamanini:2016zlh}). Therefore, once we marginalize over the luminosity distance, we obtain
\begin{equation}
\mathcal{L}(gw_i|d_{L}(z, \mathcal{S}), z, \mathcal{S}) \propto \exp\Biggl[-\frac{1}{2} \frac{\bigl(d_{L}(z, \mathcal{S}) - \braket{d_{L}}\bigr)^2}{\sigma_{d_{L}}^2 + \sigma_{WL}^2} \Biggr] \, .
\end{equation}
Finally, as in \cite{del2018stellar}, the prior over the GW event redshift is built so as to take into account the peculiar velocities of the galaxies in the catalog. Each galaxy $j$ is assigned a hosting probability $w_j$ that depends on the distance in the ($\theta, \varphi$) plane between the host candidate and the relocated GW event. In particular, this quantity is computed by marginalizing \cref{eqn:error_distribution} over the luminosity distance \cite{laghi2021gravitational}
\begin{equation}
w_{j} = \int \, d d_{L} \, p({\bf{\Theta}}|s) \, ,
\end{equation}
with ${\bf{\Theta}}=\{d_{L}, \mu_{N}, \varphi_{N}\}$, and by evaluating it at ($\mu_{N_{j}}, \varphi_{N_{j}}$), the sky coordinates of the $j$-th galaxy within the error volume of a GW event. \par
The prior over the GW event redshift is therefore chosen as a discrete sum of $K$ Gaussians, to account for the galaxy redshift uncertainty, each weighted by its $w_j$ value:
\begin{equation}
\pi(z|\mathcal{S}) \propto \sum_{j = 1}^{K} w_j \exp\biggl[ -\frac{1}{2} \biggl(\frac{z - z_j}{\sigma_{pv_{j}}} \biggr)^2 \biggr] \, .
\label{eqn:redshift_prior}
\end{equation}
Here $j$ runs over the $K$ galaxies inside the error box, while $\sigma_{pv_{j}} = \sigma_{pv}(z_{j})$. The single GW quasilikelihood then reads
\begin{equation}
\begin{split}
\mathcal{L}(gw_i|\mathcal{S}) &\propto \int_{z_{\text{min}}}^{z_{\text{max}}} dz \, \exp{\Biggl[-\frac{1}{2} \frac{\bigl(d_{L}(z, \mathcal{S}) - \braket{d_{L}}\bigr)^2}{\sigma_{d_{L}}^2 + \sigma_{WL}^2} \Biggr]} \times \\
& \sum_{j = 1}^{K} w_j \exp{\biggl[ - \frac{1}{2} \biggl( \frac{z - z_j}{\sigma_{pv_{j}}} \biggr)^2 \biggr]} \, ,
\end{split}
\label{eqn:single_GW_likelihood}
\end{equation}
where $z_{\text{min}}$ and $z_{\text{max}}$ are the lower and upper integration bounds, corresponding to the minimum and maximum GW redshift obtained from the prior on $\mathcal{S}$ by inverting the $d_{L}$-$z$ relation in \cref{eqn:dl_z}.
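A direct numerical transcription of \cref{eqn:single_GW_likelihood} on a redshift grid could look as follows, reusing the \texttt{lum\_dist} of the earlier sketch (all names are illustrative, not the \texttt{COSMOLISA} implementation):
\begin{verbatim}
import numpy as np

def log_quasilikelihood(dl_mean, sig_dl, sig_wl, z_gal, w_gal, sig_pv,
                        h, Om, z_min, z_max, n_grid=2000):
    # Single-event quasilikelihood of eqn:single_GW_likelihood.
    z = np.linspace(z_min, z_max, n_grid)
    dl = np.array([lum_dist(zi, h, Om) for zi in z])
    gauss_dl = np.exp(-0.5 * (dl - dl_mean)**2 / (sig_dl**2 + sig_wl**2))
    # galaxy-weighted redshift prior, eqn:redshift_prior
    prior_z = sum(w * np.exp(-0.5 * ((z - zj) / s)**2)
                  for w, zj, s in zip(w_gal, z_gal, sig_pv))
    return np.log(np.trapz(gauss_dl * prior_z, z))
\end{verbatim}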
\subsubsection{Prior choices and application}
In this work, we highlight the need for an independent estimate of the local Universe expansion rate, given the current tension between the most recent estimates from EM surveys \cite{riess2019large,aghanim2020planck}. We therefore choose to infer a set of two cosmological parameters, namely the Hubble constant~\footnote{We define $h = \text{H}_{0}/\SI{100}{\kilo\meter\per\second\per\mega\parsec}$, so that it is dimensionless and smaller than unity, in our case $h = 0.73$.} and the matter energy density parameter, $\mathcal{S} = \{h, \Omega_{m}\}$. We choose conservative flat prior distributions for each quantity, in particular $h \in [0.6, 0.86]$ and $\Omega_{m} \in [0.04, 0.5]$. \par
The numerical implementation of the method described in \cref{bayes} is achieved through \texttt{COSMOLISA}, a public software package~\cite{cosmolisa} based on a nested sampling algorithm \cite{cpnest}. The primary output of nested sampling is the evidence $\mathcal{Z}$, with samples of the posterior distribution produced as a by-product (see \cite{skilling2006nested} for the basics of the method). We explore the parameter space with \num{5000} live points and, at each iteration, evolve the lowest-likelihood one through a \emph{Markov chain Monte Carlo} (MCMC) until $\mathcal{Z}$ is computed to a given accuracy.
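Schematically, the quantity handed to the sampler is the unnormalized log-posterior below; we do not reproduce the exact \texttt{CPNest} model interface, and the per-event bookkeeping is hypothetical.
\begin{verbatim}
import numpy as np

H_RANGE, OM_RANGE = (0.6, 0.86), (0.04, 0.5)   # flat priors from the text

def log_posterior(h, Om, events):
    # Flat priors times the product of single-event quasilikelihoods
    # (log_quasilikelihood above); `events` is a list of per-event
    # data dictionaries matching its keyword arguments.
    if not (H_RANGE[0] < h < H_RANGE[1] and OM_RANGE[0] < Om < OM_RANGE[1]):
        return -np.inf
    return sum(log_quasilikelihood(h=h, Om=Om, **ev) for ev in events)
\end{verbatim}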
\section{Results and discussion} \label{results}
\subsection{Multiband approach} \label{multi_band}
We analyzed the results of the parameter estimation for the three binary subpopulations (below-gap, across-gap, above-gap) and the four scenarios (mSFR-mZ, mSFR-sZ, sSFR-mZ, sSFR-sZ). As \cref{tab:detection} shows, however, not all the cases are suitable multiband candidates for the inference of cosmological parameters. For instance, across-gap binaries are extremely rare systems regardless of the scenario, while below-gap binaries reach interesting numbers only when assuming \num{10} years of observation and the most optimistic models. As a matter of fact, above-gap binaries are the most promising ones, in particular under the sSFR-sZ scenario. Even if the corresponding intermediate models are still encouraging, in this work we focus on the optimistic one, as we need to impose further cuts on the catalog that shrink the sample of useful systems. \par
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{ l c c c c c }
& & mSFR-mZ & mSFR-sZ & sSFR-mZ & sSFR-sZ \\
\hline
\multirow{3}*{\num{4} years} & below-gap & \num{1.6} & \num{1.5} & \num{2.1} & \num{2.1} \\
& across-gap & \num{0.1} & \num{0.3} & \num{0.2} & \num{0.7}\\
& above-gap & \num{1.1} & \num{7.7} & \num{8.1} & \num{40.8} \\
\hline
\multirow{3}*{\num{10} years} & below-gap & \num{5.1} & \num{5.5} & \num{7.3} & \num{8.9} \\
& across-gap & \num{0.2} & \num{0.7} & \num{0.5} & \num{3.1}\\
& above-gap & \num{4.2} & \num{28.4} & \num{27.8} & \num{134.1} \\
\end{tabular}
\end{ruledtabular}
\caption{The number of LISA detections for a given population of binaries and model, assuming \num{4} and \num{10} years of mission lifetime. These numbers are averages over the \num{30} realizations. Due to the ET sensitivity curve, all binaries detected by LISA are also detected on the ground.}
\label{tab:detection}
\end{table}
The first remarkable result is the performance achieved through the cooperation between space-borne and third-generation ground-based interferometers. In \cref{fig:fisher_errors} we report the uncertainty distributions of $d_{L}$ and $\Delta\Omega$ for the above-gap population in the sSFR-sZ model, assuming 10 years of LISA mission time. We consider LISA, ET and the LIGO-Virgo network at design sensitivity. Since the latter does not yield sufficient accuracy in parameter estimation to perform meaningful cosmological measurements, we focus on ET and LISA. Individually, both detectors hardly achieve the precision reached through their joint exploitation: for the sky location, the median of the ET+LISA distribution decreases by an order of magnitude with respect to the single-detector ones, with LISA being slightly better at determining this parameter. The same improvement applies to the luminosity distance, with ET being the best probe of this quantity, as expected.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Fisher_errors_number_on_ylabel_delta_on_xlabel.pdf}
\caption{Uncertainty distributions for the luminosity distance (upper panel) and the sky localization (bottom panel) for LIGO/Virgo, ET, LISA and the network ET+LISA, colors as in legend. The panels display the results assuming \num{10} years of LISA mission time and the above-gap, sSFR-sZ scenario.}
\label{fig:fisher_errors}
\end{figure}
In \cref{tab:medians} we show the medians of the uncertainty distributions for each parameter in the single detectors and in the network ET+LISA. Crucially for this work, ET+LISA improves the sky localization by more than an order of magnitude and the luminosity distance precision by a factor ${\sim}10$ with respect to the best individual detector. It should be noted that the combination of the two also improves the estimates of all other parameters, most noticeably the spin magnitudes.
\begin{table}[b]
\centering
\begin{ruledtabular}
\begin{tabular}{ l c c c c }
& \textbf{LIGO/Virgo} & \textbf{ET} & \textbf{LISA} & \textbf{ET+LISA} \\
\hline
$\Delta\Omega$ [\si{\square\degree}] & \num{5.6e2} & \num{5.1} & \num{2.5} & \num{2.2e-1} \\
$\Delta d_{L} / d_{L}$ & \num{3.6} & \num{3.4e-1} & \num{4.3e-1} & \num{3.9e-2} \\
$\Delta \mathcal{M} / \mathcal{M}$ & \num{3.6e-1} & \num{3.3e-3} & \num{4.1e-7} & \num{2.0e-7}\\
$\Delta \iota / \iota$ & \num{5.9} & \num{4.7e-1} & \num{9.3e-1} & \num{6.5e-2} \\
$\Delta \chi_1 / \chi_1$ & \num{3.9} & \num{2.8e-2} & \num{3.3e-1} & \num{8.6e-4} \\
$\Delta \chi_2 / \chi_2$ & \num{3.8} & \num{3.0e-2} & \num{4.6e-1} & \num{1.4e-3} \\
\end{tabular}
\end{ruledtabular}
\caption{Medians of the uncertainty distributions of each parameter for LIGO/Virgo, ET, LISA and the network ET+LISA, assuming \num{10} years of LISA mission time. Here $\mathcal{M}$ represents the \emph{chirp mass} of the system, defined as $\mathcal{M} = (M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$, and its uncertainty is obtained through error propagation.}
\label{tab:medians}
\end{table}
\subsection{Inference of cosmological parameters} \label{inference}
Among the many standard siren candidates, we impose a few cuts on the original binary catalogs. First, we require that the luminosity distance of the event is determined to within $\num{10}\%$ precision. Then we select only the events localized with $\Delta\Omega < \SI{1}{\square\degree}\,$~\footnote{The likelihood computation becomes too expensive for GW events with more than $10^4$ hosts. Those events are generally poorly localized and their large 3D volumes result in a rather uninformative host redshift distribution.}. Moreover, due to the limited light cone extension, we require that the maximum redshift of the error box, when varying the cosmology, is smaller than \num{1}. Considering \num{4} (\num{10}) years of LISA observations, in 30 independent realizations of the Universe, from a total of \num{1223} (\num{4023}) binaries we are left with \num{222} (\num{510}) GW events, as reported in \cref{tab:number_of_events_for_inference}.
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{l c c c}
Number of events: & Before cuts & After cuts & Per realization \\
\num{4} years & \num{1223} & \num{222} & $\sim 7$ \\
\num{10} years & \num{4023} & \num{510} & $\sim 17$ \\
\end{tabular}
\end{ruledtabular}
\caption{Number of events expected before and after the requirements imposed on the above-gap/sSFR-sZ binary catalog, for both LISA mission lifetimes. The rightmost column shows the average number of events per realization after the selection cuts.}
\label{tab:number_of_events_for_inference}
\end{table}
\subsubsection{Preliminary tests} \label{pre_test}
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\textwidth]{zobs_zcosmo.pdf}
\includegraphics[width=0.4\textwidth, height=0.39\textwidth]{fig_pec_vel.pdf}
\caption{Left panel: cosmological parameter measurements as a function of the number of closest GW events used for the inference, for a Universe with and without peculiar velocities (labeled $z_{obs}$ and $z_{cosmo}$, respectively; colors as in legend). The top plot shows the results for $h$, while the bottom plot displays the $\Omega_{m}$ estimates (median values with error bars representing the $\num{1}\sigma$ confidence level). The red dashed line represents the true value of each parameter. Right panel: distribution of peculiar velocities in our light cone for several redshift bins. Histograms from bottom to top are shown for incremental \num{0.05} redshift bins, starting from [\num{0}, \num{0.05}]. To guide the eye, solid lines represent Gaussians with standard deviation $\sigma_{v_p}=500\,$km s$^{-1}$, as assumed in our study.}
\label{fig:preliminary_test}
\end{figure*}
Since the analysis over a large number of events can be computationally costly, we start by exploring a small fraction of the total \num{10}-year catalog of events. We measure $h$ and $\Omega_{m}$ by considering the closest GW events first, adding progressively farther events to the inference. \par
Even though the results are consistent with the true cosmology, we find estimates that are systematically biased toward large values of $h$ and small values of $\Omega_{m}$, due to the correlation between the parameters. However, as the number of events increases, this effect is mitigated and eventually disappears: the algorithm returns well-centered Gaussian posterior distributions, and the accuracy of the $h$ ($\Omega_{m}$) measurements evolves from \num{0.63}$\sigma$ (\num{0.28}$\sigma$) to \num{0.29}$\sigma$ (\num{0.24}$\sigma$). To understand whether this behavior is due to low redshift events only, we focus on these events in the following discussion. \par
The inference relies on the determination of the posterior distribution of each GW event redshift. If this step produces misleading information, the bias is propagated to the estimates of the cosmological parameters. To leading order, if the observed redshift is underestimated (overestimated), the $h$ posterior will be biased toward low (high) values and the $\Omega_{m}$ posterior toward high (low) values. The accuracy of our results suggests that the closer the GW event is, the more its observed redshift is overestimated. In light of these considerations, peculiar velocities may play a role in significantly altering the apparent redshift of close objects compared to the cosmological one. \par
As already mentioned in \cref{errbox}, the galaxy catalog comes with both the geometrical and the observed redshift of each object, thus allowing us to assess the impact of peculiar velocities on the analysis.
Through the same procedure as above, we therefore infer the cosmological parameters in a Universe without peculiar velocities. The results are shown in the left panel of \cref{fig:preliminary_test}, where we directly compare the estimates from a static ($z_{cosmo}$) and a dynamic ($z_{obs}$) Universe as a function of the number of closest GW events. As we can see, the bias completely vanishes already with \num{10} events and, as the number of mergers increases, the two analyses converge to the true cosmology. \par
Thus, peculiar velocities play a major role in the inference of cosmological parameters with low-redshift events. In fact, the right panel of \cref{fig:preliminary_test} shows that the $v_p$ distribution of our light cone galaxies is not symmetric around zero at low redshifts. The quantity $v_p$ is computed along the radial direction, with positive values corresponding to a drift away from the observer. There is a clear preference for positive $v_p$ values for galaxies at $0.05<z<0.1$, which is where most of the closest GW events in our catalogs occur, thus explaining the bias. \par
It is interesting that our light cone features a large portion of galaxies that move preferentially away from the observer. One possible cause is the limited solid angle covered by our galaxy catalog. Although a full-sky light cone would mitigate this issue, its generation would be extremely computationally expensive. Here we just note that the bias in the parameters was found by considering the closest events of the full GW source catalog, which comprises 30 realizations of the experiment. In a single realization there will be perhaps only one such close source. In fact, as we will see in \cref{real_observations}, the bias does not systematically appear in that case.
Moreover, in a real experiment, one could in principle further mitigate any issue related to peculiar velocities by modeling the bulk motions as a function of redshift and updating the inference model to take into account any local anisotropy.
\subsubsection{Individual realizations of the full experiment}\label{real_observations}
\begin{figure*}
\centering
\includegraphics[width=0.4\textwidth]{H1_tgw4yr_30_realizations_massively_parallel.pdf}
\includegraphics[width=0.4\textwidth]{H1_tgw4yr_averaged_corner_massively_parallel.pdf} \\
\includegraphics[width=0.4\textwidth]{H1_tgw10yr_30_realizations_massively_parallel.pdf}
\includegraphics[width=0.4\textwidth]{H1_tgw10yr_averaged_corner_massively_parallel.pdf}
\caption{Inference results from the \num{30} independent realizations at the $68\%$ confidence level. The top row refers to \num{4} years of LISA observations, while the bottom row refers to \num{10} years. In the plots in the left column, the blue dots denote the medians of the posterior distributions, while the red dashed lines represent the true value of each parameter. The right column displays the joint posterior distributions averaged over the \num{30} independent realizations. Here the gray color scale distinguishes low probability regions (light) from high probability regions (dark); the black dashed lines mark, from left to right, the \num{16}th, \num{50}th and \num{84}th percentiles, and the blue lines highlight the true cosmology. Plots in the right column have been made using \cite{corner}.}
\label{fig:results}
\end{figure*}
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c c}
\multirow{2}*{LISA mission time} & \multirow{2}*{Realization} & \multirow{2}*{GW events} & \multicolumn{3}{c}{$h$} & \multicolumn{3}{c}{$\Omega_{m}$} \\
& & & $1\sigma$ & $\%$ & $A$ & $1\sigma$ & $\%$ & $A$ \\
\hline
\multirow{30}*{\num{4} years} & $1$ & $8$ & $0.736^{+0.011}_{-0.012}$ & $1.5$ & $0.5\sigma$ & $0.224^{+0.075}_{-0.063}$ & $30.7$ & $0.4\sigma$ \\
& $2$ & $8$ & $0.730^{+0.019}_{-0.017}$ & $2.5$ & $< 0.1\sigma$ & $0.259^{+0.097}_{-0.085}$ & $35.3$ & $0.1\sigma$ \\
& $3$ & $8$ & $0.766^{+0.062}_{-0.049}$ & $7.3$ & $0.6\sigma$ & $0.321^{+0.134}_{-0.185}$ & $49.7$ & $0.4\sigma$ \\
& $4$ & $8$ & $0.727^{+0.017}_{-0.016}$ & $2.3$ & $0.2\sigma$ & $0.261^{+0.106}_{-0.112}$ & $41.7$ & $0.1\sigma$ \\
& $5$ & $8$ & $0.747^{+0.018}_{-0.018}$ & $2.4$ & $1.0\sigma$ & $0.183^{+0.075}_{-0.065}$ & $38.5$ & $1.0\sigma$ \\
& $6$ & $8$ & $0.727^{+0.014}_{-0.014}$ & $1.9$ & $0.2\sigma$ & $0.309^{+0.100}_{-0.084}$ & $29.8$ & $0.6\sigma$ \\
& $7$ & $8$ & $0.722^{+0.013}_{-0.013}$ & $1.8$ & $0.6\sigma$ & $0.264^{+0.093}_{-0.100}$ & $36.5$ & $0.1\sigma$ \\
& $8$ & $8$ & $0.715^{+0.013}_{-0.012}$ & $1.8$ & $1.2\sigma$ & $0.329^{+0.080}_{-0.073}$ & $23.3$ & $1.0\sigma$ \\
& $9$ & $8$ & $0.731^{+0.013}_{-0.012}$ & $1.7$ & $0.1\sigma$ & $0.220^{+0.073}_{-0.083}$ & $35.5$ & $0.4\sigma$ \\
& $10$ & $8$ & $0.745^{+0.011}_{-0.012}$ & $1.5$ & $1.3\sigma$ & $0.156^{+0.071}_{-0.063}$ & $42.9$ & $1.4\sigma$ \\
& $11$ & $8$ & $0.736^{+0.013}_{-0.010}$ & $1.6$ & $0.6\sigma$ & $0.272^{+0.200}_{-0.124}$ & $59.6$ & $0.1\sigma$ \\
& $12$ & $8$ & $0.724^{+0.014}_{-0.013}$ & $1.9$ & $0.4\sigma$ & $0.275^{+0.092}_{-0.089}$ & $32.9$ & $0.3\sigma$ \\
& $13$ & $7$ & $0.732^{+0.013}_{-0.014}$ & $1.9$ & $0.2\sigma$ & $0.240^{+0.087}_{-0.066}$ & $32.0$ & $0.1\sigma$ \\
& $14$ & $7$ & $0.726^{+0.074}_{-0.027}$ & $7.0$ & $0.1\sigma$ & $0.308^{+0.131}_{-0.109}$ & $39.0$ & $0.5\sigma$ \\
& $15$ & $7$ & $0.735^{+0.013}_{-0.013}$ & $1.8$ & $0.4\sigma$ & $0.238^{+0.164}_{-0.120}$ & $59.5$ & $0.1\sigma$ \\
& $16$ & $7$ & $0.732^{+0.014}_{-0.017}$ & $2.1$ & $0.1\sigma$ & $0.229^{+0.115}_{-0.082}$ & $43.0$ & $0.2\sigma$ \\
& $17$ & $7$ & $0.722^{+0.012}_{-0.012}$ & $1.7$ & $0.6\sigma$ & $0.280^{+0.094}_{-0.092}$ & $33.3$ & $0.3\sigma$ \\
& $18$ & $7$ & $0.732^{+0.014}_{-0.013}$ & $1.8$ & $0.1\sigma$ & $0.279^{+0.081}_{-0.079}$ & $29.0$ & $0.4\sigma$ \\
& $19$ & $7$ & $0.720^{+0.028}_{-0.021}$ & $3.4$ & $0.4\sigma$ & $0.327^{+0.112}_{-0.130}$ & $36.9$ & $0.6\sigma$ \\
& $20$ & $7$ & $0.718^{+0.019}_{-0.015}$ & $2.4$ & $0.7\sigma$ & $0.319^{+0.118}_{-0.139}$ & $40.4$ & $0.5\sigma$ \\
& $21$ & $7$ & $0.827^{+0.026}_{-0.044}$ & $4.2$ & $2.8\sigma$ & $0.405^{+0.075}_{-0.142}$ & $26.9$ & $1.4\sigma$ \\
& $22$ & $7$ & $0.707^{+0.012}_{-0.008}$ & $1.5$ & $2.2\sigma$ & $0.416^{+0.059}_{-0.090}$ & $17.8$ & $2.2\sigma$ \\
& $23$ & $7$ & $0.722^{+0.017}_{-0.016}$ & $2.2$ & $0.5\sigma$ & $0.295^{+0.107}_{-0.091}$ & $33.5$ & $0.5\sigma$ \\
& $24$ & $7$ & $0.713^{+0.015}_{-0.010}$ & $1.7$ & $1.4\sigma$ & $0.398^{+0.070}_{-0.116}$ & $23.4$ & $1.6\sigma$ \\
& $25$ & $7$ & $0.707^{+0.011}_{-0.008}$ & $1.3$ & $2.5\sigma$ & $0.430^{+0.051}_{-0.084}$ & $15.7$ & $2.7\sigma$ \\
& $26$ & $7$ & $0.732^{+0.012}_{-0.012}$ & $1.6$ & $0.2\sigma$ & $0.232^{+0.065}_{-0.057}$ & $26.2$ & $0.3\sigma$ \\
& $27$ & $7$ & $0.754^{+0.042}_{-0.039}$ & $5.3$ & $0.6\sigma$ & $0.240^{+0.142}_{-0.121}$ & $54.6$ & $0.1\sigma$ \\
& $28$ & $7$ & $0.715^{+0.019}_{-0.015}$ & $2.4$ & $0.9\sigma$ & $0.340^{+0.097}_{-0.115}$ & $31.3$ & $0.8\sigma$ \\
& $29$ & $7$ & $0.733^{+0.085}_{-0.022}$ & $7.3$ & $0.1\sigma$ & $0.276^{+0.128}_{-0.117}$ & $44.4$ & $0.2\sigma$ \\
& $30$ & $7$ & $0.737^{+0.011}_{-0.012}$ & $1.5$ & $0.6\sigma$ & $0.198^{+0.079}_{-0.065}$ & $36.4$ & $0.7\sigma$ \\
\hline
\multirow{30}*{\num{10} years} & $1$ & $17$ & $0.726^{+0.009}_{-0.010}$ & $1.3$ & $0.4\sigma$ & $0.275^{+0.062}_{-0.050}$ & $22.2$ & $0.5\sigma$ \\
& $2$ & $17$ & $0.734^{+0.012}_{-0.014}$ & $1.7$ & $0.3\sigma$ & $0.229^{+0.086}_{-0.062}$ & $32.4$ & $0.3\sigma$ \\
& $3$ & $17$ & $0.721^{+0.010}_{-0.010}$ & $1.4$ & $0.9\sigma$ & $0.297^{+0.056}_{-0.055}$ & $18.9$ & $0.8\sigma$ \\
& $4$ & $17$ & $0.725^{+0.008}_{-0.009}$ & $1.2$ & $0.5\sigma$ & $0.292^{+0.067}_{-0.052}$ & $20.4$ & $0.7\sigma$ \\
& $5$ & $17$ & $0.723^{+0.014}_{-0.015}$ & $2.0$ & $0.5\sigma$ & $0.316^{+0.104}_{-0.085}$ & $28.9$ & $0.7\sigma$ \\
& $6$ & $17$ & $0.719^{+0.011}_{-0.011}$ & $1.6$ & $1.0\sigma$ & $0.320^{+0.065}_{-0.057}$ & $19.1$ & $1.1\sigma$ \\
& $7$ & $17$ & $0.721^{+0.011}_{-0.012}$ & $1.6$ & $0.8\sigma$ & $0.312^{+0.089}_{-0.068}$ & $25.3$ & $0.8\sigma$ \\
& $8$ & $17$ & $0.731^{+0.010}_{-0.010}$ & $1.7$ & $0.1\sigma$ & $0.250^{+0.056}_{-0.052}$ & $21.6$ & $< 0.1\sigma$ \\
& $9$ & $17$ & $0.720^{+0.013}_{-0.012}$ & $1.7$ & $0.1\sigma$ & $0.262^{+0.071}_{-0.056}$ & $24.3$ & $0.2\sigma$ \\
& $10$ & $17$ & $0.738^{+0.010}_{-0.010}$ & $1.4$ & $0.8\sigma$ & $0.191^{+0.048}_{-0.046}$ & $24.6$ & $1.3\sigma$ \\
& $11$ & $17$ & $0.726^{+0.010}_{-0.009}$ & $1.4$ & $0.4\sigma$ & $0.286^{+0.056}_{-0.061}$ & $20.4$ & $0.6\sigma$ \\
& $12$ & $17$ & $0.731^{+0.034}_{-0.013}$ & $3.2$ & $< 0.1\sigma$ & $0.289^{+0.103}_{-0.059}$ & $28.1$ & $0.5\sigma$ \\
& $13$ & $17$ & $0.730^{+0.011}_{-0.011}$ & $1.5$ & $< 0.1\sigma$ & $0.257^{+0.060}_{-0.051}$ & $21.4$ & $0.1\sigma$ \\
& $14$ & $17$ & $0.721^{+0.012}_{-0.013}$ & $1.7$ & $0.8\sigma$ & $0.311^{+0.083}_{-0.066}$ & $23.9$ & $0.8\sigma$ \\
& $15$ & $17$ & $0.729^{+0.009}_{-0.010}$ & $1.3$ & $0.1\sigma$ & $0.254^{+0.061}_{-0.049}$ & $21.7$ & $0.1\sigma$ \\
& $16$ & $17$ & $0.733^{+0.010}_{-0.010}$ & $1.3$ & $0.3\sigma$ & $0.242^{+0.049}_{-0.045}$ & $19.4$ & $0.2\sigma$ \\
& $17$ & $17$ & $0.719^{+0.012}_{-0.011}$ & $1.7$ & $0.9\sigma$ & $0.324^{+0.074}_{-0.078}$ & $23.6$ & $1.0\sigma$ \\
& $18$ & $17$ & $0.838^{+0.016}_{-0.106}$ & $7.3$ & $1.8\sigma$ & $0.359^{+0.121}_{-0.109}$ & $32.0$ & $0.9\sigma$ \\
& $19$ & $17$ & $0.725^{+0.009}_{-0.009}$ & $1.2$ & $0.5\sigma$ & $0.268^{+0.051}_{-0.044}$ & $17.7$ & $0.4\sigma$ \\
& $20$ & $17$ & $0.729^{+0.010}_{-0.010}$ & $1.4$ & $0.1\sigma$ & $0.268^{+0.060}_{-0.050}$ & $20.5$ & $0.3\sigma$ \\
& $21$ & $17$ & $0.729^{+0.013}_{-0.014}$ & $1.9$ & $0.1\sigma$ & $0.252^{+0.061}_{-0.053}$ & $22.8$ & $< 0.1\sigma$ \\
& $22$ & $17$ & $0.730^{+0.008}_{-0.008}$ & $1.1$ & $<0.1\sigma$ & $0.277^{+0.041}_{-0.038}$ & $14.3$ & $0.7\sigma$ \\
& $23$ & $17$ & $0.740^{+0.014}_{-0.013}$ & $1.8$ & $0.8\sigma$ & $0.220^{+0.062}_{-0.057}$ & $27.0$ & $0.5\sigma$ \\
& $24$ & $17$ & $0.727^{+0.010}_{-0.010}$ & $1.4$ & $0.3\sigma$ & $0.260^{+0.051}_{-0.047}$ & $18.8$ & $0.2\sigma$ \\
& $25$ & $17$ & $0.745^{+0.016}_{-0.017}$ & $2.2$ & $0.9\sigma$ & $0.183^{+0.079}_{-0.069}$ & $40.7$ & $0.9\sigma$ \\
& $26$ & $17$ & $0.728^{+0.009}_{-0.009}$ & $1.2$ & $0.2\sigma$ & $0.282^{+0.061}_{-0.055}$ & $20.5$ & $0.6\sigma$ \\
& $27$ & $17$ & $0.725^{+0.009}_{-0.010}$ & $1.3$ & $0.5\sigma$ & $0.284^{+0.053}_{-0.046}$ & $17.4$ & $0.7\sigma$ \\
& $28$ & $17$ & $0.714^{+0.009}_{-0.009}$ & $1.3$ & $1.7\sigma$ & $0.342^{+0.057}_{-0.053}$ & $16.1$ & $1.7\sigma$ \\
& $29$ & $17$ & $0.697^{+0.034}_{-0.012}$ & $3.3$ & $1.4\sigma$ & $0.416^{+0.063}_{-0.142}$ & $24.7$ & $1.6\sigma$ \\
& $30$ & $17$ & $0.733^{+0.016}_{-0.013}$ & $2.0$ & $0.2\sigma$ & $0.259^{+0.092}_{-0.060}$ & $29.4$ & $0.1\sigma$ \\
\end{tabular}
\end{ruledtabular}
\caption{The $h$ and $\Omega_{m}$ estimates for all the inference runs. The columns report: the estimates at the $68\%$ confidence level ($1\sigma$), the precision of the measurement ($\%$), and the accuracy of the estimate, expressed as the offset from the true value in units of $\sigma$ ($A$).}
\label{tab:results}
\end{table*}
The nominal LISA mission lifetime is set to be \num{4} years, with a possible extension of operations up to \num{10} years \cite{amaro2017laser}. To produce realistic samples of GW events for a given mission time, we divide the original catalog into independent subsets, producing:
\begin{itemize}
\item \num{30} realizations of either 7 or 8 events each for \num{4} years of observations;
\item \num{30} realizations of 17 events each for \num{10} years of observations.
\end{itemize}
The sizes of both samples are easily manageable from the computational point of view. We therefore consider all individual realizations and then average the posterior distributions from the \num{30} subsets to characterize the accuracy and the precision of this method. \par
The results for the inference of $h$ and $\Omega_{m}$ from all the runs are presented in \cref{fig:results,tab:results}. We observe that the bias due to peculiar velocities does not pose a significant issue in individual realizations of the experiment because of the very small number of low redshift events. In the vast majority of the cases, the inference yields measurements of $h$ that are both precise and accurate. By looking at \cref{tab:results} we can in fact appreciate that $h$ is generally measured to better than $2\%$ ($68\%$ credible region), often around $1\%$ in the 10-year case, and the true value generally lies well within the $68\%$ credible region, implying no significant biases. $\Omega_m$ is far less constrained, generally at the 20--30\% level. \par
There are, however, a couple of realizations that yield severely inconsistent results. These outliers are realization 21 in the 4-year case and realization 18 in the 10-year case (i.e.~\num{1} of our \num{30} realizations in each case). In these realizations, both $h$ and $\Omega_m$ rail against the upper end of the prior range, a behavior that is not explained by the expected bias due to peculiar velocities. The effect of this can also be seen in the averaged posteriors shown in the right panels of \cref{fig:results}. In the 4-year case the averaged marginalized posteriors of $h$ and $\Omega_m$ display long tails extending to the upper bound of the priors, whereas in the 10-year case this effect is largely suppressed. However, in this case a mild secondary peak appears at the boundary of the prior and leads to a slightly asymmetric distribution. Even though the averaged estimates we obtain are largely consistent with the true cosmology, we now focus on the physical meaning (if any) of these small deviations from the expected results.
\subsubsection{Origin of the problematic realizations} \label{origin_prob_real}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Before_after_event_174_realization_18.pdf}
\caption{The impact of a very informative event (VIE) on the $h$ and $\Omega_{m}$ posterior distributions of realization \num{18} (\num{10} years). The blue solid lines mark the true cosmology.}
\label{fig:before_after_event_174}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{H1_tgw10yr_averaged_corner_massively_parallel_new_prior_ranges.pdf}
\caption{Averaged posterior distribution over \num{30} different realizations assuming different prior ranges on $h$ and $\Omega_{m}$, as discussed in \cref{origin_prob_real}.}
\label{fig:new_prior_ranges}
\end{figure}
We extensively examined the bad \num{10}-year realization and did not find any specific pathology in the error box construction, nor in the inference procedure.
The culprit of the bias appears to lie in the relation between $\text{H}_{0}$, $\Omega_{m}$ and redshift that enters the single-event likelihood \cref{eqn:single_GW_likelihood}, which is proportional to the number of galaxies at a given redshift. Therefore, for a galaxy distribution uniform in comoving volume, the weight is roughly proportional to the comoving volume shell, naturally favoring redshifts approaching $z_{\rm max}$. If the information enclosed in the clustering is not strong enough, this ``high-$z$'' solution competes with the correct one and can eventually dominate. This issue is further exacerbated by the $\text{H}_{0} - \Omega_{m}$ degeneracy. As seen from the 2D posteriors shown in \cref{fig:results}, the two cosmological parameters are partially degenerate, showing a clear anticorrelation. Therefore, a given $(d_{L}, z)$ pair can be produced by a continuum of $(\text{H}_{0}, \Omega_{m})$ values following this degeneracy. For a rectangular uniform prior on the cosmological parameters, while galaxies in the middle of the error box are consistent with a degenerate set of cosmologies, those at the boundaries ($z_{\rm min}$ and $z_{\rm max}$) are only consistent with the corners of the prior. Since many more galaxies accumulate toward $z_{\rm max}$, this creates an artificial spike at the top right corner of the parameter space. So, in the absence of events with zero support around $z_{\rm max}$, this second mode of the posterior cannot be suppressed and eventually dominates, essentially by design of the likelihood function. \par
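A toy numerical check of this volume effect, assuming flat $\Lambda$CDM with the Millennium parameters: for galaxies distributed uniformly in comoving volume, the implied redshift weight $dV_c/dz \propto D_c(z)^2/H(z)$ grows steeply toward $z_{\rm max}$ for $z<1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C, H0, Om = 299792.458, 73.0, 0.25          # km/s, km/s/Mpc
Hz = lambda z: H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)
Dc = lambda z: C * quad(lambda zp: 1.0 / Hz(zp), 0, z)[0]  # Mpc
for z in (0.2, 0.5, 1.0):
    print(z, Dc(z)**2 / Hz(z))  # weight increases monotonically with z
\end{verbatim}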
The above interpretation is corroborated by two tests that we now discuss. The first test was to artificially add to the bad \num{10}-year realization a very informative event (VIE), i.e.~an event with no (or little) support at redshifts outside those allowed by the true cosmology, and in particular with zero support around $z_{\rm max}$. The result of this procedure is shown in \cref{fig:before_after_event_174}. It can be seen that the addition of a VIE kills the high-$z$ solution and allows the recovery of the correct cosmology, albeit with large uncertainties on $\Omega_{m}$. \par
A second test consisted in changing the prior range of the analysis. If our interpretation is correct, the second mode of the solution appearing in \cref{fig:results} should follow the boundary of the prior. \Cref{fig:new_prior_ranges} shows the average posterior over 30 realizations (different from those used to produce \cref{fig:results}) of the experiment with a modified prior range $h\in[0.6,0.95]$ and $\Omega_{m}\in[0.04, 0.4]$. Clearly, the secondary mode follows the boundary of the prior, while the correct solution is consistently recovered regardless of the prior range.
\subsubsection{Mitigation techniques and future investigations}
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{Posterior_evolution_horizontal_realization_18.pdf}
\caption{Evolution of the posterior distribution of the Hubble constant for realization \num{18} (\num{10} years) as the number of events increases by one in each panel, as the panel titles show. Here the red dotted line represents the true value, i.e.~$h=0.73$. In this particular case, the spurious solution overtakes the correct cosmology when the last few events are added to the inference.}
\label{fig:h_posterior_evolution}
\end{figure*}
Although we are focusing on the ``bad outcomes'' of our analysis, it is worth keeping in mind that they involve only a minority of realizations of the experiment. It is nevertheless important to be able to treat these potential issues should they manifest in a future real analysis on actual data. \par
Even without applying any change to the inference procedure, the tests presented above provide practical ways to reject spurious solutions and correctly constrain cosmological parameters. Bad realizations generally display bimodal posteriors. Repeating the analysis with a varying prior range should return a ``steady mode'' around the correct solution and a dominant spurious peak following the boundary. Moreover, events can be analyzed individually, and the cosmological inference can be performed progressively by adding them one by one. To exclude side effects from peculiar velocities, we perform this test in a Universe without them. As we show in \cref{fig:h_posterior_evolution} for realization \num{18}, we found that in our two bad realizations, when this procedure is employed, the joint posterior initially builds up around the correct solution thanks to the clustering information. This mode, however, is eventually superseded by the spurious one in the long run if no event lacks host-galaxy support around $z_{\rm max}$. These two checks (varying prior and incremental analysis) could allow us to reject the spurious solution and identify the correct cosmological parameters. \par
It would obviously be desirable to develop an analysis that prevents side effects like the ones identified here. For example, one could reweight or change the shape of the prior so that an uninformative experiment returns a flat posterior on the parameters, despite the fact that the assumption of a cosmological model intrinsically carries some information on the correlation structure of the parameters.
Another aspect that should be accounted for concerns the approach adopted to place each GW event in the Millennium Universe, which we discussed in \cref{errbox}. Within our work, the binary population and the galaxy catalog are independent entities, i.e.~binaries merge at points in space where there may not be any galaxy at all: to address the problem, we randomly selected the ``true host'' within the redshift interval associated to the true cosmology and consistent with the $d_{L}$ uncertainty of the GW measurement. This ensures that an actual galaxy can be referred to as the host of the GW event. The random selection should ensure that galaxies in denser regions are more likely to be drawn than others. However, the high performance of the multiband approach squeezes the pool of true host candidates into a narrow redshift interval, which often lacks relevant clustering properties. Therefore, when we extend the redshift interval to take into account the prior ranges on the cosmological parameters, we are likely to bring into the error box much denser regions that were artificially excluded from the host selection. In practice, one should start from a given light cone and place GW events randomly within it, to ensure that the distribution of events traces the 3D clustering of galaxies. We plan to explore different experiment designs and likelihood forms in future work. \par
Besides these adjustments, we remark once again that the analysis performed here allows a robust determination of $\text{H}_{0}$ within \num{1}-\num{2}$\%$ in the vast majority of the cases. Moreover, even when the inference is biased, the spurious solution can be identified and the correct cosmology recovered.
\section{Conclusions} \label{end}
In this paper we explored the possibility of exploiting multiband GW astronomy to use SBHBs above the pair-instability mass gap as effective dark standard sirens. \par
Massive SBHBs forming from progenitor stars above the pair-instability mass gap are in fact anticipated to be loud multiband sources, detectable both by LISA and by 3G detectors, for which we considered ET as an example. By combining observations in the two bands, the source 3D sky localization can be pinned down to an accuracy far better (by three orders of magnitude, on average) than with the individual probes alone. This allows an efficient probabilistic identification of the host among all galaxies within the error volume, enabling statistical inference on the cosmological parameters. \par
We exploited this idea to constrain the Hubble constant $\text{H}_{0}$ and the matter density fraction $\Omega_{m}$ under the assumption of a flat $\Lambda \text{CDM}$ Universe. We performed the parameter estimation for each detector under the Fisher information matrix formalism, and then combined the respective uncertainties to produce accurate estimates of the source parameters, specifically the 3D localization error volume in the sky. We then relied on the Millennium simulation to model the galaxy distribution across one-eighth of the sky up to $z=1$. We placed the 3D volumes in this synthetic sky, thus creating for each of them a sample of host candidates consistent with the clustering properties of the $\Lambda \text{CDM}$ Universe. Finally, we used these data to perform Bayesian inference on $\text{H}_{0}$ and $\Omega_{m}$ employing a nested sampling algorithm. \par
By analyzing a large catalog of GW events, we found that peculiar velocities might be a source of systematic error, depending on the direction of the bulk motion of galaxies. This effect, however, mostly affects low redshift events, and progressively vanishes when farther away mergers are added to the inference. Moreover, it may be due to the limited solid angle coverage of the adopted light cone, and a full-sky catalog may remove the problem in the first place. \par
We then performed \num{30} realizations of the sample of observed SBHBs assuming either \num{4} or \num{10} years of LISA operations. By analyzing them, we found that multiband observations of ``above-gap'' SBHBs can provide a competitive measurement of the cosmological parameters $\mathcal{S} = \{h, \Omega_{m}\}$. In fact, assuming \num{4} (\num{10}) years of observation, the Hubble constant is determined down to \num{1.5}$\%$ (\num{1.1}$\%$) precision, while $\Omega_{m}$ is measured at the \num{26.2}$\%$ (\num{14.3}$\%$) level. In general, the two parameters are estimated to better than ${\sim}2\%$ and ${\sim}30\%$, respectively. \par
We found, however, a couple of realizations yielding biased solutions, favoring $h$ and $\Omega_{m}$ values railing against the upper end of the prior range. We traced the emergence of those solutions back to a combination of factors, including the form of the likelihood, the $h-\Omega_{m}$ degeneracy and the lack of very informative events in those realizations. We notice here that this problem did not appear in the works of \cite{del2018stellar} and \cite{laghi2021gravitational}, who employed the same techniques. This is likely because \cite{del2018stellar} considered ``below-gap'' SBHBs at $z<0.1$; at such low redshifts, the number of galaxies in the error volume is generally much smaller, and it is unlikely that all the events have support around $z_{\rm max}$. On the other hand, \cite{laghi2021gravitational} investigated a limited number of realizations of their experiment, perhaps insufficient to encounter a bad one. By individually analyzing those ``bad realizations'' we provided practical ways to recognize and discard spurious solutions, allowing the recovery of the correct cosmological parameters even in their presence. \par
Nevertheless, the identification of this issue calls for future improvements. Among them, we underline the need to upgrade the design of the simulation so that the binaries of our population merge inside actual galaxies in the first place, as in reality. This can be achieved by consistently assigning a host to each GW event before the Fisher matrix pipeline, thus avoiding the need for an artificial true-host selection. We plan to enhance the hosting probability assignment so as to take into account other important parameters besides the host candidate's sky position (e.g.,~the mass of the galaxy). Furthermore, a reweighting or a change of the shape of the prior can be folded into the analysis, ensuring that an uninformative experiment returns a flat posterior on the cosmological parameters. We defer a detailed investigation of these possible improvements of the analysis to future work. \par
Finally, our work relies on the assumption that SBHBs above the mass gap come only from the isolated evolution channel. However, BHs inside and above the mass gap can also form in dense environments thanks to the close interplay between dynamics and stellar evolution \cite{2020ApJ...903...45K, 2020MNRAS.497.1043D}. These additional sources would increase the number of detected systems that could be exploited in our approach, allowing us to constrain the cosmological parameters even further.
\section*{Acknowledgments}
A.S. acknowledges financial support provided under the European Union’s H2020 ERC Consolidator Grant ``Binary Massive Black Hole Astrophysics'' (B Massive, Grant Agreement No.: 818691).
\input{appendix}
\newpage
\section{Introduction}
The fourth-order nonlinear Schr\"odinger (4NLS) equation or biharmonic cubic nonlinear \linebreak Schr\"odinger equation
\begin{equation}
\label{fourtha}
i\partial_tu +\partial_x^2u-\partial_x^4u=\lambda |u|^2u,
\end{equation}
was introduced by Karpman \cite{Karpman} and Karpman and Shagalov \cite{KarSha} to take into account the role of small fourth-order dispersion terms in the propagation of intense laser beams in a bulk medium with Kerr nonlinearity. Equation \eqref{fourtha} arises in many scientific fields, such as quantum mechanics, nonlinear optics and plasma physics, and has been intensively studied (see \cite{Ben,Karpman} and references therein).
In the past twenty years, such 4NLS equations have been deeply studied from different mathematical points of view. For example, Fibich \textit{et al.} \cite{FiIlPa} studied various properties of the equation in the subcritical regime, with part of their analysis relying on very interesting numerical developments. Well-posedness and existence of solutions have been established (see, for instance, \cite{tsutsumi}) by means of the energy method, harmonic analysis, etc.
\subsection{Dispersive models on star graphs}
The study of nonlinear dispersive models on metric graphs has attracted a lot of attention from mathematicians, physicists, chemists and engineers; see \cite{BK, BlaExn08, BurCas01, Mug15} and references therein for details. In particular, the prototypical framework (graph geometry) for the description of these phenomena has been a {\it star graph} $\mathcal G$, namely a metric graph with $N$ half-lines of the form $(0, +\infty)$ connected at a common vertex $\nu=0$, together with a nonlinear equation suitably defined on the edges, such as the nonlinear Schr\"odinger equation (see Adami {\it{et al.}} \cite{AdaNoj14, AdaNoj15} and Angulo and Goloshchapova \cite{AngGol17a, AngGol17b}). We note that, with the introduction of nonlinearities in the dispersive models, the network provides a nice setting in which one can look for interesting soliton propagation and nonlinear dynamics in general. A central point that makes this analysis delicate is the presence of a vertex where the underlying one-dimensional star graph bifurcates (or multi-bifurcates, in a general metric graph).
Looking at other nonlinear dispersive systems on graph structures, we have some interesting results. For example, in connection with well-posedness theory, the second author \cite{Cav} studied the local well-posedness of the Cauchy problem associated to the Korteweg-de Vries equation on a metric star graph with three semi-infinite edges, given by one negative half-line and two positive half-lines attached to a common vertex $\nu=0$ (the $\mathcal Y$-junction framework). Another nonlinear dispersive equation, the Benjamin--Bona--Mahony (BBM) equation, is treated in \cite{bona,Mugnolobbm}. More precisely, Bona and Cascaval \cite{bona} obtained local well-posedness in the Sobolev space $H^1$, and Mugnolo and Rault \cite{Mugnolobbm} showed the existence of traveling waves for the BBM equation on graphs. Using a different approach, Ammari and Crépeau \cite{AmCr1} derived well-posedness and stabilization results for the Benjamin--Bona--Mahony equation on a star-shaped network with bounded edges.
Recently, in \cite{CaCaGa1}, the authors presented answers to some questions left open in \cite{CaCaGa} concerning the study of the cubic fourth-order Schr\"odinger equation on a star graph structure $\mathcal{G}$. Precisely, they considered $\mathcal{G}$ composed of $N$ edges parameterized by half-lines $(0,+\infty)$ attached to a common vertex $\nu$. With this structure, that work studied the well-posedness of a dispersive model on star graphs with three appropriate vertex conditions.
Regarding control theory and inverse problems, let us cite some previous works on star graphs. Ignat \textit{et al.} \cite{Ignat2011} worked on the inverse problem for the heat equation and the Schr\"odinger equation on a tree. Later on, Baudouin and Yamamoto \cite{Baudouin} proposed a unified, and simpler, method to study the inverse problem of determining a coefficient. Stabilization and boundary controllability results for the KdV equation on star-shaped graphs were also proved in \cite{AmCr,Cerpa,Cerpa1}. Finally, Duca \cite{Duca,Duca1} recently showed the controllability of the bilinear Schrödinger equation defined on a compact graph. In both works, with different main goals, the author established control properties for this system.
We caution that this is only a small sample of the extant work on graph structures for partial differential equations.
\subsection{Functional framework}
Let us define the graph ${\mathcal{G}}$ given by the central node $0$ and edges $I_j$, for $j=1,2,\cdots, N$. Thus, for any function $f: {\mathcal{G}}\rightarrow \mathbb C$, we set $f_j= f|_{I_j},$
$$
L^2({\mathcal{G}}):= \bigoplus_{j=1}^{N}L^2(I_j):=\left\lbrace f: {\mathcal{G}} \rightarrow \mathbb C: f_j \in L^2(I_j), j \in \{1,2,\ldots, N\} \right\rbrace, \quad \|f\|_2=\left( \sum_{j=1}^N \|f_j\|_{L^2(I_j)}^2\right)^{1/2}
$$
and
$$
\left( f, g\right)_{L^2({\mathcal{G}})}:= {\text{Re }} \int_{-l_1}^0 f_1(x)\overline{g_1(x)}dx +{\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} f_j(x)\overline{g_j(x)}dx.
$$
Also, we need the following spaces
$$
H^m_0({\mathcal{G}}):= \bigoplus_{j=1}^{N}H^m_0(I_j):=\left\lbrace f: {\mathcal{G}} \rightarrow \mathbb C: f_j \in H^m(I_j), j \in \{1,2,\ldots, N\}\right\rbrace,
$$
where $\partial_x^i f_1(-l_1)=\partial_x^i f_j(l_j)=0$, for $i \in \{0,1,\ldots, m-1\}$ and $j \in \{2,\ldots, N\}$, and $f_1(0)=\alpha_j f_j(0)$, $j \in \{2,\ldots, N\}$, with
$$
\|f\|_{H^m_0({\mathcal{G}})}=\left( \sum_{j=1}^{N} \|f_j\|_{H^m(I_j)}^2\right)^{1/2},
$$
for $m\in {\mathbb N}$, endowed with the natural inner product of $H^m(I_j)$. We often write,
$$
\int_{\mathcal{G}} f d x=\int_{-l_{1}}^{0} f_{1}(x) d x+\sum_{j=2}^{N} \int_{0}^{l_{j}} f_{j}(x) d x
$$
Then the inner products and the norms of the Hilbert spaces $L^2({\mathcal{G}})$ and $H^m_0 ({\mathcal{G}})$ are defined by
\begin{align*}
\langle f, g\rangle_{L^{2}(\mathcal{G})}={\text{Re }}\int_{\mathcal{G}} f(x) \overline{ g(x)} d x\quad \text { and } \quad \|f\|_{L^{2}(\mathcal{G})}^{2} &=\int_{\mathcal{G}}|f|^{2} d x,
\\
\langle f, g\rangle_{H_{0}^{m}(\mathcal{G})}={\text{Re }} \sum_{k\leq m}\int_{\mathcal{G}} \partial^k_x f(x) \overline{ \partial_x^k g(x)} d x \quad \text { and } \quad \|f\|_{H_{0}^{m}(\mathcal{G})}^{2} &=\sum_{k\leq m}\int_{\mathcal{G}}\left|\partial_x^k f\right|^{2} d x.
\end{align*}
We will denote by $H^{-s}({\mathcal{G}})$ the dual of $H^s_0({\mathcal{G}})$. By the Poincaré inequality, it follows that
$$
\|f\|_{L^2({\mathcal{G}})}^2 \leq \frac{L^2}{\pi^2} \|\partial_x f\|_{L^2({\mathcal{G}})}^2, \quad \forall f \in H^1_0({\mathcal{G}}),
$$
where $L=\max_{j=1,2,\ldots,N}\left\lbrace l_j \right\rbrace$. Thus, we have that
\begin{equation}\label{poincare}
\sum_{k=1 }^m \|\partial_x^k f\|_{L^2({\mathcal{G}})}^2 \leq \|f\|_{H_{0}^{m}(\mathcal{G})}^{2} \leq \left( \frac{L^2}{\pi^2}+1\right)\sum_{k=1 }^m \|\partial_x^k f\|_{L^2({\mathcal{G}})}^2.
\end{equation}
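The constant $L^2/\pi^2$ is the reciprocal of the first eigenvalue of $-\partial_x^2$ with homogeneous Dirichlet conditions on an interval of length $L$. The following elementary numerical sketch (ours, for illustration only and not part of any proof; the length and grid size are arbitrary test values) recovers this constant by finite differences on a single interval.
\begin{verbatim}
# Sanity check of the one-dimensional Poincare constant L^2/pi^2:
# the first Dirichlet eigenvalue of -u'' on (0,L) is (pi/L)^2.
import numpy as np

L, n = 2.0, 1000                  # test length and grid size
h = L / (n + 1)
main = 2.0 * np.ones(n) / h**2    # finite-difference matrix for -u''
off = -np.ones(n - 1) / h**2      # with u(0) = u(L) = 0
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.linalg.eigvalsh(A)[0], (np.pi / L)**2)   # both ~ 2.4674
\end{verbatim}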
\subsection{Setting of the problem and main result} Let us now present the problem that we study in this manuscript. In view of the results of \cite{CaCaGa1}, it is natural to ask what happens with the control properties of a linear Schrödinger type system with mixed dispersion on a compact graph structure $\mathcal{G}$ of $N$ edges $e_{j}$ (where $N \in \mathbb{N}^{*}$), of lengths $l_{j}>0$, $j \in\{1, \ldots, N\}$, connected at one vertex, which we assume to be $0$ for all the edges. Precisely, we assume that the first edge $e_{1}$ is parametrized on the interval $I_{1}:=\left(-l_{1}, 0\right)$ and the $N-1$ other edges $e_{j}$ are parametrized on the intervals $I_{j}:=\left(0, l_{j}\right)$. On each edge we pose a linear biharmonic NLS equation (Bi-NLS). On the first edge $(j=1)$ we put no control, and on the other edges $(j=2, \cdots, N)$ we consider Neumann boundary controls (see Fig. \ref{control}).
\begin{figure}[h!]
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.50pt,y=0.50pt,yscale=-1,xscale=1]
\draw (149.5,141) -- (269,141) ;
\draw (269,141) .. controls (269,137.41) and (272.13,134.5) .. (276,134.5) .. controls (279.87,134.5) and (283,137.41) .. (283,141) .. controls (283,144.59) and (279.87,147.5) .. (276,147.5) .. controls (272.13,147.5) and (269,144.59) .. (269,141) -- cycle ;
\draw (135.5,141) .. controls (135.5,137.41) and (138.63,134.5) .. (142.5,134.5) .. controls (146.37,134.5) and (149.5,137.41) .. (149.5,141) .. controls (149.5,144.59) and (146.37,147.5) .. (142.5,147.5) .. controls (138.63,147.5) and (135.5,144.59) .. (135.5,141) -- cycle ;
\draw (281,135.5) -- (409,37.5) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (408,34.5) .. controls (408,30.91) and (411.13,28) .. (415,28) .. controls (418.87,28) and (422,30.91) .. (422,34.5) .. controls (422,38.09) and (418.87,41) .. (415,41) .. controls (411.13,41) and (408,38.09) .. (408,34.5) -- cycle ;
\draw (283,138) -- (458,85.5) ;
\draw (279,147.5) -- (381.5,267.5) ;
\draw (282.5,144) -- (380,151) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (458,85.5) .. controls (458,81.91) and (461.13,79) .. (465,79) .. controls (468.87,79) and (472,81.91) .. (472,85.5) .. controls (472,89.09) and (468.87,92) .. (465,92) .. controls (461.13,92) and (458,89.09) .. (458,85.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (380,151) .. controls (380,147.41) and (383.13,144.5) .. (387,144.5) .. controls (390.87,144.5) and (394,147.41) .. (394,151) .. controls (394,154.59) and (390.87,157.5) .. (387,157.5) .. controls (383.13,157.5) and (380,154.59) .. (380,151) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (381.5,267.5) .. controls (382.68,264.11) and (386.59,262.39) .. (390.24,263.65) .. controls (393.9,264.92) and (395.9,268.7) .. (394.73,272.09) .. controls (393.55,275.48) and (389.64,277.2) .. (385.98,275.94) .. controls (382.33,274.67) and (380.32,270.89) .. (381.5,267.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (162.5,90.5) .. controls (162.5,86.91) and (165.63,84) .. (169.5,84) .. controls (173.37,84) and (176.5,86.91) .. (176.5,90.5) .. controls (176.5,94.09) and (173.37,97) .. (169.5,97) .. controls (165.63,97) and (162.5,94.09) .. (162.5,90.5) -- cycle ;
\draw (163,71) .. controls (163,67.41) and (166.13,64.5) .. (170,64.5) .. controls (173.87,64.5) and (177,67.41) .. (177,71) .. controls (177,74.59) and (173.87,77.5) .. (170,77.5) .. controls (166.13,77.5) and (163,74.59) .. (163,71) -- cycle ;
\draw (127.5,151.9) node [anchor=north west][inner sep=0.75pt] {$-l_{1}$};
\draw (265,150.4) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (413,44.4) node [anchor=north west][inner sep=0.75pt] {$l_{2}$};
\draw (459.5,96.4) node [anchor=north west][inner sep=0.75pt] {$l_{3}$};
\draw (383,162.4) node [anchor=north west][inner sep=0.75pt] {$l_{4}$};
\draw (396.73,275.49) node [anchor=north west][inner sep=0.75pt] {$l_{N}$};
\draw (182,62) node [anchor=north west][inner sep=0.75pt] [align=left] {no control};
\draw (182.5,80) node [anchor=north west][inner sep=0.75pt] [align=left] {control};
\draw (394.48,189.49) node [anchor=north west][inner sep=0.75pt] [rotate=-89.95] [align=left] {$\cdots$ $\cdots$};
\end{tikzpicture}
\caption{A compact graph with $N$ edges}
\label{control}
\end{figure}
Thus, in this work, we consider the following system
\begin{equation}\label{graph}
\begin{cases}
i\partial_t u_j +\partial_x^2 u_j - \partial_x^4 u_j =0, & (x,t)\in I_j \times (0,T),\ j=1,2, ..., N\\
u_j(x,0)=u_{j0}(x), & x\in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with appropriate boundary conditions as follows:
\begin{equation}\label{bound}
\left\lbrace
\begin{split}
&u_1(-l_1,t )=\partial_x u_1(-l_1,t)=0,\\
& u_j(l_j,t)=0, & j \in \{2,3,\cdots, N\},\\
& \partial_x u_j(l_j,t)= h_j(t), &j \in \{2,3,\cdots, N\}, \\
& u_1(0,t)=\alpha_j u_j(0,t), & j \in \{2,3,\cdots, N\},\\
&\partial_x u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x u_j(0,t)}{\alpha_j}, \\
& \partial_x^2 u_1(0,t)= \alpha_j \partial_x^2 u_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Here $u_{j}(x, t)$ is the amplitude of propagation of intense laser beams on the edge $e_{j}$ at position $x \in I_{j}$ and time $t$; $h_{j}=h_{j}(t)$ is the control on the edge $e_{j}\ (j \in\{2, \cdots, N\})$, belonging to $L^{2}(0, T)$, and $\alpha_{j} \ (j \in\{2, \cdots, N\})$ is a positive constant. The initial data $u_{j 0}$ are assumed to be $H^{-2}(\mathcal{G})$ functions of the space variable.
With this framework in hand, our work deals with the following classical control problem.
\vspace{0.2cm}
\noindent\textbf{Boundary controllability problem:}\textit{ For any $T > 0$, $l_j > 0$, $u_{j0}\in H^{-2}(\mathcal{G})$ and $u_{T}\in H^{-2}(\mathcal{G})$, is it possible to find $N-1$ Neumann boundary controls $h_j\in L^2(0,T)$ such that the solution $u$ of \eqref{graph}-\eqref{bound} on the tree-shaped network of $N$ edges (see Fig. \ref{control}) satisfies
\begin{equation}\label{ect}
u(\cdot,0) = u_{0}(\cdot) \quad \text{ and } \quad u(\cdot,T)=u_{T}(\cdot) ?
\end{equation}}
The answer to this question is given by the following result.
\begin{theorem}\label{Th_Control_N}
For $T>0$ and $l_1, l_2, \cdots, l_{N}$ positive real numbers, let us suppose that
\begin{equation}\label{condition1}
T > \sqrt{ \frac{ \overline{L} (L^2+\pi^2)}{\pi^2\varepsilon(1- \overline{L} \varepsilon)}}=:T_{min},
\end{equation}
where
\begin{equation}\label{condition2}
L=\max \left\lbrace l_1, l_2, \cdots, l_{N}\right\rbrace, \quad \overline{L} =\max \left\lbrace 2l_1, \max \left\lbrace l_2,l_3, \cdots, l_N \right\rbrace + l_1\right\rbrace,
\end{equation}
and
\begin{equation}\label{condition3}
0<\varepsilon < \frac{1}{ \overline{L} }.
\end{equation}
Additionally, suppose that the coefficients of the boundary conditions \eqref{bound} satisfy
\begin{equation}\label{putaquepario1}
\sum_{j=2}^{N}\frac{1}{\alpha^2_j}=1 \quad \text{and} \quad \frac{1}{\alpha^2_j}\leq \frac{1}{N-1}, \quad j \in \{2,\ldots,N\}.
\end{equation}
Then for any $u_{0},\ u_{T}\in H^{-2}(\mathcal{G})$, there exist controls $h_j(t)\in L^2(0,T)$, for $j=2,\ldots,N$, such that the unique solution $u(x,t)\in C([0,T];H^{-2}(\mathcal{G}))$ of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, satisfies \eqref{ect}.
\end{theorem}
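Two remarks help to interpret the hypotheses. First, the two relations in \eqref{putaquepario1} together force $\alpha_j=\sqrt{N-1}$ for every $j\in\{2,\ldots,N\}$: a sum of $N-1$ terms, each at most $1/(N-1)$, equals $1$ only if every term equals $1/(N-1)$. Second, minimizing the right-hand side of \eqref{condition1} over the admissible $\varepsilon\in(0,1/\overline{L})$ amounts to maximizing $\varepsilon(1-\overline{L}\varepsilon)$, which is largest at $\varepsilon=1/(2\overline{L})$; this yields the best control time $\frac{2\overline{L}}{\pi}\sqrt{L^2+\pi^2}$ obtainable from the theorem. The following sketch (ours, with hypothetical edge lengths chosen only for illustration) evaluates these quantities.
\begin{verbatim}
# Evaluate T_min from the theorem for test edge lengths; the optimal
# epsilon is 1/(2*Lbar), giving T_min = (2*Lbar/pi)*sqrt(L^2 + pi^2).
import numpy as np

l = np.array([1.0, 0.5, 0.8, 1.2])           # l_1, ..., l_N (test values)
L = l.max()                                  # L = max_j l_j
Lbar = max(2 * l[0], l[1:].max() + l[0])     # as defined in the theorem

def T_min(eps):
    return np.sqrt(Lbar * (L**2 + np.pi**2)
                   / (np.pi**2 * eps * (1 - Lbar * eps)))

eps_star = 1 / (2 * Lbar)
print(T_min(eps_star))                               # ~ 4.71
print(2 * Lbar / np.pi * np.sqrt(L**2 + np.pi**2))   # same value
\end{verbatim}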
\subsection{Outline and structure of the paper} In this article we prove the exact controllability of a Schrödinger type system with mixed dispersion on a star graph structure $\mathcal{G}$ of $N$ edges $e_{j}$ of lengths $l_{j}>0$, $j \in\{1, \ldots, N\}$, connected at one vertex, which we assume to be $0$ for all the edges (see Fig. \ref{control}). Precisely, we prove that solutions of the adjoint system associated with \eqref{graph}, with boundary conditions \eqref{bound}, satisfy conservation laws in $L^2(\mathcal{G})$, $H^1(\mathcal{G})$ and $H^2(\mathcal{G})$ (see Appendix \ref{Sec4}), which are proved \textit{via} Morawetz multipliers. With this in hand, an \textit{observability inequality} associated with the solution of the adjoint system is proved. Here, the condition $T>T_{min}$, where $$T_{min}=\sqrt{ \frac{ \overline{L} (L^2+\pi^2)}{\pi^2\varepsilon(1- \overline{L} \varepsilon)}},$$ is crucial to prove the result.
\begin{remark}Let us give some remarks in order.
\begin{itemize}
\item[1.] It is important to point out that the transmission conditions at the central node $0$ are inspired by the recent papers \cite{CaCaGa,Cav,GM,MNS}. This is not the only possible choice; the main motivation is that they guarantee uniqueness of the regular solutions of the (Bi-NLS) equation linearized around $0$.
\item[2.] An important fact is that we are able to deal with the mixed dispersion in the system \eqref{graph}, that is, with both Laplacian and bi-Laplacian terms. The Laplacian term introduces an extra difficulty when dealing with the adjoint system associated with \eqref{graph}. Precisely, if we remove the term $\partial^2_x$ in \eqref{graph} and deal only with the fourth-order Schrödinger equation with the boundary conditions \eqref{bound}, we can use two different constants $\alpha_j$ and $\beta_j$ in the traces of the boundary conditions.
\item[3.] We are able to control $N$ edges with $N-1$ boundary controls; however, we do not have sharp conditions on the lengths $l_j$. Moreover, the control time $T>T_{min}$ is not sharp, although we get an explicit constant in the observability inequality. In this sense, these two problems remain open.
\end{itemize}
\end{remark}
To end the introduction, we present the outline of the manuscript. Section \ref{Sec2} is devoted to the well-posedness results for the system \eqref{graph}-\eqref{bound} and its adjoint. In Section \ref{Sec3}, we give a rigorous proof of the observability inequality, and with this in hand, we are able to prove Theorem \ref{Th_Control_N}. In Appendix \ref{Sec4} we present key lemmas, proved using Morawetz multipliers, which are crucial for the main result of the paper.
\section{Well-posedness results}\label{Sec2}
We first study the homogeneous linear system (without control) and the adjoint system associated to \eqref{graph}-\eqref{bound}. After that, the linear biharmonic Schrödinger equation with regular initial data and controls is studied.
\subsection{Study of the linear system} In this section we consider the following linear model
\begin{equation}\label{graph_1}
\begin{cases}
i\partial_t u_j +\partial_x^2 u_j - \partial_x^4 u_j =0, & (t,x)\in (0,T) \times I_j,\ j=1,2, ..., N\\
u_j(0,x)=u_{j0}(x), & x \in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with the boundary conditions
\begin{equation}\label{bound1}
\left\lbrace
\begin{split}
&u_1(-l_1,t )=\partial_x u_1(-l_1,t)=0,\\
&u_j(l_j,t)= \partial_x u_j(l_j,t)=0, & j \in \{2,3,\cdots, N\}, \\
& u_1(0,t)=\alpha_j u_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x u_j(0,t)}{\alpha_j}, \\
& \partial_x^2 u_1(0,t)= \alpha_j \partial_x^2 u_j(0,t), &j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Additionally, from now on we use the notation introduced in the introduction of the manuscript.
Let us consider the differential operator $$A: u=\left(u_{1}, \cdots, u_{N}\right) \in D(A) \subset L^{2}(\mathcal{G}) \mapsto i \partial_x^2 u - i \partial_x^4 u \in L^{2}(\mathcal{G})$$
with domain defined by
\begin{equation}\label{b1}
D(A):= \left\lbrace u \in \prod_{j=1}^N H^4(I_j) \cap V : \partial_x u_1(0)=\sum_{j=2}^{N}\frac{\partial_x u_j(0)}{\alpha_j}, \,\, \partial_x^3 u_1(0)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0)}{\alpha_j} \right\rbrace,
\end{equation}
where
\begin{multline}\label{b2}
V=\left\lbrace u \in \prod_{j=1}^N H^2(I_j) : u_1(-l_1 )=\partial_x u_1(-l_1)=u_j(l_j)=\partial_x u_j(l_j)=0, \right. \\
\left. u_1(0)=\alpha_j u_j(0), \partial_x^2 u_1(0)= \alpha_j \partial_x^2 u_j(0), \quad j \in \{2,3,\cdots, N\} \right\rbrace .
\end{multline}
Then the homogeneous linear system \eqref{graph_1}-\eqref{bound1} can be rewritten in the abstract form
\begin{equation}\label{gen}
\begin{cases}
u_t (t)= Au(t), & t>0 \\
u(0)=u_0 \in L^2({\mathcal{G}}).
\end{cases}
\end{equation}
The following proposition establishes a key property of the operator $A$.
\begin{proposition}\label{selfadjoint}
The operator $A:D(A)\subset L^2(\mathcal G)\rightarrow L^2(\mathcal G)$ is self-adjoint in $L^2({\mathcal{G}})$.
\end{proposition}
\begin{proof}
Let us first prove that $A$ is a symmetric operator. To do this, let $u$ and $v$ be in $D(A)$. Then, approximating $u$ and $v$ by $C^4(\mathcal{G})$ functions, integrating by parts and using the boundary conditions \eqref{b1} and \eqref{b2}, we have that
\begin{equation*}
\begin{split}
(Au,v)_{L^{2}(\mathcal{G})}&
= {\text{Re }}\int_{-l_1}^0 (Au)_1(x)\overline{v_1(x)}dx + {\text{Re }} \sum_{j=2}^{N} \int_{0}^{l_j} (Au)_j(x)\overline{v_j(x)}dx\\
&={\text{Re }} \int_{-l_1}^0 (i \partial_x^2 u_1 - i \partial_x^4 u_1)\overline{v_1(x)}dx + {\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} (i \partial_x^2 u_j - i \partial_x^4 u_j)(x)\overline{v_j(x)}dx\\
&={\text{Re }} \int_{-l_1}^0 {u_1} (i \partial_x^2 \overline{v}_1 - i \partial_x^4 \overline{v}_1)dx +{\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} {u_j} (i \partial_x^2 \overline{v}_j - i \partial_x^4 \overline{v}_j)dx\\
&\quad \quad + {\text{Re }} i\sum_{j=1}^{N} \left[\partial_xu_j\overline{v}_j-u_j\partial_x\overline{v}_j-\partial_x^3u_j\overline{v}_j+\partial_x^2u_j\partial_x\overline{v}_j-\partial_xu_j\partial_x^2\overline{v}_j+u_j\partial_x^3\overline{v}_j\right]_{\partial\mathcal{G}}\\
&=(u,Av)_{L^2(\mathcal{G})},\ \forall\ u,v\in D(A),
\end{split}
\end{equation*}
that is, $A$ is symmetric. It is not hard to see that $D(A^*)=D(A)$, so $A$ is self-adjoint. This finishes the proof.
\end{proof}
By semigroup theory, $A$ generates a strongly continuous unitary group on $L^2({\mathcal{G}})$, and for any $u_0=(u_{10}, u_{20},\ldots, u_{N0}) \in L^2({\mathcal{G}})$ there exists a unique mild solution $u \in C([0,T];L^2({\mathcal{G}}))$ of \eqref{gen}. Furthermore, if $u_0 \in D(A)$, then \eqref{gen} has a classical solution satisfying $u \in C([0,T];D(A)) \cap C^1([0,T];L^2({\mathcal{G}}))$. Summarizing, we have the following result.
\begin{proposition}\label{mild}
Let $u_0=(u_{10}, u_{20},\ldots, u_{N0})\in H_0^k(\mathcal G)$, for $k\in\{0,1,2,3,4\}$. Then the linear system \eqref{graph_1} with boundary conditions \eqref{bound1} has a unique solution $u$ in the space $C([0,T];H_0^k(\mathcal G))$. In particular, for $k=4$ we get a classical solution, and for the other cases ($k\in\{0,1,2,3\}$) the solution is a mild solution.
\end{proposition}
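To illustrate the group generated by $A$ concretely, the following sketch (ours, for illustration only) solves the free equation $i\partial_t u+\partial_x^2u-\partial_x^4u=0$ spectrally on a single periodic edge, using the dispersion relation $\widehat u(k,t)=e^{-i(k^2+k^4)t}\,\widehat u(k,0)$. It does not implement the vertex conditions \eqref{bound1}, and all numerical parameters are arbitrary test values; its only purpose is to display the unitarity ($L^2$ conservation) asserted above.
\begin{verbatim}
# Free linear biharmonic Schrodinger flow, periodic in x:
# i u_t + u_xx - u_xxxx = 0  <=>  d/dt u_hat(k) = -i (k^2 + k^4) u_hat(k).
import numpy as np

n, P, t = 256, 2 * np.pi, 0.3                       # grid, period, time
x = np.linspace(0, P, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=P / n)
u0 = np.exp(-10 * (x - np.pi)**2) * np.exp(2j * x)  # smooth wave packet
u = np.fft.ifft(np.exp(-1j * (k**2 + k**4) * t) * np.fft.fft(u0))
print(np.linalg.norm(u0), np.linalg.norm(u))        # equal: L^2 conserved
\end{verbatim}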
Now, we deal with the adjoint system associated with \eqref{graph_1}-\eqref{bound1}. As the operator $A=i\partial_x^2-i\partial_x^4$ is self-adjoint (see Proposition \ref{selfadjoint}), the adjoint system is defined as follows
\begin{equation}\label{adj_graph_1a}
\begin{cases}
i\partial_t v_j +\partial_x^2 v_j - \partial_x^4 v_j =0, & (t,x)\in (0,T) \times I_j,\ j=1,2, ..., N\\
v_j(T,x)=v_{jT}(x), & x \in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with the boundary conditions
\begin{equation}\label{adj_bound1}
\left\lbrace
\begin{split}
&v_1(-l_1,t )=\partial_x v_1(-l_1,t)=0,\\
&v_j(l_j,t)=\partial_x v_j(l_j,t)=0, & j \in \{2,3,\cdots, N\}, \\
& v_1(0,t)=\alpha_j v_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x v_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x v_j(0,t)}{\alpha_j}, \\
& \partial_x^2 v_1(0,t)= \alpha_j \partial_x^2 v_j(0,t), &j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 v_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 v_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Also, as $A=A^*$ we have $D(A^{*})=D(A)$, and the proof of well-posedness is the same as in Proposition \ref{mild}.
\section{Exact boundary controllability}\label{Sec3}
This section is devoted to the analysis of the exact controllability property for the linear system \eqref{graph} with boundary control \eqref{bound}. Here, we present the answer to the control problem stated in the introduction of this work. First, let us present two definitions that will be important for the rest of the work.
\begin{definition}
Let $T > 0$. The system \eqref{graph}-\eqref{bound} is exactly controllable in time $T$ if for any initial and final data $u_0\,, u_T\in H^{-2}(\mathcal{G})$ there exist control functions $h_j\in L^2(0,T)$, $j\in \{2,3,\cdots, N\}$, such that the solution $u$ of \eqref{graph}-\eqref{bound} on the tree-shaped network of $N$ edges satisfies \eqref{ect}. In addition, when $u_T=0$ we say that the system \eqref{graph}-\eqref{bound} is null controllable in time $T$.
\end{definition}
From now on, consider the transposition solution of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, which is given by the following.
\begin{definition}
We say that $u\in L^{\infty}(0,T;H^{-2}(\mathcal{G}))$ is a solution of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, in the transposition sense if and only if
$$
\sum_{j=1}^{N}\left(\int_0^T\left\langle u_j(t),f_j(t) \right\rangle dt+i\langle u_j(0),v_j(0)\rangle\right)+\sum_{j=2}^{N}\int_0^T h_j(t)\,\partial^2_x\overline{v_j}(l_j,t)\,dt=0,
$$
for every $f\in L^2(0,T;H^2_0(\mathcal{G}))$, where $v(x,t)$ is the mild solution of the adjoint problem \eqref{adj_graph_1a}-\eqref{adj_bound1}, with source term $f$ and final data $v(x,T)=0$, in the space $C([0,T];H_0^2(\mathcal{G}))$, obtained in Proposition \ref{mild}. Here, $\left\langle \cdot,\cdot \right\rangle$ denotes the duality pairing between $H^{-2}(\mathcal{G})$ and $H^2_0(\mathcal{G})$.
\end{definition}
With this in hand, the following lemma gives an equivalent condition for the exact controllability property.
\begin{lemma}\label{L_CECP_1}
Let $u_T\in H^{-2}(\mathcal{G})$. Then, there exist controls $h_j(t)\in L^2(0,T)$, for $j=2,...,N$, such that the solution $u(x,t)$ of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, satisfies \eqref{ect} if and only if
\begin{equation}\label{CECP_1}
i\sum_{j=1}^N\langle u_j(T),\overline{v}_j(T)\rangle =\sum_{j=2}^{N}\int_0^Th_j(t)\partial^2_xv_j(l_j,t)dt,
\end{equation}
where $v$ is solution of \eqref{adj_graph_1a}-\eqref{adj_bound1}, with initial data $v(x,T)=v(T)$.
\end{lemma}
\begin{proof}
Relation \eqref{CECP_1} is obtained by multiplying \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, by the solution $v$ of \eqref{adj_graph_1a}-\eqref{adj_bound1} and integrating by parts on $\mathcal{G}\times(0,T)$.
\end{proof}
\subsection{Observability inequality} A fundamental role will be played by the following observability result, which together with Lemma \ref{L_CECP_1} gives us Theorem \ref{Th_Control_N}.
\begin{proposition}\label{Oinequality_1}
Let $l_j>0$ for any $j\in \{1,\cdots,N\}$, let $T_{min}$ be as in \eqref{condition1}, and assume
that \eqref{putaquepario1} holds. If $T>T_{min}$, then the following inequality holds
\begin{equation}\label{OI_2}
\norm{v(x,T)}_{H^2_0(\mathcal{G})}^2\leq C\sum_{j=2}^{N}\norm{\partial^2_xv_j(l_j,t)}^2_{L^2(0,T)}
\end{equation}
for any $v=\left(v_{1}, v_{2}, \cdots, v_{N}\right)$ solution of \eqref{adj_graph_1a}-\eqref{adj_bound1} with final condition $v_{T}=\left(v_{1}^{T}, v_{2}^{T}, \cdots, v_{N}^{T}\right) \in H^2_0(\mathcal{G})$ and for some positive constant $C >0$.
\end{proposition}
\begin{proof}
Firstly, taking $f=0$ and choosing $q(x,t)=1$ in \eqref{identity}, we get that
\begin{align*}
-\frac{1}{2}{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{1}{2}{\text{Im}}\left.\int_0^Tv\overline{\partial_tv}\right]_{\partial\mathcal{G}}dt&+\frac{1}{2}\left.\int_0^T|\partial_xv|^2\right]_{\partial\mathcal{G}}dt +\frac{1}{2}\left.\int_0^T|\partial_x^2v|^2\right]_{\partial\mathcal{G}}dt \\
&-{\text{Re}}\int_0^T\left[ \partial_x^3 v\overline{\partial_xv}\right]_{\partial\mathcal{G}}dt=0,
\end{align*}
or equivalently,
\begin{align*}
0=&-\frac{1}{2}{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{1}{2}{\text{Im}}\int_0^T \left( \left[ v_1\overline{\partial_t v}_1\right]_{-l_1}^0 + \sum_{j=2}^{N}\left[ v_j\overline{\partial_t v}_j\right]_{0}^{l_j}\right) dt \\
& +\frac{1}{2}\int_0^T\left( \left[ |\partial_x v_1|^2 \right]_{-l_1}^0 + \sum_{j=2}^{N}\left[ |\partial_x v_j|^2 \right]_{0}^{l_j} \right)dt
+\frac{1}{2}\int_0^T\left( \left[ |\partial_x^2 v_1|^2 \right]_{-l_1}^0 + \sum_{j=2}^{N}\left[ |\partial_x^2 v_j|^2 \right]_{0}^{l_j} \right)dt\\
& -{\text{Re}}\int_0^T\left(\left[ \partial_x^3 v_1\overline{\partial_x v}_1\right]_{-l_1}^0 + \sum_{j=2}^{N}\left[ \partial_x^3 v_j\overline{ \partial_x v}_j\right]_{0}^{l_j}\right)dt.
\end{align*}
By using the boundary conditions \eqref{adj_bound1}, it follows that
\begin{align*}
0=&-\frac{1}{2}{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{1}{2}{\text{Im}}\int_0^T \left( v_1(0)\overline{\partial_t v}_1(0) - \sum_{j=2}^{N} v_j(0)\overline{\partial_t v}_j(0)\right) dt \\
&+\frac{1}{2}\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt \\
&+\frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2 - |\partial_x^2 v_1(-l_1)|^2 + \sum_{j=2}^{N} \left( |\partial_x^2 v_j(l_j)|^2 - |\partial_x^2 v_j(0)|^2 \right) \right)dt \\
&-{\text{Re}}\int_0^T\left(\partial_x^3 v_1 (0)\overline{\partial_x v}_1(0) - \sum_{j=2}^{N} \partial_x^3 v_j(0)\overline{ \partial_x v}_j(0) \right)dt.
\end{align*}
Once again, due to the boundary conditions \eqref{adj_bound1} and relations \eqref{putaquepario1}, we have that
\begin{equation}\label{perfect}
\begin{split}
\int_0^T |\partial_x^2&v_1(-l_1)|^2 dt =-{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\
&+\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt
+\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} |\partial_x^2 v_j(0)|^2 \right)dt.
\end{split}
\end{equation}
Thanks to relations \eqref{putaquepario1}, we deduce that
\begin{multline*}
\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt = \frac{1}{2}\int_0^T\left( \left| \sum_{j=2}^{N} \frac{\partial_x v_j(0)}{\alpha_j}\right |^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt \\
\leq \frac{1}{2}\int_0^T\left( (N-1) \sum_{j=2}^{N} \left|\frac{\partial_x v_j(0)}{\alpha_j}\right |^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt = \frac{1}{2}\int_0^T \sum_{j=2}^{N} |\partial_x v_j(0)|^2\left( \frac{N-1}{\alpha_j^2}-1 \right)dt \leq 0
\end{multline*}
and
\begin{align*}
\frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} |\partial_x^2 v_j(0)|^2 \right)dt &= \frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} \left| \frac{\partial_x^2 v_1(0)}{\alpha_j} \right|^2 \right)dt \\
&= \frac{1}{2}\int_0^T |\partial_x^2 v_1(0)|^2\left( 1- \sum_{j=2}^{N} \frac{1}{\alpha_j^2} \right)dt= 0.
\end{align*}
Thus, previous calculations ensure that
\begin{equation}\label{newmult_1}
\int_0^T |\partial_x^2 v_1(-l_1)|^2 dt \leq {\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt .
\end{equation}
Now, choosing $q(x,t)=x$ in \eqref{identity}, by using the boundary conditions \eqref{adj_bound1} and taking $f=0$, we get
\begin{align*}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt &=\left.\int_0^T|\partial^2_xv|^2x\right]_{\partial\mathcal{G}}dt-{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\,x\right]_0^Tdx \\
&=l_1 \int_0^T |\partial_x^2 v_1(-l_1)|^2 dt + \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^{2}l_jdt-{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\,x\right]_0^Tdx.
\end{align*}
From inequality \eqref{newmult_1}, it follows that
\begin{align*}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt \leq& \ l_1{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +l_1\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\
& + \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^{2}l_jdt-{\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\,x\right]_0^Tdx,
\end{align*}
hence
\begin{equation}\label{OI1_b_1}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt\leq {\text{Im}}\left.\int_{\mathcal{G}}v\overline{\partial_xv} (l_1-x)\right]_0^Tdx +(\overline{L}+l_1)\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt ,
\end{equation}
where $\overline{L}$ is defined by \eqref{condition2}.
Now, we are in position to prove \eqref{OI_2}. Thanks to \eqref{OI1_b_1}, it follows that
\begin{equation}\label{OI1_c_1}
\begin{split}
2\int_Q(|\partial_xv|^2+|\partial_x^2v|^2)dxdt\leq&\ (\overline{L}+l_1)\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt + \left|\left.\int_{\mathcal{G}}v\overline{\partial_xv} (l_1-x)\right]_0^Tdx \right| \\
\leq & \ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt + \int_{\mathcal{G}}|v(T)||\overline{\partial_xv(T)}| |l_1-x|dx \\
&+ \int_{\mathcal{G}}|v(0)||\overline{\partial_xv(0)}| |l_1-x|dx .
\end{split}
\end{equation}
Since the conservation laws hold for solutions of \eqref{adj_bound1aa}, that is, \eqref{H12_c} is satisfied, using them on the left-hand side of \eqref{OI1_c_1} yields
\begin{equation}\label{OI1_d_1}
\begin{split}
2\int_0^T\left(\norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial^2_xv(T)}^2_{L^2(\mathcal{G})}\right)dt
\leq& \ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\
& + \overline{L} \left( \int_{\mathcal{G}}|v(T)||\overline{\partial_xv(T)}| dx + \int_{\mathcal{G}}|v(0)||\overline{\partial_xv(0)}| dx\right).
\end{split}
\end{equation}
Applying Young's inequality in \eqref{OI1_d_1}, with $\varepsilon >0$ satisfying \eqref{condition3}, we deduce that
\begin{multline}\label{OI1_d_1_2}
2T \left(\norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial_x^2v(T)}^2_{L^2(\mathcal{G})}\right)
\leq \ L \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\ + \overline{L} \left( \frac{1}{\varepsilon T} \int_{\mathcal{G}}|v(T)|^2 dx + \varepsilon T \int_{\mathcal{G}} |\overline{\partial_xv(T)}|^2 dx + \frac{1}{\varepsilon T} \int_{\mathcal{G}}|v(0)|^2 dx + \varepsilon T \int_{\mathcal{G}}|\overline{\partial_xv(0)}|^2 dx\right).
\end{multline}
Therefore, due to \eqref{OI1_d_1_2} and using the conservation law again, we have the following estimate:
\begin{equation*}
2T(1-\overline{L}\varepsilon ) \left( \norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial_x^2v(T)}^2_{L^2(\mathcal{G})}\right)
\leq L \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt +\frac{2\overline{L} }{\varepsilon T} \norm{v(T)}^2_{L^2(\mathcal{G})}.
\end{equation*}
From relation \eqref{poincare}, we have that
\begin{align*}
2(1-\overline{L}\varepsilon )T \left(\norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial_x^2v(T)}^2_{L^2(\mathcal{G})}\right)\leq&\ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt\\
& +\frac{2 \overline{L} }{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right) \left(\norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial^2_xv(T)}^2_{L^2(\mathcal{G})}\right).
\end{align*}
Equivalently, we get that
\begin{equation*}
2\overline{L}\left[ \left( \frac{1}{ \overline{L}} -\varepsilon \right) T - \frac{1}{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right)\right] \left(\norm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+\norm{\partial_x^2v(T)}^2_{L^2(\mathcal{G})} \right)
\leq L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt.
\end{equation*}
Note that the conditions \eqref{condition1}, \eqref{condition2} and \eqref{condition3} imply that
\begin{equation*}
K=\left[ \left( \frac{1}{\overline{L}} -\varepsilon \right) T - \frac{1}{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right)\right] >0,
\end{equation*}
since $T>T_{min}$ is equivalent to $\left( \frac{1}{\overline{L}} -\varepsilon \right) T > \frac{1}{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right)$.
Thus, using \eqref{poincare} again, we obtain the observability inequality \eqref{OI_2}.
\end{proof}
\subsection{Proof of Theorem \ref{Th_Control_N}}
Notice that Theorem \ref{Th_Control_N} is a consequence of the observability inequality \eqref{OI_2}. In fact, without loss of generality, pick $u_0=0$ on $\mathcal{G}$. Define the linear bounded map
$\Gamma:H^2_0(\mathcal{G})\longrightarrow H^{-2}(\mathcal{G})$
by
\begin{equation}
\Gamma(v(\cdot,T))=\langle u(\cdot,T),\overline{v}(\cdot,T)\rangle,
\end{equation}
where $v=v(x,t)$ is the solution of \eqref{adj_graph_1a}-\eqref{adj_bound1} with final data $v(x,T)=v(T)$, $\left\langle \cdot,\cdot \right\rangle$ denotes the duality pairing between $H^{-2}(\mathcal{G})$ and $H^2_0(\mathcal{G})$, $u=u(x,t)$ is the solution of \eqref{graph}-\eqref{bound} with $h_1(t)=0$, and
\begin{equation}\label{control2a}
h_j(t)=\partial^2_xv_j(l_j,t),
\end{equation}
for $j=2,\cdots,N$.
According to Lemma \ref{L_CECP_1} and Proposition \ref{Oinequality_1}, we obtain
\begin{equation*}
\langle \Gamma(v(T)),v(T)\rangle =\sum_{j=2}^{N}\norm{h_j(t)}^2_{L^2(0,T)}\geq C^{-1}\norm{v(T)}^2_{H^2_0(\mathcal{G})}.
\end{equation*}
Thus, by the Lax--Milgram theorem, $\Gamma$ is invertible. Consequently, for given $u(T)\in H^{-2}(\mathcal{G})$, we can define $v(T):=\Gamma^{-1}(u(T))$, which solves \eqref{adj_graph_1a}-\eqref{adj_bound1}. Then, if $h_j(t)$, for $j=2,\cdots,N$, is defined by \eqref{control2a}, the corresponding solution $u$ of the system \eqref{graph}-\eqref{bound} satisfies \eqref{ect}, and so Theorem \ref{Th_Control_N} holds.
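The scheme of the proof is the classical duality (HUM) argument, and it may be illuminating to see its finite-dimensional analogue. In the sketch below (ours, purely illustrative, with toy matrices), the controllability Gramian $W=\int_0^T e^{At}BB^\top e^{A^\top t}\,dt$ plays the role of $\Gamma$, and solving $W\eta=x_T-e^{AT}x_0$ is the analogue of the step $v(T)=\Gamma^{-1}(u(T))$.
\begin{verbatim}
# Finite-dimensional analogue of the duality argument: steering
# x' = Ax + Bu from x0 to xT by inverting the controllability Gramian.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # toy dynamics (controllable pair)
B = np.array([[0.0], [1.0]])
T, n = 1.0, 400
ts = np.linspace(0.0, T, n)
dt = ts[1] - ts[0]
W = sum(expm(A * t) @ B @ B.T @ expm(A.T * t) * dt for t in ts)  # Gramian

x0, xT = np.array([1.0, 0.0]), np.array([0.0, 0.0])
eta = np.linalg.solve(W, xT - expm(A * T) @ x0)    # "Gamma^{-1}" step

def u(t):                                          # steering control
    return B.T @ expm(A.T * (T - t)) @ eta

x = x0.copy()                                      # explicit Euler check
for t in ts[:-1]:
    x = x + dt * (A @ x + B @ u(t))
print(x)                                           # approximately xT
\end{verbatim}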
\section*{Appendix}
This appendix is segmented into three key parts.
\begin{enumerate}
\item \textbf{Section \ref{impl}} discusses additional implementation details. In particular, method-specific overheads are discussed in detail, and detailed hyper-parameter settings for {IGLU}\xspace and the main baselines reported in Table \ref{results} are provided. A key outcome of this analysis is that {IGLU}\xspace has the smallest overheads compared to the other methods. \revision{We also provide the methodology for incorporating architectural modifications into IGLU, a detailed comparison with caching-based methods, and more detailed descriptions of the algorithms mentioned in the main paper.}
\item \textbf{Section \ref{data_baselines}} reports dataset statistics, provides comparisons with additional baselines (not included in the main paper due to lack of space) and presents experiments on scaling to deeper models and larger graphs, as mentioned in Section~\ref{ablation} of the main paper. It also provides experiments on the applicability of {IGLU}\xspace across settings and architectures, on using smooth activation functions, a continued ablation on degrees of staleness, and optimization on the train set. The key outcomes of this discussion are that {IGLU}\xspace scales well to deeper models and larger datasets, and continues to improve over the state of the art even when compared to these additional baselines. Experimental evidence also demonstrates {IGLU}\xspace's ability to generalize easily to a different setting and architecture, to perform equivalently with smooth activation functions, and to achieve lower training losses faster than all the baselines across datasets.
\item \textbf{Section~\ref{sec:proofs}} gives detailed proofs of Lemma~\ref{lem:main} and the convergence rates offered by {IGLU}\xspace. It is shown that under standard assumptions such as objective smoothness, {IGLU}\xspace is able to offer both the standard rate of convergence common for SGD-style algorithms, as well as a \textit{fast} rate if full-batch GD is performed.
\end{enumerate}
\section{Additional Implementation Details}
\label{impl}
We recall from Section~\ref{sec:main-results} that the wall-clock time reported in Figure~\ref{fig:convergence_results} consists strictly of the optimization time for each method and excludes method-specific overheads. This actually put {IGLU}\xspace at a disadvantage, since its overheads are relatively mild. This section demonstrates that the other baselines incur much more significant overheads than {IGLU}\xspace; when these are included, the speedups that {IGLU}\xspace provides over baseline methods improve further.
\subsection{Hardware}\label{app:hardware}
We implement IGLU in TensorFlow 1.15.2 and perform all experiments on an NVIDIA V100 GPU (32 GB memory) and an Intel Xeon CPU (2.6 GHz). We ensure that all baselines are run on exactly the same hardware.
\subsection{Time and Memory Overheads for Various Methods}
\label{app:time_memory_overheads}
We consider three main types of overheads incurred by different methods: a one-time pre-processing overhead, recurring overheads, and additional memory overheads. We describe each of them in the context of the respective methods below.
\begin{enumerate}
\item \textbf{GraphSAGE} - GraphSAGE recursively samples neighbors at each layer for every minibatch. This is done on-the-fly and contributes to a significant sampling overhead. Since this overhead is incurred for every minibatch, we categorize this under recurring overhead.
We aggregate this overhead across all minibatches during training.
GraphSAGE does not incur preprocessing or additional memory overheads.
\item \textbf{VRGCN} - Similar to GraphSAGE, VRGCN also recursively samples neighbors at each layer for every minibatch on-the-fly. We again aggregate this overhead across all minibatches during training.
VRGCN also stores the stale/historical embeddings that are learnt for every node at every layer. This is an additional overhead of $\bigO{NKd}$, where $K$ is the number of layers, $N$ is the number of nodes in the graph and $d = \frac1K\sum_{k \in [K]}\ d_k$ is the average embedding dimensionality across layers.
\item \textbf{ClusterGCN} - ClusterGCN creates subgraphs and uses them as minibatches for training. For the creation of these subgraphs, ClusterGCN performs graph clustering using the highly optimized METIS tool\footnote{\href{http://glaros.dtc.umn.edu/gkhome/metis/metis/download}{\texttt{http://glaros.dtc.umn.edu/gkhome/metis/metis/download}}}. This overhead is a one-time overhead since graph clustering is done before training and the same subgraphs are (re-)used during the whole training process. We categorize this under preprocessing overhead. ClusterGCN does not incur any recurring or additional memory overheads.
\item \textbf{GraphSAINT} - GraphSAINT, similar to ClusterGCN creates subgraphs to be used as minibatches for training. We categorize this minibatch creation as the preprocessing overhead for GraphSAINT. However, unlike ClusterGCN, GraphSAINT also periodically creates new subgraphs on-the-fly. We categorize this overhead incurred in creating new subgraphs as recurring overhead. GraphSAINT does not incur any additional memory overheads.
\item \textbf{IGLU} - {IGLU}\xspace creates mini-batches only once, using subsets of nodes with their full neighborhood information, which are then reused throughout the training process. In addition to this, {IGLU}\xspace requires initial values of both the incomplete gradients $\valpha^k$ and the $X^k$ embeddings (Step 3 and first part of Step 4 in Algorithm \ref{algo:bp}, and Step 2 and Step 4 in Algorithm \ref{algo:inv}) before optimization can commence. We categorize these two overheads (mini-batch creation and initialization of the $\valpha^k$'s and $X^k$ embeddings) as {IGLU}\xspace's preprocessing overhead and note that \textbf{{IGLU}\xspace does not have any recurring overheads}. {IGLU}\xspace does incur an additional memory overhead, since it needs to store the incomplete gradients $\valpha^k$ in the inverted variant and the embeddings $X^k$ in the backprop variant. However, note that the memory occupied by $X^k$ for a layer $k$ is the same as that occupied by $\valpha^k$ for that layer (see Definition~\ref{defn:alphak}). Thus, for both its variants, {IGLU}\xspace incurs an additional memory overhead of $\bigO{NKd}$, where $K$ is the number of layers, $N$ is the number of nodes in the graph and $d = \frac1K\sum_{k \in [K]}\ d_k$ is the average embedding dimensionality across layers.
\end{enumerate}
\begin{table}[t]
\caption{\textbf{OGBN-Proteins: Overheads of different methods.} {IGLU}\xspace does not have any recurring overhead, while VRGCN, GraphSAGE and GraphSAINT all suffer from heavy recurring overheads. ClusterGCN runs into a runtime error on this dataset (\textbf{denoted by ||}). GraphSAINT incurs an overhead that is $\sim 2 \times$ the overhead incurred by {IGLU}\xspace, while GraphSAGE and VRGCN incur up to $\sim 4.7 \times$ and $\sim 7.8 \times$ the overhead incurred by {IGLU}\xspace, respectively. For the last row, I denotes initialization time, MB denotes minibatch time and T denotes total preprocessing time. Please refer to the discussion on OGBN-Proteins in Section \ref{proteins_overhead_discussion} for more details.}
\vspace{5mm}
\centering
{
\begin{tabular}{ccccc}
\hline
Method & Preprocessing (One-time) & Recurring & Additional Memory \\
\hline
\hline
GraphSAGE & N/A & 276.8s & N/A \\
VRGCN & N/A & 465.0s & $\bigO{NKd}$ \\
ClusterGCN & || & || & || \\
GraphSAINT & 22.1s & 101.0s & N/A \\
\hline
{IGLU}\xspace & 34.0s ({I}) + 25.0s ({MB}) = 59.0s ({T}) & N/A & $\bigO{NKd}$\\
\hline
\end{tabular}
}
\vspace{5mm}
\label{overheads:proteins}
\end{table}
\begin{table}[t]
\caption{\textbf{Reddit: Overheads of different methods.} {IGLU}\xspace and GraphSAINT do not have any recurring overhead for this dataset, while VRGCN and GraphSAGE incur heavy recurring overheads. ClusterGCN suffers from a heavy preprocessing overhead incurred due to clustering. In this case, {IGLU}\xspace incurs an overhead that is marginally higher ($\sim 1.4 \times$) than that of GraphSAINT, while VRGCN, GraphSAGE and ClusterGCN incur as much as $\sim 2.1 \times$, $\sim 4.5 \times$ and $\sim 5.8 \times$ the overhead incurred by {IGLU}\xspace, respectively. For the last row, I denotes initialization time, MB denotes minibatch time and T denotes total preprocessing time. Please refer to the discussion on Reddit in Section \ref{reddit_overhead_discussion} for more details.}
\vspace{3mm}
\centering
{
\begin{tabular}{ccccc}
\hline
Method & Preprocessing (One-time) & Recurring & Additional Memory \\
\hline
\hline
GraphSAGE & N/A & 41.7s & N/A\\
VRGCN & N/A & 19.2s & $\bigO{NKd}$\\
ClusterGCN & 54.0s & N/A & N/A \\
GraphSAINT & 6.7s & N/A & N/A\\
\hline
{IGLU}\xspace & 3.5s ({I}) + 5.7s ({MB})= 9.2s ({T}) & N/A & $\bigO{NKd}$\\
\hline
\end{tabular}
}
\vspace{5mm}
\label{overheads:reddit}
\end{table}
Tables \ref{overheads:proteins} and \ref{overheads:reddit} report the overheads incurred by different methods on the OGBN-Proteins and Reddit datasets (the largest datasets in terms of edges and nodes respectively). ClusterGCN runs into a runtime error on the Proteins dataset \textbf{(denoted by || in the table)}. N/A stands for Not Applicable in the tables. In the tables, specifically for {IGLU}\xspace, pre-processing time is the sum of initialization time required to pre-compute $\valpha^k, X^k$ and mini-batch creation time. We also report the individual overhead for both initialization and mini-batch creation for {IGLU}\xspace. The total pre-processing time for {IGLU}\xspace is denoted by T, overheads incurred for initialization by I and overheads incurred for mini-batch creation by MB.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/overheads_timing_plots.pdf}\vspace*{-5pt}
\end{center}
\caption{\textbf{Total overheads in wall-clock time (log scale) for the different methods on the OGBN-Proteins and Reddit datasets.} ClusterGCN runs into a runtime error on the OGBN-Proteins dataset and hence has not been included in the plot. {IGLU}\xspace frequently offers the least total overhead compared to the other methods, and hence significantly lower overall experimentation time. Please refer to Section \ref{proteins_overhead_discussion} for details.}
\label{fig:overheads_timing_analysis}
\end{figure}
For {IGLU}\xspace, the minibatch creation code is currently implemented in Python, while GraphSAINT uses a highly optimized C++ implementation. Specifically for the Reddit dataset, the number of subgraphs that GraphSAINT samples initially is sufficient and it does not incur any recurring overhead. However, on Proteins, GraphSAINT samples 200 subgraphs every 18 epochs, once the initially sampled subgraphs are used, leading to a sizeable recurring overhead.
\paragraph{Discussion on OGBN - Proteins:}\label{proteins_overhead_discussion}
Figure \ref{fig:overheads_timing_analysis} summarizes the total overheads for all the methods. On the OGBN-Proteins dataset, {IGLU}\xspace incurs $\sim 2 \times$ less overhead than GraphSAINT, the best baseline, while incurring as much as $\sim 7.8 \times$ less overhead than VRGCN and $\sim 4.7 \times$ less overhead than GraphSAGE. It is also important to note that these overheads are often quite significant compared to the optimization time of the methods and can add to the overall experimentation time. For experiments on the OGBN-Proteins dataset, VRGCN's total overheads equal 46.25\% of its optimization time, GraphSAINT's overheads equal 19.63\% of its optimization time, and GraphSAGE's overheads equal 9.52\% of its optimization time. However, {IGLU}\xspace's total overheads equal only \textbf{5.59\%} of its optimization time, the lowest of all methods. Upon re-computing the speedup provided by {IGLU}\xspace using the formula defined in equation \eqref{eq:speedup}, but this time with overheads included, it was observed that {IGLU}\xspace offered an improved speedup of \textbf{16.88\%} (the speedup was only 11.05\% when considering only optimization time).
\paragraph{Discussion on Reddit:}\label{reddit_overhead_discussion}
On the Reddit dataset, {IGLU}\xspace incurs up to $\sim 2.1 \times$ less overhead than VRGCN and up to $\sim 4.5\times$ less overhead than GraphSAGE. However, {IGLU}\xspace incurs a marginally higher overhead ($\sim 1.4 \times$) than GraphSAINT. This can be attributed to the non-optimized minibatch creation routine currently used by {IGLU}\xspace, compared to the highly optimized and parallelized C++ implementation used by GraphSAINT; addressing this is an immediate avenue for future work. Nevertheless, VRGCN's total overhead equals 15.41\% of its optimization time, GraphSAINT's overhead equals 43.41\% of its optimization time, GraphSAGE's overhead equals 31.25\% of its optimization time, ClusterGCN's overhead equals 41.02\% of its optimization time, while {IGLU}\xspace's overhead equals 44.09\% of its optimization time. While the relative overheads incurred by {IGLU}\xspace and GraphSAINT in comparison to optimization time may seem high for this dataset, this is because the actual optimization times for these methods are rather small (just 15.43 and 20.86 seconds for GraphSAINT and {IGLU}\xspace respectively), in comparison to other methods such as VRGCN, ClusterGCN and GraphSAGE, whose optimization times are 124s, 131s and 133s respectively, almost an order of magnitude larger than that of {IGLU}\xspace.
\begin{table}[t]
\caption{\textbf{URL's and commit numbers to run baseline codes}}
\vspace{2mm}
\centering
{
\begin{tabular}{ccc}
\hline
Method & URL & Commit \\
\hline
\hline
GCN & \href{https://github.com/williamleif/GraphSAGE}{github.com/williamleif/GraphSAGE} & a0fdef \\
GraphSAGE & \href{https://github.com/williamleif/GraphSAGE}{github.com/williamleif/GraphSAGE} & a0fdef\\
VRGCN & \href{https://github.com/thu-ml/stochastic_gcn}{github.com/thu-ml/stochastic\_gcn} & da7b78\\
ClusterGCN &
\href{https://github.com/google-research/google-research/tree/master/cluster_gcn}{github.com/google-research/google-research/tree/master/cluster\_gcn} & 0c1bbe5 \\
AS-GCN & \href{https://github.com/huangwb/AS-GCN}{github.com/huangwb/AS-GCN} & 5436ecd \\
L2-GCN & \href{https://github.com/VITA-Group/L2-GCN}{github.com/VITA-Group/L2-GCN} & 687fbae \\
MVS-GNN & \href{https://github.com/CongWeilin/mvs_gcn}{github.com/CongWeilin/mvs\_gcn} & a29c2c5 \\
LADIES & \href{https://github.com/acbull/LADIES}{github.com/acbull/LADIES} & c10b526 \\
FastGCN & \href{https://github.com/matenure/FastGCN}{https://github.com/matenure/FastGCN} & b8e6e64 \\
SIGN & \href{https://github.com/twitter-research/sign}{https://github.com/twitter-research/sign} & 42a230c \\
PPRGo & \href{https://github.com/TUM-DAML/pprgo\_pytorch}{https://github.com/TUM-DAML/pprgo\_pytorch} & c92c32e \\
BanditSampler & \href{https://github.com/xavierzw/gnn-bs}{https://github.com/xavierzw/gnn-bs} & a2415a9 \\
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{baselines_urls}
\end{table}
\subsection{Memory Analysis for IGLU}\label{app:memory}
{While {IGLU}\xspace requires storing stale variables, which carries additional memory costs, for most real-world graphs saving these stale representations on modern GPUs is quite reasonable. We provide examples of the additional memory usage required for two of the large datasets, Reddit and Proteins, in Table \ref{memory_overheads}, and we observe that IGLU requires only \textbf{150MB} and \textbf{260MB} of additional GPU memory respectively. Even for a graph with 1 million nodes, the additional memory required would only be $\sim$ 2.86GB, which easily fits on modern GPUs. For even larger graphs, CPU-GPU interfacing can be used. CPU and GPU interfacing for data movement is a common practice in training machine learning models and hence a potential method to mitigate limited memory availability in settings with large datasets. This has been explored by many works dealing with large datasets in the context of GCNs, such as VR-GCN \citep{vrgcn}, which stores historical activations in main memory (CPU). Such an interfacing is an immediate avenue of future work for {IGLU}\xspace.}
\begin{table}[ht]
\caption{\textbf{{Additional Memory Overheads incurred by {IGLU}\xspace on Large datasets}}}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccc}
\hline
Dataset & \# of Train Nodes & Embedding Dimensions & Number of Layers & Memory Per Layer & Total GPU Memory \\[0.05cm]
\hline
\hline
Reddit & 155k & 128 & 2 & 75MB & 150MB \\[0.05cm]
Proteins & 90k & 256 & 3 & 85MB & 260MB \\[0.05cm]
Million Sized Graph & 1M & 256 & 3 & 0.95GB & 2.86GB \\[0.05cm]
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{memory_overheads}
\end{table}
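As a quick consistency check (ours), the entries of Table \ref{memory_overheads} can be reproduced, up to rounding, by assuming float32 storage (4 bytes per value) of one $N\times d$ tensor per layer:
\begin{verbatim}
# Back-of-the-envelope estimate of IGLU's O(NKd) additional memory.
def iglu_extra_memory_mib(num_nodes, dim, num_layers, bytes_per_val=4):
    per_layer = num_nodes * dim * bytes_per_val / 2**20   # MiB per layer
    return per_layer, num_layers * per_layer

for name, (n, d, k) in {"Reddit": (155_000, 128, 2),
                        "Proteins": (90_000, 256, 3),
                        "1M-node graph": (1_000_000, 256, 3)}.items():
    per, tot = iglu_extra_memory_mib(n, d, k)
    print(f"{name}: {per:.0f} MiB/layer, {tot:.0f} MiB total")
# Reddit ~76/151, Proteins ~88/264, 1M ~977/2930 (i.e. ~2.86GB)
\end{verbatim}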
{We observe that IGLU enjoys significant speedups and improvements in training cost across datasets compared to the baselines as a result of using stale variables. Additionally, since IGLU trains just a single layer at a time, there is scope for further reduction in memory usage by keeping only the variables required for the current layer and re-using the computation graphs across layers, thereby making {IGLU}\xspace even less memory expensive.}
\subsection{Hyperparameter Configurations for {IGLU}\xspace and baselines}
\label{app:params}
Table \ref{baselines_urls} summarizes the source URLs and commit stamps from which the baseline implementations used in our experiments were obtained. The Adam optimizer \cite{kingma2014adam} was used to train {IGLU}\xspace and all the baselines until convergence on each of the datasets. A grid search was performed over the other hyper-parameters for each baseline, summarized below:
\begin{enumerate}
\item \textbf{GraphSAGE}: Learning Rate - \{0.01, 0.001\}, Batch Size - \{512, 1024, 2048\}, Neighborhood Sampling Size - \{25, 10, 5\}, Aggregator - \{Mean, Concat\}
\item \textbf{VRGCN} : Batch Size - \{512, 1000, 2048\}, Degree - \{1, 5, 10\}, Method - \{CV, CVD\}, Dropout - \{0, 0.2, 0.5\}
\item \textbf{ClusterGCN} : Learning Rate - \{0.01, 0.001, 0.005\}, Lambda - \{-1, 1, 1e-4\}, Number of Clusters - \{5, 50, 500, 1000, 1500, 5000\}, Batch Size - \{1, 5, 50, 100, 500\}, Dropout - \{0, 0.1, 0.3, 0.5\}
\item \textbf{GraphSAINT-RW} : Aggregator - \{Mean, Concat\}, Normalization - \{Norm, Bias\}, Depth - \{2, 3, 4\}, Root Nodes - \{1250, 2000, 3000, 4500, 6000\}, Dropout - \{0.0, 0.2, 0.5\}
\item \textbf{IGLU} : Learning Rate - \{0.01, 0.001\} with learning rate decay schemes, Batch Size - \{512, 2048, 4096, 10000\}, Dropout - \{0.0, 0.2, 0.5, 0.7\}
\end{enumerate}
\subsection{Incorporating Residual Connections, Batch Normalization and Virtual Nodes in {IGLU}\xspace}
\label{app:arch}
General-purpose techniques such as BatchNorm and skip/residual connections, and GCN-specific advancements such as bi-level aggregation using virtual nodes, offer performance boosts. The current implementation of IGLU already incorporates normalizations as described in Sections \ref{sec:method} and \ref{sec:exps}. Below we demonstrate how all the aforementioned architectural variations can be incorporated into IGLU with minimal changes to Lemma~\ref{lem:main} part 2. The remaining guarantees, such as those offered by Lemma~\ref{lem:main} part 1 and Theorem~\ref{thm:conv}, remain unaltered except for changes to constants.
\textbf{Incorporating BatchNorm into {IGLU}\xspace}: As pointed out in Section 3, {IGLU}\xspace assumes a general form of the architecture, specifically
\[
{\bm{x}}_i^k = f({\bm{x}}_j^{k-1}, j \in \bc i \cup \cN(i); E^k),
\]
where $f$ includes the aggregation operation such as using the graph Laplacian, weight matrices and any non-linearity. This naturally allows operations such as normalizations (LayerNorm, BatchNorm etc) to be carried out. For instance, the $f$ function for BatchNorm with a standard GCN would look like the following
\begin{align*}
{\bm{z}}_i^k &= \sigma\br{\sum_{j \in \cV}A_{ij}(W^k)^\top{\bm{x}}_j^{k-1}}\\
\hat{\bm{z}}_i^k &= \frac{{\bm{z}}_i^k - {\bm{\mu}}_B^k}{\sqrt{\vnu_B^k + \epsilon}}\\
{\bm{x}}_i^k &= \Gamma^k\cdot\hat{\bm{z}}_i^k + \vbeta^k,
\end{align*}
where $\sigma$ denotes a non-linearity like the sigmoid, $\Gamma^k \in \bR^{d_k\times d_k}$ is a diagonal matrix, $\vbeta^k \in \bR^{d_k}$ is a vector, and ${\bm{\mu}}_B^k, \vnu_B^k \in \bR^{d_k}$ are vectors containing the dimension-wise mean and variance values over a mini-batch $B$ (the division while computing $\hat{\bm{z}}^k_i$ is performed element-wise). The parameter $E^k$ is taken to collect the parameters contained in all the above operations, i.e. $E^k = \bc{W^k, \Gamma^k, \vbeta^k}$ in the above example (${\bm{\mu}}_B^k, \vnu_B^k$ are computed using samples in a mini-batch itself). Downstream calculations in Definition~\ref{defn:alphak} and Lemma~\ref{lem:main} continue to hold with no changes.
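For concreteness, the following minimal numpy sketch (ours, not from the IGLU codebase) implements the BatchNorm-augmented layer map $f$ above; \texttt{tanh} stands in for the non-linearity $\sigma$, and the diagonal matrix $\Gamma^k$ is represented by a vector \texttt{gamma}.
\begin{verbatim}
# Illustrative layer map f with BatchNorm; E^k = {W, gamma, beta}.
import numpy as np

def f_batchnorm(X_prev, A, W, gamma, beta, eps=1e-5):
    """X_prev: N x d_{k-1} embeddings, A: N x N aggregation matrix."""
    Z = np.tanh(A @ X_prev @ W)              # aggregation + non-linearity
    mu, var = Z.mean(axis=0), Z.var(axis=0)  # mini-batch statistics
    Z_hat = (Z - mu) / np.sqrt(var + eps)    # normalization
    return Z_hat * gamma + beta              # learnable scale and shift

N, d0, d1 = 5, 4, 3
rng = np.random.default_rng(0)
X1 = f_batchnorm(rng.normal(size=(N, d0)), np.eye(N),
                 rng.normal(size=(d0, d1)), np.ones(d1), np.zeros(d1))
\end{verbatim}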
\textbf{Incorporating Virtual Nodes into {IGLU}\xspace}: Virtual nodes can also be seamlessly incorporated simply by re-parameterizing the layer-wise parameter $E^k$. We refer to \citep{pei2020geomgcn} for this discussion. Let $R$ be the set of relations and let $\cN_g(\cdot), \cN_s(\cdot)$ denote the graph neighborhood and latent-space neighborhood functions respectively. A concrete example is given below to illustrate how virtual nodes can be incorporated. We note that operations like BatchNorm can additionally be incorporated, and alternate aggregation operations, e.g. concatenation, can be used instead.
\begin{align*}
\begin{aligned}
{\bm{g}}_i^{(k,r)} &= \sigma\br{\sum_{\substack{j \in \cN_g(i)\\\tau(i,j)=r}}A_{ij}(L_g^k)^\top{\bm{x}}_j^{k-1}}\\
{\bm{s}}_i^{(k,r)} &= \sigma\br{\sum_{\substack{j \in \cN_s(i)\\\tau(i,j)=r}}T_{ij}(L_s^k)^\top{\bm{x}}_j^{k-1}}
\end{aligned}
&& \text{(Low-level aggregation)}\\
\begin{aligned}
{\bm{m}}_i^k &= \frac1{\abs R}\sum_{r \in R}\br{{\bm{g}}_i^{(k,r)} + {\bm{s}}_i^{(k,r)}}
\end{aligned}
&&\text{(High-level aggregation)}\\
\begin{aligned}
{\bm{x}}_i^k &= \sigma\br{(W^k)^\top{\bm{m}}_i^k}
\end{aligned}
&&\text{(Non-linear transform)}
\end{align*}
where $\tau$ denotes the relationship indicator, $A_{ij}$ denotes the graph edge weight between nodes $i$ and $j$ and $T_{ij}$ denotes their geometric similarity in latent space. Note that ${\bm{g}}_i^{(k,r)}, {\bm{s}}_i^{(k,r)}$ corresponds to embeddings of the virtual nodes in the above example. To implement the above, the parameter $E^k$ can be taken to collect the learnable parameters contained in all the above operations i.e. $E^k = \bc{L_g^k, L_s^k, W^k}$ in the above example. Downstream calculations in Definition~\ref{defn:alphak} and Lemma~\ref{lem:main} continue to hold as is with no changes.
\textbf{Incorporating Skip/Residual Connections into {IGLU}\xspace}: The architecture style presented in Section 3 does not directly allow skip connections but they can be incorporated readily with no change to Definition~\ref{defn:alphak} and minor changes to Lemma~\ref{lem:main}. Let us introduce the notation $k \rightarrow m$ to denote a direct forward (skip) connection directed from layer $k$ to some layer $m > k$. In a purely feed-forward style architecture, we would only have connections of the form $k \rightarrow k+1$. The following gives a simple example of a GCN with a connection that skips two layers, specifically $(k-2) \rightarrow k$.
\[
{\bm{x}}_i^k = f\br{{\bm{x}}_j^{k-1}, j \in \bc i \cup \cN(i); E^k)} + {\bm{x}}_i^{k-2},
\]
where $f$ includes the aggregation operation with the graph Laplacian, the transformation weight matrix and any non-linearity. Definition~\ref{defn:alphak} needs no changes to incorporate such architectures. Part 1 of Lemma~\ref{lem:main} also needs no changes to address such cases. Part 2 needs a simple modification as shown below
\begin{lemma}[Lemma~\ref{lem:main}.2 adapted to skip connections]
For the final layer, we continue to have (i.e. no change) $\valpha^K = \vG(W^{K+1})^\top$. For any $k < K$, we have $\valpha^k = \sum_{m: k \rightarrow m}\left.\frac{\partial(\valpha^m\odot X^m)}{\partial X^k}\right|_{\text{all } \valpha^m \text{ s.t. } k \rightarrow m}$ i.e. $\alpha^k_{jp} = \sum_{m: k \rightarrow m}\sum_{i\in\cV}\sum_{q = 1}^{d_m} \alpha^m_{iq}\cdot\frac{\partial X^m_{iq}}{\partial X^k_{jp}}$.
\end{lemma}
Note that as per the convention established in the paper, $\sum_{m: k \rightarrow m}\left.\frac{\partial(\valpha^m\odot X^m)}{\partial X^k}\right|_{\text{all } \valpha^m \text{ s.t. } k \rightarrow m}$ implies that while taking the derivatives, $\valpha^m$ values are fixed (treated as a constant) for all $m > k$ such that $k \rightarrow m$. This ``conditioning'' is important since $\valpha^m$ also indirectly depends on $X^k$ if $m > k$.
\textbf{Proof of Lemma~\ref{lem:main}.2 adapted to skip connections}: We again consider two cases and use Definition 1, which tells us that
\[
\alpha^k_{jp} = \sum_{i \in \cV}\sum_{c \in [C]} g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^k_{jp}}
\]
\textbf{Case 1} ($k = K$): Since this is the top-most layer and there are no connections going ahead, let alone skipping ahead, the analysis of this case remains unchanged and continues to yield $\valpha^K = \vG(W^{K+1})^\top$.
\textbf{Case 2} ($k < K$): Using Definition 1 and incorporating all layers to which layer $k$ has a direct or skip connection gives us
\[
\alpha^k_{jp} = \sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^k_{jp}} = \sum_{m: k \rightarrow m}\sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\sum_{l \in \cV}\sum_{q=1}^{d_m}\frac{\partial\hat y_{ic}}{\partial X^m_{lq}}\frac{\partial X^m_{lq}}{\partial X^k_{jp}}
\]
Rearranging the terms gives us
\[
\alpha^k_{jp} = \sum_{m: k \rightarrow m}\sum_{l \in \cV}\sum_{q=1}^{d_m}\left(\sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^m_{lq}}\right)\cdot\frac{\partial X^m_{lq}}{\partial X^k_{jp}} = \sum_{m: k \rightarrow m}\sum_{l \in \cV}\sum_{q=1}^{d_m}\alpha^m_{lq}\cdot\frac{\partial X^m_{lq}}{\partial X^k_{jp}},
\]
where we simply used Definition 1 in the second step. However, the resulting term simply gives us $\alpha^k_{jp} = \sum_{m: k \rightarrow m}\left.\frac{\partial(\valpha^m\odot X^m)}{\partial X^k_{jp}}\right|_{\text{all }\valpha^m \text{ s.t. } k \rightarrow m}$ which conditions on, or treats as a constant, the term $\valpha^m$ for all $m > k$ such that $k \rightarrow m$ according to our notation convention. This finishes the proof of part 2 adapted to skip connections.
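As a sanity check, the adapted recursion can be verified numerically on a tiny linear model (plain matrix products instead of a full GCN, so all Jacobians are explicit); the setup below, with a skip connection $1 \rightarrow 3$, is purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
X0 = rng.normal(size=(n, d))
W1, W2, W3, W4 = (rng.normal(size=(d, d)) for _ in range(4))

X1 = X0 @ W1
X2 = X1 @ W2
X3 = X2 @ W3 + X1                # skip connection 1 -> 3
G = X3 @ W4                      # dL/dY for L = 0.5 * ||X3 @ W4||^2

# Alpha recursion: for linear maps, dX^m / dX^k acts by right-multiplication
# with the weight matrix; the skip contributes an identity Jacobian.
alpha3 = G @ W4.T                # alpha^K = G (W^{K+1})^T
alpha2 = alpha3 @ W3.T           # layer 2 only feeds 2 -> 3
alpha1 = alpha2 @ W2.T + alpha3  # 1 -> 2 plus the skip 1 -> 3

# Direct chain rule through both paths agrees with the recursion.
direct = alpha3 @ W3.T @ W2.T + alpha3
assert np.allclose(alpha1, direct)
\end{verbatim}
For linear maps $X^m = X^{m-1}W^m$, right-multiplying by $(W^m)^\top$ realizes $\partial X^m/\partial X^{m-1}$, while the skip contributes an identity Jacobian, which is exactly the extra summand in the adapted lemma.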
\subsection{\revision{Comparison of {IGLU}\xspace with VR-GCN, MVS-GNN and GNNAutoScale}}
We highlight the key difference between {IGLU}\xspace and earlier works that cache intermediate results for speeding up GNN training below.
\subsubsection{VRGCN v/s IGLU}
\begin{enumerate}
\item \textbf{Update of Cached Variables}: VR-GCN \citep{vrgcn} caches only historical embeddings, and while processing a single mini-batch these historical embeddings are updated for a sampled subset of the nodes. In contrast, {IGLU}\xspace does not update any intermediate results after processing each mini-batch; these are updated only once per epoch, after the parameters of all individual layers have been updated.
\item \textbf{Update of Model Parameters}: VR-GCN's backpropagation step updates the model parameters of all layers after each mini-batch. In contrast, {IGLU}\xspace updates the parameters of only a single layer at a time.
\item \textbf{Variance due to Sampling}: VR-GCN incurs additional variance due to neighborhood sampling, which is then reduced by utilizing historical embeddings for some nodes and computing exact embeddings for the others. {IGLU}\xspace does not incur such variance since it uses all the neighbors.
\end{enumerate}
\subsubsection{MVS-GNN v/s IGLU}
MVS-GNN \citep{mvsgcn} is another work that caches historical embeddings. It follows a nested training strategy wherein a large batch of nodes is first sampled, and mini-batches are then created from this large batch for training. MVS-GNN handles the variance due to this mini-batch creation by performing importance-weighted sampling to construct the mini-batches.
\begin{enumerate}
\item \textbf{Update of Cached Variables and Variance due to Sampling:} Building upon VR-GCN, MVS-GNN reduces the variance in embeddings caused by its sampling of nodes at different layers by caching only embeddings, using historical embeddings for some nodes and recomputing the embeddings for the others. Similar to VR-GCN, these historical embeddings are updated as and when they are part of the mini-batch used for training. As discussed above, {IGLU}\xspace does not incur such variance since it uses all the neighbors.
\item \textbf{Update of Model Parameters: } The update of model parameters in MVS-GNN is similar to that of VR-GCN: the backpropagation step updates the model parameters of all layers for each mini-batch. As described already, {IGLU}\xspace updates the parameters of only a single layer at a time.
\end{enumerate}
\subsubsection{GNNAutoScale v/s IGLU}
GNNAutoScale \citep{fey2021gnnautoscale} extends the idea of caching historical embeddings from VR-GCN and provides a scalable solution.
\begin{enumerate}
\item{\textbf{Update of intermediate representations and model parameters: }}While processing a mini-batch of nodes, GNNAutoScale computes the embeddings for these nodes at each layer while using historical embeddings for the immediate neighbors outside the current mini-batch. After processing each mini-batch, GNNAutoScale updates the historical embeddings for the nodes considered in the mini-batch. Similar to VR-GCN and MVS-GNN, GNNAutoScale updates all parameters at all layers while processing a mini-batch of nodes. In contrast, {IGLU}\xspace does not update intermediate results (intermediate representations in Algorithm 1 and incomplete gradients in Algorithm 2) after processing each mini-batch; these are updated only once per epoch, after the parameters of all individual layers have been updated.
\item{\textbf{Partitioning:}} GNNAutoScale relies on the METIS clustering algorithm for creating mini-batches that minimize inter-connectivity across batches. This is done to minimize access to historical embeddings and reduce staleness. This algorithm tends to bring similar nodes together, potentially resulting in the distributions of clusters differing from that of the original dataset. This may lead to biased estimates of the full gradients while training using mini-batch SGD, as discussed in Section 3.2, Page 5 of Cluster-GCN \citep{clustergcn}. {IGLU}\xspace does not rely on such algorithms since its parameter updates concern only a single layer, and it thereby also avoids this potential additional bias.
\end{enumerate}
\textbf{Similarity of IGLU with GNNAutoScale:} Both methods avoid a neighborhood sampling step, thereby avoiding the additional variance due to neighborhood sampling and making use of all the edges in the graph. Both IGLU and GNNAutoScale propose methods to mitigate the neighborhood explosion problem, although in fundamentally different manners. GNNAutoScale does so by pruning the computation graph, using historical embeddings for neighbors across different layers. IGLU, on the other hand, restricts parameter updates to a single layer at a time by analyzing the gradient structure of GNNs, thereby alleviating the neighborhood explosion problem.
\subsubsection{Summary of {IGLU}\xspace's Technical Novelty and Contrast with Caching based related works}
To summarize, {IGLU}\xspace is fundamentally different from the aforementioned caching-based methods: rather than merely caching historical embeddings, it changes the entire training procedure of GCNs, as follows:
\begin{itemize}
\item The above methods still follow standard \textbf{SGD style training of GCNs} in that they update the model parameters at all the layers after each mini-batch. This is very different from IGLU’s parameter updates that concern \textbf{only a single layer} while processing a mini-batch.
\item IGLU can cache either \textbf{incomplete gradients \textit{or} embeddings}, which is different from the other approaches that cache \textbf{only embeddings.} This provides alternate approaches for training GCNs, and we demonstrate empirically that caching incomplete gradients in fact offers superior performance and convergence.
\item Unlike GNNAutoScale and VR-GCN that update some of the historical embeddings after each mini-batch is processed, IGLU’s \textbf{caching is much more aggressive} and the stale variables are updated \textbf{only once per epoch}, after all parameters for all layers have been updated.
\item Theoretically, we provide \textbf{good convergence rates and bounded bias} even while using \textbf{stale gradients}, which has not been discussed in any prior work.
\end{itemize}
These are the key technical novelties of our proposed method, and they are a consequence of a careful understanding of the gradient structure of GCNs themselves.
\subsubsection{Empirical Comparison with GNNAutoScale}
We provide an empirical comparison of {IGLU}\xspace with GNNAutoScale and summarize the results in Table \ref{results_gnnaus} and Figure \ref{fig:convergence_results_gnnautoscale}. It is important to note that the best results for GNNAutoScale, as reported by its authors, correspond to hyperparameters such as the number of GNN layers and the embedding dimension varying across methods, datasets and architectures. However, for the experiments covered in the main paper, we consistently use 2-layer settings for PPI-Large, Flickr and Reddit and 3-layer settings for the OGBN-Arxiv and OGBN-Proteins datasets for {IGLU}\xspace and the baseline methods, as motivated by the literature. We also ensure that the embedding dimensions are uniform across {IGLU}\xspace and the baselines. Therefore, to ensure a fair comparison, we perform additional experiments with these parameters for GNNAutoScale set to values consistent with our experiments for IGLU and the baselines. We train GNNAutoScale with three variants, namely GCN, GCNII \citep{gcn2} and PNA \citep{pna}, and report the results for each variant. We also note here that GNNAutoScale was implemented in PyTorch \citep{pytorch} while {IGLU}\xspace was implemented in TensorFlow \citep{tensorflow}. While this makes a wall-clock time comparison unsuitable as discussed in Appendix \ref{additional_baselines_text}, we still provide one for completeness. We also include the best performance numbers for GNNAutoScale on these datasets (as reported by the authors in Table 5, Page 9 of the GNNAutoScale paper) across different architectures. Note that we do not provide comparisons on the OGBN-Proteins dataset since we ran into errors while trying to incorporate the dataset into the official implementation of GNNAutoScale.
\textbf{Results:} Figure \ref{fig:convergence_results_gnnautoscale} provides convergence plots comparing {IGLU}\xspace with the different architectures of GNNAutoScale, and Table \ref{results_gnnaus} summarizes the test performance on the PPI-Large, Flickr, Reddit and OGBN-Arxiv (transductive) datasets. From the table, we observe that {IGLU}\xspace offers competitive performance compared to the GCN variant of GAS on the majority of the datasets. We also observe from Figure \ref{fig:convergence_results_gnnautoscale} that {IGLU}\xspace offers significant improvements in training time with rapid early convergence on the validation set. We note that more complex architectures such as GCNII and PNA offer GNNAutoScale improvements in performance. {IGLU}\xspace, being architecture agnostic, can be combined with these architectures for further improvements in performance; we leave this as an avenue for future work.
\begin{table}[ht]
\caption{\revision{\textbf{Test Accuracy of {IGLU}\xspace compared to GNNAutoScale}.} * - We perform experiments using GNNAutoScale in a setting identical to {IGLU}\xspace, with 2-layer models on the PPI-Large, Reddit and Flickr datasets and 3-layer models on the OGBN-Arxiv dataset (transductive), and report the performance. For completeness, we also include the best results from GNNAutoScale for comparison. We were unable to perform experiments with GNNAutoScale on the Proteins dataset and hence omit it from the comparison. We observe that {IGLU}\xspace performs competitively with GNNAutoScale for models like GCN on most of the datasets. {IGLU}\xspace, being architecture agnostic, can further be combined with varied architectures like GCNII and PNA to obtain the gains offered by these architectures.}
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{ccccc}
\hline
\textbf{Algorithm} & \textbf{PPI-Large} & \textbf{Reddit} & \textbf{Flickr} & \textbf{Arxiv (Trans) } \\
\hline \hline
\multicolumn{5}{c}{Our Experiments*} \\
\hline
GAS-GCN & 0.983 & 0.954 & 0.533 & 0.710 \\[0.05cm]
GAS-GCNII & 0.969 & 0.964 & 0.539 & 0.724 \\[0.05cm]
GAS-PNA & 0.917 & 0.970 & 0.555 & 0.714 \\[0.05cm]
\hline
\multicolumn{5}{c}{Best Results: GNNAutoScale, (From Table 5, Page 9)} \\
\hline
GAS-GCN & 0.989 & 0.954 & 0.540 & 0.716 \\[0.05cm]
GAS-GCNII & 0.995 & 0.967 & 0.562 & 0.730 \\[0.05cm]
GAS-PNA & 0.994 & 0.971 & 0.566 & 0.725 \\[0.05cm]
\hline \hline
{IGLU}\xspace & 0.987 $\pm$ 0.004 & 0.964 $\pm$ 0.001 & 0.515 $\pm$ 0.001 & 0.719 $\pm$ 0.002\\[0.05cm]
\hline
\hline
\end{tabular}
}
\label{results_gnnaus}\vspace*{10pt}
\end{table}
\begin{figure}[ht]
\includegraphics[width=1.0\textwidth]{figures/valid_vs_time_gas_20.pdf}\vspace*{-5pt}
\caption{\revision{\textbf{Wall Clock Time vs Validation Accuracy on different datasets as compared to GNNAutoScale.}} We perform experiments using GNNAutoScale in a setting identical to {IGLU}\xspace with 2-layer models on PPI-Large, Reddit and Flickr datasets and 3-layer models on OGBN-Arxiv dataset and report the performance. {IGLU}\xspace offers competitive performance and faster convergence across the datasets. }
\label{fig:convergence_results_gnnautoscale}
\end{figure}
\subsection{Detailed Description of Algorithms 1 and 2}
We present Algorithms~\ref{algo:bp} and \ref{algo:inv} again below (as Algorithms~\ref{algo:bp-full} and \ref{algo:inv-full} respectively) with details of each step.
\textbf{{IGLU}\xspace: backprop order}
Algorithm~\ref{algo:bp-full} implements the {IGLU}\xspace algorithm in its backprop variant. Node embeddings $X^k, k \in [K]$ are calculated and kept stale; they are not updated even when the model parameters $E^k, k \in [K]$ get updated during the epoch. On the other hand, the incomplete task gradients $\valpha^k, k \in [K]$ are kept refreshed using the recursive formulae given in Lemma~\ref{lem:main}. For the sake of simplicity, the algorithm has been presented with a staleness duration of one epoch, i.e. the $X^k$ are refreshed at the beginning of each epoch. Variants employing shorter or longer durations of staleness can also be explored, simply by updating $X^k, k \in [K]$ say twice in an epoch or else once every two epochs. A minimal runnable sketch of this order of updates on a toy model is given after the algorithm.
\begin{algorithm}[ht]
\caption{{IGLU}\xspace: backprop order}
\label{algo:bp-full}
{
\begin{algorithmic}[1]
\REQUIRE GCN $\cG$, initial features $X^0$, task loss $\cL$
\STATE Initialize model parameters $E^k, k \in [K], W^{K+1}$
\FOR{epoch = $1, 2, \ldots$}
\FOR{$k = 1 \ldots K$}
\STATE Refresh $X^k \leftarrow f(X^{k-1}; E^k)$ \COMMENT{$X^k$ will be kept stale till next epoch}
\ENDFOR
\STATE $\hat\vY \leftarrow X^KW^{K+1}$ \COMMENT{Predictions}
\STATE $\vG \leftarrow \bs{\frac{\partial\ell_i}{\partial \hat y_{ic}}}_{N \times C}$ \COMMENT{The loss derivative matrix}
\STATE Compute $\frac{\partial\cL}{\partial W^{K+1}} \leftarrow (X^K)^\top\vG$ \COMMENT{Using Lemma~\ref{lem:main}.1 here}
\STATE Update $W^{K+1} \leftarrow W^{K+1} - \eta\cdot\frac{\partial\cL}{\partial W^{K+1}}$
\STATE Refresh $\valpha^K \leftarrow \vG(W^{K+1})^\top$ \COMMENT{Using Lemma~\ref{lem:main}.2 here}
\FOR{$k = K \ldots 2$}
\STATE Compute $\frac{\partial\cL}{\partial E^k} \leftarrow \left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$, \COMMENT{Using Lemma~\ref{lem:main}.1 here}
\STATE Update $E^k \leftarrow E^k - \eta\cdot\frac{\partial\cL}{\partial E^k}$
\STATE Refresh $\valpha^{k-1} \leftarrow \left.\frac{\partial(\valpha^k\odot X^k)}{\partial X^{k-1}}\right|_{\valpha^k}$ \COMMENT{Using Lemma~\ref{lem:main}.2 here}
\ENDFOR
\ENDFOR
\end{algorithmic}
}
\end{algorithm}
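The following is a minimal runnable sketch of the backprop order of updates on a toy model with plain linear layers (graph aggregation and mini-batching dropped for brevity); it is a sketch under these simplifying assumptions, not the actual implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, K, eta = 8, 4, 3, 0.05
X0 = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
E = [rng.normal(size=(d, d)) * 0.3 for _ in range(K)]  # layer weights E^k
W = rng.normal(size=(d, d)) * 0.3                      # final-layer W^{K+1}

for epoch in range(20):
    # Forward pass: refresh X^k once; kept stale for the rest of the epoch.
    X = [X0]
    for k in range(K):
        X.append(X[k] @ E[k])
    G = X[K] @ W - Y              # loss derivative for L = 0.5||X^K W - Y||^2
    W -= eta * X[K].T @ G         # update W^{K+1} (Lemma 1.1)
    alpha = G @ W.T               # refresh alpha^K with the new W (Lemma 1.2)
    for k in reversed(range(K)):  # top-down over the layers
        E[k] -= eta * X[k].T @ alpha  # update E^k, alpha^k held fixed
        alpha = alpha @ E[k].T        # refresh alpha^{k-1} with the new E^k
    print(epoch, round(0.5 * float(np.sum(G ** 2)), 4))
\end{verbatim}
Note how \texttt{X} is recomputed only at the top of the epoch while \texttt{alpha} is refreshed after every parameter update, mirroring the refresh/update pattern of Algorithm~\ref{algo:bp-full}.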
\textbf{{IGLU}\xspace: inverted order}
Algorithm~\ref{algo:inv-full} implements the {IGLU}\xspace algorithm in its inverted variant. Incomplete task gradients $\valpha^k, k \in [K]$ are calculated once at the beginning of every epoch and kept stale; they are not updated even when the node embeddings $X^k, k \in [K]$ get updated during the epoch. On the other hand, the node embeddings $X^k, k \in [K]$ are kept refreshed. For the sake of simplicity, the algorithm has been presented with a staleness duration of one epoch, i.e. the $\valpha^k$ are refreshed at the beginning of each epoch. Variants employing shorter or longer durations of staleness can also be explored, simply by updating the $\valpha^k$ say twice in an epoch or else once every two epochs. This has been explored in Section~\ref{ablation} (see the paragraph on ``Analysis of Degrees of Staleness''). A minimal runnable sketch of this order of updates is given after the algorithm.
\begin{algorithm}[t]
\caption{{IGLU}\xspace: inverted order}
\label{algo:inv-full}
{
\begin{algorithmic}[1]
\REQUIRE GCN $\cG$, initial features $X^0$, task loss $\cL$
\STATE Initialize model parameters $E^k, k \in [K], W^{K+1}$
\FOR{$k = 1 \ldots K$}
\STATE $X^k \leftarrow f(X^{k-1}; E^k)$ \COMMENT{Do an initial forward pass}
\ENDFOR
\FOR{epoch = $1,2,\ldots$}
\STATE $\hat\vY \leftarrow X^KW^{K+1}$ \COMMENT{Predictions}
\STATE $\vG \leftarrow \bs{\frac{\partial\ell_i}{\partial \hat y_{ic}}}_{N \times C}$ \COMMENT{The loss derivative matrix}\newline
\COMMENT{Use Lemma~\ref{lem:main}.2 to refresh $\valpha^k, k \in [K]$.}
\STATE Refresh $\valpha^K \leftarrow \vG(W^{K+1})^\top$
\FOR{$k = K \ldots 2$}
\STATE Refresh $\valpha^{k-1} \leftarrow \left.\frac{\partial(\valpha^k\odot X^k)}{\partial X^{k-1}}\right|_{\valpha^k}$
\ENDFOR\newline
\COMMENT{These $\valpha^k, k \in [K]$ will now be kept stale till next epoch}
\FOR{$k = 1 \ldots K$}
\STATE Compute $\frac{\partial\cL}{\partial E^k} \leftarrow \left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$, \COMMENT{Using Lemma~\ref{lem:main}.1 here}
\STATE Update $E^k \leftarrow E^k - \eta\cdot\frac{\partial\cL}{\partial E^k}$
\STATE Refresh $X^k \leftarrow f(X^{k-1}; E^k)$
\ENDFOR
\STATE Compute $\frac{\partial\cL}{\partial W^{K+1}} \leftarrow (X^K)^\top\vG$ \COMMENT{Using Lemma~\ref{lem:main}.1 here}
\STATE Update $W^{K+1} \leftarrow W^{K+1} - \eta\cdot\frac{\partial\cL}{\partial W^{K+1}}$
\ENDFOR
\end{algorithmic}
}
\end{algorithm}
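Analogously, the following is a minimal sketch of the inverted order on the same toy linear setup (again a simplified illustration, not the actual implementation); here the $\valpha^k$ are computed once per epoch and kept stale while the embeddings are refreshed layer by layer.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, K, eta = 8, 4, 3, 0.05
X0 = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
E = [rng.normal(size=(d, d)) * 0.3 for _ in range(K)]
W = rng.normal(size=(d, d)) * 0.3

X = [X0] + [None] * K          # initial forward pass
for k in range(K):
    X[k + 1] = X[k] @ E[k]

for epoch in range(20):
    G = X[K] @ W - Y           # loss derivative matrix
    alpha = [None] * (K + 1)   # refresh all alpha^k once; stale afterwards
    alpha[K] = G @ W.T
    for k in reversed(range(1, K)):
        alpha[k] = alpha[k + 1] @ E[k].T
    for k in range(K):         # bottom-up: update E^k, refresh X^{k+1}
        E[k] -= eta * X[k].T @ alpha[k + 1]
        X[k + 1] = X[k] @ E[k]
    W -= eta * X[K].T @ G      # finally update W^{K+1}
\end{verbatim}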
\subsection{\revision{OGBN-Proteins: Validation performance at a mini-batch level}}
\revision{The performance analysis in the main paper was plotted at the coarse granularity of an epoch. We refer to an epoch as one iteration of steps 6 to 18 (both inclusive) in Algorithm \ref{algo:inv-full}. For a finer analysis of {IGLU}\xspace's performance on the OGBN-Proteins dataset, we measure the Validation ROC-AUC at the granularity of a mini-batch. As mentioned in the ``SGD Implementation'' paragraph on Page 5 below the algorithm description in the main paper, steps 14 and 18 in Algorithm \ref{algo:inv-full} are implemented using mini-batch SGD. Recall that OGBN-Proteins uses a 3-layer GCN. For the Proteins dataset we have $\sim$ 170 mini-batches for training. We update the parameters for each layer using all the mini-batches, from the layer closest to the input to the layer closest to the output, as detailed in Algorithm \ref{algo:inv-full}. To generate predictions, we compute partial forward passes after the parameters of each layer are updated. We plot the validation ROC-AUC as the first epoch progresses in Figure \ref{fig:proteins_mb}. We observe that when the layer closest to the input is trained (Layer 1 in the figure), {IGLU}\xspace has an ROC-AUC close to 0.5 on the validation set. Subsequently, once the second layer (Layer 2 in the figure) is trained, the validation ROC-AUC improves from $\sim$ 0.51 to $\sim$ 0.57, and finally, once the layer closest to the output is trained (Layer 3 in the figure), the ROC-AUC progresses quickly to a high value of $\sim$ 0.81. In the figure in the main paper, the high validation score reflects the result at the end of the first epoch. Training the GCN using a total of $\sim$ 510 mini-batches ($\sim$ 170 per layer) takes approximately 5 seconds, as depicted in Figure \ref{fig:convergence_results} in the main paper}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/proteins_valid_vs_mb.pdf}\vspace*{-5pt}
\end{center}
\caption{\revision{\textbf{Fine-grained Validation ROC-AUC for IGLU on the Proteins Dataset for Epoch 1.}} We depict the Validation ROC-AUC at the granularity of a mini-batch for the first epoch. We observe that the Validation ROC-AUC begins at a value close to 0.5 and quickly reaches 0.81 by the end of the first epoch. As mentioned in the text, Proteins uses a 3-layer GCN and each layer processes $\sim$ 170 mini-batches.}
\label{fig:proteins_mb}
\end{figure}
\color{black}
\section{Dataset Statistics and Additional Experimental Results}
\label{data_baselines}
\subsection{Dataset Statistics}
Table \ref{dataset} provides details on the benchmark node classification datasets used in the experiments. The following five benchmark datasets were used to empirically demonstrate the effectiveness of {IGLU}\xspace: predicting the communities to which different posts belong in Reddit\footnote{\href{http://snap.stanford.edu/graphsage/reddit.zip}{http://snap.stanford.edu/graphsage/reddit.zip}} \citep{graphsage}, classifying protein functions across various biological protein-protein interaction graphs in PPI-Large\footnote{\href{http://snap.stanford.edu/graphsage/ppi.zip}{http://snap.stanford.edu/graphsage/ppi.zip}} \citep{graphsage}, categorizing types of images based on descriptions and common properties in Flickr\footnote{\href{https://drive.google.com/drive/folders/1apP2Qn8r6G0jQXykZHyNT6Lz2pgzcQyL}{https://github.com/GraphSAINT/GraphSAINT - Google Drive Link}} \citep{graphsaint}, predicting paper-paper associations in OGBN-Arxiv\footnote{\href{https://ogb.stanford.edu/docs/nodeprop/\#ogbn-arxiv}{https://ogb.stanford.edu/docs/nodeprop/\#ogbn-arxiv}} \citep{ogb} and categorizing meaningful associations between proteins in OGBN-Proteins\footnote{\href{https://ogb.stanford.edu/docs/nodeprop/\#ogbn-proteins}{https://ogb.stanford.edu/docs/nodeprop/\#ogbn-proteins}} \citep{ogb}.
\begin{table}[t]
\caption{\textbf{Datasets used in experiments along with their statistics.} MC refers to a multi-class problem, whereas ML refers to a multi-label problem.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Dataset & \# Nodes & \# Edges & Avg. Degree & \# Features & \# Classes & Train/Val/Test \\[0.05cm]
\hline
\hline
PPI-Large & 56944 & 818716 & 14 & 50 & 121 (ML) & 0.79/0.11/0.10\\[0.05cm]
Reddit & 232965 & 11606919 & 60 & 602 & 41 (MC) & 0.66/0.10/0.24\\[0.05cm]
Flickr & 89250 & 899756 & 10 & 500 & 7 (MC) & 0.5/0.25/0.25\\[0.05cm]
OGBN-Proteins & 132534 & 39561252 & 597 & 8 & 112 (ML) & 0.65/0.16/0.19\\[0.05cm]
OGBN-Arxiv & 169343 & 1166243 & 13 & 128 & 40 (MC) & 0.54/0.18/0.28\\[0.05cm]
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{dataset}
\end{table}
\subsection{Comparison with additional Baselines}\label{additional_baselines_text}
In addition to the baselines mentioned in Table \ref{results}, Table \ref{results_additional} compares the test-set performance of {IGLU}\xspace to that of LADIES \citep{ladies}, L2-GCN \citep{l2gcn}, AS-GCN \citep{asgcn}, MVS-GNN \citep{mvsgcn}, FastGCN \citep{fastgcn}, SIGN \citep{frasca2020sign}, PPRGo \citep{pprgo} and Bandit Sampler \citep{liu2020bandit}. However, a wall-clock time comparison with these methods is not provided since the author implementations of LADIES, L2GCN, MVS-GNN and SIGN are in PyTorch \citep{pytorch}, which has been shown to be less efficient than TensorFlow \citep{clustergcn, tensorflow} for GCN applications. Also, the official AS-GCN, FastGCN and Bandit Sampler implementations released by the authors were for 2-layer models only, whereas some datasets such as Proteins and Arxiv require 3-layer models for experimentation. Attempts to generalize the code to a 3-layer model ran into runtime errors; hence the missing results are denoted by ** in Table \ref{results_additional} and these methods are not considered for the timing analysis. MVS-GNN also runs into a runtime error on the Proteins dataset (denoted by ||). \emph{{IGLU}\xspace continues to significantly outperform all additional baselines on all the datasets.}
\begin{table}[ht]
\caption{\textbf{{Performance on Test Set for {IGLU}\xspace compared to additional algorithms.}} The metric is ROC-AUC for Proteins and Micro-F1 for the others. {IGLU}\xspace retains the state-of-the-art results across all datasets even when compared to these new baselines. MVS-GNN ran into a runtime error on the Proteins dataset \textbf{(denoted by ||)}. AS-GCN, FastGCN and BanditSampler run into a runtime error on datasets that require more than two layers \textbf{(denoted by $**$)}. Please refer to Section \ref{additional_baselines_text} for details.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccc}
\hline
Algorithm & PPI-Large & Reddit & Flickr & Proteins & Arxiv \\
\hline \hline
LADIES & 0.548 $\pm$ 0.011 & 0.923 $\pm$ 0.008 & 0.488 $\pm$ 0.012 & 0.636 $\pm$ 0.011 & 0.667 $\pm$ 0.002 \\[0.05cm]
L2GCN & 0.923 $\pm$ 0.008 & 0.938 $\pm$ 0.001 & 0.485 $\pm$ 0.001 & 0.531 $\pm$ 0.001 & 0.656 $\pm$ 0.004 \\[0.05cm]
ASGCN & 0.687 $\pm$ 0.001 & 0.958 $\pm$ 0.001 & 0.504 $\pm$ 0.002 & ** & ** \\[0.05cm]
MVS-GNN & 0.880 $\pm$ 0.001 & 0.950 $\pm$ 0.001 & 0.507 $\pm$ 0.002 & || & 0.695 $\pm$ 0.003\\
FastGCN & 0.513 $\pm$ 0.032 & 0.924 $\pm$ 0.001 & 0.504 $\pm$ 0.001 & ** & ** \\[0.05cm]
SIGN & 0.970 $\pm$ 0.003 & \textbf{0.966 $\pm$ 0.003} & 0.510 $\pm$ 0.001 & 0.665 $\pm$ 0.008 & 0.649 $\pm$ 0.003 \\[0.05cm]
PPRGo & 0.626 $\pm$ 0.002 & 0.946 $\pm$ 0.001 & 0.501 $\pm$ 0.001 & 0.659 $\pm$ 0.006 & 0.678 $\pm$ 0.003\\[0.05cm]
BanditSampler & 0.905 $\pm$ 0.003 & 0.957 $\pm$ 0.000 & 0.513 $\pm$ 0.001 & ** & ** \\[0.05cm]
\hline \hline
IGLU & \textbf{0.987} $\pm$ 0.004 & 0.964 $\pm$ 0.001
& \textbf{0.515} $\pm$ 0.001 & \textbf{0.784} $\pm$ 0.004 & \textbf{0.718} $\pm$ 0.001\\[0.05cm]
\hline
\end{tabular}
}
\label{results_additional}
\end{table}
\begin{table}[ht]
\caption{\textbf{Per epoch time (in seconds) for different methods as the number of layers increases on the OGBN-Proteins dataset.} ClusterGCN ran into a runtime error on this dataset as noted earlier. VRGCN ran into a runtime error for a 4-layer model (\textbf{denoted by ||}). {IGLU}\xspace and GraphSAINT scale almost linearly with the number of layers. It should be noted that these times strictly include only optimization time. GraphSAINT has a much lower per-epoch time than {IGLU}\xspace because of the large sizes of subgraphs per batch ($\sim$ 10000 nodes), while {IGLU}\xspace uses mini-batches of size 512. This results in far fewer gradient updates within an epoch for GraphSAINT when compared with {IGLU}\xspace, resulting in a much smaller per-epoch time but requiring more epochs overall. Please refer to Section \ref{per_epoch_analysis} for details.}
\vspace{2mm}
\centering{
\begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{\textbf{Number of Layers}} \\[0.1cm] \hline
\textbf{Method} & \textbf{2} & \textbf{3} & \textbf{4} \\[0.1cm] \hline
GraphSAGE & 2.6 & 14.5 & 163.1 \\[0.1cm] \hline
VR-GCN & 2.3 & 21.5 & || \\[0.1cm] \hline
GraphSAINT & 0.45 & 0.61 & 0.76 \\[0.1cm] \hline \hline
IGLU & 2.97 & 5.27 & 6.99 \\[0.1cm] \hline
\end{tabular}}
\label{scaling_epoch_time}
\end{table}
\begin{table}[t]
\caption{\textbf{Test Performance (ROC-AUC) at Best Validation for different methods as the number of layers increases on the OGBN-Proteins Dataset.} Results are reported for a single run, but trends were observed to remain consistent across repeated runs. VRGCN ran into a runtime error for 4 layers (\textbf{denoted by ||}). {IGLU}\xspace offers a steady increase in performance as the number of layers increases, as well as state-of-the-art performance throughout. GraphSAGE shows a decrease in performance on moving from 3 to 4 layers, while GraphSAINT shows only a marginal increase in performance. Please refer to Section \ref{test_auc_convergence} for details.}
\vspace{3mm}
\centering{
\begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{\textbf{Number of Layers}} \\ [0.1cm]\hline
\textbf{Method} & \textbf{2} & \textbf{3} & \textbf{4} \\ [0.1cm]\hline
GraphSAGE & 0.755 & 0.759 & 0.742 \\ [0.1cm]\hline
VR-GCN & 0.732 & 0.749 & || \\ [0.1cm] \hline
GraphSAINT & 0.752 & 0.764 & 0.767 \\ [0.1cm] \hline \hline
IGLU & \textbf{0.768} & \textbf{0.783} & \textbf{0.794} \\ \hline
\end{tabular}}
\label{scaling_test_performance}
\vspace{5mm}
\end{table}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/scaling_test_convergence_auc_layers_valid_cutoff_condense.pdf}\vspace*{-5pt}
\end{center}
\caption{\textbf{Test Convergence AUC plots across different numbers of layers on the OGBN-Proteins dataset.} {IGLU}\xspace has consistently higher AUTC values compared to the other baselines, demonstrating increased stability, faster convergence and better generalization. GraphSAGE suffers from the neighborhood explosion problem and its training became very slow as noted earlier; this results in a decrease in its AUTC while going from 3 to 4 layers. GraphSAGE's AUTC for 4 layers is only 0.313 and is thus not visible in the plot. VRGCN also suffers from the neighborhood explosion problem and runs into runtime errors for a 4-layer model. ClusterGCN runs into a runtime error on OGBN-Proteins for all of 2, 3 and 4 layers and is therefore not present in this analysis. Please refer to Section \ref{test_auc_convergence} for details.}
\label{fig:scaling_test_auc_analysis}
\end{figure}
\subsection{Timing Analysis for scaling to more layers}\label{app:deeper}
To compare the scalability and performance of different algorithms for deeper models, models with 2, 3 and 4 layers were trained for {IGLU}\xspace and the baseline methods. {IGLU}\xspace was observed to offer a per-epoch time that scaled roughly \textbf{linearly with the number of layers}, as well as to offer the \textbf{highest gain in test performance as the number of layers was increased}.
\subsubsection{Per Epoch Training Time}\label{per_epoch_analysis}
Unlike neighbor sampling methods like VRGCN and GraphSAGE, {IGLU}\xspace does not suffer from the neighborhood explosion problem as the number of layers increases, since {IGLU}\xspace updates involve only a single layer at any given time. We note that GraphSAINT and ClusterGCN also do not suffer from the neighborhood explosion problem directly, since they both operate by creating GCNs on subgraphs. However, these methods may be compelled to select large subgraphs or else suffer from poor convergence. To demonstrate {IGLU}\xspace's effectiveness in solving the neighborhood explosion problem, the per-epoch training times on the Proteins dataset are summarized as a function of the number of layers in Table \ref{scaling_epoch_time}. A comparison with ClusterGCN could not be provided since it ran into runtime errors on this dataset. In Table \ref{scaling_epoch_time}, while going from 2 to 4 layers, GraphSAINT was observed to require $\sim 1.6 \times $ more time per epoch while {IGLU}\xspace required $\sim 2.3 \times$ more time per epoch, with both methods scaling almost linearly with respect to the number of layers, as expected. However, GraphSAGE suffered a $\sim 62 \times$ increase in time taken per epoch in this case, suffering from the neighborhood explosion problem. VRGCN ran into a run-time error for the 4-layer setting (denoted by || in Table \ref{scaling_epoch_time}). Nevertheless, even while going from 2 layers to 3 layers, VRGCN and GraphSAGE are clearly seen to suffer from the neighborhood explosion problem, resulting in increases in training time per epoch of almost $\sim 9.4 \times$ and $\sim 5.6 \times$ respectively.
We note that the times for GraphSAINT in Table \ref{scaling_epoch_time} are significantly smaller than those of {IGLU}\xspace, even though the earlier discussion reported {IGLU}\xspace as having the fastest convergence. There is no contradiction: GraphSAINT operates with very large subgraphs, with each subgraph containing almost 10\% of the nodes of the entire training graph ($\sim 10000$ nodes in a mini-batch), while {IGLU}\xspace operates with mini-batches of size 512, resulting in {IGLU}\xspace performing many more gradient updates within an epoch than GraphSAINT. Consequently, \textbf{{IGLU}\xspace also takes fewer epochs to converge to a better solution than GraphSAINT}, thus compensating for the differences in time taken per epoch.
\subsubsection{Test Convergence AUC} \label{test_auc_convergence}
This section explores the effect of increasing the number of layers on convergence rate, optimization time and final accuracy. To jointly estimate the efficiency of a method in terms of wall clock time and test performance achieved, the area under the test convergence plot (AUTC) was computed for each method. A method that converges rapidly, and to better performance levels, has a higher AUTC than a method that converges to suboptimal values or converges very slowly. To time all methods fairly, each method was offered triple the time it took the best method to reach its highest validation score. Defining the cut-off time this way ensures that methods that may not have rapid early convergence still get a fair chance to improve by having better final performance, while simultaneously penalizing methods that have rapid early convergence but poor final performance. We rescale the wall clock time to lie between 0 and 1, where 0 refers to the start of training and 1 to the cut-off time.
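For concreteness, the following is a minimal sketch of how such an AUTC score can be computed (the function names and the sample curve are illustrative, not our evaluation code): the last observed score is held until the cut-off and time is rescaled to $[0,1]$ before integrating.
\begin{verbatim}
import numpy as np

def trapezoid(y, x):
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def autc(times, scores, cutoff):
    t, s = np.asarray(times, float), np.asarray(scores, float)
    keep = t <= cutoff
    t, s = t[keep], s[keep]
    # Hold the last observed score until the cut-off, rescale time to [0, 1].
    t, s = np.append(t, cutoff), np.append(s, s[-1])
    return trapezoid(s, t / cutoff)

# Cut-off = 3x the time the best method took to reach its best validation
# score; the curve below is a made-up example.
print(autc([0, 10, 30, 60], [0.50, 0.70, 0.75, 0.76], cutoff=90.0))
\end{verbatim}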
\textbf{Results:}
Figure \ref{fig:scaling_test_auc_analysis} summarizes the AUTC values. {IGLU}\xspace consistently obtains higher AUTC values than all other methods for all numbers of layers, demonstrating its stability, rapid convergence during the early phases of training and ability to generalize better than the other baselines. GraphSAGE suffered from neighborhood explosion, leading to increased training times and hence decreased AUTC values as the number of layers increases. VR-GCN also suffered from the same issue, and additionally ran into a run-time error for 4-layer models.
\textbf{Test Performance with Increasing Layers:} Table \ref{scaling_test_performance} summarizes the final test performance of the different methods across different numbers of layers. The performance of some methods is inconsistent as the depth increases, whereas \textit{{IGLU}\xspace consistently outperforms all the baselines in this case as well}, with gains in performance as the number of layers increases, \textit{making it an attractive technique for training deeper GCN models}.
\subsection{Scalability to Larger Datasets: OGBN - Products}\label{app:largegraph}
{To demonstrate the ability of IGLU to scale to very large datasets, we performed experiments on the OGBN-Products dataset \citep{ogb}, one of the largest datasets in the OGB collection, with its statistics summarized in Table \ref{dataset_products}.}
\begin{table}[ht]
\caption{\textbf{{Statistics for the OGBN-Products dataset}}. MC refers to a multi-class problem, whereas ML refers to a multi-label problem.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Dataset & \# Nodes & \# Edges & Avg. Degree & \# Features & \# Classes & Train/Val/Test \\[0.05cm]
\hline
\hline
OGBN-Products & 2,449,029 & 61,859,140 & 50.5 & 100 & 47 (MC) & 0.08/0.02/0.90\\[0.05cm]
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{dataset_products}
\end{table}
\begin{table}[ht]
\caption{\textbf{{Performance on the OGBN-Products Test Set for {IGLU}\xspace compared to baseline algorithms.}} {IGLU}\xspace outperforms all the baseline methods on this significantly larger dataset as well.}
\centering
\begin{tabular}{cc}
\hline
Algorithm & Test Micro-F1\\
\hline \hline
GCN & 0.760 $\pm$ 0.002 \\[0.05cm]
GraphSAGE & 0.787 $\pm$ 0.004 \\[0.05cm]
ClusterGCN & 0.790 $\pm$ 0.003 \\
GraphSAINT & 0.791 $\pm$ 0.002 \\[0.05cm]
SIGN (3,3,0) & 0.771 $\pm$ 0.001 \\[0.05cm]
SIGN (5,3,0) & 0.776 $\pm$ 0.001 \\[0.05cm]
\hline \hline
IGLU & \textbf{0.793} $\pm$ 0.003 \\[0.05cm]
\hline
\end{tabular}
\label{results_products}
\end{table}
{To demonstrate scalability, we conducted experiments in the transductive setup since this setup involves using the full graph. In addition, this was the original setup in which the dataset was benchmarked, therefore allowing for a direct comparison with the baselines (results taken from Table 4 in OGB \citep{ogb} and Table 6 in SIGN \citep{frasca2020sign}).
We summarize the performance results in Table \ref{results_products}, reporting Micro-F1 as the metric. We however do not provide timing comparisons with the baseline methods since {IGLU}\xspace is implemented in TensorFlow while the baselines in the original benchmark \cite{ogb} are implemented in PyTorch, which renders a direct comparison of wall-clock time unsuitable. Please refer to Appendix \ref{additional_baselines_text} for more details.}
{We observe that IGLU is able to scale to the OGBN-Products dataset with over 2.4 million nodes and outperforms all of the baseline methods.}
\subsection{Applicability of IGLU in the transductive setting: OGBN-Arxiv and OGBN-Proteins}\label{app:applicability}
{In addition to the results in the inductive setting reported in the main paper, we perform additional experiments on the OGBN-Arxiv and OGBN-Proteins datasets in the transductive setting to demonstrate IGLU’s applicability across inductive and transductive tasks, and compare the performance of IGLU to that of transductive baseline methods in Table \ref{results_transductive} (results taken from OGB \citep{ogb}).}
\begin{table}[ht]
\caption{\textbf{{Comparison of IGLU’s Test performance with the baseline methods in the transductive setting on the OGBN-Arxiv and OGBN-Proteins datasets.}} The metric is Micro-F1 for OGBN-Arxiv and ROC-AUC for OGBN-Proteins. }
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{ccc}
\hline
Algorithm & OGBN-Arxiv & OGBN-Proteins\\
\hline \hline
GCN & 0.7174 $\pm$ 0.0029 & 0.7251$\pm$ 0.0035 \\[0.05cm]
GraphSAGE & 0.7149 $\pm$ 0.0027 & 0.7768 $\pm$ 0.0020\\[0.05cm]
\hline \hline
IGLU & \textbf{0.7193} $\pm$ 0.0018 & \textbf{0.7840} $\pm$ 0.0061 \\[0.05cm]
\hline
\end{tabular}
}
\label{results_transductive}
\end{table}
{We observe that even in the transductive setting, IGLU outperforms the baseline methods on both the OGBN-Arxiv and OGBN-Proteins datasets.}
\subsection{Architecture agnostic nature of IGLU}\label{app:archagnostic}
{To demonstrate the applicability of IGLU to a wide-variety of architectures, we perform experiments on IGLU with Graph Attention Networks (GAT) \citep{gat}, GCN \citep{gcn} and GraphSAGE \citep{graphsage} based architectures and summarize the results for the same in Table \ref{results_gat} and \ref{results_gcn_sage} respectively as compared to these baseline methods. We use the Cora, Citeseer and Pubmed datasets as originally used in \cite{gat} for comparison with the GAT based architecture and the OGBN-Arxiv dataset for comparison with GCN and GraphSAGE based architectures (baseline results as originally reported in \citep{ogb}).}
\begin{table}[ht]
\centering
\parbox{.49\linewidth}{
\caption{\textbf{{Comparison of IGLU+GAT's test performance with the baseline GAT on different datasets.}}}
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{cccc}
\hline
Algorithm & Cora & Citeseer & Pubmed\\
\hline \hline
GAT & 0.823 $\pm$ 0.007 & 0.711 $\pm$ 0.006 & 0.786 $\pm$ 0.004 \\[0.05cm]
\hline \hline
\textbf{IGLU + GAT} & \textbf{0.829 } $\pm$ 0.004 & \textbf{0.717} $\pm$ 0.005 & \textbf{0.787} $\pm$ 0.002 \\[0.05cm]
\hline
\end{tabular}
}
\label{results_gat}
}
\hfill
\parbox{.49\linewidth}{
\caption{\textbf{{Comparison of IGLU's test performance using GCN and GraphSAGE architectures against the baseline methods on the OGBN-Arxiv dataset.}}}
\centering
\resizebox{0.35\textwidth}{!}{
\begin{tabular}{cc}
\hline
Algorithm & OGBN-Arxiv \\
\hline \hline
GCN & 0.7174 $\pm$ 0.0029\\[0.05cm]
GraphSAGE & 0.7149 $\pm$ 0.0027\\[0.05cm]
\hline \hline
\textbf{IGLU + GCN} & \textbf{0.7187} $\pm$ 0.0014 \\[0.05cm]
\textbf{IGLU + GraphSAGE} & \textbf{0.7155} $\pm$ 0.0032 \\[0.05cm]
\hline
\end{tabular}
}\label{results_gcn_sage}
}
\end{table}
{We observe that IGLU+GAT, IGLU+GCN and IGLU+GraphSAGE outperform the corresponding baseline methods across datasets, thereby demonstrating the architecture-agnostic nature of {IGLU}\xspace.}
\subsection{Experiments Using a Smooth Activation Function: GELU} \label{app:smooth}
{To understand the effect of using non-smooth vs smooth activation functions with {IGLU}\xspace, we perform experiments using the GELU \citep{hendrycks2020gaussian} activation function, which is smooth and in line with the objective smoothness assumptions made in Theorem~\ref{thm:conv}}:
$$
\operatorname{GELU}(x)=x P(X \leq x)=x \Phi(x)=x \cdot \frac{1}{2}[1+\operatorname{erf}(x / \sqrt{2})]
$$
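For reference, the formula above can be implemented directly via the error function; the snippet below uses SciPy's \texttt{erf} and is only a small illustration, not the code used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

print(gelu(np.array([-1.0, 0.0, 1.0])))  # ~ [-0.1587  0.      0.8413]
\end{verbatim}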
{We compare the performance of IGLU using ReLU and GELU on all the 5 datasets in the main paper and summarize the results in Table \ref{relu_vs_gelu}.}
\begin{table}[ht]
\caption{\textbf{{Effect of Non-smooth vs Smooth activation functions: Test Performance of IGLU on different datasets using ReLU and GELU activation functions.}} Metrics are the same for the datasets as reported in Table 1 of the main paper. Results reported are averaged over five different runs.}
\centering
{
\begin{tabular}{ccc}
\hline \hline
Dataset & ReLU & GELU \\
\hline
PPI-Large & 0.987 $\pm$ 0.004 & 0.987 $\pm$ 0.000 \\
Reddit & 0.964 $\pm$ 0.001 & 0.962 $\pm$ 0.000 \\
Flickr & 0.515 $\pm$ 0.001 & 0.516 $\pm$ 0.001 \\
Proteins & 0.784 $\pm$ 0.004 & 0.782 $\pm$ 0.002 \\
Arxiv & 0.718 $\pm$ 0.001 & 0.720 $\pm$ 0.002 \\
\hline
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{relu_vs_gelu}
\end{table}
{We observe that IGLU enjoys very similar performance with both GELU and ReLU activation functions, thereby justifying the practicality of our smoothness assumptions.}
\subsection{Analysis on Degree of Staleness: Backprop Order of Updates}\label{app:staleness}
{To understand the effect of more frequent updates in the backprop variant, we performed additional experiments with this variant on the PPI-Large dataset and varied the frequency of updates; the results for a single run are reported in Table \ref{staleness_table_backprop}. We fix the hyperparameters and train for 200 epochs.}
\begin{table}[ht]
\caption{\textbf{{Accuracy for different update frequencies on PPI-Large: Backprop Order}}}
\centering
{
\begin{tabular}{cccc}
\hline \hline
Update Frequency & Train Micro-F1 & Validation Micro-F1 & Test Micro-F1 \\
\hline
0.5 & 0.761 & 0.739 & 0.756 \\
\hline
1 & 0.805 & 0.778 & 0.796 \\
\hline
2 & 0.794 & 0.769 & 0.784\\
\hline
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{staleness_table_backprop}
\end{table}
{We observe that more frequent updates help stabilize training. We also observe that update frequencies 1 and 2 perform competitively, and both significantly outperform update frequency 0.5.}
{However, we note that a higher update frequency incurs additional computational cost, since embeddings need to be re-computed more frequently. We believe that both the improved stability and the competitive performance can be attributed to fresher embeddings, which are otherwise kept stale within an epoch in this order of updates. The experiment with frequency 0.5 shows slower convergence and comparatively poor performance, as expected.}
\subsection{Convergence - Train Loss vs Wall Clock Time}
\begin{figure}[ht]
\begin{tabular}{c}\hspace*{-8pt}
\includegraphics[width=1.0\textwidth]{figures/train_loss_vs_wall_clock_time_20_large_range_new.pdf}
\end{tabular}
\caption{\textbf{Training Loss curves of different methods on the benchmark datasets against Wall clock time.}}
\label{fig:train_loss_results_epoch_appendix}
\end{figure}
Figure \ref{fig:train_loss_results_epoch_appendix} provides train loss curves for all datasets and methods in Table \ref{results}. {IGLU}\xspace is able to achieve a lower training loss faster compared to all the baselines across datasets.
\input{theory-supp}
\subsubsection*{Ethics Statement}
\label{sec:ethics}
{This paper presents {IGLU}\xspace, a novel technique to train GCN architectures on large graphs that outperforms state-of-the-art techniques in terms of prediction accuracy and convergence speed. The paper does not explore any sensitive applications; the experiments focus primarily on publicly available benchmarks of a scientific (e.g. PPI) and bibliographic (e.g. ArXiv) nature and do not involve any user studies or human experimentation.}
\section{Empirical Evaluation}
\label{sec:exps}
{IGLU}\xspace was compared to state-of-the-art (SOTA) baselines on several node classification benchmarks in terms of test accuracy and convergence rate. The inverted order of updates was used for {IGLU}\xspace as it was found to offer superior performance in ablation studies.
\textbf{Datasets and Tasks:} The following five benchmark tasks were used:\\
(1) Reddit \citep{graphsage}: predicting the communities to which different posts belong,\\
(2) PPI-Large \citep{graphsage}: classifying protein functions in biological protein-protein interaction graphs,\\
(3) Flickr \citep{graphsaint}: image categorization based on descriptions and other properties,\\
(4) OGBN-Arxiv \citep{ogb}: predicting paper-paper associations, and\\
(5) OGBN-Proteins \citep{ogb}: categorizing meaningful associations between proteins.\\ Training-validation-test splits and metrics were used in a manner consistent with the original release of the datasets: specifically, ROC-AUC was used for OGBN-Proteins and micro-F1 for all other datasets. Dataset descriptions and statistics are presented in Appendix~\ref{data_baselines}. The graphs in these tasks varied significantly in terms of size (from 56K nodes in PPI-Large to 232K nodes in Reddit), density (from average degree 13 in OGBN-Arxiv to 597 in OGBN-Proteins) and number of edges (from 800K to 39 million). They require diverse information to be captured and a variety of multi-label and multi-class node classification problems to be solved, thus offering extensive evaluation.
\textbf{Baselines:} {IGLU}\xspace was compared to state-of-the-art algorithms, namely GCN \citep{gcn}, GraphSAGE \citep{graphsage}, VR-GCN \citep{vrgcn}, Cluster-GCN \citep{clustergcn} and GraphSAINT \citep{graphsaint} (using the Random Walk Sampler, which was reported by the authors to have the best performance). The \revision{mini-batched} implementation of GCN provided by the GraphSAGE authors was used since the implementation released by \citep{gcn} gave runtime errors on all datasets. GraphSAGE and VRGCN address the neighborhood explosion problem by sampling neighborhood subsets, whereas ClusterGCN and GraphSAINT are subgraph sampling techniques. Thus, our baselines include neighbor sampling, layer-wise sampling, subgraph sampling and no-sampling methods. We recall that {IGLU}\xspace does not require any node/subgraph sampling. {IGLU}\xspace was implemented in TensorFlow and compared with TensorFlow implementations of the baselines released by the authors. Due to lack of space, comparisons with the following additional baselines are provided in Appendix \ref{additional_baselines_text}: LADIES \citep{ladies}, L2-GCN \citep{l2gcn}, AS-GCN \citep{asgcn}, MVS-GNN \citep{mvsgcn}, FastGCN \citep{fastgcn}, SIGN \citep{frasca2020sign}, PPRGo \citep{pprgo} and Bandit Sampler \citep{liu2020bandit}.
\textbf{Architecture}: We note that all baseline methods propose a specific network architecture along with their proposed training strategy. These architectures augment the standard GCN architecture \citep{gcn}, e.g. using multiple non-linear layers within each GCN layer, normalization and concatenation layers, all of which can help improve performance. {IGLU}\xspace, being architecture-agnostic, can readily be used with all these architectures. However, for the experiments, the network architectures proposed by VR-GCN and GraphSAINT were used with {IGLU}\xspace, owing to their consistent performance across datasets as demonstrated by \citep{vrgcn, graphsaint}. Both architectures were considered and results for the better architecture are reported for each dataset.
\textbf{Detailed Setup.} The supervised inductive learning setting was considered for all five datasets as it is more general than the transductive setting, which assumes availability of the entire graph during training. The inductive setting also aligns with real-world applications where graphs can grow over time \citep{graphsage}. Experiments on PPI-Large, Reddit and Flickr used 2-layer GCNs for all methods, whereas OGBN-Proteins and OGBN-Arxiv used 3-layer GCNs for all methods, as prescribed in the original benchmark \citep{ogb}. Further details are provided in Appendix \ref{app:hardware}.
\textbf{Model Selection and Hyperparameter Tuning.} Model selection was done for all methods based on their validation set performance. Each experiment was run five times, with the mean and standard deviation of test performance reported in Table \ref{results} along with training times. Although the embedding dimension varies across datasets, it is the same for all methods on any given dataset. For {IGLU}\xspace, GraphSAGE and VR-GCN, an exhaustive grid search was done over general hyperparameters such as batch size, learning rate and dropout rate \citep{dropout}. In addition, method-specific hyperparameter sweeps were carried out; these are detailed in Appendix \ref{app:params}.
\subsection{Results}
\label{sec:main-results}
For {IGLU}\xspace, the VR-GCN architecture performed better on the Reddit and OGBN-Arxiv datasets, while the GraphSAINT architecture performed better on the remaining three datasets. All baselines were extensively tuned on the 5 datasets, and the performance reported in their respective publications was either closely replicated or improved upon. Test accuracies are reported in Table \ref{results} and convergence plots are shown in Figure \ref{fig:convergence_results}. Additionally, Table~\ref{results} also reports the absolute accuracy gain of {IGLU}\xspace over the best baseline. The \% speedup offered by {IGLU}\xspace is computed as follows: let the highest validation score obtained by the best baseline be $v_{1}$ and let $t_{1}$ be the time taken to reach that score. Let the time taken by {IGLU}\xspace to reach $v_{1}$ be $t_{2}$. Then,
\begin{equation}\label{eq:speedup}
\text{\% Speedup} := \frac{t_{1} - t_{2}}{t_{1}} \times 100
\end{equation}
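As a worked example with purely hypothetical timings: if the best baseline takes $t_1 = 100$s to reach its best validation score and {IGLU}\xspace reaches the same score in $t_2 = 40$s, Eq.~\eqref{eq:speedup} gives a $60\%$ speedup.
\begin{verbatim}
# Speedup as defined in the equation above (hypothetical timings).
def speedup_percent(t1, t2):
    return (t1 - t2) / t1 * 100.0

print(speedup_percent(100.0, 40.0))  # 60.0
\end{verbatim}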
The wall clock training time in Figure \ref{fig:convergence_results} is strictly the optimization time for each method and excludes method-specific overheads such as pre-processing, sampling and sub-graph creation that other baselines incur. This is actually a disadvantage to {IGLU}\xspace since its overheads are much smaller. The various time and memory overheads incurred by all methods are summarized in Appendix \ref{app:time_memory_overheads} and memory requirements for {IGLU}\xspace are discussed in Appendix \ref{app:memory}.
\begin{table}[t]
\caption{\textbf{Accuracy of {IGLU}\xspace compared to SOTA algorithms.} The metric is ROC-AUC for Proteins and Micro-F1 for the others. {IGLU}\xspace is the only method with accuracy within $0.2\%$ of the best accuracy on each dataset with significant speedups in training across datasets. On PPI-Large, {IGLU}\xspace is $\sim 8\times$ faster than VR-GCN, the most accurate baseline. * denotes speedup in initial convergence based on a high validation score of 0.955. \textbf{--} denotes no absolute gain. $\|$ denotes a runtime error.}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccc}
\hline
\textbf{Algorithm} & \textbf{PPI-Large} & \textbf{Reddit} & \textbf{Flickr} & \textbf{Proteins} & \textbf{Arxiv} \\
\hline \hline
GCN & 0.614 $\pm$ 0.004 & 0.931 $\pm$ 0.001 & 0.493 $\pm$ 0.002 & 0.615 $\pm$ 0.004 & 0.657 $\pm$ 0.002 \\[0.05cm]
GraphSAGE & 0.736 $\pm$ 0.006 & 0.954 $\pm$ 0.002 & 0.501 $\pm$ 0.013 & 0.759 $\pm$ 0.008 & 0.682 $\pm$ 0.002 \\[0.05cm]
VR-GCN & 0.975 $\pm$ 0.007 & 0.964 $\pm$ 0.001 & 0.483 $\pm$ 0.002 & 0.752 $\pm$ 0.002 & 0.701 $\pm$ 0.006 \\[0.05cm]
Cluster-GCN & 0.899 $\pm$ 0.004 & 0.962 $\pm$ 0.004 & 0.481 $\pm$ 0.005 & $\|$ & 0.706 $\pm$ 0.004 \\[0.05cm]
GraphSAINT & 0.956 $\pm$ 0.003 & \textbf{0.966} $\pm$ 0.003 & 0.510 $\pm$ 0.001 & 0.764 $\pm$ 0.009 & 0.712 $\pm$ 0.006\\[0.05cm]
\hline \hline
{IGLU}\xspace & \textbf{0.987} $\pm$ 0.004 & 0.964 $\pm$ 0.001 & \textbf{0.515} $\pm$ 0.001 & \textbf{0.784} $\pm$ 0.004 & \textbf{0.718} $\pm$ 0.001\\[0.05cm]
\hline
Abs. Gain & \textbf{0.012} & \textbf{--} & \textbf{0.005} & \textbf{0.020} & \textbf{0.006} \\[0.05cm]
\% Speedup \eqref{eq:speedup} & \textbf{88.12} & \textbf{8.1}* & \textbf{44.74} & \textbf{11.05} & \textbf{13.94}\\[0.05cm]
\hline
\end{tabular}
}
\label{results}\vspace*{10pt}
\end{table}
\begin{figure}
\includegraphics[width=1.0\textwidth]{figures/results_valid_vs_time_20_new.pdf}\vspace*{-5pt}
\caption{\textbf{Wall Clock Time vs Validation Accuracy on different datasets for various methods.} {IGLU}\xspace offers significant improvements in convergence rate over baselines across diverse datasets. }
\label{fig:convergence_results}
\end{figure}
\textbf{Performance on Test Set and Speedup Obtained:}
Table~\ref{results} establishes that {IGLU}\xspace significantly outperforms the baselines on PPI-Large, Flickr, OGBN-Proteins and OGBN-Arxiv and is competitive with the best baselines on Reddit. On PPI-Large, {IGLU}\xspace improves accuracy upon the best baseline (VRGCN) by over \textbf{1.2}\% while providing speedups of up to \textbf{88}\%, i.e., {IGLU}\xspace is about $8\times$ faster to train than VR-GCN. On OGBN-Proteins, {IGLU}\xspace improves accuracy upon the best baseline (GraphSAINT) by over \textbf{2.6}\% while providing a speedup of \textbf{11}\%. On Flickr, {IGLU}\xspace offers a \textbf{0.98}\% improvement in accuracy while simultaneously offering up to \textbf{45}\% speedup over the previous state-of-the-art method GraphSAINT. Similarly, on Arxiv, {IGLU}\xspace provides a \textbf{0.84}\% accuracy improvement over the best baseline GraphSAINT while offering nearly \textbf{14}\% speedup. On Reddit, an \textbf{8.1}\% speedup was observed in convergence to a high validation score of 0.955, while the final performance is within a standard deviation of the best baseline. The OGBN-Proteins and OGBN-Arxiv datasets were originally benchmarked in the transductive setting, with the entire graph information made available during training. However, we consider the more challenging inductive setting, yet {IGLU}\xspace outperforms the best transductive baseline by over \textbf{0.7}\% for Proteins while matching the best transductive baseline for Arxiv \citep{ogb}. It is important to note that OGBN-Proteins is an atypical dataset because the graph is highly dense. Because of this, baselines such as ClusterGCN and GraphSAINT, which drop a lot of edges while creating subgraphs, show a deterioration in performance. ClusterGCN runs into a runtime error on this dataset (denoted by $\|$ in the table), while GraphSAINT requires a very large subgraph size to achieve reasonable performance.
\textbf{Convergence Analysis}: Figure \ref{fig:convergence_results} shows that {IGLU}\xspace converges faster to a higher validation score than the other baselines. For PPI-Large, while Cluster-GCN and VR-GCN show promising convergence in the initial stages of training, they stagnate at a much lower validation score in the end, whereas {IGLU}\xspace is able to improve consistently and converge to a much higher validation score. For Reddit, {IGLU}\xspace's final validation score is marginally lower than GraphSAINT's, but {IGLU}\xspace offers rapid convergence in the early stages of training.
For OGBN-Proteins, Flickr and OGBN-Arxiv, {IGLU}\xspace demonstrates a substantial improvement in both convergence and the final performance on the test set.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{figures/proteins_order_ablation_roc_auc.pdf}&
\includegraphics[width=.4\textwidth]{figures/reddit_order_ablation_microf1.pdf}\\
{ (a)}&{ (b)}\vspace*{-3pt}
\end{tabular}
\caption{Effect of the backprop and inverted orders of updates in {IGLU}\xspace on the Reddit and OGBN-Proteins datasets. The inverted order of updates offers more stability and faster convergence. It is notable that techniques such as VR-GCN use stale node embeddings, which corresponds to the backprop variant.}
\label{fig:ablation_convergence_results}
\end{figure}
\subsection{Ablation studies}\label{ablation}
\paragraph{Effect of the Order of Updates.}\label{update_order}
{IGLU}\xspace offers the flexibility of using either the backprop order of updates or the inverted order of updates, as mentioned in Section \ref{sec:method}. Ablations were performed on the Reddit and OGBN-Proteins datasets to analyze the effect of the different orders of updates. Figure~\ref{fig:ablation_convergence_results} offers epoch-wise convergence plots for the same, and shows that the inverted order of updates offers faster and more stable convergence in the early stages of training, although both variants eventually converge to similar validation scores. It is notable that keeping node embeddings stale (the backprop order) offered inferior performance, since techniques such as VR-GCN \citep{vrgcn} also keep node embeddings stale; {IGLU}\xspace offers the superior alternative of keeping incomplete gradients stale instead.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{figures/Staleness_Ablation_PPI_Train_Loss_1.pdf}&
\includegraphics[width=.4\textwidth]{figures/Staleness_Ablation_PPI_Validation_1.pdf}\\
{ (a)}&{ (b)}
\end{tabular}
\caption{Update frequency vs.\ accuracy on PPI-Large. As expected, refreshing the $\valpha^k$'s too frequently or too infrequently can hurt both performance and convergence.}
\label{fig:staleness_ablation}
\end{figure}
\paragraph{Analysis of Degrees of Staleness.}
The effect of the frequency of updates to the incomplete gradients ($\valpha^k$) on the performance and convergence of {IGLU}\xspace was analyzed. This ablation was conducted keeping all the other hyperparameters fixed. In the default setting, $\valpha^k$ values were updated only once per epoch (referred to as frequency 1). Two other update schemes were also considered: a) update the $\valpha^k$ values every two epochs (referred to as frequency 0.5), and b) update the $\valpha^k$ values twice within an epoch (referred to as frequency 2). To clarify, $\valpha^k$ values are the freshest with update frequency 2 and the most stale with update frequency 0.5. This ablation study was performed on the PPI-Large dataset and each variant was trained for 200 epochs. Table \ref{staleness_table} summarizes the results after model selection and Figure \ref{fig:staleness_ablation} plots the convergence of these variants. Figure \ref{fig:staleness_ablation}b shows that on PPI-Large, the default update frequency 1 has the best convergence on the validation set, followed by update frequency 0.5. Both update frequencies 1 and 0.5 massively outperform update frequency 2. Figure \ref{fig:staleness_ablation}a shows that {IGLU}\xspace with update frequency 2 has the lowest training loss but poor validation performance, indicating overfitting to the training dataset. We observe from this ablation that refreshing the $\valpha^k$ values prematurely can cause unstable training, resulting in convergence to a suboptimal solution. However, not refreshing the $\valpha^k$ values frequently enough can delay convergence to a good solution.
\begin{table}[ht]
\caption{Accuracy for different update frequencies on PPI-Large.}
\centering
{
\begin{tabular}{cccc}
\hline \hline
Update Frequency & Train Micro-F1 & Validation Micro-F1 & Test Micro-F1 \\
\hline
0.5 & 0.947 & 0.899 & 0.916 \\
\hline
1 & 0.970 & 0.947 & 0.961 \\
\hline
2 & 0.960 & 0.708 & 0.726\\
\hline
\hline
\end{tabular}
}
\vspace{1.5pt}
\label{staleness_table}
\end{table}
\paragraph{Additional Ablations and Experiments:} Due to lack of space, the following additional ablations and experiments are described in the appendices: a) Ablation on degrees of staleness for the backprop order of updates in Appendix \ref{app:staleness}, b) Experiments demonstrating {IGLU}\xspace's scalability to deeper networks and larger datasets in Appendices \ref{app:deeper} and \ref{app:largegraph} respectively, and c) Experiments demonstrating {IGLU}\xspace's applicability and architecture-agnostic nature in Appendices \ref{app:applicability} and \ref{app:archagnostic}.
\section{Introduction}
\label{sec:intro}
The Graph Convolution Network (GCN) model is an effective graph representation learning technique. Its ability to exploit network topology offers superior performance in several applications such as node classification \citep{gcn}, recommendation systems \citep{recsys} and program repair \citep{drrepair}. However, training multi-layer GCNs on large and dense graphs remains challenging due to the very aggregation operation that enables GCNs to adapt to graph topology -- a node's output layer embedding depends on embeddings of its neighbors in the previous layer, which recursively depend on embeddings of their neighbors in the layer before that, and so on. Even in GCNs with 2-3 layers, this forces back-propagation on loss terms for a small mini-batch of nodes to update a large multi-hop neighborhood, causing mini-batch SGD techniques to scale poorly.
Efforts to overcome this problem try to limit the number of nodes that receive updates as a result of a back-propagation step \cite{clustergcn,graphsage,graphsaint}. This is done either by sub-sampling the neighborhood or by clustering ({it is important to note the distinction between nodes sampled to create a mini-batch and neighborhood sampling done to limit the neighborhood of the mini-batch that receives updates}). Variance reduction techniques \cite{vrgcn} attempt to reduce the additional variance introduced by neighborhood sampling. However, these techniques often require heavy subsampling in large graphs, resulting in poor accuracy due to insufficient aggregation. They also do not offer unbiased learning or rigorous convergence guarantees. See Section~\ref{sec:related} for a more detailed discussion of the state-of-the-art in GCN training.
\textbf{Our Contributions}: This paper presents {IGLU}\xspace, an efficient technique for training GCNs based on lazy updates. An analysis of the gradient structure in GCNs reveals the most expensive component of the back-propagation step initiated at a node to be (re-)computation of forward-pass embeddings for its vast multi-hop neighborhood. Based on this observation, {IGLU}\xspace performs back-propagation with significantly reduced complexity using intermediate computations that are cached at regular intervals. This completely avoids neighborhood sampling and is a stark departure from the state-of-the-art. {IGLU}\xspace is architecture-agnostic and can be readily implemented on a wide range of GCN architectures. Avoiding neighborhood sampling also allows {IGLU}\xspace to completely avoid variance artifacts and offer provable convergence to a first-order stationary point under standard assumptions. In experiments, {IGLU}\xspace offered superior accuracies and accelerated convergence on a range of benchmark datasets.
\section{Discussion and Future Work}
\label{sec:disc}
This paper introduced {IGLU}\xspace, a novel method for training GCN architectures that uses biased gradients based on cached intermediate computations to speed up training significantly. The gradient bias is shown to be provably bounded, so convergence remains guaranteed (see Theorem~\ref{thm:conv}). {IGLU}\xspace's performance was validated on several datasets where it significantly outperformed SOTA methods in terms of accuracy and convergence speed. Ablation studies confirmed that {IGLU}\xspace is robust to its few hyperparameters, enabling a near-optimal choice.
Exploring other possible variants of {IGLU}\xspace, in particular reducing variance due to mini-batch SGD, sampling nodes to further speed up updates, and exploring alternate staleness schedules are interesting future directions. On the theoretical side, it would be interesting to characterize properties of datasets and loss functions that influence the effect of lazy updates on convergence. Such a characterization would allow practitioners to decide whether to execute {IGLU}\xspace with lazier updates or else reduce the amount of staleness. Exploring application- and architecture-specific variants of {IGLU}\xspace is also an interesting direction.
\input{reproducibility}
\input{ethics}
\subsubsection*{Acknowledgments}
The authors are thankful to the reviewers for discussions that helped improve the content and presentation of the paper.
\section{{IGLU}\xspace: effIcient Gcn training via Lazy Updates}
\label{sec:method}
\textbf{Problem Statement}: Consider the problem of learning a GCN architecture on an undirected graph $\cG(\cV, \cE)$ with each of the $N$ nodes endowed with an \emph{initial} feature vector ${\bm{x}}^0_i \in \bR^{d_0}, i \in \cV$. $X^0 \in \bR^{N \times d_0}$ denotes the matrix of these initial features stacked together. $\cN(i) \subset \cV$ denotes the set of neighbors of node $i$. $A$ denotes the (normalized) adjacency matrix of the graph. A multi-layer GCN architecture uses a parameterized function at each layer to construct a node's embedding for the next layer using embeddings of that node as well as those of its neighbors. Specifically
\[
{\bm{x}}_i^k = f({\bm{x}}_j^{k-1}, j \in \bc i \cup \cN(i); E^k),
\]
where $E^k$ denotes the parameters of the $k$-th layer. For example, a classical GCN layer is given by
\[
{\bm{x}}_i^k = \sigma\br{\sum_{j \in \cV}A_{ij}(W^k)^\top{\bm{x}}_j^{k-1}},
\]
where $E^k$ is simply the matrix $W^k \in \bR^{d_{k-1} \times d_k}$ and $d_k$ is the embedding dimensionality at the $k\nth$ layer. {IGLU}\xspace supports more involved architectures including residual connections, virtual nodes, layer normalization, batch normalization, etc.\ (see Appendix~\ref{app:arch}). We will use $E^k$ to collectively refer to all parameters of the $k\nth$ layer e.g. offset and scale parameters in a layer norm operation, etc. $X^k \in \bR^{N \times d_k}$ will denote the matrix of $k\nth$ layer embeddings stacked together, giving us the handy shorthand $X^k = f(X^{k-1};E^k)$. Given a $K$-layer GCN and a multi-label/multi-class task with $C$ labels/classes, a fully-connected layer $W^{K+1} \in \bR^{d_K \times C}$ and activation functions such as sigmoid or softmax are used to get predictions that are fed into the task loss. {IGLU}\xspace does not require the task loss to decompose over the classes. The convergence proofs only require a smooth training objective.
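To make the layer notation concrete, the following minimal NumPy sketch (ours, for illustration only; the released implementation linked in the Reproducibility Statement is authoritative) realizes the classical GCN layer above in matrix form, $X^k = \sigma(AX^{k-1}W^k)$, together with a toy two-layer forward pass; the row normalization of $A$ used here is a crude stand-in for whatever normalization the architecture prescribes.
\begin{verbatim}
import numpy as np

def gcn_layer(A, X_prev, W, sigma=np.tanh):
    """Classical GCN layer in matrix form: X^k = sigma(A X^{k-1} W^k).

    A      : (N, N) normalized adjacency matrix
    X_prev : (N, d_{k-1}) previous-layer embeddings
    W      : (d_{k-1}, d_k) layer parameters E^k
    """
    return sigma(A @ X_prev @ W)

# Toy usage: a 2-layer GCN followed by the fully-connected layer W^{K+1}
rng = np.random.default_rng(0)
N, d0, d1, d2, C = 5, 4, 8, 8, 3
A = rng.random((N, N))
A = A / A.sum(axis=1, keepdims=True)      # crude row normalization
X0 = rng.standard_normal((N, d0))         # initial features
W1 = rng.standard_normal((d0, d1))
W2 = rng.standard_normal((d1, d2))
W_out = rng.standard_normal((d2, C))

X1 = gcn_layer(A, X0, W1)
X2 = gcn_layer(A, X1, W2)
Y_hat = X2 @ W_out                        # (N, C) predictions fed to the loss
\end{verbatim}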
\textbf{Neighborhood Explosion}: To understand the reasons behind neighborhood explosion and the high cost of mini-batch based SGD training, consider a toy univariate regression problem with unidimensional features and a 2-layer GCN with sigmoidal activation i.e. $K = 2$ and $C = 1 = d_0 = d_1 = d_2$. This GCN is parameterized by $w^1, w^2, w^3 \in \bR$ and offers the output $\hat y_i = w^3 \sigma\br{z^2_i}$ where $z^2_i = \sum_{j \in \cV}A_{ij}w^2x_j^1 \in \bR$. In turn, we have $x_j^1 = \sigma\br{z^1_j}$ where $z^1_j = \sum_{j' \in \cV}A_{jj'}w^1x_{j'}^0 \in \bR$ and $x_{j'}^0 \in \bR$ are the initial features of the nodes. Given a task loss $\ell: \bR \times \bR \rightarrow \bR_+$ e.g. least squares, denoting $\ell'_i = \ell'(\hat y_i, y_i)$ gives us
\begin{align*}
\frac{\partial\ell(\hat y_i, y_i)}{\partial w^1} &= \ell'_i\cdot\frac{\partial\hat y_i}{\partial z^2_i}\cdot\frac{\partial z^2_i}{\partial w^1} = \ell'_i\cdot w^3\sigma'(z^2_i)\cdot\sum_{j \in \cV}A_{ij}w^2\frac{\partial x_j^1}{\partial w^1}\\
&= \ell'_i\cdot w^3\sigma'(z^2_i)\cdot\sum_{j \in \cV}A_{ij}w^2\sigma'(z^1_j)\cdot\sum_{j' \in \cV}A_{jj'}x_{j'}^0.
\end{align*}
The nesting of the summations is conspicuous and indicates the neighborhood explosion: when seeking gradients in a $K$-layer GCN on a graph with average degree $m$, up to an $m^{K-1}$-sized neighborhood of a node may be involved in the back-propagation update initiated at that node. Note that the above expression involves terms such as $\sigma'(z^2_i), \sigma'(z^1_j)$. Since the values of $z^2_i, z^1_j$ etc.\ change whenever the model i.e. $\bc{w^1,w^2,w^3}$ receives updates, for a fresh mini-batch of nodes, terms such as $\sigma'(z^2_i), \sigma'(z^1_j)$ need to be computed afresh if the gradient is to be computed exactly. Performing these computations amounts to doing {\em forward pass operations} that frequently involve a large neighborhood of the nodes of the mini-batch. Sampling strategies try to limit this cost by directly restricting the neighborhood over which such forward passes are computed. However, this introduces both bias and variance into the gradient updates as discussed in Section~\ref{sec:related}. {IGLU}\xspace instead lazily updates the \emph{incomplete gradient} terms (defined below) and the node embedding terms that participate in the gradient expression. This completely eliminates sampling variance but introduces a bias due to the use of stale terms. However, this bias is provably bounded and can be made arbitrarily small by adjusting the step length and the frequency of refreshing these terms.
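As a sanity check on this calculation (our own illustration, not from the paper), the following NumPy snippet evaluates the closed-form gradient above for the squared loss $\ell(\hat y, y) = \frac12(\hat y - y)^2$ and sigmoid activation at a single node, and verifies it against a central finite difference; the two applications of $A$ make the two-hop dependence explicit.
\begin{verbatim}
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(A, x0, w1, w2, w3):
    z1 = A @ (w1 * x0)        # z^1_j = sum_{j'} A_{jj'} w^1 x^0_{j'}
    x1 = sig(z1)              # x^1_j = sigma(z^1_j)
    z2 = A @ (w2 * x1)        # z^2_i = sum_j A_{ij} w^2 x^1_j
    return w3 * sig(z2), z1, z2

rng = np.random.default_rng(1)
N, i = 6, 0                   # graph size and the node whose loss we probe
A = rng.random((N, N))
A /= A.sum(1, keepdims=True)  # crude normalized adjacency
x0 = rng.standard_normal(N)   # initial features x^0
y = rng.standard_normal(N)    # regression targets
w1, w2, w3 = 0.7, -0.4, 1.3

# Closed-form gradient of 0.5*(y_hat_i - y_i)^2 w.r.t. w^1 (nested sums)
y_hat, z1, z2 = forward(A, x0, w1, w2, w3)
dl = y_hat[i] - y[i]          # ell'_i for the squared loss
grad = dl * w3 * sig(z2[i]) * (1 - sig(z2[i])) \
     * np.sum(A[i] * w2 * sig(z1) * (1 - sig(z1)) * (A @ x0))

# Central finite-difference check of the same derivative
eps = 1e-6
lp = 0.5 * (forward(A, x0, w1 + eps, w2, w3)[0][i] - y[i]) ** 2
lm = 0.5 * (forward(A, x0, w1 - eps, w2, w3)[0][i] - y[i]) ** 2
assert np.isclose(grad, (lp - lm) / (2 * eps))
\end{verbatim}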
\begin{figure}[t]%
\includegraphics[width=\columnwidth]{figures/arch.pdf}%
\caption{\small Fig~\ref{fig:method}(a) contrasts existing sampling-based approaches, which may introduce bias and variance, with {IGLU}\xspace, which completely sidesteps these issues and is able to execute GCN back-propagation steps on the full graph owing to its use of lazy updates that incur no sampling variance and provably bounded bias. Fig~\ref{fig:method}(b) summarizes the quantities useful for {IGLU}\xspace's updates. {IGLU}\xspace is architecture-agnostic and can be readily used with a wide range of architectures. Fig~\ref{fig:method}(c) gives an example layer architecture used by GraphSAINT.}%
\label{fig:method}%
\end{figure}
\textbf{Lazy Updates for GCN Training}: Consider an arbitrary GCN architecture with the following structure: for some parameterized layer functions we have $X^k = f(X^{k-1};E^k)$ where $E^k$ denotes the collection of all parameters of the $k\nth$ layer e.g. weight matrices, offset and scale parameters used in layer norm operations, etc. $X^k \in \bR^{N \times d_k}$ denotes the matrix of $k\nth$ layer embeddings stacked together and $X^0 \in \bR^{N \times d_0}$ are the initial features. For a $K$-layer GCN on a multi-label/multi-class task with $C$ labels/classes, a fully-connected layer $W^{K+1} \in \bR^{d_K \times C}$ is used to offer predictions $\hat{\bm{y}}_i = (W^{K+1})^\top{\bm{x}}^K_i \in \bR^C$. We use the shorthand $\hat\vY \in \bR^{N \times C}$ to denote the matrix where the predicted outputs $\hat{\bm{y}}_i$ for all the nodes are stacked. We assume a task loss function $\ell: \bR^C \times \bR^C \rightarrow \bR_+$ and use the abbreviation $\ell_i := \ell(\hat{\bm{y}}_i, {\bm{y}}_i)$. The loss function need not decompose over the classes and can thus be assumed to include activations such as softmax that are applied over the predictions $\hat{\bm{y}}_i$. Let $\cL = \sum_{i \in \cV}\ell_i$ denote the training objective. The convergence proofs assume that $\cL$ is smooth.
{\textbf{Motivation}: We define the \emph{loss derivative} matrix $\vG = [g_{ic}] \in \bR^{N \times C}$ with $g_{ic} := \frac{\partial \ell_i}{\partial \hat y_{ic}}$. As the proof of Lemma~\ref{lem:main} (see Appendix \ref{sec:proofs}) shows, the loss derivative with respect to parameters $E^k$ at any layer has the form $\frac{\partial{\mathcal L}}{\partial E^k} = \sum_{j=1}^N\sum_{p=1}^{d_k} \br{\sum_{i \in \cV}\sum_{c \in [C]} g_{ic}\cdot\frac{\partial \hat y_{ic}}{\partial X^k_{jp}}}\frac{\partial X^k_{jp}}{\partial E^k}$. Note that the partial derivatives $\frac{\partial X^k_{jp}}{\partial E^k}$ can be computed for any node using only embeddings of its neighbors in the $(k-1)\nth$ layer i.e. $X^{k-1}$, thus avoiding any neighborhood explosion. This means that neighborhood explosion must be happening while computing the terms encapsulated in the round brackets. Let us formally recognize these terms as \emph{incomplete gradients}. The notation $\left.\frac{\partial P}{\partial Q}\right|_R$ denotes the partial derivative of $P$ w.r.t $Q$ while keeping $R$ fixed i.e. treated as a constant.}
\begin{definition}
\label{defn:alphak}
For any layer $k \leq K$, define its \emph{incomplete task gradient} to be $\valpha^k = [\alpha^k_{jp}] \in \bR^{N \times d_k}$,
\[
\alpha^k_{jp} := \left.\frac{\partial(\vG\odot\hat\vY)}{\partial X^k_{jp}}\right|_{\vG} = \sum_{i\in\cV}\sum_{c \in [C]} g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^k_{jp}}
\]
\end{definition}%
The following lemma completely characterizes the loss gradients and also shows that the incomplete gradient terms $\valpha^k, k \in [K]$ can be efficiently computed using a recursive formulation that also does not involve any neighborhood explosion.
\begin{lemma}
\label{lem:main}
The following results hold whenever the task loss $\cL$ is differentiable:
\begin{enumerate}
\item For the final fully-connected layer we have $\frac{\partial\cL}{\partial W^{K+1}} = (X^K)^\top\vG$ as well as for any $k \in [K]$ and any parameter $E^k$ in the $k\nth$ layer, $\frac{\partial\cL}{\partial E^k} = \left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k} = \sum_{i\in\cV}\sum_{p = 1}^{d_k} \alpha^k_{ip}\cdot\frac{\partial X^k_{ip}}{\partial E^k}$.
\item For the final layer, we have $\valpha^K = \vG(W^{K+1})^\top$ as well as for any $k < K$, we
have $\valpha^k = \left.\frac{\partial(\valpha^{k+1}\odot X^{k+1})}{\partial X^k}\right|_{\valpha^{k+1}}$ i.e. $\alpha^k_{jp} = \sum_{i\in\cV}\sum_{q = 1}^{d_{k+1}} \alpha^{k+1}_{iq}\cdot\frac{\partial X^{k+1}_{iq}}{\partial X^k_{jp}}$.
\end{enumerate}
\end{lemma}
Lemma~\ref{lem:main} establishes a recursive definition of the incomplete gradients using terms such as $\frac{\partial X^{k+1}_{iq}}{\partial X^k_{jp}}$ that concern just a single layer. Thus, computing $\valpha^k$ for any $k \in [K]$ does not involve any neighborhood explosion since only the immediate neighbors of a node need be consulted. Lemma~\ref{lem:main} also shows that if $\valpha^k$ are computed and frozen, the loss derivatives $\frac{\partial\cL}{\partial E^k}$ only involve additional computation of terms such as $\frac{\partial X^k_{ip}}{\partial E^k}$ which yet again involve a single layer and do not cause neighborhood explosion. This motivates lazy updates to $\valpha^k, X^k$ values in order to accelerate back-propagation. However, performing lazy updates to both $\valpha^k, X^k$ offers suboptimal performance. Hence {IGLU}\xspace adopts two variants described in Algorithms~\ref{algo:bp} and \ref{algo:inv}. The \emph{backprop} variant\footnote{The backprop variant is named so since it updates model parameters in the order back-propagation would have updated them i.e. $W^{K+1}$ followed by $E^K, E^{K-1},\ldots$ whereas the inverted variant performs updates in the reverse order i.e. starting from $E^1, E^2$ all the way to $W^{K+1}$.} keeps embeddings $X^k$ stale for an entire epoch but performs eager updates to $\valpha^k$. The \emph{inverted} variant on the other hand keeps the incomplete gradients $\valpha^k$ stale for an entire epoch but performs eager updates to $X^k$.
\begin{minipage}[t]{0.49\textwidth}
\vspace*{-4ex}
\begin{algorithm}[H]
\caption{{IGLU}\xspace: backprop order}
\label{algo:bp}
{\small
\begin{algorithmic}[1]
\REQUIRE GCN $\cG$, initial features $X^0$, task loss $\cL$
\STATE Initialize model parameters $E^k, k \in [K], W^{K+1}$
\WHILE{not converged}
\STATE Do a forward pass to compute $X^k$ for all $k \in [K]$ as well as $\hat\vY$
\STATE Compute $\vG$ then $\frac{\partial\cL}{\partial W^{K+1}}$ using Lemma~\ref{lem:main} (1) and update $W^{K+1} \leftarrow W^{K+1} - \eta\cdot\frac{\partial\cL}{\partial W^{K+1}}$
\STATE Compute $\valpha^K$ using $\vG, W^{K+1}$, Lemma~\ref{lem:main} (2)
\FOR{$k = K \ldots 2$}
\STATE Compute $\frac{\partial\cL}{\partial E^k}$ using $\valpha^k, X^k$, Lemma~\ref{lem:main} (1)
\STATE Update $E^k \leftarrow E^k - \eta\cdot\frac{\partial\cL}{\partial E^k}$
\STATE Update $\valpha^{k-1}$ using $\valpha^{k}$ and Lemma~\ref{lem:main} (2)
\ENDFOR
\ENDWHILE
\end{algorithmic}
}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\vspace*{-4ex}
\begin{algorithm}[H]
\caption{{IGLU}\xspace: inverted order}
\label{algo:inv}
{\small
\begin{algorithmic}[1]
\REQUIRE GCN $\cG$, initial features $X^0$, task loss $\cL$
\STATE Initialize model parameters $E^k, k \in [K], W^{K+1}$
\STATE Do an initial forward pass to compute $X^k, k \in [K]$%
\WHILE{not converged}
\STATE Compute $\hat\vY,\vG$ and $\valpha^k$ for all $k \in [K]$ using Lemma~\ref{lem:main} (2)
\FOR{$k = 1 \ldots K$}
\STATE Compute $\frac{\partial\cL}{\partial E^k}$ using $\valpha^k, X^{k}$, Lemma~\ref{lem:main} (1)
\STATE Update $E^k \leftarrow E^k - \eta\cdot\frac{\partial\cL}{\partial E^k}$
\STATE Update $X^k \leftarrow f(X^{k-1}; E^k)$
\ENDFOR
\STATE Compute $\frac{\partial\cL}{\partial W^{K+1}}$ using Lemma~\ref{lem:main} (1) and use it to update $W^{K+1} \leftarrow W^{K+1} - \eta\cdot\frac{\partial\cL}{\partial W^{K+1}}$
\ENDWHILE
\end{algorithmic}
}
\end{algorithm}
\end{minipage}
{\textbf{SGD Implementation}: Update steps in the algorithms (steps 4, 8 in Algorithm~\ref{algo:bp} and steps 7, 10 in Algorithm~\ref{algo:inv}) are described as a single gradient step over the entire graph to simplify exposition -- in practice, these steps are implemented using {\em mini-batch} SGD. A mini-batch of nodes $S$ is sampled and task gradients are computed w.r.t $\hat\cL_S = \sum_{i \in S}\ell_i$ alone instead of $\cL$.}
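To make the inverted order of updates concrete, here is a minimal full-batch NumPy sketch of one epoch (our paraphrase of Algorithm~\ref{algo:inv}, specialized to tanh layers $X^k = \tanh(AX^{k-1}W^k)$ and a squared loss; mini-batching of the update steps and all engineering details are elided). Only the $\valpha^k$ are stale within the epoch; parameters and embeddings are updated eagerly.
\begin{verbatim}
import numpy as np

def iglu_inverted_epoch(A, X, W, W_out, Y, eta):
    """One full-batch epoch of the inverted-order variant (cf. Algorithm 2)
    for a tanh GCN, X^k = tanh(A X^{k-1} W^k), with a squared loss.
    X is a dict {0..K} of embeddings (X[0] = initial features) and
    W a dict {1..K} of layer parameters; W_out is W^{K+1}."""
    K = len(W)
    # Refresh the loss derivatives and incomplete gradients once per epoch.
    G = X[K] @ W_out - Y                        # g_ic for loss 0.5*||Yhat-Y||^2
    alpha = {K: G @ W_out.T}                    # Lemma 1 (2), final layer
    for k in range(K, 1, -1):                   # alpha^{k-1} from alpha^k only
        alpha[k - 1] = A.T @ (alpha[k] * (1 - X[k] ** 2)) @ W[k].T
    # Inverted-order updates: stale alpha^k, eager parameters and embeddings.
    for k in range(1, K + 1):
        M = A @ X[k - 1] @ W[k]                 # fresh pre-activation
        gW = (A @ X[k - 1]).T @ (alpha[k] * (1 - np.tanh(M) ** 2))  # Lemma 1 (1)
        W[k] = W[k] - eta * gW
        X[k] = np.tanh(A @ X[k - 1] @ W[k])     # eager refresh of X^k
    # Finally update the classifier: dL/dW^{K+1} = (X^K)^T G with fresh X^K.
    W_out = W_out - eta * X[K].T @ (X[K] @ W_out - Y)
    return W, W_out, X
\end{verbatim}
Note how every matrix product above touches only one layer at a time; the epoch-level refresh of the $\valpha^k$ is the only place where information flows across layers.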
\textbf{Contribution}: As noted in Section~\ref{sec:related}, {IGLU}\xspace uses caching in a manner fundamentally different from frameworks such as PyTorch or TensorFlow, which use short-lived caches and compute exact gradients, whereas {IGLU}\xspace computes gradients faster but with bounded bias. Moreover, unlike techniques such as VR-GCN that cache only node embeddings, {IGLU}\xspace offers two variants; the inverted-order variant (Algorithm~\ref{algo:inv}), which caches incomplete gradients, usually outperforms the backprop variant (Algorithm~\ref{algo:bp}), which caches node embeddings.
\section{Related Works}
\label{sec:related}
\citet{orig,defferrard2016convolutional,gcn} introduced the GCN architecture for transductive learning on graphs. Later works extended GCNs to inductive settings and explored architectural variants such as GIN \citep{gin}. Much effort has since focused on speeding up GCN training.
\textbf{Sampling Based Approaches}: The \emph{neighborhood sampling} strategy e.g. GraphSAGE \citep{graphsage} limits compute by restricting back-propagation updates to a sub-sampled neighborhood of a node. \emph{Layer sampling} strategies such as FastGCN \citep{fastgcn}, LADIES \citep{ladies} and ASGCN \citep{asgcn} instead sample nodes at each GCN layer using importance sampling to reduce variance and improve connectivity among sampled nodes. FastGCN uses the same sampling distribution for all layers and struggles to maintain connectivity unless large batch-sizes are used. LADIES uses a per-layer distribution conditioned on nodes sampled for the succeeding layer. ASGCN uses a linear model to jointly infer node importance weights. Recent works such as Cluster-GCN \citep{clustergcn} and GraphSAINT \citep{graphsaint} propose \emph{subgraph sampling}, creating mini-batches out of subgraphs and restricting back-propagation to nodes within the subgraph. To avoid losing too many edges, large mini-batch sizes are used. Cluster-GCN performs graph clustering and chooses multiple clusters per mini-batch (reinserting any edges cutting across clusters in a mini-batch) whereas GraphSAINT samples large subgraphs directly using random walks.
\textbf{Bias and Variance}: Sampling techniques introduce bias as non-linear activations in the GCN architecture make it difficult to offer unbiased estimates of the loss function. \cite{graphsaint} offer unbiased estimates if non-linearities are discarded. Sampling techniques also face increased variance for which variance-reduction techniques have been proposed such as VR-GCN \citep{vrgcn}, MVS-GNN \citep{mvsgcn} and AS-GCN \citep{asgcn}. VR-GCN samples nodes at each layer whose embeddings are updated and uses stale embeddings for the rest, offering variance elimination in the limit under suitable conditions. MVS-GNN handles variance due to mini-batch creation by performing importance weighted sampling to construct mini-batches. The Bandit Sampler \citep{liu2020bandit} formulates variance reduction as an adversarial bandit problem.
\textbf{Other Approaches:} Recent approaches decouple propagation from prediction as a pre-processing step e.g. PPRGo \citep{pprgo}, APPNP \citep{appnp} and SIGN \citep{frasca2020sign}. APPNP makes use of the relationship between the GCNs and PageRank to construct improved propagation schemes via personalized PageRank. PPRGo extends APPNP by approximating the dense propagation matrix via the push-flow algorithm. SIGN proposes inception style pre-computation of graph convolutional filters to speed up training and inference. GNNAutoScale \citep{fey2021gnnautoscale} builds on VR-GCN and makes use of historical embeddings for scaling GNN training to large graphs.
\textbf{{IGLU}\xspace in Context of Related Work}: {IGLU}\xspace avoids neighborhood sampling entirely and instead speeds up learning using stale computations. Intermediate computations are cached and lazily updated at regular intervals, e.g., once per epoch. {We note that {IGLU}\xspace's \textit{caching} is distinct from, and much more aggressive than (e.g., caches lasting an entire epoch), the internal caching performed by popular frameworks such as TensorFlow and PyTorch (where caches last only a single iteration)}. Refreshing these caches in bulk offers {IGLU}\xspace economies of scale. {IGLU}\xspace incurs no sampling variance but incurs bias due to the use of stale computations. Fortunately, this bias is provably bounded, and can be made arbitrarily small by adjusting the step length and refresh frequency of the stale computations.
\subsubsection*{Reproducibility Statement}
\label{sec:reproducibility}
Efforts have been made to ensure that results reported in this paper are reproducible.
\textbf{Theoretical Clarity}: Section~\ref{sec:method} discusses the problem setup and preliminaries and describes the proposed algorithm. Detailed proofs are provided in Appendix~\ref{sec:proofs} due to lack of space.
\textbf{Experimental Reproducibility}: Section \ref{sec:exps} and Appendices~\ref{impl} and~\ref{data_baselines} contain information needed to reproduce the empirical results, namely dataset statistics and data sources, data pre-processing, and implementation details for {IGLU}\xspace and the baselines, including architectures, hyperparameter search spaces and the best hyperparameters corresponding to the results reported in the paper.
\textbf{Code Release}: An implementation of {IGLU}\xspace can be found at the following URL\\ \href{https://github.com/sdeepaknarayanan/iglu}{\texttt{https://github.com/sdeepaknarayanan/iglu}}
\section{Theoretical Proofs}
\label{sec:proofs}
\allowdisplaybreaks
In this section we provide a formal restatement of Theorem~\ref{thm:conv} as well as its proof. We also provide the proof of Lemma 1 below.
\subsection{Proof for Lemma 1}
\textbf{Proof of Lemma 1} We prove the two parts of Lemma 1 separately, treating two cases for each part: Case 1 pertains to the final layer and Case 2 to intermediate layers.
\textbf{Clarification about some Notation in the statements of Definition 1 and Lemma 1}: The notation $\left.\frac{\partial(\vG\odot\hat\vY)}{\partial X^k_{jp}}\right|_{\vG}$ is meant to denote the partial derivative w.r.t $X^k_{jp}$ but while keeping $\vG$ fixed i.e. treated as a constant or being ``conditioned upon'' (indeed both $\vG$ and $\hat\vY$ depend on $X^k_{jp}$ but the definition keeps $\vG$ fixed while taking derivatives).
Similarly, in Lemma 1 part 1, $\valpha^k$ is fixed (treated as a constant) in the derivative in the definition of $\partial{\mathcal L}/\partial E^k$ and in part 2, $\valpha^{k+1}$ is fixed (treated as a constant) in the derivative in the definition of $\valpha^k$.
\textbf{Recalling some Notation for sake of completeness}:
We recall that ${\bm{x}}^k_i$ is the $k$-th layer embedding of node $i$. ${\bm{y}}_i$ is the $C$-dimensional ground-truth label vector for node $i$ and $\hat{\bm{y}}_i$ is the $C$-dimensional predicted score vector for node $i$. $K$ is the total number of layers in the GCN. ${\mathcal L}_i$ denotes the loss on the $i$-th node and ${\mathcal L}$ denotes the total loss summed over all nodes. $d_k$ is the embedding dimensionality after the $k$-th layer.
\textbf{Proof of Lemma 1.1} We analyze two cases
\textbf{Case 1} ($k = K$ i.e. the final layer): Recall that the predictions for node $i$ are obtained as $\hat{\bm{y}}_i = (W^{K+1})^\top {\bm{x}}^K_i$. Thus we have
\[
\frac{\partial{\mathcal L}}{\partial W^{K+1}} = \sum_{i \in \cV}\sum_{c \in [C]}\frac{\partial{\mathcal L}_i}{\partial \hat y_{ic}}\frac{\partial \hat y_{ic}}{\partial W^{K+1}}.
\]
Now, $\frac{\partial{\mathcal L}_i}{\partial \hat y_{ic}} = g_{ic}$ by definition. If we let ${\bm{w}}^{K+1}_c$ denote the $c$-th column of the $d_K \times C$ matrix $W^{K+1}$, then it is clear that $\hat y_{ic}$ depends only on ${\bm{w}}^{K+1}_c$ and ${\bm{x}}^K_i$. Thus, we have
\[
\frac{\partial \hat y_{ic}}{\partial W^{K+1}} = \frac{\partial \hat y_{ic}}{\partial{\bm{w}}^{K+1}_c}\cdot {\bm{e}}_c^\top = {\bm{x}}^K_i\, {\bm{e}}_c^\top,
\]
where ${\bm{e}}_c$ is the $c$-th canonical vector in $C$-dimensions with 1 at the $c$-th coordinate and 0 everywhere else. This gives us
\[
\frac{\partial{\mathcal L}}{\partial W^{K+1}} = \sum_{i \in \cV} {\bm{x}}^K_i \sum_{c \in [C]} g_{ic}\cdot{\bm{e}}_c^\top = (X^K)^\top\vG.
\]
\textbf{Case 2} ($k < K$ i.e. intermediate layers): We recall that $X^k$ stacks all $k$-th layer embeddings as an $N \times d_k$ matrix and $X^k = f(X^{k-1};E^k)$ where $E^k$ denotes the parameters (weights, offsets, scales) of the $k$-th layer. Thus we have
\[
\frac{\partial{\mathcal L}}{\partial E^k} = \sum_{i \in \cV}\sum_{c \in [C]}\frac{\partial{\mathcal L}}{\partial \hat y_{ic}}\frac{\partial \hat y_{ic}}{\partial E^k}.
\]
As before, $\frac{\partial{\mathcal L}}{\partial \hat y_{ic}} = g_{ic}$ by definition and we have
\[
\frac{\partial \hat y_{ic}}{\partial E^k} = \sum_{j \in \cV}\sum_{p=1}^{d_k}\frac{\partial \hat y_{ic}}{\partial X^k_{jp}}\frac{\partial X^k_{jp}}{\partial E^k}
\]
This gives us
\[
\frac{\partial{\mathcal L}}{\partial E^k} = \sum_{i \in \cV}\sum_{c \in [C]} g_{ic}\cdot\sum_{j \in \cV}\sum_{p=1}^{d_k}\frac{\partial \hat y_{ic}}{\partial X^k_{jp}}\frac{\partial X^k_{jp}}{\partial E^k} = \sum_{j \in \cV}\sum_{p=1}^{d_k}\alpha^k_{jp}\cdot\frac{\partial X^k_{jp}}{\partial E^k} = \left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k},
\]
where we used the definition of $\alpha^k_{jp}$ in the second step and used a ``conditional'' notation to get the third step. We reiterate that $\left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$ implies that $\valpha^k$ is fixed (treated as a constant) while taking the derivative. This ``conditioning'' is critical since $\valpha^k$ also depends on $E^k$. This concludes the proof.
\textbf{Proof of Lemma 1.2}: We again consider two cases and use Definition 1, which tells us that
\[
\alpha^k_{jp} = \sum_{i \in \cV}\sum_{c \in [C]} g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^k_{jp}}
\]
\textbf{Case 1} ($k = K$): Since $\hat{\bm{y}}_i = (W^{K+1})^\top {\bm{x}}^K_i$, we know that $\hat y_{ic}$ depends only on ${\bm{x}}^K_i$ and ${\bm{w}}^{K+1}_c$ where as before, ${\bm{w}}^{K+1}_c$ is the $c$-th column of the matrix $W^{K+1}$. This gives us $\frac{\partial\hat y_{ic}}{\partial X^K_{jp}} = 0$ if $i \neq j$ and $\frac{\partial\hat y_{ic}}{\partial X^K_{jp}} = w^{K+1}_{pc}$ if $i = j$ where $w^{K+1}_{pc}$ is the $(p,c)$-th entry of the matrix $W^{K+1}$ (or in other words, the $p$-th coordinate of the vector ${\bm{w}}^{K+1}_c$). This tells us that
\[
\alpha^K_{jp} = \sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^K_{jp}} = \sum_{c \in [C]}g_{jc}\cdot w^{K+1}_{pc},
\]
which gives us $\valpha^K = \vG(W^{K+1})^\top$.
\textbf{Case 2} ($k < K$): By Definition 1 we have
\[
\alpha^k_{jp} = \sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^k_{jp}} = \sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\sum_{l \in \cV}\sum_{q=1}^{d_{k+1}}\frac{\partial\hat y_{ic}}{\partial X^{k+1}_{lq}}\frac{\partial X^{k+1}_{lq}}{\partial X^k_{jp}}
\]
Rearranging the terms gives us
\[
\alpha^k_{jp} = \sum_{l \in \cV}\sum_{q=1}^{d_{k+1}}\left(\sum_{i \in \cV}\sum_{c \in [C]}g_{ic}\cdot\frac{\partial\hat y_{ic}}{\partial X^{k+1}_{lq}}\right)\cdot\frac{\partial X^{k+1}_{lq}}{\partial X^k_{jp}} = \sum_{l \in \cV}\sum_{q=1}^{d_{k+1}}\alpha^{k+1}_{lq}\cdot\frac{\partial X^{k+1}_{lq}}{\partial X^k_{jp}},
\]
where we simply used Definition 1 in the second step. However, the resulting term is simply $\left.\frac{\partial(\valpha^{k+1}\odot X^{k+1})}{\partial X^k_{jp}}\right|_{\valpha^{k+1}}$ which conditions on, or treats as a constant, the term $\valpha^{k+1}$ according to our notation convention. This finishes the proof of part 2.
\subsection{Statement of Convergence Guarantee}
The rate for full-batch updates, as derived below, is $\bigO{\frac1{T^{\frac23}}}$. This \textit{fast} rate offered by full-batch updates is asymptotically superior to the $\bigO{\frac1{\sqrt T}}$ rate offered by mini-batch SGD updates. This is because of the additional variance, introduced by mini-batch construction, that mini-batch SGD variants must incur.
\begin{theorem}[{IGLU}\xspace Convergence (Final)]\label{thm:conv-restated}
Suppose the task loss function $\cL$ is $H$-smooth and the architecture offers bounded and Lipschitz gradients as quantified below. If {IGLU}\xspace in its inverted variant (Algorithm~\ref{algo:inv}) is executed with step length $\eta$ and a staleness count of $\tau$ updates per layer (steps 7, 10 in Algorithm~\ref{algo:inv}), then within $T$ iterations we must have
\begin{enumerate}
\item $\norm{\nabla\cL}_2^2 \leq \bigO{1/T^{\frac23}}$ if model update steps are carried out on the entire graph in a full-batch with step length $\eta = \bigO{1/T^{\frac13}}$ and $\tau = \bigO1$.
\item $\norm{\nabla\cL}_2^2 \leq \bigO{1/\sqrt T}$ if model update steps are carried out using mini-batch SGD with step length $\eta = \bigO{1/\sqrt T}$ and $\tau = \bigO{T^{\frac14}}$.
\end{enumerate}
\end{theorem}
It is curious that the above result predicts that when using mini-batch SGD, a non-trivial amount of staleness ($\tau = \bigO{T^{\frac14}}$ as per the above result) may be optimal, and that premature refreshing of embeddings/incomplete gradients may be suboptimal, as was also seen in the experiments reported in Table \ref{staleness_table} and Figure \ref{fig:staleness_ablation}. Our overall proof strategy is as follows
\begin{enumerate}
\item \textbf{Step 1}: Analyze how lazy updates in {IGLU}\xspace affect model gradients
\item \textbf{Step 2}: Bound the bias in model gradients in terms of staleness due to the lazy updates
\item \textbf{Step 3}: Using various properties such as smoothness and boundedness of gradients, obtain an upper bound for the bias in the gradients in terms of number of iterations since last update
\item \textbf{Step 4}: Use the above to establish the convergence guarantee
\end{enumerate}
We will show the results for the variant of {IGLU}\xspace that uses the \textit{inverted} order of updates as given in Algorithm~\ref{algo:inv}. A similar proof technique also works for the variant that uses the \textit{backprop} order of updates. Also, to avoid clutter, we will from here on assume that a normalized total loss function is used for training, i.e., $\cL = \frac1N\sum_{i \in \cV}\ell_i$ where $N = \abs\cV$ is the number of nodes in the training graph.
\subsection{Step 1: Partial Staleness and its Effect on Model Gradients}
A peculiarity of the inverted order of updates is that the embeddings $X^k, k \in [K]$ are never stale in this variant. To see this, we use a simple inductive argument. The base case of $X^0$ is obvious -- it is never stale since it is never meant to be updated. For the inductive case, notice that the moment any parameter $E^k$ is updated in step 7 of Algorithm~\ref{algo:inv} (whether by mini-batch SGD or by full-batch GD), $X^k$ is immediately updated in step 8 of the algorithm using the current values of $X^{k-1}$ and $E^k$. Since by induction $X^{k-1}$ is never stale, $X^k$ is never stale either, completing the inductive argument.
This has an interesting consequence: by Lemma~\ref{lem:main}, we have $\frac{\partial\cL}{\partial E^k} = \frac1N\left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$ (notice the additional $1/N$ term since we are using a normalized total loss function now). However, $\left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$ is completely determined by $E^k, \valpha^k$ and $X^{k-1}$, and by the above argument $X^{k-1}$ is never stale. This shows that the only source of staleness in $\frac{\partial\cL}{\partial E^k}$ is the staleness in the values of the incomplete task gradient $\valpha^k$. Similarly, it is easy to see that the only source of staleness in $\frac{\partial\cL}{\partial W^{K+1}} = (X^K)^\top\vG$ is the staleness in $\vG$.
The above argument is easily mirrored for the backprop order of updates: an inductive argument similar to the one above would show that the incomplete task gradient values $\valpha^k$ are never stale in the backprop variant, and the only source of staleness in $\frac{\partial\cL}{\partial E^k}$ is then the staleness in the $X^k$ values.
\subsection{Step 2: Relating the Bias in Model Gradients to Staleness}
The above argument allows us to bound the bias in model gradients as a result of lazy updates. To avoid clutter, we will present the arguments with respect to the $E^k$ parameter. Similar arguments would hold for the $W^{K+1}$ parameter as well. Let $\tilde\valpha^k, \valpha^k$ denote the stale and actual values of the incomplete task gradient relevant to $E^k$. Let $\frac{\widetilde{\partial\cL}}{\partial E^k} = \left.\frac{\partial(\tilde\valpha^k\odot X^k)}{\partial E^k}\right|_{\tilde\valpha^k}$ be the stale gradients used by {IGLU}\xspace in its inverted variant to update $E^k$ and similarly let $\frac{\partial\cL}{\partial E^k} = \left.\frac{\partial(\valpha^k\odot X^k)}{\partial E^k}\right|_{\valpha^k}$ be the true gradient that could have been used had there been no staleness.
We will abuse notation and let the vectorized forms of these incomplete task gradients be denoted by the same symbols i.e. we stretch the matrix $\valpha^k \in \bR^{N\times d_k}$ into a long vector denoted also by $\valpha^k \in \bR^{N\cdot d_k}$. Let $\dim(E^k)$ denote the number of dimensions in the model parameter $E^k$ (recall that $E^k$ can be a stand-in for weight matrices, layer norm parameters etc used in layer $k$ of the GCN). Similarly, we let $Z^k_{jp} \in \bR^{\dim(E^k)}, j \in [N], p \in [d_k]$ denote the vectorized form of the gradient $\frac{\partial X^k_{jp}}{\partial E^k}$ and let $Z^k \in \bR^{N\cdot d_k \times \dim(E^k)}$ denote the matrix with all these vectors $Z^k_{jp}$ stacked up.
As per the above notation, it is easy to see that the vectorized form of the model gradient is given by
\[
\frac{\partial\cL}{\partial E^k} = \frac{(Z^k)^\top\valpha^k}N \in \bR^{\dim(E^k)}
\]
The $1/N$ term appears since we are using a normalized total loss function. This also tells us that
\begin{equation}
\label{eq:step1}
\norm{\frac{\widetilde{\partial\cL}}{\partial E^k} - \frac{\partial\cL}{\partial E^k}}_2 = \frac{\sqrt{(\tilde\valpha^k - \valpha^k)^\top(Z^k(Z^k)^\top)(\tilde\valpha^k - \valpha^k)}}N \leq \frac{\norm{\tilde\valpha^k - \valpha^k}_2\cdot\sigma_{\max}(Z^k)}N,
\end{equation}
where $\sigma_{\max}(Z^k)$ is the largest singular value of the matrix $Z^k$.
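As a quick numerical illustration (ours) of the operator-norm bound in Equation~\eqref{eq:step1}, random placeholders below stand in for the stacked single-layer derivatives $Z^k$ and the vectorized staleness $\tilde\valpha^k - \valpha^k$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nd_k, dim_E = 40, 7                        # N*d_k stacked rows, dim(E^k) cols
Z = rng.standard_normal((Nd_k, dim_E))     # rows play the role of dX^k_{jp}/dE^k
d_alpha = rng.standard_normal(Nd_k)        # tilde(alpha)^k - alpha^k, vectorized
N = 10                                     # number of nodes (so d_k = 4 here)

lhs = np.linalg.norm(Z.T @ d_alpha) / N
rhs = np.linalg.norm(d_alpha) * np.linalg.svd(Z, compute_uv=False)[0] / N
assert lhs <= rhs + 1e-12                  # the bound in Eq. (step 1) holds
\end{verbatim}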
\subsection{Step 3: Smoothness and Bounded Gradients to Bound Gradient Bias}
The above discussion shows how to bound the bias in gradients in terms of staleness in the incomplete task gradients. To utilize this relation, we assume that the loss function $\cL$ is $H$-smooth, which is a standard assumption in the literature. We will also assume that the network offers bounded gradients. Specifically, for all values of model parameters $E^k, k \in [K], W^{K+1}$ we have $\norm\vG_2, \norm{\frac{\partial X^k_{ip}}{\partial E^k}}_2, \norm{\frac{\partial\cL}{\partial E^k}}_2, \norm{\frac{\partial X^{k+1}}{\partial X^k}}_2, \norm{\valpha^k}_2 \leq B$. For the sake of simplicity, we will also assume the same bound on the parameters, e.g., $\norm{W^{K+1}}_2 \leq B$. Assuming bounded gradients and bounded parameters is also standard in the literature. However, whereas works such as \cite{vrgcn} assume bounds on the sup-norm i.e. $L_\infty$ norm of the gradients, our proofs only require an $L_2$ norm bound.
We will now show that if the model parameters $\tilde E^k, k \in [K], \tilde W^{K+1}$ undergo gradient updates to their new values $E^k, k \in [K], W^{K+1}$ and the amount of \emph{travel} is bounded by $r > 0$ i.e. $\norm{\tilde E^k - E^k}_2 \leq r$, then we have $\norm{\tilde\valpha^k - \valpha^k}_2 \leq I_k \cdot r$ where $\tilde\valpha^k, \valpha^k$ are the incomplete task gradients corresponding to respectively old and new model parameter values and $I_k$ depends on various quantities such as the smoothness parameter and the number of layers in the network.
Lemma~\ref{lem:main} (part 2) tells us that for the final layer, we have $\valpha^K = \vG(W^{K+1})^\top$ as well as for any $k < K$, we have $\valpha^k = \left.\frac{\partial(\valpha^{k+1}\odot X^{k+1})}{\partial X^k}\right|_{\valpha^{k+1}}$. We analyze this using an inductive argument.
\begin{enumerate}
\item Case 1: $k = K$ (Base Case): In this case we have
\begin{align*}
\tilde\valpha^K - \valpha^K &= \tilde\vG(\tilde W^{K+1})^\top - \vG(W^{K+1})^\top\\
&= (\tilde\vG - \vG)(\tilde W^{K+1})^\top + \vG(\tilde W^{K+1} - W^{K+1})^\top
\end{align*}
Now, the travel condition tells us that $\norm{\tilde W^{K+1} - W^{K+1}}_2 \leq r$, boundedness tells us that $\norm{\vG}_2, \norm{\tilde\vW^{K+1}}_2 \leq B$. Also, since the loss function is $H$-smooth, the task gradients are $H$-Lipschitz which implies along with the travel condition that $\norm{\tilde\vG - \vG}_2 \leq H\cdot r$. Put together this tells us that
\[
\norm{\tilde\valpha^K - \valpha^K}_2 \leq (H+1)B\cdot r,
\]
telling us that $I_K \leq (H+1)B$.
\item Case 2: $k < K$ (Inductive Case): In this case, let $\tilde X^k$ and $X^k$ denote the embeddings with respect to the old and new parameters respectively. Then we have
\begin{align*}
\tilde\valpha^k - \valpha^k =& \left.\frac{\partial(\tilde\valpha^{k+1}\odot \tilde X^{k+1})}{\partial\tilde X^k}\right|_{\tilde\valpha^{k+1}} - \left.\frac{\partial(\valpha^{k+1}\odot X^{k+1})}{\partial X^k}\right|_{\valpha^{k+1}}\\
=& \underbrace{\left.\frac{\partial(\tilde\valpha^{k+1}\odot \tilde X^{k+1})}{\partial\tilde X^k}\right|_{\tilde\valpha^{k+1}} - \left.\frac{\partial(\valpha^{k+1}\odot \tilde X^{k+1})}{\partial\tilde X^k}\right|_{\valpha^{k+1}}}_{(P)}\\
&{}+ \underbrace{\left.\frac{\partial(\valpha^{k+1}\odot \tilde X^{k+1})}{\partial\tilde X^k}\right|_{\valpha^{k+1}} - \left.\frac{\partial(\valpha^{k+1}\odot X^{k+1})}{\partial X^k}\right|_{\valpha^{k+1}}}_{(Q)}
\end{align*}
By induction we have $\norm{\tilde\valpha^{k+1} - \valpha^{k+1}}_2 \leq I_{k+1}\cdot r$, and bounded gradients tell us that $\norm{\frac{\partial \tilde X^{k+1}}{\partial\tilde X^k}}_2 \leq B$, giving us $\norm{(P)}_2 \leq I_{k+1}B\cdot r$. To analyze the term $\norm{(Q)}_2$, recall that we have $X^{k+1} = f(X^k; E^{k+1})$, and since the overall task loss is $H$-smooth, so must be the function $f$. Abusing notation to let $H$ also denote the Lipschitz constant of the network gives us $\norm{\tilde X^k - X^k}_2 \leq H \cdot r$. Then we have
\begin{align*}
\frac{\partial(\tilde X^{k+1})}{\partial\tilde X^k} - \frac{\partial X^{k+1}}{\partial X^k} &= f'(\tilde X^k; \tilde E^{k+1}) - f'(X^k; E^{k+1})\\
&= \underbrace{f'(\tilde X^k; \tilde E^{k+1}) - f'(X^k; \tilde E^{k+1})}_{(M)} + \underbrace{f'(X^k; \tilde E^{k+1}) - f'(X^k; E^{k+1})}_{(N)}
\end{align*}
Applying smoothness, we have $\norm{(M)}_2 \leq H^2\cdot r$ as well as $\norm{(N)}_2 \leq H\cdot r$. Together with bounded gradients that gives us $\norm{\valpha^{k+1}}_2 \leq B$, we have $\norm{(Q)}_2 \leq BH(H+1)\cdot r$. Together, we have
\[
\norm{\tilde\valpha^k - \valpha^k}_2 \leq B(H(H+1) + I_{k+1})\cdot r,
\]
telling us that $I_k \leq B(H(H+1) + I_{k+1})$.
\end{enumerate}
The above tells us that if, in Algorithm~\ref{algo:inv}, the parameter update steps (steps 7 and 10) are executed by effecting $\tau$ mini-batch SGD steps or else $\tau$ full-batch GD steps each time, with step length $\eta$, then the amount of travel in any model parameter is bounded above by $\tau\eta B$, i.e., $\norm{\tilde E^k - E^k}_2 \leq \tau\eta B$ and so on. Now, the incomplete task gradients $\valpha^k$ are updated only after the model parameters for all layers have been updated once and the algorithm loops back to step 4. Thus, Lipschitzness of the gradients tells us that the staleness in the incomplete task gradients, for any $k \in [K]$, is upper bounded by
\[
\norm{\tilde\valpha^k - \valpha^k}_2 \leq \tau\eta\cdot IB,
\]
where we take $I = \max_{k \leq K}I_k$ for sake of simplicity. Now, since $\norm{\frac{\partial X^k_{ip}}{\partial E^k}}_2 \leq B$ as gradients are bounded, we have $\sigma_{\max}(Z^k) \leq N\cdot d_k B$. Then, combining with the result in Equation~\eqref{eq:step1} gives us
\[
\norm{\frac{\widetilde{\partial\cL}}{\partial E^k} - \frac{\partial\cL}{\partial E^k}}_2 \leq \tau\eta\cdot IB^2d_k
\]
Taking in contributions of gradients of all layers and the final classifier layer gives us the bias in the total gradient using triangle inequality as
\[
\norm{\widetilde{\nabla\cL} - \nabla\cL}_2 \leq \sum_{k \in [K]}\norm{\frac{\widetilde{\partial\cL}}{\partial E^k} - \frac{\partial\cL}{\partial E^k}}_2 + \norm{\frac{\widetilde{\partial\cL}}{\partial W^{K+1}} - \frac{\partial\cL}{\partial W^{K+1}}}_2 \leq \tau\eta\cdot IB^2d_{\max}(K+1),
\]
where $d_{\max} = \max_{k \in [K]}\ d_k$ is the maximum embedding dimensionality of any layer.
\subsection{Step 4 (i): Convergence Guarantee (mini-batch SGD)}
Let us analyze convergence in the case when updates are made using mini-batch SGD in steps 7, 10 of Algorithm~\ref{algo:inv}. The discussion above establishes an upper bound on the \emph{absolute} bias in the gradients. However, our proofs later require a \emph{relative} bound, which we tackle now. Let us set the step length to $\eta = \frac1{C\sqrt T}$ for some constant $C > 0$ to be decided later, and fix some value $\phi < 1$. Then two cases arise
\begin{enumerate}
\item \textbf{Case 1}: The relative gradient bias is too large i.e. $\tau\eta\cdot IB^2d_{\max}(K+1) > \phi\cdot\norm{\nabla\cL}_2$. In this case we are actually done since we get
\[
\norm{\nabla\cL}_2 \leq \frac{\tau\cdot IB^2d_{\max}(K+1)}{\phi\cdot C\sqrt T},
\]
i.e. we are already at an approximate first-order stationary point.
\item \textbf{Case 2}: The relative gradient bias is small i.e. $\tau\eta\cdot IB^2d_{\max}(K+1) \leq \phi\cdot\norm{\nabla\cL}_2$. In this case we satisfy the relative bias bound required by Lemma~\ref{lem:conv} (part 2) with $\delta = \phi$.
\end{enumerate}
This shows that either Case 1 happens, in which case we are done, or else Case 2 keeps applying, which means that Lemma~\ref{lem:conv} (part 2) keeps getting its prerequisites satisfied. If Case 1 does not happen for $T$ steps, then Lemma~\ref{lem:conv} (part 2) assures us that we will arrive at a point where
\[
\mathbb{E}{\norm{\nabla\cL}_2^2} \leq \frac{2C(\cL^0 - \cL^\ast) + H\sigma^2/C}{(1-\phi)\sqrt T}
\]
where $\cL^0, \cL^\ast$ are respectively the initial and optimal values of the loss function (which we recall is $H$-smooth), $\sigma^2$ is the variance due to mini-batch creation, and we have set $\eta = \frac1{C\sqrt T}$. Setting $C = \sqrt{\frac{H\sigma^2}{2(\cL^0 - \cL^\ast)}}$ tells us that within $T$ steps, either we will achieve
\[
\norm{\nabla\cL}_2^2 \leq \frac{2\tau^2\cdot I^2B^4d_{\max}^2(K+1)^2(\cL^0 - \cL^\ast)}{\phi^2H\sigma^2T},
\]
or else we will achieve
\[
\mathbb{E}{\norm{\nabla\cL}_2^2} \leq \frac{2\sigma\sqrt{2(\cL^0 - \cL^\ast)H}}{(1-\phi)\sqrt T}
\]
Setting $\tau = T^{\frac14}\cdot\br{\frac{\sqrt{\sigma^3H\sqrt H}}{(\cL^0 - \cL^\ast)^{\frac14}IB^2d_{\max}(K+1)}}$ balances the two quantities in terms of their dependence on $T$ (modulo absolute constants such as $\phi$). Note that, as expected, as the quantities $I, B, d_{\max}, K$ increase, the above limit on $\tau$ goes down, i.e., we are able to perform fewer and fewer updates to the model parameters before a refresh is required. This concludes the proof of Theorem~\ref{thm:conv-restated} for the second case.
\subsection{Step 4 (ii): Convergence Guarantee (full-batch GD)}
In this case, similarly, either the relative gradient bias is too large, in which case we get
\[
\norm{\nabla\cL}_2 \leq \frac{\tau\eta\cdot IB^2d_{\max}(K+1)}\phi,
\]
or else we satisfy the relative bias bound required by Lemma~\ref{lem:conv} (part 1) with $\delta = \phi$. Thus, either Case 1 happens, in which case we are done, or else Case 2 keeps applying, which means that Lemma~\ref{lem:conv} (part 1) keeps getting its prerequisites satisfied. If Case 1 does not happen for $T$ steps, then Lemma~\ref{lem:conv} (part 1) assures us that we will arrive at a point where
\[
\norm{\nabla\cL}_2^2 \leq \frac{2(\cL^0 - \cL^\ast)}{\eta(1-\phi)T}
\]
In this case, for a given value of $\tau$, setting $\eta = \br{\frac{2(\cL^0 - \cL^\ast)}{(1-\phi)\tau^2I^2B^4d_{\max}^2(K+1)^2T}}^{\frac13}$ gives us that within $T$ iterations, we must achieve
\[
\norm{\nabla\cL}_2^2 \leq \br{\frac{2\tau(\cL^0 - \cL^\ast)IB^2d_{\max}(K+1)}{(1-\phi)}}^{\frac23}\cdot\frac1{T^{\frac23}}
\]
In this case, it is prudent to set $\tau = \bigO1$ so as not to deteriorate the convergence rate. This concludes the proof of Theorem~\ref{thm:conv-restated} for the first case.
\subsection{Generic Convergence Results}
\begin{lemma}[First-order Stationarity with a Smooth Objective]
\label{lem:conv}
Let $f: \vTheta \rightarrow \bR$ be an $H$-smooth objective over model parameters ${\bm{\theta}} \in \vTheta$ that is being optimized using a gradient oracle and the following update with step length $\eta$:
\[
{\bm{\theta}}^{t+1} = {\bm{\theta}}^t - \eta\cdot{\bm{g}}^t
\]
Let ${\bm{\theta}}^\ast$ be an optimal point i.e. ${\bm{\theta}}^\ast \in \arg\min_{{\bm{\theta}} \in \vTheta}\ f({\bm{\theta}})$. Then, the following results hold depending on the nature of the gradient oracle:
\begin{enumerate}
\item If a non-stochastic gradient oracle with bounded bias is used i.e. for some $\delta \in (0,1)$, for all $t$, we have ${\bm{g}}^t = \nabla f({\bm{\theta}}^t) + \vDelta^t$ where $\norm{\vDelta^t}_2 \leq \delta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2$ and if the step length satisfies $\eta \leq \frac{(1-\delta)}{2H(1+\delta^2)}$, then for any $T > 0$, for some $t \leq T$ we must have
\[
\norm{\nabla f({\bm{\theta}}^t)}_2^2 \leq \frac{2(f({\bm{\theta}}^0) - f({\bm{\theta}}^\ast))}{\eta(1-\delta)T}
\]
\item If a stochastic gradient oracle is used with bounded bias i.e. for some $\delta \in (0,1)$, for all $t$, we have $\mathbb{E}{\cond{{\bm{g}}^t}{\bm{\theta}}^t} = \nabla f({\bm{\theta}}^t) + \vDelta^t$ where $\norm{\vDelta^t}_2 \leq \delta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2$, as well as bounded variance i.e. for all $t$, we have $\mathbb{E}{\cond{\norm{{\bm{g}}^t - \nabla f({\bm{\theta}}^t) - \vDelta^t}_2^2}{\bm{\theta}}^t} \leq \sigma^2$ and if the step length satisfies $\eta \leq \frac{(1-\delta)}{2H(1+\delta^2)}$, as well as $\eta = \frac1{C\sqrt T}$ for some $C > 0$, then for any $T > \frac{4H^2(1+\delta^2)^2}{C^2(1-\delta)^2}$, for some $t \leq T$ we must have
\[
\mathbb{E}{\norm{\nabla f({\bm{\theta}}^t)}_2^2} \leq \frac{2C(f({\bm{\theta}}^0) - f({\bm{\theta}}^\ast)) + H\sigma^2/C}{(1-\delta)\sqrt T}
\]
\end{enumerate}
\end{lemma}
\begin{proof}[Proof (of Lemma~\ref{lem:conv})]
To prove the first part, we notice that smoothness of the objective gives us
\[
f({\bm{\theta}}^{t+1}) \leq f({\bm{\theta}}^t) + \ip{\nabla f({\bm{\theta}}^t)}{{\bm{\theta}}^{t+1} - {\bm{\theta}}^t} + \frac H2\norm{{\bm{\theta}}^{t+1} - {\bm{\theta}}^t}_2^2
\]
Since we used the update ${\bm{\theta}}^{t+1} = {\bm{\theta}}^t - \eta\cdot{\bm{g}}^t$ and we have ${\bm{g}}^t = \nabla f({\bm{\theta}}^t) + \vDelta^t$, the above gives us
\begin{align*}
f({\bm{\theta}}^{t+1}) &\leq f({\bm{\theta}}^t) - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{{\bm{g}}^t} + \frac{H\eta^2}2\norm{{\bm{g}}^t}_2^2\\
&= f({\bm{\theta}}^t) - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{\nabla f({\bm{\theta}}^t) + \vDelta^t} + \frac{H\eta^2}2\br{\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2}\\
&= f({\bm{\theta}}^t) - \eta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2 - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{\vDelta^t} + \frac{H\eta^2}2\br{\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2}
\end{align*}
Now, the Cauchy-Schwarz inequality along with the bound on the bias gives us $- \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{\vDelta^t} \leq \eta\delta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2$ as well as $\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2 \leq 2(1+\delta^2)\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2$. Using these gives us
\begin{align*}
f({\bm{\theta}}^{t+1}) &\leq f({\bm{\theta}}^t) - \eta(1 - \delta - \eta H(1+\delta^2))\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2\\
&\leq f({\bm{\theta}}^t) - \frac{\eta(1 - \delta)}2\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2,
\end{align*}
since we chose $\eta \leq \frac{(1-\delta)}{2H(1+\delta^2)}$. Reorganizing, taking a telescopic sum over all $t$, using $f({\bm{\theta}}^{T+1}) \geq f({\bm{\theta}}^\ast)$ and making an averaging argument tells us that for any $T > 0$, it must be the case that for some $t \leq T$, we have
\[
\norm{\nabla f({\bm{\theta}}^t)}_2^2 \leq \frac{2(f({\bm{\theta}}^0) - f({\bm{\theta}}^\ast))}{\eta(1-\delta)T}
\]
This proves the first part. For the second part, we yet again invoke smoothness to get
\[
f({\bm{\theta}}^{t+1}) \leq f({\bm{\theta}}^t) + \ip{\nabla f({\bm{\theta}}^t)}{{\bm{\theta}}^{t+1} - {\bm{\theta}}^t} + \frac H2\norm{{\bm{\theta}}^{t+1} - {\bm{\theta}}^t}_2^2
\]
Since we used the update ${\bm{\theta}}^{t+1} = {\bm{\theta}}^t - \eta\cdot{\bm{g}}^t$, the above gives us
\begin{align*}
f({\bm{\theta}}^{t+1}) &\leq f({\bm{\theta}}^t) - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{{\bm{g}}^t} + \frac{H\eta^2}2\norm{{\bm{g}}^t}_2^2\\
&= f({\bm{\theta}}^t) - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{{\bm{g}}^t} + \frac{H\eta^2}2\br{\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2 + \norm{{\bm{g}}^t - \nabla f({\bm{\theta}}^t) - \vDelta^t}_2^2}\\
&\quad -H\eta^2\ip{\nabla f({\bm{\theta}}^t) + \vDelta^t}{{\bm{g}}^t - \nabla f({\bm{\theta}}^t) - \vDelta^t}
\end{align*}
Taking conditional expectations on both sides gives us
\[
\mathbb{E}{\cond{f({\bm{\theta}}^{t+1})}{\bm{\theta}}^t} \leq f({\bm{\theta}}^t) - \eta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2 - \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{\vDelta^t} + \frac{H\eta^2}2\br{\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2 + \sigma^2}
\]
Now, the Cauchy-Schwarz inequality along with the bound on the bias gives us $- \eta\cdot\ip{\nabla f({\bm{\theta}}^t)}{\vDelta^t} \leq \eta\delta\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2$ as well as $\norm{\nabla f({\bm{\theta}}^t) + \vDelta^t}_2^2 \leq 2(1+\delta^2)\cdot\norm{\nabla f({\bm{\theta}}^t)}_2^2$. Using these and applying a total expectation gives us
\begin{align*}
\mathbb{E}{f({\bm{\theta}}^{t+1})} &\leq \mathbb{E}{f({\bm{\theta}}^t)} - \eta(1 - \delta - \eta H(1+\delta^2))\cdot\mathbb{E}{\norm{\nabla f({\bm{\theta}}^t)}_2^2} + \frac{H\eta^2}2\cdot\sigma^2\\
&\leq \mathbb{E}{f({\bm{\theta}}^t)} - \frac{\eta(1 - \delta)}2\cdot\mathbb{E}{\norm{\nabla f({\bm{\theta}}^t)}_2^2} + \frac{H\eta^2}2\cdot\sigma^2
\end{align*}
where the second step follows since we set $\eta \leq \frac{(1-\delta)}{2H(1+\delta^2)}$. Reorganizing, taking a telescopic sum over all $t$, using $f({\bm{\theta}}^{T+1}) \geq f({\bm{\theta}}^\ast)$ and making an averaging argument tells us that for any $T > 0$, it must be the case that for some $t \leq T$, we have
\[
\mathbb{E}{\norm{\nabla f({\bm{\theta}}^t)}_2^2} \leq \frac{2(f({\bm{\theta}}^0) - f({\bm{\theta}}^\ast))}{\eta(1-\delta)T} + \frac{H\eta\sigma^2}{1-\delta}
\]
However, since we also took care to set $\eta = \frac1{C\sqrt T}$, we get
\[
\mathbb{E}{\norm{\nabla f({\bm{\theta}}^t)}_2^2} \leq \frac{2C(f({\bm{\theta}}^0) - f({\bm{\theta}}^\ast)) + H\sigma^2/C}{(1-\delta)\sqrt T}
\]
which proves the second part and finishes the proof.
\end{proof}
\section{Introduction}
In this paper, we consider the problem of sampling from a posterior
$$
\pi({\boldsymbol \theta}|D)\propto p(D|{\boldsymbol \theta})p({\boldsymbol \theta}),
$$
where $D$ denotes data and ${\boldsymbol \theta} \in \Theta$ is a vector of unknown parameters, in the case where the likelihood $p(D|{\boldsymbol \theta})$ is costly to evaluate. We discuss two-stage algorithms. In the first of these, we examine an adaptive Metropolis-Hastings (MH) algorithm~\cite{Hastings70, Robert2015} which employs an adaptively tuned Gaussian Process (GP) surrogate model at the first stage to filter out poor proposals. If a proposal is not filtered out, at the second stage a full (expensive) log-likelihood evaluation is carried out and used to decide whether it is accepted as the next state. The first stage, constructed in this way, saves computation on poor proposals. A key contribution of this work is the form of the acceptance probability in the first stage, obtained by marginalising out the GP function. This makes the acceptance ratio dependent on the variance of the GP, which naturally results in an exploration-exploitation trade-off similar to that of Bayesian Optimisation
\citep{BCD10}, and allows us to sample while learning the GP. We demonstrate that using this expectation serves as a useful filtering scheme. The second algorithm is a two-stage form of the Metropolis adjusted Langevin algorithm (MALA)~\cite{Neal2011}. Here, we again use a GP as a surrogate for the log-likelihood function, but in this case the GP is also used to approximate the gradient required for the MALA update, using the well known result that the gradient of a GP is also a GP~\cite{Rasmussen02}. Marginalising out the GP can also be performed in this instance.
The approximation we use is
\begin{equation}
\label{eq:GPapprox}
\text{LL}(D|{\boldsymbol \theta}):= \ln p(D|{\boldsymbol \theta}) \approx \widetilde{\text{LL}}_t(D|{\boldsymbol \theta}) \sim \text{GP}(\mu({\boldsymbol \theta}|\mathcal{I}_t),k({\boldsymbol \theta},{\boldsymbol \theta}^*|\mathcal{I}_t))
\end{equation}
where $\mathcal{I}_t$ denotes the set of $t$ full evaluations of the log-likelihood by the current iteration, and $\boldsymbol \theta^*$ collectively denotes the parameter values at which these evaluations were made.
Adaptive tuning of the GP surrogate is accomplished through use of the collection $\mathcal{I}_t$ of full evaluations of the log-likelihood. We argue that the tuning schedule we suggest satisfies diminishing adaptation~\cite{roberts_rosenthal_2007} and hence will ensure correct sampling from the true target $\pi(\boldsymbol\theta|D)$.
Within the Markov chain Monte Carlo (MCMC) literature, there has been much interest in recent years in the use of proxy quantities in place of exact target evaluations. Approaches based on noisy approximations of the transition kernel~\cite{Andrieu2009, Alquier16} have gained much attention. The work here assumes that the log-likelihood, though possibly expensive, can be computed, and is thus more aligned with the work of~\citet{rasmussen2003gaussian,Christen2005,Sherlock2017,li2019neural,fielding2011efficient,bliznyuk2012local,joseph2012bayesian}, involving ideas from delayed acceptance MCMC. The key difference is that we do not carry out pre-computation of the GP prior to running the algorithm; instead, we investigate adaptation of the GP on the fly, using key results from the adaptive MCMC literature~\cite{roberts_rosenthal_2007} to ensure convergence to the true target.
We present the two-stage MH algorithm in Section~\ref{sec:two-stage-description}. Section~\ref{sec:MALA} follows by introducing the two-stage MALA algorithm building on these ideas. Section~\ref{sec:examples} explores a range of examples and demonstrates the merits of the filtering step, with a discussion of potential drawbacks.
\section{Two Stage Adaptive Metropolis-Hastings via GP approximation} \label{sec:two-stage-description}
We combine the MH algorithm with a GP model which approximates the log-likelihood. In cases where the log-likelihood is expensive to compute, the GP model can be used in a pre-filtering step to determine proposals for which a full computation of the log-likelihood might well lead to an acceptance~\cite{Christen2005}. %
At each iteration of the algorithm, the first stage uses a GP to deliver an approximate log-likelihood evaluation. The GP is based on a collection $\mathcal{I}_t$ of previous full evaluations of the log-likelihood. A proposal is made from the current state and the usual MH acceptance probability is computed using the approximated log-likelihood (this step is computationally inexpensive). If the proposal is accepted at this first stage, it proceeds to the second stage, where another acceptance probability is computed, this time based on the full, costly evaluation of the log-likelihood. The resulting evaluation of the log-likelihood is then appended to $\mathcal{I}_t$, resulting in $\mathcal{I}_{t+1}$.
Before giving a full description of the algorithm, we introduce some notation and give an explicit definition of $\mathcal{I}_t$:
\begin{itemize}
\item $\mathcal{S}_k$ denotes the points sampled up to the iteration $k$ of the algorithm;
\item ${\boldsymbol \theta}^{(k)}$ denotes the most recent element in $\mathcal{S}_k$ and ${\boldsymbol \theta}^*$ denotes the proposed state;
\item $\mathcal{I}_t=\{({\boldsymbol \theta}^{(i)},\text{LL}(D|{\boldsymbol \theta}^{(i)})): ~~i=1,\dots,t\}$ denotes the $t\leq k$ exact likelihood evaluations performed up to iteration $k$.
\end{itemize}
We use a noise-free GP as a surrogate model for the log-likelihood and denote by $\text{GP}_k(\mu({\boldsymbol \theta}|\mathcal{I}_t),k({\boldsymbol \theta},{\boldsymbol \theta}|\mathcal{I}_t))$ the posterior GP at iteration $k$, conditioned on the collection $\mathcal{I}_t$. We use $\widetilde{\text{LL}}_t({\boldsymbol \theta})$ to denote the GP-distributed log-likelihood.
We choose the parameters of the GP to satisfy the following exact interpolation property.
\begin{assumption}
\label{as1}
The prior mean function and prior covariance function of the GP are selected to guarantee exact interpolation: %
$$
\mu({\boldsymbol \theta}^{(i)}|\mathcal{I}_t)=\text{LL}(D|{\boldsymbol \theta}^{(i)}),\qquad\qquad k({\boldsymbol \theta}^{(i)},{\boldsymbol \theta}|\mathcal{I}_t)=0,
$$
for all ${\boldsymbol \theta}^{(i)}$ with a corresponding entry in $\mathcal{I}_t$ and $ {\boldsymbol \theta} \in \Theta$.\footnote{Any universal covariance function satisfies this property, for instance the Squared Exponential one.}
\end{assumption}
This means the predictions of the GP at the points ${\boldsymbol \theta}^{(i)} \in \mathcal{I}_t$ are exact and certain (zero (co)variance), which is a desirable property in a noise-free regression problem.\footnote{This also guarantees consistency between the two stages: the denominators of \eqref{eq:alphastar} and \eqref{eq:alpha} coincide.}
The two stages of the MH algorithm are as follows.
\paragraph{Stage 1} Use the predictive posterior GP (conditioned on the collection $\mathcal{I}_t$) to approximate the log-likelihood.
Define the first stage acceptance probability:
\begin{align}
\label{eq:alphastar}
\tilde{\alpha}^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)= 1 \wedge \frac{\exp( \widetilde{\text{LL}}_t({\boldsymbol \theta}^*))p({\boldsymbol \theta}^*)q({\boldsymbol \theta}^{(k)}|{\boldsymbol \theta}^*)}{\exp( \widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)}))p({\boldsymbol \theta}^{(k)})q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})}
\end{align}
where $\widetilde{\text{LL}}_t(\cdot) \sim \text{GP}_k(\mu({\boldsymbol \theta}|\mathcal{I}_t),k({\boldsymbol \theta},{\boldsymbol \theta}|\mathcal{I}_t))$ and we use the shorthand notation $a \wedge b = \min(a, b)$. Note that, because of the exact interpolation property in Assumption \ref{as1}, it follows that $\widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)})=\text{LL}(D|{\boldsymbol \theta}^{(k)})$.
The acceptance probability $\tilde{\alpha}^{(1)}({\boldsymbol \theta}^{(k)} ,{\boldsymbol \theta}^*)$ (respectively, $\tilde{\alpha}^{(1)}({{\boldsymbol \theta}^*,\boldsymbol \theta}^{(k)})$) depends on $\widetilde{\text{LL}}_t({\boldsymbol \theta}^*)-\widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)})$ (respectively, $-\widetilde{\text{LL}}_t({\boldsymbol \theta}^*)+\widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)})$ ) which is GP distributed. A key part of our approach involves marginalizing this dependence out by exploiting the following result.
\begin{proposition}
\label{prop:1}
The distribution of $ e^{\widetilde{\text{LL}}_t({\boldsymbol \theta})}$ is
$ \textit{Lognormal}\left(\mu({\boldsymbol \theta}|\mathcal{I}_t),k({\boldsymbol \theta},{\boldsymbol \theta}|\mathcal{I}_t)\right),
$
and its mean is
\begin{equation}
\label{eq:meanlognorm}
e^{\mu({\boldsymbol \theta}|\mathcal{I}_t) +\frac{1}{2} k({\boldsymbol \theta},{\boldsymbol \theta}|\mathcal{I}_t)}.
\end{equation}
%
%
\end{proposition}
The proofs of this and the other Propositions are given in Appendix~\ref{app:proofs}.
By Assumption \ref{as1}, we have that $k({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^{(k)}|\mathcal{I}_t)=k({\boldsymbol \theta}^*,{\boldsymbol \theta}^{(k)}|\mathcal{I}_t)=0$ and, therefore, $\widetilde{\text{LL}}_t({\boldsymbol \theta}^*)$ and $\widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)})$ are sampled independently.
By exploiting Proposition \ref{prop:1}, we remove the dependence of the acceptance probability on $\widetilde{\text{LL}}$ in \eqref{eq:alphastar} resulting in the acceptance probability:
\begin{equation}
\label{eq:alphastar1}
\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)= 1 \wedge \frac{e^{\mu({\boldsymbol \theta}^*|\mathcal{I}_t) +\frac{1}{2}k({\boldsymbol \theta}^*,{\boldsymbol \theta}^*|\mathcal{I}_t)}p({\boldsymbol \theta}^*)q({\boldsymbol \theta}^{(k)} |{\boldsymbol \theta}^*)}{e^{\mu({\boldsymbol \theta}^{(k)}|\mathcal{I}_t)}p({\boldsymbol \theta^{(k)}})q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})},
\end{equation}
where $\mu({\boldsymbol \theta}^{(k)}|\mathcal{I}_t)=\text{LL}(D|{\boldsymbol \theta}^{(k)})$ is the exact log-likelihood (by Assumption \ref{as1}).
It can be seen that
$\mu({\boldsymbol \theta}^*|\mathcal{I}_t) +\frac{1}{2}k({\boldsymbol \theta}^*,{\boldsymbol \theta}^*|\mathcal{I}_t)$ depends on the GP variance and, therefore, the acceptance probability is larger in regions where the GP uncertainty is large. Similar to the acquisition functions in Bayesian optimisation, this naturally results in an exploration-exploitation trade-off. However, our goal here is different: we aim to sample from the target distribution.
Therefore, given \eqref{eq:alphastar1}, in Stage 1, we accept ${\boldsymbol \theta}^*$ with probability $\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)$, otherwise ${\boldsymbol \theta}^{(k+1)}={\boldsymbol \theta}^{(k)}$. This defines the following transition kernel at Stage 1:
\begin{align}
\label{eq:starproposal}
Q^*_k(A|{\boldsymbol \theta}^{(k)})&=\int_{A} \alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})d{\boldsymbol \theta}^*+I_A({\boldsymbol \theta}^{(k)})\int_{\Theta} (1-\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*))q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})d{\boldsymbol \theta}^*.
\end{align}
One can show that the above transition kernel satisfies the detailed balance property with respect to the approximate target distribution proportional to $e^{\mu({\boldsymbol \theta}|\mathcal{I}_t) +\frac{1}{2}k({\boldsymbol \theta},{\boldsymbol \theta}|\mathcal{I}_t)}p({\boldsymbol \theta})$.
\begin{proposition}
\label{prop:2}
The transition kernel \eqref{eq:starproposal} satisfies detailed balance.
\end{proposition}
We are not, however, interested in this approximate target distribution; this is the reason we perform the second stage.
\paragraph{Stage 2.} At Stage 2, we perform another MH acceptance step, evaluating the exact log-likelihood. Let ${\boldsymbol \theta}^*$ denote a point sampled from $q^*_k({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)}):=Q^*_k( d{\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})$. Note that ${\boldsymbol \theta}^*$ is either the point proposed at Stage 1 or ${\boldsymbol \theta}^{(k)}$, if that point was rejected at Stage 1.
So, with probability
\begin{align}
\nonumber
\alpha^{(2)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)&= 1 \wedge \frac{\exp( \text{LL}(D|{\boldsymbol \theta}^*))p({\boldsymbol \theta}^*)q^*_k({\boldsymbol \theta}^{(k)}|{\boldsymbol \theta}^*)}{\exp( \text{LL}(D|{\boldsymbol \theta}^{(k)}))p({\boldsymbol \theta}^{(k)})q^*_k({{\boldsymbol \theta}^*|\boldsymbol \theta}^{(k)})}\\
\label{eq:alpha}
&= 1 \wedge \frac{\exp( \text{LL}(D|{\boldsymbol \theta}^*))p({\boldsymbol \theta}^*)q({\boldsymbol \theta}^{(k)}|{\boldsymbol \theta}^*)\alpha^{(1)}({{\boldsymbol \theta}^*,\boldsymbol \theta}^{(k)})}{\exp( \text{LL}(D|{\boldsymbol \theta}^{(k)}))p({\boldsymbol \theta}^{(k)})q({{\boldsymbol \theta}^*|\boldsymbol \theta}^{(k)})\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)},
\end{align}
we accept ${\boldsymbol \theta}^*$, otherwise ${\boldsymbol \theta}^{(k+1)}={\boldsymbol \theta}^{(k)}$.
The definition of $q^*_k$ means that a rejection at Stage 1 always leads to a rejection at Stage 2, so we do not need to compute \eqref{eq:alpha} when \eqref{eq:alphastar} has led to a rejection.
When the sample ${\boldsymbol \theta}^*$ is accepted at Stage 1, we compute the full log-likelihood, update the set $\mathcal{I}_t$, and evaluate \eqref{eq:alpha}.
Overall the acceptance probability for a new point ${\boldsymbol \theta}^*$ is $\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)\alpha^{(2)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)$.
The overall two-stage algorithm preserves detailed balance with respect to the posterior distribution; this follows directly from \eqref{eq:alpha}, which is a standard MH acceptance step with proposal density $q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)})\alpha^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)$.
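For concreteness, the following is a minimal sketch of one iteration of the two-stage sampler. The helpers \texttt{gp\_posterior}, \texttt{log\_prior} and \texttt{log\_lik} are hypothetical stand-ins, the random-walk proposal $q$ is assumed symmetric (so its ratio cancels in \eqref{eq:alphastar1} and \eqref{eq:alpha}), and the current state carries its exact log-likelihood value, as it always does by construction.
\begin{verbatim}
import numpy as np

# One iteration of the two-stage GP-MH sampler (sketch; helper names are
# hypothetical). I_t is the list of exact evaluations and ll_theta the
# exact log-likelihood of the current state theta.
def two_stage_step(theta, ll_theta, I_t, gp_posterior, log_prior,
                   log_lik, rng, step=0.5):
    theta_star = theta + step * rng.normal(size=theta.shape)
    mu_s, var_s = gp_posterior(theta_star, I_t)

    # Stage 1, marginalised acceptance (eq:alphastar1): the GP at theta
    # is exact by Assumption 1, so only mean + var/2 at theta_star enters.
    log_a1 = min(0.0, mu_s + 0.5 * var_s + log_prior(theta_star)
                 - ll_theta - log_prior(theta))
    if np.log(rng.uniform()) >= log_a1:
        return theta, ll_theta, I_t               # rejected at Stage 1

    # Stage 2 (eq:alpha): full evaluation, then the MH correction with
    # the forward/reverse Stage-1 probabilities entering the ratio.
    ll_star = log_lik(theta_star)
    I_t = I_t + [(theta_star, ll_star)]           # update evaluation set
    log_a1_rev = min(0.0, ll_theta + log_prior(theta)
                     - ll_star - log_prior(theta_star))
    log_a2 = min(0.0, ll_star + log_prior(theta_star) + log_a1_rev
                 - ll_theta - log_prior(theta) - log_a1)
    if np.log(rng.uniform()) < log_a2:
        return theta_star, ll_star, I_t
    return theta, ll_theta, I_t
\end{verbatim}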
\paragraph{Convergence analysis} To prove the convergence to the target distribution, it is enough to show that the overall (two-stage) transition kernel $P_{t}(\cdot|{\boldsymbol \theta})$
satisfies the \textit{Diminishing Adaptation} condition in~\citet{roberts_rosenthal_2007}:
\begin{equation}
\label{eq:diminish}
\lim\limits_{t \rightarrow \infty} \sup_{{\boldsymbol \theta} \in \Theta} ||P_{t}(\cdot|{\boldsymbol \theta}) -P_{t-1}(\cdot|{\boldsymbol \theta})||=0 ~~\text{ in probability}.
\end{equation}
and the \textit{Bounded Convergence} condition, which is generally
satisfied under some regularity conditions on $\Theta$ and the target distribution.
The adaptivity in our two-stage algorithm is due to the GP, and diminishing adaptation follows from the following property of the posterior predictive variance.
\begin{proposition}
\label{prop:3}
For fixed hyperparameters, the surrogate model satisfies
$k({\boldsymbol \theta}^*,{\boldsymbol \theta}^*|\mathcal{I}_t)< k({\boldsymbol \theta}^*,{\boldsymbol \theta}^*|\mathcal{I}_{t-1})$.
\end{proposition}
For illustration, in Figure \ref{fig:1}, we consider a 1D case with $\pi(\theta)\propto e^{-\frac{\theta^2}{2}}$. It can be noticed how $\alpha^{(1)}$ converges to $\alpha^{(2)}$ as the number of log-likelihood evaluations in $\mathcal{I}_t$ increases.
Proposition \ref{prop:3} holds under the assumption of fixed hyperparameters for the covariance function of the GP. Therefore, in our algorithm, we update the hyperparameters only during \textit{burn-in}.
\begin{figure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig1.pdf}
\caption{}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig2.pdf}
\caption{}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig3.pdf}
\caption{}
\label{fig:sfig3}
\end{subfigure}\\
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig4.pdf}
\caption{}
\label{fig:sfig4}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig5.pdf}
\caption{}
\label{fig:sfig5}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig10.pdf}
\caption{}
\label{fig:sfig6}
\end{subfigure}
\caption{Convergence of the GP mean to the log unnormalised density and of $\alpha^{(1)}$ to $\alpha^{(2)}$ with increasing iterations (a)-(f).}
\label{fig:1}
\end{figure}
In the next section, we extend these results to the Metropolis-adjusted Langevin algorithm.
\section{Metropolis-adjusted Langevin} \label{sec:MALA}
MALA proposes a move from the current point by taking one step (of step size $\delta>0$) in the direction of the gradient:
\begin{equation}
\label{eq:MALAprop}
{\boldsymbol \theta}^*:={\boldsymbol \theta}^{(k)}+\frac{1}{2}\delta \Lambda \Big(\nabla LL({\boldsymbol \theta}^{(k)})+\nabla\log p({\boldsymbol \theta}^{(k)})\Big)+{\sqrt {\delta \Lambda}}\mathbf{z}
\end{equation}
where $\mathbf{z} \sim N(0,I)$ and $\Lambda$ is a preconditioning covariance matrix; $\sqrt{\Lambda}$ denotes the matrix square root. In this case, we assume that we can evaluate both the log-likelihood and its gradient:
$\mathcal{I}_t=\{({\boldsymbol \theta}^{(i)},[\text{LL}(D|{\boldsymbol \theta}^{(i)}),\nabla^T\text{LL}(D|{\boldsymbol \theta}^{(i)})]): ~~i=1,\dots,t\}$. We use a multiple-output joint GP~\cite{Rasmussen02} as a surrogate model for the log-likelihood and its gradient. The idea in this case is simply to apply the previous two-stage algorithm
using the proposal \eqref{eq:MALAprop} with the gradient replaced by $\widetilde{\nabla LL}$, leading to the first-stage acceptance probability
\begin{align}
\label{eq:alphastarMALA}
\tilde{\alpha}^{(1)}({\boldsymbol \theta}^{(k)},{\boldsymbol \theta}^*)= 1 \wedge \frac{\exp( \widetilde{\text{LL}}_t({\boldsymbol \theta}^*))p({\boldsymbol \theta}^*)q({\boldsymbol \theta}^{(k)}|{\boldsymbol \theta}^*+\frac{1}{2}\delta \Lambda \widetilde{\nabla LL}({\boldsymbol \theta}^*)+\frac{1}{2}\delta \Lambda\nabla\log p({\boldsymbol \theta}^*))}{\exp( \widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)}))p({\boldsymbol \theta}^{(k)})q({\boldsymbol \theta}^*|{\boldsymbol \theta}^{(k)}+\frac{1}{2}\delta \Lambda \widetilde{\nabla LL}({\boldsymbol \theta}^{(k)})+\frac{1}{2}\delta \Lambda\nabla\log p({\boldsymbol \theta}^{(k)}))}
\end{align}
where $q$ is the Normal proposal with covariance $\delta \Lambda$. Note that $\widetilde{\text{LL}}_t({\boldsymbol \theta}^{(k)})$ and $\widetilde{\nabla LL}({\boldsymbol \theta}^{(k)})$ are exact evaluations because of Assumption \ref{as1}. As before, we can marginalise out $\widetilde{\text{LL}},\widetilde{\nabla LL}$ by computing the expectation of $\tilde{\alpha}^{(1)}$ w.r.t.\ the GP. We use the following result.
\begin{proposition}
\label{prop:5}
The expectation of
\begin{align}
\label{eq:ppdf}
e^{f({\boldsymbol \theta}^*)} q({\boldsymbol \theta}'|{\boldsymbol \theta}^*+\tfrac{\delta}{2}\Lambda\nabla f({\boldsymbol \theta}^*)+\tfrac{\delta}{2}\Lambda\nabla \log p({\boldsymbol \theta}^*) )
\end{align}
w.r.t.\ the GP, where $f,\nabla f$ denote the GP distributed log-likelihood and its gradient and $p({\boldsymbol \theta}^*)$ is the prior, is equal to
$$
e^{V^T \begin{bmatrix}
\mu\\
\Lambda \mu_{\nabla}
\end{bmatrix}+\frac{1}{2}V^TKV +\frac{1}{2}\frac{\delta^2}{4} \mu_{\nabla}^T \Lambda \mu_{\nabla}} q({\boldsymbol \theta}'|{\boldsymbol \theta}^*+\tfrac{\delta}{2}\Lambda\nabla \log p({\boldsymbol \theta}^*) )
$$
with $
V=\left[1,\left({\boldsymbol \theta}'-{\boldsymbol \theta}^*-
\tfrac{\delta}{2}\Lambda\nabla \log p({\boldsymbol \theta}^*)
-\tfrac{\delta}{4}\Lambda\mu_{\nabla}\right)^T \Lambda^{-1}\tfrac{\delta}{2}\right]^T
$, $\mu, \mu_{\nabla}$ are the GP predictive means for $f,\nabla f$, and $K$ is the corresponding covariance matrix.
\end{proposition}
Stage 2 uses the exact evaluation of the log-likelihood and its gradient; we omit the details. The overall algorithm is similar to the one presented previously for MH, the only difference being that the GP is multi-output over the log-likelihood and its gradient.
\section{Numerical experiments} \label{sec:examples}
To model the log-likelihood (and its gradient for MALA), we use a GP with a Squared Exponential covariance function. A zero prior mean is used, with the value of $\text{LL}({\boldsymbol \theta}^{(k)})$ (and its gradient for MALA) subtracted; this is equivalent to defining a GP with prior mean equal to $\text{LL}({\boldsymbol \theta}^{(k)})$. In this way, far from the data, the acceptance probability $\alpha^{(1)}$ only depends on the variance of the GP. This guarantees a high probability of acceptance at Stage 1 for samples in large-uncertainty regions.
The GP is initialised using $3$ observations before starting the two-stage sampler.
We consider five target distributions.
\begin{description}
\item[T1] The 2D posterior of the parameters $a,b$ of the banana shape distribution (true value set to $a=0.2,b=2$);
\item[T2] The 3D posterior of the parameters $a,b,\sigma$ of the nonlinear regression model $y=a\frac{x}{x+b}+\epsilon$, $\epsilon \sim N(0,\sigma^2)$ (true value set to $a=0.14,b=50,\sigma=0.1$).
\item[T3] The 3D posterior of the parameters
$\ell_1,\ell_2,\sigma^2$ of the SE kernel for a GP-classifier.
\item[T4] The 4D posterior of the parameters
$\beta,\gamma, \sigma_1,\sigma_2$ of a Susceptible-Infected-Recovered (SIR) model.
\item[T5] The 5D posterior of the parameters
$\beta_0,\dots,\beta_4$ of a parametric logistic regression problem.
\end{description}
Appendix \ref{app:priors} gives further details on the priors assumed for the parameters and on the selected proposals. Each of these five posteriors has a specific feature, resulting in a diverse set of challenging targets: for instance, T1 is heavy-tailed and T2 is heavily anisotropic. T4, the SIR problem, is a prototypical example of the type of applications targeted by the proposed method: to compute the likelihood, we need to numerically solve a system of ODEs and, in more complex biological and chemical models, this can be computationally heavy.
Evaluating the likelihood in these five problems is in fact very fast; this allows us to quickly perform Monte Carlo simulations to assess the performance of the method on artificially generated data. We then evaluate the efficiency of the algorithms by simply counting the number of likelihood evaluations.
We compare our two-stage algorithm with the standard implementations of MH and MALA.
For each target problem and in each simulation, we generate 2500 samples (including 500 for burn-in). We have deliberately selected a small number of samples to show that our approach converges quickly, which is important in computationally expensive applications.
We check for convergence to the correct posterior distribution using the metrics described in the caption of Table \ref{tab:res}.
Table \ref{tab:res} reports the value of the metrics averaged over the 30 simulations and over parameters.
Comparing the simulation results, it can be noticed that the proposed GP-based samplers achieve the same convergence metrics as standard MH and MALA, but with a fraction of the likelihood evaluations. It can also be noticed that the fraction of full likelihood evaluations required is problem dependent, ranging from 15\% (GP-MH on T4) to 68\% (GP-MALA on T5). This demonstrates that our approach automatically adapts to the complexity of the specific target distribution.
\begin{table}
\begin{minipage}{0.45\textwidth}
{\scriptsize
\begin{tabular}{ccccccc}
& & AR& ESS & ESJD & Eval\% & SD \\
\hline
\multirow{4}{*}{T1} & MH & 0.37& 90 & 0.13 & 100 & 0.02 \\
& GP-MH & 0.36 & 113 & 0.13 & {\bf 41} & 0.02\\
& MALA& 0.26 & 73 & 0.2 & 100 & 0.03\\
& GP-MALA& 0.26 & 75 & 0.2 & {\bf 35} & 0.02\\
\hline
\multirow{4}{*}{T3} & MH & 0.42& 137 & 0.44 & 100 & 4.1 \\
& GP-MH & 0.42 & 135 & 0.38 & {\bf 42} & 3.5\\
& MALA & 0.44& 133 & 0.48 & 100 & 4.1 \\
& GP-MALA & 0.43 & 134 & 0.45 & {\bf 45} & 3.5\\
\hline
\multirow{4}{*}{T5} & MH & 0.29& 98 & 0.002 & 100 & 0.006 \\
& GP-MH & 0.29 & 102 & 0.002 & {\bf 35} & 0.006\\
& MALA & 0.67 & 339 & 0.009 & 100 & 0.006 \\
& GP-MALA & 0.67 & 368 & 0.009 & {\bf 68} & 0.006\\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.45\textwidth}
{\scriptsize
\begin{tabular}{ccccccc}
& & AR& ESS & ESJD & Eval\% & SD \\
\hline
\multirow{4}{*}{T2} & MH & 0.28& 138 & 32.6 & 100 & 339 \\
& GP-MH & 0.27 & 133 & 31 & {\bf 39} & 339\\
& MALA& 0.26 & 220 & 51 & 100 & 316\\
& GP-MALA& 0.21 & 147 & 29 & {\bf 43} & 255\\
\hline
\multirow{4}{*}{T4} & MH & 0.1& 51 & 0.003 & 100 & 0.009 \\
& GP-MH & 0.1 & 45 & 0.003 & {\bf 15} & 0.009\\
& & & & & & \\
& & & & & & \\
\hline
\multirow{4}{*}{} & & & & & & \\
& & & & & & \\
& & & & & & \\
& & & & & & \\
\end{tabular}
}
\end{minipage}
\caption{Comparison between the proposed GP-based samplers and standard MH and MALA in terms of the acceptance rate (AR), Effective Sample Size (ESS),
Expected Square Jumping Distance (ESJD), percentage of full likelihood evaluations over the 2000 post-burn-in iterations (Eval\%), and Squared Distance (SD) between the true parameter values and the estimated posterior mean. We have not run MALA for the SIR model.}
\label{tab:res}
\end{table}
\section{Conclusions}
We have presented a two-stage Metropolis-Hastings algorithm for sampling from probabilistic models whose log-likelihood is computationally expensive to evaluate, using a surrogate GP model.
The key feature of the approach, and the difference w.r.t.\ previous works, is the ability to learn the target distribution from scratch (while sampling), without the need to pre-train the GP. This is fundamental for automated inference in Probabilistic Programming Languages.
In particular, we have presented an alternative first-stage acceptance scheme obtained by marginalising out the GP-distributed function, which makes the acceptance ratio explicitly dependent on the variance of the GP. This approach is extended to the Metropolis-adjusted Langevin algorithm (MALA). Numerical experiments have demonstrated the effectiveness of the method, which can automatically adapt to the complexity of the target distribution. In the numerical experiments, we have used a full GP, whose computational load grows cubically with the size of the training set. Sparse GPs can be employed to address this issue \citep{quinonero2005unifying,snelson2006sparse,pmlrv5titsias09a,Hensman2013,hernandez2016scalable,bauer2016understanding,SCHURCH2020} when thousands of samples are required.
In future work, we plan to extend the approach we used for MALA to Hamiltonian Monte Carlo. We also intend to investigate whether tailored covariance functions for log-densities or ratios of densities can provide any convergence advantage, as well as to investigate surrogate models alternative to GPs.
\bibliographystyle{acm-reference-format}
\section{Introduction}\label{sec:intro}
Migration is an essential stage of cancer progression \cite{hanahan}. It starts with cells of a growing malignant tumour invading the surrounding tissue matrix. Locally, this eventually leads to malfunctioning of the organ in which the tumour has arisen. An even more life-threatening implication of invasion is metastasis. Migrating cancer cells are able to reach and penetrate blood and/or lymph vessels. If this occurs, and the cells manage to survive the transportation across the circulatory system, they can colonise distant sites, forming further neoplasms there. This process is termed metastasis. It is responsible for about 90\% of all deaths caused by cancer \cite[chapter 14]{Weinberg}.
Cancer invasion results
from a complicated interplay of many effects, including cell movement, proliferation, and interaction among themselves and with their surroundings. Cancer cell motility, proliferation, and even survival are subject to cell-tissue interaction, see e.g. \cite{pickup}. The main mechanisms of cancer cell movement are diffusion and, in contrast to lifeless particles, various forms of taxis. Taxis refers to movement guided by the gradient of a stimulus in the cell's surroundings.
One speaks, e.g. of haptotaxis, chemotaxis, and pH-taxis if the motion is directed by the gradients of tissue fiber density, a diffusing chemical, and pH, respectively. All three kinds of taxis occur during tumour invasion. Haptotaxis plays the key role, directing the cells along the tissue fibers \cite{carter}.
Many mathematical models of invasion have been derived and studied with the aim of improving our understanding of the involved biological phenomena.
Macroscopic reaction-diffusion-taxis (RDT) systems are among the most popular tools in this context. Yet few of them, even if carefully derived, respect a very basic property, namely that cell speeds are bounded by a certain finite intrinsic value. This value is, for example, independent of the initial cell density.
Models with flux limitation have been designed with the aim of having control on speeds as well as other characteristics of propagation. In these models, diffusion and/or taxis parts of the cell flux possess a priori bounds that are independent of the spatial gradients of the involved quantities. This generally leads to a finite and well-controlled contribution from the corresponding motion effect (i.e. diffusion or taxis) to the propagation speed, which stays below a certain value that can be directly determined from the equation coefficients. This property makes equations with flux limitation an attractive tool for modelling cancer invasion. Some models involving them have been proposed in \cite{conte2021modeling,KumarSur,DKSS20,KimFried}, and we believe that there are more to come.
RDT equations are often obtained by macroscopic flux balance. Examples in the context of cancer invasion include the models in \cite{anderson2000,ChapLol2011}. However, this method is not very accurate as it disregards important information from smaller scales. In contrast, a multiscale approach that is based on construction and upscaling of a mesoscopic kinetic transport equation (KTE) leads to a considerably more precise description on the macroscale. An early example of an application to modelling of a type of cell motion often observed in cancer can be found in \cite{HillenM5}.
\bigskip
In this paper we review models with flux-limited (FL) diffusion and/or taxis in the context of cell migration, cancer invasion being the principal application that we have in mind.
We start with FL diffusion equations in \cref{SecDiff}. We discuss the rationale behind such models, their main properties, the analytical challenges that they pose, and how they can be derived on the macroscale. In the same spirit we overview RDT systems with FL taxis in \cref{SecTax}. \cref{Sec:Multi} is the main part of this paper. There we closely examine several ways to derive equations involving FL motion from KTEs by means of a suitable upscaling.
\section{FL diffusion equations}\label{SecDiff}
\subsection{Motivation}\label{SecDiffMot}
Early PDE models for population spread mostly include the standard linear diffusion, see e.g. \cite{MurrayI,MurrayII}. The simplest possible model of this kind is the linear diffusion equation
\begin{align}
\partial_t c=\nabla_x\cdot(D_c\nabla_x c),\label{LD}
\end{align}
where $c$ is the density of a population and $D_c>0$ is its diffusion coefficient which is assumed to be constant. However, in many biological applications, the choice of a diffusion flux which is in a constant proportion to the density gradient turns out to be inadequate due to certain well-known characteristics of patterns it induces, including:
\begin{enumerate}
\item\label{LD1} infinite propagation speed, i.e. even if $c_0$ has a compact support, $c(t,\cdot)$ is positive everywhere for any $t>0$;
\item\label{LD2} formation of smooth Gaussian-like structures, with any initial singularities eliminated;
\item\label{LD3} insensitivity to overcrowding, so that diffusion is not enhanced in local regions where the population is dense but rather evenly distributed.
\end{enumerate}
Concerns were raised in connection with biofilm formation \cite{eberl2001new}, tumour invasion \cite{ZSU,conte2021modeling}, as well as more generally \cite{BBNS2010} (see also references therein), pointing out that experimentally observed patterns of a spreading cell population violate \cref{LD1,LD2,LD3}.
A partial remedy is, e.g. the diffusion term proposed in \cite{eberl2001new}:
\begin{align}
\nabla_x\cdot\left(D_c\frac{c^b}{(c_{max}-c)^{a}}\nabla_x c\right),\label{DegSingD}
\end{align}
where $a,b, c_{max},D_c>0$ are some intrinsic parameters of the population, $c_{max}$ being the maximal physically possible density.
Originating from the porous media equation
\begin{align}
\partial_tc=\nabla_x\cdot(c^m\nabla_x c)\label{PM}
\end{align}
for a constant $m>0$, the power-like degeneracy at zero enforces finite speed of propagation. This means that for compactly supported initial data the speed with which the spatial support of the solution extends with time is finite.
As to the singularity at $c_{max}$, it serves to accelerate the
dispersal from densely populated areas where the density comes close to that value.
In this situation one can meaningfully speak of a moving front, i.e. the boundary of the support of $c(t,\cdot)$, and how it changes over time if one starts with an initial datum which is compactly supported.
Still, using diffusion \cref{DegSingD} in an equation for cell motion cannot guarantee an accurate description of the spreading. Indeed, along the moving front solutions behave similarly to those of the porous media equation \cref{PM}, in particular (see e.g. \cite{Vazquez}):
\begin{enumerate}
\item the propagation speed, though finite due to the degeneracy at zero, is not an intrinsic trait of the population: it depends on the initial density;
\item along the moving front, the power-like degeneracy smooths any sharp singularities, e.g. jump discontinuities, down to H\"older continuity.
\end{enumerate}
Thus, with such diffusion as in \cref{DegSingD}, impossibly large speeds are achievable. Furthermore, sharp moving fronts cannot be reproduced, and yet this effect has been observed in experiments, e.g. for glioblastoma spread \cite{conte2021modeling}. We refer to \cite{CCCSS15} where further properties of the standard porous media equation are surveyed in relation to modelling of propagation of moving fronts.
In all mentioned cases the diffusion flux is of the form
\begin{align}
J_{diff}=cV_{diff},\qquad V_{diff}=-b_{diff}(c,\nabla_xc)\label{V}
\end{align}
for some (vector-valued) function $b_{diff}$ which is linear in its second argument and hence unbounded. As discussed above, for the corresponding diffusion equations there are also no implicit universal bounds on the propagation speed, i.e. bounds that would hold for all solutions. FL diffusion, or flux-saturated
diffusion as it is often called, offers a means of explicit control on speeds. It corresponds to situations where, for each fixed $c$, the speed function $|b_{diff}(c,\cdot)|$ 'saturates' in its second argument, meaning that it converges to a finite value as $|y|\to\infty$. An early example is the relativistic heat equation
\begin{align}
\partial_tc=\nabla_x\cdot\left(D_c\frac{c\nabla_x c}{\sqrt{c^2+\frac{D_c^2}{C^2}|\nabla_x c|^2}}\right)\label{RH}
\end{align}
for constants $D_c,C>0$.
Here
\begin{align*}
b_{diff}(c,y)=D_c\frac{y}{\sqrt{c^2+\frac{D_c^2}{C^2}|y|^2}}
\end{align*}
so that for each $c$
\begin{align*}
|b_{diff}(c,y)|\underset{|y|\to\infty}{\nearrow}C,
\end{align*}
which implies that the saturation condition is satisfied and that formally cell speeds do not exceed $C$.
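This can be seen directly: for fixed $c>0$ and $y\neq 0$,
\begin{align*}
|b_{diff}(c,y)|=\frac{D_c|y|}{\sqrt{c^2+\frac{D_c^2}{C^2}|y|^2}}=\frac{C}{\sqrt{\frac{C^2c^2}{D_c^2|y|^2}+1}}\underset{|y|\to\infty}{\longrightarrow}C,
\end{align*}
the convergence being monotone from below.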
Various modifications of \cref{RH} exist in the literature, including hybrid models which combine FL diffusion with \cref{PM} \cite{CCCSS15} or volume saturation effects \cite{BuriniChouhad}, see also references in these papers.
FL diffusion models were extensively reviewed in \cite{CCCSS15}. There one may find a historical account on these models and a detailed discussion of their various properties that can be observed in numerical simulations and to a large extent also verified by rigorous analysis, as well as of the ways they can be derived.
In particular, it was proved for \cref{RH} in \cite{ACMM06},
as well as for a broad class of its variants in \cite{Calvo2015} that in contrast to \cref{PM}:
\begin{enumerate}
\item the propagation speed is bounded by a universal constant which is an explicit model parameter. Moreover, generically this speed is equal to that constant. For the relativistic heat equation, the propagation speed is essentially equal to $C$;
\item initial discontinuities on the support boundary are, at least in certain cases, propagated eternally. Thus, in general regularisation does not occur on the moving front.
\end{enumerate}
If a reaction term of Fisher-Kolmogorov type is included into \cref{RH}, then singular fronts can also be propagated, e.g. by travelling waves, see \cite{CCCSS15} and references therein.
\subsection{Macroscopic derivation}
In this Subsection we briefly review two methods of deriving FL diffusion on the macroscale. Multiscale alternatives are addressed in \cref{Sec:Multi}.
\subsubsection{Flux adjustment.}
The simplest construction goes back to \cite{rosenau1992tempered} and consists of adjusting the form of the diffusion flux directly on the macroscale: one replaces \cref{V} by
\begin{align*}
V_{diff}=-\tilde{b}_{diff}(c,\nabla_x c),\qquad \tilde{b}_{diff}=\psi\circ b_{diff},
\end{align*}
for some (vector-valued) continuous bounded function $\psi$ that saturates to a constant $C$ at infinity and is close to the identity for $|z|\ll C$. For example, taking $b_{diff}(c,y)=D_c\frac{y}{c}$ and
\begin{align}
\psi(z)=\frac{z}{\sqrt{1+\frac{|z|^2}{C^2}}}\label{psi}
\end{align}
yields \cref{RH}.
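As a minimal illustration (a sketch with a hypothetical one-dimensional grid and parameters, not a production scheme), the flux adjustment amounts to composing the unlimited velocity with $\psi$ before assembling the flux:
\begin{verbatim}
import numpy as np

def psi(z, C=1.0):
    # Saturating map from (psi): close to the identity for |z| << C,
    # with |psi(z)| < C for all z.
    return z / np.sqrt(1.0 + (z / C) ** 2)

def explicit_step(c, dx=0.05, dt=5e-4, D=1.0, C=1.0):
    # One explicit time step of the flux-adjusted diffusion equation
    # d_t c = -d_x(c * V),  V = -psi(b_diff),  b_diff = D * d_x c / c.
    grad = np.gradient(c, dx)
    v = -psi(D * grad / np.maximum(c, 1e-12), C)  # limited: |v| < C
    return c - dt * np.gradient(c * v, dx)
\end{verbatim}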
More general modifications of the formula for $V_{diff}$ are, of course, possible; e.g. one could take $\psi=\psi(x,z)$ in order to account for local heterogeneity of the surroundings.
In the context of modelling cell migration, a purely macroscopic framework was used, e.g. in \cite{conte2021modeling} in order to describe the moving fronts observed in glioblastoma invasion.
On the whole, this approach is flexible, yet it may lead to inaccurate descriptions; see the discussion in \cref{Sec:Multi}.
\subsubsection{Optimal transport.}
An alternative macroscopic derivation can be accomplished with the optimal transport approach. It was noticed in \cite{Brenier03} and made rigorous in \cite{McCP} that in the Monge-Kantorovich mass transportation framework equation \cref{RH} is the
gradient flow of the Boltzmann entropy
\begin{align*}
F(r)=r\ln r-r
\end{align*}
for the Wasserstein metric corresponding to the cost function
\begin{align}
k(z)=\begin{cases}
C^2\left(1-\sqrt{1-\frac{|z|^2}{C^2}}\right)&\text{if }|z|\leq C,\\
+\infty&\text{if }|z|>C.
\end{cases}\label{cost}
\end{align}
Choosing $k$ differently allows one to obtain other variants of FL models, see e.g. examples in \cite{CCCSS15}. For $|z|\ll C$, the cost function in \cref{cost} is close to the quadratic function $\frac{1}{2}|z|^2$. Choosing $k(z)=\frac{1}{2}|z|^2$ for all $z\in\mathbb{R}$ would yield the heat equation; this is the case for which the method was originally proposed and carried out in \cite{RKO}.
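Indeed, expanding the square root in \cref{cost} makes the quadratic approximation precise:
\begin{align*}
k(z)=C^2\left(1-\sqrt{1-\frac{|z|^2}{C^2}}\right)=\frac{|z|^2}{2}+\frac{|z|^4}{8C^2}+O\left(\frac{|z|^6}{C^4}\right),\qquad |z|\ll C.
\end{align*}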
\subsection{Analytical challenges}
While the ability of equations with FL diffusion to reproduce discontinuous moving fronts is attractive for modelling purposes, the potential presence of such fronts leads to substantial analytical difficulties. Indeed, for \cref{RH} one can only expect that $c(t,\cdot)$ belongs to the space of functions of bounded variation, so that its spatial derivatives are Radon measures and, in general, not integrable functions.
This makes the diffusion flux particularly difficult to handle because it is a nonlinear function of the spatial gradient of $c$. A well-posedness theory for \cref{RH} and its variants was developed and studied in a series of works \cite{ACMM10,ACM04,ACM05,ACM05Cauchy,ACM08}, as well as \cite{ACMFK10} which treats a reaction-diffusion equation
(see also those references in \cite{CCCSS15} which deal with further modifications of the model). There a suitable notion of a so-called entropy solution was developed and its existence and uniqueness were proved.
\section{RDT systems with FL mechanisms}\label{SecTax}
\subsection{Motivation}
Early PDE models for taxis were developed specifically for chemotaxis. This is a directed movement of cells or organisms in response to diffusing chemical cues. Many biological processes, including cancer invasion,
crucially depend on it, see e.g. \cite{eisenbach2004}.
PDE systems modelling chemotaxis
have enjoyed great popularity ever since the introduction of the classical Keller-Segel model \cite{KS70,KS71}:
\begin{subequations}\label{KS}
\begin{align}
&\partial_t c=\nabla_x\cdot(D_c\nabla_x c-\chi c\nabla_x S),\label{KSc}\\
&\partial_t S=D_v\Delta_x S- \alpha S+ \beta c\label{KSv}
\end{align}
\end{subequations}
for some constants $\alpha,\beta,D_c,D_v,\chi>0$.
Equation \cref{KSc} for the population density $c$ features two motion effects: linear diffusion and drift in the direction of the spatial gradient of the concentration of a chemical $S$. This model is able to reproduce the formation of aggregates, which is the main implication of chemotaxis. Yet the resulting patterns are often inadequate because one observes:
\begin{enumerate}
\item that already in finite time an unlimited aggregation may occur, leading to a so-called 'blow-up' (i.e. the cell density becomes unbounded);
\item consequences of the linear diffusion, see \cref{SecDiff};
\item an unlimited response to chemotaxis due to the chemotaxis flux being directly proportional to the gradient of the attractant.
\end{enumerate}
Arguably the main drawback of \cref{KS} is that it cannot maintain a reasonable balance between
the two drivers of cell spread. Indeed, there are essentially two options: either the cell motion is governed by the linear diffusion, and then chemotaxis hardly plays any role, or chemotaxis dominates, inducing an unrealistically strong aggregation, even a blow-up. This aspect is particularly well-understood, see e.g. reviews \cite{Horstmann,BBTW,LankWink2020}.
Similar to the purely diffusive case, one could try to improve the model by allowing the population diffusion coefficient $D_c$ and the so-called chemotactic sensitivity $\chi$ to depend on $c$ and/or $S$, see e.g. \cite{Horstmann,BBTW,HillenPainter} where many examples can be found.
One such model, which includes diffusion of the form given by \cref{DegSingD}, was proposed and analysed in \cite{EEWZ}. In that model the cell density cannot exceed a prescribed threshold, extreme aggregation is avoided, and the propagation speed is finite.
However, as observed in \cref{SecDiff} for the purely diffusive case, a density-independent upper bound on the propagation speed and the reproduction of experimentally observed sharp moving fronts cannot be achieved in this manner. This motivates the use of FL mechanisms.
FL taxis models rely on replacing $\chi c\nabla_x S$ with
\begin{align*}
J_{chemo}=cV_{chemo},\qquad V_{chemo}=b_{chemo}(c,S,\nabla_x S),
\end{align*}
where $b_{chemo}$ is a (vector-valued) function such that on the one hand, $b_{chemo}(c,S,y)$ is close to $\chi(c,S)y$ for sufficiently small $|y|$ for some bounded function $\chi$, but on the other hand, $b_{chemo}(c,S,\cdot)$ is bounded for every fixed pair $(c,S)$. The latter property ensures a limitation of the taxis component of the flux. A prototypical model with FL taxis is thus
\begin{subequations}\label{KSFS}
\begin{align}
&\partial_t c=\nabla_x\cdot\left(D_c\nabla_x c-cb_{chemo}(c,S,\nabla_x S)\right),\label{KSFSc}\\
&\partial_t S=D_v\Delta_x S- \alpha S+ \beta c.\label{KSFSv}
\end{align}
\end{subequations}
One could, for instance, use the following function that was proposed in \cite{HillenPainter}:
\begin{align*}
b_{chemo}(c,S,y)=\chi C\left(\tanh\left(\frac{y_1}{1+C}\right),\dots,\tanh\left(\frac{y_d}{1+C}\right)\right)
\end{align*}
for some constants $C$ and $\chi$.
We refer to \cite{HillenPainter} and references therein as well as to \cite{CKWW,PerthameVW,BBNS2010} for further examples of models with non-FL diffusion and FL chemotaxis. In \cite{KimFried} a model for glioma invasion was developed which includes linear diffusion and FL chemo- and haptotaxis, as well as other relevant effects.
In \cite{BBNS2010} a model with a fully limited cell flux was proposed:
\begin{subequations}\label{new}
\begin{align}
&\partial_t c=\nabla_x\cdot\left(D_c\frac{c\nabla_x c}{\sqrt{c^2+\frac{D_c^2}{C^2}|\nabla_x c|^2}}-\chi c \frac{\nabla_x S}{\sqrt{1+|\nabla_x S|^2}}\right)+f_c(c,S),\label{newc}\\
&\partial_t S=D_v\Delta S+f_v(c,S),\label{newv}
\end{align}
\end{subequations}
with some functions $f_c$ and $f_v$ and a constant $C>0$. Both motion effects in \cref{newc} reflect some sort of optimal transport \cite{BBNS2010}. The diffusion term originates from the relativistic heat equation \cref{RH}. Most importantly, choosing both diffusion and taxis FL guarantees that the speed of propagation cannot exceed a universal constant, in this case $C+\chi$.
Travelling wave analysis for some parabolic-elliptic systems with FL diffusion and non-FL chemotaxis \cite{arias2018cross,CYPerthame,campos2021kinks} and numerical simulations for an extension of \cref{new} to a model for glioblastoma (the most aggressive type of glioma) invasion that includes FL diffusion as well as multiple FL taxis terms \cite{conte2021modeling} indicate the ability of such models to propagate singularities observed in biological applications, including cancer invasion. Further models for glioma invasion which involve FL motion terms were developed in \cite{KumarSur,DKSS20}.
\bigskip
Similar to the purely diffusion case, RDT systems with FL mechanisms can be constructed directly on the macroscale. More accurate derivations based on a multiscale approach are discussed in \cref{Sec:Multi} below.
\subsection{Analytical challenge}
FL diffusion and taxis terms may have a similar form, as e.g. in \cref{new}, yet their impact on the analysis is vastly different.
In order to see that, let us compare \cref{KSFS} and \cref{new} with the classical model \cref{KS}. We assume that these three systems are stated in a bounded domain and we impose the no-flux boundary conditions.
Classical theory \cite{Amann1} implies that \cref{KS} is uniquely solvable in the classical sense as long as it remains bounded. If a solution blows up at some finite time, then it still exists globally in a certain generalised sense \cite{ZhigunKS}. Key to solvability in both cases is the special structure of \cref{KS}: it belongs to the class of regular quasilinear upper-triangular parabolic systems \cite{Amann1}. In such systems, diffusion is sufficiently strong compared to taxis. This allows one to obtain certain necessary a priori estimates by manipulating both equations. Thanks to these estimates, existence and uniqueness of maximal classical solutions can be obtained by means of a standard argument based on the Banach fixed-point theorem.
In general, systems with linear diffusion and FL taxis can be handled very similarly to \cref{KS}, see e.g. \cite{PerthameVW,CKWW}. In \cref{KSFS}, flux limitation ensures that cell diffusion is the dominating factor in \cref{KSFSc}, which considerably simplifies the analysis. For example, since the velocity component due to taxis is bounded by construction, a priori boundedness of $c$ can be obtained by dealing with equation \cref{KSFSc} alone.
On the other hand, system \cref{new} raises new challenges compared to \cref{KS}. The FL diffusion term precludes the application of the standard theory of parabolic PDEs even if global a priori boundedness is guaranteed. For a single purely diffusion equation such as \cref{RH} it was possible to establish the existence of a mild solution and to prove that this solution is also the unique entropy solution \cite{CCCSS15}. For the strongly coupled chemotaxis system \cref{new} it seems that it is neither possible to set up a semigroup, nor to prove uniqueness even if entropy inequalities are imposed.
A rigorous analysis of this system is still lacking.
Currently available analytical studies of systems with a fully FL cell flux are restricted to parabolic-elliptic versions. They are generally easier to deal with than the parabolic-parabolic ones.
In \cite{BelWinkler20171,BelWinkler20172,MIZUKAMI2019,Chiyoda2019} existence and blow-up were addressed in the radial-symmetric case and for strictly positive initial $c$-values. Travelling wave analysis in \cite{arias2018cross,CYPerthame,campos2021kinks} allows for biologically relevant nonnegative densities.
\section{Multiscale derivations}\label{Sec:Multi}
In this Section we turn to derivations which start with a KTE on the mesoscale and yield a macroscopic PDE that contains FL diffusion and/or taxis. This multiscale approach allows for more careful modelling than a single-scale, purely macroscopic one.
RDT systems, such as those discussed in \cref{SecTax}, are among the most widely used tools in cancer invasion modelling. They describe the evolution of macroscopic densities of cancer cell populations and densities/concentrations of other involved components, such as, e.g. tissue or biochemical signals. These quantities depend only upon time and position in space, which allows for comparison with information acquired by standard biomedical imaging techniques, e.g. magnetic resonance imaging (MRI) and computed tomography (CT). Another advantage is the availability of well-developed mathematical analysis tools and efficient numerical methods.
Often, macroscopic RDT systems are derived using a standard single-scale approach based on the balancing of macroscopic fluxes. Derivations of this type which focus on cell-tissue interaction in cancer include, e.g. \cite{anderson2000,ChapLol2011}. However, when modelling directly on the macroscale, one may lose important lower-level information or capture it inaccurately. In contrast, a multiscale modelling approach begins with putting together equations for processes of which at least some occur on scales smaller than the macroscale, i.e. micro, meso, etc. A more or less realistic setting typically includes a combination of several scales. Unfortunately, the resulting detailed equations are generally too difficult to solve numerically.
Hence, a suitable upscaling is usually performed, yielding an RDT system which still contains some essential lower-level information in the equation terms. If successfully studied analytically and simulated numerically, using parameters determined from experiments, it generally offers a much more careful description of cell migration than that which can be achieved through balancing of (macroscopic) fluxes.
The described multiscale modelling approach was originally applied in a physical context. It allowed, for example, the Euler and Navier-Stokes equations to be obtained as macroscopic scaling limits of the Boltzmann equation, see e.g. \cite{SaintRaymond}.
Later, the approach based on modelling with KTEs and their subsequent upscaling was successfully adjusted to modelling of population motility. Starting from \cite{Alt1980,OthmerDunbarAlt} numerous models have been derived in this manner.
In the context of cancer migration, the KTE-based approach allows for an adequate description of the impact that various sorts of heterogeneity have on tumour invasion. For instance, the cell-tissue interaction depends on such variables as: the tissue fiber position-direction distribution (mesoscopic), the amount of free receptors on the cell surface that can bind to tissue (microscopic), the density/concentration gradients of various tactic signals (macroscopic), etc.
Including such variables leads to multiscale settings which, when upscaled, result in nonstandard PDEs that differ considerably from those set directly on the macroscale. We refer, e.g. to \cite{HillenPainter} (see also references therein), where the effect of environmental anisotropy is comprehensively addressed (not specifically for cancer), showing that it leads to equations with drift and/or myopic (and thus non-Fickian) diffusion, both of which depend on parameters of the tissue distribution. The kinetic theory for active particles (KTAP) \cite{bellom3} is a further development of the method for those situations where not only physical variables (time, position, velocity, etc.) but, also, the so-called 'active variables' are involved. For example, in \cite{KelkelSur2012} an extension of an earlier model for glioma invasion under tissue anisotropy \cite{HillenM5} is presented which treats cell surface receptors as active variables. That model also includes chemo- and haptotaxis.
\bigskip
To simplify the exposition, we start with a single KTE of the form
\begin{align}
\partial_t c+v\cdot\nabla_x c={\mathcal L}(c)\label{KTE}
\end{align}
for the cell density distribution $c$. This mesoscopic quantity is a function not only of time $t\geq0$ and position $x\in\mathbb{R}^d$, $d\in\mathbb{N}$, but also of velocity $v$, which belongs to a suitably chosen bounded
velocity space ${\mathcal V}\subset \mathbb{R}^d$. Equation \cref{KTE} balances deterministic transport and probabilistic changes. If the right-hand side is identically zero, we obtain a simple transport equation corresponding to the situation where the velocities of cells do not change. On the microscopic level, the movement of each cell is then modelled by the ODE system
\begin{subequations}\label{MicroDSt}
\begin{align}
&\frac{dx}{dt}=v, \\
&\frac{dv}{dt}=0.\label{eq:microdynSt}
\end{align}
\end{subequations}
Of course, this is only valid if no deterministic external forces are acting on the cells. Later in \cref{withExt} we consider a case where such forces are present. We assume the operator ${\mathcal L}$ on the right-hand side of \cref{KTE} to be a so-called turning operator: it models the impact of a velocity-jump process, i.e. of probabilistic instantaneous changes in the velocity of the species.
Aiming at a macroscopic description, one often performs a suitable upscaling. The standard approach begins with a rescaled equation
\begin{align}
\varepsilon^{\kappa} \partial_t {c^{\varepsilon}}+\varepsilon v\cdot\nabla_x {c^{\varepsilon}}={\mathcal L}_{\varepsilon}({c^{\varepsilon}}),\label{KTEe}
\end{align}
where $t$ and $x$ now stand for macroscopic time and space variables, respectively, $\varepsilon>0$ is a small scaling parameter, and, typically, $\kappa=2$ or $\kappa=1$, corresponding to the parabolic and hyperbolic scalings, respectively. In order to obtain a macroscopic characterisation, one seeks to eliminate $v$ and looks for a good approximation of $\overline {c^{\varepsilon}}=\overline {c^{\varepsilon}}(t,x)$ as $\varepsilon$ tends to zero. Here and in what follows we use the following notation.
\begin{Notation}
We denote
$$\overline u\overset{\text{def}}{=}\int_{{\mathcal V}} u\,dv$$
if $u:{\mathcal V}\to\mathbb{R}$ is a function and
$$\overline u\overset{\text{def}}{=}\int_{{\mathcal V}} du$$
if it is a measure in ${\mathcal V}$.
\end{Notation}
As a rule, one assumes that ${c^{\varepsilon}}$ can be well-approximated by a truncation of the Hilbert expansion
\begin{align}
{c^{\varepsilon}}=\sum_{n=0}^{\infty}\varepsilon^n c_n^0,\label{Hilbert}
\end{align}
where $\overline{c_0^0}$ is of particular importance since it is the zero-order macroscopic approximation. The first-order correction $\overline{c_1^0}$ is also of interest.
Since integration and a nonlinear map do not commute, it is in general difficult, if not impossible, to deal with a nonlinear ${\mathcal L}$ unless it is, e.g. of the form
\begin{align}
{\mathcal L}(c)={\mathcal L}[f_1,\dots,f_M]c,\end{align}
for some macroscopic functions $f_i=f_i(t,x)$, $i=1,\dots,M$, $M\in\mathbb{N}$, which when fixed define a linear operator ${\mathcal L}[f_1,\dots,f_M]$. It is possible to have $\overline c$ among $f_i$'s.
\bigskip
Three approaches starting from such equations as \cref{KTE} or its extensions and leading to RDT systems with FL effects have been proposed so far. We briefly review them in the remainder of this Section.
\subsection{Nonlinear Hilbert expansion}\label{NLHil}
In this Subsection we look at a construction where FL diffusion results from a nonstandard approximation.
In \cite{Coulombel2005}, a parabolic scaling of \cref{KTE} was considered for the turning operator which, in the above notation, takes the form
\begin{align}
{\mathcal L}c=\lambda(\overline{c}\mu-c),\label{OperL}
\end{align}
where $\lambda>0$ is a constant, $\mu$ is a fixed probability measure on a bounded space ${\mathcal V}\subset[-1,1]$ such that for any continuous odd function $h$
\begin{align}
\int_{{\mathcal V}}h\,d\mu=0\label{vmu}
\end{align}
and satisfying certain other assumptions, and $c=f\mu$ for a square integrable density $f$.
As set out in \cite{HillenPainter}, this kind of turning operator is a convenient way to capture gains and losses due to instantaneous velocity changes for a population moving in heterogeneous surroundings. In the context of cancer migration, $\mu$ typically stands for the orientational distribution of tissue
fibers, see e.g. \cite{HillenM5,KelkelSur2012}, whereas $\lambda$ is the reorientation rate.
Since ${\mathcal L}$ is linear, the corresponding equations for $\overline{c_0^0}$ and $\overline{c_0^0}+\varepsilon\overline{c_1^0}$ are linear as well. In fact, both approximations satisfy the linear diffusion equation \cref{LD} with $$D_c=\frac{1}{\lambda}\int_{{\mathcal V}}v^2\,d\mu,$$
so that the propagation speed is infinite (see the discussion in \cref{SecDiff}). Yet this is not the case for ${c^{\varepsilon}}$. Indeed, integrating \cref{KTEe} yields the conservation law
\begin{align*}
\partial_t \overline{{c^{\varepsilon}}}+\partial_x(\overline{{c^{\varepsilon}}} V^{\varepsilon})=0, \qquad V^{\varepsilon}=\frac{1}{\varepsilon}\frac{\int_{{\mathcal V}}v\,d{{c^{\varepsilon}}}}{\int_{{\mathcal V}}d{{c^{\varepsilon}}}},
\end{align*}
and, since ${\mathcal V}\subset [-1,1]$,
\begin{align*}
|V^{\varepsilon}|\leq \frac{1}{\varepsilon}<\infty.
\end{align*}
To avoid the infinite propagation speed on the macroscopic scale, as well as further undesirable effects, it was proposed in \cite{Coulombel2005} to consider a nonlinear Hilbert expansion, replacing \cref{Hilbert} with
\begin{align}
{c^{\varepsilon}}=e^{\sum_{n=0}^{\infty}\varepsilon^n \Phi_n^0}\mu.\label{Hilbertexp}
\end{align}
It was proved there that a good approximation of the first order truncation
\begin{align}
\overline{e^{\Phi_0^0+\varepsilon \Phi_1^0}\mu}\nonumber
\end{align}
and, thus, of ${c^{\varepsilon}}$ solves
\begin{align}
\partial_t u^{\varepsilon}=\partial_x\left(\frac{1}{\varepsilon}u^{\varepsilon}\mathbb{G}\left(\frac{\varepsilon}{\lambda}\frac{\partial_xu^{\varepsilon}}{u^{\varepsilon}}\right)\right),\label{FSG}
\end{align}
where
\begin{align}
\mathbb{G}(\beta)=&\frac{\int_{{\mathcal V}}ve^{\beta v}\,d\mu}{\int_{{\mathcal V}}e^{\beta v}\,d\mu}\nonumber\\
=&\frac{d}{d\beta}\ln \left(\int_{{\mathcal V}}e^{\beta v}\,d\mu\right),\label{G}
\end{align}
and, under the assumptions on ${\mathcal V}$ and $\mu$ as imposed in \cite{Coulombel2005},
\begin{enumerate}
\item\label{G1} $\mathbb{G}:\mathbb{R}\to(-1,1)$ is odd, strictly increasing, infinitely differentiable, and a diffeomorphism;
\item\label{G2} $\mathbb{G}(\pm\infty)=\pm1$.
\end{enumerate}
In particular, the saturation property is guaranteed.
For some measures the corresponding $\mathbb{G}$ can be computed explicitly.
\begin{Example}[\cite{Coulombel2005}]\label{Examplemu}
Let ${\mathcal V}=[-1,1]$.
\begin{itemize}
\item A homogeneous environment corresponds to the normalised Lebesgue measure:
\begin{align}
&\mu=\frac{1}{2}|\cdot|\nonumber
\\
\Rightarrow\quad&\mathbb{G}(\beta)=\coth(\beta)-\frac{1}{\beta}.\nonumber
\end{align}
\item If cell speeds remain constant, then one deals with a discrete measure:
\begin{align}
&\mu=\frac{1}{2}(\delta_{-1}+\delta_1)\label{mud}\\
\Rightarrow\quad&\mathbb{G}(\beta)=\tanh(\beta).\nonumber
\end{align}
\end{itemize}
\end{Example}
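For the discrete measure \cref{mud}, for instance, the computation behind the second formula is immediate:
\begin{align*}
\int_{{\mathcal V}}e^{\beta v}\,d\mu=\frac{e^{-\beta}+e^{\beta}}{2}=\cosh(\beta),\qquad
\int_{{\mathcal V}}ve^{\beta v}\,d\mu=\frac{e^{\beta}-e^{-\beta}}{2}=\sinh(\beta),
\end{align*}
so that, by \cref{G}, $\mathbb{G}(\beta)=\tanh(\beta)$.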
However, not every $\mathbb{G}$ which satisfies \cref{G1}-\cref{G2} can be generated in this way.
To see why this is indeed the case, let us consider
\begin{align*}
I(\beta)\overset{\text{def}}{=}e^{\int_0^{\beta}\mathbb{G}(s)\,ds}= \int_{{\mathcal V}}e^{\beta v}\,d\mu,
\end{align*}
where the latter equality is due to \cref{G}. Then, for all $n\in\mathbb{N}$
\begin{align}
\frac{d^{2n}}{d\beta^{2n}}I(0)= \int_{{\mathcal V}}v^{2n}\,d\mu\in(0,1].\label{an}
\end{align}
Many interesting $\mathbb{G}$'s do not satisfy \cref{an}. This includes a function which corresponds to the relativistic heat equation:
$$\mathbb{G}(\beta)=\frac{\beta}{\sqrt{1+|\beta|^2}}.$$ Indeed, a direct computation shows that
\begin{align*}
I(\beta)=e^{\int_0^{\beta}\mathbb{G}(s)\,ds}=e^{\sqrt{1+|\beta|^2}-1}
\end{align*}
and
\begin{align*}
\frac{d^{4}}{d\beta^{4}}I(0)=0,
\end{align*}
which violates \cref{an}.
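To make the last computation explicit, expand the exponent and then the exponential:
\begin{align*}
\sqrt{1+\beta^2}-1=\frac{\beta^2}{2}-\frac{\beta^4}{8}+O(\beta^6),\qquad
I(\beta)=1+\frac{\beta^2}{2}+\left(\frac{1}{8}-\frac{1}{8}\right)\beta^4+O(\beta^6),
\end{align*}
so the coefficient of $\beta^4$ vanishes, and with it the fourth derivative of $I$ at zero.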
\bigskip
Yet another limitation is that in general the resulting FL diffusion equation \cref{FSG} generates the propagation speed $1/\varepsilon$ (see \cref{SecDiff}) which, while finite, may be unrealistically large.
\subsubsection*{Summary}
\begin{enumerate}
\item A parabolic scaling of a basic linear KTE together with a nonlinear `exponential' Hilbert expansion leads to an FL (and thus nonlinear) diffusion equation.
\item This construction cannot be used in order to obtain some standard FL equations, including the relativistic heat equation.
\item The resulting generic propagation speed is $1/\varepsilon$.
\end{enumerate}
\subsection{Scaled turning operator}\label{Sec:scaledOper}
We have seen in the preceding Subsection that an FL effect (diffusion in that particular case) can be obtained from a basic linear KTE if one is prepared to go beyond the zero-order approximation, while keeping some sort of first-order correction. This is often undesirable: one typically wishes to have a macroscopic PDE for $\overline{c_0^0}$ alone.
Studies in \cite{BBNS2010,PerthameVW,DolakSchmeiser} show that it is possible to obtain equations featuring FL diffusion and/or taxis by considering operators of \cref{OperL} type and choosing $f_i$'s and their scalings appropriately. We exploit such constructions in this Subsection.
To illustrate the approach, we use a turning operator of the form
\begin{align}
&{\mathcal L}[S]c={\mathcal L}_0c+{\mathcal L}_1[S]c,\nonumber\\
&{\mathcal L}_0c=\lambda\left(\frac{1}{|{\mathcal V}|}\overline c-c\right),\nonumber\\
&{\mathcal L}_1[S]c=\int_{\mathcal V}T_1[S](v,v')c(v')-T_1[S](v',v)c(v)\,dv'.
\end{align}
It models a superposition of two essentially independent effects.
For $\lambda>0$, the first component, ${\mathcal L}_0$, is the multidimensional version of a special case of \cref{OperL} for the normalised Lebesgue measure $$\mu=\frac{1}{|{\mathcal V}|}|\cdot|.$$
This choice corresponds to chaotic velocity changes in a homogeneous environment and amounts to a constant linear diffusion operator in the equation for the macroscopic zero order approximation $\overline{c^0}$. The second component of ${\mathcal L}[S]$, the operator ${\mathcal L}_1[S]$, has been added with the aim of capturing velocity changes due to the influence of a substance with density/concentration $S=S(t,x)$. For each velocity pair $v',v\in {\mathcal V}$, the corresponding value $T_1[S](v,v')$ of a so-called turning kernel $T_1[S]$ can be interpreted as the likelihood that a cell changes its velocity from $v'$ to $v$, provided that $T_1[S]$ is nonnegative. The kernel needs to satisfy certain further assumptions, so that, in particular, ${\mathcal L}_1[S]$ is a conservative operator, i.e.
\begin{align*}
\int_{{\mathcal V}}{\mathcal L}_1[S]c\,dv=0.
\end{align*}
Such turning operators are a standard tool for deriving chemotaxis models, with $S$ being the concentration of a chemoattractant, see e.g. \cite{BBTW} and references therein. Here we concentrate solely on equations for cell dynamics since the dynamics of the chemical is a standard macroscopic one.
In the remainder of this subsection, we review the effect of some possible choices of $T_1[S]$, and of their scalings, that lead to RDT equations with FL diffusion and/or taxis terms on the macroscale.
\subsubsection{Dependence on past motion.} \label{SecPast}
We begin with
\begin{align}
T_1[S](v,v')=\Psi(D_tS),\qquad D_tS=\partial_tS+v'\cdot\nabla_x S.\label{T1}
\end{align}
Proposed in \cite{DolakSchmeiser}, it describes the likelihood of a velocity change as a function of the temporal derivative of $S$ along the path the cell has been moving prior to that change. This choice is based on the known ability of cells to compare present signal concentrations to previous ones and to respond accordingly. The function $\Psi$ describes the response rate. We assume it to be nonlinear.
Different upscalings can be adopted for \cref{KTE} with the turning kernel \cref{T1}. The hyperbolic limit is a drift equation for $\overline{c^0}$ \cite{DolakSchmeiser}. No diffusion can be recovered this way unless a first-order correction is included.
A straightforward parabolic rescaling would be
\begin{align}
&\varepsilon^2\partial_t {c^{\varepsilon}}+\varepsilon v\cdot\nabla_x {c^{\varepsilon}}\nonumber\\
=&\lambda\left(\frac{1}{|{\mathcal V}|}\overline {c^{\varepsilon}}-{c^{\varepsilon}}\right)\nonumber\\
&+\int_{\mathcal V}\Psi\left(\varepsilon^2\partial_tS^{\varepsilon}+\varepsilon v'\cdot\nabla_x S^{\varepsilon}\right){c^{\varepsilon}}(v')-\Psi\left(\varepsilon^2\partial_tS^{\varepsilon}+\varepsilon v\cdot\nabla_x S^{\varepsilon}\right){c^{\varepsilon}}(v)\,dv'.\nonumber
\end{align}
However, a nonlinear dependence upon the gradient of the attractant is lost in the limit when $\varepsilon$ is sent to zero. To preclude this, the response function needs to be rescaled as well. In \cite{PerthameVW}, one therefore replaced $\Psi$ by
$$\varepsilon\Psi\left(\frac{\cdot}{\varepsilon}\right).$$
This choice is biologically justifiable, see \cite{PerthameVW} and references therein. The resulting rescaled KTE is
\begin{align}
& \varepsilon^2\partial_t {c^{\varepsilon}}+\varepsilon v\cdot\nabla_x {c^{\varepsilon}}\nonumber\\
=&\lambda\left(\frac{1}{|{\mathcal V}|}\overline {c^{\varepsilon}}-{c^{\varepsilon}}\right)\nonumber\\
&+\varepsilon\int_{\mathcal V}\Psi\left(\varepsilon\partial_tS^{\varepsilon}+ v'\cdot\nabla_x S^{\varepsilon}\right){c^{\varepsilon}}(v')-\Psi\left(\varepsilon\partial_tS^{\varepsilon}+v\cdot\nabla_x S^{\varepsilon}\right){c^{\varepsilon}}(v)\,dv'.\label{KTETaxe1}
\end{align}
It was verified in \cite{PerthameVW} that this leads to the diffusion-taxis equation
\begin{align}
\partial_t\overline{c^0}=D\Delta_x\overline{c^0}-\nabla\cdot(\overline{c^0}\Phi(\nabla_x S^0))\label{MacroFSC}
\end{align}
where
\begin{align}
&D=\frac{1}{\lambda|{\mathcal V}|}\int_{{\mathcal V}}v\otimes v\,dv,\nonumber\\
& \Phi(\beta)=-\frac{1}{\lambda}\int_{{\mathcal V}}v\Psi(v\cdot \beta)\,dv,\label{PhiPsi}
\end{align}
and, under suitable conditions on $\Psi$, ${c^{\varepsilon}}$ is well-approximated by $c^0$.
Various kinds of chemotactic response can be obtained in this manner.
Let us consider a basic case: $${\mathcal V}=[-1,1].$$ Relation \cref{PhiPsi} between $\Phi$ and $\Psi$ implies that $\Phi$ is necessarily odd and that, in this case,
\begin{align*}
\Psi(\beta)=\Psi_{even}(\beta)-\frac{\lambda}{2\beta}\frac{d}{d\beta}(\beta^2\Phi(\beta)),
\end{align*}
where $\Psi_{even}$ is any even function.
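This representation can be verified directly. For $\beta>0$, the substitution $u=v\beta$ in \cref{PhiPsi} gives
\begin{align*}
\Phi(\beta)=-\frac{1}{\lambda\beta^{2}}\int_{-\beta}^{\beta}u\,\Psi(u)\,du.
\end{align*}
The contribution of $\Psi_{even}$ vanishes because $u\Psi_{even}(u)$ is odd, while the remaining term integrates to
\begin{align*}
-\frac{1}{\lambda\beta^{2}}\int_{-\beta}^{\beta}\left(-\frac{\lambda}{2}\right)\frac{d}{du}\left(u^{2}\Phi(u)\right)du
=\frac{1}{2\beta^{2}}\left[u^{2}\Phi(u)\right]_{-\beta}^{\beta}=\Phi(\beta),
\end{align*}
where the last equality uses that $\Phi$ is odd.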
In particular, a direct computation shows that choosing
\begin{align}
\Psi(\beta)=C-\frac{\lambda\beta \left(2 \beta^2+3\right)}{2\left(\beta^2+1\right)^{3/2}},\label{PsiEx}
\end{align}
with $C$ a constant, leads to
\begin{align*}
\Phi(\beta)=\frac{\beta}{\sqrt{1+|\beta|^2}},
\end{align*}
so that the corresponding taxis term is as in \cref{new}. Choosing $C$ sufficiently large ensures that $\Psi$ is nonnegative and can therefore be viewed as the likelihood of turning due to taxis. However, it is not clear how to interpret the particular form of $\Psi$ in \cref{PsiEx}. Further examples of possible $\Psi$ can be found, e.g. in \cite{DolakSchmeiser} and \cite{PerthameVW} (see also references therein).
The approach works for various modifications of the turning operator ${\mathcal L}$. For instance, the same equation \cref{MacroFSC} is obtained if one uses
\begin{align*}
&T_1[S](v,v')=\Psi(v'\cdot\nabla_x S)
\end{align*}
instead of \cref{T1} since $\partial_tS$ only appears on the macroscale if a first order correction is included.
In both cases one could take $S$ to be a function of $\overline{c}$, such as, e.g.,
$$S=-\ln(\overline c).$$ This leads to a diffusion flux which is nonlinear with respect to $\nabla_x\overline{c^0}$, yet it is not FL due to the Fickian contribution from ${\mathcal L}_0$.
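Indeed, with this choice $\nabla_x S^0=-\nabla_x\overline{c^0}/\overline{c^0}$, and \cref{MacroFSC} becomes
\begin{align*}
\partial_t\overline{c^0}=D\Delta_x\overline{c^0}-\nabla_x\cdot\left(\overline{c^0}\,\Phi\left(-\frac{\nabla_x\overline{c^0}}{\overline{c^0}}\right)\right):
\end{align*}
the second flux component saturates for large values of $|\nabla_x\overline{c^0}|/\overline{c^0}$, but the Fickian term $D\nabla_x\overline{c^0}$ persists.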
One could also use more general ${\mathcal L}_0$, e.g. such as in \cref{OperL} for a non-Lebesgue measure in order to account for the environmental heterogeneity.
\subsubsection{Dependence on both anterior and posterior velocities.}\label{Sec:FullFS}
Aiming at a fully FL cell flux,
the following kernel was proposed in \cite{BBNS2010}:
\begin{align}
&T_1[\alpha[\overline c,S]](v,v')=\left(\alpha[\overline c,S]-v'\right)\cdot vh(v),\label{T1al}\\
&\alpha[\overline c,S]=D_c\frac{\nabla_x \overline c}{\sqrt{\overline c^2+\frac{D_c^2}{C^2}|\nabla_x \overline c|^2}}-\chi \frac{\nabla_x S}{\sqrt{1+|\nabla_x S|^2}},\nonumber
\end{align}
where $h$ satisfies
\begin{align}
\int_{{\mathcal V}}h(v)\,dv=1,\qquad \int_{{\mathcal V}}vh(v)\,dv=0, \qquad \int_{{\mathcal V}}v\otimes vh(v)\,dv=\beta I \nonumber
\end{align}
for a positive constant $\beta$, and $S$ is the concentration of a signal substance.
The authors then considered an extension of \cref{KTE} involving further integral operators, e.g. ones that model cell proliferation on the mesoscale. They took
\begin{align*}
\lambda=0
\end{align*}
and considered the KTE
\begin{align}
\varepsilon \partial_t {c^{\varepsilon}}+\varepsilon v\cdot\nabla_x {c^{\varepsilon}}={\mathcal L}[\alpha[\overline{c^{\varepsilon}},S^{\varepsilon}]]{c^{\varepsilon}}+\text{[growth, etc.]}.\label{KTEeh}
\end{align}
This equation can be interpreted as a hyperbolic scaling for \cref{KTE} with a rescaled turning kernel \cref{T1al}: $\alpha[\overline c,S]$ needs to be replaced by
\begin{align*}
D_c\frac{\frac{1}{\varepsilon}\nabla_x \overline c}{\sqrt{\overline c^2+\frac{D_c^2}{C^2}\left|\frac{1}{\varepsilon}\nabla_x \overline c\right|^2}}-\chi \frac{\frac{1}{\varepsilon}\nabla_x S}{\sqrt{1+\left|\frac{1}{\varepsilon}\nabla_x S\right|^2}}
\end{align*}
in order to have \cref{KTEeh} after a hyperbolic scaling.
Proceeding with a formal limit as $\varepsilon\to 0$, one recovered in \cite{BBNS2010} the fully FL diffusion-taxis equation \cref{newc}.
\bigskip
While choosing the kernel as in \cref{T1al} ensured the desired macroscopic limit, one has that
\begin{enumerate}
\item the corresponding integral operator ${\mathcal L}_1[\alpha[\overline c,S]]$ is conservative,
\item but the kernel is not nonnegative everywhere. \end{enumerate}
Thus, ${\mathcal L}_1[\alpha[\overline c,S]]$ is not a turning operator in the traditional sense.
\subsubsection*{Summary}
\begin{enumerate}
\item
It is possible to obtain an RDT equation with FL effects, including a completely FL diffusion-taxis flux, as the zero order approximation of KTE \cref{KTE} by choosing suitably:
\begin{itemize}
\item[(i)] a turning kernel,
\item[(ii)] a scaling of time and space (parabolic/hyperbolic),
\item[(iii)] a rescaling of the turning kernel.
\end{itemize}
\item The taxis term on the macroscale arises from a turning operator and thus has probabilistic roots.
\item The approach is flexible, but may require using a turning kernel that is difficult to interpret, e.g. because it has a complicated form and/or it is not everywhere nonnegative.
\end{enumerate}
\subsection{Accelerated motion}\label{withExt}
So far in this Section we have dealt with constructions which presuppose that cell velocity changes are fully probabilistic. In particular, the FL effects on the macroscale stemmed from the turning kernel.
In this final Subsection we turn to a different approach that was developed in \cite{DSW,ZSMM}. As an illustration we use a special case of the modelling framework from \cite{ZSMM}.
There the deterministic part of cell motion is described on the microscale by the following extension of \cref{MicroDSt}:
\begin{subequations}\label{MicroD}
\begin{align}
&\frac{dx}{dt}=v, \\
&\frac{dv}{dt}=-a(v-v_*[S](t,x)),\label{eq:microdyn}
\end{align}
\end{subequations}
with
\begin{align}
v_*[S]=\mathbb{F}\frac{\nabla_x S}{1+|\nabla_x S|}.\label{vstar}
\end{align}
The ODE system \cref{MicroD} resembles Newton's second law. However, unlike lifeless matter, for which the acceleration would necessarily be due to a physical force, here the presence of a signal stimulates the cells to divert from a straight line.
The choice of the right-hand side in \cref{eq:microdyn} is motivated by the assumption that a cell tends to realign with a certain `preferred' velocity $v_*[S]$ which depends on the spatial gradient of a cue with density/concentration $S=S(t,x)$ and on the spatial
heterogeneity of the environment. The latter is accounted for by means of a matrix-valued function $\mathbb{F}=\mathbb{F}(x)$. In the absence of signal gradients, the cell decelerates at a rate proportional to its velocity, with constant $a>0$, as in Stokes' law.
Choosing $$\|\mathbb{F}(x)\|_2\leq 1\qquad\text{for all }x\in\mathbb{R}^d$$ ensures that $v_*[S]$ remains inside the bounded velocity space
\begin{align*}
{\mathcal V}=\{x\in\mathbb{R}^d:\ \ |x|<1\}.
\end{align*}
Combining the microscopic dynamics \cref{MicroD} with a turning operator which, for simplicity, we assume here to be of the form
\begin{align*}
{\mathcal L}c=\lambda\left(\frac{1}{|{\mathcal V}|}\overline c-c\right)
\end{align*}
for some constant $\lambda>0$, and the mass conservation law, one arrives at the KTE
\begin{align}
\partial_t c+\nabla_x\cdot (vc)-a\nabla_v\cdot((v-v_*)c)
=&\lambda\left(\frac{1}{|{\mathcal V}|}\overline{c}-c\right).\label{meso}
\end{align}
The parabolic scaling then yields a diffusion-taxis equation for the zero order approximation:
\begin{align}
(a+\lambda)\partial_t \overline{c^0}
=\frac{\lambda}{2a+\lambda}\frac{n}{n+2}\Delta_x\overline{c^0}-a\nabla_x\cdot\left(\overline{c^0}\mathbb{F}\nabla_x S\right),\label{dif2}
\end{align}
and it was rigorously proved in \cite{ZSMM} that ${c^{\varepsilon}}$ is well-approximated by $c^0$. While in the derivations in \cref{Sec:scaledOper} all motion effects on the macroscale came from a turning operator, here it is a source of diffusion only. This time, taxis originates from the deterministic microscopic dynamics. A possible interpretation is that cells change their velocities in an attempt to follow the attractant gradients but may at the same time be diverted from such preferred trajectories by chaotic velocity jumps.
As in the derivation in \cref{NLHil}, including a first-order correction leads to an FL effect on the macroscale \cite{ZSMM}:
\begin{align}
(a+\lambda)\partial_t \overline{c_{01}^{\varepsilon}}
=\frac{\lambda}{2a+\lambda}\frac{n}{n+2}\Delta_x\overline{c_{01}^{\varepsilon}}-a\nabla_x\cdot\left(\overline{c_{01}^{\varepsilon}}\mathbb{F}\frac{\nabla_x S}{1+\varepsilon|\nabla_x S|}\right)+O\left(\varepsilon^2\right)\label{dif3}
\end{align}
for the first order approximation
\begin{align*}
c_{01}^{\varepsilon}=c^{0}+\varepsilon c^{0}_1.
\end{align*}
If the error term on the right-hand side is dropped, \cref{dif3} becomes a parabolic PDE with a linear diffusion and an FL taxis term. However, and this is also related to what we saw in \cref{NLHil}, the velocity component due to taxis is of order $1/\varepsilon$, i.e. potentially too large.
Similarly to \cref{Sec:FullFS}, one could avoid $\varepsilon$ in the limit equation by replacing $v_*[S]$ in \cref{eq:microdyn} with
\begin{align}
v_*^{\varepsilon}[S]=\mathbb{F}\frac{\frac{1}{\varepsilon}\nabla_x S}{1+\frac{1}{\varepsilon}|\nabla_x S|}\nonumber
\end{align}
and adopting the hyperbolic scaling of the KTE.
One readily verifies that this amounts to
\begin{align}
(a+\lambda)\partial_t \overline{c^0}=a\nabla_x\cdot(-\overline{c^0}v_*[S^0]).\label{hyp}
\end{align}
A similar scaling was performed in \cite{DSW} in a more general context. There one first used the moments method and then the hyperbolic scaling.
In \cite{KumarSur} one relied fully on the moment closure, so no rescaling was required.
Different forms of $v_*[S]$ and more general turning operators are possible. For example, for
\begin{align*}
&v_*[\overline c]=\psi\left(-\nabla_x \ln(\overline c)\right),\\
&v_*^{\varepsilon}[\overline c]=\psi\left(-\frac{1}{\varepsilon}\nabla_x \ln(\overline c)\right)
\end{align*}
with $\psi$ as in \cref{psi}, the same procedure leads to equation \cref{hyp}, which now becomes the relativistic heat equation \cref{RH}. Choosing the turning operator ${\mathcal L}$ as, e.g., in \cref{OperL} for a non-Lebesgue measure leads to an anisotropic diffusion and additional transport terms. This was done in \cite{ZSMM,DSW,KumarSur} in order to account for a fibrous environment. While \cite{ZSMM} dealt with scalings and their rigorous justification for a single prototypical equation, \cite{DSW,KumarSur} focused on applications, providing detailed models for tumour invasion.
\subsubsection*{Summary}
\begin{enumerate}
\item Including a suitable transport term with respect to velocity in the mesoscopic KTE allows one to obtain an RDT equation with FL effects.
\item The taxis term on the macroscale arises from the deterministic motion component and is independent of the turning kernel.
\item The approach is flexible, but may require a suitable rescaling of terms in order to avoid
unrealistically large propagation speeds.
\end{enumerate}
\begin{acknowledgement}
The results of this paper were presented by the author at the 34th Annual Meeting of the Irish Mathematical Society in September 2021. The author was supported by the Engineering and Physical Sciences Research Council [grant number EP/T03131X/1]. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
\end{acknowledgement}
\addcontentsline{toc}{section}{References}
\printbibliography
\end{document}
\section{Introduction}\label{sec:intro}
The emergence of quantum computers has brought a new paradigm to the field of computation. The unique features of these devices have garnered attention from various disciplines, including High Energy Physics (HEP), where the computational challenges associated with taking, processing and analysing vast amounts of data in collider experiments like the Large Hadron Collider (LHC) require innovative solutions. Quantum algorithms have been proposed to tackle some of these challenges, including the simulation of collision events~\cite{bauer2019quantum,PhysRevD.103.076020,li2021partonic}, the reconstruction of charged particle tracks in the detectors~\cite{das2020track,T_ys_z_2020,magano2021quantum}, and event classification and analysis~\cite{Blance:2020nhl,Blance:2021gcs,Terashi_2021,Wu_2021,Araz:2021zwu,Belis_2021,armenakas2020application,pires2021digital,MottQuantum}.
Collision events at the LHC typically involve hundreds of particles and can be very complicated. Simulation of such events requires extensive modelling of proton-proton interactions and the subsequent detector response to fully uncover the underlying physics processes. Theoretical descriptions of these collisions can be separated into several stages. Constituent partons in the colliding protons can interact via large momentum transfer in the so-called hard interaction. Due to the large interaction energies, such collisions have the potential to probe new physics. Colour-charged particles produced as a result of this hard interaction are likely to emit further partons, resulting in a parton shower. The parton shower process evolves the system down in energy from the hard interaction to the hadronisation scale, $\mathcal{O} ( \Lambda_{\textrm{QCD}})$. It is a perturbative process and can involve many partons, thus being one of the most time-consuming parts of the generation of a collision event. Consequently, the development of quantum algorithms for the calculation of the hard process \cite{PhysRevD.103.076020} and the resultant parton shower \cite{bauer2019quantum, PhysRevD.103.076020} is an area of interest.
This paper presents a novel approach to simulating a many-particle, collinear parton shower on a quantum device using a quantum walk (QW) framework. It is structured as follows: Section~\ref{sec:QW} gives a brief introduction to the QW framework, Section~\ref{sec:PS} contains the description of the proposed parton shower algorithm, and Section~\ref{sec:Summary} gives a summary and conclusions.
\section{Quantum walks}\label{sec:QW}
The quantum random walk~\cite{PhysRevA.48.1687, Aharonov2, Kempe, Rohde2012IncreasingTD} is the quantum analogue of the classical random walk and defines the movement of a particle, the \textit{walker}, which can occupy certain \textit{position} states on a graph. Here we will consider only discrete-time random walks, where a coin flip determines the movement of the walker at distinct time steps. The state of the walker can therefore be defined by the position of the walker, $x$, and the coin, $c$, as $\vert x,c \rangle$. The movement of the walker through the graph is determined by two operations: the coin operation, $C$, which determines the direction the walker will move, and the shift operation, $S$, which propagates the walker to the next position.
As a simple example, we construct a random walk following the approach in~\cite{Kempe}. Consider a walker moving along a one-dimensional line according to an unbiased coin (i.e, the walker has an equal chance of moving left or right after the coin flip), with the walker originally positioned at $x=0$, see Figure~\ref{fig:1DWalk}. The position of the particle on the line forms a Hilbert space $\mathcal{H}_P$ spanned by integer values on the line, \{$\vert i \rangle : i \in \mathbb{Z}$\}. The position space is augmented by the coin-space, $\mathcal{H}_C$, which spans two basis states, \{$\vert~\uparrow~\rangle, \vert~\downarrow~\rangle$\}, the up and down spin-states of a fermion\footnote{The choice of using the up and down spin-states of a fermion is useful when implementing quantum walks on qubit-based quantum devices, such as those available on the IBM Q network \cite{IBMQ}.}. Therefore, the walker occupies a total space of
\begin{equation}
\mathcal{H} = \mathcal{H}_C \otimes \mathcal{H}_P.
\end{equation}
In the classical case, the coin operation is carried out by evaluating a classical coin. Based on the outcome of this coin, the shift operator moves the walker in the correct direction. Here we will attribute the coin state $\vert \uparrow \rangle$ to the walker moving in the positive $x$ direction and the $\vert \downarrow \rangle$ state to the walker moving in the negative $x$ direction. After the step process is complete, the walker is either in the $x=-1$ or $x=1$ position. In contrast to the classical case, the quantum coin operation is based on a \textit{quantum coin}. In this example, we will consider the Hadamard coin,
\begin{equation}
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\end{equation}
which gives an equal chance for the coin to be measured in each of the coin states. The quantum coin operation puts the system into a superposition of the basis states of the $\mathcal{H}_C$ space. The shift operation is then performed, moving the walker into a superposition of the position states, $x=-1$ and $x=1$. A measurement after the step collapses the wavefunction to recover the classical case of the walker being in either the $x=-1$ or $x=1$ position.
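As an explicit check of a single step, start the walker at $x=0$ with coin state $\vert \uparrow \rangle$:
\begin{equation*}
\vert 0, \uparrow \rangle \xrightarrow{\;C\;} \frac{1}{\sqrt{2}}\big(\vert 0,\uparrow\rangle + \vert 0,\downarrow\rangle\big) \xrightarrow{\;S\;} \frac{1}{\sqrt{2}}\big(\vert 1,\uparrow\rangle + \vert -1,\downarrow\rangle\big),
\end{equation*}
so a measurement finds the walker at $x=\pm 1$ with probability $1/2$ each, reproducing the single classical step.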
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{figures/QW1D_Example.pdf}
\caption{One-dimensional walker at position $x=0$ can move either left or right depending on the outcome of the coin flip, $\vert \downarrow \rangle$ and $\vert \uparrow \rangle$ respectively.}
\label{fig:1DWalk}
\end{figure}
The Hadamard coin used here is a balanced unitary coin operation\footnote{Strictly speaking, the Hadamard coin introduces a bias to the quantum walk through the phase on the coin qubit. This is discussed in detail in \cite{Kempe} and references therein. Here we remove this bias by using a symmetric initial state.} and therefore the coin and shift operations can be defined as a single unitary transformation to the initial qubit state,
\begin{equation}\label{eqn:QWprocess}
U = S \cdot (C \otimes I),
\end{equation}
which is applied iteratively to represent the number of steps. For a quantum walk of $N$ steps, the propagation of the walker is described by the transformation $U^N$ \cite{Kempe}. An example of running a linear, one-dimensional, $N=100$ step random walk for both the classical case and the quantum case is shown in Figure~\ref{fig:100StepWalks}. The classical case has been achieved by measuring the coin qubit at each step, removing the superposition from the system. As expected, the classical walk yields a Gaussian distribution of positions centred about the initial position of the particle, with variance $\sigma^2 = N$. In contrast, the quantum random walk, in which quantum interference arises between the intermediate steps of the walk process, results in a distribution that is dramatically different from the classical case. It can be shown \cite{Kempe, Ambainis} that the variance of the quantum random walk process goes as $\sigma^2 \sim N^2$. This is a remarkable attribute of the quantum random walker, which propagates quadratically faster than the classical walker. The average distance of the walker from the initial position is $\sigma = \sqrt{N}$ and $\sigma \sim N$ for the classical and quantum walks, respectively.
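This comparison can be reproduced with a few lines of code. The following NumPy sketch is our own illustration (not code from the works cited); the symmetric initial state $(\vert\uparrow\rangle+i\vert\downarrow\rangle)/\sqrt{2}$ is the one used above to remove the coin bias:
\begin{verbatim}
import numpy as np

def hadamard_walk(n_steps):
    # Amplitudes psi[x, c] with coin c in {0: up, 1: down}; the symmetric
    # initial state (|up> + i|down>)/sqrt(2) removes the Hadamard coin bias.
    size = 2 * n_steps + 1               # positions -n_steps .. n_steps
    psi = np.zeros((size, 2), dtype=complex)
    psi[n_steps, 0] = 1 / np.sqrt(2)
    psi[n_steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(n_steps):
        psi = psi @ H.T                  # Hadamard coin on every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]     # coin 'up'   -> move right
        shifted[:-1, 1] = psi[1:, 1]     # coin 'down' -> move left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)  # position probabilities

N = 100
prob = hadamard_walk(N)
x = np.arange(-N, N + 1)
var = (prob * x**2).sum() - (prob * x).sum() ** 2
print(var)   # grows like N^2, versus N for the classical walk
\end{verbatim}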
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{figures/CWvQWcomparison.pdf}
\caption{Simulation of a 100-step random walk using the IBM Q 32-qubit simulator \cite{32_sim} for 100,000 shots for a classical random walk obtained by measuring the coin state after each step, and a quantum random walk using a symmetric initial position and a Hadamard coin. Only non-zero probabilities are shown, as odd-numbered positions will have zero probability for this walk.}
\label{fig:100StepWalks}
\end{figure}
\section{Quantum walk as a parton shower simulation}\label{sec:PS}
The parton shower \cite{Buckley:2011ms} evolves the energy scale of a scattering event from the hard interaction down to the hadronisation scale through the radiation of additional partons. The emissions are determined by splitting functions which correspond to the different emission probabilities in the shower. The shower content is then updated depending on which splitting probability is chosen. Due to this probabilistic interpretation of parton showers, the quantum walk mechanism provides a natural framework for the simulation of parton showers: the emission probabilities correspond to the coin flip probabilities, and updating the shower content depending on the emission corresponds to the shift operation in the quantum walk framework.
In this Section we detail this novel quantum walk approach to simulating a parton shower on a quantum device. Within this framework, the algorithm can simulate a many-particle parton shower, and shows a remarkable improvement on the number of shower steps that can be simulated in comparison to previous quantum algorithms~\cite{bauer2019quantum, PhysRevD.103.076020}. The Section is ordered as follows: Section~\ref{sec:Theory} gives the theoretical outline of the toy model used in the parton shower, Section~\ref{sec:SimpleShower} shows the implementation of a simple parton shower with one particle type, Section~\ref{sec:collinearShower} outlines the full collinear parton shower and Section~\ref{sec:future} discusses possible extensions to the algorithm with advancements in quantum computers to simulate a realistic parton shower.
\subsection{Theoretical outline of shower algorithm}\label{sec:Theory}
We present a discrete, collinear parton shower using the quantum walk framework. As with the parton shower algorithms presented in~\cite{PhysRevD.103.076020, bauer2019quantum}, this algorithm utilises the ability of the quantum device to remain in a superposition state throughout the calculation. Consequently, all shower histories are calculated simultaneously and are encoded in the final wavefunction, with a measurement projecting out a specific quantity of the final state, e.g. the number of partons. This offers a unique advantage over classical parton shower algorithms, which need to calculate each shower history explicitly and store the information on a physical memory device. Only after summing over all possible shower histories can a physically meaningful quantity be extracted. The goal of this algorithm is to create the foundation for the development of a full, general parton shower by studying a simplified toy model that meets the capabilities of current quantum simulators.
An emission is collinear if a parton with momentum $P$ splits into two massless particles, which have parallel 4-momenta, such that the momentum distribution is,
\begin{align}
p_i = zP,& &p_j = (1-z)P,
\end{align}
thus, $(p_i + p_j)^2 = P^2 = 0$ \cite{Taylor_2017}. In this algorithm, we use a theoretical set-up similar to that of~\cite{PhysRevD.103.076020}. In each shower step, emission is determined by first ascertaining whether an emission occurred in the step using the Sudakov factors, and then applying the relevant splitting functions. The Sudakov factors for a QCD process are given by,
\begin{align}
\Delta_{k}(z_1, z_2) = \exp \big[ - \alpha^2_s \int^{z_2}_{z_1} P_k (z^\prime)\, dz^\prime \big],
\end{align}
and are used to calculate the probability of non-emission~\cite{Sudakov:1954sw}. The probability that no particles split for an arbitrary step $N$ in the shower process, where $N$ particles can be present, is given by the total Sudakov factor,
\begin{equation}\label{eqn:Sudakovs}
\Delta_\textrm{tot} (z_1,z_2) = \Delta_g^{n_g}(z_1,z_2)\Delta_q^{n_q}(z_1,z_2)\Delta_{\overline{q}}^{n_{\overline{q}}}(z_1,z_2)
\end{equation}
where $n_g$, $n_q$ and $n_{\overline{q}}$ are the number of gluons, quarks and antiquarks present in the step\footnote{As the algorithm allows for steps with no emissions, for a step $N$: $(n_g + n_q + n_{\overline{q}}) \leq N$}.
As in~\cite{PhysRevD.103.076020}, only collinear splittings will be considered. The emission probabilities are therefore calculated using the collinear splitting functions outlined in \cite{Dokshitzer:1977sg, Gribov:1972ri, ALTARELLI1977298}. The emission of a gluon from a quark is defined at leading order (LO) by,
\begin{align}\label{eqn:QSplit}
P_{q\rightarrow qg} (z) = C_F \frac{1 + (1-z)^2}{z},
\end{align}
where $C_F = 4/3$ is calculated using colour algebra, and the quark and gluon have momentum fractions $1-z$ and $z$ respectively. The gluon can self-couple, and therefore can split to both a pair of gluons and a quark-antiquark pair. At LO, the splitting functions for these emissions are,
\begin{align}\label{eqn:GSplit}
P_{g\rightarrow gg} (z) = C_A \Big[ 2 \frac{1-z}{z} + z(1-z) \Big],& &P_{g\rightarrow q\overline{q}} (z) = n_f T_R (z^2 + (1 - z)^2),
\end{align}
where $C_A = 3$ and $T_R = 1/2$ are calculated using colour algebra, and $n_f$ is the number of massless quark flavours.
Combining the Sudakov factors with the splitting functions defines the full probability of emission,
\begin{equation}
\textrm{Prob}_{k\rightarrow ij} = (1 - \Delta_k) \times P_{k \rightarrow ij} (z).
\end{equation}
In the QW framework, this probability is applied as a unitary rotation to the coin qubit, defining the shower algorithm's coin operation.
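As a numerical illustration of these quantities, the no-emission probability can be evaluated by simple quadrature. This is a toy sketch of our own; the value of the coupling and the $z$-window are assumptions made purely for demonstration:
\begin{verbatim}
import numpy as np

C_F, alpha_s = 4.0 / 3.0, 0.118      # assumed toy coupling value

def P_q_qg(z):
    # LO collinear q -> qg splitting function
    return C_F * (1.0 + (1.0 - z) ** 2) / z

def sudakov(z1, z2, kernel, n=100_000):
    # Delta(z1, z2) = exp(-alpha_s^2 * integral of the kernel over [z1, z2]),
    # with the integral approximated by a midpoint Riemann sum
    dz = (z2 - z1) / n
    z = z1 + dz * (np.arange(n) + 0.5)
    return np.exp(-alpha_s**2 * kernel(z).sum() * dz)

z1, z2 = 0.1, 0.9                    # illustrative evolution window
delta_q = sudakov(z1, z2, P_q_qg)
print(delta_q, 1.0 - delta_q)        # no-emission / emission probability
\end{verbatim}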
The proposed algorithm does not include kinematics. This allows for the calculation to be implemented on currently accessible simulators, such as the 32-qubit IBM Q Quantum Simulator~\cite{32_sim}. As a result, the shower evolution cannot be determined by the kinematics of the shower particles, but instead the evolution variable $z$ is evolved to lower and lower momenta, exponentially with the number of steps. Section~\ref{sec:future} outlines how a more realistic parton shower could be constructed on future devices.
\subsection{Implementation of a simple shower as a one-dimensional quantum walk}\label{sec:SimpleShower}
The implementation of a parton shower as a quantum walk follows the framework of a simple quantum random walk outlined in Section~\ref{sec:QW}. Here we define the coin operation as a unitary rotation on the coin qubit corresponding to the probability of emission, calculated using the Sudakov factors and the subsequent splitting functions defined in Section~\ref{sec:Theory}. This rotation takes the form,
\begin{equation}
U_c = \begin{pmatrix} \sqrt{1 - P_{jk}} & \sqrt{P_{jk}} \\ \sqrt{P_{jk}} & -\sqrt{1 - P_{jk}} \end{pmatrix},
\end{equation}
where $P_{jk} = (1 - \Delta_i) \times P_{i \rightarrow jk}$ is the probability of particle $i$ splitting to $j$ and $k$. The coin space, $\mathcal{H}_C$, therefore spans the space \{$\vert 0 \rangle$, $\vert 1 \rangle$\} defined by the possible measured states of the coin qubit. Here we define the $\vert 0 \rangle$ state as the ``\textit{no emission}'' state, and the $\vert 1 \rangle$ state as the ``\textit{emission}'' state. The position space, $\mathcal{H}_P$, now defines the number of particles present in the shower and has been altered to include only zero and positive integers, \{$\vert i \rangle : i \in \mathbb{N}_0$\}, as the parton shower cannot have a negative number of particles. The shift operation, controlled by the coin qubit, moves the walker in the correct direction. In order to apply the correct splitting probabilities to the coin qubit, the number of particles present must be determined; an efficient scheme for this position check has been constructed using a series of \textsc{cnot} gates.
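A minimal numerical check of this coin (with illustrative helper names of our own, not from the paper) confirms that it is unitary and that it produces an emission with the desired probability:
\begin{verbatim}
import numpy as np

def emission_coin(p_emit):
    # Coin unitary for emission probability p_emit; equivalent to a
    # Ry(2*arcsin(sqrt(p_emit))) rotation up to a phase on |1>.
    a, b = np.sqrt(1.0 - p_emit), np.sqrt(p_emit)
    return np.array([[a, b], [b, -a]])

U = emission_coin(0.3)
assert np.allclose(U @ U.conj().T, np.eye(2))   # unitarity
print(np.abs(U @ np.array([1.0, 0.0])) ** 2)    # -> [0.7, 0.3]
\end{verbatim}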
To illustrate this simple shower, Figure~\ref{fig:1DShower} shows a schematic of a one-dimensional quantum walk able to simulate a particle which can split to produce another particle of the same type. In this simple shower, the number of particles present is encoded in the position of the walker. Figure~\ref{fig:1DShower} uses a two-qubit basis for the position of the walker, ultimately allowing the algorithm to simulate a maximum of 4 shower particles in the final state. The number of particles that the algorithm can simulate increases exponentially with the number of position qubits, $x$, as $2^x$. For this example, only one splitting is possible, $i \rightarrow ii$, and as a result only one coin qubit is needed to encode the splitting probability. If, after the coin operation, the coin qubit is in the $\vert 1 \rangle$ state, then the splitting has occurred and the position of the walker is increased by one, thus increasing the number of particles present in the shower by one. However, if the coin operation yields a $\vert 0 \rangle$ state, then the walker does not move for this simple example\footnote{Note that in Figure~\ref{fig:1DShower} the shift operation also shows the ability to decrease the walker's position. This is not needed for the simple example of $i\rightarrow ii$ splittings, but will be useful later.}. This step can then be repeated for the number of discrete shower steps in the parton shower, resembling the quantum random walk outlined in Section~\ref{sec:QW}.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figures/1DShower.pdf}
\caption{Template for a single step of a quantum walk as a parton shower, with the ability to simulate a particle which can split to more particles of the same type. Here, the ``\textit{position check}" determines the number of particles present by assessing the position of the walker. The ``\textit{coin}" operation applies the correct splitting probabilities depending on the position of the walker. The ``\textit{shift}" operation moves the walker depending on the outcome of the coin operation.}
\label{fig:1DShower}
\end{figure}
\subsection{Implementation of a collinear parton shower}\label{sec:collinearShower}
It is possible to extend the simple shower outlined in Section~\ref{sec:SimpleShower} to include multiple parton types by increasing the dimension of the position space $\mathcal{H}_P$, with the aim of developing a multi-particle, discrete, collinear parton shower using the theoretical outline discussed in Section~\ref{sec:Theory}. The algorithm presented here considers a toy model comprised of a gluon and one flavour of quark, and can simulate the corresponding splittings.
As shown in Section~\ref{sec:SimpleShower}, a quantum walker in a one-dimensional position space, $\mathcal{H}_P$, has the ability to simulate a single particle type. Augmented by the coin space, $\mathcal{H}_C$, with dimension equal to the number of possible splittings associated with the particle, the quantum walk can simulate a simple parton shower comprising one particle type. Increasing the dimension of the position space increases the number of particles which can be simulated in the algorithm. Applying this mechanism to our toy model of the parton shower, the position space, $\mathcal{H}_P$, is increased to two dimensions to accommodate the simulation of gluons and quarks, counting gluons in one dimension and quarks in the other. Note that we do not need to include dimensions for both quarks and antiquarks as they are produced in conjunction through the $g\rightarrow q\overline{q}$ splitting, thus instead we count quark-antiquark pairs. Figure~\ref{fig:visualisation} shows a visualisation of how the walker's position on a 2D plot corresponds to the number of particles in the shower, with gluons and quarks measured on the $x$ and $y$-axes of the walker's 2D lattice respectively. The coin space, $\mathcal{H}_C$, is increased to a three-dimensional Hilbert space, with three coin qubit rotations corresponding to the splitting functions in Equations~\ref{eqn:QSplit} and \ref{eqn:GSplit}. Controlled from the coin register, the shift operations propagate the walker to reflect the production of new particles in the shower step. A schematic of the quantum circuit is shown in Figure~\ref{fig:partonShower}. It should be noted that it is likely that more than one of the coin qubits can be in the $\vert 1 \rangle$ state in a step. In these situations, it is not clear which splitting kernel should be applied and therefore the algorithm does not apply a shift operation to the walker. This is realised by controlling from coin states that only have one coin qubit in the $\vert 1 \rangle$ state, as shown in Figure~\ref{fig:partonShower}.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/Visualisation_Shower.pdf}
\subcaption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/Visualisation_Gen.pdf}
\subcaption{}
\end{subfigure}
\caption{Visualisation of a quantum walk as a parton shower comprising gluons and quarks. The quantum walker's position on a 2D plot corresponds to the number of particles in the parton shower: (a) shows a parton shower using the collinear splitting functions for quarks and gluons, (b) shows a parton shower with modified splitting functions to show how the walker moves in the 2D lattice.}
\label{fig:visualisation}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{figures/FullCircuit.pdf}
\caption{Schematic of the quantum circuit for a single step of a discrete QCD, collinear parton shower with the ability to simulate the splittings of gluons and one flavour of quark. The shower algorithm is split into 3 distinct sections: (1) The position check determines the position of the walker so that the correct Sudakov form factors are applied in the splitting kernels, (2) the coin flip applies unitary rotations to a coin register corresponding to the possible splitting kernels, (3) the shift operation propagates the walker into correct direction to describe the particle splitting in the shower step. This step is then repeated iteratively to simulate a full shower process.}
\label{fig:partonShower}
\end{figure}
To simulate a parton shower, the steps shown in Figure~\ref{fig:partonShower} are performed many times, with only one splitting allowed to occur per step. Steps where no emission occurred are dictated by the Sudakov form factors from Equation~\ref{eqn:Sudakovs}. The system is kept in a superposition state throughout the algorithm, with a measurement taking place only at the end of the calculation. Therefore, after all the steps have been evaluated, the system is in a superposition of all possible shower histories. This differs dramatically from classical parton shower algorithms where each shower history must be individually calculated. A physically meaningful quantity can only be extracted from a classical shower algorithm once all possible shower histories have been summed over. Consequently, the quantum algorithm approach to parton showers provides a unique advantage over the classical approach.
The quantum parton shower algorithm with 31 shower steps has been run for 500,000 shots on the IBM Q 32-qubit Quantum Simulator~\cite{32_sim}. The output from the quantum simulator has been compared to a classical parton shower algorithm, which follows the same theoretical framework as that outlined in Section~\ref{sec:Theory}, simulating a toy model with one quark flavour and a gluon. Figure~\ref{fig:originalResults} shows the comparison between the quantum and classical parton shower algorithms of the probability distributions of the number of gluons measured at the end of the shower. This is shown for the scenario where there are zero quark-antiquark pairs in the final state and the much less probable scenario where there is one quark-antiquark pair in the final state. Due to the low statistics for the $1 q\overline{q}$ results, a further validation of the parton shower algorithm has been carried out using modified splitting functions to enhance the $g \rightarrow q\overline{q}$ and $q \rightarrow qg$ splittings. The results of this test are shown in Figure~\ref{fig:modResults} and display good agreement between the quantum and classical parton shower algorithms. The probability of producing two quark-antiquark pairs is less than $10^{-5}$. For both comparison runs, the classical algorithm has been executed for 31 shower steps with $10^6$ shots of the algorithm.
The algorithm is implemented using 16 qubits to simulate a parton shower of 31 steps. This is a dramatic increase in the number of steps that can be simulated on a quantum device in comparison to previous algorithms~\cite{bauer2019quantum, PhysRevD.103.076020}, with almost a factor of two reduction in the number of required qubits~\cite{PhysRevD.103.076020}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/fullComparison.pdf}
\caption{Probability distribution of the number of gluons measured at the end of the 31-step parton shower for the classical and quantum algorithms, for the scenario where there are zero quark anti-quark pairs (left) and exactly one quark anti-quark pair (right) in the final state. The quantum algorithm has been run on the IBMQ 32-qubit quantum simulator~\cite{32_sim} for 500,000 shots, and the classical algorithm has been run for $10^6$ shots.}
\label{fig:originalResults}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/mod_fullComparison.pdf}
\caption{Probability distribution of the number of gluons measured at the end of the 31-step parton shower for the classical and quantum algorithms with modified splitting kernels, for the scenario where there are zero quark anti-quark pairs (left) and exactly one quark anti-quark pair (right) in the final state. The quantum algorithm has been run on the IBMQ 32-qubit quantum simulator~\cite{32_sim} for 100,000 shots, and the classical algorithm has been run for $10^6$ shots.}
\label{fig:modResults}
\end{figure}
\subsection{Towards a realistic parton shower}\label{sec:future}
The parton shower algorithm described in Section~\ref{sec:collinearShower} is a simplified, toy model and thus has limited capability compared to state-of-the-art, classical parton shower algorithms. However, the quantum algorithm leverages the unique ability of the quantum computer to remain in a superposition state throughout the calculation, enabling all shower histories to be calculated simultaneously and providing a remarkable advantage over the classical algorithms.
It is interesting to consider how the parton shower algorithm will develop with advancements in quantum technologies. Near-future devices with larger quantum volume~\cite{gambetta_2020, Jurcevic2020DemonstrationOQ} make it feasible to imagine a practical parton shower algorithm on a quantum device.
An obvious extension to the algorithm proposed would be to include more particle types and flavours. As described in Section~\ref{sec:collinearShower}, this is easily done by increasing the dimension of the $\mathcal{H}_P$ and $\mathcal{H}_C$ spaces to include another particle and its corresponding splittings. It may then be possible to extend the shower to include all quark flavours, increasing the dimension of the walker's lattice to seven: six quark dimensions and one gluon dimension. To implement this circuit would require a large number of qubits, with the number required for each particle type being
\begin{equation}
n_\textrm{qubits} = \log_2 N,
\end{equation}
where $N$ is the number of desired steps in the shower process. It is possible to reduce the overall number of qubits in the system by removing redundant areas in the quantum walker's lattice. For example, in Figures~\ref{fig:originalResults} and \ref{fig:modResults}, at most one quark-antiquark pair appears in the results. Therefore, all lattice sites containing two or more quark-antiquark pairs could be removed to streamline the circuit. However, this does reduce the generality of the circuit, and such areas would have to be known prior to running the device.
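As a rough consistency check (our own arithmetic, not a qubit breakdown given in the text): for the 31-step shower of Section~\ref{sec:collinearShower}, five qubits per lattice dimension suffice, since $2^5=32$ positions accommodate up to 31 emissions in that dimension.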
It is feasible to consider an algorithm that can simulate a parton shower for calculations where a basis transformation is performed. For example, in~\cite{bauer2019quantum} a parton shower algorithm was proposed with two fermions $f_1$ and $f_2$ and a scalar $\phi$. The algorithm considers a rotation from the flavour basis, $f_{1/2}$, to a mass basis, $f_{a/b}$. The parton shower calculation is then carried out in the mass basis, rotating back to the flavour basis before measurement. It claims an advantage over classical algorithms by simulating the quantum interference between the two fermions. Due to the qubit requirement of the circuit, the full algorithm was restricted to 2 steps in the shower process. It is possible to replicate the parton shower algorithm from~\cite{bauer2019quantum} in a quantum walk framework by increasing the dimension of the position space $\mathcal{H}_P$ and the coin space $\mathcal{H}_C$ to include the two fermions and one scalar, and their corresponding splitting functions. The basis transformation can be reproduced as a rotation across the fermion dimensions of the position space $\mathcal{H}_P$. The shower would then follow the quantum walk process outlined in Equation~\ref{eqn:QWprocess}, with a final rotation across the fermion position space to transform back to the flavour basis before measurement. This algorithm would be able to run for many steps and would be a good test of the quantum advantage claimed in~\cite{bauer2019quantum}.
Keeping track of particle kinematics in the parton shower algorithm outlined in Section~\ref{sec:collinearShower} is an important step towards emulating a realistic parton shower. The current publicly accessible devices and simulators do not have adequate quantum volume to include shower kinematics, but future devices may have the capability to implement this. Within the quantum walk framework, it is possible to consider extending the Hilbert space of the system to include a \textit{kinematic} space $\mathcal{H}_K$ such that the total space now has the form,
\begin{equation}
\mathcal{H} = \mathcal{H}_C \otimes \mathcal{H}_P \otimes \mathcal{H}_K.
\end{equation}
The kinematic space $\mathcal{H}_K$ would comprise a discretised momentum space that each shower particle could move in. Similarly to the position check in Figure~\ref{fig:partonShower}, conditional coin operations would then be used to apply the correct splitting kernels to the coin qubits depending on the position of the walker in the kinematic space $\mathcal{H}_K$. A schematic of a one particle type parton shower, with kinematics included, is shown in Figure~\ref{fig:kinematics}. It should be noted that, in order to keep track of each particle's momentum in the shower, the kinematic space $\mathcal{H}_K$ will have to be extended at each splitting. One can initialise the system to have the whole kinematic space at the beginning of the algorithm, populating the space only in the event of a splitting. However, this approach will lead to a large redundancy in the circuit, an area which may have to be optimised in practice.
\begin{figure}
\centering
\includegraphics[scale=0.45]{figures/kinematics.pdf}
\caption{A schematic circuit diagram for a one particle type parton shower with a discretised kinematic space. Here, $\mathcal{H}_P$, $\mathcal{H}_K$ and $\mathcal{H}_C$ are the position, kinematic and coin spaces respectively, and $w$ is an ancillary register.}
\label{fig:kinematics}
\end{figure}
\section{Summary}\label{sec:Summary}
Simulating parton showers on quantum computers has been shown~\cite{bauer2019quantum, PhysRevD.103.076020} to have distinct advantages that exploit the unique features of the quantum device.
In classical parton showers, all possible shower histories are calculated individually, stored on a physical memory device and then analysed in their entirety to provide information on a physical quantity. In contrast, the quantum device remains in a quantum state throughout the calculation, constructing a wavefunction which comprises a superposition of all possible shower histories. Consequently, all shower histories are calculated simultaneously in a single calculation, removing the requirement to store and track each shower history on physical memory. However, simply porting over the classical parton shower implementations onto a quantum device is computationally inefficient, requiring a large number of qubits and only allowing up to 2 steps of the parton shower to be simulated on current simulators~\cite{ PhysRevD.103.076020}.
This paper proposes a novel quantum walk approach to simulating parton showers on a quantum computer that represents a significant improvement in the depth of the shower that can be simulated, with far fewer qubits.
We present a quantum algorithm for the simulation of a collinear, 31-step parton shower implemented as a 2D quantum walk, where the coin flip represents the total parton emission probability, and the movement of the walker in the 2D space represents an emission corresponding to either gluons or a quark-antiquark pair. Reframing the parton shower in this quantum walk paradigm enables a 31-step shower to be simulated, a dramatic improvement over previous quantum algorithms~\cite{bauer2019quantum, PhysRevD.103.076020} and with almost a factor of two reduction in the number of required qubits~\cite{ PhysRevD.103.076020}. The quantum walk approach thus offers a natural and much more efficient approach to simulating parton showers on quantum devices. Furthermore, the algorithm scales efficiently: the number of possible shower steps increases exponentially with the number of qubits in the position registers, and the circuit depth grows linearly with the number of steps, in contrast to previous algorithms which grow quadratically, at best.
\vspace{1cm}
\noindent {\it{{\bf Acknowledgements:}~~Sarah Malik and Simon Williams are funded by grants from the Royal Society. We would like to acknowledge the use of the IBM Q for this work. We thank Frank Krauss and Stefan Prestel for valuable discussions.}}
\bibliographystyle{JHEP}
\section{Introduction}
In modern physics, one of the outstanding questions concerns the determination of the mass of neutrinos, which has fundamental implications for both particle physics and cosmology. The neutrino oscillation experiments have established that the neutrinos should have a non-zero mass \citep[e.g.][]{Forero+14,Gonzalez-Garcia+16,Capozzi+17,Salas+17}. However, these oscillation experiments are only sensitive to the mass splittings between the neutrino mass eigenstates, and to measure the absolute scale of the neutrino mass other experiments are required. Recently the Karlsruhe Tritium Neutrino (KATRIN) experiment has reported the first direct detection of sub-eV neutrino mass, with an upper limit on the `effective neutrino mass' of 0.8 eV \citep{KATRIN21}. This is based on kinematic measurements through the observation of the energy spectrum of tritium $\beta$-decay, and is model independent.
However, stronger constraints on the summed neutrino mass ($M_\nu$) can be obtained by combining various cosmological probes, as the massive neutrinos leave an imprint on various cosmological observables \citep[e.g.][]{Wong11, LesgourguesPastor12}. These constraints, however, carry an additional model dependence. Assuming the standard `lambda cold dark matter' ($\Lambda$CDM) model, one of the strongest constraints on the summed neutrino mass has been obtained by combining cosmic microwave background (CMB, \citetalias{Planck2018}), baryonic acoustic oscillation, and redshift-space galaxy clustering \citepalias{eBOSS20} data, yielding an upper limit of $M_\nu < 0.102$ eV. However, by considering extensions of the standard cosmological model, the upper limit becomes less stringent \cite[e.g.][]{Vagnozzi+18,Choudhury+20}. The community can expect the summed neutrino mass to be measured with increased precision from cosmological probes in the foreseeable future with the advent of the next generation of CMB surveys [e.g. the Simons Observatory\footnote{\url{https://simonsobservatory.org/}} \cite[SO,][]{SO}, CMB-S4\footnote{\url{https://cmb-s4.org/}} \citep{CMBS4}], and the stage IV galaxy redshift surveys [e.g. the `Dark Energy Spectroscopic Instrument'\footnote{\url{https://www.desi.lbl.gov/}} \cite[DESI,][]{DESI16}, Euclid\footnote{\url{https://www.euclid-ec.org/}} \citep{Euclid11} and the Nancy Grace Roman space telescope\footnote{\url{https://roman.gsfc.nasa.gov/}} \citep{WFIRST15}].
Currently the $M_\nu$ constraints from galaxy clustering are mainly obtained using two-point statistics, i.e. the power spectrum in Fourier space or the two-point correlation function in configuration space. The impact of massive neutrinos on the two-point clustering statistics has been studied quite extensively using $N$-body simulations, both in real \citep[e.g.][]{Saito+08,Wong08,Castorina+15} and redshift space \citep[e.g.][]{Navarro+18,Garcia+19}. However, these statistics are affected by the $M_\nu$-$\sigma_8$ degeneracy, which limits their ability to measure the summed neutrino mass. The three-point clustering statistic in Fourier space (i.e. the bispectrum) has been shown to break this degeneracy \citep{Hahn+20,HahnVilla20}. In addition, it has been shown that three-point clustering statistics contain cosmological information beyond their two-point counterparts, and thus yield substantial improvements in the constraints on other cosmological parameters as well \citep[e.g.][]{YankelevichPorciani18, ChudaykinIvanov19, Gualdi+20, Agarwal+20, Samushia+21}. There are also ongoing efforts to constrain the summed neutrino mass using other summary statistics, such as the one-point probability distribution function of the matter density \citep[e.g.][]{Uhlemann+20} and the void size function \citep[e.g.][]{Bayer+21}.
Another avenue is to use velocity statistics such as the mean pairwise velocity, which provide a complementary view to the clustering information, either through peculiar velocity surveys or through the kinetic Sunyaev--Zeldovich (kSZ) effect. \cite{Mueller+15b} showed that the mean pairwise velocity can be utilised to constrain the summed neutrino mass, and \cite{Kuruvilla+20} studied the interplay between baryonic feedback and summed neutrino mass effects on it at nonlinear separation scales. Furthermore, the three-point mean relative velocity statistics yield stronger constraints on the summed neutrino mass than the mean pairwise velocity \citep{KuruvillaAghanim21}. However, the growth rate measurement from the kSZ effect of the CMB is degenerate with the optical depth \cite[e.g.][]{KeislerSchmidt13,Battaglia16,Flender+17}, and this degeneracy limits the measurement of cosmological parameters \cite[e.g.][]{Smith+18}; it is commonly referred to as the optical depth degeneracy. It has been suggested that fast radio bursts can be used to break this degeneracy \citep{Madhavacheril+19}. In this paper we develop a new statistic, based on the first moment of the three-point relative velocities (i.e. the mean relative velocities between pairs in a triplet), which is independent of the optical depth and thus circumvents the optical depth as a limiting factor in kSZ experiments. This forms one of the main goals of this paper.
The remainder of this work is structured as follows. In Sect.~\ref{sec:cosmoksz}, we describe the newly introduced summary statistic based on three-point mean relative velocities. The Quijote suite of simulations, which we use in this work, is introduced briefly in Sect.~\ref{sec:sims}. The information content in the velocity statistics is studied using the Fisher-matrix formalism, which is summarised in Sect.~\ref{sec:fisher}. Our results are presented in Sect.~\ref{sec:results}, and we conclude in Sect.~\ref{sec:conclusions}.
\section{Cosmology using the kSZ effect}
\label{sec:cosmoksz}
\subsection{Kinetic Sunyaev--Zeldovich effect}
\label{sec:kSZ}
As CMB photons interact with the free electrons of hot ionised gas along the line-of-sight (LOS), the apparent CMB temperature changes. This is because energy is transferred from the electrons to the scattered photons, as the electrons have a significantly higher kinetic energy than the photons. In this work, we focus on the secondary effect known as the kinetic Sunyaev--Zeldovich effect \citep[kSZ;][]{SZ72,SZ80}, which arises if the scattering medium is moving relative to the Hubble flow. The fractional temperature fluctuation caused by the kSZ effect is
\begin{align}
\left.\frac{\Delta T(\hat{n})}{T_{\mathrm{cmb}}}\right|_{\mathrm{kSZ}} &= - \int \mathrm{d}l\ \sigma_{\mathrm{T}} \left(\frac{\bm{v}_{\mathrm{e}}\cdot\hat{n}}{c}\right) n_{\mathrm{e}} \, , \nonumber \\
& = -\tau \left(\frac{\bm{v}_{\mathrm{e}}\cdot\hat{n}}{c}\right) \, ,
\label{eq:ksz}
\end{align}
where $\sigma_{\mathrm{T}}$ is the Thomson scattering cross-section, $T_{\mathrm{cmb}}$ is the CMB temperature, $c$ is the speed of light, $\bm{v}_{\mathrm{e}}$ is the peculiar velocity of the free electrons, and $n_{\mathrm{e}}$ is the physical free electron number density. The integral $\int \mathrm{d}l$ is computed along the LOS, whose direction is given by $\hat{n}$. The optical depth is defined as $\tau = \int \mathrm{d}l\ \sigma_{\mathrm{T}}\ n_{\mathrm{e}}$, i.e. the integrated electron density.
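To gain intuition for the magnitude of the effect, consider an illustrative order-of-magnitude estimate with assumed, cluster-scale values: for $\tau \sim 5\times 10^{-3}$ and an LOS peculiar velocity of $\sim 300\ \mathrm{km\,s^{-1}}$, i.e. $v/c \sim 10^{-3}$, Eq.~(\ref{eq:ksz}) gives $|\Delta T| \sim T_{\mathrm{cmb}}\,\tau\,(v/c) \approx 2.725\ \mathrm{K} \times 5\times 10^{-6} \approx 14\ \mu\mathrm{K}$, several orders of magnitude below the CMB monopole.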
Detecting the kSZ signal is challenging because of its small amplitude and because its spectrum is identical to that of the primary CMB temperature fluctuations. One approach to detect the kSZ signal is to employ the pairwise statistic \cite[e.g.][]{Hand+12,Carlos+15,Planck16,Schaan+16,Soergel+16,Bernardis+17,Li+17,Calafut+21,Chen+21}. The kSZ signal has also been detected using other techniques \cite[e.g.][]{Carlos+15,Schaan+16,Nguygen+20,Tanimura+21,Chaves-Montero+21, Schaan+21}.
In the case of the kSZ pairwise signal, the temperature acts as a proxy for the peculiar velocity, and as such it probes the optical-depth-weighted pairwise velocity \cite[e.g.][]{Hand+12,Soergel+18}
\begin{equation}
\frac{\langle \Delta T^{\mathrm{kSZ}}(r_{12})\rangle}{T_{\mathrm{cmb}}} \simeq -\tau \frac{\bar{w}(r_{12})}{c}\,,
\label{eq:ksz-pair}
\end{equation}
where $\langle \Delta T^{\mathrm{kSZ}}(r_{12}) \rangle$ is the mean temperature difference between objects `1' and `2', and $\bar{w}(r_{12})$ is the mean radial component of the pairwise velocity, which can be defined in the single-streaming regime as
\begin{align}
\langle \bm{w}_{12}|\bm{r}_{12} \rangle_\mathrm{p} = &\ \displaystyle \frac{\langle(1+\delta_{1})(1+\delta_{2}) (\bm{v}_{2}-\bm{v}_{1})\rangle}{\langle (1+\delta_{1})(1+\delta_{2})\rangle} \, ,
\label{eq:mean-radial-velocity-two}
\end{align}
where $\delta_i\equiv\delta(\bm{x}_i)$ is the density contrast, $\bm{v}_i\equiv \bm{v}(\bm{x}_i)\equiv \bm{u}(\bm{x}_i)/aH$ is the normalised peculiar velocity, $a$ is the scale factor, and $H$ is the Hubble parameter. Using perturbation theory at leading order (LO), the mean radial matter pairwise velocity can be written as \citep[e.g.][]{Fisher95, Juszkiewicz+98, ReidWhite11}
\begin{equation}
\langle \bm{w}_{12}|\bm{r}_{12} \rangle_{\mathrm p} = \bar{w}(r_{12})\,\hat{\bm{r}}_{12}
\simeq \displaystyle - \frac{f}{\pi^2} \, \hat{\bm{r}}_{12} \int_0^{\infty} k \,j_1(k r_{12})\,P(k)\, \mathrm{d} k \, ,
\label{eq:mean-radial-velocity-two-theory}
\end{equation}
where $\hat{\bm{r}}_{12}$ is the unit vector along the pair `12', the subscript p indicates that the averages are computed over all pairs with separation $r_{12}$, $P(k)$ denotes the linear matter power spectrum, and $j_1(x) = \sin (x)/x^2- \cos (x)/x$ is the spherical Bessel function of order one. It should be noted that Eq.~(\ref{eq:ksz-pair}) assumes that there is no correlation between the optical depth and the velocity field. From Eqs.~(\ref{eq:ksz-pair}) and (\ref{eq:mean-radial-velocity-two-theory}), we can see that
\begin{equation}
\Delta T^{\mathrm{kSZ}} \propto \tau f \sigma^2_8 \,
\end{equation}
implying that the growth rate measurement from the pairwise kSZ signal is perfectly degenerate with the optical depth \citep{KeislerSchmidt13}. Here we have presented the argument for matter; in observations an additional bias dependence enters the above equation.
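For concreteness, the LO prediction of Eq.~(\ref{eq:mean-radial-velocity-two-theory}) can be evaluated numerically with a few lines of code. The sketch below (in Python with \textsc{Numpy}; purely illustrative and not part of the original analysis) assumes a user-supplied tabulated linear power spectrum, here as hypothetical arrays \texttt{k} and \texttt{Pk}, together with a growth rate \texttt{f}.
\begin{verbatim}
import numpy as np

def j1(x):
    # spherical Bessel function of order one
    return np.sin(x) / x**2 - np.cos(x) / x

def bar_w(r, k, Pk, f):
    # LO mean radial pairwise velocity,
    # Eq. (mean-radial-velocity-two-theory):
    # bar{w}(r) = -(f / pi^2) * int dk k j1(k r) P(k),
    # evaluated with a trapezoidal rule over tabulated P(k)
    return -f / np.pi**2 * np.trapz(k * j1(k * r) * Pk, k)
\end{verbatim}
In practice the tabulated $P(k)$ would come from a Boltzmann solver, and the quadrature should extend over a $k$-range wide enough for the integrand to have decayed.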
\subsection{New statistics based on mean relative velocity between pairs in a triplet}
\label{sec:velsta}
In the previous section, we discussed the mean relative velocity between two tracers (i.e. between a pair), the mean pairwise velocity. This can be generalised to the case of three tracers, for which we can consider two mean relative velocities between pairs in a triplet with separations $\triangle_{123}=(r_{12},r_{23},r_{31})$:
(i) $\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t}$, and (ii)
$\langle \bm{w}_{23}|\triangle_{123} \rangle_\mathrm{t}$.
The subscript t here indicates that the averages are computed over all triplets with separations $(r_{12},r_{23},r_{31})$. Similar to Eq.~(\ref{eq:mean-radial-velocity-two}), in the single-stream fluid approximation the mean relative velocity of pair 12 in a triplet can be written as \citep{KuruvillaPorciani20}
\begin{align}
\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t} = &\ \displaystyle \frac{\langle(1+\delta_{1})(1+\delta_{2}) (1+\delta_{3})(\bm{v}_{2}-\bm{v}_{1})\rangle}{\langle (1+\delta_{1})(1+\delta_{2}) (1+\delta_{3})\rangle} \nonumber \\
\simeq &\ \langle \delta_{1}\bm{v}_{2} \rangle - \langle \delta_{2} \bm{v}_{1} \rangle + \langle \delta_{3}\bm{v}_{2} \rangle - \langle \delta_{3} \bm{v}_{1} \rangle\nonumber \\
=&\ \bar{w}(r_{12})\,\hat{\bm{r}}_{12}-\frac{1}{2}\left[
\bar{w}(r_{23})\,\hat{\bm{r}}_{23}+\bar{w}(r_{31})\,\hat{\bm{r}}_{31}\right]
\, .
\label{eq:mean-radial-velocity-three}
\end{align}
The three-point mean relative velocity statistics can be decomposed into a radial ($R_{ij}$) and a transverse ($T_{ij}$) component in the plane of the triangle defined by the particles. This is in contrast to the mean pairwise velocity, for which the transverse component is zero. In the case of $ \langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t}$, the decomposition reads
\begin{align}
\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t} & = \langle \bm{w}_{12}\cdot \hat{\bm{r}}_{12} |\triangle_{123} \rangle_{\mathrm t}\,\hat{\bm{r}}_{12} +
\langle \bm{w}_{12}\cdot \hat{\bm{t}}|\triangle_{123} \rangle_{\mathrm t}\, \hat{\bm{t}}\nonumber\\
& =R_{12}(\triangle_{123})\,\hat{\bm{r}}_{12}+T_{12}(\triangle_{123})\,\hat{\bm{t}}\;,
\label{eq:decomposition}
\end{align}
where $\hat{\bm{t}}=(\hat{\bm{r}}_{23}-\cos\chi\,\hat{\bm{r}}_{12})/\sin\chi$, $\hat{\bm{r}}_{23}$ is the unit vector along the pair `23', and $\chi= \arccos(\hat{\bm r}_{12} \cdot \hat{\bm r}_{23})$. In this work, we make use of only the radial component, and for the pair 12 in the triplet it can be written as
\begin{align}
R_{12}(\triangle_{123})=
\bar{w}(r_{12})&-\frac{1}{2}\Bigg[
\bar{w}(r_{23})\,\cos \chi \nonumber \\
& -\bar{w}(r_{31})\,\frac{r_{12}+r_{23}\cos\chi}{\sqrt{r_{12}^2+r_{23}^2+2r_{12}r_{23}\cos\chi}}\Bigg]\;.
\label{eq:R12_triangle}
\end{align}
\noindent Similarly, the mean radial relative velocity between the pair 23 in $\triangle_{123}$ can be written as
\begin{align}
R_{23}(\triangle_{123})=
\bar{w}(r_{23})&-\frac{1}{2}\Bigg[
\bar{w}(r_{12})\,\cos \chi \nonumber \\
& -\bar{w}(r_{31})\,\frac{r_{23}+r_{12}\cos\chi}{\sqrt{r_{12}^2+r_{23}^2+2r_{12}r_{23}\cos\chi}}\Bigg]\;.
\label{eq:R23_triangle}
\end{align}
\noindent Similar to Eq.~(\ref{eq:ksz-pair}), the three-point mean relative temperature difference from kSZ can be written down as
\begin{equation}
\frac{\Delta T^{\mathrm{kSZ}}_{ij}(\triangle_{123})}{T_{\mathrm{cmb}}} \simeq -\tau \frac{R^{\mathrm{h}}_{ij}(\triangle_{123})}{c}\,,
\label{eq:ksz-triplet}
\end{equation}
where $R^{\mathrm{h}}_{ij}(\triangle_{123})$ represents the three-point mean relative velocity statistics for haloes (biased tracers); to first approximation it can be written as a linear bias term times $R_{ij}(\triangle_{123})$ \citep{KuruvillaAghanim21}. Based on the radial mean relative velocities between pairs in a triplet, we can introduce a new ratio statistic
\begin{align}
\mathcal{R}(\triangle_{123}) &= \frac{\langle \bm{w}_{12}\cdot \hat{\bm{r}}_{12} |\triangle_{123} \rangle_{\mathrm t}}{\langle \bm{w}_{23}\cdot \hat{\bm{r}}_{23} |\triangle_{123} \rangle_{\mathrm t}} \, , \label{eq:ratio-new-statistics}
\end{align}
which quantifies the average infall velocity of pair 12 relative to that of pair 23 for a specific triangular configuration $\triangle_{123}$. On linear scales, using perturbation theory at LO, $\mathcal{R}(\triangle_{123})$ can be written as the ratio of Eqs.~(\ref{eq:R12_triangle}) and (\ref{eq:R23_triangle})
\begin{align}
\mathcal{R}(\triangle_{123})&=\frac{R_{12}(\triangle_{123})}{R_{23}(\triangle_{123})} = \frac{R^{\mathrm{h}}_{12}(\triangle_{123})}{R^{\mathrm{h}}_{23}(\triangle_{123})} \equiv \frac{\Delta T^{\mathrm{kSZ}}_{12}(\triangle_{123})}{\Delta T^{\mathrm{kSZ}}_{23}(\triangle_{123})} \, .
\label{eq:ratio-new-statistics-lo}
\end{align}
The statistic introduced above is thus independent of the optical depth, $\sigma_8$, and linear bias. In the following sections, we take a detailed look at whether this Ansatz of $\sigma_8$ and linear bias independence holds up. Additionally, we study the cosmological information content in $\mathcal{R}(\triangle_{123})$.
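For illustration, the LO expressions above can be combined into a short routine. The sketch below (illustrative only; \texttt{bar\_w} is a user-supplied callable returning the LO mean pairwise velocity of the previous listing) computes $R_{12}$, $R_{23}$, and their ratio $\mathcal{R}$ for a triangle given by its side lengths, using $\cos\chi = (r_{31}^2 - r_{12}^2 - r_{23}^2)/(2\,r_{12}\,r_{23})$, for which $\sqrt{r_{12}^2 + r_{23}^2 + 2\,r_{12}\,r_{23}\cos\chi} = r_{31}$.
\begin{verbatim}
def ratio_statistic(r12, r23, r31, bar_w):
    # cos(chi) between the directed unit vectors r12-hat and
    # r23-hat, so that the square roots in Eqs. (R12_triangle)
    # and (R23_triangle) reduce to r31
    cos_chi = (r31**2 - r12**2 - r23**2) / (2.0 * r12 * r23)
    w12, w23, w31 = bar_w(r12), bar_w(r23), bar_w(r31)
    # Eqs. (R12_triangle) and (R23_triangle)
    R12 = w12 - 0.5 * (w23 * cos_chi
                       - w31 * (r12 + r23 * cos_chi) / r31)
    R23 = w23 - 0.5 * (w12 * cos_chi
                       - w31 * (r23 + r12 * cos_chi) / r31)
    # Eq. (ratio-new-statistics-lo)
    return R12 / R23
\end{verbatim}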
\section{Data and analysis}
\subsection{Quijote simulation suite}
\label{sec:sims}
In this work, we make use of the Quijote\footnote{\url{https://quijote-simulations.readthedocs.io/}} \citep{Quijote20} suite of simulations, which was run using the tree-PM code \textsc{gadget}-3 \citep{Springel05}. Spanning more than a few thousand cosmological models, it contains 44,100 \textit{N}-body simulations. These simulations have a box length of $1\,h^{-1}\,\mathrm{Gpc}$ and track the evolution of $512^3$ cold dark matter (CDM) particles. The initial conditions (ICs) were generated at redshift $z=127$ using second-order Lagrangian perturbation theory. The fiducial cosmological parameters (assuming zero summed neutrino mass) of the simulations are as follows: the total matter density: $\Omega_{\mathrm{m}}=0.3175$, the baryonic matter density: $\Omega_{\mathrm{b}}=0.049$, the primordial spectral index of the density perturbations: $n_{\mathrm{s}}=0.9624$, the amplitude of the linear power spectrum on the scale of $8\ h^{-1}\mathrm{Mpc}$: $\sigma_8=0.834$, and the present-day value of the Hubble constant: $H_0\equiv H(z=0)=100\, h\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$ with $h=0.6711$. This is broadly consistent with the Planck 2018 results (\citetalias{Planck2018}). The suite consists of 15,000 random realisations for the fiducial cosmology. For the purpose of calculating derivatives, Quijote provides sets of 500 random realisations wherein only one parameter is varied with respect to the fiducial cosmology. The variations are as follows: \{$\Omega^+_{\mathrm{m}}, \Omega^-_{\mathrm{m}}, \Omega^+_{\mathrm{b}}, \Omega^-_{\mathrm{b}}, n^+_{\mathrm{s}}, n^-_{\mathrm{s}}, \sigma^+_{8}, \sigma^-_{8}\} = \{0.3275, 0.3075, 0.051, 0.047, 0.9824, 0.9424, 0.849, 0.819\}$ and $\{h^+, h^-\} = \{0.6911,$ $0.6511\}$.
In addition, the suite provides 500 realisations for each of three massive neutrino cosmologies, with summed neutrino masses of 0.1, 0.2, and 0.4 eV. The initial conditions for these simulations were produced using the Zeldovich approximation (ZA), and the simulations contain $512^3$ neutrino particles in addition to the CDM particles. To compute the numerical derivatives with respect to the neutrino mass, the Quijote suite provides an additional 500 random realisations for the fiducial cosmology in which the ICs were also generated using the ZA.
In this work, we use halo catalogue data from 22,000 \textit{N}-body simulations of the Quijote suite. The haloes were identified using a friends-of-friends algorithm. We selected haloes with mass $M_\mathrm{h} > 5 \times 10^{13}\ h^{-1}\mathrm{M}_\odot$ (corresponding to groups and clusters of galaxies) at $z=0$, which gives a mean number density of $\bar{n} \sim 0.92 \times 10^{-4}\,h^3\,\mathrm{Mpc}^{-3}$ for the reference simulations. Additionally, in the case of (i) the fiducial cosmology and (ii) the variations in $\sigma_8$ (both $\sigma^+_{8}$ and $\sigma^-_{8}$), we use 30 realisations of the particle data (randomly down-sampled to $100^3$ particles) to compute $\mathcal{R}(\triangle_{123})$.
\subsection{Fisher-matrix formalism}
\label{sec:fisher}
To quantify the error estimates on the cosmological parameters, we use the Fisher-matrix formalism, in which the Fisher information matrix is defined as \citep[e.g.][]{Tegmark+1997, Heavens09, Verde10}
\begin{equation}
F_{\alpha \beta} = \left\langle -\frac{\partial^2\ln{\mathcal{L}}}{\partial \theta_\alpha \partial \theta_\beta} \right\rangle \, ,
\label{eq:fisher_definition}
\end{equation}
where $\theta_\alpha$ and $\theta_\beta$ are two of the cosmological model parameters, and $\mathcal{L}$ is the likelihood of the data given a model. Assuming a Gaussian likelihood, we can write the Fisher information matrix as
\begin{equation}
F_{\alpha \beta} = \frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial \theta_\alpha} \cdot \hat{\mathbf{C}}^{-1} \cdot \frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}^\mathsf{T}}{\partial \theta_\beta} \, ,
\label{eq:fisher_reduced}
\end{equation}
where $\boldsymbol{\mathcal{R}}$ represents the data vector for the ratio statistic introduced in Eq.~(\ref{eq:ratio-new-statistics}), and $\hat{\mathbf{C}}^{-1}$ is the precision matrix (i.e. the inverse covariance matrix). It should be noted that in the definition of $F_{\alpha\beta}$ we have neglected a term which arises from the cosmology dependence of the covariance matrix; however, this correction has been shown to have a negligible effect \citep{Kodwani+19}. We compute the covariance matrix of $\mathcal{R}$ directly from the simulations as
\begin{equation}
\widetilde{\mathbf{C}} = \frac{1}{N_{\mathrm{sims}}-1}\sum_{i=1}^{N_{\mathrm{sims}}} \left(\boldsymbol{\mathcal{R}}_i-\overline{\boldsymbol{\mathcal{R}}}\right)\left(\boldsymbol{\mathcal{R}}_i-\overline{\boldsymbol{\mathcal{R}}}\right)^\mathsf{T} \, ,
\label{eq:covariancematrix}
\end{equation}
where $\overline{\boldsymbol{\mathcal{R}}} = N_{\mathrm{sims}}^{-1}\sum_{i=1}^{N_{\mathrm{sims}}} \boldsymbol{\mathcal{R}}_i$, and $N_{\mathrm{sims}}$ denotes the total number of simulations used to compute the covariance matrix (in this work $N_{\mathrm{sims}}=15,000$). While Eq.~(\ref{eq:covariancematrix}) gives an unbiased estimate of the covariance matrix, its inversion leads to a biased estimate of the precision matrix. This can, however, be corrected statistically by applying a multiplicative factor to the precision matrix \citep{Kaufmann67,Anderson03,Hartlap+07}
\begin{equation}
\hat{\mathbf{C}}^{-1} = \frac{N_{\mathrm{sims}}-N_{\mathrm{bins}}-2}{N_{\mathrm{sims}}-1}\ \widetilde{\mathbf{C}}^{-1} \, ,
\end{equation}
where $N_{\mathrm{bins}}$ is the number of bins in $\mathcal{R}$.
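In code, the covariance estimate of Eq.~(\ref{eq:covariancematrix}) and the multiplicative correction above amount to only a few lines. The following minimal sketch (illustrative; it assumes the measured data vectors are stacked in a hypothetical array \texttt{R\_sims} of shape $(N_{\mathrm{sims}}, N_{\mathrm{bins}})$) returns the corrected precision matrix.
\begin{verbatim}
import numpy as np

def precision_matrix(R_sims):
    # R_sims: (N_sims, N_bins), one data vector per realisation
    n_sims, n_bins = R_sims.shape
    diff = R_sims - R_sims.mean(axis=0)
    cov = diff.T @ diff / (n_sims - 1)   # Eq. (covariancematrix)
    # multiplicative (Hartlap) correction for the inverse
    return (n_sims - n_bins - 2) / (n_sims - 1) * np.linalg.inv(cov)
\end{verbatim}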
We numerically compute the derivatives required to construct the Fisher information matrix using the Quijote suite of simulations, which provides 500 realisations in which only one cosmological parameter is varied while the rest are fixed at their fiducial values. Thus, when the model parameter is one of $\theta \in \{\Omega_{\mathrm{m}}, \Omega_{\mathrm{b}}, h, n_{\mathrm{s}}, \sigma_{8}\}$, we use the central difference approximation to compute the derivative numerically
\begin{equation}
\frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial \theta} \simeq \frac{\boldsymbol{\mathcal{R}}(\theta+\mathrm{d}\theta)-\boldsymbol{\mathcal{R}}(\theta-\mathrm{d}\theta)}{2\ \mathrm{d}\theta} \, .
\end{equation}
In the case of the neutrino mass, the fiducial value is 0.0 eV and $M_\nu$ cannot take negative values; hence we obtain the partial derivative using the one-sided second-order approximation
\begin{equation}
\frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial M_\nu} \simeq \frac{-\boldsymbol{\mathcal{R}}(M_\nu=0.4)+4\boldsymbol{\mathcal{R}}(M_\nu=0.2) - 3\boldsymbol{\mathcal{R}}(M_\nu=0)}{0.4} \, .
\end{equation}
Thus we utilise two sets of massive neutrino simulations from Quijote, with $M_\nu = 0.2$ eV and $M_\nu = 0.4$ eV, for the Fisher information matrix. However, the initial conditions of the simulations with massive neutrinos were generated using the ZA. To compute the partial derivative consistently, we make use of another 500 realisations of the fiducial cosmology (with $M_\nu = 0$ eV) in which the initial conditions were also generated using the ZA.
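A minimal sketch of how these pieces combine into parameter forecasts is given below (illustrative, with hypothetical array names; the derivative stencils follow the two equations above, and the $1\sigma$ marginalised error on $\theta_\alpha$ is $\sqrt{F^{-1}_{\alpha\alpha}}$).
\begin{verbatim}
import numpy as np

def central_difference(R_plus, R_minus, dtheta):
    # two-sided stencil for Omega_m, Omega_b, h, n_s, sigma_8
    return (R_plus - R_minus) / (2.0 * dtheta)

def mnu_derivative(R_04, R_02, R_0):
    # one-sided second-order stencil at M_nu = 0
    # (steps of 0.2 and 0.4 eV)
    return (-R_04 + 4.0 * R_02 - 3.0 * R_0) / 0.4

def fisher_matrix(derivs, C_inv):
    # derivs: list of dR/dtheta vectors; Eq. (fisher_reduced)
    D = np.vstack(derivs)          # (n_params, n_bins)
    return D @ C_inv @ D.T

def marginalised_errors(F):
    # 1-sigma marginalised errors: sqrt of diag of F^{-1}
    return np.sqrt(np.diag(np.linalg.inv(F)))
\end{verbatim}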
\section{Results}
\label{sec:results}
\begin{figure}
\centering
\includegraphics[scale=0.56]{ratio_theory_fidelity}
\caption{Top: comparison of the theoretical prediction (orange dashed line) for $\mathcal{R}(\triangle_{123})$, using perturbation theory at LO, against the direct measurement (blue solid line) from the halo catalogues of the Quijote suite of simulations. Bottom: residuals showing the deviation of the theoretical prediction from the direct measurement. The blue shaded region denotes the 5\% band. The triangle configurations range from the smallest, $\{(40,45), (40,45), (40,45)\} \,h^{-1}\mathrm{Mpc}$, to the largest, $\{(115,120), (115,120), (115,120)\} \,h^{-1}\mathrm{Mpc}$.}
\label{fig:theory}
\end{figure}
In Fig.~\ref{fig:theory} we show the direct measurement of $\mathcal{R}(\triangle_{123})$ from the 15,000 reference halo catalogues (solid blue line) and compare it against the LO prediction (dashed orange line). We consider all triangular configurations whose separation bins range from $(40, 45)\,h^{-1}\mathrm{Mpc}$ to $(115, 120)\,h^{-1}\mathrm{Mpc}$, with $r_{12} \geq r_{23} \geq r_{31}$; all separation scales have a bin width of 5 $h^{-1}\mathrm{Mpc}$. This corresponds to a total of 766 triangular configurations, from configuration `0', the smallest (i.e. $\triangle_{123} \in \{(40,45), (40,45), (40,45)\} \,h^{-1}\mathrm{Mpc}$), to configuration `765', the largest ($\triangle_{123} \in \{(115,120), (115,120), (115,120)\} \,h^{-1}\mathrm{Mpc}$). One can see from Eqs.~(\ref{eq:R12_triangle}) and (\ref{eq:R23_triangle}) that the mean three-point relative velocities $R_{12}$ and $R_{23}$ are equal when $r_{12} = r_{23}$, irrespective of the length of the third side. This is directly visible in Fig.~\ref{fig:theory}, where $\mathcal{R}=1$ whenever this condition is met. Comparing the theoretical predictions, we see that they are accurate to within 4--5\% for configurations with all separation lengths greater than 55 $h^{-1} \mathrm{Mpc}$. As expected, the fidelity of the LO prediction decreases as the separation length decreases, with a maximum deviation of about 27\% for the triangular configuration $\{(100,105),(50,55),(50,55)\}$ $h^{-1}\mathrm{Mpc}$. This motivates us to measure $\mathcal{R}$ directly from the simulations when computing the derivatives for the Fisher information matrix.
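As a cross-check of the configuration count, the short sketch below enumerates all separation-bin triples with $r_{12} \geq r_{23} \geq r_{31}$ and discards those whose bin centres cannot close a triangle; under this (assumed) closure criterion, the enumeration reproduces the 766 configurations quoted above.
\begin{verbatim}
import numpy as np

edges = np.arange(40, 125, 5)              # bin edges 40, ..., 120
centres = 0.5 * (edges[:-1] + edges[1:])   # 16 separation bins

configs = [(i, j, k)
           for i in range(len(centres))    # r12 bin (largest side)
           for j in range(i + 1)           # r23 bin
           for k in range(j + 1)           # r31 bin
           # triangle closure at the bin centres: r23 + r31 > r12
           if centres[j] + centres[k] > centres[i]]
print(len(configs))                        # 766
\end{verbatim}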
\begin{figure}
\centering
\includegraphics[scale=0.58]{ratio_sigma8_independence}
\caption{Ratio of $\mathcal{R}$ at $\sigma_8^+=0.849$ to $\mathcal{R}$ at $\sigma_8^-=0.819$. The (blue) solid dots and the (orange) error bars represent the mean and the relative error on the mean, respectively.}
\label{fig:s8_independence}
\end{figure}
As mentioned in Sect.~\ref{sec:velsta}, $\mathcal{R}(\triangle_{123})$ should be unaffected by variations in $\sigma_8$. To demonstrate this, we compute the ratio of $\mathcal{R}(\triangle_{123})$ at $\sigma^+_8 = 0.849$ to that at $\sigma^-_8 = 0.819$ using dark matter particles from 30 realisations, and show it in Fig.~\ref{fig:s8_independence}. The (blue) dots represent the mean of the measurements, while the scatter is shown by the (orange) error bars. We can thus conclude that $\mathcal{R}$ is independent of $\sigma_8$. Similarly, in Fig.~\ref{fig:halo_bias_independence} we examine the bias dependence of $\mathcal{R}(\triangle_{123})$ (black solid line) and of $R_{12}$ (blue dashed line), showing for each summary statistic the ratio between its halo and matter measurements. The bias term for the mean relative velocity of pair `23' in a triplet is similar to that of $R^{\mathrm{h}}_{12}$ and is thus not shown in the figure. As reported in \cite{KuruvillaAghanim21}, for these triangular configurations (assuming a scale-independent bias) one finds a bias factor of around 1.85 for $R_{12}$ and $R_{23}$. To compute these ratios, we used the mean relative velocity measurements for matter from 30 realisations of the dark-matter-only simulations, and for haloes the 15,000 catalogues. The shaded regions represent the $1\sigma$ errors from the propagation of the uncertainties of the mean relative velocity statistics for matter and haloes. One can see that on large separation scales the newly introduced statistic (black solid line) is bias independent, while for the smallest triangle configurations it shows a very weak bias dependence. This supports the Ansatz in Eq.~(\ref{eq:ratio-new-statistics-lo}), whereby LO perturbation theory renders $\mathcal{R}$ bias independent on linear scales. For all triangular configurations considered in this work the bias is found to be equal to one within 1--2\%, and hence for the purposes of the Fisher-matrix formalism we consider $\mathcal{R}$ to be independent of a (constant) linear bias term.
\begin{figure}
\centering
\includegraphics[scale=0.58]{ratio_bias_independence}
\caption{The dashed (blue) line shows the bias for $R_{12}$, i.e. the mean radial relative velocity between particles 1 and 2 in a triplet. The solid (black) line shows the (weak) bias dependence of the ratio statistic $\mathcal{R}$, which is equal to one within 1--2$\%$ for all triangular configurations considered here.}
\label{fig:halo_bias_independence}
\end{figure}
Since $\mathcal{R}$ is found to be independent of $\sigma_8$ at the scales we probe (i.e. $r_{\mathrm{min}} \geq 40\ h^{-1}\mathrm{Mpc}$), the statistic is in the unique position of being unaffected by the degeneracy in the $M_\nu$-$\sigma_8$ parameter plane. We check the impact of the summed neutrino mass on $\mathcal{R}$ using the three non-zero neutrino masses in Fig.~\ref{fig:halo_neutrino_mass}. The solid (blue) line shows the impact of $M_\nu=0.1$ eV on $\mathcal{R}$ compared to the zero neutrino mass cosmology; similarly, the dashed (orange) and dash-dotted (green) lines show the impact of $M_\nu=0.2$ and $M_\nu=0.4$ eV, respectively. As can be seen, increasing the neutrino mass decreases the infall velocity between pairs in most of the triangular configurations. This is related to the free streaming of neutrinos due to their large thermal velocities: below the free-streaming scale, neutrinos do not cluster, which slows down the collapse of matter in general. This leads to an overall reduction in the growth of density perturbations on scales below the free-streaming scale, and thus causes a suppression of power at large Fourier modes in the matter power spectrum \citep[e.g.][]{Wong11,LesgourguesPastor12}. Across all configurations we measured, the maximal suppression of $\mathcal{R}$ occurs for $M_\nu=0.4$ eV and the triangular configuration $\{(100,105),(50,55),(50,55)\} \ h^{-1}\mathrm{Mpc}$, when compared to the zero neutrino mass cosmology.
\begin{figure}
\centering
\includegraphics[scale=0.56]{ratio_neutrino_mass}
\caption{Effect of the summed neutrino mass on $\mathcal{R}(\triangle_{123})$, as measured directly from the simulations, compared to the zero neutrino mass fiducial cosmology. The summed neutrino masses considered here are denoted in the legend, in units of eV.}
\label{fig:halo_neutrino_mass}
\end{figure}
\subsection{Cosmological parameters}
\label{sec:cosmoparams}
\begin{figure}
\centering
\includegraphics[scale=0.62]{corr_ratio_ratio}
\caption{The correlation matrix (i.e. the covariance matrix of $\mathcal{R}$ normalised by its diagonal elements) computed using 15,000 realisations of the Quijote simulations. The triangle configurations are the same as in Fig.~\ref{fig:theory}.}
\label{fig:correlation_matrix}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.53]{ratio_fisher_results}
\caption{Joint 68.3\% (dark shaded contour) and 95.4\% (light shaded contour) credible region for all the pairs of cosmological model parameters at $z=0$.}
\label{fig:fisher_results}
\end{figure*}
We now turn our attention to the information content of the new ratio statistic $\mathcal{R}$ and assess its viability for constraining the cosmological model. As discussed in Sect.~\ref{sec:fisher}, we achieve this using the Fisher information matrix, whose ingredients are the partial derivatives of $\mathcal{R}$ with respect to the cosmological model parameters and the covariance matrix ($\mathbf{C}$). We show the correlation matrix, given by $\mathrm{C}_{ij}/\sqrt{\mathrm{C}_{ii}\mathrm{C}_{jj}}$, in Fig.~\ref{fig:correlation_matrix}, with the covariance matrix measured directly from 15,000 realisations of the simulations. We notice the presence of non-diagonal terms: similar triangular configurations are positively correlated, while configurations that differ substantially tend to be negatively correlated.
We now quantify the information content in $\mathcal{R}$ using the Fisher information matrix formalism defined in Sect.~\ref{sec:fisher}. As mentioned, both of its ingredients were computed directly from the simulations. Since the bias dependence of $\mathcal{R}$ was shown to be very weak even at small scales ($\sim$ 40--50 $h^{-1}\mathrm{Mpc}$) and absent at large scales ($\geq 80$ $h^{-1}\mathrm{Mpc}$), we do not include a bias parameter in the Fisher-matrix formalism. The model parameters are thus $\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, and $M_\nu$. We show the results of our Fisher forecast in Fig.~\ref{fig:fisher_results}, where the dark and light shaded contours denote the 68.3\% and 95.4\% joint credible regions for all pairs of model parameters, respectively. The $1\sigma$ marginalised error on a model parameter $\theta_{\alpha}$ is given by $\sqrt{F_{\alpha\alpha}^{-1}}$, and for the parameters we considered the errors are: $\{\Omega_{\mathrm{m}}, \Omega_{\mathrm{b}}, h, n_{\mathrm{s}},M_\nu\} \equiv \{0.0158, 0.0041, 0.0391, 0.0394, 0.1175\}$.
We compare these constraints with those obtained from the mean pairwise velocity and from the mean relative velocities between pairs in a triplet, as reported in \cite{KuruvillaAghanim21}. For a fair comparison, we use the constraints obtained from those statistics with $r_{\mathrm{min}} = 40\ h^{-1}\mathrm{Mpc}$. The ratio statistic yields improvement factors of \{6.2, 7.6, 9.8, 12.9, 8.87\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively, over the mean pairwise velocity. However, when compared against the constraints from $R^{\mathrm{h}}_{12}(\triangle_{123})+R^{\mathrm{h}}_{23}(\triangle_{123})$ (i.e. the combined mean three-point relative velocities), the constraining power of $\mathcal{R}$ shrinks by a factor of 1.34--1.44 for all model parameters. This could be due in part to the data vector of $R^{\mathrm{h}}_{12}+R^{\mathrm{h}}_{23}$ being twice as long as that of $\mathcal{R}$. A similar shrinkage was seen when considering $R^{\mathrm{h}}_{12}$ and $R^{\mathrm{h}}_{23}$ separately, as compared to their combination, in \citet{KuruvillaAghanim21}.
It is informative to ask how the constraints from $\mathcal{R}$ fare against those obtained from clustering statistics. To answer this question, we compare the constraints obtained in this work with those from the redshift-space halo power spectrum and halo bispectrum in \cite{Hahn+20}. Compared against the redshift-space power spectrum multipoles with $k_{\mathrm{max}}=0.2\,h\,\mathrm{Mpc}^{-1}$ (the scale cut closest to the $r_{\mathrm{min}}$ considered in this work), $\mathcal{R}$ yields improvement factors of \{2.3, 3.6, 4.5, 5.4, 5.7\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively. These factors reduce slightly to \{1.4, 2.9, 3.1, 3.3, 2.5\} when comparing against the power spectrum constraints with $k_{\mathrm{max}}=0.5\,h\,\mathrm{Mpc}^{-1}$. This improvement over the power spectrum (a two-point summary statistic) is not surprising, as $\mathcal{R}$ is based on the first moment of the three-point relative velocity statistics. It is therefore interesting to compare against the constraints from the redshift-space bispectrum monopole: when using all triangular configurations with $k_{\mathrm{max}}=0.2\,h\,\mathrm{Mpc}^{-1}$, $\mathcal{R}$ still yields improvement factors of \{1.8, 2.9, 3.2, 3.1, 1.8\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively. But when considering the larger set of triangular configurations for the bispectrum with $k_{\mathrm{max}}=0.5\,h\,\mathrm{Mpc}^{-1}$, $\mathcal{R}$ has less constraining power, with factors of \{0.7, 1.0, 1.0, 0.9, 0.4\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively. This is not surprising, as the bispectrum monopole in this case probes much further into the nonlinear scales, while $\mathcal{R}$ was analysed only for triangular configurations with separation scales of $40\,h^{-1}\mathrm{Mpc}$ and above.
As it currently stands, one of the limitations to applying the new statistic $\mathcal{R}(\triangle_{123})$ directly to observational data is the lack of an estimator for the three-point mean radial relative velocities in terms of the LOS velocities (as it is the LOS velocity that can be measured, from either peculiar velocity surveys or kSZ experiments). For the mean pairwise velocity, \cite{Ferreira+99} showed how to construct such an estimator; we will construct analogous estimators for $R_{ij}(\triangle_{123})$ and $\mathcal{R}(\triangle_{123})$ in future work. For the mean pairwise velocity, an alternative estimator also exists based on each tracer's transverse velocity component \citep{Yasini+19}. The three-point mean relative velocity, on the other hand, has a non-vanishing mean transverse component in the plane of the triangle (unlike the pairwise velocity, whose transverse component is zero); we would therefore also construct an estimator for this component in future work. Furthermore, the analysis presented here considered only the radial component, so a combination of the radial and transverse components of the three-point mean relative velocity could further improve the prospects of constraining the cosmological model accurately.
Another caveat not discussed in this work is the mass dependence of the optical depth, which has been shown to increase with halo mass \citep[e.g.][]{Battaglia16}; we treated the optical depth as an averaged quantity [as in Eqs.~(\ref{eq:ksz-pair}) and (\ref{eq:ksz-triplet})]. However, we do not expect this mass dependence to affect $\mathcal{R}(\triangle_{123})$ as long as the ratio is computed within the same mass bin. On the other hand, assuming a fixed cosmology, one could consider a scenario in which $R^{\mathrm{h}}_{23}$ is measured in a fixed high-mass halo bin while $R^{\mathrm{h}}_{12}$ is measured in various mass bins in Eq.~(\ref{eq:ratio-new-statistics}). This could potentially allow a direct measurement of the (scaled) mass dependence of the optical depth (degenerate with the bias factor) from kSZ experiments.
\section{Conclusions}
\label{sec:conclusions}
Determining the neutrino mass using cosmological observables has become one of the main goals of forthcoming cosmological surveys. However, two-point statistics, whether clustering or relative velocity statistics, are in general affected by the $M_\nu$-$\sigma_8$ degeneracy, which limits their potential for constraining the neutrino mass from cosmology. With regard to relative velocities, \cite{KuruvillaPorciani20} introduced the three-point mean relative velocity statistics (i.e. the mean relative velocities between pairs in a triplet), and subsequently \cite{KuruvillaAghanim21} quantified their cosmological information content. They were found to offer a substantial information gain compared to two-point statistics (both the power spectrum and the mean pairwise velocity), while being competitive with the constraints from the bispectrum.
In this paper, we extended the applications of the mean three-point relative velocity statistics and introduced a new ratio statistic $\mathcal{R}$ [Eq.~(\ref{eq:ratio-new-statistics})] which is unaffected by $\sigma_8$. This enables constraining the neutrino mass, in addition to other cosmological parameters, independently of $\sigma_8$. Moreover, in the context of kSZ experiments this statistic is independent of the optical depth, hence circumventing the optical depth degeneracy which currently acts as a limiting factor in the determination of cosmological parameters from kSZ experiments. Furthermore, the leading-order perturbation theory prediction suggests that $\mathcal{R}$ is bias independent on linear scales. We verified this by measuring $\mathcal{R}$ for both haloes and matter, and found the bias to be consistent with one at the 1--2\% level for all triangular configurations probed in this work ($r_{\mathrm{min}}=40\ h^{-1}\mathrm{Mpc}$ and $r_{\mathrm{max}}=120\ h^{-1}\mathrm{Mpc}$).
We also studied the effect of the summed neutrino mass on $\mathcal{R}$, and found that the amplitude of $\mathcal{R}$ decreases as the neutrino mass increases. This can be understood from the fact that the free streaming of neutrinos slows down the collapse of matter; since $\mathcal{R}$ acts as a proxy for the mean infall velocity between pairs in a triplet, it decreases as $M_\nu$ increases.
We used the Fisher-matrix formalism to quantify the information content in $\mathcal{R}$, with the necessary derivatives and covariance matrix measured directly from the Quijote suite of simulations. We utilised 15,000 realisations of the reference cosmology to compute the covariance matrix, and the partial derivatives were likewise computed directly from the simulations. We find that the constraints from $\mathcal{R}$ improve on those from the mean pairwise velocity by factors of 6.2--12.9. Compared with the power spectrum and the bispectrum, it still achieves improvements by factors of 2.3--5.7 and 1.8--3.2, respectively.
In summary, we have introduced a new statistic based on the mean radial relative velocities between pairs in a triplet and shown that it can act as a robust cosmological observable, offering a sizeable information gain in comparison to the mean radial pairwise velocity. One of the limitations of kSZ experiments is the optical depth degeneracy, and breaking it requires some form of external data set \citep[e.g. using fast radio bursts, as suggested in][]{Madhavacheril+19}. The new statistic thus provides a way forward in which cosmological parameters can be constrained using data from future kinetic Sunyaev--Zeldovich experiments alone, without being affected by the optical depth.
\begin{acknowledgements}
We would like to thank Nabila Aghanim and Francisco Villaescusa-Navarro for useful discussions. JK acknowledges funding for the ByoPiC project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2015-AdG 695561. We are thankful to the community for developing and maintaining open-source software packages extensively used in our work, namely \textsc{Cython} \citep{cython}, \textsc{Matplotlib} \citep{matplotlib} and \textsc{Numpy} \citep{numpy}.
\end{acknowledgements}
\setlength{\bibhang}{2.0em}
\setlength\labelwidth{0.0em}
\bibliographystyle{aa}
\section{Introduction}
In modern physics, one of the outstanding question is regarding the determination of the mass of neutrinos which has fundamental implications to both particle physics and cosmology. The neutrino oscillation experiments have established that the neutrinos should have a non-zero mass \citep[e.g.][]{Forero+14,Gonzalez-Garcia+16,Capozzi+17,Salas+17}. However these oscillation experiments are only sensitive to the mass splittings between the neutrino mass eigenstates, and to measure the absolute scale of the neutrino mass other experiments are required. Recently the Karlsruhe Tritium Neutrino (KATRIN) experiment has reported the first direct detection of sub-eV neutrino mass, with an upper limit on the `effective neutrino mass' of 0.8 eV \citep{KATRIN21}. This is based on the kinematic measurements through the observation of the energy spectrum of tritium $\beta$-decay, and is model independent.
However stronger constraints on the summed neutrino mass ($M_\nu$) can obtained through combining various cosmological probes, as the massive neutrinos leave an imprint on various cosmological observables \citep[e.g.][]{Wong11, LesgourguesPastor12}. But these constraints have an additional model dependence. Assuming the standard `lambda cold dark matter' ($\Lambda$CDM) model, one of the strongest constraint on the summed neutrino mass has been obtained by combining cosmic microwave background (CMB, \citetalias{Planck2018}), baryonic acoustic oscillation, and redshift-space galaxy clustering \citepalias{eBOSS20} to obtain an upper limit of $M_\nu < 0.102$ eV. However by considering extensions of the standard cosmological model, the upper limit becomes less stringent \cite[e.g.][]{Vagnozzi+18,Choudhury+20}. The community can expect the summed neutrino mass to be measured with increased precision from cosmological probes in the foreseeable future with the advent of next generation of CMB surveys [e.g. the Simons Observatory\footnote{\url{https://simonsobservatory.org/}} \cite[SO,][]{SO}, CMB-S4\footnote{\url{https://cmb-s4.org/}} \citep{CMBS4}], and the stage IV galaxy redshift surveys [e.g. the `Dark Energy Spectroscopic Instrument'\footnote{\url{https://www.desi.lbl.gov/}} \cite[DESI,][]{DESI16}, Euclid\footnote{\url{https://www.euclid-ec.org/}} \citep{Euclid11} and Nancy Grace Roman space telescope\footnote{\url{https://roman.gsfc.nasa.gov/}} \citep{WFIRST15}].
Currently the $M_\nu$ constraints from galaxy clustering are mainly obtained using two-point statistics, i.e. the power spectrum in Fourier space or the two-point correlation function in the configuration space. The impact of massive neutrinos on the two-point clustering statistics has been studied quite extensively using $N$-body simulations both in real- \citep[e.g.][]{Saito+08,Wong08,Castorina+15} and redshift-space \citep[e.g.][]{Navarro+18,Garcia+19}. However they are affected by the $M_\nu$-$\sigma_8$ degeneracy, and thus acts as a limitation in measuring the summed neutrino mass. The three-point clustering statistics in Fourier space (i.e. the bispectrum) has been shown to break this degeneracy \citep{Hahn+20,HahnVilla20}. In addition it has been shown that the three-point cluster statistics contains additional cosmological information compared to its two-point counterpart, and thus able to obtain substantial improvements on constraining other cosmological parameters also \citep[e.g.][]{YankelevichPorciani18, ChudaykinIvanov19, Gualdi+20, Agarwal+20, Samushia+21}. Currently there are efforts to understand the possibility of constraining summed neutrino mass using various summary statistics, among others, like the one-point probability distribution function of the matter density \citep[e.g.][]{Uhlemann+20} and the void size function \citep[e.g.][]{Bayer+21}.
Another avenue is to use velocity statistics like the mean pairwise velocity which provides a complementary view to the clustering information, either through the peculiar velocity surveys or the kinetic Sunyaev--Zeldovich (kSZ) effect. \cite{Mueller+15b} has shown that the mean pairwise velocity can utilised to constrain the summed neutrino mass, and \cite{Kuruvilla+20} has studied its interplay between the baryonic feedback and the summed neutrino mass effects at nonlinear separation scales. Furthermore the three-point mean relative velocity statistics is able to obtain stronger constraints on the summed neutrino mass when compared to the mean pairwise velocity \citep{KuruvillaAghanim21}. However the growth rate measurement from the kSZ effect of the CMB are degenerate with the optical depth \cite[e.g.][]{KeislerSchmidt13,Battaglia16,Flender+17} and this degeneracy acts as a limitation in measuring cosmological parameters \cite[e.g.][]{Smith+18}, which is commonly referred to as the optical depth degeneracy. It has been suggested that the use of fast radio bursts can be used to break this degeneracy \citep{Madhavacheril+19}. In this paper we will develop a new statistic which is independent of the optical depth using the first moment of the three-point relative velocities, i.e. the mean relative velocities between pairs in a triplet, and thus circumvent the problem of optical depth as a limitation factor in kSZ experiments. This forms one of the main goals of this paper.
The remainder of this work is structured as follows. In Sect.~\ref{sec:cosmoksz}, we describe the newly introduced summary statistic based on three-point mean relative velocities. The Quijote suite of simulation, which we use in this work is introduced briefly in Sect.~\ref{sec:sims}. The information content in the velocity statistics is studied using the Fisher-matrix formalism, which is briefly summarised in Sect.~\ref{sec:fisher}. Our results are presented in Sect.~\ref{sec:results}, and we finally conclude in Sect.~\ref{sec:conclusions}.
\section{Cosmology using the kSZ effect}
\label{sec:cosmoksz}
\subsection{Kinetic Sunyaev--Zeldovich effect}
\label{sec:kSZ}
As the CMB photons interact with the free electrons of hot ionised gas along the line-of-sight (LOS), the apparent CMB temperature changes. This is due to the fact that there is a transfer of energy from electrons to the resulting scattered photons as the electrons have a significantly higher kinetic energy than the photons. In this work, we focus on the secondary effect which is known as the kinetic Sunyaev--Zeldovich \citep[kSZ;][]{SZ72,SZ80} which arises if the scattering medium is moving relative to the Hubble flow. The fractional temperature fluctuation caused due to kSZ is
\begin{align}
\left.\frac{\Delta T(\hat{n})}{T_{\mathrm{cmb}}}\right|_{\mathrm{kSZ}} &= - \int \mathrm{d}l\ \sigma_{\mathrm{T}} \left(\frac{\bm{v}_{\mathrm{e}}\cdot\hat{n}}{c}\right) n_{\mathrm{e}} \, , \nonumber \\
& = -\tau \left(\frac{\bm{v}_{\mathrm{e}}\cdot\hat{n}}{c}\right) \, ,
\label{eq:ksz}
\end{align}
where $\sigma_{\mathrm{T}}$ is the Thomson scattering cross-section, $T_{\mathrm{cmb}}$ is the CMB temperature, $c$ is the speed of light, $\bm{v}_{\mathrm{e}}$ is the peculiar velocity of free electrons, and $n_{\mathrm{e}}$ is the physical free electron number density. The integral $\int \mathrm{d}l$ is computed along the LOS which is given by $\hat{n}$. The optical depth is defined as $\tau = \int \mathrm{d}l\ \sigma_{\mathrm{T}}\ n_{\mathrm{e}}$, i.e. the integrated electron density.
The kSZ signal detection is challenging because of its small amplitude and its spectrum being identical to that of primary CMB temperature fluctuations. One of the approaches to detect the kSZ signal is to employ the pairwise statistic \cite[e.g.][]{Hand+12,Carlos+15,Planck16,Schaan+16,Soergel+16,Bernardis+17,Li+17,Calafut+21,Chen+21}. There have been evidences of kSZ signal using other techniques also \cite[e.g.][]{Carlos+15,Schaan+16,Nguygen+20,Tanimura+21,Chaves-Montero+21, Schaan+21}.
In the case of kSZ pairwise signal the temperature acts as a proxy for the peculiar velocity, and as such it probes the optical depth weighted pairwise velocity \cite[e.g.][]{Hand+12,Soergel+18}
\begin{equation}
\frac{\langle \Delta T^{\mathrm{kSZ}}(r_{12})\rangle}{T_{\mathrm{cmb}}} \simeq -\tau \frac{\bar{w}(r_{12})}{c}\,,
\label{eq:ksz-pair}
\end{equation}
where $\langle \Delta T^{\mathrm{kSZ}}_{12} \rangle$ is the mean temperature difference between the objects `1' and `2', and $\bar{w}(r_{12})$ is the mean radial component of the pairwise velocity which can be defined in the single streaming regime as follows
\begin{align}
\langle \bm{w}_{12}|\bm{r}_{12} \rangle_\mathrm{p} = &\ \displaystyle \frac{\langle(1+\delta_{1})(1+\delta_{2}) (\bm{v}_{2}-\bm{v}_{1})\rangle}{\langle (1+\delta_{1})(1+\delta_{2})\rangle} \, ,
\label{eq:mean-radial-velocity-two}
\end{align}
where $\delta_i\equiv\delta(\bm{x}_i)$ is the density contrast, $\bm{v}_i\equiv \bm{v}(\bm{x}_i)\equiv \bm{u}(\bm{x}_i)/aH$ is the normalised peculiar velocity, $a$ is the scale factor, and $H$ is the Hubble constant. Using perturbation theory at leading order (LO), the mean radial matter pairwise velocity can be written as \citep[e.g.][]{Fisher95, Juszkiewicz+98, ReidWhite11}
\begin{equation}
\langle \bm{w}_{12}|\bm{r}_{12} \rangle_{\mathrm p} = \bar{w}(r_{12})\,\hat{\bm{r}}_{12}
\simeq \displaystyle - \frac{f}{\pi^2} \, \hat{\bm{r}}_{12} \int_0^{\infty} k \,j_1(k r_{12})\,P(k)\, \mathrm{d} k \, ,
\label{eq:mean-radial-velocity-two-theory}
\end{equation}
where $\hat{\bm{r}}_{12}$ is the unit vector along the pair `12', the subscript p implies that the averages are computed over all pairs with separation $r_{12}$, $P(k)$ denotes the linear matter power spectrum, and $j_1(x) = \sin (x)/x^2- \cos (x)/x$. It should be noted that Eq.~(\ref{eq:ksz-pair}) assumes that there is no correlation between optical depth and velocity field. Following Eqs.~(\ref{eq:ksz-pair}) and (\ref{eq:mean-radial-velocity-two-theory}), we can see that
\begin{equation}
\Delta T^{\mathrm{kSZ}} \propto \tau f \sigma^2_8 \,
\end{equation}
and thus implying that the growth rate measurement from the pairwise kSZ is perfectly degenerate with optical depth \citep{KeislerSchmidt13}. Here we have presented the argument for the matter, but in observations there will be an additional bias dependence entering in the above equation.
\subsection{New statistics based on mean relative velocity between pairs in a triplet}
\label{sec:velsta}
In the previous section, we mentioned about the mean relative velocity between two tracers (i.e between a pair) or the mean pairwise velocity. However this can be generalised to the case of three tracers, in which we can consider two mean relative velocity between pairs in a triplet with separations $\triangle_{123}=(r_{12},r_{23},r_{31})$:
(i) $\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t}$, and (ii)
$\langle \bm{w}_{23}|\triangle_{123} \rangle_\mathrm{t}$.
The subscript t here implies that the averages are computed over all triplets with separations $(r_{12},r_{23},r_{31})$. Similar to Eq.~(\ref{eq:mean-radial-velocity-two}) in the single stream fluid approximation, the mean relative velocity between pair 12 in a triplet can be written as \citep{KuruvillaPorciani20}
\begin{align}
\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t} = &\ \displaystyle \frac{\langle(1+\delta_{1})(1+\delta_{2}) (1+\delta_{3})(\bm{v}_{2}-\bm{v}_{1})\rangle}{\langle (1+\delta_{1})(1+\delta_{2}) (1+\delta_{3})\rangle} \nonumber \\
\simeq &\ \langle \delta_{1}\bm{v}_{2} \rangle - \langle \delta_{2} \bm{v}_{1} \rangle + \langle \delta_{3}\bm{v}_{2} \rangle - \langle \delta_{3} \bm{v}_{1} \rangle\nonumber \\
=&\ \bar{w}(r_{12})\,\hat{\bm{r}}_{12}-\frac{1}{2}\left[
\bar{w}(r_{23})\,\hat{\bm{r}}_{23}+\bar{w}(r_{31})\,\hat{\bm{r}}_{31}\right]
\, .
\label{eq:mean-radial-velocity-three}
\end{align}
The three-point mean relative velocity statistics can be composed into both its radial ($R_{ij}$) and transverse ($T_{ij}$) component in the plane of the triangle defined by the particles. This is in contrast to the mean pairwise velocity for which the transverse component is zero. In the case of $ \langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t}$, it is as follows
\begin{align}
\langle \bm{w}_{12}|\triangle_{123} \rangle_\mathrm{t} & = \langle \bm{w}_{12}\cdot \hat{\bm{r}}_{12} |\triangle_{123} \rangle_{\mathrm t}\,\hat{\bm{r}}_{12} +
\langle \bm{w}_{12}\cdot \hat{\bm{t}}|\triangle_{123} \rangle_{\mathrm t}\, \hat{\bm{t}}\nonumber\\
& =R_{12}(\triangle_{123})\,\hat{\bm{r}}_{12}+T_{12}(\triangle_{123})\,\hat{\bm{t}}\;,
\label{eq:decomposition}
\end{align}
where $\hat{\bm{t}}=(\hat{\bm{r}}_{23}-\cos\chi\,\hat{\bm{r}}_{12})/\sin\chi$, $\hat{\bm{r}}_{23}$ is the unit vector along the pair `23', and $\chi= \arccos(\hat{\bm r}_{12} \cdot \hat{\bm r}_{23})$. In this work, we make use of only the radial component, and for the pair 12 in the triplet it can be written as
\begin{align}
R_{12}(\triangle_{123})=
\bar{w}(r_{12})&-\frac{1}{2}\Bigg[
\bar{w}(r_{23})\,\cos \chi \nonumber \\
& -\bar{w}(r_{31})\,\frac{r_{12}+r_{23}\cos\chi}{\sqrt{r_{12}^2+r_{23}^2+2r_{12}r_{23}\cos\chi}}\Bigg]\;.
\label{eq:R12_triangle}
\end{align}
\noindent Similarly, the mean radial relative velocity between the pair 23 in $\triangle_{123}$ can be written as
\begin{align}
R_{23}(\triangle_{123})=
\bar{w}(r_{23})&-\frac{1}{2}\Bigg[
\bar{w}(r_{12})\,\cos \chi \nonumber \\
& -\bar{w}(r_{31})\,\frac{r_{23}+r_{12}\cos\chi}{\sqrt{r_{12}^2+r_{23}^2+2r_{12}r_{23}\cos\chi}}\Bigg]\;.
\label{eq:R23_triangle}
\end{align}
\noindent Similar to Eq.~(\ref{eq:ksz-pair}), the three-point mean relative temperature difference from kSZ can be written down as
\begin{equation}
\frac{\Delta T^{\mathrm{kSZ}}_{ij}(\triangle_{123})}{T_{\mathrm{cmb}}} \simeq -\tau \frac{R^{\mathrm{h}}_{ij}(\triangle_{123})}{c}\,,
\label{eq:ksz-triplet}
\end{equation}
where $R^{\mathrm{h}}_{ij}(\triangle_{123})$ respresents the three-point mean relative velocity statistics for haloes (biased tracers), and to first approximation it can be written down as linear bias term times $R_{ij}(\triangle_{123})$ \citep{KuruvillaAghanim21}. Based on the radial mean relative velocities between pairs in a triplet, we can introduce a new ratio statistic
\begin{align}
\mathcal{R}(\triangle_{123}) &= \frac{\langle \bm{w}_{12}\cdot \hat{\bm{r}}_{12} |\triangle_{123} \rangle_{\mathrm t}}{\langle \bm{w}_{23}\cdot \hat{\bm{r}}_{23} |\triangle_{123} \rangle_{\mathrm t}} \, , \label{eq:ratio-new-statistics}
\end{align}
which tells us how quickly the average infall velocity of pair 12 is in comparison to the average infall velocity of pair 23 for a specific triangular configuration $\triangle_{123}$. On linear scales, using perturbation theory at LO, $\mathcal{R}(\triangle_{123})$ can be written as the ratio between Eqs.~(\ref{eq:R12_triangle}) and (\ref{eq:R23_triangle})
\begin{align}
\mathcal{R}(\triangle_{123})&=\frac{R_{12}(\triangle_{123})}{R_{23}(\triangle_{123})} = \frac{R^{\mathrm{h}}_{12}(\triangle_{123})}{R^{\mathrm{h}}_{23}(\triangle_{123})} \equiv \frac{\Delta T^{\mathrm{kSZ}}_{12}(\triangle_{123})}{\Delta T^{\mathrm{kSZ}}_{23}(\triangle_{123})} \, .
\label{eq:ratio-new-statistics-lo}
\end{align}
The above introduced statistic is thus independent of optical depth, $\sigma_8$ and linear bias. In the following sections, we take a detailed look at whether the Ansatz of $\sigma_8$ and linear bias independence holds up. Additionally, we study the cosmological information content in $\mathcal{R}(\triangle_{123})$.
\section{Data and analysis}
\subsection{Quijote simulation suite}
\label{sec:sims}
In this work, we make use of the Quijote\footnote{\url{https://quijote-simulations.readthedocs.io/}} \citep{Quijote20} suite of simulations, which was run using the tree-PM code \textsc{gadget}-3 \citep{Springel05}. Spanning more than a few thousand cosmological models, it contains 44,100 \textit{N}-body simulations. These simulations have a box length of $1\,h^{-1}\,\mathrm{Gpc}$, and tracks the evolution of $512^3$ cold dark matter (CDM) particles. The initial conditions (ICs) were generated at redshift $z=127$ using the second-order Lagrangian perturbation theory. The fiducial cosmological parameters (assuming zero summed neutrino mass) for the simulation is as follows: the total matter density: $\Omega_{\mathrm{m}}=0.3175$, the baryonic matter density: $\Omega_{\mathrm{b}}=0.049$, the primordial spectral index of the density perturbations: $n_{\mathrm{s}}=0.9624$, the amplitude of the linear power spectrum on the scale of $8\ h^{-1}\mathrm{Mpc}$: $\sigma_8=0.834$, and the present-day value of the Hubble constant: $H_0\equiv H(z=0)=100\, h\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$ with $h=0.6711$. This is broadly consistent with the Planck 2018 result (\citetalias{Planck2018}). The suite consists of 15,000 random realisations for the fiducial cosmology. For the purpose of calculating derivatives, Quijote provides a set of 500 random realisations wherein only one parameter is varied with respect to the the fiducial cosmology. The variations are as follows: \{$\Omega^+_{\mathrm{m}}, \Omega^-_{\mathrm{m}}, \Omega^+_{\mathrm{b}}, \Omega^-_{\mathrm{b}}, n^+_{\mathrm{s}}, n^-_{\mathrm{s}}, \sigma^+_{8}, \sigma^-_{8}\} = \{0.3275, 0.3075, 0.051, 0.047, 0.9824, 0.9424, 0.849, 0.819\}$ and $\{h^+, h^-\} = \{0.6911,$ $0.6511\}$.
In addition, the suite provides 500 realisations for each of three massive-neutrino cosmologies, with summed neutrino masses of 0.1, 0.2, and 0.4 eV. The initial conditions for these simulations were produced using the Zeldovich approximation (ZA), and they contain $512^3$ neutrino particles in addition to the CDM particles. To compute the numerical derivatives with respect to the neutrino mass, the Quijote suite provides an additional 500 random realisations for the fiducial cosmology in which the ICs were also generated using the ZA.
In this work, we use halo catalogue data from 22,000 \textit{N}-body simulations of the Quijote suite. These haloes were identified using a friends-of-friends algorithm. We selected haloes with mass $M_\mathrm{h} > 5 \times 10^{13}\ h^{-1}\mathrm{M}_\odot$ (corresponding to groups and clusters of galaxies) at $z=0$, which gives a mean number density of $\bar{n} \sim 0.92 \times 10^{-4}\,h^3\,\mathrm{Mpc}^{-3}$ for the reference simulations. Additionally, in the case of (i) the fiducial cosmology and (ii) the variations in $\sigma_8$ (both $\sigma^+_{8}$ and $\sigma^-_{8}$), we use 30 realisations of the particle data (randomly down-sampled to $100^3$ particles) to compute $\mathcal{R}(\triangle_{123})$.
\subsection{Fisher-matrix formalism}
\label{sec:fisher}
To quantify the error estimates on the cosmological parameters, we use the Fisher-matrix formalism, defined as \citep[e.g.][]{Tegmark+1997, Heavens09, Verde10}
\begin{equation}
F_{\alpha \beta} = \left\langle -\frac{\partial^2\ln{\mathcal{L}}}{\partial \theta_\alpha \partial \theta_\beta} \right\rangle \, ,
\label{eq:fisher_definition}
\end{equation}
where $\theta_\alpha$ and $\theta_\beta$ are two of the cosmological model parameters, and $\mathcal{L}$ is the likelihood of the data given a model. Assuming a Gaussian likelihood, we can write the Fisher information matrix as
\begin{equation}
F_{\alpha \beta} = \frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial \theta_\alpha} \cdot \hat{\mathbf{C}}^{-1} \cdot \frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}^\mathsf{T}}{\partial \theta_\beta} \, ,
\label{eq:fisher_reduced}
\end{equation}
where $\boldsymbol{\mathcal{R}}$ represents the data vector for the ratio statistic introduced in Eq.~(\ref{eq:ratio-new-statistics}), and $\hat{\mathbf{C}}^{-1}$ is the precision matrix (i.e. the inverse covariance matrix). It should be noted that in this definition of $F_{\alpha\beta}$ we have neglected a term which arises from the cosmology dependence of the covariance matrix; this correction has been shown to have a negligible effect \citep{Kodwani+19}. We compute the covariance matrix of $\mathcal{R}$ directly from the simulations as follows:
\begin{equation}
\widetilde{\mathbf{C}} = \frac{1}{N_{\mathrm{sims}}-1}\sum_{i=1}^{N_{\mathrm{sims}}} \left(\boldsymbol{\mathcal{R}}_i-\overline{\boldsymbol{\mathcal{R}}}\right)\left(\boldsymbol{\mathcal{R}}_i-\overline{\boldsymbol{\mathcal{R}}}\right)^\mathsf{T} \, ,
\label{eq:covariancematrix}
\end{equation}
where $\overline{\boldsymbol{\mathcal{R}}} = N_{\mathrm{sims}}^{-1}\sum_{i=1}^{N_{\mathrm{sims}}} \boldsymbol{\mathcal{R}}_i$, and $N_{\mathrm{sims}}$ denotes the total number of simulations used to compute the covariance matrix (in this work $N_{\mathrm{sims}}=15,000$). While Eq.~(\ref{eq:covariancematrix}) gives an unbiased estimate of the covariance matrix, its inversion leads to a biased estimate of the precision matrix. This can, however, be corrected statistically by applying a multiplicative factor to the precision matrix \citep{Kaufmann67,Anderson03,Hartlap+07}:
\begin{equation}
\hat{\mathbf{C}}^{-1} = \frac{N_{\mathrm{sims}}-N_{\mathrm{bins}}-2}{N_{\mathrm{sims}}-1}\ \widetilde{\mathbf{C}}^{-1} \, ,
\end{equation}
where $N_{\mathrm{bins}}$ is the number of bins in $\mathcal{R}$.
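As an illustration of this step, the following minimal NumPy sketch computes the sample covariance of Eq.~(\ref{eq:covariancematrix}) and the corrected precision matrix; the input name {\tt R\_sims} (one measured data vector per realisation) is a hypothetical choice of ours.
\begin{verbatim}
import numpy as np

def precision_matrix(R_sims):
    # R_sims: array of shape (N_sims, N_bins), one data vector per realisation
    n_sims, n_bins = R_sims.shape
    diff = R_sims - R_sims.mean(axis=0)
    cov = diff.T @ diff / (n_sims - 1)              # unbiased sample covariance
    hartlap = (n_sims - n_bins - 2) / (n_sims - 1)  # Hartlap et al. (2007) factor
    return hartlap * np.linalg.inv(cov)             # corrected precision matrix
\end{verbatim}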
We numerically compute the derivatives required to construct the Fisher information matrix using the Quijote suite, which provides 500 realisations in which only one cosmological parameter is varied while the rest are fixed at their fiducial values. Thus, when the model parameter is one of $\theta \in \{\Omega_{\mathrm{m}}, \Omega_{\mathrm{b}}, h, n_{\mathrm{s}}, \sigma_{8}\}$, we use the central difference approximation to compute the derivative numerically:
\begin{equation}
\frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial \theta} \simeq \frac{\boldsymbol{\mathcal{R}}(\theta+\mathrm{d}\theta)-\boldsymbol{\mathcal{R}}(\theta-\mathrm{d}\theta)}{2\ \mathrm{d}\theta} \, .
\end{equation}
In the case of the neutrino mass, the fiducial value is 0.0 eV and negative values are not allowed, hence we obtain the partial derivative using the second-order one-sided difference
\begin{equation}
\frac{\partial \mkern 1mu \boldsymbol{\mathcal{R}}}{\partial M_\nu} \simeq \frac{-\boldsymbol{\mathcal{R}}(M_\nu=0.4)+4\boldsymbol{\mathcal{R}}(M_\nu=0.2) - 3\boldsymbol{\mathcal{R}}(M_\nu=0)}{0.4} \, .
\end{equation}
Thus we utilise two sets of massive-neutrino simulations from Quijote, with $M_\nu = 0.2$ eV and $M_\nu = 0.4$ eV, for the Fisher information matrix. However, the initial conditions of the massive-neutrino simulations were generated using the ZA. To compute the partial derivative consistently, we therefore use the additional 500 realisations of the fiducial cosmology ($M_\nu = 0$ eV) in which the initial conditions were also generated using the ZA.
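Putting the pieces together, a hedged sketch of the derivative and Fisher-matrix construction is given below; the names {\tt R\_plus}, {\tt R\_minus}, etc. are hypothetical labels for the mean data vectors measured from the corresponding simulation sets.
\begin{verbatim}
import numpy as np

def deriv_central(R_plus, R_minus, dtheta):
    # central difference for Omega_m, Omega_b, h, n_s, sigma_8
    return (R_plus - R_minus) / (2.0 * dtheta)

def deriv_mnu(R_04, R_02, R_00):
    # one-sided second-order stencil with nodes at M_nu = 0, 0.2, 0.4 eV
    return (-R_04 + 4.0 * R_02 - 3.0 * R_00) / 0.4

def fisher(derivs, C_inv):
    # derivs: list of dR/dtheta vectors; returns F_ab = dR_a . C^-1 . dR_b
    D = np.stack(derivs)
    return D @ C_inv @ D.T

# marginalised 1-sigma errors: np.sqrt(np.diag(np.linalg.inv(F)))
\end{verbatim}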
\section{Results}
\label{sec:results}
\begin{figure}
\centering
\includegraphics[scale=0.56]{ratio_theory_fidelity}
\caption{Top: comparison of the theoretical prediction (orange dashed line) for $\mathcal{R}(\triangle_{123})$, using perturbation theory at LO, against the direct measurement (blue solid line) from the halo catalogues of the Quijote suite of simulations. Bottom: residuals showing the deviation of the theoretical prediction from the direct measurement. The blue shaded region denotes the 5\% region. The triangle configurations go from the smallest, $\{(40,45), (40,45), (40,45)\} \,h^{-1}\mathrm{Mpc}$, to the largest, $\{(115,120), (115,120), (115,120)\} \,h^{-1}\mathrm{Mpc}$.}
\label{fig:theory}
\end{figure}
In Fig.~\ref{fig:theory} we show the direct measurement of $\mathcal{R}(\triangle_{123})$ from the 15,000 reference halo catalogues (solid blue line) and compare it against the LO prediction (dashed orange line). We consider all triangular configurations with $r_\mathrm{min} \in (40, 45)$ and $r_\mathrm{max} \in (115, 120)$, such that $r_{12} \geq r_{23} \geq r_{31}$. All separation scales have a bin width of 5 $h^{-1}\mathrm{Mpc}$. This corresponds to a total of 766 triangular configurations, spanning from configuration `0', the smallest (i.e. $\triangle_{123} \in \{(40,45), (40,45), (40,45)\} \,h^{-1}\mathrm{Mpc}$), to configuration `765', the largest ($\triangle_{123} \in \{(115,120), (115,120), (115,120)\} \,h^{-1}\mathrm{Mpc}$). One can see from Eqs.~(\ref{eq:R12_triangle}) and (\ref{eq:R23_triangle}) that the mean three-point relative velocities $R_{12}$ and $R_{23}$ are equal when $r_{12} = r_{23}$, irrespective of the length of the third side. This is directly visible in Fig.~\ref{fig:theory}, where $\mathcal{R}=1$ whenever this condition is met. The theoretical prediction is overall accurate to within 4--5\% for configurations with all separation lengths greater than 55 $h^{-1} \mathrm{Mpc}$. As expected, the fidelity of the LO prediction decreases with decreasing separation length, with a maximum deviation of about 27\% for the triangular configuration $\{(100,105),(50,55),(50,55)\}$ $h^{-1}\mathrm{Mpc}$. This motivates us to measure $\mathcal{R}$ directly from the simulations to compute the derivatives for the Fisher information matrix.
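The bookkeeping of the 766 configurations can be reproduced with the short sketch below. The closure condition evaluated at the bin centres is our inference (the exact criterion is not stated here), but it recovers the quoted count.
\begin{verbatim}
import itertools
import numpy as np

edges = np.arange(40, 125, 5)                 # 16 bins of width 5 h^-1 Mpc
centres = 0.5 * (edges[:-1] + edges[1:])

# non-increasing triples (r12 >= r23 >= r31) satisfying triangle closure
configs = [(centres[k], centres[j], centres[i])
           for i, j, k in itertools.combinations_with_replacement(range(16), 3)
           if centres[i] + centres[j] > centres[k]]
print(len(configs))                           # 766
\end{verbatim}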
\begin{figure}
\centering
\includegraphics[scale=0.58]{ratio_sigma8_independence}
\caption{Ratio of $\mathcal{R}$ at $\sigma_8^+=0.849$ to $\mathcal{R}$ at $\sigma_8^-=0.819$. The (blue) solid dots and the (orange) error bars represent the mean and the relative error on the mean, respectively.}
\label{fig:s8_independence}
\end{figure}
As mentioned earlier in Sect.~\ref{sec:velsta}, $\mathcal{R}(\triangle_{123})$ should additionally be unaffected by variations in $\sigma_8$. To demonstrate this, we compute the ratio of $\mathcal{R}(\triangle_{123})$ at $\sigma^+_8 = 0.849$ to that at $\sigma^-_8 = 0.819$ using dark matter particles from 30 realisations, and show it in Fig.~\ref{fig:s8_independence}. The (blue) dots represent the mean of the measurement, while the scatter is shown by the (orange) error bars. We can thus conclude that $\mathcal{R}$ is independent of $\sigma_8$. Similarly, in Fig.~\ref{fig:halo_bias_independence} we examine the bias dependence of $\mathcal{R}(\triangle_{123})$ (black solid line) and of $R_{12}$ (blue dashed line), where we show the ratio between the halo and matter components for each summary statistic. The bias term for the mean relative velocity between pair `23' in a triplet is similar to that of $R^{\mathrm{h}}_{12}$, and is thus not shown in the figure. As reported in \cite{KuruvillaAghanim21}, for these triangular configurations (assuming a scale-independent bias) one obtains a bias factor of around 1.85 for $R_{12}$ and $R_{23}$. For the purpose of computing these ratios, we used the mean relative velocity information for matter from 30 realisations of the dark-matter-only simulations, and for haloes we utilised the 15,000 catalogues. The shaded regions represent the $1\sigma$ errors from the propagation of uncertainties of the mean relative velocity statistics for matter and haloes. One can see that on large separation scales the newly introduced statistic (black solid line) is bias independent, while for the smallest triangle configurations it shows only a very weak bias dependence. This supports the Ansatz presented in Eq.~(\ref{eq:ratio-new-statistics-lo}), where perturbation theory at LO renders $\mathcal{R}$ bias independent on linear scales. For all triangular configurations considered in this work the bias is found to be equal to one within 1--2\%, and hence for the purpose of the Fisher-matrix formalism we consider $\mathcal{R}$ to be independent of a (constant) linear bias term.
\begin{figure}
\centering
\includegraphics[scale=0.58]{ratio_bias_independence}
\caption{The dashed (blue) line shows the bias for $R_{12}$, i.e. the mean radial relative velocity between pairs 1 and 2 in a triplet, while the solid (black) line shows the (weak) bias dependence of the ratio statistic $\mathcal{R}$, which is equal to one within 1--2$\%$ for all triangular configurations considered here.}
\label{fig:halo_bias_independence}
\end{figure}
Since $\mathcal{R}$ is found to be independent of $\sigma_8$ at the scales we are probing (i.e. $r_{\mathrm{min}} \geq 40\ h^{-1}\mathrm{Mpc}$), the statistic is in the unique position of being unaffected by the degeneracy in the $M_\nu$--$\sigma_8$ parameter plane. In Fig.~\ref{fig:halo_neutrino_mass} we check the impact of the summed neutrino mass on $\mathcal{R}$ utilising the three non-zero neutrino masses. The solid (blue) line shows the impact of $M_\nu=0.1$ eV on $\mathcal{R}$ when compared to the zero neutrino mass cosmology. Similarly, the dashed (orange) and dash-dotted (green) lines show the impact of $M_\nu=0.2$ and $M_\nu=0.4$ eV, respectively. As can be seen, as the neutrino mass increases the infall velocity between a pair decreases in most of the triangular configurations. This is related to the free streaming of neutrinos as a result of their large thermal velocities. Below the free-streaming scale neutrinos do not cluster, which slows down the collapse of matter in general. This leads to an overall reduction in the growth of density perturbations below the free-streaming scale, and thus causes a suppression of power at large Fourier modes in the matter power spectrum \citep[e.g.][]{Wong11,LesgourguesPastor12}. Across all the configurations we measured, the maximal suppression of $\mathcal{R}$ is seen for $M_\nu=0.4$ eV and the triangular configuration $\{(100,105),(50,55),(50,55)\} \ h^{-1}\mathrm{Mpc}$, when compared to the zero neutrino mass cosmology.
\begin{figure}
\centering
\includegraphics[scale=0.56]{ratio_neutrino_mass}
\caption{Effect of the summed neutrino mass on $\mathcal{R}(\triangle_{123})$, as measured directly from the simulations, when compared to the zero neutrino mass fiducial cosmology. The summed neutrino masses considered here are denoted in the legend, in units of eV.}
\label{fig:halo_neutrino_mass}
\end{figure}
\subsection{Cosmological parameters}
\label{sec:cosmoparams}
\begin{figure}
\centering
\includegraphics[scale=0.62]{corr_ratio_ratio}
\caption{The correlation matrix (i.e. the covariance matrix of $\mathcal{R}$ normalised by its diagonal elements) computed using 15,000 realisations of the Quijote simulations. The triangle configurations are the same as in Fig.~\ref{fig:theory}.}
\label{fig:correlation_matrix}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.53]{ratio_fisher_results}
\caption{Joint 68.3\% (dark shaded contour) and 95.4\% (light shaded contour) credible region for all the pairs of cosmological model parameters at $z=0$.}
\label{fig:fisher_results}
\end{figure*}
We now turn our attention to the information content of the new ratio statistic $\mathcal{R}$ and its viability in constraining the cosmological model. As discussed in Sect.~\ref{sec:fisher}, we achieve this using the Fisher information matrix, whose ingredients are the partial derivatives of $\mathcal{R}$ with respect to the cosmological model parameters and the covariance matrix ($\mathbf{C}$). We showcase the correlation matrix, given as $\mathrm{C}_{ij}/\sqrt{\mathrm{C}_{ii}\mathrm{C}_{jj}}$, in Fig.~\ref{fig:correlation_matrix}, where the covariance matrix is measured directly from the 15,000 realisations. We notice the presence of non-diagonal terms: similar triangular configurations are positively correlated, while configurations that differ substantially tend to be negatively correlated.
We now quantify the information content of $\mathcal{R}$ using the Fisher information matrix formalism defined in Sect.~\ref{sec:fisher}. As mentioned, both of its ingredients are computed directly from the simulations. Since the bias dependence of $\mathcal{R}$ was shown to be very weak even at small scales ($\sim$ 40--50 $h^{-1}\mathrm{Mpc}$) and absent at large scales ($\geq 80$ $h^{-1}\mathrm{Mpc}$), we do not include a bias parameter in the Fisher-matrix formalism. Thus the model parameters are $\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, and $M_\nu$. We show the results of our Fisher forecast in Fig.~\ref{fig:fisher_results}, where the dark and light shaded contours denote the 68.3\% and 95.4\% joint credible regions for all pairs of model parameters, respectively. The $1\sigma$ marginalised error for a model parameter $\theta_{\alpha}$ is given by $\sqrt{F_{\alpha\alpha}^{-1}}$; for the parameters considered we obtain $\{\Omega_{\mathrm{m}}, \Omega_{\mathrm{b}}, h, n_{\mathrm{s}},M_\nu\} \equiv \{0.0158, 0.0041, 0.0391, 0.0394, 0.1175\}$.
We compare these constraints with those obtained from the mean pairwise velocity and from the mean relative velocities between pairs in a triplet, as reported in \cite{KuruvillaAghanim21}. For a fair comparison, we use the constraints obtained from these statistics with $r_{\mathrm{min}} = 40\ h^{-1}\mathrm{Mpc}$. $\mathcal{R}$ presents a factor of improvement of \{6.2, 7.6, 9.8, 12.9, 8.87\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively, over the mean pairwise velocity. However, when compared against the constraints from $R^{\mathrm{h}}_{12}(\triangle_{123})+R^{\mathrm{h}}_{23}(\triangle_{123})$ (i.e. the combined mean three-point relative velocities), the constraining power of $\mathcal{R}$ shrinks by a factor of 1.34--1.44 for all the model parameters. This could also be due to the fact that the data vector of $R^{\mathrm{h}}_{12}+R^{\mathrm{h}}_{23}$ is twice as long as that of $\mathcal{R}$. A similar shrinkage in constraining power was seen when considering $R^{\mathrm{h}}_{12}$ and $R^{\mathrm{h}}_{23}$ separately rather than in combination in {\protect\NoHyper\citet{KuruvillaAghanim21}\protect\endNoHyper}.
It is informative to ask how the constraints from $\mathcal{R}$ fare against those obtained from clustering statistics. To answer this question, we compare the constraints obtained in this work with those obtained from the redshift-space halo power spectrum and the halo bispectrum in \cite{Hahn+20}. Compared against the constraints from the redshift-space power spectrum multipoles for $k_{\mathrm{max}}=0.2\,h\,\mathrm{Mpc}^{-1}$ (which is closest to the $r_{\mathrm{min}}$ considered in this work), $\mathcal{R}$ obtains a factor of improvement of \{2.3, 3.6, 4.5, 5.4, 5.7\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively. This reduces slightly to \{1.4, 2.9, 3.1, 3.3, 2.5\} when comparing against the power spectrum constraints for $k_{\mathrm{max}}=0.5\,h\,\mathrm{Mpc}^{-1}$. The improvement over the power spectrum (a two-point summary statistic) is not surprising, as $\mathcal{R}$ is based on the first moment of the three-point relative velocity statistics. It is hence interesting to compare against the constraints from the redshift-space bispectrum monopole: when using all triangular configurations with $k_{\mathrm{max}}=0.2\,h\,\mathrm{Mpc}^{-1}$, $\mathcal{R}$ still obtains a factor of improvement of \{1.8, 2.9, 3.2, 3.1, 1.8\} for \{$\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$, $M_\nu$\}, respectively. When considering the larger set of triangular configurations for the bispectrum with $k_{\mathrm{max}}=0.5\,h\,\mathrm{Mpc}^{-1}$, however, $\mathcal{R}$ has less constraining power, by a factor of \{0.7, 1.0, 1.0, 0.9, 0.4\}. This is not surprising, as the bispectrum monopole in this case probes further into the nonlinear scales, while $\mathcal{R}$ was analysed for triangular configurations with separation scales of $40\,h^{-1}\mathrm{Mpc}$ and above.
As it currently stands, one of the limitations to applying the new statistic $\mathcal{R}(\triangle_{123})$ directly to observational data is the lack of an estimator for the three-point mean radial relative velocities in terms of line-of-sight (LOS) velocities, as it is the LOS velocity that can be measured from either peculiar velocity surveys or kSZ experiments. For the mean pairwise velocity, \cite{Ferreira+99} have shown how to construct such an estimator; we will construct analogous estimators for $R_{ij}(\triangle_{123})$ and $\mathcal{R}(\triangle_{123})$ in future work. For the mean pairwise velocities, an alternative estimator also exists that uses each tracer's transverse velocity component \citep{Yasini+19}. The three-point mean relative velocity, on the other hand, has a non-vanishing mean transverse component in the plane of the triangle (unlike the pairwise velocity, whose transverse component vanishes), and we will therefore also construct an estimator for this component in future work. Furthermore, the analysis presented here took only the radial component into consideration; a combination of both radial and transverse components of the three-point mean relative velocity could further improve the prospects of constraining the cosmological model accurately.
Another caveat, which we have not discussed in this work, is the mass dependence of the optical depth parameter, which has been shown to increase with halo mass \citep[e.g.][]{Battaglia16}. We have treated the optical depth as an averaged quantity [as shown in Eqs.~(\ref{eq:ksz-pair}) and (\ref{eq:ksz-triplet})]. We do not, however, expect the mass dependence to affect $\mathcal{R}(\triangle_{123})$, as long as the ratio is taken within the same mass bin. On the other hand, assuming a fixed cosmology, one could consider a scenario in Eq.~(\ref{eq:ratio-new-statistics}) where $R^{\mathrm{h}}_{23}$ is measured in a fixed high-halo-mass bin while $R^{\mathrm{h}}_{12}$ is measured in various mass bins. This could potentially allow a measurement of the (scaled) mass dependence of the optical depth (degenerate with the bias factor) directly from kSZ experiments.
\section{Conclusions}
\label{sec:conclusions}
The determination of the neutrino mass using cosmological observables has become one of the main goals of forthcoming cosmological surveys. However, two-point statistics, whether clustering or relative velocity statistics, are in general affected by the $M_\nu$--$\sigma_8$ degeneracy, which limits the potential of constraining the neutrino mass from cosmology. With regard to relative velocities, \cite{KuruvillaPorciani20} introduced the three-point mean relative velocity statistics (i.e. the mean relative velocities between pairs in a triplet), and subsequently \cite{KuruvillaAghanim21} quantified their cosmological information content. They were found to offer a substantial information gain when compared to two-point statistics (both the power spectrum and the mean pairwise velocity), while being competitive with the constraints from the bispectrum.
In this paper, we extended the applications of the mean three-point relative velocity statistics and introduced a new ratio statistic $\mathcal{R}$ [Eq.~(\ref{eq:ratio-new-statistics})] which is unaffected by $\sigma_8$. This enables constraining the neutrino mass, in addition to the other cosmological parameters, independently of $\sigma_8$. Moreover, in the context of kSZ experiments this statistic is independent of the optical depth, hence circumventing the optical depth degeneracy which currently acts as a limiting factor in the determination of cosmological parameters from kSZ experiments. Furthermore, the leading-order perturbation theory prediction suggests that $\mathcal{R}$ is bias independent on linear scales. We verified this by measuring $\mathcal{R}$ for both haloes and matter, and found the bias to be consistent with one at the 1--2\% level for all triangular configurations probed in this work ($r_{\mathrm{min}}=40\ h^{-1}\mathrm{Mpc}$ and $r_{\mathrm{max}}=120\ h^{-1}\mathrm{Mpc}$).
We also studied the effect of the summed neutrino mass on $\mathcal{R}$ and found that, as the neutrino mass increases, the amplitude of $\mathcal{R}$ decreases. This can be understood from the fact that the free streaming of neutrinos slows down the collapse of matter; since $\mathcal{R}$ acts as a proxy for the mean infall velocity between pairs in a triplet, $\mathcal{R}$ decreases as $M_\nu$ increases.
We used the Fisher-matrix formalism to quantify the information content of $\mathcal{R}$, with the necessary derivatives and covariance matrix measured directly from the Quijote suite of simulations, utilising 15,000 realisations of the reference cosmology for the covariance matrix. We find that the constraints obtained from $\mathcal{R}$ show a factor of 6.2--12.9 improvement when compared against those obtained from the mean pairwise velocity. When compared against the power spectrum and bispectrum, $\mathcal{R}$ still achieves an improvement in the constraints by a factor of 2.3--5.7 and 1.8--3.2, respectively.
In summary, we have introduced a new statistic based on the mean radial relative velocities between pairs in a triplet and shown that it can act as a robust cosmological observable, yielding a sizeable information gain in comparison to the mean radial pairwise velocity. One of the limitations of kSZ experiments is the optical depth degeneracy, and breaking this degeneracy requires some form of external data set \citep[e.g. using fast radio bursts, as suggested in][]{Madhavacheril+19}.
The new statistic thus provides a way forward in which the cosmological parameters can be constrained using data from future kinetic Sunyaev--Zeldovich experiments alone, without being affected by the optical depth parameter.
\begin{acknowledgements}
We would like to thank Nabila Aghanim and Francisco Villaescusa-Navarro for useful discussions. JK acknowledges funding for the ByoPiC project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2015-AdG 695561. We are thankful to the community for developing and maintaining open-source software packages extensively used in our work, namely \textsc{Cython} \citep{cython}, \textsc{Matplotlib} \citep{matplotlib} and \textsc{Numpy} \citep{numpy}.
\end{acknowledgements}
\setlength{\bibhang}{2.0em}
\setlength\labelwidth{0.0em}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
The large-scale structure (LSS) of the universe has become a powerful tool for research in cosmology, providing information complementary to, or inaccessible by, the cosmic microwave background \cite{planck18}. Over the next decade, LSS surveys such as DESI \cite{desi1}, {\it Euclid} \cite{euclid11, euclid18}, and the LSST \cite{lsst} will generate vast amounts of cosmological data, providing stringent tests of our understanding of the universe.
All these surveys target biased tracers (e.g.~galaxies) of the underlying matter field.
Modeling the connection between these tracers and the underlying matter density is thus key to extracting the maximal information about the universe from observational data.
In this work we develop a fully non-parametric framework to find the abundance of halos from the initial Lagrangian density field, going beyond the traditional perturbative approaches.
In the Lagrangian formalism, the Lagrangian-space (pre-advection-)halo overdensity $\delta_h$ at the initial time can be written as a function $f$ of the local Lagrangian matter overdensity $\delta$ and two terms that encode nonlocality:
$\nabla_i \nabla_j \Phi$ and $\nabla_i v_j$, where $\Phi$ is the gravitational potential and $\vec{v}$ is the peculiar velocity \citep[for a recent review, see e.g.][]{desjacques18}.
This function $f$ determines the weight that each fluid element carries, which is then advected to the final redshift to give the Eulerian halo density field.
Previous works \cite{matsubara08, mcdonald09, assassi14, vlah16} have suggested that the functional form
\begin{equation}
1+\delta_h = f(\delta, \nabla^2\delta, \mathcal{G}_2)
\label{eq:f_func}
\end{equation}
provides an accurate description of biased tracers, where $\mathcal{G}_2$ is the tidal operator (equation~\eqref{eq:G2}).
Traditionally, $f$ is Taylor expanded around $\delta=0$ and a series of bias coefficients $b_i$ are used to encode the response of small-scale halo formation physics to the large-scale structure \cite{fry93, matsubara08, vlah16}:
\begin{equation}
f \approx 1 + b_1 \delta + b_2 (\delta^2 - \langle \delta^2 \rangle) + b_{\mathcal{G}_2}(\mathcal{G}_2 - \langle \mathcal{G}_2 \rangle) + b_{\nabla^2} \nabla^2\delta + ...,
\label{eq:f_pt}
\end{equation}
where $\langle \cdot \rangle$ represents a spatial average.
The above formalism can also be written in Eulerian space, where the final-time Eulerian halo overdensity and matter overdensity are related via the bias expansion.
Since the standard Eulerian bias model has been shown to produce larger errors in reproducing the observed halo field than the Lagrangian one \cite{roth11, schmittfull19, modi19}, we will focus on the Lagrangian picture of linking the halo field to the initial density field, but avoid a Taylor expansion of $f$.
We will advect the halo and initial density fields to lower redshifts non-perturbatively using N-body simulations, which is more computationally expensive than computing displacements using the Zel'dovich approximation \citep[as done in][]{schmittfull19, modi19}, but more accurate \cite{modi20, kokron21, zennaro21, pelliban21}.
Previous studies have focused on the bias expansion approach of describing the large-scale halo field and evaluating the biases \cite{kaiser84, desjacques10, musso12, baldauf15, modi17, lazeyras16, lazeyras18, lazeyras19, lazeyras21}.
With various improvements developed over the years, the bias expansion has achieved broad success in describing summary statistics such as the galaxy power spectrum and bispectrum \cite{chan12, baldauf12, saito14, abidi18, fujita20, modi20, kokron21}.
The effectiveness of perturbation theory has also been evaluated at the field level \cite{schmittfull19, schmittfull20, modi19, barreira21}.
An important merit of the bias expansion is that one can get physical intuition of bias parameters from the peak-background split argument and obtain theoretical predictions for the biases \cite{bardeen86, mo96, sheth99}.
However, the bias expansion can yield an unphysical relation between the galaxy and matter fields: unphysicality can manifest via a non-positive-definite $f$ (i.e. negative weights), as well as via enhanced biases in underdense regions.
Given these caveats of the bias expansion, we propose a fully Lagrangian, non-parametric halo bias model and measure the $f$ function in N-body simulations at the field level in real space. We perform the measurements for mass-weighted halo fields, such that the resulting $f$ represents the halo-to-mass ratio that a patch in the initial Lagrangian space should carry to form halos at the final redshift. Figure~\ref{fig:illustration} gives a schematic illustration of our procedure to calculate a halo field given the $f$ weights that the particles should carry. We show that our non-parametric $f$ is non-negative by construction and monotonically increasing with density. Its shape shows a clear deviation from a linear or quadratic function of the density, especially for more massive halos. Our non-parametric $f$ leads to sub-percent level accuracy on the prediction of the halo power spectrum at $k\sim0.01-0.1\ h\ {\rm Mpc}^{-1}$ given an appropriate smoothing scale of the initial density field, albeit with a mild dependence on the smoothing scale and other input parameters.
The paper is organized as follows. Section~\ref{sec:methods} introduces the formalism of our non-parametric $f$ and the simulations used. Section~\ref{sec:results} shows $f$ measured for various halo mass cuts, the recovery of the halo power spectrum, and the dependencies on the parameters used. We conclude in Section~\ref{sec:conclusions} and discuss possible extensions of our formalism.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/illustration.pdf}
\caption{\label{fig:illustration} A schematic illustration of our procedure to calculate a halo field given the $f$ weights that the particles in a simulation should carry. Left panel shows the initial matter density field, illustrated by colors. The overlying gray circles represent the $f$ weights that the particles carry, with the size of the circles showing the amplitude of $f$. The top right panel illustrates the final matter field, where the particles shown by circles have been moved by gravity. The bottom right panel presents the final halo field, where the circles follow the locations of the particles and the colors and sizes of the circles represent the amplitude of $f$.}
\end{figure}
\section{Methods}
\label{sec:methods}
\subsection{Lagrangian formalism of a non-parametric biasing model}
In the Lagrangian picture, a fluid element is mapped from its initial Lagrangian position $\vec{q}$ to its final Eulerian coordinates $\vec{x}$ at time $t$ through the displacement $\vec{\Psi}(\vec{q}, t)$:
\begin{equation}
\vec{x}(\vec{q}, t) = \vec{q} + \vec{\Psi}(\vec{q}, t).
\end{equation}
If the particles carry weights $f(\vec{q})$, then the resulting field at the final time $t$ can be obtained as $\int \mathrm{d}^3q \delta^D(\vec{x}-\vec{q}-\vec{\Psi}(\vec{q},t)) f(\vec{q})$, where $\delta^D$ denotes the 3-dimensional Dirac delta function. Specifically, $f=1$ gives the Eulerian density field at time $t$.
The overall goal of this work is to find which weights $f$ lead to the correct halo field $1+\delta_h$ at time $t$. We take $f$ to be a function of the smoothed linear overdensity $\delta_1$, its Laplacian $\nabla^2\delta_1$, and the corresponding tidal operator $\mathcal{G}_2$:
\begin{equation}
\mathcal{G}_2 = \sum_{ij} \left( \left[ \nabla_i \nabla_j \nabla^{-2} - \frac{1}{3}\delta^K_{ij} \right] \delta_1 \right)^2,
\label{eq:G2}
\end{equation}
where $\delta^K_{ij}$ is the Kronecker delta.
$\delta_1$ is defined as
\begin{equation}
\delta_1(\vec{q}) = \int \mathrm{d}^3q' W_{R_f}(|\vec{q}-\vec{q'}|) \delta(\vec{q'}),
\end{equation}
where $W_{R_f}$ is a Gaussian smoothing kernel of size $R_f$, and $\delta(\vec{q})$ is the unsmoothed linear overdensity. We will explain the necessity of smoothing the density field later. The final halo field is thus computed through
\begin{equation}
1+\delta_h = \int \mathrm{d}^3q \delta^D(\vec{x}-\vec{q}-\vec{\Psi}(\vec{q},t)) f(\delta_1, \nabla^2\delta_1, \mathcal{G}_2).
\label{eq:delta_model_displacement}
\end{equation}
Here the weights $f$ should satisfy
\begin{equation}
\int P(\delta_1, \nabla^2\delta_1, \mathcal{G}_2) f(\delta_1, \nabla^2\delta_1, \mathcal{G}_2) \mathrm{d}\delta_1 \mathrm{d}(\nabla^2\delta_1) \mathrm{d}\mathcal{G}_2 = 1,
\label{eq:inte_f_constraint}
\end{equation}
where $P(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$ is the probability distribution of $(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$.
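For concreteness, the three Lagrangian fields entering $f$ can be built from the linear overdensity on a periodic grid with a few FFTs, as in the minimal NumPy sketch below; grid size, box length, and function names are illustrative rather than the production code.
\begin{verbatim}
import numpy as np

def lagrangian_fields(delta, box, R_f):
    # delta: linear overdensity on an n^3 periodic grid; box, R_f in h^-1 Mpc
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * R_f**2)  # Gaussian smoothing
    delta1 = np.fft.ifftn(dk).real
    lap = np.fft.ifftn(-k2 * dk).real                     # nabla^2 delta_1
    inv_k2 = np.zeros_like(k2)
    np.divide(1.0, k2, out=inv_k2, where=k2 > 0)
    G2 = np.zeros_like(delta1)
    kvec = (kx, ky, kz)
    for i in range(3):           # G2 = sum_ij ([k_i k_j/k^2 - d_ij/3] delta_1)^2
        for j in range(3):
            dij = 1.0 if i == j else 0.0
            comp = np.fft.ifftn((kvec[i] * kvec[j] * inv_k2 - dij / 3.0) * dk).real
            G2 += comp**2
    return delta1, lap, G2
\end{verbatim}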
Instead of expanding $f$ in a Taylor series, we choose to fit a non-parametric $f$ using the initial conditions and the final halo field in N-body simulations. To this end, we divide the 3-dimensional $\delta_1$-$\nabla^2\delta_1$-$\mathcal{G}_2$ volume into $N_{\rm bins}$ bins and fit for the $f$ value within each bin. We assign a weight $f$ to each dark matter particle at the final redshift $z$ according to the $(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$ values at its initial Lagrangian position. We then grid the particles into $N_{\rm cells}$ grid cells by Cloud-In-Cell (CIC) interpolation using their locations at $z$ and the assigned weights $f$, which yields the predicted density field for the biased objects, $1 + \delta^{\rm model}_h$. Comparing to the true halo density field $1 + \delta^{\rm true}_h$, obtained also with CIC interpolation, and minimizing $\sum_j (\delta^{\rm model}_{h,j} - \delta^{\rm true}_{h,j})^2$ in real space gives the least-squares solution for $f$, where $j$ denotes the grid index.
Figure~\ref{fig:illustration} gives a schematic illustration of how we calculate $\delta^{\rm model}_h$, where the particles carry their corresponding $f$ weights (gray circles in the left panel) given at the initial time to their final locations, forming the final halo field (bottom right panel).
We now briefly outline a mathematical derivation of $\delta^{\rm model}_h$. At a location with grid index $j$, $\delta^{\rm model}_{h,j}$ is given by
\begin{equation}
1+\delta^{\rm model}_{h, j} = \sum_i w_{ij} f(\delta_{1,i}, \nabla^2\delta_{1,i}, \mathcal{G}_{2,i}),
\label{eq:delta_model_0}
\end{equation}
where $i$ denotes the indices of the dark matter particles, $w_{ij}$ is the CIC weight that the $i$-th particle contributes to the $j$-th grid point, and $f(\delta_{1,i}, \nabla^2\delta_{1,i}, \mathcal{G}_{2,i})$ is the weight that the $i$-th particle carries. Supposing that $(\delta_{1,i}, \nabla^2\delta_{1,i}, \mathcal{G}_{2,i})$ falls in the $m$-th bin of the 3-dimensional $\delta_1$-$\nabla^2\delta_1$-$\mathcal{G}_2$ volume, so that $f(\delta_{1,i}, \nabla^2\delta_{1,i}, \mathcal{G}_{2,i}) = f_m$, we get
\begin{equation}
1+\delta^{\rm model}_{h, j} = \sum_m \sum_{i \in \mathcal{I}_m} w_{ij} f_m = \sum_m A_{jm} f_m,
\end{equation}
where $\mathcal{I}_m$ is the set of indices of particles that carry weight $f_m$, and $A$ is an $N_{\rm cells} \times N_{\rm bins}$ matrix whose $(j,m)$-th element is
\begin{equation}
A_{jm} = \sum_{i \in \mathcal{I}_m} w_{ij}.
\end{equation}
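A compact way to assemble $A$ is as a sparse matrix over (cell, bin) index pairs. The sketch below uses nearest-grid-point deposition for brevity, so each particle contributes a single unit weight; with CIC each particle would instead contribute up to eight fractional weights.
\begin{verbatim}
import numpy as np
from scipy.sparse import coo_matrix

def build_A(cell_idx, bin_idx, n_cells, n_bins):
    # cell_idx: flattened grid-cell index j of each particle (NGP)
    # bin_idx:  f-bin index m of each particle
    w = np.ones_like(cell_idx, dtype=float)
    return coo_matrix((w, (cell_idx, bin_idx)),
                      shape=(n_cells, n_bins)).tocsr()
\end{verbatim}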
We thus aim to minimize the error of reproducing the true halo field by solving the quadratic optimization problem
\begin{align*}
&\operatorname*{argmin}_f\ \left( A f - \left( 1 + \delta^{\rm true}_h \right) \right)^T \left( Af - \left( 1 + \delta^{\rm true}_h \right) \right) \\
= &\operatorname*{argmin}_f\ f^T A^T A f - 2\left( 1 + \delta^{\rm true}_h \right)^T A f + {\rm const}.\numberthis
\label{eq:qp_objective}
\end{align*}
In practice, rather than using all information down to the pixel scale in the least-squares fit, we minimize $\sum_{k<k_{\rm max}} | \mathcal{F}(\delta^{\rm model}_h) - \mathcal{F}(\delta^{\rm true}_h) |^2$, where $\mathcal{F}(\cdot)$ denotes the Fourier transform and $k_{\rm max}$ is the maximum wavenumber up to which we sum the residuals. Using Parseval's theorem, this sum of residuals in $k$-space can be written equivalently in real space as $\sum_j (\tilde{\delta}^{\rm model}_{h,j} - \tilde{\delta}^{\rm true}_{h,j})^2$, where the $\tilde{\delta}$'s are the real-space halo fields filtered with a sharp-$k$ filter $W(k)$:
\begin{gather}
\tilde{\delta}^{\rm true}_h = \mathcal{F}^{-1}\left( \mathcal{F}\left( \delta^{\rm true}_h \right) W(k) \right) \\
\tilde{\delta}^{\rm model}_h = \mathcal{F}^{-1}\left( \mathcal{F}\left( \delta^{\rm model}_h \right) W(k) \right) = \sum_m \underbrace{\mathcal{F}^{-1}\left( \mathcal{F}\left( A_{*m} \right) W(k) \right)}_{\widetilde{A}_{*m}} f_m - 1.
\end{gather}
We thus still use equation~\eqref{eq:qp_objective} to calculate the objective function, but substituting $A$ and $\delta^{\rm true}_h$ with the filtered values $\widetilde{A}$ and $\tilde{\delta}^{\rm true}_h$.
Without constraints, the quadratic optimization problem of equation~\eqref{eq:qp_objective} can be easily solved with linear algebra. However, we found that for some choices of the parameters $R_f, k_{\rm max},$ and $N_{\rm cells}$, the simple least-squares solution leads to negative $f$ in underdense regions, violating the physical intent of our formalism (see Section~\ref{sec:results_N150}). The least-squares solution also does not guarantee the normalization constraint of equation~\eqref{eq:inte_f_constraint}. We therefore by default solve for $f$ as a quadratic programming problem with the normalization constraint and the $f\ge0$ constraint, using the Python package {\tt qpsolvers}\footnote{\url{https://github.com/stephane-caron/qpsolvers}}. We will discuss how the results change with and without these constraints.
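A hedged sketch of the constrained fit is shown below. {\tt solve\_qp} minimises $\frac{1}{2}x^T P x + q^T x$ subject to equality constraints and bounds, so the objective of equation~\eqref{eq:qp_objective} maps onto $P = 2\widetilde{A}^T\widetilde{A}$ and $q = -2\widetilde{A}^T(1+\tilde{\delta}^{\rm true}_h)$; the variable names and the choice of the {\tt osqp} backend are our own assumptions.
\begin{verbatim}
import numpy as np
from qpsolvers import solve_qp

def fit_f(A_tilde, delta_true, p_bin):
    # A_tilde: filtered N_cells x N_bins matrix; p_bin: bin probabilities
    y = 1.0 + delta_true.ravel()
    P = 2.0 * (A_tilde.T @ A_tilde)
    q = -2.0 * (A_tilde.T @ y)
    return solve_qp(P, q,
                    A=p_bin[None, :], b=np.array([1.0]),  # normalisation
                    lb=np.zeros(len(p_bin)),              # f >= 0
                    solver="osqp")
\end{verbatim}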
We note that although recent works such as \cite{schmittfull19, kokron21} do not smooth the initial density field as we do, the gridding of the field leads to an implicit smoothing on scales roughly corresponding to the cell size. Our explicit smoothing leads to results that are independent of the grid size (but dependent on the smoothing scale).
Smoothing the initial field also makes the matrix $A^T A$ more diagonal: intuitively, smoothing with a large enough $R_f$ means that all particles contributing to a given cell share the same weight $f$, so that each row of $A$ has only one non-zero element, making $A^T A$ diagonal.
We choose to use a Gaussian smoothing kernel in this work.
In this work we use mass-weighted halos instead of number-weighted ones. We are thus predicting the ratio of the mass-weighted halo density to the total matter density. We defer an examination of halos weighted by number or by a halo occupation distribution model to future work.
\subsection{Simulations}
We now briefly outline the \textsc{AbacusSummit} simulations that we use in this work.
\textsc{AbacusSummit} \cite{maksimova21} is a suite of large, high-accuracy cosmological N-body simulations run with the Abacus N-body simulation code \cite{garrison18, garrison19, garrison21}. Abacus utilizes a novel, fully disjoint split between the near-field and far-field gravitational sources, solving the former on GPU hardware and the latter with a variant of a multipole method \cite{metchnik09}. The resulting code is both accurate and fast, up to 70M particle updates per second per node on Summit.
The \textsc{AbacusSummit} simulations were designed to meet and exceed the currently stated Cosmological Simulation Requirements of the Dark Energy Spectroscopic Instrument (DESI) survey \cite{desi1}. We utilize a set of 25 simulations, each with a $2\ h^{-1}$~Gpc box size and $6912^3$ particles, using the Planck 2018 $\Lambda$CDM cosmology \cite{planck18}: $\Omega_{\rm m}h^2=0.14237$, $h=0.6736$, $\sigma_8=0.807952$. This gives a particle mass of $2\times10^9\ h^{-1}\ M_\odot$. We use a force softening of $7.2\,h^{-1}$ proper kpc.
The initial conditions were generated at $z=99$ using the method proposed in \cite{garrison16}. To obtain the $(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$ values associated with each particle, we interpolate the initial density field onto a $1152^3$ grid and calculate the $(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$ values at each grid point for a given smoothing scale $R_f$. We then assign these values to each particle using the grid point nearest to the particle's position in the initial space.
Halos are identified on the fly with the CompaSO halo finder, which uses a hybrid FoF-SO algorithm (Hadzhiyska et al., submitted). A kernel density estimate is first computed around all particles. Particles with overdensity larger than 60 are then segmented with the FoF algorithm with a linking length of $0.25$ times the interparticle spacing. Finally, halos are identified within each segment by a competitive spherical-overdensity algorithm with an overdensity threshold of 200. Here we only use halos at $z=0.5$ with at least 150 particles (corresponding to a halo mass of $M=3\times 10^{11}\,h^{-1}\,M_\odot$).
In addition to the $2\ h^{-1}$~Gpc simulations, on which we focus here, in Sec.~\ref{sec:results_smallbox} we use multiple $500\ h^{-1}$~Mpc small-box simulations with the same mass resolution and cosmological parameters as the large-box ones.
Given the increased computational cost incurred when fitting large-dimensional parameter spaces, we will study $f$ in the 2-dimensional planes $\delta_1$-$\nabla^2\delta_1$ and $\delta_1$-$\mathcal{G}_2$ separately, instead of fully exploring the 3-dimensional $\delta_1$-$\nabla^2\delta_1$-$\mathcal{G}_2$ volume. We first make 40 bins in $\delta_1$ from $\delta_1/\sigma(\delta_1)=-4$ to 5, where $\sigma(\cdot)$ denotes the standard deviation. In each bin of $\delta_1$, we make 5 bins in $\nabla^2\delta_1$ corresponding to $<5, 5-30, 30-70, 70-95, >95$ percentiles, or 5 bins in $\mathcal{G}_2$ representing $<10, 10-30, 30-70, 70-90, >90$ percentiles.
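A sketch of this binning (shown here for the $\delta_1$-$\nabla^2\delta_1$ plane; the $\mathcal{G}_2$ case is analogous with its own percentiles) is given below; it assumes every $\delta_1$ bin is populated, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def bin_particles(d1, lap):
    # 40 bins in delta_1/sigma from -4 to 5; per delta_1 bin,
    # 5 percentile bins in nabla^2 delta_1 (<5, 5-30, 30-70, 70-95, >95)
    d1_edges = np.linspace(-4.0, 5.0, 41)
    i = np.clip(np.digitize(d1 / d1.std(), d1_edges) - 1, 0, 39)
    m = np.empty_like(i)
    for a in range(40):
        sel = i == a
        pct_edges = np.percentile(lap[sel], [5, 30, 70, 95])
        m[sel] = 5 * a + np.digitize(lap[sel], pct_edges)
    return m          # flat bin index in [0, 199]
\end{verbatim}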
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/2Dbins.pdf}
\caption{\label{fig:2Dbins} An illustration of the bins used for fitting $f$. Left and right panels show the 2D histograms of $\delta_1$ and $\nabla^2\delta_1$, and $\delta_1$ and $\mathcal{G}_2$ respectively, where the binning in $\delta_1$ correspond to the 40 bins we used for fitting $f$. The vertical edges of the red rectangles show the boundaries of 3 different bins in $\delta_1$ centered around $\delta_1/\sigma(\delta_1)=-2,0,2$, and the horizontal edges represent the boundaries of bins in either $\nabla^2\delta_1$ or $\mathcal{G}_2$. The red crosses illustrate the centers of the bins.}
\end{figure}
Figure~\ref{fig:2Dbins} shows the 2D histograms of $\delta_1$ and $\nabla^2\delta_1$ (left), and of $\delta_1$ and $\mathcal{G}_2$ (right), where the bins in $\delta_1$ correspond to the 40 bins used for fitting $f$; the red rectangles and crosses mark the boundaries and centers of representative bins, as detailed in the caption.
We note that, owing to the increased computational cost, we are not able to make more bins in $\nabla^2\delta_1$ and $\mathcal{G}_2$, even though these bins are much sparser than those in $\delta_1$. However, most bins appear compact, except the boundary bins corresponding to the $<5$ and $>95$ percentiles of $\nabla^2\delta_1$ and the $>90$ percentile of $\mathcal{G}_2$, where the particle numbers are small. We have also verified that making 4 bins in $\nabla^2\delta_1$ instead of 5 does not impact our $f$ or the recovery of the halo power spectrum, indicating that the binning is sufficient for the purpose of this paper.
\section{The non-parametric halo bias model}
\label{sec:results}
We now discuss the non-parametric halo-to-mass ratio ($f$) solutions and how well they recover the halo power spectra for mass-weighted halos with $M>3\times10^{11} - 6\times10^{12}\ h^{-1}\ M_\odot$ at $z=0.5$.
Since we focus on the mass-weighted halo field, a natural value for the smoothing scale $R_f$ is the Gaussian filter scale that encloses the mass-weighted mean halo mass. The mass-weighted mean masses for $M>3\times10^{11}\ h^{-1}\ M_\odot$ and $M>6\times10^{12}\ h^{-1}\ M_\odot$ are $2.4\times10^{13}\ h^{-1}\ M_\odot$ and $5.1\times10^{13}\ h^{-1}\ M_\odot$ respectively, corresponding to Gaussian filters with $R_f=2.6\ h^{-1}$~Mpc and $3.3\ h^{-1}$~Mpc. We thus by default adopt a Gaussian smoothing scale $R_f=3\ h^{-1}$~Mpc and $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1} (\sim 1/R_f)$ to compute $f$ for all mass cuts.
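The quoted $R_f$ values are consistent with inverting the Lagrangian mass enclosed by a Gaussian filter, $M = (2\pi)^{3/2}\bar{\rho}R_f^3$; a sketch is given below, where $\bar{\rho}$ follows from the simulation cosmology and the exact formula used is our inference.
\begin{verbatim}
import numpy as np

RHO_CRIT = 2.775e11                 # h^2 M_sun / Mpc^3
OMEGA_M = 0.14237 / 0.6736**2       # ~0.314 for the quoted cosmology

def gaussian_Rf(M):
    # invert M = (2 pi)^{3/2} rho_bar R_f^3; M in h^-1 M_sun, R_f in h^-1 Mpc
    rho_bar = RHO_CRIT * OMEGA_M
    return (M / ((2.0 * np.pi) ** 1.5 * rho_bar)) ** (1.0 / 3.0)

print(gaussian_Rf(2.4e13), gaussian_Rf(5.1e13))   # ~2.6, ~3.3
\end{verbatim}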
We will discuss results with other $R_f$ and $k_{\rm max}$ choices later on.
We interpolate the particles onto a $400^3$ grid and compute the matrix $A$ and the $f$ solution. This gives a Nyquist frequency of $0.63\ h\ {\rm Mpc}^{-1}$ and a cell size of $5\ h^{-1}$~Mpc, larger than the biggest clusters in our simulations.
As described in Section~\ref{sec:methods}, by default we solve the quadratic programming problem with the normalization constraint (equation~\eqref{eq:inte_f_constraint}) and the non-negativity ($f\ge0$) constraint, but will discuss results without these constraints.
To compute the model power spectrum given a non-parametric $f$, we assign each particle in a simulation with an $f$ weight according to its associated $(\delta_1, \nabla^2\delta_1, \mathcal{G}_2)$ values in the initial space. We then grid the particles onto a $512^3$ grid using CIC interpolation, which gives a Nyquist frequency of $0.8\ h\ {\rm Mpc}^{-1}$. We choose to use this finer grid when computing the halo power spectrum to avoid aliasing effects \cite{jing05}.
\subsection{Results using large boxes}
\label{sec:results_f_and_P}
Here we derive the $f$ solutions for 6 simulations and compute their mean. We then apply the average $f$ to 10 different simulations to calculate the model power spectra, their mean, and the error bars on the power.
In this way the first 6 simulations act as a training set, whereas the latter 10 serve as a cross-validation set, showing that our $f$ does not overfit to the specific details of individual simulations.
\subsubsection{Physical quantities that best describe the halo field}
\label{sec:results_N150}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\linewidth]{figures/f_d1_N150.pdf}
\includegraphics[width=\linewidth]{figures/f_d1G2_N150.pdf}
\includegraphics[width=\linewidth]{figures/f_d1nabla2d1_N150.pdf}
\caption{\label{fig:f_N150} The halo-to-mass ratios ($f$) fitted for mass-weighted halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$, assuming that $f$ depends on different physical quantities.
From top to bottom: $f$ (defined in equation~\eqref{eq:delta_model_0}) as a function of $\delta_1$, $(\delta_1, \mathcal{G}_2)$, and $(\delta_1, \nabla^2\delta_1)$. The function $f$ is obtained using $R_f=3\ h^{-1}$~Mpc and $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ on $400^3$ grids, and is averaged over 6 simulations. Shades represent 1-$\sigma$ scatter. Solid lines show solutions of $f$ with the normalization constraint (equation~\eqref{eq:inte_f_constraint}) and the $f\ge0$ constraint, while dotted lines represent solutions without constraints. The left and right panels of the top row show the same $f$ in linear and log scales respectively. The right panel of the middle row illustrates $f$ in the 2-dimensional $(\delta_1, \mathcal{G}_2)$ plane. The left panel shows $f$ as a function of $\delta_1$ in different (percentile) bins of $\mathcal{G}_2$, and the middle panel presents $f$ as a function of $\mathcal{G}_2$ at different values of $\delta_1$. The bottom row is similar to the middle one, but showing results for $f(\delta_1, \nabla^2\delta_1)$. We note that the mean of $\nabla^2\delta_1$ is zero, while the mean of $\mathcal{G}_2$ is not. Effects of including constraints are more evident for $f(\delta_1, \nabla^2\delta_1)$.}
\end{figure}
We first focus on modeling the mass-weighted halo field with a mass threshold of $M>3\times10^{11}\ h^{-1}\ M_\odot$ at $z=0.5$. Figure~\ref{fig:f_N150} shows $f$ averaged over the 6 training simulations. The top, middle, and bottom rows illustrate $f$ as a function of $\delta_1$, $(\delta_1, \mathcal{G}_2)$, and $(\delta_1, \nabla^2\delta_1)$ respectively. Shades represent 1-$\sigma$ scatter. Solid and dotted lines show solutions of $f$ with and without constraints. The left and right panels of the top row show $f$ in linear and log scales respectively. The right panel of the middle row illustrates $f$ in the 2-dimensional $(\delta_1, \mathcal{G}_2)$ plane. The left panel shows $f$ as a function of $\delta_1$ in different (percentile) bins of $\mathcal{G}_2$, and the middle panel presents $f$ as a function of $\mathcal{G}_2$ at different values of $\delta_1$. The bottom row is similar to the middle one, but showing results for $f(\delta_1, \nabla^2\delta_1)$.
For all three fitting choices with the mass-weighted halos, our non-parametric $f$ obtained with constraints is non-negative and monotonically increasing with $\delta_1$, except in $\delta_1\gtrsim3$ regions where the solution becomes noisy. The shape of $f$ deviates from a linear or quadratic function of $\delta_1$, as seen from the top right panel of Figure~\ref{fig:f_N150}. These trends are even more evident for higher halo mass cuts, as we show below. Such behavior contradicts the prediction from the bias expansion, as we demonstrate later.
While $f$ seems only weakly dependent on $\mathcal{G}_2$, being slightly larger at smaller $\mathcal{G}_2$ when $\delta_1 \gtrsim 2$, it strongly depends on $\nabla^2\delta_1$, showing a clear separation of $f$ in different $\nabla^2\delta_1$ bins. This implies a small contribution of $\mathcal{G}_2$ in recovering the halo field for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos, but a more significant role of $\nabla^2\delta_1$.
We have verified that this conclusion and the trend of variation in $f(\delta_1, \mathcal{G}_2)$ hold for all mass bins considered in this work ($M>3\times10^{11} - 6\times10^{12}\ h^{-1}\ M_\odot$), which we will come back to in Section~\ref{sec:results_diffN}.
The function $f(\delta_1,\nabla^2\delta_1)$ appears to be sensitive to the normalization and non-negativity constraints. Without constraints, $f$ becomes negative for $\delta_1<0$, which is unphysical, although this trend is mild for $R_f=3\ h^{-1}$~Mpc and $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$. We summarize the effects of the non-negativity and normalization constraints in Section~\ref{sec:results_qp_constraints}. We also note a tendency of $f$ to become non-monotonic in $\nabla^2\delta_1$ at $\delta_1 \gtrsim 2$, both with and without constraints. These trends become more evident for smaller $R_f$ and larger $k_{\rm max}$, which we discuss in Section~\ref{sec:results_Rf_and_kmax}.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/P_N150.pdf}
\caption{\label{fig:P_N150} Recovery of the halo power spectrum of mass-weighted halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$, using the $f$ solutions shown in Figure~\ref{fig:f_N150}. The power spectra and their ratios are calculated using $512^3$ grids and averaged over 10 simulations that are different from those used for computing $f$. In each panel the vertical gray dashed line illustrates the $k_{\rm max}$ that we use for fitting.
The black solid line in the left panel shows the measured halo power spectrum $P_{\rm h}$ from the simulations. Blue, orange, and green dashed lines represent the model power spectra $P_{\rm model}$ using the fitted $f(\delta_1)$, $f(\delta_1,\mathcal{G}_2)$, and $f(\delta_1,\nabla^2\delta_1)$ respectively.
The top right panel illustrates the ratio of the power spectrum of the uncorrelated residual $P_{\rm uncorr}$ to $P_{\rm h}$. The black dot-dashed line represents the ratio of the conventional Poissonian shot noise to $P_{\rm h}$, shown for illustration purposes only. Shades show 1-$\sigma$ scatter between simulations of the ratios.
The bottom right panel illustrates $P_{\rm model}/P_{\rm h}$, and the green dotted line represents $P_{\rm model}$ using $f(\delta_1,\nabla^2\delta_1)$ without the normalization and non-negative constraints.}
\end{figure}
Using the $f$ solutions shown above, we assess how well our model recovers the halo power spectrum $P_{\rm h}$ by calculating the model grid $\delta^{\rm model}_h$ and the model power spectrum $P_{\rm model}$. We divide $\delta^{\rm model}_h$ into a part that is correlated with $\delta^{\rm true}_h$ and an uncorrelated residual part. The power spectrum of the uncorrelated residual is \cite{modi17}
\begin{equation}
P_{\rm uncorr} = P_{\rm model} - P_{\rm h,model}^2/P_{\rm h},
\end{equation}
where $P_{\rm h,model}$ is the cross power spectrum of the halo and model grids. $P_{\rm uncorr}$ thus acts as a metric of the quality of the fit, and $P_{\rm model} - P_{\rm uncorr} = P_{\rm h,model}^2/P_{\rm h}$ gives the power spectrum of the correlated part.
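In code this metric is a one-liner once the auto and cross spectra have been measured (e.g. from FFTs of the two grids followed by spherical binning); the function name is illustrative.
\begin{verbatim}
def p_uncorr(P_model, P_h, P_hm):
    # power of the part of the model field uncorrelated with the halo field
    return P_model - P_hm**2 / P_h
\end{verbatim}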
Figure~\ref{fig:P_N150} compares the model power spectra $P_{\rm model}$ computed using the $f$ solutions shown above to the measured halo power spectrum $P_{\rm h}$ from the simulations.
The vertical gray dashed lines mark $k_{\rm max}$.
The black solid line in the left panel shows $P_{\rm h}$. Blue, orange, and green dashed lines represent $P_{\rm model}$ using the fitted $f(\delta_1)$, $f(\delta_1,\mathcal{G}_2)$, and $f(\delta_1,\nabla^2\delta_1)$ respectively.
The top right panel illustrates the ratio $P_{\rm uncorr}/P_{\rm h}$. Shades show 1-$\sigma$ scatter between simulations of the ratios.
The bottom right panel illustrates a second metric of the goodness of fit of our model, $P_{\rm model}/P_{\rm h}$, and the green dotted line represents $P_{\rm model}$ using $f(\delta_1,\nabla^2\delta_1)$ without the normalization and non-negative constraints.
The black dot-dashed line in the top right panel of Figure~\ref{fig:P_N150} represents the ratio of the conventional shot noise power spectrum $P_{\rm shot}$ to $P_{\rm h}$, where $P_{\rm shot}$ is calculated assuming Poisson sampling and mass weighting \cite{seljak09}. We note that we did not subtract this conventional shot noise in our analysis, and $P_{\rm shot}$ is only shown for illustration purposes. Any irreducible randomness in $\delta^{\rm model}_h$ should appear in the uncorrelated part with $\delta^{\rm true}_h$ by construction, and therefore be reflected in $P_{\rm uncorr}$. However, $P_{\rm uncorr}$ is about a factor of 10 smaller than $P_{\rm shot}$, indicating that the uncorrelated residual in $\delta^{\rm model}_h$ is well below the Poisson expectation. Our results also agree with \cite{seljak09} in that mass weighting suppresses shot noise.
Clearly, the model with $f(\delta_1,\nabla^2\delta_1)$ results in the best recovery of the halo power spectrum, with $P_{\rm model}$ matching $P_{\rm h}$ at the $0.5\%$ level from $0.01\ h\ {\rm Mpc}^{-1}$ to $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$.
The other two models, $f(\delta_1)$ and $f(\delta_1,\mathcal{G}_2)$, on the other hand, overestimate $P_{\rm model}/P_{\rm h}$: by over 15\% for $f(\delta_1)$ alone, and still by 13\% when $\mathcal{G}_2$ is added.
We thus find that $\nabla^2\delta_1$ is key to modeling the mass-weighted halo field. We note that our model with $\delta_1$ and $\nabla^2\delta_1$ overestimates $P_{\rm h}$ at $k<0.01\ h\ {\rm Mpc}^{-1}$ by up to 5\%, but the power spectrum of the uncorrelated residual $P_{\rm uncorr}$ is less than 1\% of $P_{\rm h}$ at these wavenumbers. This indicates that the overestimation is driven by the component of $\delta_h^{\rm model}$ that is correlated with $\delta^{\rm true}_h$, rather than the uncorrelated residuals. Moreover, the $f(\delta_1,\nabla^2\delta_1)$ solution with constraints raises the amplitude of $P_{\rm model}$ by $0.5\%$ compared to the solution without constraints, which we will come back to in Section~\ref{sec:results_qp_constraints}.
Notably, although $f(\delta_1,\nabla^2\delta_1)$ leads to the best recovery of the halo power spectrum, it is $f(\delta_1,\mathcal{G}_2)$ that results in the lowest amplitude of $P_{\rm uncorr}$. The reason why $f(\delta_1,\nabla^2\delta_1)$ leads to the highest $P_{\rm uncorr}$ is unclear to us. We note that we do not have an error bar for the halo field in each grid cell, so we cannot quote a $\chi^2$ value for the goodness of fit.
For the rest of the paper we will only focus on results using $f(\delta_1, \nabla^2\delta_1)$ with the normalization and non-negativity constraints.
\subsubsection{Results with different halo mass cuts}
\label{sec:results_diffN}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/f_d1nabla2d1_diffN.pdf}
\caption{\label{fig:f_diffN} The fitted $f(\delta_1, \nabla^2\delta_1)$ for different mass thresholds, obtained using $R_f=3\ h^{-1}$~Mpc and $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ on $400^3$ grids, and averaged over 6 simulations. From left to right: $M>3\times10^{11}, 1\times10^{12}, 2\times10^{12}, 6\times10^{12}\ h^{-1}\ M_\odot$. Top panels show $f$ in the 2-dimensional $(\delta_1, \nabla^2\delta_1)$ plane, and bottom panels illustrate $f$ as a function of $\delta_1$ in different $\nabla^2\delta_1$ bins. Shades represent 1-$\sigma$ scatter.}
\end{figure}
We now discuss the results of fitting the halo field with 4 different mass cuts $M>3\times10^{11}, 1\times10^{12}, 2\times10^{12}, 6\times10^{12}\ h^{-1}\ M_\odot$, which will test the robustness of our method for a broad range of halo masses. Figure~\ref{fig:f_diffN} illustrates the averaged $f(\delta_1,\nabla^2\delta_1)$ for these halo mass cuts. Top panels show $f$ in the 2-dimensional $(\delta_1, \nabla^2\delta_1)$ plane, and bottom panels illustrate $f$ as a function of $\delta_1$ in different $\nabla^2\delta_1$ bins.
Larger halo mass cuts lead to more evident deviation of $f$ from a polynomial of $\delta_1$ and $\nabla^2\delta_1$. Especially for $M>6\times10^{12}\ h^{-1}\ M_\odot$ halos, $f$ soars up at $\delta_1>0$ and shows large gradients with $\nabla^2\delta_1$.
This reflects that higher mass halos are exponentially rarer and their formation depends more heavily on the density peaks.
The solutions also gradually become non-monotonic in $\nabla^2\delta_1$ for higher halo mass cuts, suggesting a potential failure of the model and a need for larger $R_f$ to fit the more massive halos. We have verified that $f(\delta_1,\mathcal{G}_2)$ does not lead to better recovery of the halo power spectrum even for our largest halo mass threshold $6\times10^{12}\ h^{-1}\ M_\odot$.
We note that most previous works on the tidal shear bias studied more massive halos than ours ($M\gtrsim10^{13}\ h^{-1}\ M_\odot$, \cite{baldauf12, chan12, saito14, modi17, abidi18, lazeyras18}, except \cite{bel15, castorina16}). Our findings are broadly consistent with past works, which find either that the tidal bias is only important for halos with $M\gtrsim10^{13}\ h^{-1}\ M_\odot$ \cite{abidi18}, or that there is a small negative shear bias regardless of halo mass \cite{bel15, lazeyras18}, since our $f$ decreases with increasing $\mathcal{G}_2$ (see, however, \cite{modi17} for a different prediction).
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/P_diffN_f_d1nabla2d1.pdf}
\caption{\label{fig:P_diffN} Recovery of the halo power spectrum of mass-weighted halos with $M>3\times10^{11}, 1\times10^{12}, 2\times10^{12}, 6\times10^{12}\ h^{-1}\ M_\odot$, represented by blue, orange, green, and red colors respectively. In each panel the vertical gray dashed line illustrates the $k_{\rm max}$ that we use for fitting. Left panel presents the ratio of halo power spectra to the matter power spectrum; solid and dashed lines represent $P_{\rm h}$ and $P_{\rm model}$ respectively, where $P_{\rm model}$ is obtained using the $f(\delta_1,\nabla^2\delta_1)$ solutions shown in Figure~\ref{fig:f_diffN}. Top right panel illustrates $P_{\rm uncorr}/P_{\rm h}$, with dot-dashed lines representing the ratio of the conventional Poissonian shot noise to $P_{\rm h}$, shown for illustration purposes only. Bottom right panel shows $P_{\rm model}/P_{\rm h}$. Shades represent 1-$\sigma$ scatter between simulations.}
\end{figure}
Figure~\ref{fig:P_diffN} shows the halo power spectra with the 4 mass cuts using the $f$ solutions presented above. The left panel shows the halo power spectra divided by the matter power spectrum, which at low $k$ gives the linear bias.
The departure from a constant bias is visible for $k\gtrsim 0.1\ h\ {\rm Mpc}^{-1}$.
The two right panels illustrate $P_{\rm uncorr}/P_h$ and $P_{\rm model}/P_h$.
As we mentioned in Section~\ref{sec:results_N150}, $P_{\rm uncorr}$ captures the residual in $\delta^{\rm model}_h$ that is uncorrelated with $\delta^{\rm true}_h$. The conventional Poissonian calculation of the shot noise does not apply in our situation and is only shown for illustration purposes.
For mass thresholds up to $2\times10^{12}\ h^{-1}\ M_\odot$, our $f(\delta_1, \nabla^2\delta_1)$ solutions reproduce the halo power spectra to within 2\% error from $k=0.01\ h\ {\rm Mpc}^{-1}$ to $k_{\rm max}$. However, halos with higher mass cuts experience an overestimation of $P_{\rm model}/P_{\rm h}$, with the $M>6\times10^{12}\ h^{-1}\ M_\odot$ case seeing a 4\% shift.
We will discuss the dependence of the modeling results on $R_f$ and $k_{\rm max}$ in Section~\ref{sec:results_Rf_and_kmax} and how a slight increase in $R_f$ might mitigate the overestimation problem with the largest halo mass cut.
For all halo mass cuts, at the low-$k$ end $P_{\rm model}$ overestimates $P_{\rm h}$ by about 5\% more than in the intermediate $k$ range of $0.01-0.1\ h\ {\rm Mpc}^{-1}$.
This issue is unlikely to be caused by the uncorrelated residual part in $\delta_h^{\rm model}$, as $P_{\rm uncorr}$ is less than 1-2\% of $P_{\rm h}$ at low $k$, unable to explain the 5\% overestimation. We conjecture that the issue is partly caused by $\nabla^2\delta_1$ lacking support at low $k$, given that its Fourier transform goes as $-k^2$ times that of $\delta_1$. However, $\nabla^2\delta_1$ also carries information of the smoothing scale through the gradient operator. We thus speculate that incorporating multiple smoothing scales in the modeling, especially one with large $R_f$, might mitigate this low-$k$ overestimation of the halo power spectrum. We leave an exploration of this question for future work, as the drastically increasing number of dimensions in $f$ when including more smoothing scales makes it hard to solve the problem with quadratic programming. Machine learning may provide a better approach to this.
Finally, we point out that since we compute $f$ on one set of simulations and evaluate $P_{\rm model}$ on another, the $f$ solutions cross-validate well across simulations. We will further demonstrate that applying the $f$ solutions from $500\ h^{-1}$~Mpc small-box simulations to the $2\ h^{-1}$~Gpc large boxes also results in good matches of $P_{\rm model}$ to $P_{\rm h}$.
\subsubsection{Comparison with EPS}
\label{sec:results_EPS}
We now show that the $f(\delta_1)$ functions we obtained from the simulation agree qualitatively with analytic predictions from the extended Press-Schechter (EPS) formalism.
Supposing that $f$ depends on $\delta_1$ only, EPS states that the function $f$ that modulates the amount of mass collapsed into halos more massive than a given threshold $M$ is given by \cite{matsubara08}
\begin{equation}
f(\delta_1) = \frac{\int_M^\infty M n(M', z | \delta_1, R_f) \mathrm{d}M'}{\int_M^\infty M n(M', z) \mathrm{d}M'},
\end{equation}
where $n(M,z)$ is the halo mass function, and $n(M,z | \delta_1,R_f)$ represents the conditional mass function, which gives the number density of halos of mass $M$, identified at redshift $z$, in a region of Lagrangian radius $R_f$ in which the linear overdensity extrapolated to the present time is $\delta_1$. In our case, instead of a real-space top-hat filter with radius $R_f$, we adopt a Gaussian filter. For illustrative purposes, we calculate $f$ using the Press-Schechter mass function, which gives
\begin{equation}
f(\delta_1) = \frac{{\rm erfc}\left( \left( \delta_c(z) - \delta_1 \right) / \sqrt{2 \left( \sigma^2(M) - \sigma^2_{R_f} \right)} \right)}{{\rm erfc}\left( \delta_c(z) / \left( \sqrt{2}\, \sigma(M) \right) \right)},
\label{eq:eps_f}
\end{equation}
where $\delta_c(z)$ is the critical overdensity required for spherical collapse at redshift $z$, $\sigma(M)$ is the variance of mass overdensity in a spherical region of size corresponding to the mass scale $M$ and is linearly extrapolated to $z=0$, and $\sigma_{R_f}$ is the mass variance in a Gaussian filter with size $R_f$. For reference, evaluating the first and second derivatives of $f$ at $\delta_1=0$ gives the bias expansion $f(\delta_1) = f(0) + b_1\delta_1 + b_2\delta_1^2 / 2!$ (though in Section~\ref{sec:results_bias} we will fit for $b_1, b_2$ instead of taking numerical derivatives of $f$).
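Equation~\eqref{eq:eps_f} is simple to evaluate; a direct transcription (function and argument names are ours, using SciPy's complementary error function) reads:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def eps_f(delta1, delta_c, sigma_M, sigma_Rf):
    """EPS modulation function f(delta_1).
    delta_c:  spherical-collapse threshold at redshift z
    sigma_M:  rms overdensity at mass scale M (linear, z=0)
    sigma_Rf: rms overdensity in the Gaussian filter of size R_f"""
    num = erfc((delta_c - delta1)
               / np.sqrt(2.0 * (sigma_M**2 - sigma_Rf**2)))
    den = erfc(delta_c / (np.sqrt(2.0) * sigma_M))
    return num / den
\end{verbatim}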
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/f_d1_diffN_EPS.pdf}
\caption{\label{fig:f_diffN_bias} Comparison of our non-parametric $f$ to the predictions from EPS and the bias expansion for two halo mass cuts.
Left panel: comparison of the EPS $f$ (equation~\eqref{eq:eps_f}, dot-dashed lines) and the non-parametric $f$ (solid lines). The latter is the weighted average of $f(\delta_1, \nabla^2\delta_1)$, with the weights given by the percentile ranges of the $\nabla^2\delta_1$ bins. Blue and red represent halo mass cuts of $M>3\times10^{11}$ and $6\times10^{12}\ h^{-1}\ M_\odot$ respectively. The middle and right panels compare the non-parametric $f$ (solid lines) to the best-fit bias expansion (dashed lines) in different $\nabla^2\delta_1$ bins represented by different colors (following Figure~\ref{fig:f_diffN}), for the $M>3\times10^{11}$ and $6\times10^{12}\ h^{-1}\ M_\odot$ halos respectively.}
\end{figure}
The left panel of Figure~\ref{fig:f_diffN_bias} compares the EPS prediction of $f(\delta_1)$ (dot-dashed lines) and our non-parametric $f$ (solid lines) for two halo mass cuts: $M>3\times10^{11}$ (blue) and $6\times10^{12}\ h^{-1}\ M_\odot$ (red). The non-parametric $f$ as a function of $\delta_1$ plotted here is the weighted average of $f(\delta_1, \nabla^2\delta_1)$, where the weights are the percentile ranges of the $\nabla^2\delta_1$ bins. A plateau in the EPS $f$ appears at $\delta_1/\sigma(\delta_1)>2$ because $\delta_1$ exceeds $\delta_c(z)$ there, so the collapsed fraction saturates at 1, preventing $f$ from growing further. The overall shape of the EPS $f(\delta_1)$ agrees with our fitted $f$, and our $f$ also shows an indication of flattening at large $\delta_1$ (top right panel of Figure~\ref{fig:f_N150}).
\subsubsection{Comparison with the bias expansion}
\label{sec:results_bias}
We next compare our least-squares non-parametric $f$ to the usual bias expansion.
We assume that $f$ is given by $f(\{\mathcal{O}\}) = \sum_{n=0}^{N_{\rm bias}} b_n \mathcal{O}_n$, where $\mathcal{O}_n$ represents the $n$-th operator and $b_n$ is its associated bias parameter.
Starting with equation~\eqref{eq:delta_model_0}, we have
\begin{equation}
1+\delta^{\rm model}_{h, j} = \sum_i w_{ij} f(\{\mathcal{O}\}_i) = \sum_m \sum_{i\in\mathcal{I}_m} w_{ij} \sum_n b_n \mathcal{O}_{n,m},
\end{equation}
where $\left( \mathcal{O}_n \right)_i$ represents $\mathcal{O}_n$ evaluated at the $i$-th particle and $\mathcal{O}_{n,m}$ is the corresponding value if the $i$-th particle falls into the $m$-th bin in the 3-dimensional volume of $\delta_1$-$\nabla^2\delta_1$-$\mathcal{G}_2$. Rearranging the above equation gives
\begin{equation}
1+\delta^{\rm model}_{h, j}
= \sum_n \sum_m A_{jm} \underbrace{\mathcal{O}_{n,m}}_{T_{mn}} b_n = \sum_n B_{jn} b_n
\end{equation}
where $T$ is a $N_{\rm bins} \times N_{\rm bias}$ matrix with $T_{mn}$ = $\mathcal{O}_{n,m}$, and $B = AT$. Therefore the polynomial bias expansion solution $f_{\rm poly} = (b_0, b_1, ..., b_{N_{\rm bias}})$ reads
\begin{equation}
f_{\rm poly} = \left( B^T B \right)^{-1} B^T \left( 1 + \delta^{\rm true}_h \right).
\label{eq:f_bias}
\end{equation}
Following \cite{vlah16, schmittfull19, kokron21}, we expand up to second-order bias
\begin{equation}
f(\{ \mathcal{O} \}) = b_0 + b_1\delta_1 + b_2\delta_1^2 + b_{\nabla^2}\nabla^2\delta_1,
\end{equation}
and use equation~\eqref{eq:f_bias} to calculate the $(b_0, b_1, b_2, b_{\nabla^2})$ coefficients from our simulations, using the $A$ matrices that have already been computed for obtaining the non-parametric $f$.\footnote{Another way of computing the biases is to directly solve the least-squares problem $\sum_n b_n \tilde{\mathcal{O}}_n = 1+\delta_h^{\rm true}$, where $\tilde{\mathcal{O}}_n$ represents the advected fields obtained by CIC interpolating the particles with their corresponding $\mathcal{O}_n$ weights \cite{kokron21}. We have verified that using our equation~\eqref{eq:f_bias} gives roughly the same results as directly fitting the biases. Since the direct fit method can include all of $\delta_1, \delta_1^2, \nabla^2\delta_1, \mathcal{G}_2$, we have also confirmed that further including $\mathcal{G}_2$ in addition to $\nabla^2\delta_1$ in the bias expansion only affects the recovery of the halo power spectrum by $\lesssim1\%$ for the mass cuts that we considered. As we mentioned in Section~\ref{sec:results_diffN}, a negligible impact of $\mathcal{G}_2$ for our mass cuts is consistent with previous works on the tidal shear bias.}
The resulting bias expansion solution thus represents fitting the halos at the field level with $(b_0,b_1,b_2,b_{\nabla^2})$, with the same $R_f$ and $k_{\rm max}$ as the non-parametric $f$. We note that we are still minimizing the real-space squared error, unlike \cite{schmittfull19, kokron21} who essentially minimize the error of fitting the halo power spectrum. As a consequence, we are not fitting for the power spectrum, but the halo field.
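As a minimal sketch of this fit (our own illustration; we use a numerically stabler least-squares routine rather than explicitly forming the normal equations of equation~\eqref{eq:f_bias}):
\begin{verbatim}
import numpy as np

def fit_bias_expansion(B, delta_h_true):
    """Solve the bias-expansion fit in the least-squares sense.
    B:            (N_cells, N_bias) design matrix, B = A T
    delta_h_true: (N_cells,) true halo overdensity grid
    Returns the coefficients (b_0, b_1, b_2, b_nabla2)."""
    coeffs, *_ = np.linalg.lstsq(B, 1.0 + delta_h_true, rcond=None)
    return coeffs
\end{verbatim}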
Our approach is also different from \cite{modi17, abidi18} since we do not calculate the bias using the halo-matter cross power spectrum or higher-order halo statistics. Our definitions of the bias and the operators differ from previous works on the bias expansion, but this does not affect the results in this section, and we are mainly interested in comparing the bias expansion with our non-parametric $f$.
The middle and right panels of Figure~\ref{fig:f_diffN_bias} compare the non-parametric $f$ (solid lines) to the bias expansion $f$ (dashed lines) in different $\nabla^2\delta_1$ bins represented by different colors (as in Figure~\ref{fig:f_diffN}), for the $M>3\times10^{11}$ and $6\times10^{12}\ h^{-1}\ M_\odot$ halos respectively. For both halo mass cuts, the bias expansion predicts negative $f$ at $\delta_1<0$ in the lower $\nabla^2\delta_1$ bins, which is unphysical.
While the bias expansion roughly captures the shape of the non-parametric $f$ for $M>3\times10^{11}\ h^{-1}\ M_\odot$ (except in the highest $\nabla^2\delta_1$ bin), it completely misses it for $M>6\times10^{12}\ h^{-1}\ M_\odot$.
This is expected since higher-mass halos showed a steeper behavior with $\delta_1$ in Figure~\ref{fig:f_diffN}, making $f$ less amenable to a bias expansion.
The bias expansion also predicts an unphysically rising $f$ at sufficiently negative overdensities.
\footnote{If we use $R_f=4\ h^{-1}$~Mpc and $k_{\rm max}=0.25\ h\ {\rm Mpc}^{-1}$ for the $M>6\times10^{12}\ h^{-1}\ M_\odot$ halos, which as we will show in Section~\ref{sec:results_Rf_and_kmax} results in the non-parametric $f$ better reproducing $P_{\rm h}$, the resulting non-parametric $f$ becomes slightly more linear but the comparison with the bias expansion remains similar.}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/P_diffN_f_d1nabla2d1_bias.pdf}
\caption{\label{fig:P_diffN_bias} Comparison of the power spectra obtained using the non-parametric $f(\delta_1,\nabla^2\delta_1)$ and the bias expansion for mass-weighted halos with $M>3\times10^{11}$ (left) and $6\times10^{12}\ h^{-1}\ M_\odot$ (right). Solid and dashed lines represent results using the non-parametric $f$ and bias expansion respectively. Shades represent 1-$\sigma$ scatter.}
\end{figure}
Figure~\ref{fig:P_diffN_bias} compares the power spectra obtained using the non-parametric $f(\delta_1,\nabla^2\delta_1)$ and the bias expansion for mass-weighted halos with $M>3\times10^{11}$ (left) and $6\times10^{12}\ h^{-1}\ M_\odot$ (right). Solid and dashed lines represent results using the non-parametric $f$ and bias expansion respectively. The bias expansion underpredicts $P_{\rm model}$ by about 1\% and by up to 4\% at $k=0.01-0.1\ h\ {\rm Mpc}^{-1}$ for $M>3\times10^{11}$ and $6\times10^{12}\ h^{-1}\ M_\odot$ respectively. The non-parametric $f$ thus outperforms the bias expansion for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos, recovering the halo power spectrum at the sub-percent level in the intermediate $k$ range, although it overpredicts the power spectrum of $M>6\times10^{12}\ h^{-1}\ M_\odot$ halos by 3\%. However, we will show below that using $R_f=4\ h^{-1}$~Mpc and $k_{\rm max}=0.25\ h\ {\rm Mpc}^{-1}$ results in percent-level recovery of the power spectrum of the more massive halos, while we have verified that these parameters lead to up to a 6\% underestimate of $P_{\rm model}/P_{\rm h}$ when using the bias expansion.
Since we do not fit the power spectrum but the halo field, it may not be surprising that our bias expansion $f$ does not reproduce the halo power spectrum as closely as previous works on Lagrangian biasing \citep[e.g.][]{kokron21}.
Furthermore, we follow a fully Lagrangian approach, not using any $k$-dependent biases or transfer functions on the final (Eulerian) space \citep[e.g.][]{schmittfull19}.
While our 1-4\% underestimation of $P_{\rm model}/P_{\rm h}$ using the bias expansion seems acceptable, we speculate that the oscillation of $f$ between positive and negative values at $\delta_1<0$ leads to a partial cancellation of the effects of this unphysical $f$, which violates monotonicity and non-negativity and rises again at low $\delta_1$. An unphysical $f$ that is negative at $\delta_1<0$ may lead to negative halo densities at the final redshift, whose effect may be small on the power spectrum but is likely important for the one-point function of the halo field.
\subsection{Results using small boxes}
\label{sec:results_smallbox}
Here we derive $f(\delta_1, \nabla^2\delta_1)$ for multiple $500\ h^{-1}$~Mpc small box simulations and apply the solutions to the $2\ h^{-1}$~Gpc large box simulations. This is motivated by the fact that realistic galaxy populations can only be modeled in small-box (hundreds of Mpc) cosmological hydrodynamical simulations (e.g., Illustris \cite{vogelsberger14}, IllustrisTNG \cite{weinberger17, pillepich18, tng_dr}, EAGLE \cite{schaye15, eagle_dr}, BAHAMAS \cite{mccarthy17}, MAGNETICUM \cite{hirschmann14}, Horizon-AGN \cite{dubois14}), while large box sizes are required to capture the low-wavenumber modes and allow for a systematic exploration of halo clustering. Moreover, recent studies have begun to explore galaxy bias in cosmological hydrodynamical simulations \cite{chavesmontero16, springle18, monterodorta20, barreira21}. We thus aim to test whether small-box simulations produce $f$ solutions consistent with the large boxes, and whether these solutions lead to a precise match of $P_{\rm model}$ to $P_{\rm h}$ when applied to large box simulations.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/f_and_P_N150_small.pdf}
\caption{\label{fig:f_and_P_N150_small} The $f$ solutions obtained from small-box simulations and the corresponding $P_{\rm model}$, compared to those derived from the large boxes alone. Left panel compares the $f(\delta_1,\nabla^2\delta_1)$ solutions for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos obtained from the $2\ h^{-1}$Gpc boxes (thick lines) to those from five $500\ h^{-1}$~Mpc boxes (thin lines). Different colors represent different $\nabla^2\delta_1$ bins, as in the bottom left panel of Figure~\ref{fig:f_N150}. Right panel shows $P_{\rm model}/P_{\rm h}$: the thick line represents the result of applying $f$ found from the big boxes to other big boxes, whereas the thin lines illustrate the results of applying $f$ from the small boxes to the big ones. Each $P_{\rm model}/P_{\rm h}$ curve is averaged over 10 big-box simulations to reduce Poisson noise.}
\end{figure}
We derive the $f$ solutions for mass-weighted halos within 5 small-box simulations, using the same $R_f=3\ h^{-1}$~Mpc, $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$, and $5\ h^{-1}$~Mpc cell size as the large boxes, and hence $100^3$ grids to interpolate the particles. Figure~\ref{fig:f_and_P_N150_small} shows the results of our calculations for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos.
The thin lines in the left panel illustrate the different $f$ solutions obtained from the small boxes, and the thick lines represent the averaged $f$ solution from the 6 large boxes discussed above. Different colors represent different $\nabla^2\delta_1$ bins, as in the bottom left panel of Figure~\ref{fig:f_N150}.
The small-box $f$ solutions fluctuate around the large-box ones and have more scatter in the less occupied $\delta_1$ and $\nabla^2\delta_1$ bins.
The right panel shows $P_{\rm model}/P_{\rm h}$, where each curve is averaged over 10 large-box simulations. The thick line represents using the averaged $f$ from the large boxes, while the 5 thin lines illustrate the results of applying each of the 5 $f$ solutions to the large boxes to calculate the model grid. Although variations exist, applying the small-box $f$ to large boxes leads to $P_{\rm model}/P_{\rm h}$ consistent with applying the large-box $f$ to within the sub-percent level. In future work we plan to test whether such stability holds when using halo occupation distribution models \cite{yuan18, hadzhiyska20, hadzhiyska21}.
\subsection{Effects of different smoothing scales and wavenumber cuts}
\label{sec:results_Rf_and_kmax}
Here we discuss the effects of using different smoothing radii $R_f$ and cutoff wavenumbers $k_{\rm max}$ on the $f$ values and the model power spectrum. We only calculate $f$ from one simulation for computational efficiency, and compute the associated model power spectrum using that same simulation.
We perform the calculations using $f(\delta_1,\nabla^2\delta_1)$, but show the weighted average of $f$, with the weights given by the percentile ranges of the $\nabla^2\delta_1$ bins.
We also present $f$ solutions and the resulting $P_{\rm model}$ without the normalization (equation~\eqref{eq:inte_f_constraint}) and non-negativity ($f\ge0$) constraints, but will discuss the effects of these constraints in detail in Section~\ref{sec:results_qp_constraints}.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/f_and_P_N150_Rf3_diffkmax.pdf}
\caption{\label{fig:f_and_P_diffkmax} Effects of varying $k_{\rm max}$ on $f$ (left) and $P_{\rm model}$ (right), for mass-weighted halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$. We fix $R_f=3\ h^{-1}$~Mpc. Blue, green, black, and red represent $k_{\rm max}=0.1,0.2,0.3,0.5\ h\ {\rm Mpc}^{-1}$ respectively.
The left panel illustrates the $\nabla^2\delta_1$-averaged $f$ as a function of $\delta_1$, where solid and dotted lines show results with and without the normalization and non-negativity constraints respectively. The right panel shows $P_{\rm model}/P_{\rm h}$, and the vertical dashed lines illustrate the corresponding values of $k_{\rm max}$.}
\end{figure}
Figure~\ref{fig:f_and_P_diffkmax} shows the result of varying the $k$ cuts ($k_{\rm max}=0.1,0.2,0.3,0.5\ h\ {\rm Mpc}^{-1}$) on the fit for $f$ (left panel) and $P_{\rm model}/P_{\rm h}$ (right panel) for mass-weighted halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$ using $R_f=3\ h^{-1}$~Mpc.
Solid and dotted lines represent results with and without the normalization and non-negativity constraints respectively, and here we only focus on discussing the former.
We find that $f$ becomes slightly more linear with lower $k$ cuts. Although not shown in this plot, for the higher $k_{\rm max}=0.5\ h\ {\rm Mpc}^{-1}$, $f$ appears to be more non-monotonic in $\nabla^2\delta_1$ at $\delta_1>2$.
The recovery of the halo power spectrum is best with $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ since it corresponds to $\sim 1/R_f$. Lower $k$ cuts raise $P_{\rm model}$ by 1-2\% in the relevant range ($k=0.01\ h\ {\rm Mpc}^{-1} - k_{\rm max}$), while a larger $k_{\rm max}=0.5\ h\ {\rm Mpc}^{-1}$ reduces $P_{\rm model}$ by 2\%.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/f_and_P_N150_diffRf_diffkmax.pdf}
\caption{\label{fig:f_and_P_diffRf} Effects of different $R_f$ on $f$ and $P_{\rm model}$, for halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$. Different colors represent varying $R_f$ together with $k_{\rm max}$, similar to Figure~\ref{fig:f_and_P_diffkmax}.}
\end{figure}
Figure~\ref{fig:f_and_P_diffRf} compares the fits for $f$ and $P_{\rm model}/P_{\rm h}$ when using different $R_f$ values for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos.
We choose $R_f=2,3,5,10\ h^{-1}$~Mpc, and set maximum wavenumbers at $k_{\rm max}=0.5,0.3,0.2,0.1\ h\ {\rm Mpc}^{-1}$ for them, where each $k_{\rm max}$ is roughly $1/R_f$. The $f$ solution becomes more linear with larger smoothing scales, consistent with the linear-bias picture. Setting $R_f=2\ h^{-1}$~Mpc leads to an overestimation of $P_{\rm model}/P_{\rm h}$ by 3\% at $k=0.01\ h\ {\rm Mpc}^{-1} - k_{\rm max}$, while $R_f=5$ and $10\ h^{-1}$~Mpc result in a 1-2\% underestimation. This points to the need to select a proper smoothing scale according to the mass of the halos under consideration.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/P_diffN_diffRf.pdf}
\caption{\label{fig:diffN_diffRf}
Ratio $P_{\rm model}/P_{\rm h}$ of the model and halo power spectra for halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$ (left panel) and $6\times10^{12}\ h^{-1}\ M_\odot$ (right panel). Black and red lines represent results using $R_f=3\ h^{-1}$~Mpc, $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ and $R_f=4\ h^{-1}$~Mpc, $k_{\rm max}=0.25\ h\ {\rm Mpc}^{-1}$, where the larger $R_f$ provides a better fit for the heavier halos. The vertical dashed lines illustrate the corresponding values of $k_{\rm max}$.}
\end{figure}
Finally, we explore whether a larger smoothing scale might lead to better recovery of the halo power spectrum for the $M>6\times10^{12}\ h^{-1}\ M_\odot$ halos, as our power-spectrum predictions were biased high in Figure~\ref{fig:P_diffN}. The mass-weighted average masses of halos above thresholds of $M>3\times10^{11}\ h^{-1}\ M_\odot$ and $6\times10^{12}\ h^{-1}\ M_\odot$ are $2.4\times10^{13}\ h^{-1}\ M_\odot$ and $5.1\times10^{13}\ h^{-1}\ M_\odot$, respectively, which would be the masses contained in Gaussian filters with $R_f=2.6\ h^{-1}$~Mpc and $3.3\ h^{-1}$~Mpc respectively. Figure~\ref{fig:diffN_diffRf} shows $P_{\rm model}/P_{\rm h}$ for the $M>3\times10^{11}\ h^{-1}\ M_\odot$ (left panel) and $6\times10^{12}\ h^{-1}\ M_\odot$ (right panel) halos. Black and red lines represent results using $R_f=3\ h^{-1}$~Mpc, $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ and $R_f=4\ h^{-1}$~Mpc, $k_{\rm max}=0.25\ h\ {\rm Mpc}^{-1}$ respectively. While $R_f=3\ h^{-1}$~Mpc leads to sub-percent level recovery of the halo power spectrum for the lower mass cut and 3-4\% overestimation of $P_{\rm model}/P_{\rm h}$ for the higher mass cut, $R_f=4\ h^{-1}$~Mpc results in sub-percent recovery of the halo power spectrum for the higher mass cut, but 1\% underestimation of $P_{\rm model}/P_{\rm h}$ for the lower mass cut. This suggests that it is unlikely to find one smoothing scale that works for all halo mass thresholds, as different mass cuts require a corresponding $R_f$. We also note that while $R_f=4\ h^{-1}$~Mpc gives a better recovery of the power spectrum of the massive halos, it does not fully eliminate the non-monotonic behavior of $f$ with respect to $\nabla^2\delta_1$, seen in the bottom right panel of Figure~\ref{fig:f_diffN}.
In summary, we find that varying $R_f$ and $k_{\rm max}$ can result in a few percent variations in $P_{\rm model}/P_{\rm h}$ at $k=0.01-0.1\ h\ {\rm Mpc}^{-1}$. This implies that sub-percent recovery of the halo power spectrum may require fine-tuning of the parameters, although our default choice of $R_f=3\ h^{-1}$~Mpc and $k_{\rm max}=0.3\ h\ {\rm Mpc}^{-1}$ works well for $M>3\times10^{11}\ h^{-1}\ M_\odot$ halos. Future work may explore a broader parameter space and aim to eliminate this fine-tuning.
\subsection{Effects of the non-negativity and normalization constraints}
\label{sec:results_qp_constraints}
We now discuss the effects of including the non-negativity ($f\ge0$) and normalization (equation~\eqref{eq:inte_f_constraint}) constraints on $f$ and $P_{\rm model}$. The $\nabla^2\delta_1$ bin-averaged $f$ solutions and the corresponding $P_{\rm model}/P_{\rm h}$ without these constraints are shown by the dotted lines in Figure~\ref{fig:f_and_P_diffkmax} and \ref{fig:f_and_P_diffRf}, while Figure~\ref{fig:f_N150} contains a full comparison of $f$ with and without constraints in different $\nabla^2\delta_1$ bins.
Including the constraints affects $f$ primarily at $\delta_1<0$.
When fixing $R_f$ and lowering $k_{\rm max}$ (Figure~\ref{fig:f_and_P_diffkmax}), the $\nabla^2\delta_1$ bin-averaged $f$ tends to be more negative at $\delta_1/\sigma(\delta_1)<-2$. While not shown in this plot, for the higher $k_{\rm max}=0.5\ h\ {\rm Mpc}^{-1}$, $f$ is more negative in higher $\nabla^2\delta_1$ bins at $\delta_1/\sigma(\delta_1)<0$, even though the $\nabla^2\delta_1$ bin-averaged $f$ is non-negative. It also becomes more non-monotonic in $\nabla^2\delta_1$ at $\delta_1/\sigma(\delta_1)>2$.
When changing $R_f$ and setting $k_{\rm max}\sim1/R_f$ (Figure~\ref{fig:f_and_P_diffRf}), the differences in $f$ with and without constraints diminish for larger $R_f$. For $R_f=2\ h^{-1}$~Mpc, $f$ without constraints becomes more negative in higher $\nabla^2\delta_1$ bins at $\delta_1/\sigma(\delta_1)<0$, which leads to $f<0$ when averaged in $\nabla^2\delta_1$ bins.
The changes in $f$ at $\delta_1<0$ when including the constraints tend to raise $P_{\rm model}$ by up to 2\% at $k=0.01-0.1\ h\ {\rm Mpc}^{-1}$. This effect is more evident for the larger $k_{\rm max}=0.5\ h\ {\rm Mpc}^{-1}$ when fixing $R_f=3\ h^{-1}$~Mpc, or when using a small $R_f=2\ h^{-1}$~Mpc with $k_{\rm max}=0.5\ h\ {\rm Mpc}^{-1}$. For $R_f\ge5\ h^{-1}$~Mpc, $f$ becomes more linear and including the constraints no longer makes a big difference in the resulting $f$ and $P_{\rm model}$. These results echo those of Section~\ref{sec:results_Rf_and_kmax}: our model may still require some fine-tuning of its parameters to recover the halo power spectrum well, which we leave to future work for a detailed exploration.
\section{Conclusions}
\label{sec:conclusions}
We have developed a fully Lagrangian halo biasing model that is non-parametric and qualitatively different from the traditional bias expansion. We measured the halo-to-mass ratios $f$ using mass-weighted halos in N-body simulations, assuming $f$ is a function of the smoothed linear overdensity $\delta_1$, the tidal operator $\mathcal{G}_2$, and a non-local term $\nabla^2\delta_1$.
Our derived $f$ functions are non-negative and monotonically increasing with $\delta_1$ for mass-weighted halos, unlike a polynomial of $\delta_1$, which does not guarantee these properties. We find that $f$ clearly deviates from a polynomial function of $\delta_1$ as would be expected from the bias expansion. These trends are more evident for more massive halos, for which $f$ soars up at $\delta_1>0$.
We find that including $\nabla^2\delta_1$ is essential to reproducing the power spectrum of mass-weighted halos. In particular, our $f(\delta_1,\nabla^2\delta_1)$ is able to recover the power spectrum of mass-weighted halos with $M>3\times10^{11}\ h^{-1}\ M_\odot$ at sub-percent level of accuracy at $k=0.01-0.1\ h\ {\rm Mpc}^{-1}$ given an appropriate smoothing scale to filter the initial density field.
On the other hand, treating $f$ as a function only of $\delta_1$ leads to a 15\% overestimation of the halo power spectrum. The inclusion of $\mathcal{G}_2$ only reduces this overestimation by 2\%. Similar conclusions hold for all halo mass cuts considered in our work, $M>3\times10^{11}-6\times10^{12}\ h^{-1}\ M_\odot$. This is consistent with previous works which find that either the tidal shear bias is unimportant for halos less massive than $\sim10^{13}\ M_\odot$, or there is a small negative tidal bias across a range of halo masses (\cite{saito14, bel15, abidi18, lazeyras18}, but see also \cite{modi17}).
However, the amplitude of the halo power spectrum is more overestimated with $f(\delta_1,\nabla^2\delta_1)$ for larger halo mass thresholds and at $k<0.01\ h\ {\rm Mpc}^{-1}$, suggesting a need to use larger smoothing scales for more massive halos.
How well the non-parametric $f$ recovers the halo power spectrum is also mildly dependent on input parameters such as the smoothing scale.
By measuring $f(\delta_1,\nabla^2\delta_1)$ using mass-weighted halos in $500\ h^{-1}$~Mpc simulations and applying the resulting $f$ to $2\ h^{-1}$~Gpc simulations, we find that the halo power spectrum can still be matched to within percent level accuracy. While we have not tested our formalism using number-weighted halos or galaxies populated with a halo-occupation-distribution model, this shows the potential of applying our framework on small-box cosmological hydrodynamical simulations.
We compared our non-parametric $f$ with the $f$ function assuming the bias expansion, which exhibits negative values at $\delta_1<0$ and then rises to positive again at lower overdensities. We find that using the same smoothing scales and wavenumber cuts, the bias expansion underpredicts the amplitude of the halo power spectrum by up to 4\%.
Having found good performance with our formalism, we list the possible extensions and improvements of our model below:
\setlist{nolistsep}
\begin{itemize}[noitemsep]
\item Test our formalism using number-weighted halos or halos weighted by a halo-occupation distribution;
\item Examine the use of a Poisson likelihood in obtaining $f$ instead of a Gaussian likelihood as in the least-squares fitting, as halo number counts are expected to follow a Poisson distribution;
\item Study whether $\mathcal{G}_2$ plays a more important role in modeling $M>10^{13}-10^{14}\ M_\odot$ halos using our non-parametric model;
\item Apply our model to high-redshift halos, since halo formation becomes rarer and more extreme at early times;
\item Implement an improved version of our formalism that can take multiple smoothing scales.
\end{itemize}
In summary, we have developed a picture of halo formation that differs substantially from the traditional bias expansion approach. We have also demonstrated the potential for our non-parametric halo-to-mass ratio to be implemented and tested in future simulations and observational surveys, with some improvements to our formalism left for future work.
\acknowledgments
JBM is supported by a Clay fellowship at the Smithsonian Astrophysical Observatory. DJE is partially supported by U.S. Department of Energy grant DE-SC0013718, NASA ROSES grant 12-EUCLID12-0004, NASA contract NAS5-02015, and as a Simons Foundation Investigator.
\section{Introduction}
Companies often face the problem of searching through \textcolor{black}{and extracting information that they are interested in,} from their unorganized mix of physical paper and digital documents. This process can be time-consuming and tedious. \textcolor{black}{To automate this process, people studied the Key Information Extraction (KIE) task~\cite{cheng2020one,rusinol2013field,2019One}.} Thus, KIE is crucial to a company in terms of efficiency and productivity, and it has been successfully used in many industrial scenarios, such as fast indexing and efficient archiving.
A typical KIE pipeline consists of three key steps: text detection, text recognition, and text field labeling, as shown in Fig.~\ref{fig:field-labeling-pipline}. While text detection and recognition approaches~\cite{yang2018inceptext,wan2020textscanner,yue2020robustscanner} have been studied widely in the area of Optical Character Recognition (OCR), one-shot learning based text field labeling is less studied. The text field labeling task aims to assign each text field one of a set of predefined labels.
\begin{figure}[t]
\centering
\includegraphics[width = 0.7\linewidth]{spatial_drift.pdf}\\
\bcaption{\label{fig:spatial_drift}Samples containing drifted fields in the d0 dataset. (a) Support document. (b) Query document. The red boxes represent landmarks (static zones), and the blue ones indicate fields (dynamic zones). The user annotates the support fields in the support document with predefined labels. The labels of query fields in a query document can then be inferred in three steps. First, align the query fields with their corresponding landmarks, as indicated by the yellow lines. Second, find the support fields that align to the same landmarks; a mapping between a support field and a query field exists if they align to the same landmark. Third, assign each query field the label of its mapped support field. Fields drifted during printing are likely to misalign with their corresponding landmarks.}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width = 0.9\linewidth]{pipline.pdf}\\
\bcaption{\label{fig:field-labeling-pipline}The pipeline for extracting user-predefined key information. (a) Receipt, (b) Text detection, (c) Text recognition, (d) Text field labeling.}
\end{figure*}
The layout of a document plays a key role in distinguishing different fields. Generally, as illustrated in Fig.~\ref{fig:field-labeling-pipline}, the \textit{Total}\_\textit{Amount} \textit{7.60} is much more likely to appear above \textit{Tendered}\ \textit{10.00}. Fig.~\ref{fig:our_dataset} shows some document images with different layouts and categories. Many learning-based methods~\cite{d2018field,xu2020layoutlm,Yu2020PICKPK,vrdgcn_naacl19,qian-etal-2019-graphie} have been proposed to utilize both the text and visual patterns for the KIE task. They have shown good performance, but they require sufficient training data. To reduce labeling cost and avoid training a separate model on a large amount of data for each type of document, one-shot learning methods have been studied. Early one-shot methods~\cite{chiticariu2013rule,schuster2013intellix,rusinol2013field,d2018field} are usually template-based entity extraction approaches. However, these rule-based methods are limited to specific layouts and do not generalize to all types of documents. Cheng et al.\cite{cheng2020one} proposed an attention-based learning approach to transfer the spatial relationships between landmarks and fields from a support document to a query document. However, their method cannot deal with drifted fields and outliers. In practice, printed documents often contain drifted fields and outliers, as shown in Fig.~\ref{fig:spatial_drift}. Drifted fields are fields printed in unexpected positions, so the spatial relationships between landmarks and drifted fields differ from those between landmarks and non-drifted fields. A direct transfer from the support document to the query document fails because of this difference. Outliers are fields that match no field in the support document, such as unexpected handwritten words; their method cannot pick out outliers either.
To address the challenges of drifted fields and outliers, we propose to cast the text field labeling task as a partial graph matching problem. Our method uses multiple features, such as position, shape, and text embedding, to measure the similarity score between a support field and a query field. It then maps support fields to query fields so as to maximize the sum of similarity scores over all mapped pairs of fields. In particular, our method obeys the one-to-(at most)-one mapping constraint when searching for the mapping between fields. This constraint helps map drifted query fields to the correct support fields even when other support fields appear more similar, and it likewise leaves outliers unmatched.
The major contributions of this paper can be summarized as follows:
\begin{itemize}
\item We propose a deep end-to-end trainable network for one-shot Key Information Extraction (KIE) using partial graph matching with the one-to-(at most)-one mapping constraint. Our method learns similarities and solves the combinatorial optimization in a single end-to-end framework, rather than treating these two phases separately as many previous methods do. To the best of our knowledge, this is the first KIE approach that generates globally optimized solutions.
\item We design a simple context ensemble block to fuse features of spatial, textual, and aspect representations. The proposed framework is general enough to plug in other constraints, such as the zero-assignment constraint, to adapt to different KIE tasks.
\item To promote research in KIE, we construct a new dataset, and the proposed one-shot KIE model will be released soon. Note that the dataset covers diverse types of document images, many of which are highly difficult due to spatial drift.
\item Our method achieves state-of-the-art performance on the collected datasets.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width = 0.9\linewidth]{overview.pdf}\\
\bcaption{\label{fig:system} Overview of the proposed model. In step (a), we build the graphs, then extract and concatenate vertex and edge features. In step (b), we feed the different features into separate Multi-Layer Perceptrons (MLPs), whose outputs are vertex and edge affinity matrices. In step (c), we compute the average vertex and edge affinity matrices over the different MLPs. In step (d), the average vertex and edge affinity matrices are fed into a combinatorial solver such as the ZAC-GM solver, whose output $\hat{P}$ is the predicted permutation matrix; the elements of $\hat{P}$ are 1 or 0. In step (e), we calculate the Hamming loss between $\hat{P}$ and the ground truth $P^*$. Lastly, we compute the gradients of the Hamming loss with respect to the parameters of the MLPs. Each $\nabla$ in the gradient matrix means the corresponding element is non-zero.}
\end{figure*}
\section{Related Work}
In this section, we first review previous work on the keyword spotting task, KIE, and one-shot learning for KIE, and then discuss the graph matching approaches that inspired our method.
\subsection{Keyword Spotting}
Methods for the KeyWord Spotting (KWS) task~\cite{vidal2021probabilistic} cannot solve the KIE task. KWS checks whether a given text exists in an image and finds its location, whereas KIE aims at assigning a label to each field based on text detection and recognition results, e.g., identifying ``Tom'' as ``name'' in an ID card. KWS cannot find ``Tom'' from the support document alone because names vary across ID cards. Thus, neither the methods nor the datasets for KWS are suitable for KIE.
\subsection{Key Information Extraction}
Language-model-based methods~\cite{devlin2018bert,dai-etal-2019-transformer,yang2019xlnet} work on plain text representations. However, document layout information is also crucial for information extraction. Many learning-based methods \cite{denk2019bertgrid,katti-etal-2018-chargrid,palm2019attend} therefore use both textual and visual embeddings to enhance the performance of KIE.
Liu et al.\cite{vrdgcn_naacl19} introduced a method that combines visual and textual information in an image with a graph convolution model. Yu et al.\cite{Yu2020PICKPK} presented a layout extraction framework combining graph learning with graph convolutions, which results in rich semantic representations of textual, visual, and layout information. Zhang et al.\cite{zhang2020trie} fused the visual and textual embeddings so that the two tasks reinforce each other's learning. Inspired by BERT\cite{devlin2018bert}, Xu et al.\cite{xu2020layoutlm} proposed a pre-training method that jointly models the text and layout information within a single framework. However, this method requires explicit segmentation of individual words, so some modern OCR approaches are not applicable.
While the approaches discussed above achieve promising results, a separate model must be trained for each type of document, which wastes resources. Additionally, a large number of labeled images must be collected and manually annotated for each document category, which is labor-intensive and time-consuming.
\subsection{One-shot Learning of KIE}
Medvet et al.\cite{medvet2011probabilistic} proposed a probabilistic model to search for key information in a document. However, their method requires the two sequences to have the same length. Rusinol et al.\cite{rusinol2013field} presented an iterative framework to extract information from administrative documents. They introduced a star graph to model the spatial relationships among different fields, with the weight of each node adapted by term frequency-inverse document frequency (TF-IDF). However, in scenarios such as invoices, most words are digits, so TF-IDF is not robust enough.
Cheng et al.\cite{cheng2020one} presented a one-shot field labeling method using attention and belief propagation to retrieve structured information. Although their method dramatically simplifies the labeling process and achieves good performance compared with previous one-shot approaches, the final matching results are not globally optimal. For example, as illustrated in Fig.~\ref{fig:spatial_drift}, $phone\_b$ and $plate\_no$ are labeled as the same class due to the vertical drift caused by the printer.
Existing one-shot approaches are mostly rule-based and struggle to identify text fields close to each other. In particular, their extraction performance drops sharply when a large spatial drift occurs between a landmark and its corresponding fields. These performance drops suggest that existing models are sensitive to spatial relationship variations. This paper proposes a deep end-to-end trainable structured information extraction framework that is topology-invariant and globally optimized, alleviating cases in which two different fields are mapped to the same category.
\subsection{Graph Matching}
Graph matching approaches have been widely used in computer vision tasks, such as keypoint matching. In this subsection, we focus on deep learning methods for graph matching.
Hammami et al.\cite{hammami2015one} proposed a subgraph-isomorphism-based method to extract informative areas in administrative and commercial forms using color information. The information extraction task is then converted to searching for the sub-graph of a query document that best matches the graph representation of the support document. However, many documents are scanned in black and white, which limits the applicability of this method.
Zanfir and Sminchisescu\cite{zanfir2018deep} proposed to combine deep feature extraction and combinatorial optimization in an end-to-end learning framework. Wang et al.\cite{wang2019learning} presented an end-to-end differentiable deep combinatorial learning approach for graph matching. In contrast to the pixel offset loss\cite{zanfir2018deep}, a permutation loss based on a Sinkhorn net was employed to handle an arbitrary number of nodes. Further, Wang et al.\cite{wang2020learning} embedded the learning of affinities and the combinatorial solver into a unified framework instead of solving them separately\cite{zanfir2018deep}.
\begin{figure*}[!t]
\centering
\subfloat[Samples contain multi-region fields.]{\includegraphics[width=0.3151\linewidth]{case_1_num.png}%
\label{fig_first_case}}
\subfloat[Samples contain drifted fields.]{\includegraphics[width=0.368\linewidth]{case_2_num.png}%
\label{fig_second_case}}
\subfloat[Samples contain outliers.]{\includegraphics[width=0.3151\linewidth]{case_3_num.png}%
\label{fig_third_case}}
\caption{
The one-to-(at most)-one mapping constraint helps resolve the problems of drifted fields and outliers. The red bounding boxes are landmarks, and the remaining boxes are fields. We omit some of the boxes of both landmarks and fields for clarity. The handwritten words in c2 and c4 are both ``$\textcircled{4}$''.}
\label{fig:one_to_most_one}
\end{figure*}
\section{Our Model}\label{sec:methodology}
In this section, we introduce our framework in detail; its overview is presented in Fig.~\ref{fig:system}. In the first subsection, we define all the notations on graphs. In the second subsection, we discuss how to formulate the partial graph matching problem and how to annotate training data to avoid many-to-many mappings between fields, which violate the definition of the graph matching problem~\cite{cheng2020one}. In the third subsection, we report important details of constructing the graphs of fields. In the fourth subsection, we propose to use different MLP modules to calculate similarity scores between fields or edges based on different features. In the last subsection, we apply two solvers to the partial graph matching problem based on these similarity scores.
\subsection{Notations on Graphs}
We follow~\cite{hammami2015one} in calling the set of dynamic text regions \textbf{Fields}. We use $f$ to denote a field, with superscripts differentiating the fields within one document, e.g., the $i$-th field is denoted $f^i$. Each field has vertex features $x$ and a label $y$. There are several ways to generate such node features. Firstly, we can use the set of static text regions, denoted \textbf{Landmark} $L$, to generate the spatial feature of each field. Secondly, we can use text embeddings to generate the semantic feature of each field. Thirdly, the aspect of each field, namely the width and height of its OCR bounding box, can also be a useful feature. For a document, we denote the set of its fields as $F=\left\{f^i, 1\le i\le|F|\right\}$, the set of node features as $X=\left\{x^i\right\}$, and the set of labels as $Y=\left\{y^i\right\}$. Notations on edges are flexible: given two fields $f^i,f^j\in F$, both $ij$ and $f^if^j$ represent the directed edge from $f^i$ to $f^j$. We define the set of all edges as $FF=\left\{ij,\forall f^i,f^j \in F\right\}$. We can thus represent a document as a quadruple $G=\left\{F, FF, X, Y\right\}$.
We use subscripts to differentiate the support and query documents, i.e., $s$ denotes the support document and $q$ the query document; $f^i_q$ is the $i$-th field in a query document. The one-shot KIE problem is to predict the label of each query field $f_{q}^{i}$ whose ground-truth label is $y_{q}^{i}$. We propose to solve the one-shot KIE problem using partial graph matching, such that if the query field $f_{q}^i$ is matched with the support field $f_{s}^{a}$, then the model predicts the label of $f_{q}^i$ to be $y_{s}^a$.
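As an illustrative summary of these notations (a purely hypothetical container, not part of our implementation), a document graph can be organized as:
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DocumentGraph:
    """G = {F, FF, X, Y}: fields, edges, features, labels."""
    fields: List[str]             # OCR text of each field f^i
    edges: List[Tuple[int, int]]  # directed edges (i, j) in FF
    features: List[list]          # vertex features x^i
    labels: List[str]             # labels y^i (support document)
\end{verbatim}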
\subsection{Solving one-shot KIE with Partial Graph Matching}
Based on the above notations, the formulation of partial graph matching requires two additional concepts. We follow Burkard et al.\cite{burkard1998quadratic} in using the concave quadratic formulation of the graph matching problem; partial graph matching shares the same concepts but has different constraints.
The first one is the permutation matrix $P$, whose element $P_{ia}$ is $1$ if the query field $f_{q}^{i}$ is matched with the support field $f_{s}^{a}$, and $0$ otherwise. This matrix describes the matching between $G_q$ and $G_s$, and has $|F_{q}|\times|F_{s}|$ elements.
The second one is the affinity matrix $A$, a square matrix that operates on the vectorized version of $P$. Note that the vectorized $P$ lives in $\mathbb{R}^{|F_{q}|\,|F_{s}|}$, so the shape of $A$ is $(|F_{q}|\,|F_{s}|)\times(|F_{q}|\,|F_{s}|)$. The elements of $A$ in different positions have different meanings. The off-diagonal elements describe how similar two edges are, where one edge comes from the graph $G_{q}$ and the other from $G_{s}$: if $f_{q}^{i}f_{q}^{j}$ is an edge in $FF_{q}$ and $f_{s}^{a}f_{s}^{b}$ is an edge in $FF_{s}$, their similarity score in $A$ is denoted $A_{ij}^{ab}$. For the diagonal elements, we use $A_{ii}^{aa}$ to denote how similar two fields are, i.e., $A_{ii}^{aa}$ is the similarity score between $f_{q}^{i}$ and $f_{s}^{a}$.
Finally, the partial graph matching problem is formulated as a constrained optimization problem:
\begin{alignat*}{2}
\max_{P}\quad & \sum_{i=1}^{|F_q|}\sum_{a=1}^{|F_s|}P_{ia}A_{ii}^{aa}P_{ia} + \sum_{ij\in FF_q}\sum_{ab\in FF_s}P_{ia}A_{ij}^{ab}P_{jb}, & \tag{1}\\
\mbox{s.t.}\quad& P\mathbf{1}\leq\mathbf{1},\quad P^{\top}\mathbf{1}\leq\mathbf{1},\quad P\in\{0,1\}^{|F_{q}|\times|F_{s}|}, &\tag{2}
\label{eqa:gm}
\end{alignat*}
where $\mathbf{1}$ is a column vector whose elements are all one. All the notations in equations (1) and (2) are explained in the two preceding subsections. The first inequality in equation (2) forbids a feasible permutation matrix $P$ from matching multiple support fields with one query field, and the second forbids $P$ from matching multiple query fields with one support field. Both inequalities allow some support and query fields to match no field at all. The first term in equation (1) sums over all possible matchings between support and query fields to calculate the vertex similarity score; the second term sums over all possible matchings between support and query edges to calculate the edge similarity score. A query edge $f_q^if_q^j$ is matched with a support edge $f_s^af_s^b$ if and only if $f_q^i$ is matched with $f_s^a$ and $f_q^j$ is matched with $f_s^b$.
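To make the role of the constraint concrete, the following simplified sketch (our own illustration, not one of the solvers used in this paper) keeps only the vertex term of equation (1) and solves the resulting linear assignment problem; outliers are handled here by a post-hoc score threshold rather than by the solver itself:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_fields(vertex_affinity, min_score=0.5):
    """One-to-(at most)-one matching on vertex affinities.
    vertex_affinity: (n_query, n_support) matrix of A_ii^aa.
    Returns {query index: support index}; query fields left
    unmatched (outliers) are absent from the mapping."""
    rows, cols = linear_sum_assignment(-vertex_affinity)  # maximize
    return {i: a for i, a in zip(rows, cols)
            if vertex_affinity[i, a] >= min_score}
\end{verbatim}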
Fig.~\ref{fig:one_to_most_one} shows how the one-to-(at most)-one constraint is ensured and how it helps resolve the problems of drifted fields and outliers. In practice, a document may contain many multi-line fields, such as the fields in subfigure (a): they are supposed to share the same label but are bounded by separate boxes. Cheng et al.~\cite{cheng2020one} suggest using the average box of the support fields that share a label to match with query fields. As shown in a1 and a2, their method leads to a one-to-many mapping that violates the one-to-(at most)-one mapping constraint. Instead, we add a number suffix to the label of each multi-line support field, so that a one-to-(at most)-one mapping between multi-region fields becomes possible, as shown in a3 and a4. We remove the number suffix after prediction to restore the original label of each field.
In subfigure (b), the yellow line segments indicate that the fields in b2 and b4 have drifted downward compared with b1 and b3. A direct transfer of spatial relationships from b1 to b2 fails: the support field ``$\$17.00$'' is mapped to both ``$\$0.00$'' and ``$\$51.00$''. The one-to-(at most)-one constraint forbids our model from doing so. In b2 and b4, to satisfy the constraint, our model maps each support field to the correct query field even though they are not each other's most similar fields in terms of spatial relationship.
In subfigure (c), c2 and c4 both contain the same outlier, ``$\textcircled{4}$''. Without the constraint, the method of Cheng et al.~\cite{cheng2020one} maps a wrong support field to ``$\textcircled{4}$'', as shown in c1 and c2. In contrast, our model can decline to match any support field with ``$\textcircled{4}$'' because of the constraint.
\subsection{Document Graph Construction}
\noindent\textbf{Graph Vertices}. We only regard fields as graph vertices for both support and query documents.
We use landmarks to generate spatial features for the fields. Specifically, for a target field, the line segments connecting its center point with all landmarks are arranged as a 2D matrix of shape $|L|\times2$, where $|L|$ denotes the number of landmarks. The spatial features of different fields are stacked so that the overall spatial feature tensor $X$ of a document has shape $|F|\times|L|\times2$, where $|F|$ denotes the number of fields. We also use the OCR bounding box of each field to generate its aspect feature, i.e., the height and width of the bounding box are concatenated into a 2-dimensional feature; the aspect features of a document have shape $|F|\times2$. We use average word embeddings to generate the textual feature of each field, based on 300-dimensional pre-trained word embeddings~\cite{P18-2023} that are frozen during training. The textual features of a document have shape $|F|\times300$.
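The following sketch (our own; representing a line segment by its displacement vector is an assumption) builds the three per-field feature arrays described above:
\begin{verbatim}
import numpy as np

def field_features(centers, boxes, landmarks, word_vecs):
    """Per-field features of one document.

    centers   : (F, 2) field center points
    boxes     : (F, 2) height and width of each OCR bounding box
    landmarks : (L, 2) landmark center points
    word_vecs : (F, 300) average pre-trained word embeddings (frozen)
    """
    # Spatial: segment from each field center to every landmark,
    # represented here as a displacement vector -> shape (F, L, 2).
    spatial = landmarks[None, :, :] - centers[:, None, :]
    aspect = boxes                        # (F, 2)
    textual = word_vecs                   # (F, 300)
    return spatial, aspect, textual
\end{verbatim}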
For all documents, the landmarks and fields are detected automatically by OCR systems and then labeled manually. For each type of document, we select one document as the support document, and the rest serve as query documents. The support document should be as complete as possible.
We remove the extra landmarks of each query document and repair the missing ones relative to the support document. If a field is split into several parts by an imperfect OCR system, we merge these parts into one field. Note that this operation is possible only for the training data; during evaluation, the model assigns the ``outliers'' label to the extra fields.
\noindent\textbf{Graph Edges}. For each document, we build a visibility graph among fields and then apply Prim's algorithm~\cite{cheriton1976finding} to obtain its minimum spanning tree, which serves as the final graph. Specifically, each field emits 36 rays to search for its visible neighbors. The resulting visibility graph may contain many loops, and we find in practice that it is important to remove all of them via the minimum spanning tree, preserving the shorter edges connecting neighboring fields, which yields better performance. Each edge has two types of features: 1) the direction feature, which is the line segment connecting the two fields; and 2) the aspect feature, obtained by concatenating the height and width of the start field with those of the end field into a 4-dimensional feature.
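A minimal sketch of this pruning step, using SciPy's minimum-spanning-tree routine in place of a hand-written Prim's algorithm (the edge-list representation is our assumption):
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def prune_to_tree(num_fields, visible_edges, lengths):
    """Keep only the minimum-spanning-tree edges of the visibility
    graph, removing loops while preferring shorter edges.

    visible_edges : list of (i, j) pairs from the 36-ray search
    lengths       : matching Euclidean edge lengths (all > 0)
    """
    W = np.zeros((num_fields, num_fields))
    for (i, j), d in zip(visible_edges, lengths):
        W[i, j] = d                        # weight = edge length
    mst = minimum_spanning_tree(W)         # sparse result
    ii, jj = mst.nonzero()
    return list(zip(ii.tolist(), jj.tolist()))
\end{verbatim}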
\begin{figure*}[t]
\centering
\subfloat[Training data, whose supervision signal is a permutation matrix.]{\includegraphics[width=0.49\linewidth]{ranking_loss_short.png}%
\label{samples}}
\hfil
\subfloat[The permutation matrix.]{\includegraphics[width=0.238\linewidth]{permut_long.png}%
\label{permut_matrix}}
\hfil
\subfloat[The vertex affinity matrix.]{\includegraphics[width=0.238\linewidth]{affi_mat_long.png}%
\label{fig_second_case}}
\caption{\textcolor{black}{In subfigure (a), the red boxes are landmarks, the blue boxes are fields, and the black lines indicate the mapping from support to query fields. In subfigure (b), each row of the matrix corresponds to a query field, and each row has at most one entry equal to one, with all other entries zero. The third entry of the fifth row is ``1", meaning that the fifth query field, whose text is ``00086508", corresponds to the third support field ``00347247". The fourth row has no entry equal to ``1", meaning that the fourth query field has no matching support field. We shuffle the indices of the query fields so that the permutation matrix is not an identity matrix. In subfigure (c), as in the permutation matrix, each row of the vertex affinity matrix corresponds to a query field. The entries are the similarity scores between support and query fields, calculated by formula (\ref{eqa:vertex_affinity}). We only show the entries of correct pairs of fields; the omitted entries are generally nonzero. The similarity score of a correct pair of fields should be larger than those of the wrong pairs lying in the same row or column.}}
\label{fig:ranking_loss}
\end{figure*}
\subsection{Vertex and Edge Affinities}
For a pair of fields, we can use multiple features to compute the vertex affinity between them. Specifically, we compute their spatial, aspect, and textual affinities, and the average of these affinities is the final vertex affinity. We concatenate the features of the query and support fields and apply a Multi-Layer Perceptron (MLP) to generate the affinity score between them. Below we describe the computation of the spatial affinity matrix in detail; the aspect, textual, and edge affinity matrices are computed similarly.
For fields $f_q^i$ and $f_s^a$, their spatial features $x_q^i$ and $x_s^a$ are matrices with the same shape, $|L|\times 2$. We have aligned the landmarks so that a query document always has landmarks corresponding to those of the support document. It is therefore reasonable to fix a specific landmark $l^k$ and concatenate the two line segments $l^kf_q^i$ and $l^kf_s^a$, where $l^kf_q^i$ connects the center point of $f_q^i$ with $l^k$, and similarly for $l^kf_s^a$. Letting $l^kf_q^i\oplus l^kf_s^a$ denote the concatenated feature, the affinity score between $f_q^i$ and $f_s^a$ w.r.t. landmark $l^k$ is
\begin{equation}
Affi_{spatial}(f_q^i, f_s^a, l^k)=MLP(l^kf_q^i\oplus l^kf_s^a). \tag{3}
\end{equation}
By iterating over all landmarks, we can concatenate $x_q$ and $x_s$ into a $|L|\times 4$ matrix, denoted $x_q\oplus x_s$. Finally, the spatial affinity between $f_q^i$ and $f_s^a$ equals the average of the affinity scores over all landmarks:
\begin{equation}
A_{ii_{_{spatial}}}^{aa}=\frac{1}{|L|}\sum_{k=1}^{|L|}Affi_{spatial}(f_q^i, f_s^a, l^k). \tag{4}
\end{equation}
Note that the spatial affinity matrix $A_{ii_{_{spatial}}}^{aa}$ is calculated in a vectorized way: $X_q$ and $X_s$ are concatenated into a $|F_q|\times|F_s|\times|L|\times4$ tensor, which is fed into an MLP module to compute an affinity score tensor of shape $|F_q|\times|F_s|\times|L|$. We then average over the last dimension to obtain the spatial affinity matrix $A_{ii_{_{spatial}}}^{aa}$ of shape $|F_q|\times|F_s|$.
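A hedged PyTorch sketch of this vectorized computation (the hidden layer size is our assumption, not the paper's configuration):
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialAffinity(nn.Module):
    """Vectorized form of equations (3)-(4): concatenate per-landmark
    segments of query and support fields, score them with an MLP,
    and average over landmarks."""
    def __init__(self, hidden=64):    # hidden size is an assumption
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x_q, x_s):
        # x_q: (Fq, L, 2) query segments, x_s: (Fs, L, 2) support
        Fq, L, _ = x_q.shape
        Fs = x_s.shape[0]
        xq = x_q[:, None].expand(Fq, Fs, L, 2)
        xs = x_s[None, :].expand(Fq, Fs, L, 2)
        pair = torch.cat([xq, xs], dim=-1)   # (Fq, Fs, L, 4)
        scores = self.mlp(pair).squeeze(-1)  # (Fq, Fs, L)
        return scores.mean(dim=-1)           # (Fq, Fs) affinity
\end{verbatim}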
We compute the aspect and textual affinity matrices in a similar way but with separate MLP modules. The final vertex affinity matrix is the average of all affinity matrices, computed as follows:
\begin{equation}
A_{ii}^{aa}=\frac{1}{3}(A_{ii_{_{spatial}}}^{aa}+A_{ii_{_{aspect}}}^{aa}+A_{ii_{_{textual}}}^{aa}). \tag{5}
\label{eqa:vertex_affinity}
\end{equation}
The off-diagonal elements of $A$ are calculated similarly. For an edge $f_q^if_q^j$, we use $f_q^if_{q_{_{direct}}}^{j}$ to denote its direction feature and $f_q^if_{q_{_{aspect}}}^{j}$ to denote its aspect feature; the notation for an edge $f_s^af_s^b$ is analogous. The similarity score between two edges is then computed by:
\begin{align}
A_{ij}^{ab}&=\frac{1}{2}(MLP(f_q^if_{q_{_{direct}}}^{j}\oplus f_s^af_{s_{_{direct}}}^{b}) \tag{6}\\
&\quad +MLP(f_q^if_{q_{_{aspect}}}^{j}\oplus f_s^af_{s_{_{aspect}}}^{b})). \tag{7}
\end{align}
Note that we use separate MLP modules for vertex and edge affinities.
\subsection{Combinatorial Solver}
Fig.~\ref{fig:system} shows the pipeline of our model. After calculating the affinity matrix, we need to solve the partial graph matching problem and back-propagate through the solver.
\noindent\textbf{Solving Partial Graph Matching Problem}.
Inspired by recent work on fusing deep learning and combinatorics~\cite{vlastelica2019differentiation}, we adopt two solvers for the partial graph matching problem. The first is the DD-ILP solver~\cite{swoboda2017dual}, a third-party library that solves a specific class of discrete optimization problems called Integer-Relaxed Pairwise-Separable Linear Programs (IRPS-LP); the partial graph matching problem in formulation (1)--(2) is an instance of this class.
\textcolor{black}{We also reimplement the ZAC-GM solver of~\cite{wang2020zero}. Although the formulation of the ZAC-GM solver differs from equations (1) and (2), its input, output, and constraints are the same as those of DD-ILP. In~\cite{wang2020zero}, the authors establish a sufficient condition under which a vertex affinity matrix leads to an optimal permutation matrix that represents the correct mapping between vertices. Inspired by this condition, we design an additional ranking loss to regularize the MLP modules; during training, it enlarges the gap between the similarity scores of correct and wrong vertex pairs.}
\textcolor{black}{Fig.~\ref{fig:ranking_loss} illustrates how this ranking loss is calculated for a pair of support and query documents. Take the support field ``00347247" and the query field ``00086508" as an example: their similarity score should be higher than the score between ``00347247" and any other query field, and also higher than the score between ``00086508" and any other support field. Experiments show the effectiveness of this ranking loss.}
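A possible form of this ranking loss is sketched below (the hinge form and margin value are our assumptions; the exact expression is not specified here):
\begin{verbatim}
import torch

def ranking_loss(affinity, P_star, margin=0.5):
    """Push each labeled pair's score above all competing scores
    in its row and column by `margin`.

    affinity : (Fq, Fs) vertex affinity matrix
    P_star   : (Fq, Fs) ground-truth permutation matrix
    """
    rows, cols = torch.nonzero(P_star, as_tuple=True)
    loss = affinity.new_zeros(())
    for r, c in zip(rows.tolist(), cols.tolist()):
        pos = affinity[r, c]
        rivals = torch.cat([affinity[r, :c], affinity[r, c + 1:],
                            affinity[:r, c], affinity[r + 1:, c]])
        # hinge: penalize rivals within `margin` of the true score
        loss = loss + torch.clamp(margin - pos + rivals, min=0).sum()
    return loss / max(len(rows), 1)
\end{verbatim}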
\noindent\textbf{Back-Propagation Through the Solver}. We adopt the Hamming loss between the prediction $\hat{P}$ and the label $P^*$. A fundamental problem in fusing deep learning and combinatorics is that the gradient with respect to the network parameters is zero almost everywhere: in our model, a subtle change of the affinity matrix does not change the predicted permutation matrix $\hat{P}$, i.e., $\hat{P}$ is a piecewise constant function of the parameters of the MLP modules. To overcome this problem, we adopt the techniques described in \cite{rolinek2020deep}. An additional benefit of the Hamming loss is that wrong predictions also generate gradients, as shown in Fig.~\ref{fig:system}, which leads to faster convergence than the cross-entropy (CE) loss used in the LF-BP model. For example, the CE loss only considers the negative diagonal elements of $P^*$ in Fig.~\ref{fig:system}, while the Hamming loss also propagates through the non-zero off-diagonal elements.
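A minimal sketch of this scheme, in the spirit of the blackbox-differentiation recipe of \cite{vlastelica2019differentiation,rolinek2020deep} (the signs follow the cost-minimization convention of that work and may need flipping for our affinity-maximization setting; the value of $\lambda$ is an assumption):
\begin{verbatim}
import torch

class SolverLayer(torch.autograd.Function):
    """Back-propagation through a black-box solver: the backward
    pass re-runs the solver on perturbed affinities and returns a
    finite-difference gradient. `solve` maps an affinity matrix to
    a 0/1 matching matrix (DD-ILP or ZAC-GM in our pipeline)."""

    @staticmethod
    def forward(ctx, affinity, solve, lam=20.0):
        P_hat = solve(affinity)
        ctx.save_for_backward(affinity, P_hat)
        ctx.solve, ctx.lam = solve, lam
        return P_hat

    @staticmethod
    def backward(ctx, grad_output):
        affinity, P_hat = ctx.saved_tensors
        # Perturb the input in the direction of the incoming
        # gradient (e.g., of the Hamming loss) and re-solve.
        P_lam = ctx.solve(affinity + ctx.lam * grad_output)
        return -(P_hat - P_lam) / ctx.lam, None, None
\end{verbatim}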
\begin{table}[!hbt]
\caption{Statistics of DKIE datasets.}
\label{tab:dataset_statistics}
\centering
\begin{tabular}{c c c c c}
\hline
\hline
Dataset & Description & $\#$ Styles & $\#$ Docs & $\#$ Fields \\
\hline
\textcolor{black}{d0} & Taxi receipts & 12 (7:5) & 136 & 16 \\
\hline
\textcolor{black}{d1} & CHSR tickets & 1 (All test) & 169 & 11 \\
\hline
d2 & Boarding pass & 2 (1:1) & 54 & 10 \\
\hline
d3 & PT invoice (Special) & 2 (1:1) & 151 & 15 \\
\hline
\textcolor{black}{d4} & VAT invoice (Normal) & 2 (1:1) & 118 & 12 \\
\hline
d5 & Ferry tickets & 2 (1:1) & 98 & 14 \\
\hline
d6 & Airline itinerary & 3 (2:1) & 107 & 25 \\
\hline
\textcolor{black}{d7} & VAT invoice (Special) & 2 (1:1) & 118 & 44 \\
\hline
d8 & Medical invoice & 3 (2:1) & 163 & 36 \\
\hline
\textcolor{black}{d9} & Quota invoice & 4 (All train) & 162 & 9 \\
\hline
d10 & Bank card & 1 (All train) & 197 & 8 \\
\hline
d11 & Express bill & 1 (All train) & 157 & 5 \\
\hline
d12 & Toll fee & 1 (All train) & 151 & 10 \\
\hline
d13 & Customs declaration & 3 (All train) & 158 & 14 \\
\hline
d14 & Duty-paid proof & 3 (All train) & 106 & 6 \\
\hline
d15 & Car-hailing receipts & 2 (All train) & 162 & 21 \\
\hline
d16 & VAT invoice (Volume) & 2 (All train) & 151 & 33\\
\hline
\hline
\end{tabular}
\end{table}
\begin{table*}[!hbt]
\centering
\bcaption{Comparison with state-of-the-art supervised and one-shot learning methods.}
\label{tab:over_all_results}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
\hline
\multicolumn{2}{c|}{\multirow{2}{*}{Method}} & \multirow{2}{*}{Features} & Training & Testing & \multicolumn{8}{c}{Accuracy(\%)}\\
\cline{6-14}
\multicolumn{2}{c|}{} & & data size & data type & d0 & d1 & d2 & d3 & d4 & d5 & d6 & d7 & d8\\
\hline
\multirow{3}{*}{One-shot} & LF-BP~\cite{cheng2020one} & \multirow{3}{*}{spatial} & \multirow{3}{*}{1861 all styles} & \multirow{3}{*}{clean} & 94.7 & \textcolor{black}{99} & 100 & 100 & 92.1 & \textcolor{black}{100} & \textcolor{black}{92.6} & \textcolor{black}{100} & \textcolor{black}{100}\\
\cline{2-2}
\cline{6-14}
& Ours (DD-ILP) & & & & \textcolor{black}{99.1} & \textcolor{black}{100} & \textcolor{black}{100} & \textcolor{black}{100} & \textcolor{black}{100} & \textcolor{black}{100} & \textcolor{black}{98} & \textcolor{black}{100} & \textcolor{black}{100}\\
\cline{2-2}
\cline{6-14}
& Ours (ZAC-GM) & & & & 99.1 & \textcolor{black}{100} & 100 & 100 & 100 & \textcolor{black}{100} & \textcolor{black}{97.8} & \textcolor{black}{100} & \textcolor{black}{100}\\
\hline
\multirow{3}{*}{One-shot} & LF-BP~\cite{cheng2020one} & \multirow{3}{*}{spatial} & \multirow{3}{*}{1861 all styles} & \multirow{3}{*}{drifted} & 60 & & 68.6 & 42.4 & & \textcolor{black}{80.9} & \textcolor{black}{75.6} & & \textcolor{black}{65.2}\\
\cline{2-2}
\cline{6-6}
\cline{8-9}
\cline{11-12}
\cline{14-14}
& Ours (DD-ILP) & & & & \textcolor{black}{97} & - & \textcolor{black}{71.1} & \textcolor{black}{90} & - & \textcolor{black}{100} & \textcolor{black}{96} & - & \textcolor{black}{96.2}\\
\cline{2-2}
\cline{6-6}
\cline{8-9}
\cline{11-12}
\cline{14-14}
& Ours (ZAC-GM) & & & & 96.3 & & 71.1 & 93.9 & & \textcolor{black}{100} & \textcolor{black}{96} & & \textcolor{black}{96.2}\\
\hline
\multirow{3}{*}{One-shot} & LF-BP~\cite{cheng2020one} & \multirow{3}{*}{spatial} & \multirow{3}{*}{1861 all styles} & \multirow{3}{*}{outliers} & 64.4 & \textcolor{black}{97.9} & 90 & 81.3 & 93.1 & \textcolor{black}{97} & \textcolor{black}{70} & \textcolor{black}{91.2} & \textcolor{black}{90}\\
\cline{2-2}
\cline{6-14}
& Ours (DD-ILP) & & & & \textcolor{black}{89} & \textcolor{black}{97.2} & \textcolor{black}{85.2} & \textcolor{black}{79.1} & \textcolor{black}{86.3} & \textcolor{black}{96.2} & \textcolor{black}{88} & \textcolor{black}{91.3} & \textcolor{black}{95.8}\\
\cline{2-2}
\cline{6-14}
& Ours (ZAC-GM) & & & & 91.2 & \textcolor{black}{98.7} & 90 & 100 & 98 & \textcolor{black}{100} & \textcolor{black}{95.8} & \textcolor{black}{96} & \textcolor{black}{97}\\
\hline
Supervised & LayoutLM~\cite{xu2020layoutlm} & spatial+visual & \multirow{2}{*}{3K per style} & \multirow{5}{*}{all} & 97.3 & 97.0 & - & 94.6 & 95.0 & 96.1 & 91.8 & 94.3 & 92.5 \\
\cline{2-2}
\cline{6-14}
learning & PICK~\cite{Yu2020PICKPK} & +text & & & 97.9 & 97.8 & - & 95.8 & 95.7 & 96.6 & 92.3 & 94.9 & 92.2 \\
\cline{1-4}
\cline{6-14}
\multirow{3}{*}{One-shot} & LF-BP~\cite{cheng2020one} & \multirow{3}{*}{spatial} & \multirow{3}{*}{1861 all styles} & & 80.39 & 98.2 & 84.1 & 94.4& 92.5& 97.3& 87.2 & 95.8 &93.8\\
\cline{2-2}
\cline{6-14}
& Ours (DD-ILP) & & & & 93.6 & 98.5 & 84.7 & 96.0 & 97.1 & 98.4 & 94.4 & 96.1 & 97.2\\
\cline{2-2}
\cline{6-14}
& Ours (ZAC-GM) & & & & 95.7 & \textcolor{black}{99} & 85.2 & 98.5 & 99.5 & \textcolor{black}{100} & \textcolor{black}{96.4} & 97.2 & \textcolor{black}{98}\\
\hline
\hline
\end{tabular}
\end{table*}
\section{EXPERIMENTS}\label{sec:experiments}
\subsection{Datasets}
\textcolor{black}{The DocLL/SROIE-Oneshot datasets collected by \cite{cheng2020one} had not been released by the time we submitted this paper. Therefore, we created our own datasets to promote research on the one-shot KIE task, especially on the problems of drifted fields and outliers. To ensure a fair comparison on our datasets, we used the same features and training settings as \cite{cheng2020one}.}
\noindent\textbf{DKIE One-shot Dataset}. We created a new dataset consisting of 2,500 documents. \textcolor{black}{We report the statistics of the DKIE dataset in Table \ref{tab:dataset_statistics}. The dataset is grouped into 17 broad categories such as Value-Added Tax (VAT) invoices, medical invoices, and taxi receipts. Each category may contain several styles of documents; different styles within one category have similar layouts. However, each style needs an independent support document because different styles have different landmarks and labels for the one-shot learning methods.} Fig.~\ref{fig:our_dataset} shows sample images from the testing set. The dataset is challenging because the document images are taken with smartphones; 3D distortions, varying image sizes, drifted fields, and unmatched fields are therefore quite common.
\noindent\textbf{Groundtruth Generation}. We asked two annotators to label the data separately, then cross-checked and rectified the incorrect labels. During inference, we repair the missing landmarks of a query document with dummy bounding boxes to guarantee that the support and query documents have the same number of landmarks. For multi-region fields, we add a number suffix to the original labels as described in subsection B of our model.
\subsection{Implementation Details}
\noindent\textbf{State-Of-The-Art Models}.
We compared our method with LayoutLM~\cite{xu2020layoutlm}, PICK~\cite{Yu2020PICKPK}, and LF-BP~\cite{cheng2020one}. LayoutLM and PICK are supervised-learning models; LF-BP and our model are one-shot learning models.
\noindent\textbf{Training Details}.
We used PyTorch to implement our models. All models are trained on a single NVIDIA Tesla V100 GPU with 16~GB of memory. We applied Adam to optimize the model with a batch size of 8. \textcolor{black}{The initial learning rate is 0.05 and decays by a factor of 0.85 after each epoch.}
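The stated optimization setting corresponds roughly to the following PyTorch configuration (the epoch count and placeholder model are assumptions):
\begin{verbatim}
import torch

model = torch.nn.Linear(4, 1)   # placeholder for the MLP modules
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer,
                                                   gamma=0.85)

for epoch in range(30):          # epoch count is an assumption
    # ... iterate over batches of size 8, compute the loss,
    # call loss.backward() and optimizer.step(), then:
    scheduler.step()             # per-epoch decay by 0.85
\end{verbatim}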
To maintain good performance, supervised-learning-based models generally keep different parameters for different styles of documents. \textcolor{black}{Therefore, we split each style of documents into training and testing data to train separate parameters for each style. We generate 3,000 documents for each style, including all the real samples; the rest are synthesized.} The remaining training details are the same as in the original methods~\cite{xu2020layoutlm,Yu2020PICKPK}.
In contrast, one-shot-learning-based models can handle different styles of documents with the same parameters. \textcolor{black}{Therefore, we train one model on different styles of documents. For the one-shot learning methods, we split the documents of each category into training and testing sets according to their styles. For example, there are 12 styles in the taxi receipts category; we choose 7 styles as training data and the remaining 5 styles as testing data. The third column of Table \ref{tab:dataset_statistics} shows the number of styles in each category, with the ratio between training and testing styles in parentheses. In total there are 1,861 training documents and 639 testing documents.}
\noindent\textbf{Testing Details}. For one-shot-learning-based models, we predefined the support document of each style, and the other images of the same style serve as query documents. \textcolor{black}{For each style, a document containing as many landmarks/fields as possible makes a good support document.} Our model predicts the label of each field in the query documents using the labels defined in the support documents.
\textcolor{black}{To study the performance of our model on samples containing drifted fields and outliers separately, we further split the testing data of each style into 3 parts, as shown in the fifth column of Table \ref{tab:over_all_results}: ``clean" documents, in which all fields are nicely aligned with the landmarks and there are no outliers; ``drifted" documents, in which some fields are drifted so badly that even humans need to check each field carefully to judge its label; and documents containing ``outliers". A small number of documents contain both drifted fields and outliers; we include them in the ``drifted" and ``outliers" subsets at the same time. For the ``all" subset, we report the average accuracy over the fields of all query documents within the same category.}
\subsection{Experimental Results}\label{sec:experimental_results}
In this section, we compare our method with existing SOTA methods. The results are presented in Table~\ref{tab:over_all_results}. For a fair comparison, we only use spatial features in our model, because the LF-BP model~\cite{cheng2020one} only consumes spatial features. The effect of other features is discussed in Section~\ref{sec:ablation_study}.
\subsubsection{Performance on ``Clean" Documents}
\textcolor{black}{All models perform well on the ``clean" data in Table~\ref{tab:over_all_results}. Our model converges faster than the LF-BP model during training. For each iteration, we calculate the average accuracy over the fields of all training documents; part of this curve is shown in Figure~\ref{fig:average_acc}. Unlike the LF-BP model, whose average accuracy increases relatively smoothly, the accuracy of our model increases drastically in the initial training stage. Two reasons explain this rapid increase. First, our model uses the Hamming loss to calculate gradients while the LF-BP model uses the cross-entropy loss. For each query field, when our model matches it with the wrong support field, the Hamming loss generates gradients not only from the labeled pairs of fields but also from the wrong pairs; the cross-entropy loss, on the contrary, generates gradients only from the labeled pairs. Second, the combinatorial solvers in our model are not sensitive to subtle changes of the affinity matrices output by the MLP modules. Therefore, the MLP modules only need to output roughly correct affinity matrices for the combinatorial solvers to find the correct mapping between support and query fields, which also accelerates the increase in accuracy.}
\subsubsection{Performance on ``Drifted" Documents}
\textcolor{black}{The LF-BP model fails on the ``drifted" data in Table~\ref{tab:over_all_results}, while the performance of our model drops only moderately. Our model significantly outperforms the LF-BP model across all datasets, especially on the d0 dataset. We find that the fields in this dataset are arranged vertically: if a field in the head part of a document drifts downwards, all the fields below it drift downwards as well. Typical samples of the d0 dataset can be found in Figure~\ref{fig:spatial_drift}, Figure~\ref{fig:one_to_most_one}~(b) and Figure~\ref{fig:drifted_data}. Both the online demo released by~\cite{cheng2020one} and our reimplemented LF-BP model achieve low accuracy on the drifted fields. We show typical mistakes made by the LF-BP model in Figure~\ref{fig:drifted_data}; the left pair of documents shows the prediction of the online demo released by~\cite{cheng2020one}, where red lines indicate wrong mappings between fields. Although the LF-BP model does not map ``16.9" and ``\$0.00" to any support fields, the authors of \cite{cheng2020one} do not report this behavior in their paper. We analyze why the LF-BP model fails on the drifted fields in the case study in Section~\ref{subsec:case_study}, using samples from the d0 dataset.}
\textcolor{black}{Note that d1, d4 and d7 contain no documents with drifted fields. The fields drift horizontally in the d3, d5, d6 and d8 datasets. Around half of the documents in the d2 dataset contain flipped fields, as shown in Figure~\ref{fig:flipped_fields}; the models cannot predict the labels of these flipped fields correctly based on the spatial features alone. However, our model with the ZAC-GM solver achieves good performance when it uses additional features such as the width and height of the bounding boxes of fields. We discuss this problem in subsection~\ref{sec:ablation_study}. Figure~\ref{fig:our_dataset} shows examples of drifted fields in all datasets.}
\begin{figure}[h]
\centering
\includegraphics[width = 1.0\linewidth]{Fig_6.png}
\caption{\label{fig:average_acc}Average accuracy of all fields from different documents.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{ali_mistake.png}
\caption{\label{fig:drifted_data} Predictions of the LF-BP model (left) and our model (right) on documents containing drifted fields in the d0 dataset. The LF-BP model fails while our model handles the drifted fields.}
\end{figure}
\subsubsection{Performance on ``Outliers" Documents}
\textcolor{black}{Table~\ref{tab:over_all_results} shows that our model with the ZAC-GM solver is the only one that succeeds across all datasets. With the DD-ILP solver, our model cannot handle documents containing outliers: inspecting the original code, we find that it targets the full graph matching problem and requires the support and query documents to have the same number of fields, which does not hold for documents containing outliers. However, when we reimplement the ZAC-GM solver~\cite{wang2020zero} and employ it to pick out the outliers, our model can handle drifted fields and outliers at the same time, to some extent. Taking a close look at the predictions of our reimplemented LF-BP model, we find that the outliers are predicted to have the label of one of their neighbors, as shown in Figure~\ref{fig:one_to_most_one}~(c1) and (c2). Although~\cite{cheng2020one} do not report how they handle outliers, their released demo refuses to output the label of some outliers. We argue that our model provides an alternative approach to handling outliers.}
\textcolor{black}{Some documents in the d0, d3 and d6 datasets contain drifted fields and outliers at the same time, and we find such documents to be the most difficult: not only does the LF-BP model fail on them, but the performance of our model also drops by a relatively large margin. Figure~\ref{fig:d3_compare} shows such samples from the d3 dataset. When the outliers are close to the drifted fields, they are hard to distinguish from each other based on spatial features alone. For example, the LF-BP model maps the ``type" field in (a1) to the ``outliers$\_$2" field in (a2), and our model maps the ``fee" field in (b1) to the ``outliers$\_$1" field in (b2). The positions of these outliers are so close to other drifted query fields that the models may confuse them with the multi-region field situation. This indicates that we should measure the similarity between fields using more diverse features, such as the width and height of the bounding boxes or the text embeddings of fields. We discuss the impact of different features in the ablation study in Section~\ref{sec:ablation_study}.}
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{d3_compare.png}
\caption{\label{fig:d3_compare} Documents containing both drifted fields and outliers in the d3 dataset. (a1) and (a2) show the prediction of LF-BP model. (b1) and (b2) show the prediction of our model with ZAC-GM solver. Yellow boxes are outliers. Red lines indicate wrong mapping between fields. Both models failed on this pair of documents.}
\end{figure}
\subsubsection{Performance on ``All" Documents}
Our model outperformed the LF-BP method on all datasets. This is because our datasets are more challenging: many documents contain spatially drifted fields and outliers. In contrast to LF-BP, the solving process of our model generates globally optimized results through the one-to-(at most)-one constraint, which overcomes the difficulties brought by spatially drifted fields and outliers. Our model achieved competitive performance against the supervised-learning-based models on the d0 dataset, and better results on the other testing datasets. This result is not surprising because the supervised-learning-based models have three unfair advantages. First, they consume many more labeled documents than one-shot-learning-based methods during training, namely 3,000 samples per style versus 1,861 samples for all styles. Second, they train separate models for different types of documents, which helps them fit the data bias in each type; in contrast, one-shot-learning-based models use one model for all types of documents, so for each new style our model benefits from the other styles and requires only one labeled sample to serve as the support document. Lastly, they consume more features than the one-shot-learning-based models in this experiment, namely spatial, visual, and text features versus spatial features only.
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{flipped_fields.png}
\caption{\label{fig:flipped_fields} Documents containing flipped fields in the d2 dataset. (a1) and (a2) show the prediction of our model without aspect features. (b1) and (b2) show the prediction of our model with aspect features. Red lines indicate wrong mappings between fields.}
\end{figure}
\begin{table}[t]
\bcaption{Aspect features help deal with the flipped fields in the d2 dataset. Examples are shown in Figure~\ref{fig:flipped_fields}.}
\label{tab:abliation_results_drifted}
\centering
\begin{tabular}{c | c c c | c}
\hline
\hline
Different Features & Clean & Drifted(flipped) & Outliers & All \\
\hline
Spatial & 100 & 71.7 & 90 & 85.2 \\
\hline
Spatial+Aspect & 100 & 100 & 100 & 100 \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Ablation Study}\label{sec:ablation_study}
We conduct ablation studies on the d0 and d2 datasets to examine the importance of spatial features, aspect features, textual features, edge features, the number of landmarks, and the ranking loss. \textcolor{black}{The d0 dataset contains 5 testing styles of documents; the d2 dataset contains 1 testing style. Each style has a separate support document and a number of query documents. We use the d2 dataset to test whether different features can help handle the challenges of drifted fields and outliers; the results are presented in Table~\ref{tab:abliation_results_drifted}. We use the d0 dataset to test the impact of different features across different styles; the results of training with different features are presented in Table~\ref{tab:abliation_results}. We report the influence of the number of landmarks on the performance of our model in Fig.~\ref{fig:landmarkImpact}, and study the influence of the ranking loss in Table~\ref{tab:ranking_loss_impact}.}
\begin{table}[t]
\bcaption{Test of the impact of different features using different styles of documents in the d0 dataset. Document name aliases: ``SC'': SiChuan province, ``BJ'': BeiJing, ``AH'': AnHui province, ``JS'': JiangSu province, ``CQ'': ChongQing. ``AVG'' indicates average accuracy.}
\label{tab:abliation_results}
\centering
\begin{tabular}{c | c c c c c | c}
\hline
\hline
Different Features & SC & BJ & AH & JS & CQ & AVG\\
\hline
Spatial & 98.8 & 91.2 & 81.8 & 92.8 & 99.1 & 93.6\\
\hline
Aspect & 0 & 0 & 0 & 0 & 0 & 0\\
\hline
Text & 10.2 & 10.4 & 14.7 & 7.2 & 12.3 & 10.6\\
\hline
Edge & 66.2 & 87.9 & 53.1 & 98.8 & 88.0 & 78.6\\
\hline
Spatial+Aspect & 98.8 & 85.3 & 94.4 & 94.0 & 97.2 & 93.2\\
\hline
Spatial+Edge & 97.2 & 87.9 & 95.8 & 91.6 & 99.1 & 93.2\\
\hline
Aspect+Edge & 91.6 & 82.2 & 74.1 & 98.2 & 90.0 & 87.1\\
\hline
Spatial+Aspect+Edge & 98.2 & 88.5 & 97.2 & 94.0 & 99.1 & 94.2 \\
\hline
Spa+Aspe+Text+Edge & 96.9 & 93.3 & 95.8 & 94.6 & 99.1 & \textbf{95.1} \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{landmark_num_impact.pdf}\\
\bcaption{\label{fig:landmarkImpact}Labeling accuracy versus the number of landmarks. ``S'', ``A'', and ``E'' means spatial, aspect, and edge features.}
\end{figure}
\subsubsection{Different Features} \textcolor{black}{As shown in Figure~\ref{fig:flipped_fields} and Table~\ref{tab:abliation_results_drifted}, some fields are impossible to distinguish based on spatial features alone. For example, in (a1) and (b1) of Figure~\ref{fig:flipped_fields}, the ``name$\_$chinese" field lies above the ``name$\_$english" field, whereas their positions are exchanged in (a2) and (b2). According to the results in (a1) and (a2), when our model only uses spatial features it maps the ``name$\_$chinese" field to the ``name$\_$english" field because they occupy the same position. However, the two fields have very different widths and heights. Therefore, when our model incorporates the aspect features of the two fields into their similarity scores, it finds the correct mapping, as shown in (b1) and (b2) of Figure~\ref{fig:flipped_fields}. Table~\ref{tab:abliation_results_drifted} shows that our model achieves 100 percent accuracy on the d2 dataset when it also uses the aspect features.}
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{ranking_loss_ablation.png}
\caption{Accuracy of our models (ZAC-GM) trained with different ranking loss weights. We also include the training accuracy of the LF-BP model. Best viewed in color.}
\label{fig:ranking_loss_ablation}
\end{figure}
\textcolor{black}{This inspired us to design additional MLP modules to incorporate more diverse features, and to test the benefits of this practice across different datasets in Table~\ref{tab:abliation_results}.} First, we train and test our model with the spatial features only. The first line of Table~\ref{tab:abliation_results} indicates that our model achieves good performance based solely on the spatial features for most types of documents except the ``AH'' type. The second line shows that our model fails when it only consumes the aspect features. This is not surprising: many fields in the d0 dataset have very similar shapes and only a few have large bounding boxes, so this feature alone cannot distinguish the fields well. Similarly, our model fails when it uses the textual features alone, as seen in the third line of Table~\ref{tab:abliation_results}. Checking the text of many fields, we found that many of them, such as pure digit strings, are hard to distinguish by content. We also find that our model fails when the textual features are combined with other features, except when all features are used; we therefore omit these failed results from Table~\ref{tab:abliation_results} to save space. The fourth line of Table~\ref{tab:abliation_results} shows that our model cannot perform well with the edge features only. Investigating the edge sets of the documents, we find that many edges in the query documents cannot be found in their corresponding support documents, because the topology of the graph constructed for a query document can be very different from that of the support document.
Second, we evaluate the model performance by combining two types of features to train our model. If we use ``Spatial+Aspect'' features (spatial combined with aspect features), the accuracy of our model on ``AH'' and ``JS'' type increases, while its accuracy on ``BJ'' and ``CQ'' decreases. If we use ``Spatial+Edge'' features (spatial combined with edge features), our model can perform better on the ``AH'' type, and worse on the ``SC'', ``BJ'', and ``JS'' type. Not surprisingly, if we use ``Aspect+Edge'' features (aspect combined with edge features), the performance of our model will only increase on the ``JS'' type, while decreases on the rest types.
When we use ``Spatial+Aspect+Edge'' features, our model achieves its best accuracy on the ``AH'' documents, with the accuracy increasing from 14.7 to 97.2. Lastly, when we use all features (``Spatial+Aspect+Text+Edge''), the accuracy of our model exceeds 90\% on all types. This is surprising because the textual features alone tend to decrease the performance of our model. We therefore believe the proposed four features are complementary to each other, and it is worthwhile to use all of them when possible.
\begin{table}[t]
\centering
\bcaption{Impact of Ranking Loss on different solvers.}
\label{tab:ranking_loss_impact}
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\hline
& Solvers & SC & BJ & AH & JS & CQ\\
\hline
\multirow{2}{*}{Ranking loss (0)} & DD-ILP & 95.7 & 96.6 & 92.3 & 94.9 & 92.2 \\
\cline{2-7}
& ZAC-GM & 98.2 & 98.3 & 94.4& 92.5& 97.3\\
\hline
\multirow{2}{*}{Ranking loss (1)} & DD-ILP & 97.1 & 98.4 & 94.4 & 96.1 & 97.2\\
\cline{2-7}
& ZAC-GM & 98.8 & 99.2 & 98.8 & 99.8 & 99.1 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{table*}[t]
\centering
\caption{Different permutation matrices correspond to different total affinity scores.}
\label{tab:different_total_affinity_score}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\hline
\multicolumn{2}{c|}{\diagbox{(a1) in \\ Figure~\ref{fig:case_study_1}.}{Total:11.471 /\\ None}} &
\multicolumn{2}{c|}{\diagbox{(b1) in \\ Figure~\ref{fig:case_study_1}.}{Total:10.604 /\\10.481}} &
\multicolumn{2}{c|}{\diagbox{(c1) in \\ Figure~\ref{fig:case_study_1}.}{Total:9.782 /\\10.423}} &
\multicolumn{2}{c}{\diagbox{(d1) in \\ Figure~\ref{fig:case_study_1}.}{Total:11.257 /\\ None}}\\
\hline
Coordinates in $\hat{P}$ & Score & Coordinates in $\hat{P}$ & Score & Coordinates in $\hat{P}$ & Score & Coordinates in $\hat{P}$ & Score \\
\hline
\textcolor{black}{(S$\_$1, Q$\_$7)} & \textcolor{black}{1.000} & (S$\_$1, Q$\_$7) & 1.000 & (S$\_$1, Q$\_$7) & 1.000 & (S$\_$1, Q$\_$7) & 1.000\\
\hline
\textcolor{black}{(S$\_$1, Q$\_$10)} & \textcolor{black}{0.980} & (S$\_$2, Q$\_$2) & 0.959 & (S$\_$2, Q$\_$2) & 0.959 & (S$\_$2, Q$\_$10) & 0.937 \\
\hline
\textcolor{black}{(S$\_$1, Q$\_$2)} & \textcolor{black}{0.964} & (S$\_$3, Q$\_$11) & 0.937 & (S$\_$3, Q$\_$11) & 0.937 & (S$\_$3, Q$\_$2) & 0.910 \\
\hline
(S$\_$2, Q$\_$11) & 0.938 & (S$\_$4, Q$\_$3) & 0.879 & (S$\_$4, Q$\_$3) & 0.879 & (S$\_$4, Q$\_$11) & 0.902 \\
\hline
(S$\_$3, Q$\_$3) & 0.880 & (S$\_$5, Q$\_$8) & 0.863 & (S$\_$5, Q$\_$8) & 0.863 & (S$\_$5, Q$\_$3) & 0.861 \\
\hline
(S$\_$5, Q$\_$8) & 0.863 & (S$\_$6, Q$\_$1) & 0.842 & (S$\_$6, Q$\_$1) & 0.842 & (S$\_$6, Q$\_$8) & 0.845 \\
\hline
(S$\_$6, Q$\_$1) & 0.842 & (S$\_$7, Q$\_$6) & 0.832 & (S$\_$7, Q$\_$6) & 0.832 & (S$\_$7, Q$\_$1) & 0.823 \\
\hline
(S$\_$7, Q$\_$6) & 0.832 & \textcolor{black}{(S$\_$8, Q$\_$10)} & \textcolor{black}{0.122} & (S$\_$8, Q$\_$4) & 0.817 & (S$\_$8, Q$\_$6) & 0.810 \\
\hline
\textcolor{black}{(S$\_$9, Q$\_$4)} & \textcolor{black}{0.832} & (S$\_$9, Q$\_$4) & 0.832 & (S$\_$9, Q$\_$9) & 0.821 & (S$\_$9, Q$\_$4) & 0.832 \\
\hline
\textcolor{black}{(S$\_$9, Q$\_$9)} & \textcolor{black}{0.821} & (S$\_$10, Q$\_$9) & 0.818 & (S$\_$10, Q$\_$5) & 0.814 & (S$\_$10, Q$\_$9) & 0.818 \\
\hline
(S$\_$11, Q$\_$5) & 0.818 & (S$\_$11, Q$\_$5) & 0.818 & (S$\_$11, Q$\_$12) & 0.819 & (S$\_$11, Q$\_$5) & 0.818 \\
\hline
(S$\_$12, Q$\_$12) & 0.843 & (S$\_$12, Q$\_$12) & 0.843 & (S$\_$12, Q$\_$13) & 0.839 & (S$\_$12, Q$\_$12) & 0.843 \\
\hline
(S$\_$13, Q$\_$13) & 0.859 & (S$\_$13, Q$\_$13) & 0.859 & \textcolor{black}{(S$\_$13, Q$\_$10)} & \textcolor{black}{$-$0.641} & (S$\_$13, Q$\_$13) & 0.859 \\
\hline
\hline
\end{tabular}
\end{table*}
\begin{figure}[!hbt]
\centering
\includegraphics[width = 0.8\linewidth]{ali_demo_mistake_in_d0.png}
\caption{The predictions of the LF-BP model on the drifted fields in the d0 dataset.}
\label{fig:ali_demo_mistake_in_d0}
\end{figure}
\subsubsection{Landmarks} We further evaluate the impact of the number of landmarks on the accuracy of our model in Figure~\ref{fig:landmarkImpact}. In practice, text embeddings (300 dimensions) may be omitted to save computational resources, so we only test the ``Spatial+Aspect+Edge" combination (4 dimensions). The overall accuracy remains good if we drop up to 3 landmarks (see $-1$, $-2$, $-3$ on the x-axis). With ``Spatial+Aspect+Edge'' features, the labeling accuracy grows as the number of landmarks increases; this is not the case with the spatial features alone, which again shows that these features are complementary to each other.
\subsubsection{Ranking Loss} \textcolor{black}{We also evaluate the effectiveness of the ranking loss by changing its weight. Figure~\ref{fig:ranking_loss_ablation} shows that the ranking loss helps accelerate the training process. Even without the ranking loss, our model still outperforms the LF-BP model. As the weight of the ranking loss increases, our model converges much faster and its accuracy also increases; however, when the weight becomes too large, the performance drops. As shown in Table~\ref{tab:ranking_loss_impact}, applying the ranking loss improves the accuracy of both solvers by about 1$\%$ across the different testing styles of the d0 dataset.}
\subsection{Case Study}\label{subsec:case_study}
In this section, we present a case study on the d0 dataset; the performance on this dataset can be found in Table~\ref{tab:over_all_results}. The accuracy of our model on this dataset is much higher than that of the LF-BP method~\cite{cheng2020one}, even though both use landmarks to generate spatial features. We select a pair of documents of the ``SC'' type as an example. As shown in Fig.~\ref{fig:spatial_drift}, fields in the query document drift upwards compared with the fields in the support document. The LF-BP method successfully predicted the fields with the ``receipt-code", ``receipt-no", ``phone-a", and ``phone-b" labels, but failed on the remaining fields because they drifted and aligned with the wrong landmarks, so LF-BP mismatched their labels according to the landmarks. Although the Belief Propagation (BP) step in the LF-BP model may alleviate the spatial drift problem, both our re-implementation and their online demonstration\footnote{https://ocr.data.aliyun.com/experience\#/?first\_tab=general} failed on this example.
To examine the inference phase of our model in detail, we visualize its vertex affinity matrix in Fig.~\ref{fig:case_study_1}. The fields of the query document are ordered in the same vertical sequence as in Fig.~\ref{fig:spatial_drift}; we deliberately shuffle the order of the fields of the support document to make the task harder. Each row of the affinity matrix in Fig.~\ref{fig:case_study_1} gives the similarities between the current query field and all fields of the support document. Since the direct output of the vertex MLP is not normalized, we apply min-max normalization to each row of this matrix, i.e., each element has the minimum value of its row subtracted and is then divided by the maximum value of its row. If we applied the greedy strategy used in LF-BP to select a possible label for each query field, we would choose all ``1" elements; however, this solution conflicts with the one-to-(at most)-one constraint of our model. The combinatorial solvers in our model instead find a globally optimized labeling such that the total affinity summed over all chosen elements is maximized and the constraint is never violated. This makes our model less sensitive to vertex shifts, so that it handles the spatially drifted cases well. Our model picked the elements lying on the red line path shown in Fig.~\ref{fig:case_study_1}.
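The row-wise normalization described above amounts to the following small NumPy sketch:
\begin{verbatim}
import numpy as np

def rowwise_minmax(A):
    """Row-wise normalization as described above: subtract each
    row's minimum, then divide by the maximum of the shifted row,
    so the largest entry of every row becomes 1."""
    A = A - A.min(axis=1, keepdims=True)
    return A / A.max(axis=1, keepdims=True)
\end{verbatim}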
\begin{figure*}[!hbt]
\centering
\includegraphics[width = 0.85\linewidth]{case_study_1.png}
\caption{Different permutation matrices correspond to different total affinity scores.}
\label{fig:case_study_1}
\end{figure*}
\section{Conclusion}
In this work, we proposed to solve the text field labeling problem using graph matching. We designed a one-shot framework that combines the power of deep learning and combinatorial solvers. To the best of our knowledge, our framework is the first to generate globally optimized solutions for this task. Our model can handle spatially drifted documents and shows state-of-the-art performance on most testing datasets.
In future work, we will explore additional visual cues such as text color, fonts, and background, in addition to the current textual, aspect, and spatial relationship features. Our method can also be extended to few-shot learning by adding constraints such as cycle consistency, which we leave for future work.
\begin{figure*}[!htb]
\centering
\includegraphics[width = 1.0\linewidth]{Sample_receipts1.pdf}\\
\includegraphics[width = 1.0\linewidth]{Sample_receipts2.pdf}\\
\bcaption{\label{fig:our_dataset}Samples from the DKIE test set. Sensitive information has been masked.}
\end{figure*}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Accretion-powered High Mass X-ray Binary (HMXB) pulsars comprise a highly magnetized neutron star (NS) accreting from a high mass companion. Most of these systems in the Milky Way galaxy are transient sources that usually remain in quiescence for long periods and occasionally exhibit outbursts with durations ranging from several days to hundreds of days (see \citealt{Paul-Naik2011} for a review). Outbursts from these systems are classified as Type-I and Type-II depending on their peak luminosities and durations. Type-I outbursts tend to be periodic, with durations of a few days to tens of days and X-ray luminosities L$_{\rm X}\sim$10$^{35}$--10$^{37}$~ergs~s$^{-1}$; they correspond to periodic episodes of enhanced accretion during the periastron passage of the NS. Type-II outbursts, by contrast, are not periodic, last much longer, from weeks to months, and are characterized by luminosities $L_{\rm X}\sim10^{37}-10^{38}$~ergs~s$^{-1}$. These systems exhibit luminosity-dependent timing and spectral features such as variations in pulse profiles, the presence or absence of Quasi Periodic Oscillations (QPOs) in the power density spectra, and variations in cyclotron line energies \citep{Paul-Naik2011}.
Accretion onto a NS is an efficient way of generating high energy photons. However, the maximum luminosity that can be reached by this process is limited by several factors. For spherical accretion, the Eddington luminosity limits the maximum attainable luminosity. For a highly magnetized NS in an accreting binary system, however, the maximum attained luminosity can greatly exceed L$_{\rm Edd}$ via the formation of accretion columns \citep{Basko1976,Mushtukov2015}. From the current understanding of accretion theory, we know that the configuration of the emission region depends strongly on the mass accretion rate \citep{Basko1976}. At lower X-ray luminosities, accretion mounds on the NS polar caps dominate the X-ray emission, and the NS surface is heated by the inflowing plasma via Coulomb collisions \citep{ZelShakura,Mushtukov2015}.
At higher mass accretion rates, however, the plasma is stopped at some distance above the NS surface by a radiation-dominated shock, and the radiation is observed to originate from an extended `accretion column' \citep{Basko1976,Becker2012,Mushtukov2015,Doroshenkov2017}.
A \textit{`critical'} luminosity has been defined in the literature that serves as the boundary between these two regimes. The behavior of the pulse profiles and even of the cyclotron resonance scattering feature (CRSF) line energies has been shown to vary between these two domains \citep{Becker2012, Mushtukov2012}. Luminosity-related variations of the CRSF line energies and pulse profiles help probe changes in the configuration of the accretion column, all of which are best studied while accreting magnetized NS systems undergo episodic outbursts.
Observations of X-ray pulsars have indicated spin periods ranging from milliseconds (for example, SAX J1808.4+3658, \citealt{Wijnands_klis1998}) all the way up to a few thousand seconds (for example, 4U2206+54 and 2S 0114+65, see \citealt{Wang2020}). In fact, recent studies have indicated that the Be X-ray binary population comprises two subpopulations based on spin and orbital periods and orbital eccentricities \citep{Knigge2011}: i) short-spin systems with P$_{\rm spin}\sim$10~s and P$_{\rm orb}\sim$40~d, and ii) systems with P$_{\rm spin}\sim$200~s and P$_{\rm orb}\sim$100~d \citep{Priegen2020}. Such a bimodal distribution of NS spins has been suggested to result from two distinct supernova pathways of NS formation \citep{Knigge2011}. In this context, X-ray pulsars hosting NS with intermediate spin periods ($\sim$several seconds to several tens of seconds) are particularly interesting, since they also occupy a unique position in the Corbet diagram of spin period versus orbital period. IGR J19294+1816 is one such poorly studied intermediate-spin pulsar; it underwent an outburst in October 2019, which we observed and studied using the {\it AstroSat~\/} instruments.
IGR J19294+1816, a hard X-ray transient, was first detected with the IBIS/ISGRI camera on board the \textit{INTEGRAL} Gamma-Ray Observatory during an outburst in 2009 \citep{Atel2}. Pulsations at 12.4~s were detected from this bright source using Swift observations \citep{Rodrig2009}, and \citet{Strohmayer2009ATel} further confirmed the X-ray pulsar nature of this object using RXTE-PCA observations. \citet{Corbet-Krim2009} inferred an orbital period of 117.2~days from long-term Swift-BAT flux modulations. NIR spectroscopy revealed that the system hosts a B1Ve optical companion star, and the distance to the system was inferred to be around 11$\pm$1~kpc \citep{Tsygankov2019}. A cyclotron absorption feature was detected at $\sim$42~keV using {\it NuSTAR~\/} observations \citep{Tsygankov2019}.
A recent XMM-Newton observation detected enhanced X-ray activity during its periastron passage, indicating a Type-I X-ray outburst \citep{Atel1}. In order to study the broadband spectral and timing characteristics of this source during its latest outburst, we carried out X-ray observations using the various {\it AstroSat~\/} instruments. The paper is organized as follows. In Section 2, we present the details of the observations and the data reduction methods for the LAXPC and SXT instruments. This is followed by the timing and spectral analysis and results in Section 3. We then discuss the implications of our results and describe the broadband outburst properties of this pulsar system in Section 4.
\section{Observations and Data Reduction}\label{sec:obs}
{\it AstroSat~\/} observations of IGR J19294+1816 were carried out using the SXT and LAXPC instruments on 2019 October 29, following an ATel reporting re-brightening of the source in XMM-Newton observations \citep{Atel1}. The source was already in the declining phase of its outburst, its peak having passed. This observation was part of a Target of Opportunity (ToO) proposal (Obs ID T03 3272) comprising {\it AstroSat~\/} orbits 22082--22093 and a total exposure time of $\sim$62~ks.
The Indian satellite {\it AstroSat~\/} carries five payloads providing simultaneous multi-wavelength coverage in the UV, soft and hard X-ray bands. The Large Area X-ray Proportional Counter (LAXPC) is a broad-band timing instrument comprising three co-aligned proportional counter detectors with a 1$^\circ \times$1$^\circ$ field of view. It achieves a timing resolution of 10~$\mu$s and is sensitive in the broad 3.0--80.0~keV energy range. LAXPC has proven extremely useful for CRSF line studies \citep{Varun2019new,Varun2019-new}; details of the LAXPC calibration can be found in \citet{Antia2017,Yadav2017-new}. During this observation, the LXP30 detector was non-functional and the LXP10 detector had undergone gain changes, rendering it unreliable for spectroscopic work. We have therefore utilised the LXP20 detector for the spectroscopic analysis in this work. The level 1 LXP20 data files were reduced using the standard data reduction pipeline\footnote{http://astrosat-ssc.iucaa.in/?q=laxpcData} (Format A). The level 2 event file was generated using the standard procedure \texttt{laxpc$\_$make$\_$event}. We further generated good time intervals excluding Earth occultations and satellite passages through the South Atlantic Anomaly (SAA) using the task \texttt{laxpc$\_$make$\_$stdgti}. With the \texttt{as1bary} tool, the level 2 event file was then barycenter-corrected from the satellite frame to the Solar System Barycenter using the orbital information and the source coordinates. The resultant event file was used for further timing and spectral analysis.
\begin{figure*}
\centering
\includegraphics[scale=0.5,angle=-90,trim={0cm 0cm 0 0cm},clip]{bat-lc-latest.pdf}
\caption{The 15--50~keV Swift BAT light curve shows the October 2019 outburst of IGR J19294+1816. A prompt {\it AstroSat~\/} observation was undertaken and its location on the long term light curve is indicated in red.}
\label{fig:batlc}
\end{figure*}
The Soft X-ray Telescope (SXT) is an imaging telescope operating in the 0.3--8.0~keV energy band \citep{Singh2016}. Its X-ray optics unit consists of 40 confocal shells made of gold-coated aluminium foils, and its X-ray CCD has 600$\times$600 pixels. It images a circular region of the sky with a radius of $\sim20^{'}$. The on-axis Point Spread Function (PSF) has a full width at half maximum (FWHM) of $100^{\arcsec}$, and the spectral resolution is $\sim$150~eV at 6~keV. During this observation, the {\it SXT~\/} was operated in the Photon Counting (PC) mode with a time resolution of 2.4~s. The level 1 PC mode data were processed using the {\it SXT~\/} software pipeline version 1.4b and the SXT spectral redistribution matrices in \texttt{CALDB (v20160510)}. Orbit-wise event files were merged using the Julia-based {\it SXT~\/} event merger tool \texttt{sxtevtmergerjl}. We selected all events inside a circular region of radius $15^{'}$ centered on the source coordinates. The image, light curve and spectra were extracted using the \texttt{Xselect} tool.
\section{Results}
\subsection{Timing}
\begin{figure}
\centering
\includegraphics[scale=0.33,angle=-90,trim={0cm 0cm 0cm 2cm},clip]{pds-new2-july.pdf}
\includegraphics[scale=0.37,angle=0]{IGR-qpo-rms.pdf}
\caption{The white noise subtracted RMS normalised PDS of the light curve showing the pulse period at $\sim$0.08~Hz and the QPO feature at 0.032~Hz (top panel). The QPO is modeled using a Lorentzian function, shown here as a solid black line. The bottom panel shows the variation of the QPO RMS fractional amplitude as a function of energy. The QPO is not detected beyond 30~keV.}
\label{fig:pds}
\end{figure}
We obtained a long term Swift BAT light curve from the Swift-BAT hard X-ray transient Monitor page\footnote{https://swift.gsfc.nasa.gov/results/transients/index.html} to indicate the segment of the outburst which was sampled by \textit{AstroSat}. The Swift-BAT light curve in the 15--50~keV energy band, with the {\it AstroSat~\/} observation marked on it, shows a marked increase in the count rate during the October 2019 outburst (see Figure \ref{fig:batlc}).
We extracted the time series from LAXPC unit LXP20 with a 5~ms binning in the 3.0--80.0~keV energy band using the task \texttt{laxpc$\_$make$\_$lightcurve}. The background subtracted, barycenter corrected light curve showed a steady emission during the observations with an average count rate of 96~counts~s$^{-1}$.
We created the Power Density Spectrum (PDS) from the LXP20 light curve of the full observation using the FTOOLS task \texttt{powspec}. The PDS was normalised such that its integral gives the squared RMS fractional variability. Interestingly, we detect a QPO-like bump at $\sim$0.032~Hz along with the spin frequency peak at 0.08~Hz and its two harmonics (at $\sim$0.16~Hz and $\sim$0.25~Hz) in the 3.0--80.0~keV PDS, as shown in Figure \ref{fig:pds}. The spin peak shows a peculiar broadening at its base. To infer the properties of the QPO, the PDS in the frequency range around the QPO centroid (0.01--0.07~Hz) was fitted with a model consisting of a constant describing the continuum plus a Lorentzian component. The QPO centroid frequency was determined to be 0.032$\pm$0.002~Hz with a width of 0.012$\pm$0.006~Hz, giving a quality factor Q=$\nu_{\rm 0}/\sigma\sim$2.7. The white-noise-subtracted RMS of the QPO in the 3.0--80.0~keV energy band is 13.6\%$\pm$0.5\%. We further calculated the QPO RMS fractional variability in different energy bands and found that it increases with energy (Figure \ref{fig:pds}, bottom panel). The QPO is detected in the 3--6~keV, 6--10~keV and 10--30~keV bands with an RMS of 10.2\%$\pm$0.4\%, 13.5\%$\pm$1.2\%, and 15.7\%$\pm$1.0\%, respectively. The QPO feature is not detectable in the LAXPC power spectra beyond 30~keV.
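For illustration, a fit of this kind can be sketched as follows (the arrays below are placeholders; real values come from the LXP20 PDS, and the Lorentzian parameterization is our assumption):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def qpo_model(f, const, norm, f0, width):
    """Constant continuum plus a Lorentzian centred on the QPO."""
    return const + norm * (width / (2 * np.pi)) / \
        ((f - f0) ** 2 + (width / 2) ** 2)

# Placeholder PDS over 0.01-0.07 Hz standing in for the real data.
freq = np.linspace(0.01, 0.07, 60)
power = qpo_model(freq, 0.5, 1.0, 0.032, 0.012) \
        + 0.05 * np.random.randn(freq.size)

popt, pcov = curve_fit(qpo_model, freq, power,
                       p0=[0.5, 1.0, 0.03, 0.01])
Q = popt[2] / popt[3]   # quality factor nu_0 / sigma (~2.7 in text)
\end{verbatim}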
To determine the spin period accurately, we used pulse folding and $\chi^{2}$-maximization with the FTOOLS task \texttt{efsearch}. We searched 1000 trial periods around a central pulse period of 12.48~s with a resolution of 0.1~$\mu$s. The maximum $\chi^2$, indicating the best period, was obtained for a pulse period of 12.485065$\pm$0.000015~s. The quoted uncertainty was determined by generating 1000 Gaussian-randomized realizations of the light curve and carrying out period searches on each of them, following the method of \citet{Boldin2013}. Apart from the orbital period, the orbital parameters of this source are not known; the period search was therefore done on the light curve with only the barycenter correction applied, and the period value might still be slightly biased by the orbital motion of the source. We folded the total light curve at the best period from the period search, using MJD 58784.99998 as the reference epoch, to create the pulse profile shown in Figure \ref{Fig:efold}. The pulse profile shows a very broad single peak which is spread across the whole phase range, with an indication of another component on the right side of this peak. \\
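A bare-bones version of the folding and $\chi^{2}$-maximization step reads as follows (Python; the light curve here is simulated, and the error model is a crude Poisson approximation rather than what \texttt{efsearch} does internally):
\begin{verbatim}
import numpy as np

def folded_chi2(time, rate, period, nbins=32):
    # chi^2 of the folded profile against its mean; a maximum marks the period
    idx = ((time / period) % 1.0 * nbins).astype(int)
    prof = np.bincount(idx, weights=rate, minlength=nbins)
    n = np.bincount(idx, minlength=nbins)
    mean = prof / n
    err2 = mean.mean() / n        # crude Poisson variance of each bin mean
    return np.sum((mean - mean.mean())**2 / err2)

rng = np.random.default_rng(1)
t = np.arange(0.0, 5000.0, 0.1)   # simulated light curve with 0.1 s bins
rate = rng.poisson(10 * (1 + 0.3 * np.sin(2 * np.pi * t / 12.485)))
trials = 12.485 + np.linspace(-0.01, 0.01, 401)
best = trials[np.argmax([folded_chi2(t, rate, P) for P in trials])]
print(best)
# uncertainty (Boldin et al. 2013 style): repeat the search on many
# Gaussian-randomized realizations and take the spread of best periods
\end{verbatim}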
\begin{figure}
\centering
\includegraphics[scale=0.5,angle=-90,trim={0cm 0cm 0cm 0cm},clip]{efold-g3.pdf}
\caption{The average pulse profile of { IGR J19294+1816~\/} in the 3.0--80.0~keV energy band for LXP20 plotted with 64 phase bins per period.}
\label{Fig:efold}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5,angle=0,trim={4cm 0cm 1cm 1cm},clip]{en_res_pp_all_bands.pdf}
\caption{The energy resolved pulse profiles of { IGR J19294+1816~\/} in 6 different energy bands from the LXP20 detector. These are the energy bands in which pulsations were detected. }
\label{fig:ene-res-pp}
\end{figure}
The count statistics of the full energy range data allowed us to create pulse profiles in several energy bands. We extracted light curves in the following bands: 3--6~keV, 6--10~keV, 10--30~keV, 30--39~keV, 39--42~keV, 42--45~keV, 45--65~keV and 65--80~keV. Two energy bands (39--42~keV and 42--45~keV) are specifically chosen around the center of the cyclotron line seen in the spectral analysis (see Section \ref{sec:spec}). We folded the light curves at the newly estimated spin period to obtain the corresponding pulse profiles. The energy resolved pulse profiles are shown in Figure \ref{fig:ene-res-pp}. We detected pulsations up to 45~keV. In the last two energy bands (45--65~keV and 65--80~keV) pulsations are not detected due to poor statistics. The pulse profile has a single peak in the soft X-ray bands. In the 3--6~keV band, the pulse profile has an additional feature to the left of the peak, which disappears in the 6--10~keV band. A different component appears on the right side in the 10--30~keV band and becomes more prominent in subsequent energy bands. We compute the Pulse Fraction (PF) from the pulse profiles using the prescription ${\rm PF}=\frac{I_{\rm max}-I_{\rm min}}{I_{\rm max}+I_{\rm min}}$. The PF shows an interesting trend as a function of energy: it first increases from the 3--6~keV band up to the 10--30~keV band and then decreases in the subsequent energy bands, as shown in Figure \ref{fig:pff}. We observe this trend when the source luminosity was $\sim$1.6$\times10^{37}$~erg~s$^{-1}$ (see Section \ref{sec:spec} for spectral analysis and flux estimation details). This trend is different from the one reported from {\it NuSTAR~\/} observations carried out at two different X-ray luminosities ($L_{\rm X}=6.7\times10^{34}$ and $3.4\times10^{36}$~erg~s$^{-1}$), where the PF was found to increase with energy in both cases. However, we note that the PF behavior is consistent with that reported by \citet{Roy2017} using RXTE data.
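The PF prescription itself is a one-liner; a minimal sketch with an illustrative profile:
\begin{verbatim}
import numpy as np

def pulse_fraction(profile):
    # PF = (Imax - Imin) / (Imax + Imin) of a folded pulse profile
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

prof = np.array([96.0, 104.0, 118.0, 131.0, 122.0, 107.0, 99.0, 95.0])
print(pulse_fraction(prof))   # (131 - 95) / (131 + 95) ~ 0.16
\end{verbatim}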
\begin{figure}
\centering
\includegraphics[scale=0.45]{PF_en_res_pp_nus_as_22sept21.png}
\caption{Variation of the pulse fraction with energy. The midpoint of each energy band is used as the reference energy for that band.}
\label{fig:pff}
\end{figure}
\subsection{Spectroscopy} \label{sec:spec}
We further carried out a joint spectral analysis using the SXT and the LAXPC instruments. The spectral products for the LXP20 detector, which include the source and background spectra in the 5.0--50.0~keV energy range and the response files, were extracted for all the layers using the Format A pipeline as specified in Section \ref{sec:obs}. We restrict our spectral range to 50.0~keV, beyond which the spectrum has a very limited signal-to-noise ratio. We also ignore the energy channels below 5.0~keV since the LAXPC response is not fully understood at those energies. The 0.3--7.0~keV SXT spectrum was extracted from the LEVEL 2 event file using the \texttt{Xselect} tool. For {\it SXT~\/} spectral fitting, we used the background spectrum and Response Matrix File (RMF) distributed by TIFR-POC\footnote{\url{https://www.tifr.res.in/~astrosat_sxt}} and the \texttt{sxt$\_$ARFModule} tool to generate the corrected Auxiliary Response File (ARF). We then imported all the spectral products into \texttt{XSPEC} v.12.10.0 for fitting. We included the recommended 3\% systematics when fitting the joint LAXPC--SXT spectra, in order to account for the uncertainty in the spectral responses. \\
\begin{figure}
\centering
\includegraphics[scale=0.34,angle=-90,trim={0cm 1cm 0cm 0cm}]{spec_final_sept23_2021.pdf}
\caption{The unfolded spectrum and fit residuals for the joint SXT and LAXPC fit with Model 1 (`\texttt{nthcomp}'). The total model is shown as solid black (SXT) and red (LAXPC) lines. The middle panel shows the residuals without the CRSF component, while the lowermost panel shows the residuals once the CRSF line has been accounted for.}
\label{fig:spectrum}
\end{figure}
\begin{table*}
\centering
\caption{Best fit spectral parameters for the joint SXT and LAXPC spectral fit.}
\begin{tabular}{|c|c|c|c|c|}
\hline
&&&& \\
Model & Parameters & Model 1 (nthComp) & Model 2 (PL$\times$highEcut) & Model 3 (cutoff PL) \\
&&&& \\
\hline
&&&& \\
\textit{const} & LAXPC & 1.0 (fixed) & 1.0 (fixed) & 1.0 (fixed)\\
& SXT & 1.14 & 1.13 & 1.13\\
&&&& \\
\textit{TBabs} & nH ($\times$10$^{22}$~cm$^{-2}$) & 1.65$\pm$0.2 & 2.1$\pm$0.3 & 2.02$\pm$0.4\\
&&&& \\
\textit{bbodyrad} & kT (keV) & 1.41$\pm$0.1 & 1.68$\pm$0.1 & 1.57$\pm$0.1\\
& norm$^a$ & 1.4$\pm$0.8 & 2.1$\pm$0.6 & 2.24$\pm$0.3\\
&&&&\\
\textit{nthcomp} & Gamma & 1.5$\pm$0.1 & &\\
& kT$_e$ (keV) & 11.2$\pm$2.1 & &\\
& kT$_{BB}$ (keV) & 1.41$\pm$0.1 & &\\
& & (tied to the \textit{bbodyrad} kT) & & \\
&&&& \\
\textit{power law} & index ($\alpha$) & & 0.55$\pm$0.1 & \\
& norm$^b$ & & 0.01$\pm$0.003 & \\
&&&& \\
\textit{highEcut} & cutoff E (keV) & & 14.8$\pm$1.4 &\\
& fold E (keV) & & 24.62$^{+9.6}_{-4.1}$ & \\
&&&& \\
\textit{cutoffpl} & index ($\alpha$) & & & 0.26$\pm$0.01\\
& highEcut & & & 18.4$\pm$3.8 \\
& norm$^b$ & & & 0.01 \\
&&&& \\
\textit{gabs} & E$_{cycl}$ (keV) & 42.7$\pm$0.9 & 45.8$\pm$3.1 & 42.9$\pm$0.8\\
& $\sigma_{cycl}$ (keV) & 2.72$\pm$1.1 & 4.5$\pm$2.0 & 2.9$\pm$0.8 \\
& optical depth ($\tau$) & 1.8$\pm$0.1 &1.9$\pm$0.9 & 1.7$\pm$0.5 \\
&&&& \\
\textit{gaus} & E$_{Fe}$ (keV) & 6.40$\pm$0.1 & 6.42$\pm$0.1 & 6.40$\pm$0.1\\
& $\sigma$ (keV) & 0.01 (fixed) & 0.01 (fixed) & 0.01 (fixed)\\
& Eq width (eV) & 128.8 & 130.1 & 136.1 \\
&&&& \\
\hline
&&&& \\
Reduced $\chi^2$ (dof) & & 1.1 (50) & 1.01 (50) & 1.03 (53)\\
&&&& \\
\hline
\end{tabular}
\flushleft $^a$ The \texttt{blackbody} norm is a dimensionless parameter defined as $\frac{R_{\rm km}^{2}}{D_{\rm 10}^2}$, where $R_{\rm km}$ is the source radius in km and $D_{\rm 10}$ is the distance to the source in units of 10~kpc.
\flushleft $^b$ photons~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1~keV
\label{tab:spec}
\end{table*}
The 5.0--50.0~keV LXP20 spectrum and the 1.0--7.0~keV SXT spectrum were jointly fit by introducing a relative normalization in the form of a constant factor, fixed at 1.0 for LAXPC and left free to vary for the SXT spectrum. We also applied a gain correction to the SXT spectrum using the \texttt{gain$\_$fit} command, fixing the gain slope to 1.0 and allowing the gain offset to vary. To describe the spectral continuum, we compared three different models: 1) the thermal Comptonization model `\texttt{nthcomp}' \citep{Zd1996,Z1999}; 2) an absorbed power law with a high-energy cutoff, `\texttt{highEcut}'; and 3) a \texttt{cutoffpl} model. The `\texttt{nthcomp}' model describes Comptonization of soft photons in a hot plasma. It provides a better description of the continuum shape at lower energies than a power law by incorporating a low-energy rollover, such that the scattered spectrum has fewer photons at energies below the typical input seed photon energies\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node205.html}}. We then compared the spectral fit parameters of this model with the two phenomenological models, `\texttt{highEcut}' and `\texttt{cutoffPL}'. We also note that the commonly adopted physical model `\texttt{compTT}' resulted in highly unphysical fit parameters and was therefore not pursued in this analysis. The ISM absorption model \texttt{TBabs} was used to account for the line-of-sight neutral hydrogen (nH) column density, adopting the updated photoionization cross sections and the Wilms abundances \citep{Wilms2000}. All three continuum models showed significant soft X-ray residuals that prompted us to include a blackbody component, which significantly improved the fit. We assume that the same blackbody photons serve as seed photons for the Comptonization process, and therefore tie the two temperatures in the fit. Blackbody temperatures of the order of $\sim$1.5~keV, as seen for this pulsar in this work, were recently reported in a similar transient pulsar system, Swift J0243.6+6124 \citep{GauravJaisawal-2017}. We note that all three spectral models indicate a consistent blackbody temperature with an emission-region size of $\sim$1.3--1.5~km.
The LAXPC spectrum using all three models also showed a broad absorption feature near $\sim$42~keV, which we interpret as the CRSF. This line is fit using a Gaussian absorption (\texttt{gabs}) model. The addition of the \texttt{gabs} model improved the reduced $\chi^2$ to 1.1 (with a $\Delta\chi^{2}\sim$52.4 for 50 dof).
The CRSF line energy is poorly constrained by the highEcut model, with a larger uncertainty (45.8$\pm$3.1~keV) than for the other two models. To compute the significance of multiplicative components like the CRSF line, we adopt the F-test routine from the IDL package \texttt{MPFTEST}\footnote{\url{https://pages.physics.wisc.edu/~craigm/idl/down/mpftest.pro}} \citep{DeCesar2013} and compute an F-test probability. We estimate the significance of the CRSF line to be 5.2$\sigma$, 6.0$\sigma$ and 5.8$\sigma$ for Models 1, 2 and 3, respectively. The SXT spectrum was dominated by continuum emission, with a narrow excess near $\sim$6.4~keV. We added a Gaussian emission line with a fixed width of 0.01~keV to model this feature in all three spectral fits. The Fe K-$\alpha$ line is detected at $\sim$6.4~keV independently of the continuum model. The best fit parameters are listed in Table \ref{tab:spec}. We obtain a positive SXT gain offset of 30.2~eV, 36.1~eV and 30.5~eV for the three models, respectively.
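A skeletal version of the Model 1 setup in PyXspec is sketched below; the file names are placeholders and the session is not the exact one used for Table \ref{tab:spec}:
\begin{verbatim}
from xspec import AllData, Fit, Model

# placeholder file names; the real products come from the SXT/LAXPC pipelines
AllData("1:1 sxt.pha 2:2 laxpc20.pha")
AllData(1).ignore("**-1.0")
AllData(1).ignore("7.0-**")
AllData(2).ignore("**-5.0")
AllData(2).ignore("50.0-**")

# Model 1: absorbed blackbody plus thermal Comptonization, with a Gaussian
# Fe line and a gabs component for the ~42 keV CRSF
m = Model("constant*TBabs*gabs*(bbodyrad + nthcomp + gaussian)")
Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()
# the 3% systematics and the SXT gain fit (slope frozen at 1) are applied
# as described in the text, via the XSPEC `systematic' and `gain' commands
\end{verbatim}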
In order to statistically examine the CRSF line, we carried out a correlation study of the CRSF line parameters against the different continuum parameters. The CRSF centroid energy at $\sim$42~keV remained largely uncorrelated with the Compton continuum parameters. We also carried out an F-test to compare the spectral models. We find that Model 2 (`\texttt{highEcut}') is statistically preferred (with a p-value of $\sim$0.001) over the `\texttt{cutoffPL}' model. However, the physical model `\texttt{nthcomp}' is statistically indistinguishable from the two phenomenological models with the current data.
\section{Discussion}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Sr & Source & Nature & E$_{cycl}$ & QPOs & Ref. \\
No. & & & (keV) & $\nu_{qpo} $ & \\
\hline
&&&&&\\
1 & 4U 0115+63 & Transient & 12, 24, 36, 48, 62 & 1--2 mHz, 27--46 mHz & 1, 2 \\
2 & V 0332+53 & Transient & 28 & 0.22 Hz, 0.05 Hz & 3, 4\\
3 & A 0535+26 & Transient & 50 & 27--72 mHz & 5, 6 \\
4 & 1A 1118-61 & Transient & 55, 110? & 80 mHz, 70--90 mHz & 7, 8 \\
5 & Swift 1626.6-5156 & Transient & 10, 18 & 1 Hz & 9 \\
6 & Cen X-3 & Persistent & 28 & 40--90 mHz & 10\\
7 & GX 304-1 & Transient & 54 & 0.125 Hz & 11, 12\\
8 & 4U 1901+03 & Transient & 30 & 0.135 Hz & 13, 14\\
9 & 4U 1626-67 & Persistent & 37 & 1 mHz, 48 mHz & 15, 16, 17\\
10 & IGR J19294+1816 & Transient & 42 & 32 mHz & 18 and \textit{\textbf{this work}} \\
&&&&&\\
\hline
\end{tabular}
\caption{A list of all the transient and persistent X-ray binary sources exhibiting a \textit{CRSF} as well as a QPO. Numbered references to the QPO and CRSF publications are indicated below. 1. \citet{Jayashree2019} 2. \citet{Heindl1999} 3. \citet{Qu_2005} 4. \citet{caballerogarcia2015} 5. \citet{Finger1996} 6. \citet{CameroArranz2012} 7. \citet{Jincy2011} 8. \citet{Maitra2012} 9. \citet{Reig2008} 10. \citet{Raichur2008} 11. \citet{Devasia2011} 12. \citet{Yamamoto2011} 13. \citet{Marykutty2011} 14. \citet{Beri2020} 15. \citet{Orlandini1998} 16. \citet{Raman2016} 17. \citet{Chakrabarty2001} 18. \citet{Tsygankov2019}.}
\label{tab:cycl-qpo}
\end{table*}
We report the results of a timing and spectral study of the HMXB pulsar IGR J19294+1816 using {\it AstroSat~\/} observations carried out during the October 2019 outburst. The observations fall on the declining phase of the outburst. We measure the pulsar's spin period to be 12.485065$\pm$0.000015~s. We also report the detection of a 0.032~Hz QPO from the timing analysis of the LAXPC light curves. From our spectral analysis we find the cyclotron line energy to be centered at 42.7$\pm$0.9~keV (consistent with the {\it NuSTAR~\/} report, \citealt{Tsygankov2019}) and estimate the pulsar's magnetic field to be 4.6$\times$10$^{12}$~G. From the 0.7--50~keV model flux obtained from our spectroscopy, and assuming a source distance of 11~kpc, we find that during its declining outburst phase IGR J19294+1816 had a luminosity of 1.6$\times$10$^{37}$~erg~s$^{-1}$.
\subsection{Cyclotron line \& Quasi Periodic Oscillation}
Out of the 36 X-ray binary sources with confirmed detections of a CRSF \citep{Staubert2019}, only 9 exhibit a cyclotron absorption line along with a QPO. This is crucial since it provides two independent measures of the magnetic field and allows the magnetospheric interactions to be probed using combined timing and spectral properties. A list of such sources is given in Table \ref{tab:cycl-qpo}. The results from our work make IGR J19294+1816 the 10th source in this list.
In our work, we detect the CRSF feature at $\sim$42~keV, which indicates (for a gravitational redshift $z$ of $\sim$0.3 and for M$_{\rm NS}$ in the range 1.4--2.0~M$_{\odot}$; \citealt{Cottam2002}) a magnetic field strength of 4.6$\times10^{12}$~G. Our measurement of E$_{\rm CRSF}$ agrees with the {\it NuSTAR~\/} studies of the source \citep{Tsygankov2019}. It is also interesting that the centroid energy of the cyclotron line has remained steady between the different luminosity states. In particular, \citet{Tsygankov2019} conducted their spectral analysis over two luminosity states (L$_{\rm X}\sim$6.4$\times10^{34}$~erg~s$^{-1}$ and L$_{\rm X}\sim$3.4$\times10^{36}$~erg~s$^{-1}$), and our work probes a slightly higher luminosity (L$_X\sim$1.6$\times10^{37}$~erg~s$^{-1}$), over which E$_{\rm CRSF}$ has remained fairly steady at $\sim$42~keV (see Figure \ref{fig:cyc-lum}).
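This estimate follows from the standard electron cyclotron relation $E_{\rm cyc}\simeq 11.6~{\rm keV}\,B_{12}/(1+z)$ (the ``12-B-12'' rule); a one-line check with the numbers quoted above:
\begin{verbatim}
E_cyc, z = 42.7, 0.3          # line energy (keV), gravitational redshift
B12 = E_cyc * (1 + z) / 11.6  # field in units of 1e12 G
print(B12)                    # ~4.8, of the order of the 4.6e12 G quoted;
                              # the exact value depends on the adopted z
\end{verbatim}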
Interestingly, there is strong observational evidence for a bimodal dependence of the CRSF line energy on X-ray luminosity (L$_{\rm X}$) \citep{Becker2012,Mushtukov2015,Staubert2019}. Both positive and negative correlations between the CRSF line energy and L$_{\rm X}$ have been observed in various HMXB pulsars, and the luminosity at which the sign of the correlation changes has therefore been associated with the critical luminosity. \citet{Becker2012} also provide a theoretical framework that explains the beam pattern in both regimes: above L$_{\rm crit}$, radiation-dominated shocks are responsible for the generation of a fan beam; below L$_{\rm crit}$, Coulomb interactions dominate the deceleration of the flow and radiation escapes parallel to the field, producing a pencil beam pattern.
\begin{figure}
\centering
\includegraphics[scale=0.42,angle=0,trim={0cm 0.0cm 0 0.1cm},clip]{cyc-lum-IGR_sept2021.pdf}
\caption{The CRSF line energies measured by {\it NuSTAR~\/} and {\it AstroSat~\/} for IGR J19294+1816 are shown as a function of luminosity. The red curve indicates the critical luminosity model (for an accretion disk parameter of $\Lambda=0.5$) that demarcates the sub- and super-critical accretion regimes \citep{Mushtukov2015}.}
\label{fig:cyc-lum}
\end{figure}
In this context, we discuss different works addressing the computation of the critical luminosity in pulsar accretion columns. \citet{Becker2012} derive an effective L$_{\rm Edd}$ that arises specifically for pulsar accretion columns. The overall accretion dynamics is governed by matter being decelerated either by radiation pressure or by Coulomb interactions, and these two regimes are separated by a critical luminosity (L$_{\rm crit}$) which, for a canonical NS, is $\sim$10$^{37}$~erg~s$^{-1}$. More recent calculations by \citet{Mushtukov2015}, based on the physical model derived by \citet{Basko1976}, provide a more accurate estimate of the critical luminosity by accounting for the resonances in the Compton scattering cross section, polarization, as well as the accretion flow configuration. They show that L$_{\rm crit}$ is not a monotonic function of the magnetic field and has a more complicated behavior. Their theoretical work compares well with the observed behavior of sources such as V 0332+53, 4U 0115+63, and GX 304-1, while sources like A 0535+26 show neither a positive nor a negative correlation \citep{Caballero2007,Mushtukov2015}. The CRSF line in IGR J19294+1816 has so far been measured only with {\it NuSTAR~\/} and {\it AstroSat~\/}. Figure \ref{fig:cyc-lum} shows the CRSF line energy measurements as a function of luminosity, along with the critical luminosity model curve (shown for an accretion disk parameter of $\Lambda=0.5$; \citealt{Mushtukov2015}). The current {\it AstroSat~\/} measurement lies very close to L$_{\rm crit}$, in the sub-critical regime. Future broadband X-ray observations in different luminosity states might be useful in understanding the accretion geometry and, in turn, the correlation behavior of this pulsar.
In accretion-powered X-ray pulsars, both the transient and persistent populations often exhibit QPOs, observed in several sources during their outbursts (see \citealt{bpaul2011} and references therein). Since HMXBs host highly magnetized NSs, their inner disk radii extend to about 1000~km, so the corresponding Keplerian frequencies fall in the mHz regime. We can therefore expect to detect slow mHz QPOs ($\sim$1~mHz--1~Hz) in such systems \citep{Psaltis2006}. This is unlike the QPOs observed in LMXBs or black hole X-ray binaries, whose frequencies range from $\sim$1~Hz to a few hundred Hz.
QPOs are traditionally understood to be the result of plasma instabilities generated around the magnetospheric boundary. Two popular models often invoked to explain the QPO frequency ($\nu_{\rm qpo}$) are the Keplerian Frequency Model (KFM, \citealt{vanderklis1987}) and the Beat Frequency Model (BFM, \citealt{AlparShaham1985}). In the KFM, the X-rays are modulated by inhomogeneities orbiting at the Keplerian frequency of the inner accretion disk, so that $\nu_{\rm k}=\nu_{\rm qpo}$. The KFM is, however, valid only under the assumption that the NS spins more slowly than the Keplerian frequency at the radius where the QPO-generating inhomogeneity is located, as is the case for some HMXBs like EXO 2030+375 and A0535+262 \citep{Finger1996}; otherwise, a faster rotating NS gives rise to centrifugal forces that inhibit accretion \citep{Stella1986}. The BFM, on the other hand, assumes that the Keplerian frequency is modulated by the rotating magnetic field of the NS, so that the QPO arises as a beat between the NS spin ($\nu_{\rm spin}$) and $\nu_{\rm k}$, giving $\nu_{\rm qpo}=\nu_{\rm k}-\nu_{\rm spin}$.
IGR J19294+1816 has a spin frequency of $\nu_{\rm s}=0.08$~Hz, and the measured QPO frequency is $\nu_{\rm qpo}\sim 0.032$~Hz. Since $\nu_{\rm s} > \nu_{\rm qpo}$, the KFM does not apply to this object in this scenario. In order to examine whether the BFM is applicable, we use the X-ray luminosity and the QPO frequency to estimate the NS surface magnetic field as predicted by the BFM. Assuming a circular orbit, the radius at which the QPO is generated is
\begin{equation}
r_{\rm qpo}=\bigg(\frac{GM_{\rm NS}}{4\pi^2\nu^2_{\rm k}}\bigg)^{\frac{1}{3}}
\end{equation}
where the Keplerian frequency predicted by the BFM is $\nu_{\rm k}=\nu_{\rm qpo}+\nu_{\rm spin}$. The BFM further assumes that the QPO is generated at the magnetospheric radius (r$_{\rm M}$), i.e., r$_{\rm M}=r_{\rm qpo}$. According to \citet{GhoshLamb1979}, r$_{\rm M}$ is given by
\begin{equation}
r_{\rm M}=2.3\times10^{8}\ M_{1.4}^{1/7}\,R_{6}^{10/7}\,L_{37}^{-2/7}\,B_{12}^{4/7}~{\rm cm},
\end{equation}
where $M_{1.4}$, $R_{6}$, $L_{37}$, and $B_{12}$ denote the NS mass, radius, X-ray luminosity, and surface magnetic field in units of 1.4~M$_{\odot}$, $10^{6}$~cm, $10^{37}$~erg~s$^{-1}$, and $10^{12}$~G, respectively.
For a canonical NS with mass 1.4~M$_{\odot}$ and a measured 1.0--50.0~keV flux of 1.2$\times$10$^{-9}$~erg~cm$^{-2}$~s$^{-1}$, we obtain a magnetic field strength in the range 1.2$\times$10$^{13}$--6.2$\times$10$^{12}$~G for a neutron star radius in the range 10--15~km.
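A sketch of this estimate (cgs units; the coefficient and the $B_{12}$ normalization follow the expression above, so the output is only indicative of the order of magnitude):
\begin{verbatim}
import numpy as np

G, Msun = 6.674e-8, 1.989e33              # cgs constants
nu_k = 0.032 + 0.080                      # BFM: nu_k = nu_qpo + nu_spin (Hz)
M = 1.4 * Msun
r_qpo = (G * M / (4 * np.pi**2 * nu_k**2))**(1 / 3)   # Kepler radius (cm)

L37, M14 = 1.6, 1.0      # luminosity / 1e37 erg/s; mass in units of 1.4 Msun
for R6 in (1.0, 1.5):    # NS radius of 10 and 15 km
    B12 = (r_qpo / (2.3e8 * M14**(1/7) * R6**(10/7) * L37**(-2/7)))**(7/4)
    print(R6, B12)       # fields of order 1e12-1e13 G, as quoted above
\end{verbatim}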
Another model that has been invoked to explain such low frequency mHz QPOs is the magnetic disk precession model \citep{Shirakawa-Lai2002}. The authors demonstrate that the inner region of the accretion disk is subject to magnetic torques that can cause warping and precession of the accretion disk. Under typical conditions present in X-ray pulsars, these magnetic torques can overcome the viscous damping and can enable the instability mode to grow, which may in turn generate mHz QPOs. This model was proposed by \citet{Shirakawa-Lai2002} originally to explain the mHz QPOs in sources like 4U 1626-67, etc. More recently, \citet{Dugair2013} and \citet{Roy2019} showed that this warped disk model was able to explain the mHz QPOs observed in 4U 0115+63.
The precession time scale specified by \citet{Shirakawa-Lai2002} is given by
\begin{equation}
\tau_{\rm prec} = 776\,\alpha^{0.85} L_{37}^{-0.71}~{\rm s},
\end{equation}
where $\alpha$ is the accretion disk viscosity parameter (usually $<$1) and L$_{37}$ is the X-ray luminosity in units of 10$^{37}$~erg~s$^{-1}$. Assuming $\alpha$=0.02 (this choice follows the model fits carried out by \citealt{Jayashree2019} for 4U 0115+63), we obtain a predicted $\nu_{\rm qpo}\sim$3.6~mHz, roughly a factor of 10 below our detected 32~mHz QPO.
\subsection{Broadening of the pulse peak}
The base of the pulse peak in the power density spectrum from our analysis of IGR J19294+1816 exhibits a broadened shape very similar to that previously observed by \citet{Rodrig2009}. Such wing-like features have also been reported in several other HMXB sources, for example Cen X-3 \citep{HarshaPaul2008}, 4U 1626-67 \citep{JainPaulDutta2010}, GX 304-1 \citep{Jincy2011}, 4U 1901+03 \citep{Marykutty2011} and Her X-1 \citep{MoonEiken2001}. To explain the wings observed in HMXB sources, \citet{LazztiStella1997} and \citet{Burderi1993} have shown that the periodic and aperiodic variability components of the power spectrum cannot be considered independent of each other, as has traditionally been assumed: the wings arise from the coupling between the coherent periodic pulse variability and the aperiodic red noise component. We also note that the QPO feature is observed simultaneously alongside the broadened peak, unlike in some other sources where the pulse peak narrows once the QPO appears (see, for example, \citealt{Marykutty2011}). A clear understanding of such anomalies requires a systematic study of transient pulsars across multiple observations.
\subsection{Outbursts in transient Be X-ray binaries}
IGR J19294+1816 has been studied during each of its previous outbursts: the very first 2009 outburst using \textit{INTEGRAL} \citep{Atel2}, the second outburst in October 2010 using \textit{INTEGRAL} \citep{Bozzo2011}, the combined 2009 \& 2010 outbursts using RXTE \citep{Roy2017}, and the further outbursts in 2017--2018 using \textit{Swift} and {\it NuSTAR~\/} \citep{Harrison2013, Tsygankov2019}. Our study of this pulsar during its most recent 2019 outburst using {\it AstroSat~\/} further strengthens our knowledge of the timing and spectral characteristics of this source.
The detection of a pulse peak and its higher harmonics has been reported in all previous outbursts, as well as in this work. Notably, the higher harmonics decrease in strength as the outburst progresses from its highest to its lowest flux levels, as seen in the RXTE observations \citep{Roy2017}. Inhomogeneities in the accretion disk are also prominently detected in the form of mHz QPOs in a number of HMXB pulsars during their periastron-passage outbursts (for example, 4U 0115+63: \citealt{Dugair2013}; V 0332+53: \citealt{Caballero-Garc-2016}), and IGR J19294+1816 is no exception. The strong presence of the mHz QPO (with rms power increasing with energy) for IGR J19294+1816 at such a high luminosity possibly indicates that the inner regions surrounding the magnetosphere become most visible during the highest flux states. Interestingly, this is in stark contrast with the behavior observed for another HMXB transient, V 0332+53 \citep{Caballero-Garc-2016}, where the mHz QPO is most visible during the lowest flux states (and the rms power of the QPO decreases at higher energies; see \citealt{Qu2005}).
Another interesting outburst characteristic is the variation of the pulse fraction. RXTE observations \citep{Roy2017} and LAXPC observations (this work) tend to show an increasing pulse fraction all the way up to $\sim$30 keV, as expected in most accretion powered X-ray pulsars. However, beyond 30 keV, the PF drops dramatically. This is a peculiar deviation from the trend reported using {\it NuSTAR~\/} \citep{Tsygankov2019} for this source. Although the current LAXPC observations have captured the source during a much higher luminosity compared to the {\it NuSTAR~\/} observations, we note that the PF is lower.
Unlike other transient Be X-ray pulsars that exhibit luminosity-dependent pulse profiles (for example A0535+262 and 1A 1118-61; see \citealt{bpaul2011} and references therein), IGR J19294+1816 has not demonstrated any detectable changes in its pulse profiles as a function of luminosity state, as also pointed out by \citet{Tsygankov2019}. It has also not exhibited any luminosity dependence of the CRSF parameters within the last decade. It does, however, show a variation of the pulse profile with energy. Most pulsars exhibit a dependence of profile shape on energy as well as time \citep{WhiteSwankHolt1983}, and at different source luminosities the appearance of complex substructures in the pulse profiles has been reported in some sources like GX 304-1 \citep{Devasia2011}. The changes in the pulse profile shape seen for IGR J19294+1816 and similar pulsars can be attributed to the variable geometry and physical processes near the NS surface and within the accretion column.
\section{Conclusions}
We have analysed the outburst characteristics of the transient X-ray pulsar IGR J19294+1816 using the LAXPC and SXT instruments on board {\it AstroSat~\/} during the falling phase of its 2019 outburst. We have detected a low frequency QPO at 0.032~Hz; a similar feature (at $\sim$0.035~Hz) was tentatively reported earlier by \citet{Rodrig2009} using RXTE. We have also confirmed the presence of a strong cyclotron feature at $\sim$42~keV and a fairly weak Fe emission line at 6.4~keV using the joint SXT and LAXPC broadband spectra. Our studies indicate that some spectro-timing features of IGR J19294+1816 are consistent with other transient Be X-ray binaries while others are not. We encourage future broadband observations with good spectral resolution, sampling different accretion regimes, in order to test the various E$_{\rm CRSF}$--L$_{\rm X}$ correlations for this object.
Obtaining stronger constraints on the binary parameters will also help in understanding this class of transient Be X-ray binary sources better. \\
\textit{Acknowledgements} : This publication is based on the results obtained from the {\it AstroSat~\/} mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC). We thank members of LAXPC instrument team for their contribution to the development of the LAXPC instrument. We also thank the LAXPC \& SXT POC at TIFR for verifying and releasing the data via the ISSDC data archive and providing the necessary software tools. The authors are grateful to Alexander Mushtukov for providing us with the latest critical luminosity model curve that have been used in this study.\\
\noindent\textbf{Data availability statement:}\\
The data underlying this article are available in the AstroSat data archive: \url{https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp} \\
\section{Introduction}
One of the promises of 5G wireless communication systems is to support large-scale machine-type communication (MTC), that is, to provide connectivity to a massive number of devices as in the Internet of Things \cite{Durisi,Tullberg,Larsson}. In massive MTC scenarios, the connection density will be up to $10^6$ devices per square kilometer \cite{IMT}. A key characteristic of MTC is that the user traffic is typically sporadic, so that in any given time interval only a small fraction of devices are active. Also, short packets are the most common form of traffic generated by sensors and devices in MTC. This requires a fundamentally different design than that for supporting sustained high-rate mobile broadband communication.
A variety of effective multiple access techniques have been adopted in the previous and current cellular networks, such as frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and orthogonal frequency division multiple access (OFDMA) \cite{McCann}.
For these systems, resource blocks are orthogonally divided in time, frequency, or code domains.
This makes signal detection at the access point (AP) fairly simple because the interference between adjacent blocks is minimized.
However, due to the limitation of the number of orthogonal resource blocks, it can only support a limited number of devices.
To provide connectivity to a massive number of devices, an information-theoretic paradigm called \emph{many-user access} has been studied in \cite{Chen,Chen2}.
It was shown to be asymptotically optimal for active users to simultaneously transmit their identification signatures followed by their message-bearing codewords.
Further, various massive access schemes have been proposed; see examples \cite{Yu,Yu2,Yu3,Hanzo,Zhang,Luo,Andreev,Polyanskiy,Poor,Ravi,Yener,Letaief,Sohrabi,Senel,Schober,Ahn,Lau,Jia,Howard,Robert,ThompsonCalderbank,ZhangLi,Applebaum,AndreevKowshik,Amalladinne,LiXiang,ChenGuo} and references therein.
Indeed, active device identification and channel estimation are initial steps to enable message decoding in MTC.
Due to the sporadic traffic in MTC, these problems are usually cast as neighbor discovery or compressed sensing problems \cite{WZhu,Zhang,Luo,Amalladinne,ThompsonCalderbank,Yu,Yu2,Yu3,Sohrabi,Senel,Schober,Schepker,Gil,Giannakis,Jeong,Mir,Shim}.
When the channel coefficients are known at the AP, several compressed sensing schemes were proposed for active device identification \cite{Jeong,Mir,Shim}.
Further, approximate message passing (AMP)-based algorithms were applied for joint active device identification and channel estimation \cite{Sohrabi,Yu,Yu2,Yu3,Senel,Schober}.
In addition, greedy compressed sensing algorithms based on orthogonal matching pursuit were designed for sparse signal recovery \cite{Schepker,Gil}.
Another approach to active device identification is slotted ALOHA.
Recently, an enhanced random access scheme called coded slotted ALOHA was proposed in \cite{Paolini,Casini}, where each message is repeatedly sent over multiple slots, and information is passed between slots to recover messages lost due to collision.
These works assume synchronized transmission and perfect interference cancellation.
The asynchronous model has been studied in \cite{Gaudenzi,Sandgren}.
It is pointed out in \cite{Ordentlich} that slotted ALOHA only supports the detection of a single device within each slot.
Instead, the authors in \cite{Ordentlich} proposed the $T$-fold ALOHA scheme such that the decoder can simultaneously decode up to $T$ messages in the same slot.
By combining it with successive interference cancellation, the performance of the $T$-fold ALOHA scheme is further improved in \cite{Vem}. Moreover, \cite{Kowshik,AndreevKowshik} proposed $T$-fold ALOHA based random access schemes for handling Rayleigh fading channels and asynchronous transmission.
To identify the active devices among an enormous number of potential users in the system, each device must be assigned a unique sequence.
For a given positive integer $m$, a Reed-Muller (RM) code book is generated with up to $2^{m(m+3)/2}$ codewords of length $2^m$.
The code book is so large that every user can be assigned a distinct signature in any practical system.
Given this, RM sequence-based massive access schemes were proposed in \cite{Luo,Zhang,Robert,Howard,ZhangLi,Hanzo,YangGuo}.
In \cite{Howard}, a chirp detection algorithm for deterministic compressed sensing based on RM codes and associated functions was proposed.
The authors in \cite{Robert} have further enhanced the chirp detection algorithm with the slotting and patching framework.
However, the algorithm in \cite{Robert} works only for the additive white Gaussian noise (AWGN) channel (the channel estimation problem is thus not considered therein).
For fading channels, an iterative RM device identification and channel estimation algorithm is adopted in \cite{Hanzo}, based on the derived nested structure of RM codes.
In our previous work \cite{YangGuo}, we extend the real codebook used in \cite{Hanzo} to the full codebook, which encodes more bits with little performance loss.
In addition, when the number of active devices is large, the performance of the algorithm in \cite{Hanzo} degrades dramatically.
In contrast, by adopting slotting and message passing, the algorithm in \cite{YangGuo} degrades gracefully as the number of active devices increases.
It can be proved that the worst-case complexities of RM code-based detection algorithms are sub-linear in the number of codewords, which makes them attractive for message decoding in MTC.
The above RM detection algorithms are based on the assumption that the signals are synchronized.
However, due to propagation delays in the practical environment, asynchrony cannot be ignored.
In this paper, we investigate the joint device identification/decoding and channel estimation in an asynchronous setting. By removing the unrealistic assumption of fully synchronous transmissions, the proposed scheme is an important step towards a practical design.
In addition, 5G cellular systems are expected to deploy a large number of antennas to take advantage of massive multiple-input multiple-output (MIMO) technologies.
Massive MIMO has been extensively studied to enable massive connectivity \cite{Yu,Yu2,Yu3,Sohrabi,Senel,Schober,HHan,Carvalho,HRao}.
It can take advantage of the increased spatial degrees of freedom to support a large number of devices simultaneously.
Since the user traffic is sporadic in MTC, \cite{Yu,Yu2,Yu3,Sohrabi,Senel,Schober} formulate the device identification problem as a compressed sensing problem, which can then be solved by the computationally efficient AMP algorithm.
In addition, a new pilot random access protocol called strongest-user collision resolution was proposed in \cite{HHan,Carvalho} to resolve intra-cell pilot collisions in crowded massive MIMO systems.
We note that algorithms using random codes and/or AMP type of decoding do not scale to millions of potential devices.
Moreover, the preceding algorithms are based on the assumption that the signals are synchronized.
{As far as we know, there have been no research publications on asynchronous massive access with a large number of users and antennas.} To fill this gap, we investigate asynchronous massive access in a multi-cell wireless network using Reed-Muller codes with many receive antennas, which has the potential to support billions of potential devices in the network.
The main contributions are summarized as follows:
\begin{itemize}
\item Compared with the algorithms in \cite{Robert,Howard,ZhangLi,Hanzo,YangGuo}, where the transmitted signals are synchronized, we extend the algorithms to the asynchronous case, where the arbitrary delay of each device is estimated based on the derived relationship between an RM sequence and its subsequences.
\item To enhance the performance, we extend the RM detection algorithms to the case where the AP is deployed with a large number of antennas.
\item We further describe an enhanced RM coding scheme with slotting and bit partition. The corresponding detection algorithm is referred to as Algorithm 1. We show that the computational complexity and performance of Algorithm 1 are notably improved, which makes it one important step closer to a practical algorithm.
\item While many papers in the literature study massive access, this work is one of the few that can truly accommodate billions of devices and more.
\end{itemize}
The remainder of this paper is organized as follows.
The system model is presented in Section \ref{Secsystemmodel}.
Section \ref{RMrelationship} outlines the relationship between the RM sequence and its subsequences, which is the basis of the RM asynchronous decoding algorithm.
Section \ref{useridentification} displays the enhanced RM decoding algorithm utilizing slotting, message passing, and bit partition.
Further, the computation complexity analysis is given in Section \ref{CCA}.
Section \ref{Numericalresults} presents the numerical results, while Section \ref{conclusion} concludes the paper.
Throughout the paper, boldface uppercase letters stand for matrices while boldface lowercase letters represent column vectors.
The superscripts $(\cdot)^{\rm T}$, $(\cdot)^{*}$, and $(\cdot)^\dag$ denote the transpose, complex conjugate, and conjugate transpose operator, respectively.
The complex number field is denoted by $\mathbb C$.
$\|\boldsymbol x\|_p$ denotes the $p$-norm of a vector $\boldsymbol x$, ${\|\boldsymbol X\|}_F$ denotes the Frobenius norm of a matrix $\boldsymbol X$, and $|A|$ denotes the cardinality of set $A$.
$\boldsymbol I_n$ denotes an $n\times n$ identity matrix.
$\lceil x\rceil$ represents the ceiling function, which returns the smallest integer not less than $x$.
$\odot$ means element-wise multiplication and ${\rm Arg}(\cdot)\in[-\pi, \pi)$ gives the phase angle of a complex number.
\section{System Model}\label{Secsystemmodel}
\subsection{Transmission Scheme}
Let $\Phi$ denote a large but finite set of devices on the plane with area $S$, where each device is equipped with one antenna.
Further we denote ${\cal K}\subseteq \Phi$ as the active device set on the plane, each of which has $B$ bits to be sent.
Due to the sporadic traffic in MTC, the number of active devices in a given time interval is far less than the total number of devices, i.e., $|{\cal K}|\ll |\Phi|$.
Prior to transmission, a message of $B$ bits is partitioned into $2^d$ sub-blocks, where the $j$-th sub-block consists of $B_j$ information bits such that $\sum_{j=1}^{2^d}B_j=B$.
To patch the information bits in different sub-blocks together, we adopt the tree encoder proposed in \cite{Amalladinne2}.
Specifically, the tree encoder appends $l_j$ parity bits to sub-block $j$, where the appended check bits satisfy random parity constraints associated with the message bits contained in the previous sub-blocks ($l_1=0$ in all cases).
These parity check bits are needed to patch the information bits in the different sub-blocks together, but they do not themselves carry information.
All the sub-blocks have the same size, i.e., $B_j+l_j=J$.
Assume there are $2^p$ time slots. Each active device randomly selects 2 slots in which to send its $J$ bits.
We use $p$ bits to encode the location of the primary slot, and an arbitrary size-$p$ subset of the information bits to encode a \emph{translate}, which gives the secondary slot location when added to the primary slot location.
To distinguish the primary and secondary slots, we fix a single \emph{check bit} in the information bits to be 0 for the primary slot and 1 for the secondary slot, which reduces the number of transmitted information bits by 1.
Moreover, in this paper we deal with asynchronous transmission; to enable estimation of the device delay, we fix two further information bits to zero (to be specified in Section \ref{findpb}).
Fig. \ref{sottingandpatching} depicts the transmission scheme, where we set $d=1$ and $K=4$ for simplicity. Each device has $B$ information bits to send. The information bits are first divided into $2^d$ sub-blocks, where sub-block $j$ contains $B_j$ information bits along with $l_j$ parity bits such that $J=B_j+l_j$. Each device then sends 2 copies of its $J$ bits in 2 randomly selected slots out of the $2^p$ slots.
Finally, we perform the proposed decoding scheme and use the tree decoder to patch together the information in the different sub-blocks; the encoding steps are sketched below.
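The sketch below illustrates these encoding steps in Python; the random parity matrices and the block sizes are illustrative stand-ins for the tree code of \cite{Amalladinne2}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, J = 1, 10
B_sub = [10, 6]                  # information bits per sub-block (B_j)
l_sub = [J - x for x in B_sub]   # parity bits per sub-block (l_1 = 0)

msg = rng.integers(0, 2, sum(B_sub))
blocks, prev, start = [], np.empty(0, dtype=np.int64), 0
for j in range(2**d):
    info = msg[start:start + B_sub[j]]
    start += B_sub[j]
    A = rng.integers(0, 2, (l_sub[j], prev.size))  # random parity constraints
    parity = (A @ prev) % 2                        # checks over earlier bits
    blocks.append(np.concatenate([info, parity]))  # J = B_j + l_j bits
    prev = np.concatenate([prev, info])
# each J-bit block is then mapped to an RM sequence and sent in two slots
\end{verbatim}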
\begin{figure}
\begin{center}
\includegraphics[width=6in]{slotting_patching.pdf}
\end{center}
\caption{Illustration of the transmission scheme. We set $d=1$ and $K=4$ for simplicity.}\label{sottingandpatching}
\end{figure}
\subsection{Encoding}
Our approach is to encode the $J$ bits in each slot into a length-$N=2^m$ second-order RM sequence.
A length $2^m$ second-order RM sequence is determined by a symmetric binary matrix $\boldsymbol P^m\in{\mathbb Z}_2^{m\times m}$ and a binary vector $\boldsymbol b^m\in{\mathbb Z}_2^{m}$.
Since $\boldsymbol P^m$ is determined by $\frac12m(m+1)$ bits and $\boldsymbol b^m$ is determined by $m$ bits, each sequence encodes $\frac12m(m+3)$ bits.
Given the matrix-vector pair $(\boldsymbol P^m,\boldsymbol b^m)$, the $n$-th entry of the RM sequence $\boldsymbol X^m$ can be written as \cite{Robert}
\begin{equation}
\begin{split}\label{cm2}
X^m_n={\iota}^{ 2(\boldsymbol b^m)^{\rm T}\boldsymbol a_{n-1}^m+(\boldsymbol a_{n-1}^m)^{\rm T}\boldsymbol P^m\boldsymbol a_{n-1}^m },\quad n=1,\cdots,N,
\end{split}
\end{equation}
where $\iota^2=-1$ and $\boldsymbol a_{n-1}^m$ is the $m$-bit binary expansion of $n-1$. Eq. \eqref{cm2} indicates that $X^m_n\in \{1,-1,\iota,-\iota\}$.
In this case, we have
\begin{align}
J=\frac12m(m+3)+p-3.
\end{align}
Further, the number of information bits is written as
\begin{align}\label{B}
B=2^d\left(\frac12m(m+3)+p-3\right)-\sum\limits_{j=1}^{2^d}l_j
\end{align}
\subsection{Channel Model}
In this paper, we consider OFDM modulation where each symbol consists of $N$ subcarriers.
Denote the frequency samples of device $k$ as $X_{k,n}^m$ where $n=1,2,\cdots,N$ is the subcarrier index.
As explained before, $X_{k,n}^m$ is a RM sequence generated by \eqref{cm2}.
The time-domain OFDM symbol of device $k$ can be written as
\begin{align}\label{OFDMmodel}
x_k(t)=\sqrt{\gamma}\sum_{n=1}^NX_{k,n}^m{\rm e}^{2\pi \iota\Delta fnt},\quad t\in\left[0,\frac1{\Delta f}+\tau_{\rm max}\right],
\end{align}
where $\Delta f$ is the subcarrier spacing, $1/\Delta f$ is the symbol duration, and $\tau_{\rm max}$ is the maximum device delay; $X_{k,n}^m$ is device $k$'s sample transmitted on subcarrier $n$, and $\gamma$ denotes the transmit power.
We denote ${\cal K}_i$ as the active device set that transmitted in time slot $i$.
Let $K=|{\cal K}|$ and $K_i=|{\cal K}_i|$; then $K_i\approx 2K/2^p$, since each device randomly chooses 2 of the $2^p$ time slots.
Without loss of generality, we assume the indices of the active devices in slot $i$ are ${\cal K}_i=\{1, 2, \cdots, K_i\}$.
We focus on one AP equipped with $r$ antennas, and assume that the AP is located at the origin of the plane.
The receive signal of the $l$-th antenna of the AP at time slot $i$ is written as
\begin{align}\label{OFDMAPsignal}
y_{l,i}(t)&=\sum_{k=1}^{K_i}h_{k,l}x_k(t-\tau_k)+z_{l,i}(t),
\end{align}
where $\boldsymbol h_k=[h_{k,1},\cdots,h_{k,r}]^{\rm T}$ is the channel vector between device $k$ and the AP, $z_{l,i}(t)$ is additive white Gaussian noise, $\tau_k$ is the transmission delay, and $x_k(t)$ is the transmit signal of device $k$.
Then the AP samples at times $\frac1{\Delta f}\frac{u}{N}$, $u=1,\cdots,N+\left\lceil\tau_{\rm max}N\Delta f\right\rceil$. The total number of samples in each slot is thus $N+M$, where $M=\left\lceil\tau_{\rm max}N\Delta f\right\rceil$ is the length of the cyclic prefix.
Furthermore, the total codelength can be written as
\begin{align}\label{Codelength}
C=2^{d+p}(N+M).
\end{align}
Then the AP discards the first $M$ samples (the cyclic prefix) of each OFDM symbol to form the discrete-time receive signal
\begin{align}\label{OFDMAPdiscrete}
y_{l,i}(u)=\sqrt{\gamma}\sum_{k=1}^{K_i}h_{k,l}\sum_{n=1}^NX_{k,n}^m{\rm e}^{- \iota\Delta_k n}{\rm e}^{\iota\frac{2\pi }{N}nu}+z_{l,i}(u),\quad u=1,\cdots,N,
\end{align}
where $\Delta_k=2\pi\Delta f\tau_k$ is the normalized delay and $z_{l,i}(u)\sim{\cal CN}(0,1)$. We assume the normalized delay $\Delta_k$ is uniformly distributed on $[-\pi,\pi]$.
Performing $N$ point DFT on $[y_{l,i}(1),\cdots,y_{l,i}(N)]^{\rm T}$ yields
\begin{align}
Y_{l,i}^m(n)&=\sqrt{\gamma}\sum_{k=1}^{K_i}h_{k,l}\frac1{N}\sum_{u=1}^N{\rm e}^{-\iota\frac{2\pi }{N}nu}\sum_{v=1}^NX_{k,v}{\rm e}^{- \iota\Delta_k v}{\rm e}^{\iota\frac{2\pi }{N}uv}+Z_{l,i}^m(n)\\
&=\sqrt{\gamma}\sum_{k=1}^{K_i}h_{k,l}X_{k,n}{\rm e}^{-\iota\Delta_kn}+Z_{l,i}^m(n)\label{OFDMAPdiscreteDFT},
\end{align}
where
\begin{align}\label{OFDMAPdiscreteDFTnoise}
Z_{l,i}^m(n)&=\frac1{N}\sum_{u=1}^N{\rm e}^{-\iota\frac{2\pi }{N}nu} z_{l,i}(u)\sim{\cal CN}(0,1).
\end{align}
for $n=1,\cdots,N$ and $l=1,\cdots, r$.
Let $\boldsymbol Y_{i}^m(n)=[Y_{1,i}^m(n),\cdots,Y_{r,i}^m(n)]^{\rm T}$. For simplicity, denote $\boldsymbol Y_i^m=\left[\boldsymbol Y_{i}^m(1),\cdots,\boldsymbol Y_{i}^m(N)\right]$ as the DFT results in slot $i$.
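The per-subcarrier phase ramp ${\rm e}^{-\iota\Delta_k n}$ in \eqref{OFDMAPdiscreteDFT} is simply the DFT shift property; a quick numerical check for a single noiseless device:
\begin{verbatim}
import numpy as np

N = 8
rng = np.random.default_rng(4)
X = 1j ** rng.integers(0, 4, N)        # frequency-domain samples in {1,-1,i,-i}
n = np.arange(1, N + 1)
Delta = 0.37                           # normalized delay 2*pi*df*tau

u = np.arange(1, N + 1)                # sample indices after CP removal
y = np.array([np.sum(X * np.exp(-1j * Delta * n)
                     * np.exp(1j * 2 * np.pi * n * t / N)) for t in u])
Y = np.array([np.mean(y * np.exp(-1j * 2 * np.pi * k * u / N)) for k in n])
assert np.allclose(Y, X * np.exp(-1j * Delta * n))  # phase ramp per subcarrier
\end{verbatim}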
\subsection{Propagation Model and Cell Coverage}\label{PM}
Consider a multiaccess channel with active devices distributed across the plane according to a homogeneous Poisson point process with
intensity $\lambda$. The number of active devices on the plane with its area equal to $S$ is a Poisson random variable with mean $\lambda S\approx |\cal K|$.
We further divide the active device set ${\cal K}$ into an in-cell device set (neighbors) and an out-of-the-cell device set (non-neighbors) of the AP according to the nominal SNR between the AP and the devices. If the nominal SNR between a device and the AP is larger than a threshold, the device is considered an in-cell device of the AP.
The purpose of the AP is to identify all in-cell devices and/or decode their messages, where transmissions from out-of-the-cell devices are regarded as interference.
The small-scale fading between a device and the AP is modeled as independent Rayleigh fading, so that the corresponding power gain is exponentially distributed with unit mean.
The large-scale fading is modeled by the free-space path loss which attenuates over distance with some path loss exponent $\alpha>2$.
Let $D_{k}$ and $\boldsymbol G_k=[G_{k,1},\cdots,G_{k,r}]^{\rm T}$ denote the distance and the small-scale fading power gains between device $k$, $k=1,2,\cdots,K$, and the AP, respectively.
Then the channel gain between device $k$ and the $l$-th antenna of the AP is expressed as
\begin{equation}
\begin{split}
|h_{k,l}|^2=D_k^{-\alpha}G_{k,l},
\end{split}
\end{equation}
where the phase of $h_{k,l}$ is uniformly distributed on $[0,2\pi)$.
The coverage of the AP can be defined in many different ways. According to \cite{ZhangGuo}, device $k$ and the AP are neighbors of each other if the channel gain exceeds a certain threshold $\theta$.
If device $k$ and the AP are neighbors, i.e., $D_k^{-\alpha}\|\boldsymbol G_k\|_1>r\theta$, then $D_k<\left({\frac{\|\boldsymbol G_k\|_1}{r\theta}}\right)^{1/{\alpha}}$.
Under the assumption that all devices form a p.p.p., for given $\boldsymbol G_k$, device $k$ is uniformly distributed in a disk centered at the AP with radius $\left({\frac{\|\boldsymbol G_k\|_1}{r\theta}}\right)^{1/{\alpha}}$.
The average number of neighbors of the AP is calculated as
\begin{align}\label{averageneighbor}
K^{*}&=\mathbb E_{\boldsymbol \Phi}\left\{\sum\limits_{k\in\boldsymbol\Phi}1(D_k^{-\alpha}\|\boldsymbol G_k\|_1\ge r\theta)\right\}\\
&=2\pi\lambda\int_0^{\infty}\int_0^{\infty}1(gs^{-\alpha}\ge r\theta)s\frac1{\Gamma(r)}g^{r-1}{\rm e}^{-g}{\rm d}s{\rm d}g\\
&=2\pi\lambda\int_0^{\infty}\int_0^{\left(\frac{g}{r\theta}\right)^\frac1{\alpha}}s\frac1{\Gamma(r)}g^{r-1}{\rm e}^{-g}{\rm d}s{\rm d}g\\
&=\pi\lambda\int_0^{\infty}\left(\frac{g}{r\theta}\right)^\frac2{\alpha}\frac1{\Gamma(r)}g^{r-1}{\rm e}^{-g}{\rm d}g\\
&=\pi\lambda(r\theta)^{-\frac2{\alpha}}\frac{\Gamma\left(\frac2{\alpha}+r\right)}{\Gamma(r)}\label{Kstar}
\end{align}
where $\Gamma(\cdot)$ is the Gamma function and $1(\cdot)$ is the indicator function.
Eq. \eqref{Kstar} indicates that $K^{*}$ is an increasing function of $r$.
In addition, the sum power of all out-of-the-cell devices can be derived as
\begin{align}\label{nonneighbor}
\sigma^2&=\mathbb E_{\boldsymbol \Phi}\left\{\sum\limits_{k\in\boldsymbol\Phi}1(D_k^{-\alpha}\|\boldsymbol G_k\|_1< r\theta)\gamma D_k^{-\alpha}\|\boldsymbol G_k\|_1\right\}\\
&=2\pi\lambda\gamma\int_0^{\infty}\int_0^{\infty}gs^{-\alpha}1(gs^{-\alpha}< r\theta)s\frac1{\Gamma(r)}g^{r-1}{\rm e}^{-g}{\rm d}s{\rm d}g\\
&=2\pi\lambda\gamma\int_0^{\infty}\int_{\left(\frac{g}{r
\theta}\right)^\frac1{\alpha}}^{\infty}s^{1-\alpha}\frac1{\Gamma(r)}g^{r}{\rm e}^{-g}{\rm d}s{\rm d}g\\
&=\frac{2\pi\lambda\gamma}{\alpha-2}\int_0^{\infty}\left(\frac{g}{r\theta}\right)^\frac{2-\alpha}{\alpha}\frac1{\Gamma(r)}g^{r}{\rm e}^{-g}{\rm d}g\\
&=(r\theta)^{1-2/\alpha}\frac{2\pi\lambda\gamma}{(\alpha-2)}\frac{\Gamma\left(\frac2{\alpha}+r\right)}{\Gamma(r)}
\end{align}
\section{A Property of RM Sequences}\label{RMrelationship}
Before presenting the decoding algorithm, we first derive a property of RM sequences, which forms the basis of our decoding algorithm.
Let $m$ be a given positive integer and let $\boldsymbol b^s=[b^m_1,b_2^m,\cdots,b_s^m]^{\rm T}$ be a binary $s$-tuple. For $s=2,\cdots,m$, we have
\begin{equation}
\begin{split}\label{b}
\boldsymbol b^s=\left[\begin{array}{c}
\boldsymbol b^{s-1} \\
b_s^m
\end{array}\right].
\end{split}
\end{equation}
Furthermore, let $\boldsymbol P^1=[\beta^m_1]$. For $s=2,\cdots,m$, let the $s\times s$ binary matrix $\boldsymbol P^s$ be defined recursively as
\begin{equation}
\begin{split}\label{P}
\boldsymbol P^s=\left[\begin{array}{cccc}
\boldsymbol P^{s-1} & \boldsymbol \eta^s \\
(\boldsymbol \eta^s)^{\rm T} & \beta^m_s
\end{array}\right]
\end{split},
\end{equation}
where $[\beta^m_1,\beta_2^m,\cdots,\beta_s^m]^{\rm T}$ contains the main diagonal elements of $\boldsymbol P^s$, and $\boldsymbol\eta^s$ is a binary column vector of length $s-1$.
We have the following result.
\begin{prop}\label{Proposition1}
Given a length-$2^m$ RM sequence, its order-$s$ and order-$(s-1)$ subsequences satisfy
\begin{equation}
\begin{split}\label{structure}
\left\{\begin{array}{l}
X^s_{2n}=V_n^{s-1}X_n^{s-1} \\
X^s_{2n-1}=X_n^{s-1}
\end{array}\right.,\quad n=1,\cdots,2^{s-1}, s=2,\cdots,m
\end{split}
\end{equation}
where
\begin{align}\label{v}
V_n^{s-1}&=(-1)^{b_s^m+\frac12\beta_s^m+(\boldsymbol \eta^s)^{\rm T}\boldsymbol a_{n-1}^{s-1}}.
\end{align}
The vector $[V_1^{s-1},\cdots,V_{2^{s-1}}^{s-1}]^{\rm T}$ is a length-$2^{s-1}$ Walsh sequence with frequency $\boldsymbol\eta^s$.
\end{prop}
\begin{IEEEproof}
Recall that $\boldsymbol a_n^s$ is the $s$-bit binary expansion of $n$. For $n=1,\cdots,2^{s-1}$, the vector $\boldsymbol a_{2n-1}^s$ can be decomposed as
\begin{equation}
\begin{split}
\boldsymbol a_{2n-1}^s=\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\
1
\end{array}\right].
\end{split}
\end{equation}
Consequently
\begin{align}
&2(\boldsymbol b^s)^{\rm T}\boldsymbol a_{2n-1}^s+(\boldsymbol a_{2n-1}^s)^{\rm T}\boldsymbol P^s\boldsymbol a_{2n-1}^s\notag\\
&= \left[\begin{array}{cc}
\left(\boldsymbol a_{n-1}^{s-1}\right)^{\rm T} & 1
\end{array}\right]
\left[\begin{array}{cccc}
\boldsymbol P^{s-1} & \boldsymbol \eta^s \\
(\boldsymbol \eta^s)^{\rm T} & \beta^m_s
\end{array}\right]\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\ 1
\end{array}\right]+2(\boldsymbol b^s)^{\rm T}\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\ 1
\end{array}\right]\\\label{2j1}
&=2(\boldsymbol b^{s-1})^{\rm T}\boldsymbol a_{n-1}^{s-1}\!+2b_s^m\!+\beta_s^m+\!2(\boldsymbol \eta^s)^{\rm T}\boldsymbol a_{n-1}^{s-1}\!+\left(\boldsymbol a_{n-1}^{s-1}\right)^{\rm T}\!\boldsymbol P^{s-1}\boldsymbol a_{n-1}^{s-1}.
\end{align}
Substituting \eqref{2j1} into \eqref{cm2} yields
\begin{align}
X^s_{2n}&=X_n^{s-1}\iota^{2b_s^m+\beta_s^m+2(\boldsymbol \eta^s)^{\rm T}\boldsymbol a_{n-1}^{s-1}}\\
&=V_n^{s-1}X_n^{s-1}.
\end{align}
Likewise, the binary vector $\boldsymbol a_{2n-2}^s$ can be decomposed as
\begin{equation}
\begin{split}
\boldsymbol a_{2n-2}^s=\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\
0
\end{array}\right]
\end{split}.
\end{equation}
Then the exponent of $X^s_{2n-1}$ is expressed as
\begin{align}
&2(\boldsymbol b^s)^{\rm T}\boldsymbol a_{2n-2}^s+(\boldsymbol a_{2n-2}^s)^{\rm T}\boldsymbol P^s\boldsymbol a_{2n-2}^s\notag\\
&=\left[\begin{array}{cc}
\left(\boldsymbol a_{n-1}^{s-1}\right)^{\rm T} & 0
\end{array}\right]
\left[\begin{array}{cccc}
\boldsymbol P^{s-1} & \boldsymbol \eta^s \\
(\boldsymbol \eta^s)^{\rm T} & \beta_s^m
\end{array}\right]\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\ 0
\end{array}\right]+ 2(\boldsymbol b^s)^{\rm T}\left[\begin{array}{c}
\boldsymbol a_{n-1}^{s-1} \\ 0
\end{array}\right]\\\label{2j}
&=2(\boldsymbol b^{s-1})^{\rm T}\boldsymbol a_{n-1}^{s-1}+\left(\boldsymbol a_{n-1}^{s-1}\right)^{\rm T}\boldsymbol P^{s-1}\boldsymbol a_{n-1}^{s-1}.
\end{align}
Substituting \eqref{2j} into \eqref{cm2} yields
\begin{equation}
\begin{split}
X^s_{2n-1}=X_n^{s-1}.
\end{split}
\end{equation}
\end{IEEEproof}
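Proposition \ref{Proposition1} is easy to verify numerically with the \texttt{rm\_sequence} sketch from Section \ref{Secsystemmodel} (under MSB-first bit ordering, the recursive blocks of $\boldsymbol P^m$ and $\boldsymbol b^m$ are the leading sub-blocks, and the Walsh factor is the one in \eqref{v}):
\begin{verbatim}
import numpy as np
# uses rm_sequence() from the earlier sketch

rng = np.random.default_rng(1)
m = 5
U = np.triu(rng.integers(0, 2, (m, m)))
P = U + np.triu(U, 1).T                   # random symmetric binary P^m
b = rng.integers(0, 2, m)

X_m = rm_sequence(P, b)                   # order-m sequence
X_sub = rm_sequence(P[:-1, :-1], b[:-1])  # order-(m-1) subsequence
eta, beta, bm = P[:-1, -1], P[-1, -1], b[-1]

# odd entries X_{2n-1} reproduce the subsequence ...
assert np.allclose(X_m[0::2], X_sub)
# ... and even entries X_{2n} pick up the Walsh factor V_n
a = lambda k: np.array([(k >> (m - 2 - i)) & 1 for i in range(m - 1)])
V = np.array([1j ** int(2 * bm + beta + 2 * (eta @ a(k)))
              for k in range(2**(m - 1))])
assert np.allclose(X_m[1::2], V * X_sub)
\end{verbatim}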
{\emph {Remark 1}:} The structure of the derived RM sequence is similar to that given in \cite{Hanzo}. The differences are twofold: 1) given the code length $2^m$, the structure derived in this paper allows us to send $2m$ more bits of information than the structure in \cite{Hanzo}; 2) the way we split the sequences is different.
\section{Device Identification/Decoding and Channel Estimation}\label{useridentification}
In this section, we propose a novel RM asynchronous detection algorithm for active device detection and channel estimation that leverages Proposition \ref{Proposition1}.
\subsection{RM Asynchronous Detection Algorithm}
According to Fig. \ref{sottingandpatching}, the AP decodes the messages in different sub-blocks in a sequential manner.
In each sub-block, the AP decodes the messages slot by slot. Since each device transmits in 2 time slots, a message decoded in one time slot is propagated to its other time slot to eliminate its interference there.
The detailed algorithm is summarized as in Algorithm 1.
\begin{center}
\begin{tabular}{l l}
\toprule
\multicolumn{1}{l}{{\bf Algorithm 1}: RM asynchronous detection algorithm.}\\
\midrule
{\bf Input}: the received signal $[\boldsymbol Y^m_1,\cdots,\boldsymbol Y^m_{2^p}]$, the average number of devices $K_{\rm max}$ in each\\
slot.\\
{\bf for} $patch=1:2^d$ {\bf do}\\
\quad Set $\boldsymbol P=[\ ]$, $\boldsymbol b=[\ ]$, $slot=[\ ], \boldsymbol h=[\ ]$, $\boldsymbol \Delta=[\ ]$, $t=0$.\\
\quad{\bf for} $i=1:2^p$ {\bf do}\\
\quad\quad $k\leftarrow0$.\\
\quad\quad {\bf for }$j=1:s$ {\bf do}\\
\quad\quad\quad {\bf if} $slot[j]=i$ {\bf do}\\
\quad\quad\quad\quad Remove the interference of device $j$ in slot $i$ and update $\boldsymbol Y^q_i$ according to \eqref{OFDMAPdiscreteDFTSIC}.\\
\quad\quad\quad\quad $t\leftarrow t+1$.\\
\quad\quad\quad {\bf end if}\\
\quad\quad {\bf end for}\\
\quad\quad $(\hat{\boldsymbol P}^m,\hat{\boldsymbol b}^m,\hat{\boldsymbol h}, \hat{\Delta})\leftarrow {\bf findPb}\ (\boldsymbol Y_i^q)$.\\
\quad\quad Denote $k_1$ as the number of detected messages in slot $i$.\\
\quad\quad {\bf for} $j=1:k_1$\\
\quad\quad\quad {\bf if} $(\hat{\boldsymbol P}_j^m, \hat{\boldsymbol b}_j^m)$ are not recorded in $(\boldsymbol P, \boldsymbol b)$ {\bf do}\\
\quad\quad\quad\quad $t\leftarrow t+1$.\\
\quad\quad\quad\quad $\boldsymbol P[:,:,t]\leftarrow \hat{\boldsymbol P}_{j}^m$.\\
\quad\quad\quad\quad $\boldsymbol b[:,t]\leftarrow \hat{\boldsymbol b}_{j}^m$.\\
\quad\quad\quad\quad $\boldsymbol h[:,t]\leftarrow \hat{\boldsymbol h}_{j}$.\\
\quad\quad\quad\quad $\boldsymbol\Delta[t]\leftarrow \hat{\Delta}_{j}$.\\
\quad\quad\quad\quad Calculate the other transmission slot according to $\left(\hat{\boldsymbol P}_j^m, \hat{\boldsymbol b}_j^m\right)$ and update $slot[t]$.\\
\quad\quad\quad {\bf end if }\\
\quad\quad {\bf end for}\\
\quad{\bf end for}\\
\quad Record $\boldsymbol P, \boldsymbol b, \boldsymbol h$, and $\boldsymbol\Delta$ in each sub-block.\\
{\bf end for}\\
{\bf Output}: Use the tree decoder to patch the information bits together and output them.\\
\bottomrule
\end{tabular}
\end{center}
The {\bf findPb} algorithm in Algorithm 1 returns all the messages transmitted in slot $i$, including the information bits $(\boldsymbol P, \boldsymbol b)$, the channel vector $\boldsymbol h$, and the device delay $\Delta$.
The {\bf findPb} algorithm decodes the messages transmitted in slot $i$ in a sequential manner.
Assume the channel gain of device $k\in\{1,\cdots,K_i\}$ is the largest.
We will show that device $k$ can be first estimated from the received signal \eqref{OFDMAPdiscreteDFT}.
After device $k$ is detected, the AP performs successive interference cancellation (SIC) to remove the interference of device $k$ to detect the remaining devices.
This requires the AP to estimate not only the matrix-vector pair $(\boldsymbol P_k^m,\boldsymbol b_k^m)$, but also the device delay $\Delta_k$.
In this paper, to estimate the device delay, we let
\begin{align}
b_{l,m}^m=\beta_{l,m}^m=0, \quad l=1,\cdots,K_i.
\end{align}
For simplicity, let $\boldsymbol \Delta_k^m=[\Delta_{k,1}^m, \cdots, \Delta_{k,m}^m]$, where
\begin{align}
\Delta_{k,l}^m={\rm Arg}\left({\rm e}^{2^{l-1}j\Delta_k}\right),\quad l=1,\cdots, m.
\end{align}
In the next subsection, we show how to estimate the messages of device $k$.
\subsection{The {\bf findPb} Algorithm}\label{findpb}
According to \eqref{b} and \eqref{P}, the matrix-vector pair $(\boldsymbol P^m,\boldsymbol b^m)$ is determined by $(\boldsymbol\eta^{s},b_s^m,\beta_s^m), s=m,\cdots,2$ and $(b_{1}^m,\beta_{1}^m)$.
Specifically, the matrix-vector pair of the $k$th device $(\boldsymbol P_k^m,\boldsymbol b_k^m)$ will be estimated recursively.
We will show that the algorithm first estimates $\boldsymbol \eta_k^{m}$, then $(\boldsymbol \eta_k^{m-1},b_{k,m-1}^m,\beta_{k,m-1}^m)$, and finally the channel coefficient $\boldsymbol h_k$, $(b_{k,1}^m,\beta_{k,1}^m)$, and ${\Delta}_k$.
Then the received signal in slot $i$ is updated as
\begin{align}
\boldsymbol Y_{i}^m(n)\leftarrow \boldsymbol Y_{i}^m(n)-\sqrt{\gamma}\boldsymbol h_{k}X_{k,n}^m{\rm e}^{-j\Delta_{k}n}\label{OFDMAPdiscreteDFTSIC},
\end{align}
for detecting the remaining devices.
\subsubsection{Estimation of $(\boldsymbol \eta_k^{m},\Delta_{k,1}^m)$}\label{sectionalpham}
From \eqref{OFDMAPdiscreteDFT} and \eqref{structure}, when $n=1,\cdots,2^{m-1}$, we have
\begin{align}
\boldsymbol Y_{i}^m(2n)&=\sqrt{\gamma}\sum_{k=1}^{K_i} \boldsymbol h_k X_{k,2n}^m{\rm e}^{-2j\Delta_kn}+\boldsymbol Z_{i}^m(2n)\\
&=\sqrt{\gamma}\sum_{k=1}^{K_i} \boldsymbol h_k V_{k,n}^{m-1}X_{k,n}^{m-1}{\rm e}^{-2j\Delta_kn}+\boldsymbol Z_{i}^m(2n)\label{y2j0},
\end{align}
and
\begin{align}
\boldsymbol Y_{i}^m(2n-1)&=\sqrt{\gamma}\sum_{k=1}^{K_i}\boldsymbol h_k X_{k,2n-1}^m{\rm e}^{-j\Delta_k(2n-1)}+\boldsymbol Z_{i}^m(2n-1)\\
&=\sqrt{\gamma}\sum_{k=1}^{K_i} \boldsymbol h_k X_{k,n}^{m-1}{\rm e}^{-j\Delta_k(2n-1)}+\boldsymbol Z_{i}^m(2n-1)\label{y2j1}.
\end{align}
Define $\tilde Y_n^{m-1}=[\boldsymbol Y_{i}^m(2n)]^{\rm T}[\boldsymbol Y_{i}^m(2n-1)]^{*}$ for $n=1,\cdots,2^{m-1}$. Then \eqref{y2j0} and \eqref{y2j1} lead to
\begin{align}
\tilde Y_n^{m-1}&=\gamma\sum_{k=1}^{K_i} \left\|\boldsymbol h_kX_{k,n}^{m-1}\right\|^2 V_{k,n}^{m-1}{\rm e}^{-j\Delta_k} + \tilde {Z}_{n}^{m-1}\\
&=\gamma\sum_{k=1}^{K_i} \|\boldsymbol h_k\|^2 V_{k,n}^{m-1}{\rm e}^{-j\Delta_k}+ \tilde Z_{n}^{m-1}\label{ym1},
\end{align}
where
\begin{equation}
\begin{split}
\tilde {Z}_{n}^{m-1}&= \gamma\sum_{l=1}^{K_i}\sum_{k\ne l} \boldsymbol h_k^{\rm T}V_{k,n}^{m-1}X_{k,n}^{m-1}(\boldsymbol h_lX_{l,n}^{m-1})^{*}{\rm e}^{-2j\Delta_kn}{\rm e}^{j\Delta_l(2n-1)}\\
&+\sqrt{\gamma}\sum_{k=1}^{K_i} \boldsymbol h_k^{\rm T} V_{k,n}^{m-1}X_{k,n}^{m-1}{\rm e}^{-2j\Delta_kn}(\boldsymbol Z_{i}^m(2n-1))^{*}\\
&+\sqrt{\gamma}(\boldsymbol Z_{i}^m(2n))^{\rm T}\sum_{l=1}^{K_i} (\boldsymbol h_lX_{l,n}^{m-1}{\rm e}^{-j\Delta_l(2n-1)})^{*} +(\boldsymbol Z_{i}^m(2n))^{\rm T}(\boldsymbol Z_{i}^m(2n-1))^{*}.
\end{split}
\end{equation}
The first term on the right-hand side of \eqref{ym1} is a linear combination of Walsh functions $V_{k,n}^{m-1}, k=1,2,\cdots,K_i$, with frequencies $\boldsymbol\eta^m_k$, which can be recovered by applying the Walsh--Hadamard transform (WHT). The second term, $\tilde {Z}_{n}^{m-1}$, is a linear combination of chirps, whose energy can be considered to spread across all Walsh functions to an equal degree; these cross terms therefore appear as a uniform noise floor.
Let the Hadamard matrix be $\boldsymbol W^m=\left[\boldsymbol w_1^m,\boldsymbol w_2^m,\cdots, \boldsymbol w_{2^m}^m\right]^{\rm T}$, whose $(l,n)$-th element is $W_{l,n}^m=(-1)^{(\boldsymbol a_{l-1}^m)^{\rm T}{\boldsymbol a_{n-1}^m}}$. Denote the WHT by $\boldsymbol t^{m-1}=\boldsymbol W^{m-1}\tilde {\boldsymbol Y}^{m-1}$, where $\tilde {\boldsymbol Y}^{m-1}=[\tilde{Y}_1^{m-1},\cdots,\tilde{Y}_{2^{m-1}}^{m-1}]^{\rm T}$. The $l$-th entry of $\boldsymbol t^{m-1}$ can be written as
\begin{align}
t_l^{m-1}&=(\boldsymbol w_l^{m-1})^{\rm T}\tilde {\boldsymbol Y}^{m-1}\\
&=\sum_{n=1}^{2^{m-1}}(-1)^{(\boldsymbol a_{l-1}^{m-1})^{\rm T}{\boldsymbol a_{n-1}^{m-1}}}\left(\gamma\sum_{k=1}^{K_i} \|\boldsymbol h_k\|^2 V_{k,n}^{m-1}{\rm e}^{-j\Delta_k} \!+\! \tilde {Z}_{n}^{m-1}\right)\\
&=\gamma\sum_{n=1}^{2^{m-1}}(-1)^{\left(\boldsymbol a_{l-1}^{m-1}\right)^{\rm T}{\boldsymbol a_{n-1}^{m-1}}}\sum_{k=1}^{K_i}{\rm e}^{-j\Delta_k} \|\boldsymbol h_k\|^2 (-1)^{b_{k,m}^m+\frac12\beta_{k,m}^m+(\boldsymbol \eta_k^m)^{\rm T}\boldsymbol a_{n-1}^{m-1}}\notag\\
&+\sum_{n=1}^{2^{m-1}}(-1)^{(\boldsymbol a_{l-1}^{m-1})^{\rm T}{\boldsymbol a_{n-1}^{m-1}}}\tilde {Z}_{n}^{m-1}.\label{hadaym0}
\end{align}
Equation \eqref{hadaym0} can be further written as
\begin{align}
t_l^{m-1}&=\gamma\sum_{k=1}^K(-1)^{b_{k,m}^m+\frac12\beta_{k,m}^m} {\rm e}^{-j\Delta_k}\|\boldsymbol h_k\|^2\sum_{n=1}^{2^{m-1}}(-1)^{\left(\boldsymbol \eta_k^{m}+\boldsymbol a_{l-1}^{m-1}\right)^{\rm T}{\boldsymbol a_{n-1}^{m-1}}}\notag\\
&+\sum_{n=1}^{2^{m-1}}(-1)^{(\boldsymbol a_{l-1}^{m-1})^{\rm T}{\boldsymbol a_{n-1}^{m-1}}}\tilde {Z}_{n}^{m-1}\label{hadaym1}.
\end{align}
Equation \eqref{hadaym1} indicates that the inner sum is nonzero only when $\boldsymbol a_{l-1}^{m-1}=\boldsymbol \eta_k^m$. Hence, peaks appear at the frequencies $\boldsymbol\eta^m_k, k\in\{1,2,\cdots,{K_i}\}$, with peak value ${\rm e}^{-j\Delta_k}2^{m-1}\gamma \|\boldsymbol h_k\|^2$.
On this basis, $ \boldsymbol{\hat\eta}_k^m$ can be recovered by searching for the entry of $\boldsymbol t^{m-1}$ with the largest absolute value, and $V_{k,n}^{m-1}$ can then be estimated from $\boldsymbol{\hat\eta}_k^m$.
Furthermore, since $b_{k,m}^m=\beta_{k,m}^m=0$, the delay of device $k$ can be recovered from the phase angle of the peak:
\begin{align}
{\hat\Delta}_{k,1}^m=-{\rm Arg}(\max \boldsymbol t^{m-1} )\label{Delta1}.
\end{align}
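To make this step concrete, the following sketch implements the conjugate-multiply-and-transform chain of \eqref{ym1}--\eqref{Delta1}. It is illustrative only: a dense Hadamard multiplication stands in for the fast WHT used in practice, the function name is ours, and the bit ordering of the recovered $\boldsymbol{\hat\eta}_k^m$ is an assumed convention.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def estimate_eta_delta(Y, m):
    # Y: r x 2^m received block of one slot, columns Y(1), ..., Y(2^m)
    odd, even = Y[:, 0::2], Y[:, 1::2]         # Y(2n-1) and Y(2n)
    Yt = np.sum(even * np.conj(odd), axis=0)   # conjugate product, eq. (ym1)
    t = hadamard(2 ** (m - 1)) @ Yt            # WHT, eq. (hadaym0)
    l = int(np.argmax(np.abs(t)))              # peak search
    eta_hat = np.array([(l >> (m - 2 - i)) & 1 for i in range(m - 1)])
    delta_hat = -np.angle(t[l])                # eq. (Delta1), b_m = beta_m = 0
    return eta_hat, delta_hat
\end{verbatim}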
\subsubsection{Estimation of $(\boldsymbol \eta_k^{m-1},b_{k,m-1}^m,\beta_{k,m-1}^m, {\Delta}_{k,2}^m)$}\label{sectionalpham1}
After recovering $(\boldsymbol{\hat\eta}_k^{m},{\hat \Delta}_{k,1}^m)$, we next estimate $(\boldsymbol\eta_k^{m-1},b_{k,m-1}^m,\beta_{k,m-1}^m, {\Delta}_{k,2}^m)$ in a similar way.
Define
\begin{align}\label{ym1next}
\boldsymbol Y_i^{m-1}(n)=\frac12\left({{\rm e}^{-j\hat{\Delta}_{k,1}^m}\boldsymbol Y_{i}^m(2n-1)}+({\hat V}_{k,n}^{m-1})^{*} \boldsymbol Y_{i}^m(2n)\right).
\end{align}
Under the assumption that ${\hat {\boldsymbol V}}_k^{m-1}$ and ${\hat \Delta}_{k,1}^m$ are correctly estimated, according to \eqref{y2j0} and \eqref{y2j1}, $\boldsymbol Y_i^{m-1}(n)$ is further expressed as
\begin{align}\label{yjm1}
\boldsymbol Y_{i}^{m-1}(n)\!&=\sqrt{\gamma}\boldsymbol h_k X_{k,n}^{m-1}{\rm e}^{-2j\Delta_kn}\!+\boldsymbol A_i^{m-1}(n) +\boldsymbol Z_i^{m-1}(n),
\end{align}
where the term
\begin{align}
\boldsymbol A^{m-1}_i(n)=\frac{\sqrt{\gamma}}2\sum_{l\ne k}\! \boldsymbol h_l X_{l,n}^{m-1}\!{\rm e}^{-2j\Delta_ln}\left({\rm e}^{-j(\hat{\Delta}_{k,1}^m-{\Delta}_l)}+(\hat V_{k,n}^{m-1})^{*}V_{l,n}^{m-1}\right)
\end{align}
consists of all the {\em interference} from other devices (all of which are second-order RM sequences) and
\begin{align}
\boldsymbol Z_i^{m-1}(n)=\frac12\left((\hat V_{k,n}^{m-1})^{*}\boldsymbol Z_{i}^m(2n)+{\rm e}^{-j\hat{\Delta}_{k,1}^m}\boldsymbol Z_{i}^m(2n-1)\right)\sim {\cal CN}\left(\boldsymbol 0,\frac{1}{2}\boldsymbol I\right),
\end{align}
i.e., the variance of the channel noise is reduced by half.
Besides, we have
\begin{align}
\left\|\frac12\boldsymbol h_l\left({\rm e}^{-j(\hat{\Delta}_{k,1}^m-{\Delta}_l)}+(\hat V_{k,n}^{m-1})^{*}V_{l,n}^{m-1}\right)\right\|\le \|\boldsymbol h_l\|
\end{align}
which indicates that the equivalent channel gain of the interferences is reduced.
When $n=1,\cdots,2^{m-2}$, applying Proposition~\ref{Proposition1} to \eqref{yjm1} leads to
\begin{align}
\boldsymbol Y_{i}^{m-1}(2n)\!&=\sqrt{\gamma}\boldsymbol h_k X_{k,2n}^{m-1}{\rm e}^{-4j\Delta_kn}\!+\boldsymbol A^{m-1}_i({2n}) +\boldsymbol Z_{i}^{m-1}(2n)\\
&=\sqrt{\gamma}\boldsymbol h_k V_{k,n}^{m-2}X_{k,n}^{m-2}{\rm e}^{-4j\Delta_kn}\!+\boldsymbol A^{m-1}_i(2n) +\boldsymbol Z_{i}^{m-1}(2n),
\end{align}
and
\begin{align}
\boldsymbol Y_{i}^{m-1}(2n-1)&=\sqrt{\gamma}\boldsymbol h_k X_{k,2n-1}^{m-1}{\rm e}^{-2j\Delta_k(2n-1)}+\boldsymbol A^{m-1}_i(2n-1) +\boldsymbol Z_{i}^{m-1}(2n-1)\\
&=\sqrt{\gamma}\boldsymbol h_kX_{k,n}^{m-2}{\rm e}^{-2j\Delta_k(2n-1)}+\boldsymbol A^{m-1}_i(2n-1) +\boldsymbol Z_{i}^{m-1}(2n-1).
\end{align}
Letting $\tilde{Y}_n^{m-2}=[\boldsymbol Y_{i}^{m-1}(2n)]^{\rm T}[\boldsymbol Y_{i}^{m-1}(2n-1)]^{*}$, we have
\begin{align}\label{yjm21}
\tilde{Y}_n^{m-2}&={\gamma}\|\boldsymbol h_k\|^2V_{k,n}^{m-2}{\rm e}^{-2j\Delta_k}\!+\tilde{Z}_n^{m-2}
\end{align}
where
\begin{align}
V_{k,n}^{m-2}=(-1)^{b_{k,m-1}^m+\frac12\beta_{k,m-1}^m+(\boldsymbol \eta_{k}^{m-1})^{\rm T}\boldsymbol a_{n-1}^{m-2}}
\end{align}
and
\begin{align}
&\tilde{Z}_n^{m-2}=(\sqrt{\gamma}\boldsymbol h_k V_{k,n}^{m-2}X_{k,n}^{m-2}{\rm e}^{-4j\Delta_kn})^{\rm T}[\boldsymbol A^{m-1}_{i}(2n-1) +\boldsymbol Z_{i}^{m-1}(2n-1)]^{*} \notag \\
&+ [\boldsymbol A^{m-1}_{i}(2n) +\boldsymbol Z_{i}^{m-1}(2n)]^{\rm T}[\sqrt{\gamma}\boldsymbol h_kX_{k,n}^{m-2}{\rm e}^{-2j\Delta_k(2n-1)}\!+\boldsymbol A^{m-1}_{i}(2n\!-\!1) +\boldsymbol Z_{i}^{m-1}(2n\!-\!1)]^{*}.
\end{align}
Similar to \eqref{ym1}, applying WHT on $\tilde {\boldsymbol Y}^{m-2}=[\tilde{Y}_1^{m-2},\cdots,\tilde{Y}_{2^{m-2}}^{m-2}]^{\rm T}$ yields
\begin{align}
t_l^{m-2}&=(\boldsymbol w_l^{m-2})^{\rm T}\tilde {\boldsymbol Y}^{m-2}\\
&=\sum_{n=1}^{2^{m-2}}(-1)^{(\boldsymbol a_{l-1}^{m-2})^{\rm T}{\boldsymbol a_{n-1}^{m-2}}}\left(\gamma \|\boldsymbol h_k\|^2 V_{k,n}^{m-2}{\rm e}^{-2j\Delta_k} \!+\! \tilde {Z}_{n}^{m-2}\right)\\
&=\gamma(-1)^{b_{k,m-1}^m+\frac12\beta_{k,m-1}^m} {\rm e}^{-2j\Delta_k}\|\boldsymbol h_k\|^2\sum_{n=1}^{2^{m-2}}(-1)^{\left(\boldsymbol \eta_k^{m-1}+\boldsymbol a_{l-1}^{m-2}\right)^{\rm T}{\boldsymbol a_{n-1}^{m-2}}}\notag\\
&+\sum_{n=1}^{2^{m-2}}(-1)^{(\boldsymbol a_{l-1}^{m-2})^{\rm T}{\boldsymbol a_{n-1}^{m-2}}}\tilde {Z}_{n}^{m-2}\label{hadaym2}.
\end{align}
Equation \eqref{hadaym2} indicates that $\boldsymbol {\hat\eta}_k^{m-1}$ can be recovered by searching the maximum value of the result. Comparing \eqref{ym1} and \eqref{yjm21}, we know that $\boldsymbol {\hat\eta}_k^{m-1}$ is more likely to be correctly estimated than $\boldsymbol {\hat\eta}_k^{m}$ because the variance of channel noise is reduced by half.
Moreover, we have
\begin{align}\label{bmbetam}
(-1)^{b_{k,m-1}^m+\frac12\beta_{k,m-1}^m}=\left\{\begin{array}{rc}
-i & \ {\rm if}\ (b_{k,m-1}^m,\beta_{k,m-1}^m)=(1,1), \\
-1 & \ {\rm if}\ (b_{k,m-1}^m,\beta_{k,m-1}^m)=(1,0), \\
i & \ {\rm if}\ (b_{k,m-1}^m,\beta_{k,m-1}^m)=(0,1), \\
1 & \ {\rm if}\ (b_{k,m-1}^m,\beta_{k,m-1}^m)=(0,0).
\end{array}\right.
\end{align}
Since $\Delta_k\approx \hat\Delta_{k,1}^m$, equation \eqref{bmbetam} indicates that $(b_{k,m-1}^m,\beta_{k,m-1}^m)$ can be estimated by the polarity of the largest value of ${\rm e}^{2j{\hat\Delta}_{k,1}^m}\boldsymbol t^{m-2}$.
For example, if the real part of the maximum value is positive and greater than the absolute value of the imaginary part, then we have $(b_{k,m-1}^m,\beta_{k,m-1}^m)=(0,0)$.
Further, $V_{k,n}^{m-2}$ is recovered through
\begin{equation}
\begin{split}\label{vkm1}
{\hat V}_{k,n}^{m-2}=(-1)^{\hat b_{k,m-1}^m+\frac12\hat\beta_{k,m-1}^m+(\boldsymbol {\hat\eta}_k^{m-1})^{\rm T}\boldsymbol a_{n-1}^{m-2}},
\end{split}
\end{equation}
and
\begin{align}\label{Delta2}
{\hat\Delta}_{k,2}^m=-{\rm Arg}\left\{(\max \boldsymbol t^{m-2})\left[(-1)^{\hat b_{k,m-1}^m+\frac12\hat\beta_{k,m-1}^m}\right]^{*} \right\}.
\end{align}
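The polarity decision described above amounts to finding the nearest element of $\{1,i,-1,-i\}$ to the phase-rotated peak; a minimal sketch (with an illustrative function name) is given below.
\begin{verbatim}
import numpy as np

# values of (-1)^{b + beta/2} from eq. (bmbetam)
QUAD = {(0, 0): 1, (0, 1): 1j, (1, 0): -1, (1, 1): -1j}

def detect_b_beta(peak, delta_hat):
    # peak: complex WHT peak value; delta_hat: current delay estimate
    z = peak * np.exp(2j * delta_hat)    # undo the e^{-2j Delta_k} rotation
    return min(QUAD, key=lambda bb: abs(z / abs(z) - QUAD[bb]))
\end{verbatim}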
\subsubsection{Estimation of Channel Coefficient $(b_{k,1}^m,\beta_{k,1}^m,h_k)$}\label{sectionalpham11}
We continue this process until all the estimates $\left(\boldsymbol {\hat\eta}_k^s, {\hat b}_{k,s}^m, {\hat\beta}_{k,s}^m\right), s\in\{2,\cdots,m\}$, and $\hat{\Delta}_{k,m-1}^m$ are obtained.
We have ${\rm e}^{j\hat{\Delta}_{k,m-1}^m} \approx {\rm e}^{2^{m-2}j{\Delta}_{k}}$.
According to \eqref{yjm1}, the received sequence in the last layer can be written as
\begin{align}
\boldsymbol Y_{i}^{1}(n)&=\sqrt{\gamma} \boldsymbol h_k X_{k,n}^{1}{\rm e}^{-2^{m-1}j\Delta_kn}+\boldsymbol A_i^1(n)+\boldsymbol Z_i^{1}(n), \qquad n=1,2,
\end{align}
where the term $\boldsymbol A_i^1(n)$ consists of all the {\em interference} from other devices (all of which are second-order RM sequences) and $\boldsymbol Z_i^{1}(n)\sim{\cal CN}(\boldsymbol 0,\frac1{2^{m-1}}\boldsymbol I)$.
Accordingly, we have
\begin{align}\label{y11}
\boldsymbol Y_{i}^{1}(1)&=\sqrt{\gamma} \boldsymbol h_k X_{k,1}^{1}{\rm e}^{-2^{m-1}j\Delta_k}+\boldsymbol A_i^1(1)+\boldsymbol Z_i^{1}(1)\\
&=\sqrt{\gamma} \boldsymbol h_k {\rm e}^{-2^{m-1}j\Delta_k}+\boldsymbol A_i^1(1)+\boldsymbol Z_i^{1}(1)
\end{align}
and
\begin{align}\label{y21}
\boldsymbol Y_{i}^{1}(2)&=\sqrt{\gamma} \boldsymbol h_k X_{k,2}^{1}{\rm e}^{-2^{m}j\Delta_k}+\boldsymbol A_i^1(2)+\boldsymbol Z_i^{1}(2)\\
&=\sqrt{\gamma} \boldsymbol h_k (-1)^{b_{k,1}^m+\frac12\beta_{k,1}^m}{\rm e}^{-2^{m}j\Delta_k}+\boldsymbol A_i^1(2)+\boldsymbol Z_i^{1}(2)
\end{align}
Similar to the previous processing, defining $\tilde{Y}_1^0=(\boldsymbol Y_i^1(2))^{\rm T}(\boldsymbol Y_i^1(1))^{*}$, we have
\begin{align}\label{y10}
\tilde{Y}_1^0={\gamma}\|\boldsymbol h_k\|^2(-1)^{b_{k,1}^m+\frac12\beta_{k,1}^m}{\rm e}^{-2^{m-1}j\Delta_k}+\tilde Z_1^0,
\end{align}
where
\begin{align}
\tilde{Z}_1^0&=(\sqrt{\gamma} \boldsymbol h_k (-1)^{b_{k,1}^m+\frac12\beta_{k,1}^m}{\rm e}^{-2^{m}j\Delta_k}+\boldsymbol A_i^1(2)+\boldsymbol Z_i^{1}(2))^{\rm T}(\boldsymbol A_i^1(1)+\boldsymbol Z_i^{1}(1))^{*}\\
&+(\boldsymbol A_i^1(2)+\boldsymbol Z_i^{1}(2))^{\rm T}(\sqrt{\gamma} \boldsymbol h_k {\rm e}^{-2^{m-1}j\Delta_k})^{*}.
\end{align}
According to \eqref{y10}, $(b_{k,1}^m,\beta_{k,1}^m)$ can be estimated by the polarity of ${\rm e}^{2j\hat{\Delta}_{k,m-1}^m}\tilde{Y}_1^0$.
Then $\hat{\Delta}_{k,m}^m$ can be estimated as
\begin{align}\label{delta2m1}
\hat{\Delta}_{k,m}^m=-{\rm Arg}\left(\tilde{Y}_1^0\left((-1)^{\hat b_{k,1}^m+\frac12\hat\beta_{k,1}^m}\right)^{*} \right).
\end{align}
Moreover, the channel coefficient $\boldsymbol h_k$ can be estimated as
\begin{align}\label{hk}
\sqrt{\gamma}\hat {\boldsymbol h}_k=\frac1{2}\left({\boldsymbol Y_i^1(1){\rm e}^{j\hat{\Delta}_{k,m}^m} +\left((-1)^{\hat b_{k,1}^m+\frac12\hat\beta_{k,1}^m}\right)^{*}{\rm e}^{2j\hat{\Delta}_{k,m}^m}\boldsymbol Y_i^1(2)}\right).
\end{align}
Note that the channel vector $\hat {\boldsymbol h}_k$ itself can be inferred from \eqref{hk} only if the AP knows the value of $\gamma$; however, the algorithm only requires the product $\sqrt{\gamma}\hat{\boldsymbol h}_k$.
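A compact sketch of this last-layer step (function and variable names are ours; the quadrant decision is equivalent to the polarity rule of \eqref{bmbetam}) is as follows.
\begin{verbatim}
import numpy as np

def last_layer(Y1, Y2, delta_prev):
    # Y1, Y2: r-dim samples Y_i^1(1), Y_i^1(2); delta_prev: hat Delta_{k,m-1}^m
    Yt = Y2 @ np.conj(Y1)                             # tilde Y_1^0, eq. (y10)
    z = Yt * np.exp(2j * delta_prev)                  # undo the residual phase
    q = int(np.round(np.angle(z) / (np.pi / 2))) % 4  # nearest of {1, i, -1, -i}
    b1, beta1 = [(0, 0), (0, 1), (1, 0), (1, 1)][q]   # eq. (bmbetam) lookup
    s = (-1.0) ** b1 * 1j ** beta1                    # (-1)^{b_1 + beta_1/2}
    delta_m = -np.angle(Yt * np.conj(s))              # eq. (delta2m1)
    h_hat = 0.5 * (Y1 * np.exp(1j * delta_m)
                   + np.conj(s) * np.exp(2j * delta_m) * Y2)  # eq. (hk)
    return (b1, beta1), delta_m, h_hat                # h_hat = sqrt(gamma) h_k
\end{verbatim}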
\subsubsection{Estimation of $\Delta_k$}\label{sectiondeltak}
So far, the matrix-vector pair $\left(\hat{\boldsymbol P}_k^m,\hat{\boldsymbol b}_k^m\right)$, the corresponding channel coefficient ${\hat{\boldsymbol h}}_k$, and $ {\boldsymbol{\hat\Delta}}_{k}^m$ have been completely estimated. In addition, $\hat{\boldsymbol X}_k^m$ can be obtained through \eqref{cm2}.
To estimate the remaining devices, however, we still need to estimate $\Delta_k$ to remove the interference of device $k$ according to \eqref{OFDMAPdiscreteDFTSIC}.
Let $\hat\Delta_{k,l}^m=\Delta_{k,l}^m+\delta_{k,l}^m, l=1,\cdots,m$, where $\delta_{k,l}^m$ denotes the estimation error.
Typically, each ${\delta}_{k,l}^m, l=1,\cdots,m,$ is small.
Thus ${\hat\Delta}_k$ could be estimated through ${\hat\Delta}_{k,1}^m$ directly, i.e., ${\hat\Delta}_k={\hat\Delta}_{k,1}^m$.
Then, to remove the interference of device $k$ according to \eqref{OFDMAPdiscreteDFTSIC}, a straightforward way of evaluating the phase angle $n\hat\Delta_k, n=1,\cdots,N,$ is
\begin{align}
{\rm Arg}({\rm e}^{ jn \hat\Delta_{k,1}^m })={\rm Arg}({\rm e}^{ jn \delta_{k,1}^m }{\rm e}^{ jn \Delta_{k,1}^m }).
\end{align}
However, in this way, the phase-angle error between $n\hat\Delta_k$ and $n\Delta_k$ becomes $n \delta_{k,1}^m$, which grows large for large $n$.
Since the errors $ \{{\delta}_{k,l}^m\}, l=1,\cdots,m,$ are small, we can instead estimate $\hat\Delta_k$ through the whole vector $\boldsymbol{{\hat\Delta}}_k^m$ rather than $\hat\Delta_{k,1}^m$ alone.
Let $\boldsymbol{{\tilde\Delta}}_k^m(e)=\left[{{\tilde\Delta}}_{k,1}^m(e),\cdots,{{\tilde\Delta}}_{k,m}^m(e)\right]$, where
\begin{align}
{{\tilde\Delta}}_{k,l}^m(e)={\rm Arg}\left({\rm e}^{j2^{l-1}\left({\hat\Delta}_{k,1}^m-e\right)} \right),\quad l=1,\cdots,m.
\end{align}
Here $e$ is a candidate value for the estimation error $\delta_{k,1}^m$, which takes values around 0, and the best candidate is selected as
\begin{align}
e_{k}^m=\mathop{\arg\min}_{e}\left\{ \left\|{\hat{{\boldsymbol\Delta}}_{k}^m}-{\tilde{{\boldsymbol\Delta}}_{k}^m}(e) \right\|\right\}.
\end{align}
Then $\hat\Delta_k$ is estimated as
\begin{align}
\hat\Delta_k&=\hat\Delta_{k,1}^m-e_{k}^m.\label{delayerror}
\end{align}
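Since no closed form is needed for $e_k^m$, a one-dimensional grid search over a small window around 0 suffices; the sketch below (names and window size are illustrative choices, not the paper's) implements \eqref{delayerror}.
\begin{verbatim}
import numpy as np

def refine_delay(delta_hat, e_grid=np.linspace(-0.1, 0.1, 2001)):
    # delta_hat: [hat Delta_{k,1}, ..., hat Delta_{k,m}] (wrapped estimates)
    m = len(delta_hat)
    l = np.arange(1, m + 1)
    costs = [np.linalg.norm(delta_hat
             - np.angle(np.exp(1j * 2.0 ** (l - 1) * (delta_hat[0] - e))))
             for e in e_grid]
    e_k = e_grid[int(np.argmin(costs))]
    return delta_hat[0] - e_k                 # hat Delta_k, eq. (delayerror)
\end{verbatim}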
\subsubsection{The {\bf findPb} Algorithm}
Based on the analysis in Sections \ref{sectionalpham}--\ref{sectiondeltak}, the detailed procedure is summarized in Algorithm 2.
\begin{center}\begin{tabular}{l l}
\toprule
\multicolumn{1}{l}{{\bf Algorithm 2}: {\bf findPb} function.}\\
\midrule
{\bf Input}: the received signal $\boldsymbol Y^m_i$, the maximum number of active devices $K_{\rm max}$.\\
{\bf while} $\|\boldsymbol Y^q_i\|_F>\varepsilon $\footnotemark[1] and $k<K_{\rm max}$\footnotemark[2] {\bf do}\\
\quad $k\leftarrow k+1$.\\
\quad {\bf for }$s=m,m-1,\cdots,2$ {\bf do}\\
\qquad Split $\boldsymbol Y^s_i$ into two partial sequences similar to \eqref{y2j0} and \eqref{y2j1}.\\
\qquad Perform the element-wise conjugate multiplication according to \eqref{ym1}.\\
\qquad Perform WHT and recover $\hat{\boldsymbol \eta}^s$ by the binary index of the largest component.\\
\qquad {\bf if} $s=m$ {\bf do}\\
\qquad\quad Set $\hat b_{k,m}^m=0,\hat \beta_{k,m}^m=0$ and recover $\Delta_{k,1}^m$ according to \eqref{Delta1}.\\
\qquad {\bf else} {\bf do}\\
\qquad\quad Recover $(\hat b_{k,s}^m,\hat \beta_{k,s}^m)$, $\hat{\boldsymbol v}_k^{s-1}$, and $\Delta_{k,2}^m$ according to \eqref{bmbetam} -- \eqref{Delta2}, respectively.\\
\qquad {\bf end if}\\
\qquad Calculate $\boldsymbol Y^{s-1}_i$ according to \eqref{ym1next}.\\
\quad {\bf end for}\\
\quad Recover $(\hat b_{k,1}^m,\hat \beta_{k,1}^m)$ according to \eqref{bmbetam} and \eqref{y10}.\\
\quad Add $(\hat{\boldsymbol P}_k^m, \hat{\boldsymbol b}_k^m)$ to the decoded set.\\
\quad Calculate the codeword $\hat{\boldsymbol X}_k^m$ according to $(\hat{\boldsymbol P}_k^m, \hat{\boldsymbol b}_k^m)$ and estimate $\sqrt{\gamma}\hat {\boldsymbol h}_k$ according to \eqref{hk}.\\
\quad Recover $\hat\Delta_{k}$ according to \eqref{delayerror}.\\
\quad Remove the interference of device $k$ and update $\boldsymbol Y^m_i$ according to \eqref{OFDMAPdiscreteDFTSIC}.\\
\quad Break and delete the detected message if $\|\boldsymbol Y^m_i\|_F$ becomes larger after removing device $k$.\\
{\bf end while}\\
{\bf Output}: $\left(\hat{\boldsymbol P}_l^m, \hat{\boldsymbol b}_l^m\right)$, ${\hat{\Delta}}_l$, and $\hat {\boldsymbol h}_l$ for $l=1,2,\cdots, k$.\\
\bottomrule
\end{tabular}
\footnotetext[1]{Various criteria are possible here. We found that $\varepsilon=(22K^{-\frac13}r^{-\frac14}(\sigma^2+r2^q))^{\frac12}-d+3(q-p)$ works well. We should emphasize that it is very much an open question how to choose this parameter optimally; our suggestion is to optimize the value empirically for the regime of interest.}
\footnotetext[2]{In practice, we take the iteration limit $K_{\rm max}$ to be more than three times the expected number of messages.}
\end{center}
\section{Computational Complexity Analysis}\label{CCA}
In Algorithm 1, there are $2^d$ sub-blocks. In each sub-block, there are $2^p$ time slots, each of length $2^m$.
Thus the total number of iterations in Algorithm 1 is $2^{p+d}$.
In each iteration, Algorithm 1 calls Algorithm 2 to return all the messages transmitted in the given sub-block and slot.
The computational complexity of Algorithm 1 therefore mainly comes from calling Algorithm 2 in each iteration.
Algorithm 2 iterates at most $K_{\rm max}$ times to obtain $K_{\rm max}$ messages. The most expensive operations in each iteration come from two parts: matrix-vector pair estimation through the fast WHT and the generation of the RM code through \eqref{cm2}.
We count the number of multiplication operations required to run the algorithm; the number of addition operations is of the same order.
For matrix-vector pair estimation, the number of multiplication operations required in each iteration is on the order of $\sum_{s=2}^m{\cal O}((s-1+r)2^{s-1})={\cal O}((m+r-2)2^m)$.
Similarly, in each iteration, the complexity of generating the Reed-Muller code when a matrix-vector pair $\left(\hat{\boldsymbol P}^m, \hat{\boldsymbol b}^m\right)$ is found is ${\cal O}((m^2+2m)2^m)$.
In summary, the worst-case complexity of Algorithm 2 is on the order of ${\cal O}\left\{K_{\rm max}2^m\left(m^2+3m+r-2\right)\right\}$.
Since the number of sub-blocks is $2^d$ and the number of slots in each sub-block is $2^p$, the complexity of Algorithm 1 is thus ${\cal O}\left\{2^{d+p} K_{\rm max}2^m\left[m^2+3m+r-2\right]\right\}$.
In practice, the maximum number of neighbors of the AP in each slot in Algorithm 2 is set to $ K_{\rm max}=\left\lceil\frac{6K^{*}}{2^{p}}r^{\frac14}\right\rceil$.
Accordingly, the complexity of Algorithm 1 is ${\cal O}\left\{6K^{*}2^{d+m}r^{\frac14}\left(m^2+3m+r-2\right)\right\}$.
However, we should emphasize that the above complexity analysis addresses the worst case.
In practice, the number of iterations of Algorithm 2 is mainly determined by $\varepsilon$, not $K_{\rm max}$.
Once the active devices transmitting in a given slot are decoded, the energy of the residual signal is likely to drop below $\varepsilon^2$. Besides, since each active device transmits randomly, the number of active devices in each slot is on average about a third of $K_{\rm max}$.
This also indicates that our algorithm does not need to know the number of active devices in each slot; furthermore, there is no need to know the total number of active devices.
\section{Numerical Results}\label{Numericalresults}
\subsection{Definition of Error Metrics}
We first define the false alarm rate and the miss rate, which are our main performance metrics.
Denote by $A^{*}\subset\{1,2,\cdots,2^B\}$ the index set of the messages transmitted by the in-cell devices of the AP, so that $|A^{*}|=K^{*}$, and let $A\subset\{1,2,\cdots,2^B\}$ denote the index set of the messages output by the algorithm.
The set relationship is depicted in Fig. \ref{setrelation}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3in]{falsealertmissed.pdf}
\end{center}
\caption{The set relationship.}\label{setrelation}
\end{figure}
In our algorithm, the AP does not know the number of its in-cell devices $K^{*}$. In this case, the algorithm outputs all the detected messages in $A$.
Accordingly, we define the false alarm rate in this phase as
\begin{equation}
\begin{split}\label{DefiniFAR}
\frac{|A\backslash A^{*}|}{|A|},
\end{split}
\end{equation}
and the miss rate as
\begin{equation}
\begin{split}\label{DefiniMAR}
\frac{|A^{*}\backslash A|}{|A^{*}|}.
\end{split}
\end{equation}
The false alarm rate and miss rate reflect the error performance of the algorithm when the number of in-cell devices of the AP is unknown.
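Both metrics are simple set operations; for instance, the following lines compute them from the decoded and transmitted index sets.
\begin{verbatim}
def error_rates(A, A_star):
    # A: decoded message indices; A_star: transmitted message indices (sets)
    far = len(A - A_star) / len(A) if A else 0.0    # eq. (DefiniFAR)
    miss = len(A_star - A) / len(A_star)            # eq. (DefiniMAR)
    return far, miss

# example: A_star = {1,2,3,4}, A = {2,3,4,5} gives (0.25, 0.25)
\end{verbatim}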
\subsection{Simulation Parameter Setting}
The detailed simulation parameters are listed in Table \ref{tablesimulationparameter}.
We consider a $500\times500$ ${\rm m}^2$ square region in which the devices are randomly distributed.
According to \cite{IMT}, the number of devices in a region of area 0.25 ${\rm km}^2$ can be up to $|\Phi|=2.5\times10^5$; however, only a small fraction of them are active.
In this paper, we consider numbers of active devices $K$ ranging from 1000 to 8000, i.e., $K\in[1000,8000]$.
According to \eqref{averageneighbor}, we have $K^{*}\approx 1.11\times10^{-2}K$ when $r=1$.
And $K^{*}\approx1.25\times10^{-2}K$ when $r=16$.
Assuming $K=1000$, we have $K^{*}\approx11$ when $r=1$ and $K^{*}\approx13$ when $r=16$.
Assuming $K=8000$, we have $K^{*}\approx89$ when $r=1$ and $K^{*}\approx100$ when $r=16$.
The AP aims to decode the messages transmitted by its neighbors, while the messages transmitted by its non-neighbors are treated as interference.
\begin{table}[htbp]
\caption{Simulation Parameters.}\label{tablesimulationparameter}
\centering
\begin{tabular}{l|c}
\hline
Parameters & Value \\
\hline
Channel gain threshold $\theta$ & $10^{-6}$ \\
Path-loss exponent $\alpha$ & $4$ \\
Transmit SNR of each device $\gamma$ & 60 dB \\
Average number of detected devices in each slot $K_{\rm max}$ & $\left\lceil\frac{6K^{*}}{2^{p}}r^{\frac14}\right\rceil$ \\
The square area of the device distribution region $S$ & $500 \times 500\ {\rm m}^2$\\
Carrier spacing $\Delta f$ & 15 kHz\\
Maximum device delay $\tau_{\rm max}$ & 10 $\upmu$s\\
\hline
\end{tabular}
\end{table}
\subsection{Synchronous Transmission}
In this subsection, we set $\tau_{\rm max}=0$, i.e., the transmissions of different devices are synchronized.
In this case, a state-of-the-art algorithm using RM codes is the list RM\_LLD algorithm given in \cite{Hanzo}.
In \cite{Hanzo}, the authors use only a subset of the RM codewords and thus encode fewer bits, namely $B=\frac12(d+m+p)(d+m+p-1)$.
Note that the algorithm proposed in \cite{Hanzo} applies only to the single-antenna case; in our algorithm, we consider $r=1,4,16$.
For a fair comparison, we set the codelength to $C=2^{12}=4,096$ in both algorithms.
In this case, the number of information bits in \cite{Hanzo} is $B=\frac12\times12\times(12-1)=66$ bits.
To make the numbers of transmitted bits comparable, we set $d=0, m=10, p=2$ in our scheme.
Since the number of slots is small, in our scheme each message randomly chooses one of the 4 slots for transmission.
Accordingly, the number of transmitted bits of Algorithm 1 is $B=\frac12m(m+3)+p=67$ bits, which is comparable with \cite{Hanzo}.
Fig. \ref{sync} illustrates the miss rate and false alarm rate of both algorithms.
In Fig. \ref{sync}, the horizontal axes show the number of neighbors of the AP, $K^*$, and the corresponding total number of active devices $K$ when $r=1$.
The authors in \cite{Hanzo} assume that the AP knows the number of its neighbors; thus, for the algorithm in \cite{Hanzo}, the miss rate equals the false alarm rate.
In our algorithm, the AP does not need to know the number of its neighbors.
Fig. \ref{sync} shows that the performance of Algorithm 1 improves as the number of receiving antennas increases.
Fig. \ref{sync} demonstrates that Algorithm 1 outperforms the list RM\_LLD algorithm when the number of active devices is larger than 2000, whereas the list RM\_LLD algorithm performs better when the number of active devices is smaller than 2000, i.e., when the number of neighbors of the AP is less than 22.
This is because the codelength in our algorithm is divided into 4 slots to reduce the number of active devices in each slot; when the number of active users is small (for example, below 2000), there is no need for slotting.
This suggests that the number of slots should be reduced if the number of active devices is small.
Fortunately, our algorithm can flexibly change the number of slots to deal with different situations.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_MAR_withneighbor.pdf}
\end{minipage}%
\label{syncSDP}
}%
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_FAR_withneighbor.pdf}
\end{minipage}%
\label{syncFAR}
}%
\centering
\caption{Performance comparison with the algorithm in \cite{Hanzo}. (a) The miss rate and (b) the false alarm rate.}
\label{sync}
\end{figure*}
\subsection{Asynchronous Transmission}
In this subsection, we study the performance of Algorithm 1 under asynchronous transmission.
The maximum delay is set to $\tau_{\rm max}=10\ \upmu$s; accordingly, the length of the cyclic prefix is $M=\left\lceil\frac{3}{20}2^m\right\rceil$.
We assume the normalized transmission delay $\Delta_k=2\pi\Delta f\tau_k$ is uniformly distributed in $[-\pi,\pi]$.
As far as we know, this paper is the first to use RM codes to handle continuous transmission delays in massive access.
Fig. \ref{fig3} depicts the miss rate and false alarm rate obtained by Algorithm 1 with different numbers of receive antennas.
We set $q=6, p=6, d=0$, which means that the number of slots and the length of each slot are both $2^6=64$, and the cyclic prefix length is $M=10$.
In Fig. \ref{fig3}, the horizontal axes show $K^*$ and the corresponding $K$ when $r=16$.
In this case, the total number of information bits that can be transmitted is $30$ according to \eqref{B}, and the codelength is $2^6(2^6+10)=4,736$ according to \eqref{Codelength}.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_OFDM_RM_m6p6_B30_MAR_withneighbor.pdf}
\end{minipage}%
\label{SDP}
}%
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_OFDM_RM_m6p6_B30_FAR_withneighbor.pdf}
\end{minipage}%
\label{FAR}
}%
\centering
\caption{Performance comparison with different numbers of receive antennas. (a) The miss rate and (b) the false alarm rate.}
\label{fig3}
\end{figure*}
Fig. \ref{fig3} shows that the performance of Algorithm 1 improves as the number of receiving antennas increases.
Increasing the number of receive antennas from $r=1$ to $r=2$ greatly improves the performance of Algorithm 1.
However, further increasing the number of receive antennas yields limited additional gains.
Interestingly, we observe in Fig. \ref{fig3} that the miss rate at $K=1000$ is greater than the miss rate at $K=2000$, which might appear surprising.
We believe that this behavior is due to the suboptimal tuning of the parameters $\varepsilon$ and $ K_{\rm max}$ in Algorithm 1.
One of the advantages of Algorithm 1 is that no per-scenario tuning of parameters is required; we use the same parameter choices throughout the paper.
It is worth noting that both the miss rate and the false alarm rate are below 0.05 when $r=16$.
This indicates that multiple antennas are needed for practical use.
Fig. \ref{fig4} shows the miss rate and false alarm rate versus the number of active devices for different numbers of information bits.
The number of receive antennas is $r=16$ and $m+p=12$.
For $d=0, 1, 2$, the number of sub-blocks is $1, 2, 4$, respectively, and the corresponding codelengths are $C=4,736, C=9,472$, and $C=18,944$, respectively.
To patch the sub-blocks together, we add different numbers of parity check bits for different numbers of sub-blocks.
These choices are made with a view to minimizing the number of bits devoted to parity checks while keeping the probability of information loss in the patching process reasonably low.
We should emphasize that these choices are not optimal.
Generally, the miss rate and the false alarm rate are related to the number of parity check bits:
the more parity check bits, the worse the miss rate and the better the false alarm rate.
We refer the reader to \cite{Amalladinne2} for details of this method.
Normally, the more information bits we transmit, the worse the error performance.
For example, when the codelength is $18,944$, the error performance of the curve with $B=93$ bits is better than that of the curve with $B=121$ bits.
Besides, the error performance of the curves with $B=30$ bits and $B=48$ bits is comparable, but the error performance of the curve with $B=93$ bits is much worse than that of the curves with $B=30$ and $B=48$ bits.
This indicates two things: 1) because we need to append some parity check bits, doubling the number of sub-blocks does not double the number of information bits; 2) only a small number of sub-blocks (up to 4) is beneficial. The error performance becomes much worse if more than 4 sub-blocks are adopted, which deviates somewhat from the implementation using 11 sub-blocks in \cite{Amalladinne12}.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_patching_m6p6nr16_MAR_withneighbor.pdf}
\end{minipage}%
\label{SDP}
}%
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=3in]{figure_patching_m6p6nr16_FAR_withneighbor.pdf}
\end{minipage}%
\label{FAR}
}%
\centering
\caption{Performance comparison with different numbers of information bits. (a) The miss rate and (b) the false alarm rate.}
\label{fig4}
\end{figure*}
\section{Conclusion}\label{conclusion}
This paper has developed a new technique for asynchronous massive access using OFDM signaling and RM codes. The access points are assumed to have a large number of antennas. The proposed technique allows a flexible number of information bits and a flexible codelength.
Numerical results demonstrate the effectiveness of the proposed technique as well as the gains due to a large number of antennas at the access points.
\section{\label{sec:intro} Introduction}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/fig1.png}
\caption{\label{fig:1} Monomer distributions and chain heights in (a) flat, (b) converging, and (c) diverging brushes. The width of each tree-like chain profile is proportional to the number of monomers $\delta n$ occupying a thin slice $\delta z$. The distribution of chain ends is sampled from a normalized distribution function $\hat{g}(z)$ characteristic of each brush type, with three highlighted chains at height fractions $0.5 h$, $0.7 h$, and $0.9 h$. The diverging brush (c), grafted to a cylinder of dimensionless mean curvature $Hh \simeq 1.0$, exhibits an end exclusion zone below the height fraction $z_{\rm ex}/h \simeq 0.46$.}
\end{figure}
Conformations of long, flexible polymers are altered when one end is grafted to a substrate or is localized to an interface.
With increasing surface grafting density, crowding forces polymer chains to extend from the grafting surface, forming a polymer brush.
In the strong-stretching limit of brushes, at high areal grafting densities, crowding increasingly distorts the volume swept out by a fluctuating polymer in the brush, eventually limiting chains to regions much narrower than their extension.
This picture of stretching due to confinement was developed in the celebrated Alexander-de Gennes brush theory\cite{Alexander1977,deGennes1976,deGennes1980}.
While providing useful scaling laws for brush height and osmotic pressure, the Alexander-de Gennes theory is unable to capture microscopic details of how chains stretch within the brush, due to unphysical constraints on the free-end positions \cite{Milner1991}.
Relaxing the constraint that free ends are fixed at the outer edge of the brush, but considering the entropy of chains within their so-called ``classical path,'' leads to the parabolic brush theory (PBT) of strongly stretched brushes\cite{Semenov1985,Milner1988,MWC1989}.
In PBT, the mean field effects of volume interactions are captured by a spatially varying chemical potential field $\mu(z)$ that acts on segments a height $z$ above the grafting substrate.
The parabolic form of this potential, $\mu(z) - \mu(0) \sim -z^2$, arises from the constraint that every chain has its terminal end at the substrate at $z=0$, regardless of the position of the free end.
As a consequence, the monomer density of each chain depends on the height $z$ above the substrate surface as well as on the location $z_0$ of the free end. Chains are most strongly stretched near the substrate and are under zero tension at their free ends, so the segment distribution of each chain is strongly skewed towards the free end.
Schematically, this distribution of segments within chains can be depicted, on average, as a tree-like region with narrow ``trunks'' at the anchoring surface opening up to broad ``crowns'' at the free ends, as illustrated in Fig.~\ref{fig:1}(a).
The free-end distribution $g(z_0)$ is self-consistently coupled to the chemical potential field through volume interactions.
For the case of molten polymer brushes considered in this paper, the form of $g(z_0)$ corresponds to the distribution of these tree-like segment profiles that integrates to a uniform total segment density.
Since every chain in the brush connects to the substrate, every chain contributes to the monomer density of the brush near the grafting substrate, the ``floor'' of this forest of tree-like segment profiles.
The volume available to a chain near the substrate is crowded out by the presence of other chains in the brush, and thus fewer chains can have free ends near the substrate.
In contrast, fewer chains reach the upper ``canopy'' of the brush, and hence this region is more available to be filled by the free-end crowns.
As a consequence of these arguments, in a flat (planar) brush, the probability density $g(z_0)$ of finding a chain end at height $z_0$ adopts the monotonically increasing profile displayed in Fig.~\ref{fig:1}(a).
The PBT also holds for brushes bound to concave substrates whose curvature forces chains to splay inwards, as illustrated by Fig.~\ref{fig:1}(b) for a cylindrically-curved grafting surface.
These converging brushes obey the same parabolic potential and thus adopt similar tree-shaped profiles.
However, because inward splay requires monomers to pack into regions of decreasing available area, diffuse (high density) crowns repel the tips of convergent brushes, depleting the free-end distribution $g(z_0)$, resulting in the profile shown on the right of Fig.~\ref{fig:1}(b).
It is a testament to the utility of PBT that it consistently describes both flat and converging brushes \cite{Milner1988,Grest1989,Auroy1992,Grest1994,Carignano1994,Netz1998,Manghi2001,Dimitrov2006,matsen_strong-segregation_2010}.
However, PBT fails to consistently capture the case of chains forced to splay outwards, illustrated in Fig.~\ref{fig:1}(c).
Because these brushes have larger fractions of their volume at their distal ends, a sufficient number of free-end crowns must be packed into this canopy.
However, since each of these chains is rooted at the brush floor, where less area is available, the PBT leads to overcrowding in the region proximal to the substrate, which is resolved mathematically by an unphysically negative $g(z_0)$ in this zone.
The failure of PBT for diverging brushes was recognized in early formulations by Semenov~\cite{Semenov1985}, and later resolved (first for cylindrical brushes) by generalizing the PBT solution to include an \emph{end-exclusion zone} (EEZ) within a height $z_{\rm ex}$ of the grafting substrate, where $g(0<z_0<z_{\rm ex}) = 0$~\cite{Ball1991}.
The inclusion of an EEZ subtly alters the chemical potential $\mu(z)$ from its parabolic profile, enhancing the stretching of chains near the surface and effectively redistributing more segments into the canopy zone of relatively broader crowns.
This alteration enables a solution to the melt packing problem, resulting in the end distribution shown in Fig.~\ref{fig:1}(c).
While the EEZ extension to strong-stretching theory successfully describes brushes where PBT fails, it is not in general analytically tractable, as its solution involves a set of self-consistent integral equations coupling the chemical potential profile (and stretching) to the end-zone distribution.
To date, the complexity of these equations has limited study to only two convex geometries: cylinders and spheres \cite{Ball1991,Dan1992,Li1994,Belyi2004,matsen_strong-segregation_2010}.
In general, a grafting substrate is a 2D surface embedded in 3D and is therefore characterized by two curvatures: a mean curvature $H$ and a Gaussian curvature $K$ \cite{Hyde1997_ch1,Kamien2002}.
This combination of curvatures parameterizes the range of surfaces depicted in Fig.~\ref{fig:2}.
In the simplest case, chains are taken to emerge normal to the grafting surface, and this combination of curvatures then determines the splay of the brush.
Converging brushes emerge from negative mean curvature surfaces ($H < 0$), whereas diverging brushes emerge from positive mean curvature surfaces ($H > 0$).
PBT consistently describes all converging brushes, since these do not require EEZs.
At present, an extension of strong-stretching theory to incorporate EEZ has only been studied on cylinders ($K=0$) and spheres ($K=H^2$), which outline region A ($H>0$, $0\leq K \leq H^2$) in Fig.~\ref{fig:2}(a).
Note that no surfaces exist in the regime of $K > H^2$.
In general, surfaces in region A interpolate between cylinders and spheres, so chains always splay outward, even if the surface causes the splay to be anisotropic.
In contrast, saddle-shaped surfaces, defined by $K < 0$, force chains to splay outward in one direction and inward in the orthogonal direction.
This is the case for regions B-D and is unexplored in the context of polymer brushes.
The packing constraints in normally extended brushes are related to substrate geometry through \emph{Steiner's formula},
\begin{equation}\label{eq:steiner}
A(z) = A_0(1 + 2 H z + K z^2) \, ,
\end{equation}
where $A_0$ is the area element of the substrate and $A(z)$ is the area element at height $z$ above the substrate.
As shown in Fig.~\ref{fig:2}(b), the area decreases for $H < 0$ and increases for $H>0$.
The increase is monotonic and accelerates in region $A$.
However, for $K < 0$, the initial increase in area decelerates with increasing $z$, so that the area function has a maximum at $z^* = -H/K$.
Brushes in region B have a height smaller than the height of maximal area, $h < z^*$, whereas brushes in region C are long enough that the maximal area is reached within the brush, $h > z^*$.
For $z>z^*$, the area decreases with height, meaning that the geometry becomes convergent at its outer edge.
Such brushes can only extend up to a maximum height where the area available to chain canopies vanishes, $A(h)=A_0(1 + 2Hh + Kh^2 )= 0$.
Region D, where $1 + 2Hh + Kh^2 < 0$, is excluded.
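In dimensionless form, these conditions classify the geometry directly from $(Hh, Kh^2)$; the short sketch below (Python, our own helper, restricted to the divergent case $H>0$ relevant here) encodes the boundaries of Fig.~\ref{fig:2}(a).
\begin{verbatim}
def brush_region(Hh, Kh2):
    # Hh = H*h, Kh2 = K*h^2, assuming H > 0
    if Kh2 > Hh ** 2:
        return "impossible"      # no surface has K > H^2
    if 1 + 2 * Hh + Kh2 < 0:
        return "D"               # A(z) vanishes inside the brush: excluded
    if Kh2 >= 0:
        return "A"               # outward splay in all directions
    z_star = -Hh / Kh2           # z*/h at which A(z) is maximal
    return "B" if z_star > 1 else "C"
\end{verbatim}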
A critical application of the strong-stretching theory of molten brushes has been the analysis of block copolymer free energies~\cite{Semenov1985, Olmsted1998, LikhtmanSemenov}.
All domain morphologies, with the exception of lamellae, require a divergent region, which should incorporate EEZs in one of the brush-like domains.
To date, existing strong-stretching theories of block copolymer melts are based only on PBT, and the lack of strictly physical EEZ solutions has complicated interpretation of these predictions.
Most notable perhaps are questions revolving around the strong-segregation stability of bicontinuous network phases, like double-gyroid, whose brush geometries include $K<0$ saddle-like shapes.
In part, the inability to properly assess the thermodynamic stability of these so-called ``complex phases'' derives from the lack of EEZ solutions for these negatively curved regions (B and C).
\begin{figure}
\includegraphics[width=\columnwidth]{figs/fig2.png}
\caption{\label{fig:2} Brushes may splay inward or outward or both and may do so anisotropically. This is controlled by the combination of substrate mean curvature $H$ and Gaussian curvature $K$, as mapped out in (a). In region A, the brush splays outward in all directions. In regions B and C, the brush splays outward in one direction and inward in the orthogonal direction. In region D, the inward splay rate overcomes the outward, causing the brush chains to pass through each other at a focal curve. Representative brush splay is illustrated by the convergence and divergence of rays extending normal to the surfaces in each of the insets. The combination of splay in different directions affects the brush area at different heights $z$ above the substrate surface, as shown in (b), and as illustrated by parallel stacks of area slices in each of the insets, with red representing area increase and blue representing area decrease relative to the area at the substrate. The boundary between regions B and C is shown as a dashed curve with the area maximum at $z^*=h$; for B, the maximum is at $z^*>h$, whereas for C, the maximum is at $z^*<h$.}
\end{figure}
In this paper, we present the solution of the self-consistent equations for strongly-stretched molten brushes in the full range of $H>0$ shapes where EEZs are required, i.e.~regions A, B and C.
Our approach expands on the methods of Belyi \cite{Belyi2004} developed for cylindrical and spherical brushes.
First, we present the EEZ constraint equations in a form that describes brushes of arbitrary curvature.
We then present numerical methods for solving the EEZ constraint equations in each of the regions of interest.
Using our solutions, we find that substrate mean curvature $H$ and Gaussian curvature $K$ play distinct roles in determining how chains pack in a brush.
These solutions show that the size and energetic cost of the EEZ diminish with decreasing mean curvature of the brush (as was previously reported for cylinders and spheres), but also as the Gaussian curvature becomes increasingly negative and the brush geometry evolves from divergent to convergent.
As these results are generally applicable to systems ranging from highly curved nanoparticles beyond simple spherical and cylindrical shape \cite{Ohno2007,Dukes2010,Tung2013}, to strongly segregated block copolymer phases \cite{Milner1994,Olmsted1998,Grason2006,matsen_strong-segregation_2010}, we include a computational algorithm to interpolate our results of EEZ heights and free energies to arbitrary curvature values over the regions A, B and C in Fig.~\ref{fig:2}.
\section{\label{sec:theory} Constraint equations for curved brushes}
We consider a molten brush with a local geometry described by the area distribution of eq.~(\ref{eq:steiner}), corresponding to fixed values of curvatures $H$ and $K$ and total height $h$ (or equivalently the surface grafting density $\sigma_0$)~\footnote{The quadratic form of $A(z)$ need not require the {\it normal} extension of the chains in the brush; that is, this form can also arise from chain trajectories that are tilted with respect to the normals of the anchoring surface.}.
Each segment, indexed by $n \in [0,N]$, in a chain of $N$ total segments is located a distance $z(n)$ away from the grafting surface along the local surface normal direction.
The total free energy of a single chain in the strong-stretching is then given by the Edwards Hamiltonian \cite{Edwards1965} (in units of $k_B T$),
\begin{equation}
\mathcal{H} = \int_0^N{\rm d}n\left\{\frac{3}{2a^2}|\partial_n z|^2 + \mu(z)\right\} \, ,
\end{equation}
where $a$ is the statistical segment length of brush chains~\cite{Matsen2002}.
The first term of $\mathcal{H}$ accounts for the entropic penalty of stretching a ``Gaussian thread'' along the $z$-direction and $\mu(z)$ is a chemical potential acting on monomers a distance $z$ from the surface.
The chemical potential $\mu$ is the mean field that enforces incompressibility (constant local density) at each point in the brush.
Since we are considering monodisperse brushes and the density of monomers at height $z$ for a single chain is given by the ratio ${\rm d}n/{\rm d}z$, the chemical potential $\mu(z)$ must be determined in a way that is self-consistent with the statistics of single chain configurations (i.e.~the vertical distributions of free-end positions and stretching).
Following Ref.~\cite{Milner1991}, free energy minimizing (or ``classical'') trajectories of brush chains can be understood in analogy to trajectories of a 1D particle in an external potential $-\mu(z)$.
At time $n=0$ (the free end), the particle is released from rest, $\partial_n z|_{n=0} = 0$, at height $z_0 > 0$.
At time $n=N$, the particle must arrive at the surface at $z=0$.
The ``time-of-flight'' of such a particle can be found for an arbitrary potential $\mu$,
\begin{equation}\label{eq:tof}
N(z_0) = -\sqrt{\frac{3}{2a^2}}\int_{z_0}^0\frac{{\rm d}z}{\sqrt{\mu(z) - \mu(z_0)}} \, .
\end{equation}
Since every chain in the brush has the same polymerization index $N$, chain trajectories must satisfy the \emph{tautochrone constraint} $N(z_0) = N$ for all admissible values of $z_0$ \cite{AdamutiTrache1996,Hilfer2000}.
Critically, this constraint only holds for chains with free ends outside of the EEZ, i.e.~when $z_0 > z_{\rm ex}$, where the EEZ is a layer $0 \leq z \leq z_{\rm ex}$ proximal to the grafting surface~\cite{Ball1991}.
For $z_0 < z_{\rm ex}$, $N(z_0)$ is not constrained to a constant value.
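As a quick illustration of the tautochrone property, for the flat-brush parabolic potential $\mu(z) = \frac{3\pi^2}{8N^2a^2}(h^2 - z^2)$ the integral in eq.~(\ref{eq:tof}) evaluates to $N$ for every release height $z_0$; the following minimal numerical check (with $a=1$ and illustrative parameter values) confirms this.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, N, h = 1.0, 100, 1.0
B = 3 * np.pi ** 2 / (8 * N ** 2 * a ** 2)
mu = lambda z: B * (h ** 2 - z ** 2)

def time_of_flight(z0):
    # eq. (tof): N(z0) = -sqrt(3/(2 a^2)) int_{z0}^0 dz / sqrt(mu(z) - mu(z0))
    val, _ = quad(lambda z: 1.0 / np.sqrt(mu(z) - mu(z0)), 0, z0)
    return np.sqrt(3 / (2 * a ** 2)) * val

print([round(time_of_flight(z0), 6) for z0 in (0.2, 0.5, 0.9)])  # all = N = 100
\end{verbatim}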
Chains in a molten brush must additionally satisfy an \emph{incompressibility constraint} wherein the occupied volume fraction is constant for all $z$.
Since the brush is composed entirely of segments, the brush volume is $V = \sigma_0 A_0 N \rho^{-1}$, where $\sigma_0$ is the areal density of chains grafted to the brush substrate, $A_0$ is the area of the brush substrate, $N$ is the number of monomers per chain, and $\rho^{-1}$ is the segment volume.
This volume is constructed via parallel slices of area $A(z)$, given by eq.~(\ref{eq:steiner}) and illustrated in the insets of Fig.~\ref{fig:2}, so the number of segments in a slice of width ${\rm d}z$ is given by $\rho A(z) {\rm d}z$.
Each chain with endpoint $z_0 > z$ locally deposits ${\rm d}n(z|z_0)$ segments between $z$ and $z + {\rm d}z$.
We can tabulate the number of chains that pass through the volume slice at $z$ by integrating the endpoint distribution function $g(z_0)$ over the points $z_0\geq z$.
Therefore, the incompressibility constraint can be expressed as
\begin{equation}\label{eq:incompressibility}
-\int_z^h{\rm d}z_0 \,g(z_0) \frac{{\rm d}n}{{\rm d}z}(z|z_0) = \rho A(z) \, ,
\end{equation}
where $g(z_0)$ is normalized such that $\int_{z_{\rm ex}}^h {\rm d}z_0\,g(z_0) = \sigma_0$.
The tautochrone eq.~(\ref{eq:tof}) and incompressibility eq.~(\ref{eq:incompressibility}) constraint equations are integral equations that are coupled by Steiner's equation eq.~(\ref{eq:steiner}).
To rewrite them in a form that is more amenable to numerical solution, following previous authors\cite{Ball1991,Dan1992,Li1994,Belyi2004}, it is convenient to change coordinates from the physical height $z$ to local value of the chemical potential $\mu$.
This change of coordinates depends on the monotonicity of $\mu(z)$, namely ${\rm d}\mu/{\rm d}z < 0$ for all $z > 0$.
At the grafting surface ($z=0$), we have $\mu \equiv P$; at the EEZ boundary ($z = z_{\rm ex}$), $\mu \equiv Q$; and at the outer edge of the brush ($z=h$), we have $\mu = 0$.
In chemical potential coordinates, eq.~(\ref{eq:tof}) is given by
\begin{equation}\label{eq:tof_mu}
N(\mu_0) = -\sqrt{\frac{3}{2 a^2}}\int_{\mu_0}^P {\rm d}\mu \frac{{\rm d}z}{{\rm d}\mu}\frac{1}{\sqrt{\mu - \mu_0}}
\end{equation}
and eq.~(\ref{eq:incompressibility}) is given by
\begin{equation}\label{eq:incompressibility_mu}
\sqrt{\frac{3}{2a^2}}\int_0^{\mu} {\rm d}\mu_0 \frac{g(\mu_0)}{\sqrt{\mu - \mu_0}} = \rho A(\mu)
\end{equation}
where $g(\mu_0) \equiv (-{\rm d}z/{\rm d}\mu)|_{\mu = \mu_0}g\left(z(\mu_0)\right)$.
This change of coordinates permits the inversion of each of these equations via an Abel transformation \cite{Ball1991,Belyi2004} so that ${\rm d}z/{\rm d\mu}$ can be solved for in terms of $N(\mu_0)$ and $g(z_0)$ can be solved for in terms of $A(\mu)$.
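For reference, the Abel integral equation and its classical inversion take the form
\begin{equation}
F(\mu) = \int_0^{\mu} \frac{f(\mu')\,{\rm d}\mu'}{\sqrt{\mu - \mu'}}
\quad\Longleftrightarrow\quad
f(\mu) = \frac{1}{\pi}\frac{{\rm d}}{{\rm d}\mu}\int_0^{\mu} \frac{F(\mu')\,{\rm d}\mu'}{\sqrt{\mu - \mu'}} \, ,
\end{equation}
with $F$ playing the role of the known side of eqs.~(\ref{eq:tof_mu}) and (\ref{eq:incompressibility_mu}).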
Through these equations, the tautochrone and incompressibility constraints in the presence of the EEZ provide non-local information about the functional forms of $z(\mu)$ and $A(\mu)$.
Here, we follow the approach of Belyi \cite{Belyi2004} and use the EEZ tautochrone constraint $N(\mu) = N$ for $\mu < Q$ to rewrite eq.~\ref{eq:tof_mu} as
\begin{equation}\label{eq:EEZ1}
\hat{z}(\tilde{\mu} < 1) = \sqrt{\frac{Q h_{\rm fl}^2}{Q_{\rm fl}h^2}}\sqrt{1 - \tilde{\mu}} + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}') \hat{z}(\tilde{\mu}')
\end{equation}
where $h$ is the brush height, $h_{\rm fl} \equiv \sigma_0 N \rho^{-1}$ is the height of the flat brush, $Q_{\rm fl} \equiv 3\pi^2h_{\rm fl}^2/(8 N^2 a^2)$ is the chemical potential of the flat brush at the grafting substrate, and the integral kernel $\mathbbm{K}_<$ is given by
\begin{equation}
\mathbbm{K}_<(\tilde{\mu},\tilde{\mu}') \equiv \frac{\sqrt{1 - \tilde{\mu}}}{(\tilde{\mu}' - \tilde{\mu})\sqrt{\tilde{\mu}' - 1}} \, .
\end{equation}
Here, we have introduced the scaled chemical potential coordinate $\tilde{\mu} \equiv \mu/Q$ and the dimensionless height coordinate $\hat{z} \equiv z/h$.
Similarly, we can use EEZ incompressibility constraint $g(\mu) = 0$ for $P > \mu > Q$ to rewrite eq.~(\ref{eq:incompressibility_mu}) as
\begin{equation}\label{eq:EEZ2}
\mathcal{A}(\tilde{\mu} > 1) = \frac{1}{\pi}\int_0^{1}{\rm d}\tilde{\mu}'\,\mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\mathcal{A}(\tilde{\mu}')
\end{equation}
where the $\mathcal{A}$ is the dimensionless area $\mathcal{A}(\mu) \equiv A(\mu)/A_0$ and the integral kernel $\mathbbm{K}_>$ is given by
\begin{equation}
\mathbbm{K}_>(\tilde{\mu},\tilde{\mu}') \equiv \frac{\sqrt{\tilde{\mu} - 1}}{(\tilde{\mu} - \tilde{\mu}')\sqrt{1-\tilde{\mu}'}} \, .
\end{equation}
Notably, these two integral equations relate information about the unknown functions outside of the EEZ, $\tilde{\mu} < 1$, to information about the unknown functions inside of the EEZ, $1 \leq \tilde{\mu} \leq \tilde{P}$~\cite{Belyi2004}.
The EEZ constraint equations, eqs.~(\ref{eq:EEZ1}) \& (\ref{eq:EEZ2}), are related by Steiner's formula $\mathcal{A}(\tilde{\mu}) = 1 + 2\hat{H}\hat{z}(\tilde{\mu}) + \hat{K}\hat{z}^2(\tilde{\mu})$, where $\hat{H} \equiv H h$ and $\hat{K} = K h^2$ are the dimensionless mean and Gaussian curvatures, respectively.
We can, in principle, solve these coupled integral equations numerically, despite the nonlinear form of their coupling.
This has been done in the specialized cases where the curved surface is described by a single radius of curvature $R$, namely for cylinders ($H = 1/(2R)$ and $K = 0$) as well as spheres ($H = 1/R$ and $K = 1/R^2$) \cite{Belyi2004}.
To solve the equations for arbitrary curvature, we express $\hat{z}(\tilde{\mu})$ in terms of a new function $\psi(\tilde{\mu})$ via
\begin{equation}
\hat{z}(\tilde{\mu}) = \frac{1}{\alpha\hat{H}}\left(\frac{\psi(\tilde{\mu})}{\Psi} - 1\right)\, ,
\end{equation}
where
\begin{equation}
\alpha \equiv K/H^2
\end{equation}
is a dimensionless parameter characterizing the scale of the Gaussian curvature $K$ relative to the mean curvature $H$, and $\Psi$ is a dimensionless parameter that, through the boundary condition $\hat{z}(\tilde{P}) = 0$, takes on the boundary value $\Psi = \psi(\tilde{P})$.
Through Steiner's formula, we can find a relationship between $\psi(\tilde{\mu})$ and $\mathcal{A}(\tilde{\mu})$, namely
\begin{equation}\label{eq:psi_area}
\mathcal{A}(\tilde{\mu}) = 1 + \frac{1}{\alpha}\left(\frac{\psi^2(\tilde{\mu})}{\Psi^2} - 1\right) \, .
\end{equation}
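For clarity, this relation follows by substituting $\hat{H}\hat{z} = \alpha^{-1}(\psi/\Psi - 1)$ and $\hat{K}\hat{z}^2 = \alpha(\hat{H}\hat{z})^2$ into Steiner's formula and completing the square,
\begin{equation}
\mathcal{A} = 1 + \frac{2}{\alpha}\left(\frac{\psi}{\Psi} - 1\right) + \frac{1}{\alpha}\left(\frac{\psi}{\Psi} - 1\right)^2 = 1 + \frac{1}{\alpha}\left(\frac{\psi^2}{\Psi^2} - 1\right) \, .
\end{equation}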
After substitution of $\psi(\tilde{\mu})$ into eqs.~(\ref{eq:EEZ1}) \& (\ref{eq:EEZ2}), we are left with a pair of explicitly coupled nonlinear integral equations,
\begin{subequations}
\begin{align} \label{eq:psi_int_equations}
\psi(\tilde{\mu} < 1) &= \alpha\sqrt{1-\tilde{\mu}} + \frac{2}{\pi}\Psi\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\psi(\tilde{\mu}')\\
\label{eq:psi_int_equations2}
\psi^2(\tilde{\mu} > 1) &= \frac{2}{\pi}\Psi^2(1 - \alpha)\arctan\sqrt{\tilde{\mu} - 1} \notag\\
&\mkern+32mu + \frac{1}{\pi}\int_0^1 {\rm d}\tilde{\mu}' \mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\psi^2(\tilde{\mu}')
\end{align}
\end{subequations}
where we have chosen $\Psi = \hat{H}^{-1}\sqrt{Q_{\rm fl}h^2/(Q h_{\rm fl}^2)}$ to remove any dependence of the integral equations on $Q$, $h$, or $\hat{H}$, leaving only $\alpha$ and $\tilde{P}$ as parameters.
The integral equations eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) have quadratic coupling, similar to the integral equations describing spherical brushes \cite{Belyi2004}, yet are valid for arbitrary Gaussian curvature $K \neq 0$ (the cylindrical limit of the equations is included in Appendix \ref{app:cyl} for completeness).
Region A ($0 < \alpha \leq 1$) is described by solutions for which $\psi(\tilde{\mu})/\Psi \geq 1$ for all values of $\tilde{\mu}$, with equality at $\tilde{\mu} = \tilde{P}$; the negative Gaussian curvature regions B--D ($\alpha < 0$) are described by solutions with $\psi(\tilde{\mu})/\Psi \leq 1$.
\section{\label{sec:numerical} Numerical Methods}
The forms of the coupled integral equations eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) are reminiscent of coupled inhomogeneous Fredholm integral equations of the second kind, but their nonlinearity puts them outside of this classification, complicating the process of finding solutions.
Nevertheless, we take a classical approach of approximating the integral equations as algebraic equations via the Nystr\"om method \cite{NumericalRecipes}, which we then solve iteratively.
We find that this approach works well for $(\alpha,\tilde{P})$ in region A, consistent with previous results for cylinders and spheres~\cite{Belyi2004}, but also for broad parts of regions C and D.
For region B, we are unable to find convergence to physical solutions, hinting at an instability in the iterative procedure.
However, we are able to find approximate solutions through a variational method that we outline below.
We have included code that implements the numerical schemes discussed here in a publicly accessible repository\cite{dataset}.
\subsection{Iterative method}
For the iterative method, to ensure continuity at the EEZ, we rewrite the function $\psi$ as $\psi(\tilde{\mu}) = \psi_1 + \Delta \psi(\tilde{\mu})$ and $\psi^2(\tilde{\mu}) = \psi_1^2 + \Delta\psi^2(\tilde{\mu})$, where $\psi_1 \equiv \psi(\tilde{\mu} = 1)$.
This allows us to rewrite eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) as
\begin{subequations}
\begin{align} \label{eq:psi_int_equations_cont}
\psi(\tilde{\mu} < 1) &= \psi_1 + \alpha\sqrt{1-\tilde{\mu}} + \frac{2}{\pi}(\Psi-\psi_1)\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\Delta\psi(\tilde{\mu}')\\
\label{eq:psi_int_equations_cont2}
\psi^2(\tilde{\mu} > 1) &= \frac{2}{\pi}\psi_1^2\arcsin\frac{1}{\sqrt{\tilde{\mu}}} + \frac{2}{\pi}\Psi^2(1 - \alpha)\arctan\sqrt{\tilde{\mu} - 1} \notag\\
&\mkern+32mu + \frac{1}{\pi}\int_0^1 {\rm d}\tilde{\mu}' \mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\Delta\psi^2(\tilde{\mu}')
\end{align}
\end{subequations}
which are explicitly continuous at the EEZ boundary $\tilde{\mu} = 1$.
Next, we discretize the interval $[0,\tilde{P}]$ into $\mathcal{N} = 10^4$ subdivisions so that the evaluation of $\psi(\tilde{\mu})$ results in a vector $\psi_n$ for $n = 0,1,\dots,\mathcal{N}$.
This allows us to recast eqs.~(\ref{eq:psi_int_equations_cont}) \& (\ref{eq:psi_int_equations_cont2}) as coupled algebraic equations of the form $\psi(\tilde{\mu} < 1) = f_<[\psi](\tilde{\mu})$ and $\psi^2(\tilde{\mu} > 1) = f_>[\psi^2](\tilde{\mu})$, where the integrals are approximated using Simpson's Rule.
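As an illustration of this discretization step, a minimal sketch of composite Simpson weights on a uniform grid reads as follows (the function name \texttt{simpson\_weights} is ours for illustration; the endpoint treatment of the weakly singular kernels, which the actual implementation \cite{dataset} must handle, is omitted here):
\begin{verbatim}
# Composite Simpson weights w on a uniform grid with NN
# subdivisions of [a, b], such that sum(w .* f.(x)) with
# x = range(a, b, length = NN + 1) approximates the integral.
function simpson_weights(a, b, NN::Int)
    @assert iseven(NN) "composite Simpson's rule needs even NN"
    h = (b - a) / NN
    w = fill(2h / 3, NN + 1)   # default: even interior nodes
    w[2:2:NN] .= 4h / 3        # odd interior nodes
    w[1] = w[end] = h / 3      # endpoints
    return w
end
\end{verbatim}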
The goal is then to seek zeros of the residuals $g_<[\psi](\tilde{\mu} < 1) \equiv f_<[\psi](\tilde{\mu}) - \psi(\tilde{\mu})$ and $g_>[\psi^2](\tilde{\mu} > 1) \equiv f_>[\psi^2](\tilde{\mu}) - \psi^2(\tilde{\mu})$.
We proceed to iterate these equations using simple mixing with the rules
\begin{equation}\begin{split}
\psi^{(i+1)}(\tilde{\mu} < 1) &= \psi^{(i)}(\tilde{\mu}) + r g_<\left[\psi^{(i)}\right](\tilde{\mu}) \\
\psi^{2,(i+1)}(\tilde{\mu} > 1) &= \psi^{2,(i)}(\tilde{\mu}) + r g_>\left[\psi^{2,(i)}\right](\tilde{\mu})
\end{split}\, ,\end{equation}
where the mixing parameter $r$ takes on values in the interval $(0,1]$.
We iterate until the total residual, defined through the squared $L^2$ norms as
\begin{align}\label{eq:total_residual}
{\rm Res}\left[\psi^{(i)},\psi^{2,(i)}\right] &\equiv \int_0^1 {\rm d}\tilde{\mu}\, g^2_<\left[\psi^{(i)}\right](\tilde{\mu}) \notag \\
&\mkern+32mu + \int_1^{\tilde{P}} {\rm d}\tilde{\mu}\, g^2_>\left[\psi^{2,(i)}\right](\tilde{\mu}) \, ,
\end{align}
reaches a target value.
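A minimal sketch of this mixing loop is given below, assuming hypothetical helpers \texttt{f\_lt} and \texttt{f\_gt} that evaluate the discretized right-hand sides $f_<$ and $f_>$; the residual is computed as a discrete sum standing in for the integrals of eq.~(\ref{eq:total_residual}) (see ref.~\cite{dataset} for the actual implementation):
\begin{verbatim}
# Simple-mixing (Picard) iteration for the coupled equations.
# psi_lt holds psi on the grid with mu < Q; psi2_gt holds psi^2
# on the grid with mu > Q. f_lt and f_gt are assumed helpers.
function iterate_mixing!(psi_lt, psi2_gt, f_lt, f_gt;
                         r = 0.1, tol = 1e-10, maxiter = 10^6)
    for _ in 1:maxiter
        g_lt = f_lt(psi_lt, psi2_gt) .- psi_lt   # residual g_<
        g_gt = f_gt(psi_lt, psi2_gt) .- psi2_gt  # residual g_>
        psi_lt  .+= r .* g_lt                    # mixing update
        psi2_gt .+= r .* g_gt
        res = sum(abs2, g_lt) + sum(abs2, g_gt)  # discrete residual
        res < tol && return res
    end
    error("simple mixing did not converge")
end
\end{verbatim}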
In order to iterate these equations, the square root of eq.~(\ref{eq:psi_int_equations_cont2}) must be taken, leading to a sign ambiguity.
Per eq.~(\ref{eq:psi_area}), the maximum of the area function $\mathcal{A}$ occurs at $\psi = 0$, corresponding to a point $\tilde{\mu}^* = \tilde{\mu}(\hat{z}^*)$ in chemical potential coordinates.
Since the brush can pass through the maximum area at most once, the sign of $\psi(\tilde{\mu})$ is determined by the sign of $\tilde{\mu} - \tilde{\mu}^*$.
We use linear interpolation on $\psi^2$ to determine the location $\tilde{\mu}^*$ of any zeros and then assign the sign of the square root $\psi = \pm \sqrt{\psi^2}$ accordingly.
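A sketch of this sign-assignment step, with illustrative names, might read:
\begin{verbatim}
# Recover psi = ±sqrt(psi^2), assuming psi vanishes at most once
# (at the area maximum, located near mu = mustar on the grid mu).
function signed_sqrt(mu, psi2)
    psi = sqrt.(max.(psi2, 0.0))    # magnitude of psi
    imin = argmin(psi2)             # grid point closest to psi = 0
    if psi2[imin] < 1e-12           # heuristic: a zero is present
        mustar = mu[imin]           # refine by interpolation if needed
        return ifelse.(mu .>= mustar, psi, -psi)  # sign of (mu - mu*)
    end
    return psi
end
\end{verbatim}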
This method works well for $\alpha > 0$, consistently reaching total residuals of $10^{-10}$ or lower.
To survey solutions in region A, we performed sweeps in $\tilde{P}$ for $\alpha = 0.1,\, 0.2,\, \dots,\, 1.0$; the results of these sweeps are discussed in the next section.
Use of the iterative method on the above equations for $\alpha < 0$ (particularly in region B, where the magnitude $|K|$ is small) proves to be unstable, leading to rapidly diverging residuals.
This instability seems to be influenced by the negative values of $\alpha\sqrt{1-\tilde{\mu}}$ in eq.~(\ref{eq:psi_int_equations}).
It is possible to find stable solutions by instead rescaling both $\psi \mapsto \psi/\alpha$ and $\Psi \mapsto \Psi/\alpha$, resulting in a modified equation,
\begin{align}
\psi(\tilde{\mu} < 1) &= \sqrt{1-\tilde{\mu}} + \frac{2}{\pi}\Psi\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\psi(\tilde{\mu}') \, .
\end{align}
However, using values of $\alpha$ between $-0.1$ and $-10$, this only results in stably converging solutions in regions C and D, and the iterative method fails to find solutions in region B.
Interestingly, while the $\tilde{P} \rightarrow 1$ limit corresponds to vanishing mean curvature $\hat{H}$ for region A, consistent with the disappearance of the EEZ, the same limit corresponds to an increasingly large magnitude of Gaussian curvature $|\hat{K}|$ for regions C and D.
This suggests that while the size of the EEZ is largely dependent on a positive mean curvature $H>0$, it is also modulated by Gaussian curvature to the degree that a sufficiently negative value $K<0$ at fixed $H$ can suppress the EEZ.
We will show that this is indeed the case.
\subsection{Variational method}
In order to study solutions to eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) in region B, we turn to a variational calculation, the goal of which is to determine forms of $\psi(\tilde{\mu})$ that minimize the total residual, eq.~(\ref{eq:total_residual}).
We approximate $\psi(\tilde{\mu})$ by a pair of degree-$d$ polynomials in the Bernstein basis,
\begin{equation}\begin{split}
\psi(\tilde{\mu} < 1) &= \sum_{n=0}^d a_n \beta_n^d(\tilde{\mu}) \\
\psi(\tilde{\mu} > 1) &= \sum_{n=0}^d b_n \beta_n^d\left(\frac{\tilde{\mu}-1}{\tilde{P}-1}\right)
\end{split}\, ,\end{equation}
where the Bernstein polynomials are defined as
\begin{equation}
\beta_n^d(t) = \frac{d!}{n!(d-n)!}(1-t)^{d-n}t^n \, ,
\end{equation}
and $a_n,b_n \in \mathbbm{R}$ are coefficients representing the degrees of freedom for the variational calculation.
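For illustration, a minimal sketch of the Bernstein basis and the ansatz for $\tilde{\mu} < 1$ (with names of our own choosing) is:
\begin{verbatim}
# Bernstein basis polynomial beta_n^d(t) and the variational
# ansatz for psi on mu in [0, 1]; binomial is from Base Julia.
bern(n, d, t) = binomial(d, n) * (1 - t)^(d - n) * t^n

# psi(mu < 1) from coefficients a = [a_0, ..., a_d]:
psi_ansatz(a, mu) = sum(a[n + 1] * bern(n, length(a) - 1, mu)
                        for n in 0:length(a) - 1)
\end{verbatim}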
We further restrict the space of functions by requiring continuity of $\psi(\tilde{\mu})$, so $b_0 = a_d$, and continuity of ${\rm d}\psi/{\rm d}\mu$, so $b_1 - b_0 = (\tilde{P}-1)(a_d - a_{d-1})$.
As a result, the variational calculation has $2d-2$ degrees of freedom.
This Bernstein basis is not only convenient due to its smoothness, but it also guarantees that the function $\psi(\tilde{\mu})$ is well-behaved, despite the weakly singular kernels $\mathbbm{K}_<$ and $\mathbbm{K}_>$ \cite{Jafarian2014}.
We numerically minimize the total residual with respect to these coefficients via a Nelder-Mead method, as implemented in the \texttt{Optim} package in the Julia programming language.
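Schematically, assuming a hypothetical function \texttt{total\_residual} that maps the free coefficients to eq.~(\ref{eq:total_residual}), the minimization reads:
\begin{verbatim}
using Optim

d  = 16             # polynomial degree used in the text
c0 = ones(2d - 2)   # initial guess for the free coefficients
opts   = Optim.Options(g_tol = 1e-12, iterations = 10^5)
# total_residual is an assumed helper, not shown here:
result = optimize(total_residual, c0, NelderMead(), opts)
c_best = Optim.minimizer(result)  # residual-minimizing coefficients
\end{verbatim}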
While this method generally leads to higher residuals than the iterative method, we do find reasonable solutions achieving residuals of $10^{-6}$ or smaller for $d = 16$ with a gradient tolerance of $10^{-12}$. However, we note that the performance of even this variational method is limited, such that achieving residuals of even this weaker tolerance becomes problematic for sufficiently small $|\alpha|$ (i.e.~near the $K = 0$ line).
We performed sweeps of $\tilde{P}$ for $\alpha$ between $-0.1$ and $-3$.
\section{\label{sec:discussion} Results \& Discussion}
\begin{figure*}
\includegraphics[width=\textwidth]{figs/fig3.png}
\caption{\label{fig:3}
Summary of numerical studies after smoothing and interpolation, plotted as a function of dimensionless mean curvature $Hh$ and Gaussian curvature $Kh^2$. We extrapolate to the low curvature regime $Hh\rightarrow 0$ using a fitting function of the form $a\times{\rm exp}(-b/H)$ for contours of fixed $Kh^2$, where $a$ and $b$ are fitting parameters. Data shown for physical regions A, B, and C, the boundaries of which are denoted with dashed lines. (a) Fraction of exclusion zone height $z_{\rm ex}$ to brush height $h$. (b) Percent correction $\Delta \mathcal{F}$ to the parabolic brush free energy due to the inclusion of the end exclusion zone, plotted on log scale. (c) and (d) show relative EEZ size and percent free energy correction as a function of Gaussian curvature $Kh^2$ for fixed values of mean curvature $Hh$ ranging from $0.4$ to $2.0$.
}
\end{figure*}
A solution for $\psi(\tilde{\mu})$ for fixed values of $(\alpha,\tilde{P})$ fully encodes both the structure (stretching and free-end profiles) as well as thermodynamics of strongly-stretched, convex brushes.
The dimensionless mean curvature $\hat{H}$ is given by $\hat{H} = \alpha^{-1}(\psi(0)/\Psi - 1)$, from which we can find the dimensionless Gaussian curvature $\hat{K} = \alpha \hat{H}^2$.
The brush height $h$ is found through volume conservation: the volume of the flat brush, $V_{\rm fl} = A_0h_{\rm fl}$, must be the same as the volume of the curved brush, $V = \int_0^h {\rm d}z\, A(z) = A_0h(1 + \hat{H} + \hat{K}/3)$, so
\begin{equation}
h = h_{\rm fl}(1 + \hat{H} + \hat{K}/3)^{-1} .
\end{equation}
From this, we extract the height of the EEZ, $z_{\rm ex}/h = (\psi(1) - \Psi)/(\psi(0) - \Psi)$, as well as its chemical potential, $Q/Q_{\rm fl} = h^2/(\hat{H}\Psi h_{\rm fl})^2$.
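As a compact summary of this post-processing, given a converged solution one would compute (variable names are illustrative):
\begin{verbatim}
# Post-processing of a converged solution: psi0 = psi at mu = 0,
# psi1 = psi at mu = 1 (EEZ boundary), Psi = psi at mu = Ptil.
function postprocess(psi0, psi1, Psi, alpha, h_fl)
    Hhat = (psi0 / Psi - 1) / alpha      # from zhat(mu = 0) = 1
    Khat = alpha * Hhat^2                # definition of alpha
    h = h_fl / (1 + Hhat + Khat / 3)     # volume conservation
    zex_over_h = (psi1 - Psi) / (psi0 - Psi)  # EEZ height fraction
    return (Hhat = Hhat, Khat = Khat, h = h, zex_over_h = zex_over_h)
end
\end{verbatim}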
We can calculate $g(z_0)$ by first inverting eq.~(\ref{eq:incompressibility}) via an Abel transform, yielding
\begin{equation}
g(\mu_0) = \sqrt{\frac{2 a^2}{3 \pi^2}}\frac{{\rm d}}{{\rm d}\mu_0}\int_0^{\mu_0} {\rm d}\mu \frac{\rho A(\mu)}{\sqrt{\mu_0 - \mu}}
\end{equation}
and then performing a change of variables, $g(z_0) = -({\rm d}\mu/{\rm d}z)_{z = z_0}g(\mu(z_0))$.
Finally, we find it convenient to define a scaled distribution function $\hat{g}(z_0) \equiv g(z_0)/\sigma_0$, normalized such that $\int_0^h {\rm d}z_0\,\hat{g}(z_0) = 1$.
In Fig.~\ref{fig:3}(a), we plot the end exclusion zone height $z_{\rm ex}/h$ as a function of dimensionless mean curvature $Hh$ and Gaussian curvature $Kh^2$ by interpolating the results of our numerical studies.
We find that the size of the EEZ grows monotonically with $Hh$ at fixed $Kh^2$.
However, this growth rate is strongly modulated by $Kh^2$.
Recall from Fig.~\ref{fig:2}(b) and Steiner's formula, eq.~(\ref{eq:steiner}), that the area $A(z)$ available to brush monomers is controlled by mean curvature $H$ for small $z$ and by Gaussian curvature $K$ for large $z$.
The strong dependence of the EEZ size on $Hh$ for low-to-intermediate curvatures can be rationalized by the proximity of the EEZ boundary layer to the grafting substrate: $Hh$ sets the packing geometry of monomers \emph{local} to the EEZ.
However, the strong modulation by $Kh^2$ indicates that the packing of monomers at the brush end exerts \emph{non-local} control over the size of the EEZ.
At high curvatures, the Gaussian curvature plays the dominant role in determining the EEZ size, with the variations in $z_{\rm ex}/h$ with $Hh$ largely saturating in the $Hh \simeq 2$ regime, as seen in Fig.~\ref{fig:3}(c).
Here, the EEZ is large enough that it starts to ``feel'' the packing geometry at the brush end more acutely.
The numerical schemes outlined here do not work well in the limit of a nearly flat brush, where $Hh\rightarrow 0$ and $\tilde{P}\rightarrow 1$.
However, as shown in Appendix \ref{app:weak}, our equations are amenable to the same low-curvature analysis as presented by Belyi \cite{Belyi2004} and we recover the same scaling prediction $z_{\rm ex}/h \sim \exp\{-(2Hh_{\rm fl})^{-1}\} \sim \exp\{-[(2Hh)(1+Hh+Kh^2/3)]^{-1}\}$.
Not only does this scaling function fit our lowest-curvature data well and serve to extrapolate our data to $Hh\rightarrow 0_+$, but the variations with $Kh^2$ are also consistent with the trends observed in our data.
Interestingly, this scaling function is non-analytic in $Hh$ and thus does not permit a power series expansion about $Hh\rightarrow 0_+$.
As shown in Appendix \ref{app:fe}, the stretching free energy of a brush is given by
\begin{equation}\label{eq:fe}
F = \frac{\rho}{2}\int_0^P {\rm d}\mu\, z(\mu) A(\mu) = \frac{\rho}{2}\int_0^h {\rm d}z \Big(- z\frac{ {\rm d} \mu}{ {\rm d} z} \Big) A(z) \, .
\end{equation}
Note that the PBT result $\big(-z \frac{ {\rm d} \mu}{ {\rm d} z} \big) \propto z^2$ shows that the entropic free energy of occupying a height $z$ in the molten brush increases with that height~\cite{LikhtmanSemenov}.
The existence of the EEZ modifies the form of $\big(-\frac{ {\rm d} \mu}{ {\rm d} z} \big)$, and hence the relative free energy costs of occupying different heights above the anchoring surface are re-weighted with respect to the PBT.
To examine the effect of the EEZ on the free energy, we determine the free energy change relative to the PBT free energy, $\Delta\mathcal{F} \equiv (F-F_{\rm PBT})/F_{\rm PBT}$, which is plotted in Fig.~\ref{fig:3}(b).
While the contours of equal free energy correction follow a qualitatively similar trend to the EEZ height in Fig.~\ref{fig:3}(a), the difference in scale is dramatic, with free energy corrections $\lesssim 10^{-4}\%$ for an EEZ height of $\simeq 10\%$ of the brush height.
Gaussian curvature similarly modulates the size of the free energy correction, which decays for negative $K$, as shown in Fig.~\ref{fig:3}(d).
A low-curvature approximation of $\Delta \mathcal{F}$ in Appendix \ref{app:weak} reveals a similar scaling with powers of $\exp\{-(2Hh_{\rm fl})^{-1}\}$, indicating that for low curvature, $\Delta \mathcal{F} \sim (z_{\rm ex}/h)^{\nu}$ with $\nu = 2$.
However, our numerical results are unable to resolve sufficiently low curvatures to demonstrate this scaling prediction.
As the stretching free energy corrections are remarkably small in the low-curvature regime, with values of $\Delta\mathcal{F} \gtrsim 1\%$ appearing only for $Hh\gtrsim 0.5$, low-curvature EEZ corrections are unlikely to have a significant effect on brush thermodynamics.
Nevertheless, for relatively high curvatures, the EEZ correction to the free energy can be substantial.
To make these results accessible for future studies, we have included supporting software for interpolating EEZ results for arbitrary values of $Hh$ and $Kh^2$ in this range\cite{dataset}.
\begin{figure*}
\includegraphics[width=\textwidth]{figs/fig4.png}
\caption{\label{fig:4}Properties of brushes at fixed $Hh \simeq 0.5$, with $Kh^2\simeq 0.25$ representing a spherical brush, $Kh^2=0$ a cylindrical brush, $Kh^2 \simeq -0.25$ a region B brush, $Kh^2 \simeq -1.37$ and $-1.78$ two region C brushes, and $Kh^2 \simeq -2.01$ a brush near the C/D border. (a) Chemical potential function $\mu(z)$, normalized by the chemical potential at the grafting surface $\mu(0) = P$ for a subset of solutions for clarity (note: $Kh^2 \simeq -1.8$ not shown in this panel since it is indistinguishable from $Kh^2 \simeq -2$). The dashed curve corresponds to the PBT prediction for a concave cylindrical brush with $Hh=-1/2$. The end distribution function $\hat{g}(z)$ and polarization order parameter $p(z)$ for the same subset of solutions are plotted in (b) and (c). The inset of (c) is a magnified view of the polar order parameter as $z\to h$, showcasing the similarities in local chain packing between the concave cylindrical brush and the $Kh^2 \simeq -2$ brush.}
\end{figure*}
To put the magnitudes of free energy corrections in perspective, we can consider domains of AB diblock copolymer melts in their typical range of stability, which is determined by the volume fraction $f_A$ of block A relative to block B, as well as the segregation strength $\chi N$.
At finite $\chi N$, the equilibrium configurations of block copolymers are well-described by a variant of the self-consistent field theory (SCFT) \cite{Edwards1965,Helfand1975}.
Semenov\cite{Semenov1985} postulated that in the strong segregation limit ($\chi N \to \infty$), chains are strongly stretched, with each domain of the microphase separated block copolymer melt resembling a molten brush.
To establish that this strong-segregation theory (SST) is a rigorous asymptotic limit of SCFT, Likhtman and Semenov\cite{LikhtmanSemenov} determined finite-$\chi N$ corrections based on the translational entropy of chain ends and organization of chains at the AB-interface.
Matsen\cite{matsen_strong-segregation_2010} demonstrated that EEZ effects emerge in the high-$\chi N$ regime of SCFT calculations and that an accurate comparison with SST requires incorporating EEZ corrections to the stretching free energy.
This is particularly true when establishing the stability windows for cylinder ($0.11 \lesssim f_A \lesssim 0.30$) and sphere ($f_A \lesssim 0.11$) phases in the strong-segregation limit \cite{Fredrickson2006,matsen_strong-segregation_2010}.
The corresponding curvature ranges for these two phases are $1.0 \gtrsim Hh \gtrsim 0.4$ and $Hh \gtrsim 1.1$ \& $Kh^2 \gtrsim 1.2$, respectively.
Our results show an EEZ correction to the free energy of $1.37\% \gtrsim \Delta\mathcal{F}_{\rm cyl} \gtrsim 0.03\%$ and $\Delta\mathcal{F}_{\rm sph} \gtrsim 4.17\%$.
These free energy corrections are close to the values of $1.43\% \gtrsim \Delta\mathcal{F}_{\rm cyl} \gtrsim 0.04\%$ and $\Delta\mathcal{F}_{\rm sph} \gtrsim 4.04\%$ obtained by Matsen\cite{matsen_strong-segregation_2010} in a rather different approach to determining the EEZ corrections to PBT that allows free chain ends to have variable tension.
Network phases, such as the double-gyroid, have morphologies with local surface curvatures and domain thicknesses that vary with position over the intermaterial dividing surface between minority and matrix domains.
Taking estimates from ref.~\cite{Reddy2021}, the brush geometry for a double-gyroid varies from a locally flatter region near points of 3-fold symmetry (along $\left<111\right>$ directions) with $H h \simeq 0.1$ and $K h^2 \simeq 0$ to the most negatively curved points, at ``elbow''-like regions that pass through points of 2-fold symmetry (along $\left<110\right>$ directions) with $H h \simeq 0.2$ and $K h^2 \simeq -0.3$ (a region B brush).
Each of these regions has free energy corrections of at most $\lesssim 10^{-6}\%$, suggesting that EEZ-based corrections are minute for such double-gyroid morphologies, relative to cylindrical and spherical morphologies.
Nevertheless, given small free energy differences with competing phases, the extent to which the free energy corrections to the PBT alter the predicted phase boundaries in the strong-segregation ($\chi N \to \infty$) limit remains an open question.
To study how chains behave in each of the regions of interest, we consider solutions with $Hh\simeq 0.5$ and variable $Kh^2$ in Fig.~\ref{fig:4}, in particular how the extent of the EEZ changes.
As shown in Fig.~\ref{fig:4}(a) the chemical potential $\mu(z)$ approaches the PBT chemical potential, under suitable re-scaling by $h$ and $P$, as $Kh^2$ becomes increasingly negative, consistent with the observation that the brush free energy is well approximated by the PBT free energy in this regime.
Varying $Kh^2$ can be accomplished by either fixing $K$ and varying $h$ or vice-versa.
If $K$ is fixed to a negative value, and the brush height $h$ is increased, the size of the EEZ $z_{\rm ex}$ remains fixed but the fraction $z_{\rm ex}/h$ decreases, so that on the scale of the brush, the overall effect of the EEZ diminishes.
Conversely, if $h$ is fixed and $K$ becomes increasingly negative, the area growth due to positive $H$ is quickly overcome by the decrease in area due to negative $K$, decreasing the size $z_{\rm ex}$ of the EEZ.
Physically, as the end of the brush is confined to a smaller area, chain ends are depleted away from the end towards the middle of the brush, as shown in Fig.~\ref{fig:4}(b).
The result is a balance between packing constraints near the grafting substrate and near the free brush end; as the brush becomes more confined, the outward splay near the substrate is reduced.
Thus, the non-local nature of the integral equations describing curved brushes is reflected in the non-local influence on chain packing that the two ends of the brush exert on each other.
The forms of the chemical potential $\mu(z)$ and end distribution function $\hat{g}(z)$ obtained from our numerical studies of the brush equations are strikingly similar to the SCFT results of Matsen for cylindrical and spherical domain brushes\cite{matsen_strong-segregation_2010} in the limit of strong stretching, further supporting their validity.
Finally, in addition to depleting endpoints away from the grafting substrate, the EEZ has a strong influence on the orientation of segments in the brush.
To see this, consider the polar order parameter $\mathbf{p}(z)$, the average orientation of segments at point $z$ \cite{Fredrickson1992,Zhao2012,Prasad2017}.
Microscopically, the monomer orientation is tangent to the chain, i.e. $\hat{\mathbf{r}} = a^{-1}{\rm d}\mathbf{r}/{\rm d}n$, where the conformation of an individual chain is given by the parametric curve $\mathbf{r}(n)$.
For strongly stretched polymer brushes, the mean orientation of segments points along the normal direction $\mathbf{\hat{n}}$.
Fluctuations reduce the magnitude of the mean orientation, so the mean orientation of $\delta n$ monomers in a single chain is given by $\mathbf{\hat{n}}\delta z/(a\delta n)$, where $\delta z$ is the height interval spanned by $\delta n$ monomers.
Now consider averaging this quantity over all chains passing through a sample volume $\delta V$.
Once again, chain conformations are labeled by the location of the free end $z_0$ and the multiplicity of such chains is $\delta A \times g(z_0)$, where $\delta A$ is the cross-sectional area of the sample volume.
In general, such a volume contains $\delta n$ segments per chain.
However, we can choose the sample volume to enclose a single monomer on average, taking the limit $\delta n \rightarrow 1$ with $\delta V \simeq \delta A\times \delta z \rightarrow \rho^{-1}$.
In this limit, the polar order parameter is given by
\begin{equation}
\mathbf{p}(z) = \frac{\mathbf{\hat{n}}}{\rho a}\int_z^h{\rm d}z_0\,g(z_0) = \mathbf{\hat{n}}\frac{h_{\rm fl}}{Na}\int_z^h{\rm d}z_0\,\hat{g}(z_0)\, .
\end{equation}
Thus, averaging over all chains in a region is equivalent to summing the chain-end distribution $\hat{g}(z_0)$ from $z$ to $h$.
Note that the upper bound on the polar order parameter is set by the ratio of the flat brush height to the total arclength of a chain $h_{\rm fl}/(Na)$.
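As a sketch of how this tail integral might be evaluated numerically on a uniform grid (a reverse cumulative trapezoid; names are illustrative):
\begin{verbatim}
# Tail integral of the normalized end distribution ghat from z
# to h; multiply by h_fl/(N*a) to obtain the magnitude p(z).
function polar_order(z, ghat)
    dz = z[2] - z[1]               # uniform grid spacing
    tail = zeros(length(ghat))
    for j in length(z)-1:-1:1      # accumulate from the tip down
        tail[j] = tail[j + 1] + 0.5 * dz * (ghat[j] + ghat[j + 1])
    end
    return tail
end
\end{verbatim}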
Consequently, since $\hat{g}(z_0) = 0$ within the EEZ, polymer brushes locally have a zone of uniform, enhanced polar order for $z < z_{\rm ex}$, as shown in Fig.~\ref{fig:4}(c), which tapers off for $z > z_{\rm ex}$, ultimately depolarizing at the tips of the brush, i.e.~$p \to 0$ as $z\to h$.
We note that negatively curved brushes approaching the boundary between regions C and D have polar order parameters that reflect differences in their local geometry: for small $z$ they exhibit splay geometry near the grafting surface, but for $z \to h$ they have convergent (concave) geometry, like the inside of a cylindrical domain.
Hence, such a brush (e.g.~the curve $H h \simeq 0.5$, $K h^2 \simeq -2$ in Fig.~\ref{fig:4}(c)) shows both a constant-polarization zone in a narrow EEZ near the grafting surface and a polarization profile similar to that of an inwardly curved cylinder near its convergent tips, with $p'(z) \to 0$ as $z \to h$, as shown by the dashed curve for $Hh = -1/2$.
\section{\label{sec:conclusions} Conclusions}
We have presented a form of the self-consistent brush equations implementing end-exclusion zone corrections that is applicable to substrates of arbitrary curvature.
In the process, we identified several regimes of interest and developed numerical methods to approximate solutions to the equations within those regimes.
These numerical solutions provide rigorous predictions for the size of the end-exclusion zone, as well as the exact strong-stretching free energy, chemical potential field, chain end distribution, and polar order parameter of the brush as a function of substrate mean and Gaussian curvatures for arbitrary convex brush shapes ($H h>0$).
These results show that the size of the EEZ, as well as its thermodynamic correction to the PBT, decreases with decreasing mean curvature (strictly vanishing as $Hh \to 0_+$) as well as with increasingly negative Gaussian curvature at fixed $Hh > 0$.
Moreover, we have demonstrated that the end-exclusion zone corrections to free energy predicted by the parabolic brush theory are non-analytic.
Consequently, while the polymer brush free energy can be well approximated by a Helfrich-style free energy for low curvature, end-exclusion zone effects cannot be added in as higher order polynomial terms in $H$ and $K$, counter to the usual expectations for structured fluid membranes \cite{MilnerBendingModuli1988,MilnerMacromolecules1994,Birshtein2008,Lei2015}.
We have shown that brush splay not only affects the distribution of free ends within the brush, but also leads to anomalous constant polarization of segments within the EEZ.
While we have neglected a discussion of solvated brushes, the results presented here can be adapted to their study.
Bringing a polymer brush into equilibrium with a solvent bath relaxes the space-filling constraint on the monomer distribution, altering eq.~(\ref{eq:incompressibility}) so that the local monomer density is $\rho A(z)\phi(z)$, where $\phi(z)$ is the volume fraction of monomers.
The volume fraction $\phi(z)$ is determined by the local chemical potential through the constitutive relation $\mu(z) \propto \phi(z)$ for marginal solvents (or $\propto \phi^2(z)$ for $\theta$-solvents) \cite{degennes_scaling}.
Solvated brushes are thus swollen relative to molten brushes, depressing the relative height of the EEZ $z_{\rm ex}/h$.
Moreover, the monomer volume fraction $\phi$ is minimal at the free surface of the brush since it is in osmotic equilibrium with the surrounding solvent bath.
Consequently, chain ends should be depleted from the free surface, altering $g(z)$ such that the maximum density of chain ends occurs in the bulk of the brush (see, e.g.~ref\cite{Belyi2004}).
While we have addressed the problem of general curvature for brushes of homogeneous, monodisperse flexible polymers, the lessons learned here prompt additional questions about more complex brushes and suggest routes towards engineering chain statistics.
The brushes we have studied have three operative length scales that control chain statistics: two radii of curvature and the brush height.
Notably, the appearance of an EEZ alters the shape of $\mu(z)$ and thereby re-weights the local free energy cost (per unit volume) of segments in molten brushes at a given distance from the grafting surface.
We note that similar effects are predicted for flat brushes composed of bidisperse chain lengths \cite{MWC1989,witten_two-component_1989}.
The behavior of polydisperse curved brushes is an open question, in particular whether the intrinsic stratification of different chain lengths in polydisperse brushes either enhances, or instead suppresses, the geometric tendencies that drive EEZs in monodisperse convex brushes, and how in turn the combination of these two effects alters the thermodynamic sensitivity of brushes to curvature.
\begin{acknowledgments}
The authors are grateful to A.~Reddy and T.~Witten for valuable discussion and comments. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Award DE-SC0014599.
\end{acknowledgments}
\section{Appendices}
In chemical potential coordinates, eq.~(\ref{eq:tof}) is given by
\begin{equation}\label{eq:tof_mu}
N(\mu_0) = -\sqrt{\frac{3}{2 a^2}}\int_{\mu_0}^P {\rm d}\mu \frac{{\rm d}z}{{\rm d}\mu}\frac{1}{\sqrt{\mu - \mu_0}}
\end{equation}
and eq.~(\ref{eq:incompressibility}) is given by
\begin{equation}\label{eq:incompressibility_mu}
\sqrt{\frac{3}{2a^2}}\int_0^{\mu} {\rm d}\mu_0 \frac{g(\mu_0)}{\sqrt{\mu - \mu_0}} = \rho A(\mu)
\end{equation}
where $g(\mu_0) \equiv (-{\rm d}z/{\rm d}\mu)|_{\mu = \mu_0}g\left(z(\mu_0)\right)$.
This change of coordinates permits the inversion of each of these equations via an Abel transformation \cite{Ball1991,Belyi2004} so that ${\rm d}z/{\rm d\mu}$ can be solved for in terms of $N(\mu_0)$ and $g(z_0)$ can be solved for in terms of $A(\mu)$.
Through these equations, the tautochrone and incompressibility constraints in the presence of the EEZ provide non-local information about the functional forms of $z(\mu)$ and $A(\mu)$.
Here, we follow the approach of Belyi \cite{Belyi2004} and use the EEZ tautochrone constraint $N(\mu) = N$ for $\mu < Q$ to rewrite eq.~\ref{eq:tof_mu} as
\begin{equation}\label{eq:EEZ1}
\hat{z}(\tilde{\mu} < 1) = \sqrt{\frac{Q h_{\rm fl}^2}{Q_{\rm fl}h^2}}\sqrt{1 - \tilde{\mu}} + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}') \hat{z}(\tilde{\mu}')
\end{equation}
where $h$ is the brush height, $h_{\rm fl} \equiv \sigma_0 N \rho^{-1}$ is the height of the flat brush, $Q_{\rm fl} \equiv 3\pi^2h_{\rm fl}^2/(8 N a^2)$ is the chemical potential of the flat brush at the brush substrate, and the integral kernel $\mathbbm{K}_<$ is given by
\begin{equation}
\mathbbm{K}_<(\tilde{\mu},\tilde{\mu}') \equiv \frac{\sqrt{1 - \tilde{\mu}}}{(\tilde{\mu}' - \tilde{\mu})\sqrt{\tilde{\mu}' - 1}} \, .
\end{equation}
Here, we have introduced the scaled chemical potential coordinate $\tilde{\mu} \equiv \mu/Q$ and the dimensionless height coordinate $\hat{z} \equiv z/h$.
Similarly, we can use EEZ incompressibility constraint $g(\mu) = 0$ for $P > \mu > Q$ to rewrite eq.~(\ref{eq:incompressibility_mu}) as
\begin{equation}\label{eq:EEZ2}
\mathcal{A}(\tilde{\mu} > 1) = \frac{1}{\pi}\int_0^{1}{\rm d}\tilde{\mu}'\,\mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\mathcal{A}(\tilde{\mu}')
\end{equation}
where the $\mathcal{A}$ is the dimensionless area $\mathcal{A}(\mu) \equiv A(\mu)/A_0$ and the integral kernel $\mathbbm{K}_>$ is given by
\begin{equation}
\mathbbm{K}_>(\tilde{\mu},\tilde{\mu}') \equiv \frac{\sqrt{\tilde{\mu} - 1}}{(\tilde{\mu} - \tilde{\mu}')\sqrt{1-\tilde{\mu}'}} \, .
\end{equation}
Notably, these two integral equations relate information about the unknown functions outside of the EEZ, $\tilde{\mu} < 1$, to information about the unknown functions inside of the EEZ, $1 \leq \tilde{\mu} \leq \tilde{P}$~\cite{Belyi2004}.
The EEZ constraint equations, eqs.~(\ref{eq:EEZ1}) \& (\ref{eq:EEZ2}), are related by Steiner's formula $\mathcal{A}(\tilde{\mu}) = 1 + 2\hat{H}\hat{z}(\tilde{\mu}) + \hat{K}\hat{z}^2(\tilde{\mu})$, where $\hat{H} \equiv H h$ and $\hat{K} = K h^2$ are the dimensionless mean and Gaussian curvatures, respectively.
We can, in principle, solve these coupled integral equations numerically, despite the nonlinear form of their coupling.
This has been done in the specialized cases where the curved surface is described by a single radius of curvature $R$, namely for cylinders ($H = 1/(2R)$ and $K = 0$) as well as spheres ($H = 1/R$ and $K = 1/R^2$) \cite{Belyi2004}.
To solve the equations for arbitrary curvature, we express $\hat{z}(\tilde{\mu})$ in terms of a new function $\psi(\tilde{\mu})$ via
\begin{equation}
\hat{z}(\mu) = \frac{1}{\alpha\hat{H}}\left(\frac{\psi(\tilde{\mu})}{\Psi} - 1\right)\, ,
\end{equation}
where
\begin{equation}
\alpha \equiv K/H^2
\end{equation}
is a dimensionless parameter characterizing the scale of the Gaussian curvature $K$ relative to the mean curvature $H$, $\Psi$ is a dimensionless parameter that, through the boundary condition $\hat{z}(\tilde{P}) = 0$, takes on the boundary value $\Psi = \psi(\tilde{P})$.
Through Steiner's formula, we can find a relationship between $\psi(\tilde{\mu})$ and $\mathcal{A}(\tilde{\mu})$, namely
\begin{equation}\label{eq:psi_area}
\mathcal{A}(\tilde{\mu}) = 1 + \frac{1}{\alpha}\left(\frac{\psi^2(\tilde{\mu})}{\Psi^2} - 1\right) \, .
\end{equation}
After substitution of $\psi(\tilde{\mu})$ into Eqs.~(\ref{eq:EEZ1}) \& (\ref{eq:EEZ2}), we are left with a pair of explicitly coupled nonlinear integral equations,
\begin{subequations}
\begin{align} \label{eq:psi_int_equations}
\psi(\tilde{\mu} < 1) &= \alpha\sqrt{1-\tilde{\mu}} + \frac{2}{\pi}\Psi\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\psi(\tilde{\mu}')\\
\label{eq:psi_int_equations2}
\psi^2(\tilde{\mu} > 1) &= \frac{2}{\pi}\Psi^2(1 - \alpha)\arctan\sqrt{\tilde{\mu} - 1} \notag\\
&\mkern+32mu + \frac{1}{\pi}\int_0^1 {\rm d}\tilde{\mu}' \mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\psi^2(\tilde{\mu}')
\end{align}
\end{subequations}
where we have chosen $\Psi = \hat{H}^{-1}\sqrt{Q_{\rm fl}h^2/(Q h_{\rm fl}^2)}$ to remove any dependence of the integral equations on $Q$, $h$, or $\hat{H}$, leaving only $\alpha$ and $\tilde{P}$ as parameters.
The integral equations eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) have quadratic coupling, similar to the integral equations describing spherical brushes\cite{Belyi2004}, yet are valid for arbitrary Gaussian curvature $K \neq 0$ (the cylindrical limit of the equations are included in the Appendix \ref{app:cyl} for completion).
Region A ($0 < \alpha \leq 1$), are described by solutions for which $\psi(\tilde{\mu})/\Psi \geq 1$ for all values of $\tilde{\mu}$, with equality at $\tilde{\mu} = \tilde{P}$; negative Gaussian curvature regions B-D ($\alpha < 0$) are described by solutions with $\psi(\tilde{\mu})/\Psi \leq 1$.
\section{\label{sec:numerical} Numerical Methods}
The forms of the coupled integral equations eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) are reminiscent of coupled inhomogeneous Fredholm integral equations of the second kind, but their nonlinearity puts them outside of this classification, complicating the process of finding solutions.
Nevertheless, we take a classical approach of approximating the integral equations as algebraic equations via the Nystr\"om method \cite{NumericalRecipes} which we then solve iteratively.
We find that this approach works well for $(\alpha,\tilde{P})$ in region A, consistent with previous results focusing on cylinders as spheres~\cite{Belyi2004}, but also for broad parts of region C and D.
For region B, we are unable to find convergence to physical solutions, hinting at some sort of instability in the iterative procedure.
However, we are able to find approximate solutions through a variational method that we outline below.
We have included code that implements the numerical schemes discussed here in a publicly accessible repository\cite{dataset}.
\subsection{Iterative method}
For the iterative method, to ensure continuity at the EEZ, we rewrite the function $\psi$ as $\psi(\tilde{\mu}) = \psi_1 + \Delta \psi(\tilde{\mu})$ and $\psi^2(\tilde{\mu}) = \psi_1^2 + \Delta\psi^2(\tilde{\mu})$, where $\psi_1 \equiv \psi(\tilde{\mu} = 1)$.
This allows us to rewrite eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) as
\begin{subequations}
\begin{align} \label{eq:psi_int_equations_cont}
\psi(\tilde{\mu} < 1) &= \psi_1 + \alpha\sqrt{1-\tilde{\mu}} + \frac{2}{\pi}(\Psi-\psi_1)\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\Delta\psi(\tilde{\mu}')\\
\label{eq:psi_int_equations_cont2}
\psi^2(\tilde{\mu} > 1) &= \frac{2}{\pi}\psi_1^2\arcsin\frac{1}{\sqrt{\tilde{\mu}}} + \frac{2}{\pi}\Psi^2(1 - \alpha)\arctan\sqrt{\tilde{\mu} - 1} \notag\\
&\mkern+32mu + \frac{1}{\pi}\int_0^1 {\rm d}\tilde{\mu}' \mathbbm{K}_>(\tilde{\mu},\tilde{\mu}')\Delta\psi^2(\tilde{\mu}')
\end{align}
\end{subequations}
which are explicitly continuous at the EEZ boundary $\tilde{\mu} = 1$.
Next, we discretize the interval $[0,\tilde{P}]$ into $\mathcal{N} = 10^4$ subdivisions so that the evaluation of $\psi(\tilde{\mu})$ results in a vector $\psi_n$ for $n = 0,1,\dots,\mathcal{N}$.
This allows us to recast eqs.~(\ref{eq:psi_int_equations_cont}) \& (\ref{eq:psi_int_equations_cont2}) as coupled algebraic equations of the form $\psi(\tilde{\mu} < 1) = f_<[\psi](\tilde{\mu})$ and $\psi^2(\tilde{\mu} > 1) = f_>[\psi^2](\tilde{\mu})$, where the integrals are approximated using Simpson's Rule.
The goal is then to seek zeros of the residuals $g_<[\psi](\tilde{\mu} < 1) \equiv f_<[\psi](\tilde{\mu}) - \psi(\tilde{\mu})$ and $g_>[\psi^2](\tilde{\mu} > 1) \equiv f_>[\psi^2](\tilde{\mu}) - \psi^2(\tilde{\mu})$.
We proceed to iterate these equations using simple mixing with the rules
\begin{equation}\begin{split}
\psi^{(i+1)}(\tilde{\mu} < 1) &= \psi^{(i)}(\tilde{\mu}) + r g_<\left[\psi^{(i)}\right](\tilde{\mu}) \\
\psi^{2,(i+1)}(\tilde{\mu} > 1) &= \psi^{2,(i)}(\tilde{\mu}) + r g_>\left[\psi^{2,(i)}\right](\tilde{\mu})
\end{split}\, ,\end{equation}
where the mixing parameter $r$ takes on values in the interval $(0,1]$.
We iterate until the $L^2$ norm of the total residual, defined as
\begin{align}\label{eq:total_residual}
{\rm Res}\left[\psi_<^{(i)},\psi_>^{(i)}\right] &\equiv \int_0^1 {\rm d}\tilde{\mu} g^2_<\left[\psi^{(i)}\right](\tilde{\mu}) \notag \\
&\mkern+32mu + \int_1^{\tilde{P}} {\rm d}\tilde{\mu}\, g^2_>\left[\psi^{2,(i)}\right](\tilde{\mu}) \, ,
\end{align}
reaches a target value.
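As a concrete illustration, the following is a minimal sketch of this simple-mixing loop in Julia (the language of our supporting software\cite{dataset}); the functions \texttt{g\_less} and \texttt{g\_greater} are hypothetical stand-ins for the discretized residuals $g_<$ and $g_>$, and the discrete sums approximate eq.~(\ref{eq:total_residual}) up to the grid weight.
\begin{verbatim}
# Sketch of the simple-mixing fixed-point iteration. The residual
# functions g_less / g_greater are hypothetical stand-ins for the
# discretized g_< and g_> of the main text.
function iterate_mixing!(psi, psi2, g_less, g_greater;
                         r = 0.5, tol = 1e-10, maxiter = 10^6)
    for i in 1:maxiter
        gl = g_less(psi, psi2)      # residual of psi(mu < 1)
        gg = g_greater(psi, psi2)   # residual of psi^2(mu > 1)
        psi  .+= r .* gl            # simple mixing update
        psi2 .+= r .* gg
        res = sum(abs2, gl) + sum(abs2, gg)  # discrete total residual
        res < tol && return (i, res)
    end
    error("simple mixing did not converge")
end
\end{verbatim}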
In order to iterate these equations, the square root of eq.~(\ref{eq:psi_int_equations_cont2}) must be taken, leading to a sign ambiguity.
Per eq.~(\ref{eq:psi_area}), the maximum of the area function $\mathcal{A}$ occurs at $\psi = 0$, which occurs at a point $\tilde{\mu}^* = \tilde{\mu}(\hat{z}^*)$ in chemical potential coordinates.
Since the brush can pass through the maximum area at most once, the sign of $\psi(\tilde{\mu})$ is determined by the sign of $\tilde{\mu} - \tilde{\mu}^*$.
We use linear interpolation on $\psi^2$ to determine the location $\tilde{\mu}^*$ of any zeros and then assign the sign of the square root $\psi = \pm \sqrt{\psi^2}$ accordingly.
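A sketch of this sign-assignment step, assuming a grid vector \texttt{mu} and the current iterate \texttt{psi2} (which, as an unconstrained iterate, may dip below zero):
\begin{verbatim}
# Recover psi = sign(mu - mu*) * sqrt(|psi2|), locating mu* by linear
# interpolation across the sign change of the iterate psi2.
function signed_sqrt(mu, psi2)
    mustar = -Inf                    # no crossing: psi >= 0 everywhere
    for n in 1:length(psi2)-1
        if psi2[n] * psi2[n+1] < 0   # bracketed zero of psi2
            t = psi2[n] / (psi2[n] - psi2[n+1])
            mustar = mu[n] + t * (mu[n+1] - mu[n])
            break
        end
    end
    return sign.(mu .- mustar) .* sqrt.(abs.(psi2))
end
\end{verbatim}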
This method works well for $\alpha > 0$, consistently reaching total residuals of $10^{-10}$ or lower.
To survey solutions in region A, we performed sweeps in $\tilde{P}$ for $\alpha = 0.1,\, 0.2, \dots, 1.0$; the results of these sweeps are discussed in the next section.
Use of the iterative method on the above equations for $\alpha < 0$ (particularly in region B, where the magnitude $|K|$ is small) proves to be unstable, leading to rapidly diverging residuals.
This instability seems to be influenced by the negative values of $\alpha\sqrt{1-\tilde{\mu}}$ in eq.~(\ref{eq:psi_int_equations}).
It is possible to find stable solutions by instead choosing $\Psi \mapsto \Psi/\alpha$, resulting in a modified equation,
\begin{align}
\psi(\tilde{\mu} < 1) &= \sqrt{1-\tilde{\mu}} + \frac{2}{\pi}\Psi\arctan\sqrt{\frac{1-\tilde{\mu}}{\tilde{P}-1}} \notag \\
&\mkern+32mu + \frac{1}{\pi}\int_1^{\tilde{P}} {\rm d}\tilde{\mu}' \mathbbm{K}_<(\tilde{\mu},\tilde{\mu}')\psi(\tilde{\mu}') \, .
\end{align}
However, using values of $\alpha$ between $-0.1$ and $-10$, this only results in stably converging solutions in regions C and D; the iteration method still fails to find solutions in region B.
Interestingly, while the $\tilde{P} \rightarrow 1$ limit corresponds to vanishing mean curvature $\hat{H}$ for region A, consistent with the disappearance of the EEZ, the same limit corresponds to an increasingly large magnitude of Gaussian curvature $|\hat{K}|$ for regions C and D.
This suggests that while the size of the EEZ is largely dependent on a positive mean curvature $H>0$, it is also modulated by Gaussian curvature to the degree that a sufficiently negative value $K<0$ at fixed $H$ can suppress the EEZ.
We will show that this is indeed the case.
\subsection{Variational method}
In order to study solutions to eqs.~(\ref{eq:psi_int_equations}) \& (\ref{eq:psi_int_equations2}) in region B, we turn to a variational calculation, the goal of which is to determine forms of $\psi(\tilde{\mu})$ that minimize the total residual, eq.~(\ref{eq:total_residual}).
We approximate $\psi(\tilde{\mu})$ by a pair of degree-$d$ Bernstein polynomials,
\begin{equation}\begin{split}
\psi(\tilde{\mu} < 1) &= \sum_{n=0}^d a_n \beta_n^d(\tilde{\mu}) \\
\psi(\tilde{\mu} > 1) &= \sum_{n=0}^d b_n \beta_n^d\left(\frac{\tilde{\mu}-1}{\tilde{P}-1}\right)
\end{split}\, ,\end{equation}
where the Bernstein polynomials are defined as
\begin{equation}
\beta_n^d(t) = \frac{d!}{n!(d-n)!}(1-t)^{d-n}t^n \, ,
\end{equation}
and $a_n,b_n \in \mathbbm{R}$ are coefficients representing the degrees of freedom for the variational calculation.
We further restrict the space of functions by requiring continuity of $\psi(\tilde{\mu})$, so $b_0 = a_d$, and continuity of ${\rm d}\psi/{\rm d}\tilde{\mu}$, so $b_1 - b_0 = (\tilde{P}-1)(a_d - a_{d-1})$.
As a result, the variational calculation has $2d-2$ degrees of freedom.
This Bernstein basis is not only convenient due to its smoothness; it also guarantees that the function $\psi(\tilde{\mu})$ is well-behaved despite the weakly singular kernels $\mathbbm{K}_<$ and $\mathbbm{K}_>$ \cite{Jafarian2014}.
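A minimal Julia sketch of this ansatz is given below; only the two continuity constraints stated above are imposed, and the coefficient vectors \texttt{a} (length $d+1$) and \texttt{bfree} (length $d-1$) are assumed inputs supplied by the optimizer.
\begin{verbatim}
# Bernstein basis polynomial beta_n^d(t) = binom(d,n) (1-t)^(d-n) t^n.
bernstein(n, d, t) = binomial(d, n) * (1 - t)^(d - n) * t^n

# Two-piece variational ansatz for psi(mu); b_0 and b_1 are fixed by
# continuity of psi and dpsi/dmu at mu = 1, the rest of b is free.
function psi_ansatz(mu, a, bfree, P)
    d = length(a) - 1
    b = similar(a)
    b[1] = a[end]                                # b_0 = a_d
    b[2] = b[1] + (P - 1) * (a[end] - a[end-1])  # C^1 matching
    b[3:end] .= bfree
    if mu <= 1
        return sum(a[n+1] * bernstein(n, d, mu) for n in 0:d)
    else
        t = (mu - 1) / (P - 1)
        return sum(b[n+1] * bernstein(n, d, t) for n in 0:d)
    end
end
\end{verbatim}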
We numerically minimize the total residual with respect to these coefficients via a Nelder-Mead method, as implemented in the \texttt{Optim} package in the Julia programming language.
While this method generally leads to higher residuals than the iteration method, we do find reasonable solutions achieving residuals of $10^{-6}$ or smaller for $d = 16$ with a gradient tolerance of $10^{-12}$. However, we note that the performance of even this variational method is limited, such that achieving residuals at even this weaker tolerance becomes problematic for sufficiently large $|\alpha|$ (i.e.\ near the $K = 0$ line).
We performed sweeps of $\tilde{P}$ for $\alpha$ between $-0.1$ and $-3$.
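The minimization call itself is brief; below is a sketch using the \texttt{Optim} package, where \texttt{total\_residual} is a hypothetical closure mapping the packed free coefficients to the residual of eq.~(\ref{eq:total_residual}) and \texttt{c0} is an initial guess.
\begin{verbatim}
using Optim

# Nelder-Mead minimization of the total residual over the free
# Bernstein coefficients (total_residual and c0 are assumed inputs).
result = optimize(total_residual, c0, NelderMead(),
                  Optim.Options(g_tol = 1e-12, iterations = 10^5))
c_opt = Optim.minimizer(result)
\end{verbatim}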
\section{\label{sec:discussion} Results \& Discussion}
\begin{figure*}
\includegraphics[width=\textwidth]{figs/fig3.png}
\caption{\label{fig:3}
Summary of numerical studies after smoothing and interpolation, plotted as a function of dimensionless mean curvature $Hh$ and Gaussian curvature $Kh^2$. We extrapolate to the low curvature regime $Hh\rightarrow 0$ using a fitting function of the form $a\times{\rm exp}(-b/H)$ for contours of fixed $Kh^2$, where $a$ and $b$ are fitting parameters. Data shown for physical regions A, B, and C, the boundaries of which are denoted with dashed lines. (a) Fraction of exclusion zone height $z_{\rm ex}$ to brush height $h$. (b) Percent correction $\Delta \mathcal{F}$ to the parabolic brush free energy due to the inclusion of the end exclusion zone, plotted on log scale. (c) and (d) show relative EEZ size and percent free energy correction as a function of Gaussian curvature $Kh^2$ for fixed values of mean curvature $Hh$ ranging from $0.4$ to $2.0$.
}
\end{figure*}
A solution for $\psi(\tilde{\mu})$ for fixed values of $(\alpha,\tilde{P})$ fully encodes both the structure (stretching and free-end profiles) as well as thermodynamics of strongly-stretched, convex brushes.
The dimensionless mean curvature $\hat{H}$ is given by $\hat{H} = \alpha^{-1}(\psi(0)/\Psi - 1)$, from which we can find the dimensionless Gaussian curvature $\hat{K} = \alpha \hat{H}^2$.
The brush height $h$ is found through volume conservation: the volume of the flat brush, $V_{\rm fl} = A_0h_{\rm fl}$, must be the same as the volume of the curved brush, $V = \int_0^h {\rm d}z\, A(z) = A_0h(1 + \hat{H} + \hat{K}/3)$, so
\begin{equation}
h = h_{\rm fl}(1 + \hat{H} + \hat{K}/3)^{-1} .
\end{equation}
From this, we extract the height of the EEZ, $z_{\rm ex}/h = (\psi(1) - \Psi)/(\psi(0) - \Psi)$, as well as its chemical potential, $Q/Q_{\rm fl} = h^2/(\hat{H}\Psi h_{\rm fl})^2$.
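In code, this post-processing is a direct transcription of the relations above; a sketch, assuming the boundary values $\psi(0)$ and $\psi(1)$ have been read off a converged solution:
\begin{verbatim}
# Observables from a converged solution; psi0 = psi(0), psi1 = psi(1).
function brush_observables(psi0, psi1, Psi, alpha, h_fl)
    Hhat = (psi0 / Psi - 1) / alpha      # dimensionless mean curvature
    Khat = alpha * Hhat^2                # dimensionless Gaussian curvature
    h    = h_fl / (1 + Hhat + Khat / 3)  # height via volume conservation
    zex  = h * (psi1 - Psi) / (psi0 - Psi)   # EEZ height
    return (Hhat = Hhat, Khat = Khat, h = h, zex = zex)
end
\end{verbatim}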
We can calculate $g(z_0)$ by first inverting eq.~(\ref{eq:incompressibility}) via an Abel transform, yielding
\begin{equation}
g(\mu_0) = \sqrt{\frac{2 a^2}{3 \pi^2}}\frac{{\rm d}}{{\rm d}\mu_0}\int_0^{\mu_0} {\rm d}\mu \frac{\rho A(\mu)}{\sqrt{\mu_0 - \mu}}
\end{equation}
and then performing a change of variables, $g(z_0) = -({\rm d}\mu/{\rm d}z)_{z = z_0}g(\mu(z_0))$.
Finally, we find it convenient to define a scaled distribution function $\hat{g}(z_0) \equiv g(z_0)/\sigma_0$, normalized such that $\int_0^h {\rm d}z_0\,\hat{g}(z_0) = 1$.
In Fig.~\ref{fig:3}(a), we plot the end exclusion zone height $z_{\rm ex}/h$ as a function of dimensionless mean curvature $Hh$ and Gaussian curvature $Kh^2$ by interpolating the results of our numerical studies.
We find that the size of the EEZ grows monotonically with $Hh$ at fixed $Kh^2$.
However, this growth rate is strongly modulated by $Kh^2$.
Recall from Fig.~\ref{fig:2}(b) and Steiner's equation eq.~(\ref{eq:steiner}) that the area $A(z)$ available to brush monomers is controlled by mean curvature $H$ for small $z$ and by Gaussian curvature $K$ for large $z$.
The strong dependence of the EEZ size on $Hh$ for low-to-intermediate curvatures can be rationalized by the proximity of the EEZ boundary layer to the grafting substrate: $Hh$ sets the packing geometry of monomers \emph{local} to the EEZ.
However, the strong modulation by $Kh^2$ indicates that the packing of monomers at the brush end exerts \emph{non-local} control over the size of the EEZ.
At high curvatures, the Gaussian curvature plays the dominant role in determining the EEZ size, with the variations in $z_{\rm ex}/h$ with $Hh$ largely saturating in the $Hh \simeq 2$ regime, as seen in Fig.~\ref{fig:3}(c).
Here, the EEZ is large enough that it starts to ``feel'' the packing geometry at the brush end more acutely.
The numerical schemes outlined here do not work well in the limit of a nearly flat brush, where $Hh\rightarrow 0$ and $\tilde{P}\rightarrow 1$.
However, as shown in Appendix \ref{app:weak}, our equations are amenable to the same low-curvature analysis as presented by Belyi \cite{Belyi2004} and we recover the same scaling prediction $z_{\rm ex}/h \sim \exp\{-(2Hh_{\rm fl})^{-1}\} \sim \exp\{-[(2Hh)(1+Hh+Kh^2/3)]^{-1}\}$.
This scaling function not only fits our lowest-curvature data well and serves to extrapolate our data to $Hh\rightarrow 0_+$, but its variations with $Kh^2$ are also consistent with the trends observed in our data.
Interestingly, this scaling function is non-analytic in $Hh$ and thus does not permit a power series expansion about $Hh\rightarrow 0_+$.
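This non-analyticity follows from the standard fact that $\lim_{x \to 0^+} x^{-n} e^{-1/x} = 0$ for every integer $n \geq 0$: all derivatives of $\exp\{-(2Hh_{\rm fl})^{-1}\}$ vanish as $Hh \to 0_+$, so a would-be Taylor series about $Hh = 0$ is identically zero even though the function is not.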
As shown in Appendix \ref{app:fe}, the stretching free energy of a brush is given by
\begin{equation}\label{eq:fe}
F = \frac{\rho}{2}\int_0^P {\rm d}\mu\, z(\mu) A(\mu) = \frac{\rho}{2}\int_0^h {\rm d}z \Big(- z\frac{ {\rm d} \mu}{ {\rm d} z} \Big) A(z) \, .
\end{equation}
Note that the PBT result $\big(-z \frac{ {\rm d} \mu}{ {\rm d} z} \big) \propto z^2$ shows that the entropic free energy of occupying a height $z$ in the molten brush increases with that height~\cite{LikhtmanSemenov}.
The existence of the EEZ modifies the form of $\big(-\frac{ {\rm d} \mu}{ {\rm d} z} \big)$ and hence the relative free energy costs of occupying different heights from the anchoring surface are re-weighted with respect to the PBT.
To examine the effect of the EEZ on the free energy, we determine the free energy change relative to the PBT free energy, $\Delta\mathcal{F} \equiv (F-F_{\rm PBT})/F_{\rm PBT}$, which is plotted in Fig.~\ref{fig:3}(b).
While the contours of equal free energy correction follow a qualitatively similar trend to the EEZ height in Fig.~\ref{fig:3}(a), the difference in scale is dramatic, with free energy corrections $\lesssim 10^{-4}\%$ for an EEZ height of $\simeq 10\%$ of the brush height.
Gaussian curvature similarly modulates the size of the free energy correction, which decays for negative $K$, as shown in Fig.~\ref{fig:3}(d).
A low curvature approximation of $\Delta \mathcal{F}$ in Appendix \ref{app:weak} reveals a similar scaling with powers of ${\rm exp}\{-(2Hh_{\rm fl})^{-1}\}$, indicating that for low curvature, $\Delta \mathcal{F} \sim (z_{\rm ex}/h)^{\nu}$ with $\nu = 2$.
However, our numerical results are unable to resolve sufficiently low curvatures to demonstrate this scaling prediction.
As the stretching free energy corrections are remarkably small in the low curvature regime, with values of $\Delta \mathcal{F} \gtrsim 1\%$ only appearing for $Hh\gtrsim 0.5$, low-curvature EEZ corrections are unlikely to have a significant effect on brush thermodynamics.
Nevertheless, for relatively high curvatures, the EEZ correction to the free energy can be substantial.
To make these results accessible for future studies, we have included supporting software for interpolating EEZ results for arbitrary values of $Hh$ and $Kh^2$ in this range\cite{dataset}.
\begin{figure*}
\includegraphics[width=\textwidth]{figs/fig4.png}
\caption{\label{fig:4}Properties of brushes at fixed $Hh \simeq 0.5$, with $Kh^2\simeq 0.25$ representing a spherical brush, $Kh^2=0$ a cylindrical brush, $Kh^2 \simeq -0.25$ a region B brush, $Kh^2 \simeq -1.37$ and $-1.78$ two region C brushes, and $Kh^2 \simeq -2.01$ a brush near the C/D border. (a) Chemical potential function $\mu(z)$, normalized by the chemical potential at the grafting surface $\mu(0) = P$ for a subset of solutions for clarity (note: $Kh^2 \simeq -1.8$ not shown in this panel since it is indistinguishable from $Kh^2 \simeq -2$). The dashed curve corresponds to the PBT prediction for a concave cylindrical brush with $Hh=-1/2$. The end distribution function $\hat{g}(z)$ and polarization order parameter $p(z)$ for the same subset of solutions are plotted in (b) and (c). The inset of (c) is a magnified view of the polar order parameter as $z\to h$, showcasing the similarities in local chain packing between the concave cylindrical brush and the $Kh^2 \simeq -2$ brush.}
\end{figure*}
To put the magnitudes of free energy corrections in perspective, we can consider domains of AB diblock copolymer melts in their typical range of stability, which is determined by the volume fraction $f_A$ of block A relative to block B, as well as the segregation strength $\chi N$.
At finite $\chi N$, the equilibrium configurations of block copolymers are well-described by a variant of the self-consistent field theory (SCFT) \cite{Edwards1965,Helfand1975}.
Semenov\cite{Semenov1985} postulated that in the strong segregation limit ($\chi N \to \infty$), chains are strongly stretched, with each domain of the microphase separated block copolymer melt resembling a molten brush.
To establish that this strong-segregation theory (SST) is a rigorous asymptotic limit of SCFT, Likhtman and Semenov\cite{LikhtmanSemenov} determined finite-$\chi N$ corrections based on the translational entropy of chain ends and organization of chains at the AB-interface.
Matsen\cite{matsen_strong-segregation_2010} demonstrated that EEZ effects emerge in the high-$\chi N$ regime of SCFT calculations and that an accurate comparison with SST requires incorporating EEZ corrections to the stretching free energy.
This is particularly true when establishing the stability windows for cylinder ($0.11 \lesssim f_A \lesssim 0.30$) and sphere ($f_A \lesssim 0.11$) phases in the strong-segregation limit \cite{Fredrickson2006,matsen_strong-segregation_2010}.
The corresponding curvature ranges for these two phases are $1.0 \gtrsim Hh \gtrsim 0.4$ and $Hh \gtrsim 1.1$ \& $Kh^2 \gtrsim 1.2$, respectively.
Our results show an EEZ correction to the free energy of $1.37\% \gtrsim \Delta\mathcal{F}_{\rm cyl} \gtrsim 0.03\%$ and $\Delta\mathcal{F}_{\rm sph} \gtrsim 4.17\%$.
These free energy corrections are close to the values of $1.43\% \gtrsim \Delta\mathcal{F}_{\rm cyl} \gtrsim 0.04\%$ and $\Delta\mathcal{F}_{\rm sph} \gtrsim 4.04\%$ obtained by Matsen\cite{matsen_strong-segregation_2010} in a rather different approach to determining the EEZ corrections to PBT that allows free chain ends to have variable tension.
Network phases, such as the double-gyroid, have morphologies with local surface curvatures and domain thicknesses that vary with position over the intermaterial dividing surface between minority and matrix domains.
Taking estimates from ref.~\cite{Reddy2021}, the brush geometry for a double-gyroid varies from a locally flatter region near points of 3-fold symmetry (along $\left<110\right>$ directions) with $H h \simeq 0.1$ and $K h^2 \simeq 0$ to the most negatively curved points, at ``elbow''-like regions that pass through points of 2-fold symmetry (along $\left<110\right>$ directions) with $H h \simeq 0.2$ and $K h^2 \simeq -0.3$ (a region B brush).
Each of these regions has free energy corrections of at most $\lesssim 10^{-6}\%$, suggesting that EEZ-based corrections are minute for such double-gyroid morphologies, relative to cylindrical and spherical morphologies.
Nevertheless, given small free energy differences with competing phases, the extent to which the free energy corrections to the PBT alter the predicted phase boundaries in the strong-segregation ($\chi N \to \infty$) limit remains an open question.
To study how chains behave in each of the regions of interest, we consider solutions with $Hh\simeq 0.5$ and variable $Kh^2$ in Fig.~\ref{fig:4}, in particular how the extent of the EEZ changes.
As shown in Fig.~\ref{fig:4}(a) the chemical potential $\mu(z)$ approaches the PBT chemical potential, under suitable re-scaling by $h$ and $P$, as $Kh^2$ becomes increasingly negative, consistent with the observation that the brush free energy is well approximated by the PBT free energy in this regime.
Varying $Kh^2$ can be accomplished by either fixing $K$ and varying $h$ or vice-versa.
If $K$ is fixed to a negative value, and the brush height $h$ is increased, the size of the EEZ $z_{\rm ex}$ remains fixed but the fraction $z_{\rm ex}/h$ decreases, so that on the scale of the brush, the overall effect of the EEZ diminishes.
Conversely, if $h$ is fixed and $K$ becomes increasingly negative, the area growth due to positive $H$ is quickly overcome by the decrease in area due to negative $K$, decreasing the size $z_{\rm ex}$ of the EEZ.
Physically, as the end of the brush is confined to a smaller area, chain ends are depleted away from the end towards the middle of the brush, as shown in Fig.~\ref{fig:4}(b).
The result is a balance between packing constraints near the grafting substrate and near the free brush end; as the brush becomes more confined, the outward splay near the substrate is reduced.
Thus, the non-local nature of the integral equations describing curved brushes is reflected in the non-local influence on chain packing that the two ends of the brush exert on each other.
The forms of the chemical potential $\mu(z)$ and end distribution function $\hat{g}(z)$ obtained from our numerical studies of the brush equations are strikingly similar to the SCFT results of Matsen for cylindrical and spherical domain brushes\cite{matsen_strong-segregation_2010} in the limit of strong stretching, further supporting their validity.
Finally, in addition to depleting endpoints away from the grafting substrate, the EEZ has a strong influence on the orientation of segments in the brush.
To see this, consider the polar order parameter $\mathbf{p}(z)$, the average orientation of segments at point $z$ \cite{Fredrickson1992,Zhao2012,Prasad2017}.
Microscopically, the monomer orientation is tangent to the chain, i.e. $\hat{\mathbf{r}} = a^{-1}{\rm d}\mathbf{r}/{\rm d}n$, where the conformation of an individual chain is given by the parametric curve $\mathbf{r}(n)$.
For strongly stretched polymer brushes, the mean orientation of segments points along the normal direction $\mathbf{\hat{n}}$.
Fluctuations reduce the magnitude of the mean orientation, so the mean orientation of $\delta n$ monomers in a single chain is given by $\mathbf{\hat{n}}\delta z/(a\delta n)$, where $\delta z$ is the height interval spanned by $\delta n$ monomers.
Now consider averaging this quantity over all chains passing through a sample volume $\delta V$.
Once again, chain conformations are labeled by the location of the terminal end $z_0$ and the multiplicity of such chains is $\delta A \times g(z_0)$, where $\delta A$ is the cross-sectional area of the sample volume.
In general, such a volume contains $\delta n$ segments per chain.
However, we can choose the sample volume to enclose a single monomer on average, taking the limit $\delta n \rightarrow 1$ with $\delta V \simeq \delta A\times \delta z \rightarrow \rho^{-1}$.
In this limit, the polar order parameter is given by
\begin{equation}
\mathbf{p}(z) = \frac{\mathbf{\hat{n}}}{\rho a}\int_z^h{\rm d}z_0\,g(z_0) = \mathbf{\hat{n}}\frac{h_{\rm fl}}{Na}\int_z^h{\rm d}z_0\,\hat{g}(z_0)\, .
\end{equation}
Thus, averaging over all chains in a region is equivalent to summing the chain end distribution $\hat{g}(z_0)$ from $z$ to $h$.
Note that the upper bound on the polar order parameter is set by the ratio of the flat brush height to the total arclength of a chain $h_{\rm fl}/(Na)$.
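A sketch of this quadrature in Julia, assuming the scaled end distribution \texttt{ghat} is tabulated on a uniform grid \texttt{z} spanning $[0,h]$; with the normalization $\int_0^h {\rm d}z_0\,\hat{g}(z_0)=1$, the result attains $h_{\rm fl}/(Na)$ at $z=0$.
\begin{verbatim}
# p(z) = (h_fl / (N a)) * \int_z^h dz0 ghat(z0), via a cumulative
# trapezoidal rule integrated inward from z = h (where p = 0).
function polar_order(z::AbstractRange, ghat, h_fl, N, a)
    dz = step(z)
    p = zeros(length(ghat))
    for n in length(ghat)-1:-1:1
        p[n] = p[n+1] + 0.5 * dz * (ghat[n] + ghat[n+1])
    end
    return (h_fl / (N * a)) .* p
end
\end{verbatim}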
Consequently, polymer brushes locally have a zone of uniform, enhanced polar order in an EEZ, as shown in Fig.~\ref{fig:4}(c), which tapers off for $z > z_{\rm ex}$, ultimately depolarizing at the tips of the brush, i.e.~$p \to 0$ as $z\to h$.
We note that negatively curved brushes which approach the boundary between regions C and D have polar order parameters that reflect differences in their local geometry, i.e.~for small $z$ they exhibit splay geometry near the grafting surface, but for $z \to h$ they have convergent (concave) geometry, like the inside of a cylindrical domain.
Hence, such a brush (e.g.~the curve $H h \simeq 0.5$, $K h^2 \simeq -2$ in Fig.~\ref{fig:4}(c)) shows both a constant polarization zone in a narrow EEZ near the grafting surface and a polarization profile similar to an inwardly curved cylinder near its convergent tips, with $p'(z) \to 0$ as $z \to h$, as shown in the dashed curve for $H h = -1/2$.
\section{\label{sec:conclusions} Conclusions}
We have presented a form of the self-consistent brush equations implementing end-exclusion zone corrections that is applicable to substrates of arbitrary curvature.
In the process, we identified several regimes of interest and developed numerical methods to approximate solutions to the equations within those regimes.
These numerical solutions provide rigorous predictions for the size of the end-exclusion zone, as well as the exact strong-stretching free energy, chemical potential field, chain end distribution, and polar order parameter of the brush as a function of substrate mean and Gaussian curvatures for arbitrary convex brush shapes ($H h>0$).
These results show that the magnitude of the EEZ, as well as its thermodynamic correction to the PBT, decreases both with decreasing mean curvature (strictly vanishing as $Hh \to 0_+$) and with increasingly negative Gaussian curvature at fixed $H h > 0$.
Moreover, we have demonstrated that the end-exclusion zone corrections to free energy predicted by the parabolic brush theory are non-analytic.
Consequently, while the polymer brush free energy can be well approximated by a Helfrich-style free energy for low curvature, end-exclusion zone effects cannot be added in as higher order polynomial terms in $H$ and $K$, counter to the usual expectations for structured fluid membranes \cite{MilnerBendingModuli1988,MilnerMacromolecules1994,Birshtein2008,Lei2015}.
We have shown that brush splay not only affects the distribution of free ends within the brush, but also leads to anomalous constant polarization of segments within the EEZ.
While we have neglected a discussion of solvated brushes, the results presented here can be adapted to their study.
Bringing a polymer brush into equilibrium with a solvent bath relaxes the space-filling constraint on the monomer distribution, altering eq.~(\ref{eq:incompressibility}) so that the local monomer density is $\rho A(z)\phi(z)$, where $\phi(z)$ is the volume fraction of monomers.
The volume fraction $\phi(z)$ is determined by the local chemical potential through the constitutive relation $\mu(z) \propto \phi(z)$ for marginal solvents (or $\propto \phi^2(z)$ for $\theta$-solvents) \cite{degennes_scaling}.
Solvated brushes are thus swollen relative to molten brushes, depressing the relative height of the EEZ $z_{\rm ex}/h$.
Moreover, the monomer volume fraction $\phi$ is minimal at the free surface of the brush since it is in osmotic equilibrium with the surrounding solvent bath.
Consequently, chain ends should be depleted from the free surface, altering $g(z)$ such that the maximum density of chain ends occurs in the bulk of the brush (see, e.g.~ref\cite{Belyi2004}).
While we have addressed the problem of general curvature for brushes of homogeneous, monodisperse flexible polymers, the lessons learned here prompt additional questions about more complex brushes and suggest routes towards engineering chain statistics.
The brushes we have studied have three operative length scales that control chain statistics: two radii of curvature and the brush height.
Notably, the appearance of an EEZ alters the shape of $\mu(z)$ and thereby re-weights the local free energy cost (per unit volume) of segments in molten brushes at a given distance from the grafting surface.
We note that similar effects are predicted for flat brushes composed of bidisperse chain lengths \cite{MWC1989,witten_two-component_1989}.
The behavior of polydisperse curved brushes is an open question: in particular, whether the intrinsic stratification of different chain lengths in polydisperse brushes enhances, or instead suppresses, the geometric tendencies that drive EEZs in monodisperse convex brushes, and how the combination of these two effects alters the thermodynamic sensitivity of brushes to curvature.
\begin{acknowledgments}
The authors are grateful to A.~Reddy and T.~Witten for valuable discussion and comments. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Award DE-SC0014599.
\end{acknowledgments}
\section{Appendices}
\section{\Large Supplemental Material}\label{sec:Appendix}
\subsection{Coupled pair of PDEs for the KN perturbations}\label{sec:Appendix1}
The uniqueness theorems \cite{Robinson:2004zz,Chrusciel:2012jk} state that the Kerr-Newman (KN) black hole (BH) is the unique and most general family of stationary, asymptotically flat BHs of Einstein-Maxwell theory. It is characterised by 3 parameters: mass $M$, angular momentum $J\equiv M a$ and charge $Q$. The Kerr, Reissner-Nordstr\"om (RN) and Schwarzschild (Schw) BHs constitute limiting cases: ${Q=0}$, $a=0$ and $Q=a=0$, respectively.
The gravitational and Maxwell fields of the KN BH in Boyer-Lindquist coordinates are given by \cite{Newman:1965my,Adamo:2014baa}
\begin{eqnarray}\label{KNsoln}
ds^2&=&-\frac{\Delta}{\Sigma} \left(\mathrm{d} t-a \sin^2\theta \mathrm{d} \phi \right)^2+\frac{\Sigma }{\Delta }\,\mathrm{d} r^2 + \Sigma \,\mathrm{d} \theta^2
+ \frac{\sin ^2\theta}{\Sigma }\left[\left(r^2+a^2\right)\mathrm{d} \phi -a \mathrm{d} t \right]^2, \nonumber\\
A&=& \frac{Q \,r}{\Sigma}\left(\mathrm{d} t-a \sin^2\theta \mathrm{d} \phi \right),
\end{eqnarray}
with $\Delta = r^2 -2Mr+a^2+Q^2$ and $\Sigma=r^2+a^2 \cos^2\theta$.
Linear gravito-electromagnetic perturbations about the KN background are more easily addressed in the Newman-Penrose (NP) formalism \cite{Newman:1961qr}.
In the context of this formalism there is a well-known set of NP scalars built out of contractions of the NP tetrad with the Weyl tensor ({\it e.g.,}\ $\Psi_2$, $\Psi_3$ and $\Psi_4$) or with the Maxwell field strength ({\it e.g.,}\ $\Phi_1$ and $\Phi_2$) \cite{Chandra:1983,Stephani:2003tm}. Out of these, one can construct two
{\it gauge invariant} perturbed quantities, {\it i.e.\,}\ quantities that are invariant under both linear diffeomorphisms and tetrad rotations, namely \cite{Dias:2015wqa}:
\begin{eqnarray}\label{gauging}
&&\psi_{-2}= \left(\bar{r}^*\right)^4 \Psi_4^{(1)}, \nonumber\\
&& \psi_{-1}=\frac{\left(\bar{r}^*\right)^3}{2\sqrt{2}\Phi_1^{(0)}} \left(2\Phi_1^{(0)}\Psi_3^{(1)} -3 \Psi_2^{(0)}\Phi_2^{(1)}\right),
\end{eqnarray}
with $\bar{r} = r+ia\cos \theta$. Here, NP scalars with superscript $^{(0)}$ refer to scalars in the KN background and the superscript $^{(1)}$ to first order perturbations of the scalar.
These NP scalars \eqref{gauging} are the ones relevant for the study of perturbations that are outgoing at future null infinity and regular at the future horizon \footnote{There is a set of two coupled PDEs---related to \eqref{gauging} by a Geroch-Held-Penrose \cite{Geroch:1973am} transformation---for the quantities $\psi_{2}$ and $\psi_{1}$ that are the positive spin counterparts of \eqref{gauging}; however these would be relevant if we were interested in perturbations that were outgoing at past null infinity.}.
Ref. \cite{Dias:2015wqa} derived a set of two coupled partial differential equations (PDEs) for $\psi_{-2}$ and $\psi_{-1}$ that describe the most general perturbations (except for trivial modes that shift the parameters of the solution) of a KN BH, namely:
\begin{eqnarray}\label{ChandraEqsAppendix}
&& \left(\mathcal{F}_{-2}+ Q^2 \mathcal{G}_{-2}\right) \psi_{-2} + Q^2 \mathcal{H}_{-2} \psi_{-1} =0 \,, \nonumber \\
&& \left(\mathcal{F}_{-1} +Q^2 \mathcal{G}_{-1}\right)\psi_{-1} + Q^2 \mathcal{H}_{-1} \psi_{-2}=0 \,,
\end{eqnarray}
where the second order differential operators $\{\mathcal{F},\mathcal{G},\mathcal{H}\}$ are given by \cite{Dias:2015wqa}
\begin{eqnarray}\label{def:opsFGH}
\mathcal{F}_{-2}&=&\Delta\mathcal{D}_{-1}^\dagger\mathcal{D}_0 +\mathcal{L}_{-1}\mathcal{L}_2^\dagger -6i \omega\bar{r} \,,
\nonumber \\
\mathcal{G}_{-2}&=&\Delta \mathcal{D}_{-1}^\dagger \alpha_-\bar{r}^*\mathcal{D}_0 -3\Delta \mathcal{D}_{-1}^\dagger \alpha_-
- \mathcal{L}_{-1}\alpha_+ \bar{r}^* \mathcal{L}_2 ^\dagger +3 \mathcal{L}_{-1} \alpha_+ i a \sin \theta \,,
\nonumber \\
\mathcal{H}_{-2}&=&-\Delta \mathcal{D}_{-1}^\dagger \alpha_- \bar{r}^* \mathcal{L}_{-1} -3 \Delta \mathcal{D}_{-1}^\dagger \alpha_- i a \sin \theta
-\mathcal{L}_{-1} \alpha_+ \bar{r}^* \Delta \mathcal{D}_{-1}^\dagger -3\mathcal{L}_{-1} \alpha_+ \Delta \,,
\nonumber \\
\mathcal{F}_{-1}&=&\Delta\mathcal{D}_1\mathcal{D}_{-1}^\dagger +\mathcal{L}_2^\dagger\mathcal{L}_{-1}-6i \omega\bar{r} \,,
\\
\mathcal{G}_{-1}&=& - \mathcal{D}_0 \alpha_+ \bar{r}^* \Delta \mathcal{D}_{-1}^\dagger -3 \mathcal{D}_0 \alpha_+ \Delta
+\mathcal{L}_2^\dagger \alpha_- \bar{r}^* \mathcal{L}_{-1} +3 \mathcal{L}_2^\dagger \alpha_- i a\sin\theta \,,
\nonumber \\
\mathcal{H}_{-1}&=& -\mathcal{D}_0 \alpha_+ \bar{r}^* \mathcal{L}_2^\dagger +3 \mathcal{D}_0 \alpha_+ i a \sin \theta
-\mathcal{L}_2^\dagger \alpha_- \bar{r}^* \mathcal{D}_0 +3 \mathcal{L}_2^\dagger \alpha _- \,,
\nonumber
\end{eqnarray}
with
$\alpha_\pm \equiv \left[3(\bar{r}^2M-\bar{r} Q^2)\pm Q^2\bar{r}^*\right]^{-1}$, and we introduced the radial and angular Chandrasekhar operators \cite{Chandra:1983},
\begin{eqnarray}\label{def:DL}
&& \mathcal{D}_j = \partial_r+\frac{i K_r}{\Delta}+2j\frac{(r-M)}{\Delta}, \quad K_r=am-(r^2+a^2)\omega; \nonumber \\
&& \mathcal{L}_j = \partial_\theta+K_{\theta}+j\cot\theta, \quad K_{\theta}=\frac{m}{\sin\theta}-a\omega\sin\theta.
\end{eqnarray}
The complex conjugate of these operators, namely $\mathcal{D}_j^\dagger $ and $\mathcal{L}_j^\dagger$, can be obtained from $\mathcal{D}_j$ and $\mathcal{L}_j$ via the replacement $K_r \rightarrow - K_r$ and $K_{\theta} \rightarrow - K_{\theta}$, respectively.
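Explicitly, carrying out these replacements in \eqref{def:DL} gives
\begin{align}
\mathcal{D}_j^\dagger = \partial_r-\frac{i K_r}{\Delta}+2j\frac{(r-M)}{\Delta}, \qquad \mathcal{L}_j^\dagger = \partial_\theta-K_{\theta}+j\cot\theta. \nonumber
\end{align}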
Note that, upon fixing a gauge in which $\Phi_{0}^{(1)}=\Phi_{1}^{(1)}=0$, \eqref{ChandraEqsAppendix} reduces to the Chandrasekhar coupled PDE system \cite{Chandra:1983} (see also the derivation in \cite{Mark:2014aja}). Finally, note that in the limit $Q\rightarrow 0$ the equations \eqref{ChandraEqsAppendix} decouple, yielding the familiar Teukolsky equation for Kerr \cite{Teukolsky:1972my}.
Since $\partial_t,\partial_\phi$ are Killing vector fields of KN, we can Fourier decompose the perturbations $\{\psi_{-2},\psi_{-1}\}$ as $e^{-i \omega t} e^{i m \phi}$. This introduces the frequency $\omega$ and azimuthal quantum number $m$ of the perturbation. The $t -\phi$ symmetry of the KN BH allows us to consider only modes with Re$(\omega)\geq 0$, as long as we study both signs of $m$.
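Concretely, this decomposition reads $\psi_{s}(t,r,\theta,\phi) = e^{-i \omega t + i m \phi}\, \psi_{s}(r,\theta)$ for $s=-2,-1$.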
Then, to solve the coupled PDEs \eqref{ChandraEqsAppendix}, we need to impose physical boundary conditions (BCs).
At spatial infinity, a Frobenius analysis of \eqref{ChandraEqsAppendix} that allows only outgoing waves yields the decay:
\begin{equation}\label{BC:inf}
\psi_{s}{\bigl |}_\infty \!\simeq\!e^{i \omega r} r^{-(2s+1)+i \omega \,\frac{r_+^2+a^2+Q^2}{r_+}} \!\! \left( \!\! \alpha_{s}(\theta)+\frac{\beta_{s}(\theta)}{r}+\cdots\!\!\right), \nonumber
\end{equation}
where $s=-2,-1$, and $\beta_{s}(\theta)$ is a function of $\alpha_{s}(\theta)$ and its derivative fixed by expanding \eqref{ChandraEqsAppendix} at spatial infinity.
At the horizon, a Frobenius analysis whereby we require only regular modes in ingoing Eddington-Finkelstein coordinates, yields the expansion
\begin{equation}\label{BC:H}
\psi_{s}{\bigl |}_H \!\simeq\! \left(r-r_+\right)^{-s-\frac{i (\omega -m\Omega_H)}{4 \pi T_H}}[ a_{s}(\theta)+ b_{s}(\theta)(r-r_+) +\cdots ], \nonumber
\end{equation}
where $b_{s}(\theta)$ is a function of $a_s(\theta)$ and its derivative.
At the North (South) pole $x\equiv \cos\theta =1\,(-1)$, regularity dictates that the fields must behave as
($\varepsilon = 1$ for $|m|\geq 2$, while $\varepsilon = -1$ for $|m|=0,1$ modes)
\begin{equation}\label{BC:N}
\psi_{s}{\bigl |}_{\hbox{\tiny N,(S)}} \hspace{-1pt} \simeq (1\mp x)^{\varepsilon^{\frac{1\pm 1}{2}} \frac{s+|m|}{2}} \hspace{-1pt} \left[ A^{\pm}_{s}(r)+B^{\pm}_{s}(r)(1\mp x)+\cdots \right], \nonumber
\end{equation}
where $B^{+}_{s}(r)$($B^{-}_{s}(r)$) is a function of $A^{+}_{s}(r)$($A^{-}_{s}(r)$) and its derivatives along $r$, whose exact form is fixed by expanding \eqref{ChandraEqsAppendix} around the North (South) pole.
\subsection{WKB coefficients for the separation constant $\lambda_2$}\label{sec:NHappendix}
At extremality, the modes with slowest decay rate (independently of belonging to the NH or PS families) always approach $\mathrm{Im}\,\tilde{\omega}=0$ and $\mathrm{Re}\,\tilde{\omega}=m \tilde{\Omega}_H^{\hbox{\footnotesize{ext}}}$ and \eqref{NH:freq} of the main text provides an excellent approximation to their frequency in an expansion off-extremality (as analysed in the discussion of Fig.~\ref{Fig:spectraFix-a} of the main text). The derivation of the analytical approximation \eqref{NH:freq} of the main text is quite long and thus we will present it in the companion manuscript \cite{ExtendedQNMsKN}.
In \eqref{NH:freq} of the main text, the separation constant $\lambda_2$ has a WKB expansion for large $m$, as given in Eq. \eqref{NH:lambda2wkb} of the main text. The associated WKB coefficients
are:
\begin{subequations}\label{NH:WKBansatzCoef}
\begin{align}
&\lambda_{2,0}=4 \left(1-4 \hat{a} ^2\right),\qquad
\lambda_{2,1}= -4 \left(1+\hat{a}^2\right) \left(2 \sqrt{1-\hat{a} ^2}-\sqrt{1+2 \hat{a} ^2}\right),
\\
&\lambda_{2,2}= \frac{3 \sqrt{1-\hat{a} ^2} \left(1+\hat{a}^2\right)^2 \left(3-726 \hat{a} ^{10}-253 \hat{a} ^8+128 \hat{a} ^6-74 \hat{a} ^4-50 \hat{a} ^2\right)}{\left(1+2\hat{a}^2\right) \left[\left(66 \hat{a} ^6-5 \hat{a} ^4-12 \hat{a} ^2+5\right) \sqrt{1-\hat{a} ^2}+4 \left(1-\hat{a} ^4\right) \sqrt{2 \hat{a} ^2+1}\right]},
\tag{\stepcounter{equation}\theequation} \\
&\lambda_{2,3}=
\bigg[
4 \left(1+2\hat{a}^2\right)^{7/2}\bigg(
578577650112 \hat{a} ^{40}-338129795520 \hat{a} ^{38}-1042453021104 \hat{a} ^{36}+1170932108544 \hat{a} ^{34}
\nonumber \\
&\hspace{0.8cm} +243872180244 \hat{a} ^{32}-1092788709804 \hat{a} ^{30}+457571937931 \hat{a} ^{28}+286639850738 \hat{a} ^{26}-371225227587 \hat{a} ^{24}
\nonumber \\
&\hspace{0.8cm}
+75821376048 \hat{a} ^{22}+83823143199 \hat{a} ^{20}-64522516578 \hat{a} ^{18}+5397537793 \hat{a} ^{16}+11870759300 \hat{a} ^{14}-5939331087 \hat{a} ^{12}
\nonumber \\
&\hspace{0.8cm}
+15670254 \hat{a} ^{10}+798959271 \hat{a} ^8-269248008 \hat{a} ^6-8868395 \hat{a} ^4+20327618 \hat{a} ^2-4782969
\bigg)
\nonumber \\
&\hspace{0.8cm}
+4 \sqrt{1-\hat{a} ^2} \left(1+2\hat{a}^2\right)^3\bigg(
661231600128 \hat{a} ^{40}-788969522880 \hat{a} ^{38}-475886378880 \hat{a} ^{36}+1029138506352 \hat{a} ^{34}
\nonumber \\
&\hspace{0.8cm}
-630648141552 \hat{a} ^{32}-452699156052 \hat{a} ^{30}+658166339168 \hat{a} ^{28}-186975958943 \hat{a} ^{26}-249892000005 \hat{a} ^{24}
\nonumber \\
&\hspace{0.8cm}
+178743692406 \hat{a} ^{22}-3249242106 \hat{a} ^{20}-56479482309 \hat{a} ^{18}+20902690721 \hat{a} ^{16}+3663601312 \hat{a} ^{14}-5845481340 \hat{a} ^{12}
\nonumber \\
&\hspace{0.8cm}
+1100552199 \hat{a} ^{10}+410656173 \hat{a} ^8-279409506 \hat{a} ^6+19829366 \hat{a} ^4+13153165 \hat{a} ^2-4782969
\bigg)\bigg]^{-1}
\nonumber \\
&\hspace{0.8cm}
\bigg[
3 \hat{a} ^2 \sqrt{1-\hat{a} ^2} \left(1+\hat{a}^2\right)^3 \sqrt{2 \hat{a} ^2+1} \bigg(
90588729217536 \hat{a} ^{46}+93586813404480 \hat{a} ^{44}-64234642488192 \hat{a} ^{42}
\nonumber \\
&\hspace{0.8cm}
-54181551934224 \hat{a} ^{40}+14733709326864 \hat{a} ^{38}-34708141099764 \hat{a} ^{36}-8979094220672 \hat{a} ^{34}+34432474064505 \hat{a} ^{32}
\nonumber \\
&\hspace{0.8cm}
-10922161747605 \hat{a} ^{30}-23041644949212 \hat{a} ^{28}+5136927583340 \hat{a} ^{26}+4733507876355 \hat{a} ^{24}-3578226571619 \hat{a} ^{22}
\nonumber \\
&\hspace{0.8cm}
-898929274206 \hat{a} ^{20}+753565243446 \hat{a} ^{18}-135077374365 \hat{a} ^{16}-174223122235 \hat{a} ^{14}+33089919120 \hat{a} ^{12}
\nonumber \\
&\hspace{0.8cm}
+8380363168 \hat{a} ^{10}-9890782275 \hat{a} ^8-803782461 \hat{a} ^6+541670718 \hat{a} ^4-148272034 \hat{a} ^2-57395628 \bigg)
\nonumber \\
&\hspace{0.8cm}
+3 \hat{a} ^2 \left(1+\hat{a}^2\right)^3 \bigg(
158530276130688 \hat{a} ^{48}+192260601732672 \hat{a} ^{46}-226279077675552 \hat{a} ^{44}
\nonumber \\
&\hspace{0.8cm}
-257580189150768 \hat{a} ^{42}+238634465705064 \hat{a} ^{40}+187478664334236 \hat{a} ^{38}-167948153974214 \hat{a} ^{36}
\nonumber \\
&\hspace{0.8cm}
-79050787933609 \hat{a} ^{34}+69165996968940 \hat{a} ^{32}+1562277529575 \hat{a} ^{30}-26149776558142 \hat{a} ^{28}
\nonumber \\
&\hspace{0.8cm}
+6310859786413 \hat{a} ^{26}+3820171951948 \hat{a} ^{24}-4424582883901 \hat{a} ^{22}-417658252182 \hat{a} ^{20}+868831525263 \hat{a} ^{18}
\nonumber \\
&\hspace{0.8cm}
-249677209480 \hat{a} ^{16}-170706582299 \hat{a} ^{14}+47404470046 \hat{a} ^{12}+4708012127 \hat{a} ^{10}-10932078636 \hat{a} ^8-398469675 \hat{a} ^6
\nonumber \\
&\hspace{0.8cm}
+532105820 \hat{a} ^4-176969858 \hat{a} ^2-57395628
\bigg)\bigg].
\end{align}
\end{subequations}
The derivation of \eqref{NH:lambda2wkb} of the main text and of \eqref{NH:WKBansatzCoef} is again long and will be given in the companion manuscript \cite{ExtendedQNMsKN}. There, we also show that this WKB expansion provides an excellent approximation already for $m=10$ and a good approximation even for $m=2$.
\end{appendix}
\section{Introduction}
Information loss in black hole evolution is one of the longest-running controversies in theoretical physics \cite{h:76,wald:01,rev-0,rev-00,rev-1,rev-12,rev-12marolf,coy:15,info:21,SFA:21,bfm:21}. Its essence is captured by the following scenario: according to distant observers, matter collapsing into a black hole completely evaporates via Hawking radiation within a finite time. If quantum correlations between the inside and outside of the black hole horizon are not restored during the evaporation, this evolution of low-entropy collapsing matter into high-entropy radiation implies information loss. This problem is referred to as a paradox because a combination of information-preserving theories --- quantum field theory and general relativity (GR) --- ostensibly leads to a loss of information \cite{bmt:17}.
Its status as a paradox, the necessity and/or validity of particular resolutions and their implications for a putative theory of quantum gravity or the fundamental structure of quantum theory are not the subject of our discussion here. Instead, we focus on the consequences of its formulation within the framework of semiclassical gravity. In common with the paradoxes of quantum mechanics, the information loss problem combines classical and quantum elements and some counterfactual reasoning. In this paper, we consider the physical and mathematical consequences of having the necessary elements for its formulation realized.
We find that the conditions required for the formulation of the paradox (in contrast to its resolution) cannot be realized without significant modifications of the late-time black hole radiation, which is considered to be one of the most established results of quantum field theory in curved spacetime. The key technical findings that we report are the discordant properties of generalizations of surface gravity. As a result, we conclude that, while gravitational collapse and gravitationally-induced radiation contain several important physical questions, including matter-gravity correlations, observability of various horizons, and the applicability of semiclassical physics, the standard formulation of apparent loss of information cannot consistently be made in the context of semiclassical gravity. Consequently, if the paradox cannot be self-consistently formulated in the best tested framework we currently have available, this suggests that its various proposed resolutions should be reappraised.
We first note that the setting for the formulation of the information loss problem involves at least the following:
\begin{enumerate}
\item Formation of a transient trapped region. Such a region either completely disappears or turns into a stable remnant; in either case, this takes place in finite time as measured by a distant observer Bob. This provides the scattering-like setting to describe the states (and their alleged information content) ``before'' and ``after''.
\item Formation of an event horizon (and not just any other special surface). Its existence is necessary to provide an objective, observer-independent separation of the spacetime into accessible and inaccessible regions, and it is only with respect to this boundary that tracing out of the interior degrees of freedom is not just a technical limitation (akin to our inability to recover correlations between the smoke and information that was contained in the proverbial burned encyclopedia), but a fundamental physical restriction \cite{wald:01,vis:08}.
\item Thermal or nearly-thermal character of the radiation. It is responsible for the eventual disappearance of the trapped region and for the high entropy of the reduced exterior density operator.
\end{enumerate}
Additional assumptions may or should be made to enable a particular formulation of the paradox, but the triad of finite lifetime, event horizon, and temperature are the ineluctable components of the paradox's formulation.
The logical framework of our result is as follows: the existence of a transient trapped region implies its formation at some finite time $t_\mathrm{S}$ as measured by Bob (Sec.~\ref{sec:prerequisites}). Together with the minimal regularity assumption (finiteness of all curvature scalars that are obtained as polynomial invariants of the Riemann tensor at the apparent horizon), this constrains the possible spherically symmetric geometries enough to prescribe a unique formation scenario (Sec.~\ref{horizon}) that requires us to generalize the notion of surface gravity. Fig.~\ref{schema1} schematically represents the geometry that underpins the paradox.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{ParadoxFig_v3.pdf}
\caption{
Schematic depiction of collapse into and complete evaporation of a black hole as outlined in Ref.~\cite{h:76}. Spacetime regions corresponding to PBH and MBH solutions are indicated by arrows. Different proposals for the spacetime structure after the singular corner point that corresponds to complete evaporation appear in various resolutions of the information loss paradox (see, e.g.\ Refs.~\cite{rev-0,rev-00,rev-1,coy:15,bfm:21}). Finite formation time $t_\mathrm{S}$ of the apparent horizon according to a distant observer is a necessary condition for the formulation of the paradox. Part of the equal time surface $\Sigma_{t_\mathrm{S}}$ is shown as a dashed purple line. The outer apparent horizon $r_\textsl{g}(t)$ and the inner apparent horizon form the boundary of a PBH and are shown in blue. The apparent horizon $r_\textsl{g}(t)$ is a timelike hypersurface during its entire existence \cite{bmmt:19}. Bob's trajectory is indicated by the green curve. The collapsing matter and its surface are shown as in conventional depictions of the collapse. However, the matter in the vicinity of the outer apparent horizon $\big(t,r_\textsl{g}(t)\big)$ violates the NEC for $t\geqslant t_\mathrm{S}$. Moreover, the energy density, pressure, and flux as seen by an infalling observer Alice vary continuously across it, and the equation of state dramatically differs from that of normal matter that may have been used to model the initial EMT of the collapse (see Sec.~\ref{horizon} and Ref.~\cite{mut:21} for details).
}
\label{schema1}
\end{figure}
Dynamical black hole spacetimes do not possess a timelike Killing field, and thus require different methods to define the surface gravity. The literature contains several possible definitions that are broadly classified according to which of the (equivalent in the stationary case) properties of surface gravity they are related. The results serve as analogs of the Hawking temperature, which they approach in a suitable limit. Under quite general conditions these classes provide close values for their respective quantities. However, we will show that these conditions are not satisfied if the apparent horizon is formed at finite $t_\mathrm{S}$. Consequently, these values differ significantly, and none of them can approach the Hawking temperature $1/4M$ without violating the semiclassical luminosity relation $L\propto M^{-2}$ as we shall demonstrate (Sec.~\ref{kappaT}).
Our article is organized as follows: in the next section we review the assumptions of semiclassical black hole physics. We restrict our discussion to spherical symmetry. Then, we translate the necessary requirements for the formulation of the information loss problem into conditions on self-consistent solutions of the Einstein equations. In Sec.~\ref{horizon}, we summarize the properties of these solutions, emphasizing the near-horizon geometry and the unique scenario of black hole formation. In Sec.~\ref{idUx}, we identify the leading terms in the self-consistent metric using a general evaporation law. Sec.~\ref{kappaT} outlines the consequences of this identification; we demonstrate that the two natural candidates for the Hawking temperature, when evaluated for the configurations of Sec.~\ref{horizon}, disagree with each other and cannot be reconciled with the standard semiclassical result without contradicting the results of Sec.~\ref{idUx}.
We use the $(-+++)$ signature of the metric and work in units where $\hbar=c=G=k_B=1$. Derivatives of a function of a single variable are marked with a prime: $r_\textsl{g}'(t)\equiv dr_\textsl{g}/dt$, $r_+'(v)\equiv dr_+/dv$, etc. Derivatives with respect to the proper time $\sigma$ are denoted by the dot, $\dot r=dr/d\sigma$.
\section{Prerequisites for the paradox} \label{sec:prerequisites}
We work in semiclassical gravity. That means we use classical notions (horizons, trajectories, etc.) and describe dynamics via the Einstein equations $G_{\mu\nu}=8\pi T_{\mu\nu}$, where the standard Einstein tensor on the left-hand side is equated to the expectation value of the renormalized energy-momentum tensor (EMT), $T_{\mu\nu} = \langle \hat T_{\mu\nu} \rangle_\omega$ \cite{hv:book,pp:09,bmt:18}. The quantum state $\omega$ represents both the collapsing matter and the created excitations of the quantum fields.
A general spherically symmetric metric in Schwarzschild coordinates is given by
\begin{align}
ds^2=-e^{2h(t,r)}f(t,r)dt^2+f(t,r)^{-1}dr^2+r^2d\Omega, \label{sgenm}
\end{align}
where $r$ is the areal radius \cite{he:book,c:book}. The function $f(t,r)=1-C(t,r)/r$ is coordinate-independent. The Misner\textendash{}Sharp (MS) mass \cite{ms,faraoni:b,aphor} $C(t,r)/2$ is invariantly defined via
\begin{align}
1-C/r \defeq \partial_\mu r \partial^\mu r \; . \label{defMS}
\end{align}
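For example, for the Schwarzschild solution $h\equiv 0$ and $C=2M$, so that $f=1-2M/r$ and the MS mass $C/2$ coincides with the Schwarzschild mass $M$ everywhere.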
The same geometry can be described using the advanced null coordinate $v$ as
\begin{align}
ds^2=-e^{2h_+}\left(1-\frac{C_+}{r}\right)dv^2+2e^{h_+}dvdr +r^2d\Omega. \label{lfv}
\end{align}
Invariance of the MS mass implies $C_+(v,r)=C\big(t(v,r),r\big)$, while the functions $h_+(v,r)$ and $h(t,r)$ are the integrating factors in various coordinate transformations, such as
\begin{align}
dt=e^{-h}(e^{h_+}dv- f^{-1}dr). \label{intfu}
\end{align}
The study of null geodesics and their congruences is one of the principal tools of black hole physics. Assuming spherical symmetry, radial null geodesics are determined in Schwarzschild coordinates as the solutions of
\begin{align}
\frac{dr}{dt}=\pm e^{h}f, \label{nullg}
\end{align}
where for $f>0$ the upper sign corresponds to an outgoing geodesic. In $(v,r)$ coordinates the ingoing geodesics correspond to $v=\mathrm{const}$, and the outgoing geodesics satisfy
\begin{align}
\frac{dr}{dv}=\tfrac{1}{2}\,{e^{h_+}f}. \label{eq:vr-out-geod}
\end{align}
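For the Schwarzschild geometry ($h_+=0$, $C_+=2M$), Eq.~\eqref{eq:vr-out-geod} reduces to the familiar $dr/dv=\tfrac{1}{2}(1-2M/r)$ of ingoing Eddington\textendash{}Finkelstein coordinates.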
We assume that the spacetime is asymptotically flat and $t$ is the physical time of a distant observer (Bob) to simplify the exposition, though we emphasize that it is not necessary to assume any particular structure at infinity to derive the results we present in the next section.
A future event horizon \cite{he:book,c:book,faraoni:b,fn:book} is a causal boundary separating the domain of outer communication (the region from which it is possible to send signals to any future asymptotic observer) from the rest of spacetime (regions where this is not possible). However, determining its existence requires knowledge of the entire history of spacetime (and therefore also of its arbitrarily far future) \cite{fn:book,faraoni:b,vis:08,mV:2014}. In what follows we refer to the causally disconnected spacetime domain as a {\it mathematical black hole} (MBH) \cite{c:book,frolov-def}.
A much more practical and useful definition captures the idea of a black hole as part of space from which nothing can escape at a given moment in time. A trapped region is a domain where both ingoing and outgoing future-directed null geodesics emanating from a spacelike two-dimensional surface with spherical topology have negative expansion. The apparent horizon \cite{he:book,faraoni:b,aphor} is its evolving outer boundary. In general this notion depends on the spacetime foliation, but the apparent horizon is unambiguously defined in all foliations that respect spherical symmetry \cite{faraoni:b,aphor}.
In $(v,r)$ coordinates the expansions of ingoing and outgoing radial geodesic congruences with the tangents
\begin{align}
n^\mu=(0,-e^{-h_+},0,0), \qquad l^\mu=(1,\tfrac{1}{2} e^{h_+} f,0,0),
\end{align}
that satisfy $n\cdot l=-1$ are
\begin{align}
\theta_n = - \frac{2e^{-h_+}}{r}, \qquad \theta_l = \frac{e^{h_+} f}{r},
\end{align}
respectively. The apparent horizon is located at the Schwarzschild radius $r_\textsl{g}(t) \equiv r_+(v)$, namely the largest root of $f(t,r)=0$. Following the nomenclature of Ref.~\cite{frolov-def}, we refer to its interior as a {\it physical black hole} (PBH).
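Indeed, since $\theta_n<0$ everywhere, the trapped region is precisely the domain where $f<0$ (so that $\theta_l<0$ as well), and its outer boundary is therefore given by the largest root of $f=0$.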
Using the retarded null coordinate $u$ leads to the metric in the form
\begin{align}
ds^2=-e^{2h_-}\left(1-\frac{C_-}{r}\right)du^2-2e^{h_-}dudr +r^2d\Omega, \label{lfu}
\end{align}
which is particularly suitable for describing the spacetime of a white hole (then the Schwarzschild radius $r_-(u)$ is the boundary of the anti-trapped region where both expansions are positive).
In classical GR the event and the apparent horizon are regular surfaces: the curvature scalars, such as the Ricci curvature $R$ and the Kretschmann scalar $R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}$ are finite. In studies of field theories on curved backgrounds this assumption is necessary to maintain predictability of the theory \cite{hv:book,pt:book}.
The event horizon is an indispensable concept in formulating the paradox \cite{wald:01,bmt:17,vis:08}. Tracing out the inaccessible degrees of freedom naturally leads to entropy production in an overall unitary evolution. To differ from non-paradoxical entropy increases common in thermodynamic subsystems, this separation of spacetime regions should not represent a practical limitation on a distant Bob, but rather an absolute physical restriction. This is provided by an event horizon that bounds the absolutely inaccessible spacetime region according to Bob, which is distinct from the transient (albeit extremely long-lived) trapped region of regular black holes \cite{bar:68,fv:81,hay:06,f:14,cflpv:18,bm:19,cp:rev}.
According to Bob, the formation of black holes from classical collapsing matter takes an infinite amount of time. After at most a few dozen multiples of light-crossing time $r_\textsl{g}$, he cannot receive signals from an infalling observer Alice (an observer co-moving with the matter
who is initially at or near its edge); the redshift requires that the energy of any such detected signal is greater than the mass of the black hole.
Consequently, in classical GR the event horizon and Alice's experiences, like crossing the Schwarzschild radius, are counterfactual \cite{bmt:17}. Her clock readings should indicate various processes occurring at finite proper times $\tau_i$. As Alice cannot communicate her clock readings to Bob, these are experimentally unverifiable consequences of the formalism of GR. Nevertheless, a finite proper crossing time promotes the event horizon, and by extension the quantum states associated with the black hole horizon and its interior, from convenient mathematical concepts to physical entities in the theory.
The paradox that is based on properties of Hawking-like radiation cannot be constructed in such a case. Even if the existence of collapse-induced radiation does not require a horizon for its production \cite{pp:09,haj:87,blsv:06,vsk:07} (thereby resolving an obvious causal difficulty of the collapse and evaporation process taking a finite time according to Bob), formation of the event horizon should also occur at some finite time $t_*$ that precedes the evaporation time $t_\mathrm{e}$.
If the null energy condition (NEC) \cite{mmv:17,ks:20} is satisfied, i.e.\ for any null vector $k^\mu$, $k^\mu k_\mu=0$, contraction with the EMT is non-negative, $T_{\mu\nu}k^\mu k^\nu \geqslant 0$, then the apparent horizon is located inside of the event horizon \cite{he:book,fn:book}. This condition is violated by Hawking radiation. A detailed semiclassical analysis subsequently indicates that part of the trapped region is outside of the MBH \cite{fn:book, faraoni:b,bardeen:81}.
For an evaporating black hole ($r'_\textsl{g} \defeq d r_\textsl{g}/dt<0$) a weaker statement --- existence of the event horizon in finite $t$ implies formation of the apparent horizon at some finite time $t_\mathrm{S}$ --- ensues on the following logical grounds: consider an outward-pointing radial null geodesic that is emitted from a location $(r,t)$ that is outside of the apparent horizon $r_\textsl{g}(t)$. Eq.~\eqref{nullg} indicates that $r'(t)>0$, and as $r_\textsl{g}'(t)<0$ this remains true along the entire trajectory. For this geodesic to avoid reaching infinity as $t\to\infty$, at least either $\lim_{t\to\infty}\lim_{r\to r_*}h=-\infty$ or $\lim_{t\to\infty}\lim_{r\to r_*}f=0$ should hold for some $r_*<\infty$. The former is impossible as the Schwarzschild coordinates are regular outside of $r_\textsl{g}$, while the latter contradicts the definition of the MS mass and its relationship with the apparent horizon as $r_\textsl{g}(t)$ is the largest root of $f=0$. Hence such a geodesic does reach future null infinity, and a geodesic that is emitted outside of the receding apparent horizon is not contained within a MBH.
This discussion leads to two conditions that are necessary for the formulation of the information loss problem. First, an apparent horizon must be a regular surface to ensure predictability, and second, it must form at some finite time according to Bob (otherwise formation of the event horizon prior to evaporation of the black hole is impossible, thus preventing formulation of the alleged paradox). In spherical symmetry this is enough to describe the black hole formation scenario and geometry near the apparent horizon \cite{mut:21}.
\section{Near-horizon geometry} \label{horizon}
Both the regularity conditions and the Einstein equations can be conveniently expressed in terms of combined expressions
\begin{align}
\tau_t \defeq e^{-2h} T_{tt}, \qquad \tau^r \defeq T^{rr}, \qquad \tau_t^{~r} \defeq e^{-h}T_t^{~r} ,
\end{align}
that are used instead of the EMT components \cite{bmmt:19}. In particular, the three Einstein equations for $G_{tt}$, $G_t^{~r}$, and $G^{rr}$ are
\begin{align}
\partial_r C &= 8 \pi r^2 \tau_t / f , \label{gtt} \\
\partial_t C &= 8 \pi r^2 e^h \tau_t^{~r} , \label{gtr} \\
\partial_r h &= 4 \pi r \left( \tau_t + \tau^r \right) / f^2 , \label{grr}
\end{align}
respectively.
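As a consistency check (ours, not part of the original argument), the reduction of the $G_{tt}$ Einstein equation to Eq.~\eqref{gtt} can be verified symbolically for the metric $ds^2=-e^{2h}f\,dt^2+f^{-1}dr^2+r^2d\Omega^2$ with $f=1-C/r$, the form underlying Eqs.~\eqref{gtt}--\eqref{grr}: one finds $G_{tt}=e^{2h}f\,\partial_r C/r^2$ identically. A minimal sympy sketch:
\begin{verbatim}
# Sympy sketch (ours): for ds^2 = -e^{2h} f dt^2 + dr^2/f + r^2 dOmega^2,
# f = 1 - C(t,r)/r, check that G_tt == e^{2h} f C'/r^2, i.e. Eq. (gtt).
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
C = sp.Function('C')(t, r)
h = sp.Function('h')(t, r)
f = 1 - C/r

x = [t, r, th, ph]
g = sp.diag(-sp.exp(2*h)*f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
             - sp.diff(g[b, c], x[d]))/2 for d in range(4))
         for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(b, c):   # R_bc = d_a Gam^a_bc - d_c Gam^a_ba + Gam.Gam terms
    ex = 0
    for a in range(4):
        ex += sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        for d in range(4):
            ex += Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
    return sp.simplify(ex)

Rs = sum(ginv[i, i]*ricci(i, i) for i in range(4))   # Ricci scalar
G_tt = sp.simplify(ricci(0, 0) - g[0, 0]*Rs/2)
print(sp.simplify(G_tt - sp.exp(2*h)*f*sp.diff(C, r)/r**2))   # -> 0
\end{verbatim}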
The requirement of regularity of the apparent horizon and existence of real solutions describing geometry in its vicinity constrain the generic limiting form of the EMT and, eventually, the metric in its vicinity. Regularity is expressed as the demand that curvature scalars obtained from polynomials of components of the Riemann tensor are finite. However, in practice it is sufficient to require that the contractions $T^{\mu\nu}T_{\mu\nu}$ and $T^\mu_{~\mu}$ are finite at the apparent horizon to satisfy the regularity requirement \cite{t:19}. Leading terms in the reduced EMT components can in principle scale as $\tau_a\propto f^{k_a}$, for some powers $k_a$ with $\tau_a$ being one of $\tau_{t}$, $\tau^{r}$, and $\tau_t^{~r}$.
However, a careful analysis shows that only two solutions are possible: those that satisfy $k_a\equiv k=0$ and a subset of the solutions with $k_a\equiv k=1$ \cite{mut:21,t:19,t:20}. In the former case the metric functions that solve Eqs.~\eqref{gtt} and~\eqref{grr} are
\begin{align}
C= r_\textsl{g} - 4 \sqrt{\pi} r_\textsl{g}^{3/2} \Upsilon \sqrt{x}+ \mathcal{O}(x) , \quad h=-\frac{1}{2}\ln{\frac{x}{\xi}} + \mathcal{O}(\sqrt{x}), \label{k0met}
\end{align}
where $x \defeq r-r_\textsl{g}(t)$, and $\xi(t)$ is determined by the choice of time variable. The leading contributions to the reduced EMT components
\begin{align}
& \tau_t \approx \tau^r = -\Upsilon^2 + \mathcal{O}(\sqrt{x}) , \label{eq:taut+r} \\
& \tau_t^{~r} = \pm \Upsilon^2 + \mathcal{O}(\sqrt{x}) , \label{eq:taus}
\end{align}
are parametrized by $\Upsilon(t)$. The minus sign in Eq.~\eqref{eq:taut+r} is necessary to ensure that the solutions of the Einstein equations are real-valued \cite{bmmt:19,t:19}.
The near-horizon geometry is most conveniently expressed \cite{bmmt:19} in $(v,r)$ coordinates for $\tau_t^{~r}\approx - \Upsilon^2$, i.e.\ $r'_\textsl{g}<0$, and in $(u,r)$ coordinates for $\tau_t^{~r} \approx + \Upsilon^2$, i.e.\ $r'_\textsl{g}>0$. In both cases the metric functions are continuous across the horizons, and the expansions of ingoing and outgoing congruences can be readily evaluated. We see that the case $r_\textsl{g}'<0$ corresponds to an evaporating PBH, and $r_\textsl{g}'>0$ to an expanding white hole (contrary to erroneous interpretations of Refs.~\cite{t:19,t:20} that misidentified the latter as an accreting PBH). As our interest lies in the final stages of the collapse, we consider only evaporating PBH solutions in what follows.
Eq.~\eqref{gtr} must then hold identically, which yields the relationship
\begin{align}
r'_\textsl{g}/\sqrt{\xi}=-4\sqrt{\pi r_\textsl{g}}\,\Upsilon, \label{rpr1}
\end{align}
where the prime denotes a derivative with respect to $t$. While the derivation uses the finiteness of $T^\mu_{~\mu} = -R/8\pi$ and $T^{\mu\nu}T_{\mu\nu}=R^{\mu\nu}R_{\mu\nu}/64\pi^2$, all quadratic curvature invariants \cite{exact-e} are finite \cite{t:19,t:20}. This is also true for $k=1$ solutions that are described below.
For both black and white hole solutions the negative sign of $\tau_t$ and $\tau^r$ leads to the violation of the NEC \cite{mut:21,bmmt:19} in the vicinity of the apparent horizon. This can be deduced by studying a future-directed outward (inward) pointing radial null vector $k^\mu$ \cite{bmmt:19}.
Dynamic solutions with $k=1$ lead to finite energy density $\rho(t,r_\textsl{g}) \equiv E$ and pressure $p(t,r_\textsl{g}) \equiv P$. However, only their maximal possible values are consistent \cite{mut:21},
\begin{align}
E=-P=1/(8\pi r_\textsl{g}^2),
\end{align}
and the corresponding metric functions are
\begin{align}
C = r - c_{32}x^{3/2} + \mathcal{O}(x^2) , \quad h = - \frac{3}{2} \ln{\frac{x}{\xi}} + \mathcal{O}(\sqrt{x}) , \label{k1met}
\end{align}
where $c_{32}(t)>0$, and the consistency condition is
\begin{align}
r'_\textsl{g} = - c_{32}\xi^{3/2} / r_\textsl{g} , \label{rgprime}
\end{align}
as we consider only evaporation.
Comparison of various expressions in $(t,r)$ and $(v,r)$ coordinates helps to establish many useful results. Since we use such comparisons extensively, we collect the relevant expressions below. Components of the EMT are related by
\begin{align}
& \theta_v \defeq e^{-2h_+} \Theta_{vv} = \tau_t , \label{thev} \\
& \theta_{vr} \defeq e^{-h_+} \Theta_{vr} = (\tau_t^{~r} - \tau_t)/f , \label{thevr} \\
& \theta_r \defeq \Theta_{rr} = (\tau^r + \tau_t - 2 \tau_t^{~r}) / f^2 , \label{ther}
\end{align}
where $\Theta_{\mu\nu}$ is used to denote EMT components in $(v,r)$ coordinates.
The relevant Einstein equations then take the form
\begin{align}
\partial_vC_+ &= 8 \pi e^{h_+} r^2 ( \theta_v - \theta_{vr} f) , \\
\partial_rC_+ &= - 8\pi r^2\theta_{vr} ,\\
\partial_rh_+ &= 4 \pi r \theta_r .
\end{align}
An arbitrary spherically symmetric metric that is regular at the apparent horizon satisfies
\begin{align}
& C_+(v,r) = r_+(v) + w_1(v) y + \mathcal{O}(y^2) , \label{cv1} \\
& h_+(v,r) = \chi_1(v) y + \mathcal{O}(y^2) ,
\end{align}
where $y \defeq r - r_+(v)$, $w_1 \leqslant 1$, while $r_+(v) = r_\textsl{g}\big(t(v,r_+)\big)$. The limits $\Theta^+_{\mu\nu} \defeq \lim_{r\to r_+} \Theta_{\mu\nu}$ yield
\begin{align}
\theta_v^+ = (1 - w_1) \frac{r_+'}{8\pi r_+^2} , \quad \theta_{vr}^+ = - \frac{w_1}{8\pi r_+^2} , \quad \theta_r^+ = \frac{ \chi_1}{4\pi r_+} . \label{the3}
\end{align}
Both $k=0$ and $k=1$ solutions are needed to describe the formation of a black hole \cite{mut:21}. Assume that the first marginally trapped surface appears at some $v_\mathrm{S}$ at $r=r_+(v_\mathrm{S})$. For $v \leqslant v_\mathrm{S}$, the MS mass $C(v,r)/2$ in its vicinity is described in $(v,r)$ coordinates by
\begin{align}
C_+(v,r) = \sigma(v) + r_*(v) + \sum_{i \geqslant 1} w_i(v)(r-r_*)^i,
\end{align}
where $r_*(v)$ corresponds to the maximum of $\Delta_v(r) \defeq C(v,r)-r$. The deficit $\sigma(v) \defeq \Delta_v\big(r_*(v)\big) \leqslant0$ by definition. At the advanced time $v_\mathrm{S}$ the location of the maximum corresponds to the first marginally trapped surface, $r_*(v_\mathrm{S}) = r_+(v_\mathrm{S})$, and $\sigma(v_\mathrm{S}) = 0$. For $v > v_\mathrm{S}$, the deficit $\sigma \equiv 0$ and the MS mass is described by Eq.~\eqref{cv1}.
For $v \leqslant v_\mathrm{S}$, the (local) maximum of $\Delta_v$ satisfies $\partial \Delta_v / \partial r = 0$, hence $w_1(v) - 1 \equiv 0$. From Eqs.~\eqref{the3} and \eqref{thev} it follows that the newly formed black hole is described by a $k=1$ solution, since $w_1=1$ implies $\theta_v^+=0$ and thus $\Upsilon=0$. However, after its formation $r_+(v)$ is no longer a local maximum of $C_+(v,r)$, $w_1<1$, and thus at later times the black hole is described by a $k=0$ solution.
In the vicinity of the apparent horizon the equation for radial null geodesics becomes
\begin{align}
\left.\frac{dr}{dt}\right|_{r=r_\textsl{g}}=\pm\left.e^hf\right|_{r=r_\textsl{g}} = \pm4\sqrt{\xi \pi r_\textsl{g}}\Upsilon=\mp{r'_\textsl{g}}, \label{genull}
\end{align}
where the upper (lower) signature corresponds to outgoing (ingoing) geodesics. This result indicates that massless particles cross the apparent horizon in finite time according to Bob. Massive particles likewise cross the apparent horizon in finite time $t$ \cite{t:19,bbgj:16}, unless they are too slow.
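As an illustration (ours, not from the paper), the finite crossing time can be seen by integrating the ingoing null equation $dr/dt=-e^hf$ in the leading-order $k=0$ geometry of Eq.~\eqref{k0met}, anticipating the identifications $\xi=r_\textsl{g}/4$ and $\Upsilon=\alpha/(2\sqrt{\pi}r_\textsl{g}^3)$ derived in Sec.~\ref{idUx}; all numerical values are illustrative:
\begin{verbatim}
# Numeric sketch (ours): an ingoing radial null ray, dr/dt = -e^h f, in the
# leading-order k=0 geometry of Eq. (k0met) reaches the shrinking apparent
# horizon x = r - r_g(t) = 0 at finite t. Assumed inputs: Page law
# r_g' = -alpha/r_g^2, xi = r_g/4, Upsilon = alpha/(2 sqrt(pi) r_g^3).
import numpy as np
from scipy.integrate import solve_ivp

alpha, r0 = 1e-3, 10.0
rg = lambda t: (r0**3 - 3*alpha*t)**(1/3)

def rhs(t, y):
    x = y[0]
    r_g = rg(t)
    rg_p = -alpha/r_g**2
    ups = alpha/(2*np.sqrt(np.pi)*r_g**3)
    xi = r_g/4
    # e^h f with e^h = sqrt(xi/x), f = (x + 4 sqrt(pi) r_g^{3/2} Ups x^{1/2})/r
    ehf = np.sqrt(xi/x)*(x
          + 4*np.sqrt(np.pi)*r_g**1.5*ups*np.sqrt(x))/(r_g + x)
    return [-ehf - rg_p]   # dx/dt along the ingoing ray

hit = lambda t, y: y[0] - 1e-12   # stop at the horizon
hit.terminal = True
sol = solve_ivp(rhs, [0, 1e6], [0.1], events=hit, rtol=1e-10, atol=1e-14)
print("horizon crossed at t =", sol.t_events[0][0])   # finite
\end{verbatim}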
Some additional relations between the two sets of coordinates are useful: a point on the apparent horizon has the coordinates $(v,r_+(v))$ and $(t,r_\textsl{g}(t))$ in the two coordinate systems. Moving from $r_+(v)$ along the line of constant $v$ (i.e.\ along the ingoing radial null geodesic) by $\delta r$ leads to the point $(t+\delta t, r_\textsl{g}+\delta r)$. Using Eqs.~\eqref{rpr1} and~\eqref{rgprime} we obtain
\begin{align}
\delta t=-\left.\frac{e^{-h}}{f}\right|_{r=r_\textsl{g}} \!\!\!\!\! \!\!\!\!\delta r=\frac{\delta r}{r'_\textsl{g}} \label{dtrv}
\end{align}
for an evaporating black hole in both the $k=0$ and $k=1$ solutions. This implies that $t(v,r_++\delta y)=t(v,r_+)- \delta y/|r_\textsl{g}'|$, resulting in the relation
\begin{align}
\begin{aligned}
x(r_++y,v) &= r_+ + y - r_\textsl{g} \big( t(v,r_+ + y) \big) \\
& = - r''_\textsl{g} y^2 / (2 r_\textsl{g}'{}^2) + \mathcal{O}( y^3) \label{xyrel}
\end{aligned}
\end{align}
between the coordinates $x(r,t)$ and $y(r,v)$ in the vicinity of the apparent horizon. Then, using the invariance of the MS mass and expanding $C(t,r)$ up to the first order in $r-r_+(v)$, we obtain
\begin{align}
C_+(v,r) &=r_+(v)+w_1y+\ldots =C \big(t(v,r),r\big) \nonumber \\
&=r_\textsl{g}\big(t(v,r_+)\big)+r_\textsl{g}'\left(\frac{y}{r_\textsl{g}'}\right)-2\sqrt{2\pi r_\textsl{g}^3|r_\textsl{g}''|}\frac{\Upsilon}{|r_\textsl{g}'|}y+\ldots \nonumber \\
&=r_++\left(1-2\sqrt{2\pi r_\textsl{g}^3|r_\textsl{g}''|}\frac{\Upsilon}{|r_\textsl{g}'|}\right)y+\ldots,
\end{align}
and find
\begin{align}
w_1= 1-2\sqrt{2\pi r_\textsl{g}^3|r_\textsl{g}''|}\frac{\Upsilon}{|r_\textsl{g}'|}. \label{w1e}
\end{align}
\section{Parameter identification} \label{idUx}
The values of $\Upsilon$ and $\xi$ can be obtained from first principles only if one performs a complete analysis of the collapse of some matter distribution and the quantum excitations it generates. Such an analysis would provide a constructive proof of the existence of PBHs. In the absence of such results we first obtain some general relations and then match them with the semiclassical results.
The apparent horizon of a PBH that was formed at a finite time of Bob is timelike \cite{bmmt:19}. Hence it is possible to introduce the induced metric
\begin{equation}
ds^2|_{\mathrm{AH}}=-d\sigma^2+r_\mathrm{AH}^2\, d\Omega^2,
\end{equation}
where in the case of evaporation the proper time is most conveniently expressed in $(v,r)$ coordinates as $d\sigma = \sqrt{2|r_+'|}dv$. To remove the ambiguity we express coordinates of the apparent horizon as functions of proper time, such as $r_\mathrm{AH}(\sigma)$, $t_\mathrm{AH}(\sigma)$, and $v_\mathrm{AH}(\sigma)$. The invariance of the apparent horizon in spherically symmetric foliations means $r_\mathrm{AH}(\sigma) \equiv r_\textsl{g}\big(t_\mathrm{AH}(\sigma)\big)$, etc., and its rate of change is given by
\begin{align}
\frac{dr_\mathrm{AH}}{d\sigma}=r'_\textsl{g}\big(t_\mathrm{AH}(\sigma)\big)\dot t_\mathrm{AH}=r'_+\big(v_\mathrm{AH}(\sigma)\big)\dot v_\mathrm{AH}. \label{ah-rate}
\end{align}
If one assumes that for an evaporating PBH $r_\textsl{g}$ is a monotonically decreasing function of time, one can write
\begin{align}
\dot r_\mathrm{AH}=\Gamma_\mathrm{AH}(r_\mathrm{AH}), \quad r'_\textsl{g}=\Gamma_\textsl{g}(r_\textsl{g}), \quad r'_+=\Gamma_+(r_+),
\end{align}
where the relations between the functions $\Gamma_\mathrm{AH}$, $\Gamma_\textsl{g}$, and $\Gamma_+$ follow from Eq.~\eqref{ah-rate}. Without assuming any particular relation between $r_\textsl{g}'$ and $r_+'$, by using the first expression of Eq.~\eqref{the3} and Eq.~\eqref{thev} with $\tau_t = - \Upsilon^2 + \mathcal{O}(\sqrt{x})$, we obtain
\begin{align}
\Upsilon=\frac{1}{2}\sqrt{\frac{|r_\textsl{g}''|}{2\pi r_\textsl{g}}} \frac{|r_+'|} {|r_\textsl{g}'|}, \label{Upseq}
\end{align}
and from Eq.~\eqref{rpr1},
\begin{align}
\xi = \frac{r_\textsl{g}'^4}{2|r_\textsl{g}''|r_+'^2} .
\end{align}
The semiclassical analysis is based on perturbative backreaction calculations that represent the metric as a slowly varying sequence of Schwarzschild metrics modified by the Hawking radiation. It results in \cite{fn:book,bardeen:81,APT:19} $\Gamma_\textsl{g}(r)=\Gamma_+(r)$,
\begin{align}
\frac{dr_\textsl{g}}{dt}=-\frac{\alpha}{r_\textsl{g}^2}, \qquad \frac{dr_+}{dv}=-\frac{\alpha}{r_+^2}, \label{paget}
\end{align}
where $\alpha$ denotes the emission rate coefficient. Using this result we obtain
\begin{align}
\Upsilon = \frac{1}{2}\sqrt{\frac{\vert r_\textsl{g}'' \vert}{ 2\pi r_\textsl{g}} }=\frac{\alpha}{2\sqrt{\pi}r_\textsl{g}^3} ,
\end{align}
and
\begin{align}
\xi = \frac{r_\textsl{g}'^2}{2 \vert r_\textsl{g}'' \vert} = \frac{1}{4}r_\textsl{g},
\end{align}
where the last equalities on the rhs follow from Eq.~\eqref{paget}. We note that this result agrees in order of magnitude with the guess of Ref.~\cite{bmmt:19}, but as we will see below the assumptions of Ref.~\cite{bmmt:19} are not fulfilled and its estimate is in general incorrect.
\section{Temperature and surface gravity} \label{kappaT}
The surface gravity $\kappa$ plays an important role in GR, particularly in black hole thermodynamics and, more generally, in semiclassical gravity \cite{he:book,fn:book,faraoni:b}. For an observer at infinity the Hawking radiation that is produced on the background of a stationary black hole is thermal with its temperature given by $\kappa/2\pi$ \cite{fn:book,bmps:95}. However, surface gravity is unambiguously defined only in stationary spacetimes, where there are several equivalent definitions. These definitions are related to the inaffinity of null geodesics on the horizon, and to the peeling off properties of null geodesics near the horizon \cite{faraoni:b,kappa,kappaNielsenYoon}.
Stationary asymptotically flat spacetimes admit a Killing vector field $\xi^\mu$ that is timelike at infinity \cite{he:book,c:book,faraoni:b,exact-e}. A Killing horizon is a hypersurface on which the norm $\sqrt{\xi^\mu\xi_\mu}=0$. While logically this concept is independent of the notion of an event horizon, the two are related: for a black hole that is a solution of the Einstein equations in a stationary asymptotically flat spacetime the event horizon coincides with the Killing horizon \cite{fn:book,wald:01}.
A Killing orbit is the integral curve of the Killing vector field. The Killing property $\xi_{(\mu;\nu)}=0$ results in $\xi^\mu\xi_\mu=\mathrm{const}$ on each orbit. Coincidence of the two horizons allows one to introduce the surface gravity $\kappa$ as the inaffinity of null Killing geodesics on the event horizon,
\begin{align}
\xi^\mu_{~;\nu}\xi^\nu \defeq \kappa \xi^\mu.
\end{align}
Assuming sufficient regularity of the metric, expansion of the null geodesics near the apparent horizon $r > r_\textsl{g}$ then establishes the concept of the peeling surface gravity \cite{kappa,kappaNielsenYoon},
\begin{align}
\frac{dr}{dt} = \pm 2 \kappa_\mathrm{peel}(t) x + \mathcal{O}(x^2). \label{peeld}
\end{align}
The two definitions coincide in stationary spacetimes. For a Schwarzschild metric with mass $M$ the surface gravity is $\kappa=1/(4M)=1/(2r_\textsl{g})$.
Intuitively, $\kappa$ can be interpreted as the force that an observer at infinity would need to exert to hold a particle of unit mass stationary at the event horizon. Since the acceleration of a static observer will play a role in what follows, we reproduce here the derivation in $(t,r)$ coordinates. Consider an observer Eve at some fixed areal radius $r$. Her four-velocity is $u_\mathrm{E}^\mu = \delta^\mu_0 / \sqrt{-\textsl{g}_{00}}$, and her four-acceleration $a_\mathrm{E}^\mu=(0,\Gamma^r_{tt}/\textsl{g}_{00},0,0)$ in the Schwarzschild spacetime satisfies
\begin{align}
g \defeq \sqrt{a^\mu_{\mathrm{E}} a_{{\mathrm{E}}\mu}} = \frac{r_\textsl{g}}{2r^2\sqrt{1-r_\textsl{g}/r}}.
\end{align}
Correcting by the redshift factor $z=\sqrt{-\textsl{g}_{00}}$ gives the surface gravity on approach to the horizon,
\begin{align}
\kappa=\lim_{r\to r_\textsl{g}}zg=1/(2r_\textsl{g}).
\end{align}
Absence of an asymptotically timelike Killing vector in general dynamic spacetimes not only makes various analytic tasks computationally harder, but also requires generalization and reappraisal of the notions that are used in black hole physics. One must then adapt one of the equivalent stationary definitions of surface gravity. For sufficiently slowly evolving horizons with properties sufficiently close to their classical counterparts these different generalizations of surface gravity are practically indistinguishable \cite{kappa,kappaNielsenYoon}. This is important, as the role of the Hawking temperature is captured in various derivations either by the peeling \cite{peel:blsv} or the Kodama \cite{kpv:21} surface gravity. Indeed, gravitational collapse triggers radiation \cite{haj:87,blsv:06,vsk:07} that for macroscopic black holes at sufficiently late times approaches the standard Hawking radiation.
Nevertheless, this similarity fails for the self-consistent solutions that were described in Sec.~\ref{horizon}. Consider first the peeling surface gravity $\kappa_\mathrm{peel}$ \cite{mut:21}. For differentiable $C$ and $h$ the result is \cite{kappa,kappaNielsenYoon}
\begin{align}
\kappa_\mathrm{peel} = \frac{e^{h(t,r_\textsl{g})} \left( 1 - C'(t,r_\textsl{g}) \right)}{2 r_\textsl{g}} . \label{kap-peel}
\end{align}
However, such an expansion is impossible for both $k=0$ and $k=1$ solutions. The metric functions of Eqs.~\eqref{k0met} and \eqref{k1met} lead to a divergent peeling surface gravity. This happens because Eq.~\eqref{genull} ensures that there is a nonzero constant term in the expansion of the geodesics, and instead of Eq.~\eqref{peeld} we have
\begin{align}
\frac{dr}{dt} = \pm r_\textsl{g}'+a_{12}(t)\sqrt{x} + \mathcal{O}(x) ,
\end{align}
where $a_{12}$ depends on the higher-order terms of the EMT. Similarly, the redshifted acceleration of a static observer diverges as
\begin{align}
zg = \frac{|r'_\textsl{g}|}{4x} + \mathcal{O}(x^{-1/2}) .
\end{align}
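This divergence can be checked symbolically; the following sympy sketch (ours) reproduces the leading $|r'_\textsl{g}|/(4x)$ behavior from the $k=0$ metric functions:
\begin{verbatim}
# Sympy sketch (ours): leading behavior of the redshifted static
# acceleration z*g for the k=0 metric of Eq. (k0met).
import sympy as sp

x, rg, Ups, xi = sp.symbols('x r_g Upsilon xi', positive=True)
f = 4*sp.sqrt(sp.pi*rg)*Ups*sp.sqrt(x)   # leading term of f = 1 - C/r
N = sp.sqrt(xi/x)*sp.sqrt(f)             # lapse N = e^h sqrt(f)
zg = -sp.diff(N, x)*sp.sqrt(f)           # N decreases with x: |N'| sqrt(f)
rg_prime = 4*sp.sqrt(sp.pi*rg*xi)*Ups    # |r_g'|, Eq. (rpr1)
print(sp.simplify(zg - rg_prime/(4*x)))  # -> 0 at leading order
\end{verbatim}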
However, the peeling surface gravity was originally introduced using regular Painlev\'{e}--Gullstrand coordinates $({\bar{t}},r)$ \cite{nv:06} (whose properties are briefly summarized in App.~\ref{a:pg}). In fact, the two possible definitions are \cite{nv:06}
\begin{align}
\kappa_{{\mathrm{PG}}_1}=\left.\frac{1}{2r_\textsl{g}}(1-{\partial}_r \bar{ C})\right|_{r=r_\textsl{g}}, \label{kpg1}
\end{align}
where ${\bar{C}}=C(t({\bar{t}},r),r)$ is the MS mass in Painlev\'{e}--Gullstrand coordinates, and \cite{vad:11}
\begin{align}
\kappa_{{\mathrm{PG}}_2}=\left.\frac{1}{2r_\textsl{g}}(1-{\partial}_r \bar{ C}+{\partial}_{\bar{t}} {\bar{C}})\right|_{r=r_\textsl{g}}. \label{kpg2}
\end{align}
Using the invariance of the MS mass, we have
\begin{align}
\frac{{\partial} \bar C}{{\partial} r}=\left.\frac{{\partial} C}{{\partial} t}\frac{{\partial} t}{{\partial} r}\right|_{{\bar{t}}}+\frac{{\partial} C}{{\partial} r}.
\end{align}
Recalling that for an evaporating PBH
\begin{align}
\lim_{r\to r_\textsl{g}} f(t,r)e^{h(t,r)}=-r'_\textsl{g},
\end{align}
we have [selecting the positive sign in Eq.~\eqref{pde1}]
\begin{align}
\left.\frac{{\partial} t}{{\partial} r}\right|_{{\bar{t}}}=-\frac{{{\partial} {\bar{t}}}/{{\partial} r}}{{{\partial}{\bar{t}}}/{{\partial} t}}\to \frac{1}{r'_\textsl{g}}.
\end{align}
For $k=0$ solutions, we then have for $r \to r_\textsl{g}$
\begin{align}
&\frac{{\partial} C(t,r)}{{\partial} t}=r'_\textsl{g}\left(1+\frac{2\sqrt{\pi r_\textsl{g}^3}\Upsilon}{\sqrt{r-r_\textsl{g}}}\right)+\mathcal{O}(\sqrt{x}), \label{dtC} \\
&\frac{{\partial} C(t,r)}{{\partial} r}=-\frac{2\sqrt{\pi r_\textsl{g}^3}\Upsilon}{\sqrt{r-r_\textsl{g}}}+\mathcal{O}(\sqrt{x}).
\end{align}
Substituting everything into the definition Eq.~\eqref{kpg1} results in
\begin{align}
\kappa_{{\mathrm{PG}}_1}=0.
\end{align}
Furthermore, we also obtain
\begin{align}
\kappa_{{\mathrm{PG}}_2}=\left.\frac{ {\partial}_{\bar{t}} {\bar{C}}}{2r_\textsl{g}}\right|_{r=r_\textsl{g}}.
\end{align}
Since
\begin{align}
{\partial}_{\bar{t}}{\bar{C}}={\partial}_t C{\partial}_{\bar{t}} t|_r,
\end{align}
using Eq.~\eqref{dtC} we find
\begin{align}
{\partial}_{\bar{t}} {\bar{C}}\approx\frac{r'_\textsl{g}}{{\partial}_t{\bar{t}}}\left(1+\frac{2\sqrt{\pi r_\textsl{g}^3}\Upsilon}{\sqrt{r-r_\textsl{g}}}\right),\label{mess1}
\end{align}
that in the limit $r \to r_\textsl{g}$ results in three distinct possibilities that depend on the behavior of the function ${\bar{t}}(t,r)$. If, as $r\to r_\textsl{g}$, the derivative ${\partial}_t{\bar{t}}$ diverges faster than $1/\sqrt{r-r_\textsl{g}}$, then $ \kappa_{{\mathrm{PG}}_2}=0=\kappa_{{\mathrm{PG}}_1}$. If ${\partial}_t{\bar{t}}$ diverges more slowly than $1/\sqrt{r-r_\textsl{g}}$, then $ \kappa_{{\mathrm{PG}}_2}$ is divergent. Finally,
\begin{align}
{\bar{t}}=\tau(t)\sqrt{r-r_\textsl{g}}+\mathcal{O}(r-r_\textsl{g}),
\end{align}
where $\tau(t)$ is some function, leads to a finite value of $\kappa_{{\mathrm{PG}}_2}$. In fact, this form is consistent with the limiting form of Eq.~\eqref{pde1} (see App.~\ref{a:pg} for details).
The Kodama vector field can be introduced in any spherically symmetric spacetime \cite{k:80,av:10}. It has many useful properties of the Killing field to which, modulo possible rescaling, it reduces in the static case \cite{faraoni:b,kappa,kappaNielsenYoon,av:10}. Similar to the Killing vector, it is most conveniently expressed in $(v,r)$ coordinates,
\begin{align}
K^\mu=(e^{-h_+},0,0,0).
\end{align}
It is covariantly conserved, and generates the conserved current
\begin{align}
& \nabla_\mu K^\mu = 0 , \\
& \nabla_\mu J^\mu = 0 , \qquad J^\mu \defeq G^{\mu\nu} K_\nu ,
\end{align}
where $G_{\mu\nu}=R_{\mu\nu} - \tfrac{1}{2} \textsl{g}_{\mu\nu}R$ is the Einstein tensor, thereby giving a natural geometric meaning to the Schwarzschild coordinate time $t$. The MS mass is its Noether charge.
Since $K_{(\mu;\nu)}\neq 0$, the generalized Hayward--Kodama surface gravity is defined via \cite{hay:98}
\begin{align}
\frac{1}{2} K^\mu(\nabla_\mu K_\nu-\nabla_\nu K_\mu) \defeq \kappa_\mathrm{K} K_\nu,
\end{align}
evaluated on the apparent horizon. Hence
\begin{align}
\kappa_\mathrm{K} = \frac{1}{2} \left.\left(\frac{C_+(v,r)}{r^2} - \frac{\partial_r C_+(v,r)}{r}\right) \right|_{r=r_+} \hspace*{-3mm} = \frac{(1-w_1)}{2r_+}, \label{eq:surfgKodama}
\end{align}
where we used Eq.~\eqref{cv1} to obtain the final result. Thus at the formation of a black hole (i.e.\ of the first trapped surface) this version of surface gravity is zero. At the subsequent evolution stages that correspond to a $k=0$ solution, $\kappa_\mathrm{K}$ is nonzero. However, it approaches the static value $\kappa=1/(4M)$ only if the metric is close to the pure Vaidya metric with $w_1\equiv 0$. This in turn leads to another contradiction with the semiclassical results.
Formation of a PBH as a $k=1$ solution requires $w_1(t_\mathrm{S})=1$. At the subsequent stages $w_1<1$. This transition is continuous, as $\Upsilon(t_\mathrm{S}) \equiv 0$ in $k=1$ solutions and increases thereafter, and thus
\begin{align}
w_1= 1-r_\textsl{g}\frac{r_\textsl{g}''}{r_\textsl{g}'} = 1-\frac{2\alpha}{r_+^2}.
\end{align}
However, for the standard evaporation law $w_1\approx 0$ only when $r_\textsl{g}\sim \sqrt{\alpha}$, i.e.\ in the sub-Planckian regime, where semiclassical physics indubitably breaks down.
If we try to identify the evaporation law $\Gamma(r_\textsl{g})$ by requiring $w_1 \equiv 0$, then we obtain the equation
\begin{align}
\Gamma'(r_\textsl{g})r_\textsl{g} = 1 .
\end{align}
Maintaining $r_\textsl{g}' < 0$, corresponding to the process of evaporation at times $t > t_\mathrm{S}$, is only possible if
\begin{align}
r_\textsl{g}'=\Gamma(r_\textsl{g}) = \ln \frac{r_\textsl{g}(t)}{B} ,
\end{align}
where $B=r_\textsl{g}(t_\mathrm{S})+\beta>r_\textsl{g}(t_\mathrm{S})$. The solution $r_\textsl{g}(t)$ can be expressed in terms of the logarithmic integral ${\mathrm{li}}(z)=\int_0^z dt/\ln t$. Using its asymptotic form for $\beta\ll 1$ we obtain the evaporation time
\begin{align}
t_\mathrm{e} \approx r_\textsl{g}(t_\mathrm{S}) \ln \frac{r_\textsl{g}(t_\mathrm{S})}{\beta},
\end{align}
which is radically different from the standard semiclassical results.
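A direct numerical integration of this evaporation law (our illustration; the values of $r_\textsl{g}(t_\mathrm{S})$ and $\beta$ are arbitrary) confirms the leading-log estimate:
\begin{verbatim}
# Numeric check (ours, illustrative values): integrate r_g' = ln(r_g/B),
# B = r_g(t_S) + beta, and compare the evaporation time with the
# leading-log estimate t_e ~ r_g(t_S) ln(r_g(t_S)/beta).
import numpy as np
from scipy.integrate import solve_ivp

rS, beta = 1000.0, 1e-6
B = rS + beta

done = lambda t, y: y[0] - 1e-6   # r_g has essentially evaporated
done.terminal = True
sol = solve_ivp(lambda t, y: [np.log(y[0]/B)], [0, 1e9], [rS],
                events=done, rtol=1e-10, atol=1e-12)
print("numerical t_e :", sol.t_events[0][0])
print("leading log   :", rS*np.log(rS/beta))
\end{verbatim}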
\section{Discussion}
Our considerations have shown that a proper formulation of the information loss paradox is quite subtle, and that its standard exposition at the very least warrants considerable revision. Formation of the apparent horizon at some finite time $t_\mathrm{S}$ that distant Bob measures is a necessary condition to set up the information loss problem. A consistent solution of the field equations admits evaporation whilst yielding regularity at the horizon, but necessarily entails violation of the NEC.
The necessity of the NEC violation is an obstacle, as it requires a mechanism to convert the original collapsing matter into exotic matter that must be present in the vicinity of the forming apparent horizon. Conventional mechanisms for mass loss, such as the emission of gravitational waves, should work in tandem with production of the negative energy density matter. Collapse-induced Hawking-like radiation is thus not only a necessary quantum-mechanical ingredient of the paradox, but is necessary for producing its classical setting.
This brings us to a more serious difficulty: two ``close'' generalizations of surface gravity [namely the peeling surface gravity \eqref{kap-peel} and the Kodama--Hayward surface gravity \eqref{eq:surfgKodama}] that underpin different derivations of Hawking radiation on the background of an evolving spacetime are irreconcilable. In fact, three versions of the same peeling surface gravity [Eqs.~\eqref{kap-peel}, \eqref{kpg1}, and \eqref{kpg2}] are irreconcilable as well. Moreover, it is not clear if the required structure of the EMT can be matched \cite{mut:21}.
In addition, if the Hawking temperature is indeed proportional to the peeling surface gravity, then black holes explode (or freeze) on their formation. In this case the semiclassical picture is not valid, and it is impossible to formulate the information loss problem. Alternatively, if the Hawking temperature is proportional to the Kodama surface gravity, then it vanishes at the formation of a black hole; although it increases during evaporation, it can never attain the Hawking result. If the Kodama surface gravity reaches the classical value $\kappa_\mathrm{K}=1/(2r_\textsl{g})$, then it cannot be the black hole temperature. Moreover, it is not clear how, given indications to the contrary \cite{C-R:18}, a process with close to zero flux can ensure the necessary dominance of quantum effects over normal matter in the vicinity of the outer apparent horizon.
Our analysis indicates that the circumstances surrounding the formation of PBHs do not provide a basis to formulate the information loss problem within the semiclassical framework. Therefore, in order to resolve the ``paradox'', new physics is required to provide a mechanism to explain why information is lost to begin with, and to describe how this process may occur in a self-consistent way. It should be noted that, even if the issues that have been raised so far are resolved, scrutiny of the precise technical aspects of commonly invoked semiclassical notions indicates that ``Page time unitarity'' may appear to be violated even if the underlying physics is unitary \cite{SFA:21}. A recent study \cite{bfm:21} that is complementary to the argumentation presented here also indicates that the standard form of the paradox can be consistently rendered only if new physics begins to play a role before reaching the Planck scale.
\acknowledgments
RBM is supported by the Natural Sciences and Engineering Research Council of Canada. SM is supported by an International Macquarie University Research Excellence Scholarship and a Sydney Quantum Academy Scholarship. The work of DRT is supported by the ARC Discovery project grant No.\ DP210101279.
\section{Introduction}
A material that changes in space, where the wave speed is non--uniform, can act as a lens, mirror, absorber, or invisibility cloak, depending on the type of material and how it is arranged. Such structures can be traced back to ancient investigations of `burning lenses'~\cite{smith2017} and remain the subject of much research into electromagnetic (EM) and acoustic materials~\cite{kadic2019}. Yet until recently, the variation of material properties in \emph{time} had received less attention.
Morgenthaler~\cite{morgenthaler1958} established that homogeneous time varying media conserve electromagnetic momentum, rather than energy. As a time variation of the dielectric function appears naturally in the properties of an unsteady plasma, this early work was first applied in plasma physics (e.g.~\cite{stepanov1976}), findings that are generalized and summarized in the concepts of ``time refraction'' and ``time reflection'' described in the work of Mendon\c{c}a and others~\cite{mendonca2000,mendonca2002}. Here a temporal jump in material parameters generates additional waves, analogous to scattering from a spatial boundary, but rotated from space into time and subject to the constraints of causality. Connected to this work, the field of analogue gravity has found deep results concerning materials with a space--time varying wave speed~\cite{schutzhold2005,philbin2008} as a means to mimic a moving medium with an associated analogue event horizon and Hawking radiation~\cite{weinfurtner2011,drori2019}.
In recent years there has been renewed interest in time varying media from those working on artificial electromagnetic and acoustic materials~\cite{xiao2014,ptitcyn2019}. It has been quickly established that this enables new routes to thin absorbers~\cite{li2021}, gain without a dispersive response~\cite{pendry2021}, the phenomenon of `temporal aiming'~\cite{pena2020a}, temporal anti--reflection coatings~\cite{pena2020b}, and a homogenized response equivalent to bianisotropy~\cite{huidobro2021}. It is now understood that besides being an interesting variant of the usual reflection problem, rapid temporal changes in material properties modify EM and acoustic fields in a distinct way from spatial variations.
We can understand the difference between material variation in space versus time from both Maxwell's equations
\begin{equation}
\left(\begin{matrix}\boldsymbol{0}&\boldsymbol{\nabla}\times\\-\boldsymbol{\nabla}\times&\boldsymbol{0}\end{matrix}\right)\left(\begin{matrix}\boldsymbol{E}\\\boldsymbol{H}\end{matrix}\right)=\frac{\partial}{\partial t}\left(\begin{matrix}\boldsymbol{D}\\\boldsymbol{B}\end{matrix}\right)\label{eq:maxwell}
\end{equation}
and the equations of elasticity
\begin{equation}
\frac{\partial}{\partial t}\left(\rho\frac{\partial\boldsymbol{U}}{\partial t}\right)=\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}\label{eq:elasticity}.
\end{equation}
The \emph{spatial} derivatives in Eq. (\ref{eq:maxwell}) remain finite so long as the tangential electric and magnetic fields are continuous in space, and likewise for the normal stresses in Eq. (\ref{eq:elasticity}). This observation leads to the usual boundary conditions of elasticity and electromagnetism across interfaces between different media, and the continuity of power flow across any interface. Conversely the \emph{temporal} derivatives remain finite so long as the $\boldsymbol{D}$ and $\boldsymbol{B}$ vectors, or equivalently the momentum density $\rho\dot{\boldsymbol{U}}$, are continuous in time~\cite{xiao2014,bakunov2014}. These are different boundary conditions, and do not lead to e.g. continuity of power flow in time. In the electromagnetic case these boundary conditions ensure the Abraham momentum, $\boldsymbol{E}\times\boldsymbol{H}$, is conserved across a spatial boundary, while the Minkowski momentum, $\boldsymbol{D}\times\boldsymbol{B}$, is conserved across a temporal one~\cite{griffiths2012}. This potential to change power flow underlies the phenomenon of `temporal aiming'~\cite{pena2020a}, where a temporal change in the principal axes of the material tensor can be used to redirect the flow of wave energy. Similarly---unlike their spatial counterpart---the transmission through a temporal anti--reflection coating is not unity~\cite{mai2021}, indicating a change in the power flow due to the material time variation.
To date there are only a few experimental realisations of time dependent metamaterials. In electromagnetism these typically rely on a non--linear response~\cite{zhou2020} to induce the time variation. In acoustics---where the frequencies are usually reduced by many orders of magnitude---there is more freedom to control the time variation of the material. We base the theory in this paper on the concept of active acoustic meta--atoms~\cite{popa2013}, recently implemented by the group of Li and co--workers~\cite{cho2020,wen2020,wen2021} to demonstrate metamaterials with a non--Hermitian and time varying response. Each `digital meta--atom' consists of two closely spaced speaker--microphone pairs inserted into an acoustic waveguide, connected via a single--board computer. The two microphones measure the acoustic pressure field at two points, from which the average and gradient of the field are computed. From this the single board computer computes a response as a function of the measured pressure field, and sends signals to the speaker pair. The response can be programmed with great freedom, convolving the input signal with a pre--specified kernel (e.g. a damped sinusoid as shown in Fig.~\ref{fig:schematic}b) and simultaneously modulating the amplitudes of the speaker output. An arbitrary dispersive, time modulated response can thereby be obtained at kHz frequencies. Such systems are similar to those commonly used in active noise and vibration control~\cite{hansen2012}, but here applied to realise a tailored time dependent dispersive response. Based on this idea we give a simple theoretical model for an array of such scatterers with time modulated properties, as described in Figure~\ref{fig:schematic}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig-1.pdf}
\caption{A waveguide containing an array of digital meta--atoms, each with response kernel $K(\tau)$. (a) Schematic: an acoustic waveguide contains an array of speakers $S$ and microphones $M$, each pair connected by a single board computer, implementing the response kernel $K(\tau)$ and modulated speaker amplitude $a(t)$. (b) The signal sent to the speaker is equal to the past values of the pressure field $p(t)$ measured by the microphone, convolved with the response kernel $K(t)$, which is here a damped sinusoid (a Lorentzian in the frequency domain). We plot the function defined in Eq. (\ref{eq:lorentz_kernel}) with parameters $\omega_0/\Omega_{\mathcal{K}}=2.0$, $\gamma/\Omega_{\mathcal{K}}=0.1$, and $A/\Omega_{\mathcal{K}}^2=1.5$.}
\label{fig:schematic}
\end{figure}
\section{Digital acoustic meta--atoms}
We first construct a theoretical model of a single digital meta--atom in a waveguide. Consider acoustic pressure waves, obeying the linearized Navier--Stokes equations for a fluid of uniform density $\rho_0$, pressure $p_0$, and zero local flow velocity. For small deviations around this background, the Navier--Stokes equation reduces to a linear equation for the fluid velocity and pressure
\begin{equation}
\rho_0\partial_t\boldsymbol{v}=-\boldsymbol{\nabla}p+\boldsymbol{f},\label{eq:linear_ns}
\end{equation}
where $\boldsymbol{f}$ is the external force density applied to the fluid. In this linear limit the continuity equation becomes
\begin{equation}
\partial_t\rho+\rho_0\boldsymbol{\nabla}\cdot\boldsymbol{v}=-q,\label{eq:linear_c}
\end{equation}
where $q$ represents a source of mass due to e.g. the fluid moving in and out of a compressed region. Assuming one dimensional propagation (i.e. the fundamental mode in a waveguide), writing $\rho=(\partial\rho/\partial p)_S\, p$, and combining (\ref{eq:linear_ns}--\ref{eq:linear_c}) we find the wave equation for the pressure field,
\begin{equation}
\left(\partial_x^2-\frac{1}{c_{s}^2}\partial_t^2\right) p=\partial_t q+\partial_x f_x\label{eq:driven_wave}
\end{equation}
where the speed of sound is given by $c_s=\left(\partial p/\partial \rho\right)_{S}^{1/2}$. Here the source of the sound is the forcing due to a speaker at $x=0$, driven by a signal $\mathcal{F}(t)$. We take this as a point--like monopole excitation, $\partial_x f_x=0$ and
\begin{equation}
\partial_t\,q=\delta(x)\mathcal{F}(t)\label{eq:forcing}.
\end{equation}
Assuming an incoming plane wave $p_{\rm in}=\cos(\omega(x/c_s-t))$, plus the wave generated from the forcing (\ref{eq:forcing}), the solution to Eq. (\ref{eq:driven_wave}) is
\begin{equation}
p(x,t)=\cos(\omega(x/c_s-t))-\frac{c_s}{2}\int_{0}^{\infty}\mathcal{F}(t-|x|/c_s-\tau)\,d\tau.\label{eq:pressure_field}
\end{equation}
Up until this point we have not considered the dependence of the speaker signal $\mathcal{F}(t)$ on the incident pressure field. Via the signal processing on the single--board computer, the speaker amplitude is related linearly to the past behaviour of the pressure field by the following integral equation
\begin{equation}
\mathcal{F}(t)=\frac{a(t)}{w}\partial_t^2\int_{0}^{\infty}d\tau\,K(\tau)\,p(0,t-\tau)\label{eq:constitutive_relation}
\end{equation}
where $K(\tau)$ has dimensions of frequency and represents the causal response of the digital meta--atom. The causal response can be chosen almost arbitrarily through programming the single--board computer. As $\mathcal{F}$ has dimensions of force per volume, we have introduced a characteristic length $w$ of the digital meta--atom. The dimensionless function $a(t)$ is an additional applied time modulation of the signal sent to the speaker pair, and allows the meta--atom to exhibit a pre--specified time dependent response.
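As a concrete illustration of Eq.~(\ref{eq:constitutive_relation}) (ours; the sample rate, kernel parameters, and $w$ are assumptions, not values taken from the experiments of Refs.~\cite{cho2020,wen2020,wen2021}), the processing on the single--board computer amounts to a discrete causal convolution followed by two time derivatives and an amplitude modulation:
\begin{verbatim}
# Sketch (ours) of the processing in Eq. (constitutive_relation): causal
# convolution with K, two discrete time derivatives, modulation a(t).
# All values (sample rate, kernel parameters, w) are illustrative.
import numpy as np

dt = 1e-5                                   # sample period [s]
t = np.arange(0.0, 0.05, dt)
w0, gam, A = 2*np.pi*2e3, 2*np.pi*50, 1.0   # kernel parameters
w1 = np.sqrt(w0**2 - (gam/2)**2)

tau = np.arange(0.0, 0.02, dt)              # truncated kernel support
K = A*np.sin(w1*tau)/w1*np.exp(-gam*tau/2)  # Eq. (lorentz_kernel)

p = np.cos(2*np.pi*1.8e3*t)                 # measured pressure (illustrative)
conv = np.convolve(p, K)[:len(t)]*dt        # int K(tau) p(t - tau) dtau
d2 = np.gradient(np.gradient(conv, dt), dt) # second time derivative

a = 1 + 0.5*np.tanh((t - 0.025)/5e-3)       # slow modulation a(t)
w = 0.05                                    # meta-atom length scale [m]
F = a/w*d2                                  # speaker drive signal
\end{verbatim}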
\section{Effective medium limit of a digital meta--atom array}
\par
We take a one--dimensional array of these meta--atoms spaced by distance $\Delta$, with one speaker--microphone pair per unit cell. We shall show that in the long wavelength limit the scatterers are equivalent to a homogeneous medium with time varying properties. Inserting Eq. (\ref{eq:constitutive_relation}) into (\ref{eq:driven_wave}--\ref{eq:forcing}) and summing the force over the array of speakers we find the pressure field in the array obeys,
\begin{equation}
\left(\partial_{x}^2-\frac{1}{c_s^2}\partial_t^2\right)p=\frac{a(t)}{w}\sum_{n=-\infty}^{\infty}\delta(x-n\Delta)\,\partial_t^2\int_{0}^{\infty}d\tau\,K(\tau)\,p(n\Delta,t-\tau)\label{eq:scatterer-array}
\end{equation}
Defining $\bar{p}$ as the pressure averaged over a single unit cell,
\begin{equation}
\bar{p}(x,t)=\frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2}p(x+y,t)\,dy
\end{equation}
we find that it obeys
\begin{equation}
\left(\partial_x^2-\frac{1}{c_s^2}\partial_t^2\right)\bar{p}(x,t)=\frac{a(t)}{w \Delta}\partial_t^2\int_0^{\infty}d\tau\,K(\tau)\,p([x/\Delta]\Delta,t-\tau)\label{eq:average_pressure}
\end{equation}
where $[x/\Delta]$ is the nearest integer to the real number $x/\Delta$. From Eq. (\ref{eq:pressure_field}) we can see that the change in the pressure field over a unit cell is determined by the time variation of each speaker output over the interval $\Delta/c_s$. When neither the speaker amplitude $a(t)$ nor the pressure $p(t)$ varies significantly over this timescale we can replace $p([x/\Delta]\Delta,t-\tau)$ with $\bar{p}(x,t-\tau)$, and the average pressure obeys
\begin{equation}
\left(\partial_x^2-\frac{1}{c_s^2}\partial_t^2\right)\bar{p}(x,t)=\frac{\bar{a}(t)}{c_s^2}\partial_t^2\int_0^{\infty}d\tau\,K(\tau)\,\bar{p}(x,t-\tau)\label{eq:effective_medium}.
\end{equation}
where $\bar{a}=(c_s^2/w\Delta)\,a$. This wave equation is equivalent to that in a homogeneous dispersive medium (dispersion determined by the Fourier transform of $K(\tau)$), which is modulated in time according to $a(t)$. For memory kernels $K(\tau)$ with a short time response, the right hand side of (\ref{eq:effective_medium}) can be expanded as a series, $\int_0^\infty d\tau\,K(\tau)\,\bar{p}(x,t-\tau)=\sum_{n}(-1)^n\partial_{t}^{n}\bar{p}(x,t)\int_0^\infty d\tau\,K(\tau)\tau^n/n!$. Taking just the first term in this series reduces Eq. (\ref{eq:effective_medium}) to that for a scalar wave in a medium with a non--dispersive but time modulated refractive index, as described in e.g.~\cite{holberg1966}. Taking the next two terms in the series introduces successive dispersive terms represented as third and fourth order time derivatives of the pressure in Eq. (\ref{eq:effective_medium}).
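A quick numerical check (ours; all values illustrative) of this moment expansion, for a signal slow compared to the kernel memory:
\begin{verbatim}
# Numeric illustration (ours): for a signal slow on the kernel memory time,
# the first two moments of K already approximate the convolution.
import numpy as np

dt = 1e-3
tau = np.arange(0.0, 20.0, dt)
w0, gam, A = 2.0, 0.4, 1.0
w1 = np.sqrt(w0**2 - (gam/2)**2)
K = A*np.sin(w1*tau)/w1*np.exp(-gam*tau/2)

t = np.arange(0.0, 200.0, dt)
u = np.cos(0.05*t)                          # slowly varying signal
exact = np.convolve(u, K)[:len(t)]*dt       # int K(tau) u(t - tau) dtau
m0 = K.sum()*dt                             # int K(tau) dtau
m1 = (K*tau).sum()*dt                       # int K(tau) tau dtau
approx = m0*u - m1*np.gradient(u, dt)       # n = 0, 1 terms of the series
print(np.abs(exact - approx)[len(tau):].max())   # small, after transients
\end{verbatim}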
As in the work of Cho et al.~\cite{cho2020}, we take our kernel to be of Lorentzian form in the frequency domain, which corresponds to a damped sinusoidal time response
\begin{equation}
K(\tau)=A\Theta(\tau)\frac{\sin(\omega_1\tau)}{\omega_1}{\rm e}^{-\frac{\gamma}{2} \tau}\label{eq:lorentz_kernel}
\end{equation}
where $\omega_1=\sqrt{\omega_0^2-(\gamma/2)^2}$, and $A$ is a constant with dimensions of frequency squared, setting the scale of the response of each digital meta--atom. Writing the pressure field as $\bar{p}=u_{\mathcal{K}}(t){\rm e}^{{\rm i}\mathcal{K} x}$, where $\mathcal{K}\Delta\ll 1$, and defining the auxiliary quantity $v(t)=\int_0^{\infty}d\tau\,K(\tau)\,\partial_t^2 u_{\mathcal{K}}(t-\tau)$, we find that (\ref{eq:effective_medium}) is equivalent to a non--Hermitian system of two coupled simple harmonic oscillators
\begin{align}
\left(\partial_t^2+\Omega_{\mathcal{K}}^2\right)u_{\mathcal{K}}(t)&=-\bar{a}(t) v(t)\nonumber\\
\left(\partial_t^2+\gamma\partial_t+\omega_0^2\right)v(t)&=A \partial_t^2 u_{\mathcal{K}}(t)\label{eq:two_oscillator_system}
\end{align}
where $\Omega_{\mathcal{K}}=c_s\mathcal{K}$. While the oscillator $u_{\mathcal{K}}(t)$ represents the amplitude of the sound wave within the lattice, propagating with wave--vector $\mathcal{K}$, the amplitude $v(t)$ mimics the dynamics of the speaker and microphone system, coupled through the single--board computer array.
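The coupled system (\ref{eq:two_oscillator_system}) is also straightforward to integrate numerically; the dashed reference curves in Figs.~\ref{fig:wkbgood} and \ref{fig:wkbbad} below are obtained in this way. A minimal sketch (ours; the initial data are illustrative):
\begin{verbatim}
# Minimal sketch (ours): direct integration of Eq. (two_oscillator_system),
# parameters as in Fig. 2 and the tanh modulation of Fig. 3 (in units of
# Omega_K); initial data are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

OmK, A, w0, gam = 1.0, 1.6, 2.5, 0.3
abar = lambda t: 6 - 3.6*(1 + np.tanh(t/20))   # 6 -> -1.2

def rhs(t, y):
    u, du, v, dv = y
    ddu = -OmK**2*u - abar(t)*v
    ddv = -gam*dv - w0**2*v + A*ddu
    return [du, ddu, dv, ddv]

sol = solve_ivp(rhs, [-100, 150], [1.0, 0.0, 0.0, 0.0],
                max_step=0.05, rtol=1e-8, atol=1e-8)
u_K = sol.y[0]   # pressure amplitude u_K(t)
\end{verbatim}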
\section{Adiabatic modulation}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig-2.pdf}
\caption{Coupled mode frequencies $\Omega_{\pm}^2$ and effective damping $\gamma_{\rm eff}$ for parameters $A=1.6\Omega_{\mathcal{K}}^2$, $\omega_0=2.5\Omega_{\mathcal{K}}$, and $\gamma=0.3\Omega_{\mathcal{K}}$. Panel (a) shows real and imaginary parts of the coupled mode frequencies computed from (\ref{eq:Wpm}), illustrating the two roots for each choice of speaker output $\bar{a}$. Vertical dashed lines show the two exceptional point values of $\bar{a}$ computed from Eq. (\ref{eq:aex}). Horizontal dashed lines indicate the uncoupled oscillator frequencies $\Omega=\Omega_{\mathcal{K}}$ and $\Omega=\sqrt{\omega_0^2-\gamma^2/4}$. Panel (b) shows the effective damping coefficient $\gamma_{\rm eff}$, calculated using (\ref{eq:gamma_eff}), for the pairs of modes shown in (a). This quantity diverges as we approach an exceptional point, and then loses its meaning. We have not plotted values in this region.}
\label{fig:dispersion}
\end{figure}
\par
To gain some insight into the system of equations (\ref{eq:two_oscillator_system}) for the pressure amplitude $u_{\mathcal{K}}$ and auxiliary variable $v$, we take the simplest case where the speaker amplitude $\bar{a}$ is modulated slowly in time. Our approximation is analogous to the WKB approximation~\cite{heading2013,bender1999} often used in optics and quantum mechanics. We first write Eq. (\ref{eq:two_oscillator_system}) in matrix form, defining the vector $|\psi\rangle=(u_{\mathcal{K}},v)^{\rm T}$
\begin{equation}
\left(\begin{matrix}1&0\\-A&1\end{matrix}\right)\partial_{t}^{2}|\psi\rangle+\left(\begin{matrix}0&0\\0&\gamma\end{matrix}\right)\partial_t|\psi\rangle+\left(\begin{matrix}\Omega_{\mathcal{K}}^2&\bar{a}\\0&\omega_0^2\end{matrix}\right)|\psi\rangle=0.\label{eq:matrix_form}
\end{equation}
Inverting the constant matrix in front of the second time derivative in Eq. (\ref{eq:matrix_form}) and performing the transformation $|\psi\rangle=\exp\left(-\frac{1}{2}\Gamma\,t\right)|\psi'\rangle$, where $\Gamma={\rm diag}(0,\gamma)$, eliminates the first order derivative from (\ref{eq:matrix_form}), leaving the second order matrix equation
\begin{equation}
\frac{d^2}{dt^2}|\psi'\rangle+M(t)|\psi'\rangle=0\label{eq:eqm_new_form}
\end{equation}
where the time dependent $2\times2$ matrix $M(t)$ is given by
\begin{equation}
M(t)=\left(\begin{matrix}\Omega_{\mathcal{K}}^2&\bar{a}(t){\rm e}^{-\frac{\gamma\,t}{2}}\\A\Omega_{\mathcal{K}}^2{\rm e}^{\frac{\gamma t}{2}}&\omega_0^2-\frac{1}{4}\gamma^2+A\bar{a}(t)\end{matrix}\right).\label{eq:Mt}
\end{equation}
Note that this matrix is time dependent even when the coupling constants $A$ and $\bar{a}$ are constant in time: the transformation that removes the first order derivative introduces the factors ${\rm e}^{\pm\gamma t/2}$ in the off--diagonal (coupling) entries. The treatment of this section is thus restricted to small values of the damping $\gamma$.
To solve Eq. (\ref{eq:eqm_new_form}) we expand the vector $|\psi'\rangle$ in terms of the instantaneous eigenvectors $|u_{\pm}\rangle$ of the matrix $M(t)$, obeying $M|u_{\pm}\rangle=\Omega_{\pm}^2|u_{\pm}\rangle$. These eigenvectors are~\footnote{As defined here, when the modulation function $\bar{a}$ passes through zero, one of the eigenvectors $|u_{\pm}\rangle$ defined in (\ref{eq:ev}) itself passes through zero, leading to a discontinuity in the overall phase of this eigenvector. To deal with this we add a small imaginary part $\bar{a}\to\bar{a}+{\rm i}\eta$, and take the limit $\eta\to0$.}
\begin{equation}
|u_{\pm}(t)\rangle=\left(\begin{matrix}\bar{a}(t){\rm e}^{-\frac{\gamma t}{2}}\\\Omega^2_{\pm}(t)-\Omega_{\mathcal{K}}^2\end{matrix}\right),\label{eq:ev}
\end{equation}
with associated eigenvalues $\Omega^2_{\pm}$ given by
\begin{equation}
\Omega_{\pm}^2(t)=\frac{1}{2}\left[\Omega_{\mathcal{K}}^2+\omega_0^2-\frac{1}{4}\gamma^2+A\bar{a}\pm\sqrt{\left(\Omega_{\mathcal{K}}^2+\omega_0^2-\frac{1}{4}\gamma^2+A\bar{a}\right)^2-4\Omega_{\mathcal{K}}^2\left(\omega_0^2-\frac{1}{4}\gamma^2\right)}\right].\label{eq:Wpm}
\end{equation}
The two values $\Omega_{\pm}^2$ are plotted in Fig.~\ref{fig:dispersion}a. Due to the dispersive response of the meta--atoms, there are \emph{four} possible frequencies of oscillation, arising from the two square roots of (\ref{eq:Wpm}). These four roots represent the two directions of propagation for each value of $\mathcal{K}$, for both the upper and lower branches of the dispersion relation shown in Fig.~\ref{fig:dispersion}. Due to the non--Hermitian equation of motion (\ref{eq:two_oscillator_system}) there are exceptional point degeneracies in (\ref{eq:Wpm}) for some choices of speaker amplitude $\bar{a}$
\begin{equation}
\bar{a}_{\rm ex}=-\frac{1}{A}\left(\Omega_{\mathcal{K}}\pm\sqrt{\omega_0^2-\frac{1}{4}\gamma^2}\right)^2,\label{eq:aex}
\end{equation}
at which point the two eigenvectors (\ref{eq:ev}) become parallel. Note that these exceptional points require a negative value of the real coupling constant $\bar{a}_{\rm ex}$. These two exceptional point degeneracies are indicated with the vertical dashed lines in Fig.~\ref{fig:dispersion}a.
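One can verify Eq.~(\ref{eq:aex}) numerically (our sketch): at $\bar{a}=\bar{a}_{\rm ex}$ the two eigenvalues of $M$ evaluated at $t=0$ coalesce.
\begin{verbatim}
# Numeric cross-check (ours) of Eq. (aex): the eigenvalues of M(0),
# Eq. (Mt), coalesce at abar = a_ex. Parameters as in Fig. 2.
import numpy as np

OmK, A, w0, gam = 1.0, 1.6, 2.5, 0.3
w1sq = w0**2 - gam**2/4
a_ex = -(OmK + np.sqrt(w1sq))**2/A   # one root of Eq. (aex)
M = np.array([[OmK**2, a_ex],
              [A*OmK**2, w1sq + A*a_ex]])
print(np.linalg.eigvals(M))          # two (numerically) equal eigenvalues
\end{verbatim}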
\par
To derive the form of the wave during an adiabatic modulation of $\bar{a}(t)$ we write the vector $|\psi'\rangle$ as a superposition of the instantaneous eigenvectors (\ref{eq:ev})
\begin{equation}
|\psi'\rangle=c_{+}(t)\frac{{\rm e}^{-{\rm i}\int_0^t\Omega_{+}dt'}}{\sqrt{\Omega_{+}}}|u_+\rangle+c_{-}(t)\frac{{\rm e}^{-{\rm i}\int_0^t\Omega_{-}dt'}}{\sqrt{\Omega_{-}}}|u_-\rangle.\label{eq:adiabatic_form}
\end{equation}
Making the substitution (\ref{eq:adiabatic_form}) in Eq. (\ref{eq:eqm_new_form}) and dropping terms that are second order in the rate of change of $|u_{\pm}\rangle$, we find a first order equation for the amplitudes $c_{\pm}$, with the time variation proportional to the rate of change of the vectors $|u_{\pm}\rangle$
\begin{equation}
\frac{d}{dt}\left(\begin{matrix}c_{+}\\c_{-}\end{matrix}\right)=-\left(\begin{matrix}\langle v_{+}|\frac{d}{dt}|u_{+}\rangle&\sqrt{\frac{\Omega_-}{\Omega_+}}\langle v_{+}|\frac{d}{dt}|u_{-}\rangle{\rm e}^{{\rm i}\int_0^t(\Omega_+-\Omega_-)dt'}\\\sqrt{\frac{\Omega_+}{\Omega_-}}\langle v_{-}|\frac{d}{dt}|u_{+}\rangle{\rm e}^{-{\rm i}\int_0^t(\Omega_+-\Omega_-)dt'}&\langle v_{-}|\frac{d}{dt}|u_{-}\rangle\end{matrix}\right)\left(\begin{matrix}c_{+}\\c_{-}\end{matrix}\right)
\label{eq:eqm_amplitude}.
\end{equation}
Within Eq. (\ref{eq:eqm_amplitude}) we have introduced the dual basis to the non--orthogonal vectors defined in Eq. (\ref{eq:ev})
\begin{align}
\langle v_{+}|&=\frac{1}{\Omega_{+}^2-\Omega_{-}^2}\left(\frac{\Omega_{\mathcal{K}}^2-\Omega_{-}^2}{\bar{a}}{\rm e}^{\frac{\gamma t}{2}},1\right)\nonumber\\
\langle v_{-}|&=\frac{1}{\Omega_{-}^2-\Omega_{+}^2}\left(\frac{\Omega_{\mathcal{K}}^2-\Omega_{+}^2}{\bar{a}}{\rm e}^{\frac{\gamma t}{2}},1\right)\label{eq:dual_vectors},
\end{align}
which obey the orthogonality relation $\langle v_{i}|u_{j}\rangle=\delta_{ij}$. Using the expressions for the eigenvectors (\ref{eq:ev}), the inner products appearing in (\ref{eq:eqm_amplitude})---which are the analogue of the geometric phase~\cite{berry1984} for this non--Hermitian system---are equal to
\begin{equation}
\langle v_{\pm}|\frac{d}{dt}|u_{\pm}\rangle=\pm\frac{\dot{\bar{a}}}{\bar{a}}\left(\frac{\Omega_{\mathcal{K}}^2-\Omega_\mp^2+\frac{1}{2}A\bar{a}}{\Omega_+^2-\Omega_-^2}\right)\mp\frac{\gamma}{2}\frac{\Omega_{\mathcal{K}}^2-\Omega_\mp^2}{\Omega_+^2-\Omega_-^2}+\frac{d}{dt}\ln\left(\sqrt{\Omega_+^2-\Omega_-^2}\right)
\end{equation}
and
\begin{equation}
\langle v_{\mp}|\frac{d}{dt}|u_{\pm}\rangle=\mp\frac{\dot{\bar{a}}}{\bar{a}}\left(\frac{\Omega_{\mathcal{K}}^2-\Omega_\pm^2+\frac{1}{2}A\bar{a}}{\Omega_+^2-\Omega_-^2}\right)\pm\frac{\gamma}{2}\frac{\Omega_{\mathcal{K}}^2-\Omega_\pm^2}{\Omega_+^2-\Omega_-^2}-\frac{d}{dt}\ln\left(\sqrt{\Omega_+^2-\Omega_-^2}\right).
\end{equation}
Provided the frequencies $\Omega_{\pm}$ are real and the phase factors $\exp(\pm{\rm i}\int_0^t(\Omega_+-\Omega_-)dt')$ oscillate several times over the timescale where $\bar{a}$ changes significantly, the off diagonal elements of the matrix in (\ref{eq:eqm_amplitude}) can be neglected (corresponding to dropping an oscillatory integral) and the solutions for weak damping and slow speaker modulation are
\begin{equation}
|\psi_+\rangle=\left(\begin{matrix}u_{\mathcal{K},+}\\v_+\end{matrix}\right)={\rm e}^{-\int_0^t\frac{\Omega_{\mathcal{K}}^2-\Omega_-^2+\frac{1}{2}A\bar{a}}{\Omega_+^2-\Omega_-^2}\frac{\dot{\bar{a}}}{\bar{a}}dt'}\frac{{\rm e}^{-{\rm i}\int_0^t\Omega_+dt'}{\rm e}^{-\frac{\gamma}{2}\int_0^t\frac{\Omega_+^2-\Omega_{\mathcal{K}}^2}{\Omega_+^2-\Omega_-^2}dt'}}{\sqrt{\Omega_+}\sqrt{\Omega_+^2-\Omega_-^2}}\left(\begin{matrix}\bar{a}(t)\\\Omega_+^2-\Omega_{\mathcal{K}}^2\end{matrix}\right)\label{eq:wkb1}
\end{equation}
and
\begin{equation}
|\psi_-\rangle=\left(\begin{matrix}u_{\mathcal{K},-}\\v_-\end{matrix}\right)={\rm e}^{\int_0^t\frac{\Omega_{\mathcal{K}}^2-\Omega_+^2+\frac{1}{2}A\bar{a}}{\Omega_+^2-\Omega_-^2}\frac{\dot{\bar{a}}}{\bar{a}}dt'}\frac{{\rm e}^{-{\rm i}\int_0^t\Omega_-dt'}{\rm e}^{-\frac{\gamma}{2}\int_0^t\frac{\Omega_{\mathcal{K}}^2-\Omega_-^2}{\Omega_+^2-\Omega_-^2}dt'}}{\sqrt{\Omega_-}\sqrt{\Omega_+^2-\Omega_-^2}}\left(\begin{matrix}\bar{a}(t)\\\Omega_-^2-\Omega_{\mathcal{K}}^2\end{matrix}\right).\label{eq:wkb2}
\end{equation}
Solutions (\ref{eq:wkb1}--\ref{eq:wkb2}) become exact in the limit where there is no change in the speaker output $\bar{a}$, and $\gamma=0$. The adiabatic approximation requires the system to be far from points where the acoustic frequencies change from real to complex. This includes exceptional points where $\Omega_+=\Omega_-$, and points of vanishing frequency $\Omega_\pm=0$. The failure of the approximation close to these points is evident in the divergence of the prefactors in Eqns. (\ref{eq:wkb1}--\ref{eq:wkb2}), just as for the ordinary WKB approximation~\cite{heading2013}.
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{fig-3.pdf}
\caption{Adiabatic approximation (\ref{eq:wkb1}--\ref{eq:wkb2}) to the pressure field $u_{\mathcal{K}}$ with frequency $\Omega_-$. (a--c) Parameters as in Fig.~\ref{fig:dispersion}, with the speaker amplitude modulated from $\bar{a}=6$ to $\bar{a}=-1.2$, as a hyperbolic tangent $\tanh(t/T)$, with $\Omega_{\mathcal{K}}T=20$. (a) Mode (blue: real part, red: imaginary part, green: absolute value) with frequency $\Omega_{-}$ smoothly changes from decay to amplification with time, despite approaching the exceptional point, as shown in panel (c). Numerical integration of (\ref{eq:two_oscillator_system}) shown as dashed lines. (d--f) Parameters as in Fig.~\ref{fig:dispersion}, illustrating the characteristic $1/\sqrt{\Omega_-}$ amplitude suppression as the frequency is increased.\label{fig:wkbgood}}
\end{figure}
\par
For a constant coupling $\bar{a}$ the adiabatic solutions (\ref{eq:wkb1}--\ref{eq:wkb2}) oscillate with frequency $\Omega_{\pm}$ and decay or grow in time as $\exp(-\gamma_{\rm eff,\pm}t)$, where the effective damping constant $\gamma_{\rm eff,\pm}$ is given by
\begin{equation}
\gamma_{\rm eff,\pm}=\pm\frac{\gamma}{2}\left(\frac{\Omega_{\pm}^2-\Omega_{\mathcal{K}}^2}{\Omega_{+}^2-\Omega_{-}^2}\right).\label{eq:gamma_eff}
\end{equation}
This effective damping constant is plotted in Fig.~\ref{fig:dispersion}b, showing that for a negative value of $\bar{a}A$ it can take either sign, while it remains positive for $\bar{a}A>0$. Fig.~\ref{fig:dispersion}b also shows that as the speaker amplitude $\bar{a}$ approaches an exceptional point the effective damping diverges, indicating a failure of the approximation. On the other hand, when $\bar{a}$ equals zero the acoustic mode is decoupled from the single board computer and has an effective damping constant of zero (red line in Fig.~\ref{fig:dispersion}). Moving either side of this point, this mostly acoustic mode is either amplified or damped, depending on the sign of $\bar{a}$. This is the expected behaviour of an acoustic wave coupled to a medium with Lorentzian response when the amplitude of the Lorentzian changes sign, switching from loss to gain.
\par
In Fig.~\ref{fig:wkbgood}a,d we show a comparison between the adiabatic approximation (\ref{eq:wkb1}--\ref{eq:wkb2}) and a direct numerical integration of (\ref{eq:two_oscillator_system}), the two being almost indistinguishable. In Fig.~\ref{fig:wkbgood}a the amplitude $\bar{a}$ changes sign in time and accordingly the $\Omega_{-}$ mode smoothly transitions from exponential decay to exponential growth, as discussed above. Fig.~\ref{fig:wkbgood}d also shows the accuracy of the adiabatic approximation when the amplitude $\bar{a}$ is modulated as a Gaussian, temporarily increasing $\Omega_-$, and decreasing the gap between the frequencies $\Omega_{\pm}$. This increase in frequency causes the wave amplitude to be suppressed by the factor $1/\sqrt{\Omega_-}$ given in Eq. (\ref{eq:wkb2}).
At first sight these results are curious. The adiabatic approximation apparently works well in a regime where $\Omega_-$ is `small' relative to the other frequencies, $\Omega_{+}$, $\omega_0$, and $\Omega_{\mathcal{K}}$. Yet we stated earlier that the approximation ought to break down in this region. The accuracy of the approximation for the $\Omega_-$ mode can be explained through examining the matrix on the right hand side of Eq. (\ref{eq:eqm_amplitude}). To obtain (\ref{eq:wkb1}--\ref{eq:wkb2}) we dropped the off diagonal terms which are proportional to $\sqrt{\Omega_{+}/\Omega_{-}}$ and $\sqrt{\Omega_{-}/\Omega_{+}}$. When $\sqrt{\Omega_{-}/\Omega_{+}}\ll1$ only the top right off--diagonal element of this matrix can be neglected. The resulting triangular matrix has only one eigenvector, which is the mode with frequency $\Omega_-$. Therefore it is the other ($\Omega_+$) mode that is a poor approximation in this regime.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{fig-4.pdf}
\caption{Speaker modulation $\bar{a}(t)$ where the adiabatic approximation (\ref{eq:wkb1}--\ref{eq:wkb2}) fails. Parameters and quantities plotted as in Fig.~\ref{fig:wkbgood}, but for modulation of the coupling between $\bar{a}=6$ and $\bar{a}=-1.38$, the final value being very close to the exceptional point where $\Omega_+$ and $\Omega_-$ are degenerate. Fig.~\ref{fig:dispersion}b shows that the effective damping in the adiabatic approximation becomes negative and divergent at the exceptional point. Accordingly panels (a) and (d) show the adiabatic approximation (solid) overestimates the amplification of the wave (numerical solution of (\ref{eq:two_oscillator_system}), shown as dashed line). \label{fig:wkbbad}}
\end{figure}
\par
Conversely Fig.~\ref{fig:wkbbad} compares adiabatic and numerical solutions when the coupling $\bar{a}$ is modulated to a value that is $99\%$ of $\bar{a}_{\rm ex}$. While numerical and adiabatic solutions agree well before the modulation, they differ significantly as the exceptional point is approached. For instance the adiabatic approximation overestimates the amplification of the wave in both Fig.~\ref{fig:wkbbad}a and Fig.~\ref{fig:wkbbad}d. This is closely related to the well known failure of the adiabatic approximation in non--Hermitian systems, close to an exceptional point~\cite{heading2013,nenciut1992,uzdin2011,berry2011a,berry2011b}. In time varying systems such as our array of digital meta--atoms, slow parameter change does not necessarily correspond to an adiabatic transition between the instantaneous eigenstates.
\section{Non--adiabatic modulation}
\par
In the opposite limit, we can consider an abrupt change in the speaker output amplitude at $t=0$, between two constant values, $\bar{a}_1$ and $\bar{a}_2$. This abrupt change leads to the temporal analogue of reflection, as discussed in e.g.~\cite{mendonca2002}, where the positive and negative frequency solutions to (\ref{eq:two_oscillator_system}) become mixed after the abrupt change. Using Eqns. (\ref{eq:wkb1}--\ref{eq:wkb2}), for $t>0$ we take our field to be a sum over the four approximate solutions to (\ref{eq:two_oscillator_system})
\begin{equation}
|\psi(t>0)\rangle=\alpha_a|\psi_a\rangle+\alpha_b|\psi_b\rangle+\alpha_c|\psi_c\rangle+\alpha_d|\psi_d\rangle
\end{equation}
where the four modes $|\psi_{a,b,c,d}\rangle$ are given by
\begin{equation}
|\psi_{a,b}\rangle=\left(\begin{matrix}\bar{a}_{2}\\\Omega_{2,+}^2-\Omega_{\mathcal{K}}^2\end{matrix}\right){\rm e}^{-{\rm i}\omega_{a,b} t},\quad|\psi_{c,d}\rangle=\left(\begin{matrix}\bar{a}_{2}\\\Omega_{2,-}^2-\Omega_{\mathcal{K}}^2\end{matrix}\right){\rm e}^{-{\rm i}\omega_{c,d}t}\label{eq:sol_abcd}
\end{equation}
where the complex frequencies are given by $\omega_{{}^{a}_{b}}=\pm\Omega_{2,+}-({\rm i}\gamma/2)(\Omega_{2,+}^2-\Omega_{\mathcal{K}}^2)(\Omega_{2,+}^2-\Omega_{2,-}^2)^{-1}$ and $\omega_{{}^{c}_{d}}=\pm\Omega_{2,-}+({\rm i}\gamma/2)(\Omega_{2,-}^2-\Omega_{\mathcal{K}}^2)(\Omega_{2,+}^2-\Omega_{2,-}^2)^{-1}$. Before $t=0$ the field is taken as a single mode, corresponding to the $\Omega_{-}$ root of (\ref{eq:Wpm})
\begin{equation}
|\psi(t<0)\rangle=\left(\begin{matrix}\bar{a}_1\\\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2\end{matrix}\right){\rm e}^{-{\rm i}\omega_0 t}
\end{equation}
where $\omega_0=\Omega_{1,-}+({\rm i}\gamma/2)(\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2)(\Omega_{1,+}^2-\Omega_{1,-}^2)^{-1}$. Applying the continuity of the fields $u_{\mathcal{K}}$ and $v$ and their time derivatives at $t=0$ we can determine the four unknowns $\alpha_{a,b,c,d}$ and thereby find the temporal analogue of the reflection coefficients, which are
\begin{align}
\alpha_{a}&=\frac{(\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2)-\left(\frac{\bar{a}_1}{\bar{a}_2}\right)(\Omega_{2,-}^2-\Omega_{\mathcal{K}}^2)}{\Omega_{2,+}^2-\Omega_{2,-}^2}\frac{\omega_b-\omega_0}{\omega_b-\omega_a}\nonumber\\
\alpha_{b}&=\frac{(\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2)-\left(\frac{\bar{a}_1}{\bar{a}_2}\right)(\Omega_{2,-}^2-\Omega_{\mathcal{K}}^2)}{\Omega_{2,+}^2-\Omega_{2,-}^2}\frac{\omega_0-\omega_a}{\omega_b-\omega_a}\label{eq:alpha_ab}
\end{align}
and
\begin{align}
\alpha_{c}&=\frac{(\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2)-\left(\frac{\bar{a}_1}{\bar{a}_2}\right)\left(\Omega_{2,+}^2-\Omega_{\mathcal{K}}^2\right)}{\Omega_{2,+}^2-\Omega_{2,-}^2}\frac{\omega_0-\omega_d}{\omega_d-\omega_c}\nonumber\\
\alpha_{d}&=\frac{\left(\Omega_{1,-}^2-\Omega_{\mathcal{K}}^2\right)-\left(\frac{\bar{a}_1}{\bar{a}_2}\right)\left(\Omega_{2,+}^2-\Omega_{\mathcal{K}}^2\right)}{\Omega_{2,+}^2-\Omega_{2,-}^2}\frac{\omega_c-\omega_0}{\omega_d-\omega_c}.\label{eq:alpha_cd}
\end{align}
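For completeness, the sketch below transcribes the amplitudes (\ref{eq:alpha_ab}--\ref{eq:alpha_cd}), together with the complex frequencies $\omega_{a,b,c,d}$ and $\omega_0$ defined above, into a short Python function; the function name and argument ordering are ours.
\begin{verbatim}
import numpy as np

def mode_amplitudes(W1m, W1p, W2m, W2p, WK, gamma, a1, a2):
    """Amplitudes alpha_{a,b,c,d}; W1m, W1p, W2m, W2p and WK denote
    Omega_{1,-}, Omega_{1,+}, Omega_{2,-}, Omega_{2,+} and Omega_K."""
    D2 = W2p**2 - W2m**2
    # complex mode frequencies for t > 0
    damp_p = (gamma / 2) * (W2p**2 - WK**2) / D2
    damp_m = (gamma / 2) * (W2m**2 - WK**2) / D2
    wa, wb = +W2p - 1j * damp_p, -W2p - 1j * damp_p
    wc, wd = +W2m + 1j * damp_m, -W2m + 1j * damp_m
    # frequency of the incident Omega_{1,-} mode (t < 0)
    w0 = W1m + 1j * (gamma / 2) * (W1m**2 - WK**2) / (W1p**2 - W1m**2)
    r = a1 / a2
    pre_ab = ((W1m**2 - WK**2) - r * (W2m**2 - WK**2)) / D2
    pre_cd = ((W1m**2 - WK**2) - r * (W2p**2 - WK**2)) / D2
    alpha_a = pre_ab * (wb - w0) / (wb - wa)
    alpha_b = pre_ab * (w0 - wa) / (wb - wa)
    alpha_c = pre_cd * (w0 - wd) / (wd - wc)
    alpha_d = pre_cd * (wc - w0) / (wd - wc)
    return alpha_a, alpha_b, alpha_c, alpha_d
\end{verbatim}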
\begin{figure}[h!]
\includegraphics[width=\textwidth]{fig-5.pdf}
\caption{Reflection and mode conversion after a sudden change in the speaker output $\bar{a}$. Parameters as in Fig.~\ref{fig:dispersion}. Panel (c) shows the logarithm of the four mode amplitudes (\ref{eq:alpha_ab}--\ref{eq:alpha_cd}) after the modulation, where the initial speaker output is $\bar{a}=5$, and the final amplitude is varied. The divergence of $\alpha_{a,b}$ at $\bar{a}=0$ is due to the vanishing of the vectors $|\psi_{a,b}\rangle$ defined in (\ref{eq:sol_abcd}), and does not represent any divergent physical quantity. Panels (a) and (b) show the real (blue), imaginary (red), and absolute value (green) of the pressure field for the two final values of $\bar{a}$ indicated in (c). Results of numerical integration of (\ref{eq:two_oscillator_system}) are shown as dashed lines.\label{fig:reflection}}
\end{figure}
The four distinct modes arise from the dispersion in this system and accordingly the `reflection' coefficients involve both the conventional mixing of forwards and backwards propagating waves (i.e. amplitudes corresponding to the $\omega_{{}^{a}_{b}}$ or $\omega_{{}^{c}_{d}}$ modes alone), and mode conversion (e.g. mixing between $\omega_{{}^{a}_{b}}$ and $\omega_{{}^{c}_{d}}$ modes).
The distinction between reflection and mode conversion is illustrated in Figs.~\ref{fig:reflection}b and c, where far from the exceptional point the dominant amplitudes after the modulation are of the same type (shown in red) as the incident wave (i.e. the modes are mostly the $\Omega_{-}$ root of the dispersion relation, before and after the modulation). Meanwhile, close to the exceptional point (see Figs.~\ref{fig:reflection}a and c), there is much stronger coupling between the different roots of the dispersion relation. As shown in Fig.~\ref{fig:reflection}c, the $\Omega_{2,-}$ and $\Omega_{2,+}$ waves (red and blue, respectively) are excited in similar proportions by changing $\bar{a}$ to a value that is close to the exceptional point value $\bar{a}_{\rm ex}$, defined in (\ref{eq:aex}). Although we have derived the mode amplitudes (\ref{eq:alpha_ab}--\ref{eq:alpha_cd}) using the adiabatic approximation for $t<0$ and $t>0$, Fig.~\ref{fig:reflection} shows excellent agreement with a direct numerical integration of the equations of motion. The agreement cannot be expected to be as good for incidence of the $\Omega_{1,+}$ mode, for the reason explained in the previous section.
\section{Summary and conclusions}
\par
We have developed a model of acoustic mode propagation through an array of digital meta-atoms, active elements that can be programmed to give an almost arbitrary dispersive, time varying response. Taking the system as an array of coupled point--like sources and detectors---where the source output is varied in time---we find the effective medium limit is equivalent to a dispersive medium, where the effective susceptibility is multiplied by a time dependent modulation, $\bar{a}(t)$. Re--writing the dispersive system as a pair of coupled harmonic oscillators, we derived an approximate solution to the equations of motion for a fixed wave number $\mathcal{K}$, valid for the lowest frequency mode when there is slow modulation of $\bar{a}$ and weak damping, $\gamma$. This solution was verified numerically, where the modulation of the output $\bar{a}$ was chosen to smoothly modify the acoustic mode from one that is damped, to one that grows in time. The adiabatic solution also captures the change in wave amplitude when there is a significant change in the mode frequency.
In general the response of the digital meta--atom array is both non--Hermitian and time varying, and accordingly exhibits exceptional points for some choices of output amplitude. We find our adiabatic solution breaks down close to these exceptional points, where there is a strong coupling between the modes. This shows that the intuition that mode conversion requires the system parameters to be varied on the time--scale of the wave period does not hold in non--Hermitian systems, and may be deliberately broken in such active meta--materials. This may be a useful feature in e.g. time varying electromagnetic media when the wave period is shorter than the possible meta--atom response time.
We also examined the evolution of the wave in the opposite limit, where the speaker amplitudes $\bar{a}$ are changed abruptly in time. Using our adiabatic solution either side of the abrupt change and applying continuity at the interface, we derived formulae for the final mode amplitudes, verifying the formulae numerically. These `reflection coefficients' describe both the generation of negative frequency waves (change in propagation direction), and the conversion between different modes of the system. This is analogous to e.g. a spatial interface between two different multi--mode waveguides, where the interface causes both reflection between like modes, and conversion between different waveguide modes.
While the change in propagation direction is already well known in the non--dispersive approximation to time varying media, the dispersive response controls the \emph{number} of modes supported at a fixed wave number $\mathcal{K}$. As a digital meta--atom can be programmed at will, there is the potential to manipulate this multi--mode conversion phenomenon via such time varying active media.
\acknowledgements
SARH acknowledges funding from the Royal Society and TATA (RPG-2016-186) as well as useful comments from Gianluca Memoli and Bryn Davies.
\section{Introduction}
In a stochastic multi-armed bandits (MAB) problem, an agent is repeatedly faced with the choice of sampling from one of $K$ arms, each providing rewards drawn from an unknown distribution. The agent seeks to find a proper balance between exploration and exploitation so as to optimize its objective related to rewards collected during a set of trials. Such a framework has multiple applications such as recommendation systems, advertising, finance and medical trials among others, see \cite{bouneffouf2019survey}. Classic versions of the problem are concerned with maximizing expected rewards, as in \cite{sutton1998introduction}. More recent work substitutes the expected reward maximisation objective with risk minimization. This is motivated by situations where an adverse outcome on any draw can have very detrimental consequences. Such risk averse MAB problems have applications in health science (e.g. medical trials), robotics and energy management, see for instance \cite{galichet2013exploration}. Risk aversion metrics considered in the literature include for instance a mean-variance trade-off criterion \citep{sani2012risk}, or tail risk measures such as in \cite{yu2013sample}. A popular example of a tail risk measure used in the context of risk averse MAB problems is the conditional value-at-risk (CVaR) introduced by \cite{rockafellar2002conditional}, which quantifies the expected loss (i.e. minus the reward) given it exceeds a given quantile of its distribution.
The present work also considers this risk measure, despite other potential choices being available such as the value-at-risk (VaR) or expectiles \citep{newey1987asymmetric}. Other risk-averse MAB papers have also considered the CVaR. Upper confidence bound algorithms in this context are studied by \cite{maillard2013robust}, \cite{cassel2018general} and \cite{khajonchotpanya2021revised}. Alternative arm selection approaches in the context of risk-averse bandits include the max-min approach discussed in \cite{galichet2013exploration}, the successive rejects relying on concentration bound guarantees of \cite{kolla2019risk}, robust estimation-based algorithms in \cite{kagrecha2020statistically}, or Thompson Sampling approaches in \cite{chang2020risk} and \cite{baudry2021optimal}.
Papers from the risk averse MAB stream mainly consider stationary problems where the loss distribution for each arm remains the same over all trials. In practice, several real-life problems involve cases where the loss distributions progressively fluctuate through time.
Such changes in distributions are referred to as non-stationarities. The present work thus fills a gap in the literature by being, to the best of the authors' knowledge, the first to consider non-stationary risk averse multi-armed bandits problems. To tackle such problems, two estimation procedures for the CVaR in the presence of non-stationary losses are proposed. The first one relies on a weighted empirical distribution of losses. The second couples the dual representation of the CVaR with the traditional move-toward-target update formula to estimate the expected auxiliary functions found within the latter representation.
Numerical experiments show that in a non-stationary losses context, the proposed CVaR estimation methods significantly outperform a more naive sample averaging approach that does not account for the evolution of distributions.
The paper is divided as follows. Section \ref{se:CVaRestim} describes the CVaR risk measure and outlines how related estimates of risk can be obtained. The risk averse multi-armed bandits setting is discussed in Section \ref{se:MAB}. Results from numerical experiments assessing the performance of proposed methods for a non-stationary risk averse multi-armed bandits problem are provided in Section \ref{se:NumExp}. Section \ref{se:Conclusion} concludes.
\section{CVaR risk measure and estimation}
\label{se:CVaRestim}
The CVaR measure originally proposed by \cite{rockafellar2002conditional} is meant to measure the average loss in a set of worst-case scenarios, thereby reflecting the severity associated with extreme losses.
For a loss random variable $Z$, denote its cumulative distribution function (cdf) by $F_Z$. The CVaR associated with a confidence level $\alpha$ can be formally defined as
\begin{eqnarray*}
\text{CVaR}_\alpha(Z) &\equiv& \frac{1}{1-\alpha} \int^1_{\alpha} q_u(Z) du, \quad \text{where}
\\ q_\alpha(Z) &\equiv& \inf\{z \in \mathbb{R}: F_{Z}(z) \geq \alpha \}.
\end{eqnarray*}
Typical values for $\alpha$ include $0.9$, $0.95$ and $0.99$. $q_\alpha(Z)$ is the quantile of confidence level $\alpha$ of the distribution of $Z$. If $Z$ is an absolutely continuous random variable (in particular, its distribution has no atoms), then the CVaR possesses the following intuitive representation justifying its interpretation as the expected loss in worst-case scenarios:
\begin{equation*}
\text{CVaR}_\alpha(Z) = E[Z | Z \geq q_\alpha(Z)].
\end{equation*}
Moreover, \cite{rockafellar2002conditional} provide the following dual representation for the CVaR which will subsequently be handy in the present work:
\begin{eqnarray}
\label{dualrep}
\text{CVaR}_{\alpha}(Z) &=& \min\limits_{c \in\mathbb{R}}\mathbb{E} [f^{\text{CVaR}}_{c,\alpha}(Z) ], \quad \text{where}
\\ f^{\text{CVaR}}_{c,\alpha}(z) &\equiv& c + \frac{1}{1- \alpha} (z - c) \mathds{1}_{\{z > c\}}. \notag
\end{eqnarray}
Such dual representation expresses the CVaR through a set of unconditional expectations, which can be evaluated conveniently with typical statistical and reinforcement learning techniques. Furthermore, it is shown in \cite{rockafellar2002conditional} that
\begin{equation*}
\arg\!\min\limits_{c \in\mathbb{R}}\mathbb{E} [f^{\text{CVaR}}_{c,\alpha}(Z) ] = q_\alpha(Z),
\end{equation*}
which means that the minimizing auxiliary constant $c$ is the quantile of level $\alpha$ of the distribution of $Z$.
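As a quick illustration (not part of the original derivation), the dual representation \eqref{dualrep} can be verified by Monte Carlo in a few lines of Python; for $Z\sim\mathcal{N}(0,1)$ and $\alpha=0.9$, the minimizer should lie near $q_{0.9}\approx1.2816$ and the minimum near $\text{CVaR}_{0.9}\approx1.755$. The grid bounds below are arbitrary choices.
\begin{verbatim}
import numpy as np

# Monte Carlo check of the dual representation for Z ~ N(0,1), alpha = 0.9:
# the minimiser should sit near q_0.9 = 1.2816 and the minimum near
# CVaR_0.9 = pdf(1.2816)/0.1 = 1.755.
rng = np.random.default_rng(0)
z = rng.standard_normal(10**6)
alpha = 0.9
cs = np.linspace(0.5, 2.0, 301)                 # grid of candidate c values
objective = [c + np.maximum(z - c, 0.0).mean() / (1 - alpha) for c in cs]
print(cs[int(np.argmin(objective))], min(objective))   # approx (1.28, 1.75)
\end{verbatim}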
\subsection{Estimation with an i.i.d. sample}
A required endeavour in a multi-armed bandits framework is the estimation of the objective function (the CVaR in the present case) applied to a variable $Z$ through a sample $Z_1,\ldots,Z_n$. When the sample contains independent and identically distributed (i.i.d.) observations, a straightforward approach consists in using the empirical distribution of the sample as an approximation of the true distribution of $Z$. For the CVaR risk measure, this leads to the following sample averaging formula \citep[see for instance][]{troop2021bias}:
\begin{equation}
\label{eq:TCEestimsample}
\widehat{\text{CVaR}}^{(avg)}_{\alpha,n}(Z)
= \frac{\sum_{i=1}^{n} Z_{i} \mathds{1}_{ \{Z_i \geq \hat{q}^{n}_{\alpha}(Z) \} }}{\sum_{i=1}^{n} \mathds{1}_{ \{Z_i \geq \hat{q}^{n}_{\alpha}(Z) \} }},
\end{equation}
where the empirical cdf of $Z$ is given by
\begin{equation}
\label{SampleCDFestim}
\hat{F}^{n}_Z(z) \equiv n^{-1} \sum_{s=1}^{n} \mathds{1}_{ \{ Z_s \leq z \} },
\end{equation}
and the empirical distribution quantiles are provided by
\begin{eqnarray*}
\hat{q}^{n}_{\alpha}(Z) \equiv \inf\{z\in\mathbb{R} : \hat{F}^{n}_Z(z) \geq \alpha\} = \underset{i }{\min}\{Z_{(i)} : \hat{F}^{n}_Z(Z_{(i)}) \geq \alpha\} = Z_{(\lceil\alpha n\rceil)},
\end{eqnarray*}
with $Z_{(1)},\ldots, Z_{(n)}$ being the order statistics (i.e. sample observations sorted in a non-decreasing order). \cite{trindade2007financial} study, among other properties, the asymptotic behavior of this empirical distribution-based estimator, and in particular establish its consistency. This CVaR estimation approach is referred to as the \textit{sample averaging} method.
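A minimal Python sketch of the sample averaging estimator \eqref{eq:TCEestimsample} is given below; the function name is ours, and ties are handled by the $\geq$ comparison, as in the formula.
\begin{verbatim}
import numpy as np

def cvar_sample_avg(z, alpha):
    """Sample averaging CVaR estimate: average of the observations lying
    at or above the empirical alpha-quantile Z_(ceil(alpha * n))."""
    z = np.sort(np.asarray(z, dtype=float))
    n = len(z)
    q = z[int(np.ceil(alpha * n)) - 1]   # order statistic, 1-indexed
    return z[z >= q].mean()
\end{verbatim}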
\subsection{Estimation under non-stationarities}
When the distributions underlying observations $Z_1,\ldots,Z_n$ are not identical, there is no single CVaR to estimate. In a non-stationary MAB setting, the required task consists in estimating $\text{CVaR}_\alpha(Z_{n+1})$ based on the available sample, under the assumption that the fluctuation in the distribution of $Z_t$ between any time $t$ and $t+1$ is not too severe.
The main idea considered in the present study, which is also illustrated in \cite{sutton1998introduction}, consists in putting more weight on more recent observations, since the distribution of $Z_{n+1}$ is most likely more similar to that of recent observations $Z_{n}, Z_{n-1}, \cdots,$ rather than to that of observations further away (e.g. $Z_1,Z_2,\ldots$). Therefore, consider weights $w^{(n+1)}_1,\ldots,w^{(n+1)}_n$ associated with observations $Z_1,\ldots,Z_n$ when predicting the CVaR for observation $Z_{n+1}$. Exponential decay could be for instance considered to assign more weight to more recent observations. For some $\lambda>0$, this can be accomplished by setting
\begin{equation*}
w^{(n+1)}_j = (1-\lambda)^{n-j} \frac{1-(1-\lambda)}{1-(1-\lambda)^n}, \quad j=1,\ldots,n,
\end{equation*}
which satisfies $\sum^n_{j=1} w^{(n+1)}_j =1$ and $w^{(n+1)}_{j} = (1-\lambda) w^{(n+1)}_{j+1}, \, j=1,\ldots,n-1$. Therefore, a value of $\lambda$ close to zero leads to nearly uniform weights across all observations, whereas a $\lambda$ close to one puts the majority of the weight on very recent observations.
This leads to a first potential estimator based on a weighted empirical distribution of losses:
\begin{eqnarray}
\widehat{\text{CVaR}}^{(weight)}_{\alpha,n}(Z_{n+1})
&\equiv& \frac{\sum_{i=1}^{n} w^{(n+1)}_i Z_{i} \mathds{1}_{ \{Z_i \geq \tilde{q}^{n}_{\alpha}(Z_{n+1}) \} }}{\sum_{i=1}^{n} w^{(n+1)}_i \mathds{1}_{ \{Z_i \geq \tilde{q}^{n}_{\alpha}(Z_{n+1}) \} }}, \quad \text{where} \label{weightedCVaR}
\\ \tilde{F}^{n}_{Z_{n+1}}(z) &\equiv&\sum_{s=1}^{n} w^{(n+1)}_{s}\mathds{1}_{ \{ Z_s \leq z \} }, \notag
\\ \tilde{q}^{n}_{\alpha}(Z_{n+1}) &\equiv& \inf\{z\in\mathbb{R} : \tilde{F}^{n}_{Z_{n+1}}(z) \geq \alpha\}. \notag
\end{eqnarray}
This method is referred to as the \textit{weighted empirical estimation} approach.
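The weighted empirical estimator \eqref{weightedCVaR} can be sketched in Python as follows; the function name is ours, and the weights are normalized explicitly rather than through the closed-form constant.
\begin{verbatim}
import numpy as np

def cvar_weighted(z, alpha, lam):
    """Weighted empirical CVaR estimate with exponentially decaying
    weights w_j proportional to (1 - lam)**(n - j), j = 1, ..., n."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    w = (1.0 - lam) ** np.arange(n - 1, -1, -1)
    w /= w.sum()                                # normalise to sum to one
    order = np.argsort(z)
    cdf = np.cumsum(w[order])                   # weighted empirical cdf
    q = z[order][np.searchsorted(cdf, alpha)]   # weighted alpha-quantile
    mask = z >= q
    return np.dot(w[mask], z[mask]) / w[mask].sum()
\end{verbatim}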
An issue with this approach is that the stage-$t$ quantile estimate $\tilde{q}^{t}_{\alpha}(Z_{t+1})$ fluctuates across stages $t$. Thus \eqref{weightedCVaR} cannot lead to a simple recursive formula linking $\widehat{\text{CVaR}}^{(weight)}_{\alpha,n}(Z_{n+1})$ and $\widehat{\text{CVaR}}^{(weight)}_{\alpha,n-1}(Z_n)$.
It would be convenient to include exponential weighting within the following recursive approach outlined in \cite{sutton1998introduction}:
\begin{equation}
\label{recursGen}
\text{New estimate} = \text{Past estimate} + \text{Learning rate} \times ( \text{Target} - \text{Past estimate}).
\end{equation}
Advantages of using an algorithm along the lines of \eqref{recursGen} are that (i) it works well in an on-line fashion, e.g. if updates need to be run in parallel and/or in real-time, and (ii) it does not require storing the entire history of incurred losses, unlike \eqref{weightedCVaR}.
Such a recursive update formula is convenient when the objective function to estimate is an expected value. This is where the dual representation of the CVaR becomes useful. Indeed, denote by $\mathcal{E}_{n,c,\alpha}$ the estimate of the quantity $\mathbb{E} [f^{\text{CVaR}}_{c,\alpha}(Z_{n+1}) ]$ found in \eqref{dualrep}. Then, because $\mathcal{E}_{n,c,\alpha}$ estimates an expectation, a recursion of the type \eqref{recursGen} can be obtained through
\begin{equation}
\label{updateDual}
\mathcal{E}_{n+1,c,\alpha} \equiv \mathcal{E}_{n,c,\alpha} + \lambda \left(f^{\text{CVaR}}_{c,\alpha}(Z_n) - \mathcal{E}_{n,c,\alpha}\right), \quad n=1,2,\ldots
\end{equation}
for some $\lambda >0$ with given starting values $\mathcal{E}_{1,c,\alpha}$, which leads to the following estimate of the $\text{CVaR}_\alpha$ due to \eqref{dualrep}:
\begin{equation}
\label{CVaRestimNonstat}
\widehat{\text{CVaR}}^{(recurs)}_{\alpha,n}(Z_{n+1}) \equiv \underset{c \in \mathbb{R}}{\min} \, \mathcal{E}_{n,c,\alpha}.
\end{equation}
Such an estimation approach is referred to as the \textit{dual recursive estimation} method. Similarly to the weighted empirical estimation approach, the higher the value for $\lambda$, the more impact recent observations have on estimates relative to earlier observations. Estimates \eqref{weightedCVaR} and \eqref{CVaRestimNonstat} are not identical; for instance, $\mathcal{E}_{n,c,\alpha}$ for $n>1$ depends on the starting values $\mathcal{E}_{1,c,\alpha}$, unlike $\widehat{\text{CVaR}}^{(weight)}_{\alpha,n}(Z_{n+1})$ in \eqref{weightedCVaR}.
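A minimal Python sketch of the recursion \eqref{updateDual} combined with the minimization \eqref{CVaRestimNonstat} over a finite grid of $c$ values (anticipating the discretization introduced in Section \ref{se:MAB}) is given below; the function name and default starting value are ours.
\begin{verbatim}
import numpy as np

def cvar_dual_recursive(losses, alpha, lam, grid, init=0.0):
    """Dual recursive CVaR estimate: track E_{n,c,alpha} on a grid of c
    values with the move-toward-target update, then minimise over c."""
    grid = np.asarray(grid, dtype=float)
    E = np.full(len(grid), float(init))   # starting values E_{1,c,alpha}
    for z in losses:
        # f_{c,alpha}(z) = c + max(z - c, 0)/(1 - alpha), evaluated on the grid
        target = grid + np.maximum(z - grid, 0.0) / (1.0 - alpha)
        E += lam * (target - E)           # recursion (updateDual)
    return E.min()                        # approximates the minimum over c
\end{verbatim}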
\section{Multi-armed bandits setting \label{se:MAB}}
In the multi-armed bandits setting, there are $K$ arms which can be sampled, and the loss\footnote{Bandits problems are often expressed in terms of rewards. However, due to the risk averse nature of the agent considered herein, we shall refer to losses, which can be understood as minus the rewards.} provided by arm $i$ at stage $t$ (if it is sampled) is denoted $Y_{t,i}$, $t=1,\ldots,T$. Denote by $F_{Y_{t,i}}$ the cdf of this loss. Different goals can be pursued in such a context. In pure exploration problems, the goal is simply to attempt identifying the least risky arm (i.e. the one with the smallest CVaR$_\alpha$, with $\alpha$ being given in the problem) within $T$ trials. In other problems, the losses incurred over the $T$ stages matter in themselves, and the objective is to reduce the amount of extreme losses incurred during the run, where a run is defined as the action of going through all time stages and incurring the associated losses.
To achieve the desired goal, the agent must select on each stage $t$ an arm (i.e. an action), which is denoted by $a_t$. The set of all actions is characterized by a \textit{policy} which maps available information into an action. Thus a policy $\pi = \{\pi_t\}^T_{t=1}$ contains the sequence of mappings $\pi_t : (a_1,Y_{1,a_1},\ldots,a_{t-1},Y_{t-1,a_{t-1}}) \rightarrow p_t$, $t=1,\ldots,T$, where $p_t$ is the random vector containing probabilities of selecting any action in $\{1,\ldots,K\}$ at stage $t$.
For simplicity and due to their popularity, $\epsilon$-greedy policies are considered in the present study. These entail that in a given stage $t$, the greedy action (i.e. the one with the smallest estimated CVaR) is selected with probability $1-\epsilon$, and otherwise with probability $\epsilon$ an action is randomly sampled across all arms (including the greedy one).\footnote{ Due to the possibility of sampling the greedy action when exploring, the true probability of selecting the greedy action is in fact $1-\epsilon + \epsilon/K$. } Defining an i.i.d. sequence $\{H_t\}^T_{t=1}$ of Bernoulli$(\epsilon)$ random variables where $H_t$ is independent from previous losses and actions $\left(a_1,Y_{1,a_1}\right),\ldots,\left(a_{t-1},Y_{t-1,a_{t-1}}\right)$, actions underlying the $\epsilon$-greedy policy can be represented as
\begin{equation*}
a_t = H_t A_t+ (1-H_t) \hat{a}^*_t
\end{equation*}
where $A_t$ is uniformly sampled among $\{1,\ldots,K\}$ independently of previous realized actions and losses, and $\hat{a}^*_t$ is the greedy action defined as
\begin{equation}
\label{greedyaction}
\hat{a}^*_t \equiv \underset{i=1,\ldots,K}{\arg \! \min} \, \widehat{\text{CVaR}}_{\alpha,t-1}(Y_{t,i}).
\end{equation}
Each estimation method for the CVaR therefore leads to a different estimate of the greedy action. Larger values of $\epsilon$ are associated with more exploration, forcing the agent to try non-greedy actions more often to refine knowledge about their distributions, whereas a smaller $\epsilon$ entails more exploitation by generating losses from actions deemed less risky based on current estimates. In the presence of non-stationary losses, exploration is even more important as older estimates eventually become obsolete due to changes in the loss distributions. Note that to form the estimate $\widehat{\text{CVaR}}_{\alpha,t-1}(Y_{t,i})$ in \eqref{greedyaction}, the estimator (either \eqref{eq:TCEestimsample}, \eqref{weightedCVaR} or \eqref{CVaRestimNonstat}) is applied on the set of observed losses provided by arm $i$: $\mathcal{S}^t_{i} \equiv \{ Y_{u,i} : a_u = i, \, u=1,\ldots,t-1 \}$.
The objective in subsequent numerical experiments consists in empirically detailing the performance of the various estimation methods (including the choice of parameter $\lambda$) when embedded in the $\epsilon$-greedy policy. The first strategy considered, which acts as a benchmark, is the application of the sample averaging method \eqref{eq:TCEestimsample}. Algorithm \ref{euclid} provides the pseudo-code for the $\epsilon$-greedy policy under such a method to estimate the CVaR for all arms.
\begin{algorithm}
\caption{Sample averaging estimation algorithm}\label{euclid}
\begin{algorithmic}[1]
\vspace{0.05 cm}
\State \emph{\textbf{Inputs}}: $\alpha\in(0,1)$, $\lambda>0$, $\epsilon \in (0,1)$
\vspace{0.05 cm}
\State \emph{\textbf{Loop} over all stages $t$}
\vspace{0.05 cm}
\State \hspace{0.5 cm} Sample $H_t \sim$ Bernoulli$(\epsilon)$
\vspace{0.05 cm}
\State \hspace{0.5 cm} \textbf{If} $H_t =0$
\vspace{0.05 cm}
\State \hspace{1 cm} $a_t \gets \underset{i=1,\ldots,K}{\arg \!\min} \hspace{0.1 cm} \widehat{\text{CVaR}}^{(avg)}_{\alpha,t-1}(Y_{t,i})$
\vspace{0.05 cm}
\State \hspace{0.5 cm} \textbf{Else}
\vspace{0.05 cm}
\State \hspace{1 cm} Sample $a_t$ uniformly from $\{1,\ldots,K \}$
\vspace{0.05 cm}
\State \hspace{0.5 cm} Observe $Y_{t,a_t}$, the time-$t$ loss from arm $a_t$
\vspace{0.05 cm}
\State \hspace{0.5 cm} Calculate $\widehat{\text{CVaR}}^{(avg)}_{\alpha,t}(Y_{t+1, a_t})$ by applying \eqref{eq:TCEestimsample} on sample $\mathcal{S}_{a_t}^t$
\vspace{0.05 cm}
\State \hspace{0.5 cm} $\widehat{\text{CVaR}}^{(avg)}_{\alpha,t}(Y_{t+1, i}) \gets \widehat{\text{CVaR}}^{(avg)}_{\alpha,t-1}(Y_{t, i})$ for all arms $i \neq a_t$
\end{algorithmic}
\end{algorithm}
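To make the procedure concrete, the following Python sketch implements the $\epsilon$-greedy loop of Algorithm \ref{euclid} with a pluggable CVaR estimator; the helper \texttt{draw\_loss} and the initial sweep through unexplored arms are our own assumptions, not part of the algorithm above.
\begin{verbatim}
import numpy as np

def epsilon_greedy_run(draw_loss, K, T, alpha, eps, estimator, seed=None):
    """One run of the eps-greedy policy with a pluggable CVaR estimator,
    e.g. cvar_sample_avg above (or a lambda wrapping cvar_weighted).
    `draw_loss(t, i)` is a hypothetical helper returning the stage-t
    loss of arm i."""
    rng = np.random.default_rng(seed)
    samples = [[] for _ in range(K)]      # observed losses per arm
    actions = []
    for t in range(T):
        unexplored = [i for i in range(K) if not samples[i]]
        if unexplored:                    # bootstrap: try each arm once
            a = unexplored[0]
        elif rng.random() < eps:          # explore uniformly over all arms
            a = int(rng.integers(K))
        else:                             # exploit: smallest estimated CVaR
            a = int(np.argmin([estimator(s, alpha) for s in samples]))
        samples[a].append(draw_loss(t, a))
        actions.append(a)
    return actions, samples
\end{verbatim}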
For the weighted empirical estimation approach, the algorithm considered is exactly the same as Algorithm \ref{euclid}, except that \eqref{weightedCVaR} is applied on line 9 instead of \eqref{eq:TCEestimsample}.
The performance of the proposed estimation strategy \eqref{updateDual}-\eqref{CVaRestimNonstat}, which accommodates non-stationarities, is also assessed. In theory, \eqref{CVaRestimNonstat} requires having the estimate of $\mathcal{E}^{(i)}_{t,c,\alpha}$ for all $c \in \mathbb{R}$ and for all arms $i$ (each arm must have distinct values in a MAB setting, hence the added superscript). In practice, estimates can only be stored for a finite number of values of $c$. Therefore, a grid $\mathbb{G}\equiv \{c_1,\ldots,c_M \}$ containing all values that are considered for $c$ is introduced, and the following approximation is applied during the implementation of the method:
\begin{equation*}
\underset{c \in \mathbb{R}}{\min} \, \mathcal{E}^{(i)}_{t,c,\alpha} \approx \underset{c \in \mathbb{G}}{\min} \, \mathcal{E}^{(i)}_{t,c,\alpha},
\end{equation*}
which should be valid provided that the grid $\mathbb{G}$ is sufficiently fine and its range is sufficiently large.
Algorithm \ref{euclid2} outlines the pseudo-code for the application of the $\epsilon$-greedy policy used in conjunction with estimation method \eqref{updateDual}-\eqref{CVaRestimNonstat}. The approach considered involves refining values of $\mathcal{E}^{(i)}_{t,c,\alpha}$ for all $c \in \mathbb{G}$ whenever an action $i$ is sampled.
\begin{algorithm}
\caption{Dual recursive estimation algorithm}\label{euclid2}
\begin{algorithmic}[1]
\vspace{0.05 cm}
\State \emph{\textbf{Inputs}}: $\alpha\in(0,1)$, $\lambda>0$, $\epsilon \in (0,1)$, grid $\mathbb{G}$, initial estimates $\mathcal{E}^{(i)}_{1,c,\alpha}$ for all arms $i$ and all $c \in \mathbb{G}$
\vspace{0.05 cm}
\State \emph{\textbf{Loop} over all stages $t$}:
\State \hspace{0.5 cm} Sample $H_t \sim$ Bernoulli$(\epsilon)$
\vspace{0.05 cm}
\State \hspace{0.5 cm} \textbf{If} $H_t =0$
\vspace{0.05 cm}
\State \hspace{1 cm} $a_t \gets \underset{i=1,\ldots,K}{\arg \!\min} \, \underset{c \in \mathbb{G}}{\min} \, \mathcal{E}^{(i)}_{t,c,\alpha}$
\vspace{0.05 cm}
\State \hspace{0.5 cm} \textbf{Else}
\vspace{0.05 cm}
\State \hspace{1 cm} Sample $a_t$ uniformly from $\{1,\ldots,K \}$
\vspace{0.05 cm}
\State \hspace{0.5 cm} Observe $Y_{t,a_t}$, the time-$t$ loss from arm $a_t$
\vspace{0.05 cm}
\State \hspace{0.5 cm} \emph{\textbf{Loop}} over all arms $i\neq a_t$ and all $c \in \mathbb{G}$,
\vspace{0.05 cm}
\State \hspace{1 cm} $\mathcal{E}^{(i)}_{t+1,c,\alpha} \gets \mathcal{E}^{(i)}_{t,c,\alpha}$
\vspace{0.2 cm}
\State \hspace{0.5 cm} \emph{{\textbf{Loop}} over all c $\in \mathbb{G}$}:
\State \hspace{1 cm}$\textit{Target}_c \gets c + \frac{1}{1- \alpha} (Y_{t,a_t} - c) \mathds{1}_{\{Y_{t,a_t} > c\}}$
\vspace{0.05 cm}
\State \hspace{1 cm} $\mathcal{E}^{(a_t)}_{t+1,c,\alpha} \gets \mathcal{E}^{(a_t)}_{t,c,\alpha} + \lambda \left(\textit{Target}_c - \mathcal{E}^{(a_t)}_{t,c,\alpha} \right)$
\end{algorithmic}
\end{algorithm}
\begin{remark}
An important concern extensively studied in the MAB literature is the identification of performance guarantees, for instance through concentration bounds. Concentration bounds on CVaR estimates and related results under various assumptions are provided among others in \cite{brown2007large}, \cite{wang2010deviation}, \cite{kolla2019concentration} and \cite{prashanth2020concentration}. However, given the non-stationary nature of losses considered in the present work, the derivation of general concentration bounds coping with any possible form of non-stationarity is impossible, which explains why a simple exploratory simulation experiment, rather than full-blown mathematical derivations, is used subsequently for performance analysis.
\end{remark}
\section{Numerical Experiments}
\label{se:NumExp}
This section details numerical experiments conducted to assess the performance of the aforementioned policies in the context of a risk averse non-stationary multi-armed bandits problem. The experiments take the form of Monte Carlo simulations and are exploratory (i.e. of limited scope) in nature.
\subsection{Multi-armed testbed setting}
Experiments conducted in this section are analogous to the multi-armed testbed simulation performed in Chapter 2 of \cite{sutton1998introduction}. Several runs of the multi-armed bandits problem are simulated with the various competing policies. Performance metrics for each policy are calculated for each separate run, and are then aggregated across the runs to obtain the performance assessment.
In the experiment, $1,\!000$ runs are produced, with each run containing $2,\!000$ stages (i.e. time steps). $K=8$ arms are considered. The confidence level of the CVaR is set to $\alpha=0.90$. The exploration rate $\epsilon=0.05$ is used in $\epsilon$-greedy policies.
For the dual recursive estimation approach \eqref{updateDual}-\eqref{CVaRestimNonstat}, initial estimates of expected values of auxiliary losses are set to $\mathcal{E}^{(i)}_{1,c,\alpha}=0$ for all arms $i=1,\ldots,K$ and all $c$ in the grid $\mathbb{G}$. The grid $\mathbb{G}$ used in experiments
contains $2,\!000$ equally spaced points ranging from $-100$ to $350$.
The following learning rates/weight decay values for $\lambda$ are tested: $0.01$, $0.05$, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$, $0.7$, $0.8$, $0.9$, $0.95$, $0.99$.
Note that the true initial CVaR for each of the arms is always slightly higher than 0 for loss distributions considered in the experiment as detailed subsequently. Hence, setting $\mathcal{E}^{(i)}_{1,c,\alpha}=0$ for all arms $i=1,\ldots,K$ leads to optimistic initial values, which tends to force some exploration in early steps, see \cite{sutton1998introduction}.
The specification of the evolution of loss distributions across time is provided subsequently. The three CVaR estimation strategies embedded in $\epsilon$-greedy policies that are tested are the sample averaging method \eqref{eq:TCEestimsample}, the weighted empirical estimation \eqref{weightedCVaR} and the dual recursive estimation \eqref{updateDual}-\eqref{CVaRestimNonstat}.
\subsection{Performance metrics}
To assess the performance of the various CVaR estimation methods and corresponding policies, three metrics are considered. At stage $t$ of a given run, the cumulative hit rate $\mathcal{H}_t$ is defined as the percentage of stages in a given run where the action with the smallest current CVaR was sampled:
\begin{equation*}
\mathcal{H}_{t} \equiv t^{-1} \sum_{u = 1}^{t} \mathds{1}_{ \{ a^*_u = a_u\}}, \quad \text{where } a^*_u \equiv \underset{i \in \{1,\ldots,K\} }{\arg\!\min} \, \text{CVaR}_{\alpha}(Y_{u, i}) .
\end{equation*}
A higher cumulative hit rate indicates better identification of the least risky action. A second metric considered is the regret $\mathcal{R}_t$ defined as
\begin{equation*}
\mathcal{R}_{t} \equiv -\sum_{u = 1}^{t} \left(\min\limits_{i \in \{1,\ldots,K\} } \text{CVaR}_{\alpha}(Y_{u,i}) - \text{CVaR}_{\alpha}(Y_{u,a_u}) \right).
\end{equation*}
For enhanced interpretability, the average regret $\bar{\mathcal{R}}_t= t^{-1} \mathcal{R}_t$ is reported instead of the cumulative regret in subsequent experiments.
Finally, an empirical (unconditional) CVaR metric relying on the sample averaging method \eqref{eq:TCEestimsample} is defined as
\begin{equation*}
\mathcal{C}_{t} \equiv \frac{\sum_{u=1}^{t} Y_{u,a_u} \mathds{1}_{ \{ Y_{u,a_u} \geq \hat{q}^{t}_{\alpha}(Y) \} }}{\sum_{u=1}^{t} \mathds{1}_{ \{ Y_{u,a_u} \geq \hat{q}^{t}_{\alpha}(Y) \} }}
\end{equation*}
where
\begin{equation*}
\hat{q}^t_\alpha(Y) \equiv \inf\{y \in \mathbb{R}: \hat{F}^t_{Y}(y) \geq \alpha \}
\end{equation*}
and $\hat{F}^t_{Y}$ is the empirical distribution obtained with the run's loss sample $\{ Y_{1,a_1},\ldots, Y_{t,a_t}\}$.
The empirical CVaR metric represents a measurement of overall risk faced across all trials, instead of looking at performance trial by trial.
To compute the cumulative hit rate and the regret, exact knowledge of the optimal action along with its associated CVaR is required, unlike the empirical CVaR which only depends on observed losses. Note that choosing the action with the smallest CVaR on each stage $t$ does not necessarily correspond to the sequence of actions $\{a_t\}^T_{t=1}$ leading to the smallest empirical distribution CVaR across all stages. The latter could be calculated with a dynamic program relying on knowledge of loss distributions on each stage, see for instance \cite{godin2016minimizing}, but this complex calculation is not pursued here; the main focus is stage by stage performance, and therefore the metric $\mathcal{C}_{t}$ is merely used as an informal performance indicator rather than the objective function to be optimized.
\subsection{Simulation experiment}
\label{se:slowly}
In the simulation experiment, all arms produce losses that are normally distributed, with slowly varying parameters. For each arm, the expected value of the loss distribution is randomly generated at the onset of a run based on a uniform distribution, and it remains fixed for the entire run duration. Meanwhile, the loss distribution standard deviation is also randomly generated at the run onset from a uniform distribution, but it varies progressively on each time step based on an exponential auto-regressive model.
In any given run, such dynamics are summarized by the following equations:
\begin{eqnarray*}
Y_{t,i} &\sim& \mathcal{N}(\mu_{i} , \sigma^2_{t,i}), \quad t=1,\ldots,T,
\\ \sigma_{t,i}^2 &=& \sigma_{t-1,i}^2 \exp (\epsilon_{t,i}), \quad t=1,\ldots,T,
\end{eqnarray*}
with random shocks $\{ \epsilon_{t,i} \}_{t=1}^{T}$, $i=1,\ldots,8$ forming i.i.d. normal sequences driving the evolution of loss distribution variances, thereby generating non-stationarity.
Distributions considered for the generation of random variables $\epsilon_{t,i}$, $t=1,\ldots,T$, $\mu_{i}$ and $\sigma^2_{0,i}$ are exhibited in Table \ref{TablSpecSimul1}. \textit{Unif}(a,b) denotes a uniform distribution on the $[a,b]$ interval.
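A minimal Python sketch of this loss-generating process is given below; the function name is ours, and parameter values are to be drawn as in Table \ref{TablSpecSimul1}.
\begin{verbatim}
import numpy as np

def simulate_arm_losses(T, mu, sigma0, shock_sd, seed=None):
    """Simulate one arm's losses Y_t ~ N(mu, sigma_t^2), where
    sigma_t^2 = sigma_{t-1}^2 * exp(eps_t) and eps_t ~ N(0, shock_sd^2)."""
    rng = np.random.default_rng(seed)
    var = sigma0 ** 2
    y = np.empty(T)
    for t in range(T):
        var *= np.exp(rng.normal(0.0, shock_sd))
        y[t] = rng.normal(mu, np.sqrt(var))
    return y

# Example: one arm with mu ~ Unif(0, 2), sigma_0 ~ Unif(1, 2) as in Table 1
rng = np.random.default_rng(0)
y1 = simulate_arm_losses(2000, rng.uniform(0, 2), rng.uniform(1, 2), 0.08870)
\end{verbatim}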
\clearpage
\begin{table}[h]
\caption{Loss distribution parameters specification}
\label{TablSpecSimul1}
\centering
\small
\begin{tabular}{cccc}
\hline
{\bf arm 1} & {\bf arm 2} & {\bf arm 3} & {\bf arm 4} \\
\hline\\
\vspace{0.05 cm}
$\mu_1 \sim Unif(0, 2)$ & $\mu_2 \sim Unif(0, 2)$ & $\mu_3 \sim Unif(0, 2)$ & $\mu_4 \sim Unif(0, 2)$ \\
$\sigma_{0,1} \sim Unif(1, 2)$ & $\sigma_{0,2} \sim Unif(1, 2)$ & $\sigma_{0,3} \sim Unif(1, 2)$ & $\sigma_{0,4} \sim Unif(1, 2)$ \\
$\epsilon_{t,1}\sim\mathcal{N}(0, 0.08870^2)$ & $\epsilon_{t,2}\sim\mathcal{N}(0, 0.08871^2)$ &$\epsilon_{t,3}\sim\mathcal{N}(0, 0.08872^2)$ &$\epsilon_{t,4}\sim\mathcal{N}(0, 0.08873^2)$
\vspace{0.1 cm} \\
\hline
{\bf arm 5} & {\bf arm 6} & {\bf arm 7} & {\bf arm 8} \\
\hline\\
\vspace{0.05 cm}
$\mu_5 \sim Unif(0, 2)$ & $\mu_6 \sim Unif(0, 2)$ & $\mu_7 \sim Unif(0, 2)$ & $\mu_8 \sim Unif(0, 2)$ \\
$\sigma_{0,5} \sim Unif(1, 2)$ & $\sigma_{0,6} \sim Unif(1, 2)$ & $\sigma_{0,7} \sim Unif(1, 2)$ & $\sigma_{0,8} \sim Unif(1, 2)$ \\
$\epsilon_{t,5}\sim\mathcal {N}(0, 0.08874^2)$ & $\epsilon_{t,6}\sim\mathcal{N}(0, 0.08874^2)$ &$\epsilon_{t,7}\sim\mathcal{N}(0, 0.08872^2)$ &$\epsilon_{t,8}\sim\mathcal{N}(0, 0.08873^2)$
\end{tabular}
\end{table}
Simulation parameters from Table \ref{TablSpecSimul1} are chosen such that the loss distributions for all arms are sufficiently close to each other to allow the optimal arm to change within a run instead of constantly having a single arm dominating the others throughout entire runs.
The evolution of the CVaR for each of the eight arms across the $2,\!000$ stages within a single simulated run is illustrated in Figure \ref{CVaRevolCase1}. Recall that the CVaR$_\alpha$ of a normally distributed random variable $Y$ with mean $\mu$ and standard deviation $\sigma$ is
\begin{equation*}
\text{CVaR}_\alpha(Y) = \mu + \sigma \dfrac{\phi\left(\Phi^{-1}(\alpha) \right)}{1 - \alpha}
\end{equation*}
with $\Phi$ and $\phi$ being respectively the cdf and the density function (pdf) of a standard normal random variable. In Figure \ref{CVaRevolCase1}, each panel corresponds to an arm, with the red color representing the arm that is optimal on a given stage. The optimal arm indeed varies throughout the run.
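This closed-form expression translates directly into Python (using \texttt{scipy.stats.norm}); the function name is ours.
\begin{verbatim}
from scipy.stats import norm

def cvar_normal(mu, sigma, alpha):
    """Closed-form CVaR_alpha of a N(mu, sigma^2) loss."""
    return mu + sigma * norm.pdf(norm.ppf(alpha)) / (1.0 - alpha)

# e.g. cvar_normal(0.0, 1.0, 0.9) is approximately 1.7550
\end{verbatim}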
\begin{figure}[h]
\centering
\caption{Evolution of the loss distribution CVaR$_{0.9}$ for all arms in a simulated run.}
\label{CVaRevolCase1}
\begin{minipage}{.9\linewidth}
\includegraphics[scale = 0.43]{CVaRallArmsRun.png}
\footnotesize
\emph{Notes:} Each panel exhibits the evolution of the loss distribution CVaR$_{0.9}$ for one of the eight arms over the 2,000 stages in a single simulated run. The red color denotes the arm that is optimal (i.e. having the smallest CVaR$_{0.9}$) on a given stage.
\end{minipage}
\end{figure}
\clearpage
First, to assess the impact of the choice of the step size $\lambda$ within the dual recursive estimation method or the weight exponential decay rate $\lambda$ in the weighted empirical estimation, a sensitivity analysis is conducted by comparing the time-$T$ (i.e. final) performance metrics for various choices of $\lambda$. Results are provided in Figure \ref{Fig:PerfStep}. Note that the set of generated losses over all arms and stages in a given run (i.e. $\{Y_{t,i}\}^T_{t=1}$, $i=1,\ldots,8$) is shared by the different values of $\lambda$ and the different estimation methods to reduce the impact of randomness when comparing the performance of different methods and hyperparameters.
\begin{figure}[h]
\includegraphics[scale=.33]{PerformanceAllLambda.png}
\caption{Performance versus parameter $\lambda$ (averaged over 1000 runs)}
\label{Fig:PerfStep}
\begin{minipage}{.95\linewidth}
\footnotesize
\emph{Notes:} Time-$T$ performance metrics are reported for various choices of the parameter $\lambda$ for the three estimation methods under the slowly-varying loss distributions experiment detailed in Section \ref{se:slowly}. Parameter $\lambda$ corresponds to an exponential weight decay rate for the weighted empirical estimation, and to a learning rate for the dual recursive estimation. The orange curve corresponds to the sample averaging method \eqref{eq:TCEestimsample}, the green curve to the weighted empirical estimation method \eqref{weightedCVaR}, and the blue curve to the dual recursive approach \eqref{CVaRestimNonstat}. Left panel: cumulative hit rate $\mathcal{H}_t$. Middle panel: average regret $\bar{\mathcal{R}}_t$. Right panel: empirical CVaR $\mathcal{C}_t$.
\end{minipage}
\end{figure}
The orange curve representing the sample averaging method is flat, since this method does not depend on any step size or weight decay parameter $\lambda$.
For both the dual recursive and weighted empirical estimation methods, values around $\lambda=0.5$ seem to provide near-optimal results with respect to the regret and empirical CVaR performance metrics, and good results for the cumulative hit rate. The value $\lambda=0.5$ is therefore retained for further tests. Compared to values often used in traditional reinforcement learning algorithms, the step size $\lambda=0.5$ could be considered quite large. The good performance associated with this value in the present framework is potentially due to the following: (i) there are not many observations available in the tail of the distribution for each arm, and thus extreme risk quantification requires larger step sizes associated with quicker and stronger updates, and (ii) non-stationarities require the use of larger step sizes than in stationary problems to put larger emphasis on the most recent losses.
The performance of the three algorithms with a step size of $\lambda=0.5$ is now compared. The various performance metrics are reported in Figure \ref{fig:PerfFixedLambda} for all stages $t$ and all three estimation algorithms.
\begin{figure}[h]
\includegraphics[scale= .33]{CumulativePerformance.png}
\caption{Evolution of performance metrics over time stages $t$ with $\lambda = 0.5$ (averaged over 1000 runs)}
\label{fig:PerfFixedLambda}
\begin{minipage}{.95\linewidth}
\footnotesize
\emph{Notes:} Time-$t$ performance metrics are reported over the various stages $t$ for the three estimation methods under the slowly-varying loss distributions experiment detailed in Section \ref{se:slowly}. The orange curve corresponds to the sample averaging method \eqref{eq:TCEestimsample}, the green curve to the weighted empirical estimation method \eqref{weightedCVaR}, and the blue curve to the dual recursive approach \eqref{CVaRestimNonstat}. Left panel: cumulative hit rate $\mathcal{H}_t$. Middle panel: average regret $\bar{\mathcal{R}}_t$. Right panel: empirical CVaR $\mathcal{C}_t$.
\end{minipage}
\end{figure}
The weighted empirical estimation (green curve) and dual recursive (blue curve) algorithms clearly outperform the sample averaging method (orange curve) in the long run, which is not surprising given the presence of non-stationary losses. Moreover, the performance of the dual recursive approach and the weighted empirical approach is quite similar.
Nevertheless, the ability to use the dual recursion formula on-line, without needing to store all observed losses, can make this method preferable for problems with a large number of stages requiring a quick implementation.
\section{Conclusion}
\label{se:Conclusion}
The present paper is the first to introduce action selection methods in the context of non-stationary risk averse multi-armed bandits problems. The objective function considered is the CVaR. Two approaches for its estimation on the various arms are proposed in the context of non-stationarity, one relying on a weighted empirical distribution of losses, and another based on the dual representation of the CVaR involving a recursive update formula. Such a recursive formula is convenient if the multi-armed bandits problem is applied in an on-line setting, as it can be parallelized and it does not require storing the history of incurred losses for all arms. Conversely, the approach based on the weighted loss distributions possesses the advantage of not depending on arbitrary initial estimates. The proposed methods are benchmarked against a sample averaging estimator for the CVaR that disregards the potential non-stationary nature of loss distributions, within an exploratory simulation experiment with slowly evolving loss distributions. The experiment reveals that the two proposed methods clearly outperform the naive sample averaging benchmark when losses are non-stationary. Moreover, the optimal step size $\lambda$ in the dual recursive estimation update formula (or alternatively the optimal weight decay parameter $\lambda$ in the weighted empirical estimation) is quite large, much above values that are often considered in the stationary case where the objective function is the expectation. Finally, for a suitably chosen parameter $\lambda$, the two non-stationary estimation methods seem to produce similar performance, without one clearly dominating the other.
As future research, it could be interesting to see if extreme value theory \citep[see for instance][]{mcneil2015quantitative} could be used in the context of non-stationary risk averse bandits when confidence levels close to one are considered. Such an approach is applied in the context of stationary bandits in \cite{troop2019risk}, but its extension to the non-stationary case might be non-trivial. Time varying thresholds in the peaks-over-threshold approach, as in \cite{kysely2010estimating}, could be contemplated as a method to tackle this problem. Furthermore, a second interesting research avenue would be the derivation of concentration bounds on CVaR estimates based on assumptions limiting the magnitude of loss distribution fluctuations.
\bibliographystyle{apalike}
\section{Introduction}
The radiative feedback of high-mass stars can have a profound impact on the formation of stellar and planetary systems, as well as on the interstellar environment. Ionising radiation has been shown to be able to externally photoevaporate protoplanetary discs \citep{O'dell1994,Scally2001}, to restrict the mass accretion onto the source star \citep{Kuiper2018,Sartorio2019} and to deplete some of the available gas for further star formation \citep{Dale2012,Walch2012,Ali2019,Decataldo2019}. The presence of high-mass stars has also been argued to affect the star formation rate (SFR) in their surroundings. Star formation may be triggered via the collect-and-collapse mechanism, which occurs when the shock swept up by an expanding H~II region collects sufficient amounts of mass to become gravitationally unstable and collapse. This mechanism was first introduced by \cite{Elmegreen1977}, and was expanded by later studies with numerical models, which showed that the masses of the newly produced stars depend on the thickness of the shocked shell \citep{Wunsch2010,Dale2011b}.
Another way of triggering star formation can be achieved through radiation-driven implosion, during which pre-existing cloud cores become unstable under gravity as they get compressed by an expanding H~II region \citep{Kessel-Deynet2003,Mellema2006,Bisbas2011}. However, ionising feedback is mainly expected to have a negative effect on the SFR by altogether dispersing the cloud containing the source stars \citep{Dale2015,Kruijssen2019b,Chevance2020}.
In order to study the above mechanisms, many different methods of incorporating ionising feedback into hydrodynamics simulations have been developed. Early work by \cite{Lucy1977} and \cite{Brookshaw1994} described the propagation of light as a diffusive process, which could be included directly into the hydrodynamics equations. The diffusion approximation only holds for an optically thick medium, and it was later equipped with a flux-limiter in order to handle optically thin environments, giving rise to the flux-limited diffusion method \citep[e.g.][]{Levermore1981}. Other authors have traced rays of light through the diffuse medium and computed the ionisation state of the material along them. This can be achieved either by following the ray from the source to a given point \citep[e.g.][]{Kessel-Deynet2000,Abel2002,Dale2005,Pawlik2008, Wise2011, Haid2019}, or by starting from a chosen point in space and following a ray back to its source \citep{Grond2019}.
In recent years, there have been efforts towards using Monte Carlo Radiative Transfer (MCRT) in combination with hydrodynamics codes, since MCRT treats radiative effects (such as casting of shadows) with high accuracy \citep{Harries2015, Vandenbroucke2018}. Such radiation-hydrodynamics (RHD) schemes have been successfully implemented for grid-based \citep{Harries2015,Smith2017} and moving mesh \citep{Vandenbroucke2018,Smith2020} hydro codes, but are still under development for Smoothed Particle Hydrodynamics (SPH) codes. This is due to the particle nature of SPH and the fact that, with a few exceptions \citep{Forgan2010,Lomax2016}, MCRT algorithms are typically performed on a grid.
In this work we present for the first time a numerical scheme combining SPH and MCRT for modelling ionising radiation ``on the fly''. In order to reconcile the particle- and grid-based descriptions of the diffuse medium we have employed Voronoi grids, which have been successfully used by other authors for post-processing SPH snapshots \citep[e.g.][]{Hubber2016,Koepferl2017}. We test the scheme by successfully applying it to the well-studied problem of the spherical expansion of an H~II region. Furthermore, we perform simulations with varying time step associated with the radiative feedback, spatial resolution, and properties of the interconnecting Voronoi grid, in order to fine-tune the scheme's performance.
Finally, we apply our RHD method to a non-uniform density medium, in order to study the effects that an ionising source would have on the galactic centre cloud G0.253+0.016 (also known as the Brick).
The Brick is a high-density cloud in the Central Molecular Zone (CMZ) of our galaxy, and it is believed to be a likely progenitor of a future high-mass stellar cluster \citep{Longmore2012,Rathborne2015,Walker2015}. Despite its high density, the cloud shows limited signs of ongoing star formation \citep{Walker2021}, and does not yet contain any identified H~II regions. Therefore, the Brick is a unique system that may provide the initial conditions for H~II region formation and early evolution. Furthermore, similarly to the rest of the CMZ, the Brick has high density, pressure and temperature \citep{Ginsburg2016,Krieger2017,Barnes2020}, which are representative features of high-redshift star-forming environments, and hence we can probe the early evolution and impact of H~II regions under conditions characteristic of high-redshift galaxies \citep{Kruijssen2013}. In particular, the following questions of interest arise:
\begin{enumerate}
\item What will be the ionised mass of a newly formed H~II region?
\item How rapidly will the ionised mass increase?
\end{enumerate}
This paper is organised as follows. We describe the numerical setup and the simulation parameters in Section~\ref{sec:numerics}. We then present the simulations in uniform density medium in Section~\ref{sec:results}, followed by the simulations in clumpy medium in Section~\ref{sec:clumpy}. Finally, we summarise our findings and give recommendations for future RHD simulations in Section~\ref{sec:conclusion}.
\section{Numerical setup}
\label{sec:numerics}
\subsection{Code Overview}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/rad-hydro.pdf}
\caption{Code structure of the RHD scheme. During each simulation, \textsc{Phantom} performs the hydrodynamics calculations and outputs its particle parameters in dump files at regular time intervals. We use the ``live analysis'' feature to externally modify some particle properties each time a dump file is created. The modification is performed by calling \textsc{CMacIonize} in its library form and passing on particle positions and smoothing lengths. In addition, our analysis script outputs a parameters file and a Voronoi grid file, which are necessary for CMI to run. Finally, CMI performs a radiative transfer calculation and returns ionic fractions for all particles, which are used for modifying the particles' internal energies.}
\label{fig:code-schematics}
\end{figure}
In this paper, we combine a Smoothed Particle Hydrodynamics (SPH) code and a Monte Carlo Radiative Transfer (MCRT) code in order to produce a radiation-hydrodynamics (RHD) scheme. This is achieved by linking the two codes in the following way. During the runtime of the SPH code its particle distribution is used to construct a Voronoi grid \citep{Voronoi1908}. The particle positions, smoothing lengths and masses are then used for calculating the density of each cell. The grid information is passed on to the MCRT code, which computes the ionic fraction of each grid cell by propagating a large number of photon packets through the Voronoi grid. The ionic fraction is mapped from the grid cells to the SPH particles, and this information is returned to the SPH code. Finally, the particles' internal energies are adjusted according to the ionic fraction. Each of the above steps is explained in greater detail later in this section.
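The structure of one feedback step can be summarised in the short Python sketch below; every helper callable is a hypothetical placeholder for one of the operations just described, not part of the actual \textsc{Phantom} or \textsc{CMacIonize} interfaces.

\begin{verbatim}
# Schematic outline of one radiative-feedback step (all helper callables
# are hypothetical placeholders, not the real Phantom/CMacIonize API).
def radiative_step(particles, sources, dt,
                   build_voronoi_grid, map_density_to_cells, run_mcrt,
                   map_ionic_fraction_to_particles, internal_energy,
                   evolve_hydro):
    grid = build_voronoi_grid(particles.positions)      # one cell per particle
    rho = map_density_to_cells(particles, grid)         # 'M/V', 'Centroid' or 'Exact'
    f_cells = run_mcrt(grid, rho, sources)              # MCRT on the Voronoi grid
    f = map_ionic_fraction_to_particles(f_cells, particles, grid)
    particles.u[f > 0.5] = internal_energy(T=1e4, mu=0.5)  # heat ionised particles
    evolve_hydro(particles, dt)                         # SPH until the next call
\end{verbatim}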
We use the \textsc{Phantom} SPH code \citep{Price2018} for our RHD scheme. As a typical SPH code, it follows a set of Lagrangian particles and integrates the corresponding fluid equations. \textsc{Phantom} was created with a focus on stellar, planetary and galactic astrophysics. The code is well suited for the study of star formation, as it is three-dimensional and it includes the treatment of self-gravity, sink particles and gas-dust mixtures. Additionally, it supports both ideal and non-ideal MHD. Crucially for our work, \textsc{Phantom} can execute an analysis module, which is typically used to post-process the SPH outputs (dump files). There is a ``live analysis'' option, which ensures that the analysis module is executed during \textsc{Phantom}'s runtime.
The MCRT code that we have used is \textsc{CMacIonize} \citep{Vandenbroucke2018}. \textsc{CMacIonize} (CMI) models photoionisation on a variety of 3D grid structures, including Cartesian, AMR and Voronoi grids. The code offers two Voronoi grid construction algorithms: an incremental construction algorithm, based on the publicly available Voro++ library \citep{Rycroft2009}, and a Delaunay tessellation based algorithm, similar to \citet{Springel2010} and \citet{Vandenbroucke2016}. CMI also contains its own radiation hydrodynamics scheme, allowing the user to choose between fixed-grid and moving-mesh hydrodynamics. The code models the photoionisation of hydrogen and helium self-consistently in the energy range between 13.6 and 54.4 eV. The ionisation states of a number of metal ions are modelled approximately, as they contribute to the cooling of the gas. Presently there is no treatment of lower energy photons, dust or radiation pressure (for more information see \citealp{Vandenbroucke2018}). Additionally, CMI exists in a library form, which allows it to be called from other C or Fortran programs.
The MCRT algorithm works as follows \citep[see Sec.~2.1.1 of][]{Vandenbroucke2018}. A large number of photon packets are emitted from one or multiple sources. The properties of each photon packet (i.e. origin, direction and wavelength) are sampled using the Monte Carlo technique. Each photon packet is then propagated through the density grid until a randomly sampled optical depth is reached or until the photon packet exits the grid. The algorithm accumulates the path lengths of the photon packets that pass through each cell to obtain an estimate of the local ionising field \citep{Lucy1999} from which the ionisation state can be calculated. Once a photon packet reaches its sampled optical depth, it is absorbed, and it may be re-emitted if a recombination event occurs locally. This newly emitted photon packet is propagated further if its frequency is high enough to cause ionisation (otherwise the photon packet escapes the simulation without contributing to the ionisation state of the diffuse medium). The photon packets created by recombination events are collectively known as the ``diffuse field''. In the simulations that we present here we omit the diffuse field for consistency with the StarBench test (see Section~\ref{sec:results}). However, it is trivial to take it into account in future work, as it is controlled by a standard \textsc{CMacIonize} input parameter.
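A minimal, self-contained 1D illustration of this path-length estimator is sketched below in Python; the grid geometry and opacities are toy values chosen by us, not \textsc{CMacIonize} internals.

\begin{verbatim}
import numpy as np

# Toy 1D Monte Carlo path-length estimator (in the spirit of Lucy 1999):
# each packet travels until a randomly sampled optical depth is reached,
# and the path length deposited in each traversed cell is accumulated.
rng = np.random.default_rng(42)
n_cells, dx = 100, 1.0
kappa = np.full(n_cells, 0.05)          # absorption coefficient per cell (toy)
path = np.zeros(n_cells)                # accumulated path lengths

for _ in range(10_000):
    tau_target = -np.log(rng.random())  # optical depth drawn from exp(-tau)
    tau = 0.0
    for i in range(n_cells):
        d_tau = kappa[i] * dx
        if tau + d_tau >= tau_target:   # packet absorbed inside cell i
            path[i] += (tau_target - tau) / kappa[i]
            break
        path[i] += dx                   # packet crosses the full cell
        tau += d_tau
# 'path' is proportional to the mean-intensity estimate in each cell
\end{verbatim}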
The process described above, involving the random sampling and propagation of photon packets, is performed iteratively. At the beginning of the first iteration, \textsc{CMacIonize} assumes that all of the grid cells are fully ionised, and they are assigned an appropriately high temperature. After the propagation of the photon packets, the ionic fraction and the temperature of each grid cell are updated. After a number of iterations (typically $\sim 10$), the ionic fractions and temperatures converge. Note that if we run \textsc{CMacIonize} with a diffuse medium consisting only of hydrogen, the temperature of the ionised gas is set to a constant value (usually $10^4$~K), while if we assume a mix of elements, the temperature is computed iteratively. Despite the fact that \textsc{CMacIonize} has its own cell temperatures, we do not take these values into account in our simulations and we leave it up to the hydrodynamics to assign the gas temperatures based on a particle's ionic fraction. This is done for consistency with the StarBench setup.
The connection between the two codes is established by using \textsc{Phantom}'s option for performing live analysis and \textsc{CMacIonize}'s library functionality (see Figure~\ref{fig:code-schematics}). The CMI library is called as part of \textsc{Phantom}'s analysis module, which passes along information about the SPH particle positions, masses and smoothing lengths, and receives the neutral hydrogen fraction of each particle. The module then uses the neutral hydrogen fractions in order to modify the internal energies of the SPH particles in \textsc{Phantom}. Additionally, the analysis module generates a \textsc{CMacIonize} parameters file necessary for the execution of the MCRT, together with a file containing the positions of a set of Voronoi cell generating sites.
This particular way of linking the two codes uses a fixed time step for the radiative feedback, which coincides with the SPH output frequency. Care has been taken when choosing the appropriate time step, and we will address this further in Section~\ref{sec:results}.
\subsection{Voronoi grid}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{figures/voro-coloured-100-2.pdf}
\includegraphics[width=\columnwidth]{figures/voro-coloured-lloyd-100.pdf}
\caption{An example of a 2D Voronoi grid generated around randomly sampled points (\textit{left}), and its regularised counterpart (\textit{right}). In both panels, the grid generating sites are marked with blue circles, and the centroid of each grid cell is marked with a grey circle. The centroid points of the left panel serve as generating sites for the right panel (i.e. one Lloyd iteration has been performed going from left to right). The colour of each grid cell indicates the distance between the generating site and the centroid in units of the cell size.}
\label{fig:voro-example}
\end{figure*}
A Voronoi grid is a structure consisting of polyhedral cells built around a set of generating sites, such that each cell wall bisects the distance between two neighbouring sites \citep{Voronoi1908}. This means that the choice of generating sites uniquely defines the grid and its structural properties. In this work we have used two different choices of generating sites and have compared how the simulation outcome depends on the grid.
The first type of grid that we have used is one where the generating sites coincide with the particle positions. This is the most intuitive and basic way of constructing a Voronoi grid from a set of particles, and it ensures that the concentration of grid cells follows that of the particles. This type of grid, however, has a known issue when large density gradients are present. The cells at the interface between low density and high density material become elongated and under-resolve the interface \citep{Koepferl2017}.
Since a large density gradient is to be expected in our simulations, we have also considered a regularisation of the grid in order to improve the resolution. The regularisation has been performed using Lloyd's algorithm \citep{Lloyd1982}, which is an iterative process. At each iteration the generating sites of the cells are moved to the cell centroids and the grid is subsequently reconstructed. For the simulations presented here we have used five Lloyd's iterations, which have been sufficient to alter the grid structure significantly (see Section~\ref{sec:clumpy}).
A 2D illustration of Lloyd's algorithm is shown in Figure~\ref{fig:voro-example}. In the left panel of the figure, we have randomly sampled x- and y-positions for a set of generating sites (marked with blue circles), and we have constructed a Voronoi grid around them. Note that we have sampled 100 generating sites with coordinates between $-0.5$ and 1.5, but we only show the [0,1] range in x- and y-coordinates, in order to avoid boundary effects. The left panel also contains the positions of the cell centroids, marked with grey circles. We can see that the less circular cells typically have greater distances between their generating sites and their centroids. This is also emphasised by the cell colour, which corresponds to the distance between a cell's generating site (positioned at $\mathbf{r}_{i}$) and its centroid (positioned at $\mathbf{r}_{{\rm c},i}$) divided by the cell size ($A_i^{1/2}$, where $A_i$ is the cell area). In the right panel of Figure~\ref{fig:voro-example} we plot the same grid after one iteration of Lloyd's algorithm. This means that we take the generating sites from the left panel and we move them to the centroids of their corresponding cells. We see that the grid cells in the right panel are a lot more circular than those in the left panel, and have centroids which are much closer to their generating sites.
For both choices of generating sites in our simulations, the grids have been constructed using the direct incremental construction algorithm that is part of \textsc{CMacIonize}, and that is based on the publicly available Voro++ library\footnote{\url{http://math.lbl.gov/voro\%2B\%2B/}} \citep{Rycroft2009}.
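For illustration, the following Python sketch reproduces the essence of this regularisation in 2D using \texttt{scipy.spatial.Voronoi} (rather than the Voro++-based construction used in \textsc{CMacIonize}); leaving boundary cells in place is a simplification of ours.

\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi

def polygon_centroid(pts):
    # Shoelace centroid of a 2D polygon given its ordered vertices.
    x, y = pts[:, 0], pts[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    a = cross.sum() / 2.0
    return np.array([((x + xs) * cross).sum(),
                     ((y + ys) * cross).sum()]) / (6.0 * a)

def lloyd_iteration(sites):
    # One Lloyd iteration: move each site to the centroid of its cell.
    # Cells touching the unbounded exterior are left unchanged.
    vor = Voronoi(sites)
    new_sites = sites.copy()
    for i, reg in enumerate(vor.point_region):
        verts = vor.regions[reg]
        if -1 not in verts and len(verts) > 2:
            new_sites[i] = polygon_centroid(vor.vertices[verts])
    return new_sites

rng = np.random.default_rng(1)
sites = rng.uniform(-0.5, 1.5, size=(100, 2))  # sampling as in the figure above
for _ in range(5):                             # five iterations, as in our runs
    sites = lloyd_iteration(sites)
\end{verbatim}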
\subsection{Density mapping}
Once the Voronoi grid is constructed, we need to assign densities to the cells. We have employed three different types of density mapping, in order to test the robustness of the simulation outcome. The first density mapping takes advantage of the fact that there is one grid cell per particle, and the cell density is given as the particle mass divided by the cell volume, or
\begin{equation}
\rho_{{\rm m},i} = \frac{m_a}{V_i},
\label{eq:MonV}
\end{equation}
where `$a$' is the index of the particle and `$i$' is the index of its corresponding cell. This method (labelled as `M/V') is very quick and easy to compute; however, it is prone to large uncertainties, since there is no direct correspondence between a particle's smoothing length and the volume of its Voronoi cell. Also note that we can only apply it to the non-regularised Voronoi grid, because after Lloyd's algorithm is applied not all cells necessarily contain exactly one particle, and we lose the correspondence between `$a$' and `$i$'.
The second density mapping method (labelled as `Centroid') uses the SPH density estimated at the cell's centroid. The cell density is given by:
\begin{equation}
\rho_{{\rm c},i} = \sum_{a=1}^{N} m_a W(|\mathbf{r}_a - \mathbf{r}_{{\rm c},i}|, h_a),
\label{eq:centroid}
\end{equation}
where `$i$' is the index of the cell with a centroid position at $\mathbf{r}_{{\rm c},i}$, $\mathbf{r}_a$ and $h_a$ are the position and smoothing length of particle `$a$', $N$ is the total number of particles\footnote{Note that when we use a kernel function which goes to zero for large radii (i.e. a kernel function with compact support), only some of the particles have non-zero contributions to the above sum. If $\mathbf{r}_{{\rm c},i}$ were the position of a particle, we would refer to the particles with non-zero contributions as that particle's neighbours. $N$ could then be replaced by $N_{\rm neigh}$ in the above sum. A particle in our simulations has on average 58 neighbours.}, and $W$ is the kernel function. For all of our simulations we have used a cubic spline form for $W$ \citep{Monaghan1985}, which goes to zero for distances to the particle greater than $2h_a$. This mapping method is more consistent with how SPH uses densities, and it takes only slightly longer to compute than the previous method. The downside is that it does not conserve mass when mapping the particle densities onto the cells. Note that the magnitude of the error in the total mass depends on the particle distribution and the Voronoi cell arrangement. Moreover, the error in the total mass is typically different at each ionisation time step, as the particles have rearranged themselves, and it is not cumulative as the simulation progresses.
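For concreteness, a minimal Python sketch of this `Centroid' mapping is given below; the kernel is the standard 3D cubic spline with compact support at $2h$, and the function names are ours rather than part of either code.

\begin{verbatim}
import numpy as np

def w_cubic(r, h):
    # M4 cubic spline kernel in 3D (Monaghan & Lattanzio 1985),
    # normalised so that its volume integral is 1; support ends at r = 2h.
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def centroid_density(r_c, pos, m, h):
    # SPH density estimated at the cell centroid r_c ('Centroid' mapping);
    # pos, m, h are the particle positions, masses and smoothing lengths.
    r = np.linalg.norm(pos - r_c, axis=1)
    return np.sum(m * w_cubic(r, h))
\end{verbatim}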
Finally, we have also used a density mapping (labelled as `Exact'), which computes the exact average SPH density within the grid cell, as derived in \cite{Petkova2018}. The method calculates the integral of the kernel function over the cell volume, and relates it to the cell density by:
\begin{eqnarray}
\rho_{{\rm p},i} & = & \frac{1}{V_i} \int_{V_i} \rho(\mathbf{r'}) \mathrm{d}V' \\
& = & \frac{1}{V_i} \int_{V_i} \sum_{a=1}^N m_a W(|\mathbf{r'}-\mathbf{r}_a|,h_a) \mathrm{d}V'\\
& = & \frac{1}{V_i} \sum_{a=1}^N m_a \int_{V_i} W(|\mathbf{r'}-\mathbf{r}_a|,h_a) \mathrm{d}V'.
\label{eq:exact}
\end{eqnarray}
Unlike the previous density mapping method, this one conserves the total mass when computing cell densities, which ensures better accuracy when performing the radiative transfer. For computational efficiency, the values of the integral of the cubic spline kernel function have been pre-computed and used for interpolation. Even so, this method remains many times slower than the first two (see Appendix~\ref{sec:computing-time}).
\subsection{Ionic fraction mapping}
After the CMI library computes the ionic fraction of each grid cell, these values have to be mapped onto the particles. Analogously to the previous section, we have used three different ways of performing the ionic fraction mapping depending on the choice of density mapping.
The first type of density mapping (`M/V'; given by eq.~\ref{eq:MonV}) assumes that a particle and its corresponding cell contain the same amount of mass, and hence we can assume that they also contain the same amount of ionised mass. This leads to their ionic fractions being identical:
\begin{equation}
f_{{\rm m},a} = f_{i}.
\label{eq:MonV-inv}
\end{equation}
In the second type of density mapping (`Centroid'; eq.~\ref{eq:centroid}) we can think of each particle as contributing to the density of each cell that it overlaps with. If we multiply these density contributions with the corresponding cell volumes and ionic fractions, they become ionised mass contributions. The inverse mapping for this method is then given by:
\begin{equation}
f_{{\rm c},a} = \frac{\sum_{i=1}^{N} f_{i} V_i m_a W(|\mathbf{r}_a - \mathbf{r}_{{\rm c},i}|, h_a)}{\sum_{i=1}^{N} V_i m_a W(|\mathbf{r}_a - \mathbf{r}_{{\rm c},i}|, h_a)}.
\label{eq:centroid-inv}
\end{equation}
Note that the denominator\footnote{It is possible, in rare cases, that the denominator could be zero. This happens if a particle's kernel function does not overlap with the centroid of any of the cells. In these cases, we conservatively assume that the affected particles are neutral. If we have one odd neutral particle within an H~II region, the dynamics of the gas will not change significantly, but even a single ionised particle in the middle of a neutral region can drive some local expansion.} is included in order to ensure that the particle's ionic fraction cannot exceed the value of 1. Additionally, since the corresponding method of density mapping does not conserve mass, this type of ionic fraction mapping does not conserve ionised mass either.
Finally, in the exact density mapping (`Exact'; eq.~\ref{eq:exact}) each particle contributes a fraction of its mass to the grid cells it overlaps with. By multiplying this fractional mass contribution with the ionic fraction of the corresponding grid cell, we can construct a mapping which conserves the ionised mass:
\begin{equation}
f_{{\rm p},a} = \sum_{i=1}^{N} f_{i} \int_{V_i} W(|\mathbf{r'}-\mathbf{r}_a|,h_a) \mathrm{d}V'.
\label{eq:exact-inv}
\end{equation}
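Continuing the sketch above (and reusing \texttt{w\_cubic}), the inverse `Centroid' mapping can be written as follows; the signature is again illustrative only, and the zero-denominator fallback implements the convention of the footnote above.

\begin{verbatim}
import numpy as np   # w_cubic as defined in the earlier sketch

def centroid_ionic_fraction(r_a, h_a, m_a, f_cells, V_cells, r_cent):
    # Inverse 'Centroid' mapping: an ionised-mass-weighted average over
    # the cells whose centroids lie within the particle's kernel support.
    w = V_cells * m_a * w_cubic(np.linalg.norm(r_cent - r_a, axis=1), h_a)
    s = w.sum()
    return float(np.sum(f_cells * w) / s) if s > 0 else 0.0  # neutral fallback
\end{verbatim}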
\subsection{Simulation setup}
We present two sets of simulations -- one reproducing the D-type expansion of an H~II region in a uniform medium, and one ionising a clumpy, star-forming cloud in the galactic centre. For consistency and ease of comparison we have kept most of the physical parameters the same between the two sets (see Table \ref{table:d-type-setup}). The individual setups are presented in more detail in the beginning of Section~\ref{sec:results} and~\ref{sec:clumpy}.
\begin{table}
\caption{Physical parameters for the two sets of simulations.}
\centering
\begin{tabular}{llll}
\hline
Parameter & Units & StarBench test (`SB') & The Brick (`B')\\
\hline
\hline
$\dot Q$ & s$^{-1}$ & $10^{49}$ & $5\times 10^{50}$ \\
$\mu_{\rm o}$ & & 1 & 1 \\
$\mu_{\rm i}$ & & 0.5 & 0.5 \\
$\rho_{\rm o}$ & g cm$^{-3}$ & $5.21 \times 10^{-21}$ & - \\
$T_{\rm i}$ & K & $10^{4}$ & $10^{4}$ \\
$c_{\rm i}$ & km s$^{-1}$ & 12.85 & 12.85 \\
$T_{\rm o}$ & K & $100$ & 65 \\
$c_{\rm o}$ & km s$^{-1}$ & 0.91 & 0.73 \\
\hline
\end{tabular}
\label{table:d-type-setup}
\end{table}
In all simulations, \textsc{CMacIonize} has been run with 10 iterations of $10^6$ photon packets each, in order to compute accurate ionic fractions. We disregard the diffuse field and adopt the case-B recombination coefficient, $\alpha_B = 2.7 \times 10^{-13}$ cm$^3$ s$^{-1}$, which corresponds to an ionised gas temperature of $T_{\rm i} = 10^4$~K \citep{Osterbrock1989}. This assumes that any ionising photon released by a recombination event is absorbed locally (i.e. the so-called ``on-the-spot'' approximation). The photoionisation cross-section is assumed to be $\sigma = 6.3 \times 10^{-18}$ cm$^2$.
For the hydrodynamics we have used a polytropic equation of state with $\gamma=1.00011$ in order to mimic two-temperature isothermal gas, even though the exact value of $\gamma$ does not affect the H~II region expansion significantly, according to the models included in \cite{Bisbas2015}. The gas is composed purely of atomic hydrogen ($\mu_{\rm o} = 1$). After each time \textsc{CMacIonize} is called, the particles with ionic fraction higher than 0.5 have their internal energies set to correspond to a temperature of $T_{\rm i} = 10^4$~K and a mean molecular weight of $\mu_{\rm i} = 0.5$.
This creates a high temperature (and energy) contrast between some neighbouring particles, which is commonly known in SPH as a ``contact discontinuity''. In order to minimise numerical issues that may arise from its presence, we include artificial thermal conductivity (with $\alpha=1$), which was introduced as a necessary numerical correction by \citet{Price2008}.
\begin{table*}
\centering
\caption{List of the RHD simulations included in this paper, and their setup. The particle mass is denoted as $m_{\rm{part}}$, the time step is $\delta t$, and the final simulation time is $t_{\rm{end}}$. The mass of the full simulation box is $M_{\rm{cl}}$, and it has a side of $R_{\rm{cl}}$. Note that for all `SB' simulations the mean initial density, $\rho_{\rm o}$, is kept the same, and hence the ratio $M_{\rm{cl}}/R_{\rm{cl}}^3$ remains constant.}
\begin{tabular}{lcccccccr}
\hline
Simulation name & Initial conditions & $m_{\rm{part}}$ [M$_{\rm{\odot}}$] & $\delta t$ [Myr] & $M_{\rm{cl}}$ [M$_{\rm{\odot}}$] & $R_{\rm{cl}}$ [pc] & $t_{\rm{end}}$ [Myr] & Density mapping & Lloyd iterations\\
\hline
\hline
SB I & glass $128^3$ & $10^{-3}$ & $8.5 \times 10^{-4}$ & 2097 & 3 & 0.14 & Exact & yes\\
SB Ia & glass $512^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 134218 & 12 & 2.5 & Centroid & yes\\
SB II & glass $64^3$ & $10^{-3}$ & $1.7 \times 10^{-4}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB III & glass $64^3$ & $10^{-3}$ & $3.4 \times 10^{-4}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB IV & glass $64^3$ & $10^{-3}$ & $8.5 \times 10^{-4}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB V & glass $64^3$ & $10^{-3}$ & $1.02 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB VI & glass $64^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB VII & glass $64^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Exact & yes\\
SB VIII & glass $64^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Centroid & no\\
SB IX & glass $64^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Centroid & yes\\
SB X & glass $64^3$ & $10^{-3}$ & $1.36 \times 10^{-3}$ & 262 & 1.5 & 0.04 & M/V & no\\
SB XI & glass $64^3$ & $5.12 \times 10^{-1}$ & $2.72 \times 10^{-3}$ & 134218 & 12 & 0.04 & Exact & no\\
SB XII & glass $64^3$ & $5.12 \times 10^{-1}$ & $2.72 \times 10^{-3}$ & 134218 & 12 & 0.04 & Exact & yes\\
SB XIII & glass $64^3$ & $6.4 \times 10^{-2}$ & $2.16 \times 10^{-3}$ & 16777 & 6 & 0.04 & Exact & no\\
SB XIV & glass $64^3$ & $6.4 \times 10^{-2}$ & $2.16 \times 10^{-3}$ & 16777 & 6 & 0.04 & Exact & yes\\
SB XV & glass $64^3$ & $8 \times 10^{-3}$ & $1.71 \times 10^{-3}$ & 2097 & 3 & 0.04 & Exact & no\\
SB XVI & glass $64^3$ & $8 \times 10^{-3}$ & $1.71 \times 10^{-3}$ & 2097 & 3 & 0.04 & Exact & yes\\
SB XVII & glass $128^3$ & $1.25 \times 10^{-4}$ & $1.08 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Exact & no\\
SB XVIII & glass $128^3$ & $1.25 \times 10^{-4}$ & $1.08 \times 10^{-3}$ & 262 & 1.5 & 0.04 & Exact & yes\\
B I & Brick sim. & 0.4463 & $5 \times 10^{-4}$ & 45133 & 6 & 0.04 & Exact & no\\
B II & Brick sim. & 0.4463 & $5 \times 10^{-4}$ & 45133 & 6 & 0.04 & Exact & yes\\
B III & Brick sim. & 0.4463 & $5 \times 10^{-4}$ & 45133 & 6 & 0.04 & Centroid & no\\
B IV & Brick sim. & 0.4463 & $5 \times 10^{-4}$ & 45133 & 6 & 0.04 & Centroid & yes\\
B V & Brick sim. & 0.4463 & $5 \times 10^{-4}$ & 45133 & 6 & 0.04 & M/V & no\\
\hline
\end{tabular}
\label{table:sim-params}
\end{table*}
\section{Simulations in uniform density medium}
\label{sec:results}
In this section, we present a set of simulations of a single ionising source in a uniform density medium (labelled as `SB'; see Table \ref{table:sim-params}).
We adopt the setup of the StarBench test \citep{Bisbas2015}, which established a benchmark by performing simulations with a number of different 1D and 3D hydrodynamics codes coupled with ionising feedback. To reproduce this setup we have constructed a cube of uniform density ($\rho_{\rm o}=5.21 \times 10^{-21}$ g cm$^{-3}$) with a source of constant ionising luminosity ($\dot Q = 10^{49}$ s$^{-1}$) in the centre. The SPH particles have been initially arranged in a glass distribution \citep[available as part of the public simulation code SWIFT;][]{Borrow2018} to reduce numerical noise. All particles are initially at rest and have a temperature of $T_{\rm o} = 100$~K. The simulation volume has a side $R_{\rm{cl}}$ and total mass $M_{\rm{cl}}$, where $M_{\rm{cl}} = \rho_{\rm o} R_{\rm{cl}}^3$, and the total number of particles, $N$, is chosen so that the particle mass is $m_{\rm{part}} \approx 10^{-3}$ M$_{\rm{\odot}}$. We have used periodic boundary conditions to ensure that the cubic volume of gas does not diffuse out due to a pressure gradient. In order to avoid numerical inaccuracies, we have stopped each simulation before the point where the shock reaches the edge of the cube.
The ionising emission of the source creates a spherical H~II region, which undergoes rapid expansion. Determining the time evolution of the H~II region is a well-known problem, which has been studied extensively, both theoretically \citep{Spitzer1978,Hosokawa2006,Raga2012b,Raga2012a,Williams2018} and numerically \citep{Bisbas2015}. We begin this section by reviewing the theoretical work, which lays the foundation for interpreting our simulation results. We then demonstrate that our main simulation (`SB I') is in good agreement with the theory, as well as with previous numerical work \citep{Bisbas2015}. For completeness, we also perform a much longer simulation (`SB Ia'), which matches the `late test' from \citet{Bisbas2015}. In the remaining subsections we present variations\footnote{In all of the simulations derived from `SB I', we only focus on the first 0.04~Myr of the early StarBench test, since this is the time of most rapid expansion and we expect to see the greatest variation with the choice of time step, grid/density mapping and resolution.} of `SB I', in which we alter the ionisation time step (Section~\ref{sec:timestepping}), grid type and density mapping method (Section~\ref{sec:grid&dens}), and resolution (Section~\ref{sec:resolution}). These additional simulations allow us to formulate recommendations for future applications of the radiation-hydrodynamics scheme, which we list in Section~\ref{sec:conclusion}.
\subsection{D-type expansion of an H~II region}
\label{sec:D-type}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/early-test-multiplot-line-IF.pdf}
\caption{Density slices at $z=0$ of a simulation of the early StarBench test, computed with our RHD scheme (`SB I'; see Table \ref{table:sim-params}). Each panel corresponds to a different simulation snapshot, as labelled in the figure. The grey line marks the position of the ionisation front. Note that only one quadrant of the density slices is displayed.}
\label{fig:early-expansion-density}
\end{figure*}
Let us consider an idealised scenario of a pure atomic hydrogen cloud of constant density with an embedded high-mass star. The cloud is initially completely at rest, and we ignore the effects of gravity. As the star begins to emit ionising radiation, an increasingly large spherical region around it becomes ionised, until the amount of ionised gas is large enough that the hydrogen recombination rate balances the ionisation rate. This is achieved at the Str\"omgren radius \citep{Stromgren1939}:
\begin{equation}
R_{\rm{St}}= \left (\frac{3\dot Q m_p^2}{4\pi \alpha_B \rho_{\rm o}^2}\right )^{1/3},
\label{Stromgren}
\end{equation}
where $\dot Q$ is the number of ionising photons emitted by the star per unit time, $m_p$ is the mass of the hydrogen atom, $\rho_{\rm o}$ is the density of the medium and $\alpha_B$ is the recombination coefficient.
The D-type expansion of an H~II region begins after the Str\"omgren radius is reached, and it is caused by the pressure imbalance between the hot, ionised material and the cold, neutral gas, forcing the ionised gas to expand. The expanding material moves supersonically into the neutral gas and sweeps up material to form a thin shock that precedes the ionisation front \citep{Kahn1954}.
The rate of expansion of the H~II region described above has been the subject of many studies. By considering the pressure of the ionised gas and the ram pressure experienced by the neutral gas as the shock sweeps through it, we can arrive at the Spitzer solution for the radius of the ionisation front \citep{Spitzer1978}:
\begin{equation}
R_{\rm{Sp}}(t)= R_{\rm{St}} \left (1 + \frac{7}{4}\frac{c_{\rm i}t}{R_{\rm{St}}}\right )^{4/7},
\label{Spitzer}
\end{equation}
where $c_{\rm i}$ is the sound speed in the ionised medium.
The Spitzer solution was later improved by \citet{Hosokawa2006}, who noticed that the inertia of the shock promotes additional expansion. By solving the equation of motion of the shock, they obtained the Hosokawa-Inutsuka solution \citep{Hosokawa2006}:
\begin{equation}
R_{\rm{HI}}(t)= R_{\rm{St}} \left (1 + \frac{7}{4}\sqrt{\frac{4}{3}}\frac{c_{\rm i}t}{R_{\rm{St}}}\right )^{4/7}.
\label{Hosokawa-Inutsuka}
\end{equation}
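As a quick numerical illustration, the following Python snippet evaluates equations~\ref{Stromgren}, \ref{Spitzer} and~\ref{Hosokawa-Inutsuka} for the `SB' parameters of Table~\ref{table:d-type-setup}; the unit conversions are standard cgs constants.

\begin{verbatim}
import numpy as np

pc, Myr, m_p = 3.086e18, 3.156e13, 1.6726e-24   # cm, s, g
Q, alpha_B = 1e49, 2.7e-13                      # s^-1, cm^3 s^-1
rho_o, c_i = 5.21e-21, 12.85e5                  # g cm^-3, cm s^-1

R_St = (3 * Q * m_p**2 / (4 * np.pi * alpha_B * rho_o**2))**(1.0 / 3.0)
print(R_St / pc)                                # ~0.31 pc

def R_Sp(t):   # Spitzer solution
    return R_St * (1 + 1.75 * c_i * t / R_St)**(4.0 / 7.0)

def R_HI(t):   # Hosokawa-Inutsuka solution
    return R_St * (1 + 1.75 * np.sqrt(4.0 / 3.0) * c_i * t / R_St)**(4.0 / 7.0)

print(R_Sp(0.04 * Myr) / pc, R_HI(0.04 * Myr) / pc)   # ~0.69 and ~0.73 pc
\end{verbatim}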
Equations~\ref{Spitzer} and~\ref{Hosokawa-Inutsuka} provide a good description of the early-time expansion of an H~II region. If they are followed to later times, however, their functional form predicts that the H~II region will continue to expand indefinitely. This is not what we expect, since the ionised material should eventually reach pressure equilibrium with the neutral cloud. \cite{Raga2012a} have addressed this issue by including the thermal pressure of the neutral gas in the equation governing the expansion of the ionisation front, $R_{\rm{IF}}$:
\begin{equation}
\frac{1}{c_{\rm i}}\frac{\mathrm{d}R_{\rm{IF}}(t)}{\mathrm{d}t} = \left (\frac{R_{\rm{St}}}{R_{\rm{IF}}(t)} \right )^{3/4} - \frac{\mu_{\rm i}T_{\rm o}}{\mu_{\rm o}T_{\rm i}} \left (\frac{R_{\rm{St}}}{R_{\rm{IF}}(t)} \right )^{-3/4}.
\label{RagaI}
\end{equation}
In the above, $\mu_{\rm o}$ and $\mu_{\rm i}$ are the mean molecular weights of the neutral and the ionised gas, and $T_{\rm o}$ and $T_{\rm i}$ are the neutral and ionised temperatures, respectively. Typically $\mu_{\rm i}T_{\rm o}/(\mu_{\rm o}T_{\rm i}) \ll 1$, since $T_{\rm i} \gg T_{\rm o}$. Therefore, at early times the term $\mu_{\rm i}T_{\rm o}/(\mu_{\rm o}T_{\rm i}) \left(R_{\rm{St}}/R_{\rm{IF}}(t)\right)^{-3/4}$ can be neglected, and equation~\ref{RagaI} then leads to the Spitzer solution.
\cite{Raga2012b} improved on the work of \cite{Raga2012a} by including the inertia of the expanding shock, as it propagates through the neutral medium. Their consideration leads to an expansion equation:
\begin{equation}
\frac{1}{c_{\rm i}}\frac{\mathrm{d}R_{\rm{IF}}(t)}{\mathrm{d}t} = \sqrt{\frac{4}{3} \left (\frac{R_{\rm{St}}}{R_{\rm{IF}}(t)} \right )^{3/2} - \frac{\mu_{\rm i}T_{\rm o}}{2\mu_{\rm o}T_{\rm i}}}.
\label{RagaII}
\end{equation}
Similarly, at early times the term $\mu_{\rm i}T_{\rm o}/(2\mu_{\rm o}T_{\rm i})$ can be neglected, which is equivalent to neglecting the thermal pressure of the neutral gas. Equation~\ref{RagaII} then leads to the Hosokawa--Inutsuka solution.
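A short sketch integrating equation~\ref{RagaII} numerically with \texttt{scipy}, for the `SB' parameters of Table~\ref{table:d-type-setup}, is given below; it illustrates the equation itself and is independent of our RHD scheme.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

pc, Myr = 3.086e18, 3.156e13
R_St, c_i = 0.314 * pc, 12.85e5        # Stromgren radius, ionised sound speed
ratio = (0.5 * 100.0) / (1.0 * 1e4)    # mu_i T_o / (mu_o T_i)

def dR_dt(t, R):
    arg = (4.0 / 3.0) * (R_St / R)**1.5 - 0.5 * ratio
    return c_i * np.sqrt(np.maximum(arg, 0.0))

sol = solve_ivp(dR_dt, (0.0, 2.5 * Myr), [R_St], rtol=1e-8)
print(sol.y[0, -1] / pc)
# follows the Hosokawa-Inutsuka curve at early times; the stagnation
# radius, where the bracketed term vanishes, lies near 21 pc here
\end{verbatim}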
The late time behaviour of an H~II region was further studied by \cite{Williams2018}, who developed a thick-shell solution capturing the decoupling of the ionisation front and the shock front. Even though this solution was intended for improving the late time evolutionary model of an H~II region, it is also relevant for our early time numerical studies with particle-based hydrodynamics, where we see thick shocks forming early on. \citet{Williams2018} point out that the Hosokawa-Inutsuka solution does not distinguish between the position of the shock front and the position of the ionisation front, since the shock is initially very thin. As we will see further in the paper, the thick shocks produced by the particle-based hydrodynamics will result in slightly smaller H~II region sizes than expected. Finally, \citet{Williams2018} use an acoustic approximation to demonstrate that at first the stagnation radius is exceeded by the expanding H~II region, which subsequently shrinks until it reaches its final size through a strongly damped oscillation.
\subsection{Comparison to StarBench}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionization-radius-early-full_mustang.pdf}
\includegraphics[width=\columnwidth]{figures/ionization-radius-early-full-residuals_mustang.pdf}
\caption{The expansion of an H~II region, following the StarBench setup (`SB I'; see Table \ref{table:sim-params}). \textit{Top:} ionisation front radius as a function of time. The Spitzer and Hosokawa-Inutsuka solutions are shown as dashed lines, the averaged 3D StarBench solution as a solid grey line \citep{Bisbas2015}, and the \textsc{Phantom} + \textsc{CMacIonize} live radiation hydrodynamics scheme as a solid black line. \textit{Bottom:} the relative error of the \textsc{Phantom} + \textsc{CMacIonize} solution compared to the Spitzer (red) and Hosokawa-Inutsuka (blue) solutions. In the label, $R_{\rm code}$ refers to the \textsc{Phantom} + \textsc{CMacIonize} solution, while $R_{\rm th}$ refers to one of the theoretical solutions.}
\label{fig:early-expansion}
\end{figure}
We now demonstrate that the new radiation-hydrodynamics scheme presented in this paper reproduces the well-established StarBench `early test' \citep{Bisbas2015}, using a simulation setup with $N = 128^3$, $m_{\rm{part}} = 10^{-3}$~M$_{\rm{\odot}}$ and ionisation time step $\delta t = 8.5 \times 10^{-4}$~Myr (`SB I'; see Table \ref{table:sim-params}). Figure~\ref{fig:early-expansion-density} shows density slices at $z=0$ of the simulation at different times. We can see that qualitatively the D-type expansion is reproduced correctly, as an initially spherical region expands rapidly and sweeps up a shell of shocked gas. It can also be noticed that the shock is relatively thick even at the early stages of its formation. This is due to the use of SPH, and results in a slight suppression of the ionisation front, as discussed by \citet{Bisbas2015}.
In order to quantify the rate of expansion, we have determined the position of the ionisation front by selecting particles with ionic fractions between 0.2 and 0.8, and averaging their radial distance from the source. Note that there is a sharp transition between fully-ionised and neutral particles as a function of radius, and therefore the exact range of partial ionisation used to determine the position of the ionisation front does not alter the result significantly. This procedure has been repeated for each SPH snapshot to produce the time evolution of the ionisation front presented in Figure~\ref{fig:early-expansion} (the position of the ionisation front is also shown in the four snapshots of Figure~\ref{fig:early-expansion-density} as a grey line). Additionally, the top panel of Figure~\ref{fig:early-expansion} contains the analytic curves of the Spitzer and Hosokawa-Inutsuka solutions (see equations \ref{Spitzer} and \ref{Hosokawa-Inutsuka}), and the average, empirically derived expansion curve from the StarBench paper \citep{Bisbas2015}. In the bottom panel we present the relative error of the ionisation front radius compared to the two theoretical solutions. Note that in the label $R_{\rm code}$ refers to the \textsc{Phantom} + \textsc{CMacIonize} solution, while $R_{\rm th}$ refers to one of the theoretical solutions.
Initially, the expansion of the ionisation front in our simulation happens more slowly than the one predicted by the theoretical models; however, the size of the ionised region soon reaches a value between the Spitzer and Hosokawa-Inutsuka solutions. Similar behaviour can be seen in the average StarBench solution, even though their curve remains slightly above ours at all times (see Figure~\ref{fig:early-expansion}). This slight discrepancy is attributed mainly to the thicker shocks produced by SPH, as already mentioned above, but it could also be affected by the choice of ionisation scheme. In Figure~\ref{fig:early-expansion-density} we see that the peak of the shock and the position of the ionisation front are separated by a medium-density shock tail. A different ionisation scheme might place the ionisation front closer to the shock, making the results more consistent with the StarBench paper.
In addition to the `early test', we also perform the `late test' from the StarBench study (`SB Ia'; see Table \ref{table:sim-params}). The simulation setup uses $N=512^3$ particles with masses $m_{\rm part}=10^{-3}$~M$_{\rm{\odot}}$. The time step is $\delta t = 1.36 \times 10^{-3}$~Myr, and we evolve the simulation for 2.5~Myr, which allows us to observe the equilibrium state of the H~II region. Note that the initial gas temperature in this test ($T_{\rm o} = 1000$~K) is higher than in `SB I' in order for the equilibrium state to be reached sooner.
The late test captures the decoupling of the shock and the ionisation front. While the former continues to propagate through the neutral medium, the latter settles into an equilibrium state. This behaviour can be seen in Figure~\ref{fig:late-expansion-density}, which shows density slices at different times, similarly to Figure~\ref{fig:early-expansion-density} for the early test. The decoupling mentioned above occurs between the second and the third panel of Figure~\ref{fig:late-expansion-density}. In addition, we show the time evolution of the radius of the H~II region in Figure~\ref{fig:late-expansion}. The figure demonstrates an excellent agreement between our results (in black) and the StarBench data (in grey). Furthermore, we see that the expansion curve from our simulation follows the Spitzer and the Hosokawa-Inutsuka solutions at early times. As the shock thickens (at $\sim 0.2$~Myr), the expansion curve deviates from these solutions, as expected.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/late-test-multiplot-line-IF.pdf}
\caption{Density slices at $z=0$ of a simulation of the late StarBench test, computed with our RHD scheme (`SB Ia'; see Table \ref{table:sim-params}). Each panel corresponds to a different simulation snapshot, as labelled in the figure. The grey line marks the position of the ionisation front. Note that only one quadrant of the density slices is displayed.}
\label{fig:late-expansion-density}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionization-radius-late-full.pdf}
\caption{The late-time expansion of an H~II region, following the StarBench setup (`SB Ia'; see Table \ref{table:sim-params}). The Spitzer and Hosokawa-Inutsuka solutions are shown as dashed lines, the averaged 3D StarBench solution as a solid grey line \citep{Bisbas2015}, and the \textsc{Phantom} + \textsc{CMacIonize} live radiation hydrodynamics scheme as a solid black line.}
\label{fig:late-expansion}
\end{figure}
\subsection{Timestepping}
\label{sec:timestepping}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionization-radius-timesteps.pdf}
\includegraphics[width=\columnwidth]{figures/ionization-radius-timesteps-residuals.pdf}
\caption{The early-time expansion of an H~II region, calculated using different ionisation timesteps (`SB II--VI'; see Table \ref{table:sim-params}). The figure axes and the lines have the same meaning as in Figure~\ref{fig:early-expansion}. The lines demonstrate good convergence between the different time steps that have been used.}
\label{fig:timesteps}
\end{figure}
Next we will examine the sensitivity of the simulation outcome to the choice of ionisation time step, $\delta t$. We have performed a series of small-scale simulations ($N = 64^3$, $m_{\rm{part}} = 10^{-3}$ M$_{\rm{\odot}}$), with $\delta t$ ranging between $1.7 \times 10^{-4}$ Myr and $1.36 \times 10^{-3}$ Myr (`SB II' to `SB VI'; see Table \ref{table:sim-params}). Note that all time steps have been chosen to be smaller than $1.7 \times 10^{-3}$ Myr, since using time steps larger than this critical value causes poor energy conservation and prompts errors in \textsc{Phantom}. This is a byproduct of the initial energy discontinuity between the ionised and the neutral medium (see \citealt{Price2018} for more details).
Figure~\ref{fig:timesteps} presents the expansion curves of the aforementioned simulations, and it shows good agreement between them. Within the first few time steps ($t < 0.002$~Myr) the curves overlap completely; a small offset then develops between them by $t \approx 0.007$~Myr. This period of offset development corresponds to the most rapid expansion of the H~II region within the simulations (see the top panel of Figure~\ref{fig:timesteps-explained}). Once formed, the small offset remains between the curves for the rest of the simulation time.
We can explain this good convergence by considering a timestepping condition for the ionisation similar to the Courant criterion \citep{Courant1928}. In order to ensure numerical convergence, we want the ionisation front to expand by at most the local particle/Voronoi cell separation between two consecutive ionisation time steps, i.e.:
\begin{equation}
\delta t < \frac{l_i}{v_{\rm{IF}}}\approx \frac{h_a}{v_{\rm{IF}}}.
\label{eq:dt-constraint}
\end{equation}
In the above, $v_{\rm{IF}}$ is the speed of the ionisation front, $h_a$ is the smoothing length of particle `$a$', located at the periphery of the ionisation front, and $l_i$ is the size of the co-spatial Voronoi cell. The difference between eq.~\ref{eq:dt-constraint} and the Courant criterion is subtle: the latter uses the (absolute or relative) velocity of a particle, while the former considers the speed of the ionisation front, which can advance independently of the particles.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/vel-if.pdf}
\includegraphics[width=\columnwidth]{figures/cell-size.pdf}
\includegraphics[width=\columnwidth]{figures/dt-from-vel.pdf}
\caption{\textit{Top:} velocity of the ionisation front as a function of time. The coloured lines use the analytic form of the ionisation front speed of the Spitzer (red; eq.~\ref{eq:Sp-vel}) and the Hosokawa--Inutsuka (blue; eq.~\ref{eq:HI-vel}) solutions, while the black line has been empirically calculated from `SB VI'. \textit{Middle:} cell size immediately outside the ionisation front as a function of time. The solid line represents the average cube root of the cell volumes and the dashed line is the average cube root of the particle mass over the cell densities. \textit{Bottom:} upper estimate of the allowed ionisation time step, using eq.~\ref{eq:dt-constraint}. The solid and dashed lines have the same meaning as in the middle panel. The grey shaded area contains all of the time steps used for the simulations.}
\label{fig:timesteps-explained}
\end{figure}
In Figure~\ref{fig:timesteps-explained} we have tested that the above condition holds for all of the timestepping test simulations. In the top panel we have plotted $v_{\rm{IF}}/c_{\rm i}$ as a function of time, both calculated as average speed between consecutive snapshots, and estimated theoretically from the Spitzer and Hosokawa-Inutsuka solutions. The estimates have been obtained using equations \ref{Spitzer} and \ref{Hosokawa-Inutsuka}, from which we can write the time derivatives of the H~II region radius as:
\begin{equation}
\dot{R}_{\rm{Sp}}(t)= c_{\rm i} \left (1 + \frac{7}{4}\frac{c_{\rm i}t}{R_{\rm{St}}}\right )^{-3/7},
\label{eq:Sp-vel}
\end{equation}
\begin{equation}
\dot{R}_{\rm{HI}}(t)= \sqrt{\frac{4}{3}} c_{\rm i} \left (1 + \frac{7}{4}\sqrt{\frac{4}{3}}\frac{c_{\rm i}t}{R_{\rm{St}}}\right )^{-3/7}.
\label{eq:HI-vel}
\end{equation}
This results in $\dot{R}_{\rm{Sp}}(t=0)= c_{\rm i}$ and $\dot{R}_{\rm{HI}}(t=0)= \sqrt{4/3} c_{\rm i}$, and decreasing expansion rates for $t>0$. Numerically (and physically), however, the gas is initially at rest. As a result, the initial expansion of the H~II region does not happen as rapidly as predicted, and the calculated value of $v_{\rm{IF}}$ remains lower than $c_{\rm i}$ at all times.
The middle panel of Figure~\ref{fig:timesteps-explained} presents the Voronoi cell size immediately outside the ionisation front as a function of time. These cells are located around the inner edge of the shock and have been selected as having ionic fractions between 0.3 and 0.5. Due to the irregular shapes of Voronoi grid cells, the average cell size has been calculated as the average cube root of the cell volumes or the average cube root of the particle mass over the cell densities. We can see in this panel that there is a gentle increase of the cell sizes with time; however, their values remain close to the initial ones.
Finally, in the bottom panel of Figure~\ref{fig:timesteps-explained}, we have calculated upper estimates for the ionisation time step using equation~\ref{eq:dt-constraint}. The shaded box contains all of the $\delta t$ values used for the simulations and this figure demonstrates that the criterion posed in equation~\ref{eq:dt-constraint} is always met. Using the results of Figure~\ref{fig:timesteps-explained} we also propose using the Courant criterion as a conservative timestepping requirement:
\begin{equation}
\delta t < \frac{1}{c_{\rm i}} \left ( \frac{m_{\rm{part}}}{\rho_{\rm o}} \right ) ^{1/3},
\label{eq:dt-criterion}
\end{equation}
which establishes a direct relationship with the initial physical setup of each simulation.
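For the $m_{\rm{part}} = 10^{-3}$~M$_{\rm{\odot}}$ StarBench runs, this criterion is easily evaluated (standard cgs constants; the comparison with the empirical limit quoted above is ours):

\begin{verbatim}
Msun, Myr = 1.989e33, 3.156e13           # g, s
rho_o, c_i = 5.21e-21, 12.85e5           # g cm^-3, cm s^-1
dt_max = (1e-3 * Msun / rho_o)**(1.0 / 3.0) / c_i
print(dt_max / Myr)   # ~1.8e-3 Myr, of the same order as the
                      # ~1.7e-3 Myr energy-conservation limit noted above
\end{verbatim}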
The timestepping criterion derived above ensures the convergence of the StarBench test as the particle mass is altered. In order for this RHD scheme to be reliably applied to more complex astronomical problems, however, we need to be able to generalise the choice of time step beyond this simple test. The difficulty arises from the fact that typically there are particle motions which are independent of the pressure-driven expansion of the H~II region, and these particle motions can rapidly alter the volume of the ionised material. For example, a clump of dense gas can move at the periphery of the H~II region, changing the local 3D geometry and causing a rapid ionisation of previously unavailable low-density gas, or, alternatively, restricting radiative access to some parts of the H~II region and thus shrinking it.
There is no analytic way of determining the size of the H~II region in a non-uniform medium (otherwise we would not need to perform the radiative transfer at all), and hence we cannot determine the timestepping criterion precisely. Linking the ionisation time step to the Courant criterion \citep{Courant1928}, however, provides a reasonable estimate. Ideally this should happen during the simulation runtime, and the ionisation time step should be allowed to vary dynamically.
\subsection{Choice of grid and density mapping}
\label{sec:grid&dens}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionization-radius-all-grids-dens-residuals.pdf}
\includegraphics[width=\columnwidth]{figures/ionization-radius-all-grids-dens-residuals-h.pdf}
\caption{The early D-type expansion of an H~II region, using different combinations of Voronoi grids and density mapping methods (`SB VI--X'; see Table \ref{table:sim-params}). The plots show the relative error of the \textsc{Phantom} + \textsc{CMacIonize} solutions compared to the Spitzer (red) and Hosokawa-Inutsuka (blue) solutions, expressed as a percentage of the ionisation radius (\textit{top}) and as a multiple of the local smoothing length (\textit{bottom}). The solid lines represent simulations using a basic Voronoi grid, while the dotted lines use a Voronoi grid, which is regularised with Lloyd's iterations. The different shades of each colour correspond to different density mapping methods.}
\label{fig:early-expansion-grids-and-dens}
\end{figure}
In order to study how the choice of Voronoi grid and type of density mapping affect the H~II region expansion, we have performed a set of simulations (`SB VI' -- `SB X'; see Table \ref{table:sim-params}) with $N = 64^3$, $m_{\rm{part}} = 10^{-3}$ M$_{\rm{\odot}}$ and $\delta t = 1.36 \times 10^{-3}$ Myr (the latter in compliance with the timestepping criterion from equation~\ref{eq:dt-criterion}). Since only two of the three types of density mapping (`Exact' and `Centroid') are compatible with both the basic and the regularised Voronoi grid, combining all grids and density mapping methods results in five simulations.
Figure~\ref{fig:early-expansion-grids-and-dens} shows a comparison between the five simulation runs. In the top panel of the figure we can see that all five expansion curves follow the theoretical solutions of Spitzer and Hosokawa-Inutsuka within 5\% and 8\% respectively. Furthermore, the individual curves of the simulations show only minor variation with respect to each other, and hence we have only plotted the residuals with respect to the theoretical solutions to emphasise these small variations.
In the top panel of Figure~\ref{fig:early-expansion-grids-and-dens} we can see that each type of density mapping produces D-type expansion at a slightly different rate (`M/V' expands most rapidly, followed by `Centroid' and finally by `Exact'). The choice between the basic and the regularised Voronoi grid, however, results in no difference in the expansion curve (compare the dotted lines with the solid lines of the same shade). This is not surprising, since these simulations start with a `glass' distribution of particles, which in turn creates regularly shaped Voronoi grid cells.
It remains to be determined whether the differences between the expansion curves, caused by the different types of density mapping, are significant. For this purpose, we have expressed the difference between the radius of the ionisation front of a simulation and that of the theoretical models in terms of the local smoothing length, $h$ (see the bottom panel of Figure~\ref{fig:early-expansion-grids-and-dens}). The smoothing length used in the figure is the average smoothing length of all particles with ionic fractions between 0.2 and 0.8 (which are the same particles used to determine the position of the ionisation front). In the bottom panel of the figure, we can see that all expansion curves are within $\sim 0.2 h$ of each other. This makes the difference between them sub-resolution, and hence we can deduce that neither the density mapping method nor the choice of Voronoi grid has a significant impact on the D-type expansion of an H~II region in a uniform medium.
\subsection{Resolution}
\label{sec:resolution}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/if-expansion-panels-lines_periodic.pdf}
\caption{The early D-type expansion of an H~II region, using initial setup with different spatial resolution (`SB VI--VII',`SB XI--XVIII'; see Table \ref{table:sim-params}). We express the resolution in terms of the smoothing length, where the default resolution is referred to as $h_{\rm o}$. \textit{Top panels:} ionisation front radius as a function of time. The Spitzer and Hosokawa-Inutsuka solutions are shown in dashed red and blue lines. The black lines represent the \textsc{Phantom} + \textsc{CMacIonize} radiation hydrodynamics scheme, with (dotted) and without (solid) using Lloyd's algorithm. \textit{Bottom panels:} the relative error of the \textsc{Phantom} + \textsc{CMacIonize} solutions compared to the Spitzer (red) and Hosokawa-Inutsuka (blue) solutions for simulations with and without Lloyd's algorithm.}
\label{fig:mass-resolution}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/shock-profile-panels-mustang.pdf}
\caption{Radial density profile of the simulations presented in Figure~\ref{fig:mass-resolution} at $t\approx 0.02$ Myr (without using Lloyd's iterations). The grey data points are individual gas particles (from left to right: `SB XVII', `SB VI', `SB XV', `SB XIII', `SB XI'). The Spitzer and Hosokawa-Inutsuka solutions are shown in dashed red and blue lines, and the black lines represent the \textsc{Phantom} + \textsc{CMacIonize} radiation hydrodynamics scheme.}
\label{fig:density-profile}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionised-mass-SB-th.pdf}
\caption{Ionised mass as a function of time for the different resolution tests (`SB VI--VII', `SB XI--XVIII'). The ionised mass is calculated by counting the SPH particles with ionic fractions over 0.5, and assuming that they are fully ionised. The solid lines represent the simulations with basic Voronoi grids, while the dotted lines show the simulations with regularised Voronoi grids. Note that the lines converge to a solution as the spatial resolution is increased. A theoretical prediction of the ionised mass as a function of time, based on the Spitzer solution, is shown in a red dashed line.}
\label{fig:ionised-mass-sb}
\end{figure}
\begin{table}
\centering
\begin{tabular}{cccrl}
\hline
Simulation & $m_{\rm{part}}$ & $h$ & \multicolumn{2}{|l|}{Initially ionised particles}\\
name & $[$M$_{\rm{\odot}}]$ & [pc] & with Lloyd & w/o Lloyd \\
\hline
\hline
SB XI--XII & 0.512 & $2.26 \times 10^{-1}$ & 47 & 38\\
SB XIII--XIV & 0.064 & $1.13 \times 10^{-1}$ & 172 & 173 \\
SB XV--XVI & 0.008 & $5.65 \times 10^{-2}$ & 1357 & 1361 \\
SB VI--VII & 0.001 & $2.82 \times 10^{-2}$ & 10510 & 10508 \\
SB XVII--XVIII & 0.000125 & $1.41 \times 10^{-2}$ & 80781 & 80898 \\
\hline
\end{tabular}
\caption{Parameters of the simulations exploring the effect of different mass resolution. The last two columns contain the number of particles that are initially ionised.}
\label{table:mass-res}
\end{table}
Next we study the effects of resolution on the early--time StarBench test. We have achieved that by keeping the same initial gas density, and changing the particle mass, which naturally leads to variations in the smoothing length size (`SB VI', `SB VII', `SB XI' -- `SB XVIII'; see Table \ref{table:sim-params}). We have performed two simulation runs per resolution, one with basic and one with regularised Voronoi grid. The resolution in SPH is expressed in terms of particle mass, however, we report our results in terms of average initial smoothing length, since spatial resolution is more intuitive. The correspondence between mass resolution and spatial resolution is summarised in Table \ref{table:mass-res}. Another implication of the varied resolution is that the initial Str\"omgren sphere is represented by a different number of particles at each simulation setup (see Table \ref{table:mass-res}). We have adopted ionisation time steps according to equation~\ref{eq:dt-criterion}, which are a function of $m_{\rm{part}}$.
Figure~\ref{fig:mass-resolution} summarises the results of the five different spatial resolution tests. {In it, $h_{\rm o}$ denotes the average smoothing length of the SPH particles in the glass distribution of the original StarBench setup. The figure shows} that using $h = 0.5 h_{\rm o}$ and $h = h_{\rm o}$ produces quite similar outcomes. However, as the smoothing length increases, the solutions start to diverge. In particular, $h = 4 h_{\rm o}$ and $h = 8 h_{\rm o}$ no longer closely follow the Spitzer and Hosokawa--Inutsuka solutions, and their H~II regions expand more slowly than expected. Additionally, the choice of basic or regularised Voronoi grid does not significantly alter the simulation outcome at any chosen resolution.
Figure~\ref{fig:density-profile} presents the radial density profile of the simulations at $t\approx 0.02$ Myr. In the figure we can see that the runs with larger smoothing lengths do not reproduce the expected thin shock (that is to say, a shock much thinner than the size of the ionised region). Since all of the theoretical solutions assume a thin shock, the numerical results naturally deviate from these solutions. Furthermore, the simulation with $h = 8 h_{\rm o}$, in particular, shows a significantly higher average density in the region interior to the shock, which further suppresses the radius of the ionised sphere.
Finally, Figure~\ref{fig:ionised-mass-sb} shows the mass of the ionised gas as a function of time for all of the resolution tests. The masses have been calculated by counting all of the particles with ionic fraction over 0.5 as fully ionised. This definition of the ionised mass gives the exact amount of hot gas in the hydrodynamics, which has a direct effect on the dynamics of the gas, and it is, arguably, the more relevant definition when we look at the time evolution of the H~II region. An alternative approach would be to sum the ionic fractions multiplied by the particle mass. This latter definition produces the amount of ionised gas as calculated by the radiative transfer (for the density mapping methods that preserve mass). The figure demonstrates a clear progression between the different simulations as the resolution is increased, and indicates a convergence towards a specific mass curve (see $h=h_{\rm o}$ and $h=0.5 h_{\rm o}$). We can also see that all runs overestimate the initial ionised mass (as they overestimate the Str\"omgren radius). This is most prominent in the lowest resolution simulations, which overshoot the mass by $\sim 100\%$. This very high initial mass subsequently decreases with time, suppressed by a very thick shock, until it begins to increase after about 0.02 Myr. The initial overestimate of the ionised mass happens because the simulations with $h=8h_{\rm o}$ initially ionise only about 40 particles, and hence carry a substantial numerical error in the ionised mass.
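For concreteness, the sketch below contrasts the two definitions of the ionised mass discussed above; the particle mass and ionic fractions are illustrative values rather than actual simulation output.
\begin{verbatim}
import numpy as np

# Two definitions of the ionised mass for equal-mass SPH particles
# with ionic fractions x in [0, 1] (illustrative values only).
m_part = 0.001                              # particle mass [Msun]
x = np.array([0.02, 0.4, 0.55, 0.9, 1.0])   # ionic fractions

# Definition used in the figures: particles with x > 0.5 count as
# fully ionised (the hot gas actually seen by the hydrodynamics).
m_ion_hydro = m_part * np.sum(x > 0.5)

# Alternative: mass-weighted sum of ionic fractions, i.e. the
# ionised mass as seen by the radiative transfer (for
# mass-conserving density mappings).
m_ion_rt = m_part * np.sum(x)

print(m_ion_hydro, m_ion_rt)
\end{verbatim}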
\citet{Bisbas2015} have expressed the mass of the ionised gas from theoretical considerations as:
\begin{equation}
M_{\rm{ion}} = \frac{4\pi}{3} \rho_{\rm o} R_{\rm{St}}^{3/2} R_{\rm{IF}}(t)^{3/2}.
\label{eq:mass-ion}
\end{equation}
If we substitute the Spitzer (eq.~\ref{Spitzer}) or the Hosokawa--Inutsuka solution (eq.~\ref{Hosokawa-Inutsuka}) for $R_{\rm{IF}}(t)$, we find that $M_{\rm{ion}} \propto t^{6/7}$. We have plotted the theoretically predicted ionised mass using the Spitzer solution in Figure~\ref{fig:ionised-mass-sb} as a red dashed line. We have used the Spitzer solution and not that of Hosokawa--Inutsuka, because the former provides a better fit to our simulations. Even though the dependence of the mass on time is sub-linear, for the small values of $t$ considered here the curve is visually indistinguishable from a straight line. Figure~\ref{fig:ionised-mass-sb} shows that the slopes of our simulations (and especially the higher resolution ones) follow the slope of the theoretical solution after a period of initial adjustment. This adjustment period is longer for the lower resolution simulations. In particular, for $h=2h_{\rm o}$ this period is $\approx 0.017$ Myr, for $h=4h_{\rm o}$ it is $\approx 0.033$ Myr, and for $h=8h_{\rm o}$ the slope is not reached at all within the first $\approx 0.040$ Myr of evolution. The agreement between the simulated and the theoretically predicted slope is expected, since the slope is set by the sound speed, the initial gas density and the Str\"omgren radius, which match between simulations and theory. Note that we do not expect the simulations to converge fully to the red line, as we do not expect them to converge fully (especially at very early times, $t < 0.007$ Myr) to the Spitzer solution.
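As an illustration, the sketch below evaluates equation~\ref{eq:mass-ion} along the standard form of the Spitzer expansion law, $R_{\rm IF}(t) = R_{\rm St}\,(1 + 7 c_{\rm i} t / 4 R_{\rm St})^{4/7}$; the numerical parameter values (Str\"omgren radius, ionised sound speed, ambient density) are assumed, StarBench-like numbers and are not taken from our tables.
\begin{verbatim}
import numpy as np

# Theoretical ionised-mass curve: Spitzer expansion law combined
# with eq. (mass-ion). Parameter values below are assumed,
# StarBench-like numbers (not from this paper's tables).
pc, Myr, Msun = 3.086e18, 3.156e13, 1.989e33   # cgs conversions

R_St  = 0.31 * pc     # initial Stromgren radius (assumed)
c_i   = 12.85e5       # ionised-gas sound speed [cm/s] (assumed)
rho_o = 5.2e-21       # ambient density [g/cm^3] (assumed)

t = np.linspace(0.0, 0.04, 200) * Myr
R_IF  = R_St * (1.0 + 7.0 * c_i * t / (4.0 * R_St)) ** (4.0 / 7.0)
M_ion = (4.0 * np.pi / 3.0) * rho_o * R_St**1.5 * R_IF**1.5 / Msun

# At late times R_IF ~ t**(4/7), hence M_ion ~ t**(6/7): sub-linear,
# yet nearly straight over the short time span considered here.
print(f"M_ion(t = 0.04 Myr) ~ {M_ion[-1]:.0f} Msun")
\end{verbatim}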
Using the above results, we can estimate the resolution necessary for producing a converged initial size of the ionised region. If we assume that reproducing the Str\"omgren sphere radius to within a few per cent constitutes satisfactory accuracy, then we need to choose the resolution such that we have at least a few thousand initially ionised particles. Such resolution is sufficient for a uniform-density simulation; however, we cannot easily test whether this estimate holds for simulations of a clumpy medium.
By contrast, if we want to accurately model the structure of the shock (provided that one exists) swept up by an H~II region, we need to resolve the thickness of the shock. We can see in Figure~\ref{fig:density-profile} that even the smallest smoothing length considered here (corresponding to $\sim 8 \times 10^4$ initially ionised particles) does not guarantee a converged shock profile.
\section{Simulations in a clumpy medium}
\label{sec:clumpy}
We expand the parameter space exploration of the previous section by now considering simulations of a single source in a clumpy medium (see simulations labelled as `B' in Table~\ref{table:sim-params}).
To recreate an astronomically realistic density structure, we have used a simulation of the galactic centre cloud known as the Brick \citep{Dale2019,Kruijssen2019}. We have chosen a simulation snapshot corresponding to the present-day location of the Brick along its orbit, and we have selected the particles contained in a $R_{\rm{cl}}=6$ pc cube around the densest region. We have also shifted the origin of the coordinate system of the simulation to the centre of the cube for simplicity. This results in a setup with $N=101128$ particles, $m_{\rm{part}} = 0.4463$ M$_{\rm{\odot}}$ and a total mass of about $M_{\rm{cl}}=45000$ M$_{\rm{\odot}}$. The mean particle velocity has been subtracted from all individual particles' velocities in order to remove the orbital motion of the cloud. All particles have an initial temperature $T_{\rm o} = 65$~K, which was used by \citet{Dale2019} and \citet{Kruijssen2019}, and is also consistent with the observed temperature of the Brick cloud \citep{Ao2013,Ginsburg2016,Krieger2017}. We have used one source of ionising radiation with $\dot Q = 5 \times 10^{50}$ s$^{-1}$, which corresponds to a young stellar population with a mass of $\sim10^4~\rm{M_{\odot}}$ \citep{leitherer14}. The source is located either in a density peak ($x=1.5$ pc, $y=-0.5$ pc, $z=-0.51$ pc; local density $3.6 \times 10^{-19}$~g/cm$^3$), or in a lower-density environment ($x=0$ pc, $y=0.3$ pc, $z=-0.51$ pc; local density $1.8 \times 10^{-22}$~g/cm$^3$). The positions of the sources are marked with stars in Figure~\ref{fig:clumpy-hii-IC}.
Similarly to the first set of simulations, the source ionises a certain volume of gas, which then expands. In contrast to the first set of simulations, the ionised volume is not spherically symmetric. We have explored the expansion of the H~II region using two different positions of the source, as well as different grid types and density mapping methods. Unlike in the previous section, we have not varied the ionisation time step or the resolution. The former choice has been made because we have already derived a timestepping criterion from the uniform-density simulations (see eq.~\ref{eq:dt-criterion}). The latter choice is due to the fact that the resolution of a clumpy medium cannot be easily altered. Additionally, we vary the ionising luminosity of the source in the range $\dot Q = 10^{50}-10^{51}$ s$^{-1}$, and calculate the corresponding initially ionised mass. Finally, we discuss the physical implications of the presence of an ionising source on the host cloud.
\subsection{Time evolution}
\label{sec:clumpy-timestep}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{figures/brick-IC.pdf}
\includegraphics[width=\columnwidth]{figures/brick-IC-cross.pdf}
\caption{Selected cubic volume from a simulation snapshot of the Brick, used as initial conditions for the RHD scheme \citep{Dale2019,Kruijssen2019}. \textit{Left:} gas column density integrated along the z-axis. \textit{Right:} volume density of the gas in the plane of $z=-0.51$ pc. The two stars mark the positions of the ionisation sources that have been used.}
\label{fig:clumpy-hii-IC}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{figures/brick-panels-dens.pdf}
\includegraphics[width=\columnwidth]{figures/brick-panels-energy.pdf}
\caption{Cross-section images at $z=-0.51$ pc of volume density (\textit{left side}) and specific internal energy (\textit{right side}) of the Brick RHD simulation `B II' (see Table \ref{table:sim-params}) at different times, shown on the panels. The source has been embedded in a dense clump and its location is marked with a star.}
\label{fig:clumpy-hii-expansion}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{figures/brick-panels-ld-dens.pdf}
\includegraphics[width=\columnwidth]{figures/brick-panels-ld-energy.pdf}
\caption{Same as Figure~\ref{fig:clumpy-hii-expansion}, except that the source is located in a low-density environment.}
\label{fig:clumpy-hii-expansion-ld}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ionised-mass-lloyd.pdf}
\includegraphics[width=\columnwidth]{figures/ionised-mass-ld-lloyd.pdf}
\caption{Ionised mass as a function of time for the Brick simulations (`B I--V'; see Table \ref{table:sim-params}) with an embedded source (\textit{top}) and the simulations with a source in a low-density environment (\textit{bottom}). The solid lines represent runs using the basic Voronoi grid, while the dotted lines correspond to runs with regularised Voronoi grids. The different colours show the type of density mapping used: $\rho_{\rm p}$ (eq.~\ref{eq:exact}) in black, $\rho_{\rm c}$ (eq.~\ref{eq:centroid}) in blue, and $\rho_{\rm m}$ (eq.~\ref{eq:MonV}) in red.}
\label{fig:clumpy-ionised-mass}
\end{figure}
We have used the Brick setup to perform ten different simulations: each of the setups labelled `B' in Table \ref{table:sim-params} is run twice. For half of them the source is embedded in a clump, and for the other half the source is in a lower-density environment (see Figure~\ref{fig:clumpy-hii-IC}). The five simulations per source position probe combinations of the two different Voronoi grids and the three different density mapping methods. We have not altered the resolution of the simulations, as this is difficult to achieve with non-uniform initial conditions. The non-uniform setup has also created difficulties in estimating the ionisation time step in advance. We have used the value of $\delta t = 5 \times 10^{-4}$ Myr. In order to ensure the suitability of this choice, we have computed the right-hand side of equation~\ref{eq:dt-criterion} for each fully or partially ionised particle during runtime (using the particle's density) and have confirmed that these values were always larger than our time step.
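A schematic version of this runtime check is sketched below. The right-hand side of eq.~\ref{eq:dt-criterion} is treated as a black-box callable, and the lambda in the usage example is a purely hypothetical stand-in for it, not the actual expression.
\begin{verbatim}
# Runtime verification that the adopted ionisation time step
# satisfies the criterion for every (partially) ionised particle.
def check_timestep(particles, dt, dt_criterion):
    """dt_criterion: callable returning the RHS of the timestep
    criterion for one particle (treated as a black box here)."""
    return all(dt < dt_criterion(p) for p in particles
               if p["ionic_fraction"] > 0.0)

# Usage sketch with a hypothetical criterion (illustrative only):
parts = [{"rho": 1e-20, "ionic_fraction": 0.8},
         {"rho": 1e-22, "ionic_fraction": 0.1}]
ok = check_timestep(parts, dt=5e-4,
                    dt_criterion=lambda p: 1e-3 * (p["rho"] / 1e-20) ** -0.5)
print(ok)
\end{verbatim}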
Figures \ref{fig:clumpy-hii-expansion} and \ref{fig:clumpy-hii-expansion-ld} illustrate the expansion of the simulated ionised region with time, when we use the exact density mapping and a regularised Voronoi grid (`B II'). We can see that when the radiation source is located in a dense clump, it initially ionises a small amount of high-density gas. Through its expansion, the ionised gas sweeps up a shock, similarly to the StarBench test. In contrast, the source located in the lower-density environment immediately fills the inter-clump space with ionised material and its H~II region undergoes a gentle expansion.
We can see the dynamical signatures of these two scenarios by looking at the gas density (left-hand panels of Figures~\ref{fig:clumpy-hii-expansion} and~\ref{fig:clumpy-hii-expansion-ld}). For clarity, we have also included the specific internal energy of the particles in the right-hand panels of Figures~\ref{fig:clumpy-hii-expansion} and~\ref{fig:clumpy-hii-expansion-ld}. Since the ionised areas have significantly higher specific internal energies than the neutral ones, the former appear in light blue and white.
Additionally, if we look at the density peaks in Figures~\ref{fig:clumpy-hii-expansion} and~\ref{fig:clumpy-hii-expansion-ld}, we notice that they diffuse out with time. This behaviour is not caused by the ionisation; it is a result of the physical assumptions in our simulations and the way in which the initial conditions have been constructed. The Brick simulation from \citet{Dale2019} and \citet{Kruijssen2019} uses an external gravitational potential and self-gravity. In our setup we have turned off the gravitational forces and we have additionally set all particle velocities to zero. As a result, the density peaks have higher pressure than the surrounding low-density regions, and no force to hold them together. This leads to the observed diffusion, which affects (and likely serves to enable) the expansion of the H~II region at later times. This does not hamper the numerical experiment performed here, in which we are exclusively interested in the impact of sub-structure, rather than the full, self-consistent evolution of the region.
To quantify the expansion of the H~II regions, we have computed the ionised mass as a function of time by adding up the masses of the particles with ionic fractions over 0.5 (see Figure~\ref{fig:clumpy-ionised-mass}).
Despite representing two different modes of H~II region expansion (pressure-confined and depressurised), the two panels of Figure~\ref{fig:clumpy-ionised-mass} look very similar to each other. In both panels we see that the ionised mass increases approximately linearly for all simulation setups. This behaviour is also in agreement with the StarBench results (see Figure~\ref{fig:ionised-mass-sb}). The difference between the top panel (high-density source) and the bottom panel (low-density source) of Figure~\ref{fig:clumpy-ionised-mass} is mostly in the noise level of the curves and the magnitude of their slopes. The H~II region embedded in a clump has noisier curves than the H~II region in the low-density environment, which can be explained by the clumpy structure of the high-density environment (see Figure~\ref{fig:clumpy-hii-expansion}). Additionally, the expansion curves of the embedded H~II region have shallower slopes than those of the low-density H~II region. This is expected, as the embedded ionised gas needs to overcome the higher pressure of its surrounding material.
\subsection{Voronoi grids and cell densities}
\label{sec:discussion-grids}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/grids.pdf}
\caption{Cross-section images of the initial Voronoi grids at $z=-0.51$ pc. The colour bar corresponds to the exact volume density, $\rho_{\rm p}$ (see equation~\ref{eq:exact}), mapped onto a Voronoi grid built around the particles' positions (\textit{top}) and a regularised Voronoi grid (\textit{bottom}).}
\label{fig:clumpy-voro-grid}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/brick-ionised-density.pdf}
\caption{Cross-section images of the initially ionised grid cells at $z=-0.51$ pc with $\dot Q = 5\times 10^{50}$ s$^{-1}$ and a source positioned in a low-density environment within the Brick. The colours correspond to the density of the ionised gas only. The different panels correspond to the five different simulations described in Section~\ref{sec:clumpy}, with an additional reference run. The reference simulation has been performed on a high-resolution ($1024^3$) Cartesian grid. For comparison, each Voronoi grid consists of 101128 cells.}
\label{fig:clumpy-ionised-density}
\end{figure}
While the choice of grid structure and density mapping prescription does not significantly affect the StarBench test, in the Brick simulations we see that the amount of ionised gas can vary greatly between the different configurations. We examine these variations, both in terms of the time evolution of the H~II region (see Figure \ref{fig:clumpy-ionised-mass}), and in terms of the initial ionised mass.
As already discussed in Section \ref{sec:clumpy-timestep}, the different simulation setups produce qualitatively the same expansion of their respective H~II regions (with some exception for `B III' with the high-density source). Furthermore, if we compare the five simulations with the low-density source, we see that the ionised mass curves appear to be systematically offset from one another. The simulations with initially lower or higher ionised mass continue to have lower or higher ionised mass at later times, relative to the other simulations. This may indicate that the differences between the time evolution of the simulations are primarily caused by differences in the initial ionised mass. Alternatively, the conditions which create the initial mass discrepancies (grid and density variations) may persist in time and continuously maintain the discrepancies.
Similar systematic behaviour is present in the simulations with an embedded source. However, since the expansion curves are noisier, there is no clear distinction between the results of `B~I', `B~II' and `B~IV'. Overall, the relative order of the expansion curves of the simulations with an embedded source is similar to that of the curves of the low-density source, which favours the interpretation that the different grid and density configurations continuously maintain differences in the expansion.
We will now focus on comparing the individual simulations with one another. Let us first consider the basic Voronoi grid and the regularised one with the same type of density mapping. Visually, the regularised grid resolves the low- and intermediate-density gas better (see Figure~\ref{fig:clumpy-voro-grid}). Due to the regularisation, however, the grid cells become more evenly distributed, and hence the density peaks are represented by fewer cells, causing them to be under-resolved. This means that an embedded ionising source may experience a smoothed density profile in its surroundings, with lower peak densities. As a result, this source initially ionises a larger region than if it were coupled with a basic Voronoi grid structure.
In addition to the structure of the Voronoi grid, the type of density mapping also plays an important role in the outcome of the Brick simulations. The greatest outlier in Figure~\ref{fig:clumpy-ionised-mass} is the curve corresponding to the Centroid method combined with the basic Voronoi grid (`B III'). This simulation setup significantly over-predicts the cell mass (by a factor of $\sim 3$; see Appendix~\ref{sec:dens-mapping}), and results in a strongly suppressed ionised mass, especially when the source is embedded in a clump. The inaccuracy of the Centroid method is less pronounced when paired with a regularised grid (`B IV'; only $\sim 9$\% higher than the total particle mass), due to the more regular cell shapes. As a result, the behaviour of the simulation is similar to that with the exact density mapping (see the two dotted lines in Figure~\ref{fig:clumpy-ionised-mass}).
Finally, the density mapping in which a particle's mass is divided by the cell volume (`M/V'; `B V') agrees reasonably well (within a few tens of per cent) with the ionised mass produced by the exact density mapping on a basic Voronoi grid (`B I'). This can be attributed to the fact that, while the local cell density may have inaccuracies, on larger scales the average density is correct, since the method conserves mass.
The disagreement between the five different simulations raises the question of which one gives the most accurate solution to the problem. Since we do not have an analytic solution in the general case of a non-uniform density medium, we have attempted to answer this question numerically. We have applied \textsc{CMacIonize} to a high-resolution Cartesian grid, without performing any subsequent hydrodynamics calculations due to the high computational cost. The density mapping has been performed using the Centroid method (eq.~\ref{eq:centroid}). Since the method does not necessarily conserve mass, as previously discussed, the resolution of the Cartesian grid has been gradually increased until both the total mass of the medium and the ionised mass had converged. Convergence is achieved for a grid with $1024^3$ cells. Note that the cell size of this Cartesian grid is $0.006$ pc, while the smallest smoothing length in the Brick simulation is $0.014$ pc, which allows the grid to capture features that might not be resolved in the hydrodynamics.
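The convergence procedure can be summarised by the following sketch, in which \texttt{run\_mcrt} is a hypothetical wrapper around the density mapping and the \textsc{CMacIonize} call for an $n^3$ Cartesian grid; it is not an actual \textsc{CMacIonize} interface.
\begin{verbatim}
# Double the Cartesian grid resolution until both the total mapped
# mass and the ionised mass change by less than a tolerance.
# run_mcrt(n) is a hypothetical wrapper returning (m_tot, m_ion)
# for an n^3 grid; not an actual CMacIonize API.
def converge_cartesian(run_mcrt, n0=128, tol=0.01, n_max=1024):
    prev, n = None, n0
    while n <= n_max:
        m_tot, m_ion = run_mcrt(n)
        if prev is not None:
            if (abs(m_tot - prev[0]) / prev[0] < tol and
                    abs(m_ion - prev[1]) / prev[1] < tol):
                return n, m_tot, m_ion   # converged (here at n = 1024)
        prev, n = (m_tot, m_ion), 2 * n
    return n_max, prev[0], prev[1]
\end{verbatim}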
A comparison of the initial ionised gas density among the five simulations and the Cartesian grid is shown in Figure~\ref{fig:clumpy-ionised-density}. The figure contains a density slice at $z=-0.51$ pc, and the source has been positioned in the lower-density location ($x=0$ pc, $y=0.3$ pc, $z=-0.51$ pc). We can see that the simulations with regularised Voronoi grids are in better agreement with the Cartesian grid (labelled as `Reference'\footnote{Note that the `Reference' image contains a white region in the middle of the ionised volume. This does not indicate a clump of neutral gas; it is instead a region that is completely empty of gas (ionised or neutral).}). Furthermore, if we compare the simulations with a basic Voronoi grid, we can see that the `Exact' and `M/V' runs agree quite well with each other, and are still relatively similar to the Cartesian grid result, while the `Centroid' run differs the most from all the others.
\begin{table}
\caption{Initial ionised mass in the Brick simulations. The ionised mass is calculated by multiplying the ionic fraction of a particle by the particle's mass and summing over all particles. The reference value is obtained by performing the radiative transfer on a high-resolution ($1024^3$) Cartesian grid.}
\centering
\begin{tabular}{ccc}
\hline
Simulation & $M_{\rm ion}$~[M$_{\odot}$] & $M_{\rm ion}$~[M$_{\odot}$]\\
name & (high density) & (low density)\\
\hline
\hline
B~I & 22.1 & 573 \\
B~II & 30.5 & 499 \\
B~III & 2.0 & 240 \\
B~IV & 33.0 & 469 \\
B~V & 25.6 & 549 \\
Reference & 18.3 & 340 \\
\hline
\end{tabular}
\label{table:brick-initial}
\end{table}
Additionally, we have calculated the total ionised mass (including contributions from the partially ionised cells) obtained with the Cartesian grid. In the case of the embedded source, the total ionised mass is 18.3~M$_{\rm{\odot}}$, while in the case of the low-density source it is 340~M$_{\rm{\odot}}$. Both of these values are lower than the numbers obtained by four of the five RHD simulation setups (see Table~\ref{table:brick-initial}). In particular, the simulations using the exact density mapping and the two types of Voronoi grids, which are expected to produce the most accurate results, overestimate the ionised mass relative to the grid. The only setup which produces a lower ionised mass is the `Centroid' run, which overestimates the total mass due to lack of mass conservation, as discussed above. The disagreement between the ionised mass computed on the Cartesian grid and that on the Voronoi grids likely originates from the limited resolution of the latter, and it will be discussed further in Section~\ref{sec:clumpy-res}. Note that here we compare the total ionised mass as ``seen'' by the radiative transfer (i.e. the sum of ionic fraction times particle mass), as opposed to the initial ionised mass in Figure~\ref{fig:clumpy-ionised-mass}, where we have the total ionised mass as ``seen'' by the hydrodynamics (i.e. the total mass of all particles with ionic fraction over 0.5). This choice has been made for consistency with the Cartesian test.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/lum-ionised-mass-lloyd.pdf}
\includegraphics[width=\columnwidth]{figures/lum-ionised-mass-ld-lloyd.pdf}
\caption{Initial ionised mass as a function of ionising luminosity ($\dot Q$) for the simulations with an embedded source (\textit{top}) and the simulations with a source in a low-density environment (\textit{bottom}) within the Brick. The different lines represent runs with different Voronoi grids and types of density mapping, and have the same meaning as in Figure~\ref{fig:clumpy-ionised-mass}. The red symbols have been obtained using a high-resolution ($1024^3$) Cartesian grid.}
\label{fig:clumpy-lum-function}
\end{figure}
Finally, we also study the initial ionised mass (from the radiative transfer calculation) for different values of the ionising luminosity, $\dot Q$ (see Figure~\ref{fig:clumpy-lum-function}). This is achieved by performing individual MCRT runs without evolving the cloud in time. Figure~\ref{fig:clumpy-lum-function} demonstrates that as we increase $\dot Q$, the initial ionised mass also increases, as expected. Note that the red data points represent the results of the Cartesian grid tests, mentioned above (and shown in Table~\ref{table:brick-initial}).
We also see that the different simulation setups produce curves that do not cross each other as $\dot Q$ is varied. This demonstrates that there are systematic differences, resulting from the choice of grid and density mapping. The order of the curves (from higher to lower mass) in the two panels of Figure~\ref{fig:clumpy-lum-function} differs between the embedded and the low density source, which emphasises the importance of the gas density distribution around the source.
\subsection{Resolution}
\label{sec:clumpy-res}
Despite the fact that we have not altered the resolution of the Brick setup, it is crucial to consider resolution when discussing our radiation-hydrodynamics results. Since the H~II region has an irregular shape, the most reliable measure of resolution is the number of initially ionised particles. For the simulations with an embedded source this number is under 50, while for the low-density source it is under 1000. Compared to the uniform-density tests, these simulations fall within the lower end of the resolutions that have been considered (see Table~\ref{table:mass-res}). Therefore, from a numerical perspective, we expect that the expansion of the H~II regions within the Brick is likely to be underpredicted. Note that in our lower-resolution StarBench tests the initial ionisation radius and mass are typically overpredicted (see Figures~\ref{fig:mass-resolution} and~\ref{fig:ionised-mass-sb}), but despite that the expansion is slower than in the higher-resolution runs. This may be caused by the time-scale required for the formation of the shock, which is longer at lower resolution.
The overprediction of the initial ionised mass present in the low-resolution StarBench test is consistent with the overpredicted initial ionised mass in the Brick, when compared to the Cartesian reference test (see Table~\ref{table:brick-initial} and Figure~\ref{fig:clumpy-lum-function}). Indeed, the curves of the simulations with the exact density mapping (`B~I' and `B~II'; black lines), which we believe to be the most accurate estimate of the ionised mass, lie above the red data points in Figure~\ref{fig:clumpy-lum-function}.
These results indicate a limitation of the RHD scheme, related to the fact that SPH is optimised for resolving fluid elements of equal mass, whereas solving for the ionisation state of a diffuse medium requires high resolution at the ionisation front (this is illustrated by the outlines of the ionised regions in Figure~\ref{fig:clumpy-ionised-density}). This limitation can be addressed either by improving the resolution of the grid structure, or by sufficiently increasing the mass resolution in SPH. The first approach can be achieved by inserting additional generating sites in the Voronoi grid via methods such as the one proposed by \citet{Koepferl2017}, or by choosing a different high-resolution grid structure altogether, such as a Cartesian grid or a grid constructed with an adaptive mesh refinement scheme. The increase of resolution can, however, significantly increase the computational cost of the RHD scheme to the point where it becomes prohibitive. Additionally, increasing the resolution of the grid while keeping the number of SPH particles the same is unlikely to improve the time evolution of the ionised gas. Therefore, it is crucial to select an appropriately small SPH particle mass for the specific problem that we consider.
\subsection{Physical implications for the Brick}
In addition to using the Brick setup for numerical tests, our limited radiation-hydrodynamics simulations can already be used to address scientific questions.
As mentioned in the introduction, we want to find out (i) the ionised mass of a newly formed H~II region and (ii) its rate of increase.
The simulations of \citet{Dale2019} and \citet{Kruijssen2019} provide a natural starting point for answering these questions numerically, since they are currently the only existing models of the Brick capturing the effects of the galactic gravitational potential on the structure of the cloud, and reproducing key observable properties, such as the aspect ratio, the column density, the line-of-sight velocity dispersion and the gradient of the line-of-sight velocity \citep{Kruijssen2019}. Our RHD runs of the Brick use a snapshot of one of these simulations as initial conditions. However, we have made crucial physical simplifications (such as using a sub-region of the cloud and omitting gravity) for the purpose of our tests. Therefore, we will discuss the impact of these simplifications while addressing the above questions.
The first of the above questions can be directly answered from our results, and is not affected by our choice of physical assumptions. The size and mass of a newly formed H~II region depend solely on the luminosity and position of the source, as well as on the density distribution of the gas. However, as discussed in Section~\ref{sec:discussion-grids}, the density distribution of the gas is sensitive to the choice of grid type and density mapping method. Using Figure~\ref{fig:clumpy-lum-function}, we can conclude that the low-density source initially ionises 340--600~M$_{\rm{\odot}}$, while the embedded source ionises only 18--31~M$_{\rm{\odot}}$, for an ionising luminosity of $\dot Q = 5 \times 10^{50}$ s$^{-1}$. The ranges for these values have been obtained by considering the results of the Cartesian grid (the lower value of each range) and those of the simulations with exact density mapping (`B I' and `B II'). Even though the results of the Cartesian grid are more accurate due to its higher resolution at the ionisation front, it is useful to consider the full ranges, as they give estimates of the ionised mass uncertainty due to resolution limitations in the RHD simulations. Keeping these uncertainties in mind, we can better estimate the ionised mass for other luminosities, where we do not have Cartesian results.
The ionising luminosities that we have considered in Figure~\ref{fig:clumpy-lum-function} are quite high for single sources (under the assumption of a well-sampled stellar initial mass function, they span stellar population masses of $2{-}20\times10^3~\rm{M_{\odot}}$, \citealt{leitherer14}). This choice has been made because of the embedded source, whose H~II region otherwise remains sub-grid. Even with these high values of $\dot Q$, the embedded source typically ionises only a few SPH particles (with $m_{\rm{part}}=0.4463$ M$_{\rm{\odot}}$). This indicates that a young, high-mass star in this simulated cloud would likely have a very compact and trapped H~II region. Therefore, it appears that only a star which has accreted its natal clump or has been expelled from its birth environment to a low gas density location can substantially ionise the cloud. This result is consistent with the findings of \citet{Sartorio2021}, who studied single-source ionisation in a turbulent box, and found that placing the source next to a filament produced a small H~II region and only had a minor impact on the gas dynamics.
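Under the assumption of a well-sampled initial mass function, the conversion between $\dot Q$ and stellar population mass implied by the numbers above is linear, as the short sketch below illustrates; the photon rate per unit mass is derived directly from the values quoted in this section.
\begin{verbatim}
# Linear conversion implied by Q = 5e50 photons/s for ~1e4 Msun
# (leitherer14): roughly 5e46 ionising photons/s per solar mass.
Q_PER_MSUN = 5e50 / 1e4

for Q in [1e50, 5e50, 1e51]:
    print(f"Q = {Q:.0e} s^-1 -> M_pop ~ {Q / Q_PER_MSUN:.0f} Msun")
# Spans 2e3--2e4 Msun, matching the range quoted above.
\end{verbatim}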
A possible limitation of our simulations is the fact that, in practice, embedded stars accrete material through a disk, and the lower-density environment above and below the disk can facilitate a larger H~II region. Furthermore, earlier feedback in the form of stellar winds can carve out bipolar low-density cavities, which subsequently become ionised \citep{Kuiper2018}. All of these considerations appear on size (and mass) scales much smaller than our particle resolution, and hence are not captured by our RHD simulations. However, while these sub-resolution considerations can affect the initial size and shape of the H~II region of the embedded source, our results provide at least an order-of-magnitude estimate of the initial ionised mass. Note that for the low-density source, we do not expect any significant H~II region variation due to sub-grid geometry.
The second question that we have posed (on how rapidly the mass of the H~II region increases) is also answered by our RHD simulations. From Figure~\ref{fig:clumpy-ionised-mass}, we can measure that the expansion rate of the H~II region in the low-density environment is $\sim 7$--$9 \times 10^{-3}$~M$_{\rm \odot}$~yr$^{-1}$, while that of the embedded H~II region is $\sim 1.6$--$1.9 \times 10^{-3}$~M$_{\rm \odot}$~yr$^{-1}$. In this case, however, our assumed physical simplifications do affect the results. The fact that we have neglected the effects of gravity means that the denser structures cannot be held together and diffuse out over time (see Figures \ref{fig:clumpy-hii-expansion} and \ref{fig:clumpy-hii-expansion-ld}). As a result, more of the gas enters the outskirts of high-density regions and becomes ionised, implying that the time-evolution results in Figure~\ref{fig:clumpy-ionised-mass} should represent upper limits on the H~II region expansion. However, we can see in Figures~\ref{fig:clumpy-hii-expansion} and \ref{fig:clumpy-hii-expansion-ld} that the majority of the high-density structures survive the H~II region expansion and remain neutral, which suggests that the diffusion is likely to have a minor impact on the ionised mass, as the H~II region is expanding in a low-density environment. Some uncertainty remains about whether or not the embedded source would be able to disperse its natal clump on the time-scale that we observe in our simulations when gravity is present and able to disrupt the H~II region expansion. By contrast, from a numerical perspective, the measured H~II region expansion is suppressed due to insufficient spatial resolution (as discussed in Section~\ref{sec:clumpy-res}), and thus represents a lower limit on the expansion. However, according to Figure~\ref{fig:ionised-mass-sb}, this affects the size and mass of the H~II region, but not its rate of expansion, and hence it does not interfere with our measurement.
Despite the uncertainty in the size of the H~II regions, we see a physically plausible spatial distribution of the ionised gas. The H~II region of the low-density source evolves to fill up the low-density environment, with only minor destruction of some of the clumpy structures. Similar behaviour has been observed by \citet{Dale2011} and \citet{Sartorio2021}, who find that ionising feedback can sometimes be inefficient at carving out voids, and that the H~II region spreads into pre-existing low-density regions instead. In contrast, the embedded H~II region manages to expand within its clump, but our limited setup does not allow us to determine if it would be able to destroy the cloud.
An interesting additional question is whether or not the H~II region expansion can trigger star formation. This question is more relevant for the embedded source, where the expansion of the ionised gas sweeps up a shock. Unfortunately, due to the lack of self-gravity, the pre-existing dense cores diffuse out, as previously discussed. As a result, the shock is not able to compress them beyond the tipping point of star (or sink) formation. Furthermore, the relatively low resolution of the embedded H~II region means that the shock is quite thick, and hence its density never reaches high enough values to trigger fragmentation via the collect-and-collapse mechanism. Therefore, the question of triggered star formation will be addressed in a future study.
\section{Conclusions and recommendations}
\label{sec:conclusion}
We have introduced a novel radiation-hydrodynamics scheme combining SPH and MCRT, and linking them via a Voronoi grid. We have demonstrated that our scheme can correctly model the hydrodynamics of an expanding H~II region in a uniform-density environment. Furthermore, we have applied the RHD scheme to a clumpy cloud model of the Brick \citep{Dale2019,Kruijssen2019}, which is a dense and massive cloud in the CMZ of the Milky Way, thought to be the progenitor of a young massive cluster \citep{Longmore2012}.
As part of our work, we have explored varying the ionisation time step and the SPH resolution, and we have used two different kinds of Voronoi grid structure and three types of density mapping from the particles onto the grid cells. The results of these experiments allow us to formulate recommendations for future simulations involving SPH, MCRT and Voronoi grids, which we present below.
We recommend the use of the timestepping criterion which we have derived in eq.~\ref{eq:dt-criterion}. The criterion is similar to that of \citet{Courant1928} and other timestepping conditions that are otherwise employed in SPH codes \citep[e.g.][]{Price2018}. The criterion can ensure convergence for non-dynamical initial conditions; however, additional care should be taken when running simulations with a significant velocity dispersion.
We have studied the effects of varying the spatial resolution of the uniform-density simulations by increasing or decreasing the particle mass. We have found that lowering the resolution causes suppressed expansion of the H~II region due to thicker shocks. Based on our results, we deduce that we need at least a few thousand initially ionised particles in order to have a good model of D-type expansion in a uniform medium. Based on our Brick simulations, we recommend that a similar criterion be used in simulations in a clumpy medium as well.
We have performed our simulations with two different types of Voronoi grids and three different types of density mapping (see Section~\ref{sec:numerics}). When comparing the basic to the regularised Voronoi grid, some preference should be given to the latter, as it represents the overall structure of the diffuse medium better. However, when we consider the simulation outcome, the better choice of Voronoi grid is situation dependent. Choosing one grid over the other makes no significant difference to the expansion of the uniform-density simulations. In the case of the Brick simulations, regularising the Voronoi grid improves the outcome when the source is in a low-density environment and worsens it when the source is embedded, as the regularised grid under-resolves density peaks. In terms of the density mapping method, we always recommend using the exact density mapping (see eq.~\ref{eq:exact}) when the computing time is not an issue. If it is not possible to use the exact mapping, the choice between dividing the particle mass by the cell volume (eq.~\ref{eq:MonV}) and using the centroid method (eq.~\ref{eq:centroid}) depends on the choice of Voronoi grid. In the Brick simulations, we find that the former approximate density mapping method gives similar results to the exact mapping on the basic Voronoi grid. By contrast, the centroid method produces a similar outcome to the exact mapping on the regularised Voronoi grid. Note that all types of density mapping produce very similar results in a uniform medium.
In our simulations of the Brick with a source in a low-density environment we can see that the ionised material is preferentially located in pre-existing low-density regions. This does not have a strong effect on the cloud morphology, and it is a phenomenon which has been seen in previous simulations.
In contrast, our simulations of the Brick with a source embedded in a dense clump result in H~II region expansion that sweeps up a shock.
Our results also demonstrate that we need the ionising luminosity of a stellar population of several $10^3~\rm{M_{\odot}}$ in order to ionise material outside the clump of an embedded source, at the extreme gas densities observed in the CMZ. However, the necessary ionising luminosity may be overestimated due to insufficient resolution. In particular, resolving the accretion disc of a young star, and/or including stellar winds, may accelerate the ionisation by allowing ionising photons to escape via low density bipolar cavities \citep{Kuiper2018}.
The RHD scheme proposed in this paper contains a careful and accurate treatment of ionising radiation in combination with the large dynamical range advantages of SPH. As a result, it can be a useful tool for future studies of star formation and ionising feedback. Furthermore, the results presented here will serve as a guide for choosing time steps, resolution, type of Voronoi grid and density mapping for future simulations.
\section*{Acknowledgements}
MAP would like to thank Daniel Price and James Wurster for their technical assistance with \textsc{Phantom}, as well as Jim Dale for useful discussions.
MAP and IAB acknowledge funding from the European Research Council for the FP7 ERC advanced grant project ECOGAL. MAP and JMDK gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907). JMDK gratefully acknowledges funding from the German Research Foundation (DFG) in the form of an Emmy Noether Research Group (grant number KR4801/1-1). BV acknowledges partial funding from STFC grant ST/M001296/1 and currently receives financial support from the Belgian Science Policy Office (BELSPO) through the PRODEX project "SPICA-SKIRT: A far-infrared photometry and polarimetry simulation toolbox in preparation of the SPICA mission" (C4000128500).
Part of this work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure.
The SPH figures in this paper were created using SPLASH (\citealt{Price2007}).
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
\label{introduction}
What tools can enable humans to identify flaws in an AI agent's reasoning, even if the agent functions at a super-human level? One possibility is for the AI to produce explanations that break decisions down into reasoning steps that a human can check against their domain and common-sense knowledge. While a human may not be able to fully understand the totality of an AI's decisions, our case study demonstrates that flaws in the AI's model can still be identified at the level of reasoning steps. These flaws may suggest ways to improve an agent or warn of potentially serious weaknesses.
In this paper, we detail a case study exploring how experts can combine common-sense knowledge with a visual state-action tree generated by an AI to seek out flaws in a non-trivial reinforcement-learning (RL) agent.
We consider an agent architecture based on tree search, where each decision is made by constructing a look-ahead search tree that is used to evaluate action choices in the current environment state. The trees serve as an explanation, which breaks each decision down into individual reasoning steps within the tree. These steps include predicting the effects of actions, predicting the values of leaf nodes, and pruning tree branches. Each step can, in principle, be inspected by a human to identify potential flaws.
While tree-based explanations give humans the potential to identify step-level flaws, such trees can be large and complex. Thus, it is not clear whether humans will actually be able to detect such flaws in practice, given the potentially overwhelming quantity of information.
In this work, we explore this question in the context of a planning-based agent trained to play a complex real-time strategy game (Section \ref{gameRules-information}). We describe the design of an interface for navigating the decision points of game replays and the corresponding trees (Section \ref{sec:interface}). We then describe the experiences of a team of AI experts and developers using the interface to identify flaws (Section \ref{sec:case-study}). We present important design elements of our tree-based explanation interface and describe how it helps experts find flaws in our AI agent. We further present general flaw-finding strategies that may help experts focus their attention on areas likely to contain flaws.
\section{Related Work}
Interestingly, we have not found prior work that provides a principled study of using search trees for explanation purposes. The majority of prior work on explanations for RL concerns ``reactive'' agents, which make decisions via the evaluation of black-box functions. For example, it is common to represent an agent's policy or value function as a neural network, which is simply forward evaluated to make decisions. Explanations, in such cases, may highlight parts of the observations that were important to a decision \cite{pmlr-v80-greydanus18a,DBLP:journals/corr/abs-1809-06061,DBLP:journals/corr/abs-1906-02500,gupta2020explain,atrey2020exploratory}, generate counterfactual inputs \cite{olson2019counterfactual}, extract finite state machines \cite{MISCkoul2018learning}, or display the trade-offs between decisions \cite{juozapaitis2019}. Unfortunately, these types of explanations provide little insight into the internal decision-making process, let alone break decisions into human-consumable steps. As such, these explanations primarily support identifying flaws at the level of full decisions, such as obviously bad action selection or attention to clearly irrelevant information. It may be rare, however, for humans to be able to make such judgements for a high-performing AI in a complex environment.
To support our goals, rather than consider reactive RL agents, we instead focus on planning-based RL agents (Section \ref{learned-components}).
A planning-based agent learns or is provided a model of the environment. This model is used for decision making via explicit planning. To this end, we designed a stochastic environment that forces the agent to perform long-term planning and decision trade-offs.
\section{Game Environment: Tug of War}
\label{gameRules-information}
\emph{Tug of War (ToW)} is an adversarial two-player, zero-sum, real-time strategy game our research lab designed for this study using the StarCraft 2 game engine. ToW (Figure \ref{fig:ToW-ScreenShot}) is played on a map divided into top and bottom lanes, where each lane has two bases (Player 1 on the left, Player 2 on the right). The game proceeds in 30-second waves; before each wave begins, each player may select either the top or bottom lane in which to purchase some number of military-unit production buildings (of three different types) with their available currency. These production buildings have different costs. Both players receive a small amount of currency at the beginning of each wave, and a player can linearly increase this stipend by saving to purchase up to three economic buildings, called pylons.
\begin{figure}[htp]
\centering
\includegraphics[width=5cm]{img/intro/gameMapRaw.PNG} \includegraphics[width=3cm]{img/intro/rockPaperScissors.PNG}
\caption{(left) Tug of War game map: top lane and bottom lane; Player 1 owns the two bases on the left (gold star-shaped buildings), Player 2 owns the two bases on the right. Troops from opposing players automatically march towards their opponent's side of the map and attack the closest enemy in their lane. (right) Unit rock-paper-scissors: Marines beat Immortals, Immortals beat Banelings, and Banelings beat Marines. We have adjusted unit stats in our custom StarCraft 2 map to suit ToW's balance.}
\label{fig:ToW-ScreenShot}
\end{figure}
At the beginning of each wave, each building produces one unit of the specified type. The unit automatically walks across the lane to attack incoming enemy units and, if close enough, the opponent's base. The three unit types, Marines, Immortals, and Banelings, form a rock-paper-scissors relationship (Figure \ref{fig:ToW-ScreenShot}). The first player to destroy one of the opponent's bases by inflicting enough damage wins. If no base is destroyed after 40 waves, then the player owning the base with the lowest health loses.
ToW is challenging for reinforcement learning (RL) due to the large state space, complex dynamics, large action space, and sparse reward. The states give perfect information, although the opponent's unspent currency is not directly revealed (but can be inferred from history). The dynamics are stochastic due to random variations in the damage inflicted on buildings and units by attacks, which makes predicting group interactions difficult.
Additionally, the agent's possible actions in any given state can vary widely, from tens to tens of thousands of actions; its options are constrained by the number of ways to allocate its budget.
Finally, the reward is just +1 (winning) or 0 (losing) at the end of a game, which can last for up to 40 waves/decisions.
\section{Planning-Based RL Agent}
\label{learned-components}
We describe our planning-based RL architecture and learning approach, where decisions are made via tree search using a learned game model and evaluation function. This architecture automatically produces explanations via the search trees, which will be investigated by humans via our explanation interface (Section \ref{sec:interface}).
{\bf Agent Architecture.} The moment before each new wave is a decision point for the RL agent, where it selects an action based on an abstract, human-understandable state description. The abstract state includes information about the wave number, the health of all bases, the agent's unspent currency, both players' current building counts, pylon quantities, and the number of friendly and enemy troops of each type in each of 4 evenly divided grid cells per lane. The main form of abstraction is the positions of units, which are quantized into 8 grid cells; the health of individual units is not provided. Nevertheless, this abstraction is rich enough for humans to understand the state and make reasonable decisions. We will also see that it is rich enough to allow for strong AI performance.
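A schematic version of this abstract state is sketched below; the field names and container types are illustrative choices, as the agent's actual encoding is a flat feature vector consumed by the learned components.
\begin{verbatim}
from dataclasses import dataclass
from typing import List

@dataclass
class AbstractState:
    wave: int                    # current wave number (1..40)
    base_health: List[int]       # health of all four bases
    unspent_currency: int        # the agent's own unspent currency
    building_counts: List[int]   # both players' production buildings,
                                 # per building type and lane
    pylons: List[int]            # pylon counts for both players
    troop_counts: List[int]      # friendly and enemy troops of each
                                 # type in 4 grid cells per lane
\end{verbatim}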
Our agent architecture is similar to AlphaZero \cite{silver2018}, an RL agent based on game-tree search that demonstrated mastery of Go, Chess, and Shogi.
Unlike AlphaZero, however, our agent uses a learned model of the game dynamics. The agent is constructed from the following components: 1) \emph{Learned Transition Model}, which predicts the next abstract state given a current abstract state and actions for both players. Note that the next state (next wave) occurs 30 seconds after the current state. 2) \emph{Learned Action Ranker}, which returns a numeric ranking over actions in an abstract state based on their estimated quality. 3) \emph{Learned State-Value Function}, which returns a value estimate (probability of winning) given an abstract state.
Given the learned components, at each decision point the agent selects an action by building a pruned, depth-limited game tree. Specifically, the search is parameterized by the search depth $D$ and, for each depth $d\in \{0,\ldots, D-1\}$, the number of friendly $f_d$ and enemy $e_d$ actions to be expanded. Starting at a node corresponding to the current state, the search uses the action ranker to expand the appropriate number of friendly and enemy actions, and for each combination uses the transition model to produce the predicted state at the next depth level. The tree is expanded until reaching nodes at depth $D$, which are assigned values by the state-evaluation function. The minimax values of the root actions are computed and the best action is selected. In our case study, we use $D=2$ (i.e. 1 minute of game time) with $(f_0, f_1) = (20, 5)$ and $(e_0, e_1) = (10, 3)$. Thus, the root node selects among 20 possible actions in the current state. Figure \ref{fig:ToW-ReplayAndExpl} gives a visualization of a game tree from a particular state.
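The following sketch summarises the search procedure, with the learned action ranker, transition model, and state-value function treated as black-box callables. It follows the parameterisation above ($D=2$, $(f_0,f_1)=(20,5)$, $(e_0,e_1)=(10,3)$), but is a simplification of the actual implementation.
\begin{verbatim}
# Pruned, depth-limited minimax search with learned components.
# rank_actions, transition, and value stand in for the learned
# action ranker, transition model, and state-value function.
def search(state, d, D, f, e, rank_actions, transition, value):
    if d == D:
        return value(state)                      # leaf evaluation
    best = -float("inf")
    for a_f in rank_actions(state, friendly=True)[: f[d]]:
        worst = float("inf")                     # opponent minimises
        for a_e in rank_actions(state, friendly=False)[: e[d]]:
            s_next = transition(state, a_f, a_e) # learned dynamics
            worst = min(worst, search(s_next, d + 1, D, f, e,
                                      rank_actions, transition, value))
        best = max(best, worst)                  # agent maximises
    return best

# At the root (d = 0), the agent chooses among f[0] = 20 candidate
# actions, returning the argmax of the minimax values instead of
# the value itself.
\end{verbatim}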
{\bf Learning Approach.} To learn the required components we use model-free RL to learn a Q-function $Q(s,a)$ that estimates the value of action $a$ in abstract state $s$. Specifically, since this is a two-player game, we use a tournament-style self-play strategy, where a pool of previously trained model-free agents, each with their own Q-function, is used to play against a currently learning agent. The current agent is trained until it achieves a high win-percentage against the pool or training resources are expended. This is similar to the self-play strategy employed by AlphaStar for the full game of StarCraft 2 \cite{vinyals2019}.
To train each agent we use a variant of DQN \cite{mnih2015} called Decomposed Reward DQN \cite{juozapaitis2019}, allowing us to learn a Q-function that returns a vector of probabilities over the different win conditions for each player. The sum of the probabilities equals 1, and the sum over a player's win conditions is the value of the action. This vector provides more insight into the agent's decision making and forms part of the visualization in our explanation interface (Section \ref{sec:interface}). The Q-function of the best agent in the pool (typically the last agent) is used as the learned action ranker for search. In addition, it is also used for the state-evaluation function, by taking the state value to be the value of the best action.
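To make the use of the decomposed Q-vector concrete, the sketch below derives the action ranking and state value from a few illustrative Q-vectors; two win conditions per player are shown for brevity, and the numbers are invented, not output of the trained network.
\begin{verbatim}
import numpy as np

# Rows: candidate actions. Columns: win-condition probabilities,
# first the agent's conditions, then the opponent's (rows sum to 1).
q_vectors = np.array([[0.50, 0.20, 0.20, 0.10],
                      [0.30, 0.10, 0.40, 0.20],
                      [0.45, 0.30, 0.15, 0.10]])

agent_values = q_vectors[:, :2].sum(axis=1)  # P(agent wins) per action
ranking = np.argsort(-agent_values)          # learned action ranker
state_value = agent_values.max()             # state-evaluation function

print(ranking, state_value)                  # [2 0 1] 0.75
\end{verbatim}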
We represent the Q-function using a 3-layer feed-forward neural network whose input consists of features describing the abstract state and action, and whose output is the value vector of the state-action pair. Self-play training was conducted for two days, after which the learned agent appeared to be quite strong, likely better than most humans with some game experience.
To learn the dynamics model used for search, we used two sources of data: 1) data from games between agents in the tournament and random agents, and 2) the replay buffer of the final learned agent, which is accumulated as part of the learning process. Each data instance was of the form $(s, a_f, a_e, s', \vec{r})$, giving the current abstract state, friendly action, enemy action, next abstract state, and decomposed-reward vector, respectively. Here the reward vector is the zero vector for all states, except at the end of the game, where it is the zero vector for a loss and a one-hot encoding of the win condition otherwise. We designed a feed-forward neural network that takes $s$, $a_f$, and $a_e$ as input to predict $s'$ and $\vec{r}$. Note that this approximates the dynamics as a deterministic function. While the actual dynamics are stochastic due to unit-level randomization of damage, in aggregate a deterministic model appears to be adequate for strong play.
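A schematic network of this form is sketched below; the layer sizes and activation choices are assumptions for illustration, as these details are not pinned down in the text.
\begin{verbatim}
import torch
import torch.nn as nn

# Deterministic dynamics model: maps (s, a_f, a_e) to the predicted
# next abstract state s' and decomposed-reward vector r. It is
# trained by regression on (s, a_f, a_e, s', r) tuples from
# tournament games and the final agent's replay buffer.
class DynamicsModel(nn.Module):
    def __init__(self, s_dim, a_dim, r_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(s_dim + 2 * a_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.next_state = nn.Linear(hidden, s_dim)  # predicts s'
        self.reward = nn.Linear(hidden, r_dim)      # predicts r
    def forward(self, s, a_f, a_e):
        h = self.body(torch.cat([s, a_f, a_e], dim=-1))
        return self.next_state(h), self.reward(h)
\end{verbatim}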
\section{Explanation Interface}
\label{sec:interface}
The above planning-based RL agent produces a search tree that serves as an explanation of its decision at each decision point. While this is a sound certificate of the decision, it may be difficult or impossible for a human to ``fully understand'' the reasoning steps leading to the overall decision. This is especially true when the agent is playing at a superhuman level. Instead, a human can still consider the individual learned inferences involved in the search to build confidence in the agent. If the agent perfectly learns the search knowledge (model and value functions), the agent will play perfectly. Thus, any sub-optimal play is due to inaccuracies in the learned components. Our explanation interface is designed to allow humans to explore tree explanations in order to identify such inaccuracies, which can then be used to build confidence or improve the agent.
{\bf Replay Interface.} Figure \ref{fig:ToW-tree-Ui} illustrates the ToW replay interface, which allows a user to navigate games played by the AI. The user can let the game play, pause, or jump to a specific numbered decision point by clicking on the timeline at the bottom of the screen. After selecting a decision point, a high-level view of the agent's choice is shown via a graph of the win-probability estimates computed by the tree for the 20 root actions in sorted order (see Figure \ref{fig:ToW-ScreenShot}). This visualization is used to help select interesting decision points to investigate (Section \ref{sec:case-study}). At any decision point the user can click on the ``Show Explanation" button to bring up the tree explanation interface.
\begin{figure}[tp]
\centering
\includegraphics[width=6cm]{img/ui/replayUI.PNG}
\caption{ToW Replay Interface - Displays a visual replay of the ToW game, including unit building quantities for each lane, currency, pylons, base health, and the player's actions as they happen.}
\label{fig:ToW-tree-Ui}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.3]{img/ui/UItoTree.PNG}
\caption{
Tug of War Explanation interface, replay interface is paused at D20 and D20's explanation tree is shown.
\textbf{\textit{A}}: ToW Replay Interface paused at D20.
\textbf{\textit{B}}: ToW Replay Interface Friendly Top unit building count and base health.
\textbf{\textit{C}}: Replay player is paused at D20.
\textbf{\textit{D}}: Tree interface at D20. Top 5 actions from the current state shown. Topmost action (principal variation) and 4th best action are fully expanded.
\textbf{\textit{E}}: Game state at D20 as shown in the replay interface. This image is generated using the model-based agent's game-state input vector.
\textbf{\textit{F}}: Other possible lower ranked actions are shown below. Evaluator can expand or hide action nodes to compare futures.
\textbf{\textit{G}}: 88.8\% expectation of winning by destroying P2's Top Base at D22.
\textbf{\textit{H}}: Action node, the agent expects the opponent's best action is to purchase +5 marine buildings in the top lane. Using min-max approximation, the agent believes it should build +1 marine buildings and +1 immortal buildings in the top lane.
\textbf{\textit{I}}: Expected (Predicted) state D21 if both players perform action node \textit{H}.
\textbf{\textit{J}}: Action node, best min-max action to produce optimum state \textit{K} from D21 to D22.
\textbf{\textit{K}}: Expected state at D22 if min-max action pairs \textit{H} and \textit{J} are taken.
\textbf{\textit{L}}: 4th best action with an expected 84.0\% chance of a favorable D22 state. Path has been expanded to show a possible alternative future for D22 if different actions are taken.
}
\label{fig:ToW-ReplayAndExpl}
\end{figure*}
{\bf Tree Explanation Interface.} The tree explanation interface allows a user to visually analyze parts of the agent's search tree at a decision point. The interface shown in Figure \ref{fig:ToW-ReplayAndExpl}, illustrates part of a tree. Figure \ref{fig:ToW-ReplayAndExpl}D is associated with a selected decision point (Figure \ref{fig:ToW-ReplayAndExpl}C). The tree is visualized as left-to-right paths from the current root state (left-most) to leaf nodes. Paths are displayed via visualizations of abstract state nodes, followed by visualizations of actions, which transition to predicted abstract state nodes.
Figure \ref{fig:ToW-ReplayAndExpl}{E} shows an abstract state node; this node visually displays the abstraction data used by the agent to represent states. Base damage predictions in future state nodes are illustrated by red and green base health bars, as shown in Figure \ref{fig:HPError}: red indicates damage inflicted and green indicates remaining health. The house-shaped boxes (Figure \ref{fig:ToW-ReplayAndExpl}H) contain the actions performed by each player from the parent state node. Notice the houses are divided horizontally by a faint grey line, distinguishing whether the action was performed in the top or bottom lane.
Initially the user is only shown the principal variation path (i.e., the best min-max path). From that point, the user can expand the next-best action at a node, or expand a path to the maximum depth along the principal variation from that node. The interface also allows the user to rearrange paths for easier comparison. For each expanded action we depict the game outcome probabilities (Figure \ref{fig:ToW-ReplayAndExpl}G), which show the outcome expected by the agent for each action based on the tree. The outcome information on actions leading to leaf nodes is directly computed by the learned state-evaluation function. Using this interface, a user can explore the tree to identify potential flaws in the learned knowledge, which might correspond to flaws in the predicted state transitions or flaws in the state-evaluation function.
\section{Case Study}
\label{sec:case-study}
This case study illustrates how AI experts and developers can use tree explanations to identify flaws in the underlying learned components of an RL agent. The users of the system included 3 of the main system software developers and 2 of the primary designers of the agent architecture and learning algorithms. These individuals worked together over multiple sessions totalling approximately 4 hours to identify as many flaws as possible. Note that while much explainable AI work is targeted toward non-experts in AI, here we focus on an expert population.
Such experts are likely to be the first users who benefit from such tools. Further, it is reasonable to first demonstrate that experts can effectively use an explainable AI tool before investigating whether non-AI experts can use the tools effectively.
Below, we first describe the strategies employed to identify likely decision points that contain flaws. Next we present several examples of flaws found during the investigation.
{\bf Selecting Games and Decision Points.}
\label{analysisSelection}
It is important to have strategies for zeroing in on decision points that are likely to uncover flaws. While this is a process that future versions of the explanation system might help with, it is currently largely a manual process based on intuition.
An important area of future work is to develop effective automated heuristics for the selection of promising states to investigate. For example, other researchers have proposed specific metrics for ``criticality'' of states based on the difference between the maximum Q-value and average Q-value at a state---\emph{``those in which acting randomly will produce a much worse result than acting optimally''}~\cite{Huang2018}.
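As a sketch, the cited criticality measure amounts to a one-line computation (our phrasing of the idea, not code from~\cite{Huang2018}):
\begin{verbatim}
import numpy as np

def criticality(q_values: np.ndarray) -> float:
    # Gap between the best and the average action value; large
    # values flag states where a random action is much worse
    # than the optimal one (our reading of Huang et al., 2018).
    return float(np.max(q_values) - np.mean(q_values))
\end{verbatim}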
We first chose to focus our evaluation on games the agent lost, because reasoning errors can be expected to have contributed to those losses.
Thus, we collected a set of 13 losing games by having the agent play a large number of games against other agents in the tournament pool.
We quickly discovered that looking at early decision points was not productive because most actions had uniform win probabilities and the states had little information. This makes sense since early in a game, even bad action choices can be recovered from by a strong agent, which results in similar values across the entire action set.
Thus, the team used the interface to focus on two kinds of states: those just prior to sudden drops in win-probability estimates (indicating a possible mistake just occurred) and those where the win probabilities fluctuated visibly across the actions, as shown in Figure~\ref{fig:ToW-q-Chart}.
The team identified candidate states using these criteria and then investigated the corresponding games around those points. The following describes some example flaws that were identified in this way.
\begin{figure}[tp]
\centering
\includegraphics[width=4cm]{img/bugs/chart/noDrop.PNG}
\includegraphics[width=4cm]{img/bugs/chart/bigDrop.PNG}
\caption{(left) Optimistic Win Expectation: the model-based agent expects that, two states into the future, any of the top 22 actions it has evaluated will have a 100\% chance of winning the game. (right) Model-Based Agent Large Uncertainty: an example of a decision point with relatively low win expectations that drop sharply. We hypothesize that decision points with large drops in win expectation contain clear flaws.}
\label{fig:ToW-q-Chart}
\end{figure}
{\bf Transition Function Flaws.} Recall that Tug of War is an adversarial two-player game where the objective is to inflict the most damage on the opponent's base. Damage is dealt when units reach attacking range of the enemy's base. It is a fundamental game rule that damage is irreversible; the health of a base cannot increase. When investigating the tree resulting from a suspicious decision point, which led to a sudden drop in win probability, we identified an error in the state transition function, where the health of a base increased (Figure \ref{fig:HPError}). Here, at decision point 39, Player 1's bottom base is predicted to sustain a significant amount of damage and to have almost no health remaining. However, at D40 Player 1's bottom base is predicted to have more health than in D39. This is a serious error that clearly demonstrates the AI has not yet fully learned some key constraints of the game. Such flaws can lead to rare but critical mistakes in evaluating actions. The flaw suggests that more training is required in situations where the agent's bases have very low health, which was relatively rare late in training as the agent frequently wins.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{img/bugs/HPRises.PNG}
\caption{Base Health Prediction Error; red health bars indicate expected damage inflicted, green indicates expected remaining health. In this case, the agent expects to have more base health in D40 than in D39, which is not possible.}
\label{fig:HPError}
\end{figure}
Recall that at each decision point, the agent can only purchase buildings in either the top or the bottom lane. If both players happen to choose the same lane, the other lane is unaffected. Figure \ref{fig:IndependenceError} illustrates a situation where two actions from the root state have both players selecting the top lane. This implies that the bottom lane should be identical in the predicted states of those actions. However, we see this is not the case; instead, the number of predicted marines in the bottom lane differs between the two states. This shows that the transition function has not yet sufficiently learned this basic lane-independence property. Again, this suggests more training is necessary, which might include specialized training-data augmentation that captures the invariance.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{img/bugs/unitsNoBuildings.PNG}
\caption{Transition Function Error: P1 expects opponent P2 to have two immortal units in the bottom lane even though P2 lacks the necessary production buildings. This predicted state in D15 is not possible.}
\label{fig:unitsNoBUildings}
\end{figure}
Figure \ref{fig:unitsNoBUildings} depicts a predicted state in which player 2 has two immortal units on the field, even though player 2 does not have any buildings that can produce immortals. This is an impossible state, and this type of error could completely change the evaluation of a state and of its upstream root actions. The team hypothesizes that the agent has not observed enough situations with immortals in the bottom lane, since the agent (P1) rarely uses immortals there, and has therefore not learned the unit/building association; we also note that P1 itself has an immortal building in the bottom lane.
{\bf State Evaluation Function Flaws.} Figure \ref{fig:TerminalStateRanking} depicts predicted paths where the agent fails to recognize that it expects to have lost the game in state D26, and thus incorrectly ranks the whole path. As shown in Fig.~\ref{fig:TerminalStateRanking}, the agent expects its bottom base to be destroyed: the base has approximately 0 health at D26. However, the transition function still predicts a game state for D27, and the state-evaluation function assigns it a non-zero probability of winning. As a result, the friendly agent enacts a move from an impossible state, and the value of that impossible state propagates back to the root action, so a state with an expected losing future is ranked higher than an alternative state with an arguably better move. This flaw might also be considered a flaw in the tree-construction procedure: the predicted health should have been treated as zero, which would halt tree construction.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{img/bugs/stateRanking.PNG}
\caption{Terminal State Ranking Error; Player 1 predicts two futures in which it expects to lose at D26. However, the probability that it will win the game is non-zero in both cases because it has appended D27, a state that cannot exist since the previous state would have ended the game.}
\label{fig:TerminalStateRanking}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{img/bugs/laneIndependence.PNG}
\caption{Lane independence prediction violation; no action was taken in the bottom lane by either player, yet different quantities of marines are predicted to be on the field in the bottom lane.}
\label{fig:IndependenceError}
\end{figure}
As another example, in Figure \ref{fig:winSwap} we see two very similar lines of action from the root state that result in qualitatively very different state evaluations. In particular, the lower action sequence leads to a leaf state where the probability of destroying the bottom base is high, while the upper action sequence leads to a very similar leaf state where the probability of destroying the top base is high. The similarity of these action sequences and leaves, combined with the difference in predicted outcomes, goes against common sense. This highlights an apparent weakness in the learned state-evaluation function, which could result in inaccurate action selections under the right conditions.
\begin{figure}[tp]
\centering
\includegraphics[width=8.5cm]{img/bugs/winSwap.PNG}
\caption{State Evaluation Error; a marginal difference in actions, yet the win expectation flips from destroying the enemy's top base to destroying the enemy's bottom base.}
\label{fig:winSwap}
\end{figure}
{\bf Scripting Flaw Detection.} One aspect of the above examples is that in most cases it was possible to describe the type of flaw in a simple and general way; for example, the transition function incorrectly predicted an increase in a base's health. This suggests that human-guided flaw finding can be followed by automatic checks for other similar flaws. In particular, one can write a script that checks a library of games against a general description of a perceived flaw to determine how widespread it is.
As an example, the team wrote a script to search for the type of health-increase flaw observed in Figure \ref{fig:HPError}. The script searched all states in all trees at all decision points in 6 losing games played by our agent. In 4 of the 6 games, the agent expected base health to rise at the decision point immediately preceding the loss. Our scripts also indicated that in some cases the agent misjudged base health by a large margin; for example, there were instances where it expected the health to rise by more than 10\%, which could easily lead to misjudgments in close games.
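A minimal sketch of such a check is given below; the transition format and the \texttt{base\_health} field are hypothetical stand-ins for whatever structure the saved trees actually use:
\begin{verbatim}
def find_health_increase_flaws(transitions, tolerance=0.0):
    # `transitions` is assumed to be an iterable of
    # (parent_state, child_state) pairs extracted from the saved
    # search trees, each state exposing a `base_health` dict
    # mapping base name -> predicted health. Both names are
    # hypothetical; the real tree format is system-specific.
    flaws = []
    for parent, child in transitions:
        for base, health in child.base_health.items():
            delta = health - parent.base_health[base]
            if delta > tolerance:  # health can never rise in ToW
                flaws.append((parent, child, base, delta))
    return flaws
\end{verbatim}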
We expect this strategy, in which an explanation interface is used to extract general descriptions of potential flaws that can then be checked automatically, to be a powerful tool for validating and improving complex AI systems. The results of running such scripts can help direct AI engineers to the most pressing flaws given limited resources.
\section{Case Study Summary}
Our case study demonstrated that it was possible for a team of expert AI researchers and developers to identify non-trivial bugs using a relatively simple explanation interface. Although reasoning errors were not necessarily visible in the overall action selected by the agent, the tree interface allows underlying flaws to be identified. This demonstrates the utility of this type of ``micro decision" analysis of an agent's decision making. Our study also showed that the flaws found tended to have simple, general descriptions, and suggested that those descriptions can be used to search for other similar flaws.
As noted earlier, the model-based agent receives a coarse abstraction of the ToW game state and can only provide tree explanations in terms of this abstraction. Because explanation trees can easily span thousands of considered futures, with each state node communicating a dense set of information, organizing and presenting that information is a challenge. The volume of data also strains human comprehension and attention; a structured evaluation process may be needed to focus manual review on critical sections, with humans following a procedure to dissect a tree, since an unguided process resembles searching for a needle in a haystack. In this case study the team was successful in identifying promising points of investigation, but in general that may not be the case.
While agents are able to converge to a policy after playing millions of ToW games, designing an interface to communicate the game state from the agent's perspective is no trivial task. Our explanation interface design for Tug of War continues to evolve as we consider different audiences with varying levels of AI expertise.
\section{Future Work}
One of the key enablers of our case study's explanation interface is the abstract state representation used by the agent; humans can interpret the tree data. As environments get ever more complex and closer to perceptual data, the internal representations of the agent will have less correspondence with what can be visualized by a human. This is particularly true in systems where the underlying state space used for search is a learned latent space with no presupposed semantics. Thus, a key direction for future research is to identify ways to make the correspondence as tight as possible without sacrificing agent performance. Another key direction of future work is to assist users in identifying promising areas of investigation and to provide a measure of ``testedness" that indicates how similar parts of the tree are to previously ``verified" parts of trees. This would provide users with a sense of progress and improve the efficiency of flaw finding. Finally, flaw finding is just one goal of explainable AI. It is worth considering other objective goals and understanding the types of interfaces and processes that best suit each one.
\section{Acknowledgements}
This work is partially supported by DARPA under grant N66001-17-2-4030.
\bibliographystyle{plain}
\section{Introduction}~\label{intro}
The Ising model is regarded as the simplest theoretical framework for describing ferromagnetism and understanding phase transitions~\cite{Ising1925}. The 2D Ising model is a mathematical model of atomic spins on a lattice whose phase transition can be computed analytically~\cite{onsanger2d}. The system undergoes a second-order phase transition at the critical temperature $T_c$. In particular, when $T \ll T_c$, it undergoes spontaneous magnetization; this phenomenon characterizes ferromagnetism, or the ordered state. The local interaction between spins is ultimately responsible for this behavior. For high temperatures, $T \gg T_c$, the system is in the disordered, or paramagnetic, state; in this case there are no long-range correlations between the spins. Ferromagnetism is a collective phenomenon which occurs when the spins of atoms in a lattice align such that the associated magnetic moments all point in the same direction.
Analytical and numerical solutions for the Ising model have been extremely important in the understanding of phase transitions and ferromagnetism. However, in many cases, these solutions are extremely difficult to formulate and compute. Thus various machine learning (ML) methods have been used to understand these phase transitions.
Challenges in applying ML techniques to dynamical Ising models include the large number of interacting degrees of freedom and the so-called quenched disorder~\cite{Radzihovsky}, the latter of which might result in a system evolving without temporal significance. Attempts to overcome these problems in Ising-like systems using ML date back to~\cite{LITTLE1974, Hopfield1982, Gardner_1988}. Recently, the successes of deep learning and its applications to complex physical systems have reinvigorated interest in the field~\cite{Shiina_2020,dedieu2021sampleefficient, Azizi2021, Morningstar, Carrasquilla2017, Sprague2021}. Much of the recent work has focused on using convolutional neural networks (CNNs) to understand Ising systems near the critical temperature~\cite{Fukushima_2021, Efthymiou_2019, Tanaka_2017, Alexandrou2020, PhysRevB.103.134422, PhysRevE.102.053306}. Several authors have also used CNNs in Generative Adversarial Networks (GANs)~\cite{liu2017simulating} and in Variational Autoencoders~\cite{Walker2020, PhysRevResearch.2.023266} to generate images that simulate the system near the critical temperature. Given the success of transformers in computer vision (CV), it is natural to ask whether a Vision Transformer (ViT) can match or outperform a CNN in understanding the phase transitions of the Ising model.
In this work, our contributions are the following: \textbf{(a)} we created a custom-made suite to generate high-resolution Ising grid images for a large number of systems with different boundary conditions at various temperatures, \textbf{(b)} a ViT is fine-tuned on these images to predict the state variables corresponding to each simulation's experimental constraints, and finally, \textbf{(c)} to the best of our knowledge, this is the first time a ViT has been used to understand classical statistical mechanical phenomena in the Ising model, and we show that ViT outperforms CNN-based architectures such as ResNet-18 and ResNet-50 when using a small number of labeled images.
\section{Ising Model}~\label{ising}
The Hamiltonian of a system expresses the total energy of the system in question. Classically, the Hamiltonian is understood as the sum of the kinetic and potential energies. In the case of the two-state Ising Model, the standard Hamiltonian, $\mathcal{H}(\sigma)$, reads
\begin{equation}\label{eq:1}
\mathcal{H}(\mathbf{\sigma}) = -\sum_{\langle i,j\rangle} J_{i,j} \sigma_i \sigma_j - B\sum_{i} \sigma_i
\end{equation}
where $J_{i,j}$ is the interaction strength between the $i$th and $j$th spin sites, $B$ is the external magnetic field, and $\sigma_i = \pm 1 $ is the $i$th spin on the grid usually taken, as is the case here, to be an $m \times n$ lattice. The sum ${\langle i,j\rangle}$ is taken over all sites without double counting, and typically, the interaction $J_{i,j}$ is constant for all nearest neighbors across the lattice.
In all our systems, $\mathcal{H}(\sigma)$ is defined according to the availability of spin sites in a restricted finite volume. Starting at $T_c$, the order parameter of the second-order phase transition increases continuously from zero.
Second-order phase transitions are characterized by a high-temperature phase with zero average magnetization (the disordered phase) and a low-temperature phase with a non-zero average magnetization. This is well demonstrated in Metropolis-Hastings simulations of the Ising model, where the magnetization increases continuously at the ferromagnetic-paramagnetic phase transition. When $J$ is positive, the spins align parallel to one another, indicating ferromagnetic behavior. On the other hand, when the coupling constant $J$ is negative, nearest-neighbor spins reach their lowest energy state by aligning anti-parallel; in that case, the system is an anti-ferromagnet. The anti-ferromagnetic 2D Ising model behaves almost identically to the ferromagnetic one. However, for $T \approx 0$ the lowest-energy state is a checkerboard-like configuration, in contrast to the ferromagnetic case, where energy is minimized with all spins pointing in one direction or the other (Fig.~\ref{fig:fig33}).
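For concreteness, a minimal sketch of the single-flip Metropolis update for the 2D Ising model with periodic boundary conditions and $B=0$ reads as follows (a textbook implementation, not our simulation suite):
\begin{verbatim}
import numpy as np

def metropolis_sweep(spins, beta, J=1.0, rng=None):
    # One Monte Carlo sweep of single-flip Metropolis updates on a
    # 2D Ising lattice with periodic boundary conditions (a generic
    # textbook sketch, not the authors' custom suite).
    if rng is None:
        rng = np.random.default_rng()
    n, m = spins.shape
    for _ in range(n * m):
        i, j = rng.integers(n), rng.integers(m)
        # Sum of the four nearest neighbours (periodic wrap-around).
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % m] + spins[i, (j - 1) % m])
        dE = 2.0 * J * spins[i, j] * nb  # energy cost of a flip
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
\end{verbatim}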
\begin{table}[t]
\centering
\begin{tabular}{l c c c c}
\toprule
& FSbCR & SbCR & CR & SpCR \\
\midrule
Train & 220 & 80 & 90 & 210 \\
Validation & 100 & 50 & 60 & 90 \\
Test & 150 & 50 & 70 & 130 \\
\bottomrule
\end{tabular}
\caption{Data distribution: Number of images in each bin for each boundary condition.}
\label{tab:data_dist}
\end{table}
\section{Datasets}
In this section we describe the datasets used in our experiments. For this work, we developed a custom suite to create a diversified set of Ising grid images. These images were generated using the single-flip Metropolis-Hastings algorithm with random temperature variations (between 0K and 4K) and three different boundary conditions: \textbf{(a)} periodic boundary conditions (ferromagnetic and anti-ferromagnetic systems), which convert the plane square lattice into a torus; \textbf{(b)} anti-periodic boundary conditions (both directions), in which the sign of the coupling constant is reversed at the top/bottom and left/right boundaries of the lattice grid; and \textbf{(c)} skewed $\pm$ boundary conditions, which impose periodic boundary conditions everywhere except at the top and bottom rows.
\begin{figure}[h!]
\centering
\includegraphics[width=.7\linewidth]{images/6.png}
\caption{Images of ferromagnetic and anti-ferromagnetic systems at $T = 0$ (distinct ground-state energies) and $T = 4$ (disordered systems).}
\label{fig:fig33}
\end{figure}
The experiments for the Ising model are carried out on a $100 \times 100$ two-dimensional lattice. Each lattice point allows for either spin up or spin down, hence there exist $2^{100 \times 100}$ potential configurations. We ensure that equilibrium (thermalization) is reached between temperature steps by collecting data only after $500$ Monte Carlo sweeps, and we wait $750$ sweeps (a cushion for equilibration) before the subsequent calculation~\cite{landau_binder_2014}. A strong proxy for determining whether equilibrium has been reached at a new temperature is obtained by plotting the average magnetization per spin after each application of the Metropolis algorithm against the number of iterations.
We generated $1300$ images for each boundary condition with temperature increments of 0.01K. These images were classified into $4$ bins of unequal sizes. It is important to point out that for temperatures far above the critical temperature ($T \gg 2.27$K), the systems tend toward pure noise. The unequal bin sizes were selected to show the effectiveness of the classifier in the small intervals around the critical temperature, and to demonstrate the reliability of predictions in qualitatively distinguishable subregions: 0K-1.055K, which we call the \textbf{far sub-critical region} (FSbCR); 1.055K-2.119K, the \textbf{sub-critical region} (SbCR); 2.119K-2.320K, the \textbf{critical region} (CR); and 2.320K-4.0K, the \textbf{super-critical region} (SpCR). Bins were chosen to probe the sensitivity of the model to images in a small neighborhood around the critical temperature (2.27K); accordingly, the critical-region bin is deliberately at most 30\% the size of the other bins. We are interested in developing an ML model that can accurately predict the label for an image, since we believe such a system can potentially identify characteristic phenomena associated with known critical regions. Interpreting such a model can shed light on which features are associated with the different regions, thus allowing us to look for similar features in other systems. Examples from our dataset are shown in Figures~\ref{fig:fig33} and~\ref{fig:ising_fig}, and the data distribution across bins is given in Table~\ref{tab:data_dist}. For more figures, please see the Appendix.
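The bin assignment used for labeling then reduces to a simple threshold rule; the sketch below reflects the bin edges above (handling of the exact boundary values is our choice):
\begin{verbatim}
def temperature_bin(T: float) -> str:
    # Bin edges in kelvin, as defined above; T_c ~ 2.27K lies
    # inside the deliberately narrow critical region.
    if T < 1.055:
        return "FSbCR"  # far sub-critical
    if T < 2.119:
        return "SbCR"   # sub-critical
    if T < 2.320:
        return "CR"     # critical
    return "SpCR"       # super-critical
\end{verbatim}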
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{images/periodic/panel_periodic_bc_15.png}
\caption{Various normalized images representing micro-states. Blue areas correspond to ``spin up'' regions and yellow areas correspond to ``spin down'' regions. Panels A, B, C, and D belong to the FSbCR, SbCR, CR, and SpCR classes, respectively.}
\label{fig:ising_fig}
\end{figure}
\section{Methods}~\label{model}
In this section we will describe our method. We start by giving a short introduction to the Vision Transformer and describe how we train the ViT model for our downstream classification task.
\subsection{Vision Transformers}~\label{vit}
Transformers have become the de facto sequence-to-sequence architecture in NLP since their introduction in the landmark paper~\cite{Vaswani2017}. In CV, however, attention mechanisms had mostly been used in conjunction with CNNs, which have been used extensively in CV because they enjoy inductive biases such as translation invariance and a locally restricted receptive field. Dosovitskiy et al.~\cite{Dosovitskiy2020} successfully introduced the Vision Transformer, a transformer-based architecture that outperforms state-of-the-art CNNs on various downstream CV tasks. Transformers lack the inductive biases of CNNs and by design cannot directly process grid-structured data; the authors cleverly convert the spatial, non-sequential signal into a sequence by splitting an image into flattened, non-overlapping patches. The success of ViT in CV has led to an explosion of research on parameter-efficient transformers~\cite{CVT} and on faster training of self-supervised transformers~\cite{DINO, AndreasSteiner}.
In our work, we used the ViT-Base model~\cite{Dosovitskiy2020} from the Hugging Face library~\cite{wolf2020huggingfaces}. We used PyTorch and the Hugging Face library to train our models on an NVIDIA V100 16GB GPU.
\subsection{Finetuning Vision Transformers}~\label{ft_vit}
We started with a checkpoint of ViT that was pre-trained on ImageNet-21k (a collection of 14 million images and 21k classes) and further fine-tuned on ImageNet (a collection of 1.3 million images and 1,000 classes). We then fine-tuned this ViT model on our training set for $10$ epochs with cross-entropy loss, using the AdamW optimizer~\cite{loshchilov2019decoupled} with a learning rate of 2e-5 and a weight decay of 5e-6, and early stopping on the validation loss to prevent overfitting. To benchmark our performance, we also fine-tuned a ResNet-18 and a ResNet-50 on our data. For a fair comparison, both ResNets were pretrained on ImageNet-21k and then fine-tuned on ImageNet.
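The fine-tuning setup can be reproduced along the following lines; the checkpoint name and data loader are illustrative rather than necessarily the exact ones we used:
\begin{verbatim}
import torch
from transformers import ViTForImageClassification

# Checkpoint name is illustrative: any ViT-Base pre-trained on
# ImageNet-21k and fine-tuned on ImageNet fits the description.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=4,                  # FSbCR, SbCR, CR, SpCR
    ignore_mismatched_sizes=True)  # replace the 1000-way head

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5,
                              weight_decay=5e-6)

# train_loader: an assumed torch DataLoader yielding batches of
# (pixel_values, labels).
for epoch in range(10):
    for pixel_values, labels in train_loader:
        loss = model(pixel_values=pixel_values, labels=labels).loss
        loss.backward()  # cross-entropy loss computed internally
        optimizer.step()
        optimizer.zero_grad()
\end{verbatim}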
\section{Results}~\label{results}
In this section, we describe the performance of our model. Table~\ref{tab:results} shows that the ViT outperforms both ResNets when fine-tuned on such a small dataset. This is active work in progress as we benchmark ViT on simulations of related models such as the Blume-Capel and q-Potts models.
\begin{table}[h!]
\centering
\begin{tabular}{l c c c}
\toprule
System/Boundary Conditions & ViT & ResNet-18 & ResNet-50 \\
\midrule
Periodic (Ferromagnetic) & $\mathbf{0.934 \pm .008}$ & $0.865 \pm .034 $& $0.907 \pm .021$ \\
Periodic (Anti-ferromagnetic) & $\mathbf{0.935 \pm .012}$ & $0.906 \pm .016$ & $0.899 \pm .026$ \\
Skewed (Ferromagnetic) & $\mathbf{0.931 \pm .021}$ & $.886 \pm .019$ & $0.917 \pm .009$\\
Anti-Periodic (Ferromagnetic) & $\mathbf{0.921 \pm .013}$ & $0.91 \pm .021$ & $0.917 \pm .008$ \\
\bottomrule
\end{tabular}
\caption{Macro F1 scores (mean and standard deviation over 5 trials) of the models on our test sets.}
\label{tab:results}
\end{table}
\section{Conclusion}~\label{conclusion}
We developed a custom suite to generate images from various Ising model simulations at different temperatures and with different boundary conditions. We then used these images to fine-tune a pre-trained ViT and demonstrated that ViT can outperform current state-of-the-art CNNs. This is still a work in progress as we extend our methods to other simulations and applications: systems undergoing topological phase transitions (e.g., the XY model), systems widely used to model the behavior of complex systems (e.g., the q-Potts model), and the transverse-field quantum Ising model (continuous imaginary time). One of our main motivations for using ViT is to use the attention maps for interpretability. Our goal is to use the attention maps to understand Ising model phase transitions from a visual-pattern perspective by examining Ising configurations, since the different boundary conditions generate qualitatively different configurational patterns. We believe this will raise interesting research questions as we try to relate conventional physical approaches to the Ising problem with new ideas in computer vision.
\section{Broader Impact}~\label{impact}
The past year has produced an unprecedentedly rapid, cross-disciplinary increase in the study and application of CV systems across almost every corner of life and industry. Much of the interest is due to the extensive and seemingly endless applicability of transformers and other state-of-the-art deep learning frameworks. In particular, our work shows that state-of-the-art machine learning research can be used to understand fundamental science and potentially help discover new natural phenomena. However, we must be mindful of the harmful effects of deploying computer vision technologies in society. There are well-known examples in which such technologies have mischaracterized people of color, sometimes with severe consequences such as harassment of Black and Brown people. Situations like this can be mitigated with careful attention at each step of the development process. Moreover, more research is needed on interpreting these models, and we should acknowledge that our methods for understanding the predictions of these black-box models remain incomplete. We must therefore be mindful of how we deploy new, powerful, and exciting technologies in society.
\begin{ack}
We would like to thank Christina Tang (contact: christina.f.tang@gmail.com) for technical assistance and for refactoring our Ising model simulation suite code into a fully customizable experimentation suite. We would like to thank Tamiko Jenkins (contact: tommymjenkins@gmail.com) for her tireless efforts and input while proofreading and editing this manuscript. We would also like to thank John Weeks and Yigit Subasi for helpful comments and suggestions on an earlier version of the manuscript.
\end{ack}
\bibliographystyle{unsrt}
\section{Introduction} \label{sec:introduction}
Upcoming surveys such as the Legacy Survey of Space and Time
\citep[LSST;][]{ivezic19}
that will be conducted at the Vera C. Rubin Observatory, and
the Nancy Grace Roman Space Telescope \citep{spergel15}
will obtain light curves for millions
of astronomical transients. These large datasets will enable many
novel science applications. The number of observed Type~Ia supernovae
(SNe~Ia) will increase by more than two orders of magnitude compared
to current samples \citep{lsst09, scolnic18}. These large samples of SNe~Ia
will be used to constrain cosmological parameters both by improving measurements
of the expansion history of the universe and by performing novel
measurements of the local peculiar velocity field \citep{graziani20}.
These surveys will also observe large numbers of
other kinds of transients, such as superluminous supernovae (SLSN), which
will be helpful both to elucidate the origins of these transients
\citep{villar18} and to use them as new cosmological probes \citep{scovacricchi16}.
It is also likely that new kinds of rare transients that
have not previously been observed will be discovered in the datasets
from these surveys.
All of these science applications rely on being able to extract information
from light curves and distinguish between light curves of different kinds
of transients. Observations can be compared directly to simulations
and theory \citep{kerzendorf21}, but these models are not currently accurate enough
to characterize observed light curves at the precision required for many science applications.
For SNe~Ia, a range of different empirical models have been created
to describe their light curves and spectra \citep{guy07, saunders18, leget20, mandel20}.
These are generative models in that they predict the full time-varying
spectrum of a transient in terms of a small number of parameters
whose distribution is reasonably well known.
These models all include explicit descriptions of how the light propagates through the universe
and is observed on a detector. Propagation effects include redshifting of the light from the
transient, dust along the line of sight, and luminosity variation due to effects such as
weak gravitational lensing. A light curve can be observed with different
cadences, with different telescopes, in different bandpasses, at different
signal-to-noise ratios, or at different times. All of these effects will change the observed
light curve for a transient, but they do not affect the intrinsic physics of the underlying transient.
We therefore refer to them as ``symmetries''. By explicitly
handling these symmetries, the previously-described models of SNe~Ia
obtain consistent values of the model parameters when fitting light curves observed
under different conditions. In particular, these models can obtain consistent estimates
of the luminosity when fitting light curves at different redshifts, which is essential
for estimating distances to SNe~Ia.
Similar models do not exist for other kinds of transients. Instead,
previous work using light curves of transients other than SNe~Ia has typically focused on
extracting features from the light curves rather than building a generative
model. Features can be extracted by fitting an empirical functional
form to the light curve \citep{bazin09, sanders15}, by fitting parameters
of a theoretical model to match the light curve \citep{guillochon18},
by estimating a smooth approximation to the light curve from which
arbitrary empirical features can be measured \citep{lochner16, boone19, alves21},
or by using a neural network \citep{muthukrishna19, moller19}.
These methods produce a
large number of features that can be used to characterize a given
light curve, but these features are not typically invariant to
the symmetries of how the light curve was observed. In particular, the features
produced by all of these models tend to be highly dependent on redshift.
This is a major challenge for science applications such as photometric
classification where labeled datasets tend to be heavily biased towards
bright, low-redshift transients \citep{lochner16, boone19}.
In this work, we address these challenges by building a generative model
of all transient light curves that is invariant to observing conditions.
Most techniques that have previously been used to build generative models of SNe~Ia assume
that the diversity of SNe~Ia can be described by a simple linear model.
Such models provide a good description of most normal SNe~Ia, but they
fail to describe peculiar SNe~Ia \citep{boone21a, boone21b} and are
not flexible enough to describe most other kinds of transients.
Furthermore, the techniques used to build these models require large
datasets of well-measured light curves and spectra \citep{betoule14}
that are not available for transients other than SNe~Ia.
Instead, we learn a generative model directly from large samples of
light curves using a modified version of a variational autoencoder
\citep[VAE;][]{kingma14}. A VAE learns a low-dimensional representation
of its input that we call a latent space.
First, an encoder model uses variational inference to approximate the
posterior distribution over the latent space for a given input.
A generative model, also referred to as a decoder, then reconstructs the input given a
point in the latent space. The VAE can be trained by applying
both the encoder and decoder to a given input and by comparing the
original input to the reconstructed one.
Autoencoders have previously been applied to astronomical light curves
\citep{naul18, pasquet19, martinezpalomera20, villar20, villar21} and shown to be effective
for tasks such as photometric classification and outlier identification.
However, these models do not include explicit descriptions of observing symmetries,
so the same transient observed under different conditions (e.g. redshift) will
be assigned to different locations in the latent space. In this work,
we produce a hybrid physics-VAE model where we use a neural network to describe
the intrinsic time-varying spectra of astronomical transients and an explicit physical model for
known symmetries. Our model produces a three-dimensional representation of
the intrinsic diversity
of transients that is insensitive to observing conditions, including redshift.
We call the resulting model ``ParSNIP'' (Parameterization of SuperNova Intrinsic Properties).
We describe the datasets that we use for this work in Section~\ref{sec:dataset}.
In Section~\ref{sec:methods}, we describe the ParSNIP model and how it is trained.
We evaluate the performance of the ParSNIP model as a generative model in
Section~\ref{sec:performance} and show how it is able to learn the full
time-varying spectra for transients despite only being trained on light curve data.
In Section~\ref{sec:applications}, we show how the ParSNIP model achieves
state-of-the-art performance for tasks
such as photometric classification, identification of new transients, or cosmological
distance estimation. Finally,
in Section~\ref{sec:discussion} we discuss how the ParSNIP model can be applied
to other datasets and how it could be improved.
\section{Dataset} \label{sec:dataset}
We evaluate the techniques described in this work on two different datasets of
supernova-like light curves. First, we use a dataset of observed supernova-like
light curves from the Pan-STARRS1 Medium-Deep Survey (PS1) \citep{chambers16},
the details of which
are described in \citet{villar20}. This dataset contains 2,885 light curves
with host-galaxy redshifts of which 557 have spectroscopically-confirmed types.
The light curves were observed in the Pan-STARRS g, r, i, and z bandpasses with
a cadence of $\sim$7 days per filter.
We also evaluate the ParSNIP model on the PLAsTiCC dataset of simulated light curves
described in \citet{kessler19}. This dataset contains 3,006,109 simulated supernova-like light
curves with observations in the Rubin Observatory's u, g, r, i, z, and y bandpasses,
and the authors aimed to simulate a realistic dataset of light curves from
three years of operation of the Rubin Observatory.
This simulation consists of two distinct surveys. The vast majority of the light
curves (2,972,316) are from the Wide-Fast-Deep (WFD) survey which covers around
half of the sky
with observations roughly twice per week in at least one of the six bandpasses.
33,793 light curves are from the Deep-Drilling-Fields (DDF) survey which covers
$\sim$50 deg$^2$ with significantly deeper and more frequent observations.
The authors of the PLAsTiCC dataset also simulated a spectroscopic followup
strategy which produces type labels for 5,153 of the light curves in this dataset.
This labeled dataset is highly biased towards bright transients, and has a mean
redshift of 0.32 compared to 0.50 for the full dataset.
For this analysis, we assume that the redshifts of all of the transients
are known. This assumption is not required for our analysis, but it does simplify
our methodology. We will discuss how our methods can be applied to datasets
where the redshift is not available in Section~\ref{sec:photoz}.
We restrict our analysis to supernova-like light curves. By this, we mean
that we consider any extragalactic light curves where there is an excess of flux for a
single well-defined time period, and where the light curve has a stable
background level before and after this time period. The PS1 dataset that
we use was already restricted to these kinds of light curves. For the PLAsTiCC
dataset, we reject all of the galactic and AGN light curves but keep all other
kinds of transients.
\subsection{Preprocessing} \label{sec:preprocessing}
To preprocess our light curves, we first make a rough estimate of the time of maximum light of the light curve
by taking the median time of the five highest signal-to-noise observations in the
light curve. In each bandpass, we estimate the background level using a biweight estimator
\citep{beers90} on all observations at least 250 days from our estimated time of
maximum light. We subtract this estimate of the background level from each
light curve. For the PS1 dataset, we also correct the light curves for Milky Way
extinction using the dust map from \citet{schlafly11}.
Neural networks typically perform better if their inputs have a limited range,
but flux values can vary over many orders of magnitude. To address this,
we normalize the brightness of each light curve by its maximum flux in
observations in any band with a signal-to-noise of at least five.
After this procedure, most
flux values lie between zero and one. We record the normalization scale, and
use it in future analyses such as distance estimation.
We find that the statistical uncertainties that are reported for both the PLAsTiCC
and PS1 datasets heavily underestimate the true uncertainties for very high
signal-to-noise observations. This is partially due to uncertainties in
our background subtraction procedure. To prevent these underestimated
uncertainties from having excessively large weights in our model, we add
an error floor of 0.01 to all of our flux uncertainties when training the model.
With our normalization, this roughly corresponds to an error floor of 1\% of the peak flux.
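The preprocessing steps above amount to a short per-band pipeline; the single-band sketch below is our reading of the text (in particular, adding the error floor in quadrature is one plausible interpretation, and the helper names are ours, not the ParSNIP code):
\begin{verbatim}
import numpy as np
from astropy.stats import biweight_location

def preprocess(times, fluxes, flux_errs):
    snr = fluxes / flux_errs
    # Rough time of maximum: median time of the 5 highest-S/N points.
    t_max = np.median(times[np.argsort(snr)[-5:]])
    # Background: biweight estimate from points >250 days from t_max.
    far = np.abs(times - t_max) > 250.0
    fluxes = fluxes - biweight_location(fluxes[far])
    # Normalize by the maximum flux among S/N >= 5 observations.
    scale = np.max(fluxes[snr >= 5.0])
    fluxes, flux_errs = fluxes / scale, flux_errs / scale
    # Error floor of 0.01 (~1% of peak flux after normalization);
    # quadrature addition is our assumption.
    flux_errs = np.sqrt(flux_errs**2 + 0.01**2)
    return t_max, scale, fluxes, flux_errs
\end{verbatim}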
Astronomical light curves are typically sparsely sampled, but most
neural network architectures require inputs that are sampled on an
evenly-spaced grid. To be able to use these architectures, we evaluate
our light curves on a grid following a procedure similar to the
one described in \citet{pasquet19}. Note that astronomical observations
from a given telescope typically only occur during a short time interval
each night when the target is near the zenith. All observations of the
same target will be separated in time by some integer number
of sidereal days with residuals of only $\sim$1 hour. We can therefore
evaluate the light curve on a grid of sidereal days and preserve almost all of
the temporal information.
For each transient, we build a grid of observations that has a length
of 300 sidereal days centered on the estimated time of maximum light.
The grid has rows for the flux and inverse flux variance in each bandpass.
For nights where the transient was observed in a given bandpass, we include the observed
flux value and inverse flux variance in the grid. For nights when the transient was
not observed, we simply input zero for both the flux and inverse flux variance. We also include
one row in the grid for the redshift, with the same value repeated at all times. After this
procedure, our light curves are represented by a two-dimensional grid of size
$300\times (2 N + 1)$ where $N$ is the number of bandpasses that we are modeling.
We can then use standard neural network architectures to process our light
curves.
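A sketch of this gridding step is shown below; the sidereal-day constant and the row/column conventions are our assumptions:
\begin{verbatim}
import numpy as np

SIDEREAL_DAY = 0.99727  # days (approximate)

def build_grid(times, fluxes, flux_vars, band_idx, redshift,
               t_max, n_bands, length=300):
    # Grid of shape (300, 2N + 1): flux and inverse-variance rows
    # per band, plus one column repeating the redshift.
    grid = np.zeros((length, 2 * n_bands + 1))
    grid[:, -1] = redshift
    for t, f, v, b in zip(times, fluxes, flux_vars, band_idx):
        row = int(round((t - t_max) / SIDEREAL_DAY)) + length // 2
        if 0 <= row < length:
            grid[row, 2 * b] = f            # flux in band b
            grid[row, 2 * b + 1] = 1.0 / v  # inverse flux variance
    return grid  # unobserved nights remain zero in both entries
\end{verbatim}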
\section{Methods} \label{sec:methods}
\subsection{Overview}
The overall design goal for our model is to build a representation of the
diversity of transients that disentangles intrinsic diversity related
to the physics of the transient from diversity related to the symmetries
of how the transient was observed. A transient should be assigned
the same intrinsic representation regardless of how it was observed.
In particular, we would like our model to be invariant to redshift, meaning
that a transient is assigned the same intrinsic representation
regardless of the redshift that it is observed at.
Many different approaches exist for modeling disentangled representations with
autoencoders \citep{tschannen18}. These approaches mostly rely on either
using a supervised training procedure (which would require labels that we do not have)
\citep[e.g.][]{kulkarni15} or using regularization to encourage an implicit disentangled
representation \citep[e.g.][]{higgins17}. We adopt an approach similar
to that of spatial-VAE \citep{bepler19} where we generate a disentangled representation
by explicitly including known symmetries in our generative model.
Our generative model first uses a neural network to predict the intrinsic spectra of a given transient
as a function of three intrinsic latent variables. The intrinsic spectra are then passed
through a physics layer that explicitly models how the light propagates through the universe
and is observed. The physics layer also takes as input a set of explicit latent
variables and metadata describing the observing conditions. A schematic overview of our model
is shown in Figure~\ref{fig:schematic}.
\begin{figure*}
\plotone{model_schematic.pdf}
\caption{
Schematic overview of the ParSNIP model. An encoder model predicts the posterior distribution over
a set of latent variables for a given input light curve. We label a subset of these
latent variables as intrinsic latent variables, and use them as input to an intrinsic decoder
model that predicts the full intrinsic time-varying spectrum of the transient. A physics layer
then describes how light propagates through the universe and is observed on a detector.
The physics layer takes as input a set of explicit latent variables and metadata
describing the observing conditions. The encoder and intrinsic decoder are implemented as
neural networks. The physics layer uses only physical relations with known
functional forms and it has no free model parameters.
}
\label{fig:schematic}
\end{figure*}
\subsection{Variational Autoencoders} \label{sec:vae}
To build this model, we developed a modified version of a VAE \citep{kingma14}.
As traditionally formulated, a VAE
is used to model some observed random variables $\by$ in terms of some
unobserved random variables $\bs$. We assume that $\bs$ are drawn from some
prior distribution
$p_\btheta(\bs)$, and that $\by$ are then drawn from
the conditional distribution $p_\btheta(\by|\bs)$.
These distributions are assumed to be part of some family of
distributions parameterized by the hyperparameters $\btheta$.
The posterior distribution $p_\btheta(\bs|\by)$
is in general intractable. Instead of modeling it directly, in a
VAE we instead assume that the posterior can be
parameterized using some family of distributions $q_\bphi(\bs|\by)$
with hyperparameters $\bphi$. As shown in \citet{kingma14}, the
marginal likelihood of a given observation $\by$ is then bounded by:
\begin{align}
\log p_\btheta(\by) & \geq \mathbb{E}_{q_\bphi(\bs|\by)}\left[ \log p_\btheta(\by|\bs) \right] \nonumber \\
& \indent - D_{KL}(q_\bphi (\bs|\by) || p_\btheta(\bs))
\label{eq:elbo}
\end{align}
The right side of this equation is referred to as the ``evidence lower bound'' or ELBO.
The first term of the ELBO captures the likelihood of the data under our model, and can
be interpreted as the reconstruction error of the model. The second term represents the
Kullback-Leibler divergence \citep{kullback51} between the approximate posterior
$q_\bphi(\bs|\by)$ and the prior $p_\btheta(\bs)$, and
is effectively a constraint on the form of the approximate posterior.
Typically, an isotropic multivariate Gaussian is assumed for the prior over the latent
representation $\bs$:
\begin{align}
p_\btheta(\bs) = \mathcal{N}(\bm{0}, \bm{I})
\end{align}
and $p_\btheta(\by|\bs)$ is assumed to be described by a Gaussian distribution
with a diagonal covariance matrix whose mean is given by some deterministic function
$d_\theta(\bs)$:
\begin{align}
p_\btheta(\by|\bs) = \mathcal{N}(d_\btheta(\bs), \sigma_\by^2 \bm{I})
\end{align}
Finally, we assume that $q_\bphi(\bs|\by)$ is described by a Gaussian distribution
with a diagonal covariance matrix:
\begin{align}
\label{eq:encoder}
q_\bphi(\bs|\by) = \mathcal{N}(\mu_\bphi(\by), \sigma_\bphi(\by)^2 \bm{I})
\end{align}
where $\mu_\bphi(\by)$ and $\sigma_\bphi(\by)$ are some deterministic functions
of $\by$. The validity of this assumption will be discussed in
Section~\ref{sec:posteriors}.
Training a VAE then consists of finding appropriate functions for $d_\btheta(\bs)$,
$\mu_\bphi(\by)$, and $\sigma_\bphi(\by)$ that maximize the ELBO in
Equation~\ref{eq:elbo}. These functions are typically implemented as neural
networks. A VAE has an architecture that resembles a classical autoencoder.
The function $\mu_\bphi(\by)$ is analogous to the encoder that takes a set of
observations $\by$ and outputs a location in the latent representation $\bs$. $d_\btheta(\bs)$
can be interpreted as a decoder that generates a forward model of the observations
from the latent representation. The full decoder model of the VAE given by
\begin{align}
p_\btheta(\by,\bs) = p_\btheta(\by|\bs) p_\btheta(\bs)
\end{align}
is a generative model.
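Under these Gaussian assumptions both ELBO terms have closed forms, so the training loss can be written in a few lines. The sketch below is our own (up to additive constants) and assumes a reparameterized sample of $\bs$ has already been passed through the decoder to produce the reconstruction:
\begin{verbatim}
import torch

def negative_elbo(y, y_recon, sigma_y, mu, log_sigma):
    # Our sketch of the loss implied by Equation (ELBO), up to
    # additive constants; y_recon = d_theta(mu + sigma * eps).
    # Gaussian log-likelihood term (reconstruction error).
    recon = 0.5 * ((y - y_recon) / sigma_y).pow(2).sum()
    # Closed-form KL divergence between N(mu, sigma^2 I) and N(0, I).
    kl = 0.5 * (mu.pow(2) + (2 * log_sigma).exp()
                - 2 * log_sigma - 1).sum()
    return recon + kl  # minimizing this maximizes the ELBO
\end{verbatim}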
\subsection{Incorporating Symmetries into Variational Autoencoders} \label{sec:vae_symmetries}
In this work we modify the standard VAE architecture to
explicitly incorporate symmetries into our model. Note that the functions $d_\btheta(\bs)$,
$\mu_\bphi(\by)$, and $\sigma_\bphi(\by)$ are arbitrary functions. Rather than implementing
these purely as neural networks, we instead use a hybrid implementation where we explicitly
model symmetries of the observations where the functional form is known, and where
we use neural networks for parts of the model whose functional forms are a priori unknown.
Formally, we identify the subset of latent variables that represent known symmetries
of the model as ``explicit latent variables'', and we label them $\bs_e$.
We call the remaining latent variables that represent diversity whose functional
form is unknown ``intrinsic latent variables'', and we label them $\bs_i$.
Then $\bs = \{\bs_e, \bs_i\}$. We also allow for additional
metadata of the observations $\mm$ that describe known observing conditions
(e.g. the time that an observation was taken at). With these definitions,
we model the decoder as
\begin{align}
d_\btheta(\bs_i, \bs_e, \mm) = f_1(\bs_e, \mm, d_{\text{int},\btheta}(f_2(\bs_e, \bs_i, \mm)))
\end{align}
where $d_{\text{int},\btheta}$ is an arbitrary function that models the intrinsic diversity.
$f_1$ and $f_2$ are fixed functions that apply known
symmetries to the inputs and outputs of $d_{\text{int},\btheta}$.
To model transient light curves, we choose to use explicit functional forms for the amplitude
of the light curve $A$, the color (capturing dust reddening of the light curve) $c$,
and a reference time for the light curve $t_0$. Each observation
also has associated metadata for the redshift of the light curve $z$,
the time of the observation $\bt$, and the bandpass that was used for the observation
$\bB(\lambda)$. In our formalism, we have $\bs_e = \{A, c, t_0\}$ and
$\mm = \{z, \bt, \bB(\lambda)\}$. We incorporate the known effects of each of these terms
on astronomical observations into the $f_1$ and $f_2$ functions to produce the following
decoder
\begin{align}
\label{eq:decoder}
& d_\btheta(\bs_i, A, c, t_0, z, \bt, \bB(\lambda)) \\
\nonumber & = \int d\lambda \cdot A \cdot \bB(\lambda) \cdot C(c, \frac{\lambda}{1 + z}) \cdot d_{\text{int},\btheta}\left(\frac{\bt - t_0}{1 + z}, \frac{\lambda}{1 + z}, \bs_i\right)
\end{align}
where $C(c, \lambda)$ is a fixed dust extinction law that is applied in the rest frame, and
$\bB(\lambda)$ are the bandpasses used for each observation with known throughput.
We choose to use the color law $C(c, \lambda)$ from \citet{fitzpatrick07} in this
work with $R_V = 3.1$ as implemented in the \texttt{extinction} package \citep{barbary16b}.
$d_{\text{int},\btheta}$ is then effectively modeling the time-evolution of the rest frame
spectrum of the transient.
The SALT2 model that is often used to fit SNe~Ia \citep{guy07} is
actually a special case of the general model described in Equation~\ref{eq:decoder}
with the a single intrinsic latent parameter $x_1$ describing the width of the light curve,
i.e. $\bs_i = \{x_1\}$. For SALT2, the intrinsic diversity is modeled by a linear sum
of two spline template functions $M_0(\lambda, t)$ and $M_1(\lambda, t)$:
\begin{align}
d_{\text{int},\text{SALT2}}(\bt, \lambda, x_1) = M_0(\lambda, \bt) + x_1 M_1(\lambda, \bt)
\end{align}
The SALT2 model provides a good description of most SNe~Ia, but its model of intrinsic
diversity is linear, which is insufficient for some SNe~Ia \citep{boone21a, boone21b}
and most other kinds of transients.
\subsection{Modeling Spectra with a Neural Network}
In order to model a wide range of transients, we choose to implement
$d_{\text{int},\btheta}$ as a multilayer perceptron (MLP). This
function takes as input both the time of an observation and the intrinsic latent variables
$\bs_i$. A large part of the spectrum is typically needed to compute bandpass
photometry, so for computational reasons we choose to predict the full restframe spectrum with
the MLP on a grid of wavelength elements rather than having the wavelength as an additional input.
Synthetic photometry can then be computed by numerically evaluating the integral in
Equation~\ref{eq:decoder}. Labeling the index of each restframe wavelength bin as $n$, our
implementation of the decoder can then be written as
\begin{align}
& d_\btheta(\bs_i, A, c, t_0, z, \bt, \bB(\lambda)) \\
\nonumber & = \sum_{n} A \cdot \bB'_n \cdot C(c, \lambda'_n) \cdot d_{\text{int},\btheta,n}\left(\frac{\bt - t_0}{1 + z}, \bs_i\right)
\end{align}
Here $\lambda'_n$ is the central rest-frame wavelength for bin $n$. $\bB'_n$ are the
integrated bandpasses for bin $n$ which can be computed by integrating the bandpasses
over each wavelength bin
\begin{align}
\label{eq:integrated_bandpasses}
\bB'_n = \int_{\lambda'_{n,\text{min}}}^{\lambda'_{n,\text{max}}} d\lambda \cdot \bB((1 + z) \lambda)
\end{align}
We choose to use a grid of 300 spectral elements with rest frame wavelengths
that are logarithmically-spaced between 1,000 and 11,000~\AA.
Logarithmically-spaced wavelength elements simplify computing the integrated bandpasses
$\bB'_n$ because redshifting the bandpass is equivalent to a translation of the wavelength
elements. We precompute the integrated bandpasses $\bB'_n$ by numerically evaluating the
integral in Equation~\ref{eq:integrated_bandpasses} with 51 times oversampling.
With this configuration, we find that the numerical uncertainties in our synthetic photometry
are $\sim$0.0002~mag which is much smaller than typical measurement uncertainties.
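As a minimal numerical sketch of this discretization, the following assumes a hypothetical callable \texttt{band\_throughput} for $\bB(\lambda)$ along with precomputed arrays \texttt{color\_law} for $C(c, \lambda'_n)$ and \texttt{model\_spectrum} for the MLP output:
\begin{verbatim}
import numpy as np

# 300 logarithmically-spaced rest-frame wavelength bins between
# 1,000 and 11,000 Angstroms.
bin_edges = np.geomspace(1000.0, 11000.0, 301)

def integrated_bandpass(band_throughput, z, oversampling=51):
    """Precompute B'_n by integrating the observer-frame throughput
    over each rest-frame wavelength bin."""
    result = np.empty(len(bin_edges) - 1)
    for n in range(len(result)):
        lam = np.linspace(bin_edges[n], bin_edges[n + 1], oversampling)
        result[n] = np.trapz(band_throughput((1 + z) * lam), lam)
    return result

def synthetic_photometry(A, integrated_B, color_law, model_spectrum):
    """Evaluate the discretized decoder sum over wavelength bins."""
    return A * np.sum(integrated_B * color_law * model_spectrum)
\end{verbatim}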
To summarize, we implement $d_{\text{int},\btheta}$ as an MLP that takes as inputs
the intrinsic latent variables $\bs_i$ and a time.
The output of this MLP is a restframe spectrum on a grid of 300 spectral elements.
We then use an explicit physics model of host-galaxy dust,
redshift, and the bandpasses of the telescope to calculate synthetic photometry from
the predicted spectrum and compare with observations. The MLP has several hyperparameters,
namely the number of layers, width of each layer, and
activation function. The values of these hyperparameters will be discussed in
Section~\ref{sec:hyperparameters}.
With this architecture, the full decoder model $d_{\btheta}$ naturally handles all
of the previously-described symmetries of how the transient was observed. We can evaluate
the full model in different bandpasses or at different redshifts without making
any changes to the intrinsic model or to the intrinsic latent variables $\bs_i$.
This is a major difference from previous autoencoder models that predict the
light curve in specific bands directly, and that therefore have a representation
where the intrinsic properties are entangled with properties of how the observations
were taken \citep[e.g.][]{pasquet19, villar20}.
\subsection{Symmetry-Aware Encoders}
We choose to model the posterior distribution of the latent variables for a given light
curve as a Gaussian distribution with a diagonal covariance matrix, as shown in Equation~\ref{eq:encoder}. We assume a
$\mathcal{N}(\bm{0}, \bm{I})$ prior for the intrinsic latent variables. For the
color $c$, we assume a weak prior of $\mathcal{N}(0, 0.3^2)$, which is much broader than
the true dust distribution. Similarly, we assume a weak prior of $\mathcal{N}(0, (20~\text{days})^2)$
for the reference time $t_0$ which should be much wider than the uncertainty from the
estimated times of maximum light from the preprocessing step in Section~\ref{sec:preprocessing}.
As will be discussed in Section~\ref{sec:marginalizingamplitude}, we marginalize over the
amplitude and evaluate its posterior directly, so it is ultimately not predicted by our encoder.
We implement the encoder using residual blocks with convolutional layers
\citep{he16}. Each block has two convolutional layers with a kernel size of 3, and we
use ReLU activation functions. A schematic of each block is shown in
Figure~\ref{fig:residual_block}. We chain a series of seven of these blocks together with
increasing dilation sizes so that after all seven blocks the receptive field of the network
contains the full light curve. In a traditional VAE, the posteriors of the
latent variables would be predicted
using fully connected layers. While such a network should be theoretically capable of
representing the encoding functions for our model, in
practice, we find that these networks are very unstable and difficult to train. To address
this challenge, we add custom layers that are aware of the symmetries in our data
to predict the explicit latent variables.
\begin{figure}
\epsscale{0.5}
\plotone{residual_block.pdf}
\caption{
Architecture of the residual blocks used for the encoder of our model. Each
block takes as input a 2D array consisting of $N$ channels at $T$ times. Two one-dimensional
convolutional layers are applied along the time dimension with a kernel size of 3 and a dilation of $D$
elements. The input is then padded with zeros in the channel dimension and added to the output of the
second convolutional layer to produce a 2D output with $C$ channels at $T$ times. We label this
block as ``ResidualBlock(D, C)''.
}
\label{fig:residual_block}
\end{figure}
\subsection{A Time-Invariant Encoder}
The main challenge for convergence is computing the reference time $t_0$ for a light
curve. At the start of training, the decoder will not be producing functions that look like
light curves, so the gradient of the loss function with respect to $t_0$ will not be meaningful
and the training will progress very slowly if at all.
We address this by constructing a novel neural network architecture that is mathematically
invariant to time translation.
The encoder predicts the parameters $\mu$ and $\sigma^2$ describing the
posterior distributions over the latent variables in our model. To predict the mean reference time $\mu_{t_0}$, we
require our encoder to be a time-equivariant function, i.e. a function $f(t)$ that satisfies
\begin{align}
f(t + \Delta t) = f(t) + \Delta t
\end{align}
For all of the other posterior parameters, we require our encoder to be a time-invariant
function, i.e. one that satisfies:
\begin{align}
f(t + \Delta t) = f(t)
\end{align}
Convolutional layers on their own are equivariant:
if the input to a convolutional layer is shifted, the output is identical but shifted by the same
amount as the input. By applying a series of the residual blocks shown in
Figure~\ref{fig:residual_block} without applying any pooling operations, we obtain an output
that is mathematically equivariant to any shift in the input light curve by an integer number of days.
The output of this layer is a two-dimensional array that has a length that is identical to the
length of the input light curve (in our case 300 days) and 200 channels.
To produce a time-invariant function, we throw away all of the time information by applying a
global max-pooling layer to this array.
This operation takes the maximum value of the array over the time dimension, collapsing it to
produce a one-dimensional array with 200 channels. We use
fully-connected layers over the collapsed array to predict all of the parameters $\mu$ and $\sigma^2$
describing the posterior distributions over the latent variables in our model except for the mean
reference time $\mu_{t_0}$. This procedure is mathematically invariant to any shift in time
of the input light curves by an integer number of days.
To produce a time-equivariant function, we begin with the previously-discussed (300 days) $\times$ (200 channels) array
that was output from the residual blocks. We use a fully-connected layer applied to each bin in the
time dimension individually to collapse the channel dimension and obtain a one-dimensional vector $v$ with a
single value for each of the 300 time bins. We then apply a softmax
transformation to this vector to obtain a 300-dimensional vector that sums to one. The entries in this
vector can be thought of as weights for whether $\mu_{t_0}$ appears in a specific bin. We predict the
value of $\mu_{t_0}$ by taking the dot product of this vector with the index of each element in the vector
(i.e. a vector of increasing integers from 1 to 300).
\begin{align}
\mu_{t_0} = \text{SoftMax}[v] \cdot \text{Index}[v]
\end{align}
Mathematically, if the input light curve is shifted by some integer number of days, then this
function outputs a value for $\mu_{t_0}$ that is incremented by the same number of days as desired.
We call this layer a ``time-indexing layer''. By combining the time-indexing layer with
the global max-pooling layer, we produce an encoder that is mathematically invariant to
a translation in the input of any integer number of days with appropriate transformations for
all of the latent variables. Note that we do not force the reference time to correspond
with any specific feature of the light curve other than having a weak prior on the time
of maximum light. We allow the model to automatically determine a relevant definition
of reference time on its own.
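A minimal PyTorch sketch of the time-indexing layer, assuming the channel dimension has already been collapsed to a single score per time bin:
\begin{verbatim}
import torch

def time_indexing_layer(v):
    """Predict mu_t0 from per-time-bin scores v of shape (batch, T).

    The softmax weights each time bin, and the dot product with the
    bin indices gives a time-equivariant output: rolling v by k bins
    shifts the prediction by k, up to edge effects.
    """
    weights = torch.softmax(v, dim=-1)
    index = torch.arange(1, v.shape[-1] + 1, dtype=v.dtype,
                         device=v.device)
    return (weights * index).sum(dim=-1)
\end{verbatim}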
\subsection{Marginalizing Over the Amplitude} \label{sec:marginalizingamplitude}
With the time-indexing layer the model is able to converge and give reasonable predictions.
However, we find empirically that it is challenging for the model to predict the amplitude $A$.
The amplitude corresponds to an overall scaling of the model, and is degenerate with a global
scale of all of the outputs. We find that we can significantly improve the convergence of the model
by modifying the architecture to marginalize over the amplitude rather than have it as an explicit
parameter. To do this, note that given all of the other latent parameters $\bs'$, we can analytically
evaluate the conditional posterior distribution of $A$ as:
\begin{align}
p(A|\bs',\by) = \frac{p(\by|\bs',A)p(A|\bs')}{p(\by|\bs')}
\end{align}
Here $p(\by|\bs')$ is a constant that does not depend on $A$. Assuming a Jeffreys (flat) prior
for $p(A|\bs')$, this becomes:
\begin{align}
p(A|\bs',\by) & \propto p(\by|\bs',A) \\
& \propto \prod \exp\left(-\frac{(\by - A \cdot d_\btheta(\bs', A=1))^2}{2 \sigma_\by^2}\right) \\
& = \frac{1}{\sqrt{2 \pi \sigma_A^2}} \exp\left(-\frac{(A - \mu_A)^2}{2 \sigma_A^2}\right)
\label{eq:amplitude_posterior}
\end{align}
where:
\begin{align}
\sigma_A^2 &= \frac{1}{\sum d_\btheta(\bs', A=1)^2 / \sigma_\by^2} \\
\mu_A &= \frac{\sum \by \cdot d_\btheta(\bs', A=1) / \sigma_\by^2}{\sum d_\btheta(\bs', A=1)^2 / \sigma_\by^2}
\end{align}
We can then evaluate the log-likelihood $\log p_\btheta(\by|\bs')$ necessary for the VAE loss function
in Equation~\ref{eq:elbo} as:
\begin{align}
\log p_\btheta(\by|\bs') &= \log \int dA \cdot p(\by|\bs',A) \cdot p(A|\bs') \\
&= \log \int dA \cdot p(\by|\bs',A) + C
\end{align}
where $C$ is some arbitrary normalization constant that can be ignored because it will not affect the
minimization. We estimate this integral numerically using importance sampling with $A$ drawn from its posterior
distribution shown in Equation~\ref{eq:amplitude_posterior}. As is typically done when training
deep learning models, we use a single sample for our numerical estimate of the integral during training.
With this procedure, we analytically marginalize over the amplitude $A$ and it is no longer an explicit
parameter of our model. The Jeffreys prior that we impose on $p(A|\bs')$ is improper, which
means that we are not learning a generative model for the amplitude. This is not a problem for most
science applications as discussed in Section~\ref{sec:priors}, and a similar approach
is taken for most generative models of SNe~Ia \citep[e.g. SALT2;][]{guy07}.
We find that this model that marginalizes over the amplitude is much easier to train than a model
where the amplitude is an explicit parameter.
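For concreteness, a sketch of the analytic amplitude posterior; \texttt{model\_unit} is a hypothetical name for $d_\btheta(\bs', A=1)$ evaluated at the observations:
\begin{verbatim}
import torch

def amplitude_posterior(y, sigma_y, model_unit):
    """Gaussian posterior over A given all other latents, assuming a
    flat improper prior on A."""
    weight = model_unit / sigma_y**2
    sigma_A2 = 1.0 / (model_unit * weight).sum()
    mu_A = (y * weight).sum() * sigma_A2
    return mu_A, sigma_A2

# A single importance sample during training:
# A = mu_A + sigma_A2.sqrt() * torch.randn(())
\end{verbatim}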
\subsection{Regularization of the Spectra} \label{sec:spectra_regularization}
As discussed in Section~\ref{sec:vae_symmetries}, we are training our model to predict the full
spectra of transients. We are only training using photometry in this work, which means that we
are effectively performing a deconvolution. As a result, the high-frequency components of the spectra will
not be well-constrained. To prevent the model from adding high-frequency noise to the spectra, we
add a regularization term to our loss function following a procedure similar to the one used by
\citet{crenshaw20} for estimating galaxy spectra from photometry. Given the set of wavelength bins
with index $n$, we apply a penalty on the flux difference between adjacent bins given by:
\begin{align}
\eta \sum_n \left( \frac{d_{\text{int},\btheta,n}(\bt,\bs_i) - d_{\text{int},\btheta,n+1}(\bt,\bs_i)}
{d_{\text{int},\btheta,n}(\bt,\bs_i) + d_{\text{int},\btheta,n+1}(\bt,\bs_i)} \right)^2
\end{align}
Here $\eta$ is a tunable parameter that can be adjusted to determine the strength of the
regularization. We apply this penalty to the full spectra that are predicted at every time where
we have an observation. We find empirically that a value of
$\eta = 0.001$ results in a reasonable balance between reducing noise and oversmoothing the spectrum.
We find that this regularization term contributes only $\sim$0.01\% to the loss function at the
end of training for both of the datasets that we consider.
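A sketch of this penalty, assuming the predicted spectra are PyTorch tensors with the wavelength bins along their last dimension:
\begin{verbatim}
def smoothness_penalty(spectra, eta=0.001):
    """Penalty on fractional flux differences between adjacent
    wavelength bins."""
    diff = spectra[..., :-1] - spectra[..., 1:]
    total = spectra[..., :-1] + spectra[..., 1:]
    return eta * ((diff / total) ** 2).sum()
\end{verbatim}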
\subsection{Augmentation} \label{sec:augmentation}
As shown in \citet{boone19}, augmentation can be used to generate a wide range of different light
curves from observations of a single transient. When training our model, we apply a series of
different random transformations to our data. These transformations are all applied independently,
and we choose different transformations in each epoch; a code sketch of the full procedure follows the list.
\begin{itemize}
\item Shift the observations by a constant offset sampled from a Normal[0, 20 days] distribution.
\item Scale the observations by a constant factor sampled from a Lognormal[0, 0.5] distribution.
\item Drop observations randomly, with the fraction to drop sampled from a Uniform[0, 0.5] distribution.
\item With probability 50\%, add noise to the observations with a standard deviation sampled from a Lognormal[-4, 1] distribution.
\end{itemize}
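A sketch of one augmentation pass over a light curve; whether the flux uncertainties are rescaled with the fluxes and inflated when noise is added is an assumption of this sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def augment(times, fluxes, flux_errs):
    """Apply one realization of the random transformations."""
    # Shift by a constant offset ~ Normal(0, 20 days).
    times = times + rng.normal(0.0, 20.0)
    # Scale by a constant factor ~ Lognormal(0, 0.5).
    scale = rng.lognormal(0.0, 0.5)
    fluxes, flux_errs = fluxes * scale, flux_errs * scale
    # Drop a random fraction ~ Uniform(0, 0.5) of the observations.
    keep = rng.random(len(times)) > rng.uniform(0.0, 0.5)
    times, fluxes, flux_errs = times[keep], fluxes[keep], flux_errs[keep]
    # With probability 50%, add noise with std ~ Lognormal(-4, 1).
    if rng.random() < 0.5:
        noise = rng.lognormal(-4.0, 1.0)
        fluxes = fluxes + rng.normal(0.0, noise, size=len(fluxes))
        flux_errs = np.sqrt(flux_errs**2 + noise**2)
    return times, fluxes, flux_errs
\end{verbatim}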
These transformations produce a wide range of light curves, with some resembling the original light
curves and others at much lower signal-to-noise with significantly fewer observations. In contrast
to \citet{boone19}, we do not attempt to augment our light curves in redshift as that cannot be
done without making assumptions about the shape of the spectrum. We also do not attempt to reproduce
the observational properties of the instrument. Instead, we simply attempt to cover a wide range of
signal-to-noise ratios ranging from the original values to well below the
detection threshold.
\subsection{The ParSNIP model} \label{sec:parsnip_model}
We combine all of the different elements described in this section to build a hybrid physics-VAE
model that can describe transient light curves. We call this model ``ParSNIP'' (Parametrization
of SuperNova Intrinsic Properties). A schematic of the full ParSNIP architecture is shown in
Figure~\ref{fig:parsnip_architecture}.
\begin{figure}
\epsscale{0.85}
\plotone{parsnip_architecture.pdf}
\caption{
Architecture of the ParSNIP model.
}
\label{fig:parsnip_architecture}
\end{figure}
We implemented this model in PyTorch \citep{pytorch}. We fit the model separately
to the PS1 and PLAsTiCC datasets of transient light curves described in Section~\ref{sec:dataset}
using the Adam optimizer \citep{kingma15} with a learning rate of $10^{-3}$ and batch sizes of $128$.
We use the \texttt{ReduceLROnPlateau} feature in
PyTorch to scale the learning rate by a factor of $0.5$ whenever the optimizer fails to
reduce the loss function after 10 epochs, and we train until the learning rate is decreased to
below $10^{-5}$. This training procedure takes approximately 300 epochs. The computation
times and requirements for both training and inference are discussed in
Section~\ref{sec:computational_requirements}.
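A sketch of this training configuration; \texttt{model.loss} is a hypothetical method evaluating the negative ELBO on a batch, and \texttt{train\_loader} a data loader with a batch size of 128:
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=10)

# Train until the learning rate drops below 1e-5.
while optimizer.param_groups[0]["lr"] >= 1e-5:
    epoch_loss = 0.0
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model.loss(batch)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)
\end{verbatim}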
\subsection{Hyperparameters}
\label{sec:hyperparameters}
Our model has many different hyperparameters that can be tuned, including the architecture of the
network and the parameters of the optimizer. We trained 56 models on the PS1 dataset using a wide
range of different hyperparameter values. We trained on 90\% of the dataset, and kept 10\% back
for validation. We find that the model is not highly sensitive to most hyperparameters other than
the learning rate, with the loss function typically varying by $<5\%$. The results are consistent
for the training and validation sets. When the model performs similarly well on two different
configurations, we choose the one that is more computationally efficient. We show the range of
hyperparameter values that we
considered along with the optimal values from our tests in Table~\ref{tab:hyperparameters}.
\begin{deluxetable*}{cc}
\tablecaption{Hyperparameter optimization for the ParSNIP model. We show the range of values
considered for each hyperparameter along with the value that resulted in the model with the
smallest loss function in bold. When arrays are given, each element in the array refers to
the hyperparameter value for a different block in the architecture. The optimal configuration
is shown in Figure~\ref{fig:parsnip_architecture}.}
\label{tab:hyperparameters}
\tablehead{\colhead{Hyperparameter} & \colhead{Value}}
\startdata
Batch size & 16, 32, 64, \textbf{128}, 256, 512\\
Learning rate & $10^{-4}$, $2\times10^{-4}$, $5\times10^{-4}$, $\bm{10^{-3}}$, $2\times10^{-3}$, $5\times10^{-3}$, $10^{-2}$ \\
Scheduler factor & 0.1, 0.2, \textbf{0.5} \\
Encoder convolutional block types & Conv1d, \textbf{ResidualBlock} \\
Encoder convolutional block widths & [20, 40, 60, 80, 100, 100, 100] \\
& \textbf{[40, 80, 120, 160, 200, 200, 200]} \\
& [32, 64, 128, 128, 128, 128, 128] \\
& [32, 64, 128, 256, 256, 256, 256] \\
& [32, 64, 128, 256, 512, 512, 512] \\
Encoder fully connected block sizes & 100, 128, \textbf{200}, 256, 400, 512 \\
Decoder architecture & [100] \\
& [50, 100] \\
& [100, 200] \\
& \textbf{[40, 80, 160]} \\
& [64, 128, 256] \\
& [32, 64, 128, 256] \\
\enddata
\end{deluxetable*}
\subsection{Dimensionality of the Intrinsic Latent Representation}
Another important hyperparameter is the dimensionality of the intrinsic
latent representation. We trained the ParSNIP model with varying numbers of intrinsic
dimensions and with all of the other hyperparameters kept at their optimal
values shown in Table~\ref{tab:hyperparameters}. For each model, we evaluated
the VAE loss function shown in Equation~\ref{eq:elbo} on both the training
and validation subsets of the PS1 dataset. The results of this procedure
are shown in Figure~\ref{fig:vae_dimensionality}.
\begin{figure}
\plotone{vae_dimensionality}
\caption{
VAE loss function for different dimensionalities of the intrinsic latent representation.
For each number of dimensions we train three models (shown as black dots), and
we show the mean value of the loss function across all three models with
colored markers. Beyond three dimensions we see little improvement in the VAE loss function.
}
\label{fig:vae_dimensionality}
\end{figure}
We find that the loss function improves when increasing the dimensionality up to
three for both the training and validation sets, but there is little benefit from
adding more dimensions beyond three. For models with large numbers
of dimensions, we find that most of the dimensions are not used by the VAE, and
it simply outputs uninformative $\mathcal{N}(0, 1)$ posteriors for
all of the transients in some dimensions. We choose to use a three-dimensional
model for the rest of our experiments.
\section{Model Performance} \label{sec:performance}
\subsection{Reproducing Light Curves}
To study how well our model can reproduce light curves, we trained the ParSNIP
model on 90\% of the light curves in the PS1 dataset
described in Section~\ref{sec:dataset} with a randomly selected
10\% of the light curves held out for validation. We show examples of the
models for different light curves in the validation subset in
Figure~\ref{fig:model_examples}. We find that the ParSNIP model is able to
generalize well on the PS1 dataset with accurate models for the vast majority
of transients.
\begin{figure*}
\epsscale{1.15}
\plotone{model_examples}
\caption{
Examples of out-of-sample predictions of the ParSNIP model on the validation subset of the PS1 dataset.
These light curves were not included in the dataset that was used to train the model.
Each panel shows the predictions for a different light curve. The observations are
shown as individual points with their uncertainties, and the different colors correspond
to the different bands as shown in the legend. The mean model predictions are shown as
a solid line, and the 1-sigma uncertainties from varying the VAE latent parameters
are shown with shaded contours.
}
\label{fig:model_examples}
\end{figure*}
We calculated the model residuals for all of the observations in our dataset.
For the PS1 training set, we find that the model residuals are consistent with the
statistical uncertainties, which suggests that we may have somewhat
overfit the training set. For the validation set, we find
that the distribution of the residuals has a tight core with a dispersion of $\sim$0.06~mag
when statistical uncertainties are taken into account.
We do however see large residuals for some light curves, particularly ones of
transient types that are not well represented in the training set. For observations
with a statistical uncertainty of less than 0.05~mag, we find
that 94\% of observations have residuals of less than 0.2~mag, and $99.2$\% of observations
have residuals of less than 0.5~mag.
We performed the same comparison for a version of the ParSNIP model trained
on 90\% of the light curves in the PLAsTiCC dataset. In this case, we find that the model residuals have a
tight core with a dispersion of $\sim$0.04~mag
when statistical uncertainties are taken into account.
For observations with a statistical uncertainty of less than 0.05~mag, we find
that 97\% of observations have residuals of less
than 0.2~mag and $99.7$\% of observations have residuals of less than 0.5~mag,
with no major differences between the training and validation sets. The PLAsTiCC dataset was
simulated and the subset that we used to train the model contains more than 100 times as
many transients as the PS1 dataset, which likely explains the difference in training/validation
performance between the PS1 and PLAsTiCC datasets.
Overall, we find that the ParSNIP model is able to produce good models for the vast majority of
the light curves in both datasets.
\subsection{The ParSNIP Intrinsic Latent Space}
As described in Section~\ref{sec:vae_symmetries}, the ParSNIP model learns a
three dimensional intrinsic latent space $\bs_i$ describing the intrinsic diversity of transients.
To study the structure of the learned latent spaces, we look at where transients with known labels
are located in the latent space for both of our datasets. The results of this procedure
are shown in Figure~\ref{fig:representations}. Most of the different transient types
are well separated despite the fact that the type labels were not used when
training the ParSNIP model.
\begin{figure*}
\epsscale{1.15}
\plottwo{ps1_representation}{plasticc_representation}
\caption{
Visualization of the intrinsic latent spaces learned when trained on the PS1 (left panel) and PLAsTiCC
(right panel) datasets. We show two of the $\bs_i$ latent variables for each transient in the dataset,
and we color each transient by its type if known. We limit this plot to show a maximum of 200 transients
for each type to highlight
rarer types. Despite the fact that the types were not used when training the ParSNIP model,
the model groups transients of the same type at similar locations in the latent space. Note
that these plots show only two dimensions of the intrinsic latent space:
each light curve also has measurements of another $\bs_i$ latent variable, color, and amplitude.
}
\label{fig:representations}
\end{figure*}
For transients of the same type, we find that differences in the ParSNIP intrinsic latent
variables $\bs_i$ correspond to differences in the intrinsic properties
of the transients. This can easily be seen using the PLAsTiCC
dataset where all of the parameters that were used for simulating each
light curve are known. In the left panel of Figure~\ref{fig:submodel_representations},
we show that the ParSNIP latent space of superluminous supernovae is capturing the
variation in ejecta mass that went into the simulations. The SNe~II were simulated
using a small number of discrete templates, and we show where light curves simulated
from some of these templates are located in the latent space in the right panel of
Figure~\ref{fig:submodel_representations}. The ParSNIP model identifies these discrete
templates and clusters light curves simulated from each of them in the latent space.
\begin{figure*}
\plottwo{plasticc_slsn_ejecta_mass}{plasticc_snii_templates}
\caption{
Examples of diversity of the ParSNIP intrinsic representations for transients of the same type
in the PLAsTiCC dataset. Left panel: The variation of ParSNIP $s_1$ and $s_2$ latent variables for superluminous
supernovae is explained by differences in the ejecta mass parameter (shown in color) that went
into the simulations. Right panel: Some of the Type~II supernovae in the PLAsTiCC dataset
were simulated using discrete templates. We show the $s_2$ and $s_3$ latent variables for light curves
simulated from six of these templates, and find that the light curves simulated from each template
cluster at similar locations in the ParSNIP intrinsic latent space.
}
\label{fig:submodel_representations}
\end{figure*}
As discussed in Section~\ref{sec:methods}, the ParSNIP model was constructed to produce
an intrinsic latent space that is invariant to how the light curve was observed.
As expected, we find that well-measured light curves simulated with similar parameters
are embedded at very similar locations in the intrinsic latent space even when those light
curves are observed at different redshifts, with different brightnesses, or with
different amounts of host galaxy dust. There are no major correlations between any
of these properties of the observations and the intrinsic latent variables $\bs_i$.
To demonstrate the invariance of the ParSNIP model to redshift, we show the recovered intrinsic
latent space for the PLAsTiCC dataset in different redshift slices in Figure~\ref{fig:plasticc_representation_z}.
We find that the recovered latent space is almost identical in all of the different redshift
slices. There are minor differences
across redshift bins for some of the transient types. For example, the SNe~II have a much broader
distribution in the lower redshift bins compared to the higher redshift
bins. These differences are due to Malmquist bias rather than a limitation of the ParSNIP model.
At low redshifts nearly all SNe~II are detected, but at high redshifts only the SNe~II that
are intrinsically very bright are detected. For the PLAsTiCC dataset, this means that at low
redshifts the SNe~II come from a wide range of templates giving a wide distribution in latent
space, but at high redshifts the majority of the observed SNe~II come from a single template
(number 235) and are clustered tightly in the latent space.
\begin{figure*}
\epsscale{1.15}
\plotone{plasticc_representation_z}
\caption{
Visualization of the intrinsic latent space learned for the PLAsTiCC dataset in
different redshift
ranges. The redshift range of each panel is shown above the panel. In each panel, we show the
$s_1$ and $s_2$ latent variables for 200 transients of each type, and we color each point by its
type. The ParSNIP model was constructed to produce an intrinsic latent space that is invariant
to observing conditions such as redshift, and we find that the latent spaces are nearly
identical in each of the different redshift slices. The slight differences such as the
narrowing of the SNe II distribution at high redshifts are due to Malmquist bias rather than
a limitation of the ParSNIP model. Note that some transient types have a limited redshift
range in the PLAsTiCC simulations so they do not appear in some of the redshift slices.
}
\label{fig:plasticc_representation_z}
\end{figure*}
\subsection{Comparison with SALT2} \label{sec:salt_comparison}
The SALT2 model \citep{guy07} is commonly used to model the light curves of SNe~Ia.
This model is a linear model with one parameter $x_1$
describing the variability in the observed widths of SN~Ia light curves, and a second
parameter $c$ describing the differences in color (which will capture host galaxy dust).
As described in Section~\ref{sec:vae_symmetries}, the ParSNIP model is effectively a generalization
of the SALT2 model. In this subsection, we compare the performance of both of these models
for SNe~Ia. Note that the PLAsTiCC SN~Ia light curves were simulated using SALT2. For a fair
comparison, we only consider the version of the ParSNIP model that was trained
on real PS1 light curves and is thus independent of SALT2.
We investigated whether the latent space that the ParSNIP model learns
for SNe~Ia captures the $x_1$ and $c$ parameters of the SALT2 model.
To do this, we fit the SALT2 model to all of the light curves that were labeled as SNe~Ia
in the PS1 dataset and compared the results with the ParSNIP model on the same subset of light curves.
We limited our comparison to light curves that are well-fit by SALT2, which we define as having
a fit that converges, a best-fit SALT2 $x_1$ parameter between $-5$ and $+5$, an uncertainty on $x_1$ of
less than 0.5, an uncertainty on SALT2 $c$ of less than 0.1, and at least one observation before
maximum light.
The ordering of the intrinsic latent variables $\bs_i$ in the ParSNIP model is arbitrary. To effectively
compare our parameterization of the intrinsic diversity to $x_1$, we solve for the linear
combination of the ParSNIP latent variables $\bs_i$ that best models $x_1$:
\begin{align}
x_1^\text{ParSNIP} = \gamma_0 + \sum_n \gamma_n s_{i,n}
\end{align}
We solve for the coefficients $\gamma_n$ that minimize the sum of squared differences between
$x_1^\text{ParSNIP}$ and $x_1$. Note that we do not include measurement uncertainties in
this procedure because both the SALT2 and ParSNIP models were fit to the same observations
and the uncertainties on the $\bs_i$ and $x_1$ latent variables are likely very highly correlated.
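This is an ordinary least-squares problem; a minimal sketch of the fit:
\begin{verbatim}
import numpy as np

def fit_x1_mapping(s_i, x1):
    """Linear combination of intrinsic latents that best models x1.

    s_i: array of shape (N, 3) of ParSNIP intrinsic latents;
    x1: array of shape (N,) of best-fit SALT2 x1 values.
    """
    # Design matrix with a constant column for gamma_0.
    X = np.column_stack([np.ones(len(x1)), s_i])
    gamma, *_ = np.linalg.lstsq(X, x1, rcond=None)
    return gamma  # [gamma_0, gamma_1, gamma_2, gamma_3]
\end{verbatim}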
The results of this procedure are shown in the left panel of Figure~\ref{fig:salt_parameters}.
\begin{figure*}
\plottwo{salt_x1_comparison}{salt_c_comparison}
\caption{
Reproducing the SALT2 model parameters from the ParSNIP latent space.
Left panel: SALT2 $x_1$ as a function of the best-fit linear combination of the
ParSNIP intrinsic latent variables $s_i$. Right panel: SALT2 color as a function of the ParSNIP color
with a constant offset.
The one-to-one line is shown in blue in both plots. We are able to recover both SALT2
$x_1$ and $c$ from the ParSNIP latent space with high accuracy.
}
\label{fig:salt_parameters}
\end{figure*}
In the right panel of Figure~\ref{fig:salt_parameters}, we show the SALT2 $c$ parameter
in terms of the ParSNIP $c$ parameter. We include a constant offset in this comparison
given by the difference in the median colors between both datasets to account for
different arbitrary zeropoints.
We are able to accurately predict the SALT2 $x_1$ and $c$ parameters
from the ParSNIP latent space for all of these SNe~Ia,
with 88\% and 89\% of the variance explained respectively.
The residuals from the predictions for $x_1$ have an NMAD of 0.24 and a standard
deviation of 0.34. The differences between the ParSNIP $c$ and SALT2 $c$ values
have an NMAD of 0.037~mag and a standard deviation of 0.044~mag. The differences between
the ParSNIP $c$ and SALT2 $c$ values are highly correlated with the $s_3$ parameter
($\rho$=0.64) which simply reflects the fact that the color zeropoint is arbitrary and can
vary across the parameter space. Our prediction of SALT2 $c$ could be further improved
if this were taken into account. To summarize, given the location of an SN~Ia in the
ParSNIP latent space we are able to accurately predict its SALT2 parameters. This implies that
the ParSNIP latent space contains all of the information that is captured by the
SALT2 parameterization of SNe~Ia.
We compared how well the SALT2 and ParSNIP models are able to reproduce light
curves. An example of the light curve fits of both models to PS1-11bk is shown
in the top panel of Figure~\ref{fig:snia_spectrum_comparison}. The quality of the
light curve fits is comparable. For the training set (validation set), we find a
median reduced $\chi^2$ of 1.42 (1.68) for ParSNIP fits to SNe~Ia
and 1.40 (1.21) for SALT2 fits. Note that for SALT2 we found the exact maximum
likelihood values of the model parameters by fitting the model to the light curves.
For the ParSNIP model, we instead used the encoder to approximate the
maximum-a-posteriori values of the model parameters.
All of the ParSNIP reduced $\chi^2$ values would decrease if a full fit were performed
to find the true maximum likelihood values of the model parameters. Nevertheless,
we find that the ParSNIP model is achieving comparable reconstruction performance
to the SALT2 model despite being trained on a much lower quality dataset.
Both SALT2 and ParSNIP predict the full spectral time series of the light curves
that they fit. We compare the predicted spectra for PS1-11bk at a range of different
times in the lower panel of Figure~\ref{fig:snia_spectrum_comparison}. The SALT2
model was trained using spectra of a large number of SNe~Ia and is known
to provide a good description of the spectra of SNe~Ia. In contrast,
the ParSNIP model was trained using only photometric observations. Nevertheless,
the ParSNIP model accurately reproduces all of the major spectral features
of SNe~Ia as seen in the SALT2 model. The ParSNIP model does have some non-physical
behavior at UV and IR wavelengths (as does SALT2) where the PS1 bandpasses have limited
coverage. This could be improved by including additional followup observations in
the training. We discuss this further in Section~\ref{sec:including_spectra}.
We stress that the ParSNIP model was able to learn the spectrum of an SN~Ia
by effectively deconvolving photometric observations of SNe~Ia
at a wide range of different redshifts, and no spectra were included in the training
dataset.
\begin{figure}
\plotone{snia_lc} \\
\plotone{snia_spectrum}
\caption{
Comparison of the SALT2 and ParSNIP models for the SN~Ia PS1-11bk.
Top panel: Comparison of the light curves. The ParSNIP model is shown with solid
lines, and the SALT2 model is shown with a dashed line. Bottom panel: Comparison
of the spectra predicted by both models at a range of different times. We find
good agreement between the SALT2 and ParSNIP models other than in the UV and IR
regions that are not covered by many observations.
}
\label{fig:snia_spectrum_comparison}
\end{figure}
\subsection{Models of Other Kinds of Transients} \label{sec:models_other}
While empirical models such as SALT2 have been previously developed for SNe~Ia,
models of other kinds of transients are typically much more primitive
due to the lack of spectroscopic observations and more complex variability.
Our techniques produce a generative model for the spectral time series of all of
these transients. To evaluate the performance of these models, we compare the
spectral time series predicted by ParSNIP to observed spectra for two different
core-collapse supernovae: PS1-12cht, a Type~IIn supernova, and PS1-12baa, a
peculiar Type~Ic supernova. We retrieved spectra for these two supernovae
from the Open Supernova Catalog \citep{guillochon17}. Both of these supernovae
are from classes that are not well-sampled in the PS1 dataset,
so we show results from a run where these supernovae were included in the training
of the ParSNIP model.
\begin{figure*}
\plottwo{lc_ps1_12cht}{lc_ps1_12baa} \\
\plottwo{spectra_ps1_12cht}{spectra_ps1_12baa}
\caption{
Comparison of the ParSNIP model to observed spectra of core-collapse supernovae.
The left two panels show the model and data for PS1-12cht, a Type~IIn supernova,
and the right two panels show the model and data for PS1-12baa, a peculiar Type~Ic
supernova. The top two panels show the observed PS1 photometry and the ParSNIP
model fit to that photometry. The bottom two panels show observed spectra
for each of these supernovae from PESSTO \citep{smartt15} and
LOSS \citep{shivvers19} respectively along with spectra from the ParSNIP model evaluated
at the same phases. We normalize all of the spectra to the flux at 6000~\AA.
We find that the broad structure of the ParSNIP predicted spectra agree well
with the observed spectra despite the fact that the ParSNIP model was only
trained using photometry.
}
\label{fig:other_spectrum_comparison}
\end{figure*}
In the left panels of Figure~\ref{fig:other_spectrum_comparison}, we show the
light curve and spectra of PS1-12cht, a Type~IIn supernova. These spectra were
obtained by the PESSTO collaboration \citep{smartt15} and are relatively featureless.
We find that the ParSNIP model produces good predictions of the overall shape of the spectra
and the photometry, although we do see some ringing of the spectra at bluer wavelengths.
In the right panels of Figure~\ref{fig:other_spectrum_comparison}, we show the
light curve and spectra of PS1-12baa, a peculiar Type~Ic supernova. These spectra
were obtained by the LOSS collaboration \citep{shivvers19}. The spectra of this supernova
show many different emission lines, especially at later times. The ParSNIP model
predictions agree well with the observed spectra at early times and reproduce most
of the observed spectral features. At later times, we find that the ParSNIP model
is able to predict the broad structure of the spectrum, but it struggles to predict
the exact locations and widths of all of the emission lines.
This is not surprising: the ParSNIP model
was only trained on photometry, so the estimates of the spectra come from
effectively deconvolving the photometry of many transients at different redshifts.
PS1-12baa is a peculiar Type~Ic supernova, and there are very few examples of
similar supernovae in the PS1 dataset. The deconvolved spectra are therefore not
very well constrained. This could be addressed by training on a larger dataset,
such as the one that the Rubin Observatory will produce, or by including spectra
or additional followup observations in the training process. These options are discussed in
Section~\ref{sec:including_spectra}.
\subsection{Comparison with Simulations} \label{sec:models_simulations}
The PLAsTiCC dataset was simulated, so we can compare the spectra predicted
by the ParSNIP model directly to those that were used in the simulation. The result
of this procedure is shown for well-measured light curves of a range of different
types of transients in Figure~\ref{fig:plasticc_spectra}. Some classes, such as Type~II
supernovae, were simulated using several different models with very different spectral
properties. We find that the ParSNIP model is able to identify the different models
and recover the underlying spectral time series for each of them.
\begin{figure*}
\epsscale{1.1}
\plotone{spectra_plasticc_comparison} \\
\caption{
Comparison of the spectra predicted by the ParSNIP model to the true spectra in the
PLAsTiCC simulations. Each panel shows the simulated spectra for a given transient in
grey at a range of different times. The spectra predicted by the ParSNIP model for that
transient are shown in green. The titles of each panel
indicate the type of transient and model that was used for the simulation.
We find that the ParSNIP model is able to accurately recover the spectral time series
of all of these different kinds of transients despite only being trained on photometry.
}
\label{fig:plasticc_spectra}
\end{figure*}
The spectra that are recovered by the ParSNIP model tend to be overly smooth compared
to the input simulations. This is primarily due to the fact that
we are learning the spectra by deconvolving photometry. The regularization term described
in Section~\ref{sec:spectra_regularization} can be adjusted to recover more of the
spectral features, but this comes at the cost of introducing additional noise into the model.
In practice, we find that the choice of regularization term has little impact on the
final photometry because all of the high frequency information is lost after the convolution
with the bandpass.
\section{Applications} \label{sec:applications}
There are many different applications for a generative model of all transient light curves.
We discuss three of them in this section. In Section~\ref{sec:classification}, we show
how the ParSNIP model can be used to perform photometric classification even with
heavily biased training sets. In Section~\ref{sec:novel}, we demonstrate a novel
method of searching for new kinds of transients. In Section~\ref{sec:distances},
we show how the ParSNIP model can be used to estimate cosmological distances to SNe~Ia.
\subsection{Photometric Classification} \label{sec:classification}
Upcoming surveys such as the LSST with the Rubin Observatory will
obtain photometric observations of millions of transients, but will only have the
spectroscopic resources to obtain spectroscopy of and label a small fraction of these transients.
Traditional classification techniques only use this small dataset of labeled transients for training
\citep[e.g.][]{lochner16, boone19}. In contrast, autoencoder-based methods can learn a low-dimensional
representation from the much larger dataset of both labeled and unlabeled transients.
This representation is a set of very informative features which can be used to
train a photometric classifier on the small labeled dataset.
This approach has previously been demonstrated in \citet{pasquet19} and \citet{villar20}.
The main advantage of ParSNIP over previous autoencoders is that the
representation that it learns was constructed to disentangle intrinsic
properties of transients from properties describing how transients
were observed. In particular, the intrinsic representation is disentangled
from redshift. The labeled training sets tend to be heavily
biased towards low-redshift transients, so redshift is not a good feature to use
for classification. \citet{pasquet19} attempted to construct a similar
representation using a contrastive loss function that adds a penalty
when transients with the same label have very different representations. This
technique is effective, but requires a representative sample of labeled
high redshift transients that is often not available. In contrast, ParSNIP can generate such
a representation with only the unlabeled dataset because it is a full
generative model.
To perform photometric classification with ParSNIP, we first augment each light
curve in the training set 100 times following the procedure in
Section~\ref{sec:augmentation}. Note that we do not perform redshift augmentation
as in \citet{boone19}; we only use augmentation to obtain light curves at a wide
range of signal-to-noise ratios and observing conditions. We then perform inference
using the ParSNIP model to estimate the latent representations for all of the transients in the
augmented training set. We convert the amplitude measured by the ParSNIP model
to a pseudo-luminosity $L$ using the cosmological parameters from \citet{planck20}
with the following formula:
\begin{align}
L = -2.5 \log_{10}(A) - \mu_{\text{Planck20}}(z) + 25
\end{align}
The offset of 25 comes from the fact that the input fluxes for both the PS1
and PLAsTiCC datasets were measured on the AB system with a zeropoint of 25. We
refer to $L$ as a pseudo-luminosity because it contains an arbitrary offset
from the unknown zeropoint of the ParSNIP model. This offset will be identical for
all light curves with the same intrinsic representation.
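A sketch of this conversion, using \texttt{astropy}'s \texttt{Planck18} cosmology as a stand-in for the \citet{planck20} parameters:
\begin{verbatim}
import numpy as np
from astropy.cosmology import Planck18

def pseudo_luminosity(amplitude, z):
    """Pseudo-luminosity from the ParSNIP amplitude and redshift.

    The offset of 25 matches the AB zeropoint of the input fluxes."""
    return -2.5 * np.log10(amplitude) - Planck18.distmod(z).value + 25.0
\end{verbatim}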
We use the full ParSNIP representation as the features for our classifier, consisting
of the intrinsic latent variables $s_1$, $s_2$, $s_3$, the pseudo-luminosity
$L$, and the color $c$. We also include the predicted uncertainties on all of these
measurements and the uncertainty on the reference time for a total of eleven features.
We do not include redshift as an explicit feature as the redshift distributions
of the labeled and unlabeled datasets tend to be very different, although the redshift
is used to calculate the luminosity.
We train a gradient boosted decision tree on these features using the \texttt{lightgbm}
package \citep{ke17}. When training this classifier, we weight each transient in the
training set so that the sum of weights for each label is equal and so that the average
weight across all transients is one. We use the default hyperparameter values from
\texttt{lightgbm} with a \texttt{multi\_logloss} objective function, except that we
set the \texttt{min\_child\_weight} parameter to 1000
to avoid overfitting the augmented light curves. We train the classifiers with 10-fold
cross validation: we train each classifier on 90\% of the data and evaluate the model on
the remaining 10\%. We repeat this procedure ten times to obtain ten separate classifiers
and out-of-sample predictions for all of the transients in the training set. To avoid data
leakage, we ensure that all augmentations of the same light curve are in the same fold
for this procedure. For the unlabeled dataset, we generate predictions by averaging over
the outputs of all of the classifiers.
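A sketch of this classifier setup (in \texttt{lightgbm}, the \texttt{multiclass} objective optimizes the \texttt{multi\_logloss} metric); \texttt{features}, \texttt{labels}, and \texttt{weights} stand for the augmented training set described above:
\begin{verbatim}
import lightgbm as lgb

classifier = lgb.LGBMClassifier(
    objective="multiclass",
    # Avoid overfitting the augmented light curves.
    min_child_weight=1000,
)
classifier.fit(features, labels, sample_weight=weights)
probabilities = classifier.predict_proba(features)
\end{verbatim}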
\subsection{Photometric Classification on the PS1 Dataset}
For the PS1 dataset, we only have labels for a subset of 557 transients. We predict
the labels for each of these transients using 10-fold cross-validation.
In Figure~\ref{fig:ps1_confusion}, we show a confusion matrix for this dataset.
Each row of the confusion matrix shows what fraction of the transients of a given
type are assigned to each label by the classifier. We find that the ParSNIP model is
able to accurately
classify all of the different types of transients in this dataset, with 63\% to 92\%
of the transients of each type being correctly identified in a five-way classification.
Our results are a significant improvement from both the SuperRAENN \citep{villar20} and
Superphot \citep{hosseinzadeh20} models that have previously
been applied to this dataset. We achieve an
overall accuracy of 89\% for the five-way classification compared to 87\% for SuperRAENN
on the same dataset. For the metric of the ``macro-averaged completeness'' defined in
\citet{villar20} (the mean of the diagonal terms in the confusion
matrix), we achieve a value of 79\% with ParSNIP compared to 69\% for both SuperRAENN
and Superphot. For a two-way classification of SNe~Ia compared to
all other labels, we
achieve a macro-averaged completeness of 96\% compared to 92\% for SuperRAENN.
This implies that a sample of SNe~Ia that was photometrically classified
with ParSNIP would have $\sim$2 times less contamination than a sample classified with
SuperRAENN.
These predictions were all made through cross-validation, so the classifiers were
evaluated on datasets that have very similar properties to the datasets that they
were trained on. The major advantage of ParSNIP over previous models is that its
representation is invariant to symmetries such as redshift, and it is designed
to perform well even when the training set differs significantly from the dataset
that the classifier is applied to. We expect that the improvement in performance
of ParSNIP over previous models such as SuperRAENN or Superphot will be even larger
when applied to non-representative datasets such as the ones that will be produced
by the Rubin Observatory or the Roman Space Telescope. Unfortunately, it is impossible to test this
performance on real data since we do not have access to the true labels for the
higher redshift transients, although we can test it on simulations.
\begin{figure}
\plotone{ps1_confusion_matrix}
\caption{
Confusion matrix for the cross-validated ParSNIP predictions on the PS1 dataset.
Each entry shows the fraction of transients of the type labeled on the left side of
the plot that are assigned the corresponding type labeled on the bottom of the plot.
}
\label{fig:ps1_confusion}
\end{figure}
\subsection{Photometric Classification on the PLAsTiCC Dataset}
The PLAsTiCC dataset is a simulation of the LSST light curve sample, and has
a labeled training set that is highly nonrepresentative of the full dataset.
We compare the performance of the ParSNIP model for classification of the PLAsTiCC
light curves to the performance of the Avocado
model \citep{boone19} that was the best performing classifier in the PLAsTiCC challenge
\citep{hlozek20}. In the version of ParSNIP discussed in this work, we assume that
we know the redshift of each transient but this information was not available in
the original PLAsTiCC dataset. For a fair comparison, we retrained the Avocado
classifier using the true redshifts instead of photometric redshifts. We show the
confusion matrices for both the ParSNIP and Avocado classifiers in
Figure~\ref{fig:plasticc_confusion}.
\begin{figure*}
\epsscale{1.15}
\plottwo{plasticc_parsnip_confusion_matrix}{plasticc_avocado_confusion_matrix}
\caption{
Confusion matrices for the ParSNIP (left panel) and Avocado (right panel) predictions
on the PLAsTiCC test set. Each entry shows the fraction of transients of the type labeled
on the left side of the plot that are assigned the corresponding type labeled on the
bottom of the plot. We find that the ParSNIP model has similar or better performance
than the Avocado model for every kind of transient.
}
\label{fig:plasticc_confusion}
\end{figure*}
The ParSNIP model achieves similar or better performance than Avocado for
each of the different kinds of transients. We evaluated the weighted log-loss metric developed
for the PLAsTiCC challenge \citep{malz19} limited to the subset of transient types used in this
analysis. Avocado scores 0.599 on this metric while ParSNIP scores 0.535 which is a major
improvement. To illustrate how this difference in performance will affect astrophysical
analyses, we evaluated the receiver operating characteristic (ROC) curve for both of
these classifiers for SNe~Ia. This curve measures the
true positive rate (the fraction of SNe~Ia that are correctly
classified as SNe~Ia) as a function of the false positive rate (the fraction
of non-SNe~Ia that are misclassified as SNe~Ia) for different
thresholds of the classifier output. We show the ROC curve for both the ParSNIP and
Avocado classifiers in Figure~\ref{fig:plasticc_snia_roc}.
\begin{figure}
\plotone{plasticc_snia_roc}
\caption{
Receiver operating characteristic (ROC) curve for SN~Ia classification
on the PLAsTiCC dataset. The ParSNIP model (shown in blue) performs significantly
better than the Avocado model (shown in orange), especially for thresholds corresponding
to low false positive rates that are most interesting for supernova cosmology analyses.
}
\label{fig:plasticc_snia_roc}
\end{figure}
Measuring the area under the ROC curve (AUC), i.e. the integral of this curve, we find a value
of 0.977 for the ParSNIP model compared to 0.962 for the Avocado model. The improvement
of the ParSNIP model mainly comes from a major reduction in the false positive rate
for true positive rates below $\sim90\%$. For a true positive rate of 50\%, the ParSNIP
model has a false positive rate of only 0.44\% which is 2.3 times smaller than the false
positive rate of 1.04\% for the Avocado model. We find similar results for any true positive
rate below 0.9. This is of particular importance for cosmology analyses with SNe~Ia
where it is more important to have a clean dataset than include all transients
that were discovered. A SN~Ia cosmology analysis that uses the
ParSNIP model for classification will have 2.3 times less contamination from other
supernova types compared to an analysis that uses the Avocado model.
Finally, we evaluated the performance of both the ParSNIP and Avocado models as a function
of redshift. We evaluated the AUC for classification of SNe~Ia in 100
evenly-spaced redshift bins between redshifts of 0 and 1.5. The results for both the Avocado
and ParSNIP models are shown in Figure~\ref{fig:auc_redshift}. We find that the ParSNIP
model is stable across different redshifts, and outperforms the Avocado model at all
redshifts. Of particular note is the decrease in performance for the Avocado model at very
low redshifts where we would expect to have very high signal-to-noise and well-measured
light curves. The Avocado model attempts to augment the training dataset by simulating
light curves at redshifts that are more representative of the full dataset, but struggles
to produce good simulations of very low redshift transients because light curves at those
redshifts are much higher signal-to-noise than most of the light curves in the training
set. Fundamentally, the Avocado augmentation process must be heavily tuned for each dataset
that it is applied to in order to minimize differences between the training and test sets.
In contrast, the ParSNIP model builds a redshift-independent representation of
light curves from the full dataset, and is mostly agnostic to differences in the
redshift distributions between the training and full datasets. Other than the definitions
of the bandpasses, there are no instrument-specific elements of the ParSNIP model.
The same ParSNIP model was able to fit both the PLAsTiCC and PS1 datasets. We find
that the ParSNIP model has good performance at all redshifts without the need for
procedures such as redshift augmentation.
\begin{figure}
\plotone{auc_redshift}
\caption{
AUC for classification of SNe~Ia as a function of redshift for both the
ParSNIP model (shown in blue) and Avocado model (shown in orange) on the PLAsTiCC
dataset. The ParSNIP model outperforms the Avocado model at all redshifts.
}
\label{fig:auc_redshift}
\end{figure}
\subsection{Detecting Novel Transients} \label{sec:novel}
Upcoming surveys such as LSST will produce samples of transients that are more than
two orders of magnitude larger than
current samples, and could contain entirely new kinds of ``novel'' transients that have
never been previously observed. Previous work on detecting novel transients has typically
focused on identifying transients that are in some way different from the bulk of the
sample \citep{pruzhinskaya19, ishida19, martinezgalarza21, villar21}
without considering what was previously known about transients. As
a result, these methods typically produce a sample of transients that consists
mostly of rare transients (e.g. superluminous supernovae) rather than ones that have
truly never been observed.
In this work, we instead explore an alternative approach to anomaly detection enabled
by the ParSNIP model that takes prior knowledge into account. The ParSNIP model
was constructed so that transients with the same intrinsic properties
are embedded at the same location in the intrinsic latent space regardless of how
they were observed. Assuming that we have known labels for some subset of the transients in a
given dataset, we can identify novel transients by finding unlabeled transients
whose locations in the ParSNIP latent space are different from those of the transients
in the labeled subset. Note that this approach cannot be used with most
previously-used feature extraction methods because their features are correlated with
properties of the observations such as redshift.
We demonstrate this approach on the PLAsTiCC dataset.
The authors of the PLAsTiCC dataset included several different kinds of transients in the full
PLAsTiCC dataset that were not present in the labeled training set, including pair-instability supernovae
(PISN), intermediate luminosity optical transients (ILOT), and calcium-rich transients (CaRT)
\citep{kessler19}. As can be seen in Figure~\ref{fig:representations}, the PISNe and ILOTs
are well-separated from all other kinds of transients in the ParSNIP latent space.
We generate an augmented version of the PLAsTiCC training set with 100
realizations of each light curve following the procedure described in Section~\ref{sec:augmentation}
to cover a wide range of different observing conditions. For each transient in the full
dataset, we then calculate the Euclidean distance between the ParSNIP intrinsic latent variables
$\bs_i$ of that transient and the nearest transient in the augmented training dataset.
This distance measure can be interpreted as a novelty score, where transients with large
distances are very different from all of the transients in the training set.
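As a rough illustration of this procedure, the novelty score is a nearest-neighbor
query in the intrinsic latent space; the following sketch assumes that the latent
coordinates have already been extracted into the hypothetical arrays
\texttt{s\_train} and \texttt{s\_full}:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

# s_train: (N_train, 3) intrinsic latents of the augmented training set
# s_full:  (N_full, 3) intrinsic latents of every transient in the dataset
nn = NearestNeighbors(n_neighbors=1).fit(s_train)
dist, _ = nn.kneighbors(s_full)

score = dist[:, 0]                        # novelty score per transient
top100 = np.argsort(score)[::-1][:100]    # the 100 most novel candidates
\end{verbatim}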
Of the top 100 transients ranked by this novelty score, 87 are ILOTs and 3 are PISNe,
meaning that 90\% of the transients in
this sample of ``novel transients'' are in fact a new kind of transient that is not present
in the labeled training set. For comparison, \citet{villar21} use an isolation forest
on the same dataset. They obtain a sample of rare transients that is 95\% pure according
to their definition, but the vast majority of these rare transients are superluminous
supernovae, examples of which are available in the training set, and less than 5\% of the transients
in their sample come from the types that are not included in the training dataset. We
stress that these are two entirely different approaches to anomaly detection. Which one is
more applicable will depend on the science goals of a given project. However, the ParSNIP representation enables
alternative approaches to anomaly detection that can take prior knowledge into account.
The methodology that we demonstrated here is very simple, and we only considered
the intrinsic ParSNIP representation. More advanced techniques could also
take advantage of luminosity, color, and redshift information or even additional information
such as properties of the host galaxies. They could also look at the density of
transients in each of the training and full datasets.
In practice, the detection of novel transients should involve a feedback loop where
resources are dedicated to followup candidates, and where these are then added back
into the training set. This form of ``active learning'' is discussed in detail in
\citet{ishida19} and an application to isolation forests for anomaly detection is
discussed in \citet{ishida21}. The ParSNIP model is ideally suited to being
used with these techniques.
It extracts a robust representation of transients in an unsupervised
manner, so it can be trained on unlabeled datasets containing novel transients.
Furthermore, the representation only consists of a small number of parameters,
so it is very computationally efficient to use the ParSNIP representation instead
of other feature extraction methods that can produce tens or hundreds of features.
\subsection{Estimating Distances to SNe~Ia} \label{sec:distances}
One major application of the SALT2 model is estimating distances to transients.
As a similar generative model, the ParSNIP model can also be used for distance estimation.
SALT2 and ParSNIP both estimate the brightnesses of transients relative to some arbitrary
zeropoint that can vary over the latent space. By subtracting off the zeropoint, we
obtain a distance estimate. For SALT2, the zeropoint is typically modelled using a linear
function of the SALT2 $x_1$ and $c$ parameters, resulting in the following model for
estimating distances:
\begin{equation}
\mu_{\textrm{SALT2}} = M + \alpha x_1 + \beta c
\end{equation}
For ParSNIP, we use a similar model with linear corrections for all of the intrinsic
latent variables
\begin{equation}
\mu_{\textrm{ParSNIP}} = M + \alpha_1 s_1 + \alpha_2 s_2 + \alpha_3 s_3 + \beta c
\end{equation}
We fit for the values of $M$, $\alpha_i$ and $\beta$ in both of these models by
comparing the estimated distance moduli to the distance moduli from the
corresponding redshifts using the cosmological parameters from \citet{planck20}.
In both cases, we use the sample of PS1 SNe~Ia with good SALT2 fits described
in Section~\ref{sec:salt_comparison}. We use Chauvenet's criterion to reject
SNe~Ia that are large outliers for either model, which removes 8 of the 265
SNe~Ia in our sample. For the same set of SNe~Ia, we find that the RMS of the
SALT2 distance estimates is $0.155 \pm 0.008$~mag and the RMS of the ParSNIP
distance estimates is $0.150 \pm 0.007$~mag. When looking at
only the core of the distribution, we find that ParSNIP is able to estimate distances
with an NMAD of $0.126 \pm 0.009$~mag compared to $0.139 \pm 0.011$~mag for SALT2.
Hence, the ParSNIP model can
be used to estimate accurate distances to SNe~Ia, and it performs slightly better than
the SALT2 model for distance estimation on this sample of SNe~Ia.
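As a schematic illustration (not the exact fitting code used in this work), the
coefficients of such a linear standardization can be obtained by ordinary least
squares against the cosmological distance moduli; here \texttt{s1}, \texttt{s2},
\texttt{s3}, \texttt{c}, \texttt{m} and \texttt{z} are hypothetical arrays of
latent parameters, peak magnitudes, and redshifts, and astropy's \texttt{Planck18}
cosmology is used as a stand-in:
\begin{verbatim}
import numpy as np
from astropy.cosmology import Planck18

mu_cosmo = Planck18.distmod(z).value     # reference distance moduli (mag)

# mu = m - (M + a1*s1 + a2*s2 + a3*s3 + b*c)
X = np.column_stack([np.ones_like(s1), s1, s2, s3, c])
coeffs, *_ = np.linalg.lstsq(X, m - mu_cosmo, rcond=None)

residuals = (m - X @ coeffs) - mu_cosmo  # Hubble residuals
print(np.std(residuals))                 # compare to the RMS quoted above
\end{verbatim}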
One important caveat is that precision cosmology analyses
require a thorough understanding of light curve modeling uncertainties to avoid biases.
SALT2 includes an explicit description of the model covariance at every time/wavelength
that can be used to identify regions of the spectrum where the model is unreliable, such
as the UV/IR as seen in Figure~\ref{fig:snia_spectrum_comparison}. The ParSNIP model
uncertainties for Type~Ia supernova light curves should be thoroughly investigated and
understood before using ParSNIP in precision cosmology analyses.
\section{Discussion} \label{sec:discussion}
\subsection{Computational Requirements} \label{sec:computational_requirements}
All of the training and inference in this work was done using an NVIDIA GeForce
RTX 2080 Ti GPU. Training a new model takes approximately 2 hours for the PS1 dataset
and 23 hours for the PLAsTiCC dataset. The training time could also be decreased
significantly by adjusting the learning rate scheduler without having a major impact
on model performance. Inference takes approximately 11 seconds for
the PS1 dataset or 2 hours for the full PLAsTiCC dataset of over 3 million
light curves. The model can also be trained and evaluated on a CPU. When running
on 8 cores of an AMD Ryzen Threadripper 3970X, we find that the performance is
roughly 6 times slower than the numbers quoted previously for the GPU.
These computational times are not prohibitive even for large surveys
such as LSST. With a single GPU, the ParSNIP model could be retrained on a near
nightly basis and used to perform inference on the entire dataset of LSST light curves.
We can perform inference at a rate of approximately 500 LSST light curves per
second with a single GPU, so inference could easily be performed live on the
LSST alert stream.
\subsection{Supernova Cosmology Without Classification} \label{sec:cosmology_classification}
As discussed in Section~\ref{sec:classification}, surveys such as LSST will
rely on photometric classification to identify samples of SNe~Ia. Previous
cosmological analyses with photometrically-classified
SNe~Ia have used one model to identify whether a transient is an SN~Ia, and a second
model to estimate the distance to that transient assuming that it is an SN~Ia
\citep{hlozek12, jones18}. As a result, the distance estimates for any non-SNe~Ia
that leak into the sample are heavily biased.
The ParSNIP model can be used both for classifying transients (as discussed in
Section~\ref{sec:classification}) and estimating distances (as discussed in
Section~\ref{sec:distances}), so these two steps could be done simultaneously
in a supernova cosmology analysis. As part of the cosmology fit, one would
fit for the distance modulus zeropoint across the ParSNIP latent
representation as we did for the SN~Ia subset in Section~\ref{sec:distances}.
Fitting distances to all transients will require a more complex zeropoint model
than the simple linear one that we used for SNe~Ia because some kinds of transients
have much larger dispersion in luminosity.
Given such a zeropoint model, distance estimates to individual transients could be obtained
by marginalizing over the latent representation using the posterior distribution
from the ParSNIP model. This procedure would effectively
marginalize over all of the different kinds of transients that each light curve
is compatible with, and would estimate distances to all transients, not just
SNe~Ia. This procedure would reduce many of the biases present in current
cosmology analyses with photometrically-classified supernovae because
it estimates distances to non-SNe~Ia using a model trained on those
transients rather than a model trained on SNe~Ia. We plan on developing
such a model in future work.
\subsection{Photometric Redshifts} \label{sec:photoz}
The current implementation of the ParSNIP model relies on spectroscopic redshifts.
These redshifts will not be available
for all transients, and this is especially true when surveys are in progress.
In principle, the redshift could be treated as an additional explicitly-modeled
latent variable in the VAE. Upcoming
experiments such as the Rubin Observatory or the Roman Space Telescope
will have photometric
redshift models for the host galaxies of most of the transients they observe,
and these could be used as a prior for the VAE. This kind of analysis
would rely on understanding the posteriors of the photometric redshift models
and would have to be tuned to specific experiments.
We also find that the encoder is not very sensitive to changes
in the input redshift. We performed inference on the PS1 dataset with
noise added to all of the input redshifts with a standard deviation of
0.05. The recovered intrinsic representation coordinates
change with an RMS of 0.20, 0.10, and 0.11 for $s_1$, $s_2$, and $s_3$
respectively. These differences are negligible for applications such
as photometric classification. Photometric redshifts could therefore
likely be used directly as inputs to a pretrained ParSNIP model
without a significant degradation in performance.
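The test itself amounts to re-encoding the dataset with perturbed redshifts and
measuring the shift of the intrinsic coordinates; a minimal sketch, with the
hypothetical function \texttt{encode} standing in for a trained ParSNIP encoder:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

s_ref = encode(light_curves, z)               # (N, 3) intrinsic latents
s_per = encode(light_curves, z + rng.normal(0.0, 0.05, len(z)))

# per-coordinate RMS shift under the redshift perturbation
rms = np.sqrt(np.mean((s_per - s_ref) ** 2, axis=0))
print(rms)   # shifts of order 0.1-0.2 match the numbers quoted above
\end{verbatim}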
\subsection{Latent Variable Priors} \label{sec:priors}
The ParSNIP model learns priors on the intrinsic latent variables, but we
assumed fixed weak priors on the explicitly-modeled latent variables. We made this
assumption because the light curve datasets that the model was trained on
are highly biased. Astronomical surveys have some limiting magnitude
beyond which they will not detect transients, so the populations of
observed transients are biased towards brighter, bluer, and lower-redshift
transients. By using a fixed weak prior, we make no assumptions about these
biases and they can be corrected for in further analyses if necessary.
The explicitly-modeled latent variables tend to be very well
constrained. For the PS1 dataset, the median posterior uncertainty
on the reference time is 0.7 days compared to the prior of 20 days,
the median uncertainty on the color is 0.049 compared to the prior
of 0.3, and the median uncertainty on the amplitude is 3\%. All
of these variables are well constrained even for low signal-to-noise
light curves, so the choice of prior has minimal impact on the
posterior distributions. Applications such as photometric classification
or distance estimation only depend on the posterior distributions, so
they will not be significantly impacted by different choices of the
prior distribution.
Simulating new light curves from the ParSNIP model is one application
that does require knowledge of the prior distributions of the explicitly-modeled
latent variables. These prior distributions could be estimated
directly from a large dataset of light curves by explicitly
modeling the selection functions, although this would be very challenging.
Alternatively, the prior distributions can simply be estimated from theoretical
or empirical models as was done for the PLAsTiCC simulations.
\subsection{Latent Variable Posteriors} \label{sec:posteriors}
The decoder of the ParSNIP model is a generative model with a well-defined
likelihood. The encoder model uses variational inference to approximate the posterior
distribution over the latent variables for a given light curve, but it is also possible
to evaluate this posterior distribution directly. In particular, optimizers can be used
to find the maximum-a-posteriori values of the latent variables for a given light curve,
and techniques such as Markov chain Monte Carlo (MCMC) can be used to sample from the
posterior distribution directly. These approaches are currently used in supernova
cosmology analyses with models such as SALT2, and provide much more detailed information
about the posteriors but tend to be very computationally intensive. We added an interface
for the ParSNIP model to the \texttt{SNCosmo} package \citep{barbary16c} that implements
several methods of fitting light curves and evaluating posteriors.
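For example, sampling the posterior of a single light curve through this
interface might look like the following sketch; the wrapper class, model name,
and parameter names are assumptions for illustration rather than verbatim API:
\begin{verbatim}
import parsnip
import sncosmo

model = parsnip.load_model('plasticc')        # a trained ParSNIP model
source = parsnip.ParsnipSncosmoSource(model)  # assumed SNCosmo wrapper
sn_model = sncosmo.Model(source=source)

# sample the latent-variable posterior for one light curve with emcee
result, fitted = sncosmo.mcmc_lc(
    light_curve, sn_model,
    vparam_names=['t0', 'color', 'amplitude', 's1', 's2', 's3'],
)
print(result.samples.shape)
\end{verbatim}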
When training the ParSNIP model using variational inference, we assumed that the posterior
of the latent variables can be described by a Gaussian distribution with a diagonal
covariance matrix. To test this assumption, we used the \texttt{emcee} \citep{foremanmackey13}
implementation of MCMC in \texttt{SNCosmo} to draw a large number of samples from the posterior
distributions of individual light curves. An example of the sampled posterior distribution for
a typical light curve from the PLAsTiCC dataset is shown in Figure~\ref{fig:mcmc_posterior}.
\begin{figure*}
\plotone{mcmc_posterior}
\caption{
Example of the posterior distributions of the latent variables in the ParSNIP model for a
typical light curve in the PLAsTiCC dataset, sampled with MCMC. Each panel shows the distribution
for two different latent variables, and the diagonal panels show the histograms of individual
latent variables.
}
\label{fig:mcmc_posterior}
\end{figure*}
We find that the posterior distributions over the latent variables other than
amplitude are typically well-described by a Gaussian distribution with a diagonal
covariance matrix, and the
predicted means and standard deviations that we obtain from variational inference are
generally consistent with the true posterior distributions. This statement holds even for
light curves that are only partially sampled, and we do not see any evidence of multi-modal
posterior distributions. The amplitude and color latent variables are always highly correlated,
which justifies our decision to marginalize over the amplitude as described in
Section~\ref{sec:marginalizingamplitude}. In general, we find that the posteriors estimated
by variational inference are reasonably accurate, but it may be desirable to evaluate the full
posteriors for applications such as supernova cosmology that require very high precision.
\subsection{Applying the ParSNIP Model to Other Kinds of Light Curves} \label{sec:other_lightcurves}
The model described in this paper was designed to be applied to short-lived
transient light curves, and should not be applied to other kinds of light curves without
modifications. In particular, the preprocessing procedure described in
Section~\ref{sec:preprocessing} roughly aligns and scales all of the light curves assuming
that they have a well-defined maximum. This is not a valid assumption for sources such as
variable stars or active galactic nuclei. We also subtract the background level
of each light curve using observations away from maximum light, and we constrain the model
to only output positive flux values. None of these assumptions are fundamental to the ParSNIP
model, and they should all be adjusted as appropriate if applying the model to other
kinds of light curves (e.g. not subtracting the background for variable stars).
\subsection{Including Additional Observations in the Training} \label{sec:including_spectra}
In this work we assumed that we only have photometry available for all of
the light curves in our sample. Large spectroscopic followup campaigns are
planned for most upcoming surveys. As an example, the 4MOST consortium
projects that they will obtain 30,000 spectra of live transients \citep{swann19}.
These spectra could be used to train the ParSNIP model. The ParSNIP model
already predicts the spectra at every phase when the model is compared to photometry,
and these model spectra could instead be compared to observed spectra directly.
When trained only on photometry, the ParSNIP model is required to learn
spectra by deconvolving the photometry. Including spectra of even a subset
of the transients would constrain the underlying spectral model. This
would be especially helpful for rare transients, such as PS1-12baa
discussed in Section~\ref{sec:models_other}, where there is limited
photometry of similar transients to constrain the positions and
widths of spectral features.
Additional followup photometry in different bands could similarly be included
to constrain the model at wavelengths where spectral coverage is typically lacking,
especially the UV and IR bands. Including a modest amount of IR photometry in the ParSNIP
training dataset would allow ParSNIP to learn a coarse IR spectral model. Such
models currently exist only for Type~Ia supernovae, but are essential for simulating
and planning upcoming experiments such as the Roman Space Telescope.
\subsection{Combining Light Curves from Multiple Surveys} \label{sec:combining_surveys}
Transient light curves have been collected by many different surveys, and the
ParSNIP model could be improved by training on a larger sample of light curves
observed by different instruments. The neural network in the decoder of the ParSNIP
model predicts spectra, and we explicitly compute photometry from those
spectra using the known bandpasses for an instrument. As a result, the ParSNIP
decoder can be used to predict the photometry for any instrument or bandpass
without any modification to the architecture or neural network weights.
The ParSNIP encoder, on the other hand, treats all of the individual bandpasses
separately. Nevertheless, it can operate on multiple instruments
by adding additional channels to the input representing the individual
bandpasses from each instrument. For instrument/bandpass combinations in which
a transient was not observed, we input zero at all times. With this approach,
the model can be trained simultaneously on light curves from different
surveys to produce a single latent representation and decoder model that
can be applied to all surveys. We tested this approach by training the
ParSNIP model simultaneously on the PS1 and PLAsTiCC datasets. We found
that the resulting model had a similar performance to the models trained
on the individual datasets, and transients with the same label in both
datasets are encoded at similar locations in the intrinsic representation.
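Concretely, the multi-survey input can be assembled by allocating one channel
per instrument/bandpass and leaving unobserved channels at zero; a minimal
sketch with hypothetical band names and grid size:
\begin{verbatim}
import numpy as np

bands = ['ps1::g', 'ps1::r', 'ps1::i', 'ps1::z',
         'lsst::u', 'lsst::g', 'lsst::r', 'lsst::i', 'lsst::z', 'lsst::y']
band_index = {b: i for i, b in enumerate(bands)}
n_times = 300   # assumed length of the encoder's input time grid

def build_input(times, fluxes, obs_bands, time_to_bin):
    # channels for bands that a survey never observed stay identically zero
    grid = np.zeros((len(bands), n_times))
    for t, f, b in zip(times, fluxes, obs_bands):
        grid[band_index[b], time_to_bin(t)] = f
    return grid
\end{verbatim}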
\section{Conclusions}
The ParSNIP model developed in this work is a novel generative
model that describes how the spectra of transients evolve over time. This
model combines both a neural network to describe the intrinsic
diversity of transients and an explicit physics model of how light from
transients propagates through the universe and is observed.
It can be trained using only a large dataset of light curves without
requiring spectra or labels.
We demonstrated the effectiveness of our model on both real (PS1) and simulated (PLAsTiCC) data.
With a three-parameter intrinsic representation, we are able to
reproduce out-of-sample light curves with model uncertainties of
$\sim$0.06~mag for the PS1 dataset and $\sim$0.04~mag for the PLAsTiCC
dataset. We also find that we are able to accurately predict the
spectra of the transients despite only training on photometry.
The ParSNIP model can estimate the distances to well-observed light curves
of SNe~Ia with an uncertainty of $0.150 \pm 0.007$~mag compared
to $0.155 \pm 0.008$~mag for the SALT2 model on the same sample of SNe~Ia.
Our model achieves state-of-the-art results
for photometric classification. It produces an intrinsic representation
that is independent of redshift, which means that it is not highly sensitive
to the biases in training sets that are a major challenge for other methods.
For classification of SNe~Ia, the ParSNIP model produces
a sample that has 2 times less contamination compared to the SuperRAENN
model on the PS1 dataset, and 2.3 times less contamination compared to
the Avocado model on the PLAsTiCC dataset. The performance of the ParSNIP
model is stable across all redshifts and does not require the use
of techniques such as redshift augmentation. Our model
can also be used to detect novel transients. In a simulated dataset, 90\% of
the top 100 transients flagged by our algorithm belong to kinds of transients
that were not present in the labeled training set.
All of the results in this paper can be reproduced with the publicly available
\texttt{parsnip} software package\footnote{\url{https://doi.org/10.5281/zenodo.5493509}}.
This package contains instructions for how to access all of the data that
was used in our analyses. It also contains Jupyter notebooks that can be
used to reproduce all of the figures in this paper.
\acknowledgments
We thank Andrew Connolly, Kara Ponder, Stephen Portillo, John Franklin Crenshaw,
Greg Aldering, Saul Perlmutter, and the anonymous referee for valuable
feedback and discussions.
K. B. acknowledges support from the DiRAC Institute in the Department
of Astronomy at the University of Washington. The DiRAC Institute is supported
through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
\vspace{5mm}
\facilities{
PS1
}
\software{
Astropy \citep{astropy13, astropy18},
corner \citep{foremanmackey16},
emcee \citep{foremanmackey13},
Extinction \citep{barbary16b},
Jupyter \citep{kluyver16},
LightGBM \citep{ke17},
Matplotlib \citep{hunter07},
NumPy \citep{vanderwalt11},
PyTorch \citep{pytorch},
scikit-learn \citep{scikit-learn},
SciPy \citep{scipy},
SNCosmo \citep{barbary16c}
}
{\section{Introduction and statement of results}}
In this paper, we consider the constant $Q$-curvature-type equation
\begin{equation}\label{n-mfe}
\alpha P_n u+(n-1)!\left(1-\frac{e^{nu}}{\int_{\mathbb{S}^n}e^{nu}dw}\right)=0, \quad \text{on } \mathbb S^n,
\end{equation}
where $\mathbb S^n$ is the $n$-dimensional sphere and
\begin{equation*}
P_n=\begin{cases}
\prod_{k=0}^{\frac{n-2}{2}}(-\Delta+k(n-k-1)),&\mbox{ for $n$ even};\\
\left(-\Delta+\left(\frac{n-1}{2}\right)^2\right)^{1/2}\prod_{k=0}^{\frac{n-3}{2}}(-\Delta+k(n-k-1)),&\mbox{ for $n$ odd}
\end{cases}
\end{equation*}
is the Paneitz operator on $\mathbb{S}^n$ and $\alpha$ is a positive constant. The volume form $dw$ is normalized so that $\int_{\mathbb{S}^n} dw=1$.
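For instance, for $n=2$ one has $P_2=-\Delta$, while for $n=4$ the product reads $P_4=(-\Delta)(-\Delta+2)=\Delta^2-2\Delta$; in particular, for $n=2$ equation \eqref{n-mfe} is the mean field equation on $\mathbb{S}^2$.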
The corresponding functional is defined in $H^{\frac{n}{2}}(\mathbb{S}^n)$ by
\begin{equation}\label{functional}
J_\alpha(u)=\frac{\alpha}{2} \int_{\mathbb{S}^n}(P_nu) udw+(n-1)!\int_{\mathbb{S}^n} udw-
\frac{(n-1)!}{n}\ln \int_{\mathbb{S}^n}e^{nu}dw.
\end{equation}
If $\alpha =1$, \eqref{n-mfe} corresponds to the constant $Q$-curvature equation on $\mathbb{S}^n$. It is shown in \cite{Beckner} that the following Beckner's inequality, a higher order Moser-Trudinger type inequality, holds
\begin{equation}
\label{beckner}
J_{1}(u) \ge 0, \quad u \in H^{\frac{n}{2}}(\mathbb{S}^n).
\end{equation}
Furthermore, $J_1$ is invariant under the
conformal transformation
$$ u(\xi) \to u(\tau \xi)+ \frac1n \ln( |\det(d\tau)(\xi)|),$$
where $\tau$ is an element of the conformal group of $\mathbb{S}^n$ and $|\det(\cdot)|$ is the modulus of the corresponding Jacobian determinant. Equality in \eqref{beckner} is only attained at functions of the form
$$
u(\xi)= -\ln(1- \zeta \cdot \xi)+ C, \quad C \in \mathbb{R},
$$
where $\zeta \in B^{n+1}:=\{ \xi\in \mathbb{R}^{n+1}, |\xi| <1\}$.
(See also \cite{Chang95}.) In particular, \eqref{n-mfe} with $\alpha=1$ has a family of axially symmetric solutions
\begin{equation*}
u(\xi)=-\ln(1-a \xi_1), \quad \xi\in \mathbb{S}^n \mbox{ for }a\in (-1, 1).
\end{equation*}
On the other hand, an improved Aubin-type inequality has been shown in \cite[Lemma 4.6]{Chang95}: for any $\alpha>1/2$, there exists a constant $C(\alpha)\ge 0$ such that $J_{\alpha}(u) \ge -C(\alpha)$ provided that $u$ belongs to the set of functions with center of mass at the origin
\begin{equation*}
\mathfrak L=\{v\in H^{\frac{n}{2}}(\mathbb{S}^n):\int_{\mathbb{S}^n}e^{nv}\xi_jdw=0;\ j=1,2,\cdots,n+1\}.
\end{equation*}
This gives rise to the existence of a minimizer $u_0$ of $J_{\alpha} $ in $ \mathfrak L$, and $u_0$ satisfies
the corresponding Euler-Lagrange equation
\begin{equation}\label{lagrange}
\alpha P_n u+(n-1)!\left(1-\frac{e^{nu}}{\int_{\mathbb{S}^n}e^{nu}dw}\right)= \sum_{i=1}^{n+1} a_i \xi_i e^{nu} , \quad \hbox{on} \quad \mathbb{S}^n
\end{equation}
for some constants $a_i, i=1, 2, \cdots n+1$. Furthermore, by exploiting the invariance of $J_1$ under the conformal transformation, \cite[Remarks (3) (ii) for Cor. 5.4]{Chang95}
implies that the following Kazdan-Warner condition
\begin{equation}\label{kw}
\int_{\mathbb{S}^n} \langle\nabla Q, \nabla \xi_i\rangle e^{nu}dw=0, \quad i=1, 2, \cdots n+1
\end{equation}
is also applicable for the prescribing $Q$-curvature equation
\begin{equation*}
P_n u+ (n-1)! -Q e^{nu}=0, \quad \xi \in \mathbb{S}^n.
\end{equation*}
It is an immediate consequence that $a_i=0, i=1, 2, \cdots n+1$ in \eqref{lagrange}. (See \cite{Wei-Xu}, proof of Theorem 2.6.) This argument is reminiscent of that in \cite[Cor. 2.1]{CY87} on the constant Gaussian curvature type equation, or the mean field equation on $ \mathbb{S}^2$,
\begin{equation}\label{gausian}
-\alpha \Delta u + \left(1-\frac{e^{2u}}{\int_{ \mathbb{S}^2} {e^{2u}}dw}\right)=0, \quad \xi\in \mathbb{S}^2.
\end{equation}
For \eqref{gausian}, there is a vast literature; see, e.g., \cite{CY87}, \cite{GM} and references therein. Moreover, the
interested reader is referred to \cite{Chang97,DHL00,Dj08,Hang16,Hang163,Ly19,Lin98,M06,Wei99} for literature on equations that have conformal structure.
\par
In what follows, we shall consider axially symmetric functions that depend only on $\xi_1$. The first aim of this article is to classify the axially symmetric solutions of \eqref{n-mfe}
at the critical parameter $\alpha=\frac1{n+1}$. We have the following theorem.
\begin{theorem}\label{n}
If $\alpha=\frac{1}{n+1}$ and $u$ is an axially symmetric solution to \eqref{n-mfe} with $n\ge2$, then $u$ must be constant.
\end{theorem}
The rest of this paper focuses on \eqref{n-mfe} with $n$ even in the axially symmetric setting. We shall show that \eqref{n-mfe} admits only constant solutions when $\alpha$ belongs to a suitable subinterval of $(1/2, 1)$ for $n=6, 8$. As a consequence we obtain an improved Aubin-type inequality for axially symmetric functions in $\mathfrak L$. Note that the case $n=4$ has been considered in \cite{Gui20} and similar results are obtained.
Considering solutions axially symmetric about $\xi_1$-axis and denoting $\xi_1$ by $x$, we can reduce \eqref{n-mfe} to
\begin{equation}\label{n-ode}
\alpha\left(-1\right)^\frac{n}{2}[(1-x^2)^\frac{n}{2} u']^{(n-1)}+(n-1)!- \frac{(n-1)!\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{ \Gamma\left(\frac{n+1}{2}\right)\gamma}e^{nu}=0,
\end{equation}
where $\gamma=\int_{-1}^1(1-x^2)^\frac{n-2}{2}e^{nu}$.
One can refer to Section 3 for the detailed derivation of \eqref{n-ode}. By direct computations, we see that the corresponding functional $ I_\alpha(u)$ in $H^{\frac{n}{2}}(-1,1)$ can be expressed as follows
\begin{equation*}
\begin{aligned}
I_\alpha(u)&= \left(-1\right)^\frac{n}{2}
\frac{\alpha}2\int_{-1}^1 (1-x^2)^\frac{n-2}{2}[(1-x^2)^\frac{n}{2} u']^{(n-1)}u+(n-1)!\int_{-1}^1 (1-x^2)^\frac{n-2}{2} u\\
&- \frac{(n-1)!\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{ n\Gamma\left(\frac{n+1}{2}\right)}\ln\left( \frac{\Gamma\left(\frac{n+1}{2}\right)}
{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}\int_{-1}^1(1-x^2)^\frac{n-2}{2}e^{nu} \right),
\end{aligned}
\end{equation*}
where $H^{\frac{n}{2}}(-1,1)$ is defined as the restriction of $H^{\frac{n}{2}}(\mathbb{S}^n)$ to the set of functions axially symmetric about the $\xi_1$-axis, with $\xi_1=x$. Moreover, the set $\mathfrak L$ is replaced by
\begin{equation}\label{axi-constraint}
\mathfrak L_r=\bigg\{u\in H^{\frac{n}{2}}(\mathbb{S}^n):u=u(x) \mbox{ and } \int_{-1}^1x(1-x^2)^\frac{n-2}{2}e^{nu} =0\bigg\}.
\end{equation}
Let
\begin{equation*}
\alpha^{(6)}=\frac{115 + \sqrt{2851}}{273} \approx 0.6168 \mbox{ and }\alpha^{(8)}=\frac{19}{23} \approx 0.8261.
\end{equation*}
Now we state the main results.
\begin{theorem}\label{even}
Let $n=6$ or $8$. If $\alpha^{(n)}\leq \alpha<1$, then \eqref{n-ode} admits only constant solutions. As an immediate consequence, we have
$$\inf_{u\in \mathfrak L_r}I_\alpha(u)=0.
$$
\end{theorem}
We believe that $J_{1/2}(u) \ge 0$ for $u\in \mathfrak L $, given the similar inequality for $ \mathbb{S}^2$ as shown in \cite{GM}.
Next we define the following first momentum functionals on $H^{\frac n2}(\mathbb{S}^n)$
\begin{equation*}
\mathcal{J}_\alpha(u)=\frac{\alpha}{2} \int_{\mathbb{S}^n}(P_nu) udw+(n-1)!\int_{\mathbb{S}^n} udw-
\frac{(n-1)!}{2n}\ln\left( \left(\int_{\mathbb{S}^n}e^{nu}dw\right)^2 -\sum_{i=1}^{n+1} (\int_{\mathbb{S}^n}e^{nu} \xi_i dw)^2 \right).
\end{equation*}
Note that $\mathcal{J}_{\alpha} (u)= J_{\alpha}(u)$ when $ u \in \mathfrak L$.
As a consequence of Theorem \ref{even}, we have the following
form of the first Sz\"ego limit theorem on $ \mathbb{S}^n$ for axially symmetric functions.
\begin{theorem}\label{Szego}
Let $n=6$ or $8$, then
$$ \mathcal{J}_{\frac n{n+1}} (u) \ge 0, \quad \forall u \in \{ u\in H^{\frac n2}(\mathbb{S}^n): u(\xi)=u(\xi_1)\}.
$$
\end{theorem}
Using a bifurcation approach together with Theorems \ref{n}-\ref{even}, we can also show the existence of non constant axially symmetric solutions for $\alpha \in (\frac1{n+1}, \frac12).$
\begin{theorem}\label{nontrivial}
For $n\ge 2$, there exists a non constant solution $u_{\alpha}$ to \eqref{n-ode} for $\alpha \in (\frac1{n+1}, \frac12).$ Moreover, there exists a sequence $\alpha_m \in (\frac1{n+1},\alpha^{(n)}) $ and a sequence of non constant solutions $u_{\alpha_m}, m=1, 2, \cdots$ to \eqref{n-ode} such that $\alpha_m \to \frac12$, $\int_{-1}^1(1-x^2)^\frac{n-2}{2}e^{nu_{\alpha_m}} = \frac{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n+1}{2}\right)} $ and $\|u_{\alpha_m}\|_{L^\infty([-1,1])} \to \infty$ as $m \to \infty$.
\end{theorem}
We also establish the following proposition concerning the centers of mass and first order momenta of solutions to \eqref{n-mfe}.
\begin{proposition}\label{n-pro}
If $u$ solves \eqref{n-mfe}, then
\begin{equation*}
\int_{\mathbb{S}^n} e^{nu} \xi_idw=0 \quad \mbox{ and } \int_{\mathbb{S}^n} u \xi_idw=0, \quad i=1,2,\cdots,n+1,
\end{equation*}
whenever $\alpha\neq1$.
\end{proposition}
The remainder of this paper is organized as follows. In Section 2, we give some preliminaries and prove Theorem \ref{n} and Proposition \ref{n-pro} for the case $n\ge 2$. Section 3 is devoted to the case $n=6$ or $8$ and the proof of Theorems \ref{even}-\ref{Szego}.
In Section 4, we carry out a bifurcation analysis of \eqref{n-ode} and its equivalent form, and prove Theorem \ref{nontrivial} based on Theorems \ref{n} and \ref{even}.
{\section{Preliminaries and Classification for $\alpha=\frac{1}{n+1}$}}
\vskip 2mm
\par
In this section, we state several preliminaries which will be needed in the proof of our main results.
Note that the eigenfunctions associated with the Paneitz operator coincide with those associated with the Laplacian. It is therefore natural to introduce the Gegenbauer polynomials (see \cite[8.93]{GR}), which can be considered as a family of generalized Legendre polynomials. Recall that
\begin{equation}\label{Ck}
C_k^{\frac{n-1}{2}}(x)=\left(\frac{-1}{2}\right)^k\frac{ \Gamma\left(k+n-1\right)\Gamma\left(\frac{n}{2}\right)}{k! \Gamma\left(n-1\right) \Gamma\left(k+\frac{n}{2}\right)} (1-x^2)^{-\frac{n-2}{2}}\frac{d^k}{dx^k}(1-x^2)^{k+\frac{n-2}{2}}
\end{equation}
is called Gegenbauer polynomial of order $\frac{n-1}{2}$ and degree $k$. Then $ C_k^{\frac{n-1}{2}}$ satisfies
\begin{equation}\label{Ck-ode}
(1-x^2)(C_k^{\frac{n-1}{2}})''-nx(C_k^{\frac{n-1}{2}})'+\bar\lambda_kC_k^{\frac{n-1}{2}}=0,\quad \ k=0,1,\cdots,
\end{equation}
where $\bar\lambda_k=k(k+n-1)$.
After some calculations, it is easy to see from \cite{GR}
that
\begin{equation}\label{nCk'}
|(C_k^{\frac{n-1}{2}})'|\leq \frac{ \Gamma\left(k+n\right)}{ n\Gamma\left(n-1\right) \Gamma(k) }
\end{equation}
and
\begin{equation}\label{ortho}
\int_{-1}^1(1-x^2)^{\frac{n-2}{2}}C_k^{\frac{n-1}{2}}(x)C_s^{\frac{n-1}{2}}(x)=\begin{cases}
\frac{\pi(k+n-2)!}{2^{(n-2)}k!(k+\frac{n-1}{2})[\Gamma(\frac{n-1}{2})]^2}
:=A_n\frac{(k+n-2)!}{k!(k+\frac{n-1}{2})} &\quad k=s;\\
0 &\quad k\neq s.
\end{cases}
\end{equation}
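These normalization constants are straightforward to check numerically; a
minimal \texttt{scipy} sketch verifying \eqref{ortho} for a single index $k$:
\begin{verbatim}
import numpy as np
from scipy.special import gegenbauer, gamma
from scipy.integrate import quad

n, k = 6, 3
Ck = gegenbauer(k, (n - 1) / 2)

val, _ = quad(lambda x: (1 - x**2)**((n - 2) / 2) * Ck(x)**2, -1, 1)

A_n = np.pi / (2**(n - 2) * gamma((n - 1) / 2)**2)
closed = A_n * gamma(k + n - 1) / (gamma(k + 1) * (k + (n - 1) / 2))
print(val, closed)   # the two values should agree
\end{verbatim}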
Furthermore, we know that
\begin{equation}\label{PnCk}
P_n C_k^{\frac{n-1}{2}}= \lambda_k C_k^{\frac{n-1}{2}},
\end{equation}
where
\begin{equation}\label{lambdak}
\lambda_k=\prod_{s=0}^{n-1}(k+s)=\frac{\Gamma(n+k)}{\Gamma(k)}.
\end{equation}
Indeed, for $n$ even,
\begin{equation*}
\begin{aligned}
\lambda_k&=\prod_{s=0}^{\frac{n-2}{2}}[k(k+n-1) +s(n-s-1)]
=\prod_{s=0}^{\frac{n-2}{2}}(k+s)(k+n-1-s)\\
&=\prod_{s=0}^{n-1}(k+s).
\end{aligned}
\end{equation*}
The final formula also works when $n$ is odd.
We now prove Proposition \ref{n-pro}.
Since \eqref{n-mfe} is invariant under addition of a constant, we can normalize $u$ so that
$\int_{\mathbb{S}^n} e^{nu}dw=1$. Then, \eqref{n-mfe} can be written as
\begin{equation}\label{Pn-mfe}
\alpha P_n u=(n-1)!(e^{nu}-1), \quad \xi\in \mathbb{S}^n.
\end{equation}
As in \cite{CY87,KW74}, we can multiply \eqref{Pn-mfe} by $\xi_i,\ i=1,2,\cdots,n+1$ and integrate to get
\begin{equation*}
{ \alpha\int_{\mathbb{S}^n}(P_n u)\xi_idw=(n-1)!\int_{\mathbb{S}^n}e^{nu}\xi_idw.}
\end{equation*}
It is easy to see from \eqref{Ck-ode} and \eqref{PnCk} that
\begin{equation*}
-\Delta \xi_i=\bar \lambda_1 \xi_i \mbox{ and } P_n \xi_i=\lambda_1 \xi_i,\quad i=1,2,\cdots,n+1.
\end{equation*}
Since $P_n$ is self-adjoint, $\lambda_1=n!$, and $\int_{\mathbb{S}^n}\xi_i\,dw=0$, we further have
\begin{equation*}
{ n\alpha\int_{\mathbb{S}^n} u \xi_idw= \int_{\mathbb{S}^n}e^{nu}\xi_idw.}
\end{equation*}
On the other hand, let
\begin{equation*}
Q=\frac{(n-1)!}{\alpha}+(n-1)!\left(1-\frac{1}{\alpha}\right) e^{-nu}.
\end{equation*}
Then \eqref{Pn-mfe} can be reduced to
\begin{equation}
P_n u+ (n-1)! -Q e^{nu}=0.
\end{equation}
As stated in the Introduction, the Kazdan-Warner condition \eqref{kw} holds. It follows from \eqref{kw} that
\begin{equation*}
0= n!\left(\frac{1}{\alpha}-1\right)\int_{\mathbb{S}^n}\langle\nabla u, \nabla \xi_i\rangle dw=-n!\left(\frac{1}{\alpha}-1\right)\int_{\mathbb{S}^n} u \Delta \xi_idw=n n!\left(\frac{1}{\alpha}-1\right)\int_{\mathbb{S}^n} u \xi_idw.
\end{equation*}
Therefore,
\begin{equation*}
\int_{\mathbb{S}^n} u \xi_idw=0 \quad \mbox{ and }\quad\int_{\mathbb{S}^n} e^{nu} \xi_idw=0\ \quad i=1,2,\cdots,n+1
\end{equation*}
whenever $\alpha\neq1$. Proposition \ref{n-pro} has been proven.
Throughout this paper, we assume that $u$ is axially symmetric w.r.t. $\xi_1$-axis, i.e.,
$u=u(\xi_1)$ for $u\in\mathcal{C}^\infty(\mathbb{S}^n)$. We may drop the subscript for simplicity to write
\begin{equation*}
u=u(x),\quad x\in(-1,1).
\end{equation*}
Next, we shall prove the uniqueness
of axially symmetric solutions when $\alpha=\frac1{n+1}$ in \eqref{n-mfe} for all $n\ge 2$.
Let
\begin{equation}\label{decomp-u}
u=\sum_{k=0}^\infty a_k C_k^{\frac{n-1}{2}}(x).
\end{equation}
As previously discussed, we can get
\begin{equation}\label{decomp-Pnu}
P_nu=\sum_{k=0}^\infty \lambda_k a_k C_k^{\frac{n-1}{2}}(x).
\end{equation}
Now we assume that $u$ is a solution of \eqref{Pn-mfe} and define
\begin{equation}\label{G}
G(x)=(1-x^2)u'.
\end{equation}
One has
\begin{equation}\label{decomp-G}
G=\sum_{k=0}^\infty b_kC_k^{\frac{n-1}{2}}(x).
\end{equation}
By the recursive relations for $C_k^{\frac{n-1}{2}}(x)$ (see \cite[8.939]{GR})
\begin{equation*}
\begin{aligned}
(1-x^2)(C_k^{\frac{n-1}{2}}(x))'&=(k+n-2)C_{k-1}^{\frac{n-1}{2}}(x)-kxC_k^{\frac{n-1}{2}}(x)\\
&=(k+n-1)xC_k^{\frac{n-1}{2}}(x)-(k+1)C_{k+1}^{\frac{n-1}{2}}(x),
\end{aligned}
\end{equation*}
we have
\begin{equation}\label{recursive-C}
(1-x^2)(C_k^{\frac{n-1}{2}}(x))'=\frac{(k+n-1)(k+n-2)}{2k+n-1} C_{k-1}^{\frac{n-1}{2}}(x)-
\frac{k(k+1)}{2k+n-1}C_{k+1}^{\frac{n-1}{2}}(x),\quad \mbox{for }k\geq1.
\end{equation}
Therefore, we see from \eqref{decomp-G} and \eqref{recursive-C} that
\begin{equation}\label{bk}
b_k=\begin{cases}
\frac{(k+n)(k+n-1)}{2k+n+1}a_{k+1}-\frac{k(k-1)}{2k+n-3}a_{k-1} &\mbox{for }k\geq1;\\
\frac{n(n-1)}{n+1}a_1 &\mbox{for }k=0.
\end{cases}
\end{equation}
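For the reader's convenience, \eqref{bk} is obtained by inserting \eqref{recursive-C} into $G=(1-x^2)u'=\sum_{k\ge0}a_k(1-x^2)(C_k^{\frac{n-1}{2}})'$ and shifting the summation index:
\begin{equation*}
G=\sum_{k=0}^\infty\left[\frac{(k+n)(k+n-1)}{2k+n+1}a_{k+1}-\frac{k(k-1)}{2k+n-3}a_{k-1}\right]C_k^{\frac{n-1}{2}}(x),
\end{equation*}
with the convention $a_{-1}=0$.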
Differentiate \eqref{Pn-mfe} w.r.t $x$ and multiply both sides by $(1-x^2)$ to get
\begin{equation*}
(1-x^2)(P_nu)'=\frac{n!}{\alpha}e^{nu}(1-x^2)u'.
\end{equation*}
Replacing $e^{nu}$ by $\frac{\alpha}{(n-1)!}P_n u+1$, we derive that
\begin{equation}\label{G-eq}
(1-x^2)(P_nu)'=n P_nu G+\frac{n!}{\alpha}G.
\end{equation}
Inspired by Osgood, Phillips and Sarnak \cite{Osgood88}, we shall compare the coefficients in front of $C_k^{\frac{n-1}{2}}(x)$ on both sides of \eqref{G-eq}.
It is worth pointing out that the one-dimensional case is solved by comparing Fourier coefficients in \cite{Wang17}.
\par
\begin{proof}[Proof of Theorem \ref{n}]
We first compare the coefficients before $C_0^{\frac{n-1}{2}}(x)$. By \eqref{decomp-Pnu} and \eqref{bk}, we see that
\begin{equation}\label{decomp-Pnu'}
\begin{aligned}
&\quad (1-x^2)(P_nu)'= \frac{n(n-1)}{n+1} \lambda_1a_1 C_0^{\frac{n-1}{2}}(x)\\
&+\sum_{k=1}^\infty\left[\frac{(k+n)(k+n-1)\lambda_{k+1}}{2k+n+1}
a_{k+1}-
\frac{k(k-1)\lambda_{k-1}}{2k+n-3}a_{k-1}\right]C_k^{\frac{n-1}{2}}(x).
\end{aligned}
\end{equation}
On the other hand,
multiplying \eqref{G-eq} by $(1-x^2)^{\frac{n-2}{2}}$ and integrating, we obtain
\begin{equation*}
\frac{2n (n-2)!n!}{n+1} A_n a_1 =n\int_{\mathbb{S}^n}P_nuG+\frac{2n (n-2)!n!}{\alpha(n+1)} A_na_1.
\end{equation*}
Equivalently, we have
\begin{equation}\label{coeffi-C0}
\frac{2 (n-2)!n!}{n+1}\left(1-\frac{1}{\alpha}\right)A_na_1=\int_{\mathbb{S}^n}P_n uG.
\end{equation}
It remains to compute $\int_{\mathbb{S}^n}P_n uG$.
It follows from \eqref{decomp-Pnu}, \eqref{decomp-G}, \eqref{bk} and \eqref{ortho} that
\begin{equation*}
\begin{aligned}
&\quad\int_{\mathbb{S}^n}P_nuG =\int_{-1}^1
(1-x^2)^{\frac{n-2}{2}}\left(\sum_{k=0}^\infty \lambda_k a_kC_k^{\frac{n-1}{2}}(x)\right)
\sum_{k=0}^\infty b_k C_k^{\frac{n-1}{2}}(x)\\
&=A_n \sum_{k=1}^\infty \frac{(k+n-2)!}{k!(k+\frac{n-1}{2})}\lambda_k a_kb_k\\
&=2A_n \sum_{k=1}^\infty\lambda_k \left[\frac{(k+n)!}{k!(2k+n-1)(2k+n+1)}a_{k+1}a_k -\frac{(k-1)(k+n-2)!}{(k-1)!(2k+n-3)(2k+n-1)}a_{k-1}a_k \right]\\
&=2A_n \sum_{k=1}^\infty \left[\frac{(k+n-1)!\lambda_{k+1}}{(k-1)!(2k+n-1)(2k+n+1)}a_{k+1}a_k -\frac{(k-1)(k+n-2)!\lambda_k}{(k-1)!(2k+n-3)(2k+n-1)}a_{k-1}a_k \right]\\
&=0.
\end{aligned}
\end{equation*}
By \eqref{coeffi-C0}, we conclude that
\begin{equation}\label{b0}
\mbox{ if } \alpha\neq1, \mbox{ then }a_1=0\mbox{ and so }b_0=0.
\end{equation}
\par
Then we compare the coefficients in front of $C_1^{\frac{n-1}{2}}(x)$ in \eqref{G-eq}. More precisely,
\begin{equation}\label{coeffi-C1}
\int_{-1}^1(1-x^2)^{\frac n2} (P_nu)'C_1^{\frac{n-1}{2}}=n \int_{-1}^1 (1-x^2)^{\frac{n-2}{2}} P_nu\, G\, C_1^{\frac{n-1}{2}}+\frac{n!}{\alpha}\int_{-1}^1 (1-x^2)^{\frac{n-2}{2}}GC_1^{\frac{n-1}{2}}.
\end{equation}
From \eqref{decomp-Pnu'},
we deduce that
\begin{equation}\label{coeffi-C1-1}
\begin{aligned}
&\quad \int_{-1}^1(1-x^2)^{\frac n2}(P_nu)'C_1^{\frac{n-1}{2}}(x)= \frac{2 n!(n+1)! }{n+3}
A_na_{2}.
\end{aligned}
\end{equation}
For the second term on the RHS of \eqref{coeffi-C1}, we have
\begin{equation}\label{coeffi-C1-2}
\frac{n!}{\alpha}\int_{-1}^1 (1-x^2)^{\frac{n-2}{2}}GC_1^{\frac{n-1}{2}}=\frac{2 (n!)^2 }{\alpha(n+3)}
A_na_{2}.
\end{equation}
For the first term on the RHS of \eqref{coeffi-C1}, after integration by parts,
we obtain
\begin{equation*}
\begin{aligned}
&\quad \int_{-1}^1(1-x^2)^{\frac{n-2}{2}}P_nu GC_1^{\frac{n-1}{2}}(x)=-\frac{n-1}{n}\int_{-1}^1P_nu Gd((1-x^2)^{\frac n2})\\
&=\frac{n-1}{n}\int_{-1}^1(1-x^2)^{\frac{n-2}{2}}\left[(1-x^2)(P_nu)' G+(1-x^2)G'P_nu\right]dx\\
&:=\frac{n-1}{n}(I+II).
\end{aligned}
\end{equation*}
By \eqref{decomp-u}-\eqref{recursive-C}, we find
\begin{equation}\label{decomp-G'}
\begin{aligned}
(1-x^2)G'&=\sum_{k=1}^\infty b_k\left[\frac{(k+n-1)(k+n-2)}{2k+n-1} C_{k-1}^{\frac{n-1}{2}}(x)-
\frac{k(k+1)}{2k+n-1}C_{k+1}^{\frac{n-1}{2}}(x)\right]\\
&=\frac{n(n-1)}{n+1} b_1C_0^{\frac{n-1}{2}}(x)+\sum_{k=1}^\infty\left(\frac{(k+n)(k+n-1)}{2k+n+1}b_{k+1}
-\frac{k(k-1)}{2k+n-3} b_{k-1}\right)C_{k}^{\frac{n-1}{2}}(x).
\end{aligned}
\end{equation}
After some computations, we deduce from \eqref{decomp-Pnu}, \eqref{ortho},\eqref{decomp-Pnu'} and \eqref{decomp-G'},
\begin{equation*}
\begin{aligned}
I+II&= \frac{2(n+1)!n!}{n+3} A_n a_2b_1-\frac{2n!n!}{n+3} A_n a_2b_1
\\
&+2A_n\sum_{k=2}^\infty a_{k+1} b_k\lambda_{k+1}\left[\frac{(k+n)!}{(2k+n+1)(2k+n-1)k!}
-\frac{(k+n-1)!}{(2k+n-1)(2k+n+1)k!}\right]\\
&+2A_n\sum_{k=2}^\infty a_k b_{k+1}\lambda_{k}\left[\frac{(k+n)!}{(2k+n+1)(2k+n-1)k!}
-\frac{(k+n-1)!}{(2k+n-1)(2k+n+1)(k-1)!}\right]\\
&=\frac{2n(n!)^2}{n+3} A_n a_2b_1+2n A_n\sum_{k=2}^\infty \frac{(k+n-1)!\lambda_{k}}{(2k+n+1)(2k+n-1)k!}\left(
\frac{k+n}{k}a_{k+1} b_k+a_k b_{k+1}\right)\\
&= \frac{2n(n-1)n!(n+1)!}{(n+3)(n+5)} A_n a_2^2+2n (n-1)A_n\sum_{k=2}^\infty \frac{\lambda_{k}(k+n)(k+n)!}{(2k+n+1)(2k+n-1)(2k+n-3)kk!}a_{k+1}^2\\
&-2n (n-1)A_n\sum_{k=2}^\infty \frac{\lambda_{k}(k+n+1)!}{(2k+n+1)(2k+n-1)(2k+n-3)(k+1)!}a_ka_{k+2}\\
&=2n (n-1)A_n\sum_{k=2}^\infty \frac{(k+n-1)!}{(2k+n+1)(2k+n-1)(k-1)!}\\
&\times\left(\frac{(k+n-1)\lambda_{k-1}}{(2k+n-3)(k-1)}a_k^2
-\frac{(k+n+1)(k+n)\lambda_{k}}{k(2k+n-3)(k-1)}a_ka_{k+2}\right)\\
&=n (n-1)A_n\sum_{k=0}^\infty \frac{(k+n-1)!\prod_{s=1}^{n-1}(k+s)}{(k+1)(2k+n+1)(k+1)!}b_{k+1}^2.
\end{aligned}
\end{equation*}
Therefore,
\begin{equation}\label{coeffi-C1-3}
\begin{aligned}
\int_{-1}^1(1-x^2)^{\frac{n-2}{2}}P_nu\, G\, C_1^{\frac{n-1}{2}}(x)=(n-1)^2A_n\sum_{k=0}^\infty \frac{(k+n-1)!\prod_{s=1}^{n-1}(k+s)}{(k+1)(2k+n+1)(k+1)!}b_{k+1}^2.
\end{aligned}
\end{equation}
Combining \eqref{coeffi-C1-1}-\eqref{coeffi-C1-3}, we have
\begin{equation*}
\frac{2 (n!)^2 }{(n-1)^2(n+3)}
\left(n+1-\frac{1}{\alpha}\right)a_2=\sum_{k=0}^\infty
\frac{(k+n-1)!\prod_{s=1}^{n-1}(k+s)}{(k+1)(2k+n+1)(k+1)!}b_{k+1}^2.
\end{equation*}
If $\alpha= \frac1{n+1}$, the left-hand side vanishes while every coefficient on the right-hand side is positive, so $b_k=0$ for all $k\ge 1$. Combined with \eqref{b0}, this yields $G\equiv0$, and hence $u\equiv C$.
\end{proof}
\vskip4mm
{\section{ The case: $n$ is even }}
In this section, we shall show some results for \eqref{n-mfe} with $n\ge6$ even, which can be regarded as the generalization of recent results in \cite{Gui20} for $n=4$.
Let $\theta_i,\ i=1,2,\dots,n$ denote the usual angular coordinates on the sphere with
\begin{equation*}
\theta_n\in[0,2\pi] \quad \mbox{ and }\theta_i\in[0, \pi],\ i=1,2,\dots,n-1
\end{equation*}
and define $x=\cos \theta_1$. Then the metric tensor is
\begin{equation}\label{metirc}
g_{ij}=
\left( \begin{matrix}
(1-x^2)^{-1} & 0 & 0 &\cdots & 0 \\
0 & 1-x^2 & 0 &\cdots & 0 \\
0 & 0 & (1-x^2)\sin^2\theta_2 &\cdots & 0 \\
\vdots &\vdots & \vdots &\ddots & \vdots \\
0 & 0 & 0 &\cdots & (1-x^2)\sin^2\theta_2\dots\sin^2\theta_{n-1}
\end{matrix}
\right )
\end{equation}
In what follows, we shall consider axially symmetric functions which only depend on $x$. For such functions, we have
\begin{equation}
\int_{S^n}dw=\frac{\Gamma\left(\frac{n+1}{2}\right)}{2\pi^{\frac{n+1}{2}}}
\int_{-1}^1\int_0^{\pi}\int_0^{\pi}\cdots\int_0^{2\pi} (1-x^2)^\frac{n-2}{2}\sin^{n-2}\theta_2
\cdots \sin\theta_{n-1}\, d\theta_{n}\cdots d\theta_2dx.
\end{equation}
Note that
\begin{equation*}
\int_{0}^{\frac{\pi}{2}}\sin^{s-1}\theta\, d\theta=2^{s-2}B\left(\frac{s}{2},\frac{s}{2}\right)
=2^{-1}B\left(\frac{1}{2},\frac{s}{2}\right).
\end{equation*}
Then for $k=2,3,\dots,n-1$,
\begin{equation*}
\int_{0}^{\pi}\sin^{n-k}\theta_k\, d\theta_k= B\left(\frac{1}{2},\frac{n-k+1}{2}\right)
=\frac{\sqrt\pi
\Gamma\left(\frac{n+1-k}{2}\right)}{\Gamma\left(\frac{n+2-k}{2}\right)}.
\end{equation*}
We further have
\begin{equation*}
\begin{aligned}
\int_{S^n}dw&=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\pi^{\frac{n-1}{2}}}\prod_{k=2}^{n-1}\frac{\sqrt\pi
\Gamma\left(\frac{n+1-k}{2}\right)}{\Gamma\left(\frac{n+2-k}{2}\right)}
\int_{-1}^1 (1-x^2)^\frac{n-2}{2}dx\\
&=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}\int_{-1}^1 (1-x^2)^\frac{n-2}{2}dx\\
&=\frac{(n-1)!}{2^{n-1}[(n/2-1)!]^2}\int_{-1}^1 (1-x^2)^\frac{n-2}{2}dx
\end{aligned}
\end{equation*}
for even $n$.
Moreover,
\begin{equation*}
\begin{aligned}
\Delta u&= |g|^{-\frac{1}{2}}\frac{\partial}{\partial x}\left(|g|^{\frac{1}{2}}g^{11}\frac{\partial u}{\partial x}\right)
=(1-x^2)^{-\frac{n-2}{2}}\frac{\partial}{\partial x}\left[(1-x^2)^\frac{n}{2}\frac{\partial u}{\partial x}\right]\\
&=(1-x^2)u''-nxu'
\end{aligned}
\end{equation*}
and
\begin{equation}\label{Pn-axial}
P_nu=\left(-1\right)^\frac{n}{2}[(1-x^2)^\frac{n}{2} u']^{(n-1)}=
\left(-1\right)^\frac{n}{2}[(1-x^2)^\frac{n-2}{2} \Delta u]^{(n-2)}
\end{equation}
for $u=u(x)$. Hence, we can transform the original equation \eqref{n-mfe} on $\mathbb{S}^n$ into an ODE \eqref{n-ode}.
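As a quick consistency check, for $n=2$ formula \eqref{Pn-axial} gives
\begin{equation*}
P_2u=-[(1-x^2)u']'=-(1-x^2)u''+2xu'=-\Delta u,
\end{equation*}
which agrees with the expression for $\Delta u$ computed above.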
In the following, we assume that $\alpha<1.$
Let $G$ be defined as \eqref{G}.
In view of equation \eqref{n-ode}, we derive
\begin{equation}\label{G-eq0}
\alpha(-1)^\frac{n}{2}((1-x^2)^\frac{n-2}{2}G)^{(n-1)}+(n-1)!
- \frac{(n-1)!\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{ \Gamma\left(\frac{n+1}{2}\right)\gamma} e^{nu}=0.
\end{equation}
By differentiating \eqref{G-eq0}, we further have
\begin{equation}\label{G-equation}
(-1)^\frac{n}{2}(1-x^2)^\frac{n}{2} [(1-x^2)^\frac{n-2}{2}G]^{(n)} -\frac{n!}{\alpha} (1-x^2)^\frac{n-2}{2}G-
(-1)^\frac{n}{2}n(1-x^2)^\frac{n-2}{2}G [(1-x^2)^\frac{n-2}{2}G]^{(n-1)}=0.
\end{equation}
For simplicity, let
\begin{equation}\label{Ck-defin}
\hat C_k^{\frac{n-1}{2}} =\frac{ k! \Gamma\left(n-1\right) }{\Gamma\left(k+n-1\right) }C_k^{\frac{n-1}{2}} \quad \mbox{ and }\quad d_k=\frac{\Gamma\left(k+n-1\right)}{ k! \Gamma\left(n-1\right) } b_k
\end{equation}
and drop the hat and the index ${\frac{n-1}{2}}$ from the notation in the later discussion.
In the following, we will work with these $C_k$ and $d_k$.
Combining \eqref{decomp-G} and \eqref{b0},
we have the following decomposition using the orthogonal polynomials $C_k, k\ge 1$:
\begin{equation}\label{decomp-G-1}
G=\sum_{k=1}^\infty d_k C_k (x).
\end{equation}
Recall that
\begin{equation*}
\bar\lambda_k=k(k+n-1) \mbox{ and }\lambda_k=\frac{\Gamma(n+k)}{\Gamma(k)}.
\end{equation*}
Note that \eqref{nCk'} and \eqref{ortho} respectively become
\begin{equation}\label{Ck'}
|C_k'(x)| \leq \frac{\bar\lambda_k}{n},\quad \forall x\in(-1,1);
\end{equation}
and
\begin{equation}\label{Ck-ortho}
\int_{-1}^1 (1-x^2)^\frac{n-2}{2} C_k C_l
=\frac{2^{n-1}[(n/2-1)!]^2\bar\lambda_k}{(n+2k-1) \lambda_k}\delta_{kl}.
\end{equation}
Define $d_1=\beta$
and
\begin{equation*}
t_k^2=\begin{cases}
d_k^2\int_{-1}^1(1-x^2)^\frac{n-2}{2}C_k^2,\quad &\mbox{ for } k\geq2;\\
\beta^2\int_{-1}^1(1-x^2)^\frac{n-2}{2}C_1^2,\quad &\mbox{ for } k=1.
\end{cases}
\end{equation*}
It follows from \eqref{Ck-ode} and \eqref{Pn-axial} that
\begin{equation*}
\left[(1-x^2)^\frac{n}{2} C_k'\right]'=-\bar\lambda_k(1-x^2)^\frac{n-2}{2} C_k
\end{equation*}
and
\begin{equation*}
[(1-x^2)^\frac{n-2}{2} C_k]^{(n-2)}=(-1)^\frac{n-2}{2}\frac{ \lambda_k}{\bar\lambda_k}C_k.
\end{equation*}
After direct calculations, we obtain the following decompositions.
\begin{equation}\label{decomp_G^2}
\int_{-1}^1(1-x^2)^\frac{n-2}{2} G^2= \sum_{k=1}^\infty t_k^2,
\end{equation}
\begin{equation}\label{decomp_G'}
\int_{-1}^1 (1-x^2)^\frac n2(G')^2= \sum_{k=1}^\infty \bar\lambda_k t_k^2,
\end{equation}
\begin{equation}\label{decomp_Gn-2}
\int_{-1}^1\left|[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n-2}{2})}\right|^2 =
\sum_{k=1}^\infty\frac{ \lambda_k}{\bar\lambda_k} t_k^2= \sum_{k=1}^\infty\frac{\Gamma(n+k-1)}{\Gamma(k+1)} t_k^2.
\end{equation}
\par
Next, we shall state some important integral identities which will be used frequently in the proof of the main results.
\begin{lemma}\label{equality}
We establish the following equalities for $G$.
\begin{equation}\label{C1G}
\int_{-1}^1(1-x^2)^\frac{n-2}2 C_1 G=\frac{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{2\Gamma\left(\frac{n+3}{2}\right)}\beta,
\end{equation}
\begin{equation}\label{enu}
\int_{-1}^1 (1-x^2)^\frac{n}2 \frac{e^{nu}}{\gamma}=\frac{n}{n+1}(1-\alpha\beta),
\end{equation}
\begin{equation}\label{CkG}
\int_{-1}^1 (1-x^2)^\frac{n-2}2 C_k G=-\frac{2^{n-1}[(n/2-1)!]^2 }{\alpha\lambda_k}\int_{-1}^1 \frac{e^{nu}}{\gamma}(1-x^2)^\frac{n}2 C_k',\quad k\geq 2,
\end{equation}
\begin{equation}\label{key-id}
\int_{-1}^1\left|[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n-2}{2})}\right|^2 =\frac{\sqrt\pi(n-2)!\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n+3}{2}\right)}
\left(n+1-\frac{1}{\alpha}\right)\beta.
\end{equation}
\end{lemma}
\begin{proof}
A direct calculation shows that
\begin{equation*}
\int_{-1}^1(1-x^2)^\frac{n-2}2 C_1 G= \beta \int_{-1}^1(1-x^2)^\frac{n-2}2x^2 =\frac{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{2\Gamma\left(\frac{n+3}{2}\right)}\beta.
\end{equation*}
Then \eqref{C1G} follows.
To prove \eqref{enu} and \eqref{CkG}, multiplying \eqref{G-eq0}
by $\int_{-1}^x (1-s^2)^\frac{n-2}{2}C_k(s)ds$ with $k\geq 1$
and integrating over $[-1, 1]$, we have
\begin{equation*}
\begin{aligned}
&\quad \int_{-1}^1\int_{-1}^x (1-s^2)^\frac{n-2}{2}C_k(s)\bigg[ \alpha(-1)^\frac{n}{2}((1-x^2)^\frac{n-2}{2}G)^{(n-1)}+(n-1)!-\frac{2^{n-1}[(n/2-1)!]^2}{\gamma}e^{nu}
\bigg] =0.
\end{aligned}
\end{equation*}
After integrating by parts, we obtain
\begin{equation}\label{22-1}
\begin{aligned}
&\quad (-1)^\frac{n}{2} \alpha \int_{-1}^1\int_{-1}^x (1-s^2)^\frac{n-2}{2}C_k(s)((1-x^2)^\frac{n-2}{2}G)^{(n-1)}\\
& = (-1)^\frac{n+2}{2} \alpha \int_{-1}^1((1-x^2)^\frac{n-2}{2}G)^{(n-2)}(1-x^2)^\frac{n-2}{2}C_k(x)\\
&= \frac{\alpha\Gamma(n+k-1)}{\Gamma(k+1)}\int_{-1}^1 (1-x^2)^\frac{n-2}{2}C_k(x)G.
\end{aligned}
\end{equation}
Furthermore,
\begin{equation}\label{22-2}
\begin{aligned}
\int_{-1}^1\int_{-1}^x (1-s^2)^\frac{n-2}{2}C_k(s) &= \left(x\int_{-1}^x(1-s^2)^\frac{n-2}{2}C_k(s) \right)\Big|_{-1}^1- \int_{-1}^1 (1-x^2)^\frac{n-2}{2}xC_k \\
&=-\frac{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{2\Gamma\left(\frac{n+3}{2}\right)}\delta_{1k}.
\end{aligned}
\end{equation}
By \eqref{Ck-ode} we find that
\begin{equation}\label{22-3}
\int_{-1}^x (1-s^2)^\frac{n-2}{2}C_k(s)=- \frac1{\bar\lambda_k}(1-x^2)^\frac{n}{2} C_k'(x).
\end{equation}
Let $k=1$, then from \eqref{22-1}--\eqref{22-3} we deduce that
\begin{equation*}
\frac{\sqrt\pi(n-1)!\Gamma\left(\frac{n}{2}\right)}{\bar\lambda_1\Gamma\left(\frac{n+1}{2}\right)} \int_{-1}^1 (1-x^2)^\frac{n}2 \frac{e^{nu}}{\gamma} =\frac{\sqrt\pi(n-1)!\Gamma\left(\frac{n}{2}\right)}{2\Gamma\left(\frac{n+3}{2}\right)} (1-\alpha\beta).
\end{equation*}
This leads to \eqref{enu}.
When $k\ge2$, \eqref{CkG} follows from \eqref{22-1}--\eqref{22-3}. Here we employ the fact that
\begin{equation*}
\frac{ \Gamma(n+k-1)}{\Gamma(k+1)}=\frac{\lambda_k}{\bar\lambda_k} \mbox{ and }2^{n-1}[(n/2-1)!]^2=\frac{(n-1)!\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{ \Gamma\left(\frac{n+1}{2}\right)}.
\end{equation*}
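The first identity follows directly from $\lambda_k=\Gamma(n+k)/\Gamma(k)$ and $\bar\lambda_k=k(k+n-1)$; the second is the Legendre duplication formula $\Gamma(n)=\frac{2^{n-1}}{\sqrt\pi}\Gamma\left(\frac n2\right)\Gamma\left(\frac{n+1}{2}\right)$ in disguise.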
\par
For \eqref{key-id}, multiplying \eqref{G-equation} by $x$ and integrating from $-1$ to $1$, we obtain
\begin{equation}\label{KI-1}
\begin{aligned}
&\quad \int_{-1}^1 \bigg[(-1)^\frac{n}{2}x(1-x^2)^\frac{n}{2} [(1-x^2)^\frac{n-2}{2}G]^{(n)}
-\frac{n!}{\alpha} x(1-x^2)^\frac{n-2}{2}G\\
&-(-1)^\frac{n}{2}n x(1-x^2)^\frac{n-2}{2}G [(1-x^2)^\frac{n-2}{2}G]^{(n-1)}\bigg]=0.
\end{aligned}
\end{equation}
By integrating by parts, we find
\begin{equation}\label{KI-2}
\begin{aligned}
\int_{-1}^1(-1)^\frac{n}{2}x(1-x^2)^\frac{n}{2} [(1-x^2)^\frac{n-2}{2}G]^{(n)}&= \int_{-1}^1 (-1)^\frac{n}{2}[x(1-x^2)^\frac{n}{2}]^{(n)} [(1-x^2)^\frac{n-2}{2}G] \\
&= (n+1)!\int_{-1}^1 x(1-x^2)^\frac{n-2}{2}G.\\
\end{aligned}
\end{equation}
Furthermore,
\begin{equation}\label{G(n/2-1)}
\begin{aligned}
&\quad (-1)^\frac{n}{2}n\int_{-1}^1[x(1-x^2)^\frac{n-2}{2}G][(1-x^2)^\frac{n-2}{2} G]^{(n-1)}\\
&=n\int_{-1}^1[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n-2}2)}\left[x((1-x^2)^\frac{n-2}{2}G)^{(\frac{n}2)}+\frac n2((1-x^2)^\frac{n-2}{2} G)^{(\frac{n-2}2)}\right]\\
&=\frac{n(n-1)}{2}\int_{-1}^1|[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n}{2}-1)}|^2.
\end{aligned}
\end{equation}
It follows from \eqref{KI-1}--\eqref{G(n/2-1)} that
\begin{equation*}
n!\left(n+1-\frac{1}{\alpha}\right)\int_{-1}^1 x(1-x^2)^\frac{n-2}{2}G=\frac{n(n-1)}{2}\int_{-1}^1|[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n}{2}-1)}|^2,
\end{equation*}
which, joint with \eqref{C1G}, implies \eqref{key-id}.
\end{proof}
Multiplying \eqref{G-equation} by $G$ and integrating over $[-1,1]$, we have
\begin{equation}\label{key-equation}
\begin{aligned}
\int_{-1}^1&(-1)^\frac{n}{2}(1-x^2)^\frac{n}{2}G [(1-x^2)^\frac{n-2}{2}G]^{(n)} -\frac{n!}{\alpha} \int_{-1}^1(1-x^2)^\frac{n-2}{2}G^2\\
&- (-1)^\frac{n}{2}n \int_{-1}^1(1-x^2)^\frac{n-2}{2}G^2 [(1-x^2)^\frac{n-2}{2}G]^{(n-1)}=0.
\end{aligned}
\end{equation}
For the first term,
\begin{equation*}
\begin{aligned}
\int_{-1}^1&(-1)^\frac{n}{2}(1-x^2)^\frac{n}{2}G [(1-x^2)^\frac{n-2}{2}G]^{(n)}\\
&=(-1)^\frac{n}{2} \int_{-1}^1[(1-x^2)^\frac{n-2}{2}G][(1-x^2)^\frac{n}{2} G']^{(n-1)}+(-1)^\frac{n}{2}n\int_{-1}^1[x(1-x^2)^\frac{n-2}{2}G][(1-x^2)^\frac{n-2}{2} G]^{(n-1)}\\
&= \lfloor G\rfloor^2+\frac{n(n-1)}{2}\int_{-1}^1|[(1-x^2)^\frac{n-2}{2} G]^{(\frac{n}{2}-1)}|^2,
\end{aligned}
\end{equation*}
where
\begin{equation}\label{norm}
\lfloor G\rfloor^2= \frac{2^{n-1}[(n/2-1)!]^2}{(n-1)!}\int_{\mathbb{S}^n} (P_n G ) G dw= \left(-1\right)^\frac{n}{2} \int_{-1}^1 (1-x^2)^\frac{n-2}{2}[(1-x^2)^\frac{n}{2} G']^{(n-1)} G.
\end{equation}
However, the last term is rather involved, and we will treat it for the cases $n=6$ and $n=8$ below.
More precisely, after some complicated computations, we derive that for $n=6$,
\begin{equation}\label{n=6}
\int_{-1}^1(1-x^2)^2 G^2 [(1-x^2)^2 G]^{(5)}= -5\int_{-1}^1(1-x^2)^4 G'(G'')^2-\frac{80}{3}
\int_{-1}^1(1-x^2)^3(G')^3,
\end{equation}
and
for $n=8$,
\begin{equation}\label{n=8}
\begin{aligned}
&\quad\int_{-1}^1(1-x^2)^3 G^2 [(1-x^2)^3 G]^{(7)}=1260
\int_{-1}^1(1-x^2)^4(G')^3+252\int_{-1}^1(1-x^2)^5 G'(G'')^2\\
&+7\int_{-1}^1(1-x^2)^6G'(G^{(3)})^2 -21\int_{-1}^1[(1-x^2)G]^{(3)}(1-x^2)^5 (G'')^2.
\end{aligned}
\end{equation}
The details of the computation are postponed to Appendices A and B.
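As an independent sanity check, integration-by-parts identities of this type can
be tested by substituting an arbitrary polynomial for $G$; a minimal
\texttt{sympy} sketch for the $n=6$ identity \eqref{n=6}:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
G = 1 + 2*x - 3*x**2 + x**3 + x**5 / 2   # arbitrary polynomial test function
w = 1 - x**2

lhs = sp.integrate(w**2 * G**2 * sp.diff(w**2 * G, x, 5), (x, -1, 1))
rhs = (-5 * sp.integrate(w**4 * sp.diff(G, x) * sp.diff(G, x, 2)**2,
                         (x, -1, 1))
       - sp.Rational(80, 3) * sp.integrate(w**3 * sp.diff(G, x)**3,
                                           (x, -1, 1)))
print(sp.simplify(lhs - rhs))   # expected to print 0 if the identity holds
\end{verbatim}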
In view of the above relations, \eqref{key-equation} for $n=6$ becomes
\begin{equation}\label{n=6key-eq1}
\begin{aligned}
&\quad\lfloor G\rfloor^2+15 \int_{-1}^1|[(1-x^2)^2 G]^{''}|^2-\frac{6!}{\alpha} \int_{-1}^1(1-x^2)^2 G^2\\
& -30\int_{-1}^1(1-x^2)^4 G'(G'')^2-160
\int_{-1}^1(1-x^2)^3(G')^3=0.
\end{aligned}
\end{equation}
Similarly,
\eqref{key-equation} for $n=8$ is equivalent to
\begin{equation}\label{n=8key-eq1}
\begin{aligned}
&\quad\lfloor G\rfloor^2+28\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2-\frac{8!}{\alpha} \int_{-1}^1(1-x^2)^3 G^2\\
& -56\bigg[180
\int_{-1}^1(1-x^2)^4(G')^3+36\int_{-1}^1(1-x^2)^5 G'(G'')^2\\
&+\int_{-1}^1(1-x^2)^6G'(G^{(3)})^2 -3\int_{-1}^1[(1-x^2)G]^{(3)}(1-x^2)^5 (G'')^2\bigg]=0.
\end{aligned}
\end{equation}
Next we shall show a family of important gradient estimates in $\mathbb{S}^n$, which generalizes a similar result in $\mathbb{S}^4$.
\begin{lemma}\label{Gj-estimate-lemma}
For $0\le j\le \frac n2-1$ and all $x\in(-1,1)$, we have
\begin{equation}\label{gradient-n}
G_j:= (-1)^j[(1-x^2)^j G]^{(2j+1)}\le \frac{(2j+1)!}{\alpha}.
\end{equation}
\end{lemma}
\begin{proof}
We will first prove the result for the case $n=6$.
Since
\begin{equation*}
\begin{aligned}
\left[(1-x^2)^2 G\right]^{(5)}&=
\left[(1-x^2)((1-x^2)G)''-4x((1-x^2)G)'-2(1-x^2)G\right]^{(3)}\\
&=\left[(1-x^2)((1-x^2)G)^{(3)}-6x((1-x^2)G)''-6((1-x^2)G)'\right]''\\
&=-(1-x^2)G_1''+10x G_1'+20G_1.
\end{aligned}
\end{equation*}
Therefore, by \eqref{G-eq0} we have
\begin{equation}\label{G1''-upbd}
-(1-x^2)G_1''+10x G_1'+20G_1\leq\frac{5!}{\alpha}.
\end{equation}
Let $M_1=\max_{x\in[-1,1]}G_1(x)$.
\par
Case 1:
\begin{equation*}
M_1=\lim_{x_k\rightarrow 1}G_1(x_k) \quad \mbox{ for some sequence } x_k\in(-1,1).
\end{equation*}
As in \cite{Gui20}, let $r=|x'|=\sqrt{1-x^2}$; then we write
\begin{equation*}
G(x)=\bar G(r),\ \ G_1(x)=\bar G_1(r) \mbox{ and }u(x)=\bar u(r) \quad \mbox{ for } r\in[0,1)\mbox{ and }x\in(0,1],
\end{equation*}
and $\bar u(r)$ can be extended evenly so that $\bar u(r)\in \mathcal{C}^\infty(-1,1)$.
Hence,
\begin{equation}
\begin{aligned}
G_1(x)&=\bar G_1(r)=\frac{d^3}{dx^3}(-r^2\bar G(r)) \\
&=\frac{d^3}{dx^3}[r^3\sqrt{1-r^2}\bar u_r]
:=\frac{d^3}{dx^3}[g(r^2)],
\end{aligned}
\end{equation}
where $g(s),\ s\in(-1,1)$ is a $\mathcal{C}^\infty$ function.
\par
Let $g_0(s)=g(s)$ and
\begin{equation*}
g_i(s)=-2g_{i-1}'(s)\sqrt{1-s},\quad i=1,2,3.
\end{equation*}
Then $g_i(s)\in \mathcal{C}^\infty(-1,1)$ for $i=1,2,3$. Differentiate $g(r^2)$ with respect to $x$:
\begin{equation*}
\begin{aligned}
\frac{d}{dx}g(r^2)=-2g'(r^2)\sqrt{1-r^2}=g_1(r^2).
\end{aligned}
\end{equation*}
Similarly, we have
\begin{equation*}
\frac{d^i}{dx^i}g(r^2)=g_i(r^2),\quad i=2,3.
\end{equation*}
Therefore, $G_1(x) =\bar G_1(r)\in\mathcal{C}^\infty(-1,1)$ and is even.
We now can write
\begin{equation}\label{G1(r)}
\bar G_1(r)=c_1+c_2r^2+c_3r^4+O(r^6) \quad \mbox{near }r=0.
\end{equation}
Then $c_2\leq0$, since $M_1=G_1(1)=\bar G_1(0)$. Note that near $r=0$,
\begin{equation*}
\begin{aligned}
x G_1'(x)&=\sqrt{1-r^2}\,\frac{d\bar G_1(r)}{dr}\,\frac{dr}{dx}\\
&=-2c_2+O(r^2)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
(1-x^2) G_1''(x) = (-2(c_2-4c_3)+O(r^2))r^2.
\end{aligned}
\end{equation*}
Letting $r\to0$ in \eqref{G1''-upbd}, and noting that $(1-x^2)G_1''\to0$ and $xG_1'\to-2c_2\geq0$, it follows that
\begin{equation}\label{G1-upbd}
G_1\leq\frac{5!}{20\alpha}=\frac{3!}{\alpha},\quad\forall x\in[-1,1].
\end{equation}
\par
Case 2:
\begin{equation*}
M_1=\lim_{x_k\rightarrow -1}G_1(x_k) \quad \mbox{ for some sequence } x_k\in(-1,1).
\end{equation*}
Then \eqref{G1-upbd} follows exactly as in Case 1.
Case 3: Let $M_1=G_1(x_0)$ for some $x_0\in(-1,1)$. Then
\begin{equation*}
G_1'(x_0)=0 \quad\mbox{and }\quad G_1''(x_0)\leq0,
\end{equation*}
and \eqref{G1-upbd} immediately follows from \eqref{G1''-upbd}.
\par
We see from \eqref{G1-upbd} that
\begin{equation*}
\begin{aligned}
((1-x^2)G)^{(3)}&=-6G'-6xG''+(1-x^2)G'''\geq -\frac{6}{\alpha},\quad\forall x\in[-1,1].
\end{aligned}
\end{equation*}
Repeating the previous arguments, we can conclude
\begin{equation*}
G'\leq \frac{1}{\alpha}, \quad\forall x\in[-1,1].
\end{equation*}
In general, for even $n$ we can start with $G_{j}$ for $j=\frac n2 -2$.
In view of \eqref{G-eq0}, one directly has
\begin{equation*}
(-1)^{\frac{n}{2}-1}((1-x^2)^\frac{n-2}{2}G)^{(n-1)}\le \frac{(n-1)!}{\alpha}.
\end{equation*}
Since
\begin{equation*}
(-1)^{\frac{n}{2}-1}((1-x^2)^\frac{n-2}{2}G)^{(n-1)}=-(1-x^2) G_{\frac n2-2}''+2(n-1)xG_{\frac n2-2}'+(n-1)(n-2) G_{\frac n2-2},
\end{equation*}
we can follow the same arguments above to obtain \eqref{gradient-n} for $j=\frac n2 -2$.
Then we can apply mathematical induction on $j$ from $\frac n2-2$ to $0 $ to conclude \eqref{gradient-n} for all $0\le j \le \frac n2-1.$
\end{proof}
\subsection{The Case: $n=6$}
\vskip 3mm
\noindent\par
Throughout the rest of this subsection,
we will focus on $n=6$ and the equation
\begin{equation}\label{6-mfe}
\alpha P_6 u+5!\left(1- \frac{e^{6u}}{\int_{\mathbb{S}^6}e^{6u}dw} \right)=0,
\end{equation}
where $P_6=-\Delta(-\Delta+4)(-\Delta+6)$.
As previously introduced,
\begin{equation}\label{P6-axial}
P_6u=-[(1-x^2)^3 u']^{(5)}=
-[(1-x^2)^2\Delta u]^{(4)},
\end{equation}
and equation \eqref{6-mfe} is reduced to
\begin{equation}\label{6-ode}
-\alpha[(1-x^2)^3 u']^{(5)}+5!- \frac{2^7}{\gamma}e^{6u}=0.
\end{equation}
In view of \eqref{decomp_G^2}-\eqref{decomp_Gn-2}, we derive that
\begin{equation}\label{n=6decomp-G''1}
\int_{-1}^1(1-x^2)^4 (G'')^2=\sum_{k=1}^\infty\bar\lambda_k(\bar\lambda_k-6) t_k^2.
\end{equation}
Moreover,
\eqref{decomp_Gn-2} can be rewritten as
\begin{equation}\label{n=6decomp-G''}
\int_{-1}^1|[(1-x^2)^2 G]^{''}|^2 =
\sum_{k=1}^\infty(\bar\lambda_k+4)(\bar\lambda_k+6) t_k^2.
\end{equation}
\begin{lemma}
Let $n=6$. Using the semi-norm $ \lfloor G\rfloor$ defined in \eqref{norm}, we have the following estimate:
\begin{equation}\label{n=6keyestimate}
\lfloor G\rfloor^2\leq \left(\frac{30}{\alpha}-15\right)\int_{-1}^1 |[(1-x^2)^2G]''|^2-\frac{320}{\alpha}\int_{-1}^1 (1-x^2)^3(G')^2.
\end{equation}
\end{lemma}
\begin{proof}
By \eqref{n=6key-eq1}, Lemma \ref{Gj-estimate-lemma}, \eqref{decomp_G^2}, \eqref{n=6decomp-G''1} and \eqref{n=6decomp-G''}, we get
\begin{equation*}
\begin{aligned}
\lfloor G\rfloor^2 +15 \int_{-1}^1|[(1-x^2)^2 G]^{''}|^2&\le \frac{1}{\alpha}\int_{-1}^1 \left[6!(1-x^2)^2 G^2
+30 (1-x^2)^4 (G'')^2+160(1-x^2)^3(G')^2\right]\\
&=\frac{1}{\alpha}\int_{-1}^1 \left[30\left|[(1-x^2)^2 G]^{''}\right|^2-320(1-x^2)^3(G')^2\right].
\end{aligned}
\end{equation*}
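Here the last equality holds mode by mode: by \eqref{decomp_G^2}--\eqref{decomp_Gn-2}, which give $\int_{-1}^1(1-x^2)^2G^2=\sum_{k}t_k^2$ and $\int_{-1}^1(1-x^2)^3(G')^2=\sum_{k}\bar\lambda_k t_k^2$, together with \eqref{n=6decomp-G''1} and \eqref{n=6decomp-G''}, both sides equal $\sum_{k=1}^\infty(30\bar\lambda_k^2-20\bar\lambda_k+720)t_k^2$, since
\begin{equation*}
6!+30\bar\lambda_k(\bar\lambda_k-6)+160\bar\lambda_k
=30(\bar\lambda_k+4)(\bar\lambda_k+6)-320\bar\lambda_k
=30\bar\lambda_k^2-20\bar\lambda_k+720.
\end{equation*}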
So \eqref{n=6keyestimate} holds.
\end{proof}
\begin{proposition}\label{2/3}
If $\frac{2}{3}<\alpha<1$, any axially symmetric solution to \eqref{6-mfe} must be constant.
\end{proposition}
\begin{proof}
It suffices to show that if $\alpha>\frac23$, then $G$ vanishes identically, and hence $u$ is constant. Indeed, when $\alpha>2/3$, it follows from \eqref{n=6keyestimate} that
\begin{equation*}
\begin{aligned}
\lfloor G\rfloor^2 +15 \int_{-1}^1|[(1-x^2)^2 G]^{''}|^2&<\frac32\sum_{k=1}^\infty (30\bar\lambda_k^2-20\bar\lambda_k+720)t_k^2.
\end{aligned}
\end{equation*}
Equivalently,
\begin{equation*}
\begin{aligned}
0&>\sum_{k=1}^\infty \left[(\bar\lambda_k+15)(\bar\lambda_k+4)(\bar\lambda_k+6)-\frac32(30\bar\lambda_k^2
-20\bar\lambda_k+720)\right]t_k^2\\
&=\sum_{k=1}^\infty (\bar\lambda_k-6) (\bar\lambda_k^2-14\bar\lambda_k+120) t_k^2.
\end{aligned}
\end{equation*}
Since $\bar\lambda_k\geq\bar\lambda_1=6$ and $\bar\lambda_k^2-14\bar\lambda_k+120>0$ for all $k$ (its discriminant is negative), every summand is nonnegative, a contradiction unless $G\equiv0$. In view of \eqref{Ck-ode} the assertion holds.
\end{proof}
For $n=6$, \eqref{C1G} and \eqref{key-id} become
\begin{equation}\label{n=6C1G}
\int_{-1}^1(1-x^2)^2 C_1 G=\frac{16}{105}\beta
\end{equation}
and
\begin{equation}\label{n=6key-id}
\int_{-1}^1 |[(1-x^2)^2G]''|^2=\frac{256}{35}\left(7-\frac{1}{\alpha}\right)\beta.
\end{equation}
Inspired by \cite{Gui00} and \cite{Gui98}, our basic strategy is to assume $\beta\neq 0$, and show that it leads to a contradiction with the range of $\alpha$. It is fairly easy to see from \eqref{n=6key-id} that
\begin{equation}\label{contradiction}
\mbox{ if }\beta=0, \mbox{ then }\nabla u=0, \mbox{ which shows that $u$ is a constant.}
\end{equation}
In what follows, suppose that $\beta\neq0$ and
\begin{equation}\label{alpha-range1}
\frac35<\alpha\le \frac23.
\end{equation}
Then it is easy to see from \eqref{n=6C1G} and \eqref{enu} that
\begin{equation}\label{beta-range1}
0<\beta<\frac{1}{\alpha}.
\end{equation}
\begin{proof}[Proof of Theorem \ref{even} for $n=6$]
We first define the following quantity
\begin{equation}\label{D}
D:=\sum_{k=3}^\infty\left[\bar\lambda_k(\bar\lambda_k+4)(\bar\lambda_k+6)
-\left(14+\frac{112}{9\alpha}\right)
(\bar\lambda_k+4)(\bar\lambda_k+6)+\frac{320}{\alpha}\bar\lambda_k\right]t_k^2.
\end{equation}
We now give the upper bound of $D$. From \eqref{n=6keyestimate} we see that
\begin{equation}\label{D-upbd}
\begin{aligned}
D &=\lfloor G\rfloor^2 -\left(14+\frac{112}{9\alpha}\right)\int_{-1}^1|[(1-x^2)^2 G]^{''}|^2+\frac{320}{\alpha}
\int_{-1}^1 (1-x^2)^3(G')^2\\
&-(\frac{1280}{3\alpha}-960)\beta^2\int_{-1}^1(1-x^2)^2C_1^2\\
&\leq \left(\frac{158}{9\alpha}-29\right)\int_{-1}^1|[(1-x^2)^2 G]^{''}|^2+\frac{16}{21}\left(192-\frac{256}{3\alpha}\right) \beta^2 \\
&\leq \frac{16\beta}{7}\left[
\frac{16}{5}\left(7-\frac{1}{\alpha}\right)\left(\frac{158}{9\alpha}-29\right)
+\left(64-\frac{256}{9\alpha}\right)\beta\right].
\end{aligned}
\end{equation}
It is easy to check $D>0.$ Combining \eqref{beta-range1} and \eqref{D-upbd}, we have
\begin{equation*}
\frac{16}{5}\left(7-\frac{1}{\alpha}\right)\left(\frac{158}{9\alpha}-29\right)
+\left(64-\frac{256}{9\alpha}\right)\frac{1}{\alpha}>0.
\end{equation*}
Therefore,
\begin{equation}\label{alpha-up1}
\alpha<\bar\alpha:=\frac{221 + \sqrt{13345}}{522}<\frac{2}{3}.
\end{equation}
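Indeed, writing $s=1/\alpha$, the preceding inequality reads
\begin{equation*}
\frac{16}{5}\left(7-s\right)\left(\frac{158}{9}s-29\right)+\left(64-\frac{256}{9}s\right)s>0,
\quad\mbox{that is,}\quad 238s^2-1547s+1827<0,
\end{equation*}
whose roots are $s_\pm=\frac{221\pm\sqrt{13345}}{68}$; in particular $s>s_-$, i.e., $\alpha<1/s_-=\frac{221+\sqrt{13345}}{522}\approx0.6447$.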
\par
To estimate the refined lower bound of $D$, we define
\begin{equation*}
g(t)=t- \left(14+\frac{112}{9\alpha}\right)+\frac{320}{\alpha} \frac{t}{(t+4)(t+6)}.
\end{equation*}
Then it is easy to see that
\begin{equation*}
g'(t)>0 \mbox{ for }t\ge \bar\lambda_3(=24).
\end{equation*}
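In fact, a direct computation gives
\begin{equation*}
g'(t)=1-\frac{320}{\alpha}\cdot\frac{t^2-24}{(t+4)^2(t+6)^2},
\end{equation*}
and the quotient $\frac{t^2-24}{(t+4)^2(t+6)^2}$ is decreasing on $[24,\infty)$, so that $g'(t)\ge g'(24)=1-\frac{184}{735\alpha}>0$ on the range \eqref{alpha-range1}.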
Thus,
\begin{equation*}
\begin{aligned}
D&=\sum_{k=3}^\infty\left[\bar\lambda_k- \left(14+\frac{112}{9\alpha}\right)+\frac{320}{\alpha} \frac{\bar\lambda_k}{(\bar\lambda_k+4)(\bar\lambda_k+6)} \right](\bar\lambda_k+4)(\bar\lambda_k+6)t_k^2\\
&\geq\left[\bar\lambda_3- \left(14+\frac{112}{9\alpha}\right)+\frac{320}{\alpha} \frac{\bar\lambda_3}{(\bar\lambda_3+4)(\bar\lambda_3+6)} \right]\sum_{k=3}^\infty(\bar\lambda_k+4)(\bar\lambda_k+6)t_k^2\\
&=\left(10-\frac{208}{63\alpha}\right)\sum_{k=3}^\infty
(\bar\lambda_k+4)(\bar\lambda_k+6)t_k^2.
\end{aligned}
\end{equation*}
On the other hand,
we derive from \eqref{Ck'}, \eqref{Ck-ortho} and \eqref{CkG} that
\begin{equation}\label{bk-upbd}
\begin{aligned}
t_k^2&=d_k^2\int_{-1}^1(1-x^2)^2C_k^2=
\left(\int_{-1}^1(1-x^2)^2C_k^2\right)^{-1}\left[\frac{128}{\alpha\lambda_k}\int_{-1}^1 \frac{e^{6u}}{\gamma}(1-x^2)^3C_k'\right]^2\\
&\leq \frac{(2k+5)(\bar\lambda_k+4)(\bar\lambda_k+6)}{128}\left[\frac{8}{\alpha \lambda_k }\frac{\bar\lambda_k}{7}
(1-\alpha\beta)\right]^2 \\
&= \frac{128(2k+5)}{49(\bar\lambda_k+4)(\bar\lambda_k+6)}\left(\frac{1}{\alpha}-\beta\right)^2,\quad k\geq 2.
\end{aligned}
\end{equation}
In particular,
\begin{equation}\label{ak-upbd}
\begin{aligned}
d_k^2&\leq\left(\int_{-1}^1(1-x^2)^2C_k^2\right)^{-1}\frac{128(2k+5)}{
49(\bar\lambda_k+4)(\bar\lambda_k+6)}
\left(\frac{1}{\alpha}-\beta\right)^2\\
&\le \left[\frac{2k+5}{7}\left(\frac{1}{\alpha}-\beta\right)\right]^2, \quad k\geq 2.
\end{aligned}
\end{equation}
By \eqref{n=6C1G}, \eqref{n=6key-id} and \eqref{bk-upbd}, we get the lower bound of $D$:
\begin{equation}\label{D-lowbd}
\begin{aligned}
D&\geq\left(10-\frac{208}{63\alpha}\right)\sum_{k=3}^\infty
(\bar\lambda_k+4)(\bar\lambda_k+6)t_k^2\\
&= \left(10-\frac{208}{63\alpha}\right)\left[\int_{-1}^1|[(1-x^2)^2 G]^{''}|^2-\frac{128}{7}\beta^2-
\frac{1152}{49}\left(\frac{1}{\alpha}-\beta\right)^2\right]\\
&\geq \frac{16}{7} \left(10-\frac{208}{63\alpha}\right) \left[ \frac{16\beta}{5}\left(7-\frac{1}{\alpha}\right)-8\beta^2
-\frac{72}{7}\left(\frac{1}{\alpha}-\beta\right)^2\right].
\end{aligned}
\end{equation}
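Here the $k=1$ mode contributes $(\bar\lambda_1+4)(\bar\lambda_1+6)t_1^2=120\,t_1^2=\frac{128}{7}\beta^2$, by \eqref{n=6C1G} together with the normalization $\int_{-1}^1(1-x^2)^2C_1^2=\frac{16}{105}$ (so that $t_1^2=\frac{16}{105}\beta^2$), while the $k=2$ mode is controlled by \eqref{bk-upbd} with $2k+5=9$, which gives $(\bar\lambda_2+4)(\bar\lambda_2+6)t_2^2\le\frac{1152}{49}\left(\frac{1}{\alpha}-\beta\right)^2$.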
By both \eqref{D-upbd} and \eqref{D-lowbd}, we see that
\begin{equation*}
\begin{aligned}
&\quad \frac{16\beta}{7}\left[
\frac{16}{5}\left(7-\frac{1}{\alpha}\right)\left(\frac{158}{9\alpha}-29\right)
+\left(64-\frac{256}{9\alpha}\right)\beta\right] \\
&\geq \frac{16}{7} \left(10-\frac{208}{63\alpha}\right) \left[ \frac{16\beta}{5}\left(7-\frac{1}{\alpha}\right)-8\beta^2
-\frac{72}{7}\left(\frac{1}{\alpha}-\beta\right)^2\right].
\end{aligned}
\end{equation*}
A straightforward computation shows that
\begin{equation*}
\begin{aligned}
0& \leq \frac{16\beta}{5}\left(7-\frac{1}{\alpha}\right)
\left[\frac{158}{9\alpha}-29-\left(10-\frac{208}{63\alpha}\right)
\right] +\beta^2\left[64-\frac{256}{9\alpha}
+8\left(10-\frac{208}{63\alpha}\right) \right]\\
&+ \frac{72}{7} \left(10-\frac{208}{63\alpha}\right)
\left(\frac{1}{\alpha}-\beta\right)^2\\
&=\frac{16\beta}{5}\left(7-\frac{1}{\alpha}\right)
\left(\frac{146}{7\alpha}-39\right)+48\beta^2\left(3-\frac{8}{7\alpha} \right)+ \frac{72}{7} \left(10-\frac{208}{63\alpha}\right)
\left(\frac{1}{\alpha}-\beta\right)^2.
\end{aligned}
\end{equation*}
We further have
\begin{equation}\label{ine-1}
\begin{aligned}
&\quad \beta\left[\frac{2}{5}\left(7-\frac{1}{\alpha}\right)
\left(\frac{146}{7\alpha}-39\right)+ \frac{1}{\alpha}\left(18-\frac{48}{7\alpha} \right)\right]\\
&\ge\left(\frac{1}{\alpha}-\beta\right) \left[\left(18-\frac{48}{7\alpha} \right)\beta- \left(\frac{90}{7}-\frac{208}{49\alpha}\right)
\left(\frac{1}{\alpha}-\beta\right)\right]\\
&:=\left(\frac{1}{\alpha}-\beta\right) I.
\end{aligned}
\end{equation}
We want to show that the term $I$ is nonnegative. Thus, we need to obtain the lower bound of $\beta.$
Note that the argument exploited in \cite{Gui20} is not applicable to this case. Precisely, following \cite{Gui20}, \eqref{D-upbd} implies that
\begin{equation*}
\left(64-\frac{256}{9\alpha}\right)\beta> \frac{16}{5}\left(7-\frac{1}{\alpha}\right)\left(29-\frac{158}{9\alpha}\right).
\end{equation*}
However, the right-hand side is positive only when $\alpha>\frac{158}{261}\approx0.6054$, so this lower bound on $\beta$ is vacuous for smaller $\alpha$.
Hence, we choose a suitable $\underline{\alpha}>\frac{158}{261}$ so that
\begin{equation}\label{31}
I>0,\quad \mbox{ for } \alpha\in(\underline{\alpha},\bar\alpha).
\end{equation}
Then $\underline{\alpha}$ satisfies that
\begin{equation*}
\begin{aligned}
I&\geq\left(18-\frac{48}{7\underline{\alpha}} \right)\beta+ \left(\frac{208}{49\bar\alpha}-\frac{90}{7} \right)
\left(\frac{1}{\alpha}-\beta\right)\\
&\geq\left(18+\frac{90}{7}-\frac{48}{7\underline{\alpha}}-\frac{208}{49\bar\alpha} \right)\beta+ \left(\frac{208}{49\bar\alpha}-\frac{90}{7} \right)
\frac{1}{\underline{\alpha}}\\
&\ge\left(\frac{216}{7}-\frac{48}{7\underline{\alpha}}
-\frac{208}{49\bar\alpha} \right)
\left(7-\frac{1}{\underline\alpha}\right)\left(29-\frac{158}{9\underline\alpha}\right)
\frac{9\bar \alpha}{180\bar\alpha-80}+\left(\frac{208}{49\bar\alpha}-\frac{90}{7} \right)
\frac{1}{\underline{\alpha}}\\
&\geq0,
\end{aligned}
\end{equation*}
which indicates $\underline{\alpha}\ge 0.61488$. So we take $\underline{\alpha}=0.61488$.
For $\alpha\in(\underline{\alpha},\bar\alpha)$, it follows from \eqref{ine-1} that
\begin{equation*}
\frac{2}{5}\left(7-\frac{1}{\alpha}\right)
\left(\frac{146}{7\alpha}-39\right)+ \frac{1}{\alpha}\left(18-\frac{48}{7\alpha} \right)>0.
\end{equation*}
Hence we find
\begin{equation}\label{alpha6}
\alpha< \alpha^{(6)}:=\frac{115 + \sqrt{2851}}{273} \approx0.61683.
\end{equation}
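Indeed, with $s=1/\alpha$ the preceding inequality becomes
\begin{equation*}
\frac{2}{5}\left(7-s\right)\left(\frac{146}{7}s-39\right)+s\left(18-\frac{48}{7}s\right)>0,
\quad\mbox{that is,}\quad 38s^2-230s+273<0,
\end{equation*}
so $s\in\left(\frac{115-\sqrt{2851}}{38},\frac{115+\sqrt{2851}}{38}\right)$; in particular $\alpha<\frac{115+\sqrt{2851}}{273}$.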
Theorem \ref{even} for $n=6$ is proven.
\end{proof}
\vskip 3mm
\subsection{The Case: $n=8$}
\vskip 3mm
\noindent\par
This subsection focuses on the case $n=8$. As we have addressed,
\begin{equation}\label{P8}
P_8=-\Delta(-\Delta+6)(-\Delta+10)(-\Delta+12),
\end{equation}
and equation \eqref{n-mfe} for $n=8$ becomes
\begin{equation}\label{8-ode}
\alpha[(1-x^2)^4 u']^{(7)}+7!- \frac{9\cdot2^9}{\gamma}e^{8u}=0.
\end{equation}
In view of \eqref{decomp_G^2}-\eqref{decomp_Gn-2}, we derive that
\begin{equation}\label{n=8decomp-G3-1}
\int_{-1}^1(1-x^2)^5(G'')^2=\sum_{k=1}^\infty\bar\lambda_k(\bar\lambda_k-8) t_k^2
\end{equation}
and
\begin{equation}\label{n=8decomp-G2}
\int_{-1}^1(1-x^2)^6(G^{(3)})^2=\sum_{k=1}^\infty\bar\lambda_k(\bar\lambda_k^2-26\bar\lambda_k+144) t_k^2.
\end{equation}
Moreover,
\eqref{decomp_Gn-2} can be rewritten as
\begin{equation}\label{n=8decomp-G3}
\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2 =
\sum_{k=1}^\infty \tilde\lambda_k t_k^2,
\end{equation}
where \begin{equation}\label{tildek}
\tilde\lambda_k=(\bar\lambda_k+6)(\bar\lambda_k+10)
(\bar\lambda_k+12).
\end{equation}
\vskip 2mm
\begin{lemma}
Let $n=8$. Then we have the following estimate:
\begin{equation}\label{n=8keyestimate}
\lfloor G\rfloor^2\leq 28\left(\frac{2}{\alpha}-1\right) \int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2-\frac{20160}{\alpha}\int_{-1}^1 (1-x^2)^4(G')^2,
\end{equation}
where $\lfloor G\rfloor$ is defined in \eqref{norm}.
\end{lemma}
\begin{proof}
By \eqref{n=8key-eq1}, Lemma \ref{Gj-estimate-lemma} and \eqref{n=8decomp-G2}-\eqref{n=8decomp-G3}, we get
\begin{equation*}
\begin{aligned}
&\quad
\lfloor G\rfloor^2 +28\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2\\
&\le \frac{56}{\alpha}\int_{-1}^1 \left[6!(1-x^2)^3 G^2+180
(1-x^2)^4 (G')^2+54(1-x^2)^5(G'')^2+ (1-x^2)^6(G^{(3)})^2\right]\\
&=\frac{56}{\alpha}\sum_{k=1}^\infty \left[720+180\bar\lambda_k+54\bar\lambda_k(\bar\lambda_k-8)
+\bar\lambda_k(\bar\lambda_k^2-26\bar\lambda_k+144) \right]t_k^2\\
&=\frac{56}{\alpha}\sum_{k=1}^\infty \left[(\bar\lambda_k+6)(\bar\lambda_k+10)(\bar\lambda_k+12)-360\bar\lambda_k \right]t_k^2\\
&=\frac{56}{\alpha}\int_{-1}^1 \left[|[(1-x^2)^3 G]^{(3)}|^2-360(1-x^2)^4(G')^2\right].
\end{aligned}
\end{equation*}
The proof is complete.
\end{proof}
\begin{proposition}\label{21/25}
If $\frac{21}{25}\le\alpha<1$, any axially symmetric solution to \eqref{n-mfe} with $n=8$ must be constant.
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{2/3}, when $\alpha>\frac{21}{25}$ (the borderline case $\alpha=\frac{21}{25}$ is handled analogously, with non-strict inequalities), it follows from \eqref{n=8keyestimate} that
\begin{equation*}
\begin{aligned}
\lfloor G\rfloor^2 +28\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2&\le\frac{56}{\alpha}\sum_{k=1}^\infty \left[
\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 \right]t_k^2\\
&<\frac{200}3\sum_{k=1}^\infty \left[
\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 \right]t_k^2.
\end{aligned}
\end{equation*}
Equivalently,
\begin{equation*}
\begin{aligned}
0&>\sum_{k=1}^\infty \left[(\bar\lambda_k+28)\tilde\lambda_k- \frac{200}3(\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 )\right]t_k^2\\
&=\sum_{k=1}^\infty \left(\bar\lambda_k^4-\frac{32}{3}\bar\lambda_k^3-\frac{2492}{3}\bar\lambda_k^2+14976\bar\lambda_k
-27840 \right) t_k^2>0,
\end{aligned}
\end{equation*}
since $\bar\lambda_k\ge8$ and the quartic equation
\begin{equation*}
t^4-\frac{32}{3}t^3-\frac{2492}{3}t^2+14976t
-27840=0
\end{equation*}
has only two real roots, $t_1\approx-31.6$ and $t_2\approx2.1$, so that the left-hand side is positive for all $t\ge8$ (for instance, it equals $37440$ at $t=8$). This is a contradiction, which completes the proof.
\end{proof}
\begin{remark}
We note that this argument fails to yield the analogue of Proposition \ref{2/3} for $n=8$. The reason is the following: when $\alpha>2/3$, it follows from \eqref{n=8keyestimate} that
\begin{equation*}
\begin{aligned}
\lfloor G\rfloor^2 +28\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2&\le\frac{56}{\alpha}\sum_{k=1}^\infty \left[
\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 \right]t_k^2\\
&<84\sum_{k=1}^\infty \left[
\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 \right]t_k^2.
\end{aligned}
\end{equation*}
Equivalently,
\begin{equation*}
\begin{aligned}
0&>\sum_{k=1}^\infty \left[(\bar\lambda_k+28)(\bar\lambda_k+6)(\bar\lambda_k+10)(\bar\lambda_k+12)-84(\bar\lambda_k^3+28\bar\lambda_k^2-108\bar\lambda_k+720 )\right]t_k^2\\
&=\sum_{k=1}^\infty (\bar\lambda_k-8) (\bar\lambda_k^3-20\bar\lambda_k^2-1476\bar\lambda_k+5040)t_k^2.
\end{aligned}
\end{equation*}
There is no contradiction here: for instance, $\bar\lambda_2=18$ lies between the roots $t_2$ and $t_3$ below, so the corresponding summand is negative. Indeed, the equation
\begin{equation*}
t^3-20t^2-1476t+5040=0
\end{equation*}
has the following real solutions
\begin{equation*}
t_1\approx-31.6,\quad t_2\approx 3.3,\quad t_3\approx48.4.
\end{equation*}
\end{remark}
Recall that \eqref{C1G} and \eqref{key-id} for $n=8$ are reduced to
\begin{equation}\label{n=8C1G}
\int_{-1}^1(1-x^2)^3 C_1 G= \frac{32}{315}\beta
\end{equation}
and
\begin{equation}\label{n=8key-id}
\int_{-1}^1 |[(1-x^2)^3G]^{(3)}|^2= \frac{1024}{7}\left(9-\frac{1}{\alpha}\right)\beta.
\end{equation}
As in the proof of Theorem \ref{even} for $n=6$, in what follows, we assume that
\begin{equation}\label{n=8a-range}
\beta\neq0,\quad \frac23<\alpha< \frac{21}{25}
\end{equation}
and note
\begin{equation*}
0<\beta<\frac{1}{\alpha}.
\end{equation*}
\begin{proof}[Proof of Theorem \ref{even} for $n=8$]
Define
\begin{equation}\label{E}
E:=\sum_{k=3}^\infty\left[\bar\lambda_k \tilde\lambda_k -\left(18+\frac{18}{\alpha}\right)
\tilde\lambda_k
+\frac{20160}{\alpha}\bar\lambda_k\right]t_k^2.
\end{equation}
We now present the upper bound of $E$. From \eqref{n=8keyestimate}--\eqref{n=8key-id} we derive
\begin{equation}\label{E-upbd}
\begin{aligned}
E&=\lfloor G\rfloor^2 - \left(\frac{18}{\alpha}+18\right) \int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2+\frac{20160}{\alpha}\int_{-1}^1 (1-x^2)^4(G')^2\\
&-7!\left(\frac{14}{\alpha}-10\right)\beta^2\int_{-1}^1(1-x^2)^3C_1^2\\
&\leq \left(\frac{38}{\alpha}-46\right)\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2-1024\left(\frac{7}{\alpha}-5\right)\beta^2 \\
&=\frac{1024\beta}{7}\left[
\left(\frac{38}{\alpha}-46\right)\left(9-\frac{1}{\alpha}\right)
+7\left(5-\frac7{\alpha}\right)\beta\right].
\end{aligned}
\end{equation}
To estimate the lower bound of $E$,
define
\begin{equation*}
\bar g(t)=t -\left(18+\frac{18}{\alpha}\right) +\frac{20160}{\alpha}\frac{t}{(t+6)(t+10)(t+12)}.
\end{equation*}
Differentiating $\bar g(t)$, we have
\begin{equation*}
\bar g'(t)=1- \frac{20160}{\alpha} \frac{2 \left(t^3+14 t^2-360\right)}{(t+6)^2 (t+10)^2 (t+12)^2}.
\end{equation*}
After some calculations, we deduce that $\bar g''(t)>0$ for $t>\bar\lambda_3(=30)$. Thus, for $t\ge 30$,
\begin{equation*}
\bar g'(t)\ge\bar g'(30)=1-\frac{109}{252 \alpha}>0
\end{equation*}
due to \eqref{n=8a-range}. We further have
\begin{equation}\label{E-lowbd1}
\begin{aligned}
E&=\sum_{k=3}^\infty\left[\bar\lambda_k -\left(18+\frac{18}{\alpha}\right)
+\frac{20160\bar\lambda_k}{\alpha \tilde\lambda_k}\right]\tilde\lambda_kt_k^2\\
&\geq\left(12-\frac{8}{\alpha}\right)\sum_{k=3}^\infty \tilde\lambda_kt_k^2\\
&>0,
\end{aligned}
\end{equation}
since $\alpha>\frac23$; here we used $\bar g(\bar\lambda_3)=30-18-\frac{18}{\alpha}+\frac{20160}{\alpha}\cdot\frac{30}{36\cdot 40\cdot 42}=12-\frac{8}{\alpha}$. We conclude from \eqref{n=8a-range}, \eqref{E-upbd} and \eqref{E-lowbd1} that
\begin{equation*}
0<7\left(\frac7{\alpha}-5\right)\beta< \left(\frac{38}{\alpha}-46\right)\left(9-\frac{1}{\alpha}\right)
\end{equation*}
and so, since $9-\frac{1}{\alpha}>0$, necessarily $\frac{38}{\alpha}-46>0$; that is,
\begin{equation}\label{alpha-n=8}
\alpha< \alpha^{(8)}:=\frac{19}{23}<\frac{21}{25}.
\end{equation}
\end{proof}
\begin{remark}
We wish to obtain for $n=8$ an inequality similar to \eqref{ine-1} for $n=6$, so that the range of $\alpha$ could be refined further. However, the analogous approach does not seem to work, as shown below.
As in \eqref{bk-upbd}, we find
\begin{equation}\label{tk-upbd}
\begin{aligned}
\tilde\lambda_k t_k^2 \le \frac{512(2k+7)}{9 }\left(\frac{1}{\alpha}-\beta\right) ^2, \quad k\geq 2.
\end{aligned}
\end{equation}
It follows from \eqref{n=8C1G}, \eqref{n=8key-id} and \eqref{tk-upbd} that \begin{equation}\label{E-lowbd}
\begin{aligned}
E &\geq\left(12-\frac{8}{\alpha}\right)\sum_{k=3}^\infty \tilde\lambda_kt_k^2\\
&\ge\left(12-\frac{8}{\alpha}\right)
\left[\int_{-1}^1|[(1-x^2)^3 G]^{(3)}|^2-512\beta^2-
\frac{512\cdot11}{9}\left(\frac{1}{\alpha}-\beta\right)^2\right]\\
&=512\left(12-\frac{8}{\alpha}\right) \left[ \frac{2\beta}{7}\left(9-\frac{1}{\alpha}\right) -\beta^2
-\frac{11}{9}\left(\frac{1}{\alpha}-\beta\right)^2\right].
\end{aligned}
\end{equation}
Combining both \eqref{E-upbd} and \eqref{E-lowbd}, we conclude that
\begin{equation*}
\begin{aligned}
&\quad \frac{2\beta}{7}\left[
\left(\frac{38}{\alpha}-46\right)\left(9-\frac{1}{\alpha}\right)
+7\left(5-\frac7{\alpha}\right)\beta\right] \\
&\geq \left(12-\frac{8}{\alpha}\right) \left[ \frac{2\beta}{7}\left(9-\frac{1}{\alpha}\right) -\beta^2
-\frac{11}{9}\left(\frac{1}{\alpha}-\beta\right)^2\right].
\end{aligned}
\end{equation*}
Equivalently,
\begin{equation*}
\begin{aligned}
0&\le \frac{2\beta}{7}\left(9-\frac{1}{\alpha}\right)
\left(\frac{46}{\alpha}-58\right)
+22\beta^2 \left(1-\frac{1}{\alpha}\right)
+\frac{11}9\left(12-\frac{8}{\alpha}\right)
\left(\frac{1}{\alpha}-\beta\right)^2.
\end{aligned}
\end{equation*}
Furthermore,
\begin{equation}\label{key-ine}
\begin{aligned}
&\quad \beta\left[ \frac{2 }{7}\left(9-\frac{1}{\alpha}\right)
\left(\frac{46}{\alpha}-58\right)
+\frac{22}\alpha \left(1-\frac{1}{\alpha}\right) \right]\\
&\ge\left(\frac{1}{\alpha}-\beta\right) \left[ 22\beta \left(1-\frac{1}{\alpha}\right)
-\frac{11}9\left(12-\frac{8}{\alpha}\right)
\left(\frac{1}{\alpha}-\beta\right)\right].
\end{aligned}
\end{equation}
However, since $\frac23<\alpha<1$ implies $1-\frac{1}{\alpha}<0$ and $12-\frac{8}{\alpha}>0$, both terms in the bracket are negative, so the r.h.s. of \eqref{key-ine} is negative. Thus, the previous argument for $n=6$ is not applicable here.
\end{remark}
Theorem \ref{even} follows from \eqref{contradiction}, \eqref{alpha6} and \eqref{alpha-n=8}.
Next we shall show Theorem \ref{Szego}.
\begin{proof}[Proof of Theorem \ref{Szego}]
Following \cite{Chang95}, we define $\phi_{P, t}$, $P \in \mathbb{S}^n$, $t>0$, by $ \phi_{P, t}(\xi)=\tilde \xi:= \pi_{P}^{-1}(ty)$, where $y=\pi_{P}(\xi)$ is the stereographic projection of $\mathbb{S}^n$ from $P$, as the north pole, to the equatorial plane.
In particular, we denote $\phi_{t}=\phi_{P_0, t}$, where $P_0=(1, 0, \cdots, 0)$.
Given $u \in H^\frac n2(\mathbb{S}^n)$ and $t>0$, let
$$
v(\xi)= u(\phi_t(\xi)) + \frac{n+1}{n^2} \ln |\det(d \phi_t)|, \quad \xi \in \mathbb{S}^n.
$$
We first claim that $\mathcal{J}_{\frac n{n+1}}$ enjoys the following invariance property:
\begin{equation}\label{invariance}
\mathcal{J}_{\frac n{n+1}}(u)=\mathcal{J}_{\frac n{n+1}}(v), \quad \forall u \in H^\frac n2(\mathbb{S}^n), \, \, t>0.
\end{equation}
The proof has been carried out in detail for the case
$n = 4$ in \cite[Prop.~3.4]{Gui20}. The same argument works for general $n$ with
slight modifications, so we skip the proof here. We further see that for any $ u \in H^\frac n2(\mathbb{S}^n)$, there is a $\phi_{P, t}$ such that
$$
v(\xi)= u(\phi_{P,t}(\xi)) + \frac{n+1}{n^2} \ln |\det(d \phi_{P,t})|, \quad \xi \in \mathbb{S}^n
$$
belongs to $ \mathfrak L$.
In conclusion, Theorem \ref{Szego} follows
immediately from Theorem \ref{even} and \eqref{invariance}.
\end{proof}
We note that a similar but more general Szeg\H{o} limit theorem for $u\in H^1(\mathbb{S}^2)$ is proven in \cite{CG} using a variational method under a mass center constraint, in combination with the improved Moser--Trudinger inequality in \cite{GM}.
\vskip4mm
\section{Bifurcation}
\par
In this section we shall obtain results on bifurcation curves for \eqref{n-ode}, in general for $\alpha>0$ and in particular for $\alpha \in (\frac1{n+1}, \frac12)$.
We shall first apply the standard bifurcation theory to analyze the local bifurcation diagram. Let us recall the following general theorem.
\par
\begin{theorem}{\rm(\cite[Theorem 1.7]{CR71})}\label{local}
Let $X$,$Y$ be Hilbert spaces, $V$ a neighborhood of $0$ in $X$ and $F:(-1,1)\times V\rightarrow Y$ a map with the following properties:
\begin{itemize}
\item[(1)] $F(t,0)=0$ for any $t$;
\item[(2)] $\partial_{t}F$,$\partial_{x}F$ and $\partial_{t,x}^{2}F$ exist and are continuous;
\item[(3)] $\ker(\partial_{x}F(0,0))=\mbox{span}\{w_{0}\}$ and $Y/\mathcal{R}(\partial_{x}F(0,0))$ are one-dimensional;
\item[(4)] $\partial_{t,x}^{2}F(0,0)w_0\not\in\mathcal{R}(\partial_{x}F(0,0))$.
\end{itemize}
Let $Z$ be any complement of $ \ker(\partial_{x}F(0,0))$ in $X$. Then there exist $\varepsilon_{0}>0$, a neighborhood $U\subset(-1,1)\times X$ of $(0,0)$, and continuously differentiable maps $\eta:(-\varepsilon_{0},\varepsilon_{0})\rightarrow\mathbb{R}$ and $z:(-\varepsilon_{0},\varepsilon_{0})\rightarrow Z$ such that $\eta(0) =0,\
z(0)=0$ and
z(0)=0$ and
\begin{equation} \nonumber
F^{-1}(0)\cap U\setminus((-1,1)\times\{0\})=\{(\eta(\varepsilon),\varepsilon w_{0}+\varepsilon z(\varepsilon))\mid\varepsilon\in(-\varepsilon_{0},\varepsilon_{0})\}.
\end{equation}
\end{theorem}
Recall that the shape of the above local bifurcating branch can be further described by the following theorem (see, e.g., \cite[I.6]{KH12}):
\begin{theorem}\label{local2}
In the setting of Theorem \ref{local}, let $0\neq\psi\in Y'$ satisfy
\begin{equation*}
\mathcal{R}(\partial_{x}F(0,0))=\{y\in Y\mid\langle\psi,y\rangle=0\},
\end{equation*}
where $Y'$ is the dual space of $Y$. Then we have
\begin{equation*}\label{derivative}
\eta'(0) =-\frac{\langle\partial^{2}_{x,x}F(0,0)[w_{0},w_{0}],
\psi\rangle}{2\langle\partial^{2}_{t,x}F(0,0)w_{0},\psi\rangle}.
\end{equation*}
Furthermore, the bifurcation is transcritical provided that $ \eta'(0)\neq 0$.
\end{theorem}
\vskip 2mm
\par
Note that critical points of $I_\alpha(u)$
satisfy
\begin{equation}\label{41}
\left(-1\right)^\frac{n}{2}[(1-x^2)^\frac{n}{2} u']^{(n-1)}+ \rho(1-x^2)^\frac{n-2}{2}\left(1-\frac{\sqrt\pi\Gamma\left(\frac{n}{2}\right)}{ \Gamma\left(\frac{n+1}{2}\right)} \frac{ e^{nu}}{\int_{-1}^1(1-x^2)^\frac{n-2}{2}e^{nu}}\right) =0,
\quad x \in (-1, 1),
\end{equation}
where $\rho=\frac{(n-1)!}{\alpha}$.
Let
\begin{equation*}
\mathcal{V}=\bigg\{u\in H^n(\mathbb{S}^n):u=u(x),\ \int_{\mathbb{S}^n}udw=0 \bigg\}; \quad
\mathcal{W}=\bigg\{u\in L^2(\mathbb{S}^n):u=u(x),\ \int_{\mathbb{S}^n}udw=0\bigg\}
\end{equation*}
and define a nonlinear operator $\mathcal{T}:\mathbb{R}\times \mathcal{V}\rightarrow \mathcal{W}$ as
\begin{equation*}
\mathcal{T}(\rho,u)=P_n u+\rho \left(1- \frac{e^{nu}}{\int_{\mathbb{S}^n} e^{nu}dw}\right).
\end{equation*}
Obviously, the operator $\mathcal{T}$ is well defined. After direct computations, one has
\begin{equation*}
\partial_{u}\mathcal{T}(\rho,0)\phi=P_n\phi-n\rho\phi.
\end{equation*}
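Since $P_n C_k^{\frac{n-1}{2}}=\lambda_k C_k^{\frac{n-1}{2}}$ (see \eqref{PnCk}), the linearization $\partial_{u}\mathcal{T}(\rho,0)$ is singular on $\mathcal{V}$ precisely when $n\rho=\lambda_k$ for some $k$; these are the values at which the bifurcation points $\rho_k=\lambda_k/n$ below occur.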
Define
\begin{equation*}
\mathcal{F}(\rho,u)=u+\rho P_n^{-1} \left(1- \frac{e^{nu}}{\int_{\mathbb{S}^n} e^{nu}dw}\right).
\end{equation*}
Let
$\mathcal{S}$ denote the closure of the set of nontrivial solutions of
\begin{equation}\label{47}
\mathcal{F}(\rho,u)=0.
\end{equation}
It is clear that \eqref{47} and \eqref{41} are equivalent.
\par
Let $\lambda_k$ and $C_k^{\frac{n-1}{2}}$ be given in \eqref{PnCk}. Then by similar arguments as in \cite[Lemma 5.3]{Gui20}, we have
\begin{theorem}\label{bifur1}
Let $ \rho_k=\frac{\lambda_k}{n}$ for $k=1,2,3,\dots$. Then the points $(\rho_{k},0)$ are bifurcation points
for the curve of solutions $(\rho,0)$. In particular, there exist $\varepsilon_{0}>0$ and continuously differentiable functions $\rho_k:(-\varepsilon_{0},\varepsilon_{0})\rightarrow \mathbb{R}$ and $\psi_k:(-\varepsilon_{0},\varepsilon_{0})\rightarrow \{C_k^{\frac{n-1}{2}}\}^\bot$ such that $\rho_k(0)=\rho_k$, $\psi_k(0)=0$ and every
nontrivial solution of \eqref{41} in a small neighborhood of $(\rho_{k},0)$ is of the form
\begin{equation*}
(\rho_k(\varepsilon),\varepsilon C_k^{\frac{n-1}{2}}+\varepsilon \psi_k(\varepsilon)).
\end{equation*}
In particular, when $k=2$, the bifurcation point $(\rho_2, 0)=(\frac{(n+1)!} {n}, 0) $ is a transcritical bifurcation point. Indeed, we have
\begin{equation*}\label{trans}
\rho_2'(0)= - \frac{(n+1)!}{2} {\frac{ \int_{-1}^{1} (1-x^2)^{\frac{n-2}{2}}(C_2^{\frac{n-1}{2}})^3}{ \int_{-1}^{1} (1-x^2)^{\frac{n-2}{2}}(C_2^{\frac{n-1}{2}})^2}}= -\frac{(n+1)!(n-1)^2}{n(n+5)}\neq 0.
\end{equation*}
\end{theorem}
\begin{corollary}
Let $ \alpha_k=\frac{n!}{\lambda_k}$ for $k=1, 2,3,\dots$. Then the points $(\alpha_{k},0)$ are bifurcation points for the curve of solutions $(\alpha,0)$ of \eqref{n-ode}. Moreover, when $k=2$, the bifurcation point $(\frac1{n+1}, 0) $ is a transcritical bifurcation point.
\end{corollary}
\begin{remark}
When $k=1$, the bifurcation leads to the family of solutions $u=-\ln(1-ax), a \in (-1, 1)$ and $\rho=(n-1)!$.
It is clear that $(\rho_k, 0)$ is not a transcritical bifurcation point for odd $k$, since $C_k^{\frac{n-1}{2}}$ is an odd function and $\rho_k'(0)=0$ in this case. We expect $(\rho_k, 0)$ to be a transcritical bifurcation point for even $k$ as well; one only needs to check that {$ \int_{-1}^{1} (1-x^2)^{\frac{n-2}{2}}(C_k^{\frac{n-1}{2}})^3\not =0$} in this case, which can be confirmed numerically for small $k$. However, in this paper we only need the transcriticality of $(\rho_2, 0)$.
\end{remark}
In order to analyze the global bifurcation diagram, we employ a global bifurcation theorem via degree arguments (see \cite{KH12,R}) and also exploit special properties of solutions to \eqref{41}.
First, we recall a global bifurcation result (see \cite[Theorem II.5.8]{KH12}).
\begin{proposition}\label{t47}
In Theorem \ref{bifur1}, the bifurcation at $(\rho_{k},0)$ is global
and satisfies the Rabinowitz alternative, i.e., a global continuum of solutions to \eqref{41} either goes
to infinity in $\mathbb{R} \times \mathcal{W}$ or meets the trivial solution curve at $(\rho_m, 0)$ for some $m \ge 1$ and $m\neq k$.
\end{proposition}
Next we state and prove the following more specific global bifurcation result regarding \eqref{41}.
\begin{theorem}\label{main-bifur}
1) For $k\ge 2$, there exists a global continuum of solutions
$\mathcal{B}^+_k \subset \mathcal{S} \setminus \{ (\rho, 0), \rho \in \mathbb{R}\}$
of \eqref{41} which coincides in a small neighborhood of $(\rho_k, 0)$ with
$$\{ (\rho_k(\varepsilon),\varepsilon C_k^{\frac{n-1}{2}}+\varepsilon \psi_k(\varepsilon)), \varepsilon<0\}.$$
$\mathcal{B}^+_k $ is contained in
$\mathcal{N}_2:= \{ (\rho, u): \rho > \frac{(n+1)!}n, \, u \in L^2(-1, 1) \}$ and is uniformly bounded in $L^2(-1, 1)$ for $\rho$ in
any fixed finite interval $[\rho_m, \rho_M] \subset (\frac{(n+1)!}n, \infty)$. Furthermore, $ \mathcal{B}^+_k $ satisfies the improved Rabinowitz alternative, i.e., either $\mathcal{B}^+_k$ extends in $\rho$ to infinity or meets the trivial solution curve at $(\rho_m, 0)$ for some $m \geq 2$.
2) Similarly, for $k\ge 2$, there exists a global continuum of solutions $\mathcal{B}^{-}_k $ which coincides in a small neighborhood of $(\rho_k, 0)$ with $\{ (\rho_k(\varepsilon),\varepsilon C_k^{\frac{n-1}{2}}+\varepsilon \psi_k(\varepsilon)), \varepsilon>0\} $. When $k\ge 3$, $\mathcal{B}^{-}_k $ is contained in
$\mathcal{N}_2$ and satisfies the boundedness for $\rho$ in any fixed finite interval $[\rho_m, \rho_M] \subset (2(n-1)!, \infty)$. Furthermore, the improved Rabinowitz alternative holds.
3) Moreover, $\mathcal{B}^+_k=\{ u: u(x)=v(-x), v \in \mathcal{B}^{-}_k\} $ when $k$ is odd.
4) The global continuum of solutions $\mathcal{B}^{-}_2$ of \eqref{41} must be contained in the set
$$\mathcal{N}_1:= \left\{ (\rho, u): \rho \in \left( \frac{(n-1)! }{ \alpha^{(n)}}, \frac{(n+1)!}n\right) \supset (2(n-1)!, \frac{(n+1)!}n), \, u \in L^2(-1, 1)\right \}.$$
Furthermore, $\mathcal{B}^{-}_2$ is unbounded in $L^\infty ([-1, 1] )$, and there exists a sequence of $ (\rho^{(t)}, u^{(t)}) \in \mathcal{B}^{-}_2, t =1, 2, \cdots $ such that $ \rho^{(t)} \to 2(n-1)!$ and
$\|u^{(t)}\|_{L^\infty([-1,1])} \to \infty$. As an immediate consequence, there is a nontrivial solution to \eqref{41} for any $\rho \in (2(n-1)!, \frac{(n+1)!}n)$.
\end{theorem}
\begin{proof}
The proof is similar to that of the case $n=4$ in \cite{Gui20}. So we omit it.
\end{proof}
\begin{proof}[Proof of Theorem \ref{nontrivial}]
Theorem \ref{nontrivial} follows immediately from Theorem \ref{main-bifur}: since $\rho=\frac{(n-1)!}{\alpha}$, the range $\rho\in\left(2(n-1)!, \frac{(n+1)!}{n}\right)$ corresponds to $\alpha \in (\frac1{n+1}, \frac12)$, which yields a nontrivial solution to \eqref{n-ode} for every such $\alpha$.
\end{proof}
\section{Appendix A: Proof of \eqref{n=6}}
\vskip 2mm
\noindent\par
In this appendix, we compute the term $ \int_{-1}^1(1-x^2)^2G^2 [(1-x^2)^2G]^{(5)}$.
First,
\begin{equation}\label{3-1}
\begin{aligned}
&\quad \int_{-1}^1(1-x^2)^2G^2 [(1-x^2)^2G]^{(5)}= \int_{-1}^1(1-x^2)^4G^2 G^{(5)}-
20\int_{-1}^1x(1-x^2)^3G^2 G^{(4)}\\
&-40\int_{-1}^1(1-3x^2)(1-x^2)^2G^2 G^{(3)}+240\int_{-1}^1x(1-x^2)^2G^2 G^{''}
+120\int_{-1}^1 (1-x^2)^2G^2 G^{'}\\
&:=\sum_{i=1}^5 I_i.
\end{aligned}
\end{equation}
Let
\begin{equation*}
\begin{aligned}
G_5&=\int_{-1}^1(1-x^2)^4(G^3)^{(5)}=3\int_{-1}^1(1-x^2)^4G^2 G^{(5)}+30\int_{-1}^1(1-x^2)^4G G'G^{(4)}\\
&+60\int_{-1}^1(1-x^2)^4 GG''G^{(3)}+60\int_{-1}^1(1-x^2)^4 (G')^2G^{(3)}+90\int_{-1}^1(1-x^2)^4 G'(G'')^2\\
&:=\sum_{i=1}^5 G_5i,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
G_4&=\int_{-1}^1x(1-x^2)^3(G^3)^{(4)}=3\int_{-1}^1x(1-x^2)^3G^2 G^{(4)}+24\int_{-1}^1x(1-x^2)^3G G'G^{(3)}\\
&+18\int_{-1}^1x(1-x^2)^3 G(G'')^2+36\int_{-1}^1x(1-x^2)^3 (G')^2G^{''}\\
&:=\sum_{i=1}^4 G_4i,
\end{aligned}
\end{equation*}
$X_1=(1-7x^2)(1-x^2)^2$, $X_2=(1-3x^2)(1-x^2)^2$ and
\begin{equation*}
\begin{aligned}
G_{3}^{(j)}&=\int_{-1}^1X_j(G^3)^{(3)}=3\int_{-1}^1X_j G^2 G^{(3)}+18\int_{-1}^1X_j GG'G''+
6\int_{-1}^1X_j(G')^3 \\
&:=\sum_{i=1}^3 G_{3}^{(j)}i,\quad j=1,2.
\end{aligned}
\end{equation*}
Here, we neglect the numerical coefficients in front of $G_5i$, $G_4i$ and $G_{3}^{(j)}i$. For example,
$G_54=\int_{-1}^1(1-x^2)^4 (G')^2G^{(3)}.$
\par
After integration by parts,
\begin{equation}\label{G_5}
\begin{aligned}
& \int_{-1}^1(1-x^2)^4G^2 G^{(5)}=-\int_{-1}^1[(1-x^2)^4G^2]'G^{(4)}=8G_41-2G_52;\\
& \int_{-1}^1(1-x^2)^4G G'G^{(4)}=8G_42-(G_53+G_54);\\
&\int_{-1}^1(1-x^2)^4 GG''G^{(3)}=4G_43-\frac12 G_55;\\
&\int_{-1}^1(1-x^2)^4 (G')^2G^{(3)}=8G_44-2G_55.
\end{aligned}
\end{equation}
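For instance, the first identity follows from $[(1-x^2)^4G^2]'=-8x(1-x^2)^3G^2+2(1-x^2)^4GG'$, so that integrating by parts against $G^{(4)}$ produces $8G_41-2G_52$; the remaining identities are obtained in the same manner.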
Similarly,
\begin{equation}\label{G_4}
\begin{aligned}
& \int_{-1}^1x(1-x^2)^3G^2 G^{(4)}=-G_{3}^{(1)}1-2G_42;\\
&\int_{-1}^1x(1-x^2)^3G G'G^{(3)}=-G_{3}^{(1)}2-G_43-G_44;\\
&\int_{-1}^1x(1-x^2)^3 G(G'')^2=-G_{3}^{(1)}2-G_42-G_44;\\
&\int_{-1}^1x(1-x^2)^3 (G')^2G^{''}=-\frac13G_{3}^{(1)}3.\\
\end{aligned}
\end{equation}
Then
\begin{equation*}
\begin{aligned}
&\quad I_1+I_2+a_1G_5+a_2G_4\\
&=(1+3a_1)G_51+30a_1G_52+60a_1(G_53+G_54)+90a_1G_55\\
&+(3a_2-20)G_41+24a_2G_42+18a_2G_43+36a_2G_44\\
&=(24a_1-2)G_52+...+(24a_1+3a_2-12)G_41+...\\
&=(24a_1-2)[8G_42-(G_53+G_54)]+60a_1(G_53+G_54)+90a_1G_55
+(24a_1+3a_2-12)G_41+...\\
&=(36a_1+2)(G_53+G_54)+90a_1G_55 +(24a_1+3a_2-12)G_41+(24a_2+192a_1-16)G_42+...\\
&=-5G_55 +(24a_1+3a_2-12)G_41+(24a_2+192a_1-16)G_42+(144a_1+18a_2+8)(G_43+2G_44).\\
\end{aligned}
\end{equation*}
Thus,
\begin{equation*}
\begin{aligned}
&\quad I_1+I_2+a_1G_5+a_2G_4+5G_55\\
&=(24a_1+3a_2-12)G_41+(24a_2+192a_1-16)G_42+(144a_1+18a_2+8)(G_43+2G_44)\\
&=-(24a_1+3a_2-12)G_{3}^{(1)}1+(144a_1+18a_2+8)(G_42+G_43+2G_44)\\
&=-(24a_1+3a_2-12)G_{3}^{(1)}1-(144a_1+18a_2+8)G_{3}^{(1)}2
+(144a_1+18a_2+8)G_44.
\end{aligned}
\end{equation*}
Let $t=24a_1+3a_2$, then we have
\begin{equation*}
\begin{aligned}
&\quad I_1+I_2+a_1G_5+a_2G_4+5G_55+\frac{t-12}{3}G_{3}^{(1)}\\
&=6(t-12)G_{3}^{(1)}2-(6t+8)G_{3}^{(1)}2+2(t-12)G_{3}^{(1)}3
-\frac{6t+8}{3}G_{3}^{(1)}3\\
&=-80G_{3}^{(1)}2
-\frac{80}{3}G_{3}^{(1)}3.
\end{aligned}
\end{equation*}
Notice that
\begin{equation}\label{G-3}
\begin{aligned}
&\int_{-1}^1X_j G^2 G^{(3)}=-\int_{-1}^1X_j'G^2G''-2\int_{-1}^1X_jGG'G''=-\int_{-1}^1X_j'G^2G''
-2G^{(j)}_32\\
&\int_{-1}^1X_j GG'G''=-\frac12\int_{-1}^1X_j'G(G')^2-\frac12G^{(j)}_33. \\
\end{aligned}
\end{equation}
Then
\begin{equation*}
\begin{aligned}
&\quad I_3-80G_{3}^{(1)}2-\frac{80}{3}G_{3}^{(1)}3
=40\int_{-1}^1X_2'G^2G''+80\int_{-1}^1[X_2-X_1]GG'G''-\frac{80}{3}G_{3}^{(1)}3\\
&=40\int_{-1}^1X_2'G^2G''+320\int_{-1}^1x^2(1-x^2)^2GG'G''-\frac{80}{3}G_{3}^{(1)}3.
\end{aligned}
\end{equation*}
Thus,
\begin{equation*}
\begin{aligned}
&\quad I_3+I_4-80G_{3}^{(1)}2-\frac{80}{3}G_{3}^{(1)}3
=I_4+\int_{-1}^1 40X_2'G^2G''+320\int_{-1}^1x^2(1-x^2)^2GG'G''
-\frac{80}{3}G_{3}^{(1)}3\\
&=80\int_{-1}^1x(1-x^2)[3(1-x^2)+(9x^2-5)]G^2G''-320\int_{-1}^1x(1-x^2)(1-3x^2)G(G')^2\\
&-\int_{-1}^1(1-x^2)^2\left[160x^2+\frac{80}{3}(1-7x^2)\right](G')^3\\
&=160\int_{-1}^1x(1-x^2)(3x^2-1)(G^2G''+2G(G')^2)-\frac{80}{3}
\int_{-1}^1(1-x^2)^3(G')^3\\
&=\frac{160}{3}
\int_{-1}^1x(1-x^2)(3x^2-1)(G^3)''-\frac{80}{3}
\int_{-1}^1(1-x^2)^3(G')^3.
\end{aligned}
\end{equation*}
It is easy to see that
\begin{equation*}
\begin{aligned}
\sum_{i=1}^5I_i&=-a_1\int_{-1}^1(1-x^2)^4(G^3)^{(5)}-a_2\int_{-1}^1x(1-x^2)^3(G^3)^{(4)}
-\frac{t-12}{3}\int_{-1}^1
(1-7x^2)(1-x^2)^2(G^3)^{(3)}\\
&-\frac{160}{3}
\int_{-1}^1x(1-x^2)(3x^2-1)(G^3)''+40\int_{-1}^1(1-x^2)^2(G^3)'\\
&-5\int_{-1}^1(1-x^2)^4 G'(G'')^2-\frac{80}{3}
\int_{-1}^1(1-x^2)^3(G')^3\\
&=-5\int_{-1}^1(1-x^2)^4 G'(G'')^2-\frac{80}{3}
\int_{-1}^1(1-x^2)^3(G')^3.
\end{aligned}
\end{equation*}
\section{Appendix B: Proof of \eqref{n=8}}
We now compute $\int_{-1}^1(1-x^2)^3 G^2 [(1-x^2)^3 G]^{(7)}$. First,
\begin{equation}\label{8-1}
\begin{aligned}
&\quad\int_{-1}^1(1-x^2)^3 G^2 [(1-x^2)^3 G]^{(7)}= \int_{-1}^1(1-x^2)^6G^2 G^{(7)} -42\int_{-1}^1x(1-x^2)^5G^2 G^{(6)}\\
&-126 \int_{-1}^1(1-x^2)^4\left(1-5 x^2\right) G^2 G^{(5)}+840 \int_{-1}^1x(1-x^2)^3\left(3-5 x^2\right) G^2 G^{(4)}\\
&+2520\int_{-1}^1(1-x^2)^3\left(1-5 x^2\right) G^2 G^{(3)}-3\times 7!\int_{-1}^1x(1-x^2)^3 G^2 G''-7!\int_{-1}^1(1-x^2)^3 G^2 G'\\
:=\sum_{i=1}^7 I_i.
\end{aligned}
\end{equation}
Let
\begin{equation*}
\begin{aligned}
G_7&=\int_{-1}^1(1-x^2)^6(G^3)^{(7)}=3\int_{-1}^1(1-x^2)^6G^2 G^{(7)}+42\int_{-1}^1(1-x^2)^6G G'G^{(6 )}\\
&+126\int_{-1}^1(1-x^2)^6 GG''G^{(5)}+126\int_{-1}^1(1-x^2)^6 (G')^2G^{(5)}+210\int_{-1}^1(1-x^2)^6 GG^{(3)}G^{(4)}\\
&+630\int_{-1}^1(1-x^2)^6 G'G''G^{(4)}+630\int_{-1}^1(1-x^2)^6 (G'')^2G^{(3)}+420\int_{-1}^1(1-x^2)^6 G'(G^{(3)})^2\\
&:=\sum_{i=1}^8G_7i;
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
G_6&=\int_{-1}^1x(1-x^2)^5(G^3)^{(6)}=3\int_{-1}^1x(1-x^2)^5G^2 G^{(6)}+36\int_{-1}^1x(1-x^2)^5G G'G^{(5)}\\
&+90\int_{-1}^1x(1-x^2)^5 GG''G^{(4)}+90\int_{-1}^1x(1-x^2)^5 (G')^2G^{(4)}+60\int_{-1}^1x(1-x^2)^5 G(G^{(3)})^2\\
&+360\int_{-1}^1x(1-x^2)^5 G'G''G^{(3)}+90\int_{-1}^1x(1-x^2)^5 (G'')^3\\
&:=\sum_{i=1}^7 G_6i.
\end{aligned}
\end{equation*}
We introduce the following functions:
\begin{equation}\label{X^ji}
\begin{aligned}
X_5^{(1)}&=[x(1-x^2)^5]'=(1-x^2)^4(1-11x^2),\quad X_5^{(2)}=(1-x^2)^4(1-5x^2);\\
X_4^{(s)}&=\left[X_5^{(s)}\right]', \ s=1,2;\qquad X_4^{(3)}=x(1-x^2)^3(3-5x^2);\\
X_3^{(s)}&=\left[X_4^{(s)}\right]', \ s=1,2,3;\qquad X_3^{(4)}=(1-x^2)^3(1-5x^2).
\end{aligned}
\end{equation}
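For later use, we record the explicit expressions obtained by direct differentiation:
\begin{equation*}
\begin{aligned}
X_3^{(1)}&=-30(1-x^2)^2(33x^4-18x^2+1),\qquad &X_3^{(2)}&=-6(1-x^2)^2(75x^4-46x^2+3),\\
X_3^{(3)}&=3(1-x^2)^2(15x^4-12x^2+1),\qquad &X_3^{(4)}&=(1-x^2)^2(5x^4-6x^2+1).
\end{aligned}
\end{equation*}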
Then define
\begin{equation*}
\begin{aligned}
G_5^{(j)}&=\int_{-1}^1 X_5^{(j)} (G^3)^{(5)}=3\int_{-1}^1X_5^{(j)}G^2 G^{(5)}+30\int_{-1}^1X_5^{(j)}G G'G^{(4)}\\
&+60\int_{-1}^1X_5^{(j)} GG''G^{(3)}+60\int_{-1}^1X_5^{(j)} (G')^2G^{(3)}+90\int_{-1}^1X_5^{(j)} G'(G'')^2\\
&:=\sum_{i=1}^5 G_5^{(j)}i,\quad j=1,2;
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
G_4^{(j)}&=\int_{-1}^1X_4^{(j)}(G^3)^{(4)}=3\int_{-1}^1X_4^{(j)}G^2 G^{(4)}+24\int_{-1}^1X_4^{(j)}G G'G^{(3)}\\
&+18\int_{-1}^1X_4^{(j)} G(G'')^2+36\int_{-1}^1X_4^{(j)} (G')^2G^{''}\\
&:=\sum_{i=1}^4 G_4^{(j)}i, \quad j=1,2,3;
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
G_{3}^{(j)}&=\int_{-1}^1X_3^{(j)}(G^3)^{(3)}=3\int_{-1}^1X_3^{(j)} G^2 G^{(3)}+18\int_{-1}^1X_3^{(j)}GG'G''+
6\int_{-1}^1X_3^{(j)}(G')^3\\
&:=\sum_{i=1}^3 G_{3}^{(j)}i, \quad j=1,2,3,4.
\end{aligned}
\end{equation*}
As in Appendix A, we neglect the coefficients before $G_7i,\ G_6i,\ \dots,\ G_3^{(j)}i$.
\par
After integration by parts, one has
\begin{equation}\label{G-7}
\begin{aligned}
&G_71=\int_{-1}^1(1-x^2)^6G^2 G^{(7)}=-\int_{-1}^1[(1-x^2)^6G^2]'G^{(6)}=12G_61-2G_72;\\
& G_72=12G_62-(G_73+G_74);\quad G_73=12G_63-(G_75+G_76);\\
&G_74=12G_64-2 G_76;\quad G_75=6G_65-\frac12 G_78;\\
&G_76=12G_66-(G_77+G_78);\quad G_77=4 G_67;
\end{aligned}
\end{equation}
Similarly,
\begin{equation}\label{G-6}
\begin{aligned}
&-G_61=G_5^{(1)}1+2G_62;\quad &-G_62=G_5^{(1)}2+(G_63+G_64);\\
&-G_63=G_5^{(1)}3+(G_65+G_66); &-G_64=-G_5^{(1)}4+2G_66;\\
&-G_65=G_5^{(1)}3+(G_63+G_66); &-G_66=\frac12G_5^{(1)}5+\frac12G_67;\\
\end{aligned}
\end{equation}
\begin{equation}\label{G-5}
\begin{aligned}
&-G_5^{(j)}1=G_4^{(j)}1+2G_5^{(j)}2;\quad &-G_5^{(j)}2=G_4^{(j)}2+(G_5^{(j)}3+G_5^{(j)}4);\\
&-G_5^{(j)}3=\frac12G_4^{(j)}3+\frac12G_5^{(j)}5;
\quad &-G_5^{(j)}4=G_4^{(j)}4+2G_5^{(j)}5;\\
\end{aligned}
\end{equation}
\begin{equation}\label{G-4}
\begin{aligned}
-G_4^{(j)}1=G_3^{(j)}1+2G_4^{(j)}2;\quad -G_4^{(j)}2-G_4^{(j)}3=G_3^{(j)}2+G_4^{(j)}4;\quad
-G_4^{(j)}4=\frac13G_3^{(j)}3.
\end{aligned}
\end{equation}
By these relations, we calculate
\begin{equation*}
\begin{aligned}
&\quad I_1+I_2+a_1G_7+a_2G_6\\
&=(1+3a_1)G_71+42a_1G_72+126a_1(G_73+G_74)+210a_1G_75
+630a_1(G_76+G_77)+420a_1G_78\\
&+(3a_2-42)G_61+36a_2G_62+90a_2(G_63+G_64)+60a_2G_65
+360a_2G_66+90a_2G_67 \\
&=(36a_1-2)G_72+...+(36a_1+3a_2-30)G_61+...\\
&=(90a_1+2)(G_73+G_74)+...+(360a_1+30a_2+36)G_62+...-(36a_1+3a_2-30)G^{(1)}_51\\
&=(120a_1-2)(G_75+3G_76)+630a_1G_77+420a_1G_78+(720a_1+60a_2-12)(G_63+G_64)\\
&+60a_2G_65
+360a_2G_66+90a_2G_67-(36a_1+3a_2-30)G^{(1)}_51-(360a_1+30a_2+36)G^{(1)}_52\\
&=(270a_1+6)G_77+7G_78+3(720a_1+60a_2-12)G_66+90a_2G_67-(36a_1+3a_2-30)G^{(1)}_51\\
&-(360a_1+30a_2+36)G^{(1)}_52-(720a_1+60a_2-12)(G^{(1)}_53+G^{(1)}_54)\\
&=7G_78+42G_67-(36a_1+3a_2-30)G^{(1)}_51-...-(1080a_1+90a_2-18)G^{(1)}_55.
\end{aligned}
\end{equation*}
Let
\begin{equation*}
II=I_1+I_2+a_1G_7+a_2G_6-7G_78-42G_67 \mbox{ and }12a_1+a_2=t.
\end{equation*}
Then it follows from \eqref{G-5} that
\begin{equation*}
\begin{aligned}
II&=-(3t-30)G^{(1)}_51
-( 30t+36)G^{(1)}_52-(60t-12)(G^{(1)}_53+G^{(1)}_54)-(90t-18)G^{(1)}_55\\
&=(3t-30)G^{(1)}_41-(24t+96)G^{(1)}_52-...\\
&=(3t-30)G^{(1)}_41+(24t+96)G^{(1)}_42-(36t-108)(G^{(1)}_53+G^{(1)}_54)
-(90t-18)G^{(1)}_55\\
&=(3t-30)G^{(1)}_41+(24t+96)G^{(1)}_42+18(t-3)(G^{(1)}_43+2G^{(1)}_44)
-252G^{(1)}_55.
\end{aligned}
\end{equation*}
Combining this with \eqref{G-4}, we further have
\begin{equation*}
\begin{aligned}
II+252G^{(1)}_55&=(3t-30)G^{(1)}_41+(24t+96)G^{(1)}_42+18(t-3)(G^{(1)}_43+2G^{(1)}_44)\\
&=-3(t-10)G^{(1)}_31+6(3t+26)G^{(1)}_42+18(t-3)(G^{(1)}_43+2G^{(1)}_44)\\
&=-3(t-10)G^{(1)}_31-6(3t+26)G^{(1)}_32-210G^{(1)}_43+6(3t-44)G^{(1)}_44\\
&=-3(t-10)G^{(1)}_31-6(3t+26)G^{(1)}_32-2(3t-44)G^{(1)}_33-210G^{(1)}_43.
\end{aligned}
\end{equation*}
Repeating the above arguments, we find
\begin{equation*}
\begin{aligned}
-\frac1{126}I_3&= G^{(2)}_51=G^{(2)}_31-4G^{(2)}_32+2G^{(2)}_33-5G^{(2)}_43-5G^{(2)}_55;\\
\frac1{840}I_4&= G^{(3)}_41=-G^{(3)}_31+2G^{(3)}_32-\frac23G^{(3)}_33+2G^{(3)}_43.
\end{aligned}
\end{equation*}
Then we compute
\begin{equation*}
\begin{aligned}
&\quad II+I_3+I_4+I_5+tG_3^{(1)}\\
&=II-126G^{(2)}_51+840 G^{(3)}_41+2520 G^{(4)}_31+tG_3^{(1)}\\
&=-252G^{(1)}_55+630G^{(2)}_55-210G^{(1)}_43+630G^{(2)}_43+1680G^{(3)}_43\\
&+\left(30G^{(1)}_31 -126G^{(2)}_31-840G^{(3)}_31+2520G^{(4)}_31\right)
+\left(1680G^{(3)}_32-156G^{(1)}_32+504G^{(2)}_32\right) \\
&+
\left(88G^{(1)}_33-252G^{(2)}_33 -560G^{(3)}_33\right)\\
&:=III_1+III_2+\sum_{i=1}^3III_3^{(i)}.
\end{aligned}
\end{equation*}
By \eqref{X^ji}, we derive
\begin{equation*}
\begin{aligned}
III_1&=-252G^{(1)}_55+630G^{(2)}_55\\
&=\int_{-1}^1\left[630 \left(1-5 x^2\right) \left(1-x^2\right)^4-252 \left(1-11 x^2\right) \left(1-x^2\right)^4\right]G'(G'')^2\\
&=378\int_{-1}^1 \left(1-x^2\right)^5G'(G'')^2;
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
III_2&=-210G^{(1)}_43+630G^{(2)}_43+1680G^{(3)}_43= 0;
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
III_3^{(1)}&= 30G^{(1)}_31 -126G^{(2)}_31-840G^{(3)}_31+2520G^{(4)}_31=72\int_{-1}^1(25 x^4-48 x^2+19)(1-x^2)^2G^2 G^{(3)}\\
&:=72\int_{-1}^1 X_3^{(5)}G^2 G^{(3)};\\
III_3^{(2)}&=1680G^{(3)}_32-156G^{(1)}_32+504G^{(2)}_32=216\int_{-1}^1(15 x^4-26x^2+3)(1-x^2)^2GG'G''\\
&:=216\int_{-1}^1X_3^{(6)}(1-x^2)^2GG'G'';\\
III_3^{(3)}&=88G^{(1)}_33-252G^{(2)}_33 -560G^{(3)}_33=72\int_{-1}^1(15 x^4-26x^2+3)(1-x^2)^2(G')^3\\
&:=72\int_{-1}^1X_3^{(6)}(1-x^2)^2(G')^3.\\
\end{aligned}
\end{equation*}
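The vanishing of $III_2$ can be checked directly: since $X_4^{(1)}=10x(1-x^2)^3(11x^2-3)$, $X_4^{(2)}=2x(1-x^2)^3(25x^2-9)$ and $X_4^{(3)}=x(1-x^2)^3(3-5x^2)$, we have
\begin{equation*}
-210X_4^{(1)}+630X_4^{(2)}+1680X_4^{(3)}
=x(1-x^2)^3\left[-2100(11x^2-3)+1260(25x^2-9)+1680(3-5x^2)\right]=0.
\end{equation*}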
It follows from \eqref{G-3} that
\begin{equation*}
\begin{aligned}
III_3&:=\sum_{i=1}^3III_3^{(i)}=-72\int_{-1}^1\left(X_3^{(5)}\right)'G^2 G''+\int_{-1}^1 \left(216(1-x^2)^2X_3^{(6)}-144X_3^{(5)}\right)GG'G''\\
&+72\int_{-1}^1X_3^{(6)}(1-x^2)^2(G')^3\\
&=-72\int_{-1}^1\left(X_3^{(5)}\right)'G^2G''
-288\int_{-1}^1X_2G(G')^2+1260\int_{-1}^1(1-x^2)^4(G')^3,
\end{aligned}
\end{equation*}
where $X_2=x(1-x^2)(5x^4-16x^2+19)$.
Then we consider
\begin{equation*}
\begin{aligned}
III_3+I_6+I_7&=-144\int_{-1}^1X_2\left[G^2G''+2G(G')^2\right]
-5040\int_{-1}^1(1-x^2)^3G^2G'+1260\int_{-1}^1(1-x^2)^4(G')^3\\
&=-48\int_{-1}^1X_2(G^3)''-1680\int_{-1}^1(1-x^2)^3(G^3)'
+1260\int_{-1}^1(1-x^2)^4(G')^3.
\end{aligned}
\end{equation*}
Putting these results together, we conclude that
\begin{equation*}
\begin{aligned}
\sum_{i=1}^7I_i&=-a_1\int_{-1}^1(1-x^2)^6(G^3)^{(7)}
-a_2\int_{-1}^1x(1-x^2)^5(G^3)^{(6)}
-t\int_{-1}^1
X_3^{(1)}(G^3)^{(3)}\\
&-48\int_{-1}^1X_2(G^3)''-1680\int_{-1}^1(1-x^2)^3(G^3)'+7\int_{-1}^1(1-x^2)^6 G'(G^{(3)})^2+42\int_{-1}^1x(1-x^2)^5 (G'')^3\\
&+378\int_{-1}^1 \left(1-x^2\right)^5G'(G'')^2+1260\int_{-1}^1(1-x^2)^4(G')^3\\
&=7\int_{-1}^1(1-x^2)^6 G'(G^{(3)})^2+42\int_{-1}^1x(1-x^2)^5 (G'')^3+378\int_{-1}^1 \left(1-x^2\right)^5G'(G'')^2\\
&+1260\int_{-1}^1(1-x^2)^4(G')^3.\\
\end{aligned}
\end{equation*}
Note that
\begin{equation*}
\begin{aligned}
&\quad \int_{-1}^1[(1-x^2)G]^{(3)}(1-x^2)^5 (G'')^2=\int_{-1}^1(1-x^2)^6 (G'')^2G^{(3)}\\
&-6\int_{-1}^1x(1-x^2)^5 (G'')^3-6\int_{-1}^1(1-x^2)^5 G'(G'')^2\\
&=-2\int_{-1}^1x(1-x^2)^5 (G'')^3-6\int_{-1}^1(1-x^2)^5 G'(G'')^2
\end{aligned}
\end{equation*}
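Here we used
\begin{equation*}
\int_{-1}^1(1-x^2)^6 (G'')^2G^{(3)}=\frac13\int_{-1}^1(1-x^2)^6\left[(G'')^3\right]'
=4\int_{-1}^1x(1-x^2)^5 (G'')^3.
\end{equation*}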
Therefore,
\begin{equation*}
\begin{aligned}
&\quad\int_{-1}^1(1-x^2)^3 G^2 [(1-x^2)^3 G]^{(7)}=1260
\int_{-1}^1(1-x^2)^4(G')^3+252\int_{-1}^1(1-x^2)^5 G'(G'')^2\\
&+7\int_{-1}^1(1-x^2)^6G'(G^{(3)})^2 -21\int_{-1}^1[(1-x^2)G]^{(3)}(1-x^2)^5 (G'')^2.
\end{aligned}
\end{equation*}
\vskip4mm
\noindent\textbf{Acknowledgement.} This research is partially supported by NSF research grants DMS-1601885 and DMS-1901914.
\section{Introduction}
To explain the differences in political influence, researchers often refer to the \textit{financial resources} that an organization possesses. Even though there is little empirical research on the causal mechanism, the argument makes intuitive sense. Organizations with high resources are freer to choose their lobbying activities and can focus on those strategies that promise the most influence. An obvious example is expensive political advertising campaigns, which are only available to wealthy political actors. Similarly, \textit{financial resources} allow organizations to gather more and better information. This can open doors or even enable direct influence.
In any case, the implementation of most political strategies benefits from larger funds \hl{and hardly any strategy will suffer from greater \textit{financial resources}}. Nevertheless, research on \hl{the nexus between \textit{financial resources} and political influence} is characterized by its contradictory results. Several reasons can be put forward for this.
First, assessments of \textit{financial resources} for political purposes often rely on imprecise measurements. Primarily, this can be attributed to the fact that such information is rarely freely available. It is shared by the actors concerned, if at all, only under assurances of confidentiality. As a result, often other measures, such as the total resources of an organization, are used. In the case of an organization that uses all of its funds for political purposes, such an operationalization may hardly distort the relationship to influence. The situation is different, for example, in the case of a firm whose expenditures for political purposes may represent only a very small portion of its total budget. In such a case, the use of total funds introduces a lot of noise. Accordingly, in such circumstances, it may be difficult to establish a link between \textit{financial resources} for political purposes and the influence of an organization.
Second, a similar issue arises in measuring influence. \hl{The problem starts with the fact that there is no generally accepted definition of the concept. While plenty of contributions go without explicitly defining the concept, the ones that do tend to refer to concepts of power. The pluralist tradition typically proceeds from Dahl's} (\citeyear{Dahl1957}, pp. 202--203) simple but influential notion that ``A has power over B to the extent that he can get B to do something that B would not otherwise do". Proceeding from this conceptualisation, many studies focus on control over policy outputs \citep[e.g.,][]{Dur2007,Helboe2013,Stevens2020}. The elitist response to pluralism was that a focus on open contestation neglects the less visible dimensions of power. Works in this tradition maintain that influential actors might exploit power asymmetries to prevent some issues from appearing on the political agenda (\cite{Bachrach1962}, p. 948; \cite{Schattschneider1960}, p. 69). While Schlozman (\citeyear{Schlozman1986OrganizedDemocracy}) directed the focus to the bases to exercise influence, other scholars \citep[e.g.,][]{Finger2019,Lowery2013}
followed Lukes' (\citeyear{Lukes1974Power:Viw}, p. 28) \hl{focus on the three faces of power, emphasising also actors' ability to shape others' ``perceptions, cognitions, and preferences in such a way that they accept their role in the existing order of things".}
In addition to the lack of a universally accepted definition, influence necessarily requires a causal mechanism between intention and effect. Thus, in essence, the construct being explained already requires the researcher to capture multiple variables. This is further complicated by the fact that certain actors would prefer to keep their own influence veiled. It is not surprising, then, how different the approaches to measuring influence are and, consequently, how the results of the corresponding studies vary.
This study sets out to address these issues and provides new insights into some of these aspects. First, we shed light on how the different types of measures correspond. Whereas most previous research papers use one or at most two types of measurement, we compare the distribution of three different measurements of influence. Our unique data collection allows us to compare \textit{self-perceived influence}, \textit{reputational influence} and \textit{preference attainment}. The results show that the three variables are distributed very differently. There is much to suggest that the variables indeed measure different constructs.
Second, we compare the explanatory power of \textit{financial resources} with respect to the different types of measurement. In this context, it should be emphasized that our analysis explicitly focuses on \textit{financial resources} for political purposes. However, our data allow for a comparison with total resources and the number of employees commissioned to follow political events, two popular alternative operationalizations. \hl{To accommodate the different scales of measurement of influence we run three separate regressions; a standard OLS regression for \textit{self-perceived influence}, a zero-inflated count regression model for the reputational measure and a mixed model for the \textit{preference attainment} variable.} Again, the empirical analysis points to substantial differences between the types of measurements. \textit{Financial resources} for political purposes show a statistically significant association for all three measures. However, this is positive only in the case of the two perceptional measures, but not \textit{preference attainment}.
Third, we provide new evidence on the causal mechanism that allows \textit{financial resources} to have an impact on lobbying influence. \hl{We examine to what extent the effect of \textit{financial resources} on influence is translated through 13 different lobbying activities. Following the approach of} \citet{Imai2011UnpackingStudies}, we apply a counterfactual framework to identify these mediation effects. The analysis shows that the indirect effects of \textit{financial resources} on lobbying influence run through very specific activities. In line with the other findings, the results suggest that there are important differences between measures.
\section{Background}
The three questions relate to different, although related literatures. The ultimate goal is to reveal the causal mechanism of \textit{financial resources} on an organization's lobbying influence. However, the analysis of this causal mechanism requires a critical assessment of the relationship between \textit{financial resources} and lobbying influence. For this, in turn, it is central to consider the scientific evidence related to the measurement of lobbying influence.
\subsection{Measurement of influence}
Accordingly, we begin by addressing the state of research on the measurement of lobbying influence. The literature distinguishes between \textit{self-perceived influence}, peer-attributed or \textit{reputational influence}, and \textit{preference attainment}.
So-called \textit{self-perceived influence}, i.e., measuring policy influence by asking stakeholders to evaluate their own influence, finds widespread use in the literature \citep[e.g.,][]{Helboe2013,Lyons2020,McKay2012,Newmark2017,Stevens2020,Yackee2020}. Capturing \textit{reputational influence} is more tedious and therefore probably less common \citep[e.g.,][]{Finger2019,Stevens2020}. Typically, peers or experts are asked to assess the lobbying influence of stakeholders.
The two methods share two important characteristics. First, they are both characterized by the fact that the measured quantity is not influence as such. Rather, these measures describe perceptions of influence. Second, both, self-perceived and \textit{reputational influence}, presuppose a causal mechanism between intention and effect.
Both properties contrast with \textit{preference attainment}, which has come to be widely used too \citep[e.g.,][]{Mahoney2007a,Dur2008,Nelson2012LobbyingRulemaking,Kluver2013LobbyingUnion,Bernhagen2014}. Here, the size of the overlap between individual preferences and policy outcomes serves as a measure to separate the winners from the losers of a policy process. What is missing, however, is the requirement of a causal relationship. An actor can see its preferences fulfilled without having contributed anything to such an outcome. Rather than calling it interest group influence, some scholars therefore refer to it as lobbying success \citep[e.g.,][]{Bernhagen2014}.
Since none of the three variables directly measures lobbying influence, the question arises as to which advantages and disadvantages might otherwise be decisive in choosing the appropriate measure. Table \ref{t_measuresoverview} offers an overview of the three measures.\input{files_mediation.draft/old/t_measures.overview_20210819}
One strength of \textit{reputational influence} is that it is typically based on a variety of data sources, and thus individual biased judgments carry less weight. However, this type of data gathering is also associated with high costs. Moreover, the measurement runs the risk of overestimating types of influence that are particularly visible. Actors that engage in attention-generating lobbying activities, e.g., major advertising campaigns, may be considered more influential than organizations that take a less prominent role in public perception but are equally effective, e.g., by lobbying decision makers individually.
In the case of \textit{self-perceived influence}, the relative simplicity of the data gathering is an important advantage. In contrast to \textit{reputational influence}, the assessment is also based on first-hand information: regarding their own activities and immediate consequences thereof, actors have outstanding knowledge.
However, there is a risk that actors may deliberately provide inaccurate or strategic responses concerning their own influence. In an attempt to give a greater picture of their relevance and legitimacy, organizations may be tempted to provide inflated accounts of their influence \citep[e.g.,][]{Lyons2020}. Other organizations may feel tempted to understate their influence to avoid negative reactions \citep[e.g.,][]{Stevens2020}.
A similar challenge arises with \textit{preference attainment}, namely the truthful representation of preferences, a crucial element of this method \citep[e.g.,][]{Dur2008}. If an actor anticipates having to make concessions in later negotiations, there is an incentive to misrepresent true preferences. If the calculation of \textit{preference attainment} is based on such strategic rather than actual preferences, this can lead to distorted estimates \citep[e.g.,][]{Dur2007,Lowery2013}.
An important advantage of this method, however, is that it is usually based on publicly available data such as consultation submissions and legislative texts. In addition, the fact that it is not a matter of perceptions is often emphasized. Rather, the measure is based on the actual correspondence between stated preferences and the policy outcome.
This also means that the method is insensitive to the channels of influence. It does not matter whether influence is exerted in a back room or with a large-scale advertising campaign; only the visible results in terms of policy outcomes count.
However, the lobbying influence of other involved parties represents a problem. An actor whose lobbying efforts made no difference may nonetheless be considered successful because, for unrelated reasons, e.g., the lobbying of more influential actors, her policy goals were fully attained. At the same time, a seemingly unsuccessful actor may have been influential although his policy goals were not attained. This is the case when such influence was exercised to prevent an even less desirable outcome\footnote{\cite{Helboe2013} addresses this problem by linking influence more closely to specific activities (e.g., questions to parliamentary committees) instead of preference statements, and attaches greater relevance to the chronological sequence. However, this thwarts one of the strengths of \textit{preference attainment}, namely that success can be measured regardless of the channel of influence \citep{Dur2008}. In addition, such an approach requires considerable effort in the coding of activities, which may prevent its large-scale application in the context of \textit{preference attainment}.}.
Given the scarcity of empirical evidence comparing these measurement approaches, a more in-depth analysis is warranted. Indeed, with an explicit focus on the differences between measures, this study goes beyond previous contributions by comparing three measures of influence.
\subsection{Financial resources and influence}
A similar question presents itself with regard to the implications of differences between measures for their relationship with potential predictors. Building on the analysis from the first step, we can test to what extent potential differences in the distribution of influence between the measures also translate into differences in the explanatory power of a given predictor.
In the context of our study, we are particularly interested in the question whether the association between \textit{financial resources} and lobbying influence depends on the measurement choice. Two things stand out when looking at previous contributions to this topic. First, the results on the effect of resources on lobbying influence are mixed. \cite{McKay2012} finds organizations' economic resources to explain hardly any variance in either \textit{preference attainment} or \textit{self-perceived success} (not influence!). Similarly, multiple other studies find no noteworthy effect on their \textit{preference attainment} measure of influence \citep[e.g.,][]{Mahoney2007LobbyingUnion,Baumgartner2009a,Junk2019WhenCoalitions}. \cite{Kluver2013LobbyingUnion} goes beyond the individual organization as the unit of observation and analyzes the relevance of lobbying camps. Statistically significant associations with the \textit{preference attainment} measure are found for both the aggregate economic power at the lobbying camp level and the individual level. Interestingly, however, the association is positive only in the first case, while the probability of attaining preferences decreases with individual economic power. \cite{Stevens2020}, on the other hand, use a composite measure building on self-perceived and \textit{reputational influence} and find a positive association with economic resources.
Hence, it is possible that the contradictory findings result from the use of different measurement methods. However, even among studies using the same measurement methods, there seems to be substantial disagreement.
Second, and possibly partly responsible for these inconsistent results, there are substantial differences in the operationalization of \textit{financial resources}. With the sum of reported lobbying expenditures, \cite{Junk2019WhenCoalitions} arguably employs the most precise variable. Other operationalizations can be expected to introduce a higher degree of noise into the relationship between \textit{financial resources} for political purposes and lobbying influence. For example, \cite{McKay2012} uses total resources (revenue, budget or sales) while a number of studies use staff size. However, there are differences here as well, as this can be based on the total number of employees \citep[e.g.,][]{Kluver2013LobbyingUnion} or the employees entrusted with lobbying tasks \citep[e.g.,][]{Stevens2020,DeBruycker2019}.
These differences between studies make it difficult to compare the effect of \textit{financial resources} on different measures of influence across studies. Further complicating matters, the studies each also differ with respect to other aspects like country, policy venue or policy content. In contrast, the uniform setting of our study allows for a reasonably standardized comparison. Hence, in the second step of our empirical analysis, we compare the explanatory power of \textit{financial resources} with respect to the three different types of influence measurements.
\subsection{The causal mechanism}
The analyses of the previous two questions lay the foundations for the final task: examining the mechanism through which \textit{financial resources} affect lobbying influence. Generally, in line with how rational choice scholars conceptualize lobbying, the argument rests upon the assumption that resources can be converted into something that is of value to decision makers \citep{Stigler1971TheRegulation}. For example, many political decisions build on insights from affected actors, and developing the capacity to prepare and provide such expert information is usually easier with higher financial means \citep[e.g.,][]{Hall2006LobbyingSubsidy}.
In any case, resources need to be mobilized to unfold their potential impact in terms of influencing policy \citep[e.g.,][]{Duygan2021IntroducingRegimes}. For example, policy output is not going to be affected simply by possessing a lot of money. Some exceptions like structural power may independently exert influence by the law of anticipated reactions. However, we argue that typically, the effect of resources on policy influence is mediated by the activities the actor engages in. In other words, this wealth needs to be put to use, e.g., by gathering additional information and providing it to decision makers. Figure \ref{f_framework} illustrates the relationships considered in this study.
\begin{figure}[!ht]
\input{files_mediation.draft/f_mediationframework_20210802}
\caption{Mediation framework.}
\label{f_framework}
\end{figure}
Lobbying strategies describe the activities that interest groups engage in to influence policies according to their preferences. While this also refers to the decision whether to join forces or lobby alone \citep[e.g.,][]{Nelson2012LobbyingRulemaking}, the literature often focuses on insider and outsider strategies \citep[e.g.,][]{Mahoney2007LobbyingUnion,DeBruycker2019}. Surprisingly little research has been conducted on how these activities mediate the effect of resources on lobbying influence. \cite{Binderkrantz2019TheUK} uses group type as a proxy for different resource endowments and distinguishes between insider and outsider strategies. The empirical analyses reveal that in addition to a direct effect of group type on influence, there is also an indirect effect running through the choice of lobbying strategies.
In another notable exception, \cite{McKay2012} finds that a range of lobbying activities predict lobbying success but only a subset of them are associated with higher financial means, namely engaging in multiple venues as well as contributing and participating in PACs. It is worth noting that the analysis of these indirect effects builds on a rather weak association between resources and lobbying success. Also, while mediation analysis has experienced substantial development since, the empirical approach in \cite{McKay2012} builds on pairs of bivariate relationships only.
In the context of this study, it is important to note that scholars have repeatedly looked at the explanatory power of a business/non-business dichotomy \citep[e.g.,][]{Yackee2006ABureaucracy,Dur2019TheUnion}. Indeed, some contributions suggest an elevated responsiveness of policy makers to requests from groups representing business interests \citep[e.g.,][]{Gilens2014TestingCitizens,Varone2020}. Although resources have repeatedly been found not to vary systematically with group type \citep[e.g.,][]{Baroni2014DefiningGroups,Kluver2012BiasingPolicy-Making}, the type of interest represents an important control in the empirical analyses of our study.
To sum up, given the sparse evidence on the causal mechanism outlined above, a more fundamental elaboration of the indirect effects of \textit{financial resources} on lobbying influence seems warranted. As a final step, we analyze a range of potential causal mechanisms of the effect of \textit{financial resources} on each of the three measures of influence. To this end, we look at indirect effects running through the lobbying activities that organizations engage in.
\newpage
\section{Research design}
\subsection{Methods}\label{methods}
In this section, we lay out the empirical strategy to address the research questions previously described. This is followed by a short introduction to our case. Finally, we describe our data collection strategy and explain how we operationalized the constructs outlined above.
To address the question about the effect of \textit{financial resources} on influence, we run three separate regressions, one for each measure of influence. Because of the differences in measurement scales (see the subsection on \nameref{data_operationalisation}), we use three different regression models.
For \textit{self-perceived influence}, we apply a standard linear regression model. As in both other models, the main interest lies in organization-specific \textit{financial resources}. As a control, we include a variable indicating whether the actor represents business interests.
In the case of \textit{reputational influence}, we estimate a zero-inflated count data regression model, which combines a zero and a count model \citep{Zeileis2008RegressionR}. We use the same set of predictors in both components. The zero model, estimating the probability of being mentioned by no survey participant, uses logistic regression. The count model, which explains the variance among non-zero outcomes, uses Poisson regression.
In contrast to the perceptional measures, the observation unit for the \textit{preference attainment} measure is on the subject-item level. Hence, in addition to the predictors used in the other specifications, we include two random intercepts to account for potentially correlated errors among observations for the same organization as well as for the same items of a bill. The model is estimated using mixed effects logistic regression.
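For concreteness, the following R sketch shows how the three specifications could be fit. The data sets and variable names (\texttt{orgs}, \texttt{items}, \texttt{fin\_resources}, \texttt{business}, and so on) are illustrative assumptions rather than our actual code.
\begin{verbatim}
library(pscl)   # zeroinfl() for the zero-inflated count model
library(lme4)   # glmer() for the mixed effects logistic model

# (1) Self-perceived influence: standard linear regression
m_self <- lm(self_perceived ~ fin_resources + business, data = orgs)

# (2) Reputational influence: zero-inflated Poisson regression;
#     the count and the zero component use the same predictors
m_rep <- zeroinfl(mentions ~ fin_resources + business |
                    fin_resources + business,
                  data = orgs, dist = "poisson")

# (3) Preference attainment (subject-item level): mixed effects
#     logistic regression with random intercepts for organizations
#     and for items
m_pa <- glmer(attained ~ fin_resources + business +
                (1 | org_id) + (1 | item_id),
              data = items, family = binomial)
\end{verbatim}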
While the regression results may explain whether an organization's \textit{financial resources} affect its influence over energy policy, they provide little insight into how such a potential effect comes about. On the condition that the regression results identify such an effect, in the final step of our analysis, we shed light on the causal mechanism that leads \textit{financial resources} to affect policy influence.
To that end, we estimate to what extent the effect of \textit{financial resources} is mediated by various lobbying activities. The analysis follows the approach of \cite{Imai2010IdentificationEffects}. It makes use of a potential outcomes framework and builds on two separate models. First, the mediated variable, i.e., \textit{financial resources}, as well as potential confounders are used to model the mediator, i.e., the lobbying activity. Second, along with the same set of independent variables, the predicted values of the mediator are then used to fit the model of the actual outcome variable, in our case influence. From this second model, the average difference in the predicted outcomes for different values of the mediator, holding constant the value of the mediated variable, represents the average causal mediation effect (ACME).
We conduct this mediation analysis for each potential causal mechanism separately using the \textit{mediation} and \textit{maczic} packages \citep{Tingley2014Mediation:Analysis,Cheng2018MediationData}. Under the assumption that the lobbying activities are not causally related, the ACMEs can be consistently estimated without explicitly including other mediators \citep{Imai2013IdentificationExperiments}. This does not hold for the average direct effect (ADE), which tends to be overestimated if not all indirect effects are considered simultaneously. However, the goal of the analysis is to determine the mediated effect.
Finally, we estimate the confidence intervals using the default quasi-Bayesian Monte Carlo method, as this was the only option available for all three types of regressions. However, the results were robust to the use of non-parametric bootstraps for the perceptional measures, where this method was available (see Table \ref{app_t_mediation.boot} and Figure \ref{app_f_coefplot.boot} in \ref{app_bootstrap}).
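To make the two-model procedure and the choice of confidence intervals concrete, the following R sketch runs a single mediation analysis for one hypothetical mediator (participation in parliamentary hearings); the variable names are, as above, illustrative assumptions rather than our actual code.
\begin{verbatim}
library(mediation)

# Step 1: mediator model (binary lobbying activity regressed on the
# mediated variable and confounders)
m_med <- glm(hearing ~ fin_resources + business,
             data = orgs, family = binomial)

# Step 2: outcome model (influence regressed on mediator, mediated
# variable and confounders); self-perceived influence shown here
m_out <- lm(self_perceived ~ hearing + fin_resources + business,
            data = orgs)

# ACME via the default quasi-Bayesian Monte Carlo method;
# boot = TRUE switches to the non-parametric bootstrap used in the
# robustness checks
med <- mediate(m_med, m_out, treat = "fin_resources",
               mediator = "hearing", sims = 1000, boot = FALSE)
summary(med)  # ACME, ADE, total effect, proportion mediated
\end{verbatim}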
An important advantage of this form of mediation analysis is that there are no requirements regarding the functional form of the models involved.\footnote{While this is true in theory, the technical implementation required minor adjustments in the case of the mixed model for the \textit{preference attainment} measure. More specifically, the outcome model features a random intercept only for organizations. This stands in contrast to the specification used in the main regression, which includes an additional random intercept for items. To ensure that this does not fundamentally alter the relationship between \textit{financial resources} and the \textit{preference attainment} variable, model 1 of Table \ref{app_t_reg.compare.pa} in \ref{app_regression} features the regression results for this modified specification. While the estimates vary slightly, the directions remain the same and both predictor variables retain statistical significance.}
This is in contrast to the product and difference methods, which are commonly used to estimate indirect effects in linear structural equation models (LSEM) \citep{Baron1986TheConsiderations}. In the case of logistic regression, as repeatedly used in this study, neither method estimates mediation effects consistently \citep{Imai2010AAnalysis}. Moreover, the underlying identification strategy omits necessary assumptions required to establish causal mechanisms \citep{Glynn2012TheEffects}.
Indeed, given our approach’s reliance on a counterfactual outcome that is never actually observed, the identification of ACMEs requires the strong assumption of sequential ignorability. Its first part, the exogeneity of the mediated variable, may be considered standard. The second aspect is more restrictive, particularly in non-experimental settings like this study. Conditioning on the full set of relevant confounders and, importantly, the mediated variable, the observed mediator value is assumed to be ignorable, implying identifiability of the ACME \citep{Imai2011UnpackingStudies}. In the context of this study, this means that all variables confounding the relationship between the lobbying activity and the influence measures must be conditioned on. This includes \textit{financial resources}, reflecting the sequentiality element that sets apart the causal mediation effect (sequential ignorability assumption) from a causal effect of the mediator (conventional exogeneity assumptions).
Finally, the assumption also prohibits any influence of the mediated variable, \textit{financial resources} in our case, on the other confounders, no matter whether they are observed or not. This also implies that there must not be any causal relationship between different mediators, as this would also represent post-treatment confounding \citep{Imai2013IdentificationExperiments}.
Provided the assumption is satisfied, the ACME is identifiable even though not all potential outcomes can be observed. Unfortunately, the assumption is not testable. However, at least for some violations, sensitivity analyses can indicate to what extent the assumption needs to be violated for the mediated effects to lose statistical significance \citep{Imai2010AAnalysis}.
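As a sketch of such a sensitivity analysis, the \texttt{medsens} function of the \textit{mediation} package varies the correlation $\rho$ between the error terms of the mediator and outcome models until the ACME would vanish. It supports selected model combinations only, so the example below uses linear models for both stages; the variable names remain illustrative assumptions.
\begin{verbatim}
# Sensitivity of the ACME to violations of sequential ignorability
m_med_lin <- lm(hearing ~ fin_resources + business, data = orgs)
m_out_lin <- lm(self_perceived ~ hearing + fin_resources + business,
                data = orgs)
med_lin <- mediate(m_med_lin, m_out_lin, treat = "fin_resources",
                   mediator = "hearing")
sens <- medsens(med_lin, rho.by = 0.1)
summary(sens)  # reports the rho at which the ACME would equal zero
plot(sens)
\end{verbatim}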
\subsection{Data and operationalization}\label{data_operationalisation}
Switzerland has recently experienced important energy policy processes with distinct outcomes. This includes a federal strategy to realign its national energy policy. After several years in parliament, the so-called ‘Energiestrategie 2050’ was accepted by the electorate in 2017. Shortly after, a major revision of the regulation on carbon emissions (the CO$_2$ law as part of ‘Klimapolitik nach 2020’) failed to gain a legislative majority in its first attempt. A revised version was then passed but eventually rejected in a public referendum in 2021.
While not unique, Switzerland's political system is interesting in that the legislative process is preceded by a highly formalized consultation submission process. Stakeholders are invited to submit their views on policy proposals. Due to this institution, Swiss policy-making is characterized by a more ``direct incorporation of a wide variety of parties and organizations'' \citep[p.~750]{Weiler2015} than other political systems. Given that the consultation process is open to all stakeholders and implemented transparently (e.g., online publication of stakeholders' submissions), there is a certain level playing field in terms of stakeholders' \textit{access} to the political decision-making process. This makes it all the more interesting to study whether certain political resources or strategies make some stakeholders more influential than others.
Our empirical analysis builds on two separate data sources. Publicly available consultation submissions were used to determine the \textit{preference attainment} measures. The remaining variables, including the perceptional measures of influence, stem from a survey we conducted with Swiss energy stakeholders in 2019.
The \textit{preference attainment} measures are based on subject-item data from the preferences indicated in consultation submissions on the two policies mentioned above, ‘Energiestrategie 2050’ and ‘Klimapolitik nach 2020’. These bills were considered suitable for the purposes of this study for the following reasons. First, they were subject to intensive political and public debate and both were ultimately put to an optional referendum by the Swiss People's Party (SVP). Not only did this spotlight mobilize a larger set of actors; the attention also required public position taking by organizations that under different circumstances might prefer to hide their preferences and lobby decision makers more covertly. Similarly, the contested nature of these bills may be more informative with regard to which organizations truly are influential: to attain one side's preferences, decision makers had to let down other major stakeholders.
Second, the two policies cover the full spectrum of energy policy topics, such as the nuclear phase-out schedule, car emission limits, subsidies for renewables, transparency rules for power supply firms, ratification of the Paris Agreement, emission reduction targets, the emission trading scheme and fossil heating bans. With regard to the comparison with other measures of influence, this quality of our sample is important. A narrower set of issue areas may be less representative for measuring actor-specific influence on energy policy.
Finally, the multiple choice format of these two consultations makes it possible to collect data on preferences for a large sample in a standardized manner. For other, smaller consultations, participants are invited to share their views in a more general text form. This format entails a lot more ambiguity and requires the coder to interpret the answer, leading to difficulties in extracting preferences and comparing policy positions. The lack of clearly spelled out questions further hampers comparability.
To construct the measure, all item-specific choices for each participant were first registered. Next, these preferences were matched with the corresponding outcome from the final version of the bill that the parliament agreed upon. This resulted in a binary measure indicating whether the preferred outcome was realized or not. The measure was collected for all item-organization observations. In case the parliamentary outcome was unclear, the item was dropped for all organizations. If an organization did not provide an answer to a specific question or answered ambiguously, only the corresponding subject-item observation was dropped.
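A minimal R sketch of this coding step, assuming a long-format table \texttt{prefs} with one row per organization-item choice and a table \texttt{outcomes} with the parliamentary outcome per item (both names are hypothetical):
\begin{verbatim}
# Match stated preferences with the parliamentary outcomes
pa <- merge(prefs, outcomes, by = "item_id")

# Drop items with an unclear parliamentary outcome
# (affects all organizations)
pa <- pa[!is.na(pa$outcome), ]

# Drop organization-item observations with missing or
# ambiguous answers
pa <- pa[!is.na(pa$preference), ]

# Binary measure: 1 if the stated preference was realized
pa$attained <- as.integer(pa$preference == pa$outcome)
\end{verbatim}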
This approach resulted in 44 items that were considered for the \textit{preference attainment} measure\footnote{The full list of items is made up of 26 items from the ‘Energiestrategie 2050’ consultation and 18 from the ‘Klimapolitik nach 2020’ consultation. For a full list of items, see Table \ref{app_t_consultation.item.es2050} and Table \ref{app_t_consultation.item.klima} in \ref{app_consultation}.}. The survey data builds on a larger sample. In addition to the participants in the two consultations mentioned above, submissions from another two major Swiss energy policy consultations were considered (the ordinance-level implementation of the Energiestrategie 2050 and the Stromversorgungsgesetz, which regulates energy supply). Taking into account that some organizations participated in more than one consultation, 740 unique participants were identified. While individuals were excluded, there were no further formal or informal conditions to qualify as a stakeholder. However, 60 organizations were dropped because they had ceased to exist or because neither online nor postal contact details could be identified.
To inform them about the project and the process, potential survey participants were sent an invitation letter with personalized details whenever possible. Based on the insights from a trial with 38 organizations, the survey underwent minor adjustments to reduce drop-out rates.
The survey was run on the Qualtrics survey platform. Given Switzerland's language regions, respondents could choose between a German, French or an English version of the survey. Organizations that had not submitted a response within the two-week deadline were sent a reminder letter (main wave) or called by phone (trial). Enclosed with the reminder letter was a paper version of the survey and a pre-franked envelope. Respondents were asked to either use the online link or return the hard copy within two weeks.
Overall, data collection took 20 weeks.\footnote{Given its nonrecurring nature, this type of data gathering process is susceptible to biases from topical contexts. While the Fridays for Future movement did gain traction over this period, the authors are not aware of specific energy policy related events (e.g., a referendum on energy bills) that might have caused distortions from temporal anomalies.} Out of 680 organizations invited to participate, 364 submitted a response, resulting in a response rate of 53.5\%. After removing insufficiently completed submissions, the remaining data contains 312 observations (45.8\%) on a total of 42 survey items.
To determine \textit{self-perceived influence}, respondents were asked to assess the influence of their organization on Swiss energy policy. Respondents had to choose from an ordered six-point scale ranging from ``No influence at all'' to ``Very strong influence''. In a separate item, respondents were asked to list the organizations that they considered to be influential in Swiss energy policy. For each unique answer provided, the mentions across all participants were added up, not counting self-mentions, which yields the \textit{reputational influence} measure.
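The counting behind the \textit{reputational influence} measure can be sketched as follows in R, assuming a table \texttt{mentions} with one row per respondent-named-organization pair (names again illustrative):
\begin{verbatim}
# Discard self-mentions
mentions <- mentions[mentions$respondent != mentions$named_org, ]

# Count mentions per named organization
rep_counts <- aggregate(list(n_mentions = mentions$named_org),
                        by = list(org_id = mentions$named_org),
                        FUN = length)

# Organizations never mentioned receive a count of zero
orgs <- merge(orgs, rep_counts, by = "org_id", all.x = TRUE)
orgs$n_mentions[is.na(orgs$n_mentions)] <- 0
\end{verbatim}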
As mentioned above, the remaining variables were also based on survey responses. \textit{Financial resources} were measured by an organization's annual budget devoted to political purposes. In the survey, respondents were asked to choose out of seven answer options ranging from less than CHF 5'000 to more than CHF 1'000'000. Respondents were not forced to respond to this item. However, when trying to skip the item, they were reminded of the importance of this information. Out of the 312 organizations in our sample, 269 (86\%) provided the information. Relative to the entire sample (680 organizations), this corresponds to approximately 40\%. The binary variable measuring whether an organization represented \textit{Business Interests} was hand-coded. The measure distinguishes all organizations that were considered to be \textit{Energy Businesses}, \textit{Other Businesses} or \textit{Business Associations} from the remaining actors.
Finally, respondents were asked about the lobbying activities their organizations had pursued in the past ten years. A list of 13 activities with an additional text entry option was provided and respondents were free to tick as many as applied for them, resulting in one binary subitem for every activity (see Table \ref{t_activities} for an overview of lobbying activities presented).
\newpage
\input{files_mediation.draft/t_activities_20210729}
\newpage
\section{Results}
\subsection{Descriptive results}
Table \ref{t_descriptives} presents descriptive statistics for all variables. One important insight is that the reduction in sample size results from missing data on \textit{financial resources} and \textit{self-perceived influence}, and of course from the smaller sample for the \textit{preference attainment} measure. Furthermore, the table reveals noteworthy patterns in the dependent variables. Most strikingly, in the majority of subject-item observations, preferences overlapped with the policy outcome. Although not directly comparable, this contrasts with the fact that more than half of all organizations were not named by any respondent as influential, highlighting the zero-inflated nature of the \textit{reputational influence} measure. Finally, for the \textit{self-perceived influence} measure, the mean is slightly lower than the median, suggesting that survey participants were more reluctant to label themselves as very influential than as not influential at all.
For informational purposes, the table also includes an entry for the number of \textit{preference attainment} items per organization, which was not used as a variable in any of the specifications\footnote{Rather than presenting numbers on the whole population of consultation participants, this entry refers only to the subsample that also submitted a survey response.}. The number of observations ranges from 1 to 41 items per organization, underlining the importance of a mixed model approach rather than using a ratio variable of preferences attained.
\begin{table}[!ht] \centering
{\small{
\input{files_mediation.draft/t_descriptives_20210729}
}}
\caption{Descriptive statistics.}
\label{t_descriptives}
\end{table}
To begin, we look at the relationship between the dependent variables. The scatterplots in Figure \ref{f_influence.measures} reveal the degree of association between the three variables. The different units of observation (subject level for the perceptional measures, subject-item level for \textit{preference attainment}) make a direct comparison difficult. To account for this, an additional variable reflecting the ratio of attained preferences was calculated, which represents \textit{preference attainment} on the subject level. Given the unbalanced nature of the data, the denominator varies according to the number of items on which the organization indicated its preferences.
The smoothing line in (a) does hint at some level of agreement between \textit{self-perceived influence} and \textit{preference attainment}. To a lesser extent, this also holds for the relationship between reputational and \textit{self-perceived influence} (see (c)). However, this is not the case in the scatterplot for \textit{preference attainment} and \textit{reputational influence} in (b). Higher values for one measure are not associated with higher values in the other.
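A short R sketch of the aggregation and plotting just described, under the same illustrative naming assumptions as in the earlier sketches:
\begin{verbatim}
library(ggplot2)

# Share of attained preferences per organization; the denominator
# is the organization-specific number of answered items
pa_ratio <- aggregate(attained ~ org_id, data = pa, FUN = mean)
names(pa_ratio)[2] <- "pa_ratio"
plotdat <- merge(orgs, pa_ratio, by = "org_id")

# Jittered scatterplot with a LOESS smoother, mirroring panel (a)
ggplot(plotdat, aes(x = self_perceived, y = pa_ratio)) +
  geom_jitter(width = 0.1, height = 0.02, alpha = 0.5) +
  geom_smooth(method = "loess")
\end{verbatim}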
\begin{figure}
\centering
\begin{minipage}{0.33\textwidth}
\centering
\resizebox{\textwidth}{!}{
\input{files_mediation.draft/f_self.pa_lab_20210816}
}
\subcaption{}\label{fig:1a}
\end{minipage}%
\begin{minipage}{0.33\textwidth}
\centering
\resizebox{\textwidth}{!}{
\input{files_mediation.draft/f_rep.pa_lab_20210816}
}
\subcaption{}\label{fig:1b}
\end{minipage}%
\begin{minipage}{0.33\textwidth}
\centering
\resizebox{\textwidth}{!}{
\input{files_mediation.draft/f_rep.self_lab_20210816}
}
\subcaption{}\label{fig:1c}
\end{minipage}
\caption{Relationships between influence measures. To make the graphs easier to interpret, minimal noise was added to avoid overlapping observations. Also, the observations for \textit{reputational influence} are plotted only for values up to 50, which leaves 6 observations (values between 57 and 118) out of graphs (b) and (c). However, the smoothing lines (fitted using LOESS) and their confidence intervals are based on the full sample.}
\label{f_influence.measures}
\end{figure}
\subsection{Regression results}
The descriptive results suggest that the three dependent variables do measure somewhat distinct concepts. This raises the question to what extent this is also reflected in their respective relationships with \textit{financial resources}.
Indeed, the regressions in Table \ref{t_reg.compare} reveal differing patterns. In the case of \textit{self-perceived influence}, \textit{financial resources} is found to be a positive and statistically highly significant predictor. More \textit{financial resources} are associated with organizations considering themselves more influential.
The results for the \textit{reputational influence} model are similar. While the top three coefficients in the table refer to the count model, the bottom three feature the results for the zero model. While both are statistically significant, the two coefficients for \textit{financial resources} seem to point in opposite directions. This is due to the fact that the zero model predicts zeros, i.e., non-occurrence. Hence, more \textit{financial resources} are associated with a decrease in the probability for an organization to be mentioned by none of the survey participants (zero model). Similarly, the expected number of mentions as an influential actor increases with more \textit{financial resources} (count model).
In contrast, the mixed model for the \textit{preference attainment} measure suggests a negative relationship with \textit{financial resources}. Higher \textit{financial resources} are associated with a lower probability of preferences being attained. This negative association is statistically different from zero too, although it does not reach the same level of statistical significance as the estimates for the two perceptional measures of influence.
Interestingly, organizations representing business interests consider themselves less influential and are less likely to attain their preferences. Both estimates are statistically different from zero. Also, the probability of being mentioned as influential by at least one survey participant is lower for such interests. However, this relationship does not reach any common level of statistical significance.
Although there is substantial overlap, the samples on which these three models build are not identical. For each measure, all available observations were included. However, the results of all three models were largely the same for a subsample of 182 organizations for which there was data on all three measures (see models 1-3 in Table \ref{app_t_reg.compare.all} in \ref{app_regression}).
As outlined above, the number of items on which the \textit{preference attainment} measure builds varies between participants. Organizations with very few indicated preferences may introduce a potential bias for the \textit{preference attainment} variable. However, the results, again, were robust to the exclusion of 88 subject-item observations from the 24 organizations for which only 10 or fewer observations were available (see model 2 in Table \ref{app_t_reg.compare.pa} in \ref{app_regression}).
Finally, it is important to point out that the effect sizes between the three models are hardly comparable. While the predictor variables are the same, the scale of measurement varies between the three models, rendering a comparison of the estimates difficult. Given the different regression models, the same caveat also applies to the question to what extent the specification can explain the variances in the dependent variables. Hence, we also refrain from comparing goodness of fit statistics between the models.
\begin{table}[!h]\centering
\resizebox{.9\textwidth}{!}{
\input{files_mediation.draft/t_regression.comparison_20210802}
}
\caption{Comparison of dependent variables.}
\label{t_reg.compare}
\end{table}
\subsection{Mediation results}
Having outlined the implications of the choice of dependent variable, we can now move from our methodological to our final, more substantive contribution. As mentioned above, in most cases, simply holding large \textit{financial resources} is not going to result in influence over policy automatically. Hence, given the effects of \textit{financial resources} on influence identified in the previous analysis, we look at pathways through which such resources may manifest their influence potential. The analysis tests to what extent 13 different lobbying activities mediate the effect of \textit{financial resources} on the influence variables.
The average causal mediation effects (ACMEs) for each activity are illustrated in Figure \ref{f_coefplot.bayes}. While the figure includes the corresponding confidence intervals, a more comprehensive overview of the mediation results can be found in Table \ref{t_mediation.bayes}.
Not all ACMEs reach statistical significance and, more interestingly, there are differences between influence measures. The drafting of political opinion papers, physical access to the Swiss parliament and participation in parliamentary hearings are statistically significant mediators for both \textit{self-perceived} and \textit{reputational influence}. Contributions to media outlets are found to mediate the effect of \textit{financial resources} only on \textit{reputational} but not \textit{self-perceived influence}, while the opposite is true for participation in official working groups to draft new legislation.
This line of analysis is not limited to positive predictors. This means that the causal mechanism that leads to a negative effect of \textit{financial resources} on preferences attained can be revealed too. A significant mediation effect in that case would suggest that more \textit{financial resources} lead organizations to choose activities which, perhaps counter-intuitively, have a negative effect on the probability of preferences being attained. However, the mediation analysis on the \textit{preference attainment} model does not provide any evidence for this to be the case. None of the ACMEs are statistically significantly different from zero.
Returning to the perceptional measures, the results suggest partial rather than full mediation for all statistically significant activities and both measures. In terms of effect size, relative comparisons between activities seem best suited to clarify the effects, especially given the ordinal nature of the variable used to measure \textit{financial resources}. For \textit{reputational influence}, the coefficients for hearings in parliamentary commissions, having drafted political opinion or position papers and having provided contributions to mass media are all comparable in size. Physical access to parliament, however, even after accounting for differences in the ATEs (average treatment effects, i.e., the aggregate of direct and mediated effects), mediates a substantially larger proportion of the effect compared to the other activities.
For \textit{self-perceived influence}, having participated in hearings in parliamentary commissions features as the most important mediator of \textit{financial resources}, followed by participation in official working or expert groups to draft legislation. Physical access to parliament and having drafted political opinion or position papers feature a substantially smaller mediation effect.
The different scales of measurement prohibit a direct comparison of effect sizes across the different measures of influence. However, some relative inferences can be drawn nonetheless. While the absolute values are uninformative, the ratio between the coefficients provides further evidence on the importance of physical access to parliament for \textit{reputational influence}. In contrast, for \textit{self-perceived influence}, the relative comparison highlights the relevance of hearings in parliamentary commissions as a mediator.
\begin{figure}
\resizebox{.8\textwidth}{!}{
\input{files_mediation.draft/f_coefplot.bayes_20210722}
}
\caption{Comparison of ACME estimates with quasi-Bayesian 95\% confidence intervals.} \label{f_coefplot.bayes}
\end{figure}
\begin{landscape}
\begin{table}[!h]\centering
{\input{files_mediation.draft/t_mediation.bayes_20210722}
}
\caption{Mediation analysis with quasi-Bayesian 95\% confidence intervals.}
\label{t_mediation.bayes}
\end{table}
\end{landscape}
\subsection{Robustness checks}
In the appendix we present the results for several robustness checks. The three models in Table \ref{app_t_reg.compare.all} use the same specifications as the main regressions. However, the sample is limited to the 182 observations for which there are values for all three measures. The results as well as the statistical significance remain largely unchanged.
Given the numerous contributions in the literature that employ alternative measures of \textit{financial resources}, in Table \ref{app_t_reg.compare.polstaff} we present the results for the three dependent variables using the number of full-time equivalent employees commissioned to follow political events rather than \textit{financial resources} for political purposes. The sample sizes are slightly larger. While the results suggest a similar relationship between political staff and the perceptional measures as in the main analysis, there is no statistically significant association with \textit{preference attainment}. However, the explanatory power of \textit{Business Interests} increases when political staff is used, both for the \textit{preference attainment} model and for \textit{self-perceived influence}.
Table \ref{app_t_reg.compare.budget} features the results for \textit{overall budget} instead of \textit{financial resources} for political purposes. The relationship between budget and influence ceases to reach statistical significance for the \textit{self-perceived influence} measure and \textit{preference attainment}. At the same time, the estimates in the \textit{reputational} model increase substantially.
Model 1 of Table \ref{app_t_reg.compare.pa} features the estimates for the \textit{preference attainment} regression that uses only one random intercept, for the grouping variable organization. We find that the estimate for \textit{financial resources} remains statistically significant, although at a lower level. In Model 2 we return to the normal specification but only include organizations for which there are more than ten subject-item observations. Results are very similar to the ones in the main regression, including the statistical significance.
In Table \ref{app_t_reg.compare.pa_indiv} we present the results for the subsamples from the two consultations. Not only are the samples smaller than in the main analysis, but the sample size also varies substantially between the two models. While the estimates for \textit{financial resources} change only slightly, statistical significance decreases in both cases. Interestingly, in the case of the ‘Energiestrategie 2050’, the coefficient for \textit{Business Interests} doubles in size compared to the main regression. It also gains statistical significance, while the opposite is true for the consultation on ‘Klimapolitik nach 2020’.
Finally, we also conducted the same mediation analysis using bootstrapped rather than quasi-Bayesian confidence intervals in Figure \ref{app_f_coefplot.boot} and Table \ref{app_t_mediation.boot}. For \textit{reputational influence}, the same activities that were identified as mediators in the main analysis were also found to have statistically significant indirect effects with this alternative confidence interval estimation procedure. In the case of \textit{self-perceived influence}, in addition to the activities from the main analysis, exchange with politicians and media contributions were also found to have estimates statistically significantly different from zero.
\section{Discussion}
Two points merit further discussion. First, the distinction between inside and outside lobbying strategies may help structure which activities mediate the effect of \textit{financial resources} on which measure of influence. Second, it remains an open question whether we are actually analyzing influence, or whether perceptions of influence are something distinct, though not necessarily less interesting or worthwhile.
\newpage
\section{Conclusion}
Understanding how special interests shape policy has long been of interest to many political scientists and economists. However, various empirical findings have not been in agreement on even the most (seemingly) straightforward question concerning lobbying influence: whether better-endowed stakeholders influence policies more.
Our paper revisited this relationship between \textit{financial resources} and lobbying influence in a way that helps us understand potential sources of these discrepancies.
Beyond the specific case, our findings matter for the broader debate on whose interests shape public policy: conclusions about the role of money in politics may hinge on how influence is measured. This cautions against generalizing from single-measure studies and underlines the value of explicitly comparing measurement approaches.
We approached this problem from three angles. First, carefully studying varying constructs of lobbying influence employed in the literature, we built our analysis on a comparison of three types of influence measures that appear frequently in existing studies: the self-perceived, reputation-based, and \textit{preference attainment} measures. What is important here is that we constructed the three measures based on the same (or nested) set of actors by combining data from an original survey with 312 Swiss energy policy stakeholders with document data from policy consultation processes. Existing empirical studies employ different measures of influence in different policy contexts, but few scholars seem to be concerned that the qualitatively different conclusions might simply be artifacts of distinct influence measures (and the selection of other covariates).
Second, as our survey directly probed the size of budget allocated specifically to political purposes, it allowed us to overcome common challenges associated with the operationalization of relevant \textit{financial resources}. Finally, we addressed the issue of potential confounding between actors’ \textit{financial resources} and various lobbying activities. In other words, we tested whether \textit{financial resources} can be at work by enabling various lobbying activities, instead of treating their effects independently as the literature has suggested.
Our data showed that \textit{financial resources} for political purposes are positively associated with the two perceptional measures of influence but negatively with \textit{preference attainment}, and that the indirect effects of resources on the perceptional measures run through a small set of specific lobbying activities.
Naturally, this study is subject to limitations. First, reverse causality cannot be ruled out: the sequential logic usually assumed, from resources via activities to influence, could in parts run the other way around. Second, the three influence measures build on different data structures, which limits the direct comparability of results. Third, \textit{preference attainment} remains vulnerable to the attribution problem discussed above: an actor whose lobbying efforts made no difference may nonetheless be considered successful because, for unrelated reasons such as the lobbying of more influential actors, her policy goals were fully attained, while a seemingly unsuccessful actor may have been influential by preventing an even less desirable outcome. Fourth, anticipated reactions pose a problem for all three measures; exceptions like structural power may exert influence by the law of anticipated reactions without being mediated by observable activities. Fifth, measurement error in the dependent variables inflates the standard errors of the predictors, reducing the statistical power to detect predictor-outcome relationships. Sixth, the sequential ignorability assumption is particularly demanding in our setting because it is not always clear whether the covariates truly precede the treatment. Relatedly, the direct effect is likely to be overestimated or mislabeled, as it captures all mechanisms other than the tested mediation rather than a direct effect in a strict sense. Future research could complement our approach with item-specific perceptional measures of influence.
\newpage
\section*{Theoretical Framework}
\subsection*{Existing Approaches}
Differences in influence between interest groups have been attributed to a whole range of explanatory approaches. While not necessarily mutually exclusive, the following approaches do focus on distinct aspects of the influence production process.
A very prominent narrative to explain variation in interest group influence is group type. Most importantly, scholars have repeatedly looked at the explanatory power of a business/non-business dichotomy \citep[e.g.,][]{Yackee2006ABureaucracy,Dur2019TheUnion}. Other group types that have been tested include cause groups and NGOs \citep[e.g.,][]{Dur2007InclusionPolicy,Bunea2013IssuesPolicy}. Given the difficulty of capturing detailed data on relevant variables, group type often serves as a proxy for resource endowment\citep[e.g.,][]{Mahoney2007LobbyingUnion,McKay2012}.\footnote{The results from multiple studies have put the legitimacy of such an approach into question, however. Resources have repeatedly been found not to vary systematically with group type \citep[e.g.,][]{Baroni2014DefiningGroups,Kluver2012BiasingPolicy-Making}.} Nonetheless, some contributions highlight a potential independent effect of such group types, most prominently an elevated responsiveness of policy makers to requests from groups representing business interests \citep[e.g.,][]{Gilens2014TestingCitizens,Varone2020}.
Moving from group types to interest groups' resources, one could argue that this responsiveness can more accurately be explained by a business' structural power, e.g., a firm threatening to relocate or dis-invest \citep[e.g.,][]{Binderkrantz2014AConsultations,Dur2015InterestLose}. The effect of resource endowments can manifest itself in more straightforward fashion too, though. Generally, in line with how rational choice scholars conceptualise lobbying, the argument rests upon the assumption that resources can be converted to something that is of value to decision makers \citep{Stigler1971TheRegulation}. E.g., many political decisions build on insights from actors affected. Developing the capacity to prepare and provide such expert information usually is easier with higher financial means \citep[e.g.,][]{Hall2006LobbyingSubsidy}. While this line of argument building on resource exchange theory is important, the rationale to consider resources is not limited to it. E.g., higher economic resources also permit an interest group to follow political events more closely and react to it in timely manner.
Putting aside the question on what actors possess, the following explanatory approach instead focuses on what they do. Lobbying strategies describe the activities that interest groups engage in to influence policies according to their preferences. While this also refers to the decision whether to join forces or lobby alone \citep[e.g.,][]{Nelson2012LobbyingRulemaking}, the literature often focuses on insider and outsider strategies \citep[e.g.,][]{Mahoney2007LobbyingUnion,DeBruycker2019}. The evaluation of a strategy's suitability seems a lot more context dependent than in the case of resources, where more may not necessarily be better but hardly ever worse. The adequate choice of strategy may depend on the goal (influencing agenda-setting vs. influencing policy decisions \citep[e.g.,][]{Binderkrantz2019TheUK}), the decision maker to be lobbied (MPs vs. representatives of the executive branch \citep[e.g.,][]{Varone2020}) and, most importantly, the interest groups' own strengths, e.g., its resources \citep[e.g.,][]{Binderkrantz2005a}.
\subsection*{Adding Interdependent Effects}
Building on these three explanatory approaches, our framework incorporates the interdependent effects between them. Some selected moderation as well as mediation effects have been addressed in the past \citep[e.g.,][]{McKay2012, Binderkrantz2019TheUK}. Though a more fundamental elaboration allowing for simultaneous consideration of such effects seems indicated. Figure \ref{F_Framework} illustrates the relationships considered in this study. On the one hand, the framework addresses the mediating effect of strategies on the relationship between an interest group's resources and its policy influence. On the other hand, it accounts for group type dependent effects of resources and strategies.
\begin{figure}[!ht]
\input{files_mediation.draft/f_theoreticalframework_20210430}
\caption{Theoretical framework}
\label{F_Framework}
\end{figure}
As already alluded to above, the appropriate use of strategies is highly dependent on an actor's resources. Similarly, in most cases, the endowment with resources, by itself, is an insufficient factor in explaining differences in influence. Instead, resources need to be mobilised to unfold their potential impact in terms of influencing policy \citep[e.g.,][]{Duygan2021IntroducingRegimes}. E.g., policy output is not going to be affected simply by possessing a lot of policy expertise. This knowledge needs to be put to use, e.g. by providing additional information to decision makers. Some exceptions like structural power may independently exert influence by the law of anticipated reactions. Typically, however, the effect of resources on influence is mediated by the choice of strategies.
More specifically, we argue that the effect of an interest group's resource endowment on its policy influence is mediated by the activities it engages in. Put differently, actors who engage in those activities that complement their resources are more influential than actors
that fail to mobilize the relevant resources they possess.
In addition, we are interested in the moderation effect of group types with respect to both, resources and strategies. As mentioned above, empirical researchers have often used group type as a proxy for other explanatory variables that are more difficult to gather data on \citep[e.g.,][]{Mahoney2007LobbyingUnion,McKay2012}. Carried to the extreme, it can be argued that a typology of groups will cluster actors that share similar characteristics, e.g. resource endowments, but does not carry an influence-relevant quality itself.
Hence, considering moderation effects of group types for resources and strategies serves two purposes. With respect to policy implications, potentially heterogeneous effects of resources and strategies between group types may inform interest groups about their most promising lobbying approaches. Under the assumption of a causal relationship, interest groups are well-advised to engage in those strategies and build those resources for which their group type enjoys disproportionately high effectiveness. From a theoretical perspective, such a finding would also point to an incomplete characterization of interest groups: moderating effects of group type should ultimately be ascribed to an underlying trait shared by actors of the same type that has not been captured. Of course, the same argument also applies to potentially significant main effects of group type.
\newpage
\subsection*{Notes}
Finally, there is also the possibility of moderated mediation.
Again, differences in policy influence may arise not because policy makers are more or less responsive to business interests, but because some factors differ in prevalence between group types. That is, if there were an activity to influence policy that is very effective but that only certain groups are able or choose to engage in, its effect may falsely be attributed to group type.
\section{Theoretical framework on Business}
Potential differences in influence between Business and non-Business interests may be attributed to different explanatory approaches, including variation in resource endowment, variation in strategy choice, but also truly different levels of control over policy outputs. While these approaches need not be mutually exclusive, they focus on distinct aspects of the influence production process.
\subsection*{Resources}
Popular narratives often paint the picture of influential Business actors that derive their control over policy output from financial power. Conjectures about questionable, corruption-like lobbying practices may shape public perceptions in this context. However, elevated monetary resource endowments can also manifest themselves in less reprehensible fashion. For example, many political decisions build on insights from affected actors, and the capacity to prepare and provide such expert information is often facilitated by greater financial means.
Independent of the mechanism, this explanatory approach builds on the argument that Business does not enjoy more influence per se. Rather, Business actors are assumed to be endowed with greater resources, which in turn lead to more policy influence. Indeed, some previous contributions point to Business actors' greater resources and expertise regarding market implications; some even use group type as a proxy for resources.
\subsection*{Strategies}
Putting aside the question of what actors have, the following explanatory approach instead focuses on what they do.
Business interests may also be easier to organize because their preferences are more harmonious. Again, plenty of contributions address this point.
While the endowment with resources may reflect a necessary condition, it is the use actors make of these resources that this explanatory approach emphasizes. In the example outlined above, by accounting for influence-producing factors that are distributed unevenly between group types, the previous two approaches relativize differences between group types.
\subsection*{Heterogeneous effects}
Finally, group-type-specific differences in influence may be exactly that, i.e., unequal control over policy outputs between group types. Such an effect is reflected by the remaining variation in influence explained by group types. However, group-type-dependent effects of the newly introduced factors may also support such an argument.
Unfortunately, differences in influence may also be hidden in averaged effects of the newly introduced factors. Controlling for strategy choice or different levels of resource endowment is important. However, such an approach needs to ensure that it does not ignore potentially group-type-specific responsiveness towards these factors. More specifically, if higher levels of resources or specific strategies only work for some groups, this needs to be considered when determining differences in influence levels across group types.
\newpage
\section{Introduction}
To explain the differences in political influence, researchers often refer to the financial resources that an organization possesses \citep[e.g.,][]{Stratmann2006,Cigler2019}. Even though there is little empirical research on the causal mechanism of this relationship, the argument makes intuitive sense. Organizations with plenty of resources are freer to choose their lobbying activities and can focus on those strategies that promise the most influence. An obvious example is expensive political advertising campaigns, which are only available to wealthy political actors. Similarly, financial resources allow organizations to gather more and better information. This can open doors or even enable direct influence.
In any case, the implementation of most political strategies should benefit from larger funds, and hardly any strategy will suffer from greater financial resources. Nevertheless, research on the nexus between financial resources and political influence is characterized by its contradictory results. Several reasons can be put forward for this.
To begin with, assessments of financial resources for political purposes, or \textit{political budget}, often rely on imprecise measurements. Primarily, this can be attributed to the fact that such information is rarely freely available. If at all, such information is shared only under assurances of confidentiality. As a result, researchers often use other measures such as staff size \citep[e.g.,][]{Kluver2011TheUnion,Yackee2020}. The total resources of an organization are another popular alternative \citep[e.g.,][]{Mahoney2007LobbyingUnion,McKay2012,Kluver2013LobbyingUnion}.
In the case of an organization that uses all of its funds for political purposes, such an operationalization may not be problematic. However, for many observations this can distort the relationship between \textit{political budget} and influence. Consider, for example, a firm whose expenditures for political purposes represent only a very small portion of its total budget; here, the use of total funds introduces a lot of noise. It follows that under such an operationalization, it may be difficult to establish a link between financial resources for political purposes and the influence of an organization.
A similar issue arises in measuring lobbying influence. The problem starts with the fact that there is no generally accepted definition of the concept of influence. While plenty of contributions go without explicitly defining the concept, those that do tend to refer to concepts of power. Indeed, a common definition of influence is missing not only in this study's narrow context of \textit{lobbying influence} but also in studies on \textit{political influence} in general. The pluralist tradition typically proceeds from Dahl's (\citeyear{Dahl1957}, pp. 202--203) simple but influential notion that ``A has power over B to the extent that he can get B to do something that B would not otherwise do". Proceeding from this conceptualization, many studies focus on control over policy outputs \citep[e.g.,][]{Dur2007,Helboe2013,Stevens2020}. The elitist response to pluralism was that a focus on open contestation neglects the less visible dimensions of power. Works in this tradition maintain that influential actors might exploit power asymmetries to prevent some issues from appearing on the political agenda (\cite{Bachrach1962}, p. 948; \cite{Schattschneider1960}, p. 69). While Schlozman (\citeyear{Schlozman1986OrganizedDemocracy}) directed the focus to the bases for exercising influence, other scholars \citep[e.g.,][]{Finger2019,Lowery2013} followed Lukes' (\citeyear{Lukes1974Power:Viw}, p. 28) three faces of power, also emphasizing actors' ability to shape others' ``perceptions, cognitions, and preferences in such a way that they accept their role in the existing order of things".
Returning to \textit{lobbying influence} in particular, another challenge lies in the fact that influence necessarily implies a causal mechanism between intention, action, and effect. Thus, in essence, the construct already requires the researcher to capture multiple variables. This is further complicated by the fact that certain actors would prefer to keep their own influence veiled. It is not surprising, then, that very different approaches to measuring influence can be found in the literature, and that the results of the respective studies vary considerably. To sum up, while the issue has received a lot of attention in the literature, the relationship between \textit{political budget} and influence continues to be characterized by ambiguity. To a large extent, this can be attributed to different conceptualizations and operationalizations of both \textit{political budget} and influence.
This study sets out to address these issues and provides new insights into several of these aspects. First, comparing the most common measures of lobbying influence, we shed light on how these different types of measures correspond. Whereas most previous research papers apply one or at most two types of measurement, we contrast the distributions of three different measurements of influence. Our unique data collection allows us to compare \textit{self-perceived influence}, \textit{reputational influence} and \textit{preference attainment}. The results show that the three variables are distributed quite differently. While there are signs of a common trend between \textit{self-perceived} and \textit{reputational influence}, there is much to suggest that \textit{preference attainment} measures a rather distinct construct.
Second, we compare the explanatory power of \textit{political budget} with respect to the different types of measurement. In this context, it is important to note that our analysis explicitly focuses on financial resources for political purposes. However, our data allow for a comparison with total resources and the number of employees commissioned to follow political events, two popular alternative operationalizations. To accommodate the different scales of measurement of influence, we run separate regressions for \textit{self-perceived influence}, \textit{reputational influence} and \textit{preference attainment}. Again, the empirical analysis points to substantial differences between the types of measurements. \textit{Political budget} shows a statistically significant association for all three measures. Indeed, the relationship with \textit{political budget} appears to be very similar for the two perceptional measures, i.e., \textit{self-perceived influence} and \textit{reputational influence}. However, while significant, the relationship with \textit{political budget} is negative for \textit{preference attainment}.
Third, we provide new evidence on the causal mechanism that allows \textit{political budget} to have an impact on lobbying influence. We examine to what extent lobbying activities translate the effect of \textit{political budget} into influence. Following the approach of \cite{Imai2011UnpackingStudies}, we apply a counterfactual framework to identify these indirect effects through mediation analysis. The findings show that the indirect effects of \textit{political budget} on lobbying influence run through specific activities. In line with the other findings, the results suggest that there are important differences between the three measures of influence.
The following section introduces the relevant debates in the lobbying literature, maps out our research questions and introduces our core contributions. The third section presents the research design, including the data gathering process, variable operationalization and an introduction to the methods applied. The fourth section presents the findings from the empirical analyses, while the subsequent discussion provides some interpretation and connects the results back to the relevant scientific discussions. Finally, the paper concludes with a summary of the main implications of our work and provides an overview of additional aspects that deserve attention in future research.
\section{Background}
\subsection{Literature review}
The subsequent analysis relates to several distinct yet related literatures. The ultimate goal is to reveal the causal mechanism behind the effect of \textit{political budget} on an organization's lobbying influence. However, the analysis of this causal mechanism requires a critical assessment of the relationship between \textit{political budget} and lobbying influence. For this, in turn, it is central to consider the scientific evidence related to the measurement of lobbying influence.
\subsubsection{Measurement of influence}
Accordingly, we begin by addressing the state of research on the measurement of lobbying influence. The literature distinguishes between three different types of quantitative lobbying influence measures: \textit{self-perceived influence}, peer-attributed or \textit{reputational influence}, and \textit{preference attainment}.
So-called \textit{self-perceived influence}, i.e., measuring lobbying influence by asking stakeholders to evaluate their own influence, finds widespread use in the literature \citep[e.g.,][]{Helboe2013,Lyons2020,McKay2012,Newmark2017,Stevens2020,Yackee2020}. Capturing \textit{reputational influence} is more labor-intensive and less common \citep[e.g.,][]{Finger2019,Stevens2020}. Typically, peers or experts assess the lobbying influence of stakeholders. The two methods share two important characteristics. First, the measured quantity in both cases is not lobbying influence as such, but perceptions thereof. Second, both \textit{self-perceived} and \textit{reputational influence} presuppose a causal mechanism between intention and effect.
Both properties contrast with the third measure, \textit{preference attainment}, which has come to be widely used too \citep[e.g.,][]{Mahoney2007LobbyingUnion,Dur2008,Nelson2012LobbyingRulemaking,Kluver2013LobbyingUnion,Bernhagen2014}. Here, the size of the overlap between individual preferences and enacted policies serves as a measure to distinguish the winners from the losers of a policy process. What is missing, however, is the requirement of a causal relationship. An actor can see its preferences fulfilled without having contributed anything to such an outcome. Rather than calling it lobbying influence, some scholars therefore refer to it as lobbying success \citep[e.g.,][]{Bernhagen2014}.
It follows that none of the three variables measures lobbying influence directly. Table \ref{t_measuresoverview} offers an overview of the advantages and disadvantages of the three measures.%
\begin{table}[!h]\centering\footnotesize
\input{files_mediation.draft/t_measures.overview_20210825}
\caption{Measures of lobbying influence -- overview of conceptual and empirical properties.}
\label{t_measuresoverview}
\end{table}
One strength of \textit{reputational influence} is that it is typically based on a number of data sources, for example peers or a group of experts, and thus individual, biased judgments carry less weight. However, this type of data gathering is also associated with high costs. Moreover, the measurement runs the risk of overestimating types of influence that are particularly visible. Actors that engage in lobbying activities that create public attention, e.g., major advertising campaigns, may be considered more influential than organizations that take a less prominent role in the public perception but may be equally effective, e.g., by lobbying decision makers directly.
In the case of \textit{self-perceived influence}, the relative simplicity of the data gathering is an important advantage. Also, in contrast to \textit{reputational influence}, the assessment builds on first-hand information: regarding their own activities and the immediate consequences thereof, actors have outstanding knowledge. However, there is a risk that actors may deliberately provide inaccurate or strategic responses concerning their own influence. In an attempt to give a greater picture of their relevance and legitimacy, organizations may be tempted to provide inflated accounts of their influence \citep[e.g.,][]{Lyons2020}. Other organizations may feel tempted to understate their influence to avoid negative reactions \citep[e.g.,][]{Stevens2020}.
A similar challenge arises with \textit{preference attainment}, namely the truthful representation of preferences \citep[e.g.,][]{Dur2008}. Such data can be obtained by asking experts or stakeholders to indicate other stakeholders' policy preferences or `ideal points' \citep[e.g.,][]{Bernhagen2014,DeBruycker2019}, asking stakeholders to indicate their own policy preferences \citep[e.g.,][]{Mahoney2007LobbyingUnion,Baumgartner2009a}, or determining preferences from policy documents \citep[e.g.,][]{Yackee2006ABureaucracy,Kluver2011TheUnion}. In the context of the latter two methods, there may be an incentive to misrepresent true preferences, leading to distorted estimates if an actor anticipates later negotiating concessions \citep[e.g.,][]{Dur2007,Lowery2013}.
The lobbying influence of other involved parties represents another issue. An actor whose lobbying efforts made no difference may nonetheless be considered successful because, for unrelated reasons (e.g., the lobbying of more influential actors), it attained all its policy goals. At the same time, an actor may have been influential without achieving its policy goals, e.g., when the actor's influence prevented an even less desirable outcome\footnote{\cite{Helboe2013} addresses this problem by linking influence more closely to specific activities (e.g., questions to parliamentary committees) instead of preference statements, and attaches greater relevance to the chronological sequence. However, this thwarts one of the strengths of \textit{preference attainment}, namely that success can be measured regardless of the channel of influence \citep{Dur2008}. In addition, such an approach requires considerable effort in the coding of activities, which may prevent its large-scale application in the context of \textit{preference attainment}.}.
An important advantage of the preference attainment method, however, is that it can be based on publicly available data such as consultation submissions and legislative texts. In addition, some scholars emphasize the fact that it is not a matter of perceptions. Rather, the measure builds on the factual congruence between stated preferences and the policy outcome. This also means that the method is insensitive to the channels of influence. It does not matter whether influencing takes place in a back room or in the course of a large-scale advertising campaign; only the visible results in terms of policy outcomes count.
\subsubsection{Political budget and influence}
With the differences between influence measures in mind, we move to another set of studies, which address the explanatory power of potential predictor variables. Here, the question arises to what extent potential differences between the measures may also be reflected in the corresponding explanatory approaches. The roles of \textit{political budget} on the one hand and \textit{business interests} on the other are of particular interest.
Two things stand out when looking at previous contributions on the effect of resources on lobbying influence. First, there are substantial differences in the operationalization of \textit{political budget}. With the sum of reported lobbying expenditures, \cite{Junk2019WhenCoalitions} not only employs the variable most similar to ours but arguably also the most precise measurement. Other operationalizations can be expected to introduce a higher degree of noise into the relationship between an actor's resource endowment and lobbying influence. For example, \cite{McKay2012} uses total resources (revenue, budget or sales), while a number of studies use staff size. However, there are differences here as well, as this can be based on the total number of employees \citep[e.g.,][]{Kluver2013LobbyingUnion} or the employees entrusted with lobbying tasks \citep[e.g.,][]{Mahoney2007LobbyingUnion,Stevens2020}.
Second, and possibly resulting from these differences in operationalization, the results on the explanatory power of financial resources are mixed. For example, \cite{Junk2019WhenCoalitions} finds no noteworthy effect on \textit{preference attainment}. This is in line with multiple other studies using other operationalizations of financial resources \citep[e.g.,][]{Mahoney2007LobbyingUnion,Baumgartner2009a}. \cite{Kluver2013LobbyingUnion}, however, goes beyond the individual organization as the unit of observation and analyzes the relevance of lobbying camps. Both the aggregate economic power at the lobbying camp level and at the individual level exhibit statistically significant associations with \textit{preference attainment}. Interestingly, however, the association is positive only in the first case, while the probability of attaining preferences decreases with individual economic power. With respect to measures other than preference attainment, \cite{McKay2012} finds organizations' economic resources to explain hardly any variance in either \textit{preference attainment} or \textit{self-perceived success} (not influence). \cite{Stevens2020}, on the other hand, use a composite measure building on \textit{self-perceived} and \textit{reputational influence} and find a positive association with economic resources.
These contradictory findings may result from the use of different measurement methods. However, even among studies using the same measurement concepts, there seems to be substantial disagreement. In any case, these differences regarding the measurement of influence as well as \textit{political budget} make it difficult to compare findings across studies. Further complicating matters, the studies also differ with regard to other aspects like country, policy venue or policy content.
Turning to the other major predictor, whether an organization represents \textit{business interests}, it is important to note that the literature often associates a larger \textit{political budget} with this type of interest \citep[e.g.,][]{McKay2012,Varone2020}. In fact, some contributions go as far as using the business/non-business dichotomy as a proxy for resources \citep[e.g.,][]{Binderkrantz2014AConsultations}. However, resources have repeatedly been found not to vary systematically with group type \citep[e.g.,][]{Baroni2014DefiningGroups,Kluver2012BiasingPolicy-Making}. Irrespective of the relationship between group type and resources, multiple contributions suggest an elevated responsiveness of policymakers to requests from groups representing \textit{business interests} \citep[e.g.,][]{Gilens2014TestingCitizens,Rasmussen2014DeterminantsConsultations,Balles2020SpecialAttention,Varone2020}. In any case, \textit{business interests} represent an important predictor that is taken into account in most empirical analyses of influence \citep[e.g.,][]{Yackee2006ABureaucracy,Dur2019TheUnion}.
\subsubsection{The causal mechanism}
Finally, the third avenue of research relevant for this study focuses on the mechanism through which \textit{political budget} may affect lobbying influence. Generally, in line with how rational choice scholars conceptualize lobbying, the argument rests upon the assumption that resources can be converted into something that is of value to decision makers \citep{Stigler1971TheRegulation}. For example, many political decisions build on insights from affected actors \citep{Austen-Smith2009}. Developing the capacity to prepare and provide such expert information is usually easier with greater financial means \citep[e.g.,][]{Hall2006LobbyingSubsidy}.
While there may be exceptions, stakeholders typically cannot affect policy outcomes simply by possessing a lot of money. Resources need to be mobilized to unfold their potential impact in terms of influencing policy \citep[e.g.,][]{Duygan2021IntroducingRegimes}.
In this context, lobbying strategies describe the activities that interest groups engage in to influence policies according to their preferences. While this also refers to the decision whether to join forces or lobby alone \citep[e.g.,][]{Nelson2012LobbyingRulemaking}, the literature often focuses on insider and outsider strategies \citep[e.g.,][]{Mahoney2007LobbyingUnion,DeBruycker2019}. Surprisingly little research has been conducted on how these activities mediate the effect of resources on lobbying influence, with \cite{Binderkrantz2019TheUK} representing a notable exception. Using group type as a proxy for different resource endowments, they distinguish between insider and outsider strategies. The empirical analyses reveal that in addition to a direct effect of group type on influence, there indeed is also an indirect effect running through this choice of lobbying strategy.
In another contribution to this topic, \cite{McKay2012} finds that a range of lobbying activities predict lobbying success but only a subset of them are associated with higher financial means, namely engaging in multiple venues as well as contributing and participating in PACs. It is worth noting that the analysis of these indirect effects builds on a rather weak association between resources and lobbying success. Also, while mediation analysis has experienced substantial development since, the empirical approach in \cite{McKay2012} builds on pairs of bivariate relationships only.
\subsection{Research questions}
Based on these gaps in the literature, this study addresses the following three research questions. First, comparing \textit{self-perceived influence}, \textit{reputational influence} and \textit{preference attainment}, we conduct an empirical investigation to test whether (and to what extent) the patterns of influence emerging from the three measures overlap. Are there substantial differences between the three measures, or do actors that are influential according to one measure also score high on the other two measures?
Second, to what extent can an organization's \textit{political budget} explain its influence scores? Building on the insights from the first analysis on the comparability of measures, we analyze whether the explanatory power of \textit{political budget} corresponds between influence measures.
Third, what lobbying activities, if any, transmit a potential effect of \textit{political budget} on influence? Again, we compare the findings to assess the congruence between the three measures.
\subsection{Contributions}
Our contribution has broader implications for our understanding of how interest groups influence public decision-making. This fundamental topic has long been of interest to many political scientists and economists across subfields \citep[e.g.,][]{Salisbury1989WhoLobbyists,Wawro2001,Hall2006LobbyingSubsidy,Box-Steffensmeier2013QualityMakingb,Gilens2014TestingCitizens,Schnakenberg2016,Box-Steffensmeier2019Cue-TakingLetters,Bertrand2014IsProcess,Bertrand2020,Schnakenberg2021}.
Over the past decades, the topic has gained even more practical relevance as debates around urgent policy agendas have become increasingly contested and politicized \citep{Lupia2013a,Bouleau2019PoliticizationEnvironment,Fowler2015,Druckman2017UsingEffective}.
As such, scholars have emphasized the importance of investigating the politics---especially the working of lobbying---behind policymaking, for instance, in migration \citep{Spirig2021},
climate change \citep{stokes2016electoral,goldberg2020OilEnvironment,Farrell2016CorporateChange,kim2014electric,Cory2021SupplyCoalition}, energy system transitions \citep{Stutzer2021,Hughes2020,Duygan2021IntroducingRegimes}, and public health \citep{Bowers2018HowImplants,Wouters2020,Harris2021}.
Inspecting what it means for competing stakeholders to be politically influential and how the distribution of influence is linked to their endowments as well as lobbying practices thus leads to a better understanding of the nature of political contestations. Resulting insights may help to explain why democratic policy processes often lead to a stalemate in critical policy areas.
Much of the prior literature probing the resource-influence nexus has been muddled by conflicting empirical findings due to both data limitations and fragmented research targets.
In the first place, actors' \emph{influence} is notoriously hard to operationalize.
With an explicit focus on the differences between measures, this study goes beyond previous contributions by comparing three measures of lobbying influence. %
Moreover, good data are also hard to obtain for the financial resources allocated to political purposes. As a result, existing studies have resorted to various proxy measures.
In contrast, the uniform setting of our study allows for a reasonably standardized comparison using a precise measure of \emph{political budget} as well as alternative operationalizations across the three different measures of influence. %
Concerning the target of empirical analysis, at least since Berry's groundbreaking study of American public interest groups \citep{Berry1977}, prior literature has often conjectured on the role of interest groups' lobbying \emph{practices} (e.g., inside and outside lobbying strategies)
as relevant pieces of the puzzle. However, few studies have explicitly embedded these factors in their analyses.
Consequently, in our view, the literature overall seems to suggest---and it is indeed reasonable to theorize---that lobbying strategies potentially mediate the effect of \emph{political budget} on actors' political influence. Yet existing data and approaches have not allowed these relationships to be tested in detail. We argue that, typically, the effect of resources on policy influence is mediated by the activities the actor engages in. In other words, wealth needs to be put to use, e.g., by gathering information and providing it to decision makers. Figure \ref{f_framework} illustrates the relationships considered in this study.
\begin{figure}[hbt!]
\includegraphics[width=.96\textwidth]{files_mediation.draft/f_framework_20210905.png}
\caption{Mediation framework.}
\label{f_framework}
\end{figure}
\begin{figure}[hbt!]
\includegraphics[width=.96\textwidth]{files_mediation.draft/f_indirecteffects_20210824.png}
\caption{Indirect effects.}
\label{f_indirect}
\end{figure}
Our study benefits from a novel dataset with a comprehensive set of three influence measures (each corresponding with the concept employed in previous studies), a variable precisely measuring the \emph{political budget}, and an empirical framework that allows stakeholder activities (each of which belongs either to an inside or outside lobbying strategy) to mediate the effect of resources on influence.
Thus, not only is our study the first to provide a direct comparison of the three influence measures most often used in the lobbying literature. It also assesses whether and to what extent \emph{political budget} is linked to the actors' influence and analyzes the relevance of the lobbying practices for this relationship.
\newpage
\section{Empirical strategy}
\subsection{Analysis of political influence}\label{methods}
In this section, we lay out the empirical strategy to address the research questions. This is followed by a short introduction to our case. Finally, we describe our data collection strategy and explain how we operationalized the constructs outlined above.
To address the question of the effect of \textit{political budget} on lobbying influence, we run three separate regressions, one for each measure of influence. Because of the differences in measurement scales (see the subsection on \nameref{data_operationalisation}), we use three different regression models.
For \textit{self-perceived influence}, we apply a standard linear regression model. As in the other two models, the main interest lies in the organization-specific \textit{political budget}. As a control, we include a variable indicating whether the actor represents \emph{business interests}.
In the case of \textit{reputational influence}, we estimate a zero-inflated count data regression model, which combines a zero model and a count model \citep{Zeileis2008}. We use the same set of predictors in both parts. The zero model, estimating the probability of being mentioned by no survey participant, uses logistic regression. To explain the variance among non-zero outcomes, the count model then uses Poisson regression.
In contrast to the perceptional measures, the observation unit for the \textit{preference attainment} measure is the subject-item level. Hence, in addition to the predictors used in the other specifications, we include two random intercepts to account for potentially correlated errors among observations for the same organization as well as for the same items of a bill. We estimate the model using mixed-effects logistic regression.
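
For concreteness, the three specifications could be fit in R along the following lines. This is a minimal sketch with hypothetical variable and data frame names (\texttt{survey\_df}, \texttt{items\_df}, etc.), not our replication code; it uses the \textit{pscl} package \citep{Zeileis2008} for the zero-inflated model and \textit{lme4} for the mixed model.
\begin{verbatim}
## Minimal sketch of the three specifications (all variable and
## data frame names are hypothetical placeholders).
library(pscl)   # zeroinfl()
library(lme4)   # glmer()

## Self-perceived influence: linear regression on the six-point scale.
m_self <- lm(self_perceived ~ political_budget + business,
             data = survey_df)

## Reputational influence: zero-inflated Poisson regression with the
## same predictors in the count part and the zero part.
m_rep <- zeroinfl(mentions ~ political_budget + business |
                    political_budget + business,
                  data = survey_df, dist = "poisson")

## Preference attainment: mixed-effects logistic regression on
## subject-item observations, with random intercepts for
## organizations and for bill items.
m_pa <- glmer(attained ~ political_budget + business +
                (1 | organization) + (1 | item),
              data = items_df, family = binomial)
\end{verbatim}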
While the regression results may show whether an organization's \textit{political budget} affects its influence over energy policy, they provide little insight into how such a potential effect comes about. On the condition that the regressions identify such an effect, the final step of our analysis sheds light on the causal mechanism that leads \textit{political budget} to affect policy influence.
To that end, we estimate to what extent the effect of \textit{political budget} is mediated by various lobbying activities. The analysis follows the approach of \cite{Imai2010IdentificationEffects}. It applies a potential outcomes framework and builds on two separate models. First, we estimate the mediator model, regressing the respective lobbying activity on the treatment variable, i.e., \emph{political budget}, and potential confounders. Second, the predicted values of the mediator, along with the same set of independent variables, then serve to fit the model of the actual outcome variable, which in our case is influence. From this second model, the average difference in the predicted outcomes for different values of the mediator, holding the treatment variable constant, represents the average causal mediation effect (ACME).
We conduct this mediation analysis for each potential causal mechanism separately using the \textit{mediation} and the \textit{maczic} packages \citep{Tingley2014Mediation:Analysis,Cheng2018MediationData}. Under the assumption that the lobbying activities are not causally related, the ACMEs can be consistently estimated without explicitly including the other mediators \citep{Imai2013IdentificationExperiments}. This does not hold for the average direct effect (ADE), which tends to be overestimated if not all indirect effects are considered simultaneously. However, the goal of the analysis is to determine the mediated effects.
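
As an illustration, the following sketch shows what this two-model procedure might look like for a single activity and the \textit{self-perceived influence} outcome using the \textit{mediation} package. All object and variable names are hypothetical; the zero-inflated and mixed-model outcomes are handled analogously via \textit{maczic} and the model classes supported by these packages.
\begin{verbatim}
## Sketch of the mediation analysis for one activity and the
## self-perceived influence outcome (hypothetical variable names;
## the procedure is repeated for each of the 13 activities).
library(mediation)

## Mediator model: does political budget predict the activity?
## (Probit link keeps the fit compatible with medsens() below.)
med_fit <- glm(activity_hearings ~ political_budget + business,
               data = survey_df, family = binomial("probit"))

## Outcome model: influence as a function of activity and budget.
out_fit <- lm(self_perceived ~ activity_hearings +
                political_budget + business, data = survey_df)

## Quasi-Bayesian Monte Carlo confidence intervals by default;
## boot = TRUE switches to the non-parametric bootstrap.
med_out <- mediate(med_fit, out_fit,
                   treat = "political_budget",
                   mediator = "activity_hearings",
                   sims = 1000)
summary(med_out)  # ACME, ADE, total effect, prop. mediated
\end{verbatim}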
Finally, we estimate the confidence intervals using the default quasi-Bayesian Monte Carlo method, as this was the only option available for all three types of regressions. However, the results were robust to the use of non-parametric bootstraps for the perceptional measures, where this estimation of the confidence intervals was possible (see Table \ref{app_t_mediation.boot} and Figure \ref{app_f_coefplot.boot} in appendix \ref{app_bootstrap}).
An important advantage of this form of mediation analysis is that there are no requirements regarding the functional form of the models involved.\footnote{While this is true in theory, the technical implementation required minor adjustments in the case of the mixed model for the \textit{preference attainment} measure. More specifically, the outcome model features a random intercept only for organizations. This stands in contrast to the specification used in the main regression, which includes an additional random intercept for items. To ensure that this does not fundamentally alter the relationship between \textit{political budget} and the \textit{preference attainment} variable, model 1 of Table \ref{app_t_reg.compare.pa} in appendix \ref{app_regression} features the regression results for this modified specification. While the estimates vary slightly, the directions remain the same and both predictor variables retain statistical significance.}
This is in contrast to the product and difference methods, which are commonly used to estimate indirect effects in linear structural equation models \citep{Baron1986TheConsiderations}. In the case of logistic regression, as repeatedly used in this study, both of these methods would fail to estimate mediation effects consistently \citep{Imai2010AAnalysis}. Moreover, the underlying identification strategies omit necessary assumptions required to establish causal mechanisms \citep{Glynn2012TheEffects}.
Indeed, since our approach relies on a counterfactual outcome that is never actually observed, the identification of ACMEs requires the strong assumption of sequential ignorability. Its first part, the exogeneity of the mediated variable, may be considered standard. The second aspect is more restrictive, particularly in non-experimental settings like this study. Conditional on the full set of relevant confounders and, importantly, the mediated variable, the observed mediator value is assumed to be ignorable, implying identifiability of the ACME \citep{Imai2011UnpackingStudies}. In the context of this study, this means that all variables confounding the relationship between the lobbying activity and the influence measures must be conditioned on. This includes \textit{political budget}, reflecting the sequentiality element that sets apart the causal mediation effect (sequential ignorability assumption) from a causal effect of the mediator (conventional exogeneity assumptions).
Finally, the assumption also prohibits any influence of the mediated variable, \textit{political budget} in our case, on the other confounders, no matter whether they are observed or not. This also implies that there must not be any causal relationship between different mediators, as this would also represent post-treatment confounding \citep{Imai2013IdentificationExperiments}.
Provided the assumption is satisfied, the ACME is identifiable even though not all potential outcomes can be observed. Unfortunately, the assumption is not testable. However, at least for some violations, sensitivity analyses can indicate to what extent the assumption would need to be violated for the mediated effects to lose statistical significance \citep{Imai2010AAnalysis}.
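
For illustration, such a sensitivity analysis could be run directly on the mediation object from the sketch above (hypothetical names carried over); \texttt{medsens} reports the value of the sensitivity parameter $\rho$ at which the estimated ACME would equal zero.
\begin{verbatim}
## Sketch of a sensitivity analysis for sequential ignorability,
## applied to the mediation object from the earlier sketch.
sens <- medsens(med_out, rho.by = 0.1)
summary(sens)  # rho at which the estimated ACME would equal zero
plot(sens)     # ACME as a function of the sensitivity parameter rho
\end{verbatim}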
\subsection{Data collection}\label{data_operationalisation}
To examine the research questions introduced above, we collected data on policy actors' resources, lobbying activities, preferences, and lobbying influence. Our study is embedded in the context of Swiss energy policy, which -- as we will show below -- is uniquely suited for such an empirical investigation. Switzerland has recently experienced important energy policy processes with distinct outcomes. This includes a federal strategy to realign its national energy policy: after several years in parliament, the so-called `Energiestrategie 2050' was accepted by the electorate in 2017. Shortly after, a major revision of the regulation on carbon emissions (the CO2 law as part of `Klimapolitik nach 2020') failed to gain a legislative majority in its first attempt. The parliament then passed a revised version, which was eventually rejected in a public referendum in 2021.
Switzerland's political system is well suited for our study because the legislative process builds on the results of a highly formalized and transparent consultation submission process. Stakeholders can, and frequently do, submit their views on public policy proposals. Due to this institution, Swiss policymaking is characterized by a more ``direct incorporation of a wide variety of parties and organizations" (\citeauthor{Weiler2015}, \citeyear{Weiler2015}, p.~750) than other political systems. Importantly, given that the consultation process is open to all stakeholders and implemented transparently (e.g., online publication of stakeholders' submissions), there is a certain level playing field in terms of stakeholders' \textit{access} to the political decision-making process. This makes it all the more interesting to study whether certain political resources or strategies make some stakeholders more \textit{influential} than others.
Our empirical analysis builds on two separate data sources. To determine the \textit{preference attainment} measures, we use publicly available consultation submissions. The remaining variables, including the perceptional measures of influence, stem from a survey we conducted with Swiss energy stakeholders in 2019.
The \textit{preference attainment} values build on subject-item data from the preferences indicated in consultation submissions on the two policies mentioned above, `Energiestrategie 2050' and `Klimapolitik nach 2020'. These bills are suited to the purposes of this study for the following reasons. First, they were subject to intensive political and public debate and were both ultimately put to an optional referendum, which in both cases was filed by the Swiss People's Party (SVP). Not only did this spotlight mobilize a larger set of actors; the attention also required public position-taking by organizations that under different circumstances might prefer to hide their preferences and lobby decision makers more covertly. Similarly, the contested nature of these bills may be more informative with regard to which organizations truly are influential: to satisfy one side's preferences, decision makers had to let down other major stakeholders.
Second, the two policies cover the full spectrum of energy policy topics, such as nuclear phase-out schedule, car emission limits, subsidies for renewable energies, transparency rules for power supply firms, ratification of the Paris Agreement, emission reduction targets, emission trading scheme, and fossil heating bans. With regard to the comparison of influence measures, this feature of our sample is important. A narrower set of issue areas may not be as representative to measure actor-specific influence on energy policy.
Finally, the multiple-choice format of these two consultations makes it possible to collect data on preferences for a large sample in a standardized manner. Smaller consultations usually ask participants to share their views in a more general text form. This format entails a lot more ambiguity and requires the coder to interpret the answer, leading to difficulties in extracting preferences and comparing policy positions. The lack of clearly spelled-out questions further hampers comparability.
To determine \textit{preference attainment}, we first registered all item-specific choices for each participant. Next, we matched these preferences with the corresponding outcome in the final version of the bill that the parliament agreed upon. This resulted in a binary measure indicating whether the parliament's decision overlapped with the stakeholder's preferred outcome. We determined this value for all item-organization observations. Where the parliamentary outcome was unclear, we dropped the item for all organizations. If an organization did not provide an answer to a specific question or answered ambiguously, we only dropped the corresponding subject-item observation.
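
Schematically, the coding could proceed as follows; this is a sketch with hypothetical column names, where \texttt{prefs\_df} stands for a long-format table of organization-item preferences and parliamentary outcomes.
\begin{verbatim}
## Sketch of the preference attainment coding (hypothetical column
## names; prefs_df holds one row per organization-item preference).
library(dplyr)

pa_df <- prefs_df %>%
  filter(!is.na(outcome)) %>%      # drop items with unclear outcome
  filter(!is.na(preference)) %>%   # drop missing/ambiguous answers
  mutate(attained = as.integer(preference == outcome))
\end{verbatim}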
This approach resulted in 44 items that were considered for the \textit{preference attainment} measure\footnote{The 44 items comprise 26 from the `Energiestrategie 2050' consultation and 18 from the `Klimapolitik nach 2020' consultation; see Table \ref{app_t_consultation.item.es2050} and Table \ref{app_t_consultation.item.klima} in appendix \ref{app_consultation}.}. The survey data build on a larger sample. In addition to the participants from the two policies mentioned above, we also considered the submissions from another two major Swiss energy policy consultations (the ordinance-level implementation of Energiestrategie 2050 and the Stromversorgungsgesetz, which regulates energy supply). Taking into account that some organizations participated in more than one consultation, 740 unique participants were identified. While we excluded individuals, there were no further formal or informal conditions to qualify as a stakeholder. However, 60 organizations had to be dropped because they had ceased to exist or because we were able to obtain neither online nor postal contact details.
To inform them about the project and the process, potential survey participants received an invitation letter with personalized details whenever possible. Based on the insights from a trial with 38 organizations, the survey underwent minor adjustments to reduce drop-out rates.
We ran the survey on the Qualtrics survey platform. Given that Switzerland consists of several language regions, respondents could choose between a German, a French, or an English version of the survey. Organizations that had not submitted a response within the two-week deadline received a reminder letter (main wave) or phone call (trial) encouraging them to participate. Enclosed with the reminder letter was a paper version of the survey and a pre-franked envelope. Respondents could either use the online link or return the hard copy within two weeks.
Overall, data collection took 20 weeks.\footnote{Given its nonrecurring nature, this type of data gathering process is susceptible to biases from topical contexts. While the climate strike movement did gain traction over this period, the authors are not aware of individual energy policy related events (e.g., referenda on energy bills), which may suggest distortions from temporal anomalies.} Out of 680 organizations invited to participate, 364 submitted a response, resulting in a response rate of 53.5\%. After removing insufficiently completed submissions, the remaining data contains 312 observations (45.8\%) on a total of 42 survey items.
To determine \textit{self-perceived influence}, we asked respondents to assess the influence of their organization on Swiss energy policy. Respondents had to choose from an ordered six-point scale ranging from ``No influence at all" to ``Very strong influence". Moreover, as a measure of \textit{reputational influence}, respondents were asked to list the organizations that they considered to be influential in Swiss energy policy. For each unique answer provided, we added up the mentions across all participants, not counting self-mentions.
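
The tally could be computed along the following lines; again a sketch with hypothetical names, where \texttt{mentions\_df} holds one row per respondent-mention pair.
\begin{verbatim}
## Sketch of the reputational influence tally (hypothetical names;
## mentions_df holds one row per respondent-mention pair).
library(dplyr)

tally_df <- mentions_df %>%
  filter(respondent != mentioned_org) %>%   # exclude self-mentions
  count(mentioned_org, name = "mentions")

## Organizations never mentioned get an explicit zero, which is the
## source of the zero inflation addressed in the regression model.
reputation <- survey_df %>%
  distinct(organization) %>%
  left_join(tally_df, by = c("organization" = "mentioned_org")) %>%
  mutate(mentions = coalesce(mentions, 0L))
\end{verbatim}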
As mentioned above, the remaining variables also build on survey responses. \textit{Political budget} was measured by an organization's annual budget devoted to political purposes. In the survey, we asked respondents to choose among seven answer options ranging from less than CHF 5'000 to more than CHF 1'000'000. The survey did not force participants to respond to this item. However, when trying to skip the item, respondents were reminded of the importance of this information. Out of the 312 organizations in our sample, 269 (86\%) provided the information. Relative to the entire sample (680 organizations), this corresponds to approximately 40\%. The binary variable measuring whether an organization represents \textit{Business Interests} was hand-coded. The measure distinguishes \textit{Energy Businesses}, \textit{Other Businesses} and \textit{Business Associations} from the remaining actors.
Finally, we asked respondents about the lobbying activities their organizations had pursued in the past ten years. We provided a list of 13 activities with an additional text entry option and respondents were free to tick as many as applied to them, resulting in one binary subitem for every activity (see Table \ref{t_activities} for an overview of lobbying activities presented).
\begin{table}[!h]\centering\footnotesize
\input{files_mediation.draft/t_activities_20210825}
\caption{Survey items on lobbying activities.}
\label{t_activities}
\end{table}
\newpage
\section{Results}
\subsection{Descriptive results}
Table \ref{t_descriptives} presents descriptive statistics for all variables. One important insight is that the reduction in sample size results from missing data on \textit{political budget} and \textit{self-perceived influence}, and of course from the smaller sample for the \textit{preference attainment} measure. Furthermore, the table reveals noteworthy patterns in the dependent variables. Most strikingly, in the majority of subject-item observations, preferences overlapped with the policy outcome. Although not directly comparable, this contrasts with the fact that more than half of all organizations were not named by any respondent as influential, highlighting the zero-inflated nature of the \textit{reputational influence} measure. Finally, for the \textit{self-perceived influence} measure, the mean is slightly lower than the median, suggesting that survey participants were more reluctant to label themselves as very influential than as not influential at all.
\begin{table}[!h]\centering\footnotesize
\input{files_mediation.draft/t_descriptives_20210823}
\caption{Descriptive statistics.}
\label{t_descriptives}
\end{table}
For informational purposes, the table also includes an entry for the number of \textit{preference attainment} items per organization, which was not used as a variable in any of the specifications\footnote{Rather than presenting numbers on the whole population of consultation participants, this entry refers only to the subsample that also submitted a survey response.}. The number of observations ranges from 1 to 41 items per organization, underlining the importance of a mixed model approach rather than using a ratio variable of preferences attained.
To begin, we look at the relationship between the dependent variables. The scatterplots in Figure \ref{f_influence.measures} reveal the degree of association between the three variables. The different units of observation (subject level for the perceptional measures, subject-item level for \textit{preference attainment}) make them difficult to compare directly. To account for this, we calculated an additional variable reflecting the ratio of attained preferences, representing \textit{preference attainment} at the subject level. Given the unbalanced nature of the data, the denominator varied according to the number of items on which an organization indicated its preferences.
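
The ratio variable and one of the jittered Loess scatterplots could be produced roughly as follows (hypothetical names; the exact jitter amounts follow the note to Figure \ref{f_influence.measures}).
\begin{verbatim}
## Sketch of the subject-level ratio and one jittered Loess plot
## (hypothetical names; jitter amounts are illustrative).
library(dplyr)
library(ggplot2)

ratio_df <- pa_df %>%
  group_by(organization) %>%
  summarise(pa_ratio = mean(attained))  # denominator = items answered

plot_df <- inner_join(survey_df, ratio_df, by = "organization")

ggplot(plot_df, aes(x = self_perceived, y = pa_ratio)) +
  geom_jitter(width = 0.2, height = 0.02) +  # minimal noise
  geom_smooth(method = "loess")              # fit with CI band
\end{verbatim}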
The smoothing lines do hint at some modest level of agreement between \textit{self-perceived} and \textit{reputational influence}. There are short segments with a negative slope, but for most observations, higher values on one measure correspond with higher values on the other. A very different picture emerges from the scatterplots involving \textit{preference attainment}. With \textit{reputational influence}, there seems to be no common trend at all. In the case of \textit{self-perceived influence}, observations with very high values also tend to exhibit above-average values in \textit{preference attainment}. However, the smoothing lines feature major segments of negative slope or sideward scattering.
\begin{figure}[ht!]
\includegraphics[width=.77\textwidth]{files_mediation.draft/f_distributions_3x2_20210825.png}
\caption{Relationship between dependent variables.}
\label{f_influence.measures}
\floatnote{Each row features one of the three combinations of measures. The second column presents the same combination as the first with switched axes. Since the fitted model (Loess regression) attributes all the error to the respective dependent variable, switching the axes results in distinct smoothing lines and confidence intervals. In contrast to the subject-item observations used in the following analyses, in this graph, \textit{preference attainment} is represented on the subject level as the ratio of preferences attained. To make the graphs easier to interpret, minimal noise (0.2 of the resolution of the data in width and height) was added to avoid overlapping observations. Also, the observations for \textit{reputational influence} are plotted only for values up to 50. This leads to 6 observations (values between 57 and 118) not being featured in the respective graphs. However, the smoothing lines and their confidence intervals are based on the full sample.}
\end{figure}
\clearpage
\subsection{Regression results}
The descriptive results suggest that the three dependent variables do measure somewhat distinct concepts. This raises the question of the extent to which this is also reflected in their respective relationships with \textit{political budget}.
Indeed, the regressions in Table \ref{t_reg.compare} reveal differing patterns. In the case of \textit{self-perceived influence}, \textit{political budget} is a positive and statistically highly significant predictor. A larger \textit{political budget} is associated with organizations considering themselves more influential.
The results for the \textit{reputational influence} model are similar. While the top three coefficients in the table refer to the count model, the bottom three feature the results for the zero model. Both are statistically significant, but the two coefficients for \textit{political budget} seem to point in opposite directions. This is because the zero model predicts zeros, i.e., non-occurrence. Hence, a larger \textit{political budget} corresponds with a decrease in the probability of an organization being mentioned by none of the survey participants (zero model). Similarly, the expected number of mentions as an influential actor increases with a larger \textit{political budget} (count model).
In contrast, the mixed model for the \textit{preference attainment} measure suggests a negative relationship with \textit{political budget}. A larger \textit{political budget} is associated with a lower probability to attain policy preferences. This negative association is statistically different from zero, too, although it does not reach the same level of statistical significance as the estimates for the two perceptional measures of lobbying influence.
Interestingly, organizations representing \textit{business interests} consider themselves less influential and are less likely to attain their preferences. Both estimates are statistically different from zero. Also, the probability of being mentioned as influential by at least one survey participant is lower for such interests. However, this relationship does not reach any common level of statistical significance.
Although there is substantial overlap, the samples on which these three models build are not identical. For each measure, we included all available observations, the number of which varies between measures. However, the results of all three models were largely the same for the subsample of 182 organizations for which data on all three measures were available (see models 1-3 in Table \ref{app_t_reg.compare.all} in appendix \ref{app_regression}).
As outlined above, the number of items on which the \textit{preference attainment} measure builds varies between participants. Organizations with very few indicated preferences may introduce a potential bias into the \textit{preference attainment} variable. However, the results, again, were robust to the exclusion of 88 subject-item observations from the 24 organizations for which 10 or fewer observations were available (see model 2 in Table \ref{app_t_reg.compare.pa} in appendix \ref{app_regression}).
Finally, it is important to point out that the effect sizes between the three models are hardly comparable. While the predictor variables are the same, the scale of measurement varies between the three models, rendering a comparison of estimates difficult. Given the different regression models, the same caveat also applies to the question to what extent the specification can explain the variances in the dependent variables. Hence, we also refrain from comparing goodness of fit statistics between the models.
\begin{table}[!h]\centering\footnotesize
\input{files_mediation.draft/t_regression.comparison_20210825}
\caption{Regression results -- comparison of influence measures.}
\label{t_reg.compare}
\end{table}
\subsection{Mediation results}
Having outlined the implications of the choice of dependent variable, we can now move from our methodological to our final, more substantive contribution. As mentioned above, in most cases, simply holding large financial resources will not automatically result in influence over policy. Hence, given the effects of \textit{political budget} on influence identified in the previous analysis, we investigate the mechanisms through which such resources may manifest their influence potential. The analysis tests the extent to which 13 different lobbying activities mediate the effect of \textit{political budget} on the influence variables.
Figure \ref{f_coefplot.bayes} illustrates the average causal mediation effect (ACME) for each activity. While the figure includes the corresponding confidence intervals, a more comprehensive overview of the mediation results can be found in Table \ref{t_mediation.bayes}\footnote{See Appendix \ref{app_sensitivity} for a discussion of the underlying identification assumptions for the mediation analysis in the context of our analysis.}.
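For illustration, the following minimal sketch shows how an ACME with quasi-Bayesian (parametric) Monte Carlo intervals can be obtained for a single mediating activity; variable and file names are hypothetical, and the outcome model is simplified to a linear one:
\begin{verbatim}
# Hedged sketch of a single-mediator analysis in the spirit of
# Imai et al.'s ACME framework. Names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

df = pd.read_csv("survey.csv")
mediator_model = sm.GLM.from_formula(   # budget -> activity (binary)
    "parl_hearing ~ political_budget + business_interest",
    df, family=sm.families.Binomial())
outcome_model = sm.OLS.from_formula(    # activity + budget -> influence
    "self_perceived ~ parl_hearing + political_budget + business_interest",
    df)

med = Mediation(outcome_model, mediator_model,
                "political_budget", mediator="parl_hearing")
res = med.fit(method="parametric", n_rep=1000)  # quasi-Bayesian draws
print(res.summary())  # ACME, ADE, total effect with 95% intervals
\end{verbatim}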
Not all ACMEs reach statistical significance and, more interestingly, there are differences between influence measures. The drafting of political opinion papers, physical access to the Swiss parliament, and participation in parliamentary hearings are statistically significant mediators for both \textit{self-perceived} and \textit{reputational influence}. Contributions to media outlets mediate the effect of \textit{political budget} only on \textit{reputational} but not \textit{self-perceived influence}, while the opposite is true for participation in official working groups to draft new legislation.
Our mediation analysis is not limited to positive effects; i.e., causal mechanisms can relate to negative effects as well. Significant mediation effects in that case would suggest that a larger \textit{political budget} leads organizations to choose activities, which -- perhaps counter-intuitively -- have a negative effect on their lobbying influence. However, the mediation analysis on the \textit{preference attainment} model does not provide any evidence for this to be the case. None of the ACMEs are statistically significantly different from zero.
For both perceptional measures, the results suggest partial rather than full mediation for all statistically significant activities. In terms of effect size, relative comparisons between activities seem best suited to clarify the effects, especially given the ordinal nature of the variable used to measure \textit{political budget}. For \textit{reputational influence}, the coefficients for hearings in parliamentary commissions, drafting political opinion or position papers, and providing contributions to mass media are all comparable in size. Physical access to parliament, however, even after accounting for differences in the ATEs (average treatment effects, i.e., the sum of the direct and mediated effects), mediates a substantially larger proportion of the effect than the other activities.
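For reference, these quantities are linked by the standard decomposition from the mediation literature: the total effect splits into a direct and a mediated part, $\mathrm{ATE} = \mathrm{ADE} + \mathrm{ACME}$, where ADE denotes the average direct effect, and the proportion mediated is the ratio $\mathrm{ACME}/\mathrm{ATE}$.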
For \textit{self-perceived influence}, participation in hearings in parliamentary commissions features as the most important mechanism linking \textit{political budget} with influence, followed by participation in official working or expert groups to draft legislation. Physical access to parliament and drafting political opinion or position papers feature a substantially smaller mediation effect.
The different scales of measurement prohibit a direct comparison of effect sizes across the different measures of influence. However, some relative inferences can be drawn nonetheless. While the absolute values are uninformative, the ratio between the coefficients provides further evidence of the importance of physical access to parliament for \textit{reputational influence}. In contrast, for \textit{self-perceived influence}, the relative comparison highlights the relevance of hearings in parliamentary commissions as a mediator.
\begin{figure}[hbt!]
\includegraphics[width=.93\textwidth]{files_mediation.draft/f_coefplot.bayes_20210906.png}
\caption{Comparison of ACME estimates.}
\label{f_coefplot.bayes}
\floatnote{95\% confidence intervals estimated using quasi-Bayesian Monte Carlo approximation.}
\end{figure}
\begin{landscape}
\begin{table}[!h]\centering
\input{files_mediation.draft/t_mediation.bayes_20210722}
\caption{Mediation analysis estimates.}
\label{t_mediation.bayes}
\end{table}
\end{landscape}
\subsection{Inside vs. outside lobbying}
A closer look at the mediation results reveals an interesting pattern. In the case of \textit{self-perceived influence}, it is mainly inside lobbying activities that appear to translate the effect of \textit{political budget}. Physical access to parliament, participation in official working or expert groups, and hearings in parliamentary commissions all aim to influence decision-makers directly. Actors may perhaps also exploit political position and opinion papers as part of an outside lobbying strategy. Typically, however, such papers serve to inform and persuade insiders of the policymaking process.
For \textit{reputational influence}, the inside--outside pattern is less pronounced. Contributing to print, radio, or television, one of the significant mediating activities, represents an outside lobbying strategy. Nonetheless, in addition to the hard-to-classify activity of drafting political position or opinion papers, both other significant mediators represent typical elements of an inside lobbying strategy.
Hence, there is reason to suspect a fundamental difference between inside and outside strategies in terms of their ability to translate \textit{political budget} into influence. In the following, final analysis, we put this alleged dichotomy to the test (see Table \ref{t_mediation.insideoutside} and Figure \ref{f_coefplot.insideoutside}). Aggregating the individual, binary activity variables into two overall inside and outside lobbying variables, we rerun the analysis using the two new variables as mediators\footnote{Our classification of inside and outside lobbying activities builds on \cite{Binderkrantz2005a}. We assigned "Exchange with politicians", "Commissioning lobbyists", "Physical access to parliament", "Hearings in parl. commissions", "Official working/expert group", "Political opinion/position papers" to the inside lobbying category and "Financing or conducting research", "Contributions to print/radio/television", "Digital media communication", "Expert conferences/public debates", "Funding of political advertising", "Funding and/or collection of signatures", "Demonstration call" to the outside lobbying category.}.
\begin{table}[!h]\centering\footnotesize
\input{files_mediation.draft/t_mediation.insideoutside_20210826}
\caption{Mediation analysis estimates - inside vs. outside lobbying}
\label{t_mediation.insideoutside}
\end{table}
\begin{figure}[hbt!]
\includegraphics[width=.85\textwidth]{files_mediation.draft/f_coefplot.insideoutside_20210907.png}
\caption{Comparison of ACME estimates - inside vs. outside lobbying.}
\label{f_coefplot.insideoutside}
\floatnote{95\% confidence intervals estimated using quasi-Bayesian Monte Carlo approximation.}
\end{figure}
In contrast to the findings for the individual activities, the difference between inside and outside lobbying strategies is less clear for the aggregated variables. For both \textit{self-perceived} and \textit{reputational influence}, the effect size of the aggregated inside lobbying variable is substantially larger than that of outside lobbying. However, not only the inside but also the outside mediator variable reached statistical significance for both perceptional measures.
The analysis of the aggregated mediator variables revealed another noteworthy finding with regard to the \textit{preference attainment} measure. As shown above, none of the individual lobbying activities mediate the negative effect of \textit{political budget} on \textit{preference attainment} in a statistically significant manner. In contrast, the aggregated outside lobbying variable does. Moreover, the proportion mediated is substantial relative to the overall effect of \textit{political budget} on \textit{preference attainment}.
\subsection{Robustness checks}
In the appendix we present the results for several robustness checks. The three models in Table \ref{app_t_reg.compare.all} use the same specifications as the main regressions. However, the sample is limited to the 182 observations for which there are values for all three measures. The results as well as the statistical significance remain largely unchanged.
Given the numerous contributions in the literature that employ alternative measures of financial resources, in Table \ref{app_t_reg.compare.polstaff} we present the results for the three dependent variables using the number of full-time-equivalent employees commissioned to follow political events rather than \textit{political budget}. The sample sizes are slightly larger. With respect to the perceptional measures of lobbying influence, the results for \textit{political staff} are similar to those obtained when using \textit{political budget} in the main analysis, but there is no statistically significant association with \textit{preference attainment}. However, the explanatory power of \textit{business interests} increases with the use of \textit{political staff} for the \textit{preference attainment} model. The same is true for \textit{self-perceived influence}.
Table \ref{app_t_reg.compare.budget} features the results for \textit{overall budget} instead of \textit{political budget}. The relationship between budget and influence is insignificant for both the \textit{self-perceived influence} measure and \textit{preference attainment}. At the same time, the estimates in the \textit{reputational} model increase substantially.
Model 1 of Table \ref{app_t_reg.compare.pa} features the estimates for the \textit{preference attainment} regression that uses only one random intercept, for the grouping variable organization. We find that the estimate for \textit{political budget} remains statistically significant, although at a lower level. In Model 2 we return to the normal specification but only include organizations for which there are more than ten subject-item observations. Results are very similar to the ones in the main regression, including the statistical significance.
In Table \ref{app_t_reg.compare.pa_indiv} we present the results for the subsamples from the two consultations. Not only are the samples smaller than in the main analysis, but the sample size also varies substantially between the two models. While the estimates for \textit{political budget} change only slightly, statistical significance decreases in both cases. Interestingly, in the case of the 'Energiestrategie 2050', the coefficient for \textit{business interests} doubles in size compared to the main regression. It also gains statistical significance, while the opposite is true for the consultation on 'Klimapolitik nach 2020'.
Finally, we also conducted the same mediation analysis using bootstrapped rather than quasi-Bayesian confidence intervals (Figure \ref{app_f_coefplot.boot} and Table \ref{app_t_mediation.boot}). For \textit{reputational influence}, the same activities identified as mediators in the main analysis were also found to have statistically significant indirect effects with this alternative confidence interval estimation procedure. In the case of \textit{self-perceived influence}, in addition to the activities from the main analysis, exchange with politicians and media contributions also featured estimates that are statistically significantly different from zero.
\section{Discussion}
We began our empirical analysis by asking whether the three measures of political influence behave in a similar manner. Note that, conceptually, preference attainment places greater emphasis on observed policy outputs (hence it allows us to construct an objective score), whereas self-perceived influence and reputational influence place more emphasis on actors’ agency in shaping policies, an unobserved dynamic stemming from the actors’ intention to shape policies (hence these approaches force us to rely on their stated perceptions).
Comparing our three measures on a nested set of actors from a uniform political context, we find three noteworthy patterns. First, the different conceptual emphases behind preference attainment and the two perception-based measures indeed lead to distinct scoring. This finding indicates that preference attainment measures a different construct than the other two. Second, studies using preference attainment as a measure of political influence would conclude that stakeholders with greater financial resources are less likely to achieve their policy goals. In contrast, our analysis confirms the popular expectation that a larger political budget is indeed more likely to lead to greater political influence when influence is operationalized by the other two measures, self-perceived and reputational influence. Finally, our results offer empirical evidence for the similarity between the two perception-based measures. Self-perceived influence and reputational influence not only correlate positively with each other but also correlate positively with political budget.
In the first place, we wanted to compare the three measures because prior literature has often used them interchangeably as proxies for political influence. Our findings suggest, however, that it is important for researchers to acknowledge the difference both theoretically and empirically. In our view, preference attainment is a measure of policy goal achievement, which is a convolution of \emph{all} the actors’ policy preferences and political influence. The measure cannot indicate a single actor’s influence. The similarity between the self-perceived and reputational measures implies that an organization’s self-assessment of its political influence is not detached from how the other actors assess its influence. This is a reassuring result for researchers using either of the measures. One practical design consideration remains, however. The self-perceived measure provides information about political influence for the entire sample of actors, whereas with the reputational measure the data tend to include many actors that are never mentioned by the other actors as influential. The potential variation among actors that are not highly prominent thus cannot be captured.
In the multivariate analysis, we also considered the role of \emph{business interests} in predicting political influence. This variable measuring whether an organization represents business interests has frequently appeared in prior studies \citep[e.g.,][]{Yackee2006ABureaucracy,Dur2019TheUnion}. While it is often associated with a larger \textit{political budget} \citep[e.g.,][]{McKay2012,Varone2020}, in some cases it has even been used as a proxy for financial resources \citep[e.g.,][]{Binderkrantz2014AConsultations}. Interestingly, in our analyses \emph{business interests} were associated with less influence for \textit{self-perceived influence} as well as \textit{preference attainment}. For the zero-model of \textit{reputational influence}, there is a similar trend that does not reach statistical significance, however.
Hence, against common belief and numerous prior contributions on the topic, our analysis hints at a reduced responsiveness of policymakers to \emph{business interests} \citep[e.g.,][]{Gilens2014TestingCitizens,Balles2020SpecialAttention,Varone2020}.
Finally, we turn to the mechanisms through which \emph{political budget} might affect lobbying influence. Prior literature has theorized that financial resources allow stakeholders to engage in various lobbying activities, and that these activities in turn increase the chance that the actors influence policies. By explicitly modeling the link between \emph{political budget} and \emph{influence} that is mediated by various lobbying strategies, we found several important patterns that help to solve the puzzles posited by existing studies. First, both inside and outside lobbying strategies matter. More precisely, financial resources that are put to work through either inside lobbying strategies (activities directly targeted at policymakers) or outside lobbying strategies (activities that create electoral pressure indirectly by influencing the general public) increase the chance that an actor influences policies. The pattern is consistent between the self-perceived and reputational influence measures. Some prior work has also noted that inside lobbying, i.e., direct access to politics or politicians, is a privilege that some, but not all, actors enjoy. Behind this argument lies a theoretical conjecture that inside lobbying is more effective in influencing policy. Our findings hint at this pattern: for both the self-perceived and the reputational influence measure, the effect of financial resources transmitted through inside lobbying practices is larger than that transmitted through outside lobbying practices.
Before concluding this paper, we would like to come back to the distinct behavior of the \emph{preference attainment} measure. As can be seen in Figure~\ref{f_coefplot.insideoutside}, the mediation analysis again shows that \emph{political budget} is negatively associated with \emph{political influence}, and the pattern holds equally for inside and outside lobbying strategies.
There are several possible explanations for the negative effect of \emph{political budget} on lobbying influence. Taken at face value, the results suggest that actors with a larger \textit{political budget} attain fewer preferences. Moreover, the mediation analysis suggests that these actors engage in a high number of outside activities and that such a lobbying strategy leads to fewer preferences attained. Consequently, it is possible that it is primarily the use of an outside lobbying strategy that leads to lower preference attainment. A larger \textit{political budget} would then not per se be associated with fewer preferences attained; rather, a larger \textit{political budget} could lead to the use of more outside lobbying strategies, which would then have a negative effect on preference attainment. The fact that a comparatively high share of the effect of \textit{political budget} on preference attainment appears to run through the outside count variable lends support to this hypothesis. However, it should be noted that in the literature the use of an outside lobbying strategy is typically associated with low-budget campaigns. Wealthy actors supposedly use their resources for preferential access to decision makers and are thus more likely to pursue an inside lobbying strategy.
Alternatively, the results may stem from the fact that the chosen application of preference attainment defines the window of influence too narrowly. It is conceivable, for example, that the first draft of a bill already takes into account the preferences of financially powerful actors to the maximum extent. Such regulatory capture of the administrative authorities has been described in detail in the literature. Changes made in the subsequent policy process in parliament can then only lower the preference attainment score of such actors. As a consequence, the presented analysis would truthfully identify a negative association between \textit{political budget} and preferences attained between the first bill and the final parliamentary bill; using it as an influence measure, however, would misrepresent the influence of \textit{political budget} on energy policy overall. This contrasts with the perceptional measures, which can also capture possible influence exerted prior to the first bill.
\newpage
\section{Conclusion}
Understanding how interest groups shape policy has long been of interest to many political scientists and economists. However, various empirical findings have not been in agreement as to even the most (seemingly) straightforward question concerning lobbying influence: whether better-endowed stakeholders influence policies more.
Our paper revisited this relationship between financial resources and lobbying influence, in such a way that would help us understand potential sources of discrepancies.
We approached this problem from three angles. First, carefully studying the varying constructs of lobbying influence employed in the literature, we built our analysis on a comparison of three types of influence measures that appear frequently in existing studies: the \textit{self-perceived}, \textit{reputation-based}, and \textit{preference attainment} measures. What is important here is that we constructed the three measures based on the same (or nested) set of actors by combining data from an original survey of 312 Swiss energy policy stakeholders with document data from policy consultation processes. Existing empirical studies employ different measures of influence in different policy contexts, but few scholars seem to be concerned that qualitatively different conclusions might simply be artifacts of distinct influence measures (and the selection of other covariates).
Second, as our survey directly probed the size of budget allocated specifically to political purposes, it allowed us to overcome common challenges associated with the operationalization of relevant financial resources. Finally, we addressed the issue of potential confounding between actors’ \textit{political budget} and various lobbying activities. In other words, we tested whether \textit{political budget} can be at work by enabling various lobbying activities, instead of treating their effects independently as the literature has suggested.
Public policymaking provides ample opportunity for stakeholders to exert influence on legislation, and this may well be intended. Transparency about who successfully influences policy, as well as common knowledge of the effectiveness of resources and strategies, may nonetheless represent important complements.
Our study reveals that the identification of influential actors strongly depends on the influence measure applied. Similarly, conclusions on the effect of \textit{political budget} on influence may be comparable among perceptional measures of influence but contrast with the findings from the \emph{preference attainment} measure. Finally, there is much to suggest that the lobbying activities an actor engages in represent a crucial factor in translating the potential of \textit{political budget} into actual influence.
Regarding future research, observing the relationships from this analysis over time represents the logical next step. As in most studies of influence, our empirical strategy cannot rule out the possibility of reverse causality completely. It is possible that the influence of an organization, or the perception thereof, leads to resources being directed towards such actors. Hence, a longitudinal research design may be able to address such concerns. However, given the difficulty of obtaining suitable data even for one point in time only, such an endeavor would require substantial resources.
Nonetheless, such an approach may also be able to further corroborate some of the interpretations deduced from the mediation analyses. Indeed, introducing within-actor variance in terms of lobbying activities as well as \textit{political budget} may further substantiate the plausibility of the underlying, untestable assumptions.
Finally, it would be interesting to learn to what extent our findings also extend to other institutional environments. For example, in addition to the lobbying taking place in parliament, the \textit{preference attainment} scores may be susceptible to the exertion of influence before the authorities submit the first draft to the official consultation process. A comparison between or within countries, taking into account differences in the policy process, may serve to highlight the robustness, or lack thereof, of our findings on the role of \textit{political budget} and lobbying activities regarding lobbying influence.
\newpage
\section{Introduction}
Reference sets encompassing accurate experimental and/or theoretical data have always enjoyed a high popularity in the electronic structure community. Indeed, these allow for rapid and fair comparisons between theoretical models for
properties of interest. The pioneering reference sets are likely the so-called Gaussian-$x$ databases originally collected by Pople and collaborators 25 years ago and constantly extended since then. \cite{Pop89,Cur91,Cur97b,Cur98,Cur07b}
These datasets contain a large number of experimental reference values (atomization energies, bond dissociation energies, etc) and remain popular today to assess the performances of new exchange-correlation functionals (see, for example, Ref.~\citenum{Sun15a}) within density-functional theory (DFT) as well as higher-level methods. \cite{Pet12,Sce20,Yao20} Many other sets collecting reliable experimental and/or theoretical values have been developed throughout the years. The
panel is so wide that being exhaustive is beyond reach, but we can cite: (i) the S22 and S66 benchmark sets of interaction energies for weakly-bound systems; \cite{Jur06,Rez11} (ii) the HEAT set collecting high-accuracy theoretical
formation enthalpies; \cite{Taj04} (iii) the $GW$100 and $GW$5000 sets of ionization energies; \cite{Van15,Stu20} and (iv) the very extended GMTKN$xy$ thermochemical and kinetic databases proposed by Goerigk and Grimme.
\cite{Goe10,Goe11,Goe17} In this framework, one can certainly also pinpoint the successes of Barone's group in deriving very accurate ``semi-experimental'' structural, rotational, and vibrational reference parameters for a large panel of
molecules of various sizes. \cite{Pic15,Pen16,Men17}
Since 2018, our groups have made joint efforts to produce highly-accurate energies of electronically excited states (ESs) in molecular systems, \cite{Loo18a,Loo19c,Loo20a,Loo20d,Loo21a} in line
with the earlier contributions from the M\"ulheim group. \cite{Sch08,Sil10b,Sil10c} Our key results were recently collected in the so-called QUEST database, \cite{Ver21} that contains more than 500 theoretical
best estimates (TBEs) of vertical transition energies (VTEs) computed in small- and medium-sized organic molecules with Dunning's {\emph{aug}-cc-pVTZ} basis set. For the smallest systems (1--3 non-hydrogen atoms),
\cite{Loo18a,Loo20d} most VTEs are of full configuration interaction (FCI) quality and they have been computed with the \textit{Configuration Interaction using a Perturbative Selection made Iteratively} (CIPSI) method.
\cite{Hur73,Gin13,Gin15,Gar17b,Gar18,Gar19} For the molecules containing 4 non-hydrogen nuclei, \cite{Loo20a} most reference values are derived from basis-set-extrapolated coupled-cluster (CC) calculations including contributions from the singles,
doubles, triples, and quadruples (CCSDTQ) \cite{Kuc91}. Finally, for the larger systems, \cite{Loo20d,Ver21} CC with singles,
doubles, and triples (CCSDT) \cite{Nog87,Scu88,Kuc01,Kow01,Kow01b} values are employed to define the TBEs, typically
by computing the CCSDT excitation energies with a double-$\zeta$ basis set and correcting for basis set effects thanks to VTEs determined with the corresponding approximate third-order CC (CC3) model. \cite{Chr95b,Koc95}
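In other words, writing DZ for the double-$\zeta$ basis set and aTZ for {\emph{aug}-cc-pVTZ}, these TBEs follow the composite scheme (notation ours)
\[
E^{\mathrm{TBE}}(\mathrm{aTZ}) \approx E^{\mathrm{CCSDT}}(\mathrm{DZ}) + \left[ E^{\mathrm{CC3}}(\mathrm{aTZ}) - E^{\mathrm{CC3}}(\mathrm{DZ}) \right],
\]
i.e., the CCSDT value obtained in the small basis is corrected by the CC3 basis set effect.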
The original QUEST database contains a reasonably broad panel of organic and inorganic molecules (closed- and open-shell compounds, cyclic and linear systems, pure hydrocarbons, heteroatomic structures, etc) and ESs
(valence and Rydberg transitions, singlet, doublet, and triplet excitations, with or without a significant double excitation character, etc) but it clearly lacked charge-transfer (CT) excitations, an aspect that we have recently corrected. \cite{Loo21a}
As the VTEs cannot be measured experimentally, but are the ``simplest'' ES property to compute, the QUEST database is especially useful for performing cross-comparisons between
computational models. \cite{Loo20c} As examples, the VTEs included in QUEST and their corresponding TBEs have allowed us and others to (i) clearly determine the relative accuracies of CC3 and CCSDT-3; \cite{Loo18a,Loo20a}
(ii) unambiguously define the performance of ADC(3); \cite{Loo20b} (iii) assess the fourth-order approximate CC approach (CC4) compared to its CCSDTQ parent; \cite{Loo21b} and (iv) benchmark hybrid and double-hybrid
exchange-correlation functionals \cite{Cas19,Cas21b,Mes21,Mes21b,Gro21} as well as orbital-optimized excited-state DFT calculations. \cite{Hai21}
Understandably, most molecules included in QUEST are rather compact. To date, only four molecules from the QUEST dataset contain 8 non-hydrogen atoms or more: octatetraene, the highly-symmetric ($D_{2h}$) benzoquinone,
naphthalene, and tetra-azanaphthalene. \cite{Ver21} For comparison, QUEST includes 20 (10) molecules containing 4 (6) non-hydrogen nuclei. \cite{Ver21} There is a clear imbalance here, and the present contribution is a step towards
correcting this bias by providing TBEs for a large number of ESs of the ten compounds sketched in Figure \ref{Fig-1}, which contain 8 to 10 non-hydrogen atoms. These ten compounds have been selected to be of chemical interest
with many building blocks used as chromophores in real-life dye chemistry. As we detail below, beyond extending the QUEST database, the present work also provides, for the vast majority of ESs considered, what can likely be
viewed as the most accurate VTEs to date for these systems. Consequently, the data collected here can also be of interest for more specific studies, e.g., for choosing an appropriate exchange-correlation functional for studies focussing on
optimizing the absorption properties of diketopyrrolopyrroles.
\section{Computational details}
We closely follow the computational protocols used in our previous works, \cite{Loo18a,Loo19c,Loo20a,Loo20b,Loo20d,Ver21,Loo21a} which are briefly summarized below.
Note that all calculations are performed within the frozen-core approximation.
The ground-state geometries are optimized at the CC3 level \cite{Chr95b,Koc95} with the {cc-pVTZ} basis set using CFOUR 2.1. \cite{Mat20} These optimizations are achieved in a Z-matrix format, so that point group symmetry
($C_{2v}$, $C_{2h}$, or $D_{2h}$) is strictly enforced. Default convergence parameters are applied. Cartesian coordinates are provided in the Supporting Information (SI) for all molecules treated here. These
geometries are next used to compute VTEs with CC methods. Note that, in the following, we do not specify the EOM (equation-of-motion) or LR (linear-response) prefix in the CC schemes as the two formalisms
deliver the same transition energies (yet different properties).
In a first step, we perform CCSD/{\emph{aug}-cc-pVTZ} calculations \cite{Pur82,Scu87,Koc90b,Sta93,Sta93b} for the 6--20 lowest-energy ESs of all molecules considering both the singlet and triplet manifolds. The main purpose
of these calculations, performed with GAUSSIAN 16, \cite{Gaussian16} is to screen the various ESs and identify the key molecular orbitals (MOs) associated with these transitions (extended data are given in the SI
for all molecules). The specific nature of these ESs is determined by examining these MOs, which allows, in the vast majority of cases, a straightforward identification of the Rydberg and valence transitions.
For the latter, we also perform ADC(2)/{\emph{aug}-cc-pVTZ} \cite{Tro97,Dre15} calculations using Q-CHEM 5.3/5.4, \cite{Epi21} as well as TD-CAM-B3LYP/{\emph{aug}-cc-pVTZ} \cite{Yan04} calculations with GAUSSIAN, \cite{Gaussian16} in
an effort to estimate the CT character of the valence ESs. Following Ref.~\citenum{Loo21a}, we consider two CT metrics: (i) the electron-hole distance as determined from the ADC(2) transition density matrix,
\cite{Pla14,Pla14b} and (ii) the CT distance as obtained with CAM-B3LYP applying the well-known Le Bahers' model. \cite{Leb11c,Ada15} As detailed elsewhere, \cite{Loo21a} although these two metrics often
yield rather consistent estimates of the CT strength, there is no definitive definition of a CT state. We consider that a given transition has a non-negligible CT character if the electron-hole distance provided by these
metrics is greater than 1~\AA.
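As a reminder, the second of these metrics measures the distance between the barycenters of the regions of density depletion and increment upon excitation,
\[
d_{\mathrm{CT}} = \left| \mathbf{R}_{+} - \mathbf{R}_{-} \right|, \qquad
\mathbf{R}_{\pm} = \frac{\int \mathbf{r} \, \rho_{\pm}(\mathbf{r}) \, d\mathbf{r}}{\int \rho_{\pm}(\mathbf{r}) \, d\mathbf{r}},
\]
where $\rho_{+}$ ($\rho_{-}$) collects the points in space where the electron density increases (decreases) during the transition. \cite{Leb11c}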
In a second stage, we apply CC3 \cite{Chr95b,Koc95} to estimate the VTEs of selected ESs with three atomic basis sets, namely {6-31+G(d)}, {\emph{aug}-cc-pVDZ}, and {\emph{aug}-cc-pVTZ}. These calculations are carried out with CFOUR \cite{Mat20}
and/or DALTON 2018, \cite{dalton} the latter being able to provide CC3 triplet transitions. For all tested cases, the two codes provide the same VTEs within $\pm$0.001 eV. When the CC3/{\emph{aug}-cc-pVTZ} calculations were achievable with
DALTON, we provide below $\%T_1$, that is, the percentage of single excitations involved in a given transition at this level of theory. Consistent with earlier works, \cite{Sch08,Loo18a,Loo19c} $\%T_1$ is typically much larger for
triplet transitions than for their singlet counterparts, the latter having larger contributions from double excitations. This is why, for all singlet ESs considered herein, we also report CCSDT \cite{Nog87,Scu88,Kuc01,Kow01,Kow01b}
excitation energies (computed with CFOUR \cite{Mat20}) obtained with at least one of the two double-$\zeta$ basis sets, the difference between the CC3 and CCSDT VTEs providing a first hint at the convergence of the VTEs with
respect to the maximum excitation degree of the truncated CC series. Of course, neither CC3 nor CCSDT transition energies can be considered to be of FCI quality. Thus, one can certainly wonder what would be the impact of ramping
up further the truncation degree of the CC expansion. However, given the size of the molecules treated here, such a task is clearly beyond computational reach. Besides, we wish to stress that, for most of the molecules depicted
in Figure \ref{Fig-1}, the present CC3 and CCSDT VTEs are the first to be published. Indeed, as detailed in the next Section, all previous efforts typically considered significantly lower levels of theory.
In a third phase, we assess the performances of many wave function methods using these new TBEs. All these benchmark calculations employ the {\emph{aug}-cc-pVTZ} basis set. Consistent with the QUEST database, \cite{Ver21} we test:
CIS(D), \cite{Hea94,Hea95} EOM-MP2, \cite{Sta95c} CC2, \cite{Chr95,Hat00} CCSD, \cite{Pur82,Scu87,Koc90b,Sta93,Sta93b} STEOM-CCSD, \cite{Noo97,Dut18} CCSD(T)(a)*, \cite{Mat16} CCSDR(3), \cite{Chr96b} CCSDT-3,
\cite{Wat96,Pro10} CC3, \cite{Chr95b,Koc95} ADC(2), \cite{Tro97,Dre15} ADC(3), \cite{Tro02,Har14,Dre15} and ADC(2.5). \cite{Loo20b} The ADC(2), ADC(3), and EOM-MP2 calculations are performed with Q-CHEM, \cite{Epi21}
using the resolution-of-the-identity (RI) approximation and tightening the convergence and integral thresholds. Note that the ADC(2.5) VTEs are the simple average of the ADC(2) and ADC(3) values.
The CIS(D) and CCSD VTEs are obtained with GAUSSIAN, \cite{Gaussian16} CC2 and CCSDR(3) results are obtained with DALTON \cite{dalton}, whereas CCSD(T)(a)* and CCSDT-3 energies are computed with CFOUR. \cite{Mat20}
STEOM-CCSD calculations are carried out with ORCA 4.2.1 \cite{Nee12} and we report only ESs for which the active character percentage is larger than $98\%$. In addition, we also evaluate the spin-component-scaled (SCS)
and spin-opposite-scaled (SOS) CC2 approaches, as implemented in TURBOMOLE 7.3. \cite{Hel08,Turbomole} For ADC(2), we apply two distinct sets of SOS parameters, the ones available in Q-CHEM \cite{Kra13} and the ones proposed
by TURBOMOLE. \cite{Hel08}
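In equation form, the ADC(2.5) protocol thus simply reads
\[
E^{\mathrm{ADC(2.5)}} = \frac{1}{2} \left[ E^{\mathrm{ADC(2)}} + E^{\mathrm{ADC(3)}} \right].
\]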
\section{Results and discussion}
\subsection{Reference values}
\subsubsection{Azulene}
The ESs of this isomer of naphthalene were investigated in several theoretical works, \cite{Die04,Mur04b,Fal09,Huz12,Pie13,Vos15,Vey20,Loo21a} and we have considered here six singlet ESs (four valence, two Rydberg) and
four triplet ESs (all valence). We refer the interested reader to Table \ref{Table-1} as well as the SI for further details regarding oscillator strengths and involved MO pairs. The four valence singlet ESs are consistently assigned by experimental
\cite{Gil78,Hir78,Fuj83,Bla08,Vos15} and theoretical \cite{Mur04b,Fal09,Huz12,Pie13,Vos15} approaches. For their Rydberg counterparts, the assignments are clearly more challenging (see below). \cite{Bla08,Pie13}
According to the analysis of the CT strengths presented in our earlier work, \cite{Loo21a} two ESs present a mild CT character with an electron-hole separation around $1$ \AA. For these two states, the present work is,
as far as we are aware, the first to present CC results including iterative triples for azulene.
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of azulene. We provide the symmetry of all states as well as the nature of the transition. The rightmost columns list selected data from the literature.}
\label{Table-1}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{l|cccc|ccccccc}
\hline
&\multicolumn{2}{c}{6-31+G(d)} & \emph{aug}-cc-pVDZ & \emph{aug}-cc-pVTZ & \multicolumn{7}{c}{Lit.} \\
State& CC3 & CCSDT & CC3 & CC3 & Exp.& Exp. & Exp. & Th. & Th. & Th. & Th. \\
\hline
$^1B_2$ (Val, $\pi\ra\pis$) &2.171 &2.163 &2.177 &2.169 &1.77$^a$&1.72$^b$ &1.77$^e$ &1.96$^f$ &2.25$^g$ &1.62$^h$ &1.83$^i$ \\
$^1A_1$ (CT, $\pi\ra\pis$) &3.959 &3.965 &3.878 &3.843 &3.57$^a$ &3.56$^c$ &3.56$^e$ &3.81$^f$ &3.99$^g$ &3.41$^h$ &3.46$^i$ \\
$^1B_2$ (CT, $\pi\ra\pis$) &4.561 &4.580 &4.523 &4.491 & &4.22$^d$ &4.23$^e$ &4.15$^f$ &4.66$^g$ &4.08$^h$ &4.13$^i$ \\
$^1A_2$ (Ryd.) &5.012 &5.031 &4.783 &4.855 & &4.40$^d$ &4.72$^e$ & &4.79$^g$ & & \\
$^1A_1$ (Val, $\pi\ra\pis$) &5.018 &5.060 &4.941 &4.914 & & &4.40$^e$ &4.94$^f$ &5.05$^g$ &4.77$^h$ &4.50$^i$ \\
$^1B_1$ (Ryd.) &5.338 &5.355 &5.216 &5.285 & & &5.19$^e$ & &5.22$^g$ & & \\
$^3B_2$ (Val, $\pi\ra\pis$) &2.211 & &2.189 & &1.72$^a$& & & & & &1.76$^i$ \\
$^3A_1$ (Val, $\pi\ra\pis$) &2.430 & &2.466 & &2.38$^a$& & & & & &2.26$^i$ \\
$^3A_1$ (Val, $\pi\ra\pis$) &2.923 & &2.900 & &2.85$^a$& & & & & &2.70$^i$ \\
$^3B_2$ (Val, $\pi\ra\pis$) &4.203 & &4.161 & &3.86$^a$& & & & & &3.87$^i$ \\
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${Photoelectron spectroscopy from Ref.~\citenum{Vos15};}
$^b${0-0 energy in frozen matrix from Ref.~\citenum{Gil78};}
$^c${0-0 energy from fluorescence study of Ref.~\citenum{Hir78};}
$^d${0-0 energy from the fluorescence spectrum of the jet-cooled derivative in Ref.~\citenum{Fuj83};}
$^e${``Electronic energy'' from pump-probe experiments of Ref.~\citenum{Bla08}. Here, we simply assigned the two lowest Rydberg states according to their energetic ordering;}
$^f${CASPT2/6-31G(d) values from Ref.~\citenum{Mur04b};}
$^g${CCSDR(3)/DZ+P values from Ref.~\citenum{Fal09};}
$^h${$\delta$-CR-EOMCC(2,3)/cc-pVDZ values from Ref.~\citenum{Pie13};}
$^i${DFT/MRCI values from Ref.~\citenum{Vos15}, determined at the anionic geometry.}
\end{footnotesize}
\end{flushleft}
\end{table*}
For the singlet transitions, the variations between CCSDT/{6-31+G(d)} and CC3/{6-31+G(d)} are very mild (roughly $\pm0.02$ eV) except for the second $A_1$ ES for which a slightly larger effect can be noticed ($+0.04$ eV).
In the same vein, the basis set effects are following the expected trends, with maximal variations of approximately $0.07$ eV between the {\emph{aug}-cc-pVDZ} and {\emph{aug}-cc-pVTZ} singlet VTEs at the CC3 level. There is therefore a high
level of consistency between the various values collected in Table \ref{Table-1}.
There are several measurements of the optical properties of azulene performed with diverse experimental techniques, \cite{Gil78,Hir78,Fuj83,Bla08,Vos15} some of which are summarized on the right-hand side of Table \ref{Table-1}.
Let us start with the singlet transitions. Our VTE for the lowest $^1B_2$ is significantly below the measured 0-0 energy, by approximately $-0.4$ eV. We recall that this is actually the expected trend: the vertical energies do not account for the geometric
relaxation in the ES, nor the difference of zero-point vibrational energies of the two states, and should therefore exceed their 0-0 counterparts. \cite{Die04,San16b} The same trend holds for all higher-energy valence transitions, with typical shifts going
from $-0.3$ to $-0.5$ eV between the VTEs listed in Table \ref{Table-1} and the experimental values. For the two Rydberg ESs, attributing the two lowest experimental peaks to the two lowest theoretical Rydberg ESs, one obtains
trends that are not incompatible with the measurements. Yet, one should clearly be very cautious, and more experimental and theoretical analyses would be welcome for these particular ESs. For the triplet transitions, our estimates are slightly
(for the two $^3A_1$ states) or significantly (for the two $^3B_2$ states) larger than the measurements but provide the same ranking.
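Schematically, this expected ordering follows from the decomposition
\[
E_{0\textrm{-}0} = E_{\mathrm{adia}} + \Delta E_{\mathrm{ZPVE}}, \qquad E_{\mathrm{vert}} \geq E_{\mathrm{adia}},
\]
where $E_{\mathrm{adia}}$ is the energy difference between the two states at their respective minima and $\Delta E_{\mathrm{ZPVE}}$ (typically negative) is the difference between their zero-point vibrational energies.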
As compared to the first CASPT2 values published almost two decades ago, \cite{Mur04b} our CC3/{\emph{aug}-cc-pVTZ} values are similar for the $^1A_1$ VTEs, but higher for the $^1B_2$ excitations. If one compares to the previous high-level CC
estimates including perturbative triples corrections (and performed with relatively compact basis sets), \cite{Fal09,Pie13} one notes that the present energies are slightly lower/higher than the CCSDR(3) values of Ref.~\citenum{Fal09}
for the valence/Rydberg transitions, whereas, surprisingly, the $\delta$-CR-EOMCC(2,3) results \cite{Fra11a} of Ref.~\citenum{Pie13} are significantly lower for a reason that remains unclear to us.
Note, however, that the CCSD excitation energies of Ref.~\citenum{Pie13} and ours (see Section \ref{sec:bench}) are in excellent agreement.
\subsubsection{BOD and BTD}
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of BOD and BTD. See caption of Table \ref{Table-1} for more details.}
\label{Table-2}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{ll|ccccc|cc}
\hline
\multicolumn{8}{c}{2,1,3-benzoxadiazole (BOD)}\\
&&\multicolumn{2}{c}{6-31+G(d)} & \multicolumn{2}{c}{\emph{aug}-cc-pVDZ} & \emph{aug}-cc-pVTZ & \multicolumn{2}{c}{Lit.}\\
State& $\%T_1$& CC3 & CCSDT & CC3 & CCSDT & CC3 & Exp. & Th.\\
\hline
$^1B_2$ (Val, $\pi\ra\pis$) &88.6 &4.706 &4.794 &4.575 &4.661 &4.520 &4.00$^a$ &4.16$^b$\\
$^1A_1$ (Val, $\pi\ra\pis$) &83.5 &4.989 &4.990 &4.940 &4.945 &4.906 &4.40$^a$ &4.85$^b$\\
$^1A_2$ (Val, $n\ra\pis$) &86.9 &5.461 &5.483 &5.368 &5.396 &5.284 \\
$^1B_1$ (Val, $n/\sigma\ra\pis$) &85.6 &5.997 &6.009 &5.899 &5.915 &5.833 \\
$^3B_2$ (Val, $\pi\ra\pis$) &97.5 &2.763 & &2.751 & &2.739 \\
$^3A_1$ (Val, $\pi\ra\pis$) &97.2 &4.181 & &4.118 & &4.084 \\
\hline
\multicolumn{8}{c}{2,1,3-benzothiadiazole (BTD)}\\
&&\multicolumn{2}{c}{6-31+G(d)} & \multicolumn{2}{c}{\emph{aug}-cc-pVDZ} & \emph{aug}-cc-pVTZ & \multicolumn{2}{c}{Lit.}\\
State& $\%T_1$& CC3 & CCSDT & CC3 & CCSDT & CC3 & Exp.& Th.\\
\hline
$^1B_2$ (CT, $\pi\ra\pis$) &86.1 &4.419 &4.481 &4.301 &4.363 &4.229 &3.77$^a$&3.94$^b$ \\
$^1A_1$ (Val, $\pi\ra\pis$) &86.5 &4.465 &4.477 &4.405 &4.417 &4.359 &4.05$^a$&4.11$^b$ \\
$^1A_2$ (Val, $n\ra\pis$) &87.7 &4.977 &4.984 &4.886 &4.897 &4.795 & & \\
$^1B_1$ (Val, $n/\sigma\ra\pis$) &86.1 &5.616 &5.620 &5.520 &5.525 &5.417 & & \\
$^3B_2$ (Val, $\pi\ra\pis$) &97.3 &2.833 & &2.836 & &2.820 &2.28$^c$& \\
$^3A_1$ (Val, $\pi\ra\pis$) &97.3 &3.646 & &3.551 & &3.485 & & \\
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${Vapor phase 0-0 energies from Ref.~\citenum{Hol69};}
$^b${ADC(3)/{\emph{aug}-cc-pVDZ} value from Ref.~\citenum{Prl16b}; for the lowest singlet of BTD, a value of $4.28$ eV was reported as basis set extrapolated TBE in Ref.~\citenum{Loo21a}
whereas a $4.15$ eV TD-DFT estimate is given in Ref.~\citenum{Ref11}; }
$^c${0-0 phosphorescence measured in a frozen dichlorobenzene matrix from Ref.~\citenum{Lin78}. The same work reports a 0-0 energy of $3.52$ eV for the lowest singlet, significantly
redshifted as compared to the vapor measurement of Ref.~\citenum{Hol69}.}
\end{footnotesize}
\end{flushleft}
\end{table*}
Our results for 2,1,3-benzoxadiazole (BOD, also named benzofurazan in the literature) and 2,1,3-benzothiadiazole (BTD) are listed in Table \ref{Table-2}. These building blocks are popular
in many applications: the former can be used, e.g., as a fluorescent probe, \cite{Liu11f} and the latter as an accepting moiety in solar cell materials. \cite{Li12e} Unsurprisingly, the ordering of the ESs is the same for the two
compounds, the sulfur-bearing molecule presenting more redshifted values, except for the lowest triplet state. The lowest ES of BTD has a significant CT character, \cite{Loo21a}
whereas its BOD counterpart does not display a significant separation between the electron and the hole according to popular metrics. \cite{Leb11c,Pla14,Pla14b} As can be seen in Table
\ref{Table-2}, rather usual basis set effects are obtained, with a regular decrease of the transition energies as the basis set size increases, although the amplitude of the changes is strongly
state-dependent, as illustrated by the ``insensitivity'' of the lowest triplet state to basis set effects. For the two lowest triplet ESs of $\pi\ra\pis$ nature, very large $\%T_1$ ($> 97$\%) are calculated, and one
can be confident that the CC3 values are accurate. For the singlet ESs, the differences between the CC3 and CCSDT results are of the order of $0.02$ eV, except for the lowest $^1B_2$ ES:
the CC3 estimates ($4.71$ and $4.42$ eV for BOD and BTD, respectively) are significantly smaller than their CCSDT counterparts ($4.79$ and $4.48$ eV, respectively). Such a trend is typical of CT states, \cite{Koz20,Loo21a}
and seems to apply to the first ES of BOD as well, even though the tested metrics did not reveal a significant CT character.
For substituted furazans, several TD-DFT studies can be found, \cite{Tsu09,Bro12,Chi14c} but apparently the only previous investigation of the building block itself is the work of Prlj \emph{et al.}
\cite{Prl16b} who studied the $L_a$/$L_b$ (or $^1B_2$/$^1A_1$) ordering in several bicyclic systems. That work reports CC2 transition energies (4.588 and 4.887 eV) closely matching the present values, whereas
the ADC(3) VTEs show a larger gap between the two ESs (see rightmost column in Table \ref{Table-2}). The vapour spectrum of BOD was measured in 1969, \cite{Hol69} and the experimental
0-0 energies are shifted by approximately $-0.5$ eV compared to our VTEs, which is a typical trend. For BTD, more measurements are available \cite{Hol69,Gor71,Hen75,Lin78} and the lowest transitions were treated
previously with TD-DFT \cite{Ref11,Chi14b,Men15,Prl16b} and wave function approaches, \cite{Prl16b,Loo21a} a few relevant values being summarized in Table \ref{Table-2}. The gap between the two lowest
singlet transitions is around $0.10$ eV according to our data, the $^1B_2$ ES being the lowest-energy state. Previous CC2 and ADC(3) values report a similar pattern. \cite{Prl16b} In contrast, CCSD/{\emph{aug}-cc-pVTZ} yields almost degenerate
transitions but with the incorrect ordering (see the SI). The experimental BTD 0-0 energies of Ref.~\citenum{Hol69} are red-shifted by $-0.23$ and $-0.35$ eV as compared to those of BOD for the $^1B_2$ and $^1A_1$
transitions, respectively. The CC3/{\emph{aug}-cc-pVTZ} heteroatomic shifts of the vertical energies are of the same order of magnitude, i.e., $-0.29$ and $-0.55$ eV.
\subsubsection{DPP}
Due to its intense absorption around 500 nm, DPP is also an extraordinarily popular moiety for designing dyes used in automotive paints, light-harvesting applications, or fluorescent sensors. \cite{Grz15}
While there exist many TD-DFT calculations of DPP-containing compounds in the literature, we could not find previous theoretical works devoted to the chromogen itself, except for a 2009
TD-DFT contribution \cite{Lun09b} and studies limited to the solvation effects on the lowest transition. \cite{Chi14b,Men15} According to our calculations (Table \ref{Table-3}), the lowest
singlet ES is a bright $^1B_u$ that interestingly lies more than $1.5$ eV above the corresponding triplet, hinting at inefficient intersystem crossing. Next, one finds two (nearly) dark ESs of $^1A_u$ ($n\ra\pis$) and
$^1A_g$ ($\pi\ra\pis$) symmetries, whereas the fourth ES is a dark $^1B_g$ ($n\ra\pis$). For all eight transitions listed in Table \ref{Table-3}, the basis set effects are rather small, with variations of ca.~$-0.05$ to
$-0.10$ eV between the {6-31+G(d)} and {\emph{aug}-cc-pVTZ} VTEs. With the former basis set, we could perform CCSDT calculations, and the outcome hints that the CC3 VTEs are slightly too low by approximately $0.03$-$0.04$ eV
for the four singlet ESs. As in BOD and BTD, very large $\%T_1$ are found for the triplet ESs, and one can likely view the CC3/{\emph{aug}-cc-pVTZ} excitation energies as reliable TBEs for the triplets.
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of DPP. See caption of Table \ref{Table-1} for more details.}
\label{Table-3}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{ll|cccc|c}
\hline
&&\multicolumn{2}{c}{6-31+G(d)} & \emph{aug}-cc-pVDZ & \emph{aug}-cc-pVTZ & Lit. \\
State& $\%T_1$& CC3 & CCSDT & CC3 & CC3 & Th. \\
\hline
$^1B_u$ (Val, $\pi\ra\pis$) &88.4 &3.650 &3.683 &3.541 &3.535& 3.42$^a$\\
$^1A_u$ (Val, $n\ra\pis$) &83.7 &3.953 &3.989 &3.907 &3.863& 3.75$^a$\\
$^1A_g$ (Val, $\pi\ra\pis$) &87.0 &3.959 &4.009 &3.872 &3.910& 3.95$^a$\\
$^1B_g$ (Val, $n\ra\pis$) &81.5 &4.397 &4.426 &4.324 &4.309& 4.21$^a$\\
$^3B_u$ (Val, $\pi\ra\pis$) &97.4 &1.957 & &1.923 &1.927& \\
$^3A_g$ (Val, $\pi\ra\pis$) &97.3 &3.804 & &3.751 &3.743& \\
$^3A_u$ (Val, $n\ra\pis$) &94.9 &3.867 & &3.787 &3.781& \\
$^3B_g$ (Val, $n\ra\pis$) &94.5 &4.310 & &4.240 &4.226& \\
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${TD-PBE0/6-311++G(d,p) from Ref.~\citenum{Lun09b}.}
\end{footnotesize}
\end{flushleft}
\end{table*}
\subsubsection{FF, PP, and TT}
Furo[3,2-$b$]furan (FF), 1,4-dihydropyrrolo[3,2-$b$]pyrrole (PP), and thieno[3,2-$b$]thiophene (TT) are centrosymmetric bicyclic molecules encompassing two identical fused five-membered rings. TT is often used as
a linker in $\pi$-delocalized polymers, \cite{Mcc09} whereas PP is an increasingly popular accepting moiety, notably in quadrupolar systems showing large nonlinear optical responses. \cite{Tas19} Our
results are collected in Table \ref{Table-4}.
According to our calculations, the lowest singlet ES of FF is of Rydberg character. It presents almost the same VTE as the bright $^1B_u$, the latter being typical in $\pi$-delocalized dyes of $C_{2h}$ symmetry.
The hallmark dark $\pi\ra\pis$ $^1A_g$ ES lies approximately $0.6$ eV higher. As expected, it has a smaller single excitation character ($\%T_1=82.6$\%) than the other ESs treated here, but the difference between the
CC3 and CCSDT VTEs remains small ($-0.02$ eV). This can be compared to the $^1A_g$ ES in hexatriene ($\%T_1=65.3$\%) for which the CC3-CCSDT difference is about five times larger. \cite{Ver21}
We have also found two other Rydberg transitions of $B_g$ symmetry, and two $\pi\ra\pis$ triplet ESs with very large $\%T_1$, strongly redshifted compared to the corresponding singlet excitations. Going
from {\emph{aug}-cc-pVDZ} to {\emph{aug}-cc-pVTZ} induces an increase/decrease of the VTEs for the Rydberg/valence transitions. In PP, the CC3/{\emph{aug}-cc-pVTZ} calculations indicate that the four lowest singlet transitions are of Rydberg nature
(the $\pi\ra\pis$ $^1B_u$ lies higher, at 5.499 eV according to CC3/{\emph{aug}-cc-pVTZ}). All these four ESs have a strong single-excitation character. In contrast, in the triplet manifold, the lowest-energy ES has a valence character, the two following ones being
of Rydberg nature. The picture is vastly different in TT, in which one first notices two low-lying nearly-degenerate singlet valence ESs of $B_u$ symmetry (see also the discussion below), the Rydberg
transitions appearing at slightly higher energy. In the triplet manifold of TT, three valence $\pi\ra\pis$ ESs could be identified. All seven ESs of TT listed in Table \ref{Table-4} are characterized by $\%T_1 > 85$\%,
highly similar CC3 and CCSDT VTEs, and rather mild basis set effects.
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of FF, PP, and TT. See caption of Table \ref{Table-1} for more details.}
\label{Table-4}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{ll|ccccc|c}
\hline
\multicolumn{8}{c}{Furo[3,2-$b$]furan (FF)}\\
&&\multicolumn{2}{c}{6-31+G(d)} & \multicolumn{2}{c}{\emph{aug}-cc-pVDZ} & \emph{aug}-cc-pVTZ & Lit.\\
State& $\%T_1$& CC3 & CCSDT & CC3 & CCSDT & CC3 & Th.\\
\hline
$^1A_u$ (Ryd) &93.4 &5.590 &5.602 &5.357 &5.361 &5.430& \\
$^1B_u$ (Val, $\pi\ra\pis$) &91.5 &5.644 &5.672 &5.500 &5.526 &5.463&5.63$^a$ \\
$^1B_g$ (Ryd) &93.4 &5.985 &6.002 &5.783 &5.789 &5.859& \\
$^1B_g$ (Ryd) &93.1 &6.148 &6.165 &5.934 &5.942 &5.993& \\
$^1A_g$ (Val, $\pi\ra\pis$) &82.6 &6.250 &6.228 &6.080 &6.067 &6.040& \\
$^3B_u$ (Val, $\pi\ra\pis$) &97.9 &3.661 & &3.601 & &3.578& \\
$^3A_g$ (Val, $\pi\ra\pis$) &98.2 &4.956 & &4.897 & &4.869& \\
\hline
\multicolumn{8}{c}{1,4-Dihydropyrrolo[3,2-$b$]pyrrole (PP)}\\
&&\multicolumn{2}{c}{6-31+G(d)} & \multicolumn{2}{c}{\emph{aug}-cc-pVDZ} & \emph{aug}-cc-pVTZ \\
State& $\%T_1$& CC3 & CCSDT & CC3 & CCSDT & CC3 \\
\hline
$^1A_u$ (Ryd) &92.8 &4.558 &4.563 &4.454 &4.445 &4.545\\
$^1B_g$ (Ryd) &92.5 &4.761 &4.768 &4.656 &4.649 &4.746\\
$^1A_u$ (Ryd) &92.0 &5.088 &5.078 &5.020 &4.994 &5.133\\
$^1B_g$ (Ryd) &93.1 &5.275 &5.281 &5.067 &5.055 &5.145\\
$^3B_u$ (Val, $\pi\ra\pis$) &97.9 &3.921 & &3.867 & &3.841\\
$^3A_u$ (Ryd) &97.4 &4.529 & &4.431 & &4.524\\
$^3B_g$ (Ryd) &97.3 &4.739 & &4.641 & &4.733\\
\hline
\multicolumn{8}{c}{Thieno[3,2-$b$]thiophene (TT)}\\
&&\multicolumn{2}{c}{6-31+G(d)} & \multicolumn{2}{c}{\emph{aug}-cc-pVDZ} & \emph{aug}-cc-pVTZ & Lit. \\
State& $\%T_1$& CC3 & CCSDT & CC3 & CCSDT & CC3 & Th. \\
\hline
$^1B_u$ (Val, $\pi\ra\pis$) &87.5 &5.100 &5.096 &5.003 &5.004 &4.964&5.04$^b$\\
$^1B_u$ (Val, $\pi\ra\pis$) &90.6 &5.439 &5.460 &5.301 &5.320 &5.224&5.29$^b$\\
$^1B_g$ (Ryd) &90.3 &5.465 &5.465 &5.455 &5.454 &5.412&\\
$^1A_u$ (Ryd) &91.6 &5.487 &5.485 &5.483 &5.474 &5.518&\\
$^3B_u$ (Val, $\pi\ra\pis$) &97.7 &3.488 & &3.490 & &3.465&\\
$^3B_u$ (Val, $\pi\ra\pis$) &97.2 &4.376 & &4.301 & &4.261&\\
$^3A_g$ (Val, $\pi\ra\pis$) &97.9 &4.669 & &4.613 & &4.580&\\
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${ADC(2)/cc-pVQZ value from Ref.~\citenum{Prl15};}
$^b${SAC-CI/cc-pVTZ value from Ref.~\citenum{Prl15}.}
\end{footnotesize}
\end{flushleft}
\end{table*}
For FF, we have found only one previous study, \cite{Prl15} which reported an ADC(2) value for the lowest $^1B_u$ transition in reasonable agreement with the present estimate. For the non-substituted PP,
we could not find any previous theoretical or experimental work. In contrast, TT has been studied at various levels of theory. \cite{Chi14b,Prl15,Men15} For its two lowest ESs, a refined theoretical study
by the Corminboeuf group \cite{Prl15} highlighted the challenge of obtaining an accurate ordering with TD-DFT (almost) independently of the selected exchange-correlation functional. At the CCSD/{\emph{aug}-cc-pVTZ}
level, these two ESs are quite strongly mixed in terms of underlying MOs (see Table S10 in the SI). Our best estimates provide the same ordering as the CC2 and SAC-CI approaches used in Ref.~\citenum{Prl15}, i.e.,
the lowest ES is dominated by a $\text{HOMO}-1$ to LUMO character, whereas the second ES mainly corresponds to a HOMO to LUMO excitation.
\subsubsection{Phthalazine and quinoxaline}
We also consider two diazanaphthalenes belonging to the $C_{2v}$ point group that are popular building blocks in dye chemistry, namely phthalazine and quinoxaline. \cite{Wu13c,Thi16} The latter presents lowest singlet and triplet
ESs of different chemical natures, making quinoxaline derivatives particularly appealing for purely organic TADF applications, thanks to an exceptionally efficient intersystem crossing. \cite{Shi15b} We report in Table \ref{Table-5}
numerous singlet and a few triplet excited states for these two compounds.
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of phthalazine and quinoxaline. See caption of Table \ref{Table-1} for more details.}
\label{Table-5}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{l|cccc|ccccc}
\hline
\multicolumn{10}{c}{Phthalazine}\\
&\multicolumn{2}{c}{6-31+G(d)} & \emph{aug}-cc-pVDZ & \emph{aug}-cc-pVTZ & \multicolumn{4}{c}{Lit.} \\
State& CC3 & CCSDT & CC3 & CC3 & Exp.& Exp. & Th. & Th. \\
\hline
$^1A_2$ (CT, $n\ra\pis$) &3.986 &4.012 &3.889 &3.872 &3.61$^a$ &3.01$^c$ &3.68$^d$ &3.74$^e$\\
$^1B_1$ (CT, $n\ra\pis$) &4.427 &4.446 &4.317 &4.283 &3.92$^a$ &3.72$^c$ &4.12$^d$ &4.20$^e$\\
$^1A_1$ (Val, $\pi\ra\pis$) &4.539 &4.517 &4.501 &4.473 &4.13$^a$ &4.09$^c$ & &4.46$^e$\\
$^1B_2$ (Val, $\pi\ra\pis$) &5.364 &5.406 &5.201 &5.146 &4.86$^a$ &4.59$^c$ &5.27$^d$ &4.98$^e$\\
$^1B_1$ (CT, $n\ra\pis$) &5.662 &5.690 &5.561 &5.520 & & & &\\
$^1A_2$ (Mixed) &5.954 &6.052 &5.745 &5.744 & & & &\\
$^1A_2$ (CT, $n\ra\pis$) &6.037 &6.043 &5.930 &5.870 & &5.33$^c$ & &\\
$^1A_1$ (Val, $\pi\ra\pis$) &6.243 &6.204 &6.185 &6.148 & & & &\\
$^1A_2$ (Ryd) &6.606 &6.620 &6.362 &6.430 & & & &\\
$^1B_2$ (Ryd) &6.379 &6.408 &6.133 &6.234 & & & &\\
$^1A_1$ (Val, $\pi\ra\pis$) &6.510 &6.554 &6.410 &6.364 &5.84$^a$ & & &\\
$^3B_2$ (Val, $\pi\ra\pis$) &3.452 & &3.440 &3.430 &2.85$^b$ &2.74$^c$ & &3.42$^e$\\
$^3A_2$ (CT, $n\ra\pis$) &3.721 & &3.628 &3.626 & & & &3.48$^e$\\
$^3B_1$ (CT, $n\ra\pis$) &3.801 & &3.720 &3.711 & & & &3.67$^e$\\
$^3A_1$ (Val, $\pi\ra\pis$) &4.333 & &4.266 &4.224 & & & &4.30$^e$\\
\hline
\multicolumn{10}{c}{Quinoxaline}\\
&\multicolumn{2}{c}{6-31+G(d)} & \emph{aug}-cc-pVDZ & \emph{aug}-cc-pVTZ & \multicolumn{5}{c}{Lit.} \\
State& CC3 & CCSDT & CC3 & CC3 & Exp.& Exp. & Exp. & Th. & Th. \\
\hline
$^1B_1$ (Val, $n\ra\pis$) &3.944 &3.954 &3.831 &3.790 &3.58$^a$ &3.36$^c$&3.36$^f$ &3.83$^d$ &3.76$^e$ \\
$^1A_1$ (Val, $\pi\ra\pis$) &4.333 &4.318 &4.295 &4.263 &4.00$^a$ &3.97$^c$&3.96$^f$ & &4.26$^e$ \\
$^1B_2$ (CT, $\pi\ra\pis$) &4.772 &4.827 &4.643 &4.586 &4.34$^a$ &4.09$^c$& &4.89$^d$ &4.45$^e$ \\
$^1A_2$ (Val, $n\ra\pis$) &5.220 &5.233 &5.109 &5.087 & & & &5.39$^d$ &5.03$^e$ \\
$^1A_2$ (Val, $n\ra\pis$) &5.549 &5.547 &5.442 &5.390 & & & & & \\
$^1A_1$ (CT, $\pi\ra\pis$) &5.770 &5.761 &5.716 &5.674 &5.35$^a$ &5.33$^c$&5.36$^f$ & & \\%
$^1B_1$ (CT, $n\ra\pis$) &6.368 &6.433 &6.163 &6.140 & &5.70$^c$& & & \\
$^1B_2$ (Val, $\pi\ra\pis$) &6.489 &6.511 &6.332 &6.277 & & & & & \\
$^3B_2$ (Val, $\pi\ra\pis$) &3.286 & &3.270 &3.255 & &2.68$^c$& & &3.26$^e$ \\%
$^3B_1$ (Val, $n\ra\pis$) &3.461 & &3.368 &3.352 & &3.04$^c$& & &3.31$^e$ \\%
$^3A_1$ (Val, $\pi\ra\pis$) &4.012 & &3.919 &3.875 & & & & &4.76$^e$ \\%
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${MCD spectra measured in $n$-heptane from Ref.~\citenum{Kai78};}
$^b${0-0 phosphorescence in a frozen matrix from Ref.~\citenum{Lim70};}
$^c${0-0 energy collected in Ref.~\citenum{Inn88} (see references therein);}
$^d${CASPT2/cc-pVQZ values from Ref.~\citenum{Mor09};}
$^e${CC2/{\emph{aug}-cc-pVDZ} data from Ref.~\citenum{Eti17};}
$^f${Gas-phase 0-0 energies from Ref.~\citenum{Gla70}.}
\end{footnotesize}
\end{flushleft}
\end{table*}
The impact of including full (CCSDT) rather than approximate (CC3) triples is never large, yet it is non-negligible for several ESs of phthalazine, with increases of the VTEs by $0.04$ eV for the two highest ESs considered in Table \ref{Table-5}
as well as for the lowest singlet state of $B_2$ symmetry. For quinoxaline, the largest differences between the CCSDT/{6-31+G(d)} and CC3/{6-31+G(d)} transition energies are found for two CT transitions: $^1B_2$ ($+0.06$ eV) and
$^1B_1$ ($+0.07$ eV). As already explained, such slight CC3 underestimation of the CT VTEs has been reported before. \cite{Koz20,Loo21a} Regarding basis set effects, nothing beyond expectations emerges with a mean
shift of $-0.04$ eV when switching from {\emph{aug}-cc-pVDZ} to {\emph{aug}-cc-pVTZ} for quinoxaline, the actual shift being not very ES-dependent (the corrections range from $-0.015$ to $-0.057$ eV). The scenario is similar for the phthalazine CT and valence excitations
(mean impact of going from {\emph{aug}-cc-pVDZ} to {\emph{aug}-cc-pVTZ} is $-0.03$ eV, with values ranging from $-0.001$ to $-0.060$ eV), but both the sign and magnitude of the basis corrections are vastly different for the two Rydberg transitions.
Several theoretical and experimental transition energies have been previously given for both systems, \cite{Lim70,Gla70,Kai78,Inn88,Mor09,Eti17} though, to the best of our knowledge, none reported so many ESs, nor considered third-order CC models.
We can thus consider the VTEs of Table \ref{Table-5} as the most accurate published to date. When comparing to experimental data, it appears that, as for the above-discussed compounds, the VTEs exceed the energies of the
experimental peaks \cite{Kai78} typically by $0.2$-$0.4$ eV, whereas even larger differences are found with respect to the measured 0-0 energies. \cite{Inn88} The six CASPT2/cc-pVQZ values of Ref.~\citenum{Mor09} are in
reasonable agreement with the present CC3/{\emph{aug}-cc-pVTZ} results with a mean absolute deviation (MAD) of $0.25$ eV, the usual CASPT2 trend of yielding too small VTEs being clearly highlighted here, except for the first $^1B_2$ transition in phthalazine.
The deviations with respect to the CC2/{\emph{aug}-cc-pVDZ} data of Ref.~\citenum{Eti17} are small (MAD of $0.09$ eV) if one excludes the $^3A_1$ transition of quinoxaline for which a very large difference of $0.89$ eV is found between the present value
and the literature estimate. Given that the $\%T_1$ value associated with this state is very large (96.9\%) and that our CC2/{\emph{aug}-cc-pVTZ} result computed on the geometry used in the present study is only $0.16$ eV off its CC3 counterpart, this large
difference is somewhat surprising.
\subsubsection{TTF}
Tetrathiafulvalene (TTF) possesses very specific redox properties, \cite{Nie00,Sai07} but its optical signatures are also of interest, notably due to its accepting properties in CT complexes \cite{Nie00} as well as to the presence of several low-lying
$\pi\ra\sigmas$ transitions, that are rather unusual in organic derivatives. \cite{Pou02b} We consider here the closed-shell singlet form of TTF (Table \ref{Table-6}), and determine VTEs for 8 singlet and 6 triplet ESs. It is noteworthy that
we enforce planarity during the optimization, to benefit from the high $D_{2h}$ symmetry, whereas the actual structure is very slightly bent. \cite{Har94}
\begin{table*}[htp]
\caption{\small Vertical transition energies (in eV) of TTF. See caption of Table \ref{Table-1} for more details.}
\label{Table-6}
\footnotesize
\vspace{-0.3 cm}
\begin{tabular}{ll|cccc|ccccc}
\hline
&&\multicolumn{2}{c}{6-31+G(d)} & \emph{aug}-cc-pVDZ & \emph{aug}-cc-pVTZ & \multicolumn{5}{c}{Lit.}\\
State& $\%T_1$& CC3 & CCSDT & CC3 & CC3 & Exp.& Th. & Th. & Th. & Th. \\
\hline
$^1B_{3u}$ (Val, $\pi\ra\sigmas$) &90.1 &2.958 &2.971 &2.866 &2.787 &2.72$^a$ &2.08$^b$&2.57$^c$&2.68$^d$&2.42$^e$ \\
$^1B_{2u}$ (Val, $\pi\ra\pis$) &87.9 &4.159 &4.190 &3.795 &3.742 &3.35$^a$ &3.05$^b$&3.30$^c$&3.46$^d$&3.31$^e$ \\
$^1B_{1g}$ (Val, $\pi\ra\sigmas$) &90.5 &4.059 &4.067 &4.013 &3.979 & &3.31$^b$&3.72$^c$& \\
$^1B_{2g}$ (Val, $\pi\ra\sigmas$) &89.3 &4.083 &4.082 &4.057 &4.048 & &3.41$^b$&3.92$^c$& \\
$^1B_{3u}$ (Ryd.) &92.8 &4.192 &4.193 &4.048 &4.113 & &3.63$^b$&4.25$^c$& \\
$^1B_{3g}$ (Val, $\pi\ra\pis$) &86.2 &4.496 &4.521 &4.264 &4.218 & &3.44$^b$&3.72$^c$& \\
$^1B_{1u}$ (Val, $\pi\ra\pis$) &90.6 &4.805 &4.816 &4.601 &4.514 &3.91$^a$ &4.01$^b$&4.28$^c$&4.25$^d$&4.18$^e$ \\
$^1B_{2g}$ (Ryd.) &92.0 &4.648 &4.649 &4.495 &4.551 & &3.80$^b$& & \\
$^3B_{3u}$ (Val, $\pi\ra\sigmas$) &96.7 &2.804 & &2.727 &2.652 & &1.92$^b$&2.33$^c$& \\
$^3B_{1u}$ (Val, $\pi\ra\pis$) &97.7 &3.158 & &3.055 &2.993 & &2.76$^b$&2.64$^c$& \\
$^3B_{2u}$ (Val,$\pi\ra\pis$) &97.1 &3.383 & &3.160 &3.123 & &2.89$^b$&2.64$^c$& \\
$^3B_{3g}$ (Val, $\pi\ra\pis$) &97.4 &3.565 & &3.431 &3.400 & &3.11$^b$&2.90$^c$& \\
$^3B_{1g}$ (Val, $\pi\ra\sigmas$) &96.8 &3.862 & &3.833 &3.803 & & & & \\
$^3B_{2g}$ (Val, $\pi\ra\sigmas$) &96.8 &3.946 & &3.942 &3.916 & & & &\\
\hline
\end{tabular}
\vspace{-0.3 cm}
\begin{flushleft}
\begin{footnotesize}
$^a${$\lambda_\mathrm{max}$ in hexane from Ref.~\citenum{Eng77}; very similar values have been reported in other measurements;\cite{Cof71,Wul77}}
$^b${MS-CASPT2 values from Ref.~\citenum{Pou02b};}
$^c${TD-DFT (B3P86/{\emph{aug}-cc-pVDZ}) values from Ref.~\citenum{Pou02};}
$^d${CC2/aug-TZVPP values from Ref.~\citenum{Ker09};}
$^e${$\pi$-CASPT2C/aV(T+d)Z values from Ref.~\citenum{Ker09};}
\end{footnotesize}
\end{flushleft}
\end{table*}
The basis set effects seem under control for TTF. We observe, in particular, limited changes between the {\emph{aug}-cc-pVDZ} and {\emph{aug}-cc-pVTZ} data, with an average downshift of $-0.03$ eV (MAD: $0.05$ eV), and a maximal change
of $-0.09$ eV ($^1B_{1u}$). The two Rydberg ESs are the only ones for which going from the double- to the triple-$\zeta$ basis set actually increases the computed transition energy, a common pattern for
the present set of molecules. It is also noteworthy that, for all transitions, the differences between the CCSDT and CC3 transition energies obtained with the {6-31+G(d)} basis set are small, none of them exceeding $0.03$ eV, consistent
with the fact that all $\%T_1$ are rather large. All these elements give us confidence that the reported VTEs are accurate.
Interestingly, previous TD-DFT, \cite{Pou02,Ker09} CC2, \cite{Ker09} and CASPT2 \cite{Pou02b,Ker09} investigations are available for TTF. If one disregards the Rydberg transitions, the ordering of the CASPT2
ESs reported in 2002 \cite{Pou02b} matches the present CC3 one, but these ``old'' CASPT2 VTEs are notably too low, with underestimations of roughly $0.4$ eV or more for most states. The more recent CASPT2 energies
are closer to our present values, yet remain too small. In contrast, the TD-DFT calculations performed with the B3P86 global hybrid, \cite{Pou02} as well as the CC2 calculations, \cite{Ker09} provide estimates closer
to the current results. The UV/Vis spectrum of TTF has been measured by several groups in apolar solvents, \cite{Cof71,Wul77,Eng77} allowing comparisons for three transitions. As expected in such a VTE \emph{versus}
$\lambda_{\mathrm{max}}$ comparison, the CC3 values are larger by $0.1$-$0.6$ eV, which indicates that the data of Table \ref{Table-6} are reasonable.
\subsection{Benchmarks}
\label{sec:bench}
To define our TBEs, we typically consider: (i) for singlet ESs, CCSDT/double-$\zeta$ [6-31+G(d) or {\emph{aug}-cc-pVDZ}] VTEs corrected for basis set effects through the difference between the CC3/double-$\zeta$ and CC3/{\emph{aug}-cc-pVTZ} values; and (ii)
for triplet ESs, the CC3/{\emph{aug}-cc-pVTZ} VTEs directly. The complete list of TBEs and the specific protocol employed to obtain each of them are available in Table S11 in the SI.
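In other words, for a singlet ES, the TBE amounts to a standard composite (focal-point-like) estimate,
\begin{equation*}
\Delta E^{\mathrm{TBE}} \approx \Delta E^{\mathrm{CCSDT}}_{\mathrm{DZ}} + \left( \Delta E^{\mathrm{CC3}}_{\text{\emph{aug}-cc-pVTZ}} - \Delta E^{\mathrm{CC3}}_{\mathrm{DZ}} \right),
\end{equation*}
where DZ denotes the double-$\zeta$ basis [6-31+G(d) or {\emph{aug}-cc-pVDZ}] used for the CCSDT calculation.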
Neither CC3 nor CCSDT is an exact theory, so before assessing other methods, let us briefly discuss the expected accuracy of the present TBEs. For the singlet ESs, our reference values are essentially of CCSDT quality, and none of the ESs
treated here has a significant double-excitation character. More specifically, all $\%T_1$ (that could be obtained) exceed 80\%, and the variations between CC3 and CCSDT are typically very small. While this provides confidence that the TBEs
are accurate, they are likely not precise enough to fairly compare methods that yield highly similar results and are closely related, e.g., CCSDR(3) and CCSD(T)(a)*. For the triplet ESs, our reference values, of CC3/{\emph{aug}-cc-pVTZ} quality, are
likely sufficient to assess the tested methods. There are three reasons for this assertion. First, all triplet ESs treated here have very large $\%T_1$. Second, in the full QUEST database, the mean absolute error obtained with CC3/{\emph{aug}-cc-pVTZ}
(as compared to higher levels of theory) is as small as $0.01$ eV. \cite{Ver21} Third, the three other CC methods including corrections for the triples, namely, CCSDR(3), CCSD(T)(a)*, and CCSDT-3, have not been implemented for triplet
ESs, so that only methods significantly less advanced than CC3 are benchmarked below.
In Table \ref{Table-7}, we report the statistical results obtained considering the full set of data (only singlets are included for all CC methods including triples): mean signed error (MSE), mean absolute error (MAE), root mean
square error (RMSE), and the standard deviation of the errors (SDE). For the present set of excitations, the distribution of the errors in VTEs (with respect to the TBE/{\emph{aug}-cc-pVTZ} values) is represented in Figure \ref{Fig-2}.
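For transparency, these four statistical quantities are computed in the standard way; the minimal Python sketch below makes the definitions explicit (the arrays \texttt{computed} and \texttt{reference} are hypothetical placeholders for the VTEs of a given method and the TBEs, respectively):
\begin{verbatim}
import numpy as np

def error_statistics(computed, reference):
    """MSE, MAE, RMSE, and SDE of the errors computed - reference (eV)."""
    err = np.asarray(computed) - np.asarray(reference)
    mse = err.mean()                   # mean signed error
    mae = np.abs(err).mean()           # mean absolute error
    rmse = np.sqrt((err**2).mean())    # root mean square error
    sde = err.std(ddof=1)              # standard deviation of the errors
    return mse, mae, rmse, sde
\end{verbatim}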
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figure-2.pdf}
\caption{Distribution of the error (in eV) in VTEs (with respect to the TBE/{\emph{aug}-cc-pVTZ} values) for various methods. See Table \ref{Table-7} for the values of the corresponding statistical quantities. QC and TM indicate that Q-CHEM and TURBOMOLE scaling factors are considered, respectively. The SOS-CC2 and SCS-CC2 approaches are obtained with the latter code.}
\label{Fig-2}
\end{figure*}
\begin{table*}[htp]
\footnotesize
\vspace{-0.3 cm}
\caption{Statistical analysis, taking as reference the TBE/{\emph{aug}-cc-pVTZ} values, for various theoretical models. TM and QC stand for the TURBOMOLE and Q-CHEM
definitions of the scaling factors, respectively. On the right-hand side, we compare the MAE obtained here with the values previously obtained for smaller
molecules containing 1 to 3 (and 4 to 6) non-hydrogen atoms. \cite{Ver21} All values are in eV. }
\label{Table-7}
\begin{tabular}{lccccc|cccccccc|ccc}
\hline
&\multicolumn{5}{c}{All states} &\multicolumn{8}{c}{MAE (subgroup)} &\multicolumn{3}{c}{MAE (size)} \\
Method & Count & MSE &MAE &RMSE &SDE &Sing. & Trip. & Val. & CT & Ryd &$n\ra\pis$ &$\pi\ra\pis$ &$\pi\ra\sigmas$ &1--3 at. & 4--6 at. &This set\\
\hline
CIS(D) &88 &0.23 &0.24 &0.28 &0.16 &0.20 &0.30 &0.27 &0.28 &0.07 &0.28 &0.28 &0.14 &0.23 &0.22 &0.24 \\
EOM-MP2 &91 &0.50 &0.50 &0.60 &0.34 &0.51 &0.48 &0.51 &0.60 &0.36 &0.61 &0.54 &0.54 &0.13 &0.24 &0.50 \\
STEOM-CCSD &63 &-0.04 &0.13 &0.16 &0.13 &0.09 &0.17 &0.14 &0.06 &0.12 &0.07 &0.18 &0.18 &0.10 &0.12 &0.13 \\
CC2 &91 &0.00 &0.12 &0.14 &0.15 &0.11 &0.13 &0.12 &0.11 &0.12 &0.09 &0.13 &0.13 &0.19 &0.15 &0.12 \\
SOS-CC2 [TM] &90 &0.24 &0.24 &0.28 &0.14 &0.24 &0.24 &0.25 &0.27 &0.17 &0.41 &0.17 &0.17 &0.19 &0.23 &0.24 \\
SCS-CC2 [TM] &91 &0.16 &0.16 &0.19 &0.09 &0.15 &0.19 &0.18 &0.17 &0.08 &0.26 &0.15 &0.15 &0.18 &0.17 &0.16 \\
ADC(2) &90 &-0.01 &0.10 &0.14 &0.13 &0.10 &0.11 &0.11 &0.10 &0.07 &0.12 &0.12 &0.12 &0.19 &0.14 &0.10 \\
SOS-ADC(2) [TM] &91 &0.24 &0.25 &0.30 &0.17 &0.28 &0.20 &0.23 &0.25 &0.30 &0.36 &0.15 &0.15 &0.18 &0.21 &0.25 \\
SOS-ADC(2) [QC] &91 &0.05 &0.12 &0.15 &0.15 &0.14 &0.09 &0.11 &0.18 &0.11 &0.14 &0.10 &0.10 &0.14 &0.12 &0.12 \\
CCSD &91 &0.16 &0.18 &0.22 &0.15 &0.23 &0.10 &0.17 &0.29 &0.10 &0.31 &0.15 &0.15 &0.07 &0.13 &0.18 \\
ADC(3) &90 &-0.10 &0.20 &0.24 &0.22 &0.17 &0.24 &0.21 &0.22 &0.10 &0.21 &0.23 &0.23 &0.24 &0.21 &0.20 \\
ADC(2.5) &88 &-0.05 &0.07 &0.09 &0.08 &0.08 &0.07 &0.07 &0.09 &0.06 &0.06 &0.09 &0.09 &0.10 &0.08 &0.07 \\
CCSD(T)(a)* &58 &0.09 &0.09 &0.11 &0.06 &0.09 & &0.10 &0.12 &0.05 &0.14 &0.09 &0.09 &0.03 &0.05 &0.09 \\
CCSDR(3) &58 &0.09 &0.09 &0.10 &0.06 &0.09 & &0.10 &0.11 &0.05 &0.13 &0.08 &0.08 &0.03 &0.05 &0.09 \\
CCSDT-3 &58 &0.06 &0.06 &0.08 &0.05 &0.06 & &0.07 &0.08 &0.04 &0.10 &0.06 &0.06 &0.03 &0.05 &0.06 \\
CC3 &58 &-0.02 &0.02 &0.03 &0.02 &0.02 & &0.02 &0.03 &0.01 &0.02 &0.03 &0.03 &0.02 &0.01 &0.02 \\
\hline
\end{tabular}
\end{table*}
On the left-hand side of Table \ref{Table-7}, one can find the statistical results obtained considering all ESs. EOM-MP2 appears to strongly overestimate the VTEs and exhibits a large dispersion. In short, it cannot be recommended for
the present bicyclic compounds. Amongst the computationally light models, CIS(D) is clearly a better option though the associated MAE remains quite large, $0.24$ eV, a rather typical error bar for this level of theory. \cite{Goe10a,Jac15b,Loo18a,Loo20a,Ver21}
The EOM-MP2 and CIS(D) performances are clearly inferior to those of two other popular models, namely ADC(2) and CC2, which both deliver very reasonable estimates of the VTEs for the present molecular set, especially ADC(2), which
enjoys a trifling MSE and a MAE as small as $0.10$ eV. For both CC2 and ADC(2), these trends are compatible with the conclusions obtained from the QUEST database, \cite{Ver21} as well as with many previous works benchmarking ADC(2)
and/or CC2 for optical properties. \cite{Hat05c,Jac18a,Sch08,Sil10c,Win13,Har14,Jac15b,Kan17,Loo18a,Loo20a} For CC2, the two spin-scaled approaches (SOS and SCS) significantly deteriorate the CC2 MSE and MAE, leading to a clear overestimation.
However, both SOS-CC2 and SCS-CC2 do decrease the CC2 dispersion, especially SCS-CC2, which returns a SDE of only $0.09$ eV, a level of consistency that, according to the data of Table \ref{Table-7}, can only be further improved with higher-level approaches.
Such an accuracy drop and gain in consistency of the spin-scaled CC2 methods have been reported before. \cite{Goe10a,Jac15b,Taj20a,Loo20d} For ADC(2), neither of the two tested SOS parametrizations is beneficial as compared to the original
method, but it is very clear that the default Q-CHEM parametrization \cite{Kra13} is superior to its TURBOMOLE analog for the present set. It also appears that STEOM-CCSD is a valuable approach with small MSE, MAE, and SDE.
However, as explained in the computational details, several ESs that are challenging have been removed from the STEOM-CCSD evaluation set, making the statistical quantities possibly biased. Nevertheless, the similarity between the
STEOM-CCSD and CC2 performances observed in the present study was already underlined in earlier studies. \cite{Loo18a,Dut18,Loo20a,Loo20d}
Moving now to methods with higher $\mathcal{O}(N^6)$ scaling, one notices that CCSD yields
too large VTEs and a SDE comparable to the CC2 one. The overestimation trend of CCSD has been reported by various groups, though with magnitudes depending significantly on the actual test set, \cite{Sch08,Car10,Wat13,Kan14,Kan17,Dut18,Ver21}
an aspect discussed below. In contrast, ADC(3) delivers too small transition energies (MSE of $-0.10$ eV) and a rather large dispersion (SDE of $0.22$ eV). This relatively poor performance of ADC(3) is consistent with our findings for smaller
compounds, \cite{Loo20b} but can be efficiently mitigated by averaging with the ADC(2) VTEs. Indeed, the error patterns of ADC(2) and ADC(3) are opposite, \cite{Loo18a} and the simple average between the results of these
two methods is effective: ADC(2.5), which has in practice the same computational cost as ADC(3), is the only $\mathcal{O}(N^6)$ method delivering MSE, MAE, RMSE, and SDE all smaller than $0.10$ eV for the present set.
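Explicitly, the ADC(2.5) value of each transition is the plain arithmetic mean
\begin{equation*}
\Delta E^{\mathrm{ADC(2.5)}} = \tfrac{1}{2}\left[\Delta E^{\mathrm{ADC(2)}} + \Delta E^{\mathrm{ADC(3)}}\right].
\end{equation*}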
To finish, we have assessed four CC methods including triples against the CCSDT-quality reference data for the singlet transitions. The two perturbative approaches, CCSD(T)(a)* and CCSDR(3), deliver very similar statistical errors and can be essentially
viewed as equivalent for practical purposes. Such similarity was reported by Matthews and Watson for very compact compounds, \cite{Mat16} but with significantly smaller deviations as compared to CCSDT. Turning to
the two CC approaches with approximated iterative triples, namely CCSDT-3 and CC3, the latter seems to have the edge for the present set, with average errors below the chemical accuracy threshold ($0.043$ eV). We note that a similar ranking was
reported in 2017 for small molecules. \cite{Kan17}
The central section of Table \ref{Table-7} reports MAEs obtained for various subsets. We underline that the CT and $\pi\ra\sigmas$ subsets are both limited in size (13 and 6 ESs) and nature (rather weak CT and transitions found in TTF only, respectively). Hence, the
present trends should be analyzed cautiously. Most tested approaches deliver rather similar MAEs for the singlet and triplet ESs. However, CCSD is clearly more accurate for the triplets, in line with their larger $\%T_1$ values compared to the singlets, whereas both
STEOM-CCSD and CIS(D) apparently show the opposite behavior with better performances for the singlets. Surprisingly, most methods provide more accurate Rydberg VTEs than valence VTEs; the differences are especially large for
CIS(D), CCSD, ADC(3), ADC(2.5), and all CC models including triples, with significantly smaller MAEs for the Rydberg transitions. Although such a trend is not unprecedented, \cite{Ver21} the differences between valence and Rydberg
MAEs are quite large here. Interestingly, for much smaller systems, Kannar, Tajti, and Szalay reported that CC2 was much less accurate for the Rydberg transitions, \cite{Kan17} an effect that we do not observe here. We believe that this might be related to the
low-lying nature of the transitions considered here whereas the Rydberg transitions of Ref.~\citenum{Kan17} correspond to high-energy ESs with a very diffuse character. For the present set of ESs, one also notices significant differences of MAEs for the
$n\ra\pis$ and $\pi\ra\pis$ transitions, the latter being typically more accurately treated, except with STEOM-CCSD.
Let us now turn towards the impact of the molecular size, so as to investigate if larger compounds are more or less challenging than smaller ones for the different methods. In the three rightmost columns of Table \ref{Table-7}, we compare the MAEs determined
for the present set to those reported in the QUEST database for molecules containing 1--3 and 4--6 non-hydrogen atoms. \cite{Ver21} One can broadly divide the various methods in three groups. In the first group, one notices a significant deterioration
of the performance as the system size increases, hinting that the method cannot be reasonably recommended for large compounds. In this category, one finds: (i) EOM-MP2, with a MAE quadrupling between the smallest and
largest molecules, which likely explains why this method appeared to perform reasonably in previous works focused on smaller systems; \cite{Taj16,Kan17} (ii) CCSD, with an error steadily increasing from $0.07$ to $0.18$ eV as the size of the benchmarked compounds increases;
and (iii) CCSD(T)(a)* and CCSDR(3), with MAEs tripling between the smallest and largest molecules, though remaining under the $0.10$ eV threshold. While the perturbative triples clearly help in correcting the CCSD VTEs, they are insufficient to
fully compensate the CCSD overestimation for larger molecules. In the second category of approaches, Table \ref{Table-7} indicates significant improvements of the accuracy as system size increases, i.e., these methods are likely recommendable
for ``real-life'' applications. Both ADC(2) and CC2 belong to this category, with a MAE divided by a factor of two when considering the larger rather than smaller molecules. This trend likely explains the contrasted CC2 results
obtained previously when experimental references on large compounds were considered, \cite{Goe10a,Win13,Jac15b} or when high-level TBEs on tiny molecules were used \cite{Sch08,Kan17,Loo18a} as references. In the last category, one
finds approaches that behave relatively similarly for all system sizes; this category comprises essentially all the remaining methods.
\section{Concluding remarks}
The vertical transition energies to 91 ESs of bicyclic molecules containing between 8 and 10 non-hydrogen atoms have been evaluated. The selected molecules were chosen to be representative of $\pi$-conjugated chromogens actually present
in dyes and fluorophores used daily in measurements, e.g., azulene, benzothiadiazole, diketopyrrolopyrrole, quinoxaline, and tetrathiafulvalene. To define our TBEs/{\emph{aug}-cc-pVTZ} reference values, we typically relied on CC3 for the triplet excited states
as they are characterized by a strongly dominant single-excitation character. For the singlet ESs, we employed CC3 VTEs corrected using double-$\zeta$ CCSDT values. Although the present set does not include transitions with a (dominant)
double excitation character, it nevertheless includes a reasonable mix of singlet (58), triplet (33), valence (60), Rydberg (17), and CT (13) ESs. Such variety of states is rather typical for molecules of this size, though systems with stronger CT
character are missing. \cite{Koz20,Loo21a}
Thanks to these TBEs, we have benchmarked 16 wave function methods commonly employed for ES calculations on this type of systems. We stress that the present set considerably extends the number of ESs of ``large'' molecules (more than
6 non-hydrogen atoms) included in the QUEST database. \cite{Ver21} This allows for a fair evaluation of the performances of various methods as the size of the molecules increases. The outcomes of the present study clearly show that the quality
of the VTEs provided by both ADC(2) and CC2 improves with system size, whereas CCSD follows the opposite trend. In the group of computationally ``cheap'' models, it seems reasonable to recommend ADC(2), CC2, and STEOM-CCSD.
Besides, ADC(2.5) appears as a good compromise to further improve the accuracy: it outperforms the three previous methods almost systematically for a cost equivalent to ADC(3). For the present set, ADC(2.5) delivers statistical deviations
comparable to those obtained with the more resource-intensive CCSD(T)(a)*, CCSDR(3), and CCSDT-3 approaches, whereas in the case of smaller compounds, the reverse trend was clearly found. Finally, it is probably worth stressing that the
present work hints that CC3 delivers excellent accuracy for all treated subsets (ES nature, size of the compounds, etc.). While CC3's $\mathcal{O}(N^7)$ scaling prevents applications on large systems with large basis sets, CC3/double-$\zeta$
calculations are achievable on chromophores, fluorophores, and photochromes of actual practical interest, e.g., substituted naphthoquinones, coumarins, and azobenzenes.
The present results could serve as a reliable guide to select a cheaper level of theory for specific classes of organic compounds. Such a purely theoretical strategy to select a low-scaling theoretical model (e.g., an adequate exchange-correlation
functional to perform TD-DFT calculations), on the basis of a straightforwardly accessible property (the vertical transition energy in the present case) can likely be viewed as complementary to the protocols developed by Barone and coworkers
in which they model complex spectroscopic properties (such as vibronic spectra) so as to allow direct comparisons with experiment. \cite{Egi13,Bai13,Bai18}
\section*{Acknowledgements}
PFL thanks the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no.~863481) for financial support.
DJ is indebted to the CCIPL computational center installed in Nantes for (the always very) generous allocation of computational time.
\section*{Supporting Information Available}
Raw CCSD/{\emph{aug}-cc-pVTZ} data. Cartesian coordinates. Theoretical best estimates. VTEs for all benchmarked methods.
\section{Introduction} \label{sec:intro}
The study of near-Earth asteroids (NEAs) offers the opportunity to look in more detail at the physical properties and composition of their counterparts in the main asteroid belt. NEAs also represent a direct link between
meteorites found on Earth and their parent bodies in the solar system. Thus, an important part in tracing the origin of these meteorites involves determining the composition and source region of NEAs.
Here, we present near-infrared (NIR) spectroscopic data of NEAs 6178 (1986 DA) and 2016 ED85. This work was motivated by a close flyby of 1986 DA in April 2019. To date, this is the only NEA that has been
confirmed to be a metal-rich body from radar observations \citep{1991plas.rept..174O}; however, no detailed compositional analysis using spectroscopic data has been performed. During the course of this research,
we also had the opportunity to observe 2016 ED85 in September 2020 as part of an ongoing survey of NEAs. This object was chosen for observation not only because of its close approach to Earth but also because it
came from the same region in the outer belt as 1986 DA. The data reduction later revealed that 2016 ED85 has spectral characteristics almost identical to those of 1986 DA and other known metal-rich asteroids; hence, the asteroid was
included in our original study.
Metal-rich asteroids are thought to represent the exposed cores of differentiated asteroids whose crusts and mantles were stripped away following a catastrophic disruption \citep[e.g.,][]{1989aste.conf..921B}. A more recent theory
suggests that some of these objects, in particular (16) Psyche, might still preserve a rocky mantle, and that the metal present on the surface could be the result of ferrovolcanic eruptions that covered the rocky material with liquid
metal \citep{2020NatAs...4...41J}. Some of the largest known metal-rich asteroids are located in the middle and outer part of the asteroid belt between $\sim$ 2.65 and 3.0 au. These asteroids have high radar albedos ($\hat{\sigma}_{OC}$), with measured values ranging from $\sim$ 0.22 to 0.60 \citep[e.g.,][]{2007Icar..186..126M, 2010Icar..208..221S, 2015Icar..245...38S} and geometric albedos ($P_{V}$) ranging from $\sim$0.10 to 0.30 \citep[e.g.,][]{2004AJ....128.3070C, 2010Icar..210..674O, 2011M&PS...46.1910H}. They are normally classified as M-types in the Tholen taxonomy \citep{1984PhDT.........3T}, or as Xk- and Xe-types in the Bus-DeMeo taxonomy \citep{2009Icar..202..160D}. In the NIR, their spectra are characterized by having red slopes, convex shapes, and in some cases weak absorption bands at $\sim$ 0.9 and 1.9 $\mu$m attributed to the presence of pyroxene \citep{2010Icar..210..674O, 2011M&PS...46.1910H, 2014Icar..238...37N}. Evidence for a 3 $\mu$m hydration absorption band has also been found on some of these objects \citep[e.g.,][]{2000Icar..145..351R, 2015Icar..252..186L, 2017AJ....153...31T}. Based on their spectral characteristics, metal-rich asteroids have been considered as the possible parent bodies of iron meteorites, enstatite chondrites, stony-irons, and metal-rich carbonaceous chondrites \citep[e.g.,][]{1979aste.book..688G, 1989aste.conf..921B, 1991plas.rept..174O, 2005Icar..175..141H, 2010Icar..208..221S, 2011M&PS...46.1910H}.
In this work we carry out a comprehensive analysis of the NIR spectra of 1986 DA and 2016 ED85 in order to constrain their surface composition. We also use the orbital parameters of
these objects to determine their most likely source region. In addition, the spectra of the NEAs are compared with the spectra of metal-rich asteroids in the main belt to identify their possible parent body.
Laboratory spectra of meteorite samples have also been acquired in order to investigate the relationship
between these NEAs and stony-iron meteorites and metal-rich carbonaceous chondrites. Finally, we estimate the amounts of metals that could be present in 1986 DA and how much they could be worth.
\section{Size, Radar Albedo, and Surface Bulk Density}
\cite{1991plas.rept..174O} carried out radar observations of 1986 DA with the Arecibo Observatory's 2380 MHz radar. They estimated a radar cross-section ($\sigma_{OC}$) of 2.40$\pm$0.36 km$^{2}$ for this asteroid. The
radar albedo of an asteroid can be determined if its $\sigma_{OC}$ and diameter ($D$) are known:
\begin{equation}
\hat{\sigma}_{OC}=\frac{4\sigma_{OC}}{\pi D^{2}}
\end{equation}
Using a diameter of 2.3 km from \cite{1987AJ.....93..738T}, \cite{1991plas.rept..174O} calculated a $\hat{\sigma}_{OC}$ = 0.58 for 1986 DA. This value, when compared to other metal-rich asteroids, is at the highest end of radar
albedos measured for these objects \citep[e.g.,][]{2010Icar..208..221S, 2015Icar..245...38S}.
The diameter of 1986 DA has been updated since the work of \cite{1987AJ.....93..738T}. \cite{2011ApJ...743..156M} calculated a diameter of 3.199$\pm$0.381 km for 1986 DA from data obtained with NEOWISE. This value was later
revised by \cite{2014ApJ...785L...4H}, who obtained a diameter of 2.8$\pm$0.42 km. In the present work we adopt this diameter for our analysis. Thus, inserting this value in Equation (1) gives us $\hat{\sigma}_{OC}$ = 0.39$\pm$0.13 for
1986 DA. This is lower than the value calculated by \cite{1991plas.rept..174O}, but it is still high enough for this object to be considered an asteroid dominated by metal \citep{2015Icar..245...38S}.
\cite{1991plas.rept..174O} also reported a broad range of $\sigma_{OC}$ values for 1986 DA, ranging from 1.1 to 4.8 km$^{2}$. This range is equivalent to a radar albedo variation across the surface of 0.18-0.78 (assuming $D$ = 2.8 km). \cite{1991plas.rept..174O} attributed the variation in $\sigma_{OC}$ to an extremely irregular surface at a scale of 10-100 m. They concluded that 1986 DA had a nonconvex irregular shape, possibly bifurcated.
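As a consistency check, the radar albedos quoted above follow directly from Equation (1); the short Python sketch below (function name ours, input values taken from the text) reproduces them:
\begin{verbatim}
import math

def radar_albedo(sigma_oc, diameter):
    """Equation (1): radar albedo from the radar cross-section
    (km^2) and the diameter (km)."""
    return 4.0 * sigma_oc / (math.pi * diameter**2)

print(radar_albedo(2.40, 2.3))  # ~0.58, with the original diameter
print(radar_albedo(2.40, 2.8))  # ~0.39, with the revised diameter
# Albedo variation implied by the 1.1-4.8 km^2 cross-section range:
print(radar_albedo(1.1, 2.8), radar_albedo(4.8, 2.8))  # ~0.18, ~0.78
\end{verbatim}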
\cite{1985Sci...229..442O} found a relationship between the Fresnel radar reflectivity and the bulk density ($\rho$) of particulate mixtures of rock and metal. \cite{2010Icar..208..221S} expressed this relationship as a function of
the radar albedo (for $\hat{\sigma}_{OC}$$>$0.07):
\begin{equation}
\rho = 6.944\hat{\sigma}_{OC}+1.083
\end{equation}
We used this relationship and found that 1986 DA has a surface bulk density of 3.79 g cm$^{-3}$, with a total range of 2.33-6.5 g cm$^{-3}$. According to \cite{2010Icar..208..221S}, surface irregularities on metal-rich asteroids are more likely to cause large variations in radar albedo than on rocky bodies. Figure \ref{f:Figure1} shows the radar albedo and
near-surface bulk density for 1986 DA and all main belt M/X-type asteroids observed by radar \citep{2010Icar..208..221S, 2015Icar..245...38S}. The range of possible surface bulk densities for 1986 DA is consistent with most of the values measured for the proposed meteorite analogs for metal-rich bodies.
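Similarly, the surface bulk densities quoted above are direct applications of Equation (2); a minimal sketch (function name ours):
\begin{verbatim}
def bulk_density(radar_albedo):
    """Equation (2): near-surface bulk density (g cm^-3),
    valid for radar albedos larger than 0.07."""
    return 6.944 * radar_albedo + 1.083

print(bulk_density(0.39))                      # ~3.79 g cm^-3
print(bulk_density(0.18), bulk_density(0.78))  # ~2.33 and ~6.50 g cm^-3
\end{verbatim}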
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure1}
\caption{\label{f:Figure1} {\small Radar albedo and near-surface bulk density for 1986 DA and all main belt M/X-type asteroids observed by radar. Also shown, regions corresponding to carbonaceous chondrites (CC), high
metal carbonaceous chondrites (CH), enstatite chondrites (EC), stony-irons (SI), bencubbinites (CB), and iron dominated meteorites (FE). Error bars for the M/X-types correspond to the uncertainties in radar albedo reported by
\cite{2015Icar..245...38S}. Variations in radar albedo with rotation phase are not shown. Figure adapted from \cite{2010Icar..208..221S, 2015Icar..245...38S}.}}
\end{center}
\end{figure*}
In contrast to 1986 DA, there are no radar data available for 2016 ED85. For this reason, in the present study this object will be considered a candidate metal-rich body. The only parameter available
for this object is its absolute magnitude, which from the JPL Small-Body Database has a value $H$=17.64. If the absolute magnitude and geometric albedo of an asteroid are known, the diameter can
be estimated using the following relationship \citep{2007Icar..190..250P}:
\begin{equation}
D(km) = [1329/(P_{V})^{1/2}]\times10^{-H/5}
\end{equation}
Thus, if this object is confirmed to be a metal-rich asteroid, given the range of geometric albedos measured for these bodies, its diameter could be in the range of $\sim$0.72-1.25 km. Since the radar albedo of 2016 ED85 has never been measured, we cannot estimate its surface bulk density. Future radar observations will be required
to determine whether the surface properties of this asteroid are similar to those of 1986 DA.
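For reference, the quoted diameter range follows from Equation (3) with $H$ = 17.64 and the extreme geometric albedos measured for metal-rich asteroids; a minimal sketch (function name ours):
\begin{verbatim}
def diameter_km(H, p_V):
    """Equation (3): diameter (km) from the absolute magnitude H
    and the geometric albedo p_V."""
    return (1329.0 / p_V**0.5) * 10.0**(-H / 5.0)

print(diameter_km(17.64, 0.30))  # ~0.72 km, bright end of the albedo range
print(diameter_km(17.64, 0.10))  # ~1.25 km, dark end of the albedo range
\end{verbatim}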
\section{Near-Infrared Spectroscopic Observations}
The two NEAs were observed with the SpeX instrument \citep{2003PASP..115..362R} on NASA's Infrared Telescope Facility (IRTF). NIR spectra (0.7-2.5 $\mu$m) of 1986 DA and 2016 ED85 were obtained in low-resolution
(R$\sim$150) prism mode with a 0.8$''$ slit width on April 09, 2019 and September 22, 2020 UTC, respectively. During the observations, the slit was oriented along the parallactic angle in order to minimize the effects of
differential atmospheric refraction. Spectra were obtained in two different slit positions (A-B) following the sequence ABBA. In order to correct the telluric bands from the asteroid spectra, a G-type local extinction star was
observed before and after the asteroid. NIR spectra of a solar analog were also obtained to correct for possible spectral slope variations. Observational circumstances for the asteroids are presented in Table 1. All spectra were
reduced using the IDL-based software Spextool \citep{2004PASP..116..362C}. For a detailed description of the data reduction process, see \cite{2013Icar..225..131S}.
\begin{table}[h!]
\caption{\label{t:Table1} {\small Observational circumstances. The columns in this table are: object number and designation, date, phase angle ($\alpha$), V-magnitude, heliocentric distance (r), airmass, and solar analog
used.}}
\begin{tabular}{ccccccc}
\tableline
Object&Date (UT)&$\alpha$ $(^{\circ})$&mag. (V)&r (au)&Airmass&Solar Analog \\ \hline
6178 (1986 DA)&09-April-2019&55&17.4&1.23&1.29&SAO 93936 \\
2016 ED85&22-Sept-2020&43&16.6&1.17&1.07&SAO 93936 \\
\tableline
\end{tabular}
\end{table}
\section{Composition}
Figure \ref{f:Figure2} shows the NIR spectra of 1986 DA and 2016 ED85. Both spectra exhibit a red slope and a weak pyroxene absorption band at $\sim$0.9 $\mu$m. The spectra of these two asteroids are very similar, with
2016 ED85 showing a redder slope than 1986 DA. Spectral band parameters, including the band center and band depth of the 0.9 $\mu$m pyroxene band, were measured from the spectra of 1986 DA and 2016 ED85 using a
Python code described in \cite{2020AJ....159..146S}. The Band I center was measured after dividing out the linear continuum and corresponds to the position of the minimum reflectance value obtained by fitting a polynomial over the
bottom of the absorption band. The Band I depth was measured from the continuum to the band center, and it is given as a percentage depth. We obtained a Band I center of 0.93$\pm$0.01 $\mu$m for both objects. The Band I depth of
1986 DA and 2016 ED85 was found to be 4.1$\pm$0.2\% and 4.3$\pm$0.2\%, respectively.
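The band-parameter procedure described above can be summarized by the following simplified sketch; it is a stand-in for, not a reproduction of, the code of \cite{2020AJ....159..146S} (array names and the single whole-band polynomial fit are our own simplifications):
\begin{verbatim}
import numpy as np

def band_parameters(wl, refl, left, right, order=4):
    """Band center (micron) and percentage depth after dividing
    out a linear continuum anchored at wavelengths left/right."""
    i1 = np.argmin(np.abs(wl - left))
    i2 = np.argmin(np.abs(wl - right))
    # Linear continuum across the band, divided out of the spectrum
    slope = (refl[i2] - refl[i1]) / (wl[i2] - wl[i1])
    ratio = refl / (refl[i1] + slope * (wl - wl[i1]))
    # Polynomial fit over the band; its minimum gives the band center
    coeffs = np.polyfit(wl[i1:i2 + 1], ratio[i1:i2 + 1], order)
    fine = np.linspace(wl[i1], wl[i2], 2000)
    fit = np.polyval(coeffs, fine)
    center = fine[np.argmin(fit)]
    depth = (1.0 - fit.min()) * 100.0  # depth below the continuum (%)
    return center, depth
\end{verbatim}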
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure2}
\caption{\label{f:Figure2} {\small Spectra of 6178 (1986 DA) and 2016 ED85 obtained with the SpeX instrument on NASA’s IRTF. The spectra have been binned and offset for clarity.}}
\end{center}
\end{figure*}
As mentioned before, 2016 ED85 can only be considered a candidate metal-rich body based on its spectral similarities to 1986 DA and other metal-rich asteroids. In the following analysis we investigate the possibility that the spectral
characteristics of this object are dominated by the presence of metal, i.e., it will be treated in the same way as 1986 DA, but keeping in mind that these results need to be confirmed by radar observations of this asteroid.
In order to determine the pyroxene chemistry of the NEAs, we employed the equations of \cite{2009M&PS...44.1331B}; these equations were derived from the analysis of howardite, eucrite and diogenite (HED) meteorites and make use of the
band centers to calculate the molar content of ferrosilite (Fs) and wollastonite (Wo). Both band centers (Band I and Band II) can be used for this calculation. Before determining the pyroxene chemistry, a temperature correction derived
by \cite{2012Icar..217..153R} was applied to the Band I center in order to account for the differences between the surface temperature of the asteroid and the room temperature at which the equations were derived. A similar
approach was used by \cite{2017AJ....153...29S} to determine the composition of asteroid (16) Psyche, the largest known M-type asteroid. We found that the pyroxene chemistry for both asteroids is
Fs$_{40.6\pm3.3}$Wo$_{8.9\pm1.1}$; these values fall within the range of HED meteorites \citep[e.g.][]{1998LPI....29.1220M} and are similar to the ones calculated for some M-type
asteroids in the main
belt \citep[e.g.][]{2011M&PS...46.1910H}.
\cite{2017AJ....153...29S} found a correlation between the Band I depth (BD) and the pyroxene abundance in intimate mixtures of orthopyroxene and metal. This correlation is described by the following second-order polynomial
fit:
\begin{equation}
opx/(opx+metal) = -0.000274\times BD^{2}+0.033\times BD+0.014
\end{equation}
where $opx/(opx + metal)$ is the orthopyroxene-metal abundance ratio. Using the measured Band I depth and this spectral calibration, we estimated an orthopyroxene abundance of 0.15$\pm$0.01 for both asteroids. This value is more
than twice the mean value calculated for (16) Psyche by \cite{2017AJ....153...29S}.
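Plugging the measured Band I depths into Equation (4) yields the quoted abundances directly; a one-function sketch (function name ours):
\begin{verbatim}
def opx_fraction(band_depth):
    """Equation (4): opx/(opx + metal) ratio from the
    Band I depth (in %)."""
    return -0.000274 * band_depth**2 + 0.033 * band_depth + 0.014

print(opx_fraction(4.1))  # ~0.15 for 1986 DA
print(opx_fraction(4.3))  # ~0.15 for 2016 ED85
\end{verbatim}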
\section{Source Region and Parent Body}
The orbits of 1986 DA and 2016 ED85, with semimajor axes near or beyond the 5:2 mean motion resonance with Jupiter (5:2 MMR), suggest that they originated in the outer asteroid belt. This is
also the region where some of the largest known metal-rich asteroids reside. Due to their particular location, it is plausible that 1986 DA and 2016 ED85 (if confirmed to be metal-rich) are fragments from such bodies. The location of several
known M/X-type asteroids is shown in Figure \ref{f:Figure3}. It is worth mentioning that both 1986 DA and 2016 ED85 are now on planet-crossing orbits, and they reached those locations by entering into resonances that change
their inclination values. For that reason, Figure \ref{f:Figure3} should be interpreted with caution; a similarity in inclination between a given M/X-type object and either of our NEAs may be a coincidence.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure3}
\caption{\label{f:Figure3} {\small Inclination vs. semimajor axis for 6178 (1986 DA), 2016 ED85, and known M/X-types \citep[e.g.,][]{2004AJ....128.3070C, 2010Icar..208..221S, 2010Icar..210..674O, 2011M&PS...46.1910H, 2014Icar..238...37N}, and V-types \citep{2017Icar..295...61L, 2018AJ....156...11H, 2020MNRAS.491.5966M} in the middle and outer belt. The number and name of the M/X-type asteroids are indicated. The location of the 5:2 mean motion resonance is
indicated with a vertical dashed line. Background asteroids are depicted in light gray and were obtained from the Asteroids Dynamic Site (AstDyS-2). For clarity, only background asteroids with $H$ $<$ 16 are shown.}}
\end{center}
\end{figure*}
In order to make a quantitative assessment of likely source regions for these bodies, we input them into the near-Earth-object (NEO) model described by \cite{2017A&A...598A..52G, 2018Icar..312..181G}. In their model, over
70,000 test asteroids with diameters $D$ = 0.1 and 1 km were dynamically tracked as they escaped the main asteroid belt and transneptunian populations. These test asteroids were followed until they hit a planet or the Sun, or were ejected
out of the inner solar system via a close encounter with Jupiter. Most of the main belt bodies escaped through one of several regions, including the $\nu_{6}$ secular resonance; the 3:1, 5:2, and 2:1 MMRs with Jupiter; and the Jupiter family
comet region. From here, residence time probability distributions were created that quantify how much time these bodies spent in different (a, e, i) bins. By summing all of the probability distributions together with different weighting
values, and then combining them with an observational bias model for the detection of NEOs, they were able to fit their NEO model population to the observed NEOs. By varying the weighting values, they found that a best-fit model yields not only an estimate of the debiased NEO orbital distribution but also the relative importance of each NEO source region in providing objects to each (a, e, i) bin. Hence, by inputting the (a, e, i) orbits of 1986 DA and 2016
ED85 into this model, we can make predictions of their probable source and departure location.
We found that the most likely region from which 1986 DA and 2016 ED85 originated is the 5:2 MMR with Jupiter near 2.8 au, with probabilities of 76\% and 49\%, respectively (Table 2). We argue that the probabilities make sense
because there are numerous large M- and X-type asteroids residing near the borders of the 5:2 resonance (Figure \ref{f:Figure3}); several of them are plausible parent bodies for either NEA.
Now, it is possible that both 1986 DA and 2016 ED85 are objects from the background population of the asteroid belt. While the 5:2 MMR is the favored escape route for both, we also cannot rule out the possibility that they came from
different NEO source regions, namely, those with lower but nonzero probabilities of reaching their listed (a, e, i) orbits. It is also possible that the NEOs had different parent bodies. For this exercise, however, we will assume that the following is
true: (i) the similarity in spectra between the objects is not a fluke, and they are connected to a single parent body or parent family; (ii) both came from the 5:2 MMR, the highest probability source in Table 2; and (iii) that the parent body or
parent family can deliver $D$ $>$ 3 km bodies to the 5:2 MMR right now, a condition needed to explain both bodies given that their dynamical lifetimes are both of the order of a few Myr at best \citep{2002Icar..156..399B, 2018Icar..312..181G}.
\begin{table}[h]
\caption{\label{t:Table2} {\small Orbital parameters and possible regions from which 1986 DA and 2016 ED85 originated. The columns in this table are: object designation, semimajor axis (a), eccentricity (e), inclination (i), and probability
(P) of having originated from the resonances $\nu_{6}$, 3:1, 5:2, 2:1, and the Jupiter family comet region (JFC).}}
\begin{tabular}{ccccccccc}
\tableline
Asteroid& a (au)&e& i ($^{\circ}$)&P ($\nu_{6}$)&P (3:1)& P (5:2)&P (2:1)&P (JFC) \\ \hline
6178 (1986 DA)&2.82&0.58&4.31&0.02&0.08& 0.76&0.02&0.13 \\
2016 ED85&3.00&0.69&8.81&0.01& 0.04&0.49&0.11&0.36 \\
\tableline
\end{tabular}
\end{table}
In order to separate the primary parent body candidates from the secondary ones, we compared the NIR spectra of 1986 DA and 2016 ED85 with the spectra of some of the M/X-type asteroids in the vicinity
(Figures \ref{f:Figure4} and \ref{f:Figure5}). We found that the spectral characteristics of these two NEAs are consistent with some of the M/X-type asteroids in the main belt, which also exhibit red spectral slopes and a weak pyroxene
absorption band at $\sim$ 0.9 $\mu$m. Overall, the candidate parent bodies with the most similar spectra were (216) Kleopatra, (322) Phaeo, (497) Iva, and (558) Carmen.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure4}
\caption{\label{f:Figure4} {\small NIR spectra of 6178 (1986 DA) and M/X-type asteroids (16) Psyche \citep{2017AJ....153...29S}, (69) Hesperia \citep{2016PDSS..248.....H}, (216) Kleopatra \citep{2016PDSS..248.....H}, (322) Phaeo
\citep{2004AJ....128.3070C}, (417) Suevia \citep{2016PDSS..248.....H}, (497) Iva (this work), (558) Carmen \citep{2016PDSS..248.....H}, and (1014) Semphyra \citep{2004AJ....128.3070C}. The spectrum of (497) Iva was obtained with the
IRTF on January 02, 2020 UTC as part of this study. All spectra are normalized to unity at 1.5 $\mu$m.}}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure5}
\caption{\label{f:Figure5} {\small NIR spectra of 2016 ED85 and M/X-type asteroids (16) Psyche \citep{2017AJ....153...29S}, (69) Hesperia \citep{2016PDSS..248.....H}, (216) Kleopatra \citep{2016PDSS..248.....H}, (322) Phaeo
\citep{2004AJ....128.3070C}, (417) Suevia \citep{2016PDSS..248.....H}, (497) Iva (this work), (558) Carmen \citep{2016PDSS..248.....H}, and (1014) Semphyra \citep{2004AJ....128.3070C}. The spectrum of (497) Iva was obtained with the
IRTF on January 02, 2020 UTC as part of this study. All spectra are normalized to unity at 1.5 $\mu$m.}}
\end{center}
\end{figure*}
One interesting aspect about the metal-rich asteroids in the middle/outer belt is that few of them are associated with an asteroid family. If 1986 DA and 2016 ED85 are fragments that resulted from a collision between a metal-rich
asteroid and another object, one would expect more fragments to be created after the collision, leaving behind an asteroid family composed of this type of object. In particular, to explain 1986 DA, which is roughly 3 km in diameter, a
candidate family needs to be located adjacent to the 5:2 resonance, and its members truncated by the resonance must reach sizes of at least 3 km.
Many families have been identified in the middle and outer belt that are also close to the 5:2 resonance \citep{2015aste.book..297N}. Most of them can be ruled out on the basis of their albedo and spectroscopic data. However, there
are four families that are worth mentioning: Phaeo, Brasilia, San Marcello, and 1999 CG1. Their proper semimajor axes vs. proper inclinations are shown in Figure \ref{f:Figure6}.
\begin{figure*}[!h]
\begin{center}
\includegraphics[height=9cm]{Figure6}
\caption{\label{f:Figure6} {\small Inclination vs. semimajor axis for 6178 (1986 DA), 2016 ED85, and asteroid families Phaeo, Brasilia, San Marcello, and 1999 CG1 from \cite{2015PDSS..234.....N}. The location of the 5:2 mean motion
resonance is indicated with a vertical dashed line. Background asteroids are depicted in light gray and were obtained from the Asteroids Dynamic Site (AstDyS-2). For clarity, only background asteroids with $H$ $<$ 16 are shown.}}
\end{center}
\end{figure*}
\textbf{Phaeo family}. To date, the only family in this region whose parent body has been found to have spectral characteristics similar to a metal-rich asteroid is the Phaeo family \citep{2004AJ....128.3070C}. It also fits the
dynamical and size characteristics necessary to be a source of both NEAs. Using test bodies evolving from orbits similar to those in the Phaeo family, we find that many can reach the present-day orbits of 1986 DA and 2016 ED85.
A possible problem with this family, however, is that the taxonomic classification of (322) Phaeo remains ambiguous, since this object was originally classified as a D-type by \cite{2009Icar..202..160D}. The albedo of (322) Phaeo
\citep[0.0837$\pm$0.0178;][]{2012Icar..221..365P}, on the other hand, is fairly consistent within errors with the albedo of 1986 DA (0.096$\pm$0.029) derived by \cite{2014ApJ...785L...4H}, although we note that the mean albedo of the family, $\sim$ 0.06 \citep{2015aste.book..323M, 2015aste.book..297N}, appears to be lower than the typical values for metal-rich asteroids. More spectroscopic observations of (322) Phaeo and members of its family are required to determine whether this is indeed a metal-rich asteroid family.
\textbf{Brasilia family}. The Brasilia family is composed of X-type asteroids with a mean albedo of 0.18 \citep{2015aste.book..323M, 2015aste.book..297N}. \cite{2015aste.book..297N} noted that (293) Brasilia could be an interloper, in which case (1521)
Seinajoki would be the largest asteroid of this family. Although NIR spectroscopic data are not available, visible spectra of some members of this family
show a red slope with the possible presence of a 0.9 $\mu$m band \citep{2002Icar..158..106B}, consistent with metal-rich asteroids. The family's albedo, however, is higher than that of 1986 DA.
\textbf{San Marcello family}. The San Marcello family shares similar characteristics with the Brasilia family, i.e., it is composed of X-type asteroids with a mean albedo of 0.19 \citep{2015aste.book..323M, 2015aste.book..297N}. The proximity of this family to
the resonance makes it a possible candidate; however, NIR spectroscopic data are required to confirm its affinity with metal-rich asteroids.
\textbf{1999 CG1 family}. According to \cite{2015aste.book..297N}, this family is composed of S-type asteroids. It has been included in our list because the mean albedo of the family ($\sim$0.10) is too low for S-type
asteroids but within the range of metal-rich asteroids and 1986 DA. This could make it a plausible fit. At this time, there are no NIR spectroscopic data available that could be used to confirm its composition.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=10cm]{Figure7}
\caption{\label{f:Figure7} {\small Absolute magnitude vs. semimajor axis for 6178 (1986 DA), 2016 ED85, and asteroid families Phaeo, Brasilia, San Marcello, and 1999 CG1 from \cite{2015PDSS..234.....N}. The location of the 5:2 mean motion resonance is indicated with a vertical dashed line.}}
\end{center}
\end{figure*}
In addition to the fact that these families border the 5:2 resonance, it is also important to look at the expected contribution rates of resonant objects in the size range of 2016 ED85 and 1986 DA. In Figure \ref{f:Figure7} we plot the
absolute magnitude vs. semimajor axis for the families described above. All of them show the characteristic V shape resulting from the size-dependent semimajor axis drift due to the Yarkovsky effect. The wider V shape of the San
Marcello and 1999 CG1 families indicates their older age compared to the other two families. All the families seem to have produced resonant objects in the size range of 1986 DA and can produce asteroids of the size of 2016 ED85;
their observed members are truncated by the 5:2 resonance, which implies that bodies too small to be detected have also reached the same resonance.
In summary, in this exercise we have considered the scenario in which the two NEAs come from the same parent body and are fragments from an asteroid family. Although the limited data prevent
us from doing a more rigorous analysis, we have identified the Phaeo family as a candidate to produce 1986 DA and 2016 ED85 under this scenario. The family has the following:
\begin{itemize}
\item A large remnant, (322) Phaeo, whose spectroscopic signature is similar to both NEAs.
\item A parent body with an albedo that is consistent with the albedo of 1986 DA.
\item An orbital distribution that shows that the family is adjacent to the 5:2 resonance.
\item Orbital evidence from the family that it has been delivering 1986 DA-sized and smaller bodies to the 5:2 resonance.
\item Test bodies that can reach the present-day orbits of both NEAs according to the dynamical test body runs provided by \cite{2017A&A...598A..52G, 2018Icar..312..181G}.
\end{itemize}
The main issue about this family is that it could be composed of primitive bodies, given its low albedo and the fact that (322) Phaeo has been classified as a D-type asteroid. D-types have
spectra with very steep slopes but are featureless in the NIR \citep{2009Icar..202..160D}. The detection of the 0.9 $\mu$m band in the NIR spectrum of (322) Phaeo by \cite{2004AJ....128.3070C}
would rule out the D-type taxonomy for this object; however, only one NIR spectrum of (322) Phaeo has been analyzed. More spectroscopic data of this asteroid and members of its family would
help to confirm their taxonomic type and constrain their composition. The other families, Brasilia, San Marcello, and 1999 CG1, are considered secondary candidates. Where we have
information, they appear to have different spectroscopic signatures and albedos from either of our NEAs. We hesitate to rule them out, though, for the reasons listed above.
\section{Meteorite Analogs}
The next step in this study is to identify possible meteorite analogs for 1986 DA and 2016 ED85. Because 1986 DA is a mixture of rock and metal, and 2016 ED85 may be one as well, we look at
stony-iron meteorites and metal-rich carbonaceous chondrites as possible analogs, since they share similar characteristics. Enstatite chondrites, which have been considered as meteorite analogs for metal-rich asteroids, are not considered
here because the pyroxene in these meteorites is iron-free and does not exhibit the 0.9 $\mu$m feature present in the spectra of the NEAs. Stony-iron meteorites that have been suggested as potential analogs for metal-rich asteroids include pallasites and mesosiderites, whereas metal-rich carbonaceous chondrites include high metal carbonaceous chondrites (CH) and bencubbinite (CB) chondrites.
Pallasites consist mostly of metal and olivine in roughly equal amounts, with troilite as a minor phase \citep{1998LPI....29.1220M}. They are thought to be fragments from the core-mantle boundary of differentiated asteroids. We have
ruled out pallasites as meteorite analogs because the presence of olivine would produce an absorption feature centered at $\sim$ 1.05 $\mu$m, which would not match the Band I center measured for 1986 DA and 2016 ED85 at
0.93 $\mu$m.
Mesosiderites are breccias composed of similar proportions of silicates and FeNi metal. The silicate component is very similar in composition to HED meteorites, containing pyroxene, olivine, and Ca-rich feldspar
\citep[e.g.,][]{1979LPSC...10.1109H, 1998LPI....29.1220M, 2001M&PS...36..869S}. Two models have been proposed to explain the origin of mesosiderites. In the first model, the metal and the silicates would have originated in two different bodies following the collision between a metallic core and the basaltic surface
of a differentiated asteroid \citep{1985Natur.318..168W}. In the second model, both the metal and the silicates originate in a single differentiated asteroid with a molten core and are mixed together after the asteroid is disrupted by
another object \citep{2001M&PS...36..869S}. \cite{1993Icar..101..201R} found it unlikely that mesosiderites and HEDs formed in the same parent body, although a more recent study by \cite{2019NatGe..12..510H} points to asteroid (4) Vesta as the parent body of both meteorites. Mesosiderites have also been linked to the Maria asteroid family located adjacent to the 3:1 MMR \citep{2011Icar..213..524F}.
The CH chondrites are polymict breccias characterized by the presence of small cryptocrystalline chondrules and a high abundance of Fe,Ni-metal ($\sim$20 vol\%)
\citep{2006mess.book...19W}. These meteorites are believed to have formed in the solar nebula \citep[e.g.,][]{1988E&PSL..91...19W, 2004GeCoA..68.3409C}. Their
bulk density is higher than most carbonaceous chondrites but lower than that of objects like 1986 DA, suggesting that CH chondrites are not good meteorite analogs.
Bencubbinites are also breccias containing high metal abundances of $\sim$ 60-80 vol\% and chemically reduced silicates including Fe-poor olivine, low-Ca pyroxene, and high-Ca pyroxene. Calcium-aluminum-rich inclusions are also
found in some of these meteorites \citep{1998M&PSA..33Q.166W, 2001M&PS...36..401W}. They are thought to have formed either directly from the solar nebula \citep[e.g.,][]{1979GeCoA..43..689N, 1990Metic..25..269W, 2001M&PS...36..401W} or in a metal-enriched gas resulting from a protoplanetary impact \citep[e.g.,][]{2002GeCoA..66..647C, 2005Natur.436..989K}.
In this work we focused on the analysis of mesosiderites and bencubbinites. Although these meteorites have been considered possible analogs for metal-rich asteroids, powder spectra are scarce, making it difficult to establish a link between them and the
asteroids. For this study we used two samples, the mesosiderite NWA 6370 and the bencubbinite Gujba; both samples have similar proportions of silicates and metal. The
samples were crushed with a pestle and mortar and dry sieved to three different grain sizes: $<$45, $<$150, and 150-500 $\mu$m. This broad range of grain sizes was chosen to investigate whether increasing the grain size could
produce a better fit between the meteorite and asteroid spectra. Visible and NIR spectra (0.35-2.5 $\mu$m) were obtained relative to a Labsphere Spectralon disk using an ASD Labspec4 Pro spectrometer at an incident angle
$i$=0$^{\circ}$ and emission angle $e$=30$^{\circ}$. For each measurement, 1000 scans were obtained and averaged to create the final spectrum. Spectral band parameters were measured for all samples in the same way as was done for
the asteroid spectra; the measured values are presented in Table 3.
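The band-parameter extraction can be illustrated with a short script. The following is a minimal sketch, not the exact pipeline used in this work: it assumes a linear continuum anchored at the band shoulders and a polynomial fit around the band minimum, and the shoulder windows and polynomial order are illustrative choices.
\begin{verbatim}
import numpy as np

def band_parameters(wav, refl, left, right, poly_order=4):
    # Continuum anchored at the mean of each shoulder window
    def shoulder(win):
        m = (wav >= win[0]) & (wav <= win[1])
        return wav[m].mean(), refl[m].mean()
    (x1, y1), (x2, y2) = shoulder(left), shoulder(right)
    continuum = y1 + (y2 - y1) * (wav - x1) / (x2 - x1)
    m = (wav >= x1) & (wav <= x2)
    ratio = refl[m] / continuum[m]
    # Polynomial fit of the continuum-removed band; its minimum gives
    # the band center, and 1 - minimum gives the band depth
    coeffs = np.polyfit(wav[m], ratio, poly_order)
    fine = np.linspace(x1, x2, 2000)
    model = np.polyval(coeffs, fine)
    return fine[np.argmin(model)], 100.0 * (1.0 - model.min())

# Synthetic test spectrum with a 15% deep band at 0.93 micron
wav = np.linspace(0.7, 1.2, 300)
refl = 0.2 * (1.0 - 0.15 * np.exp(-0.5 * ((wav - 0.93) / 0.05) ** 2))
print(band_parameters(wav, refl, left=(0.70, 0.74), right=(1.16, 1.20)))
\end{verbatim}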
\begin{table}[h]
\caption{\label{t:Table3} {\small Spectral band parameters for the mesosiderite NWA 6370 and the bencubbinite Gujba. The columns in this table are: sample, grain size, albedo (reflectance value measured at 0.55 $\mu$m), Band I
center (BIC), Band II center (BIIC), Band I depth (BID), and Band II depth (BIID).}}
\begin{tabular}{ccccccc}
\tableline
Sample&Grain Size ($\mu$m)&Albedo&BIC ($\mu$m)&BIIC ($\mu$m)&BID (\%)&BIID (\%) \\ \hline
NWA 6370&$<$45&0.11&0.929$\pm$0.001&1.929$\pm$0.004&18.4$\pm$0.1&10.6$\pm$0.2 \\
NWA 6370&$<$150&0.14&0.932$\pm$0.001&1.953$\pm$0.005&30.7$\pm$0.1&22.7$\pm$0.2 \\
NWA 6370&150-500&0.09&0.930$\pm$0.001&1.953$\pm$0.007&12.1$\pm$0.1&7.8$\pm$0.1 \\
Gujba&$<$45& 0.16&0.918$\pm$0.002&-&3.8$\pm$0.1&- \\
Gujba&$<$150&0.14&0.920$\pm$0.003&-&5.3$\pm$0.1&- \\
Gujba&150-500& 0.08&0.919$\pm$0.003&-&7.7$\pm$0.2&- \\
\tableline
\end{tabular}
\end{table}
\begin{figure*}[!h]
\begin{center}
\includegraphics[height=10cm]{Figure8}
\caption{\label{f:Figure8} {\small Top row: normalized reflectance for the mesosiderite NWA 6370 (left) and the bencubbinite Gujba (right) for three different grain sizes. All the spectra are normalized to unity at 1.5 $\mu$m. Bottom
row: absolute reflectance for three different grain sizes for the mesosiderite NWA 6370 (left) and the bencubbinite Gujba (right).}}
\end{center}
\end{figure*}
\begin{figure*}[!h]
\begin{center}
\includegraphics[height=9cm]{Figure9}
\caption{\label{f:Figure9} {\small Spectra of 6178 (1986 DA) and 2016 ED85 compared with the spectra of the bencubbinite Gujba ($<$45 $\mu$m) and the mesosiderite NWA 6370 (150-500 $\mu$m). The spectra have been
normalized to unity at 1.5 $\mu$m.}}
\end{center}
\end{figure*}
Figure \ref{f:Figure8} shows the normalized spectra (top row) for the two samples. All the mesosiderite spectra exhibit two absorption bands centered at $\sim$0.93 and 1.94 $\mu$m. Variations in the intensity of the absorption
bands with grain size are evident for the mesosiderite NWA 6370. The spectrum corresponding to the largest grain size exhibits the weakest absorption bands and the steepest spectral slope, producing the closest match in terms of
spectral slope with the asteroid spectra (Figure \ref{f:Figure9}). However, these large grains will also cause more light to be absorbed, resulting in a significant decrease in reflectance for this spectrum. This can be seen in Figure
\ref{f:Figure8} (bottom row), where the absolute reflectances for the three grain sizes are compared. The bencubbinite spectra only show one absorption band centered at $\sim$ 0.92 $\mu$m. For Gujba, we also observed variations in the
band depth, although not as pronounced as for NWA 6370. All the spectra have a relatively flat slope; the spectrum corresponding to the grain size of $<$ 45 $\mu$m has a slightly steeper slope than the others, but much less than the
asteroid spectra (Figure \ref{f:Figure9}). As for NWA 6370, the spectrum with the largest grain size shows the lowest reflectance.
The use of different grain sizes did not help to improve the spectral match between the
bencubbinite and the asteroid spectra. In the case of the mesosiderite, the largest grain size produced a better fit of the spectral slope; however, the absorption bands are still too deep to produce a good match with the asteroid spectra.
These results indicate that grain size alone is not responsible for the differences observed between the meteorite and the asteroid spectra.
The compositional analysis of 1986 DA and 2016 ED85 showed that both objects have a pyroxene chemistry comparable to that of HEDs, but inconsistent with the magnesian pyroxene found in bencubbinites. This difference in the
pyroxene chemistry and the significant spectral differences found in this work suggest that bencubbinites are not good meteorite analogs for 1986 DA and 2016 ED85. The situation with the mesosiderites is less clear; the pyroxene chemistry
for some of these meteorites is consistent with the values found for the two NEAs, but their
spectra have deeper absorption bands and less pronounced spectral slopes. One possible explanation for these spectral differences could be the effect of space weathering on the NEAs, which is known to produce reddening of the spectral slope and suppression of the absorption bands
\citep[e.g.,][]{2001JGR...10610039H, 2006Icar..184..327B, 2010Icar..209..564G}. However, irradiation experiments performed on the mesosiderite Vaca Muerta by \cite{2009Icar..202..477V} showed that space weathering is
not enough to reproduce the red spectral slopes of metal-rich asteroids.
If grain size and space weathering cannot explain the spectral differences between the mesosiderites and the metal-rich asteroids, perhaps metal abundance could, since adding metal would increase the spectral slope and suppress
the absorption bands. To explore this possibility, we modeled the spectra of 1986 DA and 2016 ED85 with a mixing model \citep{2001JGR...10610039H} that allowed us to combine the silicate component of a mesosiderite with meteoritic metal. For this study we used the spectrum of the mesosiderite Vaca Muerta ($<$45 $\mu$m). This sample was used instead of NWA 6370 because it is composed almost entirely of pyroxene with little metal, which is
preferred in this case in order to have a more accurate estimate of the amount of metal present in the mixture. The sample was prepared in the same way as NWA 6370, and the spectra were obtained under the same configuration. For the metal
component we used the spectra of two iron meteorites, Gibeon (IVA) and Georgetown (IAB). The Gibeon powder was obtained as cutting shavings, which were sieved with acetone to grain sizes of 106-212 $\mu$m and dried in a
heated vacuum oven. The Georgetown powder was obtained by crushing a small fragment with a pestle and mortar, and the powder was then dry sieved to a grain size of $<$45 $\mu$m. These are the
same samples used in \cite{2021PSJ.....2...95C}. The spectra were obtained following the same procedure as used with NWA 6370 and Gujba. All the endmember spectra are shown in Figure \ref{f:Figure10}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=8cm]{Figure10}
\caption{\label{f:Figure10} {\small Spectra of the endmembers used with the mixing model. The samples include the mesosiderite Vaca Muerta, the iron IVA meteorite Gibeon, and the iron IAB meteorite Georgetown. The grain
size for each sample is indicated.}}
\end{center}
\end{figure*}
A combination of three endmembers was used for the mixing model: the two iron meteorites plus the mesosiderite. Using two iron meteorites whose spectra have different spectral slopes has the advantage of allowing a better fit of the
spectral slope. Asteroid spectra were extrapolated and normalized to unity at 0.55 $\mu$m. The normalized spectra were then multiplied by their geometric albedos in order to convert from relative reflectance to absolute reflectance. For 1986 DA we used the revised value $P_{V}$=0.096$\pm$0.029 obtained by \cite{2014ApJ...785L...4H}. Since the albedo of 2016
ED85 is not known, we assumed the same value calculated for 1986 DA. Although 2016 ED85 can only be considered a candidate metal-rich body at this point,
we wanted to see whether it was possible to model its NIR spectrum in the same way as 1986 DA.
From \cite{2001JGR...10610039H}, the reflectance relative to a standard can be written as
\begin{equation}
\Gamma(\gamma)=\frac{r(\mathrm{sample})}{r(\mathrm{standard})}=\frac{1-\gamma^{2}}{(1+2\gamma \mu_{0})(1+2\gamma \mu)}
\end{equation}
where $\mu_{0}=\cos(i)$ and $\mu=\cos(e)$. From Equation (5), $\gamma$ can be determined and the single scattering albedo for each endmember calculated as
\begin{equation}
w=1-\gamma^{2}
\end{equation}
The $w$ values are then linearly combined and converted to reflectance with Equation (5). This was done with a Python routine that incorporates the {\it{curve\_fit}} function included in the SciPy library for Python
\citep[e.g.,][]{2020NatMe..17..261V}. This function uses the Levenberg-Marquardt algorithm, which performs a nonlinear least-squares fit of the function to the data. The routine then returns the optimal quantities for the endmembers so
that the sum of the squares of the differences between the function and the data ($\chi^{2}$) is minimized. The metal abundances of the two iron meteorites were left as free parameters, whereas the pyroxene contribution from Vaca
Muerta was given a fixed value of 0.15 (see section 4). For each asteroid the albedo was allowed to vary within its error. Mineral abundances obtained from the mixing models are presented in Table 4.
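As an illustration of this procedure, a minimal Python sketch is given below. It is not the routine used in this work: the endmember and asteroid spectra are synthetic placeholders and the starting guesses are arbitrary; only the inversion of Equation (5), the linear mixing of $w$, and the {\it{curve\_fit}} call follow the description above.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Viewing geometry of the laboratory spectra (i = 0 deg, e = 30 deg)
mu0, mu = np.cos(np.radians(0.0)), np.cos(np.radians(30.0))

def gamma_from_r(r):
    # Invert Eq. (5): (1 + 4*mu0*mu*r) g^2 + 2*(mu0 + mu)*r g + (r - 1) = 0
    a = 1.0 + 4.0 * mu0 * mu * r
    b = 2.0 * (mu0 + mu) * r
    return (-b + np.sqrt(b ** 2 - 4.0 * a * (r - 1.0))) / (2.0 * a)

def r_from_w(w):
    g = np.sqrt(1.0 - w)  # Eq. (6): w = 1 - gamma^2
    return (1.0 - g ** 2) / ((1.0 + 2.0 * g * mu0) * (1.0 + 2.0 * g * mu))

# Placeholder spectra on a common grid (illustrative shapes only)
wav = np.linspace(0.35, 2.5, 500)
r_gibeon = 0.20 + 0.04 * wav
r_georgetown = 0.15 + 0.10 * wav
r_vm = 0.30 - 0.10 * np.exp(-0.5 * ((wav - 0.93) / 0.10) ** 2)
r_ast = 0.09 + 0.02 * wav - 0.01 * np.exp(-0.5 * ((wav - 0.93) / 0.10) ** 2)

w_end = [1.0 - gamma_from_r(r) ** 2 for r in (r_gibeon, r_georgetown, r_vm)]
F_VM = 0.15  # fixed pyroxene contribution from Vaca Muerta

def model(x, f_gib, f_geo):
    # Single-scattering albedos mix linearly; convert back to reflectance
    w_mix = f_gib * w_end[0] + f_geo * w_end[1] + F_VM * w_end[2]
    return r_from_w(np.clip(w_mix, 0.0, 0.999))

popt, pcov = curve_fit(model, wav, r_ast, p0=[0.5, 0.3])  # LM by default
print("Gibeon: %.2f, Georgetown: %.2f" % tuple(popt))
\end{verbatim}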
\begin{table}[h!]
\begin{center}
\caption{\label{t:Table4} {\small Mineral abundances obtained from the mixing model. The albedo corresponds to the reflectance value measured at 0.55 $\mu$m for each fit.}}
\begin{tabular}{cccccc}
\tableline
Asteroid&Gibeon&Georgetown&Vaca Muerta&Albedo&$\chi^{2}$ \\ \hline
(16) Psyche&0.37&0.57&0.06& 0.088&0.001 \\
6178 (1986 DA)&0.57&0.28&0.15&0.105&0.003 \\
2016 ED85&0.70&0.15&0.15&0.114&0.004 \\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=8.5cm]{Figure11}
\caption{\label{f:Figure11} {\small Spectra of 6178 (1986 DA) and 2016 ED85, and the best fit obtained using the mixing model. The silicate component comes from the mesosiderite Vaca Muerta, the metal component from the iron
meteorites Gibeon and Georgetown. The spectra have been offset for clarity.}}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=8.5cm]{Figure12}
\caption{\label{f:Figure12} {\small Spectrum of asteroid (16) Psyche from \cite{2017AJ....153...29S}, and the best fit obtained using the mixing model. The silicate component comes from the mesosiderite Vaca Muerta, the metal
component from the iron meteorites Gibeon and Georgetown. The visible part of the spectrum was obtained from \cite{2002Icar..158..106B}.}}
\end{center}
\end{figure*}
Figure \ref{f:Figure11} shows the best fits obtained for 1986 DA and 2016 ED85. In general, we found that increasing the metal content significantly improves the match between the meteorite and the asteroid spectra. We performed the
same analysis on the spectra of other metal-rich asteroids and obtained similar results. As an example, Figure \ref{f:Figure12} shows the best fit obtained for (16) Psyche; in this case, the pyroxene contribution from Vaca
Muerta was fixed to a value of 0.06 based on the results obtained by \cite{2017AJ....153...29S}. For Psyche, the metal component from Georgetown contributed much more to the fit than it did for the NEAs, which instead required a larger
quantity of Gibeon to match the spectral slope (Table 4).
The results from the mixing model demonstrate that a higher metal content on the surface of the NEAs and other metal-rich asteroids could explain the spectral differences between them and the mesosiderites. As stated earlier, there is no
consensus about the origin of these meteorites; if Vesta is their parent body, then we would have to rule out a link between the mesosiderites and the metal-rich bodies in the middle and outer belt. Despite this possibility, our results suggest that the parent body of asteroids
like 1986 DA and possibly 2016 ED85 shares similar characteristics with the parent body of the mesosiderites, and that similar events that led to the formation of these meteorites could have taken place in other parts of the solar
system. Evidence for this would be given by the presence of basaltic (V-type) asteroids not related to Vesta in the same region where some of the largest M/X-types reside between $\sim$2.65 and 3.0 au (Figure \ref{f:Figure3}).
\section{1986 DA as a target for asteroid mining}
The idea of asteroid mining is not new; \cite{1977TechnolRev....79...7G} discussed the benefits of mining extraterrestrial resources, not only for the potential economic value but also as a way of reducing the environmental damage on
Earth. They considered the scenario of mining a small asteroid containing one cubic kilometer of Ni-Fe metals and estimated that for a delivery rate of 50,000 metric tons of nickel per day, the annual return at that time would have been \$100
billion. \cite{1994JGR....9921129K, 1996USGS....2821K} argued that because of the abundance and low prices of Fe and Ni, their exploitation for use on Earth would not be required in the short term, although he pointed out that they
could be used in space construction. Instead, he considered exploiting precious metals such as Au and the platinum group metals (PGM), which include Ru, Rh, Pd, Os, Ir, and Pt. \cite{1996USGS....2821K}
evaluated three mining scenarios, one of them involving the exploitation of a 2.6 km metallic NEA in the 90$^{th}$ percentile of Pt richness. He found that the annual value of precious metals from this object (in 1995 U.S. dollars), if
marketed over 50 yr, would be $\sim$ \$48 billion.
Due to their high metal content and close flybys to Earth, objects like 1986 DA and perhaps 2016 ED85 could be possible targets for asteroid mining in the future. In this section we provide an estimate of the amount of metals
that 1986 DA could contain and how much they could be worth. We have left 2016 ED85 out of this section because of the limited data available for this object.
The analysis presented here should be taken as a rough estimate, since accurate values for the mass, bulk density, and abundance of metal for 1986 DA are unknown. Because of this, several assumptions are made; in
particular, we will assume that the properties measured from the surface of the asteroid are representative of the whole body. Since this is a relatively small object, this assumption is useful as a first-order approximation. Estimates
about the cost of developing the necessary technology to extract and deliver the minerals are beyond the scope of this analysis.
We start by calculating the volume of 1986 DA; for this, we will assume that the object is a prolate ellipsoid, which is likely more accurate than assuming a spherical shape. If the amplitude of the lightcurve ($\Delta$m) of the
asteroid is known, then the ratio of its axes $a/b$ can be determined using the following relationship:
\begin{equation}
\Delta m=2.5\log_{10}\left(\frac{a}{b}\right)
\end{equation}
For 1986 DA, \cite{2019MPBu...46..304W} found that $\Delta$m = 0.09 mag. If we assume that the radius of the asteroid (1400 m) corresponds to the semimajor axis ($a$), then $b$ = 1288.63 m. The volume of a prolate ellipsoid is
given by
\begin{equation}
V=\frac{4}{3}\pi ab^{2}
\end{equation}
which yields $V$ = 9.74$\times$10$^{9}$ m$^{3}$. Thus, assuming that the surface bulk density of the asteroid (3790 kg/m$^{3}$) is representative of the whole body, this results in a
total mass (in metric tons) $M$ = 3.69$\times$10$^{10}$ mt for 1986 DA. From the compositional analysis we determined that this asteroid could have a metal content of $\sim$ 85\%; therefore, the total metal mass for this object is
$M_{metal}$ = 3.14$\times$10$^{10}$ mt.
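These numbers follow from a few lines of arithmetic; a short sketch reproducing them:
\begin{verbatim}
import numpy as np

dm = 0.09                           # lightcurve amplitude (mag)
a = 1400.0                          # semimajor axis (m)
b = a / 10.0 ** (dm / 2.5)          # Eq. (7): Delta m = 2.5 log10(a/b)

V = 4.0 / 3.0 * np.pi * a * b ** 2  # Eq. (8): prolate ellipsoid volume
rho = 3790.0                        # surface bulk density (kg/m^3)
M = rho * V / 1000.0                # total mass in metric tons
M_metal = 0.85 * M                  # ~85% metal from the compositional fit

# V ~ 9.74e9 m^3, M ~ 3.69e10 mt, M_metal ~ 3.14e10 mt
print(f"V = {V:.3e} m^3, M = {M:.3e} mt, M_metal = {M_metal:.3e} mt")
\end{verbatim}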
\begin{table}[h!]
\begin{center}
\caption{\label{t:Table5} {\small Abundance of metals in an average iron meteorite and 6178 (1986 DA). Abundances corresponding to Fe, Ni, Co, and Cu in iron meteorites are from \cite{1975himt.book.....B}.
The abundances of Au and the PGM are from \cite{1994JGR....9921129K, 1996USGS....2821K}, and correspond to the 90$^{th}$ percentile in Ir and Pt. Metal values have been calculated only for precious metals, i.e., Au
and the PGM, and are reported in Trillion USD.}}
\begin{tabular}{cccc}
\tableline
Metal&Abundance&Mass (mt)&Metal value (10$^{12}$ USD) \\ \hline
Fe&90.6 (\% weight)&2.84$\times$10$^{10}$&- \\
Ni&7.9 (\% weight)&2.48$\times$10$^{9}$&- \\
Co&0.5 (\% weight)&1.57$\times$10$^{8}$&- \\
Cu&150 (ppm)&4.71$\times$10$^{6}$&- \\
Ru&20.7 (ppm)&6.49$\times$10$^{5}$&0.1749 \\
Rh&3.9 (ppm)&1.22$\times$10$^{5}$&5.9583 \\
Pd&2.6 (ppm)&8.16$\times$10$^{4}$&1.7035 \\
Os&14.1 (ppm)&4.42$\times$10$^{5}$&0.0161 \\
Ir&14.0 (ppm)&4.39$\times$10$^{5}$&0.5167 \\
Pt&28.8 (ppm)&9.03$\times$10$^{5}$&2.2406 \\
Au&0.6 (ppm)&1.88$\times$10$^{4}$&1.0436 \\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=9cm]{Figure13}
\caption{\label{f:Figure13} {\small Metal content for 6178 (1986 DA) compared to the reserves worldwide as of 2020. World reserves for each metal were obtained from the U.S. Geological Survey mineral commodity
summaries 2021.}}
\end{center}
\end{figure*}
Once we have determined the total mass of the metal component, we need to identify the metals that could be present in this object. For this, we use as a reference the composition of the average iron meteorite (Table 5), which
includes Fe, Ni, and Co as the most abundant metals. We have also included Cu, Au, and the PGM because of their strategic use and value. The mass of each metal was obtained by multiplying its abundance by the total mass of the
metal component of the asteroid. The amount of each metal for 1986 DA is shown in Table 5. To put these numbers in context, in Figure \ref{f:Figure13} we compare the mass of the metals with the reserves worldwide
as of 2020. We estimated that the amounts of Cu and Au present in 1986 DA would be lower than those on Earth, whereas the amounts of Fe, Ni, Co, and the PGM would exceed the reserves worldwide.
From this point, determining how much the asteroid is worth would only require multiplying the mass of each metal by its market price. However, increasing the metal supply would have an effect on the prices of these commodities on
Earth. \cite{1994JGR....9921129K, 1996USGS....2821K} determined that market prices of most of these metals would crash as a result of flooding the market with metals imported from the asteroid. Thus, current market prices need to
be adjusted to account for this effect. \cite{1994JGR....9921129K} developed an empirical model where the adjusted or deflated prices $p'$ are related to the current prices, $p_{o}$, by
\begin{equation}
p'=p_{o}(P'/P_{o})^{-0.60}
\end{equation}
where $P'$ is the rate of production of the asteroid's metal plus terrestrial metal and $P_{o}$ is the current rate of production. Using Equation (9), and following \cite{1994JGR....9921129K, 1996USGS....2821K}, we estimated how much 1986
DA could be worth considering only the precious metals, i.e., Au and the PGM. Assuming that the asteroid is mined and the metals marketed over 50 yr, we found that the annual value of precious metals (in 2021
U.S. dollars) for 1986 DA would be $\sim$\$233 billion. The adjusted prices in Table 5 show that 1986 DA could be worth a total of $\sim$\$11.65 trillion.
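A sketch of how Equation (9) is applied is given below, using the Pt mass from Table 5; the current price and terrestrial production rate are hypothetical placeholders, not values from this work or from \cite{1994JGR....9921129K}.
\begin{verbatim}
def deflated_price(p0, P_ast, P_terr):
    # Eq. (9): p' = p0 * (P'/P0)^(-0.60), with P' = P_ast + P_terr
    return p0 * ((P_ast + P_terr) / P_terr) ** -0.60

mass_mt = 9.03e5              # Pt mass in 1986 DA from Table 5 (mt)
P_ast = mass_mt / 50.0        # marketed over 50 yr (mt/yr)
p0 = 30000.0                  # hypothetical current Pt price (USD/kg)
P_terr = 200.0                # hypothetical terrestrial production (mt/yr)

p_adj = deflated_price(p0, P_ast, P_terr)
total = mass_mt * 1000.0 * p_adj  # total deflated value in USD
print(f"deflated price: {p_adj:.0f} USD/kg, total: {total:.3e} USD")
\end{verbatim}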
\section{Summary}
We carried out a detailed physical and compositional characterization of the NEAs 1986 DA and 2016 ED85. We found that the NIR spectra of both asteroids exhibit
characteristics distinctive of metal-rich asteroids, i.e., red slopes, convex shapes, and a weak pyroxene absorption band at $\sim$0.93 $\mu$m. Radar observations
of 1986 DA confirmed that this is a metal-rich asteroid, whereas 2016 ED85 can only be considered as a candidate metal-rich body at this point.
The compositional analysis of 1986 DA and 2016 ED85 showed that these objects have a pyroxene chemistry (Fs$_{40.6\pm3.3}$Wo$_{8.9\pm1.1}$) comparable
to HEDs and mesosiderites. The intensity of the 0.9 $\mu$m band suggests that both asteroids could be composed of $\sim$15\% pyroxene and 85\% metal.
A comparison between the NIR spectra of the two NEAs and the spectra of M/X-type asteroids in the middle and outer belt showed that the spectral
characteristics of 1986 DA and 2016 ED85 are consistent with those of the asteroids in the main belt.
We used the NEO model developed by \cite{2017A&A...598A..52G, 2018Icar..312..181G} to determine possible source regions for these bodies. We found
that the most likely region from which 1986 DA and 2016 ED85 originated is the 5:2 MMR with Jupiter near 2.8 au, with probabilities of 76\% and 49\%, respectively.
We evaluated the scenario in which both NEAs come from the same parent body and resulted from the formation of an asteroid family. This scenario would require the existence of a family with
similar spectral characteristics, located close to the 5:2 resonance, and capable of delivering bodies with sizes $>$ 3 km to the resonance. We identified the Phaeo family as a possible candidate, although we note that more spectroscopic observations are needed to confirm that it is composed of metal-rich bodies.
The spectra of the NEAs were compared with laboratory spectra of bencubbinite and mesosiderite samples. Differences in the NIR spectra and pyroxene
chemistry suggest that bencubbinites are not good meteorite analogs for 1986 DA and 2016 ED85. Mesosiderites, on the other hand, were found to have a similar
pyroxene chemistry and produced a good spectral match when metal was added, suggesting similarities between the parent body of the NEAs and the parent body of these meteorites.
We estimated that the amounts of Fe, Ni, Co, and the PGM present in 1986 DA could exceed the reserves worldwide. Moreover, if 1986 DA is mined
and the metals marketed over 50 yr, the annual value of precious metals for this object would be $\sim$\$233 billion.
\begin{acknowledgments}
This research work was supported by NASA Near-Earth Object Observations grant NNX17AJ19G (PI: V. Reddy). We thank the IRTF TAC for awarding time to this project, as well as the IRTF TOs and MKSS staff for their support. The
authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most
fortunate to have the opportunity to conduct observations from this mountain. We thank the anonymous reviewers for useful comments that helped improve this paper.
\end{acknowledgments}
\section{Introduction}
There are many sources of neutrinos in nature (see, e.g., a review in Ref.~\cite{Vitagliano:2019yzm}). The detection of neutrinos from explosive astrophysical sites, such as core-collapse supernovae (CCSNe) and neutron star mergers, helps us understand the mechanism of these explosive phenomena. The neutrino fluxes are affected by neutrino oscillations, which are sensitive to the environment inside the astrophysical sites (see, e.g., reviews in Refs.~\cite{Horiuchi2018WhatDetection,Janka2017NeutrinoSupernovae,Mirizzi2015SupernovaDetection}). Coherent forward scattering of neutrinos on background matter induces a refractive effect that influences flavor conversions of neutrinos. The matter effect called the ``Mikheyev-Smirnov-Wolfenstein (MSW) effect" \cite{Wolfenstein1978NeutrinoMatter,Mikheev:1986gs} is caused by charged-current interactions of neutrinos with background electrons. Neutrino-neutrino interactions play an important role in flavor conversions in dense neutrino gases, where large numbers of neutrinos are produced. It has been found numerically that neutrino-neutrino interactions induce non-linear flavor conversions, the so-called ``collective neutrino oscillations" (CNOs), in CCSNe \cite{Duan:2006an,Fogli:2007bk,Dasgupta:2010cd,Duan:2010bf,Friedland:2010sc,Mirizzi:2010uz,Cherry:2010yc,Chakraborty:2011gd,Wu:2014kaa,Cirigliano:2017hmk,Cirigliano:2018rst,Sasaki:2017jry,Sasaki:2019jny,Cherry:2019vkv,Zaizen2020JCAP,Zaizen:2020xum}, neutron star mergers \cite{Malkus:2014iqa,Malkus:2015mda,Wu:2015fga,Zhu:2016mwa,Frensel:2016fge,Chatelain:2016xva,Tian:2017xbr,Vlasenko:2018irq,Shalgar:2017pzd}, and the early Universe \cite{Kostelecky:1993yt,Dolgov:2002ab,Wong:2002fa,Johns:2016enc,deSalas:2016ztq,Hasegawa:2019jsa,Hansen:2020vgm}. Collective neutrino oscillations can change neutrino spectra dramatically and potentially affect neutrino signals in detectors \cite{Wu:2014kaa,Sasaki:2019jny,Zaizen2020JCAP,Zaizen:2020xum} and nucleosynthesis inside astrophysical sites, such as the $\nu p$ process \cite{MartinezPinedo:2011br,Pllumbi:2014saa,Sasaki:2017jry,Xiong:2020ntn} and the $r$ process \cite{Malkus:2015mda,Wu:2017drk,George:2020veu}.
Recently, much attention has been focused on fast pairwise collective neutrino oscillations, whose oscillation scales are
$\sim\mathcal{O}(10^{-5})\,\mathrm{km}$ (see, e.g., a review \cite{Tamborra:2020cul}). Fast flavor conversions may occur near neutrino spheres and have a strong impact on explosion dynamics in astrophysical sites. The possibility of fast flavor conversions was first proposed in Ref.~\cite{Sawyer:2005jk}. Fast flavor conversions are associated with anisotropic angular distributions of neutrinos \cite{Sawyer:2005jk,Sawyer:2008zs,Sawyer:2015dsa}. Such fast modes do not appear in numerical simulations assuming the ``bulb model" \cite{Duan:2006an}, where all neutrinos are emitted isotropically on the surface of a fixed neutrino sphere irrespective of neutrino species and neutrino energy. The growth of the instability induced by fast flavor conversions has been studied through linear stability analysis \cite{Dasgupta:2016dbv,Izaguirre:2016gsx,Chakraborty:2016lct,Capozzi:2017gqd}. The instability of fast flavor conversions has been studied in CCSNe \cite{Abbar:2018shq,Abbar:2019zoq,Glas:2019ijo,DelfanAzari:2019tez,DelfanAzari:2019epo,Morinaga:2019wsv,Nagakura2019a,Abbar:2020qpi} and neutron star mergers \cite{Wu:2017qpc,George:2020veu,Li:2021vqj} by employing simulation data of neutrino radiation hydrodynamics. A crossing between the angular distribution of $\nu_{e}$ and that of $\bar{\nu}_{e}$, the so-called ``electron-lepton number (ELN) crossing", is associated with fast flavor conversions \cite{Abbar:2017pkh,Izaguirre:2016gsx}. Methods to search for ELN crossings in multi-dimensional (multi-D) CCSN simulations have recently been developed \cite{Abbar:2020fcl,Nagakura:2021suv,Johns:2021taz}, and such treatments have been applied to state-of-the-art supernova simulations \cite{Capozzi:2020syn,Nagakura2021a}. In general, numerical simulations of fast flavor conversions in the non-linear regime are challenging, but fast flavor conversions can be calculated even in the non-linear regime within local simulations
\cite{Dasgupta:2017oko,Abbar:2018beu,Capozzi:2018clo,Johns:2019izj,Shalgar:2020xns,Shalgar:2020wcx,Shalgar:2021wlj,Martin:2019gxb,Martin:2021xyl,Zaizen2021a,Xiong:2021dex,Kato2021a,Wu2021a,Richers:2021nbx,Johns:2021arXiv210411369J,Shalgar:2021arXiv210615622S}.
The Boltzmann collision terms, which correspond to contributions from incoherent scattering, emission, and absorption of neutrinos, have been taken into account in simulations of collective neutrino oscillations \cite{Cirigliano:2017hmk,Cirigliano:2018rst,Zaizen2020JCAP,Capozzi:2018clo,Shalgar:2020wcx,Shalgar:2021wlj,Martin:2021xyl,Kato2021a}. Neutrino scattering terms can increase the transition probability of neutrinos after the fast flavor conversions \cite{Shalgar:2020wcx,Johns:2021arXiv210411369J}. On the other hand, Ref.~\cite{Martin:2021xyl} shows that fast flavor conversions are damped and neutrino spectra become isotropic on the scale of the mean free path of neutrinos. These two results seem to contradict each other, and the role of Boltzmann collisions in fast flavor conversions is still unknown.
In this work, we calculate fast flavor conversions in the non-linear regime and study the effect of neutrino scatterings on the fast flavor conversions based on the dynamics of two-flavor neutrino polarization vectors. In Sec.~\ref{sec:Methods}, we explain our numerical setup for fast flavor conversions in the non-linear regime. In Sec.~\ref{sec:Results}, we show the numerical results and discuss the effect of neutrino scatterings. Here, we divide the evolution of fast flavor conversions into three stages: the linear evolution phase, the limit cycle phase, and the relaxation phase. We also discuss the effects of various collision terms on fast flavor conversions. We finally summarize our results in Sec.~\ref{sec:Conclusions}.
\section{Methods}\label{sec:Methods}
We calculate fast flavor conversions of two-flavor neutrinos ($\nu_{e}$,$\nu_{x}$) and antineutrinos ($\bar{\nu}_{e}$,$\bar{\nu}_{x}$) with collision terms for neutrino scattering, based on the formalism in Ref.~\cite{Shalgar:2020wcx}. We analyze the behavior of the flavor conversions through the geometrical representation of neutrino density matrices. In this section, we explain the numerical setup for our calculation and introduce the equations of motion of the polarization vectors used to analyze the dynamics of flavor conversions.
Neutrino oscillations considering Boltzmann collisions are expressed by the time evolution of neutrino density matrix $\rho$ and that of antineutrino $\bar{\rho}$ \cite{Sigl:1992fn,Yamada:2000za,Cirigliano:2009yt,Vlasenko:2013fja,Blaschke:2016xxt,Cirigliano:2018rst,Shalgar:2020wcx,Martin:2021xyl}:
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} t}\rho=&-i[H,\rho] + C[\rho,\bar{\rho}],\\
\frac{\mathrm{d}}{\mathrm{d} t}\bar{\rho}=&-i[\bar{H},\bar{\rho}] + \bar{C}[\rho,\bar{\rho}],
\end{align}
where $H(\bar{H})$ and $C(\bar{C})$ are the Hamiltonian and the collision term of neutrinos (antineutrinos), respectively. The time integration of the equations is performed with the fourth-order Runge-Kutta method.
We have confirmed that the results with the 5th-order method are the same as those of the 4th-order method.
The time step is chosen as $\Delta t < 0.1/\max[H_{i,j,\theta},\bar{H}_{i,j,\theta}]$, where $i,j$ run over $e,x$, and the components of the Hamiltonian depend on the angle $\theta$.
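For concreteness, one such Runge-Kutta step can be sketched as follows; this is an illustration rather than our production code, and \verb!rhs! stands for an assumed callback evaluating the full right-hand sides $-i[H,\rho]+C[\rho,\bar{\rho}]$ and $-i[\bar{H},\bar{\rho}]+\bar{C}[\rho,\bar{\rho}]$ on the angular grid.
\begin{verbatim}
def rk4_step(rho, rho_bar, t, dt, rhs):
    # rho, rho_bar: complex numpy arrays of shape (n_angle, 2, 2)
    # rhs(t, rho, rho_bar) -> (d rho/dt, d rho_bar/dt)
    k1 = rhs(t, rho, rho_bar)
    k2 = rhs(t + 0.5 * dt, rho + 0.5 * dt * k1[0],
             rho_bar + 0.5 * dt * k1[1])
    k3 = rhs(t + 0.5 * dt, rho + 0.5 * dt * k2[0],
             rho_bar + 0.5 * dt * k2[1])
    k4 = rhs(t + dt, rho + dt * k3[0], rho_bar + dt * k3[1])
    rho_new = rho + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    rb_new = rho_bar + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return rho_new, rb_new
\end{verbatim}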
In explosive astrophysical sites without magnetic fields, the neutrino Hamiltonian is composed of the vacuum Hamiltonian, the MSW matter term, and the potential of neutrino-neutrino interactions \cite{Sasaki:2021bvu}. For simplicity, the MSW matter potential is ignored in our calculation. Instead of the matter potential, we impose an effective vacuum mixing angle $\theta_{\mathrm{v}}=10^{-6}$ that compensates for the effect of matter suppression \cite{Mirizzi:2010uz}. The vacuum Hamiltonian of two-flavor neutrinos is described by
\begin{equation}
H_{\mathrm{vac}}=\frac{\omega}{2}\left(
\begin{array}{c c}
-\cos2\theta_{\mathrm{v}}&\sin2\theta_{\mathrm{v}}\\
\sin2\theta_{\mathrm{v}}&\cos2\theta_{\mathrm{v}}
\end{array}
\right),
\end{equation}
where $\omega=\frac{\Delta m^{2}}{2E}$ is the vacuum frequency, composed of the neutrino energy $E$ and the neutrino mass-squared difference $\Delta m^{2}$. We use a small mass difference, $\Delta m^{2}=2.5\times10^{-6}\,\mathrm{eV}^{2}$, which leads to a periodic trend of fast flavor conversions \cite{Dasgupta:2017oko,Johns:2019izj,Shalgar:2020xns}. Fast flavor conversions are associated with the angular dependence of neutrino distributions.
In this work, we remove the energy dependence of the neutrino distribution and focus on flavor conversions of single-energy neutrinos ($E=50$\,MeV).
In the case of azimuthally symmetric neutrino emission, the potential of neutrino-neutrino interactions is given by the integration over the polar angle $\theta$ \cite{Shalgar:2020xns},
\begin{align}
H_{\nu\nu}(\cos\theta)=2\pi\mu&\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}\left(
1-\cos\theta^{\prime}\cos\theta
\right)\nonumber\\
&\left\{
\rho(\cos\theta^\prime)-\bar{\rho}(\cos\theta^\prime)
\right\},
\end{align}
where $\mu$ is the strength of the neutrino-neutrino interaction, which we fix throughout the calculation as $\mu=10^{4}\,\mathrm{km}^{-1}$.
To minimize the error in the integration, we employ a 2000-point Gauss-Legendre mesh for $\theta$.
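A minimal sketch of this quadrature and of the angular integral in the potential above is given below; the array shapes and names are our own, not the code used in this work.
\begin{verbatim}
import numpy as np

n_angle = 2000
# Gauss-Legendre nodes and weights for cos(theta) on [-1, 1]
cos_t, wgt = np.polynomial.legendre.leggauss(n_angle)

def h_nu_nu(rho, rho_bar, mu=1.0e4):  # mu in km^-1
    # H_nunu = 2 pi mu * int dcos' (1 - cos*cos') (rho - rho_bar)
    diff = rho - rho_bar                               # (n_angle, 2, 2)
    m0 = np.tensordot(wgt, diff, axes=(0, 0))          # zeroth moment
    m1 = np.tensordot(wgt * cos_t, diff, axes=(0, 0))  # first moment
    return 2.0 * np.pi * mu * (m0[None, :, :]
                               - cos_t[:, None, None] * m1[None, :, :])
\end{verbatim}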
In general, the Boltzmann collision terms for neutrino scattering depend on neutrino energy, scattering angles, and flavors. In this work, we ignore the flavor dependence of the collision terms and focus on elastic scattering of neutrinos. We employ direction-changing collisions \cite{Shalgar:2020wcx},
\begin{align}
C[\rho,\bar{\rho}]=&-\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}C_{\mathrm{loss}}\rho(\cos\theta)\nonumber\\
&+\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}C_{\mathrm{gain}}\rho(\cos\theta^{\prime}), \label{eq:collision rho}\\
\bar{C}[\rho,\bar{\rho}]=&-\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}\bar{C}_{\mathrm{loss}}\bar{\rho}(\cos\theta)\nonumber\\
&+\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}\bar{C}_{\mathrm{gain}}\bar{\rho}(\cos\theta^{\prime}),\label{eq:collision rhob}
\end{align}
where the first and second terms of the above equations represent ``loss" and ``gain" terms, respectively. The number of neutrinos (antineutrinos) is conserved when $C_{\mathrm{loss}}=C_{\mathrm{gain}}$ ($\bar{C}_{\mathrm{loss}}=\bar{C}_{\mathrm{gain}}$) is satisfied. Here, we assume constant collision terms, $C_{\mathrm{loss}}=C_{\mathrm{gain}}=\bar{C}_{\mathrm{loss}}=\bar{C}_{\mathrm{gain}}=C/2$, irrespective of neutrino scattering angles, following Refs.~\cite{Shalgar:2020wcx,Shalgar:2021wlj}.
At the beginning of the calculation ($t=0$), we employ the following distributions of $\nu_{e}$ and $\bar{\nu}_{e}$:
\begin{equation}
\label{eq:initial condition}
\begin{split}
&\rho_{ee}(\cos\theta)=0.5,\\
&\bar{\rho}_{ee}(\cos\theta)=0.47+0.05\exp(-2(\cos\theta-1)^{2}),
\end{split}
\end{equation}
where $\rho_{xx}$ and $\bar{\rho}_{xx}$ are equal to zero (see the top panel of Fig.~\ref{fig:spec relaxation_caseb}). There is an ELN crossing around $\cos\theta\sim0.5$ in Eq.~(\ref{eq:initial condition}). These initial conditions correspond to those of Case B in Ref.~\cite{Shalgar:2020wcx}. We impose a random phase, i.e., $\rho_{ex}=\rho_{ee}\,\epsilon$ and $\bar{\rho}_{ex}=\bar{\rho}_{ee}\,\epsilon$, where $\epsilon$ is a random complex number of the order of $10^{-8}$.
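These initial conditions can be set up as follows; this is a sketch with our own variable names, and the random seed is arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
cos_t, _ = np.polynomial.legendre.leggauss(2000)

rho = np.zeros((cos_t.size, 2, 2), dtype=complex)
rho_bar = np.zeros_like(rho)

# Initial angular distributions; the x-flavor entries stay zero
rho[:, 0, 0] = 0.5
rho_bar[:, 0, 0] = 0.47 + 0.05 * np.exp(-2.0 * (cos_t - 1.0) ** 2)

# Random off-diagonal seed of order 1e-8 with a random phase
eps = 1.0e-8 * rng.random(cos_t.size) \
      * np.exp(2j * np.pi * rng.random(cos_t.size))
rho[:, 0, 1] = rho[:, 0, 0] * eps
rho[:, 1, 0] = np.conj(rho[:, 0, 1])
rho_bar[:, 0, 1] = rho_bar[:, 0, 0] * eps
rho_bar[:, 1, 0] = np.conj(rho_bar[:, 0, 1])
\end{verbatim}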
In our numerical setup, the neutrino density matrix depends on the polar angle $\theta$ and the time $t$.
Hereafter, for simplicity of notation, we do not write this dependence explicitly. For two-flavor neutrinos, the neutrino density matrix is decomposed into the Pauli matrices $\sigma_{i}\,(i=x,y,z)$ and the polarization vector $\mathbf{P}=(P_{x},P_{y},P_{z})$:
\begin{equation}
\label{eq:geometrical representation}
\rho=\left(
\begin{array}{c c}
\rho_{ee}&\rho_{ex}\\
\rho_{xe}&\rho_{xx}
\end{array}
\right)
=\frac{\mathrm{Tr}\rho}{2}I_{2\times2}+\frac{P_{i}\sigma_{i}}{2},
\end{equation}
where $I_{2\times2}=\mathrm{diag}(1,1)$. The density matrix of antineutrinos $\bar{\rho}$ is represented in the same way by the antineutrino polarization vector $\mathbf{\bar{P}}=(\bar{P}_{x},\bar{P}_{y},\bar{P}_{z})$. In our numerical setup, the equations of motion of the polarization vectors are written as
\begin{equation}
\begin{split}
\label{eq:EOM neutrino vector}
\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{P}&=\left(
+\omega\mathbf{B}+\mu^{\prime}\mathbf{D_{0}}-\mu^{\prime}\cos\theta\mathbf{D_{1}}
\right)\times\mathbf{P}-C\mathbf{P}+C\average{\mathbf{P}},\\
\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{\bar{P}}&=\left(
-\omega\mathbf{B}+\mu^{\prime}\mathbf{D_{0}}-\mu^{\prime}\cos\theta\mathbf{D_{1}}
\right)\times\mathbf{\bar{P}}-C\mathbf{\bar{P}}+C\average{\mathbf{\bar{P}}}.
\end{split}
\end{equation}
The variables in the equations are defined as
\begin{equation}
\begin{split}
\label{eq:vectors}
\mathbf{B}&=\left(
\sin2\theta_\mathrm{v},0,-\cos2\theta_\mathrm{v}
\right),\\
\mathbf{D_{0}}&=\average{\mathbf{P}-\mathbf{\bar{P}}},\\
\mathbf{D_{1}}&=\average{\left(
\mathbf{P}-\mathbf{\bar{P}}
\right)\cos\theta},
\end{split}
\end{equation}
where $\mu^{\prime}=4\pi\mu$. Here, $\average{A}=\frac{1}{2}\int^{1}_{-1} A\ \mathrm{d}\cos\theta$ represents the angular average of a quantity $A$ that is a function of $\cos\theta$. In the next section, we analyze the behavior of neutrino oscillations based on the motion of the polarization vectors governed by Eq.~(\ref{eq:EOM neutrino vector}). The $z$-components of the polarization vectors carry the information on the neutrino numbers. A finite value of $C$ changes the length of the neutrino polarization vector. From Eq.~(\ref{eq:EOM neutrino vector}), we can derive the equation of motion of the angle-averaged squared length of the polarization vectors,
\begin{equation}
\begin{split}
\label{eq:mean square polarization vectors}
\frac{\mathrm{d}}{\mathrm{d}t}\average{|P|^{2}}&=-2C\left(
\average{|P|^{2}}-|\average{P}|^{2}
\right),\\
\frac{\mathrm{d}}{\mathrm{d}t}\average{|\bar{P}|^{2}}&=-2C\left(
\average{|\bar{P}|^{2}}-|\average{\bar{P}}|^{2}
\right).
\end{split}
\end{equation}
The right-hand sides of Eq.~\eqref{eq:mean square polarization vectors} vanish when the deviations in the angular distributions of the neutrino and antineutrino polarization vectors disappear. Equation~(\ref{eq:mean square polarization vectors}) implies that the distributions of polarization vectors become isotropic in equilibrium states of fast flavor conversions owing to collision effects. Furthermore, the Boltzmann collision changes the direction of emitted neutrinos, so that, in general, $\mathrm{Tr}\rho$ and $\mathrm{Tr}\bar{\rho}$ are no longer invariant during flavor conversions, even though the total neutrino numbers $2\average{\mathrm{Tr}\rho}$ and $2\average{\mathrm{Tr}\bar{\rho}}$ are conserved. The time evolution of these traces of the density matrices can be solved analytically. For the initial distribution in Eq.~(\ref{eq:initial condition}), the trace of the neutrino density matrix does not evolve ($\mathrm{Tr}\rho=0.5$). On the other hand, the value of $\mathrm{Tr}\bar{\rho}$ approaches exponentially that of $\average{\mathrm{Tr}\bar{\rho}}(=0.4857)$ irrespective of the scattering angle $\theta$, and the angular dependence finally disappears in equilibrium.
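Explicitly, taking the trace of the equations of motion (the commutators are traceless), the constant collision terms above give
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\bar{\rho}=-C\left(\mathrm{Tr}\bar{\rho}-\average{\mathrm{Tr}\bar{\rho}}\right).
\end{equation}
Averaging this equation over angles shows that $\average{\mathrm{Tr}\bar{\rho}}$ is constant in time, so that
\begin{equation}
\mathrm{Tr}\bar{\rho}(\cos\theta,t)=\average{\mathrm{Tr}\bar{\rho}}+\left[\mathrm{Tr}\bar{\rho}(\cos\theta,0)-\average{\mathrm{Tr}\bar{\rho}}\right]e^{-Ct}
\end{equation}
(in units with $c=1$), which makes the exponential relaxation toward $\average{\mathrm{Tr}\bar{\rho}}=0.4857$ explicit.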
\section{Results}\label{sec:Results}
Figure \ref{fig:caseB-t-Pex} shows the overall evolution of the angle-averaged transition probability $\average{P_{ex}}$ \cite{Shalgar:2020wcx},
\begin{equation}
\label{eq:transition probability general}
\average{P_{ex}}=\frac{\average{\rho_{ee}}_{\rm ini}-\average{\rho_{ee}}}{\average{\rho_{ee}}_{\rm ini}-\average{\rho_{xx}}_{\rm ini}},
\end{equation}
where $\average{\rho_{\alpha\alpha}}_{\rm ini}$ is the initial value of $\average{\rho_{\alpha\alpha}}\ (\alpha=e,x)$. In our numerical setup with Eqs.~(\ref{eq:collision rho}) and (\ref{eq:collision rhob}), the trace of the neutrino density matrix is invariant during flavor conversions ($\mathrm{Tr}\rho=0.5$), so that $\average{P_{ex}}$ can be written as
\begin{equation}
\label{eq:transition probability}
\average{P_{ex}}=\frac{1}{2}\left(
1-\frac{\average{P_{z}}}{\average{P_{z}}_{\rm ini}}
\right),
\end{equation}
where $\average{P_{z}}_{\rm ini}=0.5$ is the initial value of $\average{P_{z}}$.
In the case of $C=0\,\mathrm{km}^{-1}$ (blue curve), a periodic structure of fast flavor conversions is confirmed. On the other hand, in the case of $C=1\,\mathrm{km}^{-1}$ (red curve), flavor conversions are enhanced by the collision terms and the transition probability reaches an equilibrium value. The properties of fast flavor conversions in our calculation are consistent with the results in Ref.~\cite{Shalgar:2020wcx} (see Case B of Fig.~2 in that paper) except for the time scale of the flavor conversions. The probability in Fig.~\ref{fig:caseB-t-Pex} becomes large when $t>0.05\times10^{-5}\,{\rm s}$ in both cases. The flavor conversions in Fig.~\ref{fig:caseB-t-Pex} evolve faster than those in Ref.~\cite{Shalgar:2020wcx}. A similar discrepancy in the time scale of flavor conversions is also reported in QKE-MC simulations \cite{Kato2021a}. This issue might be related to the strength of the initial perturbation: since we imposed a random seed of order $10^{-8}$, the first peak in $\average{P_{ex}}$ may arise faster.
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{t-Pex_caseb.png}\\
\caption{
The time evolution of transition probability $\average{P_{ex}}$ with (red) and without (blue) the collision terms.
}
\label{fig:caseB-t-Pex}
\end{figure}
The dynamics of fast flavor conversions is divided into three epochs: the linear evolution phase, the limit cycle phase, and the relaxation phase. The fast flavor conversions are enhanced in the limit cycle phase owing to the collision effect. The distribution of neutrinos finally becomes isotropic in the relaxation phase, when the evolution time is comparable to the time scale of the collision term ($ct\sim C^{-1}$), where $c$ is the speed of light.
Hereafter we omit $c$ when converting time scales to length scales.
\subsection{Linear evolution phase}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{palog00905_caseb.png}\\
\caption{
The map of normalized neutrino polarization vector $(\Tilde{P}_{x},\Tilde{P}_{y})$ in the linear evolution phase ($t=0.03\times10^{-5}$\,s). The normalized polarization vector is defined in Eq.~(\ref{eq:polarization vector normalized}).
}
\label{fig:palog00905_caseb}
\end{figure}
We call the epoch before reaching the first peak in $\average{P_{ex}}$ the linear evolution phase. In this early phase of fast flavor conversions ($t<0.05\times10^{-5}$\,s), the instability of flavor conversions appears near the ELN crossing in the initial distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ ($\cos\theta\sim0.5$). As shown in Fig.~\ref{fig:caseB-t-Pex}, collision effects are negligible in the linear evolution phase because of the large time scale of the collision terms ($t \ll C^{-1}$), so that flavor conversions with and without collision terms are almost equivalent. Here, we use time snapshots of neutrino polarization vectors with $C=1\,\mathrm{km}^{-1}$ at $t=0.03\times10^{-5}$\,s to study the behavior of fast flavor conversions during the linear phase.
Fig.~\ref{fig:palog00905_caseb} shows a map of normalized neutrino polarization vectors on the $\Tilde{P}_{x}$-$\Tilde{P}_{y}$ plane at $t=0.03\times 10^{-5}$\,s. In this figure, to make the polarization vectors visible even when the transition probability is small, each polarization vector is normalized as
\begin{equation}
\begin{split}
\label{eq:polarization vector normalized}
\Tilde{P}_{x}&=\left(1+\frac{\log_{10}P_{R}/|P|}{15}\right)
\cos (P_\phi),\\
\Tilde{P}_{y}&=\left(1+\frac{\log_{10}P_{R}/|P|}{15}\right)
\sin (P_\phi),
\end{split}
\end{equation}
where $P_{R}=\sqrt{P_{x}^{2}+P_{y}^{2}}$ and $P_\phi=\tan^{-1}(P_{y}/P_{x})$ (in this paper, the inverse tangent is calculated with the \verb!atan2! function in \verb!C++!). At first, all neutrino polarization vectors lie along the $z$-axis ($(0,0)$ in Fig.~\ref{fig:palog00905_caseb}). As time passes, the polarization vectors begin a spiral motion around the $z$-axis, increasing the value of $P_{R}$.
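In code, this normalization amounts to a few lines (a sketch with our own naming; note that $P_{R}=0$ is a singular point of the map):
\begin{verbatim}
import numpy as np

def normalized_polarization(Px, Py, Pz):
    # Radius 1 + log10(P_R/|P|)/15 compresses 15 decades of P_R/|P|
    # into the unit disk, keeping tiny transverse components visible
    P_abs = np.sqrt(Px ** 2 + Py ** 2 + Pz ** 2)
    P_R = np.sqrt(Px ** 2 + Py ** 2)
    P_phi = np.arctan2(Py, Px)     # same convention as atan2 in C++
    rad = 1.0 + np.log10(P_R / P_abs) / 15.0
    return rad * np.cos(P_phi), rad * np.sin(P_phi)
\end{verbatim}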
\begin{figure}[htb]
\includegraphics[width=0.85\linewidth]{phase00905_caseb.png}\\
\includegraphics[width=0.85\linewidth]{pznorm00905_caseb.png}\\
\includegraphics[width=0.85\linewidth]{dphi00905_caseb.png}
\caption{
Polarization vector at $t=0.03\times10^{-5}$\,s.
Top: The angular distribution of the phase $P_\phi=\tan^{-1}(P_{y}/P_{x})$.
The gray line is the phase of averaged polarization vector, $\langle P\rangle_\phi=\tan^{-1}(\langle P_y\rangle/\langle P_x\rangle)$.
Middle: The angular distribution of $1-P_{z}/|P|$ in the unit of $10^{-5}$.
Bottom: The angular distribution of the phase difference $\delta$. The definition of the $\delta$ is given in Eq.~(\ref{eq:phase difference}).
The gray lines indicate $1,0,-1$\,$[\pi]$.
}
\label{fig:phase00905_caseb}
\end{figure}
The dynamics of the polarization vectors can be understood in cylindrical coordinates ($R$, $\phi$, $z$) as follows. First, we need to define and explain some variables.
The quantity $P_\phi$ represents the phase of a polarization vector on the $\Tilde{P}_{x}$-$\Tilde{P}_{y}$ plane. For negative values of $\cos\theta$, the motions of the polarization vectors synchronize with each other, which results in the same value of $P_\phi$ in the top panel of Fig.~\ref{fig:phase00905_caseb}. On the other hand, the phase $P_\phi$ is distributed broadly in the case of $\cos\theta>0$. The phase for $\cos\theta>0.6$ is close to that for $\cos\theta<0$, while $P_\phi$ takes the opposite phase in $0<\cos\theta<0.2$. The value of $P_{R}$ becomes small around $\cos\theta\sim0$, so that $\theta=\pi/2$ can be regarded as a singular point for $P_\phi$. We discuss the reason why $P_{R}$ is small around $\cos\theta=0$ in Sec.~\ref{sec:Relaxation phase}.
The $z$-component of the neutrino polarization vector is related to the transition probability. According to Eq.~(\ref{eq:transition probability}), the transition probability increases as the value of $P_{z}$ decreases. The middle panel of Fig.~\ref{fig:phase00905_caseb} shows the distribution of $1-P_{z}/|P|$ at $t=0.03\times10^{-5}$\,s. The flavor conversions do not proceed in $\cos\theta<0.2$. However,
the flavor conversions become prominent around $\cos\theta\sim0.4$, which is close to the angle of the ELN crossing ($\cos\theta\sim 0.5$) in the initial distributions.
Here, we define the phase difference between $\mathbf{P}$ and the polarization vector of neutrino Hamiltonian $\mathbf{H}$ on the $x$-$y$ plane,
\begin{equation}
\label{eq:phase difference}
\begin{split}
H_{R}&=\sqrt{H_{x}^{2}+H_{y}^{2}},\\
\delta&=P_\phi-H_\phi,
\end{split}
\end{equation}
where $\mathbf{H}=\omega\mathbf{B}+\mu^{\prime}\mathbf{D_{0}}-\mu^{\prime}\cos\theta\mathbf{D_{1}}$ is the polarization vector of the neutrino Hamiltonian and $H_\phi=\tan^{-1}(H_{y}/H_{x})$ is the phase of $\mathbf{H}$ on the $x$-$y$ plane. The bottom panel of Fig.~\ref{fig:phase00905_caseb} shows the phase difference $\delta$ of Eq.~(\ref{eq:phase difference}) at $t=0.03\times10^{-5}$\,s. The neutrino polarization vectors are almost antiparallel to the Hamiltonian vector in $\cos\theta<0.2$ because $\cos\delta\sim-1$. The polarization vector of the neutrino Hamiltonian on the $x$-$y$ plane can be written as in Eqs.~\eqref{eq:Hamiltonian vector x}~and~\eqref{eq:Hamiltonian vector y}. Since $H_x$ and $H_y$ are proportional to $\cos\theta$, $|H_{\phi}(\cos\theta\to+0)-H_{\phi}(\cos\theta\to-0)|\sim\pi$ is satisfied.
Therefore, from Eq.~\eqref{eq:phase difference}, the jump $|P_{\phi}(\cos\theta\to+0)-P_{\phi}(\cos\theta\to-0)|\sim\pi$ appears around $\cos\theta=0$, as shown in the top panel of Fig.~\ref{fig:phase00905_caseb}.
The evolution of $P_{z}$ is related to the phase difference $\delta$. In the linear evolution phase, the contribution of the collision term is negligible, so that the time evolution of $P_{z}$ is derived from Eq.~(\ref{eq:EOM neutrino vector}),
\begin{equation}
\label{eq:time evolution Pz in the linear phase}
\frac{\mathrm{d}}{\mathrm{d}t}P_{z} \sim H_{x}P_{y}-H_{y}P_{x}=H_{R}P_{R}\sin\delta.
\end{equation}
The value of $\sin\delta$ in Eq.~(\ref{eq:time evolution Pz in the linear phase}) is negligible when the direction of $\mathbf{P_{R}}=(P_{x},P_{y})$ is parallel ($\delta=0$) or antiparallel ($\delta=\pm\pi$) to that of $\mathbf{H_{R}}=(H_{x},H_{y})$ on the $x$-$y$ plane. At $t=0.03\times10^{-5}$\,s, the value of $|\sin\delta|$ is no longer negligible in $\cos\theta>0.2$ and becomes maximum in $\cos\theta\sim0.4$ (see the bottom panel of Fig.~\ref{fig:phase00905_caseb}), so that fast flavor conversions proceed prominently in $\cos\theta\sim0.4$ as shown in the middle panel of Fig.~\ref{fig:phase00905_caseb}.
Several studies connect the dynamics of neutrino oscillations with synchronization phenomena \cite{Pantaleone:1998xi,Raffelt:2010za,Akhmedov:2016gzx}.
The evolution of the phase of the polarization vector can be interpreted in the framework of the Kuramoto model \cite{Kuramoto2012}, i.e.,
\begin{align}
\frac{{\rm d} \phi_i }{{\rm d} t} = \omega_i +\frac{K}{N}\sum_{j=1}^N \sin (\phi_j-\phi_i),
\end{align}
where $\phi_i$ is the phase of the $i$-th oscillator, which rotates with the natural frequency $\omega_i$.
The total number of oscillators is $N$ and $K$ is the coupling constant.
A sufficiently high $K$ leads to synchronization, i.e., all $\phi_i$ rotate with the same frequency irrespective of the original $\omega_i$.
In the context of neutrino oscillations, this is a trivial equilibrium and no flavor conversions happen.
Flavor conversion is expected when $K$ becomes slightly lower and the perfect synchronization is broken \cite{Raffelt:2010za}. Note that Refs.~\cite{Abrams2004,Kuramoto2002} consider more complicated coupling functions, which resemble our setup.
In our case, the strong coupling constant $\mu$ synchronizes the phases: the polarization vectors at $\cos\theta>0.6$ and $\cos\theta<0$ rotate with the same phase (see the top panel of Fig.~\ref{fig:phase00905_caseb}).
Due to the angular distribution with the ELN crossing, the vectors at $0<\cos\theta<0.2$ rotate with a different phase, but the shape of $\delta$ is kept in the linear phase, and this part is also synchronized to the average frequency.
As a result of the synchronization, $\delta$ is kept at $-\pi/2$ at $\cos\theta \sim 0.4$, and it plays an essential role in the flavor conversion.
The profile of $\delta$ in Fig.~\ref{fig:phase00905_caseb} is clearly explained by the condition of the synchronization; see Appendix~\ref{sec:slp 1} for details.
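The synchronization transition itself is easy to reproduce numerically; the following toy integration of the Kuramoto model (our own illustrative parameters, unrelated to the neutrino calculation) shows the incoherent and synchronized regimes through the order parameter $r=|\langle e^{i\phi}\rangle|$.
\begin{verbatim}
import numpy as np

def kuramoto_step(phi, omega, K, dt):
    # d phi_i/dt = omega_i + (K/N) sum_j sin(phi_j - phi_i)
    coupling = (K / phi.size) * np.sin(phi[None, :] - phi[:, None]).sum(axis=1)
    return phi + dt * (omega + coupling)

rng = np.random.default_rng(0)
N = 200
omega = rng.normal(0.0, 1.0, N)          # natural frequencies
phi0 = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases

for K in (0.5, 5.0):                     # below / above the sync threshold
    phi = phi0.copy()
    for _ in range(5000):
        phi = kuramoto_step(phi, omega, K, dt=0.05)
    r = abs(np.exp(1j * phi).mean())     # order parameter
    print(f"K = {K}: r = {r:.2f}")       # r ~ 0 incoherent, r ~ 1 synchronized
\end{verbatim}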
\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{t-Pex_zoom_casseb.png}\\
\caption{
Enlarged view of Fig.~\ref{fig:caseB-t-Pex} in the limit cycle phase.
}
\label{fig:t-Pex_zoom_casseb}
\end{figure}
\subsection{Limit cycle phase}\label{sec:Limit cycle phase}
\begin{figure}[htbp]
\includegraphics[width=0.95\linewidth]{d-z01_C0_caseb.png}\\
\includegraphics[width=0.95\linewidth]{d-r01_C0_caseb.png}
\caption{
Top: The time evolution of neutrino polarization vector ($\cos\theta=0.7$) on the $\delta$-$P_{z}$ plane from $0.08\times10^{-5}$\,s to $0.16\times10^{-5}$\,s. Here, we set $C=0\,\mathrm{km}^{-1}$. At $t=0.08\times10^{-5}$\,s, $\delta$ is close to $-0.1\pi$. The polarization vector moves counterclockwise along the track.
Bottom: The time evolution of polarization vector on the $\delta$-$P_{R}$ plane. The polarization vector moves clockwise along the track.
}
\label{fig:d-z01_C0_caseb}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{t-delta_zoom_caseb.png}\\
\includegraphics[width=0.95\linewidth]{t-dP_zoom_caseb.png}
\caption{
Top: The time evolution of $\delta$ at $\cos\theta=0.7$ in the limit cycle phase.
Bottom: The time evolution of $\eta=P_\phi -\langle P \rangle_{\phi}$ at $\cos\theta=0.7$ in the limit cycle phase.
The blue (red) line shows the case of $C=0\,\mathrm{km}^{-1}$ ($C=1\,\mathrm{km}^{-1}$).
}
\label{fig:t-delta_zoom_caseb}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{d-z01_C1_caseb.png}\\
\includegraphics[width=0.95\linewidth]{d-r01_C1_caseb.png}
\caption{
Same as Fig.~\ref{fig:d-z01_C0_caseb}, but including the collision term ($C=1\,\mathrm{km}^{-1}$).
}
\label{fig:d-z01_C1_caseb}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{t-Adelta_zoom_caseb.png}
\caption{
The time evolution of $|\sin\delta|$ at $\cos\theta=0.7$ (orange) and $\cos\theta=0.3$ (green).
For positive and negative $\delta$, solid and dashed curve are used, respectively.
}
\label{fig:t-abs_sin_delta_caseb}
\end{figure}
Here, we define the limit cycle phase as the period from the linear evolution phase until $\average{P_{ex}}$ settles down to its equilibrium value. Fig.~\ref{fig:t-Pex_zoom_casseb} shows an enlarged view of Fig.~\ref{fig:caseB-t-Pex} focusing on the flavor conversions in the limit cycle phase. In the case without collision terms (blue curve), the flavor conversions become periodic. On the other hand, in the case with the collision term, the amplitude of the flavor conversions gradually becomes smaller and the transition probability is enhanced by the collision effect (red curve). In the limit cycle phase, the collision effect is no longer negligible in flavor conversions. We study the collision effect on the motion of the neutrino polarization vectors in the limit cycle phase. The equation of motion of $\mathbf{P}$ in Eq.~(\ref{eq:EOM neutrino vector}) decomposes into
\begin{align}
\label{eq:time evolution Pz in the limit cycle phase}
\frac{\mathrm{d}}{\mathrm{d}t}P_{z}&=\;\; H_{R}P_{R}\sin\delta-C\left(
P_{z}-\average{P_{z}}
\right),\\
\label{eq:time evolution Pr in the limit cycle phase}
\frac{\mathrm{d}}{\mathrm{d}t}P_{R}&=-H_{R}P_{z}\sin\delta-C\left(
P_{R}-\average{P}_{R}\cos\eta
\right),\\
P_R \frac{{\rm d} P_\phi}{{\rm d} t} &=
-H_R P_z \cos\delta
-C\langle P\rangle_R\sin\eta+P_R H_z
\label{eq:time evolution Pp in the limit cycle phase}
\end{align}
where $H_{R}$ and $\delta$ are given in Eq.~(\ref{eq:phase difference}), and $\eta=P_\phi -\langle P \rangle_\phi$.
Note that we first calculate average polarization $\langle P_x \rangle$ and $\langle P_y \rangle$, and then obtain
$\average{P}_R=\sqrt{\average{P_x}^2+\average{P_y}^2}$ and
$\langle P \rangle_\phi = \tan^{-1}\left(\average{P_y} / \average{P_x}\right)$. In general,
$\average{P}_R \neq \average{P_R}$
and
$\average{P}_\phi \neq \average{P_\phi}$, where the right hand sides are the average of $P_R$ and $P_\phi$, respectively.
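For reference, the phase diagnostics entering these equations can be computed as follows; this is a sketch with our own naming, and for brevity it uses uniform angular weights for $\average{P_x}$ and $\average{P_y}$, whereas the text uses the Gauss-Legendre weights.
\begin{verbatim}
import numpy as np

def phase_diagnostics(Px, Py, Hx, Hy):
    # delta = P_phi - H_phi, eta = P_phi - <P>_phi, wrapped to (-pi, pi]
    wrap = lambda x: np.pi - (np.pi - x) % (2.0 * np.pi)
    P_phi = np.arctan2(Py, Px)
    H_phi = np.arctan2(Hy, Hx)
    # Phase of the averaged vector <P>, not the average of P_phi
    avg_phi = np.arctan2(Py.mean(), Px.mean())
    return wrap(P_phi - H_phi), wrap(P_phi - avg_phi)
\end{verbatim}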
Fig.~\ref{fig:d-z01_C0_caseb} shows the time evolution of $P_{z}$ (top) and $P_{R}$ (bottom) at $\cos\theta=0.7$ without collisions ($C=0\,\mathrm{km}^{-1}$) during $t=(0.08\text{--}0.16)\times10^{-5}\,\mathrm{s}$. Without collision terms, the evolution tracks of the polarization vectors in Fig.~\ref{fig:d-z01_C0_caseb} are closed. The blue curve in the top panel of Fig.~\ref{fig:t-delta_zoom_caseb} shows the evolution of $\delta$ without the collision effect. At first, the value of $\delta$ is approximately $-0.1\pi$ and almost constant. The value of $\delta$ increases dramatically around $t=0.1\times10^{-5}$\,s. According to the first terms on the right-hand sides of Eqs.~(\ref{eq:time evolution Pz in the limit cycle phase}) and (\ref{eq:time evolution Pr in the limit cycle phase}), the value of $P_{z}$ ($P_{R}$) decreases (increases) when $\delta$ is negative. In the case of positive $\delta$, the value of $P_{z}$ ($P_{R}$) increases (decreases). The perpendicular component is negligible ($P_{R}\sim0$) and the $z$-component is constant ($P_{z}\sim 0.5$) at $t=(0.12\text{--}0.14)\times10^{-5}$\,s irrespective of the decreasing $\delta$. In this phase, the small $H_{R}$ suppresses the time evolution of $P_{z}$ and $P_{R}$. Therefore, the polarization vector moves counterclockwise (clockwise) in the top (bottom) panel of Fig.~\ref{fig:d-z01_C0_caseb}.
In the case with neutrino scatterings ($C=1\,\mathrm{km}^{-1}$), the evolution track of the neutrino polarization vector is no longer a closed orbit but a ``limit cycle'' in phase space because of the collision terms in Eqs.~(\ref{eq:time evolution Pz in the limit cycle phase}) and (\ref{eq:time evolution Pr in the limit cycle phase}). The spiral motion of $P_{z}$ and $P_R$ at $\cos\theta=0.7$ is shown in the top and bottom panels of Fig.~\ref{fig:d-z01_C1_caseb}, respectively.
The value of $\delta$ is negative at $t=0.08\times10^{-5}$\,s (see the red curve in the top panel of Fig.~\ref{fig:t-delta_zoom_caseb}). According to the first terms on the right-hand sides of Eqs.~(\ref{eq:time evolution Pz in the limit cycle phase}) and (\ref{eq:time evolution Pr in the limit cycle phase}), the value of $P_{z}$ ($P_{R}$) decreases (increases) at first.
On the other hand, in the later phase, $\delta$ becomes positive and $P_{z}$ ($P_{R}$) increases (decreases).
The ranges over which $P_{z}$ and $P_{R}$ vary in one cycle gradually shrink with each cycle.
As shown in the red curve of the top panel of Fig.~\ref{fig:t-delta_zoom_caseb}, the oscillation amplitude of $\delta$ is decreasing and converging to zero.
In the bottom panel, the evolution of $\eta$ is also shown.
Comparing the top and bottom panels, we find $\delta\sim-\eta$.
Since the collision term tries to align the polarization vectors and decreases $|\eta|=|P_\phi-\langle P\rangle_\phi|$, $|\delta|\sim|\eta|$ is also expected to become smaller as time passes.
After such synchronization between $\mathbf{P_{R}}$ and $\mathbf{H_{R}}$, the first terms on the right-hand sides of Eqs.~\eqref{eq:time evolution Pz in the limit cycle phase} and \eqref{eq:time evolution Pr in the limit cycle phase} are negligible and the transition probability $\average{P_{ex}}$ no longer changes, which results in the end of the limit cycle phase.
One of the most interesting features of this model is that the mean value of $P_z$ becomes smaller as time passes (e.g., top of Fig.~\ref{fig:d-z01_C1_caseb}).
Fig.~\ref{fig:t-abs_sin_delta_caseb} shows $|\sin \delta|$. It is similar to Fig.~\ref{fig:t-delta_zoom_caseb}, but makes it easier to compare positive and negative $\delta$. In the case of $\cos\theta=0.7$, the averaged $|\sin \delta|$ for negative $\delta$ is larger than that for positive $\delta$ in one cycle. In addition, the period with negative $\delta$ is slightly longer than that with positive $\delta$.
From Eq.~\eqref{eq:time evolution Pz in the limit cycle phase},
this imbalance of positive and negative $\delta$ leads to the gradual decrease of $P_z$.
Note that this does not happen for all $\theta$.
In the case of $\cos\theta=0.3$, on the other hand, the positive part is slightly larger than the negative part.
This excess slightly increases the mean $P_z$ (see the bottom of Fig.~\ref{fig:perp relaxation_caseb}).
This imbalance comes from the highly non-linear dynamics of the partially synchronized oscillators,
and it is difficult to identify the mechanism that produces it. Here we present a hypothesis for its possible origin, focusing on
the impact of the collision term on $\delta=P_\phi-H_\phi$. Note that other terms may also cause the imbalance.
First we extract the impact of the collision term on the phase of the polarization vector:
$\left.\frac{{\rm d}P_\phi}{{\rm d}t}\right|_{\rm coll}= \frac{{\rm d}P_\phi}{{\rm d}t} - \left.\frac{{\rm d}P_\phi}{{\rm d}t}\right|_{C=0}$, where $\left.\frac{{\rm d}P_\phi}{{\rm d}t}\right|_{C=0}$ is obtained by substituting $C=0$ in Eq.~\eqref{eq:time evolution Pp in the limit cycle phase}. Similarly, we consider the collisional part of $\delta$: $\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}= \frac{{\rm d}\delta}{{\rm d}t} - \left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{C=0}$.
This term can be well approximated as
\begin{align}
\label{eq:Time evolution of delta}
\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}
&=
\left.\frac{{\rm d}P_\phi}{{\rm d}t}\right|_{\rm coll}
-\left.\frac{{\rm d}H_\phi}{{\rm d}t}\right|_{\rm coll}
\nonumber\\
&\sim-C\left(\frac{\average{P}_R}{P_R}\right)\sin\eta,
\end{align}
where we ignore the effect of the collision term in $\frac{{\rm d} H_\phi}{{\rm d} t}$ following Appendix~\ref{sec:derivation of time evolution of delta},
and the collision term in Eq.~\eqref{eq:time evolution Pp in the limit cycle phase} is extracted.
We demonstrate how the collision term violates the time symmetry using the schematic diagram of Fig.~\ref{fig:schematic}.
Near $t=0.0945\times 10^{-5}\,{\rm s}$ in Fig.~\ref{fig:t-abs_sin_delta_caseb},
$\delta$ at $\cos\theta=0.7$ changes its sign from negative to positive, i.e., $\frac{{\rm d}\delta}{{\rm d}t}>0$ (point A in the diagram).
The collision term would decelerate the evolution of $\delta$ when $t < 0.0945\times 10^{-5}\,{\rm s}$ and $\delta <0$, since
$\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}$ is proportional to $-C \sin\eta \sim C\sin\delta < 0$ (here $\eta\sim -\delta$; see Fig.~\ref{fig:t-delta_zoom_caseb}).
There, $\frac{{\rm d}\delta}{{\rm d}t}$ is positive and $\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}$ is negative, so the evolution of $\delta$ is decelerated by the collision (curve D to A in the diagram).
On the other hand, when $t > 0.0945\times 10^{-5}\,{\rm s}$ and $\delta >0$,
the collision term accelerates the evolution of $\delta$, since $\frac{{\rm d}\delta}{{\rm d}t}$ and
$\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}$ are both positive (curve A to B in the diagram).
These two effects make the duration in negative $\delta$ longer and positive $\delta$ shorter.
Note that at $t=0.099\times 10^{-5}\,{\rm s}$, the collision term conversely makes
the duration in negative $\delta$ shorter and the duration in positive $\delta$ longer.
Here $\frac{{\rm d}\delta}{{\rm d}t}<0$, and $\left.\frac{{\rm d}\delta}{{\rm d}t}\right|_{\rm coll}>0$ ($<0$) for positive (negative) $\delta$ in curve B to C (C to D).
A more careful analysis is required for a quantitative argument; we leave it, together with the possibility of other origins, to a future study.
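The qualitative effect of this mechanism can be checked with a toy model. Assuming $\eta\sim-\delta$, the collisional drift is approximately $+k\sin\delta$ with $k=C\average{P}_R/P_R>0$, so we evolve $\mathrm{d}\delta/\mathrm{d}t=\omega+k\sin\delta$ with a constant background drift $\omega>k$; all numbers below are illustrative, not fitted to the simulation.
\begin{verbatim}
# Toy dwell-time asymmetry: d(delta)/dt = omega + k*sin(delta), omega > k.
# The drift is smaller where sin(delta) < 0, so one cycle spends more time
# at negative delta than at positive delta, as argued around points A-D.
import numpy as np

omega, k = 1.0, 0.3          # illustrative values
dt, delta = 1e-4, -np.pi
t_neg = t_pos = 0.0
while delta < np.pi:         # one full cycle of delta
    if delta < 0.0:
        t_neg += dt
    else:
        t_pos += dt
    delta += (omega + k * np.sin(delta)) * dt
print(t_neg, t_pos)          # t_neg > t_pos
\end{verbatim}
This reproduces the longer dwell time at negative $\delta$; in the full dynamics $\omega$, $k$, and the sign relation between $\eta$ and $\delta$ all vary in time, which is why the imbalance can flip sign, as at $t=0.099\times10^{-5}$\,s.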
\begin{figure}[htbp]
\includegraphics[width=0.8\linewidth]{diagram_caseb.pdf}
\caption{
Schematic diagram of the evolution of $\delta$ and $\frac{{\rm d}\delta}{{\rm d}t}$.
}
\label{fig:schematic}
\end{figure}
\subsection{Relaxation phase}
\label{sec:Relaxation phase}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{p_perp_caseb.png}\\
\includegraphics[width=0.95\linewidth]{pz_caseb.png}
\caption{
Top: The time evolution of the perpendicular component of the neutrino polarization vectors at different angles (colored curves), $P_R$, and that of the angle-averaged one (black), $\langle P\rangle_{R}$.
Bottom: The time evolution of $P_{z}$ at different angles and that of the angle-averaged $z$-component, $\average{P_{z}}$.
The red, orange, green, cyan, and purple curves show the case of $\cos\theta=1,0.7,0.3,0$ and $-1$, respectively.
}
\label{fig:perp relaxation_caseb}
\end{figure}
As implied in Eq.~(\ref{eq:mean square polarization vectors}), the distributions of neutrinos affected by fast flavor conversions become isotropic in the relaxation phase ($t>C^{-1}=0.33\times 10^{-5}$\,s). Up to the end of the limit cycle phase, the fast flavor conversions are enhanced by collision effects, and the transition probability $\average{P_{ex}}$ no longer changes, as shown in Fig.~\ref{fig:caseB-t-Pex}. In the relaxation phase, the collision terms force all polarization vectors to align with the $z$-axis while keeping the value of $\average{P_{ex}}$.
The top panel of Fig.~\ref{fig:perp relaxation_caseb} shows the time evolution of $P_{R}$ at different angles and that of $\average{P}_{R}$. Owing to the coupling of neutrino-neutrino interactions with neutrino scatterings, the value of $P_{R}$ increases and saturates around $t\sim 0.1\times 10^{-5}$\,s.
Note that the time of the first peak would depend on the initial perturbation. We initially impose $P_{R}\sim 10^{-8}$.
After the saturation, all polarization vectors start to align with the $z$-axis and the values of $P_{R}$ are reduced to zero. The perpendicular component at $\cos\theta=0$ (cyan curve) is small and the polarization vector always points near the $z$-axis. From Eqs.~(\ref{eq:EOM neutrino vector}) and (\ref{eq:vectors}), the equation of motion of $\mathbf{D_{0}}$ is
\begin{equation}
\label{eq:EOM of D0}
\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{D_{0}}=\omega\left(
\average{\mathbf{P}}+\average{\mathbf{\bar{P}}}
\right)\times\mathbf{B}.
\end{equation}
The above equation of motion indicates that, in the case of a small vacuum frequency ($\omega t \ll 1$), $\mathbf{D_{0}}$ is almost conserved during the flavor conversions and parallel to the $z$-axis. In addition, $\mathbf{B}$ is also directed along the $z$-axis in the case of a small mixing angle ($\theta_{\mathrm{v}} \ll 1$). Then, $\omega\mathbf{B}+\mu^{\prime}\mathbf{D_{0}}$ in Eq.~(\ref{eq:EOM neutrino vector}) is also parallel to the $z$-axis, so that $P_{R}$ is almost negligible at $\cos\theta=0$. The evolution of $P_{z}$ at different angles and that of $\average{P_{z}}$ are shown in the bottom panel of Fig.~\ref{fig:perp relaxation_caseb}. The fast flavor conversions increase the transition probability and induce different values of $P_{z}$ depending on the scattering angle $\theta$ until the end of the limit cycle at $t\sim C^{-1}=0.33\times 10^{-5}$\,s. The flavor conversions in the limit cycle phase are especially enhanced at $\cos\theta=0.7$ (orange curve) and $\cos\theta=1$ (red curve) in the bottom panel of Fig.~\ref{fig:perp relaxation_caseb}. Such enhancement of flavor conversions in forward-scattered neutrinos is also confirmed in the middle panel of Fig.~\ref{fig:spec relaxation_caseb}. However, after the limit cycle, the values of $P_{z}$ at different angles (colored curves) converge on the value of $\average{P_{z}}$ (black curve). The fast flavor conversions of neutrinos are complete once $P_{z}=\average{P_{z}}$ is satisfied at all scattering angles. Such behavior of $P_{z}$ can be understood from the equation of motion of $P_{z}$ in the relaxation phase. Eq.~(\ref{eq:transition probability}) shows that $\average{P_{z}}$ is related to the transition probability as $\average{P_{ex}}=0.5-\average{P_{z}}$ in our numerical setup. As discussed in Sec.~\ref{sec:Limit cycle phase}, the time evolution of $\average{P_{ex}}$ reaches an equilibrium in the limit cycle phase owing to the synchronization between the polarization vectors of the neutrino density matrix and that of the Hamiltonian ($\mathbf{P_{R}}\parallel\mathbf{H_{R}}$). Therefore, the value of $\delta$ is negligible and the value of $\average{P_{z}}$ becomes constant before the relaxation phase. Then, the equation of motion of $P_{z}$ in the relaxation phase is derived from Eq.~(\ref{eq:time evolution Pz in the limit cycle phase}),
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t}P_{z}\sim-C\left( P_{z}-\average{P_{z}}
\right),
\end{equation}
which explains the relaxation of $P_{z}$ to $\average{P_{z}}$ at all scattering angles for $t> C^{-1}=0.33\times 10^{-5}$\,s, as shown in the bottom panel of Fig.~\ref{fig:perp relaxation_caseb}. Here, we discuss the relaxation of the polarization vectors in the neutrino sector, but the antineutrino case is similar: the polarization vector $\mathbf{\bar{P}}$ relaxes to $(0,0,\average{\bar{P}_{z}})$ irrespective of the scattering angle $\theta$.
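For reference, since $\average{P_{z}}$ is effectively constant in this phase, the above linear equation has the explicit solution
\begin{equation}
P_{z}(t)\sim\average{P_{z}}+\left[P_{z}(t_{0})-\average{P_{z}}\right]e^{-C(t-t_{0})},
\end{equation}
so each angular mode approaches the common value $\average{P_{z}}$ exponentially on the collision time scale $C^{-1}$; the same form applies to $\bar{P}_{z}$.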
\begin{figure}[htbp]
\includegraphics[width=0.9\linewidth]{rhoeexx00000_caseb.png}\\
\includegraphics[width=0.9\linewidth]{rhoeexx08000_caseb.png}\\
\includegraphics[width=0.9\linewidth]{rhoeexx90000_caseb.png}
\caption{
Time snapshots of the neutrino spectra.
The snapshots are taken at the initial time ($t=0$\,s), after the limit cycle ($t=0.27\times 10^{-5}$\,s), and after the relaxation ($t=3.00\times 10^{-5}$\,s), respectively.
The distributions of $\rho_{ee}$, $\rho_{xx}$, $\bar{\rho}_{ee}$, and $\bar{\rho}_{xx}$
correspond to the red, blue, orange, and purple curves, respectively.
}
\label{fig:spec relaxation_caseb}
\end{figure}
The angular distributions of neutrinos after the fast flavor conversions are shown in the bottom panel of Fig.~\ref{fig:spec relaxation_caseb}. As implied in Eq.~(\ref{eq:mean square polarization vectors}), the angular dependence disappears from the distribution of neutrinos. The density matrices of neutrinos (antineutrinos) become diagonal because of the negligible $P_{R}$ ($\bar{P}_{R}$) and the finite $P_{z}$ ($\bar{P}_{z}$).
The middle panel is taken at $t=0.27\times 10^{-5}$\,s.
Up to the limit cycle, the values of $\rho_{ee}$ and $\bar{\rho}_{ee}$ depend on the scattering angle \cite{Shalgar:2020wcx}.
It looks similar to the stationary solution without collisions \cite{Xiong:2021dex}.
However, such angular dependence disappears during the relaxation phase and the neutrino distribution becomes isotropic because of the collision effect. In the bottom panel of Fig.~\ref{fig:spec relaxation_caseb}, the value of $\rho_{xx}$ (blue curve) is equal to that of $\bar{\rho}_{xx}$ (purple curve). Such correspondence can be explained by the conservation law
\begin{equation}
\label{eq:conservation law D0}
\mathbf{B}\cdot\mathbf{D_{0}}=\mathrm{const.}\,.
\end{equation}
The above equation is derived from Eq.~(\ref{eq:EOM of D0}). In the case of a small vacuum mixing angle, $\average{P_{z}}-\average{\bar{P}_{z}}$ is almost conserved according to Eq.~(\ref{eq:conservation law D0}). Furthermore, $\average{\mathrm{Tr}\rho}$ and $\average{\mathrm{Tr}\bar{\rho}}$ are time-independent quantities. Therefore, we can show that $\average{\rho_{xx}}$ is almost equal to $\average{\bar{\rho}_{xx}}$ at any time:
\begin{equation}
\begin{split}
\average{\rho_{xx}}-\average{\bar{\rho}_{xx}}&=\frac{\average{\mathrm{Tr}\rho}}{2}-\frac{\average{\mathrm{Tr}\bar{\rho}}}{2}-\frac{\average{P_{z}}}{2}+\frac{\average{\bar{P}_{z}}}{2}\\
&\sim\left(
\average{\rho_{xx}}-\average{\bar{\rho}_{xx}}
\right)(t=0)\\
&=0.
\end{split}
\end{equation}
After the relaxation phase, the neutrino distribution becomes isotropic, so that $\rho_{xx}\sim\bar{\rho}_{xx}$ is satisfied at the end of the calculation, as shown in Fig.~\ref{fig:spec relaxation_caseb}.
Note that the appearance of the relaxation phase depends on the situation.
The ELN crossing considered here typically appears above the neutrino sphere \citep{Nagakura2021a}, which means that the opacity is less than one, i.e., $t < C^{-1}$.
On the other hand, for ELN crossings inside proto-neutron stars,
the opacity is larger than one, and the relaxation phase should be taken into account.
In any case, the discussion here would be helpful for understanding the role of the collision term.
\subsection{Dependence on neutrino scattering terms}
\begin{figure}[htbp]
\includegraphics[width=0.9\linewidth]{t-Pex_martin2021_3.png}\\
\includegraphics[width=0.9\linewidth]{sacling_martin2021.png}
\caption{
Top: The time evolution of Eq.~(\ref{eq:transition probability general}) with the NC collision terms in Eqs.~(\ref{eq:collision rho martin}) and (\ref{eq:collision rhob martin}) for different values of $\kappa_{0}$. Bottom: The results for different values of $\mu$ with $\kappa_{0}/\mu=10^{-4}$ fixed. Here, we finished the calculations of $\mu=10^{n}\,\mathrm{km}^{-1}$ at $t=3.3\times10^{-1-n}$\,s ($n=3,4,5$).
}
\label{fig:collision martin2021}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.9\linewidth]{t-Pex_johns2021_2.png}\\
\includegraphics[width=0.9\linewidth]{t-Pex_johns2021_comp_2.png}
\caption{
Top: The time evolution of Eq.~(\ref{eq:transition probability general}) with the CC collision terms in Eqs.~(\ref{eq:collision rho johns}) and (\ref{eq:collision rhob johns}). The red curve shows the result of the asymmetric case, i.e., $\Gamma^{\rm CC}=0.22\, \mathrm{km}^{-1}$ and $\bar{\Gamma}^{\rm CC}=0.067\,\mathrm{km}^{-1}$.
The blue curve shows the result of the symmetric case, $\Gamma^{\rm CC}=\bar{\Gamma}^{\rm CC}=0.22\,\mathrm{km}^{-1}$. Bottom: The results with different strengths of the CC collision terms, $\Gamma^{\rm CC}=0.22f_{\rm CC}\, \mathrm{km}^{-1}$ and $\bar{\Gamma}^{\rm CC}=0.067f_{\rm CC}\, \mathrm{km}^{-1}$, where $f_{\rm CC}=1, 10,$ and $100$.
}
\label{fig:collision johns2021}
\end{figure}
In the previous section, we confirmed that the collision terms of Eqs.~(\ref{eq:collision rho}) and (\ref{eq:collision rhob}) enhance fast neutrino flavor conversions and make the neutrino angular distributions isotropic. In this section, we calculate angle-averaged transition probabilities using the collision terms for neutrino scattering from Refs.~\cite{Martin:2021xyl,Johns:2021arXiv210411369J}. We use the same numerical setup as in the previous section except for the collision term.
First, we consider the collision terms for neutrino scattering in neutral-current (NC) reactions.
For the NC reaction, we employ the elastic neutrino-nucleon scattering term of Ref.~\cite{Martin:2021xyl},
\begin{align}
C^{\mathrm{NC}}[\rho]=&-\kappa_{0}\rho(\cos\theta)\nonumber\\
&+\frac{1}{2}\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}\left(\kappa_{0}-\frac{\kappa_{1}}{3}\cos\theta\cos\theta^{\prime}\right)\rho(\cos\theta^{\prime}), \label{eq:collision rho martin}\\
\bar{C}^{\mathrm{NC}}[\bar{\rho}]=&-\kappa_{0}\bar{\rho}(\cos\theta)\nonumber\\
&+\frac{1}{2}\int^{1}_{-1}\mathrm{d}\cos\theta^{\prime}\left(\kappa_{0}-\frac{\kappa_{1}}{3}\cos\theta\cos\theta^{\prime}\right)\bar{\rho}(\cos\theta^{\prime}),\label{eq:collision rhob martin}
\end{align}
where we fix $\kappa_{1}/\kappa_{0}=0.5$ as in Ref.~\cite{Martin:2021xyl}. The gain terms in the above NC collisions depend on the neutrino scattering angle $\theta$. The collision terms in Eqs.~(\ref{eq:collision rho}) and (\ref{eq:collision rhob}) are reproduced when $\kappa_{1}=0\,\mathrm{km}^{-1}$ and $\kappa_{0}=C$.
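For orientation, the angular integral in the gain term can be evaluated with standard Gauss--Legendre quadrature; the following minimal sketch applies Eq.~(\ref{eq:collision rho martin}) to a toy scalar occupation (in the actual calculation $\rho$ is a $2\times2$ density matrix per angle, and the distribution used here is an arbitrary placeholder).
\begin{verbatim}
# Sketch of the NC collision term: loss plus an angle-coupled gain integral,
# C[rho](c) = -kappa0*rho(c)
#             + (1/2) Int dc' (kappa0 - kappa1/3 * c * c') rho(c').
import numpy as np

kappa0 = 1.0            # km^-1 (one of the values scanned in the paper)
kappa1 = 0.5 * kappa0   # fixed ratio kappa1/kappa0 = 0.5

nodes, weights = np.polynomial.legendre.leggauss(32)  # cos(theta') grid

def rho(c):             # toy angular distribution (placeholder)
    return 1.0 + 0.5 * c

def C_NC(c):
    loss = -kappa0 * rho(c)
    kernel = kappa0 - (kappa1 / 3.0) * c * nodes
    gain = 0.5 * np.sum(weights * kernel * rho(nodes))
    return loss + gain

print(C_NC(0.7))
\end{verbatim}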
The top panel of Fig.~\ref{fig:collision martin2021} shows the evolution of the transition probability $\average{P_{ex}}$ in Eq.~(\ref{eq:transition probability general}) with different values of $\kappa_{0}$. In the case of $\kappa_{0}=1\,\mathrm{km}^{-1}$ (red curve), the flavor conversion is well enhanced by the collision effect and the result is almost identical to that of $C=1\,\mathrm{km}^{-1}$ in Fig.~\ref{fig:caseB-t-Pex}. This correspondence suggests a small contribution of $\kappa_{1}$ to the NC scattering. As shown by the blue, green, and orange curves in Fig.~\ref{fig:collision martin2021}, the flavor conversions are increasingly suppressed for larger values of $\kappa_{0}$. No flavor conversion appears for $\kappa_{0}\geq 8\,\mathrm{km}^{-1}$, where all of the polarization vectors are directed along the $z$-axis by the strong collision terms, so that the instability for fast flavor conversions does not grow sufficiently. Nothing happens in the neutrino sector, but the distribution of electron antineutrinos becomes isotropic, following the classical Boltzmann equation without neutrino oscillations. Such a damping effect of a large $\kappa_{0}$ is consistent with the result of Ref.~\cite{Martin:2021xyl}.
The bottom panel of Fig.~\ref{fig:collision martin2021} shows the results for different values of $\mu$ while maintaining the ratio $\kappa_{0}/\mu=10^{-4}$. The transition probabilities scale with $\mu^{-1}$, and the flavor conversion still develops even with the large collision parameter $\kappa_{0}=10\,\mathrm{km}^{-1}$ (green curve). Therefore, the collision effect can be characterized by the ratio $\kappa_{0}/\mu$. It seems that a small collision term, $\kappa_{0}\leq \tau^{-1}_{\rm{osc}}\propto\mu$, is required to enhance the flavor conversions, where $\tau_{\rm{osc}}$ is a characteristic oscillation time proportional to $\mu^{-1}$.
Next, we consider charged-current (CC) reactions.
For the CC scatterings, we study the effect of neutrino-electron scattering following the collision term in Ref.~\cite{Johns:2021arXiv210411369J},
\begin{align}
C^{\mathrm{\rm CC}}[\rho]=&-\Gamma^{\rm CC}\left(
\begin{array}{cc}
\rho_{ee} & \frac{\rho_{ex}}{2} \\
\frac{\rho_{xe}}{2} & 0
\end{array}
\right)+\Gamma^{\rm CC}\left(
\begin{array}{cc}
\average{\rho_{ee}} & 0\\
0 & 0
\end{array}
\right),
\label{eq:collision rho johns}\\
\bar{C}^{\mathrm{\rm CC}}[\bar{\rho}]=&-\bar{\Gamma}^{\rm CC}\left(
\begin{array}{cc}
\bar{\rho}_{ee} & \frac{\bar{\rho}_{ex}}{2} \\
\frac{\bar{\rho}_{xe}}{2} & 0
\end{array}
\right)+\bar{\Gamma}^{\rm CC}\left(
\begin{array}{cc}
\average{\bar{\rho}_{ee}} & 0\\
0 & 0
\end{array}
\right),
\label{eq:collision rhob johns}
\end{align}
where $\Gamma^{\rm CC}=1/\lambda_{\nu_{e}e}$ and $\bar{\Gamma}^{\rm CC}=1/\lambda_{\bar{\nu}_{e}e}$ are calculated from the rates of the electron scatterings (see Eq.~(11) in Ref.~\cite{Johns:2021arXiv210411369J}).
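The structure of these CC terms (damping of $\rho_{ee}$ and of the off-diagonal coherence, plus a gain proportional to the angle average $\average{\rho_{ee}}$) can be made explicit with a small numerical sketch; all matrix entries below are toy values.
\begin{verbatim}
# Toy evaluation of the CC collision term of Eq. (collision rho johns)
# for a single 2x2 density matrix at one angle.
import numpy as np

Gamma_CC = 0.22                        # km^-1, nu_e rate used in the paper
rho = np.array([[0.9, 0.10 + 0.05j],
                [0.10 - 0.05j, 0.1]])  # toy density matrix (placeholder)
rho_ee_avg = 0.8                       # toy angle average of rho_ee

loss = -Gamma_CC * np.array([[rho[0, 0], rho[0, 1] / 2.0],
                             [rho[1, 0] / 2.0, 0.0]])
gain = Gamma_CC * np.array([[rho_ee_avg, 0.0],
                            [0.0, 0.0]])
print(loss + gain)                     # C^CC[rho]
\end{verbatim}
Note that the $xx$ component is not directly damped by these terms, as can be read off from the matrix structure.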
The red curve in the top panel of Fig.~\ref{fig:collision johns2021} shows the transition probability $\average{P_{ex}}$ in Eq.~(\ref{eq:transition probability general}) for typical post-bounce values near the proto-neutron star, $\Gamma^{\rm CC}=0.22\,\mathrm{km}^{-1}$ and $\bar{\Gamma}^{\rm CC}=0.067\,\mathrm{km}^{-1}$ for $E=50$\,MeV neutrinos. The transition probability increases due to the collision effect as in Fig.~\ref{fig:caseB-t-Pex}. The equilibrium value of the transition probability is larger than that of the NC scattering because of the large asymmetry, $\Gamma^{\rm CC}\neq\bar{\Gamma}^{\rm CC}$. On the other hand, the flavor conversion does not equilibrate immediately for the symmetric collision parameters, $\Gamma^{\rm CC}=\bar{\Gamma}^{\rm CC}=0.22\,\mathrm{km}^{-1}$, as shown by the blue curve in the top panel of Fig.~\ref{fig:collision johns2021}. Such behavior with respect to $\Gamma^{\rm CC}$ and $\bar{\Gamma}^{\rm CC}$ is also reported in Ref.~\cite{Johns:2021arXiv210411369J}. Our results clarify the role of the asymmetry in the CC scatterings.
In the symmetric case, when $\Gamma^{\rm CC}=\bar{\Gamma}^{\rm CC}\geq 8\, \mathrm{km}^{-1}$, the flavor conversions are strongly suppressed, as in the NC scattering. On the other hand, as shown in the bottom panel of Fig.~\ref{fig:collision johns2021}, the flavor conversions in the asymmetric case are not suppressed even for large collision terms ($f_{\rm CC}=10,100$). In the asymmetric case, collision terms associated with $\Gamma^{\rm CC}-\bar{\Gamma}^{\rm CC}$ can couple the time evolution of $\mathbf{D}_{n}=~\average{(\mathbf{P}-\mathbf{\bar{P}})\cos^{n}\theta}$ and that of $\mathbf{S}_{m}=~\average{(\mathbf{P}+\mathbf{\bar{P}})\cos^{m}\theta}$ to each other \cite{Johns:2021arXiv210411369J}. Such coupling may make a qualitative difference between the collision effect in the symmetric case and that in the asymmetric case. The asymmetry of the collision rate is quite possible in explosive astrophysical sites. Here we focus on monochromatic neutrinos, but the asymmetry of the collision rate can be increased by considering the energy dependence of neutrinos.
\section{Conclusions}\label{sec:Conclusions}
We calculate fast flavor conversions with the collision effects of neutrino scatterings and analyze the behavior of the flavor conversions based on the dynamics of the neutrino polarization vectors in cylindrical coordinates, where we can easily access the phase information of the rotation.
We find that the collision terms in Ref.~\cite{Shalgar:2020wcx} induce an enhancement of flavor conversions and an isotropization of the neutrino distributions. In the linear phase, the instability of fast flavor conversions grows around the ELN crossing, where the phase difference $\delta$ takes an intermediate value in the range between $0$ and $-\pi$. In the limit cycle phase, the evolution track of the neutrino polarization vector without collision effects is closed, and the flavor conversions show a periodic trend. On the other hand, with the collision terms, the evolution track of the neutrino polarization vector is no longer closed because of the decrease of $|\delta|$ in every cycle.
The collision term breaks the symmetry between positive and negative $\delta$, and the induced imbalance enhances the total transition probability.
After the synchronization of the neutrino polarization vectors, the value of $\delta$ finally converges to zero. At the end of the limit cycle, the evolution of $\average{P_{ex}}$ settles down to equilibrium, and the distributions of neutrinos depend on the neutrino scattering angle. Such properties are well consistent with the results in Ref.~\cite{Shalgar:2020wcx}. However, in the relaxation phase, all of the neutrino polarization vectors align with the $z$-axis while keeping the value of $\average{P_{ex}}$. The distributions of neutrinos finally become isotropic after the relaxation phase.
Furthermore, we calculate the effect of the collision terms used in previous studies, which helps unify the presently disparate effects of collisional instability. For the NC scattering, the flavor conversions are enhanced for small collision terms. On the other hand, a large collision term prevents flavor conversions. Our results suggest that the collision effect is characterized by the ratio of the parameters of the collision terms to those of the neutrino-neutrino interactions. For the CC scattering, the transition probability is significantly enhanced by a large asymmetry between the neutrino and antineutrino collision rates. In the case of the asymmetric CC collision terms, the flavor conversions are not suppressed even for large collision parameters.
Here, we remark on the uncertainties of our work. The calculation results of fast flavor conversions are very sensitive to the numerical setup. The flavor conversions without collisions are highly periodic in our calculation, which assumes spatial homogeneity and ignores the azimuthal angle dependence. In some cases, even without the collision term, fast flavor conversions can decay and reach a stationary solution \cite{Xiong:2021dex}. It has been shown that the flavor conversions become more chaotic and non-periodic in spatially inhomogeneous systems \cite{Martin:2019gxb,Zaizen2021a}. In addition, the periodic structure of flavor conversions is broken in calculations with azimuthal angle dependence \cite{Richers:2021nbx,Shalgar:2021arXiv210615622S}. The significant enhancement of flavor conversions due to the collision effect may be obscured when the assumptions in our calculation are relaxed. Our simplified numerical setup should be updated in order to study the collision effect precisely in more realistic environments. Here, we focus on the behavior of fast flavor conversions of two neutrino flavors for simplicity, but three flavors are required to predict a reliable neutrino signal in explosive astrophysical sites.
\begin{acknowledgments}
We thank E. Kokubo, M. Delfan Azari and L. Johns for fruitful discussions and useful comments. This work was carried out under the
auspices of the National Nuclear Security Administration of the
U.S. Department of Energy at Los Alamos National Laboratory under
Contract No.~89233218CNA000001.
This study was supported in part by JSPS/MEXT KAKENHI Grant Numbers
JP18H01212,
JP17H06364,
JP21H01088.
This work is also supported by the NINS program for cross-disciplinary
study (Grant Numbers 01321802 and 01311904) on Turbulence, Transport,
and Heating Dynamics in Laboratory and Solar/Astrophysical Plasmas:
"SoLaBo-X”, and also by MEXT as “Program for Promoting
researches on the Supercomputer Fugaku” (Toward a unified view of
the universe: from large scale structures to planets, JPMXP1020200109) with JICFuS
Numerical computations were carried out on the PC cluster at the Center for Computational Astrophysics,
National Astronomical Observatory of Japan.
\end{acknowledgments}
\section{Tests of Lorentz violation with high-energy astrophysical neutrinos}
Lorentz symmetry is a fundamental symmetry underlying both quantum field theory and general relativity.
Nevertheless, violation of Lorentz symmetry, often known as Lorentz violation (LV), has been searched for since the iconic Michelson-Morley experiment~\cite{Michelson:1887zz}.
LV has been shown to occur in beyond-the-Standard-Model (BSM) theories such as string theory~\cite{Kostelecky:1988zi}, non-commutative field theory~\cite{Carroll:2001ws}, loop quantum gravity~\cite{Gambini:1998it},
Ho\v{r}ava-Lifshitz gravity~\cite{Pospelov:2010mp}, etc.
There are many experimental efforts to search for LV, but so far no significant evidence for LV has been found.
Constraints obtained from different systems have been compiled in Ref.~\cite{Kostelecky:2008ts}.
Since the expected effect of LV is small, experiments tend to use special systems to maximize their sensitivities,
such as interferometers (optics~\cite{Kostelecky:2016kkn}, matter wave~\cite{Jaffe:2016fsh}, wave function~\cite{Pruttivarasin:2014pja}, etc.).
High-energy particles (LHC~\cite{Carle:2019ouy}, high-energy gamma rays~\cite{Amelino-Camelia:1997ieq}, ultra-high-energy cosmic rays, or UHECRs~\cite{Maccione:2009ju}, etc.) are used to search for signatures of high-dimension LV operators~\cite{Kostelecky:2009zp,Kostelecky:2011gq,Kostelecky:2013rta}, i.e., those with mass dimension greater than four.
Among the many experiments hunting for LV, the searches using high-energy astrophysical neutrinos are special for the following three reasons:
\begin{enumerate}
\item Neutrino energies reach higher than those of any anthropogenic beam.
\item Neutrinos travel very long distances, from source to detection, in straight paths.
\item Quantum mixing can enhance the sensitivity.
\end{enumerate}
\noindent
LV can be seen as a classical \ae ther field, a new background field permeating the vacuum. The propagation of neutrinos may be affected by this field, causing a variety of effects, including spectrum distortion, a modified group velocity, and anomalous neutrino oscillations, possibly with direction dependence. Astrophysical neutrinos propagate long distances without interactions, which makes them advantageous for searching for these exotic effects. Furthermore, the higher-dimension operators, i.e., the nonrenormalizable sector of effective field theory, have a stronger energy dependence, and high-energy astrophysical neutrinos can be more sensitive to them. For example, the dimension-six operator is the lowest-order interaction term with new physics. Lastly, these effects are likely to be very small, and kinematic tests may not be sensitive enough to find them. Neutrinos are natural interferometers, and by using their quantum mixing we can reach the signal region of LV expected from quantum-gravity-motivated models.
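Schematically (up to signs and flavor structure), the effective Hamiltonian of an ultrarelativistic neutrino contains, besides the mass term, a tower of LV terms whose energy scaling grows with the operator dimension $d$ as $E^{d-3}$~\cite{Kostelecky:2011gq}:
\begin{equation}
H \sim \frac{m^{2}}{2E} + \mathring{a}^{(3)} + \mathring{c}^{(4)}E + \mathring{a}^{(5)}E^{2} + \mathring{c}^{(6)}E^{3} + \cdots,
\end{equation}
which is why the nonrenormalizable ($d>4$) terms are best probed at the highest available energies.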
Fig.~\ref{fig:energy} shows the phase space of new physics one can explore with neutrinos~\cite{Arguelles:2019rbn}.
Here, the horizontal axis is the neutrino energy and the vertical axis is the propagation distance of neutrinos from source to detector.
Large areas below 100~GeV and 100~km are explored by anthropogenically produced neutrinos (\textit{e.g.} reactor, short- and long-baseline, or SBL and LBL, neutrino experiments) and low-energy astrophysical neutrinos (solar and supernova neutrinos). However, higher energies and longer baselines have not been explored.
High-energy astrophysical neutrinos travel over 100~Mpc, and they can explore new physics that is extremely weakly coupled with neutrinos, such as LV.
High-energy astrophysical neutrinos can also reach PeV energies and may enhance the sensitivity to new physics related to power-counting nonrenormalizable operators~\cite{Kostelecky:2011gq}.
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\widefigure
\includegraphics[width=15 cm]{Definitions/fig_scales_energy_icrc.png}
\caption{Neutrino sources are shown as a function of neutrino energy and distance traveled.
Anthropogenically produced neutrinos, including reactor neutrinos and very-short-, short-, and long-baseline (VSBL, SBL, and LBL) neutrino beam experiments, can investigate new physics related to short travel distances. Low-energy astrophysical neutrinos can be used to study new physics related to longer travel distances.
High-energy astrophysical neutrinos explore the highest energies and longest travel distances, corresponding to the top right corner of this figure. Figure adapted from Ref.~\cite{Arguelles:2019rbn}.\label{fig:energy}}
\end{figure}
\begin{paracol}{2}
\switchcolumn
\section{Tests of Lorentz violation with kinematic observables}
High-energy particles, such as gamma-rays~\cite{Amelino-Camelia:1997ieq,Gagnon:2004xh,Altschul:2006uw,Kaufhold:2007qd,Maccione:2007yc,Altschul:2007vc,Klinkhamer:2008ky,Altschul:2010nf,Diaz:2015hxa,Altschul:2016ces,Schreck:2017isa,Colladay:2017qfr} and UHECRs~\cite{Maccione:2009ju}, have been used to test LV.
Similarly, neutrinos can be used to find exotic effects if they exist, with two advantages.
First, neutrinos are elementary particles, while UHECRs are composite --- in fact, with unknown composition.
This makes LV constraints obtained with neutrinos easier to interpret within a theoretical framework.
Second, high-energy gamma-rays interact with the cosmic microwave background and thus travel shorter distances than high-energy astrophysical neutrinos.
The effects of LV in high-energy neutrinos arise from their experiencing a non-trivial vacuum as they propagate from their sources to us.
A field in vacuum, motivated by new physics, could permeate space-time and violate the large-scale isotropy of the Universe and hence produce LV (effectively similar to the classical \ae ther).
Under such conditions, neutrinos would emit particles in vacuum~\cite{Cohen:2011hx,Diaz:2013wia,Borriello:2013ala,Stecker:2014xja,Wang:2020tej}, and this energy loss would attenuate the highest-energy neutrinos as they travel long distances.
Such a test has been performed using the high-energy astrophysical neutrino samples~\cite{IceCube:2018cha,IceCube:2020wum}.
Multi-messenger astronomy allows us to study the difference in time-of-flight (ToF) between neutrinos and photons beyond the neutrino mass effect. The first such opportunity was supernova 1987A, whose data were used for tests of Lorentz invariance~\cite{Longo:1987gc,Krauss:1987me,Ellis:2011uk}. These tests become more interesting with high-energy astrophysical neutrinos because of the energy dependence of some models, and the first such opportunity was the blazar TXS0506+056, the first identified high-energy astrophysical neutrino source~\cite{IceCube:2018cha}.
From the observation of the neutrino and photon arrival times, several limits of neutrino velocity deviation from the speed of light were derived~\cite{Ellis:2018ogq,Laha:2018hsh,Wei:2018ajw}.
Although TXS0506+056 has, so far, been the only detected source, searches for neutrino emission from transient events are continuously performed.
These analyses usually assume that neutrinos travel at the speed of light; however, if the neutrino ToF is modified due to LV, one could find more coincidences with transient events by assuming LV.
The couplings of neutrinos to LV background fields can be classified into two groups, CPT-odd and CPT-even types.
The CPT-odd LV terms change sign in the Lagrangian under a CPT transformation and hence effectively violate CPT symmetry.
If the effective velocity of neutrinos is larger than in the Lorentz-invariant case, that of antineutrinos is lower, and vice versa.
The IceCube collaboration has not identified any high-energy astrophysical neutrino point sources except TXS0506+056 with high significance~\cite{IceCube:2019cia} (see~\cite{Stein:2020xhk} for a recent search of potential high-energy neutrino sources).
In particular, no gamma ray burst (GRB) has been identified as a high-energy astrophysical neutrino source~\cite{IceCube:2016ipa,IceCube:2017amx} under the Standard Model assumptions.
However, by assuming energy-dependent couplings with LV and sign changes (time delay or time advance), it is possible to find coincidences with GRBs for a new-physics scale of order $\sim 10^{17}~{\rm GeV}$~\cite{Zhang:2018otj} (see~\cite{Crivellin:2020oov} for the implications of this result in the charged lepton sector).
Although this is a very tantalizing result, it is challenging to verify experimentally because the neutrino-antineutrino cross-section difference is less than 15\% at $\ge 200$~TeV~\cite{CooperSarkar:2011pa,Arguelles:2015wba,Bertone:2018dse} and charge separation is possible only in special reactions, such as resonant $W$-boson production~\cite{IceCube:2021rpz}.
In the near future, data from IceCube and neutrino observatories currently under construction, such as KM3NeT~\cite{Adrian-Martinez:2016fdl} and GVD~\cite{Avrorin:2019vfc}, will provide increased sensitivity to LV.
Further along, a new generation of neutrino telescopes on ice (IceCube-Gen2~\cite{IceCube-Gen2:2020qha}), water (P-ONE~\cite{Agostini:2020aar}), mountains (Ashra NTA~\cite{Sasaki:2017zwd}, TAMBO~\cite{Romero-Wolf:2020pzh}, and GRAND~\cite{GRAND:2018iaj}), or outer space (POEMMA~\cite{POEMMA:2020ykm}), among others, will be able to test this hypothesis further.
\section{Tests of Lorentz violation with neutrino oscillations}
Neutrinos are natural interferometers, which are able to measure extremely small quantities --- such as the difference in neutrino masses --- by observing the \textit{beats} of the different neutrino flavors.
Searches for distortions arising from LV in the neutrino oscillation pattern have been performed by almost all neutrino oscillation experiments~\cite{Kostelecky:2008ts}.
Among them, natural sources --- such as solar neutrinos, atmospheric neutrinos, and astrophysical neutrinos --- have advantages due to very-long-baseline and/or higher attainable energy.
Atmospheric neutrinos can be produced on the other side of the Earth and traverse the Earth's diameter (12742~km), providing the largest possible interferometer on the Earth with which to search for neutrino oscillations.
The energy of these neutrinos reaches of order 50~TeV or more~\cite{Fedynitch:2018cbl}, corresponding to the highest-energy neutrinos produced by particles arriving at the Earth's surface.
The atmospheric neutrino flux below around 50~TeV is produced predominantly by pion and kaon decays and is called the ``conventional'' atmospheric neutrino flux.
This flux is relatively well understood and has been measured, in contrast to high-energy atmospheric neutrinos produced by charmed meson decay, which are predicted with larger errors and have so far evaded detection.
Furthermore, the astrophysical neutrino flux starts to overtake the atmospheric neutrino flux above around 50~TeV.
Thus, these conventional atmospheric neutrinos can be used to search for LV.
The concept of such a search is shown in Fig.~\ref{fig:atmo}, left.
The LV oscillatory effect can be searched for in two ways depending on the assumptions of the largest non-zero LV terms.
On one hand, Super-Kamiokande~\cite{Super-Kamiokande:2014exs} and the IceCube-40 (partially instrumented IceCube)~\cite{IceCube:2010fyu} searched for signatures of anisotropy in atmospheric neutrinos due to LV.
On the other hand, AMANDA-II~\cite{IceCube:2009ckd} and IceCube~\cite{IceCube:2017qyp} looked for the spectral distortions due to LV.
Fig.~\ref{fig:atmo},~right, shows the effect of spectrum distortion due to the presence of an interaction between neutrinos and an isotropic LV background field.
One can see the very high sensitivity of this approach, especially for high-dimension LV operators.
Here, atmospheric neutrinos detected by IceCube are sensitive to dimension-six LV operators down to $\sim 10^{-36}~{\rm GeV}^{-2}$, making them one of the most sensitive probes of LV.
To use the highest-energy atmospheric neutrino data, IceCube used the up-going muon sample for this analysis~\cite{IceCube:2015qii}.
These muons are created by neutrino interactions in the rock surrounding the detector or in the ice of the Antarctic glacier. The signal of LV would appear as a spectrum distortion in the high-energy muons.
As seen in Fig.~\ref{fig:atmo}, right, the data are consistent with unity and there is no obvious sign of LV.
This search does not find nonzero LV and produces a limit that reaches down to $\sim 10^{-24}~{\rm GeV}$ for the dimension-three operator, or $\sim 10^{-36}~{\rm GeV^{-2}}$ for the dimension-six operator~\cite{IceCube:2017qyp}.
These are among the best constraints on LV across systems ranging from table-top experiments to cosmology.
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\widefigure
\includegraphics[width=7.5 cm]{Definitions/Fig1_v3}
\includegraphics[width=7.5 cm]{Definitions/Fig2_v2}
\caption{Left, artistic illustration of the search for LV with atmospheric neutrino oscillations.
Atmospheric neutrinos are produced in the upper atmosphere, and their flavors may be converted due to couplings between neutrinos and LV background fields as they propagate.
The effects induced by the new physics are negligible if neutrinos travel a short distance; namely for neutrinos entering the IceCube volume from the horizontal direction.
However, neutrinos produced near the northern sky travel a long distance before they reach IceCube, and they are significantly more impacted by interactions with the LV background fields.
Right, expected atmospheric neutrino oscillation probability ratio as a function of energy in the presence of LV.
Here, the vertical axis is the double ratio of the oscillation probability for neutrinos from the vertical to the horizontal direction. The no-LV case is normalized to unity in this figure.
Nonzero LV modifies this ratio, and larger deviations occur for larger LV couplings.
The figure shows the sensitivity to $|c_{\mu\tau}^{(6)}|$, one of the dimension-six operators that parameterize LV and to which this analysis is sensitive.
Figures are adapted from~\cite{IceCube:2017qyp}. \label{fig:atmo}}
\end{figure}
\begin{paracol}{2}
\switchcolumn
\section{Tests of Lorentz violation with neutrino mixings}
Astrophysical neutrinos constitute extremely-long-baseline neutrino oscillation experiments.
In these systems, neutrino coherence depends on the details of the astrophysical neutrino source, detection method, and the distance of propagation.
For example, for the observed high-energy astrophysical neutrino flux, which is dominated by extragalactic sources, neutrino oscillations are not observable due to the relatively poor energy resolution and the extremely large ratio of baseline to energy.
In this scenario, we are only able to observe neutrino mixing instead of oscillations among neutrino states.
However, even if we cannot resolve the neutrino oscillation pattern, new physics effects such as LV remain observable.
This is because the propagation eigenstates and detection eigenstates are not the same.
The SNO experiment searched for LV in the annual modulation of the solar neutrino signal~\cite{SNO:2018mge}.
Assuming a non-isotropic static LV background field within the solar system, neutrinos propagating in one direction may be affected differently from those propagating in others.
A search for such a signal must control for all other natural modulations of the solar neutrino signal, including the eccentricity of the Earth's orbit and the day-night effect caused by propagation through the Earth's matter.
The sensitivity of this search reaches order $\sim 10^{-21}~{\rm GeV}$ in dimension-three LV coefficients.
High-energy astrophysical neutrinos offer even longer baselines.
Most of these neutrinos do not have identified sources, and the flux is isotropic. Also, the source candidates populate distances from the Earth of order 100~Mpc or more.
In the high-energy starting event (HESE) sample of IceCube~\cite{IceCube:2020wum}, the energy of these neutrinos ranges from around 60~TeV to 2~PeV.
These high-energy neutrinos can push the search for higher-dimension LV operators.
Fig.~\ref{fig:astro}, left, shows the sensitivities of different systems to LV operators.
High-energy astrophysical neutrino flavors are expected to offer the most sensitive LV searches in dimension-five and dimension-six operators~\cite{Katori:2019xpc}.
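As a rough orientation, using the Planck energy quoted in Fig.~\ref{fig:astro}, the natural Planck-suppressed sizes of these coefficients are
\begin{align}
E_P^{-1}&=(1.22\times10^{19}~{\rm GeV})^{-1}\simeq 8.2\times10^{-20}~{\rm GeV}^{-1},\nonumber\\
E_P^{-2}&\simeq 6.7\times10^{-39}~{\rm GeV}^{-2},
\end{align}
so the atmospheric-oscillation limit of $\sim10^{-36}~{\rm GeV}^{-2}$ on dimension-six operators already sits within about two orders of magnitude of the Planck-suppressed expectation.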
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\centering
\widefigure
\includegraphics[width=8 cm]{LVlimit_color}
\includegraphics[width=7 cm]{Definitions/flavor_scan_data_bunnies_7yr_steps21_Nov_fixMuNorm_inel_gf_source_shaded_serif_paper2c_contour2c_tau}
\caption{Left, the LV sensitivities of different systems.
The sensitivity is normalized with the Planck energy ($E_P=1.22\times 10^{19}~{\rm GeV}$) assuming LV arises from Planck scale physics.
This means that the natural scale of a dimension-five LV operator is $1/E_P$, a dimension-six LV operator is $1/E_P^2$, and so on, and the sensitivity is normalized so that these numbers appear to be unity.
Lines are gravitational Cherenkov emission~\cite{Moore:2001bv,Kostelecky:2015dpa}, GRB vacuum birefringence~\cite{Kostelecky:2013rv}, UHECRs~\cite{Maccione:2009ju}, atmospheric neutrino oscillations~\cite{IceCube:2017qyp}, and the expected sensitivity from the high-energy astrophysical neutrino flavors.
Figure is adapted from~\cite{Katori:2019xpc}.
Right, HESE flavor ratio $(\nu_e:\nu_\mu:\nu_\tau)$ measurement by IceCube.
Each corner represents electron, muon, and tau neutrino dominant state.
The standard scenarios (dotted line area) give roughly $(\nu_e:\nu_\mu:\nu_\tau)\sim(1:1:1)$ on the Earth regardless of the assumption of the flavor ratio at the source. This means the expected flavor ratio on the Earth is always around the center of this plot under standard assumptions. On the other hand, the current data contours enclose a large area, so different standard scenarios cannot be distinguished.
Figure is adapted from~\cite{IceCube:2020abv}.
\label{fig:astro}}
\end{figure}
\begin{paracol}{2}
\switchcolumn
Fig.~\ref{fig:astro}, right, shows the current status of the high-energy neutrino flavor ratio measurements.
Since the statistics of high-energy neutrinos are low, flavors are measured integrated over the whole spectrum and normalized to the total flux; namely, the flavor ratio $(\nu_e:\nu_\mu:\nu_\tau)$ is reported.
The astrophysical neutrino production models give some information about the neutrino flavor composition at the source.
The most likely scenario is that they are some combination of electron and muon neutrinos.
Two extreme cases are astrophysical neutrino production dominated by electron neutrinos, $(1:0:0)$, or by muon neutrinos, $(0:1:0)$;
all other possibilities lie between these two models.
Remarkably, all of these scenarios yield more or less the same flavor ratio at the Earth after neutrino mixing, namely $\sim (1:1:1)$~\cite{Arguelles:2015dca,Bustamante:2015waa}.
All of these predict a flavor ratio at the Earth near the center of the flavor triangle, and the spread of the central region is related to the current uncertainty of the mixing angles; for projections see~\cite{Song:2020nfh}.
On the other hand, the current data enclose a large region, and it is not easy to distinguish any particular scenario~\cite{IceCube:2015rro,IceCube:2015gsk,IceCube:2018pgc,IceCube:2020abv}.
Thus, it is necessary to shrink this contour to measure a possible deviation of the flavor ratio from the standard case.
This requires larger sample sizes and better algorithms to measure neutrino flavors in neutrino telescopes~\cite{Song:2020nfh}. Many different types of new physics can be discovered through the effective operator approach with astrophysical neutrino flavor measurements, including
neutrino-dark matter interactions~\cite{Miranda:2013wla,deSalas:2016svi,Farzan:2018pnk}, neutrino-dark energy interactions~\cite{Ando:2009ts,Klop:2017dim},
neutrino self-interactions~\cite{DiFranzo:2015qea,Cherry:2016jol,Creque-Sarbinowski:2020qhz}, and neutrino long-range forces~\cite{Bustamante:2018mzu}.
The first IceCube results testing these models were published recently~\cite{IceCube:2021tdn}.
To summarize, the search for signatures of LV with high-energy astrophysical neutrinos has just begun.
The sensitivity to certain operators seems to exceed that of any known system and reaches the Planck scale.
Thus, the study of these probes has a great potential for the discovery of violations of fundamental space-time symmetries.
\section*{Acknowledgement} We thank Rogan Clark for careful reading of this manuscript.
CAA is supported by the Faculty of Arts and Sciences of Harvard University, and the Alfred P. Sloan Foundation.
TK is supported by the Science and Technology Facilities Council (UK).
\end{paracol}
\reftitle{References}
\externalbibliography{yes}
\section{Introduction}
There is great interest in using electronic health record (EHR) data as a cost-effective resource to support biomedical research \citep{coorevits2013, pathak2013, cook2015, cowie2017}. A growing number of studies relying on data extracted from the EHR are appearing in the medical literature. These articles, however, are showing up alongside others that highlight concerns of data quality and potentially misleading findings from analyses using EHR data that do not properly address data quality issues \citep{floyd2012, duda2012, hersh2013, weiskopf2019, eichler2019}.
To fully realize the potential of EHR data for biomedical research, widely recognized problems of data accuracy and completeness must be addressed.
Computerized data quality checks (e.g., querying/excluding inconsistent measurements or those outside the range of plausible values) are necessary but not sufficient for quality data. Validation, in which trained personnel thoroughly compare EHR-derived data with the original source documents (e.g., paper medical charts or the entire EHR itself for a patient), is best practice for ensuring data quality \citep{duda2012}. However, full validation of EHR data is costly and time-consuming, and is generally not possible for large cohorts or those comprising multiple centers. Instead, investigators may validate sub-samples of patient records. This validation sample can then be used to inform researchers of the errors in their data and their phenotyping algorithms. Data from the validation sub-samples can then be used with unvalidated data from the full EHR to adjust analyses and improve estimation \citep{huang2018,giganti2020}.
Since researchers have limited funds, it is important to maximize the information obtained from data validation. The efficiency of estimators using validated EHR data can be improved with carefully designed validation sampling strategies.
The literature on two-phase sampling is relevant \citep{breslowcain1988,breslowchatterjee1999}. In our setting, phase 1 consists of EHR data available on all subjects and phase 2 consists of the subset of records that were selected for validation. Optimal two-phase designs have been studied for settings where there is an expensive explanatory variable that is only measured in the phase 2 subsample \citep{McIsaac&Cook2014,Tao2019,Han2021}; in our case, the validated value of an EHR-derived variable can be thought of as this expensive variable. Optimal two-phase designs rely on phase 1 data that are correlated with the expensive explanatory variable of interest; in our case, the unvalidated variable is often a good surrogate for the validated value, which can help with designing efficient validation samples. However, with EHR data, there are typically errors across multiple variables \citep{giganti2020}, which complicates sampling designs and subsequent analyses that incorporate the validated data.
Generalized raking, also known as survey calibration, is a robust and efficient way to obtain estimates that incorporate data from both phase 1 and phase 2, even with multiple error-prone variables \citep{Deville1992, lumley2011}. Generalized raking estimators, which include members of the class of optimally efficient augmented inverse probability weighted estimators \citep{robinsrotnitzky&zhao1994,lumley2011}, tilt inverse probability weights using auxiliary information available in the phase 1 sample.
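To make the calibration idea concrete, the following is a minimal, self-contained sketch of generalized raking with an exponential tilt (simulated data; in our study the auxiliary variables would be influence-function contributions built from the error-prone phase 1 variables, as described in Section 5):
\begin{verbatim}
# Generalized raking sketch: tilt the design weights d_i on the phase 2
# sample so the weighted totals of auxiliaries A match known phase 1 totals.
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)
N, n = 10_000, 500
A_pop = rng.normal(size=(N, 2))        # phase 1 auxiliary variables
idx = rng.choice(N, n, replace=False)  # phase 2 sample (SRS for simplicity)
A = A_pop[idx]
d = np.full(n, N / n)                  # design (inverse-probability) weights
target = A_pop.sum(axis=0)             # known phase 1 totals

def calib_eq(lam):
    w = d * np.exp(A @ lam)            # exponential (raking) tilt
    return w @ A - target              # calibration constraints

lam = root(calib_eq, np.zeros(2)).x
w = d * np.exp(A @ lam)
print(np.allclose(w @ A, target))      # True: weights are calibrated
\end{verbatim}
The calibrated weights $w_i$ then replace the design weights in the analysis model, which is what yields the efficiency gain over plain IPW.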
Optimal designs for generalized raking estimators are not easily derived, but the optimal design for the inverse probability weighted (IPW) estimator, based on Neyman allocation \citep{reilly&pepe1995, McIsaac&Cook2014, amorim2021}, is typically an excellent design for a generalized raking estimator \citep{Chen&Lumley2021}. However, the optimal design depends on parameters that are usually unknown without previous data collection.
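As an illustration, a minimal sketch of Neyman allocation across phase 1 strata is given below; the stratum sizes, standard deviations, and budget are toy numbers, not those of our study:
\begin{verbatim}
# Neyman allocation: validate n_h ~ N_h * S_h records in stratum h, where
# S_h is the (estimated) within-stratum SD of the quantity being estimated.
import numpy as np

N_h = np.array([4000, 3000, 2000, 1335])  # phase 1 stratum sizes (toy)
S_h = np.array([1.0, 2.5, 0.8, 3.0])      # estimated stratum SDs (toy)
n_total = 1000                            # phase 2 validation budget

raw = n_total * (N_h * S_h) / np.sum(N_h * S_h)
n_h = np.minimum(np.round(raw).astype(int), N_h)  # cap at stratum size
print(n_h, n_h.sum())
\end{verbatim}
In practice the $S_h$ are unknown at the design stage, which is precisely the motivation for the multi-wave strategies discussed next.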
The necessity of prior data to design optimal sampling strategies has led to multi-wave sampling schemes. \citet{McIsaac&Cook2015} proposed multi-wave sampling strategies and illustrated two-wave sampling in a setting with a binary outcome and an error-prone binary covariate. Data from the first wave was used to adaptively estimate parameters needed to design the optimal phase 2 sample, and the second wave sampled based on this estimated optimal design. Others have also considered similar two-wave sampling strategies for different settings \citep{Han2021,Chen&Lumley2020}. Multi-wave sampling has shown a remarkable ability to yield sampling designs that are nearly as efficient as the optimal sampling design, and therefore have the potential to optimize resources in practice.
In this manuscript, we describe our experience designing and implementing a multi-wave validation study with EHR data to estimate the associations between maternal weight gain during pregnancy and risks of childhood obesity and asthma. To our knowledge, this is the first implementation of a multi-wave sampling design to address data quality issues in the EHR. Other innovative and important developments in this paper include the application of functional principal components analyses to estimate maternal weight gain during pregnancy and to initiate data quality checks \citep{yao2005functional}; the implementation of a multi-frame analysis to combine results across two independent validation samples targeting our two endpoints \citep{metcalf2009}; and estimation via generalized raking techniques, with multiply imputed influence functions to estimate the optimal auxiliary variable \citep{OhetAl2021,han2016}. The use of these methods allows us to obtain efficient estimates of our associations of interest that address data quality concerns across many EHR variables while making minimal assumptions.
Our manuscript is organized as follows. In section 2 we provide a brief scientific background for the mother-child weight study and present our analysis models. In section 3 we describe our phase 1 EHR data and derivation of key variables. In section 4 we describe our data validation process. In section 5, we detail our multi-wave sampling strategy for selecting records to validate. In section 6, we present results for the childhood obesity/asthma study. Section 7 includes a discussion. Analysis code and other materials necessary for reproducibility are provided at [website anonymized].
\section{Maternal Weight Change during Pregnancy and Child Health Outcomes}
\subsection{Background}
Maternal obesity and excessive weight gain during pregnancy have been associated with childhood obesity \citep{heslehurst2019, voerman2019, heerman2014} and childhood asthma \citep{forno2014}.
However, small sample sizes have limited the ability to detect the nuanced and complex nature of these associations: for example, few studies have reported trimester-specific maternal weight gain during pregnancy, and studies tend to group both exposure and outcome variables, leading to wide confidence intervals and less certainty of the true effect size at extreme values of BMI \citep{heslehurst2019}. In addition, it is difficult to ascertain population sub-group effects, especially by race/ethnicity \citep{goldstein2018}. Consequently, there is growing interest in conducting large epidemiological studies using EHR data to evaluate the association between maternal gestational weight gain and child health outcomes \citep{wang2020}. However, data obtained from EHRs suffer from quality issues, necessitating data validation.
The approach to the current study was also informed by a community-engagement process, where our study team met with a group of women from the community to discuss issues related to weight gain during pregnancy, childhood health, and health disparities \citep{joosten2015}. The goal of this process was to refine our study questions to reflect issues important to patients and their families.
\subsection{Primary and Secondary Analysis Models}
Of primary interest is the association between maternal weight change during pregnancy, $X$, and the time to childhood obesity, $T$. We do not observe $T$ in all children; follow-up is truncated at the first of the child's date of last visit or 6th birthday. Let $C$ be the time to censoring, $Y=\text{min}(T,C)$ be the censored-failure time, and $\Delta=I(T\leq C)$ be the indicator childhood obesity is observed.
Other covariates, $\mathbf{Z}$, include maternal BMI at conception, maternal age at delivery, maternal race, maternal ethnicity, cesarean delivery, maternal diabetes, smoking during pregnancy, maternal history of depression, insurance status, marital status, number of prior children, whether the child was a singleton, estimated gestational age, and child sex. We assume that $T$ and $C$ are independent conditional on $(X,\mathbf{Z})$. Our primary analysis model was specified a priori as the Cox proportional hazards model, $h(t|X,\mathbf{Z})=h_{0}(t)\exp(\beta X+ \beta_Z \mathbf{Z})$, where $h(t|X,\mathbf{Z})$ is the hazard of obesity at time $t$ conditional on $X$ and $\mathbf{Z}$, and $h_0(t)$ is an unspecified baseline hazard function. Of primary interest is estimation of $\beta$.
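For concreteness, a minimal sketch of this primary fit in R with the \texttt{survival} package is given below; the data frame \texttt{dat} and its column names are hypothetical stand-ins for $(Y,\Delta,X,\mathbf{Z})$, with only a few covariates shown.
\begin{verbatim}
## Hedged sketch of the primary Cox model; `dat` and its columns are
## hypothetical stand-ins for (Y, Delta, X, Z).
library(survival)

fit <- coxph(Surv(Y, Delta) ~ X + maternal_bmi + maternal_age + child_sex,
             data = dat)
coef(fit)["X"]  # estimate of beta, the log hazard ratio of primary interest
\end{verbatim}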
Of secondary interest is the association between maternal weight change during pregnancy and childhood asthma. Given challenges in making definitive diagnoses in very young children \citep{vogelberg2019}, we only consider asthma diagnoses during ages 4 and 5 years; the subset of children in the obesity study who have data between their fourth and sixth birthdays is included in these analyses.
Our secondary analysis model was specified a priori as a logistic regression model with the outcome asthma (yes/no). The primary exposure is maternal weight change during pregnancy, and covariates are maternal BMI at conception, maternal age at delivery, maternal race, maternal ethnicity, cesarean delivery, maternal diabetes, smoking during pregnancy, insurance status, estimated gestational age, child sex, and maternal asthma. To simplify presentation, we do not mathematically define variables for the secondary analysis.
Instead of observing $(Y,\Delta,X,\mathbf{Z})$, our phase 1 data consist of error-prone versions of these variables, denoted $(Y^*,\Delta^*,X^*,\mathbf{Z}^*)$, and auxiliary variables, $\mathbf{A}$, that are not directly included in analysis models but may provide useful information for sampling or weighting. Our strategy is to validate a phase 2 sample of records so that we know $(Y,\Delta,X,\mathbf{Z})$ for this sample. Before we get to that, we first describe the phase 1 data.
\section{Phase 1 Data}
\subsection{EHR Data Sources}
We received data from all mothers in the Vanderbilt University Medical Center (VUMC) EHR who gave birth between December 2005 and August 2019 and could be linked with children whose data were also in the EHR. Study investigators received data tables extracted from the EHR including demographics, ICD-9/ICD-10 diagnoses, labs, medications, encounters, insurance data, and medical record numbers that allowed linking mothers with their children.
We received data for 20,684 mothers and 25,284 linked children. For mothers who delivered more than one child in separate pregnancies, we selected the first delivered child; in the case of multiple births from a single pregnancy, we randomly picked one child for inclusion.
Mother-child dyads were included if the child had at least one pair of height-weight measurements after 2 years of age, the mother had at least one height measurement, and the mother had at least one weight measurement during the year preceding the pregnancy up to the delivery date. A small number of mothers ($n=38$) with weight exceeding 180 kg (400 lbs) or whose weight was reported in the EHR to have changed more than 70 kg (150 lbs) during pregnancy were excluded. These data screening steps left $N=10,335$ mother-child dyads included in the study as the phase 1 sample. The asthma sub-study included 7,053 (68\%) of these children.
Children’s weight and height measurements during their first 6 years of life were cleaned using a validated algorithm developed by \citet{daymont2017}. Body mass index (BMI) was computed using heights and weights measured on the same day. If there were no same-day measurements, then we used the nearest height measurement within $\pm 3$, $\pm 7$, $\pm 14$, and $\pm 30$ days for weights measured when children's ages were $<90$ days, 90-119 days, 120-729 days, and $\ge 730$ days, respectively; 35\% of children's heights were imputed in this manner. A total of 12.0\% (29,265 out of 243,550) of heights and 16.6\% (88,033 out of 529,959) of weights were excluded because no corresponding weight or height measurement was available. Children’s BMI percentiles were calculated using the R package \texttt{childsds} \citep{vogel2019}. Obesity was defined as BMI $\geq$ 95th percentile for age and sex according to the U.S. Centers for Disease Control and Prevention growth curves between ages 2 and 5 years (up until the 6th birthday) \citep{flegal2013}. The date of obesity was defined as the first date on which a child met the obesity endpoint. Children were not eligible to be classified as having obesity before age 2.
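The age-dependent height-matching rule above is easy to state in code; the sketch below assumes a hypothetical data frame \texttt{ht} of a single child's height measurements with columns \texttt{date} (class \texttt{Date}) and \texttt{height\_cm}.
\begin{verbatim}
## Sketch of the height-matching rule: for a weight measured at child age
## `age_days`, use the nearest height within the age-dependent window.
height_window <- function(age_days) {
  if (age_days < 90) 3
  else if (age_days < 120) 7
  else if (age_days < 730) 14
  else 30
}

nearest_height <- function(wt_date, age_days, ht) {
  gap <- abs(as.numeric(ht$date - wt_date))
  ok  <- gap <= height_window(age_days)
  if (!any(ok)) return(NA_real_)  # no usable height: this weight is excluded
  ht$height_cm[ok][which.min(gap[ok])]
}
\end{verbatim}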
Childhood asthma and maternal diagnoses of asthma, diabetes, gestational diabetes, and depression were determined using ICD-9 or ICD-10 codes and based on published Phecodes \citep{wu2019}.
If the EHR indicated that the mother had any smoking history prior to delivery, she was categorized as an ever smoker; otherwise, she was categorized as a never smoker.
Maternal BMI was computed using each mother's median height; measurements before the age of 15 years and extreme values $\leq 50$ cm or $\geq 200$ cm were excluded.
\subsection{Deriving Maternal Weight Change}
Maternal weight change per week during pregnancy is ideally computed as the weight immediately preceding delivery minus the weight at conception, divided by the duration of the pregnancy in weeks. There are several challenges with calculating this exposure. First, although all women in our study had at least one weight during pregnancy and most had multiple weights (median of 9 measurements, ranging from 1 to 66), the weight just before giving birth was often not known. Second, the date of conception, which is difficult to obtain even in the best designed studies, was not readily extractable from the EHR data. Therefore, to estimate our exposure of interest with our phase 1 EHR data, we assumed that conception occurred 273 days before delivery, and maternal weights at delivery and conception were estimated from weight trajectories fit using functional principal components analysis (FPCA). The assumption of a uniform 273-day gestational period is obviously an oversimplification, but similar assumptions have been made and are necessary in settings with unavailable dates of conception (e.g., \citet{pereira2021}). The actual duration of pregnancy is addressed in our phase 2 validation sample.
We now describe the FPCA procedure. Let $W_1(t), \ldots, W_N(t)$ denote a random sample of women's weights $W(t)$ at each time $t$ during pregnancy on a common domain $\mathcal{T}=[-365,272]$ days, where $t=0$ represents the assumed date of conception and delivery occurs at $t=273$, so that weights are modeled from one year before conception through the day before delivery.
We assume measurements are independent between subjects and that $W(t)$ has a smooth trajectory over $\mathcal{T}$. It follows from the Karhunen-Lo\`eve expansion \citep{karhunen1946spektraltheorie} that the time-varying process can be decomposed into a linear combination of eigenfunctions, the FPCA \citep{ramsay2007applied}, such that
\begin{align}
W_i(t) = \mu(t) + \sum_{k \geq 1} \xi_{ik} \phi_k(t),\label{K-L}
\end{align}
where $\mu(t) = \mathrm{E}W_1(t)$ is the mean function and the $\xi_{ik}$ are uncorrelated random variables with mean zero and variance $\lambda_k$ satisfying $\lambda_k \geq \lambda_{k+1}$ for any $k=1,2,\ldots$.
In our study, weights are measured at different time points, such that the $i$-th mother has a record history $\{ W_i(t_{i1}), \ldots, W_i(t_{im_i}) \}$ at time points $t_{i1} < \cdots < t_{im_i}$, where the number of measurements $m_i$ also varies between mothers. In addition, we allow the observed weight measurements to be contaminated by additive measurement error, $\widetilde{W}_{ij} = W_i(t_{ij}) + \epsilon_{ij}$, where $\epsilon_{ij}$ is an independent Gaussian error with mean zero.
Hence, we write $\mathcal{W}_1 = \{ \widetilde{W}_{ij}: 1 \leq j \leq m_i, \, 1 \leq i \leq N \}$ to be the set of longitudinal weight observations in the phase 1 data with $N=10,335$, where $\widetilde{W}_{ij}$ is the error-prone weight record of the $i$-th mother measured at the gestational age $t_{ij}$.
\citet{yao2005functional} proposed the principal components analysis through conditional expectation (PACE), in which the best linear estimate of the FPC score $\xi_{ik}$ is given by
\begin{align}
\hat{\xi}_{ik} = \hat{\lambda}_k \widehat{\boldsymbol{\phi}}_{ik}^\top \widetilde{\Sigma}_i^{-1} (\widetilde{\mathbf{W}}_i - \widehat{\boldsymbol{\mu}}_i), \label{pace-fpcs}
\end{align}
where $\widetilde{\mathbf{W}}_i = (\widetilde{W}_{i1}, \ldots, \widetilde{W}_{im_i})^\top$ are $m_i$-longitudinal observations, $\widehat{\boldsymbol{\mu}}_i = (\hat{\mu}(t_{i1}), \ldots, \hat{\mu}(t_{im_i}))^\top$ are estimates of $\mathrm{E}\widetilde{\mathbf{W}}_i = (\mu(t_{i1}), \ldots, \mu(t_{im_i}))^\top$, and $\widetilde{\Sigma}_i$ is the $m_i \times m_i$ variance-covariance matrix estimate of $\widetilde{\mathbf{W}}_i$. Here, $\hat\lambda_k$ and $\widehat{\boldsymbol{\phi}}_{ik} = (\hat{\phi}_k(t_{i1}), \ldots, \hat{\phi}_k(t_{im_i}))^\top$ are estimates of the eigenvalue $\lambda_k$ and the evaluation of eigenfunction $\phi_k(t)$ at time points $t_{i1} < \cdots < t_{im_i}$, respectively, where the pair $({\lambda}_k, {\phi}_k(t))$ is defined as the solution of the functional eigenequations given by \citet{yao2005functional}.
We approximate the functional representation of the true time-varying weight process $W_i(t)$ in \eqref{K-L} with the leading $K$ FPC scores $\hat{\xi}_{ik}$ and eigenfunctions $\hat{\phi}_k(t)$ as
\begin{align}
\widehat{W}_i(t) = \hat{\mu}(t) + \sum_{k=1}^K \hat{\xi}_{ik} \hat{\phi}_k(t). \label{estimate-K-L}
\end{align}
We refer to \citet{yao2005functional} for technical details and confidence band estimation of the PACE method, and \citet{ramsay2007applied} and \citet{wang2016functional} for overviews of functional data analysis.
We used the R package \texttt{fdapace} \citep{carroll2021fdapace} for the numerical implementation of FPCA with longitudinal data.
The FPCA results applied to the phase 1 data $\mathcal{W}_1$ suggested that mothers' weight trajectories can be well approximated using \eqref{estimate-K-L} with $K=3$; the first three eigenfunctions $(\hat\phi_1(t), \hat\phi_2(t), \hat\phi_3(t))$ explained $99.9$\% of the variance. Figure \ref{figure2} of the Supplementary Material shows the estimated mean function $\hat\mu(t)$, which suggests that mothers gained approximately $12$ kg ($\approx \hat\mu(272) - \hat\mu(0)$) on average during pregnancy; the estimated covariance function; and the first three eigenfunctions.
Figure \ref{figure3} of the Supplementary Material depicts weight trajectories of six mothers constructed by the FPCA with the phase 1 data.
The phase 1 weight at conception for mother $i$ was estimated as $\widehat{W}_i(0)$, and the phase 1 exposure of interest, the individual weight gain per week during pregnancy, was computed as $X_i^*=\left[\widehat{W}_i(272) - \widehat{W}_i(0)\right]/(273/7)$.
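A minimal sketch of this phase 1 fit with \texttt{fdapace} follows; the lists \texttt{wt\_list} and \texttt{time\_list} are hypothetical containers for the longitudinal records in $\mathcal{W}_1$.
\begin{verbatim}
library(fdapace)

## wt_list[[i]]  : weights (kg) for mother i
## time_list[[i]]: measurement days relative to the assumed conception date
fpca <- FPCA(Ly = wt_list, Lt = time_list,
             optns = list(dataType = "Sparse", error = TRUE,
                          methodSelectK = 3))

## Fitted trajectories, evaluated on fpca$workGrid
What <- fitted(fpca, K = 3)

## Phase 1 exposure: estimated weekly weight gain during pregnancy
w0     <- apply(What, 1, function(w) approx(fpca$workGrid, w, xout = 0)$y)
w272   <- apply(What, 1, function(w) approx(fpca$workGrid, w, xout = 272)$y)
X_star <- (w272 - w0) / (273 / 7)
\end{verbatim}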
\section{Phase 2 Data Validation}
The previous section describes how we derived the phase 1 data $(Y^*,\Delta^*,X^*,\mathbf{Z}^*,\mathbf{A})$ from the EHR. This section describes the data validation procedures to obtain phase 2 data $(Y, \Delta, X, \mathbf{Z})$ on a probabilistic sample of mother-child records.
Data used to derive all outcomes, the primary exposure, and all covariates were validated by a single research nurse through a thorough review of the EHR. It is important to recognize that the phase 1 EHR data were extracted by programmers and that phenotypes and derived variables used in phase 1 analyses were constructed computationally. In contrast, during the data validation, the research nurse looked through the complete EHR, including data not readily extracted and free-text fields, to validate, and in some cases, find data. For example, estimated gestational age could not be readily extracted by programmers from the EHR and was therefore not in the phase 1 data; however, this information is in the EHR and could be extracted by the research nurse. Number of prior children and marital status were similarly not in the phase 1 data but were extracted by the research nurse from the EHR. Other desired variables (e.g., smoking during pregnancy) were approximated in the phase 1 sample using readily available data (e.g., any history of smoking prior to delivery), but could be extracted more accurately from a thorough review of the EHR.
Note that although we refer to these manually abstracted data as the validated data, they may still be incorrect: the research nurse may make mistakes, or the correct diagnosis may not be in the EHR because it was never entered or was missed by health care providers. In general, we assume that these validated data are of higher quality than the data algorithmically extracted from the EHR, and the validated data are treated as the gold standard in our analyses.
The research nurse entered validation data into two spreadsheets and an electronic case report form using the Research Electronic Data Capture (REDCap) software \citep{harris2009}. The two spreadsheets were used to reduce the data entry burden and number of button clicks to record audit findings for repeated values. The first spreadsheet contained maternal weights extracted from the EHR and the second spreadsheet contained children's heights and weights. All other phase 2 data were entered into the REDCap forms.
We initially performed a pilot validation of 12 mother-child dyad records to refine our procedures and forms; validated data from the pilot were excluded from analyses. In the pilot validation, we realized that manual entry of dozens of weights per mother-child dyad would be extremely time-consuming, would yield only a small proportion of errors needing correction, and could itself introduce data entry errors. Therefore, for our phase 2 sample, the research nurse only validated the following phase 1 measurements: children’s heights/weights closest to their birthdays,
children’s heights/weights that led to a first diagnosis of obesity, maternal weight closest to but prior to delivery, maternal weight closest to but prior to 272 days before delivery, and any maternal weights flagged as potential outliers due to being outside the 95\% confidence bands for the FPCA-predicted trajectories.
Values were either verified as correct (i.e., matching the phase 1 value), replaced with the correct value, or removed if deemed to be an error for which no replacement was found. After chart review, the maternal weight change during pregnancy for each woman selected for validation was re-estimated using FPCA, incorporating the updated data in equations (\ref{pace-fpcs}) and (\ref{estimate-K-L}). Note that the estimated gestation period was also entered as part of the phase 2 validation, which typically resulted in a new date of conception; the timing $t$ of each mother's weight measurements was adjusted accordingly.
Figure \ref{figure4} contains a de-identified example from the actual data. In this example, five weight measurements were flagged as outside the FPCA 95\% confidence band based on phase 1 data (left panel), so the research nurse checked weights corresponding to those dates. The weight above the 95\% confidence band was found to be incorrect, whereas the weights below the confidence bands were verified as correct. The estimated gestational age based on the chart review was 259 days. The weight trajectory during pregnancy based on the validated data was then re-estimated for this mother and is shown in the right panel. The validated exposure of interest, weight change per week, was then re-computed as $X_i=\left[\widehat{W}_i(258) - \widehat{W}_i(0)\right]/(259/7)$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{Figure1-mother-145212-before-validation.png}
\hspace{4mm}
\includegraphics[width=0.45\textwidth]{Figure1-mother-145212-after-validation.png}
\caption{The estimated weight trajectory and $95$\%-confidence band derived using FPCA for one of the mothers based on phase 1 (left) and phase 2 (right) data; dates have been shifted for de-identification. Red crosses in the left panel were identified as potential outliers and were manually validated. After validation, we updated the weight trajectory (right panel); the outlier weight $>$ 100 kg was found to be erroneous and removed.}
\label{figure4}
\end{figure}
\section{Multi-Wave Phase 2 Validation Design}
In this section, we describe our phase 2 sampling design. We had enough resources to validate approximately 1000 mother-child dyad records. We decided to target the first three-fourths of our validation sample ($n=750$) towards optimizing the primary analysis (obesity endpoint) and the remainder towards optimizing the secondary analysis (asthma endpoint). Figure \ref{fig:flowchart} provides a broad overview of the sampling strategy; details are described in subsequent sections. To understand our validation sampling strategy, a few key ideas first need to be reviewed.
\begin{figure}[!ht]
\centering
\includegraphics[width=.75\textwidth]{PCORIdiagram20210618-bes.pdf}
\caption{Schematic of multi-wave sampling strategy for data validation in the childhood obesity study and the childhood asthma sub-study.}
\label{fig:flowchart}
\end{figure}
\subsection{Generalized raking}
We perform analyses combining phase 1 and phase 2 data using generalized raking. Generalized raking, also known as survey calibration, is a well-known technique in the survey sampling literature, but only recently has been recognized in the biostatistics literature as a practical approach to obtain augmented inverse probability weighted estimators \citep{lumley2011}. In brief, generalized raking takes the sampling weights (one over the sampling probabilities for each record) and calibrates them with an auxiliary variable (or vector of auxiliary variables) available in the phase 1 data such that the new calibrated weights are as close as possible to the original sampling weights but under the constraint that the sum of the auxiliary variable in the re-weighted phase 2 data is equal to its known sum in the phase 1 data. Such an approach improves efficiency over IPW estimators as long as the auxiliary variable is linearly associated with the variable of interest, with efficiency gains growing with increasing correlation \citep{OhetAl2021}. In our setting, the primary goal is to estimate a regression coefficient, specifically the log hazard ratio, $\beta$, and the most efficient auxiliary variable is the expected efficient influence function for $\beta$, denoted $E\left[H(Y,\Delta,X,\mathbf{Z})|Y^*,\Delta^*,X^*,\mathbf{Z}^*,\mathbf{A}\right]$ \citep{breslow2009}. This variable relies on unknown parameters, but a potentially good estimate of it is the influence function for $\beta$ fit to the error-prone phase 1 data, denoted $H^*=H(Y^*,\Delta^*,X^*,\mathbf{Z}^*)$. An even better estimate might be the influence function for the log hazard ratio fit to multiply imputed estimates of the validated data \cite{breslow2009aje,han2020b,han2016}, specifically, $\hat H=\sum_{m=1}^M H(\hat Y^{(m)},\hat \Delta^{(m)},\hat X^{(m)}, \mathbf{\hat Z}^{(m)})/M$, where $(\hat Y^{(m)},\hat \Delta^{(m)},\hat X^{(m)}, \mathbf{\hat Z}^{(m)})$ represent the $m$th imputation of $(Y,\Delta,X,\mathbf{Z})$ for $m=1,\ldots, M$ imputation replications. The imputation model is constructed from the validated data in the phase 2 sample.
More precisely, let $\theta=(\beta,\beta_Z)$, and let $\theta_0$ be the parameter defined by the population Cox partial likelihood score equation such that $\sum_{i=1}^N U_i(\theta_0)=0$. Let $R_i$ be the indicator that record $i$ is selected for the phase 2 sample, and let $\pi_i$ denote the sampling probability, i.e., $P(R_i=1|Y^*,\Delta^*,X^*,\mathbf{Z}^*,\mathbf{A})$ where $0<\pi_i<1$. The IPW estimator, $\hat \theta_{IPW}$, is the solution to $\sum_{i=1}^N R_i U_i(\theta)/\pi_i = 0$. The generalized raking estimator, $\hat \theta_{R}$, is the solution to the equation $\sum_{i=1}^N R_i g_i U_i(\theta)/\pi_i = 0$, where $g_i$ is chosen to minimize $\sum_{i=1}^N R_i d(g_i/\pi_i, 1/\pi_i)$ for some distance measure $d(\cdot,\cdot)$ subject to the constraint that $\sum_{i=1}^N H_i = \sum_{i=1}^N R_i g_i H_i/\pi_i,$ where $H_i$ is an estimate of the expected efficient influence function for $\beta$, either the naïve influence function, $H^*_i$, or the multiply imputed influence function, $\hat H_i$. Here we use $d(a,b)=a\log(a/b)-a+b$.
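Under this two-phase formulation, the raking fit can be sketched with the \texttt{survey} package \citep{lumley2011}; the data frame \texttt{dat}, the phase 2 indicator \texttt{in\_phase2}, the sampling stratum \texttt{stratum}, and the calibration variable \texttt{H} (holding either $H^*$ or $\hat H$) are hypothetical names.
\begin{verbatim}
library(survey)
library(survival)

## Two-phase design: all phase 1 records, with phase 2 records flagged
des <- twophase(id = list(~1, ~1), strata = list(NULL, ~stratum),
                subset = ~in_phase2, data = dat)

## Rake the weights so the weighted phase 2 total of H matches its known
## phase 1 total; calfun = "raking" uses d(a,b) = a log(a/b) - a + b
cal <- calibrate(des, formula = ~H, phase = 2, calfun = "raking")

## Refit the Cox model with the calibrated weights
fit <- svycoxph(Surv(Y, Delta) ~ X + maternal_bmi + child_sex, design = cal)
\end{verbatim}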
\subsection{Stratification, Neyman Allocation, and Multi-wave Sampling}
For an IPW estimator of a mean or total, the optimal stratified sampling strategy is Neyman allocation \citep{neyman1938}. Although not necessarily optimal for generalized raking, the loss of efficiency when using raking with a Neyman allocation design versus the theoretically optimal design has been seen to be minimal \citep{Chen&Lumley2021}. Neyman allocation is also fairly straightforward to implement. Given a set of strata, Neyman allocation samples proportional to the number of observations in a stratum times the standard deviation of the variable of interest within that stratum. Since a regression coefficient, including the log hazard ratio estimator from the Cox model, is asymptotically equivalent to a sum of influence functions, Neyman allocation in our setting samples proportional to the number of records in a stratum times the standard deviation of the influence function for the target coefficient in that stratum \citep{Chen&Lumley2020, amorim2021}. Again, we do not know the true influence function, but we can estimate it from the phase 1 data, and as we start to collect phase 2 data, we can re-estimate the influence function using the phase 2 data and adjust our sampling accordingly.
Following the adaptive multi-phase sampling approach by \citet{McIsaac&Cook2015}, and related work by \citet{Chen&Lumley2020} and \citet{Han2021}, we divided our phase 2 sample into multiple waves. In the first wave, we estimate the influence function of $\beta$ with $H^*$, the naïve influence function described above. We then allocate $n_{(1)}$, the sample size of the first wave of our phase 2 sample, across the set of $\mathcal{S}_1$ strata in wave 1 via Neyman allocation,
\begin{equation} \label{neyman}
n_{(1),s} = n_{(1)}\frac{N_s \hat \sigma_s(H^*)}{ \sum_s N_s \hat \sigma_s(H^*)},
\end{equation}
where $N_s$ is the population size of stratum $s \in \mathcal{S}_{1}$ and $\hat \sigma_s(H^*)$ is the estimated standard deviation of $H^*$ in stratum $s$.
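As an illustration, the allocation in \eqref{neyman} takes only a few lines of R; \texttt{strata} and \texttt{H\_star} are hypothetical vectors holding the stratum labels and naive influence-function values for the $N$ phase 1 records.
\begin{verbatim}
## Wave 1 Neyman allocation, as in the display above
neyman_alloc <- function(strata, H, n_wave) {
  N_s  <- tapply(H, strata, length)  # stratum sizes N_s
  sd_s <- tapply(H, strata, sd)      # estimated sd of H within each stratum
  round(n_wave * N_s * sd_s / sum(N_s * sd_s))
}

n1_s <- neyman_alloc(strata, H_star, n_wave = 250)
\end{verbatim}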
For the $k$th sampling wave ($k>1$), we determine the desired set of strata $\mathcal{S}_k$, which may be the same as $\mathcal{S}_{k-1}$, or individual strata $s \in \mathcal{S}_{k-1}$ may be split into two or more smaller strata. We then fit the desired model to the validated phase 2 data to directly estimate the influence function of interest.
We then compute the Neyman allocation for the total cumulative validated sample size $\sum_{j=1}^k n_{(j)}$, where $n_{(j)}$ is the size of the $j$th wave of validation sampling. The strategy for the $k$th wave is then to sample the difference between the derived optimal allocation for stratum $s$ and the number already sampled in that stratum. Specifically, for each $k>1$, the Neyman allocation for stratum $s \in \mathcal{S}_k$ is given by
\begin{equation}
n_{(k),s} = \bigg(\sum_{j=1}^k n_{(j)}\bigg)\frac{N_s \hat \sigma_{s,k-1}}{ \sum_s N_s \hat \sigma_{s,k-1}} \,\, - \,\, \sum_{j=1}^{k-1} n_{(j),s},
\end{equation}
where $\hat \sigma_{s,k-1}$ is the estimated standard deviation in stratum $s$ of the influence function using data already validated, i.e., $H(Y,\Delta,X,\mathbf{Z}|R_{k-1}=1)$ where $R_{k-1}$ is the cumulative indicator that data have been validated by wave $k-1$.
If a stratum is determined to have been oversampled relative to its optimal allocation in the current wave (i.e., $n_{(k),s}<0$), that stratum is closed to further sampling and Neyman allocation is recalculated for the total number to be validated in the remaining strata.
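This wave-$k$ update, including the closing of oversampled strata, can be sketched as follows; the per-stratum inputs (\texttt{N\_s}, \texttt{sd\_s}, the cumulative target \texttt{n\_target}, and the counts already validated \texttt{n\_sampled}) are hypothetical.
\begin{verbatim}
## Allocate the cumulative target across strata, subtract what is already
## validated, and close oversampled strata, re-allocating among the rest.
wave_k_alloc <- function(N_s, sd_s, n_target, n_sampled) {
  open <- rep(TRUE, length(N_s))
  repeat {
    alloc <- numeric(length(N_s))
    w <- N_s[open] * sd_s[open]
    alloc[open] <- (n_target - sum(n_sampled[!open])) * w / sum(w)
    take <- alloc - n_sampled
    if (all(take[open] >= 0)) return(pmax(round(take), 0))
    open <- open & (take >= 0)  # close strata already at/over their allocation
  }
}
\end{verbatim}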
In our case, since the cost of validation is essentially equivalent across all records, we can further improve precision by carefully choosing how to stratify our population for sampling. In general, creating strata based on both the outcome and the exposure jointly can result in more efficient designs \citep{breslowchatterjee1999}. More strata are generally more efficient than fewer strata \citep{lumley2010, amorim2021}.
In addition, the most efficient stratification is one where Neyman allocation suggests to sample approximately equal numbers from each stratum \citep{sarndal2003, amorim2021}. However, in practice it can be difficult to optimally select strata boundaries when there are a large number of strata, and some imbalance in the number in each stratum does not have too much impact on efficiency \citep{amorim2021}.
Put together, our general sampling strategy was to stratify jointly on the primary exposure and outcome and to choose a fair number of strata such that the number of records sampled in each stratum under Neyman allocation was approximately equal. After each sampling wave, we re-calculated the influence function based on the phase 2 data, re-computed the optimal allocation based on Neyman allocation with this updated influence function, divided large strata following the principle that optimality is achieved by sampling approximately equal numbers from strata, and then selected the next wave's sample based on this updated stratification and allocation. We note that in subsequent waves strata can be split, but in order for the final post-stratification weights to be well-defined, strata cannot be merged. Details of the choices made for the obesity and asthma sampling frames are provided in the following sections.
\subsection{Multi-wave sampling for obesity endpoint}
Our phase 2 sample for the obesity endpoint validated 750 paired records over a total of four sampling waves. Strata were created based on phase 1 data including the childhood obesity event indicator, the censored-failure time (time to childhood obesity or censoring), and the exposure of interest (estimated maternal weight change during pregnancy). We fit a simplified Cox model to the phase 1 data with the outcome time-to-obesity, the exposure of estimated change in maternal weight during pregnancy, and covariates estimated BMI at conception, maternal diabetes, maternal age at delivery, child sex, child ethnicity, and child race. From this model, we computed the estimated influence function for the maternal weight gain log-hazard ratio for each mother-child dyad. This influence function was then used to create the wave 1 sampling design, where strata boundaries were chosen such that the Neyman allocation was fairly similar across strata.
For wave 1 of our phase 2 sample, we started with 21 strata based on seven combinations of obesity/follow-up (censored in ages [2, 5) years, censored in ages [5, 6), obesity in ages [2, 2.5) years, obesity in ages [2.5, 3) years, obesity in ages [3, 4) years, obesity in ages [4, 5) years, and obesity in ages [5, 6) years) and three categories for mother’s estimated weight change during pregnancy ($\leq 5.14$ kg, (5.14, 20.5] kg, and $>20.5$ kg, where 5.14 kg and 20.5 kg represent the 5th and 95th percentiles for weight change in the phase 1 data). These choices of strata make intuitive sense: records with the most influence on the log-hazard ratio are those in the tails of the exposure and those experiencing the event, particularly early into follow-up \citep{lawless2018}. Our plan was to validate 250 records in wave 1; due to rounding, we sampled 252 records. Unfortunately, we had an error in our code which was not discovered until we began planning our wave 2 sample. This error caused us to sample more than was optimal from those records with maternal weight gains outside the 5th and 95th percentiles; without this coding error, our wave 1 strata would likely have been based on less extreme weight gain percentiles, e.g., perhaps the 10th and 90th percentiles.
Table 1 shows the final strata, the population total in each stratum ($N_s$), the number sampled from each stratum in each wave ($n_{(k),s})$, and the total number sampled from each stratum ($n_s$). Note that since wave 1 had fewer strata than the final number of strata (21 vs. 33), some of the original strata that were subsequently divided are represented by multiple rows. (For example, 8 records were sampled from the original stratum B; these 8 were distributed in some manner across final strata 2-4, not just from final stratum 2.)
Upon receiving the wave 1 validation data, we fit a weighted Cox regression model to the validated data (weights equal to the inverse of the sampling probabilities) to obtain influence functions and estimate their standard deviations in each of our 21 strata. This Cox model included phase 2 data for the outcome, the exposure of interest, and nearly all covariates specified for our final model. (The model did not include the indicator that the child was a singleton and dichotomized a few of our categorical covariates.)
For wave 2, we chose to validate an additional 248 records, bringing our total number validated up to 500. We used the updated estimates of the standard deviation of the influence function in each stratum and (correctly) applied Neyman allocation for a total sample of 500. From this, we learned that we had over-sampled from some strata and under-sampled from others. For example, the optimal number to be sampled from original stratum A (obesity=0, follow-up $\in$(2, 5], and weight change $\leq$ 5.14) based on Neyman allocation after wave 1 was 6, but we had already sampled 7.
In contrast, the estimated optimal number to be sampled from original stratum E (obesity=0, follow-up $\in$(5, 6], weight change $\in$ (5.14, 20.5], i.e., the union of final strata 7-11 in Table 1) was 105; in wave 1 we had sampled 16 from this stratum, meaning in wave 2 we would need to sample 89. Keeping in mind that optimal stratum boundaries would lead to sampling approximately equal numbers from each stratum under Neyman allocation, we further divided strata. Specifically, prior to performing sampling for wave 2, we divided 4 strata into 9 new strata (one stratum, E, was split into 3 strata), making a total of 26 strata. Neyman allocation was used to decide the optimal way to sample 500 records from these 26 strata.
Nine of these new strata, which included a total of 108 records sampled in wave 1, had already been over-sampled (i.e., $n_{(1),s} \geq$ the Neyman allocation for stratum $s$ with $n=500$), so these strata were closed, and Neyman allocation was re-computed to determine how best to sample 392 records ($=500-108$) from the remaining 17 ($=26-9$) open strata. Based on this procedure, the number of records sampled from each stratum in wave 2 is given in column $n_{(2),s}$ of Table 1; these sum to 248, the size of our wave 2 validation sample.
The process was repeated after collecting wave 2 validation data to select which records to sample in wave 3 ($n_{(3)}=125$) and then again after collecting wave 3 validation data to select which records to sample in wave 4 ($n_{(4)}=125$). For wave 3 there were a total of 30 strata (only 12 of which were sampled from) and for wave 4 we expanded to 33 strata (only 16 of which were sampled from). Additional details for these waves, including which strata were split and when, can be inferred from Table 1.
\begin{table}
\caption{Multi-wave Sampling Design for Childhood Obesity Endpoint}
\centering
\begin{threeparttable}
\begin{tabular}{lllllllllll}
\hline
Original & Final & Obesity & Follow-up & Maternal & $N_s$ & $n_{(1),s}$ & $n_{(2),s}$ & $n_{(3),s}$ & $n_{(4),s}$ & $n_s$ \\
Strata & Strata & & Time (yrs) & Gestational & & & \\
& & & & Weight & & & \\
& & & & Gain (kg) & & & \\
\hline
A & 1 & 0 & (2, 5] & $\leq 5.14$ & 190 & 7 & 0 & 0 & 0 & 7\\
\rowcolor{Gray}
B & 2 & 0 & (2, 5] & (5.14, 12] & 1904 & 8 & 21 & 7 & 3 & 24\\
\rowcolor{Gray}
& 3 & 0 & (2, 5] & (12, 16] & 1356 & & & 28 & 0 & 34\\
\rowcolor{Gray}
& 4 & 0 & (2, 5] & (16, 20.5] & 526 & & & & 27 & 37\\
C & 5 & 0 & (2, 5] & $> 20.5$ & 177 & 8 & 2 & 3 & 0 & 13\\
\rowcolor{Gray}
D & 6 & 0 & (5, 6] & $\leq 5.14$ & 208 & 14 & 18 & 1 & 0 & 33\\
E & 7 & 0 & (5, 6] & (5.14, 8.6] & 429 & 16 & 22 & 0 & 0 & 25\\
& 8 & 0 & (5, 6] & (8.6, 12] & 1478 & & 15 & 5 & 13 & 39\\
& 9 & 0 & (5, 6] & (12, 14] & 846 & & 18 & 21 & 20 & 44\\
& 10 & 0 & (5, 6] & (14, 16] & 563 & & & & 22 & 40\\
& 11 & 0 & (5, 6] & (16, 20.5] & 588 & & & 22 & 8 & 35\\
\rowcolor{Gray}
F & 12 & 0 & (5, 6] & (20.5, 24.3] & 154 & 17 & 19 & 0 & 0 & 32\\
\rowcolor{Gray}
& 13 & 0 & (5, 6] & $>24.3$ & 71 & & 24 & 0 & 0 & 28\\
G & 14 & 1 & (2, 2.5] & $\leq 5.14$ & 49 & 17 & 0 & 0 & 0 & 17\\
\rowcolor{Gray}
H & 15 & 1 & (2, 2.5] & (5.14, 10] & 140 & 20 & 19 & 16 & 3 & 28\\
\rowcolor{Gray}
& 16 & 1 & (2, 2.5] & (10, 12] & 126 & & & 8 & 1 & 22\\
\rowcolor{Gray}
& 17 & 1 & (2, 2.5] & (12, 16] & 205 & & 12 & 8 & 5 & 29\\
\rowcolor{Gray}
& 18 & 1 & (2, 2.5] & (16, 20.5] & 76 & & & & 3 & 14\\
I & 19 & 1 & (2, 2.5] & $> 20.5$ & 33 & 17 & 0 & 0 & 0 & 17\\
\rowcolor{Gray}
J & 20 & 1 & (2.5, 3] & $\leq 5.14$ & 13 & 12 & 0 & 0 & 0 & 12\\
K & 21 & 1 & (2.5, 3] & (5.14, 12] & 129 & 12 & 13 & 0 & 2 & 19\\
& 22 & 1 & (2.5, 3] & (12, 20.5] & 129 & & 15 & 0 & 1 & 24\\
\rowcolor{Gray}
L & 23 & 1 & (2.5, 3] & $> 20.5$ & 19 & 12 & 0 & 0 & 0 & 12\\
M & 24 & 1 & (3, 4] & $\leq 5.14$ & 21 & 10 & 0 & 0 & 0 & 10\\
\rowcolor{Gray}
N & 25 & 1 & (3, 4] & (5.14, 12] & 175 & 13 & 25 & 0 & 5 & 20\\
\rowcolor{Gray}
& 26 & 1 & (3, 4] & (12, 20.5] & 203 & & & 3 & 4 & 30\\
O & 27 & 1 & (3, 4] & $> 20.5$ & 28 & 13 & 0 & 0 & 0 & 13\\
\rowcolor{Gray}
P & 28 & 1 & (4, 5] & $\leq 5.14$ & 22 & 9 & 0 & 0 & 0 & 9\\
Q & 29 & 1 & (4, 5] & (5.14, 20.5] & 261 & 10 & 19 & 0 & 4 & 33\\
\rowcolor{Gray}
R & 30 & 1 & (4, 5] & $> 20.5$ & 24 & 11 & 4 & 0 & 0 & 15\\
S & 31 & 1 & (5, 6] & $\leq 5.14$ & 14 & 8 & 0 & 0 & 0 & 8\\
T & 32 & 1 & (5, 6] & (5.14, 20.5] & 167 & 8 & 2 & 3 & 4 & 17\\
\rowcolor{Gray}
U & 33 & 1 & (5, 6] & $> 20.5$ & 11 & 10 & 0 & 0 & 0 & 10\\
\hline
Total & & & & & 10335 & 252 & 248 & 125 & 125 & 750\\
\hline
\end{tabular}
\begin{tablenotes}{\item $N_s$ is the population size in stratum $s$, $n_{(1),s}$ is the number sampled from the stratum in wave 1, $n_{(2),s}$ is the number sampled from the stratum in wave 2, and $n_{(3),s}$ and $n_{(4),s}$ are defined similarly. $n_s$ is the total number sampled from stratum $s$ over all waves of the phase 2 validation sampling.}
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Multi-wave sampling for asthma endpoint}
To target validation towards the asthma endpoint, strata were chosen by dividing mother-child records into strata based on phase 1 data for the child’s asthma status and the mother’s estimated weight gain during pregnancy. Recall that those included in the asthma study ($N=7,053$) were a subset of those included in the obesity study ($N=10,335$). Of the 750 records already validated for the obesity endpoint, 582 met inclusion criteria for the asthma study and were used to decide which additional records to validate for the asthma endpoint. Our strategy was 1) to use the phase 2 data to build an imputation model for the validated data, 2) to impute ``validated data'' from that model for all mother-child records that had not been validated, 3) to fit a working analysis model to the complete data, 4) to compute the influence function for the maternal weight gain log-odds ratio, 5) to repeat this across multiple imputations to obtain the average influence function per mother-child dyad, and then 6) to perform Neyman allocation based on these estimated average influence functions, potentially refining strata so that the allocation was approximately balanced across strata.
Our working outcome model was a logistic regression model with the outcome asthma (yes/no) based on the validated/imputed data; the exposure variable was the validated/imputed estimated maternal weight change; validated/imputed covariates were BMI at conception, estimated gestational age, and maternal asthma; and unvalidated covariates were maternal race, maternal ethnicity, cesarean section, maternal age at delivery, and child sex. We could have imputed all variables in our final analysis model, but for simplicity we chose not to impute some variables that we felt were less important to the association. The validated estimated gestational age was first imputed using the R function \texttt{mice}; from this, the estimated maternal weight gain during pregnancy and BMI at conception were obtained from the FPCA. Then maternal asthma and child asthma were imputed using logistic regression models.
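A hedged sketch of this imputation sequence with \texttt{mice} is given below; the data frame \texttt{dat} and its column names are hypothetical, and the FPCA re-derivation step is only indicated in a comment.
\begin{verbatim}
library(mice)

## Impute validated gestational age first, then maternal and child asthma
## via logistic regression ("logreg"; binary variables coded as factors).
meth <- make.method(dat)
meth["gest_age_val"] <- "pmm"
meth[c("maternal_asthma_val", "child_asthma_val")] <- "logreg"
imp <- mice(dat, m = 10, method = meth, printFlag = FALSE)

## For each completed data set, weight gain and BMI at conception would be
## re-derived from the FPCA fit at the imputed conception date (not shown).
dat_m <- complete(imp, 1)
\end{verbatim}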
Table 2 shows the strata for the validation sampling targeted at the asthma analysis. There were five strata for wave 1 (the fifth overall sampling wave) based on combinations of the asthma endpoint and maternal weight gain during pregnancy. The numbers sampled from each stratum based on Neyman allocation for 125 records are shown in Table 2. After wave 1 of the asthma validation sampling, the process was repeated, combining all phase 2 validated data across the 5 prior waves to re-estimate the average multiply imputed influence function for the maternal weight gain log-odds ratio for asthma, which was then used to determine the sampling strategy for our 6th and final wave of sampling. Unlike in the obesity sampling, we had not over-sampled from any of our strata. However, strata were split, creating ten total strata from which fairly equal numbers (across the 284 records targeted for asthma) were sampled.
As the validation procedure was identical for both the primary and secondary analyses, our plan was to use all validated records for both analyses. Thus, in each of the 6 waves of phase 2 sampling, we validated all variables needed for both the obesity and asthma analyses. In this way, we could combine these two separate sampling frames using the approach of \citet{metcalf2009}. This approach requires that the phase 2 samples be independent; hence, records that had already been sampled in the obesity validation study remained eligible for sampling in the asthma validation study, and there was some overlap between sampled records. If a record had already been validated as part of the obesity validation study, we did not re-validate its data, but we used the already validated data and made note of the double-sampling for our analyses. Given that we had enough resources to validate approximately 1000 records, we selected 284 records across the two asthma-frame waves, knowing that there would be some overlap. It turned out that 38 of these 284 (13\%) had already been validated as part of the obesity validation, so the total number of unique mother-child dyads validated across the two sampling designs was 996.
\begin{table}
\caption{Multi-wave Sampling Design for Childhood Asthma Endpoint}
\centering
\begin{threeparttable}
\begin{tabular}{llllllll}
\hline
Original & Final & Asthma & Maternal & $N_s$ & $n_{(1),s}$ & $n_{(2),s}$ & $n_s$ \\
Strata & Strata & & Gestational & & & \\
& & & Weight & & & \\
& & & Gain (kg) & & & \\
\hline
A & 1 & 0 & $<5$ & 306 & 31 & 27 & 31 \\
& 2 & 0 & [5, 10) & 1251 & & 4 & 31 \\
\rowcolor{Gray}
B & 3 & 0 & [10, 12) & 1520 & 16 & 16 & 20 \\
\rowcolor{Gray}
& 4 & 0 & [12, 15) & 1681 & & 13 & 25 \\
C & 5 & 0 & [15, 19.5) & 1105 & 24 & 21 & 34 \\
& 6 & 0 & $\geq 19.5$ & 459 & & 23 & 34 \\
\rowcolor{Gray}
D & 7 & 1 & $< 8$ & 115 & 23 & 11 & 23 \\
\rowcolor{Gray}
& 8 & 1 & [8, 12) & 278 & & 13 & 24 \\
E & 9 & 1 & [12, 17] & 240 & 31 & 4 & 27 \\
& 10 & 1 & $\geq 17$ & 98 & & 27 & 35 \\
\hline
Total & & & & 7053 & 125 & 159 & 284 \\
\hline
\end{tabular}
\begin{tablenotes}{\item $N_s$ is the population size in stratum $s$, $n_{(1),s}$ is the number sampled from the stratum in wave 1, $n_{(2),s}$ is the number sampled from the stratum in wave 2, and $n_s$ is the total number sampled from stratum $s$ over both waves of the phase 2 validation sampling.}
\end{tablenotes}
\end{threeparttable}
\end{table}
\section{Analysis of Mother-Child Obesity Data}
\subsection{Analysis Approach}
To understand how different analysis choices affect our results, we compare several estimates of the association of maternal weight gain during pregnancy with the outcomes of childhood obesity and childhood asthma. The first estimate was from the model fit to the phase 1 data only; in addition to being based on error-prone data, this model is not fully adjusted for all of the relevant covariates because marital status, number of prior live births, and estimated gestational age were not available in the phase 1 data. We then considered inverse probability weighted estimators based on only the phase 2 data. Because we used two sampling frames for selecting records, we provide two IPW estimators for each endpoint. The first, which we refer to as IPW single frame (IPW$_\text{SF}$), is based on only the phase 2 records that were targeted for validation for that endpoint. The second, which we refer to as IPW multi-frame (IPW$_\text{MF}$), incorporates data from all 996 phase 2 records following the multi-frame analysis approach of \citet{metcalf2009}. Specifically, in this approach the records that were in both sampling frames (i.e., the 7,053 records in the asthma study) were included twice in the combined sampling frame and the weights were adjusted accordingly. Let $\pi_i^O$ be the sampling probability for record $i$ in the obesity sampling frame (i.e., defined by the final strata in Table 1), and let $\pi_i^A$ be the sampling probability for record $i$ in the asthma sampling frame (i.e., defined by the final strata in Table 2). The subset of 3,282 records in the obesity frame but not the asthma frame received a sampling weight of $1/\pi_i^O$. The 7,053 records in both frames that were duplicated in the combined sampling frame received weights of $\phi_i/\pi_i^O$ and $(1-\phi_i)/\pi_i^A$, respectively. We set $\phi_i=\pi_i^O/(\pi_i^O+\pi_i^A)$, which yielded the Hansen-Hurwitz estimator and implied that the weight assigned to a unit did not depend on the sample in which it was drawn \citep{metcalf2009}. Standard error estimates properly account for the duplication of records. A similar approach was applied for the asthma endpoint.
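The weight construction can be sketched in a few lines; \texttt{pi\_O} and \texttt{pi\_A} are hypothetical vectors of the stratum sampling probabilities in the two frames, with \texttt{pi\_A} set to \texttt{NA} for records outside the asthma frame.
\begin{verbatim}
## Hansen-Hurwitz multi-frame weights described above
phi <- pi_O / (pi_O + pi_A)

## Records only in the obesity frame keep weight 1/pi_O; the duplicated
## copies of records in both frames get phi/pi_O and (1 - phi)/pi_A.
w_obesity_copy <- ifelse(is.na(pi_A), 1 / pi_O, phi / pi_O)
w_asthma_copy  <- (1 - phi) / pi_A
\end{verbatim}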
The generalized raking estimators can potentially improve the efficiency of the multi-frame IPW estimator by calibrating the weights using estimates of the efficient influence function of the target regression parameter. Calibration was based on either the naïve influence function or on the multiply imputed influence function; the resulting estimators are referred to as Raking$_{\text{Nv}}$ and Raking$_{\text{MI}}$, respectively. The naïve influence function was extracted from the Cox model described earlier in this section that was based on only the error-prone phase 1 data. The multiply imputed influence function was based on the following procedure: 1) using phase 2 data, fit a model for the validated variables conditional on the unvalidated variables; 2) using this model, impute ``validated data'' for all phase 1 records (including those in the phase 2 sample); 3) fit the full Cox model to the fully imputed ``validated data'' and obtain the estimated influence function for each record; 4) repeat steps 2 and 3 multiple (in our case 100) times; 5) for each observation, compute the average of the estimated multiply imputed influence functions; 6) use this average influence function to calibrate weights.
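As a concrete illustration, steps 1--6 can be sketched as follows, assuming a hypothetical helper \texttt{impute\_validated()} that wraps the imputation model of step 1; the \texttt{dfbeta} residuals from \texttt{coxph} serve as per-record influence function contributions.
\begin{verbatim}
library(survival)

M <- 100
H_mat <- sapply(seq_len(M), function(m) {
  dat_m <- impute_validated(dat)                    # step 2
  fit_m <- coxph(Surv(Y, Delta) ~ X + maternal_bmi + child_sex,
                 data = dat_m)                      # step 3
  resid(fit_m, type = "dfbeta")[, 1]                # influence for beta
})
H_hat <- rowMeans(H_mat)                            # steps 4-5
## step 6: supply H_hat as the calibration variable when raking the weights
\end{verbatim}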
The performance of the Raking$_{\text{Nv}}$ versus Raking$_{\text{MI}}$ estimators depends on how much error was in the phase 1 data and how well the phase 2 data can be imputed. We present a detailed analysis of error rates before presenting final study results.
\subsection{Error Rates}
Table 3 summarizes phase 1 and combined, unweighted phase 2 data for key study variables. In the phase 1 sample, 18\% of children developed obesity between ages 2-5 years. We over-sampled children with obesity in our phase 2 sample, where 42\% of children were found to meet the obesity definition. (Note that the phase 2 data presented in Table 3 are not weighted and so are not meant to represent estimates in the study population, but rather what was observed in the validation sample.) Treating validated data in the phase 2 sample as error-free, the childhood obesity outcome was misclassified only 6 times in the phase 1 data (0.6\%), with 1 child falsely classified as having obesity (positive predictive value [PPV] = 0.998) and 5 children incorrectly classified as not having obesity (negative predictive value [NPV] = 0.991). In the subset of patient records meeting inclusion criteria for the asthma study in the phase 1 sample, 10\% had asthma between ages 4-5 years. The asthma outcome had higher rates of misclassification than the obesity outcome: 10.4\% of children had their asthma diagnosis misclassified with PPV=0.570 and NPV=0.973.
The estimated maternal weight gain during pregnancy (median of 0.30 kg/wk) was highly prone to error, with all values in the phase 1 data differing from those derived from the phase 2 data. This was partly due to correcting erroneous maternal weights, but especially due to our ability to obtain the estimated length of pregnancy from the chart reviews, which was not available in phase 1. On average, the estimated weight gain was 19.6 grams per week lower in the phase 2 data than in the phase 1 data, with the discrepancy ranging from 655 g/wk lower to 933 g/wk higher, although 93\% of validated records had discrepancies under 100 g/wk. Similarly, the estimated BMI at conception differed from the phase 1 estimate for all mothers in the phase 2 sample. The median BMI discrepancy was 0.13 kg/m$^2$ higher in phase 2 than phase 1 among validated records, with discrepancies ranging from 6.8 kg/m$^2$ lower to 8.6 kg/m$^2$ higher, and 83\% having a discrepancy less than 1 kg/m$^2$.
Other variables with high levels of misclassification in the phase 1 data were maternal diabetes (10.9\% misclassified), maternal depression (13.5\% misclassified), and insurance status (24.3\% misclassified). Our phase 1 approximation of smoking during pregnancy, which was estimated based on any evidence of smoking prior to delivery, also had a fairly high level of misclassification (11.8\%). In contrast, misclassification in the phase 1 data was fairly low for race (5.4\%), ethnicity (1.1\%), cesarean delivery (1.3\%), child sex (0.4\%), singleton (1.2\%), and maternal asthma (4.5\%).
\begin{table}
\caption{Characteristics of phase 1 and unweighted phase 2 samples, including discrepancies.}
\footnotesize
\centering
\begin{threeparttable}
\begin{tabular}{llllllll}
\hline
Variable & Phase 1 & Phase 2$^a$ & Percent & Discrepancy \\
& $N=10,335$ & $n=996$ & Error$^b$ & \\
\hline
Child obesity & 17.9\% & 42.0\% & 0.6 & PPV=0.998, NPV=0.991 \\
Time to event/censoring (age, yrs) & 4.3 (2.9, 6.0)$^c$ & 4.8 (3.0, 6.0) & 4.7 & med 1.0 (range 0.04, 1.8)\\
Maternal weight gain (kg/wk) & 0.30 (0.26, 0.38) & 0.30 (0.22, 0.41) & 100 & med $-0.02$ (range $-0.66, 0.93$) \\
Maternal BMI (kg/m$^2$) & 25.9 (22.6, 30.5) & 27.9 (23.8, 33.1) & 100 & med 0.13 (range $-6.8, 8.6$) \\
Maternal age (yrs) & 28.0 (23.5, 32.3) & 27.4 (23.0, 31.8) & 0 & -- \\
Maternal race & & & 5.4 \\
\hspace{.2in} White & 61.8\% & 56.8\% & & PPV=0.952, NPV=0.962 \\
\hspace{.2in} Black & 23.1\% & 29.7\% & & PPV=0.986, NPV=0.993 \\
\hspace{.2in} Asian & 6.9\% & 4.0\% & & PPV=0.904, NPV=0.998 \\
\hspace{.2in} Other/Unknown & 8.2\% & 9.4\% & & PPV=0.778, NPV=0.966 \\
Maternal ethnicity, Hispanic & 14.9\% & 14.9\% & 1.1 & PPV=0.948, NPV=0.996 \\
Maternal diabetes & & & 10.9 \\
\hspace{.2in} None & 83.3\% & 89.4\% & & PPV=0.991, NPV=0.553\\
\hspace{.2in} Gestational & 13.7\% & 6.7\% & & PPV=0.420, NPV=0.992\\
\hspace{.2in} Type 1 or 2 & 3.0\% & 3.9\% & & PPV=0.472, NPV=0.977\\
Cesarean delivery & 36.2\% & 38.2\% & 1.3 & PPV=0.989, NPV=0.986 \\
Child sex, male & 52.7\% & 55.4\% & 0.4 & PPV=0.995, NPV=0.998 \\
Maternal depression & 8.9\% & 10.9\% & 13.5 & PPV=0.376, NPV=0.926 \\
No private insurance & 45.9\% & 67.6\% & 24.3 & PPV=0.941, NPV=0.580 \\
Singleton & 98.1\% & 97.3\% & 1.2 & PPV=0.992, NPV=0.826 \\
Maternal smoking$^d$ & 6.3\% & 13.2\% & 11.8 & PPV=0.618, NPV=0.897\\
Married$^e$ & -- & 51.8\% & -- & -- \\
Number prior live births$^e$ & -- & 0.5 (0, 1) & -- & -- \\
Gestational age$^e$ (wks) & -- & 39.1 (38.1, 40.3) & -- & -- \\
\\
Child asthma$^f$ & 10.4\% & 13.0\% & 10.4 & PPV=0.570, NPV=0.973 \\
Maternal asthma$^f$ & 7.8\% & 11.0\% & 4.5 & PPV=0.827, NPV=0.968 \\
\hline
\end{tabular}
\begin{tablenotes}{
\item $^a$ Children diagnosed with obesity and asthma were intentionally over-sampled in phase 2.
\item $^b$ Percentage of phase 2 values that did not match phase 1 value.
\item $^c$ Median (25th percentile, 75th percentile) are reported for continuous variables.
\item $^d$ Any evidence in the EHR of smoking prior to delivery was used as a surrogate for smoking status during pregnancy.
\item $^e$ Marital status, number of prior live births, estimated gestational age, and smoking status during pregnancy were not available in the phase 1 data. Estimated gestational age was assumed to be 39 weeks for computing average weight gain during pregnancy.
\item $^f$ Child asthma and maternal asthma are only shown for the $N=7,053$ in phase 1 and $n=828$ in phase 2 meeting the inclusion criteria for the asthma sub-study.
}
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Regression Results}
Table 4 shows log hazard ratio estimates and standard errors for the various estimators. The estimated log hazard ratio of childhood obesity for maternal weight gain during pregnancy was fairly similar across all estimators. The variance of the IPW single-frame estimate was the largest. Relative to the single-frame estimator, the multi-frame IPW estimator decreased the variance by 38\%, and raking the multi-frame IPW estimator with either the naïve or the multiply imputed influence function decreased the variance by a further 33\%.
Holding all other factors constant, a child of a woman who gained 250 grams more per week during pregnancy (i.e., 9.75 kg in additional weight over a typical 39-week pregnancy) would be estimated to have a 24\% increased hazard of obesity before age 6 (hazard ratio [HR]$=1.24$; 95\% CI 1.14-1.36) using only the unvalidated phase 1 data versus a 30\% increased hazard (HR=1.30; 95\% CI 1.14-1.48) using our multi-frame estimator with weights raked by the naïve influence function.
Smoking and insurance status were quite error-prone in the phase 1 data, and their relationships with childhood obesity were stronger using the validated data and raking analyses. Some apparent associations with childhood obesity in the phase 1 data were no longer seen (i.e., the 95\% CI for $\beta$ crossed 0) in the generalized raking results, including associations with Asian and other race, cesarean delivery, male sex, and singleton status. The loss of association may in part be due to decreased precision when incorporating the validation data (e.g., cesarean section), attenuation (e.g., Asian race), or inclusion of other variables (e.g., the association with singleton status may have been confounded by estimated gestational age). Interestingly, gestational diabetes appeared protective in the raked analyses (HR=0.58, 95\% CI 0.38-0.90) but not in analyses using only the phase 1 data (HR=1.13, 95\% CI 0.99-1.28).
\begin{table}
\caption{Estimated log hazard ratio estimates ($\beta$) for childhood obesity and standard errors (SE) based on various data and estimators.}
\footnotesize
\centering
\begin{tabular}{lrrrrrrrrrr}
\hline
& \multicolumn{2}{c}{Phase 1} & \multicolumn{2}{c}{IPW$_{\text{SF}}$} & \multicolumn{2}{c}{IPW$_{\text{MF}}$} & \multicolumn{2}{c}{Raking$_{\text{Nv}}$} & \multicolumn{2}{c}{Raking$_{\text{MI}}$} \\
& $\beta$ & SE & $\beta$ & SE & $\beta$ & SE & $\beta$ & SE & $\beta$ & SE \\
\hline
Maternal weight gain (kg/wk) & 0.87 & 0.18 & 0.83 & 0.42 & 1.17 & 0.33 & 1.06 & 0.27 & 1.00 & 0.26 \\
Maternal BMI (5 kg/m$^2$) & 0.28 & 0.02 & 0.34 & 0.05 & 0.32 & 0.03 & 0.32 & 0.03 & 0.32 & 0.03 \\
Maternal age (10 yrs) & -0.05 & 0.04 & 0.12 & 0.17 & 0.15 & 0.11 & 0.15 & 0.11 & 0.15 & 0.11 \\
Maternal race, Black & -0.03 & 0.06 & -0.14 & 0.21 & -0.24 & 0.14 & -0.24 & 0.14 & -0.24 & 0.14 \\
Maternal race, Asian & 0.24 & 0.11 & 0.37 & 0.39 & 0.08 & 0.25 & 0.10 & 0.25 & 0.10 & 0.25 \\
Maternal race, other/unknown & 0.41 & 0.08 & 0.24 & 0.25 & 0.04 & 0.17 & 0.04 & 0.17 & 0.04 & 0.17 \\
Maternal ethnicity, Hispanic & 0.72 & 0.06 & 0.88 & 0.23 & 0.95 & 0.15 & 0.95 & 0.14 & 0.94 & 0.14 \\
Maternal diabetes, gestational & 0.12 & 0.06 & -0.40 & 0.30 & -0.54 & 0.22 & -0.54 & 0.22 & -0.54 & 0.22 \\
Maternal diabetes, type 1/2 & 0.13 & 0.12 & -0.02 & 0.42 & -0.19 & 0.27 & -0.15 & 0.26 & -0.15 & 0.26 \\
Cesarean delivery & 0.12 & 0.05 & 0.29 & 0.16 & 0.17 & 0.10 & 0.17 & 0.10 & 0.17 & 0.10 \\
Child sex, male & 0.12 & 0.05 & -0.21 & 0.16 & -0.15 & 0.10 & -0.15 & 0.10 & -0.14 & 0.10 \\
Maternal depression & 0.08 & 0.08 & -0.27 & 0.28 & -0.19 & 0.18 & -0.17 & 0.18 & -0.16 & 0.18 \\
No private insurance & 0.18 & 0.05 & 0.55 & 0.21 & 0.60 & 0.14 & 0.59 & 0.14 & 0.59 & 0.14 \\
Singleton & 0.44 & 0.21 & -0.02 & 0.49 & -0.00 & 0.33 & 0.03 & 0.32 & 0.02 & 0.32 \\
Maternal smoking & 0.32 & 0.10 & 0.48 & 0.25 & 0.48 & 0.17 & 0.46 & 0.17 & 0.46 & 0.17 \\
Married & & & 0.18 & 0.20 & 0.32 & 0.13 & 0.31 & 0.13 & 0.31 & 0.13 \\
Number prior live births & & & -0.06 & 0.08 & -0.07 & 0.05 & -0.08 & 0.05 & -0.08 & 0.05 \\
Gestational age (wks) & & & 0.06 & 0.03 & 0.03 & 0.02 & 0.03 & 0.02 & 0.03 & 0.02 \\
\hline
\end{tabular}
\end{table}
Similar sets of analyses were performed to estimate odds ratios for our asthma outcome (Table 5). In analyses based only on phase 1 data, the estimated log odds ratio of asthma for maternal weight gain during pregnancy was $-0.54$. The generalized raking estimators were in the opposite direction: 0.25 (raking with the naïve influence function) and 0.26 (raking with the multiply imputed influence function). In terms of a 250 g/wk difference in weight gain, these correspond to odds ratios of 0.88 (95\% CI 0.75-1.02) with phase 1 data only and 1.07 (95\% CI 0.74-1.53) using the multi-frame analysis and raking with the naïve influence function. Although neither estimate would lead one to conclude that a mother's weight gain during pregnancy is associated with an increased risk of childhood asthma, the naïve estimator is weakly suggestive of a protective effect, whereas the raked estimators provide no evidence of an association. The standard error of the log odds ratio greatly decreased when using the multi-frame sample (the single-frame sample was based on only 284 chart reviews, whereas the multi-frame was based on all 996), but raking, with either the naïve or multiply imputed influence function, did not further reduce the standard error. This may be because the phase 1 asthma data were somewhat poor surrogates for a true asthma diagnosis and the imputation model was not able to recover much information. The multi-frame IPW and raking estimators suggested stronger associations between childhood asthma and Black race, male sex, and public insurance than were seen using the unvalidated EHR data; these are all known risk factors for developing asthma. Maternal asthma was similarly predictive of childhood asthma in the phase 1 and raking analyses. The 95\% confidence intervals for the odds ratios for higher BMI at conception and younger age at delivery went from excluding 1 in the phase 1 analyses to including 1 in the raking analyses. Again, gestational diabetes was predictive of a lower risk of asthma. Finally, longer estimated gestational age was associated with lower odds of the child developing asthma in the generalized raking analyses; no such estimate could be computed using the phase 1 data alone.
\begin{table}
\caption{Estimated log odds ratio ($\beta$) for childhood asthma and standard errors (SE) based on various data and estimators.}
\footnotesize
\centering
\begin{tabular}{lrrrrrrrrrr}
\hline
& \multicolumn{2}{c}{Phase 1} & \multicolumn{2}{c}{IPW$_{\text{SF}}$} & \multicolumn{2}{c}{IPW$_{\text{MF}}$} & \multicolumn{2}{c}{Raking$_{\text{Nv}}$} & \multicolumn{2}{c}{Raking$_{\text{MI}}$} \\
& $\beta$ & SE & $\beta$ & SE & $\beta$ & SE & $\beta$ & SE & $\beta$ & SE \\
\hline
Maternal weight gain (kg/wk) & -0.54 & 0.31 & -0.18 & 1.51 & 0.48 & 0.73 & 0.25 & 0.74 & 0.26 & 0.74 \\
Maternal BMI (5 kg/m$^2$) & 0.10 & 0.03 & 0.05 & 0.17 & 0.10 & 0.07 & 0.09 & 0.07 & 0.10 & 0.07 \\
Maternal age (10 yrs) & -0.18 & 0.07 & 0.14 & 0.38 & -0.07 & 0.18 & -0.08 & 0.17 & -0.08 & 0.17 \\
Maternal race, Black & 0.71 & 0.09 & 1.37 & 0.64 & 1.25 & 0.26 & 1.28 & 0.25 & 1.28 & 0.25 \\
Maternal race, Asian & -0.34 & 0.22 & 1.09 & 0.96 & 0.76 & 0.53 & 0.78 & 0.52 & 0.79 & 0.52 \\
Maternal race, other/unknown & 0.05 & 0.19 & 0.08 & 0.82 & 0.49 & 0.36 & 0.45 & 0.36 & 0.45 & 0.36 \\
Maternal ethnicity, Hispanic & -0.09 & 0.14 & -0.09 & 0.75 & 0.20 & 0.30 & 0.25 & 0.30 & 0.25 & 0.30 \\
Maternal diabetes, gestational & -0.38 & 0.14 & -16.19 & 0.55 & -2.43 & 0.54 & -2.33 & 0.53 & -2.33 & 0.53 \\
Maternal diabetes, type 1/2 & 0.10 & 0.20 & 0.70 & 1.08 & 0.51 & 0.50 & 0.53 & 0.48 & 0.54 & 0.48 \\
Cesarean delivery & 0.16 & 0.08 & 0.03 & 0.48 & -0.15 & 0.21 & -0.14 & 0.21 & -0.14 & 0.21 \\
Child sex, male & 0.47 & 0.08 & 0.77 & 0.41 & 0.70 & 0.21 & 0.73 & 0.21 & 0.73 & 0.21 \\
No private insurance & 0.11 & 0.09 & 0.63 & 0.59 & 0.90 & 0.28 & 0.90 & 0.28 & 0.90 & 0.28 \\
Maternal smoking & -0.44 & 0.24 & 1.19 & 0.61 & 0.31 & 0.28 & 0.28 & 0.28 & 0.29 & 0.28 \\
Maternal asthma & 0.70 & 0.12 & 0.54 & 0.77 & 0.75 & 0.27 & 0.72 & 0.26 & 0.71 & 0.26 \\
Gestational age (wks) & & & -0.07 & 0.06 & -0.07 & 0.03 & -0.07 & 0.03 & -0.07 & 0.03 \\
\hline
\end{tabular}
\end{table}
\section{Discussion}
In this manuscript, we describe our experience implementing a multi-wave validation study to address EHR data quality issues and obtain efficient estimates of the association between maternal gestational weight gain and diagnoses of childhood obesity and asthma. Our multi-wave sampling approach targeted records for validation based on information learned in prior sampling waves. This strategy resulted in a phase 2 sample that made the most of limited resources for data validation. We obtained estimates of association using a novel generalized raking procedure that efficiently combined validation data across multiple sampling waves within two sampling frames with the larger, error-prone EHR data. The resulting augmented inverse probability weighted estimators addressed complicated error structures across multiple variables in a robust manner and reliably approximate the effect estimates that would have been obtained had the entire phase 1 sample been validated. We also employed a cutting-edge functional principal components analysis to estimate maternal weight gain during pregnancy.
During the validation, we found several variables with appreciable error, including the primary exposure variable of maternal weight gain during pregnancy. Our first set of analyses evaluated the association between maternal weight gain during pregnancy and childhood obesity using Cox regression. In this case, the childhood obesity phenotype derived from the EHR data was very accurate (PPV 99\%). Despite errors in the estimated maternal gestational weight gain, hazard ratio estimates did not differ substantially when models were run on the unvalidated phase 1 data versus those incorporating data from the validated subsample. In the second set of analyses, we evaluated the association between maternal weight gain during pregnancy and diagnosis of childhood asthma using logistic regression. Our electronic phenotype to identify a diagnosis of childhood asthma from the EHR was not very accurate (PPV 57\%). Hence, we observed a much larger difference between the odds ratios based on models using the unvalidated phase 1 data (log OR -0.54, SE 0.31) versus those incorporating data from the validated subsample (log OR 0.26, SE 0.74 using generalized raking with multiple imputation). In addition, covariates included in each model also demonstrated substantive changes in the strength of association with the outcomes between analyses that ignored and analyses that incorporated validation data.
The vast majority of EHR validation studies reported in the biomedical literature validate sub-optimal subsamples (most employ simple random sampling or case-control sampling) and do not incorporate validation data into final analyses, other than reporting estimates of PPV or similar measures of data quality. There are bias-variance trade-offs between naïve analyses of phase 1 data and analyses that carefully incorporate validation data, and in some cases the decreased precision of estimates using validation data may outweigh the increased bias of using unvalidated data. Though generally we can hope that errors in EHR data yield estimates with minimal bias, we cannot know whether this is the case until we validate the study data, examine the quality of the error-prone EHR variables, and directly calculate their impact on estimates. Poor data quality has been observed time and again to have a potentially substantial impact on study estimates \citep{floyd2012,giganti2019, giganti2020}. Hence, the size and choice of the validation sample and the analysis methods are critical and can greatly impact precision. Our multi-wave adaptive sampling design together with generalized raking is an effective approach that permits robust estimation, while avoiding the costs of full data validation.
We learned several lessons from our multi-wave validation study. First, adaptive sampling designs provide an important opportunity to recover from a poorly chosen first sampling wave. After completing all data validation, it may be of interest to see how far our sampling design was from the optimal allocation. Supplementary Table \ref{tableNA-obesity} contains the estimated optimal Neyman allocation for $n=750$ in the obesity frame based on the estimated influence function after completing all four sampling waves. We over-sampled from some of the smaller strata comprising children whose phase 1 data indicated that they developed obesity and had extreme maternal weight changes.
With that said, the only ``penalty'' for over-sampling these strata is the induced under-sampling from other strata, thereby reducing efficiency relative to the optimal design. However, because multi-wave sampling relies on estimates of unknown nuisance parameters, the optimal sampling strategy will never be exactly achieved in practice. In addition, some lack of efficiency in design can be recovered in the analysis approach. Generalized raking, in particular, has shown a remarkable ability to yield efficient estimates \citep{amorim2021, Chen&Lumley2021, OhetAl2021}.
We found that considerable time elapsed between receiving validation data from one wave and designing the next. Upon receiving validation data, we needed to perform data quality checks, de-identify data, re-run FPCA analyses, re-fit regression models to estimate influence functions, re-compute the Neyman allocation, and then meet as a team to discuss whether and how to divide strata. Keeping track of all of the interim datasets also became tedious. To help avoid coding errors, to speed up the multi-wave validation process, and to better track interim datasets, we have developed an R package, \texttt{optimall} (https://github.com/yangjasp/optimall), which performs Neyman allocation, allows easy splitting of strata, and keeps track of the various datasets in an efficient manner. This package also implements the method of \citet{wright2017}, which provides exact optimality for a fixed sample size. Neyman allocation yields the optimal allocation fraction, but it cannot be implemented exactly for a fixed sample size due to rounding. Hence, our wave 1 sample, which targeted 250 records, ended up including 252 records because of rounding. Although this type of imprecision does not result in a substantial loss of efficiency, the exact approach of \citet{wright2017}, which was implemented in later waves of our validation sampling, is preferable. A sketch of the allocation computation is given below.
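To make this step concrete, the following is a minimal sketch of textbook Neyman allocation with naive rounding; the stratum sizes and standard deviations are hypothetical, and this is not the \texttt{optimall} interface (an exact apportionment in the spirit of \citet{wright2017} would replace the rounding step).
\begin{verbatim}
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Textbook Neyman allocation: n_h proportional to N_h * S_h.

    N_h: stratum sizes; S_h: within-stratum standard deviations of the
    estimated influence function; n_total: validation budget.  Naive
    rounding may miss n_total by a few records, which is exactly the
    issue that an exact apportionment algorithm avoids.
    """
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    frac = N_h * S_h / np.sum(N_h * S_h)   # optimal sampling fractions
    return np.rint(frac * n_total).astype(int)

# Hypothetical strata, e.g., obesity status crossed with weight-gain extremes.
sizes = [120, 310, 95, 840, 2100, 650]
sds = [1.8, 1.1, 2.3, 0.7, 0.4, 0.9]
print(neyman_allocation(sizes, sds, n_total=250))
\end{verbatim}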
When there are two parameters of interest, it is not possible to design a validation study that is simultaneously optimal for both; one must compromise the optimality of one in the interest of improving estimation of the other. In our case, we were primarily interested in the association between maternal weight gain and childhood obesity, and we devoted three-fourths of our validation sample to optimizing estimation of this regression parameter; however, we were also interested in the association between maternal weight gain and childhood asthma, and we chose to sacrifice some precision for estimating the former in the interest of improving estimation of the latter. Rather than having two separate sampling frames and then combining results using a multi-frame approach, there are other ways that we could have attempted to improve the precision of both parameter estimates. For example, one strategy might have been to further divide our obesity-defined strata based on childhood asthma status after an interim sampling wave, select a sample to optimize estimation of the asthma odds ratio, and then in later sampling waves return to Neyman allocation for estimating the obesity parameter, correcting for any over-sampling due to targeting the asthma parameter.
Certainly there is room for additional research in this area. The literature on optimal designs (e.g., A-optimality, D-optimality, and E-optimality) provides other strategies that may be useful \citep{boyd2009}.
Our study has potential limitations in addition to those already mentioned. The phase 1 sample may be unrepresentative because it comprises only mother-child pairs that could be linked in the database; our data validation did not investigate whether some mother-child dyads were inappropriately excluded due to inaccurate linkage. In addition, there are many other challenges to using EHR data that go beyond what one can glean from data validation (e.g., non-random treatment assignment, sparse or erratic data capture, and poor follow-up). Future research using individual participant data could further explore the complex picture of child obesity development to inform targeted interventions; for example, additional analyses are planned to consider maternal weight gain during the last trimester and to study how the association between maternal gestational weight gain and obesity may differ based on a mother's BMI.
In conclusion, we applied innovative designs and analyses to address data quality issues across multiple variables in the EHR to efficiently estimate associations between a mother's weight gain during pregnancy and her child's risk of developing obesity and asthma. With the increased utilization of secondary-use data for biomedical research, sampling designs and analysis methods of this nature will be increasingly important.
\section{Introduction}\label{ss:Intro}
Let $X$ be an $m\times n$ matrix with entries being i.i.d. real ($\beta=1$), complex ($\beta=2$), or quaternionic ($\beta=4$) centered normal random variables with $\mathbb{E}(|X_{11}|^2) = \beta$.
Then we say that the $(m+n)\times(m+n)$ Hermitian matrix
\begin{equation}\label{eq:chiral}
{H} = \begin{pmatrix}
\textbf{0}_{m\times m} & X\\\
X^*& \textbf{0}_{n\times n}
\end{pmatrix}
\end{equation}
belongs to the chiral Gaussian orthogonal ($\beta=1$), unitary ($\beta=2$), symplectic ($\beta=4$) random matrix ensemble (chGOE, chGUE, chGSE, respectively).
In this paper we find explicitly the joint eigenvalue distribution of rank one Hermitian and non-Hermitian perturbations of chiral ensembles:
\begin{equation}\label{eq:chiralPert}
\widetilde{H}= \begin{pmatrix}
\Gamma & X\\\
X^* &\textbf{0}_{n\times n}
\end{pmatrix}.
\end{equation}
Here $\Gamma$ is an $m\times m$ matrix with $\operatorname{rank}\,\Gamma =1$ and either $\Gamma=\Gamma^*$ (Hermitian perturbation) or $\Gamma=-\Gamma^*$ (anti-Hermitian perturbation).
The matrix $\Gamma$ can be either deterministic or random but independent from $X$. We will also allow arbitrary $\beta>0$ different from $\beta=1,2,4$ (see Section~\ref{ss:Models} for details).
The main results are Theorems~\ref{th:Hermitian} and~\ref{th:nonHermitian} for Hermitian and non-Hermitian perturbations, respectively. We use methods developed in~\cite{KK,Koz17,Koz20}. Namely, first, we develop sparse (Jacobi) matrix models for chiral ensembles and their perturbations in the spirit of Dumitriu--Edelman~\cite{Dumede02} (see Section~\ref{ss:Models}). This allows us to use the theory of orthogonal polynomials and Jacobi matrices to compute a Jacobian of a certain change of variables (Section~\ref{ss:Jacobians}) which leads to the desired distribution (Sections~\ref{ss:Hermitian} and~\ref{ss:nonHermitian}).
By multiplying matrices in~\eqref{eq:chiral} and~\eqref{eq:chiralPert} by $i$, and letting $Y=iX$, $\Lambda=i\Gamma$, we can equivalently work with the chiral Gaussian anti-Hermitian model
$$\begin{pmatrix}
\textbf{0}_{m\times m} & Y\\\
-Y^* &\textbf{0}_{n\times n}
\end{pmatrix}
$$
and its Hermitian $\Lambda=\Lambda^*$ and anti-Hermitian $\Lambda=-\Lambda^*$ perturbations
\begin{equation}\label{eq:antiHermPert}
\begin{pmatrix}
\Lambda & Y\\\
-Y^* &\textbf{0}_{n\times n}
\end{pmatrix}.
\end{equation}
All the results in this paper can be trivially restated for this case: all the matrix models and eigenvalues simply get a factor of $i$. The benefit of this would be that the characteristic polynomial of~\eqref{eq:antiHermPert} in the case $\Lambda=\Lambda^*$ has real coefficients (instead of alternating between purely imaginary and purely real as in Section~\ref{ss:nonHermitian}), so its zeros belong to $\{z:\Re z<0\}$ and are symmetric with respect to $\mathbb{R}$.
Chiral random matrix theory has been an important instrument in quantum chromodynamics (QCD), going back to works \cite{va1,va0,va3}, see ~\cite{ake,Dam,va4,va2} for overviews, lecture notes, and further references.
There is a vast literature on low rank {\it non-Hermitian} perturbations of Hermitian random matrices, owing to its physical applications in quantum chaotic scattering. For an overview, physical applications, and references, we refer readers to the papers \cite{fyo16,fyosav15,fyosom03,nucl}. The exact eigenvalue distribution of low rank non-Hermitian perturbations of Gaussian and Laguerre $\beta$-ensembles was the topic of \cite{fyokho99,Koz17,Koz20,SokZel,StoSeb,Ull} in particular.
The low rank non-Hermitian perturbations of chiral ensembles that we study here do not seem to have been studied in the literature before. A different type of non-Hermitian perturbations (of full rank) have been studied recently in~\cite{kie}.
The literature on {\it Hermitian} perturbations of Gaussian and Laguerre random matrix ensembles is vast.
The additive model $H+\Gamma$ for perturbations of Gaussian random matrices $H$ bears the name Gaussian with an external source or shifted mean Gaussian ensemble, see~\cite{BleKuj,BreHik96,Pastur72,Zinn1,Zinn2} among many others.
The usual model for perturbations of Laguerre ensembles is $ (I+\Gamma)^{1/2} X^* X (I+\Gamma)^{1/2}$ with $\Gamma=\Gamma^*$ of low rank. This is typically referred to as the spiked Wishart ensembles, see, e.g., \cite{BBP,BGN1,DesFor06,Johnstone}.
Clearly this corresponds to
perturbation $X\mapsto X(I+\Gamma)^{1/2}$ and $X^*\mapsto (I+\Gamma)^{1/2} X^*$
in the chiral model~\eqref{eq:chiral}.
Another type of perturbation of Laguerre/Wishart ensembles actively studied in the literature is $(X+\Gamma)^* (X+\Gamma)$. This corresponds to $X\mapsto X+\Gamma$ and $X^*\mapsto (X+\Gamma)^*$
in~\eqref{eq:chiral}, which bears the name chiral Gaussian ensemble with a source, see e.g.~\cite{DesFor08,For13,FGS18,SWG9} and~\cite[Sect 11.2.2]{Forrester-book}.
We stress that eigenvalues of our Hermitian perturbed model~\eqref{eq:chiralPert}, however, do {\it not} correspond to a change of variables applied to eigenvalues of a simple perturbation of the Laguerre random matrix.
{\bf Acknowledgments}: Research of G. A. was supported by the Vergstiftelsen foundation.
\section{Jacobi matrix models}\label{ss:Models}
\subsection{Jacobification: case $m\le n$}\label{ss:chiralModel1}
As was shown by Dumitriu--Edelman~\cite{Dumede02}, $X$ can be bidiagonalized in the following sense: there are $m\times m$ and $n \times n$ unitary matrices $L$ and $R$ such that
\begin{equation}\label{BX}
B := L X R=\left( \begin{matrix}
x_1 & & & & & \multicolumn{3}{|c}{} \\
y_1 & x_2 & & & & \multicolumn{3}{|c}{} \\
& y_2 & \ddots & & & \multicolumn{3}{|c}{\mathbf{0}_{m\times (n-m)}}\\
& & \ddots &\ddots & & \multicolumn{3}{|c}{} \\
& & & y_{m-1} & x_m & \multicolumn{3}{|c}{}
\end{matrix}\right)
\end{equation}
with
\begin{equation}\label{eq:L}
L e_1= L^*e_1=e_1,
\end{equation}
where $e_1$ is the vector with $1$ in the first entry and $0$ in all others. Here the $x_j$'s and $y_j$'s are independent random variables with the distributions
\begin{align}
\label{eq:x's}
x_j
&\sim \chi_{\beta(n-j+1)}, \,\,\,\, 1\leq j\leq m,\\
\label{eq:y's}
y_j &\sim \chi_{\beta(m-j)}, \,\,\,\, 1\leq j\leq m-1,
\end{align}
where $\chi_\alpha$ stands for a chi-distributed random variable with parameter $\alpha>0$, given by the p.d.f.
$\tfrac{1}{2^{\alpha/2-1}\Gamma(\tfrac{\alpha}{2})} x^{\alpha-1} e^{-x^2/2}$ for $x > 0$.
Trivially,~\eqref{BX} implies $B^* = R^* X^* L^*$, which means that our chiral matrix $H$ from~\eqref{eq:chiral} can be unitarily reduced to
\begin{equation}\label{eq:chiralModelInterm}
\begin{pmatrix}
L & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m} & R^*
\end{pmatrix} H \begin{pmatrix}
L^* & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m} & R
\end{pmatrix}
=
\begin{pmatrix}
\textbf{0}_{m\times m} &B\\\
B^*& \textbf{0}_{n\times n}
\end{pmatrix}.
\end{equation}
The right-hand side is a sparse matrix with independent entries. However, we want a Jacobi (tridiagonal) form in order to employ the theory of orthogonal polynomials. To this end, we introduce the $(m+n)\times (m+n)$ permutation matrix $P$ corresponding to the permutation
\begin{equation}\label{eq:permut}
\begin{pmatrix}
1 & 2 & 3 & 4 & \cdots & 2m-1 & 2m & \multicolumn{1}{|c}{2m+1} & \cdots & m+n \\
1 & m+1 & 2 & m+2 & \cdots & m & 2m & \multicolumn{1}{|c}{2m+1} & \cdots & m+n
\end{pmatrix}.
\end{equation}
This produces
\begin {equation}\label{ddd}
P\begin{pmatrix}
\textbf{0}_{m\times m} &B\\\
B^*& \textbf{0}_{n\times n}
\end{pmatrix} P^*=
\left(
\begin{matrix}
0 & x_1 &&&&& \multicolumn{3}{|c}{}\\
x_1 & 0 & y_1 &&&& \multicolumn{3}{|c}{} \\
& y_1 & 0 & x_2 &&& \multicolumn{3}{|c}{ \multirow{2}{*}{$\mathbf{0}_{2m\times(n-m)} $} } \\
& & \ddots & \ddots & \ddots && \multicolumn{3}{|c}{} \\
&& & y_{m-1} & 0 & x_m & \multicolumn{3}{|c}{}\\
&& & & x_m &0 & \multicolumn{3}{|c}{} \\
\hline
\multirow{2}{*}{} & \multicolumn{4}{c}{\multirow{2}{*}{$\mathbf{0}_{(n-m)\times 2m}$}} &\multirow{2}{*}{} &
\multicolumn{3}{|c}{\multirow{2}{*}{$\mathbf{0}_{(n-m)\times (n-m)}$}}
\\
&&&&&& \multicolumn{3}{|c}{}
\\
\end{matrix}
\right) = : J.
\end{equation}
Observe also that
\begin{equation}\label{eq:P}
P e_1=P^*e_1=e_1, \quad P I_{1\times 1} P^*=I_{1\times 1},
\end{equation}
where $I_{1\times 1}$ is the diagonal matrix with 1 in $(1,1)$-entry and $0$ everywhere else. We will use these properties later in the text.
This ensemble appeared earlier in \cite{jac}; see also \cite{dum2}.
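As a sanity check of~\eqref{ddd}, the following minimal numerical sketch (an added illustration, assuming $\beta=2$ and NumPy conventions; it is not part of the argument) samples the bidiagonal entries and verifies that conjugating the chiral block matrix by the permutation~\eqref{eq:permut} produces the tridiagonal matrix $J$.
\begin{verbatim}
import numpy as np

m, n, beta = 3, 5, 2                      # any m <= n works for this check
rng = np.random.default_rng(0)

# x_j ~ chi_{beta(n-j+1)} and y_j ~ chi_{beta(m-j)}, cf. (eq:x's)-(eq:y's)
x = np.sqrt(rng.chisquare([beta * (n - j) for j in range(m)]))
y = np.sqrt(rng.chisquare([beta * (m - 1 - j) for j in range(m - 1)]))

# lower-bidiagonal B padded by an m x (n-m) zero block, cf. (BX)
B = np.zeros((m, n))
B[np.arange(m), np.arange(m)] = x
B[np.arange(1, m), np.arange(m - 1)] = y

G = np.block([[np.zeros((m, m)), B], [B.T, np.zeros((n, n))]])

# the permutation (eq:permut): 2j-1 -> j and 2j -> m+j (1-based)
perm = np.empty(m + n, dtype=int)
perm[0:2 * m:2] = np.arange(m)
perm[1:2 * m:2] = m + np.arange(m)
perm[2 * m:] = 2 * m + np.arange(n - m)
J = G[np.ix_(perm, perm)]                 # J = P G P^T, cf. (ddd)

# off-diagonal of J interleaves x and y: x_1, y_1, x_2, ..., x_m, then zeros
expected = np.ravel([x, np.append(y, 0.0)], order='F')[:2 * m - 1]
assert np.allclose(np.diag(J, 1)[:2 * m - 1], expected)
assert np.allclose(np.diag(J), 0.0)
\end{verbatim}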
\subsection{Jacobification: case $m\ge n+1$}\label{ss:chiralModel2}
The arguments of Dumitriu--Edelman also work for the case $m\ge n+1$ with the following modifications:~\eqref{BX} becomes
\begin{equation}\label{BX2}
B := L X R=\left( \begin{matrix}
x_1 & & & & \\
y_1 & x_2 & & & \\
& y_2 & \ddots & & \\
& & \ddots &\ddots & \\
& & & y_{n-1} & x_n \\
& & & & y_n \\
\hline
\multirow{2}{*}{} & \multirow{2}{*}{} &\multirow{2}{*}{$\textbf{0}_{(m-n-1)\times n}$} &\multirow{2}{*}{} & \multirow{2}{*}{} \\ \\
\end{matrix}\right);
\end{equation}
distributions of $x_j$'s and $y_j$'s are now
\begin{align}
\label{eq:x's2}
x_j
&\sim \chi_{\beta(n-j+1)}, \,\,\,\, 1\leq j\leq n,\\
\label{eq:y's2}
y_j
&\sim \chi_{\beta(m-j)}, \,\,\,\, 1\leq j\leq n;
\end{align}
equation~\eqref{eq:chiralModelInterm} remains unchanged; the permutation matrix $P$ in~\eqref{eq:permut} is now
\begin{equation}\label{eq:permut2}
\begin{pmatrix}
1 & 2 & 3 & 4 & \cdots & 2n-1 & 2n & \multicolumn{1}{|c}{2n+1} & \cdots & m+n \\
1 & m+1 & 2 & m+2 & \cdots & n & n+m & \multicolumn{1}{|c}{n+1} & \cdots & m
\end{pmatrix};
\end{equation}
finally,~\eqref{ddd} becomes
\begin {equation}\label{ddd2}
P\begin{pmatrix}
\textbf{0}_{m\times m} &B\\\
B^*& \textbf{0}_{n\times n}
\end{pmatrix} P^*=
\left(
\begin{matrix}
0 & x_1 &&&&& \multicolumn{3}{|c}{}\\
x_1 & 0 & y_1 &&&& \multicolumn{3}{|c}{} \\
& y_1 & 0 & x_2 &&& \multicolumn{3}{|c}{ \multirow{2}{*}{$\mathbf{0}_{(2n+1)\times(m-n-1)} $} } \\
& & \ddots & \ddots & \ddots && \multicolumn{3}{|c}{} \\
&& & x_{n} & 0 & y_n & \multicolumn{3}{|c}{}\\
&& & & y_n &0 & \multicolumn{3}{|c}{} \\
\hline
\multirow{2}{*}{} & \multicolumn{4}{c}{\multirow{2}{*}{$\mathbf{0}_{(m-n-1)\times (2n+1)}$}} &\multirow{2}{*}{} &
\multicolumn{3}{|c}{\multirow{2}{*}{$\mathbf{0}_{(m-n-1)\times (m-n-1)}$}}
\\
&&&&&& \multicolumn{3}{|c}{}
\\
\end{matrix}
\right) = : J.
\end{equation}
\subsection{Chiral Gaussian $\beta$-ensembles}\label{ss:chiral}
In the previous two subsections we have obtained that $H$ from~\eqref{eq:chiral} is unitarily equivalent to the Jacobi matrix $J$ in~\eqref{ddd} with~\eqref{eq:x's}--\eqref{eq:y's} ($m\le n$) or in~\eqref{ddd2} with~\eqref{eq:x's2}--\eqref{eq:y's2} ($m\ge n+1$).
It will occasionally be convenient to have a notation for the same Jacobi matrix but without the last zero block. So let us introduce the matrix $\mathcal{J}$ which is obtained by removing the last $n-m$ zero rows and columns of $J$ in~\eqref{ddd} ($m\le n$) or the last $m-n-1$ zero rows and columns in~\eqref{ddd2} ($m\ge n+1$). We obtain the $N\times N$ Jacobi matrix
\begin{equation}\label{generalJ}
\mathcal{J}:= \begin{pmatrix}
0 & a_1 \\
a_1 & 0 & a_2 \\
& a_2 & 0 & \ddots \\
& & \ddots & \ddots & a_{N-1}\\
&& & a_{N-1} & 0
\end{pmatrix},
\end{equation}
where $a_{2j-1} = x_j$ and $a_{2j}=y_j$, with either $N=2m$ and distributions~\eqref{eq:x's}--\eqref{eq:y's} (case $m\le n$) or $N=2n+1$ and distributions~\eqref{eq:x's2}--\eqref{eq:y's2} (case $m\ge n+1$).
We will say that $\mathcal{J}$ belongs to the \textbf{chiral Gaussian $\beta$-ensemble}, chG$\beta$E for short. This ensemble makes sense for arbitrary $\beta>0$, not just $\beta=1,2,4$.
\subsection{Rank one Hermitian perturbations}
Now we consider the perturbed model \eqref{eq:chiralPert} with Hermitian $\Gamma$. Since $\Gamma$ is Hermitian of rank $1$, we may assume without loss of generality that $\Gamma$ is positive semi-definite (otherwise replace $\Gamma$ with $-\Gamma$). Let
\begin{equation}\label{l}
l= \|\Gamma\|_{HS}:= \left(\sum_{j,k=1}^m |\Gamma_{jk}|^2\right)^{1/2}
\end{equation}
be the Hilbert--Schmidt norm of the perturbation.
\begin{proposition}\label{pr:Model1}
Let $\widetilde{H}$ be as in~\eqref{eq:chiralPert}. Assume that $\Gamma=\Gamma^*\ge {\normalfont \textbf{0}}_{m\times m}$ has $\operatorname{rank}\,\Gamma =1$ and $\|\Gamma\|_{HS} = l$. Further assume that $\Gamma$ has real, complex, quaternionic entries for $\beta=1,2,4$, respectively, that are either deterministic or random but independent from $X$. Then $\widetilde{H}$ is unitarily equivalent to
\begin{equation}\label{eq:Model1}
J+ l I_{1\times 1},
\end{equation}
where $J$ is~\eqref{ddd} or ~\eqref{ddd2}.
\end{proposition}
\begin{remarks}
1. We will consider~\eqref{eq:Model1} for general $\beta>0$ and view it as the rank one Hermitian perturbation of the chiral Gaussian $\beta$-ensemble from Subsection~\ref{ss:chiral}. In fact, we will remove the zero block and will be working with $\mathcal{J}+ l I_{1\times 1}$.
2. The trick in the proof of reducing the rank one perturbation to the $(1,1)$-entry, which carries over to the Jacobi matrix model, is well known: it has been used in~\cite{KK,Koz17,Koz20}, and even earlier by Bloemendal--Vir\'{a}g~\cite{BloVir} in their study of spiked Laguerre ensembles.
\end{remarks}
\begin{proof}
$\Gamma$ can be represented as $\Gamma= U (lI_{1\times 1}) U^*$ for some $m\times m$ matrix $U$ which is orthogonal, unitary, or unitary symplectic for $\beta=1,2,4$, respectively.
Then the matrix $\widetilde{H}$ (see~\eqref{eq:chiralPert}) satisfies
\begin {equation}\label{equi1}
\begin{pmatrix}
U^* & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m}& \textbf{1}_{n\times n}
\end{pmatrix}
\widetilde{H}
\begin{pmatrix}
U & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m} & \textbf{1}_{n\times n}
\end{pmatrix}=
\begin{pmatrix}
\textbf{0}_{m\times m} & U^*X\\\
(U^*X)^* & \textbf{0}_{n\times n}
\end{pmatrix}+lI_{1\times 1}.
\end{equation}
Here $\textbf{1}_{n\times n}$ stands for the $n\times n$ identity matrix. Now, note that $U$ is independent of $X$, so the joint distribution of the elements of $Y=U^*X$ is identical to the distribution of $X$ by Gaussianity.
Hence we can apply the arguments from Subsections~\ref{ss:chiralModel1}--\ref{ss:chiralModel2} with $Y$ in place of $X$ to arrive at
\begin {multline}\label{equi2}
P \begin{pmatrix}
L & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m} & R^*
\end{pmatrix}
\left(
\begin{pmatrix}
\textbf{0}_{m\times m} &Y\\\
Y^*& \textbf{0}_{n\times n}
\end{pmatrix}
+lI_{1\times 1}\right)
\begin{pmatrix}
L^* & \textbf{0}_{m\times n}\\\
\textbf{0}_{n\times m} & R
\end{pmatrix}
P^*
\\
=
P\begin{pmatrix}
\textbf{0}_{m\times m} &B\\\
B^*& \textbf{0}_{n\times n}
\end{pmatrix}
P^*
+l PI_{1\times 1}P^*
=
J+ l I_{1\times 1},
\end{multline}
where we have used~\eqref{eq:L} and~\eqref{eq:P}.
\end{proof}
\subsection{Rank one non-Hermitian perturbations}
In the exact same way, we can consider the perturbed model~\eqref{eq:chiralPert} with anti-Hermitian $\Gamma$.
\begin{proposition}\label{pr:Model2}
Let $\widetilde{H}$ be as in~\eqref{eq:chiralPert}. Assume $\Gamma=-\Gamma^*$, $(-i\Gamma)\ge {\normalfont \textbf{0}}_{m\times m}$, $\operatorname{rank}\,\Gamma =1$ and $\|\Gamma\|_{HS} = l$. Further assume that $i\Gamma$ has real, complex, quaternionic entries for $\beta=1,2,4$, respectively, that are either deterministic or random but independent from $X$. Then $\widetilde{H}$ is unitarily equivalent to
\begin{equation}\label{eq:Model2}
J+ i l I_{1\times 1},
\end{equation}
where $J$ is~\eqref{ddd} or ~\eqref{ddd2}.
\end{proposition}
\begin{proof}
Notice that $-i\Gamma$ is Hermitian positive semi-definite and of rank one, so $-i\Gamma= U (lI_{1\times 1}) U^*$ for some $m\times m$ matrix $U$ which is orthogonal, unitary, or unitary symplectic for $\beta=1,2,4$, respectively. The rest of the arguments go through without any changes.
\end{proof}
\subsection{Anti-bidiagonal models}
The matrix model $\mathcal{J}+l I_{1\times 1}$ (as well as $\mathcal{J}+il I_{1\times 1}$, of course) can also be represented in the so-called anti-bidiagonal form. To do so, we introduce another permutation matrix
$$
Q=
\begin{pmatrix}
1 & 2 & 3 & \cdots & N-2 & N-1 & N \\
N & N-2 & N-4& \cdots & N-5 & N- 3& N-1
\end{pmatrix}.
$$
Then
\begin{equation}\label{eq:antibidiagonal}
Q \left(\mathcal{J}+l I_{1\times 1}\right) Q^*
=
\begin{pmatrix}
0 & 0 & \cdots & 0 & a_{N-1} \\
0 & 0 & \cdots & a_{N-3} & a_{N-2} \\
\vdots & \vdots & & \vdots & \vdots \\
0 &a_{N-3} & \cdots & 0 & 0\\
a_{N-1} & a_{N-2}& \cdots& 0 & 0
\end{pmatrix}.
\end{equation}
This matrix has two anti-diagonals with the perturbation term $l$ being now ``in the middle'' at the position $(\lfloor \tfrac{N}{2} \rfloor +1,\lfloor \tfrac{N}{2} \rfloor +1)$.
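The following minimal numerical sketch (an added illustration, assuming the convention $(QMQ^*)_{jk}=M_{\tau(j)\tau(k)}$ for the permutation $\tau$ displayed above) verifies the anti-bidiagonal shape of~\eqref{eq:antibidiagonal} and the position of $l$.
\begin{verbatim}
import numpy as np

N, l = 7, 1.5
rng = np.random.default_rng(1)
a = rng.random(N - 1) + 0.1               # positive off-diagonal entries
A = np.diag(a, 1) + np.diag(a, -1)
A[0, 0] = l                               # the matrix J + l I_{1x1}

# permutation Q: 1 -> N, 2 -> N-2, ..., then back up to N-1 (1-based)
tau = np.array(list(range(N, 0, -2)) + list(range(1 + N % 2, N, 2))) - 1
QAQ = A[np.ix_(tau, tau)]                 # Q (J + l I_{1x1}) Q^T

F = QAQ[:, ::-1]                          # flip: anti-diagonals -> diagonals
assert np.allclose(np.triu(F, 1), 0) and np.allclose(np.tril(F, -2), 0)
mid = N // 2
assert np.isclose(QAQ[mid, mid], l)       # l sits ``in the middle''
\end{verbatim}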
\section{Location of the eigenvalues}
In the next two statements, we find all the possible configurations of eigenvalues for our perturbed Jacobi ensembles~\eqref{eq:Model1} and~\eqref{eq:Model2}. Even more is true: every possible configuration of eigenvalues occurs exactly once.
\begin{proposition}\label{bijection2}
Let $N>1$. Then there is a one-to-one correspondence between $N$ points $z_1, z_2,\ldots, z_{N}$ with $z_1>-z_2>z_3>\cdots>(-1)^{N-1} z_{N}$ and the matrices $\mathcal{J}+lI_{1\times 1}$ where $\mathcal{J}$ is of the form \eqref{generalJ} and $a_1,\ldots, a_{N-1},l >0$.
\end{proposition}
\begin{proof}
This was shown by Holtz \cite[Corollary 2]{Holtz} who classified eigenvalues of matrices~\eqref{eq:antibidiagonal}.
\end{proof}
\begin{proposition}\label{bijection}
Let $N>1$. Then there is a one-to-one correspondence between $N$ points $z_1, z_2,\ldots, z_{N}$ in $\mathbb C_+:=\{z\in \mathbb C: \Im z>0 \}$ (counting multiplicity) that are symmetric with respect to the imaginary axis and the matrices $\mathcal{J}+ilI_{1\times 1}$ where $\mathcal{J}$ is of the form \eqref{generalJ}
and $a_1,\ldots, a_{N-1},l >0$.
\end{proposition}
\begin{proof}
Let $z_1,\ldots, z_{N}$ be $N$ points in $\mathbb C_+$ that are symmetric with respect to the imaginary axis. By the results of Arlinski\u{\i}--Tsekanovski\u{\i}~\cite[Theorem~5.1, Corollary~6.5]{ArlTse06}, there is a $\mathcal{J}$ of the form \eqref{generalJ} and $l>0$ such that $z_1,\ldots, z_{N}$ are the eigenvalues of $\mathcal{J}+ilI_{1\times 1}$.
Conversely, let $\mathcal{J}$ be a Jacobi matrix as in \eqref{generalJ} and $l>0$. By~\cite[Prop~4.1]{ArlTse06} eigenvalues of $\mathcal{J}+ilI_{1\times 1}$ belong to $\mathbb C_+$. Since $-(\mathcal{J}+ilI_{1\times 1})^*=W(\mathcal{J}+ilI_{1\times 1})W^*$, where $W$ is the diagonal unitary matrix with diagonal $\{1,-1,1,-1,\ldots\}$, we obtain the symmetry of the eigenvalues with respect to the imaginary axis.
\end{proof}
\section{Spectral measures of chiral Gaussian $\beta$-ensembles}
Given a $k\times k$ Hermitian matrix $H$, define its spectral measure with respect to $e_1$ to be the probability measure $\mu$ satisfying
\begin{equation}\label{eq:spmeas}
\langle e_1, H^j e_1 \rangle = \int_\mathbb{R} x^j d\mu(x), \quad \mbox{ for all } j\in \mathbb Z_{\ge 0}.
\end{equation}
We will refer to it as simply ``the spectral measure'' from now on. By diagonalizing $H$ and assuming $e_1$ is cyclic, one can see that
$$
\mu = \sum_{j=1}^k w_j \delta_{\lambda_j}
$$
with $\sum_{j=1}^k w_j = 1$ and $w_j>0$. Here $\{\lambda_j\}_{j=1}^k$ are the eigenvalues of $H$ (which are distinct by cyclicity), and $w_j = |\langle v_j,e_1\rangle|^2$, where $v_j$ is the corresponding eigenvector.
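For illustration (an added numerical sketch, not needed for the sequel), the weights $w_j$ and the moment identity~\eqref{eq:spmeas} can be checked directly for a generic symmetric matrix.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
k = 5
H = rng.standard_normal((k, k)); H = (H + H.T) / 2   # generic symmetric H
lam, V = np.linalg.eigh(H)                           # columns are eigenvectors
w = V[0, :] ** 2                                     # w_j = |<v_j, e_1>|^2
assert np.isclose(w.sum(), 1.0)

# moment identity <e_1, H^3 e_1> = sum_j w_j lambda_j^3
lhs = np.linalg.matrix_power(H, 3)[0, 0]
assert np.isclose(lhs, np.sum(w * lam ** 3))
\end{verbatim}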
Now let us assume that $H$ is from the chGOE, chGUE, or chGSE.
As we showed in Subsections~\ref{ss:chiralModel1} and~\ref{ss:chiralModel2}, $H$ and $J$ are unitarily equivalent: $H=UJU^*$. Moreover, $Ue_1=U^*e_1=e_1$ implies that they have identical spectral measures. Finally, the spectral measures of $J$ and $\mathcal{J}$ coincide, which can be trivially seen from~\eqref{eq:spmeas}. In the next theorem, we compute this common spectral measure. The result holds for any $\beta>0$.
\begin{theorem}\label{thmmm}
For $\beta>0$ let $\mathcal{J}$ belong to chG$\beta$E
$($see Subsection~\ref{ss:chiral}$)$. Let $a=|n-m|+1-2/\beta$.
\begin{enumerate}[$(i)$]
\item If $m\le n$ $($that is, $\mathcal{J}$ is $2m\times 2m$$)$, then with probability $1$ the spectral measure of $\mathcal{J}$ is:
\begin{equation}\label{eq:spMeasDiscrete}
\mu=\sum_{j=1}^m \tfrac12 w_j (\delta_{\lambda_j} + \delta_{-\lambda_j})
\end{equation}
with the joint distribution of $\lambda_1,\ldots, \lambda_m$, $w_1,\ldots ,w_{m-1}$ given by
\begin{align}
\frac{2^m}{h_{\beta,m,a}}&\prod_{j=1}^m\lambda_j^{\beta a+1}e^{-\lambda_j^2/2}\prod_{1\leq j<k\leq m}|\lambda_k^2-\lambda_j^2|^\beta d\lambda_1\dots d\lambda_m \\
&\times \frac{\Gamma(\beta m/2)}{\Gamma(\beta/2)^m}\prod_{j=1}^m w_j^{\beta/2-1} d w_1\dots d w_{m-1}.
\end{align}
\item If $m \ge n+1$ $($that is, $\mathcal{J}$ is $(2n+1)\times(2n+1)$$)$, then with probability $1$ the spectral measure of $\mathcal{J}$ is:
\begin{equation}\label{eq:spMeasDiscrete2}
\mu=w_0\delta_{0}+\sum_{j=1}^n \tfrac12 w_j (\delta_{\lambda_j} + \delta_{-\lambda_j})
\end{equation}
with the joint distribution of $\lambda_1,\ldots, \lambda_n$, $w_1,\ldots ,w_n$ given by
\begin{align}
\frac{2^n}{h_{\beta,n,a}}&\prod_{j=1}^n\lambda_j^{\beta a+1}e^{-\lambda_j^2/2}\prod_{1\leq j<k\leq n}|\lambda_k^2-\lambda_j^2|^\beta d\lambda_1\dots d\lambda_n \\
&\times \frac{\Gamma(\beta m/2)}{\Gamma(\beta/2)^n \Gamma{(\beta(m-n)/2)}}
w_0^{\beta(m-n)/2-1}
\prod_{j=1}^n w_j^{\beta/2-1}d w_1\dots d w_{n}.
\end{align}
\end{enumerate}
Here the normalization constant is
\begin{equation}
h_{\beta,s,a}=2^{s(a\beta/2+1+(s-1)\beta/2)}\prod_{j=1}^s\frac{\Gamma(1+\beta j/2)\Gamma{(1+\beta a/2+\beta(j-1)/2)}}{\Gamma{(1+\beta/2)}}.
\end{equation}
\end{theorem}
\begin{proof}
Jacobi matrices~\eqref{generalJ} with non-zero $a_j$'s have simple spectrum. From this and symmetry, we then get that for $m\leq n$, $\mathcal{J}$ has $m$ distinct positive eigenvalues $\lambda_1,\ldots, \lambda_m$ and $m$ distinct negative eigenvalues $-\lambda_1,\ldots, -\lambda_m$, so the spectral measure of $\mathcal{J}$ has form~\eqref{eq:spMeasDiscrete}.
Similarly, if $m\geq n+1$ then $\mathcal{J}$ has $n$ distinct positive eigenvalues $\lambda_1,\ldots, \lambda_n$, $n$ distinct negative eigenvalues $-\lambda_1,\ldots, -\lambda_n$ and a simple eigenvalue at $\lambda_0:=0$. Consequently, the spectral measure of $\mathcal{J}$ has form~\eqref{eq:spMeasDiscrete2}.
Notice that the matrix
\begin{equation}\label{eq:chiralWithB}
G=\begin{pmatrix}
\textbf{0}_{m\times m} &B\\\
B^*& \textbf{0}_{n\times n}
\end{pmatrix}
\end{equation}
is unitarily equivalent to $J$: see~\eqref{ddd} and~\eqref{ddd2}. Moreover, because of~\eqref{eq:P}, $G$ has the same spectral measure as $J$ and $\mathcal{J}$.
For $k\neq 0$, we can write a normalized eigenvector of $G$ corresponding to $\lambda_k$ in the form
\begin{equation}
\begin{pmatrix}
u^{(k)}\\
v^{(k)}
\end{pmatrix}
\end{equation}
so that
\begin{align}
BB^*u^{(k)}&=\lambda_k^2 u^{(k)},\\
B^*Bv^{(k)}&=\lambda_k^2 v^{(k)}
\end{align}
are satisfied. Note that
\begin{equation}
\begin{pmatrix}
u^{(k)}\\
-v^{(k)}
\end{pmatrix}
\end{equation}
is a normalized eigenvector of $G$ associated with $-\lambda_k$. By orthonormality of the eigenvectors we have
\begin{align}
& \|u^{(k)}\|^2+\|v^{(k)}\|^2=1,\\
& \|u^{(k)}\|^2-\|v^{(k)}\|^2=0,
\end{align}
and thus $\|u^{(k)}\|^2=1/2$. Recall from~\eqref{eq:spMeasDiscrete} and~\eqref{eq:spMeasDiscrete2} that the eigenweight at $\lambda_k$ is $w_k/2$ (for $k\neq 0$). Then $w_k=2 |\langle u^{(k)},e_1\rangle|^2$.
For $k>0$, let
\begin{equation}\label{eig1}
\lambda_k^\prime:=\lambda_k^2
\end{equation}
and $w_k^\prime$ be the eigenweight for $BB^*$ at $\lambda_k^\prime$. Then $\sqrt{2}u^{(k)}$ is a normalized eigenvector for $BB^*$ corresponding to $\lambda_k^\prime$. Thus,
\begin{equation}\label{eig2}
w_k^\prime=2 |\langle u^{(k)},e_1\rangle|^2=w_k.
\end{equation}
For the case (ii) we also have $w'_0=1-\sum_{k=1}^n w'_k=1-\sum_{k=1}^n w_k= w_0$.
Finally, recall that the joint distribution of $\{\lambda_k^\prime\}$ and $\{w_k^\prime\}$ of $BB^*$ and of the $\beta$-Laguerre random matrix coincide (\cite{Dumede02}, \cite[Lemma 4]{Koz17}, \cite[Proposition 1]{Koz17}).
Using \eqref{eig1}, \eqref{eig2} we can therefore write the joint distribution of the $\lambda_k$'s and $w_k$'s.
\end{proof}
\section{Jacobians}\label{ss:Jacobians}
We fix $l>0$ and for $\mathcal{J}$ as in~\eqref{generalJ} let
\begin{align}\label{eq:Jl}
\mathcal{J}_l & = \mathcal{J}+lI_{1\times 1}, \\
\label{eq:Jil}
\mathcal{J}_{il} & = \mathcal{J}+ilI_{1\times 1}.
\end{align}
In this section, we compute the Jacobian(s) of the change of variables from the spectral parameters (that is, the $\lambda_j$'s and $w_j$'s) to the Maclaurin coefficients $\kappa_j$ of the characteristic polynomial $\kappa(z)$ of $\mathcal{J}_l$ or $\mathcal{J}_{il}$.
\begin{theorem}\label{eigd2}
Let $l>0$.
\begin{enumerate}[$(i)$]
\item Let $\mathcal{J}$ be a $2m\times 2m$ Jacobi matrix of the form \eqref{generalJ} with $a_1,\ldots,a_{2m-1}>0$ and $m>0$. Denote $\mu$ to be its spectral measure~\eqref{eq:spMeasDiscrete}.
Let
\begin{equation}\label{det11}
\det (z-\mathcal{J}_l) = \sum_{j=0}^{2m}\kappa_j z^j.
\end{equation}
Then
\begin{equation}\label{dist111}
\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2m-2})}}{\partial{(\lambda_1,\ldots, \lambda_{m},w_1, \ldots, w_{m-1})}}} \right\rvert=2^m l^{m-1}\prod_{j=1}^m \lambda_j \prod_{1\leq j<k\leq m} |\lambda_j^2-\lambda_k^2|^2.
\end{equation}
\item
Let $\mathcal{J}$ be a $(2n+1)\times (2n+1)$ Jacobi matrix of the form \eqref{generalJ} with $a_1,\ldots,a_{2n}>0$ and $n>0$. Denote $\mu$ to be its spectral measure~\eqref{eq:spMeasDiscrete2}.
Let
\begin{equation}\label{det133}
\det (z-\mathcal{J}_l) = \sum_{j=0}^{2n+1}\kappa_j z^j.
\end{equation}
Then
\begin{equation}\label{dist23}
\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2n-1})}}{\partial{(\lambda_1,\ldots, \lambda_{n},w_1, \ldots, w_{n})}}} \right\rvert=2^n l^{n}\prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n} |\lambda_j^2-\lambda_k^2|^2.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[$(i)$]
\item Note that $\kappa_{2m}=1$ and $\kappa_{2m-1}=-l$ are fixed constants here.
Let $\textbf{m}(z)=\langle e_1, (\mathcal{J}-z)^{-1} e_1 \rangle $. Then
\begin{equation}\label{mfun}
\textbf{m}(z)=\sum_{j=1}^m \frac{w_j}{2}\left(\frac{1}{\lambda_j-z}+\frac{1}{-\lambda_j-z}\right)=z\sum_{j=1}^m\frac{w_j}{\lambda_j^2-z^2}.
\end{equation}
First, we observe that
\begin{align}\label{det15}
\sum_{j=0}^{2m}\kappa_j z^j&=\det (z-\mathcal{J}_l)\\
&= \det(z-\mathcal{J}) \det (I-(z-\mathcal{J})^{-1} l I_{1\times 1})\\
&=(1+l \textbf{m}(z)) \prod_{j=1}^{m} (z^2-\lambda_j^2) \label{longl1}
\end{align}
and
\begin{equation}\label{sss}
l\textbf{m}(z)\prod_{j=1}^m(z^2-\lambda_j^2)=-lz\sum_{j=1}^m w_j \prod_{\substack{1\leq k \leq m\\
k\neq j}} (z^2-\lambda_k^2).
\end{equation}
Let
\begin{align}\label{cjdj}
&c_j= \kappa_{2j}, \quad j=0,\ldots,m,\\
&d_j= \kappa_{2j+1}, \quad j=0,\ldots,m-1,
\end{align}
where $c_{m}=1$, $d_{m-1}=-l$.
Letting $u=z^2$ and $\lambda_j^\prime=\lambda_j^2$ we get from
\eqref{longl1} and \eqref{sss} that
\begin{align}
&\sum_{j=0}^{m} c_j u^{j}= \prod_{j=1}^{m} (u-\lambda'_j),\label{cjdj22}\\
&\sum_{j=0}^{m-1} d_j u^{j}= -l\sum_{j=1}^m w_j \prod_{\substack{1\leq k \leq m \label{cjdj222}\\
k\neq j}} (u-\lambda'_k).
\end{align}
From~\eqref{cjdj22} we get
\begin{equation}\label{c0}
\left\lvert \det{\frac{\partial{(c_0,\ldots, c_{m-1})}}{\partial{(\lambda_1^\prime,\ldots, \lambda_{m}^\prime)}}}\right\rvert=\prod_{1\leq j <k\leq m}|\lambda_j^\prime-\lambda_k^\prime|= \prod_{1\leq j <k\leq m}|\lambda_j^2-\lambda_k^2|.
\end{equation}
Since
\begin{equation}
\left\lvert \det{\frac{\partial{(\lambda_1^\prime,\ldots, \lambda_{m}^\prime)}}{\partial{(\lambda_1,\ldots, \lambda_{m})}}}\right\rvert=2^m \prod_{j=1}^m \lambda_j,
\end{equation}
\eqref{c0} yields
\begin{equation}\label{c00}
\left\lvert \det{\frac{\partial{(c_0,\ldots, c_{m-1})}}{\partial{(\lambda_1,\ldots, \lambda_{m})}}}\right\rvert=2^m \prod_{j=1}^m \lambda_j \prod_{1\leq j <k\leq m}|\lambda_j^2-\lambda_k^2|.
\end{equation}
By \eqref{cjdj22},
\begin{equation}\label{zero}
\frac{\partial{(c_0,\ldots, c_{m-1})}}{\partial{(w_1,\ldots, w_{m-1})}}=\begin{pmatrix}\mathbf{0}_{m\times (m-1)} \end{pmatrix}.
\end{equation}
Now we consider~\eqref{cjdj222}.
In view of \cite[eq.(5.9), eq.(5.14)]{Koz17}, \eqref{cjdj222} implies that
\begin{align}
\left\lvert \det{\frac{\partial{(d_0,\ldots, d_{m-2})}}{\partial{(w_1,\ldots, w_{m-1})}}}\right\rvert&=l^{m-1} \prod_{1\leq j<k\leq m}|\lambda_j^\prime-\lambda_k^\prime|\\
&=l^{m-1} \prod_{1\leq j<k\leq m}|\lambda_j^2-\lambda_k^2|.\label{add}
\end{align}
Combining \eqref{c00}, \eqref{zero}, \eqref{add} we get
\begin{align}
\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2m-2})}}{\partial{(\lambda_1,\ldots, \lambda_{m}, w_1,\ldots, w_{m-1})}}}\right\rvert&=\left\lvert \det{\frac{\partial{(c_0,\ldots, c_{m-1}, d_0, \ldots, d_{m-2})}}{\partial{(\lambda_1,\ldots, \lambda_{m}, w_1,\ldots, w_{m-1})}}}\right\rvert\\
&=2^m l^{m-1} \prod_{j=1}^m \lambda_j \prod_{1\leq j<k\leq m}|\lambda_j^2-\lambda_k^2|^2.\label{resu1}
\end{align}
\item Note that $\kappa_{2n+1}=1$ and $\kappa_{2n}=-l$ are constants. We again start with $\textbf{m}(z)=\langle e_1, (\mathcal{J}-z)^{-1} e_1 \rangle $ which becomes
\begin{equation}\label{mfun1}
\textbf{m}(z)=\sum_{j=1}^n \frac{w_j}{2}\left(\frac{1}{\lambda_j-z}+\frac{1}{-\lambda_j-z}\right)-\frac{w_0}{z}=z\sum_{j=1}^n\frac{w_j}{\lambda_j^2-z^2}-\frac{w_0}{z}.
\end{equation}
Now,
\begin{align}\label{det14}
\sum_{j=0}^{2n+1}\kappa_j z^j&=\det (z-\mathcal{J}_l)\\
&= \det(z-\mathcal{J}) \det (I-(z-\mathcal{J})^{-1} l I_{1\times 1})\\
&=(1+l\textbf{m}(z)) z\prod_{j=1}^{n} (z^2-\lambda_j^2) \label{longl12}
\end{align}
and
\begin{align}\label{sss2}
lz\textbf{m}(z)\prod_{j=1}^n(z^2-\lambda_j^2)=-l\sum_{j=0}^n w_j \prod_{\substack{0\leq k \leq n\\
k\neq j}} (z^2-\lambda_k^2).
\end{align}
Define
\begin{align}\label{cjdj3}
c_j&= \kappa_{2j+1},\,\,\,\,\, j=0,\ldots,n,\\
d_j&= \kappa_{2j},\,\,\,\,\, j=0,\ldots,n,
\end{align}
with $c_n=1$, $d_n=-l$.
Taking $u=z^2$ and $\lambda_j^\prime=\lambda_j^2$ we get from \eqref{longl12} and \eqref{sss2} that
\begin{align}
\label{c22}
&\sum_{j=0}^{n} c_j u^{j}= \prod_{j=1}^{n} (u-\lambda'_j), \\
\label{eq:dpoly}
&\sum_{j=0}^{n} d_j u^{j}= -l\sum_{j=0}^n w_j \prod_{\substack{0\leq k \leq n\\
k\neq j}} (u-\lambda'_k).
\end{align}
Using~\eqref{c22} we get
\begin{equation}\label{c1}
\left\lvert \det{\frac{\partial{(c_0,\ldots, c_{n-1})}}{\partial{(\lambda_1,\ldots, \lambda_{n})}}}\right\rvert=2^n\prod_{j=1}^n\lambda_j \prod_{1\leq j <k\leq n}|\lambda_j^2-\lambda_k^2|.
\end{equation}
Using~\eqref{eq:dpoly},
\begin{align}
\left\lvert \det{\frac{\partial{(d_0,\ldots, d_{n-1})}}{\partial{(w_1,\ldots, w_{n})}}}\right\rvert&=l^{n} \prod_{0\leq j<k\leq n}|\lambda_j^\prime-\lambda_k^\prime|\\
&=l^{n} \prod_{j=1}^n \lambda_j^2 \prod_{1\leq j<k\leq n}|\lambda_j^2-\lambda_k^2|.\label{add2}
\end{align}
Combining \eqref{c1}, \eqref{add2} we get
\begin{align}
\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2n-1})}}{\partial{(\lambda_1,\ldots, \lambda_{n}, w_1,\ldots, w_{n})}}}\right\rvert&=\left\lvert \det{\frac{\partial{(c_0,\ldots, c_{n-1}, d_0, \ldots, d_{n-1})}}{\partial{(\lambda_1,\ldots, \lambda_{n}, w_1,\ldots, w_{n})}}}\right\rvert\\
&=2^n l^{n} \prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n}|\lambda_j^2-\lambda_k^2|^2.\label{trtr11}
\end{align}
\end{enumerate}
\end{proof}
Notice that in the case~\eqref{eq:Jl} the coefficients of $\kappa$ were real, while in the case~\eqref{eq:Jil} they are real or purely imaginary.
Indeed, for a monic polynomial $\kappa(z)=\sum_{j=0}^k \kappa_j z^j$ of degree $k$ whose zeros are symmetric with respect to the imaginary axis,
$$
Q(z)=i^{k}\kappa(z/i)
$$
is a monic polynomial with real coefficients. This means
$\kappa(z)=Q(iz)i^{-k}$, and therefore
$\Im \kappa_{k-2}=\Im \kappa_{k-4}=\dots=0$ and $\Re \kappa_{k-1}=\Re \kappa_{k-3}=\dots =0$.
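This alternating pattern can be checked numerically (an added illustration; \texttt{np.poly} returns the coefficients of the monic characteristic polynomial from the highest degree down).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, l = 6, 0.7
a = rng.random(N - 1) + 0.1
Jil = (np.diag(a, 1) + np.diag(a, -1)).astype(complex)
Jil[0, 0] = 1j * l                        # the matrix J + i l I_{1x1}

kappa = np.poly(Jil)                      # kappa[0] = kappa_k = 1, then kappa_{k-1}, ...
assert np.allclose(kappa[0::2].imag, 0)   # kappa_k, kappa_{k-2}, ... are real
assert np.allclose(kappa[1::2].real, 0)   # kappa_{k-1}, kappa_{k-3}, ... purely imaginary
\end{verbatim}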
\begin{theorem}\label{eigd}
Let $l>0$.
\begin{enumerate}[$(i)$]
\item Let $\mathcal{J}$ be a $2m\times 2m$ Jacobi matrix of the form \eqref{generalJ} with $a_1,\ldots,a_{2m-1}>0$ and $m>0$. Denote $\mu$ to be its spectral measure~\eqref{eq:spMeasDiscrete}.
Let
\begin{equation}\label{det1}
\det (z-\mathcal{J}_{il}) = \sum_{j=0}^{2m}\kappa_j z^j.
\end{equation}
Then
\begin{multline}\label{dist1}
\left\lvert \det{\frac{\partial{(\Re\kappa_0,\Im\kappa_1,\ldots,\Re\kappa_{2m-4},\Im\kappa_{2m-3},\Re\kappa_{2m-2})}}{\partial{(\lambda_1,\ldots, \lambda_{m},w_1, \ldots, w_{m-1})}}} \right\rvert
\\ =2^m l^{m-1}\prod_{j=1}^m \lambda_j \prod_{1\leq j<k\leq m} |\lambda_j^2-\lambda_k^2|^2.
\end{multline}
\item
Let $\mathcal{J}$ be a $(2n+1)\times (2n+1)$ Jacobi matrix of the form \eqref{generalJ} with $a_1,\ldots,a_{2n}>0$ and $n>0$. Denote $\mu$ to be its spectral measure~\eqref{eq:spMeasDiscrete2}.
Let
\begin{equation}\label{det13}
\sum_{j=0}^{2n+1}\kappa_j z^j=\det (z-\mathcal{J}_{il}).
\end{equation}
Then
\begin{equation}\label{dist2}
\left\lvert \det{\frac{\partial{(\Im\kappa_0,\Re\kappa_1,\ldots, \Im\kappa_{2n-2},\Re\kappa_{2n-1})}}{\partial{(\lambda_1,\ldots, \lambda_{n},w_1, \ldots, w_{n})}}} \right\rvert=2^n l^{n}\prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n} |\lambda_j^2-\lambda_k^2|^2.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The only difference from the setting in the previous theorem is that $l$ gets an extra factor of $i$, and the same happens with the coefficients $\kappa_{2j-1}$'s in (i) or $\kappa_{2j}$'s in (ii). The modulus of the Jacobian in~\eqref{dist1} and~\eqref{dist2} is therefore the same as in~\eqref{dist111} and~\eqref{dist23}, respectively.
\end{proof}
\section{Eigenvalues for rank one Hermitian perturbations}\label{ss:Hermitian}
\begin{theorem}\label{th:Hermitian}
Let $\mathcal{J}$ belong to chG$\beta$E $($see Section~\ref{ss:chiral}$)$, $l>0$, $a=|n-m|+1-2/\beta$, and
\begin{equation}\label{eq:Jl2}
\mathcal{J}_l:=\mathcal{J}+lI_{1\times 1}.
\end{equation}
\begin{enumerate}[$(i)$]
\item Let $m\le n$. The eigenvalues of $\mathcal{J}_l$ are distributed on
\begin{align}
\left\{(z_j)_{j=1}^{2m}: \sum_{j=1}^{2m} z_j=l,\,\,\, \,\, z_1> -z_2>z_3>\cdots >z_{2m-1}>-z_{2m} >0 \right\}
\end{align}
according to
\begin{align}
\frac{1}{Z_{\beta,m,a}} \, \,l^{1-\frac{m\beta}2}e^{l^2/4} \, {\prod_{j=1}^{2m}}|z_j|^{\frac{2\beta a -\beta+2}{4}}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2m}|z_j-z_k|
\prod_{j,k=1}^{2m} |z_j+z_k|^{\frac{\beta-2}{4}}
\prod_{j=1}^{2m-1} dz_j.
\label{real1}
\end{align}
Here
\begin{equation}\label{eq:Z}
Z_{\beta,m,a} = \frac{2^{m(\beta-2)/2} \,h_{\beta,m,a} \,[\Gamma(\beta/2)]^m}{m! \Gamma(\beta m /2)}.
\end{equation}
\item Let $m\ge n+1$. The eigenvalues of $\mathcal{J}_l$ are distributed on
\begin{align}
\left\{(z_j)_{j=1}^{2n+1}: \sum_{j=1}^{2n+1} z_j=l,\,\,\, \,\,z_1> -z_2>z_3>\cdots >-z_{2n}>z_{2n+1} >0 \right\}
\end{align}
according to
\begin{align}
\frac{l^{1-\frac{m\beta}2}e^{l^2/4}}{W_{\beta,m,n,a}} \, \prod_{j=1}^{2n+1}|z_j|^{\frac{2\beta m -2\beta n-\beta-2}{4}}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2n+1}|z_j-z_k| \prod_{j,k=1}^{2n+1} |{z_j}+z_k|^{\frac{\beta-2}{4}} \prod_{j=1}^{2n} dz_j. \label{real2}
\end{align}
\end{enumerate}
Here
\begin{equation}\label{eq:W}
W_{\beta,m,n,a}=\frac{2^{\frac{(2n+1)(\beta-2)}{4}} \,h_{\beta,n,a} \,[\Gamma(\beta/2)]^n\,\Gamma(\beta(m-n)/2) }{n! \Gamma(\beta m /2) }.
\end{equation}
\end{theorem}
\begin{remarks}
1. As a corollary, eigenvalues of Hermitian perturbations of chGOE, chGUE, chGSE (see Proposition~\ref{pr:Model1}) are given by~\eqref{real1} together with $z=0$ of algebraic multiplicity $n-m$ (for the case $m\le n$), and by~\eqref{real2} together with $z=0$ of algebraic multiplicity $m-n-1$ (for the case $m\ge n+1$).
2. See the end of this section for the case when $l$ is not deterministic but random.
\end{remarks}
\begin{proof}
\begin{enumerate}[$(i)$]
\item Let $\sum_{j=0}^{2m} \kappa_j z^j= \det(z-\mathcal{J}_l).$ Then
\begin{align}
\prod_{1\leq j<k\leq 2m}|z_j-z_k|&= \left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2m-1})}}{\partial{(z_1,\ldots, z_{2m})}}}\right\rvert \label{a1}\\
&= \left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2m-1})}}{\partial{(z_1,\ldots,z_{2m-1}, \kappa_{2m-1})}}}\right\rvert \label{a2}\\
&=\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2m-2})}}{\partial{(z_1,\ldots, z_{2m-1})}}}\right\rvert\label{a3}.
\end{align}
The equality \eqref{a1} is well known, \eqref{a2} follows from $\sum_{j=1}^{2m}z_j=-\kappa_{2m-1}$,
and \eqref{a3} follows by removing the last row and column from the determinant in~\eqref{a2}.
Combining part $(i)$ of Theorem \ref{thmmm}, \eqref{a3}, and \eqref{dist111}, we get the density with respect to $dz_1\cdots dz_{2m-1}$:
\begin{align}
m! \frac{l^{1-m}\displaystyle\prod_{1\leq j<k\leq 2m}|z_j-z_k|}{h_{\beta,m,a}\displaystyle\prod_{1\leq j<k\leq m}|\lambda_j^2-\lambda_k^2|^2}&\prod_{j=1}^m\lambda_j^{\beta a}e^{-\lambda_j^2/2}\prod_{1\leq j<k\leq m}|\lambda_k^2-\lambda_j^2|^\beta \nonumber \\
&\times \Gamma(\beta m/2)\prod_{j=1}^m \frac{w_j^{\beta/2-1}}{\Gamma(\beta/2)}\label{bigdist22}.
\end{align}
Notice the extra factor of $m!$ that comes from the fact that the $\lambda_j$'s were not ordered while the $z_j$'s are.
It follows from \eqref{det15}, \eqref{cjdj22} that
\begin{equation}\label{yt1}
\sum_{j=1}^m \lambda_j^2=-c_{m-1}=-\kappa_{2m-2}=-\sum_{1\leq i<j\leq 2m}z_i z_j.
\end{equation}
Since $\sum_{j=1}^{2m} z_j=l$, we have
\begin{align}
l^2&= \sum_{j=1}^{2m} z_j^2+2\sum_{{1\leq i<j\leq 2m}}z_i z_j \label{yt2}\\
&=\sum_{j=1}^{2m} z_j^2-2\sum_{j=1}^{m} \lambda_j^2.\label{yt3}
\end{align}
Thus
\begin{equation}\label{pp1}
\sum_{j=1}^{m} \lambda_j^2=\frac{-l^2+\sum_{j=1}^{2m} z_j^2}{2}.
\end{equation}
It follows from \eqref{det15}, \eqref{cjdj22} that
\begin{equation}\label{pp2}
\prod_{j=1}^m \lambda_j^2= |c_0| = |\kappa_0|= \prod_{j=1}^{2m} |z_j| .
\end{equation}
By \eqref{longl1}, we have
\begin{align}\label{wjs}
\displaystyle \frac{w_j}{2}=\left\lvert \mathrm{Res}_{z=\lambda_j} \textbf{m}(z) \right\rvert =\left\lvert \mathrm{Res}_{z=\lambda_j}\frac{\prod_{k=1}^{2m} (z-z_k)}{l\prod_{k=1}^{m}
(z^2-\lambda_k^2)}\right\rvert=\left\lvert \frac{\prod_{k=1}^{2m} (\lambda_j-z_k)}{2l\lambda_j \prod_{\substack{1\leq k \leq m\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert.
\end{align}
Similarly,
\begin{align}\label{wjs2}
\displaystyle \frac{w_j}{2}=\left\lvert \mathrm{Res}_{z=-\lambda_j} \textbf{m}(z) \right\rvert =\left\lvert \mathrm{Res}_{z=-\lambda_j}\frac{\prod_{k=1}^{2m} (z-z_k)}{l\prod_{k=1}^{m}
(z^2-\lambda_k^2)}\right\rvert=\left\lvert \frac{\prod_{k=1}^{2m} (\lambda_j+z_k)}{2l\lambda_j \prod_{\substack{1\leq k \leq m\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert.
\end{align}
By \eqref{cjdj22}, the identity
\begin{equation}\label{split}
\prod_{k=1}^m (z^2-\lambda_k^2)=\sum_{j=0}^m \kappa_{2j}z^{2j}=\frac{1}{2}\prod_{k=1}^{2m}(z-z_k)+\frac{1}{2}\prod_{k=1}^{2m}(z+z_k)
\end{equation}
holds.
Letting $z=z_1,\ldots,z_{2m}$ in \eqref{split} yields
\begin{equation}\label{sum}
\prod_{\substack{k=1,\ldots, 2m\\
j=1,\ldots, m}} |z_k^2-\lambda_j^2| = \frac{1}{4^m}\prod_{k,j=1}^{2m} |z_j+z_k| .
\end{equation}
Combining \eqref{wjs}, \eqref{wjs2}, and \eqref{sum}, we get
\begin{equation}\label{summ}
\prod_{j=1}^{m} w_j^2=
\frac{ \prod_{k,j=1}^{2m} |z_j+z_k| }{l^{2m} 4^m \prod_{j=1}^m \lambda_j^2 \prod_{1\leq j<k\leq m} |\lambda_k^2-\lambda_j^2 |^4 }.
\end{equation}
Substituting \eqref{summ}, \eqref{pp1}, \eqref{pp2} into \eqref{bigdist22} we obtain \eqref{real1}.
\item
Let $\sum_{j=0}^{2n+1} \kappa_j z^j=\det(z-\mathcal{J}_l).$ By a similar argument as in (i), we see that
\begin{align}
\prod_{1\leq j<k\leq 2n+1}|z_j-z_k|=\left\lvert \det{\frac{\partial{(\kappa_0,\ldots, \kappa_{2n-1})}}{\partial{(z_1,\ldots, z_{2n})}}}\right\rvert\label{b1}
\end{align}
and
\begin{equation}\label{rr1}
\sum_{j=1}^{n} \lambda_j^2 =\frac{-l^2+\sum_{j=1}^{2n+1} z_j^2}{2}.
\end{equation}
Using part $(ii)$ in Theorem \ref{thmmm}, \eqref{b1} and \eqref{dist23}, we find the distribution of the $z_j$'s:
\begin{align}
&n! \frac{\prod_{1\leq j<k\leq 2n+1}|z_j-z_k|}{2^n l^{n}\prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n} |\lambda_j^2-\lambda_k^2|^2}\frac{2^n\prod_{j=1}^n\lambda_j}{h_{\beta,n,a}}\prod_{j=1}^n\lambda_j^{\beta a}e^{-\lambda_j^2/2} \nonumber\\
&\times \prod_{1\leq j<k\leq n}|\lambda_k^2-\lambda_j^2|^\beta \times \frac{w_0^{\beta(m-n)/2-1}}{\Gamma{(\beta(m-n)/2)}} \times\Gamma(\beta m/2)\prod_{j=1}^n\frac{w_j^{\beta/2-1}}{\Gamma(\beta/2)}\nonumber\\
& \times dz_1\cdots dz_{2n}. \label{bbb9}
\end{align}
It follows from \eqref{longl12} that
\begin{align}\label{tunn}
\displaystyle w_0=\left\lvert \mathrm{Res}_{z=0} \textbf{m}(z) \right\rvert =\left\lvert \mathrm{Res}_{z=0}\frac{\prod_{k=1}^{2n+1} (z-z_k)}{lz\prod_{k=1}^{n}
(z^2-\lambda_k^2)}\right\rvert=\left\lvert \frac{\prod_{k=1}^{2n+1} z_k}{l \prod_{k=1}^n \lambda_k^2} \right\rvert.
\end{align}
Similarly,
\begin{align}\label{las2}
\displaystyle \frac{w_j}{2}=\left\lvert \mathrm{Res}_{z=\lambda_j} \textbf{m}(z) \right\rvert =\left\lvert \mathrm{Res}_{z=\lambda_j}\frac{\prod_{k=1}^{2n+1} (z-z_k)}{lz\prod_{k=1}^{n}
(z^2-\lambda_k^2)}\right\rvert&=\left\lvert \frac{\prod_{k=1}^{2n+1} (\lambda_j-z_k)}{2l\lambda_j^2 \prod_{\substack{1\leq k \leq n\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert\\
\label{las3}
=\left\lvert \mathrm{Res}_{z=-\lambda_j} \textbf{m}(z) \right\rvert &=\left\lvert \frac{\prod_{k=1}^{2n+1} (\lambda_j+z_k)}{2l\lambda_j^2 \prod_{\substack{1\leq k \leq n\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert
.
\end{align}
By \eqref{longl12} and \eqref{sss2}
\begin{equation}\label{las1}
z \prod_{k=1}^n (z^2-\lambda_k^2)=\frac{1}{2}\prod_{k=1}^{2n+1}(z-z_k)+\frac{1}{2}\prod_{k=1}^{2n+1}(z+z_k).
\end{equation}
Letting $z=z_1,\ldots,z_{2n+1}$ in \eqref{las1} implies
\begin{equation}\label{tum}
\prod_{\substack{k=1,\ldots, 2n+1\\
j=1,\ldots, n}} |z_k^2-\lambda_j^2| = \frac{\prod_{k,j=1}^{2n+1} |z_j+z_k|}{2^{2n+1}\prod_{k=1}^{2n+1} |z_k|} .
\end{equation}
Combining \eqref{tum}, \eqref{las2}, \eqref{las3}, we obtain
\begin{equation}\label{tumm}
\prod_{j=1}^{n} w_j^2=
\frac{ \prod_{k,j=1}^{2n+1} |z_j+z_k| }{l^{2n} 2^{2n+1} \prod_{j=1}^{2n+1} |z_j| \prod_{j=1}^n \lambda_j^4 \prod_{1\leq j<k\leq n} |\lambda_k^2-\lambda_j^2 |^4 }.
\end{equation}
Substituting \eqref{rr1}, \eqref{tunn}, \eqref{tumm} into \eqref{bbb9}, we get \eqref{real2}.
\end{enumerate}
\end{proof}
It is natural to choose $l$ to be random and independent of $\mathcal{J}$. For example, let $l$ be $\sqrt{2} \chi_{\beta m/2}$-distributed, i.e., with probability distribution
$$
F(l) \, dl= \frac{1}{2^{\beta m/2-1}\Gamma(\beta m/4)} l^{\frac{m\beta}{2}-1} e^{-l^2/4} \, dl
$$
on $(0,\infty)$. Then, making an extra change of variables from $\{z_1,\ldots,z_{k-1},l\}$ to $\{z_1,\ldots,z_{k}\}$, where $k=2m$ or $k=2n+1$ and $l=\sum_{j=1}^{k} z_j$, we arrive at the following joint distribution of eigenvalues:
\begin{itemize}
\item[(i)] If $m\le n$, then eigenvalues of $\mathcal{J}_l$ are distributed on
\begin{equation}\label{eq:confSpace}
\left\{(z_j)_{j=1}^{2m}: \,\,\, \,\, z_1> -z_2>z_3>\cdots >z_{2m-1}>-z_{2m} >0 \right\}
\end{equation}
according to
\begin{equation}
\frac{1}{\tilde{Z}_{\beta,m,a}} \, \,\, {\prod_{j=1}^{2m}}|z_j|^{\frac{2\beta a -\beta+2}{4}}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2m}|z_j-z_k|
\prod_{j,k=1}^{2m} |z_j+z_k|^{\frac{\beta-2}{4}}
\prod_{j=1}^{2m} dz_j.
\end{equation}
Here
\begin{equation}
\tilde{Z}_{\beta,m,a} = \frac{2^{m \beta-m-1} \,h_{\beta,m,a} \Gamma(\beta m/4)\,[\Gamma(\beta/2)]^m}{m! \Gamma(\beta m /2)}.
\end{equation}
For $\beta=2$ this takes an especially simple form
\begin{equation}\label{eq:notPfaff}
\frac{1}{\tilde{Z}_{2,m,|n-m|}} \, \,\, {\prod_{j=1}^{2m}}|z_j|^{|n-m|}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2m}|z_j-z_k|
\prod_{j=1}^{2m} dz_j.
\end{equation}
At first sight one might expect~\eqref{eq:notPfaff} to have a Pfaffian structure, but recall that the configuration space is~\eqref{eq:confSpace}, which complicates the analysis substantially.
\item[(ii)] If $m\ge n+1$, then eigenvalues of $\mathcal{J}_l$ are distributed on
\begin{align}
\left\{(z_j)_{j=1}^{2n+1}: \,\,\, \,\,z_1> -z_2>z_3>\cdots >-z_{2n}>z_{2n+1} >0 \right\}
\end{align}
according to
\begin{align}
\frac{1}{\tilde{W}_{\beta,m,n,a}} \, \prod_{j=1}^{2n+1}|z_j|^{\frac{2\beta m -2\beta n-\beta-2}{4}}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2n+1}|z_j-z_k| \prod_{j,k=1}^{2n+1} |{z_j}+z_k|^{\frac{\beta-2}{4}} \prod_{j=1}^{2n+1} dz_j.
\end{align}
Here
\begin{equation}
\tilde{W}_{\beta,m,n,a}=\frac{2^{\frac{(2n+1)(\beta-2)}{4}+\frac{\beta m}{2}-1} \,h_{\beta,n,a} \,\Gamma(\beta m/4) [\Gamma(\beta/2)]^n\,\Gamma(\beta(m-n)/2) }{n! \Gamma(\beta m /2) }.
\end{equation}
For $\beta=2$ this becomes
\begin{align}
\frac{1}{\tilde{W}_{2,m,n,|m-n|}} \, \prod_{j=1}^{2n+1}|z_j|^{ m -n-1}e^{-z_j^2/4}\prod_{1\leq j<k\leq 2n+1}|z_j-z_k| \prod_{j=1}^{2n+1} dz_j.
\end{align}
\end{itemize}
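The disappearance of $l$ from these densities can be seen directly (a one-line verification added for the reader): since $l=\sum_j z_j$, the change of variables has Jacobian $1$, and the $l$-dependent factors combine into a constant,
\begin{equation*}
F(l)\, l^{1-\frac{m\beta}{2}}\, e^{l^2/4} = \frac{1}{2^{\beta m/2-1}\,\Gamma(\beta m/4)},
\end{equation*}
which is absorbed into the normalization constants $\tilde{Z}_{\beta,m,a}$ and $\tilde{W}_{\beta,m,n,a}$.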
\section{Eigenvalues for rank one non-Hermitian perturbations}\label{ss:nonHermitian}
Let $\mathcal{J}$ be an $N\times N$ random matrix from chG$\beta$E, and consider
\begin{equation}\label{eq:Jil2}
\mathcal{J}_{il}:=\mathcal{J}+ilI_{1\times 1}
\end{equation}
for some $l>0$.
In order to simplify the final answer we will assume $l$ to be random, independent from $\mathcal{J}$ (or $H$ for $\beta=1,2,4$), with absolutely continuous distribution $F(l) \,dl$, where $F(l)>0$ for $l>0$ and $F(l)=0$ otherwise. Other distributions of $l$ (or the deterministic case) can be treated in the exact same manner; we leave this as an exercise for the interested reader.
As we discussed in Proposition~\ref{bijection}, eigenvalues of~\eqref{eq:Jil2} belong to $\mathbb{C}_+$, and they are symmetric with respect to the imaginary axis. The set of all possible configurations $\{z_j\}_{j=1}^N$ of these eigenvalues, therefore, decomposes as the disjoint union
$$
X_N:=\bigcup_{\stackrel{L\ge 0, M\ge0}{L+2M=N}} X_{L,M},
$$
where
\begin{multline}
X_{L,M}:=
\Big\{
\{z_j\}_{j=1}^N \in\mathbb{C}_+^N: z_1,\ldots,z_L \in i\mathbb{R}_+; \\
z_{L+1} = - \bar{z}_{L+1+M}, \ldots, z_{L+M} = -\bar{z}_{L+2M}
\Big\}.
\end{multline}
For each $z_j$, let $z_j=x_j + i y_j$, $x_j,y_j\in\mathbb{R}$.
We will say that $\{z_j\}_{j=1}^N$ on $X_N$ have joint distribution $f(z_1,\ldots,z_N) \big| \bigwedge_{j=1}^N z_j \big|$ (with $f$ being invariant under permutation of its arguments), if conditionally on the event $\{z_j\}_{j=1}^N \in X_{L,M}$ the distribution becomes
\begin{multline}\label{eq:wedges}
2^M \frac{1}{L! M! 2^M} f(iy_1,\ldots,iy_L,\pm x_{L+1}+iy_{L+1},\pm x_{L+2}+i y_{L+2},\ldots,\pm x_{L+M}+i y_{L+M})
\\
\times \prod_{j=1}^L \,dy_j \prod_{j=L+1}^{L+M} (dx_{j} dy_{j}) .
\end{multline}
Here the factor $\tfrac{1}{ L! M!2^M}$ corresponds to the number of permutations on $X_{L,M}$ that preserve the configuration, and $2^M$ comes from $\big| dz \wedge d(-\bar{z}) \big| = 2 dx\,dy$.
For a more formal introduction to such point processes, we refer the reader to \cite{boro}.
\begin{theorem}\label{th:nonHermitian}
Let $\mathcal{J}$ belong to chG$\beta$E $($see Section~\ref{ss:chiral}$)$, $a=|n-m|+1-2/\beta$,
\begin{equation}
\mathcal{J}_{il}:=\mathcal{J}+ilI_{1\times 1},
\end{equation}
where $l$ is independent of $\mathcal{J}$ with distribution $F(l)dl$, $F(l)>0$ for $l>0$ and $0$ otherwise.
\begin{enumerate}[$(i)$]
\item Let $m\le n$.
Then $\{z_j\}_{j=1}^{2m}$ are jointly distributed on $X_{2m}$ according to
\begin{multline}
\frac{1}{Z_{\beta,m,a}} \,F(l) l^{1-\frac{m\beta}2} e^{-l^2/4} {\prod_{j=1}^{2m}}|z_j|^{\frac{2\beta a -\beta+2}{4}}e^{-z^2_j/4}
\\
\times \prod_{1\leq j<k\leq 2m}|z_j-z_k|
\prod_{j,k=1}^{2m} |{z_j}-\bar{z}_k|^{\frac{\beta-2}{4}} \, \Big|\bigwedge_{j=1}^{2m} dz_j \Big|,
\label{www1}
\end{multline}
where $l=\big|\sum_{j=1}^{2m} z_j\big|$ and $Z_{\beta,m,a}$ is~\eqref{eq:Z}.
\item Let $m\ge n+1$. Then $\{z_j\}_{j=1}^{2n+1}$ are jointly distributed on $X_{2n+1}$ according to
\begin{multline}
\frac{1}{W_{\beta,m,n,a}} \, F(l) l^{1-\frac{m\beta}2} e^{-l^2/4} \prod_{j=1}^{2n+1}|z_j|^{\frac{2\beta m -2\beta n-\beta-2}{4}}e^{-z_j^2/4}
\\
\,\times \prod_{1\leq j<k\leq 2n+1}|z_j-z_k|
\prod_{j,k=1}^{2n+1} |{z_j}-\bar{z}_k|^{\frac{\beta-2}{4}} \Big|\bigwedge_{j=1}^{2n+1} dz_j \Big|,
\label{www2}
\end{multline}
where $l=\big|\sum_{j=1}^{2n+1} z_j\big|$ and $W_{\beta,m,n,a}$ is~\eqref{eq:W}.
\end{enumerate}
\end{theorem}
\begin{remarks}
1. As a corollary, the eigenvalues of non-Hermitian perturbations of chGOE, chGUE, chGSE (see Proposition~\ref{pr:Model2}) are distributed according to~\eqref{www1}, together with $z=0$ of algebraic multiplicity $n-m$ (for the case $m\le n$), and according to~\eqref{www2}, together with $z=0$ of algebraic multiplicity $m-n-1$ (for the case $m\ge n+1$).
2. Even though the $z_j$'s lie in $\mathbb{C}_+$, the symmetry implies $\sum z_j^2 = \sum \Re(z_j^2)$, so this sum is a real quantity.
\end{remarks}
\begin{proof}
\begin{enumerate}[$(i)$]
\item
Recall the characteristic polynomial $\kappa(z)$ in~\eqref{det1} and that
$$
Q(z)=i^{N}\kappa(z/i)
$$
is a monic polynomial with real coefficients and zeros at $\{iz_j\}_{j=1}^{2m}$.
Let us assume that the $z_j$'s belong to $X_{L,M} \subset X_{2m}$. Using~\cite[Lemma 6.5]{KK} (applied to $Q$), we get
\begin{align}\label{vande}
\left\lvert \det{\frac{\partial{(\Re\kappa_0,\Im\kappa_1,\ldots,\Im\kappa_{2m-3},\Re\kappa_{2m-2},\Im\kappa_{2m-1})}}{\partial{(y_1,\ldots, y_L ,x_{L+1},y_{L+1},\ldots, x_{L+M},y_{L+M})}}}\right\rvert=2^M \prod_{1\leq j<k\leq 2m}|z_j-z_k|.
\end{align}
Combining \eqref{vande} with \eqref{dist1} and $\kappa_{2m-1}=-il$, we obtain
\begin{align}
\left\lvert \det{\frac{\partial{(\lambda_1,\ldots, \lambda_{m},w_1, \ldots, w_{m-1},l)}} {\partial{(y_1,\ldots, y_L ,x_{L+1},y_{L+1},\ldots, x_{L+M},y_{L+M})}} } \right\rvert
\\ = 2^{M-m} \frac{\prod_{1\leq j<k\leq 2m}|z_j-z_k|}{ l^{m-1}\prod_{j=1}^m \lambda_j \prod_{1\leq j<k\leq m} |\lambda_j^2-\lambda_k^2|^2}.
\end{align}
Using this Jacobian together with Theorem \ref{thmmm}(i), we obtain the joint density of the $x_j$'s and $y_j$'s:
\begin{multline}\label{eq:intermediateDensity}
\frac{m!}{2^M M!L!}\frac{2^M}{h_{\beta,m,a}} \prod_{1\leq j<k\leq 2m}|z_j-z_k|
\prod_{j=1}^m\lambda_j^{\beta a}e^{-\lambda_j^2/2}\prod_{1\leq j<k\leq m}|\lambda_k^2-\lambda_j^2|^{\beta-2}
\\
\times \Gamma(\beta m/2)\prod_{j=1}^m \frac{w_j^{\beta/2-1}}{\Gamma(\beta/2)} l^{1-m}F(l)
\times \prod_{j=1}^L \,dy_j \prod_{j=L+1}^{L+M} (dx_{j} dy_{j}).
\end{multline}
Notice the extra factor of $\frac{1}{2^M M!L!}$: since we do not impose an ordering on the $z_j$'s, each configuration appears $2^M M!L!$ times. Similarly, the factor $m!$ comes from the absence of ordering among the $\lambda_j$'s.
Using~\eqref{longl1},
\begin{equation}\label{mfun2}
\sum_{j=0}^{2m} \kappa_j z^j=(1+il \textbf{m}(z))\prod_{j=1}^m (z^2-\lambda_j^2).
\end{equation}
Here
\begin{align}
\kappa_{2m}&=1,\\
\Im \kappa_{2j}&=0,\,\,\,\,\,\, j=0,\ldots,m-1,\\
\Re \kappa_{2j+1}&=0,\,\,\,\,\,\, j=0,\ldots,m-1,\label{re}
\end{align}
and
\begin{equation}\label{re1}
\sum_{j=0}^{2m} \Re \kappa_j z^j=\prod_{j=1}^m (z^2-\lambda_j^2).
\end{equation}
It follows that
\begin{equation}\label{rpp2}
\prod_{j=1}^m \lambda_j^2= |\Re \kappa_0| = |\kappa_0|= \prod_{j=1}^{2m} |z_j|
\end{equation}
and
\begin{equation}\label{yt1}
\sum_{j=1}^m \lambda_j^2=-\kappa_{2m-2}=-\sum_{1\leq i<j\leq 2m}z_i z_j.
\end{equation}
Since $\sum_{j=1}^{2m} z_j=\operatorname{Tr}(\mathcal{J}_{il}) = il$, we have
\begin{align}
-l^2&= \sum_{j=1}^{2m} z_j^2+2\sum_{{1\leq i<j\leq 2m}}z_i z_j \label{zt2}\\
&=\sum_{j=1}^{2m} z_j^2-2\sum_{j=1}^{m} \lambda_j^2.\label{zt3}
\end{align}
Thus
\begin{equation}\label{ppp1}
\sum_{j=1}^{m} \lambda_j^2=\frac{l^2+\sum_{j=1}^{2m} z_j^2}{2}.
\end{equation}
Using \eqref{mfun2}, \eqref{re1}, we obtain
\begin{align}\label{arg}
\frac{1}{2}\prod_{j=1}^{2m}(z-z_j)+\frac{1}{2}\prod_{j=1}^{2m}(z-\bar{z}_j)=\prod_{j=1}^m (z^2-\lambda_j^2).
\end{align}
Letting $z=z_k$, $k=1,\ldots, 2m$, in \eqref{arg} (where the first product on the left-hand side vanishes) and taking the product of the absolute values of the resulting identities, we get
\begin{equation}\label{summmm}
\prod_{\substack{k=1,\ldots, 2m\\
j=1,\ldots, m}} |z_k^2-\lambda_j^2| = \frac{1}{4^m}\prod_{k,j=1}^{2m} |{z_j}-\bar{z}_k|.
\end{equation}
By~\eqref{mfun2},
\begin{align}\label{wjsw}
\frac{w_j}{2}=\left\lvert \mathrm{Res}_{z=\lambda_j} \textbf{m}(z) \right\rvert =\left\lvert \mathrm{Res}_{z=\lambda_j}\frac{\prod_{k=1}^{2m} (z-z_k)}{il\prod_{k=1}^{m}
(z^2-\lambda_k^2)}\right\rvert
&=\left\lvert \frac{\prod_{k=1}^{2m} (\lambda_j-z_k)}{2l\lambda_j \prod_{\substack{1\leq k \leq m\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert,
\\
\label{wjsw-new}
\frac{w_j}{2}=\left\lvert \mathrm{Res}_{z=-\lambda_j} \textbf{m}(z) \right\rvert
&=\left\lvert \frac{\prod_{k=1}^{2m} (\lambda_j+z_k)}{2l\lambda_j \prod_{\substack{1\leq k \leq m\\
k\neq j}} (\lambda_k^2-\lambda_j^2) }\right\rvert.
\end{align}
Equalities~\eqref{wjsw}, \eqref{wjsw-new}, and \eqref{summmm} yield
\begin{align} \label{imp2}
\frac{1}{4^m}\prod_{j=1}^m w_j^2=\frac{\prod_{j,k=1}^{2m}|{z_j}-\bar{z}_k|}{(2l)^{2m} 4^m \prod_{j=1}^m \lambda_j^2 \displaystyle\prod_{1\leq j<k\leq m}|\lambda_j^2-\lambda_k^2|^4}.
\end{align}
Substituting \eqref{rpp2}, \eqref{ppp1}, and \eqref{imp2}
into \eqref{eq:intermediateDensity}, we get \eqref{www1}.
\item We follow a similar line of reasoning as in $(i)$. Suppose the $z_j$'s belong to $X_{L,M} \subset X_{2n+1}$.
By~\cite[Lemma 6.5]{KK}
\begin{align}\label{vande2}
\left\lvert \det{\frac{\partial{(\Im\kappa_0,\Re\kappa_1,\ldots,\Im\kappa_{2n-2},\Re\kappa_{2n-1},\Im\kappa_{2n})}}{\partial{(y_1,\ldots, y_L ,x_{L+1},y_{L+1},\ldots, x_{L+M},y_{L+M})}}}\right\rvert=2^M \prod_{1\leq j<k\leq 2n+1}|z_j-z_k|,
\end{align}
and then from~\eqref{dist2} we get
\begin{align}
\left\lvert \det{\frac{\partial{(\lambda_1,\ldots, \lambda_{n},w_1, \ldots, w_{n},l)}} {\partial{(y_1,\ldots, y_L ,x_{L+1},y_{L+1},\ldots, x_{L+M},y_{L+M})}} } \right\rvert
\\ = 2^{M-n} \frac{\prod_{1\leq j<k\leq 2n+1}|z_j-z_k|}{ l^{n}\prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n} |\lambda_j^2-\lambda_k^2|^2}.
\end{align}
Combining part $(ii)$ of Theorem \ref{thmmm} with \eqref{vande2}, we obtain the joint density of the $z_j$'s:
\begin{align}
\frac{n!}{2^M M! L!}& \frac{2^{M-n} \prod_{1\leq j<k\leq 2n+1}|z_j-z_k| }{ l^{n}\prod_{j=1}^n \lambda_j^3 \prod_{1\leq j<k\leq n} |\lambda_j^2-\lambda_k^2|^2} \frac{2^n\prod_{j=1}^n\lambda_j}{h_{\beta,n,a}}\prod_{j=1}^n\lambda_j^{\beta a}e^{-\lambda_j^2/2} \prod_{1\leq j<k\leq n}|\lambda_k^2-\lambda_j^2|^\beta \nonumber\\
& \times \frac{w_0^{\beta(m-n)/2-1}}{\Gamma{(\beta(m-n)/2)}} \times\Gamma(\beta m/2)\prod_{j=1}^n\frac{w_j^{\beta/2-1}}{\Gamma(\beta/2)}
\times \prod_{j=1}^L \,dy_j \prod_{j=L+1}^{L+M} (dx_{j} dy_{j}).
\label{longg}
\end{align}
Substituting
\begin{align}\label{imp11}
\sum_{j=1}^n \lambda_j^2&= \frac12\left(l^2+\sum_{j=1}^{2n+1} {z_j^2}\right),\\
\label{w0}
w_0&=\frac{\displaystyle\prod_{j=1}^{2n+1}|z_j|}{l\displaystyle\prod_{j=1}^n|\lambda_j|^2},\\
\label{imp22}
\prod_{j=1}^n w_j^2&=\frac{\prod_{j,k=1}^{2n+1}|{z_j}-\bar{z}_k|}{l^{2n} 2^{2n+1}\prod_{j=1}^{2n+1}|z_j| \prod_{j=1}^n \lambda_j^4 \displaystyle\prod_{1\leq j<k\leq n}|\lambda_j^2-\lambda_k^2|^4},
\end{align}
into \eqref{longg} we get \eqref{www2}.
\end{enumerate}
\end{proof}
\section{ParaLiNGAM}
\label{sec:alg}
In this section, we present the ParaLiNGAM algorithm for accelerating the computationally intensive part of DirectLiNGAM without changing its accuracy.
As mentioned before, DirectLiNGAM discovers the causal order in $p$ iterations. Moreover, these iterations are sequential, i.e., an iteration cannot start until the previous one has finished. ParaLiNGAM is likewise executed in $p$ iterations. \mRefFig{fig:alg:base} illustrates the procedure of one iteration.
\begin{figure*}[tp]
\centering
\includegraphics[width = \textwidth]{fig/flowchart}
\caption{Procedure of one iteration in ParaLiNGAM: Each iteration is divided into steps.
Each step has three parts, Compare, Message Passing, and Scheduler, which are carried out by the workers selected in the previous step ($\mathcal{W}_k$). Details are discussed in Section \ref{sec:alg}.}
\label{fig:alg:base}
\end{figure*}
In each iteration, the computations of each variable are assigned to a specific \textbf{worker}. Iterations are broken into \textbf{steps}. In each step $k$, a subset of all workers ($\mathcal{W}$), denoted by $\mathcal{W}_k$, is selected to start comparing themselves with other workers (Compare part). Next, the workers inform each other about their computations by sending messages (Message Passing part).
Finally, the \textbf{scheduler} gathers all workers' scores and selects some of the workers for the next step, i.e., $\mathcal{W}_{k+1}$.
\begin{algorithm}
\begin{algorithmic}[1]
\Input $\mathcal{X}$
\Output $K$
%
\State $K=\emptyset$
\State $U =[1,\cdots, p]$
\State Par: $NormalizeData(\mathcal{X})$
\State Par: $ \Sigma = CalculateCovMat(\mathcal{X})$
\Repeat
\State Par: $root = ParaFindRoot(\mathcal{X}, U, \Sigma)$
\State Append $root$ to $K$
\State Remove $root$ from $U$
\State Par: $\mathcal{X} = UpdateData(\mathcal{X}, U, \Sigma, root) \hfill//$ Sec\ref{sec:math}
\label{line:RegressRoot}
\State Par: $\Sigma = UpdateCovMat(\mathcal{X}, U, \Sigma, root) \hfill//$ Sec\ref{sec:math}
\label{line:UpdateCovMat}
\Until{$U$ is empty}
\end{algorithmic}
\caption{ParaLiNGAM}
\label{alg:ParaLiNGAM}
\end{algorithm}
The description of ParaLiNGAM is given in \mRefAlg{alg:ParaLiNGAM}. The lines starting with ``Par'' are executed in parallel. The ParaLiNGAM algorithm is executed similarly to DirectLiNGAM. First, we initialize $U$ and $K$ (lines $1-2$). Next, we find the causal order in lines $5-11$. The general procedure of this algorithm is the same as DirectLiNGAM; however, the computational details have changed, which we discuss in the sequel.
In DirectLiNGAM, the samples of all variables have to be normalized in the $FindRoot$ function (lines $7-8$ in \mRefAlg{alg:baseLiNGAM_FR}). For the sake of efficiency, all variables are normalized simultaneously in line $3$ of \mRefAlg{alg:ParaLiNGAM} for the first iteration. For the subsequent iterations, this task is done by the $UpdateData$ function (line $9$ in \mRefAlg{alg:ParaLiNGAM}).
Regressing variables on each other is a frequent task in DirectLiNGAM, performed in the $Compare$ (lines $9-10$ in \mRefAlg{alg:baseLiNGAM_FR}) and $RegressRoot$ (line $7$ in \mRefAlg{alg:baseLiNGAM}) functions, and it needs the variables' covariance matrix. Hence, it is desirable to store the covariance matrix (which we denote by $\Sigma$) in each iteration to avoid redundant computations. In \mRefAlg{alg:ParaLiNGAM}, the initial covariance matrix is calculated in line $4$, and it is updated in each iteration (line $10$).
Furthermore, we will show in Section \ref{sec:math} how to reuse computations from previous iterations in normalizing data and obtaining the covariance matrix, which reduces the computational complexity without degrading the accuracy.
We have discussed the summary of changes to \mRefAlg{alg:baseLiNGAM}. Now we are ready to explain these changes in more detail. First, we present a parallel solution for finding the root in each iteration (the $ParaFindRoot$ function in Algorithm \ref{alg:ParaLiNGAM_FR}).
\begin{algorithm}
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma$
\Output $root$
\item[\textbf{\# of Workers:}] $|U|$
%
\If {$U$ has only one element}
\State return the element
\EndIf
\State $r = |U|$
\State $\mathcal{W} = [1, 2, 3, ..., r]$
\State $\mathcal{S} = [\textbf{0}]_{r}$
\State $\mathcal{M} = [\emptyset]_{r\times r}$
\State $\mathcal{D} = diag([True]_r)$
\State $State = \{U, \Sigma, r, \mathcal{S}, \mathcal{M}, \mathcal{D}, \gamma \}$
\State $\mathcal{W}' = \mathcal{W}$
\State $\mathcal{C} = \textbf{1}_{r}$
\Repeat
\State Par: $\mathcal{S}[w] += Compare(w, \mathcal{C}[w], State)$
\State Par: $\mathcal{S}[w] += CheckMessages(w, State)\hfill//$Sec\ref{sec:messaging}
\State $finished, \mathcal{W'}, \mathcal{C} = Scheduler (\mathcal{W}, \mathcal{C}, State)\hfill//$Sec\ref{sec:scheduler}
\Until{$finished == True$}
\State $root=U[\arg\min(\mathcal{S})]$
\end{algorithmic}
\caption{$ParaFindRoot$}
\label{alg:ParaLiNGAM_FR}
\end{algorithm}
In order to find the root in DirectLiNGAM (\mRefAlg{alg:baseLiNGAM_FR}), all variables compare themselves with the other ones (line $5$). In the parallel version, we assign each variable to a worker that performs its computations. In other words, instead of the $for$ statement in line $5$ of \mRefAlg{alg:baseLiNGAM_FR}, we have workers that can run in parallel to perform the comparisons. Moreover, comparing each variable/worker to the other variables/workers is divided into steps instead of iterating over all variables (line $6$ of \mRefAlg{alg:baseLiNGAM_FR}). Next, we discuss the details of the presented solution for finding the root (\mRefAlg{alg:ParaLiNGAM_FR}).
The description of the $ParaFindRoot$ function is given in \mRefAlg{alg:ParaLiNGAM_FR}. First, we define $r$ as the number of remaining variables in this iteration, which equals the size of $U$ (line $4$). In each iteration, every pair of variables has to be compared to find the root. Moreover, these comparisons are independent of each other. We can therefore use $r$ workers, where worker $i$ is responsible for performing variable $U[i]$'s computations.
For simplicity of notation, from now on we denote workers by the corresponding variables assigned to them. We define $\mathcal{W}$ as the list of all workers' indices in an iteration (line $5$). In line $6$, $\mathcal{S}$ is initialized in the same way as in the main algorithm (line $4$ in \mRefAlg{alg:baseLiNGAM_FR}) to store scores. Each worker might have useful information for the other workers, which can be shared via a \textbf{messaging} mechanism. To do so, we define $\mathcal{M}$, an $r\times r$ matrix filled with $\emptyset$ (line $7$). Worker $i$ can send a message to worker $j$ by writing in $\mathcal{M}[j][i]$. More details of the messaging mechanism and its effect on the performance of the algorithm are discussed in \mRefSec{sec:messaging}. Note that the matrix $\mathcal{M}$ is just temporary memory for messaging and is reset in each step.
Hence, another variable is required to track the progress of an iteration.
For this purpose, in line $8$, we define an $r \times r$ matrix $\mathcal{D}$, in which the diagonal entries are initially $True$ while the others are $False$, to monitor the workers' progress. Worker $i$ writes $True$ in $\mathcal{D}[i][j]$ after comparing itself with worker $j$.
For the sake of brevity, we collect all the variables defined in Algorithm \ref{alg:ParaLiNGAM_FR}, together with the threshold $\gamma$, initialized to a small value (Sections \ref{sec:thresh} and \ref{sec:scheduler}), in the set $State$ (line $9$).
As mentioned before, finding the root in each iteration is divided into steps. In each step, some workers are selected, and each of them has to compare itself to another worker. The selected workers are indicated by $\mathcal{W}'$ (line $10$). In the first step of each iteration, all workers have the same priority; hence $\mathcal{W}'$ equals $\mathcal{W}$. Moreover, the list of target workers to be compared with is defined as $\mathcal{C}$ (line $11$), and worker $i$ compares itself with $\mathcal{C}[i]$. At the beginning of each iteration, $\mathcal{C}$ is initialized with a list filled with $1$, which means all workers start by comparing themselves with the first worker.
In each step, first, the selected workers compare themselves with their assigned targets and send a message (line $13$). Comparing and sending a message is an independent task for each worker and can be performed in parallel. Then, workers check for new messages from others and update their scores (line $14$). Afterwards, the $Scheduler$ selects workers for the next step according to the current step's status. Moreover, it modifies $\mathcal{C}$ for the selected workers (line $15$) and determines whether to terminate the iteration after checking its $State$. Finally, similar to \mRefAlg{alg:baseLiNGAM_FR}, the worker with the minimum score is chosen as the root of the iteration (line $17$).
There are still some implementation details that will be discussed in the following parts. More specifically,
messaging between workers is discussed in \mRefSec{sec:messaging}. The details of the \textbf{threshold} mechanism, which determines the set of selected workers $\mathcal{W}'$, are given in \mRefSec{sec:thresh}. The scheduling of workers based on the threshold mechanism is discussed in \mRefSec{sec:scheduler}. Mathematical simplifications for accelerating the $UpdateData$ and $UpdateCovMat$ processes are also discussed in \mRefSec{sec:math}. Finally, the implementation of this algorithm on GPU and further details on some parts of the solution are given in \mRefSec{sec:alg2}.
\subsection{Messaging}
\label{sec:messaging}
In DirectLiNGAM, as mentioned in \mRefSec{sec:baseLiNGAM}, the computation of the test $I$ for $x_i \rightarrow x_j$ has some similarities to the one for $x_i \leftarrow x_j$. It is worth mentioning that this property is not specific to $I$; the technique can also be used for some other tests \cite{hyvarinen2013pairwise}.
When worker $i$ compares itself to worker $j$ (computing $I(x_i, x_j, r_i^{(j)}, r_j^{(i)})$), it can also compute the test in the reverse direction ($I(x_j, x_i, r_j^{(i)}, r_i^{(j)})$), which is worker $j$'s task. Hence, we can assign the full comparison to just one worker; after finishing each comparison, the worker that performed the test informs the other worker about the result by sending a message. With messaging, which adds no computational load,
we can halve the number of comparisons (from $p(p-1)$ to $p(p-1)/2$). To use this mechanism, every active worker in a step has to send a message after finishing its comparison (line 13 in \mRefAlg{alg:ParaLiNGAM_FR}), and each worker checks for its messages (which can come from any of the $r-1$ other workers in the whole iteration) with the $CheckMessages$ function.
\begin{algorithm}
\begin{algorithmic}[1]
\Input $w, State$
\Output $score$
%
\State $score = 0$
\For{$~~(i=1;~~ i <= r;~~ i+=1)$}
\If{$\mathcal{M}[w,i] != \emptyset$}
\State $score += \mathcal{M}[w, i]$
\State$\mathcal{D}[w, i] = True$
\State $\mathcal{M}[w, i] = \emptyset$
\EndIf
\EndFor
\end{algorithmic}
\caption{$CheckMessages$}
\label{alg:CheckMessages}
\end{algorithm}
The description of the $CheckMessages$ function is given in \mRefAlg{alg:CheckMessages}. First, we define $score$ as a variable for the sum of scores. In lines 2-3, worker $w$ checks for new messages. If another worker, say $i$, has sent a message, first, $score$ is updated (line 4). Next, worker $w$ marks the sender ($i$) as ``done'' by writing $True$ in $\mathcal{D}[w, i]$ (line 5). Finally, the message is replaced with $\emptyset$ to prevent it from being counted again in later steps. The senders and receivers might not be active simultaneously in one step. As a result, workers consider messages from all workers in the iteration and not just from the active workers in the current step.
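To make the bookkeeping concrete, the following is a minimal host-side C++ sketch of this function (ours, not the authors' CUDA code), assuming messages are stored in an $r\times r$ array of doubles with NaN encoding the empty slot $\emptyset$:
\begin{verbatim}
#include <cmath>
#include <vector>

// Sketch of CheckMessages for worker w: collect scores left by other
// workers, mark those comparisons as done, and clear the slots.
double checkMessages(int w, int r,
                     std::vector<std::vector<double>>& M,   // message matrix
                     std::vector<std::vector<bool>>& D) {   // progress matrix
    double score = 0.0;
    for (int i = 0; i < r; ++i) {
        if (!std::isnan(M[w][i])) {     // worker i left a message for w
            score += M[w][i];           // accumulate the precomputed score
            D[w][i] = true;             // mark the (w, i) comparison as done
            M[w][i] = std::nan("");     // reset the slot; not recounted later
        }
    }
    return score;
}
\end{verbatim}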
\subsection{Threshold}
\label{sec:thresh}
As mentioned earlier, we need to perform $p(p-1)/2$ comparisons in each iteration. However, not all of them are necessary. Suppose that the final score of the root in an iteration is $0.05$. In this case, we can terminate the computation of any worker whose score has reached $0.05$.
To reduce the number of comparisons, we consider an upper bound for the score, which we call the threshold, and assume that the root's score will probably be less than this threshold. If that is the case, the iteration is over and we can choose the root once at least one worker has finished its comparisons without reaching the threshold, while all other workers have reached it without completing their tasks.
Herein, the main issue is how to choose a proper threshold.
To overcome this issue, we first choose a small value for the threshold. Furthermore, we terminate workers who have already reached this value. Then, if all workers terminate and none of them could finish their comparisons, we increase the threshold. We continue this procedure until the iteration termination condition is satisfied, i.e., at least one worker finishes all of its comparisons without reaching the threshold. Consequently, the threshold mechanism can reduce the number of comparisons.
Now, we discuss the correctness of the threshold mechanism in each iteration.
At the end of each iteration, each worker is in one of these two groups:
1) The worker's score is above the threshold, and it may not even have finished its comparisons.
2) The worker has finished its comparisons, and its score is still below the threshold.
Workers in the first group would have scores higher than the threshold even if they continued their comparisons. The root is the worker with the minimum score; as a result, it is always chosen from the second group of workers. In fact, we select as root the worker in the second group with the minimum score. Hence, the early termination of the first group of workers does not affect the algorithm's results.
Details of how the threshold mechanism reduces the number of comparisons are discussed in \mRefSec{sec:scheduler}.
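As a toy illustration, the threshold-update rule can be sketched in C++ as follows (our sketch, not the paper's code; it assumes a positive initial $\gamma$, a growth factor $c>1$, and a non-empty score list):
\begin{verbatim}
#include <algorithm>
#include <vector>

// Raise gamma by factors of c until at least one worker's accumulated
// score falls below it, matching the rule described in this subsection.
double raiseThreshold(double gamma, double c, const std::vector<double>& S) {
    const double best = *std::min_element(S.begin(), S.end());
    while (best > gamma)  // every worker has reached the current threshold
        gamma *= c;       // grow geometrically (gamma > 0, c > 1 terminate)
    return gamma;
}
\end{verbatim}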
\subsection{Scheduler}
\label{sec:scheduler}
In this part, we explain how to schedule workers with the threshold mechanism. At the end of each step, the scheduler has to decide whether or not to finish the current iteration.
\begin{algorithm}
\begin{algorithmic}[1]
\Input $\mathcal{W}, \mathcal{C}, State$
\Output $finish, \mathcal{W'}, \mathcal{C}$
%
\State $finish = False$
\ForAll {$ w \in \mathcal{W}$} // checking the termination
\If{$\mathcal{S}[w] < \gamma$}
\If {$\mathcal{D}[w,:]$ has at least one $False$}
\State $finish = False$
\State \textbf{Break}
\Else
\State $finish = True$
\EndIf
\EndIf
\EndFor
\If {$finish$}
\State \textbf{return}
\EndIf
\While{$\mathcal{S}[w] > \gamma, ~~ \forall w \in \mathcal{W}$}
\State $\gamma = \gamma \times c$ // updating threshold
\EndWhile
\State $\mathcal{W'} = [ w \in \mathcal{W}~~|~~ \mathcal{S}[w] < \gamma]$
\ForAll {$ w \in \mathcal{W'}$}
\Repeat
\State $\mathcal{C}[w] += 1$
\Until{$(\mathcal{D}[w,\mathcal{C}[w]] == False ~~ \& \& ~~ \mathcal{C}[\mathcal{C}[w]]!=w) ~~ | | ~~ \mathcal{C}[w] > |\mathcal{W}| $}
\EndFor
\State $\mathcal{W'} = \mathcal{W'}\backslash[ w \in \mathcal{W'}~~|~~ \mathcal{C}[w] > |\mathcal{W}|]$
\end{algorithmic}
\caption{$Scheduler$}
\label{alg:Scheduler}
\end{algorithm}
As mentioned earlier in \mRefSec{sec:thresh}, if at least one worker has finished its comparisons and its score is below the threshold, we can terminate the iteration.
The description of the scheduler is given in \mRefAlg{alg:Scheduler}. The termination of an iteration is checked in lines $1-14$. First, we define $finish$ as a flag for the termination condition (line $1$). Then, we have to find workers with scores less than the threshold (lines $2-3$) and check whether they have finished their comparisons or not (line 4). If at least one worker has unfinished comparisons and its score is below the threshold, we need to continue with another step (lines $5-6$). If some workers have finished their comparisons, $finish$ is changed to $True$, and we wait for the other workers' status (lines $7-9$). Then, we check the value of $finish$ and decide whether or not to terminate the iteration (lines $12-14$).
Now we discuss the scheduler's task when an iteration is not terminated and should be continued for another step. For the new step, we have to update the threshold if needed (lines $15-17$). There are multiple ways to increase the value of the threshold; here, we just multiply it by some constant $c$ (see \mRefSec{sec:schGPU} for more details on selecting a desirable constant $c$). We continue updating the threshold until at least one worker's score is below it. Then, the scheduler chooses the workers ($\mathcal{W}'$) and comparison targets ($\mathcal{C}$) for the next step.
Workers with scores less than the threshold are taken as the workers of the new step (line $18$). $\mathcal{C}$ is updated for the selected workers in lines $19-23$. As mentioned, $\mathcal{C}$ is initialized with \textbf{$1$}, and workers compare themselves with worker $1$ in their first step. For the next steps, we keep increasing $\mathcal{C}[w]$ for worker $w$ until we find a pair ($w$, $\mathcal{C}[w]$) whose comparison has not been performed yet.
In a step, it might happen that two workers compare themselves with each other simultaneously, which is undesirable as they would perform redundant tests. In order to prevent such cases, the scheduler also checks for repeated comparison pairs (the second condition in line $22$).
A worker might finish its comparisons while its score is greater than the threshold, and its score may then fall below the threshold after the update. Such workers have to wait for the next step. In this case, first, $\mathcal{C}[w]$ is set to a value greater than $|\mathcal{C}|$, and then the scheduler omits such workers from $\mathcal{W}'$ (line $24$).
Some details of scheduling depend on implementation considerations and will be discussed in \mRefSec{sec:schGPU}.
\subsection{Mathematical Simplification}
\label{sec:math}
As mentioned earlier, the DirectLiNGAM algorithm always works with normalized data. Furthermore, normalization and regression tasks are performed frequently in the algorithm. These tasks depend on computing the covariances of variables, and it would be desirable to obtain them in an efficient manner. In this section, we show that if the relationship between variables is linear, which is one of the main assumptions in LiNGAM, we can compute the variance of the residual of each regression and use it in the normalization step (lines $7 - 8$ in \mRefAlg{alg:baseLiNGAM_FR}). Furthermore, we can compute the coefficients used in regressing variables in the data-updating procedure (line $7$ in \mRefAlg{alg:baseLiNGAM}).
First, we calculate the adjusted sample variance $s^2$ of the residual of a regression. The residual of $x_i$ regressed on $x_j$ (denoted by $r_i^{(j)}$) is defined as:
\begin{equation}
r_i^{(j)} = x_i - \frac{cov(x_i, x_j)}{var(x_j)} x_j, \quad i \neq j.
\end{equation}
Moreover, if we assume that the samples are normalized, it can be shown that $E[r_i^{(j)}] = 0$. Furthermore, calculating residuals only requires the covariance matrix. Let $b = cov(x_i, x_j)$ and assume that both variables are normalized. We can write:
\begin{equation}
\begin{split}
s^2 &= \frac{1}{n-1} \sum (r_i^{(j)} - E[r_i^{(j)}])^2 \\
&= \frac{1}{n-1} \sum (x_i - b x_j)^2 = \frac{1}{n-1} \sum (x_i^2 + b^2x_j^2 - 2b x_i x_j) \\
&= \frac{1}{n-1} (\sum x_i^2 + b^2 \sum x_j^2 - 2b\sum x_i x_j) \\
&= var(x_i) + b^2 var(x_j) - 2b cov(x_i, x_j)\\
&= 1 + b^2 - 2b^2 = 1 - b^2 = 1 - cov^2(x_i, x_j).
\end{split}
\label{eq:var_update}
\end{equation}
Thus, we showed that the variance of $r_i^{(j)}$ equals $1 - cov^2(x_i, x_j)$. Therefore, in order to normalize the residual, we just have to divide all of its samples by $\sqrt{1 - cov^2(x_i, x_j)}$. The details of the $UpdateData$ function, which is called in \mRefAlg{alg:ParaLiNGAM}, are given in \mRefAlg{alg:UpdateData}.
\begin{algorithm}
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma, root$
\Output $\mathcal{X}$
\item[\textbf{\# of Workers:}] $|U|$
\State $\mathcal{W} = [1, 2, 3, ..., |U|]$
\State Par: \textbf{for} {$~~(i=0;~~ i < |\mathcal{X}[w,:]|;~~ i+=1)$} \textbf{do}
\State $~~~~\mathcal{X}[w, i] = \dfrac{\mathcal{X}[w, i] - \Sigma[w, root] \mathcal{X}[root, i]}{\sqrt{1 - \Sigma^2[w, root]}}$
\State \textbf{end for}
\end{algorithmic}
\caption{$UpdateData$}
\label{alg:UpdateData}
\end{algorithm}
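For concreteness, here is a serial C++ sketch of this update (ours; in the paper's GPU version each remaining variable $w$ is handled by one block whose threads split the sample loop). It assumes $\mathcal{X}$ is stored as one row of $n$ samples per variable and $|\Sigma[w,root]|<1$:
\begin{verbatim}
#include <cmath>
#include <vector>

// UpdateData sketch: replace each remaining variable by its residual
// with respect to the root, normalized using Eq. (eq:var_update).
void updateData(std::vector<std::vector<double>>& X,
                const std::vector<int>& U,            // remaining variables
                const std::vector<std::vector<double>>& Sigma,
                int root) {
    const int n = static_cast<int>(X[root].size());
    for (int w : U) {                                 // parallel over workers
        const double b = Sigma[w][root];              // cov(x_w, x_root)
        const double s = std::sqrt(1.0 - b * b);      // residual std. dev.
        for (int i = 0; i < n; ++i)
            X[w][i] = (X[w][i] - b * X[root][i]) / s; // normalized residual
    }
}
\end{verbatim}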
Next, we calculate the covariance between two residuals, which can be used in computing the covariance matrix for the next iteration.
Suppose $b_1 = cov(x_i, x_{root})$ and $b_2 = cov(x_j, x_{root})$. We have:
\begin{equation}
\begin{split}
cov(r_i^{(root)}, r_j^{(root)}) &= \frac{1}{n-1} \sum (x_i - b_1x_{root})(x_j - b_2x_{root}) \\
&= \!\begin{multlined}[t]
\frac{1}{n-1} \sum (x_ix_j - b_2x_ix_{root} - b_1 x_j x_{root} \\+ b_1b_2x_{root}^2)
\end{multlined}\\
&= \!\begin{multlined}[t]
cov(x_i, x_j) - b_2 cov(x_i, x_{root}) \\- b_1 cov(x_j, x_{root}) + b_1b_2 var(x_{root})
\end{multlined}\\
&= cov(x_i, x_j) - b_1b_2.
\end{split}
\label{eq:CovUpdate}
\end{equation}
Equations \ref{eq:CovUpdate} and \ref{eq:var_update} allow us to update the covariance matrix in each iteration directly from the covariance matrix of the previous iteration, without using the variables' samples (except in the first iteration).
The details of the $UpdateCovMat$ function, which is called in \mRefAlg{alg:ParaLiNGAM}, are given in \mRefAlg{alg:UpdateCovMat}. Please note that $r_i^{(root)}$ and $r_j^{(root)}$ are not normalized in \mRefEq{eq:CovUpdate}. Therefore, the expression for the covariance needs to be divided by the standard deviations of $r_i^{(root)}$ and $r_j^{(root)}$ (line 3 in \mRefAlg{alg:UpdateCovMat}).
\begin{algorithm}
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma, root$
\Output $\Sigma$
\item[\textbf{\# of Workers:}] $|U|$
\State $\mathcal{W} = [1, 2, 3, ..., |U|]$
\State Par: \textbf{for} {$~~(j=0;~~ j < |\Sigma[w,:]|;~~ j+=1)$} \textbf{do}
\State $~~~~\Sigma[w, j] = \dfrac{\Sigma[w, j] - \Sigma[w, root] \Sigma[j, root]} {\sqrt{1 - \Sigma^2[w, root]}\sqrt{1 - \Sigma^2[j, root]}}$
\State \textbf{end for}
\end{algorithmic}
\caption{$UpdateCovMat$}
\label{alg:UpdateCovMat}
\end{algorithm}
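Analogously, a serial C++ sketch of this update (ours, not the authors' kernel) reads as follows; it updates $\Sigma$ in place, which is safe because the $root$ column is only read and each remaining entry is read exactly once before being overwritten:
\begin{verbatim}
#include <cmath>
#include <vector>

// UpdateCovMat sketch: apply cov(r_i, r_j) = Sigma[i][j] - b_i * b_j
// from Eq. (eq:CovUpdate), then renormalize by the residual std. devs.
void updateCovMat(std::vector<std::vector<double>>& Sigma,
                  const std::vector<int>& U,  // remaining vars (root removed)
                  int root) {
    for (int w : U) {                         // parallel over workers
        const double bw = Sigma[w][root];
        for (int j : U) {
            const double bj = Sigma[j][root];
            Sigma[w][j] = (Sigma[w][j] - bw * bj) /
                          (std::sqrt(1.0 - bw * bw) *
                           std::sqrt(1.0 - bj * bj));
        }
    }
}
\end{verbatim}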
\section{Conclusion}
\label{sec:conc}
In this paper, we proposed a parallel algorithm for learning causal structures in the LiNGAM model, based on the DirectLiNGAM algorithm. In the proposed algorithm, we employed a threshold mechanism to save a large portion of the comparisons in DirectLiNGAM. Moreover, we proposed a messaging mechanism and mathematical simplifications to further reduce the runtime. Experiments showed the scalability of our proposed algorithm with respect to the number of variables, the number of samples, and different types of graphs, achieving remarkable performance with respect to the serial solution.
\appendices
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Experimental Evaluation}
\label{sec:exp}
\subsection{Setup}
\label{sec:exp:setting}
The proposed ParaLiNGAM algorithm is implemented in C++ using the CUDA parallel programming framework. The source code is available online \cite{sourceParaLiNGAM}.
We experimentally evaluate ParaLiNGAM along with DirectLiNGAM \cite{shimizu2011directlingam}, which is a sequential method. The latest implementation of DirectLiNGAM is in Python \cite{sourceDirectLiNGAM}. In order to have fair comparisons, we re-implemented DirectLiNGAM in C++; this implementation is also available at \cite{sourceParaLiNGAM}.
We employ a server machine with an Intel Xeon CPU with $16$ cores operating at $2.1$ GHz.
Since DirectLiNGAM is a sequential method, it is executed on a single core.
The CUDA kernels in ParaLiNGAM are executed on an Nvidia Tesla V$100$ GPU, and the other procedures are executed sequentially on a single core. We use Ubuntu 20.04, GCC version 9.3, and CUDA version 11.1.
\subsection{Real-World Datasets}
\label{sec:exp:real_data}
\begin{table}[tp]
\centering
\caption{Benchmark datasets.}
\label{tab:datasetSpec}
\begin{tabular}{|c|c|c|}
\hline
Dataset & \# of reactions & \# of non-zero variables ($p$) \\
\hline
iML1515 & 2712 & 2326 \\
\hline
iEC1372\_W3110 & 2758 & 2339 \\
\hline
iECDH10B\_1368 & 2742 & 2252 \\
\hline
iY75\_1357 & 2759 & 2249 \\
\hline
iAF1260b & 2388 & 1588 \\
\hline
iAF1260 & 2382 & 1633 \\
\hline
iJR904 & 1075 & 770 \\
\hline
E.coli Core &
95 & 85 \\
\hline
\end{tabular}
\end{table}
We utilize seven genome-scale metabolic networks as our benchmarks \cite{feist2007genome, reed2003expanded, monk2017ml1515, orth2011comprehensive, monk2016multi, monk2013genome, feist2010model} to evaluate the performance of ParaLiNGAM.
Metabolites are molecules involved in chemical reactions in a cell, and sets of such chemical reactions are called metabolic networks \cite{lacroix2008introduction}.
These metabolic networks can be studied in silico with flux balance analysis \cite{orth2010flux}, which is a way to simulate metabolic networks and to measure the effects of external influences on the system. Flux balance analysis is utilized to generate the datasets.
The data generation procedure is performed with the COBRA toolbox \cite{becker2007quantitative, schellenberger2011quantitative}, and it considers the metabolic network of the Escherichia coli bacterium str. K-12. In order to utilize these networks, we acquire their models from BiGG Models \cite{king2016bigg}.
To generate data, we first import the models from BiGG Models into COBRA. Then, utilizing COBRA's optGpSampler \cite{megchelenbrink2014optgpsampler}, we generate data by uniformly sampling from the solution space with the hit-and-run algorithm.
In this algorithm, $n$ points (samples) are first generated in the middle of the solution space, and then these points are relocated in random directions. Even after a substantial number of steps in this procedure, some reactions' samples remain zero; these reactions are removed from the datasets.
Details of the generated datasets are shown in Table~\ref{tab:datasetSpec}. The second column shows the number of reactions, and the third column shows the number of non-zero variables among these reactions.
Each variable (reaction) is generated with $10000$ samples.
\subsection{Performance Comparison}
\subsubsection*{Comparing ParaLiNGAM with DirectLiNGAM }
The runtimes of both DirectLiNGAM and ParaLiNGAM are reported in Table \ref{tab:realTimings}. The accuracy of the proposed solution is exactly the same as that of DirectLiNGAM; thus, we only report the runtimes of these experiments.
The third column shows the serial runtime. The serial runtime on the iJR904 dataset is 287780 seconds (approximately 3.3 days).
Since the computational complexity grows cubically with the number of variables, the runtime on the iAF1260b dataset, which is the next smallest dataset after iJR904, would probably be longer than three weeks. Thus, it is impractical to measure the runtime for the other datasets, and the serial runtime is reported for just two datasets: 485 seconds for E.coli Core and 287780 seconds for iJR904. More comparisons with the serial solution are reported in Section \ref{sec:scalability} for synthetic datasets.
The fourth column reports the runtime of ParaLiNGAM for all datasets, which ranges from 0.759 seconds to 1420 seconds. The speedup ratio over serial execution for the measured datasets is up to 3152.
\begin{table*}[tp]
\centering
\caption{Comparing the serial and parallel implementations. The third and fourth columns show the runtimes. The last column shows the speedup ratio, which is calculated by dividing the serial runtime by the parallel runtime.}
\label{tab:realTimings}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & \# of variables & Serial runtime (sec.) & GPU runtime (sec.) & Speedup ratio \\
\hline
iML1515 & 2326 & \textemdash \textemdash & 1321 & \textemdash \textemdash\\
\hline
iEC1372\_W3110 & 2339 & \textemdash \textemdash & 1420 & \textemdash \textemdash \\
\hline
iECDH10B\_1368 & 2252 & \textemdash \textemdash & 1216 & \textemdash \textemdash \\
\hline
iY75\_1357 & 2249 & \textemdash \textemdash & 1174 & \textemdash \textemdash \\
\hline
iAF1260b & 1588 & \textemdash \textemdash & 507 & \textemdash \textemdash \\
\hline
iAF1260 & 1633 & \textemdash \textemdash & 518 & \textemdash \textemdash \\
\hline
iJR904 & 770 & 287780 ($\sim$3.3 days) & 91.3 & 3152 \\
\hline
E.coli core & 85 & 485 & 0.759 & 638\\
\hline
\end{tabular}
\end{table*}
\subsubsection*{Comparing ParaLiNGAM with Other Parallel Methods}
Herein, three baseline parallel algorithms are introduced, and their performance is compared against the proposed ParaLiNGAM algorithm (see Fig. \ref{fig:baseLineTiming}). The first baseline algorithm is formed by assigning each variable to a block, and the blocks compare themselves to each other. Specifically, we have as many blocks as remaining variables in each iteration, and each block performs one comparison at a time. We call this algorithm \textbf{Block Worker}.
The second baseline algorithm is similar to the previous one in that each variable is assigned to a block, but within each block, threads are responsible for different comparisons. Hence, each block can perform several comparisons simultaneously. Since running many comparisons in parallel requires a lot of memory, this method cannot fit in GPU memory for large numbers of variables. Specifically, we need separate memory for the normalized variables and the calculated residuals (see the Compare function in Algorithm \ref{alg:baseLiNGAM_FR}). As a result, we need $O(r^2 n)$ memory ($r$ is the number of remaining variables) in each iteration, which is problematic for large datasets. In Block Worker, by contrast, we have just $r$ simultaneous comparisons at a time and require only $O(r n)$ memory, which is a moderate amount.
Hence, we need to optimize memory usage in this algorithm.
Specifically, we store parameters such as the mean, variance, and covariance of the variables being compared. Then, we merely read the input data from memory to perform the comparisons. This solution not only resolves the memory issues but also reduces the runtime by avoiding redundant memory reads and writes. We call this algorithm \textbf{Thread Worker}.
In the third baseline algorithm, in each iteration, every comparison is assigned to a block, i.e., we have $r\times r$ blocks ($r$ is the number of remaining variables), and the block with index $(i, j)$ compares $X_i$ with $X_j$. As in the previous baseline algorithm, performing many comparisons in parallel requires a lot of memory; therefore, the above-mentioned optimizations are utilized to reduce memory usage in this algorithm as well. We call this algorithm \textbf{Block Compare}.
\begin{figure*}[tp]
\centering
\includegraphics[width = \linewidth]{fig/baseLine}
\vskip -3mm
\caption{Comparing the performance of ParaLiNGAM with three baseline algorithms on GPU. Each bar illustrates the speedup ratio of ParaLiNGAM over the baseline algorithm.}
\label{fig:baseLineTiming}
\end{figure*}
As illustrated in Fig. \ref{fig:baseLineTiming}, ParaLiNGAM is between 2.4X and 16X faster than the Block Worker method, between 6.7X and 11.5X faster than Thread Worker, and between 1.1X and 10.4X faster than Block Compare. Note that the E.coli Core dataset has fewer variables than the other datasets and under-utilizes the GPU.
In most cases, Block Worker has the worst performance among the three baseline methods, due to its inefficient memory usage. Between Thread Worker and Block Compare, the former employs too much parallelism, and the threads of each block access different parts of memory; this is less efficient than Block Compare, in which there is a limited number of concurrent comparisons (the number of blocks that can run simultaneously on the GPU).
\subsection{Scalability}
\label{sec:scalability}
In this section, we evaluate the scalability of ParaLiNGAM. In particular, we measure the runtime of our proposed algorithm against DirectLiNGAM for different numbers of variables ($p$) and samples ($n$).
We follow a similar procedure as ICA-LiNGAM \cite{shimizu2006linear} for the data generation mechanism.
First, we choose the number of parents for each variable and generate a random adjacency matrix $B$. In sparse graphs, the number of parents is uniformly selected from the interval $[1, 0.2 p]$, and for dense graphs the interval is $[0.25 p, 0.5p]$ ($p$ is the number of variables).
Next, the non-zero entries of $B$ are replaced with random values from $[-0.95, -0.5] \cup [0.5, 0.95]$.
Next, we generate the exogenous noise $N_i$ for each variable by sampling from a Gaussian distribution and passing the samples through a power non-linearity (keeping the same sign, but raising the absolute value to an exponent drawn from $[0.5, 0.8] \cup [1.2, 2]$).
Finally, we generate samples for all variables recursively and permute the variables randomly. For this section, datasets are generated for $p = 100, 200, 500, 1000$ variables and sample sizes of $n = 1024, 2048, 4096, 8192$.
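The following C++ sketch (our reconstruction of the above recipe, not the authors' generator; parent selection and the final variable permutation are simplified) illustrates the mechanism:
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Sketch of the synthetic LiNGAM data mechanism: random strictly
// lower-triangular B, non-Gaussian noise via a signed power transform,
// and recursive sample generation in causal order 0..p-1.
std::vector<std::vector<double>> generateLiNGAMData(int p, int n, bool dense,
                                                    std::mt19937& gen) {
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::uniform_real_distribution<double> coef(0.5, 0.95);
    std::uniform_real_distribution<double> expLo(0.5, 0.8), expHi(1.2, 2.0);
    std::normal_distribution<double> gauss(0.0, 1.0);

    std::vector<std::vector<double>> B(p, std::vector<double>(p, 0.0));
    for (int i = 1; i < p; ++i) {
        const double lo = dense ? 0.25 * p : 1.0;
        const double hi = dense ? 0.5 * p : 0.2 * p;
        const int parents = std::min(
            i, std::max(1, static_cast<int>(lo + unit(gen) * (hi - lo))));
        std::uniform_int_distribution<int> pick(0, i - 1);
        for (int k = 0; k < parents; ++k) {  // duplicates possible; a sketch
            const double sgn = (unit(gen) < 0.5) ? -1.0 : 1.0;
            B[i][pick(gen)] = sgn * coef(gen);  // in [-0.95,-0.5] U [0.5,0.95]
        }
    }
    std::vector<std::vector<double>> X(p, std::vector<double>(n));
    for (int s = 0; s < n; ++s)
        for (int i = 0; i < p; ++i) {
            const double z = gauss(gen);
            const double q = (unit(gen) < 0.5) ? expLo(gen) : expHi(gen);
            // signed power transform makes the noise non-Gaussian
            double v = std::copysign(std::pow(std::fabs(z), q), z);
            for (int j = 0; j < i; ++j) v += B[i][j] * X[j][s];
            X[i][s] = v;
        }
    return X;  // a random permutation of variable indices would follow
}
\end{verbatim}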
As mentioned earlier, the proposed solution does not change the algorithm's accuracy. Therefore, only the runtimes of the algorithms are shown in Fig. \ref{fig:syntheticTiming} for sparse and dense graphs.
\begin{figure*}[tp]
\centering
\includegraphics[width = 1.05 \linewidth]{fig/synTime}
\caption{Runtimes of ParaLiNGAM and DirectLiNGAM for (a) sparse graphs and (b) dense graphs, for different numbers of variables and sample sizes.}
\label{fig:syntheticTiming}
\end{figure*}
In Fig. \ref{fig:syntheticTiming}, each column shows the runtime for $p=100, 200, 500, 1000$, and rows (a) and (b) show the runtimes for sparse and dense graphs, respectively. For DirectLiNGAM, the runtimes on sparse graphs are similar to those on dense graphs, since the procedure and computations are the same: the independence test is performed for all pairs of variables in all iterations regardless of the graph density. Hence, the runtime depends only on the number of variables and samples. The runtime of DirectLiNGAM varies from 71.4 seconds to 658806 seconds ($\sim$ 7.6 days). ParaLiNGAM attains a much smaller runtime, varying from 119 milliseconds to 151 seconds. The speedup ratio of the proposed algorithm over the serial implementation ranges from 536X to 4657X. Furthermore, the speedup ratio increases as $p$ and $n$ increase.
\section{Implementation Details}
\label{sec:alg2}
Further details on the proposed parallel algorithm are presented in this section. First, a short background on CUDA is given in Section \ref{sec:CUDA}. Further details on our GPU implementation are discussed in Sections \ref{sec:GPUImp} and \ref{sec:schGPU}.
\subsection{CUDA}
\label{sec:CUDA}
CUDA is a parallel programming API for Nvidia GPUs. A GPU is a massively parallel processor with hundreds to thousands of cores.
CUDA follows a hierarchical programming model. At the top level, computationally intensive functions are specified by the programmer as CUDA \textbf{kernels}. For brevity, from now on we write kernel instead of CUDA kernel. A kernel is specified as a sequential function for a single \textbf{thread}. The kernel is then launched for parallel execution on the GPU by specifying the number of concurrent threads.
Threads are grouped into \textbf{blocks}. A GPU kernel consists of a number of blocks, and every block consists of a number of threads.
In order to identify blocks within a kernel, and threads within a block, a set of indices are used in the CUDA API, for instance, $blockIdx.x$ denotes the block index in dimension $x$ within a kernel.
\subsection{GPU Implementation}
\label{sec:GPUImp}
In this section, we present further details on the implementation of ParaLiNGAM on GPU hardware.
In the ParaLiNGAM algorithm, we use workers to handle the variables' tasks. In CUDA, we assign workers to blocks. Moreover, each block can divide its computations among parallel threads to improve performance, e.g., for performing tasks like $CheckMessages$ and $Compare$. Herein, for brevity of notation, we denote $blockIdx.x$ by $w$.
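As a minimal illustration of this mapping (our example, not the paper's kernel), the sketch below lets each block reduce one worker's data, assuming a fixed block size of 256 threads:
\begin{verbatim}
// One block per worker; the block's threads split the per-worker loop.
__global__ void workerSum(const float* data, float* out, int n) {
    const int w = blockIdx.x;              // worker index = block index
    __shared__ float buf[256];             // assumes blockDim.x == 256
    float partial = 0.0f;
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        partial += data[w * n + i];        // threads split the sample loop
    buf[threadIdx.x] = partial;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction
        if (threadIdx.x < s) buf[threadIdx.x] += buf[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) out[w] = buf[0]; // one result per worker
}
// Launched as: workerSum<<<numWorkers, 256>>>(d_data, d_out, n);
\end{verbatim}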
Now we discuss two approaches to implement ParaLiNGAM on GPU.
The first approach is to assign fewer threads to each block, decreasing the computational power of each block, while running all of the blocks together.
Thereby, a whole iteration can be launched with one kernel. Moreover, we can perform the scheduling on the GPU by dedicating one block to the scheduler, i.e., all blocks but one skip the scheduling part based on their block IDs.
However, this solution has some drawbacks.
First, each worker requires its own exclusive memory; hence, all workers may not fit in GPU memory. As a result, this approach is not scalable, and we cannot utilize it for a large number of variables.
Moreover, the blocks are slower due to the smaller number of threads. As a result, this approach would be time-consuming even if all the blocks fit on the GPU.
The second approach is to run the $Compare$ and $CheckMessages$ tasks in separate kernels on the GPU while performing the scheduling task on the host (CPU). This approach is scalable and does not have the above problems. However, launching kernels is time-consuming, and we launch two kernels separately for each step, which is not efficient.
Unlike the first approach, the second approach can be modified to improve its performance, as discussed in the next part.
\subsection{Scheduling on GPU}
\label{sec:schGPU}
In order to resolve the second approach's issues, one solution is to relax the synchronization between $Compare$ and $CheckMessages$. In Algorithm \ref{alg:ParaLiNGAM_FR}, we consider a barrier between these two functions (lines $13$ and $14$) in order to update the workers' scores faster, by checking their messages only after all messages have been received.
Removing the barrier causes some workers to receive their messages in later steps. Hence, relaxing the synchronization delays the delivery of some messages; however, it halves the kernel launching delays, which is quite beneficial. As a result, the workers (blocks) can compare and check for messages independently, and thus we can merge the $Compare$ and $CheckMessages$ kernels. Still, assigning each step to a kernel and evaluating the iteration status on the host is not an efficient option and causes too many kernel calls. Therefore, the scheduling procedure must be revised.
In ParaLiNGAM, a worker can perform just one comparison (if it is active) in each step. This limitation was due to the synchronization of $Compare$ and $CheckMessages$, which is now relaxed.
Hence, we can now divide an iteration into steps by threshold updates instead of by single comparisons. In other words, workers can now continue their comparisons until they reach the threshold, instead of performing just one comparison per step.
To implement this solution, we modify some of the scheduler's tasks in order to move them to the workers. The scheduler has three tasks: checking the termination of an iteration, updating the threshold, and updating $\mathcal{C}$. Now we discuss the modifications needed in these three tasks.
For the first task, i.e., checking the iteration's termination, worker $w$ can keep track of its finished comparisons by checking $\mathcal{D}[w]$.
Therefore, workers can continue to compare themselves with each other until they reach the threshold or finish their comparisons. Moreover, they can announce their state via the $finish$ flag.
The second task is updating the threshold, which is the mechanism that divides an iteration into steps.
In Algorithm \ref{alg:ParaLiNGAM_FR}, workers synchronize before updating the threshold.
As mentioned before, synchronizing within a kernel is not efficient. Hence, we update the threshold outside of the kernel, on the host.
In Section \ref{sec:scheduler}, the constant $c$ was introduced to control the amount of change in the threshold in each update.
In particular, higher values of $c$ increase the number of comparisons; however, they can be more efficient due to fewer kernel calls. As a result, this parameter must be adjusted according to the number of tests, the test duration, and the kernel launch delays.
For the last task, workers can determine their comparison targets themselves by checking their unfinished comparisons in $\mathcal{D}$.
Now we discuss the changes in the main algorithms needed to implement them on the GPU, which are given in Algorithm \ref{alg:SchedulerV2} and Algorithm \ref{alg:GPUKernel}.
The main loop of Algorithm \ref{alg:ParaLiNGAM_FR} (lines $12-16$) is replaced by Algorithm \ref{alg:SchedulerV2}. In this algorithm, $\mathcal{C}$ is modified for more efficiency, and the functions $Compare$ and $CheckMessages$, as well as some parts of $Scheduler$, are moved to $GPUKernel$ (Algorithm \ref{alg:GPUKernel}).
\begin{algorithm}
\begin{algorithmic}
\State $\mathcal{C} = [2, 3, ..., r, 1]$
\State $finish = False$
\Repeat
\State GPU: $finish, State = GPUKernel(\mathcal{W},
State)$
\State $\gamma *= c$
\Until{$finish$}
\end{algorithmic}
\caption{ParaLiNGAM code patch}
\label{alg:SchedulerV2}
\end{algorithm}
The description of $GPUKernel$ is given in Algorithm \ref{alg:GPUKernel}.
The main part of the algorithm is the loop in lines $1-13$. A worker's task is completed when it has compared itself to all other workers, which is checked in line $13$.
In line $2$, as in line $14$ of Algorithm \ref{alg:ParaLiNGAM_FR}, workers check for their messages. Since the GPU has limited resources, it queues blocks when it cannot fit all of them on its streaming multiprocessors (SMs). Hence, some workers launch later; checking messages as the first task helps them gain information from previously active workers and see whether their scores have already reached the threshold.
In lines $3-5$, workers check whether their scores have reached the threshold. In that case, they have to stop for this step and wait for the threshold update.
Then, in lines $6-8$, workers also check for the termination of the iteration, and if they have finished their comparisons after checking messages, they set $finish$ to $True$.
Next, in lines $9-11$, workers choose their comparison targets by increasing $\mathcal{C}[w]$.
Then, workers perform their comparisons in line $12$. Finally, if a worker finishes its comparisons without reaching the threshold, it sets $finish$ to $True$ and notifies the host scheduler that this is the end of the current iteration.
In line $22$ of Algorithm \ref{alg:Scheduler}, we used a condition to avoid redundant comparisons. In the discussed GPU implementation, we perform comparisons asynchronously; hence, checking for redundant comparisons is not as straightforward as before, and we need a more involved mechanism. For brevity, these details are not shown in the algorithms. In short, we utilize a flag for each comparison, and workers try to lock it with an atomicCAS operation.
Moreover, if a worker reaches a comparison whose flag is already set, the worker skips that comparison and will receive that comparison's result later.
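A hypothetical device-side sketch of this locking trick (ours; the flag array and its layout are our assumptions) could look as follows:
\begin{verbatim}
// Flags form an r x r integer array, zero-initialized per iteration.
// atomicCAS returns the old value, so exactly one of the two symmetric
// workers claims the (i, j) comparison; the loser skips it and later
// receives the result as a message.
__device__ bool tryClaimComparison(int* flags, int r, int i, int j) {
    const int lo = min(i, j), hi = max(i, j); // canonical index of the pair
    return atomicCAS(&flags[lo * r + hi], 0, 1) == 0; // true iff we won
}
\end{verbatim}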
\begin{algorithm}[h]
\begin{algorithmic}[1]
\Input $State$
\Output $finish$
\Block p
\Repeat
\State $\mathcal{S}[w] += CheckMessages(w, State)$\hfill //Check
\If {$\mathcal{S}[w] > \gamma$}\hfill //Evaluate
\State \textbf{Exit}
\EndIf
\If {$\mathcal{D}[w, :]$ is all $True$}
\State \textbf{break}
\EndIf
\Repeat \hfill // Compare
\State $\mathcal{C}[w] = (\mathcal{C}[w]+1)\%size(U)$
\Until{$\mathcal{D}[w][\mathcal{C}[w]]$ is $False$}
\State $\mathcal{S}[w] += Compare(w, \mathcal{C}[w], State)$
\Until{$\mathcal{D}[w, :]$ is all $True$}
\State $finish = True$
\end{algorithmic}
\caption{$GPUKernel$}
\label{alg:GPUKernel}
\end{algorithm}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{D}{iscovering} the underlying causal mechanisms in various natural phenomena or human social behavior is one of the primary goals in artificial intelligence and machine learning. For instance, we may be interested in recovering causal relationships between different regions of the brain by processing fMRI signals \cite{huang2021diagnosis,sanchez2019estimating}, or in estimating causal strengths between genes in a gene regulatory network (GRN) by observing gene expression levels \cite{marbach2012wisdom,haury2012tigress}. Having access to such causal relationships enables us to answer interventional or counterfactual questions, which has broad impact on designing a truly intelligent system \cite{pearl2018book}. The gold standard for causal discovery is conducting controlled experiments. Unfortunately, performing experiments in a system might be too costly or even infeasible \cite{ghassami2018budgeted}. As a result, there have been extensive studies in the causality literature on recovering causal relationships from merely observational data \cite{peters2017elements}.
Causal relationships among a set of variables can be represented by a directed acyclic graph (DAG), where there is a directed edge from variable $X$ to variable $Y$ if $X$ is a direct cause of $Y$. It can be shown that, from the observational distribution, the true underlying causal graph can be recovered up to a Markov equivalence class (MEC) \cite{koller2009probabilistic}. There are two main approaches for recovering an MEC: constraint-based and score-based. In the constraint-based approach, the MEC is identified by performing a sufficient number of conditional independence (CI) tests over the observational distribution. PC \cite{spirtes2000causation} is a well-known algorithm for performing such CI tests in an efficient manner; it runs in polynomial time if the maximum degree of the causal graph is bounded by a constant. In the score-based approach, the goal is to find the class of graphs maximizing a likelihood-based score. Greedy equivalence search (GES) \cite{chickering2002optimal} is one of the main score-based algorithms, which reconstructs the MEC by adding edges in a greedy manner.
As mentioned above, without further assumptions on the causal model, one can recover the causal graph only up to an MEC. In order to uniquely recover the causal graph, we need to impose further assumptions on the causal mechanisms. For instance, if the causal mechanisms are non-linear and the exogenous noises are additive, then the causal structure can be identified uniquely \cite{hoyer2008nonlinear}. Moreover, if the causal mechanisms are linear, we can still recover the causal graph uniquely if the additive exogenous noises are non-Gaussian \cite{shimizu2006linear}. This model assumption is commonly called the linear non-Gaussian acyclic model (LiNGAM) \cite{shimizu2006linear}. In~\cite{shimizu2006linear}, an algorithm based on independent component analysis (ICA), commonly called ICA-LiNGAM, was proposed, which recovers the true causal graph under the LiNGAM model. Later, a regression-based method, called DirectLiNGAM~\cite{shimizu2011directlingam}, was presented to mitigate issues in using the ICA algorithm. The DirectLiNGAM algorithm has two main steps. In the first step, a causal order is obtained over the variables in the system. To do so, we compare any pair of variables, say $X$ and $Y$, by regressing $Y$ on $X$ and checking whether the residual is independent of $X$. A score is computed to measure the amount of dependency~\cite{hyvarinen2013pairwise}. Afterwards, we select the variable that is most independent of its residuals, i.e., the one having the minimum score among the remaining variables, and append it to the causal order. We call this variable the root variable of the iteration. Next, we remove this variable from the system by regressing it out, and repeat the same procedure until no variable remains. After obtaining the causal order, in the second step, we perform multiple linear regressions based on the causal order in order to recover the underlying causal graph.
The execution of constraint-based or score-based algorithms might become too time-consuming as the number of variables in the system increases \cite{zarebavani2019cupc}. There have been some recent efforts to accelerate causal structure learning algorithms on multi-core machines.
In the constraint-based approach, Le et al.~\cite{le2015fast} implemented a parallel version of a variant of the PC algorithm (called PC-stable) on multi-core CPUs, which reduces the runtime by an order of magnitude. Madsen et al. \cite{madsen2017parallel} proposed a method to perform the conditional independence tests of the PC algorithm in parallel. For GPU hardware, Schmidt et al. \cite{schmidt2018order} proposed a method to parallelize a small part of the PC-stable algorithm. In \cite{zarebavani2019cupc}, Zare et al. proposed a GPU-based parallel algorithm for accelerating the whole PC-stable algorithm, which parallelizes the conditional independence tests over the pairs of variables or the conditioning sets. Experimental results showed a significant speedup ratio of up to 4000 on various real datasets. In \cite{schmidt2019out}, Schmidt et al. devised an out-of-core solution for accelerating PC-stable in order to handle extremely high-dimensional settings. Later, for discrete data, Hagedorn and Huegle \cite{hagedorn2021gpu} proposed a parallel PC algorithm on GPU for learning causal structures. Recently, Srivastava et al. \cite{srivastava2020parallel} presented a parallel framework for learning causal structures based on discovering Markov blankets.
In the score-based approach, Ramsey et al. \cite{ramsey2017million} proposed the fast GES algorithm, which accelerates score updates by caching the scores of previous steps; they also implemented a parallel version of it on multi-core CPUs. Furthermore, there is a recent parallel solution for other search algorithms in the score-based approach \cite{lee2019parallel}.
There are some recent studies whose main focus is evaluating the performance of causal structure learning algorithms in recovering the true underlying causal graph \cite{scutari2018learns,heinze2018causal}. It has been shown that the LiNGAM algorithm has comparable or better performance than most existing methods and is more suitable for high-dimensional settings \cite{heinze2018causal}. Unfortunately, the runtimes of both variants of the LiNGAM algorithm (ICA-LiNGAM and DirectLiNGAM) grow significantly as the number of variables increases. Thus, the current sequential implementations cannot be utilized for datasets with a large number of variables. To the best of our knowledge, there is no previous parallel implementation of the LiNGAM algorithm. In this paper, we propose a parallel algorithm, which we call ParaLiNGAM, for learning causal structures based on the DirectLiNGAM algorithm. Our experiments show that the first step of DirectLiNGAM is computationally intensive, and we therefore focus on accelerating this step. Similar to DirectLiNGAM, we obtain the causal order in a number of sequential iterations, while in each iteration we parallelize the process of finding the root variable.
The main contributions of the paper are given as follows:
\begin{itemize}
\item
We propose a threshold mechanism in order to reduce the number of comparisons in each iteration. In this mechanism, we consider an upper limit on the score of the root variable, and whenever a variable's score exceeds this limit, we do not perform further comparisons for that variable. Our experiments show that the threshold mechanism can save up to $93.1\%$ of the comparisons performed in DirectLiNGAM.
\item When we compare variable $X$ with $Y$, a part of the computation is the same as when comparing $Y$ with $X$ in the reverse direction. Thanks to this observation, we employ a messaging mechanism to avoid performing redundant computations, which reduces runtimes by a factor of about two.
\item We derive mathematical formulations for the normalization and regression operations that are frequently performed in the DirectLiNGAM algorithm. These formulations enable us to reduce the runtime and use memory more efficiently.
\item We evaluate ParaLiNGAM on various synthetic and real data. Experimental results show that the proposed algorithm can reduce the runtime of DirectLiNGAM significantly, by a factor of up to $4657$.
\end{itemize}
The rest of this paper is organized as follows. In Section \ref{sec:prelim}, we review some preliminaries on structural equation models, the LiNGAM model, and the DirectLiNGAM algorithm. In Section \ref{sec:alg}, we present the ParaLiNGAM algorithm for learning causal structures in the LiNGAM model. We evaluate the performance of the ParaLiNGAM algorithm in Section \ref{sec:exp}. We provide some implementation details in Section \ref{sec:alg2}. Finally, we conclude the paper in Section \ref{sec:conc}.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Structural equation models}
\label{sec:prelim:sem}
Structural equation models (SEMs) are mathematical models that can be used to describe the data-generating process and the causal relations among variables \cite{bollen1989structural, pearl2000models}. In particular, an SEM consists of a collection of $p$ equations, where $p$ is the number of variables in the system. The causal mechanism assigning values to variable $X_j$, $1\leq j \leq p$, can be written as follows:
\begin{equation}
X_j= f_j(PA_j, N_j),
\end{equation}
where $PA_j$ denotes the parents of $X_j$, i.e., its direct causes. Moreover, $N_j$ is the exogenous noise corresponding to variable $X_j$. Exogenous noises are generated outside of the model, and their data-generating processes are not modeled in the SEM.
We can represent causal relationships among the variables in an SEM by a directed graph where there is a directed edge from $X_i$ to $X_j$ if $X_i \in PA_j$.
\begin{Example}
Consider the following SEM:
\begin{figure}[tp]
\centering
\includegraphics[width = 0.45 \columnwidth]{fig/Casuse_effect}
\caption{Example of an SEM.}
\label{fig:pre:SEM}
\end{figure}
\begin{equation}
\begin{gathered}
X_3 = N_3, \\
X_5 = f_5(X_3, N_5), \\
X_1 = f_1(X_5, N_1), \\
X_4 = f_4(X_1, N_4), \\
X_2 = f_2(X_3, X_4, N_2),
\end{gathered}
\label{eq:sem}
\end{equation}
where the corresponding causal graph is illustrated in \mRefFig{fig:pre:SEM}. As can be seen, there is a directed edge from each direct cause to its effect. For instance, $X_3$ is a direct cause of $X_5$, and there is a directed edge from $X_3$ to $X_5$.
\label{ex:base}
\end{Example}
\subsection{Linear Non-Gaussian Acyclic Model (LiNGAM)}
One of the common assumptions in the causality literature is that the causal relations between variables are acyclic, i.e., the corresponding causal graph is a \textbf{directed acyclic graph} (\textbf{DAG})\footnote{A directed acyclic graph is a graph whose edges are all directed and which contains no directed cycle.}. Under this assumption, there always exists a \textbf{causal order} of the variables $X_i$, $i\in\{1,\dotsc,p\}$, in the DAG such that no later variable in the causal order has a directed path to any earlier variable. We denote the position of each variable $X_i$ in the causal order by $k(i)$. For instance, for the causal graph in \mRefFig{fig:pre:SEM}, $k=[3,5,1,4,2]$
is a causal order.
As an additional assumption, one can consider the functional relations between variables to be linear. The model can then be reformulated as follows:
\begin{equation}
X_i = \sum_{k(j)<k(i)}b_{ij}X_{j} + N_i,
\label{eq:base_linear}
\end{equation}
where $b_{ij}$ is the causal strength representing the magnitude of the direct causal effect of $X_j$ on $X_i$. Furthermore, it is assumed that the exogenous noises have zero mean and non-zero variance, and are independent of each other (i.e., there is no latent confounder in the system). We can rewrite Eq. \ref{eq:base_linear} in matrix form as follows:
\begin{equation}
X = BX + N,
\label{eq:base_mat}
\end{equation}
where $X$ and $N$ are $p$-dimensional random vectors, and $B$ is a $p \times p$ matrix of causal strengths. For instance, the SEM of \mRefFig{fig:pre:SEM} can be written as follows:
\begin{equation}
\begin{bmatrix}
X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 3 \\
0 & 0 & 6 & -3 & 0 \\
0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 \\
0 & 0 & 5 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5
\end{bmatrix}
+
\begin{bmatrix}
N_1 \\ N_2 \\ N_3 \\ N_4 \\ N_5
\end{bmatrix}
,
\label{eq:base_mat_ex}
\end{equation}
where the zero entries of $B$ indicate the absence of directed edges. Due to the acyclicity assumption, it can be shown that a simultaneous permutation of the rows and columns of matrix $B$ according to a causal order converts it to a \textbf{strictly lower triangular} matrix \cite{bollen1989structural}.
For the example in \mRefFig{fig:pre:SEM}, we can rewrite the equations in the following form to make the matrix $B$ strictly lower triangular:
\begin{equation}
\begin{bmatrix}
X_3 \\ X_5 \\ X_1 \\ X_4 \\ X_2
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
5 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 \\
6 & 0 & 0 & -3 & 0 \\
\end{bmatrix}
\begin{bmatrix}
X_3 \\ X_5 \\ X_1 \\ X_4 \\ X_2
\end{bmatrix}
+
\begin{bmatrix}
N_3 \\ N_5 \\ N_1 \\ N_4 \\ N_2
\end{bmatrix}
.
\label{eq:base_mat_ex_prder}
\end{equation}
It can be shown that the causal structure cannot be recovered uniquely if the distributions of the exogenous noises are Gaussian~\cite{pearl2000models}.
However, in~\cite{shimizu2006linear}, it has been proved that the model can be fully identified from observational data if all the exogenous noises are non-Gaussian. This non-Gaussian version of the linear acyclic SEM is called the \textbf{Linear Non-Gaussian Acyclic Model} (\textbf{LiNGAM}). In the rest of this paper, we assume that the data-generating model obeys the LiNGAM assumptions.
\subsection{Causal Structure Learning Algorithms for LiNGAM}
\label{sec:baseLiNGAM}
ICA-LiNGAM~\cite{shimizu2006linear} was the first algorithm for the LiNGAM model; it applies an independent component analysis (ICA) algorithm to the observed data and tries to find the best strictly lower triangular matrix $B$ that fits the observed data. This algorithm is fast due to well-developed ICA techniques. However, it has several drawbacks, e.g., getting stuck in local optima, scale-dependent calculations, and usually estimating dense graphs even for sparse ground-truth causal graphs~\cite{shimizu2011directlingam}.
\textbf{DirectLiNGAM} was proposed in~\cite{shimizu2011directlingam} in order to resolve ICA-LiNGAM's issues; it converges to an acceptable approximation of matrix $B$ in a fixed number of steps.
Moreover, prior knowledge can be provided to the algorithm, which can improve the performance of recovering the correct model. However, the computational cost of DirectLiNGAM is higher than that of ICA-LiNGAM, and it cannot be applied to large graphs~\cite{shimizu2011directlingam}.
The DirectLiNGAM algorithm consists of two main steps. In the first step, the causal order of the variables is estimated by repeatedly searching for a root in the remaining graph and regressing out its effect on the other variables. In the second step, the causal strengths are estimated by conventional covariance-based regression according to the recovered causal order~\cite{shimizu2011directlingam}. Experimental results show that the second step is fairly fast, since it only performs linear regressions. The first step, however, is computationally intensive, and we focus on accelerating this part in this paper.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}$
\Output $K$
%
\State $U =\{1,\cdots, p\}$
\State $K=\emptyset$
\Repeat
\State $root = FindRoot(\mathcal{X}, U)$
\State Append $root$ to $K$
\State Remove $root$ from $U$
\State $\mathcal{X} = RegressRoot(\mathcal{X}, U, root)$
\Until{$U$ is empty}
\State Estimate causal strengths $B$ from $K$
\end{algorithmic}
\caption{DirectLiNGAM}
\label{alg:baseLiNGAM}
\end{algorithm}
The description of DirectLiNGAM algorithm is given in \mRefAlg{alg:baseLiNGAM}. The input of the algorithm is matrix $[\mathcal{X}]_{p\times n}$
whose $i$-th row, $x_i$, contains $n$ samples from variable $X_i$.
The output of the algorithm is $K$, a list of the variables in causal order. First, we initialize $U$ with the list of all variables' indices and set $K$ to an empty list (lines $1-2$). Next, $K$ is filled with variables from $U$ by comparing variables to form a causal order. Recovering a causal order takes $p$ (the size of $U$) iterations. In each iteration, the most independent variable in $U$ (the $root$ of that iteration) is determined by the $FindRoot$ function. Then the $root$ is moved from $U$ to $K$. Next, the data of the remaining variables in $U$ are updated by regressing them on the root (the $RegressRoot$ function), which has been shown to preserve the correct causal order among the remaining variables~\cite{shimizu2011directlingam}. Finally, the matrix $B$ is recovered from the causal order in $K$ using conventional covariance-based regression methods.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}, U$
\Output $root$
%
\If {$U$ has only one element}
\State return the element
\EndIf
\State $\mathcal{S} = [\textbf{0}]_{|U|}$
\For {$i$ in $U$}
\For {$j$ in $U\backslash\{i\}$}
\State $Normalize(x_i)$
\State $Normalize(x_j)$
\State $r_i^{(j)} = Regress(x_i, x_j)$
\State $r_j^{(i)} = Regress(x_j, x_i)$
\State $Normalize(r_i^{(j)})$
\State $Normalize(r_j^{(i)})$
\State $\mathcal{S}[i] += \min\{0, I(x_i, x_j, r_i^{(j)}, r_j^{(i)})\}^2$
\EndFor
\EndFor
\State $root = U[\arg\min(\mathcal{S})]$
\end{algorithmic}
\caption{$FindRoot$}
\label{alg:baseLiNGAM_FR}
\end{algorithm}
The purpose of the $FindRoot$ function (see \mRefAlg{alg:baseLiNGAM_FR}) is to find the variable most independent of its residuals by comparing all pairs of variables in $U$. Each variable in $U$ has a score with an initial value of zero, and all scores are stored in an array called $\mathcal{S}$ (line $4$).
First, the samples of each variable are normalized. Next, each variable $X_i$ is regressed on every other variable $X_j$ in $U$. Afterwards, the residuals are normalized. Finally, an independence test $I$ is performed, and its result is added to the score of variable $X_i$. In~\cite{hyvarinen2013pairwise}, a likelihood-ratio test is proposed that assigns a real number to the pair of variables:
\begin{equation}
I(x_i, x_j, r_i^{(j)}, r_j^{(i)}) = H(x_j) + H(r_i^{(j)}) - H(x_i) - H(r_j^{(i)}),
\label{eq:Itest}
\end{equation}
where $H$ is the differential entropy, which can be approximated by a computationally simple function as follows~\cite{hyvarinen2013pairwise, hyvarinen1998analysis}:
\begin{equation}
\!\begin{multlined}[t]
\hat{H}(u) = H(v) - k_1[E\{\log\cosh(u)\}- \beta]^2 \\
- k_2[E\{u\exp(-u^2/2)\}]^2.
\end{multlined}
\label{eq:HApp}
\end{equation}
In the above equation, $H(v) = \frac{1}{2}(1 + \log 2\pi)$ is the entropy of the standardized Gaussian distribution, and the other constants can be set to:
\begin{center}
$k_1 \approx 79.047,$\\
$k_2 \approx 7.4129,$\\
$\beta \approx 0.37457.$\\
\end{center}
A positive value of $I$ indicates independence in the tested direction, while a negative value indicates dependence. To aggregate a score for each variable and determine its total independence from the others, only the amount of dependence is considered.
In other words, in this method, only negative values of $I$ are considered, and their squares are added to the score.
The variable with the minimum score is selected as the root variable.
Please note that we refer to lines $9-13$ as the \textbf{$Compare$} function and use it as the base function for comparing two variables in the next sections.
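For concreteness, the following C++ sketch (ours, not taken from any implementation; the function names are hypothetical) evaluates \mRefEq{eq:HApp} and \mRefEq{eq:Itest} for normalized sample vectors:
\begin{verbatim}
#include <cmath>
#include <vector>

// Approximate differential entropy (Eq. eq:HApp) of a
// normalized sample vector u (zero mean, unit variance).
double entropyApprox(const std::vector<double>& u) {
    const double k1 = 79.047, k2 = 7.4129;
    const double beta = 0.37457;
    const double Hv = 0.5 * (1.0 + std::log(2.0 * M_PI));
    double m1 = 0.0, m2 = 0.0;
    for (double v : u) {
        m1 += std::log(std::cosh(v));
        m2 += v * std::exp(-v * v / 2.0);
    }
    m1 /= u.size();
    m2 /= u.size();
    return Hv - k1 * (m1 - beta) * (m1 - beta)
              - k2 * m2 * m2;
}

// Likelihood-ratio measure I (Eq. eq:Itest) for the pair
// (x_i, x_j) and their mutual regression residuals.
double measureI(const std::vector<double>& xi,
                const std::vector<double>& xj,
                const std::vector<double>& ri,
                const std::vector<double>& rj) {
    return entropyApprox(xj) + entropyApprox(ri)
         - entropyApprox(xi) - entropyApprox(rj);
}
\end{verbatim}
Each entropy evaluation is a single $O(n)$ pass over the samples, so one evaluation of $I$ also costs $O(n)$.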
\section{ParaLiNGAM}
\label{sec:alg}
In this section, we present the ParaLiNGAM algorithm for accelerating the computationally intensive part of DirectLiNGAM without changing its accuracy.
As mentioned before, DirectLiNGAM discovers the causal order in $p$ iterations. Moreover, these iterations are consecutive, i.e., an iteration cannot start until the previous one has finished. Hence, ParaLiNGAM is also executed in $p$ iterations. \mRefFig{fig:alg:base} illustrates the procedure of one iteration.
\begin{figure*}[tp]
\centering
\includegraphics[width = \textwidth]{fig/flowchart}
\caption{Procedure of one iteration in ParaLiNGAM: Each iteration is divided into steps.
Each step has three parts: Compare, Message Passing, Scheduler, which are accomplished with pre-selected workers from the previous step ($\mathcal{W}_k$). Details are discussed in Section \ref{sec:alg}.}
\label{fig:alg:base}
\end{figure*}
In each iteration, the computations of each variable are assigned to a specific \textbf{worker}. Iterations are broken into \textbf{steps}. In each step $k$, a subset of all workers ($\mathcal{W}$) in an iteration, denoted by $\mathcal{W}_k$, is selected to start comparing themselves with other workers (Compare part). Next, the workers inform each other about their computations by sending messages (Message Passing part).
Finally, the \textbf{scheduler} gathers all workers' scores and selects some of the workers for the next step, i.e., $\mathcal{W}_{k+1}$.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}$
\Output $K$
%
\State $K=\emptyset$
\State $U =[1,\cdots, p]$
\State Par: $NormalizeData(\mathcal{X})$
\State Par: $ \Sigma = CalculateCovMat(\mathcal{X})$
\Repeat
\State Par: $root = ParaFindRoot(\mathcal{X}, U, \Sigma)$
\State Append $root$ to $K$
\State Remove $root$ from $U$
\State Par: $\mathcal{X} = UpdateData(\mathcal{X}, U, \Sigma, root)\hfill//$Sec\ref{sec:math}
\label{line:RegressRoot}
\State Par: $\Sigma = UpdateCovMat(\mathcal{X}, U, \Sigma, root) \hfill//$ Sec\ref{sec:math}
\label{line:UpdateCovMat}
\Until{$U$ is empty}
\end{algorithmic}
\caption{ParaLiNGAM}
\label{alg:ParaLiNGAM}
\end{algorithm}
The description of ParaLiNGAM is given in \mRefAlg{alg:ParaLiNGAM}. The lines starting with ``Par" are executed in parallel. ParaLiNGAM is executed similarly to DirectLiNGAM. First, we initialize $U$ and $K$ (lines $1-2$). Next, we find a causal order in lines $5-11$. The general procedure is the same as in DirectLiNGAM; however, the details of the computations have changed, as we discuss in the sequel.
In DirectLiNGAM, the samples of all variables have to be normalized in the $FindRoot$ function (lines $7-8$ in \mRefAlg{alg:baseLiNGAM_FR}). For the sake of efficiency, all variables are normalized simultaneously in line $3$ of \mRefAlg{alg:ParaLiNGAM} for the first iteration. For the subsequent iterations, this task is performed by the $UpdateData$ function (line $9$ in \mRefAlg{alg:ParaLiNGAM}).
Regressing variables on each other is a frequent task in DirectLiNGAM, performed in the $Compare$ (lines $9-10$ in \mRefAlg{alg:baseLiNGAM_FR}) and $RegressRoot$ (line $7$ in \mRefAlg{alg:baseLiNGAM}) functions, and it requires the variables' covariance matrix. Hence, it is desirable to store the covariance matrix (which we denote by $\Sigma$) in each iteration to avoid redundant computations. In \mRefAlg{alg:ParaLiNGAM}, the covariance matrix is first calculated in line $4$ and is then updated in each iteration (line $10$).
Furthermore, we will show in Section \ref{sec:math} how to reuse computations from previous iterations for normalizing the data and obtaining the covariance matrix, which reduces the computational complexity without degrading the accuracy.
We have summarized the changes with respect to \mRefAlg{alg:baseLiNGAM}. Now, we are ready to explain these changes in more detail. First, we present a parallel solution for finding the root in each iteration (the $ParaFindRoot$ function in Algorithm \ref{alg:ParaLiNGAM_FR}).
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma$
\Output $root$
\item[\textbf{\# of Workers:}] $|U|$
%
\If {$U$ has only one element}
\State return the element
\EndIf
\State $r = |U|$
\State $\mathcal{W} = [1, 2, 3, ..., r]$
\State $\mathcal{S} = [\textbf{0}]_{r}$
\State $\mathcal{M} = [\emptyset]_{r\times r}$
\State $\mathcal{D} = diag([True]_r)$
\State $State = \{U, \Sigma, r, \mathcal{S}, \mathcal{M}, \mathcal{D}, \gamma \}$
\State $\mathcal{W}' = \mathcal{W}$
\State $\mathcal{C} = \textbf{1}_{r}$
\Repeat
\State Par: $\mathcal{S}[w] += Compare(w, \mathcal{C}[w], State)$
\State Par: $\mathcal{S}[w] += CheckMessages(w, State)\hfill//$Sec\ref{sec:messaging}
\State $finish, \mathcal{W'}, \mathcal{C} = Scheduler (\mathcal{W}, \mathcal{C}, State)\hfill//$Sec\ref{sec:scheduler}
\Until{$finish == True$}
\State $root=U[\arg\min(\mathcal{S})]$
\end{algorithmic}
\caption{$ParaFindRoot$}
\label{alg:ParaLiNGAM_FR}
\end{algorithm}
In order to find the root in DirectLiNGAM (\mRefAlg{alg:baseLiNGAM_FR}), every variable is compared with all the others (line $5$). In the parallel version, we assign each variable to a worker that performs its computations. In other words, instead of the $for$ statement in line $5$ of \mRefAlg{alg:baseLiNGAM_FR}, we have workers that can perform the comparisons in parallel. Moreover, comparing each variable/worker with the others is divided into steps instead of iterating over all variables (line $6$ of \mRefAlg{alg:baseLiNGAM_FR}). Next, we discuss the details of the presented solution for finding the root (\mRefAlg{alg:ParaLiNGAM_FR}).
The description of the $ParaFindRoot$ function is given in \mRefAlg{alg:ParaLiNGAM_FR}. First, we define $r$ as the number of remaining variables in this iteration, which equals the size of $U$ (line $4$). In each iteration, every pair of variables has to be compared to find the root, and these comparisons are independent of each other. Hence, we can use $r$ workers, where worker $i$ is responsible for performing variable $U[i]$'s computations.
For simplicity of notation, from now on, we denote workers by the corresponding variables assigned to them. We define $\mathcal{W}$ as the list of all workers' indices in an iteration (line $5$). In line $6$, $\mathcal{S}$ is initialized as in the main algorithm (line $4$ in \mRefAlg{alg:baseLiNGAM_FR}) to store the scores. Each worker might have useful information for the other workers, which can be shared with a \textbf{messaging} mechanism. To this end, we define $\mathcal{M}$, an $r\times r$ matrix filled with $\emptyset$ (line $7$). Worker $i$ can send a message to worker $j$ by writing into $\mathcal{M}[j][i]$. More details of the messaging mechanism and its effect on the performance of the algorithm are discussed in \mRefSec{sec:messaging}. Note that the matrix $\mathcal{M}$ is only temporary memory for messaging and is reset in each step.
Hence, another variable is required for tracking the progress of an iteration.
For this purpose, in line $8$, we define an $r \times r$ matrix $\mathcal{D}$, whose diagonal entries are initially $True$ while the others are $False$, to monitor the workers' progress. Worker $i$ writes $True$ into $\mathcal{D}[i][j]$ after comparing itself with worker $j$.
For the sake of brevity, we collect all the variables defined in Algorithm \ref{alg:ParaLiNGAM_FR}, together with the threshold $\gamma$, which is initialized to a small value (Sections \ref{sec:thresh} and \ref{sec:scheduler}), in the set $State$ (line $9$).
As mentioned before, finding the root in each iteration is divided into steps. In each step, some workers are selected, and each of them has to compare itself with another worker. The selected workers are indicated by $\mathcal{W}'$ (line $10$). In the first step of each iteration, all workers have the same priority; hence, $\mathcal{W}'$ equals $\mathcal{W}$. Moreover, the list of target workers to be compared with is defined as $\mathcal{C}$ (line $11$), and worker $i$ compares itself with $\mathcal{C}[i]$. At the beginning of each iteration, $\mathcal{C}$ is initialized with a list filled with $1$, which means all workers start by comparing themselves with the first worker.
In each step, the selected workers first compare themselves with their assigned workers and send a message (line $13$). Comparing and sending a message is an independent task for each worker and can be performed in parallel. Then, workers check for new messages from others and update their scores (line $14$). Afterwards, the $Scheduler$ selects workers for the next step according to the current step's status; it also modifies $\mathcal{C}$ for the selected workers (line $15$) and determines whether to terminate the iteration after checking the $State$. Finally, similar to \mRefAlg{alg:baseLiNGAM_FR}, the worker with the minimum score is chosen as the root of the iteration (line $17$).
There are still some implementation details that will be discussed in the following parts. More specifically,
messaging between workers is discussed in \mRefSec{sec:messaging}. The details of the \textbf{threshold} mechanism, which determines $\mathcal{W}'$, the set of selected workers, are given in \mRefSec{sec:thresh}. The scheduling of workers based on the threshold mechanism is discussed in \mRefSec{sec:scheduler}. Mathematical simplifications for accelerating the $UpdateData$ and $UpdateCovMat$ procedures are discussed in \mRefSec{sec:math}. Finally, the implementation of the algorithm on GPU and further details on some parts of the solution are given in \mRefSec{sec:alg2}.
\subsection{Messaging}
\label{sec:messaging}
In DirectLiNGAM, as mentioned in \mRefSec{sec:baseLiNGAM}, the computation of the test $I$ for $x_i \rightarrow x_j$ shares a large part with the one for $x_i \leftarrow x_j$. It is worth mentioning that this property is not specific to $I$; the technique can be used for some other tests as well \cite{hyvarinen2013pairwise}.
When worker $i$ compares itself with worker $j$ (computing $I(x_i, x_j, r_i^{(j)}, r_j^{(i)})$), it can also compute the test in the reverse direction ($I(x_j, x_i, r_j^{(i)}, r_i^{(j)})$), which is worker $j$'s task. Hence, we can assign the full comparison to just one worker; after finishing the comparison, the worker that performed the test informs the other worker about the result by sending a message. With messaging, which adds no computational load,
we can halve the number of comparisons (from $p(p-1)$ to $p(p-1)/2$). To use this mechanism, every active worker in a step has to send a message after finishing its comparison (line 13 in \mRefAlg{alg:ParaLiNGAM_FR}), and each worker checks for new messages (which can come from any of the $r-1$ other workers in the iteration) with the $CheckMessages$ function.
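A minimal host-side sketch of this idea (ours; the data structures mirror $\mathcal{S}$, $\mathcal{M}$, and $\mathcal{D}$ defined above) exploits the antisymmetry of \mRefEq{eq:Itest}, $I(x_j, x_i, r_j^{(i)}, r_i^{(j)}) = -I(x_i, x_j, r_i^{(j)}, r_j^{(i)})$, so that one computation serves both directions. Since $CheckMessages$ adds the raw message value to the score, the sender posts the ready-to-add contribution:
\begin{verbatim}
#include <algorithm>
#include <cmath>

// Worker w has just computed I_wc for the pair (w, c):
// it updates its own score and posts worker c's
// contribution, using I(c, w) = -I(w, c).
void compareAndMessage(int w, int c, double I_wc,
                       double* S, double** M, bool** D) {
    double dw = std::min(0.0, I_wc);
    S[w] += dw * dw;        // worker w's score update
    double dc = std::min(0.0, -I_wc);
    M[c][w] = dc * dc;      // message: c's contribution
    D[w][c] = true;         // mark (w, c) as done
}
\end{verbatim}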
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $w, State$
\Output $score$
%
\State $score = 0$
\For{$~~(i=1;~~ i <= r;~~ i+=1)$}
\If{$\mathcal{M}[w,i] != \emptyset$}
\State $score += \mathcal{M}[w, i]$
\State$\mathcal{D}[w, i] = True$
\State $\mathcal{M}[w, i] = \emptyset$
\EndIf
\EndFor
\end{algorithmic}
\caption{$CheckMessages$}
\label{alg:CheckMessages}
\end{algorithm}
The description of the $CheckMessages$ function is given in \mRefAlg{alg:CheckMessages}. First, we define $score$ as a variable accumulating the score contributions. In lines 2-3, worker $w$ checks for new messages. If another worker, say $i$, has sent a message, $score$ is first updated (line 4). Next, worker $w$ marks the sender ($i$) as ``done" by writing $True$ into $\mathcal{D}[w, i]$ (line 5). Finally, the message is replaced with $\emptyset$ to prevent it from being counted again in later steps. The senders and receivers might not be active simultaneously in a step; as a result, workers consider messages from all workers in the iteration, not just from the active workers of the current step.
\subsection{Threshold}
\label{sec:thresh}
As mentioned earlier, we need to perform $p(p-1)/2$ comparisons in each iteration. However, not all of them are necessary. Suppose that the final score of the root in an iteration is $0.05$. In this case, we can terminate the computation of any worker whose score has reached $0.05$.
To reduce the number of comparisons, we consider an upper bound on the score, which we call the threshold, and assume that the root's score will probably be less than this threshold. If that is the case, the iteration is over and we can choose the root as soon as at least one worker finishes its comparisons without reaching the threshold, while all other workers reach it without completing their tasks.
Herein, the main issue is how to choose a proper threshold.
To overcome this issue, we first choose a small value for the threshold and terminate workers that have already reached this value. Then, if all workers terminate and none of them could finish its comparisons, we increase the threshold. We continue this procedure until the iteration termination condition is satisfied, i.e., at least one worker finishes all its comparisons without reaching the threshold. Consequently, the threshold mechanism reduces the number of comparisons.
Now, we discuss the correctness of the threshold mechanism in each iteration.
At the end of each iteration, each worker is in one of two groups:
1) workers whose score exceeds the threshold and that may not even have finished their comparisons;
2) workers that finished their comparisons with a score still below the threshold.
Since scores never decrease, workers in the first group would have scores higher than the threshold even if they continued their comparisons. The root is the worker with the minimum score; as a result, it is always chosen from the second group. In fact, we select as root the worker in the second group with the minimum score. Hence, the early termination of the first group of workers does not affect the algorithm's result.
The details of reducing the number of comparisons via the threshold mechanism are discussed in \mRefSec{sec:scheduler}.
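A small sketch (ours) of the threshold update rule, with the scores $\mathcal{S}$ and a growth constant $c > 1$ (introduced formally in \mRefSec{sec:scheduler}):
\begin{verbatim}
#include <algorithm>
#include <vector>

// Raise gamma by the factor c until at least one worker's
// score falls below it, so some worker can proceed.
// Assumes at least one worker remains.
void raiseThreshold(const std::vector<double>& S,
                    double& gamma, double c) {
    while (std::all_of(S.begin(), S.end(),
                       [&gamma](double s) { return s > gamma; }))
        gamma *= c;
}
\end{verbatim}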
\subsection{Scheduler}
\label{sec:scheduler}
In this part, we explain how workers are scheduled under the threshold mechanism. At the end of each step, the scheduler has to decide whether or not to finish the current iteration.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{W}, \mathcal{C}, State$
\Output $finish, \mathcal{W'}, \mathcal{C}$
%
\State $finish = False$
\ForAll {$ w \in \mathcal{W}$} // checking the termination
\If{$\mathcal{S}[w] < \gamma$}
\If {$\mathcal{D}[w,:]$ has at least one $False$}
\State $finish = False$
\State \textbf{Break}
\Else
\State $finish = True$
\EndIf
\EndIf
\EndFor
\If {$finish$}
\State \textbf{return}
\EndIf
\While{$\mathcal{S}[w] > \gamma, ~~ \forall w \in \mathcal{W}$}
\State $\gamma = \gamma \times c$ // updating threshold
\EndWhile
\State $\mathcal{W'} = [ w \in \mathcal{W}~~|~~ \mathcal{S}[w] < \gamma]$
\ForAll {$ w \in \mathcal{W'}$}
\Repeat
\State $\mathcal{C}[w] += 1$
\Until{$(\mathcal{D}[w,\mathcal{C}[w]] == False ~~ \& \& ~~ \mathcal{C}[\mathcal{C}[w]]!=w) ~~ | | ~~ \mathcal{C}[w] > |\mathcal{W}| $}
\EndFor
\State $\mathcal{W'} = \mathcal{W'}\backslash[ w \in \mathcal{W'}~~|~~ \mathcal{C}[w] > |\mathcal{W}|]$
\end{algorithmic}
\caption{$Scheduler$}
\label{alg:Scheduler}
\end{algorithm}
As mentioned earlier in \mRefSec{sec:thresh}, if at least one worker has finished its comparisons and its score is below the threshold, we can terminate the iteration.
The description of the scheduler is given in \mRefAlg{alg:Scheduler}. The termination of an iteration is checked in lines $1-14$. First, we define $finish$ as a flag for the termination condition (line $1$). Then, we have to find workers with scores less than the threshold (lines $2-3$) and check whether they have finished their comparisons (line 4). If at least one worker has unfinished comparisons and its score is below the threshold, we need to continue with another step (lines $5-6$). If some workers have finished their comparisons, $finish$ is set to $True$, and we wait for the other workers' status (lines $7-9$). Then, we check the value of $finish$ and decide whether or not to terminate the iteration (lines $12-14$).
Now we discuss the scheduler's task when an iteration is not terminated and should be continued for another step. For the new step, we have to change the threshold if needed (lines $15-17$). There are multiple ways to increase the value of the threshold; here, we simply multiply it by some constant $c$ (see \mRefSec{sec:schGPU} for more details on selecting a desirable constant $c$). We continue updating the threshold until at least one worker's score is below it. Then, the scheduler chooses the workers ($\mathcal{W}'$) and comparison targets ($\mathcal{C}$) for the next step.
Workers with scores less than the threshold are considered as the workers of the new step (line $18$). $\mathcal{C}$ is updated for the selected workers in lines $19-23$. As mentioned, $\mathcal{C}$ is initialized with $1$, and workers compare themselves with worker $1$ in their first step. For the next steps, we keep increasing $\mathcal{C}[w]$ for worker $w$ until we find a pair $(w, \mathcal{C}[w])$ whose comparison has not been performed yet.
In a step, it might occur that two workers compare themselves with each other simultaneously, which is undesirable as they would perform redundant tests. In order to prevent such cases, the scheduler also checks for repetitive pairs of comparisons (the second condition in line $22$).
A worker might have finished its comparisons while its score was greater than the threshold; after the threshold is updated, its score may fall below the new threshold again. Such workers have no remaining comparisons and simply wait for the next step. In this case, $\mathcal{C}[w]$ is first set to a value greater than $|\mathcal{W}|$, and then the scheduler omits such workers from $\mathcal{W}'$ (line $24$).
Some details of scheduling depend on implementation considerations and will be discussed in \mRefSec{sec:schGPU}.
\subsection{Mathematical Simplifications}
\label{sec:math}
As mentioned earlier, the DirectLiNGAM algorithm always works with normalized data. Furthermore, normalization and regression tasks are performed frequently in the algorithm. These tasks depend on computing the covariances of variables, and it is desirable to obtain them in an efficient manner. In this section, we show that if the relationships between variables are linear, which is one of the main assumptions in LiNGAM, we can compute the variance of the residual of a regression in closed form and use it in the normalization step (lines $7 - 8$ in \mRefAlg{alg:baseLiNGAM_FR}). Furthermore, we can compute the coefficients used for regressing out variables in the data updating procedure (line $7$ in \mRefAlg{alg:baseLiNGAM}).
First, we calculate the adjusted sample variance $s^2$ of the residual of a regression. The residual of $x_i$ regressed on $x_j$ (denoted by $r_i^{(j)}$) is defined as:
\begin{equation}
r_i^{(j)} = x_i - \frac{cov(x_i, x_j)}{var(x_j)} x_j, \quad i \neq j.
\end{equation}
Moreover, if the samples are normalized, it can be shown that $E[r_i^{(j)}] = 0$; hence, calculating residuals only requires the covariance matrix. Let $b = cov(x_i, x_j)$ and assume that both variables are normalized. We can write:
\begin{equation}
\begin{split}
s^2 &= \frac{1}{n-1} \sum (r_i^{(j)} - E[r_i^{(j)}])^2 \\
&= \frac{1}{n-1} \sum (x_i - b x_j)^2 = \frac{1}{n-1} \sum (x_i^2 + b^2x_j^2 - 2b x_i x_j) \\
&= \frac{1}{n-1} (\sum x_i^2 + b^2 \sum x_j^2 - 2b\sum x_i x_j) \\
&= var(x_i) + b^2 var(x_j) - 2b cov(x_i, x_j)\\
&= 1 + b^2 - 2b^2 = 1 - b^2 = 1 - cov^2(x_i, x_j).
\end{split}
\label{eq:var_update}
\end{equation}
Thus, we have shown that the variance of $r_i^{(j)}$ equals $1 - cov^2(x_i, x_j)$. Therefore, in order to normalize it, we just have to divide all its samples by $\sqrt{1 - cov^2(x_i, x_j)}$. The details of the $UpdateData$ function, which is called in \mRefAlg{alg:ParaLiNGAM}, are given in \mRefAlg{alg:UpdateData}.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma, root$
\Output $\mathcal{X}$
\item[\textbf{\# of Workers:}] $|U|$
\State $\mathcal{W} = [1, 2, 3, ..., |U|]$
\State Par: \textbf{for} {$~~(i=0;~~ i < |\mathcal{X}[w,:]|;~~ i+=1)$} \textbf{do}
\State $~~~~\mathcal{X}[w, i] = \dfrac{\mathcal{X}[w, i] - \Sigma[w, root] \mathcal{X}[root, i]}{\sqrt{1 - \Sigma^2[w, root]}}$
\State \textbf{end for}
\end{algorithmic}
\caption{$UpdateData$}
\label{alg:UpdateData}
\end{algorithm}
Next, we calculate the covariance between two residuals, which is used for computing the covariance matrix of the next iteration.
Suppose $b_1 = cov(x_i, x_{root})$ and $b_2 = cov(x_j, x_{root})$. We have:
\begin{equation}
\begin{split}
cov(r_i^{(root)}, r_j^{(root)}) &= \frac{1}{n-1} \sum (x_i - b_1x_{root})(x_j - b_2x_{root}) \\
&= \!\begin{multlined}[t]
\frac{1}{n-1} \sum (x_ix_j - b_2x_ix_{root} - b_1 x_j x_{root} \\+ b_1b_2x_{root}^2)
\end{multlined}\\
&= \!\begin{multlined}[t]
cov(x_i, x_j) - b_2 cov(x_i, x_{root}) \\- b_1 cov(x_j, x_{root}) + b_1b_2 var(x_{root})
\end{multlined}\\
&= cov(x_i, x_j) - b_1b_2.
\end{split}
\label{eq:CovUpdate}
\end{equation}
From Equations \ref{eq:CovUpdate} and \ref{eq:var_update}, we can update the covariance matrix in each iteration directly from the covariance matrix of the previous iteration, without using the variables' samples (except in the first iteration).
Details of the $UpdateCovMat$ function, which is called in \mRefAlg{alg:ParaLiNGAM}, are given in \mRefAlg{alg:UpdateCovMat}. Please note that $r_i^{(root)}$ and $r_j^{(root)}$ are not normalized in \mRefEq{eq:CovUpdate}. Therefore, the expression for the covariance needs to be divided by the standard deviations of $r_i^{(root)}$ and $r_j^{(root)}$ (line 3 in \mRefAlg{alg:UpdateCovMat}).
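The following standalone C++ check (ours, using an arbitrary toy linear model) verifies both closed forms numerically by comparing them against explicitly computed residuals:
\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 100000;
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> noise(-1.0, 1.0);
    std::vector<double> xr(n), xi(n), xj(n);
    for (int t = 0; t < n; ++t) {   // toy linear model
        xr[t] = noise(gen);
        xi[t] = 0.8 * xr[t] + noise(gen);
        xj[t] = -0.5 * xr[t] + noise(gen);
    }
    auto normalize = [n](std::vector<double>& x) {
        double m = 0, s = 0;
        for (double v : x) m += v;
        m /= n;
        for (double v : x) s += (v - m) * (v - m);
        s = std::sqrt(s / (n - 1));
        for (double& v : x) v = (v - m) / s;
    };
    normalize(xr); normalize(xi); normalize(xj);
    auto cov = [n](const std::vector<double>& a,
                   const std::vector<double>& b) {
        double c = 0;
        for (int t = 0; t < n; ++t) c += a[t] * b[t];
        return c / (n - 1);
    };
    double b1 = cov(xi, xr), b2 = cov(xj, xr);
    double cij = cov(xi, xj);
    std::vector<double> ri(n), rj(n); // explicit residuals
    for (int t = 0; t < n; ++t) {
        ri[t] = xi[t] - b1 * xr[t];
        rj[t] = xj[t] - b2 * xr[t];
    }
    std::printf("var(ri):    %.6f vs %.6f\n",
                cov(ri, ri), 1 - b1 * b1);
    std::printf("cov(ri,rj): %.6f vs %.6f\n",
                cov(ri, rj), cij - b1 * b2);
    return 0;
}
\end{verbatim}
Both pairs of values agree up to floating-point error, since the derivations above are exact identities for normalized samples.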
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}, U, \Sigma, root$
\Output $\Sigma$
\item[\textbf{\# of Workers:}] $|U|$
\State $\mathcal{W} = [1, 2, 3, ..., |U|]$
\State Par: \textbf{for} {$~~(j=0;~~ j < |\Sigma[w,:]|;~~ j+=1)$} \textbf{do}
\State $~~~~\Sigma[w, j] = \dfrac{\Sigma[w, j] - \Sigma[w, root] \Sigma[j, root]} {\sqrt{1 - \Sigma^2[w, root]}\sqrt{1 - \Sigma^2[j, root]}}$
\State \textbf{end for}
\end{algorithmic}
\caption{$UpdateCovMat$}
\label{alg:UpdateCovMat}
\end{algorithm}
\section{Experimental Evaluation}
\label{sec:exp}
\subsection{Setup}
\label{sec:exp:setting}
The proposed ParaLiNGAM algorithm is implemented in C++ using the CUDA parallel programming framework. The source code is available online \cite{sourceParaLiNGAM}.
We experimentally evaluate ParaLiNGAM against DirectLiNGAM \cite{shimizu2011directlingam}, which is a sequential method. The latest implementation of DirectLiNGAM is in Python \cite{sourceDirectLiNGAM}. In order to have a fair comparison, we re-implemented DirectLiNGAM in C++. This implementation is also available at \cite{sourceParaLiNGAM}.
We employ a server machine with an Intel Xeon CPU with $16$ cores operating at $2.1$ GHz.
Since DirectLiNGAM is a sequential method, it is executed on a single core.
The CUDA kernels in ParaLiNGAM are executed on Nvidia Tesla V$100$ GPU, and the other procedures are executed sequentially on a single core. We use Ubuntu OS 20.04, GCC version 9.3, and CUDA version 11.1.
\subsection{Real-World Datasets}
\label{sec:exp:real_data}
\begin{table}[tp]
\centering
\caption{Benchmark datasets.}
\label{tab:datasetSpec}
\begin{tabular}{|c|c|c|}
\hline
Dataset & \# of reactions & \# of non-zero variables ($p$) \\
\hline
iML1515 & 2712 & 2326 \\
\hline
iEC1372\_W3110 & 2758 & 2339 \\
\hline
iECDH10B\_1368 & 2742 & 2252 \\
\hline
iY75\_1357 & 2759 & 2249 \\
\hline
iAF1260b & 2388 & 1588 \\
\hline
iAF1260 & 2382 & 1633 \\
\hline
iJR904 & 1075 & 770 \\
\hline
E.coli Core &
95 & 85 \\
\hline
\end{tabular}
\end{table}
We utilize seven genome-scale metabolic networks as our benchmarks \cite{feist2007genome, reed2003expanded, monk2017ml1515, orth2011comprehensive, monk2016multi, monk2013genome, feist2010model} to evaluate the performance of ParaLiNGAM.
Metabolites are molecules involved in chemical reactions in a cell, and sets of these chemical reactions are so-called metabolic networks \cite{lacroix2008introduction}.
These metabolic networks can be studied in silico with flux balance analysis \cite{orth2010flux}, which is a way to simulate metabolic networks and to measure the effects of external
influences on the system. Flux balance analysis is utilized to generate the datasets.
The data generation procedure is performed with the COBRA toolbox \cite{becker2007quantitative, schellenberger2011quantitative}, and it considers the metabolic network of the Escherichia coli bacterium str. K-12. In order to utilize these networks, we acquire their models from BiGG Models \cite{king2016bigg}.
To generate data, we first import the models from BiGG Models into COBRA. Then, utilizing COBRA's optGpSampler \cite{megchelenbrink2014optgpsampler}, we generate data by uniformly sampling from the solution space with a hit-and-run algorithm.
In this algorithm, $n$ points (samples) are first generated in the middle of the solution space, and these points are then repeatedly relocated in random directions. Even after a substantial number of steps in this procedure, some reactions' samples remain zero; these reactions are removed from the datasets.
Details of the generated datasets are shown in Table~\ref{tab:datasetSpec}. The second column shows the number of reactions, and the third column shows the number of non-zero variables among these reactions.
For each variable (reaction), $10000$ samples are generated.
\subsection{Performance Comparison}
\subsubsection*{Comparing ParaLiNGAM with DirectLiNGAM }
The runtimes of both DirectLiNGAM and ParaLiNGAM are reported in Table \ref{tab:realTimings}. The accuracy of the proposed solution is exactly the same as that of DirectLiNGAM; thus, we only report the runtimes of these experiments.
The third column shows the serial runtime. The serial runtime on the iJR904 dataset is 287780 seconds (approximately 3.3 days).
Since the computational complexity grows cubically with the number of variables (each of the $p$ iterations performs $O(p^2)$ pairwise comparisons, each of cost $O(n)$), the runtime on the iAF1260b dataset, which is the next smallest dataset after iJR904, would probably be longer than three weeks. Thus, it is impractical to measure the serial runtime for the other datasets, and it is reported for only two datasets: 485 seconds for E.coli Core and 287780 seconds for iJR904. More comparisons with the serial solution are reported in Section \ref{sec:scalability} for synthetic datasets.
The fourth column reports the runtime of ParaLiNGAM for all datasets, which ranges from 759 milliseconds to 1420 seconds. The speedup ratio over the serial execution for the measured datasets is up to 3152.
\begin{table*}[tp]
\centering
\caption{Comparing the serial and parallel implementations. The third and fourth columns show the runtimes. The last column shows the speedup ratio, which is calculated by dividing the serial runtime over the parallel runtime.}
\label{tab:realTimings}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & \# of variables & Serial runtime (sec.) & GPU runtime (sec.) & Speedup ratio \\
\hline
iML1515 & 2326 & \textemdash \textemdash & 1321 & \textemdash \textemdash\\
\hline
iEC1372\_W3110 & 2339 & \textemdash \textemdash & 1420 & \textemdash \textemdash \\
\hline
iECDH10B\_1368 & 2252 & \textemdash \textemdash & 1216 & \textemdash \textemdash \\
\hline
iY75\_1357 & 2249 & \textemdash \textemdash & 1174 & \textemdash \textemdash \\
\hline
iAF1260b & 1588 & \textemdash \textemdash & 507 & \textemdash \textemdash \\
\hline
iAF1260 & 1633 & \textemdash \textemdash & 518 & \textemdash \textemdash \\
\hline
iJR904 & 770 & 287780 ($\sim$3.3 days) & 91.3 & 3152 \\
\hline
E.coli core & 85 & 485 & 0.759 & 638\\
\hline
\end{tabular}
\end{table*}
\subsubsection*{Comparing ParaLiNGAM with Other Parallel Methods}
Herein, three baseline parallel algorithms are introduced, and their performance is compared against the proposed ParaLiNGAM algorithm; see Fig. \ref{fig:baseLineTiming}. The first baseline algorithm is formed by assigning each variable to a block, and the blocks compare themselves with each other. Specifically, the number of blocks equals the number of remaining variables in each iteration, and each block performs one comparison at a time. We call this algorithm \textbf{Block Worker}.
The second baseline algorithm is similar to the previous one in that each variable is assigned to a block, but within each block, the threads are responsible for different comparisons. Hence, each block can perform several comparisons simultaneously. Since running many comparisons in parallel requires a lot of memory, this method cannot fit in GPU memory for large numbers of variables. Specifically, we need separate memory for the normalized variables and the calculated residuals (see the $Compare$ function in Algorithm \ref{alg:baseLiNGAM_FR}). As a result, we need $O(r^2 n)$ memory ($r$ is the number of remaining variables) in each iteration, which is problematic for large datasets. In Block Worker, by contrast, we have just $r$ simultaneous comparisons at a time and require only $O(r n)$ memory, which is a moderate value.
Hence, we need to optimize memory usage in this algorithm.
Specifically, we store parameters such as the mean, variance, and covariance of the compared variables used in the calculations; then, we merely read the input data from memory to perform the comparisons. This solution not only resolves the memory issues but also reduces the runtime by avoiding redundant memory reads and writes. We call this algorithm \textbf{Thread Worker}.
In the third baseline algorithm, every comparison in each iteration is assigned to a block, i.e., we have $r\times r$ blocks ($r$ is the number of remaining variables), and the block of index $(i, j)$ compares $X_i$ with $X_j$. As in the previous baseline algorithm, performing many comparisons in parallel requires a lot of memory; therefore, the above-mentioned optimizations are utilized to reduce memory usage in this algorithm as well. We call this algorithm \textbf{Block Compare}.
\begin{figure*}[tp]
\centering
\includegraphics[width = \linewidth]{fig/baseLine}
\vskip -3mm
\caption{Comparing the performance of ParaLiNGAM with three baseline algorithms on GPU. Every bar illustrates the speedup ratio of ParaLiNGAM over the baseline algorithm.}
\label{fig:baseLineTiming}
\end{figure*}
As illustrated in Fig. \ref{fig:baseLineTiming}, ParaLiNGAM is 2.4X to 16X faster than the Block Worker method, 6.7X to 11.5X faster than Thread Worker, and 1.1X to 10.4X faster than Block Compare. Note that the E.coli Core dataset has fewer variables than the other datasets, and it under-utilizes the GPU.
In most cases, Block Worker has the worst performance among the three baseline methods, due to its inefficient memory usage. Between Thread Worker and Block Compare, the former employs too much parallelism, and the threads of each block try to access different parts of memory, which is less efficient compared with Block Compare, in which there is a limited number of concurrent comparisons (the number of blocks that can run simultaneously on the GPU).
\subsection{Scalability}
\label{sec:scalability}
In this section, we evaluate the scalability of ParaLiNGAM. In particular, we measure the runtime of our proposed algorithm against DirectLiNGAM for different numbers of variables ($p$) and samples ($n$).
We follow a procedure similar to ICA-LiNGAM \cite{shimizu2006linear} for the data generation mechanism.
First, we choose the number of parents for each variable and generate a random adjacency matrix $B$. In sparse graphs, the number of parents is uniformly selected from the interval $[1, 0.2 p]$, and for dense graphs the interval is $[0.25 p, 0.5p]$ ($p$ is the number of variables).
Next, the non-zero entries of $B$ are replaced with random values from the interval $[-0.95, -0.5] \cup [0.5, 0.95]$.
Next, we generate the exogenous noise $N_i$ for each variable by sampling from a Gaussian distribution and passing the samples through a power non-linearity (keeping the sign, but raising the absolute value to an exponent in the interval $[0.5, 0.8] \cup [1.2, 2]$).
Finally, we generate the samples of all variables recursively and permute the variables randomly. For this section, datasets are generated for $p = 100, 200, 500, 1000$ variables and sample sizes of $n = 1024, 2048, 4096, 8192$.
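The following sketch (our reading of the procedure above; all helper and parameter names are ours) generates one sparse dataset:
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

int main() {
    const int p = 100, n = 1024;    // sparse setting
    std::mt19937 gen(7);
    std::uniform_int_distribution<int> npar(1, p / 5);
    std::uniform_real_distribution<double>
        mag(0.5, 0.95), u01(0.0, 1.0),
        lo(0.5, 0.8), hi(1.2, 2.0);
    std::normal_distribution<double> gauss(0.0, 1.0);

    // strictly lower-triangular B: parents of variable i
    // are drawn among variables 0..i-1
    std::vector<std::vector<double>> B(p,
        std::vector<double>(p, 0.0));
    std::vector<int> idx(p);
    for (int i = 0; i < p; ++i) idx[i] = i;
    for (int i = 1; i < p; ++i) {
        std::shuffle(idx.begin(), idx.begin() + i, gen);
        int k = std::min(i, npar(gen));
        for (int j = 0; j < k; ++j)
            B[i][idx[j]] =
                (u01(gen) < 0.5 ? -1 : 1) * mag(gen);
    }
    // per-variable exponent in [0.5,0.8] U [1.2,2]
    std::vector<double> q(p);
    for (int i = 0; i < p; ++i)
        q[i] = (u01(gen) < 0.5) ? lo(gen) : hi(gen);

    // recursive generation in causal order; the rows of X
    // (and B consistently) would finally be permuted
    std::vector<std::vector<double>> X(p,
        std::vector<double>(n));
    for (int t = 0; t < n; ++t)
        for (int i = 0; i < p; ++i) {
            double z = gauss(gen);  // sign-preserving power
            double v = (z < 0 ? -1 : 1)
                     * std::pow(std::fabs(z), q[i]);
            for (int j = 0; j < i; ++j)
                v += B[i][j] * X[j][t];
            X[i][t] = v;
        }
    return 0;
}
\end{verbatim}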
As mentioned earlier, the proposed solution does not change the accuracy of the algorithm. Therefore, only the runtimes of the algorithms are shown in Fig. \ref{fig:syntheticTiming} for sparse and dense graphs.
\begin{figure*}[tp]
\centering
\includegraphics[width = 1.05 \linewidth]{fig/synTime}
\caption{Runtimes of ParaLiNGAM and DirectLiNGAM for a) sparse graphs and b) dense graphs for different number of variables and sample sizes.}
\label{fig:syntheticTiming}
\end{figure*}
In Fig. \ref{fig:syntheticTiming}, each column shows the runtime for $p=100, 200, 500, 1000$, and rows (a) and (b) show the runtimes for sparse and dense graphs, respectively. For the DirectLiNGAM algorithm, the runtimes on sparse graphs are similar to those on dense graphs, since the procedure and computations are the same: the independence test is performed for all pairs of variables in all iterations regardless of the graph density. Hence, the runtime depends only on the number of variables and samples. The runtime of DirectLiNGAM varies from 71.4 seconds to 658806 seconds ($\sim$ 7.6 days). ParaLiNGAM attains a much smaller runtime, varying from 119 milliseconds to 151 seconds. The speedup ratio of the proposed algorithm over the serial implementation ranges from 536X to 4657X. Furthermore, the speedup ratio increases as $p$ and $n$ increase.
\section{Implementation Details}
\label{sec:alg2}
Further details on the proposed parallel algorithm are presented in this section. First, a short background on CUDA is given in Section \ref{sec:CUDA}. Further details on our GPU implementation are discussed in Sections \ref{sec:GPUImp} and \ref{sec:schGPU}.
\subsection{CUDA}
\label{sec:CUDA}
CUDA is a parallel programming API for Nvidia GPUs. A GPU is a massively parallel processor with hundreds to thousands of cores.
CUDA follows a hierarchical programming model. At the top level, computationally intensive functions are specified by the programmer as CUDA \textbf{kernels}. For brevity, from now on, we write kernel instead of CUDA kernel. A kernel is specified as a sequential function for a single \textbf{thread} and is then launched for parallel execution on the GPU by specifying the number of concurrent threads.
Threads are grouped into \textbf{blocks}. A GPU kernel consists of a number of blocks, and every block consists of a number of threads.
In order to identify blocks within a kernel, and threads within a block, a set of indices is provided by the CUDA API; for instance, $blockIdx.x$ denotes the block index in dimension $x$ within a kernel.
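As a minimal illustration (ours, unrelated to the paper's actual kernels), the following kernel scales each row of a matrix, with one block per row and the block's threads striding over the row's entries:
\begin{verbatim}
// one block per row; threads stride over the row
__global__ void scaleRow(float* data, int n, float c) {
    int w = blockIdx.x;   // block (row) index
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        data[w * n + i] *= c;
}

// host side: launch p blocks of 256 threads each
// scaleRow<<<p, 256>>>(d_data, n, 2.0f);
\end{verbatim}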
\subsection{GPU Implementation}
\label{sec:GPUImp}
In this section, we present further details on the implementation of ParaLiNGAM on GPU hardware.
In the ParaLiNGAM algorithm, we use workers to handle the variables' tasks. In CUDA, we assign workers to blocks. Moreover, each block can divide its computations among parallel threads to improve the performance, e.g., when performing tasks like $CheckMessages$ and $Compare$. Herein, for brevity of notation, we denote $blockIdx.x$ by $w$.
Now we discuss two approaches to implement ParaLiNGAM on GPU.
The first approach is to assign fewer threads to each block, decreasing the computational power of each block, while running all of the blocks together.
Thereby, a whole iteration can be launched as one kernel. Moreover, we can perform the scheduling on the GPU by dedicating one block to the scheduler, i.e., all blocks but one skip the scheduling part based on their block IDs.
However, this solution has some drawbacks.
First, each worker requires its own exclusive memory; hence, all workers may not fit in GPU memory. As a result, this approach is not scalable, and we cannot utilize it for a large number of variables.
Moreover, the blocks are slower due to their smaller number of threads. As a result, this approach would be time-consuming even if all the blocks fit in the GPU.
The second approach is to run the $Compare$ and $CheckMessages$ tasks in separate kernels on the GPU while performing the scheduling task on the host (CPU). This approach is scalable and does not have the above problems. However, launching kernels is time-consuming, and we would launch two kernels separately for each step, which is not efficient.
We can, however, apply some modifications to the second approach in order to improve its performance, which are discussed in the next part.
\subsection{Scheduling on GPU}
\label{sec:schGPU}
In order to resolve the second approach's issues, one solution is to relax the synchronization between $Compare$ and $CheckMessages$. In Algorithm \ref{alg:ParaLiNGAM_FR}, we consider a barrier between these two functions (lines $13$ and $14$) so that workers update their scores by checking their messages only after all messages have been sent.
Removing the barrier causes some workers to receive their messages later, in subsequent steps. Hence, relaxing the synchronization delays the delivery of some messages; however, it halves the kernel launch overhead, which is quite beneficial. As a result, the workers (blocks) can compare and check for messages independently, and thus we can merge the $Compare$ and $CheckMessages$ kernels. Still, assigning each step to a kernel and evaluating the iteration status on the host is not an efficient option and causes too many kernel calls. Therefore, the scheduling procedure must be revised.
In ParaLiNGAM, a worker can perform just one comparison (if it is active) in each step. This limitation was due to the synchronization of $Compare$ and $CheckMessages$, which is now relaxed.
Hence, we can now divide an iteration into steps by threshold updates instead of by single comparisons per worker. In other words, workers can now continue their comparisons until they reach the threshold, instead of performing just one comparison.
To implement this solution, we move some of the scheduler's tasks to the workers. The scheduler has three tasks: checking the termination of an iteration, updating the threshold, and updating $\mathcal{C}$. Now, we discuss the modifications needed in these three tasks.
For the first task, i.e., checking the iteration's termination, worker $w$ can keep track of its finished comparisons by checking $\mathcal{D}[w]$.
Therefore, workers can continue to compare themselves with each other until they reach the threshold or finish their comparisons. Moreover, they can announce their state via the $finish$ flag.
The second task is updating the threshold, which is the mechanism that divides an iteration into steps.
In Algorithm \ref{alg:ParaLiNGAM_FR}, workers synchronize before updating the threshold.
As mentioned before, synchronizing within a kernel is not efficient. Hence, we update the threshold outside of the kernel, on the host.
In Section \ref{sec:scheduler}, the constant $c$ was introduced to control the amount of change in the threshold in each update.
In particular, higher values of $c$ increase the number of comparisons but can be more efficient due to fewer kernel calls. As a result, this parameter must be adjusted according to the number of tests, the test duration, and the kernel launch delays.
For the last task, workers can determine their comparison targets themselves by checking for unfinished comparisons in $\mathcal{D}$.
Now we discuss the changes in the main algorithms for implementing them on GPU, which are given in Algorithm \ref{alg:SchedulerV2} and Algorithm \ref{alg:GPUKernel}.
The main loop of Algorithm \ref{alg:ParaLiNGAM_FR} (lines $12-16$) is replaced by Algorithm \ref{alg:SchedulerV2}. In this algorithm, $\mathcal{C}$ is modified for more efficiency, and the $Compare$ and $CheckMessages$ functions, together with some parts of the $Scheduler$, are moved into $GPUKernel$ (Algorithm \ref{alg:GPUKernel}).
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\State $\mathcal{C} = [2, 3, ..., r, 1]$
\State $finish = False$
\Repeat
\State GPU: $finish, State = GPUKernel(\mathcal{W},
State)$
\State $\gamma *= c$
\Until{$finish$}
\end{algorithmic}
\caption{ParaLiNGAM code patch}
\label{alg:SchedulerV2}
\end{algorithm}
The description of $GPUKernel$ is given in Algorithm \ref{alg:GPUKernel}.
The main part of the algorithm is a loop in lines $1-13$. A worker's task is completed if it compares itself to all other workers, which is checked in line $13$.
In line $2$, as in line $14$ of Algorithm \ref{alg:ParaLiNGAM_FR}, workers check for their messages. Since the GPU has limited resources, it queues blocks that do not fit on its streaming multiprocessors (SMs). Hence, some workers launch later; checking messages as the first task helps them gain information from previously active workers and see whether their scores have already reached the threshold.
In lines $3-5$, workers check whether their scores have reached the threshold. In that case, they stop for this step and wait for the threshold update.
Then, in lines $6-8$, workers check for termination of the iteration; if they have finished their comparisons after checking messages, they set $finish$ to $True$.
Next, in lines $9-11$, workers choose their comparison targets by increasing $\mathcal{C}[w]$.
Workers then perform their comparisons in line $12$. Finally, if a worker finishes its comparisons without reaching the threshold, it sets $finish$ to $True$ to notify the host scheduler that this is the end of the current iteration.
In line $22$ of Algorithm \ref{alg:Scheduler}, we used a condition to avoid redundant comparisons. In the GPU implementation discussed here, comparisons are performed asynchronously, so detecting redundant comparisons is not as straightforward as before and requires a more involved mechanism. For brevity, these details are omitted from the algorithms; in short, we associate a flag with each comparison, and workers try to lock it with an atomicCAS operation.
If a worker reaches a comparison whose flag is already set, it skips that comparison and receives its result later through the messaging mechanism.
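For illustration, the following is a minimal CPU-side sketch of this claiming logic in Python; the guarded check-and-set below stands in for a single GPU atomicCAS on the per-comparison flag, and all names here are ours rather than part of the actual implementation.
\begin{verbatim}
import threading

class ComparisonFlags:
    """One flag per unordered pair (i, j); 0 = unclaimed, 1 = claimed."""
    def __init__(self, num_workers):
        self.flags = [[0] * num_workers for _ in range(num_workers)]
        self.lock = threading.Lock()

    def try_claim(self, i, j):
        # Returns True iff this worker wins comparison (i, j).
        a, b = (i, j) if i < j else (j, i)
        with self.lock:  # stands in for atomicCAS(&flag, 0, 1)
            if self.flags[a][b] == 0:
                self.flags[a][b] = 1
                return True
        return False  # already claimed: skip and await the result message
\end{verbatim}
A worker would call $try\_claim$ before running $Compare$; on failure it skips the pair and later receives the result through the messaging mechanism.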
\begin{algorithm}[h]
\begin{algorithmic}[1]
\Input $State$
\Output $finish$
\Block p
\Repeat
\State $\mathcal{S}[w] += CheckMessages(w, State)$\hfill //Check
\If {$\mathcal{S}[w] > \gamma$}\hfill //Evaluate
\State \textbf{Exit}
\EndIf
\If {$\mathcal{D}[w, :]$ is all $True$}
\State \textbf{break}
\EndIf
\Repeat \hfill // Compare
\State $\mathcal{C}[w] = (\mathcal{C}[w]+1)\%size(U)$
\Until{$\mathcal{D}[w][\mathcal{C}[w]]$ is $False$}
\State $\mathcal{S}[w] += Compare(w, \mathcal{C}[w], State)$
\Until{$\mathcal{D}[w, :]$ is all $True$}
\State $finish = True$
\end{algorithmic}
\caption{$GPUKernel$}
\label{alg:GPUKernel}
\end{algorithm}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{D}{iscovering} the underlying causal mechanisms in natural phenomena or human social behavior is one of the primary goals in artificial intelligence and machine learning. For instance, we may be interested in recovering causal relationships between different regions of the brain by processing fMRI signals \cite{huang2021diagnosis,sanchez2019estimating}, or in estimating causal strengths between genes in a gene regulatory network (GRN) by observing gene expression levels \cite{marbach2012wisdom,haury2012tigress}. Access to such causal relationships enables us to answer interventional or counterfactual questions, which has broad impact on designing truly intelligent systems \cite{pearl2018book}. The gold standard for causal discovery is conducting controlled experiments. Unfortunately, performing experiments in a system might be too costly or even infeasible \cite{ghassami2018budgeted}. As a result, there have been extensive studies in the causality literature on recovering causal relationships from merely observational data \cite{peters2017elements}.
Causal relationships among a set of variables can be represented by a directed acyclic graph (DAG) where there is a directed edge from variable $X$ to variable $Y$ if $X$ is a direct cause of $Y$. From the observational distribution, it can be shown that the true underlying causal graph can be recovered up to a Markov equivalence class (MEC) \cite{koller2009probabilistic}. There are two main approaches for recovering an MEC: constraint-based and score-based. In the constraint-based approach, the MEC is identified by performing a sufficient number of conditional independence (CI) tests on the observational distribution. PC \cite{spirtes2000causation} is a well-known algorithm for performing such CI tests efficiently; it recovers the MEC in polynomial time if the maximum degree of the causal graph is bounded by a constant. In the score-based approach, the goal is to find the class of graphs maximizing a likelihood-based score. Greedy equivalence search (GES) \cite{chickering2002optimal} is one of the main score-based algorithms, reconstructing the MEC by adding edges in a greedy manner.
As mentioned above, without further assumptions on the causal model, one can recover the causal graph only up to an MEC. To recover the causal graph uniquely, we need further assumptions on the causal mechanisms. For instance, if the causal mechanisms are non-linear and the exogenous noises are additive, then the causal structure can be identified uniquely \cite{hoyer2008nonlinear}. Moreover, if the causal mechanisms are linear, we can still recover the causal graph uniquely if the additive exogenous noises are non-Gaussian \cite{shimizu2006linear}. This model is commonly called the linear non-Gaussian acyclic model (LiNGAM) \cite{shimizu2006linear}. In~\cite{shimizu2006linear}, an algorithm based on independent component analysis (ICA), commonly called ICA-LiNGAM, was proposed to recover the true causal graph under the LiNGAM model. Later, a regression-based method called DirectLiNGAM~\cite{shimizu2011directlingam} was presented to mitigate issues with using the ICA algorithm. DirectLiNGAM has two main steps. In the first step, a causal order is obtained over the variables in the system. To do so, we compare any pair of variables $X$ and $Y$ by regressing $Y$ on $X$ and checking whether the residual is independent of $X$; a score is computed to measure the amount of dependency~\cite{hyvarinen2013pairwise}. Afterwards, we select the variable that is most independent of the residuals, i.e., the one having the minimum score among the remaining variables, and append it to the causal order. We call this variable the root of that iteration. Next, we remove this variable from the system by regressing it out, and repeat the above procedure until no variable remains. After obtaining the causal order, in the second step, we perform multiple linear regressions based on the causal order in order to recover the underlying causal graph.
The execution of constraint-based or score-based algorithms can become too time-consuming as the number of variables in the system increases \cite{zarebavani2019cupc}. There have been several recent efforts to accelerate causal structure learning algorithms on multi-core machines.
In the constraint-based approach, Le et al.~\cite{le2015fast} implemented a parallel version of a variant of the PC algorithm (called PC-stable) on multi-core CPUs, which reduces the runtime by an order of magnitude. Madsen et al. \cite{madsen2017parallel} proposed a method to perform conditional independence tests in parallel for the PC algorithm. For GPU hardware, Schmidt et al. \cite{schmidt2018order} proposed a method to parallelize a small part of the PC-stable algorithm. In \cite{zarebavani2019cupc}, Zare et al. proposed a GPU-based parallel algorithm for accelerating the whole PC-stable algorithm, parallelizing the conditional independence tests over the pairs of variables or the conditioning sets; experimental results showed significant speedups, up to a factor of 4000, on various real datasets. In \cite{schmidt2019out}, Schmidt et al. devised an out-of-core solution for accelerating PC-stable in order to handle extremely high-dimensional settings. Later, for discrete data, Hagedorn and Huegle \cite{hagedorn2021gpu} proposed a parallel PC algorithm on GPUs for learning causal structure. Recently, Srivastava et al. \cite{srivastava2020parallel} presented a parallel framework for learning causal structures based on discovering Markov blankets.
In the score-based approach, Ramsey et al. \cite{ramsey2017million} proposed a fast GES algorithm that accelerates score updates by caching the scores of previous steps; they also implemented a parallel version of it on multi-core CPUs. Furthermore, there is a recent parallel solution for other search algorithms in the score-based approach \cite{lee2019parallel}.
There are some recent studies whose main focus is evaluating the performance of causal structure learning algorithms in recovering the true underlying causal graph \cite{scutari2018learns,heinze2018causal}. It has been shown that the LiNGAM algorithm has comparable or better performance than most existing methods and is more suitable for high-dimensional settings \cite{heinze2018causal}. Unfortunately, the runtimes of both variants of the LiNGAM algorithm (ICA-LiNGAM and DirectLiNGAM) grow significantly as the number of variables increases. Thus, the current sequential implementations cannot be used for datasets with a large number of variables. To the best of our knowledge, there is no previous parallel implementation of the LiNGAM algorithm. In this paper, we propose a parallel algorithm, which we call ParaLiNGAM, for learning causal structure based on the DirectLiNGAM algorithm. Our experiments show that the first step of DirectLiNGAM is the computationally intensive one, and we focus on accelerating this step in this paper. Similar to DirectLiNGAM, we obtain the causal order sequentially over a number of iterations, while within each iteration we parallelize the process of finding the root variable.
The main contributions of the paper are given as follows:
\begin{itemize}
\item
We propose a threshold mechanism in order to reduce the number of comparisons in each iteration. In this mechanism, we consider an upper limit on the score of the root variable, and whenever a variable's score exceeds this limit, we do not perform further comparisons for that variable. Our experiments show that the threshold mechanism can save up to $93.1\%$ of the comparisons performed in DirectLiNGAM.
\item When we compare variable $X$ with $Y$, part of the computation is the same as when comparing $Y$ with $X$ in the reverse direction. Based on this observation, we employ a messaging mechanism to avoid redundant computations, which reduces runtimes by a factor of about two.
\item We derive mathematical formulations for the normalization and regression operations that are frequently used in the DirectLiNGAM algorithm. These formulations enable us to reduce the runtime and use memory more efficiently.
\item We evaluate ParaLiNGAM on various synthetic and real data. Experimental results show that the proposed algorithm can reduce the runtime of DirectLiNGAM by a factor of up to $4657$.
\end{itemize}
The rest of this paper is organized as follows. In Section \ref{sec:prelim}, we review some preliminaries on structural equation models, the LiNGAM model, and the DirectLiNGAM algorithm. In Section \ref{sec:alg}, we present the ParaLiNGAM algorithm for learning causal structures in the LiNGAM model. We provide some implementation details in Section \ref{sec:alg2}. We evaluate the performance of the ParaLiNGAM algorithm in Section \ref{sec:exp}. Finally, we conclude the paper in Section \ref{sec:conc}.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Structural equation models}
\label{sec:prelim:sem}
Structural equation models (SEMs) are mathematical models used to describe the data-generating process and the causal relations among variables \cite{bollen1989structural, pearl2000models}. In particular, an SEM consists of a collection of $p$ equations, where $p$ is the number of variables in the system. The causal mechanism assigning values to variable $X_j$, $1\leq j \leq p$, can be written as follows:
\begin{equation}
X_j= f_j(PA_j, N_j),
\end{equation}
where $PA_j$ denotes the parents of $X_j$, i.e., its direct causes, and $N_j$ is the exogenous noise corresponding to variable $X_j$. Exogenous noises are generated outside of the model, and their data-generating processes are not modeled in the SEM.
We can represent causal relationships among the variables in an SEM by a directed graph where there is a directed edge from $X_i$ to $X_j$ if $X_i \in PA_j$.
\begin{Example}
Consider the following SEM:
\begin{figure}[tp]
\centering
\includegraphics[width = 0.45 \columnwidth]{fig/Casuse_effect}
\caption{Example of an SEM.}
\label{fig:pre:SEM}
\end{figure}
\begin{equation}
\begin{gathered}
X_3 = N_3, \\
X_5 = f_5(X_3, N_5), \\
X_1 = f_1(X_5, N_1), \\
X_4 = f_4(X_1, N_4), \\
X_2 = f_2(X_3, X_4, N_2),
\end{gathered}
\label{eq:sem}
\end{equation}
where the corresponding causal graph is illustrated in \mRefFig{fig:pre:SEM}. As can be seen, there is a directed edge from a direct cause to its effect. For instance, $X_3$ is the direct cause of $X_5$ and there is a directed edge from $X_3$ to $X_5$.
\label{ex:base}
\end{Example}
\subsection{Linear Non-Gaussian Acyclic Model (LiNGAM)}
One of the common assumptions in the causality literature is that the causal relations between variables are acyclic, i.e., the corresponding causal graph is a \textbf{directed acyclic graph} (\textbf{DAG})\footnote{A directed acyclic graph is a graph whose edges are all directed and which contains no directed cycle.}. Under this assumption, there always exists a \textbf{causal order} of the variables $X_i$, $i\in\{1,\dotsc,p\}$, in the DAG such that no later variable in the causal order has a directed path to any earlier variable. We denote the position of each variable $X_i$ in the causal order by $k(i)$. For instance, for the causal graph in \mRefFig{fig:pre:SEM}, $k=[3,5,1,4,2]$
is a causal order.
As an additional assumption, one can consider that the functional relations of variables are linear. Thus, the model can be reformulated as follows:
\begin{equation}
X_i = \sum_{k(j)<k(i)}b_{ij}X_{j} + N_i,
\label{eq:base_linear}
\end{equation}
where $b_{ij}$ is the causal strength representing the magnitude of the direct causal effect of $X_j$ on $X_i$. Furthermore, it is assumed that the exogenous noises have zero mean and non-zero variance, and are independent of each other (i.e., there is no latent confounder in the system). We can rewrite Eq. \ref{eq:base_linear} in matrix form as follows:
\begin{equation}
X = BX + N,
\label{eq:base_mat}
\end{equation}
where $X$ and $N$ are $p$-dimensional random vectors, and $B$ is a $p \times p$ matrix of causal strengths. For instance, the SEM of \mRefFig{fig:pre:SEM} can be written as follows:
\begin{equation}
\begin{bmatrix}
X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 3 \\
0 & 0 & 6 & -3 & 0 \\
0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 \\
0 & 0 & 5 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5
\end{bmatrix}
+
\begin{bmatrix}
N_1 \\ N_2 \\ N_3 \\ N_4 \\ N_5
\end{bmatrix}
,
\label{eq:base_mat_ex}
\end{equation}
where the zero entries of $B$ indicate the absence of directed edges. Due to the acyclicity assumption, it can be shown that a simultaneous permutation of the rows and columns of $B$ according to a causal order converts it into a \textbf{strictly lower triangular} matrix \cite{bollen1989structural}.
For the example in \mRefFig{fig:pre:SEM}, we can rewrite the equations in the following form to make the matrix $B$ strictly lower triangular:
\begin{equation}
\begin{bmatrix}
X_3 \\ X_5 \\ X_1 \\ X_4 \\ X_2
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
5 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 \\
6 & 0 & 0 & -3 & 0 \\
\end{bmatrix}
\begin{bmatrix}
X_3 \\ X_5 \\ X_1 \\ X_4 \\ X_2
\end{bmatrix}
+
\begin{bmatrix}
N_3 \\ N_5 \\ N_1 \\ N_4 \\ N_2
\end{bmatrix}
.
\label{eq:base_mat_ex_prder}
\end{equation}
It can be shown that the causal structure cannot be recovered uniquely if the distributions of the exogenous noises are Gaussian~\cite{pearl2000models}.
However, in~\cite{shimizu2006linear}, it was proved that the model can be fully identified from observational data if all the exogenous noises are non-Gaussian. The authors called this non-Gaussian version of the linear acyclic SEM the \textbf{Linear Non-Gaussian Acyclic Model} (\textbf{LiNGAM}). In the rest of this paper, we assume that the data-generating process obeys the LiNGAM assumptions.
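As a small illustration, the following NumPy snippet generates samples from the linear model $X = BX + N$ by solving $X = (I-B)^{-1}N$, using the matrix $B$ from Eq. \ref{eq:base_mat_ex}; the uniform noise is an arbitrary non-Gaussian choice made here for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 10_000
B = np.array([[0, 0, 0, 0, 3],
              [0, 0, 6, -3, 0],
              [0, 0, 0, 0, 0],
              [4, 0, 0, 0, 0],
              [0, 0, 5, 0, 0]], dtype=float)
# Zero-mean, non-Gaussian exogenous noise (uniform on [-1, 1]).
N = rng.uniform(-1.0, 1.0, size=(p, n))
# Solve X = B X + N, i.e., X = (I - B)^{-1} N; columns are samples.
X = np.linalg.solve(np.eye(p) - B, N)
\end{verbatim}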
\subsection{Causal Structure Learning Algorithms for LiNGAM}
\label{sec:baseLiNGAM}
ICA-LiNGAM~\cite{shimizu2006linear} was the first algorithm for the LiNGAM model; it applies an independent component analysis (ICA) algorithm to the observed data and tries to find the strictly lower triangular matrix $B$ that best fits them. This algorithm is fast thanks to well-developed ICA techniques. However, it has several drawbacks, e.g., getting stuck in local optima, scale-dependent calculations, and usually estimating dense graphs even for sparse ground-truth causal graphs~\cite{shimizu2011directlingam}.
\textbf{DirectLiNGAM} was proposed in~\cite{shimizu2011directlingam} to resolve ICA-LiNGAM's issues; it converges to an acceptable approximation of the matrix $B$ in a fixed number of steps.
Moreover, prior knowledge can be provided to the algorithm, which can improve the accuracy of recovering the correct model. However, the computational cost of DirectLiNGAM is higher than that of ICA-LiNGAM, and it cannot be applied to large graphs~\cite{shimizu2011directlingam}.
The DirectLiNGAM algorithm consists of two main steps. In the first step, the causal order of the variables is estimated by repeatedly searching for a root in the remaining graph and regressing out its effect on the other variables. In the second step, the causal strengths are estimated using conventional covariance-based regression according to the recovered causal order~\cite{shimizu2011directlingam}. Experimental results show that the second step is fairly fast, since it only performs linear regressions; the first step, however, is computationally intensive, and we focus on accelerating it in this paper.
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}$
\Output $K$
%
\State $U =\{1,\cdots, p\}$
\State $K=\emptyset$
\Repeat
\State $root = FindRoot(\mathcal{X}, U)$
\State Append $root$ to $K$
\State Remove $root$ from $U$
\State $\mathcal{X} = RegressRoot(\mathcal{X}, U, root)$
\Until{$U$ is empty}
\State Estimate causal strengths $B$ from $K$
\end{algorithmic}
\caption{DirectLiNGAM}
\label{alg:baseLiNGAM}
\end{algorithm}
The description of DirectLiNGAM algorithm is given in \mRefAlg{alg:baseLiNGAM}. The input of the algorithm is matrix $[\mathcal{X}]_{p\times n}$
whose $i$-th row, $x_i$, contains $n$ samples from variable $X_i$.
The output of the algorithm is $K$, a causal order list of the variables. First, we initialize $U$ with the list of all variable indices and set $K$ to an empty list (lines $1-2$). Next, $K$ is filled with variables from $U$, using comparisons between variables to form a causal order. Recovering a causal order takes $p$ (the size of $U$) iterations. In each iteration, the most independent variable in $U$ (the $root$ of that iteration) is determined by the $FindRoot$ function. The $root$ then moves from $U$ to $K$. Next, the data of the remaining variables in $U$ are updated by regressing them on the root (the $RegressRoot$ function), which has been shown to preserve the correct causal order among the remaining variables~\cite{shimizu2011directlingam}. Finally, the matrix $B$ is recovered from the causal order in $K$ using conventional covariance-based regression methods.
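As a compact, hedged sketch of this first step (not the authors' implementation), the loop below mirrors \mRefAlg{alg:baseLiNGAM} in NumPy; the $score$ argument is a placeholder for the independence score computed by $FindRoot$, e.g., built from the measure of Eq. \ref{eq:Itest} below.
\begin{verbatim}
import numpy as np

def causal_order(X, score):
    """X is a (p, n) data matrix; score(X, i, U) returns the
    independence score of variable i among the variables in U."""
    p = X.shape[0]
    U, K = list(range(p)), []
    X = X.copy()
    while U:
        root = min(U, key=lambda i: score(X, i, U))  # FindRoot
        K.append(root)
        U.remove(root)
        for i in U:  # RegressRoot: regress the root out of each X_i
            C = np.cov(X[i], X[root])
            X[i] = X[i] - (C[0, 1] / C[1, 1]) * X[root]
    return K
\end{verbatim}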
\begin{algorithm}[tp]
\begin{algorithmic}[1]
\Input $\mathcal{X}, U$
\Output $root$
%
\If {$U$ has only one element}
\State return the element
\EndIf
\State $\mathcal{S} = [\textbf{0}]_{|U|}$
\For {$i$ in $U$}
\For {$j$ in $U\backslash\{i\}$}
\State $Normalize(x_i)$
\State $Normalize(x_j)$
\State $r_i^{(j)} = Regress(x_i, x_j)$
\State $r_j^{(i)} = Regress(x_j, x_i)$
\State $Normalize(r_i^{(j)})$
\State $Normalize(r_j^{(i)})$
\State $\mathcal{S}[i] += \min\{0, I(x_i, x_j, r_i^{(j)}, r_j^{(i)})\}^2$
\EndFor
\EndFor
\State $root = U[\arg\min(\mathcal{S})]$
\end{algorithmic}
\caption{$FindRoot$}
\label{alg:baseLiNGAM_FR}
\end{algorithm}
The purpose of the $FindRoot$ function (see \mRefAlg{alg:baseLiNGAM_FR}) is to find the variable most independent of the residuals by comparing all pairs of variables in $U$. Each variable in $U$ has a score with initial value zero, and all scores are stored in an array $\mathcal{S}$ (line $4$).
First, the samples of each variable are normalized. Next, each variable $X_i$ is regressed on every other variable $X_j$ in $U$, and the resulting residuals are normalized. Finally, an independence test $I$ is performed, and its result is added to the score of variable $X_i$. In~\cite{hyvarinen2013pairwise}, a likelihood ratio test is proposed that assigns a real number to the pair of variables:
\begin{equation}
I(x_i, x_j, r_i^{(j)}, r_j^{(i)}) = H(x_j) + H(r_i^{(j)}) - H(x_i) - H(r_j^{(i)}),
\label{eq:Itest}
\end{equation}
where $H$ is the differential entropy, which can be approximated by a computationally simple function as follows~\cite{hyvarinen2013pairwise, hyvarinen1998analysis}:
\begin{equation}
\!\begin{multlined}[t]
\hat{H}(u) = H(v) - k_1[E\{\log\cosh(u)\}- \beta]^2 \\
- k_2[E\{u\exp(-u^2/2)\}]^2.
\end{multlined}
\label{eq:HApp}
\end{equation}
In the above equation, $H(v) = \frac{1}{2}(1 + \log2\pi)$ is the entropy of the standardized Gaussian distribution, and the other constants can be set to:
\begin{center}
$k_1 \approx 79.047,$\\
$k_2 \approx 7.4129,$\\
$\beta \approx 0.37457.$\\
\end{center}
A positive (negative) value of $I$ indicates the independence (dependence) of the variable compared with the other one. When aggregating the score of each variable to determine its total independence from the others, only the amount of dependence is considered.
In other words, only negative values of $I$ are considered, and their squares are added to the score.
The variable with the minimum score is selected as the root variable.
Please note that we refer to lines $9-13$ as the \textbf{$Compare$} function and use it as the base function for comparing two variables in the next sections.
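For concreteness, the following NumPy transcription of Eqs. \ref{eq:Itest} and \ref{eq:HApp} could serve as one implementation of $Compare$'s scoring; it assumes the inputs have already been normalized to zero mean and unit variance.
\begin{verbatim}
import numpy as np

K1, K2, BETA = 79.047, 7.4129, 0.37457
H_V = 0.5 * (1.0 + np.log(2.0 * np.pi))  # entropy of standard Gaussian

def entropy_hat(u):
    t1 = np.mean(np.log(np.cosh(u))) - BETA
    t2 = np.mean(u * np.exp(-u ** 2 / 2.0))
    return H_V - K1 * t1 ** 2 - K2 * t2 ** 2

def I_measure(x_i, x_j, r_i_j, r_j_i):
    return (entropy_hat(x_j) + entropy_hat(r_i_j)
            - entropy_hat(x_i) - entropy_hat(r_j_i))

# Score update for variable i (line 13 of FindRoot):
# S[i] += min(0.0, I_measure(x_i, x_j, r_i_j, r_j_i)) ** 2
\end{verbatim}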
\section{Introduction}
We study the maximum diameter of connected graphs in terms of other graph parameters such as order, minimum degree, etc.
Several papers \cite{Amar, EPPT, Gold, Moon} have shown that:
\begin{theorem}\label{th:ori}
For a fixed minimum degree $\delta \geq 2$, every connected graph $G$ of order $n$ satisfies
$\operatorname{diam}(G) \leq \frac{3n}{\delta+1}+O(1)$, as $n\rightarrow\infty$.
\end{theorem}
This upper bound is sharp (even for $\delta$-regular graphs \cite{Smyth}), but the constructions contain complete subgraphs whose order increases with $n$.
Erd\H{o}s, Pach, Pollack, and Tuza \cite{EPPT} conjectured that the upper bound in Theorem~\ref{th:ori} can be improved, if large cliques are excluded:
\begin{conjecture}[\cite{EPPT}]
\label{con:Erdosetal}
Let $r,\delta\geq 2$ be fixed integers and let $G$ be a connected graph of order $n$ and minimum degree $\delta$.
\begin{enumerate}[label={\upshape (\roman*)}]
\item\label{conpart:even} If $G$ is $K_{2r}$-free and $\delta$ is a multiple of
$(r-1)(3r+2)$ then, as $n\rightarrow \infty$,
\begin{eqnarray*}
\operatorname{diam}(G) &\leq& \frac{2(r-1)(3r+2)}{(2r^2-1)}\cdot \frac{n}{\delta} + O(1)\\
&=&\left(3-\frac{2}{2r-1}-\frac{1}{(2r-1)(2r^2-1)}\right)\frac{n}{\delta}+O(1).
\end{eqnarray*}
\item\label{conpart:odd} If $G$ is $K_{2r+1}$-free and $\delta$ is a multiple of $3r-1$, then, as $n\rightarrow \infty$,
\[ \operatorname{diam}(G) \leq \frac{3r-1}{r}\cdot \frac{n}{\delta} + O(1)=\left(3-\frac{2}{2r}\right)\frac{n}{\delta}+O(1). \]
\end{enumerate}
\end{conjecture}
Furthermore, they constructed examples showing that the above conjecture, if true, is sharp, and they proved
part (ii) of the conjecture for $r=1$.
Czabarka, Dankelmann and Sz\'ekely \cite{dankelmanos} arrived at the
conclusion of Conjecture \ref{con:Erdosetal}~\ref{conpart:odd} for $r=2$ under a stronger hypothesis:
\begin{theorem}
\label{th:CDS}
For every connected $4$-colorable graph $G$ of order $n$ and
minimum degree $\delta\ge 1$,
\( \operatorname{diam}(G) \leq \frac{5n}{2\delta}-1. \)
\end{theorem}
Czabarka, Singgih and Sz\'ekely \cite{counterexpaper} gave an infinite family of $(2r-1)$-colorable (hence $K_{2r}$-free) graphs with diameter
$\frac{(6r-5)(n-2)}{(2r-1)\delta+2r-3}-1$, providing a counterexample for Conjecture \ref{con:Erdosetal}~\ref{conpart:even} for
every $r\geq 2$ and $\delta> 2 (r-1)(3r+2)(2r-3)$.
The question whether Conjecture \ref{con:Erdosetal}~\ref{conpart:even} holds in the
range $(r-1)(3r+2)\le\delta\le 2(r-1)(3r+2)(2r-3)$ remains open.
The counterexample led Czabarka {\sl et al.} \cite{counterexpaper}
to the modified conjecture below, which no longer requires cases for the parity of the order of the excluded complete subgraphs:
\begin{conjecture}[\cite{counterexpaper}] \label{con:7o3}
For every $k\ge 3$ and $\delta\ge \lceil\frac{3k}{2}\rceil-1$,
if $G$ is a $K_{k+1}$-free (under a stronger hypothesis, $k$-colorable) connected graph of order $n$ and minimum degree at least $\delta$,
$\operatorname{diam}(G)\leq \left(3-\frac{2}{k}\right)\frac{n}{\delta}+O(1)$.
\end{conjecture}
Czabarka, Singgih and Sz\'ekely \cite{kcolorable} showed that the extremal graphs for the diameter maximization problem of Conjecture~\ref{con:7o3}
include graphs blown up from some very specific structures, called {\sl canonical clump graphs}. Furthermore, \cite{kcolorable} showed using the
weak duality theorem of linear programming that providing a sufficiently good solution for a dual problem on canonical clump graphs gives an upper bound for the
diameter of graphs blown up from canonical clump graphs (see Theorem~\ref{th:tool}), and hence an upper bound for the diameter maximization problem of Conjecture~\ref{con:7o3}.
Using this method, they proved:
\begin{theorem}[\cite{kcolorable}] \label{th:upperbound} Assume $k\ge 3$. If $G$ is a connected $k$-colorable graph of order $n$ and minimum degree at least $\delta$, then
\[ \operatorname{diam}(G)\leq \frac{3k-4}{k-1}\cdot\frac{n}{\delta}-1=\left(3-\frac{1}{k-1}\right)\frac{n}{\delta}-1. \]
\end{theorem}
\noindent Czabarka, Singgih and Sz\'ekely \cite{kcolorable} also made a slight improvement on Theorem~\ref{th:upperbound} for $3$-colorable graphs, but with a different argument.
In this paper we give a common short proof of Conjecture~\ref{con:7o3} (under the stronger hypothesis) for $k=3$ and $4$ (the latter being Theorem~\ref{th:CDS} of Czabarka, Dankelmann and Sz\'ekely \cite{dankelmanos}) using the approach above:
\begin{theorem}\label{th:main} Assume $k=3$ or $4$. If $G$ is a connected $k$-colorable graph of order $n$, and of minimum degree at least $\delta\ge 1$, then $\operatorname{diam}(G)\le\left(3-\frac{2}{k}\right)\frac{n}{\delta}-1.$
\end{theorem}
The main tool of the proof is still the use of canonical clump graphs, however, we focus on an even smaller class, {\sl strongly canonical clump graphs}, of which blown up copies
are still present among the extremal graphs for the diameter maximization problem of Conjecture~\ref{con:7o3}, as shown in Section~\ref{sec:clump}.
We partition the strongly canonical clump graph into segments of three types. Weighting the vertices such that the total weight of the neighbors of any vertex is at most $1$
and the average weight of a layer in each segment is $\frac{k}{3k-2}$ finishes the proof.
When $k\in\{3,4\}$, Type 1 and Type 2 segments (defined in Section \ref{sec:definitions}) have a very limited structure, as shown in Lemma~\ref{lm:main}.
For $k\ge 5$ we have examples of segments that cannot be weighted according to this scheme, so new ideas are needed.
\section{Clump Graphs}\label{sec:clump}
Given a $k$-colorable connected graph $G$ of order $n$ and minimum degree at least $\delta$, choose a vertex $x$ whose eccentricity
is $\operatorname{diam}(G)$.
Take a \emph{fixed} good $k$-coloring of $G$. Let \emph{layer} $L_i$ denote the set of vertices at distance $i$ from $x$, and a \emph{clump} in $L_i$ be the set of vertices in $L_i$ that have the same color.
The number of layers is $\operatorname{diam}(G)+1$. We call a graph \emph{layered}, if such a vertex $x$ and the distance layers $L_0=\{x\},L_1,\ldots, L_D$ are given.
Let $c(i) \in \{1,2,\ldots, k\}$ denote the number of colors used in layer $L_i$ by our fixed coloration. We can assume without loss of generality that
any two vertices in layer $L_i$ in $G$, which are differently colored, are joined by an edge in $G$, and also that two vertices in consecutive layers, which are differently colored, are also joined by an edge in $G$. We call this assumption \emph{saturation} with respect to the fixed good $k$-coloring.
Assuming saturation causes no loss of generality, as adding these edges does not decrease degrees, keeps the fixed good $k$-coloration, and does not reduce
the diameter, while making the graph more structured for our convenience.
From the layered and saturated graph $G$ above,
we create an
\emph{unweighted clump graph} $H=H(G)$. Vertices of $H$ correspond to the clumps of $G$. Two vertices of $H$ are connected by an edge if there were edges between the corresponding clumps in $G$. $H$ is naturally $k$-colored and layered, based on the coloration and layering of $G$. With a slight abuse of notation, we denote the layers of $H$ by
$L_i$ as well.
To create a \emph{weighted clump graph},
we assign positive integer \emph{weights} to each vertex of the unweighted clump graph. Blowing up each vertex of $H$ into as many copies as its weight, we obtain a larger $k$-colorable graph
of the same diameter (we do not put edges between successors of the same vertex). In case the weights are the cardinalities of the clumps in $G$, after the blow-up of $H=H(G)$ we get back $G$. The degree of a vertex $v$ in a blow-up
of $H$, where $v$ is a successor of a vertex $w$ of $H$ by blow-up, is the sum of the weights of neighbors of the vertex $w$ in $H$. The number of vertices in a blow-up of $H$ is the sum of the weights of all vertices in $H$.
The following theorem was proven in \cite{kcolorable}:
\begin{theorem}[\cite{kcolorable}] \label{th:canonical} Assume $k\ge 3$. Let $G'$ be a $k$-colorable connected graph of order $n$, diameter $D$ and minimum degree at least $\delta$.
Then there is a saturated $k$-colored
and layered connected graph $G$ of the same parameters $n$ and $\delta$, with layers $L_0,\ldots,L_D$, for which the following hold for every
$i$ $(0\le i\le D-1)$:
\begin{enumerate}[label={\upshape (\roman*)}]
\item\label{part:1k} If $c(i)=1$, then $c(i+1)\le k-1$.
\item\label{part:manycolor} The number of colors used to color the set $L_i\cup L_{i+1}$ is $\min(k,c(i)+c(i+1))$. In particular, when
$c(i)+c(i+1)\le k$, then $L_i$ and $L_{i+1}$ do not share any color.
\item\label{part:k1} If $c(i)=k$, then $i\ge 2$ and $c(i+1)\ge 2$.
\item\label{part:singleton} If $|L_i|>c(i)$, i.e., $L_i$ contains two vertices of the same color, then $i>0$ and $c(i)+\max\bigl(c(i-1),c(i+1)\bigr)\geq k$.
\end{enumerate}
\end{theorem}
{\sl Canonical clump graphs} were defined in \cite{kcolorable} as $H=H(G)$ clump graphs, where $G$ satisfies the conclusions of Theorem~\ref{th:canonical}. Now we define
{\sl strongly canonical clump graphs} for $D\geq 2$ as $H=H(G)$ canonical clump graphs (i.e., $G$ satisfies the conclusions of Theorem~\ref{th:canonical}), and in addition,
$c(0)=c(D)=1$.
It is not difficult to see the following: if the graph $G'$ in the assumption of Theorem~\ref{th:canonical}
is layered with $|L_0'|=1$ and $c'(D)=1$,
then the proof of Theorem~\ref{th:canonical} in \cite{kcolorable} provides a layered $G$ with $|L_0|=1$ (and hence $c(0)=1$), and $c(D)=1$.
Based on this observation, the following lemma implies that to resolve Conjecture~\ref{con:7o3} (or proving Theorem~\ref{th:main}), we may assume that $G$ has a
strongly canonical clump graph.
\begin{lemma}\label{lm:cD1} Assume $k\ge 3$ and $D\geq 2$. Let $G'$ be a $k$-colored layered connected graph of order $n$, diameter $D$, and minimum degree at least $\delta$,
with layers $L'_0,\ldots,L'_D$.
Then
there is a $k$-colored layered connected graph $G$ of the same parameters, with layers $L_0,\ldots,L_D$, for which $c(0)=c(D)=1$, and for each
$i$ $(0\le i\le D-2)$, we have $c'(i)=c(i)$ and $L'_i=L_i$.
\end{lemma}
\begin{proof}
As $|L'_0|=1$ is necessary in a layered graph, we must have $c'(0)=1$, and if $c'(D)=1$, the choice $L_i=L'_i$ suffices. If $c'(D)>1$, pick a color $A$ in $L_D'$; if possible, pick a color
that also appears in $L_{D-2}'$. This ensures that
for every color $B\ne A$ in $L_D'$ there is a color $C\ne B$ in $L_{D-2}'$ (where $C=A$ if $A$ appears in $L_{D-2}'$; otherwise any color in $L_{D-2}'$ works). Create a layered graph $G$ from $G'$ by moving all vertices in $L_{D}'$
that are not colored $A$ to the next-to-last layer, which will be $L_{D-1}$, and connecting them to all vertices in $L_{D-2} = L_{D-2}'$ of a different color. Note that
for every vertex of $L_{D-1}$, there is at least one such vertex.
As we only changed the number of vertices in layers $D-1$ and $D$, and did not change the coloration of the vertices, the claim follows.
\end{proof}
\section{Duality}
Let $\mathcal{H}_{k,D,\delta}$ denote the family of unweighted canonical clump graphs of
diameter $D$ that arises from connected $k$-colorable graphs $G$ with diameter $D$ and minimum degree at least $\delta$, when the order of $G$ is unspecified.
We will rely on the following result from~\cite{kcolorable}:
\begin{theorem}
\label{th:tool} {\rm (\cite{kcolorable})}
Fix $k\ge 3$. Assume that there exist constants $\tilde{u}>0$ and $C\ge 0$ such that for all $D$ and $\delta$, and for all $H\in\mathcal{H}_{k,D,\delta}$, the optimum of the linear program
\[
\text{Maximize } \delta\cdot \sum_{y\in V(H)} u(y),
\]
subject to the condition
\begin{equation}
\forall x\in V(H) \,\,\,\,\,\,\,\, \sum_{y\in V(H)\, : \, xy\in E(H)} u(y)\le 1 \label{dualcond}
\end{equation}
is at least
\[
\tilde{u}\delta D+C.
\]
Then for any $k$-colorable graph $G$ with minimum degree $\delta$ on $n$ vertices, we have
\[
\operatorname{diam}(G) \le \frac{1}{\tilde{u}}\frac{n}{\delta}-\frac{C}{\tilde u}.
\]
\end{theorem}
In Theorem~\ref{th:tool} and in its proof, we may replace the family of canonical clump graphs $\mathcal{H}_{k,D,\delta}$ with the family of strongly canonical clump graphs,
keeping all arguments valid.
\section{Some definitions and observations} \label{sec:definitions}
Recall that we use the sloppy notation $L_i$ for the layers of the clump graph $H(G)$ as well, not just for the layers of $G$. Hence $c(i)=|L_i|$, if $L_i$ denotes a layer
of the clump graph. Based on the arguments of Section~\ref{sec:clump}, we have:
\begin{claim}\label{def:strongcan} An unweighted $k$-colorable strongly canonical clump graph with layers $L_0,\ldots,L_D$ satisfies the following properties:
\begin{enumerate}[label={\upshape (\roman*)}]
\item\label{part:ends} $|L_0|=|L_D|=1$,
\item\label{part:nok1} If $|L_i|=k$, then $2\le i \le D-1$ and $\min(|L_{i-1}|,|L_{i+1}|)\ge 2$, and
\item\label{part:match} For $i\in[D]$, the edges that do not appear between $L_{i-1}$ and $L_i$ form a matching of size $\max(k,|L_{i-1}|+|L_i|)-k$.
\end{enumerate}
\end{claim}
For the following definition, and also for the rest of this section, assume that we are given
a $k$-colorable canonical clump graph $H$ with layers $L_0,\ldots ,L_D$. We define for convenience two additional layers, as $L_{-1}=L_{D+1}=\emptyset$.
For a vertex $x\in V(H)$, let $N(x)$ denote the set of neighbors of $x$.
\begin{definition} For each $i:0\le i\le D$, define the set $S_i=\{x\in L_i: L_{i-1}\cup L_{i+1}\subseteq N(x)\}$. We call a
layer $L_i$ \emph{big} if $|S_i|>\frac{k}{2}$. A layer is \emph{small} if it is not big.
\end{definition}
Note that if $L_i$ is big, then $i\notin\{0,D\}$. We set $S_{-1}=S_{D+1}=\emptyset$, in accordance with $L_{-1}=L_{D+1}=\emptyset$.
\begin{lemma}\label{lm:basic} Assume $D\geq 2$. Let $H$ be an unweighted $k$-colorable strongly canonical clump graph with layers $L_0,\ldots,L_D$.
The following is true for each $i:0\le i\le D$:
\begin{enumerate}[label={\upshape (\roman*)}]
\item\label{part:sizes} $|L_i|\le k-\max(|S_{i-1}|,|S_{i+1}|)$,
\item\label{part:ub} $|S_i|\le k-1$,
\item\label{part:type} if $L_i$ is big, then $1\le i\le D-1$ and $L_{i-1},L_{i+1}$ are small,
\item\label{part:small} if $|L_i|=1$, then $L_i=S_i$,
\item\label{part:type3} $\max(|L_i\setminus S_i|,|L_{i+1}\setminus S_{i+1}|)\le k-|S_i|-|S_{i+1}|$,
\item\label{part:big} if $|S_i|=k-1$, then $L_i=S_i$ and for $j=i\pm 1$, $|L_j|=|S_j|=1$,
\item\label{part:last} if $k\in\{3,4\}$ and $L_i$ is big, then $|S_i|=k-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{part:sizes} follows from the facts that $S_{i-1}\cup L_i$, and also $S_{i+1}\cup L_i$, forms a complete subgraph in $k$-colorable graph $H$.
\ref{part:ub} follows from \ref{part:sizes} and the fact that $L_{i-1}\cup L_{i+1}$ contains at least one vertex.
\ref{part:type} follows from \ref{part:sizes}.
\ref{part:small}: as $|L_i|=1$, Claim~\ref{def:strongcan}~\ref{part:nok1} gives $\max(|L_{i-1}|,|L_{i+1}|)\le k-1$. By Claim~\ref{def:strongcan}~\ref{part:match}, the vertex in $L_i$ is adjacent to every vertex in $L_{i-1}\cup L_{i+1}$.
For \ref{part:type3},
$S_{i}\cup S_{i+1}\cup (L_i\setminus S_i)$ forms a complete graph in the $k$-colorable graph $H$, and hence $|L_i\setminus S_i|\leq k-|S_i|-|S_{i+1}|$, and similarly,
$S_{i}\cup S_{i+1}\cup (L_{i+1}\setminus S_{i+1})$ forms a complete graph, and hence $|L_{i+1}\setminus S_{i+1}|\leq k-|S_i|-|S_{i+1}|$.
For \ref{part:big},
if $|S_i|=k-1$, then $1\leq i\leq D-1$. By~\ref{part:sizes}, $|L_{i-1}|=|L_{i+1}|=1$, and by Claim~\ref{def:strongcan}~\ref{part:nok1}, $|L_i|\le k-1$, and by $k-1=|S_i|\leq |L_i|\le k-1$, $S_i=L_i$.
For \ref{part:last},
if $L_i$ is big, then by definition $\frac{k}{2}<|S_i|$. By \ref{part:ub} $|S_i|\le k-1$. For $k\in\{3,4\}$, these give $|S_i|=k-1$.
\end{proof}
\begin{definition} Let $H$ be an unweighted $k$-colorable strongly canonical clump graph with layers $L_0,\ldots,L_D$. If for some $s\geq 1$ the contiguous segment of layers
$L_i,L_{i+1},\ldots,L_{i+2s}$ satisfies all the following conditions:
\begin{enumerate}[label={\upshape (\roman*)}]
\item for each $j:1\le j\le s$ the layer $L_{i+2j-1}$ is big (thus, $L_{i+2j-2},L_{i+2j}$ are small),
\item $i= 0$ or $L_{i-1}$ is small,
\item $i+2s= D$ or $L_{i+2s+1}$ is small,
\end{enumerate}
then we say that the contiguous segment is Type 1, if $s=1$, and Type 2, if $s>1$.
\end{definition}
\begin{definition} Let $H$ be an unweighted $k$-colorable strongly canonical clump graph with layers $L_0,\ldots,L_D$. Assume that $t\geq 0$. We say that the contiguous segment of layers
$L_i,L_{i+1},\ldots,L_{i+t}$ is Type 3, if the following hold:
\begin{enumerate}[label={\upshape (\roman*)}]
\item for each $j:i\le j\le i+t$ the layer $L_{j}$ is small,
\item if $i\ne 0$ then $i> 2$ and $L_{i-2}$ is big (thus, $L_{i-1},L_{i-3}$ are small),
\item if $i+t\ne D$ then $i+t<D-2$ and $L_{i+t+2}$ is big (thus, $L_{i+t+1},L_{i+t+3}$ are small).
\end{enumerate}
\end{definition}
Observe that in a Type 3 segment every layer is small.
The following lemma follows easily from the definition of strongly canonical clump graphs and Lemma~\ref{lm:basic}.
\begin{lemma}\label{lm:main}
Let $H$ be an unweighted $k$-colorable strongly canonical clump graph. Then the layers $L_{0},\ldots,L_D$ can be partitioned into segments of Type 1, Type 2 and Type 3. Moreover,
if $k\in\{3,4\}$ and $L_j$ is a layer in a Type 1 or Type 2 segment, then $L_j=S_j$ and $|L_j|\in\{1,k-1\}$.
\end{lemma}
\section{Proof of Theorem~\ref{th:main}}
Assume $k\in\{3,4\}$, and let $H$ be an unweighted $k$-colorable strongly canonical clump graph. By Theorem~\ref{th:tool}, it is enough to find a dual weighting $u$ of the vertices of $H$, which satisfies the conditions of that theorem and has total weight
$(D+1)\frac{k}{3k-2}$. Fix a partition of the layers of $H$ into segments of Type 1, Type 2 and Type 3 according to Lemma~\ref{lm:main}.
For brevity, we will say that layer $L_i$ is of Type $j$ if $L_i$ falls into a segment of Type $j$.
Consider a vertex $v$ in a layer $L_i$. Set $u(v)$ as follows:
\begin{itemize}
\item If $L_i$ is of Type 1:
$u(v)=\begin{cases} \frac{2}{3k-2}, & {\rm if\ } |L_i|=k-1, \\ \frac{k+2}{2(3k-2)}, & {\rm otherwise}. \end{cases}$
\item
If $L_i$ is of Type 2:\\
$u(v)=\begin{cases} \frac{1}{2(k-1)} ,& {\rm if\ } |L_i|=k-1,\\
\frac{k+2}{2(3k-2)}, & {\rm if\ } |L_i|=1 \hbox{ and } L_i \hbox{\rm \ is not the first or last layer of the segment,}\\
\frac{3k+2}{4(3k-2)}, & \hbox{otherwise}.
\end{cases}$\\
Note that as $k\ge 3$, $\frac{k+2}{2(3k-2)}<\frac{3k+2}{4(3k-2)}<\frac{k}{3k-2}$.
\item
If $L_i$ is of Type 3:\\
$u(v)=\begin{cases}
\frac{k}{(3k-2)|L_i|}, &{\rm if\ } |L_i|\le\frac{k}{2}\\
\frac{2}{3k-2} ,& {\rm if\ } |L_i|>\frac{k}{2} {\rm \ and\ } v\in S_i\\
\frac{k-2|S_i|}{(3k-2)(|L_i|-|S_i|)} , & \hbox{otherwise}.\end{cases}$\\
Note that as $|S_i|\le\frac{k}{2}$, we get $u(v)\ge 0$. Also, $u(v)\geq \frac{2}{3k-2}$ if $|L_i|\le\frac{k}{2}$ or $v\in S_i$.
\end{itemize}
We define the weight $u(X)$ of a vertex set $X$ as $\sum_{v\in X} u(v)$.
It is easy to check that for any Type 3 layer $L_i$, $u(L_i)=\frac{k}{3k-2}$. Also, first and last layers in a segment of any type have weight at most $\frac{k}{3k-2}$.
If $L_i,L_{i+1},L_{i+2}$ is a Type 1 segment, then $u(L_i)+u( L_{i+1})+u(L_{i+2})=(k-1)\cdot \frac{2}{3k-2}+2\cdot\frac{k+2}{2(3k-2)}=3\cdot\frac{k}{3k-2}$.
Assume that $L_i,L_{i+1},\ldots,L_{i+2s}$ is a Type 2 segment. The total weight of the layers of this segment is
$
s\cdot \frac{1}{2}+ (s-1)\cdot \frac{k+2}{2(3k-2)}+2\cdot \frac{3k+2}{4(3k-2)}=(2s+1)\cdot\frac{k}{3k-2}
$.
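For completeness, this identity can be checked by clearing denominators: multiplying both sides by $2(3k-2)$, the left-hand side becomes
\[
s(3k-2)+(s-1)(k+2)+(3k+2)=4ks+2k=2k(2s+1),
\]
which matches $2(3k-2)\cdot(2s+1)\frac{k}{3k-2}=2k(2s+1)$.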
This gives that the total weight of $H$ is $(D+1)\frac{k}{3k-2}$, as required. We need to check condition (\ref{dualcond}) at every vertex $v\in V(H)$.
Assume first that $v\in L_i$, where $L_i$ is of Type 1.
If $L_i$ is big, then $u(N(v))=(k-2) \cdot \frac{2}{3k-2}+2\cdot \frac{k+2}{2(3k-2)}=1$.
If $L_i$ is small, then it is the first or last layer in its segment. As first and last layers in a segment of any type have weight at most $\frac{k}{3k-2}$, we have $u(N(v))\le\frac{k}{3k-2}+(k-1)\cdot \frac{2}{3k-2}=1$.
Assume next that $v\in L_i$, where $L_i$ is of Type 2.
If $L_i$ is small and is not the first or last layer in the segment, then $u(N(v))= 2(k-1)\cdot \frac{1}{2(k-1)} =1$. If $L_i$ is the first or last layer in the segment, then
$u(N(v))\le (k-1)\cdot \frac{1}{2(k-1)}+\frac{k}{3k-2}<1$. If $L_i$ is big, then $u(N(v))$ is greatest when $L_i$ is the second or next-to-last layer in its segment. Therefore
\begin{eqnarray*}
u(N(v))&\le& \Biggl(\frac{1}{2}-\frac{1}{2(k-1)}\Biggr) +\frac{k+2}{2(3k-2)}+ \frac{3k+2}{4(3k-2)}=\frac{(11k+2)(k-1)-6k+4}{4(k-1)(3k-2)}\\
&=&\frac{(11k-4)(k-1)-2}{4(3k-2)(k-1)}\le 1.
\end{eqnarray*}
Assume that $v\in L_i$, where $L_i$ is of Type 3. Then $\max(u(L_i),u(L_{i-1}),u(L_{i+1}))\le\frac{k}{3k-2}$. If $u(v)\ge\frac{2}{3k-2}$, then $u(N(v))\le u(L_{i-1})+u( L_{i})+u( L_{i+1})-u(v)\le\frac{3k}{3k-2}-\frac{2}{3k-2}=1$.
Otherwise, we have that $|L_i|>\frac{k}{2}$ and $v\notin S_i$. Since $v\notin S_i$, there is a $j\in\{i-1,i+1\}$ and a $w\in L_j$ that is not a neighbor of $v$.
As $w\in L_j\setminus S_j$, by Lemma~\ref{lm:basic}~\ref{part:small} we have that $|L_j|>1$, therefore $L_j$ is also of Type 3.
If $u(v)+u(w)\ge\frac{2}{3k-2}$, then we get, as before, that
$u(N(v))\le 1$, as needed. In particular, if $u(w)\ge\frac{2}{3k-2}$, then we are done. So we may further assume that $|L_j|>\frac{k}{2}$ and $w\notin S_j$.
Moreover, from Lemma~\ref{lm:basic}\ref{part:type3} we have $\max(|L_i|-|S_i|,|L_j|-|S_j|)\le k- |S_i|-|S_j|$. Therefore
\begin{eqnarray*}
u(v)+u(w)&=&\frac{k-2|S_i|}{(3k-2)(|L_i|-|S_i|)}+\frac{k-2|S_j|}{(3k-2)(|L_j|-|S_j|)}\\
&\ge&\frac{k-2|S_i|}{(3k-2)(k-|S_i|-|S_j|)}+\frac{k-2|S_j|}{(3k-2)(k-|S_i|-|S_j|)}
=\frac{2}{3k-2}.
\end{eqnarray*}
This finishes the proof.
\section{Introduction} \label{sec:intro}
The physical properties of galaxies are almost entirely derived from observations of their light. At distances too large for resolving galaxies into individual stars, stellar population synthesis modeling of the spectral energy distribution may be used to infer integrated properties such as the total stellar mass, star formation rate, and metal content \citep[][and references therein]{Walcher:2011, Conroy:2013}. In the nearby universe, on the other hand, it is possible to study galaxies using observations of individual stars. Star formation histories (SFHs), for instance, can be measured from Hubble Space Telescope (HST)-based resolved star color-magnitude diagrams (CMDs; e.g., \citealt{Weisz:2008}), though such measurements are resource intensive and are only possible for a very small subset of the galaxy population.
In between these two extremes lies the semi-resolved regime. The statistical “lumpiness” observed in relatively nearby galaxies, which are unresolved due to stellar crowding, contains a plethora of information about the underlying stellar populations. For instance, the sparse number of red giants relative to main sequence stars produces surface brightness fluctuations (SBFs) between pixels, which are well-known as an accurate extragalactic distance indicator \citep{1988AJ.....96..807T}, as well as a sensitive probe of stellar populations \citep[e.g.,][]{Buzzoni:1993, Ajhar:1994, Cantiello:2003, Raimondo:2005}. Taking this idea even further, fluctuation spectroscopy \citep{vD:2014} provides a powerful method for studying metal-rich giants in massive elliptical galaxies, and pixel CMDs \citep{Conroy:2016} generalize the SBF technique and are sensitive to SFHs, distances, and metallicities, as demonstrated in \citet{Cook:2019, Cook:2020}.
Current and future wide-field imaging surveys will revolutionize our view of extragalactic stellar systems by producing large sets of deep and high resolution images. Indeed, surveys like the Hyper Suprime-Cam (HSC) Subaru Strategic Program \citep{2018PASJ...70S...4A} and the Dark Energy Survey \citep[DES;][]{Abbott:2018} are already producing large catalogs of low surface brightness galaxies over large areas of the sky (\citealt{Greco:2018}; \citealt{Tanoglidis:2021}), an effort that will be continued in earnest by the Vera Rubin Observatory Legacy Survey of Space and Time \citep[LSST;][]{Ivezic:2019}. From space, the Nancy Grace Roman Space Telescope \citep{Spergel:2015} will deliver HST-quality imaging over wide areas, which will significantly increase the number of systems for which it will be possible to study stellar populations in both the resolved and semi-resolved regimes.
Artificial image simulations have long been an invaluable tool in the preparation for and exploitation of unprecedented data sets. To demonstrate how the then-hypothetical HST might supplement ground-based studies, \citet{1988IAUS..126..455B} simulated HST observations of a nearby globular cluster. The same simulation software was later used in \citet{1988AJ.....96..807T} to quantify the feasibility and limitations of the SBF distance measurement method. Today, similar star-by-star image simulations have become the gold standard for measuring point source completeness and modeling resolved stellar populations \cite[e.g.,][]{Stetson:1987, Dolphin:2000, Dalcanton:2009}.
In this work, we present \code{Art}ificial Stellar \code{Pop}ulations (\texttt{ArtPop}\xspace), an open-source Python package for modeling stellar populations and generating their corresponding images. \texttt{ArtPop}\xspace was conceptually introduced in \citet{Danieli:2018}, and it soon proved useful in helping to confirm the distance to the ``galaxy lacking dark matter'' \citep{vD-DF2-distance-2018}. \texttt{ArtPop}\xspace was then generalized, expanded upon, and used in \citet{Greco:2021} to study SBFs in low-luminosity stellar systems.
The goal of \texttt{ArtPop}\xspace is to provide the community with a user-friendly tool for generating artificial images of resolved and semi-resolved stellar systems. It complements more comprehensive packages like \code{GalSim} \citep{Rowe:2015}, which tend to have steep learning curves and have most often been applied to studies of unresolved galaxies. It is our hope that \texttt{ArtPop}\xspace will be useful both for scientific applications and in the classroom, where it can provide students with a unique perspective of stellar populations and their image formation.
The paper is organized as follows. In Section \ref{sec:methods}, we present our methods for synthesizing stellar systems and generating artificial images. In Section \ref{sec:code}, we describe the \texttt{ArtPop}\xspace software package. We include links (indicated by the {\color{linkcolor}\faFileCodeO}\ icon) to the Python code that created each example and figure. In Section \ref{sec:examples}, we provide a diverse set of scientific and pedagogical example applications. Finally, we conclude with a summary in Section \ref{sec:summary}.
\section{Synthesizing Stellar Systems} \label{sec:methods}
There are three main components that are necessary for generating artificial images of fully-populated stellar systems. The first is modeling stellar populations and the associated stellar fluxes. To synthesize complex star formation histories, multiple single-burst populations may be combined. The second is spatial information, including the distance to the system and image coordinates for all its stars. Finally, image processing tools are required to inject the stellar fluxes into an image, convolve with a point-spread function (PSF), and add noise according to a set of instrumental and observational parameters. In this section, we describe how we have implemented each of these components.
\subsection{Stellar Populations}\label{sec:pops}
The basic building block of stellar population synthesis is the simple stellar population (SSP), which consists of a population of stars born at the same time with a single metallicity and abundance pattern. To build SSPs, \texttt{ArtPop}\xspace starts from pre-calculated stellar isochrones and synthetic photometry. While the code can work with any set of models, here we use the Modules for Experiments in Stellar Astrophysics \citep[MESA;][]{Paxton:2011, Paxton:2013, Paxton:2015} Isochrones and Stellar Tracks (MIST) project\footnote{\url{http://waps.cfa.harvard.edu/MIST/}} \citep{Dotter:2016, Choi:2016}. In particular, we use synthetic photometry generated from the rotating models with $v/v_\mathrm{crit}=0.4$ from MIST version 1.2.
Given a set of stellar isochrones, we populate an SSP by sampling stellar masses from the initial mass function, $\Phi(M_i)$, using inverse transform sampling. The form of $\Phi(M_i)$ is an optional parameter in \texttt{ArtPop}\xspace. Unless noted otherwise in this work, we assume the initial mass function from \citet{Kroupa:2001}, with a minimum mass of $M_\mathrm{min}=0.1~M_\odot$. The maximum mass of sampled stars is set by the stellar isochrone at a given age and metallicity. The total mass of the SSP, including both stars and stellar remnants, is given by
\begin{equation}\label{eqn:total-mass}
M_\mathrm{tot}(t) = \int_{M_\mathrm{min}}^{M(t)} M_\mathrm{act}(M_i) \Phi(M_i)\,dM_i + M_\mathrm{rem},
\end{equation}
where $t$ is the stellar population age, $M_\mathrm{act}(M_i)$ is the actual mass of a star that had an initial mass $M_i$, and $M_\mathrm{rem}$ is the mass contained in stellar remnants including white dwarfs, neutron stars, and black holes. For the normalization of the initial mass function, we use a maximum mass of $M_\mathrm{max}=120~M_\odot$.
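As an illustration of the inverse transform sampling described above, the snippet below draws masses by tabulating and numerically inverting the cumulative distribution of $\Phi(M_i)$; the \citet{Kroupa:2001} break masses and slopes used here are the standard published values, assumed for illustration, and the numerical scheme is ours rather than \texttt{ArtPop}\xspace's internal implementation.
\begin{verbatim}
import numpy as np

def kroupa_imf(m):
    # dN/dM with slopes 0.3, 1.3, 2.3 and breaks at 0.08, 0.5 Msun,
    # with continuity constants (unnormalized).
    return np.where(m < 0.08, m ** -0.3,
           np.where(m < 0.5, 0.08 * m ** -1.3,
                    0.04 * m ** -2.3))

def sample_masses(n_stars, m_min=0.1, m_max=120.0, seed=None):
    rng = np.random.default_rng(seed)
    grid = np.geomspace(m_min, m_max, 4096)
    cdf = np.cumsum(kroupa_imf(grid) * np.gradient(grid))
    cdf /= cdf[-1]
    # Inverse transform sampling: map uniform draws through the
    # inverse CDF by interpolation.
    return np.interp(rng.random(n_stars), cdf, grid)

masses = sample_masses(100_000)  # initial masses in solar units
\end{verbatim}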
We assign masses to stellar remnants using the prescription of \citet{Renzini:1993}. For stars with initial masses $M_i<8.5~M_\odot$, the remnants are assumed to be white dwarfs of mass $M_\mathrm{rem} = 0.077\,M_i + 0.48$. Stars with $8.5~M_\odot \leq M_i < 40~M_\odot$ leave behind 1.4~$M_\odot$ neutron stars. Finally, the most massive stars with initial masses $M_i \geq 40~M_\odot$ produce black holes of mass 0.5\,$M_i$. These initial-final mass relations are of course rough approximations, but they are standard in the field of stellar population synthesis \citep[e.g.,][]{BC03, Maraston:1998, Conroy:2009}, making it possible to interpret and compare \texttt{ArtPop}\xspace models in this context.
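A direct transcription of this prescription:
\begin{verbatim}
import numpy as np

def remnant_mass(m_init):
    """Remnant mass (Msun) from initial mass, per the prescription
    quoted above: white dwarfs, neutron stars, black holes."""
    m = np.asarray(m_init, dtype=float)
    return np.where(m < 8.5, 0.077 * m + 0.48,  # white dwarfs
           np.where(m < 40.0, 1.4,              # neutron stars
                    0.5 * m))                   # black holes
\end{verbatim}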
In practice, \texttt{ArtPop}\xspace only samples ``live'' stars, whose masses are included in the isochrone. To account for stellar remnants in the total mass that is reported by the \texttt{ArtPop}\xspace model, we apply a correction factor given by the ratio of the mass---as defined in Equation~(\ref{eqn:total-mass})---with and without remnants. This factor generally yields total masses (live stars and remnants) that are ${\sim}5\%$-$50\%$ larger than the sum of the sampled stellar masses, depending on the age and metallicity of the SSP. Naturally, older SSPs, whose stars have had more time to evolve, have more mass locked up in stellar remnants than their younger counterparts.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{artpop_diagram.jpg}
\caption{A schematic overview of the procedure we follow to generate artificial images of fully-populated stellar systems. In (a), we sample stellar masses from an initial mass function (weights indicated by the color bar) and interpolate the associated stellar fluxes from an isochrone of the indicated age and metallicity. In (b), we assign spatial positions to each star using random draws from a two-dimensional Plummer distribution. In (c), the stars are injected into an image array, which is then convolved with the point-spread function (PSF). Finally, noise from the sources, sky, and detector are added to the image. In (d), we show a $grI$-composite image of an \texttt{ArtPop}\xspace simulation of a metal poor globular cluster at 5~kpc. In this example, the mock observations used the filter set, PSFs, and pixel scale of HST ACS/WFC. \script{figure_1_greco_and_danieli.py}}
\label{fig:diagram}
\end{figure*}
\subsection{Spatial Information}\label{sec:spatial}
In addition to the stellar population synthesis described in the previous section, we must specify the spatial properties of the stars to completely describe the system. In particular, we need to know the stellar density distribution in physical units, the system distance to convert relative positions to angular units and stellar luminosities to brightnesses, and the pixel scale of the mock imaging system to convert the stellar positions to image coordinates. \texttt{ArtPop}\xspace can inject stars into images using arbitrary user-provided image coordinates, but we provide functions for sampling random positions from common spatial distributions.
Currently, \texttt{ArtPop}\xspace provides sampling functions for a uniform surface brightness distribution; the Plummer distribution \citep{Plummer:1911}, which provides a good description of the distribution of mass in globular clusters; and the more general S\'{e}rsic distribution \citep{Sersic:1968}, which spans the range of observed concentrations in stellar density distributions. An important note about sampling stellar positions is that the maximum radius out to which stars are sampled should be at least several times the scale length of the spatial distribution, even if such stars fall outside of the image. Otherwise, too many stars will be sampled near the center of the galaxy, leading to a surface brightness distribution that is inconsistent with the input parameters.
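As a sketch of such a sampling function (ours, not necessarily \texttt{ArtPop}\xspace's), projected Plummer radii can be drawn by inverting the enclosed-light fraction $f(R)=R^2/(R^2+a^2)$ of the projected profile, truncated at a chosen maximum radius:
\begin{verbatim}
import numpy as np

def sample_plummer_xy(n, a=1.0, r_max=10.0, seed=None):
    """Sample n projected (x, y) positions from a Plummer profile
    with scale radius a, truncated at r_max (same units as a)."""
    rng = np.random.default_rng(seed)
    u_max = r_max ** 2 / (r_max ** 2 + a ** 2)  # CDF value at r_max
    u = rng.uniform(0.0, u_max, n)
    r = a * np.sqrt(u / (1.0 - u))              # inverse of f(R)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return r * np.cos(theta), r * np.sin(theta)
\end{verbatim}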
\subsection{Artificial Images}\label{sec:art-images}
Given a simulated stellar population, including the positions and fluxes of every star, \texttt{ArtPop}\xspace generates an artificial image by injecting the individual stellar fluxes into an image array and convolving with a point-spread function (PSF). The current implementation of \texttt{ArtPop}\xspace assumes each star falls within the center of a pixel. For most purposes, this simplification will have a negligible effect, provided the PSF is well-sampled \citep[e.g.,][]{Olsen:2003}; to the extent that spatial sampling matters for a given application, its impact will be most acute in rare stellar phases for which the density is such that we expect few stars per pixel. We plan to include optional subpixel star injection in a future version of \texttt{ArtPop}\xspace.
There are several options for adding noise to an \texttt{ArtPop}\xspace simulation, spanning the noiseless ``ideal'' case to fully artificial images with read noise and Poisson noise from the sky and artificial sources. In the latter case, the user must provide the necessary instrumental and observational parameters, including the aperture diameter, detector and photometric filter properties, exposure time, and sky surface brightness.
To convert stellar magnitudes in bandpass $x$ to photon counts, we use the analytic expression
\begin{equation}\label{eqn:counts}
C_x = A\cdot\varepsilon_x\cdot\frac{t_\mathrm{exp}}{h}\frac{\Delta \lambda_x}{\lambda_{\mathrm{eff,}\,x}}10^{-0.4\,(m_x\,+\, 48.6)},
\end{equation}
where $A$ is the effective collecting area of the telescope, $\varepsilon_x$ is the efficiency in band $x$ (set to unity by default), $t_\mathrm{exp}$ is the exposure time, $h$ is the Planck constant, $\Delta \lambda_x$ and $\lambda_{\mathrm{eff,}\,x}$ are the width and effective wavelength of bandpass $x$, and $m_x$ is the stellar AB magnitude \citep{Oke:1983} in bandpass $x$.
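As a concrete illustration, Equation~(\ref{eqn:counts}) translates directly into code. The sketch below works in cgs units and uses variable names of our choosing rather than \texttt{ArtPop}\xspace internals:
\begin{lstlisting}[language=Python]
PLANCK_ERG_S = 6.62607015e-27  # Planck constant [erg s]

def counts_from_ab_mag(m_ab, area_cm2, t_exp_s,
                       dlam, lam_eff, efficiency=1.0):
    # AB magnitude -> flux density [erg s^-1 cm^-2 Hz^-1]
    f_nu = 10.0 ** (-0.4 * (m_ab + 48.6))
    # photon counts; dlam and lam_eff must share one unit
    return (area_cm2 * efficiency * t_exp_s / PLANCK_ERG_S
            * (dlam / lam_eff) * f_nu)
\end{lstlisting}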
After convolution with the PSF, Poisson noise is generated from the combined counts of the source and sky, and the read noise is assumed to be Gaussian. If the galaxy is injected into a real image---which will already have detector and sky noise---Poisson noise is optionally generated from only the source counts before converting into the image flux units, provided the necessary parameters for Equation~(\ref{eqn:counts}) are given as input.
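In NumPy terms, the noise model amounts to the following minimal sketch, where \code{image} holds the PSF-convolved source counts in electrons (the variable names and values are ours):
\begin{lstlisting}[language=Python]
import numpy as np
rng = np.random.default_rng()
image = np.zeros((64, 64))  # PSF-convolved source counts [e-]
sky_counts = 100.0          # per-pixel sky level [e-]
read_noise = 5.0            # read noise sigma [e-]
# Poisson noise from the combined source + sky counts
noisy = rng.poisson(image + sky_counts).astype(float)
# Gaussian read noise
noisy += rng.normal(0.0, read_noise, size=noisy.shape)
\end{lstlisting}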
\section{The Software Package} \label{sec:code}
In this section, we give an overview of the \texttt{ArtPop}\xspace software package and provide coding examples to demonstrate the code implementation and basic usage. The code is written in the Python programming language and is entirely open source. We refer the reader to the project website\footnote{\url{https://artpop.readthedocs.io}} for a detailed description of the code, installation instructions, and a growing list of tutorials. We are actively developing \code{ArtPop} in a public GitHub repository\footnote{\url{https://github.com/ArtificialStellarPopulations/ArtPop}} and welcome bug reports, feature requests, and code contributions from the community. Throughout the paper, we provide links (as {\color{linkcolor}\faFileCodeO}\ icons) to the Python code that created each figure and example.
\subsection{Code Structure} \label{sec:code-structure}
An important feature of the code design is that it is highly modular and extensible. This makes it possible for each of \texttt{ArtPop}\xspace's functionalities to be used independently, together, or in combination with independently-generated input data (e.g., stellar positions and/or fluxes). \texttt{ArtPop}\xspace is divided into three primary modules---one for each of the components necessary for generating artificial images of fully-populated stellar systems, as described in Section~\ref{sec:methods}. In particular, the core of the code is composed of the \code{artpop.stars}, \code{artpop.space}, and \code{artpop.image} modules, which are used for building stellar populations, sampling spatial distributions, and generating artificial images, respectively.
In Figure~\ref{fig:diagram}, we show a schematic overview of the procedure we follow to generate an artificial HST-like image of a globular cluster. For each step, we indicate the \texttt{ArtPop}\xspace module in which the corresponding code is implemented. In (a), we use the \code{artpop.stars} module to sample stellar masses from a user-specified initial mass function (indicated by the color bar) and interpolate the associated fluxes from a stellar isochrone. In (b), each star is assigned a spatial position in image coordinates using the \code{artpop.space} module. The stellar fluxes and positions are stored in an \code{artpop.Source} object, which in (c), is ``observed'' using the \code{artpop.image} module. Finally, the \texttt{ArtPop}\xspace image simulation is shown as a $grI$-composite image in (d).
\subsection{Coding Example: Stellar Populations}\label{sec:code-ssp}
A useful example of \texttt{ArtPop}\xspace's modularity is using the \code{artpop.stars} module to generate simple and composite stellar populations. This capability is independent of image generation and may be used to calculate integrated population parameters such as total magnitude, color, and the surviving stellar mass. Given a user-specified initial mass function and isochrone model, \texttt{ArtPop}\xspace can calculate such parameters either using numerical integration or by sampling a finite number of stars. The latter method introduces stochasticity at low stellar mass, which may be desired in certain applications.
Calculations involving stellar isochrones are carried out in the flexible \code{Isochrone} class. Three input arguments are required to initialize an \code{Isochrone} object: SSP initial stellar masses (\code{mini}), the actual stellar mass after accounting for mass loss (\code{mact}), and a table of the associated stellar magnitudes (\code{mags}). Importantly, the code implementation is independent of how these parameters were generated, provided they are given in the correct format. Assuming these arguments have been defined, the code may be implemented as follows \script{section_3_greco_and_danieli.py}:
\begin{lstlisting}[language=Python]
from artpop.stars import Isochrone
iso = Isochrone(mini, mact, mags)
\end{lstlisting}
The \code{iso} object has methods for performing real-time calculations of integrated SSP parameters. For example, if the magnitude table contains LSST magnitudes, the IMF-weighted $g-i$ color and surviving stellar mass (assuming a Salpeter initial mass function) of the SSP may be calculated using
\begin{lstlisting}[language=Python]
g_i = iso.ssp_color(
blue="LSST_g", # blue filter
red="LSST_i", # red filter
imf="salpeter" # initial mass function
)
m_survive = iso.ssp_surviving_mass("salpeter")
\end{lstlisting}
For convenience, we have implemented a MIST-specific helper class, \code{MISTIsochrone}, which inherits all the methods from \code{Isochrone}. The user provides the desired SSP age, metallicity, and photometric system, and \code{MISTIsochrone} loads the required input parameters using the MIST synthetic photometry grids\footnote{\url{http://waps.cfa.harvard.edu/MIST/model_grids.html}}, interpolating over metallicity if necessary.
In Figure~\ref{fig:sps}, we show the results from \texttt{ArtPop}\xspace calculations of the time evolution of an SSP's mass-to-light ratio (top left), $V-I$ color (top right), $V$- and $I$-band SBF magnitudes (bottom left), and $\overline{V}-\overline{I}$ SBF color (bottom right). The calculations are performed using the \code{MISTIsochrone} class. As with all figures in this paper, a link to the code used to create the figure is provided in the caption. We note that we carried out a detailed comparison between the stellar population synthesis calculations of \texttt{ArtPop}\xspace and the Flexible Stellar Population Synthesis software package \citep{Conroy:2009, Conroy-Gunn-2010} and found consistent results when care was taken to control for all model differences (e.g., spectral library, filter throughput functions, and mass definitions).
The above calculations were performed using integrals over the full IMF. To build stellar populations composed of a finite number of stars, \texttt{ArtPop}\xspace samples stellar masses from the IMF and interpolates stellar fluxes from a stellar isochrone, as described in Section~\ref{sec:pops}. This task is performed using the \code{SSP} class, which takes an \code{Isochrone} object, the IMF, and the number of stars (or total stellar mass) in the system:
\begin{lstlisting}[language=Python]
from artpop.stars import SSP
ssp = SSP(
isochrone=iso, # Isochrone object
num_stars=1e5, # number of stars
imf="salpeter", # initial mass function
)
\end{lstlisting}
Similar to the \code{MISTIsochrone} class, there is a \code{MISTSSP} helper class, which loads a MIST isochrone for a given set of SSP parameters. The above \code{ssp} object has various methods for calculating integrated properties. For example, the $i$-band magnitude and $g-i$ color are calculated using the \code{total\_mag} and \code{integrated\_color} methods:
\begin{lstlisting}[language=Python]
i = ssp.total_mag("LSST_i")
g_i = ssp.integrated_color("LSST_g", "LSST_i")
\end{lstlisting}
The distance of the population is set to 10~pc by default, so the above magnitude is in absolute units.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{sps_demo.pdf}
\caption{\code{ArtPop} calculations of the time evolution of SSPs. The top row shows the $V$- and $I$-band mass-to-light ratios (left) and integrated $V-I$ color (right). The bottom row shows the $V$- and $I$-band absolute SBF magnitudes (left) and $\overline{V} - \overline{I}$ SBF color (right). The SSPs were generated using the \citet{Kroupa:2001} initial mass function. Solar metallicity SSPs are shown as dashed lines, and metal poor SSPs with $[\mathrm{Fe/H}]=-1.5$ are shown as solid lines. \script{figure_2_greco_and_danieli.py}}
\label{fig:sps}
\end{figure}
To build composite stellar populations (CSPs) in \texttt{ArtPop}\xspace, \code{ssp} objects may be intuitively added together using the \code{+} operator. For example, suppose we have created two SSPs, one old (\code{ssp\_old}) and the other young (\code{ssp\_young}). Then, we may combine them into a single composite population as follows:
\begin{lstlisting}[language=Python]
csp = ssp_old + ssp_young
\end{lstlisting}
The new \code{csp} object is a composite of the old and young SSPs, inheriting all the same methods for calculating integrated properties.
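For example, continuing with the objects defined above, the composite's integrated properties are computed through the same interface:
\begin{lstlisting}[language=Python]
# same methods as a single SSP
i_csp = csp.total_mag("LSST_i")
g_i_csp = csp.integrated_color("LSST_g", "LSST_i")
\end{lstlisting}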
We emphasize that, since \code{SSP} objects contain a finite number of stars, the integrated properties will be stochastic due to incomplete sampling of the mass function \citep[e.g.,][]{Santos:1997, Greco:2021}. The number of stars required for the calculations to converge is a function of stellar population parameters (e.g., due to the frequency of rare, luminous stars) and photometric bandpass, but in general it takes a total stellar mass of ${>}10^6$~M$_\odot$ to approach a fully sampled mass function. However, if the goal is to fully sample the IMF, the \code{Isochrone} class should be used.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{artpop_psf_exptime.pdf}
\caption{\texttt{ArtPop}\xspace simulations of an old dwarf spheroidal of stellar mass $3\times10^6~\mathrm{M}_{\odot}$, placed at a distance of $5\,\mathrm{Mpc}$. The galaxy is composed of an SSP of age $12.6\,\mathrm{Gyr}$ and metallicity $\mathrm{[Fe/H]} = -1.8$. We show $gri$-composite images of mock observations of varying image resolution (FWHM values indicated at the top of each column) and exposure times (indicated on the left of each row). The image resolutions were chosen to be similar to HST/ACS (left column), HSC (middle column), and SDSS (right column). All images assume a mirror diameter of 8~m and sky brightnesses of 22, 21, and 20 mag~arcsec$^{-2}$ in $g$, $r$, and $i$, respectively. In each panel, the green line indicates the scale of 20\arcsec. \script{figure_3_greco_and_danieli.py}}
\label{fig:exptime_seeing}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{artpop_sb.pdf}
\caption{Simulated HST ACS/WFC $grI$-composite images of uniformly distributed SSPs with an age of $10\,\mathrm{Gyr}$ and $[\mathrm{Fe}/\mathrm{H}]=-1.6$. The mean $I$-band surface brightness of each column is indicated at the top, and the distance to the populations in each row is indicated on the left. This figure visually demonstrates the well-known result that surface brightness is independent of distance (in the nearby universe). The images are $35\arcsec\times\,25\arcsec$, with a pixel scale of 0.05~arcsec~pixel$^{-1}$. \script{figure_4_greco_and_danieli.py}}
\label{fig:sb}
\end{figure*}
\subsection{Coding Example: Image Simulations}\label{sec:code-image}
To create artificial images, \texttt{ArtPop}\xspace uses \code{Imager} objects, which are implemented in the \code{artpop.image} module. There are two types of \code{Imager} objects: \code{IdealImager} and \code{ArtImager}. The former generates noiseless images, and the latter generates fully artificial images that include simulated noise from the sky, source, and detector. Initializing an \code{ArtImager} object therefore requires both instrumental (e.g., mirror diameter and read noise) and observational (e.g., exposure time and the sky surface brightness) parameters as input.
Similar to real observatories, a single \code{Imager} object is designed to ``observe'' any number of sources. This is convenient when the goal is to mock observe many sources using the same imaging setup. In \texttt{ArtPop}\xspace, sources are stored in \code{Source} objects, which are containers that hold the positions and magnitudes of the stars. A primary purpose of \texttt{ArtPop}\xspace is to generate these positions and magnitudes, though this is not necessary to create images using \code{Imager} and \code{Source} objects. As a simple example, let us create a mock observation using the SSP created in Section~\ref{sec:code-ssp}.
First, we create a \code{Source} object using the magnitudes from the previously created SSP. For this example, we use the \code{artpop.plummer\_xy} function to sample positions from a Plummer distribution \script{section_3_greco_and_danieli.py}:
\begin{lstlisting}[language=Python]
from astropy import units as u
from artpop import Source
from artpop.space import plummer_xy
# image dimensions
xy_dim = (501, 501)
pixel_scale = 0.2 * u.arcsec / u.pixel
# returns a 2D numpy array
xy = plummer_xy(
num_stars=ssp.num_stars,
scale_radius=500*u.pc,
distance=8*u.Mpc,
xy_dim=xy_dim,
pixel_scale=pixel_scale
)
# ssp magnitudes stored in astropy table
src = Source(
xy=xy, # image coordinates
mags=ssp.mag_table, # magnitude table
xy_dim=xy_dim # image dimensions
)\end{lstlisting}
The above \code{src} object holds the stellar positions and magnitudes for a system of 10$^5$ stars (the number of stars in \code{ssp}) at a distance of 8~Mpc, with a spatial distribution that follows a Plummer profile of scale radius 500~pc. We also specify the image dimensions and pixel scale in order to convert the positions to image coordinates and flag stars that fall within the image. Stars that fall outside the image contribute to the total mass but are masked within the array of stellar positions, since they will not be injected into the image.
For simplicity, we will observe the source using an \code{IdealImager}, which may be initialized without any input parameters:
\begin{lstlisting}[language=Python]
from artpop.image import IdealImager
imager = IdealImager()
\end{lstlisting}
Mock observations are carried out using the \code{observe} method. Here, we will observe the artificial source in the $i$ band, assuming 0\farcs6 seeing, which we will model as a Moffat profile using the \code{moffat\_psf} function:
\begin{lstlisting}[language=Python]
from artpop.image import moffat_psf
# returns a 2D numpy array
psf = moffat_psf(
fwhm=0.6*u.arcsec,
pixel_scale=pixel_scale
)
obs = imager.observe(src, "LSST_i", psf)
\end{lstlisting}
The returned object, \code{obs}, is an \code{IdealObservation} object, which is a container that holds the PSF-convolved image, as well as metadata such as the zero point and observation bandpass.
The above examples show that multiple steps are needed to create stellar positions and magnitudes, which are in turn required to create a \code{Source} object. For convenience, \texttt{ArtPop}\xspace provides helper classes for creating complete \code{Source} objects in a single step. For example, a \code{Source} object composed of an SSP with a S\'ersic\xspace spatial distribution and synthetic photometry generated using the MIST isochrones can be created using the \code{MISTSersicSSP} class.
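As an illustration, such a source might be built in one call as sketched below. The keyword names are our assumptions about the quantities the class must specify (age, metallicity, photometric system, S\'ersic\xspace structural parameters, distance, and image geometry); the exact signature should be checked against the online documentation:
\begin{lstlisting}[language=Python]
from astropy import units as u
from artpop import MISTSersicSSP
# keyword names below are assumptions, not verified API
src = MISTSersicSSP(
    log_age=10.0,        # log10(age/yr)
    feh=-1.5,            # metallicity [Fe/H]
    phot_system="LSST",  # photometric system
    r_eff=300 * u.pc,    # Sersic effective radius
    n=0.8,               # Sersic index
    theta=0 * u.deg,     # position angle
    ellip=0.0,           # ellipticity
    distance=5 * u.Mpc,
    xy_dim=701,
    pixel_scale=0.2 * u.arcsec / u.pixel,
    num_stars=1e6,
)
\end{lstlisting}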
\section{Example Applications} \label{sec:examples}
There is a wide range of use cases for \texttt{ArtPop}\xspace. From visualizing the age-metallicity degeneracy to measuring survey detection efficiencies to generating synthetic data for machine learning algorithms, its potential applications span both scientific and pedagogical projects. In this section, we present example \texttt{ArtPop}\xspace applications that highlight different features of the code. Each example has a corresponding figure with a link to the code used to generate it in the caption.
\subsection{Image resolution and exposure time}\label{sec:resolution}
The \code{ArtImager} class generates fully artificial images, adding noise from the detector, sky, and artificial source according to the user-provided instrumental and observational parameters. For a fixed artificial source, this is particularly useful for visually and quantitatively exploring the interplay between observational parameters such as exposure time and image resolution.
In Figure \ref{fig:exptime_seeing}, we show $gri$-composite images of the same dwarf galaxy placed at 5~Mpc, with the pixel scales and resolutions similar to HST/ACS (left column), HSC (middle column), and SDSS (right column). The dwarf galaxy has a Plummer mass distribution with a scale radius of 400~pc and a total stellar mass of $3\times10^6$~M$_\odot$. The bottom row shows mock observations with an exposure time of 3~min, and the top row shows mock observations with a factor of 10 longer exposure time. Other than the resolutions and pixel scales, the mock observation setups are identical.
Comparing the top and bottom rows, the increase in exposure time leads to the expected increase in the signal-to-noise ratio. Stars in the outskirts of the galaxy disappear below the noise level in the short exposure panels. The comparisons become more interesting
when we also vary the image resolution (both seeing and pixel scale). At the highest resolution, the red giant branch (RGB) is resolved in the longer 30~min exposure, but the small pixels (0.05~arcsec~pixel$^{-1}$) make the source appear as a diffuse object. As the resolution is decreased, stars blend into brighter point sources and the full galaxy becomes more easily detected as a single coherent object.
\vspace{0.5cm}
\subsection{Surface brightness and distance}\label{sec:sb}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{phase-ssp-grid.pdf}
\caption{Simulated $gri$-composite images of dwarf galaxies of stellar mass 10$^6$~M$_\odot$ at a distance of 5 Mpc. The galaxies are SSPs with $\mathrm{[Fe/H] = -1.5}$, assuming the MIST model isochrones. Each row shows a single galaxy of fixed age, which is indicated on the left. For each galaxy, the leftmost panel shows the full SSP, and the remaining five panels show stars that are on the main sequence (MS), red giant branch (RGB), core-helium burning (CHeB) stars, and the early and thermally pulsating asymptotic giant branches (E-AGB and TP-AGB, respectively). The simulations were tuned to resemble an LSST-like observatory and observing conditions with exposure times of 90 minutes in $i$ and 45 minutes in $g$ and $r$. Each panel is 2~kpc on a side. As noted in the main text, the phases are defined according to the MIST primary equivalent evolutionary phases. \script{figure_5_greco_and_danieli.py}}
\label{fig:phase-grid}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{dwarf_stoc.pdf}
\caption{DECam $g-i$ color distribution of 1000 realizations of an \texttt{ArtPop}\xspace dwarf galaxy with fixed model parameters. The galaxy is composed of an ancient, metal poor SSP with an age of 12.6~Gyr and metallicity of $\mathrm{[Fe/H] = -2}$. Given its low stellar mass of 10${^5}$~M$_\odot$, stochasticity in the numbers of luminous evolved stars leads to a wide range of integrated and visual properties. The $gri$-composite images on the top show the bluest (left), median (middle), and reddest (right) dwarf galaxy in the sample. A DECam-like observatory and observing conditions were assumed, with ${\sim}1\arcsec$ seeing and 2~hr (1~hr) exposures in $i$ ($g$ and $r$). The galaxy is placed at a distance of 2.5~Mpc, and the images are 800~pc on a side. \script{figure_6_greco_and_danieli.py}}
\label{fig:dwarf-stoc}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{mist_vs_parsec_dwarf.pdf}
\caption{Visual comparison of artificial dwarf galaxies generated using stellar fluxes from the MIST (left panel) and PARSEC (right panel) isochrones. The isochrones were generated consistently using the Flexible Stellar Population Synthesis software package. Other than the choice of isochrone, the model parameters are identical, including the individual stellar positions and masses. The images are HST ACS/WFC $grI$-composites, with the integrated $g_{475}-I_{814}$ galaxy color indicated below each image. The middle panel shows the color-magnitude diagram of the constituent MIST (red) and PARSEC (blue) stars. The galaxies are SSPs composed of $5\times10^6$ stars with the population parameters indicated in the middle panel. \script{figure_7_greco_and_danieli.py}}
\label{fig:mist-vs-parsec}
\end{figure*}
In the nearby universe, surface brightness is independent of distance. This important result can be visualized using \texttt{ArtPop}\xspace's \code{MISTUniformSSP} class, which generates a uniformly distributed SSP using the MIST isochrone models and a user-specified average surface brightness and distance. Fixing the SSP parameters and field of view, the number of stars in an image increases as a function of distance, exactly compensating for dimming due to the inverse square law to ensure the surface brightness remains constant.
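The underlying scaling argument is short: for stars of mean flux $f$ observed at distance $d$, a fixed solid angle $\Omega$ contains on average $N$ stars, with
\[
f \propto d^{-2}, \qquad N \propto \Omega\, d^{2}, \qquad \mu \propto \frac{N f}{\Omega} \propto d^{0},
\]
so the two factors of $d^{2}$ cancel and the mean surface brightness is independent of distance.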
In Figure~\ref{fig:sb}, we show image simulations of uniformly distributed SSPs with mean $I$-band surface brightnesses of 26 (left column), 23 (middle column), and 20 mag~arcsec$^{-2}$ (right column)---spanning the stellar density range from low-mass dwarf galaxies to globular clusters \citep[e.g.,][]{Munoz:2015}. We place the SSPs at distances of 0.5 (bottom row), 2 (middle row), and 8~Mpc (top row). In all panels, the stellar population age and metallicity are 10~Gyr and [Fe/H]~$=-1.6$, respectively. Each image is $35\arcsec\times\,25\arcsec$, with a pixel scale of 0.05~arcsec~pixel$^{-1}$. For the mock observations, we used the \code{ArtImager} with 90~minute exposures in each of the HST ACS/WFC $I_\mathrm{814}$, $r_\mathrm{606}$, and $g_\mathrm{475}$ filters. The PSFs were generated using \code{Tiny Tim} \citep{Krist:1995}.
As expected, we see that fainter stars become increasingly resolved at fainter surface brightnesses (lower stellar density) and closer distances. At the nearest distance in the 23~mag~arcsec$^{-2}$ column (middle panel of the bottom row), there are ${\sim}3\times10^5$ stars, with the blue main sequence resolved into individual stars and only a handful of luminous giants in the frame. As the stars are placed at larger distances, their numbers increase as the square of the distance---there are ${\sim}5\times10^6$ stars in the 2~Mpc panel and ${\sim}8\times10^7$ in the 8~Mpc panel. At large distance and high surface brightness, Poisson fluctuations in the numbers of stars are too small to detect, leading to a visually smooth image. See \citet{Greco:2021} for a detailed study of these so-called surface brightness fluctuations using \texttt{ArtPop}\xspace.
More than 200 million stars are required to generate the high surface brightness 20~mag~arcsec$^{-2}$ population at 8~Mpc, which is memory intensive. For such situations, \texttt{ArtPop}\xspace provides an option to set a magnitude limit, fainter than which individual stars are not sampled. Instead, the flux from these stars is combined into a smooth model, which is added to the image along with the brighter individual stars.
\subsection{Dwarf galaxies and stellar populations}\label{sec:stars}
\texttt{ArtPop}\xspace makes it easy to visualize stellar systems as a function of astrophysical (e.g., distance, stellar mass, and SSP age) and observational (e.g., exposure time, bandpass, aperture diameter, and sky surface brightness) parameters. Moreover, since stars are injected individually, it is possible to visually compare how different phases of stellar evolution contribute to the (semi)resolved appearance and integrated properties of the system, which helps build intuition for interpreting images of similar systems in real data.
To demonstrate the pedagogical utility of isolating stellar phases in artificial images, Figure~\ref{fig:phase-grid} shows $gri$-composite images of simulated dwarf galaxies at a distance of 5~Mpc. The distribution of stars in each galaxy follows a S\'ersic\xspace distribution, with a total stellar mass of 10$^6$~M$_\odot$. Each galaxy is composed of a metal-poor SSP with $\mathrm{[Fe/H] = -1.5}$ and an age ranging from 10~Gyr (top row) to 100~Myr (bottom row), assuming the MIST model isochrones. Each row shows the same galaxy realization, with the full SSP shown in the leftmost panel. The remaining five panels show stars that are in the evolutionary phase indicated at the top of each column. From left to right, the phases are the main sequence (MS), red giant branch (RGB), core-helium burning (CHeB) stars, and the early and thermally pulsating asymptotic giant branches (E-AGB and TP-AGB, respectively). Similar to \citet{Greco:2021}, we label phases of stellar evolution according to the MIST primary equivalent evolutionary phases \citep[EEP;][]{Choi:2016, Dotter:2016}, which are useful computationally but in some cases lead to terminology that differs from standard nomenclature (e.g., an RGB phase associated with high-mass stars).
When the mass of a stellar system is $\lesssim10^5$~M$_\odot$ \citep{Greco:2021}, the IMF becomes significantly undersampled. At such low masses, the numbers of the most luminous stars range from zero to a dozen or so, leading to a wide range of integrated properties and visual appearance. Using \texttt{ArtPop}\xspace, we can quantitatively and visually compare different mock galaxy realizations with identical stellar population and observational parameters.
In Figure~\ref{fig:dwarf-stoc}, we show the DECam $g-i$ color distribution of 1000 realizations of an \texttt{ArtPop}\xspace dwarf galaxy with fixed model parameters. The galaxy has an age of 12.6~Gyr, metallicity of $\mathrm{[Fe/H]=-2}$, and low stellar mass of 10$^5$~M$_\odot$. Stochasticity in the numbers of evolved stars---particularly AGB stars---results in a standard deviation of ${\sim}0.05$~mag and range of ${\sim}0.25$~mag in $g-i$ color. In the top three panels, we show $gri$-composite images of the bluest (left), median color (middle), and reddest (right) dwarf galaxy realization in the sample.
\subsection{Comparing isochrone models}\label{sec:iso_compare}
With new imaging surveys like LSST and, ultimately, the Roman Space Telescope, there will soon be a vast increase in the number of low-mass galaxies with high-quality imaging and (semi-)resolved stellar populations. These systems will span a large range of stellar population parameters, potentially providing powerful benchmarks for stellar population synthesis models. While model comparisons using color-magnitude diagrams are common, \texttt{ArtPop}\xspace's modular design makes it straightforward to additionally generate artificial galaxy images based on the different models.
In Figure~\ref{fig:mist-vs-parsec}, we show a visual comparison of dwarf galaxies generated using stellar fluxes from the MIST (left panel) and PARSEC \citep[right panel;][]{Bressan:2012} isochrone models. The HST ACS/WFC $grI$-composite images shown are based on mock observations with an HST-like observatory using 180~min exposures. To ensure the stellar fluxes were calculated consistently, we used the Flexible Stellar Population Synthesis software package \citep{Conroy:2009, Conroy-Gunn-2010}. The center panel shows the MIST (blue points) and PARSEC (red points) color-magnitude diagrams associated with the stars in the mock images.
Other than the isochrone model, all model parameters are identical---including the stellar positions and masses. The galaxies are composed of SSPs with $5\times10^6$ stars of age ${\sim}$355~Myr and metallicity $\mathrm{[Fe/H] = -1.5}$. To exactly match the stellar masses, we restricted the mass range from 0.1~M$_\odot$ to 2.8305~M$_\odot$. The minimum stellar mass is set by the MIST isochrones, whereas the maximum stellar mass is set by the PARSEC isochrones.
\subsection{Injecting into real images}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{dwarf_des.pdf}
\caption{\texttt{ArtPop}\xspace model of an ultra-faint dwarf galaxy with a stellar mass of $\mathrm{M}_{\star}=5 \cdot 10^5\,\mathrm{M}_{\odot}$, placed at a distance of $1\,\mathrm{Mpc}$. The left panel shows the $gri$-composite color image and the middle panel shows the same model injected into a Dark Energy Survey image. In the right panel we show the $gr$ color-magnitude diagram, where stars are color-coded according to their stellar phase. The dashed horizontal line marks the $g$-band limiting magnitude reported by DES DR1 of $m_{g,\mathrm{lim}}=24.33\,\mathrm{mag}$. Ignoring star-galaxy separation issues, stars that are brighter than this limit should, in theory, be detected in the image. \script{figure_8_greco_and_danieli.py}}
\label{fig:dwarf_des}
\end{figure*}
A complementary approach to using the \code{ArtImager} class for generating fully artificial images (as shown in Sections \ref{sec:resolution}--\ref{sec:iso_compare}) is to inject artificial sources into \textit{real} astronomical images. Using this approach, \texttt{ArtPop}\xspace has proven to be an effective tool for estimating imaging survey depth and detection completeness \citep{Greene:2021}. In particular, the dual-mode functionality of \texttt{ArtPop}\xspace, namely generating stellar population models and artificial images, can be used simultaneously in the following way.
Using the \code{MISTSersicSSP} class, we generate an artificial dwarf galaxy source, placed at $1\,\mathrm{Mpc}$, assuming an old SSP with $M_\star=5\cdot 10^5\,\mathrm{M}_\odot$, $\log (\mathrm{Age/yr})=10.1$, $\mathrm{[Fe/H]=-2}$, and a S\'ersic\xspace surface brightness distribution. We then use the \code{IdealImager} to generate a noiseless image of the source. Assuming DES-like observational parameters, we ``observe'' the galaxy in the $g$, $r$, and $i$ bands. The $gri$-composite noiseless image of the model is shown in the left panel of Figure \ref{fig:dwarf_des}.
Next, we inject the noiseless model into a DES image\footnote{DES DR1 coadd tiles ($0.7306\,\mathrm{deg}$ on a side) were downloaded from the following public data server: \url{http://desdr-server.ncsa.illinois.edu/despublic/dr1_tiles/.}}. This is done by simply adding the image shown in the left panel to a DES tile. The injected image is shown in the middle panel of Figure~\ref{fig:dwarf_des}. As expected, many of the stars that are visible in the outskirts of the noiseless model image blend into the noise in the middle image, though a small number of giants are visually detected. Finally, in the right panel, we show the CMD of the source using synthetic photometry in the DECam photometric system. The dashed black line marks the $g$-band limiting magnitude of DES DR1.
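In code, this injection step is literal array addition, assuming the model image has been converted to the tile's flux units and pixel grid (the file names below are hypothetical):
\begin{lstlisting}[language=Python]
from astropy.io import fits
# hypothetical file names; units and pixel grids must match
tile = fits.getdata("des_tile_g.fits").astype(float)
model = fits.getdata("artpop_model_g.fits")
injected = tile + model  # injection is literal addition
\end{lstlisting}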
To understand the detection of low-stellar-density systems in imaging surveys, models of dwarf galaxies spanning a range of stellar masses, chemical compositions, ages, and morphologies can be generated. As demonstrated, \texttt{ArtPop}\xspace provides both catalogs of stars that can be injected into existing survey catalogs, accounting for noise and detection limits, and realistic images with photometry in the appropriate photometric system that can then be injected into survey images.
\section{Summary} \label{sec:summary}
In this paper we have presented \texttt{ArtPop}\xspace, a public software package for synthesizing stellar populations and simulating realistic images of stellar systems. The code is modular and designed to allow maximal user flexibility. \texttt{ArtPop}\xspace is under active development and currently provides the following capabilities:
\begin{enumerate}
\item Stellar population synthesis: The \code{artpop.stars} module builds simple and composite stellar populations by sampling a user-specified initial mass function. Stellar fluxes are calculated by interpolating pre-calculated magnitude grids from a stellar isochrone model, which the user is free to choose. \github{stars} \readthedocs{pops}
\item Sampling spatial distributions: The \code{artpop.space} module samples two-dimensional positions in image coordinates. Currently, we have implemented samplers for uniform, Plummer, and S\'ersic\xspace distributions. Grid sampling for arbitrary two-dimensional functions is also possible using \code{Astropy} model objects\footnote{\url{https://docs.astropy.org/en/stable/modeling/index.html}}. \github{space} \readthedocs{spatial}
\item Image simulations: The \code{artpop.image} module generates artificial images of \code{Source} objects. The simulations can be fully artificial images with realistic noise or ideal noiseless images, which may be injected into real imaging data. \github{image} \readthedocs{artimages} \readthedocs{inject}
\end{enumerate}
These three functionalities can be used independently or collectively to generate catalogs and images of different stellar systems such as galaxies, globular clusters, and stellar streams.
We encourage the reader to install \texttt{ArtPop}\xspace, go through the tutorials on the website, and run some of the examples given in this paper. Installation instructions are given at \url{https://artpop.readthedocs.io/en/latest/getting_started/install.html}. Please feel free to report bugs and request features using the \texttt{ArtPop}\xspace GitHub issues page or by submitting a pull request to make a code contribution. More information about contributing to \texttt{ArtPop}\xspace can be found at \url{https://artpop.readthedocs.io/en/latest/getting_started/contribute.html}.
\software{
\code{astropy} \citep{Astropy-Collaboration:2013aa},
\code{Flexible Stellar Population Synthesis} \citep{Conroy:2009, Conroy-Gunn-2010},
\code{matplotlib} \citep{Hunter:2007aa},
\code{numpy} \citep{Van-der-Walt:2011aa},
\code{scipy} (\url{https://www.scipy.org})
}
\acknowledgments
We are deeply indebted to the open-source astronomy and scientific Python communities. Indeed, we learned almost everything we know about developing open-source software from poking around public GitHub repositories. We thank Adrian Price-Whelan and Pieter van Dokkum for providing useful comments, and we thank Ava Polzin for her help generating HST PSFs.
J.P.G. was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1801921. S.D. is supported by NASA through Hubble Fellowship grant HST-HF2-51454.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
|
{
"timestamp": "2021-09-30T02:00:33",
"yymm": "2109",
"arxiv_id": "2109.13943",
"language": "en",
"url": "https://arxiv.org/abs/2109.13943"
}
|
\section{Introduction}
Inspired by a prediction for a categorification of the Witten-Reshetikhin-Turaev invariant of a closed oriented 3-manifold~\cite{W, RT1} in \cite{GPPV, GPV}, a two-variable series invariant $F_K(x,q)$ for a complement $M^{3}_K$ of a knot $K$ was introduced in \cite{GM}. Although a rigorous definition is yet to be found, it possesses various properties such as the Dehn surgery formula and the gluing formula. This knot invariant $F_K$ takes the form\footnote{Implicitly, there is a choice of group; originally, the group used is ${\rm SU}(2)$.}
\begin{gather}
F_K(x,q)= \frac{1}{2} \sum_{\substack{m \geq 1 \\ m \ \text{odd}}}^{\infty} \big(x^{m/2}-x^{-m/2}\big)f_{m}(q) \in \frac{1}{2^{c}} q^{\Delta} \mathbb{Z}\big[x^{\pm 1/2}\big]\big[\big[q^{\pm 1}\big]\big],
\end{gather}
where $f_{m}(q)$ are Laurent series with integer coefficients\footnote{They can be polynomials when the Alexander polynomial of $K$ is monic (see Section 3.2).}, $c \in \mathbb{Z}_{+}$ and $\Delta \in \mathbb{Q}$. Moreover, the $x$-variable is associated with the set of relative ${\rm Spin}^c \big(M^{3}_K,\partial M^{3}_K \big)$-structures, which is affinely isomorphic to $H^2\big(M^{3}_K, \partial M^{3}_K ; \mathbb{Z}\big) \cong H_1\big(M^{3}_K;\mathbb{Z}\big)$; this group has infinite order, which is reflected in $F_K$ being a series in $x$. The rational constant $\Delta$ was investigated in \cite{GPP}, which elucidated its intimate connection to the d-invariant (or the correction term) in certain versions of Heegaard Floer homology ($HF^{\pm}$) for rational homology spheres. Physically, the integer coefficients of $f_{m}(q)$ count BPS states of a 3d $\mathcal{N}=2$ supersymmetric quantum field theory on $M^{3}_{K}$ together with boundary conditions on $\partial M^{3}_{K}$. Among its various properties, $F_K$ was conjectured to share characteristics of the colored Jones polynomial, for example, q-holonomicity~\cite{GL}.
\noindent\begin{conjecture}[{\cite[Conjecture 1.6]{GM}}] For any knot $K \subseteq S^3$, the normalized series $f_{K}(x,q)$ satisfies a linear recursion relation generated by the quantum A-polynomial $\hat{A}_K(q,\hat{x},\hat{y})$ of $K$:
\begin{gather}
\hat{A}_{K}(q, \hat{x},\hat{y}) f_{K}(x,q) = 0,
\end{gather}
where $f_{K}:=F_{K}(x,q)/\big(x^{1/2}-x^{-1/2}\big)$.
\end{conjecture}
\noindent The actions of $\hat{x}$ and $\hat{y}$ are
$$
\hat{x} f_{K}(x,q)= x f_{K}(x,q) \qquad \hat{y}f_{K}(x,q)= f_{K}(xq,q).
$$
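\noindent These actions satisfy a quantum-torus relation: for any $f$,
$$
(\hat{y}\hat{x}f)(x,q) = xq\, f(xq,q) = q\,(\hat{x}\hat{y}f)(x,q), \qquad \text{so} \qquad \hat{y}\hat{x} = q\, \hat{x}\hat{y},
$$
which is the same relation (with $t^2 = q$) as in the quantum torus $\mathcal{T}$ introduced in Section 3.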
\newline
This property was used to compute $F_K$ for the figure eight knot $4_1$ in \cite{GM} and was verified for $m(5_2)$ in \cite{P2}. Moreover, the same method was applied to find $F_K$ for a cabling of $4_1$~\cite{C2}.
\newline
\noindent\textbf{Acknowledgments.} I would like to thank Carsten Schneider and Pavel Putrov for helpful explanations. I am grateful to Sergei Gukov for valuable suggestions on the draft of this paper.
\section{A connected sum formula}
We propose a connected sum formula for $F_K$.
\noindent\begin{conjecture} For any two knots $K_1$ and $K_2$ in an integer homology sphere $Y=\mathbb{Z} HS^3$, the invariant $F_K(x,q)$ of their connected sum $K_1 \, \# \, K_2$ is
\begin{equation}
F_{K_1 \, \# \, K_2} (x,q) = \frac{F_{K_1}(x,q) F_{K_2}(x,q)}{x^{1/2} - x^{-1/2}}\, \in \frac{1}{2^{c}} q^{\Delta} \mathbb{Z}\big[x^{\pm 1/2}\big]\big[\big[q^{\pm 1}\big]\big],
\end{equation}
where $c \in \mathbb{Z}_{+}$ and $\Delta \in \mathbb{Q}$.
\end{conjecture}
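\noindent As a basic consistency check, assume the normalization $F_{\text{unknot}}(x,q)=x^{1/2}-x^{-1/2}$ of \cite{GM}. Then the proposed formula gives
$$
F_{K \,\#\, \text{unknot}}(x,q) = \frac{F_{K}(x,q)\,\big(x^{1/2}-x^{-1/2}\big)}{x^{1/2}-x^{-1/2}} = F_{K}(x,q),
$$
as it must be, since $K \,\#\, \text{unknot} = K$. The formula also parallels the behavior of the colored Jones polynomial under connected sums, $J_{n}(K_1 \,\#\, K_2)= J_{n}(K_1)\, J_{n}(K_2)$, which is used in Section 5.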
\section{Quantum torus and recursion ideal}
Let $\mathcal{T}$ be a quantum torus
$$
\mathcal{T} := \mathbb{C}[t^{\pm 1}]\left\langle M^{\pm 1}, L^{\pm 1}\right\rangle / (LM- t^2 ML).
$$
The generators of the noncommutative ring $\mathcal{T}$ act on discrete functions, which in our context are the colored Jones polynomials $J_{K,n} \in \mathbb{Z}[t^{\pm 1}]$, as
$$
M J_{K,n}= t^{2n} J_{K,n} \qquad L J_{K,n}= J_{K,n+1}.
$$
The recursion (annihilator) ideal $\mathcal{A}_{K}$ of $J_{K,n}$ is the left ideal in $\mathcal{T}$ consisting of operators that annihilate $J_{K,n}$:
$$
\mathcal{A}_{K} : = \left\{ \alpha_{K} \in \mathcal{T}\, |\, \alpha_{K} J_{K,n} = 0 \right\}.
$$
It turns out that $\mathcal{A}_{K}$ is not a principal ideal in general. However, by adjoining inverses of polynomials in $t$ and $M$ to $\mathcal{T}$~\cite{G},
we obtain a principal ideal domain $\tilde{\mathcal{T}}$,
$$
\tilde{\mathcal{T}} : = \left\{ \sum_{j \in \mathbb{Z}} a_{j}(M) L^{j} \,\Big|\, a_{j}(M) \in \mathbb{C}[t^{\pm 1}](M),\ a_{j}=0 \text{ for all but finitely many } j \right\}.
$$
Using $\tilde{\mathcal{T}}$ we get a principal ideal $\tilde{\mathcal{A}}_{K}:= \tilde{\mathcal{T}}\mathcal{A}_{K}$, generated by a single polynomial $\hat{A}_{K}$,
$$
\hat{A}_{K}(t,M,L)= \sum_{j=0}^{d} a_{j}(t,M)L^{j}.
$$
This polynomial $\hat{A}_{K}$ is a noncommutative deformation of the classical A-polynomial of a knot~\cite{CCGLS} (see also \cite{CL}). Alternative approaches to obtaining $\hat{A}_K(t,M,L)$ are quantizing the classical A-polynomial curve using the twisted Alexander polynomial or applying the topological recursion technique~\cite{GS}. The AJ conjecture states that the classical polynomial can be obtained from its quantum version by setting $t=-1$ (up to an overall rational function of $M$)~\cite{G, Gukov}.
\section{Recursion relations}
We provide evidence for (3) using the q-holonomic property (2) of $F_K$ for connected sums of torus knots. For right-handed torus knots $T(s,t)$ with $2 \leq s < t$ and $\gcd(s,t)=1$, $F_K$ was computed in~\cite{GM}:
\begin{equation}
F_K(x,q)= \frac{1}{2} \sum_{\substack{m \geq 1 \\ m \ \text{odd}}}^{\infty} \ \epsilon_m \big(x^{m/2}-x^{-m/2}\big) q^{\frac{m^2-(st-s-t)^2}{4st}}
\end{equation}
$$
\epsilon_m = \begin{cases}
-1,\quad m \equiv st+s+t\quad \text{or}\quad st-s-t\quad \text{mod}\, 2st\\
+1,\quad m \equiv st+s-t\quad \text{or}\quad st-s+t\quad \text{mod}\, 2st\\
0,\quad \text{otherwise.}
\end{cases}
$$
For the left-handed torus knots $T(s,-t)$, the coefficient functions are obtained from (4) by the substitution $q \to q^{-1}$, that is, $f_m (q^{-1})$, and $F_{T(s,-t)} \in 2^{-c} q^{\Delta} \mathbb{Z}\big[q^{\pm 1}\big]\big[\big[x^{\pm 1/2}\big]\big]$.
\newline
\noindent In the following examples, we used \cite{KK} to obtain the quantum A-polynomials for the connected sum of knots.
\newline
\noindent \underline{$K=T(2,3)\, \# \, T(2,3)$}\quad The minimal degree homogeneous recursion relation for $K$ is
\begin{equation}
r_0\, F_{T(2,3)}(x,q)^2 + r_1\, F_{T(2,3)}(xq,q)^2 + r_2\, F_{T(2,3)}(xq^2,q)^2 + r_3\, F_{T(2,3)}(xq^3,q)^2 = 0
\end{equation}
\begin{align*}
r_0 & = -q+q \left(q^3+q^5\right) x^2+q^6 x^3-q^9 x^4-2 q^{11} x^5+q^{16} x^7\\
r_1 & = 1+\left(-2 q+q^3-q^5\right) x^2-2 q^2 x^3+\left(q^2-2 q^4+3 q^6-q^8\right) x^4+\left(2 q^3+2 q^7\right)x^5\\
& +\left(q^4 +q^5-2 q^7+3 q^9-q^{11} \right) x^6-2 q^8 x^7+\left(-q^7-q^9-2q^{10}+q^{12}-q^{14}\right) x^8 -q^9 x^9\\
& +\left(q^{12} +q^{15} \phantom{1} \right) x^{10}+2 q^{14} x^{11}-q^{19}x^{13}\\
r_2 & = q^4 x^3-2 q^5 x^5+q^4 \left(-q^2-q^5\right) x^6+q^6 x^7+q^4 \left(q^3+q^5+2 q^6-q^8+q^{10}\right) x^8+2q^{11} x^9\\
& +q^4 \left(-q^6-q^7+2 q^9-3 q^{11}+q^{13}\right) x^{10}+q^4 \left(-2 q^8-2 q^{12}\right) x^{11}+q^4 \left(-q^{10}+2 q^{12}-3 q^{14}+q^{16} \right) x^{12}\\
& +2 q^{17} x^{13}+q^4 \left(2q^{15}-q^{17}+q^{19}\right) x^{14}-q^{24} x^{16}\\
r_3 & = -q^{19} x^9+2 q^{20} x^{11}+q^{21} x^{12}-q^{21} x^{13}+q^{19} \left(-q^3-q^5\right) x^{14}+q^{25}x^{16}\\
\end{align*}
For instance, at a fixed order in $x$, the four terms of (5) at that order are
\begin{align*}
T_0 & =-q^3+3 q^6+q^8-2 q^{11}+2 q^{12}-2 q^{13}+8 q^{15}-2 q^{17}-2 q^{18}-4 q^{20}-2q^{21}+O(q^{23})\\
T_1 & =-4 q^2-4 q^3+6 q^4-14 q^6-4 q^7+4 q^8-6 q^9+q^{10}+3 q^{11}-10 q^{12}+q^{13}+12q^{14}- O( q^{15})\\
T_2 & = \frac{3}{q} -1+9 q+8 q^2+3 q^3-4 q^4+13 q^6+8 q^7-5 q^8+12 q^9-q^{10}+q^{11}+10q^{12}+q^{13}-O(q^{14})\\
T_3 & = -\frac{3}{q} + 1 -9 q-4 q^2+2 q^3-2 q^4-2 q^6-4 q^7-6 q^9-2 q^{11}-2 q^{12}-2 q^{17}+4q^{19}-O(q^{21})\\
\end{align*}
The figure below shows that, as the upper bound of the summation in (1) increases, the minimum power of $q$ among the surviving terms increases. This indicates that the desired cancellations occur.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{plot1.pdf}
\caption{Minimum powers of q-terms that survived in (5) for the powers of x shown. The upper bound corresponds to the maximum value among the upper bounds in the summations in $F_{T(2,3) \, \# \, T(2,3)} (xq^j,q),\, j=0,\cdots ,3$\, (see Appendix A for the plots of other powers of x).}
\end{center}
\end{figure}
\newline
\noindent \underline{$K=T(2,3)\, \# \, T(2,5)$}\quad The minimal degree homogeneous recursion relation for $K$ is
$$
t_0\, F_{T(2,3)}(x,q)\, F_{T(2,5)}(x,q) + t_1\, F_{T(2,3)}(xq,q)\, F_{T(2,5)}(xq,q) + t_2\, F_{T(2,3)}(xq^2,q)\,F_{T(2,5)}(xq^2,q)
$$
\begin{equation}
\hspace{-3cm} + t_3\, F_{T(2,3)}(xq^3,q)\, F_{T(2,5)}(xq^3,q) + t_4\, F_{T(2,3)}(xq^4,q)\, F_{T(2,5)}(xq^4,q) = 0
\end{equation}
The coefficient functions $t_i (x,q) \in \mathbb{Z}[x,q]$ are recorded in \cite{C1}. At a fixed order in $x$, the five terms of (6) at that order are
\begin{align*}
R_0 & = -q^5+q^6-q^7+q^8-2 q^9+5 q^{10}-q^{11}+q^{12}-q^{13}+4 q^{14}+2 q^{15}+4 q^{16}-3 q^{17}- O( q^{19})\\
R_1 & = -4 q^4-q^5+7 q^6+3 q^7-7 q^8-11 q^9-8 q^{10}+10 q^{11}+14 q^{12}-5 q^{13}-2 q^{14}-7 q^{15}- O(q^{16})\\
R_2 & = -\frac{2}{q^2}+\frac{4}{q} -10 +12 q-5 q^2-3 q^3+2 q^4+3 q^5-q^6+7 q^7-11 q^8-14 q^9+4 q^{10}-O( q^{11})\\
R_3 & = -\frac{1}{q^7}+\frac{1}{q^6}-\frac{4}{q^5}+\frac{6}{q^4}+\frac{1}{q^3}-\frac{5}{q^2}+\frac{3}{q}+11 -13q+2 q^2+6 q^3+3 q^4-6 q^5-9 q^6-O( q^7)\\
R_4 & = \frac{1}{q^7}-\frac{1}{q^6}+\frac{4}{q^5}-\frac{6}{q^4}-\frac{1}{q^3}+\frac{7}{q^2}-\frac{7}{q}-1+q+3q^2-3 q^3-q^4+5 q^5+2 q^6-q^7+q^9-O( q^{10})\\
\end{align*}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{graph1.pdf}
\caption{Minimum powers of q-terms that survived in (6) for the powers of x shown. The upper bound corresponds to the maximum value among the upper bounds in the summations in $F_{T(2,3) \, \# \, T(2,5)} (xq^j,q),\, j=0,\cdots ,4$\, (see Appendix A for the plots of other powers of x).}
\end{center}
\end{figure}
\noindent \underline{$K=T(2,3)\, \# \, T(3,5)$}\quad The minimal degree homogeneous recursion relation for $K$ is
$$
h_0\, F_{T(2,3)}(x,q)\, F_{T(3,5)}(x,q) + h_1\, F_{T(2,3)}(xq,q)\, F_{T(3,5)}(xq,q) + h_2\, F_{T(2,3)}(xq^2,q)\, F_{T(3,5)}(xq^2,q)
$$
\begin{equation*}
\hspace{-4cm} + h_3\, F_{T(2,3)}(xq^3,q)\, F_{T(3,5)}(xq^3,q) + h_4\, F_{T(2,3)}(xq^4,q)\, F_{T(3,5)}(xq^4,q)
\end{equation*}
\begin{equation}
\hspace{-3.5cm} + h_5\, F_{T(2,3)}(xq^5,q)\, F_{T(3,5)}(xq^5,q) + h_6\, F_{T(2,3)}(xq^6,q)\, F_{T(3,5)}(xq^6,q) = 0
\end{equation}
The coefficient functions $h_i (x,q)\in \mathbb{Z}[x,q]$ are listed in \cite{C1}. At $x^{0}$ order, the above seven terms in the same order are
\begin{align*}
W_0 & = 5 q^{304}+9 q^{305}+15 q^{306}+32 q^{307}+48 q^{308}+64 q^{309}+79 q^{310}+91 q^{311}+ O(q^{312})\\
W_1 & =q^{295}+2 q^{296}-5 q^{297}-18 q^{298}-36 q^{299}-66 q^{300}-104 q^{301}-155 q^{302}- O(q^{303})\\
W_2 & = -q^{286}+q^{288}-4 q^{289}-10 q^{290}-11 q^{291}+9 q^{292}+41 q^{293}+78 q^{294}+O(q^{295})\\
W_3 & = 2 q^{289}+9 q^{290}+4 q^{291}-9 q^{292}-16 q^{293}-35 q^{294}-51 q^{295}-54 q^{296}- O( q^{297})\\
W_4 & = q^{275}+2 q^{276}-q^{277}-8 q^{278}+4 q^{279}+13 q^{280}+19 q^{281}+32 q^{282}+46 q^{283}+O(q^{284})\\
W_5 & = -2 q^{289}-9 q^{290}-4 q^{291}+9 q^{292}+16 q^{293}+35 q^{294}+50 q^{295}+52 q^{296}+50 q^{297}+O( q^{298})\\
W_6 & = -q^{275}-2 q^{276}+q^{277}+8 q^{278}-4 q^{279}-13 q^{280}-19 q^{281}-32 q^{282}-46 q^{283}-45q^{284}-O( q^{285})\\
\end{align*}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{graph5.pdf}
\caption{Minimum powers of q-terms that survived in (7) for the powers of x shown. The three dots are nearly overlapping. The upper bound corresponds to the maximum value among the upper bounds in the summations in $F_{T(2,3) \, \# \, T(3,5)} (xq^j,q),\, j=0,\cdots ,6$\, (see Appendix A for the plots of other powers of x).}
\end{center}
\end{figure}
\noindent \underline{$K=T(2,3)\, \# \, T(2,-3)$}\quad The minimal degree homogeneous recursion relation for $K$ is
$$
b_0\, F_{T(2,3)}(x,q)\, F_{T(2,-3)}(x,q) + b_1\, F_{T(2,3)}(xq,q)\, F_{T(2,-3)}(xq,q) + b_2\, F_{T(2,3)}(xq^2,q)\, F_{T(2,-3)}(xq^2,q)
$$
\begin{equation}
\hspace{-4cm} + b_3\, F_{T(2,3)}(xq^3,q)\, F_{T(2,-3)}(xq^3,q)+ b_4\, F_{T(2,3)}(xq^4,q)\, F_{T(2,-3)}(xq^4,q)=0,
\end{equation}
\newline
\noindent where the coefficient functions $b_i (x,q)\in \mathbb{Z}[x,q]$ are listed in \cite{C1}. For this composite knot, the cancellation in (8) is subtle compared to the connected sums of the right-handed torus knots, since the coefficient functions $f_m(q)$ of the left-handed torus knots have the form $q^{-r},\, r \in \mathbb{Z}_{+}$. Specifically, arbitrarily high and low powers of $q$ from $F_{T(2,3)}$ and $F_{T(2,-3)}$, respectively, which appear for large values of the upper bound of the summations in $F_{T(2,\pm 3)}$, can combine to yield $O(1)$-powers of $q$ that are required for cancellations. The desired cancellations become evident when we group the terms in (8) in powers of $q$ and observe cancellations among the $x$-terms. It turns out that for some powers of $q$ such as $q$ (Figure 20) and $q^{500}$ (Figure 27), cancellations do not occur in $x^p$ or $x^{-p}$, $p\in \mathbb{Z}_{+}$, when the upper bound is not high enough. Furthermore, another gap can be created for some powers of $q$ when the upper bound is high enough. Therefore, we scrutinized the growth of the width of the gaps in $x$-terms as the upper bound is increased for various powers of $q$.
\newline
For example, when the upper bound of the summation is 325, a subset of the $x$-terms at $q^{100}$ in (8) is
\begin{align*}
(8) & \supset \frac{76}{x^{281}}+\frac{49}{x^{280}}-\frac{118}{x^{279}}-\frac{21}{x^{278}} +\frac{51}{x^{277}}+\frac{26}{x^{276}} -\frac{11}{x^{275}}-\frac{34}{x^{274}}+\frac{14}{x^{273}}+\frac{14}{x^{272}}-\frac{3}{x^{271}}-\frac{3}{x^{270}}\\
& -2 x^2-8 x^3+16 x^4 +91 x^5 -83 x^6-151 x^7+69 x^8+154 x^9-71 x^{10}-15 x^{11}-x^{12}+x^{13}\\
& -7 x^{287}+24 x^{288}-6 x^{289} -33 x^{290}+14 x^{291} +14 x^{292}-12 x^{293}+15 x^{294} +6 x^{295}-24x^{296}\\
& +x^{297}+12 x^{298}+3 x^{299}-55 x^{300}+28 x^{301} +100 x^{302}-57 x^{303} -56 x^{304}+48 x^{305}-47 x^{306}+ \cdots
\end{align*}
\noindent There is a gap between $x^{14}$ and $x^{286}$ and there is another gap from $x^0$ to $x^{-269}$. These gaps are due to cancellations as we can see from the five terms in Appendix A. In the figure below, we observe that the gap size widens for $q^{100}$ as the upper bound of the summation is increased.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{plot100.pdf}
\caption{At $q^{100}$, the width of the gaps in $x^p$ terms (blue) and in $1/x^p$ terms (orange), $p \in \mathbb{Z}_{+}$ is displayed. The upper bound corresponds to the maximum value among the upper bounds in the summations in $F_{T(2,3) \, \# \, T(2,-3)} (xq^j,q),\, j=0,\cdots ,4$\, (see Appendix A for the plots of other powers of q).}
\end{center}
\end{figure}
\newline
For lower powers of $q$ such as $q$, $q^2$, and $q^{-1}$, cancellations among small positive powers of $x$ occur when the upper bound is 165. This second gap widens as the upper bound is increased. For instance, at $q^{-3}$, the $x$ and $x^2$ terms are absent when the upper bound is 165. As it is increased to 187, the $x$ to $x^5$ terms are canceled.
\newline
\section{Comparison to the analytic results}
In this section, we compare the $SU(2)$ WRT invariants of integer homology spheres at fixed roots of unity, obtained analytically and numerically. For the latter method, we utilize the conjectured Dehn surgery formula in \cite{GM}, which relates $F_K$ and $\hat{Z}$:
\newline
\noindent \textbf{Conjecture 5.1} ({\cite[Conjecture 1.7]{GM}})\, For a knot $K \subseteq S^3$, let $S^3_{p/r}(K)$ be the 3-manifold obtained from Dehn surgery on $K$ along $p/r \in \mathbb{Q}^{\ast}$. Then
$$
\hat{Z}_{b}[S^3_{p/r}(K); q] = \pm q^d \, \mathcal{L}^{(b)}_{p/r} \left[ \left( x^{\frac{1}{2r}} - x^{-\frac{1}{2r}} \right) F_{K}(x,q) \right] \qquad d \in \mathbb{Q},
$$
$$
\mathcal{L}^{(b)}_{p/r} : x^{u}q^{v} \mapsto \begin{cases}
q^{-u^2 r/p}q^v & \text{if}\quad ru - b \in p\mathbb{Z} \\
0 & \text{otherwise}
\end{cases}
$$
where $\mathcal{L}$ is a $|q|<1$ generalization of the Laplace transform~\cite{BBL}.
\newline
\noindent On the analytic side, the integer Dehn surgery formula for the WRT invariant at a primitive $k$-th root of unity is~\cite{BBL,BL}
\begin{equation}
\tau_k [S^3_{p}(K)] = \frac{\sum_{n=1}^{k-1} \, [n]^2 \, q^{p(n^2 -1)/4}\, J_{n}(K)}{\sum_{n=1}^{k-1}\, [n]^2 \, q^{\operatorname{sign}(p) (n^2 -1)/4}}\qquad [n]= \frac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}
\end{equation}
where $J_{n}(K)$ is the $sl(2)$ colored Jones polynomial of $K$ and $p \in \mathbb{Z}$ is the surgery slope or, equivalently, the framing of $K$. When $p=-1$, the result $S^3_{-1}(K)$ is an integer homology sphere ($\mathbb{Z} HS^3$) for any $K$. For this class of manifolds, the decomposition of the $SU(2)$ WRT invariant in terms of $\hat{Z}$ is~\cite{GPPV}
\begin{equation}
Z_{CS} \left[ S^3_{-1}(K); q=e^{\frac{i 2 \pi}{k}} \right] = \frac{-i}{2\sqrt{2k}}\, \lim_{q \rightarrow e^{\frac{i 2 \pi}{k}}} \hat{Z}_{0} (q).
\end{equation}
It is simply related to $\tau_k$
$$
Z_{CS} \left[ S^3_{-1}(K); q=e^{\frac{i 2 \pi}{k}} \right] = \frac{-i(q^{1/2} - q^{-1/2})}{\sqrt{2k}}\, \tau_k [S^3_{-1}(K)].
$$
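\noindent The finite sum in (9) is straightforward to evaluate numerically. The following is a minimal sketch with simplified branch choices for the fractional powers of $q$; \texttt{jones(n)} should return $J_n(K)$ evaluated at the chosen root of unity (for a connected sum, a product of the two factors):
\begin{verbatim}
import numpy as np

def tau_k(k, p, jones):
    # WRT invariant of S^3_p(K) at q = exp(2*pi*i/k), Eq. (9)
    q = np.exp(2j * np.pi / k)
    bracket = lambda n: (q**(n/2) - q**(-n/2)) / (q**0.5 - q**(-0.5))
    num = sum(bracket(n)**2 * q**(p*(n**2 - 1)/4) * jones(n)
              for n in range(1, k))
    den = sum(bracket(n)**2 * q**(np.sign(p)*(n**2 - 1)/4)
              for n in range(1, k))
    return num / den

# p = -1 surgery on the unknot (J_n = 1) returns S^3, so tau_k = 1
print(tau_k(5, -1, lambda n: 1.0))
\end{verbatim}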
For the examples below, we display the $sl(2)$ colored Jones polynomial for the torus knot $T(s,t)$,
$$
J_{n}(T(s,t);q)= -\frac{q^{-\frac{s t n^2}{4} } q^{\frac{(s-1) (t-1)}{2} }}{q^{\frac{n}{2}}-q^{-\frac{n}{2}}}\, \sum_{r=0}^{stn} \epsilon_{s t n-r}\, q^{\frac{r^2-(s t-s-t)^2}{4 s t}}\qquad n \in \mathbb{N}
$$
where $ 2 \leq s < |t|$,\, $\gcd(s,t)=1$, and $\epsilon$ is as in (4) (the normalization is such that $J_{n}=1$ for the unknot).
\newline
\noindent \underline{$K=T(2,3)\, \# \, T(2,3)$}: At $k=3$, applying the analytic formula (9) yields
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{3}} \right] = 0.7071,
$$
where $J_{n}(K_1 \, \# \, K_2)= J_{n}(K_1) J_{n} (K_2)$ is used. On the numerical side, after $\hat{Z}$ is obtained from Conjecture 5.1, we truncate the q-power series at a large finite power $N$ of $q$ and extract the limiting value of $\hat{Z}_{0} (q)$ as $q$ approaches the root of unity. We choose the truncation power to be $N=20000$. The figure below shows that the q-series converges to
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{3}}}\, \frac{2}{q^2}\hat{Z}_{0} [S^3_{-1}(K); q] \longrightarrow -0.0003504774588 - i 6.925958533.
$$
The overall monomial is introduced for numerical convenience. After substituting the limiting value into (10), we find $Z_{CS} \approx 0.7068717087$, which agrees with the analytical value above.
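\noindent One simple way to realize such a limit numerically (the extrapolation scheme here is our own choice and not necessarily the one used for the figures) is to evaluate the truncated series along the ray $q = r\, e^{2\pi i/k}$ and extrapolate $r \to 1^{-}$:
\begin{verbatim}
import numpy as np

def radial_limit(coeffs, powers, k, deg=2):
    # truncated q-series: sum_j coeffs[j] * q**powers[j],
    # evaluated along q = r * exp(2*pi*i/k)
    radii = np.linspace(0.90, 0.999, 60)
    qs = radii * np.exp(2j * np.pi / k)
    vals = np.array([np.sum(coeffs * q**powers) for q in qs])
    # polynomial extrapolation in (1 - r) to r = 1
    re = np.polyval(np.polyfit(1 - radii, vals.real, deg), 0.0)
    im = np.polyval(np.polyfit(1 - radii, vals.imag, deg), 0.0)
    return re + 1j * im
\end{verbatim}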
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Surgery233.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{3}} )/q^2$ associated with $K$ at $N=20000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent At $k=4$, the analytic formula (9) results in
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{4}} \right] = 0.5 .
$$
As in the previous case, we truncate the q-power series at $N=20000$ and find the limiting value of $\hat{Z}$ as $q$ goes to $i$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Surgery234.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{4}} )/q^2$ associated with $K$ at $N=20000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent The q-series approaches
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{4}}}\, \frac{2}{q^2}\hat{Z}_{0} [S^3_{-1}(K); q] \longrightarrow 3.968560094 - i 4.028195455.
$$
Using (10), $Z_{CS} \approx 0.5 $, which matches the analytical result.
\newline
\noindent At $k=5$, the analytic formula (9) produces
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{5}} \right] = -0.3 + i 1.36263.
$$
We truncate the q-power series at $N=30000$ and find the limiting value of $\hat{Z}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Surgery235.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{5}} )/q^2 $ associated with $K$ at $N=30000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent The q-series approaches
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{5}}}\, \frac{2}{q^2}\hat{Z}_{0} [S^3_{-1}(K); q] \longrightarrow 1.6675682 + i 17.42149573.
$$
From (10), $Z_{CS} \approx -0.3 + i 1.35 $, which agrees with the analytical result.
\newline
\noindent \underline{$K=T(2,3)\, \# \, T(2,5)$}: At $k=3$, applying the analytic formula (9) yields
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{3}} \right] = 0.7071.
$$
Truncating the q-power series at $N=25000$ and extracting the limiting value of $\hat{Z}_{0} (q)$ results in Figure 8, which shows that the q-series converges to
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{3}}}\, \frac{2}{q^4} \hat{Z}_{0} [S^3_{-1}(K); q] \longrightarrow 5.989718 + i 3.450427632 .
$$
After substituting it into (10), we find $Z_{CS} \approx 0.705499 - i 0.00068351$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{surgery23253.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{3}} )/q^4$ associated with $K$ at $N=25000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent At $k=4$, the analytic formula (9) results in
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{4}} \right] = 0.5 .
$$
As in the previous case, we truncate the $q$-power series at $N=25000$ and find the limiting value of $\hat{Z}_{0}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{surgery23254.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{4}} )/q^4$ associated with $K$ at $N=25000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent The $q$-series converges to
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{4}}}\, \frac{2}{q^4}\hat{Z}_{0} [S^3_{-1}(K); q] = -4.05379317 + i 4.09952837721.
$$
From (10), we obtain $Z_{CS} \approx 0.509582 -i 0.002858461 $.
\newline
\noindent At $k=5$, the analytic formula (9) gives
$$
Z_{CS} \left[ S^3_{-1}(K); e^{\frac{i 2 \pi}{5}} \right] = 0.1148764603 + i 0.3535533906.
$$
We truncate the $q$-power series at $N=25000$ and find the limiting value of $\hat{Z}_{0}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{surgery23255.pdf}
\caption{The extrapolation of $2\hat{Z}_{0} (q \rightarrow e^{\frac{i 2 \pi}{5}} )/q^4 $ associated with $K$ at $N=25000$. Real part (blue) and imaginary part (orange) of $\hat{Z}_{0}$. }
\end{center}
\end{figure}
\noindent The $q$-series converges to
$$
\lim_{q \rightarrow e^{\frac{i 2 \pi}{5}}}\, \frac{2}{q^4}\hat{Z}_{0} [S^3_{-1}(K); q] = 0.007799372126 - i 4.707580478.
$$
From (10), $Z_{CS} \approx 0.114412 + i 0.354142$.
\newline
\section{Introduction}
Symbolic regression is a machine learning task in which we try to find
a mathematical model, represented as a closed-form expression, that
captures the dependencies between the variables of a dataset. It is a
common example of a task where genetic programming (GP) is
particularly effective in practical applications. GP has proven to be
well-suited for symbolic regression, especially when there is little
knowledge about the data-generating process. Even when we have a good
understanding of the underlying process, GP can identify
counterintuitive or unexpected solutions.
\subsection{Motivation}
GP has some practical limitations when used for symbolic regression. One limitation
is that---as a stochastic process---it might produce highly dissimilar
solutions even for the same input data. This can be very helpful to
produce new ``creative'' solutions. However, it is problematic when we try
to integrate symbolic regression in carefully engineered solutions
(e.g. for automatic control of production plants). In such situations
we would hope that there is an optimal solution and that the solution
method is guaranteed to identify it. Intuitively, if the data
changes only slightly, we expect that the optimal regression solution
also changes only slightly. If this is the case we know that the
solution method is trustworthy (cf.~\cite{Kotanchek:2007:GPTP,Stijven:2015:GPTP}) and we
can rely on the fact that the solutions are optimal at least with
respect to the objective function that we specified. Of course, this is
only wishful thinking, for three fundamental reasons: (1) the
symbolic regression search space is huge and contains many different
expressions which are algebraically equivalent, (2) GP offers no
guarantee of exploring the whole search space with reasonable
computational resources, and (3) the ``optimal solution'' might not
be expressible as a closed-form mathematical expression using
the given building blocks. Therefore, the goal is to find
an approximately optimal solution.
\subsection{Prior Work}
Different methods have been developed with
the aim to improve the reliability of symbolic regression. Currently,
there are several off-the-shelf software solutions which use
enhanced variants of GP and are noteworthy in this context: the
DataModeler
package\footnote{\url{http://www.evolved-analytics.com/}} \cite{Kotanchek2013}
provides extensive capabilities for symbolic regression on top of
Mathematica\texttrademark. Eureqa\texttrademark\, is a commercial
software
tool\footnote{\url{https://www.nutonian.com/products/eureqa/}} for
symbolic regression based on research described in
\cite{Schmidt:2006:GPTP,Schmidt:2009:GPTP,Schmidt:2010:GPTP}. The open-source framework HeuristicLab\footnote{\url{https://dev.heuristiclab.com}} \cite{wagner2005heuristiclab} is a general software environment for heuristic and
evolutionary algorithms with extensive functionality for symbolic regression and white-box modeling.
In other prior work, several researchers have presented
non-evolutionary solution methods for symbolic regression. Fast
function extraction (FFX) \cite{McConaghy:2011:GPTP} is a deterministic
method that uses elastic-net regression \cite{zou2005regularization}
to produce symbolic regression solutions orders of magnitude faster
than GP for many real-world problems.
The work by Korns toward ``extremely accurate'' symbolic regression
\cite{Korns:2013:GPTP,Korns:2014:GPTP,Korns:2015:GPTP} highlights the
issue that baseline GP does not guarantee finding the optimal solution
even for rather limited search spaces. Korns gives a useful systematic
definition of increasingly larger symbolic regression search spaces
using abstract expression grammars \cite{Korns:2009:GEC} and describes
enhancements to GP to improve its reliability. The work by Worm and
Chiu on prioritized grammar enumeration \cite{Worm:2013:GECCO} is
closely related. They use a restricted set of grammar rules for
deriving increasingly complex expressions and describe a deterministic
search algorithm, which enumerates the
search space for limited symbolic regression problems.
\subsection{Organization of this Chapter}
Our contribution is conceptually an extension of prioritized grammar enumeration \cite{Worm:2013:GECCO}, although our implementation of the method deviates significantly. The most relevant extensions are that we cut out large parts of the search space and provide a general framework for integrating heuristics in order to improve the search efficiency. Section \ref{sec:search_space} describes how we reduce the size of the search space which is defined by a context-free grammar:
\begin{enumerate}
\item We restrict the structure of solutions to prevent overly complicated ones.
\item We use grammar restrictions to prevent semantic duplicates---solutions with different syntax but the same semantics, e.g.~related by algebraic transformations. With these restrictions, most solutions can only be generated in exactly one way.
\item We efficiently identify remaining duplicates with semantic hashing, so that (nearly) all solutions in the search space are semantically unique.
\end{enumerate}
In Section \ref{sec:exploring_search_space}, we explain the algorithm that iterates all these semantically unique solutions. The algorithm sequentially generates solutions from the grammar and keeps track of the most accurate one. For very small problems, it is even feasible to iterate the whole search space \cite{Kronberger2019}. However, our goal in larger problems is to find accurate and concise solutions early during the search and to stop the algorithm after a reasonable time. The search order is determined with heuristics, which estimate the quality of solutions and prioritize promising ones in the search. A simple heuristic is proposed in Section \ref{sec:search_heuristic}. Modeling results in Section \ref{sec:experiments} show that this first version of our algorithm can already solve several difficult noiseless benchmark problems.
\section{Definition of the Search Space}
\label{sec:search_space}
The search space of our deterministic symbolic regression algorithm is defined by a context-free grammar. Production rules in the grammar define the mathematical expressions that can be explored by the algorithm. The grammar only specifies possible model structures, whereby placeholders are used for numeric coefficients. These are optimized separately by a curve-fitting algorithm (e.g.~optimizing least squares with a gradient-based optimization algorithm) using the available training data.
In a general grammar for mathematical expressions---as is common in symbolic regression with GP, for example---the same formula can be derived in several forms. These duplicates inflate the search space. To reduce their number, our grammar is deliberately restricted regarding the possible structure of expressions. Remaining duplicates that cannot be prevented by a context-free grammar are eliminated via a hashing algorithm. Using both this grammar and hashing, we can generate a search space with only semantically unique expressions.
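As an illustration of the separate coefficient-optimization step, the following Python sketch fits the placeholder coefficients of one fixed, hypothetical model structure $c_0 \sin(c_1 x) + c_2$ to synthetic data using SciPy's gradient-based least-squares routine with random restarts; the model, data, and restart count are illustrative assumptions, not part of our implementation.
\begin{verbatim}
# A minimal sketch: optimize placeholder coefficients of one fixed
# model structure by local least squares, with random restarts.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-3.0, 3.0, 200)
y = 2.0 * np.sin(1.5 * x) + 0.5           # synthetic, noiseless target

def residuals(c):                         # structure: c0*sin(c1*x) + c2
    return c[0] * np.sin(c[1] * x) + c[2] - y

best = min((least_squares(residuals, x0=np.random.randn(3))
            for _ in range(10)), key=lambda fit: fit.cost)
print(best.x, best.cost)
\end{verbatim}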
\subsection{Grammar for Mathematical Expressions}
In this work we consider mathematical expressions as lists of symbols which we call \textit{phrases} or \textit{sentences}. A phrase can contain both \textit{terminal} and \textit{non-terminal} symbols, while a sentence contains only terminal symbols. Non-terminal symbols can be replaced by other symbols as defined by a grammar's \textit{production rules}, while terminal symbols represent parts of the final expression, such as functions or variables in our case.
Our grammar is very similar to the one by Kronberger et al.~\cite{Kronberger2019}. It produces only rational polynomials which may contain linear and nonlinear terms, as outlined conceptually in Equation~\ref{eq:polynomial_structure}. The basic building blocks of terms are linear and non-linear functions $\{+, \times$, inv, exp, log, sin, square root, cube root$\}$. Recursion in the production rules represents a strategy for generating increasingly complex solutions by repeated nesting of expressions and terms.
\begin{equation}
\begin{split}
\mathit{Expr} &= c_1 \mathit{Term}_1 + c_2 \mathit{Term}_2 + \ldots + c_n\\
\mathit{Term} &= \mathit{Factor}_0 \times \mathit{Factor}_1 \times \ldots \\
\mathit{Factor} &\in \{\mathit{variable}, \log(\mathit{variable}), \exp(\mathit{variable}), \sin(\mathit{variable}) \} \\
\end{split}
\label{eq:polynomial_structure}
\end{equation}
We explicitly disallow nested non-linear functions, as we consider such solutions too complex for real-world applications. Otherwise, we allow as many different structures as possible to keep accurate and concise models in the search space. We prevent semantic duplicates by generating just one side of mathematical equality relations in our grammar, e.g.~we allow $xy+xz$ but not $x(y+z)$. Since each function has different mathematical identities, many different production rules are necessary to cover all special cases. Because we scale every term including function arguments, we also end up with many placeholders for coefficients in the structures. All production rules are detailed in Listing~\ref{lst:grammar} and described in the following.
\input{lst_grammar.tex}
We use a polynomial structure as outlined in Equation~\ref{eq:polynomial_structure} to prevent a factored form of solutions. The polynomial structure is enforced with the production rules \texttt{Expr} and \texttt{Term}. We restrict the occurrence of the multiplicative inverse ($= \frac{1}{\ldots}$), the square root and the cube root function to prevent a factored form such as $\frac{1}{x+y} \frac{1}{x+z}$. This is necessary since we want to allow sums of simple terms as function arguments (see non-terminal symbol \texttt{SimpleExpr}). Therefore, these three functions can occur at most once per term. This is defined with the symbol \texttt{OneTimeFactors} and one production rule for each combination. The only function in which we do not allow sums as arguments is exponentiation (see \texttt{ExpFactor}), since this form is subsumed by the overall polynomial structure (e.g.~we allow $e^x e^y$ but not $e^{x+y}$). Equation \ref{eq:covered_identities} shows some example identities and which forms are supported.
\begin{equation}
\begin{split}
\text{in the search space:} \quad \quad \quad \hphantom{\equiv} & \quad \quad \quad \text{not in the search space:} \\[3pt]
c_1 xy + c_2 xz + c_3
\ \ \equiv & \ \
x(c_4 y + c_5 z) + c_6 \\[3pt]
c_1 \frac{1}{c_2 x + c_3 xx + c_4 xy + c_5 y + c_6} + c_7
\ \ \equiv & \ \
c_8 \frac{1}{c_{9} x + c_{10}} \ \frac{1}{c_{11} x + c_{12} y + c_{13}} + c_{14} \\[3pt]
c_1 \exp(c_2 x) \exp(c_3 y) + c_4
\ \ \equiv & \ \
c_5 \exp(c_6 x + c_7 y) + c_8
\end{split}
\label{eq:covered_identities}
\end{equation}
We only allow (sums of) terms of variables as function arguments, which we express with the production rules \texttt{SimpleExpr} and \texttt{SimpleTerm}. An exception is the multiplicative inverse, in which we want to include the same structures as in ordinary terms. However, we disallow compound fractions like in Equation \ref{eq:covered_identities_inverse}. Again, we introduce separate grammar rules \texttt{InvExpr} and \texttt{InvTerm} which cover the same rules as \texttt{Term} except the multiplicative inverse.
\begin{equation}
\begin{split}
\text{in the search space:} \quad \quad \quad \hphantom{\equiv} & \quad \quad \quad \text{not in the search space:} \\[3pt]
c_1 \frac{1}{c_2 \log(c_3 x + c_4) + c_5} + c_6
\ \ \equiv & \ \
c_7 \frac{1}{c_8 \frac{1}{c_9 \log(c_{10} x + c_{11}) + c_{12}} + c_{13}} + c_{14}
\end{split}
\label{eq:covered_identities_inverse}
\end{equation}
In the simplest case, the grammar produces an expression $E_0 = c_0 x+c_1$, where $x$ is a variable and $c_0$ and $c_1$ are coefficients corresponding to the slope and intercept. This expression is obtained by considering the simplest possible \texttt{Term} which corresponds to the derivation chain \texttt{Expr} $\to$ \texttt{Term} $\to$ \texttt{RecurringFactors} $\to$ \texttt{VarFactor} $\to$ $x$. Further derivations could lead for example to the expression $E_1 = c_0 x + (c_1 x + c_2)$, produced by nesting $E_0$ into the first part of the production rule for \texttt{Expr}, where the \texttt{Term} is again substituted with the variable $x$.
However, duplicate derivations can still occur due to algebraic properties like associativity and commutativity. These issues cannot be prevented with a context-free grammar, because a context-free grammar does not consider the symbols surrounding the derived non-terminal symbol in its production rules. For example, the expression $E_1 = c_0 x + (c_1 x + c_2)$ contains two coefficients $c_0$ and $c_1$ for the variable $x$, which could be folded into a new coefficient $c_\mathit{new} = c_0 + c_1$. This type of redundancy becomes even more pronounced when \texttt{VarFactor} has multiple productions (corresponding to multiple input variables), as multiple derivation paths can then produce different expressions which are algebraically equivalent, such as $c_1x + c_2y$, $c_3x+ c_4x + c_5y$, $c_6y+c_7x$ for corresponding values of $c_1 \ldots c_7$. Another example is the pair $c_1 x y$ and $c_2 y x$, which are equivalent but both derivable from the grammar.
To avoid re-visiting already explored regions of the search space, we implement a caching strategy based on expression hashing for detecting algebraically equivalent expressions. The computed hash values are the same for algebraically equivalent expressions. In the search algorithm we keep the hash values of all visited expressions and prevent re-evaluations of expressions with identical hash values.
\subsection{Expression Hashing}
We employ expression hashing by Burlacu et al.~\cite{Burlacu2019eurocast} to assign hash values to subexpressions within phrases and sentences. Hash values for parent expressions are aggregated in a bottom-up manner from the hash values of their children using any general-purpose hash function.
We then simplify such expressions according to arithmetic properties such as commutativity, associativity, and applicable mathematical identities. The resulting canonical minimal form and associated hash value are then cached in order to prevent duplicated search effort.
\begin{figure}
\centering
\input{fig_hashtree.tex}
\caption{Hash tree example, in which the hash values of all nodes are calculated from both their own node content and the hash values of their children \cite{Burlacu2019eurocast}.}\label{fig:hash-tree}
\end{figure}
Expression hashing builds on the idea of Merkle trees \cite{Merkle1988}. Figure \ref{fig:hash-tree} shows how hash values propagate towards the tree root (the topmost symbol of the expression) using hash function $\oplus$ to aggregate child and parent hash values. Expression hashing considers an internal node's own symbol, as well as associativity and commutativity properties. To account for these properties, each hashing step must be accompanied by a corresponding sorting step, where child subexpressions are reordered according to their type and hash value. Algorithm~\ref{alg:expression-hashing} ensures that child nodes are sorted and hashed before parent nodes, such that calculated hash values are consistent towards the root symbol.
\input{fig_tree-hashing.tex}
An expression's hash value is then given by the hash value of its root symbol. After sorting, sub-expressions with the same hash value are considered isomorphic and are simplified according to arithmetic rules. The simplification procedure is illustrated in Figure~\ref{fig:example-simplification} and consists of the following steps:
\begin{enumerate}
\item \textbf{Fold}: Apply associativity to eliminate nested symbols of the same type. For example, postfix expression \texttt{a b + c +} consists of two nested additions where each addition symbol has arity 2. Folding flattens this expression to the equivalent form \texttt{a b c +} where the addition symbol has arity 3.
\item \textbf{Simplify}: Apply arithmetic rules and mathematical identities to further simplify the expressions. Since expressions already include placeholders for numerical coefficients, we eliminate redundant subexpressions such as \texttt{a a b +}, which becomes \texttt{a b +}, or \texttt{a a +}, which becomes \texttt{a}.
\item Repeat steps 1 and 2 until no further simplification is possible.
\end{enumerate}
Nested $+$ and $\times$ symbols in Figure~\ref{fig:example-simplification} are folded in the first step, simplifying the tree structure of the expression. Arithmetic rules are then applied for further simplification. In this example, the product of exponentials
\begin{equation*}
\exp(c_1 \times x_1) \times \exp(c_2 \times x_1) \equiv \exp((c_1+c_2) \times x_1)
\end{equation*} is simplified since, from a local optimization perspective, a single coefficient $c_3 = c_1 + c_2$ suffices when optimizing the coefficients of the expression, so it makes no sense to keep both original factors. Finally, the sum $c_4 x_1 + c_5 x_1$ is also simplified, since one term in the sum is redundant.
After simplification, the hash value of the simplified tree is returned as the hash value of the original expression. Based on this computation we are able to identify already explored search paths and avoid duplicated effort.
\begin{figure}
\centering
\subfloat[Original expression]{\input{fig_simplify-original.tex}}
\subfloat[Folded expression]{\input{fig_simplify-folded.tex}}
\subfloat[Minimal form]{\input{fig_simplify-canonical.tex}}
\caption{Simplification to canonical minimal form during hashing}\label{fig:example-simplification}
\end{figure}
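The following Python sketch captures the core of this bottom-up hashing scheme under simplifying assumptions: expressions are nested tuples, only commutativity is handled (by sorting child hashes before aggregation), and the folding and simplification steps described above are omitted.
\begin{verbatim}
# A minimal sketch of bottom-up expression hashing; folding and the
# simplification rules of the full method are intentionally omitted.
import hashlib

COMMUTATIVE = {'+', '*'}

def expr_hash(node):
    if isinstance(node, str):            # leaf: variable or coefficient
        return hashlib.sha256(node.encode()).hexdigest()
    symbol, *children = node
    child_hashes = [expr_hash(c) for c in children]
    if symbol in COMMUTATIVE:            # sort so that x*y and y*x collide
        child_hashes.sort()
    data = symbol + ''.join(child_hashes)
    return hashlib.sha256(data.encode()).hexdigest()

print(expr_hash(('*', 'x', 'y')) == expr_hash(('*', 'y', 'x')))  # True
\end{verbatim}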
\section{Exploring the Search Space}
\label{sec:exploring_search_space}
By limiting the size of expressions, the grammar and the hashing scheme produce a large but finite search space of semantically unique expressions. In an exhaustive search, we iterate over all these expressions and search for the best-fitting one. In doing so, we derive sentences via every possible derivation path. An expression is rejected if another expression with the same semantics---according to hashing---has already been generated during the search. When a new, previously unseen sentence is derived, the placeholders for coefficients are replaced with real values and optimized separately. The best-fitting sentence is stored.
\begin{algorithm}
\caption{Iterating the Search Space}
\label{alg:search_space_search}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKw{Return}{return}
\Input{Data set $ds$, max. number of variable references $\mathit{maxVariableRefs}$}
\Output{Best fitting expression}
\BlankLine
$openPhrases \gets$ empty data structure\;
$seenHashes \gets$ empty set\;
Add $StartSymbol$ to $openPhrases$\;
$bestExpression \gets$ constant symbol\;
\BlankLine
\While{$openPhrases$ \textnormal{is not empty}}
{
$oldPhrase \gets$ fetch and remove from $openPhrases$\;
$nonTerminalSymbol \gets$ leftmost nonterminal symbol in $oldPhrase$\;
\ForEach{\textnormal{production} $prod$ \textnormal{of} $nonTerminalSymbol$}
{
$newPhrase \gets$ apply $prod$ on copy of $oldPhrase$\;
\If{\textnormal{VariableRefs(}$newPhrase$\textnormal{)} $\le$ $\mathit{maxVariableRefs}$}{
$hash \gets$ Hash($newPhrase$)\;
\If{$seenHashes$ \textnormal{not contains} $hash$}{
Add $hash$ to $seenHashes$\;
\If{$newPhrase$ \textnormal{is sentence}}
{
Fit coefficients of $newPhrase$ to $ds$\;
Evaluate $newPhrase$ on $ds$\;
\If{$newPhrase$ \textnormal{is better than} $bestExpression$} {
$bestExpression \gets newPhrase$\;
}
}\Else{
Add $newPhrase$ to $openPhrases$\;
}
}
}
}
}
\Return $bestExpression$
\end{algorithm}
Algorithm~\ref{alg:search_space_search} outlines how all unique expressions are derived: We store unfinished phrases---expressions with non-terminal symbols---in a data structure such as a stack or queue. We fetch phrases from this data structure one after another, derive new phrases, calculate their hash values and compare these hash values to previously seen ones. To derive new phrases, we always replace the \emph{leftmost} non-terminal symbol in the old phrase with the production rules of this non-terminal symbol. If a derived phrase becomes a sentence with only terminal symbols, its coefficients are optimized and its fitness is evaluated. Otherwise, if it still contains derivable non-terminal symbols, it is put back on the data structure.
We restrict the length of a phrase by its number of variable references---e.g.~$x x$ and $\log(x) + x$ both have two variable references. Phrases that exceed this limit are discarded in the search. Since every non-terminal symbol is eventually derived to at least one variable reference, non-terminal symbols also count as variable references. In our experiments, a limit on this complexity measure has been found to be the most intuitive way to specify an appropriate search space limit. Other measures, e.g.~the number of symbols, are harder to estimate, since coefficients, function symbols and the non-factorized representation of expressions quickly inflate the number of symbols in a phrase.
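To make the control flow concrete, the following deliberately tiny, runnable Python rendering of Algorithm~\ref{alg:search_space_search} uses a toy grammar with a single non-terminal \texttt{E} and a variable-reference limit of four; semantic hashing is replaced by plain string identity here, so, unlike in the full algorithm, phrases such as $x+x$ and $x$ are not merged.
\begin{verbatim}
# A toy version of Algorithm 1 with grammar E -> x | x+E and a
# breadth-first queue; fitting and evaluation are stubbed out.
from collections import deque

PRODUCTIONS = ['x', 'x+E']

def var_refs(phrase):                      # non-terminals count as refs
    return phrase.count('x') + phrase.count('E')

open_phrases, seen, sentences = deque(['E']), set(), []
while open_phrases:
    old = open_phrases.popleft()
    for rule in PRODUCTIONS:
        new = old.replace('E', rule, 1)    # expand leftmost non-terminal
        if var_refs(new) > 4 or new in seen:
            continue
        seen.add(new)
        if 'E' in new:
            open_phrases.append(new)
        else:
            sentences.append(new)          # here: fit, evaluate, compare
print(sentences)   # ['x', 'x+x', 'x+x+x', 'x+x+x+x']
\end{verbatim}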
\subsection{Symbolic Regression as Graph Search Problem}
Without considering the semantics of an expression, we would end up exploring a search tree like the one in Figure~\ref{fig:search_tree}, in which semantically equivalent expressions are derived multiple times (e.g.~$c_1x+c_2x$ and $c_1x+c_2x+c_3x$). However, hashing turns the search tree into a directed search graph in which nodes (derived phrases) are reachable via one or more paths, as shown in Figure~\ref{fig:search_graph}. Thus, hashing prevents searching a graph region that has already been visited. From this point of view, Algorithm~\ref{alg:search_space_search} is very similar to simple graph search algorithms such as depth-first or breadth-first search.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig_search_tree.pdf}
\caption{Search tree of expression generation without semantic hashing.}
\label{fig:search_tree}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig_search_graph.pdf}
\caption{Search graph in which semantically equivalent phrases are merged by hashing.}
\label{fig:search_graph}
\end{figure}
\subsection{Guiding the Search}
In Algorithm~\ref{alg:search_space_search}, the order in which expressions are generated is determined by the data structure used. A stack or a queue would result in a depth-first or a breadth-first search, respectively. However, as the goal is to find well-fitting expressions quickly and efficiently, we need to guide the traversal of the search graph towards promising phrases.
Our general framework for guiding the search is very similar to the idea used in the A* algorithm~\cite{hart1968formal}. We use a priority queue as data structure and assign a priority value to each phrase, indicating the expected quality of sentences which are derivable from that phrase. Phrases with high priority are derived first in order to discover well-fitting sentences, steering the algorithm towards good solutions.
Similar to the A* algorithm, we cannot make a definite statement about a phrase's priority before actually deriving all possible sentences from it. Therefore, we need to estimate this value with problem-specific heuristics. The calculation of phrase priorities provides a generic point for integrating heuristics to improve the search efficiency and extend the algorithm's capabilities in future work.
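A minimal sketch of this best-first scheme, assuming Python's \texttt{heapq} module and placeholder priority values, could look as follows; phrases with smaller values are expanded first.
\begin{verbatim}
# Best-first expansion with a priority queue; priorities are
# placeholders for the heuristic of the next section.
import heapq

queue = []
heapq.heappush(queue, (0.42, 'c1*x + Expr'))
heapq.heappush(queue, (0.10, 'c1*log(c2*x + c3) + Expr'))
priority, phrase = heapq.heappop(queue)   # most promising phrase first
print(priority, phrase)
\end{verbatim}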
\section{Steering the Search}
\label{sec:search_heuristic}
We introduce a simple heuristic for guiding the search and leave more complex and efficient heuristics for future work. The proposed heuristic makes a pessimistic estimate of the quality of a phrase's derivable sentences. This is done by evaluating phrases before they are derived to sentences. With the goal of finding short and accurate sentences quickly, the priority value considers both the expected quality and the length of a phrase.
\subsection{Quality Estimation}
Estimating the expected quality of an unfinished phrase is possible due to the polynomial structure of sentences and the derivation of the leftmost non-terminal symbol in every phrase. Since expressions are sums of terms ($c_1 \mathit{Term}_1 + c_2 \mathit{Term}_2 + ...$), repeated expansion of the leftmost non-terminal symbol derives one term after another. This results in phrases such as in Equation~\ref{eq:unfinished_phrase}, in which the first two terms $c_1 \log(c_2 x + c_3)$ and $c_4 x x$ contain only terminal symbols and the last non-terminal symbol is \textit{Expr}.
\begin{equation}
\stackrel{\mathit{finished Term}_1}{c_1 \log(c_2 x + c_3)}
\quad + \quad
\stackrel{\mathit{finished Term}_2}{c_4 x x\vphantom{\log()}}
\quad +
\underbrace{
\mathit{Expr}
}_{\text{Treat as coefficient}}
\label{eq:unfinished_phrase}
\end{equation}
Phrases where the only non-terminal symbol is \texttt{Expr} are evaluated as if they were full sentences by treating \texttt{Expr} as a coefficient during the local optimization phase.
We get a pessimistic estimate of the quality of derivable sentences, since derived sentences with more terms can only have better quality. The quality can only improve with more terms because of separate coefficient optimization and one scaling coefficient per term, as shown in Equation~\ref{eq:unfinished_phrase_abstract}. If a term which does not improve the quality is derived, the optimization of coefficients will cancel it out by setting the corresponding scaling coefficient to zero (e.g.~$c_5$ in Equation~\ref{eq:unfinished_phrase_abstract}).
\begin{equation}
\mathit{finished Term}_1
\quad + \quad
\mathit{finished Term}_2
\quad +
\underbrace{c_5 \mathit{Term}}_{\text{Can only improve quality}
}
\label{eq:unfinished_phrase_abstract}
\end{equation}
This heuristic works only for phrases in which \textit{Expr} is the only non-terminal symbol. For phrases with other non-terminal symbols, we reuse the estimated quality of the last evaluated parent phrase. The estimate is updated when a new term with only terminal symbols has been derived and again only one \textit{Expr} remains. For now, we do not have a reliable estimation method for terms that contain non-terminal symbols and leave this topic for future work.
\subsection{Priority Calculation}
To prevent the arbitrary addition of badly-fitting terms that are eventually scaled down to zero, our priority measure considers both a phrase's length and its expected accuracy. To balance these two factors, the two measures need to be on the same scale. We use the normalized mean squared error (NMSE) as quality measure, which lies in the range $[0,1]$ for properly scaled solutions. This measure corresponds to $1 - R^2$ (coefficient of determination). As length measure we use the number of symbols relative to the maximum sentence length.
Since we limit the search space by the maximum number of variable references of a phrase, we cannot exactly calculate the maximum possible length of a phrase. Therefore, we estimate this maximum length with a greedy procedure: Starting with the grammar's start symbol \textit{Expr}, we iteratively derive a new phrase using the longest production rule. If two production rules have the same length, we take the one with the fewest non-terminal symbols and variable references.
Phrases with \emph{lower} priority values are expanded first during the search. The priority for steering the search from Section~\ref{sec:exploring_search_space} is the phrase's $\operatorname{NMSE}$ value minus its weighted relative length, as shown in Equation~\ref{eq:priority}. The weight $w$ controls the greediness and allows corrections of over- or underestimations of the maximum length. However, in practice this value is not critical.
\begin{equation}
\label{eq:priority}
\operatorname{priority}(\mathit{p}) \ = \ \operatorname{NMSE}(p) \ - \ w \frac{\operatorname{len}(\mathit{p})}{\mathit{length}_{\max}}
\end{equation}
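A direct transcription of Equation~\ref{eq:priority} into Python, with illustrative placeholder values, reads:
\begin{verbatim}
# Priority of a phrase p: pessimistic NMSE estimate minus weighted
# relative length; lower values are expanded first.
def priority(nmse, length, max_length, w=0.1):   # w: greediness weight
    return nmse - w * length / max_length

print(priority(nmse=0.25, length=12, max_length=40))   # -> 0.22
\end{verbatim}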
\section{Experiments}
\label{sec:experiments}
We run our algorithm on several synthetic benchmark datasets to show that the search space defined by our restricted grammar is powerful enough to solve many problems in feasible time. As benchmark datasets, we use noiseless datasets from physical domains \cite{CHEN:2018:ESA} and the Nguyen, Vladislavleva, and Keijzer datasets \cite{White2013} as defined and implemented in the HeuristicLab framework.
The search space was restricted in the experiments to include only sentences with at most 20 variable references. We evaluate at most 200~000 sentences. Coefficients are randomly initialized and then fitted with the iterative gradient-based Levenberg-Marquardt algorithm \cite{levenberg1944method, marquardt1963algorithm} with at most 100 iterations. For each model structure, we repeat the coefficient fitting process ten times with differently initialized values to reduce the chance of finding bad local optima.
As a baseline, we also run symbolic regression with GP on the same benchmark problems. To this end, we execute GP with strict offspring selection (OSGP) \cite{affenzeller:2009} and explicit optimization of coefficients \cite{Kommenda:2013}. The OSGP settings are listed in Table \ref{tab:osgp_settings}. The OSGP experiments were executed with the \textit{HeuristicLab} software framework\footnote{\url{https://dev.heuristiclab.com}} \cite{wagner2005heuristiclab}. Since this comparison focuses only on particular weaknesses and strengths of our proposed algorithm relative to state-of-the-art techniques, we use the same OSGP settings for all experiments and leave out problem-specific hyperparameter tuning.
\begin{table}[t]
\caption{OSGP experiment settings}
\label{tab:osgp_settings}
\begin{tabular}{p{.35\textwidth}p{.63\textwidth}}
\hline\noalign{\smallskip}
Parameter & Setting \\
\noalign{\smallskip}\svhline\noalign{\smallskip}
Population size & 500 \\
Max.~selection pressure & 300 \\
Max.~evaluated solutions & 200 000 \\
Mutation probability & 15\% \\
Selection & Gender-specific selection (random and proportional) \\
Crossover operator & Subtree swapping \\
Mutation operator & Point mutation, tree shaking, changing single symbols, replacing/removing branches \\
Max.~tree size & Number of nodes: 30, depth: 50 \\
Function set & $+, -, \times, \div $, exp, log, sin, cos, square, sqrt, cbrt \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\subsection{Results}
Both the exhaustive search and OSGP were repeated ten times on each dataset. All repetitions of the exhaustive search algorithm led to exactly the same results. This underlines the determinism of the proposed method, even though we rely on stochasticity when optimizing coefficients. The OSGP results also do not differ much. Tables \ref{tab:keijzer}-\ref{tab:other_benchmarks} show the achieved NMSE values for the exhaustive search and the median NMSE values over all OSGP repetitions. NMSE values in Tables \ref{tab:keijzer}-\ref{tab:other_benchmarks} smaller than $10^{-8}$ are considered exact or good-enough approximations and are emphasized in bold. Whenever the exhaustive search found a good solution (NMSE $< 10^{-8}$), it did so within ten minutes. If no such solution was found, the algorithm runs until it reaches the max.~number of evaluated solutions, which can take days for larger datasets.
\begin{table}[t]
\caption{Median NMSE results for Keijzer instances.}
\label{tab:keijzer}
\input{tab_keijzer}
\end{table}
The experimental results show that our algorithm struggles with problems with complex terms---for example the Keijzer data sets 4, 5 and 11 in Table \ref{tab:keijzer}. This is probably because our heuristic works ``term-wise'': the algorithm searches completely broadly, without any guidance, within terms which still contain non-terminal symbols. This issue becomes even more pronounced when long and complex function arguments have to be found. It should also be noted that our algorithm only finds non-factorized representations of such arguments, which are even longer and therefore even harder to find in a broad search.
\begin{table}[b]
\caption{Median NMSE results for Nguyen instances.}
\label{tab:nguyen}
\input{tab_nguyen}
\end{table}
For the Nguyen datasets in Table \ref{tab:nguyen} and the Keijzer datasets 12-15 in Table \ref{tab:keijzer}, we find exact or good approximations in most cases with our exhaustive search. Especially for simpler datasets, the results of our algorithm surpass those of OSGP. This is likely due to the datasets' low number of training instances, which makes it harder for OSGP to find good approximations.
\begin{table}[t]
\caption{Median NMSE results for Vladislavleva instances.}
\label{tab:vladislavleva}
\input{tab_vladislavleva}
\end{table}
Some problems are not contained in the search space, and thus we do not find any good solution for them. This is the case for Keijzer 6, 9 and 10 in Table \ref{tab:keijzer}, for which we do not support the required function symbols in our grammar. Likewise, all Vladislavleva datasets except 6 and 7 in Table \ref{tab:vladislavleva} and the problems ``Fluid Flow'' and ``Pagie-1'' in Table \ref{tab:other_benchmarks} are not in the hypothesis space, as they are too complex.
\begin{table}
\caption{Median NMSE results for other instances.}
\label{tab:other_benchmarks}
\input{tab_other_benchmarks}
\end{table}
Another issue is the optimization of coefficients. Although several problems have a simple structure that is contained in the search space, we do not find the right coefficients for the arguments of non-linear functions, for example in Nguyen 5-7. The issue here is that we do iterate over the sought model structure but determine bad coefficients for it. As we never look at the same model structure again, we can only find an approximation. This is a big difference to symbolic regression with genetic programming, in which the same structure might be found again in later generations.
\section{Discussion}
\label{sec:discussion}
Among the nonlinear system identification techniques, symbolic
regression is characterized by its ability to identify complex
nonlinear relationships in structured numerical data in the form of
interpretable models. The combination of the power of nonlinear
system identification without a priori assumptions about the model
structure with the white-box ability of mathematical formulas
represents the unique selling point of symbolic regression. If tree-based
GP is used as search method,
the ability to interpret the found models is limited due to the
stochasticity of the GP search. Thus, at the end of the modeling
phase, several similarly complex models of approximately the same
quality can be produced, which have completely different structures
and use completely different subsets of features. These
last-mentioned limitations due to ambiguity can be countered using a
deterministic approach in which only semantically unique models may be
used. This approach, however, requires a lot of restrictions regarding
search space complexity in order to specify a subspace in which an
exhaustive search is feasible. On the other hand, exhaustiveness
enables the approach to generate extensive model libraries
already in an offline phase; as soon as a concrete task
is given in the online phase, it only remains to navigate
these libraries in a suitable way.
In a very reduced summary, one could characterize the classical tree-based
symbolic regression using GP and the approach of
deterministically and exhaustively generating models in such a way that the
latter enables a complete search in an incomplete search space
while the classical approach performs an incomplete search in a rather
complete search space.
\subsection{Limitations}
The approach we have described in this contribution also has several
limitations. For the identification of optimal coefficient values we rely on the
Levenberg-Marquardt method for least squares, which is a local search
routine using gradient information. Therefore, we can only hope, but
not guarantee, to find globally optimal coefficient values. Finding bad local optima for coefficients is
less of a concern when using GP variants with a similar local
improvement scheme because there are implicitly many restarts through
the evolutionary operations of recombination and mutation. In the
proposed method we visit each structure only once and therefore risk
discarding a good solution when we are unlucky and fail to find good coefficients.
So far, we have worked only with noiseless problem instances. In first experiments with noisy problem instances, we observed that the algorithm might get stuck trying to improve non-optimal partial solutions due to its greedy nature. Therefore, further investigations are needed before we move on with the development of our algorithm towards noisy real-world problems.
Another limitation is the poor scalability of grammar enumeration
when increasing the number of features or the size of the search
space. When increasing these parameters we cannot expect to explore a
significant part of the complete search space and must increasingly
rely on the power of heuristics to home in on relevant subspaces.
Currently, we have only integrated a single heuristic, which evaluates
terms in partial solutions and prioritizes phrases which include
well-fitting terms. However, the algorithm has no way to prioritize incomplete terms and is inefficient when trying to find complex terms.
\section{Outlook}
Even considering the above-mentioned limitations of the currently
implemented algorithm, we still see significant potential in the
approach of a more systematic and deterministic search for symbolic
regression, and we already have several ideas to improve the algorithm
and overcome some of the limitations.
The integration of improved heuristics for guided search is our
top priority. An advantage of the concept is that it is extremely general
and allows experimenting with many different heuristics. Heuristics
can be as simple as prioritizing shorter expressions or less complex
expressions. More elaborate schemes which guide the search based on
prior knowledge about the data-generating process are easy to
imagine. Heuristics could incorporate syntactical information
(e.g. which variables already occur within the expression) as well as
information from partial evaluation of expressions. We also consider
dynamic heuristics which are adjusted while the algorithm is running
and learning about the problem domain.
Potentially, we could even identify and learn heuristics
which are transferable to other problem instances and would improve
efficiency in a transfer learning setting.
Getting trapped in local optima is less of a concern when we apply
global search algorithms for coefficient values such as evolution
strategies, differential evolution, or particle swarm optimization
(cf. \cite{Korns:2010:GPTP}). Another approach would be to reduce the
ruggedness of the objective function through regularization of the
coefficient optimization step. This could be helpful to reduce the
potential of overfitting and getting stuck in sub-optimal subspaces
of the search space.
Generally, we consider grammar enumeration to be effective only when
we limit the search space to relatively short expressions---which is often the
case in our industrial applications. Therein lies the main potential
compared to the more general approach of genetic programming. In this
context we continue to explore potential for segmentation of the
search space \cite{Kronberger2019} in combination with grammar
enumeration in an offline phase for improving later search
runs. Grammar enumeration with deduplication of structures could also
be helpful to build large offline libraries of sub-expressions that
could be used by GP
\cite{angeline:1993:ema,eurogp:KeijzerRMC05,Krawiec:2012:GECCOcomp,chrisgptp2015behavioral}.
\section{Introduction}
Within a dispersionless band of a crystalline solid, electrons have a diverging effective mass, and localized wavefunctions can remain localized, notably even in the absence of disorder.
The inclusion of Coulomb repulsion then gives rise to strongly interacting many-body systems, which have been predicted to exhibit phenomena ranging from ferromagnetism \cite{Mielke1991, Mielke1991a, Tasaki1992, Tasaki2008} and flat-band many-body localization \cite{Danieli2020a, Daumann2020, Kuno2020, Khare2020, Roy2020, Orito2021}, to unconventional superconductivity \cite{Xie2020, Hu2019, Julku2020, Hazra2019} and zero-magnetic-field fractional quantum Hall states \cite{Neupert2011, Wang2012, Regnault2011, Sun2011}.
Experimental work, too, has targeted flat-band physics, for example through quantum simulation on various platforms such as photonics \cite{Leykam2018, Baboux2016, Mukherjee2018, Ma2020a}, quantum circuits \cite{Kollar2019a, Hung2021}, and ultracold atoms \cite{Jo2012, He2020}, as well as on materials such as magic-angle twisted bilayer graphene and twisted bilayer transition metal dichalcogenides \cite{Bistritzer2011,Marchenko2018,Zhang2020,IqbalBaktiUtama2021,Lisi2021}.
Certain families of lattices are known to host flat bands.
For example, bipartite lattices have flat bands at the center of their spectra, with band degeneracy equal to the difference in number of sites per unit cell in each sublattice; the Lieb lattice is a well-known example \cite{Lieb1989}.
Additionally, mathematical generators of flat-band lattices have been proposed \cite{Flach2014, Xu2020}.
Specific lattices have also been identified to host flat bands, including the kagome \cite{Syozi1951} and pyrochlore \cite{Subramanian1983} lattices, see Figure \ref{fig:linelattices}(a) and (b).
These are both examples of line-graph lattices, though their flat bands are not gapped \cite{Mielke1991, Mielke1991a}.
Line graphs are graphs (a set of vertices connected by edges) that reflect the adjacency between edges of another graph, which we term the root graph, see Figure \ref{fig:linelattices}(b).
More specifically, every edge in the root graph is represented by a vertex in its line graph, and edges in the line graph connect vertices arising from incident edges in the root graph.
The adjacency matrix of a line graph can be shown to have all eigenvalues greater than or equal to $-2$ \cite{Cvetkovic2004}.
Through the addition of discrete translation invariance, (finite-size) line graphs can be extended to line-graph lattices.
Correspondingly, for dimensions $D>1$ the associated tight-binding Hamiltonian with amplitude-$1$ hopping exhibits one or more exactly flat bands at the bottom of the spectrum, with eigenvalue $-2$ \cite{Kollar2019}.
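This spectral bound is easy to check numerically. The following Python sketch, assuming the NetworkX and NumPy libraries, builds the line graph of a finite honeycomb patch---a kagome patch---and confirms that its smallest adjacency eigenvalue is $-2$, attained here because the root graph contains even cycles.
\begin{verbatim}
# Smallest adjacency eigenvalue of a line graph: bounded below by -2.
import networkx as nx
import numpy as np

root = nx.hexagonal_lattice_graph(3, 3)   # finite honeycomb patch
kagome = nx.line_graph(root)              # its line graph: kagome patch
eigs = np.linalg.eigvalsh(nx.to_numpy_array(kagome))
print(eigs.min())                         # -> -2.0 (up to rounding)
\end{verbatim}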
Line-graph lattices have emerged as a means for generating flat bands within the field of quantum simulation with superconducting circuits.
In particular, lattices of microwave cavities have been constructed as a path towards simulating condensed matter systems \cite{Kollar2019a, Carusotto2020}.
In such lattices, each cavity acts as a lattice site for photons.
As a result, a circuit layout with cavities on every edge and capacitive coupling between cavities at each vertex simulates the corresponding line graph.
Stemming from these ideas, the topology of line-graph-lattice flat bands has been examined; line-graph lattices and line-graph lattices with select perturbations have been theoretically shown to have exactly flat or nearly flat fragile topological bands \cite{Chiu2020, Ma2020}.
More generally, the identification and characterization of flat-band lattice models is integral to theoretical and experimental work \cite{Liu2014}.
For example, the kagome lattice is a rich theoretical playground for studying magnetism and resonating valence bond states \cite{Elser1989, Hastings2001}.
More recently, it has inspired materials design to experimentally realize Dirac cones and flat bands \cite{Kang2020a, Kang2020}.
Similarly, the pyrochlore lattice has also been the focus of theoretical simulation and first-principles calculations \cite{Hase2019}.
Much work has been done to identify flat-band materials and classify those with bipartite structure---including split-graph lattices and the Lieb lattice---or with kagome or pyrochlore sublattices \cite{Regnault2021}.
However, prior to this work, it was not known what other crystal structures, if any, are line-graph lattices.
Here we develop and execute a high-throughput screening for crystalline structures that are line-graph lattices.
The materials are from the \href{https://www.topologicalquantumchemistry.fr/flatbands}{Materials Flatband Database}\ \cite{Calugaru2021, Regnault2021}.
This database identifies flat-band materials from the \href{https://www.topologicalquantumchemistry.fr/}{Topological Quantum Chemistry}\ database \cite{Bradlyn2017, Vergniory2021}, which contains most stoichiometric structures from the Inorganic Crystal Structure Database (ICSD).
Of the $55,206$ ICSDs in the Materials Flatband Database, we find $4409$ hosting line-graph lattice crystalline structures, $2970$ of which are not kagome or pyrochlore structures. Our results are publicly available on the \href{https://www.topologicalquantumchemistry.fr/flatbands}{Materials Flatband Database}.
Furthermore, we find at least $388$ unique line-graph lattices, verified to be consistent with line graphs by computing the tight-binding model band spectra.
Because line-graph lattices exhibit flat bands due to geometric frustration rather than fine-tuned parameters, these materials and their underlying lattices are particularly promising for materials engineering and design, first-principles theoretical study, and quantum simulation.
\begin{figure}[t!]
\centering
\includegraphics{fig1.pdf}
\caption{\textbf{(a)} The kagome lattice and \textbf{(b)} a pyrochlore-like lattice, and their band energies through high-symmetry points of their respective Brillouin zones. The kagome lattice is the line graph of the honeycomb lattice, and the pyrochlore lattice is the line graph of the diamond lattice. Under the tight-binding model with $s$-orbital-like hopping of amplitude $1$, these lattices exhibit exactly flat bands at eigenvalue $-2$. Lattice sites are denoted with circles, and hopping between them shown by lines. Unit cells are outlined in gray. For pyrochlore, tetrahedra are colored to aid visibility of the lattice structure. \textbf{(c)} A honeycomb lattice (left) with the kagome overlaid (right), highlighting the line-graph construction connecting the two. \textbf{(d)} The Krausz-$(2, 1)$ partition for the kagome lattice. Here cliques of size $3$, which look like triangles, are highlighted in blue. Because each vertex is part of at most two of these cliques, and each edge is part of exactly one clique, this is indeed a valid Krausz partition. The number and arrangement of cliques per unit cell (outlined in gray) characterizes the lattice, see main text. We also note that the tetrahedra coloring of (b) represents a Krausz partition for the pyrochlore lattice. \textbf{(e)} Line-graph lattices in 2D can be further characterized by considering faces of the lattice, here outlined in blue. See main text for details.}
\label{fig:linelattices}
\end{figure}
\section{Method}
Our algorithm to determine whether a given lattice is a line graph relies upon one key insight: line-graph lattices are composed of fully connected subgraphs, where each bond is part of exactly one subgraph and each site can be a part of at most two subgraphs.
Within graph theory, these fully connected subgraphs are known as cliques.
Such a clique partitioning is called a Krausz-$(2, 1)$ partition \cite{Krausz1943}, which we will refer to as a Krausz partition for simplicity.
If a Krausz partition exists, the resulting graph is a line graph; otherwise, it is not.
The partitions for the kagome and pyrochlore lattices are shown in Figure \ref{fig:linelattices}(d) and (b), respectively.
We note that this is a purely geometric method of identifying line-graph lattices, based solely on the connectivity of sites.
It does not depend on the space symmetry group of the material or occupation of particular high-symmetry points in the lattice (maximal Wyckoff positions).
With this particular consideration in mind, our search proceeds over all Materials Flatband Database ICSD entries as follows, see Figure \ref{fig:appxalg}.
First, we determine the lattice structure, given by the connectivity of atomic sites and its dimension.
Following \cite{Regnault2021}, we assume that the hopping between any two atoms depends on their spatial separation and place a cutoff for long bond lengths.
The search is iterated on various cutoff parameters, detailed in Appendix \ref{appx:algorithm}.
We search over the resulting 3D structures to extract lattice geometries with flat bands over the entire three-dimensional (3D) Brillouin zone.
In addition, we search over two-dimensional (2D) structures on the various Miller planes to identify lattices with flat bands along a 2D plane of the Brillouin zone.
Second, we determine whether each structure is a line-graph lattice.
This begins by checking the effective dimensionality of the structure, to analyze only those which are 2D, quasi-2D, or 3D.
Next, we check whether the number of edges (bonds) is below the upper limit for a line-graph lattice, given its number of vertices (sites).
At this point, note that any algorithm to search for a Krausz partition is likely better suited to finite-sized graphs.
To reduce such a graph without affecting whether it is a line graph, we isolate all of the edges of a single unit cell and their adjacent vertices, such that this graph can be translated by the lattice vectors to construct the entire lattice.
Crucially, while no two edges of this reduced graph are translationally invariant, this is not the case for the vertices.
Upon rearranging these edges and vertices under only lattice vector translations, we create finite-sized graphs which will be line graphs if and only if the original lattice is a line-graph lattice.
In the interest of computational efficiency, at this point we ignore lattices that are too complex; however, we estimate the effects of this to be small, see Appendix \ref{appx:algorithm} for details.
The cliques can then be extracted via the Bron-Kerbosch algorithm \cite{Bron1973}, from which the presence or absence of a Krausz partition can be determined.
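As an illustrative sketch of this test, the following brute-force Python implementation, assuming NetworkX, backtracks over all cliques (not only maximal ones, since the cliques of a Krausz partition need not be maximal) such that every edge is covered exactly once and every vertex at most twice. It is exponential in general and is meant only to demonstrate the Krausz condition on small graphs; our actual pipeline first reduces lattices to finite graphs as described above.
\begin{verbatim}
# Brute-force Krausz-(2,1) partition test for small graphs.
import itertools
import networkx as nx

def cover(edges_left, vertex_use, cliques):
    if not edges_left:
        return True                        # every edge covered once
    u, v = next(iter(edges_left))          # cover this edge next
    for q in cliques:
        if u not in q or v not in q:
            continue
        q_edges = {frozenset(p) for p in itertools.combinations(q, 2)}
        if not q_edges <= edges_left:      # would re-cover an edge
            continue
        if any(vertex_use[x] >= 2 for x in q):
            continue                       # vertex already in two cliques
        for x in q:
            vertex_use[x] += 1
        if cover(edges_left - q_edges, vertex_use, cliques):
            return True
        for x in q:
            vertex_use[x] -= 1
    return False

def has_krausz_partition(g):
    cliques = [c for c in nx.enumerate_all_cliques(g) if len(c) >= 2]
    edges = {frozenset(e) for e in g.edges()}
    return cover(edges, dict.fromkeys(g, 0), cliques)

print(has_krausz_partition(nx.line_graph(nx.complete_graph(4))))  # True
print(has_krausz_partition(nx.star_graph(3)))   # False: K_{1,3}
\end{verbatim}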
We can test the success of our algorithm by calculating the tight-binding spectra of our detected line-graph lattices and confirming the presence of exactly flat bands at $-2$ across their respective 3D or 2D Brillouin zones.
Additional details of our algorithm can be found in Appendix \ref{appx:algorithm}.
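As an example of this verification step, the following sketch, assuming the standard kagome tight-binding Bloch Hamiltonian with hopping amplitude $1$ in a common gauge, confirms numerically that the lowest band sits at exactly $-2$ for arbitrary sampled momenta.
\begin{verbatim}
# Kagome Bloch Hamiltonian at momentum (k1, k2) = (k.a1, k.a2);
# the lowest eigenvalue is the exactly flat band at -2.
import numpy as np

def kagome_bloch(k1, k2):
    c1, c2, c3 = np.cos(k1 / 2), np.cos(k2 / 2), np.cos((k2 - k1) / 2)
    return 2 * np.array([[0, c1, c2], [c1, 0, c3], [c2, c3, 0]])

rng = np.random.default_rng(0)
for k1, k2 in rng.uniform(-np.pi, np.pi, size=(5, 2)):
    print(np.linalg.eigvalsh(kagome_bloch(k1, k2))[0])   # -> -2.0
\end{verbatim}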
Finally, we filter the line-graph lattices themselves.
This characterization allows us to identify the prevalence of the kagome and pyrochlore lattices among our extracted materials.
It also allows us to identify other common line-graph lattices that may be of interest for theoretical study.
The most coarse-grained criterion is the dimensionality of the lattice.
Next, we tabulate the sizes of the clique(s) adjacent to each vertex in the unit cell and count the frequency of each clique-size singlet or pair.
Lattices which only differ by integer multiples of these frequencies are grouped together.
This accounts for lattices whose unit cells are different sizes but otherwise equivalent, for example lattices whose unit cells are comprised of two copies of the unit cell of another lattice.
As examples, the characterizations for the pyrochlore and kagome lattices can be seen from Figure \ref{fig:linelattices}(b) and (d).
The depicted pyrochlore unit cell consists of two size-$4$ cliques, with light and dark coloring.
There are four vertices per unit cell, and each is shared by two size-$4$ cliques.
The kagome characterization is similarly simple: the unit cell consists of two size-$3$ cliques, and each of the three vertices per unit cell is shared by two size-$3$ cliques.
In two dimensions, additional characterizations are possible.
In particular, if edge crossings within a clique are permitted (but not across multiple cliques), then the graph can be embedded on a torus.
The concept of ``faces'' of this graph, neglecting the regions bounded by cliques, is then well-defined: they are regions bounded by edges and vertices that contain no other edges or vertices.
For the kagome lattice, as seen in Figure \ref{fig:linelattices}(e), these faces correspond to the non-shaded (non-clique) regions.
They are all hexagons, bounded by six edges and six vertices, as outlined in blue.
As a result, we can determine the size and number of faces per unit cell, the ordered list of clique sizes adjacent to each face, and the two face sizes adjacent to each vertex.
The kagome lattice contains one hexagon (size-$6$ face) per unit cell with six size-$3$ cliques adjacent to it, and two size-$6$ faces are adjacent to each vertex.
As before, lattices which only differ by integer multiples of these frequencies are grouped together.
These attributes fully define the graph, such that the graphs of each group are isomorphic to one another.
By contrast, these characterizations are not possible in 3D.
Our groups of quasi-2D and 3D line-graph lattices may then in fact consist of multiple similar but non-isomorphic lattices, so our cited number of unique line-graph lattices is a lower bound.
\section{Results}
\begin{table}[tb]
\begin{centering}
\begin{tabular}{lcccc}
\hline
& 2D & quasi-2D & 3D & total\\
\hline
unique materials & \hspace{-1.7pt}\begin{tabular}{c}3761\\(6.81\%)\end{tabular} & \hspace{-1.7pt}\begin{tabular}{c}131\\(0.24\%)\end{tabular} & \hspace{-1.7pt}\begin{tabular}{c} 729\\(1.32\%)\end{tabular} & \hspace{-1.7pt}\begin{tabular}{c} 4409\\(7.99\%)\end{tabular}\\
\hspace{-1.7pt}\begin{tabular}{l} not kagome or \\ pyrochlore-like\end{tabular} & 2655& 129 & $\geq$340 & $\geq$3053\\
\hspace{-1.7pt}\begin{tabular}{l} gapped \\ (tight-binding model)\end{tabular} & 273 & 7 & 120 & 398\\
$S$-matrix compatible & 5 & 42 & 504 & 551\\
\hline
\end{tabular}
\caption{Of the $55,\!206$ ICSD entries of the Materials Flatband Database, here we tabulate the number of unique entries exhibiting lattice structures that are line-graph lattices. Percentages are taken relative to the entire set of Materials Flatband Database entries. We analyze structures either by taking a cut through a Miller plane (2D) or by keeping the entire 3D structure; quasi-2D ICSDs arise from 3D structures without tunneling along one spatial direction. Of these ICSDs, we also note the number with lattices that are not the kagome or pyrochlore lattices; have gapped flat bands; or are conducive to the $S$-matrix method (see main text for additional details). Some ICSDs are represented in multiple columns and some give rise to multiple line-graph-lattice structures that differ in the above characteristics.}\label{table:summary}
\end{centering}
\end{table}
One may not expect to find many crystal structures that are line-graph lattices.
As these lattices are fully comprised of cliques, they contain clusters of atomic sites with all-to-all tunneling of equal amplitude---a feature which seems relatively uncommon.
Indeed, using criteria that identify features of the kagome and pyrochlore lattices different from those examined here, related work has identified just over $11\%$ and $3\%$ of Materials Flatband Database entries hosting kagome and pyrochlore sublattices, respectively \cite{Regnault2021}.
The summary of our results is in Table \ref{table:summary} and our identified line-graph materials and lattices can be found in the \href{https://www.topologicalquantumchemistry.fr/flatbands}{Materials Flatband Database}.
Among the $55,206$ ICSD entries screened, we find $4409$ unique ICSDs with line-graph crystal structures.
Of these, the line graphs are 3D in $729$ ICSDs, quasi-2D in $131$ ICSDs, and lie on a 2D Miller plane in $3761$ ICSDs.
Among 3D lattices, $443$ ICSDs are pyrochlore-like.
Here, this means that the lattice structure is composed entirely of size-$4$ and size-$5$ cliques, where the cliques of size $4$ ($5$) correspond to (center-occupied) tetrahedra, each with all-to-all hopping between the $4$ ($5$) sites.
Their flat bands are also ungapped.
We note that this is an upper bound on the ICSDs which have a pyrochlore lattice structure, as there may exist lattices that fit the above criteria but are not isomorphic to the pyrochlore lattice, for example the one in Figure \ref{fig:exs3D}(b).
There may also be ICSDs that, for different bond cutoffs, create distinct line-graph lattices.
Within these, both pyrochlore-like and non-pyrochlore-like lattices may be represented.
This subtlety also extends to the other characteristics we consider.
Regarding lower dimensions, $2$ ICSDs have the kagome lattice in their quasi-2D layered structure and $1329$ ICSDs have the kagome lattice on at least one of their Miller planes.
Generally speaking, the pyrochlore and kagome lattices, and those of similar clique compositions, are highly represented among the line-graph lattice structures.
These results reflect the fact that these two particular lattices are well-known within the condensed matter community.
\begin{figure}[t!]
\centering
\includegraphics{fig2.pdf}
\caption{Crystal and band structures of select 3D line-graph-lattice ICSDs, \textbf{(a)} PtSO$_4$ (\icsdflatweb{671491}), \textbf{(b)} Si (\icsdflatweb{189392}), and \textbf{(c)} AgSbO$_3$ (\icsdflatweb{25541}). In (a), the clique partition is shown via the colored tetrahedra, plus the size-$3$ (triangle) cliques between two oxygen and one sulfur atom. Because there is an additional atom in the center of each tetrahedron, those cliques are of size $5$. The partition for (b) consists entirely of size-$4$ cliques. The partition for (c) also consists of cliques of size $4$, but they are arranged differently and each consists of three oxygen atoms and one silver atom. The antimony atoms are each cliques of size $1$, as there are no bonds to other atoms. In all subfigures, unit cells are outlined in black and flat bands in the spectra are highlighted in blue, with the flat-band degeneracy noted.}
\label{fig:exs3D}
\end{figure}
For the majority of these lattices, the flat bands at energy $-2$ are not gapped; however, $120$ 3D, $7$ quasi-2D, and $273$ 2D ICSDs do exhibit gapped flat bands, with gaps of up to $2$ in units of the tunneling amplitude.
In Figures \ref{fig:exs3D} and \ref{fig:exs2D} we highlight a few examples with and without gapped bands, showing their crystal structure and tight-binding spectra along high-symmetry lines.
Figures \ref{fig:exs3D}(c) and \ref{fig:exs2D}(a) provide examples of 3D and 2D lattices, respectively, which have the maximal gap size found.
We additionally include the Krausz partitions and root graphs for the 2D lattices in Figure \ref{fig:exs2D}.
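To make the flat-band statement concrete, the following is a minimal numerical sketch (ours, not part of the screening pipeline), assuming the standard three-site Bloch Hamiltonian of the kagome lattice with unit tunneling amplitude in a periodic gauge; it checks that the lowest band sits at energy $-2$ for arbitrary momenta, and it is this band that is ungapped due to a touching at zero momentum.
\begin{verbatim}
import numpy as np

def kagome_bloch(k1, k2):
    """Kagome Bloch Hamiltonian (unit tunneling, periodic gauge).

    k1, k2: crystal momentum along the two lattice vectors."""
    p1, p2 = np.exp(1j*k1), np.exp(1j*k2)
    h = np.array([[0, 1 + p1, 1 + p2],
                  [0, 0,      1 + p2/p1],
                  [0, 0,      0]], dtype=complex)
    return h + h.conj().T

# lowest band flat at -2 (units of the tunneling amplitude) at every k
for k1, k2 in [(0.0, 0.0), (0.7, -1.3), (2.1, 0.4)]:
    bands = np.linalg.eigvalsh(kagome_bloch(k1, k2))
    assert abs(bands[0] + 2.0) < 1e-12
\end{verbatim}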
\begin{figure}[t]
\centering
\includegraphics{fig3.pdf}
\caption{\textbf{(i)} Crystal structures and representative compact localized state, \textbf{(ii)} band structures, and \textbf{(iii)} root graphs of 2D line-graph lattices coming from \textbf{(a)} CsGa$_7$ (\icsdflatweb{102864}) along Miller plane $(1 1 1)$, \textbf{(b)} Ir$_2$Ge$_3$Se$_3$ (\icsdflatweb{636733}) along $(1 1 1)$, and \textbf{(c)} Hf$_2$SN$_2$ (\icsdflatweb{250915}) along $(1 1 0)$. (a) is the line graph of the kagome lattice, (b) is the line graph of the split graph of the honeycomb lattice, and (c) is the line graph of a tiling of hexagons and squares. In (i), unit cells are outlined in grey and the cliques of the clique partition are shaded in light blue. The compact localized state is indicated with real-valued amplitudes on the colored sites, where navy (gold) sites indicate positive (negative) amplitude and all amplitudes are equal in magnitude. In (ii), flat bands are highlighted in blue, with flat-band degeneracy noted.}
\label{fig:exs2D}
\end{figure}
The gappedness and degeneracy of these flat bands can be understood by counting the number of linearly independent flat-band eigenstates, termed ``compact localized states'' \cite{Sutherland1986, Aoki1996}.
Within the subspace of gapped flat bands, the number of linearly independent compact localized states per unit cell equals the flat-band degeneracy.
If the band is instead ungapped, there will be additional eigenstates at the flat-band energy, each indicating a band touching from dispersive bands \cite{Bergman2008, Kollar2019, Chiu2020}.
Figure \ref{fig:exs2D}(i) shows representative flat-band eigenstates for our examples.
We find lattice structures with band degeneracies from $1$ up to $24$.
Generally speaking, lattices with smaller band degeneracies also have fewer sites per unit cell and therefore may be more amenable to theoretical study.
Given that our tight-binding model na\"ively assumes $s$-orbital tunneling and no spin-orbit coupling, we next identify the set of line-graph lattices that can be analyzed using the $S$-matrix method \cite{Calugaru2021, Regnault2021}.
If a lattice can be decomposed into a bipartite lattice of sublattices $A$ and $B$, where $A$ contains a greater number of sites than $B$, then the $S$-matrix method applies.\footnote{While some lattices, like the kagome, are not strictly speaking bipartite, they can be obtained as a limit case of the $S$-matrix method \cite{Regnault2021}.}
Then, if the $A$ sublattice is only weakly perturbed by the $B$ sublattice orbitals, the bipartite lattice can be expected to exhibit flat topological bands, regardless of its orbital composition and presence or absence of spin-orbit coupling \cite{Calugaru2021}.
As shown in the examples of Figure \ref{fig:Smatrix}, we find that a given line-graph lattice has a bipartite decomposition if, in its Krausz partition, at least one vertex per clique does not belong to any other clique.
The $B$ sublattice is given by those vertices belonging to only one clique, while the $A$ sublattice is given by the remaining vertices.
Upon omitting a subset of bonds in the original line-graph lattice, as shown in the lower row of Figure \ref{fig:Smatrix}, the lattice can be made bipartite.
Incidentally, this subset consists of the longest bonds in the lattice, implying a shortened effective bond-length cutoff.
In total, we find $504$ ICSDs in 3D, $42$ in quasi-2D, and $5$ in 2D that are amenable to a bipartite decomposition and therefore may be analyzed using the $S$-matrix method.
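The criterion above is straightforward to operationalize; the following is a minimal sketch (our own illustrative code, with a hypothetical helper name) that attempts the decomposition given a Krausz partition represented as a list of vertex sets.
\begin{verbatim}
from collections import Counter

def bipartite_split(cliques):
    """B = vertices in exactly one clique; A = the rest.

    Succeeds iff every clique has at least one B vertex."""
    count = Counter(v for clique in cliques for v in clique)
    B = {v for v, m in count.items() if m == 1}
    if all(clique & B for clique in cliques):
        return set(count) - B, B
    return None

# two corner-sharing triangles: B = the four outer vertices
print(bipartite_split([{1, 2, 3}, {3, 4, 5}]))
# kagome on a 1x1 torus: both triangles share all three vertices, so B
# is empty and the strict criterion fails (kagome arises only as a limit)
print(bipartite_split([{1, 2, 3}, {1, 2, 3}]))
\end{verbatim}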
\begin{figure}[t!]
\centering
\includegraphics{fig4.pdf}
\caption{Example ICSDs to which the $S$-matrix method applies: \textbf{(a)} ZrSO (\icsdflatweb{31721}) along Miller plane $(1, 1, 1)$, \textbf{(b)} P$_2$O$_3$ (\icsdflatweb{36066}), and \textbf{(c)} B$_2$O$_3$ (\icsdflatweb{79698}). The upper row indicates the lattice structure, which is a line graph, while the lower row shows how the structure can be decomposed into a bipartite lattice after omitting a subset of bonds.}
\label{fig:Smatrix}
\end{figure}
Many line-graph-lattice materials have the same lattice structure, which may indicate particular line-graph lattices of interest.
Table \ref{table:lattice} contains our results determining the number of unique line-graph lattices represented in 2D, quasi-2D, and 3D.
Of these lattices, the kagome and pyrochlore-like lattices appear most frequently; over $30\%$ of our unique line-graph materials exhibit one of these structures.
However, we also find a high degree of representation for the lattices shown in Figure \ref{fig:commonlattices}(a).
They are the line graphs of the Lieb lattice and of the 3D Lieb-lattice analog.
We characterize these two commonly represented line-graph lattices as follows.
The line graph of the Lieb lattice is comprised of one size-$4$ clique and two size-$2$ cliques (per unit cell), highlighted in blue in Figure \ref{fig:commonlattices}(a).
Of its four vertices, all four are adjacent to one size-$4$ clique and one size-$2$ clique.
This lattice also has one octagon (size-$8$) face, outlined in blue, around which there are four size-$4$ and four size-$2$ cliques in alternating fashion, and each vertex is adjacent to two size-$8$ faces.
The line graph of the 3D Lieb-lattice analog has one size-$6$ clique and three size-$2$ cliques per unit cell.
All six of its vertices are adjacent to one size-$6$ and one size-$2$ clique.
Finally, we highlight extracted line-graph lattices which exhibit gapped flat bands, in contrast to the ungapped flat bands of the kagome and pyrochlore lattices.
In Figure \ref{fig:commonlattices}(b), we present the line graph of the Cairo tiling and a non-pyrochlore lattice of center-occupied tetrahedra.
Four size-$3$ cliques and two size-$4$ cliques make up a unit cell of the line graph of the Cairo tiling, where two vertices are adjacent to two size-$3$ cliques and eight are adjacent to one of each size.
There are four pentagon (size-$5$) faces, outlined in blue, around which the size-$3$ and $4$ cliques are interspersed.
Each vertex is adjacent to two size-$5$ faces.
Interestingly, the non-pyrochlore lattice is very similar to pyrochlore in that all of its attributes under our filtering algorithm are identical.
Yet, its center-occupied tetrahedra (size-$5$) cliques are arranged in such a way that the tight-binding band spectrum exhibits gapped flat bands.
This lattice exemplifies how specific lattice geometries may lead to qualitatively different behavior, even among materials which are stoichiometrically similar.
\begin{table}[tb]
\begin{centering}
\begin{tabular}{l@{\hspace{12pt}}c@{\hspace{10pt}}c@{\hspace{10pt}}c@{\hspace{10pt}}c}
\hline
& 2D & quasi-2D & 3D & total\\
\hline
unique lattices & 293& $\geq$60 & $\geq$55 & $\geq$385\\
gapped & 54 & $\geq$7 & $\geq$20 & $\geq$81\\
$S$-matrix compatible & 4 & $\geq$9 & $\geq$7 & $\geq$18\\
\hline
\end{tabular}
\caption{Because many ICSDs exhibit the same line-graph-lattice structures, here we tabulate the number of unique line-graph lattices found and further categorize them into the ones which are gapped or the ones to which the $S$-matrix method applies.
}\label{table:lattice}
\end{centering}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics{fig5.pdf}
\caption{\textbf{(a)} The line graph of the Lieb lattice and of the 3D Lieb-lattice analog. Apart from the kagome and pyrochlore-like lattices, these are two of the most commonly represented lattices in 2D and 3D among the $4409$ line-graph-lattice ICSDs found, seen in $416$ and $71$ unique ICSDs, respectively. \textbf{(b)} The line graph of the Cairo tiling and a non-pyrochlore lattice of center-occupied tetrahedra. Unlike the kagome and pyrochlore lattices, the flat bands of these lattices are gapped. The former is seen in $75$ ICSDs; the latter is seen in $86$ ICSDs. In the top row, unit cells are outlined in grey; in the bottom row, flat bands are highlighted in blue and labeled by their degeneracy. Their characterizations are included in the main text. For the 2D lattices, the cliques and faces referred to in the characterization are highlighted and outlined, respectively.}
\label{fig:commonlattices}
\end{figure}
\section{Discussion}
One of the goals of quantum simulation is to solve quantum mechanical problems which cannot be solved with current classical computation.
Many such open questions exist within condensed matter physics, and to this end quantum simulation has made great progress on a multitude of experimental platforms.
More specifically, quantum simulators have provided a mechanism to benchmark and test numerical techniques and theories, giving rise to new intuition and understanding.
Here, we have taken intuition fostered through the development of superconducting-circuit-based quantum simulation and applied it to a search for real-material candidates.
Of the $55,206$ ICSDs examined, almost $8\%$ are found to host line-graph lattices.
A full description per ICSD entry of these line-graph lattices is provided in the \href{https://www.topologicalquantumchemistry.fr/flatbands}{Materials Flatband Database}. These candidates can be probed through condensed matter experiment and may be a starting point for identifying materials that host strongly interacting electrons in flat bands.
This work demonstrates how insights gained from working with synthetic matter can lead to actionable results in the search for new quantum materials.
Furthermore, from these ICSDs we have found numerous unique line-graph lattices, which give rise to flat bands due to geometric frustration, rather than a fine-tuning of parameters.
Notably, while the kagome and pyrochlore lattices are well-known and prevalent examples, they both exhibit ungapped flat bands in their tight-binding spectra.
We identify additional line-graph lattices and quantify their prevalence.
Of these, we find the line-graph lattices that host gapped flat bands.
Immediate extensions include the development of related algorithms to search for other lattices and families of lattices known to host flat bands.
This includes Tasaki's lattices \cite{Tasaki1998}, lattices that are constructed entirely from cliques but are not line graphs \cite{Tanaka2020}, and decorated or superlattices built from 1D chains \cite{Mizoguchi2019, Lee2020}.
Our search can also be run on a database of monolayer materials.
Of course, the line-graph property of a crystalline structure does not directly indicate that the material itself has a flat band, due to orbital and spin degrees of freedom, varied hopping strengths and next-nearest-neighbor hopping, and disorder.
However, the properties of these structures can be compared to those of ``sister materials'', which have similar composition but are arranged in the root graph structure.
Differences may reveal physics unique to the line-graph flat bands.
Indeed, the band spectra of root-graph lattices and their line graphs differ in that only the line graph exhibits flat bands as its lowest bands, but they can otherwise be quite similar.
The role of lattice geometry may also be disentangled from other degrees of freedom through comparing materials which have the same underlying line-graph lattice, but otherwise differ in their symmetry or other aspects.
More broadly, these newly highlighted lattices are of particular importance given their potential in designing real and synthetic flat-band materials for studies of strongly correlated many-body physics.
\begin{acknowledgments}
We would like to thank B. Andrei Bernevig, Jens Koch, Aavishkar Patel, and Liujun Zou for helpful discussions. The 3D lattice visualizations in Figures \ref{fig:exs3D}, \ref{fig:Smatrix}, and \ref{fig:commonlattices} were created using the Crystallica package, developed by Bianca Eifert and Christian Heiliger of Justus Liebig University Giessen and distributed under the MIT License. We acknowledge support from the Princeton Center for Complex Materials NSF DMR-1420541 and from the ARO MURI W911NF-15-1-0397. N.R. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 101020833).
\end{acknowledgments}
\section{Introduction and review}
In \cite{Arkani-Hamed:2019plo, Arkani-Hamed:2020tuz} {\it cluster configuration spaces} have been introduced, which generalize the moduli space ${\cal M}_{0,n}$ (corresponding to the type-$A_{n-3}$ space) to cases corresponding to other finite-type cluster algebras. The idea behind these spaces is that of {\it stringy canonical forms}~\cite{Arkani-Hamed:2019mrd}, which are Euler-Mellin type integrals providing $\alpha'$ deformations of {\it canonical forms}~\cite{Arkani-Hamed:2017tmz} of any polytopes. The prototypes of such string-like integrals are genus-zero (open- and closed-) string integrals, which are very special stringy canonical forms for the so-called ABHY associahedra describing tree amplitudes in bi-adjoint $\phi^3$ theory
as $\alpha' \to 0$~\cite{Arkani-Hamed:2017mur}. For generalized associahedra of any finite-type cluster algebra, one can define a class of special (real and complex) integrals called {\it cluster string integrals} whose $\alpha'\to 0$ limit gives the canonical forms of their ABHY realizations, which are rigid and natural extensions of the usual string integrals. The corresponding cluster configuration space is defined by compactifying the integration domain with a boundary structure mirroring that of the generalized associahedron, which manifests ``factorizations" of the integrals at finite $\alpha'$ dictated by the Dynkin diagram~\cite{Arkani-Hamed:2019plo}; the positive space is literally a curvy generalized associahedron polytope. For type $A_{n{-}3}$, we recover the Deligne-Knudsen-Mumford compactification of ${\cal M}_{0,n}$~\cite{deligne1969irreducibility, Knudsen_1983} and in general the spaces become generalizations of the latter. As we will review shortly, a cluster configuration space is defined in terms of a set of constrained variables that are related to the $Y$-system \cite{Zamolodchikov:1991et}, and it is natural to wonder if one could find certain unconstrained variables as generalizations of worldsheet variables of ${\cal M}_{0,n}$ (type $A$). In this note, we will show that it is indeed the case, and such worldsheet-like variables become very useful for our study of cluster string integrals, the ABHY polytopes, and the corresponding saddle-point equations. In particular, we will use these variables for a systematic study of topological properties {\it e.g.} the number of saddle points (or Euler characteristic), for such configuration spaces, especially for type $D_n$~\cite{Arkani-Hamed:2020tuz}.
On the other hand, the authors of~\cite{Chicherin:2020umh} have recently proposed such variables for classical types $ABCD$, which naturally appear as {\it symbol letters} for Feynman integrals and scattering amplitudes in QFTs. For this purpose, we are interested in a collection of $N$ {\it multiplicatively independent} polynomials of $d$ variables, which are related to the usual cluster variables via some birational maps. A goal of the current paper is to provide a systematic derivation of such polynomials using birational maps from the corresponding $Y$-system or $F$-polynomials for any finite-type cluster algebra. We find simple birational transformations from $d$ initial $y$-variables to $d$ worldsheet-like variables $z_1, z_2, \ldots, z_d$, and all other $y$-variables can be obtained from the $Y$-system (or equivalently from the $N-d$ $F$-polynomials of the $y$'s); by our transformation they generate polynomials of $z$'s. As we will show, there are natural transformations such that these polynomials of $z$'s take particularly simple forms, including the familiar $z_i-z_j$ factors for the type-$A$ moduli space. In terms of these variables, we can alternatively define the cluster configuration space as the $d$-dimensional space with all $N$ polynomials of $z$'s non-vanishing. We will illustrate how this works with the most important example, the configuration space of type $D$, which together with the well-known example of the type-$A$ moduli space, also gives the spaces of types $B$ and $C$ by {\it folding}. We will also give explicit results for the alphabet of types $E_6, F_4$ and $G_2$, and it is clear that the method applies to the remaining cases of $E_7$ and $E_8$ \cite{WangZhao}. These worldsheet-like variables play a crucial role for two purposes: they naturally appear as letters of the symbol for certain classes of Feynman integrals (especially for type $D_n$ in various ladder integrals to all loops~\cite{He:2021mme}), and they are the natural variables for describing the cluster configuration spaces for which we can define cluster string integrals and saddle-point equations {\it etc.}
In addition to providing a systematic derivation of such $z$-variables, we will also apply these variables to the study of cluster string integrals and important topological properties of cluster configuration spaces, such as the number of saddle points. In~\cite{Arkani-Hamed:2020tuz}, it has been shown that the number of saddle points for types $A_d, B_d$ and $C_d$ is $d!, d^d$ and $(2 d)!!/2$, respectively. This was done by computing the number of points in the configuration space over a finite field $\mathbb{F}_p$ (with a generic prime $p$), which turns out to be a degree-$d$ {\it polynomial} in $p$ for these cases; by plugging in $p=1$ we obtain the Euler characteristic, or the number of saddle points up to a sign. In these cases, coefficients of the polynomial give dimensions of cohomology, which can be generated by independent $d\log$ forms.\footnote{Similar computations using the finite-field method have played an important role in determining the number of saddle points {\it etc.} for the so-called Cachazo-Early-Guevara-Mizera (CEGM) amplitudes~\cite{Cachazo:2019ngv}, which are closely related to stringy integrals of the Grassmannian case~\cite{Arkani-Hamed:2019rds}; see~\cite{Sturmfels:2020mpv} and especially the appendix of~\cite{Agostini:2021rze} for such results in the context of likelihood equations.} Although such computations can also be done with {\it e.g.} $F$-polynomials, at least for types $A$ and $C$, the use of worldsheet variables has been crucial for obtaining general results (for type $B$ it is coincidental that a new set of ``linear" variables can be used to show that the space is the complement of a hyperplane arrangement, similar to type $A$). In~\cite{Arkani-Hamed:2020tuz}, it has been conjectured that the point counts for types $D_4$, $D_5$ and $G_2$ are {\it quasi-polynomials}, which also nicely give Euler characteristics for these spaces, but the lack of such variables for type $D_n$ in general has made computations much more difficult. We will partially solve this problem by performing such a point count for $D_n$ up to $n=10$, which would be unimaginable without the help of these variables. Moreover, we are also interested in linear relations among $d\log$ forms made of such polynomials, especially for type $D_n$, which will be instrumental in ``bootstrapping" all possible integrable symbols with such an alphabet~\cite{He:2021mme}.
Before proceeding, here we will first give a lightning review of the cluster configuration space and associated string integrals, and refer the readers to~\cite{Arkani-Hamed:2020tuz,Arkani-Hamed:2019mrd} for details. It turns out that a gauge-invariant way of describing the compactified moduli space, which allows us to see all the boundaries of the $A_{n-3}$ configuration space explicitly, is by introducing the $N:=n(n{-}3)/2$ $u_{a,b}$ variables (one for each diagonal of the $n$-gon, labelled by $(a,b)$). They satisfy the same number of constraints called {\it $u$-equations}:
\eq{
1-u_{a,b}=\prod_{(c,d)~{\rm incompatible~with}~(a,b)} u_{c,d} \,,
}
where on the RHS we have the product over all $u_{c,d}$ variables where $(c,d)$ is incompatible with $(a,b)$ (the diagonals cross). It is a remarkable fact that these $n(n{-}3)/2$ equations have a solution space of dimension $n{-}3$, which we call the open $A_{n{-}3}$ configuration space for all $u_{a,b} \neq 0$. In the real case, it is referred to as the positive part if we further require all $u_{a,b}>0$. The $u$-equations have revealed that the configuration space is a {\it binary geometry}, which has boundary structures that ``factorize" by removing a node of the (type-$A$) Dynkin diagram: as any $u_{c,d}\to 0$, all incompatible $u_{a,b}\to 1$, thus the space factorizes as $A_L \times A_R$ where $L,R$ are associated with the two sub-polygons of the $n$-gon separated by $(c,d)$. For the positive (or more precisely, non-negative) part, all the variables $0\leq u_{a,b}\leq 1$ and the space has the shape of a (curvy) associahedron. In a totally analogous manner, one introduces {\it $u$-variables} and {\it $u$-equations} for any finite-type cluster algebra: for any Dynkin diagram $\Gamma$ with $d$ nodes (the rank of the cluster algebra), we have $N_\Gamma$ such variables and equations, with $N_\Gamma$ the number of cluster variables.\footnote{For $A_d$, $N=d(d{+}3)/2$, for $B_d$ or $C_d$, $N=d(d{+}1)$, for $D_d$, $N=d^2$, and for $E_6, E_7, E_8, F_4$ and $G_2$, we have $N=42, 70, 128, 28$ and $8$ respectively.} Explicitly, we have
\eq{
1-u_I=\prod_{J=1}^{N_\Gamma} u_J^{J | I}
}
for all $I=1,2,\ldots, N_\Gamma$, where $J | I$ is the so-called {\it compatibility degree} from cluster variable $J$ to $I$~\cite{Fomin:2001rc,Arkani-Hamed:2019vag}. Beyond type $A$, these degrees are no longer restricted to $0,1$, and we have $J|I>0$ if and only if the two cluster variables are incompatible (only for simply-laced cases do we have $I|J=J|I$). Remarkably, the $N$ equations for $N$ variables again have a $d$-dimensional solution space, where $d$ is the rank of the cluster algebra, which is called the {\it cluster configuration space} ${\cal M}_\Gamma$ of type $\Gamma$. As any $u_J \to 0$, all incompatible $u_I \to 1$ (those with $J | I>0$), and for the positive part (all $u$ between $0$ and $1$), it has the shape of the corresponding generalized associahedron with $N_\Gamma$ facets. Note that our $u$-variables are related to $y$-variables of $Y$-systems via $u_I:=y_I/(1+y_I)$ for $I=1, 2, \ldots, N$, and it is a non-trivial fact that the $u$-equations become equivalent to recurrence relations of the corresponding $Y$-system. One way to parameterize ${\cal M}_\Gamma$ is to use $d$ initial $y$-variables $y_1, \ldots, y_d$ (which correspond to an {\it acyclic} quiver), and by solving the $Y$-system the remaining $N-d$ $y$'s are rational functions of them. Equivalently, all the $y$ or $u$-variables can be expressed as ratios of $F$-polynomials and monomials of $y_1, \ldots, y_d$. The positive part, ${\cal M}^+_\Gamma$, where we have all $0<u_I<1$ or all $y_I>0$, is thus parameterized by $y_i>0$ ($i=1, 2, \ldots, d$) for any initial acyclic quiver. As we have mentioned, it is more advantageous to use unconstrained variables, {\it e.g.} for explicitly computing such string integrals (either as an expansion in $\alpha'$ or even at finite $\alpha'$), or studying their saddle-point equations. However, we emphasize that the $u$-variables constrained by $u$-equations provide a {\it gauge-invariant} description of the configuration space.
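For orientation, consider the simplest nontrivial case, type $A_2$ ($n=5$), where each pentagon diagonal crosses exactly two others; the five $u$-equations read
\eq{
1-u_{1,3}=u_{2,4}\,u_{2,5}\,,\qquad 1-u_{2,4}=u_{1,3}\,u_{3,5}\,,
}
and cyclic rotations thereof, with a solution space of dimension $n-3=2$ as expected.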
Associated with such a space, it is natural to write the cluster string integral {\it e.g.} for the positive part ${\cal M}_\Gamma^+$
\eq{
I_\Gamma(\{X\}):=(\alpha')^d \int_{{\cal M}^+_{\Gamma}} \Omega({\cal M}^+_\Gamma)~\prod_{I=1}^{N_\Gamma} u_I^{\alpha' X_I} \,,
}
where the integral is over ${\cal M}^+_\Gamma$ (with all $u_I>0$), and we do not need the explicit expression of the canonical form $\Omega({\cal M}^+_\Gamma)$ but the property that on any boundary $u_I \to 0$, it becomes the canonical form of the latter which takes a ``factorized" form by removing a node of the Dynkin diagram; the integral is ``regulated" by the ``Koba-Nielsen" factor with exponents $X_I>0$. This is exactly the domain of convergence, which also cuts out an ABHY generalized associahedron ${\cal A}_\Gamma$ with each facet reached by $X_I \to 0$. As $\alpha'\to 0$, the leading order of the integral is given by the {\it canonical function} of the ${\cal A}_\Gamma$, and most remarkably even for finite $\alpha'$, the integral has ``perfect" factorization as each $X_I \to 0$~\cite{Arkani-Hamed:2019plo}.
In particular, for type $A_{n{-}3}$ we have the following $n$-point string integral over (the positive part of) the configuration space
\eq{
I_{A_{n{-}3}} (\{X\}):=(\alpha')^{n{-}3} \int_{{\cal M}^+_{0,n}} \Omega({\cal M}^+_{0,n})~\prod_{(a,b)} u_{a,b}^{\alpha' X_{a, b}} \,,
}
where we have alternatively denoted the space of type $A_{n{-}3}$ as the moduli space ${\cal M}_{0,n}$, and $X_{a,b}$ can be identified with the {\it planar variables}, one for each facet of the ABHY associahedron ${\cal A}_{n{-}3}$. The $\alpha'\to 0$ (low-energy) limit of this open-string integral gives the canonical function of ${\cal A}_{n{-}3}$, which is nothing but the (diagonal) bi-adjoint $\phi^3$ tree amplitude. These physical string integrals, as a special case of cluster string integrals, still factorize at $X_{a,b}=0$, even at finite $\alpha'$, which nicely reflects the ``perfect" factorization of ${\cal M}^+_{0,n}$ as one can see from the $u$-equations. Now let us quickly see how ${\cal M}_{0,n}$ and the worldsheet variables naturally appear from the cluster configuration space of type $A_{n{-}3}$. The $u$-variables as defined above do not refer to any worldsheet picture, but the $u$-equations can be nicely solved once we introduce worldsheet variables $z_i$ for $i=1,2,\ldots, n$ with an $SL(2)$ redundancy. The $u_{i,j}$'s are exactly {\it dihedral coordinates} written as cross ratios of the punctures:
\eq{
u_{i,j}=\frac{z_{i-1,j}\, z_{i,j-1}}{z_{i-1,j-1} \, z_{i,j}}\,,\qquad 1\leq j<i-1<n \,,
}
where $z_{i,j}:=z_j -z_i$, and it is easy to see that the $z$-variables provide the solution to the $u$-equations, though we still need to fix the ${\rm SL}(2)$ redundancy {\it e.g.} by choosing $z_1=-1, z_2=0, z_n=\infty$. The open cluster configuration space can be identified with the moduli space ${\cal M}_{0,n}$, and in particular for the positive part, we have the $z_i$'s ordered, $z_1<z_2<\cdots<z_n$ (in our gauge fixing, we have $0<z_3<\cdots<z_{n{-}1}$). One can rewrite $I_{A_{n{-}3}}$ in terms of $z$-variables, once we realize that $\Omega({\cal M}^+_{0,n})$ is given by the Parke-Taylor form:
\eq{
I_{A_{n{-}3}}:=(\alpha')^{n{-}3}~\int_{{\cal M}_{0,n}^+} \frac{d^{n{-}3} z}{z_{1,2}z_{2,3} \cdots z_{n, 1}} \prod_{j<i} |z_{i, j}|^{\alpha' s_{i, j}}\,,
}
where the measure is the top, ($n-3$)-form on ${\cal M}_{0,n}$, and we have rewritten the factor $\prod u_{a,b}^{X_{a,b}}$ as the Koba-Nielsen factor with Mandelstam variables $s_{i,j}:=(k_i+k_j)^2=2 k_i \cdot k_j$ (subject to momentum conservation). It is easy to see that linearly-independent Mandelstam variables can be written as
\[
s_{i,j}=-X_{i,j}-X_{i+1,j+1}+X_{i+1,j}+X_{i,j+1}\,.
\]
More precisely, in the gauge-fixing above the factor reads $\prod_{1\leq j<i\leq n-1} z_{i,j}^{\alpha' s_{i,j}}$,
where we have the same number, $n(n{-}3)/2$, of linear factors (since $z_{1,2}=1$ drops out). We will refer to the collection of the $n(n{-}3)/2$ polynomials of degree $1$ as the {\it alphabet}, and in our gauge-fixing it consists of $\{z_i, 1+z_i\}$ for $i=3,\ldots, n{-}1$ and $\{z_{i,j}\}$ for $3\leq j<i\leq n{-}1$. The space is equivalent to the complement of the hyperplane arrangement defined by the alphabet, {\it i.e.} $\{ z_{i,j}=0, z_i=0, 1+z_i=0\}$, and one can easily derive various topological properties from here, {\it e.g.} the number of saddle points is $(n{-}3)!$~\cite{Arkani-Hamed:2020tuz}. Moreover, the number of connected components of ${\cal M}_{0,n}(\mathbb{R})$ is $(n{-}1)!/2$, and they are given by all possible orderings of $z_3, z_4, \ldots, z_{n{-}1}$ (or sign patterns of the alphabet), which are in $1-1$ correspondence with all the $(n{-}1)!/2$ consistent sign patterns of $u$-variables~\cite{Arkani-Hamed:2020tuz}.
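As a quick consistency check of this worldsheet solution, the following minimal sympy sketch (ours, using the dihedral-coordinate conventions above with indices understood mod $n$) verifies symbolically that the cross ratios satisfy a pentagon $u$-equation:
\begin{verbatim}
import sympy as sp

n = 5
zs = sp.symbols('z1:6')               # punctures z1, ..., z5
zij = lambda i, j: zs[j-1] - zs[i-1]  # z_{i,j} := z_j - z_i

def u(i, j):
    """Dihedral coordinate u_{i,j}, with index 0 wrapped to n."""
    m = lambda k: (k - 1) % n + 1
    return (zij(m(i-1), j) * zij(i, m(j-1))) / (zij(m(i-1), m(j-1)) * zij(i, j))

# pentagon u-equation: the diagonal (3,1) crosses (4,2) and (5,2)
assert sp.simplify(1 - u(3, 1) - u(4, 2) * u(5, 2)) == 0
\end{verbatim}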
In the following, we will systematically introduce worldsheet-like variables for a cluster configuration space of any finite type, and study properties of the space and string integrals {\it etc.}, which play important roles both in mathematics and physics. In section~\ref{sec2}, we will show how to derive from $Y$-systems {\it worldsheet-like variables} $z_1, \ldots, z_d$, which are related to the initial $y_1, \ldots, y_d$ via some birational maps, such that all the $N_\Gamma$ $u$ (or $y$) variables become ratios of polynomials of $z$'s. As we have seen above, naively there are more than $N_\Gamma$ polynomials in $z$'s, but similar to the type-$A$ case, there is a natural ${\rm SL}(2)$ gauge redundancy. After gauge-fixing we find exactly $N_\Gamma$ such polynomials ({\it letters}), and we call their collection the {\it alphabet}. Nicely, we find linear factors similar to those in type $A$, $z_i$, $1+z_i$ and $z_{i,j}$, and polynomials of higher degrees: for types $B, C$ and $D$, there are quadratic polynomials, and for types $E_6$, $F_4$ and $G_2$, there are polynomials of degree at most $4$, $5$ and $4$, respectively. We will use type $D_n$ as the main example: we obtain a nice formula that expresses $u$-variables as ``cross ratios" of the letters (polynomials of $z$'s), and discuss how the boundaries of ${\cal M}^+_{D_n}$ can be obtained by degeneration of these $z$-variables.
In section~\ref{sec3}, we proceed to write cluster string integrals and saddle-point equations using such $z$-variables. The ``Koba-Nielsen" factor is naturally the product of all letters (with exponents), and we focus on the {\it positive configuration space}, whose canonical form can be written nicely in terms of the $z$-variables analogous to the Parke-Taylor factor. We will see how factorization on ``massless" poles of the stringy integral corresponds to boundaries of the positive part as $z$-variables pinch (as shown explicitly for {\it e.g.} $D_4, B_3$ and $G_2$). Using $z$-variables, we will write the $D_n$ scattering equations, which also provide a diffeomorphism from the positive part to the ABHY polytope. Last but not least, we will numerically solve these equations, and count the number of solutions (saddle points) up to $D_7$.
In section~\ref{sec4}, we shall study the topological properties of the cluster configuration spaces using the worldsheet variables. By counting points in the complement of the hypersurfaces defined by the letters over a finite field, we obtain quasi-polynomials that encode information on the Euler characteristic and the dimensions of cohomology, {\it etc.}, up to $D_{10}$. Moreover, we will obtain the number of independent $2$ and $3$ forms for any $D_n$, which are not affected by ``correction terms" in the quasi-polynomials.
\section{Worldsheet-like variables for finite-type cluster algebras}\label{sec2}
The cluster configuration space ${\cal M}_{\Gamma}$ is the configuration
space of $u$-variables, which were initially defined in \cite{Arkani-Hamed:2019plo, Arkani-Hamed:2020tuz} as
the solution of the $u$-equations \cite{Koba:1969rw, Koba:1969kh, Brown:2009qja}:
\eq{
1-u_I=\prod_{J=1}^{N_\Gamma} u_J^{J | I} \,,
}
where $J|I$ is a non-negative integer assigned to each ordered pair of
$u$-variables. The possible values of $J|I$ vary with the
underlying cluster type of the $u$-variables. This definition is equivalent to a recursive definition by the local
$u$-equations:
\eq{
\left(\frac{1-u_{v}}{u_{v}}\right)\left(\frac{1-u_{v'}}{u_{v'}}\right)=\prod_{w\ne v}\left(1-u_{w}\right)^{a_{v,w}} \,.\label{eq:locu}
}
where $v,v'$ label the $u$-variables before and after a mutation step, and
$a_{v,w}$ is the Cartan matrix of the Dynkin diagram. The local $u$-equations make it possible for
us to connect $u$-variables with well-studied objects in the cluster-algebra
literature as (\ref{eq:locu}) is in fact the defining relation for
the Zamolodchikov $Y$-system under the map $y_{v}=u_{v}/(1-u_{v})$,
and the integers $J|I$ are identified with the compatibility degrees
between two corresponding elements $y_{I},y_{J}$ in the $Y$-system \cite{Zamolodchikov:1991et}.
\begin{figure}
\centering
\includegraphics[scale=0.25]{AnWS-crop}
\caption{
The $A_{3}$ worldsheet. Each diagonal in a triangulation corresponds to a node in the quiver and is assigned a $Y$-variable. We consider an initial zigzag triangulation where each node is either a source or a sink. At each step, we mutate the source nodes (or equivalently flip the diagonals) and generate new $Y$-variables according to the $Y$-system equations.}
\label{AnWS}
\end{figure}
Let us begin by reviewing how the $Y$-system equations arise from the $A_{n}$ worldsheet, which is a disc with $n$ points on the boundary. Consider a triangulation of the surface. To each $(i,j)$ diagonal on the worldsheet we can associate a $u_{i,j}$ (or $y_{i,j}$)-variable. Each diagonal is contained in some quadrilateral. We can obtain another triangulation by a \emph{flip}: replacing a diagonal with another diagonal in the quadrilateral, as shown in figure \ref{AnWS}. The local $u$-equations express relations between diagonals that are related by a flip:
\eq{
\frac{u_{i,j}}{1-u_{i,j}} \frac{u_{i+1,j+1}}{1-u_{i+1,j+1}} = \frac{1}{1-u_{i,j+1}} \frac{1}{1-u_{i+1,j}} \,.
\label{localu}
}
The local $u$-equations take particularly simple forms in terms of the $y_{i,j}$-variables
\eq{
y_{i,j} y_{i+1,j+1} = (1+y_{i,j+1})(1+y_{i+1,j}) \,.
}
One can associate a quiver to a triangulation where each diagonal corresponds to a node and there is an arrow if one diagonal follows another in the counterclockwise direction around a vertex \cite{Gekhtman_2005}.
To each zigzag triangulation one can associate a quiver where each node is either a source or a sink. Each step in the evolution is represented by mutations on all the source nodes or on all the sink nodes \cite{Fomin_2003}. This is equivalent to a simultaneous flip on the corresponding diagonals. We make the choice to mutate on the source nodes, as shown in figure \ref{AnWS}. The diagonals in the $(i,i+1,j,j+1)$ quadrilateral will be mapped as
\eq{y_{i,j} \to y_{i+1, j+1}\,.}
Using the label in terms of the nodes of the Dynkin diagram and the number of simultaneous mutations performed on the sources or the sinks, the local $u$-equations are equivalent to the $Y$-system equations:
\eq{
Y_{i, t-1} Y_{i, t} = \prod_{j \to i} (1 + Y_{j,t})^{-a_{i,j}} \prod_{i \to j} (1 + Y_{j, t-1})^{-a_{i,j}} \,.
\label{Y-system}
}
We will use upper-case $Y_{i,t}, U_{i,t}$ to denote the (Dynkin node, time step) labels and the lower-case $y_{i,j}, u_{i,j}$ to denote the polygon labels. A key property of $Y$-systems of Dynkin type is periodicity: the $Y_{i,t}$-variables return to their initial values after a finite number of mutations. The period divides $2(h+2)$, where $h$ is the Coxeter number. In fact, the cross-ratio representation of $Y$-variables was used to prove the periodicity conjecture for type $A_n$ \cite{Volkov_2007}.
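For the rank-2 case, this periodicity can be made completely explicit: in its scalar (Lyness) form, the $A_2$ $Y$-system is the pentagon recurrence $y_{t+1}=(1+y_t)/y_{t-1}$, whose period is $5$ (dividing $2(h+2)=10$). A minimal sketch of ours checks this with exact rationals:
\begin{verbatim}
from fractions import Fraction as F

# Pentagon (Lyness) recurrence: y_{t+1} = (1 + y_t) / y_{t-1}
y = [F(2), F(7)]          # arbitrary positive initial values
for _ in range(5):
    y.append((1 + y[-1]) / y[-2])
assert y[5] == y[0] and y[6] == y[1]   # period 5
\end{verbatim}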
\subsection{The gluing construction of configuration spaces}
\begin{figure}
\centering
\includegraphics[scale=0.25]{Dnpolygon-crop}
\caption{
(1) The $D_n$ Dynkin diagram is a disjoint union of two $A_{n-1}$ sub-diagrams.
(2) The overlapping-polygon representation of the $D_n$ worldsheet. Each $A_{n-1}$ worldsheet corresponds to an $(n+2)$-gon. Each diagonal in the zigzag triangulation corresponds to a node in the Dynkin diagram.}
\label{Dnpolygon}
\end{figure}
The $u$-variable parameterization of the cluster configuration space has the advantage that it provides an invariant description of the moduli space and a binary approach to the boundaries. $Y$-systems further allow us to generalize to cluster configuration spaces of Dynkin type. However, it is still desirable to have a description of the moduli space as a configuration space of points. In the following, we discuss a construction based on gluing a pair of $A$-type worldsheets, whose $u$-coordinates are cross ratios of their respective $z$-variables. We glue the two worldsheets as indicated by the Dynkin diagrams. The dihedral coordinates of the diagonals define the initial set of $Y$-variables. By evolving the system according to the $Y$-system equations, we generate all the other variables. We shall see that in addition to the $z_{i}-z_{j}$ factors, new nonlinear factors appear in the cross ratios. The $u$-variables will be expressed as generalized cross ratios of the worldsheet variables.
\paragraph{The $D_4$ example}
Let us work out the $D_4$ example explicitly. We take the initial $Y$- (or $u$-) variables according to the triangulation of figure \ref{Dnpolygon}:
\EQ{
\{ U_{1,0}, U_{2,0}, U_{3,0}, U_{4,0} \}
&= \left\{\frac{z_{1,4} z_{2,3}}{z_{1,3} z_{2,4}},\frac{z_{1,5} z_{2,4}}{z_{1,4} z_{2,5}}, \frac{z_{4,1} z_{5,6}}{z_{4,6} z_{5,1}},\frac{z_{4,1} z_{5, 7}}{z_{4,7} z_{5,1}}\right\} \\
&:= \{u_{4,2}, u_{5,2}, \widetilde u_{ 5}, u_{5}\} \label{initial}
\,.}
We evolve the system according to the $Y$-system equations (\ref{Y-system}).
\EQ{
\{ U_{1,1}, U_{2,1}, U_{3,1}, U_{4,1} \}
&=\left\{\frac{z_{2,5} z_{3,4}}{z_{2,4} z_{3,5}}, -\frac{z_{1,5} z_{3,5} z_{2,6} z_{2,7}}{w_{2,3} z_{2,5}}, \frac{z_{1,6} z_{2,5}}{z_{1,5} z_{2,6}}, \frac{z_{1,7}z_{2,5}}{z_{1,5}z_{2,7}} \right\} \\
&:=\{u_{5,3}, u_{2,3}, u_{2}, \widetilde u_{ 2}\} \,.
}
Here a nonlinear factor appears in the expression for $u_{2,3}$, which is a cubic polynomial in the $z_i$:
\eq{w_{i,j} = z_{1,n+3} z_{i,j} z_{n+1,n+2} - z_{1,n+1} z_{i,n+3} z_{j,n+2}\,.}
Noting that $w_{i,j}$ factorizes into a product of $z$-differences when $i=1$ or $i=j$, one can conveniently rewrite $u_{2,3}$ as a cross ratio of $w_{i,j}$'s
\eq{u_{2,3} = \frac{w_{1,3} w_{2,2}}{w_{1,2} w_{2,3}} \,.}
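This factorization is easy to verify symbolically; in the following minimal sympy sketch (ours), the special points $z_{n+1}, z_{n+2}, z_{n+3}$ are renamed \texttt{zA, zB, zC} for readability:
\begin{verbatim}
import sympy as sp

z1, zj, zA, zB, zC = sp.symbols('z1 zj zA zB zC')
d = lambda u, v: v - u   # z_{i,j} := z_j - z_i

def w(x, y):
    """w_{i,j} with punctures x, y and special points zA, zB, zC."""
    return d(z1, zC)*d(x, y)*d(zA, zB) - d(z1, zA)*d(x, zC)*d(y, zB)

# i = 1 and i = j both collapse w_{i,j} to products of z-differences:
assert sp.expand(w(z1, zj) - (zC - z1)*(zj - zA)*(zB - z1)) == 0
assert sp.expand(w(zj, zj) + (zA - z1)*(zC - zj)*(zB - zj)) == 0
\end{verbatim}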
In this way we generate
\EQ{
\{ U_{1,2}, U_{2,2}, U_{3,2}, U_{4,2} \}
&=\left\{ \frac{w_{1,4} w_{2,3}}{w_{1,3} w_{2,4}},\frac{w_{2,4} w_{3,3}}{w_{2,3} w_{3,4}},\frac{w_{2,3} z_{3,6}}{w_{3,3} z_{2,6}},\frac{w_{2,3} z_{3,7}}{w_{3,3} z_{2,7}}\right\} \\
&:=\{u_{2,4}, u_{3,4}, \widetilde u_{3}, u_{3}\} \,, \\
\{ U_{1,3}, U_{2,3}, U_{3,3}, U_{4,3} \}
&=
\left\{\frac{w_{2,5} w_{3,4}}{w_{2,4} w_{3,5}},\frac{w_{3,5} w_{4,4}}{w_{3,4} w_{4,5}},\frac{w_{3,4} z_{4,7}}{w_{4,4} z_{3,7}},\frac{w_{3,4} z_{4,6}}{w_{4,4} z_{3,6}}
\right\} \\
&:=\{u_{3,5}, u_{4,5}, u_{4},\widetilde u_{ 4}\}\,,\\
\{ U_{1,4}, U_{2,4}, U_{3,4}, U_{4,4} \}
&= \left\{\frac{z_{1,4} z_{2,3}}{z_{1,3} z_{2,4}},\frac{z_{1,5} z_{2,4}}{z_{1,4} z_{2,5}}, \frac{z_{4,1} z_{5,6}}{z_{4,6} z_{5,1}},\frac{z_{4,1} z_{5, 7}}{z_{4,7} z_{5,1}}\right\} \\
&:= \{u_{4,2}, u_{5,2}, \widetilde u_{ 5}, u_{5}\} \,.
}
In the final step, we recover the initial set of $U_{i,0}$-variables as guaranteed by periodicity.
In general the diagonals are mapped as:
\eq{
u_{i,j} \to u_{i+1,j+1}, \qquad u_i \to \widetilde u_{{i+1}}, \qquad \widetilde u_{ i} \to u_{i+1}\,,
}
with indices taken modulo $n$ ($n=4$ here), in accordance with the $Y$-system equations. One important observation is that the indices of $u_{i,j}, u_{i}, \widetilde u_{ i}$ range over $2 \le i,j \le n+1$, which makes manifest the periodicity of the $D_n$-type $Y$-system. For example, the variable that comes after $u_{5,2}$ would be $u_{2,3}$.
Let us summarize what we have found for general $n$. When $1 < j < i \le n+1$, the $u_{i,j}$-variables are the standard cross ratios of $z$'s. The other $u$-variables can be generated according to the $Y$-system evolution. They are written as cross ratios involving $z$'s and $w$'s, which can be compactly expressed as
\EQ{
u_{i,\, j} &=\frac{z_{i,j-1}\, z_{i-1,j}}{z_{i,j}\, z_{i-1,j-1} }\,, \quad u_{j,\, i} = \frac{w_{i,j-1}\, w_{i-1,j}}{w_{i,j}\, w_{i-1,j-1}} \,, \\
u_{i} &=\frac{z_{i,+} \, w_{i-1,i} }{z_{i-1,+} \, w_{i,i}}\,, \qquad \widetilde u_{{i}} =\frac{z_{i,-}\,w_{i-1,i}}{z_{i-1,-}\,w_{i,i}}\,,
}
where $1 < j < i \le n+1$. When $i$ or $j$ equals $1$, the variables are defined cyclically, e.g., $u_{1,j} := u_{n+1, j}$ and $u_{1} := u_{n+1}$.
\subsection{The positive $D_n$ moduli space and its boundaries}
The positive region corresponds to a further restriction to the real line and a choice of ordering. We shall denote $z_{n+2} = z_-$ and $z_{n+3} = z_+$ for the two special points. Because the set of worldsheet variables does not include the $z_- - z_+$ factor, $z_{\pm}$ are not ordered with respect to each other. A natural choice of ordering would be $z_{-}, z_{+} \le z_{1} \le \cdots \le z_{n+1}$.
The $y$-variables are positive and the $u$-variables lie in the unit interval. We may also use other orderings, as long as the $u$-variables are positive and lie in the unit interval.
Like in the $A_n$ case, there is an $SL(2, \mathbb{R})$ gauge symmetry that fixes the positions of three points. We take them to be $z_1 = -1, z_2 = 0, z_{n+1} = \infty$. Note that unlike the $A_n$ case, not all points are on an equal footing. The two special points $z_{\pm}$ are not allowed to be taken to $\infty$.
\paragraph{The boundaries}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{Dnboundary.pdf}
\end{center}
\caption{The boundary of the $D_n$ moduli space as seen by the worldsheet variables. Each boundary of the positive region corresponds to the pinching of a pair of points and all the points lying in between.}
\label{Dnboundary}
\end{figure}
We find that the following patterns of pinching take us to the boundaries of the positive moduli space ($1 < j < i \le n+1$), as shown in figure \ref{Dnboundary}.
\begin{itemize}
\item The $u_{i,j}$ boundary: $z_{i-1}$ pinches with $z_{j}$.
\item The $u_{j,i}$ boundary: $z_{i}$ pinches with $z_{n+1}$, $z_{j-1}$ pinches with $z_1$.
\item The $u_{i}$ boundary: $z_{i-1}$ pinches with $z_{-}$ (but not with $z_{+}$).
\item The $\widetilde u_{ i}$ boundary: $z_{i-1}$ pinches with $z_{+}$ (but not with $z_{-}$).
\end{itemize}
By ``pinching", we mean that the two points, along with all the ordered points in between, are identified.
In the $A_{n}$ configuration space, the pinching of two points always corresponds to a boundary of the moduli space. In the $D_{n}$ configuration space, the pinching of $z_-$ with $z_+$ is not a boundary. Instead, we find the $B_{n-1}$ configuration space as a subspace of $D_n$.
\subsection{The configuration space for other types}
The construction can be extended to the $E_n$ cases by gluing a pair of $A_{n-1}$ and $A_{n-2}$ worldsheets. By defining the initial variables using a triangulation and evolving the $Y$-system equations, we can again generate solutions in terms of the worldsheet coordinates. Further details will be presented in \cite{WangZhao}; here we only mention the results. For the nonlinear factors of $E_6$, we find sextic polynomials of the form
\EQ{
w_{i,j,k,l} &= z_{1,9} z_{1,6} z_{i,8} z_{j,7} z_{k,9} z_{l,6}-z_{1,9} z_{1,6} z_{6,9} z_{i,8} z_{j,k} z_{l,7} +z_{1,i} z_{1,6} z_{6,9} z_{j,9} z_{k,8} z_{l,7}\\
&+z_{1,8} z_{1,9} z_{6,7} z_{6,9} z_{i,l} z_{j,k}\,.
\label{E6w}
}
The $u$-variables can be written as generalized cross ratios of $z_{i,j}$ and $w_{i,j,k,l}$.
\paragraph{Non-simply-laced types from folding}
\
By generalizing the cross-ratio parameterization of the $A_{n}$-type $Y$-system, we now have worldsheet parameterizations for all type-ADE $Y$-systems, while such parameterizations for all the other finite types can be achieved by folding. The folding for $z$-parameters is derived from the standard folding of $Y$-systems combined with the birational maps in $ADE$ types \cite{WangZhao}.
In summary, by using the gluing method, we have constructed the worldsheets for finite-type Dynkin systems. Nonlinear $w$-factors appear as we evolve the $Y$-system equations. The cluster configuration spaces, which have previously been described abstractly in terms of $u$-variables, can be parameterized as generalized cross ratios. We have given the $u(z)$ map explicitly for $D_n$. It is clear that such constructions can be extended to $E_7$ and $E_8$ and non-simply-laced cases follow from folding the simply-laced ones. We have found the underlying configuration spaces of points that give rise to a worldsheet description.
\subsection{Summary of alphabets}
Once we have the worldsheet description of $u$-variables in terms of polynomial factors of $z$-variables, we find the alphabets by gauge-fixing the $SL(2)$ redundancy. Let us summarize the results.
We obtain the $A_{n}$ alphabet by gauge-fixing $z_1 \to -1, z_2 \to 0, z_{n+3} \to \infty$, and the $D_{n}$ alphabet by gauge-fixing $z_1 \to -1, z_2 \to 0, z_{n+1} \to \infty$:
\EQ{
\Phi_{A_n}(z_3, z_4, \ldots, z_{n+2}) &= \bigcup_{3 \le i \le n+2} \{z_i,\, 1+z_i \} \cup \bigcup_{3 \le i < j \le n+2} \{z_{i,j}\} \,, \\
\Phi_{D_n}(z_3, z_4, \ldots, z_{n}, z_{-}, z_{+}) &= \Phi_{A_{n-1}}(z_3, z_4, \ldots, z_{n}, z_{-}) \cup \{z_+, 1+z_+\} \cup \bigcup_{3 \le i \le n} \{z_{i,+}, z_i + z_{-}z_{+} \} \\
&\cup \bigcup_{3 \le i < j \le n} \{-z_i+z_j+z_i z_j-z_i z_{-}-z_i z_{+}+z_{-} z_{+} \} \,.
}
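As a consistency check on the counting (our own arithmetic, using $|\Phi_{A_m}|=m(m+3)/2$ for an alphabet in $m$ free variables), the $D_n$ alphabet indeed contains
\eq{
|\Phi_{D_n}| = \frac{(n-1)(n+2)}{2} + 2 + 2(n-2) + \binom{n-2}{2} = n^2 = N_{D_n}
}
letters, matching the number of $u$-variables quoted in the introduction.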
The alphabets for $D_n$ and $C_{n-1}$ have appeared explicitly in \cite{Chicherin:2020umh}. The alphabet for $B_{n-1}$ follows from $D_n$ by a simple folding identifying $z_- = z_+$. They are polynomials that are at most quadratic in the $z$-variables:
\EQ{
\Phi_{C_{n-1}}(z_3, z_4, \ldots, z_{n}, z_{-}) &=\Phi_{A_{n-1}}(z_3, z_4, \ldots, z_{n}, z_{-})\cup \bigcup_{3\le i \le j \le n} \{ z_i z_j + z_- \} \,,\\
\Phi_{B_{n-1}}(z_3, z_4, \ldots, z_{n}, z_{-}) &= \Phi_{A_{n-1}}(z_3, z_4, \ldots, z_{n}, z_{-}) \cup \bigcup_{3 \le i \le n} \{z_i + z_{-}^2 \} \\
&\cup \bigcup_{3 \le i < j \le n} \{-z_i+z_j + z_i z_j-2z_i z_{-}+z_{-}^2 \}
\,.}
The alphabets for $E_6, F_4$ and $G_2$ are new and will be presented below. The $E_6$ alphabet is obtained from the sextic polynomial (\ref{E6w}) by gauge-fixing $z_1 \to -1, z_2 \to 0, z_{6} \to \infty$. It consists of 42 letters that are polynomials with degree at most 4:
\EQ{
&\Phi_{E_6}(z_3, z_4, z_5, z_7, z_8, z_9) = \Phi_{A_5} (z_3, z_4, z_5, z_7, z_8) \cup \{z_9, 1+z_9\} \cup \bigcup_{3\le i \le 5} \{z_{i,9}, \, z_{i}+z_7 z_9, \, z_{i}+z_8 z_9 \} \\
&\cup \bigcup_{3\le i < j \le 5} \{-z_i+z_j+z_i z_j- z_i z_7- z_i z_9 + z_7 z_9, \, -z_i+z_j+z_i z_j- z_i z_8- z_i z_9 + z_8 z_9, \\
&z_i z_j-z_i z_7+z_i z_8-z_j z_8 + z_i z_8 z_9-z_7 z_8 z_9 \} \\
&\cup\{-z_3 z_4+z_3 z_7+z_4 z_5-z_4 z_7+z_4 z_8-z_5 z_8+z_3 z_4 z_5-z_3 z_4 z_7-z_3 z_4 z_9-z_3 z_5 z_8+z_3 z_7 z_8\\
&+z_3 z_7 z_9+z_4 z_8 z_9-z_7 z_8 z_9, \, \\
&-z_3 z_5+z_4 z_5+z_3 z_4 z_5-z_3 z_4 z_7+z_3 z_4 z_8-z_3 z_5 z_8-z_3 z_5 z_9-z_3 z_8 z_9+z_4 z_7 z_9+z_5 z_8 z_9 \\
&+z_3 z_4 z_8 z_9-z_3 z_7 z_8 z_9-z_3 z_8 z_9^2+z_7 z_8 z_9^2
\}
\,.
}
The $F_4$ alphabet is obtained by folding the $E_6$ alphabet as $z_7\to -{z_5}/{z_3},z_8\to -{z_5}/{z_4}$, and consists of 28 polynomial letters with degree at most 5:
\EQ{
&\Phi_{F_4} (z_3, z_4, z_5, z_9)= \Phi_{A_4}(z_3, z_4, z_5, z_9) \cup \bigcup_{3\le i \le j \le 4} \{z_i z_j + z_5, z_i z_j - z_5 z_9\} \\
&
\cup \{-z_3^2+z_3 z_4+z_3 z_5-z_5 z_9 +z_3^2 z_4-z_3^2 z_9, \, -z_3^2+2 z_3 z_5-z_5 z_9 + z_3^2 z_5-z_3^2 z_9 \,,
\\
&
-z_3 z_4+z_3 z_5+z_4^2-z_5 z_9 + z_3 z_4^2-z_3 z_4 z_9, \, -z_3 z_4+z_3 z_5+z_4 z_5-z_5 z_9 + z_3 z_4 z_5-z_3 z_4 z_9, \,
\\
&-z_4^2+2 z_4 z_5 -z_5 z_9 + z_4^2 z_5-z_4^2 z_9, \, -z_3^2 z_5+2 z_3 z_4 z_5-z_5^2 z_9 + z_3^2 z_4^2-z_3^2 z_5 z_9, \\
&-2 z_3^2 z_4 +2 z_3 z_4^2 + z_3^2 z_5 + z_3^2 z_9 -2 z_3 z_5 z_9 -z_4^2 z_9 +z_5 z_9^2+ z_3^2 z_4^2-2 z_3^2 z_4 z_9+z_3^2 z_9^2 \,, \\
&-2 z_3 z_4 z_5+2 z_3 z_5^2+z_4^2 z_5-z_5^2 z_9 -z_3^2 z_4^2+2 z_3 z_4^2 z_5+z_3^2 z_5^2-2 z_3 z_4 z_5 z_9 + z_3^2 z_4^2 z_5-z_3^2 z_4^2 z_9
\}
\,.}
The $G_2$ alphabet can be obtained by folding the $B_3$ alphabet as $z_- \to -z_4/z_3$ and consists of 8 polynomial letters with degree at most 4:
\eq{\Phi_{G_2} = \left\{z_3,\, z_4,\, 1+z_3,\, 1+z_4,\, z_3-z_4,\, z_3^2+z_4,\, z_3^3+z_4^2, \, -z_3^3+z_4^2+3 z_3^2 z_4+ z_3^3 z_4\right\}
\label{G2}
\,.}
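As a preview of the finite-field point counts of section~\ref{sec4}, the $G_2$ alphabet (\ref{G2}) is small enough for a brute-force count; the sketch below (ours, purely illustrative and unoptimized) counts the pairs $(z_3,z_4)\in\mathbb{F}_p^2$ on which all eight letters are nonzero:
\begin{verbatim}
def g2_count(p):
    """Number of points of the G2 configuration space over F_p."""
    letters = [
        lambda a, b: a,
        lambda a, b: b,
        lambda a, b: 1 + a,
        lambda a, b: 1 + b,
        lambda a, b: a - b,
        lambda a, b: a**2 + b,
        lambda a, b: a**3 + b**2,
        lambda a, b: -a**3 + b**2 + 3*a**2*b + a**3*b,
    ]
    return sum(all(f(a, b) % p for f in letters)
               for a in range(p) for b in range(p))

for p in [5, 7, 11, 13]:
    print(p, g2_count(p))
\end{verbatim}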
\section{Cluster string integrals and scattering equations in $z$-variables}\label{sec3}
Having discussed the configuration spaces for finite-type cluster algebras, now we move to the study of cluster string integrals and the associated saddle-point equations, in $z$-variables. We will focus on the type-$D$ case but the discussion directly extends to any other finite type.
\subsection{Cluster string integrals and their factorization}
From the general form of cluster (open-) string integrals, all we need to do is to express the ``Koba-Nielsen" factor, the canonical forms and the integration domain in worldsheet-like variables; given the map from $u$-variables as discussed above, all of these can be obtained by translating those in $u$-variables. The Koba-Nielsen factor $\prod_{I} u_{I}^{X_{I}}$ takes the form
\eq{
\prod_{I} u_{I}^{X_{I}} = \prod_{1\le j < i \le n+1} z_{i,j}^{-{c_{i, j}}} \prod_{1< j < i < n+1} w_{j,i}^{-c_{j,i}} \prod _{i=1}^{n+1} z_{i,+}\,^{-c_i} z_{i,-}\,^{-\widetilde c_{{i}}}
\label{KNDn}
\,.}
Note that all the $c_{i,j}, c_i, \widetilde c_{ i}$ are linear combinations of $X_I$:
\EQ{
c_{i, j} &= X_{i, j} + X_{i+1,j+1} - X_{i, j+1} - X_{i+1, j} \quad \text{for non-adjacent }1\le i,j \le n+1 \,, \\
c_{n+1,1} &= \sum_{i=1}^n \big(X_i + \widetilde X_{ i} - X_{i,i+1} \big) \,, \\
c_{i, i+1} &= X_{i, i+1} + X_{i+1,i+2} - X_{i, i+2} - X_{i+1} - \widetilde X_{{i+1}} \quad \text{for } 1\le i \le n \,, \\
c_{i} &= X_i + \widetilde X_{{i+1}} - X_{i,i+1}, \quad \widetilde c_{ i} = \widetilde X_{ i} + X_{i+1} - X_{i,i+1} \quad \text{for } 1 < i < n+1\,,\\
c_{1} &= -X_{2}, \quad \widetilde c_{ 1} = - \widetilde X_{ 2}, \quad c_{n+1} = -\widetilde X_{ 1}, \quad \widetilde c_{{n+1}} = - X_1 \,.
\label{ABHYDn}
}
Here $X_i$ are defined cyclically, e.g. $X_{n+1} := X_{1}$, and $X_{i,i} = X_{i,i-1} = 0$.
As mentioned, the most basic property of such stringy integrals is that the convergence domain is given by a generalized associahedron. In the $A_{n-3}$ case, the variables $X_{i,j} =(p_i + p_{i+1}+ \cdots + p_{j-1})^2$ are called planar variables, and $c_{a,b} = -(p_a + p_b)^2$ are the Mandelstam variables (with a minus sign). By requiring all $X_{i,j} \geq 0$ and asking all but $n{-}3$ of the $c_{a,b}$ to be positive constants, these conditions carve out the ABHY ``kinematic associahedron" of type $A_{n-3}$ which gives the convergence domain, and its boundaries exactly encode the factorization properties of the stringy integral \cite{Arkani-Hamed:2017mur}.\footnote{Note that there are the same number of $c_{a,b}$ and $X_{i,j}$; thus one must pick $n{-}3$ of the $c_{a,b}$ not to be set to constants but rather to serve as basis variables for the associahedron; see~\cite{Arkani-Hamed:2017mur, Arkani-Hamed:2019vag} for precisely which $n{-}3$ $c_{a,b}$ cannot be chosen as constants, which are dictated by the ``mesh" or spacetime picture.} The $D_n$ case is completely analogous: by setting all but $n$ of the $c_I$ to positive constants and requiring all $X_I \geq 0$, these conditions carve out a generalized associahedron of type $D_n$, which gives the convergence domain, and its boundaries are in one-to-one correspondence with the boundaries of the configuration space that we studied in the previous section. In a close connection with the $Y$-system, the equations can also be regarded as an evolution in a discrete spacetime with extra boundary conditions \cite{Arkani-Hamed:2019vag}. We will come back to such ABHY generalized associahedra when we discuss the pushforward map arising from the saddle-point equations.
Note that when we study stringy integrals, it is always convenient to fix the gauge: {\it e.g.} in the Koba-Nielsen factor, the exponents of $z_{n+1}$ sum to zero, so $z_{n+1}$ decouples. Next, we need to find the positive part ${\cal M}^+_{D_n}$ and its canonical forms, when all the $u$-variables are positive (thus between $0$ and $1$). Since all $Y=u/(1-u)$ are manifestly positive given positive initial $Y$'s {\it e.g.} in \eqref{initial}, it is easy to see that this is equivalent to the following ordering: $z_1< z_2< \cdots <z_{n{+}1}$ and $z_\pm <z_1$ (note that there is no ordering between $z_+$ and $z_-$). The canonical form can be obtained as the wedge product of $d\log y$ for any initial cluster with an {\it acyclic} quiver, {\it e.g.} those in \eqref{initial}. It is a remarkable fact that the result does not depend on which initial cluster we start with, and for convenience we work with gauge-fixed variables, {\it e.g.} with $z_{n{+}1} \to \infty$, and we obtain a compact ``Parke-Taylor-like" form in the remaining $z$-variables:
\eq{
\omega_{D_n}:=\frac{\prod_{i=\pm, 3}^n d z_i}{z_{1,+} z_{1,-} z_{2,3} z_{3,4} \cdots z_{n{-}1, n}} \,
}
where in our gauge-fixing $z_{1,\pm}=1+ z_\pm$ and $z_{2,3}=z_3$ (also $z_{1,2}=1$). Thus we have written all the ingredients of the $D_n$ string integral in $z$-variables.
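Before turning to examples, we note that the saddle-point (scattering) equations to be discussed below can be solved numerically directly in such variables. As a toy illustration for the simplest case, type $A_2$, here is a sketch of ours with generic positive exponents of our own choosing, recovering $(n-3)!=2$ saddles:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# W = s13 log(z3+1) + s23 log z3 + s14 log(z4+1) + s24 log z4
#     + s34 log(z3 - z4), in the gauge z1 = -1, z2 = 0, z5 -> infinity
s13, s14, s23, s24, s34 = 1.3, 0.7, 0.6, 0.9, 1.4

def grad(zz):
    z3, z4 = zz
    return [s13/(z3 + 1) + s23/z3 + s34/(z3 - z4),
            s14/(z4 + 1) + s24/z4 - s34/(z3 - z4)]

saddles = set()
rng = np.random.default_rng(1)
for _ in range(200):                   # random multistart + deduplication
    z, _, ok, _ = fsolve(grad, rng.uniform(-3, 3, 2), full_output=True)
    if ok == 1 and max(abs(g) for g in grad(z)) < 1e-9:
        saddles.add(tuple(np.round(z, 6)))
print(len(saddles))                    # expect (n-3)! = 2 distinct saddles
\end{verbatim}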
Returning to the $D_4$ example, one may readily write down the stringy integral in terms of the worldsheet variables.
\EQ{
&I_{D_4} =
\int_{0 < z_3 < z_4 \atop z_\pm < -1} \frac{dz_3\, dz_4\, dz_{-}\, dz_{+}}{(1+z_-)(1+z_+)z_3 z_{3,4}} \, z_3^{X_{4,2}} z_4^{X_{1,2}-X_{1,3}-X_{4,2}} z_-^{-\widetilde X_{{2}}-X_3+X_{2,3}} z_{+}^{-X_2-\widetilde X_{{3}}+X_{2,3}}\\
&
\left(1+z_3\right){}^{-X_{3,1}+X_{4,1}-X_{4,2}} \left(1+z_4\right){}^{X_1+\widetilde X_{{1}}-X_{1,2}-X_{4,1}+X_{4,2}} \left(1+z_{-}\right){}^{\widetilde X_{ 2}}\left(1+z_{+}\right){}^{X_{2}} \\
& z_{3,4}{}^{X_{1,3}} z_{3,-}{}^{-\widetilde X_{{3}}+X_{3,4}-X_4} z_{3,+}{}^{-X_3-\widetilde X_{{4}}+X_{3,4}}z_{4,-}{}^{-X_1-\widetilde X_{{4}}+X_{4,1}} z_{4,+}{}^{-\widetilde X_{{1}}-X_4+X_{4,1}} \\
&\left(z_3+z_{-} z_{+}\right){}^{X_3+\widetilde X_{{3}}-X_{2,3}+X_{2,4}-X_{3,4}}\left(z_4+z_{-} z_{+}\right){}^{-X_{2,4}-X_{3,1}+X_{3,4}}\\
& \left(z_3-z_4-z_3z_4 +z_3z_{-} +z_3z_{+} -z_{-} z_{+}\right){}^{X_4+\widetilde X_{{4}}+X_{3,1}-X_{3,4}-X_{4,1}} \,.
}
We can see the factorization into lower-point amplitudes at the boundaries of the moduli space. The $u_{1,3}$ boundary corresponds to $z_4 \to z_3$, where $X_{1,3}$ must also tend to $0$, and the stringy integral reduces to the $D_3$ integral:
\EQ{
&
\int_{0 < z_3 \atop z_\pm < -1} \frac{dz_3\, dz_{-}\, dz_{+}}{z_3(1+z_-)(1+z_+)} \, z_3^{X_{1,2}-X_{1,3}} z_-^{-\widetilde X_{{2}}-X_3+X_{2,3}}z_+^{-X_2-\widetilde X_{{3}}+X_{2,3}}
\left(1+z_3\right){}^{X_1+\widetilde X_{{1}}-X_{1,2}-X_{3,1}} \\
&\left(1+z_-\right){}^{\widetilde X_{{2}}}\left(1+z_{+}\right){}^{X_{2}} z_{3,-}{}^{-X_1-\widetilde X_{{3}}+X_{3,1}} z_{3,+}{}^{-\widetilde X_{{1}}-X_3+X_{3,1}}\left(z_3+z_{-} z_{+}\right){}^{X_3+\widetilde X_{{3}}-X_{2,3}-X_{3,1}} \,.
}
Similarly for the $z_3 \to 0$ boundary. At the $z_+ \to -1$ (or $z_- \to -1$) poles, the $D_4$ integral reduces to the $A_3$ integral, which is equivalent to the $D_3$ integral but appears in a different parameterization:
\EQ{
&\int_{0 < z_3 < z_4 \atop z_- < -1}
\frac{dz_3\, dz_4\, dz_{-}}{z_3 z_4 (1+z_-)} \, z_3^{X_{4,2}} z_4^{X_{1,2}-X_{1,3}-X_{4,2}} z_-^{-\widetilde X_{{2}}+X_{2,3}-X_3} \left(1+z_3\right){}^{-X_{4,2}-X_3+X_4} \\
&\left(1+z_4\right){}^{-X_{1,2}+X_{4,2}+X_1-X_4}\left(1+z_-\right){}^{\widetilde X_{{2}}} z_{3,4}{}^{X_{1,3}} z_{3,-}{}^{-X_{2,3}+X_{2,4}+X_3-X_4} z_{4,-}{}^{-X_{2,4}-X_1+X_4} \,.
}
For other types, the Parke-Taylor form and the Koba-Nielsen factor can be obtained in an analogous manner.
The $B_3$ stringy integral is
\EQ{
&I_{B_3} =
\int_{0 < z_3 < z_4 \atop z_- < -1} \frac{dz_3 dz_4\, dz_{-}}{(1+z_-) z_3 z_{3,4}} \, z_3^{X_{4,2}} z_4^{X_{1,2}-X_{1,3}-X_{4,2}} z_-^{2 X_{2,3}-X_2-X_3} \\
& \left(1+z_3\right){}^{-X_{3,1}+X_{4,1}-X_{4,2}} \left(1+z_4\right){}^{-X_{1,2}-X_{4,1}+X_{4,2}+X_1} \left(1+z_-\right){}^{X_2} \\
&z_{3,4}{}^{X_{1,3}} z_{3,-}{}^{2 X_{3,4}-X_3-X_4} z_{4,-}{}^{2 X_{4,1}-X_1-X_4} \left(z_3 + z_-^2\right){}^{-X_{2,3}+X_{2,4}-X_{3,4}+X_3} \\
&\left(z_4 + z_-^2\right){}^{-X_{2,4}-X_{3,1}+X_{3,4}} \left(-z_3+z_4 +z_3 z_4 -2 z_3 z_- + z_-^2 \right){}^{X_{3,1}-X_{3,4}-X_{4,1}+X_4}
\,.
}
At the same $u_{1,3}$ boundary when $z_4 \to z_3$, it reduces to the $B_2$ integral:
\EQ{
&\int_{0 < z_3 \atop z_- < -1} \frac{dz_3 \, dz_{-}}{(1+z_-) z_3}\,
z_3^{X_{1,2}-X_{1,3}} z_-^{2 X_{2,3}-X_2-X_3} \left(1+z_-\right){}^{X_2} \left(1+z_3\right){}^{-X_{1,2}-X_{3,1}+X_1} \\
&\left(z_- - z_3\right){}^{2 X_{3,1}-X_1-X_3} \left(z_-^2+z_3\right){}^{-X_{2,3}-X_{3,1}+X_3} \,.
}
At the $z_- \to -1$ pole, the $B_3$ integral reduces to the $A_2$ integral
\EQ{
\int_{0 < z_3 < z_4} \frac{dz_3 dz_4}{z_3 z_{3,4}}\,
z_3^{X_{4,2}} z_4^{X_{1,2}-X_{1,3}-X_{4,2}} \left(1+z_3\right){}^{-X_{2,3}+X_{2,4}-X_{4,2}}\left(1+z_4\right){}^{-X_{1,2}-X_{2,4}+X_{4,2}} z_{3,4}{}^{X_{1,3}} \,.
}
Finally, the $G_2$ stringy integral is
\EQ{
I_{G_2} &= \int_{0 < z_3 < z_4} \frac{dz_3 dz_4}{z_3 z_{3,4}}\, z_3^{X_{4,2}} z_4^{X_{1,2}-X_{1,3}+2 X_{2,3}-X_{2,4}-X_{3,1}+X_{3,4}+2 X_{4,1}-X_{4,2}}\\
& \left(1+z_3\right){}^{-X_{3,1}+3 X_{4,1}-X_{4,2}} \left(1+z_4\right){}^{-X_{1,2}-X_{4,1}+X_{4,2}} \left(z_3-z_4\right){}^{X_{1,3}} \left(z_3^2+z_4\right){}^{-X_{2,4}-X_{3,1}+3 X_{3,4}} \\
&\left(z_3^3+z_4^2\right){}^{-X_{2,3}+X_{2,4}-X_{3,4}} \left(-z_3^3+z_4^2+3 z_3^2 z_4+ z_3^3 z_4\right){}^{X_{3,1}-X_{3,4}-X_{4,1}}
\label{G2integral}
\,.
}
One may readily check that it reduces drastically to the $A_1$ integral when $z_4 \to z_3$:
\eq{
\int_{0 < z_3} \frac{dz_3}{z_3} \, z_3^{X_{1,2}-X_{1,3}} \left(1+z_3\right){}^{-X_{1,2}-X_{2,3}} \,.
}
\subsection{The scattering equations of finite type and numbers of solutions}
The so-called scattering equations arise as the saddle-point equations of the stringy integrals. The $D_n$ scattering equations can be read off from the Koba-Nielsen factor \eqref{KNDn} as
\eq{
\sum_ {1\le j < i \le n+1} {c_{i, j}}\, d\log z_{i,j} + \sum_ {1< j < i < n+1} c_{j,i}\, d\log w_{j,i} + \sum_{1 \le i \le n+1} \big(c_i \, d\log z_{i,+} + \widetilde c_i \, d\log z_{i,-} \big) = 0\,.
\label{SEDn}
}
Note that with gauge-fixing, {\it e.g.} $z_1=-1$, $z_2=0$, $z_{n+1} = \infty$, we have only $n$ variables, $z_3, \cdots, z_n$ and $z_\pm$, and $n$ equations. An important comment is that by pulling back the scattering equations to any ABHY subspace, we obtain a diffeomorphism from the positive part of the space to the interior of the ABHY generalized associahedron~\cite{Arkani-Hamed:2017mur}. For example, for $D_4$ we can set to positive constants exactly those $12$ combinations of $X$'s in \eqref{ABHYDn} that are not a single $X$ variable, and take the remaining $4$ $X$'s as variables; we then find a map from $(z_3, z_4, z_+, z_-)$ to $(X_{4,2}, X_{1,3}, X_2, \widetilde X_{{2}})$. It is straightforward to check that this maps the positive part of the space to the interior of the ABHY $D_4$ polytope.
Another important question regards the number of solutions of such scattering equations, or the number of saddle points. Recall that for type $A_{n-3}$ the number of saddle points is $(n-3)!$, which can be nicely interpreted as the $(n-3)!$ orderings of $n-3$ ``particles" on a line, {\it e.g.} labelled by $z_2, \cdots, z_{n{-}2}$ and bounded by $z_1=0, z_{n{-}1}=1$, with a ``potential" given by the $\log$ of the Koba-Nielsen factor~\cite{Kalousios:2013eca, Cachazo:2016ror}. It is a remarkable fact that for real Mandelstam variables (which are the coupling constants of the potential), these $(n{-}3)!$ solutions are all real. It is an open question whether the saddle points, or solutions of the scattering equations, of other finite-type cluster string integrals can be interpreted in this way. As an illustrative example, the $G_2$ scattering equations can be written explicitly as:
\EQ{
&
\frac{c_1}{z_3}+\frac{c_3}{1+z_3}+\frac{c_5}{z_3-z_4}+\frac{2 c_6 z_3}{z_4+z_3^2}+\frac{3 c_7 z_3^2}{z_4^2+z_3^3}+\frac{3c_8 z_3 \left(-z_3+2 z_4+z_3 z_4\right)}{-z_3^3+z_4^2+3 z_3^2 z_4+ z_3^3 z_4} = 0 \,, \\
&\frac{c_2}{z_4}+\frac{c_4}{1+z_4}-\frac{c_5}{z_3-z_4}+\frac{c_6}{z_4+z_3^2}+\frac{2 c_7 z_4}{z_4^2+z_3^3}+\frac{c_8 \left(2 z_4 +3 z_3^2+ z_3^3\right)}{-z_3^3+z_4^2+3 z_3^2 z_4+ z_3^3 z_4} = 0\,.
}
Here $c_1, \ldots, c_8$ are the exponents of the letters $z_3$, $z_4$, $1+z_3$, $1+z_4$, $z_3-z_4$, $z_3^2+z_4$, $z_3^3+z_4^2$ and $-z_3^3+z_4^2+3 z_3^2 z_4+z_3^3 z_4$, in the order they appear in the stringy integral \eqref{G2integral}. The system generically has $13$ solutions, consistent with the fact that, out of all $25$ connected components of the configuration space, exactly $13$ are bounded regions. However, for real values of $c_i$ we numerically find only $9$ real solutions for $(z_3, z_4)$ (and two pairs of complex solutions).
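The real solutions can be recovered by a simple multi-start Newton search; the following is a minimal sketch (our illustration, not the setup used for the counts below) with randomly drawn real $c_i$, while the complex solutions require homotopy continuation as discussed next:
\begin{verbatim}
# Multi-start Newton search for the real solutions of the G2 scattering
# equations at generic real c_i (a sketch; complex solutions need homotopy
# continuation, e.g. HomotopyContinuation.jl as used below).
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(1)
c = rng.normal(size=9)                      # c[1]..c[8], generic real values

def eqs(v):
    z3, z4 = v
    q = -z3**3 + z4**2 + 3*z3**2*z4 + z3**3*z4
    f1 = (c[1]/z3 + c[3]/(1 + z3) + c[5]/(z3 - z4) + 2*c[6]*z3/(z3**2 + z4)
          + 3*c[7]*z3**2/(z3**3 + z4**2) + 3*c[8]*z3*(-z3 + 2*z4 + z3*z4)/q)
    f2 = (c[2]/z4 + c[4]/(1 + z4) - c[5]/(z3 - z4) + c[6]/(z3**2 + z4)
          + 2*c[7]*z4/(z3**3 + z4**2) + c[8]*(2*z4 + 3*z3**2 + z3**3)/q)
    return [f1, f2]

sols = []
for _ in range(5000):                       # many random starting points
    x, _, ok, _ = fsolve(eqs, rng.uniform(-8, 8, size=2), full_output=True)
    if ok == 1 and max(abs(np.array(eqs(x)))) < 1e-9 \
            and not any(np.allclose(x, s, atol=1e-6) for s in sols):
        sols.append(x)
print(len(sols))   # 9 distinct real solutions expected for generic real c_i
\end{verbatim}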
The problem of finding the number of saddle points becomes computationally difficult for higher-rank cases. We may numerically search for the solutions using the software HomotopyContinuation.jl \cite{Breiding_2018}. For $D_n$, we found the following numbers of solutions to the scattering equations:
\begin{align}\label{numofsol}
\begin{tabular}{c|c|c|c|c}
$D_4$ & $D_5$ & $D_6$ & $D_7$ & $D_8$ \\
\hline
55 & 674 & 10215 & 183256& $\geq$3787649
\end{tabular}
\end{align}
These numerical calculations can be completed in several hours for $D_n$ up to $n=7$, while for $D_8$ we aborted the computation after the software had struggled for several days to find the remaining solutions. So we can only confirm that there are at least 3787649 solutions for $D_8$.
We also found 2411 solutions for $F_4$ and 51466 solutions for $E_6$.
\section{Topological properties of the spaces}\label{sec4}
In this section, we study the topology of cluster configuration spaces using a finite-field method. Recall that this was initiated in \cite{Arkani-Hamed:2020tuz} for types $A_n$, $B_n$ and $C_n$, where in each case the number of points of the space over $\mathbb{F}_p$ was shown to be a polynomial in $p$. Let us list these polynomials for completeness:
\EQ{
|{\cal M}_{A_n}({\mathbb F}_p)| &= (p-2)(p-3)\cdots (p-n-1) \,,
\\
|{\cal M}_{B_n}({\mathbb F}_p)| &= (p-n-1)^n \,,
\\
|{\cal M}_{C_n}({\mathbb F}_p)| &= (p-n-1) (p-3)(p-5) \cdots (p-2n+1) \,.
}
Note that when all the letters can be written as linear factors, the space can be realized as a hyperplane arrangement complement, for which it is known that the point count is polynomial. This is the case for type $A$ in terms of worldsheet variables and for type $B$ using certain ``linear" variables as in~\cite{Arkani-Hamed:2020tuz} (for type $C$ we have not been able to determine whether or not it can be realized as a hyperplane arrangement complement). In such hyperplane cases, it is well known that the coefficients of the point-count polynomial give the dimensions of the $k$-th cohomology for $k=1, 2, \cdots, n$, which are exactly the numbers of independent $k$-forms generated by wedging $d\log$'s of these letters. By plugging in $p=1$ we find the Euler characteristics for types $ABC$ as expected, and by plugging in $p=-1$ we obtain the number of connected components.\footnote{Note that the Dynkin classification is with respect to the cluster algebra, not the crystallographic Coxeter arrangement. For example, the braid arrangement corresponding to the Coxeter group of type $A_{n}$ would reduce to our $A_{n-3}$ space after an $SL(2)$ gauge-fixing.}
However, for type $D_n$ and the exceptional cases, it is expected that the point count is no longer polynomial (thus the space cannot be realized as a hyperplane arrangement complement), and it is not clear how to extract dimensions of cohomology, {\it etc.}, in these cases. In this section, we mainly focus on the type-$D_n$ case and perform numerical ``experiments" counting points for a sufficient number of prime numbers, all the way up to $n=10$. We then conjecture that the point count is always a {\it quasi-polynomial} in $p$ with special ``correction terms". Our conjecture thus yields predictions for the dimensions of cohomology of the $D_n$ space, which up to $n=6$ are supported by the numbers of saddle points we find. Note that in general we do not expect the cohomology to be generated by $d\log$ forms only, but we conjecture that this is still the case for 2-forms and 3-forms, and we propose an all-$n$ formula for these dimensions. Finally, we summarize all the linear relations we find for $d\log$ 2-forms, which are useful for bootstrap approaches based on such an alphabet.
Before proceeding, we also review the simple example of $G_2$, whose point count was found to be a quasi-polynomial using $F$-polynomials in~\cite{Arkani-Hamed:2020tuz}. It is easy to see that the same quasi-polynomial can be found by using the $8$ letters in \eqref{G2}:
\eq{
|{\cal M}_{G_2}({\mathbb F}_p)|= \begin{cases} (p-4)^2 & \mbox{if $p = 2 \mod 3$}\,,\\
(p-4)^2 + 4& \mbox{if $p = 1 \mod 3$}\,.
\end{cases}
}
Note that by plugging in $p=-1$, the point count predicts that there are $25$ connected components, which agrees with the number of sign patterns of the $8$ letters and can be visualized by directly plotting them; by plugging in $p=1$ it gives the Euler characteristic as $13$, which agrees with the number of saddle points as we found above.
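This quasi-polynomial is straightforward to reproduce by brute force; here is a minimal sketch, assuming the $8$ letters of \eqref{G2} are exactly the polynomial factors appearing in \eqref{G2integral}:
\begin{verbatim}
# Brute-force point count of the G2 space over F_p, using the 8 letters
# (read off from the polynomial factors of the G2 stringy integral above).
def count_G2(p):
    L = lambda a, b: (a, b, 1 + a, 1 + b, a - b, a*a + b, a**3 + b*b,
                      -a**3 + b*b + 3*a*a*b + a**3*b)
    # a point (a, b) is in the space iff all 8 letters are nonzero mod p
    return sum(all(f % p for f in L(a, b))
               for a in range(p) for b in range(p))

for p in (5, 7, 11, 13, 17, 19):
    print(p, count_G2(p), (p - 4)**2 + (4 if p % 3 == 1 else 0))
\end{verbatim}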
\subsection{Point count of $D_n$ over $\mathbb{F}_p$}
As explained in \cite{Arkani-Hamed:2020tuz}, the point count shows that ${\cal M}_{D_n}$ cannot be a hyperplane arrangement complement already for $n=4,5$. For $D_4$ and $D_5$, by counting the number of points in the space over $\mathbb{F}_p$ for prime numbers $p \ge 5$, it was previously found that they can be interpolated by the quasi-polynomials
\EQ{\label{d4d5pointcount}
|{\cal M}_{D_4}({\mathbb F}_p)|&=p^4-16 p^3+93 p^2-231 p+207 + \delta_3(p) \,,
\\
|{\cal M}_{D_5}({\mathbb F}_p)|&=p^5-25 p^4+244 p^3-1156 p^2+2649 p-2355 +\delta_3(p) \left(5 p-36\right)-\delta_4(p) \,,
}
where $\delta_i(p)$ is defined as (note that it differs from the $\delta_i(p)$ defined in~\cite{Arkani-Hamed:2020tuz}):
\eq{
\delta_i(p):=
\begin{cases}
1 &{\rm if}~p=1 ~ {\rm mod}~ i \,,\\
-1 &{\rm otherwise}\,.
\end{cases}
}
These point counts were previously performed using the $F$-polynomials of the cluster algebra. We find that the counting with the worldsheet variables yields the same result, as expected.
Substituting $p=1$, we get $55$ and $-674$ respectively, which agree with the Euler characteristics, or the numbers of saddle points up to a sign (see \eqref{numofsol}). For the $D_4$ space, we can also determine the dimensions of the $1$-, $2$-, $3$- and $4$-cohomology to be exactly $16, 93, 231$ and $208$, which agree (up to alternating signs) with the coefficients of $p^3, p^2, p^1$ and $p^0$ if we keep $\delta_3(p)=1$. This is of course consistent with the fact that the alternating sum of these numbers (including $1$ for $p^4$) should give the Euler characteristic (at $p=1$). We also remark that by substituting $p = -1$, we get $547$ and $-6388$ respectively, which agree, up to a sign, with the numbers of consistent sign patterns of the $u$-variables and, as we have checked, with the numbers of connected components in the $z$-variables.
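These evaluations are elementary to reproduce, {\it e.g.} (note that Python's modulo convention gives $(-1) \bmod 3 = 2$, so the $\delta_i$ behave correctly at negative $p$):
\begin{verbatim}
# Quick cross-check of the D4/D5 quasi-polynomials at p = 1 and p = -1.
d3 = lambda p: 1 if p % 3 == 1 else -1   # delta_3 as defined above
d4 = lambda p: 1 if p % 4 == 1 else -1
D4 = lambda p: p**4 - 16*p**3 + 93*p**2 - 231*p + 207 + d3(p)
D5 = lambda p: (p**5 - 25*p**4 + 244*p**3 - 1156*p**2 + 2649*p - 2355
                + d3(p)*(5*p - 36) - d4(p))
print(D4(1), D5(1))     # 55, -674: Euler characteristics (saddles up to sign)
print(D4(-1), D5(-1))   # 547, -6388: connected components up to sign
\end{verbatim}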
We find that the worldsheet variables allow us to perform numerical computations for higher $D_n$ cases, which were not feasible using $F$-polynomials. For example, we can apply the finite-field method to the $D_6, D_7$ and $D_8$ cases, and by assuming that the result should be a quasi-polynomial with $\delta_i(p)$ terms for $i<n$, we find the following formulas, which exhibit a nice pattern:
\EQ{\label{d6d7d8pointcount}
|{\cal M}_{D_6}({\mathbb F}_p)| &=p^6-36 p^5+530 p^4-4070 p^3+17140 p^2-37465 p+33301
\\&
+\delta_3(p) \left(15 p^2-246 p+987\right)-\delta_4(p) \left(6 p-62\right) + 2\, \delta_5(p)\,,
\\
|{\cal M}_{D_7}({\mathbb F}_p)| &= p^7-49 p^6+1014 p^5-11460 p^4+76215 p^3-297641 p^2+631280 p-562269
\\
&+\delta_3(p) \left(35 p^3-966 p^2+8736 p-25823\right)
-\delta_4(p) \left(21 p^2-476 p+2584\right)
\\ &
+\delta_5(p) \left(14 p-212\right) - 2\,\delta_6(p)\,,
\\
|{\cal M}_{D_8}({\mathbb F}_p)| &=p^8-64 p^7+1771 p^6-27629 p^5+265335 p^4-1603427 p^3+5944309 p^2\\ &
-12347924 p +11024276 +\delta_3(p) \left(70 p^4-2856 p^3+43092 p^2-284445 p+691145\right) \\
&-\delta_4(p) \left(56 p^3-2072 p^2+24648 p-94824\right)
+
\delta_5(p) \left(56 p^2-1808 p+13206\right) \\
&-\delta _6(p) \left(16 p-368\right) + 3\, \delta_7(p) \,.\\
}
For $D_6$, we have generated such data for 37 prime numbers, from 7 up to 173; this is more than enough to fix the first quasi-polynomial, which contains only 12 non-trivial coefficients. For $D_7$, we have generated data for 21 prime numbers, from 7 to 89; this is more than enough to fix all 16 non-trivial coefficients in the second quasi-polynomial. Note that the coefficient of $\delta_6(p)$ can be absorbed into that of $\delta_3(p)$ since $\delta_6(p)=\delta_3(p)$; the reason for splitting them in this way will become clear shortly. For $D_8$, we have generated data for 21 prime numbers, from 11 to 97, which is just enough to fix all 21 non-trivial coefficients in the last quasi-polynomial. We emphasize that these are all conjectures about the point count, which we do not know how to prove.
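The fitting procedure itself is simple exact linear algebra; the following sketch illustrates it by regenerating ``data" from the $D_6$ formula above and recovering its $13$ coefficients (in practice the data come from the actual $\mathbb{F}_p$ counts, and, assuming the design matrix has full column rank, the over-determined system provides consistency checks):
\begin{verbatim}
# Fixing a quasi-polynomial ansatz from point-count data by an exact
# rational linear solve (illustrated with synthetic data from the D6 formula).
import sympy as sp

delta = lambda p, i: 1 if p % i == 1 else -1

def D6(p):   # the conjectured D6 quasi-polynomial quoted above
    return (p**6 - 36*p**5 + 530*p**4 - 4070*p**3 + 17140*p**2 - 37465*p
            + 33301 + delta(p, 3)*(15*p**2 - 246*p + 987)
            - delta(p, 4)*(6*p - 62) + 2*delta(p, 5))

def row(p):  # ansatz: sum_k a_k p^k + delta_3(..) + delta_4(..) + delta_5(..)
    return ([p**k for k in range(6, -1, -1)]
            + [delta(p, 3)*p**k for k in (2, 1, 0)]
            + [delta(p, 4)*p**k for k in (1, 0)] + [delta(p, 5)])

primes = list(sp.primerange(7, 100))     # 22 primes for 13 unknowns
A = sp.Matrix([row(p) for p in primes])
b = sp.Matrix([D6(p) for p in primes])
print(A.solve_least_squares(b).T)        # recovers the 13 coefficients exactly
\end{verbatim}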
Substituting $p =1 $, we get $ 10 215$, $-183 256$ for $D_6, D_7$ respectively, which agrees with the Euler characteristic or numbers of saddle points (up to a sign) \eqref{numofsol}. The agreement provides very strong support for these quasi-polynomials. For $D_8$, substituting $p = 1 $, we get $ 3 787 655$, which predicts that there should be 6 remaining solutions in addition to the 3787649 ones already found by HomotopyContinuation.jl \eqref{numofsol}.
We have also conjectured quasi-polynomials of $D_9$ and $D_{10}$. For these cases, it has become more and more time-consuming to generate data for large prime numbers. For $D_9$, we have only generated data of 11 prime numbers, from 11 to 47 and for $D_{10}$, we have generated the data of 9 prime numbers, from 11 to 41. In neither case do we have enough data to completely fix the quasi-polynomials, but we can use some other data, {\it e.g.} the number of independent $d\log$ $k$-forms for $k=1,2,3$ {\it etc.} (see below) to propose the following conjectures:
\EQ{
|{\cal M}_{D_9}({\mathbb F}_p)| &=p^9-81 p^8+2888 p^7-59416 p^6+776342 p^5- 6671938 p^4+ 37657424 p^3
\\ & -134394689 p^2 + 274922647 p -245998148
\\ & + \delta_3(p) \left(126 p^5- 7056 p^4 + 156240 p^3 - 1707489 p^2 + 9190566 p -19467332 \right)
\\ & -\delta_4(p) \left(126 p^4-6720 p^3 + 130320 p^2 - 1094400p + 3365875\right)
\\
& + \delta_5(p) \left(168 p^3-8640 p^2 + 135630 p - 670100 \right) -\delta _6(p) \left(72 p^2-3456p +33786\right)
\\
& + \delta_7(p) \left(27 p - 984\right) - 2\, \delta_8(p)\,.
\\
|{\cal M}_{D_{10}}({\mathbb F}_p)| &=
p^{10}-100 p^9+4464 p^8-117036 p^7+1993824 p^6-23038134 p^5+182628726 p^4
\\
&
-979353469 p^3+3394906731 p^2-6862837036 p+6153401165 \\
&+\delta_3(p)\left(210 p^6-15372 p^5+464310 p^4-7398825 p^3+65494485 p^2\right.\\
&\left.-304671969 p+582450185\right)
\\
&
- \delta_4(p)\left(252 p^5-18060 p^4+504120 p^3-6878520 p^2+45974284 p-120501116\right)
\\
&
+\delta_5(p)\left(420 p^4-30480 p^3+767070 p^2-8145380 p + 31203370\right)
\\
&
- \delta _6(p)\left(240 p^3-18000 p^2+373140 p\right) + \delta_7(p)\left(135 p^2-10110 p+ 131289\right)
\\& - \delta_8(p) \left(20 p-1204\right) +3\, \delta_9(p) \,.
}
Substituting $p = 1 $, we get $-88535634$ and $2308393321$, which predict the Euler characteristics (or the number of saddle points up to a sign) of $D_9$ and $D_{10}$ respectively.
It is computationally difficult to determine the number of saddle points (or equivalently the Euler characteristic) of the $D_n$ space for higher $n$. We simply make a numerical observation: for $n=4,5,\ldots, 10$, the number of saddle points is well approximated by $(1-n)^n \times \{0.679,0.658,0.654,0.655,0.657,0.660, 0.662\}$. It would be interesting to see whether a definite scaling behavior holds at large $n$, and whether there is some explanation for it.
Beyond $D_4$, we have not been able to compute dimensions of cohomology (due to the lack of efficient algorithms), except for the number of $1$-forms, which is the number of letters, $n_1=n^2$. However, we conjecture that the dimension of the $k$-th cohomology is given by (the absolute value of) the coefficient of $p^{n{-}k}$ if we keep all $\delta_i(p)=1$. This conjecture is consistent with the observation that the Euler characteristic is obtained by plugging in $p=1$. To obtain data that provide a lower bound on these dimensions, we can alternatively compute the number of independent $d\log$ $k$-forms, $n_k$, obtained by wedging $d\log$'s of all the letters, for $k=1,2, \ldots, n$. One needs to determine the rank of all the ${n^2 \choose k}$ $d\log$ $k$-forms by considering the linear relations they satisfy. For higher $n$, it also becomes computationally difficult to determine $n_k$, but we have numerically determined such linear relations and obtained the number of independent $k$-forms for all $k$ up to $D_6$, for $k=2,3$ up to $D_9$, and for $k=2$ up to $D_{10}$ (a short code sketch of this rank computation is given after the table):
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c}
& 2-forms & 3-forms & 4-forms & 5-forms & 6-forms \\
\hline
$D_4$ & 93 & 231 & 207 & -&-
\\
$D_5$ & 244 & 1156 & 2649 & 2355&-
\\
$D_6$ & 530 & 4070 & 17140 & 37465 & 33300 \\
$D_7$ & 1014 & 11460 & 76215 & &\\
$D_8$ & 1771 & 27629 & & &
\\
$D_9$ & 2888 & 59416 & & &
\\
$D_{10}$ & 4464 & & & &
\end{tabular}
\end{center}
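The rank computations behind this table amount to evaluating the coefficient functions of the wedge products at sufficiently many generic points and taking a matrix rank. Here is a minimal sketch (our illustration, not the authors' code) for the $A_2$ letters $\{z_3, z_4, 1+z_3, 1+z_4, z_3-z_4\}$ in our gauge, where the point count $(p-2)(p-3)=p^2-5p+6$ predicts $6$ independent 2-forms; the $D_n$ case proceeds identically with the $n^2$ letters $z_{i,j}$ and $w_{i,j}$:
\begin{verbatim}
# Numerical rank of dlog 2-forms: evaluate the coefficient of dz3 ^ dz4 in
# dlog f_a ^ dlog f_b at many generic points and take the matrix rank.
import numpy as np
import sympy as sp

z3, z4 = sp.symbols('z3 z4')
letters = [z3, z4, 1 + z3, 1 + z4, z3 - z4]        # the A_2 letters
grads = [sp.lambdify((z3, z4), [sp.diff(sp.log(f), v) for v in (z3, z4)])
         for f in letters]

rng = np.random.default_rng(0)
pairs = [(a, b) for a in range(5) for b in range(a + 1, 5)]
rows = []
for _ in range(50):                                # generic sample points
    g = np.array([gr(*rng.uniform(2, 3, size=2)) for gr in grads])
    rows.append([g[a, 0]*g[b, 1] - g[a, 1]*g[b, 0] for a, b in pairs])
print(np.linalg.matrix_rank(np.array(rows)))       # prints 6 = n_2 for A_2
\end{verbatim}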
Note that, in addition to $n_1=n^2$, which is (the absolute value of) the coefficient of $p^{n{-}1}$, $n_2$ and $n_3$ nicely match the next two coefficients of the quasi-polynomial, which are not affected by the correction terms $\delta_i(p)$. As part of our conjecture above, these two numbers are conjectured to be the dimensions of the $2$- and $3$-cohomology, respectively, which thus seem to be generated by $d\log$ forms (this is no longer true for $k\geq 4$). We propose that the numbers of independent ($d\log$) 2- and 3-forms are:
\EQ{
n_2 &= \frac{1}{2} (n-1) \left(n^3-n+2\right) \,,\\
n_3 &= \frac{1}{3}\binom{n-1}{2} \left(n^4-3 n^2+5 n+3\right) \,.
}
As we will show in the next section, we can write down all linear relations among 2-forms, thus confirming the result for $n_2$, and it would be interesting to find a similar argument for $n_3$.
\subsection{Cohomology of 2-forms}
The cohomology of a hyperplane arrangement complement is well understood in terms of the Orlik-Solomon algebra: the linear relations among the $d\log$ forms generate the Orlik-Solomon ideal, and the independent $d\log$ forms span the Orlik-Solomon algebra. Our goal is to work out the analogues of the Orlik-Solomon relations for our nonlinear $d\log$ forms.
For $D_{n}$, there are $\binom{n^2}{2}$ 2-forms built from $\omega_{i,j} := d\log z_{i,j}$ and $\chi_{i,j} := d\log w_{i,j}$, including trivial ones such as $\omega_{1,i} \wedge \omega_{2,i} = 0$.
The 2-forms of $\omega_{i,j}$ satisfy the Arnold relations
\eq{\omega_{i,j} \wedge \omega_{j,k} + \omega_{j,k} \wedge \omega_{k,i} + \omega_{k,i} \wedge \omega_{i,j} = 0\,. \label{arnold}}
They provide $\binom{n}{3} + 2\binom{n}{2}$ constraints for $1\le i < j < k \le n$ and $1\le i < j \le n, k = n+2 \text{ or } n+3$.
Let
\EQ{
&S^+_{i,j} = \omega _{i,n+3}+\omega _{j,n+2}\,, \qquad S^-_{i,j} = \omega _{1,n+3} + \omega _{i,j}\,, \\
&T^+_{i,j} = \omega_{1,i} + \omega_{j,n+2}\,, \qquad T^-_{i,j} = \omega_{1,n+3} + \omega_{n+2,i}\,, \\
&U^+_{i,j} = \omega_{i,n+2} +\omega_{j,n+3}\,, \qquad U^-_{i,j} = \omega_{1,n+2} + \omega_{i,j} \,.
}
There are $3 \binom{n-1}{2}$ relations involving $\chi_{i,j}$ for $1 < i < j < n + 1$:
\EQ{
&S^+_{i,j} \wedge S^-_{i,j} + S^-_{i,j} \wedge \chi_{i,j} + \chi_{i,j} \wedge S^+_{i,j} = 0\,,\\
&T^+_{i,j} \wedge T^-_{i,j} + T^-_{i,j} \wedge \chi_{i,j} + \chi_{i,j} \wedge T^+_{i,j} = 0\,,\\
&U^+_{i,j} \wedge U^-_{i,j} + U^-_{i,j} \wedge \chi_{i,j} + \chi_{i,j} \wedge U^+_{i,j} = 0\,.
}
There are $2 \binom{n-1}{3}$ relations involving $\chi_{i,j}\wedge\chi_{i,k}$ and $\chi_{i,k} \wedge \chi_{j,k}$ for $1 < i < j < k < n + 1$:
\EQ{
&(\omega_{1,i} - \omega_{j,k}) \wedge \chi_{i,j} + \chi_{i,j}\wedge\chi_{i,k} + \chi_{i,k} \wedge (\omega_{1,i} - \omega_{j,k}) =0 \,, \\
&\left(\omega _{i,j}-\omega _{i,n+2}+\omega _{k,n+2}+\omega _{1,n+2}\right)\wedge\chi _{i,k}\ + \chi _{i,k}\wedge\chi _{j,k}+\\
&\chi _{j,k}\wedge\left(\omega _{i,j}-\omega _{j,n+2}+\omega _{k,n+2}+\omega _{1,n+2}\right)
-\left(\omega _{k,n+2}+\omega _{1,n+2}\right)\wedge\left(\omega _{i,n+2}-\omega _{j,n+2}\right) = 0\,.
}
The number of independent 2-forms is thus
\EQ{
n_2 &= \binom{n^2}{2}-\left[\binom{n}{3} + 2\binom{n}{2}\right]-3 \binom{n-1}{2}-2 \binom{n-1}{3} \\
&= \frac{1}{2} (n-1) \left(n^3-n+2\right) \,.
}
It agrees with the prediction from the point count.
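The final simplification can be verified symbolically in one line, {\it e.g.}:
\begin{verbatim}
# Symbolic check of the counting identity for n_2 (sympy expands binomial
# coefficients with integer second argument into polynomials automatically).
import sympy as sp
n = sp.symbols('n')
B = sp.binomial
n2 = B(n**2, 2) - (B(n, 3) + 2*B(n, 2)) - 3*B(n - 1, 2) - 2*B(n - 1, 3)
print(sp.expand(n2 - sp.Rational(1, 2)*(n - 1)*(n**3 - n + 2)))   # 0
\end{verbatim}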
\section{Conclusion and Discussions}
In this note, based on the connection between $Y$-systems and configuration spaces for finite-type cluster algebras, we have introduced worldsheet-like variables that generalize the worldsheet variables of the moduli space, and applied them to the study of cluster configuration spaces. In particular, we have studied in detail these $z$-variables for type $D_n$, which have also provided a set of variables for the symbol alphabet of Feynman integrals as discussed in~\cite{Chicherin:2020umh, He:2021esx}. Our main results can be summarized as follows:
\begin{itemize}
\item We find nice polynomials in these $z$-variables which define the configuration spaces for types $ABCD$ (as well as for types $E_6$, $F_4$ and $G_2$). We have characterized the boundaries of the positive part of the space by degenerations of these polynomials.
\item We write the stringy canonical form and the corresponding saddle-point equations in terms of these $z$-variables, which makes it easier to study properties such as factorizations and the pushforward. In particular, we find the numbers of saddle points (or Euler characteristics) for the $D_n$ spaces up to $n=7$ (and for the $E_6, F_4$ and $G_2$ spaces).
\item We study topological properties of the configuration spaces by counting their points over $\mathbb{F}_p$: based on empirical evidence up to $n=10$, we conjecture that the point count for type $D_n$ is always a quasi-polynomial with specific ``correction terms", which predicts dimensions of cohomology, the Euler characteristic, {\it etc.}; for the purpose of studying the symbol alphabet, we have also found all linear relations satisfied by $d\log$ 2-forms of the $D_n$ alphabet for any $n$.
\end{itemize}
There are numerous questions for future investigation raised by our preliminary studies. First of all, we would like to understand possible ``physical" meanings of these worldsheet-like variables. For cluster configuration spaces, this may be related to the possible meaning of cluster string integrals: for the $D_n$ case, the stringy integral not only has the correct $\alpha'\to 0$ limit, namely the one-loop planar $\phi^3$ integrand, but also the correct factorization at massless poles for finite $\alpha'$; it would be highly desirable to understand such stringy integrals better using these $z$-variables. As we have seen in~\cite{Chicherin:2020umh, He:2021esx, He:2021non}, the letters of $D_n$ in $z$-variables (for $n\leq 6$) appear nicely in the symbol of ``ladder-type" Feynman integrals to all loops, where the $z$'s are nothing but all the {\it last entries}. This opens up another direction for studying possible physical meanings of such worldsheet-like variables.
Moreover, it would be interesting to see whether one can prove the quasi-polynomials for the point count of $D_n$ (up to $n=10$) and perhaps even find a general pattern. While these spaces do not correspond to hyperplane arrangement complements, the $z$-variables make it clear that we only need to deal with the quadratic polynomials $w_{i,j}$ in addition to the linear factors. We have not been able to come up with any conjecture for the point count in the $E_6$ or $F_4$ case, which seems to require at least some new forms of ``correction terms". It would be highly desirable to see what can be learned about cluster algebras from these topological properties of the corresponding cluster configuration spaces.
Note that the type-$A$ cluster algebra is equivalent to the $k=2$ Grassmannian, both of which can be used to describe the moduli space for string theory and for the Cachazo-He-Yuan formula \cite{Cachazo:2013hca}.
Other finite-type cluster algebras and higher-$k$ Grassmannians generalize them in two different ways. Higher-$k$ Grassmannian stringy integrals have been studied in~\cite{Arkani-Hamed:2019mrd,He:2020ray}, whose leading orders are equivalent to the higher-$k$ CHY formulas (or CEGM generalized bi-adjoint amplitudes)~\cite{Cachazo:2019ngv,Abhishek:2020xfy}. Tropical Grassmannians \cite{Drummond:2019qjk,Henke:2019hve,Drummond:2020kqg,Lukowski:2020dpn,Henke:2021avn}, matroid subdivisions \cite{Early:2019zyi,Early:2019eun,Cachazo:2020uup,Cachazo:2020wgu} and planar collections of Feynman diagrams \cite{Borges:2019csl,Cachazo:2019xjx,Guevara:2020lek} have also been found very useful in studying them. Compared to higher-$k$ Grassmannians, all finite-type cluster algebras have better factorization behavior. Now that worldsheet-like variables have been found for other finite-type cluster algebras, it is natural to compare our study of cluster configuration spaces to the Grassmannian cases. It would be interesting to compare the topological properties of their configuration spaces, including the Euler characteristic~\cite{Skorobogatov_1996,Cachazo:2019apa,Cachazo:2019ble,Sturmfels:2020mpv,Agostini:2021rze}, their ABHY realizations, and even applications to the symbol alphabets that appear in higher-loop integrals of SYM, {\it etc.}~\cite{Arkani-Hamed:2019rds,Drummond:2019cxm,He:2021non,Ren:2021ztg}.
Last but not least, an important question already mentioned in~\cite{Arkani-Hamed:2020tuz} is whether we can classify all the connected components of the $D_n$ configuration space and find their corresponding canonical forms, which have not been understood even for $D_4$. This becomes much more viable with these $z$-variables, and it would enable us to better understand the cohomology and, more importantly, the extension of the stringy integrals (so far defined for the positive part) to general ``off-diagonal" real integrals and complex integrals~\cite{Arkani-Hamed:2019mrd}.
\acknowledgments
We thank Zhenjie Li, Qinglin Yang and Yichao Tang for collaborations on related projects, and Yang Zhang for help with the USTC computing platform. YZ would like to thank Alex Edison for discussions on the number of independent forms. PZ would like to thank Yuqi Li for discussions on the string worldsheet. SH's research was supported in part by the National Natural Science Foundation of China under Grant No. 11935013,11947301, 12047502,12047503. The research of YZ is supported by the Knut and Alice Wallenberg Foundation under the grant KAW 2018.0116: From Scattering Amplitudes to Gravitational Waves. PZ is supported by a China Postdoctoral Science Foundation Special Fund, grant number Y9Y2231.
\bibliographystyle{./utphys.bst}
\section{Scientific motivation}
\label{sec:fundlimits}
Direct imaging of exoplanets and circumstellar disks is key to their detailed characterization, but is challenging due to the small angular separation between the objects of interest and their much brighter host stars.
A high contrast imaging (HCI) system, as depicted in Figure \ref{fig:HCIsystemarch}, may include dedicated wavefront sensor(s) and focal plane image(s). By convention, a wavefront sensor is defined here as an optical device optimized for wavefront measurement, providing, in the small wavefront aberration regime, a linear relationship between input wavefront state and sensor signal.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{figures/SPIE2021-fig-relationships.png}
\vspace*{0.3cm}
\caption{A high contrast imaging system may include wavefront sensor(s) and focal plane image(s). Data obtained by camera(s) are related to the input wavefront state, which is not directly measured. Wavefront control loops (red arrows) use measurements to compensate for residual wavefront errors and deliver/maintain a high contrast area in the science focal plane (top right). Additionally, starlight measurements may be used to calibrate residual light in the high contrast area, as shown with the blue arrows.}
\label{fig:HCIsystemarch}
\end{figure}
Most ground-based HCI systems rely on pupil-plane wavefront sensor(s) to measure the fast, $\mu$m-scale wavefront aberrations induced by atmospheric turbulence. Pupil plane sensors can provide a linear signal over a wide range of wavefront aberration, making them ideal for high speed wavefront control.
HCI systems envisioned for space missions, on the other hand, rely on focal plane image(s) to measure residual aberrations. This choice is motivated by the very small level of allowable residual wavefront error required to meet the $10^{-10}$ contrast level for direct imaging of reflected-light exoplanets. Such systems must overcome the non-linear relationship between the input wavefront and the measured focal plane intensity in the high contrast area, so wavefront modulation is employed to unambiguously measure the focal plane complex amplitude in order to drive a wavefront control loop. Both approaches could fundamentally be combined, as they are complementary; this is especially relevant for ground-based systems, where extreme-AO level correction can deliver a stable focal plane suitable for focal plane wavefront control. Figure \ref{fig:HCIsystemarch} shows possible wavefront control strategies, each with a thick red arrow connecting the input sensor to the wavefront state upon which the control loop acts.
Additionally, input sensors may also be used to estimate the residual starlight in the high contrast area, as shown by the blue arrows in Fig. \ref{fig:HCIsystemarch}. This can be performed as a post-processing operation, reconstructing and subtracting the unwanted residual starlight from science data. We discuss and explore this approach in this paper.
\subsection{Representative examples}
We consider two examples representative of challenging high contrast imaging observations with ground-based and space telescopes:
\begin{itemize}
\item Earth-Sun system at 8pc distance, observed by a 4-m space telescope
\item Earth-size planet orbiting in the habitable zone of an M4-type star at a 4pc distance, observed by a 30-m ground-based telescope
\end{itemize}
\begin{table}[ht]
\caption{Observation examples}
\label{tab:HCIobs}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
& Space-4m-Earth-G2 & Ground-30m-Earth-M4\\
\hline
\hline
Star & G2 at 8pc & M4 at 4pc \\
\hline
Bolometric luminosity [$L_{Sun}$] & 1.000 & 0.0072 \\
\hline
Planet orbital radius [au] & 1.0 & 0.085 \\
\hline
Maximum angular separation [arcsec] & 0.125 & 0.021 \\
\hline
Reflected light planet/star contrast & 1.5e-10 & 2.1e-8 \\
\hline
\hline
Telescope diameter [m] & 4 & 30 \\
\hline
Science spectral bandwidth & 20\% & 20\% \\
\hline
Central Wavelength & 797 nm (I band) & 1630 nm (H band) \\
\hline
Maximum angular separation [$\lambda$/D] & 3.0 & 1.9 \\
\hline
Efficiency & 20 \% & 20 \% \\
\hline
Total Exposure time & 10 ksec & 10 ksec \\
\hline
\hline
Star brightness & $m_I = 3.60$ & $m_H = 5.65$ \\
\hline
Photon flux in science band (star) & 7.37e8 ph/s & 5.62e9 ph/s \\
\hline
Photon flux in science band (planet) & 0.11 ph/s & 118 ph/s \\
\hline
Background surf. brightness [contrast] & 3.5e-10 (zodi+exozodi) & 1e-5 (starlight)\\
\hline
Background flux in science band & 0.26 ph/s & 56200 ph/s\\
\hline
\hline
{\bf Photon-noise limited SNR (10 ksec)} & 18.1 & 49.7 \\
\hline
{\bf Post-processing timescale (SNR=10 at planet flux)} & 50 mn & 7 mn\\
\hline
{\bf WFS timescale (SNR=10 at background flux)} & 6 mn & 1.8 ms\\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{tab:HCIobs} lists key parameters relevant to the photometric detection of the planet for each case.
Observations are background-limited in both cases. For the space-based observation, we adopt a $m_V = 21\:\mathrm{arcsec}^{-2}$ combined zodiacal + exozodiacal background surface brightness, with $V-I=0.7$ matching the stellar spectrum, resulting in a background that is 2.4$\times$ brighter than the planet. The planet photon rate, at 0.11 ph/s, requires a 50 mn integration to reach SNR=10. Thanks to the larger collecting area, the planet photon rate is much higher ($>$100 ph/s) for the ground-based observation, but the background is also significantly higher.
The bottom part of the table provides timescales relevant to the identification of speckles in the image. The {\bf time to SNR=10 at the planet flux} indicates the exposure time required to detect, in the image, speckles at a level comparable to the planet signal. In the absence of independent speckle calibration, the speckle field should remain stable over multiple such timescales. The last row shows the {\bf time to SNR=10 on a speckle at the background flux}, indicating how much time the instrument needs to measure speckles at the raw contrast level. Without a separate wavefront sensing scheme, the speckle field must be stable at the raw contrast level over this timescale.
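For concreteness, the bottom rows of the table follow from simple photon-noise scalings; the sketch below encodes our reading of the assumptions behind the quoted values (detection ${\rm SNR} = F_p t/\sqrt{(F_p+F_b)t}$, post-processing timescale $t = {\rm SNR}^2 (F_p+F_b)/F_p^2$, and WFS timescale $t = {\rm SNR}^2/F_b$ for a speckle at the background level measured against its own photon noise):
\begin{verbatim}
# Reproducing the photon-noise SNR and timescale rows of the table
# from the planet and background photon rates F_p, F_b (in ph/s).
import math

cases = {"space 4m": (0.11, 0.26), "ground 30m": (118.0, 5.62e4)}
for name, (Fp, Fb) in cases.items():
    snr = Fp * 1e4 / math.sqrt((Fp + Fb) * 1e4)   # 10 ksec exposure
    t_post = 100 * (Fp + Fb) / Fp**2              # SNR=10 at planet flux
    t_wfs = 100 / Fb                              # SNR=10 at background flux
    print(f"{name}: SNR={snr:.1f}, post={t_post/60:.0f} mn, wfs={t_wfs:.3g} s")
# output matches the table: 18.1 / 51 mn / 385 s and 49.7 / 7 mn / 0.0018 s
\end{verbatim}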
In both cases, the post-processing and WFS timescales are long compared to possible sources of wavefront disturbance, driving optical stability requirements that are very challenging to meet. In deriving these quantities, however, we only considered light available in the high-contrast area (dark hole) of the science image. A more efficient approach would be to use most of the available starlight for wavefront sensing and calibration. This is the goal of our study.
\section{Focal plane speckle calibration}
\label{sec:specklecalib}
\subsection{Goals}
In this first example, we consider dark hole (DH) calibration in a focal plane image where the DH occupies half of the field, while the other half is significantly brighter and referred to as the bright field (BF). The configuration is shown in the rightmost image of Fig. \ref{fig:HCIsystemarch}. In this configuration, our goal is to use the BF to calibrate the DH.
The approach is related to linear dark field control (LDFC) \cite{2021A&A...646A.145M}, where a linear control loop uses the BF as input for wavefront control. LDFC has been demonstrated to stabilize the DH both in the laboratory \cite{2020PASP..132j4502C} and on-sky \cite{2021arXiv210606286B}. Here, we explore extending LDFC to a nonlinear DH calibration algorithm. Ideally, such an algorithm would take the BF as input and return an estimate of the DH. This estimate is computed for each exposure and subtracted from the measured DH to remove speckles due to time-variable wavefront errors.
Important for this approach is that the BF is engineered to contain all necessary wavefront information to calibrate the DH.
That is, the BF should exhibit a unique response to sign changes of even and odd wavefront modes, which is not necessarily true for every system.
For example, focal-plane wavefront sensing with the BF of a vAPP coronagraph is enabled by a pupil amplitude asymmetry in the coronagraph's design\cite{bos2019focal}.
This allowed for the successful LDFC experiments presented in Ref. \citenum{2021A&A...646A.145M, 2021arXiv210606286B}.
When amplitude aberrations need to be calibrated as well, for example to deal with the asymmetric wind-driven halo\cite{cantalloube2018origin} (interference of AO-lag error and scintillation), then the BF also needs to be designed to have a unique response to these aberrations\cite{bos2020sky}.
\subsection{Experimental setup}
Data were acquired on the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO)\cite{jovanovic2015subaru, Lozi2018SCExAO} instrument using its internal light source with a Lyot-type coronagraph. The deformable mirror (DM) was configured to yield a $\approx$ 1e6 contrast dark hole in H-band. $128\times128$ pixel images were acquired at 1550 nm (25 nm bandpass) at a 7 kHz framerate using a C-RED One camera with a SAPHIRA-type imaging array. During the acquisition, simulated dynamical wavefront aberrations were added to the system deformable mirror to produce time-variable speckles in the DH. The corresponding BF modulations were recorded in the images.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{figures/DHcalib-SCExAO.png}
\caption{Dark hole calibration using high contrast image bright field.}
\label{fig:DHcalibSCExAO}
\end{figure}
\subsection{Algorithm}
The goal of the experiment here is to quantify how well the image BF can constrain the DH. The approach we adopt is to identify, within a large set of images, a subset with nearly identical BF realisations, and to measure the statistical properties of the corresponding DH images. If the DH can be reconstructed from the BF, then images for which the BF is nearly identical should also have nearly identical DHs. If, however, several DH solutions map to the same BF, then our analysis would reveal a large scatter in DH realisations within the BF-selected set.
This statistical approach does not require an algorithm to be developed to compute DH from BF, and does not make any assumption of what such an algorithm is. If identical BFs correspond to identical DHs, then a DH reconstruction algorithm does exist. While it may be conceivable to construct a correspondence table between the two, this may not be practical in high dimension space, and more efficient techniques such as neural networks may be employed, as illustrated in \S \ref{sec:VAMPIRES}.
The steps of the algorithm are listed below (a short code sketch follows the list):
\begin{itemize}
\item Identify clusters of images with similar BF realisations
\item Select optimal cluster, according to cluster sample size, cluster diameter, and possibly DH flux (see \S \ref{sec:GLINT})
\item Measure variance in DH intensity map within cluster, and compare to full input sample
\end{itemize}
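A minimal sketch of these selection steps follows, with hypothetical variable names: {\tt frames} is the $(N, n_y, n_x)$ image cube and {\tt bf\_mask} a boolean pixel mask defining the BF selection area.
\begin{verbatim}
# Identify the tightest cluster of frames with similar BF realisations.
import numpy as np

def select_cluster(frames, bf_mask, n_cluster=128, n_ref=200, seed=0):
    bf = frames[:, bf_mask].astype(float)    # (N, n_pixBF) BF pixel vectors
    rng = np.random.default_rng(seed)
    best_diam, best_idx = np.inf, None
    # try a subset of frames as cluster references and keep the tightest one
    for ref in rng.choice(len(bf), size=n_ref, replace=False):
        d = np.linalg.norm(bf - bf[ref], axis=1)   # BF distance metric
        idx = np.argsort(d)[:n_cluster]            # most similar BFs
        if d[idx].max() < best_diam:               # cluster "diameter"
            best_diam, best_idx = d[idx].max(), idx
    return best_idx
\end{verbatim}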
\subsection{Results}
Results are compiled in Figure \ref{fig:DHcalibSCExAO}. The average of all 60000 frames (top left) shows the BF in the lower half of the image and the DH in the top half. The intensity variance across the 60000 images is shown for each pixel of the image at the bottom left, with both readout noise and photon noise variance terms subtracted to reveal actual speckle intensity variance.
The area used to select clusters of BF-similar images is shown in black in the top right image: it is the set of pixels used to compute a distance metric between BF realisations. A cluster of 128 images with similar BFs was identified, and the corresponding average and variance images are shown in the center column with brightness scales identical to the ensemble average and variance on the left.
To measure the algorithm performance, the variance across a set of images is computed. Denoting by $x,y$ the image spatial coordinates and by $k$ the image index, the BF variance across the whole set of images is:
\begin{equation}
\sigma^2_{BF,all} = \frac{1}{N_{all} \: N_{pixBF}} \sum_{k,(x,y) \subset BF} \left( I(x,y,k) - \frac{1}{N_{all}} \sum_{k_1} I(x,y,k_1) \right) ^2
\end{equation}
where $I(x,y,k)$ is the intensity of the pixel with spatial coordinates $(x,y)$ in frame number $k$, $N_{all}$ is the total number of frames, and $N_{pixBF}$ is the number of pixels $(x,y) \subset BF$ in the bright field. The variance $\sigma^2_{DH,all}$ is defined similarly for the DH, and $\sigma^2_{BF,cluster}$ is the variance across the set of images within the selected cluster. The DH geometry used for the variance computation is shown in the top right panel of Fig. \ref{fig:DHcalibSCExAO}.
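In code, this statistic and the cluster-versus-full-set comparison read as follows (a sketch continuing the names of the previous listing, with the readout- and photon-noise variance subtraction omitted; {\tt dh\_mask} is the DH pixel mask):
\begin{verbatim}
# Variance statistic of the equation above, averaged over frames and pixels.
def masked_variance(frames, mask):
    pix = frames[:, mask].astype(float)            # (N, n_pix)
    return ((pix - pix.mean(axis=0))**2).mean()

idx = select_cluster(frames, bf_mask)              # from the sketch above
for mask, name in [(bf_mask, "BF"), (dh_mask, "DH")]:
    ratio = masked_variance(frames, mask) / masked_variance(frames[idx], mask)
    print(name, "variance reduction: %.1fx" % ratio)  # 35.7x / 30.7x reported
\end{verbatim}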
Figure \ref{fig:DHcalibSCExAO} shows that the variance within the BF is 35.7 $\times$ smaller within the selected 128-sample cluster than across the full dataset. This is to be expected, as the cluster selection is based on minimizing the BF variance across the selected set of images. The corresponding measured DH variance is 30.7 $\times$ smaller within the cluster set than across the full input dataset. This last result demonstrates that images with similar BFs also have similar DHs. {\bf Image selection using BF intensity successfully constrains DH intensity, demonstrating that a BF-to-DH calibration algorithm can be derived to calibrate residual speckles in high contrast images}.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/singleframe.png}
\vspace*{0.3cm}
\caption{Single frame within the 128-sample cluster. The frame is shown with two different brightness scales (a and b) to highlight the BF and DH image areas. The difference between the frame and the average over the full dataset (c) shows strong signal within the BF, but no detectable signal in the DH due to low SNR.}
\label{fig:HCIsingleframe}
\end{figure}
A single frame within the selected set is shown in Fig. \ref{fig:HCIsingleframe}, along with its deviation from the average intensity image across the full set of images. The BF signal used for the selection is strongly visible in image (c), while the DH area is indistinguishable from the average DH intensity due to low SNR. The selection we performed would therefore not have been possible from the DH intensity alone. This also demonstrates the technique's ability to reconstruct the DH speckle map to an accuracy better than the readout and photon noise.
This last point is essential to the approach's main goal: wavefront variations can be tracked and their effect on the DH calibrated faster than the WFS timescale listed in Table \ref{tab:HCIobs}. For example, a mechanical vibration may create a time-variable speckle that would be indistinguishable from a planet within the DH. The same vibration would modulate the BF intensity, and a BF-to-DH algorithm would detect this modulation and calibrate out the offending DH speckle thanks to higher BF SNR. While the LDFC algorithm leverages the same SNR gain, it would require the LDFC control bandwidth to exceed the vibration timescale. This is not required for postprocessing, as variance analysis of the BF would reveal the vibration, provided that the image framerate (for the BF) is sufficiently fast to resolve the vibration.
\section{PSF reconstruction from WFS telemetry}
\label{sec:VAMPIRES}
While \S \ref{sec:specklecalib} demonstrates that a unique mapping exists between BF and DH, we have not constructed an algorithm to transform BF measurements into DH estimates. The non-linear relationship between the two spaces eliminates conventional linear reconstructors, and the high dimension of the input BF image makes a lookup table impractical. We demonstrate in this section that a neural network is a viable approach to address this challenge.
\label{sec:WFStoPSF}
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{figures/WFSPSF_Figure.png}
\vspace*{0.3cm}
\caption{Prediction of the PSF from the Pyramid WFS data for an on-sky observation, using a neural network. The predicted PSF image (centre) is determined entirely from the current WFS image (left), and is seen to closely match the true PSF measured at that instant (right column). This example shows a PSF with a large amount of wavefront error (including strong coma) to provide a clear illustration.}
\label{fig:wfspsf}
\end{figure}
Wavefront sensors are commonly used to drive an adaptive optics control loop. It is also possible to reconstruct the instantaneous PSF from the current wavefront sensor data. To utilise the greatest possible amount of wavefront sensor information, it is desirable for the reconstruction to be performed using the raw wavefront sensor image, rather than the modal basis used in the AO system. However, the relationship between the pixel intensities in the wavefront sensor image and those in the focal plane is non-linear, precluding a simple linear reconstruction. Instead, a neural network is used, as neural networks are highly capable at non-linear inference tasks.
Results of a PSF reconstruction for on-sky data are shown in Figure \ref{fig:wfspsf}. Here, a fully-connected neural network consisting of two 2000-unit layers was trained on 5 minutes of on-sky data, consisting of synchronised images from the pyramid wavefront sensor camera and VAMPIRES visible camera\cite{2015MNRAS.447.2894N} (wavelength 750~nm), running at approximately 500 frames/sec. The network used ReLU activation functions and dropout between each layer as a regularizer, the latter proving to be crucial for successful reconstruction. While the fully-connected network shown here provides good results, certain advantages (such as reduced parameter number and resistance to pupil alignment drift) could be expected from a convolutional neural network, which is the focus of a current study.
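For concreteness, here is a minimal sketch of such an architecture in PyTorch; the layer sizes come from the text, while the input/output dimensions and dropout rate are placeholder assumptions:
\begin{verbatim}
# Fully-connected WFS-to-PSF reconstruction network: two 2000-unit hidden
# layers, ReLU activations, dropout between layers as a regularizer.
import torch
import torch.nn as nn

n_wfs, n_psf = 120 * 120, 64 * 64   # hypothetical flattened image sizes

model = nn.Sequential(
    nn.Linear(n_wfs, 2000), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(2000, 2000), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(2000, n_psf),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(wfs_batch, psf_batch):
    # wfs_batch: (B, n_wfs) WFS frames; psf_batch: (B, n_psf) synchronized PSFs
    opt.zero_grad()
    loss = loss_fn(model(wfs_batch), psf_batch)
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}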
\section{Photonic nulling calibration}
\label{sec:GLINT}
While section \ref{sec:specklecalib} demonstrates that a mapping from BF to DH exists, we provided no practical solution to construct a BF-to-DH calibration algorithm. Due to the large number of input dimensions, a brute-force approach is not feasible. The BF-to-DH relationship may also slowly evolve due to slight changes in optical alignment or optical surface figures within the coronagraph optical train, rendering old BF-to-DH calibrations stale.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/GLINT-lab.png}
\vspace*{0.3cm}
\caption{Null calibration with the GLINT photonic nuller: Laboratory demonstration.}
\label{fig:GLINTlab}
\end{figure}
A photonic nuller is an alternative solution to the high contrast imaging challenge. Unlike a coronagraph constructed from bulk optics between which light freely propagates, the photonic nuller couples starlight into a small number of coherent singlemode waveguides. The waveguides are coherently combined to produce starlight destructive interference in null output(s). Bright starlight is directed to bright outputs which measure the intensity in input waveguides (photometry output(s)) and phase offset between input waveguides (WFS output(s)). The photonic nuller concept and its implementation are discussed in publications from the GLINT instrument team\cite{2020MNRAS.491.4180N, Martinod2021NatCo}.
The photonic nuller approach to high contrast imaging appears to be largely immune from the challenges affecting coronagraph systems with bulk optics:
\begin{itemize}
\item Starlight is coupled into a small number of coherent waveguides. At each wavelength, the light at the photonic device input is fully described by phase and amplitude (and possibly polarization), so the number of dimensions in the input is a few times the number of waveguides
\item The relationship between input variables (phase and amplitude of each waveguide) and output intensities is entirely established within the photonic chip, so it is significantly more stable than an optical train of components subject to relative misalignments.
\end{itemize}
We tested the DH calibration approach on the GLINT instrument\cite{Martinod2021NatCo} installed on the Subaru Telescope. Figure \ref{fig:GLINTlab} shows results obtained with the internal light source and dynamical wavefront aberrations injected using SCExAO's deformable mirror. The leftmost image (a) is the GLINT detector image averaged over the experiment. GLINT's 16 output waveguides are wavelength-dispersed and re-imaged on a near-IR camera, so the image shows 16 spectra, with wavelength ranging from 1340 nm (right edge) to 1690 nm (left edge). Images are acquired at a 1.4 kHz framerate to freeze atmospheric turbulence and vibrations. Two null output channels are labelled in the figure; they should be dark in the absence of input wavefront errors. Anti-null outputs are where fully constructive interference occurs for a flat wavefront. Photometry channels track the amount of flux in the input waveguides. Other outputs encode the differential phase between input channels as interference fringes. Null \#1 is the destructive interference between two widely separated (B $=$ 5.5 m center-to-center) apertures on the 8.2 m Subaru telescope pupil, while null \#4 corresponds to a shorter (B $=$ 2.15 m center-to-center) baseline.
The left half of Fig. \ref{fig:GLINTlab} shows the average (a) and standard deviation (b) of all frames. Due to varying wavefront errors, the null outputs are relatively bright, especially null \#1, which is highly sensitive to wavefront tip-tilt. Since the wavefront variations in this test are larger than 1 radian, the average traces are mostly devoid of fringes, and the standard deviation is comparable to the average intensity. We note that the standard deviation is somewhat smaller, but still noticeable, for the photometric outputs, as the input wavefront errors are large enough to induce tip-tilt across the input subapertures, resulting in a time-variable coupling efficiency loss.
We defined the BF selection area as all non-null output channels, and the DH as the two null outputs. The BF used for frame selection is outlined by the blue rectangles in Fig. \ref{fig:GLINTlab}, panel (c). The output DH, consisting of the two null channels, is shown by the red rectangles.
A 20-sample cluster is selected based on BF similarity (blue rectangles), with an added constraint on the total average flux within the null output channels. The corresponding average (c) and standard deviation (d) images demonstrate, as in \S \ref{sec:specklecalib}, improved stability of both the BF and the DH. In this experiment, the input WF errors were considerably larger, so the interferometric signal (fringes) was washed out in the whole-set average, and the average null depth was poor. The average of the BF selection shows a clear WFS signal and maintains good starlight suppression in the null outputs.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/GLINT-sky.png}
\vspace*{0.3cm}
\caption{Null calibration with the GLINT photonic nuller: on-sky demonstration.}
\label{fig:GLINTsky}
\end{figure}
We also performed the experiment on-sky, as shown in Fig. \ref{fig:GLINTsky}. The star $\alpha$ Boo (Arcturus) was observed with the same setup. Input wavefront errors are smaller than in the laboratory demonstration, so fringes are visible in the average image (a). We choose here a larger 100-sample cluster to mitigate readout noise and photon noise. The null is deeper and stable within the cluster, demonstrating that the BF selection successfully identifies high-quality frames within the dataset. The standard deviation in the null channels is small enough that it could not be measured, as it is well below the detector noise level. {\bf The quality (null depth and stability) of the BF-selected dataset is therefore significantly better than possible with a DH-selected dataset}. GLINT's detection of $\alpha$ Boo's finite angular diameter is visible as a stable non-null flux in null \#1.
The on-sky results demonstrate uniqueness of the null solution for a given BF measurement, which is the necessary condition for a BF-to-DH algorithm. There is no evidence for a measurement null space, which would induce variations in the null outputs without a corresponding signature in the bright channels.
\section{Conclusion}
We have demonstrated that bright starlight in wavefront sensor(s) and bright parts of images can be used to reliably calibrate residual starlight in the dark hole regions of high contrast images. Our preliminary lab and on-sky tests show that this DH estimation is more precise than the DH photon noise, so it may be possible to have self-calibrating HCI systems operate at the photon noise limit imposed by total surface brightness.
We have shown that there exists an unambiguous BF-to-DH mapping, and in our tests we find no uncalibrated DH variations. While our tests are encouraging, we have not yet developed a reliable, practical BF-to-DH reconstruction algorithm, but we show that a neural network can solve a closely related PSF reconstruction problem. We note that our lab and on-sky tests were of short duration, and that the underlying BF-to-DH relationship may change over longer periods of time, possibly requiring frequent or continuous re-calibration. The approach appears to be especially powerful for photonic nulling devices, where the starlight suppression and wavefront sensing functions are integrated in a single, stable, small-size device.
Our findings indicate that future HCI systems should run concurrently with multiple wavefront sensors (WFSs), so that the aggregate WFS information can be collected using as much starlight as possible to provide a high fidelity, high frame rate estimate of the DH speckle field. In such a system, science images can be calibrated to high accuracy, reducing chances for false positives and increasing data quality.
\acknowledgments
This work was supported by NASA grants \#80NSSC19K0336 and \#80NSSC19K0121. This work is based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The authors also wish to acknowledge the critical importance of the current and recent Subaru Observatory daycrew, technicians, telescope operators, computer support, and office staff employees. Their expertise, ingenuity, and dedication is indispensable to the continued successful operation of these observatories. The development of SCExAO was supported by the Japan Society for the Promotion of Science (Grant-in-Aid for Research \#23340051, \#26220704, \#23103002, \#19H00703 \& \#19H00695), the Astrobiology Center of the National Institutes of Natural Sciences, Japan, the Mt Cuba Foundation and the director's contingency fund at Subaru Telescope. KA acknowledges support from the Heising-Simons foundation.
\section{Scientific motivation}
\label{sec:fundlimits}
Direct imaging of exoplanets and circumstellar disks is key their detailed characterization, but is challenging due to the small angular separation between objects of interets and the much brighter host stars.
A high contrast imaging (HCI) system, as depicted in Figure \ref{fig:HCIsystemarch}, may include dedicated wavefront sensor(s) and focal plane image(s). By convention, a wavefront sensor is defined here as an optical device optimized for wavefront measurement, providing, in the small wavefront aberration regime, a linear relationship between input wavefront state and sensor signal.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{figures/SPIE2021-fig-relationships.png}
\vspace*{0.3cm}
\caption{A high contrast imaging system may include wavefront sensor(s) and focal plane image(s). Data obtained by camera(s) is related to the input wavefront state, which is not directly measured. Wavefront control loops (red arrows) use measurements to compensate for residual wavefront errors and deliver/maintain a high contrast area in the science focal plane (top right). Additionally, starlight measurements may be used to calibrate residual light in the high contrast area, as shown with the blue arrows.}
\label{fig:HCIsystemarch}
\end{figure}
Most ground-based HCI systems rely on pupil-plane wavefront sensor(s) to measure the fast, $\mu$m-scale wavefront aberrations induced by atmospheric turbulence. Pupil plane sensors can provide a linear signal over a wide range of wavefront aberration, making them ideal for high speed wavefront control.
HCI systems envisioned for space missions, on the other hand, rely on focal plane image(s) to measure residual aberrations. This choice is motivated by the very small level of allowable residual wavefront error required to meet the $10^{-10}$ contrast level for direct imaging of reflected light exoplanets. Such systems must overcome the non-linear relationship between input wavefront and measured focal plane intensity in the high contrast area, so wavefront modulation is employed to unambiguously measure the focal plane complex amplitude in order to drive a wavefront control loop. Both approaches could fundamentally be combined as they are complementary -- this is especially relevant for ground-based systems, where Extreme-AO level correction can deliver a stable focal plane suitable for focal plane wavefront control. Figure \ref{fig:HCIsystemarch} shows possible wavefront control strategies, each with a thick red arrow connecting the input sensor to the wavefront state upon which the control loop is acting.
Additionally, input sensors may also be used to estimate the residual starlight in the high contrast area, as shown by the blue arrows in Fig. \ref{fig:HCIsystemarch}. This can be performed as a post-processing operation, reconstructing and subtracting the unwanted residual starlight from science data. We discuss and explore this approach in this paper.
\subsection{Representative examples}
We consider two examples representative of challenging high contrast imaging observations with ground and space telescopes:
\begin{itemize}
\item Earth-Sun system at 8pc distance, observed by a 4-m space telescope
\item Earth-size planet orbiting in the habitable zone of a M4 type star at a 4pc distance, observed by a 30-m ground-based telescope
\end{itemize}
\begin{table}[ht]
\caption{Observation examples}
\label{tab:HCIobs}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
& Space-4m-Earth-G2 & Ground-30m-Earth-M4\\
\hline
\hline
Star & G2 at 8pc & M4 at 4pc \\
\hline
Bolometric luminosity [$L_{Sun}$] & 1.000 & 0.0072 \\
\hline
Planet orbital radius [au] & 1.0 & 0.085 \\
\hline
Maximum angular separation [arcsec] & 0.125 & 0.021 \\
\hline
Reflected light planet/star contrast & 1.5e-10 & 2.1e-8 \\
\hline
\hline
Telescope diameter [m] & 4 & 30 \\
\hline
Science spectral bandwidth & 20\% & 20\% \\
\hline
Central Wavelength & 797 nm (I band) & 1630 nm (H band) \\
\hline
Maximum angular separation [$\lambda$/D] & 3.0 & 1.9 \\
\hline
Efficiency & 20 \% & 20 \% \\
\hline
Total Exposure time & 10 ksec & 10 ksec \\
\hline
\hline
Star brightness & $m_I = 3.60$ & $m_H = 5.65$ \\
\hline
Photon flux in science band (star) & 7.37e8 ph/s & 5.62e9 ph/s \\
\hline
Photon flux in science band (planet) & 0.11 ph/s & 118 ph/s \\
\hline
Background surf. brightness [contrast] & 3.5e-10 (zodi+exozodi) & 1e-5 (starlight)\\
\hline
Background flux in science band & 0.26 ph/s & 56200 ph/s\\
\hline
\hline
{\bf Photon-noise limited SNR (10 ksec)} & 18.1 & 49.7 \\
\hline
{\bf Post-processing timescale (SNR=10 at planet flux)} & 50 min & 7 min\\
\hline
{\bf WFS timescale (SNR=10 at background flux)} & 6 min & 1.8 ms\\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{tab:HCIobs} lists key parameters relevant to the photometric detection of the planet for each case.
Observations are background-limited in both cases. For the space-based observation, we adopt a $m_V = 21\:\mathrm{arcsec}^{-2}$ combined zodiacal + exozodiacal background surface brightness, with $V-I=0.7$ matching the stellar spectrum, resulting in a background 2.4$\times$ brighter than the planet. The planet photon rate, at 0.11 ph/s, requires a 50 min integration to reach SNR=10. Thanks to the larger collecting area, the planet photon rate is much larger ($>$100 ph/s) for the ground-based observation, but the background is also significantly higher.
The bottom part of the table provides timescales relevant to the identification of speckles in the image. The {\bf post-processing timescale} (time to SNR=10 at the planet flux) indicates the exposure time required to detect speckles at a level comparable to the planet image. In the absence of independent speckle calibration, the speckle field should remain stable over multiple such timescales. The last row shows the {\bf WFS timescale} (time to SNR=10 at the background flux), indicating how much time is needed for the instrument to measure speckles at the raw contrast level. Without a separate wavefront sensing scheme, the speckle field must be stable at the raw contrast level over this timescale.
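For concreteness, these quantities follow from simple photon-noise statistics applied to the fluxes in Table~\ref{tab:HCIobs}. The short Python sketch below is illustrative only (function and variable names are ours); it reproduces the tabulated values up to rounding.
\begin{verbatim}
import math

def snr(planet, background, t):
    # Photon-noise limited SNR: signal = planet photons,
    # noise = sqrt of total (planet + background) photons
    return planet * t / math.sqrt((planet + background) * t)

def t_postprocessing(planet, background, snr_target=10.0):
    # Exposure time for the planet flux to reach snr_target
    return snr_target**2 * (planet + background) / planet**2

def t_wfs(background, snr_target=10.0):
    # Exposure time to measure a speckle at the background level
    return snr_target**2 / background

# Space-4m-Earth-G2: 0.11 ph/s (planet), 0.26 ph/s (background)
print(snr(0.11, 0.26, 1e4))          # ~18.1
print(t_postprocessing(0.11, 0.26))  # ~3060 s ~ 50 min
print(t_wfs(0.26))                   # ~385 s ~ 6 min

# Ground-30m-Earth-M4: 118 ph/s (planet), 56200 ph/s (background)
print(snr(118, 56200, 1e4))          # ~49.7
print(t_postprocessing(118, 56200))  # ~405 s ~ 7 min
print(t_wfs(56200))                  # ~1.8e-3 s = 1.8 ms
\end{verbatim}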
In both cases, the post-processing and WFS timescales are long compared to possible sources of wavefront disturbances, making optical stability requirements very challenging to meet. In deriving these quantities, we have however only considered light available in the high-contrast area (dark hole) of the science image. A more efficient approach would be to use most of the available starlight for wavefront sensing and calibration. This is the goal of our study.
\section{Focal plane speckle calibration}
\label{sec:specklecalib}
\subsection{Goals}
In this first example, we consider dark hole (DH) calibration in a focal plane image where the DH occupies half of the field, while the other half is significantly brighter, and is referred to as the bright field (BF). The configuration is shown in Fig. \ref{fig:HCIsystemarch} on the rightmost image. In this configuration, our goal is to use the BF to calibrate the DH.
The approach is related to linear dark field control (LDFC) \cite{2021A&A...646A.145M}, where a linear control loop uses the BF as input for wavefront control. LDFC has been demonstrated to stabilize the DH both in the laboratory \cite{2020PASP..132j4502C} and on-sky \cite{2021arXiv210606286B}. Here, we explore extending LDFC to a nonlinear DH calibration algorithm. Ideally, an algorithm would take the BF as input and return an estimate of the DH. This estimate is computed for each exposure and subtracted from the measured DH to remove speckles due to time-variable wavefront errors.
Important for this approach is that the BF is engineered to contain all necessary wavefront information to calibrate the DH.
That is, the BF should exhibit a unique response to sign changes of even and odd wavefront modes, which is not necessarily the case for every system.
For example, focal-plane wavefront sensing with the BF of a vAPP coronagraph is enabled by a pupil amplitude asymmetry in the coronagraph's design\cite{bos2019focal}.
This allowed for the successful LDFC experiments presented in Ref. \citenum{2021A&A...646A.145M, 2021arXiv210606286B}.
When amplitude aberrations need to be calibrated as well, for example to deal with the asymmetric wind-driven halo\cite{cantalloube2018origin} (interference of AO-lag error and scintillation), then the BF also needs to be designed to have a unique response to these aberrations\cite{bos2020sky}.
\subsection{Experimental setup}
Data was acquired on the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO)\cite{jovanovic2015subaru, Lozi2018SCExAO} instrument using its internal light source with a Lyot-type coronagraph. The deformable mirror (DM) was configured to yield a $\approx$ 1e6 contrast dark hole in H-band. $128 \times 128$ pixel images were acquired at 1550 nm (25 nm bandpass) at a 7 kHz framerate using a C-RED One camera with a SAPHIRA-type imaging array. During the acquisition, simulated dynamical wavefront aberrations were added to the system deformable mirror to produce time-variable speckles in the DH. Corresponding BF modulations were recorded in the images.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{figures/DHcalib-SCExAO.png}
\caption{Dark hole calibration using high contrast image bright field.}
\label{fig:DHcalibSCExAO}
\end{figure}
\subsection{Algorithm}
The goal of the experiment here is to quantify how well the image BF can constrain the DH. The approach we adopt is to identify, within a large set of images, a subset of images with nearly identical BF realisations, and measure the statistical properties of the corresponding DH images. If the DH can be reconstructed from the BF, then images for which the BF is nearly identical should also have nearly identical DHs. If, however, several DH solutions map to the same BF, then our analysis would reveal a large scatter in DH realisations within the BF-selected set.
This statistical approach does not require an algorithm to be developed to compute the DH from the BF, and does not make any assumption about what such an algorithm would be. If identical BFs correspond to identical DHs, then a DH reconstruction algorithm does exist. While it may be conceivable to construct a correspondence table between the two, this may not be practical in a high-dimensional space, and more efficient techniques such as neural networks may be employed, as illustrated in \S \ref{sec:VAMPIRES}.
The steps of the algorithm, sketched in code after the list, are:
\begin{itemize}
\item Identify clusters of images with similar BF realisations
\item Select optimal cluster, according to cluster sample size, cluster diameter, and possibly DH flux (see \S \ref{sec:GLINT})
\item Measure variance in DH intensity map within cluster, and compare to full input sample
\end{itemize}
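A minimal sketch of the selection procedure is given below, assuming a simple Euclidean distance metric over BF pixels; the exact metric, cluster search strategy, and additional selection criteria (such as the DH flux constraint used in \S \ref{sec:GLINT}) are implementation choices not specified here.
\begin{verbatim}
import numpy as np

def select_cluster(frames, bf_mask, n_cluster=128):
    # frames:  (N, H, W) image cube
    # bf_mask: (H, W) boolean mask of BF pixels used for the metric
    bf = frames[:, bf_mask]               # (N, n_pixBF) BF vectors
    best, best_diam = None, np.inf
    # Brute-force search: try each frame as a cluster center and
    # keep the tightest cluster (smallest diameter)
    for ref in range(frames.shape[0]):
        d = np.linalg.norm(bf - bf[ref], axis=1)
        members = np.argsort(d)[:n_cluster]
        diam = d[members].max()
        if diam < best_diam:
            best, best_diam = members, diam
    return best
\end{verbatim}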
\subsection{Results}
Results are compiled in Figure \ref{fig:DHcalibSCExAO}. The average of all 60000 frames (top left) shows the BF in the lower half of the image and the DH in the top half. The intensity variance across the 60000 images is shown for each pixel of the image at the bottom left, with both readout noise and photon noise variance terms subtracted to reveal actual speckle intensity variance.
The area used to select clusters of BF-similar images is shown in black in the top right image: it is the set of pixels used to compute a distance metric between BF realisations. A cluster of 128 images with similar BFs was identified, and the corresponding average and variance images are shown in the center column with the same brightness scales as the ensemble average and variance on the left.
To measure the algorithm performance, the variance across a set of images is computed. Denoting by $(x,y)$ the image spatial coordinates and by $k$ the image index, the BF variance across the whole set of images is:
\begin{equation}
\sigma^2_{BF,all} = \frac{1}{N_{all} \: N_{pixBF}} \sum_{k,\,(x,y) \in BF} \left( I(x,y,k) - \frac{1}{N_{all}} \sum_{k'} I(x,y,k') \right) ^2
\end{equation}
where $I(x,y,k)$ is the intensity of the pixel with spatial coordinates $(x,y)$ in frame number $k$, $N_{all}$ is the total number of frames, and $N_{pixBF}$ is the number of pixels $(x,y) \in BF$ in the bright field. Similarly, the variance $\sigma^2_{DH,all}$ is defined for the DH, and $\sigma^2_{BF,cluster}$ is the variance across the set of images within the selected cluster. The DH geometry used for the variance computation is shown in the top right panel of Fig. \ref{fig:DHcalibSCExAO}.
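In code, these scalar metrics reduce to averaging the per-pixel temporal variance over the corresponding pixel region, as in the following sketch (array names are ours):
\begin{verbatim}
import numpy as np

def region_variance(frames, mask):
    # frames: (N, H, W) image cube; mask: (H, W) boolean region
    region = frames[:, mask]          # (N, n_pix) pixel time series
    return region.var(axis=0).mean()  # temporal variance, pixel mean

# sigma2_BF_all     = region_variance(all_frames, bf_mask)
# sigma2_DH_all     = region_variance(all_frames, dh_mask)
# sigma2_BF_cluster = region_variance(all_frames[cluster], bf_mask)
\end{verbatim}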
Figure \ref{fig:DHcalibSCExAO} shows that the variance within the BF is 35.7 $\times$ smaller within the selected 128-sample cluster than across the full dataset. This is to be expected, as the cluster selection is based on minimizing the BF variance across the selected set of images. The corresponding measured DH variance is 30.7 $\times$ smaller within the cluster set than across the full input dataset. This last result demonstrates that images with similar BFs also have similar DHs. {\bf Image selection using BF intensity successfully constrains DH intensity, demonstrating that a BF-to-DH calibration algorithm can be derived to calibrate residual speckles in high contrast images}.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/singleframe.png}
\vspace*{0.3cm}
\caption{Single frame within the 128-sample cluster. The frame is shown with two different brightness scales (a and b) to highlight the BF and DH image areas. The difference between the frame and the average over the full dataset (c) shows strong signal within the BF, but no detectable signal in the DH due to low SNR.}
\label{fig:HCIsingleframe}
\end{figure}
A single frame within the selected set is shown in Fig. \ref{fig:HCIsingleframe}, along with its deviation from the average intensity image across the full set of images. The BF signal used for the selection is strongly visible in image (c), while the DH area is indistinguishable from the average DH intensity due to low SNR. The selection we performed would therefore not have been possible from the DH intensity alone. This also demonstrates the technique's ability to reconstruct the DH speckle map to an accuracy better than the readout and photon noise.
This last point is essential to the approach's main goal: wavefront variations can be tracked and their effect on the DH calibrated faster than the WFS timescale listed in Table \ref{tab:HCIobs}. For example, a mechanical vibration may create a time-variable speckle that would be indistinguishable from a planet within the DH. The same vibration would modulate the BF intensity, and a BF-to-DH algorithm would detect this modulation and calibrate out the offending DH speckle thanks to higher BF SNR. While the LDFC algorithm leverages the same SNR gain, it would require the LDFC control bandwidth to exceed the vibration timescale. This is not required for postprocessing, as variance analysis of the BF would reveal the vibration, provided that the image framerate (for the BF) is sufficiently fast to resolve the vibration.
\section{PSF reconstruction from WFS telemetry}
\label{sec:VAMPIRES}
While \S \ref{sec:specklecalib} demonstrates that a unique mapping exists between BF and DH, we have not constructed an algorithm to transform BF measurements into DH estimates. The non-linear relationship between the two spaces precludes conventional linear reconstructors, and the high dimensionality of the input BF image makes a lookup table impractical. We demonstrate in this section that a neural network is a viable approach to address this challenge.
\label{sec:WFStoPSF}
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{figures/WFSPSF_Figure.png}
\vspace*{0.3cm}
\caption{Prediction of the PSF from the Pyramid WFS data for an on-sky observation, using a neural network. The predicted PSF image (centre) is determined entirely from the current WFS image (left), and is seen to closely match the true PSF measured at that instant (right column). This example shows a PSF with a large amount of wavefront error (including strong coma) to provide a clear illustration.}
\label{fig:wfspsf}
\end{figure}
Wavefront sensors are commonly used to drive an adaptive optics control loop. It is also possible to reconstruct the instantaneous PSF from the current wavefront sensor data. To utilise the greatest possible amount of wavefront sensor information, it is desirable for the reconstruction to be performed using the raw wavefront sensor image, rather than the modal basis used in the AO system. However the relationship between the pixel intensities in the wavefront sensor image and the pixel intensities in the focal plane is non-linear, precluding the use of a simple linear reconstruction. Instead, a neural network is used, as these are highly capable at non-linear inference tasks.
Results of a PSF reconstruction for on-sky data are shown in Figure \ref{fig:wfspsf}. Here, a fully-connected neural network consisting of two 2000-unit layers was trained on 5 minutes of on-sky data, consisting of synchronised images from the pyramid wavefront sensor camera and the VAMPIRES visible camera\cite{2015MNRAS.447.2894N} (wavelength 750~nm), running at approximately 500 frames/sec. The network used ReLU activation functions and dropout between each layer as a regularizer, the latter proving to be crucial for successful reconstruction. While the fully-connected network shown here provides good results, certain advantages (such as a reduced number of parameters and robustness to pupil alignment drift) could be expected from a convolutional neural network, which is the focus of an ongoing study.
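As an illustration, a network of the kind described above can be written in a few lines of PyTorch. This is a sketch under our own naming; the input/output sizes, dropout rate, and training loss are assumptions not specified in the text.
\begin{verbatim}
import torch.nn as nn

class WFStoPSF(nn.Module):
    # Fully-connected regressor from a flattened WFS image
    # to a flattened PSF image, with two 2000-unit layers
    def __init__(self, n_wfs_pix, n_psf_pix, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_wfs_pix, 2000), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(2000, 2000),      nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(2000, n_psf_pix),
        )

    def forward(self, wfs_flat):
        return self.net(wfs_flat)

# Trained on synchronized (WFS frame, PSF frame) pairs with,
# e.g., an L2 loss: nn.MSELoss()
\end{verbatim}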
\section{Photonic nulling calibration}
\label{sec:GLINT}
While section \ref{sec:specklecalib} demonstrates that a mapping from BF to DH exists, we provided no practical solution to construct a BF-to-DH calibration algorithm. Due to the large number of input dimensions, a brute-force approach is not feasible. The BF-to-DH relationship may also slowly evolve due to slight changes in optical alignment or optical surface figures within the coronagraph optical train, rendering old BF-to-DH calibrations stale.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/GLINT-lab.png}
\vspace*{0.3cm}
\caption{Null calibration with the GLINT photonic nuller: Laboratory demonstration.}
\label{fig:GLINTlab}
\end{figure}
A photonic nuller is an alternative solution to the high contrast imaging challenge. Unlike a coronagraph constructed from bulk optics between which light freely propagates, the photonic nuller couples starlight into a small number of coherent singlemode waveguides. The waveguides are coherently combined to produce starlight destructive interference in null output(s). Bright starlight is directed to bright outputs which measure the intensity in input waveguides (photometry output(s)) and phase offset between input waveguides (WFS output(s)). The photonic nuller concept and its implementation are discussed in publications from the GLINT instrument team\cite{2020MNRAS.491.4180N, Martinod2021NatCo}.
The photonic nuller approach to high contrast imaging appears to be largely immune from the challenges affecting coronagraph systems with bulk optics (a toy model follows the list below):
\begin{itemize}
\item Starlight is coupled into a small number of coherent waveguides. At each wavelength, the light at the photonic device input is fully described by phase and amplitude (and possibly polarization), so the number of dimensions of the input is a few times the number of waveguides
\item The relationship between input variables (phase and amplitude of each waveguide) and output intensities is entirely established within the photonic chip, so it is significantly more stable than a train of bulk optical components subject to relative misalignments.
\end{itemize}
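The toy model below illustrates the first point for a single two-waveguide combiner at one wavelength: the photometric and anti-null (bright) outputs, together with the differential phase, fully determine the null output. This is a deliberately simplified sketch (monochromatic, lossless, no polarization), not the GLINT transfer function.
\begin{verbatim}
import numpy as np

def nuller_outputs(a1, a2, phi):
    # a1, a2: field amplitudes coupled into the two waveguides
    # phi:    differential phase between the waveguides [rad]
    e1 = a1
    e2 = a2 * np.exp(1j * phi)
    null      = 0.5 * np.abs(e1 - e2) ** 2  # destructive output
    anti_null = 0.5 * np.abs(e1 + e2) ** 2  # constructive output
    return null, anti_null, a1 ** 2, a2 ** 2

# Flat wavefront, balanced coupling: the null output is dark
print(nuller_outputs(1.0, 1.0, 0.0))  # (0.0, 2.0, 1.0, 1.0)
\end{verbatim}
In this model, any change in the null output is necessarily accompanied by a change in the bright outputs (the total flux is conserved and the photometry constrains the amplitudes), which is the property exploited by BF-to-DH calibration.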
We tested the DH calibration approach on the GLINT instrument\cite{Martinod2021NatCo} installed on the Subaru Telescope. Figure \ref{fig:GLINTlab} shows results obtained with the internal light source and dynamical wavefront aberrations injected using SCExAO's deformable mirror. The leftmost image (a) is the GLINT detector image averaged over the experiment. GLINT's 16 output waveguides are wavelength-dispersed and re-imaged on a near-IR camera, so the image shows 16 spectra, with wavelength ranging from 1340 nm (right edge) to 1690 nm (left edge). Images are acquired at a 1.4 kHz framerate to freeze atmospheric turbulence and vibrations. Two null output channels are labelled in the figure, and should be dark in the absence of input wavefront errors. Anti-null outputs are where fully constructive interference should occur for a flat wavefront. Photometry channels track the amount of flux in input waveguides. Other outputs encode the differential phase between input channels as interference fringes. Null \#1 is the destructive interference between two widely separated (B $=$ 5.5 m center-to-center) apertures on the 8.2 m Subaru telescope pupil, while null \#4 corresponds to a shorter (B $=$ 2.15 m center-to-center) baseline.
The left half of Fig. \ref{fig:GLINTlab} shows the average (a) and standard deviation (b) of all frames. Due to varying wavefront errors, null outputs are relatively bright, especially null \#1 which is highly sensitive to wavefront tip-tilt. Since wavefront variations in this test are larger than 1 radian, the average traces are mostly devoid of fringes, and the standard deviation is comparable to the average intensity. We note that the standard deviation is somewhat smaller, but still noticeable, for the photometric outputs, as the input wavefront errors are large enough to induce tip-tilt across input subapertures, resulting in a time-variable coupling efficiency loss.
We defined the BF selection area as all non-null output channels, and the DH as the two null outputs. The BF used for frame selection is outlined by the blue rectangles in Fig. \ref{fig:GLINTlab} panel (c). The output DH, consisting of the two null channels, is shown by the red rectangles.
A 20-sample cluster is selected based on BF similarity (blue rectangles), with an added constraint on total average flux within the null output channels. The corresponding average (c) and standard deviation (d) images demonstrate, as in \S \ref{sec:specklecalib}, improved stability of both BF and DH. In this experiment, the input WF errors were considerably larger so the interferometric signal (fringes) was washed out in the whole set average, and the average null depth was poor. The average of the BF selection shows a clear WFS signal and maintains good starlight suppression in the null outputs.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{figures/GLINT-sky.png}
\vspace*{0.3cm}
\caption{Null calibration with the GLINT photonic nuller: on-sky demonstration.}
\label{fig:GLINTsky}
\end{figure}
We also performed the experiment on-sky as shown in Fig. \ref{fig:GLINTsky}. The star $\alpha$ Boo (Arcturus) was observed with the same setup. Input wavefront errors are smaller than in the laboratory demonstration, so fringes are visible in the average image (a). We chose here a larger 100-sample cluster to mitigate readout noise and photon noise. The null is deeper and stable within the cluster, demonstrating that the BF selection successfully identifies high-quality frames within the dataset. The standard deviation in the null channels is small enough that it could not be measured, as it is well below the detector noise level. {\bf The quality (null depth and stability) of the BF-selected dataset is therefore significantly better than possible with a DH-selected dataset}. GLINT's detection of $\alpha$ Boo's finite angular diameter is visible as a stable non-null flux in null \#1.
On-sky results demonstrate the uniqueness of the null solution for a given BF measurement, which is a necessary condition for a BF-to-DH algorithm. There is no evidence for a measurement null space which would induce variations in the null outputs without a corresponding signature in the bright channels.
\section{Conclusion}
We have demonstrated that bright starlight in wavefront sensor(s) and bright parts of images can be used to reliably calibrate residual starlight in the dark hole regions of high contrast images. Our preliminary lab and on-sky tests show that this DH estimation is more precise than the DH photon noise, so it may be possible to have self-calibrating HCI systems operate at the photon noise limit imposed by total surface brightness.
We have shown that there exists an unambiguous BF-to-DH mapping and that, in our tests, there are no uncalibrated DH variations. While our tests are encouraging, we have not yet developed a reliable, practical BF-to-DH reconstruction algorithm, but we show that a neural network can solve a closely related PSF reconstruction problem. We note that our lab and on-sky tests were of short duration, and that the underlying BF-to-DH relationship may change over longer periods of time, possibly requiring frequent or continuous re-calibration. The approach appears to be especially powerful for photonic nulling devices, where the starlight suppression and wavefront sensing functions are integrated in a single, stable, small-size device.
Our findings indicate that future HCI systems should operate multiple wavefront sensors (WFSs) concurrently, so that the aggregate WFS information is collected using as much starlight as possible to provide a high fidelity, high frame rate estimate of the DH speckle field. In such a system, science images can be calibrated to high accuracy, reducing the chance of false positives and increasing data quality.
\acknowledgments
This work was supported by NASA grants \#80NSSC19K0336 and \#80NSSC19K0121. This work is based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The authors also wish to acknowledge the critical importance of the current and recent Subaru Observatory daycrew, technicians, telescope operators, computer support, and office staff employees. Their expertise, ingenuity, and dedication is indispensable to the continued successful operation of these observatories. The development of SCExAO was supported by the Japan Society for the Promotion of Science (Grant-in-Aid for Research \#23340051, \#26220704, \#23103002, \#19H00703 \& \#19H00695), the Astrobiology Center of the National Institutes of Natural Sciences, Japan, the Mt Cuba Foundation and the director's contingency fund at Subaru Telescope. KA acknowledges support from the Heising-Simons foundation.
|
{
"timestamp": "2021-10-05T02:41:01",
"yymm": "2109",
"arxiv_id": "2109.13958",
"language": "en",
"url": "https://arxiv.org/abs/2109.13958"
}
|
\section{Introduction}\label{intro}
Finding correspondences between pairs of images is a fundamental computer vision problem with numerous applications, including image alignment~\cite{BrownL07, GLAMpoint, shrivastava-sa11}, video analysis~\cite{SimonyanZ14}, image manipulation~\cite{HaCohenSGL11, LiuYT11}, Structure-from-Motion (SfM)~\cite{Wu13, SchonbergerF16}, and Simultaneous Localization and Mapping (SLAM)~\cite{EngelKC18}.
Correspondence estimation has traditionally been dominated by sparse approaches~\cite{SIFT, SURF, Brief, ORB, Belousov2017, superpoint, Dusmanu2019CVPR, OnoTFY18, R2D2descriptor, DELF}, which first detect local keypoints in salient regions that are then matched. However, recent years have seen a growing interest in dense methods~\cite{Melekhov2019, Rocco2018a, GLUNet}.
By predicting a match for every single pixel in the image, these methods open the door to additional applications, such as texture or style transfer~\cite{Kim2019, Liao2017}.
Moreover, dense methods do not require detection of salient and repeatable keypoints, which itself is a challenging problem.
Dense correspondence estimation has most commonly been addressed in the context of optical flow~\cite{Baker2011, Horn1981, Hur2020OpticalFE,RAFT}, where the image pairs represent consecutive frames in a video. While these methods excel in the case of small appearance changes and limited displacements, they cannot cope with the challenges posed by the more general geometric matching task. In geometric matching, the images can stem from radically different views of the same scene, often captured by different cameras and at different occasions.
This leads to large displacements and significant appearance transformations between the frames.
In contrast to optical flow, the more general dense correspondence problem has received much less attention~\cite{Melekhov2019,Rocco2018b,RANSAC-flow, GLUNet, pdcnet}.
Dense flow estimation is prone to errors in the presence of large displacements, appearance changes, or homogeneous regions. It is also ill-defined in the case of occlusions or in, \emph{e.g.}, the sky, where predictions are bound to be inaccurate (Fig.~\ref{fig:intro}c). For geometric matching applications, it is thus crucial to know when and where to trust the estimated correspondences.
The identification of inaccurate or incorrect matches is particularly important in, for instance, dense 3D reconstruction~\cite{SchonbergerF16}, high quality image alignment~\cite{shrivastava-sa11,GLAMpoint}, and multi-frame image restoration~\cite{burstsr}.
Moreover, dense confidence estimation bridges the gap between the application domains of the dense and sparse correspondence estimation paradigms. It enables the selection of robust and accurate matches from the dense output, to be utilized in, \emph{e.g.}, pose estimation and image-based localization.
Uncertainty estimation is also indispensable for safety-critical tasks, such as autonomous driving and medical imaging.
In this work, we set out to widen the application domain of dense correspondence estimation by learning to predict reliable confidence values (Fig.~\ref{fig:intro}d).
\begin{figure*}[t]
\centering%
\begin{tabular}{c}
\small{
\hspace{1.3cm} (a) Query image \hspace{1.7cm} (b) Reference image \hspace{1.7cm} (c) Baseline \hspace{1.9cm} (d) \textbf{PDC-Net+} (Ours) \hspace{2cm}}
\end{tabular}
\includegraphics[width=0.89\textwidth]{image/intro.pdf}
\vspace{-3mm}\caption{
Estimating dense correspondences between the query (a) and the reference (b) image. The query is warped according to the predicted flows (c)-(d).
The baseline (c) does not estimate an uncertainty map and is therefore unable to filter out the inaccurate predictions in, \emph{e.g.}, occluded and homogeneous regions. In contrast, our PDC-Net+ (d) not only learns more accurate correspondences, but also when to trust them. It predicts a robust uncertainty map that identifies accurate matches and excludes incorrect and unmatched pixels (red).
}\vspace{-2mm}
\label{fig:intro}
\end{figure*}
We propose the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, for joint learning of dense flow estimation along with its uncertainties. It is applicable even for extreme appearance and view-point changes, often encountered in geometric matching scenarios.
Our model learns to predict the conditional probability density of the dense flow between two images. In order to accurately capture the uncertainty of inlier and outlier flow predictions, we introduce a \emph{constrained mixture model}. In contrast to predicting a single variance, our formulation allows the network to directly assess the probability of a match being an inlier or outlier. By constraining the variances, we further enforce the components to focus on separate uncertainty intervals, which effectively resolves the ambiguity caused by the permutation invariance of the mixture model components.
Learning reliable and generalizable uncertainties without densely annotated real-world training data is a highly challenging problem.
We tackle this issue from the architecture and the data perspective in the context of self-supervised training.
Directly predicting the uncertainties using the flow decoder leads to highly over-confident predictions in, for instance, texture-less regions of real scenes. This stems from the network's ability to extrapolate neighboring matches during self-supervised training. We alleviate this problem by introducing an architecture that processes each spatial location of the correlation volume independently, leading to robust and generalizable uncertainty estimates.
We also revisit the data generation problem for self-supervised learning. We find that current strategies yield too predictable flow fields, which leads to inaccurate uncertainty estimates on real data. To this end, we introduce random perturbations in the synthetic ground-truth flow fields.
Since our strategy encourages the network to focus on the local appearance rather than simplistic smoothness priors, it improves the uncertainty prediction in, for example, homogeneous regions.
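As a schematic illustration of the idea (not the exact procedure used in our pipeline), a smooth random perturbation field can be added on top of a simple synthetic base flow, so that the ground truth can no longer be recovered from smoothness priors alone. Names and parameter values below are ours.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def perturbed_flow(base_flow, amplitude=2.0, smoothness=15.0):
    # base_flow: (H, W, 2) synthetic ground-truth flow,
    # e.g. sampled from a random homography
    noise = np.random.randn(*base_flow.shape)
    # Smooth the noise spatially (not across the u/v channels)
    pert = gaussian_filter(noise, sigma=(smoothness, smoothness, 0))
    # Normalize to a maximum perturbation of `amplitude` pixels
    pert *= amplitude / (np.abs(pert).max() + 1e-8)
    return base_flow + pert
\end{verbatim}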
To better simulate moving objects and occlusions encountered in real scenes, we further improve upon the self-supervised data generation pipeline by iteratively adding multiple independently moving objects onto a base image pair.
However, since the base image pair is related by a simple background transformation, the network tends to primarily focus on its flow, at the expense of the object motion.
To better encourage the network to learn the more challenging object flows, we introduce an injective criterion for masking out regions from the objective.
Our approach only masks out occluded regions that violate a one-to-one ground-truth mapping. This allows the network to focus on flow estimation of \emph{visible} moving objects as opposed to occluded background regions, while simultaneously learning vital interpolation and extrapolation capabilities.
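To illustrate the idea (not our exact criterion), the sketch below flags ground-truth flow vectors that violate a one-to-one mapping by splatting each source pixel onto its rounded target cell and counting collisions; the full criterion must additionally decide which of the colliding pixels is the visible one.
\begin{verbatim}
import numpy as np

def injective_mask(flow):
    # flow: (H, W, 2) ground-truth flow; pixel (i, j) maps to
    # (i + flow[i, j, 1], j + flow[i, j, 0]) in the other image.
    # Returns True where the mapping is one-to-one.
    H, W, _ = flow.shape
    jj, ii = np.meshgrid(np.arange(W), np.arange(H))
    ti = np.rint(ii + flow[..., 1]).astype(int)
    tj = np.rint(jj + flow[..., 0]).astype(int)
    inside = (ti >= 0) & (ti < H) & (tj >= 0) & (tj < W)
    counts = np.zeros((H, W), dtype=int)
    np.add.at(counts, (ti[inside], tj[inside]), 1)  # splat counts
    mask = np.zeros((H, W), dtype=bool)
    mask[inside] = counts[ti[inside], tj[inside]] == 1
    return mask
\end{verbatim}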
Our final PDC-Net+ approach generates a predictive distribution, from which we extract the mean flow field and its corresponding confidence map. To ensure its versatility in different domains and applications, we design multiple inference strategies. In particular, we utilize our confidence estimation to further improve the final flow prediction in scenarios with extreme view-point changes, by proposing both a multi-stage and a multi-scale approach.
We apply PDC-Net+ to a variety of tasks and datasets. Our approach sets a new state-of-the-art on the MegaDepth~\cite{megadepth} and the RobotCar~\cite{RobotCar} geometric matching datasets.
Without any task-specific fine-tuning, PDC-Net+ generalizes to the optical flow domain by outperforming recent state-of-the-art methods~\cite{RAFT} on the KITTI-2015 training set~\cite{Geiger2013}.
We further apply our approach to pose estimation on both the outdoor YFCC100M~\cite{YFCC} and the indoor ScanNet~\cite{scannet} datasets, outperforming previous dense methods and ultimately closing the gap to state-of-the-art sparse approaches.
We also validate our method for image-based localization and dense 3D reconstruction on the Aachen dataset~\cite{SattlerMTTHSSOP18, SattlerWLK12}. Lastly, we demonstrate that the confidence estimation provided by PDC-Net+ can be directly used for robust image retrieval.
The rest of the manuscript is organized as follows. We review related work in Section~\ref{sec:related-work}. Section~\ref{sec:method} introduces our approach, PDC-Net+. Extensive experimental results are presented in Section~\ref{sec:exp}. Finally, we discuss conclusions and future work in Section~\ref{sec:conclusion}.
\section{Related work}
\label{sec:related-work}
\subsection{Correspondence estimation}
\parsection{Sparse matching} Sparse methods generally consist of three stages: keypoint detection, feature description, and feature matching. Keypoint detectors and descriptors are either hand-crafted~\cite{SIFT, SURF, Brief, ORB} or learned~\cite{GLAMpoint, Belousov2017, superpoint, Dusmanu2019CVPR, OnoTFY18, R2D2descriptor, DELF}.
Feature matching is generally treated as a separate stage, where descriptors are matched exhaustively. This is followed by heuristics, such as the ratio test~\cite{SIFT}, or robust matchers and filtering methods. The filtering methods are either hand-crafted~\cite{abs-1803-07469, ransac} or learned~\cite{BrachmannR19, YiTOLSF18, OANet, SarlinDMR20}. Our approach instead is learned end-to-end and directly predicts dense correspondences from an image pair, without the need for detecting local keypoints.
\parsection{Sparse-to-Dense matching} While sparse methods have yielded very competitive results, they rely on the detection of stable and repeatable keypoints across images, which is a highly challenging problem.
Recently, Germain~\emph{et al}\onedot~\cite{GermainBL19} proposed to transform the sparse-to-sparse paradigm into a sparse-to-dense approach. Instead of trying to detect repeatable feature points across images, feature detection is performed asymmetrically and correspondences are searched exhaustively in the other image. The more recent work S2DNet~\cite{GermainBL20} casts the correspondence learning problem as a supervised classification task and learns multi-scale feature maps.
\parsection{Dense-to-sparse matching} These approaches start from a correlation volume, that densely matches feature vectors between two feature maps at low resolution. These matches are then sparsified and processed at higher resolution to obtain a final set of refined matches. In SparseNC-Net, Rocco~\emph{et al}\onedot~\cite{Rocco20} sparsify the 4D correlation by projecting it onto a sub-manifold. Dual-RC-Net~\cite{DualRCNet} and XRC-Net~\cite{Xreo} instead use a coarse-to-fine re-weighting mechanism to guide the search for the best match in a fine resolution correlation map. Recently, Sun~\emph{et al}\onedot~\cite{LOFTR} introduced LoFTR, a Transformer-based architecture which also first establishes pixel-wise dense matches at a coarse level and later refines the good matches at a finer scale.
\parsection{Dense matching} Dense methods instead directly predict dense matches.
Rocco~\emph{et al}\onedot~\cite{Rocco2018b} rely on a correlation layer to perform the matching and further propose an end-to-end trainable neighborhood consensus network, NC-Net.
However, it is very memory expensive, which makes it difficult to scale up to higher image resolutions, and thus limits the accuracy of the resulting matches.
Similarly, Wiles~\emph{et al}\onedot~\cite{D2D} learn dense descriptors conditioned on an image pair, which are then matched with a feature correlation layer. Other related approaches seek to learn correspondences on a semantic level, between instances of the same class~\cite{Rocco2017a, Rocco2018a, ArbiconNet, SeoLJHC18, GLUNet, MinLPC20, Kim2019}.
Most related to our work are dense flow regression methods, that predict a dense correspondence map or flow field relating an image pair.
While dense regression methods were originally designed for the optical flow task~\cite{Dosovitskiy2015, Ilg2017a, Hui2018, Hui2019, Sun2018, Sun2019, RAFT, DDFlow, SefFlow, ARFlow}, recent works have extended such approaches to the geometric matching scenario, in order to handle large geometric and appearance transformations.
Melekhov~\emph{et al}\onedot~\cite{Melekhov2019} introduced DGC-Net, a coarse-to-fine Convolutional Neural Network (CNN)-based framework that generates dense correspondences between image pairs. It relies on a global cost volume constructed at the coarsest resolution. However, this method restricts the network input resolution to a fixed low dimension.
Truong~\emph{et al}\onedot~\cite{GLUNet} proposed GLU-Net to learn dense correspondences without such a constraint on the input resolution, by integrating both global and local correlation layers. The authors further introduced GOCor~\cite{GOCor}, an online optimization-based matching module acting as a direct replacement to the feature correlation layer. It significantly improves the accuracy and robustness of the predicted dense correspondences.
Shen~\emph{et al}\onedot~\cite{RANSAC-flow} proposed RANSAC-Flow, a two-stage image alignment method. It performs coarse alignment with multiple homographies using RANSAC on off-the-shelf deep features, followed by a fine-grained alignment. Recently, Huang~\emph{et al}\onedot~\cite{LIFE} adopted the RAFT architecture~\cite{RAFT}, originally designed for optical flow, and trained it for pairs with large lighting variations in a weakly-supervised framework based on epipolar constraints. COTR~\cite{COTR} uses a Transformer-based architecture to retrieve matches at any queried locations, which are later densified. However, the process is computationally costly, making it impractical for many applications.
In contrast to these works, we propose a unified network that estimates the flow field along with probabilistic uncertainties.
\subsection{Confidence estimation}
\parsection{Confidence estimation in geometric matching}
Only very few works have explored confidence estimation in the context of dense geometric or semantic matching.
Novotny~\emph{et al}\onedot~\cite{NovotnyCVPR18Self} estimate the reliability of their trained descriptors by using a self-supervised probabilistic matching loss for the task of semantic matching.
A few approaches~\cite{DCCNet, Rocco20, Rocco2018b, DualRCNet, Xreo} represent the final correspondences as a 4D correspondence volume, thus inherently encoding a confidence score for each tentative match. However, generating one final reliable confidence value for each match is difficult since multiple high-scoring alternatives often co-occur.
Similarly, Wiles~\emph{et al}\onedot~\cite{D2D} predict a distinctiveness score along with learned descriptors. However, it is trained with hand-crafted heuristics. In contrast, we do not need to generate annotations to train our uncertainty prediction nor to make assumptions on what it should capture. Instead we learn it solely from correspondence ground-truth in a probabilistic manner.
In DGC-Net, Melekhov~\emph{et al}\onedot~\cite{Melekhov2019} predict both dense correspondences and a matchability map. However, the matchability map is only trained to identify out-of-view pixels rather than to reflect the actual reliability of the matches. Recently, RANSAC-Flow~\cite{RANSAC-flow} also learns a matchability mask using a combination of losses.
In contrast, we introduce a probabilistic formulation that is learned with a single unified loss -- the negative log-likelihood.
\parsection{Uncertainty estimation in optical flow}
While optical flow has been a long-standing subject of active research, only a handful of methods provide uncertainty estimates.
A few approaches~\cite{Aodha2013LearningAC, Barron94performanceof, KondermannKJG07, KondermannMG08, KybicN11} treat the uncertainty estimation as a post-processing step.
Recently, some works propose probabilistic frameworks for joint optical flow and uncertainty prediction. They either estimate the model uncertainty~\cite{GalG15, IlgCGKMHB18}, also known as epistemic uncertainty~\cite{KendallG17}, or focus on the uncertainty from the observation itself, referred to as aleatoric uncertainty~\cite{KendallG17}.
Following recent works~\cite{Gast018, YinDY19}, we aim at capturing aleatoric uncertainty.
Yet, the uncertainty estimates have to generalize to real scenes, which is particularly challenging in the context of self-supervised learning.
Wannenwetsch~\emph{et al}\onedot~\cite{ProbFlow} introduced ProbFlow, a probabilistic approach applicable to energy-based optical flow algorithms~\cite{Barron94performanceof, RevaudWHS15, Sun2014}.
Gast~\emph{et al}\onedot~\cite{Gast018} proposed probabilistic output layers that require only minimal changes to existing networks.
Yin~\emph{et al}\onedot~\cite{YinDY19} introduced HD$^3$F, a method which estimates uncertainty locally, at multiple spatial scales, and further aggregates the results.
Whereas these approaches are carefully designed for optical flow data and restricted to small displacements, we consider the more general setting of estimating reliable confidence values for dense geometric matching, applicable to, \emph{e.g.}, pose estimation and 3D reconstruction. This brings additional challenges, including coping with significant appearance changes and large geometric transformations.
\subsection{Differences from the preliminary version \cite{pdcnet}}
This paper extends our work PDC-Net~\cite{pdcnet}, which was published at CVPR 2021.
Our extended paper contains several new additions compared to its preliminary version.
(i) We introduce an injective criterion for masking out occluded regions that violate a one-to-one ground-truth flow. It allows the network to better learn matching of independently moving objects, while still learning vital interpolation and extrapolation capabilities in occluded regions.
(ii) We propose an enhanced self-supervised data generation pipeline by introducing multiple independently moving objects to better model challenges encountered in real scenes.
(iii) We present additional ablation studies, in particular analyzing the effectiveness of our injective mask and self-supervised training strategy.
(iv) We demonstrate the superiority of our confidence estimation over additional baselines, such as variance and forward-backward consistency error~\cite{Meister2017}.
(v) We present experiments on the indoor pose estimation dataset ScanNet~\cite{scannet}, which demonstrates the generalization properties of our approach, solely trained on outdoor data.
(vi) We evaluate our dense approach on the Homography dataset HPatches~\cite{Lenc}, in both the dense and sparse settings.
(vii) We further validate our dense flow and confidence estimation for image-based localization on the Aachen dataset~\cite{SattlerMTTHSSOP18, SattlerWLK12}.
(viii) We propose an approach for image retrieval that is fully based on the confidence estimates provided by PDC-Net+, and evaluate its performance against state-of-the-art global descriptor retrieval methods on the Aachen dataset~\cite{SattlerMTTHSSOP18, SattlerWLK12}.
(ix) We introduce a strategy for employing PDC-Net+ to establish matches given sets of sparse keypoints. Its effectiveness is directly validated on HPatches~\cite{Lenc} and for image-based localization on the Aachen dataset~\cite{SattlerMTTHSSOP18, SattlerWLK12}.
\section{Our Approach}
\label{sec:method}
We introduce PDC-Net+, a method for estimating the dense flow field relating two images, coupled with a robust pixel-wise confidence map. The latter indicates the reliability and accuracy of the flow prediction, which is indispensable for applications such as pose estimation, image manipulation, and 3D reconstruction.
\subsection{Probabilistic Flow Regression}
\label{subsec:proba-model}
We formulate dense correspondence estimation with a probabilistic model, which provides a unified framework to learn both the flow and its confidence.
For a given image pair $X = \left(I^q, I^r \right)$ of spatial size $H \times W$, the aim of dense matching is to estimate a flow field $Y \in \mathbb{R}^{H \times W \times 2}$ relating the reference $I^r$ to the query $I^q$. Most learning-based methods address this problem by training a network $F$ with parameters $\theta$ that directly predicts the flow as $Y = F(X; \theta)$. However, this does not provide any information about the confidence of the prediction.
Instead of generating a single flow prediction $Y$, our goal is to learn the conditional probability density $p(Y| X; \theta)$ of a flow $Y$ given the input image pair $X = \left(I^q, I^r \right)$. This is generally achieved by letting a network predict the parameters $\Phi(X; \theta)$ of a family of distributions $p(Y | X; \theta) = p(Y|\Phi(X; \theta)) = \prod_{ij} p(y_{ij} | \varphi_{ij}(X; \theta))$. To ensure a tractable estimation of the dense flow, conditional independence of the predictions at different spatial locations $(i,j)$ is generally assumed. We use $y_{ij} \in \mathbb{R}^2$ and $\varphi_{ij} \in \mathbb{R}^n$ to denote the flow $Y$ and predicted parameters $\Phi$ respectively, at the spatial location $(i,j)$. In the following, we generally drop the sub-script $ij$ to avoid clutter.
Compared to the direct approach $Y = F(X; \theta)$, the generated parameters $\Phi(X; \theta)$ of the predictive distribution can encode more information about the flow prediction, including its uncertainty. In probabilistic regression techniques for optical flow~\cite{Gast018, IlgCGKMHB18} and a variety of other tasks~\cite{KendallG17, Shen2020, Walz2020}, this is most commonly performed by predicting the \emph{variance} of the estimate $y$.
In these cases, the predictive density $p(y|\varphi)$ is modeled using Gaussian or Laplace distributions. In the latter case, the density is given by,
\begin{equation}
\label{eq:laplace}
\mathcal{L}(y| \mu, \sigma^2) = \frac{1}{\sqrt{2 \sigma_u^2}} e^{-\sqrt{\frac{2}{\sigma_u^2}}|u-\mu_u|} \cdot \frac{1}{\sqrt{2 \sigma_v^2}} e^{-\sqrt{\frac{2}{\sigma_v^2}}|v-\mu_v|}
\end{equation}
where the components $u$ and $v$ of the flow vector $y=(u,v) \in \mathbb{R}^2$ are modelled with two conditionally independent Laplace distributions. The mean $\mu=[\mu_u, \mu_v]^T \in \mathbb{R}^2$ and variance $\sigma^2=[\sigma^2_u, \sigma^2_v]^T \in \mathbb{R}^2_+$ of the distribution $p(y|\varphi) = \mathcal{L}(y | \mu, \sigma^2)$ are predicted by the network as $(\mu, \sigma^2) = \varphi(X; \theta)$ at every spatial location.
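For later reference, taking the negative logarithm of \eqref{eq:laplace} gives the per-pixel loss contribution
\begin{equation*}
-\log \mathcal{L}(y| \mu, \sigma^2) = \frac{1}{2}\log\left(2 \sigma_u^2\right) + \sqrt{\frac{2}{\sigma_u^2}}\,|u-\mu_u| + \frac{1}{2}\log\left(2 \sigma_v^2\right) + \sqrt{\frac{2}{\sigma_v^2}}\,|v-\mu_v| \,,
\end{equation*}
which makes explicit how a small predicted variance rewards accurate predictions through the $\log$ terms while heavily penalizing large errors. This trade-off also underlies the unboundedness issue discussed in Sec.~\ref{sec:constained-mixture}.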
\subsection{Constrained Mixture Model Prediction}
\label{sec:constained-mixture}
\begin{figure}[t]
\centering%
\includegraphics[width=0.40\textwidth]{image/error_stat.pdf}
\vspace{-2mm}
\caption{Distribution of errors
$|\widehat{y}-y|$ on MegaDepth~\cite{megadepth} between the flow $\widehat{y}$ estimated by GLU-Net~\cite{GLUNet} and the ground-truth $y$.
}
\vspace{-4mm}
\label{fig:distribution}
\end{figure}
Fundamentally, the goal of probabilistic deep learning is to achieve a predictive model $p(y|X;\theta)$ that coincides with empirical probabilities as well as possible. We can get important insights into this problem by studying the empirical error distribution of a state-of-the-art matching model, in this case GLU-Net~\cite{GLUNet}. As visualized in Fig.~\ref{fig:distribution},
errors can be categorized into two populations: inliers (in red) and outliers (in blue). Current probabilistic methods~\cite{Gast018, IlgCGKMHB18,abs-2010-04367} mostly rely on a Laplacian model \eqref{eq:laplace} of $p(y|X;\theta)$. Such a model is effective for correspondences that are easily estimated to be either inliers or outliers with \emph{high certainty}, since their distributions are captured by predicting a low or high variance respectively.
However, often the network is not certain whether a match is an inlier or an outlier. A single Laplace can only predict an intermediate variance, which does not faithfully represent the more complicated uncertainty pattern in this case.
\begin{figure}[b]
\vspace{-3mm}
\centering%
{\includegraphics[width=0.63\columnwidth]{image/pair.png}}~%
{\includegraphics[width=0.35\columnwidth]{image/graph.png}}
\vspace{-2mm}
\caption{Predictive log-density $\log p(y|X)$ \eqref{eq:mixture}-\eqref{eq:constraint} for an inlier (red), outlier (blue), and ambiguous (green) match. Our mixture model faithfully represents the uncertainty also in the latter case.
}
\label{fig:predictive}
\end{figure}
\begin{figure*}[t]
\centering%
\includegraphics[width=\textwidth]{image/unc_dec.pdf}
\vspace{-5mm}\caption{The proposed architecture for flow and uncertainty estimation. The correlation uncertainty module $U_\theta$ independently processes each 2D-slice $C_{ij\cdot\cdot}$ of the correlation volume. Its output is combined with the estimated mean flow $\mu$ and the mixture model parameters $\phi$ from the previous scale level. These are then given to the uncertainty predictor, which finally estimates the weight $\{\alpha_m\}_1^M$ and variance $\{\sigma^2_m\}_1^M$ parameters of our constrained mixture model \eqref{eq:mixture}-\eqref{eq:scalepred}.}\vspace{-4mm}
\label{fig:arch}
\end{figure*}
\parsection{Mixture model}
To achieve a flexible model capable of fitting more complex distributions, we parametrize $p(y | X; \theta)$ with a mixture model. In general, we consider a distribution consisting of $M$ components,
\begin{equation}
\label{eq:mixture}
p\left(y | \varphi \right)=\sum_{m=1}^{M} \alpha_{m} \mathcal{L}\left(y |\mu, \sigma^2_m\right) \,.
\end{equation}
While we have here chosen Laplacian components \eqref{eq:laplace}, any simple density function can be used. The scalars $\alpha_{m} \geq 0$ control the weight of each component, satisfying $\sum_{m=1}^{M} \alpha_{m} = 1$. Note that all components share the same mean $\mu$, which can thus be interpreted as the estimated flow vector. However, each component has a different variance $\sigma^2_m$.
The distribution \eqref{eq:mixture} is therefore unimodal, but can capture more complex uncertainty patterns. In particular, it allows modeling the inlier (red) and outlier (blue) populations in Fig.~\ref{fig:distribution} using separate Laplace components. The network can then predict the probability of a match being an inlier or outlier through the corresponding mixture weights $\alpha_m$. This is visualized in Fig.~\ref{fig:predictive} for a mixture with $M=2$ components. The red and blue matches are predicted with certainty as inlier and outlier respectively, thus requiring only a single active component. In ambiguous cases (green), our mixture model \eqref{eq:mixture} predicts the probability of inlier vs.\ outlier, each modeled with a separate component, giving a better fit compared to the single-component alternative.
\parsection{Mixture constraints}
To employ the mixture model \eqref{eq:mixture}, the network $\Phi$ needs, for each pixel location, to predict the mean flow $\mu$ along with the variance $\sigma^2_m$ and weight $\alpha_m$ of each component, as $\big( \mu, (\alpha_m )_{m=1}^M, ( \sigma^2_m )_{m=1}^M \big) = \varphi(X; \theta)$.
However, an issue when predicting the parameters of a mixture model is its permutation invariance. That is, the predicted distribution \eqref{eq:mixture} is unchanged even if we change the order of the individual components. This can cause confusion in the learning, since the network first needs to \emph{decide} what each component should model before estimating the individual weights $\alpha_m$ and variances $\sigma^2_m$. As shown by our experiments in Sec.~\ref{subsec:ablation-study}, this problem severely degrades the quality of the estimated uncertainties.
We therefore propose a model that breaks the permutation invariance of the mixture \eqref{eq:mixture}. It simplifies the learning and greatly improves the robustness of the estimated uncertainties. In essence, each component $m$ is tasked with modeling a specified range of variances $\sigma^2_m$. We achieve this by constraining the mixture \eqref{eq:mixture} as,
\begin{equation}
\label{eq:constraint}
0 < \beta_1^- \leq \sigma^2_1 \leq \beta_1^+ \leq \beta_2^- \leq \sigma^2_2 \leq \ldots \leq \sigma^2_M \leq \beta_M^+
\end{equation}
For simplicity, we here assume a single variance parameter $\sigma^2_m$ for both the $u$ and $v$ directions in \eqref{eq:laplace}.
The constants $\beta_m^-, \beta_m^+$ specify the range of variances $\sigma^2_m$. Intuitively, each component is thus responsible for a different range of uncertainties, roughly corresponding to different regions in the error distribution in Fig.~\ref{fig:distribution}. In particular, component $m=1$ accounts for the most accurate predictions, while component $m=M$ models the largest errors and outliers.
To enforce the constraint \eqref{eq:constraint}, we first predict an unconstrained value $h_m \in \mathbb{R}$, which is then mapped to the given range as,
\begin{equation}
\label{eq:scalepred}
\sigma^2_m = \beta_{m}^- + (\beta_m^+ - \beta_{m}^-)\, \text{Sigmoid}(h_m) .
\end{equation}
The constraint values $\beta_m^+, \beta_m^-$ can either be treated as hyper-parameters or learned end-to-end alongside $\theta$.
Lastly, we emphasize an interesting interpretation of our constrained mixture formulation \eqref{eq:mixture}-\eqref{eq:constraint}. Note that the predicted weights $\alpha_m$, in practice obtained through a final SoftMax layer, represent the probabilities of each component $m$. The network therefore effectively \emph{classifies} the flow prediction at each pixel into the separate uncertainty intervals \eqref{eq:constraint}.
In fact, our network learns this ability without any extra supervision as detailed next.
\begin{figure*}[t]
\centering%
\vspace{-3mm}
\subfloat[Query image \label{fig:arch-visual-query}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/source.png}}~%
\subfloat[Reference image \label{fig:arch-visual-ref}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/target.png}}~%
\subfloat[\centering Common decoder\label{fig:arch-visual-common}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/simple_old_data.png}}~%
\subfloat[\centering Our decoder \label{fig:arch-visual-sep}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/our_approach_old_data.png}}~%
\subfloat[\centering Our decoder and data \label{fig:arch-visual-sep-pertur}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/our_approach_new_data.png}}~%
\subfloat[\centering RANSAC-Flow \label{fig:arch-visual-ransac}]{\includegraphics[width=0.16\textwidth]{ image/architecture_comparison/RANSAC_flow.png}}
\vspace{-1mm}\caption{Visualization of the estimated uncertainties by masking the warped query image to only show the confident flow predictions. The standard approach \protect\subref{fig:arch-visual-common} uses a common decoder for both flow and uncertainty estimation. It generates overly confident predictions in the sky and grass. The uncertainty estimates are substantially improved in \protect\subref{fig:arch-visual-sep}, when using the proposed architecture described in Sec.~\ref{sec:uncertainty-arch}.
Adding the flow perturbations for self-supervised training (Sec.~\ref{subsec:perturbed-data}) further improves the robustness and generalization of the uncertainties \protect\subref{fig:arch-visual-sep-pertur}. For reference, we also visualize the flow and confidence mask \protect\subref{fig:arch-visual-ransac} predicted by the recent state-of-the-art approach RANSAC-Flow~\cite{RANSAC-flow}.
}\vspace{-4mm}
\label{fig:arch-visual}
\end{figure*}
\parsection{Training objective} As customary in probabilistic regression~\cite{prdimp, Gast018, ebmregECCV2020, IlgCGKMHB18, KendallG17, Shen2020, VarameshT20, Walz2020}, we train our method using the negative log-likelihood as the only objective. For one input image pair $X = \left(I^q, I^r \right)$ and corresponding ground-truth flow $Y$, the objective is given by
\begin{equation}
\label{eq:nll}
- \log p\big(Y|\Phi(X;\theta)\big) = - \sum_{ij} \log p\big(y_{ij} | \varphi_{ij}(X;\theta)\big)
\end{equation}
In Appendix~B.1, we provide efficient analytic expressions of the loss \eqref{eq:nll} for our constrained mixture \eqref{eq:mixture}-\eqref{eq:constraint}, that also ensure numerical stability.
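To make the objective concrete, the sketch below evaluates \eqref{eq:nll} for the constrained Laplace mixture, assuming a single variance per component as in \eqref{eq:laplace}. All names and shapes are illustrative, and the analytic simplifications of Appendix~B.1 are replaced here by a plain logsumexp for numerical stability.
\begin{verbatim}
import torch

def mixture_laplace_nll(mu, sigma2, log_alpha, y):
    # mu, y: (B, 2, H, W) predicted mean flow and ground-truth flow.
    # sigma2: (B, M, H, W) constrained variances.
    # log_alpha: (B, M, H, W) log-weights, e.g. from a log-softmax.
    b = torch.sqrt(sigma2 / 2.0)                  # Laplace scale from variance
    l1 = (y - mu).abs().sum(dim=1, keepdim=True)  # |e_u| + |e_v|, (B, 1, H, W)
    log_p = -2.0 * torch.log(2.0 * b) - l1 / b    # per-component log-density
    log_mix = torch.logsumexp(log_alpha + log_p, dim=1)  # (B, H, W)
    return -log_mix.sum(dim=(1, 2))               # Eq. (nll), summed over pixels
\end{verbatim}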
\parsection{Bounding the variance}
Our constrained formulation \eqref{eq:constraint} allows us to circumvent another issue with the direct application of the mixture model \eqref{eq:mixture}. When trained with the standard negative log-likelihood objective \eqref{eq:nll}, an unconstrained model encourages the network to primarily focus on easy correspondences. By accurately predicting such matches with high confidence (low variance), the network can arbitrarily reduce the loss during training. Generating fewer, but highly accurate predictions then dominates during training at the expense of more challenging regions. Fundamentally, this problem appears since the negative log-likelihood loss is theoretically unbounded in this case. We solve this issue by simply setting a non-zero lower bound $0 < \beta_1^-$ for the predicted variances in our formulation \eqref{eq:constraint}. This effectively provides a lower bound on the negative log-likelihood loss itself, leading to a well-behaved objective. Such a constraint can be seen as adding a prior term in order to regularize the likelihood. In our experiments, we simply set the lower bound to a standard deviation of one pixel, i.e.\ $1 = \beta_1^- \leq \sigma^2_1$.
\subsection{Uncertainty Prediction Architecture}
\label{sec:uncertainty-arch}
Our aim is to predict an uncertainty value that quantifies the \emph{reliability} of a proposed correspondence or flow vector. Crucially, the uncertainty prediction needs to \emph{generalize} well to real scenarios, not seen during training.
However, this is particularly challenging in the context of self-supervised training, which relies on synthetically warped images or animated data. Specifically, when trained on simple synthetic motion patterns, such as homography transformations, the network learns to heavily rely on global smoothness assumptions, which do not generalize well to more complex settings. As a result, the network learns to \emph{confidently} interpolate and extrapolate the flow field to regions where no robust match can be found.
Due to the significant distribution shift between training and test data, the network thus also infers confident, yet highly erroneous predictions in homogeneous regions on real data.
In this section, we address this problem by carefully designing an architecture that greatly limits the risk of the aforementioned issues. Our architecture is visualized in Figure~\ref{fig:arch}.
Current state-of-the-art dense matching architectures rely on feature correlation layers. Features $f$ are extracted at resolution $h \times w$ from a pair of input images, and densely correlated either globally or within a local neighborhood of size $d$. In the latter case, the output correlation volume is best thought of as a 4D tensor $C \in \mathbb{R}^{h \times w \times d \times d}$. Computed as dense scalar products $C_{ijkl} = (f_{ij}^r)^\text{T} f_{i+k,j+l}^q$, it encodes the deep feature similarity between a location $(i,j)$ in the reference frame $I^r$ and a displaced location $(i+k,j+l)$ in the query $I^q$. Standard flow architectures~\cite{GLUNet, Melekhov2019, Hui2018, Sun2018} process the correlation volume with a flow decoder, by first vectorizing the last two dimensions, before applying a sequence of convolutional layers over the \emph{reference coordinates $(i,j)$} to predict the final flow.
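For clarity, a minimal (unoptimized) sketch of such a local correlation layer is given below; production implementations rely on dedicated CUDA kernels, and the exact displacement indexing convention may differ from ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def local_correlation(f_r, f_q, radius):
    # f_r, f_q: (B, C, H, W) reference and query features.
    # Displacements (k, l) range over [-radius, radius], so d = 2*radius + 1.
    B, C, H, W = f_r.shape
    d = 2 * radius + 1
    f_q_pad = F.pad(f_q, [radius] * 4)            # zero-pad the query features
    corr = f_r.new_zeros(B, H, W, d, d)
    for k in range(d):
        for l in range(d):
            shifted = f_q_pad[:, :, k:k + H, l:l + W]
            corr[:, :, :, k, l] = (f_r * shifted).sum(dim=1)  # scalar products
    return corr                                   # 4D volume (B, H, W, d, d)
\end{verbatim}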
\parsection{Correlation uncertainty module}
The straightforward strategy for predicting the distribution parameters $\Phi(X;\theta)$ is to simply increase the number of output channels of the flow decoder to include all parameters of the predictive distribution.
However, this allows the network to rely primarily on the local neighborhood when estimating the flow and confidence at location $(i,j)$, thus ignoring the actual reliability of the match and the appearance information at the specific location. This results in over-smoothed and overly confident predictions, unable to identify ambiguous and unreliable matching regions, such as the sky. This is visualized in Fig.~\ref{fig:arch-visual-common}.
We instead design an architecture that assesses the uncertainty at a specific location $(i,j)$, without relying on neighborhood information. We note that the 2D slice $C_{ij\cdot\cdot} \in \mathbb{R}^{d\times d}$ of the correlation volume encapsulates rich information about the matching ability of location $(i,j)$, in the form of a confidence map.
In particular, it encodes the distinctness, uniqueness, and existence of the correspondence.
We therefore create a \emph{correlation uncertainty module} $U_{\theta}$ that independently reasons about each correlation slice as $U_\theta(C_{ij\cdot\cdot})$. In contrast to standard decoders, the convolutions are therefore applied over \emph{the displacement dimensions $(k,l)$}. Efficient parallel implementation is ensured by moving the first two dimensions of $C$ to the batch dimension using a simple tensor reshape. Our strided convolutional layers then gradually decrease the size $d \times d$ of the displacement dimensions $(k,l)$ until a single vector $u_{ij} = U_\theta(C_{ij\cdot\cdot}) \in \mathbb{R}^n$ is achieved for each spatial coordinate $(i,j)$ (see Fig.~\ref{fig:arch}).
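A minimal sketch of this module is shown below, with an assumed correlation size $d=9$ and output dimension $n=32$ (both illustrative, not the exact configuration). The key operation is the reshape that moves the reference coordinates to the batch dimension, so that the convolutions act purely on the displacement dimensions.
\begin{verbatim}
import torch
import torch.nn as nn

class CorrelationUncertaintyModule(nn.Module):
    def __init__(self, n=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # collapse remaining extent (simplified)
            nn.Conv2d(32, n, 1))

    def forward(self, corr):          # corr: (B, H, W, d, d)
        B, H, W, d, _ = corr.shape
        x = corr.reshape(B * H * W, 1, d, d)  # locations -> batch dimension
        u = self.net(x)                       # (B*H*W, n, 1, 1)
        return u.reshape(B, H, W, -1)         # u_ij in R^n per location (i, j)
\end{verbatim}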
\parsection{Uncertainty predictor}
However, the cost volume does not capture the uncertainty arising at motion boundaries, which is crucial for real data with independently moving objects.
We thus additionally integrate predicted flow information in the estimation of its uncertainty. In practice, we concatenate the estimated mean flow $\mu$ with the output of the correlation uncertainty module $U_{\theta}$, and process it with multiple convolution layers. The uncertainty predictor outputs all parameters of the mixture \eqref{eq:mixture}, except for the mean flow $\mu$ (see Fig.~\ref{fig:arch}).
As shown in Fig.~\ref{fig:arch-visual-sep}, our uncertainty decoder, comprised of the correlation uncertainty module and the uncertainty predictor, successfully masks out most of the inaccurate and unreliable matching regions.
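A corresponding sketch of the uncertainty predictor is given below; layer sizes are again illustrative. It consumes the mean flow $\mu$ together with the features $u$ of the correlation uncertainty module and outputs the remaining parameters of the mixture \eqref{eq:mixture}.
\begin{verbatim}
import torch
import torch.nn as nn

class UncertaintyPredictor(nn.Module):
    def __init__(self, n=32, M=2):
        super().__init__()
        self.M = M
        self.conv = nn.Sequential(
            nn.Conv2d(n + 2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2 * M, 3, padding=1))  # M weight logits + M raw h_m

    def forward(self, mu, u):
        # mu: (B, 2, H, W) mean flow; u: (B, n, H, W) correlation
        # uncertainty features (permuted to channel-first beforehand).
        out = self.conv(torch.cat([mu, u], dim=1))
        log_alpha = torch.log_softmax(out[:, :self.M], dim=1)
        h = out[:, self.M:]  # mapped to variances via Eq. (scalepred)
        return log_alpha, h
\end{verbatim}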
\parsection{Uncertainty propagation across scales}
In a multi-scale architecture, the uncertainty prediction is further propagated from one level to the next. Specifically, the flow decoder and the uncertainty predictor both take as input the parameters $\Phi$ of the distribution predicted at the previous level, in addition to their original inputs.
\begin{figure*}[t]
\centering%
\includegraphics[width=0.99\textwidth]{image/dataset_pipeline.pdf}
\vspace{-2mm}
\caption{The synthetic image pair generation pipeline for our self-supervised training. Our pipeline is divided into three parts. First, we randomly generate a synthetic flow field $\tilde{Y}$, used to create an image pair from a base image. Next, perturbations are added to the synthetic flow field, leading to a new pair of query and reference images related by the background flow field $Y_{bg}$. Finally, each object is iteratively added to the image pair, using a randomly sampled object flow $Y_{fg}$. The final ground-truth flow field $Y$ relating the image pair is also updated accordingly.}
\vspace{-4mm}
\label{fig:dataset}
\end{figure*}
\subsection{Data for Self-supervised Uncertainty}
\label{subsec:perturbed-data}
While designing a suitable architecture greatly alleviates the uncertainty generalization issue, the network still tends to rely on global smoothness assumptions and interpolation, especially around object boundaries (see Fig.~\ref{fig:arch-visual-sep}). While this learned strategy indeed minimizes the Negative Log Likelihood loss \eqref{eq:nll} on self-supervised training samples, it does not generalize to real image pairs.
In this section, we further tackle this problem from the data perspective in the context of self-supervised learning.
We aim at generating less predictable synthetic motion patterns than simple homography transformations, to prevent the network from primarily relying on interpolation. This forces the network to focus on the appearance of the image region in order to predict its motion and uncertainty.
Given a base flow $\tilde{Y} \in \mathbb{R}^{H \times W}$ relating $\tilde{I}^r \in \mathbb{R}^{H \times W}$ to $\tilde{I}^q \in \mathbb{R}^{H \times W}$ by a simple transformation, such as a homography~\cite{Melekhov2019, Rocco2017a, GOCor, GLUNet}, we create a residual flow $\epsilon = \sum_i \varepsilon_i \in \mathbb{R}^{H \times W}$ by adding small local perturbations $\varepsilon_i$.
The query image $I^q = \tilde{I}^q$ is left unchanged while the reference $I^r$ is generated by warping $\tilde{I}^r$ according to the residual flow $\epsilon$.
The final perturbed flow map $Y$ between $I^r$ and $I^q$ is achieved by composing the base flow $\tilde{Y}$ with the residual flow $\epsilon$.
The main purpose of our perturbations $\varepsilon_i$ is to teach the network to be uncertain in regions where they cannot easily be identified. Specifically, in homogeneous regions such as the sky, the perturbations do not change the appearance of the reference image ($I^r \approx \tilde{I}^r$) and are therefore unnoticed by the network. However, since the perturbations break the global smoothness of the synthetic flow, the flow errors of those pixels are higher. In order to decrease the loss \eqref{eq:nll}, the network thus needs to estimate a larger uncertainty for the perturbed regions. We show the impact of introducing the flow perturbations in Fig.~\ref{fig:arch-visual-sep-pertur}.
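As an illustration, the residual flow $\epsilon$ can be generated as a sum of smooth, spatially localized random bumps. The sketch below uses isotropic Gaussian bumps with hypothetical default parameters; it conveys the idea, but need not match our exact perturbation model.
\begin{verbatim}
import torch

def random_perturbation_field(H, W, num=8, max_amp=6.0, sigma=20.0):
    # Residual flow eps = sum_i eps_i: each eps_i is a Gaussian bump with a
    # random center and a random (u, v) amplitude of at most max_amp pixels.
    yy, xx = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing='ij')
    eps = torch.zeros(2, H, W)
    for _ in range(num):
        cx = torch.rand(1) * W
        cy = torch.rand(1) * H
        amp = (torch.rand(2) * 2 - 1) * max_amp
        bump = torch.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
        eps += amp.view(2, 1, 1) * bump
    return eps   # (2, H, W), composed with the base flow afterwards
\end{verbatim}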
\subsection{Self-supervised dataset creation pipeline}
\label{sec:training-strategy}
We train our final model using a combination of self-supervision, where the image pairs and corresponding ground-truth flows are generated by artificial warping, and real sparse ground-truth. In this section, we focus on our self-supervised data generation pipeline. It is illustrated in Fig.~\ref{fig:dataset} and detailed below.
\parsection{Synthetic image pair creation} First, an image pair $(\tilde{I}^r, \tilde{I}^q)$ of dimension $H \times W$ is generated by warping a base image $B$ with a randomly generated geometric transformation, such as a homography or a Thin-plate Spline (TPS) transformation.
The base image $B$ is first resized to a fixed size $\overline{H} \times \overline{W}$, larger than the desired training image size $H \times W$. We then sample a random dense flow field $\tilde{Y}$ of the same dimension $\overline{H} \times \overline{W}$, and generate $\overline{I}^r$ by warping image $B$ with $\tilde{Y}$.
The query image $\tilde{I}^q$ is created by centrally cropping the base image $B$ to the fixed training image size $H \times W$, while the reference $\tilde{I}^r$ results from the cropping of $\overline{I}^r$.
The new image pair $(\tilde{I}^r, \tilde{I}^q)$ is related by the ground-truth flow $\tilde{Y}$, which is also cropped and adjusted accordingly.
The overall procedure is similar to~\cite{Rocco2017a, GLUNet, GOCor}.
\parsection{Perturbation creation and integration} Next, perturbations are added to the reference, as detailed in Sec.~\ref{subsec:perturbed-data}, resulting in a new pair ($\hat{I}^q, \hat{I}^r$) that is related by the background flow field $Y_{bg}$.
\parsection{Adding objects} To better simulate real scenarios, the synthetic image pair is augmented with random independently moving objects. This is performed by sampling objects from the COCO dataset~\cite{coco}, using their provided ground-truth segmentation masks. To generate motion, we randomly sample an affine flow field $Y_{fg}$ for each object, referred to as the foreground flow.
The objects are inserted into the images ($\hat{I}^q, \hat{I}^r$) using their corresponding segmentation masks, giving the final training image pair $(I^r, I^q)$.
The final synthetic flow $Y$ relating the reference to the query is composed of the object motion flow field $Y_{fg}$ at the location of the moving object in the reference image, and the background flow field $Y_{bg}$ otherwise. The same procedure is repeated iteratively, by considering previously added objects as part of the background.
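The ground-truth update for one added object then reduces to a masked composition of the two flow fields, sketched here with illustrative array shapes.
\begin{verbatim}
import numpy as np

def add_object_flow(Y_bg, Y_fg, mask_ref):
    # Y_bg, Y_fg: (H, W, 2) background and object (foreground) flow fields.
    # mask_ref: (H, W) boolean mask of the object in the reference image.
    Y = Y_bg.copy()
    Y[mask_ref] = Y_fg[mask_ref]  # object motion where the object is visible
    return Y
\end{verbatim}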
\subsection{Injective mask}
\label{sec:injective-mask}
Occlusions are pervasive in real scenes. They are generally caused by moving objects or 3D motion of the underlying scene. When a reference frame pixel is occluded or outside the view of the query image, its estimated flow needs to be interpolated or extrapolated. Occluded regions are therefore one of the most important sources of uncertainty in practical applications. Moreover, matching occluded and other non-visible pixels is often virtually impossible or even undefined.
In our self-supervised pipeline, occlusions are simulated by inserting independently moving objects. Since the background image is also transformed by a random warp, the flow is known even in occluded areas. As we have the ground-truth flow vector for every pixel in the reference image, the most straightforward alternative is to train the network by applying the Negative Log Likelihood loss \eqref{eq:nll} over \emph{all} pixels of the reference image.
However, this choice makes it difficult for the network to generalize to real scenes.
The background flow is easier to learn compared to the object flows, since the regular motion of the former largely dominates within the image.
As a result, the network tends to ignore the independently moving objects, while predicting background flow for all pixels in the image.
To alleviate this problem, another alternative is to mask out all occluded regions from the loss computation~\cite{Melekhov2019}. However, by removing all supervisory signals in occluded regions, the network is unable to learn the interpolation and extrapolation of predictions, which is important under mild occlusions in real scenes.
We therefore propose to mask out pixels from the objective based on another condition, namely the \emph{injectivity} of the ground-truth correspondence mapping. Specifically, we mask out the minimal region that ensures a one-to-one ground-truth mapping between the frames. If two or more pixels in the reference frame map to the same pixel location in the query, we only preserve the \emph{visible} pixel and mask out the occluded ones. This allows the network to focus on flow estimation of visible moving objects as opposed to occluded background regions. However, occlusions that do not violate the injectivity condition are preserved in the loss. The network thus learns vital interpolation and extrapolation capabilities. Importantly, the injectivity condition implies an unambiguous and well-defined reverse (i.e.\ inverse) flow. By only allowing such one-to-one matches during training, the network can learn to exploit the uniqueness of a correspondence as a powerful cue when assessing its uncertainty and existence.
We illustrate and further discuss our injective masking approach for self-supervised training using the minimal example visualized in Fig.~\ref{fig:mask}. The query image has two objects, $B$ and $C$. Object $C$ is also visible in the reference image while object $B$ is not present there. The reference image also includes an object $A$, which is solely visible in the reference. Next, we discuss how to handle the cases $A$, $B$, and $C$ when defining the ground-truth flow and our injective mask. Note that the ground-truth flow and our injective mask are both defined in the coordinate system of the reference frame.
\noindent\textbf{Object $A$} is only visible in the reference frame. The network therefore cannot deduce its flow from the available information. However, the image region covered by object $A$ in the reference has a one-to-one background flow. We therefore adopt this as ground-truth in region $A$. It allows the network to learn to interpolate the flow for pixels not visible in the reference frame (i.e.\ the region behind object $A$).
\begin{figure}[t]
\centering%
\includegraphics[width=0.40\textwidth]{image/mask.pdf}
\vspace{-2mm}
\caption{Pair of query and reference images with independently moving objects $A$, $B$ and $C$. Objects $A$ and $B$ are solely visible in the reference and query images respectively. The regions in the reference image that are occluded by the objects are represented with dashed red contours. They correspond to areas $A$, $B'$ and $C'$ in the occlusion mask. Our injective mask corresponds to $C'$, i.e.\ the black area in the lower right frame.
}
\vspace{-4mm}
\label{fig:mask}
\end{figure}
\noindent\textbf{Object $B$} is solely visible in the query. The background region covered by $B$ in the query corresponds to $B'$ in the reference. Since no other pixels in the reference are mapped to $B$, we can use the background flow as ground-truth in region $B'$ without violating the injectivity condition.
Through the supervisory signal in $B'$, the network learns to interpolate the flow for pixels not visible in the query frame (i.e.\ the region behind object $B$).
\noindent\textbf{Object $C$} is visible in both frames. The region $C'$ in the reference is occluded by object $C$ in the query frame.
Both the background flow in region $C'$ and the object flow in region $C$ of the reference map to the same region $C$ in the query frame. Including both regions in the ground-truth leads to a non-injective mapping. We therefore mask out the occluded region $C'$ from the loss, while preserving the visible region $C$. This forces the network to focus on learning the flow for the moving object $C$, which is more challenging. When including both regions in the loss \eqref{eq:nll} instead, the network tends to ignore the object $C$ in favor of the occluded background flow $C'$, by predicting a high uncertainty for the former.
\parsection{Injective mask}
The final injective mask for the example given in Fig.~\ref{fig:mask} thus corresponds to the region $\Omega_\text{inj}=C'$, visualized in black. In comparison, the full occlusion mask is given by the larger region $\Omega_\text{occ}=C' \cup B' \cup A$, which is outlined by the red dashed region in Fig.~\ref{fig:mask}. Hence, our approach does not mask out occluded regions that preserve the injectivity of the ground-truth flow.
The example described in Fig.~\ref{fig:mask} covers all important cases in the construction of our ground-truth flow and the injective mask. In order to achieve a general approach that is applicable to any number and configuration of objects, we follow an iterative procedure. We first construct the background image pair and its corresponding flow as described in Sec.~\ref{sec:training-strategy}. In each iteration, we then add one new object as discussed in Sec.~\ref{sec:training-strategy}. The added object belongs to one of the cases $A$, $B$, and $C$ above. We update the ground-truth flow and mask as previously described, while considering all previously added objects as part of the background. Note that we preserve the flow for regions in the reference frame that are out-of-view in the query frame, since they comply with the one-to-one property of the ground-truth mapping.
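To make the construction concrete, the sketch below implements the case-$C$ step for a single object: background pixels of the reference whose ground-truth flow lands inside the visible object region of the query violate injectivity and are added to $\Omega_\text{inj}$. Out-of-view pixels are explicitly kept, and the full mask is obtained by iterating this over all added objects; the exact implementation may differ.
\begin{verbatim}
import numpy as np

def case_c_mask(Y_bg, mask_obj_query):
    # Y_bg: (H, W, 2) background flow (u, v) in the reference frame.
    # mask_obj_query: (H, W) boolean object mask in the query image.
    H, W = mask_obj_query.shape
    ii, jj = np.mgrid[0:H, 0:W]
    x = jj + Y_bg[..., 0]
    y = ii + Y_bg[..., 1]
    inside = (x >= 0) & (x <= W - 1) & (y >= 0) & (y <= H - 1)
    tj = np.clip(np.round(x).astype(int), 0, W - 1)
    ti = np.clip(np.round(y).astype(int), 0, H - 1)
    # True where the reference pixel is occluded by the object (region C')
    # and must therefore be masked out of the loss.
    return inside & mask_obj_query[ti, tj]
\end{verbatim}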
\parsection{Masked objective}
Our final objective is obtained by masking the Negative Log-Likelihood \eqref{eq:nll} with our injective mask.
For the input image pair $X = \left(I^q, I^r \right)$ and ground-truth flow $Y$, we simply sum over all pixels that are not in the masked-out region $\Omega_\text{inj}$,
\begin{equation}
\label{eq:nll-mask}
L(\theta; X, Y) = - \sum_{(i,j) \notin \Omega_\text{inj}} \log p\big(y_{ij} | \varphi_{ij}(X;\theta)\big) \,.
\end{equation}
In Appendix~B.2, we present visualizations of our injective mask for multiple example training image pairs.
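In code, the masked objective \eqref{eq:nll-mask} is a one-line modification of the earlier NLL sketch; the mask convention (True for masked-out pixels) is our own choice.
\begin{verbatim}
import torch

def masked_nll(log_mix, mask_out):
    # log_mix: (B, H, W) per-pixel mixture log-likelihoods log p(y_ij | .).
    # mask_out: (B, H, W) boolean, True for pixels inside Omega_inj.
    return -(log_mix * (~mask_out)).sum(dim=(1, 2))
\end{verbatim}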
\subsection{Geometric Matching Inference}
\label{sec:geometric-inference}
Our PDC-Net+ generates a predictive distribution, from which we extract the mean flow field and its corresponding confidence map. This provides substantial versatility, allowing our approach to be deployed in multiple scenarios.
In this section, we present multiple inference strategies, which utilize our confidence estimation to also improve the accuracy of the final flow prediction.
The simplest application of PDC-Net+ predicts the flow and its confidence in a single forward pass. Our approach further offers the opportunity to perform \emph{multi-stage} and \emph{multi-scale} flow estimation on challenging image pairs, without any additional components, by leveraging our internal confidence estimation.
Finally, our dense flow regression can also be used to find robust matches between two sparse sets of detected keypoints.
\parsection{Confidence value} From the predictive distribution $p(y|\varphi(X;\theta))$, we aim at extracting a single confidence value, encoding the reliability of the corresponding predicted mean flow vector $\mu$.
Previous probabilistic regression methods mostly rely on the variance as a confidence measure~\cite{Gast018, IlgCGKMHB18, KendallG17, Walz2020}. However, we observe that the variance can be sensitive to outliers. Instead, we compute the probability $P_R$ of the true flow $y$ being within a radius $R$ of the estimated mean flow $\mu$. It can be computed as,
\begin{equation}
\label{eq:pr}
P_R = P(\|y - \mu\|_\infty < R) = \sum_{m} \alpha_m \left[1-\exp (-\sqrt{2}\frac{R}{\sigma_m}) \right]^2
\end{equation}
Compared to the variance, the probability value $P_R$ provides a more interpretable measure of the uncertainty. This confidence measure is then used to separate the accurate from the inaccurate matches by thresholding $P_R$. The accurate matches may then be further utilized in down-stream applications.
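The confidence \eqref{eq:pr} is cheap to evaluate from the predicted mixture parameters; a sketch with illustrative shapes is given below.
\begin{verbatim}
import math
import torch

def confidence_p_r(log_alpha, sigma2, R=1.0):
    # log_alpha, sigma2: (B, M, H, W) mixture log-weights and variances.
    sigma = torch.sqrt(sigma2)
    per_comp = (1.0 - torch.exp(-math.sqrt(2.0) * R / sigma)) ** 2
    return (log_alpha.exp() * per_comp).sum(dim=1)   # (B, H, W) map of P_R
\end{verbatim}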
\begin{figure}[t]
\centering%
\includegraphics[width=0.49\textwidth]{image/multi_stage.pdf}
\caption{Illustration of our multi-stage refinement strategy (H) to predict the flow $Y$ and confidence map $P_R$ relating the reference to the query.
}
\vspace{-4mm}
\label{fig:multi-stage}
\end{figure}
\parsection{Multi-stage refinement strategy (H)}
For extreme viewpoint changes with large scale or perspective variations, it is particularly difficult to infer the correct motion field in a single network pass. While this is partially alleviated by multi-scale architectures, it remains a major challenge in geometric matching.
Our approach allows us to split the flow estimation process into two parts: the first estimates a simple transformation, which is then used as initialization to infer the final, more complex transformation.
One of the major benefits of our confidence estimation is the ability to \emph{identify} a set of accurate matches from the densely estimated flow field. This is performed by thresholding the confidence map $P_R$ as $P_{R=1} > \gamma$. Following the first forward network pass, these accurate correspondences can be used to estimate a coarse transformation relating the image pair, such as a homography transformation. After warping the query according to the homography, a second forward pass can then be applied to the coarsely aligned image pair. The final flow field is constructed as a composition of the fine flow and the homography transform flow. This process is illustrated in Fig.~\ref{fig:multi-stage}. In our experiments (Sec.~\ref{sec:exp}), we indicate this version of our approach with (H).
While previous works also use multi-stage refinement~\cite{Rocco2017a, RANSAC-flow}, our approach is much simpler, applying the \emph{same} network in both stages and benefiting from the internal confidence estimation.
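A sketch of the two-stage procedure is given below, using OpenCV for the homography fit. Here, \texttt{net} stands for a single forward pass returning the dense flow (as an $(H, W, 2)$ array) and the confidence map $P_{R=1}$; the final composition of the homography flow with the second-stage flow is omitted for brevity.
\begin{verbatim}
import cv2
import numpy as np

def two_stage_flow(net, I_q, I_r, gamma=0.1):
    flow1, conf1 = net(I_q, I_r)                   # first forward pass
    ii, jj = np.nonzero(conf1 > gamma)             # confident matches only
    pts_r = np.stack([jj, ii], axis=1).astype(np.float32)
    pts_q = pts_r + flow1[ii, jj]                  # matched query locations
    Hm, _ = cv2.findHomography(pts_q, pts_r, cv2.RANSAC, 3.0)
    I_q_warp = cv2.warpPerspective(I_q, Hm, (I_q.shape[1], I_q.shape[0]))
    flow2, conf2 = net(I_q_warp, I_r)              # pass on the aligned pair
    # The final flow composes flow2 with the homography flow (omitted).
    return flow2, conf2
\end{verbatim}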
\parsection{Multi-scale refinement strategy (MS)} For datasets with very extreme geometric transformations, we further suggest a multi-scale strategy that utilizes our confidence estimation. In particular, we extend our two-stage refinement approach by resizing the reference image to different resolutions. Specifically, following~\cite{RANSAC-flow}, we use seven scales: 0.5, 0.66, 0.88, 1, 1.33, 1.66 and 2.0. As for the implementation, to avoid obtaining very large input images (for scaling ratio 2.0 for example), we use the following scheme: we resize the reference image for scaling ratios below 1, keeping the aspect ratio fixed and the query image unchanged. For ratios above 1, we instead resize the query image by the inverse of the ratio, while keeping the reference image unchanged. This ensures that the resized images are never larger than the original image dimensions.
The resulting resized image pairs are then passed through the network and we fit a homography for each pair, using our predicted flow and uncertainty map.
From all image pairs with their corresponding scaling ratios, we then select the homography with the highest percentage of inliers, and scale it to the images' original resolution. The original image pair is then coarsely aligned using this homography. Lastly, we follow the same procedure used in our two-stage refinement strategy to predict the final flow. We refer to this process as Multi-Scale (MS).
\parsection{Using our dense flow for sparse matching} Dense correspondences are useful in many applications but are not mandatory in geometric transformation estimation tasks. In such tasks, a detect-then-describe strategy can also be used. In the first step, locally salient and repeatable points are detected in both images. Secondly, descriptors are extracted at the keypoint locations and matched to establish correspondences across the images. Our dense approach can be used to replace the second step, i.e.\ description and matching.
Instead of solely relying on descriptor similarities~\cite{SIFT, SURF, Brief, ORB, Belousov2017, superpoint, Dusmanu2019CVPR}, our dense probabilistic correspondence network additionally learns to utilize, e.g., local motion patterns and global consistency across the views.
Specifically, we also employ our predicted dense flow and confidence map to find correspondences given sets of sparse keypoints, detected in a pair of images.
We denote the set of $N$ and $L$ sparse feature points detected in the query $I^q$ and reference image $I^r$ respectively as $\mathcal{X}^q = \{x^q_i\}_{i=1}^N \subset \mathbb{R}^{2}$ and $\mathcal{X}^r = \{x^r_i\}_{i=1}^L \subset \mathbb{R}^{2}$. PDC-Net+ predicts the dense flow field $Y^{r \rightarrow q}$ and confidence map $P_R^{r \rightarrow q}$ relating the reference to the query image.
We first disregard all reference feature points $x^r_i \in \mathcal{X}^r$ for which $P_R^{r \rightarrow q}[x^r_i] < \gamma$, where $\gamma \in [0, 1)$ is a threshold.
Given a reference feature point $x^r_i \in \mathcal{X}^r$ such that $P_R^{r \rightarrow q}[x^r_i] \geq \gamma$, we compute its predicted matching point in the query $\widehat{x}^q_i = Y^{r \rightarrow q}[x^r_i] + x^r_i$ according to the predicted flow $Y^{r \rightarrow q}$.
We subsequently find the query point $x^q_j \in \mathcal{X}^q$ closest to $\widehat{x}^q_i$, that is $j = \text{argmin}_k \left\| \widehat{x}^q_i - x^q_k \right\|$.
Finally, we retain the match between $x^r_i$ and $x^q_j$ if $\left\| \widehat{x}^q_i - x^q_j \right\| < d$, where $d$ is set as a pixel distance threshold.
As a result, we can potentially identify a corresponding feature point $x^q_j \in \mathcal{X}^q$ for each $x^r_i \in \mathcal{X}^r$. The set of such correspondences is denoted as $C_{r \rightarrow q}$.
We can use the same strategy to establish correspondences $C_{q \rightarrow r}$ in the reverse direction from $I^q$ to $I^r$. Given the two sets of correspondences from the two matching directions, we optionally only keep the correspondences for which the cyclic consistency error is below a certain threshold.
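The matching procedure from $I^r$ to $I^q$ can be summarized by the following sketch; the cyclic-consistency filtering is analogous and omitted, and the flow and confidence maps are assumed to be given at full image resolution.
\begin{verbatim}
import numpy as np

def match_keypoints(X_r, X_q, Y_r2q, P_r2q, gamma=0.1, d=4.0):
    # X_r: (L, 2) reference keypoints (x, y); X_q: (N, 2) query keypoints.
    # Y_r2q: (H, W, 2) dense flow; P_r2q: (H, W) confidence map.
    matches = []
    for i, (x, y) in enumerate(np.round(X_r).astype(int)):
        if P_r2q[y, x] < gamma:
            continue                         # discard unreliable keypoints
        x_hat = X_r[i] + Y_r2q[y, x]         # predicted location in the query
        dists = np.linalg.norm(X_q - x_hat, axis=1)
        j = int(np.argmin(dists))            # closest detected query keypoint
        if dists[j] < d:
            matches.append((i, j))
    return matches                           # the set C_{r -> q}
\end{verbatim}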
\section{Experimental Results}
\label{sec:exp}
We integrate our probabilistic approach into a generic pyramidal correspondence network and perform comprehensive experiments on multiple geometric matching and optical flow datasets. We further evaluate our probabilistic dense correspondence network, PDC-Net+, for several downstream tasks, including pose estimation, image-based localization, and image retrieval. For all tasks and datasets, we employ the \emph{same network}, PDC-Net+, with the \emph{same weights} without any task or dataset-specific fine-tuning.
Further results, analysis, visualizations and implementation details are provided in the Appendix.
\subsection{Implementation Details}
\label{subsec:arch}
We train a single network, termed PDC-Net+, and use the same network and weights for all experiments.
\parsection{Architecture}
We adopt the recent GLU-Net-GOCor~\cite{GOCor, GLUNet} as our base architecture. It consists of two sub-modules operating at two image resolutions, each built as a two-level pyramidal network. The feature extraction backbone consists of a VGG-16 network~\cite{Chatfield14} pre-trained on ImageNet. At each level, we add our uncertainty decoder and propagate the uncertainty prediction to the next level as detailed in Sec.~\ref{sec:uncertainty-arch}.
We model the probability distribution $p\left(y | \varphi \right)$ with the constrained mixture presented in Sec.~\ref{sec:constained-mixture}, using $M=2$ Laplace components. The first is fixed to $\sigma^2_1 = \beta_1^- = \beta_1^+ = 1$ in order to represent the very accurate predictions. The second component models larger errors and outliers, obtained by setting the constraints as $ 2 = \beta_2^- \leq \sigma^2_2 \leq \beta_2^+ = HW$, where $\beta_2^+$ is set to the size $H \times W$ of the training images. We ablate these design choices in Sec.~\ref{subsec:ablation-study}.
For inference, we refer to using a single forward-pass of the network as (D), our multi-stage approach (Sec.~\ref{sec:geometric-inference}) as (H) and our multi-scale approach (Sec.~\ref{sec:geometric-inference}) as (MS).
\parsection{Training datasets}
We train our network in two stages.
First, we follow the self-supervised training procedure introduced in Sec.~\ref{sec:training-strategy}. In particular, we augment the data with four random independently moving objects from the COCO~\cite{coco} dataset, with probability 0.8.
In the second training stage, we extend the self-supervised data with real image pairs with sparse ground-truth correspondences from the MegaDepth dataset~\cite{megadepth}. We additionally fine-tune the backbone feature extractor in this stage.
\parsection{Training details} We train on image pairs of size $520 \times 520$. The first training stage involves 350k iterations, with a batch size of 15. The learning rate is initially set to $10^{-4}$, and halved after 133k and 240k iterations.
For the second training stage, the batch size is reduced to 10 due to the added memory consumption when fine-tuning the backbone feature extractor. We train for 225k iterations in this stage. The initial learning rate is set to $5 \cdot 10^{-5}$ and halved after 210k iterations. The total training takes about 10 days on two NVIDIA TITAN RTX with 24GB of memory.
For the GOCor modules~\cite{GOCor}, we train with 3 local and global optimization iterations.
\parsection{Differences to PDC-Net} PDC-Net+ and PDC-Net use the same architecture and probabilistic formulation. Their training strategies differ as follows. In the first stage of training, only a single independently moving object is added to the training image pairs for PDC-Net as opposed to four for PDC-Net+. Moreover, PDC-Net is trained without any mask for the self-supervised part, while the injective mask (Sec.~\ref{sec:injective-mask}) is used for PDC-Net+.
Finally, during the second stage of training, the backbone weights are fine-tuned with the same learning rate as the rest of the network weights in PDC-Net+, whereas it was divided by five in PDC-Net.
PDC-Net+ is also trained for a total of 575k iterations against 330k for PDC-Net. This is mostly due to the introduction of multiple objects in the first training stage, which substantially increases the difficulty of the data, thus requiring longer training.
\subsection{Geometric Correspondences and Flow}
\label{subsec:correspondence-est}
We first evaluate our PDC-Net+ in terms of the quality of the predicted flow field.
\subsubsection{Datasets}
We evaluate our approach on three standard datasets with sparse ground-truth, namely the \textbf{RobotCar}~\cite{RobotCar, RobotCarDatasetIJRR}, \textbf{MegaDepth}~\cite{megadepth} and \textbf{ETH3D}~\cite{ETH3d} datasets, as well as the homography estimation dataset \textbf{HPatches}.
\parsection{MegaDepth} MegaDepth is a large-scale dataset, containing image pairs with extreme viewpoint and appearance variations. We follow the same procedure and 1600 test images as~\cite{RANSAC-flow}. It results in approximately 367K correspondences. Following~\cite{RANSAC-flow}, all the images and ground-truths are resized such that the smallest dimension has 480 pixels. As evaluation metric, we employ the Percentage of Correct Keypoints at a given pixel threshold $T$ (PCK-$T$).
\parsection{RobotCar} The RobotCar dataset depicts outdoor road scenes, taken under different weather and lighting conditions. Images are particularly challenging due to their numerous textureless regions.
Following the protocol used in~\cite{RANSAC-flow}, images and ground-truths are resized such that the smallest dimension has the length $480$. The correspondences originally introduced by~\cite{RobotCarDatasetIJRR} are used as ground-truth, consisting of approximately 340M matches. We employ the Percentage of Correct Keypoints at a given pixel threshold $T$ (PCK-$T$) as the evaluation metric.
\parsection{ETH3D} Finally, ETH3D contains 10 image sequences at $480 \times 752$ or $514 \times 955$ resolution, captured from a moving hand-held camera and depicting both indoor and outdoor scenes.
We follow the protocol of~\cite{GLUNet}, sampling image pairs at different intervals to analyze varying magnitude of geometric transformations, resulting in about 500 image pairs and 600k to 1100k matches per interval. As evaluation metrics, we use the Average End-Point Error (AEPE) and Percentage of Correct Keypoints (PCK).
\parsection{HPatches} The HPatches dataset~\cite{Lenc} depicts planar scenes divided into sequences, with transformations restricted to homographies. Each image sequence contains a query image and 5 reference images taken under increasingly large viewpoint changes.
In line with~\cite{Melekhov2019, RANSAC-flow}, we exclude the sequences labelled \verb|i_X|, which only contain illumination changes, and employ only the 59 sequences labelled \verb|v_X|, which contain viewpoint changes. This results in a total of 295 image pairs with dense ground-truth flow fields. Following~\cite{Melekhov2019, RANSAC-flow}, we evaluate on images and ground-truths resized to $240 \times 240$ and employ the AEPE and PCK as the evaluation metrics.
\begin{figure}[b]
\centering%
\vspace{-3mm}
\includegraphics[width=0.99\columnwidth, trim=135 0 0 0]{image/eth3d_all.pdf}
\vspace{-3mm}
\caption{Results on ETH3D~\cite{ETH3d}. AEPE (left), PCK-1 (center) and PCK-5 (right) are plotted w.r.t.\ the inter-frame interval length.}
\label{fig:ETH3D}
\end{figure}
\subsubsection{Compared methods}
We compare to dense matching methods, trained for the geometric matching task. SIFT-Flow~\cite{LiuYT11}, NC-Net~\cite{Rocco2018b}, DGC-Net~\cite{Melekhov2019}, GLU-Net~\cite{GLUNet} and GLU-Net-GOCor~\cite{GLUNet, GOCor} are all trained on self-supervised data from different image sources than MegaDepth. The second set of compared methods are trained on MegaDepth images, namely RANSAC-Flow~\cite{RANSAC-flow}, LIFE~\cite{LIFE}, COTR~\cite{COTR} and PDC-Net~\cite{pdcnet}. Note that COTR cannot be evaluated on the MegaDepth test split because its training scenes overlap with the test scenes.
We also compare with the non-probabilistic baseline GLU-Net-GOCor*, which was introduced alongside the initial PDC-Net~\cite{pdcnet}. It is trained using the same settings and data as PDC-Net, but without the probabilistic formulation.
\subsubsection{Results}
In Tab.~\ref{tab:megadepth} we report the results on MegaDepth and RobotCar. Our method PDC-Net+ outperforms all previous works by a large margin at all PCK thresholds.
Remarkably, our PDC-Net+ with a single forward pass (D) is \emph{40 times faster} than the recent RANSAC-Flow while being significantly better.
Our uncertainty-aware probabilistic approach PDC-Net+ also outperforms the baseline GLU-Net-GOCor* in flow accuracy.
This clearly demonstrates the advantages of casting the flow estimation as a probabilistic regression problem, advantages which are not limited to uncertainty estimation. It also substantially benefits the accuracy of the flow itself through a more flexible loss formulation. A visual comparison between PDC-Net+ and baseline GLU-Net-GOCor* on a MegaDepth image pair is shown in Fig.~\ref{fig:intro}, top.
In Fig.~\ref{fig:ETH3D}, we plot the AEPE and PCKs on ETH3D. Our approach significantly outperforms previous approaches for all intervals in terms of PCK. As for AEPE, PDC-Net+ improves upon all methods, including the recent Transformer-based architecture COTR~\cite{COTR}, for intervals up to 11. It nevertheless obtains slightly worse AEPE than RANSAC-Flow for intervals of 13 and 15. However, RANSAC-Flow uses an extensive multi-scale scheme relying on off-the-shelf features extracted at different resolutions. In contrast, our multi-scale approach (MS) is much simpler, yet effective.
\begin{table}[t]
\centering
\caption{PCK (\%) results on sparse correspondences of the MegaDepth~\cite{megadepth} and RobotCar~\cite{RobotCar, RobotCarDatasetIJRR} datasets. In the top part, methods are trained on different data than MegaDepth while approaches in the bottom part are trained on the MegaDepth training set. We additionally compare the run-time of all methods on $480 \times 480$ images on a NVIDIA GeForce RTX 2080 Ti GPU.
}
\vspace{-3mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccc|ccc|c} \toprule
& \multicolumn{3}{c}{\textbf{MegaDepth}} & \multicolumn{3}{c}{\textbf{RobotCar}} & Run-\\
& PCK-1 $\uparrow$ & PCK-3 $\uparrow$ & PCK-5 $\uparrow$ & PCK-1 $\uparrow$ & PCK-3 $\uparrow$ & PCK-5 $\uparrow$ & time (ms) $\downarrow$\\ \midrule
SIFT-Flow~\cite{LiuYT11} & 8.70 & 12.19 & 13.30 & 1.12 & 8.13 & 16.45 & -\\
NCNet~\cite{Rocco2018b} & 1.98 & 14.47 & 32.80 & 0.81 & 7.13 & 16.93 & - \\
DGC-Net~\cite{Melekhov2019} & 3.55 & 20.33 & 32.28 & 1.19 & 9.35 & 20.17 & \textbf{25} \\
GLU-Net$_{static}$~\cite{GLUNet, GOCor} & 29.27 & 50.46 & 55.93 & 2.21 & 17.06 & 33.69 & 36 \\
GLU-Net$_{dyn}$~\cite{GLUNet} & 21.58 & 52.18 & 61.78 & 2.30 & 17.15 & 33.87 & 36 \\
GLU-Net-GOCor$_{dyn}$~\cite{GOCor} & 37.28 & 61.18 & 68.08 & 2.31 & 17.62 & 35.18 & 70 \\
\midrule
RANSAC-Flow (MS)~\cite{RANSAC-flow} & 53.47 & 83.45 & 86.81 & 2.10 & 16.07 & 31.66 & 3596 \\
LIFE~\cite{LIFE} & 39.98 & 76.14 & 83.14 & 2.30 & 17.40 & 34.30 & 78 \\
GLU-Net-GOCor*~\cite{pdcnet} & 57.77 & 78.61 & 82.24 & 2.33 & 17.21 & 33.67 & \textbf{71} \\
PDC-Net (D)~\cite{pdcnet} & 68.95 & 84.07 & 85.72 & 2.53 & 18.96 & 36.36 & 88 \\
PDC-Net (H)~\cite{pdcnet} & 70.75 & 86.51 & 88.00 & 2.54 & 18.97 & 36.37 & 284 \\
PDC-Net (MS)~\cite{pdcnet} & 71.81 & 89.36 & 91.18 & 2.58 & 18.87 & 36.19 & 1017 \\
\textbf{PDC-Net+ (D)} & 72.41 & 86.70 & 88.12 & 2.57 & \textbf{19.12} & \textbf{36.71} & 88 \\
\textbf{PDC-Net+ (H)} & 73.92 & 89.21 & 90.48 & 2.56 & 19.00 & 36.56 & 284 \\
\textbf{PDC-Net+ (MS)} & \textbf{74.51} & \textbf{90.69} & \textbf{92.10} & \textbf{2.63} & 19.01 & 36.57 & 1017 \\
\bottomrule
\end{tabular}%
}\vspace{1mm}\vspace{-4mm}
\label{tab:megadepth}
\end{table}
\begin{table}[b]
\centering
\caption{Dense correspondence estimation results on HPatches~\cite{Lenc}. Here, the images and ground-truth flow fields are resized to $240 \times 240$, following~\cite{Melekhov2019}.}\label{tab:hp-240}
\vspace{-3mm}
\resizebox{0.34\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
& AEPE $\downarrow$ & PCK-1 (\%) $\uparrow$ & PCK-5 (\%) $\uparrow$ \\ \midrule
DGC-Net & 9.07 & 50.01 & 77.4\\
GLU-Net & 7.4 & 59.92 & 83.47\\
GLU-Net-GOCor & 6.62 & 58.45 & 85.89 \\
LIFE & 4.3 & 61.36 & 91.94\\
RANSAC-Flow (MS) & 3.79 & 78.42 &96.06\\
GLU-Net-GOCor* & 5.06 & 64.8 &90.24\\
PDC-Net (H) & 4.32 & 85.97 &94.59\\
PDC-Net (MS)& \textbf{3.56} & 87.4 &96.18 \\
\textbf{PDC-Net+ (H)} & 4.29 & 86.56 &94.97\\
\textbf{PDC-Net+ (MS)} & 3.59 & \textbf{87.83} &\textbf{96.36}\\ \bottomrule
\end{tabular}%
}
\end{table}
We also present results on HPatches in Tab.~\ref{tab:hp-240}. Our PDC-Net+ (H) outperforms all direct approaches in accuracy (PCK) and in robustness (AEPE). Moreover, PDC-Net+ with our multi-scale strategy (MS) obtains better PCK and AEPE results than RANSAC-Flow (MS), even though the latter adopts an additional pre-processing step to identify rotated images, by choosing the homography with the highest percentage of inliers among rotated versions of the image pair.
\subsubsection{Generalization to optical flow}
We additionally show that our approach generalizes well to the accurate estimation of optical flow, even though it is trained for the very different task of geometric matching.
We use the established \textbf{KITTI} dataset~\cite{Geiger2013}, and evaluate according to the standard metrics, namely AEPE and Fl.
Since we do not fine-tune on KITTI, we show results on the training splits. In Tab.~\ref{tab:optical-flow}, we compare to methods specifically designed for optical flow~\cite{Sun2018, GOCor, Hui2018, YinDY19, Hui2019, VCN, RAFT} and trained using task-specific datasets~\cite{Dosovitskiy2015,Ilg2017a}. We also compare with more generic dense networks trained on other datasets.
Our approach outperforms all previous generic matching methods (upper part in Tab.~\ref{tab:optical-flow}) by a large margin in terms of both Fl and AEPE. Compared to PDC-Net, the training strategy introduced in PDC-Net+, which leverages multiple objects (Sec.~\ref{sec:training-strategy}) as well as the injective mask (Sec.~\ref{sec:injective-mask}), is highly beneficial for the optical flow task, where independently moving objects are common.
Our PDC-Net+ also significantly outperforms the specialized optical flow methods (bottom part in Tab.~\ref{tab:optical-flow}), even though it is not trained on any optical flow datasets.
\begin{table}[t]
\centering
\caption{Optical flow results on the training splits of KITTI~\cite{Geiger2013}. The upper part contains generic matching networks, while the bottom part lists specialized optical flow methods, not trained on KITTI. }
\vspace{-3mm}\resizebox{0.35\textwidth}{!}{%
\begin{tabular}{lcc|cc}
\toprule
& \multicolumn{2}{c}{\textbf{KITTI-2012}} & \multicolumn{2}{c}{\textbf{KITTI-2015}} \\
& AEPE $\downarrow$ & Fl (\%) $\downarrow$ & AEPE $\downarrow$ & Fl (\%) $\downarrow$ \\ \midrule
DGC-Net~\cite{Melekhov2019} & 8.50 & 32.28 & 14.97 & 50.98 \\
GLU-Net~\cite{GLUNet} & 3.14 & 19.76 & 7.49 & 33.83 \\
GLU-Net-GOCor$_{\textit{dyn}}$~\cite{GOCor} & 2.68 & 15.43 & 6.68 & 27.57 \\
RANSAC-Flow~\cite{RANSAC-flow} & - & - & 12.48 & - \\
GLU-Net-GOCor* & 2.26 & 9.89 & 5.53 & 18.27 \\
LIFE & 2.59 & 12.94 & 8.30 & 26.05 \\
COTR & 2.26 & 10.50 & 6.12 & 16.90 \\
PDC-Net (D) & 2.08 & 7.98 & 5.22 & 15.13 \\
\textbf{PDC-Net+ (D)} & \textbf{1.76} & \textbf{6.60} & \textbf{4.53} & \textbf{12.62} \\
\midrule
PWC-Net~\cite{Sun2018} & 4.14 & 21.38 & 10.35 & 33.7 \\
PWC-Net-GOCor~\cite{Sun2018, GOCor} & 4.12 & 19.31 & 10.33 & 30.53 \\
LiteFlowNet~\cite{Hui2018} & 4.00 & - & 10.39 & 28.5 \\
HD$^3$F~\cite{YinDY19} & 4.65 & - & 13.17 & 24.0 \\
LiteFlowNet2~\cite{Hui2019} & 3.42 & - & 8.97 & 25.9 \\
VCN~\cite{VCN} & - & - & 8.36 & 25.1 \\
RAFT~\cite{RAFT} & - & - & 5.04 & 17.4 \\
\bottomrule
\end{tabular}%
}
\label{tab:optical-flow}
\end{table}
\vspace{-3mm}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{image/sparse_hp.pdf}
\vspace{-3mm}
\caption{Sparse correspondence identification on HPatches~\cite{Lenc}, evaluating the 2000 most confident matches for each method.}
\label{fig:sparse-hp}
\end{figure*}
\subsection{Uncertainty Estimation}
\label{subsec:uncertainty-est}
Next, we evaluate the quality of the uncertainty estimates provided by our approach.
\parsection{Sparsification and error curves}
To assess the quality of the uncertainty estimates, we rely on sparsification plots, in line with~\cite{Aodha2013LearningAC,Ilg2017a, ProbFlow}.
The pixels having the highest uncertainty are progressively removed and the AEPE or PCK of the remaining pixels is plotted in the sparsification curve. These plots reveal how well the estimated uncertainty relates to the true errors. Ideally, larger uncertainty should correspond to larger errors. Gradually removing the predictions with the highest uncertainties should therefore monotonically improve the accuracy of the remaining correspondences. The sparsification plot is compared with the best possible ranking of the predictions, according to their actual errors computed with respect to the ground-truth flow. We refer to this curve as the oracle plot.
Note that the oracle is different for each model. Hence, an evaluation using a single sparsification plot is not possible. Instead, we use the Sparsification Error, constructed by directly comparing each sparsification plot to its corresponding oracle plot by taking their difference. Since this measure is independent of the oracle, a fair comparison between different methods is possible. As evaluation metric, we use the Area Under the Sparsification Error curve (AUSE).
The sparsification and error plots provide an insightful and directly relevant assessment of the uncertainty. The AUSE directly evaluates the ability to filter out inaccurate and incorrect correspondences, which is the main purpose of the uncertainty estimate in e.g.\ pose estimation and 3D reconstruction. Furthermore, sparsification and error plots are also applicable to non-probabilistic confidence and uncertainty estimation techniques, allowing for a direct comparison with previous work~\cite{RANSAC-flow,Melekhov2019}.
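For reference, the sketch below computes an AEPE sparsification curve and the resulting AUSE from flattened per-pixel errors and uncertainties; the number of steps and the fraction grid are illustrative choices.
\begin{verbatim}
import numpy as np

def sparsification_ause(errors, uncertainty, steps=50):
    # errors, uncertainty: flat arrays over all evaluated pixels.
    fracs = np.linspace(0.0, 0.99, steps)
    def curve(ranking):
        order = np.argsort(ranking)[::-1]    # most uncertain / largest first
        e = errors[order]
        return np.array([e[int(f * len(e)):].mean() for f in fracs])
    spars = curve(uncertainty)               # remove by predicted uncertainty
    oracle = curve(errors)                   # remove by true error (oracle)
    return np.trapz(spars - oracle, fracs)   # area under sparsification error
\end{verbatim}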
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.23\textwidth]{image/EPE_megadepth_unc_measure.pdf}}~%
\subfloat{\includegraphics[width=0.23\textwidth]{image/PCK_5_megadepth_unc_measure.pdf}} \\
\subfloat{\includegraphics[width=0.23\textwidth]{image/EPE_kitti_2015_unc_measure.pdf}}~%
\subfloat{\includegraphics[width=0.23\textwidth]{image/PCK_5_kitti_2015_unc_measure.pdf}}~%
\vspace{-2mm}
\caption{Sparsification plots for AEPE (left) and PCK-5 (right) on MegaDepth (top) and KITTI-2015 (bottom), comparing multiple uncertainty measures when using PDC-Net+ (D). Smaller AUSE (in parenthesis) is better.}
\label{fig:sparsification-unc}
\vspace{-4mm}
\end{figure}
\parsection{Comparison of uncertainty measures} In Fig.~\ref{fig:sparsification-unc}, we report the sparsification plots and AUSE comparing different uncertainty measures applied to the probabilistic formulation of our PDC-Net+. In particular, we compare our confidence estimate $P_R$ \eqref{eq:pr} with computing the variance of the mixture model $V = \sum_{m=1}^M \alpha_m \sigma^2_m$ and the forward-backward consistency error~\cite{Meister2017}. The latter is a common approach to rank and filter matches. Since the same flow regression model is used for all uncertainty measures, the sparsification plots and the Oracle are directly comparable. Using $P_R$ leads to sparsification plots closer to the Oracle (and smaller AUSE) on both MegaDepth and KITTI-2015. When using the PCK-5 metric on MegaDepth, our $P_R$ confidence reduces the AUSE by $58\%$ compared to forward-backward consistency and $35\%$ compared to using the mixture variance.
Moreover, as observed in Fig.~\ref{fig:sparsification-unc}, removing only the 30\% most uncertain matches reduces the AEPE by 30\% and 70\% on MegaDepth and KITTI-2015, respectively.
\parsection{Comparison to State-of-the-art} Fig.~\ref{fig:sparsification} shows the Sparsification Error plots on MegaDepth of our PDC-Net+ and PDC-Net~\cite{pdcnet}, compared to previous dense methods that provide a confidence estimation, namely DGC-Net~\cite{Melekhov2019} and RANSAC-Flow~\cite{RANSAC-flow}. PDC-Net+ and PDC-Net outperform other methods in AUSE, verifying the robustness of the predicted uncertainty estimates. In addition to the large improvements in accuracy, PDC-Net+ largely preserves the quality of the uncertainty estimates achieved by the original PDC-Net.
\begin{figure}[b]
\centering
\vspace{-6mm}%
\subfloat{\includegraphics[width=0.23\textwidth]{image/EPE_megadepth.pdf}}~%
\subfloat{\includegraphics[width=0.23\textwidth]{image/PCK_5_megadepth.pdf}}~%
\vspace{-2mm}
\caption{Sparsification Error plots for AEPE (left) and PCK-5 (right) on MegaDepth, comparing different methods predicting the uncertainty of the match. Smaller AUSE (in parenthesis) is better. }
\label{fig:sparsification}
\end{figure}
\parsection{Sparse correspondence evaluation on HPatches}
Finally, we evaluate the effectiveness of our PDC-Net+ for sparse correspondence identification using the HPatches dataset~\cite{Lenc}. We follow the standard protocol introduced by D2-Net~\cite{Dusmanu2019CVPR}, where 108 of the 116 image sequences are evaluated, with 56 and 52 sequences depicting viewpoint and illumination changes, respectively.
The first image of each sequence is matched against the other five, giving 540 pairs. As metric, we use the Mean Matching Accuracy (MMA), which measures the average percentage of correct matches for different pixel thresholds. For each method, the top 2000 matches are selected for evaluation, following~\cite{Rocco20, Xreo, DualRCNet}.
For PDC-Net+, we first resize the images so that the size along the smallest dimension is 600 and then predict the dense flow relating the pair. We rank the estimated correspondences according to their predicted confidence map $P_{R=1}$ \eqref{eq:pr} and select the 2000 most confident for evaluation.
We also evaluate the performance of PDC-Net+ in combination with the SuperPoint detector~\cite{superpoint}, referred to as `PDC-Net+ + SP'. To obtain matches relating the sets of sparse keypoints, we follow the procedure introduced in Sec.~\ref{sec:geometric-inference} with a maximum distance to keypoints of $d=4$. We use the default parameters for SuperPoint, with a Non-Max Suppression (NMS) window of 4. The matches are also ranked according to their predicted confidence map $P_{R=1}$ \eqref{eq:pr}.
We compare our approach to recent detect-then-describe baselines: SuperPoint~\cite{superpoint}, SuperPoint + SuperGlue~\cite{SarlinDMR20}, R2D2~\cite{R2D2descriptor}, D2-Net~\cite{Dusmanu2019CVPR} and DELF~\cite{DELF}. We also compare with dense-to-sparse approaches: Sparse-NCNet~\cite{Rocco20}, DualRC-Net~\cite{DualRCNet} and XRCNet~\cite{Xreo}, along with the purely dense method NC-Net~\cite{Rocco2018b}.
As shown in Fig.~\ref{fig:sparse-hp}, our initial approach PDC-Net sets a new state-of-the-art at all thresholds overall, closely followed by PDC-Net+. Remarkably, PDC-Net+ outperforms the next best method by approximately 20\% at a threshold of 1 pixel. As opposed to other methods, our approach is robust to both illumination and view-point changes.
Moreover, PDC-Net+ in association with SuperPoint (PDC-Net+ + SP) also drastically outperforms SuperPoint and SuperPoint + SuperGlue. It demonstrates that our dense matching approach is also a strong alternative for establishing sparse matches between two images.
\subsection{Pose Estimation}
\label{subsec:down-stream-tasks}
To show the joint performance of our flow and uncertainty prediction, we evaluate our approach for the task of pose estimation. Given a pair of images showing different viewpoints of the same scene, two-view geometry estimation aims at recovering their relative pose. Although it has long been dominated by sparse matching methods, we here evaluate our dense approach for this task.
\begin{table}[b]
\centering
\vspace{-1mm}
\caption{Two-view geometry estimation on the outdoor dataset YFCC100M~\cite{YFCC}. The top section compares sparse methods while the bottom section contains dense methods.}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccc|ccc}
\toprule
& \multicolumn{3}{c}{AUC $\uparrow$} & \multicolumn{3}{c}{mAP $\uparrow$} \\
& @5\textdegree & @10\textdegree & @20\textdegree & @5\textdegree & @10\textdegree & @20\textdegree \\ \midrule
SIFT~\cite{SIFT} + ratio test & 24.09 & 40.71 & 58.14 & 45.12 & 55.81 & 67.20 \\
SIFT~\cite{SIFT} + OANet~\cite{OANet} & 29.15 & 48.12 & 65.08 & 55.06 & 64.97 & 74.83 \\
SIFT~\cite{SIFT} + SuperGlue~\cite{SarlinDMR20} & 30.49 & 51.29 & 69.72 & 59.25 & 70.38 & 80.44 \\
SuperPoint~\cite{superpoint} (SP) & - & - & - & 30.50 & 50.83 & 67.85 \\
SP~\cite{superpoint} + OANet~\cite{OANet} & 26.82 & 45.04 & 62.17 & 50.94 & 61.41 & 71.77 \\
SP~\cite{superpoint} + SuperGlue~\cite{SarlinDMR20} & \textbf{38.72} & \textbf{59.13} & \textbf{75.81} & \textbf{67.75} & \textbf{77.41} & \textbf{85.70} \\
\midrule
D2D~\cite{D2D} & - & - & - & 55.58 & 66.79 & - \\
RANSAC-Flow (MS+SegNet)~\cite{RANSAC-flow} & - & - & - & 64.88 & 73.31 & 81.56 \\
RANSAC-Flow (MS)~\cite{RANSAC-flow} & - & - & - & 31.25 & 38.76 & 47.36 \\
PDC-Net (D) & 32.21 & 52.61 & 70.13 & 60.52 & 70.91 & 80.30 \\
PDC-Net (H) & 34.88 & 55.17 & 71.72 & 63.90 & 73.00 & 81.22 \\
PDC-Net (MS) & 35.71 & 55.81 & 72.26 & 65.18 & 74.21 & 82.42 \\
\textbf{PDC-Net+ (D)} & 34.76 & 55.37 & 72.55 & 63.93 & 73.81 & 82.74 \\
\textbf{PDC-Net+ (H)} & \textbf{37.51} & \textbf{58.08} & \textbf{74.50} & \textbf{67.35} & \textbf{76.56} & \textbf{84.56} \\
\bottomrule
\end{tabular}%
}\vspace{1mm}
\label{tab:YCCM}
\end{table}
\parsection{Datasets and metrics} We evaluate on both an outdoor and an indoor dataset, namely \textbf{YFCC100M}~\cite{YFCC} and \textbf{ScanNet}~\cite{scannet} respectively.
The YFCC100M dataset contains images of touristic landmarks. The provided ground-truth poses were created by generating 3D reconstructions from a subset of the collections~\cite{heinly2015_reconstructing_the_world}. We follow the standard set-up of~\cite{OANet} and evaluate on 4 scenes of the YFCC100M dataset, each comprising 1000 image pairs.
ScanNet is a large-scale indoor dataset composed of monocular sequences with ground truth poses and depth images. We follow the set-up of~\cite{SarlinDMR20} and evaluate on 1500 image pairs.
\begin{table}[t]
\centering
\caption{Two-view geometry estimation on the indoor dataset ScanNet~\cite{scannet}. The top section compares sparse methods, the middle section presents dense-to-sparse approaches, and the bottom one contains dense methods. The symbol $\dagger$~denotes that the network was not trained on ScanNet. }
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccc|ccc}
\toprule
& \multicolumn{3}{c}{AUC $\uparrow$} & \multicolumn{3}{c}{mAP $\uparrow$} \\
& @5\textdegree & @10\textdegree & @20\textdegree & @5\textdegree & @10\textdegree & @20\textdegree \\ \midrule
ORB~\cite{ORB} + GMS~\cite{GMS} & 5.21 & 13.65 & 25.36 & - & - & - \\
D2-Net~\cite{Dusmanu2019CVPR} + NN & 5.25 & 14.53 & 27.96 & - & - & - \\
ContextDesc~\cite{contextdesc} + Ratio Test & 6.64 & 15.01 & 25.75 & - & - & - \\
SuperPoint~\cite{superpoint} + OANet~\cite{OANet} & 11.76 & 26.90 & 43.85 & - & - & - \\
SuperPoint~\cite{superpoint} + SuperGlue~\cite{SarlinDMR20} & \textbf{16.16} & \textbf{33.81} & \textbf{51.84} & - & - & - \\
\midrule
DRC-Net~\textdagger~\cite{DualRCNet} & 7.69 & 17.93 & 30.49 & - & - & - \\
LoFTR-OT \textdagger~\cite{LOFTR} & 16.88 & 33.62 & 50.62 & - & - & - \\
LoFTR-OT~\cite{LOFTR} & 21.51 & 40.39 & \textbf{57.96} & - & - & - \\
LoFTR-DT~\cite{LOFTR} & \textbf{22.02} & \textbf{40.80} & 57.62 & - & - & - \\ \midrule
PDC-Net \textdagger~\cite{pdcnet} (D) & 17.70 & 35.02 & 51.75 & 39.93 & 50.17 & 60.87 \\
PDC-Net \textdagger~\cite{pdcnet} (H) & 18.70 & 36.97 & 53.98 & 42.87 & 53.07 & 63.25 \\
PDC-Net \textdagger~\cite{pdcnet} (MS) & 18.44 & 36.80 & 53.68 & 42.40 & 52.83 & 63.13 \\
\textbf{PDC-Net+ \textdagger~(D)} & 19.02 & 36.90 & 54.25 & 42.93 & 53.13 & 63.95 \\
\textbf{PDC-Net+ \textdagger~(H)} & \textbf{20.25} & \textbf{39.37} & \textbf{57.13} & \textbf{45.66} & \textbf{56.67} & \textbf{67.07} \\
\bottomrule
\end{tabular}%
}\vspace{1mm}
\vspace{-3mm}\label{tab:scannet}
\end{table}
We use the predicted matches to estimate the Essential matrix with RANSAC~\cite{ransac} and the 5-point Nist\'er algorithm~\cite{TPAMI.2004.17}, with an inlier threshold of 1 pixel divided by the focal length. The rotation matrix $\hat{R}$ and translation vector $\hat{T}$ are finally computed from the estimated Essential matrix. As evaluation metrics, in line with~\cite{YiTOLSF18,SarlinDMR20}, we report the area under the cumulative pose error curve (AUC) at different thresholds (5\textdegree, 10\textdegree, 20\textdegree), where the pose error is the maximum of the angular deviations between the ground-truth and predicted rotation and translation. For completeness, we also report the mean Average Precision (mAP) of the pose error for the same thresholds, following~\cite{OANet, RANSAC-flow}.
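For concreteness, we sketch this evaluation pipeline below in Python with OpenCV. The function and variable names (\texttt{estimate\_pose}, \texttt{kpts0}, \texttt{K0}, etc.) are ours, and the RANSAC settings beyond the stated 1-pixel threshold are illustrative assumptions, not the exact values used in our experiments.
\begin{verbatim}
import cv2
import numpy as np

def estimate_pose(kpts0, kpts1, K0, K1, thresh_px=1.0):
    # kpts0, kpts1: (N, 2) float32 arrays of matched image coordinates.
    # Inlier threshold of 1 pixel, divided by the (mean) focal length.
    f_mean = np.mean([K0[0, 0], K0[1, 1], K1[0, 0], K1[1, 1]])
    # Move to normalized camera coordinates.
    p0 = cv2.undistortPoints(kpts0.reshape(-1, 1, 2), K0, None).reshape(-1, 2)
    p1 = cv2.undistortPoints(kpts1.reshape(-1, 1, 2), K1, None).reshape(-1, 2)
    E, inliers = cv2.findEssentialMat(p0, p1, np.eye(3), method=cv2.RANSAC,
                                      prob=0.99999,
                                      threshold=thresh_px / f_mean)
    # Decompose the Essential matrix into rotation and translation.
    _, R, t, _ = cv2.recoverPose(E, p0, p1, np.eye(3), mask=inliers)
    return R, t.squeeze()

def pose_error(R_est, t_est, R_gt, t_gt):
    # Maximum of the angular errors (degrees) of rotation and translation.
    cos_r = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    err_R = np.rad2deg(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    cos_t = np.dot(t_est, t_gt) / (np.linalg.norm(t_est) *
                                   np.linalg.norm(t_gt))
    err_t = np.rad2deg(np.arccos(np.clip(np.abs(cos_t), 0.0, 1.0)))
    return max(err_R, err_t)
\end{verbatim}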
\parsection{Match selection for PDC-Net+} The original images are resized to have a minimum dimension of 480, similar to~\cite{RANSAC-flow}, and the intrinsic camera parameters are modified accordingly. Our PDC-Net+ network outputs flow fields at a quarter of the input image resolution, which would normally be bilinearly up-sampled to full resolution. For pose estimation, we instead directly select matches at the output resolution and then scale them to the input image resolution. From the estimated dense flow field, we identify the accurate correspondences by thresholding the predicted confidence map $P_R$ \eqref{eq:pr} as $P_{R=1} > 0.1$, and use them for pose estimation.
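A minimal sketch of this match selection step, assuming a flow field of shape $(2, h, w)$ and a confidence map of shape $(h, w)$ predicted at a quarter of the input resolution (array names are ours):
\begin{verbatim}
import numpy as np

def select_matches(flow, p_r, scale=4.0, conf_thresh=0.1):
    h, w = p_r.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = p_r > conf_thresh                       # P_{R=1} > 0.1
    # Reference coordinates and their correspondences in the query.
    ref = np.stack([xx[mask], yy[mask]], axis=-1).astype(np.float32)
    query = ref + flow[:, mask].T
    # Scale both to the input image resolution.
    return ref * scale, query * scale
\end{verbatim}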
\begin{figure}[b]
\centering%
\vspace{-4mm}
\subfloat{\includegraphics[width=0.23\textwidth]{image/frame-000090.color_frame-001155.color_evaluation_pdcnet.jpg} }~%
\subfloat{\includegraphics[width=0.23\textwidth]{image/frame-000090.color_frame-001155.color_evaluation_sup.png} }~%
\vspace{-2mm}
\caption{Comparison of matches found by our PDC-Net+ (left) and SuperPoint~\cite{superpoint} + SuperGlue~\cite{SarlinDMR20} (right) on an image pair of ScanNet~\cite{scannet}. Correct matches are shown as green lines and mismatches as red lines.}
\label{fig:scannet-matches}
\end{figure}
\parsection{Results on outdoor} Results are presented in Tab.~\ref{tab:YCCM}. Our PDC-Net+ approach outperforms all dense methods by a large margin. We note that RANSAC-Flow relies on a semantic segmentation network to better filter unreliable correspondences, \emph{e.g.}\ in the sky. Without this segmentation, its performance is drastically reduced. In contrast, our approach directly estimates highly robust and generalizable confidence maps, without the need for additional network components. The confidence masks of RANSAC-Flow and our approach are visually compared in Fig.~\ref{fig:arch-visual-sep-pertur}-\ref{fig:arch-visual-ransac}.
Interestingly, our dense PDC-Net+ approach also outperforms multiple standard sparse methods, such as SIFT~\cite{SIFT} and SuperPoint~\cite{superpoint}. Our approach is even competitive with the recent SuperGlue~\cite{SarlinDMR20}, which employs a graph-based network for sparse matching.
\parsection{Results on indoor} Results on the ScanNet dataset are presented in Tab.~\ref{tab:scannet}. Our PDC-Net+ (H) approach ranks second among all methods, slightly behind the very recent LoFTR~\cite{LOFTR}, which employs a Transformer-based architecture. However, this version of LoFTR is trained on ScanNet, while we only train on the outdoor MegaDepth dataset. Our approach thus demonstrates highly impressive generalization to the very different indoor ScanNet dataset. When considering only approaches that are not trained on ScanNet, our PDC-Net+ (H) ranks first, with a significant improvement over LoFTR~\textdagger. A visual example of our PDC-Net+ applied to a ScanNet image pair is presented in Fig.~\ref{fig:intro}, middle.
We also note that our approach outperforms all the sparse methods, including SuperPoint + SuperGlue, which relies on a Transformer architecture and is trained specifically for ScanNet.
The ScanNet dataset highlights the limitations of detector-based approaches. The images depict many homogeneous surfaces, such as white walls, on which detectors often fail. Dense approaches, such as ours, have an advantage in this context. A visual comparison between PDC-Net+ and SuperPoint + SuperGlue is shown in Fig.~\ref{fig:scannet-matches}.
\subsection{Image-based localization}
\begin{table}[b]
\centering
\vspace{-1mm}
\caption{Visual Localization on the Aachen Day-Night dataset~\cite{SattlerWLK12, SattlerMTTHSSOP18}. We follow the fixed evaluation protocol of~\cite{SattlerMTTHSSOP18} and report the percentage of query images localized within $X$ meters and $Y^\circ$ of the ground-truth pose.}
\vspace{-2mm}
\resizebox{0.49\textwidth}{!}{%
\begin{tabular}{ll|ccc}
\toprule
Method type & Method & 0.5m, 2\textdegree & 0.5m, 5\textdegree & 5m, 10\textdegree \\ \midrule
Sparse & D2-Net~\cite{Dusmanu2019CVPR} & 74.5 & 86.7 & \textbf{100.0} \\
& R2D2~\cite{R2D2descriptor} & 69.4 & 86.7 & 94.9 \\
& R2D2~\cite{R2D2descriptor} (K=20k) & 76.5 & \textbf{90.8} & \textbf{100.0} \\
& SuperPoint~\cite{superpoint} & 73.5 & 79.6 & 88.8 \\
& SuperPoint~\cite{superpoint} + SuperGlue~\cite{SarlinDMR20} & \textbf{79.6} & \textbf{90.8} & \textbf{100.0} \\
& Patch2Pix & 78.6 & 88.8 & 99.0 \\ \midrule
Sparse-to-dense & Superpoint + S2DNet~\cite{GermainBL20} & 74.5 & 84.7 & 100.0 \\ \midrule
Dense-to-sparse & Sparse-NCNet~\cite{Rocco20} & 76.5 & 84.7 & 98.0 \\
& DualRC-Net~\cite{DualRCNet} & \textbf{79.6} & \textbf{88.8} & \textbf{100.0} \\
& XRCNet-1600~\cite{Xreo} & 76.5 & 85.7 & \textbf{100.0} \\ \midrule
Dense & RANSAC-flow (ImageNet)~\cite{RANSAC-flow} + Superpoint~\cite{superpoint} & 74.5 & 87.8 & \textbf{100.0} \\
(flow regression) & RANSAC-flow (MOCO)~\cite{RANSAC-flow} + Superpoint~\cite{superpoint} & 74.5 & 88.8 & \textbf{100.0} \\
& PDC-Net~\cite{pdcnet} & 76.5 & 85.7 & \textbf{100.0} \\
& PDC-Net~\cite{pdcnet} + SuperPoint~\cite{superpoint} & \textbf{80.6} & 87.8 & \textbf{100.0} \\
& \textbf{PDC-Net+} & \textbf{80.6} & 89.8 & \textbf{100.0} \\
& \textbf{PDC-Net+ + SuperPoint} & 79.6 & \textbf{90.8} & \textbf{100.0} \\
\bottomrule
\end{tabular}%
}
\label{tab:aachen}
\end{table}
Finally, we evaluate our approach for the task of image-based localization.
Image-based localization aims at estimating the absolute 6 DoF pose of a query image with respect to a 3D model. It requires sets of accurate and localized matches as input.
We evaluate our approach on the \textbf{Aachen Benchmark}~\cite{SattlerMTTHSSOP18, SattlerWLK12}. It features 4,328 daytime reference images taken with a handheld smartphone, for which ground truth camera poses are provided.
\parsection{Evaluation metric} Each estimated query pose is compared to the ground-truth pose by computing the absolute orientation error $\left | R_{err} \right |$ and the position error $T_{err}$. $R_{err}$ is the angular deviation between the estimated and ground-truth rotation matrices. The position error $T_{err}$ is measured as the Euclidean distance $\left \|\hat{T} - T \right \|$ between the estimated position $\hat{T}$ and the ground-truth position $T$.
Because the ground-truth poses are not publicly available, evaluation is done by submitting to the evaluation server of~\cite{SattlerMTTHSSOP18}. It then reports the percentage of query images localized within $X$ meters and $Y^\circ$ of the ground-truth pose, \emph{i.e.}\ $\left | R_{err} \right | < Y \textrm{ and } T_{err} < X$, for predefined $X$ and $Y$ thresholds.
\parsection{Image matching in fixed pipeline} We evaluate on the local feature challenge of the Visual Localization benchmark~\cite{SattlerMTTHSSOP18, SattlerWLK12}. We follow the evaluation protocol of~\cite{SattlerMTTHSSOP18}, according to which up to 20 relevant day-time images with known camera poses are provided for each of the 98 night-time images. The aim is to localize the latter using only these day-time images. We first compute matches between the day-time images, which are fed to COLMAP~\cite{SchonbergerF16} to build a 3D point cloud. We also provide matches between the queries and the provided short-list of retrieved images, which are used by COLMAP to compute a localization estimate for the queries, through 2D-3D matches.
For PDC-Net+ (and PDC-Net), we resize images so that their smallest dimension is 600. From the estimated dense flow fields, we select correspondences for which $P_{R=1} > 0.1$ at a quarter of the input image resolution and further scale and round them to the original image resolution. We compute matches from both directions and keep all the confident correspondences.
Following~\cite{RANSAC-flow}, we also evaluate a version, referred to as PDC-Net+ + SuperPoint, for which we only use correspondences at a sparse set of keypoints detected by SuperPoint~\cite{superpoint}. We accept a match if the distance to the closest keypoint is below $d=4$ (see Sec.~\ref{sec:geometric-inference}) for an image size where the smallest dimension is 600. As an additional requirement, matches must also have a corresponding confidence measure $P_{R=1} > 0.1$. For SuperPoint, we use the default parameters with a Non-Max Suppression window of 4. We also compute matches from both directions and keep all the confident correspondences.
We present results in Table~\ref{tab:aachen}. We compare PDC-Net+ to multiple dense methods, namely RANSAC-Flow~\cite{RANSAC-flow} and PDC-Net~\cite{pdcnet}. For reference, we also report results of several recent sparse methods, as well as sparse-to-dense and dense-to-sparse approaches.
Using dense matches, our PDC-Net+ outperforms previous dense methods at all thresholds. In particular, we achieve a substantial performance gain of $6 \%$ compared to RANSAC-Flow on the most accurate threshold.
Moreover, PDC-Net+ is also on par or better than all methods from the other categories.
Interestingly, PDC-Net+ + SuperPoint also substantially improves upon SuperPoint, as in Sec.~\ref{subsec:uncertainty-est}. This demonstrates that using our dense flow to match sparse sets of keypoints is a valid, and indeed better, alternative to direct matching of descriptors.
\begin{table}[t]
\centering
\caption{Retrieval-based Localization on the Aachen Day-Night dataset~\cite{SattlerWLK12} for a pose error threshold of 5m, 10\textdegree \, in \%. Results for the retrieval methods are extracted from the official leaderboard~\cite{SattlerMTTHSSOP18}.}
\vspace{-3mm}
\resizebox{0.25\textwidth}{!}{%
\begin{tabular}{l|c|c}
\toprule
Method & \textbf{Day} & \textbf{Night} \\\midrule
NetVLAD~\cite{NetVLAD} & 18.9 & 14.3 \\
LLN~\cite{LLN} & 20.8 & 17.3 \\
DenseVLAD~\cite{DenseVLAD} & \textbf{22.8} & \textbf{19.4} \\
PDC-Net~\cite{pdcnet} retrieval & 20.1 & \textbf{19.4} \\
\textbf{PDC-Net+ retrieval} & 17.1 & \textbf{19.4} \\
\bottomrule
\end{tabular}%
}\vspace{-3mm}
\label{tab:aachen-retrieval}
\end{table}
\begin{figure}[b]
\centering%
\vspace{-3mm}
\includegraphics[width=0.49\textwidth]{image/IMG_20161227_172456.jpg_final.pdf}
\vspace{-8mm}
\caption{Example of retrieved images for a night query of the Aachen benchmark~\cite{SattlerWLK12, SattlerMTTHSSOP18}, using PDC-Net+.}
\label{fig:retrieval}
\end{figure}
\parsection{Retrieval} Another application enabled by our uncertainty prediction is image retrieval. For a specific query image, we compute the flow fields and confidence maps $P_R$ \eqref{eq:pr} relating the query to all images of the database. We then rank the database images based on the number of confident matches for which $P_R > \gamma$, where $\gamma$ is a scalar threshold in $[0, 1)$.
For computational efficiency, we compute the flow and confidence map by resizing the images to a relatively low resolution, \emph{e.g.}\ such that the smallest dimension is 400. We use a single forward pass of the network for prediction.
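A minimal sketch of this ranking procedure, where \texttt{predict\_flow\_and\_conf} stands for a network forward pass returning the flow and confidence map (the name and the threshold value are illustrative):
\begin{verbatim}
import numpy as np

def rank_database(query_img, db_imgs, predict_flow_and_conf, gamma=0.5):
    scores = []
    for db_img in db_imgs:
        _, p_r = predict_flow_and_conf(query_img, db_img)  # map P_R
        scores.append(int((p_r > gamma).sum()))   # confident matches
    # Indices of database images, most confident matches first.
    return np.argsort(scores)[::-1]
\end{verbatim}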
We evaluate on the Aachen Benchmark~\cite{SattlerMTTHSSOP18, SattlerWLK12} with the 824 daytime and 98 nighttime query images. As previously, the evaluation is done through the server of~\cite{SattlerMTTHSSOP18}, which only reports metrics for pre-defined error thresholds.
However, for many query images, the pose error corresponding to the nearest retrieved database image exceeds the pre-defined largest threshold of 5m, 10\textdegree.
Nevertheless, we present pose accuracy results for this largest threshold in Tab.~\ref{tab:aachen-retrieval}.
We compare our approach based on PDC-Net+ to standard retrieval methods: NetVLAD~\cite{NetVLAD}, Landmark Localization Network (LLN)~\cite{LLN} and DenseVLAD~\cite{DenseVLAD}.
Both PDC-Net+ and PDC-Net achieve competitive results compared to approaches that are specifically designed for the image retrieval task. Our single network can thus be applied for both retrieval and matching steps in the localization pipeline.
In Fig.~\ref{fig:retrieval}, we present an example of the 5 closest images retrieved by PDC-Net+ for a night query.
\parsection{3D reconstruction} We also qualitatively show the usability of our approach for dense 3D reconstruction. In Fig.~\ref{fig:aachen}, we visualize the 3D point cloud produced by PDC-Net+ and COLMAP~\cite{SchonbergerF16}, corresponding to the fixed pipeline of the Visual Localization benchmark~\cite{SattlerMTTHSSOP18, SattlerWLK12}.
\begin{figure}[t]
\centering%
\includegraphics*[width=\columnwidth]{image/aachen_rgb_0_07_V2.jpg}
\vspace{-5mm}
\caption{3D reconstruction of Aachen~\cite{SattlerWLK12} using the dense correspondences and uncertainties predicted by PDC-Net+.}\vspace{-3mm}
\label{fig:aachen}
\end{figure}
\subsection{Additional ablation study}
\label{subsec:ablation-study}
Here, we perform a detailed analysis of our approach.
\subsubsection{Ablation of PDC-Net}
We first ablate components common to PDC-Net and PDC-Net+ in Tab.~\ref{tab:ablation}. As baseline, we use a simplified version of GLU-Net, called BaseNet~\cite{GLUNet}. It is a three-level pyramidal network predicting the flow between an image pair.
Our probabilistic approach integrated in this smaller architecture is termed PDC-Net-s.
All methods are trained using only the first-stage training of the initial PDC-Net, described in Sec.~\ref{subsec:arch}.
\parsection{Probabilistic model (Tab.~\ref{tab:ablation},~top section)}
We first compare BaseNet to our approach PDC-Net-s, which models the flow distribution with a constrained mixture of Laplacians (Sec.~\ref{sec:constained-mixture}). On both KITTI-2015 and MegaDepth, our approach brings a significant improvement in terms of flow accuracy. For pose estimation on YFCC100M, using all correspondences of BaseNet performs very poorly. This demonstrates the crucial importance of robust uncertainty estimation.
While an unconstrained mixture of Laplacians already drastically improves upon the single Laplace component, the permutation invariance of the former leads to poor uncertainty estimates, shown by the high AUSE. Constraining the mixture instead results in better metrics for both the flow and the uncertainty.
\parsection{Uncertainty architecture (Tab.~\ref{tab:ablation}, 2$^{nd}$ section)} Although the compared uncertainty decoder architectures achieve similar quality in flow prediction (AEPE and Fl), they provide notable differences in uncertainty estimation (AUSE).
Only using the correlation uncertainty module leads to the best results on YFCC100M, since this module can efficiently discard unreliable matching regions, in particular compared to the common decoder approach.
However, this module alone does not take into account motion boundaries. This leads to poor AUSE on KITTI-2015, which contains independently moving objects. Our final architecture (Fig.~\ref{fig:arch}), additionally integrating the mean flow into the uncertainty estimation, offers the best compromise.
\parsection{Perturbation data (Tab.~\ref{tab:ablation},~3$^{rd}$ section)}
While introducing the perturbations does not help the flow prediction for BaseNet, it provides significant improvements in uncertainty \emph{and} in flow performance for our PDC-Net-s. This emphasizes that the improvement of the uncertainty estimates originating from introducing the perturbations also leads to improved and more generalizable flow predictions.
\begin{table}[t]
\centering
\caption{Ablation study. In the top part, different probabilistic models are compared (Sec.~\ref{subsec:proba-model}--\ref{sec:constained-mixture}). In the second part, a constrained mixture is used, and different architectures for uncertainty estimation are compared (Sec.~\ref{sec:uncertainty-arch}). In the third part, we analyze the impact of our training data with perturbations (Sec.~\ref{subsec:perturbed-data}). For all other sections, we model the flow as a constrained mixture of Laplace distributions, and we use our uncertainty prediction architecture. In the fourth part, we show the impact of propagating the uncertainty estimates in a multi-scale architecture (Sec.~\ref{sec:uncertainty-arch}).
In the bottom part, we compare different parametrizations of the constrained mixture (Sec.~\ref{sec:constained-mixture}).}
\vspace{-2mm}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{l@{~}c@{~~}c@{~~}c@{~~}|@{~~}c@{~~}c@{~~}c@{~~}|@{~~}c@{~~}c}
\toprule
& \multicolumn{3}{c}{\textbf{KITTI-2015}} & \multicolumn{3}{c}{\textbf{MegaDepth}} & \multicolumn{2}{c}{\textbf{YFCC100M}}\\
& AEPE & Fl (\%) & AUSE & PCK-1 (\%) & PCK-5 (\%) & AUSE & mAP @5\textdegree & mAP @10\textdegree \\ \midrule
BaseNet (L1-loss) & 7.51 & 37.19 & - & 20.00 & 60.00 & - & 15.58 & 24.00 \\
Single Laplace & 6.86 & 34.27 & 0.220 & 27.45 & 62.24 & 0.210 & 26.95 & 37.10 \\
Unconstrained Mixture & 6.60 & 32.54 & 0.670 & 30.18 & 66.24 & 0.433 & 31.18 & 42.55 \\
Constrained Mixture (PDC-Net-s) & \textbf{6.66} & \textbf{32.32} & \textbf{0.205} & \textbf{32.51} & \textbf{66.50} & \textbf{0.210} & \textbf{33.77} & \textbf{45.17} \\
\midrule
Common Dec. & 6.41 & 32.03 & \textbf{0.171} & 31.93 & \textbf{67.34} & 0.213 & 31.13 & 42.21 \\
Corr unc. module & \textbf{6.32} & \textbf{31.12} & 0.418 & 31.97 & 66.80 & 0.278 & \textbf{33.95} & \textbf{45.44} \\
Unc. Dec. (Fig~\ref{fig:arch}) (PDC-Net-s) & 6.66 & 32.32 & 0.205 & \textbf{32.51} & 66.50 & \textbf{0.210} & 33.77 & 45.17 \\ \midrule
BaseNet \textbf{w/o} Perturbations & 7.21 & 37.35 & - & 20.74 & 59.35 & - & 15.15 & 23.88 \\
BaseNet \textbf{w} Perturbations & 7.51 & 37.19 & - & 20.00 & 60.00 & - & 15.58 & 24.00 \\
PDC-Net-s \textbf{w/o} Perturbations & 7.15 & 35.28 & 0.256 & 31.53 & 65.03 & 0.219 & 32.50 & 43.17 \\
PDC-Net-s \textbf{w} Perturbations & \textbf{6.66} & \textbf{32.32} & \textbf{0.205} & \textbf{32.51} & \textbf{66.50} & \textbf{0.210} & \textbf{33.77} & \textbf{45.17} \\
\midrule
Uncert. not propagated & 6.76 & \textbf{31.84} & 0.212 & 29.9 & 65.13 & 0.213 & 31.50 & 42.19 \\
PDC-Net-s & \textbf{6.66} & 32.32 & \textbf{0.205} & \textbf{32.51} & \textbf{66.50} & \textbf{0.197} & \textbf{33.77} & \textbf{45.17} \\
\midrule
$0 \leq \sigma^2_1 \leq 1$, $2 \leq \sigma^2_2 \leq \infty$ & 6.69 & 32.58 & \textbf{0.181} & 32.47 & 65.45 & 0.205 & 30.50 & 40.75 \\
$\sigma^2_1=1$, $2 \leq \sigma^2_2 \leq \infty$
& 6.66 & 32.32 & 0.205 & \textbf{32.51} & 66.50 & \textbf{0.197} & \textbf{33.77} & \textbf{45.17} \\
$\sigma^2_1=1$, $2 \leq \sigma^2_2 \leq \beta_2^+=HW$ & \textbf{6.61} & \textbf{31.67} & 0.208 & 31.83 & \textbf{66.52} & 0.204 & 33.05 & 44.48 \\
\bottomrule
\end{tabular}%
}\label{tab:ablation}
\vspace{-3mm}
\end{table}
\parsection{Propagation of uncertainty components (Tab.~\ref{tab:ablation},~4$^{th}$ section)}
For all presented datasets and metrics, propagating the uncertainty predictions (Sec.~\ref{sec:uncertainty-arch}) boosts the performance of the final network. Only the Fl metric on KITTI-2015 is slightly worse.
\parsection{Constrained mixture parametrization (Tab.~\ref{tab:ablation}, bottom section)}
We compare fixing the first component $\sigma^2_1=\beta_1^- =\beta_1^+ = 1.0$ with a version where the first component is instead bounded as $\beta_1^- = 0.0 \leq \sigma^2_1 \leq \beta_1^+=1.0$. Both networks obtain similar flow performance on KITTI-2015 and MegaDepth. Only on the optical flow dataset KITTI-2015 does the second alternative, $\beta_1^- = 0.0 \leq \sigma^2_1 \leq \beta_1^+=1.0$, obtain a better AUSE. This is because KITTI-2015 features ground-truth displacements of much smaller magnitude than the geometric matching datasets. Estimating the flow on KITTI thus results in a larger proportion of very small flow errors (lower EPE and higher PCK than on geometric matching data). As a result, on this dataset, being able to model a very small error (with $\sigma^2_1 \leq 1$) is beneficial. However, fixing $\sigma^2_1=1.0$ instead produces a better AUSE on MegaDepth and gives significantly better results for pose estimation on YFCC100M.
We then compare leaving the second component's upper bound unconstrained, as $2 \leq \sigma^2_2 \leq \infty$ (PDC-Net-s), with a bounded version $2 \leq \sigma^2_2 \leq \beta_2^+ = HW$. All results are very similar; the upper-bounded network obtains slightly better flow results but slightly worse uncertainty performance (AUSE). However, we found that constraining $\beta_2^+$ leads to more stable training in practice. We therefore adopt this setting, using the constraints $\sigma^2_1 = \beta_1^- = \beta_1^+=1$ and $2.0 = \beta_2^- \leq \sigma^2_2 \leq \beta_2^+ = HW$ for our final network (see Sec.~\ref{subsec:arch}).
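For illustration, one way to realize such a variance constraint is to map a raw network output $h_2$ to the interval $[\beta_2^-, \beta_2^+]$ with a scaled sigmoid; the exact constraint function is given by equation \eqref{eq:constraint} of the main paper, so the sketch below is an assumption rather than our exact implementation:
\begin{verbatim}
import torch

def constrain_sigma2(h2, beta_minus=2.0, beta_plus=None, H=512, W=512):
    # Map raw output h2 to beta_2^- <= sigma_2^2 <= beta_2^+ = HW.
    beta_plus = float(H * W) if beta_plus is None else beta_plus
    return beta_minus + (beta_plus - beta_minus) * torch.sigmoid(h2)
\end{verbatim}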
\subsubsection{Ablation of PDC-Net+ versus PDC-Net}
We analyze aspects specific to our enhanced PDC-Net+ approach in Tab.~\ref{tab:ablation-pdcnetv2}. For all experiments, we use PDC-Net+ with only the first stage training, described in Sec.~\ref{subsec:arch}.
\parsection{Number of moving objects (Tab.~\ref{tab:ablation-pdcnetv2}, top)} Introducing multiple independently moving objects during training leads to better flow results on all datasets. Optical flow data particularly benefits from this training, with a substantial improvement of 3\% in the Fl metric. However, it results in a slight decrease in AUSE, due to the difficulty of modeling the true errors in the presence of multiple objects.
\parsection{Training mask (Tab.~\ref{tab:ablation-pdcnetv2}, bottom)} Training with the injective mask (Sec.~\ref{sec:injective-mask}) leads to a substantial improvement on KITTI-2015 compared to no mask. Indeed, the injective training mask ensures that the ground-truth flow is injective, while still supervising the flow in occluded areas. It thus allows the network to learn two crucial properties, namely to match independently moving objects and to extrapolate the flow in occluded regions.
On the other hand, using the occlusion mask leads to significantly worse results. This is because the network does not learn to extrapolate the predicted flow in occluded areas, which is essential for the optical flow task.
As a result, our injective mask is the best alternative for training a single network that obtains strong results on all datasets.
\begin{table}[t]
\centering
\caption{Ablation study of PDC-Net+. In the top part, we compare training with a single or multiple independently moving objects (Sec.~\ref{sec:training-strategy}). In the bottom part, different masking strategies during training are compared (Sec.~\ref{sec:injective-mask}). All metrics are computed with a single forward-pass (D).}
\vspace{-2mm}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{lccc|ccc|cc}
\toprule
& \multicolumn{3}{c}{\textbf{KITTI-2015}} & \multicolumn{3}{c}{\textbf{MegaDepth}} & \multicolumn{2}{c}{\textbf{YFCC100M}}\\
& AEPE & Fl (\%) & AUSE & PCK-1 (\%) & PCK-5 (\%) & AUSE & mAP @5\textdegree & mAP @10\textdegree \\ \midrule
1 object & 6.70 & 19.61 & \textbf{0.123} & 52.62 & 73.78 & \textbf{0.215} & 44.38 & 54.04 \\
Multiple objects & \textbf{5.55} & \textbf{16.75} & 0.136 & \textbf{54.89} & \textbf{75.92} & 0.238 & \textbf{44.45} & \textbf{54.70} \\
\midrule
No mask & 6.01 & 18.07 & 0.139 & 53.66 & 75.05 & 0.247 & 39.92 & 50.94 \\
Injective mask & \textbf{5.55} & \textbf{16.75} & 0.136 & 54.89 & 75.92 & \textbf{0.238} & \textbf{44.45} & 54.70 \\
Occlusion mask & 9.64 & 19.64 & \textbf{0.103} & \textbf{56.41} & \textbf{77.37} & 0.247 & 44.40 & \textbf{55.68} \\
\bottomrule
\end{tabular}%
}
\vspace{-3mm}
\label{tab:ablation-pdcnetv2}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a probabilistic deep network for estimating dense image-to-image correspondences together with an associated confidence estimate. Our network is trained to predict the parameters of the conditional probability density of the flow. To this end, we introduce a constrained mixture of Laplace distributions. To achieve robust and generalizable uncertainty prediction, we propose a dedicated architecture and an improved self-supervised data generation pipeline.
We further introduce an enhanced self-supervised strategy that better captures independently moving objects and occlusions, and develop an injective training mask, which benefits learning in the presence of occlusions.
We comprehensively evaluate our final PDC-Net+ approach for multiple different applications and settings. Our approach sets a new state-of-the-art on geometric matching and optical flow datasets, while achieving more robust confidence estimates. Moreover, our uncertainty estimation opens the door to applications traditionally dominated by sparse methods, such as pose estimation and image-based localization.
Future work includes developing even more realistic self-supervised training pipelines, capable of better capturing 3D scenes. An alternative direction is to explore unsupervised learning strategies or the incorporation of 3D constraints as supervision. Another direction involves applying our generic PDC-Net+ formulation to improved backbone and flow estimation architectures.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work was supported by the ETH Z\"urich Fund (OK), a Huawei Gift, Huawei Technologies Oy (Finland), Amazon AWS, and an Nvidia GPU grant.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Detailed derivation of probabilistic model}
\label{sec-sup:proba}
Here we provide the details of the derivation of our uncertainty estimate.
\parsection{Probabilistic formulation} We model the flow estimation as a probabilistic regression with a constrained mixture density of Laplace distributions (Sec.~\ref{sec:constained-mixture} of the main paper). Our mixture model, corresponding to equation \eqref{eq:mixture} of the main paper, is expressed as,
\begin{equation}
\label{eq:mixture-sup}
p\left(y | \varphi \right) = \sum_{m=1}^{M} \alpha_{m} \mathcal{L}(y| \mu, \sigma^2_m)
\end{equation}
where, for each component $m$, the bi-variate Laplace distribution $\mathcal{L}(y| \mu, \sigma^2_m)$ is computed as the product of two independent uni-variate Laplace distributions,
\begin{subequations}
\label{eq:simple-laplace}
\begin{align}
\mathcal{L}(y| \mu, \sigma^2_m) &= \mathcal{L}(u,v| \mu_u, \mu_v, \sigma^2_u, \sigma^2_v) \\
&= \mathcal{L}(u| \mu_u, \sigma^2_u) \cdot \mathcal{L}(v| \mu_v, \sigma^2_v) \\
&= \frac{1}{\sqrt{2 \sigma_u^2}} e^{-\sqrt{\frac{2}{\sigma_u^2}}|u-\mu_u|} \cdot \nonumber \\
& \hspace{5mm} \frac{1}{\sqrt{2 \sigma_v^2}}e^{-\sqrt{\frac{2}{\sigma_v^2}}|v-\mu_v|}
\end{align}
\end{subequations}
where $\mu = [\mu_u, \mu_v]^T \in \mathbb{R}^2$ and $\sigma^2_m = [\sigma^2_u, \sigma^2_v]^T \in \mathbb{R}^2$ are respectively the mean and the variance parameters of the distribution $\mathcal{L}(y| \mu, \sigma^2_m)$. In this work, we additionally impose equal variances in both flow directions, such that $\sigma^2_m = \sigma^2_u = \sigma^2_v \in \mathbb{R}$. As a result, equation \eqref{eq:simple-laplace} simplifies and, when inserted into \eqref{eq:mixture-sup}, we obtain,
\begin{equation}
\label{eq:mixture-sup-final}
p\left(y | \varphi \right) =\sum_{m=1}^{M} \alpha_{m} \frac{1}{2 \sigma_m^2} e^{-\sqrt{\frac{2}{\sigma_m^2}}|y-\mu|_1}.
\end{equation}
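As a sanity check, the density \eqref{eq:mixture-sup-final} translates directly into code; a minimal sketch for a single pixel (array shapes are ours):
\begin{verbatim}
import numpy as np

def laplace_mixture_pdf(y, mu, alpha, sigma2):
    # y, mu: (2,) flow vectors; alpha, sigma2: (M,) mixture parameters.
    l1 = np.abs(y - mu).sum()                       # |y - mu|_1
    comp = np.exp(-np.sqrt(2.0 / sigma2) * l1) / (2.0 * sigma2)
    return float((alpha * comp).sum())
\end{verbatim}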
\begin{figure}[t]
\centering
(A) MegaDepth \\
\includegraphics[width=0.47\textwidth]{sup-images/conf_map_mega.jpg} \\
(B) KITTI-2015 \\
\includegraphics[width=0.47\textwidth]{ sup-images/conf_map_kitti.jpg} \\
\caption{Visualization of the mixture parameters $(\alpha_m )_{m=1}^M$ and $\sigma^2_2$ predicted by our network PDC-Net+, on multiple image pairs. PDC-Net+ has $M=2$ Laplace components and, here, we do not represent the scale parameter $\sigma^2_1$, since it is fixed as $\sigma^2_1 = 1.0$. We also show the resulting confidence maps $P_R$ for multiple values of $R$. }\vspace{-4mm}
\label{fig:conf-map}
\end{figure}
\parsection{Confidence estimation} Our network $\Phi$ thus outputs, for each pixel location, the parameters of the predictive distribution, \emph{i.e.}\ the mean flow $\mu$ along with the variance $\sigma^2_m$ and weight $\alpha_m$ of each component, as $\big( \mu, (\alpha_m )_{m=1}^M, ( \sigma^2_m )_{m=1}^M \big) = \varphi(X; \theta)$. However, we aim at obtaining a \emph{single} confidence value representing the reliability of the estimated flow vector $\mu$.
As a final confidence measure, we thus compute the probability $P_R$ of the true flow being within a radius $R$ of the estimated mean flow vector $\mu$. This is expressed as,
\begin{subequations}
\begin{align}
P_R & = P(\|y - \mu\|_\infty < R) \\
& = \int_{\{y\in \mathbb{R}^2: \|y - \mu\|_\infty < R\}}\! p(y|\varphi) dy \\
&= \sum_{m} \alpha_m \int_{\mu_u-R}^{\mu_u+R}
\frac{1}{\sqrt{2} \sigma_m} e^{-\sqrt{2}\frac{|u-\mu_u|}{\sigma_m}} du \nonumber \\
& \hspace{15mm} \int_{\mu_v-R}^{\mu_v+R}
\frac{1}{\sqrt{2} \sigma_m} e^{-\sqrt{2}\frac{|v-\mu_v|}{\sigma_m}}dv \\
&= \sum_{m} \alpha_m \left[1-\exp (-\sqrt{2}\frac{R}{\sigma_m}) \right]^2
\end{align}
\end{subequations}
This confidence measure is used to identify the accurate matches by thresholding $P_R$. In Fig.~\ref{fig:conf-map}, we visualize the estimated mixture parameters $(\alpha_m )_{m=1}^M, ( \sigma^2_m )_{m=1}^M$, and the resulting confidence map $P_R$ for multiple image pair examples.
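The closed-form expression derived above maps directly to code; a minimal sketch for per-pixel mixture parameters (array shapes are ours):
\begin{verbatim}
import numpy as np

def confidence_map(alpha, sigma2, R=1.0):
    # alpha, sigma2: (M, h, w) mixture weights and variances.
    # P_R = sum_m alpha_m * (1 - exp(-sqrt(2) R / sigma_m))^2
    sigma = np.sqrt(sigma2)
    return (alpha * (1.0 - np.exp(-np.sqrt(2.0) * R / sigma)) ** 2).sum(axis=0)
\end{verbatim}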
\section{Training details}
\label{sec-sup:training}
In this section, we derive the numerically stable negative log-likelihood loss used for training our network PDC-Net+. We also describe the employed training datasets in detail.
\subsection{Training loss}
\label{sec-sup:training-loss}
Similar to conventional approaches, probabilistic methods are generally trained using a set of \emph{i.i.d.}\ image pairs $\mathcal{D} = \left \{X^{(n)}, Y^{(n)} \right \}_{n=1}^N$.
The negative log-likelihood provides a general framework for fitting a distribution to the training dataset as,
\begin{subequations}
\label{eq:nll-sup}
\begin{align}
L(\theta; \mathcal{D}) &= - \frac{1}{N} \sum_{n=1}^{N} \log p\left(Y^{(n)} | \Phi(X^{(n)}; \theta) \right) \\
&= - \frac{1}{N} \sum_{n=1}^{N} \sum_{ij} \log p\big(y^{(n)}_{ij} | \varphi_{ij}(X^{(n)};\theta)\big)
\end{align}
\end{subequations}
Note that, to avoid clutter, we do not include our injective training mask (Sec.~\ref{sec:injective-mask} of the main paper) in the loss formulation here.
Inserting \eqref{eq:mixture-sup-final} into \eqref{eq:nll-sup}, we obtain for the last term the following expression,
\begin{subequations}
\label{eq:p-sup}
\begin{align}
L_{ij} &= -\log p\big(y^{(n)}_{ij} | \varphi_{ij}(X^{(n)};\theta)\big) \\
&= -\log \left ( \sum_{m=1}^{M} \alpha_{m} \frac{1}{2 \sigma_m^2} e^{ -\sqrt{\frac{2}{\sigma_m^2}}|y-\mu|_1 } \right) \\
&= -\log \left( \sum_{m=1}^{M} \frac{e^{\tilde{\alpha}_m}}{\sum_{m=1}^{M}e^{\tilde{\alpha}_m}} \frac{1}{2 \sigma_m^2} e^{ -\sqrt{\frac{2}{\sigma_m^2}}|y-\mu|_1 } \right) \\
&=\log \left( \sum_{m=1}^{M}e^{\tilde{\alpha}_m} \right) \nonumber \\
& \;\;\;\;\; -\log \left( \sum_{m=1}^{M}
e^{\tilde{\alpha}_m}\frac{1}{2 \sigma_m^2} e^{ -\sqrt{\frac{2}{\sigma_m^2}}|y-\mu|_1 } \right) \\
&= \log \left( \sum_{m=1}^{M}e^{\tilde{\alpha}_m} \right) \nonumber \\ &
\;\;\;\;\; -\log \left(\sum_{m=1}^{M} e^{\tilde{\alpha}_m - \log(2) - s_m -\sqrt{2}e^{-\frac{1}{2}s_m}.|y-\mu|_1} \right)
\end{align}
\end{subequations}
where $s_m = \log(\sigma^2_m)$. Indeed, in practice, to avoid numerical issues such as division by zero, we parametrize the variance through $s_{m} = \log(\sigma^2_m)$ for all components $m \in \left \{1, \ldots, M \right \}$ of the mixture density model. For the implementation of the loss, we use a numerically stable logsumexp function.
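A minimal PyTorch sketch of this numerically stable loss, assuming raw weight logits $\tilde{\alpha}_m$ and log-variances $s_m$ of shape $(B, M, H, W)$ (tensor names are ours):
\begin{verbatim}
import math
import torch

def nll_loss(y, mu, alpha_tilde, s):
    # y, mu: (B, 2, H, W) ground-truth and predicted mean flow.
    l1 = (y - mu).abs().sum(dim=1, keepdim=True)    # |y - mu|_1
    # log alpha_m via a softmax computed entirely in log-space.
    log_alpha = alpha_tilde - torch.logsumexp(alpha_tilde, dim=1,
                                              keepdim=True)
    # Log of each weighted mixture component, as in (eq:p-sup).
    log_comp = (log_alpha - math.log(2.0) - s
                - math.sqrt(2.0) * torch.exp(-0.5 * s) * l1)
    return -torch.logsumexp(log_comp, dim=1).mean()
\end{verbatim}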
With a simple regression loss such as the L1 loss, the large errors represented by the heavy tail of the distribution in Fig.~\ref{fig:distribution} of the main paper have a disproportionately large impact on the loss, preventing the network from focusing on the more accurate predictions. On the contrary, the loss \eqref{eq:p-sup} allows the network to down-weight the contribution of these examples by predicting a high variance parameter for them. Modelling the flow estimation as a conditional predictive distribution thus improves the accuracy of the estimated flow itself.
\subsection{Injective mask}
We show qualitative examples of the training image pairs used during the first training stage, as well as the corresponding training mask (the complement of the injective mask), in Fig.~\ref{fig:mask-examples}.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{sup-images/reprojection_mask/ex_1_.png} \\
\includegraphics[width=0.48\textwidth, trim=0 0 0 0]{sup-images/reprojection_mask/ex_1.png} \\
\includegraphics[width=0.48\textwidth]{sup-images/reprojection_mask/ex_2.png} \\
\includegraphics[width=0.48\textwidth]{sup-images/reprojection_mask/ex_4_.png} \\
\includegraphics[width=0.48\textwidth]{sup-images/reprojection_mask/ex_5.png} \\
\includegraphics[width=0.48\textwidth]{sup-images/reprojection_mask/ex_6_.png} \\
\vspace{-2mm}
\caption{Examples of training image pairs used during the first training stage. On the right, we represent the injective mask, where yellow areas are equal to one while purples ones are zero.}
\label{fig:mask-examples}
\end{figure}
\subsection{Training datasets}
Due to the limited amount of available real correspondence data, most matching methods resort to self-supervised training, relying on synthetic image warps generated automatically. We here provide details on the synthetic dataset that we use for self-supervised training, as well as additional information on the implementation of the perturbation data (Sec.~\ref{subsec:perturbed-data} of the main paper). Finally, we also describe the generation of the sparse ground-truth correspondence data from the MegaDepth dataset~\cite{megadepth}.
\parsection{Base synthetic dataset} For our base synthetic dataset, we use the same data as in~\cite{GOCor}. Specifically, pairs of images are created by warping a collection of images from the DPED~\cite{Ignatov2017}, CityScapes~\cite{Cordts2016} and ADE-20K~\cite{Zhou2019} datasets, according to synthetic affine, TPS and homography transformations. The transformation parameters are the ones originally used in DGC-Net~\cite{Melekhov2019}.
These image pairs are further augmented with additional random independently moving objects. To do so, the objects are sampled from the COCO dataset~\cite{coco}, and inserted on top of the images of the synthetic data using their segmentation masks. To generate motion, we randomly sample different affine transformation parameters for each foreground object, which are independent of the background transformations.
This results in 40K image pairs, cropped at resolution $520 \times 520$.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{ sup-images/perturbation_data.pdf}
\caption{Visualization of our perturbations applied to a pair of reference and query images (Sec.~\ref{subsec:perturbed-data} of the main paper). }\vspace{-2mm}
\label{fig:pertu}
\end{figure}
\parsection{Perturbation data for robust uncertainty estimation} Even with independently moving objects, the network still learns to primarily rely on interpolation when estimating the flow field and corresponding uncertainty map relating an image pair. We here describe in more detail our data generation strategy for more robust uncertainty prediction. From a base flow field $\tilde{Y} \in \mathbb{R}^{H \times W \times 2}$ relating a reference image $\tilde{I}^r \in \mathbb{R}^{H \times W \times 3}$ to a query image $\tilde{I}^q \in \mathbb{R}^{H \times W \times 3}$, we introduce a residual flow $\epsilon = \sum_i \varepsilon_i$, by adding small local perturbations $\varepsilon_i \in \mathbb{R}^{H \times W \times 2}$.
More specifically, we create the residual flow by first generating an elastic deformation motion field $E$ on a dense grid of dimension $H \times W$, as described in~\cite{Simard2003}. Since we only want to include perturbations in multiple small regions, we generate masks $S_i \in [0, 1]^{H \times W}$, each delimiting the area on which to apply one local perturbation $\varepsilon_i$. The final residual flow (the perturbations) thus takes the form $\epsilon = \sum_i \varepsilon_i$, where $\varepsilon_i = E \cdot S_i$.
Finally, the query image $I^q = \tilde{I}^q$ is left unchanged, while the reference $I^r$ is generated by warping $\tilde{I}^r$ according to the residual flow $\epsilon$, as $I^r(x) = \tilde{I}^r(x+\epsilon(x))$ for $x \in \mathbb{R}^2$. The final perturbed flow map $Y$ between $I^r$ and $I^q$ is obtained by composing the base flow $\tilde{Y}$ with the residual flow $\epsilon$, as $Y(x) = \tilde{Y}(x+\epsilon(x)) + \epsilon(x)$.
In practice, for the elastic deformation field $E$, we use the implementation of~\cite{info11020125}. The masks $S_i$ should take values between 0 and 1 and offer a smooth transition between the two, so that the perturbations appear smoothly. To create each mask $S_i$, we thus generate a 2D Gaussian centered at a random location and with a random standard deviation (up to a certain value) on a dense grid of size $H \times W$. It is then scaled by 2.0 and clipped to 1.0, to obtain a smooth region equal to 1.0, where the perturbation is applied, surrounded on all sides by transition regions from 1.0 to 0.0.
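A minimal sketch of the generation of one such mask (the sampling ranges are illustrative assumptions):
\begin{verbatim}
import numpy as np

def perturbation_mask(H, W, rng):
    # rng: a np.random.Generator, e.g. np.random.default_rng().
    cy, cx = rng.uniform(0, H), rng.uniform(0, W)
    std = rng.uniform(10, min(H, W) / 4)            # illustrative range
    yy, xx = np.mgrid[0:H, 0:W]
    g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * std ** 2))
    # Scale by 2 and clip to 1: plateau of 1.0 with smooth borders.
    return np.clip(2.0 * g, 0.0, 1.0)
\end{verbatim}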
In Fig.~\ref{fig:pertu}, we show examples of generated residual flows and their corresponding perturbed reference $I^r$, for a particular base flow $\tilde{Y}$, reference $\tilde{I}^r$ and query $\tilde{I}^q$ images. As one can see, it is almost impossible for the human eye to detect the presence of the perturbations in the perturbed reference $I^r$. This makes it possible to ``fool'' the network in homogeneous regions, such as the road in the figure example, thus forcing it to predict high uncertainty in regions where it cannot identify such perturbations.
\parsection{MegaDepth training} To generate the training pairs with sparse ground-truth, we adapt the generation protocol of D2-Net~\cite{Dusmanu2019CVPR}. Specifically, we use the MegaDepth dataset, consisting of 196 different scenes reconstructed from 1,070,468 internet photos using COLMAP~\cite{SchonbergerF16}. The camera intrinsics and extrinsics, as well as depth maps from multi-view stereo, are provided by the authors for 102,681 images.
For training, we use 150 scenes and sample up to 300 random images with an overlap ratio of at least 30\% in the sparse SfM point cloud at each epoch. For each pair, all points of the second image with depth information are projected into the first image. A depth-check with respect to the depth map of the first image is also run to remove occluded pixels.
For the validation dataset, we sample up to 25 image pairs from 25 different scenes. We resize the images so that their largest dimension is 520.
During the second stage of training, we found it crucial to train on \emph{both} the synthetic dataset with perturbations and the sparse data from MegaDepth. Training solely on the sparse correspondences resulted in less reliable uncertainty estimates.
\section{Architecture details}
\label{sec-sub:arch-details}
In this section, we first describe the architecture of our proposed uncertainty decoder (Sec.~\ref{sec:uncertainty-arch} of the main paper). We then give additional details about our proposed final architecture PDC-Net+, as well as the initial PDC-Net and its corresponding baseline GLU-Net-GOCor*. We also describe the architecture of BaseNet and its probabilistic derivatives, employed for the ablation study. Finally, we share all training details and hyper-parameters.
\subsection{Architecture of the uncertainty decoder}
\label{subsec-sup:unc-dec}
\parsection{Correlation uncertainty module} We first describe the architecture of our Correlation Uncertainty Module $U_{\theta}$ (Sec.~\ref{sec:uncertainty-arch} of the main paper). The correlation uncertainty module processes each 2D slice $C_{ij\cdot \cdot}$ of the correspondence volume $C$ independently.
More practically, from the correspondence volume tensor $C \in \mathbb{R}^{b \times h \times w \times (d \times d)}$, where $b$ indicates the batch dimension, we move the spatial dimensions $h \times w$ into the batch dimension and apply multiple convolutions over the displacement dimensions $d \times d$, \emph{i.e.}\ on a tensor of shape $(b \times h \times w) \times d \times d \times 1$. By applying the strided convolutions, the displacement dimensions are gradually decreased, resulting in an uncertainty representation $u \in \mathbb{R}^{(b \times h \times w) \times 1 \times 1 \times n}$, where $n$ denotes the number of channels. $u$ is subsequently rearranged and, after dropping the batch dimension, the output uncertainty tensor is $u \in \mathbb{R}^{h \times w \times n}$.
Note that while this is explained for a local correlation, the same applies for a global correlation except that the displacement dimensions correspond to $h \times w$.
In Tab.~\ref{tab:arch-local-disp}--\ref{tab:arch-global-disp}, we present the architecture of the convolution layers applied on the displacement dimensions, for a local correlation with search radius 4 and for a global correlation applied at dimension $h \times w = 16 \times 16$, respectively.
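A minimal PyTorch sketch of the reshaping described above, where \texttt{conv\_net} stands for the convolution stack of Tab.~\ref{tab:arch-local-disp} in channel-first layout (names are ours):
\begin{verbatim}
import torch

def correlation_uncertainty(corr, conv_net):
    # corr: (b, h, w, d*d) correspondence volume, d = 2r + 1.
    b, h, w, dd = corr.shape
    d = int(dd ** 0.5)
    # Spatial positions go into the batch dimension; the strided
    # convolutions act on the (d x d) displacement dimensions.
    x = corr.reshape(b * h * w, 1, d, d)
    u = conv_net(x)                           # (b*h*w, n, 1, 1)
    n = u.shape[1]
    return u.reshape(b, h, w, n)              # per-pixel features
\end{verbatim}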
\begin{table}[b]
\centering
\caption{Architecture of the correlation uncertainty module for a local correlation, with a displacement radius of 4. Implicit are the BatchNorm and ReLU operations that follow each convolution, except for the last one. $K$ refers to the kernel size, $s$ to the stride and $p$ to the padding. }
\resizebox{0.49\textwidth}{!}{%
\begin{tabular}{l|l|l}
\toprule
Inputs & Convolutions & Output size \\ \midrule
C; $(b \times h \times w) \times 9 \times 9 \times 1$ & $conv_0$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 7 \times 7 \times 32$ \\ \midrule
$conv_0$; $(b \times h \times w) \times 7 \times 7 \times 32$ & $conv_1$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 5 \times 5 \times 32$ \\ \midrule
$conv_1$; $(b \times h \times w) \times 5 \times 5 \times 32$ & $conv_2$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 3 \times 3 \times 16$ \\ \midrule
$conv_2$; $(b \times h \times w) \times 3 \times 3 \times 16$ & $conv_3$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 1 \times 1 \times n$ \\
\bottomrule
\end{tabular}%
}
\label{tab:arch-local-disp}
\end{table}
\begin{table}[b]
\centering
\caption{Architecture of the correlation uncertainty module for a global correlation, constructed at resolution $16 \times 16$. Implicit are the BatchNorm and ReLU operations that follow each convolution, except for the last one. $K$ refers to the kernel size, $s$ to the stride and $p$ to the padding. }
\resizebox{0.49\textwidth}{!}{%
\begin{tabular}{l|l|l}
\toprule
Inputs & Convolutions & Output size \\ \midrule
C $(b \times h \times w) \times 16 \times 16 \times 1$ & $conv_0$; $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 14 \times 14 \times 32$ \\ \midrule
$conv_0$; $(b \times h \times w) \times 14 \times 14 \times 32$ & \begin{tabular}[c]{@{}l@{}}$3 \times 3$ max pool, s=2 \\ $conv_1$, $K=(3 \times 3)$, s=1, p=0\end{tabular} & $(b \times h \times w) \times 5 \times 5 \times 32$ \\ \midrule
$conv_1$; $(b \times h \times w) \times 5 \times 5 \times 32$ & $conv_2$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 3 \times 3 \times 16$ \\ \midrule
$conv_2$; $(b \times h \times w) \times 3 \times 3 \times 16$ & $conv_3$, $K=(3 \times 3)$, s=1, p=0 & $(b \times h \times w) \times 1 \times 1 \times n$ \\
\bottomrule
\end{tabular}%
}
\label{tab:arch-global-disp}
\end{table}
\parsection{Uncertainty predictor} We then give additional details of the Uncertainty Predictor, that we denote $Q_{\theta}$ (Sec.~\ref{sec:uncertainty-arch} of the main paper).
The uncertainty predictor takes the flow field $Y \in \mathbb{R}^{h \times w \times 2}$ output by the flow decoder, along with the output $u \in \mathbb{R}^{h \times w \times n}$ of the correlation uncertainty module $U_{\theta}$. In a multi-scale architecture, it additionally takes as input the estimated flow field and predicted uncertainty components from the previous level. At level $l$, for each pixel location $(i,j)$, this is expressed as:
\begin{equation}
\big((\tilde{\alpha}_m )_{m=1}^M, ( h_m )_{m=1}^M \big)^l = Q_{\theta} \big( Y^l; u^l; \Phi^{l-1} \big)_{ij}
\end{equation}
where $\tilde{\alpha}_m$ refers to the output of the uncertainty predictor, which is then passed through a SoftMax layer to obtain the final weights $\alpha_m$. $\sigma^2_m$ is obtained from $h_m$ according to constraint equation \eqref{eq:constraint} of the main paper.
In practice, we have found that, instead of feeding the flow field $Y \in \mathbb{R}^{h \times w \times 2}$ output by the flow decoder to the uncertainty predictor, using the second-to-last layer of the flow decoder leads to slightly better results. This is because the second-to-last layer of the flow decoder has a larger channel size, and therefore encodes more information about the estimated flow.
Architecture-wise, the uncertainty predictor $Q_{\theta}$ consists of 3 convolutional layers. The numbers of feature channels of the convolution layers are respectively 32, 16 and $2M$, and each convolution uses a $3 \times 3$ spatial kernel with a stride of 1 and a padding of 1. The first two layers are each followed by a batch-normalization layer with a leaky ReLU non-linearity. The final output of the uncertainty predictor is the result of a linear 2D convolution, without any activation.
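A minimal PyTorch sketch of this predictor; the number of input channels depends on the concatenated inputs and is therefore left as a parameter, and the leaky ReLU slope is an assumption:
\begin{verbatim}
import torch.nn as nn

def make_uncertainty_predictor(in_channels, M=2):
    # Three 3x3 convolutions (32, 16, 2M channels), stride 1,
    # padding 1; BatchNorm + leaky ReLU after the first two,
    # and a linear final layer without activation.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(32), nn.LeakyReLU(0.1),
        nn.Conv2d(32, 16, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(16), nn.LeakyReLU(0.1),
        nn.Conv2d(16, 2 * M, kernel_size=3, stride=1, padding=1),
    )
\end{verbatim}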
\subsection{Architecture of PDC-Net+}
\label{subsec-sup:pdcnet}
Our PDC-Net+ has the same architecture as the initial PDC-Net~\cite{pdcnet}. Specifically, we use GLU-Net-GOCor~\cite{GLUNet, GOCor} as our base architecture, predicting the dense flow field relating a pair of images. It is a 4-level pyramidal network, using a VGG feature backbone. It is composed of two sub-networks, L-Net and H-Net, which act at two different resolutions.
The L-Net takes as input images rescaled to $H_L \times W_L= 256 \times 256$ and processes them with a global GOCor module followed by a local GOCor module. The resulting flow is then upsampled to the lowest resolution of the H-Net, where it serves as the initial flow by warping the query features accordingly. The H-Net takes input images at unconstrained resolution $H\times W$, and refines the estimated flow with two local GOCor modules. However, as opposed to the original GLU-Net-GOCor~\cite{GLUNet, GOCor}, we here use residual connection layers and feed-forward layers, instead of DenseNet connections~\cite{Huang2017}, for the flow and mapping decoders respectively.
From the baseline GLU-Net-GOCor, we create our probabilistic approach PDC-Net+ by inserting our uncertainty decoder at each pyramid level. As noted in Sec.~\ref{subsec-sup:unc-dec}, in practice, we feed the second-to-last layer of the flow decoder to the uncertainty predictor of each pyramid level instead of the predicted flow field, which leads to slightly better results.
The uncertainty prediction is additionally \emph{propagated from one level to the next}. More specifically, the flow decoder takes as input the uncertainty prediction (all parameters $\Phi$ of the predictive distribution except for the mean flow) of the previous level, in addition to its original inputs (which include the mean flow of the previous level). The uncertainty predictor also takes the uncertainty and the flow estimated at the previous level.
As explained in Sec.~\ref{subsec:arch} of the main paper, we use a constrained mixture with $M=2$ Laplace components.
The first component is set so that $\sigma^2_1 = 1$, while the second is learned as $ 2 = \beta_2^- \leq \sigma^2_2 \leq \beta_2^+$. Therefore, the uncertainty predictor only estimates $\sigma^2_2$ and $(\alpha_m )_{m=1}^{M=2}$ at each pixel location.
We found that fixing $\sigma^2_1 = \beta_1^- = \beta_1^+ = 1.0$ results in better performance than, for example, $\beta_1^- = 0.0 \leq \sigma^2_1 \leq \beta_1^+=1.0$. Indeed, in the latter case, during training, the network focuses primarily on getting the very accurate, and confident, correspondences (corresponding to $\sigma^2_1$), since it can arbitrarily reduce the variance. Generating fewer, but accurate, predictions then dominates training at the expense of other regions. This is effectively alleviated by setting $\sigma^2_1=1.0$, which can be seen as introducing a strict prior on this parameter.
\subsection{Multi-stage and multi-scale inference}
Here, we provide implementation details for our multi-stage and multi-scale inference strategy (Sec.~\ref{sec:geometric-inference} of the main paper).
After the first network forward pass, we select matches with a corresponding confidence probability $P_{R=1}$ greater than 0.1, for $R=1$. Since the network estimates the flow field at a quarter of the original image resolution, we use the filtered correspondences at this quarter resolution and scale them to the original resolution for homography estimation. To estimate the homography, we use OpenCV's findHomography with RANSAC and an inlier threshold of 1 pixel.
\subsection{Architecture of BaseNet}
\label{arch-basenet}
As baseline used in our ablation study, we utilize BaseNet, introduced in~\cite{GOCor} and inspired by GLU-Net~\cite{GLUNet}. It estimates the dense flow field relating an input image pair.
The network is composed of three pyramid levels and uses VGG-16~\cite{Chatfield14} as its feature extractor backbone. The coarsest level is based on a global correlation layer, followed by a mapping decoder estimating the correspondence map at this resolution. The next two pyramid levels instead rely on local correlation layers. The dense flow field is then estimated with flow decoders, taking as input the correspondence volumes resulting from the local feature correlation layers. Moreover, BaseNet is restricted to a pre-determined input resolution $H_L \times W_L = 256 \times 256$, due to its global correlation at the coarsest pyramid level. It estimates a final flow field at a quarter of the input resolution $H_L \times W_L$, which needs to be upsampled to the original image resolution $H \times W$. The mapping and flow decoders have the same number of layers and parameters as those used for GLU-Net~\cite{GLUNet}. However, here, to reduce the number of weights, we use feed-forward layers instead of DenseNet connections~\cite{Huang2017} for the flow decoders.
We create the different probabilistic versions of BaseNet, presented in the ablation study Tab.~\ref{tab:ablation} of the main paper, by modifying the architecture minimally.
For the network referred to as PDC-Net-s, which also employs our proposed uncertainty architecture (Sec.~\ref{sec:uncertainty-arch} of the main paper), we add our uncertainty decoder at each pyramid level, in a similar fashion to our final network PDC-Net+. The flow prediction is modelled with a constrained mixture employing $M=2$ Laplace components. The first component is set so that $\sigma^2_1 = 1$, while the second is learned as $2 = \beta_2^- \leq \sigma^2_2 \leq \beta_2^+ = \infty$.
We train all networks on the synthetic data with the perturbations, which corresponds to our first training stage (Sec.~\ref{subsec:arch}).
\subsection{Implementation details}
Since we use pyramidal architectures with $K$ levels, we employ a multi-scale training loss, where the losses at the different pyramid levels are weighted differently,
\begin{equation}
\label{eq:multiscale-loss}
\mathcal{L}(\theta)=\sum_{l=1}^{K} \gamma_{l} L_l +\eta \left\| \theta \right\|\,,
\end{equation}
where $\gamma_{l}$ are the weights applied to each pyramid level and $L_l$ is the corresponding loss computed at each level, which refers to the L1 loss for the non-probabilistic baselines and the negative log-likelihood loss \eqref{eq:nll-sup} for the probabilistic models, including our approach PDC-Net+. The second term of the loss \eqref{eq:multiscale-loss} regularizes the weights of the network.
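A minimal sketch of this objective; note that, in practice, the regularization term is realized through the optimizer's weight decay (see the implementation details below), so the explicit \texttt{reg} term here is purely illustrative:
\begin{verbatim}
def multiscale_loss(level_losses, gammas, model, eta=0.0004):
    # level_losses: per-level losses L_l (L1 or NLL); gammas: weights.
    loss = sum(g * l for g, l in zip(gammas, level_losses))
    # Weight regularization eta * ||theta||, illustrative only.
    reg = sum(p.norm() for p in model.parameters())
    return loss + eta * reg
\end{verbatim}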
For training, we use similar training parameters as in~\cite{GLUNet}. Specifically, as a preprocessing step, the training images are mean-centered and normalized using the mean and standard deviation of the ImageNet dataset~\cite{Hinton2012}. For all local correlation layers, we employ a search radius $r=4$.
\parsection{BaseNet} For the training of BaseNet and its probabilistic derivatives (including PDC-Net-s), which have a pre-determined fixed input image resolution of $H_L \times W_L = 256 \times 256$, we use a batch size of 32 and train for 106,250 iterations. We set the initial learning rate to $10^{-2}$ and halve it after 56,250, 75,000 and 93,750 iterations.
The weights in the training loss \eqref{eq:multiscale-loss} are set to be $\gamma_{1}=0.32, \gamma_{2}=0.08, \gamma_{3}=0.02$ and to compute the loss, we down-sample the ground-truth to estimated flow resolution at each pyramid level.
\parsection{GLU-Net based architectures (GLU-Net-GOCor*, PDC-Net, PDC-Net+)} For GLU-Net-GOCor*, PDC-Net and PDC-Net+, we down-sample and scale the ground truth from the original resolution $H\times W$ to $H_L \times W_L$ in order to obtain the ground-truth flow fields for the L-Net.
During the first stage of training, \emph{i.e.}\ on purely synthetic images, we down-sample the ground truth from the base resolution to the different pyramid resolutions without further scaling, so as to obtain the supervision signals at the different levels. During this stage, the weights in the training loss \eqref{eq:multiscale-loss} are set to $\gamma_{1}=0.32, \gamma_{2}=0.08, \gamma_{3}=0.02, \gamma_{4}=0.01$, which ensures that the loss computed at each pyramid level contributes equally to the final loss \eqref{eq:multiscale-loss}.
During the second stage of training however, which includes MegaDepth, the ground-truth is sparse, which makes it inconvenient to down-sample it to different resolutions. We thus instead up-sample the estimated flow field to the ground-truth resolution and compute the loss at this resolution. In practice, we found that both strategies lead to similar results during the self-supervised training. During the second training stage, the weights in the training loss \eqref{eq:multiscale-loss} are instead set to $\gamma_{1}=0.08, \gamma_{2}=0.08, \gamma_{3}=0.02, \gamma_{4}=0.02$, which also ensures that the loss terms of all pyramid levels have the same magnitude.
\parsection{PDC-Net+} We train on image pairs of size $520 \times 520$. The first training stage involves 350k iterations, with a batch size of 15. The learning rate is initially set to $10^{-4}$, and halved after 133k and 240k iterations.
For the second training stage, the batch size is reduced to 10 due to the added memory consumption when fine-tuning the backbone feature extractor. We train for 225K iterations in this stage. The initial learning rate is set to $5 \cdot 10^{-5}$ and halved after 210K iterations. The total training takes about 10 days on two NVIDIA TITAN RTX GPUs.
For the GOCor modules~\cite{GOCor}, we train with 3 local and global optimization iterations. Moreover, in the case of PDC-Net+, a different VGG backbone is used for each sub-module, each acting at a different image resolution, while the same VGG is used in both sub-modules for PDC-Net.
\parsection{PDC-Net and GLU-Net-GOCor*} During the first training stage, on the self-supervised data only, we train for 135K iterations with a batch size of 15. The learning rate is initially equal to $10^{-4}$, and halved after 80K and 108K iterations. Note that during this first training stage, the feature backbone is frozen; it is further fine-tuned during the second training stage.
While fine-tuning on the composition of MegaDepth and the synthetic dataset, the batch size is reduced to 10 and we further train for 195K iterations. The initial learning rate is fixed to $5 \cdot 10^{-5}$ and halved after 120K and 180K iterations. The feature backbone is also fine-tuned according to the same schedule, but with an initial learning rate of $10^{-5}$.
For the GOCor modules~\cite{GOCor}, we train with 3 local and global optimization iterations.
Our system is implemented using PyTorch~\cite{pytorch} and our networks are trained using the Adam optimizer~\cite{adam} with a weight decay of $0.0004$.
\section{Experimental setup and datasets}
\label{sec-sup:details-evaluation}
In this section, we first provide details about the evaluation datasets and metrics. We then explain the experimental set-up in more depth.
\subsection{Evaluation metrics}
\parsection{AEPE} The Average End-Point Error (AEPE) is defined as the Euclidean distance between the estimated and ground-truth flow fields, averaged over all valid pixels of the reference image.
\parsection{PCK} The Percentage of Correct Keypoints (PCK) is computed as the percentage of correspondences $\mathbf{\tilde{x}}_{j}$ whose Euclidean distance to the ground truth $\mathbf{x}_{j}$, \emph{i.e.}\ $\left \| \mathbf{\tilde{x}}_{j} - \mathbf{x}_{j}\right \|$, is smaller than a threshold $T$.
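A minimal sketch of this computation, assuming arrays of matched keypoints, could read:
\begin{verbatim}
# Sketch: PCK over N correspondences, threshold T in pixels.
import numpy as np

def pck(pred_pts, gt_pts, T):
    # pred_pts, gt_pts: (N, 2) arrays of corresponding points
    err = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return 100.0 * np.mean(err <= T)
\end{verbatim}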
\parsection{Fl} Fl designates the percentage of outliers, averaged over all valid pixels of the dataset~\cite{Geiger2013}. Denoting by $Y$ the ground-truth flow field and by $\hat{Y}$ the flow estimated by the network, outliers are defined as the pixels for which
\begin{equation}
\left \| Y-\hat{Y} \right \| > 3 \quad \textrm{and} \quad \frac{\left \| Y-\hat{Y} \right \|}{\left \| Y \right \|} > 0.05\,.
\end{equation}
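A minimal sketch of this outlier rate for one image pair, assuming dense flow arrays and a validity mask, is:
\begin{verbatim}
# Sketch: Fl outlier rate over the valid pixels.
import numpy as np

def fl_rate(flow_gt, flow_est, valid):
    # flow_gt, flow_est: (H, W, 2); valid: (H, W) boolean mask
    epe = np.linalg.norm(flow_gt - flow_est, axis=-1)
    mag = np.maximum(np.linalg.norm(flow_gt, axis=-1), 1e-8)
    outlier = (epe > 3.0) & (epe / mag > 0.05)
    return 100.0 * outlier[valid].mean()
\end{verbatim}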
\parsection{Sparsification Errors} We compute the sparsification error curve on each image pair, and normalize it to $[0,1]$. The final error curve is the average over all image pairs of the dataset.
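The per-pair curve can be sketched as follows; ranking pixels by the predicted confidence and normalizing by the curve's initial value are assumptions consistent with the description above.
\begin{verbatim}
# Sketch: sparsification curve for one image pair. Pixels are
# removed in order of increasing predicted confidence, tracking
# the mean error of the remaining pixels.
import numpy as np

def sparsification_curve(errors, confidences, n_steps=50):
    err = errors.ravel()[np.argsort(confidences.ravel())]
    fracs = np.linspace(0.0, 0.99, n_steps)
    curve = np.array([err[int(f * err.size):].mean()
                      for f in fracs])
    return curve / curve[0]
\end{verbatim}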
\parsection{mAP} For the task of pose estimation, we use mAP as the evaluation metric, following~\cite{OANet}. The absolute rotation error $\left | R_{err} \right |$ is computed as the absolute value of the rotation angle needed to align the ground-truth rotation matrix $R$ with the estimated rotation matrix $\hat{R}$:
\begin{equation}
R_{err} = \cos^{-1}\frac{\mathrm{Tr}(R^{-1}\hat{R}) -1}{2} \;,
\end{equation}
where the operator $\mathrm{Tr}$ denotes the trace of a matrix. The translation error $T_{err}$ is computed similarly, as the angle needed to align the ground-truth translation vector $T$ with the estimated translation vector $\hat{T}$:
\begin{equation}
T_{err} = \cos^{-1}\frac{T \cdot \hat{T}}{\left \| T \right \|\left \| \hat{T} \right \|} \;,
\end{equation}
where $\cdot$ denotes the dot-product. The accuracy Acc-$\kappa$ for a threshold $\kappa$ is computed as the percentage of image pairs for which the maximum of $T_{err}$ and $\left | R_{err} \right |$ is below this threshold. mAP is defined according to the original implementation~\cite{OANet}, \emph{i.e.}\ mAP @5\textdegree\, is equal to Acc-5, mAP @10\textdegree\, is the average of Acc-5 and Acc-10, while mAP @20\textdegree\, is the average over Acc-5, Acc-10, Acc-15 and Acc-20.
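Both angular errors follow directly from the definitions above; a minimal sketch (with clamping added for numerical safety) is:
\begin{verbatim}
# Sketch: rotation and translation angular errors, in degrees.
import numpy as np

def rotation_error(R, R_est):
    c = (np.trace(R.T @ R_est) - 1.0) / 2.0   # R^{-1} = R^T
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error(T, T_est):
    c = T @ T_est / (np.linalg.norm(T) * np.linalg.norm(T_est))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
\end{verbatim}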
\parsection{AUC} AUC denotes the Area Under the Cumulative error plot, where the error is the maximum of $T_{err}$ and $\left| R_{err} \right|$.
\subsection{Evaluation datasets and set-up}
\label{details-eval-data}
\parsection{MegaDepth} The MegaDepth dataset depicts real scenes with extreme viewpoint changes. No real ground-truth correspondences are available, so we use the results of SfM reconstructions to obtain sparse ground-truth correspondences. We follow the same procedure and test images as~\cite{RANSAC-flow}. More precisely, we randomly sample 1600 pairs of images that share more than 30 points. The test pairs are from different scenes than the ones we used for training and validation. We use 3D points from SfM reconstructions and project them onto the pairs of matching images to obtain correspondences. This results in approximately 367K correspondences. During evaluation, following~\cite{RANSAC-flow}, all the images and ground-truth flow fields are resized to have a minimum dimension of 480 pixels. The PCKs are calculated over the whole dataset.
\parsection{RobotCar} In RobotCar, we use the correspondences originally introduced by~\cite{RobotCarDatasetIJRR}. During evaluation, following~\cite{RANSAC-flow}, all the images and ground-truth flows are resized to have a minimum dimension of 480 pixels. The PCKs are calculated over the whole dataset.
\parsection{ETH3D} The multi-view dataset ETH3D~\cite{ETH3d} contains 10 image sequences at $480 \times 752$ or $514 \times 955$ resolution, depicting indoor and outdoor scenes. They result from the motion of a completely unconstrained camera, and are used for benchmarking 3D reconstruction.
The authors additionally provide a set of sparse geometrically consistent image correspondences (generated by~\cite{SchonbergerF16}) that have been optimized over the entire image sequence using the reprojection error. We sample image pairs from each sequence at different intervals to analyze varying magnitudes of geometric transformations, and use the provided points as sparse ground-truth correspondences. This results in about 500 image pairs in total for each selected interval, or 600K to 1000K correspondences. Note that, in this work, following~\cite{GLUNet, GOCor}, we compute the PCK per image and then average it per sequence.
This metric is different from the one used for ETH3D in PDC-Net~\cite{pdcnet}, where the PCKs were instead calculated over the whole dataset (image pairs of all the sequences) per interval.
\parsection{KITTI} The KITTI dataset~\cite{Geiger2013} is composed of real road sequences captured by a car-mounted stereo camera rig. The KITTI benchmark targets autonomous driving applications and its semi-dense ground truth is collected using LIDAR. The 2012 set only consists of static scenes, while the 2015 set is extended to dynamic scenes via human annotations. The latter contains large motion, severe illumination changes, and occlusions.
\parsection{Pose estimation}
We use the selected matches to estimate an essential matrix with RANSAC~\cite{ransac} and the 5-point algorithm of Nist\'{e}r~\cite{TPAMI.2004.17}, relying on OpenCV's \texttt{findEssentialMat} with an inlier threshold of 1 pixel divided by the focal length. The rotation matrix $\hat{R}$ and translation vector $\hat{T}$ are finally computed from the estimated essential matrix, using OpenCV's \texttt{recoverPose}.
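A minimal sketch of this step, assuming matched pixel coordinates and known intrinsics $K$ (normalizing the points first and dividing the pixel threshold by the focal length, as described above), could read:
\begin{verbatim}
# Sketch: two-view pose from matches, via RANSAC and the
# 5-point algorithm on normalized image coordinates.
import cv2
import numpy as np

def estimate_pose(pts_q, pts_r, K, thresh_px=1.0):
    # pts_q, pts_r: (N, 1, 2) float arrays of matched pixel coords
    pts_q_n = cv2.undistortPoints(pts_q, K, None)
    pts_r_n = cv2.undistortPoints(pts_r, K, None)
    f = 0.5 * (K[0, 0] + K[1, 1])
    E, inliers = cv2.findEssentialMat(pts_q_n, pts_r_n, np.eye(3),
                                      method=cv2.RANSAC,
                                      threshold=thresh_px / f)
    _, R, t, _ = cv2.recoverPose(E, pts_q_n, pts_r_n, np.eye(3),
                                 mask=inliers)
    return R, t, inliers
\end{verbatim}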
\parsection{3D reconstruction} We use the set-up of~\cite{SattlerMTTHSSOP18}, which provides a list of image pairs to match. We compute dense correspondences between each pair. We resize the images, keeping the same aspect ratio, so that the minimum dimension is 600. We select matches for which the confidence probability $P_{R=1}$ is above 0.1, and feed them to the COLMAP reconstruction pipeline~\cite{SchonbergerF16}. Again, we select matches at a quarter of the image resolution and scale them to the original resolution.
On all datasets, for evaluation, we use 3 and 7 steepest descent iterations in the global and local GOCor modules~\cite{GOCor} respectively.
We report the results of RANSAC-Flow~\cite{RANSAC-flow} using MOCO features, which gave the best results overall. %
\section{Detailed results}
\label{sec-sup:results}
In this section, we provide detailed results on uncertainty estimation and we present extensive qualitative results and comparisons. We also show additional ablative experiments.
\subsection{Additional results on uncertainty estimation}
Here, we present sparsification error curves, computed on the RobotCar dataset. As in the main paper, Sec.~\ref{subsec:uncertainty-est}, we compare our probabilistic approach PDC-Net+ to dense geometric methods providing a confidence estimation, namely DGC-Net~\cite{Melekhov2019}, RANSAC-Flow~\cite{RANSAC-flow} and the initial PDC-Net~\cite{pdcnet}. Fig.~\ref{fig:sparsification-robotcar} depicts the sparsification error curves on RobotCar. As on MegaDepth, PDC-Net+ obtains better uncertainty estimates than RANSAC-Flow and DGC-Net, but slightly worse than PDC-Net.
\begin{table}[t]
\centering
\caption{Metrics evaluated over scenes of ETH3D with different intervals between consecutive pairs of images (taken by the same camera). Note that, in this work, we compute the PCK per image and further average the PCK values over each sequence. This is different from~\cite{pdcnet}, where the PCKs were computed per sequence instead. High PCK and low AEPE are better.}\label{tab:ETH3d-details}
\resizebox{0.49\textwidth}{!}{%
\begin{tabular}{lccccccc}
\toprule
& 3 & 5 & 7 & 9 & 11 & 13 & 15 \\ \midrule
& \multicolumn{7}{c}{\textbf{AEPE}} \\ \midrule
LIFE & 1.83 & 2.14 &2.45 &2.8 & 3.42 &4.6 & 8.6 \\
COTR & 1.71 & 1.92 & 2.16 & 2.47 & 2.85 & 3.23 & 3.76 \\
RANSAC-Flow (MS) & 1.79 & 1.94 & 2.09 & 2.25 & \textbf{2.42} & \textbf{2.64} & \textbf{2.87} \\
GLU-Net-GOCor* & 1.68 & 1.92 & 2.18 & 2.43 & 2.89 & 3.32 & 4.27\\
PDC-Net (H) &1.57& 1.77 &1.99 &2.21 &2.49& 2.72& 3.11 \\
PDC-Net (MS) & 1.59 &1.8 & 1.97 & 2.23 & 2.49 & 2.72 & 3.2 \\
\textbf{PDC-Net+ (H)} & \textbf{1.56} & \textbf{1.74} & \textbf{1.96} & 2.18& 2.48& 2.73 & 3.22 \\
\textbf{PDC-Net+ (MS)} & 1.58 & 1.76 & \textbf{1.96} & \textbf{2.16} & 2.49 & 2.73 & 3.24\\
\midrule
& \multicolumn{7}{c}{\textbf{PCK-1 (\%)}} \\\midrule
LIFE & 51.93 & 45.98 & 41.46 & 38.05 & 35.17 & 32.85 & 30.35 \\
COTR & - & - & - & - & - & - & -\\
RANSAC-Flow (MS) & 58.33 & 54.71 & 51.56 & 48.64 & 46.14 & 44.0 & 41.75 \\
GLU-Net-GOCor* & 59.4 & 55.15 &51.18 &47.88 &44.46 &41.79 &38.9 \\
PDC-Net (H) & 62.73& 59.44 &56.21 &53.49 &50.86 &48.72 &46.5 \\
PDC-Net (MS) & 62.45 & 59.22 &56.1& 53.29 &50.79& 48.67& 46.37 \\
\textbf{PDC-Net+ (H)} & \textbf{63.12} & \textbf{59.93} & \textbf{56.81} & \textbf{54.12} & \textbf{51.59} & \textbf{49.55} & \textbf{47.32} \\
\textbf{PDC-Net+ (MS)} & 62.95 &59.76 &56.64 &54.02 &51.5 &49.38 &47.24 \\
\midrule
& \multicolumn{7}{c}{\textbf{PCK-5 (\%)}} \\\midrule
LIFE & 92.31 &90.54 &88.77 &87.32 &85.74 &84.1 & 81.74 \\
COTR & - & - & - & - & - & - & -\\
RANSAC-Flow (MS) & 91.85 &90.69 &89.45 &88.43& 87.48 &86.39 &85.32 \\
GLU-Net-GOCor* & 93.03 &92.12 &91.04 &90.19 &88.97 &87.8 & 85.92 \\
PDC-Net (H) & 93.52 &92.72 &91.95 &91.24& 90.45 &89.86 &88.98 \\
PDC-Net (MS) & 93.51 &92.72 &91.97 &91.22 &90.48 &89.85& 88.9 \\
\textbf{PDC-Net+ (H)} & \textbf{93.54} & 92.78 & \textbf{92.04} & 91.30 & \textbf{90.60} & 89.9 & \textbf{89.03} \\
\textbf{PDC-Net+ (MS)} & 93.50 & \textbf{92.79} & \textbf{92.04} & \textbf{91.35} & \textbf{90.60} & \textbf{89.97} & 88.97 \\
\bottomrule
\end{tabular}%
}
\end{table}
\subsection{Detailed results on ETH3D}
In Tab.~\ref{tab:ETH3d-details}, we show the detailed results on ETH3D, corresponding to Fig.~\ref{fig:ETH3D} of the main paper. Note that COTR~\cite{COTR} only provides results in terms of AEPE.
\begin{figure}[b]
\centering
\vspace{-6mm}
\subfloat{\includegraphics[width=0.23\textwidth]{ sup-images/EPE_robotcar.pdf}}~%
\subfloat{\includegraphics[width=0.23\textwidth]{ sup-images/PCK_5_robotcar.pdf}}~%
\caption{Sparsification Error plots for AEPE (left) and PCK-5 (right) on RobotCar. Smaller AUSE (in parenthesis) is better. }
\label{fig:sparsification-robotcar}
\end{figure}
\subsection{Detailed results on HP-240}
In Tab.~\ref{tab:hp-details}, we show the detailed results on HP-240, corresponding to Tab.~\ref{tab:hp-240} of the main paper. We also show qualitative results of PDC-Net+ applied to some image pairs in Fig.~\ref{fig:hp-visual}.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{sup-images/hp.pdf}
\caption{Visual examples of PDC-Net+ applied to images of the HPatches dataset~\cite{Lenc}. We warp the query image according to PDC-Net+ and show predicted unreliable or inaccurate matching regions in red. In the last column, we overlay the reference image with the warped query from PDC-Net+, in the identified accurate matching regions (lighter color).}
\label{fig:hp-visual}
\end{figure}
\begin{table*}[t]
\centering
\caption{Additional ablation study. We analyse the impact of the number of components $M$ used in the constrained mixture model on the probabilistic BaseNet (PDC-Net-s). All metrics are computed with a single forward-pass of the network.
}
\resizebox{0.99\textwidth}{!}{%
\begin{tabular}{lccc|ccc|cc}
\toprule
& \multicolumn{3}{c}{\textbf{KITTI-2015}} & \multicolumn{3}{c}{\textbf{MegaDepth}} & \multicolumn{2}{c}{\textbf{YFCC100M}}\\
& EPE & F1 (\%) & AUSE & PCK-1 (\%) & PCK-5 (\%) & AUSE & mAP @5\textdegree & mAP @10\textdegree \\ \midrule
$M=2$; $\sigma^2_1=1.0$, $2.0 < \sigma^2_2 < \beta_2^+=s^2$ & 6.61 & 31.67 & \textbf{0.208} & 31.83 & \textbf{66.52} & \textbf{0.204} & 33.05 & 44.48 \\
$M=3$; $\sigma^2_1=1.0$, $2.0 < \sigma^2_2 < \beta_2^+=s^2$, $\sigma^2_3 = s^2$ & \textbf{6.41} & \textbf{30.54} & 0.212 & \textbf{31.89} & 66.10 & 0.214 & \textbf{34.90} & \textbf{45.86} \\
\bottomrule
\end{tabular}%
}
\label{tab:ablation-sup}
\end{table*}
\subsection{Qualitative results}
\parsection{KITTI and data with moving objects}
Here, we first present qualitative results of our approach PDC-Net+ on the KITTI-2015 dataset in Fig.~\ref{fig:kitti-qual}. PDC-Net+ clearly identifies the independently moving objects, and does very well in static scenes with only a single moving object, which are particularly challenging since they are not represented in the training data.
We also compare the baseline GLU-Net-GOCor*, PDC-Net and PDC-Net+ on images of the segmentation dataset DAVIS~\cite{Pont-Tuset_arXiv_2017} in Fig.~\ref{fig:davis-qual}. Each image pair features a moving object, whose motion is independent from the background (camera motion). PDC-Net+ matches the moving object better than PDC-Net does. This is thanks to our enhanced self-supervised pipeline (Sec.~\ref{sec:training-strategy}), integrating multiple independently moving objects, and to our introduced injective mask (Sec.~\ref{sec:injective-mask}).
\parsection{ScanNet} We show the performance of PDC-Net+ (trained on MegaDepth) on the ScanNet dataset in Fig.~\ref{fig:scannet}. It can handle very large view-point changes. Notably, it also deals with homogeneous and textureless regions, which are very common in this dataset.
We also provide additional comparisons between the matches found by our approach PDC-Net+ and SuperPoint + SuperGlue in Fig.~\ref{fig:scannet-matches-sup}. Our approach finds substantially more correct matches than SuperGlue. This is particularly due to the detector, which struggles to find repeatable keypoints in textureless regions. On the contrary, we do not suffer from such a limitation.
\parsection{YFCC100M} In Fig.~\ref{fig:YCCM-qual}, we visually compare the estimated confidence maps of RANSAC-Flow~\cite{RANSAC-flow} and our approach PDC-Net+ on the YFCC100M dataset. Our confidence maps can accurately \textit{segment} the object from the background (sky). On the other hand, RANSAC-Flow predicts confidence maps which do not exclude unreliable matching regions, such as the sky. Using these regions for pose estimation, for example, would result in a drastic drop in performance, as evidenced in Tab.~\ref{tab:YCCM} of the main paper.
Note also the ability of our predicted confidence map to identify \emph{small accurate flow regions}, even in a dominantly failing flow field. This is the case in the fourth example from the top in Fig.~\ref{fig:YCCM-qual}.
\parsection{MegaDepth} In Fig.~\ref{fig:mega-1},~\ref{fig:mega-2},~\ref{fig:mega-3}, we qualitatively compare our approach PDC-Net+ to the initial PDC-Net and to the baseline GLU-Net-GOCor* on images of the MegaDepth dataset. We additionally show the performance of our uncertainty estimation on these examples. While PDC-Net+ and PDC-Net generally produce similar outputs, PDC-Net+ is more accurate on some examples.
By overlaying the warped query image with the reference image at the locations of the identified accurate matches, we observe that our probabilistic formulation produces \emph{highly precise correspondences}. Our uncertainty estimates successfully identify accurate flow regions and also, in most cases, correctly exclude homogeneous and sky regions. These examples show the benefit of confidence estimation for high-quality image alignment, useful \emph{e.g.}\ in multi-frame super-resolution~\cite{WronskiGEKKLLM19}. Texture or style transfer (\emph{e.g.}\ for AR) also largely benefits from it.
\parsection{Image retrieval on Aachen} In Fig.~\ref{fig:aachen-retrieved}, we present the five closest retrieved images according to PDC-Net+, for multiple day and night queries of the Aachen dataset. Since we do not have access to the ground-truth query poses, we cannot verify which retrieved image is actually the closest to the query. However, for all examples, the retrieved images look very similar to the queries, even for the difficult night examples.
\begin{table}[b]
\centering
\caption{Metrics evaluated over the scenes of HP-240. High PCK and low AEPE are better.}\label{tab:hp-details}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{lcccccc}
\toprule
& I & II & III & IV & V & all \\ \midrule
& \multicolumn{6}{c}{\textbf{AEPE}} \\ \midrule
DGC-Net & 1.74&5.88&9.07&12.14&16.5&9.07 \\
GLU-Net &0.59&4.05&7.64&9.82&14.89&7.4 \\
GLU-Net-GOCor & 0.78& 3.63& 7.64& 9.06 & 11.98&6.62 \\
LIFE &0.83 & 3.1&4.59&4.92&8.09&4.3 \\
RANSAC-Flow (MS) &0.52&\textbf{2.13}&4.83&5.13&6.36&3.79 \\
GLU-Net-GOCor* &0.74&3.33&6.02&7.66&7.57&5.06 \\
PDC-Net (H)&0.32&2.56&5.98&5.7&7.01&4.32 \\
PDC-Net (MS)&0.35&2.57&\textbf{3.02}&\textbf{5.06}&6.82&\textbf{3.56} \\
\textbf{PDC-Net+ (H)} & \textbf{0.33}&2.66&4.35&6.69&7.43&4.29 \\
\textbf{PDC-Net+ (MS)} & 0.36&2.67&3.29&5.51&\textbf{6.1}&3.59 \\ \midrule
& \multicolumn{6}{c}{\textbf{PCK-1} (\%)} \\ \midrule
DGC-Net
&70.29&53.97&52.06&41.02&32.74&50.01 \\
GLU-Net
&87.89&67.49&62.31&47.76&34.14&59.92 \\
GLU-Net-GOCor
&84.72&64.43&60.07&48.87&34.12&58.45 \\
LIFE
&79.05&64.42&61.95&54.29&47.07&61.36 \\
RANSAC-Flow (MS)
&88.24&80.27&79.33&74.64&69.63&78.42 \\
GLU-Net-GOCor*
&82.81&68.16&64.98&58.5&49.97&64.8\\
PDC-Net (H)
&96.15&90.41&84.72&81.9&76.65&85.97 \\
PDC-Net (MS)
&95.91&\textbf{90.78}&88.95&83.78&77.6&87.4 \\
\textbf{PDC-Net+ (H)}
&95.87&90.75&87.89&82.16&76.14&86.56 \\
\textbf{PDC-Net+ (MS)}
&\textbf{96.28}&90.44&\textbf{89.05}&\textbf{84.06}&\textbf{79.32}&\textbf{87.83} \\
\midrule
& \multicolumn{6}{c}{\textbf{PCK-5} (\%)} \\ \midrule
DGC-Net
&93.7&82.43&77.58&71.53&61.78&77.4 \\
GLU-Net
&99.14&92.39&85.87&78.1&61.84&83.47 \\
GLU-Net-GOCor
&98.49&92.25&87.23&81.0&70.49&85.89 \\
LIFE
&98.75&94.27&92.12&89.55&85.0&91.94 \\
RANSAC-Flow (MS)
&99.35 & 97.65&\textbf{96.34}&93.8&93.17&96.06 \\
GLU-Net-GOCor*
&98.97&94.13&88.19&86.59&83.3&90.24 \\
PDC-Net (H)
&\textbf{99.91}&\textbf{97.99}&91.39&92.56&91.11&94.59 \\
PDC-Net (MS)
&99.82&97.98&96.24&94.44&92.44&96.18 \\
\textbf{PDC-Net+ (H)}
&99.84&97.94&94.73&91.83&90.52&94.97 \\
\textbf{PDC-Net+ (MS)}
&99.8&97.92&96.22&\textbf{94.46}&\textbf{93.38}&\textbf{96.36} \\
\bottomrule
\end{tabular}%
}
\end{table}
\section{Additional ablation study}
\label{sec-sup:ablation}
Finally, we provide additional ablative experiments. As in Sec.~\ref{subsec:ablation-study} of the main paper, we use BaseNet as the base network to create the probabilistic models, as described in Sec.~\ref{sec-sub:arch-details}. Similarly, all networks are trained with solely the first training stage of the initial PDC-Net~\cite{pdcnet}.
\parsection{Number of components of the constrained mixture} We compare $M=2$ and $M=3$ Laplace components used in the constrained mixture in Tab.~\ref{tab:ablation-sup}. In the case of $M=3$, the first two components are set similarly to the case $M=2$, \emph{i.e.}\ as $\sigma^2_1=1.0$ and $2.0 \leq \sigma^2_2 \leq \beta_2^+=s^2$, where $\beta_2^+$ is fixed to the image size used during training (256 here). The third component is set as $\sigma^2_3 = \beta_3^+ = \beta_3^- = \beta_2^+$. The aim of this third component is to identify outliers (such as out-of-view pixels) more clearly. The three-component approach obtains a better Fl value on KITTI-2015 and slightly better pose estimation results on YFCC100M. However, its performance on MegaDepth and in terms of pure uncertainty estimation (AUSE) slightly degrades. As a result, for simplicity, we adopted the version with $M=2$ Laplace components. Note, however, that more components could easily be added.
\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{ sup-images/kitti2015.jpg}
\caption{Qualitative examples of our approach PDC-Net+ applied to images of KITTI-2015. We plot directly the estimated flow field for each image pair. }
\label{fig:kitti-qual}
\end{figure*}
\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{sup-images/davis.pdf}
\caption{Comparison of the baseline GLU-Net-GOCor*, PDC-Net and PDC-Net+ applied to images of the segmentation DAVIS dataset~\cite{Pont-Tuset_arXiv_2017}. Scenes feature a moving object taken from a moving camera. PDC-Net and PDC-Net+ also predict a confidence map, according to which the regions represented in red are unreliable or inaccurate matching regions. }
\label{fig:davis-qual}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-10mm}\includegraphics[width=0.9\textwidth]{sup-images/retrieval/IMG_20161227_172448.jpg_final.pdf} \\
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/IMG_20161227_172730.jpg_final.pdf} \\
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/IMG_20161227_173116.jpg_final.pdf} \\
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/IMG_20161227_191326.jpg_final.pdf} \\
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/2011-10-13_15-24-05_816.jpg_final.pdf} \\
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/2011-12-17_14-50-43_241.jpg_final.pdf}
\includegraphics[width=0.9\textwidth]{sup-images/retrieval/2011-12-29_15-31-46_760.jpg_final.pdf}
\vspace{-2mm}
\caption{Retrieved images by PDC-Net+ for multiple night or day queries of the Aachen dataset.}
\vspace{-10mm}
\label{fig:aachen-retrieved}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-25mm}
\includegraphics[width=0.75\textwidth]{sup-images/scannet.pdf} \\
\vspace{-2mm}\caption{Qualitative examples of our approach PDC-Net+ applied to images of the ScanNet dataset~\cite{scannet}. In the 2$^{nd}$ column, we visualize the query images warped according to the flow fields estimated by PDC-Net+. PDC-Net+ also predicts a confidence map, according to which the regions represented in red are unreliable or inaccurate matching regions. In the last column, we overlay the reference image with the warped query from PDC-Net+, in the identified accurate matching regions (lighter color).}
\vspace{-20mm}
\label{fig:scannet}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-25mm}
\includegraphics[width=0.70\textwidth]{sup-images/scannet_matches_1.pdf} \\
\vspace{1mm}
\includegraphics[width=0.70\textwidth]{sup-images/scannet_matches_2.pdf} \\
\caption{Comparison of matches found by our PDC-Net+ (left) and SuperPoint + SuperGlue (right) on images of ScanNet. Correct matches are shown as green lines and mismatches as red lines. The pose errors are indicated in the top left corner. }
\vspace{-20mm}
\label{fig:scannet-matches-sup}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-5mm}\includegraphics[width=0.99\textwidth]{sup-images/yfcc.pdf}
\vspace{-2mm}
\caption{Visual comparison of RANSAC-Flow and our approach PDC-Net+ on image pairs of the YFCC100M dataset~\cite{YFCC}. In the 3$^{rd}$ and 4$^{th}$ columns, we visualize the query images warped according to the flow fields estimated by RANSAC-Flow and PDC-Net+, respectively. Both networks also predict a confidence map, according to which the regions represented in yellow and red, respectively, are unreliable or inaccurate matching regions. In the last column, we overlay the reference image with the warped query from PDC-Net+, in the identified accurate matching regions (lighter color). }
\label{fig:YCCM-qual}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-16mm}\includegraphics[width=0.93\textwidth]{ sup-images/mega_1.pdf}
\vspace{-2mm}\caption{Qualitative examples of our approach PDC-Net+, compared to PDC-Net and the corresponding non-probabilistic baseline GLU-Net-GOCor*, applied to images of the MegaDepth dataset~\cite{megadepth}. We visualize the query images warped according to the flow fields estimated by GLU-Net-GOCor*, PDC-Net and PDC-Net+. The probabilistic networks also predict a confidence map, according to which the regions represented in red are unreliable or inaccurate matching regions.}
\label{fig:mega-1}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-15mm}\includegraphics[width=0.93\textwidth]{ sup-images/mega_2.pdf}
\vspace{-2mm}\caption{Qualitative examples of our approach PDC-Net+, compared to PDC-Net and the corresponding non-probabilistic baseline GLU-Net-GOCor*, applied to images of the MegaDepth dataset~\cite{megadepth}. We visualize the query images warped according to the flow fields estimated by GLU-Net-GOCor*, PDC-Net and PDC-Net+. The probabilistic networks also predict a confidence map, according to which the regions represented in red are unreliable or inaccurate matching regions.}
\label{fig:mega-2}
\end{figure*}
\begin{figure*}
\centering%
\vspace{-13mm}\includegraphics[width=0.93\textwidth]{ sup-images/mega_3.pdf}
\vspace{-2mm}\caption{Qualitative examples of our approach PDC-Net+, compared to PDC-Net and the corresponding non-probabilistic baseline GLU-Net-GOCor*, applied to images of the MegaDepth dataset~\cite{megadepth}. We visualize the query images warped according to the flow fields estimated by GLU-Net-GOCor*, PDC-Net and PDC-Net+. The probabilistic networks also predict a confidence map, according to which the regions represented in red are unreliable or inaccurate matching regions.}
\label{fig:mega-3}
\end{figure*}
\section{Introduction}
Angular dispersion (AD) has remained a ubiquitous optical effect since its inception by Newton in the course of his prism experiments \cite{Sabra81Book}. By AD we refer to the wavelength-dependence of the propagation angle in polychromatic fields, which is introduced by diffractive or dispersive components such as gratings or prisms \cite{Fulop10Review,Torres10AOP}. In general, AD can help change the group velocity \cite{Hebling02OE} or produce group-velocity dispersion (GVD) \cite{Szatmari96OL}, leading to a wide range of applications in dispersion compensation \cite{Martinez84JOSAA,Fork84OL,Gordon84OL}, pulse compression \cite{Bor85OC,Lemoff93OL,Kane97JOSAB}, broadband phase-matching in nonlinear optics \cite{Martinez89IEEE,Szabo90APB,Szabo94APB,Richman98OL,Richman99AO}, and the generation of THz pulses \cite{Hebling02OE,Nugraha19OL,Wang20LPR}.
Surprisingly, despite the passage of centuries, the methodology utilized today to produce AD with prisms and gratings does not differ fundamentally from that implemented by Newton or Fraunhofer. More recently, metasurfaces have been fabricated to control the sign of the first-order AD \cite{Arbabi17Optica,McClung20Light}, or to combine the roles of a grating and a lens and produce a tilted-pulse front (TPF) for potential applications in beam steering \cite{Shaltout19ScienceSteering}. In all these cases -- whether traditional devices or metasurfaces -- only first-order AD is manipulated, but not the higher-order terms \cite{Porras03PRE2}.
This state of affairs has led to a curious gap in optics that has survived unnoticed till today. This gap can be appreciated by classifying pulsed optical fields in free space according to their first three dispersion orders engendered by AD: the axial phase velocity $v_{\mathrm{ph}}$, group velocity $\widetilde{v}$, and GVD. With respect to the axial phase velocity $v_{\mathrm{ph}}$, we divide fields into on-axis ($v_{\mathrm{ph}}\!=\!c$) and off-axis ($v_{\mathrm{ph}}\!\neq\!c$) classes, where $c$ is the speed of light in vacuum; with respect to the axial group velocity $\widetilde{v}$, we have luminal ($\widetilde{v}\!=\!c$) and non-luminal ($\widetilde{v}\!\neq\!c$) classes; and with respect to dispersion, we have fields that are dispersion-free, endowed with anomalous or normal GVD, or have an arbitrary dispersion profile. According to this scheme, optical fields endowed with AD fall into $2\times2\times4\!=\!16$ possible classes. Surprisingly, we find that \textit{the majority of classes of physically admissible field configurations from this classification have yet to be realized}. In fact, representatives from only 6 classes have been synthesized to date, and of the remaining 10 classes only one is physically excluded -- the other 9 classes have thus far eluded optics.
Two factors have contributed to this surprising situation. First, misconceptions have persisted for decades with regards to the set of physically admissible optical fields that can be realized via AD. Specifically, the result by Martinez, Gordon, and Fork \cite{Martinez84JOSAA} purports to show that AD yields \textit{only} anomalous GVD in free space, a result that forms the basis for the utilization of prism pairs \cite{Fork84OL} and other optical systems \cite{Gordon84OL} to compensate for normal material GVD. Second, conventional optical components such as gratings and prisms offer limited control over the spectral profile of AD.
Producing all the physically realizable field configurations accessible via AD requires: (1) independent tuning of multiple orders of AD, a feature that is not provided by conventional optical components; and (2) access to the newly identified \textit{non-differentiable} AD, whereby the derivative of the wavelength-dependent propagation angle is undefined at some wavelength \cite{Hall21OL,Yessenov21ACSP,Hall21OL3NormalGVD,Hall21OE1NonDiff}, a condition that is not produced by any currently available optical device. Non-differentiable AD arises naturally in the study of `space-time' (ST) wave packets \cite{Kondakci16OE,Parker16OE,Kondakci17NP,Porras17OL,Efremidis17OL,Yessenov19OPN,Wong21OE}, where it undergirds their unique characteristics in free space such as propagation invariance \cite{Kondakci18PRL,Bhaduri19OL,Yessenov19OE,Yessenov19Optica,Yessenov20NC,Schepler20ACSP,Wong20AS}, tunable group velocity \cite{Wong17ACSP2,Porras17OL,Efremidis17OL,Kondakci19NC}, self-healing \cite{Kondakci18OL}, Talbot self-imaging in space-time \cite{Hall21APLP,Hall21OL2}, accelerating wave packets \cite{Yessenov20PRL2,Hall21OL4Acceleration}, arbitrary dispersion profiles \cite{Malaguti08OL,Malaguti09PRA,Yessenov21ACSP,Hall21OL3NormalGVD}, and anomalous refraction \cite{Bhaduri20NP}.
We show here that the versatile pulsed-beam shaper developed for the synthesis of ST wave packets \cite{Yessenov19OPN} constitutes a `universal AD synthesizer': it can produce arbitrary AD spectral profiles by controlling the relative weights of the individual AD orders. This pulsed-beam shaper comprises spectral analysis followed by wave-front phase modulation to produce differentiable or non-differentiable AD in the paraxial regime. Using this universal AD synthesizer, we produce representative wave packets from all 15 physically admissible classes of pulsed optical fields from the possible 16 classified according to their axial phase velocity, group velocity, and dispersion profile. By bridging this gap that has persisted for decades in optics, an entirely new toolbox is made available: pulsed beams with arbitrary and readily tunable dispersion characteristics. Such pulsed fields may provide new opportunities in dispersion compensation, nonlinear and quantum optics, micro-particle manipulation, light-matter interactions, and optical signal processing.
\section{Theory of angular dispersion: differentiable and non-differentiable}
We consider scalar optical fields involving one transverse spatial coordinate $x$, while holding the field uniform along $y$; $z$ is the axial coordinate [Fig.~\ref{Fig:FieldConfiguration}]. If, in the presence of AD, each temporal frequency $\omega$ travels at an angle $\varphi(\omega)$ with respect to the $z$-axis, then the field is $E(x,z;t)\!=\!\int\!d\omega\widetilde{E}(\omega)e^{ik(x\sin\{\varphi(\omega)\}+z\cos\{\varphi(\omega)\}-ct)}$, where $\widetilde{E}(\omega)$ is the Fourier transform of $E(0,0;t)$, $k\!=\!\omega/c$, and the transverse and longitudinal components of the wave vector are $k_{x}(\omega)\!=\!k\sin\{\varphi(\omega)\}$ and $k_{z}(\omega)\!=\!k\cos\{\varphi(\omega)\}$, respectively. We expand $\varphi(\omega)$ around a carrier frequency $\omega_{\mathrm{o}}$: $\varphi(\omega)\!=\!\varphi(\omega_{\mathrm{o}}+\Omega)=\varphi_{\mathrm{o}}+\varphi_{\mathrm{o}}^{(1)}\Omega+\tfrac{1}{2}\varphi_{\mathrm{o}}^{(2)}\Omega^{2}+\cdots$, where $\Omega\!=\!\omega-\omega_{\mathrm{o}}$, $\varphi_{\mathrm{o}}\!=\!\varphi(\omega_{\mathrm{o}})$, and $\varphi_{\mathrm{o}}^{(n)}\!=\!\tfrac{d^{n}\varphi}{d\omega^{n}}\big|_{\omega=\omega_{\mathrm{o}}}$; and we expand $k_{x}(\omega)$ and $k_{z}(\omega)$ in terms of transverse and axial dispersion coefficients, respectively \cite{Porras03PRE2}:
\begin{equation}
k_{x}(\omega)=k_{x}^{(0)}+k_{x}^{(1)}\Omega+\tfrac{1}{2}k_{x}^{(2)}\Omega^{2}+\cdots,\;\;\;k_{z}(\omega)=k_{z}^{(0)}+k_{z}^{(1)}\Omega+\tfrac{1}{2}k_{z}^{(2)}\Omega^{2}+\cdots.
\end{equation}
\subsection{Phase velocity}
The zeroth-order dispersion terms in free space arising from AD are $k_{x}^{(0)}\!=\!k_{\mathrm{o}}\sin{\varphi_{\mathrm{o}}}$ and $k_{z}^{(0)}\!=\!k_{\mathrm{o}}\cos{\varphi_{\mathrm{o}}}$, where $k_{\mathrm{o}}\!=\!\omega_{\mathrm{o}}/c$. We refer to the condition $\varphi_{\mathrm{o}}\!=\!0$ as the `on-axis' configuration [Fig.~\ref{Fig:FieldConfiguration}(c-e)], and to $\varphi_{\mathrm{o}}\!\neq\!0$ as `off-axis' [Fig.~\ref{Fig:FieldConfiguration}(f,g)]. For large $\varphi_{\mathrm{o}}$, the field can still be useful for interacting with localized structures, but a significant propagation distance requires $\varphi_{\mathrm{o}}$ to be small. We define the vector $\vec{k}_{\mathrm{o}}\!=\!(k_{x}^{(0)},k_{z}^{(0)})$ that is orthogonal to the phase front (the plane of constant phase) and makes an angle $\varphi_{\mathrm{o}}$ with the $z$-axis. The axial phase velocity $v_{\mathrm{ph}}\!=\!\tfrac{\omega_{\mathrm{o}}}{k_{z}^{(0)}}\!=\!c/\cos{\varphi_{\mathrm{o}}}$ is determined solely by $\varphi_{\mathrm{o}}$ \cite{Chiao02OPN}. Therefore, $v_{\mathrm{ph}}\!=\!c$ for on-axis fields ($\varphi_{\mathrm{o}}\!=\!0$); otherwise, $v_{\mathrm{ph}}\!=\!c/\cos{\varphi_{\mathrm{o}}}\!\neq\!c$ for off-axis fields. The first tier in our classification regards the axial phase velocity $v_{\mathrm{ph}}$: on-axis fields have $v_{\mathrm{ph}}\!=\!c$, and off-axis fields have $v_{\mathrm{ph}}\!\neq\!c$.
\begin{figure}[t!]
\centering
\includegraphics[width=11cm]{Fig1}
\caption{(a) Intensity profile for a pulsed optical field free of AD, and (b) the associated propagation angle $\varphi(\omega)$. (c) On-axis ($\varphi_{\mathrm{o}}\!=\!0$) pulsed field endowed with AD that may be (d) differentiable or (e) non-differentiable. (f) Off-axis ($\varphi_{\mathrm{o}}\!\neq\!0$) pulsed field endowed with AD and (g) the associated propagation angle $\varphi(\omega)$.}
\label{Fig:FieldConfiguration}
\end{figure}
\subsection{Group velocity}
The first-order dispersion terms in free space arising from AD are:
\begin{equation}\label{Eq:FirstOrderDispersionTerm}
ck_{x}^{(1)}=\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}\cos{\varphi_{\mathrm{o}}}+\sin{\varphi_{\mathrm{o}}},\;\;\;ck_{z}^{(1)}=\cos{\varphi_{\mathrm{o}}}-\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}\sin{\varphi_{\mathrm{o}}},
\end{equation}
which determine the transverse walk-off and the axial group velocity $\widetilde{v}$, respectively. We make use throughout of dimensionless coefficients $c\omega_{\mathrm{o}}^{n-1}k_{x}^{(n)}$, $c\omega_{\mathrm{o}}^{n-1}k_{z}^{(n)}$, and $\omega_{\mathrm{o}}^{n}\varphi_{\mathrm{o}}^{(n)}$. The pulse front (the plane of constant amplitude) is orthogonal to the vector $\vec{k}_{\mathrm{o}}^{(1)}\!=\!(k_{x}^{(1)},k_{z}^{(1)})$, which makes an angle $\delta_{\mathrm{o}}^{(1)}$ with $\vec{k}_{\mathrm{o}}$, where $\tan{\delta_{\mathrm{o}}^{(1)}}\!=\!\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}$ \cite{Hebling96OQE}. The axial group velocity is:
\begin{equation}\label{Eq:GeneralGroupVelocity}
\widetilde{v}=\frac{1}{k_{z}^{(1)}}=\frac{c}{\cos{\varphi_{\mathrm{o}}}-\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}\sin{\varphi_{\mathrm{o}}}}=c\,\frac{\cos{\delta_{\mathrm{o}}^{(1)}}}{\cos{(\varphi_{\mathrm{o}}+\delta_{\mathrm{o}}^{(1)})}}.
\end{equation}
Unlike $v_{\mathrm{ph}}$, which depends solely on the \textit{geometric} factor $\varphi_{\mathrm{o}}$, $\widetilde{v}$ also incorporates an \textit{interferometric} contribution $\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}$, and can thus take on luminal \textit{or} non-luminal values in both on-axis \textit{and} off-axis fields. For off-axis fields $\varphi_{\mathrm{o}}\!\neq\!0$, we have in general $\widetilde{v}\!\neq\!c$, but the luminal condition $\widetilde{v}\!=\!c$ is achieved whenever $\varphi_{\mathrm{o}}\!=\!-2\delta_{\mathrm{o}}^{(1)}$.
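As a quick numerical sanity check on Eq.~\ref{Eq:GeneralGroupVelocity}, the two closed forms of $\widetilde{v}$ can be compared for arbitrarily chosen (illustrative) values of $\varphi_{\mathrm{o}}$ and $\varphi_{\mathrm{o}}^{(1)}$:
\begin{verbatim}
# Sketch: the two closed forms of the group velocity coincide.
import numpy as np

c, w0 = 3e8, 2 * np.pi * 3.75e14   # speed of light; 800-nm carrier
phi0, dphi0 = 0.2, 1.5 / w0        # illustrative AD coefficients

delta1 = np.arctan(w0 * dphi0)     # tan(delta) = omega_o*phi_o^(1)
v1 = c / (np.cos(phi0) - w0 * dphi0 * np.sin(phi0))
v2 = c * np.cos(delta1) / np.cos(phi0 + delta1)
assert np.isclose(v1, v2)
\end{verbatim}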
At first it appears, however, that \textit{only} luminal group velocities $\widetilde{v}\!=\!c$ can be realized in on-axis fields $\varphi_{\mathrm{o}}\!=\!0$. This was the accepted wisdom until our recent development of `baseband' ST wave packets \cite{Kondakci17NP,Kondakci19NC,Yessenov19OE,Yessenov19PRA}, which are on-axis fields with tunable group velocity $\widetilde{v}\!\neq\!c$ that seem to contradict Eq.~\ref{Eq:GeneralGroupVelocity}. However, the AD underlying baseband ST wave packets is \textit{non-differentiable} at $\omega_{\mathrm{o}}$; that is, $\tfrac{d\varphi}{d\omega}$ is not defined at $\omega\!=\!\omega_{\mathrm{o}}$. Specifically, $\varphi(\omega)\!\approx\!\eta\sqrt{\tfrac{\Omega}{\omega_{\mathrm{o}}}}$ in the vicinity of $\omega_{\mathrm{o}}$, where $\eta$ is a dimensionless constant. Because $\varphi(\omega)\!\propto\!\sqrt{\Omega}$, it is not differentiable at $\omega\!=\!\omega_{\mathrm{o}}$. Nevertheless, $\varphi\tfrac{d\varphi}{d\omega}\!\rightarrow\!\tfrac{\eta^{2}}{2\omega_{\mathrm{o}}}$ is finite and frequency-independent when $\omega\!\rightarrow\!\omega_{\mathrm{o}}$, and the on-axis field is therefore no longer luminal, $\widetilde{v}\!=\!c/\widetilde{n}\!\neq\!c$, with an effective group index $\widetilde{n}\!=\!1-\tfrac{1}{2}\eta^{2}$. Because $\widetilde{v}$ here is frequency-independent, all higher-order dispersion terms are eliminated and the ST wave packet is propagation invariant. We have therefore shown that both luminal \textit{and} non-luminal on-axis fields are indeed feasible, in contrast to traditional expectations.
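The finite slope of $k_{z}(\omega)$ at $\omega_{\mathrm{o}}$, despite the non-differentiability of $\varphi(\omega)$ itself, is straightforward to verify numerically; the sketch below uses an illustrative value of $\eta$.
\begin{verbatim}
# Sketch: phi = eta*sqrt(Omega/omega_o) is non-differentiable at
# omega_o, yet k_z = (omega/c)*cos(phi) has the frequency-
# independent slope (1 - eta^2/2)/c there.
import numpy as np

c, w0, eta = 3e8, 2 * np.pi * 3.75e14, 0.3
Omega = w0 * np.linspace(1e-6, 1e-3, 2000)
w = w0 + Omega
kz = (w / c) * np.cos(eta * np.sqrt(Omega / w0))
n_eff = c * np.gradient(kz, w)     # effective group index
print(n_eff[0], 1 - eta**2 / 2)    # both approximately 0.955
\end{verbatim}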
The second tier in our classification concerns $\widetilde{v}$: we distinguish between luminal $\widetilde{v}\!=\!c$ and non-luminal $\widetilde{v}\!\neq\!c$ fields. By combining luminal or non-luminal $v_{\mathrm{ph}}$ and $\widetilde{v}$ as distinguishing criteria, optical fields can be divided into $2\times2\!=\!4$ broad distinct categories.
\subsection{Group-velocity dispersion}
The second-order dispersion terms in free space arising from AD are \cite{Porras03PRE2}:
\begin{eqnarray}\label{Eq:SecondOrderDispersionTerm}
c\omega_{\mathrm{o}}k_{x}^{(2)}&=&(\omega_{\mathrm{o}}^{2}\varphi_{\mathrm{o}}^{(2)}+2\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})\cos{\varphi_{\mathrm{o}}}-(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})^{2}\sin{\varphi_{\mathrm{o}}},\nonumber\\
c\omega_{\mathrm{o}}k_{z}^{(2)}&=&-(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})^{2}\cos{\varphi_{\mathrm{o}}}-(\omega_{\mathrm{o}}^{2}\varphi_{\mathrm{o}}^{(2)}+2\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})\sin{\varphi_{\mathrm{o}}},
\end{eqnarray}
which determine the GVD experienced by the field along the $x$ and $z$ axes, respectively, and depend on $\varphi_{\mathrm{o}}$, $\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}$, and $\omega_{\mathrm{o}}^{2}\varphi_{\mathrm{o}}^{(2)}$.
One misconception needs to be clarified regarding the possibility of producing normal GVD via AD. A result in \cite{Martinez84JOSAA} purports to show that AD in free space produces \textit{only} anomalous GVD. However, this result is \textit{not} universal and applies only to on-axis fields, $\varphi_{\mathrm{o}}\!=\!0$, whereupon $c\omega_{\mathrm{o}}k_{z}^{(2)}\!=\!-(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})^{2}\!<\!0$. For \textit{off-axis} fields $\varphi_{\mathrm{o}}\!\neq\!0$ (which are \textit{not} dealt with in \cite{Martinez84JOSAA}), one may in principle produce normal GVD via AD by tuning the values of $\varphi_{\mathrm{o}}$, $\varphi_{\mathrm{o}}^{(1)}$, and $\varphi_{\mathrm{o}}^{(2)}$ independently. For example, Porras \textit{et al}. \cite{Porras03PRE2} propose setting $\varphi_{\mathrm{o}}^{(1)}\!=\!0$, so that $c\omega_{\mathrm{o}}k_{z}^{(2)}\!=\!-\omega_{\mathrm{o}}^{2}\varphi_{\mathrm{o}}^{(2)}\sin{\varphi_{\mathrm{o}}}$, whereupon normal GVD can be realized by controlling the signs of $\varphi_{\mathrm{o}}$ and $\varphi_{\mathrm{o}}^{(2)}$. This example illustrates the challenge of producing normal GVD via AD: exquisite control over multiple orders of AD is required, which is \textit{not} offered by conventional optical components. Although the scheme does indeed yield normal GVD, it does not eliminate higher-order dispersion terms. This proposal was realized only very recently \cite{Hall21OL3NormalGVD}.
The challenge remains, however, to produce \textit{on-axis} normal GVD. Consider a wave packet having $k_{z}\!=\!k_{\mathrm{o}}+\tfrac{\Omega}{\widetilde{v}}+\tfrac{1}{2}k_{2}\Omega^{2}$, which is intentionally terminated at second order in $\Omega$ to eliminate all higher-order dispersion terms. This dispersion profile is produced via AD if $\varphi(\omega)$ is given by:
\begin{equation}\label{Eq:AngleForArbitraryDispersion}
\sin{\{\varphi(\omega)\}}=\eta\sqrt{\frac{\Omega}{\omega_{\mathrm{o}}}}\;\;\frac{\omega_{\mathrm{o}}}{\omega}\;\;\sqrt{\left\{1+\frac{1+\widetilde{n}}{2}\frac{\Omega}{\omega_{\mathrm{o}}}+\frac{\sigma}{2}\left(\frac{\Omega}{\omega_{\mathrm{o}}}\right)^{2}\right\}\left\{1-\frac{\sigma}{1-\widetilde{n}}\frac{\Omega}{\omega_{\mathrm{o}}}\right\}},
\end{equation}
which is non-differentiable by virtue of the factor $\sqrt{\Omega}$; here $\sigma\!=\!\tfrac{1}{2}k_{2}\omega_{\mathrm{o}}c$. A universal AD synthesizer must therefore make available non-differentiable AD as a key ingredient to produce normal and anomalous GVD on-axis when $\widetilde{v}\!\neq\!c$, as shown recently in \cite{Yessenov21ACSP,Hall21OL3NormalGVD}.
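Equation~\ref{Eq:AngleForArbitraryDispersion} can be checked numerically: substituting $\sin\{\varphi(\omega)\}$ into $k_{z}\!=\!\tfrac{\omega}{c}\sqrt{1-\sin^{2}\{\varphi(\omega)\}}$ recovers the target profile $k_{z}\!=\!k_{\mathrm{o}}+\tfrac{\Omega}{\widetilde{v}}+\tfrac{1}{2}k_{2}\Omega^{2}$ exactly. The sketch below assumes $\eta^{2}\!=\!2(1-\widetilde{n})$, consistent with the group-index relation above, and illustrative values of $\widetilde{n}$ and $\sigma$.
\begin{verbatim}
# Sketch: the non-differentiable profile above reproduces the
# target quadratic dispersion k_z = k_o + Omega/v + k2*Omega^2/2.
import numpy as np

c, w0 = 3e8, 2 * np.pi * 3.75e14
n, sigma = 0.9, 0.05               # illustrative group index, GVD
eta = np.sqrt(2 * (1 - n))

u = np.linspace(1e-6, 5e-3, 1000)  # u = Omega / omega_o
w = w0 * (1 + u)
sin_phi = eta * np.sqrt(u) * (w0 / w) * np.sqrt(
    (1 + 0.5 * (1 + n) * u + 0.5 * sigma * u**2)
    * (1 - sigma / (1 - n) * u))
kz = (w / c) * np.sqrt(1 - sin_phi**2)
kz_target = (w0 / c) * (1 + n * u + sigma * u**2)
assert np.allclose(kz, kz_target)
\end{verbatim}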
With regards to GVD, we distinguish between \textit{four} distinct states: (1) dispersion-free fields where all the dispersion coefficients vanish, $k_{z}^{(n)}\!=\!0$ for $n\!\geq\!2$; (2) fields with \textit{anomalous} GVD $k_{z}^{(2)}\!<\!0$; (3) fields with \textit{normal} GVD $k_{z}^{(2)}\!>\!0$ -- regardless of the value of the higher-order dispersion coefficients; and (4) fields with an \textit{arbitrary} dispersion profile, in which multiple dispersion coefficients are specified simultaneously.
\section{Classification of pulsed optical fields}
\begin{figure}[t!]
\centering
\includegraphics[width=13.2cm]{Fig3}
\caption{Classification scheme for pulsed optical field configurations according to their three lowest-order dispersion terms: the axial phase velocity $v_{\mathrm{ph}}$, group velocity $\widetilde{v}$, and state of dispersion. G: Grating; FWM: focus-wave mode; FXW: focus X-wave.}
\label{Fig:Classification}
\end{figure}
We classify pulsed optical fields endowed with AD in three tiers according to the lowest dispersion orders as shown in Fig.~\ref{Fig:Classification}:
\begin{enumerate}
\item The axial phase velocity $v_{\mathrm{ph}}\!=\!c/\cos{\varphi_{\mathrm{o}}}$: In this first tier, the fields are either on-axis ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$) or off-axis ($\varphi_{\mathrm{o}}\!\neq\!0$ and $v_{\mathrm{ph}}\!=\!c/\cos{\varphi_{\mathrm{o}}}\!\neq\!c$).
\item The axial group velocity $\widetilde{v}$ as determined by $\varphi_{\mathrm{o}}$ and $\varphi_{\mathrm{o}}^{(1)}$ (Eq.~\ref{Eq:FirstOrderDispersionTerm}): In this second tier, the fields are luminal $\widetilde{v}\!=\!c$ or non-luminal $\widetilde{v}\!\neq\!c$.
\item The state of axial dispersion, which we subdivide as follows: (1) no dispersion, $k_{z}^{(n)}\!=\!0$ for $n\!\geq\!2$; (2) anomalous GVD, $k_{z}^{(2)}\!<\!0$, without regard to higher-order dispersion terms; (3) normal GVD, $k_{z}^{(2)}\!>\!0$, without regard to higher-order dispersion terms; or (4) arbitrary dispersion profiles in which multiple dispersion orders are specified.
\end{enumerate}
According to this three-tier classification scheme, $2\times2\times4\!=\!16$ distinct classes of pulsed optical fields can be counted. The first of these 16 classes is on-axis ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$), luminal ($\widetilde{v}\!=\!c$), and dispersion-free, so that $\varphi(\omega)\!=\!0$, which corresponds to the trivial case of a plane-wave pulse traveling along the $z$-axis. Only 5 other classes of fields that are identified in Fig.~\ref{Fig:Classification} have been realized to date and have been the focus of study in the fields of AD and TPFs \cite{Fulop10Review,Torres10AOP}. Our systematic survey reveals that only one class is \textit{physically inadmissible}: on-axis ($v_{\mathrm{ph}}\!=\!c$) luminal ($\widetilde{v}\!=\!c$) fields having normal GVD, which is the particular field configuration ruled out in \cite{Martinez84JOSAA}. The 9 classes that were \textit{not} previously realized using conventional means comprise 3 classes with normal GVD and the 4 classes involving arbitrary dispersion, in addition to the dispersion-free and anomalous-GVD classes associated with on-axis non-luminal fields. These missing pulsed field configurations have either been recently realized by our group in the course of studying ST wave packets, or are reported here for the first time to the best of our knowledge.
We proceed to examine these 16 classes of pulsed fields in terms of the projection of their spatio-temporal spectrum onto the $(k_{z},\tfrac{\omega}{c})$-plane \cite{Donnelly93ProcRSLA,Yessenov19PRA}. Because $k_{z}\!=\!\tfrac{\omega}{c}\cos{\{\varphi(\omega)\}}$, the spectral projection takes the form of a 1D curved trajectory, which must lie above the light-line $k_{z}\!=\!\tfrac{\omega}{c}$. Any point \textit{on} the light-line corresponds to $\varphi(\omega)\!=\!0$, and thus belongs to an on-axis field configuration; the spectral trajectory for off-axis fields lies away from the light-line.
\begin{figure}[t!]
\centering
\includegraphics[width=13.2cm]{Fig4}
\caption{Pulsed on-axis ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$) fields that are luminal ($\widetilde{v}\!=\!c$). In each panel we plot the spectral trajectory in the $(k_{z},\tfrac{\omega}{c})$-plane and the associated propagation angle $\varphi(\omega)$. The dashed line is the light-line $k_{z}\!=\!\tfrac{\omega}{c}$.}
\label{Fig:OnAxisLuminal}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[width=13.2cm]{Fig5}
\caption{Pulsed on-axis ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$) fields that are non-luminal ($\widetilde{v}\!\neq\!c$). (a) No dispersion; (b) anomalous GVD; (c) normal GVD; and (d) arbitrary dispersion profile. The dotted curve for $\varphi(\omega)$ in (b-d) is that for the dispersion-free case from (a). The dashed lines in the first row are the light-line $k_{z}\!=\!\tfrac{\omega}{c}$; the dotted lines in the first row are the tangents to the spectral trajectory at $\omega\!=\!\omega_{\mathrm{o}}$; and the dotted curves in the second row are $\varphi(\omega)$ for the dispersion-free field from (a).}
\label{Fig:OnAxisNonluminal}
\end{figure}
\subsection{On-axis, luminal fields}
This collection of 4 classes comprises pulsed fields propagating along the $z$-axis ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$) with luminal group velocity $\widetilde{v}\!=\!c$. The spectral trajectory in the $(k_{z},\tfrac{\omega}{c})$-plane is tangential to the light-line at $\omega\!=\!\omega_{\mathrm{o}}$, and its curvature is determined by the GVD. In the absence of dispersion, the spectral trajectory lies along the light-line, and $\varphi(\omega)\!=\!0$, which is the trivial case of a plane-wave pulse traveling along the $z$-axis [Fig.~\ref{Fig:OnAxisLuminal}(a)]. In the case of anomalous GVD [Fig.~\ref{Fig:OnAxisLuminal}(b)], the spectral trajectory curves upwards away from the light-line. This wave packet has been realized in \cite{Szatmari96OL}, and can be viewed as an on-axis TPF. An on-axis luminal pulsed field endowed with \textit{normal} GVD is physically inadmissible because its spectral trajectory would be tangential to the light-line at $\omega\!=\!\omega_{\mathrm{o}}$ but curve away downwards. Such a field is purely evanescent [Fig.~\ref{Fig:OnAxisLuminal}(c)], and is ruled out by the result in \cite{Martinez84JOSAA}. Finally, an arbitrary dispersion profile can be imparted as long as the spectral trajectory $k_{z}(\omega)\!=\!\tfrac{\omega}{c}+\sum_{n}\tfrac{1}{n!}k_{n}\Omega^{n}$ lies above the light-line, $k_{z}(\omega)\!<\!\tfrac{\omega}{c}$. This restriction amounts to $\sum_{n}\tfrac{1}{n!}k_{n}\Omega^{n}\!<\!0$ for all $\omega$, which can be satisfied for any magnitudes of the $k_{n}$ as long as the even-order coefficients are negative-valued and the odd-order coefficients positive-valued; oppositely signed coefficients must be balanced against the other terms in order to retain $\sum_{n}\tfrac{1}{n!}k_{n}\Omega^{n}\!<\!0$ everywhere.
\subsection{On-axis, non-luminal fields}
This class comprises on-axis fields ($\varphi_{\mathrm{o}}\!=\!0$ and $v_{\mathrm{ph}}\!=\!c$) that are non-luminal $\widetilde{v}\!\neq\!c$, which necessitate introducing non-differentiable AD. Until recently, this class of fields went unexplored (see the theoretical studies in \cite{Valtna07OC,ZamboniRached2009PRA} for exceptions). We have recently investigated this class of pulsed fields extensively under the moniker of ST wave packets. The spectral trajectory in the $(k_{z},\tfrac{\omega}{c})$-plane must reach the light-line at $\omega\!=\!\omega_{\mathrm{o}}$ ($\varphi_{\mathrm{o}}\!=\!0$), and the tangent to the trajectory at $\omega_{\mathrm{o}}$ is \textit{not} parallel to the light-line ($\widetilde{v}\!\neq\!c$). If $\widetilde{v}\!=\!c\tan{\theta}$, then $\theta$ is the angle made by this tangent with the $k_{z}$-axis [Fig.~\ref{Fig:OnAxisNonluminal}].
The dispersion-free class [Fig.~\ref{Fig:OnAxisNonluminal}(a)] comprises propagation-invariant wave packets that travel rigidly in free space at a group velocity $\widetilde{v}$. The absence of GVD implies that the spectral trajectory is a straight line making an angle $\theta$ with the $k_{z}$-axis, and changing the spectral tilt angle $\theta$ tunes $\widetilde{v}$ across the subluminal ($\theta\!<\!45^{\circ}$, $\widetilde{v}\!<\!c$), superluminal ($45^{\circ}\!<\!\theta\!<\!90^{\circ}$, $\widetilde{v}\!>\!c$), and negative-$\widetilde{v}$ ($\theta\!>\!90^{\circ}$, $\widetilde{v}\!<\!0$) regimes \cite{Kondakci19NC,Yessenov19OE,Bhaduri20NP}. Anomalous GVD [Fig.~\ref{Fig:OnAxisNonluminal}(b)] or normal GVD [Fig.~\ref{Fig:OnAxisNonluminal}(c)] can be readily introduced on an equal footing by making use of Eq.~\ref{Eq:AngleForArbitraryDispersion} to sculpt the propagation angle $\varphi(\omega)$ that curves the spectral trajectory in the $(k_{z},\tfrac{\omega}{c})$-plane as needed. All higher-order dispersion terms are eliminated here. Finally, realizing arbitrary GVD [Fig.~\ref{Fig:OnAxisNonluminal}(d)] is subject only to the constraint $\sum_{n\geq2}\tfrac{1}{n!}k_{z}^{(n)}\Omega^{n}\!<\!\big|\tfrac{\Omega}{c}(1-\widetilde{n})\big|$.
\begin{figure}[b!]
\centering
\includegraphics[width=13.2cm]{Fig6}
\caption{Pulsed off-axis ($\varphi_{\mathrm{o}}\!\neq\!0$ and $v_{\mathrm{ph}}\!\neq\!c$) fields that are luminal ($\widetilde{v}\!=\!c$). (a) No dispersion; (b) anomalous GVD; (c) normal GVD; and (d) arbitrary dispersion profile. The dotted curve for $\varphi(\omega)$ in (b-d) is that for the dispersion-free case from (a). The dashed lines in the first row are the light-line $k_{z}\!=\!\tfrac{\omega}{c}$; the dotted lines in the first row are the tangents to the spectral trajectory at $\omega\!=\!\omega_{\mathrm{o}}$; and the dotted curves in the second row are $\varphi(\omega)$ for the dispersion-free field from (a).}
\label{Fig:OffAxisLuminal}
\end{figure}
\subsection{Off-axis, luminal fields}
This class comprises off-axis fields ($\varphi_{\mathrm{o}}\!\neq\!0$ and $v_{\mathrm{ph}}\!\neq\!c$) that are nevertheless luminal ($\widetilde{v}\!=\!c$) by satisfying the constraint $\varphi_{\mathrm{o}}\!=\!-2\delta^{(1)}$. Their spectral trajectories in the $(k_{z},\tfrac{\omega}{c})$-plane do \textit{not} intersect with the light-line, but the tangent to the spectral trajectory at $\omega_{\mathrm{o}}$ is parallel to the light-line. These pulsed fields do not necessitate the incorporation of non-differentiable AD.
For the dispersion-free class [Fig.~\ref{Fig:OffAxisLuminal}(a)], the spectral trajectory is a straight line parallel to the light-line but displaced with respect to it, which represents a luminal propagation-invariant wave packet. This is the focus-wave mode (FWM) discovered by Brittingham in 1983 \cite{Brittingham83JAP}, which was the first propagation-invariant wave packet identified in optics, and the only luminal one \cite{Yessenov19PRA}. By curving the spectral trajectory away from the GVD-free case, anomalous GVD can be realized [Fig.~\ref{Fig:OffAxisLuminal}(b)]. This field configuration can be produced by a grating (where the GVD is \textit{always} anomalous), while satisfying $\varphi_{\mathrm{o}}\!=\!-2\delta_{\mathrm{o}}^{(1)}$.
In contrast to the on-axis luminal scenario, where normal GVD is physically inadmissible, for the off-axis luminal fields considered here normal GVD is indeed admissible [Fig.~\ref{Fig:OffAxisLuminal}(c)]. However, such fields have never been produced. Indeed, it might be thought, erroneously, that the result in \cite{Martinez84JOSAA} rules such a case out; as discussed above, however, the theorem in \cite{Martinez84JOSAA} applies only to on-axis luminal fields. A case in point is the theoretical prediction that propagation-invariant wave packets exist in media with anomalous GVD \cite{Malaguti08OL}. The reason that normal GVD produced by AD has not been observed previously in off-axis fields is that it requires independent control over $\varphi_{\mathrm{o}}$, $\varphi_{\mathrm{o}}^{(1)}$, and $\varphi_{\mathrm{o}}^{(2)}$, which is not offered by any known optical component.
Producing an arbitrary dispersion profile [Fig.~\ref{Fig:OffAxisLuminal}(d)] requires only that the spectral trajectory remain above the light-line. Preparing such a pulsed field requires independent control over multiple orders of AD, which has not been realized to date.
\begin{figure}[t!]
\centering
\includegraphics[width=13.2cm]{Fig7}
\caption{Pulsed off-axis ($\varphi_{\mathrm{o}}\!\neq\!0$ and $v_{\mathrm{ph}}\!\neq\!c$) fields that are non-luminal ($\widetilde{v}\!\neq\!c$). (a) No dispersion; (b) anomalous GVD; (c) normal GVD; and (d) arbitrary dispersion profile. The dotted curve for $\varphi(\omega)$ in (b-d) is that for the dispersion-free case from (a). The dashed lines in the first row are the light-line $k_{z}\!=\!\tfrac{\omega}{c}$; the dotted lines in the first row are the tangents to the spectral trajectory at $\omega\!=\!\omega_{\mathrm{o}}$; and the dotted curves in the second row are $\varphi(\omega)$ for the dispersion-free field from (a).}
\label{Fig:OffAxisNonluminal}
\end{figure}
\subsection{Off-axis, non-luminal fields}
The spectral trajectory for an off-axis field ($\varphi_{\mathrm{o}}\!\neq\!0$ and $v_{\mathrm{ph}}\!\neq\!c$) that is non-luminal ($\widetilde{v}\!\neq\!c$) does not intersect the light-line, and the tangent to the spectral trajectory at $\omega\!=\!\omega_{\mathrm{o}}$ is \textit{not} parallel to the light-line. Rather, this tangent makes an angle $\theta$ with the $k_{z}$-axis, such that $\widetilde{v}\!=\!c\tan{\theta}$. These four classes of pulsed fields do \textit{not} require non-differentiable AD for their synthesis.
In the absence of dispersion [Fig.~\ref{Fig:OffAxisNonluminal}(a)], the spectral trajectory is a straight line, and thus represents a propagation-invariant wave packet with $\widetilde{v}\!\neq\!c$. This class encompasses several families of such wave packets. If the spectral trajectory when extended to low frequencies passes through the origin $k_{z}\!=\!\tfrac{\omega}{c}\!=\!0$, then it corresponds to a superluminal X-wave \cite{Lu92IEEEa,Saari97PRL}. If the extended spectral trajectory intersects with the light-line $k_{z}\!=\!-\tfrac{\omega}{c}$ ($k_{z}\!<\!0$), then it corresponds to a superluminal focused X-wave \cite{Besieris98PIERS}. Alternatively, if the extended spectral trajectory intersects with the light-line $k_{z}\!=\!\tfrac{\omega}{c}$, then it corresponds to the propagation-invariant ST wave packets in Fig.~\ref{Fig:OnAxisNonluminal}(a), except that the spectral window selected is shifted away from the non-differentiable frequency (the intersection point with the light-line). Such fields can be subluminal or superluminal, or can even have negative $\widetilde{v}$ \cite{Kondakci19NC}.
Introducing anomalous GVD [Fig.~\ref{Fig:OffAxisNonluminal}(b)] can be readily done with a grating in an off-axis configuration \cite{Porras03PRE2}; this field is, in general, the one studied as a TPF \cite{Fulop10Review,Turunen10PO}. Introducing normal GVD [Fig.~\ref{Fig:OffAxisNonluminal}(c)] requires curving the spectral trajectory toward the light-line, an example of which is the configuration proposed by Porras \textit{et al}. in \cite{Porras03PRE2}. This class requires exercising control over multiple orders of AD, which has \textit{not} been available to date. Lastly, arbitrary dispersion can be realized [Fig.~\ref{Fig:OffAxisNonluminal}(d)], once again only if control over multiple orders of AD is available.
\section{Construction of a universal angular-dispersion synthesizer}
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{Fig8}
\caption{Schematic of the optical arrangement constituting a universal angular dispersion synthesizer.}
\label{Fig:Setup}
\end{figure}
\subsection{Comparison with a conventional diffraction grating}
It is useful to examine briefly the AD produced by a conventional diffraction grating. If the incident and diffraction angles with respect to the grating normal are $\alpha$ and $\varphi$, respectively, then $\sin{\varphi}\!=\!\sin{\alpha}+m\tfrac{\lambda}{\Lambda}$, where $\lambda$ is the wavelength, $\Lambda$ is the grating period, and $m$ is the diffraction order. Only a few free parameters ($\alpha$ and $\Lambda/m$) can be tuned to modify the AD at fixed $\lambda$, so the different orders of AD are not independent of each other. For example, large values of the first-order AD $\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}\!=\!(\sin{\alpha}-\sin{\varphi_{\mathrm{o}}})/\cos{\varphi_{\mathrm{o}}}$ require large $\varphi_{\mathrm{o}}$, so that $\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}$ and $\varphi_{\mathrm{o}}$ are \textit{not} independent of each other. Similarly, all higher-order AD coefficients can be written in terms of $\alpha$ and $\varphi_{\mathrm{o}}$. As such, one cannot tune one dispersion order without affecting all the others. Indeed, at $\lambda_{\mathrm{o}}\!=\!800$~nm, $\tfrac{\Delta\varphi}{\Delta\lambda}\!\sim\!3^{\circ}$/nm can only be achieved at $\varphi_{\mathrm{o}}\!\sim\!87^{\circ}$.
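To make this coupling concrete, the following minimal Python sketch (with an assumed $\sim\!1200$-line/mm grating and illustrative incidence angles, not values from our experiment) evaluates the grating equation together with the first-order AD coefficient; a large $|\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}|$ emerges only as $\varphi_{\mathrm{o}}$ approaches grazing.
\begin{verbatim}
# Minimal sketch (assumed parameters): for a fixed grating,
# omega_o*phi_o^(1) = (sin(alpha) - sin(phi_o)) / cos(phi_o)
# becomes large only as the diffraction angle approaches grazing.
import numpy as np

lam = 800e-9        # wavelength (m)
period = 833e-9     # assumed ~1200-lines/mm grating, order m = 1

for sin_alpha in [-0.5, 0.0, 0.03, 0.039]:
    sin_phi = sin_alpha + lam / period     # grating equation
    phi = np.arcsin(sin_phi)
    w_phi1 = (sin_alpha - sin_phi) / np.cos(phi)
    print(f"phi_o = {np.rad2deg(phi):5.1f} deg, "
          f"omega_o*phi_o^(1) = {w_phi1:6.2f}")
\end{verbatim}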
\subsection{Experimental configuration}
To introduce arbitrary AD into a plane-wave pulse, we follow the two-step strategy depicted schematically in Fig.~\ref{Fig:Setup}. The first step is spectral analysis, whereby a conventional optical component (a diffraction grating here) spreads the spectrum spatially, and the spectrum is then collimated by a cylindrical lens. In the second step, the phase of the spatially resolved spectrum in the focal plane of the lens is modulated via an SLM \cite{Kondakci17NP} or phase plate \cite{Kondakci18OE,Yessenov20OSAC}, before the spectrum is reconstituted into a pulse via a lens and a grating. In our experiment, we make use of a reflective SLM, and the retro-reflected wave front retraces its path back to the grating. Each wavelength occupies a column along the SLM. Along this direction a linear phase of the form $\Phi(\lambda)\!=\!k_{x}(\lambda)x\!=\!\tfrac{2\pi}{\lambda}\sin{\{\varphi(\lambda)\}}x$ is implemented, where $x$ is the coordinate along the column, and $\varphi(\lambda)$ is the deflection angle with respect to the $z$-axis that we aim to impart to the wavelength $\lambda$. Because the phase $\Phi$ can be set for each wavelength $\lambda$ \textit{independently}, we can produce an arbitrary angular dispersion $\varphi(\lambda)$ constrained only by the technical limits discussed below. In this way, an arbitrary functional form of $\varphi(\lambda)$ can be realized: smooth or discontinuous, differentiable or non-differentiable. For example, the profile $\varphi(\omega)\!\propto\!\sqrt{\Omega}$ that is key to producing on-axis non-luminal pulsed fields can be readily realized. As such, this arrangement can serve as a \textit{universal AD synthesizer}.
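The following schematic sketch illustrates how such a phase pattern could be assembled column by column; the SLM resolution, pixel pitch, wavelength-to-column mapping, and the scale of the $\sqrt{\Omega}$ profile are all hypothetical placeholders rather than our experimental parameters.
\begin{verbatim}
# Schematic sketch of the SLM phase pattern; all device parameters and
# the sqrt-profile scale factor are hypothetical placeholders.
import numpy as np

ny, nx = 512, 512                      # assumed SLM resolution
x = (np.arange(ny) - ny / 2) * 10e-6   # assumed 10-um pixels along a column
lam = np.linspace(790e-9, 810e-9, nx)  # wavelengths mapped across columns
c = 299792458.0
Omega = 2 * np.pi * c / lam - 2 * np.pi * c / 800e-9

# Example target: phi(omega) ~ sqrt(Omega) on one side of omega_o
phi = 1e-9 * np.sqrt(np.clip(Omega, 0.0, None))   # radians, arbitrary scale

# Phi(lambda, x) = (2*pi/lambda) * sin(phi(lambda)) * x, wrapped mod 2*pi
Phi = (2 * np.pi / lam)[None, :] * np.sin(phi)[None, :] * x[:, None]
Phi_wrapped = np.mod(Phi, 2 * np.pi)
\end{verbatim}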
\subsection{Large on-axis first-order AD produced by the universal AD synthesizer}
Consider the first-order dispersion coefficient $\varphi^{(1)}\!\approx\!\tfrac{\Delta\varphi}{\Delta\omega}$, where $\Delta\varphi$ is the angular spread associated with the bandwidth $\Delta\omega$. In a grating, $\Delta\varphi$ and $\Delta\lambda$ are not independent of each other; they are related through the grating equation. In our system, however, $\Delta\lambda$ and $\Delta\varphi$ \textit{are} independent of each other, because they are controlled by two distinct processes: we rely on the grating in our arrangement only to spatially resolve the spectrum, but \textit{not} to provide AD, so that $\Delta\omega$ is determined by the grating-lens combination, whereas $\Delta\varphi$ is determined by the numerical aperture of the SLM. Because the spatial resolution of current SLM technology (pixel size $\sim\!10$~$\mu$m) is far lower than that of a grating (grating ruling $<\!1$~$\mu$m), one might expect only a small $\Delta\varphi$ and thus a small $\varphi^{(1)}$. Nevertheless, our system is still capable of providing large $\varphi_{\mathrm{o}}^{(1)}$: the bandwidth $\Delta\omega$ with which the small angular spread $\Delta\varphi$ is associated can be made quite small by using a high-density grating and a long-focal-length lens. By reducing the $\Delta\omega$ incident on the SLM at fixed $\Delta\varphi$, one may obtain extremely large values of $\varphi^{(1)}$. For example, $\tfrac{d\varphi}{d\lambda}\!\sim\!3^{\circ}$/nm for $\Delta\lambda\!\sim\!1$~nm requires an angular spread of only $\Delta\varphi\!\approx\!3^{\circ}$.
It can be readily shown from the geometry of the problem that we have the following identity: $(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})_{\mathrm{s}}\!=\!\tfrac{(\Delta\varphi)_{\mathrm{s}}}{(\Delta\varphi)_{\mathrm{g}}}(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})_{\mathrm{g}}$, where the subscripts `g' and `s' indicate quantities associated with a grating or our SLM-based synthesizer, respectively. If we select a configuration where $(\Delta\varphi)_{\mathrm{s}}\!\approx\!(\Delta\varphi)_{\mathrm{g}}$, then the first-order AD produced by the two approaches are equal. Crucially, however, $(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})_{\mathrm{s}}$ is independent of $\varphi_{\mathrm{o}}$. In fact, large values of $(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})_{\mathrm{s}}$ can be realized on-axis rather than only at large $\varphi_{\mathrm{o}}$ as in a grating. Uniquely, arbitrary $\varphi(\omega)$ profiles can be readily synthesized, limited only by the numerical aperture, diffraction efficiency, and wavelength availability of the SLM.
\section{Measurements}
\begin{figure}[t!]
\centering
\includegraphics[width=11cm]{Fig9}
\caption{Measurements for on-axis luminal fields. In the first row we plot $\varphi(\omega)$, in the second $k_{z}(\lambda)-k_{\mathrm{o}}-\tfrac{\Omega}{c}$, in the third $\widetilde{v}(\lambda)$, and in the fourth $k_{z}^{(2)}(\lambda)$. (a) Dispersion-free field; (b) anomalous GVD with $\omega_{\mathrm{o}}\varphi_\mathrm{o}^{(1)}\!=\!10$; (c) normal GVD is physically inadmissible; and (d) arbitrary dispersion in which $c\omega_\mathrm{o}k_{z}^{(2)}\!=\!-100$ and $c\omega_\mathrm{o}^{2}k_z^{(3)}\!=\!1.5\times10^{5}$.}
\label{Fig:Measurements1}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=11cm]{Fig10}
\caption{Measurements for on-axis non-luminal fields. The rows show the same quantities as in Fig.~\ref{Fig:Measurements1} except that we plot $k_{z}-k_{\mathrm{o}}-\tfrac{\Omega}{\widetilde{v}}$ in the second. Throughout we have $\lambda_{\mathrm{o}}\!=\!800$~nm and $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!1.19c$. (a) Dispersion-free field; (b) anomalous GVD with $c\omega_{\mathrm{o}}k_{z}^{(2)}\!=\!-200$; (c) normal GVD with $c\omega_{\mathrm{o}}k_{z}^{(2)}\!=\!100$; and (d) a dispersion profile in which $c\omega_\mathrm{o} k_{z}^{(2)}\!=\!-100$ and $c\omega_\mathrm{o}^{2}k_{z}^{(3)}\!=\!-10^{6}$.}
\label{Fig:Measurements2}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=11cm]{Fig11}
\caption{Measurements for off-axis luminal fields with $\lambda_{\mathrm{o}}\!=\!799$~nm. The quantities plotted in the rows are the same as in Fig.~\ref{Fig:Measurements1}. (a) Dispersion-free field corresponding to a focus-wave mode with $\varphi_{\mathrm{o}}\!=\!1^{\circ}$; (b) anomalous GVD with $\varphi_{\mathrm{o}}\!=\!1^{\circ}$, $\omega_{\mathrm{o}}\varphi_\mathrm{o}^{(1)}\!=\!0$, and $\omega_\mathrm{o}^2\varphi_\mathrm{o}^{(2)}\!=\!10^{4}$; (c) normal GVD with $\varphi_{\mathrm{o}}\!=\!1.5^{\circ}$, $\omega_{\mathrm{o}}\varphi_\mathrm{o}^{(1)}\!=\!0$, and $\omega_\mathrm{o}^2\varphi_\mathrm{o}^{(2)}\!=\!-10^{4}$; and (d) a dispersion profile with $\varphi_{\mathrm{o}}\!=\!1.6^{\circ}$, $c\omega_\mathrm{o}k_{z}^{(2)}\!=\!50$, $c\omega_\mathrm{o}^{2}k_{z}^{(3)}\!=\!-10^{6}$, and $c\omega_\mathrm{o}^{3}k_{z}^{(4)}\!=\!5\times10^{7}$.}
\label{Fig:Measurements3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=11cm]{Fig12}
\caption{Measurements for off-axis non-luminal fields with $\lambda_{\mathrm{o}}\!=\!799$~nm. The quantities plotted in each row are the same as in Fig.~\ref{Fig:Measurements2}. (a) Dispersion-free field with $\varphi_{\mathrm{o}}\!=\!2.3^{\circ}$ and $\widetilde{v}\!=\!1.19c$; (b) anomalous GVD with $\varphi_{\mathrm{o}}\!=\!1^{\circ}$ and $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!1.1c$, produced by setting $\omega_{\mathrm{o}}\varphi_\mathrm{o}^{(1)}\!=\!5$ and $\omega_\mathrm{o}^2\varphi_\mathrm{o}^{(2)}\!=\!5000$; (c) normal GVD with $\omega_{\mathrm{o}}\varphi_\mathrm{o}^{(1)}\!=\!5$ and $\omega_\mathrm{o}^2\varphi_\mathrm{o}^{(2)}\!=\!-5000$; and (d) a dispersion profile with $\varphi_{\mathrm{o}}\!=\!1^{\circ}$, $\widetilde{v}\!=\!1.19c$, $c\omega_\mathrm{o}k_{z}^{(2)}\!=\!50$, $c\omega_\mathrm{o}^{2}k_{z}^{(3)}\!=\!-10^{5}$, and $c\omega_\mathrm{o}^{3}k_{z}^{(4)}\!=\!10^{7}$.}
\label{Fig:Measurements4}
\end{figure}
Realizing any spectral profile $\varphi(\omega)$ for the propagation angle is simply a matter of implementing the requisite phase pattern on the SLM. In our experiments, we follow one of two approaches. When only a few low orders of AD are prescribed (say $\varphi_{\mathrm{o}}$, $\varphi_{\mathrm{o}}^{(1)}$, and $\varphi_{\mathrm{o}}^{(2)}$), with all higher orders set to zero, we implement on the SLM the phase $\Phi(\lambda)\!=\!k\sin{\{\varphi(\lambda)\}}x$, where $\varphi(\omega)\!=\!\varphi_{\mathrm{o}}+\varphi_{\mathrm{o}}^{(1)}\Omega+\tfrac{1}{2}\varphi_{\mathrm{o}}^{(2)}\Omega^{2}$. If, on the other hand, a particular dispersion profile is targeted, say $k_{z}(\omega)\!=\!k_{\mathrm{o}}+\tfrac{\Omega}{\widetilde{v}}+\tfrac{1}{2}k_{z}^{(2)}\Omega^{2}+\tfrac{1}{6}k_{z}^{(3)}\Omega^{3}$, then we calculate $k_{x}(\omega)\!=\!\sqrt{k^{2}-k_{z}^{2}(\omega)}$ and implement $k_{x}(\omega)$ directly.
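As an illustration of the second approach, the sketch below prescribes a dispersion profile $k_{z}(\omega)$ (using the coefficient values quoted for Fig.~\ref{Fig:Measurements1}(d)) and derives the $k_{x}(\omega)$ to be implemented; the bandwidth and discretization are assumptions made for illustration only.
\begin{verbatim}
# Sketch of the second approach: prescribe k_z(omega), derive k_x(omega).
import numpy as np

c = 299792458.0
omega_o = 2 * np.pi * c / 800e-9
k_o = omega_o / c

Omega = 2 * np.pi * np.linspace(-0.5e12, 0.5e12, 201)  # assumed bandwidth
omega = omega_o + Omega

v_tilde = c                          # luminal on-axis case
kz2 = -100.0 / (c * omega_o)         # c*omega_o*k_z^(2)   = -100
kz3 = 1.5e5 / (c * omega_o**2)       # c*omega_o^2*k_z^(3) = 1.5e5

k_z = k_o + Omega / v_tilde + 0.5 * kz2 * Omega**2 + kz3 * Omega**3 / 6
# Physical fields require k_z <= omega/c; clip guards against rounding.
k_x = np.sqrt(np.clip((omega / c) ** 2 - k_z**2, 0.0, None))
\end{verbatim}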
For each field we measure $k_{x}(\omega)$ using a lens and a grating (to implement spatial and temporal Fourier transforms) and from it we extract four quantities: (1) the frequency-dependent propagation angle $\varphi(\omega)\!=\!\arcsin{\{\tfrac{k_{x}(\omega)}{\omega/c}\}}$; (2) the axial wave number $k_{z}(\omega)\!=\!\sqrt{k^{2}-k_{x}^{2}(\omega)}\!=\!k\cos{\{\varphi(\omega)\}}$; (3) the frequency-dependent group velocity $\widetilde{v}(\omega)\!=\!\tfrac{\omega-\omega_{\mathrm{o}}}{k_{z}(\omega)-k_{\mathrm{o}}}$; and (4) the frequency-dependent GVD coefficient $k^{(2)}_{z}(\omega)\!=\!\tfrac{2}{(\omega-\omega_{\mathrm{o}})^{2}}\{k_{z}(\omega)-k_{\mathrm{o}}-\tfrac{\omega-\omega_{\mathrm{o}}}{\widetilde{v}(\omega_{\mathrm{o}})}\}$. For luminal fields (on-axis [Fig.~\ref{Fig:Measurements1}] or off-axis [Fig.~\ref{Fig:Measurements3}]), we plot $k_{z}(\omega)$ relative to the light-line, $k_{z}(\omega)-k_{\mathrm{o}}-\tfrac{\Omega}{c}$, for which only negative values are allowed. For non-luminal fields (on-axis [Fig.~\ref{Fig:Measurements2}] or off-axis [Fig.~\ref{Fig:Measurements4}]), we plot the more convenient quantity $k_{z}(\omega)-k_{\mathrm{o}}-\tfrac{\Omega}{\widetilde{v}}$, which may be either positive or negative.
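The extraction chain can be summarized in a few lines; in this sketch the measured $k_{x}(\omega)$ is replaced by a simulated stand-in (the superluminal field of Fig.~\ref{Fig:Measurements2}(b)), and the one-sided spectral window is an assumption.
\begin{verbatim}
# Sketch of the four extracted quantities; the "measured" k_x(omega) is
# simulated here (a superluminal field with constant anomalous GVD).
import numpy as np

c = 299792458.0
omega_o = 2 * np.pi * c / 800e-9
k_o = omega_o / c
v_o = 1.19 * c                                 # assumed group velocity

Omega = 2 * np.pi * np.linspace(0.01e12, 1e12, 200)  # one-sided spectrum
k = (omega_o + Omega) / c
kz2_true = -200.0 / (c * omega_o)              # c*omega_o*k_z^(2) = -200
k_z_true = k_o + Omega / v_o + 0.5 * kz2_true * Omega**2
k_x = np.sqrt(k**2 - k_z_true**2)              # stand-in for the measurement

phi = np.arcsin(k_x / k)                       # (1) propagation angle
k_z = k * np.cos(phi)                          # (2) axial wave number
v_group = Omega / (k_z - k_o)                  # (3) group velocity
k_z2 = 2 / Omega**2 * (k_z - k_o - Omega / v_o)    # (4) GVD coefficient
\end{verbatim}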
We systematically realize examples of all 15 physically admissible field configurations enumerated in Fig.~\ref{Fig:Classification}, as we proceed to show.
First, we plot the measurement results for the on-axis luminal fields in Fig.~\ref{Fig:Measurements1} with $\lambda_{\mathrm{o}}\!=\!799$~nm, whereupon $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!c$ and $\varphi(\lambda_{\mathrm{o}})\!=\!\varphi_{\mathrm{o}}\!=\!0$. For the dispersion-free field, $k_{z}(\omega)\!=\!\frac{\omega}{c}$ and $\varphi(\omega)\!=\!0$, corresponding to a plane-wave pulse with $\widetilde{v}(\omega)\!=\!c$ and $k_{z}^{(2)}(\omega)\!=\!0$, as shown in Fig.~\ref{Fig:Measurements1}(a). We include anomalous GVD by adding first-order AD $\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}\!=\!10$, which leads to $\varphi(\omega)\!\propto\!\Omega$, a wavelength-dependent $\widetilde{v}$, and negative $k_{z}^{(2)}$ [Fig.~\ref{Fig:Measurements1}(b)]. The normal GVD class here is physically inadmissible [Fig.~\ref{Fig:Measurements1}(c)]. In Fig.~\ref{Fig:Measurements1}(d) we show a field whose second- and third-order dispersion coefficients, $k_{z}^{(2)}$ and $k_{z}^{(3)}$, have been set at negative and positive values, respectively. The latter field was synthesized by making use of non-differentiable AD to introduce the prescribed on-axis dispersion profile.
Second, for the on-axis non-luminal fields [Fig.~\ref{Fig:Measurements2}], the AD must be non-differentiable; throughout we have $\lambda_{\mathrm{o}}\!=\!800$~nm, a superluminal group velocity $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!1.19c$, and $\varphi(\lambda_{\mathrm{o}})\!=\!\varphi_{\mathrm{o}}\!=\!0$. The dispersion-free case is a propagation-invariant ST wave packet \cite{Kondakci17NP,Yessenov19OPN} with $\varphi(\omega)\!\propto\!\sqrt{\Omega}$, $\widetilde{v}(\omega)\!=\!1.19c$ independently of $\omega$, and vanishing dispersion of all orders, as shown in Fig.~\ref{Fig:Measurements2}(a). Anomalous GVD [Fig.~\ref{Fig:Measurements2}(b)] or normal GVD [Fig.~\ref{Fig:Measurements2}(c)] is produced using $\varphi(\omega)$ from Eq.~\ref{Eq:AngleForArbitraryDispersion} after setting $k_{2}$ to negative or positive values, respectively. In both cases, $\widetilde{v}(\omega)$ becomes frequency-dependent with $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!1.19c$, whereas $k_{z}^{(2)}$ is frequency-independent, signifying that all dispersion orders above the second are eliminated. Finally, a general dispersion profile is shown in Fig.~\ref{Fig:Measurements2}(d), where we set the values of $k_{z}^{(2)}$ and $k_{z}^{(3)}$.
The classes belonging to the third category of off-axis luminal fields are presented in Fig.~\ref{Fig:Measurements3}. In all cases, we guarantee that $\widetilde{v}(\lambda_{\mathrm{o}})\!=\!c$ by setting $\varphi_{\mathrm{o}}\!=\!-2\delta_{\mathrm{o}}^{(1)}$, or alternatively $\tan{\varphi_{\mathrm{o}}}\!=\!-2\tfrac{\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)}}{1-(\omega_{\mathrm{o}}\varphi_{\mathrm{o}}^{(1)})^{2}}$; i.e., we implement a particular relationship between $\varphi_{\mathrm{o}}$ and $\varphi_{\mathrm{o}}^{(1)}$. In the dispersion-free case [Fig.~\ref{Fig:Measurements3}(a)], $\varphi(\omega)\!=\!\varphi_{\mathrm{o}}$, $\widetilde{v}(\omega)\!=\!c$, and $k_{z}^{(2)}\!=\!0$. This propagation-invariant pulsed field is a segment from a focus-wave mode. Because of the limited bandwidth, however, the characteristic X-shaped spatio-temporal profile would not be visible \cite{Reivelt02PRE}. We introduce anomalous GVD [Fig.~\ref{Fig:Measurements3}(b)] and normal GVD [Fig.~\ref{Fig:Measurements3}(c)] by setting $\varphi_{\mathrm{o}}^{(1)}\!=\!0$ and changing the sign of a non-zero $\varphi_{\mathrm{o}}^{(2)}$. In Fig.~\ref{Fig:Measurements3}(d) we plot the results for a field synthesized with a dispersion profile in which the values of $k_{z}^{(2)}$, $k_{z}^{(3)}$, and $k_{z}^{(4)}$ are set.
Finally, we plot the measurements for off-axis non-luminal fields in Fig.~\ref{Fig:Measurements4}, for which the AD introduced is also differentiable. By removing the constraint $\varphi_{\mathrm{o}}\!=\!-2\delta_{\mathrm{o}}^{(1)}$, we now have $\widetilde{v}(\lambda_{\mathrm{o}})\!\neq\!c$, where $\lambda_{\mathrm{o}}\!=\!799$~nm. The dispersion-free field [Fig.~\ref{Fig:Measurements4}(a)] corresponds to a ST wave packet with $\widetilde{v}\!=\!1.19c$ whose $\varphi(\omega)$ is non-differentiable on-axis; here, however, the selected spectral window lies away from the non-differentiable wavelength, and the entire spectrum considered is off-axis. Consequently, $\widetilde{v}(\omega)\!=\!1.19c$ and $k_{z}^{(2)}\!=\!0$. We include anomalous GVD [Fig.~\ref{Fig:Measurements4}(b)] or normal GVD [Fig.~\ref{Fig:Measurements4}(c)] by adjusting the values of $\varphi_{\mathrm{o}}$, $\varphi_{\mathrm{o}}^{(1)}$, and $\varphi_{\mathrm{o}}^{(2)}$ in Eq.~\ref{Eq:SecondOrderDispersionTerm} independently. Note that $k_{z}^{(2)}(\omega)$ in these two scenarios is wavelength-dependent, indicating that higher-order dispersion terms are \textit{not} negligible. Lastly, we present in Fig.~\ref{Fig:Measurements4}(d) a field in which the values of $k_{z}^{(2)}$, $k_{z}^{(3)}$, and $k_{z}^{(4)}$ are all prescribed.
\section{Discussion and conclusion}
One virtue of this systematic survey is that field configurations that might otherwise have escaped attention can be identified and examined. There is, of course, an element of arbitrariness in this classification. One could further subdivide the non-luminal fields into subluminal, superluminal, and negative-$\widetilde{v}$ sub-categories. Although convincing arguments could be mounted in support of this more detailed classification, we nevertheless group all the non-luminal fields together because the major experimental challenge lies in the distinction between the luminal and non-luminal cases. Such a subdivision would also increase the fraction of fields that have \textit{not} yet been produced, so our more restricted classification scheme estimates the fraction of yet-to-be-realized fields more conservatively.
In our work, we have made use of bulk optical components to construct the universal AD synthesizer. It is important to explore other potential platforms for realizing the same capability. Prime candidates include free-form optics, volume grating systems, metasurfaces, and nanophotonic devices. It is a sobering thought that despite tremendous progress in various aspects of nanophotonics over the past few decades, no known device can produce the missing classes of pulsed fields in Fig.~\ref{Fig:Classification}. One exception is a recent theoretical proposal for the synthesis of on-axis non-luminal pulsed optical fields (propagation-invariant ST wave packets) via a non-local nanophotonic device \cite{Guo21Light}. Our work therefore points to a potential role for nanophotonics in modulating pulsed optical fields by producing controllable AD.
The experimental arrangement described here is capable of introducing AD in one transverse spatial coordinate, but not the other. The main constraint arises from the use of a 2D SLM in which one dimension is reserved for modulating the temporal spectrum, therefore leaving only one dimension for modulating the field spatially. Conventional techniques for producing AD using prisms, gratings, or other dispersive or diffractive devices all introduce AD in one dimension. It remains an open question whether it is possible to construct a universal AD synthesizer in two transverse dimensions.
In conclusion, we have described an optical arrangement that serves as a versatile, high-resolution, \textit{universal angular-dispersion synthesizer} capable of imparting an arbitrary wavelength-dependent propagation angle to a pulsed optical field. Using this system, we realize representative examples of all 15 physically admissible classes among the 16 possible classes categorized by axial phase velocity, group velocity, and dispersion profile. Access to this broad span of structured pulsed fields with precisely tailored dispersion profiles can help improve nonlinear interactions with materials or structures in the vicinity of their resonances, where the refractive index changes rapidly. Furthermore, such sculpted fields will benefit investigations of novel guided pulsed modes \cite{Shiri20NC,Guo21PRR} and omni-resonant interactions with planar cavities \cite{Shiri20OL,Shiri20APLP}.
\section{Introduction}
Two-sample tests are among the most frequently used methods for statistical inference. Across disciplines, we encounter the same setup: two samples $\bm{X}$ and $\bm{Y}$ correspond to two different conditions, and we want to decide if the condition affects the data in some way. Does a drug improve patient outcomes compared to placebo? Are test subjects' reaction times more variable in one context than another? In general, we want to test the null hypothesis that $\bm{X}$ and $\bm{Y}$ come from the same distribution. For now, we will assume $\bm{X}$ and $\bm{Y}$ are univariate, though we discuss multivariate extensions in Section \ref{sec:multi}.
Often, we can assume no specific knowledge of the distributions $F$ and $G$ used to generate $\bm{X}$ and $\bm{Y}$. In this fully non-parametric setting, the problem above is difficult to approach. Some well-known rank-based tests are designed for this context: the Mann-Whitney $U$-test \cite{mann1947test} looks for a location shift between $F$ and $G$, while the Lepage \cite{lepage1971combination} and Cucconi \cite{cucconi1968nuovo} tests address both location and scale. However, even if $F$ and $G$ are not location or scale shifted, differences may hide in the distributions' modality, skewness, local shape, and so forth.
In order to detect more elusive distributional differences, a number of non-parametric tests exist which make no assumptions about how $F$ may differ from $G$. For instance, the Kolmogorov-Smirnov statistic \cite{kolmogorov1933sulla} is the maximal vertical distance between the samples' empirical CDF curves. The Cramer-von Mises statistic \cite{cramer1928composition} sums the squared vertical distance between ECDFs at every point in the combined sample; Anderson-Darling \cite{anderson1952asymptotic} does the same while weighting each vertical distance by an estimate of its variance. Instead of a discrete sum over points in the sample, a test using Wasserstein distance \cite{dobrushin1970prescribing} uses the total area between ECDF curves. The recent DTS test \cite{dowd2020new} somewhat combines Anderson-Darling and Wasserstein, using as its statistic a variance-weighted area between ECDF curves.
Each of these non-parametric tests uses some variety of distance between the ECDFs of $\bm{X}$ and $\bm{Y}$. This approach is logical, as convergence in distribution of probability measures is equivalent to convergence of the associated CDFs in the Kolmogorov-Smirnov metric, provided that the limiting CDF is continuous. Moreover, the Glivenko-Cantelli theorem indicates that ECDFs are uniformly good approximations for their theoretical counterparts.
Yet, the ECDF-based methods do have some drawbacks. The performance of these tests is highly sensitive to
\begin{enumerate}
\item[(1)] the type of alternative and
\item[(2)] the precise shape of the distributions, independently of the difference between them.
\end{enumerate}
As an example of (1), these methods can excel at detecting location and scale shifts, but will struggle to catch bimodality when mean and variance are held constant. As an example of (2), we show in Section \ref{sec:performance} that the relative performance of these tests at detecting a location shift can be exactly \textit{reversed} by a suitable choice of distribution family. One would instead prefer power against location alternatives to be nearly independent of family.
One final drawback pertains to ease of use rather than power. For these methods, ECDF distance measures distributional difference as a scalar quantity. If a test rejects the null, no framework is available to determine exactly why rejection occurred. Even looking at ECDF line graphs and histograms of $\bm{X}$ and $\bm{Y}$, it can be difficult to specify which irregularities account for the rejection, as well as how much each irregularity contributes.
Here, we introduce a new non-parametric two-sample test, AUGUST -- AUGmented cdf for Uniform Statistic Transformation -- which tests for equality of distribution up to a predetermined resolution $d$. AUGUST explicitly tests for multiple orthogonal sources of distributional inequality, giving AUGUST power against a wide range of alternatives. When AUGUST rejects the null, this decomposition into orthogonal signals allows for unambiguous interpretation of exactly how equality in distribution between $\bm{X}$ and $\bm{Y}$ has failed. Moreover, the AUGUST statistic is distribution-free in finite samples, allowing for efficient computation of $p$-values via null simulation.
In Section \ref{sec:insights}, we motivate the AUGUST test by a discussion of ECDF transformations. We show that a test of $F = G$ can be reduced to a test of the uniformity of a particular collection of ECDF-transformed variables. Consider the following transformation: for each point $x$ in $\bm{X}$, what fraction of the $\bm{Y}$ sample is less than $x$? Intuitively speaking, if these fractional values are not uniformly distributed in $[0, 1]$, then the distribution of $\bm{X}$ is a poor fit for $\bm{Y}$, and we should reject $H_0: F = G$. We build on this idea with an approach inspired by resampling.
As a central principle of AUGUST, we use the BET framework introduced in Zhang \cite{zhang2019bet}. Since we wish to test the uniformity of some collection in $[0, 1]$, we can partition $[0, 1]$ into intervals of width $1/2^d$ and record cell probabilities for each of the $2^d$ intervals, where $d$ is a fixed resolution level. Via the Hadamard transform, we map this vector of cell probabilities to a vector of symmetry statistics, thought of as a transformation from the physical domain to the frequency domain. It turns out that operating in the frequency domain simplifies the process of creating a powerful test for uniformity. Moreover, the symmetry statistics can be thought of as detecting uncorrelated sources of non-uniformity at a binary depth $d$. As a result, each symmetry statistic has a clear interpretation as to the way in which non-uniformity fails. In the context of a two-sample test, this interpretation tells us how $F \neq G$.
In Section \ref{sec:method}, we formalize the procedure for the AUGUST test and analyze its running time. Given a total sample size $N$, we prove that the AUGUST statistic can be computed in $O(N\log N)$ elementary operations, and we provide an algorithm that achieves this time complexity. Via simulation, we compare the running time of AUGUST to that of classical methods. In addition, we demonstrate how the AUGUST test can be naturally extended to a non-parametric two-sample test on multivariate data using elliptical cells.
In Section \ref{sec:theoretical}, we show that the AUGUST statistic can be written as a continuous function of a two-sample $U$-statistic. We use this fact to derive the asymptotic distribution of the AUGUST statistic, allowing faster $p$-value computation in a large-sample setting and providing a framework for power analysis under any predetermined alternative.
In Section \ref{sec:performance}, we use simulation studies to compare the power of the AUGUST test to that of other well-known non-parametric two-sample tests. We find that AUGUST has power close to that of the best existing methods in every context, as well as greater power in some circumstances. For example, AUGUST outperforms all other tests considered at detecting unimodality versus multimodality.
We also examine the empirical power of our multivariate extension to the AUGUST test. There are many existing non-parametric multivariate methods, such as those of Weiss \cite{weiss1960two}; Bickel \cite{bickel1969distribution}; Baumgartner, Weiss, and Schindler \cite{baumgartner1998nonparametric}; Hettmansperger, M\"ott\"onen and Oja \cite{hettmansperger1998affine}; Hall and Tajvidi \cite{hall2002permutation}; Rousson \cite{rousson2002distribution}; Baringhaus and Franz \cite{baringhaus2004new}; Aslan and Zech \cite{aslan2005new}; Eric, Bach, and Harchaoui \cite{harchaoui2007testing}; Oja \cite{oja2010multivariate}; Gretton et al. \cite{gretton2012kernel}; Sz\'ekely and Rizzo \cite{szekely2013energy}; Biswas et al. \cite{biswas2014distribution}; Biswas and Ghosh \cite{biswas2014nonparametric}; Chwialkowski et al. \cite{chwialkowski2015fast}; Lopez-Paz and Oquab \cite{lopez2016revisiting}; Li \cite{li2018asymptotic}; Pan et al. \cite{pan2018ball}; and Song and Chen \cite{song2020generalized}. Of recent note are the graph-based two-sample tests, including Friedman and Rafsky \cite{friedman1979multivariate}; Schilling \cite{schilling1986multivariate}; Henze \cite{henze1988multivariate}; Liu and Singh \cite{liu1993quality}; Rosenbaum \cite{rosenbaum2005exact}; Chen and Friedman \cite{chen2017new}; and Chen, Chen, and Su \cite{chen2018weighted}. The theoretical properties of graph-based tests such as these are explored in Bhattacharya \cite{bhattacharya2019general}. In a low-dimensional setting, we use simulation studies to demonstrate that the multivariate AUGUST test has comparable power to current methods.
Finally, in Section \ref{sec:nba}, we apply our method to NBA data on throw distance and angle from the net. Do shots follow a different distribution than misses? How about throws early in the game versus late in the game? In addition to answering these questions, we use AUGUST to create graphical data representations addressing why the null was rejected in each case. Finally, with the help of the multivariate AUGUST test, we revisit the NBA shooting data in the context of joint distributions.
\section{Main insights}
\label{sec:insights}
\subsection{CDF transformation}
Given independent samples $\{\bm{X}_i\}_{i = 1}^m$ and $\{\bm{Y}_i\}_{i = 1}^n$, where $\bm{X}_i\sim G$ and $\bm{Y}_i \sim F$, recall that we are interested in testing
\begin{align*}
H_0: F = G \text{ versus } H_a: F\neq G.
\end{align*}
For our purposes, we will assume that $F$ and $G$ are absolutely continuous functions.
To illustrate the main idea, imagine a one-sample setting. We want to test whether or not $\bm{X}_i \sim F$, with $F$ known. Under the null hypothesis $\bm{X}_i \sim F$, it is a well-known result that the transformed variables $\{F(\bm{X}_i):i\in [m]\}$ follow a Uniform$(0, 1)$ distribution. When $\bm{X}$ does not follow $F$, the collection $\{F(\bm{X}_i):i\in [m]\}$ is not uniform. As a result, we can reduce our one-sample test to a test of the uniformity of $\{F(\bm{X}_i):i\in [m]\}$. Moreover, examining \textit{how} the collection $\{F(\bm{X}_i):i\in [m]\}$ fails to be uniform tells us why $F$ does not fit the distribution of $\bm{X}$.
In the two-sample setting, the same intuition holds true: we might construct transformed variables that are nearly uniform in $[0, 1]$ when $F = G$, and that are not uniform otherwise. When the distributions of the two samples are different, the way that uniformity fails should be informative.
Given the fact that the CDF-transformed variables $\{G(\bm{X}_i):i\in [m]\}$ follow a uniform distribution, an intuitive choice would be $\{\hat{F}_{\bm{Y}}(\bm{X}_i): i \in [m]\}$, where $\hat{F}_{\bm{Y}}$ is the empirical CDF of $\bm{Y}$:
\begin{align*}
\hat{F}_{\bm{Y}}(t) = \frac{1}{n}\sum_{i = 1}^{n} I(\bm{Y}_i \leq t).
\end{align*}
The BET framework introduced in Zhang \cite{zhang2019bet} gives us a way to test $\{\hat{F}_{\bm{Y}}(\bm{X}_i): i \in [m]\}$ for uniformity up to a given binary depth $d$, which is equivalent to testing multinomial uniformity over dyadic fractions $\{\frac{1}{2^d},\dots , 1\}$. In particular, we can define a vector $\bm{P}$ of length $2^d$ such that, for $1 \leq i \leq 2^d$,
\begin{align*}
\bm{P}_i = \frac{\#\bigg\{k: \hat{F}_{\bm{Y}}(\bm{X}_k) \in \left[\frac{i-1}{2^d}, \frac{i}{2^d}\right)\bigg\}}{m}.
\end{align*}
Then the vector of symmetry statistics is given by $\bm{S} = \mathbf{H}_{2^d}\bm{P}$, where $\mathbf{H}_{2^d}$ is the Hadamard matrix of size $2^d$ according to Sylvester's construction. In particular, we can restrict our attention to $\bm{S}_{-1}$, since the first coordinate of $\bm{S}$ is always equal to $\sum_{i = 1}^{2^d} \bm{P}_i = 1$. As shown in Zhang \cite{zhang2019bet}, $\bm{S}_{-1}$ is a sufficient statistic for uniformity in the one sample setting, and the BET test based on $\bm{S}_{-1}$ achieves the minimax rate in sample size required for power against a wide variety of alternatives.
We can think of $\bm{S}_{-1}$ in a signal-processing context: the Hadamard transform maps the vector of cell probabilities $\bm{P}$ in the physical domain to the vector of symmetries $\bm{S}_{-1}$ in the frequency domain. This transformation is advantageous since, in the one sample setting, the entries of $\bm{S}_{-1}$ have mean zero and are pairwise uncorrelated under the null. As a result, fluctuations of $\bm{S}_{-1}$ away from $\bm{0}_{2^d-1}$ unambiguously support the alternative, and the coordinates of $\bm{S}_{-1}$ are interpretable as orthogonal signals of nonuniformity. Moreover, the vector $\bm{P}$ always satisfies $\sum_{i = 1}^{2^d} \bm{P}_i = 1$, meaning that the mass of $\bm{P}$ is constrained to a $(2^d-1)$-dimensional hyperplane in $\mathbb{R}^{2^d}$. In contrast, the vector $\bm{S}_{-1}$ is non-degenerate and summarizes the same information about non-uniformity with greater efficiency.
To clarify this procedure, we provide a concrete example. Consider the case $d = 2$, and suppose our calculation for $\bm{S} = \mathbf{H}_{4}\bm{P}$ can be explicitly written
\begin{align*}
\begin{pmatrix} 1.00 \\ 0.00 \\ 0.50 \\ -0.10 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1\end{pmatrix} \begin{pmatrix} 0.35 \\ 0.40 \\ 0.15 \\ 0.10 \end{pmatrix}.
\end{align*}
Note that the first symmetry statistic, $\bm{S}_1 = 1$, is constant and not diagnostic for asymmetry. Of the other statistics, $\bm{S}_3 = 0.5$ is largest in absolute value. This indicates that the greatest imbalance comes from the row $(1, 1, -1, -1)$, which compares the fraction of points in the first half of $[0, 1]$ to the fraction in the second half.
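This example can be checked numerically in a few lines (a sketch; \texttt{scipy}'s \texttt{hadamard} follows Sylvester's construction):
\begin{verbatim}
# Numerical check of the d = 2 example above.
import numpy as np
from scipy.linalg import hadamard

P = np.array([0.35, 0.40, 0.15, 0.10])
S = hadamard(4) @ P       # -> [1.00, 0.00, 0.50, -0.10]
S_minus1 = S[1:]          # drop the constant first coordinate
print(S_minus1)
\end{verbatim}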
One possible choice of test statistic is the quantity $S = \norm{\bm{S}_{-1}}_2^2$. A test based on $S$ is essentially a $\chi^2$ test and has decent power at detecting $F \neq G$. However, it turns out that we can achieve much higher power by constructing $\bm{P}$ a bit differently.
\subsection{An ``Augmented'' CDF}
\label{ssn:augmented}
A problem with directly using the ECDF of $\bm{Y}$ as a transformation for $\bm{X}$ is dependence between the transformed variables $\{\hat{F}_{\bm{Y}}(\bm{X}_i), i \in [m]\}$. While each variable marginally follows a discrete uniform distribution on $\{0, \frac{1}{n}, \dots, 1\}$ under the null, the joint distribution of $\hat{F}_{\bm{Y}}(\bm{X}_1)$ and $\hat{F}_{\bm{Y}}(\bm{X}_2)$ has more mass along the diagonal $\{(0, 0), (\frac{1}{n}, \frac{1}{n}), \dots, (1, 1)\}$. Informally, this is because the events $\{\hat{F}_{\bm{Y}}(\bm{X}_1) = \frac{k}{n}\}$ and $\{\hat{F}_{\bm{Y}}(\bm{X}_2) = \frac{k}{n}\}$ are both more likely to occur when the distance between the order statistics $\bm{Y}_{(k)}$ and $\bm{Y}_{(k+1)}$ is large. Due to this correlation, the symmetry statistics of $\{\hat{F}_{\bm{Y}}(\bm{X}_i), i \in [m]\}$ tend to ``swing'' more heavily in one direction or the other, increasing variance under the null and negatively affecting power.
An unrealistic way to resolve the dependence would be to obtain an entirely different $\bm{Y}$ sample for each transformed variable $\hat{F}_{\bm{Y}}(\bm{X}_i)$. Instead, since we are only interested in the uniformity of $\{\hat{F}_{\bm{Y}}(\bm{X}_i), i \in [m]\}$ up to binary depth $d$, we can decrease the dependence by computing ECDF transformations $\hat{F}_{\bm{Y}^*}(\bm{X}_i)$ based on a small, random subsample $\bm{Y}^*$ of size $r$ from $\bm{Y}$. The following discussion makes this process explicit.
Let $\bm{Y}^*$ be a random subsample from $\bm{Y}$ of size $r = 2^{d+1}-1$. For any $x\in\mathbb{R}$, let $p_k^{\bm{Y}}(x)$ be the probability, conditional on $\bm{Y}$, that either $2k-2$ or $2k-1$ elements of $\bm{Y}^*$ are less than or equal to $x$. It turns out that the probabilities $p_k^{\bm{Y}}(x)$ are essentially hypergeometric and simple to compute:
\begin{align*}
p_k^{\bm{Y}}(x) = \frac{\binom{\#\{i:\bm{Y}_i \leq x\}}{2k-2}\binom{\#\{i:\bm{Y}_i > x\}}{2^{d+1}-1 - (2k-2)}}{\binom{n}{2^{d+1}-1}} + \frac{\binom{\#\{i:\bm{Y}_i \leq x\}}{2k-1}\binom{\#\{i:\bm{Y}_i > x\}}{2^{d+1}-1 - (2k-1)}}{\binom{n}{2^{d+1}-1}}.
\end{align*}
Using the function $p_k^{\bm{Y}}(\cdot)$, we define $\bm{P}_x$ to be the vector of length $2^d$ such that, for each coordinate $k$,
\begin{align*}
\bm{P}_{x, k} = p_{k}^{\bm{Y}}(x), \text{ for } 1 \leq k \leq 2^d.
\end{align*}
Note that $\hat{F}_{\bm{Y}^*}(x) \in \left[\frac{k-1}{2^d}, \frac{k}{2^d}\right)$ exactly when $2k-2$ or $2k-1$ subsampled elements in $\bm{Y}^*$ are less than or equal to $x$. Therefore, we could equally say that
\begin{align*}
\bm{P}_{x, k} & = P\left(\hat{F}_{\bm{Y}^*}(x) \in \left[\frac{k-1}{2^d}, \frac{k}{2^d}\right)\Bigg|\bm{Y}\right), \text{ for } 1 \leq k \leq 2^d.
\end{align*}
It is in precisely this sense that $\bm{P}_x$ can be considered an ``augmented'' CDF: instead of mapping $x$ to a single value in the unit interval, $x\mapsto \bm{P}_x$ maps $x$ to a distribution. Moreover, this characterization explains the choice of subsample size $r = 2^{d+1}-1$. Any $r$ satisfying $r = 2^{q}-1$, $q \geq d$, guarantees that the discrete random variable $\hat{F}_{\bm{Y}^*}(x)$ has the same number of point masses inside every interval of the form $\left[\frac{k-1}{2^d}, \frac{k}{2^d}\right)$. The specific choice of $q = d+1$ has been found to work best empirically.
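For illustration, the vector $\bm{P}_x$ can be computed directly from the formula above using \texttt{scipy}'s hypergeometric pmf; the helper name \texttt{augmented\_cdf} and the toy data are ours.
\begin{verbatim}
# Sketch of the augmented-CDF vector P_x via the hypergeometric pmf.
import numpy as np
from scipy.stats import hypergeom

def augmented_cdf(x, Y, d):
    n, r = len(Y), 2 ** (d + 1) - 1     # r = subsample size (requires n >= r)
    K = int(np.sum(Y <= x))             # Y points at or below x
    j = np.arange(r + 1)                # possible subsample counts
    pmf = hypergeom.pmf(j, n, K, r)     # P(j of the r drawn are <= x)
    return pmf[0::2] + pmf[1::2]        # P_{x,k} = p(j=2k-2) + p(j=2k-1)

rng = np.random.default_rng(0)
Y = rng.normal(size=100)
print(augmented_cdf(0.0, Y, d=2))       # entries sum to 1
\end{verbatim}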
To collect information about every $\bm{X}_i$, we define the vector $\bm{P}_{\bm{X}}$, now with a vector subscript, by the average of all $\bm{P}_{\bm{X}_i}$:
\begin{align*}
\bm{P}_{\bm{X}} = \frac{1}{m}\sum_{i = 1}^m \bm{P}_{\bm{X}_i}.
\end{align*}
Given that the formula for $p_k^{\bm{Y}}(x)$ is computed from hypergeometric probabilities, we refer to the coordinates of $\bm{P}_{\bm{X}}$ as \textit{hypergeometric cell probabilities}. Just as we expect the distribution of the ECDF-transformed variables $\{\hat{F}_{\bm{Y}}(\bm{X}_i): i \in [m]\}$ to be uniform under the null, we expect the mass of $\bm{P}_{\bm{X}}$ to be nearly uniform over its coordinates. The vector of symmetry statistics $\bm{S}_{\bm{X}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{X}})_{-1}$ quantifies non-uniformity in $\bm{P}_{\bm{X}}$.
Notably, the cell probabilities in $\bm{P}_{x}$ are computed in reference to a resampling procedure but without actually resampling. As the discussion above suggests, these probabilities could indeed be approximated by a bootstrap procedure: take many subsamples $\bm{Y}^*$ of size $2^{d+1}-1$ from $\bm{Y}$, compute $\hat{F}_{\bm{Y}^*}(x)$ each time, and bin the results as cell counts at intervals of $1/2^d$. The exact cell probabilities $\bm{P}_{x}$ derived above are the limiting cell probabilities of this bootstrap procedure as the number of resamples tends to infinity. The following theorem makes this result explicit.
\begin{theorem}
\label{thm:resamp}
Here, let $\bm{Y}$ be a fixed vector of observed data, and let $x$ be some fixed number. Consider the following bootstrap method for computing the vector $\bm{P}^*_{x}$ using $K$ subsamples from $\bm{Y}$.
\begin{enumerate}
\item Take bootstrap resamples $\bm{Y}^*_k$ of size $2^{d+1}-1$ from $\bm{Y}$ without replacement, for resamples $1 \leq k \leq K$.
\item Compute $\hat{F}_{\bm{Y}^*_k}(x)$, for resamples $1 \leq k \leq K$.
\item Set $\bm{P}^*_{x, i} = \#\left\{k:\hat{F}_{\bm{Y}^*_k}(x) \in \left[\frac{i-1}{2^d}, \frac{i}{2^d}\right)\right\}/K$, for coordinates $1 \leq i \leq 2^d$.
\end{enumerate}
It follows that
\begin{align*}
P\left(\lim_{K \rightarrow \infty} \bm{P}^*_x = \bm{P}_x\right) = 1,
\end{align*}
where the probability is taken over the randomness of the resampling.
\end{theorem}
Theorem \ref{thm:resamp} shows that the hypergeometric cell probabilities are equivalent to the limiting cell probabilities of a certain bootstrap procedure. Effectively, one could say that actual resampling is a valid way to approximate $\bm{P}_{\bm{X}}$. In practice, and in the current context of univariate data, it is much faster to directly compute the limiting hypergeometric probabilities.
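A short Monte Carlo sketch (toy data assumed) illustrates Theorem \ref{thm:resamp}: the subsampling frequencies settle onto the exact hypergeometric cell probabilities.
\begin{verbatim}
# Monte Carlo illustration of the bootstrap characterization of P_x.
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(size=100)
x, d = 0.0, 2
r = 2 ** (d + 1) - 1

counts = np.zeros(2 ** d)
for _ in range(20000):
    Ystar = rng.choice(Y, size=r, replace=False)  # subsample w/o replacement
    j = np.sum(Ystar <= x)                        # r * ECDF_{Y*}(x)
    counts[j // 2] += 1                           # cell k <-> j in {2k-2, 2k-1}
print(counts / 20000)   # compare with the exact P_x sketched earlier
\end{verbatim}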
\subsection{Choice of statistic}
Recall the steps to computing $\bm{S}_{\bm{X}}$:
\begin{align*}
p_k^{\bm{Y}}(x) & = \frac{\binom{\#\{i:\bm{Y}_i \leq x\}}{2k-2}\binom{\#\{i:\bm{Y}_i > x\}}{2^{d+1}-1 - (2k-2)}}{\binom{n}{2^{d+1}-1}} + \frac{\binom{\#\{i:\bm{Y}_i \leq x\}}{2k-1}\binom{\#\{i:\bm{Y}_i > x\}}{2^{d+1}-1 - (2k-1)}}{\binom{n}{2^{d+1}-1}}\\
\bm{P}_{x, k} & = p_{k}^{\bm{Y}}(x), \text{ for coordinate } k = 1 \dots 2^d\\
\bm{P}_{\bm{X}} & = \frac{1}{m}\sum_{i = 1}^m \bm{P}_{\bm{X}_i}\\
\bm{S}_{\bm{X}} & = (\mathbf{H}_{2^d}\bm{P}_{\bm{X}})_{-1}.
\end{align*}
The entries of $\bm{P}_{\bm{X}}$ sum to $1$ and are expected to be roughly uniform under the null. The entries of $\bm{S}_{\bm{X}}$ quantify non-uniformity in $\bm{P}_{\bm{X}}$ and correspond to symmetries encoded in the rows of $\mathbf{H}_{2^d}$. Moreover, $\bm{S}_{\bm{X}}$ is a non-degenerate random vector.
With $\bm{S}_{\bm{Y}}$ defined analogously to $\bm{S}_{\bm{X}}$, we propose the statistic $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$. First, this choice of statistic has the advantage of treating the $\bm{X}$ and $\bm{Y}$ samples symmetrically. This is desirable because neither sample is assumed to have privileged status, so it would be counterintuitive for the value of $S$ to change when the roles of $\bm{X}$ and $\bm{Y}$ are switched. In addition, this statistic is a continuous function of the concatenated vector $(\bm{S}_{\bm{X}}, \bm{S}_{\bm{Y}})^T$. In Theorem \ref{thm:null}, we give an asymptotic normality result for $(\bm{S}_{\bm{X}}, \bm{S}_{\bm{Y}})^T$, meaning that the asymptotic distribution of $S$ is accessible. The primary insight leading to this result is the fact that the concatenated vector $(\bm{S}_{\bm{X}}, \bm{S}_{\bm{Y}})^T$ can be written as a two-sample $U$-statistic, which is the subject of Theorem \ref{thm:kernel}. Going forward, we will use $\text{AUGUST}(\bm{X}, \bm{Y}, d)$ to denote the test based on $S$.
The negative sign in $-\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ arises from the fact that $\bm{S}_{\bm{X}}$ and $\bm{S}_{\bm{Y}}$ are negatively correlated under the alternative, and we want the critical values of $S$ to be positive. The proposition below gives intuition for this negative correlation in the context of a location shift.
\begin{proposition}
Suppose our $\bm{X}$ and $\bm{Y}$ data are such that $\max_i\{\bm{X}_i\} < \min_j\{\bm{Y}_j\}$. Then $\cos (\theta) = -(2^d - 1)^{-1}$, where $\theta$ is the angle between $\bm{S}_{\bm{X}}$ and $\bm{S}_{\bm{Y}}$ as vectors in $\mathbb{R}^{2^d-1}$.
\end{proposition}
In general, we could say the following: if $\bm{X}$ is to the left of $\bm{Y}$, then $\bm{Y}$ is to the right of $\bm{X}$, and the symmetry statistic detecting left/right imbalance will be positive in $\bm{S}_{\bm{X}}$ and negative in $\bm{S}_{\bm{Y}}$. This negative correlation holds true for all symmetry statistics, so we include the negative sign in $-\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ so that this product is large in the positive direction.
It is important to note that $\bm{S}_{\bm{X}}$ and $\bm{S}_{\bm{Y}}$ can be negatively correlated under the null, as well. However, $\norm{\bm{S}_{\bm{X}}}_2$ and $\norm{\bm{S}_{\bm{Y}}}_2$ are larger under the alternative than under the null, meaning that $-\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ is still larger under the alternative. See Theorems \ref{thm:null} and \ref{thm:alt} for additional exploration of the theoretical properties of $\bm{S}_{\bm{X}}$ and $\bm{S}_{\bm{Y}}$.
\subsection{Interpreting the results}
\label{ssn:interpret}
Suppose we have real $\bm{X}$ and $\bm{Y}$ data, and we wish to test the distributional equality of our samples. Before performing the AUGUST test, we must choose some resolution $d$ -- this decision determines the scale on which AUGUST will be sensitive. For example, $d = 1$ is sensitive only to mean/location shift, as $\tilde{\mathbf{H}}_{2} = \begin{pmatrix} 1 & -1 \end{pmatrix}$ has one row, comparing left/right side probabilities. At $d = 2$, sensitivity to scale emerges, coming from the row $\begin{pmatrix} 1 & -1 & -1 & 1 \end{pmatrix}$ in $\tilde{\mathbf{H}}_{4}$. In practice, $d = 3$ should be sufficient for global distributional differences, which existing ECDF-based methods can detect. In Zhang \cite{zhang2021beauty}, it is shown that a depth of $d = 3$ is sufficient for a symmetry statistic-based test of independence to outperform both distance correlation and $F$-test, which are known to be optimal, in detecting correlation in bivariate normal distributions.
Higher depths $d > 3$ are additionally sensitive to local information -- this can be useful for alternatives that are extremely close in the Kolmogorov-Smirnov metric but have densities that are bounded apart in the uniform norm. As one example, we may have $\bm{X}$ sampled from Uniform$(0, 1)$ and $\bm{Y}$ sampled from a high frequency square wave distribution with the same support.
Given some choice of $d$, suppose we calculate $\bm{P}_{\bm{X}}, \bm{P}_{\bm{Y}}$, as well as $\bm{S}_{\bm{X}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{X}})_{-1}$, and $\bm{S}_{\bm{Y}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{Y}})_{-1}$ as specified before. The AUGUST test based on $S$ rejects the null, claiming that $\bm{X}$ and $\bm{Y}$ come from different distributions. How can we use the AUGUST test to interpret this rejection?
We can consider $\bm{Y}$ as our \textit{reference sample}, meaning that we will make statements about how points of $\bm{X}$ fall relative to the distribution of $\bm{Y}$. In this case, looking at the entries of $\bm{S}_{\bm{X}}$ next to the matrix $\mathbf{H}_{2^d}$ tells us what we want to know. Each entry in the vector $\bm{S}_{\bm{X}}$ specifies the non-uniformity of $\bm{P}_{\bm{X}}$ with respect to a row of $\mathbf{H}_{2^d}$. In particular, the largest entries of $\bm{S}_{\bm{X}}$ in absolute value tell us the sources of greatest asymmetry in $\bm{P}_{\bm{X}}$.
For a concrete example, we let $d = 3$. Given $\bm{X}$ and $\bm{Y}$ data, suppose that $\mathbf{H}_{8}\bm{P}_{\bm{X}}$ is explicitly computed to be
\begin{align*}
\begin{pmatrix} 1.00 \\ 0.00 \\ -0.10 \\ 0.02 \\ -0.02 \\ -0.02 \\ -0.08 \\ 0.00 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 0.10 \\ 0.10 \\ 0.14 \\ 0.15 \\ 0.13 \\ 0.12 \\ 0.13 \\ 0.13 \end{pmatrix}.
\end{align*}
Recall that $\bm{S}_{\bm{X}}$ consists of all but the first coordinate of the vector on the left. In this case, the vector $\bm{S}_{\bm{X}}$ has two notable entries, which decompose the non-uniformity of $\bm{P}_{\bm{X}}$ into two orthogonal signals. The largest entry of $\bm{S}_{\bm{X}}$ in absolute value is $-0.10$, corresponding to the third row of $\mathbf{H}_{8}$:
\begin{align*}
\begin{pmatrix} 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \end{pmatrix}.
\end{align*}
We can interpret this correspondence in the following way: the distribution of $\bm{X}$ has a coarse Venetian blind pattern relative to $\bm{Y}$. The second largest entry of $\bm{S}_{\bm{X}}$ in absolute value is $-0.08$, due to the seventh row of $\mathbf{H}_{8}$:
\begin{align*}
\begin{pmatrix} 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \end{pmatrix}.
\end{align*}
From this, we see that the $\bm{X}$ points are centrally concentrated relative to the points of $\bm{Y}$. We would expect the interquartile region of $\bm{Y}$ to contain over half of the points of $\bm{X}$.
Now, suppose we wish to visualize the largest imbalance recorded in $\bm{S}_{\bm{X}}$. In the example above, the entry of $\bm{S}_{\bm{X}}$ largest in absolute value is $-0.10$. Inspecting the corresponding row of $\mathbf{H}_8$, it follows that the third, fourth, seventh, and eighth coordinates of $\bm{P}_{\bm{X}}$ comprise more than half of the total mass in $\bm{P}_{\bm{X}}$.
Let $R_1,\dots, R_8$ be real intervals such that $1/2^d = 1/8$ of the $\bm{Y}$ sample is contained in each $R_i$. These eight intervals correspond to the cells of $\bm{P}_{\bm{X}}$: if $\bm{P}_{\bm{X}, i}$ is large (small), we would expect $R_i$ to contain more (less) than $1/8$ of the $\bm{X}$ sample. In the context of the example above, the regions $R_3, R_4, R_7$, and $R_8$ together contain more than half of the points of $\bm{X}$ but exactly half of the points of $\bm{Y}$. This imbalance reflects the largest asymmetry in $\bm{S}_{\bm{X}}$, and in the context of testing, can be thought of as the primary reason for rejection of the null.
In Figure \ref{fig:ex}, we visualize simulated $\bm{X}$ and $\bm{Y}$ data whose largest asymmetry corresponds to the regions $R_3, R_4, R_7$, and $R_8$. In Section \ref{sec:nba}, we use this style of visualization on NBA shooting data.
\begin{figure}
\includegraphics[scale = .55]{visexample.png}
\caption{Visualization of a symmetry statistic using simulated data. In the concrete example of Section \ref{ssn:interpret}, the largest asymmetry in $\bm{S}_{\bm{X}}$ indicates that $\bm{X}$ has a coarse Venetian blind pattern relative to the reference sample $\bm{Y}$. The yellow rectangles above represent this pattern, as shaded regions contain an excess of $\bm{X}$ points relative to the plotted $\bm{Y}$ sample. As these rectangles represent the largest symmetry statistic in $\bm{S}_{\bm{X}}$, this particular imbalance is interpretable as the primary reason for rejection of the null.}
\label{fig:ex}
\end{figure}
\section{Method}
\label{sec:method}
\subsection{Algorithms for the AUGUST statistic}
Algorithms \ref{alg:augmentedcdf} and \ref{alg:AUGUST} formalize the steps to the AUGUST test outlined in earlier sections. In terms of the notation from earlier, Algorithm \ref{alg:augmentedcdf} computes the augmented CDF vector $\bm{P}_x^{\bm{V}}$, and Algorithm \ref{alg:AUGUST} performs a complete test using the statistic $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$.
\begin{algorithm}
\caption{AugmentedCDF$(x,\bm{V},d)$}
\label{alg:augmentedcdf}
\begin{algorithmic}[1]
\STATE Initialize zero vector $\bm{P}_x$ of length $2^d$
\STATE $N = \text{length}(\bm{V})$
\STATE $n = 2^{d+1} - 1$
\STATE $K = \#\{i: \bm{V}_i \leq x\}$
\FOR{$i = 1$ to $2^d$}
\STATE $k = 2i - 2$
\STATE $\bm{P}_{x, i} = \frac{\binom{K}{k}\binom{N-K}{n - k}}{\binom{N}{n}} + \frac{\binom{K}{k + 1}\binom{N-K}{n - k - 1}}{\binom{N}{n}}$
\ENDFOR
\STATE Return $\bm{P}_x$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{AUGUST$(\bm{X},\bm{Y},d)$}
\label{alg:AUGUST}
\begin{algorithmic}[1]
\STATE Initialize zero vectors $\bm{P}_{\bm{X}}$, $\bm{P}_{\bm{Y}}$ of length $2^d$
\FOR{$i = 1$ to $\text{length}(\bm{X})$}
\STATE $\bm{P}_{\bm{X}} = \bm{P}_{\bm{X}} + \text{AugmentedCDF}(\bm{X}_i, \bm{Y}, d)$
\ENDFOR
\FOR{$i = 1$ to $\text{length}(\bm{Y})$}
\STATE $\bm{P}_{\bm{Y}} = \bm{P}_{\bm{Y}} + \text{AugmentedCDF}(\bm{Y}_i, \bm{X}, d)$
\ENDFOR
\STATE Assign $\bm{P}_{\bm{X}} = \bm{P}_{\bm{X}}/\text{length}(\bm{X})$ and $\bm{P}_{\bm{Y}} = \bm{P}_{\bm{Y}}/\text{length}(\bm{Y})$
\STATE Assign $\bm{S}_{\bm{X}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{X}})_{-1}$ and $\bm{S}_{\bm{Y}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{Y}})_{-1}$
\STATE Compute the statistic $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$
\STATE Reject when $S$ is large
\end{algorithmic}
\end{algorithm}
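For concreteness, a direct $O(mn)$ Python transcription of Algorithms \ref{alg:augmentedcdf} and \ref{alg:AUGUST} might read as follows; this is a minimal sketch under our own helper names (our experiments use an R implementation), returning the statistic $S$, with large values favoring rejection.
\begin{verbatim}
# Direct O(mn) sketch of Algorithms 1-2.
import numpy as np
from scipy.linalg import hadamard
from scipy.stats import hypergeom

def augmented_cdf(x, V, d):
    N, r = len(V), 2 ** (d + 1) - 1
    pmf = hypergeom.pmf(np.arange(r + 1), N, int(np.sum(V <= x)), r)
    return pmf[0::2] + pmf[1::2]

def august_statistic(X, Y, d=3):
    H = hadamard(2 ** d)
    S_X = (H @ np.mean([augmented_cdf(x, Y, d) for x in X], axis=0))[1:]
    S_Y = (H @ np.mean([augmented_cdf(y, X, d) for y in Y], axis=0))[1:]
    return -S_X @ S_Y    # reject when large

rng = np.random.default_rng(2)
print(august_statistic(rng.normal(size=200), rng.normal(1, 1, 200), d=3))
\end{verbatim}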
Recall that our two samples $\bm{X}$ and $\bm{Y}$ have sizes $m$ and $n$, respectively. Treating $d$ as a constant, Algorithm \ref{alg:AUGUST} requires $O(mn)$ elementary operations. This is due to the line $K = \#\{i: \bm{V}_i \leq x\}$ in the function AugmentedCDF$(x,\bm{V},d)$, which necessitates iterating over all entries of $\bm{V}$ each time that AugmentedCDF$(x,\bm{V},d)$ is called. In the function AUGUST, the vectors $\bm{X}$ and $\bm{Y}$ are passed into AugmentedCDF$(x,\bm{V},d)$ as the argument $\bm{V}$ a total of $n$ and $m$ times, respectively.
However, there exists a more efficient implementation of AUGUST for instances when $n$ and $m$ are both large. By first sorting the concatenated $\bm{X}$ and $\bm{Y}$ samples, it is possible to reduce the running time to $O((m+n)\log(m+n))$ operations. The improved algorithm, dubbed AUGUST+, is recorded as Algorithm \ref{alg:AUGUST+}, and the running time of AUGUST+ is recorded in Theorem \ref{thm:runningtime}.
\begin{algorithm}
\caption{AUGUST+$(\bm{X},\bm{Y},d)$}
\label{alg:AUGUST+}
\begin{algorithmic}[1]
\STATE Define $m = \text{length}(\bm{X})$, $n = \text{length}(\bm{Y})$, and $r = 2^{d+1}-1$
\STATE Initialize empty matrix $\mathbf{M}$ of dimension $2\times (m+n)$
\STATE Assign the first row of $\mathbf{M}$ to the concatenated vector $(\bm{X}^T, \bm{Y}^T)$
\STATE Assign the second row of $\mathbf{M}$ to a row vector with $m$ entries equal to $1$ followed by $n$ entries equal to $0$
\STATE Sort the columns of $\mathbf{M}$ ascending by the entries in the first row of $\mathbf{M}$
\STATE Initialize integers $c_x, c_y = 0$ and vectors $\bm{P}_{\bm{X}}, \bm{P}_{\bm{Y}} = \mathbf{0}_{2^d}$
\FOR{$i = 1$ to $(m+n)$}
\IF{$\mathbf{M}_{2, i} = 1$}
\STATE $c_x = c_x + 1$
\FOR{$j = 1$ to $2^d$}
\STATE $k = 2j-2$
\STATE $\bm{P}_{\bm{X}, j} = \bm{P}_{\bm{X}, j} + \displaystyle\frac{\binom{c_y}{k}\binom{n-c_y}{r - k}}{\binom{n}{r}} + \frac{\binom{c_y}{k + 1}\binom{n-c_y}{r - k - 1}}{\binom{n}{r}}$
\ENDFOR
\ELSE
\STATE $c_y = c_y + 1$
\FOR{$j = 1$ to $2^d$}
\STATE $k = 2j-2$
\STATE $\bm{P}_{\bm{Y}, j} = \bm{P}_{\bm{Y}, j} + \displaystyle\frac{\binom{c_x}{k}\binom{m-c_x}{r - k}}{\binom{m}{r}} + \frac{\binom{c_x}{k + 1}\binom{m-c_x}{r - k - 1}}{\binom{m}{r}}$
\ENDFOR
\ENDIF
\ENDFOR
\STATE Assign $\bm{P}_{\bm{X}} = \bm{P}_{\bm{X}}/m$ and $\bm{P}_{\bm{Y}} = \bm{P}_{\bm{Y}}/n$
\STATE Assign $\bm{S}_{\bm{X}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{X}})_{-1}$ and $\bm{S}_{\bm{Y}} = (\mathbf{H}_{2^d}\bm{P}_{\bm{Y}})_{-1}$
\STATE Compute the statistic $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$
\STATE Reject when $S$ is large
\end{algorithmic}
\end{algorithm}
\begin{theorem}
\label{thm:runningtime}
AUGUST+$(\bm{X}, \bm{Y}, d)$ runs in $O((m+n)\log(m+n))$ time.
\end{theorem}
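For reference, here is a direct Python sketch of the full AUGUST statistic built on the \texttt{augmented\_cdf} sketch above. We assume here that $\mathbf{H}_{2^d}$ denotes the $2^d\times 2^d$ Hadamard matrix and that the subscript $-1$ drops the first coordinate, in the style of R's negative indexing; this naive version costs $O(mn)$, in contrast to the sorted sweep of AUGUST+.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard  # uses augmented_cdf from the sketch above

def august(X, Y, d):
    # Cell probabilities averaged over each sample
    PX = np.mean([augmented_cdf(x, Y, d) for x in X], axis=0)
    PY = np.mean([augmented_cdf(y, X, d) for y in Y], axis=0)
    H = hadamard(2 ** d)   # assumed meaning of H_{2^d}
    SX = (H @ PX)[1:]      # (H P)_{-1}: drop the first coordinate
    SY = (H @ PY)[1:]
    return -SX @ SY        # reject for large values
\end{verbatim}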
In Table \ref{tab:run}, we provide simulation results confirming the conclusion of Theorem \ref{thm:runningtime}. The second column of Table \ref{tab:run} records the running time of Kolmogorov-Smirnov for each sample size, and the third column indicates the running time of the AUGUST+ test; like AUGUST+, the Kolmogorov-Smirnov statistic is computable in $O(N\log N)$ time. This implementation of the Kolmogorov-Smirnov test comes from the \texttt{twosamples} R package and uses a pre-compiled C++ function to compute the test statistic, with a default of 2000 bootstrap resamples. The AUGUST+ algorithm used here is implemented solely in R and uses a known critical value from theory across all tests. The final column of Table \ref{tab:run} gives the ratio of running times for the two tests, demonstrating that AUGUST+ is significantly faster than a resampling-based Kolmogorov-Smirnov test for every sample size considered.
\begin{table}
\label{tab:run}
\begin{center}
\begin{tabular}{ r | r | r | r}
$N$ & Kolmogorov-Smirnov (sec)& AUGUST+ (sec)& Ratio (KS/AUG+)\\
\hline
$10^2$ & 0.06 & 0.01 & 4.98\\
$10^3$ & 0.27 & 0.07 & 3.93\\
$10^4$ & 2.49 & 0.60 & 4.13\\
$10^5$ & 30.36 & 5.95 & 5.11\\
$10^6$ & 416.47 & 42.21 & 9.87
\end{tabular}
\end{center}
\caption{Comparison of running times for Kolmogorov-Smirnov and AUGUST+. Critical values for KS are computed using the default 2000 bootstrapped resamples, while critical values for the AUGUST+ test use the theoretical asymptotic distribution of $S$. In every case, AUGUST+ is several times faster than KS, supporting the result of Theorem \ref{thm:runningtime}.}
\end{table}
\subsection{Multivariate extensions}
\label{sec:multi}
Above, we have taken $\bm{X}$ and $\bm{Y}$ to be univariate iid samples from some distributions $G$ and $F$. It turns out that we can extend the univariate AUGUST test to the problem of multivariate two-sample testing. Suppose that $\bm{X}$ and $\bm{Y}$ are iid samples from some continuous distributions $G$ and $F$, with each $\bm{X}_i$ and $\bm{Y}_j$ in $\mathbb{R}^k$, for some $k \geq 2$ and $1 \leq i \leq m$, $1 \leq j \leq n$. In order to use the AUGUST test, our goal is to transform the multivariate $\bm{X}$ and $\bm{Y}$ data into univariate samples $\tilde{\bm{X}}$ and $\tilde{\bm{Y}}$. The transformed $\tilde{\bm{X}}$ and $\tilde{\bm{Y}}$ should be equal in distribution exactly when the multivariate null hypothesis $H_0: F = G$ is true. The exact form of this transformation determines the geometric interpretation of the cell probabilities computed via AUGUST. One technique, which yields elliptical cells, we dub \textit{mutual Mahalanobis distance}.
Given a mean $\bm{\mu}\in\mathbb{R}^k$ and invertible $k\times k$ covariance matrix $\mathbf{\Sigma}$, recall that the Mahalanobis distance of $\bm{x}\in\mathbb{R}^k$ from $\bm{\mu}$ with respect to $\mathbf{\Sigma}$ is given by
\begin{align*}
MD(\bm{x}; \bm{\mu}, \mathbf{\Sigma}) = \sqrt{(\bm{x} - \bm{\mu})^T\mathbf{\Sigma}^{-1}(\bm{x} - \bm{\mu})}.
\end{align*}
Now, let $\hat{\bm{\mu}}_{\bm{X}}$ and $\hat{\mathbf{\Sigma}}_{\bm{X}}$ be the sample mean and sample covariance matrix of $\bm{X}$. Consider the transformed collections
\begin{align*}
\tilde{\bm{X}}^{(\bm{X})} & = \left\{ MD(\bm{X}_i; \hat{\bm{\mu}}_{\bm{X}}, \hat{\mathbf{\Sigma}}_{\bm{X}}): 1 \leq i \leq m \right\} \\
\tilde{\bm{Y}}^{(\bm{X})} & = \left\{ MD(\bm{Y}_j; \hat{\bm{\mu}}_{\bm{X}}, \hat{\mathbf{\Sigma}}_{\bm{X}}): 1 \leq j \leq n \right\},
\end{align*}
where the superscript $(\bm{X})$ indicates that means and covariances are estimated using the $\bm{X}$ sample. If $\bm{X}$ and $\bm{Y}$ come from the same multivariate distribution, then the collections $\tilde{\bm{X}}^{(\bm{X})}$ and $\tilde{\bm{Y}}^{(\bm{X})}$ should have similar univariate distributions. As a result, given some depth $d$, we can compute the AUGUST statistic for the samples $\tilde{\bm{X}}^{(\bm{X})}$ and $\tilde{\bm{Y}}^{(\bm{X})}$ in order to test for the distributional equality of $\bm{X}$ and $\bm{Y}$.
Recall that the AUGUST test can be thought of as testing for regions of imbalance between $\bm{X}$ and $\bm{Y}$ -- imbalances in distribution appear as non-uniformity in the vector of cell probabilities. Under this Mahalanobis distance transformation, the cells used by AUGUST correspond to nested elliptical rings centered on $\hat{\bm{\mu}}_{\bm{X}}$.
As in the univariate case, it is desirable for the test statistic to be invariant to the transposition of $\bm{X}$ and $\bm{Y}$. To achieve this, we can use the test statistic
\begin{align*}
S_{multi} = \max\left(AUGUST(\tilde{\bm{X}}^{(\bm{X})}, \tilde{\bm{Y}}^{(\bm{X})}, d), AUGUST(\tilde{\bm{X}}^{(\bm{Y})}, \tilde{\bm{Y}}^{(\bm{Y})}, d)\right),
\end{align*}
wherein we use \textit{both} possible Mahalanobis distance transformations for $\bm{X}$ and $\bm{Y}$, compute two AUGUST statistics, and take the maximum. As we show in Section \ref{sec:performance}, a depth of $d = 2$ is sufficiently large to detect common multivariate alternatives.
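A short Python sketch of this construction (the helper names are ours; \texttt{august} refers to the univariate sketch given earlier):
\begin{verbatim}
import numpy as np  # august and augmented_cdf as in the sketches above

def mahalanobis_transform(Z, X):
    # Mahalanobis distance of each row of Z from the sample mean of X,
    # with respect to the sample covariance of X.
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    D = Z - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', D, Sinv, D))

def august_multi(X, Y, d=2):
    # Both transformations, two statistics, take the maximum.
    s1 = august(mahalanobis_transform(X, X), mahalanobis_transform(Y, X), d)
    s2 = august(mahalanobis_transform(X, Y), mahalanobis_transform(Y, Y), d)
    return max(s1, s2)
\end{verbatim}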
\section{Theoretical results}
\label{sec:theoretical}
\subsection{General $U$-statistic theory}
Here, we present theory necessary for specifying the asymptotic distribution of the test statistic $S$ for the univariate AUGUST test. Many of the arguments in this subsection imitate the technique and notation of Asymptotic Statistics by A. W. van der Vaart \cite{van2000asymptotic}, extending the $U$-statistic theory of Chapters 11 and 12 to a multivariate kernel. We begin by stating two key lemmas:
\begin{lemma}[Orthogonality of the projection]
\label{lem:orthogofproj}
Let $\bm{U}\in\mathbb{R}^p$ be a random vector, and let $\{W_i\}_{i = 1}^N$ be a collection of $N$ independent observations. Define the projection $\hat{\bm{U}} = E\bm{U} + \sum_{i = 1}^N E(\bm{U} - E\bm{U} | W_i)$. Then
\begin{align*}
E(\bm{U} - \hat{\bm{U}})\hat{\bm{U}}^T = \mathbf{0}_{p\times p}
\end{align*}
\end{lemma}
\begin{lemma}[Closeness of the projection]
\label{lem:closenessofproj}
Let $\{W_i\}_{i = 1}^\infty$ be an independent collection of random variables. Let $\{\bm{U}_N\}_{N = 1}^{\infty}$ be a sequence of non-degenerate random vectors of length $p$. For each $N$, define the projection $\hat{\bm{U}}_N = E\bm{U}_N + \sum_{i = 1}^N E(\bm{U}_N - E\bm{U}_N | W_i)$. Let $\mathbf{\Sigma}_{1, N} = \text{Cov}(\bm{U}_N)$ and $\mathbf{\Sigma}_{2, N} = \text{Cov}(\hat{\bm{U}}_N)$. If $\mathbf{\Sigma}_{1, N}\mathbf{\Sigma}_{2, N}^{-1} \rightarrow I$ as $N \rightarrow \infty$, then
\begin{align*}
\mathbf{\Sigma}_{1, N}^{-\frac{1}{2}}\left(\bm{U}_{N} - E\bm{U}_{N}\right) - \mathbf{\Sigma}_{2, N}^{-\frac{1}{2}}\left(\hat{\bm{U}}_{N} - E\hat{\bm{U}}_{N}\right) \overset{p}{\to} 0.
\end{align*}
\end{lemma}
Using Lemmas \ref{lem:orthogofproj} and \ref{lem:closenessofproj}, one can prove asymptotic normality of $U$-statistics. Suppose we have independent samples $\{\bm{X}_i\}_{i = 1}^m$ and $\{\bm{Y}_j\}_{j = 1}^n$, where $\bm{X}_i \sim G$ and $\bm{Y}_j \sim F$. Recall that a two-sample $U$-statistic based on $\bm{X}$ and $\bm{Y}$ has the form
\begin{align*}
\bm{U} = \frac{1}{\binom{m}{r}\binom{n}{s}}\sum_{\alpha}\sum_{\beta}\bm{k}(\bm{X}_{\alpha_1}, \dots, \bm{X}_{\alpha_r}, \bm{Y}_{\beta_1}, \dots, \bm{Y}_{\beta_s})
\end{align*}
where $\bm{k}:\mathbb{R}^r\times\mathbb{R}^s \rightarrow \mathbb{R}^p$ is called the kernel corresponding to $\bm{U}$. We impose the restriction that $\bm{k}$ is symmetric in its first $r$ and last $s$ coordinates. Notation-wise, the index $\alpha$ is a combination of length $r$ from the set $\{1, \dots, m\}$, and the outer sum is taken over all such combinations. The index $\beta$ and inner sum are analogous. We can think of $\bm{U}$ as an unbiased estimator of $$\bm{\theta} := E\bm{k}(\bm{X}_1,\dots,\bm{X}_{r}, \bm{Y}_1, \dots, \bm{Y}_{s}).$$
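As a toy illustration of this definition (not the kernel used by AUGUST), the following brute-force Python snippet averages a kernel over all index combinations; with $r = s = 1$ and the Mann--Whitney kernel $\mathbf{1}\{x \leq y\}$ it is an unbiased estimator of $\theta = P(X \leq Y)$.
\begin{verbatim}
from itertools import combinations

def u_statistic(X, Y, kernel, r, s):
    # Average the kernel over all size-r subsets of X and size-s subsets of Y.
    total, count = 0.0, 0
    for a in combinations(X, r):
        for b in combinations(Y, s):
            total += kernel(*a, *b)
            count += 1
    return total / count

# Estimates theta = P(X <= Y); here U = 5/6.
U = u_statistic([0.2, 1.5, 0.7], [1.0, 2.2], lambda x, y: float(x <= y), 1, 1)
\end{verbatim}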
\begin{theorem}
\label{thm:u}
Let $N = n + m$, and assume that $n, m \rightarrow \infty$ in such a way that $m/N \rightarrow \lambda$ for some $\lambda \in (0, 1)$. Define the cross-covariance matrices
\begin{align*}
\xi_{i, j} = \text{Cov}\Bigg(&\bm{k}(\bm{X}_1, \dots,\bm{X}_{r}, \bm{Y}_1, \dots, \bm{Y}_{s}),\\
& \bm{k}(\bm{X}_1, \dots, \bm{X}_i, \bm{X}_{i+1}', \dots, \bm{X}_{r}', \bm{Y}_1, \dots, \bm{Y}_{j}, \bm{Y}_{j+1}', \dots, \bm{Y}_{s}')\Bigg).
\end{align*}
If $\mathbf{\Sigma} = r^2\xi_{1, 0}/\lambda + s^2\xi_{0, 1}/(1-\lambda)$ is invertible, then
\begin{align*}
\sqrt{N}\left(\bm{U} - \bm{\theta}\right) \xrightarrow{d} N(\bm{0}, \mathbf{\Sigma}).
\end{align*}
\end{theorem}
The proof of Theorem \ref{thm:u} relies on invertibility of the limiting covariance matrix. It turns out that as long as this matrix has rank at least one, we still achieve asymptotic normality of $\bm{U}$, albeit possibly to a degenerate distribution.
\begin{theorem}
\label{thm:udeg}
If $\mathbf{\Sigma} = r^2\xi_{1, 0}/\lambda + s^2\xi_{0, 1}/(1-\lambda)$ has rank $q \geq 1$, then
\begin{align*}
\sqrt{N}\left(\bm{U} - \bm{\theta}\right) \xrightarrow{d} N(\bm{0}, \mathbf{\Sigma}).
\end{align*}
\end{theorem}
In general, it can be difficult to prove or disprove the invertibility of the limiting covariance matrix $\mathbf{\Sigma}$. In light of Theorem \ref{thm:udeg}, we can say that $\sqrt{N}\left(\bm{U} - \bm{\theta}\right)$ converges in distribution to some multivariate normal as long as $\mathbf{\Sigma}$ is not identically zero. In particular, this is true whenever some coordinate of $\bm{k}_{1, 0}(\bm{X}_1)$ or $\bm{k}_{0, 1}(\bm{Y}_1)$ (defined in the proof of Theorem \ref{thm:u}) has nonzero variance, which is often trivial to show.
\subsection{Writing $S$ as a function of a $U$-statistic}
Now that we have general results for $U$-statistics, we need to relate our test statistic $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ to a $U$-statistic in some way. In this subsection, we work toward showing that the concatenated vector of symmetry statistics $\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}$ is in fact a $U$-statistic.
Let $d\in\mathbb{N}$ be the fixed binary depth, and let the function $\bm{h}:\mathbb{R}\times\mathbb{R}^{2^{d+1}-1} \rightarrow \mathbb{R}^{2^d}$ be given by
\[ \bm{h}_k(x, \bm{y}) = \begin{cases}
1 & \text{if } \#\{j: \bm{y}_j \leq x\} = 2k - 2 \text{ or } 2k - 1\\
0 & \text{otherwise.}
\end{cases}
\]
The following key lemma explains how $\bm{P}_{\bm{X}}$ can be expressed using $\bm{h}$.
\begin{lemma}
\label{lem:h}
With $\bm{h}$ as defined above, it holds that
\begin{align*}
\frac{1}{\binom{n}{2^{d+1}-1}} \sum_{\beta} \bm{h}\left(x, \bm{Y}_{\beta_1}, \bm{Y}_{\beta_2}, \dots, \bm{Y}_{\beta_{2^{d+1}-1}}\right) = \text{AugmentedCDF}(x, \bm{Y}, d).
\end{align*}
Consequently,
\begin{align*}
\frac{1}{m\binom{n}{2^{d+1}-1}} \sum_i\sum_{\beta} \bm{h}\left(\bm{X}_i, \bm{Y}_{\beta_1}, \bm{Y}_{\beta_2}, \dots, \bm{Y}_{\beta_{2^{d+1}-1}}\right) = \bm{P}_{\bm{X}}.
\end{align*}
\end{lemma}
For intuition on the above result, we can look to the classic urn model. Consider an urn with $n$ balls: one red ball for each $\bm{Y}_i \leq x$, and one black ball for each $\bm{Y}_i > x$. Subsampling $2^{d+1}-1$ points from $\bm{Y}$ is equivalent to drawing $2^{d+1}-1$ balls from the urn. In this case, the $k$th coordinate of $\bm{h}$ is an indicator of the event that exactly $2k-2$ or $2k-1$ red balls were drawn. By averaging $\bm{h}_k$ over every possible combination of balls from the urn, we compute the probability of this event. Computing the probability this way is inefficient compared to the obvious hypergeometric approach, but this form ultimately allows us to write $\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}$ as a $U$-statistic.
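The identity in Lemma \ref{lem:h} can be checked exhaustively in small cases. The following Python snippet (reusing \texttt{augmented\_cdf} from the earlier sketch) verifies it at depth $d = 1$ with a sample of size $5$:
\begin{verbatim}
import numpy as np
from itertools import combinations  # augmented_cdf as defined earlier

d, r = 1, 3                  # depth 1, subsets of size 2^(d+1) - 1 = 3
Y = np.array([0.1, 0.4, 0.6, 0.9, 1.3])
x = 0.5
subsets = list(combinations(Y, r))
avg = np.zeros(2 ** d)
for sub in subsets:
    c = sum(v <= x for v in sub)            # red balls drawn
    for k in range(1, 2 ** d + 1):
        avg[k - 1] += c in (2 * k - 2, 2 * k - 1)
avg /= len(subsets)
print(avg, augmented_cdf(x, Y, d))          # both print [0.7, 0.3]
\end{verbatim}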
\begin{theorem}
\label{thm:kernel}
There exists a kernel function
$$\bm{k}: \mathbb{R}^{2^{d+1}-1}\times\mathbb{R}^{2^{d+1}-1} \rightarrow \mathbb{R}^{2^d-1}\times\mathbb{R}^{2^d-1}$$
such that
\begin{align*}
& \frac{1}{\binom{m}{2^{d+1}-1}\binom{n}{2^{d+1}-1}}\sum_\alpha\sum_\beta \bm{k}(\bm{X}_{\alpha_1},\dots, \bm{X}_{\alpha_{2^{d+1}-1}}, \bm{Y}_{\beta_1},\dots, \bm{Y}_{\beta_{2^{d+1}-1}}) = \begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}.
\end{align*}
\end{theorem}
In summary, we have shown that the concatenated vector of symmetry statistics $\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}$ is a vector-valued, two-sample $U$-statistic. This result opens the door to asymptotic results for $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ under both the null and alternative.
\subsection{Asymptotic normality}
First, we address the asymptotic distribution of $S$ under the null.
\begin{theorem}
\label{thm:null}
Suppose that we have univariate iid observations $\{\bm{X}_i\}_{i = 1}^m$ and $\{\bm{Y}_j\}_{j = 1}^n$ under the null. Let $N = n + m$, and assume that $n, m \rightarrow \infty$ in such a way that $m/N \rightarrow \lambda$ for some $\lambda \in (0, 1)$. Then
\begin{align*}
\sqrt{N}\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix} \xrightarrow{d} N(\bm{0}, \mathbf{\Sigma}).
\end{align*}
Defining the cross-covariance matrices
\begin{align*}
\xi_{i, j} = \text{Cov}\Bigg(&\bm{k}(\bm{X}_1, \dots,\bm{X}_{2^{d+1}-1}, \bm{Y}_1, \dots, \bm{Y}_{2^{d+1}-1}),\\
& \bm{k}(\bm{X}_1, \dots, \bm{X}_i, \bm{X}_{i+1}', \dots, \bm{X}_{2^{d+1}-1}', \bm{Y}_1, \dots, \bm{Y}_{j}, \bm{Y}_{j+1}', \dots, \bm{Y}_{2^{d+1}-1}')\Bigg),
\end{align*}
the auto-covariance matrix $\mathbf{\Sigma}$ is given by
\begin{align*}
\mathbf{\Sigma} = (2^{d+1}-1)^2\left(\xi_{1, 0}/\lambda + \xi_{0, 1}/(1-\lambda)\right).
\end{align*}
\end{theorem}
Under the null, Theorem \ref{thm:udeg} and the continuous mapping theorem specify the asymptotic distribution of $S = -\bm{S}_{\bm{X}}^T\bm{S}_{\bm{Y}}$ as an inner product of central, correlated normal random vectors. Under the alternative, one would expect $S$ to be noncentral in some sense, where the amount of noncentrality is dictated by the way in which $F \neq G$. This turns out to be the case, as the next theorem indicates.
First, we provide some definitions used in the next theorem statement and proof. For each $k\in \{1, \dots, 2^d\}$, define the function $p_k^F:\mathbb{R}\rightarrow [0, 1]$ by
\begin{align*}
p_k^F(x) & = \binom{2^{d+1}-1}{2k-2}F(x)^{2k-2}(1-F(x))^{2^{d+1}-1 - (2k-2)} \\
& + \binom{2^{d+1}-1}{2k-1}F(x)^{2k-1}(1-F(x))^{2^{d+1}-1 - (2k-1)},
\end{align*}
and similarly define $p_k^G:\mathbb{R}\rightarrow [0, 1]$ by
\begin{align*}
p_k^G(x) & = \binom{2^{d+1}-1}{2k-2}G(x)^{2k-2}(1-G(x))^{2^{d+1}-1 - (2k-2)} \\
& + \binom{2^{d+1}-1}{2k-1}G(x)^{2k-1}(1-G(x))^{2^{d+1}-1 - (2k-1)}.
\end{align*}
These functions can be thought of as theoretical analogs of the probabilities $p_k^{\bm{Y}}(x)$ and $p_k^{\bm{X}}(x)$ from Section \ref{ssn:augmented}. Further, define the quantities
\begin{align*}
p_k^{F:G} = \int p_k^F(x)dG(x)
\end{align*}
and
\begin{align*}
p_k^{G:F} = \int p_k^G(x)dF(x).
\end{align*}
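As a sanity check on these definitions, note that under the null $F = G$ each $p_k^{F:G}$ equals $2^{-d}$: the number of the $2^{d+1}-1$ iid $F$-draws falling below an independent $F$-draw is uniform on $\{0, \dots, 2^{d+1}-1\}$, and each cell collects exactly two of these $2^{d+1}$ values. A short Monte Carlo confirmation in Python:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, reps = 2, 100_000
r = 2 ** (d + 1) - 1
x = rng.normal(size=reps)                       # one F-draw per replicate
counts = (rng.normal(size=(reps, r)) <= x[:, None]).sum(axis=1)
for k in range(1, 2 ** d + 1):
    print(k, np.mean((counts == 2 * k - 2) | (counts == 2 * k - 1)))
# each estimate is close to 1 / 2**d = 0.25
\end{verbatim}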
\begin{theorem}
\label{thm:alt}
Suppose that we have univariate, independent observations $\{\bm{X}_i\}_{i = 1}^m$ and $\{\bm{Y}_j\}_{j = 1}^n$, where $\bm{X}_i \sim G$ and $\bm{Y}_j \sim F$. Let $N = n + m$, and assume that $n, m \rightarrow \infty$ in such a way that $m/N \rightarrow \lambda$ for some $\lambda \in (0, 1)$. Then
\begin{align*}
\sqrt{N}\left(\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix} - \bm{\mu}\right) \xrightarrow{d} N(\bm{0}, \mathbf{\Sigma}),
\end{align*}
where $\bm{\mu}$ is given by
\begin{align*}
\bm{\mu} = \begin{pmatrix} \tilde{\mathbf{H}}_{2^d} & \mathbf{0}_{(2^d-1)\times 2^d} \\ \mathbf{0}_{(2^d-1)\times 2^d} & \tilde{\mathbf{H}}_{2^d} \end{pmatrix}\begin{pmatrix} p_1^{F:G}\\ \vdots \\ p_{2^d}^{F:G} \\[6pt] p_1^{G:F}\\ \vdots \\ p_{2^d}^{G:F}\end{pmatrix}.
\end{align*}
Defining the cross-covariance matrices
\begin{align*}
\xi_{i, j} = \text{Cov}\Bigg(&\bm{k}(\bm{X}_1, \dots,\bm{X}_{2^{d+1}-1}, \bm{Y}_1, \dots, \bm{Y}_{2^{d+1}-1}),\\
& \bm{k}(\bm{X}_1, \dots, \bm{X}_i, \bm{X}_{i+1}', \dots, \bm{X}_{2^{d+1}-1}', \bm{Y}_1, \dots, \bm{Y}_{j}, \bm{Y}_{j+1}', \dots, \bm{Y}_{2^{d+1}-1}')\Bigg),
\end{align*}
where expectations are taken under the alternative, the auto-covariance matrix $\mathbf{\Sigma}$ is given by
\begin{align*}
\mathbf{\Sigma} = (2^{d+1}-1)^2\left(\xi_{1, 0}/\lambda + \xi_{0, 1}/(1-\lambda)\right).
\end{align*}
\end{theorem}
As one consequence of the above result, given distributions $F \neq G$, it is possible to compute the limit in probability of the symmetry statistics $\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}$. This limit $\bm{\mu}$ encodes asymmetry at the population level, analogous to how $\begin{pmatrix} \bm{S}_{\bm{X}} \\ \bm{S}_{\bm{Y}} \end{pmatrix}$ encodes asymmetry between the finite samples $\bm{X}$ and $\bm{Y}$. Moreover, using this theorem, one can efficiently simulate the AUGUST statistic under the alternative, making it easy to benchmark AUGUST against a predetermined $F \neq G$ in large samples. In applications that require an \textit{a priori} power analysis, this approach can simplify the process of determining the sample size necessary for detecting a given effect.
\section{Empirical performance}
\label{sec:performance}
\subsection{Univariate performance}
\label{sec:uniper}
In this section, we compare AUGUST to a sampling of other non-parametric two-sample tests: Kolmogorov-Smirnov, Wasserstein, and the recent DTS. We also consider the energy distance test, described in Sz\'ekely and Rizzo \cite{szekely2013energy}. For these simulations, we use a sample size of $n = m = 128$, and for the AUGUST test, we set a depth of $d = 3$. Simulation results are graphed in Figure \ref{fig:compare}.
The first row of Figure \ref{fig:compare} consists of normal and Laplace location alternatives -- situations where differences in the first distributional moment are most diagnostic. Center left, we have a symmetric beta vs. asymmetric beta alternative. While this does not constitute a pure location shift, differences in the first moment are most pronounced. Center right, we include a Laplace scale family. The bottom row of Figure \ref{fig:compare} focuses on families with identical first and second moments: normal vs. mean-centered gamma on the bottom left, and normal vs. symmetric normal mixture on the bottom right.
For the location alternatives, the power of each method depends on the shape of the distribution. DTS, Wasserstein, and energy distance perform slightly better than AUGUST for normal and beta distributions, and AUGUST in turn outperforms Kolmogorov-Smirnov. In contrast, for a Laplace location shift, Kolmogorov-Smirnov outperforms every test, with AUGUST in second place and DTS last. For the Laplace scale family, Kolmogorov-Smirnov performs badly, with DTS and AUGUST leading.
On the more complicated alternatives, Wasserstein, KS, and energy distance suffer power loss relative to their earlier performance. DTS slightly outperforms AUGUST on the gamma skewness family, while AUGUST outperforms all other tests at detecting normal vs. normal mixture.
The most important lesson is this: no single test performs best in all situations. Even for simple alternatives such as location families, the precise shape of the distribution is highly influential as to the tests' relative performance. Indeed, the performance rankings of DTS, Wasserstein, energy distance, and KS in the Laplace location trials are exactly \textit{reversed} compared to the normal location trials. Notably, the AUGUST test never performs worst out of the methods examined. We theorize that this is because the symmetry statistics $\bm{S}_{\bm{X}}$ and $\bm{S}_{\bm{Y}}$ weight every coordinate equally, so AUGUST does not privilege any particular class of alternatives. In contrast, the other methods are highly sensitive to location and scale shifts, but they are less robust against more obscure alternatives. The current field of tests may also favor location and scale shifts because these are among the most intuitive families to use for benchmarking. However, this behavior is undesirable in applications that truly require a non-parametric test.
\begin{figure}
\includegraphics[scale = .54]{plotAll.png}
\caption{Comparison of power for non-parametric univariate two-sample tests. We consider the energy distance test (Etest), Kolmogorov-Smirnov (KS), Wasserstein (Wass), DTS, and AUGUST at depth $d = 3$, all with a cutoff of $\alpha = 0.05$ and sample size $n = m = 128$. No test uniformly outperforms all others, though AUGUST is robust against the range of alternatives and never performs worst. All distributions considered are straightforward except perhaps the normal mixture on the bottom right. The parameter $m$ tracks the separation between the mixed Gaussians, and as $m \rightarrow \infty$, the alternative distribution approaches a Rademacher. The AUGUST test dominates all other tests on this alternative.}
\label{fig:compare}
\end{figure}
\subsection{Multivariate performance}
In Figure \ref{fig:multicompare}, we compare the mutual Mahalanobis version of AUGUST to some other well-known non-parametric multivariate two-sample tests in a low-dimensional context ($k = 2$). In particular, we again consider the energy distance test of Sz\'ekely and Rizzo \cite{szekely2013energy}, as well as the generalized edge-count method of Chen and Friedman \cite{chen2017new}, the ball divergence test of Pan et al. \cite{pan2018ball}, and the classifier test of Lopez-Paz and Oquab \cite{lopez2016revisiting}. For the graph-based method, we use a 5-minimum spanning tree based on Euclidean interpoint distance.
We consider a variety of alternatives -- moving left to right and top to bottom:
\begin{enumerate}
\item $N_2(\bm{0}, \mathbf{I}_2)$ vs. $N_2(\text{center}\times\bm{1}, \mathbf{I}_2)$
\item $N_2(\bm{0}, \mathbf{I}_2)$ vs. $N_2(\bm{0}, \text{scale}\times \mathbf{I}_2)$
\item $N_2\left(\bm{0}, \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}\right)$ vs. $ N_2\left(\bm{0}, \begin{pmatrix} 1 & \text{cov}\\ \text{cov} & 1\end{pmatrix}\right)$
\item $N_2\left(\bm{0}, \begin{pmatrix} 1 & 0\\ 0 & 9\end{pmatrix}\right)$ vs. $\mathbf{R}_\theta N_2\left(\bm{0}, \begin{pmatrix} 1 & 0\\ 0 & 9\end{pmatrix}\right)$, where $\mathbf{R}_\theta$ is the $2\times 2$ rotation matrix through an angle $\theta$
\item $\exp\left(N_2(\bm{0}, \mathbf{I}_2)\right)$ vs. $\exp\left(N_2(\mu\times\bm{1}, \mathbf{I}_2)\right)$
\item $N_2(\bm{0}, \mathbf{I}_2)$ vs. $(Z, B)$, where $Z\sim N(0, 1)$ and $B$ independently follows the bimodal mixture distribution from Section \ref{sec:uniper}
\end{enumerate}
In Figure \ref{fig:multicompare}, we see that the energy and ball divergence tests dominate the other methods when mean shift is a factor (i.e. in the normal location and log-normal families). On a scale alternative, AUGUST has the best power, with ball divergence a close second. In contrast, for correlation, rotation, and multimodal alternatives, the edge-count test and AUGUST have superior power, with ball divergence and energy distance coming in at or near last place.
Overall, we can say that AUGUST is robust against a wide range of possible alternatives, and it has particularly high performance against a scale alternative, where it outperforms all other methods considered. We theorize that, in part, this is because some of the other methods rely so heavily on interpoint distances. The scale alternative does not result in good separation between $\bm{X}$ and $\bm{Y}$, meaning that interpoint distances are not as diagnostic as they would be in, say, a location shift.
\begin{figure}
\includegraphics[scale = .54]{multicompare.png}
\caption{Comparison of power for multivariate non-parametric two-sample tests at dimension $k = 2$ and sample size $n = m = 128$. For comparison with AUGUST, we consider the energy distance test of Sz\'ekely and Rizzo \cite{szekely2013energy}, the generalized edge-count method of Chen and Friedman \cite{chen2017new} using 5-minimum spanning trees, the ball divergence test of Pan et al. \cite{pan2018ball}, and the classifier test of Lopez-Paz and Oquab \cite{lopez2016revisiting}. All tests use a threshold of $\alpha = .05$, and the multivariate AUGUST test is performed at a depth $d = 2$. The performance of AUGUST is comparable to that of existing methods in all circumstances, and AUGUST has superior performance against scale alternatives.}
\label{fig:multicompare}
\end{figure}
\section{Studies of NBA shooting data}
\label{sec:nba}
In this section, we demonstrate the interpretability of AUGUST using play-by-play data from the 2015-2016 NBA season. Consider the distributions of throw distances and angles from the net -- are these distributions different for shots and misses? How about for the first two quarters versus the last two quarters? To address these questions, the location of each throw was recorded as a pair of $x, y$ coordinates, which we converted into a distance and an angle from the target net using knowledge of NBA court dimensions.
Four separate AUGUST tests at a depth of $d = 3$ were performed to analyze the distribution of throw distances and angles; data were split according to shots versus misses and early game versus late game. A Bonferroni correction was applied to the resulting $p$-values. At the $\alpha = .05$ level, throw distance and angle follow different distributions for shots versus misses as well as early versus late game.
To demonstrate the unique interpretability of this test, we provide AUGUST visualizations in Figure \ref{fig:bbfigs} as introduced in Section \ref{ssn:interpret}. Each histogram corresponds to one of the two samples in the test -- this sample is indicated on the $x$-axis. The yellow rectangles overlaid on these histograms illustrate the largest symmetry statistic from the corresponding test. For example, the top left plot corresponds to throw distance for shots versus misses. The histogram records the distribution of missed throw distances, and the yellow bars indicate that successful throws tend to be closer to the net. The width of each bar accounts for $1/2^d = 1/8$ of the sample plotted in the histogram.
Each plot in Figure \ref{fig:bbfigs} yields a specific interpretation as to the greatest distributional imbalance:
\begin{itemize}
\item \textit{Top left:} Compared to misses, successful throws tend to be closer to the net.
\item \textit{Top right:} Successful throws come from the side more often than misses.
\item \textit{Bottom left:} Throws early in the game are more frequently from an intermediate distance than late game throws.
\item \textit{Bottom right:} Relative to the late game, throws early in the game come more frequently from the side.
\end{itemize}
This second bullet point is perhaps most counterintuitive -- conventional wisdom would suggest that throws from in front of the net are more accurate than throws from the sides. This apparent paradox comes from the fact that throws from the sides are typically at a much closer range.
\begin{figure}
\includegraphics[scale = .6]{bbfigs.png}
\caption{Distributional differences in NBA data. Each of the four plots corresponds to a two-sample test. One of the samples from each test is plotted as a histogram -- we can refer to this sample as the reference. Yellow rectangles indicate regions where the reference sample is less prevalent than its counterpart. Since the yellow rectangles correspond to the largest computed symmetry statistic, these plots indicate the primary reason that rejection of the null occurred in each test. In the left column, a peak in shot frequency occurs immediately after the three point line at 23 feet, as intuition would suggest.}
\label{fig:bbfigs}
\end{figure}
To conclude this section, we again test for equality in distribution of NBA throw distance and angle, now using a multivariate approach. Applying the mutual Mahalanobis distance method of AUGUST with a cutoff of $\alpha = .05$ and depth $d = 2$, we find that the joint distribution of angles and distances differs across shots/misses as well as early/late game, as one would expect given the result of the univariate tests. Interpreting this conclusion is more difficult than in the univariate case due to the way that the Mahalanobis transformation flattens the data into one dimension: ``cells'' in this case correspond to nested elliptical rings centered on the sample means. Constructing an informative visualization for the multivariate setting may be an interesting problem for future work.
\section{Discussion}
Two-sample testing problems arise in a variety of application areas; often the distribution of either sample is unknown. In this paper, we introduce a non-parametric two-sample test dubbed AUGUST, which tests for differences in distribution between two samples up to a predetermined binary depth $d$. This new statistic is distribution-free in finite samples and can be computed in $O(N\log N)$ elementary operations, where $N$ is the total number of observations across both samples. We propose a multivariate extension of AUGUST, allowing for multi-dimensional tests of distributional equality. In addition, we use $U$-statistic theory to specify the asymptotic distribution of the AUGUST statistic, giving the potential for fast $p$-value calculations in a large sample setting. Via simulation studies, we compare the performance of the univariate and multivariate AUGUST tests to that of other well-known non-parametric tests on a variety of distribution families. We find the performance of AUGUST to be comparable to that of the other tests and superior in some cases, such as at detecting unimodality versus bimodality. In order to showcase the interpretability of AUGUST in a real-world setting, we apply our test to NBA shooting data.
This approach admits several directions for future work. Our statistic only uses rank information from the $\bm{X}$ and $\bm{Y}$ samples, disregarding information about distance between observations. While the AUGUST test retains good power and has the benefit of distribution independence in finite samples, it is possible that the power could be further improved by incorporating point distances in some way.
In a multivariate context, the prototype test of Section \ref{sec:multi} may serve as a useful starting point for future depth-based methods. The current asymptotic theory applies only to the univariate test, meaning that multivariate $p$-value computation requires permutation. Moreover, our present multivariate test is essentially the univariate method in disguise. A future extension of our method to the high-dimensional realm likely calls for a truly multivariate AUGUST-style test.
More broadly speaking, the already difficult problem of non-parametric two-sample testing is even harder in the context of time series. An interpretable, depth-based test may prove useful for the purposes of change point detection.
\section*{Acknowledgements}
The authors thank Hao Chen for valuable comments and suggestions.
\bibliographystyle{plain}
|
{
"timestamp": "2021-10-12T02:04:01",
"yymm": "2109",
"arxiv_id": "2109.14013",
"language": "en",
"url": "https://arxiv.org/abs/2109.14013"
}
|
\section{Introduction}
Tidal disruption events (TDEs), caused when a star is torn apart by tidal forces around a supermassive black hole \citep{Rees1988}, can generate observable flares. Such events are rare \citep[e.g.][]{Wang2004,Stone2016}, but they are unique tools for learning about the population of otherwise quiescent supermassive black holes, accretion physics, strong gravity, and more. Therefore, finding them in transient surveys is desirable. However, such surveys are already producing orders of magnitude more transient candidates than can be vetted and classified spectroscopically. Any preference of TDEs for specific host galaxy types could be used not only to learn about the dynamical processes driving TDE rates, but also to help narrow the search for such events.
\begin{table*}[t]
\centering
\caption{\label{tab:sources}Sources of the Galaxies Consolidated from \citetalias{French2018}.}
\begin{tabular}{llll}
\hline
FZ18 & Source Name & No. of & No. of Unique \\
Table No. & & Galaxies & Galaxies$^a$ \\
\hline
\hline
1 & Spectroscopically Identified QBS Galaxies from SDSS & 19,514 & 19,514\\
2 & Spectroscopically Identified PS Galaxies from SDSS & 1683 & 50$^b$ \\
\hline
5 & Pan-STARRS + WISE Photometrically Identified QBS Galaxies & 57,299 & 57,254 \\
6 & DES + WISE Photometrically Identified QBS Galaxies & 9337 & 9296 \\
7 & SDSS + WISE Photometrically Identified QBS Galaxies & 848 & 832\\
\hline
8 & Pan-STARRS + WISE Photometrically Identified PS Galaxies & 9690 & 750 \\
9 & DES + WISE Photometrically Identified PS Galaxies & 753 & 44\\
10 & SDSS + WISE Photometrically Identified PS Galaxies & 117 & 8 \\
\hline
Total & & & 87,748 \\
\hline
\end{tabular}
\tablecomments{Of these, we removed 53 objects from Tables 5--10 given SDSS DR16 spectra indicating they are QSOs or stars.\\
$^a$Number of galaxies not included in the sources from the previous rows.\\
$^b$These galaxies were omitted from the source above by mistake; here we include them also in Source 1.}
\end{table*}
\cite{Arcavi2014} discovered that optical TDEs occur preferentially in post-starburst (PS; also known as ``E+A'') galaxies. \cite{French2016} later quantified this preference, expanding the definition of the preferred hosts to include also quiescent Balmer-strong (QBS) galaxies. The reason that TDEs prefer PS and QBS galaxies is not yet fully understood (see \citealt{French2020} for a recent review); however it can still be leveraged to help identify promising TDE candidates in transient surveys.
\citet[][hereafter FZ18]{French2018} define QBSs as having a Lick H$\delta_A$ index $>1.3\,{\textrm \AA}$ in absorption and an H$\alpha$ equivalent width $<5\,{\textrm \AA}$ in emission. The PS galaxies are a subset of these, defined with H$\delta_A$ $>4\,{\textrm \AA}$ in absorption and H$\alpha$ equivalent width $<3\,{\textrm \AA}$ in emission.
Unfortunately, spectra are not available for most galaxies. Thus, \citetalias{French2018} use spectroscopically confirmed QBS and PS galaxies from the Sloan Digital Sky Survey \citep[SDSS;][]{York2000} Data Release (DR) 12 main galaxy survey \citep{Strauss2002,Alam2015} to train a machine-learning algorithm to identify QBS and PS galaxies from photometry alone. They then run this algorithm on a combination of Panoramic Survey Telescope and Rapid Response System \citep[Pan-STARRS;][]{Chambers2016} and Wide-field Infrared Survey Explorer \citep[WISE;][]{Wright2010} data, Dark Energy Survey \citep[DES;][]{Abbott2018} and WISE data, and SDSS and WISE data to identify several tens of thousands of new QBS and PS galaxy candidates.
Here, we search the Transient Name Server (TNS)\footnote{\url{http://www.wis-tns.org}} database and the Zwicky Transient Facility \citep[ZTF;][]{Bellm2014,Graham2019} public photometric data for transients coincident with the centers of galaxies in the \citetalias{French2018} catalog. Our goals are to (1) measure the relative observed fractions of different types of transients occurring in the centers of such galaxies from the TNS data, and (2) check if any unclassified transients in these galaxies could have been missed TDEs, using the ZTF photometric data.
We adopt the nine-year Wilkinson Microwave Anisotropy Probe (WMAP) cosmology \citep{Hinshaw2013} throughout.
\section{Consolidating the FZ18 Galaxy Catalog}
The \citetalias{French2018} catalog is divided into eight subcatalogs, depending on how each galaxy was selected (hereafter we refer to these subcatalogs as ``sources"). We number each source according to its table number in \citetalias{French2018} and list them here in Table \ref{tab:sources}.
We consolidate the galaxies identified by \citetalias{French2018} from all sources into one master catalog. Since there is some overlap between galaxies in different sources, we note in the last column of Table \ref{tab:sources} the number of new galaxies in each source that were not already included in the sources from the previous rows. In total, we obtain 87,748 unique galaxies\footnote{We find 50 galaxies in Source 2 (spectroscopically identified PS galaxies) that are not in Source 1 (spectroscopically identified QBS galaxies), even though Source 2 should be a subset of Source 1. Indeed, these galaxies were omitted from Source 1 in \citetalias{French2018} by mistake. Here, we include them also as members of Source 1 for the rest of the analysis.}. The redshift distribution of the spectroscopically identified galaxies (Sources 1 and 2) is plotted in the top panel of Figure \ref{fig:redshifts}.
Since the \citetalias{French2018} catalog was compiled from SDSS DR12 data, we check which galaxies in the photometrically selected catalog (i.e. sources 5-10) of \citetalias{French2018} have since been observed spectroscopically by SDSS in DR16. We find that 3309 galaxies in the photometrically selected catalog have SDSS DR16 spectra. Of these, 41 have a ``Quasi Stellar Object'' (QSO) classification, and 12 have a ``Star'' classification (the rest all have a ``Galaxy'' classification). We remove from our sample the 53 galaxies with a spectrum having either a ``QSO'' or ``Star'' classification. Our final galaxy catalog thus consists of 87,695 galaxies.
\begin{figure}
\includegraphics[width=\columnwidth]{redshifts_hist.pdf}
\caption{\label{fig:redshifts}Redshift distributions of the galaxies and classified transients coincident with their centers for the \citetalias{French2018} catalog (top; only galaxies from Sources 1 and 2 are shown) and the control catalog (bottom).}
\end{figure}
\section{The Control Galaxy Catalog}
For a control sample we compile a catalog of quiescent galaxies (which are not necessarily Balmer-strong). We select all SDSS DR16 spectroscopically observed galaxies with an H$\alpha$ equivalent width $<3\,{\textrm \AA}$ in emission, as used for the \citetalias{French2018} PS cut (i.e. Source 2). We also require (as done in \citetalias{French2018}) that the redshift of each galaxy be $>0.01$ to avoid aperture bias, that the median signal to noise ratio of the spectrum be $>10$, and that the \texttt{h\_alpha\_eqw\_err} parameter be $>-1$ (i.e. no error flags were reported in the equivalent-width measurement).
These are the same cuts used for the \citetalias{French2018} Source 2 galaxies, just without the H$\delta_A$ absorption requirement.
We find 297,284 such galaxies, which we designate as our control sample (of these, 13,213 are also in the \citetalias{French2018} catalog). Their redshift distribution is very similar to that of the spectroscopically identified \citetalias{French2018} galaxies, and is shown in the bottom panel of Figure \ref{fig:redshifts}.
\section{Searching for Transients}
\begin{table*}[t]
\caption{\label{tab:tnsfracs}Number of Transients in the TNS (and Their Reported TNS Classifications) within 1\arcsec\ of a Galaxy in the Control Sample, in the \citetalias{French2018} Catalogs, and in Each of Its Subcatalogs (or ``Sources'').}
\begin{tabular}{lllllll}
\hline
\hline
Source & Total & Not & SN Ia & TDE & AGN & Galaxy \\
& Transients & Classified & & & \\
\hline
\multicolumn{7}{c}{Control Sample} \\
\hline
Control Catalog & 726 & 577 & 136 & 6 & 1 & 6 \\
Percentage of All Transients & & 79\% & 19\% & 1\% & 0\% & 1\% \\
Percentage of Classified Transients & & & 91\% & 4\% & 1\% & 4\% \\
\hline
\multicolumn{7}{c}{FZ18 Catalog} \\
\hline
All FZ18 Galaxies & 101 & 71 & 25 & 3 & 1 & 1 \\
Percentage of All Transients & & 70\% & 25\% & 3\% & 1\% & 1\% \\
Percentage of Classified Transients & & & 83\% & 10\% & 3\% & 3\% \\
\hline
\hspace{0.2cm}1: SDSS Spec Identified QBSs & 74 & 50 & 20 & 3 & 0 & 1 \\
\hspace{0.2cm}Percentage of All Transients & & 68\% & 27\% & 4\% & 0 & 1\% \\
\hspace{0.2cm}Percentage of Classified Transients & & & 83\% & 12\% & 0 & 4\% \\
\hline
\hspace{0.2cm}2: SDSS Spec Identified PSs & 10 & 7 & 1 & 2 & 0 & 0 \\
\hspace{0.2cm}Percentage of All Transients & & 70\% & 10\% & 20\% & 0 & 0 \\
\hspace{0.2cm}Percentage of Classified Transients & & & 33\% & 67\% & 0 & 0 \\
\hline
\hspace{0.2cm}5: Pan-STARRS+WISE Phot Identified QBSs & 22 & 17 & 4 & 0 & 1 & 0 \\
\hspace{0.2cm}Percentage of All Transients & & 77\% & 18\% & 0 & 5\% & 0 \\
\hspace{0.2cm}Percentage of Classified Transients & & & 80\% & 0 & 20\% & 0 \\
\hline
\hspace{0.2cm}6: DES+WISE Phot Identified QBSs & 2 & 2 & 0 & 0 & 0 & 0 \\
\hspace{0.2cm}Percentage of All Transients & & 100\% & 0 & 0 & 0 & 0 \\
\hspace{0.2cm}Percentage of Classified Transients & & & 0 & 0 & 0 & 0 \\
\hline
\hspace{0.2cm}7: SDSS+WISE Phot Identified QBSs & 3 & 2 & 1 & 0 & 0 & 0 \\
\hspace{0.2cm}Percentage of All Transients & & 67\% & 33\% & 0 & 0 & 0 \\
\hspace{0.2cm}Percentage of Classified Transients & & & 100\% & 0 & 0 & 0 \\
\hline
\hline
\end{tabular}
\tablecomments{No events were found in sources 8,9 and 10, so they are omitted from the table.}
\end{table*}
We next perform an archival search for transients coincident to within 1\arcsec\ with the centers of galaxies in both the \citetalias{French2018} and the control catalogs. This angular-separation cut is used to account for possible inaccuracies in transient or galaxy localizations. It corresponds to $\sim$2 kiloparsecs at the mean galaxy redshift of the \citetalias{French2018} spectroscopic sample, and $\sim$1 kiloparsec at the mean transient redshift of matched events.
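The quoted physical scales can be reproduced with \texttt{astropy} under the adopted nine-year WMAP cosmology; the redshift in the following sketch is illustrative rather than the exact sample mean.
\begin{verbatim}
from astropy.cosmology import WMAP9
import astropy.units as u

# Proper scale subtended by the 1-arcsec matching radius; z is illustrative.
scale = WMAP9.kpc_proper_per_arcmin(0.1).to(u.kpc / u.arcsec)
print(scale)  # about 1.9 kpc / arcsec at z = 0.1
\end{verbatim}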
\subsection{TNS Search}
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\textwidth]{combined_pie.pdf}\includegraphics[width=0.3\textwidth]{perc_of_classified.pdf}
\caption{\label{fig:classpie}TNS classifications of transients coincident with the centers of galaxies in the control sample, the entire \citetalias{French2018} galaxy catalog, and in the spectroscopically identified QBS and PS galaxies. The share of TDEs among classified transients is larger in the \citetalias{French2018} catalog compared to the control catalog. Specifically, in spectroscopically identified PS galaxies, most classified transients turn out to be TDEs (though the number of transients there is small).}
\end{figure*}
TNS is the official International Astronomical Union (IAU) service for reporting transient events. It incorporates all events reported to the IAU through circulars before the TNS existed in its current form. Public spectroscopic classifications of transients are also reported to the TNS.
We search the complete TNS database up to 2021 August 8 for transients with positions within 1\arcsec\ of objects in the \citetalias{French2018} catalog. We find 101 such objects\footnote{Of these, 37 were identified by ZTF, 18 by ATLAS, 10 by Pan-STARRS, 3 by iPTF, 3 by Gaia, and 2 by ASAS-SN.} (Table \ref{tab:tns} in the Appendix), of which 30\% are spectroscopically classified (their redshift distribution is shown in the top panel of Figure \ref{fig:redshifts}). Of those, 83\% are Type Ia SNe\footnote{Here, we do not distinguish between the different subtypes of SNe Ia.}, and 10\% are TDEs. One event was an active galactic nucleus (AGN) flare and one is classified as ``Galaxy'' (i.e. only galaxy light was visible in the classification spectrum). This could mean that the event was not real, or that it faded before the spectrum was obtained. In the latter case, it could be a missed, rapidly evolving transient, hence we keep it in the sample.
All TDEs are found within 0\farcs5 of their host, while Type Ia SNe show a uniform host-offset distribution out to our cut of 1\arcsec. A cut of 0\farcs5 would decrease the Type Ia SN fraction to 70\% and increase the TDE fraction to 15\%. However, we are dealing with small absolute numbers. A larger sample is required to more accurately analyze class fraction trends with measured host separations. Here, we keep the cut at 1\arcsec\ to avoid biases related to position measurement accuracy.
In the control catalog of quiescent galaxies we find 726 transients coincident to within 1\arcsec\ of a galaxy. Here, only 20\% of the transients are spectroscopically classified (their redshift distribution is shown in the bottom panel of Figure \ref{fig:redshifts}). Of those, 91\% are Type Ia SNe, and only 4\% are TDEs (half of which are in galaxies included also in the \citetalias{French2018} catalog). The rest are classified as AGN or ``Galaxy''\footnote{One event, SN 2018aii, has an ambiguous classification as either a Type Ia or Type Ic SN. Given the quiescent host galaxy, it is more likely to be a Type Ia SN, and we thus include it in that count. In any case, either option has a negligible effect on our statistical results.}.
If we remove from the control sample the 13,213 quiescent Balmer-strong galaxies that are also in the \citetalias{French2018} sample, we find 669 transients, of which 19\% are spectroscopically classified. Of those, 93\% are Type Ia SNe. Because half of the TDEs in the quiescent sample are also in the quiescent Balmer-strong sample, removing them lowers the fraction of spectroscopically confirmed TDEs even further to 2\%. The rest of the classified transients are AGN or ``Galaxy'', as in the full control catalog.
Table \ref{tab:tnsfracs} lists the number of events and their classifications for the full control catalog, the entire \citetalias{French2018} catalog, and per the \citetalias{French2018} sources (no transients were reported in the centers of galaxies from Sources 8--10). Figure \ref{fig:classpie} presents the distribution of classes of transients coincident with the center of a galaxy in the control sample, the entire \citetalias{French2018} catalog, and for transients only in the spectroscopically identified PS and QBS galaxies.
\subsection{ZTF Search}
We next search the ZTF public alert stream for transient candidates with positions within 1\arcsec\ of objects in the \citetalias{French2018} galaxy catalog to check for any possible missed TDEs that were not reported to the TNS or were not classified there. We do this by using the ``E+A Galaxies'' watchlist\footnote{\url{https://lasair.roe.ac.uk/watchlist/321/}} on the Lasair Broker \citep{Smith2019}.
We find 395 ZTF events as of 2021 August 8 coincident with a galaxy in the \citetalias{French2018} catalog (Table \ref{tab:ztf})\footnote{Here we removed 7 events which are in the Lasair watchlist, but are in galaxies identified by SDSS DR16 as ``QSO'' or ``Star''.}.
Of those, 69 were reported to the TNS (and are therefore also included in Table \ref{tab:tns}), and 25 have classifications on the TNS.
We wish to check for missed TDEs among the unclassified events using their publicly available light curves. To do this, we retrieve the ZTF photometry of unclassified events with at least 20 detections, using the ALeRCE broker \citep{Forster2021} client\footnote{\url{https://alerce.readthedocs.io/en/latest/}}. We divide the light curves qualitatively into three groups: ``Gold'' -- those that are clearly transient showing a coherent rise and fall (three objects; Fig. \ref{fig:gold}), ``Silver'' -- those that are clearly variable (one object; Fig. \ref{fig:silver}), and ``Bronze'' -- those showing a rise and then remaining constant, or showing upper limits intertwined with detections, indicating they might be subtraction artifacts or flaring Galactic sources (27 objects; Fig. \ref{fig:bronze}).
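For reference, a minimal sketch of retrieving such public ZTF photometry with the ALeRCE Python client (call signatures and column names may differ between client versions; the object identifier is one of the ``Gold'' events discussed below):
\begin{verbatim}
from alerce.core import Alerce

client = Alerce()
# Query the public detections of a single ZTF object as a pandas DataFrame.
det = client.query_detections("ZTF20abxphdt", format="pandas")
print(det[["mjd", "fid", "magpsf", "sigmapsf"]].head())
\end{verbatim}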
\begin{figure}
\includegraphics[width=\columnwidth]{gold.pdf}
\caption{\label{fig:gold}ZTF light curves of our ``Gold'' set of unclassified transients coincident with the center of a galaxy in the \citetalias{French2018} catalog (triangles denote 5$\sigma$ nondetection upper limits). Each light curve is compared to those of two optical TDEs: the prototypical PS1-10jh \citep{Gezari2012} and the rapidly evolving iPTF16fnl \citep{Blagorodnova2017,Brown2018}. The comparison light curves are aligned to the top (rest-frame days from peak) and right (absolute magnitude) axes of each plot. While all events have plausible time scales and luminosities to be TDEs, all are redder than the reference TDEs.}
\end{figure}
\section{Analysis and Discussion}
\subsection{TNS Search: Distribution of Transients Classes}
We use the Clopper--Pearson method \citep{clopper1934} to estimate the confidence bounds of the observed ratios. This method uses binomial statistics to estimate lower and upper confidence bounds for ratios of rates of different event types when the numbers of observed events are small \citep[as discussed by][]{Gehrels1986}.
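For concreteness, the following Python sketch computes $1\sigma$ Clopper--Pearson bounds via the standard beta-quantile form (the function name is ours); the example reproduces the control-sample TDE fraction quoted below.
\begin{verbatim}
from scipy.stats import beta

def clopper_pearson(k, n, alpha=1 - 0.6827):    # 68.27% central interval
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# e.g. the 6 TDEs among the 149 classified control-sample transients:
print(clopper_pearson(6, 149))  # roughly (0.02, 0.06); compare 4.0% +/- 1.6%
\end{verbatim}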
Using the $1\sigma$ confidence bounds calculated with this method, we find that in the control sample of quiescent galaxies, $91.3\%\pm2.3\%$ of transients are classified as Type Ia SNe, and only $4.0\%\pm1.6\%$ are classified as TDEs. Thus, one should expect to find $22.7\pm0.1$ times more Type Ia SNe than TDEs in such galaxies. In the \citetalias{French2018} catalog, by contrast, the Type Ia SN prevalence decreases to $83.3\%\pm6.8\%$ and the TDE prevalence increases to $10.0\%\pm5.5\%$, decreasing the Type Ia SN to TDE ratio to $8.3\pm0.2$.
This ratio is roughly the same when considering only spectroscopically confirmed QBS galaxies (rather than the entire \citetalias{French2018} catalog), but drastically improves for TDEs in spectroscopically confirmed PS galaxies. There, TDEs are $66.7\%\pm27.1\%$ of classified transients (with the rest being Type Ia SNe). The observed TDE to Type Ia SN ratio in these galaxies is thus $2.0\pm0.6$ to $1$. The ratio of TDEs to Type Ia SNe for these various samples is summarized in the top panel of Figure \ref{fig:no_of_events}.
These are not intrinsic transient rates in each galaxy type, but rather observed fractions. There are likely several observational biases driving the numbers of transients of each type being discovered and classified. For example, most optical TDEs have longer rise-times and more luminous peak magnitudes compared to typical Type Ia SNe \citep[e.g.][and references therein]{Maguire2016,vanVelzen2020}, making TDEs easier to discover and observe spectroscopically compared to Type Ia SNe. Both TDEs and Type Ia SNe are more luminous than typical core collapse SNe \citep[e.g.][and references therein]{Arcavi2016,Pian2016}, suppressing the observed fraction of any possible core collapse events in these bright galaxy centers. Also, the properties of the TDEs and their hosts will affect their detectability \citep{Roth2021}.
In addition, here we combine data from various transient surveys and classification campaigns, some focused on classifying the most likely TDE candidates, some possibly looking for Type Ia SNe, and some possibly avoiding likely AGN. This introduces even more complex (and possibly competing) selection effects into the sample of transients. Therefore it is highly nontrivial, if not impossible, to translate these fractions into intrinsic rates \citep[but see][]{Roth2021}. These observed fractions do, however, reflect the current prospects of community classification results when following up discoveries in galaxy centers.
Naively, spectroscopically identified PS galaxies would thus be the best galaxies to focus a TDE search on: the observed TDE to Type Ia SN ratio there is roughly 16 times higher than for a random galaxy in the \citetalias{French2018} catalog (and about 37 times higher than for a random quiescent galaxy). Unfortunately, there are only 1683 spectroscopically confirmed PS galaxies in the \citetalias{French2018} catalog, constituting just 2\% of it (middle panel of Figure \ref{fig:no_of_events}). Hence, in absolute numbers, searching for transients in the full catalog will provide roughly 3 times more TDEs, but at the price of having to classify $\sim$8 Type Ia SNe per confirmed TDE (bottom panel of Figure \ref{fig:no_of_events}). Of course, one can also employ photometric classification criteria to newly discovered transients in order to try to reduce Type Ia SN contamination before obtaining spectra.
\subsection{ZTF Search: Light Curves of Unclassified Events}
An important parameter in trying to determine whether an unclassified transient could have been a TDE is the absolute magnitude of its light curve. We search SDSS DR16 and the 2dF Galaxy Redshift Survey \citep[2dFGRS;][]{Colless2003} for host galaxy redshifts of the ZTF photometrically selected events coincident with the center of a galaxy in the \citetalias{French2018} catalog. Our findings are presented in Table \ref{tab:ztf}. For ZTF20abxphdt, a ``Gold'' event, we obtained our own spectrum of the host galaxy and measured the redshift to be 0.0675 from narrow \ion{Ca}{2} H+K and \ion{Na}{1} D absorption features (Fig. \ref{fig:spec})\footnote{Our spectrum was obtained with the Floyds spectrograph mounted on the Las Cumbres Observatory 2-meter telescope in Haleakala, Hawaii \citep{Brown2013}, and was reduced using the \texttt{floydsspec} custom pipeline, which performs flux and wavelength calibration, cosmic-ray removal, and spectrum extraction. The pipeline is available at \url{https://github.com/svalenti/FLOYDS_pipeline/blob/master/ bin/floydsspec/}.}. For each event with a determined redshift, we include an absolute magnitude scale in its light curve in Figures \ref{fig:gold}, \ref{fig:silver} and \ref{fig:bronze}.
The ``Silver'' and ``Bronze'' light curves do not have TDE-like transient behavior \citep[per definition, these are events with no clear rise and decline as seen in optical TDEs;][]{vanVelzen2020}. To determine whether any of the ``Gold'' events might have been a missed TDE, we compare in Figure \ref{fig:gold} each of the light curves to those of the prototypical optical TDE PS1-10jh \citep{Gezari2012} and the faint rapidly evolving optical TDE iPTF16fnl \citep{Blagorodnova2017,Brown2018}, whose light curves we obtain from the Open TDE Catalog\footnote{\url{https://tde.space/}}. These two events roughly span the range of known optical TDE light-curve luminosities and time scales (see \citealt{vanVelzen2020} for a review).
While all of the ``Gold'' light curves have peak absolute luminosities and time scales in the correct range, their $g$--$r$ colors are much redder than those of TDEs. We conclude that none of these events are likely missed TDEs, but transients of some other nature.
\section{Summary and Conclusions}
\begin{figure}
\includegraphics[width=\columnwidth]{no_of_events.pdf}
\caption{\label{fig:no_of_events}Top: observed TDE to Type Ia SN ratio of transients in the centers of galaxies drawn from different galaxy catalogs analyzed here (1$\sigma$ Clopper--Pearson confidence bounds are shown but are sometimes smaller than the marker size). Middle: number of galaxies in each catalog. Bottom: total number of TDEs and Type Ia SNe expected in each galaxy catalog, normalized to the number of TDEs in spectroscopically identified PS galaxies. While the ratio of TDEs to Type Ia SNe is largest there, the small number of such galaxies in the catalog means that in absolute numbers, more TDEs can be discovered by using the entire \citetalias{French2018} catalog, but at the price of having $\sim$16 times more Type Ia SNe per TDE.}
\end{figure}
We quantify the chances that a transient discovered in the center of a galaxy from a catalog of likely TDE hosts is a TDE or a Type Ia SN (no other types of true transients were classified in these hosts) by searching the classifications of all transients discovered in the centers of these galaxies. The catalog is made up of galaxies selected in different ways, with the bulk being photometrically selected.
The catalog reduces the contamination of Type Ia SNe by a factor of roughly 2.7 compared to a control sample of quiescent galaxies. The lowest contamination of Type Ia SNe exists in the spectroscopically identified PS subcatalog, but it constitutes only 2\% of the entire catalog. By classifying transients from the entire catalog, three times more TDEs are expected to be found, but with a roughly 16 times larger Type Ia SN contamination.
We have not identified any transients coincident with the center of a galaxy in the catalog as likely missed TDEs.
~\\
We thank O. Yaron for assistance in obtaining TNS data, C. Pellegrino for reducing the ZTF20abxphdt host galaxy spectrum, and M. Nicholl for implementing the \citetalias{French2018} galaxy catalog as a watchlist on Lasair.
I.A. is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 852097), from the Israel Science Foundation (grant number 2752/19), from the United States -- Israel Binational Science Foundation (BSF), and from the Israeli Council for Higher Education Alon Fellowship.
I.N. acknowledges funding from the MIT International Science and Technology Initiatives (MISTI) Israel Program.
\section{Introduction}
\label{section:introduction}
A sequence of groups $\{G_n\}_{n=1}^{\infty}$ exhibits {\em homological stability}
if for each $k \geq 0$, the homology group
$\HH_k(G_n)$ is independent of $n$ for $n \gg k$. There is a vast
literature on this, starting with early work of Nakaoka on symmetric groups \cite{Nakaoka} and unpublished work of Quillen dealing with
$\GL_n(\Field_q)$. Many sequences of groups exhibit
homological stability: general linear groups over many rings \cite{VanDerKallen}, mapping
class groups \cite{HarerStability}, automorphism groups of free groups \cite{HatcherVogtmannCerf}, etc. See \cite{RandalWilliamsWahl} for a general
framework that encompasses many of these results, as well as a survey of the literature.
\subsection{Twisted coefficients}
Dwyer \cite{DwyerTwisted} showed how to extend this to homology with certain
kinds of twisted coefficients. For instance, it follows from his work that
if $\bbk$ is a field, then for all $m \geq 1$ and $k \geq 0$ we have
\[\HH_k(\GL_{n}(\bbk);(\bbk^{n})^{\otimes m}) \cong \HH_k(\GL_{n+1}(\bbk);(\bbk^{n+1})^{\otimes m}) \quad \text{for $n \gg 0$}.\]
This has been generalized to many different groups and coefficient systems. The
most general result we are aware of is in \cite{RandalWilliamsWahl}, which
shows how to prove such a result for ``polynomial coefficient systems'' on
many classes of groups.
\subsection{New approach}
In this paper, we give an alternate approach to twisted homological stability
for polynomial coefficient systems.
A small advantage of our approach is that it sometimes gives better stability
ranges. More importantly, our approach is more flexible than Dwyer's, which
only works well in settings where the traditional proof of homological
stability applies in its simplest form.
For constant coefficients, the
standard proof of homological stability can give useful information even
in settings where it does not apply directly. Examples of this from
the author's work include the proofs of \cite[Theorem 1.1]{AshPutmanSam}
and \cite[Theorem A]{PutmanStudenmund}. Our approach to twisted coefficients
is similarly flexible. We will illustrate this
by proving Theorem \ref{maintheorem:congruence} below, which shows
how the homology of the general linear group changes when you pass
to a finite-index congruence subgroup. With twisted coefficients, this
seems hard to prove using the traditional approach.
There is a tension between on the one hand developing abstract frameworks like the one in
\cite{RandalWilliamsWahl} that systematize the homological stability machine, and on the
other hand giving flexible tools that can be adapted to nonstandard situations,
including ones that are not necessarily about homological stability per se.
Both of these goals are important. In writing this paper, we had the latter goal in mind, so our focus
will be on basic examples rather than a general abstract result.
\begin{remark}
In forthcoming work \cite{PutmanStableLevel}, we will use our tools to study the stable
cohomology of the moduli space of curves with level structures.
\end{remark}
\begin{remark}
\label{remark:mpp}
Miller--Patzt--Petersen \cite{MillerPatztPetersen} have recently developed
an approach to twisted homological stability that shares some ideas with our paper,
though the technical details and intended applications are different.
Unlike us, they prove a general theorem in the spirit of \cite{RandalWilliamsWahl}.
Our work was done independently of theirs.\footnote{See \cite[Remark 1.2]{MillerPatztPetersen}. We
apologize for taking so long to write our approach up.}
\end{remark}
\subsection{FI-modules}
The easiest example to which our results apply is the symmetric group.
For these groups we will encode our twisted coefficient systems using Church--Ellenberg--Farb's
theory of $\FI$-modules \cite{ChurchEllenbergFarbFI}. Let $\FI$ be the category whose
objects are finite sets and whose morphisms are injections. For a commutative ring $\bbk$, an {\em $\FI$-module}
over $\bbk$ is a functor $M$ from $\FI$ to the category of $\bbk$-modules. For $n \geq 0$,
let\footnote{It is more common to write $[n]$ for $\{1,\ldots,n\}$, but later on when discussing
semisimplicial sets we will need to use the notation $[n]$ for $\{0,\ldots,n\}$.}
$\overline{n} = \{1,\ldots,n\}$. Every object of $\FI$ is isomorphic to $\overline{n}$ for some $n \geq 0$, so
the data of an $\FI$-module $M$ consists of:
\begin{itemize}
\item a $\bbk$-module $M(\overline{n})$ for each $n \geq 0$, and
\item for each injection $f\colon \overline{n} \rightarrow \overline{m}$, an induced $\bbk$-module homomorphism
$f_{\ast}\colon M(\overline{n}) \rightarrow M(\overline{m})$.
\end{itemize}
In particular, the inclusions $\overline{n} \rightarrow \overline{n+1}$ induce a sequence of morphisms
\begin{equation}
\label{eqn:increasingfi}
M(\overline{0}) \rightarrow M(\overline{1}) \rightarrow M(\overline{2}) \rightarrow \cdots.
\end{equation}
The group of $\FI$-automorphisms of $\overline{n}$ is the symmetric group $\fS_n$. This acts
on $M(\overline{n})$, making $M(\overline{n})$ into a $\bbk[\fS_n]$-module. More generally,
for a finite set $S$ the group $\fS_S$ of bijections of $S$ acts on $M(S)$, making $M(S)$
into a $\bbk[\fS_S]$-module.
\begin{example}
\label{example:easyfi}
The group $\fS_n$ acts on $\bbk^n$ for each $n \geq 0$. We can fit the increasing sequence
\[\bbk^0 \rightarrow \bbk^1 \rightarrow \bbk^2 \rightarrow \cdots\]
of $\fS_n$-representations into an $\FI$-module $M$ over $\bbk$ by defining
\[M(S) = \bbk^S \quad \text{for a finite set $S$}.\]
Here the notation $\bbk^S$ means the free $\bbk$-module with basis $S$, and an
injective map $f\colon S \rightarrow T$ between finite sets induces a map
$f_{\ast}\colon M(S) \rightarrow M(T)$ taking basis elements to basis elements. As
a $\bbk[\fS_n]$-module, we have $M(\overline{n}) = \bbk^{\overline{n}} \cong \bbk^n$.
\end{example}
\subsection{Polynomial FI-modules}
For an $\FI$-module $M$ over $\bbk$, the inclusions \eqref{eqn:increasingfi} induce maps between homology groups
for each $k$:
\[\HH_k(\fS_0;M(\overline{0})) \rightarrow \HH_k(\fS_1;M(\overline{1})) \rightarrow \HH_k(\fS_2;M(\overline{2})) \rightarrow \cdots.\]
We would like to give conditions under which these stabilize. We start with
the following. Fix a functorial coproduct $\sqcup$ on $\FI$, which
we think of as ``disjoint union''. Letting $\ast$ be a formal symbol,
for a finite set $S$ we then have the finite set $S \sqcup \{\ast\}$
of cardinality $|S|+1$.
\begin{definition}
\label{definition:derivedfi}
Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$.
\begin{itemize}
\item The {\em shifted} $\FI$-module of $M$, denoted $\Sigma M$, is the $\FI$-module over $\bbk$ defined via the formula
$\Sigma M(S) = M(S \sqcup \{\ast\})$ for a finite set $S$.
\item The {\em derived} $\FI$-module
of $M$, denoted $DM$, is the $\FI$-module over $\bbk$ defined via the formula
\[DM(S) = \frac{M(S \sqcup \{\ast\})}{\Image(M(S) \rightarrow M(S \sqcup \{\ast\}))} \quad \text{for a finite set $S$}.\qedhere\]
\end{itemize}
\end{definition}
\begin{remark}
Morphisms between $\FI$-modules over $\bbk$ are natural transformations between functors. This makes the collection
of $\FI$-modules over $\bbk$ into an abelian category, where kernels and cokernels are computed pointwise. With
these conventions, there is a morphism $M \rightarrow \Sigma M$, and $DM = \coker(M \rightarrow \Sigma M)$.
\end{remark}
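To see these operations in action, here is a direct computation from the definitions for the $\FI$-module $M(S) = \bbk^S$ of Example \ref{example:easyfi}:
\[\Sigma M(S) = \bbk^{S \sqcup \{\ast\}} \cong \bbk^S \oplus \bbk \quad \text{and} \quad DM(S) = \bbk^{S \sqcup \{\ast\}}/\bbk^S \cong \bbk \quad \text{for a finite set $S$}.\]
In particular, $DM$ takes every finite set to $\bbk$ and every injection to the identity map. This computation is the key point behind the fact, recorded below, that $M$ is polynomial of degree $1$ starting at $0$.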
The idea behind
the following definition goes back to Dwyer's work \cite{DwyerTwisted}, and has
been elaborated upon by many people; see \cite{RandalWilliamsWahl} for a
more complete history.
\begin{definition}
\label{definition:polyfi}
Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$. We say that
$M$ is {\em polynomial} of degree $d \geq -1$ starting at $m \in \Z$ if it satisfies
the following inductive condition:
\begin{itemize}
\item If $d = -1$, then for finite sets $S$ with $|S| \geq m$ we require $M(S) = 0$.
\item If $d \geq 0$, then we require the following two conditions:
\begin{itemize}
\item For all injective maps $f\colon S \rightarrow T$ between finite sets with $|S| \geq m$,
the induced map $f_{\ast}\colon M(S) \rightarrow M(T)$ must be an injection.
\item The derived $\FI$-module $DM$ must be polynomial of degree $(d-1)$ starting at $(m-1)$.\qedhere
\end{itemize}
\end{itemize}
\end{definition}
\begin{remark}
\label{remark:exactsequence}
If the $\FI$-module $M$ over $\bbk$ is polynomial of degree $d$ starting at $m \in \Z$, then for all finite
sets $S$ with $|S| \geq m$ we have a short exact sequence
\begin{equation}
\label{eqn:polyfiseq}
0 \longrightarrow M(S) \longrightarrow \Sigma M(S) \longrightarrow DM(S) \longrightarrow 0
\end{equation}
of $\bbk[\fS_S]$-modules.
\end{remark}
\begin{remark}
\label{remark:split}
There is also a notion of a polynomial $\FI$-module being {\em split}, which can slightly
improve the bounds on where stability begins. See \cite{RandalWilliamsWahl} for the definition.
To avoid complicating our arguments, we will not incorporate this into our results.
\end{remark}
\begin{example}
\label{example:deg0fi}
An $\FI$-module $M$ over $\bbk$ is polynomial of degree $0$ starting at $m \in \Z$ if
it satisfies the following two conditions:
\begin{itemize}
\item For all injective maps $f\colon S \rightarrow T$ between finite sets with $|S| \geq m$,
the induced map $f_{\ast}\colon M(S) \rightarrow M(T)$ must be an injection.
\item For all finite sets $S$ with $|S| \geq m-1$, we must have
\[DM(S) = \frac{M(S \sqcup \{\ast\})}{\Image(M(S) \rightarrow M(S \sqcup \{\ast\}))} = 0.\]
In other words, the map $M(S) \rightarrow M(S \sqcup \{\ast\})$ must be surjective. This
implies more generally that for all injective maps $f\colon S \rightarrow T$ between finite sets with
$|S| \geq m-1$, the induced map $f_{\ast}\colon M(S) \rightarrow M(T)$ must be a surjection.
\end{itemize}
Combining these two facts, we see that $M$ is polynomial of degree $0$ starting at $m \in \Z$ if
for all injective maps $f\colon S \rightarrow T$ between finite sets, the map $f_{\ast}\colon M(S) \rightarrow M(T)$
is an isomorphism if $|S| \geq m$ and a surjection if $|S| = m-1$.
\end{example}
\begin{example}
The $\FI$-module $M$ in Example \ref{example:easyfi} is polynomial of degree $1$ starting
at $0$. More generally, for $d \geq 1$ we can define an $\FI$-module $M^{\otimes d}$ via the formula
\[M^{\otimes d}(S) = \left(\bbk^S\right)^{\otimes d} \quad \text{for a finite set $S$}.\]
The $\FI$-module $M^{\otimes d}$ is polynomial of degree $d$ starting at $0$.
\end{example}
\begin{example}
There is a natural notion of an FI-module being generated and related in finite degree (see
\cite{ChurchEllenbergFarbFI} for the definition).
As was observed in \cite[Example 4.18]{RandalWilliamsWahl}, it follows from work of
Church--Ellenberg \cite{ChurchEllenbergHomology} that if $M$ is generated in degree $d$
and related in degree $r$, then $M$ is polynomial of degree $d$ starting at
$r+\min(d,r)$. We remark that though $r+\min(d,r)$ is sharp in general, in practice
many such $\FI$-modules are polynomial of degree $d$ starting far earlier than $r+\min(d,r)$, often at $0$.
\end{example}
\subsection{Symmetric groups}
Our main theorem about the symmetric group is as follows:
\begin{maintheorem}
\label{maintheorem:sn}
Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$ that is polynomial
of degree $d$ starting at $m \geq 0$. For each $k \geq 0$, the map
\[\HH_k(\fS_{n};M(\overline{n})) \rightarrow \HH_k(\fS_{n+1};M(\overline{n+1}))\]
is an isomorphism for $n \geq 2k+\max(d,m-1)+2$ and a surjection for $n=2k+\max(d,m-1)+1$.
\end{maintheorem}
In the key case $m=0$, the stabilization map is thus an isomorphism for $n \geq 2k+d+2$ and
a surjection for $n=2k+d+1$. Theorem \ref{maintheorem:sn} is the natural output of the machine we develop, but
for $m \gg 0$ the following result sometimes gives better bounds (though often worse ones):
\begin{maintheoremprime}
\label{maintheorem:snprime}
Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$ that is polynomial
of degree $d$ starting at $m \geq 0$. For each $k \geq 0$, the map
\begin{equation}
\label{eqn:snstabmap}
\HH_k(\fS_{n};M(\overline{n})) \rightarrow \HH_k(\fS_{n+1};M(\overline{n+1}))
\end{equation}
is an isomorphism for $n \geq \max(m,2k+2d+2)$ and a surjection for $n \geq \max(m,2k+2d)$.
\end{maintheoremprime}
\begin{remark}
Theorem \ref{maintheorem:snprime} will be derived from Theorem \ref{maintheorem:sn} using
\eqref{eqn:polyfiseq}.\footnote{Plus the slightly better stability range for constant coefficients
originally proved by Nakaoka \cite{Nakaoka}.}
\end{remark}
The first twisted homological stability result for the symmetric group was proved by
Betley \cite{BetleySymmetric}, who only considered split coefficient systems starting at $m=0$ (c.f.\ Remark \ref{remark:split})
but proved that \eqref{eqn:snstabmap} is an isomorphism for $n \geq 2k+d$.
Randal-Williams--Wahl \cite[Theorem 5.1]{RandalWilliamsWahl}
showed how to deal with general polynomial coefficients, but in this level of generality could only prove that \eqref{eqn:snstabmap}
is an isomorphism for $n \geq \max(2m+1,2k+2d+2)$, though in the split case they could reduce this to
$n \geq \max(m+1,2k+d+2)$.
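To make these bounds concrete, apply Theorem \ref{maintheorem:sn} to the $\FI$-module $M(S) = \bbk^S$ of Example \ref{example:easyfi}, which is polynomial of degree $1$ starting at $0$: for each $k \geq 0$, the map
\[\HH_k(\fS_n;\bbk^n) \rightarrow \HH_k(\fS_{n+1};\bbk^{n+1})\]
is an isomorphism for $n \geq 2k+3$ and a surjection for $n = 2k+2$.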
\subsection{VIC-modules}
We now turn to general linear groups. Let $R$ be a ring, possibly noncommutative. To encode our
coefficient systems on $\GL_n(R)$, we will use $\VIC(R)$-modules, which were introduced by the author
and Sam \cite{PutmanSam, PutmanSamNonCom}. Define $\VIC(R)$ to be the following category:
\begin{itemize}
\item The objects of $\VIC(R)$ are finite-rank free right $R$-modules.\footnote{We use right $R$-modules since this
ensures that $\GL_n(R)$ acts on $R^n$ on the left.} For a finite-rank free right $R$-module $A$, we
will sometimes write $[A]$ for the associated object of $\VIC(R)$ to clarify our statements.
\item For finite-rank free right $R$-modules $A_1$ and $A_2$, a $\VIC(R)$-morphism $[A_1] \rightarrow [A_2]$
is a pair $(f,C)$, where $f\colon A_1 \rightarrow A_2$ is an injection and $C \subset A_2$ is a
submodule\footnote{It would also be reasonable to require $C$ to be free with $\rk(C) = \rk(A_2) - \rk(A_1)$. We will not
make this restriction. For the rings we consider, this frequently holds automatically, at least when $\rk(A_2) - \rk(A_1)$ is not too large (see Lemma \ref{lemma:stabledecomp}).} such that $A_2 = f(A_1) \oplus C$. The composition of $\VIC(R)$-morphisms $(f_1,C_1)\colon [A_1] \rightarrow [A_2]$
and $(f_2,C_2)\colon [A_2] \rightarrow [A_3]$ is the $\VIC(R)$-morphism $(f_2 \circ f_1, C_2 \oplus f_2(C_1))\colon [A_1] \rightarrow [A_3]$.
\end{itemize}
For a commutative ring $\bbk$, a $\VIC(R)$-module over $\bbk$ is a functor $M$ from $\VIC(R)$ to the category
of $\bbk$-modules. Every object of $\VIC(R)$ is isomorphic to $R^n$ for some $n \geq 0$, so the
data of a $\VIC(R)$-module $M$ consists of:
\begin{itemize}
\item a $\bbk$-module $M(R^n)$ for each $n \geq 0$, and
\item for each $\VIC(R)$-morphism $(f,C)\colon [R^n] \rightarrow [R^m]$, an induced $\bbk$-module homomorphism
$(f,C)_{\ast}\colon M(R^n) \rightarrow M(R^m)$.
\end{itemize}
For each $n$, let $\iota_n\colon R^n \rightarrow R^{n+1}$ be the inclusion into the first $n$ coordinates.
We then have a $\VIC(R)$-morphism $(\iota_n,0 \oplus R)\colon [R^n] \rightarrow [R^{n+1}]$. These
induce a sequence of morphisms
\begin{equation}
\label{eqn:increasingvic}
M(R^0) \rightarrow M(R^1) \rightarrow M(R^2) \rightarrow \cdots.
\end{equation}
The group of $\VIC(R)$-automorphisms of $[R^n]$ is $\GL_n(R)$. This acts
on $M(R^n)$, making $M(R^n)$ into a $\bbk[\GL_n(R)]$-module. More generally, for a finite-rank free
right $R$-module $A$ the group $\GL(A)$ acts on $M(A)$.
\begin{example}
\label{example:easyvic1}
We can fit the increasing sequence
\[\Z^0 \rightarrow \Z^1 \rightarrow \Z^2 \rightarrow \cdots\]
of $\GL_n(\Z)$-representations into a $\VIC(\Z)$-module $M$ over $\Z$ by defining
\[M(A) = A \quad \text{for a finite-rank free $\Z$-module $A$}.\]
For a $\VIC(\Z)$-morphism $(f,C)\colon [A_1] \rightarrow [A_2]$, the associated
map $(f,C)_{\ast}\colon M(A_1) \rightarrow M(A_2)$ is simply $f$.
\end{example}
\begin{example}
\label{example:easyvic2}
Example \ref{example:easyvic1} can be generalized as follows. Let $R$ be a ring, let $\bbk$ be a commutative ring,
let $V$ be a $\bbk$-module, and let $\lambda\colon R \rightarrow \End_{\bbk}(V)$ be a ring homomorphism.
Example \ref{example:easyvic1} will correspond to $R = \bbk = V = \Z$ and
\[\lambda(r)(v) = r v \quad \text{for $r \in R = \Z$ and $v \in V = \Z$}.\]
Another example would be $R = \bbk[G]$ for a group $G$ and $V$ a representation of $G$ over $\bbk$.
For each $n \geq 0$, the ring homomorphism $\lambda$ induces a group homomorphism
\[\GL_n(R) \rightarrow \GL_n(\End_{\bbk}(V)),\]
endowing the $\bbk$-module $V^{\oplus n}$ with the structure of a $\bbk[\GL_n(R)]$-module.
We can fit the increasing sequence
\[V^{\oplus 0} \rightarrow V^{\oplus 1} \rightarrow V^{\oplus 2} \rightarrow \cdots\]
of $\GL_n(R)$-representations into a $\VIC(R)$-module $M$ by defining
\[M(A) = A \otimes_R V \quad \text{for a finite-rank free $R$-module $A$}.\]
Here we use $\lambda$ to regard $V$ as a left $R$-module. For a $\VIC(R)$-morphism
$(f,C)\colon [A_1] \rightarrow [A_2]$, the induced map $(f,C)_{\ast}\colon M(A_1) \rightarrow M(A_2)$
is $f \otimes \text{id}$. As a $\bbk[\GL_n(R)]$-module, we have
\[M(R^n) = R^n \otimes_R V = V^{\oplus n}.\qedhere\]
\end{example}
\begin{example}
\label{example:easyvic3}
In Examples \ref{example:easyvic1} and \ref{example:easyvic2}, the $C$ in a $\VIC(R)$-morphism played no role. For
an easy example of a $\VIC(R)$-module where it is important, consider the dual $M^{\ast}$ of the $\VIC(\Z)$-module
$M$ over $\Z$ from Example \ref{example:easyvic1}:
\[M^{\ast}(A) = \Hom(A,\Z) \quad \text{for a finite-rank free $\Z$-module $A$}.\]
For a $\VIC(\Z)$-morphism $(f,C)\colon [A_1] \rightarrow [A_2]$, the associated
map $(f,C)_{\ast}\colon M^{\ast}(A_1) \rightarrow M^{\ast}(A_2)$ takes $\phi \in \Hom(A_1,\Z)$ to the composition
\[A_2 \stackrel{/C}{\longrightarrow} f(A_1) \stackrel{f^{-1}}{\longrightarrow} A_1 \stackrel{\phi}{\longrightarrow} \Z,\]
where the first map is the projection $A_2 = f(A_1) \oplus C \rightarrow f(A_1)$.
\end{example}
\subsection{Polynomial VIC-modules}
For a $\VIC(R)$-module $M$ over $\bbk$, the inclusions \eqref{eqn:increasingvic} induce maps between homology groups
for each $k$:
\[\HH_k(\GL_0(R);M(R^0)) \rightarrow \HH_k(\GL_1(R);M(R^1)) \rightarrow \HH_k(\GL_2(R);M(R^2)) \rightarrow \cdots.\]
Just like for $\FI$-modules, to make this stabilize we will need to impose a polynomiality condition.\footnote{For finite rings $R$,
the author and Sam proved in \cite{PutmanSam, PutmanSamNonCom} a homological stability result which replaces
the polynomiality condition by a much weaker ``finite generation'' condition. That proof is very different from the one
we will give, and cannot possibly work for infinite rings like $R = \Z$.} The definitions are similar to those for $\FI$-modules:
\begin{definition}
\label{definition:derivedvic}
Let $R$ be a ring, $\bbk$ be a commutative ring, and $M$ be a $\VIC(R)$-module over $\bbk$.
\begin{itemize}
\item The {\em shifted} $\VIC(R)$-module of $M$, denoted $\Sigma M$, is the $\VIC(R)$-module over $\bbk$ defined via the formula
$\Sigma M(A) = M(A \oplus R^1)$ for a finite-rank free right $R$-module $A$.
\item The {\em derived} $\VIC(R)$-module
of $M$, denoted $DM$, is the $\VIC(R)$-module over $\bbk$ defined via the formula
\[DM(A) = \frac{M(A \oplus R^1)}{\Image(M(A) \rightarrow M(A \oplus R^1))} \quad \text{for a finite-rank free right $R$-module $A$}.\qedhere\]
\end{itemize}
\end{definition}
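For orientation, here is the analogous computation for the $\VIC(R)$-module $M(A) = A \otimes_R V$ of Example \ref{example:easyvic2}, which one can check directly from the definitions:
\[\Sigma M(A) = (A \oplus R^1) \otimes_R V \cong M(A) \oplus V \quad \text{and} \quad DM(A) \cong V \quad \text{for a finite-rank free right $R$-module $A$},\]
with every $\VIC(R)$-morphism inducing the identity map on $DM$, so $DM$ is the ``constant'' $\VIC(R)$-module with value $V$.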
\begin{definition}
\label{definition:polyvic}
Let $R$ be a ring, $\bbk$ be a commutative ring, and $M$ be a $\VIC(R)$-module over $\bbk$.
We say that $M$ is {\em polynomial} of degree $d \geq -1$ starting at $m \in \Z$ if it satisfies
the following inductive condition:
\begin{itemize}
\item If $d = -1$, then for all finite-rank free right $R$-modules $A$ with $\rk(A) \geq m$ we require $M(A) = 0$.
\item If $d \geq 0$, then we require the following two conditions:
\begin{itemize}
\item For all $\VIC(R)$-morphisms $(f,C)\colon [A_1] \rightarrow [A_2]$ with $\rk(A_1) \geq m$,
the induced map $(f,C)_{\ast}\colon M(A_1) \rightarrow M(A_2)$ must be an injection.
\item The derived $\VIC(R)$-module $DM$ must be polynomial of degree $(d-1)$ starting at $(m-1)$.\qedhere
\end{itemize}
\end{itemize}
\end{definition}
\begin{example}
The $\VIC(R)$-modules in Examples \ref{example:easyvic1}, \ref{example:easyvic2}, and \ref{example:easyvic3} are all
polynomial of degree $1$ starting at $0$. Letting $M$ be one of these, for $d \geq 0$ we can define another
$\VIC(R)$-module $M^{\otimes d}$ via the formula
\[M^{\otimes d}(A) = \left(M\left(A\right)\right)^{\otimes d} \quad \text{for a finite-rank free $R$-module $A$}.\]
This is easily seen to be polynomial of degree $d$ starting at $0$.
\end{example}
\subsection{General linear groups}
We now turn to our stability theorem, which will concern the homology of $\GL_n(R)$.
We will need to impose a ``stable rank'' condition on $R$ called $(\SR_r)$ that was introduced
by Bass \cite{BassKTheory}. See \S \ref{section:stablerank} below for the definition and a
survey. Here we will simply say this condition is satisfied by
many rings; in particular, fields satisfy $(\SR_2)$ and PIDs satisfy $(\SR_3)$. More generally
a ring $R$ that is finitely generated as a module over a Noetherian commutative
ring of Krull dimension $r$ satisfies $(\SR_{r+2})$. Thus for instance if $\bbk$ is
a field and $G$ is a finite group, then the group ring $\bbk[G]$ satisfies $(\SR_2)$.
\begin{maintheorem}
\label{maintheorem:gl}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a commutative ring, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
For each $k \geq 0$, the map
\[\HH_k(\GL_n(R);M(R^n)) \rightarrow \HH_k(\GL_{n+1}(R);M(R^{n+1}))\]
is an isomorphism for $n \geq 2k+\max(2d+r,m+1)$ and a surjection for $n = 2k+\max(2d+r-1,m)$.
\end{maintheorem}
In the key case $m=0$, the stabilization map is thus an isomorphism for $n \geq 2k+2d+r$
and a surjection for $n = 2k+2d+r-1$. Just like for the symmetric group, we will derive from
Theorem \ref{maintheorem:gl} the following variant result which for $m \gg 0$ sometimes gives better bounds:
\begin{maintheoremprime}
\label{maintheorem:glprime}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a commutative ring, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
For each $k \geq 0$, the map
\begin{equation}
\label{eqn:glstabmap}
\HH_k(\GL_n(R);M(R^n)) \rightarrow \HH_k(\GL_{n+1}(R);M(R^{n+1}))
\end{equation}
is an isomorphism for $n \geq \max(m,2k+2d+r+1)$ and a surjection for $n \geq \max(m,2k+2d+r-1)$.
\end{maintheoremprime}
Dwyer \cite{DwyerTwisted} proved a version of this for $R$ a PID,
though he only worked with split coefficient systems starting at $m=0$ (c.f.\ Remark \ref{remark:split}). He
did not identify a stable range. Later van der Kallen \cite[Theorem 5.6]{VanDerKallen} extended this
to rings satisfying $(\SR_r)$, though again he only worked with split coefficient systems
starting at $m=0$. He proved that \eqref{eqn:glstabmap} is an isomorphism for $n \geq 2k+d+r$.
Randal-Williams--Wahl \cite[Theorem 5.11]{RandalWilliamsWahl} then showed how to deal with
general polynomial modules, though they only stated a result for ones that started at $m=0$.
They proved that \eqref{eqn:glstabmap} is an isomorphism for $n \geq 2k+2d+r+1$.
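For a concrete instance, combine Theorem \ref{maintheorem:gl} with Example \ref{example:easyvic1}: the ring $\Z$ is a PID and hence satisfies $(\SR_3)$, and $M(A) = A$ is polynomial of degree $1$ starting at $0$, so the map
\[\HH_k(\GL_n(\Z);\Z^n) \rightarrow \HH_k(\GL_{n+1}(\Z);\Z^{n+1})\]
is an isomorphism for $n \geq 2k+5$ and a surjection for $n = 2k+4$.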
\begin{remark}
The result we will prove is more general than Theorems \ref{maintheorem:gl} and \ref{maintheorem:glprime} and
applies to certain subgroups of $\GL_n(R)$ as well.\footnote{The work of van der Kallen also applies to these
subgroups.} For instance, if $R$ is commutative then it applies
to $\SL_n(R)$. See \S \ref{section:ktheory} for the definition of the groups we will consider
and Theorems \ref{theorem:xl} and \ref{theorem:xlprime} for the statement of our theorem.
\end{remark}
\subsection{Congruence subgroups}
Our final theorem illustrates how our machinery can be applied to prove a theorem that is not
(directly) about homological stability. Borel \cite{BorelStability1, BorelStability2} proved
that if $\Gamma$ is a lattice in $\SL_n(\R)$ and $V$ is a rational representation of the algebraic
group $\SL_n$, then for $n \gg k$ the homology group $\HH_k(\Gamma;V)$ depends only on $n$ and $V$, not
on the lattice $\Gamma$. In particular, it is unchanged if you pass from $\Gamma$ to a finite-index
subgroup.
We will prove a version of this for $\GL_n(R)$ for rings $R$ satisfying $(\SR_r)$. The basic
flavor of the result will be that passing from $\GL_n(R)$ to appropriate finite-index subgroups
does not change rational homology, at least in some stable range. The finite-index subgroups
we will consider are the finite-index congruence subgroups, which are defined as follows:
\begin{definition}
Let $R$ be a ring and let $\alpha$ be a $2$-sided ideal of $R$. The {\em level-$\alpha$ congruence
subgroup}\footnote{In many contexts it is common to call these {\em principal congruence subgroups}, and
to define a congruence subgroup as a subgroup that contains a principal congruence subgroup. For general
rings $R$, this more general notion
of ``congruence subgroups'' can have some pathological properties.
For number rings $R$, all nontrivial ideals $\alpha$
of $R$ satisfy $|R/\alpha|<\infty$, so for number rings all principal congruence subgroups are finite-index.
This is false for general rings. Even worse, as Tom Church pointed out to me there might exist finite-index subgroups
$\Gamma < \GL_n(R)$ that contain principal congruence subgroups, but do not contain
finite-index principal congruence subgroups.}
of $\GL_n(R)$, denoted $\GL_n(R,\alpha)$, is the kernel of the natural group
homomorphism $\GL_n(R) \rightarrow \GL_n(R/\alpha)$.
\end{definition}
Our theorem is as follows:
\begin{maintheorem}
\label{maintheorem:congruence}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a field of characteristic $0$, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
Assume furthermore that $M(R^n)$ is a finite-dimensional vector space over $\bbk$ for all $n \geq 0$.
Then for all $2$-sided ideals $\alpha$ of $R$ such that $R/\alpha$ is finite, the map
\[\HH_k(\GL_n(R,\alpha);M(R^n)) \rightarrow \HH_k(\GL_n(R);M(R^n))\]
is an isomorphism for $n \geq \max(m,2k+2d+2r)$.
\end{maintheorem}
We emphasize that Theorem \ref{maintheorem:congruence} is {\em not} a homological stability theorem:\footnote{Though in
light of Theorem \ref{maintheorem:gl} it implies that $\HH_k(\GL_n(R,\alpha);M(R^n))$ stabilizes.}
rather than increasing $\GL_n(R)$ to $\GL_{n+1}(R)$, we are decreasing $\GL_n(R)$ by passing
to the finite-index subgroup $\GL_n(R,\alpha)$. For untwisted rational coefficients,
something like Theorem \ref{maintheorem:congruence} is implicit in work of Charney \cite{CharneyCongruence}.
Though she did not state this, it can easily be derived from her work that for $R$ and $\bbk$ and
$\alpha$ as in Theorem \ref{maintheorem:congruence}, the map
\[\HH_k(\GL_n(R,\alpha);\bbk) \rightarrow \HH_k(\GL_n(R);\bbk)\]
is an isomorphism for $n \geq 2k+2r+4$. Later Cohen \cite{CohenCongruence} proved an
analogous result for the symplectic group, and more importantly for us showed how
to simplify Charney's argument. Our proof of Theorem \ref{maintheorem:congruence} follows
the outline of Cohen's proof, but using our new approach to twisted homological stability.
It seems very hard to do this using the traditional proof of twisted homological stability.
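To extract a concrete statement from Theorem \ref{maintheorem:congruence}, take $M$ to be the constant $\VIC(R)$-module with $M(A) = \bbk$ for all $A$; this is polynomial of degree $0$ starting at $0$, and each $M(R^n)$ is one-dimensional. For $R$, $\bbk$, and $\alpha$ as in the theorem, it follows that the map
\[\HH_k(\GL_n(R,\alpha);\bbk) \rightarrow \HH_k(\GL_n(R);\bbk)\]
is an isomorphism for $n \geq 2k+2r$, improving the range $n \geq 2k+2r+4$ derived from Charney's work above.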
\begin{remark}
The fact that the field $\bbk$ in Theorem \ref{maintheorem:congruence} has characteristic $0$ is essential.
This theorem is false over fields of finite characteristic or over more general rings like $\Z$.
\end{remark}
\begin{remark}
Just like for Theorems \ref{maintheorem:gl} and \ref{maintheorem:glprime}, we will actually
prove something more general than Theorem \ref{maintheorem:congruence} that will apply
to congruence subgroups of certain subgroups of $\GL_n(R)$, e.g., to $\SL_n(R)$ if $R$ is
commutative.
\end{remark}
\begin{remark}
The proof of Theorem \ref{maintheorem:congruence} also requires some recent
work of Harman \cite{HarmanVIC} classifying certain kinds of finitely generated $\VIC(\Z)$-modules.
\end{remark}
\begin{remark}
If the ring $R$ in Theorem \ref{maintheorem:congruence} is a {\em finite} ring, then we can take $\alpha = 0$: the quotient $R/\alpha = R$ is finite, and $\GL_n(R,\alpha)$ is trivial.
The case $k=0$ of the theorem thus implies that under its hypotheses, for $R$ a finite ring the action of $\GL_n(R)$ on
$M(R^n)$ is {\em trivial} for $n \geq \max(m,2d+2r)$. It is enlightening to go through our proof and see
how it proves this special case. This will also clarify to the reader how the work of Harman
discussed in the previous remark is used.
\end{remark}
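Concretely, with $\alpha = 0$ the subgroup $\GL_n(R,\alpha)$ is trivial, so the $k = 0$ case of Theorem \ref{maintheorem:congruence} asserts that the quotient map
\[M(R^n) = \HH_0(1;M(R^n)) \longrightarrow \HH_0(\GL_n(R);M(R^n)) = M(R^n)_{\GL_n(R)}\]
to the coinvariants is an isomorphism for $n \geq \max(m,2d+2r)$, which happens precisely when $\GL_n(R)$ acts trivially on $M(R^n)$.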
\subsection{Outline}
We start with three sections of preliminary results in \S \ref{section:simplicial} -- \S \ref{section:coefficients}. We then
discuss our twisted homological stability machine in \S \ref{section:stabilitymachine}. To make this useful we also
need an accompanying ``vanishing theorem'', which is in \S \ref{section:vanishingtheorem}. We then have
\S \ref{section:sn} on symmetric groups, which proves Theorems \ref{maintheorem:sn} and \ref{maintheorem:snprime}.
After that, we have three sections of background on rings and general linear groups: \S \ref{section:stablerank}
discusses the stable rank condition $(\SR_r)$, and \S \ref{section:splitbases} and \S \ref{section:glncoefficient}
introduce some important simplicial complexes associated to $\GL_n(R)$. We then have
\S \ref{section:glstability} on general linear groups, which proves Theorems \ref{maintheorem:gl} and
\ref{maintheorem:glprime}. We finally turn our attention to congruence subgroups. This requires some
preliminary results on unipotent representations that are discussed in \S \ref{section:congruenceunipotence}. We
close with \S \ref{section:congruencestability} on congruence subgroups, which proves Theorem \ref{maintheorem:congruence}.
\subsection{Acknowledgments}
I would like to thank Nate Harman, Jeremy Miller, and Nick Salter for helpful conversations. In particular, I would
like to thank Nate Harman for explaining how to prove Lemma \ref{lemma:vicunipotent} below. I would also like to
thank Tom Church, Benson Farb, Peter Patzt, and Nathalie Wahl for providing comments on a previous draft of this paper.
\section{Background I: simplicial complexes}
\label{section:simplicial}
This section contains background material on simplicial complexes. Its main purpose is to establish
notation. See \cite{FriedmanSemisimplicial} for more details.
\subsection{Basic definitions}
A {\em simplicial complex} $X$ consists of the following data:
\begin{itemize}
\item A set $X^{(0)}$ called the {\em $0$-simplices} or {\em vertices}.
\item For each $k \geq 1$, a set $X^{(k)}$ of $(k+1)$-element subsets of the vertices called the {\em $k$-simplices}. These
are required to satisfy the following condition:
\begin{itemize}
\item Consider $\sigma \in X^{(k)}$. Then for $\sigma' \subseteq \sigma$ with $|\sigma'|>0$, we must have $\sigma' \in X^{(|\sigma'|-1)}$.
In this case, we call $\sigma'$ a {\em face} of $\sigma$.
\end{itemize}
\end{itemize}
A simplicial complex $X$ has a geometric realization $|X|$ obtained by gluing together
geometric $k$-simplices (one for each $k$-simplex in $X^{(k)}$) according to the face relation.
Whenever we talk about topological properties of $X$ (e.g.\ being connected), we are
referring to its geometric realization.
\subsection{Links and Cohen--Macaulay}
\label{section:cmcomplex}
Let $X$ be a simplicial complex. The {\em link} of a simplex $\sigma$ of $X$, denoted
$\Link_X(\sigma)$, is the subcomplex of $X$ consisting of all simplices $\tau$
satisfying the following two conditions.
\begin{itemize}
\item The simplices $\tau$ and $\sigma$ are disjoint, i.e., have no vertices in common.
\item The union $\tau \cup \sigma$ is a simplex.
\end{itemize}
We say that $X$ is {\em weakly Cohen--Macaulay} of dimension $n \in \Z$
if it satisfies the following:
\begin{itemize}
\item The complex $X$ must be $(n-1)$-connected. Here our convention is that a space is $(-1)$-connected if
it is nonempty, and all spaces are $k$-connected for $k \leq -2$.
\item For all $k$-simplices $\sigma$ of $X$, the complex $\Link_X(\sigma)$ must be $(n-k-2)$-connected.
\end{itemize}
\begin{example}
Let $X$ be a simplicial triangulation of an $n$-sphere. We claim that $X$ is weakly Cohen--Macaulay
of dimension $n$. There are two things to check:
\begin{itemize}
\item The simplicial complex $X$ is $(n-1)$-connected, which is clear.
\item For a $k$-simplex $\sigma$ of $X$, the complex $\Link_X(\sigma)$ is $(n-k-2)$-connected.
In fact, since our triangulation is simplicial the link of $\sigma$
is a simplicial triangulation of an $(n-k-1)$-sphere.\qedhere
\end{itemize}
\end{example}
\begin{remark}
The modifier ``weakly'' is present because we do not require $X$ to be $n$-dimensional, nor do we require links
of $k$-simplices to be $(n-k-1)$-dimensional.
\end{remark}
One basic property of weakly Cohen--Macaulay complexes is as follows:
\begin{lemma}
\label{lemma:linkcm}
Let $X$ be a simplicial complex that is weakly Cohen--Macaulay of dimension $n$ and let $\sigma$
be a $k$-simplex of $X$. Then $\Link_X(\sigma)$ is weakly Cohen--Macaulay of dimension $(n-k-1)$.
\end{lemma}
\begin{proof}
By definition, $\Link_X(\sigma)$ is $(n-k-2)$-connected. Also, if $\tau$ is an $\ell$-simplex
of $\Link_X(\sigma)$, then
$\tau \cup \sigma$ is a $(k+\ell+1)$-simplex of $X$ and
$\Link_{\Link_X(\sigma)}(\tau) = \Link_X(\tau \cup \sigma)$.
This is $(n-(k+\ell+1)-2) = (n-k-\ell-3)$-connected by assumption.
\end{proof}
\section{Background II: semisimplicial sets}
\label{section:semisimplicial}
The category of simplicial complexes has some undesirable features that make it awkward for homological
stability proofs. For instance, if $X$ is a simplicial complex and $G$ is a group acting on $X$, then
one might expect the quotient $X/G$ to be a simplicial complex whose $k$-simplices are the $G$-orbits
of simplices of $X$. Unfortunately, this need not hold. In this section, we discuss the category
of semisimplicial sets, which does not have such pathologies. See \cite{FriedmanSemisimplicial}\footnote{This reference
calls semisimplicial sets $\Delta$-sets.} and \cite{EbertRandalWilliamsSemisimplicial} for
more details.
\subsection{Semisimplicial sets}
Let $\Delta$ be the category whose objects are the finite sets $[k] = \{0,\ldots,k\}$ with $k \geq 0$ and
whose morphisms $[\ell] \rightarrow [k]$ are order-preserving injections. A {\em semisimplicial set}
is a contravariant functor $\bbX\colon \Delta \rightarrow \cSet$. Unwinding this,
$\bbX$ consists of the following two pieces of data:
\begin{itemize}
\item For each $k \geq 0$, a set $\bbX^k$ called the {\em $k$-simplices}.
\item For each order-preserving injection $\iota\colon [\ell] \rightarrow [k]$, a map
$\iota^{\ast}\colon \bbX^k \rightarrow \bbX^{\ell}$ called a {\em face map}. For $\sigma \in \bbX^k$, the
image $\iota^{\ast}(\sigma) \in \bbX^{\ell}$ is called a {\em face} of $\sigma$.
\end{itemize}
A semisimplicial set $\bbX$ has a geometric realization $|\bbX|$ obtained by gluing geometric
$n$-simplices together (one for each $n$-simplex) according to the face maps. See
\cite{FriedmanSemisimplicial}
for more details. Whenever we talk about topological properties of $\bbX$
(e.g.\ being connected), we are referring to its geometric realization.
\subsection{Morphisms}
Below we will compare semisimplicial sets with simplicial complexes, but first we describe
their morphisms.
A morphism $f\colon \bbX \rightarrow \bbY$ between semisimplicial sets $\bbX$ and $\bbY$ is a natural
transformation between the functors $\bbX$ and $\bbY$. In other words,
$f$ consists of set maps $f_k\colon \bbX^k \rightarrow \bbY^k$ for each
$k \geq 0$ that commute with the face maps. Such a morphism induces a continuous map
$|f|\colon |\bbX| \rightarrow |\bbY|$. Using this definition, an action of a group
$G$ on a semisimplicial set $\bbX$ consists of actions of $G$ on each set $\bbX^k$
that commute with the face maps, and $\bbX/G$ is a semisimplicial set with
$(\bbX/G)^k = \bbX^k/G$ for each $k \geq 0$.
\subsection{Semisimplicial sets vs simplicial complexes}
Let $\bbX$ be a semisimplicial set. The {\em vertices} of $\bbX$ are the elements of the set $\bbX^0$ of $0$-simplices.
For each $k$-simplex $\sigma \in \bbX^k$, we can define an ordered $(k+1)$-tuple $(v_0,\ldots,v_k)$ of vertices
via the formula
\[v_i = \iota_i^{\ast}(\sigma) \text{\ with $\iota_i\colon [0] \rightarrow [k]$ the map $\iota_i(0) = i$}.\]
We will call $v_0,\ldots,v_k$ the vertices of $\sigma$. This is similar to a simplicial complex, whose $k$-simplices are $(k+1)$-element sets of vertices. However, there
are three essential differences:
\begin{itemize}
\item The vertices of a simplex in a semisimplicial set have a natural ordering.
\item The vertices of a simplex in a semisimplicial set need not be distinct.
\item Simplices in a semisimplicial set are {\em not} determined by their vertices.
\end{itemize}
\subsection{Small ordering}
Consider a simplicial complex $X$. A {\em small ordering}\footnote{Our applications in this paper will only use large orderings (defined below),
but we will state our technical results to also apply to appropriate small orderings so they can be used in followup work, e.g., in \cite{PutmanStableLevel}.}
of $X$ is a semisimplicial set $\bbX$ with the following properties:
\begin{itemize}
\item[(a)] The vertices of $\bbX$ are the same as the vertices of $X$.
\item[(b)] The (unordered) set of vertices of a $k$-simplex of $\bbX$ is a $k$-simplex of $X$.
\item[(c)] The map $\bbX^k \rightarrow X^{(k)}$ taking a simplex to its set of vertices is a bijection.
\end{itemize}
Condition (b) implies that all the vertices of a simplex of $\bbX$ are distinct, and condition (c) implies that a
simplex is determined by its set of vertices. The only difference between $\bbX$ and $X$ is therefore that
the vertices of a simplex of $\bbX$ have a natural ordering, while the vertices of a simplex of $X$ are unordered.
It is clear that the geometric realizations $|\bbX|$ and $|X|$ are homeomorphic.
Small orderings always exist; for instance, if you choose a total ordering on the vertices of $X$, then
you can construct a small ordering $\bbX$ of $X$ as follows:
\begin{itemize}
\item For $k \geq 0$, let $\bbX^k = \Set{$(v_0,\ldots,v_k) \in \left(X^{(0)}\right)^{k+1}$}{$\{v_0,\ldots,v_k\} \in X^{(k)}$ and $v_0 < \cdots < v_k$}$.
\item For an order-preserving injection $\iota\colon [\ell] \rightarrow [k]$, define the face map $\iota^{\ast}\colon \bbX^k \rightarrow \bbX^{\ell}$
via the formula
\[\iota^{\ast}(v_0,\ldots,v_k) = (v_{\iota(0)},\ldots,v_{\iota(\ell)}) \quad \text{for $(v_0,\ldots,v_k) \in \bbX^k$}.\]
\end{itemize}
However, since this depends on the total ordering on $X^{(0)}$ it might have fewer symmetries than $X$. The
example we will give of this will be used repeatedly, so we make a formal definition.
\begin{definition}
For $n \geq 0$, let $\Sim_n$ be the {\em $n$-simplex},\footnote{It would be natural to instead
denote this by $\Delta^n$ or $\Delta_n$, but we are already using $\Delta$ for the category of finite sets used to define
semisimplicial sets.} i.e., the simplicial complex whose vertex set is
$[n] = \{0,\ldots,n\}$ and whose $k$-simplices are all $(k+1)$-element subsets of $[n]$. The geometric realization $|\Sim_n|$ is the usual
geometric $n$-simplex.
\end{definition}
\begin{example}
The symmetric group $\fS_{n+1}$ acts on $\Sim_n$ via its natural action on $[n] = \{0,\ldots,n\}$. However, if we choose a total
ordering on $[n]$ and use this to define a small ordering $\bbX$ of $\Sim_n$ as above, then $\fS_{n+1}$ does
not act on $\bbX$ except in the degenerate case $n=0$. Indeed, it is easy to see that the automorphism group of $\bbX$ is trivial.
\end{example}
\subsection{Large ordering}
If a sufficiently symmetric small ordering of a simplicial complex does not exist, there is another
construction that is often useful. Let $X$ be a simplicial complex. The {\em large ordering} of
$X$, denoted $X_{\ord}$, is the following semisimplicial set:
\begin{itemize}
\item For each $k \geq 0$, the $k$-simplices $X_{\ord}^k$ are ordered $(k+1)$-tuples $(v_0,\ldots,v_k)$ of distinct vertices
of $X$ such that $\{v_0,\ldots,v_k\} \in X^{(k)}$.
\item For an order-preserving injection $\iota\colon [\ell] \rightarrow [k]$, the face map $\iota^{\ast}\colon X_{\ord}^k \rightarrow X_{\ord}^{\ell}$ is
\[\iota^{\ast}(v_0,\ldots,v_k) = (v_{\iota(0)},\ldots,v_{\iota(\ell)}) \quad \text{for $(v_0,\ldots,v_k) \in X_{\ord}^k$}.\]
\end{itemize}
\Figure{figure:largeordering}{LargeOrdering}{Left: the $2$-simplex $\Sim_2$. Right: its large ordering $\OSim_2$.
The drawn portion is the $1$-skeleton.}{0.9}
The following example of this will be used several times, so we introduce notation for it.
\begin{definition}
For $n \geq 0$, let $\OSim_n$ be the large ordering $\left(\Sim_n\right)_{\ord}$ of the $n$-simplex
$\Sim_n$. The $k$-simplices of $\OSim_n$ are thus ordered sequences $(i_0,\ldots,i_k)$ of distinct
elements of $[n]$.
\end{definition}
\begin{example}
\label{example:ordernsimplex}
Each $\Sim_n$ is contractible. However, none of the $\OSim_n$ are contractible except
for $\OSim_0$. For instance, $\OSim_1$ has two $0$-cells $0$ and $1$ and two $1$-cells $(0,1)$ and $(1,0)$, and
its geometric realization is homeomorphic to $S^1$. For an even more complicated example, see
the picture of $\OSim_2$ in Figure \ref{figure:largeordering}.
\end{example}
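The large ordering also illustrates the good behavior of quotients discussed earlier in this section: the group $\fS_2$ acts on $\OSim_1$ by swapping the letters $0$ and $1$, and one can check directly that the quotient $\OSim_1/\fS_2$ has a single $0$-simplex $v$ and a single $1$-simplex whose faces are both $v$, so its geometric realization is again a circle. No simplicial complex has this combinatorial structure, since a simplex of a simplicial complex is determined by its set of distinct vertices.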
The semisimplicial set $X_{\ord}$ is much larger than $X$; indeed, each $k$-simplex of $X$ corresponds to $(k+1)!$ simplices
of $X_{\ord}$, one for each total ordering of its vertices. It is clear from its construction that if a group $G$ acts
on $X$, then $G$ also acts on $X_{\ord}$. However, examples like Example \ref{example:ordernsimplex}
might lead one to think that there is no simple relationship between the topologies of $X$ and $X_{\ord}$. This makes
the following theorem of Randal-Williams--Wahl somewhat surprising:
\begin{theorem}[{\cite[Theorem 2.14]{RandalWilliamsWahl}}]
\label{theorem:largeordering}
Let $X$ be a simplicial complex that is weakly Cohen--Macaulay of dimension $n$. Then $X_{\ord}$ is $(n-1)$-connected.
\end{theorem}
\begin{example}
Since $\Sim_n$ is clearly weakly Cohen--Macaulay of dimension $n$, Theorem \ref{theorem:largeordering}
implies that $\OSim_n$ is $(n-1)$-connected. The semisimplicial set $\OSim_n$ is also
called the {\em complex of injective words} on $(n+1)$ letters, and the fact that it
is $(n-1)$-connected was originally proved by Farmer \cite{FarmerWords}.
\end{example}
\subsection{Forward link and forward Cohen--Macaulay}
\label{section:forwardlink}
Let $X$ be a simplicial complex and let $\bbX$ be either a small or large ordering of $X$. Simplices of
$\bbX$ are thus certain ordered sequences of distinct vertices of $X$, and we will write them as $(v_0,\ldots,v_k)$ where
the $v_i$ are vertices. As notation, if $\sigma = (v_0,\ldots,v_k)$ and $\tau = (w_0,\ldots,w_{\ell})$ are ordered
sequences of vertices, we will write $\sigma \cdot \tau$ for $(v_0,\ldots,v_k,w_0,\ldots,w_{\ell})$. Of course,
$\sigma \cdot \tau$ need not be a simplex; for instance, its vertices need not be distinct.
Given a simplex $\sigma$ of $\bbX$, the {\em forward link} of $\sigma$,
denoted $\FLink_{\bbX}(\sigma)$, is the semisimplicial set whose $\ell$-simplices
are $\ell$-simplices $\tau$ of $\bbX$ such that $\sigma \cdot \tau$ is a $(k+\ell+1)$-simplex of
$\bbX$. We will say that $\bbX$ is {\em weakly forward Cohen--Macaulay} of dimension $n$
if $\bbX$ is $(n-1)$-connected and for all $k$-simplices $\sigma$ of $\bbX$, the forward
link $\FLink_{\bbX}(\sigma)$ is $(n-k-2)$-connected.
If $X$ is a simplicial complex that is weakly Cohen--Macaulay of dimension $n$, then
we will say that a small ordering $\bbX$ of $X$ is a {\em CM-small ordering} if
$\bbX$ is weakly forward Cohen--Macaulay of dimension $n$. These need not exist.
However, Lemma \ref{lemma:linkcm} and Theorem \ref{theorem:largeordering} together
imply the following:
\begin{lemma}
\label{lemma:largerorderingcm}
Let $X$ be a simplicial complex that is weakly Cohen--Macaulay of dimension $n$. Then
its large ordering $X_{\ord}$ is weakly forward Cohen--Macaulay of dimension $n$.
\end{lemma}
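The proof is short enough to record: Theorem \ref{theorem:largeordering} shows that $X_{\ord}$ is $(n-1)$-connected, and for a $k$-simplex $\sigma = (v_0,\ldots,v_k)$ of $X_{\ord}$ one checks directly from the definitions that
\[\FLink_{X_{\ord}}(\sigma) = \left(\Link_X(\{v_0,\ldots,v_k\})\right)_{\ord},\]
which is $(n-k-2)$-connected by Lemma \ref{lemma:linkcm} and Theorem \ref{theorem:largeordering}.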
\section{Background III: coefficient systems}
\label{section:coefficients}
In this section, we define coefficient systems on semisimplicial sets. Informally, these are natural assignments
of abelian groups to each simplex.
\subsection{Simplex category}
To formalize this, we introduce the {\em simplex category} of a semisimplicial set $\bbX$, which is the following
category\footnote{This is related to the poset of simplices of $\bbX$, but is {\em not} always a poset since there can
be multiple morphisms between simplices in it.} $\Simp(\bbX)$:
\begin{itemize}
\item The objects of $\Simp(\bbX)$ are the simplices of $\bbX$.
\item For $\sigma,\sigma' \in \Simp(\bbX)$ with $\sigma \in \bbX^k$ and $\sigma' \in \bbX^{\ell}$, the morphisms from $\sigma$ to $\sigma'$ are
\[\Mor(\sigma,\sigma') = \Set{$\iota\colon [\ell] \rightarrow [k]$}{$\iota$ order-preserving injection w/ $\iota^{\ast}(\sigma) = \sigma'$}.\]
This is nonempty precisely when $\sigma'$ is a face of $\sigma$.
\end{itemize}
The {\em augmented simplex category} of $\bbX$, denoted $\tSimp(\bbX)$, is obtained by adjoining a terminal object
$\ast$ to $\Simp(\bbX)$ that we will call the {\em $(-1)$-simplex}.
\subsection{Coefficient systems}
Let $\bbk$ be a commutative ring. A {\em coefficient system} over $\bbk$ on a semisimplicial set $\bbX$ is a covariant functor
$\cF$ from $\Simp(\bbX)$ to the category of $\bbk$-modules. Unpacking this, $\cF$ consists of the following data:
\begin{itemize}
\item For each simplex $\sigma$ of $\bbX$, a $\bbk$-module $\cF(\sigma)$.
\item For $\sigma \in \bbX^k$ and $\sigma' \in \bbX^{\ell}$ and $\iota\colon [\ell] \rightarrow [k]$ an
order-preserving injection with $\iota^{\ast}(\sigma) = \sigma'$,
a $\bbk$-module morphism $\iota^{\ast}\colon \cF(\sigma) \rightarrow \cF(\sigma')$.
\end{itemize}
These must satisfy the evident compatibility conditions. Similarly, an {\em augmented coefficient system} on
$\bbX$ is a covariant functor $\cF$ from $\tSimp(\bbX)$ to the category of $\bbk$-modules.
\begin{example}
If $\bbX$ is a semisimplicial set and $\bbk$ is a commutative ring, then we have the constant
coefficient system $\ubbk$ on $\bbX$ with $\ubbk(\sigma) = \bbk$ for all simplices $\sigma$. This can
be extended to an augmented coefficient system by setting $\ubbk(\ast) = \bbk$ for the $(-1)$-simplex $\ast$.
\end{example}
\begin{example}
\label{example:orderingsystem}
Recall that $\OSim_n$ is the large ordering of the $n$-simplex $\Sim_n$.
For an $\FI$-module $M$, we can define a coefficient system $\cF_{M,n}$ on $\OSim_n$ via the formula
\[\cF_{M,n}(i_0,\ldots,i_k) = M([n] \setminus \{i_0,\ldots,i_k\}) \quad \text{for a simplex $(i_0,\ldots,i_k)$ of $\OSim_n$}.\]
For an order-preserving injective map $\iota\colon [\ell] \rightarrow [k]$, the induced map
\[\iota^{\ast}\colon \cF_{M,n}(i_0,\ldots,i_k) \rightarrow \cF_{M,n}(i_{\iota(0)},\ldots,i_{\iota(\ell)})\]
is the one induced by the inclusion
\[[n] \setminus \{i_0,\ldots,i_k\} \hookrightarrow [n] \setminus \{i_{\iota(0)},\ldots,i_{\iota(\ell)}\}.\]
This can be extended to an augmented coefficient system by setting
$\cF_{M,n}(\ast) = M([n])$ for the $(-1)$-simplex $\ast$.
\end{example}
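For the smallest nontrivial instance of this construction, take $M(S) = \bbk^S$ as in Example \ref{example:easyfi} and $n = 1$. Unwinding the formula gives
\[\cF_{M,1}(\ast) = \bbk^{\{0,1\}}, \quad \cF_{M,1}(0) = \bbk^{\{1\}}, \quad \cF_{M,1}(1) = \bbk^{\{0\}}, \quad \cF_{M,1}(0,1) = \cF_{M,1}(1,0) = \bbk^{\emptyset} = 0,\]
with the morphisms induced by the evident inclusions of sets.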
The collection of coefficient systems (resp.\ augmented coefficient systems) over $\bbk$ on $\bbX$ forms an abelian category.
\subsection{Equivariant coefficient systems}
Let $G$ be a group, let $\bbX$ be a semisimplicial set on which $G$ acts, and let $\bbk$ be a commutative ring. Consider
a (possibly augmented) coefficient system $\cF$ on $\bbX$. We want to equip $\cF$ with an ``action'' of $G$ that
is compatible with the $G$-action on $\bbX$. For $g \in G$ and a simplex $\sigma$ of $\bbX$, the data of such
an action should include isomorphisms $\cF(\sigma) \rightarrow \cF(g \cdot \sigma)$, and more generally for
$h \in G$ should include isomorphisms $\cF(h \cdot \sigma) \rightarrow \cF(gh \cdot \sigma)$. Moreover, these
isomorphisms should be compatible with $\cF$ in an appropriate sense.
We formalize this as follows. For $h \in G$, let $\cF_h$ be the coefficient system over $\bbk$ on $\bbX$
defined via the formula
\[\cF_h(\sigma) = \cF(h \cdot \sigma) \quad \text{for a simplex $\sigma$ of $\bbX$}.\]
We say that $\cF$ is a {\em $G$-equivariant coefficient system} if for all $g,h \in G$ we are given
a natural transformation $\Phi_{g,h}\colon \cF_h \Rightarrow \cF_{gh}$. These natural transformations
should satisfy the following two properties:
\begin{itemize}
\item For all $h \in G$, the natural transformation $\Phi_{1,h}\colon \cF_h \Rightarrow \cF_h$ should be
the identity natural transformation.
\item For all $g_1,g_2 \in G$ and all $h \in G$, we require the two natural transformations
\[\cF_h \xRightarrow{\Phi_{g_1 g_2,h}} \cF_{g_1 g_2 h} \quad \text{and} \quad
\cF_h \xRightarrow{\Phi_{g_2,h}} \cF_{g_2 h} \xRightarrow{\Phi_{g_1,g_2 h}} \cF_{g_1 g_2 h}\]
to be equal.
\end{itemize}
Let us unpack this a bit. For all simplices $\sigma$ of $\bbX$ and all $g, h \in G$, the natural transformation
$\Phi_{g,h}$ gives a homomorphism $\cF(h \cdot \sigma) \rightarrow \cF(gh \cdot \sigma)$. This homomorphism
is an isomorphism whose inverse is given by the map $\cF(gh \cdot \sigma) \rightarrow \cF(h \cdot \sigma)$ induced
by $\Phi_{g^{-1},gh}$. Moreover, it must be natural in the sense
that if $\sigma'$ is a face of $\sigma$ via some face map, then the diagram
\[\begin{CD}
\cF(h \cdot \sigma) @>>> \cF(g h \cdot \sigma) \\
@VVV @VVV \\
\cF(h \cdot \sigma') @>>> \cF(g h \cdot \sigma')
\end{CD}\]
must commute. The fact that the natural transformations respect the group law implies
that the stabilizer subgroup $G_{\sigma}$ acts on $\cF(\sigma)$, making it into a $\bbk[G_{\sigma}]$-module.
If $\sigma'$ is a face of $\sigma$ via some face map, the induced map $\cF(\sigma) \rightarrow \cF(\sigma')$
is a map of $\bbk[G_{\sigma}]$-modules, where $G_{\sigma}$ acts on $\cF(\sigma')$ via the inclusion
$G_{\sigma} \hookrightarrow G_{\sigma'}$. Another consequence of the fact that the natural
transformations respect the group law is that for all $k \geq 0$, the direct sum
\[\bigoplus_{\sigma \in \bbX^k} \cF(\sigma)\]
is a $\bbk[G]$-module in a natural way, where the $G$-action restricts to the $G_{\sigma}$-action on
$\cF(\sigma)$ for each $\sigma \in \bbX^k$.
\begin{example}
\label{example:snequivariant}
Let $M$ be an $\FI$-module over $\bbk$ and let $\cF_{M,n}$ be the augmented coefficient system on $\OSim_n$
from Example \ref{example:orderingsystem} defined via the formula
\[\cF_{M,n}(i_0,\ldots,i_k) = M([n] \setminus \{i_0,\ldots,i_k\}) \quad \text{for a simplex $(i_0,\ldots,i_k)$ of $\OSim_n$.}\]
Recalling that $[n] = \{0,\ldots,n\}$, the symmetric group $\fS_{n+1}$ acts on $\OSim_n$. The
augmented coefficient system $\cF_{M,n}$ can be endowed with the structure of an $\fS_{n+1}$-equivariant
augmented coefficient system in the following way. Consider $g,h \in \fS_{n+1}$. We then
define $\Phi_{g,h}$ to be the following natural transformation:
\begin{itemize}
\item For a simplex $(i_0,\ldots,i_k)$ of $\OSim_n$, let the induced map
\[\cF_{M,n}(h(i_0),\ldots,h(i_k)) \rightarrow \cF_{M,n}(g h(i_0),\ldots,g h(i_k))\]
be the map
\[M([n] \setminus \{h(i_0),\ldots,h(i_k)\}) \rightarrow M([n] \setminus \{g h(i_0),\ldots,g h(i_k)\})\]
induced by the bijection
\[[n] \setminus \{h(i_0),\ldots,h(i_k)\} \rightarrow [n] \setminus \{g h(i_0),\ldots,g h(i_k)\}\]
obtained by restricting $g \in \fS_{n+1}$ to $[n] \setminus \{h(i_0),\ldots,h(i_k)\}$.\qedhere
\end{itemize}
\end{example}
\subsection{Chain complex and homology}
Let $\bbX$ be a semisimplicial set and let $\cF$ be a coefficient system on $\bbX$. Define the
{\em simplicial chain complex} of $\bbX$ with coefficients in $\cF$ to be the chain complex
$\CC_{\bullet}(\bbX;\cF)$ defined as follows:
\begin{itemize}
\item For $k \geq 0$, we have
\[\CC_k(\bbX;\cF) = \bigoplus_{\sigma \in \bbX^k} \cF(\sigma).\]
\item The boundary map $d\colon \CC_k(\bbX;\cF) \rightarrow \CC_{k-1}(\bbX;\cF)$ is
$d = \sum_{i=0}^k (-1)^i d_i$, where the map $d_i\colon \CC_k(\bbX;\cF) \rightarrow \CC_{k-1}(\bbX;\cF)$ is
as follows. Consider $\sigma \in \bbX^k$. Let $\iota\colon [k-1] \rightarrow [k]$ be
the order-preserving map whose image omits $i$. Then on the $\cF(\sigma)$ factor of $\CC_k(\bbX;\cF)$,
the map $d_i$ is
\[\cF(\sigma) \stackrel{\iota^{\ast}}{\longrightarrow} \cF(\iota^{\ast}(\sigma)) \hookrightarrow \bigoplus_{\sigma' \in \bbX^{k-1}} \cF(\sigma') = \CC_{k-1}(\bbX;\cF).\]
\end{itemize}
Define
\[\HH_k(\bbX;\cF) = \HH_k(\CC_{\bullet}(\bbX;\cF)).\]
For an augmented coefficient system $\cF$ on $\bbX$, define
$\RC_{\bullet}(\bbX;\cF)$ to be the augmented chain complex defined just like we did above but with
$\RC_{-1}(\bbX;\cF) = \cF(\ast)$ and define
\[\RH_k(\bbX;\cF) = \HH_k(\RC_{\bullet}(\bbX;\cF)).\]
\begin{example}
For a semisimplicial set $\bbX$ and a commutative ring $\bbk$, we have
\[\HH_k(\bbX;\ubbk) = \HH_k(|\bbX|;\bbk) \quad \text{and} \quad \RH_k(\bbX;\ubbk) = \RH_k(|\bbX|;\bbk).\qedhere\]
\end{example}
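As a small worked example, take $\bbX = \OSim_1$ (whose geometric realization is a circle; see Example \ref{example:ordernsimplex}) with the constant augmented coefficient system $\ubbk$. Identifying simplices with the corresponding basis elements, the augmented chain complex is
\[0 \rightarrow \bbk^2 \stackrel{d}{\longrightarrow} \bbk^2 \longrightarrow \bbk \rightarrow 0\]
with $d(0,1) = (1) - (0)$ and $d(1,0) = (0) - (1)$, where the map $\RC_0 \rightarrow \RC_{-1} = \bbk$ sends each vertex to $1$. A direct computation gives $\RH_1(\OSim_1;\ubbk) \cong \bbk$ and $\RH_0(\OSim_1;\ubbk) = \RH_{-1}(\OSim_1;\ubbk) = 0$, as expected for a circle.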
\begin{remark}
With our definition, $\RH_{-1}(\bbX;\cF)$ is a quotient of $\cF(\ast)$. This quotient can sometimes be
nonzero. It vanishes precisely when the map
\[\bigoplus_{v \in \bbX^0} \cF(v) \rightarrow \cF(\ast)\]
is surjective.
\end{remark}
Note that if a group $G$ acts on $\bbX$ and $\cF$ is a $G$-equivariant coefficient system, then
\[\cdots \rightarrow \RC_{2}(\bbX;\cF) \rightarrow \RC_1(\bbX;\cF) \rightarrow \RC_0(\bbX;\cF) \rightarrow \RC_{-1}(\bbX;\cF) = \cF(\ast) \rightarrow 0\]
is a chain complex of $\bbk[G]$-modules, and each $\RH_k(\bbX;\cF)$ is a $\bbk[G]$-module.
\subsection{Long exact sequences}
Consider a short exact sequence
\[0 \longrightarrow \cF_1 \longrightarrow \cF_2 \longrightarrow \cF_3 \longrightarrow 0\]
of coefficient systems over $\bbk$ on $\bbX$. For each simplex $\sigma$ of $\bbX$, we thus have a short
exact sequence
\[0 \longrightarrow \cF_1(\sigma) \longrightarrow \cF_2(\sigma) \longrightarrow \cF_3(\sigma) \longrightarrow 0\]
of $\bbk$-modules. These fit together into a short exact sequence
\[0 \longrightarrow \CC_{\bullet}(\bbX;\cF_1) \longrightarrow \CC_{\bullet}(\bbX;\cF_2) \longrightarrow \CC_{\bullet}(\bbX;\cF_3) \longrightarrow 0\]
of chain complexes, and thus induce a long exact sequence in homology of the form
\[\cdots \longrightarrow \HH_k(\bbX;\cF_1) \longrightarrow \HH_k(\bbX;\cF_2) \longrightarrow \HH_k(\bbX;\cF_3) \longrightarrow \HH_{k-1}(\bbX;\cF_1) \longrightarrow \cdots.\]
A similar result holds for augmented coefficient systems and reduced homology.
\section{Stability I: stability machine}
\label{section:stabilitymachine}
In this section, we describe our machine for proving twisted homological stability.
\subsection{Classical homological stability}
An {\em increasing sequence of groups} is an indexed sequence of groups $\{G_n\}_{n=0}^{\infty}$ such that
\[G_0 \subseteq G_1 \subseteq G_2 \subseteq \cdots.\]
For each $k \geq 0$, we get maps
\[\HH_k(G_0) \rightarrow \HH_k(G_1) \rightarrow \HH_k(G_2) \rightarrow \cdots.\]
The classical homological stability machine gives conditions under which these stabilize, i.e., such that
the maps $\HH_k(G_{n-1}) \rightarrow \HH_k(G_n)$ are isomorphisms for $n \gg k$. One version of it is as follows.
\begin{theorem}[Classical homological stability]
\label{theorem:classicalstabilitymachine}
Let $\{G_n\}_{n=0}^{\infty}$ be an increasing sequence of groups.
For each $n \geq 1$, let $\bbX_n$ be a semisimplicial set upon which $G_n$ acts. Assume
for some $c \geq 2$ that the following hold:
\begin{itemize}
\item[(a)] For all $-1 \leq k \leq \frac{n-2}{c}$, we have $\RH_k(\bbX_n) = 0$.
\item[(b)] For all $0 \leq k < n$, the group $G_{n-k-1}$ is the $G_n$-stabilizer of a $k$-simplex of $\bbX_n$.
\item[(c)] For all $0 \leq k < n$, the group $G_n$ acts transitively on the $k$-simplices of $\bbX_n$.
\item[(d)] For all $n \geq 2$ and all $1$-simplices $e$ of $\bbX_n$ whose proper faces consist of $0$-simplices
$v$ and $v'$, there exists some $\lambda \in G_n$ with $\lambda(v) = v'$ such that
$\lambda$ commutes with all elements of $(G_n)_e$.
\end{itemize}
Then for all $k$ the map $\HH_k(G_{n-1}) \rightarrow \HH_k(G_n)$ is an isomorphism for
$n \geq ck+2$ and a surjection for $n = ck+1$.
\end{theorem}
\begin{proof}
This can be proved exactly like \cite[Theorem 1.1]{HatcherVogtmannTethers} -- the only major difference
between our theorem and \cite[Theorem 1.1]{HatcherVogtmannTethers} is that we assume a weaker connectivity
range on the $\bbX_n$, which causes stability to happen at a slower rate. We thus omit the details of the
proof, but to clarify our indexing conventions we make a few remarks.
The proof is by induction on $k$.
The base case is $k \leq -1$, where the result is trivial since $\HH_{k}(G) = 0$ for all groups $G$ when $k$ is negative. We could
also start with $k=0$ since $\HH_0(G) = \Z$, but later when we work with twisted coefficients even
the $\HH_0$ statement will be nontrivial. In any case, to go from $\HH_{k-1}$ to $\HH_k$ two steps
are needed (which is why we require $c \geq 2$).
The first step is to prove that the map $\HH_k(G_{n-1}) \rightarrow \HH_k(G_n)$ is surjective for
all $n$ sufficiently large, which requires that $\RH_i(\bbX_n) = 0$ for $-1 \leq i \leq k-1$. Once this has been done, we then
prove that $\HH_k(G_{n-1}) \rightarrow \HH_k(G_{n})$ is also injective for $n$ one step larger than needed
for surjective stability. This requires $\RH_i(\bbX_{n}) = 0$ for $-1 \leq i \leq k$. We remark that condition (d) is used
for injective stability but not surjective stability, which is why we only assume it for $n \geq 2$.
This explains our indexing conventions:
\begin{itemize}
\item Surjective stability for $\HH_0$ starts with $\HH_0(G_0) \rightarrow \HH_0(G_1)$, so we need
$\RH_{-1}(\bbX_1) = 0$.
\item Injective stability for $\HH_0$ starts with $\HH_0(G_1) \rightarrow \HH_0(G_2)$, so we need
$\RH_{-1}(\bbX_2) = \RH_{0}(\bbX_2) = 0$.\qedhere
\end{itemize}
\end{proof}
\subsection{Setup for twisted coefficients}
We want to give a version of this with twisted coefficients. Fix a commutative ring $\bbk$.
An {\em increasing sequence of groups and modules} is an indexed sequence of pairs $\{(G_n,M_n)\}_{n=0}^{\infty}$, where the
$G_n$ and the $M_n$ are as follows:
\begin{itemize}
\item The $\{G_n\}_{n=0}^{\infty}$ are an increasing sequence of groups.
\item Each $M_n$ is a $\bbk[G_n]$-module.
\item As abelian groups, the $M_n$ satisfy
\[M_0 \subseteq M_1 \subseteq M_2 \subseteq \cdots.\]
\item For each $n \geq 0$, the inclusion $M_n \hookrightarrow M_{n+1}$ is $G_n$-equivariant, where $G_n$
acts on $M_{n+1}$ via the inclusion $G_n \hookrightarrow G_{n+1}$.
\end{itemize}
Given an increasing sequence of groups and modules $\{(G_n,M_n)\}_{n=0}^{\infty}$, for each $k$ we get maps
\[\HH_k(G_0;M_0) \rightarrow \HH_k(G_1;M_1) \rightarrow \HH_k(G_2;M_2) \rightarrow \cdots\]
between the associated twisted homology groups. We want to show that this stabilizes via a machine
similar to Theorem \ref{theorem:classicalstabilitymachine}.
We will incorporate the $M_n$ into our machine via a $G_n$-equivariant augmented coefficient system $\cM_n$
on the semisimplicial set $\bbX_n$ with $\cM_n(\ast) = M_n$. Having done this, we will be forced to replace the
requirement that $\RH_k(\bbX_n) = 0$ in condition (a) by $\RH_k(\bbX_n;\cM_n) = 0$. This
is not easy to check, but we will give a useful criterion in \S \ref{section:vanishingtheorem} below.
The twisted analogue of Theorem \ref{theorem:classicalstabilitymachine} is as follows.
\begin{theorem}[Twisted homological stability]
\label{theorem:stabilitymachine}
Let $\{(G_n,M_n)\}_{n=0}^{\infty}$ be an increasing sequence of groups and modules.
For each $n \geq 1$, let $\bbX_n$ be a semisimplicial set upon which $G_n$ acts and let
$\cM_n$ be a $G_n$-equivariant augmented coefficient system on $\bbX_n$. Assume for some $c \geq 2$ that the following hold:
\begin{itemize}
\item[(a)] For all $-1 \leq k \leq \frac{n-2}{c}$, we have $\RH_k(\bbX_n;\cM_n) = 0$.
\item[(b)] For all $-1 \leq k < n$, the group $G_{n-k-1}$ is the $G_n$-stabilizer of a $k$-simplex $\sigma_k$ of $\bbX_n$
with $\cM_n(\sigma_k) = M_{n-k-1}$. In particular, $\cM_n(\ast) = M_n$.
\item[(c)] For all $0 \leq k < n$, the group $G_n$ acts transitively on the $k$-simplices of $\bbX_n$.
\item[(d)] For all $n \geq 2$ and all $1$-simplices $e$ of $\bbX_n$ whose proper faces consist of $0$-simplices
$v$ and $v'$, there exists some $\lambda \in G_n$ with $\lambda(v) = v'$ such that
$\lambda$ commutes with all elements of $(G_n)_e$ and fixes all elements of $\cM_n(e)$.
\end{itemize}
Then for $k \geq 0$ the map $\HH_k(G_{n-1};M_{n-1}) \rightarrow \HH_k(G_n;M_n)$ is an isomorphism for
$n \geq ck+2$ and a surjection for $n = ck+1$.
\end{theorem}
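For example, with $c = 2$ (the value we will use for symmetric groups in \S \ref{section:sn}), the theorem says that $\HH_k(G_{n-1};M_{n-1}) \rightarrow \HH_k(G_n;M_n)$ is an isomorphism for $n \geq 2k+2$ and a surjection for $n = 2k+1$.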
The proof of Theorem \ref{theorem:stabilitymachine} is almost identical to that of Theorem \ref{theorem:classicalstabilitymachine} (for
which we referred to \cite[Theorem 1.1]{HatcherVogtmannTethers}). We will therefore not give full details, but only describe
how to construct the key spectral sequence (see Example \ref{example:standardexample} below), which will explain the role played by condition (a).
\begin{remark}
Theorem \ref{theorem:stabilitymachine} is related to \cite[Theorem D]{PatztCentralStability}.
\end{remark}
\subsection{Homology of stabilizers}
\label{section:stabhomology}
The construction of this spectral sequence requires the following preliminaries. Fix a commutative ring $\bbk$.
Let $G$ be a group acting on a semisimplicial set $\bbX$ and let
$\cM$ be a $G$-equivariant augmented coefficient system on $\bbX$ over $\bbk$. For a simplex $\tsigma$ of $\bbX$, the value $\cM(\tsigma)$ is a
$\bbk[G_{\tsigma}]$-module. For $q \geq 0$, define an augmented coefficient system $\cH_q(\cM)$ on
$\bbX/G$ as follows.
Consider a simplex $\sigma$ of $\bbX/G$, and let $\tsigma$ be a lift of $\sigma$ to $\bbX$. We then define
\[\cH_q(\cM)(\sigma) = \HH_q(G_{\tsigma};\cM(\tsigma)).\]
To see that this is well-defined, let $\tsigma'$ be another lift of $\sigma$ to $\bbX$. There exists some $g \in G$ with
$g \tsigma = \tsigma'$, so $g G_{\tsigma} g^{-1} = G_{\tsigma'}$ and $g \cdot \cM(\tsigma) = \cM(\tsigma')$. Conjugation/multiplication
by $g$ thus induces an isomorphism
\[\HH_q(G_{\tsigma};\cM(\tsigma)) \cong \HH_q(G_{\tsigma'};\cM(\tsigma')).\]
What is more, since inner automorphisms induce the identity on homology (even with twisted coefficients; see \cite[Proposition III.8.1]{BrownCohomology}),
this isomorphism is independent of the choice of $g$, and thus is completely canonical. That $\cH_q(\cM)$ is
a coefficient system follows immediately.
\begin{remark}
If $\ast$ is the $(-1)$-simplex of $\bbX/G$, then the only possible lift $\tast$ is the $(-1)$-simplex of $\bbX$, and by
definition its stabilizer is the entire group $G$. It follows that $\cH_q(\cM)(\ast) = \HH_q(G;\cM(\tast))$.
\end{remark}
\subsection{Spectral sequence}
The spectral sequence that underlies Theorem \ref{theorem:stabilitymachine} is as follows. In it, our convention is that
$\cH_q(\cM) = \underline{0}$ for $q < 0$.
\begin{theorem}[Spectral sequence]
\label{theorem:spectralsequence}
Let $G$ be a group acting on a semisimplicial set $\bbX$, let $\bbk$ be a commutative ring, let
$\cM$ be a $G$-equivariant augmented coefficient system over $\bbk$ on $\bbX$. Assume that $\RH_k(\bbX;\cM) = 0$ for
$-1 \leq k \leq r$. Then there exists a spectral sequence $E^{\bullet}_{pq}$ with the following
properties:
\begin{itemize}
\item[(i)] We have $E^1_{pq} = \RC_p(\bbX/G;\cH_q(\cM))$, and the differential
$E^1_{pq} \rightarrow E^1_{p-1,q}$ is the differential on $\RC_{\bullet}(\bbX/G;\cH_q(\cM))$. In particular,
$E^1_{pq} = 0$ if $p<-1$ or if $q < 0$.
\item[(ii)] For $p + q \leq r$, we have $E^{\infty}_{pq} = 0$.
\end{itemize}
\end{theorem}
\begin{example}
\label{example:standardexample}
Let the notation and assumptions be as in Theorem \ref{theorem:stabilitymachine}, and apply Theorem
\ref{theorem:spectralsequence} to $G_n$ and $\bbX_n$ and $M_n$ and $\cM_n$. Since $G_n$ acts
transitively on the $k$-simplices of $\bbX_n$ for $0 \leq k < n$, there is a single
$k$-simplex in $\bbX_n/G_n$. Since $G_{n-k-1}$ is the $G_n$-stabilizer of a $k$-simplex
$\sigma_k$ of $\bbX_n$ with $\cM_n(\sigma_k) = M_{n-k-1}$, we conclude that our spectral
sequence has
\[E^1_{pq} \cong \RC_p(\bbX_n/G_n;\cH_q(\cM_n)) = \HH_q(G_{n-p-1};M_{n-p-1}) \quad \text{for $-1 \leq p < n$}.\qedhere\]
\end{example}
\begin{proof}[Proof of Theorem \ref{theorem:spectralsequence}]
Let
\[\cdots \rightarrow F_2(G) \rightarrow F_1(G) \rightarrow F_0(G) \rightarrow \bbk\]
be a resolution of the trivial $\bbk[G]$-module $\bbk$ by free $\bbk[G]$-modules. The action of $G$ on $\bbX$ makes $\RC_{\bullet}(\bbX;\cM)$ into
a chain complex of $\bbk[G]$-modules, so we can consider the double complex $C_{\bullet,\bullet}$ defined by
\begin{equation}
\label{eqn:doublecomplex}
C_{pq} = \RC_{p}(\bbX;\cM) \otimes_{G} F_{q}(G).
\end{equation}
The spectral sequence we are looking for converges to the homology of this double complex. In fact,
there are two spectral sequences converging to the homology of a double complex, one arising by
filtering it ``horizontally'' and the other by filtering it ``vertically'' (see, e.g., \cite[\S VII.3]{BrownCohomology}).
We will use the horizontal filtration to show that the homology of \eqref{eqn:doublecomplex} vanishes
up to degree $r$ (conclusion (ii)), and then prove that the vertical filtration gives the spectral
sequence described in the theorem.
The spectral sequence arising from the horizontal filtration has
\[E^1_{pq} \cong \HH_p\left(\RC_{\bullet}\left(\bbX;\cM\right) \otimes_G F_q\left(G\right)\right).\]
Our assumptions imply that $\RC_{\bullet}(\bbX;\cM)$ is exact up to degree $r$, and since $F_q(G)$ is a free
$\bbk[G]$-module it follows that $\RC_{\bullet}(\bbX;\cM) \otimes_G F_q(G)$ is also exact up to degree $r$.
It follows that $E^1_{pq} = 0$ for $p \leq r$.
We deduce that the homology of the double complex \eqref{eqn:doublecomplex} vanishes up to degree $r$.
The spectral sequence arising from the vertical filtration has
\[E^1_{pq} \cong \HH_q\left(\RC_{p}\left(\bbX;\cM\right) \otimes_G F_{\bullet}\left(G\right)\right).\]
This is the spectral sequence that is referred to in the theorem, and we must prove that it
satisfies (i). Since $F_{\bullet}(G)$ is a free resolution of the trivial $\bbk[G]$-module
$\bbk$, the above expression simplifies to
\[E^1_{pq} \cong \HH_q\left(G;\RC_{p}\left(\bbX;\cM\right)\right).\]
For each $p$-simplex $\sigma$ of $\bbX/G$, fix a lift $\tsigma$ to $\bbX$. As a $\bbk[G]$-module, we have
\[\RC_p\left(\bbX;\cM\right) \cong \bigoplus_{\sigma \in \left(\bbX/G\right)^p} \Ind_{G_{\tsigma}}^G \cM\left(\tsigma\right).\]
Plugging this in, we get
\[E^1_{pq} \cong \bigoplus_{\sigma \in \left(\bbX/G\right)^p} \HH_q\left(G;\Ind_{G_{\tsigma}}^G \cM\left(\tsigma\right)\right).\]
Applying Shapiro's Lemma, the right hand side simplifies to
\[\bigoplus_{\sigma \in \left(\bbX/G\right)^p} \HH_q\left(G_{\tsigma};\cM\left(\tsigma\right)\right) = \RC_p(\bbX/G;\cH_q(\cM)).\]
That the differential is as described in (i) is clear. The theorem follows.
\end{proof}
\subsection{Stability machine and finite-index subgroups}
We close this section by proving a variant of Theorem \ref{theorem:stabilitymachine} that we will use
to analyze congruence subgroups of $\GL_n(R)$. To motivate its statement, consider a finite-index
normal subgroup $G'$ of a group $G$ and a $\bbk[G]$-module $M$ over a field $\bbk$ of characteristic $0$.
The conjugation action of $G$ on $G'$ induces an action of $G$ on each $\HH_k(G';M)$, and using
the transfer map (see \cite[Chapter III.9]{BrownCohomology}) one can show that $\HH_k(G;M)$ is
the $G$-coinvariants of the action of $G$ on $\HH_k(G';M)$. From this, we see that the map
\[\HH_k(G';M) \rightarrow \HH_k(G;M)\]
induced by the inclusion $G' \hookrightarrow G$ is an isomorphism if and only if the action
of $G$ on $\HH_k(G';M)$ is trivial. The point of the following is that under
the conditions of our stability machine, it is enough to check a weaker
version of this triviality (condition (h)).
\begin{theorem}
\label{theorem:stabilitymachinefi}
Let $\{(G_n,M_n)\}_{n=0}^{\infty}$ be an increasing sequence of groups and modules.
For each $n \geq 1$, let $\bbX_n$ be a semisimplicial set upon which $G_n$ acts and let
$\cM_n$ be a $G_n$-equivariant augmented coefficient system on $\bbX_n$. Assume for some $c \geq 2$
that conditions (a)-(d) of Theorem \ref{theorem:stabilitymachine} hold, so by that theorem
$\HH_k(G_n;M_n)$ stabilizes. Furthermore, assume that $\{G'_n\}_{n=0}^{\infty}$ is an
increasing sequence of groups such that each $G'_n$ is a finite-index normal subgroup
of $G_n$, and that the following hold:
\begin{itemize}
\item[(e)] Each $M_n$ is a vector space over a field $\bbk$ of characteristic $0$.
\item[(f)] For the $k$-simplex $\sigma_k$ of $\bbX_n$ from condition (b) whose
$G_n$-stabilizer is $G_{n-k-1}$, the $G'_n$-stabilizer of $\sigma_k$ is $G'_{n-k-1}$.
\item[(g)] The quotient $\bbX_n / G'_n$ is $\frac{n-2}{c}$-connected.
\item[(h)] For $k \geq 0$ and $n \geq ck+2$, the action of $G_n$ on $\HH_k(G'_n;M_n)$
induced by the conjugation action of $G_n$ on $G'_n$ fixes the image of the stabilization
map
\begin{equation}
\label{eqn:gprimestab}
\HH_k(G'_{n-1};M_{n-1}) \rightarrow \HH_k(G'_n;M_n).
\end{equation}
\end{itemize}
Then for $n \geq ck+2$ the map $\HH_k(G'_n;M_n) \rightarrow \HH_k(G_n;M_n)$ induced
by the inclusion $G'_n \hookrightarrow G_n$ is an isomorphism.
\end{theorem}
\begin{proof}
The proof will be by induction on $k$. The base case $k \leq -1$ is trivial, so assume that $k \geq 0$
and that the result is true for all smaller $k$. Consider some $n \geq ck+2$. As we discussed
before the theorem, because of (e) to prove that $\HH_k(G'_n;M_n) \cong \HH_k(G_n;M_n)$ it
is enough to prove that $G_n$ acts trivially on $\HH_k(G'_n;M_n)$. Condition (h) implies
that to do this, it is enough to prove that $\HH_k(G'_n;M_n)$ is spanned by the
$G_n$-orbit of the image of the stabilization map \eqref{eqn:gprimestab}.
Condition (a) says that $\RH_i(\bbX_n;\cM_n) = 0$ for $-1 \leq i \leq \frac{n-2}{c}$. Since
$n \geq ck+2$, this vanishing holds for $-1 \leq i \leq k$. Applying Theorem \ref{theorem:spectralsequence},
we obtain a spectral sequence $E^{\bullet}_{pq}$ with the following properties:
\begin{itemize}
\item[(i)] We have $E^1_{pq} = \RC_p(\bbX_n/G'_n;\cH_q(\cM_n))$, and the differential
$E^1_{pq} \rightarrow E^1_{p-1,q}$ is the differential on $\RC_{\bullet}(\bbX_n/G'_n;\cH_q(\cM_n))$. In particular,
$E^1_{pq} = 0$ if $p<-1$ or if $q < 0$.
\item[(ii)] For $p + q \leq k$, we have $E^{\infty}_{pq} = 0$.
\end{itemize}
Unwinding the definition of $\cH_q(\cM_n)$, we see that for $\ast$ the $(-1)$-simplex of $\bbX_n$ we have
\[E^1_{-1,k} = \RC_{-1}(\bbX_n/G'_n;\cH_k(\cM_n)) = \HH_k((G'_n)_{\ast};\cM_n(\ast)) = \HH_k(G'_n;M_n).\]
For each simplex $\tau$ of $\bbX_n/G'_n$, fix a lift $\ttau$ to $\bbX_n$. We then have
\begin{equation}
\label{eqn:e10k}
E^1_{0,k} = \RC_0(\bbX_n/G'_n;\cH_k(\cM_n)) = \bigoplus_{\tau \in (\bbX_n/G'_n)^0} \HH_k((G'_n)_{\ttau};\cM_n(\ttau)).
\end{equation}
By conditions (b) and (f), we have
\[\HH_k((G'_n)_{\sigma_0};\cM_n(\sigma_0)) = \HH_k(G'_{n-1};M_{n-1}).\]
Condition (c) says that $G_n$ acts transitively on the $0$-simplices of $\bbX_n$, so the different
$(G'_n)_{\ttau}$ appearing in \eqref{eqn:e10k} are $G_n$-conjugates of $G'_{n-1}$, and the $\cM_n(\ttau)$ are
the corresponding $G_n$-translates of $M_{n-1}$ under the $G_n$-equivariant structure on $\cM_n$.
From these observations, we see that the image of the differential
\[E^1_{0,k} = \bigoplus_{\tau \in (\bbX_n/G'_n)^0} \HH_k((G'_n)_{\ttau};\cM_n(\ttau)) \rightarrow \HH_k(G'_n;M_n) = E^1_{-1,k}\]
is spanned by $G_n$-orbits of the image of the stabilization map
\[\HH_k(G'_{n-1};M_{n-1}) \rightarrow \HH_k(G'_n;M_n).\]
To see that $\HH_k(G'_n;M_n)$ is spanned by the $G_n$-orbits of the image of this stabilization map,
it is therefore enough to prove that the differential $E^1_{0,k} \rightarrow E^1_{-1,k}$ is surjective.
The only nonzero differentials that can possibly hit $E^{\bullet}_{-1,k}$ are
\begin{align*}
E^1_{0,k} &\rightarrow E^1_{-1,k}\\
E^2_{1,k-1} &\rightarrow E^2_{-1,k}\\
&\vdots \\
E^{k+1}_{k,0} &\rightarrow E^{k+1}_{-1,k}.
\end{align*}
By (ii) above we have $E^{\infty}_{-1,k} = 0$, so to prove that the differential $E^1_{0,k} \rightarrow E^1_{-1,k}$ is surjective
it is enough to prove that $E^{i+1}_{i,k-i} = 0$ for $1 \leq i \leq k$. We will actually
prove the following more general result:
\begin{claim}
Fix some $0 \leq q \leq k-1$. Then $E^2_{pq} = 0$ for $-1 \leq p \leq k-q$.
\end{claim}
The terms $E^1_{pq}$ with $-1 \leq p \leq k-q+1$ along
with the relevant $E^1$-differentials are
\[\RC_{-1}(\bbX_n/G'_n;\cH_q(\cM_n)) \leftarrow \RC_{0}(\bbX_n/G'_n;\cH_q(\cM_n)) \leftarrow \cdots \leftarrow \RC_{k-q+1}(\bbX_n/G'_n;\cH_q(\cM_n)).\]
To prove that the $E^2_{pq}$ with $-1 \leq p \leq k-q$ are all zero, we must prove that this chain complex is exact
except possibly at its rightmost term $\RC_{k-q+1}(\bbX_n/G'_n;\cH_q(\cM_n))$.
For a $p$-simplex $\tau$ of $\bbX_n/G'_n$, we have by definition
\[\cH_q(\cM_n)(\tau) = \HH_q((G'_n)_{\ttau};\cM_n(\ttau)).\]
Here $\ttau$ is our chosen lift of $\tau$ to $\bbX_n$. Setting
$V = \HH_q(G_n;M_n)$, the inclusion $(G'_n)_{\ttau} \hookrightarrow G_n$
along with the map
\[\cM_n(\ttau) \rightarrow \cM_n(\ast) = M_n \quad \text{with $\ast$ the $(-1)$-simplex of $\bbX_n$}\]
coming from our coefficient system induce a map
\begin{equation}
\label{eqn:stabproj}
\cH_q(\cM_n)(\tau) = \HH_q((G'_n)_{\ttau};\cM_n(\ttau)) \rightarrow \HH_q(G_n;M_n) = V.
\end{equation}
Letting $\uV$ be the constant coefficient system on $\bbX_n/G'_n$ with value $V$, the maps
\eqref{eqn:stabproj} assemble to a map of coefficient systems $\cH_q(\cM_n) \rightarrow \uV$.
Letting $f_p\colon \RC_{p}(\bbX_n/G'_n;\cH_q(\cM_n)) \rightarrow \RC_{p}(\bbX_n/G'_n;\uV)$ be
the induced map on reduced chain complexes, we have a commutative diagram
\[\minCDarrowwidth10pt\begin{CD}
\RC_{-1}(\bbX_n/G'_n;\cH_q(\cM_n)) @<<< \RC_{0}(\bbX_n/G'_n;\cH_q(\cM_n)) @<<< \cdots @<<< \RC_{k-q+1}(\bbX_n/G'_n;\cH_q(\cM_n)) \\
@VV{f_{-1}}V @VV{f_0}V @. @VV{f_{k-q+1}}V \\
\RC_{-1}(\bbX_n/G'_n;\uV) @<<< \RC_{0}(\bbX_n/G'_n;\uV) @<<< \cdots @<<< \RC_{k-q+1}(\bbX_n/G'_n;\uV).
\end{CD}\]
Condition (g) says that $\bbX_n / G'_n$ is $\frac{n-2}{c}$-connected. Since $n \geq ck+2$, this means that
$\bbX_n / G'_n$ is $k$-connected. Since $0 \leq q \leq k-1$, we deduce that the bottom chain complex
of this diagram is exact except possibly at its rightmost term. To prove that the top
chain complex is exact except possibly at its rightmost term, it is thus enough to prove that
$f_p$ is an isomorphism for $-1 \leq p \leq k-q$ and a surjection for $p=k-q+1$.
Letting $\tau$ be a $p$-simplex of $\bbX_n/G'_n$, the map
$\cH_q(\cM_n)(\tau) \rightarrow \uV(\tau)$ is the map
\begin{equation}
\label{eqn:taustab}
\HH_q((G'_n)_{\ttau};\cM_n(\ttau)) \rightarrow \HH_q(G_n;M_n).
\end{equation}
We must prove that this is an isomorphism for $-1 \leq p \leq k-q$ and a surjection for $p=k-q+1$.
Condition (c) says that $G_n$ acts transitively on the $p$-simplices of $\bbX_n$,
so $\ttau$ is in the same $G_n$-orbit as the $p$-simplex $\sigma_p$ from conditions
(b) and (f), where
\[\HH_q((G'_n)_{\sigma_p};\cM_n(\sigma_p)) = \HH_q(G'_{n-p-1};M_{n-p-1}).\]
Whether or not \eqref{eqn:taustab} is an isomorphism/surjection is invariant under the
$G_n$-action, so it is enough to prove that
\[\HH_q(G'_{n-p-1};M_{n-p-1}) \rightarrow \HH_q(G_n;M_n)\]
is an isomorphism for $-1 \leq p \leq k-q$ and a surjection for $p=k-q+1$.
Factor this as
\[\HH_q(G'_{n-p-1};M_{n-p-1}) \rightarrow \HH_q(G_{n-p-1};M_{n-p-1}) \rightarrow \HH_q(G_n;M_n).\]
Since $G'_{n-p-1}$ is a finite-index subgroup of $G_{n-p-1}$, the transfer map (see \cite[Chapter III.9]{BrownCohomology})
implies that the first map is always a surjection, and our inductive hypothesis says that it is an
isomorphism if $n-p-1 \geq cq+2$. Also, Theorem \ref{theorem:stabilitymachine} says that the second map
is an isomorphism if $n-p-1 \geq cq+2$ and a surjection if $n-p-1 = cq+1$. To prove the
claim, it is thus enough to prove that $n-p-1 \geq cq+2$ if $-1 \leq p \leq k-q$ and
that $n-p-1 \geq cq+1$ if $p = k-q+1$.
We check this as follows. If $-1 \leq p \leq k-q$, then since $n \geq ck+2$ and $q \leq k-1$ and $c \geq 2$ we have
\begin{align*}
n-p-1 &\geq (ck+2) - (k-q) - 1 = (c-1)k+q+1 \\
&\geq (c-1)(q+1)+q+1 = cq+c \\
&\geq cq+2.
\end{align*}
If instead $p = k-q+1$, then we have
\begin{align*}
n-p-1 &\geq (ck+2) - (k-q+1)-1 = (c-1)k+q \\
&\geq (c-1)(q+1) + q = cq+c-1 \\
&\geq cq+1.\qedhere
\end{align*}
\end{proof}
\section{Stability II: the vanishing theorem}
\label{section:vanishingtheorem}
Let $X$ be a simplicial complex and let $\bbX$ be either a CM-small or large ordering of $X$. To
use Theorem \ref{theorem:stabilitymachine}, we need a way to prove that $\RH_k(\bbX;\cF) = 0$
in a range for an augmented coefficient system $\cF$. This section contains a useful criterion for this
that applies in many situations.
\begin{notation}
\label{notation:simplexnotation}
The following notation for $\bbX$ and $\cF$ will be used throughout this section:
\begin{itemize}
\item Simplices of $\bbX$ are ordered sequences of distinct vertices of $X$, and we will write
them as $(v_0,\ldots,v_k)$ with the $v_i$ vertices. If $\tau = (v_0,\ldots,v_k)$ and $\sigma = (w_0,\ldots,w_{\ell})$
are simplices such that $\sigma$ is in the forward link $\FLink_{\bbX}(\tau)$, then we will write
$\tau \cdot \sigma$ for the simplex $(v_0,\ldots,v_k,w_0,\ldots,w_{\ell})$.
Finally, we will write $\emptyset$ or $()$ for the unique $(-1)$-simplex used to define the
augmentation.
\item For a coefficient system $\cF$, we will write
$\cF(v_0,\ldots,v_k)$ for its value on $(v_0,\ldots,v_k)$ rather than the more awkward
but technically correct $\cF((v_0,\ldots,v_k))$. Also, if $\sigma'$ is a face of $\sigma$, then
there is only one face map taking $\sigma$ to $\sigma'$, so we will just talk about the induced
map $\cF(\sigma) \rightarrow \cF(\sigma')$.\qedhere
\end{itemize}
\end{notation}
\subsection{Polynomiality}
Our criterion applies to augmented coefficient systems $\cF$ that are {\em polynomial} of degree $d \geq -1$ up
to dimension $e \geq 0$.
This condition is inspired by the notion of polynomial $\FI$-modules (see Definition \ref{definition:polyfi}).
It is defined inductively in the degree $d$ as follows:
\begin{itemize}
\item A coefficient system $\cF$ is polynomial of degree $-1$ up to dimension $e$ if for all simplices
$\sigma$ of dimension at most $e$, we have $\cF(\sigma) = 0$.
\item A coefficient system $\cF$ is polynomial of degree $d \geq 0$ up to dimension $e$ if it satisfies
the following two conditions:
\begin{itemize}
\item If $\sigma$ is a simplex of dimension at most $e$, then the map $\cF(\sigma) \rightarrow \cF(\emptyset)$
is injective.
\item Let $\tau = (w_0,\ldots,w_{\ell})$ be a simplex with $\ell \leq e$ and let $\bbY$ be the forward link $\FLink_{\bbX}(\tau)$. Let $\cG$ be the coefficient
system on $\bbY$ defined by the formula
\[\cG(\sigma) = \frac{\cF(\sigma)}{\Image\left(\cF\left(w_{\ell} \cdot \sigma\right) \rightarrow \cF\left(\sigma\right)\right)} \quad \text{for a simplex $\sigma$ of $\bbY$}.\]
Then $\cG$ must be polynomial of degree $d-1$ up to dimension $e-\ell$.
\end{itemize}
\end{itemize}
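Before proceeding, we record the simplest example; it can be checked directly from the definition.
\begin{example}
For a $\bbk$-module $M$, the constant augmented coefficient system $\underline{M}$ (all values equal to $M$, all maps the identity) is polynomial of degree $0$ up to every dimension $e \geq 0$. Indeed, each map $\underline{M}(\sigma) \rightarrow \underline{M}(\emptyset)$ is the identity and hence injective, and for every $\tau$ the associated quotient system $\cG$ satisfies
\[\cG(\sigma) = \frac{M}{\Image\left(M \xrightarrow{\operatorname{id}} M\right)} = 0,\]
so $\cG$ is polynomial of degree $-1$ up to any dimension.
\end{example}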
\subsection{Key example}
The following lemma provides motivation for this definition.
\begin{lemma}
\label{lemma:fisystempoly}
Let $M$ be an $\FI$-module that is polynomial of degree $d$ starting at $m \geq 0$ (see Definition \ref{definition:polyfi}).
Fix some $n \geq m$, and let $\cF_{M,n}$ be the coefficient system on $\OSim_n$ discussed in Example \ref{example:orderingsystem}, so
\[\cF_{M,n}(i_0,\ldots,i_k) = M([n] \setminus \{i_0,\ldots,i_k\}) \quad \text{for all simplices $(i_0,\ldots,i_k)$ of $\OSim_n$}.\]
Then $\cF_{M,n}$ is polynomial of degree $d$ up to dimension $n-m$.
\end{lemma}
\begin{proof}
The proof will be by induction on $d$.
If $d=-1$, then consider a simplex $\sigma = (i_0,\ldots,i_k)$ with $k \leq n-m$. We then have
\[\cF_{M,n}(\sigma) = M([n] \setminus \{i_0,\ldots,i_k\}).\]
Since
\begin{equation}
\label{eqn:calccodim}
|[n] \setminus \{i_0,\ldots,i_k\}| = (n+1) - (k+1) = n-k \geq m,
\end{equation}
it follows from the fact that $M$ is polynomial of degree $-1$ starting at $m$ that $\cF_{M,n}(\sigma) = 0$, as desired.
Now assume that $d \geq 0$. There are two things to check. For the first, let $\sigma = (i_0,\ldots,i_k)$ be a simplex
with $k \leq n-m$. We must prove that the map $\cF_{M,n}(\sigma) \rightarrow \cF_{M,n}(\emptyset)$ is injective, i.e., that
the map
\[M([n] \setminus \{i_0,\ldots,i_k\}) \rightarrow M([n])\]
is injective. The calculation \eqref{eqn:calccodim} shows that this injectivity follows from the fact that
$M$ is polynomial of degree $d$ starting at $m$.
For the second, let $\tau = (w_0,\ldots,w_{\ell})$ be any simplex with $\ell \leq n-m$
and let $\bbY$ be the forward link $\FLink_{\OSim_n}(\tau)$. Let $\cG$ be the coefficient
system on $\bbY$ defined by the formula
\[\cG(\sigma) = \frac{\cF_{M,n}(\sigma)}{\Image\left(\cF_{M,n}\left(w_{\ell} \cdot \sigma\right) \rightarrow \cF_{M,n}\left(\sigma\right)\right)} \quad \text{for a simplex $\sigma$ of $\bbY$}.\]
We must prove that $\cG$ is polynomial of degree $d-1$ up to dimension $n-m-\ell$. Without loss of generality,
$\tau = (n-\ell,\ldots,n)$, so $\bbY = \FLink_{\OSim_n}(\tau) = \OSim_{n-\ell-1}$. Recall that we defined the derived $\FI$-module $DM$ in
Definition \ref{definition:derivedfi}. By construction, there is an isomorphism between
the coefficient systems $\cG$ and $\cF_{DM,n-\ell-1}$ on $\OSim_{n-\ell-1}$. Since $M$ is polynomial of
degree $d$ starting at $m$, the $\FI$-module $DM$ is
polynomial of degree $d-1$ starting at $m-1$. By induction, $\cG$ is polynomial of degree $d-1$ up to dimension
$(n-\ell-1)-(m-1)=n-m-\ell$, as desired.
\end{proof}
\subsection{Statement of vanishing theorem}
Having made these definitions, our vanishing theorem is as follows:
\begin{theorem}
\label{theorem:vanishing}
For some $N \geq -1$ and $d \geq -1$, let $X$ be a simplicial complex that is weakly Cohen--Macaulay of dimension $N+d+1$ and let
$\bbX$ be either a CM-small or large ordering of $X$. Let $\cF$ be an augmented coefficient system on $\bbX$
that is polynomial of degree $d$ up to dimension $N$. Then $\RH_k(\bbX;\cF) = 0$ for $-1 \leq k \leq N$.
\end{theorem}
\begin{proof}
By definition when $\bbX$ is a CM-small ordering of $X$ and by Lemma \ref{lemma:largerorderingcm} when $\bbX$ is the large
ordering of $X$, we have that
\begin{equation}
\label{eqn:forwardcm}
\text{$\bbX$ is weakly forward Cohen--Macaulay of dimension $N+d+1$}.
\end{equation}
The proof will be by induction on $d$. For the base case $d=-1$, by definition
the coefficient system $\cF$ equals the constant coefficient system $\underline{0}$ on the $N$-skeleton
of $\bbX$. This trivially implies that for $-1 \leq k \leq N$ we have $\RH_k(\bbX;\cF) = 0$.
Assume now that $d \geq 0$ and that the theorem is true for smaller $d$. We divide the rest of the
proof into four steps. The description of each step records the objects that are introduced
during that step and what the step reduces the theorem to, with the fourth step completing the proof.
The inductive hypothesis is invoked in that final step.
\begin{stepsa}
We introduce the shifted coefficient systems $\cF_n$ and reduce the theorem to proving that
$\RH_k(\bbX;\cF) \cong \RH_k(\bbX;\cF_{N+1})$ for $-1 \leq k \leq N$.
\end{stepsa}
For $n \geq -1$, define an augmented coefficient system $\cF_n$ on $\bbX$ via the formula
\[\cF_n(v_0,\ldots,v_k) = \begin{cases}
\cF(\emptyset) & \text{if $n \geq k$}\\
\cF(v_{n+1},\ldots,v_k) & \text{if $n<k$}
\end{cases}
\quad \text{for a simplex $(v_0,\ldots,v_k)$ of $\bbX$}.\]
The maps $\cF_n(\sigma) \rightarrow \cF_n(\sigma')$ when $\sigma'$ is a face of $\sigma$ are the ones induced
by $\cF$. The augmented coefficient system $\cF_{N+1}$ equals the constant coefficient system $\underline{\cF(\emptyset)}$
on all simplices of dimension at most $(N+1)$, so the fact that $\bbX$ is weakly forward Cohen--Macaulay
of dimension $(N+d+1)$ (see \eqref{eqn:forwardcm}) implies that $\bbX$ is $(N+d)$-connected, hence $N$-connected, and thus
\[\RH_k(\bbX;\cF_{N+1}) = \RH_k(\bbX;\underline{\cF(\emptyset)}) = 0 \quad \text{for $-1 \leq k \leq N$}.\]
There is a map $\cF \rightarrow \cF_{N+1}$, and to prove the theorem it is enough to prove
that this induces an isomorphism $\RH_k(\bbX;\cF) \cong \RH_k(\bbX;\cF_{N+1})$
for $-1 \leq k \leq N$.
\begin{stepsa}
We introduce the quotiented shifted coefficient systems $\ocF_{n}$ and reduce the theorem
to proving that $\RH_k(\bbX;\ocF_{n}) = 0$ for $0 \leq n \leq k \leq N+1$.
\end{stepsa}
We can factor the map $\cF \rightarrow \cF_{N+1}$ as
\[\cF = \cF_{-1} \rightarrow \cF_0 \rightarrow \cF_1 \rightarrow \cdots \rightarrow \cF_{N+1},\]
so it is enough to prove that the map $\cF_{n-1} \rightarrow \cF_n$ induces
an isomorphism $\RH_k(\bbX;\cF_{n-1}) \cong \RH_k(\bbX;\cF_n)$ for $0 \leq n \leq N+1$
and $-1 \leq k \leq N$.
Define
\begin{align*}
\ocF_n &= \coker(\cF_{n-1} \rightarrow \cF_n), \\
\cF'_{n-1} &= \Image(\cF_{n-1} \rightarrow \cF_n),\\
\cF''_{n-1} &= \ker(\cF_{n-1} \rightarrow \cF_n).
\end{align*}
We thus have short exact sequences of coefficient systems
\begin{equation}
\label{eqn:ses1}
0 \longrightarrow \cF'_{n-1} \longrightarrow \cF_n \longrightarrow \ocF_n \longrightarrow 0
\end{equation}
and
\begin{equation}
\label{eqn:ses2}
0 \longrightarrow \cF''_{n-1} \longrightarrow \cF_{n-1} \longrightarrow \cF'_{n-1} \longrightarrow 0.
\end{equation}
Both of these induce long exact sequences in homology. Let us focus first on the one associated to
\eqref{eqn:ses2}, which contains segments of the form
\begin{equation}
\label{eqn:les2}
\RH_k(\bbX;\cF''_{n-1}) \longrightarrow \RH_k(\bbX;\cF_{n-1}) \longrightarrow \RH_k(\bbX;\cF'_{n-1}) \longrightarrow \RH_{k-1}(\bbX;\cF''_{n-1}).
\end{equation}
Since $\cF$ is polynomial of degree $d$ up to
dimension $N$, for all simplices $\sigma$ of dimension at most $N$ and all faces $\sigma'$ of $\sigma$
the map $\cF(\sigma) \rightarrow \cF(\sigma')$ is injective, since even the composition $\cF(\sigma) \rightarrow \cF(\sigma') \rightarrow \cF(\emptyset)$ is injective. This implies that the map
$\cF_{n-1}(\sigma) \rightarrow \cF_n(\sigma)$ is injective as long as $\sigma$ has dimension
at most $N$, and thus that $\cF''_{n-1}(\sigma) = 0$. It follows that $\RH_k(\bbX;\cF''_{n-1}) = 0$ for $k \leq N$.
Combining this with \eqref{eqn:les2}, we see that
\begin{equation}
\label{eqn:les2conclude}
\RH_k(\bbX;\cF_{n-1}) \cong \RH_k(\bbX;\cF'_{n-1}) \quad \text{for $k \leq N$}.
\end{equation}
We now turn to the long exact sequence associated to \eqref{eqn:ses1}, which contains the segment
\[\RH_{k+1}(\bbX;\ocF_{n}) \longrightarrow \RH_k(\bbX;\cF'_{n-1}) \longrightarrow \RH_k(\bbX;\cF_n) \longrightarrow \RH_k(\bbX;\ocF_{n}).\]
In light of \eqref{eqn:les2conclude}, this implies that to
prove that the map $\cF_{n-1} \rightarrow \cF_n$ induces an isomorphism $\RH_k(\bbX;\cF_{n-1}) \cong \RH_k(\bbX;\cF_n)$
for $0 \leq n \leq N+1$ and $-1 \leq k \leq N$, it is enough to prove that
\[\RH_k(\bbX;\ocF_{n}) = 0 \quad \text{for $0 \leq n \leq N+1$ and $-1 \leq k \leq N+1$}.\]
This can be simplified a little further: if $\sigma$ is a $k$-simplex with $k \leq n-1$, then
$\cF_{n-1}(\sigma) = \cF_n(\sigma) = \cF(\emptyset)$, so $\ocF_{n}(\sigma) = 0$. The group
$\RH_k(\bbX;\ocF_{n})$ is thus automatically $0$ for $k \leq n-1$, so it is actually
enough to prove that
\[\RH_k(\bbX;\ocF_{n}) = 0 \quad \text{for $0 \leq n \leq k \leq N+1$}.\]
\begin{stepsa}
For each $n$-simplex $\tau$ with $0 \leq n \leq N+1$, we construct coefficient systems $\ocF_{n,\tau}$ and prove that
$\ocF_{n}$ is the direct sum of the $\ocF_{n,\tau}$ as $\tau$ ranges over the $n$-simplices, reducing
the theorem to proving that $\RH_k(\bbX;\ocF_{n,\tau}) = 0$ for $0 \leq n \leq k \leq N+1$.
\end{stepsa}
Fix some $0 \leq n \leq N+1$, and consider a $k$-simplex $\sigma$ of $\bbX$ with $k \geq n$. We
claim that the following holds:
\begin{itemize}
\item Let $\sigma'$ be a face of $\sigma$. Then the map $\ocF_n(\sigma) \rightarrow \ocF_n(\sigma')$ is the
zero map unless the first $(n+1)$ vertices of $\sigma$ and $\sigma'$ are equal.
\end{itemize}
To prove this, it is enough to prove the following special case:
\begin{itemize}
\item Let $(w_0,\ldots,w_n,v_0,\ldots,v_m)$ be a simplex of $\bbX$ and let $0 \leq i \leq n$. Then
the map $\ocF_n(w_0,\ldots,w_n,v_0,\ldots,v_m) \rightarrow \ocF_n(w_0,\ldots,\widehat{w_i},\ldots,w_n,v_0,\ldots,v_m)$
is the zero map.
\end{itemize}
For this, observe that
\[\ocF_n(w_0,\ldots,w_n,v_0,\ldots,v_m) = \frac{\cF(v_0,\ldots,v_m)}{\cF(w_n,v_0,\ldots,v_m)}\]
and
\[\ocF_n(w_0,\ldots,\widehat{w_i},\ldots,w_n,v_0,\ldots,v_m) = \frac{\cF(v_1,\ldots,v_m)}{\cF(v_0,\ldots,v_m)}.\]
The map between these is the zero map: it is induced by the face map $\cF(v_0,\ldots,v_m) \rightarrow \cF(v_1,\ldots,v_m)$, whose image is precisely what is quotiented out in the target.
For each $n$-simplex $\tau$ of $\bbX$, this suggests defining a coefficient system $\ocF_{n,\tau}$ on $\bbX$ via the formula
\[\ocF_{n,\tau}(\sigma) = \begin{cases}
\ocF_{n}(\sigma) & \text{if $\sigma$ starts with $\tau$}\\
0 & \text{otherwise}
\end{cases}
\quad \quad \text{for a simplex $\sigma$ of $\bbX$}.\]
By the above, we have a decomposition
\[\ocF_{n} = \bigoplus_{\tau \in \bbX^n} \ocF_{n,\tau}\]
of coefficient systems. To prove that
\[\RH_k(\bbX;\ocF_{n}) = 0 \quad \text{for $0 \leq n \leq k \leq N+1$},\]
it is thus enough to prove that for all $\tau \in \bbX^n$ we have
\[\RH_k(\bbX;\ocF_{n,\tau}) = 0 \quad \text{for $0 \leq n \leq k \leq N+1$}.\]
\begin{stepsa}
Fix an $n$-simplex $\tau$ with $0 \leq n \leq N+1$. We prove that
$\RH_k(\bbX;\ocF_{n,\tau}) = 0$ for $0 \leq n \leq k \leq N+1$.
\end{stepsa}
Define $\bbY = \FLink_{\bbX}(\tau)$. Recalling our notation for simplices of
$\bbX$ in Notation \ref{notation:simplexnotation},
define a coefficient system $\cG$ on $\bbY$ via the formula
\[\cG(\sigma) = \ocF_{n,\tau}(\tau \cdot \sigma) \quad \text{for a simplex $\sigma$ of $\bbY$}.\]
On the level of reduced chain complexes, up to multiplying the differentials by $(-1)^{n+1}$ we have
\[\RC_{\bullet}(\bbY;\cG) \cong \RC_{\bullet+n+1}(\bbX;\ocF_{n,\tau}),\]
so $\RH_{k}(\bbY;\cG) \cong \RH_{k+n+1}(\bbX;\ocF_{n,\tau})$. It is thus enough to prove that
$\RH_k(\bbY;\cG) = 0$ for $-1 \leq k \leq N-n$.
This will be an application of our inductive hypothesis. For this, we make two observations:
\begin{itemize}
\item Since $\bbX$ is weakly forward Cohen--Macaulay of dimension $(N+d+1)$ (see \eqref{eqn:forwardcm}), the
forward link $\bbY$ of the $n$-simplex $\tau$ is weakly forward Cohen--Macaulay of dimension $(N-n+d)$.
\item Write $\tau = (w_0,\ldots,w_n)$. For a simplex $\sigma$ of $\bbY$, we have
\[\cG(\sigma) = \ocF_{n,\tau}((w_0,\ldots,w_n) \cdot \sigma) = \frac{\cF(\sigma)}{\cF((w_n) \cdot \sigma)}.\]
Since $\cF$ is a polynomial coefficient system of degree $d$ up to dimension $N$, it follows that $\cG$ is a polynomial
coefficient system of degree $(d-1)$ up to dimension $(N-n)$.
\end{itemize}
Since
\[N-n+d=(N-n)+(d-1)+1,\]
we can apply our inductive hypothesis to conclude that $\RH_k(\bbY;\cG) = 0$ for $-1 \leq k \leq N-n$.
\end{proof}
\section{Stability for symmetric groups}
\label{section:sn}
We now turn to applications of our machinery, starting with Theorems \ref{maintheorem:sn}
and \ref{maintheorem:snprime}.
\begin{proof}[Proof of Theorem \ref{maintheorem:sn}]
We first recall the statement. Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$ that is polynomial
of degree $d$ starting at $m \geq 0$. For each $k \geq 0$, we must prove that the map
\[\HH_k(\fS_{n};M(\overline{n})) \rightarrow \HH_k(\fS_{n+1};M(\overline{n+1}))\]
is an isomorphism for $n \geq 2k+\max(d,m-1)+2$ and a surjection for $n=2k+\max(d,m-1)+1$.
Recall that $\overline{n} = \{1,\ldots,n\}$ and $[n-1] = \{0,\ldots,n-1\}$. The group $\fS_n$ acts
on both $M(\overline{n})$ and $M([n-1])$, and there is a $\bbk[\fS_n]$-module isomorphism
$M(\overline{n}) \cong M([n-1])$. In light of this, it is enough to deal with the map
\[\HH_k(\fS_{n};M([n-1])) \rightarrow \HH_k(\fS_{n+1};M([n])),\]
which will fit into our framework a little better.
The group $\fS_n$ acts on $\OSim_{n-1}$. Let $\cF_{M,n-1}$ be
the $\fS_n$-equivariant augmented system of coefficients on $\OSim_{n-1}$ from Example \ref{example:snequivariant}, so
\[\cF_{M,n-1}(i_0,\ldots,i_k) = M([n-1] \setminus \{i_0,\ldots,i_k\}) \quad \text{for a simplex $(i_0,\ldots,i_k)$ of $\OSim_{n-1}$}.\]
The following claim will be used to show that with an appropriate degree shift, this all satisfies
the hypotheses of Theorem \ref{theorem:stabilitymachine}.
\begin{claim}
The following hold:
\begin{itemize}
\item[(a)] For all $-1 \leq k \leq n-\max(d,m-1)-2$, we have $\RH_k(\OSim_{n-1};\cF_{M,n-1}) = 0$.
\item[(b)] For all $-1 \leq k \leq n-1$, the group $\fS_{n-k-1}$ is the $\fS_n$-stabilizer of a $k$-simplex
$\sigma_k$ of $\OSim_{n-1}$ with $\cF_{M,n-1}(\sigma_k) = M([n-k-2])$.
\item[(c)] For all $0 \leq k \leq n-1$, the group $\fS_n$ acts transitively on the $k$-simplices of $\OSim_{n-1}$.
\item[(d)] For all $n \geq 2$ and all $1$-simplices $e$ of $\OSim_{n-1}$ of the form $e = (i_0,i_1)$, there
exists some $\lambda \in \fS_n$ with $\lambda(i_0) = i_1$ such that $\lambda$ commutes with all
elements of $(\fS_n)_e$ and fixes all elements of $\cF_{M,n-1}(e)$.
\end{itemize}
\end{claim}
\begin{proof}[Proof of claim]
For (a), Lemma \ref{lemma:fisystempoly} says that $\cF_{M,n-1}$ is a polynomial coefficient system of
degree $d$ up to dimension $n-m-1$. Also, $\OSim_{n-1}$ is the large ordering of the $(n-1)$-simplex
$\Sim_{n-1}$ and $\Sim_{n-1}$ is weakly Cohen--Macaulay of dimension $(n-1)$. Letting
\[N = \min(n-m-1,n-d-2) = n - \max(d,m-1)-2,\]
the complex $\Sim_{n-1}$ is weakly Cohen--Macaulay of dimension $N+d+1$ and $\cF_{M,n-1}$ is a polynomial
coefficient system of degree $d$ up to dimension $N$. Theorem \ref{theorem:vanishing} thus
implies that $\RH_k(\OSim_{n-1};\cF_{M,n-1}) = 0$ for $-1 \leq k \leq N$.
For (b), the group $\fS_{n-k-1}$ is the $\fS_n$-stabilizer of the $k$-simplex
\[\sigma_k = \begin{cases}
\emptyset & \text{if $k=-1$},\\
(n-k-1,n-k,\ldots,n-1) & \text{if $0 \leq k \leq n-1$}
\end{cases}\]
of $\OSim_{n-1}$, and by definition
\[\cF_{M,n-1}(\sigma_k) = M([n-1] \setminus \{n-k-1,n-k,\ldots,n-1\}) = M([n-k-2]).\]
Condition (c) is obvious.
For (d), simply let $\lambda$ be the transposition $(i_0,i_1)$, which acts trivially on
\[\cF_{M,n-1}(e) = M([n-1] \setminus \{i_0,i_1\})\]
by basic properties of $\FI$-modules.
\end{proof}
Letting $e = \max(d,m-1)$, the Claim verifies that the conditions of Theorem \ref{theorem:stabilitymachine} are satisfied for
\[G_n = \fS_{n+e+1} \quad \text{and} \quad M_n = M([n+e]) \quad \text{and} \quad \bbX_n = \OSim_{n+e}
\quad \text{and} \quad \cM_n = \cF_{M,n+e}\]
with $c=2$. The shift by $e+1$ is needed for condition (a) of Theorem \ref{theorem:stabilitymachine}, which
requires that $\RH_k(\bbX_n;\cM_n) = 0$ for all $-1 \leq k \leq \frac{n-2}{2}$. Conclusion (a)
of the Claim says that $\RH_k(\OSim_{n+e};\cF_{M,n+e}) = 0$ for $-1 \leq k \leq (n+e+1)-e-2$, which implies
the desired range of vanishing for $\RH_k(\bbX_n;\cM_n)$ since
\[(n+e+1)-e-2 = n-1 \geq \frac{n-2}{2} \quad \text{for $n \geq 0$}.\]
Applying Theorem \ref{theorem:stabilitymachine}, we deduce that the map
\[\HH_k(\fS_{n+e};M([n+e-1])) \rightarrow \HH_k(\fS_{n+e+1};M([n+e]))\]
is an isomorphism for $n \geq 2k+2$ and a surjection for $n = 2k+1$, which implies
that
\[\HH_k(\fS_{n};M([n-1])) \rightarrow \HH_k(\fS_{n+1};M([n]))\]
is an isomorphism for $n \geq 2k+e+2$ and a surjection for $n = 2k+e+1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{maintheorem:snprime}]
We first recall the statement. Let $\bbk$ be a commutative ring and let $M$ be an $\FI$-module over $\bbk$ that is polynomial
of degree $d$ starting at $m \geq 0$. For each $k \geq 0$, we must prove that the map
\begin{equation}
\label{eqn:snprimetoprove}
\HH_k(\fS_{n};M(\overline{n})) \rightarrow \HH_k(\fS_{n+1};M(\overline{n+1}))
\end{equation}
is an isomorphism for $n \geq \max(m,2k+2d+2)$ and a surjection for $n \geq \max(m,2k+2d)$.
The proof will be by double induction on $d$ and $m$. There are three base cases:
\begin{itemize}
\item The first is where $m=0$ and $d \geq 1$. Theorem \ref{maintheorem:sn}
says in this case that \eqref{eqn:snprimetoprove} is an isomorphism for
\[n \geq 2k+\max(d,m-1)+2 = 2k+\max(d,-1)+2 = 2k+d+2\]
and a surjection for
\[n=2k+\max(d,m-1)+1 = 2k+\max(d,-1)+1 = 2k+d+1.\]
Since $d \geq 1$, these bounds are even stronger than our purported bounds of
\[n \geq \max(m,2k+2d+2) = \max(0,2k+2d+2) = 2k+2d+2\]
for \eqref{eqn:snprimetoprove} to be an isomorphism and
\[n \geq \max(m,2k+2d) = \max(0,2k+2d) = 2k+2d\]
for \eqref{eqn:snprimetoprove} to be a surjection.
\item The second is where $m=0$ and $d=0$. As we discussed in Example \ref{example:deg0fi}, this
implies that the $M(\overline{n})$ are all the same trivial $\bbk[\fS_n]$-representation. We
can thus appeal to the classical theorem of Nakaoka \cite[Corollary 6.7]{Nakaoka} saying that
for these trivial constant coefficient systems the map \eqref{eqn:snprimetoprove} is an isomorphism
for $n \geq 2k$. This is again even stronger than our purported bounds of
\[n \geq \max(m,2k+2d+2) = \max(0,2k+2) = 2k+2\]
for \eqref{eqn:snprimetoprove} to be an isomorphism and
\[n \geq \max(m,2k+2d) = \max(0,2k) = 2k\]
for \eqref{eqn:snprimetoprove} to be a surjection.
\item The third is where $m \geq 0$ and $d = -1$. In this case, by the definition of an $\FI$-module
being polynomial of degree $-1$ starting at $m$ we have for $n \geq m$ that $M(\overline{n}) = 0$ and
hence $\HH_k(\fS_{n};M(\overline{n})) = 0$. In other words, for $n \geq m$ the domain
and codomain of \eqref{eqn:snprimetoprove} are both $0$, so it is trivially an isomorphism.
\end{itemize}
Assume now that $m \geq 1$ and $d \geq 0$, and that the theorem is true for all smaller $m$ and $d$. As
in Definition \ref{definition:polyfi}, let $\Sigma M$ be the shifted $\FI$-module and $D M$ be the derived
$\FI$-module. For $n \geq m$, we have a short exact sequence
\begin{equation}
\label{eqn:fishiftseq}
0 \longrightarrow M(\on) \longrightarrow \Sigma M(\on) \longrightarrow DM(\on) \longrightarrow 0
\end{equation}
of $\bbk[\fS_n]$-modules (see Remark \ref{remark:exactsequence}). The $\FI$-module $\Sigma M$ is polynomial
of degree $d$ starting at $(m-1)$, and the $\FI$-module $DM$ is polynomial of degree $(d-1)$ starting at $(m-1)$.
To simplify our notation, for all $r \geq 0$ and
all $\bbk[\fS_r]$-modules $N$, we will denote $\HH_k(\fS_r;N)$ by $\HH_k(N)$.
The long exact sequence in $\fS_n$-homology associated to \eqref{eqn:fishiftseq}
maps to the one in $\fS_{n+1}$-homology, so for $n \geq m$ and all $k$ we have a commutative diagram
\begin{center}
\scalebox{0.89}{$\minCDarrowwidth10pt\begin{CD}
\HH_{k+1}(\Sigma M(\on)) @>>> \HH_{k+1}(DM(\on)) @>>> \HH_k(M(\on)) @>>> \HH_k(\Sigma M(\on)) @>>> \HH_k(DM(\on)) \\
@VV{g_1}V @VV{g_2}V @VV{f_1}V @VV{f_2}V @VV{f_3}V \\
\HH_{k+1}(\Sigma M(\overline{n+1})) @>>> \HH_{k+1}(DM(\overline{n+1})) @>>> \HH_k(M(\overline{n+1})) @>>> \HH_k(\Sigma M(\overline{n+1})) @>>> \HH_k(DM(\overline{n+1}))
\end{CD}$}
\end{center}
with exact rows. Our inductive hypothesis says the following about the $g_i$ and $f_i$:
\begin{itemize}
\item Since $\Sigma M$ is polynomial of degree $d$ starting at $(m-1)$, the map $f_2$ is an isomorphism
for $n \geq \max(m-1,2k+2d+2)$ and a surjection for $n \geq \max(m-1,2k+2d)$. Also, the map
$g_1$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2d+2) = \max(m-1,2k+2d+4)\]
and a surjection for $n \geq \max(m-1,2k+2d+2)$.
\item Since $DM$ is polynomial of degree $(d-1)$ starting at $(m-1)$, the map $f_3$ is an
isomorphism for
\[n \geq \max(m-1,2k+2(d-1)+2) = \max(m-1,2k+2d)\]
and a surjection for $n \geq \max(m-1,2k+2d-2)$. Also, the map $g_2$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2(d-1)+2) = \max(m-1,2k+2d+2)\]
and a surjection for $n \geq \max(m-1,2k+2d)$.
\end{itemize}
For $n \geq \max(m,2k+2d+2)$, the maps $g_2$ and $f_2$ and $f_3$ are isomorphisms and the map $g_1$ is a surjection, so
by the five-lemma the map $f_1$ is an isomorphism. For $n \geq \max(m,2k+2d)$, the maps $g_2$ and $f_2$ are surjections
and the map $f_3$ is an isomorphism, so by the five-lemma\footnote{Or, more precisely, one of the four-lemmas.} the
map $f_1$ is a surjection. The claim follows.
\end{proof}
\section{The stable rank of rings}
\label{section:stablerank}
The rest of the paper is devoted to the general linear group and its congruence subgroups.
We begin with a discussion of a ring-theoretic condition called the stable
rank that was introduced by Bass \cite{BassKTheory}. To make this paper a bit
more self-contained,\footnote{And to avoid depending on \cite{BassKTheory},
which is out of print.} we will include the proofs of the results we need (most of which are due
to Bass) if they are short.
\subsection{Unimodular vectors}
The starting point is the following.
\begin{definition}
Let $R$ be a ring. A vector $v \in R^n$ is {\em unimodular} if there is a homomorphism
$\phi\colon R^n \rightarrow R$ of right $R$-modules such that $\phi(v) = 1$.
\end{definition}
\begin{example}
\label{example:unimodular}
The columns of matrices in $\GL_n(R)$ are unimodular.
\end{example}
The following lemma might clarify this definition.
\begin{lemma}
\label{lemma:unimodularsum}
Let $R$ be a ring and let $v = (c_1,\ldots,c_n) \in R^n$. Then $v$ is unimodular if and only if
there exist $a_1,\ldots,a_n \in R$ such that $a_1 c_1 + \cdots + a_n c_n = 1$.
\end{lemma}
\begin{proof}
Immediate from the fact that all right $R$-module morphisms $\phi\colon R^n \rightarrow R$ are of the
form $\phi(x_1,\ldots,x_n) = a_1 x_1 + \cdots + a_n x_n$ for some $a_1,\ldots,a_n \in R$.
\end{proof}
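To illustrate the lemma in the simplest case:
\begin{example}
The vector $(2,3) \in \Z^2$ is unimodular since $(-1) \cdot 2 + 1 \cdot 3 = 1$, while $(2,4) \in \Z^2$ is not unimodular since $a_1 \cdot 2 + a_2 \cdot 4$ is even for all $a_1,a_2 \in \Z$.
\end{example}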
\subsection{Stable rank}
In light of Example \ref{example:unimodular}, one might hope that all unimodular vectors
in $R^n$ can appear as columns of matrices in $\GL_n(R)$. This holds if $R$ is a field or
if $R = \Z$, but does not hold in general; a classical counterexample is recalled below. To clarify this, we make the following definition.
\begin{definition}
\label{definition:stablerank}
A ring $R$ satisfies {\em Bass's stable rank condition} $(\SR_r)$ if the following holds for all
$n \geq r$. Let $(c_1,\ldots,c_n) \in R^n$ be a unimodular vector. Then there
exist $b_1,\ldots,b_{n-1} \in R$
such that $(c_1+b_1 c_n,\ldots,c_{n-1}+b_{n-1} c_n) \in R^{n-1}$ is unimodular.
\end{definition}
\begin{example}
This condition only makes sense for $r \geq 2$. It is easy to see that fields satisfy $(\SR_2)$ and that PIDs satisfy $(\SR_3)$.
More generally, if a ring $R$ is finitely generated as a module over a Noetherian commutative ring $S$ of
Krull dimension $r$, then $R$ satisfies $(\SR_{r+2})$ (see \cite[Theorem V.3.5]{BassKTheory}).
\end{example}
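The following classical example, which we include only for motivation and will not use later, shows that unimodular vectors need not extend to invertible matrices.
\begin{example}
Let $R = \mathbb{R}[x,y,z]/(x^2+y^2+z^2-1)$ be the coordinate ring of the $2$-sphere. The vector $v = (x,y,z) \in R^3$ is unimodular since $x \cdot x + y \cdot y + z \cdot z = 1$. However, $v$ is not a column of any matrix in $\GL_3(R)$. Indeed, the kernel $T$ of the map $R^3 \rightarrow R$ taking $(w_1,w_2,w_3)$ to $x w_1 + y w_2 + z w_3$ satisfies $R^3 = T \oplus v \cdot R$; if $v$ were a column of an invertible matrix, the remaining columns would exhibit another complement of $v \cdot R$ that is free of rank $2$, so $T \cong R^3/(v \cdot R)$ would be free of rank $2$. But $T$ is the module of tangent vector fields on the $2$-sphere, and a basis of $T$ would give a nowhere-vanishing tangent vector field, contradicting the hairy ball theorem.
\end{example}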
\subsection{Elementary matrices and their action on unimodular vectors}
Recall that an {\em elementary matrix} in $\GL_n(R)$ is a matrix
that differs from the identity matrix at exactly one off-diagonal position. Let $\EL_n(R)$ be
the subgroup of $\GL_n(R)$ generated by elementary matrices.
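Concretely, for $n = 2$ the elementary matrices are the matrices
\[\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix} \quad \text{with $\lambda \in R$ nonzero}.\]
Acting on a column vector $(c_1,c_2) \in R^2$, the first replaces $c_1$ by $c_1 + \lambda c_2$ and the second replaces $c_2$ by $c_2 + \lambda c_1$; the proofs below repeatedly use such moves.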
One of the key properties of rings satisfying Bass's stable rank condition is the following lemma,
which implies in particular that under its hypotheses all unimodular vectors appear as
columns of matrices in $\GL_n(R)$. In fact, they even appear as columns of matrices in $\EL_n(R)$.
\begin{lemma}
\label{lemma:unimodulartransitive}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. Then $\EL_n(R)$ acts
transitively on the set of unimodular vectors in $R^n$.
\end{lemma}
\begin{proof}
Let $v \in R^n$ be a unimodular vector. It is enough to find some $M \in \EL_n(R)$
such that $M \cdot v = (0,\ldots,0,1)$. Write $v = (c_1,\ldots,c_n)$. Using elementary
matrices, for any distinct $1 \leq i,j \leq n$ and any $\lambda \in R$ we can
add $\lambda$ times the $i^{\text{th}}$ entry to the $j^{\text{th}}$ one, changing the entry
$c_j$ to $c_j + \lambda c_i$. We will use a series of these moves to transform
$v$ into $(0,\ldots,0,1)$.
Applying $(\SR_r)$, we can add multiples of the last entry to the other ones to ensure
that the first $(n-1)$ entries of $v$ form a unimodular vector in $R^{n-1}$. Using Lemma \ref{lemma:unimodularsum},
we can find $a_1,\ldots,a_{n-1} \in R$ such that $a_1 c_1 + \cdots + a_{n-1} c_{n-1} = 1$. For each $1 \leq i \leq n-1$, add $(1-c_n) a_i$ times
the $i^{\text{th}}$ entry to the $n^{\text{th}}$ one. This transforms $v$ into
$(c_1,\ldots,c_{n-1},1)$. Finally, add appropriate multiples of the last entry
of $v$ to the other ones to transform it into $(0,\ldots,0,1)$.
\end{proof}
\subsection{Generation by elementary matrices}
If $\bbk$ is a field then $\EL_n(\bbk) = \SL_n(\bbk)$,
so $\GL_n(\bbk)$ is generated by $\EL_n(\bbk)$ and $\GL_1(\bbk)$, which is embedded in $\GL_n(\bbk)$ via the upper
left hand corner matrix embedding. The following lemma shows that the stable rank condition implies
a similar type of result:
\begin{lemma}
\label{lemma:generategl}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r-1$. Then $\GL_n(R)$ is generated
by $\EL_n(R)$ and $\GL_{r-1}(R) \subset \GL_n(R)$.
\end{lemma}
\begin{proof}
The proof will be by induction on $n$. The base case $n=r-1$ is trivial, so assume that $n \geq r$ and that
the lemma is true for smaller $n$.
Let $\{v_1,\ldots,v_n\}$ be the standard basis for the right $R$-module $R^n$ and let $C = \oplus_{i=1}^{n-1} v_i \cdot R$.
Consider some $M \in \GL_n(R)$. Applying Lemma \ref{lemma:splitunimodulartransitive} below, we can find some $N \in \EL_n(R)$
such that $N \cdot (M \cdot v_n) = v_n$ and $N \cdot (M \cdot C) = C$. It follows that $N M \in \GL_{n-1}(R)$, so
by induction $N M$ lies in the subgroup generated by elementary matrices and $\GL_{r-1}(R)$. We conclude that $M$
does as well.
\end{proof}
The above proof used the following lemma, which refines Lemma \ref{lemma:unimodulartransitive}.
\begin{lemma}
\label{lemma:splitunimodulartransitive}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. Let $x,y \in R^n$ be unimodular vectors and
$C,D \subset R^n$ be $R$-submodules such that
$R^n = C \oplus x \cdot R$ and $R^n = D \oplus y \cdot R$. Then
there exists some $M \in \EL_n(R)$ such that $M \cdot x = y$ and $M \cdot C = D$.
\end{lemma}
\begin{proof}
Let $\{v_1,\ldots,v_n\}$ be the standard basis for the right $R$-module $R^n$. It is enough to deal with
the case where $x = v_n$ and $C = \oplus_{i=1}^{n-1} v_i \cdot R$. What is more,
Lemma \ref{lemma:unimodulartransitive} says that $\EL_n(R)$ acts
transitively on unimodular vectors in $R^n$, so we can assume without loss of generality that $y = v_n$ as well.
Let $\rho\colon R^n \rightarrow R$ be the projection with $D = \ker(\rho)$ and $\rho(v_n) = 1$.
For $1 \leq i \leq n-1$, let $\lambda_i = \rho(v_i)$. Define $M \in \GL_n(R)$ via the formula
\[M \cdot v_i = v_i - v_n \cdot \lambda_i \quad \text{for $1 \leq i \leq n-1$, and $M \cdot v_n = v_n$}.\]
It is easy to see that $M$ can be written as a product of $(n-1)$ elementary matrices, so $M \in \EL_n(R)$.
For $1 \leq i \leq n-1$, we have
\[\rho(M \cdot v_i) = \rho(v_i) - \rho(v_n) \lambda_i = \lambda_i - \lambda_i = 0,\]
so $M \cdot v_i \in D$. We conclude that $M \cdot C = D$.
\end{proof}
\subsection{Stable freeness}
For a general ring $R$, there can exist non-free $R$-modules $C$ that are stably free in the sense
that $C \oplus R^k \cong R^n$ for some $k \geq 1$. The stable rank condition prevents this, at least
if $k$ is not too large:
\begin{lemma}
\label{lemma:stabledecomp}
Let $R$ be a ring satisfying $(\SR_r)$. Let $C$ be a right $R$-module such that $C \oplus R^k \cong R^n$.
Assume that $k \leq n-r+1$. Then $C \cong R^{n-k}$.
\end{lemma}
\begin{proof}
It is enough to deal with the case where $k=1$, so $n \geq r$. Identify $C$ with a submodule of $R^n$ and
let $x \in R^n$ be the unimodular vector with $R^n = C \oplus x \cdot R$. Letting $\{v_1,\ldots,v_n\}$
be the standard basis for $R^n$, Lemma \ref{lemma:splitunimodulartransitive} implies that there exists
some $M \in \GL_n(R)$ with $M \cdot x = v_n$ and $M \cdot C = \oplus_{i=1}^{n-1} v_i \cdot R$. In particular,
$M \cdot C \cong R^{n-1}$, so $C \cong R^{n-1}$ as well.
\end{proof}
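To see that the hypothesis on $k$ cannot simply be dropped, we revisit (again only for motivation) the sphere example from above.
\begin{example}
Let $R = \mathbb{R}[x,y,z]/(x^2+y^2+z^2-1)$ and let $T$ be the kernel of the map $R^3 \rightarrow R$ taking $(w_1,w_2,w_3)$ to $x w_1 + y w_2 + z w_3$, so $T \oplus R \cong R^3$ but $T$ is not free. Here $R$ is a commutative Noetherian ring of Krull dimension $2$, so it satisfies $(\SR_4)$, and indeed $k = 1$ violates the bound $k \leq n-r+1 = 0$.
\end{example}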
\subsection{Quotients of rings}
The following lemma shows that the stable rank condition is preserved by quotients:
\begin{lemma}
\label{lemma:stablequotient}
Let $R$ be a ring satisfying $(\SR_r)$ and let $\fq$ be a two-sided ideal in $R$. Then $R/\fq$ satisfies
$(\SR_r)$.
\end{lemma}
\begin{proof}
Let $n \geq r$ and let $\ov \in (R/\fq)^n$ be a unimodular vector. It is enough to prove
that $\ov$ can be lifted to a unimodular vector in $R^n$. Let $(c_1,\ldots,c_n) \in R^n$ be
any vector projecting to $\ov$. Since $\ov$ is unimodular, Lemma \ref{lemma:unimodularsum} implies that
there exist $a_1,\ldots,a_n \in R$ and $q \in \fq$ such that $a_1 c_1 + \cdots + a_n c_n = 1+q$. It
follows that $(c_1,\ldots,c_n,q) \in R^{n+1}$ is unimodular, so by $(\SR_r)$ we can find
$b_1,\ldots,b_n \in R$ such that $v = (c_1 + b_1 q,\ldots,c_n+b_n q) \in R^n$ is unimodular. The
vector $v$ projects to $\ov$.
\end{proof}
\subsection{Stable rank modulo an ideal}
The following lemma gives a variant of the stable rank condition that takes into account an ideal.
\begin{lemma}
\label{lemma:stableideal}
Let $R$ be a ring satisfying $(\SR_r)$ and let $\fq$ be a two-sided ideal of $R$. For some
$n \geq r$, let $(c_1,\ldots,c_n) \in R^n$ be a unimodular vector with $c_n \in \fq$.
Then there exists $b_1,\ldots,b_{n-1} \in \fq$ such that
$(c_1+b_1 c_n,\ldots,c_{n-1} + b_{n-1} c_n) \in R^{n-1}$ is unimodular.
\end{lemma}
\begin{proof}
Using Lemma \ref{lemma:unimodularsum}, we can find $a_1,\ldots,a_n \in R$ with $a_1 c_1+\cdots+a_n c_n = 1$.
We claim that $(c_1,\ldots,c_{n-1},c_n a_n c_n) \in R^n$ is unimodular. Indeed, we have
\begin{align*}
&\left(a_1+a_n c_n a_1\right)c_1 + \cdots + \left(a_{n-1}+a_n c_n a_{n-1}\right) c_{n-1} + \left(a_n\right) c_n a_n c_n \\
&\ \ \ \ = \left(a_1 c_1 + \cdots + a_{n-1} c_{n-1}\right) + a_n c_n \left(a_1 c_1 + \cdots + a_n c_n\right) \\
&\ \ \ \ = \left(a_1 c_1 + \cdots + a_{n-1} c_{n-1}\right) + a_n c_n = 1,
\end{align*}
as desired. Applying $(\SR_r)$, we can find $b'_1,\ldots,b'_{n-1} \in R$ such that
$(c_1 + b'_1 c_n a_n c_n,\ldots,c_{n-1} + b'_{n-1} c_n a_n c_n) \in R^{n-1}$ is unimodular. Since
$c_n \in \fq$, so is $b_i = b'_i c_n a_n$. The lemma follows.
\end{proof}
\subsection{Elementary congruence subgroups and their action on unimodular vectors}
Recall that if $\fq$ is an ideal in $R$, then $\GL_n(R,\fq)$ is the level-$\fq$ congruence subgroup
of $\GL_n(R)$, i.e., the kernel of the map $\GL_n(R) \rightarrow \GL_n(R/\fq)$. An elementary
matrix lies in $\GL_n(R,\fq)$ precisely when its single off-diagonal entry lies in $\fq$. Let
$\EL_n(R,\fq)$ be the subgroup of $\EL_n(R)$ that is {\bf normally} generated by elementary
matrices lying in $\GL_n(R,\fq)$.
Lemma \ref{lemma:unimodulartransitive}
says that the stable rank condition implies that under suitable conditions $\EL_n(R)$ acts transitively
on the set of unimodular vectors. The following lemma strengthens this:
\begin{lemma}
\label{lemma:unimodulartransitivecongruence}
Let $R$ be a ring satisfying $(\SR_r)$ and let $\fq$ be an ideal in $R$. For some
$n \geq r$, let $v,v' \in R^n$ be unimodular vectors that map to the same vector in $(R/\fq)^n$.
Then there exists some $M \in \EL_n(R,\fq)$ with $M \cdot v = v'$.
\end{lemma}
\begin{proof}
We will prove this in two steps.
\begin{stepsb}
This is true if $v' = (1,0,\ldots,0)$.
\end{stepsb}
Since $v$ and $(1,0,\ldots,0)$ map to the same vector in $(R/\fq)^n$, we can write
$v = (1+q_1,q_2,\ldots,q_n)$ with $q_1,\ldots,q_n \in \fq$. Recall that an elementary matrix
lies in $\GL_n(R,\fq)$ if its single off-diagonal entry lies in $\fq$. Just like
in the proof of Lemma \ref{lemma:unimodulartransitive}, using such elementary
matrices, for any distinct $1 \leq i,j \leq n$ and any $\lambda \in \fq$ we can add
$\lambda$ times the $i^{\text{th}}$ entry to the $j^{\text{th}}$ one. We will use
a series of these moves (plus one extra trick) to transform $v$ into $(1,0,\ldots,0)$.
Applying the relative version of $(\SR_r)$ given by Lemma \ref{lemma:stableideal},
we can add $\fq$-multiples of the last entry to the other ones to ensure
that the first $(n-1)$ entries of $v$ form a unimodular vector in $R^{n-1}$.
Using Lemma \ref{lemma:unimodularsum}, we can thus find $a_1,\ldots,a_{n-1} \in R$ such
that $a_1(1+q_1) + a_2 q_2 + \cdots + a_{n-1} q_{n-1} = 1$. For each
$1 \leq i \leq n-1$, add $(q_1-q_n) a_i \in \fq$ times the
$i^{\text{th}}$ entry to the $n^{\text{th}}$ one. This transforms $v$ into
$(1+q_1,q_2,\ldots,q_{n-1},q_1)$.
At this point, we will do something slightly tricky. Let $E \in \EL_n(R)$ be the
elementary matrix that subtracts the $n^{\text{th}}$ row from the first one (notice
that $E$ does not lie in $\GL_n(R,\fq)$). We thus
have
\[E \cdot v = E \cdot (1+q_1,q_2,\ldots,q_{n-1},q_1) = (1,q_2,\ldots,q_{n-1},q_1).\]
We can then find a product $F \in \EL_n(R,\fq)$ of elementary matrices such that
\[F \cdot (1,q_2,\ldots,q_{n-1},q_1) = (1,0,\ldots,0).\]
Indeed, $F$ first subtracts $q_2 \in \fq$ times the first entry from the $2^{\text{nd}}$, then
subtracts $q_3 \in \fq$ times the first entry from the $3^{\text{rd}}$, etc., and
finishes by subtracting $q_1$ times the first entry from the $n^{\text{th}}$. Since $E$ fixes
$(1,0,\ldots,0)$, it follows that
\[E^{-1} F E \cdot v = (1,0,\ldots,0).\]
Since $\EL_n(R,\fq)$ is a normal subgroup of $\EL_n(R)$, we have $M = E^{-1} F E \in \EL_n(R,\fq)$, as desired.
\begin{stepsb}
This is true in general.
\end{stepsb}
Set $w = (1,0,\ldots,0)$. Lemma \ref{lemma:unimodulartransitive} says that there exists some $A \in \EL_n(R)$ with $A \cdot v' = w$.
Applying the previous step to $A \cdot v$, we can find some $B \in \EL_n(R,\fq)$ with $B \cdot A \cdot v = w$, so
$A^{-1} B A \cdot v = A^{-1} \cdot w = v'$. Since $\EL_n(R,\fq)$ is a normal subgroup of $\EL_n(R)$, it follows
that $M = A^{-1} B A$ lies in $\EL_n(R,\fq)$.
\end{proof}
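To make the two steps of this argument concrete, here is a small worked instance over $\Z$, included only for illustration; it uses the standard fact that $\Z$ satisfies $(\SR_3)$.
\begin{example}
Let $R = \Z$, let $\fq = 2\Z$, and let $v = (3,2,4) \in \Z^3$, which is unimodular and maps to $(1,0,0)$ in $(\Z/2)^3$. Here $q_1 = 2$, $q_2 = 2$, and $q_3 = 4$. The first two entries $(3,2)$ already form a unimodular vector, and $1 \cdot 3 + (-1) \cdot 2 = 1$, so $a_1 = 1$ and $a_2 = -1$. Adding $(q_1 - q_3)a_i$ times the $i^{\text{th}}$ entry to the third for $i = 1,2$ turns $v$ into
\[(3,\ 2,\ 4 + (-2)(3) + (2)(2)) = (3,2,2).\]
Applying $E$ gives $(1,2,2)$, and $F$ then clears the last two entries using multiples of the first, yielding $(1,0,0)$. All of the moves other than $E$ use multipliers in $2\Z$, so $E^{-1} F E \in \EL_3(\Z,2\Z)$ takes $v$ to $(1,0,0)$.
\end{example}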
\subsection{Action of congruence subgroup on split unimodular vector}
The following lemma refines Lemma \ref{lemma:unimodulartransitivecongruence} in the same way
that Lemma \ref{lemma:splitunimodulartransitive} refines Lemma \ref{lemma:unimodulartransitive}:
\begin{lemma}
\label{lemma:splitunimodulartransitivecongruence}
Let $R$ be a ring satisfying $(\SR_r)$, let $\fq$ be an ideal in $R$, and let $n \geq r$.
Let $x,y \in R^n$ be unimodular vectors and $C,D \subset R^n$ be $R$-submodules such that
$R^n = C \oplus x \cdot R$ and $R^n = D \oplus y \cdot R$. Assume that $x$ and $y$ (resp.\ $C$ and
$D$) map to the same vector (resp.\ submodule) in $(R/\fq)^n$. Then there exists some $M \in \EL_n(R,\fq)$
with $M \cdot x = y$ and $M \cdot C = D$.
\end{lemma}
\begin{proof}
Using Lemma \ref{lemma:unimodulartransitivecongruence}, we can assume without loss of generality that $x = y$.
Let $\rho\colon R^n \rightarrow R$ be the projection with $D = \ker(\rho)$ and $\rho(x) = 1$.
Lemma \ref{lemma:stabledecomp} implies that $C$ and $D$ are both free of rank $(n-1)$. Let
$\{v_1,\ldots,v_{n-1}\}$ be a free basis for $C$ and let $v_n = x = y$, so $\{v_1,\ldots,v_n\}$ is a free
basis for $R^n$. For $1 \leq i \leq n-1$, let $\lambda_i = \rho(v_i)$.
Since $C$ and $D$ map to the same submodule of $(R/\fq)^n$, we have $\lambda_i \in \fq$. Define
$M \in \GL_n(R,\fq)$ via the formula
\[M \cdot v_i = v_i - v_n \cdot \lambda_i \quad \text{for $1 \leq i \leq n-1$, and $M \cdot v_n = v_n$}.\]
With respect to the basis $\{v_1,\ldots,v_n\}$, the matrix of $M$ is the product of the elementary matrices with single off-diagonal entry $-\lambda_i \in \fq$ in position $(n,i)$ for $1 \leq i \leq n-1$, so $M \in \EL_n(R,\fq)$.
For $1 \leq i \leq n-1$, we have
\[\rho(M \cdot v_i) = \rho(v_i) - \rho(v_n) \lambda_i = \lambda_i - \lambda_i = 0,\]
so $M \cdot v_i \in D$. We conclude that $M \cdot C = D$, as desired.
\end{proof}
\subsection{Subgroups from K-theory}
\label{section:ktheory}
We close this section with some K-theoretic constructions. Define
\[\GL(R) = \bigcup_{n=1}^{\infty} \GL_n(R) \quad \text{and} \quad \EL(R) = \bigcup_{n=1}^{\infty} \EL_n(R).\]
It turns out that $\EL(R) = [\GL(R),\GL(R)]$ (this is the ``Whitehead Lemma''; see \cite[Lemma 3.1]{MilnorKTheory}).
By definition, the first algebraic K-theory group of $R$ is the abelian group
\[\KK_1(R) = \HH_1(\GL(R)) = \GL(R) / \EL(R).\]
For a subgroup $\cK \subset \KK_1(R)$, we define $\GL_n^{\cK}(R)$ to be the preimage of $\cK$ under
the projection
\[\GL_n(R) \hookrightarrow \GL(R) \longrightarrow \KK_1(R).\]
Since $\EL_n(R)$ maps to $0$ in $\KK_1(R)$, we have $\EL_n(R) \subset \GL_n^{\cK}(R) \subset \GL_n(R)$.
\begin{example}
Let $R$ be a ring. For $\cK = \KK_1(R)$, we have $\GL_n^{\cK}(R) = \GL_n(R)$.
\end{example}
\begin{example}
Let $R$ be a commutative ring and let $\det\colon \GL(R) \rightarrow R^{\times}$
be the determinant map. Since the determinant of an elementary matrix is $1$, the map
$\det$ induces a map
\[\fd\colon \KK_1(R) = \GL(R)/\EL(R) \rightarrow R^{\times}.\]
Letting $\cK = \ker(\fd)$, we then have $\GL_n^{\cK}(R) = \SL_n(R)$.
\end{example}
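\begin{example}
For $R = \Z$, it is a standard fact that the determinant map induces an isomorphism $\KK_1(\Z) \cong \Z^{\times} = \{\pm 1\}$. The two possible choices of a subgroup $\cK \subset \KK_1(\Z)$ thus recover the two previous examples: $\cK = \KK_1(\Z)$ gives $\GL_n^{\cK}(\Z) = \GL_n(\Z)$, while $\cK = 0$ gives $\GL_n^{\cK}(\Z) = \SL_n(\Z)$.
\end{example}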
If $\alpha$ is a two-sided ideal of $R$, then we can also define
\[\GL(R,\alpha) = \bigcup_{n=1}^{\infty} \GL_n(R,\alpha) \quad \text{and} \quad \EL(R,\alpha) = \bigcup_{n=1}^{\infty} \EL_n(R,\alpha).\]
Just like in the absolute case, $\EL(R,\alpha)$ is a normal subgroup of $\GL(R,\alpha)$ with abelian
quotient, and this abelian quotient is the first relative K-theory group of $(R,\alpha)$:
\[\KK_1(R,\alpha) = \GL(R,\alpha) / \EL(R,\alpha).\]
See \cite[Lemma 4.2]{MilnorKTheory}. For a subgroup $\cK \subset \KK_1(R,\alpha)$, we define $\GL_n^{\cK}(R,\alpha)$ to
be the preimage of $\cK$ under the composition
\[\GL_n(R,\alpha) \hookrightarrow \GL(R,\alpha) \longrightarrow \KK_1(R,\alpha).\]
Letting $\cK' \subset \KK_1(R)$ be the image of $\cK$ under the map $\KK_1(R,\alpha) \rightarrow \KK_1(R)$, we have
$\GL_n^{\cK}(R,\alpha) \subset \GL_n^{\cK'}(R)$. Just like in the above examples, by
choosing $\cK$ appropriately we can have $\GL_n^{\cK}(R,\alpha) = \GL_n(R,\alpha)$ or (if $R$ is commutative)
$\GL_n^{\cK}(R,\alpha) = \SL_n(R,\alpha)$.
To clarify the nature of these groups, we need the following deep theorem of Vaserstein (``injective
stability for $K_1$''):
\begin{theorem}[{Vaserstein, \cite{Vaserstein}}]
\label{theorem:injectivek1}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. Then $\EL_n(R)$ is a normal
subgroup of $\GL_n(R)$, and the composition
\[\GL_n(R)/\EL_n(R) \longrightarrow \GL(R)/\EL(R) = \KK_1(R)\]
is an isomorphism. Moreover, if $\alpha$ is a two-sided ideal of $R$, then
$\EL_n(R,\alpha)$ is a normal subgroup of $\GL_n(R,\alpha)$, and the composition
\[\GL_n(R,\alpha) / \EL_n(R,\alpha) \longrightarrow \GL(R,\alpha) / \EL(R,\alpha) = \KK_1(R,\alpha)\]
is an isomorphism.
\end{theorem}
This allows us to deal with another example:
\begin{example}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. Set $\cK = 0$. Then by Theorem \ref{theorem:injectivek1}
we have $\GL_n^{\cK}(R) = \EL_n(R)$, and for a two-sided ideal $\alpha$ of $R$ we have
$\GL_n^{\cK}(R,\alpha) = \EL_n(R,\alpha)$.
\end{example}
It also allows us to prove the following useful result:
\begin{lemma}
\label{lemma:stablesubgroup}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. For all subgroups $\cK \subset \KK_1(R)$,
we have
\[\GL_{n+1}^{\cK}(R) \cap \GL_n(R) = \GL_n^{\cK}(R).\]
Similarly, for all two-sided ideals $\alpha$ of $R$ and for all subgroups $\cK \subset \KK_1(R,\alpha)$,
we have
\[\GL_{n+1}^{\cK}(R,\alpha) \cap \GL_n(R,\alpha) = \GL_n^{\cK}(R,\alpha).\]
\end{lemma}
\begin{proof}
By Theorem \ref{theorem:injectivek1}, the maps
\[\GL_n(R) / \EL_n(R) \rightarrow \GL_{n+1}(R) / \EL_{n+1}(R) \rightarrow \KK_1(R)\]
are both isomorphisms. This implies that
\[\GL_{n+1}^{\cK}(R) \cap \GL_n(R) = \GL_n^{\cK}(R).\]
The relative statement is proved similarly.
\end{proof}
\section{Complex of split partial bases}
\label{section:splitbases}
We now introduce a simplicial complex called the complex of split partial bases, which
is a tiny variant on one introduced by Charney \cite{CharneyCongruence}.
\begin{remark}
The lemmas we prove in this section are all essentially due to Charney \cite{CharneyCongruence}. Many
of them are also proved in \cite[\S 5.3]{RandalWilliamsWahl}. We include proofs because they are short, and also
because it is annoyingly nontrivial to match up our notation and indexing conventions with those papers.
\end{remark}
\subsection{Definition}
Let $R$ be a ring and let $M$ be a right $R$-module. The {\em complex of split partial bases} for $M$, denoted
$\Bases(M)$, is the following simplicial complex:
\begin{itemize}
\item The vertices are pairs $(x;C)$ with $x \in M$ a nonzero element and
$C \subseteq M$ a submodule such that $M = C \oplus x \cdot R$.
\item A collection $\{(x_0;C_0),\ldots,(x_k;C_k)\}$ of vertices forms a $k$-simplex
if $x_i \in C_j$ for all distinct $0 \leq i,j \leq k$.
\end{itemize}
The group $\Aut_R(M)$ acts on $\Bases(M)$ on the left. In particular, $\GL_n(R)$ acts on $\Bases(R^n)$.
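The following example, which uses nothing beyond the definition, may help clarify it in the smallest interesting case.
\begin{example}
Let $\bbk$ be a field and let $M = \bbk^2$. The vertices of $\Bases(\bbk^2)$ are the pairs $(x;L)$ with $x$ a nonzero vector and $L$ a line with $x \notin L$. For an edge $\{(x_0;L_0),(x_1;L_1)\}$ we need $x_0 \in L_1$ and $x_1 \in L_0$, which forces $L_1 = x_0 \cdot \bbk$ and $L_0 = x_1 \cdot \bbk$; the edges of $\Bases(\bbk^2)$ thus correspond to unordered bases $\{x_0,x_1\}$ of $\bbk^2$.
\end{example}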
\subsection{Splitting from a simplex}
The following lemma might clarify the definition of a simplex in $\Bases(M)$.
\begin{lemma}
\label{lemma:splitsimplex}
Let $R$ be a ring, let $M$ be a right $R$-module, and let $\{(x_0;C_0),\ldots,(x_k;C_k)\}$ be a $k$-simplex
of $\Bases(M)$. Setting $C = C_0 \cap \cdots \cap C_k$, we then have
\begin{equation}
\label{eqn:splitsimplex}
M = C \oplus \bigoplus_{i=0}^k x_i \cdot R.
\end{equation}
\end{lemma}
\begin{proof}
The proof is by induction on $k$. The base case $k=0$ is trivial, so assume that $k>0$ and that
the lemma is true for all smaller $k$. For $0 \leq i \leq k-1$, let $C_i' = C_i \cap C_k$. For
these $i$, we claim that $C_k = C_i' \oplus x_i \cdot R$. Since $x_i \in C_k$ and
$M = C_i \oplus x_i \cdot R$, it is enough to show that
for all $z \in C_k$, we can write $z = z' + x_i \cdot a$ for some
$z' \in C_i'$ and $a \in R$. Since $M = C_i \oplus x_i \cdot R$, we can
write $z = z' + x_i \cdot a$ for some $z' \in C_i$ and $a \in R$, and
what we must show is that $z' \in C_i'$. But since
$z,x_i \in C_k$ we have $z' = z-x_i \cdot a \in C_k$, so $z' \in C_i'$, as
desired.
It follows that $\{(x_0;C'_0),\ldots,(x_{k-1};C'_{k-1})\}$ is a $(k-1)$-simplex of
$\Bases(C_k)$. Since
\[C = C_0 \cap \cdots \cap C_k = C'_0 \cap \cdots \cap C'_{k-1},\]
our inductive hypothesis implies that
\[C_k = C \oplus \bigoplus_{i=0}^{k-1} x_i \cdot R.\]
Since $M = C_k \oplus x_k \cdot R$, equation \eqref{eqn:splitsimplex} follows.
\end{proof}
This has the following useful consequence:
\begin{lemma}
\label{lemma:identifysimplex}
Let $R$ be a ring and let $M$ be a right $R$-module. The $k$-simplices of $\Bases(M)$ are in bijection
with tuples $(\{x_0,\ldots,x_k\};C)$, where $\{x_0,\ldots,x_k\}$ is an unordered collection of $(k+1)$
elements of $M$ and $C \subset M$ is a submodule such that
\[M = C \oplus \bigoplus_{i=0}^k x_i \cdot R.\]
\end{lemma}
\begin{proof}
For a $k$-simplex $\{(x_0;C_0),\ldots,(x_k,C_k)\}$ of $\Bases(M)$, by Lemma \ref{lemma:splitsimplex} we can associate
the tuple
\[(\{x_0,\ldots,x_k\};\bigcap_{i=0}^k C_i).\]
Conversely, consider such a tuple $(\{x_0,\ldots,x_k\};C)$. For $0 \leq j \leq k$, define
\[C_j = \bigoplus_{\substack{0 \leq i \leq k \\ i \neq j}} x_i \cdot R.\]
Then $\{(x_0;C_0),\ldots,(x_k;C_k)\}$ is a $k$-simplex of $\Bases(M)$. It is clear that these
two operations are inverses to one another.
\end{proof}
Subsequently, we will freely move between the notations $\{(x_0;C_0),\ldots,(x_k;C_k)\}$ and
$(\{x_0,\ldots,x_k\};C)$ for the $k$-simplices of $\Bases(M)$.
\subsection{Links in complex of split partial bases}
We now focus our attention on $\Bases(R^n)$. If $R$ satisfies a stable rank condition, then the links in this
complex are well-behaved:
\begin{lemma}
\label{lemma:splitbaseslink}
Let $R$ be a ring satisfying $(\SR_r)$. For some $n$ and $k$ with $k \leq n-r+1$, let $\sigma$ be a $k$-simplex
of $\Bases(R^n)$. We then have $\Link_{\Bases(R^n)}(\sigma) \cong \Bases(R^{n-k-1})$.
\end{lemma}
\begin{proof}
Let $\sigma = (\{x_0,\ldots,x_k\};C)$. Since
\[R^n = C \oplus \bigoplus_{i=0}^k x_i \cdot R,\]
Lemma \ref{lemma:stabledecomp} implies that $C \cong R^{n-k-1}$, so it is enough to
prove that $\Link_{\Bases(R^n)}(\sigma)$ and $\Bases(C)$ are isomorphic. For this,
note that the maps $\phi\colon \Link_{\Bases(R^n)}(\sigma) \rightarrow \Bases(C)$ and
$\psi\colon \Bases(C) \rightarrow \Link_{\Bases(R^n)}(\sigma)$ defined on $\ell$-simplices via the formulas
\[\phi((\{y_0,\ldots,y_{\ell}\};D)) = (\{y_0,\ldots,y_{\ell}\};D \cap C)\]
and
\[\psi((\{z_0,\ldots,z_{\ell}\};E)) =
(\{z_0,\ldots,z_{\ell}\};E \oplus \bigoplus_{i=0}^k x_i \cdot R)\]
are inverse isomorphisms.
\end{proof}
\subsection{General linear group action}
We now turn to the action of $\GL_n(R)$ and its elementary subgroup $\EL_n(R)$
on $\Bases(R^n)$. Our main result will be as follows:
\begin{lemma}
\label{lemma:glbasesaction}
Let $R$ be a ring satisfying $(\SR_r)$. Let
\[\{(x_0;C_0),\ldots,(x_k;C_k)\} \quad \text{and} \quad \{(y_0;D_0),\ldots,(y_k;D_k)\}\]
be $k$-simplices of $\Bases(R^n)$. Assume that $k \leq n-r$. The following then hold:
\begin{itemize}
\item[(a)] There exists $M \in \EL_n(R)$ such that $M \cdot (x_i;C_i) = (y_i;D_i)$ for $0 \leq i \leq k$.
\item[(b)] Let $\fq$ be a two-sided ideal of $R$ and let $\pi\colon R^n \rightarrow (R/\fq)^n$ be the projection.
Assume that $\pi(x_i) = \pi(y_i)$ and $\pi(C_i) = \pi(D_i)$ for $0 \leq i \leq k$. Then the $M$ in
part (a) can be chosen to lie in $\EL_n(R,\fq)$.
\end{itemize}
\end{lemma}
\begin{proof}
For $k=0$, this is immediate from Lemmas \ref{lemma:splitunimodulartransitive} and \ref{lemma:splitunimodulartransitivecongruence}.
The general case can be deduced from this by induction using Lemma \ref{lemma:splitbaseslink}.
\end{proof}
\subsection{Topology}
Recall that we defined what it means for a simplicial complex to be weakly Cohen--Macaulay in
\S \ref{section:cmcomplex}. The complex of split partial bases has this property:
\begin{theorem}
\label{theorem:basescm}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq r$. Then $\Bases(R^n)$ is weakly Cohen--Macaulay of
dimension $\left(\frac{n-r+1}{2}\right)$.
\end{theorem}
The key to the proof of Theorem \ref{theorem:basescm} is the following theorem. It is essentially due to Charney \cite[Theorem 3.5]{CharneyCongruence},
though she works with a slightly different complex, so one needs to adapt her proof. For a complete proof of exactly
the statement below, see \cite[Lemma 5.10]{RandalWilliamsWahl} (or rather its proof -- the lemma in the reference deals with the large
ordering of $\Bases(R^n)$, but proves this theorem along the way):
\begin{theorem}
\label{theorem:basesconnected}
Let $R$ be a ring satisfying $(\SR_r)$. Then for all $n \geq 1$, the complex $\Bases(R^n)$ is $\left(\frac{n-r-1}{2}\right)$-connected.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{theorem:basescm}]
Theorem \ref{theorem:basesconnected} says that $\Bases(R^n)$ is $\left(\frac{n-r+1}{2}\right)-1=\left(\frac{n-r-1}{2}\right)$-connected, so
what we must prove is that for all $k$-simplices $\sigma$ of $\Bases(R^n)$, the complex
$\Link_{\Bases(R^n)}(\sigma)$ is $\left(\frac{n-r-1}{2}-k-1\right)$-connected. This is a non-tautological condition
precisely when
\begin{equation}
\label{eqn:meaningfulk}
\frac{n-r-1}{2}-k-1 \geq -1, \quad \text{i.e., when } k \leq \frac{n-r-1}{2}.
\end{equation}
The connectivity condition we want will follow from Theorem \ref{theorem:basesconnected} if
$\Link_{\Bases(R^n)}(\sigma) \cong \Bases(R^{n-k-1})$, which by Lemma \ref{lemma:splitbaseslink} holds
if
\begin{equation}
\label{eqn:goodk}
k \leq n-r+1.
\end{equation}
So we must prove that \eqref{eqn:meaningfulk} implies \eqref{eqn:goodk}, i.e., that
\[\frac{n-r-1}{2} \leq n-r+1.\]
This holds precisely when $n-r \geq -3$, which follows from our assumption that $n \geq r$.
\end{proof}
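For instance, granting the standard fact that a field $\bbk$ satisfies $(\SR_2)$, Theorem \ref{theorem:basescm} with $r = 2$ says that $\Bases(\bbk^n)$ is weakly Cohen--Macaulay of dimension $\frac{n-1}{2}$ for all $n \geq 2$.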
\section{The large ordering of the complex of split partial bases}
\label{section:glncoefficient}
Let $R$ be a ring. We now discuss the large ordering of $\Bases(R^n)$ and show how to use a $\VIC(R)$-module
$M$ to define a $\GL_n(R)$-equivariant coefficient system on it (or, rather, on an appropriate subcomplex).
\subsection{Large ordering and shifted large ordering}
For $n \geq 0$, define $\OBases(R^n)$ to be the large ordering of $\Bases(R^n)$. As we discussed
in Lemma \ref{lemma:identifysimplex}, the $k$-simplices of $\Bases(R^n)$ can be identified with
tuples $(\{x_0,\ldots,x_k\};C)$, where $x_0,\ldots,x_k \in R^n$ are elements and $C \subset R^n$
is a submodule such that
\[R^n = C \oplus \bigoplus_{i=0}^k x_i \cdot R.\]
The $k$-simplices of $\OBases(R^n)$ are obtained by imposing an ordering on the $x_i$, so can
be identified with ordered tuples $(x_0,\ldots,x_k;C)$ where the $x_i$ and $C$ are as above. It
will be technically annoying that sometimes $C$ is not a free module. To fix this, assume
that $R$ satisfies $(\SR_r)$, and define $\OBases(R^{n,r})$ to be the semisimplicial subset
of $\OBases(R^{n+r})$ consisting of simplices of dimension at most $n$. The group
$\GL_{n+r}(R)$ acts on $\OBases(R^{n,r})$, and the following lemma shows that
$\OBases(R^{n,r})$ avoids some of the annoying features of $\OBases(R^n)$:
\begin{lemma}
\label{lemma:obasesgood}
Let $R$ be a ring satisfying $(\SR_r)$ and let $n \geq 1$. The following hold:
\begin{itemize}
\item[(i)] For each $k$-simplex $(x_0,\ldots,x_k;C)$ of $\OBases(R^{n,r})$, we have
$C \cong R^{n+r-k-1}$.
\item[(ii)] For all $k$ and all subgroups $\cK \subset \KK_1(R)$, the group $\GL_{n+r}^{\cK}(R)$ acts transitively on the $k$-simplices
of $\OBases(R^{n,r})$.
\item[(iii)] For all two-sided ideals $\alpha$ of $R$, we have
\[\OBases(R^{n,r}) / \EL_{n+r}(R,\alpha) = \OBases((R/\alpha)^{n,r}).\]
\item[(iv)] The semisimplicial set $\OBases(R^{n,r})$ is a large ordering of a weakly
Cohen--Macaulay complex of dimension $\frac{n+1}{2}$.
\end{itemize}
\end{lemma}
\begin{proof}
Conclusions (i) and (ii) follow from Lemmas \ref{lemma:stabledecomp} and \ref{lemma:glbasesaction} along
with the fact that we have restricted the simplices of $\OBases(R^{n,r})$ to have dimension at most $n$.
Here for (ii) we are using the fact that $\EL_{n+r}(R) \subset \GL_{n+r}^{\cK}(R)$.
For (iii), the projection
\[\pi\colon \OBases(R^{n,r}) \longrightarrow \OBases((R/\alpha)^{n,r})\]
is $\GL_{n+r}(R)$-equivariant, where the group $\GL_{n+r}(R)$ acts on $\OBases((R/\alpha)^{n,r})$ via
the projection $\GL_{n+r}(R) \rightarrow \GL_{n+r}(R/\alpha)$. Lemma \ref{lemma:stablequotient} says
that $R/\alpha$ satisfies $(\SR_r)$, so by Lemma \ref{lemma:glbasesaction} the group $\EL_{n+r}(R/\alpha)$ acts
transitively on the $k$-simplices of $\OBases((R/\alpha)^{n,r})$ for all $k$. Since
$\EL_{n+r}(R)$ maps surjectively onto $\EL_{n+r}(R/\alpha)$ and $\pi$ is $\EL_{n+r}(R)$-equivariant,
it follows that $\pi$ is surjective. The map $\pi$ descends to a map
\[\opi \colon \OBases(R^{n,r}) / \EL_{n+r}(R,\alpha) \rightarrow \OBases((R/\alpha)^{n,r}).\]
The map $\opi$ is surjective since $\pi$ is. Lemma \ref{lemma:glbasesaction} implies
that two simplices of $\OBases(R^{n,r})$ that map to the
same simplex of $\OBases((R/\alpha)^{n,r})$ differ by an element of $\EL_{n+r}(R)$, and
thus are identified with the same simplex of $\OBases(R^{n,r}) / \EL_{n+r}(R,\alpha)$. It follows
that $\opi$ is injective, and hence an isomorphism, as desired.
For (iv), Theorem \ref{theorem:basescm} says that $\Bases(R^{n+r})$ is weakly Cohen--Macaulay
of dimension $\frac{n+1}{2}$. Since $n \geq 1$, we have $n \geq \frac{n+1}{2}$, so the $n$-skeleton
of $\Bases(R^{n+r})$ is also weakly Cohen--Macaulay of dimension $\frac{n+1}{2}$, as desired.
\end{proof}
\subsection{Coefficient system}
\label{section:glcoef}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a commutative ring, and let $M$ be a
$\VIC(R)$-module over $\bbk$. For each $n \geq 1$, let $\cG_{M,n,r}$ be the augmented coefficient system
on $\OBases(R^{n,r})$ defined via the formula
\[\cG_{M,n,r}(x_0,\ldots,x_k;C) = M(C) \quad \text{for a simplex $(x_0,\ldots,x_k;C)$ of $\OBases(R^{n,r})$},\]
where our convention is that the $(-1)$-simplex of $\OBases(R^{n,r})$ is $(;R^{n+r})$. For this
to make sense, we need $C$ to be a free $R$-module, which is ensured by Lemma \ref{lemma:obasesgood}.(i).
To see how $\cG_{M,n,r}$ behaves under face maps, note that for a simplex $\sigma = (x_0,\ldots,x_k;C)$ of
$\OBases(R^{n,r})$, the faces of $\sigma$ are of the form
\[\sigma' = (x_{i_0},\ldots,x_{i_{\ell}};C') \quad \text{with} \quad C' = C \oplus \bigoplus_{j \notin \{i_0,\ldots,i_{\ell}\}} x_j \cdot R\]
for increasing sequences $0 \leq i_0 < \cdots < i_{\ell} \leq k$. Letting $\iota\colon C \rightarrow C'$ be the natural
inclusion and letting
\[D = \bigoplus_{j \notin \{i_0,\ldots,i_{\ell}\}} x_j \cdot R,\]
the induced map $\cG_{M,n,r}(\sigma) \rightarrow \cG_{M,n,r}(\sigma')$ is the one induced by the following $\VIC(R)$-morphism:
\[\cG_{M,n,r}(\sigma) = M(C) \stackrel{(\iota,D)_{\ast}}{\longrightarrow} M(C') = \cG_{M,n,r}(\sigma').\]
Just like in Example \ref{example:snequivariant}, the augmented coefficient system $\cG_{M,n,r}$ is
$\GL_{n+r}(R)$-equivariant. It satisfies the following lemma:
\begin{lemma}[{cf.\ Lemma \ref{lemma:fisystempoly}}]
\label{lemma:vicsystempoly}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a commutative ring, and
let $M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
For each $n \geq m$, the coefficient system $\cG_{M,n,r}$ on $\OBases(R^{n,r})$ is polynomial of
degree $d$ up to dimension $n+r-m-1$.
\end{lemma}
\begin{proof}
The proof will be by induction on $d$. If $d=-1$, then consider a simplex $\sigma = (x_0,\ldots,x_k;C)$ with
$k \leq n+r-m-1$. Lemma \ref{lemma:obasesgood}.(i) says that $C \cong R^{n+r-k-1}$, and since
\begin{equation}
\label{eqn:calccodim2}
n+r-k-1 \geq n+r-(n+r-m-1)-1 = m
\end{equation}
the fact that $M$ is polynomial of degree $-1$ starting at $m$ implies that $\cG_{M,n,r}(\sigma) = M(C) = 0$, as desired.
Now assume that $d \geq 0$. There are two things to check. For the first, let $\sigma = (x_0,\ldots,x_k;C)$ be a simplex
with $k \leq n+r-m-1$. We must prove that the map $\cG_{M,n,r}(\sigma) \rightarrow \cG_{M,n,r}(\emptyset)$ is injective, i.e., that
the map
\[M(C) \rightarrow M(R^{n+r})\]
is injective. The calculation \eqref{eqn:calccodim2} shows that this injectivity follows from the fact that
$M$ is polynomial of degree $d$ starting at $m$.
For the second, let $\tau = (y_0,\ldots,y_{\ell};D)$ be any simplex with $\ell \leq n+r-m-1$
and let $\bbY$ be the forward link $\FLink_{\OBases(R^{n,r})}(\tau)$. Set
\[D' = D \oplus \bigoplus_{i=0}^{\ell-1} y_i \cdot R,\]
so $(y_{\ell};D')$ is the last vertex of $\tau$.
Let $\cH$ be the coefficient system on $\bbY$ defined by the formula
\[\cH(\sigma) = \frac{\cG_{M,n,r}(\sigma)}{\Image\left(\cG_{M,n,r}\left((y_{\ell};D') \cdot \sigma\right) \rightarrow \cG_{M,n,r}\left(\sigma\right)\right)} \quad \text{for a simplex $\sigma$ of $\bbY$}.\]
We must prove that $\cH$ is polynomial of degree $d-1$ up to dimension $n+r-m-1-\ell$.
This condition is invariant under the action of $\GL_{n+r}(R)$. Letting $\{v_1,\ldots,v_{n+r}\}$ be the standard
basis for $R^{n+r}$, Lemma \ref{lemma:obasesgood}.(ii) says that by applying an appropriate
element of $\GL_{n+r}(R)$ we can assume without loss of generality that
\[\tau = (v_{n+r-\ell},v_{n+r-\ell+1},\ldots,v_{n+r};D) \quad \text{with} \quad D = \bigoplus_{i=1}^{n+r-\ell-1} v_i \cdot R.\]
This implies that
\[\bbY = \FLink_{\OBases(R^{n,r})}(\tau) = \OBases(R^{n-\ell-1,r}).\]
Recall that we defined the derived $\VIC(R)$-module $DM$ in Definition \ref{definition:derivedvic}.
By construction, there is an isomorphism between
the coefficient systems $\cH$ and $\cG_{DM,n-\ell-1,r}$ on $\OBases(R^{n-\ell-1,r})$. Since $M$ is polynomial of
degree $d$ starting at $m$, the $\VIC(R)$-module $DM$ is
polynomial of degree $d-1$ starting at $m-1$. By induction, $\cH$ is polynomial of degree $d-1$ up to dimension
\[(n-\ell-1+r)-(m-1)-1=n+r-m-1-\ell,\]
as desired.
\end{proof}
\section{Stability for general linear groups}
\label{section:glstability}
We now are in a position to prove Theorems \ref{maintheorem:gl} and \ref{maintheorem:glprime}. In
fact, we will prove more general results that also deal with the groups $\GL_n^{\cK}(R)$. The
following generalizes Theorem \ref{maintheorem:gl}.
\begin{theorem}
\label{theorem:xl}
Let $R$ be a ring satisfying $(\SR_r)$, let $\cK \subset \KK_1(R)$ be a subgroup, let $\bbk$ be a commutative ring, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
For each $k \geq 0$, the map
\[\HH_k(\GL_n^{\cK}(R);M(R^n)) \rightarrow \HH_k(\GL_{n+1}^{\cK}(R);M(R^{n+1}))\]
is an isomorphism for $n \geq 2k+\max(2d+r,m+1)$ and a surjection for $n = 2k+\max(2d+r-1,m)$.
\end{theorem}
\begin{proof}
The group $\GL_{n+r}^{\cK}(R)$ acts on $\OBases(R^{n,r})$. Let $\cG_{M,n,r}$ be
the $\GL_{n+r}^{\cK}(R)$-equivariant augmented system of coefficients on $\OBases(R^{n,r})$ discussed
in \S \ref{section:glcoef}, so
\[\cG_{M,n,r}(x_0,\ldots,x_k;C) = M(C) \quad \text{for a simplex $(x_0,\ldots,x_k;C)$ of $\OBases(R^{n,r})$}.\]
The following claim will be used to show that with an appropriate degree shift, this all satisfies
the hypotheses of Theorem \ref{theorem:stabilitymachine}.
\begin{claim}
The following hold:
\begin{enumerate}
\item For all $-1 \leq k \leq \min(\frac{n-2d-1}{2},n+r-m-1)$, we have $\RH_k(\OBases(R^{n,r});\cG_{M,n,r}) = 0$.
\item For all $-1 \leq k \leq n$, the group $\GL_{n+r-k-1}^{\cK}(R)$ is the $\GL_{n+r}^{\cK}(R)$-stabilizer of a $k$-simplex
$\sigma_k$ of $\OBases(R^{n,r})$ with $\cG_{M,n,r}(\sigma_k) = M(R^{n+r-k-1})$.
\item For all $0 \leq k \leq n$, the group $\GL_{n+r}^{\cK}(R)$ acts transitively on the $k$-simplices of $\OBases(R^{n,r})$.
\item For all $n \geq 2$ and all $1$-simplices $e$ of $\OBases(R^{n,r})$ of the form $e = ((x_0;C_0),(x_1;C_1))$, there
exists some $\lambda \in \GL_{n+r}^{\cK}(R)$ with $\lambda(x_0;C_0) = (x_1;C_1)$ such that $\lambda$ commutes with all
elements of $(\GL_{n+r}^{\cK}(R))_e$ and fixes all elements of $\cG_{M,n,r}(e)$.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of claim]
For (a), Lemma \ref{lemma:vicsystempoly} says that $\cG_{M,n,r}$ is a polynomial coefficient system of
degree $d$ up to dimension $n+r-m-1$. Also, by Lemma \ref{lemma:obasesgood}.(iv) the semisimplicial set
$\OBases(R^{n,r})$ is the large ordering of the simplicial complex that is weakly
Cohen--Macaulay of dimension $\frac{n+1}{2}$.
Letting
\[N = \min(\frac{n+1}{2}-d-1,n+r-m-1) = \min(\frac{n-2d-1}{2},n+r-m-1),\]
the complex $\OBases(R^{n,r})$ is weakly Cohen--Macaulay of dimension $N+d+1$ and $\cG_{M,n,r}$ is a polynomial
coefficient system of degree $d$ up to dimension $N$. Theorem \ref{theorem:vanishing} thus
implies that $\RH_k(\OBases(R^{n,r});\cG_{M,n,r}) = 0$ for $-1 \leq k \leq N$.
For (b), let $\{v_1,\ldots,v_{n+r}\}$ be the standard basis for $R^{n+r}$.
By Lemma \ref{lemma:stablesubgroup}, the group $\GL_{n+r-k-1}^{\cK}(R)$ is the $\GL_{n+r}^{\cK}(R)$-stabilizer of the $k$-simplex
\[\sigma_k = \begin{cases}
(;R^{n+r}) & \text{if $k=-1$},\\
(v_{n+r-k},v_{n+r-k+1},\ldots,v_{n+r}; R^{n+r-k-1}) & \text{if $0 \leq k \leq n$}
\end{cases}\]
of $\OBases(R^{n,r})$, and by definition
\[\cG_{M,n,r}(\sigma_k) = M(R^{n+r-k-1}).\]
Condition (c) is Lemma \ref{lemma:obasesgood}.(ii).
For (d), define $\lambda\colon R^{n+r} \rightarrow R^{n+r}$ to be the $R$-module homomorphism
defined via the formulas
\[\lambda(x_0) = x_1 \quad \text{and} \quad \lambda(x_1) = -x_0 \quad \text{and} \quad \lambda|_{C_0 \cap C_1} = \text{id}.\]
This lies in $\EL_{n+r}(R) \subset \GL_{n+r}^{\cK}(R)$; indeed, the group $\SL_2(\Z)$ is generated by elementary matrices and contains
\[\left(\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}\right),\]
so $\lambda$ can be written as a product of elementary matrices. Here we are using the fact that $n+r \geq r$, so
by Theorem \ref{theorem:injectivek1} the group $\EL_{n+r}(R)$ is a normal subgroup of $\GL_{n+r}(R)$.
In particular, the fact that something can be written
as a product of elementary matrices is independent of the choice of basis. The map $\lambda$ acts trivially on
\[\cG_{M,n,r}(e) = M(C_0 \cap C_1)\]
by basic properties of $\VIC(R)$-modules.
\end{proof}
Letting $e = \max(2d-1,m-r)$, the Claim verifies that the conditions of Theorem \ref{theorem:stabilitymachine} are satisfied for
\begin{center}
\scalebox{0.89}{$G_n = \GL_{n+e+r}^{\cK}(R) \quad \text{and} \quad M_n = M(R^{n+e+r}) \quad \text{and} \quad \bbX_n = \OBases(R^{n+e,r})
\quad \text{and} \quad \cM_n = \cG_{M,n+e,r}$}
\end{center}
with $c=2$. The shift by $e+r$ is needed for condition (a) of Theorem \ref{theorem:stabilitymachine}, which
requires that $\RH_k(\bbX_n;\cM_n) = 0$ for all $-1 \leq k \leq \frac{n-2}{2}$. Conclusion (a)
of the Claim says that $\RH_k(\OBases(R^{n+e,r});\cG_{M,n+e,r}) = 0$ for $-1 \leq k \leq \min(\frac{(n+e)-2d-1}{2},(n+e)+r-m-1)$,
which implies the desired range of vanishing for $\RH_k(\bbX_n;\cM_n)$ since
\[\frac{(n+e)-2d-1}{2} \geq \frac{(n+2d-1)-2d-1}{2} = \frac{n-2}{2} \quad \text{for all $n$}\]
and
\[(n+e)+r-m-1 \geq n+(m-r)+r-m-1 = n-1 \geq \frac{n-2}{2} \quad \text{for all $n \geq 0$},\]
so
\[\frac{n-2}{2} \leq \min(\frac{(n+e)-2d-1}{2}, (n+e)+r-m-1) \quad \text{for all $n \geq 0$}.\]
Applying Theorem \ref{theorem:stabilitymachine}, we deduce that the map
\[\HH_k(\GL_{n+e+r-1}^{\cK}(R);M(R^{n+e+r-1})) \rightarrow \HH_k(\GL_{n+e+r}^{\cK}(R);M(R^{n+e+r}))\]
is an isomorphism for $n \geq 2k+2$ and a surjection for $n = 2k+1$, which implies
that
\[\HH_k(\GL_{n}^{\cK}(R);M(R^{n})) \rightarrow \HH_k(\GL_{n+1}^{\cK}(R);M(R^{n+1}))\]
is an isomorphism for
\[n \geq 2k + (e+r-1) + 2 = 2k+\max(2d-1,m-r)+r+1 = 2k+\max(2d+r,m+1)\]
and a surjection for
\[n = 2k+\max(2d+r-1,m).\qedhere\]
\end{proof}
The following theorem generalizes Theorem \ref{maintheorem:glprime}:
\begin{theorem}
\label{theorem:xlprime}
Let $R$ be a ring satisfying $(\SR_r)$, let $\cK \subset \KK_1(R)$ be a subgroup, let $\bbk$ be a commutative ring, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m \geq 0$.
For each $k \geq 0$, the map
\begin{equation}
\label{eqn:glprimetoprove}
\HH_k(\GL_n^{\cK}(R);M(R^n)) \rightarrow \HH_k(\GL_{n+1}^{\cK}(R);M(R^{n+1}))
\end{equation}
is an isomorphism for $n \geq \max(m,2k+2d+r+1)$ and a surjection for $n \geq \max(m,2k+2d+r-1)$.
\end{theorem}
\begin{proof}
The proof will be by double induction on $d$ and $m$. There are two base cases:
\begin{itemize}
\item The first is where $m=0$ and $d \geq 0$. Theorem \ref{theorem:xl}
says in this case that \eqref{eqn:glprimetoprove} is an isomorphism for
\[n \geq 2k+\max(2d+r,m+1) = 2k+\max(2d+r,1) = 2k+2d+r\]
and a surjection for
\[n = 2k+\max(2d+r-1,m) = 2k+\max(2d+r-1,0) = 2k+2d+r-1.\]
In both of these we use the fact that $r \geq 2$, which holds since the condition
$(\SR_r)$ only makes sense for $r \geq 2$ (see Definition \ref{definition:stablerank}).
These bounds are even stronger than our purported bounds of
\[n \geq \max(m,2k+2d+r+1) = \max(0,2k+2d+r+1) = 2k+2d+r+1\]
for \eqref{eqn:glprimetoprove} to be an isomorphism and
\[n \geq \max(m,2k+2d+r-1) = \max(0,2k+2d+r-1) = 2k+2d+r-1\]
for \eqref{eqn:glprimetoprove} to be a surjection.
\item The second is where $m \geq 0$ and $d = -1$. In this case, by the definition of a $\VIC(R)$-module
being polynomial of degree $-1$ starting at $m$ we have for $n \geq m$ that $M(R^n) = 0$ and
hence $\HH_k(\GL_{n}^{\cK}(R);M(R^n)) = 0$. In other words, for $n \geq m$ the domain
and codomain of \eqref{eqn:glprimetoprove} are both $0$, so it is trivially an isomorphism.
\end{itemize}
Assume now that $m \geq 1$ and $d \geq 0$, and that the theorem is true for all smaller $m$ and $d$. As
in Definition \ref{definition:polyvic}, let $\Sigma M$ be the shifted $\VIC(R)$-module and $D M$ be the derived
$\VIC(R)$-module. For $n \geq m$, we have a short exact sequence
\begin{equation}
\label{eqn:vicshiftseq}
0 \longrightarrow M(R^n) \longrightarrow \Sigma M(R^n) \longrightarrow DM(R^n) \longrightarrow 0
\end{equation}
of $\bbk[\GL_n^{\cK}(R)]$-modules. The $\VIC(R)$-module $\Sigma M$ is polynomial
of degree $d$ starting at $(m-1)$, and the $\VIC(R)$-module $DM$ is polynomial of degree $(d-1)$ starting at $(m-1)$.
To simplify our notation, for all $s \geq 0$ and
all $\bbk[\GL_s^{\cK}(R)]$-modules $N$, we will denote $\HH_k(\GL_s^{\cK}(R);N)$ by $\HH_k(N)$.
The long exact sequence in $\GL_n^{\cK}(R)$-homology associated to \eqref{eqn:vicshiftseq}
maps to the one in $\GL_{n+1}^{\cK}(R)$-homology, so for $n \geq m$ and all $k$ we have a commutative diagram
\begin{center}
\scalebox{0.89}{$\minCDarrowwidth10pt\begin{CD}
\HH_{k+1}(\Sigma M(R^n)) @>>> \HH_{k+1}(DM(R^n)) @>>> \HH_k(M(R^n)) @>>> \HH_k(\Sigma M(R^n)) @>>> \HH_k(DM(R^n)) \\
@VV{g_1}V @VV{g_2}V @VV{f_1}V @VV{f_2}V @VV{f_3}V \\
\HH_{k+1}(\Sigma M(R^{n+1})) @>>> \HH_{k+1}(DM(R^{n+1})) @>>> \HH_k(M(R^{n+1})) @>>> \HH_k(\Sigma M(R^{n+1})) @>>> \HH_k(DM(R^{n+1}))
\end{CD}$}
\end{center}
with exact rows. Our inductive hypothesis says the following about the $g_i$ and $f_i$:
\begin{itemize}
\item Since $\Sigma M$ is polynomial of degree $d$ starting at $(m-1)$, the map $f_2$ is an isomorphism
for $n \geq \max(m-1,2k+2d+r+1)$ and a surjection for $n \geq \max(m-1,2k+2d+r-1)$. Also, the map
$g_1$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2d+r+1) = \max(m-1,2k+2d+r+3)\]
and a surjection for $n \geq \max(m-1,2k+2d+r+1)$.
\item Since $DM$ is polynomial of degree $(d-1)$ starting at $(m-1)$, the map $f_3$ is an
isomorphism for
\[n \geq \max(m-1,2k+2(d-1)+r+1) = \max(m-1,2k+2d+r-1)\]
and a surjection for $n \geq \max(m-1,2k+2d+r-3)$. Also, the map $g_2$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2(d-1)+r+1) = \max(m-1,2k+2d+r+1)\]
and a surjection for $n \geq \max(m-1,2k+2d+r-1)$.
\end{itemize}
For $n \geq \max(m,2k+2d+r+1)$, the maps $g_2$ and $f_2$ and $f_3$ are isomorphisms and the map $g_1$ is a surjection, so
by the five-lemma the map $f_1$ is an isomorphism. For $n \geq \max(m,2k+2d+r-1)$, the maps $g_2$ and $f_2$ are surjections
and the map $f_3$ is an isomorphism, so by the five-lemma\footnote{Or, more precisely, one of the four-lemmas.} the
map $f_1$ is a surjection. The theorem follows.
\end{proof}
\section{Unipotence and its consequences}
\label{section:congruenceunipotence}
We now turn our attention to congruence subgroups. Before we can prove Theorem \ref{maintheorem:congruence}, we
need some preliminary results about unipotent representations.
\subsection{Unipotent representations}
Let $\bbk$ be a field and let $V$ be a vector space over $\bbk$. A {\em unipotent operator}
on $V$ is a linear map $f\colon V \rightarrow V$ that can be written as
$f = \text{id}_V + \phi$ where $\phi\colon V \rightarrow V$ is nilpotent, i.e., there exists
some $k \geq 1$ such that $\phi^k = 0$. If $G$ is a group and $V$ is a $\bbk[G]$-module,
then we say that $V$ is a {\em unipotent representation} of $G$ if all elements of $G$
act on $V$ via unipotent operators. We will mostly be interested in abelian $G$, where
this can be checked on generators:
\begin{lemma}
\label{lemma:unipotentbygen}
Let $G$ be an abelian group generated by a set $S$, let $\bbk$ be a field, and
let $V$ be a $\bbk[G]$-module. Assume that each $s \in S$ acts on $V$
via a unipotent operator. Then $V$ is a unipotent representation of $G$.
\end{lemma}
\begin{proof}
It is enough to prove that if $g_1,g_2 \in G$ are elements that both act
on $V$ via unipotent operators, then $g_1 g_2$ also acts via a unipotent operator.
Let $g_i$ act on $V$ via the linear map $f_i\colon V \rightarrow V$, and write
$f_i = \text{id}_V + \phi_i$ with $\phi_i$ nilpotent. We thus have
\[f_1 f_2 = \text{id}_V + \phi_1 + \phi_2 + \phi_1 \phi_2.\]
Since the $g_i$ commute, the $\phi_i$ also commute. This implies that
$\phi_1+\phi_2+\phi_1 \phi_2$ is nilpotent, so $f_1 f_2$ is a unipotent operator.
\end{proof}
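The following special case, included only as an illustration, shows the mechanism of the proof in its simplest form.
\begin{example}
Let $\bbk$ be a field and let $G$ be the group of upper unitriangular matrices in $\GL_2(\bbk)$ acting on $V = \bbk^2$. Here
\[\left(\begin{matrix} 1 & a \\ 0 & 1 \end{matrix}\right)\left(\begin{matrix} 1 & b \\ 0 & 1 \end{matrix}\right) = \left(\begin{matrix} 1 & a+b \\ 0 & 1 \end{matrix}\right),\]
so a product of two of these commuting unipotent operators is again unipotent, the nilpotent parts simply adding since their product vanishes.
\end{example}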
\subsection{Unipotence and $\VIC(R)$-modules}
The following shows how to find many unipotent operators within a $\VIC(R)$-module. We thank Harman
for explaining its proof to us.
\begin{lemma}
\label{lemma:vicunipotent}
Let $R$ be a ring, let $\bbk$ be a field of characteristic $0$, and let $M$ be a
$\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m$.
Assume that $M(R^n)$ is a finite-dimensional vector space over $\bbk$ for all $n$.
Then there exists some $u \geq 0$ such that for $n \geq u$, all elementary
matrices in $\GL_n(R)$ act on $M(R^n)$ via unipotent operators.
\end{lemma}
\begin{proof}
Whether or not an operator is unipotent is unchanged by field extensions, so without
loss of generality we can assume that $\bbk$ is algebraically closed. Via the ring
homomorphism $\Z \rightarrow R$, regard $M$ as a $\VIC(\Z)$-module. This does
not change the fact that it is polynomial of degree $d$ starting at $m$. We
can thus appeal to a theorem of Harman \cite[Proposition 4.4]{HarmanVIC}\footnote{The
statement of Harman's theorem requires $\bbk = \C$, but the proof just uses
the fact that $\bbk$ is algebraically closed.} saying
that there exists some $u \geq 0$ such that for all $n \geq u$, the action of
$\SL_n(\Z)$ on $M(\Z^n) = M(R^n)$ extends to a rational representation of the
algebraic group $\SL_n$.
Increasing $u$ if necessary, we can assume that $u \geq 3$.
Consider some $n \geq u$. For distinct $1 \leq i,j \leq n$ and $r \in R$, let
$e_{ij}^r \in \GL_n(R)$ be the elementary
matrix obtained from the identity by putting $r$ at position $(i,j)$.
We must check that each $e_{ij}^r$ acts on $M(R^n)$ as a unipotent
operator.
Since the action of $\SL_n(\Z)$ on $M(R^n)$ extends to a
rational representation of $\SL_n$, each elementary matrix in $\SL_n(\Z)$ acts as a unipotent
operator \cite[Theorem I.4.4]{BorelAlgebraic}. For distinct $1 \leq i,j,k \leq n$,
we have the matrix identity
\[e_{ij}^r = [e_{ik}^1,e_{kj}^r] = e_{ik}^1 e_{kj}^r \left(e_{ik}^1\right)^{-1} \left(e_{kj}^r\right)^{-1}.\]
Manipulating this, we get
\[\left(e_{ik}^1\right)^{-1} e_{ij}^r = e_{kj}^r \left(e_{ik}^1\right)^{-1} \left(e_{kj}^r\right)^{-1}.\]
Since $e_{ik}^1$ acts on $M(R^n)$ as a unipotent operator and the class of unipotent operators
is closed under conjugation and inversion, the right hand side of this expression acts on $M(R^n)$
as a unipotent operator. The matrices
\[e_{ik}^1 \quad \text{and} \quad \left(e_{ik}^1\right)^{-1} e_{ij}^r\]
commute and act on $M(R^n)$ as unipotent operators, so by Lemma \ref{lemma:unipotentbygen} their
product $e_{ij}^r$ acts on $M(R^n)$ as a unipotent operator, as desired.
\end{proof}
\subsection{Stability of invariants}
If $V$ is a module over a commutative ring $\bbk$ and $f\colon V \rightarrow V$ is
a module homomorphism, then let
\[V^f = \Set{$v \in V$}{$f(v) = v$}\]
denote the submodule of invariants. One basic property of unipotent operators on vector spaces over a field of
characteristic $0$ is as follows:
\begin{lemma}
\label{lemma:stabilityinvariants}
Let $V$ be a finite-dimensional vector space over a field $\bbk$ of characteristic $0$ and let $f\colon V \rightarrow V$
be a unipotent operator. Then for all $n \geq 1$ we have $V^f = V^{f^n}$.
\end{lemma}
\begin{proof}
Since $V^f \subset V^{f^n}$, it is enough to prove that $\dim(V^f) = \dim(V^{f^n})$.
Write $f = \text{id}_V + \phi$ with $\phi$ nilpotent. Set $c_i = \binom{n}{i}$, so
\[f^n = \text{id}_V + \phi' \quad \text{with} \quad \phi' = \sum_{i=1}^n c_i \phi^i.\]
We have
\[V^f = \ker(\phi) \quad \text{and} \quad V^{f^n} = \ker(\phi'),\]
so our goal is to prove that $\ker(\phi)$ and $\ker(\phi')$ have the same dimension.
Since $\bbk$ has characteristic $0$, we have $c_1 = n \neq 0$.
This allows us to write
\[\phi' = c_1 \phi \circ \left(\text{id}_V + \sum_{i=1}^{n-1} \frac{c_{i+1}}{c_1} \phi^i\right) = c_1 \phi \circ\left(\text{id}_V + \phi''\right) \quad \text{with} \quad \phi'' = \sum_{i=1}^{n-1} \frac{c_{i+1}}{c_1} \phi^i.\]
The linear map $\phi''$ is nilpotent, so $\text{id}_V + \phi''$ is invertible. It follows that
\[\dim \ker(\phi') = \dim \ker(c_1 \phi \circ (\text{id}_V + \phi'')) = \dim \ker(c_1 \phi) = \dim \ker(\phi),\]
as desired.
\end{proof}
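The hypothesis that $\bbk$ has characteristic $0$ is genuinely needed here, as the following small example shows.
\begin{example}
Let $V = \bbk^2$ and let $f$ be the unipotent operator with matrix
\[\left(\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right), \quad \text{so $f^n$ has matrix} \quad \left(\begin{matrix} 1 & n \\ 0 & 1 \end{matrix}\right).\]
If $\bbk$ has characteristic $0$, then $V^f = V^{f^n}$ is the span of the first basis vector for all $n \geq 1$, as Lemma \ref{lemma:stabilityinvariants} asserts. If instead $\bbk$ had characteristic $p$, then $f^p = \text{id}_V$, so $V^{f^p} = V \neq V^f$.
\end{example}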
\subsection{Stability of homology}
Lemma \ref{lemma:stabilityinvariants} is the key input to the following result.
\begin{lemma}
\label{lemma:stabilityhomology}
Let $G$ be an abelian group and let $V$ be a finite-dimensional unipotent
representation of $G$ over a field $\bbk$ of characteristic $0$. Then for
all finite-index subgroups $G' < G$, the inclusion map $G' \hookrightarrow G$
induces an isomorphism $\HH_k(G';V) \cong \HH_k(G;V)$ for all $k \geq 0$.
\end{lemma}
\begin{proof}
We divide the proof into several steps.
\begin{stepsc}
This holds if $G$ is an infinite cyclic group.
\end{stepsc}
We thus have $G = \Z$ and $G' = n \Z$ for some $n \geq 1$. Recall that the
zeroth homology group is the coinvariants, which is isomorphic to the
invariants of the dual. The dual representation $V^{\ast}$ is also a unipotent
representation of $G$, so by Lemma \ref{lemma:stabilityinvariants} we
have
\[\HH_0(G;V) = \left(V^{\ast}\right)^G = \left(V^{\ast}\right)^{G'} = \HH_0(G';V).\]
For the first homology group, since $G = \Z$ is the fundamental group of the circle
we can apply Poincar\'{e} duality and use Lemma \ref{lemma:stabilityinvariants} to see that
\[\HH_1(G;V) \cong \HH^0(G;V) = V^G = V^{G'} = \HH^0(G';V) \cong \HH_1(G';V).\]
Finally, for $k \geq 2$, we have $\HH_k(G;V) = \HH_k(G';V) = 0$.
\begin{stepsc}
This holds if $G$ is a finite abelian group.
\end{stepsc}
In this case, we claim that $V$ is a trivial representation of $G$. Indeed, since
$G$ is finite it is enough to prove that every nontrivial unipotent operator
$f\colon V \rightarrow V$ has infinite order. Write $f = \text{id}_V + \phi$
with $\phi \neq 0$ a nilpotent operator. For all $n \geq 1$, we then have
\[f^n = \text{id}_V + \sum_{i=1}^n \binom{n}{i} \phi^i.\]
Since $\bbk$ has characteristic $0$, we have $\binom{n}{1} = n \neq 0$, so
the coefficient of $\phi$ in this expression is nonzero. Letting $r \geq 2$ be the nilpotency degree
of $\phi$, i.e., the minimal $r$ with $\phi^r = 0$, the operators $\{\text{id}_V, \phi, \phi^2,\ldots,\phi^{r-1}\}$
are linearly independent in the vector space of linear operators, so
$f^n \neq \text{id}_V$, as claimed.
From this, we see that
\[\HH_k(G';V) = \HH_k(G;V) = \begin{cases} V & \text{if $k=0$}, \\ 0 & \text{if $k \geq 1$}.\end{cases}\]
\begin{stepsc}
This holds if $G$ is a finitely generated abelian group.
\end{stepsc}
We can find a chain of subgroups
\[G = G_1 \supset G_2 \supset \cdots \supset G_n = G'\]
where for each $1 \leq i < n$ the group $G_{i+1}$ is an index-$p_i$ subgroup of $G_i$ for some prime $p_i$.
From this, we see that we can assume without loss of generality that $G'$ is an index-$p$ subgroup of $G$
for some prime $p$.
Since $G'$ is an index-$p$ subgroup of the finitely generated abelian group $G$, we can
write $G = C \oplus G''$ and $G' = C' \oplus G''$ with $C$ a cyclic subgroup of $G$ and
$C'$ an index-$p$ subgroup of $C$. We thus have a commutative diagram of short exact sequences
\[\begin{CD}
1 @>>> C' @>>> G' @>>> G'' @>>> 1 \\
@. @VVV @VVV @VV{=}V @.\\
1 @>>> C @>>> G @>>> G'' @>>> 1.\end{CD}\]
This induces a map between the Hochschild--Serre spectral sequences computing $\HH_{\bullet}(G';V)$ and
$\HH_{\bullet}(G;V)$. On the $E^2$-page, this morphism takes the form
\begin{equation}
\label{eqn:abelianspectralsequence}
\HH_p(G'';\HH_q(C';V)) \rightarrow \HH_p(G'';\HH_q(C;V)).
\end{equation}
The previous two steps imply that the inclusion $C' \hookrightarrow C$ induces isomorphisms
$\HH_q(C';V) \cong \HH_q(C;V)$ for all $q$, so the map \eqref{eqn:abelianspectralsequence}
is an isomorphism for all $p$ and $q$. The spectral sequence comparison theorem now implies that
$\HH_k(G';V) \cong \HH_k(G;V)$ for all $k$, as desired.
\begin{stepsc}
This holds if $G$ is an arbitrary abelian group.
\end{stepsc}
Let $\fF$ be the set of finitely generated subgroups of $G$. We thus have
\[G = \lim_{\substack{\longrightarrow \\ H \in \fF}} H \quad \text{and} \quad G' = \lim_{\substack{\longrightarrow \\ H \in \fF}} H \cap G'.\]
Since homology commutes with direct limits, this reduces us to the previous case.
\end{proof}
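To illustrate Lemma \ref{lemma:stabilityhomology} in its simplest case, let $G = \Z$ act on $V = \Q^2$ with a generator acting by the unipotent operator $f = \text{id}_V + \phi$, where $\phi$ is the nilpotent matrix with a single $1$ in its upper right corner, and let $G' = 2\Z$. Since $\phi^2 = 0$, the generator $f^2$ of $G'$ is $\text{id}_V + 2\phi$, so
\[\HH_0(G;V) = V/\Image(\phi) = V/\Image(2\phi) = \HH_0(G';V)\]
is $1$-dimensional, both first homology groups can be identified with the $1$-dimensional subspace $\ker(\phi)$, and all higher homology groups vanish.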
\section{Stability for congruence subgroups}
\label{section:congruencestability}
We now turn to the proof of Theorem \ref{maintheorem:congruence}. We will
actually prove three results, each stronger than the last, with the third a generalization
of Theorem \ref{maintheorem:congruence}.
\subsection{Elementary matrices and unipotence}
The first, which we regard as the heart of the whole proof, is as follows.
The differences between this result and Theorem \ref{maintheorem:congruence}
are as follows:
\begin{itemize}
\item It involves a $\VIC(R)$-module that is polynomial of degree $d \geq 0$
starting at $0$.
\item It assumes that elementary matrices act unipotently on $M(R^n)$.
\item It concerns the elementary congruence subgroup $\EL_n(R,\alpha)$
rather than $\GL_n(R,\alpha)$.
\item Finally, it has a better range of stability.
\end{itemize}
\begin{theorem}
\label{theorem:congruenceweak}
Let $R$ be a ring satisfying $(\SR_r)$, let $\bbk$ be a field of characteristic $0$, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d \geq 0$ starting at $0$.
For each $n \geq 0$, assume that $M(R^n)$ is a finite-dimensional vector space over $\bbk$
and that each elementary matrix in $\GL_n(R)$ acts unipotently on $M(R^n)$.
Then for all $2$-sided ideals $\alpha$ of $R$ such that $R/\alpha$ is finite, the map
\[\HH_k(\EL_n(R,\alpha);M(R^n)) \rightarrow \HH_k(\EL_n(R);M(R^n))\]
is an isomorphism for $n \geq 2k+2d+r+1$.
\end{theorem}
\begin{proof}
Theorem \ref{theorem:xl} says that the stabilization map
\[\HH_k(\EL_n(R);M(R^n)) \rightarrow \HH_k(\EL_{n+1}(R);M(R^{n+1}))\]
is an isomorphism for $n \geq 2k+2d+r+1$. We proved this by
verifying the conditions of our stability machine (Theorem \ref{theorem:stabilitymachine}) with
the parameter $c=2$ (corresponding to the multiple $2$ in front of $k$ in $n \geq 2k+2d+r+1$).
Letting $e = \max(2d-1,m-r)$, the inputs to Theorem \ref{theorem:stabilitymachine} were
\begin{center}
\scalebox{0.89}{$G_n = \EL_{n+e+r}(R) \quad \text{and} \quad M_n = M(R^{n+e+r}) \quad \text{and} \quad \bbX_n = \OBases(R^{n+e,r})
\quad \text{and} \quad \cM_n = \cG_{M,n+e,r}.$}
\end{center}
To prove that furthermore the map
\[\HH_k(\EL_n(R,\alpha);M(R^n)) \rightarrow \HH_k(\EL_n(R);M(R^n))\]
is an isomorphism for $n \geq 2k+2d+r+1$, it is enough to verify the additional
hypotheses of Theorem \ref{theorem:stabilitymachinefi} for
\[G'_n = \EL_{n+e+r}(R,\alpha).\]
These additional hypotheses are numbered (e)-(h), and we verify each of them in turn.
Condition (e) says that each $M_n$ is a vector space over a field $\bbk$ of characteristic $0$,
which is one of our hypotheses.
Condition (f) says that for the $k$-simplex $\sigma_k$ of $\bbX_n$ from condition (b) of Theorem
\ref{theorem:stabilitymachine} whose $G_n$-stabilizer is $G_{n-k-1}$, the $G'_n$-stabilizer of
$\sigma_k$ is $G'_{n-k-1}$. Looking back at our proof of Theorem \ref{theorem:xl}, the
simplex $\sigma_k$ is as follows. Let $\{v_1,\ldots,v_{n+e+r}\}$ be the standard basis for $R^{n+e+r}$.
We then have
\[\sigma_k = \begin{cases}
(;R^{n+e+r}) & \text{if $k=-1$},\\
(v_{n+e+r-k},v_{n+e+r-k+1},\ldots,v_{n+e+r}; R^{n+e+r-k-1}) & \text{if $0 \leq k \leq n$}.
\end{cases}\]
Thus the $\GL_{n+e+r}(R)$-stabilizer of $\sigma_k$ is $\GL_{n+e+r-k-1}(R)$, and
by Lemma \ref{lemma:stablesubgroup} the $G_n = \EL_{n+e+r}(R)$ stabilizer of
$\sigma_k$ is $G_{n-k-1} = \EL_{n+e+r-k-1}(R)$. Another application of
Lemma \ref{lemma:stablesubgroup} says that the $G'_n = \EL_{n+e+r}(R,\alpha)$
stabilizer of $\sigma_k$ is $G'_{n-k-1} = \EL_{n+e+r-k-1}(R,\alpha)$, as desired.
Condition (g) says that the quotient $\bbX_n / G'_n$ is $\frac{n-2}{2}$-connected.
Conclusion (iii) of Lemma \ref{lemma:obasesgood} says that
\[\bbX_n / G'_n = \OBases(R^{n+e,r}) / \EL_{n+e+r}(R,\alpha) \cong \OBases((R/\alpha)^{n+e,r}).\]
Lemma \ref{lemma:stablequotient} says that $R/\alpha$ satisfies $(\SR_r)$, so we
can apply Conclusion (iv) of Lemma \ref{lemma:obasesgood} to see that
$\OBases((R/\alpha)^{n+e,r})$ is a large ordering of a weakly Cohen--Macaulay complex
of dimension $\frac{n+e+1}{2}$. By Theorem \ref{theorem:largeordering}, this implies that
$\OBases((R/\alpha)^{n+e,r})$ is $\left(\frac{n+e+1}{2} - 1\right)$-connected, and
\[\frac{n+e+1}{2} - 1 = \frac{n+e-1}{2} \geq \frac{n-2}{2}.\]
Here we use the fact that $e = \max(2d-1,m-r)$ is at least $-1$, which
follows from our assumption that $d \geq 0$.
Finally, the key condition (h) says that for $k \geq 0$ and $n \geq 2k+2$, the action
of $G_n$ on $\HH_k(G'_n;M(R^n))$
induced by the conjugation action of $G_n$ on $G'_n$ fixes the image of the stabilization
map
\[\HH_k(G'_{n-1};M(R^{n-1})) \rightarrow \HH_k(G'_n;M(R^n)).\]
This is the content of the following claim, which is the heart of our proof. Note
that this claim is even stronger since it holds for all $G_n = \EL_{n+e+r}(R)$, not
just those where $n \geq 2k+2$.
\begin{claim}
The group $\EL_n(R)$ acts trivially on the image of the stabilization map
\begin{equation}
\label{eqn:elstab}
\HH_k(\EL_{n-1}(R,\alpha);M(R^{n-1})) \rightarrow \HH_k(\EL_n(R,\alpha);M(R^n)).
\end{equation}
\end{claim}
\begin{proof}[Proof of claim]
The proof generalizes a beautiful argument of Charney from
\cite{CharneyCongruence}. For distinct $1 \leq i,j \leq n$ and $r \in R$, let $e_{ij}^r \in \EL_n(R)$
denote the elementary matrix obtained from the identity by putting $r$ at position
$(i,j)$. Define $A$ to be the subgroup of $\EL_n(R)$ generated by
$\Set{$e_{nj}^r$}{$1 \leq j \leq n-1$, $r \in R$}$ and let $B$ be the subgroup
generated by $\Set{$e_{in}^r$}{$1 \leq i \leq n-1$, $r \in R$}$. Using the matrix
identity
\[e_{ij}^r = [e_{ik}^r,e_{kj}^1] \quad \text{for distinct $1 \leq i,j,k \leq n$ and $r \in R$},\]
we see that $\EL_n(R)$ is generated by $A \cup B$. It is thus enough to prove
that $A$ and $B$ act trivially on the image of \eqref{eqn:elstab}. The arguments
for $A$ and $B$ are similar, so we will give the details for $A$ and leave $B$
to the reader.
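For instance, when $n = 3$ this identity expresses $e_{12}^r$ as $[e_{13}^r,e_{32}^1]$, with $e_{13}^r \in B$ and $e_{32}^1 \in A$.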
The group $A$ is the abelian subgroup of $\GL_n(R)$ consisting of matrices that differ
from the identity only in the off-diagonal entries of their $n^{\text{th}}$ row. In particular, $A$ is isomorphic to the
abelian group $R^{n-1}$. Moreover, letting $\Gamma$ be the subgroup of $\EL_n(R)$ generated
by $A$ and $\EL_{n-1}(R)$, the group $\Gamma$ is a sort of ``affine group'' and in particular
\begin{equation}
\label{eqn:affine}
\Gamma = A \rtimes \EL_{n-1}(R).
\end{equation}
We would like to imitate this for $\EL_n(R,\alpha)$.
Define $A_{\alpha}$ to be the subgroup of $\EL_n(R,\alpha)$
generated by $\Set{$e_{nj}^r$}{$1 \leq j \leq n-1$, $r \in \alpha$}$. As
an abelian group, we have $A_{\alpha} \cong \alpha^{n-1}$. Define $\Gamma_{\alpha}$
to be the subgroup of $\EL_n(R,\alpha)$ generated by $A_{\alpha}$ and $\EL_{n-1}(R,\alpha)$,
so just like \eqref{eqn:affine} we have
\begin{equation}
\label{eqn:affine2}
\Gamma_{\alpha} = A_{\alpha} \rtimes \EL_{n-1}(R,\alpha).
\end{equation}
The stabilization map \eqref{eqn:elstab} factors through the map
\[\HH_k(\Gamma_{\alpha};M(R^n)) \rightarrow \HH_k(\EL_n(R,\alpha);M(R^n))\]
induced by the inclusion $\Gamma_{\alpha} \hookrightarrow \EL_n(R,\alpha)$.
The conjugation action of $A$ on $\EL_n(R,\alpha)$ takes $\Gamma_{\alpha}$ to itself.
It is thus enough to prove that the conjugation action of $A$ on
$\HH_k(\Gamma_{\alpha};M(R^n))$ is trivial. Define $\Gamma'_{\alpha}$ to be the subgroup
of $\EL_n(R)$ generated by $A$ and $\EL_{n-1}(R,\alpha)$. Since inner
automorphisms act trivially on homology (even with twisted coefficients; see
\cite[Proposition III.8.1]{BrownCohomology}), the conjugation action of
$A$ on $\HH_k(\Gamma'_{\alpha};M(R^n))$ is trivial. It is thus enough to prove
that the inclusion $\Gamma_{\alpha} \hookrightarrow \Gamma'_{\alpha}$ induces an
isomorphism
$\HH_k(\Gamma_{\alpha};M(R^n)) \cong \HH_k(\Gamma'_{\alpha};M(R^n))$.
For this, observe that we have a commutative diagram
\[\begin{CD}
1 @>>> A_{\alpha} @>>> \Gamma_{\alpha} @>>> \EL_{n-1}(R,\alpha) @>>> 1 \\
@. @VVV @VVV @VV{=}V @.\\
1 @>>> A @>>> \Gamma'_{\alpha} @>>> \EL_{n-1}(R,\alpha) @>>> 1
\end{CD}\]
with exact rows. The first row is the split exact sequence corresponding
to \eqref{eqn:affine2}, and the second row is the split exact sequence
corresponding to the similar semidirect product decomposition of $\Gamma'_{\alpha}$.
This induces a morphism between the Hochschild--Serre spectral sequences
computing the homology of $\Gamma_{\alpha}$ and $\Gamma'_{\alpha}$ with coefficients
in $M(R^n)$. On the $E^2$-page, this map of spectral sequences takes the form
\begin{equation}
\label{eqn:affiness}
\HH_p(\EL_{n-1}(R,\alpha);\HH_q(A_{\alpha};M(R^n))) \rightarrow \HH_p(\EL_{n-1}(R,\alpha);\HH_q(A;M(R^n))).
\end{equation}
The abelian group $A_{\alpha}$ is a finite-index subgroup of the abelian group $A$, and
since we assumed that elementary matrices act unipotently on $M(R^n)$ we can appeal
to Lemma \ref{lemma:unipotentbygen} to see that the action of $A$ on $M(R^n)$ is a unipotent
action. Lemma \ref{lemma:stabilityhomology} therefore implies that the inclusion
$A_{\alpha} \hookrightarrow A$ induces isomorphisms
\[\HH_q(A_{\alpha};M(R^n)) \cong \HH_q(A;M(R^n)) \quad \text{for all $q$}.\]
We deduce that \eqref{eqn:affiness} is an isomorphism for all $p$ and $q$. The
spectral sequence comparison theorem therefore implies that
the inclusion $\Gamma_{\alpha} \hookrightarrow \Gamma'_{\alpha}$ induces an
isomorphism
\[\HH_k(\Gamma_{\alpha};M(R^n)) \cong \HH_k(\Gamma'_{\alpha};M(R^n)),\]
as desired.
\end{proof}
Having verified conditions (e)--(h), the theorem now follows from Theorem \ref{theorem:stabilitymachinefi}.
\end{proof}
\subsection{Non-elementary subgroups}
We now generalize Theorem \ref{theorem:congruenceweak} by extending it to subgroups other than the elementary ones.
This causes us to have a worse range of stability.
\begin{theorem}
\label{theorem:xlcongruenceweak}
Let $R$ be a ring satisfying $(\SR_r)$, let $\cK \subset \KK_1(R)$ be a subgroup, let $\bbk$ be a field of characteristic $0$, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d \geq 0$ starting at $0$.
Assume furthermore that $M(R^n)$ is a finite-dimensional vector space over $\bbk$ for all $n \geq 0$
and that each elementary matrix in $\GL_n(R)$ acts unipotently on $M(R^n)$.
Then for all $2$-sided ideals $\alpha$ of $R$ such that $R/\alpha$ is finite and all subgroups $\cK' \subset \KK_1(R,\alpha)$ that
map onto $\cK$ under the map $\KK_1(R,\alpha) \rightarrow \KK_1(R)$, the map
\[\HH_k(\GL_n^{\cK'}(R,\alpha);M(R^n)) \rightarrow \HH_k(\GL_n^{\cK}(R);M(R^n))\]
is an isomorphism for $n \geq 2k+2d+2r$.
\end{theorem}
\begin{proof}
Consider some $n \geq 2k+2d+2r$. Using Theorem \ref{theorem:injectivek1}, we have a commutative diagram
\[\begin{CD}
1 @>>> \EL_n(R,\alpha) @>>> \GL_n^{\cK'}(R,\alpha) @>>> \cK' @>>> 1 \\
@. @VVV @VVV @VVV @. \\
1 @>>> \EL_n(R) @>>> \GL_n^{\cK}(R) @>>> \cK @>>> 1
\end{CD}\]
with exact rows. This induces a map between the Hochschild--Serre spectral sequences computing the homology
of $\GL_n^{\cK'}(R,\alpha)$ and $\GL_n^{\cK}(R)$ with coefficients in $M(R^n)$. On the $E^2$-page, this
map of spectral sequences is of the form
\begin{equation}
\label{eqn:sstoanalyze}
\HH_p(\cK';\HH_q(\EL_n(R,\alpha);M(R^n))) \rightarrow \HH_p(\cK;\HH_q(\EL_n(R);M(R^n))).
\end{equation}
We analyze this via two claims.
\begin{claim}
For all $q \leq k$, the map $\HH_q(\EL_n(R,\alpha);M(R^n)) \rightarrow \HH_q(\EL_n(R);M(R^n))$
is an isomorphism.
\end{claim}
\begin{proof}[Proof of claim]
Immediate from Theorem \ref{theorem:congruenceweak}.
\end{proof}
\begin{claim}
For all $q \leq k$, the action of the group $\cK$ (resp.\ $\cK'$) on $\HH_q(\EL_n(R);M(R^n))$ (resp.\ $\HH_q(\EL_n(R,\alpha);M(R^n))$)
is trivial.
\end{claim}
\begin{proof}[Proof of claim]
The previous claim implies that it is enough to prove that $\cK$ acts trivially on $\HH_q(\EL_n(R);M(R^n))$. In
fact, we will prove that all of $\KK_1(R)$ acts trivially on $\HH_q(\EL_n(R);M(R^n))$.
Since
\[n-(r-1) \geq 2k+2d+r+1,\]
Theorem \ref{theorem:xlprime} implies that the map
\begin{equation}
\label{eqn:destabilize}
\HH_q(\EL_{n-(r-1)}(R);M(R^{n-(r-1)})) \rightarrow \HH_q(\EL_n(R);M(R^n))
\end{equation}
is surjective. Theorem \ref{theorem:injectivek1} together with Lemma \ref{lemma:generategl} implies
that the subgroup $\GL_{r-1}(R)$ of $\GL_n(R)$ surjects onto $\KK_1(R)$ under the map
$\GL_n(R) \rightarrow \KK_1(R)$ whose kernel is $\EL_n(R)$. The group $\GL_{r-1}(R)$ is embedded
in $\GL_n(R)$ using the upper-left-hand matrix embedding, but we can conjugate it by any
element of $\GL_n(R)$ and it will still surject onto $\KK_1(R)$. We can thus use the lower-right-hand
embedding, which makes $\GL_{r-1}(R)$ commute with $\EL_{n-(r-1)}(R)$. This implies that
$\KK_1(R)$ acts trivially on the image of \eqref{eqn:destabilize}, and hence on
$\HH_q(\EL_n(R);M(R^n))$.
\end{proof}
By the second Claim, for $q \leq k$ we can rewrite our map of spectral sequences \eqref{eqn:sstoanalyze} as
\begin{equation}
\label{eqn:ssanalyzed}
\HH_p(\cK';\bbk) \otimes \HH_q(\EL_n(R,\alpha);M(R^n)) \rightarrow \HH_p(\cK;\bbk) \otimes \HH_q(\EL_n(R);M(R^n)).
\end{equation}
The map $\cK' \rightarrow \cK$ of abelian groups has a finite kernel and cokernel.\footnote{This follows from the fact
that the map $\KK_1(R,\alpha) \rightarrow \KK_1(R)$ has a finite kernel and cokernel, which follows from the fact that
it fits into an exact sequence \cite[Theorem 6.2]{MilnorKTheory}
\[\KK_2(R/\alpha) \rightarrow \KK_1(R,\alpha) \rightarrow \KK_1(R) \rightarrow \KK_1(R/\alpha)\]
along with the fact that $R/\alpha$ is finite.} Since $\bbk$ is a field of characteristic $0$, this implies that the map
$\HH_p(\cK';\bbk) \rightarrow \HH_p(\cK;\bbk)$ is an isomorphism for all $p$. Combining this with the first Claim,
we see that \eqref{eqn:ssanalyzed} is an isomorphism for all $p$ and $q$ with $q \leq k$. By the spectral sequence
comparison theorem, we deduce that the map
\[\HH_k(\GL_n^{\cK'}(R,\alpha);M(R^n)) \rightarrow \HH_k(\GL_n^{\cK}(R);M(R^n))\]
is an isomorphism, as desired.
\end{proof}
\subsection{The general case: removing unipotence}
We now prove the following theorem, which generalizes Theorem \ref{maintheorem:congruence}.
\begin{theorem}
\label{theorem:xlcongruence}
Let $R$ be a ring satisfying $(\SR_r)$, let $\cK \subset \KK_1(R)$ be a subgroup, let $\bbk$ be a field of characteristic $0$, and let
$M$ be a $\VIC(R)$-module over $\bbk$ that is polynomial of degree $d$ starting at $m$.
Assume furthermore that $M(R^n)$ is a finite-dimensional vector space over $\bbk$ for all $n \geq 0$.
Then for all $2$-sided ideals $\alpha$ of $R$ such that $R/\alpha$ is finite and all subgroups $\cK' \subset \KK_1(R,\alpha)$ that
map onto $\cK$ under the map $\KK_1(R,\alpha) \rightarrow \KK_1(R)$, the map
\begin{equation}
\label{eqn:xlcongruence}
\HH_k(\GL_n^{\cK'}(R,\alpha);M(R^n)) \rightarrow \HH_k(\GL_n^{\cK}(R);M(R^n))
\end{equation}
is an isomorphism for $n \geq \max(m,2k+2d+2r)$.
\end{theorem}
\begin{proof}
Lemma \ref{lemma:vicunipotent} says that there exists some $u$ such that for $n \geq u$,
all elementary matrices in $\GL_n(R)$ act unipotently on $M(R^n)$. The {\em parameters} of $M$ are the triple $(d,m,u)$. They satisfy
$d \geq -1$ and $m,u \in \Z$.
We will prove the theorem by triple induction on the parameters $(d,m,u)$. There are two base cases:
\begin{itemize}
\item The first is where $d \geq 0$ and $m,u \leq 0$. In this case, Theorem \ref{theorem:xlcongruenceweak} says
that \eqref{eqn:xlcongruence} is an isomorphism for
\[n \geq 2k+2d+2r = \max(m,2k+2d+2r),\]
as desired.
\item The second is where $d = -1$ and $m \geq 0$ and $u \geq 0$. In this case, by the definition of a $\VIC(R)$-module
being polynomial of degree $-1$ starting at $m$ we have for $n \geq m$ that $M(R^n) = 0$ and
hence
\[\HH_k(\GL_n^{\cK'}(R,\alpha);M(R^n)) = \HH_k(\GL_n^{\cK}(R);M(R^n)) = 0.\]
In other words, for $n \geq m$ the domain
and codomain of \eqref{eqn:xlcongruence} are both $0$, so it is trivially an isomorphism.
\end{itemize}
We can therefore assume that the parameters $(d,m,u)$ satisfy the following:
\begin{itemize}
\item $d \geq 0$, and either $m \geq 1$ or $u \geq 1$ (possibly both).
\end{itemize}
Moreover, we can assume as an inductive hypothesis that the theorem is true for all $M$ with parameters $(d',m',u')$ satisfying
the following:
\begin{itemize}
\item Either $d' \leq d-1$, or both $m' \leq m-1$ and $u' \leq u-1$.
\end{itemize}
Note that it is {\em not} enough to shrink just one of $m$ and $u$ since otherwise we might never reach one of our base cases.
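To illustrate the bookkeeping: starting from parameters $(d,m,u) = (1,2,5)$, each application of $\Sigma$ lowers both $m$ and $u$ by $1$, reaching $(1,-3,0)$ after five steps, at which point the first base case applies; if only $m$ were lowered, we would stall at triples like $(1,0,3)$ with $u \geq 1$, and neither base case would ever apply.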
As in Definition \ref{definition:polyvic}, let $\Sigma M$ be the shifted $\FI$-module and $D M$ be the derived
$\VIC(R)$-module. For $n \geq m$, we have a short exact sequence
\begin{equation}
\label{eqn:vicshiftseqcong}
0 \longrightarrow M(R^n) \longrightarrow \Sigma M(R^n) \longrightarrow DM(R^n) \longrightarrow 0
\end{equation}
of $\bbk[\GL_n(R)]$-modules. The $\VIC(R)$-module $\Sigma M$ has parameters $(d,m-1,u-1)$, and
the $\VIC(R)$-module $DM$ has parameters $(d-1,m-1,u-1)$.
To simplify our notation, for all $\bbk[\GL_n(R)]$-modules $N$, we will denote
\begin{itemize}
\item $\HH_k(\GL_n^{\cK}(R);N)$ by $\HH_k(N)$.
\item $\HH_k(\GL_n^{\cK'}(R,\alpha);N)$ by $\HH_k(\alpha,N)$.
\end{itemize}
The long exact sequence in $\GL_n^{\cK'}(R,\alpha)$-homology associated to \eqref{eqn:vicshiftseqcong}
maps to the one in $\GL_n^{\cK}(R)$-homology, so for $n \geq m$ and all $k$ we have a commutative diagram
\begin{center}
\scalebox{0.89}{$\minCDarrowwidth10pt\begin{CD}
\HH_{k+1}(\alpha, \Sigma M(R^n)) @>>> \HH_{k+1}(\alpha, DM(R^n)) @>>> \HH_k(\alpha, M(R^n)) @>>> \HH_k(\alpha, \Sigma M(R^n)) @>>> \HH_k(\alpha, DM(R^n)) \\
@VV{g_1}V @VV{g_2}V @VV{f_1}V @VV{f_2}V @VV{f_3}V \\
\HH_{k+1}(\Sigma M(R^n)) @>>> \HH_{k+1}(DM(R^n)) @>>> \HH_k(M(R^n)) @>>> \HH_k(\Sigma M(R^n)) @>>> \HH_k(DM(R^n))
\end{CD}$}
\end{center}
with exact rows. Since $\GL_n^{\cK'}(R,\alpha)$ is a finite-index subgroup of $\GL_n^{\cK}(R)$ and all our coefficients are vector spaces over a field
$\bbk$ of characteristic $0$, the transfer map (see \cite[Chapter III.9]{BrownCohomology}) implies that all the $f_i$ and $g_i$ are surjections.
Our inductive hypothesis says the following about them:
\begin{itemize}
\item Since $\Sigma M$ has parameters $(d,m-1,u-1)$, the map $f_2$ is an isomorphism
for $n \geq \max(m-1,2k+2d+2r)$ and the map $g_1$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2d+2r) = \max(m-1,2k+2d+2r+2).\]
\item Since $DM$ has parameters $(d-1,m-1,u-1)$, the map $f_3$ is an isomorphism for
\[n \geq \max(m-1,2k+2(d-1)+2r) = \max(m-1,2k+2d+2r-2)\]
and the map $g_2$ is an isomorphism for
\[n \geq \max(m-1,2(k+1)+2(d-1)+2r) = \max(m-1,2k+2d+2r).\]
\end{itemize}
For $n \geq \max(m,2k+2d+2r)$, the maps $g_2$ and $f_2$ and $f_3$ are isomorphisms and the map $g_1$ is a surjection (remember, it is always
a surjection!), so by the five-lemma the map $f_1$ is an isomorphism, as desired.
\end{proof}
\section{Introduction}\label{sec:intro}
Integrating cold atoms with nanophotonic devices makes it possible to create original light-matter interfaces. This effort has paved the way for cavity-QED platforms with unprecedented atom-field coupling via nanoscale cavities \cite{Lukin2013,Lukin2014} and for new opportunities in coupling atoms to guided light. Waveguide-QED platforms have emerged with promise not only for developing quantum information network capabilities but also as a new paradigm for creating exotic quantum phases of light and matter \cite{RMPKimble}. In this context, a number of capabilities were demonstrated using cold atoms coupled to the evanescent field of optical nanofibers \cite{Nieddu2016,Solano2017a,Nayak2007,Mitsch2014,Laurat2015,Sayrin2015,Laurat2016,Polzik2016,Solano2017,Polzik2018,Laurat2019,Kato2019,Nicchormaic2020,Nicchormaic2020a,Prasad2020}. More recently, periodic structuring into bandgap-engineered photonic-crystal waveguides has attracted considerable interest as a way to strongly enhance atom-photon coupling in a single pass or to provide band-gap-mediated atom-atom interactions \cite{RMPKimble,Goban2014,Kimble2016,Burgers2019}.
The key ingredient for achieving strong atom-photon coupling is the ability to trap atoms using the evanescent field of the guided modes. This is a challenging task. In a nanofiber-based platform, thanks to a featureless dispersion relation, a two-color evanescent field trap \cite{LeKien2004,Vetsch2010,Kimble2012a} is commonly used. For more complex structures, e.g., microtoroids or photonic-crystal waveguides, atom trapping via guided modes has remained a roadblock, and side illumination was mostly used heretofore \cite{Lukin2013,Goban2014,Kimble2020,Arno2021}. A specific feature of such optical microtraps is the strong gradients of electric field and polarization, which can introduce a spatially varying shift of the atomic energy levels, leading to inhomogeneous shifts and fictitious magnetic fields \cite{Arno2016}. Hence, finding an adequate trapping scheme requires an extensive optimization process for each structure. Moreover, considering real multilevel atoms with hyperfine and Zeeman sublevels adds to the complexity. It is thus convenient to have a versatile tool to optimize and characterize optical dipole traps, in particular near nanophotonic devices.
In this paper we present an open-source Python package, \texttt{\lowercase{nanotrappy}}{} \cite{Nanotrappyweb}, for computing state-dependent optical trapping potentials for multilevel alkali atoms. It leverages the community-maintained database accessible through the Alkali Rydberg Calculator~\cite{Robertson2021} to provide up-to-date computational results. The package provides the user with a programmatic API as well as customized graphical tools for trap visualization and easy parameter adjustment. These are particularly useful in the context of trap design and in-situ optimization. Even though we put an emphasis on trapping in evanescent fields close to nanophotonic structures, it is a platform-independent solution that addresses design and optimization problems faced by the much broader community trapping atoms with optical dipole potentials.
\begin{figure*}[t!]
\centerline{\includegraphics[width = 0.97\textwidth]{package_scheme.pdf}}
\caption{The \texttt{\lowercase{nanotrappy}}{} package for calculating optical trap potentials. The Python package takes multiple physical parameters as inputs (atom, trap configuration, material) as well as precomputed electric field maps, and returns the trap and its properties. A user-friendly graphical interface also makes it possible to tune the powers of the trapping beams, making the computation easy to integrate in a structure optimization workflow. Some examples of nanostructures for which optical trapping of atoms can be simulated in \texttt{\lowercase{nanotrappy}}{}: (a) Optical nanofiber \cite{Gouraud2016}, (b)~Alligator photonic-crystal waveguide \cite{Kimble2016}, (c)~Microtoroid \cite{Alton2013}, (d)~Nanoscale optical cavity \cite{Lukin2013}.}
\label{fig:package_scheme}
\end{figure*}
The workflow of \texttt{\lowercase{nanotrappy}}{} is illustrated in Fig. \ref{fig:package_scheme}. The package has four input classes, namely (i) the alkali atom for which the trap is to be simulated, (ii) the details of the trapping light, e.g., wavelengths, powers or propagation directions, (iii) the material of the structure, used for computing the surface force, and (iv) the field distribution of the light around the nanophotonic structure. The latter can be computed directly in our package for some simple structures like nanofibers, or imported, e.g., from FDTD calculations. With the above inputs, the package outputs the trapping potential for all the hyperfine Zeeman states of the ground and excited levels. It evaluates the relevant parameters of the trap, such as its position, depth and frequencies along all three axes. Once the trap has been computed for a given geometry and a set of parameters, the latter can be varied in real time using the interactive GUI to optimize the trap properties.
The paper is organized as follows. In Section~\ref{sec:theory} we first provide a brief summary of the theoretical framework used for calculating the optical potentials for multilevel alkali atoms. Scalar, vector and tensor shifts are taken into account. We then describe the package architecture and operation in Section~\ref{sec:package_description}, detailing the possible inputs and outputs. The versatility of the package is demonstrated in Section~\ref{sec:application} for three examples of nanophotonic structures, namely an optical nanofiber, an alligator photonic-crystal waveguide and a microtoroid, which have been considered in recent years for trapping atoms in evanescent fields.
Section~\ref{Conclusion} concludes the paper.
\section{Theoretical framework} \label{sec:theory}
Optically trapping atoms relies on the conservative potential created by the intensity distribution of detuned laser beams~\cite{Grimm2000}, a technique now widely used for trapping cold atoms in free space.
For optically trapping atoms at sub-wavelength distances from a dielectric surface, two specific issues arise.
First, the evanescent field leaking out of the dielectric material should be able to provide a stable trapping potential close to the surface~\cite{Balykin1991,Vetsch2010,Kimble2012a,LeKien2004}. Additionally, the Casimir-Polder interaction becomes sizeable at such distances and has to be taken into account~\cite{LeKien2004,Boustimi2002}.
In this section, we will briefly introduce the atom-light interaction concepts at the heart of the dipole trapping potential, with an emphasis on the influence of the Zeeman hyperfine levels. We recall the origin of scalar, vector and tensor shifts on the hyperfine manifold and their contributions to the polarizability tensor. Then, we discuss the Casimir-Polder interaction at sub-wavelength distance from the surface. Finally, all these contributions are included to compute the total trapping potential.
Note that despite the emphasis on trapping in the evanescent field of modes guided by a dielectric waveguide, the following presentation remains general and valid for any electric field distribution.
\subsection{Atom-light interaction: light shifts}\label{subseq:interaction}
We will consider here the case of a multilevel alkali atom interacting with a monochromatic optical field. To simplify the notation and easily describe the internal state of the atoms, we adopt convenient notations following \cite{Steck2019, LeKien2013}. We denote $|s\rangle =~|N,J,F,m_{F}\rangle$ the state of interest with bare energy $E_{s}$ and $|e_{i}\rangle = |N_{i}',J_{i}',F_{i}',m_{F,i}'\rangle$ all the states to which it is dipole-coupled, with bare energies $E_{e_{i}}$. $J$ stands for the total electronic angular momentum, $F$ for the total atomic angular momentum and $m_{F}$ for the magnetic quantum number. Here we choose $z$ as the quantization axis.
The Hamiltonian of such atom-field interaction is the textbook semi-classical electric-dipole interaction~\cite{CohenBook}: ${H_{\textrm{AF}}=-\vect{d}\cdot\vect{\mathcal{E}}}$. All the trapping fields are assumed to be classical fields, which is well justified experimentally considering the powers that are typically used.
In order to study the frequency response of an electric dipole interacting with an optical electric field, we define a polarizability tensor $\alpha_{\mu\nu}$ such that the mean induced dipole moment vector becomes:
\begin{equation}
\langle d^{(+)}_{\mu}(\omega) \rangle = \alpha_{\mu\nu}(F,m_{F};\omega)\mathcal{E}^{(+)}_{\nu} ,
\end{equation}
\noindent where $\mathcal{E}^{(+/-)}$ and $d^{(+/-)}$ denote the positive/negative frequency parts of the electric field and dipole operator respectively, and $\mu, \nu$ are spherical tensor indices taking values in $\{-1,0,1\}$.
Using time-dependent perturbation theory, one can derive the Kramers-Heisenberg polarizability tensor for a given state $|s\rangle$ and a given angular frequency $\omega$ for the electric field~\cite{Steckcorrection}:
\begin{equation}
\label{eq:polarizability}
\alpha_{\mu\nu}(s;\omega) = \sum_{i} \left(
\frac{\langle s|d_{\nu}|e_{i}\rangle\langle e_{i}|d_{\mu}|s\rangle}
{\hbar(\omega_{e_{i}s}-\omega)} +
\frac{\langle e_{i}|d_{\mu}|s\rangle\langle s|d_{\nu}|e_{i}\rangle}
{\hbar(\omega_{e_{i}s}+\omega)}
\right).
\end{equation}
\noindent Here $d_{q}$ represents the tensor component of the dipole operator $\vect{d}$ and $\omega_{e_{i}s}=(E_{e_{i}}-E_{s})/\hbar$.
Once this polarizability tensor has been defined, we can then express the Stark shift induced by the electric field $\vect{E}$ as:
\begin{equation}
\begin{aligned}
\Delta E(F,m_{F};\omega) &= -\frac{1}{2}\langle \vect{d}(\omega) \rangle \cdot \vect{\mathcal{E}} \\
&= -\frac{1}{2} \langle \vect{d}^{(+)}+\vect{d}^{(-)}\rangle \cdot(\vect{\mathcal{E}}^{(+)}+\vect{\mathcal{E}}^{(-)}) \\
&= -\mathrm{Re}(\alpha_{\mu\nu})\mathcal{E}^{(-)}_{\mu}\mathcal{E}^{(+)}_{\nu} .
\end{aligned}
\end{equation}
In order to study the effects of each component of the electric field, a usual method is to split the polarizability tensor into its irreducible parts, namely the scalar, vector and tensor polarizabilities. Such a decomposition is detailed in \cite{Steck2019}. For a rank-2 tensor $\alpha_{\mu\nu}$, the decomposition reads:
\begin{equation*}
\alpha_{\mu\nu} = \frac{1}{3}\alpha^{(0)}\delta_{\mu\nu} + \frac{1}{4}\alpha^{(1)}_{\sigma}\epsilon_{\sigma\mu\nu}+\alpha^{(2)}_{\mu\nu}
\end{equation*}
where:
\begin{equation}
\left\{
\begin{aligned}
&\alpha^{(0)} = \alpha_{\mu\mu}\\
&\alpha^{(1)}_{\sigma} = \epsilon_{\sigma\mu\nu}(\alpha_{\mu\nu}-\alpha_{\nu\mu})\\
&\alpha^{(2)}_{\mu\nu} = \alpha_{(\mu\nu)} - \frac{1}{3}\alpha_{\sigma\sigma}\delta_{\mu\nu} .
\end{aligned}
\right.
\end{equation}
After adding all three contributions, we obtain:
\begin{widetext}
\begin{equation}\label{eq:total_shift}
\Delta E(F,m_{F};\omega) = -\alpha^{(0)}(F;\omega)|\mathcal{E}^{(+)}|^{2}- \alpha^{(1)}(i\vect{\mathcal{E}}^{(-)}\times\vect{\mathcal{E}^{(+)}})_{0}\frac{m_{F}}{F}- \alpha^{(2)}(F;\omega)\frac{(3|\mathcal{E}^{(+)}_{0}|^{2}-|\mathcal{E}^{(+)}|^{2})}{2}\left(\frac{3m_{F}^{2}-F(F+1)}{F(2F-1)}\right) ,
\end{equation}
with:
\begin{equation}
\left\{
\begin{aligned}
\alpha^{(0)}(F;\omega) & = \sum_{F'}\frac{2\omega_{FF'}\rde{F}{F'}^{2}}{3\hbar(\omega_{FF'}^{2}-\omega^{2})} \\
\alpha^{(1)}(F;\omega) & = \sum_{F'} (-1)^{(F+F')}\sqrt{\frac{3F(2F+1)}{2(F+1)}}\sixj{1}{1}{1}{F}{F}{F'}\frac{\omega\rde{F}{F'}^{2}}{\hbar(\omega_{FF'}^{2}-\omega^{2})}\\
\alpha^{(2)}(F;\omega) & = \sum_{F'} (-1)^{(F+F')}\sqrt{\frac{40F(2F+1)(2F-1)}{3(F+1)(2F+3)}}\sixj{1}{1}{1}{F}{F}{F'}\frac{\omega_{FF'}\rde{F}{F'}^{2}}{\hbar(\omega_{FF'}^{2}-\omega^{2})} .
\end{aligned}
\right.
\end{equation}
\end{widetext}
\noindent $\alpha^{(0)}$, $\alpha^{(1)}$ and $\alpha^{(2)}$ stand for the scalar, vector and tensor polarizabilities respectively.
The quantities typically available for computations are the reduced dipole elements $\rde{J}{J'}$ between states of different electronic angular momenta, with the useful relation:
\begin{align}
\rde{F}{F'} &= \rde{J}{J'} \times (-1)^{1+F'+J+I} \nonumber \\
&\quad \times \sqrt{2F'+1}\sixj{J}{J'}{1}{F'}{F}{I}
\end{align}
\noindent where $\{:::\}$ is the Wigner 6-$j$ symbol.
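This relation is straightforward to evaluate symbolically. The following minimal sketch (a standalone illustration, not part of \texttt{\lowercase{nanotrappy}}{}) uses the Wigner 6-$j$ implementation of \texttt{sympy}, with the reduced element $\rde{J}{J'}$ supplied by the user, e.g., from ARC:
\begin{lstlisting}
from sympy import sqrt, Integer, Rational
from sympy.physics.wigner import wigner_6j

def reduced_dipole_F(dJJp, J, Jp, F, Fp, I):
    #<F||d||F'> from <J||d||J'>; angular momenta as sympy Rationals
    phase = Integer(-1)**(1 + Fp + J + I)
    return dJJp * phase * sqrt(2*Fp + 1) * wigner_6j(J, Jp, 1, Fp, F, I)

#Cs D2 line: J=1/2 -> J'=3/2, I=7/2, hyperfine F=4 -> F'=5
ratio = reduced_dipole_F(1, Rational(1,2), Rational(3,2), 4, 5, Rational(7,2))
\end{lstlisting}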
\begin{figure*}
\includegraphics[width = \textwidth]{full_levels.pdf}
\caption{
Effect of the different shift contributions induced by a $\pi$ and $\sigma^{+}$ polarized light on the $|6\mathrm{S}_{1/2},F=4\rangle$ and $|6\mathrm{P}_{3/2},F'=5\rangle$ manifolds of cesium. As a reference, numerical values for the shifts and broadenings are given for an intensity of $10$~mW/$\mu$m$^2$ at a wavelength of $1064$~nm. Vertical axis not to scale.
(a) Unperturbed hyperfine structure.
(b) Effect of the scalar light shift: each manifold is offset differently but irrespective of the $m_{F}$ number. This amounts to a change of the resonant transition frequency. This can be compensated using magic wavelengths, for which the shift is the same for both states: the levels are still perturbed but the transition frequency remains the same.
(c) Effect of the scalar and vector light shifts. The linear dependence on $m_{F}$ of the vector shift is clearly visible, hence the parallel made with a fictitious magnetic field. This shift causes a broadening of the hyperfine manifold when performing spectroscopic measurements, and can be canceled by using only linearly polarized light fields.
(d) Total shift. The tensor shift has a quadratic dependence on $m_{F}$.
In most cases, the unperturbed Zeeman states $|m_{F}\rangle$ are not eigenstates of the complete Hamiltonian.
}
\label{fig:level_splitting}
\end{figure*}
We now focus on the effects of the different contributions, as shown in Fig.~\ref{fig:level_splitting}.
We can see in Fig.~\ref{fig:level_splitting}(b) that the scalar shift amounts to an offset of the hyperfine manifold which depends only on the wavelength and the intensity of the light. This offset is state-dependent (between ground and excited state manifolds) and thus leads to a shift of the resonant transition.
From an experimental point of view, this shift can in some cases be suppressed by making use of the so-called magic wavelengths~\cite{Ye2008,Kimble2003}, which are available for some alkali atoms like cesium. When the trapping light is at a magic wavelength, the shift becomes state independent and thus the resonant transition frequency remains unchanged.
The form of the second term of Eq.~\eqref{eq:total_shift} highlights the different dependencies of the vector part. The main parameter is the ellipticity of the incident light, and the effect is a state-dependent shift proportional to the magnetic quantum number, which is shown in Fig.~\ref{fig:level_splitting}(c). This can be seen as the action of a fictitious magnetic field given by~\cite{LeKien2013a, Arno2016}:
\begin{equation*}
B_{\mathrm{fict}} = \frac{\alpha^{(1)}}{\mu_{B}g_{nJF}F}(i[\vect{\mathcal{E}}^{(-)}\times\vect{\mathcal{E}}^{(+)}]) .
\end{equation*}
\noindent This shift can be canceled by using linearly polarized light, for which the cross product vanishes and thus the fictitious magnetic field as well. In practice, around nanophotonic waveguides or in the tight-focusing regime, this is nontrivial because of the strong longitudinal component of the electric field, which typically introduces an ellipticity~\cite{VanMechelen2016}. In such settings, counterpropagating beams have been used~\cite{Kimble2012,Laurat2019,Kien2005} to cancel out this longitudinal component.
The tensor part, given by the third term, is the most difficult to cancel in practice. We will not detail the mathematical description of this contribution and only make practical statements. Most noticeably, it vanishes for $J = 0$ and $J = 1/2$ states due to the dependence on $J$ of the tensor polarizability. However, this is not the case for excited $J=3/2$ states, which therefore experience a significant tensor shift. Regarding the electric field dependence, for pure $\pi$ or $\sigma_{\pm}$ polarizations, the tensor shift part of the Hamiltonian is diagonal in the unperturbed hyperfine basis. It leaves the eigenvectors unchanged while adding a shift proportional to $m_{F}^{2}$, as seen in Fig.~\ref{fig:level_splitting}(d). For arbitrary polarizations however, the tensor shift part of the Hamiltonian is non-diagonal and thus introduces coupling terms between hyperfine states. This situation is more complex and best solved numerically.
\subsection{Casimir-Polder interaction}\label{subseq:casimir}
In order to have a full description of the potential seen by the atoms close to a surface, we need to add the van der Waals interaction. These so-called Casimir-Polder (CP) interactions \cite{Casimir1948,Reynaud2017} are usually complex to calculate as they depend on the material, the atoms and the geometry of the structure.
We restrict ourselves to the first order of the potential for an infinite plane $U_{\textrm{CP}} = C_3/d^3$ \cite{Johnson2004}, where $d$ is the distance to the surface.
This approximation puts an upper bound on the Casimir-Polder effect. It is a good approximation at very short distances, and even though the curvature of the surface can lead to a 40\% error at the position of the trap minimum (in the case of a nanofiber for example), the effect remains negligible compared to the light-induced potentials \cite{LeKien2004}.
Thus, we do not incorporate this correction in our calculation of the trapping potential.
The formula used to compute the $C_{3}$ coefficient for a given atom and material is taken from Ref.~\cite{Caride2005}:
\begin{equation}
C_{3} \approx \frac{\hbar}{4\pi}\int_{0}^{+\infty} \alpha(i\xi)\frac{\epsilon(i\xi)-1}{\epsilon(i\xi)+1} \,d\xi
\end{equation}
\noindent where $\alpha(i\xi)$ is the polarizability of the trapped atom extended over the imaginary axis, and $\epsilon(i\xi)$ is the dielectric permittivity of the material, also computed at imaginary frequencies.
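For orientation, this integral can be evaluated numerically with simple single-resonance models for $\alpha(i\xi)$ and $\epsilon(i\xi)$; the sketch below is a standalone illustration with placeholder parameters, not the tabulated data used by the package:
\begin{lstlisting}
import numpy as np
from scipy.constants import hbar, pi
from scipy.integrate import quad

def alpha_iw(xi, alpha0, w0):
    #single-resonance atomic polarizability on the imaginary axis (SI units)
    return alpha0 * w0**2 / (w0**2 + xi**2)

def eps_iw(xi, eps_static, we):
    #single-oscillator (Lorentz) model of the dielectric permittivity
    return 1.0 + (eps_static - 1.0) / (1.0 + (xi / we)**2)

def C3(alpha0, w0, eps_static, we):
    f = lambda xi: alpha_iw(xi, alpha0, w0) * (eps_iw(xi, eps_static, we) - 1.0) / (eps_iw(xi, eps_static, we) + 1.0)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return hbar / (4.0 * pi) * val
\end{lstlisting}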
\subsection{Computation of the trap}\label{subseq:trap_computation}
Building on the formalisms needed to describe both atom-light and atom-surface interactions, we can combine them to compute the full trapping potential created by arbitrary optical fields close to a surface.
In order to compute this trapping potential as well as the level mixing induced by the interactions, it is convenient to define an effective Hamiltonian for the system, which can be eventually diagonalized. For the Stark shift defined in Eq.~\eqref{eq:total_shift} this effective Hamiltonian is given by:
\begin{equation}
\begin{aligned}
H_\mathrm{stark} = & -\alpha^{(0)}(F;\omega)|\mathcal{E}^{(+)}|^{2}\\
& - \alpha^{(1)}(F;\omega)(i\vect{\mathcal{E}}^{(-)}\times\vect{\mathcal{E}}^{(+)})_{0}\frac{F_{0}}{F}\\
& - \alpha^{(2)}(F;\omega)\frac{(3|\mathcal{E}^{(+)}_{0}|^{2}-|\mathcal{E}^{(+)}|^{2})}{2}\left(\frac{3F_{0}^{2}-\vect{F}^{2}}{F(2F-1)}\right) .
\end{aligned}
\end{equation}
Writing the CP Hamiltonian as $H_{\textrm{CP}} = U_{\textrm{CP}}\hat{\mathds{1}}$, and using perturbation theory, the total shift for a level $|N,J,F,m_{F}\rangle$ is then given by:
\begin{equation}
\begin{aligned}
\Delta &E_{|N,J,F,m_{F}\rangle}(\omega) = \\
&\langle N,J,F,m_{F}|H_\mathrm{stark}(F;\omega)+H_{\textrm{CP}}|N,J,F,m_{F}\rangle .
\end{aligned}
\end{equation}
It is important to note that in general, the Hamiltonian is not diagonal in the $|N,J,F,m_{F}\rangle$ basis because of the vector and tensor terms. Therefore it is necessary to diagonalize it in order to obtain both the correct eigenvalues and eigenvectors. While for low power, $F$ is still a good quantum number, it is not the case anymore for $m_{F}$ and the interaction gives rise to level mixing inside the magnetic Zeeman manifold.
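To make the diagonalization step concrete, the following standalone sketch builds the scalar and vector parts of $H_\mathrm{stark}$ for a single hyperfine manifold $F$ and diagonalizes the result; the polarizabilities (placeholder values in arbitrary units) and the local field amplitudes are assumed given, and the tensor term is omitted for brevity:
\begin{lstlisting}
import numpy as np

def spin_matrices(F):
    #angular momentum matrices in the |F, m_F> basis, m_F = -F..F
    dim = int(round(2*F)) + 1
    m = -F + np.arange(dim)
    Fz = np.diag(m)
    cp = np.sqrt(F*(F + 1) - m[:-1]*(m[:-1] + 1))  #<m+1|F_+|m>
    Fp = np.diag(cp, -1)
    Fm = Fp.conj().T
    return (Fp + Fm)/2, (Fp - Fm)/2j, Fz

def stark_levels(alpha0, alpha1, E, F):
    #E: complex Cartesian amplitudes of the positive-frequency field
    Fx, Fy, Fz = spin_matrices(F)
    Id = np.eye(int(round(2*F)) + 1)
    C = np.real(1j*np.cross(np.conj(E), E))  #i E(-) x E(+), a real vector
    H = (-alpha0*np.vdot(E, E).real*Id
         - alpha1/F*(C[0]*Fx + C[1]*Fy + C[2]*Fz))
    return np.linalg.eigh(H)  #level shifts and mixed eigenstates

#example: elliptically polarized field on the F = 4 manifold
shifts, states = stark_levels(1.0, 0.2, np.array([1.0, 0.5j, 0.0]), F=4)
\end{lstlisting}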
After introducing the theoretical framework enabling to calculate trapping potentials and state dependent light shifts, we will now present its implementation in \texttt{\lowercase{nanotrappy}}{}.
\begin{figure*}
\includegraphics[width=\textwidth]{interactive.pdf}
\caption{Screenshots illustrating the interactivity of \texttt{\lowercase{nanotrappy}}{}. (a) Interactive 1D plot with sliders (blue bars) controlling the powers of the red-detuned and blue-detuned trapping beams, here for a nanofiber configuration as described in Section~\ref{subseq:nanofiber}. The new eigenlevels can also be selected to see their decomposition in the unperturbed basis. The same controls are given for 2D plots. (b) Screenshot of the Graphical User Interface (GUI) provided as an .exe file with the package. It offers the same functionalities through dropdown menus and makes it possible to optimize the trapping scheme without having to interact with Python code.}
\label{fig:interactive}
\end{figure*}
\section{Package overview}\label{sec:package_description}
\texttt{\lowercase{nanotrappy}}{} is a Python package that computes the trapping potentials induced by laser beams, with an emphasis on modes guided inside nanostructures. It has been tested on Python 3.7+ distributions. \texttt{\lowercase{nanotrappy}}{} is programmed for being efficiently included in the optimization workflow of nanophotonic structure design. We take advantage of the object-oriented programming style of Python in order to provide the user with a simple and accessible API.
This allows the computation of such traps to be tackled efficiently, taking the full multilevel structure into account and using up-to-date, synchronized data referenced through ARC.
Note that the package is not a field solver. Instead, based on a pre-computed electric field (obtained using any third-party solver), \texttt{\lowercase{nanotrappy}}{} interfaces this electric field distribution with an atomic system given physical parameters. A range of free and licensed field solvers already exist, and specific methods may be more suited to each use case. The choice is thus left to the user, and the package is made to be agnostic to the computational method of the optical fields.
Performance-wise, even if the limiting factor of such optimization workflows is often the actual computation of the fields, an effort has been made to make this package efficient, and to provide a parallelization option that allows the computation to be split over multiple CPU cores if needed.
In the following, we introduce the structure of the code through the base classes provided.
\subsection{Atomic system}\label{subseq:atomic_system}
The first class defined is the \texttt{atomicsystem} class. It is based on the Alkali-Rydberg-Calculator (ARC) package~\cite{Robertson2021}, which gives access to any alkali atom in the database, as well as to up-to-date spectroscopic data such as transition frequencies or dipole matrix elements.
Building an \texttt{atomicsystem} amounts to selecting such an atom, as well as a state defined by the $N,L,J,F$ quantum numbers.
As \texttt{\lowercase{nanotrappy}}{} is closely linked with current experiments, it incorporates features that have proven to be crucial for the development of such systems. Among those, the inhomogeneous broadening of an optical transition due to the Zeeman-dependent nature of the light shifts calls for calculations of the light shifts down to the hyperfine-structure level. The polarizability of any given hyperfine state is thus computed by \texttt{\lowercase{nanotrappy}}{}, as well as the $C_{3}$ coefficient of the Casimir-Polder atom-surface interaction.
\subsection{Beams and Trap}
Specific classes are used to describe the trapping light.
For each trapping beam, a \texttt{Beam} instance is created, based on a wavelength, a power and a folder containing formatted pre-computed electric fields~\cite{Formatting}.
The package will then check if the electric fields are available at that wavelength in the specified folder and select the relevant data. Counterpropagating beam geometries can also be created with the \texttt{BeamPair} class.
Once the beams are created, they are bundled into the \texttt{Trap\_beams} class together with a local propagation axis.
\subsection{Materials and Surfaces}
To handle CP interactions, a \texttt{Surface} class is available, as well as two main subclasses \texttt{Plane} and \texttt{Cylinder}. This covers most practical cases and gives a first-order approximation of these interactions, which is very often sufficient for an accurate result. The CP potentials are computed for every atomic level separately.
The \texttt{Material} class comes with pre-implemented subclasses of materials (air, SiO$_{2}$, SiN, GaInP...) and can be easily extended to add other materials.
\subsection{Running the simulation}
Once all these physical parameters have been defined in the respective classes, they are bundled together in a \texttt{Simulation} class that realizes the actual computation.
Along the way, the parameters are saved as JSON files and the results as .npy numeric tables. Conveniently, before any simulation a check is performed to determine whether these particular parameters have already been used, in which case the previously computed data are loaded, thereby avoiding unnecessary computation.
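The following standalone sketch illustrates this caching pattern; it mimics the behavior described above but is not the package's actual implementation:
\begin{lstlisting}
import hashlib, json
import numpy as np
from pathlib import Path

def cached_compute(params, compute, cachedir="cache"):
    #hash the JSON-serializable parameter dict to build a cache key
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    path = Path(cachedir) / (key + ".npy")
    if path.exists():
        return np.load(path)  #reuse the previously computed result
    result = compute(params)
    path.parent.mkdir(parents=True, exist_ok=True)
    np.save(path, result)
    path.with_suffix(".json").write_text(json.dumps(params))
    return result
\end{lstlisting}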
\subsection{Interactivity and optimization}
The \texttt{Vizualizer} class is a core class for displaying the results, with additional features for easy optimization of the trap.
Once the simulation has run, the vizualizer displays the results along with interactive sliders controlling the power of each individual beam, as seen in Fig.~\ref{fig:interactive}(a). As mentioned in Section \ref{subseq:atomic_system}, it is possible to display the trapping potential for all Zeeman sublevels inside a given hyperfine state. An interactive tool makes it possible to select a chosen Zeeman state and display additional information such as the decomposition of this new eigenstate in the basis of the unperturbed ones. If a stable trap exists, the trapping position and frequencies are also displayed. This allows the beam powers to be scanned quickly and conveniently in order to check whether the desired values of these parameters are accessible or whether the structure design needs to be improved. Moreover, automatic optimization is available: the powers of the beams can be scanned to optimize a chosen parameter (either the trap depth, trap frequency or the trap position) for given electric field distributions, as displayed in Fig.~\ref{fig:optimization} for a nanofiber configuration as described below in Section~\ref{subseq:nanofiber}.
A standalone GUI application is also made available with the same functionalities, and shown in Fig.~\ref{fig:interactive}(b).
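Such a power scan can also be scripted directly. In the sketch below, \texttt{set\_powers} and \texttt{trap\_depth} are hypothetical helper names standing in for the package's actual optimization interface, and \texttt{Simul} is a \texttt{Simulation} object as built in Section~\ref{subseq:nanofiber}:
\begin{lstlisting}
import numpy as np

P_red = np.linspace(0.1*mW, 5*mW, 40)
P_blue = np.linspace(5*mW, 40*mW, 40)
depth = np.zeros((P_red.size, P_blue.size))
for i, p1 in enumerate(P_red):
    for j, p2 in enumerate(P_blue):
        set_powers(Simul, red=p1, blue=p2)  #hypothetical helper
        depth[i, j] = trap_depth(Simul)     #hypothetical helper
i0, j0 = np.unravel_index(depth.argmax(), depth.shape)
print("deepest trap:", depth[i0, j0], "at", P_red[i0], P_blue[j0])
\end{lstlisting}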
\begin{figure}[b!]
\includegraphics[width = 0.97\columnwidth]{optimizer_figure.pdf}
\caption{Automatic optimization of the trap depth for a 250~nm radius nanofiber, as in Section~\ref{subseq:nanofiber}. The powers of the 1064~nm red-detuned beams ($P_{1}$) and 780~nm blue-detuned beam ($P_{2}$) are varied and the trap depth is calculated to generate a 2D map. The dotted line follows the local trap depth maximum. Unstable trapping configurations are displayed with a trap depth of 0~mK.}
\label{fig:optimization}
\end{figure}
\section{Package application} \label{sec:application}
In this section we show the versatility of \texttt{\lowercase{nanotrappy}}{} by computing the trapping potentials for atoms around three well-known nanostructures. We benchmark \texttt{\lowercase{nanotrappy}}{}'s results against published literature to demonstrate the accuracy of the package.
\subsection{Nanofibers and uniform waveguides} \label{subseq:nanofiber}
\begin{figure*}
\includegraphics[width = 0.95\textwidth]{Nanofiber_figure.pdf}
\caption{Two configurations of optical trapping around a nanofiber.
(a) Uncompensated nanofiber trap configuration with only one blue-detuned beam, and crossed polarisations.
(b) 2D potential $U$ ($U<0$ for stable traps) in the $xz$ plane, with same parameters as in Ref.~\cite{Vetsch2010}. Trapping sites are periodically placed with distance $\lambda_{\textrm{red}}/2$ because of the red standing wave. Stable traps with depth of around 0.4~mK are achieved.
(c) 2D potential in the $xy$ plane at the $z$-position of a trap.
(d) Radial dependence of the trapping potential of the ground ($6$S$_{1/2}$) and excited ($6$P$_{3/2}$) states along the $x$ axis. The splitting of the $m_F$ states in the ground state is not visible at this scale. The trap minimum is located at around 220~nm from the surface but the atoms in the excited state are not trapped.
(e-h) Same plots for the state-insensitive, compensated configuration (see text) with parameters from Ref.~\cite{Kimble2012}. (g) Azimuthal trapping is less efficient in this configuration but (h) a stable trap for excited atoms, with low inhomogeneous broadening is obtained.}
\label{fig:nanofiber}
\end{figure*}
Optical nanofibers have been widely used for atom-nanophotonic interfaces. The relatively simple fabrication of subwavelength-diameter, low-loss silica nanofibers \cite{Tong2003} and their easy integration with cold atoms make them a popular choice.
Early works involved an optical nanofiber embedded in an ensemble of atoms in a magneto-optical trap (MOT)~\cite{Nayak2007,Nayak2009}. The first proposal for trapping atoms using the evanescent field of an optical nanofiber relied on balancing the attractive gradient force of the evanescent component of a red-detuned field with the centrifugal force, for a fiber diameter of about half the wavelength of the trapping light \cite{Balykin2004}. Later it was proposed that the attractive force could be counterbalanced by the evanescent component of a blue-detuned field propagating in the same nanofiber, giving birth to the seminal two-color evanescent field trap~\cite{LeKien2004,Vetsch2010}. A state-insensitive, compensated trap was proposed to suppress the inhomogeneous light shifts and demonstrated for cesium atoms~\cite{Kien2005,Kimble2012a,Kimble2012}.\\
In \texttt{\lowercase{nanotrappy}}{}, both electric and magnetic fields of the guided modes of a nanofiber can be computed analytically thanks to the fiber eigenvalue equation~\cite{Snyder2012}, given the radius and refractive index of the dielectric. This calculation is implemented in \texttt{\lowercase{nanotrappy}}{}, so that for this simple structure the package can compute both the electric field distributions and the trapping potentials.
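For reference, the sketch below shows how such a mode solver can be set up independently of the package: it finds the propagation constant $\beta$ of the fundamental HE$_{11}$ mode from the exact step-index eigenvalue equation, assuming a vacuum cladding and single-mode operation (so that the first sign change of the dispersion function is the HE$_{11}$ root):
\begin{lstlisting}
import numpy as np
from scipy.special import jv, jvp, kv, kvp
from scipy.optimize import brentq

def he11_beta(wavelength, radius, n1, n2=1.0):
    k0 = 2*np.pi/wavelength
    def disp(beta):
        u = radius*np.sqrt(n1**2*k0**2 - beta**2)
        w = radius*np.sqrt(beta**2 - n2**2*k0**2)
        A = jvp(1, u)/(u*jv(1, u))
        B = kvp(1, w)/(w*kv(1, w))
        return (A + B)*(A + (n2/n1)**2*B) - (beta/(n1*k0))**2*(1/u**2 + 1/w**2)**2
    betas = np.linspace(n2*k0*(1 + 1e-9), n1*k0*(1 - 1e-9), 2000)
    vals = [disp(b) for b in betas]
    for b1, b2, v1, v2 in zip(betas[:-1], betas[1:], vals[:-1], vals[1:]):
        if v1*v2 < 0:  #bracketed root of the dispersion relation
            return brentq(disp, b1, b2)
    raise ValueError("no guided mode found")

#fused-silica nanofiber: 250 nm radius, 1064 nm trapping light
beta = he11_beta(1064e-9, 250e-9, n1=1.45)
print("n_eff =", beta/(2*np.pi/1064e-9))
\end{lstlisting}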
We use \texttt{\lowercase{nanotrappy}}{} to compute the trapping potentials for an uncompensated trap and a compensated one, and compare them to published literature \cite{Vetsch2010,Kimble2012}. The results are shown in Fig. \ref{fig:nanofiber}. In both cases the goal is to compute the characteristics of the traps for ground and excited state of cesium atoms around a SiO$_2$ nanofiber with 250~nm radius. The differences between the schemes come from the wavelengths, powers, polarization and number of beams used.
Figure \ref{fig:nanofiber}(a) shows the configuration of the uncompensated trap.
The parameters are chosen as per \cite{Vetsch2010}.
A pair of red-detuned, counterpropagating beams at 1064~nm and a single, blue-detuned beam at 780~nm are used to create the trapping potential.
The red- and blue-detuned beams have orthogonal linear polarization.
The total powers used are $P_\mathrm{red}$ = 2 $\times$ 2.2 mW and $P_\mathrm{blue}$~= 25~mW respectively.
The obtained results are shown in Fig.~\ref{fig:nanofiber}(b-d).
A 1D array of evenly spaced traps along the nanofiber with depth 0.4~mK is achieved at around 195 nm from the surface.
The corresponding trap frequencies are 355~kHz, 71~kHz, 355~kHz along $r$, $\theta$ and $z$ respectively.
We note that in such a trap only the ground-state cesium atoms are trapped; the excited $6\mathrm{P}_{3/2}$ states experience a repulsive potential, as shown in Fig.~\ref{fig:nanofiber}(d).
The results are in excellent agreement with \cite{Vetsch2010, VetschPhD2010}.
For the compensated trap, as shown in Fig.~\ref{fig:nanofiber}(e), the scheme and parameters are chosen as per \cite{Kimble2012}.
In this configuration, a second, counterpropagating blue beam is used in order to reduce the vector shift as much as possible.
The powers are $P_\mathrm{blue}$ = 2 $\times$ 16 mW and ${P_\mathrm{red} = 2 \times 0.95~\textrm{mW}}$.
The results are shown in Fig.~\ref{fig:nanofiber}(e-h).
Stable traps are obtained for both the ground and excited levels.
The computation yields a trap at 190~nm from the surface with depth $\sim 0.5$~mK for the ground state and 0.3 to 0.6~mK for the excited state. The results are also in excellent agreement with \cite{Kimble2012, GobanPhD2015}.
We now use this well-known example of a trap around a nanofiber to illustrate step-by-step how to compute dipole traps with \texttt{\lowercase{nanotrappy}}{}. This sample code, only a few lines long, can be easily adapted to any structure and alkali atom of interest.
\lstset{language=Python,
backgroundcolor=\color{backcolour},
basicstyle=\ttfamily\small,
keywordstyle=\color{blue},
commentstyle=\color{grey},
stringstyle=\color{green2},
linewidth=7.8cm,
xleftmargin=-0.8cm,
showstringspaces=false,
identifierstyle=\color{black},
procnamekeys={def,class},
breaklines=true,
breakatwhitespace=true,
postbreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\hookrightarrow}}}
\begin{enumerate}
\item First, an atomic system has to be specified. This part makes use of the ARC package~\cite{Robertson2021}, hence all alkali atoms can be used. The other parameters of the \texttt{atomicsystem} define the hyperfine level of the ground state considered for trapping, here ground state $6\mathrm{S}_{1/2}$ cesium with $F=4$.
\begin{lstlisting}
import nanotrappy as nt
#Caesium, S and the power units (mW, ...) are assumed to be
#imported from ARC and the package's physical-units helpers
#Definition of the atomic system
syst = nt.atomicsystem(Caesium(), nt.atomiclevel(6,S,1/2), f = 4)
\end{lstlisting}
\item The trapping scheme then has to be defined: number of beams, wavelengths, counterpropagating or not... The wavelength is of utmost importance as the package will look for the spatial mode corresponding to this wavelength in the data folder.
\begin{lstlisting}
#Definition of the beams used for trapping
blue_beam = nt.Beam(780e-9, direction = "f", power = 25*mW)
red_standing_wave = nt.BeamPair(1064e-9, 2.2*mW, 1064e-9, 2.2*mW)
trap = nt.Trap_beams(blue_beam, red_standing_wave)
\end{lstlisting}
\item (Optional) The structure around which the atoms are trapped can also be defined. This is necessary for including the CP potential $U_\textrm{CP}$ (see Sec.~\ref{sec:theory}).
Infinite planes and cylinders are already implemented in the package using $U_{\textrm{CP}} = C_3/d^3$, as well as many materials. If not specified, no surface is taken into account.
\begin{lstlisting}
#Adding a surface for CP interactions
surface = nt.Cylinder(radius = 250e-9, axis = AxisZ(coordinates = (0,0)))
\end{lstlisting}
\item The \texttt{simulation} object that will store the results of the calculations is created, taking as arguments all the previous objects, plus the data folder and the structure material.
\begin{lstlisting}
#Create the simulation object that will store the results
#Nm is the materials module; datafolder points to the mode files
Simul = nt.Simulation(syst, Nm.SiO2(), trap, datafolder, surface)
Simul.geometry = PlaneXZ(normal_coord = 0)
\end{lstlisting}
\item Running the simulation boils down to one line of code.
\begin{lstlisting}
trap2D = Simul.compute()
\end{lstlisting}
For the $500 \times 500$ grid of Fig.~\ref{fig:nanofiber} this evaluation takes less than a minute on a standard office computer.
\item A \texttt{vizualizer} object is then created to display the results and manipulate the optical powers. After the initial computation, the trap powers can be varied, and the changes appear in real time.
\begin{lstlisting}
viz = nt.Vizualizer(Simul, trapping_axis="Y")
fig, ax, slider_ax = viz.plot_trap()
\end{lstlisting}
\end{enumerate}
\subsection{Photonic-crystal waveguides: the APCW}\label{subseq:photonic_crystal}
\begin{figure*}
\includegraphics[width = 0.95\textwidth]{Alligator_figure.pdf}
\caption{Stable trapping of atoms inside the gap of the Caltech alligator photonic-crystal waveguide (APCW).
(a) Schematic of the full APCW, with taper regions, extracted from Ref.~\cite{Goban2015}.
(b) Optimized band structure of the APCW with band edges aligned to D$_1$ and D$_2$ lines. Blue- and red-detuned modes used for trapping are shown with the corresponding color circles. The light line for a suspended waveguide in vacuum is shown in red.
(c) 2D total trapping potential with superimposed structure. Periodic stable traps with depths of more than 3~mK are achievable with powers of 230~$\mu$W for the blue beam and 3~$\mu$W for the red one.
(d) Calculated trapping potential along the $y$ axis. The grey curve corresponds to the total potential $U_{\mathrm{tot}} = U_{\mathrm{red}} + U_{\mathrm{blue}} + U_{\mathrm{CP}}$. Blue- and red-detuned beam contributions are plotted separately for comparison.
(e) Trap along the $x$ axis. The trapping frequency in this direction is large with: $\omega_{x} = 2\pi \times 3$~MHz.}
\label{fig:trap_APCW}
\end{figure*}
We now consider a second example involving trapping atoms around a slow-mode photonic-crystal waveguide. Such a platform with atoms trapped by evanescent modes has yet to be demonstrated experimentally, but several theoretical proposals for trapping have emerged in recent years.
A one-color dipole trap for cesium atoms was first proposed~\cite{Yu2014,Hung2013}, using only a single laser blue-detuned from the D2 transition. However, for all the photonic crystals considered, this scheme leads to small trap depths of a few tens of $\mu$K.
To overcome this difficulty without increasing the powers of the trapping beams, which are generally limited by the power handling of such devices, a two-color dipole trap was proposed for the Caltech alligator photonic-crystal waveguide (APCW)~\cite{Burgers2019}, following the ideas implemented with nanofibers. We compute the trapping potential for this APCW with \texttt{\lowercase{nanotrappy}}{} and compare it to \cite{Luan2020}. The results are presented in Fig.~\ref{fig:trap_APCW}.\\
Figure~\ref{fig:trap_APCW}(a) shows the aforementioned waveguide and Fig.~\ref{fig:trap_APCW}(b) its band structure. The parameters of the device are the same as in~\cite{Burgers2019}: the period is 370 nm, the gap is 240 nm wide, the edge modulation is 140 nm and the refractive index is 2 (for SiN). With these numbers the air and dielectric bands are aligned to the D2 and D1 lines of cesium, respectively.
Figure~\ref{fig:trap_APCW}(c) shows a trapping potential in two dimensions. The parameters for the trap are chosen as in~\cite{Luan2020}. A beam red-detuned from the D$_1$ line of Cs at 895~nm ($\delta = 2\pi \times 1700$~GHz) and a beam blue-detuned from the D$_2$ line at 848.1 nm ($\delta = 2\pi \times -130$~GHz) are used to create the trapping potential. The total powers are ${P_\mathrm{blue} = 230~\mu}$W and ${P_\mathrm{red} = 3~\mu}$W.
As shown in Fig.~\ref{fig:trap_APCW}(d-e), a stable trap in the $x$ and $y$ directions is obtained. There is also trapping in the $z$ direction, although weaker. The trapping sites are located at the center of the gap, with a trap depth of 3~mK. The trapping frequencies are $\omega_{y} = 2\pi \times 1.1$~MHz, $\omega_{x} = 2\pi \times 3$~MHz and $\omega_{z} = 2\pi \times 570$~kHz. Confinement along the propagation direction is therefore very strong.
The values computed with \texttt{\lowercase{nanotrappy}}{} are in very good agreement with \cite{Luan2020}. The slight differences come mostly from numerics in the electric field simulations.
\subsection{Microtoroid}\label{subseq:microtoroid}
\texttt{\lowercase{nanotrappy}}{} is a versatile package as it can also be used for structures that are not waveguides. We demonstrate this here by studying the trapping of atoms near a microtoroid resonator.
One of the earliest proposals for trapping atoms with the evanescent field of a microstructure used the whispering-gallery mode (WGM) of a microsphere \cite{Mabuchi1994}. The high Q factor and small mode volume of such resonators \cite{Kimble1998} can yield single-atom strong coupling on average \cite{Kimble1998a}, even with hot atomic vapors. Later, a toroidal microcavity was proposed as an ultrahigh-Q microresonator for cavity QED \cite{Kimble2005}. It shows an even smaller mode volume and increased tunability arising from the added degree of freedom associated with the principal and minor diameters of the microtoroid. Experimentally, strong interaction with a WGM of a microtoroid was demonstrated with free-falling cesium atoms \cite{Kimble2006,Dayan2008}. Following these first demonstrations, schemes for trapping cesium atoms in the evanescent field of such microresonators were proposed \cite{Stern2011, Alton2011, Alton2013}.
\begin{figure}
\includegraphics[width = 0.97\columnwidth]{Microtoroid_figure.pdf}
\caption{Two-color scheme for trapping atoms on the symmetry plane of a microtoroid resonator. (a) SEM image of a fabricated SiO$_2$ microtoroid extracted from \cite{Kimble2005}. Inset: 2D intensity profile of the red-detuned higher-order mode used for trapping.
(b) 2D potential on the outer edge of the structure. Wavelengths of 900~nm and 850~nm are used for the red- and blue- detuned beams. The line cuts (c) and (d) are taken along the dashed arrow.
(c) Trapping potential along $y$, at $z = 0$ and with CP potential included. The inset shows a zoom at the position of the minimum to highlight the splitting of the $m_F$ states due mostly to the vector shift.
(d) Trapping potential along $x$ with a counterpropagating red-detuned beam. The inset shows a reduced splitting compared to (c) due to cancellation of the vector shift thanks to the red-detuned beam (from $0.19$ to $0.05$~mK). Residual splitting is caused by the blue-detuned beam.}
\label{fig:microtoroid}
\end{figure}
Following Refs.~\cite{Alton2011, Stern2011} we compute with \texttt{\lowercase{nanotrappy}}{} the trapping potential for cesium atoms near a SiO$_2$ microtoroid with a 12 $\mu$m outer major radius $R_{\mathrm{ext}}$ and a 1.5 $\mu$m minor radius $r$. Figure~\ref{fig:microtoroid}(a) shows this structure and the transverse shape of the red-detuned mode used for trapping. The blue-detuned one is not displayed as it is composed of a single lobe in this plane. Modes of the electric field in a microtoroid can be described by their azimuthal number $m$, which counts the zeros of the field in one turn, and their transverse order $p$, which counts the lobes in the transverse plane. We choose $m=119$ and $p=0$ for the blue-detuned mode and $m=106$ and $p=1$ for the red-detuned one. The latter has a faster decay in the vertical direction than the blue-detuned one, preventing the atoms from approaching the surface out of the symmetry plane \cite{Vernooy1997, Alton2013}.
Figure~\ref{fig:microtoroid}(b) shows the simulation of a two-color dipole trap in a section of the microtoroid with beams red- and blue-detuned from the cesium D$_2$ line at 852~nm. Lasers with powers $\sim 50$~mW give a trap depth of about 1.5~mK.
Using only one beam of each color produces a strong vector shift at the position of the atoms, manifested by an inhomogeneous broadening of around 0.2~mK (4~MHz) at the trap minimum. Adding a counterpropagating red beam reduces this effect by a factor of 4, as shown in Figs. \ref{fig:microtoroid}(c) and (d).
As for the previous examples, the numbers are in very good agreement with the literature. This validates the accuracy of the package and makes it an efficient tool to study optical trapping near nanostructures. \\
\section{Conclusion} \label{Conclusion}
We have developed a package for simulating optical dipole traps for alkali atoms, with an emphasis on trapping atoms close to surfaces. The package can efficiently and accurately calculate 3D trapping potentials, incorporating the scalar, vector and tensor light shifts, for all the Zeeman sublevels of the specified states. It also provides the relevant trap parameters, for example the position of the trap minimum, the trap depth and the trap frequencies in all three directions. We provided three examples of atom trapping near nanophotonic structures and thereby demonstrated the accuracy of the calculation by comparing our results with published literature.
The scope of application of the \texttt{\lowercase{nanotrappy}}{} package is broad as it can be used to simulate optical dipole traps for any given intensity distribution of the trapping field.
This makes the package appealing to a larger atomic physics community. In addition, the capability of calculating the shifts of the Zeeman levels in a given light field can be used for estimating the dephasing and fidelity of a quantum operation, and is useful to the atom-based quantum information community.
\acknowledgments
The authors acknowledge fruitful discussions with Malik Kemiche and Nikos Fayard.
This work was supported by the French National Research Agency NanoStrong Project (ANR-18-CE47-0008), and the R\'{e}gion \^{I}le-de-France (DIM SIRTEQ). This project has also received funding from the European Union\textquotesingle s Horizon 2020 research and innovation programme under Grant Agreement No. 899275 (DAALI project). A.U. was supported by the European Union (Marie Curie Fellowship SinglePass 101030421).
|
\section{Introduction}
\label{Intro}
There is an important fact about asymptotically AdS geometry: the conformal boundary of a $(d+1)$-dimensional asymptotically locally AdS (AlAdS) spacetime carries not a metric but a conformal class of metrics, i.e.\ the boundary enjoys Weyl symmetry. This is due to the fact that the asymptotic boundary is formally located at conformal infinity\cite{Penrose:1962ij}. In holographic theories\cite{Maldacena:1997re}, the (background) Weyl symmetry is implied by invariance under a class of bulk diffeomorphisms called \emph{Weyl diffeomorphisms}. Usually when discussing AdS/CFT, one picks a specific representative of the conformal class. For example, the most commonly used choice for studying the conformal boundary of an AlAdS spacetime is the Fefferman-Graham (FG) gauge \cite{Feffe,Fefferman2011}. However, the FG gauge explicitly breaks the Weyl symmetry by fixing a specific boundary metric.
\par
In a suitable coordinate system $\{z,x^\mu\}$ ($\mu=0,\cdots,d-1$), the metric of any $(d+1)$-dimensional AlAdS spacetime can be expanded with respect to the bulk coordinate $z$ into two series, called the Fefferman-Graham expansion\cite{Ciambelli:2019bzz,Leigh}. The first series has the boundary metric in the leading order, while the subleading terms are determined by the bulk equations of motion; the leading order of the second series represents the vacuum expectation value of the energy-momentum tensor operator of the boundary field theory, which cannot be determined in the absence of an interior boundary condition\cite{Leigh}.
\par
When the spacetime dimension is odd, both series in the FG expansion are power series to infinite order; however, in an even-dimensional spacetime, a logarithmic term will occur at order $O(z^{d-2})$, causing an obstruction to the power series expansion\cite{graham2005ambient}. This logarithmic term in $d=2k$ (with $k$ an integer) gives rise to the \emph{obstruction tensor} ${\cal O}^{(2k)}_{\mu\nu}$. The obstruction tensor was first proposed in \cite{Feffe} as a symmetric traceless tensor of type $(0,2)$, which is Weyl-covariant with Weyl weight $2k-2$ ($k\geqslant2$), and was precisely defined using the ambient metric in \cite{graham2005ambient} (see also \cite{Fefferman2011}). It is also convenient to define the \emph{extended obstruction tensor}\cite{GRAHAM20091956}, which has a pole at $d=2k$ and whose residue gives rise to the obstruction tensor. The obstruction tensor for $d=4$ is also known as the Bach tensor\cite{Bach}, which is the only Weyl-covariant tensor in $4d$ that is algebraically independent of the Weyl tensor. It has been shown in \cite{graham2005ambient} that the only irreducible Weyl-covariant tensors in $2k$ dimensions with $k\geqslant2$ are the obstruction tensor ${\cal O}^{(2k)}_{\mu\nu}$ and the Weyl tensor (which has weight $0$), while in any odd dimension $d=2k+1$ with $k\geqslant2$ the Weyl tensor is the only one (in $3d$, where the Weyl tensor becomes trivial, it is the Cotton tensor).
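For orientation, in $d=4$ the Bach tensor can be written (in common conventions, up to overall normalization) as
\begin{align}
B_{\mu\nu} = \nabla^{\rho}\nabla^{\sigma}W_{\mu\rho\nu\sigma} + \frac{1}{2}R^{\rho\sigma}W_{\mu\rho\nu\sigma}\,,
\end{align}
showing explicitly how it is built from the Weyl and Ricci tensors.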
\par
The origin of the obstruction tensor in the FG expansion is that the two series will mix if the spacetime dimension $d$ is even, and the solution to the equations of motion encounters a pole. Hence, another way to formulate the FG expansion is to use the technique of dimensional regularization, i.e.\ to regard $d$ as a variable (formally complex)\cite{Ciambelli:2019bzz,Leigh}. Using this formulation, in this paper we will describe a practical way of reading off the obstruction tensor from the pole of the FG expansion in an even dimension.
\par
Even though the FG gauge is quite convenient to use, the Weyl symmetry in the boundary will be broken when the boundary metric is fixed. More specifically, one can introduce a Penrose-Brown-Henneaux (PBH) transformation \cite{Imbimbo_2000,Bautier:2000mz,Rooman:2000ei} in the bulk and induce a Weyl transformation on the boundary, but the subleading terms in the $z$-expansion will not transform in a Weyl-covariant way if the form of the FG ansatz is to be preserved. In order to resolve this issue, one can relax the FG ansatz of the bulk metric to the Weyl-Fefferman-Graham (WFG) ansatz \cite{Ciambelli:2019bzz}. In the WFG gauge, the form of the bulk metric is preserved under a Weyl diffeomorphism, and all the terms in the $z$-expansion transform in a Weyl-covariant way, which brings a powerful reorganization of the holographic dictionary. Unlike the FG gauge, where the bulk Levi-Civita (LC) connection induces on the conformal boundary also a LC connection (of the boundary metric), in the WFG gauge, the bulk LC connection gives a Weyl connection on the boundary\cite{Ciambelli:2019bzz}. Having the induced metric together with the Weyl connection, the bulk geometry induces on the boundary a Weyl-covariant geometry\cite{Folland:1970,Hall:1992,Scholz2018}.
\par
On the boundary, the induced metric and the Weyl connection act as non-dynamical backgrounds of the dual quantum field theory. Similar to the FG case, the metric is the source of the energy-momentum tensor operator on the boundary. However, the Weyl connection does not source any current since it comes from a pure gauge mode of the bulk metric. Despite being pure gauge, the appearance of the Weyl connection on the boundary is far from innocuous since it makes the geometric quantities on the boundary Weyl-covariant. Specifically, we will show that the obstruction tensors in the WFG gauge are promoted to Weyl-obstruction tensors, which will play an important role in the construction of the Weyl anomaly in this paper.
\par
The Weyl anomaly is reflected in the nonvanishing trace of the energy-momentum tensor in even dimensions, which has been computed for various conformal field theories \cite{Capper1974,Deser1976,Duff1977,Polyakov1981,Fradkin:1983tg,Bonora:1985cq,Osborn1991,Deser1993,Henningson1998,Boulanger:2007st,Boulanger:2007ab}. The results in $2d$ and $4d$ are well known:
\begin{align}
2d:\langle T^\mu{}_\mu\rangle= -\frac{c}{24\pi} R\,,\qquad4d:\langle T^\mu{}_\mu\rangle= cW^2-aE^{(4)}\,,
\end{align}
where $W^2$ is the contraction of two Weyl tensors, and $E^{(4)}$ is the Euler density in $4d$. In the context of holography, the Weyl anomaly was first suggested in \cite{Witten:1998qj}, and was then calculated from the bulk in \cite{Liu:1998bu} and \cite{Henningson1998}.
For a holographic theory with vacuum Einstein gravity in the bulk, one gets $a=c$ in the 4-dimensional boundary theory as a constraint on the central charges. In the FG gauge, after going through the holographic renormalization procedure of adding counterterms to cancel the divergences extracted by the regulator, one finds that the holographic Weyl anomaly in an even dimension corresponds to the logarithmic term in the bulk volume expansion. In the mathematical literature this is also referred to as the Q-curvature \cite{Branson1991,Branson1995,Fefferman2001,Fefferman2003} (see \cite{Chang2008} for a short review), which has been studied by means of obstruction tensors and extended obstruction tensors in \cite{graham2005ambient} and \cite{GRAHAM20091956}. Going into the WFG gauge, it was shown in \cite{Ciambelli:2019bzz} using dimensional regularization that the Weyl anomaly in $2k$ dimensions can be extracted directly from the variation of the pole term at the $O(z^{2k-d})$-order of the ``bare" on-shell action in the $d\to 2k^-$ limit. This is the method we will use for computing the Weyl anomaly in this work.
\par
Our goal in this paper is to find the holographic Weyl anomaly in higher dimensions using the advantages of the WFG gauge, and to organize the results in a form that manifests their general structure. It has been shown in \cite{Ciambelli:2019bzz} that, up to total derivatives, the Weyl anomaly in $2d$ and $4d$ in the WFG gauge has the same form as that in the FG gauge, but now becomes Weyl-covariant. We generalize these results to $6d$ and $8d$ by calculating the Weyl anomaly explicitly, and we find that the same statement still holds. Furthermore, we discover that by promoting the obstruction tensors in the FG gauge to the Weyl-obstruction tensors in the WFG gauge, one can use them as natural building blocks for the Weyl anomaly. In this way, we will see clearly how the WFG gauge Weyl-covariantizes the Weyl anomaly without introducing additional nontrivial cocycles. Our results also reveal some interesting clues about the general form of the holographic Weyl anomaly in any dimension.
\par
This paper is organized as follows. In Section \ref{Sec2} we briefly introduce the obstruction tensors and extended obstruction tensors in the FG gauge and their properties. In Section \ref{Sec3} we review the WFG gauge as the Weyl-covariant modification of the FG gauge, and how the bulk LC connection induces a Weyl connection on the conformal boundary. More details about the Weyl connection and Weyl geometry are given in Appendix \ref{WG}. In Section \ref{Sec4} we generalize the results of Section \ref{Sec2} to Weyl-obstruction tensors and extended Weyl-obstruction tensors by solving the Einstein equations in the WFG gauge. The expansions of the Einstein equations can be found in Appendix \ref{AppB0}. Using the Weyl-Schouten tensor and extended Weyl-obstruction tensors as building blocks, in Section \ref{Sec5} we derive the holographic Weyl anomaly in the WFG gauge in $6d$ and $8d$, after reviewing the results in $2d$ and $4d$. More details of the calculation are provided in Appendix \ref{AppB}. As a consistency check, we also compute the $8d$ holographic Weyl anomaly in the FG gauge using a completely different approach---the dilatation operator method \cite{Papadimitriou:2004ap,Anastasiou:2020zwc}---which is presented in Appendix \ref{AppC}. The result agrees with what we obtain in Section \ref{Sec5}. The expressions for the holographic Weyl anomaly up to $8d$ also suggest the pattern in any dimension, which is discussed at the end of Section \ref{Sec5}. In Section \ref{Sec6} we discuss some aspects of the Weyl structure observed from the role it plays in the formulas for the Weyl-obstruction tensors and the Weyl anomaly that we derived. Finally, in Section \ref{Sec7} we summarize our results and point out possible directions for future studies.
\section{Obstruction Tensors}
\label{Sec2}
The obstruction tensor is known as the only irreducible Weyl-covariant tensor besides the Weyl tensor in an even-dimensional spacetime. The general references for the obstruction tensor are \cite{graham2005ambient,Fefferman2011}, where it was defined precisely in terms of the ambient metric. Instead of providing the formal definition, we will derive the obstruction tensors explicitly in the FG gauge up to $d=6$ by solving the bulk equations of motion order by order. The same method will also be used in Section \ref{Sec4} for the Weyl-obstruction tensors.
\par
According to the Fefferman-Graham theorem \cite{Feffe}, the metric of a $(d+1)$-dimensional asymptotically locally AdS (AlAdS) spacetime can always be expressed in the following form:
\begin{equation}
\label{FG}
\text{d} s^2=L^2\frac{\text{d} z^2}{z^2}+h_{\mu\nu}(z;x)\text{d} x^\mu \text{d} x^\nu\,,\qquad\mu,\nu=0,\cdots,d-1\,,
\end{equation}
where the coordinate $z$ can be considered as a ``radial" coordinate, and $z=0$ is the ``location" of the conformal boundary. When $h_{\mu\nu}=L^2 \eta_{\mu\nu}/z^2 $ with $\eta_{\mu\nu}$ the flat metric, this represents the Poincar\'e metric for $AdS_{d+1}$ spacetime. Near the conformal boundary, $h_{\mu\nu}$ can be expanded with respect to $z$ as follows\cite{Ciambelli:2019bzz}:
\begin{equation}
\label{hex0}
h_{\mu\nu}(z;x)=\frac{L^2}{z^2}\left[\gamma^{(0)}_{\mu\nu}(x)+\frac{z^2}{L^2}\gamma^{(2)}_{\mu\nu}(x)+\cdots\right]+\frac{z^{d-2}}{L^{d-2}}\left[\pi^{(0)}_{\mu\nu}(x)+\frac{z^2}{L^2}\pi^{(2)}_{\mu\nu}(x)+\cdots\right]\,.
\end{equation}
As we mentioned in Section \ref{Intro}, the conformal boundary carries a conformal class of metrics. In the FG expansion $\gamma^{(0)}_{\mu\nu}$ serves as the ``canonical" representative of the conformal class sourcing the energy-momentum tensor of the dual field theory on the boundary, while $\pi^{(0)}_{\mu\nu}$ corresponds to the expectation value of the energy-momentum tensor\cite{Leigh}. Once $\gamma^{(0)}_{\mu\nu}$ is given, each term in the first series can be determined by solving the vacuum Einstein equations with negative cosmological constant in the bulk. Similarly, once $\pi^{(0)}_{\mu\nu}$ is given, the second series will be determined. However, $\pi^{(0)}_{\mu\nu}$ is not completely arbitrary but is actually constrained by the Einstein equations. To be more specific, the $zz$-component of the Einstein equations tells us that $\pi^{(0)}_{\mu\nu}$ is traceless while the $z\mu$-components indicate that it is also divergence-free.
\par
Nevertheless, subtleties will arise when the boundary dimension $d$ is an even integer, since the two series in \eqref{hex0} mix into one. To resolve this issue for an even $d=2k$, we treat $d$ formally as a variable $d\in \mathbb{C}$ in the expansion \eqref{hex0} and let $d$ approach $2k$ from below. As we will see explicitly, when the Einstein equations are satisfied, $\gamma^{(2k)}_{\mu\nu}$ has a first order pole at $d=2k$. For any integer $k\geqslant2$, up to some factor, the coefficient of the pole term (which is actually a meromorphic function of the boundary dimension) is what we define as the \emph{obstruction tensor}, denoted by $\mathcal{O}^{(2k)}_{\mu\nu}$:
\begin{equation}\label{gamma2k}
\gamma^{(2k)}_{\mu\nu}= \frac{c_{(2k)}}{d-2k}\mathcal{O}^{(2k)}_{\mu\nu}+ \tilde{\gamma}_{\mu\nu}^{(2k)}\,,\qquad c_{(2k)}=-\frac{L^{2k}}{2^{2k-3}k!}\frac{\Gamma(d/2-k+1)}{ \Gamma(d/2-1)}\,,
\end{equation}
where the normalization factor $c_{(2k)}$ has been chosen so that the obstruction tensor agrees with the convention of \cite{Fefferman2011}, and the tensor $\tilde{\gamma}^{(2k)}_{\mu\nu}$ is analytic at $d=2k$.
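For instance, evaluating the normalization factor for $k=2$ and $k=3$ (a short exercise with $\Gamma(x+1)=x\,\Gamma(x)$) gives
\begin{align}
c_{(4)}=-\frac{L^4}{4}\,,\qquad c_{(6)}=-\frac{L^6}{48}\,\frac{2}{d-4}=-\frac{L^6}{24(d-4)}\,,
\end{align}
which will match the pole coefficients appearing in \eqref{gamma4} and \eqref{gamma6} below.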
\par
Besides holographic dimensional regularization \cite{Leigh}, another common approach is to introduce a logarithmic term for $d=2k$\cite{Henningson1998}, which turns out to be proportional to the obstruction tensor. This is also the origin of the name obstruction tensor, as it obstructs the existence of a formal power series expansion. Note that the tensor $\mathcal{O}^{(2k)}_{\mu\nu}$ is well-defined in any $d\geqslant2k$, but only behaves as an ``obstruction" when $d=2k$. The relation between the two approaches will be cleared up at the end of this section once we show how to correctly take the limit for an even $d$ in holographic dimensional regularization.
\par
Now we present the obstruction tensors explicitly. First, by solving the bulk Einstein equations to the $O(z^2)$-order one finds that
\begin{align}
\label{gamma2}
\frac{\gamma^{(2)}_{\mu\nu}}{L^2}=-\frac{1}{d-2}\left(R^{(0)}_{\mu\nu}-\frac{R^{(0)}}{2(d-1)}\gamma_{\mu\nu}^{(0)}\right)\,,
\end{align}
where $R^{(0)}_{\mu\nu}$ and $R^{(0)}$ represent the Ricci tensor and Ricci scalar of $\gamma_{\mu\nu}^{(0)}$ at the boundary, respectively. One can recognize $\gamma^{(2)}_{\mu\nu}/L^2$ as the Schouten tensor $P_{\mu\nu}$ at the boundary (with a minus sign):
\begin{align}
\label{P}
P_{\mu\nu}&=\frac{1}{d-2}\left(R^{(0)}_{\mu\nu}-\frac{R^{(0)}}{2(d-1)}\gamma_{\mu\nu}^{(0)}\right)\,.
\end{align}
Indeed, we notice that there is a first-order pole at $d=2$, as expected. However, it is easy to see that the residue of the pole vanishes identically for $d=2$. This is the reason why it is often stated that there is no obstruction tensor for $d=2$.
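Explicitly, the residue of the pole in \eqref{gamma2} is
\begin{align}
\lim_{d\to2}\,(d-2)\,\frac{\gamma^{(2)}_{\mu\nu}}{L^2}=-\Big(R^{(0)}_{\mu\nu}-\frac{1}{2}R^{(0)}\gamma^{(0)}_{\mu\nu}\Big)\,,
\end{align}
which is minus the Einstein tensor of $\gamma^{(0)}_{\mu\nu}$ and hence vanishes identically in two dimensions.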
\par
At the $O(z^4)$-order, the Einstein equations give us
\begin{align}
\label{gamma4}
\frac{\gamma^{(4)}_{\mu\nu}}{L^4}=-\frac{1}{4(d-4)}B_{\mu\nu}+\frac{1}{4}P_{\rho\mu}P^{\rho}{}_{\nu}\,.
\end{align}
Note that on the boundary, the tensor indices are lowered and raised using $\gamma^{(0)}_{\mu\nu}$ and its inverse $\gamma_{(0)}^{\mu\nu}$. The tensor $B_{\mu\nu}$ is the Bach tensor, which is defined as
\begin{align}
\label{B}
B_{\mu\nu}&=\nabla_{(0)}^\lambda\nabla^{(0)}_\lambda P_{\mu\nu}-\nabla_{(0)}^\lambda\nabla^{(0)}_{\nu} P_{\mu\lambda}-W^{(0)}_{\rho\nu\mu\lambda}P^{\lambda\rho}\,,
\end{align}
where $\nabla^{(0)}_\mu$ is the derivative operator on the boundary associated with $\gamma^{(0)}_{\mu\nu}$, and $W^{(0)}_{\rho\mu\nu\lambda}$ is the Weyl tensor of $\gamma_{\mu\nu}^{(0)}$. We notice that the first term has a pole at $d=4$ and it follows from \eqref{gamma2k} that the obstruction tensor for $d=4$ is just the Bach tensor, i.e.\ $
{\cal O}^{(4)}_{\mu\nu}= B_{\mu\nu}$.\par
Similarly, if we move on to the $O(z^6)$-order of the Einstein equations, we find that $\gamma^{(6)}_{\mu\nu}$ has a pole at $d=6$ and can be written as
\begin{align}
\label{gamma6}
\frac{\gamma^{(6)}_{\mu\nu}}{L^6}&=-\frac{1}{24(d-6)(d-4)}{\cal O}^{(6)}_{\mu\nu}-\frac{1}{6(d-4)}B_{\rho\mu}P^\rho{}_\nu\,.
\end{align}
From \eqref{gamma2k} one can see that ${\cal O}^{(6)}_{\mu\nu}$ is the obstruction tensor for $d=6$, now given by
\begin{align}
\label{O6}
{\cal O}^{(6)}_{\mu\nu}={}&\nabla_{(0)}^\lambda\nabla^{(0)}_\lambda B_{\mu\nu}-2W^{(0)}_{\rho\nu\mu\lambda}B^{\lambda\rho}-4B_{\mu\nu}P+2(d-4)\big(2P^{\rho\lambda}\nabla^{(0)}_\lambda C_{(\mu\nu)\rho}+\nabla^{(0)}_\lambda P\,C_{(\mu\nu)}{}^\lambda\nonumber\\
&\qquad\qquad-C^{\rho}{}_{\mu}{}^{\lambda}C_{\lambda\nu\rho}+ \nabla_{(0)}^\lambda P^\rho{}_{(\mu}C_{\nu)\rho\lambda}-W^{(0)}_{\rho\mu\nu\lambda}P^{\lambda}{}_\sigma P^{\sigma\rho}\big)\,,
\end{align}
where $P\equiv P_{\mu\nu}\gamma_{(0)}^{\mu\nu}$, and $C_{\mu\nu\rho}$ is the Cotton tensor on the boundary defined as
\begin{align}
C_{\mu\nu\rho}=\nabla^{(0)}_\rho P_{\mu\nu}-\nabla^{(0)}_\nu P_{\mu\rho}\,.
\end{align}
\par
Let us make a few remarks on some important properties of the obstruction tensors. First, they are symmetric traceless tensors for any boundary dimension $d$. The traceless condition can be derived from the $zz$-component of the Einstein equations at the $O(z^{2k})$-order. Also, the obstruction tensor ${\cal O}^{(2k)}_{\mu\nu}$ is divergence-free when $d=2k$. For instance, the divergence of the Bach tensor satisfies
\begin{align}
\label{divB}
\nabla_{(0)}^\nu B_{\nu\mu}=(d-4)P^{\nu\rho}C_{\rho\nu\mu}\,.
\end{align}
This relation can be read off from the $O(z^4)$-order of the $z\mu$-components of the Einstein equations. In general, at the $O(z^{2k})$-order one finds that the divergence of ${\cal O}^{(2k)}_{\mu\nu}$ is proportional to $d-2k$ and thus vanishes when $d=2k$. The divergence of ${\cal O}^{(2k)}_{\mu\nu}$ can also be obtained by using the following identity:
\begin{align}
\label{divP}
\nabla_{(0)}^\nu P_{\nu\mu}=\nabla^{(0)}_\mu P\,.
\end{align}
This is equivalent to the contracted Bianchi identity at the boundary (see Appendix \ref{WG}), which can also be read off from the leading order of the $z\mu$-components of the Einstein equations. Finally, a notable feature of ${\cal O}^{(2k)}_{\mu\nu}$ is that it is Weyl-covariant when $d=2k$, with Weyl weight $2k-2$ (for a proof see \cite{graham2005ambient}).
\par
For convenience, we can also absorb the $d$-dependent factors in $\gamma^{(2k)}_{\mu\nu}$ by introducing Graham's extended obstruction tensor $\Omega^{(k-1)}_{\mu\nu}$ ($k\geqslant 2$) in $d>2k$:
\begin{align}
\label{extO}
\Omega^{(1)}_{\mu\nu}=-\frac{1}{d-4}B_{\mu\nu}\,,\qquad\Omega^{(2)}_{\mu\nu}=\frac{1}{(d-6)(d-4)}\mathcal O^{(6)}_{\mu\nu}\,,\qquad\cdots
\end{align}
The extended obstruction tensor $\Omega^{(k)}_{\mu\nu}$ was precisely defined in \cite{GRAHAM20091956} in the context of the ambient metric. The general relation between the obstruction tensor and extended obstruction tensor is
\begin{align}
\Omega^{(k)}_{\mu\nu}=\frac{(-1)^{k}}{2^k}\frac{\Gamma(d/2-k-1)}{\Gamma(d/2-1)}{\cal O}^{(2k+2)}_{\mu\nu}\qquad (k\geqslant1)\,.
\end{align}
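For $k=1$ and $k=2$ this relation indeed reproduces \eqref{extO}, since
\begin{align}
-\frac{1}{2}\frac{\Gamma(d/2-2)}{\Gamma(d/2-1)}=-\frac{1}{d-4}\,,\qquad \frac{1}{4}\frac{\Gamma(d/2-3)}{\Gamma(d/2-1)}=\frac{1}{(d-6)(d-4)}\,,
\end{align}
where we again used $\Gamma(x+1)=x\,\Gamma(x)$.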
\par
We finish this section by describing how to take the $d\to 2k^{-}$ limit of the two series in \eqref{hex0} properly. By taking the limit carefully we will recover a logarithmic term in the expansion whose coefficient is exactly the obstruction tensor for $d=2k$, which also justifies the name ``obstruction" as we mentioned before. There are two issues one has to deal with while taking the $d\to 2k^{-}$ limit. First, as we already noted, $\gamma^{(2k)}_{\mu\nu}$ has a pole at $d=2k$, so it diverges in this limit. Second, the two series mix since both $\gamma^{(2k)}_{\mu\nu}$ and $\pi^{(0)}_{\mu\nu}$ appear at the same order $O(z^{2(k-1)})$ in \eqref{hex0} for $d=2k$. To keep the $O(z^{2k-2})$-order finite we posit that $\pi_{\mu\nu}^{(0)}$ should also have a pole at $d=2k$ proportional to ${\cal O}^{(2k)}_{\mu\nu}$ so that the divergence in $\gamma^{(2k)}_{\mu\nu}$ gets canceled, i.e.\ we claim that $\pi^{(0)}_{\mu\nu}$ has the following form:
\begin{equation}\label{pi0}
\pi^{(0)}_{\mu\nu}= - \frac{c_{(2k)}}{d-2k}{\cal O}^{(2k)}_{\mu\nu} + \tilde{\pi}^{(0)}_{\mu\nu}\,,
\end{equation}
where $\tilde{\pi}^{(0)}_{\mu\nu}$ is finite at $d=2k$. Substituting \eqref{pi0} and \eqref{gamma2k} back into \eqref{hex0} we get
\begin{equation}
h_{\mu\nu}(z;x)=\sum_{n=0}^{k-1}\gamma_{\mu\nu}^{(2n)}\left(\frac{z}{L}\right)^{2n-2} + \big(\tilde\gamma_{\mu\nu}^{(2k)}+ \tilde{\pi}^{(0)}_{\mu\nu}\big)\left(\frac{z}{L}\right)^{2k-2}-c_{(2k)}\left(\frac{z}{L}\right)^{2k-2}\text{ln}\left(\frac{z}{L}\right){\cal O}^{(2k)}_{\mu\nu} +o\big((z/L)^{2k-2}\big)\,.
\end{equation}
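The logarithm arises from the combined pole pieces of the two series: using $(z/L)^{d-2k}=1+(d-2k)\text{ln}(z/L)+O\big((d-2k)^2\big)$, one finds
\begin{align}
\frac{c_{(2k)}}{d-2k}{\cal O}^{(2k)}_{\mu\nu}\left[\left(\frac{z}{L}\right)^{2k-2}-\left(\frac{z}{L}\right)^{d-2}\right]\ \xrightarrow{\ d\to2k^-\ }\ -c_{(2k)}\left(\frac{z}{L}\right)^{2k-2}\text{ln}\left(\frac{z}{L}\right){\cal O}^{(2k)}_{\mu\nu}\,.
\end{align}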
This makes contact with the expansion with a logarithmic term (for an even $d$) presented in the literature, e.g.\ \cite{Henningson1998,deHaro:2000vlm,Skenderis2002}.
\section{Weyl-Fefferman-Graham Gauge}
\label{Sec3}
This section is a brief review of the Weyl-Fefferman-Graham (WFG) formalism established in \cite{Ciambelli:2019bzz}. At the end of this section we introduce the ``Weyl quantities" that will appear in later sections.
\par
The Fefferman-Graham ansatz \eqref{FG} is quite convenient for calculations, especially in the context of holographic renormalization. In this setup, one can induce a Weyl transformation of the boundary metric by a bulk diffeomorphism, namely the PBH transformation\cite{Imbimbo_2000},
\begin{equation}
z\to z'= z/{\cal{B}}(x)\,,\qquad x^{\mu}\to x'^{\mu}= x^{\mu}+ \xi^{\mu}(z;x)\, ,
\end{equation}
where $\xi^{\mu}(z;x)$ vanish at the boundary $z=0$. The functions $\xi^{\mu}(z;x)$ can be found (infinitesimally) in terms of ${\cal B}(x)$ from the constraint that the form of the FG ansatz is preserved under the transformation. However, under the PBH transformation, the subleading terms in the FG expansion \eqref{hex0} do not transform in a Weyl-covariant way. The source of this complication is the set of compensating diffeomorphisms $\xi^{\mu}(z;x)$ introduced to preserve the FG ansatz.
\par
The above-mentioned issue motivated the authors of \cite{Ciambelli:2019bzz} to replace the FG ansatz with
\begin{align}
\label{WFG}
\text{d} s^2=L^{2}\left(\frac{\text{d} z}{z}-a_\mu(z;x) \text{d} x^\mu\right)^2
+h_{\mu\nu}(z;x)\text{d} x^\mu \text{d} x^\nu\,,
\end{align}
which was named the Weyl-Fefferman-Graham ansatz. With the Weyl structure $a_{\mu}$ included, the form of the WFG ansatz is now preserved under the Weyl diffeomorphism
\begin{align}
\label{weyl}
z\to z'=z/{\cal B}(x)\,,\qquad x^\mu\to x'^\mu=x^\mu\,.
\end{align}
It is not hard to see that the Weyl diffeomorphism \eqref{weyl} induces the following transformation of the fields $a_{\mu}$ and $h_{\mu\nu}$:
\begin{align}
\label{weylha}
a_\mu(z;x)\to a'_{\mu}(z';x)= a_\mu({\cal B}(x)z';x)-\partial_\mu\ln{\cal B}(x)\,,\quad h_{\mu\nu}\to h'_{\mu\nu}(z';x)= h_{\mu\nu}({\cal B}(x)z';x)\,.
\end{align}
Thus, we can now induce a Weyl transformation on the boundary and preserve the form of the metric without introducing the irritating $\xi^{\mu}(z;x)$. Note that according to the FG theorem, any AlAdS spacetime can always be expressed in the FG form, and so \eqref{WFG} can be transformed into \eqref{FG} under a suitable diffeomorphism. This indicates that $a_\mu$ is actually pure gauge in the bulk. Another way of going back to the FG gauge is to simply set $a_\mu$ to zero; from this perspective, the FG gauge is nothing but the WFG gauge with a particular gauge choice.
\par
The main utility of the WFG gauge is that all the terms (except one) in the $z$-expansions of $h_{\mu\nu}(z;x)$ and $a_{\mu}(z;x)$ transform Weyl-covariantly under Weyl diffeomorphisms. To see this, let us expand $h_{\mu\nu}$ and $a_\mu$ near $z=0$:
\begin{align}
\label{hex}
h_{\mu\nu}(z;x)&=\frac{L^2}{z^2}\left[\gamma^{(0)}_{\mu\nu}(x)+\frac{z^2}{L^2}\gamma^{(2)}_{\mu\nu}(x)+\cdots\right]+\frac{z^{d-2}}{L^{d-2}}\left[\pi^{(0)}_{\mu\nu}(x)+\frac{z^2}{L^2}\pi^{(2)}_{\mu\nu}(x)+\cdots\right]\,,\\
\label{aex}
a_{\mu}(z;x)&=\left[a^{(0)}_{\mu}(x)+\frac{z^2}{L^2}a^{(2)}_{\mu}(x)+\cdots\right]
+\frac{z^{d-2}}{L^{d-2}}\left[p^{(0)}_{\mu}(x)+\frac{z^2}{L^2}p^{(2)}_{\mu}(x)+\cdots\right]\,.
\end{align}
In the FG gauge where $a_\mu$ is turned off, the FG expansion only includes \eqref{hex}, and the subleading terms $\gamma^{(2k)}_{\mu\nu}$ in the first series are determined solely by the boundary induced metric $\gamma^{(0)}_{\mu\nu}$ and its derivatives. Now with the extra series \eqref{aex}, $\gamma^{(2k)}_{\mu\nu}$ will also depend on $a^{(0)}_\mu$, $a^{(2)}_\mu$, $a^{(4)}_\mu$, etc.
Moving on, from the transformations \eqref{weylha} under a Weyl diffeomorphism, one finds the transformation of each term in the expansions \eqref{hex} and \eqref{aex} as follows\cite{Ciambelli:2019bzz}:
\begin{align}
\label{GP1}
\gamma^{(2k)}_{\mu\nu}(x)&\to\gamma^{(2k)}_{\mu\nu}(x){\cal B}(x)^{2k-2}\,,\qquad
\pi^{(2k)}_{\mu\nu}(x)\to \pi^{(2k)}_{\mu\nu}(x){\cal B}(x)^{d-2+2k}\,,\\
\label{AP}
a^{(2k)}_{\mu}(x)&\to a^{(2k)}_{\mu}(x){\cal B}(x)^{2k}-\delta_{k,0}\partial_\mu\ln{\cal B}(x)\,,\qquad
p^{(2k)}_{\mu}(x)\to p^{(2k)}_{\mu}(x){\cal B}(x)^{d-2+2k}\,.
\end{align}
Indeed, we see that almost all the terms in the expansions transform Weyl-covariantly. The only exception is $a^{(0)}_{\mu}$, which transforms inhomogeneously under a Weyl transformation and thus does not have a definite Weyl weight. All the other terms in the expansions \eqref{hex} and \eqref{aex} can be viewed as tensor fields on the boundary, and we can easily read off their Weyl weights from the power of ${\cal B}(x)$ appearing in \eqref{GP1} and \eqref{AP}.
\par
For a metric in the form of \eqref{WFG} defined on the bulk manifold $M$, one can choose a dual form basis and its corresponding vector basis as follows:
\begin{align}
\label{basis}
\bm e^z&=L\frac{\text{d} z}{z}-La_\mu(z;x)\text{d} x^\mu\,,\qquad \bm e^\mu=\text{d} x^\mu\,,\\
\underline e_z&=\frac{z}{L}\underline\partial_z\equiv \underline D_z\,,\qquad \underline e_\mu=\underline\partial_\mu+za_\mu(z;x)\underline\partial_z\equiv \underline D_\mu\,.
\end{align}
Then the tangent space at any point $(z,x^{\mu})\in M$ can be spanned by the basis $\{\underline D_z,\underline D_\mu\}$, and the basis vectors $\{\underline D_\mu\}$ form a $d$-dimensional distribution on $M$ which belongs to the kernel of $\bm e^z$. The Lie brackets of these basis vectors are
\begin{align}
\label{DmDn}
[\underline D_\mu,\underline D_\nu]=Lf_{\mu\nu}\underline D_z\,,\qquad[\underline D_z,\underline D_\mu]=L\varphi_\mu\underline D_z\,,
\end{align}
where $\varphi_\mu\equiv D_za_\mu$ and $f_{\mu\nu}\equiv D_\mu a_\nu-D_\nu a_\mu$ ($D_z$ and $D_\mu$ represent taking the derivatives along $\underline e_z$ and $\underline e_\mu$). According to the Frobenius theorem, the condition for the distribution spanned by $\{\underline D_\mu\}$ to be integrable is that $[\underline D_\mu,\underline D_\nu]$ lies in the distribution itself, which by \eqref{DmDn} requires $f_{\mu\nu}=0$. In this case, the distribution defines a foliation by hypersurfaces. For instance, in the FG gauge where $a_{\mu}$ is turned off, the distribution $\{\underline D_\mu\}$ becomes $\{\underline \partial_\mu\}$, which generates a foliation of constant-$z$ surfaces. However, $\{\underline D_\mu\}$ in the WFG gauge is not necessarily an integrable distribution, and thus one needs to keep in mind that the boundary hypersurface $z=0$ is in general not part of a foliation.
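As a quick check of the first bracket in \eqref{DmDn}, a direct computation gives
\begin{align}
[\underline D_\mu,\underline D_\nu]=[\underline\partial_\mu+za_\mu\underline\partial_z\,,\,\underline\partial_\nu+za_\nu\underline\partial_z]=z\big(D_\mu a_\nu-D_\nu a_\mu\big)\underline\partial_z=Lf_{\mu\nu}\underline D_z\,,
\end{align}
where we used $D_\mu a_\nu=\partial_\mu a_\nu+za_\mu\partial_za_\nu$; the second bracket follows in the same way.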
\par
Suppose $\nabla$ is the Levi-Civita (LC) connection on $M$. One can find the connection coefficients of $\nabla$ in the frame $\{\underline D_z,\underline D_\mu\}$ from its definition \eqref{conncoef}:
\begin{align}
\nabla_{\underline D_\mu}\underline D_\nu=\Gamma^\lambda{}_{\mu\nu}\underline D_\lambda+\Gamma^z{}_{\mu\nu}\underline D_z\,.
\end{align}
The coefficients $\Gamma^\lambda{}_{\mu\nu}$ in the above equation define the induced connection coefficients on the distribution over $M$ spanned by $\{\underline D_\mu\}$ (see \cite{munozlecanda2018aspects}). Expanding $\Gamma^\lambda{}_{\mu\nu}$ with respect to $z$, at the leading order one finds that
\begin{align}
\label{IndWeyl}
\Gamma^{\lambda}_{(0)}{}_{\mu\nu}
&=\frac{1}{2}\gamma_{(0)}^{\lambda\rho}\big(
\partial_\mu \gamma^{(0)}_{\nu\rho}
+\partial_\nu \gamma^{(0)}_{\mu\rho}-\partial_\rho \gamma^{(0)}_{\mu\nu}\big)-\big(a^{(0)}_\mu\delta^\lambda{}_\nu+a^{(0)}_\nu\delta^\lambda{}_\mu-a^{(0)}_\rho\gamma_{(0)}^{\lambda\rho}\gamma^{(0)}_{\mu\nu}\big)\,.
\end{align}
We can see that \eqref{IndWeyl} gives exactly the connection coefficients of a torsion-free connection with Weyl metricity [see \eqref{WeylLC} in Appendix \ref{WG}, where $A_\mu$ and $g_{\mu\nu}$ correspond to $a^{(0)}_\mu$ and $\gamma^{(0)}_{\mu\nu}$]. That is, on the boundary with $z\to0$ we have a connection $\nabla^{(0)}$ satisfying
\begin{align}
\label{nonmetry}
\nabla^{(0)}_\mu\gamma^{(0)}_{\nu\rho}=2a^{(0)}_\mu\gamma^{(0)}_{\nu\rho}\,.
\end{align}
This indicates that although $a_\mu$ is pure gauge in the bulk, its leading order $a_{\mu}^{(0)}$ serves as a Weyl connection at the conformal boundary. Together with the induced metric $\gamma^{(0)}_{\mu\nu}$, they provide a Weyl geometry at the boundary \cite{Folland:1970}. Under a boundary Weyl transformation
\begin{align}
\label{WT}
\gamma_{\mu\nu}^{(0)} \to {\cal B}(x)^{-2} \gamma^{(0)}_{\mu\nu}\,,\qquad a_{\mu}^{(0)}\to a_{\mu}^{(0)}- \partial_{\mu}\ln{\cal B}(x)\,,
\end{align}
for any tensor $T$ (with indices suppressed) with Weyl weight $w_T$ on the boundary, we have
\begin{align}
T\to {\cal B}^{w_T}T\,,\qquad(\nabla^{(0)}_\mu T+w_Ta^{(0)}_\mu T)\to {\cal B}^{w_T}(\nabla^{(0)}_\mu T+w_Ta^{(0)}_\mu T)\,.
\end{align}
One can also absorb the Weyl connection and define $\hat\nabla^{(0)}$ such that
\begin{align}
\hat\nabla^{(0)}_\mu T\equiv\nabla^{(0)}_\mu T+w_Ta^{(0)}_\mu T\,,
\end{align}
which renders $\hat\nabla^{(0)}_\mu T$ Weyl-covariant. In particular, Eq.\ \eqref{nonmetry} indicates that $\hat\nabla^{(0)}$ has vanishing metricity, which makes it convenient for boundary calculations.
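Indeed, since $\gamma^{(0)}_{\mu\nu}$ carries Weyl weight $-2$, one finds
\begin{align}
\hat\nabla^{(0)}_\mu\gamma^{(0)}_{\nu\rho}=\nabla^{(0)}_\mu\gamma^{(0)}_{\nu\rho}-2a^{(0)}_\mu\gamma^{(0)}_{\nu\rho}=0\,,
\end{align}
where the second equality follows from \eqref{nonmetry}.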
\par
Now that we have the Weyl geometry on the boundary, the geometric quantities there are promoted to ``Weyl quantities". More precisely, for any geometric quantity constructed from the boundary metric $\gamma^{(0)}_{\mu\nu}$ and the LC connection in the FG case, we now have a Weyl-covariant counterpart constructed from $\gamma^{(0)}_{\mu\nu}$, $a^{(0)}_\mu$ and $\hat\nabla^{(0)}$ in the WFG case. For instance, we have the Weyl-Riemann tensor $\hat R^{\mu}{}^{(0)}_{\nu\rho\sigma}$, the Weyl-Ricci tensor $\hat R^{(0)}_{\mu\nu}$ and the Weyl-Ricci scalar $\hat R^{(0)}$. In addition, $f_{\mu\nu}$ induces on the boundary a tensor $f^{(0)}_{\mu\nu}=\partial_\mu a^{(0)}_\nu-\partial_\nu a^{(0)}_{\mu}$, namely the curvature of the Weyl connection $a^{(0)}$, which is obviously Weyl-invariant. We can also define the Weyl-Schouten tensor $\hat P_{\mu\nu}$ and the Weyl-Cotton tensor $\hat C_{\mu\nu\rho}$ on the boundary as follows:
\begin{align}
\label{WP}
\hat P_{\mu\nu}&=\frac{1}{d-2}\bigg(\hat R^{(0)}_{\mu\nu}-\frac{1}{2(d-1)}\hat R^{(0)}\gamma^{(0)}_{\mu\nu}\bigg)\,,\\
\label{WC}
\hat C_{\mu\nu\rho}&=\hat\nabla^{(0)}_{\rho}\hat P_{\mu\nu}-\hat\nabla^{(0)}_{\nu}\hat P_{\mu\rho}\,.
\end{align}
One should notice that the index symmetries of a ``Weyl quantity" are not necessarily the same as those of the corresponding quantity defined with the LC connection. For instance, the Weyl-Ricci tensor is not symmetric, with its antisymmetric part $\hat R^{(0)}_{[\mu\nu]}=-(d-2)f^{(0)}_{\mu\nu}/2$, and hence the Weyl-Schouten tensor $\hat P_{\mu\nu}$ also contains an antisymmetric part $\hat P_{[\mu\nu]}=-f^{(0)}_{\mu\nu}/2$. In the next section, we will see that the obstruction tensors also have their Weyl-covariant counterparts. More details of the above Weyl quantities are exhibited in Appendix \ref{WG}.
\section{Weyl-Obstruction Tensors}
\label{Sec4}
In the previous section we saw that the WFG gauge in the bulk induces a Weyl geometry on the boundary. Now we would like to determine the higher order terms in the expansion \eqref{hex} and find the obstruction tensors with the Weyl connection turned on. The method is exactly analogous to that of Section \ref{Sec2} for the FG gauge. By solving the bulk Einstein equations order by order in the WFG gauge, we find that $\gamma^{(2k)}_{\mu\nu}$ still has the same form as \eqref{gamma2k}, except that the obstruction tensor ${\cal O}^{(2k)}_{\mu\nu}$ is now promoted to the \emph{Weyl-obstruction tensor} $\hat {{\cal O}}^{(2k)}_{\mu\nu}$. Unlike ${\cal O}^{(2k)}_{\mu\nu}$, which is Weyl-covariant only in $d=2k$, the Weyl-obstruction tensors $\hat {{\cal O}}^{(2k)}_{\mu\nu}$ are Weyl-covariant with weight $2k-2$ in any dimension; that is, under a Weyl transformation \eqref{WT} they transform in any $d$ as $\hat {{\cal O}}^{(2k)}_{\mu\nu} \to {\cal B}(x)^{2k-2}\hat {{\cal O}}^{(2k)}_{\mu\nu}$.
\par
In principle, $\gamma^{(2k)}_{\mu\nu}$ at any order can be obtained from the Einstein equations by iteration. In this section, we will show the solutions of $\gamma^{(2k)}_{\mu\nu}$ obtained from the Einstein equations up to $k=3$, and read off the corresponding Weyl-obstruction tensors from them. Some detailed expansions of the Einstein equations can be found in Appendix \ref{AppB0}.
\par
First, the leading order of the $\mu\nu$-components of the Einstein equations gives
\begin{align}
\label{g2}
\frac{\gamma^{(2)}_{\mu\nu}}{L^2}&=-\frac{1}{d-2}\bigg(\hat R^{(0)}_{(\mu\nu)}-\frac{1}{2(d-1)}\hat R^{(0)}\gamma^{(0)}_{\mu\nu}\bigg)\,.
\end{align}
We notice that this is the symmetric part of the Weyl-Schouten tensor defined in \eqref{WP} with a minus sign, i.e.\
\begin{align}
\label{Pgf}
\frac{\gamma^{(2)}_{\mu\nu}}{L^2}&=-\hat P_{(\mu\nu)}=-\hat P_{\mu\nu}-\frac{1}{2}f^{(0)}_{\mu\nu}\,.
\end{align}
Similar to the FG gauge, one can check that the residue of the pole in \eqref{g2} vanishes identically when $d=2$. Hence, there is no Weyl-obstruction tensor for $d=2$ and so no logarithmic term will appear in the metric expansion in the $d\to 2^{-}$ limit.
\par
Then, solving the $O(z^2)$-order of the $\mu\nu$-components of the Einstein equations yields
\begin{align}
\label{g4}
\frac{\gamma^{(4)}_{\mu\nu}}{L^4}&=-\frac{1}{4(d-4)}\hat{\cal O}^{(4)}_{\mu\nu}+\frac{1}{4}\hat P^{\rho}{}_{\mu}\hat P_{\rho\nu}-\frac{1}{2L^2}\hat\nabla^{(0)}_{(\mu} a_{\nu)}^{(2)}\,,
\end{align}
where $\hat{\cal O}^{(4)}_{\mu\nu}$ is the Weyl-obstruction tensor for $d=4$, namely the Weyl-Bach tensor $\hat B_{\mu\nu}$, given by
\begin{align}
\hat{\cal O}^{(4)}_{\mu\nu}=\hat B_{\mu\nu}=\hat\nabla^{(0)}_\lambda\hat\nabla_{(0)}^\lambda \hat P_{\mu\nu}-\hat\nabla^{(0)}_\lambda\hat\nabla^{(0)}_{\nu} \hat P_{\mu}{}^{\lambda}-\hat W^{(0)}_{\rho\nu\mu\lambda}\hat P^{\lambda\rho}\,.
\end{align}
If we compare \eqref{g4} with the corresponding result \eqref{gamma4} in the FG case, we see that the form of the expression stays almost the same, with all the LC quantities now being promoted to the corresponding Weyl quantities. Besides, in the WFG gauge $\gamma^{(4)}_{\mu\nu}$ also has an additional term involving $a^{(2)}_{\mu}$, which does not contribute to the pole at $d=4$.
\par
Moving on to the $O(z^4)$-order of the Einstein equations we get
\begin{equation}
\label{g6}
\begin{split}
\frac{\gamma^{(6)}_{\mu\nu}}{L^6}=&-\frac{1}{24(d-6)(d-4)}\hat{\cal O}^{(6)}_{\mu\nu}+\frac{1}{6(d-4)}\hat B_{\rho(\mu}\hat P^\rho{}_{\nu)} -\frac{1}{3L^4}\hat\nabla^{(0)}_{(\mu}a^{(4)}_{\nu)}\\
&-\frac{1}{L^4}a^{(2)}_{\mu}a^{(2)}_{\nu}+\frac{1}{6L^2}a^{(2)}\cdot a^{(2)}\gamma^{(0)}_{\mu\nu}+\frac{1}{6L^2}\hat\nabla^{(0)}_{(\mu}(\hat P^{\rho}{}_{\nu)}a^{(2)}_\rho)+\frac{1}{2L^4}\hat\gamma_{(2)}^{\sigma}{}_{\mu\nu}a^{(2)}_\sigma\, ,
\end{split}
\end{equation}
where $\hat\gamma_{(2)}^{\sigma}{}_{\mu\nu}\equiv-\frac{L^2}{2}(\hat\nabla^{(0)}_\mu\hat P^\lambda{}_{\nu}+\hat\nabla^{(0)}_\nu\hat P_\mu{}^\lambda-\hat\nabla_{(0)}^\lambda\hat P_{\mu\nu})$, and $\hat{\cal O}^{(6)}_{\mu\nu}$ is the Weyl-obstruction tensor for $d=6$:
\begin{equation}
\label{WO6}
\begin{split}
\hat{\cal O}^{(6)}_{\mu\nu}={}&\hat\nabla_{(0)}^\lambda\hat\nabla^{(0)}_\lambda\hat B_{\mu\nu}-2\hat W^{(0)}_{\rho\nu\mu\lambda}\hat B^{\lambda\rho}-4\hat P\hat B_{\mu\nu}+2\hat P_{\rho(\nu}\hat B^{\rho}{}_{\mu)}-2\hat B^{\rho}{}_{(\mu}\hat P_{\nu)\rho}\\
&+2(d-4)\bigg(\hat\nabla_{(0)}^\lambda\hat C_{\lambda\rho(\mu}\hat P^{\rho}{}_{\nu)} -\hat P^{\lambda\rho}\hat\nabla^{(0)}_{(\mu}\hat C_{\nu)\rho\lambda}+2\hat P^{(\rho\lambda)}\hat\nabla^{(0)}_\lambda \hat C_{(\mu\nu)\rho}+\hat\nabla^{(0)}_\lambda\hat P^{\rho\lambda}\hat C_{(\mu\nu)\rho}\\
&\qquad\qquad\qquad-\hat C^{\rho}{}_{\mu}{}^{\lambda}\hat C_{\lambda\nu\rho}+ \hat\nabla_{(0)}^\lambda\hat P^\rho{}_{(\mu}\hat C_{\nu)\rho\lambda}-\hat W^{(0)}_{\rho(\nu\mu)\lambda}\hat P^{\lambda}{}_\sigma\hat P^{\sigma\rho}\bigg)\,.
\end{split}
\end{equation}
It is easy to verify that \eqref{g6} and \eqref{WO6} go back to the FG expressions \eqref{gamma6} and \eqref{O6} when we turn off the Weyl structure $a_\mu$. Note that when the Weyl connection is turned off, the first term inside the parentheses of \eqref{WO6} vanishes due to \eqref{divC}, and the second term there vanishes since the LC Schouten tensor $\mathring P_{\mu\nu}$ is symmetric while the Cotton tensor is antisymmetric in its last two indices. Once again, we observe that all the $a^{(2)}_\mu$ and $a^{(4)}_\mu$ terms that appear in $\gamma^{(6)}_{\mu\nu}$ do not contribute to the pole at $d=6$ and thus are not part of the obstruction tensor $\hat{\cal O}_{\mu\nu}^{(6)}$. We will discuss this more in Section \ref{Sec6}.
\par
Just as ${\cal O}_{\mu\nu}^{(2k)}$ derived in the FG gauge, all the $\hat{\cal O}_{\mu\nu}^{(2k)}$ are also symmetric traceless tensors, and they are divergence-free when $d=2k$. These properties can either be verified by using the result from the $\mu\nu$-components of the Einstein equations (``evolution equations"), or read off from the $zz$- and $z\mu$-components of the Einstein equations (``constraint equations"). More specifically, plugging $\gamma^{(2k)}_{\mu\nu}$ into the $zz$-component of the Einstein equations we can see that $\mathcal{\hat O}^{(2k)}_{\mu\nu}$ is traceless in any dimension, and the same result can also be obtained by taking the trace of the $\mu\nu$-components of the Einstein equations. To see that $\hat{\cal O}_{\mu\nu}^{(2k)}$ is divergence-free when $d=2k$, we can plug $\gamma^{(2k)}_{\mu\nu}$ into the $z\mu$-components of the Einstein equations. For instance, the $O(z^4)$-order of the $z\mu$-equations gives
\begin{align}
\label{divWB}
\hat\nabla_{(0)}^\nu\hat B_{\nu\mu}=(d-4)\hat P^{\nu\rho}(\hat C_{\rho\nu\mu}+\hat C_{\mu\nu\rho})\,,
\end{align}
and so the divergence of $\hat B_{\mu\nu}$ vanishes when $d=4$. In the FG gauge, where the Schouten tensor is symmetric, the second term in the parentheses vanishes and \eqref{divWB} goes back to \eqref{divB}. On the other hand, the divergence of $\hat{\cal O}_{\mu\nu}^{(2k)}$ can also be derived from a direct calculation by repeatedly using the Weyl-Bianchi identity
\begin{align}
\label{divWP}
\hat\nabla_{(0)}^\nu\hat P_{\nu\mu}=\hat\nabla^{(0)}_\mu\hat P\,,
\end{align}
which can be read off from the $O(z^2)$-order of the $z\mu$-equation. The above discussion indicates that the $zz$- and $z\mu$-components of the Einstein equations do not contain more information about $\gamma^{(2k)}_{\mu\nu}$ than the $\mu\nu$-components do. Note that this discussion concerns only the equations of motion for $\gamma^{(2k)}_{\mu\nu}$. At the $O(z^d)$-order the $zz$- and $z\mu$-equations do provide new constraints on $\pi^{(0)}_{\mu\nu}$, while the $\mu\nu$-equations for $\pi^{(0)}_{\mu\nu}$ become trivial.
\par
It is also convenient to define the \emph{extended Weyl-obstruction tensor} $\hat{\Omega}^{(k)}_{\mu\nu}$ as the Weyl-covariant version of the extended obstruction tensor defined in \eqref{extO}. For example, for $k=1$ and $k=2$ we have
\begin{align}
\label{extWO}
\hat\Omega^{(1)}_{\mu\nu}=-\frac{1}{d-4}\hat B_{\mu\nu}\,,\qquad\hat\Omega^{(2)}_{\mu\nu}=\frac{1}{(d-6)(d-4)}\hat{\mathcal O}^{(6)}_{\mu\nu}\,.
\end{align}
\par
Similar to the FG case, the Weyl-obstruction tensor $\hat{\cal O}_{\mu\nu}^{(2k+2)}$ is also proportional to the residue of the extended Weyl-obstruction tensor $\hat{\Omega}^{(k)}_{\mu\nu}$. Both the Weyl-obstruction tensors and the extended Weyl-obstruction tensors can be defined following \cite{graham2005ambient,GRAHAM20091956} by promoting the ambient metric to the ``Weyl-ambient metric". We will discuss this in detail in a separate publication.
\section{Holographic Weyl Anomaly}
\label{Sec5}
\subsection{Weyl-Ward Identity}\label{WeylWardIdentity}
In this section, we first discuss the anomalous Weyl-Ward identity for a general field theory on a background Weyl geometry following \cite{Ciambelli:2019bzz}, and then we focus on holographic theories in the WFG gauge. Later, we will compute the Weyl anomaly for a holographic theory in the WFG gauge up to $d=8$.
\par
Essentially, for a $d$-dimensional field theory\footnote{From now on, we will work in the Euclidean signature. We also adopt natural units where $c=\hbar=1$.} coupled to a background metric $\gamma^{(0)}_{\mu\nu}$ and a Weyl connection $a^{(0)}_{\mu}$, the Weyl anomaly comes from an additional exponential factor arising in the path integral after applying a Weyl transformation:
\begin{align}
\label{Z}
Z[\gamma^{(0)},a^{(0)}]= \text{e}^{-{\cal A}[{\cal B}(x);\gamma^{(0)},a^{(0)}]} Z[\gamma^{(0)}/{\cal B}(x)^{2},a^{(0)}-\text d\ln {\cal B}(x)]\,.
\end{align}
The anomaly ${\cal A}[{\cal B}(x);\gamma^{(0)},a^{(0)}]$ should satisfy the 1-cocycle condition \cite{Manvelyan:2001pv,BONORA1983305}
\begin{align}
{\cal A} [{\cal B''} {\cal B'};\gamma^{(0)},a^{(0)}]= {\cal A} [{\cal B'};\gamma^{(0)},a^{(0)}] + {\cal A} [{\cal B''}; \gamma^{(0)}/({\cal B'})^{2},a^{(0)}- \text{d}\ln {\cal B'}]\,.
\end{align}
For any non-exact Weyl-invariant $d$-form $\bm A[\gamma_{(0)},a_{(0)}]$, one can check that ${\cal A}[{\cal B}(x);\gamma^{(0)},a^{(0)}]= \int (\ln {\cal B})\bm{A}$ satisfies the cocycle condition, and thus it is a possible candidate for the Weyl anomaly. However, if $\bm A$ is exact, then ${\cal A}$ is cohomologically trivial, since it can be written as the difference between a local functional and its Weyl transform. The linearly independent choices of $\bm A$ in non-trivial cocycles correspond to different central charges.
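To see that this ${\cal A}$ is indeed a cocycle, note that the Weyl invariance of $\bm A$ implies
\begin{align}
{\cal A}[{\cal B''}{\cal B'};\gamma^{(0)},a^{(0)}]=\int\ln({\cal B''}{\cal B'})\,\bm A[\gamma^{(0)},a^{(0)}]=\int\ln{\cal B'}\,\bm A[\gamma^{(0)},a^{(0)}]+\int\ln{\cal B''}\,\bm A[\gamma^{(0)}/({\cal B'})^{2},a^{(0)}-\text{d}\ln{\cal B'}]\,,
\end{align}
which is precisely the 1-cocycle condition.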
\par
In general, the background fields $\gamma^{(0)}_{\mu\nu}$ and $a^{(0)}_\mu$ are the sources of the energy-momentum tensor operator $T^{\mu\nu}$ and the Weyl current operator $J^\mu$, respectively:
\begin{align}
\langle T^{\mu\nu}(x)\rangle=\frac{2}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta S}{\delta \gamma^{(0)}_{\mu\nu}(x)}\,,\qquad \langle J^\mu(x)\rangle=-\frac{1}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta S}{\delta a^{(0)}_\mu(x)}\,.
\end{align}
Expanding the quantum effective action $S\equiv-\ln Z$ to the first order under an infinitesimal Weyl transformation and integrating by parts, for a theory with a Weyl anomaly we obtain
\begin{align}
\label{QFTWeylWard}
\frac{1}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta \cal A}{\delta \ln{\cal B}(x)}=\big\langle T^{\mu\nu}(x)\gamma^{(0)}_{\mu\nu}(x)+\hat{\nabla}^{(0)}_\mu J^\mu(x)\big\rangle\,.
\end{align}
This is the (anomalous) Weyl-Ward identity. As we can see, besides the trace of the energy-momentum tensor that appears in the usual case, the divergence of the Weyl current also contributes to the Ward identity when the Weyl connection is turned on.
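Explicitly, writing ${\cal B}(x)=1+\delta\sigma(x)$, so that $\delta\gamma^{(0)}_{\mu\nu}=-2\delta\sigma\,\gamma^{(0)}_{\mu\nu}$ and $\delta a^{(0)}_\mu=-\partial_\mu\delta\sigma$, the variation of the effective action is
\begin{align}
\delta S=-\int\text{d}^dx\sqrt{-\det\gamma^{(0)}}\,\delta\sigma\,\big\langle T^{\mu\nu}\gamma^{(0)}_{\mu\nu}+\hat{\nabla}^{(0)}_\mu J^\mu\big\rangle\,,
\end{align}
where the divergence term comes from integrating $\langle J^\mu\rangle\partial_\mu\delta\sigma$ by parts (one can check that for a vector of Weyl weight $d$ the Weyl-covariant divergence coincides with the ordinary one); comparing with \eqref{Z} then yields \eqref{QFTWeylWard}.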
\par
Let us now focus on a holographic field theory dual to the vacuum Einstein theory in the $(d+1)$-dimensional bulk. The holographic dictionary provides the relation between the on-shell classical bulk action $S_{bulk}$ and quantum effective action $S_{bdr}$ of the field theory on the boundary \cite{Witten:1998qj}:
\begin{align}
\label{dict}
\exp\left(-S_{bulk}[g;\gamma_{(0)},a_{(0)}]\right)=\exp\left(-S_{bdr}[\gamma_{(0)},a_{(0)}]\right)\,,
\end{align}
where $\gamma_{(0)}$ and $a_{(0)}$ are the boundary values of $h$ and $a$ as shown in \eqref{hex} and \eqref{aex}. Since $a_{\mu}$ is pure gauge in the bulk, $a^{(0)}_\mu$ can be gauged away and hence is not expected to source any current on the boundary. The role of $a^{(0)}_{\mu}$, however, is important, since it makes the energy-momentum tensor, along with all the geometric quantities on the boundary, Weyl-covariant. On the other hand, $p^{(0)}_\mu$ also plays a role in the Weyl-Ward identity. In the FG gauge, $\pi^{(0)}_{\mu\nu}$ corresponds to the expectation value of $T_{\mu\nu}$; the Ward identity for the Weyl symmetry shows that the trace of $\pi^{(0)}_{\mu\nu}$ vanishes, which can be read off from the $O(z^d)$-order of the $zz$-component of the Einstein equations \cite{Leigh}. In the WFG gauge, this equation now gives
\begin{align}
\label{boundaryWI}
0=\frac{d}{2L^2}\gamma_{(0)}^{\mu\nu}\pi^{(0)}_{\mu\nu}+\hat\nabla^{(0)}\cdot p_{(0)}\,.
\end{align}
Besides $\pi^{(0)}_{\mu\nu}$, there is an additional term $\hat\nabla^{(0)}\cdot p_{(0)}$, which represents a gauge ambiguity of $a_\mu$. This suggests that the energy-momentum tensor in the WFG gauge acquires an extra piece, and can now be considered as an ``improved" energy-momentum tensor $\tilde T_{\mu\nu}$ (\`a la \cite{BELINFANTE1940449,Callan1970}):
\begin{equation}
\label{improvedT}
\langle \kappa^2\tilde T_{\mu\nu}\rangle=\frac{d}{2 L^2}\pi^{(0)}_{\mu\nu}+ \hat\nabla^{(0)}_{(\mu} p^{(0)}_{\nu)}\,,
\end{equation}
where $\kappa^2 =8\pi G$.\footnote{The energy-momentum tensor \eqref{improvedT} in the WFG gauge can be verified using the prescription introduced in\cite{deHaro:2000vlm}.}
It is easy to see that the trace of this energy-momentum tensor gives the right-hand side of \eqref{boundaryWI}. One can also find that the $z\mu$-components of the Einstein equations at the $O(z^d)$-order give exactly the conservation law $\langle \hat\nabla_{(0)}^\mu\tilde T_{\mu\nu}\rangle =0$ [see \eqref{emz}], which is the Ward identity corresponding to the boundary diffeomorphisms. Therefore, in the holographic case we can write the anomalous Weyl-Ward identity \eqref{QFTWeylWard} as
\begin{equation}
\label{holoWeylWard}
\frac{1}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta \cal A}{\delta \ln{\cal B}(x)}=\big\langle\tilde T^{\mu\nu}(x)\gamma^{(0)}_{\mu\nu}(x)\big\rangle\,.
\end{equation}
Notice that one should distinguish between $p^{(0)}_\mu$ and the Weyl current $J_\mu$. Unlike $\pi_{\mu\nu}^{(0)}$, which is sourced by $\gamma^{(0)}_{\mu\nu}$, $p^{(0)}_\mu$ is not sourced by $a^{(0)}_\mu$, since $a_\mu$ is pure gauge in the bulk. In the boundary field theory, the Weyl current $J_{\mu}$ vanishes identically, while $p^{(0)}_{\mu}$ contributes to the expectation value of $\tilde T_{\mu\nu}$ as an ``improvement". In a generic non-holographic field theory defined on a background with Weyl geometry, there may exist a nonvanishing $J_{\mu}$ sourced by the Weyl connection $a^{(0)}_\mu$ (see \cite{Ciambelli:2019bzz} for an example).
\par
Using the basis $\{\bm e^z,\bm e^\mu=\text{d} x^\mu\}$ in \eqref{basis}, the bulk on-shell Einstein-Hilbert action with negative cosmological constant can be written as
\begin{align}
\label{SEH}
S_{bulk}&=\frac{1}{2\kappa^2}\int_M\sqrt{-\det g}\,(R-2\Lambda)\bm{e}^z\wedge \text{d} x^1\wedge\cdots\wedge \text{d} x^d\,.
\end{align}
Note that $\sqrt{-\det g}=\sqrt{-\det h}$. Considering the vacuum Einstein equation in the bulk and the expansion
\begin{align}
\label{sqrth}
\sqrt{-\det h}&=\left(\frac{L}{z}\right)^d\sqrt{-\det\gamma^{(0)}}\left(1+\frac{1}{2}\left(\frac{z}{L}\right)^2 X^{(1)}+\frac{1}{2}\left(\frac{z}{L}\right)^4X^{(2)}+\cdots+\frac{1}{2}\left(\frac{z}{L}\right)^dY^{(1)}+\cdots\right)\,,
\end{align}
one can expand \eqref{SEH} as
\begin{align}
\label{Sbulk}
S_{bulk}&=-\frac{L^{-2}}{\kappa^2}\int_M \left(\frac{L}{z}\right)^d\left(d+\frac{d}{2}\left(\frac{z}{L}\right)^2 X^{(1)}+\frac{d}{2}\left(\frac{z}{L}\right)^4X^{(2)}+\cdots+\frac{d}{2}\left(\frac{z}{L}\right)^dY^{(1)}+\cdots\right)\bm{e}^z\wedge vol_\Sigma\,,
\end{align}
where $vol_\Sigma\equiv \sqrt{-\det\gamma^{(0)}}\text{d} x^1\wedge\cdots\wedge\text{d} x^d$.
\par
When the bulk action transforms under a Weyl diffeomorphism, the corresponding boundary theory undergoes a Weyl transformation. However, the diffeomorphism invariance of the bulk Einstein theory does not imply the Weyl invariance on the boundary when there is an anomaly\cite{Mazur2001}, since it follows from \eqref{Z} that
\begin{align}
\label{Weyltrans}
0=S_{bulk}[g|z',x']-S_{bulk}[g|z,x]=S_{bdr}[\gamma'_{(0)},a'_{(0)}|x]-S_{bdr}[\gamma_{(0)},a_{(0)}|x]+ {\cal A}[{\cal B}]\,,
\end{align}
where $(z',x')=(z/{\cal B},x)$ for the bulk and $\gamma'_{(0)}=\gamma_{(0)}/{\cal B}^2$, $a'_{(0)}=a_{(0)}-\text{d}\ln{\cal B}$ for the boundary.
\par
Normally, to compute the Weyl anomaly one first needs to regularize the bulk on-shell action \eqref{Sbulk} by introducing a cutoff surface at some small value of $z=\epsilon$, and then add counterterms to cancel the divergences when $\epsilon\to 0$ \cite{Henningson1998}. This is essentially how the Weyl anomaly arises, since the regulator breaks the Weyl symmetry and causes the appearance of a logarithmically divergent term. However, since we do not assume that we have an integrable distribution when the Weyl structure is turned on, the cutoff regularization scheme is inconvenient in the WFG gauge. It has been elucidated in \cite{Ciambelli:2019bzz} using dimensional regularization that the Weyl anomaly can be extracted from the pole of $S_{bulk}$ that arises in an even dimension. By evaluating the difference of the pole term in $S_{bulk}$ under a Weyl diffeomorphism, one finds that the Weyl anomaly ${\cal A}_k$ of the $2k$-dimensional boundary theory is
\begin{align}
\label{Ak}
{\cal A}_k=\frac{k}{\kappa^2L}\int\ln {\cal B} X^{(k)}_{d=2k}vol_\Sigma\,.
\end{align}
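To see schematically where \eqref{Ak} comes from, note that the $X^{(k)}$ term in \eqref{Sbulk} integrates radially to the pole term
\begin{align}
S_{pole}=-\frac{d}{2\kappa^2L}\,\frac{1}{2k-d}\left(\frac{z}{L}\right)^{2k-d}\int X^{(k)}vol_\Sigma\,.
\end{align}
Under the Weyl diffeomorphism \eqref{weyl} the factor $(z/L)^{2k-d}$ picks up ${\cal B}^{d-2k}=1-(2k-d)\ln{\cal B}+O\big((2k-d)^2\big)$, so the shift of the pole term stays finite as $d\to2k^-$: the $O(2k-d)$ piece cancels the pole, and the bookkeeping of \eqref{Weyltrans} identifies the leftover with \eqref{Ak}.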
Therefore, to find the Weyl anomaly in $2k$ dimensions, we only have to compute $X^{(k)}$ coming from the expansion of $\sqrt{-\det h}$.
\subsection{Weyl Anomaly in $2d$ and $4d$}
Now let us apply \eqref{Ak} to $2d$ and $4d$. Here we first go over the WFG results presented in \cite{Ciambelli:2019bzz}, and then make a few important remarks. To find the holographic Weyl anomaly in $2d$ and $4d$, all we have to do is to plug in the expressions of $X^{(1)}$ and $X^{(2)}$ obtained from the $zz$-components of the Einstein equations (see Appendix \ref{AppB0}), that is,
\begin{align}
\label{2d4d}
X^{(1)} =-\frac{L^2}{2(d-1)}\hat R\,,\qquad X^{(2)}=-\frac{L^4}{4(d-2)^2}\bigg(\hat R_{\mu\nu}\hat R^{\nu\mu}-\frac{d}{4(d-1)}\hat R^2\bigg)-\frac{L^2}{2}\hat\nabla\cdot a^{(2)}\,.
\end{align}
[From now on we will drop the label ``(0)" for the boundary curvature quantities and derivative operator when there is no confusion.] First we look at the Weyl anomaly in $d=2$:
\begin{align}
\label{2dA}
{\cal A}_1&=\frac{1}{\kappa^2L}\int\ln {\cal B} X^{(1)}_{d=2}vol_\Sigma =-\frac{L}{16\pi G}\int \ln {\cal B}\hat R\sqrt{-\det\gamma^{(0)}}\text{d}^2x\,,
\end{align}
where in the second equality we used \eqref{2d4d}. Then, it follows from \eqref{holoWeylWard} that the Weyl-Ward identity now reads
\begin{align}
\langle{\tilde T}^\mu{}_\mu\rangle=-\frac{L}{16\pi G}\hat R\,.
\end{align}
We can see that the right-hand side of this result has exactly the same form as what we get from the standard calculation in the FG gauge, except that the curvature scalar is now Weyl-covariant. Similarly, plugging \eqref{2d4d} into \eqref{Ak}, we find that the Weyl anomaly in $d=4$ can be written as
\begin{align}
\label{4dA}
{\cal A}_2&=\frac{2}{\kappa^2L}\int\ln {\cal B} X^{(2)}_{d=4}vol_\Sigma=-\frac{L}{8\pi G}\int\bigg[\frac{L^2}{8}\Big(\hat R_{\mu\nu}\hat R^{\nu\mu}-\frac{1}{3}\hat R^2\Big)+\hat\nabla\cdot a^{(2)}\bigg]\ln {\cal B}\sqrt{-\det\gamma^{(0)}}\text{d}^4x\,.
\end{align}
Again, one can immediately tell that the right-hand side of this result matches the standard FG result (e.g. \cite{Henningson1998}) if we turn off the Weyl structure.
\par
There are a few things worth paying attention to: first, in the $2d$ Weyl anomaly \eqref{2dA}, the Weyl-Ricci scalar is also the Weyl-Euler density $\hat E^{(2)}$ in $2d$, i.e.\ the Euler density Weyl-covariantized by the Weyl connection. Furthermore, we can rewrite the $4d$ Weyl anomaly \eqref{4dA} as
\begin{align}
\label{A2}
{\cal A}_2&=-\frac{L}{8\pi G}\int\bigg[\frac{L^2}{16}\Big(\hat W_{\mu\nu\rho\sigma}\hat W^{\rho\sigma\mu\nu}-\hat E^{(4)}\Big)+\hat\nabla\cdot a^{(2)}\bigg]\ln {\cal B}\sqrt{-\det\gamma^{(0)}}\text{d}^4x\,,
\end{align}
where $\hat E^{(4)}$ is the Weyl-Euler density in $4d$:
\begin{align}
\hat E^{(4)}=\hat R_{\mu\nu\rho\sigma}\hat R^{\rho\sigma\mu\nu}-4\hat R_{\mu\nu}\hat R^{\nu\mu}+\hat R^2\,.
\end{align}
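The rewriting from \eqref{4dA} to \eqref{A2} rests on the $4d$ identity
\begin{align}
\hat W_{\mu\nu\rho\sigma}\hat W^{\rho\sigma\mu\nu}-\hat E^{(4)}=2\Big(\hat R_{\mu\nu}\hat R^{\nu\mu}-\frac{1}{3}\hat R^2\Big)\,,
\end{align}
which reduces in the LC limit to the familiar four-dimensional relation $W^2-E^{(4)}=2\big(R_{\mu\nu}R^{\mu\nu}-\frac{1}{3}R^2\big)$; the index orderings shown are the ones appropriate when the Weyl-Ricci tensor is not symmetric.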
Traditionally, the Euler density $E^{(2k)}$ without the Weyl connection is called the type A Weyl anomaly, which is topological in $2k$ dimensions and not Weyl-invariant, while the type B Weyl anomaly is the Weyl-invariant part of the anomaly \cite{Deser1993}. Here we find that in the WFG gauge this classification of the Weyl anomaly still applies, with the Weyl-Euler density now Weyl-invariant, since the curvature quantities in this setup are endowed with Weyl covariance.
\par
Also, notice that the subleading term $a^{(2)}_\mu$ of $a_\mu$ only makes an appearance in the anomaly through a cohomologically trivial term, i.e.\ it can be expressed as the difference between a local functional and its Weyl transform:
\begin{align}
\int\text{d}^4x\sqrt{-\det\gamma_{(0)}}\ln {\cal B}\,\hat\nabla_\mu a_{(2)}^\mu=\int\text{d}^4x\sqrt{-\det\gamma'_{(0)}}\,a'^{(0)}_\mu a'^\mu_{(2)}-\int\text{d}^4x\sqrt{-\det\gamma_{(0)}}\,a^{(0)}_\mu a_{(2)}^\mu\,,
\end{align}
where $a'^\mu_{(2)}={\cal B}^4a_{(2)}^\mu$, and the boundary term due to integrating by parts is ignored. We will see that this is a generic feature of the Weyl anomaly in the WFG gauge for any dimension.
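To verify this, note that in $d=4$ one has $\sqrt{-\det\gamma'_{(0)}}\,a'^{(0)}_\mu a'^\mu_{(2)}=\sqrt{-\det\gamma_{(0)}}\,\big(a^{(0)}_\mu-\partial_\mu\ln{\cal B}\big)a^\mu_{(2)}$, so the right-hand side reduces to
\begin{align}
-\int\text{d}^4x\sqrt{-\det\gamma_{(0)}}\,\partial_\mu\ln{\cal B}\,a^\mu_{(2)}=\int\text{d}^4x\sqrt{-\det\gamma_{(0)}}\ln{\cal B}\,\frac{1}{\sqrt{-\det\gamma_{(0)}}}\,\partial_\mu\big(\sqrt{-\det\gamma_{(0)}}\,a^\mu_{(2)}\big)
\end{align}
up to the discarded boundary term; one can check that for the weight-$4$ vector $a^\mu_{(2)}$ in $d=4$ this ordinary divergence coincides with $\hat\nabla_\mu a^\mu_{(2)}$, reproducing the left-hand side.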
\par
Although in \eqref{2dA} and \eqref{4dA} we expressed the holographic Weyl anomaly in $2d$ and $4d$ in terms of curvature to match the corresponding familiar results in the FG gauge, we can also express them alternatively in terms of the Weyl-Schouten tensor:
\begin{align}
\label{X1X2}
\frac{X^{(1)}}{L^2}=-\hat P\,,\qquad \frac{X^{(2)}}{L^4}=-\frac{1}{4}\text{tr}(\hat P^2)+\frac{1}{4}\hat P^2-\frac{1}{2L^2}\hat\nabla\cdot a^{(2)}\,.
\end{align}
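The equivalence of \eqref{X1X2} with \eqref{2d4d} follows by tracing \eqref{WP}: for instance,
\begin{align}
\hat P=\gamma_{(0)}^{\mu\nu}\hat P_{\mu\nu}=\frac{1}{d-2}\Big(\hat R-\frac{d}{2(d-1)}\hat R\Big)=\frac{\hat R}{2(d-1)}\,,
\end{align}
so that $X^{(1)}=-L^2\hat P$ reproduces the first expression in \eqref{2d4d}.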
Then \eqref{2dA} and \eqref{4dA} can be written as
\begin{align}
\label{2dAP}
{\cal A}_1&=-\frac{L}{\kappa^2}\int\text{d}^2x\sqrt{-\det\gamma^{(0)}}\ln {\cal B} \hat P\,,\\
\label{4dAP}
{\cal A}_2&=-\frac{L^3}{\kappa^2}\int\text{d}^4x\sqrt{-\det\gamma^{(0)}}\ln {\cal B} \bigg(\frac{1}{2}\text{tr}(\hat P^2)-\frac{1}{2}\hat P^2+\frac{1}{L^2}\hat\nabla\cdot a^{(2)}\bigg)\,.
\end{align}
In higher dimensions, $X^{(k)}$ can be expressed in terms of $\gamma^{(j)}_{\mu\nu}$ with $0\leqslant j\leqslant2k$ (see Appendix \ref{AppB}). By solving the Einstein equations we have seen that these terms can all be expressed in terms of $\hat P_{\mu\nu}$ and $\hat{\cal O}^{(j)}_{\mu\nu}$ with $2<j<2k$. Therefore, we will use the Weyl-Schouten tensor and Weyl-obstruction tensors as the building blocks for the Weyl anomaly in even dimensions.
\subsection{Weyl Anomaly in $6d$}\label{Weyl6d}
After revisiting the results in $2d$ and $4d$, we will now present our computations for $6d$ and $8d$. In principle, $X^{(k)}$ can be obtained by solving the Einstein equations as we have done for $2d$ and $4d$. However, as the dimension goes higher, computing the curvature becomes extremely tedious. To facilitate the computation in higher dimensions, we can organize the Einstein equations in a more efficient way that avoids the curvature tensors, namely by using the Raychaudhuri equation of the congruence generated by $\underline D_z$. The details of the Raychaudhuri equation and its expansions are given in Appendix \ref{AppB}.
\par
To solve for $X^{(3)}$, we need to expand $\sqrt{-\det h}$ to the order $O(z^{6-d})$. Using \eqref{Ray6d} and plugging the results we have obtained for $\gamma^{(2)}_{\mu\nu},\gamma^{(4)}_{\mu\nu}$ and $X^{(1)},X^{(2)}$ into \eqref{X3}, we obtain
\begin{align}
\frac{X^{(3)}}{L^6}=&-\frac{1}{12}\text{tr}(\hat P^3)+\frac{1}{8}\text{tr}(\hat P^2)\hat P-\frac{1}{24}\hat P^3+\frac{1}{12}\text{tr}(\hat\Omega^{(1)}\hat P)\nonumber\\
\label{X3P}
&+\frac{1}{6L^4}(d-6)a^2_{(2)}-\frac{1}{3L^4}\hat\nabla\cdot a^{(4)}-\frac{1}{12L^2}\hat\nabla_\mu\big[a^{(2)}_\nu(3\hat P^{\mu\nu}+\hat P^{\nu\mu}
-3\hat P \gamma_{(0)}^{\mu\nu})\big]\,,
\end{align}
where we used the extended Weyl-obstruction tensor $\hat\Omega^{(1)}_{\mu\nu}$ defined in \eqref{extWO}. Notice first that the term quadratic in $a^{(2)}_\mu$ in $X^{(3)}$ vanishes in $6d$, and thus does not contribute to the Weyl anomaly. Then, it follows from \eqref{Ak} that the Weyl anomaly in $6d$ is
\begin{align}
\label{6dA}
{\cal A}_3={}&\frac{3}{\kappa^2L}\int\ln {\cal B} X^{(3)}_{d=6}vol_\Sigma\nonumber\\
={}&-\frac{L^5}{\kappa^2}\int \text{d}^6x\sqrt{-\det\gamma^{(0)}}\ln {\cal B} \bigg(\frac{1}{4}\text{tr}(\hat P^3)-\frac{3}{8}\text{tr}(\hat P^2)\hat P+\frac{1}{8}\hat P^3-\frac{1}{4}\text{tr}(\hat\Omega^{(1)}\hat P)\nonumber\\
&+\frac{1}{L^4}\hat\nabla\cdot a^{(4)}+\frac{1}{4L^2}\hat\nabla_\mu\big[a^{(2)}_\nu(3\hat P^{\mu\nu}+\hat P^{\nu\mu}
-3\hat P \gamma_{(0)}^{\mu\nu})\big]\bigg)\,.
\end{align}
Just as in the $4d$ case, the subleading terms in the expansion of $a_\mu$ appear only in total derivatives and thus only contribute to cohomologically trivial terms in the $6d$ Weyl anomaly. When we turn off $a_\mu^{(0)}$ and $a_\mu^{(2)}$, this result agrees with the holographic Weyl anomaly in the FG gauge computed in \cite{Henningson1998}. \par
Usually, the Weyl anomaly in $6d$ is written as a linear combination of the $6d$ Euler density and three conformal invariants in $6d$ (see \cite{Bonora:1985cq,Deser1993,Henningson1998}), representing the four central charges in $6d$. The result we obtained can also be written in this way, which means the classification into type A and type B anomalies still holds for the WFG gauge in $6d$. However, as we will discuss shortly, the expression we have in \eqref{X3P} in terms of $\hat P_{\mu\nu}$ and $\hat\Omega^{(1)}_{\mu\nu}$ reveals some interesting aspects of the Weyl anomaly.
\subsection{Weyl Anomaly in $8d$}\label{Weyl8d}
Expanding $\sqrt{-\det h}$ to the order $O(z^{8-d})$, we have $X^{(4)}$ in \eqref{X4}. Using \eqref{Ray8d} and plugging the results up to $\gamma^{(6)}_{\mu\nu}$ and $X^{(3)}$ into \eqref{X4}, we have
\begin{align}
\label{X4P}
\frac{X^{(4)}}{L^8}=&-\frac{1}{32}\text{tr}(\hat P^4)+\frac{1}{24}\text{tr}(\hat P^3)\hat P+\frac{1}{64}(\text{tr} (\hat P^2))^2-\frac{1}{32}\text{tr}(\hat P^2)\hat P^2+\frac{1}{192}\hat P^4\nonumber\\
&-\frac{1}{24}\text{tr}(\hat\Omega^{(1)}\hat P)\hat P+\frac{1}{24}\text{tr}(\hat\Omega^{(1)}\hat P^2)-\frac{1}{96}\text{tr}(\hat\Omega^{(1)}\hat\Omega^{(1)})-\frac{1}{96}\text{tr}(\hat\Omega^{(2)}\hat P)\nonumber\\
&+\frac{d-8}{4L^6}a^{(4)}\cdot a^{(2)}+\frac{d-8}{12L^4}a^{(2)}_\mu a^{(2)}_\nu(\hat P^{\mu\nu}-\hat P\gamma_{(0)}^{\mu\nu})+\text{total derivatives}\,.
\end{align}
As expected, all the terms in \eqref{X4P} that involve $a^{(2)}_\mu$, $a^{(4)}_\mu$, $a^{(6)}_\mu$ either vanish when $d=8$ or contribute only to the total derivatives. The details of the total derivatives are given in \eqref{X8t}. Plugging \eqref{X4P} into \eqref{Ak}, we obtain the holographic Weyl anomaly in $8d$:
\begin{align}
\label{8dA}
{\cal A}_4={}&\frac{4}{\kappa^2L}\int\ln {\cal B} X^{(4)}_{d=8}vol_\Sigma\nonumber\\
={}&-\frac{L^7}{\kappa^2}\int\text{d}^8x\sqrt{-\det\gamma^{(0)}}\ln {\cal B} \bigg(\frac{1}{8}\text{tr}(\hat P^4)-\frac{1}{6}\text{tr}(\hat P^3)\hat P-\frac{1}{16}(\text{tr} (\hat P^2))^2+\frac{1}{8}\text{tr}(\hat P^2)\hat P^2-\frac{1}{48}\hat P^4\nonumber\\
&+\frac{1}{6}\text{tr}(\hat\Omega^{(1)}\hat P)\hat P-\frac{1}{6}\text{tr}(\hat\Omega^{(1)}\hat P^2)+\frac{1}{24}\text{tr}(\hat\Omega^{(1)}\hat\Omega^{(1)})+\frac{1}{24}\text{tr}(\hat\Omega^{(2)}\hat P)+\text{total derivatives}\bigg)\,.
\end{align}
Once again, we can see that the subleading terms in $a_\mu$ only have cohomologically trivial contributions. If we go back to the FG gauge, then this result agrees with the renormalized volume coefficient for $k=4$ shown in \cite{GRAHAM20091956}. One can also write the FG version of the above result in the traditional way as a linear combination of the type A and type B anomalies, i.e.\ the Euler density and Weyl invariants (the list of Weyl invariants in $8d$ can be found in \cite{Boulanger:2004zf}). We naturally expect that this classification can also be applied to the holographic Weyl anomaly in the WFG gauge for higher dimensions.
\subsection{Building Blocks of the Weyl Anomaly}
As we have seen, if we ignore the total derivatives that depend on the subleading terms of the $a_\mu$ expansion, $X^{(1)}$ corresponds to the Weyl-Ricci scalar (i.e.\ the $2d$ Weyl-Euler density) and $X^{(2)}$ corresponds to the classic ``$a=c$" result. For the Weyl anomaly in $6d$ and $8d$, both $X^{(3)}$ and $X^{(4)}$ can also be written as linear combinations of the Weyl-Euler density and type B anomalies. This is true for both the FG and WFG cases; the only difference is that the quantities in the latter are Weyl-covariant. One just needs to replace the Weyl quantities by their LC counterparts (i.e.\ set $a_\mu$ to zero) to get the Weyl anomaly in the FG case. However, when expressing them in terms of the Weyl-Schouten tensor and extended Weyl-obstruction tensors (or the Schouten tensor and extended obstruction tensors in the FG case), we observe that the polynomial terms of $X^{(k)}/L^{2k}$ (without the total derivative terms) in $2k$ dimensions, denoted by $\bar X^{(k)}$, have the following structures:
\begin{align}
\label{X1D}
\bar X^{(1)}&=-\delta^\mu_\nu\hat P^\nu{}_\mu\,,\\
2\bar X^{(2)}&=\frac{1}{2}\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat P^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\,,\\
6\bar X^{(3)}&=-\frac{1}{4}\delta^{\mu_1\mu_2\mu_3}_{\nu_1\nu_2\nu_3}\hat P^{\nu_1}{}_{\mu_1} \hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}-\frac{1}{2}\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat\Omega_{(1)}^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\,,\\
\label{X4D}
24\bar X^{(4)}&=\frac{1}{8}\delta^{\mu_1\mu_2\mu_3\mu_4}_{\nu_1\nu_2\nu_3\nu_4}\hat P^{\nu_1}{}_{\mu_1} \hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\hat P^{\nu_4}{}_{\mu_4}+\frac{1}{2}\delta^{\mu_1\mu_2\mu_3}_{\nu_1\nu_2\nu_3}\hat\Omega_{(1)}^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\nonumber\\
&\quad+\frac{1}{4}\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat\Omega_{(1)}^{\nu_1}{}_{\mu_1}\hat\Omega_{(1)}^{\nu_2}{}_{\mu_2}+\frac{1}{4}\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat\Omega_{(2)}^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\,,
\end{align}
where the generalized Kronecker delta is defined as
\begin{align}
\delta^{\mu_1\cdots\mu_s}_{\nu_1\cdots\nu_s}=s!\delta^{\mu_1}{}_{[\nu_1}\cdots\delta^{\mu_s}{}_{\nu_s]}\,.
\end{align}
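For instance, for $s=2$ the definition gives $\delta^{\mu_1\mu_2}_{\nu_1\nu_2}=\delta^{\mu_1}{}_{\nu_1}\delta^{\mu_2}{}_{\nu_2}-\delta^{\mu_1}{}_{\nu_2}\delta^{\mu_2}{}_{\nu_1}$, so the contraction in $\bar X^{(2)}$ unpacks to
\begin{align*}
2\bar X^{(2)}=\frac{1}{2}\big(\hat P^2-\text{tr}(\hat P^2)\big)\,,\qquad\text{i.e.}\qquad\bar X^{(2)}=\frac{1}{4}\hat P^2-\frac{1}{4}\text{tr}(\hat P^2)\,,
\end{align*}
which indeed reproduces the polynomial part of $X^{(2)}/L^4$ computed in Appendix \ref{AppB0}.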
From \eqref{X1D}--\eqref{X4D} we can see that $\bar X^{(k)}$ contains all possible combinations of $\hat P_{\mu\nu}$ and $\hat\Omega^{(2<j<2k)}_{\mu\nu}$ whose Weyl weights add up to $2k$, i.e.\ the Weyl weight of $X^{(k)}$. Using this pattern, one can directly write down the terms appearing in the holographic Weyl anomaly in any dimension. For instance, we can easily predict, without explicit calculation, that $\bar X^{(5)}$ is a linear combination of the following terms:
\begin{align*}
&\delta^{\mu_1\mu_2\mu_3\mu_4\mu_5}_{\nu_1\nu_2\nu_3\nu_4\nu_5}\hat P^{\nu_1}{}_{\mu_1} \hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\hat P^{\nu_4}{}_{\mu_4}\hat P^{\nu_5}{}_{\mu_5}\,,\quad\delta^{\mu_1\mu_2\mu_3\mu_4}_{\nu_1\nu_2\nu_3\nu_4}\hat\Omega_{(1)}^{\nu_1}{}_{\mu_1} \hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\hat P^{\nu_4}{}_{\mu_4}\,,\\
&\delta^{\mu_1\mu_2\mu_3}_{\nu_1\nu_2\nu_3}\hat\Omega_{(2)}^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\,,\quad\delta^{\mu_1\mu_2\mu_3}_{\nu_1\nu_2\nu_3}\hat\Omega_{(1)}^{\nu_1}{}_{\mu_1}\hat\Omega_{(1)}^{\nu_2}{}_{\mu_2}\hat P^{\nu_3}{}_{\mu_3}\,,\quad\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat\Omega_{(2)}^{\nu_1}{}_{\mu_1}\hat\Omega_{(1)}^{\nu_2}{}_{\mu_2}\,,\quad\delta^{\mu_1\mu_2}_{\nu_1\nu_2}\hat\Omega_{(3)}^{\nu_1}{}_{\mu_1}\hat P^{\nu_2}{}_{\mu_2}\,.
\end{align*}
These terms represent the independent central charges that appear in the holographic Weyl anomaly in $d=10$.
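\par
This counting is simple enough to automate. The following Python snippet is an illustrative aside of ours (the function name and the unit bookkeeping are not part of the derivation): assigning $\hat P^{\nu}{}_{\mu}$ one unit of Weyl weight $2$ and $\hat\Omega_{(j)}{}^{\nu}{}_{\mu}$ the $j+1$ units read off from \eqref{X1D}--\eqref{X4D}, it enumerates the candidate terms of $\bar X^{(k)}$; single-factor terms are dropped, since a lone trace of an extended Weyl-obstruction tensor never appears in the explicit results above.
\begin{verbatim}
from itertools import combinations_with_replacement

def anomaly_terms(k):
    # Illustrative bookkeeping only; names are ours, not from the paper.
    # In units of Weyl weight 2: P counts 1, Omega_(j) counts j + 1.
    factors = [("P", 1)] + [("Omega_(%d)" % j, j + 1) for j in range(1, k)]
    terms = []
    for n in range(2, k + 1):  # at least two factors per term
        for combo in combinations_with_replacement(factors, n):
            if sum(units for _, units in combo) == k:
                terms.append(" ".join(name for name, _ in combo))
    return terms

print(anomaly_terms(5))
# ['P Omega_(3)', 'Omega_(1) Omega_(2)', 'P P Omega_(2)',
#  'P Omega_(1) Omega_(1)', 'P P P Omega_(1)', 'P P P P P']
\end{verbatim}
Running it for $k=5$ reproduces exactly the six structures listed above, and for $k=4$ it returns the four terms of \eqref{X4D}.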
\par
Based on the above pattern, it is natural to expect a general expression that can generate the holographic Weyl anomaly in any dimension, which is an analog of the exponential structure given by the Chern class that generates the chiral anomaly in any dimension (see, e.g.\ \cite{Frampton:1983nr,Zumino:1983rz,Bertlmann:1996xk}). It has been suggested in \cite{Deser1993} that the type A Weyl anomaly can be generated by a mechanism similar to that for the chiral anomaly. The expressions for the Weyl anomaly in terms of the (Weyl-) Schouten tensor and the extended (Weyl-) obstruction tensors suggest a similar mechanism for the holographic Weyl anomaly.
\section{The Role of Weyl Structure}
\label{Sec6}
Now that we have obtained the Weyl-obstruction tensors and the Weyl anomaly, let us provide some observations on how the $a_{\mu}$ mode \eqref{aex} is involved. We have already mentioned that, according to the FG theorem, this mode is pure gauge in the bulk. Our calculations provide a few clear manifestations of this fact.
\par
The first one is that the subleading terms $a^{(2k)}_{\mu}$ with $k>0$ in the expansion of $a_\mu$ cannot be determined from the Einstein equations when $a^{(0)}_{\mu}$ is given. This is different from the expansion of $h_{\mu\nu}$, where the subleading terms $\gamma_{\mu\nu}^{(2k)}$ can be solved for (on-shell) in terms of $\gamma_{\mu\nu}^{(0)}$.
\par
The second one is that $a_{\mu}$ appears only inside total derivatives in $X^{(k)}$, and thus represents cohomologically trivial modifications of the boundary Weyl anomaly. For the subleading terms $a_{\mu}^{(2k)}$ with $k\geqslant 1$, this can easily be seen from the expressions \eqref{4dAP}, \eqref{6dA} and \eqref{8dA}. What is not explicit in these formulas is that $a^{(0)}_{\mu}$ also appears inside a total derivative. This can be verified by separating the LC quantities out of the Weyl quantities in $X^{(k)}$. For instance, denoting the LC Schouten tensor by $\mathring P_{\mu\nu}$ and the LC connection by $\mathring{\nabla}$, $X^{(1)}$ in $2d$ and $X^{(2)}$ in $4d$ can be written as
\begin{align}
L^{-2}X^{(1)}_{d=2}={}& L^{-2}\mathring X^{(1)}_{d=2}+\mathring{\nabla}\cdot a^{(0)}\,,\\
L^{-4}X^{(2)}_{d=4}={}&L^{-4}\mathring X^{(2)}_{d=4}-\frac{1}{2}\mathring{\nabla}_\mu(\mathring P^{\mu\nu}a^{(0)}_\nu-\mathring Pa_{(0)}^\mu)\nonumber\\
&-\frac{1}{4}\mathring{\nabla}_\mu(a^{(0)}_\nu\mathring{\nabla}^\nu a_{(0)}^\mu-a_{(0)}^\mu\mathring{\nabla}\cdot a_{(0)})-\frac{1}{4}\mathring{\nabla}_\mu(a_{(0)}^\mu a_{(0)}^2)-\frac{1}{2L^2}\mathring{\nabla}\cdot a^{(2)}\,,
\end{align}
where $L^{-2}\mathring X^{(1)}=-\mathring{P}$ and $L^{-4}\mathring X^{(2)}=\frac{1}{4}\mathring{P}^2-\frac{1}{4}\text{tr}(\mathring{P}^2)$.\footnote{Note that $\mathring{\nabla}\cdot a^{(2)}$ is equivalent to $\hat\nabla\cdot a^{(2)}$, since in $2k$ dimensions $\hat\nabla$ and $\mathring{\nabla}$ give the same result when acting on a vector with Weyl weight $+2k$ (see Appendix \ref{WG}).}
Notice that although the terms involving $a^{(0)}_{\mu}$ are total derivatives, they are not Weyl-covariant, and so one cannot naively assume that they are trivial cocycles. However, by finding suitable local counterterms, we have checked that all the terms involving $a_{\mu}^{(0)}$ are indeed part of a trivial cocycle in $2d$ and $4d$. As $a_{\mu}$ is pure gauge, we expect this to be true in general.
\par
In principle, the Weyl connection $a^{(0)}_\mu$ on the boundary brings in new Weyl-invariant objects, such as $\text{tr}(f_{(0)}^2)$, which could lead to new central charges in the Weyl anomaly. However, up to $d=8$ we find that the classification into type A and type B anomalies still applies, and in such a basis the nonvanishing central charges are the same as those in the FG case. If this carries over to higher dimensions, the fact that $a^{(0)}_\mu$ appears only inside total derivatives in $X^{(k)}$ can also be deduced by viewing the Weyl anomaly as the sum of the type A and type B anomalies. In the FG gauge, under a Weyl transformation the type B anomaly is invariant, while the type A anomaly, i.e.\ the Euler density, acquires an extra total derivative involving $\ln{\cal B}$. Since the Weyl connection makes the Weyl anomaly in the WFG gauge Weyl-invariant, the terms with $a^{(0)}_\mu$ in the Weyl-Euler density must exactly compensate this extra total derivative, and hence they must themselves form a total derivative.
\par
Another observation we have mentioned is that although the subleading terms in the expansion of $a_{\mu}$ make an appearance in $\gamma^{(2k)}_{\mu\nu}$, they do not appear in the Weyl-obstruction tensors. Up to $k=3$, we have seen explicitly in \eqref{g2}, \eqref{g4} and \eqref{g6} that the terms with $a_{\mu}^{(2)}$ and $a_{\mu}^{(4)}$ do not contribute to the pole at $d=2k$ in $\gamma^{(2k)}_{\mu\nu}$. What is also true, but less obvious, is that the terms with $a^{(0)}_\mu$ do not contribute to the pole at $d=2$ in the Weyl-Schouten tensor and are proportional to $d-2k$ in the Weyl-obstruction tensors. For instance, one can separate $a^{(0)}_\mu$ from $\hat P_{\mu\nu}$ and get
\begin{align}
\hat P_{\mu\nu}=\mathring P_{\mu\nu}+\mathring{\nabla}_\nu a^{(0)}_{\mu}+a^{(0)}_{\mu}a^{(0)}_{\nu}-\frac{1}{2}a_{(0)}^2\gamma^{(0)}_{\mu\nu}\,,
\end{align}
while the only pole on the right-hand side is in the LC Schouten tensor $\mathring P_{\mu\nu}$. Similarly, expressing the Weyl-Bach tensor in terms of LC quantities we have
\begin{align}
\hat B_{\mu\nu}=\mathring{B}_{\mu\nu}+(d-4)(a_{(0)}^\lambda\mathring{C}_{\lambda\nu\mu}-2a_{(0)}^\lambda\mathring{C}_{\mu\nu\lambda}+a_{(0)}^\lambda a_{(0)}^\rho\mathring W_{\rho\mu\lambda\nu})\,.
\end{align}
Thus, when $d=4$, $a^{(0)}_\mu$ does not contribute to the pole in $\gamma^{(4)}_{\mu\nu}$, and the Weyl-Bach tensor $\hat{B}_{\mu\nu}$ is equivalent to the LC Bach tensor $\mathring{B}_{\mu\nu}$. One naturally expects this to hold for all Weyl-obstruction tensors, i.e.\ $\hat{\cal O}^{(2k)}_{\mu\nu}$ is equivalent to the LC obstruction tensor $\mathring{\cal O}^{(2k)}_{\mu\nu}$ when $d=2k$. Note that when $d>2k$, the $a^{(0)}_\mu$ terms are included in the Weyl-obstruction tensor so that $\hat{\cal O}^{(2k)}_{\mu\nu}$ is always Weyl-covariant.
\par
The statement that no term in the expansion of $a_\mu$ contributes to the pole of $\gamma^{(2k)}_{\mu\nu}$ is consistent with the following claim: when $d=2k$, the Weyl-obstruction tensor $\hat{\cal O}_{(2k)}^{\mu\nu}$ satisfies
\begin{align}
\label{varX}
\hat{\cal O}_{(2k)}^{\mu\nu}=\frac{1}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta}{\delta\gamma^{(0)}_{\mu\nu}}\int\text{d}^d x\sqrt{-\det\gamma^{(0)}}X^{(k)}\,.
\end{align}
The FG version of this relation for $\mathring{\cal O}_{(2k)}^{\mu\nu}$ was proved in \cite{graham2005ambient} (see also \cite{deHaro:2000vlm}). If the claim above can be proved for the WFG gauge, then the reason that none of the terms in the expansion of $a_\mu$ contributes to $\hat{\cal O}^{(2k)}_{\mu\nu}$ at $d=2k$ will be straightforward: as they only appear in total derivative terms in $X^{(k)}$, they will be dropped in the variation above. Hence, this can be viewed as another manifestation of $a_\mu$ being pure gauge in the bulk. We have verified by brute force that for $k=2$ the variation in \eqref{varX} indeed gives the Weyl-Bach tensor when $d=4$, and a rigorous proof for any $k$ is worth further study.
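One can also check the degenerate case $k=1$ of \eqref{varX} directly. Using the decomposition of $X^{(1)}_{d=2}$ given above, the $a^{(0)}$ term is a total derivative and drops out of the variation, while $L^{-2}\mathring X^{(1)}=-\mathring P=-\frac{1}{2}\mathring R$ at $d=2$ gives
\begin{align*}
\frac{1}{\sqrt{-\det\gamma^{(0)}}}\frac{\delta}{\delta\gamma^{(0)}_{\mu\nu}}\int\text{d}^2x\sqrt{-\det\gamma^{(0)}}\,X^{(1)}=\frac{L^2}{2}\bigg(R_{(0)}^{\mu\nu}-\frac{1}{2}R^{(0)}\gamma_{(0)}^{\mu\nu}\bigg)=0\,,
\end{align*}
since the Einstein tensor vanishes identically in $2d$; this is consistent with the absence of an obstruction tensor in $2d$.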
\par
Based on the FG version of the relation \eqref{varX}, there is another approach to finding the (LC) obstruction tensors and the Weyl anomaly in even dimensions, called the dilatation operator method \cite{Anastasiou:2020zwc}. As a consistency check, we also computed the $8d$ Weyl anomaly in the FG gauge using this method. We briefly introduce this method in Appendix \ref{AppC} and show there that the result in $8d$ agrees with what we have in \eqref{8dA} when the Weyl structure is turned off.
\section{Conclusions}
\label{Sec7}
In this work, we first derived the obstruction tensor from the pole at $d=2k$ of the (on-shell) $\gamma^{(2k)}_{\mu\nu}$ in the FG expansion of an AlAdS spacetime using dimensional regularization. Under an appropriate analytic continuation as $d$ approaches an even integer, this approach is equivalent to the one with a logarithmic term in \cite{Fefferman2011,graham2005ambient}. We defined the pole term in the expansion to be Graham's extended obstruction tensor, whose residue is the obstruction tensor (up to a constant factor). Then, after introducing the WFG ansatz, we generalized the Schouten tensor and obstruction tensors in the FG gauge to the Weyl-Schouten tensor and Weyl-obstruction tensors in the WFG gauge, which are Weyl-covariant in any dimension. By solving the bulk Einstein equations, we computed the Weyl-obstruction tensors in $4d$ (i.e.\ the Weyl-Bach tensor) and $6d$ explicitly, and found that they have almost the same form as the corresponding obstruction tensors, with everything Weyl-covariantized and some extra terms due to the Weyl-Schouten tensor not being symmetric. This is a natural manifestation of the fact that the WFG gauge Weyl-covariantizes the boundary geometry. We observed that none of the subleading terms in the expansion of $a_{\mu}$ contributes to the Weyl-obstruction tensors. We also found that when $d=2k$, the Weyl-obstruction tensor ${\cal O}^{(2k)}_{\mu\nu}$ is equivalent to its LC counterpart, and so $a^{(0)}_{\mu}$ does not contribute to the obstruction either. When $d>2k$, the $a^{(0)}_{\mu}$ terms are included in ${\cal O}^{(2k)}_{\mu\nu}$ to make it Weyl-covariant.
\par
As the main result of this paper, we computed the Weyl anomaly in $6d$ and $8d$ in the WFG gauge, using the Weyl-Schouten tensor and extended Weyl-obstruction tensors as the building blocks. The Weyl anomalies shown in \eqref{6dA} and \eqref{8dA} indeed reduce to the corresponding FG results when the Weyl structure $a_\mu$ is turned off, but they are now Weyl-covariant. In addition, we re-expressed the Weyl anomaly in $2d$ and $4d$ in terms of the Weyl-Schouten tensor. By observing the pattern of the Weyl anomaly in different dimensions, we suspect that there exists a general formulation that can generate the holographic Weyl anomaly in any dimension, which will be explored in future work.
\par
In the boundary field theory, both the induced metric $\gamma^{(0)}_{\mu\nu}$ and the Weyl connection $a^{(0)}_\mu$ are non-dynamical background fields. However, only $\gamma^{(0)}_{\mu\nu}$ sources a current operator, namely the energy-momentum tensor, while $a^{(0)}_\mu$ does not source any current, since $a_\mu$ is pure gauge in the bulk. From the Weyl-Ward identity \eqref{holoWeylWard} we can see that the trace of the energy-momentum tensor receives a contribution from $p^{(0)}_\mu$ due to the gauge freedom of WFG; together they can be regarded as an improved energy-momentum tensor $\tilde T_{\mu\nu}$. For non-holographic field theories with a background Weyl geometry, the corresponding Weyl current $J^\mu$ of the Weyl connection need not vanish. The Weyl current in the general case deserves further investigation.
\par
An important corollary of our analysis is that the Weyl structure $a_{\mu}$ only appears as a trivial cocycle in the Weyl anomaly, and thus only contributes cohomologically trivial modifications. From the Weyl anomaly up to $8d$ we can directly see this for the subleading terms of $a_{\mu}$, as they appear only in total derivative terms in $X^{(k)}$. For the leading term $a_{\mu}^{(0)}$ this is less obvious, since it plays the role of the boundary Weyl connection, but one can verify by writing the anomaly in terms of the boundary LC connection that the terms involving $a_{\mu}^{(0)}$ also represent trivial cocycles. This indicates a striking feature of the WFG gauge, namely that $a^{(0)}_\mu$ manages to make the expressions Weyl-covariant without introducing new central charges, which, once again, is consistent with the fact that $a_\mu$ is pure gauge in the bulk. Nonetheless, these cohomologically trivial terms might have significant effects in the presence of corners, i.e.\ spacelike codimension-2 surfaces.\footnote{We thank Rob Leigh and Luca Ciambelli for pointing this out in conversations.} The recent construction proposed in \cite{Ciambelli:2021vnn,Freidel:2021cbc} may be useful for the analysis of these effects.
\par
In this paper we introduced the obstruction tensor and extended obstruction tensor as the pole of $\gamma^{(2k)}_{\mu\nu}$. However, as we have mentioned, they can also be defined using the ambient construction. What we have found but not demonstrated in this paper is that the Weyl-obstruction tensors and extended Weyl-obstruction tensors can be defined in a similar way by promoting the ambient metric to the Weyl-ambient metric. We expect to discuss the Weyl-ambient construction in detail in a future publication.
\par
Finally, although this paper focuses on the holographic Weyl anomaly, we believe that the (Weyl-)Schouten tensor and extended (Weyl-)obstruction tensors can also serve as the building blocks for the Weyl anomaly of other theories in general. How these building blocks can arise in a non-holographic context requires a deeper understanding of the Lorentz-Weyl structure of a frame bundle, which encodes all the local Lorentz and Weyl transformations. To this end, the picture of Atiyah Lie algebroids introduced in \cite{Ciambelli:2021ujl} for gauge theories can be used to organize the Weyl and Lorentz anomalies in a geometric fashion. By means of this geometric picture, we look forward to carrying the holographic results obtained in this paper over to the construction of the Weyl anomaly in the general case.
\section*{Acknowledgements}
We would like to thank Rob Leigh for suggesting the problem and providing constant support to us. We are also grateful to Luca Ciambelli for many valuable discussions and carefully going through our manuscript. This work was partially supported by the U.S. Department of Energy under contract DE-SC0015655.
\begin{appendix}
\section{Weyl Geometry}
\label{WG}
This appendix provides a brief review of Weyl geometry \cite{Folland:1970,Hall:1992}. We mainly introduce the geometric quantities associated with a Weyl connection, as well as some useful relations used in the previous sections. We use $a,b,\cdots$ to label the internal Lorentz indices and $\mu,\nu,\cdots$ to label the spacetime indices. For clarity, we also put $\circ$ on top of LC quantities, e.g.\ $\mathring R^a{}_{bcd}$, $\mathring P_{ab}$, etc.
\par
Given a generalized Riemannian manifold $(M,g)$ with a connection $\nabla$, in an arbitrary basis $\{\underline e_a\}$, the connection coefficients $\Gamma^c{}_{ab}$ are defined as
\begin{align}
\label{conncoef}
\nabla_{\underline e_a}\underline e_b=\Gamma^c{}_{ab}\underline e_c\,.
\end{align}
The torsion tensor and Riemann curvature tensor of $\nabla$ in this basis are given by
\begin{align}
\label{Tor}
T^c{}_{ab}\underline e_c&\equiv \nabla_{\underline e_a}\underline e_b-\nabla_{\underline e_b}\underline e_a-[\underline e_a,\underline e_b]\,,\\
\label{Rie}
R^a{}_{bcd}\underline e_a&\equiv \nabla_{\underline e_c}\nabla_{\underline e_d}\underline e_b-\nabla_{\underline e_d}\nabla_{\underline e_c}\underline e_b-\nabla_{[\underline e_c,\underline e_d]}\underline e_b\,.
\end{align}
When $\nabla$ is associated with $g$ and is torsion-free, it is called a Levi-Civita (LC) connection, denoted by $\mathring{\nabla}$. Using $\mathring{\Gamma}$ to denote the LC connection coefficients, we have $\mathring\nabla_{\underline e_a}\underline e_b=\mathring{\Gamma}^c{}_{ab}\underline e_c$.
By definition, the conditions satisfied by the LC connection coefficients $\mathring{\Gamma}^c{}_{ab}$ are
\begin{align}
\label{NM}
0&=(\mathring{\nabla} g)(\underline e_a,\underline e_b,\underline e_c)=\mathring{\nabla}_{\underline e_c}g(\underline e_a,\underline e_b)-\mathring{\Gamma}^d{}_{ca}g(\underline e_d,\underline e_b)-\mathring{\Gamma}^d{}_{cb}g(\underline e_d,\underline e_a)\,,\\
0&=T^c{}_{ab}=\mathring{\Gamma}^c{}_{ab}-\mathring{\Gamma}^c{}_{ba}-C_{ab}{}^c\,,
\end{align}
where $C_{ab}{}^c$ are the commutation coefficients defined by $[\underline e_a,\underline e_b]=C_{ab}{}^c\underline e_c$. Denote by $g_{ab}\equiv g(\underline e_a,\underline e_b)$ the components of the metric in the frame $\{\underline e_a\}$. From these conditions, $\mathring{\Gamma}^c{}_{ab}$ can be derived as
\begin{align}
\label{LC}
\mathring{\Gamma}^c{}_{ab}=&\,\frac{1}{2}g^{cd}\big(\underline e_a(g_{db})+\underline e_b(g_{ad})-\underline e_d(g_{ab})\big)-\frac{1}{2}g^{cd}(C_{ad}{}^e g_{eb}+C_{bd}{}^e g_{ae}-C_{ab}{}^e g_{ed})\,.
\end{align}
\par
Now we will work in a coordinate basis $\{\underline\partial_\mu\}$.\footnote{Note that $\underline e_a\equiv e_a^\mu\underline\partial_\mu$ and $\bm e^a\equiv e^a_\mu\text{d} x^\mu$ have Weyl weights $+1$ and $-1$ respectively, while $\underline\partial_\mu$ and $\text{d} x^\mu$ have no Weyl weights. This is because the Weyl transformation of the frame only comes from the soldering of the vector bundle associated with the frame bundle to the tangent space of $M$.} Consider a Weyl transformation
\begin{align}
\label{WeylA}
g\to{\cal B}^{-2}g\,.
\end{align}
The metricity tensor $\nabla g$ will not transform covariantly under \eqref{WeylA}. To restore the Weyl covariance, one can introduce a Weyl connection $A=A_\mu\text{d} x^\mu$ which transforms under a Weyl transformation as
\begin{align}
A_\mu\to A_\mu-\nabla_\mu\ln{\cal B}\,.
\end{align}
Then, we obtain an object that is Weyl-covariant:
\begin{align}
(\nabla_\mu g_{\nu\rho}-2A_\mu g_{\nu\rho})\to{\cal B}^{-2}(\nabla_\mu g_{\nu\rho}-2A_\mu g_{\nu\rho})\,.
\end{align}
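Indeed, under \eqref{WeylA} together with the transformation of $A_\mu$, one finds in one line
\begin{align*}
\nabla_\mu({\cal B}^{-2}g_{\nu\rho})-2(A_\mu-\nabla_\mu\ln{\cal B}){\cal B}^{-2}g_{\nu\rho}={\cal B}^{-2}(\nabla_\mu g_{\nu\rho}-2A_\mu g_{\nu\rho})\,,
\end{align*}
where we used $\nabla_\mu{\cal B}^{-2}=-2{\cal B}^{-2}\nabla_\mu\ln{\cal B}$ for the scalar ${\cal B}$.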
More generally, for a tensor $T$ of an arbitrary type (with indices suppressed) that transforms under a Weyl transformation with a specific Weyl weight $\omega_T$, i.e.\ $T\to {\cal B}^{\omega_T}T$, we can define
\begin{align}
\label{WeylD}
\hat\nabla_\mu T\equiv\nabla_\mu T+\omega_T A_\mu T\,.
\end{align}
In this way, $\hat\nabla$ acting on $T$ will also transform Weyl-covariantly, i.e.\ $\hat\nabla_\mu T\to {\cal B}^{\omega_T}\hat\nabla_\mu T$.
\par
Now we choose the connection $\nabla$ by imposing the following Weyl metricity condition:
\begin{align}
0&=\nabla_\mu g_{\nu\rho}-2A_\mu g_{\nu\rho}=\hat\nabla_\mu g_{\nu\rho}\,.
\end{align}
We also require the $\nabla$ defined by the above equation to be torsion-free. With the Weyl metricity condition, the connection coefficients of $\nabla$ in the coordinate basis become
\begin{align}
\label{WeylLC}
\Gamma^\rho{}_{\mu\nu}={}&\frac{1}{2}g^{\rho\sigma}(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\mu\sigma}-\partial_\sigma g_{\mu\nu})-(A_\mu\delta^\rho{}_\nu+A_\nu\delta^\rho{}_\mu-g^{\rho\sigma}A_\sigma g_{\mu\nu})\,.
\end{align}
We can see that this differs from the familiar Christoffel symbols by the extra terms involving the Weyl connection. When $\nabla$ and $\mathring{\nabla}$ act on a vector, their difference is given by
\begin{align}
\label{div_v}
\nabla_\mu v^\nu=\mathring{\nabla}_\mu v^\nu-(A_\mu\delta^\nu{}_\rho+A_\rho\delta^\nu{}_\mu-g^{\nu\sigma}A_\sigma g_{\mu\rho})v^\rho\,.
\end{align}
It is worth noticing that if $v^\nu$ has Weyl weight $d=\dim M$, then it follows from \eqref{WeylD} and \eqref{div_v} that $\hat\nabla_\mu v^\mu=\mathring{\nabla}_\mu v^\mu$.
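Explicitly, setting $\nu=\mu$ in \eqref{div_v} gives $\nabla_\mu v^\mu=\mathring{\nabla}_\mu v^\mu-d\,A_\mu v^\mu$, while \eqref{WeylD} adds back $\omega_v A_\mu v^\mu$; hence for $\omega_v=d$,
\begin{align*}
\hat\nabla_\mu v^\mu=\nabla_\mu v^\mu+d\,A_\mu v^\mu=\mathring{\nabla}_\mu v^\mu\,.
\end{align*}
This is the property used in the main text for divergences such as $\hat\nabla\cdot a^{(2)}$.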
\par
Now one can compute the Riemann tensor of $\nabla$ and its contractions. Denoting the coordinate components of the Riemann tensor of $\mathring{\nabla}$ as $\mathring R^\mu{}_{\nu\rho\sigma}$, one finds from \eqref{Rie} that
\begin{align}
\label{WRiem}
R^\mu{}_{\nu\rho\sigma}={}&\mathring{R}^\mu{}_{\nu\rho\sigma}+\mathring{\nabla}_\sigma A_\nu\delta^\mu{}_\rho-\mathring{\nabla}_\rho A_\nu\delta^\mu{}_\sigma
+(\mathring{\nabla}_\sigma A_\rho-\mathring{\nabla}_\rho A_\sigma)\delta^\mu{}_\nu
+\mathring{\nabla}_\rho A^\mu g_{\nu\sigma}-\mathring{\nabla}_\sigma A^\mu g_{\nu\rho}\nonumber\\
&+A_\nu(A_\sigma\delta^\mu{}_\rho-A_\rho\delta^\mu{}_\sigma)
+A^\mu(g_{\nu\sigma}A_\rho-g_{\nu\rho}A_\sigma)
+A^2(g_{\nu\rho}\delta^\mu{}_\sigma-g_{\nu\sigma}\delta^\mu{}_\rho)\,,\\
R_{\mu\nu}={}&\mathring{R}_{\mu\nu}-\frac{d}{2}F_{\mu\nu}+(d-2)(\mathring{\nabla}_{(\mu}A_{\nu)}+A_\mu A_\nu)+(\mathring{\nabla}\cdot A-(d-2)A^2)g_{\mu\nu}\,,\\
\label{WRic}
R={}&\mathring{R}+2(d-1)\mathring{\nabla}\cdot A-(d-1)(d-2)A^2\,,
\end{align}
where $R_{\mu\nu}\equiv R^\rho{}_{\mu\rho\nu}$, $R\equiv R_{\mu\nu}g^{\mu\nu}$, and we defined the curvature of $A_\mu$ as $F_{\mu\nu}=\mathring{\nabla}_{\mu}A_{\nu}-\mathring{\nabla}_{\nu}A_{\mu}$. It is easy to see from \eqref{WRiem} that, unlike $\mathring R^\mu{}_{\nu\rho\sigma}$, the $R^\mu{}_{\nu\rho\sigma}$ of $\nabla$ is no longer antisymmetric in the first two indices, and it does not have the interchange symmetry between the two index pairs. Also, the $R_{\mu\nu}$ of $\nabla$ is not symmetric, due to the appearance of the $F_{\mu\nu}$ term.
\par
On the other hand, from \eqref{conncoef} we have the connection coefficients $\hat\Gamma^c{}_{ab}$ for $\hat\nabla$:
\begin{align}
\hat\Gamma^c{}_{ab}\underline e_c=\hat\nabla_{\underline e_a}\underline e_b=\nabla_{\underline e_a}\underline e_b+A(\underline e_a)\underline e_b=\Gamma^c{}_{ab}\underline e_c+A(\underline e_a)\underline e_b\,,
\end{align}
where we used the fact that the basis vector $\underline e_a$ has Weyl weight $+1$. Plugging this into \eqref{Rie}, we find that the Riemann tensor of $\hat\nabla$ and its contractions satisfy
\begin{align}
\label{hatR}
\hat R^\mu{}_{\nu\rho\sigma}=&R^\mu{}_{\nu\rho\sigma}+\delta^\mu{}_\nu F_{\rho\sigma}\,,\qquad\hat R_{\mu\nu}=R_{\mu\nu}+F_{\mu\nu}\,,\qquad\hat R=R\,.
\end{align}
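A quick way to see \eqref{hatR}: in a coordinate basis the relation above reads $\hat\Gamma^\rho{}_{\mu\nu}=\Gamma^\rho{}_{\mu\nu}+A_\mu\delta^\rho{}_\nu$, so the Weyl connection enters $\hat\nabla$ as an abelian connection times the identity matrix. Its commutator terms cancel, and the curvature simply shifts by
\begin{align*}
\partial_\rho(A_\sigma\delta^\mu{}_\nu)-\partial_\sigma(A_\rho\delta^\mu{}_\nu)=\delta^\mu{}_\nu F_{\rho\sigma}\,;
\end{align*}
contracting then gives $\hat R_{\mu\nu}=R_{\mu\nu}+F_{\mu\nu}$ and $\hat R=R+g^{\mu\nu}F_{\mu\nu}=R$, since $F_{\mu\nu}$ is antisymmetric.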
We refer to $\hat R^\mu{}_{\nu\rho\sigma}$, $\hat R_{\mu\nu}$ and $\hat R$ as the Weyl-Riemann tensor, Weyl-Ricci tensor, and Weyl-Ricci scalar, respectively.\footnote{Note that this is different from \cite{Ciambelli:2019bzz}, in which the quantities defined using $\nabla$ instead of $\hat\nabla$ are called Weyl quantities.} Similar to the curvature tensors for $\nabla$, the Weyl-Riemann tensor is not antisymmetric in the first two indices and does not have the interchange symmetry for the two index pairs, and the Weyl-Ricci tensor is not symmetric. Also notice that the Weyl-Weyl tensor, namely the traceless part of the Weyl-Riemann tensor, is equal to the LC Weyl tensor, i.e.
\begin{align}
\hat W^\mu{}_{\nu\rho\sigma}=\mathring W^\mu{}_{\nu\rho\sigma}\,.
\end{align}
\par
Unlike the LC curvature quantities, which transform in a non-covariant way under the Weyl transformation, the Weyl-Riemann tensor, Weyl-Ricci tensor, and Weyl-Ricci scalar transform under the Weyl transformation as
\begin{align}
\label{hatRtrans}
\hat R^\mu{}_{\nu\rho\sigma}\to\hat R^\mu{}_{\nu\rho\sigma}\,,\qquad\hat R_{\mu\nu}\to\hat R_{\mu\nu}\,,\qquad\hat R\to{\cal B}^2\hat R\,.
\end{align}
Furthermore, we can define the Weyl-Schouten tensor $\hat P_{\mu\nu}$ and the Weyl-Cotton tensor $\hat C_{\mu\nu\rho}$ as
\begin{align}
\label{WP1}
\hat P_{\mu\nu}&=\frac{1}{d-2}\bigg(\hat R_{\mu\nu}-\frac{1}{2(d-1)}\hat Rg_{\mu\nu}\bigg)\,,\\
\label{WC1}
\hat C_{\mu\nu\rho}&=\hat\nabla_{\rho}\hat P_{\mu\nu}-\hat\nabla_{\nu}\hat P_{\mu\rho}\,.
\end{align}
Although the LC Schouten tensor $\mathring P_{\mu\nu}$ defined by substituting $\hat R_{\mu\nu}$ and $\hat R$ in \eqref{WP1} with $R_{\mu\nu}$ and $R$ is a symmetric tensor, $\hat P_{\mu\nu}$ has an antisymmetric part $\hat P_{[\mu\nu]}=-F_{\mu\nu}/2$. In terms of the LC connection, the Bach tensor is defined by (the indices of the components are raised and lowered by $g$)
\begin{align}
\mathring B_{\mu\nu}=\mathring{\nabla}^\rho\mathring{\nabla}_\rho \mathring P_{\mu\nu}-\mathring{\nabla}^\rho\mathring{\nabla}_{\nu}\mathring P_{\mu\rho}-\mathring W_{\sigma\nu\mu\rho}\mathring P^{\rho\sigma}\,,
\end{align}
which satisfies $\mathring B_{\mu\nu}\to{\cal B}^2\mathring B_{\mu\nu}$ in $4d$. Now we can define the Weyl-Bach tensor
\begin{align}
\label{WB1}
\hat B_{\mu\nu}=\hat\nabla^\rho\hat\nabla_\rho \hat P_{\mu\nu}-\hat\nabla^\rho\hat\nabla_{\nu}\hat P_{\mu\rho}-\hat W_{\sigma\nu\mu\rho}\hat P^{\rho\sigma}\,.
\end{align}
Similar to the LC Bach tensor, the Weyl-Bach tensor is also symmetric and traceless; however, it is Weyl-covariant in any dimension. Following \eqref{WRiem}--\eqref{WRic}, here we list the above-mentioned Weyl quantities in terms of their corresponding LC quantities:
\begin{align}
\label{hatP}
\hat P_{\mu\nu}&=\mathring P_{\mu\nu}+\mathring{\nabla}_\nu A_{\mu}+A_{\mu}A_{\nu}-\frac{1}{2}A^2g_{\mu\nu}\,,\\
\hat C_{\mu\nu\rho}&=\mathring C_{\mu\nu\rho}-A_\sigma\mathring W^\sigma{}_{\mu\rho\nu}\,,\\
\hat B_{\mu\nu}&=\mathring{B}_{\mu\nu}+(d-4)(A^\rho\mathring{C}_{\rho\nu\mu}-2A^\rho\mathring{C}_{\mu\nu\rho}+A^\rho A^\sigma \mathring W_{\sigma\mu\rho\nu})\,.
\end{align}
\par
The Bianchi identity for $\hat\nabla$ reads
\begin{align}
\hat\nabla_\mu \hat R^\lambda{}_{\nu\rho\sigma}+\hat\nabla_\rho\hat R^\lambda{}_{\nu\sigma\mu}+\hat\nabla_\sigma\hat R^\lambda{}_{\nu\mu\rho}=0\,.
\end{align}
Noticing that $\hat\nabla_\mu g_{\nu\rho}=0$, the contraction of the above equation gives
\begin{align}
\label{BI}
\hat\nabla^\mu\hat G_{\mu\nu}=0\,,
\end{align}
where we defined the Weyl-Einstein tensor $\hat G_{\mu\nu}\equiv \hat R_{\mu\nu}-\frac{1}{2}\hat Rg_{\mu\nu}$. Using \eqref{WP1}, this identity can also be expressed using the Weyl-Schouten tensor as
\begin{align}
\label{BIP}
\hat\nabla^\mu\hat P_{\mu\nu}=\hat\nabla_\nu\hat P\,,
\end{align}
where $\hat P$ is the trace of $\hat P_{\mu\nu}$. Starting from \eqref{WB1} and using \eqref{BIP} repeatedly, one obtains
\begin{align}
\label{nablaB}
\hat\nabla^\mu\hat B_{\mu\nu}=(d-4)\hat P^{\mu\rho}(\hat C_{\rho\mu\nu}+\hat C_{\nu\mu\rho})\,.
\end{align}
Note that since $\mathring P$ is symmetric, the above equation in the LC case becomes
\begin{align}
\mathring{\nabla}^\mu\mathring B_{\mu\nu}=(d-4)\mathring P^{\mu\rho}\mathring C_{\rho\mu\nu}\,.
\end{align}
It is also useful to notice that in the LC case, the divergence of the Cotton tensor vanishes
\begin{align}
\label{divC}
\mathring{\nabla}^\mu\mathring C_{\mu\nu\rho}=0\,,
\end{align}
while for the Weyl-Cotton tensor we have instead
\begin{align}
\hat\nabla^\mu\hat C_{\mu\nu\rho}=\hat W_{\sigma\rho\lambda\nu}F^{\sigma\lambda}\,.
\end{align}
To conclude this appendix, we list in Table \ref{table1} the Weyl weights of the above-mentioned Weyl quantities:
\begin{table}[!htbp]
\centering
\caption{Weyl weights of Weyl-covariant quantities}
\begin{tabular}{ccccccccccc}
\toprule
$\underline e_a$ & $\bm e^a$ & $g_{\mu\nu}$ & $g^{\mu\nu}$ &$\hat R^\mu{}_{\nu\rho\sigma}$ & $\hat R_{\mu\nu}$ & $\hat R$ & $F_{\mu\nu}$& $\hat P_{\mu\nu}$ &$\hat C_{\mu\nu\rho}$ &$\hat B_{\mu\nu}$\\
\midrule
$+1$ & $-1$ & $-2$ & $+2$ &$0$ & $0$&$+2$ & $0$& $0$ &$0$ &$+2$\\
\bottomrule
\end{tabular}
\label{table1}
\end{table}
\section{Solving the Bulk Einstein Equations}
\label{AppB0}
To solve for $\gamma^{(2k)}_{\mu\nu}$ in the WFG gauge from the Einstein equations, we first introduce the following notation:
\begin{align}
\varphi_\mu&\equiv D_za_\mu\,,\qquad f_{\mu\nu}\equiv D_\mu a_\nu-D_\nu a_\mu\,,\qquad\rho_{\mu\nu}\equiv\frac{1}{2}D_zh_{\mu\nu}\,,\qquad\theta\equiv\text{tr}\rho\,,\nonumber\\
\label{quantities}
\psi_{\mu\nu}&\equiv\rho_{\mu\nu}+\frac{L}{2}f_{\mu\nu}\,,\qquad
\gamma^\lambda{}_{\mu\nu}\equiv\Gamma^\lambda{}_{\mu\nu}=\frac{1}{2}h^{\lambda\rho}( D_\mu h_{\rho\nu} +D_\nu h_{\mu\rho} - D_\rho h_{\nu\mu})\,.
\end{align}
Since the integral curves of $\underline D_z$ form a congruence, some of these quantities can be interpreted as properties of this congruence: $\varphi^\mu$ is the acceleration, $f_{\mu\nu}$ is the twist, $\theta$ is the expansion, and $\sigma_{\mu\nu}\equiv\rho_{\mu\nu}-\frac{1}{d}\theta h_{\mu\nu}$ is the shear. By plugging in the expansions \eqref{hex} and \eqref{aex}, one can obtain the expansions of the quantities above. A list of these expansions, sufficient for capturing the first two leading orders of the Einstein equations, can be found in the Appendix of \cite{Ciambelli:2019bzz}.
\par
Using the connection coefficients $\Gamma^\lambda{}_{\mu\nu}$ in the bulk, one can compute the curvature tensors and the Einstein tensor. Then, the vacuum Einstein equations can be written as
\begin{align}
\label{eomzz}
0&=G_{zz}+g_{zz}\Lambda=-\frac{1}{2}\text{tr}(\rho\rho)-\frac{3L^2}{8}\text{tr}(ff)-\frac{1}{2}\bar{R}+\frac{1}{2}\theta^2+\Lambda\,,\\
\label{eomzm}
0&=G_{z\mu}+g_{z\mu}\Lambda=\nabla_\nu\psi^\nu{}_\mu-D_\mu\theta+L^2f_{\nu\mu}\varphi^\nu\,,\\
\label{eommn}
0&=G_{\mu\nu}+g_{\mu\nu}\Lambda=\bar{G}_{\mu\nu}-(D_z+\theta)\psi_{\mu\nu}-L\nabla_\nu\varphi_\mu+2\rho_{\nu\rho}\rho^\rho{}_\mu+\frac{L^2}{2}f_{\nu\rho}f^\rho{}_\mu-L^2\varphi_\mu\varphi_\nu\nonumber\\
&\qquad\qquad\qquad\qquad+h_{\mu\nu}\left(L\nabla_\rho\varphi^\rho+D_z\theta+\frac{1}{2}\text{tr}(\rho\rho)-\frac{L^2}{8}\text{tr}(ff)+L^2\varphi^2+\frac{1}{2}\theta^2+\Lambda\right)\,,
\end{align}
where $\Lambda=-\frac{d(d-1)}{2L^2}$ is the cosmological constant, and $\bar{R}=h^{\mu\nu}\bar R_{\mu\nu}$ with
\begin{align}
\bar{R}_{\mu\nu}=D_\rho\gamma^\rho{}_{\nu\mu}-D_\nu\gamma^\rho{}_{\rho\mu}+\gamma^{\rho}{}_{\rho\sigma}\gamma^{\sigma}{}_{\nu\mu}-\gamma^{\rho}{}_{\nu\sigma}\gamma^{\sigma}{}_{\rho\mu}\,.
\end{align}
Denote $m_{(2k)\nu}^{\mu}\equiv\gamma_{(0)}^{\mu\rho}\gamma^{(2k)}_{\rho\nu}$ and $n_{(2k)\nu}^{\mu}\equiv\gamma_{(0)}^{\mu\rho}\pi^{(2k)}_{\rho\nu}$. Expanding \eqref{eomzz}--\eqref{eommn} using \eqref{hex} and \eqref{aex}, one can solve the Einstein equations order by order. First, the $zz$-component of the Einstein equations gives
\begin{align}
0={}&\bigg[\frac{d(d-1)}{2L^2}+\Lambda\bigg]-\frac{z^2}{L^2}\bigg[\frac{R^{(0)}}{2}+\frac{d-1}{L^2}X^{(1)}\bigg]+\frac{z^4}{L^4}\bigg[\frac{d}{2L^2}(X^{(1)})^2-\frac{2(d-1)}{L^2}X^{(2)}-\frac{1}{2L^2}\text{tr}(m_{(2)}^2)\nonumber\\
&-\frac{3L^2}{8}\text{tr}(f_{(0)}^2)-\frac{1}{2}\Big(\gamma_{(0)}^{\lambda\nu}\hat\nabla^{(0)}_\lambda\hat\nabla_\mu \big(m_{(2)}{}^{\mu}{}_{\nu}-\text{tr} (m_{(2)})\delta^\mu{}_\nu\big)
+2(d-1)\hat\nabla\cdot a^{(2)}
-\text{tr}\big(m_{(2)}\gamma_{(0)}^{-1}R^{(0)}\big)\Big)\bigg]\nonumber\\
&+\cdots-\frac{z^d}{L^d}(d-1)\bigg[\frac{d}{2L^2}Y^{(1)}+\hat\nabla\cdot p_{(0)}\bigg]+\cdots\,,
\end{align}
where $X^{(1)}$, $X^{(2)}$ and $Y^{(1)}$ are given in expansion \eqref{sqrth}, which can be expressed in terms of the expansion of $h_{\mu\nu}$ as
\begin{align}
\label{XY}
X^{(1)}&=\text{tr}(m_{(2)})\,,\quad X^{(2)}=\text{tr}(m_{(4)})-\frac{1}{2}\text{tr}(m_{(2)}^2)+\frac{1}{4}\left(\text{tr}(m_{(2)})\right)^2\,,\,\cdots\,,Y^{(1)}=\text{tr}(n_{(0)})\,,\,\cdots\,.
\end{align}
At the $O(1)$-order, the $zz$-equation is trivially satisfied, and at the $O(z^2)$-order, we can find that
\begin{align}
X^{(1)}=-\frac{L^2}{2(d-1)} R^{(0)}=-L^2\hat P\,.
\end{align}
Then, using the above result we can obtain from the $O(z^4)$-order that
\begin{align}
X^{(2)}&=-\frac{1}{4}\text{tr}(m_{(2)}^2)+\frac{1}{4}(X^{(1)})^2-\frac{L^2}{2}\hat\nabla\cdot a^{(2)}-\frac{L^4}{16}\text{tr}(f^{(0)}f^{(0)})\nonumber\\
&=-\frac{L^4}{4}\text{tr}(\hat P^2)+\frac{L^4}{4}\hat P^2-\frac{L^2}{2}\hat\nabla\cdot a^{(2)}\,,
\end{align}
where we used \eqref{Pgf}. Also notice that the $O(z^d)$-order gives the Weyl-Ward identity
\begin{align}
0=\frac{d}{2L^2}Y^{(1)}+\hat\nabla\cdot p_{(0)}\,.
\end{align}
\par
Now we look at the $\mu\nu$-components of the Einstein equations:
\begin{align}
0={}&\bigg[G^{(0)}_{\mu\nu}+\frac{d}{2}f^{(0)}_{\mu\nu}-\frac{d-2}{L^2}X^{(1)}\gamma^{(0)}_{\mu\nu}+\frac{d-2}{L^2}\gamma^{(2)}_{\mu\nu}\bigg]+\frac{z^2}{L^2}\bigg[\frac{1}{2}\hat\nabla_\lambda\Big(\gamma_{(0)}^{\lambda\xi}\Big(\hat\nabla_\nu \gamma^{(2)}_{\xi\mu}+\hat\nabla_\mu \gamma^{(2)}_{\xi\nu}-\hat\nabla_\xi \gamma^{(2)}_{\mu\nu}\Big)\Big)\nonumber\\
&-\frac{1}{2}\gamma^{(0)}_{\mu\nu}\hat\nabla_\alpha\hat\nabla_\beta \Big(\gamma_{(2)}^{\alpha\beta}-X^{(1)}\gamma_{(0)}^{\alpha\beta}\Big)-\frac{1}{2}\hat\nabla_{(\mu}\hat\nabla_{\nu)} X^{(1)}+(d-4)\big(\hat\nabla^{(0)}_{(\mu} a_{\nu)}^{(2)}-\gamma_{\mu\nu}^{(0)}\hat\nabla\cdot a^{(2)}\big)\nonumber\\
\label{emn}
&+\frac{2(d-4)}{L^{2}}\gamma^{(4)}_{\mu\nu}+\frac{2}{L^{2}}m_{(2)}^\rho{}_\mu\gamma^{(2)}_{\rho\nu}+\frac{L^2}{2}f^{(0)}_{\nu\rho}f^{(0)}_{\sigma\mu}\gamma_{(0)}^{\sigma\rho}+\Big(\frac{1}{2}\text{tr}(m_{(2)}\gamma_{(0)}^{-1}{R}^{(0)})-\frac{L^2}{8}\text{tr}(f^{(0)}f^{(0)})\nonumber\\
&-\frac{2(d-4)}{L^2}X^{(2)}+\frac{d-3}{2L^2}(X^{(1)})^2+\frac{1}{2L^2}\text{tr}(m_ {(2)}^2)\Big)\gamma^{(0)}_{\mu\nu}\bigg]+\cdots\,.
\end{align}
Note that $\gamma_{(2)}^{\mu\nu}\equiv(\gamma_{(0)}^{-1}\gamma^{(2)}\gamma_{(0)}^{-1})^{\mu\nu}$ is not the inverse of $\gamma^{(2)}_{\mu\nu}$. Plugging in the results we got from the $zz$-equation, we obtain from the first two leading orders of \eqref{emn} that
\begin{align}
\gamma^{(2)}_{\mu\nu}={}&-\frac{L^2}{d-2}\bigg(R^{(0)}_{(\mu\nu)}-\frac{1}{2(d-1)}R^{(0)}\gamma^{(0)}_{\mu\nu}\bigg)\,,\\
\gamma^{(4)}_{\mu\nu}={}&-\frac{L^2}{4(d-4)}\bigg(2\hat\nabla_\lambda\hat\nabla_{(\mu} m_{(2)}{}^{\lambda}{}_{\nu)}
-\hat\nabla\cdot\hat\nabla \gamma^{(2)}_{\mu\nu}-\hat\nabla_{(\mu}\hat\nabla_{\nu)} X^{(1)}-\frac{1}{L^2}\gamma^{(0)}_{\mu\nu}\text{tr}(m_{(2)}^2)+\frac{4}{L^2}m_{(2)}^\rho{}_\mu\gamma^{(2)}_{\rho\nu}\nonumber\\
&\qquad\qquad\qquad+L^2f^{(0)}_{\nu\rho}f^{(0)}_{\sigma\mu}\gamma_{(0)}^{\sigma\rho}-\frac{L^2}{4}\text{tr}(f^{(0)}f^{(0)})\gamma^{(0)}_{\mu\nu}\bigg)-\frac{L^2}{2}\hat\nabla^{(0)}_{(\mu} a_{\nu)}^{(2)}\,.
\end{align}
\par
Furthermore, expanding \eqref{emn} to the $O(z^4)$-order one obtains
\begin{align}
\gamma^{(6)}_{\mu\nu}=&-\frac{L^2}{3(d-6)}\bigg[\hat\nabla_\lambda\hat\gamma^\lambda_{(4)\mu\nu}-\frac{1}{2}\hat\nabla_{(\mu}\hat\nabla_{\nu)} \text{tr}(m_{(4)})-\hat\nabla_\lambda(\hat\gamma^\sigma_{(2)\mu\nu}{m}_{(2)}^\lambda{}_\sigma)+\hat\nabla_{(\nu}(\hat\gamma^\sigma_{(2)\mu)\lambda}{m}_{(2)}^\lambda{}_\sigma)\nonumber\\
&+\frac{1}{2}\hat\nabla_\sigma X^{(1)}\hat\gamma^\sigma_{(2)\mu\nu}-\hat\gamma^\sigma_{(2)\mu\lambda}\hat\gamma^\lambda_{(2)\sigma\nu}-\frac{2}{L^2}(m_{(2)}^3)^\rho{}_\nu\gamma^{(0)}_{\mu\rho}+\frac{8}{L^2}\gamma^{(4)}_{\rho(\mu}m_{(2)}^\rho{}_{\nu)}-\frac{1}{L^2}\gamma^{(4)}_{\mu\nu}X^{(1)}\nonumber\\
&-\frac{L^2}{2}f^{(0)}_{\sigma\mu}f^{(0)}_{\nu\rho}\gamma_{(2)}^{\rho\sigma}+L^2f^{(2)}_{\sigma(\mu}f^{(0)}_{\nu)\rho}\gamma_{(0)}^{\rho\sigma}-\frac{1}{L^2}\gamma^{(0)}_{\mu\nu}\Big(\text{tr}(m_{(4)}m_{(2)})-\frac{1}{2}\text{tr}(m_{(2)}^3)-\frac{L^4}{8}\text{tr}(m_{(2)} f_{(0)}^2)\nonumber\\
&-\frac{L^4}{4}\hat\nabla_\rho a^{(2)}_\sigma f_{(0)}^{\rho\sigma}-\frac{L^2}{4}\hat\nabla_\sigma X^{(1)} a_{(2)}^\sigma+\frac{L^2}{2}\hat\nabla_\lambda(\gamma_{(2)}^{\lambda\rho} a^{(2)}_\rho)\Big)+2\hat\nabla_\lambda({m}_{(2)}^\lambda{}_{(\nu} a^{(2)}_{\mu)})-2\gamma^{(0)}_{\sigma(\nu}\hat\gamma^\sigma_{(2)\mu)\lambda}a_{(2)}^\lambda\nonumber\\
&-a^{(2)}_{(\nu}\hat\nabla_{\mu)} X^{(1)}-\hat\nabla_{(\nu}(X^{(1)} a^{(2)}_{\mu)})\bigg]-\frac{L^2}{3}\hat\nabla_{(\mu}a^{(4)}_{\nu)}-L^2a^{(2)}_{\mu}a^{(2)}_{\nu}+\frac{L^2}{6}a^{(2)}\cdot a^{(2)}\gamma^{(0)}_{\mu\nu}+\frac{L^2}{3}\hat\gamma^\lambda_{(2)\mu\nu}a^{(2)}_{\lambda}\,,
\end{align}
where $f^{(2)}_{\sigma\mu}\equiv\hat\nabla_\sigma a^{(2)}_\mu-\hat\nabla_\mu a^{(2)}_\sigma$, and
\begin{align}
\hat\gamma^\lambda_{(2)\mu\nu}&=\frac{1}{2}\gamma_{(0)}^{\lambda\rho}(\hat\nabla^{(0)}_\mu\gamma^{(2)}_{\nu\rho}+\hat\nabla^{(0)}_\nu\gamma^{(2)}_{\mu\rho}-\hat\nabla^{(0)}_\rho\gamma^{(2)}_{\mu\nu})=-\frac{L^2}{2}(\hat\nabla^{(0)}_\mu \hat P^\lambda{}_{\nu}+\hat\nabla^{(0)}_\nu\hat P_\mu{}^\lambda-\hat\nabla_{(0)}^\lambda\hat P_{\mu\nu})\,.
\end{align}
(In the second step we used $\hat\nabla^{(0)}_\mu f^{(0)}_{\nu\rho}+\hat\nabla^{(0)}_\nu f^{(0)}_{\rho\mu}+\hat\nabla^{(0)}_\rho f^{(0)}_{\mu\nu}=0$.) The $\gamma^{(4)}_{\mu\nu}$ and $\gamma^{(6)}_{\mu\nu}$ above can be organized into \eqref{g4} and \eqref{g6}, respectively.
\par
Finally, the $z\mu$-component of the Einstein equations gives
\begin{align}
0=&-\frac{L}{d-2}\frac{z^2}{L^2}\gamma_{(0)}^{\alpha\beta}\hat\nabla_\alpha^{(0)}\hat G^{(0)}_{\beta\mu}+L^{-1}\frac{z^4}{L^4}\bigg[\hat\nabla_\alpha\big (2m_{(4)\mu}^\alpha-(m_{(2)}^2)^\alpha{}_\mu\big)+\frac{1}{2}m_{(2)\mu}^\alpha\hat\nabla_\alpha X^{(1)}\nonumber\\
&+\frac{L^2}{2}\bigg(\hat\nabla\cdot\hat\nabla a_\mu^{(2)}-\hat\nabla_\mu\hat\nabla\cdot a^{(2)}+(R^{(0)}_{\beta\mu}+4f^{(0)}_{\beta\mu})\gamma_{(0)}^{\alpha\beta}a^{(2)}_\alpha-\hat\nabla_\alpha\big(f^{(0)}_{\beta\mu}m_{(2)\rho}^\alpha\gamma_{(0)}^{\rho\beta}\big)\nonumber\\
&-f^{(0)}_{\nu\rho}\gamma_{(0)}^{\alpha\nu}\hat\nabla_\alpha m_{(2)}^\rho{}_\mu+\frac{1}{2}f^{(0)}_{\beta\mu}\gamma_{(0)}^{\alpha\beta}\hat\nabla_\alpha X^{(1)}\bigg)-2\hat\nabla_\mu X^{(2)}+\frac{1}{2}\hat\nabla_\mu (X^{(1)})^2-\frac{1}{4}\hat\nabla_\mu\text{tr}(m_{(2)}^2)
\bigg]+\cdots\nonumber\\
\label{emz}
&+\frac{z^d}{L^d}\bigg[\frac{d}{2L}\hat\nabla_\alpha n_{(0)\mu}^\alpha+\frac{L}{2}(\hat\nabla\cdot\hat\nabla p_\mu^{(0)}+\hat\nabla_\alpha\hat\nabla_\mu p_{(0)}^\alpha)\bigg]+\cdots\,.
\end{align}
One can observe that the $O(z^2)$-order of the above equation is exactly the contraction of the Weyl-Bianchi identity shown in \eqref{BI}. By plugging in the results obtained from the $zz$-equation, the $O(z^4)$-order can be organized into the identity \eqref{divWB}, which expresses the divergence of the Weyl-Bach tensor. Also, the $O(z^d)$-order gives the conservation law of the improved energy-momentum tensor defined in \eqref{improvedT}.
\section{Expansions of the Raychaudhuri Equation and $\sqrt{-\det h}$}
\label{AppB}
Using the components of the Einstein equations \eqref{eomzz}--\eqref{eommn}, one can construct the following equation \cite{Ciambelli:2019bzz}:
\begin{align}
\label{eqa}
0&=\frac{g^{MN}(G_{MN}+\Lambda g_{MN})}{d-1}+(G_{zz}+\Lambda g_{zz})\nonumber\\
&=D_z\theta+L\nabla_\nu\varphi^\nu+L^2\varphi^2+\text{tr}(\rho\rho)+\frac{L^2}{4}\text{tr}(ff)-\frac{d}{L^2}\,,
\end{align}
where the indices $M,N$ run over the bulk coordinates, $M=(z,\mu)$. This equation can be recognized as the Raychaudhuri equation of the congruence generated by $\underline D_z$. Expanding each term in the above equation, we can write down a general expansion of this equation to any order. This combination of the components of the Einstein equations contains all the information we need for deriving $X^{(k)}$. Here we provide some details of the derivation of $X^{(3)}$ and $X^{(4)}$ by means of the Raychaudhuri equation.
\par
First, it is useful to expand the inverse of $h_{\mu\nu}$:
\begin{align}
\label{hinv}
h^{\mu\nu}(z;x)&=\frac{z^2}{L^2}\left[\gamma_{(0)}^{\mu\nu}(x)+\frac{z^2}{L^2}\gamma_{(2)}^{\mu\nu}(x)+\cdots\right]+\frac{z^{d+2}}{L^{d+2}}\left[\pi_{(0)}^{\mu\nu}(x)+\frac{z^2}{L^2}\pi_{(2)}^{\mu\nu}(x)+\cdots\right]\\
&=\frac{z^2}{L^2}\left[\gamma_{(0)}^{\mu\nu}(x)-\frac{z^2}{L^2}\tilde{m}_{(2)\rho}^{\mu}\gamma_{(0)}^{\rho\nu}(x)-\frac{z^4}{L^4}\tilde{m}_{(4)\rho}^{\mu}\gamma_{(0)}^{\rho\nu}(x)+\cdots\right]+\frac{z^{d+2}}{L^{d+2}}\left[\tilde{n}_{(2)\rho}^{\mu}\gamma_{(0)}^{\rho\nu}(x)+\cdots\right]\,,\nonumber
\end{align}
where $\tilde{m}_{(2k)\nu}^{\mu}\equiv-\gamma_{(2k)}^{\mu\rho}\gamma^{(0)}_{\rho\nu}$, $\tilde{n}_{(2k)\nu}^{\mu}\equiv-\pi_{(2k)}^{\mu\rho}\gamma^{(0)}_{\rho\nu}$. The above expansion can be solved order by order in terms of $m_{(2k)\nu}^{\mu}$ and $n_{(2k)\nu}^{\mu}$:
\begin{align}
\gamma_{(0)}^{\mu\nu}&=(\gamma^{(0)}_{\mu\nu})^{-1}\,,\qquad \tilde m_{(2)\nu}^{\mu}=m_{(2)\nu}^{\mu}\,,\qquad\tilde m_{(4)\nu}^{\mu}=m_{(4)\nu}^{\mu}-m_{(2)\rho}^{\mu}m_{(2)\nu}^{\rho}\,,\qquad\cdots\\
\tilde n_{(0)\nu}^{\mu}&=n_{(0)\nu}^{\mu}\,,\qquad\tilde n_{(2)\nu}^{\mu}=n_{(2)\nu}^{\mu}-m_{(2)\rho}^{\mu}n_{(0)\nu}^{\rho}-n_{(0)\rho}^{\mu}m_{(2)\nu}^{\rho}\,,\qquad\cdots.\nonumber
\end{align}
Also by taking the inverse of the metric, one finds the following relation:
\begin{align}
m_{(2p)}-\tilde{m}_{(2p)}=\sum^{p-1}_{k=1}\tilde{m}_{(2k)}m_{(2p-2k)}\,.
\end{align}
Specifically, we have
\begin{align}
m_{(2)}-\tilde{m}_{(2)}=0\,,\qquad m_{(4)}-\tilde{m}_{(4)}=m^2_{(2)}\,,\qquad m_{(6)}-\tilde{m}_{(6)}=m_{(2)}m_{(4)}+\tilde m_{(4)}m_{(2)}\,.
\end{align}
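These relations are nothing but the statement $h^{\mu\rho}h_{\rho\nu}=\delta^\mu{}_\nu$ order by order. Writing $x\equiv z^2/L^2$ and treating the coefficients as matrices (with $\delta$ denoting the identity), the first series of \eqref{hex} and \eqref{hinv} combine into
\begin{align*}
\Big(\delta-\sum_{k\geqslant1}\tilde{m}_{(2k)}x^k\Big)\Big(\delta+\sum_{k\geqslant1}m_{(2k)}x^k\Big)=\delta\,,
\end{align*}
and collecting the coefficient of $x^p$ reproduces $m_{(2p)}-\tilde{m}_{(2p)}=\sum^{p-1}_{k=1}\tilde{m}_{(2k)}m_{(2p-2k)}$.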
Now we expand the quantities defined in \eqref{quantities} to an arbitrary order by plugging the expansions \eqref{hex}, \eqref{aex} and \eqref{hinv} into their definitions. For the purpose of finding the Weyl anomaly, here we only keep the $m_{(2p)}$ and $a_{(2p)}$ terms in the first series of $h_{\mu\nu}$ and $a_\mu$ and neglect the $n_{(2p)}$ and $p_{(2p)}$ terms. The expansions of these quantities are
\begin{align}
\label{exp1}
\rho^\mu{}_\nu=&-\delta^\mu{}_\nu+\frac{1}{2}\sum_{p=1}^\infty\left(\frac{z}{L}\right)^{2p}\Big[p(m_{(2p)}+\tilde{m}_{(2p)})+\sum_{k=1}^{p-1}(2k-p)\tilde{m}_{(2k)}m_{(2p-2k)}\Big]^\mu{}_\nu+O(z^d)\,,\\
\theta=&-\frac{d}{L}+\frac{1}{2L}\sum^{\infty}_{p=1}\bigg(\frac{z}{L}\bigg)^{2p}\bigg[p\text{tr}(m_{(2p)}+\tilde{m}_{(2p)})+\sum^{p-1}_{k=1}(2k-p)\text{tr}\tilde{m}_{(2k)}m_{(2p -2k)}\bigg]+O(z^d)\,,
\end{align}
\begin{align}
\varphi_\mu={}&\frac{1}{L}\sum^\infty_{p=0}\left(\frac{z}{L}\right)^{2p}2pa_\mu^{(2p)}+O(z^{d-2})\,,\\
f_{\mu\nu}=&\sum_{p=0}^{\infty}\left(\frac{z}{L}\right)^{2p}\big[f^{(2p)}_{\mu\nu}+\sum_{q=1}^{p-1} 2q(a_\mu^{(2p-2q)} a_\nu^{(2q)}-a_\nu^{(2p-2q)} a_\mu^{(2q)})\big]+O(z^{d-2})\,,\\
\gamma^\lambda{}_{\mu\nu}={}&\gamma^\lambda_{(0)\mu\nu}-\sum_{p=1}^\infty\left(\frac{z}{L}\right)^{2p}\bigg(\sum_{q=0}^{p-1}\tilde{m}_{(2q)}^\lambda{}_\rho\hat\gamma^\rho_{(2p-2q)\mu\nu}+\frac{1}{2}\sum_{q=0}^{p-1}[\tilde{m}_{(2q)}\gamma_{(0)}^{-1}]^{\lambda\rho}\sum_{k=0}^{p-q-1}(2k-2)\qquad\nonumber\\
\label{exp5}
&\times(a^{(2p-2q-2k)}_\mu\gamma^{(2k)}_{\nu\rho}+a^{(2p-2q-2k)}_\nu\gamma^{(2k)}_{\mu\rho}-a^{(2p-2q-2k)}_\rho\gamma^{(2k)}_{\mu\nu})\bigg)+O(z^{d-2})\,,
\end{align}
where
\begin{align*}
f^{(0)}_{\mu\nu}&=\partial_\mu a_\nu^{(0)}-\partial_\nu a_\mu^{(0)}\,,\qquad f^{(2k)}_{\mu\nu}=\hat\nabla^{(0)}_\mu a_\nu^{(2k)}-\hat\nabla^{(0)}_\nu a_\mu^{(2k)}\quad(k>0)\,,\\
\gamma^\lambda_{(0)\mu\nu}&=\frac{1}{2}\gamma_{(0)}^{\lambda\rho}\big(
\partial_\mu \gamma^{(0)}_{\nu\rho}
+\partial_\nu \gamma^{(0)}_{\mu\rho}-\partial_\rho \gamma^{(0)}_{\mu\nu}\big)-\big(a^{(0)}_\mu\delta^\lambda{}_\nu+a^{(0)}_\nu\delta^\lambda{}_\mu-a^{(0)}_\rho\gamma_{(0)}^{\lambda\rho}\gamma^{(0)}_{\mu\nu}\big)\,,\\
\hat\gamma^\lambda_{(2k)\mu\nu}&=\frac{1}{2}\gamma_{(0)}^{\lambda\rho}(\hat\nabla^{(0)}_\mu\gamma^{(2k)}_{\nu\rho}+\hat\nabla^{(0)}_\nu\gamma^{(2k)}_{\mu\rho}-\hat\nabla^{(0)}_\rho\gamma^{(2k)}_{\mu\nu})\quad(k>0)\,.
\end{align*}
Expanding everything in \eqref{eqa} using \eqref{exp1}--\eqref{exp5}, we obtain the following equation at the $O(z^{2p})$-order:
\begin{align}
0={}&\frac{1}{L^2}p(p-1)\text{tr}(m_{(2p)}+\tilde{m}_{(2p)})+\frac{1}{L^2}\sum^{p-1}_{q=1}(p-1)(2q-p)\text{tr}\tilde{m}_{(2q)}m_{(2p-2q)}\nonumber\\
&-\sum^{p-1}_{q=1}2q\hat\nabla_\mu a_\nu^{(2q)}\big[\tilde{m}_{(2p-2q-2)}\gamma_{(0)}^{-1}\big]^{\mu\nu}-\sum^{p-1}_{q=1}\sum^{q-1}_{k=0}(2p-2q+2k)2ka_\mu^{(2p-2q)}a_\nu^{(2k)}\big[\tilde{m}_{(2q-2k-2))}\gamma_{(0)}^{-1}\big]^{\mu\nu}\nonumber\\
&-\sum^{p-1}_{q=1}\sum_{k=0}^{q-1}\sum_{n=0}^{p-q-1}na_\lambda^{(2n)}[\tilde{m}_{(2p-2q-2n-2)}\gamma^{-1}_{(0)}]^{\mu\nu}\bigg(\tilde{m}_{(2k)}^\lambda{}_\rho\hat\gamma^\rho_{(2q-2k)\mu\nu}\nonumber\\
&\quad-[\tilde{m}_{(2k)}\gamma_{(0)}^{-1}]^{\lambda\rho}\sum_{m=0}^{q-k-1}(2-2m)(a^{(2q-2k-2m)}_\mu\gamma^{(2m)}_{\nu\rho}+a^{(2q-2k-2m)}_\nu\gamma^{(2m)}_{\mu\rho}-a^{(2q-2k-2m)}_\rho\gamma^{(2m)}_{\mu\nu})\bigg)\nonumber\\
&+\frac{1}{4L^2}\sum_{q=1}^{p-1}(p-q)\text{tr}\bigg[(m_{(2p-2q)}+\tilde{m}_{(2p-2q)})\Big[q(m_{(2q)}+\tilde{m}_{(2q)})+\sum_{k=1}^{q-1}2(2k-q)\tilde{m}_{(2k)}m_{(2q-2k)}\Big]\bigg]\nonumber\\
&+\frac{1}{4L^2}\sum_{q=1}^{p-1}\sum_{k=1}^{q-1}\sum_{m=1}^{p-q-1}(2k-q)(2m-p+q)\text{tr}\big[\tilde{m}_{(2k)}m_{(2q-2k)}\tilde{m}_{(2m)}m_{(2p-2q-2m)}\big]\nonumber\\
&+\frac{L^2}{4}\sum_{q=1}^{p-1}\sum_{k=0}^{q-1}\big[f^{(2k)}_{\mu\rho}+\sum_{m=1}^{k-1} 2m(a_\mu^{(2k-2m)} a_\rho^{(2m)}-a_\rho^{(2k-2m)} a_\mu^{(2m)})\big][\tilde{m}_{(2q-2k-2)}\gamma^{-1}_{(0)}]^{\rho\nu}\nonumber\\
\label{Rayex}
&\quad\times\sum_{n=0}^{p-q-1}\big[f^{(2n)}_{\nu\sigma}+\sum_{s=1}^{n-1} 2s(a_\nu^{(2n-2s)} a_\sigma^{(2s)}-a_\sigma^{(2n-2s)} a_\nu^{(2s)})\big][\tilde{m}_{(2p-2q-2n-2)}\gamma^{-1}_{(0)}]^{\sigma\mu}\,.
\end{align}
From this equation, one can find $\text{tr}(m_{(2p)}+\tilde{m}_{(2p)})$ in terms of $m_{(2q)}$ and $\tilde{m}_{(2q)}$ for all $q<p$.
\par
Taking $p=3$ we get the Raychaudhuri equation at the $O(z^6)$-order:
\begin{align}
0={}&\frac{6}{L^2}\text{tr}(m_{(6)}+\tilde{m}_{(6)})+\frac{4}{L^2}\text{tr}(m_{(4)}m_{(2)})-\frac{4}{L^2}\text{tr}(m_{(2)}^3)-\frac{L^2}{2}m_{(2)}^\mu{}_\alpha f^\alpha_{(0)\beta} f^\beta_{(0)\mu}\nonumber\\
\label{Ray6d}
&+4\hat\nabla\cdot a^{(4)}-2m_{(2)}^\mu{}_\rho\gamma_{(0)}^{\rho\nu}\hat\nabla_\nu a_\mu^{(2)}-2\gamma_{(0)}^{\mu\nu}\hat\gamma^{\lambda}_{(2)\mu\nu}a^{(2)}_\lambda-2(d-6)a^2_{(2)}+\frac{L^2}{2}f^{(2)}_{\mu\nu}f_{(0)}^{\nu\mu}\,.
\end{align}
And for $p=4$, we have the Raychaudhuri equation at the $O(z^8)$-order:
\begin{align}
0={}&\frac{12}{L^2}\text{tr}(m_{(8)}+\tilde{m}_{(8)})+\frac{9}{L^2}\text{tr}(m_{(6)}m_{(2)})-\frac{22}{L^2}\text{tr}(m_{(4)}m_{(2)}^2)+\frac{6}{L^2}\text{tr}(m_{(2)}^4)+\frac{4}{L^2}\text{tr}(m_{(4)}^2)\nonumber\\
&+\frac{L^2}{4}f^{(0)}_{\mu\rho}f_{(0)}^{\nu\sigma}m_{(2)}^\rho{}_\nu m_{(2)}^\mu{}_\sigma+\frac{L^2}{2}f^{(0)}_{\mu\rho}f_{(0)}^{\rho\sigma}(m_{(2)}^2)^\mu{}_\sigma-\frac{L^2}{2}f^{(0)}_{\mu\rho}f_{(0)}^{\rho\sigma}(m_{(4)})^\mu{}_\sigma+6\hat\nabla\cdot a^{(6)}\nonumber\\
&-4\hat\nabla_\mu a_\nu^{(4)}\gamma_{(2)}^{\mu\nu}+L^2\hat\nabla_{[\mu}a^{(4)}_{\rho]}f_{(0)}^{\rho\mu}-4a_\sigma^{(4)}\gamma_{(0)}^{\mu\nu}\hat\gamma_{(2)}^\sigma{}_{\mu\nu}-6(d-8)a^{(4)}\cdot a^{(2)}-2\hat\nabla_\mu a_\nu^{(2)}\gamma_{(4)}^{\mu\nu}\nonumber\\
&-2a_\sigma^{(2)}\gamma_{(0)}^{\mu\nu}\hat\gamma_{(4)}^\sigma{}_{\mu\nu}+2\hat\nabla_\mu a_\nu^{(2)}(m_{(2)}^2)^\mu{}_\rho\gamma_{(0)}^{\rho\nu}+L^2\hat\nabla_{[\mu}a^{(2)}_{\rho]}\hat\nabla^{[\rho}a_{(2)}^{\mu]}-2L^2\hat\nabla_{[\mu}a^{(2)}_{\rho]}f_{(0)}^{\rho\sigma}m_{(2)}^\mu{}_\sigma\nonumber\\
\label{Ray8d}
&+2a_\sigma^{(2)}\gamma_{(2)}^{\mu\nu}\hat\gamma_{(2)}^\sigma{}_{\mu\nu}+2a_\lambda^{(2)}\gamma_{(0)}^{\mu\nu}m_{(2)}^\lambda{}_\sigma\hat\gamma_{(2)}^\sigma{}_{\mu\nu}+2(d-8)a^{(2)}_\mu a^{(2)}_\nu\gamma_{(2)}^{\mu\nu}+2X^{(1)}a^{(2)}\cdot a^{(2)}\,.
\end{align}
\par
Now let us look at the expansion of $\sqrt{-\det h}$. Using the fact that $\theta=D_z\ln \sqrt{-\det h}$, we can write down the expansion of $\sqrt{-\det h}$ to any order by exponentiating the integrated expansion of $\theta$:
\begin{align}
\label{deth}
\sqrt{-\det h}={}&\sqrt{-\det\gamma_{(0)}}\bigg(\frac{z}{L}\bigg)^{-d}\sum_{n=0}^{\infty}\frac{1}{n!}\\
&\times\left[\frac{1}{2}\sum^{\infty}_{m=1}\bigg(\frac{z}{L}\bigg)^{2m}\bigg[\frac{1}{2}\text{tr}(m_{(2m)}+\tilde{m}_{(2m)})+\sum^{m-1}_{k=1}\bigg(\frac{k}{m}-\frac{1}{2}\bigg)\text{tr}(\tilde{m}_{(2k)}m_{(2m-2k)})\bigg]\right]^n\nonumber\,.
\end{align}
Comparing with \eqref{sqrth}, at the $O(z^6)$-order and the $O(z^8)$-order, the above equation gives respectively
\begin{align}
\label{X3}
X^{(3)}={}&\frac{1}{2}\text{tr}(m_{(6)}+\tilde{m}_{(6)})-\frac{1}{6}\text{tr} (m_{(2)}^3)+\frac{1}{2}X^{(1)}X^{(2)}-\frac{1}{12}(X^{(1)})^3\,,\\
X^{(4)}={}&\frac{1}{2}\text{tr}(m_{(8)}+\tilde{m}_{(8)})-\frac{1}{2}\text{tr}(m_{(4)}m_{(2)}^2)+\frac{1}{4}\text{tr}(m_{(2)}^4)\nonumber\\
\label{X4}
&+\frac{1}{2}X^{(3)}X^{(1)}-\frac{1}{4}X^{(2)}(X^{(1)})^2+\frac{1}{4}(X^{(2)})^2+\frac{1}{32}(X^{(1)})^{4}\,.
\end{align}
Now solving for $\text{tr}(m_{(6)}+\tilde{m}_{(6)})$ from \eqref{Ray6d} and plugging \eqref{g2}, \eqref{g4} and \eqref{X1X2} into \eqref{X3}, we can organize all the $m_{(2)}$ and $f_{(0)}$ terms in $X^{(3)}$ and get \eqref{X3P}. Similarly, plugging $\text{tr}(m_{(8)}+\tilde{m}_{(8)})$ obtained from \eqref{Ray8d} into \eqref{X4}, the expression for $X^{(4)}$ can be organized in terms of the Weyl-Schouten tensor and extended Weyl-obstruction tensors as
\begin{align}
\frac{24}{L^2}X^{(4)}={}&L^6\bigg(\frac{1}{8}\hat P^4-\frac{3}{4}\text{tr}(\hat P^2)\hat P^2+\frac{3}{8}[\text{tr}(\hat P^2)]^2+\text{tr}(\hat P^3)\hat P-\frac{3}{4}\text{tr}(\hat P^4)-\text{tr}(\hat\Omega_{(1)}\hat P)\hat P+\text{tr}(\hat\Omega_{(1)}\hat P^2)\nonumber\\
&-\frac{1}{4}\text{tr}(\hat\Omega_{(1)}^2)-\frac{1}{4}\text{tr}(\hat\Omega_{(2)}\hat P)\bigg)+2(d-8)\big[3a^{(4)}\cdot a^{(2)}+a^{(2)}_\mu a^{(2)}_\nu(\hat P^{\mu\nu}-\hat P\gamma_{(0)}^{\mu\nu})\big]-6\hat\nabla\cdot a^{(6)}\nonumber\\
&-L^2\hat\nabla_\mu \big[a_\nu^{(4)}(4\hat P^{\mu\nu}+2\hat P^{\nu\mu}-4\hat P\gamma_{(0)}^{\mu\nu})\big]-\frac{L^2}{2}\hat\nabla_\mu \big[a^{(2)}_{\nu}(3\hat\nabla^{\nu}a_{(2)}^{\mu}+\hat\nabla^{\mu} a^{\nu}_{(2)}-3\hat\nabla\cdot a_{(2)}\gamma_{(0)}^{\mu\nu})\big]\nonumber\\
&+L^4\hat\nabla_\mu\big[a^{(2)}_\nu(3\hat P^{\mu\nu}\hat P+\hat P^{\nu\mu}\hat P)\big]+\frac{3L^4}{2}\hat\nabla^\mu\big[a^{(2)}_\mu(\text{tr}(\hat P^2)-\hat P^2) \big]-\frac{3L^4}{2}\hat\nabla_\mu (a_\nu^{(2)}\hat\Omega_{(1)}^{\mu\nu})\nonumber\\
\label{X8t}
&-\frac{L^4}{4}\hat\nabla_{\mu}\big[a_{\nu}^{(2)}(3\hat P^{\rho\mu}\hat P^\nu{}_\rho-5\hat P^{\rho\mu}\hat P_\rho{}^\nu+7\hat P^{\mu\rho}\hat P_\rho{}^\nu-9\hat P^{\mu\rho}\hat P^\nu{}_\rho)\big]\,,
\end{align}
which leads to \eqref{X4P}.
\section{Dilatation Operator Method}
\label{AppC}
In this appendix we will derive the holographic Weyl anomaly in $8d$ using the recursive algorithm of \cite{Anastasiou:2020zwc}, which we will refer to as the {\it dilatation operator method}. We point out that this method uses the usual FG gauge, where the Weyl connection is turned off. This alternative way of finding the Weyl anomaly provides a nontrivial consistency check of the results presented in Section \ref{Sec5}. For completeness, we start with a brief review of the algorithm of the dilatation operator method. We then apply the algorithm one step further than \cite{Anastasiou:2020zwc} and compute the holographic Weyl anomaly in $8d$.
\subsection{Review of the Algorithm}\label{subsection_C1}
We start by using the metric in the FG gauge \eqref{FG}
\begin{equation}
\text{d} s^2=\text{d} r^2+h_{\mu\nu}(r;x)\text{d} x^\mu \text{d} x^\nu\,,\qquad\mu,\nu=1,\cdots,d\,,
\end{equation}
where we changed the coordinates by setting $r=- L\ln\left(z/L\right)$. The Einstein-Hilbert action in the bulk manifold $M$ with a Gibbons-Hawking boundary term is
\begin{equation}
S_{\text{EH-GH}}= \frac{1}{2\kappa^2} \int_{M}\text{d} r\text{d} ^{d}x\sqrt{-\text{det}g}(R-2\Lambda)+ \frac{1}{\kappa^2} \int_{\partial M_{r_{c}}}\text{d} ^{d}x\sqrt{-\det h}K\,,
\end{equation}
where $\kappa^2 =8\pi G$, $\Lambda= -\frac{d(d-1)}{2L^2}$, and $\partial M_{r_{c}}$ is a cutoff surface at some large value of $r_{c}$.\footnote{We abuse notation and call the cutoff surface $\partial M_{r_{c}}$ even though it is not necessarily the boundary of $M$.} Taking a metric variation and evaluating the result on-shell one gets
\begin{equation}
\delta S_{\text{EH-GH}}^{\text{o.s.}}= \int_{\partial M_{r_{c}}} \text{d} ^{d}x\,\pi^{\mu\nu}\delta h_{\mu\nu}\,,\qquad\pi^{\mu\nu}= \frac{1}{2\kappa^2}\sqrt{-\det h}\left(K h^{\mu\nu}- K^{\mu\nu}\right)\,,
\end{equation}
where $K_{\mu\nu}= \frac{1}{2}\partial_{r}h_{\mu\nu}$ is the extrinsic curvature tensor in the FG gauge, and $h_{\mu\nu}$ is the induced metric on $\partial M_{r_{c}}$. The boundary tensor density $\pi^{\mu\nu}$ appears in many different contexts; it was first defined in \cite{brown1993quasilocal} and was later used in \cite{Balasubramanian_1999} to define a boundary stress tensor in an asymptotically AdS spacetime (with the inclusion of necessary counterterms to cancel divergences). It also appears as the conjugate momenta in the ADM formalism \cite{arnowitt1960canonical}.\footnote{The sign difference in the definition of $\pi^{\mu\nu}$ in \cite{arnowitt1960canonical} arises because here we consider the radial evolution which is in a spacelike direction.}
The $rr$- and $r\mu$-components of the Einstein equations, i.e.\ $G^{rr}+ \Lambda g^{rr}=0$ and $G^{r\mu}=0$, can be written in terms of the conjugate momenta as follows:
\begin{align}\label{EE_p1}
\frac{2\kappa^2}{\sqrt{-\det h}}\left(\pi\indices{^{\mu}_{\nu}}\pi^{\nu}_{\mu}- \frac{1}{d-1}\pi^2\right)+ \frac{1}{2\kappa^2}\sqrt{-\det h}\left(\mathring R -2\Lambda\right)&=0\,,\\
\label{EE_p2}
\mathring\nabla_{\rho}\pi^{\rho\mu}&=0\,,
\end{align}
where $\pi\equiv h_{\mu\nu}\pi^{\mu\nu}$, $\mathring \nabla$ is the LC connection of the induced metric $h_{\mu\nu}$ on $\partial M_{r_{c}}$, and $\mathring R$ is the LC Ricci scalar of $h_{\mu\nu}$. Note that the indices are raised and lowered using the induced metric $h_{\mu\nu}$. Eqs.\ \eqref{EE_p1} and \eqref{EE_p2} are the well-known Hamiltonian and momentum constraints in the ADM language \cite{arnowitt1960canonical}.
\par
The dilatation operator method of solving the constraint equations uses an asymptotic expansion of the conjugate momenta in terms of the induced metric.
One assumes a Hamilton-Jacobi functional $S[h]$ such that
\begin{equation}\label{pi_def}
\pi^{\mu\nu}= \frac{\delta S[h]}{\delta h_{\mu\nu}}\,,\qquad S[h]\equiv \int_{\partial M_{r_{c}}} \text{d} ^{d}x \mathcal{L}[h]\,,
\end{equation}
where $S[h]$ is a local, diffeomorphism-invariant functional of the induced metric.\footnote{In \cite{Anastasiou:2020zwc} the functional $S[h]$ is used to derive the boundary terms for a well-defined bulk variational problem. Since this is tangential to the calculation of the Weyl anomaly, we simply neglect these terms and refer the reader there for more details.}
\par
The momentum constraint \eqref{EE_p2} is now trivially satisfied. The Hamiltonian constraint \eqref{EE_p1} can be solved asymptotically by writing
\begin{equation}\label{L_expansion}
\mathcal{L}= \sum_{k=0}\mathcal{L}_{(2k)}[h]\,,\qquad\delta_{D}\mathcal{L}_{(2k)}[h]= (d-2k)\mathcal{L}_{(2k)}[h]\,,
\end{equation}
where $\delta_{D}$ is the dilatation operator \cite{Papadimitriou:2004ap} (acting on metric functionals), defined as
\begin{align}
\label{deltaD}
\delta_{D}\equiv \int \text{d} ^{d}x \left[2 h_{\mu\nu}\frac{\delta }{\delta h_{\mu\nu}}\right]\,.
\end{align}
It is useful to keep in mind that for an AlAdS spacetime the radial derivative $\partial_{r}$ asymptotically approaches the dilatation operator, i.e.\ $\delta_D \sim \frac{\partial}{\partial r}$ (see \cite{Anastasiou:2020zwc}). We can view the dilatation operator expansion as another asymptotic expansion near the conformal boundary, since $h_{\mu\nu}\sim e^{2r/L}\gamma_{\mu\nu}^{(0)}+ \cdots$.
\par
The expansion \eqref{L_expansion} together with \eqref{pi_def} implies an expansion of $\pi^{\mu\nu}$ in terms of the dilatation weight:
\begin{equation}\label{pi_2n- definition}
\pi^{\mu\nu}_{(2k)}= \frac{\delta}{\delta h_{\mu\nu}}\int_{\partial M_{r_{c}}}\text{d}^{d}x\mathcal{L}_{(2k)}\,,\qquad\pi^{\mu\nu}= \sum_{k=0}\pi^{\mu\nu}_{(2k)}\,.
\end{equation}
The $\mathcal{L}_{(2k)}[h]$ are defined only up to total derivative terms in $\partial M_{r_{c}}$.
To set up the recursive algorithm we need the following relation:
\begin{equation}\label{trace_of_pi}
\pi_{(2k)}=\frac{L}{2} \mathcal{Q}_{(2k)}\,,\qquad\pi_{(2k)}:= h_{\mu\nu}\pi^{\mu\nu}_{(2k)}\,,
\end{equation}
where we defined the \emph{Q-curvature}
\begin{align}
\mathcal{Q}_{(2k)} := \frac{d-2k}{L}\mathcal{L}_{(2k)}\,.
\end{align}
The Q-curvature is the main quantity we are interested in, since it is proportional to the Weyl anomaly in a specific even dimension [see \eqref{QandA}]. Plugging \eqref{pi_2n- definition} into \eqref{EE_p1} and making use of \eqref{trace_of_pi} we find
\begin{equation}\label{Q-Algorithm}
\mathcal{Q}_{(2k)}= \frac{2\kappa^2}{\sqrt{-\det h}}\sum_{m=1}^{k-1}\left(\pi\indices{_{(2m)}^{\mu}_{\nu}}\pi\indices{_{(2k-2m)}^{\nu}_{\mu}}- \frac{1}{d-1}\pi_{(2m)}\pi_{(2k-2m)}\right)\,,
\end{equation}
where we used the initial values
\begin{equation}\label{Initial values}
\mathcal{Q}_{(2)}= \frac{\sqrt{-\det h}}{2\kappa^2}\mathring R\,,\qquad\pi_{(0)}^{\mu\nu}=\frac{(d-1)}{2\kappa^2 L}\sqrt{-\det h}h^{\mu\nu}\,.
\end{equation}
\par
Equations \eqref{Q-Algorithm} and \eqref{pi_2n- definition}, together with the initial values \eqref{Initial values}, are enough to fix the iterative algorithm of the dilatation operator method. Expanding on this a little more: given the value of $\mathcal{Q}_{(2)}$ we can use \eqref{pi_2n- definition} to find $\pi_{(2)}^{\mu\nu}$. Since we now have $\pi_{(2)}^{\mu\nu}$ and $\pi_{(0)}^{\mu\nu}$, we can find $\mathcal{Q}_{(4)}$ from \eqref{Q-Algorithm}. The process can then be iterated to compute $\pi_{(2k)}^{\mu\nu}$ and $\mathcal{Q}_{(2k)}$ to any order. The recursion has been solved up to $\pi_{(4)}^{\mu\nu}$ and $\mathcal{Q}_{(6)}$ in \cite{Anastasiou:2020zwc}. In the next section we push the calculation one step further to find $\pi^{\mu\nu}_{(6)}$ and $\mathcal{Q}_{(8)}$; this will enable us to find the Weyl anomaly in $8d$.
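As a guide to the bookkeeping, the following schematic sketch (ours, not taken from \cite{Anastasiou:2020zwc}) mirrors the iteration just described. The tensor algebra is hidden behind the placeholder callables \texttt{metric\_variation} and \texttt{bilinear}, since in practice it has to be supplied by a symbolic tensor package; only the control flow of the recursion is meant literally.
\begin{verbatim}
import sympy as sp

def dilatation_recursion(k_max, Q2, metric_variation, bilinear):
    """Iterate the constraint algorithm up to Q_{(2 k_max)}.

    metric_variation(Q) -- delta(int Q)/delta h_{mu nu}, cf. (pi_2n-definition)
    bilinear(p1, p2)    -- tr(p1 p2) - tr(p1) tr(p2)/(d-1), cf. (Q-Algorithm);
                           the prefactor 2 kappa^2/sqrt(-det h) is suppressed
    """
    Q, pi = {1: Q2}, {}
    for k in range(2, k_max + 1):
        pi[k - 1] = metric_variation(Q[k - 1])  # pi_{(2k-2)} from Q_{(2k-2)}
        Q[k] = sum(bilinear(pi[m], pi[k - m]) for m in range(1, k))
    return Q, pi

# demo with purely symbolic placeholders, exhibiting the dependency structure
Var, Bil = sp.Function("Var"), sp.Function("Bil")
Q, pi = dilatation_recursion(4, sp.Symbol("Q2"), Var, Bil)
print(Q[4])   # nested Var(...) and Bil(...) calls, seeded only by Q2
\end{verbatim}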
\subsection{Results and Anomaly in $8d$}
We have explained the algorithm for solving the Hamiltonian and momentum constraints. We now focus on the Q-curvature, which is expressed in terms of the conjugate momenta in \eqref{Q-Algorithm}. The Weyl anomaly in $d$ dimensions corresponds to the Q-curvature for $d=2k$ \cite{Anastasiou:2020zwc}:
\begin{equation}\label{QandA}
{\cal A}_k =- L\int \text{d} ^{d}x \ln {\cal B}{\cal Q}_{(2k)}^{d=2k}\,.
\end{equation}
\par
We now present the results of the algorithm presented in Section~\ref{subsection_C1}. First, we review the results for $\pi_{(2k)}^{\mu\nu}$ and $\mathcal{Q}_{(2k)}$ up to $k=3$:
\begin{equation}\label{old_pi4}
\begin{split}
\pi_{(0)}^{\mu\nu}&=\frac{(d-1)}{2\kappa^2 L}\sqrt{-\det h}h^{\mu\nu}\,,\\
\pi_{(2)}^{\mu\nu}&= -\frac{\sqrt{-\det h}L}{2\kappa^2}\left(\mathring P^{\mu\nu}- \mathring Ph^{\mu\nu}\right)\,,\\
\pi_{(4)}^{\mu\nu}&=- \frac{\sqrt{-\det h}L^3}{2\kappa^2(d-4)(d-2)}\left[\mathring B^{\mu\nu}+ (d-4)\left(\mathring P^{\mu}_{~\lambda}\mathring P^{\lambda\nu}- \mathring P \mathring P^{\mu\nu} - \frac{1}{2}h^{\mu\nu}(\text{tr} (\mathring P^{2}) -\mathring P^2)\right)\right]\,,
\end{split}
\end{equation}
and
\begin{equation}\label{old_Q6}
\begin{split}
\mathcal{Q}_{(2)}&= \frac{\sqrt{-\det h}}{2\kappa^2}\mathring{R}\,,\\
\mathcal{Q}_{(4)}&= \frac{\sqrt{-\det h}L^2}{2\kappa^2}\left[\text{tr} (\mathring P^2)- \mathring P^2\right]\,,\\
\mathcal{Q}_{(6)}&=\frac{\sqrt{-\det h}L^4}{\kappa^2(d-4)(d-2)}\left[\text{tr} (\mathring P \mathring B)+ (d-4)\bigg(\text{tr} (\mathring P^3 )- \frac{3}{2}\mathring P \text{tr} (\mathring P^2)+ \frac{1}{2}\mathring P^3\bigg) \right]\,.
\end{split}
\end{equation}
We can see that $\mathcal{Q}_{(2)}$, $\mathcal{Q}_{(4)}$ and $\mathcal{Q}_{(6)}$ correspond to the LC counterparts of the Weyl anomaly shown in \eqref{2dAP}, \eqref{4dAP} and \eqref{6dA}, respectively. Using $\mathcal{Q}_{(6)}$ in \eqref{old_Q6} we can calculate $\pi_{(6)}^{\mu\nu}$ from \eqref{pi_2n- definition} by taking a metric variation of $\mathcal{Q}_{(6)}$. The result is as follows:
\begin{align}\label{pi6final_2}
\pi_{(6)}^{\mu\nu}={}& -\frac{L^5}{\kappa^2 (d-6)(d-4)(d-2)^2}\bigg[ \mathcal{\mathring O}_{(6)}^{\mu\nu}
+(d-6) \bigg(\mathring P\indices{^{(\mu}_{\lambda}}\mathring B^{\nu)\lambda}- \mathring P \mathring B^{\mu\nu} -2\mathring P_{\rho\lambda}\mathring \nabla^\lambda \mathring C^{(\mu\nu)\rho}\nonumber\\&
- (\mathring\nabla_\lambda \mathring P)\mathring C^{(\mu\nu)\lambda} + \mathring C^{\rho \mu \lambda}\mathring C\indices{_{\lambda}^{\nu}_{\rho}}- \frac{1}{2}\mathring C\indices{^{\mu}^{\rho\lambda}}\mathring C\indices{^{\nu}_{\rho\lambda}}+ \mathring P \mathring W\indices{^{\mu}_{\alpha\beta}^{\nu}}\mathring P^{\alpha\beta}+ \frac{1}{2}\mathring\nabla^{\lambda}\mathring\nabla_{\lambda} (\mathring P^{\mu\beta}\mathring P\indices{^{\nu}_{\beta}} - \mathring P\mathring P^{\mu\nu}) \nonumber\\&
- \frac{(d-2)}{4(d-1)} \mathring\nabla^{\mu}\mathring\nabla^{\nu}(\text{tr} (\mathring P^2)- \mathring P^2)-\frac{1}{4(d-1)}h^{\mu\nu}\mathring\nabla^{\lambda}\mathring\nabla_{\lambda} (\text{tr} (\mathring P^2)- \mathring P^2)+(d-4) \mathring P^{\mu}_{~~\alpha}\mathring P^{\alpha \beta}\mathring P\indices{_{\beta}^{\nu}}
\nonumber\\&- \frac{(3d^2 -12d+8)}{4(d-1)} \mathring P^{\mu\nu}(\text{tr} (\mathring P^2)-\mathring P^2)- (d-4)\mathring P \mathring P^{\mu}{}_{\lambda}\mathring P^{\lambda \nu}-\frac{1}{2}h^{\mu\nu}\Big(\text{tr} (\mathring P\mathring B)+ (d-4)\text{tr}(\mathring P^3)\nonumber \\&
- \frac{(3d^2-14d+10)}{2(d-1)}\mathring P\text{tr}(\mathring P^2) + \frac{(d^2 -4d+2)}{2(d-1)}\mathring P^3\Big)\bigg) \bigg]\,,
\end{align}
where ${\cal \mathring O}^{\mu\nu}_{(6)}$ is the LC obstruction tensor defined in \eqref{O6}. We have also checked that $\pi_{(6)}^{\mu\nu}$ is divergence-free in any dimension, as is required by \eqref{EE_p2}.\footnote{This calculation was done thanks to the Mathematica package \href{https://people.brandeis.edu/~headrick/Mathematica/index.html}{diffgeo.m} by Matthew Headrick.}
By plugging \eqref{pi6final_2} and \eqref{old_pi4} into \eqref{Q-Algorithm} we find after some reorganization the expression of $\mathcal{Q}_{(8)}$ as follows:
\begin{align}\label{Q8_Final_4}
\mathcal{Q}_{(8)}={}& \frac{2L^6 \sqrt{-\det h}}{\kappa^2(d-6)(d-4)(d-2)^2}\bigg[ \frac{(d-2)}{2(d-4)}\text{tr} (\mathring P \mathring{\mathcal{O}}_{(6)})+(d-6)\bigg(\frac{3}{4(d-4)}\text{tr} (\mathring B^2) + \frac{5d-16}{2 (d-4)}\text{tr} (\mathring P^2 \mathring B) \nonumber\\&
-\frac{5d-16}{2(d-4)}\mathring P \text{tr} (\mathring P\mathring B)-\frac{5d^2 -20d+8}{16(d-1)}\mathring P^4+ \frac{16-5d}{2}\mathring P\text{tr} (\mathring P^3)+\frac{15d^2 -62d+40}{8(d-1)}\mathring P^2 \text{tr} (\mathring P^2)\nonumber\\&
- \frac{13d^2 -44d+24}{16(d-1)}(\text{tr} (\mathring P^2) )^2+\frac{7d-20}{4}\text{tr} (\mathring P^4) \bigg)
+\mathring\nabla_{\mu}K^{\mu} \bigg]\,,
\end{align}
where $K^{\mu}=\frac{(d-6)}{2(d-4)} K_{0}^{\mu}+ \frac{(d-6)}{2}K_{1}^{\mu}+ \frac{(d-6)}{2}K_{2}^{\mu}+ \frac{(d-6)(d-2)}{4(d-1)}K_{3}^{\mu} $, with
\begin{equation}
\begin{split}
K_{0}^{\mu}&:= \mathring P^{\alpha\beta}\mathring{\nabla}^{\mu}\mathring B_{\alpha\beta}-\mathring B_{\alpha\beta}\mathring\nabla^{\mu}\mathring P^{\alpha\beta}+ \mathring B^{\mu \beta}\mathring\nabla_{\beta}\mathring P - \mathring P\mathring\nabla_{\alpha}\mathring B^{\alpha \mu}\,,\\
K_{1}^{\mu}&:=(\mathring P^{\mu \lambda}\mathring P^{\beta}_{\lambda}-\mathring P\mathring P^{\mu \beta})\mathring\nabla_{\beta}\mathring P - \mathring P\mathring\nabla_{\beta}(\mathring P^{\beta \alpha}\mathring P^{\mu}_{\alpha}- \mathring P\mathring P^{\beta \mu})\,,\\
K_{2}^{\mu}&:= \mathring P_{\alpha\beta}\mathring\nabla^{\mu}(\mathring P^{\alpha \lambda}\mathring P\indices{_{\lambda}^{\beta}} - \mathring P\mathring P^{\alpha\beta}) - (\mathring\nabla^{\mu}\mathring P_{\alpha\beta})(\mathring P^{\alpha \lambda}\mathring P_{\lambda}^{~\beta}- \mathring P\mathring P^{\alpha\beta})\,,\\
K_{3}^{\mu}&:=(\text{tr} (\mathring P^2)- \mathring P^2)\mathring\nabla^{\mu}\mathring P - \mathring P^{\mu \lambda}\mathring\nabla_{\lambda}(\text{tr} (\mathring P^2) - \mathring P^2)\,.
\end{split}
\end{equation}
If we plug $d=8$ into \eqref{Q8_Final_4}, we find that the holographic Weyl anomaly $\mathcal{A}_{4}$ in $8d$ is
\begin{align}
\label{8dAC}
\mathcal{A}_{4}= -L\int \text{d}^{8}x \ln {\cal B}\mathcal{Q}_{(8)}^{d=8}=& -\int \text{d}^{8}x\frac{L^7 \sqrt{-\det h}}{48\kappa^2}\bigg[\frac{1}{4}\text{tr}(\mathring P\mathring{\cal O}_{(6)}) + \frac{1}{8}\text{tr}(\mathring B^2)+2\text{tr}(\mathring P^2 \mathring B){}- 2\mathring P\text{tr}(\mathring P\mathring B){}\nonumber\\
&+ 6 \text{tr}(\mathring P^4)- 3\text{tr}(\mathring P^2)^{2}+ 6\mathring P^2 \text{tr}(\mathring P^2) - 8\mathring P\text{tr}(\mathring P^3) - \mathring P^4 + \mathring{\nabla}_{\mu}K^\mu\bigg]\,.
\end{align}
This result agrees with the Weyl anomaly we obtained in \eqref{8dA} when the Weyl structure is turned off, up to total derivatives.
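The passage from \eqref{Q8_Final_4} to \eqref{8dAC} amounts to evaluating the $d$-dependent coefficients at $d=8$ against the overall prefactor. As an independent consistency check (ours), the short \texttt{sympy} script below verifies all nine algebraic coefficients; the curvature structures serve purely as labels, and the total-derivative term is omitted.
\begin{verbatim}
import sympy as sp

d = sp.Symbol('d')
F = 2 / ((d - 6) * (d - 4) * (d - 2)**2)       # prefactor of (Q8_Final_4)

bracket = {                                     # bracket coefficients
    'tr(P O6)':    (d - 2) / (2 * (d - 4)),
    'tr(B^2)':     3 * (d - 6) / (4 * (d - 4)),
    'tr(P^2 B)':   (d - 6) * (5*d - 16) / (2 * (d - 4)),
    'P tr(P B)':   -(d - 6) * (5*d - 16) / (2 * (d - 4)),
    'P^4':         -(d - 6) * (5*d**2 - 20*d + 8) / (16 * (d - 1)),
    'P tr(P^3)':   (d - 6) * (16 - 5*d) / 2,
    'P^2 tr(P^2)': (d - 6) * (15*d**2 - 62*d + 40) / (8 * (d - 1)),
    '(tr P^2)^2':  -(d - 6) * (13*d**2 - 44*d + 24) / (16 * (d - 1)),
    'tr(P^4)':     (d - 6) * (7*d - 20) / 4,
}
target = {                                      # coefficients of (8dAC), x 1/48
    'tr(P O6)': sp.Rational(1, 4), 'tr(B^2)': sp.Rational(1, 8),
    'tr(P^2 B)': 2, 'P tr(P B)': -2, 'P^4': -1, 'P tr(P^3)': -8,
    'P^2 tr(P^2)': 6, '(tr P^2)^2': -3, 'tr(P^4)': 6,
}
for name, coeff in bracket.items():
    val = (F * coeff).subs(d, 8)                # exact rational at d = 8
    assert sp.simplify(48 * val - target[name]) == 0, name
print("all Q_(8) -> A_4 coefficients agree at d = 8")
\end{verbatim}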
\end{appendix}
\section{A linear problem} \label{linear}
For this section, we fix $\lambda,\mu\in \R$ and denote $H_1:=H_{1,\mu}$ and $H_2:=H_{N,\lambda}$. Similarly, $r_1:=r_{1,\mu}$ and $r_2:=r_{N,\lambda}$. We call $w:\R^N\to\R$ \textit{doubly antisymmetric (with respect to $H_{1}$ and $H_{2}$)}, if
\[
w\circ r_i=-w\quad\text{in $\R^N$ for $i=1,2$.}
\]
Moreover, if $U\subset \R^N$ is open, we let
\[
{\mathcal V}^s(U)=\Big\{w\in L^2_{loc}(\R^N)\;:\; \rho_s(w,U):=\int_{U}\int_{\R^N}\frac{(w(x)-w(y))^2}{|x-y|^{N+2s}}\ dxdy<\infty\Big\}.
\]
Note that clearly ${\mathcal H}^s_0(U)\subset H^s(\R^N)\subset {\mathcal V}^s(\R^N)\subset {\mathcal V}^s(U)$. In the following lemma we collect some statements concerning the space ${\mathcal V}^s(U)$. The proofs can be found e.g.\ in \cite{JW16,JW19,JW20}.
\begin{lemma}\label{properties vs}
Let $U\subset \R^N$ be open and bounded. Then the following hold.
\begin{enumerate}
\item ${\mathcal E}_s$ is well defined on ${\mathcal V}^s(U)\times {\mathcal H}^s_0(U)$.
\item If $w\in {\mathcal V}^s(U)$, then also $w^{\pm},|w|\in {\mathcal V}^s(U)$. Moreover, if $w\geq 0$ in $\R^N\setminus U$, then $w^-\in{\mathcal H}^s_0(U)$ and we have
$$
{\mathcal E}_s(w^-,w^-)\leq -{\mathcal E}_s(w,w^-).
$$
\item\label{item3} Let $i=1$ or $i=2$ and $U\subset H_i$. If $w\in {\mathcal V}^s(U)$ is antisymmetric in $x_i$, then $w1_{H_i}\in {\mathcal V}^s(U)$. Moreover, if $w\geq 0$ in $H_i\setminus U$, then $w^{-}1_{H_i}\in {\mathcal H}^s_0(U)$ and ${\mathcal E}_s(w^-1_{H_i},w^-1_{H_i})\leq -{\mathcal E}_s(w,w^-1_{H_i})$.
\end{enumerate}
\end{lemma}
The following lemma extends Lemma \ref{properties vs}.\ref{item3} to the case of doubly antisymmetric functions.
\begin{lemma}\label{test-function}
Let $U\subset H_1\cap H_2$ be open and let $w\in {\mathcal V}^s(U_{1,2})$ be doubly antisymmetric, where $U_{1,2}=U\cup r_1(U)\cup r_2(U)\cup r_1(r_2(U))$. Then $v=w^-1_{H_1}1_{H_2}-w^+1_{H_1^c}1_{H_2}\in {\mathcal H}^s_0(U\cup r_1(U))$ and we have
\[
{\mathcal E}_s(w,v)+{\mathcal E}_s(v ,v)\leq0,
\]
where equality can only hold if $v\equiv 0$, that is, if $w\geq0$ in $H_1\cap H_2$.
\end{lemma}
\begin{proof}
First note that since $w$ is antisymmetric with respect to $H_i$, $i=1,2$, Lemma \ref{properties vs} and its proof imply $w_i:=w1_{H_i}\in {\mathcal V}^s(U\cup r_j(U))$, $i,j=1,2$, $i\neq j$ and
\[
\rho_s(w_i,U\cup r_j(U))\leq \rho_s(w,U_{1,2})\quad\text{and}\quad {\mathcal E}_s(w_i^-,w_i^-)\leq -{\mathcal E}_s(w,w_i^-)\quad\text{for $i,j=1,2$, $i\neq j$.}
\]
Similarly, $w_{2}$ is antisymmetric with respect to $H_1$ (resp.\ $w_1$ with respect to $H_2$), and thus also $w_{1,2}:=w_11_{H_2}=w_21_{H_1}\in {\mathcal V}^s(U)$ with
\[
\rho_s(w_{1,2},U)\leq \min\Big\{\rho_s(w_1,U\cup r_2(U)),\rho_s(w_2,U\cup r_1(U))\Big\}
\]
and it holds
\[
{\mathcal E}_s(w_{1,2}^-,w_{1,2}^-)\leq -\max\Big\{{\mathcal E}_s(w_1,w_{1,2}^-),{\mathcal E}_s(w_2,w_{1,2}^-)\Big\}.
\]
Similarly, we also have $w_{r_1,2}=w_21_{H_1^c}\in {\mathcal V}^s(r_1(U))$ with
\[
\rho_s(w_{r_1,2},r_1(U))\leq \rho_s(w_2,U\cup r_1(U)).
\]
It thus follows that $v=w^-1_{H_1}1_{H_2}-w^+1_{H_1^c}1_{H_2}=w_{1,2}^--w_{r_1,2}^+\in {\mathcal H}^s_0(U\cup r_1(U))$.
Using the monotonicity of $|\cdot|$ and the antisymmetry of $w$ and denoting $r_{1,2}:=r_1\circ r_2=r_2\circ r_1$ we have by several rearrangements and substitutions
\begin{align}
&\frac{2}{c_{N,s}}\Big({\mathcal E}_s(w,v)+{\mathcal E}_s(v,v)\Big)=\int_{H_2}\int_{H_2}\frac{[(w+v)(x)-(w+v)(y)][v(x)-v(y)]}{|x-y|^{N+2s}}\ dxdy+\int_{H_2^c}\int_{H_2^c}\ldots+2\int_{H_2}\int_{H_2^c}\ldots\notag\\
&=\int_{H_2}\int_{H_2}\frac{[(w+v)(x)-(w+v)(y)][v(x)-v(y)]}{|x-y|^{N+2s}}\ dxdy-2\int_{H_2}\int_{H_2}\frac{[ w (r_2(x))-(w+v)(y)]v(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&=\int_{H_2}\int_{H_2}\frac{[(w+v)(x)-(w+v)(y)][v(x)-v(y)]}{|x-y|^{N+2s}}\ dxdy-2\int_{H_2}\int_{H_2}\frac{[ -w (x)-(w+v)(y)]v(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&=\int_{H_1\cap H_2}\int_{H_1\cap H_2}\frac{[(w+v)(x)-(w+v)(y)][v(x)-v(y)]}{|x-y|^{N+2s}}\ dxdy+\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\ldots +2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\ldots \notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{[ -w (x)-(w+v)(y)]v(y)}{|r_2(x)-y|^{N+2s}}\ dxdy- 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\ldots-2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\ldots-2\int_{H_2\setminus H_1}\int_{H_2\cap H_1}\ldots\notag\\
&=\int_{H_1\cap H_2}\int_{H_1\cap H_2}\frac{[(w+w^-)(x)-(w+w^-)(y)][w^-(x)-w^-(y)]}{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad -\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{[(w-w^+)(x)-(w-w^+)(y)][w^+(x)-w^+(y)]}{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{[(w-w^+)(x)-(w+w^-)(y)][w^+(x)+w^-(y)]}{|x-y|^{N+2s}}\ dxdy \notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{[ -w (x)-(w+w^-)(y)]w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy + 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{[ -w (x)-(w-w^+)(y)]w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\setminus H_1}\int_{H_2\cap H_1}\frac{[ -w (x)-(w-w^+)(y)]w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy-2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{[ -w (x)-(w+w^-)(y)]w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&=\int_{H_1\cap H_2}\int_{H_1\cap H_2}\frac{[w^+(x)-w^+(y)][w^-(x)-w^-(y)]}{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad -\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{[-w^-(x)+w^-(y)][w^+(x)-w^+(y)]}{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{[-w^-(x)-w^+(y)][w^+(x)+w^-(y)]}{|x-y|^{N+2s}}\ dxdy \notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{[ -w (x)-w^+(y)]w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy + 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{[ -w (x)+w^-(y)]w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\setminus H_1}\int_{H_2\cap H_1}\frac{[ -w (x)+w^-(y)]w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy-2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{[ -w (x)-w^+(y)]w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&=-\int_{H_1\cap H_2}\int_{H_1\cap H_2}\frac{w^+(x)w^-(y)+w^+(y)w^-(x)}{|x-y|^{N+2s}}\ dxdy -\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{w^-(x)w^+(y)+w^-(y)w^+(x) }{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{w^-(x)w^-(y)+w^+(y)w^+(x)}{|x-y|^{N+2s}}\ dxdy \notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w (x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy - 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{ w (x) w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\setminus H_1}\int_{H_2\cap H_1}\frac{ w (x) w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy+2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{ w (x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&= -2\int_{H_1\cap H_2}\int_{H_1\cap H_2}\frac{w^+(x)w^-(y) }{|x-y|^{N+2s}}\ dxdy -2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{w^-(x)w^+(y) }{|x-y|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{w^-(x)w^-(y)+w^+(y)w^+(x)}{|x-y|^{N+2s}}\ dxdy \notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^-(y)-w^-(x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy - 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{ w^+ (x) w^+(y)-w^-(x)w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\setminus H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^+(y)-w^-(x)w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy+2\int_{H_2\cap H_1}\int_{H_2\setminus H_1}\frac{ w^+(x) w^-(y)-w^-(x)w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&= -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^-(x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy- 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{ w^+ (x) w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad- 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}w^-(x)w^+(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^-(r_1(x))w^-(y)+w^+(y)w^+(r_1(x))}{|r_1(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^+(r_1(y))-w^-(x)w^+(r_1(y))}{|r_2(x)-r_1(y)|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(r_1(x)) w^-(y)-w^-(r_1(x))w^-(y)}{|r_{1,2}(x)-y|^{N+2s}}\ dxdy\notag\\
&= -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^-(x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy - 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}\frac{ w^+ (x) w^+(y)}{|r_2(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad- 2\int_{H_2\setminus H_1}\int_{H_2\setminus H_1}w^-(x)w^+(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^+(x)w^-(y)+w^+(y)w^-(x)}{|r_1(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^-(y)-w^-(x)w^-(y)}{|r_2(x)-r_1(y)|^{N+2s}}\ dxdy+2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^-(x) w^-(y)-w^+(x)w^-(y)}{|r_{1,2}(x)-y|^{N+2s}}\ dxdy\notag\\
&= -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^-(x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy- 2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^- (x) w^-(y)}{|r_{1,2}(x)-r_1(y)|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad- 2\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_{1,2}(x)-r_1(y)|^{N+2s}}\Big)\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^+(x)w^-(y)+w^+(y)w^-(x)}{|r_1(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^-(y)-w^-(x)w^-(y)}{|r_2(x)-r_1(y)|^{N+2s}}\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^-(x) w^-(y)-w^+(x)w^-(y)}{|r_{1,2}(x)-y|^{N+2s}}\ dxdy\notag\\
&= -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^-(x) w^-(y)}{|r_2(x)-y|^{N+2s}}\ dxdy-4\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad +2\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^+(x)w^-(y)+w^+(y)w^-(x)}{|r_1(x)-y|^{N+2s}}\ dxdy\notag\\
&\quad -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^-(y) }{|r_{1,2}(x)-y|^{N+2s}}\ dxdy+4\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^-(x)w^-(y)}{|r_{1,2}(x)-y|^{N+2s}}\ dxdy\notag\\
&= -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^-(x) w^-(y)\Big(\frac{1}{|r_2(x)-y|^{N+2s}}-\frac{ 1}{|r_{1,2}(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad +4\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{w^+(x)w^-(y)}{|r_1(x)-y|^{N+2s}}\ dxdy-4\int_{H_2\cap H_1}\int_{H_2\cap H_1}\frac{ w^+(x) w^-(y) }{|r_{1,2}(x)-y|^{N+2s}}\ dxdy\notag\\
&= -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^-(x) w^-(y)\Big(\frac{1}{|r_2(x)-y|^{N+2s}}-\frac{ 1}{|r_{1,2}(x)-y|^{N+2s}}\Big)\ dxdy\notag\\
&\quad -4\int_{H_2\cap H_1}\int_{H_2\cap H_1}w^+(x)w^-(y)\Big(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}-\frac{1}{|r_1(x)-y|^{N+2s}}+\frac{1}{|r_{1,2}(x)-y|^{N+2s}}\Big)\ dxdy.\notag
\end{align}
From here the statement of the lemma follows once we show the following claim:
\begin{equation}
\label{claim-double}
\text{\textit{Claim:}}\quad \frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}-\frac{1}{|r_1(x)-y|^{N+2s}}+\frac{1}{|r_{1,2}(x)-y|^{N+2s}}\geq 0,\quad x,y\in H_1\cap H_2\,.
\end{equation}
We write
\[
\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_2(x)-y|^{N+2s}}=\frac{1}{|x-y|^{N+2s}}\Bigg(1-\Big(\frac{|x-y|^2}{|r_2(x)-y|^2}\Big)^{\frac{N}{2}+s}\Bigg)
\]
and
\[
\frac{1}{|r_1(x)-y|^{N+2s}}-\frac{1}{|r_{1,2}(x)-y|^{N+2s}}=\frac{1}{|r_1(x)-y|^{N+2s}}\Bigg(1-\Big(\frac{|r_1(x)-y|^2}{|r_{1,2}(x)-y|^2}\Big)^{\frac{N}{2}+s}\Bigg).
\]
In the following, fix $x,y\in H_1\cap H_2$. Without loss of generality we may assume that $r_1$ and $r_2$ are the reflections in the first and second coordinate, i.e.\ $H_1=\{x\in\R^N: x_1>0\}$ and $H_2=\{x\in\R^N: x_2>0\}$; indeed, otherwise we may rotate and translate the half spaces, and since ${\mathcal E}_s$ is invariant under rotations and translations the situation remains the same. Then with $D:=\sum\limits_{k=3}^{N}(x_k-y_k)^2$
\begin{align*}
|r_1(x)-y|^2&=(x_1+y_1)^2+(x_2-y_2)^2+D=4x_1y_1+ (x_1-y_1)^2+(x_2-y_2)^2+D=4x_1y_1+|x-y|^2\\
|r_2(x)-y|^2&=(x_1-y_1)^2+(x_2+y_2)^2+D=4x_2y_2+ (x_1-y_1)^2+(x_2-y_2)^2+D=4x_2y_2+|x-y|^2\\
|r_{1,2}(x)-y|^2&=(x_1+y_1)^2+(x_2+y_2)^2+D=4x_1y_1+4x_2y_2+ (x_1-y_1)^2+(x_2-y_2)^2+D=4x_1y_1+4x_2y_2+|x-y|^2
\end{align*}
Thus with $M:=|x-y|^2$ we have
\begin{align*}
&\frac{1}{M^{\frac{N}{2}+s}}-\frac{1}{|r_2(x)-y|^{N+2s}}-\frac{1}{|r_1(x)-y|^{N+2s}}+\frac{1}{|r_{1,2}(x)-y|^{N+2s}}\\
&=\frac{1}{M^{\frac{N}{2}+s}}\Bigg(1-\Big(\frac{M}{|r_2(x)-y|^2}\Big)^{\frac{N}{2}+s} -\Big(\frac{M}{|r_1(x)-y|^2}\Big)^{\frac{N}{2}+s}\Bigg(1-\Big(\frac{|r_1(x)-y|^2}{|r_{1,2}(x)-y|^2}\Big)^{\frac{N}{2}+s}\Bigg)\Bigg)\\
&=\frac{1}{M^{\frac{N}{2}+s}}\Bigg(1-\Big(\frac{M}{|r_2(x)-y|^2}\Big)^{\frac{N}{2}+s} -\Big(\frac{M}{|r_1(x)-y|^2}\Big)^{\frac{N}{2}+s} +\Big(\frac{M}{|r_{1,2}(x)-y|^2}\Big)^{\frac{N}{2}+s} \Bigg)\\
&=\frac{1}{M^{\frac{N}{2}+s}}\Bigg(1+\Big(\frac{M}{4x_1y_1+4x_2y_2+M}\Big)^{\frac{N}{2}+s}-\Big(\frac{M}{4x_1y_1+M}\Big)^{\frac{N}{2}+s} -\Big(\frac{M}{4x_2y_2+M}\Big)^{\frac{N}{2}+s} \Bigg).
\end{align*}
Using the notation $a=4x_1y_1>0$ and $b=4x_2y_2>0$, we may consider for fixed $M>0$ the map
\[
f:[0,\infty)^2\to\R,\quad (a,b)\mapsto 1+\Big(\frac{M}{a+b+M}\Big)^{\frac{N}{2}+s}-\Big(\frac{M}{a+M}\Big)^{\frac{N}{2}+s} -\Big(\frac{M}{b+M}\Big)^{\frac{N}{2}+s}.
\]
Then \eqref{claim-double} follows once we show $f\geq0$. Note that
\[
\nabla f(a,b)=-(\frac{N}{2}+s)M^{\frac{N}{2}+s}\left(\begin{array}{c}\Big(\frac{1}{a+b+M}\Big)^{\frac{N}{2}+1+s}-\Big(\frac{1}{a+M}\Big)^{\frac{N}{2}+1+s}\\
\Big(\frac{1}{a+b+M}\Big)^{\frac{N}{2}+1+s}-\Big(\frac{1}{b+M}\Big)^{\frac{N}{2}+1+s}
\end{array}\right).
\]
Since $a+b+M\geq a+M$ and $a+b+M\geq b+M$, both components of $\nabla f$ are nonnegative on $[0,\infty)^2$. Hence for any $(c,d)\in [0,\infty)^2$ we have
\[
\nabla f(a,b)\cdot{c\choose d}\geq0,
\]
so that $f$ is nondecreasing in any such direction $(c,d)$. In particular, since $f(0,0)=0$, it follows that $f(a,b)\geq 0$ for $a,b\geq 0$. Hence \eqref{claim-double} follows, which implies the assertion of the lemma.
\end{proof}
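As a quick independent sanity check of \eqref{claim-double} (ours, not part of the proof), one can also verify the kernel inequality numerically at randomly sampled points. As in the proof, we normalize $H_1=\{x_1>0\}$ and $H_2=\{x_2>0\}$, so that $r_1$ and $r_2$ flip the first and second coordinate; the values of $N$ and $s$ are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, s = 3, 0.5                                  # illustrative choices
expo = (N + 2 * s) / 2

def ker(x, y):                                 # |x - y|^{-(N+2s)}
    return np.sum((x - y) ** 2, axis=-1) ** (-expo)

def refl(x, i):                                # reflection in coordinate i
    y = x.copy(); y[:, i] *= -1.0; return y

# random points in H_1 cap H_2 = {x_1 > 0} cap {x_2 > 0}
x = rng.standard_normal((10 ** 5, N)); x[:, :2] = np.abs(x[:, :2])
y = rng.standard_normal((10 ** 5, N)); y[:, :2] = np.abs(y[:, :2])

lhs = (ker(x, y) - ker(refl(x, 1), y)          # r_2 flips the 2nd coordinate
       - ker(refl(x, 0), y)                    # r_1 flips the 1st coordinate
       + ker(refl(refl(x, 0), 1), y))          # r_{1,2} flips both
print(lhs.min())                               # nonnegative, up to rounding
\end{verbatim}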
In view of Lemma \ref{properties vs} we may define \textit{doubly antisymmetric supersolutions} as follows. Let $U\subset H_1\cap H_2$ and $c\in L^{\infty}(U)$. Then $w\in {\mathcal V}^s(U)$ is called a doubly antisymmetric supersolution of
\begin{equation}\label{weak-problem-doubly}
\left\{
\begin{aligned}
(-\Delta)^sw&\geq c(x)w &&\text{in $U$,}\\
w&\geq 0 &&\text{in $H_1\cap H_2\setminus U$,}
\end{aligned}\right.
\end{equation}
if $w$ is doubly antisymmetric and satisfies
\[
{\mathcal E}_s(w,\phi)\geq \int_{U} c(x)w(x)\phi(x)\ dx\quad\text{for all nonnegative $\phi\in {\mathcal H}^s_0(U)$.}
\]
In the following, for an open set $U\subset H_1\cap H_2$ let
\[
\lambda_{1}^-(U):=\min_{\substack{ u\in {\mathcal H}^s_0(U\cup r_1(U))\\ u\neq 0\\ u\circ r_1\equiv-u}} \frac{{\mathcal E}_s(u,u)}{\|u\|_{L^2(U\cup r_1(U))}^2}.
\]
We emphasize that $\lambda_{1}^{-}(U)>\lambda_{1}(U\cup r_1(U))$, where $\lambda_1(D)$ denotes the first Dirichlet eigenvalue of $(-\Delta)^s$ in $D$. Since (see e.g. \cite[Lemma 2.1]{JW16})
$$
\inf_{\substack{D\subset \R^N\ \text{open}\\ |D|\leq \delta}} \lambda_1(D)\to \infty \quad\text{ as $\delta\to 0$,}
$$
it follows also that
\begin{equation}\label{lambda- is large}
\inf_{\substack{U\subset \R^N\ \text{open}\\ |U|\leq \delta}} \lambda_{1}^{-}(U)\to \infty \quad\text{as $\delta\to 0$.}
\end{equation}
We thus can show the following version of a small volume maximum principle for doubly antisymmetric supersolutions.
\begin{proposition}\label{lem:0.2}
Let $c_{\infty}>0$. Then there is $\delta>0$ such that the following is true. For all $U\subset H_1\cap H_2$ open with $|U|\leq \delta$, $c\in L^{\infty}(U)$ with $c\leq c_{\infty}$, and all doubly antisymmetric supersolutions $w$ of \eqref{weak-problem-doubly} it follows that $w\geq0$ in $H_1\cap H_2$.
\end{proposition}
\begin{proof}
Let $c_{\infty}>0$. By \eqref{lambda- is large}, we may fix $\delta>0$ such that $c_{\infty}\leq \lambda_{1}^{-}(U)$ for all open sets $U\subset H_1\cap H_2$ with $|U|\leq \delta$. Fix such an open set $U$ and let $c\in L^{\infty}(U)$ with $c\leq c_{\infty}$. Note that we may reflect $c$ evenly across $\partial H_1$. Then, for every $\varphi\in {\mathcal H}^s_0(V)$, $V=U\cup r_1(U)$, which is antisymmetric with respect to $\partial H_1$ and satisfies $\varphi\geq 0$ in $U$, we have:
\begin{align*}
{\mathcal E}_s(w,\varphi)={\mathcal E}_s(w,1_U\varphi)+{\mathcal E}_s(w,1_{r_1(U)}\varphi)\geq \int_U c(x)w(x)\varphi(x)+\int_{r_1(U)}c(x)w(x)\varphi(x)\ dx=\int_Vc(x)w(x)\varphi(x)\ dx.
\end{align*}
Here, we have used the antisymmetry of $w$ and $\varphi$ with respect to $\partial H_1$ and Lemma \ref{properties vs} to have $1_U\varphi\in {\mathcal H}^s_0(U)$, $1_{r_1(U)}\varphi\in {\mathcal H}^s_0(r_1(U))$, and
\begin{align*}
{\mathcal E}_s(w,1_{r_1(U)}\varphi)={\mathcal E}_s(w\circ r_1,1_U\varphi\circ r_1)={\mathcal E}_s(w,1_U\varphi)\geq \int_Uc(x)w(x)\varphi(x)\ dx=\int_{r_1(U)}c(x)w(x)\varphi(x)\ dx,
\end{align*}
since we extended $c$ evenly across $\partial H_1$. Then $v=w^-1_{H_1}1_{H_2}-w^+1_{H_1^c}1_{H_2}\in {\mathcal H}^s_0(U\cup r_1(U))$ by Lemma \ref{test-function} and we have by symmetry
\begin{align*}
{\mathcal E}_s(w,v)&=\int_{U}c(x)w(x)w^-(x)\ dx-\int_{r_1(U)}c(x)w(x)w^+(x)\ dx=-\int_{U}c(x)(w^-(x))^2\ dx-\int_{r_1(U)}c(x)(w^+(x))^2\ dx\\
&\geq- \lambda_{1}^{-}(U)\Bigg(\int_{U} (w^-(x))^2\ dx+\int_{r_1(U)} (w^+(x))^2\ dx\Bigg)=- \lambda_{1}^{-}(U)\|v\|_{L^2(V)}^2\geq - {\mathcal E}_s(v,v).
\end{align*}
Hence, by Lemma \ref{test-function}, we have $0\leq {\mathcal E}_s(w,v)+{\mathcal E}_s(v,v)\leq 0$, which can only be true if $v\equiv 0$, that is, if $w\geq 0$ in $H_1\cap H_2$.
\end{proof}
In the next statement, we give a Hopf type lemma for equation \eqref{weak-problem-doubly} similar to \cite[Proposition 3.3]{FJ15}.
\begin{proposition}
\label{Hopf-lemma-doubly-anti}
Let $U\subset H_1\cap H_2$ be open. Furthermore, let $c\in L^\infty(U)$ and let $u\in {\mathcal V}^s(U)$ be a doubly antisymmetric supersolution of \eqref{weak-problem-doubly}. Assume $u\geq0$ in $H_1\cap H_2$. Then either $u\equiv 0$ or $u>0$ in $U$ in the sense that
\[
\essinf_{K}u>0\quad\text{for all compact sets $K\subset U$.}
\]
Moreover, if there is $x_0\in \partial U\setminus [\partial H_1\cup \partial H_2]$ such that
\begin{enumerate}
\item there exists a ball $B\subset U$ with $\partial B\cap \partial U=\{x_0\}$ and $\lambda_{1}^{-}(B)\geq \|c\|_{L^{\infty}(U)}$, and
\item $u(x_0)=0$,
\end{enumerate}
then there exists $C>0$ such that
$$
u\geq C\delta^s_{B}\qquad\text{in}\qquad B,
$$
where $\delta_B$ denotes the distance to the boundary of $B$, and, in particular, if $u\in C(B)$, then
$$
\liminf_{t\downarrow 0}\frac{u(x_0-t\nu(x_0))}{t^s}>0.
$$
\end{proposition}
\begin{proof}
Assume $u\not\equiv 0$. Then there exists a compact set $K\subset H_1\cap H_2$ with $|K|>0$ such that
\begin{equation}
\label{choice-epsilon}
\epsilon:=\essinf_K u>0.
\end{equation}
Let $B\subset U$ be an open ball such that $\dist(B,K)>0$ and $\partial B\cap \partial H_i=\emptyset$ for $i=1,2$. By making $B$ smaller if necessary, we may assume
\begin{equation}
\label{choice-B}
\lambda_{1}^{-}(B)\geq \|c\|_{L^{\infty}(U)}\,.
\end{equation}
Let $\psi_{B}\in {\mathcal H}^s_0(B)$ be the solution to
$$
(-\Delta)^s \psi_{B} = 1\qquad\text{in}\qquad B
$$
Recall that there exists $c_i=c_i(N,s,B)>0$, $i=1,2$ such that $c_1\delta^s_B\leq \psi_B\leq c_2\delta^s_B$. For any $\alpha>0$, we define
$$
\overline u :=\psi_{B} +\alpha 1_{K}-\psi_{r_1(B)}-\alpha 1_{r_1(K)}\qquad\text{and}\qquad w := \overline u-\overline u\circ r_2
$$
It is clear that $w\circ r_1 = -w=w\circ r_2$, that is, $w$ is doubly antisymmetric. Let $\varphi\in{\mathcal H}^s_0(B)$ with $\varphi\geq 0$. Then, we have
\begin{align}
&{\mathcal E}_s(w,\varphi)={\mathcal E}_s(\overline u,\varphi)-{\mathcal E}_s(\overline u\circ r_2,\varphi)\notag\\
&= \int_{B}\varphi(x)\,dx-\alpha c_{N,s}\int_{B}\varphi(x)\int_{K}\frac{dy}{|x-y|^{N+2s}}dx+\alpha c_{N,s}\int_{B}\varphi(x)\int_{r_1(K )}\frac{dy}{|x-y|^{N+2s}}dx\nonumber\\
&\quad+c_{N,s}\int_{B}\varphi(x)\int_{r_1(B)}\frac{\psi_B(y)}{|x-y|^{N+2s}}dy\,dx+c_{N,s}\int_{B\times B}\frac{\psi_B(x)\varphi(y)}{|x-r_2(y)|^{N+2s}}dx\,dy+\alpha c_{N,s}\int_{K}\int_{B}\frac{\varphi(y)}{|x-r_2(y)|^{N+2s}}dy\,dx\nonumber\\
&\quad-\alpha c_{N,s}\int_{r_1(K)\times r_2(B)}\frac{\varphi(r_2(y))}{|x-y|^{N+2s}}dx\,dy-c_{N,s}\int_{r_1(B)\times r_2(B)}\frac{\psi_{r_1(B)}(x)\varphi(r_2(y))}{|x-y|^{N+2s}}dx\,dy\nonumber\\
&=\int_{B}\varphi(x)\Big( 1-\alpha c_{N,s}\int_{K}\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|x-r_1(y)|^{N+2s}}-\frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy\nonumber\\
&\quad+c_{N,s}\int_{B}\frac{\psi_B(r_1(y))}{|x-r_1(y)|^{N+2s}}dy+ c_{N,s}\int_{B}\frac{\psi_B(y)}{|x-r_2(y)|^{N+2s}}dy-c_{N,s}\int_{B}\frac{\psi_{r_1(B)}(r_1(y))}{|x-r_{1,2}(y)|^{N+2s}}dy\Big)\nonumber\\
&\leq \int_{B}\varphi(x)\Big( 1-\alpha c_{N,s}\int_{K}\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|x-r_1(y)|^{N+2s}}-\frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy\nonumber\\
&\quad+c_{N,s}\|\psi_B\|_{L^\infty(\R^N)}\int_{B}\Big[\frac{1}{|x-r_1(y)|^{N+2s}}+ \frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy\Big)\nonumber\\
&\leq \int_{B}\varphi(x)\Big( \kappa-\alpha c_{N,s}\int_{K}\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|x-r_1(y)|^{N+2s}}-\frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy\Big),\label{bilinear-form}
\end{align}
with
$$
\kappa := 1+c_{N,s}\|\psi_B\|_{L^\infty(\R^N)}\sup_{x\in B}\int_{B}\Big[\frac{1}{|x-r_1(y)|^{N+2s}}+ \frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy<\infty,
$$
where we have used that the boundary of $B$ does not touch $\partial H_1\cup \partial H_2$. Since $\overline{B}$ and $K$ are compactly contained in $H_1\cap H_2$, it follows that
$$
C:=\inf_{x\in B,\ y\in K} \Bigg(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|x-r_1(y)|^{N+2s}}- \frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Bigg)>0.
$$
Since $c\in L^{\infty}(U)$ and $\psi_B\in L^{\infty}(\R^N)$, we may hence choose $\alpha$ large enough so that
$$
\kappa-\alpha c_{N,s}\int_{K}\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|x-r_1(y)|^{N+2s}}-\frac{1}{|x-r_2(y)|^{N+2s}}+\frac{1}{|x-r_{1,2}(y)|^{N+2s}}\Big]dy\leq c(x)\psi_B(x)\quad\forall\,x\in B.
$$
Consequently, equation \eqref{bilinear-form} gives
$$
{\mathcal E}_s(w,\varphi)\leq \int_{B}c(x)\varphi(x)\psi_B(x)\,dx\qquad \text{for all nonnegative $\varphi\in {\mathcal H}^s_0(B)$.}
$$
Therefore $-w$ satisfies in weak sense
\begin{equation}
\label{problem in B}
\left\{
\begin{aligned}
(-\Delta)^s(-w)&\geq c(x)(-w) &&\text{in $B$,}\\
(-w)&\geq 0 &&\text{in $H_1\cap H_2\setminus B$,}\\
-w\circ r_i&=w&&\text{in $\R^N$ for $i=1,2$.}
\end{aligned}\right.
\end{equation}
Next, consider $u_\epsilon:=u-\frac{\epsilon}{\alpha} w$ with $\epsilon$ given in \eqref{choice-epsilon}. Then $u_{\epsilon}$ also satisfies \eqref{problem in B} in the weak sense, where the \textit{nonlocal boundary condition} holds by the choice of $\epsilon$. By \eqref{choice-B} and Proposition \ref{lem:0.2} we conclude that $u\geq \frac{\epsilon}{\alpha} \psi_B\geq \frac{\epsilon}{\alpha} c_1\delta^s_B$ in $B$. Since $B$ was chosen arbitrarily, the above implies that $u>0$ in $U$ as stated.
If in addition there is $x_0\in \partial U\setminus[\partial H_1\cup \partial H_2]$ with the given properties, the above argument yields in particular
$$
\liminf_{t\downarrow 0}\frac{u(x_0-t\nu(x_0))}{t^s}\geq \frac{\epsilon}{\alpha} \lim_{t\downarrow 0}\frac{\psi_B(x_0-t\nu(x_0))}{t^s}>0.
$$
This finishes the proof.
\end{proof}
\begin{remark}
To put the Hopf type statement in Proposition \ref{Hopf-lemma-doubly-anti} into perspective, consider in Problem \eqref{main-prob} the nonlinearity $f(x,u)=|u|^{2^{\ast}_s-2}u$ with $2^{\ast}_s:=\frac{2N}{N-2s}$, the critical fractional exponent. It was shown in \cite{RS12-2} that there is no positive bounded solution if $\Omega$ is starshaped. To our knowledge, it remains an open question whether there is a \textit{sign-changing} solution to this problem. Assuming that $\Omega$ is bounded and starshaped with $C^{1,1}$ boundary and that there exists a bounded solution of \eqref{main-prob} with $f(x,u)=|u|^{2^{\ast}_s-2}u$, it first follows that $u\in C^s(\R^N)\cap C^{\infty}(\Omega)$ (see e.g. \cite{RS14}), and the fractional Pohozaev identity from \cite{RS12-2} implies
\[
\int_{\partial\Omega}\Big(\frac{u}{\dist(\cdot,\partial\Omega)^s}\Big)^2(x\cdot \nu)\ d\sigma=0.
\]
However, by \cite[Proposition 3.3]{FJ15} it then follows that if $\Omega$ has additionally a symmetry hyperplane $T$ and $u$ is odd with respect to reflections across this hyperplane and of one sign on one side of the hyperplane, then $\Big(\frac{u}{\dist(\cdot,\partial\Omega)^s}\Big)^2>0$ on $\partial \Omega\setminus T$. Whence, there cannot be such an odd solution of the problem. Similarly, using instead Proposition \ref{Hopf-lemma-doubly-anti}, it follows that there can also be no doubly antisymmetric solution of this problem if $\Omega$ satisfies (D).
\end{remark}
\section{Symmetry of solutions} \label{symmetry}
In the following, we use the notation from Section \ref{linear} and assume that $\Omega\subset \R^N$ satisfies (D). Moreover, let $f\in C(\Omega\times \R)$ satisfy (F1) and (F2), and let $u\in L^{\infty}(\Omega)\cap {\mathcal H}^s_0(\Omega)$ be a solution of problem \eqref{main-prob} which satisfies $u\circ r_{N,0}=-u$. Note that by (F1) and \cite{RS14} it follows that $u\in C^s(\R^N)$. For $\lambda\in \R$ we may then define
$$
v_{\lambda}(x)=u(r_{1,\lambda}(x))-u(x).
$$
Then it follows that $v_{\lambda}$ is antisymmetric with respect to $H_{N,0}$ and $H_{1,\lambda}$, hence doubly antisymmetric, and, due to (F2), it satisfies
\begin{equation}\label{linearization}
\left\{\begin{aligned}
(-\Delta)^sv_{\lambda}&\geq c_{\lambda}(x)v_{\lambda}&& \text{in $\Omega_{\lambda}:=\Omega\cap H_{N,0}\cap H_{1,\lambda}$,}\\
v_{\lambda} &\geq 0 && \text{in $H_{N,0}\cap H_{1,\lambda}\setminus \Omega_{\lambda}$,}
\end{aligned}\right.
\end{equation}
where
\[
c_{\lambda}(x)=\left\{\begin{aligned} &\frac{f(x,u(r_{1,\lambda}(x)))-f(x,u(x))}{u(r_{1,\lambda}(x))-u(x)} && u(r_{1,\lambda}(x))\neq u(x)\\
&0 && u(r_{1,\lambda}(x))=u(x)
\end{aligned}\right.
\]
Note that by assumption (F1) we have
\[
\sup_{\lambda\in \R}\sup_{x\in \Omega_{\lambda}}|c_{\lambda}(x)|=:c_{\infty}<\infty.
\]
Finally, let $\lambda_1:=\sup_{x\in \Omega} x_1$.
\begin{proof}[Proof of Theorem \ref{main-thm1}]
Assume that $u$ is nontrivial. We apply the moving plane method to prove that $u$ is symmetric with respect to $H_{1,0}$ and decreasing in $x_1$. For this let
$$
\lambda_0:=\inf\{\lambda\in(0,\lambda_1)\;:\; v_{\mu}> 0 \, \text{ in $\Omega_{\mu}$ for all $\mu\in(\lambda,\lambda_1)$}\}
$$
Next note that by (D) and Proposition \ref{lem:0.2} it follows that there is $\epsilon>0$ such that $v_{\mu}\geq 0$ for all $\mu\in (\lambda_1-\epsilon,\lambda_1)$, and thus by Proposition \ref{Hopf-lemma-doubly-anti} we have $\lambda_0\leq \lambda_1-\epsilon$. Assume next by contradiction that $\lambda_0>0$. Then by continuity $v_{\lambda_0}\geq 0$ in $H_{N,0}\cap H_{1,\lambda_0}$. By Proposition \ref{Hopf-lemma-doubly-anti} it follows that either $v_{\lambda_0}\equiv 0$ or $v_{\lambda_0}>0$.\\
If $v_{\lambda_0}\equiv 0$, this implies that we have $u\equiv 0$ in $\Omega \setminus H_{1,2\lambda_0-\lambda_1}$. But then, we can also start moving the hyperplane from the left (working instead with $\R^N\setminus H_{1,\lambda}$), up to the same $\lambda_0$. It then follows that $u$ has two different parallel symmetry hyperplanes, but since $u$ vanishes outside of $\Omega$, this implies $u\equiv 0$, which cannot be the case.\\
If $v_{\lambda_0}>0$, let $\delta>0$ be given by Proposition \ref{lem:0.2} according to $c_{\infty}$. Then by continuity there are $\mu>0$ and a compact set $K\subset \Omega_{\lambda_0}$ such that $|\Omega_{\lambda_0}\setminus K|\leq \frac{\delta}{2}$ and $v_{\lambda_0}\geq 2\mu$ in $K$. Again, by continuity, we can find $\tau\in(0,\lambda_1-\lambda_0)$ such that $v_{\lambda}\geq \mu$ in $K$ for all $\lambda\in[\lambda_0-\tau,\lambda_0]$. Let $U_{\lambda}:=\{x\in \Omega_{\lambda}\;:\; v_{\lambda}<0\}$. Then, by making $\tau$ smaller if necessary, we may also assume $|U_{\lambda}|\leq \delta$ for all $\lambda\in[\lambda_0-\tau,\lambda_0]$. A combination of Propositions \ref{lem:0.2} and \ref{Hopf-lemma-doubly-anti} gives a contradiction to the definition of $\lambda_0$.\\
Whence, $\lambda_0>0$ is not possible. Thus $\lambda_0=0$ and this finishes the proof.
\end{proof}
\section{A symmetric sign-changing solution} \label{application}
Let $\Omega\subset \R^N$ be open and bounded and consider the functional
$$
J:{\mathcal H}^s_0(\Omega)\to \R, \qquad J(u)={\mathcal E}_s(u,u).
$$
Let $M:=\{u\in {\mathcal H}^s_0(\Omega)\;:\; u=-u\circ r_{N,0},\ \int_{\Omega}|u(x)|^p\ dx=1\}$ with $1<p<\frac{2N}{N-2s}$. Then by a constrained minimization argument, using the framework explained e.g.\ in \cite{SV12,SV13} (see also \cite{BP16}), it follows that there exists a minimizer $u$ of $J|_{M}$. That is, the minimum
\begin{equation}\label{defi-lambda}
\lambda_{1,p}^-=\min_{u\in M}{\mathcal E}_s(u,u)
\end{equation}
is attained. Similar to \cite[Theorem 3.1]{BP16}, it can be shown that this minimizer is bounded and then, by an iteration of the results of \cite{RS14,G15:2}, we have $u\in C^{\infty}(\Omega)$. If in addition $\partial \Omega$ is of class $C^{1,1}$, then \cite{RS14,G15:2} also imply that $u\in C^s(\R^N)$.
\begin{proof}[Proof of Theorem \ref{main-thm2}]
Let $\lambda^-_{1,p}$ be as in \eqref{defi-lambda}
and let $u$ be a minimizer as explained in the above remarks. In view of Theorem \ref{main-thm1} it remains to show that $u$ can be chosen of one sign in $\O^+:=\Omega\cap H_{N,0}$. In the following, $\O^-:=\O\setminus \O^+$. Assume by contradiction that $u$ changes sign in $\O^+$ and let $\O^+_1:=\{x\in \O^+: u(x)>0\}$ and $\O^+_2:=\{x\in \O^+: u(x)\leq 0\}$. We also let $\O^-_1 = r_{N,0}(\O^+_1)$ and $\O^-_2 = r_{N,0}(\O^+_2)$. Since $u\circ r_{N,0}=-u$, it is clear that $u<0$ in $\O^-_1$ and $u\geq 0$ in $\O^-_2$. Now let $\overline u$ be defined by
\begin{equation}
\overline u = 1_{\O^+}|u|-1_{\O^-}|u|.
\end{equation}
Then $\overline u\in M$, that is $\overline u\in {\mathcal H}^s_0(\O)$ satisfies $\overline u\circ r_{N,0}=-\overline u$ and
\begin{equation}\label{eq:l2-norm-ov u}
\int_{\O} |\overline u|^pdx = \int_{\O}(|\overline u|^2)^{p/2}dx=\int_{\O}(1_{\O^+} |u|^2+1_{\O^-}|u|^2)^{p/2}dx= \int_{\O}|u|^pdx=1.
\end{equation}
Moreover, we have
\begin{align}\label{eq:semi-norm-ov u}
&\frac{2}{C_{N,s}}{\mathcal E}_s(\overline u,\overline u)=\int_{\R^N\times\R^N} \frac{\big(\overline u(x)-\overline u(y)\big)^2}{|x-y|^{N+2s}}dxdy=\int_{\O\times\O} \frac{\big(\overline u(x)-\overline u(y)\big)^2}{|x-y|^{N+2s}}dxdy+2\int_{\O}\overline u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}}\,dx\nonumber\\
&=\int_{\O^+\times\O} \frac{\big(\overline u(x)-\overline u(y)\big)^2}{|x-y|^{N+2s}}dxdy+\int_{\O^-\times\O} \frac{\big(\overline u(x)-\overline u(y)\big)^2}{|x-y|^{N+2s}}dxdy+4\int_{\O^+}\int_{\R^N\setminus\O}\frac{ u^2(x)dy}{|x-y|^{N+2s}}\,dx\nonumber\\
&=\int_{\O^+\times\O^+} \frac{\big( |u(x)|- |u(y)|\big)^2}{|x-y|^{N+2s}}dxdy+\int_{\O^-\times\O^-} \frac{\big( |u(x)|- |u(y)|\big)^2}{|x-y|^{N+2s}}dxdy+2\int_{\O^-\times\O^+} \frac{\big( |u(x)|+ |u(y)|\big)^2}{|x-y|^{N+2s}}dxdy\nonumber\\
&\qquad\qquad +4\int_{\O^+} u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}}\,dx.
\end{align}
Using the notation above, we rewrite
\begin{align}\label{eq:int-omega+}
\int_{\O^+\times\O^+} \frac{\big( |u(x)|- |u(y)|\big)^2}{|x-y|^{N+2s}}dxdy &= \int_{\O^+\times\O^+} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy +2\int_{\O^+_1\times\O^+_2}\frac{(u(x)+u(y))^2-(u(x)-u(y))^2}{|x-y|^{N+2s}}dxdy\nonumber\\
&=\int_{\O^+\times\O^+} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy+8\int_{\O^+_1\times\O^+_2}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy.
\end{align}
Similarly we have
\begin{align}\label{eq:int-omega-}
\int_{\O^-\times\O^-} \frac{\big( |u(x)|- |u(y)|\big)^2}{|x-y|^{N+2s}}dxdy &= \int_{\O^-\times\O^-} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy+8\int_{\O^-_1\times\O^-_2}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy\nonumber\\
& = \int_{\O^-\times\O^-} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy+8\int_{\O^+_1\times\O^+_2}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy.
\end{align}
Now using that $\O^-_j = r_{N,0}(\O^+_j)$, $j=1,2$, we get
\begin{align}\label{eq:int-omega+-}
\int_{\O^-\times\O^+} &\frac{\big( |u(x)|+|u(y)|\big)^2}{|x-y|^{N+2s}}dxdy = \int_{\O^-\times\O^+} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy+\int_{\O^-_2\times\O^+_1}\frac{(u(x)+u(y))^2-(u(x)-u(y))^2}{|x-y|^{N+2s}}dxdy\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\O^-_1\times\O^+_2}\frac{(u(x)+u(y))^2-(u(x)-u(y))^2}{|x-y|^{N+2s}}dxdy\nonumber\\
&=\int_{\O^-\times\O^+} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy+4\int_{\O^-_2\times\O^+_1}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy+4\int_{\O^-_1\times\O^+_2}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy\nonumber\\
&=\int_{\O^-\times\O^+} \frac{\big( u(x)- u(y)\big)^2}{|x-y|^{N+2s}}dxdy -8\int_{\O^+_1\times\O^+_2}\frac{u(x)u(y)}{|r_{N,0}(x)-y|^{N+2s}}dxdy.
\end{align}
Summing up \eqref{eq:int-omega+}, \eqref{eq:int-omega-} and \eqref{eq:int-omega+-}, and taking into account \eqref{eq:semi-norm-ov u}, we obtain
\begin{align}\label{eq:semi-norm- ov u-2}
&\frac{2}{C_{N,s}}{\mathcal E}_s(\overline u,\overline u)-4\int_{\O^+} u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}}\,dx = \int_{\R^N\times\R^N}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dxdy-2\int_{\O}u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}}\,dx\nonumber\\
&+16\int_{\O^+_1\times\O^+_2}\frac{u(x)u(y)}{|x-y|^{N+2s}}dxdy-16\int_{\O^+_2\times\O^+_1}\frac{u(x)u(y)}{|r_{N,0}(x)-y|^{N+2s}}dxdy.
\end{align}
By a change of variable it is clear that
\[2\int_{\O}u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}} dx= 4\int_{\O^+} u^2(x)\int_{\R^N\setminus\O}\frac{dy}{|x-y|^{N+2s}}\,dx.\]
Putting that into \eqref{eq:semi-norm- ov u-2} gives
\begin{align}\label{eq:semi-norm ov u-3}
\frac{2}{C_{N,s}}{\mathcal E}_s(\overline u,\overline u) &= \int_{\R^N\times\R^N}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dxdy+16\int_{\O^+_1\times\O^+_2}u(x)u(y)\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_{N,0}(x)-y|^{N+2s}}\Big]dxdy.
\end{align}
Now since $\overline u\circ r_{N,0} = -\overline u$, it follows from the variational characterization of $\lambda^-_{1,p}(\O)$ in \eqref{defi-lambda}, together with \eqref{eq:semi-norm ov u-3} and \eqref{eq:l2-norm-ov u}, that
\begin{align*}
\lambda^-_{1,p}(\O)&\leq {\mathcal E}_s(\overline u,\overline u)= {\mathcal E}_s( u, u)+8C_{N,s}\int_{\O^+_1\times\O^+_2}u(x)u(y)\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_{N,0}(x)-y|^{N+2s}}\Big]dxdy\\
&=\lambda^-_{1,p}(\O)+8C_{N,s}\int_{\O^+_1\times\O^+_2}u(x)u(y)\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_{N,0}(x)-y|^{N+2s}}\Big]dxdy.
\end{align*}
That is,
\[
0\leq\int_{\O^+_1\times\O^+_2}u(x)u(y)\Big[\frac{1}{|x-y|^{N+2s}}-\frac{1}{|r_{N,0}(x)-y|^{N+2s}}\Big]dxdy\leq 0.
\]
Here the first inequality follows from the minimality of $\lambda^-_{1,p}(\O)$, while the second holds because the integrand is pointwise nonpositive: $u(x)u(y)\leq 0$ on $\O^+_1\times\O^+_2$, and $|x-y|\leq |r_{N,0}(x)-y|$ for $x,y\in H_{N,0}$. Whence $u \equiv 0$ in $\O^+_2$ and therefore $u\geq 0$ in $\O^+$. This contradicts the assumption that $u$ changes sign in $\O^+$. It follows that $u$ does not change sign in $\O^+$ and, without loss of generality, we may assume $u\geq 0$ in $\O^+$. By the strong maximum principle \cite[Corollary $3.4$]{FJ15} we have $u>0$ in $\O^+$.\\
For the additional statement, let $p=2$ and let $u, v$ be two normalized minimizers for $\lambda^-_{1,2}(\Omega)$. Assume further that they satisfy the sign property in Theorem \ref{main-thm2}, i.e.\ that they are of one sign in $\Omega\cap H_{N,0}$. Then, if $u-v$ is not identically zero, it must change sign in $\Omega\cap H_{N,0}$. Indeed, if not, we may assume $u-v>0$ in $\O\cap H_{N,0}$ by \cite[Corollary $3.4$]{FJ15}. Therefore $1=\int_{\O}u^2dx = 2\int_{\O\cap H_{N,0}}u^2dx>2\int_{\O\cap H_{N,0}}v^2dx=1$, a contradiction. Note that if $u\not\equiv v$, then also $(u-v)/\|u-v\|_{L^2(\Omega)}$ is a minimizer. But by the above argument, $u-v$ cannot change sign in $\O\cap H_{N,0}$. Whence $u\equiv v$ as claimed.
\end{proof}
\textbf{Acknowledgements.} This work is supported by DAAD and BMBF (Germany) within the project 57385104. The authors would like to thank Mouhamed Moustapha Fall and Tobias Weth for helpful discussions.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
Cosmic Microwave Background (CMB) observations are inevitably contaminated at some level by foregrounds, from galactic dust to a range of extragalactic signals including the cosmic infrared background (CIB), Sunyaev-Zeldovich effect (SZ), and radio point sources. Extragalactic emissions are correlated with the large-scale structure from which they originate, and hence correlate with other tracers of the matter distribution over a similar redshift range \cite{planck2015-szcatalog,sptpol-szcatalog,advact-szcatalog, Holder:2002wb,Shirasaki:2018wdq,Allison:2015fac,dwek2002, Ade:2013aro,planck2015-cibXtsz,planck2015-sz,Song:2002sg}.
Gravitational lensing is an important effect on the CMB. It both modifies the observed power spectra and produces non-Gaussian signatures in the CMB maps that can be used for lensing reconstruction. The lensing signal correlates with the extragalactic foregrounds and this means that foreground masking has the potential to produce biased inferences from observations of the lensed CMB over the unmasked area.
In principle, the non-blackbody spectrum of the foregrounds (other than kinetic SZ) can be used to clean the foregrounds through multi-frequency observations. Foreground cleaning has been very successfully used, particularly on large scales, but inevitably comes with the cost of increased noise, especially on small scales where the observational noise becomes comparable to the observed signal. Alternatively, the foreground signal can simply be modelled, which is what is often done at the power spectrum level for CMB likelihood analysis. In both cases, it is often necessary to also apply some masking to the brightest sources, including the galactic plane (which is not correlated to large-scale structure and hence does not introduce a direct bias) but also extragalactic sources. For CMB lensing reconstruction, the non-Gaussianity of the foregrounds is important and can produce a direct bias on lensing estimates that is difficult to model~\cite{Osborne:2013nna,vanEngelen:2013rla}. For tSZ foregrounds, the largest non-Gaussianity is associated with the brightest tSZ clusters, and hence can be substantially reduced by cluster masking~\cite{Osborne:2013nna,vanEngelen:2013rla}. Point sources and CIB foreground non-Gaussianity can also be substantially reduced by masking the brightest sources or through component separation.
A variety of other methods have been suggested to reduce lensing biases from small-scale temperature reconstruction~\cite{namikawa2013,Schaan:2018tup,Darwish:2020fwf,Madhavacheril:2018bxi,Fabbian:2019tik}; however, these are usually only applied after the brightest sources have already been masked out. To extract reliable information from small-scale CMB temperature observations it is therefore likely to be necessary to understand the impact of the
source masking, especially if the correlated masking introduces substantial biases.
Recent ACT lensing analyses successfully performed lensing reconstruction on maps where extragalactic sources were subtracted with a dedicated procedure, without masking the corresponding sky areas~\cite{Darwish:2020fwf,Naess:2020wgi}. Despite the success of the method, it is unclear whether it will remain sufficiently accurate for the analysis of future high-sensitivity experiments. Source subtraction may itself alter the underlying CMB signal on small scales and become problematic, in particular for extended tSZ-detected clusters, and a dedicated assessment of potential systematics introduced by the technique would have to be carried out in any case.
Masking the observed CMB data with a lensing-correlated mask introduces a number of different effects. Firstly, as shown in \cite[hereafter, Paper I]{Fabbian:2020yjy}, the CMB power spectra on the unmasked area can be significantly altered on small scales (and large scales for B-mode polarization), since there is a net scale-dependent demagnification over the unmasked area. Although estimated to be negligible for \Planck~\cite{PL2018}, the impact is potentially larger for forthcoming high-resolution CMB experiments where more of the information is in the small-scales, precisely where the foregrounds are more important. This has the potential to bias parameter constraints if not consistently accounted for.
Secondly, if the CMB lensing potential is estimated using standard quadratic estimators~\cite{Hu:2001kj}, the mask complicates its normalization and the noise biases to the lensing power spectrum, even for masks that are uncorrelated to the signal. For uncorrelated masks, these effects can be estimated analytically in some cases (see Appendix~\ref{sec:holes}), or quantified for a fiducial model using independent lensed CMB simulations. After correcting for these effects, if the mask is actually correlated to the signal there will be additional biases because: 1) the areas of high convergence have been preferentially masked directly biasing the actual lensing power over the unmasked area; 2) the normalization, mean-field and Gaussian noise bias ($N_L^{(0)}$\xspace) are different because of the differences in the lensed CMB power spectra over the unmasked areas; 3) the $N_L^{(1)}$\xspace bias due to non-primary lensing contractions~\cite{Kesden:2003cc} is changed due to the different CMB and lensing power; 4) non-Gaussian biases, specifically $N_L^{(3/2)}$\xspace related to non-linear large-scale structure growth and post-Born lensing effects~\cite{Bohm:2016gzt,Pratten:2016dsm,Bohm:2018omn,Beck:2018wud,Fabbian:2019tik}, are modified due to the changed non-Gaussianity (e.g. reduced skewness) of the masked field and related changes in the power spectra.
Effects of LSS-correlated masking in the lensing reconstruction might also impact the estimated CMB lensing field itself, and therefore the cosmological constraints involving its statistics beyond the power spectrum \cite{Liu:2016nfs}. Moreover, it might bias the cross-correlation with external matter tracers. These are powerful cosmological and astrophysical probes for a wide range of physical phenomena.
Cross-correlation between CMB lensing and galaxy survey data helps improve cosmological constraints by breaking parameter degeneracies (e.g., involving galaxy bias) and by measuring nuisance parameters associated with sources of systematic error (e.g., lensing multiplicative bias, photometric redshift errors) \cite{Vallinotto:2011ge,Schaan:2016ois,Cawthon:2018acr,Schaan:2020qox}. For future galaxy surveys, such as Euclid and LSST, this approach is likely to become the standard analysis to obtain cosmological constraints \cite{Euclid:2021qvm, Sailer:2021yzm}.
The cross-correlation between CMB lensing and the extragalactic emissions can also be a useful source of additional information. The tSZ-CMB lensing cross-correlation ($\kappa\times y$) signal is a unique probe of the physics of the intracluster medium at high redshift, $z\approx 1$, and in relatively low-mass clusters and groups ($10^{13}\lesssim M \lesssim 10^{15} M_{\odot}\xspace/h$). It is also more sensitive to contributions from structures located in dark matter halos of lower masses than the tSZ auto-power spectrum or the cross-correlation with galaxy lensing measurements \cite{Hill:2013dxa,Battaglia:2014era,Hojjati:2014usa,VanWaerbeke:2013cfa}. Furthermore, it is a powerful cosmological probe due to its strong dependence on $\sigma_8$ and $\Omega_m$ \cite{Osato:2017cva}.
The CIB emission, conversely, is generated by redshifted thermal radiation from UV-heated dusty star-forming galaxies (DSFGs), whose redshift distribution peaks between $1\lesssim z \lesssim 2$ \cite{Bethermin:2012ta,Maniyar:2020tzw}. The kernel of CMB lensing peaks around the same redshift range, and CMB photons are mainly lensed by halos of mass $\approx 10^{11}-10^{13} M_{\odot}\xspace$, similar to those hosting DSFGs \cite{Song:2002sg}. As such, CIB and CMB lensing are highly correlated. Their cross-correlation ($\kappa\times\mathrm{CIB}$) was first measured by \Planck~\cite{Planck:2013qqi} in multiple frequency bands with high statistical significance. Such a strong correlation, as high as 80\% at 545\,GHz, allows the star formation rate density and the mass of the halos hosting the CIB to be constrained over a wide range of redshifts, $1\lesssim z\lesssim 7$. Future CMB lensing and CIB measurements from the Simons Observatory~\cite[SO hereafter]{Ade:2018sbj} and CCAT-prime~\cite{Aravena:2019tye} are expected to further improve the constraining power on these astrophysical processes \cite{McCarthy:2020qjf}.
The high degree of correlation between CIB and CMB lensing can also be exploited to construct templates of the lensed B-modes for delensing analyses using CIB as an external LSS tracer alone, or in combination with CMB lensing itself \cite{Sherwin:2015baa,Yu:2017djs}.
Potential biases due to LSS-correlated masking on $\kappa\times \mathrm{CIB}$ would not directly translate into a misinterpretation of the delensed B-mode signal (and hence into a bias on $r$), as the correlation between CIB and CMB lensing is usually not assumed from a model but directly measured from the data itself. Nevertheless, it is important to understand whether these effects might impact the expected delensing efficiency for future experiments.\\
Building on our work of \citetalias{Fabbian:2020yjy}\xspace, in this paper we study and quantify the impact of the mask bias for lensing reconstruction. The main aim of this paper is to identify the important sources of correlated mask bias, provide some qualitative understanding of them, and approximately quantify their size. Future work will be required to make accurate and fully self-consistent predictions including the impact of potential foreground residuals in the masked CMB in addition to the effect of the lensing-correlated mask.
In Sec.~\ref{sec:matter}, we give simple analytic models of the expected effect of foreground masking for various types of mask, taking the masks to apply directly to the lensing convergence field without modelling lensing reconstruction.
In Sec.~\ref{sec:masks}, we describe the realistic simulations, including extragalactic foreground emissions as well as the effect of non-linear evolution of large-scale structure (LSS), that we used to model the relevant effects and validate our analytic models.
In Sec.~\ref{sec:recon}, we show the impact of the mask bias on the final estimated lensing potential power spectrum from lensed CMB maps, including the effects of optimal CMB filtering and lensing reconstruction. In Sec.~\ref{sec:cmblens-results}, we also quantify the impact on the cross-correlation power spectrum with the true convergence field and extragalactic foregrounds, and assess the impact on cluster mass calibration if the lensing field recovered inside a cluster mask is used directly without further modelling. Finally, in Sec.~\ref{sec:forecasts}, we estimate the impact of these biases on the near-future CMB measurements.
Modelling the full effect of mask bias on the reconstructed CMB lensing potential field is challenging analytically, but we give some analytic results in Appendix~\ref{sec:holes} for the case of uncorrelated circular mask holes.
For near-future observations, such as SO, the temperature signal still contains a substantial fraction of the available information, so fully exploiting the data will require robust modelling of the temperature signal, which is what we focus on in this work. The contamination induced by foregrounds is less important for CMB lensing reconstruction using polarization \cite{Beck:2020dhe}, since the polarized foreground amplitudes are expected to be substantially lower, but should also be assessed in future work (future observations by CMB-S4~\cite[S4 hereafter]{Abazajian:2019eic} will give lensing reconstructions that are more dominated by polarization).
\section{Lensing power spectra with LSS-correlated masks}\label{sec:matter}
A simple estimate of the effect of an LSS-correlated mask on the CMB lensing power can be obtained by estimating the power spectrum of the true $\kappa$ field over the unmasked area and comparing its value with the one computed over the full sky. This operation can only be performed on simulations, since $\kappa$ is not measurable directly but has to be reconstructed from lensed CMB maps.
However, direct $\kappa$ masking is more easily modelled analytically, though it would only provide a good estimate of the expected amplitude of the effect if the lensing reconstruction were a fully local, noiseless, unbiased estimate of the true underlying field. We use it as a simple model to gain some analytic insight into the impact of correlated masking without the complications induced by the lensing reconstruction estimator. We give simple analytic models of the direct masking effects for all the masks described in the next section, and also compare these analytic predictions with measurements based on direct masking of simulations. In the following, we assume sky masks to be uncorrelated with the unlensed CMB and ignore the correlation between CMB lensing and CMB temperature (i.e. $C_L^{T\phi}=0$) induced by the integrated Sachs-Wolfe effect in both the analytical modelling and in the simulation measurements.
\subsection{Halo model}\label{sec:halomodel}
A qualitative explanation of the effect of masking peaks of the density field can be obtained using the halo model of large-scale structure (see e.g. Ref.~\cite{Cooray:2002dia} for a review). In this model, all the matter in the universe is distributed inside haloes with a roughly universal spherically-averaged profile $\rho_M(r)$ that depends only on the halo mass. In the following we take $\rho_M(r)$ to be a Navarro-Frenk-White (NFW) profile~\cite{NFW} with the mass-concentration relation of Ref.~\cite{duffy2008}. The matter power spectrum $P(k)$ is built as the sum of two separate parts, the one- and two-halo terms ($1h$ and $2h$ respectively), that correlate points within the same halo or in two different halos respectively. This reads
\begin{eqnarray}
P(k) &\equiv& P_{1h}(k) + P_{2h}(k)\label{eq:halo-model}\\
P_{1h}(k) &\propto& \int dM \frac{dn}{dM} |\rho_M(k)|^2\label{eq:1-halo}\\
P_{2h}(k) &\propto& P^{\rm lin} (k)\left(\int dM \frac{dn}{dM} b(M) \rho_M(k) \right)^2,\label{eq:2-halo}
\end{eqnarray}
where $M$ is the halo mass, ${dn}/{dM}$ is the halo mass function which describes the number density of halos of a given mass, $b(M)$ is the halo bias, $\rho_M(k)$ is the Fourier transform of $\rho_M(r)$ and $P^{\rm lin}(k)$ is the linear matter power spectrum. To compute the integrals of Eqs.~\eqref{eq:1-halo} and \eqref{eq:2-halo} we used the public \texttt{hmvec} code\footnote{\url{https://github.com/simonsobs/hmvec}} and a Sheth and Tormen \cite{sheth-tormen} mass function. To emulate the masking of a nearly mass-limited sample of halos, we truncate the integration over the halo mass function in both terms of Eq.~\eqref{eq:halo-model} at different mass limits, such as those expected for the SO and S4 tSZ-selected cluster samples (see the next section for more details). This approximation can be refined to include selection function effects of each experiment \citep{Rotti:2020rdl}.
The signature of such a cut is twofold: a large direct suppression of power on small scales from the $1h$ term, and, since the most massive halos form in the highest peaks of the linear density field and are highly biased, a suppression on large scales as well.
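To make the role of the mass cut concrete, the following sketch evaluates toy versions of the $1h$ and $2h$ integrals of Eqs.~\eqref{eq:1-halo} and \eqref{eq:2-halo} with and without a cut; the mass function, bias and profile below are illustrative placeholders, not the Sheth-Tormen and NFW ingredients of our actual \texttt{hmvec} computation.
\begin{verbatim}
# Toy sketch of how a mass cut enters the 1h and 2h integrals; the
# mass function, bias and profile below are placeholders, not the
# Sheth-Tormen/NFW ingredients used with hmvec in the paper.
import numpy as np
from scipy.integrate import simpson

M = np.logspace(11, 17, 200)                 # halo mass [Msun/h]

def dn_dM(M):                                # toy mass function
    return 1e-4 * (M / 1e13)**-1.9 * np.exp(-np.sqrt(M / 5e15)) / M

def bias(M):                                 # toy halo bias
    return 0.7 + (M / 1e13)**0.35

def u_M(k, M):                               # toy normalized profile FT
    R = (M / 1e13)**(1.0 / 3.0)              # ~ scale radius [Mpc/h]
    return 1.0 / (1.0 + (k * R)**2)

def halo_terms(k, M_cut=np.inf):
    m = M[M <= M_cut]                        # <-- the mass cut
    w = dn_dM(m) * m                         # Jacobian for d(ln M)
    P1h = simpson(w * m**2 * u_M(k, m)**2, x=np.log(m))
    I2h = simpson(w * m * bias(m) * u_M(k, m), x=np.log(m))
    return P1h, I2h**2                       # 2h part still times P_lin(k)

for k in [0.1, 1.0, 10.0]:                   # k in h/Mpc
    full, cut = halo_terms(k), halo_terms(k, M_cut=3e14)
    print(k, cut[0] / full[0], cut[1] / full[1])
\end{verbatim}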
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width = 0.497\textwidth]{delta_Pk_tot_vs_halomass_fullrange_consistency_z1.pdf}&
\includegraphics[width = 0.503\textwidth]{deltacl_clkk_fullrange_consistency_Mmin1e5Mmax1e17_kstar0p010_allterms.pdf}
\end{tabular}
\caption{\emph{Left:} fractional change in the halo-model matter power spectrum due to cutting the most massive haloes at $z\approx 1$, close to the peak of the CMB lensing kernel. The ratio is shown as a function of $k\chi \sim \ell$. \emph{Right:} fractional impact of the most massive halos on the corresponding CMB lensing convergence angular power spectrum. In both panels, the dashed and dotted lines show the $1h$ and $2h$ contributions, which sum up to the solid lines. We show results for both Sheth and Tormen \cite{sheth-tormen, st-massfunction01, st-massfunction02} (ST) and Tinker et al. 2010 \cite{tinker2010} mass functions. The choice of the mass function has little impact on the results.}
\label{fig:halomodel}
\end{figure*}
In Fig.~\ref{fig:halomodel}, we show the ratio between the 2D projection (in the Limber approximation) of the density power spectrum truncated at two different mass cuts (blue and orange), and our reference power spectrum, which (very conservatively) uses $10^{18}M_{\odot}\xspace$ as the upper limit of the integration. The solid lines show the ratio of the spectra including both the $1h$ and $2h$ terms at $z=1.03$; the dashed lines show the effect of the mass cut on the $1h$ term, while the dotted lines show the same effect for the $2h$ term. The suppression of power on small scales is visible, as is a large dip at $\ell\simeq k\chi \sim 3000$, while the $2h$ contribution dominates on large scales.
Since our goal is to assess the impact of masking on CMB lensing, we used the matter power spectrum with and without the mass cut in the computation of the CMB lensing convergence power spectrum
\begin{equation}
C_L^{\kappa\kappa} =\frac{9H_0^4\Omega^2_{m,0}}{4c^4}\!\int_0^{\chi_s}\!d\chi \left(\frac{\chi_s - \chi}{\chi_s}\right)^2\!P\left(\frac{L+1/2}{\chi},\chi\right),
\label{eq:ckk}
\end{equation}
where $\chi$ is the comoving distance and $\chi_s$ is the comoving distance to the CMB. As noted in Ref.~\cite{munchmeyer2018}, the $2h$ contribution is not naturally built to match the expectation of linear cosmological perturbation theory. We follow their approach and normalize the $2h$ contribution to match the linear theory prediction when computed over our reference range of integration, and consistently apply the same normalization when the integration range is cut. We regularized the $1h$ term on large scales by exponentially damping the contribution of matter perturbations on scales $k\leq 0.01\,h/\mathrm{Mpc}$ in the integrand of Eq.~\eqref{eq:ckk} \cite{Cooray:2002dia}. The overall change observed in $C_L^{\kappa\kappa}$ is a consequence of the superposition of the power suppression observed in the matter power spectrum at different redshifts, weighted by the lensing kernel.\\*
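For reference, the Limber integral of Eq.~\eqref{eq:ckk} can be evaluated numerically along the following lines; the matter power spectrum below is a toy placeholder, whereas in our calculation the halo-model spectra with and without the mass cut enter at this point.
\begin{verbatim}
# Schematic Limber evaluation of C_L^{kappa kappa} (Eq. eq:ckk); the
# matter power spectrum here is an illustrative placeholder.
import numpy as np

H0, Om, c = 67.0, 0.31, 2.998e5        # km/s/Mpc, -, km/s
chi_s = 14000.0                        # comoving distance to CMB [Mpc]

def P_matter(k, chi):                  # placeholder P(k, chi)
    return 2e4 * k / (1.0 + (k / 0.02)**2)**1.5

def cl_kappa(L, n=2048):
    chi = np.linspace(1.0, chi_s, n, endpoint=False)
    kernel = ((chi_s - chi) / chi_s)**2
    P = P_matter((L + 0.5) / chi, chi)
    return 9 * H0**4 * Om**2 / (4 * c**4) * np.trapz(kernel * P, chi)

for L in [30, 300, 3000]:
    print(L, cl_kappa(L))
\end{verbatim}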
It may also be possible to extend the halo-model approach to make predictions for masking bright infrared sources, or to model the tSZ intensity more directly. However, we do not pursue this further here since, as we discuss in Sec.~\ref{sec:recon}, the actual response of the lensing reconstruction to masking is rather different due to the filtering of the CMB maps \cite{Maniyar:2020tzw}.
\subsection{Gaussian foreground peaks model}\label{sec:gaussianmodel}
When the mask is obtained by masking the peaks of a foreground field, we can construct an analytic model by approximating both the foreground field and the lensing convergence as being Gaussian, with some known covariance. More generally, we can consider the effect of masking on the cross-correlation function of any two Gaussian fields (which we could take to both be the lensing convergence, or, for a cross-correlation case, the lensing convergence and some tracer of the matter density).
In real space, following a similar argument to \citetalias{Fabbian:2020yjy}\xspace, the main quantity of interest is the masked correlation function of two fields $A$ and $B$
\begin{align}
\xi_{\rm masked}\xspace^{AB}(r) &\equiv \left\langle {W}(\boldvec{x}) A(\boldvec{x}) {W}(\boldvec{x}')B(\boldvec{x}')\right\rangle,
\label{eq:maskedxi}
\end{align}
where $\boldvec{x}' = \boldvec{x} + \vr$ and $W(\boldvec{x})$ is a mask window function.
We assume that some underlying Gaussian statistically-isotropic `foreground' field $f(\boldvec{x})$ determines the mask probability locally, so that $ {W}(\boldvec{x})$ only depends on some (in general non-linear) function of $f(\boldvec{x})$ at the same point.
For a mask that is constructed by thresholding the foreground to remove the peaks of its emission, the mask window function is given by a simple step function $ {W}(\boldvec{x}) = \Theta(\nu\sigma_f-f(\boldvec{x}))$ where $\nu$ determines the `sigma' value of the cut.
The expectation value in Eq.~\eqref{eq:maskedxi} is then an integral over the correlated Gaussian variables $f(\boldvec{x})$, $f(\boldvec{x}')$, $A(\boldvec{x})$ and $B(\boldvec{x}')$, where the components of the covariance matrix are simply the correlation functions of the fields evaluated at $\vr$, or zero. If we know these correlation functions, the expectation value can therefore be calculated straightforwardly as the two Gaussian integrals over $A(\boldvec{x})$ and $B(\boldvec{x}')$ can be done directly.
For Gaussian fields with any local mask (in the sense defined above), the bias may be written compactly by introducing the independent Gaussian variables $f_{\pm} = f(\boldvec{x}) \pm f(\boldvec{x}')$ and integrating analytically over the correlated Gaussian distributions. The bias to the mask-deconvolved correlation function, an estimate of the full-sky correlation function, is then
\begin{align}\label{eq:ABf_bias}
\Delta \xi^{AB}(r) \equiv &\frac{\xi_{\rm masked}\xspace^{AB}(r)}{\left\langle W(\boldvec{x})W(\boldvec{x}') \right\rangle} - \xi^{AB}(r) \\ \nonumber
=&\frac{ \xi^{Af_{+}}(r) \xi^{Bf_{+}}(r)}{\sigma^4_{+}(r)}\frac{\left\langle\left( f^2_{+} -\sigma^2_+\right) W(\boldvec{x}) W(\boldvec{x}') \right \rangle} {\left\langle W(\boldvec{x})W(\boldvec{x}')\right\rangle} \\ \nonumber -& \frac{ \xi^{Af_-}(r) \xi^{B f_-}(r)}{\sigma^4_-(r)}\frac{\left\langle\left( f^2_{-} -\sigma^2_-\right) W(\boldvec{x}) W(\boldvec{x}') \right \rangle} {\left\langle W(\boldvec{x})W(\boldvec{x}')\right\rangle} .
\end{align}
In this equation, $\xi^{Xf_{\pm}}(r)$ is $\xi^{Xf}(0) \pm \xi^{Xf}(r)$, and $\sigma^2_{\pm}(r) = 2\sigma^2_f \pm 2\xi_f(r)$.
For a specific given mask, Eq.~\eqref{eq:ABf_bias} provides an empirical estimate of the bias using simulations or data, after computation of spectra and cross-spectra of $W(\boldvec{x})$, $(fW)(\boldvec{x})$ and $(f^2W)(\boldvec{x})$. In the case of a threshold mask, $f_-$ is left unconstrained by the threshold windowing, and $f_+$ is restricted not to exceed $2 \nu \sigma_f$. The required averages reduce to one-dimensional, very smooth integrals which are easily computed numerically, giving us an alternative fully analytic prediction.
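As a minimal numerical sketch of these integrals, the snippet below evaluates the truncated-Gaussian weight multiplying the $f_+$ term of Eq.~\eqref{eq:ABf_bias} under the approximation just described ($f_+$ truncated at $2\nu\sigma_f$, $f_-$ unconstrained); the parameter values are purely illustrative.
\begin{verbatim}
# 1D truncated-Gaussian average entering Eq. (eq:ABf_bias) for a
# threshold mask: f_+ is cut at 2*nu*sigma_f, f_- is unconstrained
# (so its weight vanishes in this approximation).
import numpy as np
from scipy.integrate import quad

def fplus_weight(nu, sigma_f, xi_f_r):
    sp2 = 2 * sigma_f**2 + 2 * xi_f_r          # sigma_+^2(r)
    cut = 2 * nu * sigma_f
    g = lambda x: np.exp(-x**2 / (2 * sp2)) / np.sqrt(2 * np.pi * sp2)
    norm = quad(g, -np.inf, cut)[0]            # ~ <W W'>
    m2 = quad(lambda x: x**2 * g(x), -np.inf, cut)[0] / norm
    return (m2 - sp2) / sp2**2                 # weight of the f_+ term

print(fplus_weight(nu=2.0, sigma_f=1.0, xi_f_r=0.3))  # < 0: peaks removed
\end{verbatim}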
In the limit of large separations, where the foreground fields at the two points are uncorrelated, taking $\sigma_f^2 \gg \xi_f(r)$, $|\xi^{Af}(r)| \ll |\xi^{Af}(0)|$,
$|\xi^{Bf}(r)| \ll |\xi^{Bf}(0)|$, the correction to the correlation function reduces to the simple approximate form
\begin{align}
\Delta \xi^{AB}(r) \sim \frac{\xi^{Af}(0)\xi^{Bf}(0)}{\sigma_f^2} \frac{\bar{f}^2}{\sigma_f^2} = \bar{A}\bar{B},
\end{align}
where $\bar{X}\equiv \langle W X\rangle/\langle W\rangle$ (with $X\in\{A,B\}$) is the mean value of $X$ evaluated over the unmasked area. If the correlation is instead defined after subtracting the means over the unmasked areas, the limiting value is zero, as expected.
For $r$ sufficiently small that the two points are almost surely either both unmasked or both inside the same mask hole, with $f(\boldvec{x}')\approx f(\boldvec{x})$, we have the other limiting form
\begin{align}
\Delta \xi^{AB}(r) \sim \frac{\xi^{Af}(0)\xi^{Bf}(0)}{\sigma_f^4}\left(\overline{f^2} - \sigma_f^2\right),
\end{align}
where $\overline{f^2}\equiv \langle f^2 W\rangle/\langle W\rangle$ is the mean value of $f^2$ evaluated over the unmasked area. If the mask systematically removes peaks of $f$, so that $\overline{f^2} <\sigma_f^2$, and $A$ and $B$ have positive correlation to the foreground $f$, this is negative. If the means over the unmasked areas are subtracted before calculating the correlation functions, this becomes
\begin{align}
\Delta \xi^{AB,0}(r) \sim \frac{\xi^{Af}(0)\xi^{Bf}(0)}{\sigma_f^4}\left(\overline{\sigma}^2 - \sigma_f^2\right),
\end{align}
where $\overline{\sigma}^2 \equiv \overline{f^2} - \bar{f}^2$ is the point variance estimated over the unmasked area.
\subsection{Poisson point sources}
A Poisson model that describes the masking due to radio sources can also be handled analytically, assuming a Gaussian distribution for the mean Poisson density. Following \citetalias{Fabbian:2020yjy}\xspace, we introduce the Poisson intensity field $\lambda(\boldvec{x})$, which defines the probability of observing no source in a given area $\tilde A$ as $\exp\left(-\int_{\tilde A}d^2y\, \lambda (\boldvec{y}) \right)$. At the location of each source, a disc of radius $R$ is masked, and the mask $W$ consists of the collection of these discs.
Under the assumption of a Gaussian $\lambda$, we may then write
\begin{align}
&\Delta \xi^{AB}(r) + \xi^{AB}(r)\nonumber \\ &= \frac{ \left\langle A(\boldvec{x}) B(\boldvec{x}') \exp \left( -\int_{\tilde A(\boldvec{y})} \lambda(\boldvec{y}) d^2\boldvec{y} \right) \right\rangle }{\left\langle \exp \left( -\int_{\tilde A(\boldvec{y})}\lambda(\boldvec{y}) d^2\boldvec{y} \right) \right\rangle}.
\end{align}
In this equation, $\tilde A(\boldvec{y})$ is the area $D_R(\boldvec{y} - \boldvec{x}) + D_R(\boldvec{y} - \boldvec{x}') - D_R(\boldvec{y} - \boldvec{x})D_R(\boldvec{y} - \boldvec{x}')$ defined by the union of two discs $D_R$ of radius $R$ centred on $\boldvec{x}$ and $\boldvec{x}'$.
Taking $A(\boldvec{x})$, $B(\boldvec{x}')$ and the integral over $\lambda(\boldvec{y})$ to be three correlated Gaussian variables, the Gaussian integral can be done analytically in terms of integrals over $\xi^{A\lambda}(r)$ and $\xi^{B\lambda}(r)$.
These integrals can be evaluated directly numerically, or the result can be rewritten compactly in the form
\begin{align}\label{eq:poisson_souces}
&\Delta \xi^{AB}(r) \\ &= \left[\xi^{Af}(r)+\xi^{Af}(0)- \left(\left(\xi^{A\lambda} \cdot D_R\right) \star D_R \right)(r) \right] \nonumber \\
& \cdot\left[\xi^{Bf}(r)+\xi^{Bf}(0)- \left(\left(\xi^{B\lambda} \cdot D_R\right) \star D_R \right)(r) \right], \nonumber
\end{align}
where $f(\boldvec{x})$ is the Poisson intensity $\lambda(\boldvec{x})$ convolved with $D_R$.
Here the $\left(\xi^{A/B\lambda} \cdot D_R\right) \star D_R$ terms are the convolution ($\star$) of the disc $D_R$ with the disc-truncated correlation $D_R(r)\xi^{A/B \lambda}(r)$,
which reduces to $\xi^{A/Bf}(0)$ for $r\ll R$ and vanishes for distances $r > 2R$.
Each full term therefore changes smoothly from $\xi^{A/Bf}(0)$ at $r\sim 0$ to $\xi^{A/Bf}(0) + \xi^{A/Bf}(r)$ at $r \ge 2R$.
Note that $\xi^{X f}(0) = -\bar{X}$, so if the means $\bar{A}$ and $\bar{B}$ over the unmasked areas are subtracted this amounts to removing the $\xi^{A/Bf}(0)$ terms in Eq.~\eqref{eq:poisson_souces}.
The entire correction is fourth order in the fields and hence small in either case. If there is a non-zero bispectrum, which is only third order in the fields, the actual real-world effect may be dominated by the non-Gaussian correlations rather than the very small Gaussian prediction.
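The behaviour of the $\left(\xi^{A\lambda}\cdot D_R\right)\star D_R$ terms can be checked with a toy flat-sky evaluation by FFT convolution; the exponential form assumed for $\xi^{A\lambda}$ below is purely illustrative.
\begin{verbatim}
# Toy flat-sky evaluation of the (xi . D_R) * D_R convolution term
# in Eq. (eq:poisson_souces); xi_Al is an assumed toy correlation.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

n, box, R = 512, 20.0, 1.0           # grid, box size [deg], disc radius
x = (np.arange(n) - n // 2) * (box / n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y)

D_R = (r <= R).astype(float)         # disc indicator
xi_Al = np.exp(-r / 3.0)             # toy xi^{A lambda}(r)

conv = fftshift(np.real(ifft2(fft2(ifftshift(xi_Al * D_R)) *
                              fft2(ifftshift(D_R))))) * (box / n)**2

# smooth drop from its r=0 value to zero for r > 2R, as stated above
for ri in [0.0, R, 2 * R, 3 * R]:
    print(ri, conv[n // 2, np.argmin(np.abs(x - ri))])
\end{verbatim}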
\section{Simulations }\label{sec:masks}
The analytic models described in the previous section were highly idealized.
To study the impact of a correlated mask on real data, it is necessary to simulate CMB lensing and correlated foreground fields in a realistic way. To do this we followed \citetalias{Fabbian:2020yjy}\xspace and used the publicly-available Websky\xspace\footnote{\url{https://mocks.cita.utoronto.ca/index.php/WebSky_Extragalactic_CMB_Mocks}} simulation suite~\cite{Stein:2020its}.
Websky\xspace models the evolution of the matter distribution using the mass-Peak Patch method~\cite{Stein:2018lrh} in a volume of $\sim 600$ (Gpc/$h$)$^3$ with $\sim 10^{12}$ particles over the redshift interval $0 < z < 4.6$. Despite being an approximate N-body method, mass-Peak Patch has been shown to reproduce the clustering properties of halos with good accuracy for both 2-point and 3-point statistics and their covariances \cite{Blot:2018oxk,Colavincenzo:2018cgf,Lippich:2018wrx}. The simulation only includes dark matter particles, so we neglected baryonic effects in the analytical modelling discussed in the previous section as well. This realization of the matter distribution is used to produce full-sky maps of the CMB lensing convergence $\kappa$ as well as extragalactic foregrounds based on an analytic halo model calibrated on existing observations and hydrodynamical simulations. The public release contains maps of tSZ and CIB at multiple frequencies, catalogues of radio sources based on the model of Refs.~\cite{Sehgal:2009xv,websky-rs}, and catalogues of the dark matter halos identified in the cosmological simulation.
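The Websky\xspace products can be loaded directly with \texttt{healpy}; a minimal sketch is shown below (the file names follow the Websky\xspace release page and should be treated as assumptions, since they may change in future versions).
\begin{verbatim}
# Minimal sketch: reading the public Websky maps with healpy
# (file names as on the release page; treat them as assumptions).
import healpy as hp

kappa = hp.read_map("kap.fits")            # CMB lensing convergence
y_map = hp.read_map("tsz.fits")            # Compton-y parameter
cib = hp.read_map("cib_nu0217.fits")       # CIB intensity at 217 GHz

cl_kk = hp.anafast(kappa, lmax=4000)       # full-sky kappa spectrum
\end{verbatim}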
\subsection{Lensed CMB and CMB lensing}\label{sec:MC_sims}
Following \citetalias{Fabbian:2020yjy}\xspace, we produced two sets of lensed CMB simulations that are later used to study the impact of correlated masking on lensing reconstruction. These sets share the same 100 unlensed CMB realizations, but they are lensed either using the same fixed deflection field constructed from the Websky\xspace $\kappa$ simulation (NG set), or using a different Gaussian random realization of the deflection field for each map (G set), where the deflection field has the same angular power spectrum as the realization of the Websky\xspace $\kappa$ map.
We used the NG set to isolate the bias as it would appear on real data while the G set was used to compute the error bars of our measurements and calibrate the lensing reconstruction normalization and $N^{(i)}$ biases. As such, the error bars displayed in the figures do not include any non-Gaussian contribution to the covariance. In the following, unless stated otherwise, error bars (or bands) displayed in figures are standard deviations of the plotted variable measured over the set of G simulations.
An additional G-like set was computed to estimate the mean field of the quadratic estimator in order to debias the reconstructed lensing potential power spectrum.
The Websky\xspace lensing $\kappa$ map is constructed using the Born approximation and therefore neglects post-Born lensing effects, which are expected to significantly modify the overall non-Gaussianity of the CMB lensing $\kappa$~\cite{Pratten:2016dsm} and thus the shape of the $N_L^{(3/2)}$\xspace bias in the reconstructed lensing field \cite{Fabbian:2019tik,Beck:2018wud}. Investigating the impact of post-Born effects on the mask bias would require us to include post-Born lensing in the lensed CMB, and (Born) lensing of the extragalactic foregrounds, to avoid introducing spurious decorrelation of photon deflections along the line of sight (see, e.g., the discussion in Refs.~\cite{Fabbian:2019tik,Boehm:2019hlv}).
As we do not have a similar set of simulations at our disposal, we investigated the role of post-Born effects only for masks built on the $\kappa$ field, using the Born and post-Born CMB lensing maps of Ref.~\cite{Fabbian:2017wfp}. We used those maps to produce two additional sets of simulations where we lensed the same 100 realizations of unlensed CMB maps with deflection fields constructed from the Born and post-Born $\kappa$ maps following the steps adopted for the Websky\xspace $\kappa$ map. We refer to these sets of simulations as BNG and pBNG respectively. To calibrate the normalization of the estimator we also created a set of Gaussian simulations, as done for the G set, using the power spectrum of the post-Born $\kappa$ map\footnote{As noted in \cite{Beck:2018wud}, differences due to the post-Born correction in $C_{L}^{\kappa\kappa}$ are negligible for the purpose of this work.}.
\subsection{Foreground masks}\label{sec:sims-masks}
Using the Websky\xspace simulations we constructed three qualitatively-distinct kinds of lensing-correlated masks.
\begin{itemize}
\item $W_{\rm halo}$ masks, which roughly mimic masking of objects detected by a mass-limited cluster sample, for example objects detected in a tSZ cluster survey with a given experimental noise level. We created different masks selecting all the halos in the Websky\xspace halo catalogue with a mass above a certain threshold $M^{\rm cut}$, defined in terms of $M_{500,c}$\footnote{In the following, we define halo masses as spherical overdensity masses, $M_{500,c}$ or $M_{200,m}$. The first is the mass contained within the radius $R_{500,c}$ inside of which the mean interior density is 500 times the critical density. $M_{200,m}$, conversely, is the mass contained within the radius $R_{200,m}$ inside of which the mean interior density is 200 times the mean matter density of the universe.}. For all the selected halos, we masked a disc centred on the halo position with a radius that is a multiple $n$ of the $\theta_{500,c}$ halo angular size (a minimal construction of these masks is sketched after this list). In the following, we will adopt $n = 2$ as our default setup, and will show results for $M^{{\rm cut}}= (1.0, \, 1.8, \, 3.0) \cdot 10^{14}\,M_{\odot}/h$. For reference, Fig.~\ref{fig:hist_catalogue} shows the distribution of angular sizes $\theta_{500,c}$ in the Websky\xspace halo catalogue;
\item Foreground intensity threshold masks $W_{\kappa}, \,W_{\text{\rm CIB}}$ and $W_{y}$. These masks are created by thresholding the Websky\xspace $\kappa$, CIB at 217\,GHz and tSZ Compton-$y$ parameter maps respectively, such that all the pixels above a specific value are removed from the analysis. We used different thresholds that effectively masked different sky fractions, $f^{\rm mask}_{\rm sky} = 0.6\%,\, 2.3\%,\, 6.7\%$. Before the thresholding step, we smoothed the foreground maps with a Gaussian beam of full width at half maximum (FWHM, $\theta_{1/2}$) of $5.1^\prime$ or $1.7^\prime$.
This gives masks with more regular and connected holes, as expected from an experiment with a comparable beam size. By construction, these masks allow us to explore the biases for foregrounds with different degrees of correlation with CMB lensing: $W_\kappa$ is the limiting case in which the masked field is 100\% correlated, while the CIB map is correlated at the $\gtrsim 70\%$ level at $\ell < 1000$, and the tSZ map at the $30\%$--$50\%$ level. The $W_{\rm halo}$ mask is strongly correlated with the $W_y$ mask, since the main peaks of the $y$-parameter map are associated with massive clusters.
\item $W_{\rm rs}$ masks remove resolved radio point sources (and are the same as those described in \citetalias{Fabbian:2020yjy}\xspace). For this purpose, we selected all the sources with a measured flux above the $5\sigma$ detection limit for \Planck, SO and S4, and cut out a circular region of the sky around the source. The radius of these holes has been chosen to be $2\theta_{1/2}$ of each frequency channel.
Each of these masks is built as the product of three masks obtained by selecting sources in the three frequency bands ranging from 90\,GHz to 225\,GHz most relevant for small-scale CMB power spectrum measurements (for more details, see Table I of \citetalias{Fabbian:2020yjy}\xspace).
\end{itemize}
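A minimal sketch of the $W_{\rm halo}$ and threshold-mask construction with \texttt{healpy} is the following; the catalogue arrays are placeholders standing in for the Websky\xspace halo catalogue columns.
\begin{verbatim}
# Sketch of the W_halo and threshold mask construction; theta, phi and
# theta500 stand in for the Websky halo catalogue entries (radians).
import numpy as np
import healpy as hp

nside = 4096
npix = hp.nside2npix(nside)

def halo_mask(theta, phi, theta500, n=2.0):
    mask = np.ones(npix)
    for t, p, t500 in zip(theta, phi, theta500):
        pix = hp.query_disc(nside, hp.ang2vec(t, p), n * t500)
        mask[pix] = 0.0                  # disc of radius n * theta500
    return mask

def threshold_mask(fg_map, f_sky_masked, fwhm_arcmin=5.1):
    sm = hp.smoothing(fg_map, fwhm=np.radians(fwhm_arcmin / 60.0))
    cut = np.quantile(sm, 1.0 - f_sky_masked)
    return (sm < cut).astype(float)      # remove the brightest pixels
\end{verbatim}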
To isolate the impact of mask correlations, each mask was also randomly rotated to give a new mask, $W^{\rm rot}$, which is effectively uncorrelated with CMB lensing but retains all the other non-trivial mode-coupling effects due to the cut sky and hole shapes (we neglect the small area of residual correlation around the poles of the random rotation axis).
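The random rotation can be implemented with \texttt{healpy}'s \texttt{Rotator}; a sketch is given below (pixel-space rotation of a binary mask is approximate, so the rotated map is re-binarized).
\begin{verbatim}
# Sketch of the random rotation used to build the uncorrelated W^rot
# masks; the rotated mask is re-binarized after pixel interpolation.
import numpy as np
import healpy as hp

def rotated_mask(mask, seed=0):
    rng = np.random.default_rng(seed)
    lon = rng.uniform(0.0, 360.0)
    lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0)))
    psi = rng.uniform(0.0, 360.0)
    rot = hp.Rotator(rot=[lon, lat, psi], deg=True)
    return (rot.rotate_map_pixel(mask) > 0.5).astype(float)
\end{verbatim}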
In Fig.~\ref{fig:masks_all}, we show a cutout of the full sky $\kappa$ and tSZ $y$-parameter maps from the Websky\xspace simulation together with an example of the masks used in our analysis.
\begin{figure}[!]
\centering
\includegraphics[width = 0.95 \columnwidth]{prop_halo_catalogue_v2.pdf}
\caption{Histogram showing the distribution of angular sizes, $\theta_{500,c}$, of halos in the Websky\xspace catalogue with $M_{500,c} > 10^{14} M_\odot/h$. This is consistent with the mass limit of detectable tSZ clusters in an S4-like survey (see \cite{Ade:2018sbj} for more details).}
\label{fig:hist_catalogue}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{masks_all_v2.pdf}
\caption{$20\times 20$ deg$^2$ cutout of the $\kappa$ (top) and tSZ $y$-parameter (bottom) maps of the Websky\xspace simulation suite. We highlight the masked pixels in black for one of the $W_\mathrm{halo}$ masks employed in this work. The left column displays the unmasked $\kappa$ and $y$ fields.
The middle column shows the same fields masked with a mask that removes halos with $M>M^{\rm cut} = 1.8\cdot 10^{14} M_\odot/h$. As is evident from the bottom-row plots, the masked areas typically correspond to dense regions of strong tSZ emission. The right column shows $\kappa$ and $y$ masked with a mask uncorrelated with the underlying lensing field, constructed by randomizing the positions of the halos used for the $W_\mathrm{halo}$ mask.}
\label{fig:masks_all}
\end{figure}
\subsection{Biases from direct $\kappa$ masking}\label{sec:matter_1}
To measure the effect on simulations, we applied the foreground masks described in Sec.~\ref{sec:sims-masks}, $W_X$, to the Websky\xspace $\kappa$ map, and estimated the power spectrum over the unmasked area by deconvolving the effect of the mask using the MASTER \cite{Hivon:2001jp} algorithm as implemented in the publicly available \texttt{NaMaster}\footnote{\url{https://github.com/LSSTDESC/NaMaster}} package \cite{namaster}.
We then compared this mask-deconvolved power spectrum with the angular power spectrum of the Websky\xspace $\kappa$ computed on the full sky. The results are shown in Fig.~\ref{fig:masking_kappa_noapo}. For each spectrum we adopted a variable binning in $L$: $\Delta_L = 1$ up to $L=10$, 4 bins of $\Delta_L = 20$ (for $L<100$), 4 bins of $\Delta_L = 100$ (for $L<500$), 14 bins of $\Delta_L = 250$ (for $L<4000$) and $\Delta_L = 1000$ for $L>4000$.
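Schematically, the mask-deconvolved spectrum can be obtained with \texttt{NaMaster} as follows; the bin edges below only approximate the binning described above.
\begin{verbatim}
# Sketch of the MASTER mask deconvolution with pymaster; the bin
# edges approximate the binning described in the text.
import numpy as np
import pymaster as nmt

def master_cl(kappa_map, mask):
    edges = np.concatenate([np.arange(1, 11),            # Delta_L = 1
                            [30, 50, 70, 90],            # Delta_L = 20
                            [190, 290, 390, 490],        # Delta_L = 100
                            np.arange(750, 4001, 250),   # Delta_L = 250
                            [5000]])                     # Delta_L = 1000
    b = nmt.NmtBin.from_edges(edges[:-1], edges[1:])
    f = nmt.NmtField(mask, [kappa_map])
    return b.get_effective_ells(), nmt.compute_full_master(f, f, b)[0]
\end{verbatim}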
We estimated the error bars on the measured bias using a jackknife approach. For this purpose we divided the sky map into $N_\mathrm{patches}=48$ subregions of equal area, corresponding to the pixels of a Healpix pixelization with nside=2. For the $i$-th subregion $W^{(i)}_{\rm cut}$ and for each LSS-correlated mask $W_X$, we estimated the fractional difference between the $C_L^{\kappa \kappa}$ computed on the sky masked with $W_{X}\cdot W^{(i)}_{\rm cut}$, and its value computed on the sky masked with $W^{(i)}_{\rm cut}$ only. The error bar is then given by the variance of the fractional differences over all the subregions, rescaled by $N_\mathrm{patches} -1$ (see, e.g., Appendix B of Ref.~\cite{Makiya:2018pda}).
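The jackknife described above can be sketched as follows, where \texttt{frac\_diff} is a placeholder for the measurement of the fractional spectrum difference on a given cut sky.
\begin{verbatim}
# Sketch of the jackknife errors: 48 equal-area subregions from the
# nside=2 pixels; frac_diff is a placeholder measurement function
# returning the fractional C_L difference for a given patch mask.
import numpy as np
import healpy as hp

def jackknife_err(frac_diff, nside=4096, n_patches=48):
    patch = hp.ud_grade(np.arange(n_patches, dtype=float), nside)
    diffs = np.array([frac_diff((patch != i).astype(float))
                      for i in range(n_patches)])
    return np.sqrt((n_patches - 1) * np.var(diffs, axis=0))
\end{verbatim}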
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.35\textwidth]{masking_kappa_Whalo_noapo_withGFth_err_Tinker_v2_newerr.pdf} &
\includegraphics[width=0.365\textwidth]{masking_kappa_Wrs_noapo_newerr.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.33\textwidth]{masking_kappa_Wcib_noapo_newerr_smo51.pdf} &
\includegraphics[width=0.34\textwidth]{masking_kappa_Wy_noapo_newerr_smo51.pdf} &
\includegraphics[width=0.33\textwidth]{masking_kappa_Wkappa_noapo_newerr_smo51.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.33\textwidth]{masking_kappa_Wcib_noapo_newerr_smo17.pdf}&
\includegraphics[width=0.34\textwidth]{masking_kappa_Wy_noapo_newerr_smo17.pdf} &
\includegraphics[width=0.33\textwidth]{masking_kappa_Wkappa_noapo_newerr_smo17.pdf}\\
\end{tabular}
\caption{
Effect of LSS-correlated masks on the CMB lensing convergence power spectrum $C_L^{\kappa \kappa}$, shown as the fractional difference between the $C_L^{\kappa \kappa}$ computed on the masked sky and its value computed on the full sky. Each panel shows results for different masks. Different colours show the results for masks retaining different sky fractions after masking. The $W_\mathrm{halo}$ plot legend shows the mass limits of the halo samples used for the mask, and the $W_{\rm rs}$ plot legend shows the source flux detection limits used to construct the radio-source mask for the various listed experiments. Simulation measurements are shown as data points. For $W_\mathrm{halo}$ and $W_{\rm rs}$ masks, the solid lines show the analytic predictions of Sec.~\ref{sec:halomodel} and of Eq.~\eqref{eq:poisson_souces} respectively. For the $W_{\kappa}, W_{y}$ and $W_\mathrm{CIB}$ threshold masks, the semi-analytic theory predictions are shown in solid and the pure Gaussian model in dashed. For the former we evaluated the expectation values in Eq.~\eqref{eq:ABf_bias} empirically from our simulations, while the latter uses a fully analytic model. The middle and bottom rows show results obtained for different smoothings of the foreground field.}
\label{fig:masking_kappa_noapo}
\end{figure*}
Since all our masks select areas of the sky where mass over-densities are present, the recovered power spectrum has less power compared to its full sky value.
Comparing the right panel of Fig.~\ref{fig:halomodel} with the top-left panel of Fig.~\ref{fig:masking_kappa_noapo} shows that the halo model describes reasonably well the effect observed when masking the Websky\xspace $\kappa$ map with $W_{\rm halo}$; in particular, the analytical curves match well the shape of the biases observed in the simulations.
Changing the halo mass function only gives fairly minor changes to the analytic predictions.
We note that the halo mass functions and related bias models are usually formulated (or calibrated on simulations) in terms of $M_{200,m}$, while we perform cluster masking of $\kappa$ in terms of $M_{500,c}$; the curves in the two plots can be compared noting that $M_{200,m}\approx 2M_{500,c}$\footnote{We used the \texttt{colossus} code (\url{https://bitbucket.org/bdiemer/colossus/src/master/}) to perform an accurate conversion between the two mass definitions at a specific redshift in the integration.}.
For the $W_{\rm halo}$ mask, reducing $M^{\rm cut}$ increases the number of objects that are masked, and thus also the fraction of sky area that is masked. This leads to a progressively larger power deficit. The suppression is most relevant at very large scales and at $L \sim 2000$, in broad agreement with the analytical predictions of Sec.~\ref{sec:halomodel}.
The shapes of the biases induced by $W_{\rm halo}$ and $W_y$ are similar. This is due to the fact that, even though they are built in different ways, they are highly correlated, as they mask similar objects and locations, as discussed in Sec.~\ref{sec:masks}.
Proceeding clockwise in Fig.~\ref{fig:masking_kappa_noapo}, we show the results for $W_{\rm rs}$ masks. The relative differences between the $\kappa \kappa$-spectrum computed on the masked sky and the one computed on the full sky are all quite small, but still not compatible with zero given the error bars. The theoretical curves, obtained using Eq.~\eqref{eq:poisson_souces}, are not in agreement with the simulation results. This means that the Gaussian model for the Poisson intensity is not fully able to reproduce the behaviour of these radio-source masks, consistent with non-Gaussian effects dominating over the purely Gaussian predictions.
For the threshold masks (second and third rows of Fig.~\ref{fig:masking_kappa_noapo}), the dashed lines are the theory curves obtained using the pure Gaussian model described in Sec.~\ref{sec:gaussianmodel}, while the solid lines represent the semi-empirical form of Eq.~\eqref{eq:ABf_bias}, where the spectra and cross-spectra of $W(\boldvec{x})$, $(fW)(\boldvec{x})$ and $(f^2W)(\boldvec{x})$ have been computed directly from our set of foreground masks and the Websky\xspace $\kappa$, $y$ and CIB fields. Overall, the semi-empirical Gaussian model using the actual masks matches the data points better, in particular for $W_{\kappa}$, even though, particularly for the $W_y$ and $W_{\rm CIB}$ masks, there is a clear disagreement at small scales. This is not surprising, as $\kappa$ is the most Gaussian of the three foreground fields considered here. The CIB contains a significant shot-noise contribution from individual bright infrared galaxies, and the $y$ map one from highly collapsed galaxy clusters; both induce significant non-Gaussianity in the maps.
We investigate this disagreement for the threshold masks in Appendix~\ref{sec:threshold_masks}.
\section{Lensing reconstruction on masked fields}
\label{sec:recon}
In Sec.~\ref{sec:matter_1} we directly masked the CMB lensing convergence to roughly quantify the bias induced by the foreground masks described in Sec.~\ref{sec:sims-masks}. However, the true $\kappa$ field is not a direct observable, and has to be reconstructed from the observed CMB maps, potentially in combination with other external matter tracers. Hence, in this section, we quantify the impact of LSS-correlated masks by removing the corresponding regions from the lensed CMB maps that are then used to perform lensing reconstruction with a quadratic estimator.
For this first analysis we only use the CMB temperature field, since the foreground contamination is less important for polarization and the properties of the polarized sources are currently much less well understood \cite{Gupta:2019kng}.
For the reconstruction of the lensing potential, we mainly followed the same steps as the \Planck~lensing pipeline described in Ref.~\cite{Aghanim:2018oex}. For CMB temperature lensing reconstruction this is just an optimized and generalized version of the lensing quadratic estimator (QE) of \cite{Okamoto:2003zw}. We used the reconstruction pipeline implemented in the public {\tt plancklens} code\footnote{\url{https://github.com/carronj/plancklens}}, and
refer the reader to Ref.~\cite{Aghanim:2018oex} for more details.
The procedure can be summarized in 4 steps: 1) Optimal filtering of input ``data'' CMB maps;
2) Construction of the lensing quadratic estimator; 3) Mean-field subtraction and normalization of the lensing estimate; 4) Computation of the lensing power spectrum, subtraction of its additive biases, and Monte-Carlo correction of its normalization.
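Steps 1 and 2 can be illustrated with a schematic flat-sky temperature-only version of the quadratic estimator; this toy omits the optimal (masked) filtering and the curved-sky treatment of the actual {\tt plancklens} pipeline.
\begin{verbatim}
# Schematic flat-sky TT quadratic estimator: isotropic filtering,
# real-space product of the two filtered legs, then a divergence.
# Illustrative only; the paper uses the curved-sky plancklens code.
import numpy as np

def tt_qe(tmap, cl_tt, nl_tt, box_deg=10.0):
    n = tmap.shape[0]
    dx = np.radians(box_deg) / n
    lx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    LX, LY = np.meshgrid(lx, lx, indexing="ij")
    li = np.clip(np.hypot(LX, LY).astype(int), 0, len(cl_tt) - 1)

    T_l = np.fft.fft2(tmap)
    Tbar = T_l / (cl_tt[li] + nl_tt[li])   # inverse-variance filtered leg
    Twf = cl_tt[li] * Tbar                 # Wiener-filtered gradient leg

    gx = np.fft.ifft2(1j * LX * Twf) * np.fft.ifft2(Tbar)
    gy = np.fft.ifft2(1j * LY * Twf) * np.fft.ifft2(Tbar)
    g_l = 1j * (LX * np.fft.fft2(gx) + LY * np.fft.fft2(gy))
    return -g_l                            # unnormalized phi estimate
\end{verbatim}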
To measure the mask biases in lensing reconstruction we used the two sets of Monte Carlo simulations of lensed CMB realizations described in Sec.~\ref{sec:MC_sims}.
We used the NG set to isolate the bias as it would appear on real data, while the G set was used to
compute the mean-field, the RD-$N^{(0)}$ noise bias, and the multiplicative Monte Carlo correction assuming no mask-lensing correlation. We also used the G set to estimate error bars on the auto- and cross-spectra estimators described in the following.
Since the Websky\xspace suite includes only a single realization of $\kappa$, our numerical results based on the NG set are limited on large scales by the cosmic variance of this fixed realization of the lensing field.
In the following we considered two experimental setups representative of an SO-like and an S4-like survey. For SO we assumed an effective white noise level in temperature of $\sim$6.7 $\mu$K-arcmin and $\theta_{1/2}=1.5^\prime$, consistent with the publicly available effective baseline noise configuration after component separation\footnote{Details of the noise model for SO can be found at \url{https://github.com/simonsobs/so_noise_models}.
Note that the beam size is not entirely consistent with the $1.7^\prime$ fiducial foreground smoothing that we use, which was chosen to match the smoothing used in \citetalias{Fabbian:2020yjy}\xspace.}.
For the S4-like survey, we assumed a beam with $\theta_{1/2}=1.0^\prime$, and isotropic uncorrelated $\sim$1 $\mu$K-arcmin noise for temperature\footnote{Details can be found at \url{https://cmb-s4.uchicago.edu/wiki/index.php/Survey_Performance_Expectations}}.
The results shown in the following are computed using SO-like experimental specifications. In addition, we considered an S4-like experimental setup for a subset of foreground masks: the $W_{\rm rs}$ mask with $f_{\rm sky} = 94.1 \%$, which includes all the sources with a measured flux above the detection limit for S4, the $W_{\mathrm{halo}}$ with $M^{\mathrm{cut}}=10^{14}M_\odot/h$, which removes a mass-limited tSZ-selected cluster sample for an S4-like survey \cite{Raghunathan:2021zfi}, and the extreme cases of $W_{\mathrm{CIB}}$ and $W_{y}$ with $f_{\rm sky} =93.3\%$ and $\theta_{1/2}=1.7^\prime$.
After adding a realization of the isotropic noise, the input maps are then masked using all the unapodized $W_X$ masks discussed in Sec.~\ref{sec:masks}. The (optimal) filtering step produces Wiener-filtered maps that provide the minimum-variance estimate of the full-sky lensed CMB based on the information in the unmasked area. Optimal filtering~\cite{Smith:2007rg,Aghanim:2018oex,Mirmelstein:2019sxi} is particularly valuable with this kind of mask compared to more basic (but faster) inverse-variance-weighted or isotropic filtering. Figure~\ref{fig:optimalfiltering} shows how the optimal filtering operation is able to fill in some information inside small masked regions, effectively recovering information that would otherwise be lost (assuming no residual foregrounds outside the masked area). This increases the information available, reduces complications due to sharp mask cuts, and, because the masked area is effectively reduced, can substantially reduce biases when the mask is correlated with the signal.
\begin{figure*}[!]
\centering
\includegraphics[width = \textwidth]{opt_vs_non-opt_2.pdf}
\caption{Comparison between optimal and non-optimal Wiener filtering. The top row shows, from left to right, the masked CMB field ($W_{\rm halo}$ with $f_{\rm sky} = 94.7\%$), the filtered CMB field using a non-optimal isotropic method, and the filtered CMB field using optimal filtering as described in Refs.~\cite{Smith:2007rg, Aghanim:2018oex}. The bottom row shows, from left to right, the Wiener-filtered full-sky reconstructed $\kappa$ map, and the filtered reconstructed $\kappa$ maps using non-optimal and optimal filtering. In the non-optimal filtering case, the mask has been apodized using the $C^2$ function (effectively a cosine) implemented in NaMaster with an apodization scale of $5^\prime$. This helps to slightly reduce the ringing effects, but large hole-induced reconstruction-noise edge effects still dominate. In all the $\kappa$ maps a significant fraction of the small-scale structure is non-Gaussian reconstruction noise.}
\label{fig:optimalfiltering}
\end{figure*}
To build the filtered CMB maps, we used a fiducial lensed CMB temperature spectrum
including multipoles $100 \leq \ell \leq 4000$. Removing multipoles $\ell < 100$ generates little loss of information for lensing reconstruction. Since we approximate the noise as white and isotropic, the noise in the filter is also taken to be isotropic and consistent with the simulated value.
The real-space lensing deflection estimator is then built from a pair of filtered maps discussed above. The gradient part\footnote{The curl component is expected to be zero to a good approximation, and is zero in the Websky\xspace lensing field by construction.} of the quadratic lensing deflection estimator, $\hat g_{LM}$, contains information on the lensing potential, which is estimated using \cite{PL2018}
\begin{equation}\label{eq:phi-estimator}
\hat \phi_{LM}\,\equiv \, \frac{1}{\mathcal{R}^{\phi}_L} \Big(\hat g_{LM} - \langle \hat g^{\rm MF}_{LM} \rangle\xspace \Big) \,,
\end{equation}
where $\mathcal{R}^{\phi}_L$ is the non-perturbative response function defined to make the lensing reconstruction unbiased on the full sky~\cite{Hanson:2010rp,Lewis:2011fk,Fabbian:2019tik}, and $\langle \hat g^{\rm MF}_{LM} \rangle$ is the mean field of the estimator.
Note that we construct our lensing estimators under the hypothesis of no lensing-mask correlation, as appropriate for quantifying the bias on standard methods (rather than, for example, defining the mean field by averaging over simulations with lensing fields correlated to the fixed mask).
Since the mean field depends on the mask, for lensing-correlated masks the mean field is actually correlated to the true lensing potential $\phi$.
\begin{figure}[!]
\includegraphics[width = 0.95 \columnwidth]{N0-meanfield_3p0halomask_err_v2.pdf}
\caption{The RD-$N^{(0)}_L$ bias (black, as described in Eq.~\eqref{eq:RD-N0}) and mean-field power spectrum corrected for the response (grey) for the case of the fixed halo mask described in the plot title.
We show (in purple) the Websky\xspace full-sky $C_{L}^{\kappa \kappa}$ for comparison.
}
\label{fig:N0-meanfield_3.0halomask}
\end{figure}
In our analysis, we make two estimates of the (uncorrelated) mean field for each mask using two independent sets of 25 G simulations, which we denote $\langle \hat g^{\rm MF}_{LM} \rangle\xspace_1$ and $\langle \hat g^{\rm MF}_{LM} \rangle\xspace_2$ respectively (using 50 G simulations in total, 25 for each mean field).
The lensing power spectrum is estimated by cross-correlating two lensing map estimates $\hat \phi_1$ and $\hat \phi_2$ following Eq.~\eqref{eq:phi-estimator}, where we subtracted $\langle \hat g^{\rm MF}_{LM} \rangle\xspace_1$ and $\langle \hat g^{\rm MF}_{LM} \rangle\xspace_2$ respectively to avoid reconstruction noise in the mean field cross-correlation.
The estimate of the lensing power spectrum of the single realization is then obtained as
\begin{equation}\label{eq:lensing-power-spectra-esimator}
\hat C_L^{\hat \phi_1 \hat \phi_2} \,\equiv \, \frac{1}{(2L +1) f_{\rm sky}} \sum_{M=-L}^L \hat \phi_{1,LM}^* \hat \phi_{2,LM} \,,
\end{equation}
where $f_{\rm sky} = \sum_p {W_X}_p/N_{\rm pix}$ is the unmasked sky fraction.
From the above estimator, we then subtract the realization-dependent estimate of the Gaussian (disconnected) lensing bias, RD-$N^{(0)}_L$. Subtracting this term, together with the mean-field subtraction, has the effect of removing the disconnected signal expected from Gaussian fluctuations even in the absence of lensing \cite{Hanson:2010rp,Namikawa:2012pe}. The RD-$N^{(0)}_L$ bias is defined as
\begin{equation}\label{eq:RD-N0}
\text{RD-}N^{(0), d }_L \,\equiv\, \langle 4 \hat C^{di}_L - 2\hat C^{ij}_L \rangle \,,
\end{equation}
where angle brackets denote an average over pairs of distinct $i, j$ G simulations.
Here $\hat C^{di}_L$ is the estimator in Eq.~\eqref{eq:lensing-power-spectra-esimator} with $\hat \phi_1$ reconstructed from the $d$-th CMB realization of the NG set and $\hat \phi_2$ using the $i$-th simulation of the G set; $\hat C^{ij}_L$ is instead the estimator of Eq.~\eqref{eq:lensing-power-spectra-esimator} with $\hat \phi_1$ and $\hat \phi_2$ reconstructed from the $i$-th and the $j$-th simulations of the G set, respectively.
For each mask (both LSS-correlated and uncorrelated), RD-$N^{(0), d}$ is calculated for each NG `data' simulation, $d$, using 20 different pairs of independent G simulations.
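Given precomputed cross-spectra, the average of Eq.~\eqref{eq:RD-N0} is straightforward; a sketch (with hypothetical input arrays) is:
\begin{verbatim}
# Sketch of the RD-N0 average of Eq. (eq:RD-N0); C_di and C_ij are
# hypothetical arrays of shape (n_pairs, n_L) holding the spectra of
# (data x sim_i) and (sim_i x sim_j, i != j) reconstruction pairs.
import numpy as np

def rd_n0(C_di, C_ij):
    return np.mean(4.0 * C_di - 2.0 * C_ij, axis=0)
\end{verbatim}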
Figure~\ref{fig:N0-meanfield_3.0halomask} shows a typical mean-field power spectrum corrected for the non-perturbative response and an example of the RD-$N^{(0), d}$.
For a given lensing reconstruction field, we can also define the cross-spectrum estimator with the true field
\begin{equation}\label{eq:lensing-cross-spectra-esimator}
C_L^{\hat \phi \phi} \,\equiv \, \frac{f_L^{\rm MC}\xspace}{(2L +1) f_{\rm sky}} \sum_{M=-L}^L \hat \phi_{LM}^* \phi_{LM} \,,
\end{equation}
where $\hat \phi$ is the lensing potential estimate of Eq.~\eqref{eq:phi-estimator}, $\phi$ is the true lensing potential field, and $f_L^{\rm MC}\xspace$ is a Monte-Carlo (MC) correction defined to make the estimator unbiased for Gaussian lensing potential uncorrelated with the mask. Specifically, we define
\begin{equation}\label{eq:fMC_correction}
\frac{1}{f_L^{\rm MC}\xspace} \,\equiv \, \left\langle \frac{[C^{\phi\phi}_L]^{-1}}{(2L +1) f_{\rm sky}} \sum_{M=-L}^L \hat \phi_{LM}^* \phi_{LM} \right\rangle_{50\, {\rm G \,sims}} .
\end{equation}
In practice, this ratio is calculated for the binned spectra rather than individual $L$, using the binning scheme described in Sec.~\ref{sec:matter_1}.
Note that for large (e.g., galactic) masks $f_{\rm sky}$ is a reasonable approximate normalization, so that $f_L^{\rm MC}\xspace\approx 1$. However, when the mask contains a large number of small holes due to, e.g., point-source masking, the $f_{\rm sky}$ correction becomes a much worse approximation, as effectively much less sky area is lost after optimal filtering and reconstruction. We discuss this aspect in more detail in Appendices \ref{sec:holes} and \ref{sec:GAUSS}. Here, the inclusion of $f_{\rm sky}$ does not affect the result, since it simply amounts to a redefinition of $f_L^{\rm MC}\xspace$.
We also define the lensing power spectrum estimator
\begin{equation}\label{eq:auto_after-recon}
C^{\hat \phi_1 \hat \phi_2,\,{\rm RD}}_{L} \,=\, \big(f_L^{\rm MC}\xspace\big)^2\,\Big( \hat C^{\hat \phi_1 \hat \phi_2}_L \,-\, \text{RD-}N^{(0)}_L \Big) \,,
\end{equation}
where $f_L^{\rm MC}\xspace$ is defined as above.
However, the $(f_L^{\rm MC}\xspace)^2$ normalization calibrated on the cross-spectrum may not be the correct normalization factor for the auto-spectrum.
Moreover, even in the presence of Gaussian lensing fields and LSS-uncorrelated masks, the estimator in Eq.~\eqref{eq:auto_after-recon} does not provide an unbiased estimate of the CMB lensing power spectrum, as it still retains the $N^{(1)}_L$ noise bias induced by signal-dependent contractions~\cite{Kesden:2003cc}. For a non-Gaussian lensing field, the estimator also retains the noise term involving the 3-point function of the lensing field ($N_L^{(3/2)}$\xspace).
Here we regard $N^{(1)}_L$ and $N_L^{(3/2)}$\xspace as part of the signal contained in $ C^{\hat \phi_1 \hat \phi_2,\,{\rm RD}}_{L}$, and only consider differences between the estimated power spectrum including these additional noise bias terms when using LSS-correlated and uncorrelated masks.
We run the entire end-to-end estimation pipeline 20 times for each mask (both correlated and uncorrelated), taking the `data' each time to be one of the NG lensed CMB simulations. Results are then averaged over these simulations to reduce Monte Carlo noise from variations of the unlensed CMB. However, the cosmic variance of the lensing field is not reduced, since all realizations in the NG set share the same single Websky\xspace $\kappa$ simulation that is currently available.
To make a comparison with the results obtained in Sec.~\ref{sec:matter}, we plot results for the reconstructed convergence field $\hat \kappa$ instead of $\hat \phi$ \footnote{Note that $\hat \kappa_{LM} \equiv \frac{1}{2} L(L+1) \, \hat \phi_{LM}$}.
The effect of the mean-field and RD-$N^{(0)}_L$ bias on the reconstructed $\hat \kappa \hat \kappa$-spectra is shown in the first row of Fig.~\ref{fig:lens-rec_auto_Whalo} as relative differences between the reconstructed $\hat \kappa \hat \kappa$-spectrum and the true Websky\xspace $ \kappa \kappa$-spectrum for three different $W_{\rm halo}$ masks. From left to right, the reconstructed auto-spectra are plotted without mean-field and RD-$N^{(0)}_L$ corrections, then subtracting the mean field only, and finally subtracting both mean field and the RD-$N^{(0)}_L$. The main correction to the reconstructed $\hat \kappa \hat \kappa$-spectra comes from subtracting the RD-$N^{(0)}_L$ bias. The rise on small scales is related to the unsubtracted $N_L^{(1)}$\xspace. The second row of Fig.~\ref{fig:lens-rec_auto_Whalo} shows the effect of the mean-field subtraction on the relative differences between the cross spectrum $\hat \kappa \kappa$ and the true Websky\xspace $ \kappa \kappa$-spectrum.
Note that for correlated masks, the mean field is also important for the cross-correlation since the mask, and hence the mean field, is correlated to the signal.
The bias induced by the mask on the $\hat \kappa \kappa$ spectra is strongly reduced when we remove the mean-field term, especially at intermediate scales where the difference is then dominated\footnote{Note that the $N_L^{(3/2)}$\xspace\ signal is overestimated because the Websky\xspace simulations do not include post-Born lensing, which largely has an opposite sign.} by $N_L^{(3/2)}$\xspace. In all the plots of Fig.~\ref{fig:lens-rec_auto_Whalo}, the full-sky cases are shown in purple as reference. Similar results are obtained for all the other masks described in Sec.~\ref{sec:sims-masks}.
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.32\textwidth]{len-rec_auto_Whalo_withoutMFN0.pdf} &
\includegraphics[width=0.32\textwidth]{len-rec_auto_Whalo_withMFwithoutN0.pdf} &
\includegraphics[width=0.32\textwidth]{len-rec_auto_Whalo_withMFN0.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.321\textwidth]{len-rec_cross_Whalo_withoutMF.pdf}&
\includegraphics[width=0.61\textwidth]{len-rec_cross_Whalo_withMF.pdf} $\quad \,\,\,\,\,\,$\\
\end{tabular}
\caption{\emph{Top:} the left panel shows the raw reconstructed lensing auto-spectra, while the middle and right panels show the reconstructed auto-spectra after subtracting the mean field alone, and after subtracting both the mean field and the RD-$N^{(0)}_L$ noise bias, respectively. \emph{Bottom:} the reconstructed cross-spectra without and with the mean-field subtraction are shown on the left and right respectively. All these curves include the Monte Carlo normalization correction $f_L^{\rm MC}\xspace$.
The results are obtained using the $W_{\rm halo}$ masks for different values of the mass detection threshold $M^{\rm cut}$, whose corresponding $f_{\rm sky}$ is reported in the legend.
The purple lines correspond to the full-sky analysis, where, for the cross-correlation, the final difference is consistent with the $N_L^{(3/2)}$\xspace\ reconstruction bias, and for the auto-spectrum is dominated by $N^{(1)}_L$. Dashed lines show the results obtained using the LSS-uncorrelated masks $W^{\rm rot}_{\rm halo}$.
}
\label{fig:lens-rec_auto_Whalo}
\end{figure*}
\section{Numerical results}\label{sec:cmblens-results}
\subsection{CMB lensing power spectrum}
To isolate the effects due to the correlation between the mask and the lensing field, we computed the difference between the two-point correlation function of the reconstructed CMB lensing obtained with the $W_X$ masks and the one obtained with the rotated uncorrelated masks, $W^{\rm rot}_X$. The rotated results have no correlated mask effects, but retain all the other non-trivial mode-coupling effects due to cut sky and hole shapes, as well as $N_L^{(3/2)}$\xspace\ and $N_L^{(1)}$\xspace\ to the extent that they are not modified by an LSS-correlated mask.
We show the results of these measurements in Figs.~\ref{fig:lens-rec_auto}, \ref{fig:lens-rec_cross} and \ref{fig:lens-rec_auto_and_cross_1p7} for the reconstructed auto- and cross-spectra, showing the bias for all the masks $W_{\rm halo}$, $W_{\kappa}$, $W_{\rm CIB}$, $W_y$ and $W_{\rm rs}$. We estimated the error bars of these measurements computing the estimators of Eq.~\eqref{eq:lensing-power-spectra-esimator} and Eq.~\eqref{eq:lensing-cross-spectra-esimator} for both $W_X$ and $W^{\rm rot}_X$, using as ``data'' independent sets of 20 G simulations. The errors are then taken as the standard deviations of the differences between correlated and uncorrelated (randomly rotated) mask results.
As expected, the biases become larger as we increase the masked fraction of the sky. However, the amplitude of the lensing reconstruction biases is significantly reduced with respect to those obtained from directly masking the lensing field in Sec.~\ref{sec:matter_1} (see, e.g., Fig.~\ref{fig:lens-rec_auto} and Fig.~\ref{fig:masking_kappa_noapo} for a direct comparison). The optimal filtering used by the lensing reconstruction pipeline substantially reduces the fraction of the lensing information that is removed by the mask, both because the filtering recovers some of the CMB modes inside the mask holes, and because the lensing reconstruction itself is able to recover much of the information about lensing modes on scales larger than the hole size (see Appendix~\ref{sec:holes} for an analytic discussion).
The remaining biases induced by $W_{\rm halo}$ and $W_{y}$ (with $\theta_{1/2} = 5.1^\prime$) masks are mainly relevant on small scales.
The S4-noise case considered for the $W_{\rm halo}$ mask shows a similar trend, with a remaining power spectrum bias at the 2--5\% level for $L\gtrsim 2000$.
For the $W_{\kappa}$ and $W_{\rm CIB}$ masks, instead, the bias is roughly constant across all scales, and has a magnitude of $\sim 1-10\%$ of the signal depending on the $f_{\rm sky}$. The bias induced by $W_{\kappa}$ is larger, since this is the limiting case where the mask is 100\% correlated with the $\kappa$ field. The bias induced by $W_{\rm rs}$ is negligible for both SO and S4.
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.35\textwidth]{len-rec_auto_Whalo_err_s4.pdf}&
\includegraphics[width=0.35\textwidth]{len-rec_auto_Wtsz_err.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.33\textwidth]{len-rec_auto_Wcib_err.pdf} &
\includegraphics[width=0.33\textwidth]{len-rec_auto_Wkappa_err.pdf} &
\includegraphics[width=0.33\textwidth]{len-rec_auto_Wrs_err.pdf}\\
\end{tabular}
\caption{Effect of LSS-correlated masking on the reconstructed CMB lensing convergence auto spectrum obtained with the estimator of Eq.~\eqref{eq:auto_after-recon}. The mask biases are computed as the differences between reconstructions performed with correlated and uncorrelated masks. The plots show the amplitude of the bias relative to the true lensing power spectrum. Different LSS-correlated masks are shown in different panels. The foreground fields have been smoothed with a $\theta_{1/2}=5.1^\prime$ Gaussian beam prior to thresholding. Unless stated otherwise, we assumed an SO-like CMB temperature noise in the lensing reconstruction.
}
\label{fig:lens-rec_auto}
\end{figure*}
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.35\textwidth]{len-rec_cross_Whalo_err_s4.pdf}&
\includegraphics[width=0.35\textwidth]{len-rec_cross_Wtsz_err.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.33\textwidth]{len-rec_cross_Wcib_err.pdf} &
\includegraphics[width=0.33\textwidth]{len-rec_cross_Wkappa_err.pdf} &
\includegraphics[width=0.33\textwidth]{len-rec_cross_Wrs_err.pdf}\\
\end{tabular}
\caption{Same as Fig.~\ref{fig:lens-rec_auto} for the cross-spectrum between the reconstructed CMB lensing convergence field and the true lensing field.
}
\label{fig:lens-rec_cross}
\end{figure*}
\begin{figure}[!]
\centering
\includegraphics[width = \columnwidth]{smoothing_5p1_vs_1p7_v2.pdf}
\caption{Comparison between the masked CMB lensing convergence using $W_y$ and $W_{\rm CIB}$ masks constructed from foreground fields smoothed with different values of $\theta_{1/2}$. While the shape of the masked regions changes significantly, the total masked sky fraction is the same ($f^{\rm mask}_{\rm sky} =6.7\%$).}
\label{fig:smoothing_5p1_vs_1p7}
\end{figure}
In Fig.~\ref{fig:lens-rec_auto} and ~\ref{fig:lens-rec_cross}, the foreground fields used to build the $W_y$, $W_{\rm CIB}$ and $W_\kappa$ masks were smoothed with a Gaussian beam of $\theta_{1/2}=5.1^\prime$ prior to thresholding, similar to the beam of current \Planck~data. This operation leads to masks with large connected holes on the scale of the smoothing. However, future experiments such as SO and S4 will observe the sky at higher angular resolution. To investigate the sensitivity of our result to this smoothing scale we performed a similar analysis on the masks obtained by smoothing the foreground field with a $\theta_{1/2}=1.7^\prime$ beam prior to thresholding. In Fig.~\ref{fig:smoothing_5p1_vs_1p7}, we show the comparison of the masks obtained with the two different smoothing scales. Figure~\ref{fig:lens-rec_auto_and_cross_1p7} shows the measurements of the mask bias for this set of masks. From the direct-masking results obtained with these masks in Sec.~\ref{sec:matter_1}, we would expect larger biases at the level of the auto spectra, especially at smaller scales.
However, comparing Fig.~\ref{fig:lens-rec_auto_and_cross_1p7} with the results shown in Figs.~\ref{fig:lens-rec_auto} and \ref{fig:lens-rec_cross} using the larger smoothing scale, we actually see substantially smaller biases for the same masked sky fraction: reconstruction with optimal filtering recovers more information when the mask consists of a larger number of smaller holes. There is now only a small residual bias at $L\agt 1000$ for the $W_y$ mask, where the signal is anyway largely dominated by reconstruction noise.
The biases in the S4-noise case are instead completely negligible both for $W_y$ and $W_{\rm CIB}$.
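To build intuition for why many small holes are easier to fill than a few large ones, the following toy one-dimensional sketch applies a messenger-field Wiener filter to a masked Gaussian field. It is purely illustrative and not our actual pipeline: the power-law spectrum, noise level and hole geometry are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4096
k = np.fft.rfftfreq(n)
cl = np.zeros(k.size)
cl[1:] = k[1:] ** -2.0                      # red (lensing-like) toy spectrum
z = np.sqrt(cl / 2) * (rng.standard_normal(k.size)
                       + 1j * rng.standard_normal(k.size))
sig = np.fft.irfft(z, n)                    # Gaussian field with spectrum cl
noise_var = 1e-2
data = sig + np.sqrt(noise_var) * rng.standard_normal(n)
nl = noise_var * n                          # noise power in rfft convention

def wiener_inpaint(data, mask, n_iter=500):
    """Messenger-field iteration for the masked Wiener filter."""
    x = np.zeros_like(data)
    for _ in range(n_iter):
        t = mask * data + (1 - mask) * x    # messenger field
        x = np.fft.irfft(np.fft.rfft(t) * cl / (cl + nl), n)
    return x

def random_holes(n_holes, width):
    m = np.ones(n)
    for c in rng.integers(0, n - width, n_holes):
        m[c:c + width] = 0
    return m

# approximately the same masked fraction, different hole sizes
for n_holes, width in [(64, 8), (8, 64)]:
    m = random_holes(n_holes, width)
    rec = wiener_inpaint(m * data, m)
    print(n_holes, width, np.std((rec - sig)[m == 0]))
\end{verbatim}
For a red spectrum, the extrapolation error inside a hole grows with the hole size, so the configuration with many small holes is reconstructed with a smaller residual, in qualitative agreement with the behaviour discussed above.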
The reconstructed CMB lensing field can also be used to construct templates of the lensed CMB in order to perform delensing of the observed CMB data, in particular the B-mode polarization (see \cite{POLARBEAR:2019snn,ACT:2020goa} for recent applications on data). As discussed in \cite{Fabbian:2013owa}, an unbiased measurement of the lensing potential at $L\lesssim 1500$ would be sufficient to resolve with sub-percent accuracy the lensing B-mode signal, which is the part of the signal for which the coupling induced by lensing is the most non-local. The mask biases on the reconstructed CMB lensing map and power spectra at these angular scales are small for the most realistic masks we considered, and even smaller for the S4-like observations, with high resolution and low noise, a regime where delensing would bring larger improvements for cosmological constraints. As such, we do not expect the mask biases to become a significant problem for CMB internal delensing for future data sets.
\begin{figure*} [t!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.35\textwidth]{len-rec_auto_Wcib1p7_err_s4.pdf} $\qquad$ &
\includegraphics[width=0.35\textwidth]{len-rec_auto_Wy1p7_err_s4.pdf}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.35\textwidth]{len-rec_cross_Wcib1p7_err_s4.pdf} $\qquad$ &
\includegraphics[width=0.35\textwidth]{len-rec_cross_Wy1p7_err_s4.pdf}\\
\end{tabular}
\caption{Bias due to the LSS-correlated masking on the reconstructed CMB lensing convergence auto spectrum (top panels) and cross-spectrum (bottom panel) for a subset of the cases shown in Fig.~\ref{fig:lens-rec_auto}. For these results the masks were constructed smoothing the foreground fields with a $\theta_{1/2}=1.7^\prime$ Gaussian beam prior to thresholding (instead of $\theta_{1/2}=5.1^\prime$ for Fig.~\ref{fig:lens-rec_auto}).}
\label{fig:lens-rec_auto_and_cross_1p7}
\end{figure*}
To estimate the impact of post-Born lensing on the correlated mask biases, we ran the entire end-to-end mask bias estimation pipeline described in Sec.~\ref{sec:recon} using the pBNG and BNG simulations, computing $f_L^{\rm MC}\xspace$ corrections and the mean field of the QE on the pBG simulation set. For both the pBNG and BNG simulations we built a new correlated foreground mask, thresholding the corresponding $\kappa$ field such that $f^{\rm mask}_{\rm sky} = 6.7\%$. This limiting case of a $\kappa$ mask is the only one we considered for this test. Post-Born lensing modifies the shape of the $N_L^{(3/2)}$\xspace bias as shown in Fig.~\ref{fig:cross_pb}, where the results obtained with rotated masks (dashed lines) are consistent with those obtained on the full sky (green and red lines, which are consistent with the expected $N_L^{(3/2)}$\xspace\ reconstruction bias).
Our results for correlated mask bias are calculated as differences between spectra on rotated and unrotated masks. As such, the $N_L^{(3/2)}$\xspace bias is mainly removed when taking this difference and the impact of post-Born lensing is only important to the extent that the correlated mask changes the post-Born contribution to $N_L^{(3/2)}$\xspace compared to the Born case. Since the amplitude of $N_L^{(3/2)}$\xspace is relatively small, the overall impact of post-Born lensing is small (see Fig.~\ref{fig:cross_pb2}). Neglecting post-Born effects is therefore not expected to significantly affect the other results shown in the following for other LSS-correlated masks.
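Schematically, the debiasing reduces to differencing two reconstructions that use the same mask geometry at correlated and uncorrelated sky positions. A minimal healpy sketch follows, where the \texttt{reconstruct} callable is a hypothetical placeholder for the Wiener-filtered QE pipeline of Sec.~\ref{sec:recon}:
\begin{verbatim}
import healpy as hp
import numpy as np

def masked_auto_bias(cmb_map, mask_corr, mask_rot, reconstruct, lmax):
    """Mask bias on the kappa auto spectrum: spectrum obtained with the
    LSS-correlated mask minus the one obtained with the rotated mask."""
    cl = {}
    for name, mask in [("corr", mask_corr), ("rot", mask_rot)]:
        kappa_hat = reconstruct(cmb_map * mask, mask)  # QE on masked CMB
        fsky = np.mean(mask ** 2)                      # crude fsky correction
        cl[name] = hp.anafast(kappa_hat, lmax=lmax) / fsky
    # Reconstruction biases (N0, N1, N^(3/2), mean field) largely cancel in
    # the difference, to the extent they do not depend on the mask position.
    return cl["corr"] - cl["rot"]
\end{verbatim}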
\begin{figure}[!]
\includegraphics[width = \columnwidth]{len-rec_cross_v2_Wkappa_postb_MCcorr_v2.pdf}
\caption{Differences between the cross-spectra of the reconstructed CMB lensing fields and the true fields relative to the full-sky power spectrum of the true fields. All these curves include mean-field subtraction and $f_L^{\rm MC}\xspace$ normalization correction.
The results are obtained by performing lensing reconstruction after masking the sky with $W_{\kappa}$ masks constructed from the post-Born and Born $\kappa$ maps of Ref.~\cite{Fabbian:2017wfp} (solid blue and orange respectively). Dashed lines show the results obtained using the LSS-uncorrelated masks $W^{\rm rot}_{\kappa}$. These are consistent with the results of lensing reconstruction on the full-sky (green and red lines), where the observed discrepancy is only due to $N_L^{(3/2)}$\xspace effects.}
\label{fig:cross_pb}
\end{figure}
\begin{figure}[!]
\includegraphics[width = \columnwidth]{len-rec_autocross_Wkappa_postb_v2.pdf}
\caption{
Bias due to the LSS-correlated masking on the reconstructed CMB lensing convergence auto spectrum (in blue and orange) and cross-spectrum (in green and red) relative to the true full-sky $C_{L}^{\kappa\kappa}$ for $W_{\kappa}$ masks. Results including post-Born lensing are shown in solid lines and those in the Born approximation as dashed lines. All these curves include the $f_L^{\rm MC}\xspace$ correction and mean-field subtraction but retain the RD-$N^{(0)}_L$ bias.}
\label{fig:cross_pb2}
\end{figure}
\subsection{Cross-correlation with CMB lensing}\label{sec:xcorr}
In the previous section we showed that using optimal filtering in CMB lensing reconstruction recovers some of the information lost by masking LSS-correlated sky areas, reducing the expected biases in the reconstructed CMB lensing power spectrum. However, this does not guarantee that the CMB lensing field is properly recovered at the sky locations masked in the CMB maps used for lensing reconstruction. In this section, we quantitatively assess this problem using statistics involving the cross-correlation between CMB lensing and external tracers.
\subsubsection{CIB and tSZ cross-correlation}\label{sec:xcorr_cib_tsz}
\begin{figure*}[!htbp]
\includegraphics[width=0.33\textwidth]{kXy_tsz_99p4_5p1_pipeline.pdf}
\includegraphics[width=0.325\textwidth]{kXy_mask_summary.pdf}
\includegraphics[width=0.33\textwidth]{kXcib545_summary.pdf}
\caption{In the left panel, we show an example of the bias induced by the LSS-correlated masking on the cross-correlation power spectrum between CMB lensing and the tSZ Compton $y$ parameter (green) for SO. In blue, we show the $N_L^{(3/2)}$\xspace bias to $C_{L}^{\kappa y}$ due to the non-Gaussianity of the lensing field itself, measured on the full sky. The cross-correlation power spectrum of the reconstructed CMB lensing field with the $y$ map when masking the sky with an LSS-uncorrelated mask is consistent with the $N_L^{(3/2)}$\xspace bias (orange) measured on the full sky. The middle and right panels show the mask bias to the cross-correlation power spectrum between CMB lensing and $y$, and between CMB lensing and the CIB emission at 545 GHz, respectively, for the most significant masks considered in this work. Such biases are larger than those obtained for the reconstructed CMB lensing auto-spectrum and cross-spectrum shown in Figs.~\ref{fig:lens-rec_auto} and \ref{fig:lens-rec_cross}. Dashed lines show the expected fractional difference of the power spectrum computed using the true $\kappa$ fields on a masked sky compared to its expected value on the full sky (for display purposes shown divided by a factor of 2). The optimal filtering used for the lensing reconstruction recovers a large fraction of the information lost by masking.}
\label{fig:xcorr-summary}
\end{figure*}
To measure the mask biases on $C^{\kappa y}_{L}$ and $C_{L}^{\kappa \mathrm{CIB}}$ we used a pipeline similar to the one employed to measure the biases on $C^{\kappa\kappa}_{L}$. We computed the cross-correlation power spectra of the Websky\xspace $y$ map and of the 545 GHz CIB map with the lensing map reconstructed from CMB maps masked with our LSS-correlated masks. We then subtracted the same results calculated using the LSS-uncorrelated (rotated) masks. As for the CMB lensing auto-spectrum, this procedure isolates the effect of the LSS-masking from other reconstruction biases, which in this case are only due to the $N_L^{(3/2)}$\xspace bias~\citep{Fabbian:2019tik}. We chose in particular the 545 GHz channel for the CIB emission as it has been successfully employed in delensing studies and offers a good trade-off between signal-to-noise and dust contamination \cite{Manzotti:2017net,Larsen:2016wpa,Aghanim:2018oex}.
In the left panel of Fig.~\ref{fig:xcorr-summary}, we show that the $N_L^{(3/2)}$\xspace bias in cross-correlation obtained for the lensing field reconstructed from an LSS-uncorrelated mask and an SO noise level is consistent with the result obtained from full-sky reconstructions. The full-sky $N_L^{(3/2)}$\xspace bias for $C_{L}^{\kappa y}$ is consistent with the one expected for a tracer probing the matter density at a median redshift of $z\approx 0.6$ without post-Born corrections (see, e.g., Fig. 13 of Ref.~\cite{Fabbian:2019tik}).
The other panels of Fig.~\ref{fig:xcorr-summary} show the amplitude of the mask bias relative to the true cross-correlation power spectrum of the Websky\xspace maps for a few masks representative of masking infrared sources and galaxy clusters for an SO-like survey. For $C_{L}^{\kappa y}$, masking of infrared sources could bias the measured power spectrum by $\sim 2\%$ for an experiment with a \Planck-like angular resolution, or half this for an experiment with an SO-like resolution. Cluster masking has a more dramatic impact, introducing biases of 10--30\% in the measured power spectrum, with a broad peak around the median scales of the masked objects (in our case roughly $6^\prime$). This case is of course an extreme example, useful to quantify the error on the recovered correlation inside the masked regions; in practical analyses that use $C_{L}^{\kappa y}$ directly for scientific purposes, cluster masking is avoided. However, the optimal filtering recovers a significant amount of information: applying the LSS-correlated mask on the true field would reduce the $C_{L}^{\kappa y}$ power spectrum by a much larger amount. As an example, in the middle panel of Fig.~\ref{fig:xcorr-summary} the dashed lines show the fractional difference between $C_{L}^{\kappa y}$ computed on a masked sky and the one computed on the full sky using the true $\kappa$ field. Even for the most extreme cases of cluster masking, optimal filtering recovers about 80\% of the signal for $L\lesssim 500$ and reduces the bias by a factor of $\sim 2$ at $L\gtrsim 1000$.
For $C_{L}^{\kappa \mathrm{CIB}}$, despite the non-negligible correlation between CIB and tSZ due to the infrared emission of galaxies in galaxy cluster environments \cite{Planck:2015emq,Maniyar:2020tzw}, the mask biases are smaller ($\lesssim 5\%$) for the masks considered in this study. For the most realistic cases related to infrared point sources shown in Fig.~\ref{fig:xcorr-summary}, we found a reduction of power of about 2\%, approximately constant over the scales most relevant for delensing ($L\lesssim 2000$). As these variations are small, we do not expect the delensing efficiencies for CIB-delensing to be strongly impacted by masking biases.
We performed a similar analysis with CMB lensing maps reconstructed from CMB maps with an S4-like noise level. We focussed on a few specific aggressive masks: $W_{y}$ and $W_{\rm CIB}$ threshold masks removing $f_{\rm sky}^{\rm mask}=7.7\%$, computed with a Gaussian smoothing of $\theta_{1/2}=1.7^\prime$, and $W_{\rm halo}$ with $M^{\rm cut}=10^{14}M_\odot/h$. For the latter, shown in Fig.~\ref{fig:xcorr-summary} for SO noise levels, we observed a reduction of the bias on $C_L^{\kappa y}$ and $C_L^{\kappa \mathrm{CIB}}$ by a factor of $\sim 2$ for $L\lesssim 2000$ and by a factor of $\sim 3$ for $2000\lesssim L \lesssim 4000$, thanks to the improved performance of the optimal filtering. Similar trends can be observed for the threshold masks, where the reduction of the biases compared to an SO-like noise is even larger: going from an SO-like to an S4-like noise, for the $W_{y}$ mask the median bias across all the angular scales changes from 16\% to 6\% for $C_L^{\kappa y}$ and from 1.6\% to 0.2\% for $C_L^{\kappa \mathrm{CIB}}$.
\subsubsection{Cluster mass calibration}
The abundance of galaxy clusters as a function of mass and redshift is a highly-sensitive probe of cosmology: it strongly depends on the growth rate as well as on the geometry of the universe. The main challenge for the application of cluster abundance in cosmology is the inference of the true mass of the cluster from observable quantities such as X-ray, tSZ luminosity or optical richness. Gravitational lensing of light sources behind clusters is one of the most promising
techniques to estimate their masses as it is sensitive to the total matter distribution and less affected by complex details of baryonic physics in dense environments. CMB-cluster lensing might be particularly useful for estimating the mass of high-redshift clusters, for which it is difficult to observe background galaxies with sufficient sensitivity, and for providing estimates complementary to galaxy weak-lensing for low redshift clusters as it is sensitive to different systematic effects \cite{Madhavacheril:2017onh}.
Since the signal-to-noise expected for each cluster in CMB lensing maps, even for futuristic surveys, is well below 1 for clusters of $ M_{500,c}\approx 10^{14}M_{\odot}\xspace/h$ \cite{Raghunathan:2017cle}, cluster masses are usually computed as the average mass of a set of clusters \cite[e.g.,][]{Miyatake:2018lpb,Raghunathan:2017qai}. We therefore quantified the impact on the recovered mean cluster halo mass by stacking the CMB lensing maps reconstructed from different masked fields at the location of the clusters. We selected the objects in the Websky\xspace halo catalogue mimicking a complete mass-limited sample for SO-like noise, described in Sec.~\ref{sec:sims-masks}, and estimated the mass of the clusters from the radial profile of the stacked convergence map with the \texttt{cmbhalolensing} public code\footnote{\url{https://github.com/simonsobs/cmbhalolensing}}. For this purpose, we stacked cut-outs of $25^\prime\times 25^\prime$ stamps around the location of each cluster from our full-sky maps and binned the radial profile of the stack in 25 radial bins. We estimated the covariance of each radial profile by splitting our cluster sample into 192 sub-regions, corresponding to the objects located within a HEALPix pixelization with nside=4, and adopting a jackknife resampling approach following Ref.~\cite{DES:2018myw}. In the fit we assumed the redshift of the stack to be equal to the mean redshift of the cluster sample, $\bar{z}=0.63$, and used points at a distance $r<10^{\prime}$; we also accounted for the mass dependence of the halo profile concentration parameter $c$ using the scaling relation of Ref.~\cite{duffy2008}. We then quantified the mask bias by comparing the fitted mass value with the one obtained by stacking the full-sky reconstructed $\kappa$ map. To account for the finite sky coverage of future ground-based surveys such as SO and S4, we rescaled the jackknife covariance estimated over the full sky by the sky fraction covered by those experiments before performing the fit. For this purpose we assumed a common sky fraction of $f_{\rm sky}^{\rm obs}=50\%$.
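The spatial jackknife step can be summarized as follows; a schematic sketch, where \texttt{profiles} is a hypothetical array of per-cluster radial profiles and the coordinate arrays are assumed inputs:
\begin{verbatim}
import healpy as hp
import numpy as np

def jackknife_covariance(profiles, ra_deg, dec_deg, nside=4):
    """Leave-one-region-out jackknife: clusters are grouped by the
    nside=4 HEALPix pixel containing them (192 regions on the sky)."""
    regions = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)
    labels = np.unique(regions)
    n = len(labels)
    # mean stacked profile with one region removed at a time
    means = np.array([profiles[regions != lab].mean(axis=0)
                      for lab in labels])
    d = means - means.mean(axis=0)
    cov = (n - 1) / n * d.T @ d        # jackknife covariance estimate
    # For the SO/S4 forecasts, this full-sky covariance is then rescaled
    # by the observed sky fraction (f_sky^obs = 50%) before the fit.
    return cov
\end{verbatim}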
Several methods have been proposed to extract the CMB-cluster lensing signal from CMB temperature and
polarization maps, with different levels of optimality and sensitivity to extragalactic foreground contamination \cite{Seljak:1999zn,Dodelson:2004as,Holder:2004rp,Lewis:2005fq,Hu:2007bt,Raghunathan:2019tsz,Madhavacheril:2018bxi}. The standard QE in particular is known to be sub-optimal and biased low for low CMB noise levels \cite{Maturi:2004zj,Hu:2007bt,Yoo:2008bf}. We quantified this bias by computing the mean cluster mass from the radial profile obtained by stacking the Websky\xspace $\kappa$ at the cluster locations and assuming the same covariance as for the full-sky reconstructed map. This value is consistent with the mean mass of the sample estimated from the values in the Websky\xspace halo catalog ($\bar{M}_{500,c}=2.64\cdot 10^{14} M_{\odot}\xspace/h$), while the one obtained with the full-sky reconstructed map is biased low by $\approx 1\sigma$ ($\hat{M}^{\rm full-sky}_{500,c}=(2.29\pm 0.15)\cdot 10^{14} M_{\odot}\xspace/h$). As a consistency check, we estimated the mean mass from the profile obtained from convergence maps reconstructed from CMB maps masked with $W_X^{\rm rot}$ and found it to be consistent with the results obtained from the full-sky reconstructed maps.\\*
\begin{figure}[!]
\includegraphics[width=\columnwidth]{haloprofile_mask_summary.pdf}
\caption{Azimuthally averaged radial profile of the convergence maps stacked at the locations of SO-detected clusters. Different colours show the results obtained on $\kappa$ maps reconstructed from differently masked CMB maps, assuming an SO noise level. Masking CMB maps with LSS-correlated masks causes a suppression of the signal at small scales compared to the full-sky reconstruction, and modifications of the profile at intermediate scales when masks remove large sky fractions. The full-sky results do not retrieve an unbiased profile, as they have been derived with a standard QE reconstruction; as such, results obtained by stacking directly on the CMB lensing convergence maps of the Websky\xspace suite are shown for reference to quantify the importance of the QE bias.}
\label{fig:stacked halo}
\end{figure}
Figure~\ref{fig:stacked halo} shows a comparison of the radial profiles obtained with different LSS-correlated masks. The changes in the halo profile are mainly concentrated towards the halo centre, while the largest scales are mostly unaffected. The CIB masks that remove the brightest infrared sources induce changes at the $\approx 1\%$ level in the profile core, but do not significantly affect the mass estimation, as the differences with the results obtained using the full-sky reconstructed map are $\lesssim 0.5\sigma$. Masking all the clusters in the sample before lensing reconstruction shows that only about 30\% of the signal at the halo location can be recovered, as the fitted mass is $\hat{M}_{500,c}=(0.84\pm 0.17)\cdot 10^{14} M_{\odot}\xspace/h$. As discussed above, this masking choice is over-conservative; however, results obtained with $W_{\rm halo}$ and $W_{y}$ masks are similar if the removed sky fraction is comparable, as these masks are correlated.
We therefore also considered a less extreme case where just the brightest clusters are masked, computing the cluster mass from a convergence map reconstructed from a CMB map with a $W_y$ mask removing $f_{\rm sky}^{\rm mask} = 2.3\%$ at a smoothing scale of $1.7^\prime$. This removes a similar sky fraction to a halo mask with the redshift-dependent selection function of the \Planck~cluster catalog ($f_{\rm sky}^{\rm mask}=1.6\%$) \cite{Planck:2015koh} if the clusters are masked within a radius of $2\theta_{500,c}$ from their centres. In this case we observe a bias on the recovered mean cluster mass of $\approx 2.5\sigma$. To investigate whether the mask bias affects clusters in particular redshift bins, we repeated the analysis masking only clusters at $z<0.6$, $z>0.6$ and $z>1$. Accounting for the increased statistical uncertainties due to the lower number of sources, we found that the bias is reduced to $1.1\sigma$ for clusters at $z>0.6$, while for objects at $z>1$ the recovered mass is consistent with the one expected from the sample within $\sim 0.5\sigma$. The bias, however, becomes much more important for low-redshift clusters: for objects at $z<0.6$ we detected the mask bias at $\sim 3.5\sigma$. This is likely due to the fact that the angular size of the masked objects is about twice as large as that of objects at $z>0.6$, and therefore optimal filtering is less effective in recovering information.
The mask bias is still measurable at $\sim 1.5\sigma$ significance for less aggressive $W_y$ masks removing $f_{\rm sky}^{\rm mask} = 0.6\%$ at a smoothing scale of $1.7^\prime$. These results suggest that the mask biases will not significantly affect the science case related to cluster mass calibration from CMB lensing mass in the regime where this is the most accurate technique, but more care has to be taken when calibrating the mass of low redshift objects.
We repeated the analysis on CMB lensing maps reconstructed from CMB maps having an S4-like noise level. We focused in particular on a very aggressive case where we masked the sky prior to reconstruction with a $W_{\rm halo}$ mask with $M^{\rm cut}=10^{14}M_{\odot}\xspace/h$ (consistent with an S4-like cluster sample), and later stacked the reconstructed $\kappa$ at the locations of the same halos that were masked. Despite being unrealistic, such an extreme case is useful to assess the performance of the optimal filtering in recovering small-scale information in the maps. We found that the mass estimated from the stack is biased at the $\sim 1.2\sigma$ level compared to the results obtained by stacking a $\kappa$ map reconstructed from full-sky observations. Optimal filtering recovers about 90\% of the expected signal at the halo location for S4 noise levels. The same estimate assuming SO-like noise levels and the same mask gives a much larger bias ($\sim 7\sigma$) in the estimated cluster mass, with only about 50\% of the expected signal properly recovered. Furthermore, we found that the conclusions on the redshift dependence discussed above for the SO noise level and the SO cluster sample hold also for the S4 noise and cluster sample, with a mask bias of $\sim 3\sigma$, $\sim 1.8\sigma$ and $\sim 1\sigma$ for objects located at $z< 0.6$, $z>0.6$ and $z>1$, respectively. Since the mask bias we considered here for S4 gives only a marginal detection significance, despite being an extreme, over-conservative case, we do not expect realistic foreground masks to severely impact cluster mass calibration for S4.
Given that the estimator we employed in this section is both biased and sub-optimal, and that our noise level is higher than that of a full minimum-variance lensing reconstruction (since we only used temperature data), a bias in the estimated cluster mass over the full sample could be important even if only the brightest fraction of the detected clusters is conservatively masked prior to the reconstruction.
However, a more targeted analysis should be carried out to accurately assess this effect for future data sets, including all the complexity of the cluster selection function of each experiment.
\subsubsection{Mask-CMB lensing deflection correlation}
In \citetalias{Fabbian:2020yjy}\xspace we presented an analytic model of the mask bias on standard CMB pseudo-$C_\ell$ angular power spectrum estimators. This can be evaluated as an effective correction to the lensed CMB correlation function $\tilde\xi(r)$ given by
\begin{equation}
\Delta \tilde{\xi}(r) \approx \partial_r \tilde\xi(r)\, \bar{\Delta}(r),
\label{generaldeconvolved}
\end{equation}
which is then converted into a correction to the $C_{\ell}$'s. In the last equation, $\bar{\Delta}(r)$ is the average over the unmasked area of the change in the separation of points due to lensing
\begin{equation}
\bar{\Delta}(r)= 2\frac{\langle \alpha_r(\boldvec{x}) {W}(\boldvec{x}) {W}(\boldvec{x}')\rangle}{\langle {W}(\boldvec{x}) {W}(\boldvec{x}')\rangle},
\label{delta_def}
\end{equation}
where $\boldvec{x}$ and $\boldvec{x}'=\boldvec{x}+\vr$ are directions in the sky, $\vr$ is their separation vector, and $\alpha_r$ is the component of the deflection field ${\boldsymbol{\alpha}}=\nabla\phi$ parallel to $\vr$. $\bar{\Delta}(r)$ does not have a general analytical expression but can in principle be calculated empirically. The numerator of Eq.~\eqref{delta_def} can be expressed in terms of the cross-correlation power spectrum between the E-modes of the spin-1 masked deflection field $\alpha {W}$, denoted by $E$, and the sky mask ${W}$. In the flat-sky limit this reads
\begin{equation}
\langle \alpha_r(\boldvec{x}) {W}(\boldvec{x}) {W}(\boldvec{x}')\rangle =- \int \frac{{\rm d} l}{2\pi} l C_l^{EW} J_1(lr).
\label{eq:EWpower}
\end{equation}
Thus, if we have an estimate of $C_L^{EW}$ we can calculate the CMB power spectrum bias for any mask. This can be constructed from simulations where we know $\kappa$ (and hence ${\boldsymbol{\alpha}}$) and the mask $ {W}$. Alternatively this can be estimated from data if we have reliable estimates of the lensing deflection field. We used this technique in \citetalias{Fabbian:2020yjy}\xspace to evaluate the bias induced by a threshold mask built on \Planck~GNILC maps on the CMB temperature power spectrum of \Planck~SMICA maps, and showed that our analytical prediction so computed matched the mask bias observed on data.
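As a concrete illustration, Eq.~\eqref{eq:EWpower} can be evaluated by a direct sum over integer multipoles once an estimate of $C_l^{EW}$ is available. The sketch below uses a placeholder power-law spectrum purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.special import j1

def alpha_W_W(r_rad, ells, cl_EW):
    """Flat-sky evaluation of Eq. (eq:EWpower):
    <alpha_r W W'>(r) = -(1/2pi) \int dl l C_l^{EW} J_1(l r),
    approximated as a sum over integer multipoles (dl = 1)."""
    integrand = ells * cl_EW * j1(np.outer(r_rad, ells))
    return -integrand.sum(axis=-1) / (2.0 * np.pi)

ells = np.arange(2, 4000)
cl_EW = 1e-9 / ells**2                      # placeholder spectrum
r = np.radians(np.linspace(0.05, 5.0, 50))  # separations from 3' to 5 deg
delta_bar_num = alpha_W_W(r, ells, cl_EW)   # numerator of Eq. (delta_def)
\end{verbatim}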
This estimate of the bias makes the weakest assumptions on the correlation between the mask and the lensing deflection; however, it is unclear whether the mask biases induced by the lensing reconstruction itself can affect our ability to predict the bias on the CMB power spectrum. To test this, we converted the reconstructed lensing potential $\hat{\phi}$ into a deflection field $\hat{{\boldsymbol{\alpha}}}$ with a spin-1 spherical harmonic transform\footnote{For this purpose we used the relation between the lensing potential and the E-modes of the deflection field $\hat{\alpha}^E_{LM}=\sqrt{L(L+1)}\hat{\phi}_{LM}$ and assumed $\hat{\alpha}^B_{LM}=0$.}. We then computed the cross-correlation power spectrum $C^{\hat{E}W}_{L}$ between the reconstructed masked deflection field $\hat{{\boldsymbol{\alpha}}}W$ and $W$, and compared it with $C^{EW}_{L}$ computed directly using the deflection field extracted from the Websky\xspace $\kappa$ map. We focused in particular on the most important realistic cases analysed in \citetalias{Fabbian:2020yjy}\xspace, i.e. the $W_{\mathrm{CIB}}$ and $W_y$ masks with $f_{\rm sky}=99.4\%$ and with both $\theta_{1/2}=5.1^{\prime}$ and $1.7^{\prime}$, which could all be detectable at more than $5\sigma$ significance for SO noise levels. We found the error on $C^{EW}_{L}$ for SO noise levels to be lower than $30\%$ for $L\lesssim 3000$, and lower than $10\%$ for the CIB masking, which cannot be avoided by component separation. We evaluated Eq.~\eqref{eq:EWpower} with $C^{\hat{E}W}_{L}$ and found that the analytically predicted mask bias on $C_{\ell}^{TT}$ is consistent with the one computed using the true $C^{EW}_{L}$ measured from Websky\xspace to better than $\sim 0.5\%$ for $L\lesssim 4000$. Even for the most aggressive masks considered here, i.e. $W_{\mathrm{CIB}}$ and $W_y$ removing $f^{\rm mask}_{\rm sky}=6.7\%$, the error introduced in the prediction of the mask bias using $C^{\hat{E}W}_{L}$ is less than 10\%. For S4 noise levels and the same aggressive masks, the errors on $C^{EW}_{L}$ are reduced by up to a factor of 2 on scales $L\gtrsim 1000$ compared to the SO noise case, and we therefore expect the accuracy of the analytical predictions of the biases to improve further at lower experimental noise levels. All these errors are small enough that the mask biases can be predicted and their statistical significance reduced to below the detection level.
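For reference, the $C_L^{\hat{E}W}$ estimate described above can be sketched in a few lines of healpy. This is a schematic outline rather than our production code; \texttt{phi\_lm} and \texttt{mask} are assumed inputs with matching $\ell_{\rm max}$ and resolution:
\begin{verbatim}
import healpy as hp
import numpy as np

def cl_EW(phi_lm, mask, nside, lmax):
    """Cross-spectrum between the E-mode of the masked deflection field
    and the mask, using alpha^E_LM = sqrt(L(L+1)) phi_LM (see footnote)."""
    L = np.arange(lmax + 1)
    alpha_E = hp.almxfl(phi_lm, np.sqrt(L * (L + 1.0)))
    alpha_B = np.zeros_like(alpha_E)        # alpha^B_LM = 0 by assumption
    # spin-1 transform to the two real components of the deflection field
    a1, a2 = hp.alm2map_spin([alpha_E, alpha_B], nside, 1, lmax)
    # E/B decomposition of the *masked* spin-1 field
    E_lm, _ = hp.map2alm_spin([a1 * mask, a2 * mask], 1, lmax)
    W_lm = hp.map2alm(mask, lmax=lmax)
    return hp.alm2cl(E_lm, W_lm)
\end{verbatim}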
\section{Forecasts for future CMB experiments}
\label{sec:forecasts}
\begin{figure*}[t!]
\includegraphics[width = \textwidth]{snr_SO-S4noise_Lmin2_v2.pdf}
\caption{Detection significance of the mask bias in the CMB lensing power spectrum for future high-resolution ground-based experiments as a function of maximum multipole $L_{\rm max}$ included in the analysis and for different foreground masks. We assumed an observed sky fraction $f^\mathrm{obs}_\mathrm{sky} = 40\%$ and an SO-like lensing noise for all cases but those shown in red which assumed an S4-like CMB lensing noise. The dashed lines show the bias obtained directly masking the $\kappa$ field (see Sec.~\ref{sec:matter}) while the solid lines show the bias on the reconstructed $\kappa$ field from masked CMB maps. The lensing reconstruction pipeline described in Sec.~\ref{sec:recon} and including optimal filtering is very effective in recovering the information inside the masked sky area and reducing the importance of such biases in lensing analyses.
The horizontal black line highlights a statistical detection significance of $3 \sigma$ for reference.}
\label{fig:snr_SO}
\end{figure*}
In the previous sections, we have shown that LSS-correlated masks can introduce biases on the reconstructed CMB lensing power spectrum that could become non-negligible if not accounted for. We have shown that the biases on the CMB lensing power spectrum are likely to be negligible for radio sources for the immediate future, but may be more important for other masks (a $\sim 2\%$ effect, which can increase to $\sim 5\%$, though on scales where the lensing reconstruction noise is important; see Figs.~\ref{fig:lens-rec_auto} and \ref{fig:lens-rec_auto_and_cross_1p7}).
Below, we estimate the detectability of such mask biases for SO and S4 in terms of cumulative signal-to-noise, where the signal is the mask bias measured in Sec.~\ref{sec:cmblens-results} and the noise includes the sample and noise variance of each experiment. For this purpose we assumed a sky coverage of $f^{\rm obs}_{\rm sky} = 40\%$ and the realistic publicly available noise power spectra for the lensing potential reconstructed using only temperature modes\footnote{For SO we used the so-called baseline noise from \url{https://github.com/simonsobs/so_noise_models}. For S4 we took the noise power spectra available at \url{https://cmb-s4.uchicago.edu/wiki/index.php/Survey_Performance_Expectations}.}.
We fix all cosmological parameters, so the results are an upper limit on the impact on any cosmology constraint.
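A minimal sketch of how such a cumulative significance can be computed from per-multipole quantities, assuming a Knox-like Gaussian variance (the \texttt{bias}, \texttt{cl\_kk} and \texttt{nl\_kk} arrays are assumed inputs, and binning is ignored):
\begin{verbatim}
import numpy as np

def cumulative_snr(bias, cl_kk, nl_kk, fsky=0.4, lmin=2):
    """Cumulative detection significance of a per-L bias, assuming a
    Gaussian (Knox-like) variance per multipole."""
    L = np.arange(len(cl_kk))
    var = 2.0 / ((2 * L + 1) * fsky) * (cl_kk + nl_kk) ** 2
    snr2 = np.where(L >= lmin, bias ** 2 / var, 0.0)
    return np.sqrt(np.cumsum(snr2))  # significance as a function of L_max
\end{verbatim}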
The results for all the $W_X$ masks considered in this analysis are summarized in Fig.~\ref{fig:snr_SO}, where we show the detection significance of the mask biases as a function of maximum lensing multipole $L_{\rm max}$ included in the analysis. We do not include the results for the $W_\kappa$ mask since $\kappa$ is not directly observable but, for comparison, we also show the expected biases from directly masking the $\kappa$ convergence field as done in Sec.~\ref{sec:matter}.
Our results clearly show that using optimal filtering of the CMB maps for the lensing reconstruction, the biases are much smaller than they would be if $\kappa$ were masked directly (which would give a signal detectable with high significance, well above $5\sigma$ in future measurements).
For $W_{\rm halo}$, masking up to 1--2\% of the sky (consistent with the sky fraction removed by masking an SO-like tSZ cluster sample), the mask bias can be measured with a statistical significance $\lesssim 2\sigma$, while for cluster samples more similar to the S4 ones, where the mass limit is $10^{14}M_{\odot}/h$ and the masked sky fraction is $\sim 10\%$, the mask bias can be detected at $\lesssim 3\sigma$ in the reconstructed lensing field, using both the SO and the S4 noise. Similar results are observed for the threshold mask $W_y$ when the foreground field has been smoothed with a $\theta_{1/2}=5.1^\prime$ Gaussian beam and a similar sky fraction as in the $W_{\rm halo}$ case is masked, given that the two types of masks are correlated.
We note that cluster masking may not be used in final CMB lensing analyses, and the tSZ contamination can also be reduced by analysing component-separated CMB maps where components with a tSZ-like SED are projected out during component separation, or by using dedicated modified quadratic estimators \cite{Madhavacheril:2018bxi,Patil:2019zbm}. To assess the validity of the methods and to quantify potential residual emission in these alternative analyses, however, it is common practice to compare the results with more conservative analyses based on cluster masking, as also done in the \Planck~2018 lensing analysis \cite{PL2018}. As such, our results show that care is required when performing this kind of comparison, as mask biases may lead to misleading inconsistencies between CMB lensing estimates.
For the $W_{\rm CIB}$ (with $\theta_{1/2}=5.1^\prime$) masks, the bias detectability remains $\lesssim 1\sigma$ when the mask removes $\sim 5\%$ of the sky or less. For more aggressive masks removing $f_{\rm sky}^{\rm mask}\approx 7\%$, the bias will be detectable at more than $3\sigma$ significance. The increased significance is not unexpected, since the CIB is highly correlated with the CMB lensing field, especially at $L \lesssim 1000$, and therefore a mask that removes these peaks naturally enhances the mask bias effect.
Both the $W_y$ and $W_{\rm CIB}$ masks built from higher-resolution observations (with $\theta_{1/2}=1.7^\prime$ smoothing) produce biases that will be measured with lower statistical significance compared to the $\theta_{1/2}=5.1^\prime$ cases. They always stay $\lesssim 2\sigma$ for SO and $\lesssim 1\sigma$ for the S4 cases we considered, showing the improved performance of the optimal filtering in recovering information in the presence of lower noise levels and smaller holes.
For radio sources, $W_{\rm rs}$, the detection significance of the bias is below the $1\sigma$ level, whether we are masking all the radio sources up to the detection limit of SO or S4.
We finally stress that the bias for any Poisson source mask is expected to be very small, as for radio sources, since the mask is largely determined by random Poisson sampling of the background distributions rather than closely tracing lensing-correlated perturbations. For example, as long as masks removing infrared sources only remove Poisson sources (i.e. at very high or low redshift), rather than peaks of the full CIB field, their bias should also be negligible. Although masking peaks in the CIB emission is not common practice in CMB analysis, our $W_{\rm CIB}$ masks can potentially mimic the effect of masking bright infrared point sources, a fraction of which are strongly lensed objects that are therefore highly correlated with the matter distribution along the line of sight \cite{spt-dsfg2020} on much smaller scales than CMB lensing is sensitive to. We note that the Websky\xspace simulations were not constructed to reproduce the source number counts in the infrared, nor do they include any effect of magnification bias, which would affect the number of detected sources in CMB maps for a fixed noise level. However, preliminary analyses showed a good agreement between the expected source number counts at the highest fluxes from semi-analytical models \cite{Cai:2013wna,Lapi:2012xp} and the source counts expected from the halo model adopted in the Websky\xspace simulation to construct the CIB maps \cite{websky-ir}. Our Websky\xspace simulation results should therefore provide a reasonable ballpark estimate of the total effect, since the uncorrelated Poisson part of the distribution only affects the mask bias via the total $f_{\rm sky}$ masked.
We performed a similar forecast analysis for the mask biases in the cross-correlation between CMB lensing and external tracers presented in Sec.~\ref{sec:xcorr}. In this case we assumed the publicly available SO and S4 noise power spectra for the $y$ map achievable with an ILC component separation and for the lensing noise; for the CIB we fitted a white noise level and an effective beam from the noise-dominated regime of the beam-deconvolved power spectrum of the GNILC maps at 545 GHz. We obtained a noise level of $4.84\ \mathrm{Jy/sterad}$ and a Gaussian beam with $\theta_{1/2}=4.65^\prime$. We also considered future CIB measurements from CCAT-prime and assumed the beam and noise power spectra described in \cite{Choi:2019rrt}. As expected, biases in cross-correlation are more important than those reported above for the CMB lensing spectrum. We considered mainly threshold masks smoothed with $\theta_{1/2}=1.7^\prime$, as this is the case most relevant for future experiments, as well as $W_{\rm halo}$ masks. For SO, we found that $W_{\rm CIB}$ and $W_y$ threshold masks do not produce any significant bias on $C_{L}^{\kappa \mathrm{CIB}}$ if they remove less than 6\% of the sky, both for \Planck~and CCAT-prime noise levels and resolution. Biases due to $W_y$ masks are however more harmful for the analysis of $C_{L}^{\kappa y}$, even if only a minor fraction of the brightest clusters is removed. $W_y$ mask biases will in fact be detectable with a significance of $\sim 3\sigma$ if the mask removes 0.6\% of the observed sky, and of $5\sigma$ when masking 2.3\% of the sky (consistent with the sky fraction removed by masking \Planck~tSZ-detected clusters).
The significance increases to $10\sigma$ and $13\sigma$ for $W_{\rm halo}$ masks removing SO and S4-like clusters respectively. These masks will also leave detectable biases in $C_{L}^{\kappa \mathrm{CIB}}$ at $2.5\sigma$ and $4.5\sigma$ respectively.\\*
For an S4-like survey, we saw in Sec.~\ref{sec:xcorr} that the mask biases are reduced compared to an SO-like one; however, the noise of the reconstructed CMB lensing and $y$ maps is also lower and the detection significance might still be comparable to the ones of an SO-like survey. Nevertheless, we found that the overall detection significance of the mask biases of a given mask is reduced for an S4 survey compared to an SO-like one. Even for the extreme cases discussed in Sec.~\ref{sec:xcorr}, the mask biases on $C_{L}^{\kappa \mathrm{CIB}}$ are negligible as they will be measured with a detection significance $\lesssim 1.5\sigma$. For $C_{L}^{\kappa y}$ biases induced by $W_{\rm CIB}$ will not be detectable and the significance of mask biases induced by $W_{y}$ and $W_{\rm halo}$ masks will be reduced by about a factor of 2. This reduction is enough to reduce the detection significance of mask biases of Planck-like detected clusters to marginal significance ($\sim 2.5\sigma$) but more aggressive cluster masking will introduce biases in $C_L^{\kappa y}$ that will be detectable at high significance.
Finally, we note that if CMB polarization is also used in the lensing reconstruction, and the polarization mask is much smaller than the temperature mask as expected, all the biases that we found here for the temperature are expected to be reduced. Also, combined minimum-variance and optimized estimates~\cite{Hirata:2002jy, Hirata:2003ka, Carron:2017mqf} of the lensing potential should have these biases reduced as the relative importance of polarization-based reconstruction channels will increase for future observations.
\section{Conclusions}
\label{sec:conclusions}
In this work, we studied the impact that foreground masks correlated with the large-scale structure distribution could have on the reconstructed CMB lensing potential map and power spectrum.
Future high-resolution ground-based CMB experiments, such as SO and S4, will resolve much larger populations of extragalactic sources than current experiments, so masking larger areas around resolved tSZ-selected clusters and radio sources may be necessary, and its impact must be carefully quantified. In~\citetalias{Fabbian:2020yjy}\xspace, we already showed how such masks can potentially give large biases on the CMB temperature and polarization power spectra, even if the masked sky area is small.
Building on these results, in this paper we have shown that:
\begin{itemize}
\item Extragalactic foreground masks that are correlated with CMB lensing would give substantial biases on simple lensing power spectrum estimates if the lensing convergence field were masked directly (i.e., if no information inside the masked sky area could be recovered).
\item Significantly smaller biases are obtained on the reconstructed CMB lensing field and power spectrum if the CMB fields used for the reconstruction are optimally filtered, effectively
recovering signal inside small mask holes.
\item The simple halo-model and analytic Gaussian models that we derived provide a qualitative understanding of the effect when the lensing field is masked directly, though the non-Gaussianity of the extragalactic foregrounds must be modelled for accurate results.
\item Radio source masks, and any other mask constructed from Poisson sources, should give a bias that is safely negligible.
Masking foreground peaks in the tSZ or the infrared emission, or large halos, can instead give larger biases and should be avoided if possible.
\item Biases on the reconstructed CMB lensing power spectrum from CMB temperature will mostly be measured with only marginal significance ($\lesssim2\sigma$) for forthcoming experiments.
\item Biases in the cross-correlation between CMB lensing and tSZ or CIB are detected with higher significance for realistic foreground masks ($\lesssim3-4\sigma$ for SO) but can become much larger for more aggressive masks.
Thanks to the improved performance of the optimal filtering in the presence of low noise levels, the importance of such biases is greatly reduced for S4; this is, however, not enough to make them completely negligible.
\item At SO noise levels, cluster mass calibration with CMB lensing can be significantly affected by mask biases only for low-$z$ objects, but will still deliver unbiased results at high-$z$. This holds even if only the brightest clusters are masked. Cluster mass calibration for S4, conversely, will likely not be affected by mask biases when realistic foreground masks are employed.
\item Since masking biases in the reconstructed CMB lensing are small, the large masking biases expected for CMB power spectra can be predicted and marginalized self-consistently using CMB lensing as an LSS tracer and the analytical model we described in \citetalias{Fabbian:2020yjy}\xspace.
\end{itemize}
We showed that mask biases are small for lensing reconstruction using optimal CMB filtering. Alternative methods based on sky-map inpainting or source subtraction at the map-making level, combined with an isotropic QE instead of optimal filtering, might also be effective in avoiding sharp mask-edge effects and recovering information inside small holes, but they would have to be assessed separately. Applying an additional optimal filtering step on the reconstructed lensing field may also be able to recover information in small mask holes that were not filled at the level of the CMB map (c.f. Ref.~\cite{Mirmelstein:2019sxi}).
A detailed analysis including CMB polarization in the lensing reconstruction as well as polarized foregrounds is left for future work, but biases are likely to be smaller given the expected levels of polarization of radio and infrared sources and other extragalactic emissions.
A detailed study of the impact of these correlated mask biases on the estimation of cosmological parameters is also left for future work.
\section*{Acknowledgments}
We thank Anthony Challinor for useful discussion and Boris Bolliet, Emmanuel Schaan and Blake Sherwin for comments.
AL, GF and JC acknowledge support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement No. [616170], and support by the UK STFC grants ST/P000525/1 (AL) and ST/T000473/1 (AL and GF). GF acknowledges the support of the European Research Council under the Marie Sk\l{}odowska Curie actions through the Individual Global Fellowship No.~892401 PiCOGAMBAS. JC acknowledges support from a SNSF Eccellenza Professorial Fellowship (No. 186879). ML acknowledges the support of the Fondazione Angelo Della Riccia fellowship.
The sky simulations used in this paper were developed by the Websky\xspace Extragalactic CMB Mocks team, with the continuous support of the Canadian Institute for Theoretical Astrophysics (CITA), the Canadian Institute for Advanced Research (CIFAR), and the Natural Sciences and Engineering Council of Canada (NSERC), and were generated on the Niagara supercomputer at the SciNet HPC Consortium~\cite{2019arXiv190713600P}. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
\section{Introduction}
Lithium (Li) is one of the few elements that are produced minutes after the Big Bang, during the Big Bang Nucleosynthesis phase \citep[BBN,][]{Coc2014}. Predictions of the production of elements in this theory depend only on the baryon-to-photon ratio, a number that has been measured from the cosmic microwave background by WMAP \citep{WMAP} and Planck \citep{Planck2014}. The predicted amount of lithium formed in the early universe is then A(Li)\footnote{$A(Li)=\log(n_{Li}/n_{H})+12$}$=2.69\pm0.03$ \citep{Coc2014}.
The primitive lithium abundance, to be compared with predictions from the BBN, is measured from old, metal-poor halo stars. This value is expected to be indicative of the primordial abundance, since stars this metal-poor did not have enough time at formation to be enriched with material from the interstellar medium or from galactic sources producing Li \citep[e.g.][]{Prantzos2012}. \citet{Spitespite1982b, Spitespite1982a} found that halo dwarf stars with [Fe/H] between $-2.4$ and $-1.4$ dex and effective temperatures between $5700$ and $6300$ K share a similar abundance of A(Li)=2.2 dex, the so-called {\it Spite Plateau}, a result that has been confirmed in the halo \citep[e.g.][]{CharbonnelPrimas2005,Melendez2010} and in other environments, such as globular clusters \citep[e.g.][]{Bonifacio2002}. The discrepancy between the predicted A(Li) from the BBN and the measurements, which are over three times lower, is referred to as the {\it Cosmic Lithium Problem}. Notice that lithium is the only measured element produced during the BBN that shows such a discrepancy \citep{Coc2014}.
Moreover, to further complicate the picture, there is a decrease in the mean Li abundance and an increase in scatter at the low-metallicity end of the Spite plateau, for metallicities [Fe/H]$<-2.8$, known as the {\it meltdown} \citep[][and references therein]{Sbordone2010}, although some puzzling very metal-poor stars have been found with higher Li abundances, closer to the plateau \citep[e.g.][]{Bonifacio2018,Aguado2019}.
Solutions for the discrepancy between Li measurements and predictions from BBN range from modifications to the BBN theory to processes affecting the stellar interiors and changing the Li abundance in old stars \citep[see][for a review]{Fields2011}.
Lithium is an element often used as an indicator of chemical processes affecting the interior of stars, such as mixing, since it burns at temperatures ($2.5\times10^6$ K) and densities found in main-sequence stars. Thus, it is possible that a process such as diffusion \citep{Fu2015} or additional turbulent mixing \citep{Richard2005} is depleting the abundance in the stellar atmospheres of old stars which would not be indicative of the BBN lithium.
Metal-poor globular clusters are among the oldest objects in the Galaxy \citep[e.g.][]{DeAngeli2005}. As such, their lithium should resemble closely the abundance produced during BBN, making these systems important probes and tools to study the Cosmic Lithium Problem.
However, the measurement of the Li abundance requires high-quality spectra, not easily obtained for the majority of cluster main-sequence stars, which are too faint. Thus, the Li abundance in dwarfs is known only for a handful of Galactic clusters: M4 \citep{Mucciarelli2011, Monaco2012}, NGC6397 \citep{Lind2009, GH2009}, NGC 7099 \citep{Gruyters2016}, NGC6752 \citep{Pasquini2005,Shen2010}, 47 Tuc \citep{DOrazi2010,Dobrovolskas2014}, Omega Centauri \citep{Monaco2010}, and M92 \citep{Bonifacio2002}. In most of these clusters, the measured Li closely resembles that of the Spite Plateau, with the exception of 47 Tuc.
Additionally, globular clusters, once thought to be defined chemically by a single population of stars with no dispersion in chemical abundances, are now known to harbor populations with different light-element abundances \citep{BastianLardo18}. The second population of stars, which according to current scenarios would be born from the processed material of the first population, has high [Na/Fe] and low [O/Fe], producing the observed sodium-oxygen anticorrelation in globular clusters \citep{Carretta2009a,Carretta2009b}. Given that the thermonuclear reactions that produce this pattern occur at higher temperatures than those required to burn Li, it is expected that a second population of stars should have lower Li than the first population. However, only two clusters with Li measured in their main sequence show a hint of a Li-O correlation: 47 Tuc \citep{Dobrovolskas2014}, with no Li-Na anticorrelation, and NGC6752 \citep{Shen2010}. In M4 there is a weak but statistically significant Li-Na anticorrelation \citep{Monaco2012}, while other clusters are shown to have similar Li in first- and second-population stars. The lack of a Li anticorrelation could be produced if the polluter of the second population has a significant Li production, or if the material from the polluter is mixed with unprocessed material that preserved its initial lithium. Thus, studying Li in globular clusters can help in understanding their formation.
The lack of more Li measurements in main sequence stars of globular clusters due to their faintness encourages the use of a complementary method, proposed by \citet{Mucciarelli2012}, which uses lower red giant branch stars (LRGB).
Red giant stars undergo a series of structural changes that produce alterations to their surface chemical abundances. The first of these processes is the {\it first dredge-up} (FDU), where the surface convective envelope of the stars deepens in mass, mixing material from the surface with the chemically processed interior. This translates into a decrease in the carbon and lithium abundances and an increase in the nitrogen abundance.
Standard stellar evolutionary models predict no other surface abundance changes in the red giant branch (RGB) after the end of the FDU. However, observations provide evidence of modified Li, C, N, O abundances and C isotopic ratio after the RGB bump \citep{Gratton2000}. At this moment in stellar evolution, the advancing hydrogen-burning shell encounters and erases the discontinuity left in the chemical profile of the star by the deepest penetration of the convective envelope \citep{DenissenkovVandenberg2003}, allowing extra-mixing to proceed \citep[or do so more efficiently, e.g.][]{Chaname2005} bringing material from the stellar interior to the surface. The details of how this mechanism acts and how it affects the stellar interiors are, however, not well understood.
LRGB stars are located between the end of the first dredge-up and the luminosity function bump. The dilution of lithium during the FDU at the beginning of the red giant phase is mass and metallicity dependent, but it is well characterized by stellar evolutionary models. A complementary way to study Li in old stars is therefore to measure its abundance in LRGB stars, where A(Li) is constant at a given metallicity, mirroring the Spite plateau but at a lower value of A(Li)$\sim0.9-1.0$ dex, which accounts for the depletion during the FDU.
Moreover, the FDU mitigates the effects of diffusion, one of the main uncertainties for the interpretation of the Li abundance in dwarfs \citep{Mucciarelli2011}.
\citet{Mucciarelli2014} used this technique to study the primordial Li abundance of the globular cluster M54, located in the Sagittarius dwarf spheroidal galaxy, providing evidence that the primordial Li is the same there as in the Milky Way and thus that the {\it Cosmic Lithium Problem} is universal and not local. More evidence for this can be found in $\omega$ Cen, usually considered to be the remnant core of an accreted galaxy \citep[e.g.][]{Lee1999, Pancino2000}, which also shows a Li abundance consistent with the Spite plateau \citep{Monaco2010}. The discovery of Gaia-Enceladus, a disrupted dwarf galaxy that was once accreted by the Milky Way and now forms part of the galactic halo, allows a new way to study the primordial Li content outside our Galaxy, confirming once again the universality of the cosmic Li problem \citep{Molaro2020, Simpson2020}.
Confirming that LRGB stars can also be used to study the formation of globular clusters, \citet{Mucciarelli2018} measured Li in LRGB stars of $\omega$ Cen, finding an extended Na-Li anticorrelation. This distribution, however, seems to be rather complex: the most metal-rich stars in the cluster always show low Li abundances, while the metal-poor stars can show either low sodium and normal Li, or high sodium with normal or depleted Li abundances.
Thus, as demonstrated by these works, the study of LRGB stars makes it possible to characterize the Li abundance pattern of clusters and the primordial Li in systems where dwarfs are too faint. Following this complementary approach, in this work we study the Li abundance of lower red giant branch stars of 5 Galactic globular clusters, providing new insight into the dependence of the RGB Li plateau on metallicity and its use to calculate the primordial Li abundance in these systems.
Moreover, one of these clusters, NGC6838, is a metal-rich globular cluster. The low Li abundance of dwarfs in the relatively metal-rich cluster 47 Tuc ([Fe/H]=-0.8 dex and A(Li)=1.4-2.2 dex, \citealt{Dobrovolskas2014}) when compared to the Spite plateau suggests that there is a depletion mechanism acting on the main sequence at higher metallicities that is not found at lower metallicities, given that M4, with [Fe/H]=-1.1 dex, shows Li consistent with the Spite plateau \citep{Monaco2012}. The study of NGC6838 will allow us to test whether this is a peculiar pattern of 47 Tuc or whether all metal-rich globular clusters are Li depleted.
Also notice that NGC3201 is significantly younger ($\sim 2$ Gyr) than the rest of the studied clusters \citep{MarinFranch2009}.
In Section \ref{sec:obs} we report the observations and evaluate membership of our targets to the globular clusters. We measure atmospheric parameters (Section \ref{Sec:Atm}) and lithium and sodium abundances (Section \ref{sec:abund}) of these stars. We report results on the LRGB Li plateau, on the lack of a Li-Na correlation in these clusters, and on the discovery of a new Li-rich giant in NGC3201 in Section \ref{sec:results}. Our summary can be found in Section \ref{sec:summary}.
\section{Observations and membership} \label{sec:obs}
We selected five clusters in order to cover the entire metallicity range of halo globular clusters and to observe a large number of stars: NGC4590 (M 68), NGC6809 (M 55), NGC6656 (M 22), NGC3201, and NGC6838 (M 71). For each of these 5 clusters we selected our targets in the LRGB phase.
The spectroscopic observations correspond to the ESO program 095.D-0735 (PI. A. Mucciarelli) and were carried out using the FLAMES multi-object spectrograph \citep{FLAMES} at the Very Large Telescope (VLT).
The GIRAFFE fibers provide mid-resolution spectra with R$\sim 18000$.
The observations were performed with the GIRAFFE setups HR15N, sampling the lithium resonance doublet at $\lambda\sim 6708$\ \AA, and HR12, sampling the sodium D doublet at $\lambda\sim 5890-5896$\ \AA.
A total of five exposures of 45 minutes each for NGC6838, NGC6809, NGC6656, and NGC3201, and 10 exposures for NGC4590, were taken in the HR15N setup. Only one exposure for each star was needed in the HR12 setup, as the large equivalent width of the Na doublet requires a smaller signal-to-noise ratio to be measured.
The spectra were bias-subtracted, flat-fielded and wavelength-calibrated using the standard ESO pipelines\footnote{http://www.eso.org/sci/software/pipelines/}. In each exposure, some fibers were dedicated to measuring spectra of the sky. These are median-combined to create a master sky, which is then subtracted from each of our science spectra.
Radial velocities for each individual spectrum in the HR15N setup are measured using the IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} task {\it fxcor}. This task uses the cross-correlation method, where we use as template a synthetic spectrum from \citet{Coelho2005}, typical of a metal-poor red giant, with the resolution reduced to be similar to that of our spectra. The typical radial velocity precision is $\sim2-3\ \mathrm{km\ s^{-1}}$ for each HR15N spectrum of each star. This value is the formal fxcor error, related to the fitted function used to calculate the velocity \citep{TonryDavis1979}. After shifting every spectrum to its rest frame, we median-combine all spectra that correspond to a particular target to obtain an individual spectrum for each star that is later used in the analysis. By combining all the exposures we obtain an additional error on the radial velocity, corresponding to the standard deviation of the different measurements for the same target. These are typically $\sim0.6-2.2\ \mathrm{km\ s^{-1}}$.
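For illustration, the per-exposure cross-correlation step can be sketched as follows (the analysis uses IRAF {\it fxcor}; this is a simplified stand-in, and \texttt{log\_wave}, \texttt{flux} and \texttt{template} are hypothetical inputs on a common uniform $\log\lambda$ grid):
\begin{verbatim}
import numpy as np

C_KMS = 299792.458

def rv_crosscorr(log_wave, flux, template):
    """Radial velocity from the peak of the cross-correlation function.
    On a uniform log-lambda grid a Doppler shift is a pixel translation."""
    f = flux - flux.mean()
    t = template - template.mean()
    ccf = np.correlate(f, t, mode="same")
    shift_pix = np.argmax(ccf) - len(f) // 2
    dloglam = log_wave[1] - log_wave[0]
    return C_KMS * (10 ** (shift_pix * dloglam) - 1.0)

# After shifting each exposure to the rest frame, the final spectrum is a
# median combination, e.g.: spec = np.median(np.vstack(exposures), axis=0)
\end{verbatim}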
Signal-to-noise ratios per pixel (S/N) in the HR15N setup are typically $\sim70-300$, while the single HR12 exposure yields spectra with S/N$\sim20-70$. Given the lower S/N of these spectra and the smaller number of clearly visible lines, we do not measure a radial velocity from this setup, but instead assume the average radial velocity measured from the HR15N spectra.
After obtaining a unique radial velocity for each target star, we use these values to construct the radial velocity distribution of each cluster. Each distribution is fitted with a Gaussian profile, from which a mean and a standard deviation are derived and used as membership criteria.
\subsection{Cluster membership}
For each cluster, we exclude stars with radial velocities significantly different from the mean radial velocity of the sample stars (a difference larger than $3\sigma$). Additionally, we use the membership probability reported for each star in these clusters by \citet{VasilievBaumgardt2021}, based on the Gaia Early Data Release 3 \citep{GaiaDR2}, considering stars to be members if they have a membership probability $\mathrm{P_{mem}}>0.9$.
The cluster NGC6838 lies in a particularly contaminated field. Fitting a Gaussian to the radial velocity distribution, we obtain the mean radial velocity of the cluster and remove all stars outside $2\sigma$ as field contaminants. Only 35 out of 117 observed stars are within that radial velocity range and have astrometric parameters consistent with the cluster.
In NGC6809, from the originally observed 110 stars, 95 remain after excluding stars by their membership probability or radial velocity. In NGC6656, 101 of the 112 observed stars are consistent with being cluster members. In NGC4590, we kept 50 of 69 observed stars, and in NGC3201, 98 out of the 117 observed stars are consistent with the cluster membership.
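For concreteness, the selection just described can be summarized in a short sketch (a simplified, non-iterative version; the array names are hypothetical):
\begin{verbatim}
# Minimal sketch of the membership selection: Gaussian fit to the radial
# velocity distribution plus a Gaia-based membership probability cut.
import numpy as np
from scipy.stats import norm

def select_members(rv, p_mem, n_sigma=3.0, p_min=0.9):
    mu, sigma = norm.fit(rv)  # maximum-likelihood mean and dispersion
    keep = (np.abs(rv - mu) < n_sigma * sigma) & (p_mem > p_min)
    return keep, mu, sigma
\end{verbatim}
A robust application would iterate the fit after clipping (as done for NGC6838, with a stricter $2\sigma$ cut), since obvious field contaminants bias the initial Gaussian parameters.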
\begin{table}
\caption{Mean radial velocity (RV) for each cluster.}
\label{table:RVs}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Cluster & Mean RV & SD & Harris RV & Harris SD\\
& ($\mathrm{km\ s^{-1}}$) & ($\mathrm{km\ s^{-1}}$) & ($\mathrm{km\ s^{-1}}$) & ($\mathrm{km\ s^{-1}}$)\\
\hline
NGC4590 & $-94.2$ & $3.2$ & $-94.7$ & $2.5$\\
NGC6809 & $174.9$ & $4.6$ & $174.7$ & $4.0$\\
NGC6656 & $-146.5$ & $7.8$ & $-146.3$ & $7.8$\\
NGC3201 & $495.0$ & $3.8$ & $494.0$ & $5.0$\\
NGC6838 & $-22.9$ & $3.5$ & $-22.8$ & $2.3$\\
\hline
\end{tabular}
\tablefoot{Mean radial velocity (RV) for each cluster and standard deviation (SD) of each radial velocity distribution and comparisons with \citet[][2010 edition]{Harris1996}.}
\end{table}
All of our measured mean radial velocities are reported in Table \ref{table:RVs} and are consistent with those in the catalog by \citet[][2010 edition]{Harris1996}.
\section{Atmospheric Parameters} \label{Sec:Atm}
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.8\textwidth]{Figures/Li_Na_fit.pdf}
\end{center}
\caption{Spectra from one typical sample star in each of the clusters in the region of the Li line (left) and the Na doublet (right), and the respective synthetic fits to the lines to measure the abundances. Stellar parameters, NLTE abundances, and S/N of the spectra are indicated for each star.}
\label{fits}
\end{figure*}
Effective temperatures for stars are derived photometrically, using the (V-I) colour and the \citet{Alonso1999} relations.
For all of our clusters, we use the photometry of \citet{Stetson2019} and transform to Johnson (V-I) colors using the relation in \citet{Bessell1983}.
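Schematically, these photometric calibrations take the form
\begin{equation}
\theta_{\rm eff} \equiv \frac{5040\,{\rm K}}{T_{\rm eff}} = a_0 + a_1 X + a_2 X^2 + a_3 X\,{\rm [Fe/H]} + a_4\,{\rm [Fe/H]} + a_5\,{\rm [Fe/H]}^2 ,
\end{equation}
where $X$ is the dereddened $(V-I)_0$ colour; the coefficients $a_i$ and their ranges of validity are tabulated in \citet{Alonso1999} for each colour index.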
\begin{table}
\caption{Colour excess E(B-V) and distance modulus for each cluster.}
\label{table:ebv_mM}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Cluster & E(B-V) & Source\tablefootmark{a} & $\mathrm{(m-M)_0}$ & Source\tablefootmark{b}\\
- & - & - & mag & -\\
\hline
NGC4590 & $0.06$ & SF11 & $15.00$ & K+15\\
NGC6809 & $0.12$ & SF11 & $13.95$ & VBD18\\
NGC6656* & $0.33$ & S+98 & $13.60$ & H10\\
NGC3201* & $0.24$ & B+13 & $14.20$ & H10\\
NGC6838 & $0.28$ & SF11 & $13.80$ & H10\\
\hline
\end{tabular}
\tablefoot{
Adopted colour excess E(B-V) and distance modulus for each cluster. Clusters marked with * were corrected for differential reddening.\\
\tablefoottext{a}{SF11: \citet{SandF2011}; S+98: \citet{Schlegel1998}; B+13: \citet{Bonatto2013}.\\}
\tablefoottext{b}{VBD18: \citet{VandenBergDenissenkov2018}; K+15: \citet{Kains2015}; H10: \citet[][2010 edition]{Harris1996}.}
}
\end{table}
To calculate dereddened colors, we use extinction coefficients from \citet{McCall2004}.
The adopted colour excess E(B-V) and distance modulus for each cluster can be found in Table \ref{table:ebv_mM}.
Although the color excess of NGC6838 is high, it does not suffer from significant differential reddening \citep[$\langle \delta E(B-V)\rangle=0.035$ mag,][]{Bonatto2013}.
For the clusters NGC6656 and NGC3201 we corrected for differential reddening using the maps of \citet{AlonsoGarcia2012} with zero point E(B-V)$=0.33$ \citep{Schlegel1998} for NGC6656, as suggested by that work, and of Pancino et al. (in preparation), with zero point E(B-V)$=0.24$ for NGC3201 \citep{Bonatto2013}.
The surface gravity was calculated for each star using isochrone fitting with a set of MIST isochrones \citep{MIST0,MIST1}, restricting ages to be older than 12 Gyr. In particular, we placed each star in an effective temperature - absolute magnitude plane and compared its position with the theoretical isochrones, using the metallicity of the cluster estimated by \citet[][2010 edition]{Harris1996} to avoid degeneracies. We build a probability distribution with all the isochrone points within a $3\sigma$ radius of the input parameters. This method was preferred to measuring $\log g$ spectroscopically given the lack of a sufficient number of Fe II lines in our spectra. Additionally, \citet{MucciarelliBonifacio2020} recommend using photometric temperatures and gravities in low-metallicity stars of globular clusters, since spectroscopic parameters are systematically lower than photometric determinations, and inconsistent with the position of the giants in colour-magnitude diagrams.
We compared the calculated $\log g$ with $\log g$ estimated using bolometric luminosity of the giants and find a good agreement between both methods. Uncertainties obtained are $\Delta \log g\simeq 0.2$ dex. As for the error in effective temperature, we adopt typical uncertainties of $\Delta\mathrm{T_{eff}}\simeq125$ K, that correspond to the standard deviation of the colour relation used \citep{Alonso1999}.
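For reference, the bolometric consistency check uses the standard relation
\begin{equation}
\log g = \log g_\odot + \log\frac{M}{M_\odot} + 4\log\frac{T_{\rm eff}}{T_{{\rm eff},\odot}} + 0.4\left(M_{\rm bol} - M_{{\rm bol},\odot}\right),
\end{equation}
with the stellar mass taken to be the typical RGB mass of the cluster ($\sim0.8\,\mathrm{M_\odot}$) and $M_{\rm bol}$ obtained from the V magnitude, the distance modulus, and a bolometric correction.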
Microturbulence velocities are then calculated for each star using their effective temperature, $\log g$, and the relation in \citet{Bruntt2012}. The typical errors when using this relation are reported to be $0.13\,\mathrm{km\ s^{-1}}$ by the authors. However, this value depends on uncertainties in effective temperature and $\log g$, assumed to be $100$ K and $0.1$ dex respectively. Given that our uncertainties are slightly higher, we assume a typical error in microturbulence velocity of $0.15\,\mathrm{km\ s^{-1}}$.
We calculated metallicities for each star individually by measuring the equivalent width of Fe I lines using the code DAOSPEC \citep{DAOSPEC} through the wrapper 4DAO\footnote{http://www.cosmic-lab.eu/4dao/4dao.php} \citep{4DAO}. Then, the Fe abundances were derived with the code GALA \citep{GALA}.
Our Fe I line list was constructed using the spectra of the coolest and hottest giants in the sample. We visually inspected the spectra to identify lines that were visible in both stars, covering the entire effective temperature range. Although we initially considered a large list of lines, a spurious correlation between metallicity and effective temperature was identified in some of the clusters, with an extremely strong correlation in NGC3201. We therefore kept only lines that did not saturate in any cluster and were thus good indicators of the real metallicity of the star. Based on this test, we ended up selecting 6 lines, whose equivalent widths correlate linearly with temperature, to estimate the metallicity only, and used it as an additional membership criterion. We then calculated the mean metallicity of the sample and adopted that value for all stars of the cluster when determining chemical abundances.
We estimated the NLTE corrections for our selected Fe I lines, which could be relevant for metal-poor stars \citep{Bergemann2012}. Only two of the used lines have NLTE corrections reported by \citet{Mashonkina2016}; in the temperature and $\log g$ ranges of NGC4590 stars, they amount to $\sim0.072-0.085$ dex. These are the maximum values expected for our sample, because NLTE corrections grow toward lower metallicities. We have no way to apply Fe NLTE corrections consistently across the whole sample, but they appear to be smaller than the reported uncertainty in metallicity and would not significantly affect the measured abundances.
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/Li_and_VI_vs_V_4590_pm.pdf}
\includegraphics[width=0.6\textwidth]{Figures/Li_and_VI_vs_V_6809_pm.pdf}
\includegraphics[width=0.6\textwidth]{Figures/Li_and_VI_vs_V_6656_pm.pdf}
\end{center}
\caption{Lithium abundances of NGC4590 (top panels), NGC6809 (middle panels), and NGC6656 (bottom panels) as a function of V magnitude (left panels) alongside their respective color-magnitude diagrams (right panels). Blue symbols are Li upper limits. The approximate position of the luminosity function bump in each cluster is marked with a dashed line.}
\label{Li_vs_V_1}
\end{figure*}
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/Li_and_VI_vs_V_3201_pm.pdf}
\includegraphics[width=0.6\textwidth]{Figures/Li_and_VI_vs_V_6838_pm.pdf}
\end{center}
\caption{Lithium abundances in NGC3201 (top panels) and NGC6838 (bottom panels) as a function of V magnitude (left panels), with their respective color-magnitude diagrams (right panels). Blue symbols are Li upper limits. The Li-rich red giant ID 97812 is marked as a red star. The dashed lines show the position of the luminosity function bump.}
\label{Li_vs_V_2}
\end{figure*}
\begin{table}
\caption{Mean metallicity and number of members for each cluster.}
\label{table:Metmem}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Cluster & [Fe/H] & $\mathrm{SD}_{\mathrm{[Fe/H]}}$ & Harris [Fe/H] & Members\\
- & dex & dex & dex & -\\
\hline
NGC4590 & $-2.34$ & $0.10$ & $-2.23$ & 46\\
NGC6809 & $-1.79$ & $0.10$ & $-1.94$ & 90\\
NGC6656 & $-1.77$ & $0.12$ & $-1.70$ & 98\\
NGC3201 & $-1.58$ & $0.06$ & $-1.59$ & 83\\
NGC6838 & $-0.72$ & $0.07$ & $-0.78$ & 32\\
\hline
\end{tabular}
\tablefoot{Mean metallicity, standard deviation (SD), and number of members for each cluster. Metallicity values reported by \citet[][2010 edition]{Harris1996} are also included.
}
\end{table}
Table \ref{table:Metmem} shows the mean metallicity and final number of members in each cluster.
In NGC3201 we found some outliers in the metallicity distribution, which are likely non-members, and removed them with sigma-clipping, using a $2\sigma$ criterion. We aim for maximum purity of the sample rather than completeness, and thus, although we might exclude some members, this procedure increases the probability of membership by selecting stars with metallicities closer to the cluster mean.
Notice that while the mean metallicity of NGC3201 is similar in different literature sources, there is an ongoing debate about the existence of an intrinsic metallicity spread in the cluster \citep[][and references therein]{LA2021}. Given that we are selecting only stars with a metallicity similar to the mean of the cluster, the conclusion about the possible spread in the metallicity distribution should not change our results. The mean value of metallicity we obtain for NGC6809 is [Fe/H]$=-1.79\pm0.10$. The metallicity of this cluster seems to be controversial, with some measurements similar to the value we report \citep[e.g.][]{Kayser2008}, and others closer to the [Fe/H]$=-1.94$ value found in the Harris catalogue \citep[e.g.][]{Carretta2009}.
The final parameters for member stars in each of the 5 clusters can be found in Table \ref{table:params}, in Appendix \ref{ap:param}.
\section{Chemical abundances} \label{sec:abund}
\subsection{Lithium}
Lithium abundances are calculated using spectral synthesis around the Li doublet at wavelength $\sim6708$ \AA. The observed spectrum was compared to synthetic spectra generated with MOOG \citep[][2018 version]{MOOG}, with ATLAS9 model atmospheres \citep{ATLAS9}, and the abundance derived through $\chi^2$ minimization. The continuum level, one of the greatest uncertainties in the determination of Li abundances with this method, was set using a region of $\sim10\,$ \AA\, around the Li line. For the giants where the Li line is not detected, only upper limits are reported. We estimated the detection limits on Li using the relation by \citet{Cayrel1988} for the minimum equivalent width measurable in each spectrum, and calculated the lithium abundance corresponding to 3 times that limiting equivalent width.
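In the form commonly adopted, the \citet{Cayrel1988} relation gives the minimum measurable equivalent width as
\begin{equation}
\sigma_{\rm EW} \simeq \frac{1.5}{S/N}\sqrt{{\rm FWHM}\times\delta x},
\end{equation}
where $\delta x$ is the pixel size; the quoted upper limits then correspond to the Li abundance producing an equivalent width of $3\sigma_{\rm EW}$.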
We calculated non-local thermodynamic equilibrium (NLTE) corrections using the grid of \citet{Lind2009NLTE}. In NGC3201, NGC4590, and NGC6656, corrections are usually smaller than $0.1$ dex, while in NGC6809 corrections are even smaller ($<0.06$ dex). In contrast, NGC6838 has larger corrections, ranging from $0.01$ dex to $0.15$ dex. Two stars in NGC4590 fall outside the limits of the grid, and for these we use the closest grid point as the value of the Li correction.
The error in lithium is calculated by adding in quadrature the uncertainties associated with the synthetic spectra producing the best fit, which depend greatly on the positioning of the continuum, and the propagated errors in the stellar parameters, in particular the effective temperature, which produces the largest deviations in A(Li). The typical uncertainties due to the quality of the data are of the order of $\mathrm{\Delta A(Li)}\sim0.05$ dex, depending on the signal-to-noise ratio of the spectrum. This refers to uncertainties in the fitting procedure, including continuum placement and adjustments in the fit due to small changes in the radial velocity and line broadening.
NGC3201 is the cluster with the overall worst quality spectra, and as such it can show higher errors of up to $\mathrm{\Delta A(Li)}\sim0.08$ dex.
Uncertainties in the Li abundance arising from the propagation of errors in the stellar parameters are $0.10-0.17$ dex due to $\mathrm{T_{eff}}$, $0.00-0.02$ dex due to the metallicity, and $0.01-0.03$ dex because of errors in $\log g$. Errors propagated from microturbulence velocity (of $\sim0.01$ dex) are added linearly, given that microturbulence velocity depends on effective temperature and $\log g$. Our typical uncertainties are $\mathrm{\Delta A(Li)}\simeq0.16$ dex.
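As a worked example with representative mid-range values, the total error budget combines as
\begin{equation}
\mathrm{\Delta A(Li)} \simeq \sqrt{0.05^2 + 0.14^2 + 0.01^2 + 0.02^2} + 0.01 \approx 0.16\ {\rm dex},
\end{equation}
where the terms under the square root are, in order, the fitting uncertainty and the contributions from $\mathrm{T_{eff}}$, metallicity, and $\log g$, with the microturbulence term added linearly.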
As a sanity check, we calculated the Li abundance in the cluster M4 using the same spectra as \citet{Mucciarelli2011}. The A(Li) we calculate using the same parameters as reported in that work is very similar for the RGB stars, with an average difference $\mathrm{\left<\delta A(Li)\right>=0.09}$ dex. No attempt was made to reproduce the measurements in turn-off stars, as those are outside the scope of this paper. To check that our parameter determination was also consistent with the previous literature, we re-calculated stellar parameters from photometry directly, finding similar values and A(Li), with an average difference of $\mathrm{\left<\delta A(Li)\right>=0.16}$ dex.
\subsection{Sodium}
The sodium abundance was measured from the Na D doublet at $5890 - 5896$ \ \AA, also using spectral synthesis of the region, generating a grid of synthetic spectra with SPECTRUM \citep{SPECTRUM} and the same ATLAS9 model atmospheres. The choice of one code over the other was made purely for convenience, given our available wrappers and methods. However, we tested the difference between abundances obtained with the two radiative transfer codes: sodium abundances change by at most $0.05$ dex over the parameter range of our sample. The best fit was selected with $\chi^2$ minimization. Continuum placement is complicated by the low signal-to-noise ratio of some of our spectra in this region, and the errors in Na abundance account for this; uncertainties due to the continuum placement and quality of the fit can be as high as $0.10$ dex. The propagation of errors in the stellar parameters gives typical Na uncertainties of $\sim0.11$ dex due to $\mathrm{T_{eff}}$, $\sim0.14$ dex due to $\log g$, $\sim0.01$ dex due to metallicity, and $\sim0.02$ dex due to the microturbulence velocity. Typical total uncertainties are then $\mathrm{\Delta A(Na)}\simeq0.18$ dex.
We applied the Na NLTE corrections computed by \citet{Lind2011}, that can be substantial for sodium measured from the D doublet, even reaching values of $-0.5$ dex within our sample.
It was not possible to measure sodium in NGC6838 due to the presence of contamination by interstellar sodium in that region of the spectra, close to the position of the stellar Na lines.
Typical spectra from our sample in the regions of the Li line and Na doublet are shown in Figure \ref{fits}. These spectra illustrate the quality of our data and typical fits to the Li and Na lines used to measure abundances. In the right panels, the strong interstellar sodium lines are also visible.
\section{Results} \label{sec:results}
NLTE lithium and sodium abundances of the LRGB stars in the studied globular clusters are found in Table \ref{table:params}, in Appendix \ref{ap:param}. We also include LTE abundances in the online data. We show our measured Li abundances as a function of V magnitude and the position of member stars in their color magnitude diagrams in Figures \ref{Li_vs_V_1} and \ref{Li_vs_V_2}, left and right panels respectively. The magnitudes are also corrected by differential reddening in these figures. Lithium upper limits are shown as blue arrows. The position of the luminosity function bump \citep{Samus1995,Ferraro1999} is indicated for each cluster as a dashed line, and the background corresponds to the catalog from \citet{Stetson2019}, cleaned using membership probabilities by \citet{VasilievBaumgardt2021}, but without corrections by differential reddening. Lithium upper limits in all five clusters are above our reported measurements, and as such are consistent with the abundances reported.
We first notice here that some of the stars show unusual positions in the color-magnitude diagram. This is because our initial selection of targets was not done using the Stetson photometry. We study each of these stars independently and follow them throughout our analysis to make sure they are not contaminating our sample and confusing our results.
In NGC4590 the star ID35003 is not located on the locus of RGB stars of the cluster. This star has a proper motion, radial velocity, and metallicity consistent with NGC4590. We also checked an independent measurement of its $\log g$ using bolometric corrections and find very similar values from the two determinations. However, given its colour, this star has a higher temperature and a higher lithium abundance (A(Li)$=0.81$ dex) than other RGB stars of that magnitude, and it can be clearly identified in the left panel of Figure \ref{Li_vs_V_1}. Thus, although consistent with membership given our analysis, we consider that this star may not be part of the cluster due to its unusual position in the colour-magnitude diagram and remove it from further analysis.
NGC6656 shows a broad RGB even after correction for differential reddening. The bluer sequence does not show a particular spatial location, indicating that this is probably not the effect of residual differential reddening and that the broad sequence might instead be due to the spatial resolution of the adopted reddening maps. This cluster is suspected to have an intrinsic iron spread \citep{DaCosta2009}, but see also \citet{Mucciarelli2015}.
In some of the clusters we can identify the position of the end of the FDU and the RGB bump in both panels, with abrupt decreases in A(Li). In NGC4590 we can clearly identify stars going through the FDU, diluting their Li at V$\sim17$ mag, then a plateau formed by the LRGB, and a second decrease in A(Li) produced by the inclusion of giants located after the luminosity function bump. Most of the upper RGB stars have only Li upper limits. Cluster NGC6809 has two stars after the RGB bump, while NGC3201 has only one, which shows a smaller Li abundance than the rest of the RGB stars, at the level of the lower envelope of the Li distribution of the LRGB.
We are able to identify the end of the FDU in NGC3201, where stars at the bottom of the RGB decrease their Li abundances. The Li dilution due to FDU in NGC6656 or NGC6838 is not clearly visible. In NGC6656, the abundance slowly decreases as we move up in the giant branch, with a large scatter, and the plateau is not as clear as in other clusters.
We also notice that there is a giant in NGC3201 (namely giant ID 97812, a red star symbol in Figure \ref{Li_vs_V_2}) with unusually high Li abundance $\mathrm{A(Li)_{NLTE}=1.63\pm0.18}$ dex. Given that it is located before the onset of efficient extra-mixing, this Li-rich giant has probably experienced pollution from an external source. We analyze it further in Section \ref{Sec:Lirich}.
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_Teff_NGC4590.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_teff_binned_NGC4590.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_Teff_NGC6809.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_teff_binned_NGC6809.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_Teff_NGC6656.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_teff_binned_NGC6656.pdf}
\end{center}
\caption{Behavior of lithium abundances as a function of effective temperatures for NGC4590, NGC6809, and NGC6656, from top to bottom panels. The left panels show the abundances (black points) and upper limits (blue arrows). The right panels only show the measurements in gray and the binned Li abundances, considering equal-sized bins (blue points). Red squares mark the bins considered to be part of the Li LRGB plateau and used to calculate mean values reported.}
\label{Li_vs_Teff_1}
\end{figure*}
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_Teff_NGC3201.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_teff_binned_NGC3201.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_Teff_NGC6838.pdf}
\includegraphics[width=0.35\textwidth]{Figures/Ali_vs_teff_binned_NGC6838.pdf}
\end{center}
\caption{Same as Figure \ref{Li_vs_Teff_1} for NGC3201 (top panel) and NGC6838 (bottom panel).}
\label{Li_vs_Teff_2}
\end{figure*}
\subsection{LRGB plateau and the cosmic Li problem}
To better identify a possible LRGB plateau in the 5 globular clusters, we have binned the Li abundance as a function of effective temperature in Figures \ref{Li_vs_Teff_1} and \ref{Li_vs_Teff_2}. The left panels show the Li abundance as a function of effective temperature, where we include upper limits as blue arrows. The right panels show the binned abundance as blue points, considering only Li measurements (gray points) and no upper limits. To select stars that belong to the plateau, we define the end of the first dredge-up using the measured Li abundances: we perform a simple bilinear fit to the data in the A(Li)-$\mathrm{T_{eff}}$ diagram, from the highest temperature to the luminosity function bump, which is clearly signposted by the sharp drop of Li abundances with decreasing temperature.
The fit provides the approximate temperature where the abundance plateau starts, whilst the luminosity function bump marks its end. Notice that in NGC4590 there are no stars that have completed the first dredge-up, as suggested by our fit to the data. We identify the bins that belong to the LRGB plateau as red squares, which are the values we use to calculate the mean Li abundance of the plateau in each cluster. The stars are binned in equal-sized temperature ranges within each cluster, to make sure that there is a significant number of stars in each bin. Changing the bin size does not significantly alter our results.
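A minimal sketch of this procedure, with synthetic data standing in for our measurements (the break temperature, slopes, and scatter below are illustrative assumptions), is:
\begin{verbatim}
# Minimal sketch of the bilinear fit locating the end of the FDU and of
# the equal-sized binning in effective temperature (detections only).
import numpy as np
from scipy.optimize import curve_fit

def bilinear(t, t_break, a_break, s_hot, s_cool):
    # Two segments meeting at t_break: FDU decline (hot) and plateau (cool)
    return np.where(t >= t_break,
                    a_break + s_hot * (t - t_break),
                    a_break + s_cool * (t - t_break))

rng = np.random.default_rng(0)
teff = rng.uniform(4700.0, 5500.0, 120)                    # synthetic stars
ali = bilinear(teff, 5150.0, 0.95, 2.5e-3, 0.0) \
      + rng.normal(0.0, 0.08, teff.size)

popt, _ = curve_fit(bilinear, teff, ali, p0=[5100.0, 1.0, 2e-3, 0.0])
t_fdu_end = popt[0]                  # hot edge of the LRGB plateau

mask = teff < t_fdu_end
edges = np.linspace(teff[mask].min(), t_fdu_end, 6)        # equal-sized bins
idx = np.digitize(teff[mask], edges)
binned_ali = [ali[mask][idx == i].mean() for i in np.unique(idx)]
\end{verbatim}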
The binned Li abundance allows us to better identify the LRGB plateau present in the clusters. In NGC4590, the value of this plateau is A(Li)$=0.90\pm0.08$ dex; the error reported here is the standard deviation of the individual abundances of stars at the plateau. NGC6809 shows a clear decrease in Li at the start of the RGB until reaching a plateau at A(Li)$=1.03\pm0.08$ dex. The effect of the FDU can also be observed in NGC6656, where the LRGB reach a value of A(Li)=$0.88\pm0.09$ dex. In NGC6838 we identify a mean Li abundance in LRGB stars of A(Li)=$0.84\pm0.10$ dex. The only cluster where the LRGB do not show a constant value is NGC3201, where we observe a decrease in the abundance from higher to lower temperatures. The presence of a plateau is much harder to identify there, and its value depends slightly on the position of the bins. If we define the plateau using the 3 bins between $\sim4900$ K and $\sim 5050$ K, where the abundance seems to be constant, we find a mean abundance of A(Li)=$0.97\pm0.10$ dex. Changing the bin size and position, the mean A(Li) varies from $0.93$ to $0.98$ dex, with a similar standard deviation. The main difference between this and the other clusters in our sample is its age, but there is no clear explanation for the larger scatter near the plateau.
The scatter is fully consistent with the total error, which includes both the uncertainties due to quality of the spectra and uncertainties due to stellar parameters, especially effective temperature.
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.32\textwidth]{Figures/ali_bin_model_ngc4590.pdf}
\includegraphics[width=0.32\textwidth]{Figures/ali_bin_model_ngc6809.pdf}
\includegraphics[width=0.32\textwidth]{Figures/ali_bin_model_ngc6656.pdf}
\hfill
\includegraphics[width=0.32\textwidth]{Figures/ali_bin_model_ngc3201.pdf}
\includegraphics[width=0.32\textwidth]{Figures/ali_bin_model_ngc6838.pdf}
\end{center}
\caption{Lithium abundance as a function of effective temperature. Blue points are the binned abundance in equal-sized bins. We include theoretical models that fit the LRGB Li plateau. In the cluster NGC6809, we include models with $0.7$ and $0.8\,\mathrm{M_\odot}$.}
\label{Li_models}
\end{figure*}
We use our Li abundances together with theoretical stellar evolutionary models to predict an initial Li value in these clusters. We use the Yale Rotating Evolutionary Code \citep[YREC,][]{Pinsonneault1989,Demarque2008}, without diffusion, rotation, or overshooting. The models use mixing length theory for convection \citep{CoxGiuli1968}, which acts as the only mixing mechanism inside the star, with no extra mixing to modify the surface abundances after the RGB bump. Additionally, we use the 2006 OPAL equation of state \citep{RogersNayfonov2002} and the cross section for proton capture on lithium from \citet{Lamia2012}. Other input physics included in these standard models can be found in \citet{AG2016}. For each of the clusters we run models with a mass M$=0.8\,\mathrm{M_\odot}$, considered to be the typical turn-off mass in globular clusters, and a metallicity equal to the median value of the cluster, presented in Section \ref{Sec:Atm}.
The effects of diffusion, that can significantly change the lithium abundance in the main sequence evolutionary phase \citep{Richard2002}, are almost completely erased in the LRGB, given that the diffusion layers are mixed again by the deepening convective envelope. The lithium abundance in the LRGB of standard models is at most $0.07$ dex higher than when models include diffusion \citep{Mucciarelli2012}.
Models with an initial lithium abundance equal to the standard BBN value of A(Li)=$2.72$ dex produce LRGB Li values much larger than those observed. We therefore invert the procedure to estimate the primordial Li abundance: we adjust the initial Li abundance of the models until they match the Li measured in the LRGB, after FDU dilution. These results can be seen in Figure \ref{Li_models}. The model presented for the cluster NGC4590 has a metallicity of [Fe/H]$\sim-2.2$, the lowest metallicity available in our model grid. We verified that at such low metallicity the initial abundance should not change significantly between that value and the metallicity measured for the cluster ([Fe/H]=$-2.34$ dex).
Signatures of diffusion have been found in some globular clusters \citep[e.g.][]{Korn2006,Gruyters2016}. The overall effect of diffusion in main sequence stars is to lower the surface Li abundance when approaching the turn-off; however, observations suggest that the efficiency of diffusion is moderated by some competing mixing mechanism of unspecified origin. This makes it difficult to predict theoretically the Li abundance in turn-off stars of our studied clusters using the measured LRGB as a starting point. Thus, we do not attempt to predict a turn-off abundance, and instead we recover the initial, primordial Li abundances of these clusters with our models, considering that the effects of diffusion are erased during the FDU.
Predicted initial values in every case are very similar to the Spite plateau and A(Li) found in other globular clusters where the Li abundance can be measured in dwarfs. The inclusion of diffusion could decrease this predicted value by $0.07$ dex at most.
The primordial Li abundance of NGC6809 is predicted to be $\mathrm{A(Li)_0}=2.28$. We have also included a model with $0.7\,\mathrm{M_\odot}$, to see the effect that mass can have on the abundance of dwarfs in the cluster; Li burning, including in the pre-main sequence, can be substantially different in this mass range. By changing the mass of the model and re-adjusting the Li in the LRGB, the predicted initial lithium changes by $0.08$ dex, to a primordial value of $\mathrm{A(Li)_0}=2.20$, both within the range of values found in halo stars. The initial lithium predictions for the other clusters are $\mathrm{A(Li)_0}=2.16$ in NGC4590, $\mathrm{A(Li)_0}=2.14$ for NGC6656, $\mathrm{A(Li)_0}=2.21$ in NGC3201, and $\mathrm{A(Li)_0}=2.17$ in NGC6838. These predicted primordial A(Li) values match the Li abundances of the Spite plateau and of other globular clusters in the literature.
We report in Table \ref{table:limet} the LRGB Li abundances of the studied clusters and of others in the literature. We also report in this table our predicted initial Li abundances and the Li abundance at the turn-off of clusters where it has been measured. Note that these measurements are not homogeneous, and different temperature scales can change the Li abundance measurements. There does not seem to be any correlation between the Li abundance in the LRGB (or the predicted primordial value) and metallicity for the different clusters (Figure \ref{Li_vs_met_clusters}).
The use of a different temperature scale could also change our Li measurements and the estimated primordial abundances in the clusters. With a hotter temperature scale \citep[e.g.,][]{GHBonifacio2009}, our Li abundances would be higher by $\sim0.1$ dex. The use of different stellar evolutionary models, or different prescriptions within the model used (e.g., including overshooting or changing the efficiency of diffusion), can also modify the predicted cosmological Li \citep[][]{Mucciarelli2012}, and even make it higher than the Spite plateau, although differences in the temperature scale of those measurements should also be taken into account \citep[e.g.][]{MelendezRamirez2004}. Thus, our predictions should not be considered an attempt to obtain the exact primordial lithium of each cluster, but rather an estimate of the possible abundance range.
\begin{figure}[!hbt]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figures/feh_vs_liclusters.pdf}
\hfill
\end{center}
\caption{Li abundance in the LRGB of the 5 studied globular clusters as a function of [Fe/H] (filled red squares). The calculated primordial abundances for these clusters are the empty red squares. These are compared to literature measurements of the lithium abundance in the LRGB (black points at low Li abundances) and in the turn-off stars (grey points at high Li abundances) of globular clusters in the Galaxy. Also the Big Bang Nucleosynthesis prediction from \citet{Coc2014} is included and shown as a blue line, with the dashed blue lines representing its reported uncertainties.}
\label{Li_vs_met_clusters}
\end{figure}
\begin{table*}
\caption{Lithium abundance in the LRGB plateau and turn-off of galactic globular clusters.}
\label{table:limet}
\centering
\begin{tabular}{l c c c l}
\hline\hline
Cluster & [Fe/H] & $\mathrm{A(Li)_{LRGB}}$ & $\mathrm{A(Li)_{0}}$ & Reference \\
- & (dex) & (dex) & (dex) & -\\
\hline
NGC4590 & $-2.34$ & $0.90\pm0.08$ & $2.16$ & This work \\
NGC6809 & $-1.79$ & $1.03\pm0.08$ & $2.28$ & This work \\
NGC6656 & $-1.77$ & $0.88\pm0.09$ & $2.14$ & This work \\
NGC3201 & $-1.58$ & $0.97\pm0.10$ & $2.21$ & This work \\
NGC6838 & $-0.72$ & $0.84\pm0.10$ & $2.17$ & This work \\
\hline\hline
Cluster & [Fe/H] & $\mathrm{A(Li)_{LRGB}}$ & $\mathrm{A(Li)_{TO}}$ & Reference \\
\hline
NGC7099 & $-2.30$ & $1.10\pm0.06$ & $2.21\pm0.12$ & 1\tablefootmark{a}\\
NGC6397& $-2.10$ &$1.13\pm0.09$ & $2.25\pm0.01\pm0.09$ & 2\tablefootmark{a}\\
M4 & $-1.10$ & $0.92\pm0.01\pm0.08$ & $2.30\pm0.02\pm0.10$ & 3\tablefootmark{a}\\
M4 & $-1.31$ & - & $2.13\pm0.09$ & 4\\
NGC6752 & $-1.68$ & $0.83\pm0.15$ & - & 5\\
NGC1904 & $-1.60$ & $0.97\pm0.02\pm0.11$ & - & 6 \\
NGC2808 & $-1.14$ & $1.06\pm0.02\pm0.13$ & - & 6 \\
NGC362 & $-1.26$ & $1.02\pm0.01\pm0.11$ & - & 6\\
NGC6218 & $-1.37$ & $1.07\pm0.01\pm0.06$ & - & 7\\
NGC5904 & $-1.29$ & $1.02\pm0.01\pm0.11$ & - & 7\\
\hline
47 Tuc & $-0.76$ & - &$1.78\pm0.18$ & 8 \\
M92 &$-2.00$ & - &$2.36\pm0.19$ & 9 \\
$\omega$ Cen & $-1.50$ & - & $2.19\pm0.14$ & 10 \\
\hline
\end{tabular}
\tablebib{
(1) \citet{Gruyters2016}; (2) \citet{Lind2009}; (3) \citet{Mucciarelli2011}; (4) \citet{Monaco2012}; (5) \citet{Mucciarelli2012}; (6) \citet{DOrazi2015}; (7) \citet{DOrazi2014}; (8) \citet{Dobrovolskas2014}; (9) \citet{Bonifacio2002}; (10) \citet{Monaco2010}}
\tablefoot{
\tablefoottext{a}{These works present Li abundances in the turn-off and lower red giant branch.}
}
\end{table*}
\subsection{First and second population stars}
Measurements of the Na abundance were made in order to separate populations in the studied globular clusters. This is based on the idea that the more massive stars of the first population, now evolved, had an active nucleosynthesis cycle in their interiors, able to produce, for instance, fresh Na at the expense of O. Throughout the lifetime of these stars, the processed material is carried to the stellar surface and, through mass loss, stellar winds, and the planetary nebula phase, to the interstellar medium. The second population of stars is born from this polluted material, creating different populations of stars coexisting in the same cluster \citep[see e.g.][]{BastianLardo18}. The nature of the polluter is still a matter of open debate, with fast-rotating massive stars \citep{Decressin2007} and asymptotic giant branch stars \citep[AGB, ][]{VenturaDAntona2009} being the main contenders. Alternatively, there could be scenarios explaining this pattern in clusters that are not related to nucleosynthesis, or the generational scenario may be complicated by additional mechanisms \citep{Gratton2019}.
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.42\textwidth]{Figures/Li_vs_Na_teffcode4590.pdf}
\includegraphics[width=0.4\textwidth]{Figures/Li_vs_Na_teffcode4590_BIN_plateau.pdf}
\hfill
\includegraphics[width=0.42\textwidth]{Figures/Li_vs_Na_teffcode6809.pdf}
\includegraphics[width=0.4\textwidth]{Figures/Li_vs_Na_teffcode6809_BIN_plateau.pdf}
\end{center}
\caption{Lithium abundance as a function of sodium abundance. The left panels are color-coded by effective temperature and include all RGB stars before the luminosity function bump, while the right panels consider only Li measurements (no upper limits) of plateau LRGB stars, binned (blue points) to show possible trends between the two abundances. The top panels show NGC4590 and the bottom panels NGC6809.}
\label{Li_vs_Na_1}
\end{figure*}
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[width=0.42\textwidth]{Figures/Li_vs_Na_teffcode6656.pdf}
\includegraphics[width=0.4\textwidth]{Figures/Li_vs_Na_teffcode6656_BIN_plateau.pdf}
\hfill
\includegraphics[width=0.42\textwidth]{Figures/Li_vs_Na_teffcode3201.pdf}
\includegraphics[width=0.4\textwidth]{Figures/Li_vs_Na_teffcode3201_BIN_plateau.pdf}
\end{center}
\caption{Same as Figure \ref{Li_vs_Na_1}. Top panels are NGC6656 and bottom panels NGC3201.}
\label{Li_vs_Na_2}
\end{figure*}
Li, which is destroyed at relatively low temperatures by proton capture, is expected to be depleted in Na-enriched material, such as that from which second-population stars are born. An anticorrelation between Na and Li is then expected. However, in certain AGB stars, Li can be created in the interior through the Cameron-Fowler mechanism \citep{CF1971} and quickly transported to the surface by convection, where the cooler temperatures prevent its destruction by proton capture \citep{SackmannBoothroyd1992}. Thus, it may be relevant to compare the Li abundances of the first and second populations: the first population is expected to have a cosmological Li content diluted by the FDU, while the second population may show an abnormally high Li abundance if the polluter is a Li-enriched AGB star or if the ejecta of the polluters are mixed with material that has not burned Li.
Figures \ref{Li_vs_Na_1} and \ref{Li_vs_Na_2} show the behavior of A(Li) as a function of [Na/Fe] only for RGB stars in NGC4590 and NGC6809 (top and bottom panels of Figure \ref{Li_vs_Na_1} respectively), and NGC6656 and NGC3201 (top and bottom panels of Figure \ref{Li_vs_Na_2}). We have removed stars brighter than the RGB bump in these figures. Additionally, we consider only LRGB plateau stars in the right panel of these figures, by removing all stars that have not yet completed their first dredge-up. As mentioned in Section \ref{sec:abund}, we were not able to measure Na in NGC6838, thus we do not present results for that cluster.
Focusing only on LRGB stars, there is no clear correlation between Li and Na in NGC4590, NGC6809, NGC3201, and NGC6656. We do see some star-to-star scatter, but Li does not scale with Na. This, however, does not exclude the possibility of finding a trend if more stars are considered in the analysis.
As previously mentioned, there is no statistically significant anticorrelation in NGC6656. By eye, when considering all RGB stars of different effective temperatures in the top left panel of Figure \ref{Li_vs_Na_2}, objects with higher Na would seem to have slightly lower Li. However, when we look for possible correlations by binning both in effective temperature and Na, only the highest temperature bin shows a hint of an anticorrelation, which is, however, not statistically significant.
The lack of a clear Li-Na anticorrelation in our cluster sample needs further confirmation with additional data. In the literature, some clusters do show correlations between Li and other light-element abundances. NGC6752 presents a Li-O correlation \citep{Shen2010} and a Li-Na anticorrelation \citep{Pasquini2005}. NGC6397 has some Na-enriched stars that are Li poor \citep{Lind2009}. NGC2808 has some Al-enriched stars that are Li depleted \citep{DOrazi2015}. In M4 there is a tentative Li-Na anticorrelation \citep{Monaco2012} but no Li-O correlation \citep{Mucciarelli2011}. 47 Tuc shows no sign of a Li-Na anticorrelation \citep{Dobrovolskas2014}.
If it is confirmed that some clusters have a similar Li abundance in both populations, this would point to a higher Li than expected in the second population. This could mean that the birth material of these stars should have been mixed with relatively Li-rich material, pointing to AGB stars as possible polluters. Models have to be fine-tuned to produce such a pattern in globular clusters, given that the Li yields have great uncertainties depending on how physics such as mass loss is introduced \citep{VenturaDAntona2010}. If massive stars were the polluter, this scenario would require mixing the Na-rich material from the ejecta with unprocessed material that has a higher Li abundance. However, confirmation of the lack of a Na-Li anticorrelation is needed before anything can be firmly concluded about the mechanism behind the different populations in clusters.
Additionally, measurements for different clusters come from non-homogeneous sources that not only adopt different parameter scales, spectral qualities, and methods, but also provide abundances of different light elements. A homogeneous determination of properties and abundances would be a major improvement for gaining insight into the second-generation polluters.
\subsection{Li-rich giant in NGC3201} \label{Sec:Lirich}
Stars on the red giant branch experience abundance changes during the FDU, and then again at the luminosity function bump, where extra mixing acts. If a solar-like star enters the RGB phase with a meteoritic abundance A(Li)$=3.3$, its Li abundance after the FDU is expected to be A(Li)=$1.5$, considering FDU dilution alone. However, values can be much lower when additional ingredients, such as a much lower initial Li abundance, Li burning, and main sequence mixing, are taken into account. The precise value above which a giant is classified as enriched is actually mass and metallicity dependent: standard giants with A(Li) higher than $1.5$ dex can be found, as well as giants with lower abundances that could nevertheless have experienced a Li-enrichment process \citep{AG2016}. In spite of the predictions of canonical models, lithium-rich red giants, with Li abundances reaching or even exceeding the meteoritic value, are known to exist \citep[e.g.][]{WallersteinSneden1982, Monaco2011}.
Globular clusters present an advantage, with all of their giants sharing a similar mass, and possibly, a similar original Li content. Because of this, we can compare the Li abundance of the giants to abundances of other stars with similar parameters and at a similar evolutionary stage, making enriched objects much easier to identify. Although Li-rich giants are unusual in general, they are particularly rare in globular clusters. Only about a dozen giants are known to have a much higher Li abundance than other stars in the same evolutionary stage in a globular cluster. So far, Li enriched RGB stars have been found in NGC5272 \citep{Kraft1999}, NGC362 \citep{Smith1999, DOrazi2015rich}, NGC4590 \citep{Ruchti2011, Kirby2016}, NGC5053, NGC5897 \citep{Kirby2016}, 2 giants in NGC7099 \citep{Kirby2016}, $\omega$ Cen \citep{Mucciarelli2019}, and only one Li-rich star in NGC1261 \citep{Sanna2020}.
These are located all along the RGB phase, although AGB Li-rich stars have also been found \citep[e.g.][]{Kirby2016}. Some are located after the luminosity function bump of their respective clusters, where extra-mixing is expected to affect the abundance of stars and could be the reason behind the Li-enrichment. Before that point in evolution, other explanations must be invoked that require pollution from an external source or the presence of a binary companion to trigger Li production \citep{Casey2019}. In the case of pollution, the source could be an AGB companion that can produce additional lithium in its interior and could then be transferring mass to the RGB star; it could be a nova, that can produce Li during the thermonuclear runaway \citep{Starrfield1978, Izzo2015}; or a planet or brown dwarf accreted by the star, objects that preserve the Li they have at formation \citep{Alexander1967,SiessLivio1999}.
Here we present the discovery of one more Li-rich giant in a globular cluster, in this case NGC3201. The star ID 97812, with $\mathrm{A(Li)_{NLTE}}=1.63\pm0.18$ dex, is located before the luminosity function bump, and thus it is not expected to be enriched by internal Li production. Instead, pollution, either during the RGB phase or before, is probably the cause of the enrichment of this giant, although it is still possible that the presence of a binary companion is triggering the Li enhancement. Considering accretion as a possible scenario, we calculate the Li abundance of the star after the engulfment of a planet, using the models and parameters of \citet{AG2016}. Adopting as initial condition the Li abundance of the rest of the cluster, we calculate the engulfed mass of a hypothetical planet needed to explain the high A(Li) of this star. This model is shown in Figure \ref{planetengulfment}. For a Jupiter-like composition, a mass of $\mathrm{M_{planet}}=10.1\,\mathrm{M_{Jupiter}}=1.92\times10^{31}\,\mathrm{g}$ is needed, while if the engulfed object had an Earth-like composition, it would require a mass of $\mathrm{M_{planet}}=120\,\mathrm{M_{Earth}}=7.17\times10^{29}\,\mathrm{g}$. Although the number of Earth masses needed is large, the required mass of the Jupiter-like planet is within the range of masses of known exoplanets that can orbit close to their parent stars. Monitoring the radial velocity of this star would help establish whether its enhancement comes from planet engulfment or whether a binary companion is responsible for its high Li abundance.
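To illustrate the order of magnitude involved, a simple instantaneous-dilution estimate (not the full \citealt{AG2016} model; the envelope mass and compositions below are illustrative assumptions) already lands close to the quoted planetary mass:
\begin{verbatim}
# Back-of-the-envelope dilution estimate for planet engulfment, assuming
# instantaneous mixing into the convective envelope; the numbers are
# illustrative, not those of the full stellar models used in the text.
import numpy as np

M_SUN, M_JUP = 1.989e33, 1.898e30  # grams

def ali_after(m_planet, m_env=0.45 * M_SUN, ali_env=0.90,
              ali_planet=3.3, x_h=0.70):
    # Quantities proportional to the number of H atoms (the same hydrogen
    # mass fraction is assumed for a Jupiter-like planet and the envelope):
    nh_env, nh_p = x_h * m_env, x_h * m_planet
    li_env = 10 ** (ali_env - 12.0) * nh_env   # Li atoms (same units)
    li_p = 10 ** (ali_planet - 12.0) * nh_p
    return np.log10((li_env + li_p) / (nh_env + nh_p)) + 12.0

# Scan for the planet mass that reproduces A(Li) = 1.63:
for m in np.arange(1.0, 20.0, 0.5) * M_JUP:
    if ali_after(m) >= 1.63:
        print(f"M_planet ~ {m / M_JUP:.1f} M_Jup")  # ~8-9 M_Jup here
        break
\end{verbatim}
The full models differ in detail (the envelope mass evolves along the RGB, and part of the accreted Li can be burned), which is why the quoted masses are somewhat larger than this naive estimate.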
\begin{figure}[!hbt]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figures/planetengulfment.pdf}
\end{center}
\caption{Lithium abundance pattern of NGC3201. The Li-rich giant found (red star) could be explained by a model with planet engulfment (dashed line), where the hypothetical planet has a mass of $\mathrm{M_{planet}}=9.3\,\mathrm{M_{Jupiter}}$ (with Jupiter-like composition) or $\mathrm{M_{planet}}=110\,\mathrm{M_{Earth}}$ with Earth-like composition.}
\label{planetengulfment}
\end{figure}
\section{Summary} \label{sec:summary}
We calculated stellar parameters and measured Li and Na abundances of LRGB stars of 5 Galactic globular clusters, covering a wide range of metallicities from [Fe/H]=$-0.72$ to [Fe/H]=$-2.34$ dex.
We find a LRGB plateau in all of the clusters at different levels, all between A(Li)=$0.84-1.03$ dex, consistent with what has been found for other globular clusters previously.
Using theoretical models, we calculate the initial, primordial Li abundance in these clusters. The abundances found are similar to the Spite plateau value of halo stars, with A(Li)=$2.14-2.28$. However, we note that the exact predicted value could change with a different temperature scale or a different model. Thus, we use these predictions to draw conclusions about overall trends in Li abundances rather than about the exact value of the cosmological Li. As a caveat, we have to consider the possibility that the additional mixing operating during the main sequence, which affects the efficiency of diffusion, might transport some extra Li into the burning regions \citep[e.g.][]{Richard2005}. This would result in an underestimate of the initial A(Li) derived from LRGB measurements using standard stellar models.
Considering the uncertainties in Li abundances, our main conclusion is that all of the clusters are consistent with models that have evolved from the same initial Li abundance. This agrees with the idea of a constant Li abundance of stars at this metallicity range, confirming the large discrepancy between Big Bang Nucleosynthesis predictions and observations of main sequence field stars.
We find no correlation between the Li value in the LRGB plateau and metallicity. To further study a possible correlation, we also use literature data available for other clusters, finding no relation between A(Li) and [Fe/H].
The measured sodium abundance is used to distinguish between first and second populations in each cluster. We find no clear difference in Li abundance between Na-rich and Na-poor stars in any of the clusters. If this is confirmed, it could point towards a class of polluter stars that are able to produce Li, such as AGB stars, or the mixing of the processed Li-poor medium with additional unprocessed matter.
We summarize the main results for each of the studied clusters:
\begin{itemize}
\item {\bf NGC4590}: The median Li of LRGB stars in this metal-poor cluster is A(Li)=$0.90\pm0.08$ dex. Considering its metallicity of [Fe/H]$=-2.34\pm0.10$ dex, we calculate an initial Li abundance $\mathrm{A(Li)_0}=2.16$. There is no clear correlation between Na and Li in this cluster when we only consider LRGB stars.
\item {\bf NGC6809}: The distribution of LRGB Li abundances in the cluster presents a peak at A(Li)=$1.03\pm0.08$ dex. The primordial value predicted, considering a RGB mass of $0.8\,\mathrm{M_\odot}$ is $\mathrm{A(Li)_0}=2.28$. This is the cluster with the highest Li abundances in the LRGB plateau in our sample. It does not present a Li-Na correlation either.
\item {\bf NGC6656}: In this cluster, the LRGB plateau is located at A(Li)=$0.88\pm0.09$ dex, with a predicted initial value of $\mathrm{A(Li)_0}=2.14$.
\item {\bf NGC3201}: The RGB Li plateau is harder to identify in this cluster, as there appears to always be a small decrease in the abundance at decreasing temperatures. Considering this, we can define the LRGB stars between $\sim 4900$ K and $\sim 5050$ K and find a median abundance of A(Li)$=0.97\pm0.10$ dex. Calibrating models to that value we find an initial $\mathrm{A(Li)_0}=2.21$. There is a Li-rich giant in this cluster with A(Li)=$1.63\pm0.18$ dex, located before the luminosity function bump. Its evolutionary state indicates that its high Li abundance might be the product of external pollution and, possibly, an accreted planet.
\item {\bf NGC6838}: Although the number of stars in this cluster is small, we are able to find a Li plateau value of A(Li)$=0.84\pm0.10$ dex. This implies a primordial $\mathrm{A(Li)_0}=2.17$. Being the most metal-rich cluster in our sample, with [Fe/H]$=-0.72\pm0.07$ dex, we can compare its abundances with 47 Tuc. The similar LRGB Li abundance of NGC6838 with other clusters of lower metallicities suggests that, if all higher metallicity globular clusters experience main sequence depletion similar to 47 Tuc, the effect is mostly erased when they evolve to the RGB phase. It is also possible that 47 Tuc is a peculiar case of main sequence depletion.
More abundance measurements of clusters at this high metallicity are needed to understand if either NGC6838 or 47 Tuc are unusual when compared to similar clusters.
\end{itemize}
\begin{acknowledgements}
We would like to thank the anonymous referee for their careful reading of the manuscript and helpful comments and suggestions.
C.A.G. acknowledges support from the National Agency for Research and Development (ANID) FONDECYT Postdoctoral Fellowship 2018 Project number 3180668. This research was supported in part by the National Science Foundation under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements).
\end{acknowledgements}
\bibliographystyle{aa.bst}
\section{Introduction}
The question of identifying which effective field theories (EFTs) have UV completions, subject to general principles of unitarity and causality, has long been intimately tied to our understanding of constraints associated with the consistency of quantum gravity and the swampland program~\cite{Vafa:2005ui,Ooguri:2006in,ArkaniHamed:2006dz}.
The most well-studied aspect of this program has been the Weak Gravity Conjecture (WGC)~\cite{ArkaniHamed:2006dz}, which states that any ${\rm U}(1)$ gauge theory coupled to gravity must be accompanied by states in the spectrum that have charge-to-mass ratio greater than unity in natural units (where ``1'' corresponds to the ratio for large extremal black holes).
The WGC can be motivated by the demand that all black holes be able to decay~\cite{Susskind:1995da,Giddings:1992hh,tHooft:1993dmi,Bousso:2002ju,Banks:2006mm} and, in the standard model, is satisfied by light charged particles. On the other hand, for an Einstein-Maxwell system with no charged matter, the requisite charged states must be black holes. For example, consider
\begin{equation}
{\cal L} = {\cal L}_{\rm EM} + \Delta {\cal L},
\end{equation}
where ${\cal L}_{\rm EM} = R/2\kappa^2 - F^2/4$ is the Einstein-Maxwell Lagrangian and $\Delta {\cal L}$ comprises higher-derivative corrections. The extremal bound will be modified by the presence of $\Delta {\cal L}$~\cite{Kats:2006xp,Cheung:2018cwt,dyonic}, and the WGC requires that the extremal curve contain regions where the charge-to-mass ratio is larger than 1; see \Fig{fig:qmcurve}.
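At four-derivative order, a representative basis for these corrections is (normalization conventions vary across the literature)
\begin{equation}
\begin{aligned}
\Delta {\cal L} ={}& c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu} + c_3 R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} + c_4 R\, F^2 \\
&+ c_5 R_{\mu\nu}F^{\mu\alpha}F^{\nu}{}_{\alpha} + c_6 R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma} + c_7 (F_{\mu\nu}F^{\mu\nu})^2 + c_8 (F_{\mu\nu}\tilde{F}^{\mu\nu})^2 ,
\end{aligned}
\end{equation}
with the $c_i$ suppressed by appropriate powers of the cutoff.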
\begin{figure}[t]
\begin{center}
\hspace{-11mm}\includegraphics[width=0.45\columnwidth]{IR_UV_WGC.pdf}
\end{center}\vspace{-2mm}
\caption{Mass and charge parameter space for black holes (both plotted logarithmically). The unperturbed extremality line, at $Q=M$ in Planck units, is modified by higher-derivative operators to a new curve (red). We explore this curve in two regimes: the asymptotic IR (yellow), where the extremality condition is dominated by corrections induced by running of the Wilson coefficients from massless loops, and the threshold region (green), where the leading corrections are the finite coefficients induced by integrating out massive particles parametrically lighter than the Planck scale. The boundaries between these regions are stated for a completion of the higher-derivative operators at scale $m_*$. Black holes occupy the striped region, and we argue that the mass/charge curve always bends below the $M=Q$ line for healthy theories, thus satisfying the WGC.
}
\label{fig:qmcurve}
\end{figure}
It has long been known that unitarity and analytic properties of scattering amplitudes place bounds on the coefficients of higher-derivative operators like $F^4$ and $(\partial\phi)^4$ in nongravitational EFTs~\cite{Adams:2006sv}. The earliest example of a direct link between general constraints from causality and unitarity and the WGC was the observation \cite{ArkaniHamed:2006dz} that the correct signs for the $F^4$ corrections also give a correct-sign shift leading to $Q>M$ for extremal black holes. Causality shows up in various guises in the context of EFT bounds, including the impossibility of reliable observation of global superluminality~\cite{Adams:2006sv} and the sub-$s^2$ scaling of the amplitude in the Regge limit~\cite{Arkani-Hamed:2020blm}.
In a gravitational EFT, forward scattering bounds on four-derivative (e.g., $R^2$) operators encounter a well-known subtlety involving the $t$-channel singularity associated with on-shell graviton exchange~\cite{Adams:2006sv,Cheung:2014ega,Bellazzini:2015cra,Bellazzini:2019xts,Hamada:2018dde}. This can be circumvented by considering finite-$t$ (impact parameter) dispersion relations~\cite{Caron-Huot:2021rmr}, though the resulting lower bounds are negative. Note, however, that in any theory of quantum gravity with a weak coupling---such as string theory---the scale $m_*$ suppressing higher-dimension operators is parametrically smaller than the Planck scale $m_{\rm Pl}$, and so the signs of the leading operators suppressed by $m_*$ are determined by the standard nongravitational dispersive arguments. In theories with no separation between $m_*$ and $m_{\rm Pl}$, the ``higher-dimension operators'' are instead dominated by logarithmic running generated by massless loops in the low-energy theory, and so it is these logarithmic contributions that must be studied.
In this paper, we investigate what causality and unitarity can tell us about the $Q/M$ curve in generality. We begin with the asymptotic IR, where, as mentioned above, the leading corrections to the effective action are dominated by logarithmic running. This asymptotic IR region is relevant for black holes with radii that are exponentially larger than the cutoff scale, with $r_{\rm H} \sim m_*^{-1} \exp(m_{\rm Pl}^p/m_*^p)$ for some power $p$, where the large logs can eventually supersede the presence of any higher-dimension operators suppressed by $m_*$. Note that this depends on having $D=4$ spacetime dimensions, to which we restrict our analysis; in higher dimensions, the logarithmic running generates still higher-dimension operators that can never compete with the leading $m_*$-suppressed terms. We next consider the leading threshold corrections generated by integrating out massive particles at $m_*$, whose signs can be reliably controlled when $m_*$ is parametrically smaller than $m_{\rm Pl}$. Our threshold results apply at loop level, for any number of photon species, and at arbitrary order in derivatives, while our results on the beta function allow us to sidestep complications associated with $t$-channel singularities and cubic terms like $R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}$, uncovering universal behavior of asymptotically large black holes.
The leading higher-dimension operators containing four derivatives have been investigated previously, and one can argue that the $Q/M$ curve indeed bends upward from these terms, under certain assumptions, as a consequence of consistency of black hole entropy for tree-level completions~\cite{Cheung:2018cwt,dyonic} or unitarity of single-${\rm U}(1)$ theories~\cite{Bellazzini:2019xts,Hamada:2018dde}.
In considering the asymptotic IR in \Sec{sec:bubble}, we start with pure Einstein-Maxwell theory with additional minimally coupled massless fields and demonstrate that the beta function for the $F^4$ term (which appears in the form of the square of the stress tensor) always causes the Wilson coefficient to grow larger at low energies, pushing the $Q/M$ curve up and satisfying the WGC.
However, this cannot be the end of the story, since in extended SUSY the $F^4$ term is protected by nonrenormalization theorems. This implies that there must be negative contributions that effect the cancellation. To identify the source of negativity we turn to supergravity theories. For general $\mathcal{N}=1$ theories, the beta function generates a strictly positive $F^4$ term. For $\mathcal{N}=2$, the fact that the beta function of the graviphoton $F^4$ operator vanishes, for an arbitrary number of vector multiplets and hypermultiplets, can be traced to the negative contributions arising from the dimension-five operators $\bar{\psi}F\psi$ and $\phi F^2$. This implies that by breaking SUSY and introducing an arbitrarily large number of such operators, we can eventually generate negative Wilson coefficients. We verify that this is indeed the case for $N>137$ fermions or $N>46$ scalars, nonminimally coupled via the $\bar{\psi}F\psi$ or $\phi F^2$ operators, respectively. Thus in the deep IR the presence of nonminimal couplings can drive the running of $F^4$ operators to negative values.
The existence of negative coefficients for $F^4$ terms raises the spectre of superluminality, but in \Sec{sec:causality} we show that this is not the case for $F^4$ terms suppressed by the Planck scale: the time advance generated by the $F^4$ operator cannot override the gravitational time delay without exiting the validity of the EFT approximation.
We also argue that these examples do not ultimately lead to a violation of the WGC, by examining issues of UV completion and tuning in \Sec{sec:tuning}, noting that the WGC only stipulates that there exists a state with charge-to-mass ratio larger than 1 somewhere along the entire curve as we move to smaller masses. It does not require the curve to move monotonically away from 1. As we will see, once we enter the threshold region, where the shift is generated by integrating out massive states below $m_{\rm Pl}$, unitarity will tend to enforce positivity of these corrections; see \Fig{fig:qmcurvecounter}.
\begin{figure}[t]
\begin{center}
\hspace{-11mm}\includegraphics[width=0.45\columnwidth]{IR_UV_WGC_counter.pdf}
\end{center}\vspace{-3mm}
\caption{In rare examples of nonsupersymmetric theories with a large number of species and nonminimal couplings in a specific Planckian range, the beta function drives the running of the $F^4$ Wilson coefficient to negative values in the exponentially deep IR (yellow).
In that region, the $Q/M$ ratio for extremal black holes acquires a negative correction.
Nonetheless, we expect that in healthy theories in the threshold region (green), the finite corrections associated with sub-Planckian states in the UV completion yield a net positive contribution to $Q/M$, so that all extremal black holes---including those with exponentially large masses---are able to decay, preserving the WGC.
}
\label{fig:qmcurvecounter}
\end{figure}
As we are considering threshold contributions from states at $m_{*}\ll m_{\rm Pl}$, gravitational effects are irrelevant. We will utilize the remarkable fact, reviewed in \Sec{sec:action}, that the shift of the extremal charge-to-mass ratio is in fact proportional to the value of $\Delta \mathcal{L}$ itself, evaluated on the on-shell extremal solution in the two-derivative theory~\cite{Cheung:2018cwt,dyonic}. For EFTs that do not induce nonminimal three-particle amplitudes---e.g., a theory where $R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}$ can be ignored---we demonstrate in \Sec{sec:squares} that unitarity, via the generalized optical theorem, implies that $\Delta \mathcal{L}$ can be written as a sum over squares for black hole backgrounds and is hence manifestly positive.\footnote{Past works have found that in healthy theories where such three-point couplings are present, $Q/M$ is also shifted in the correct direction~\cite{Cheung:2018cwt,dyonic,Kats:2006xp,Hamada:2018dde,Cano:2019oma,Cano:2019ycn}.} Note that this statement goes beyond Einstein-Maxwell theory and operators at leading order in derivatives, which we demonstrate using quartic Riemann corrections to Reissner-Nordstr\"om (RN) black holes and EFT deformations of dilatonic (GHS) black holes.
Remarkably, we find that the action/extremality relationship continues to hold for GHS black holes.
We conclude and discuss future directions in \Sec{sec:outlook}.
\section{Beta functions and bubble cuts}\label{sec:bubble}
We begin with the leading higher-derivative corrections to Einstein-Maxwell theory:\footnote{Via the Bianchi identity, operators of the form $DFDF$ can be traded for the Riemann/Ricci tensor contracted with $F^2$~\cite{Deser:1974cz}.}
\begin{equation}
\Delta {\cal L} = a_1 (F_{\mu\nu} F^{\mu\nu})^2 + a_2 (F_{\mu\nu} \widetilde F^{\mu\nu})^2 + b F_{\mu\nu} F_{\rho\sigma} R^{\mu\nu\rho\sigma},
\end{equation}
where operators involving the Ricci tensor can be removed via field redefinition and the Riemann-squared operator can be dropped in four dimensions since the Gauss-Bonnet term is a total derivative. Throughout, we define $\widetilde F_{\mu\nu} = \epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}/2$ and, unless otherwise noted, all higher-derivative couplings will be normalized with appropriate powers of $\kappa^2$ to be dimensionless.
We will first consider the logarithmic running of these coefficients due to massless loops.
That is, we will take $a_1, a_2, b$ to be dominated by large logarithmic terms, and will postpone consideration of the finite pieces that are generated by massive states in the UV to Secs.~\ref{sec:action} and \ref{sec:squares}. The beta function for each operator can be extracted from the UV divergence of the four-photon amplitude, where $a_1,a_2$ are linearly mapped to the all-plus (minus) and the two-plus, two-minus (MHV) helicity sectors, and $b$ is mapped to the single-minus (plus) helicity sector. As unitarity cuts control the coefficients of the logarithms, we can easily deduce some general results for $(a_1, a_2, b)$. First, the absence of two-particle cuts for the single-minus (plus) helicity configuration leads to the absence of $R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}$ corrections at one loop, and hence $b=0$. For the all-plus helicity configuration, the only nonvanishing two-particle cut occurs when the dimension-five operator $\phi F^2$ is present, where $\phi$ is a massless scalar. This leads to the following cut diagram:
$$\includegraphics[scale=0.5]{AllPlus-eps-converted-to.pdf}\,.$$
Thus, in the absence of a $\phi F^2$ coupling, the large logarithms will be proportional to the square of the photon stress-energy tensor,
\begin{equation}
T_{\mu\nu}T^{\mu\nu} =\frac{1}{4}(F^2)^2+\frac{1}{4}(F\widetilde{F})^2,
\end{equation}
which is the unique combination of the two $F^4$ operators that gives vanishing all-plus amplitude for the photon. The effective action that captures the logarithmic running therefore takes the form:
\begin{eqnarray}\label{Stress}
\mathcal{L}&=&\mathcal{L}_{\rm EM} + a (T_{\mu\nu})^2.
\end{eqnarray}
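As an illustrative aside (not part of the derivation), the stress-tensor identity above can be checked symbolically. The following minimal \texttt{sympy} sketch verifies $T_{\mu\nu}T^{\mu\nu}=\frac{1}{4}[(F^2)^2+(F\widetilde F)^2]$ for a general field strength in four-dimensional Minkowski space; the overall sign convention chosen for $T_{\mu\nu}$ is immaterial, since it drops out of the square.
\begin{verbatim}
import sympy as sp

# Check T_{mu nu} T^{mu nu} = (1/4)[(F^2)^2 + (F Ftilde)^2] in 4D Minkowski.
eta = sp.diag(1, -1, -1, -1)
E1, E2, E3, B1, B2, B3 = sp.symbols('E1 E2 E3 B1 B2 B3', real=True)
Fdn = sp.Matrix([[0,   E1,  E2,  E3],
                 [-E1,  0, -B3,  B2],
                 [-E2, B3,   0, -B1],
                 [-E3, -B2, B1,   0]])            # F_{mu nu}
Fup = eta * Fdn * eta                             # F^{mu nu}
F2 = sum(Fdn[m, n]*Fup[m, n] for m in range(4) for n in range(4))
Fd = sp.Matrix(4, 4, lambda m, n: sum(sp.Rational(1, 2)*sp.LeviCivita(m, n, r, s)
               *Fup[r, s] for r in range(4) for s in range(4)))  # Ftilde_{mu nu}
FFd = sum(Fdn[m, n]*(eta*Fd*eta)[m, n] for m in range(4) for n in range(4))
T = Fdn*eta*Fdn.T - sp.Rational(1, 4)*eta*F2      # T_{mu nu}, up to overall sign
TT = sum(T[m, n]*(eta*T*eta)[m, n] for m in range(4) for n in range(4))
assert sp.simplify(TT - (F2**2 + FFd**2)/4) == 0
\end{verbatim}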
As previously mentioned, the charge-to-mass ratio of the extremal RN black hole is modified in the presence of $ a_1 (FF)^2 + a_2 (F \widetilde F)^2$~\cite{ArkaniHamed:2006dz,Kats:2006xp,Cheung:2018cwt}, with
\begin{equation}
\left.\frac{\sqrt{Q^2 + P^2}}{\sqrt{2}M}\right|_{\rm ext} =1+\frac{16\left[a_1(Q^2 - P^2)^2 + 4a_2Q^2P^2\right]}{5(Q^2 + P^2)^3},\label{eq:shiftRN}
\end{equation}
where $(Q,P)$ are the electric and magnetic charges of the black hole, respectively (with $\kappa^2$ normalized to $1$). For the stress-tensor-squared form in Eq.~\eqref{Stress}, the extremality shift follows from setting $4a_1=4a_2=a$.
In four dimensions, the one-loop amplitude can be cast into a scalar integral basis involving bubbles, triangles, and boxes~\cite{tHooft:1978jhc, Bern:1992em, Bern:1993kr}. As only the bubble integrals are UV-divergent, the logarithm coefficients $c_s, c_{t},c_{u}$ can be identified with the coefficients of the bubble integrals in the $s$, $t$, and $u$ channels respectively. Indeed, for an $s$-channel scalar bubble integral, we have:
\begin{equation}
c_s \int \frac{{\rm d}^{4 - 2\epsilon}\ell}{(2\pi)^{4-2\epsilon}}\frac{1}{\ell^2(\ell - p_1 - p_2)^2} =c_s \left(\frac{1}{\epsilon} - \log \frac{s}{\mu^2}\right) + \mathcal{O}(\epsilon^0).
\end{equation}
Recalling the dependence of $t$ and $u$ on $s$, the running of the coefficient $a$ is thus given by:\footnote{Here we only consider bubbles with two-particle cuts, since there are no UV divergences associated with bubbles on the external legs. This can be inferred from the absence of their IR counterpart. Indeed, for gravitational theories the collinear divergence cancels, and the IR divergence is universal and proportional to $\log s, \log t, \log u$~\cite{Dunbar:1995ed}. Thus, there are no IR divergences associated with massless bubbles, indicating that their UV divergences cancel as well.}
\begin{equation}
\begin{aligned}
a=-c_s\log \frac{s}{\mu^2} - c_t \log \frac{t}{\mu^2} - c_u \log \frac{u}{\mu^2}\;\xRightarrow[\;]{s\ll \mu^2}\;-(c_s + c_{t} + c_{u})\log \frac{s}{\mu^2}.\label{eq:logs}
\end{aligned}
\end{equation}
Thus, in the deep IR where $-\log (s/\mu^2)\gg 1$, the sign of $a$ will be determined by that of $(c_s + c_{t} + c_{u})$.
\begin{figure}
\centering
\subfigure[$c_s=-\frac{31}{30}-\frac{s^3}{4u^3} -\frac{9s^2}{8u^2}-\frac{47s}{24u}$]{\includegraphics[width=50mm]{GravitonS.pdf}}
\quad\subfigure[$c_t=\frac{1}{20} + \frac{s^3}{4u^3}-\frac{3s^2}{8u^2} + \frac{11s}{24u}$]{\includegraphics[width=50mm]{GravitonT.pdf}}\\
\subfigure[$c_t=\frac{87}{40} + \frac{s^3}{4u^3} + \frac{9s^2}{8u^2}+\frac{47s}{24u}$]{\includegraphics[width=50mm]{PhotonT.pdf}}
\quad\subfigure[$c_s=\frac{131}{120}-\frac{s^3}{4u^3}+\frac{3s^2}{8u^2}-\frac{11s}{24u}$]{\includegraphics[width=50mm]{PhotonS.pdf}}
\caption{The $s$- and $t$-channel bubble coefficients for the graviton and photons in Einstein-Maxwell theory multiplying $(T_{\mu\nu})^2$.}\label{fig1}
\end{figure}
Let us consider the effect of minimally coupled massless fields with spin $\leq 1$. The coefficients are computed using unitarity methods devised by Forde~\cite{Forde:2007mi}, with further modifications in Refs.~\cite{ArkaniHamed:2008gz, Britto:2005ha}; see \App{app:bubble} for details. Without loss of generality, we consider the four-photon amplitude with helicities $(1^+,2^-,3^+,4^-)$. Note that for minimally coupled theories, the $u$-channel cut $c_u$ vanishes, as on either side of the cut one has same-helicity photons. In general, the individual bubble coefficients are nonlocal, and only the combined contribution of each irreducible subset of states is local. For example, the coefficients for the graviton and photon in each channel are listed in \Fig{fig1}, which combine to give $137/60$. This is the well-known UV divergence of Einstein-Maxwell theory~\cite{Deser:1974cz}, and as first pointed out in Ref.~\cite{Cheung:2014ega}, its positive sign comports with the WGC in the asymptotic IR. With additional matter, the contribution for each spin is given as:
\begin{equation}
c_s + c_t:\;\;\; \text{scalar:}\;\frac{1}{120},\;\;\; \text{fermion:}\;\frac{1}{40},\;\;\; \text{vector:}\; \frac{1}{10}\,,
\end{equation}
where the vectors here correspond to those that are distinct from the external one(s) under which the black hole is charged. Each indeed contributes positively.
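As an illustrative cross-check (not part of the original computation), the cancellation of the nonlocal pieces between channels and the total of $137/60$ can be verified by transcribing the coefficients from \Fig{fig1} into a short \texttt{sympy} script:
\begin{verbatim}
import sympy as sp

s, u = sp.symbols('s u')
grav_s = -sp.Rational(31, 30) - s**3/(4*u**3) - 9*s**2/(8*u**2) - 47*s/(24*u)
grav_t = sp.Rational(1, 20) + s**3/(4*u**3) - 3*s**2/(8*u**2) + 11*s/(24*u)
phot_t = sp.Rational(87, 40) + s**3/(4*u**3) + 9*s**2/(8*u**2) + 47*s/(24*u)
phot_s = sp.Rational(131, 120) - s**3/(4*u**3) + 3*s**2/(8*u**2) - 11*s/(24*u)
# The nonlocal pieces cancel between channels, leaving the local total:
assert sp.simplify(grav_s + grav_t + phot_s + phot_t) == sp.Rational(137, 60)
\end{verbatim}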
While we have seen that adding an arbitrary number of minimally coupled matter fields only pushes the charge-to-mass ratio up, this cannot be the whole story. This is because we know that at some point there must be negative contributions, in accordance with various nonrenormalization theorems in extended supergravity theories. Understanding this cancellation in detail will shed light on the nature of negative contributions to the running of the four-(gravi)photon operator.
\subsection{Running in supergravity} \label{sec:SUGRA}
Supergravity theories introduce two new features: the presence of a spin-$3/2$ particle and nonminimal couplings. As we will see, these two features are precisely the source of negative contributions to the running of $F^4$.
Let us begin with the one-loop UV divergence for $\mathcal{N} = 1$ Einstein-Maxwell supergravity. The four-photon divergence now receives extra contributions from the gravitino $\psi$ and photino $\lambda$. Note that the requirement that the gravitino and photino must come hand-in-hand for consistent factorization at tree level~\cite{McGady:2013sga} is also reflected in the fact that the bubble coefficients for the two contributions are individually nonlocal and only reduce to $(T_{\mu\nu})^2$ when combined. Separating the irreducible contributions, we find: \begin{center}
\begin{tabular}{ | c | c | }
\hline
$\mathcal{N}=1$ & $a$ \\ \hline
photon + graviton & $\frac{137}{60}$ \\ \hline
gravitino + photino & $-\frac{1}{5}$ \\ \hline
\end{tabular}
\end{center}
This sums to $25/12$, in agreement with Ref.~\cite{vanNieuwenhuizen:1976bg}. Note that indeed the gravitino yields a negative contribution. However, it is not sufficient to overcome the Einstein-Maxwell contribution, which suggests adding more gravitinos, i.e., extended SUSY. But before doing so, let us add nonminimal couplings involving the Maxwell field, which in $\mathcal{N}=1$ language will be given by
\begin{equation}
g\int {\rm d}^4 x\,{\rm d}^2\theta\; \Phi W^\alpha W_\alpha+{\rm c.c.}
\end{equation}
This includes the dimension-five couplings $\phi F^2$ and $\psi F \chi$, where $(\phi, \psi)$ are the scalar and fermion in the chiral multiplet and $\chi$ is the photino.
(Recall that conventional dipole couplings involving only matter fields are forbidden by rigid ${\cal N}=1$ SUSY.)
Assuming that there are $n$ chiral multiplets with such couplings, the contribution to the divergence is manifestly positive,
\begin{equation}
\frac{1}{4}(ng - 1)^2 + \frac{n}{24} + \frac{11}{6}.
\end{equation}
With $\mathcal{N}=1$ SUSY, $F^4$ thus always runs to larger positive values in the IR.
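As a quick consistency check (an illustrative sketch, not part of the derivation), at $n=0$ the expression above reduces to the pure $\mathcal{N}=1$ Einstein-Maxwell supergravity value $25/12$ obtained from the table:
\begin{verbatim}
import sympy as sp

g, n = sp.symbols('g n')
div = (n*g - 1)**2/4 + n/24 + sp.Rational(11, 6)
assert div.subs(n, 0) == sp.Rational(25, 12)                  # no chiral multiplets
assert sp.Rational(137, 60) - sp.Rational(1, 5) == sp.Rational(25, 12)  # table sum
\end{verbatim}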
For extended supergravity theories, there are no UV divergences for the one-loop four-graviphoton amplitude, irrespective of the number of matter multiplets. To understand this cancellation, let us study the $\mathcal{N}=2$ system, where the supergravity sector contains the graviton, two gravitinos, and a graviphoton, while the Maxwell multiplet contains a photon, two photinos, and a complex scalar, and the hypermultiplet contains four scalars and two fermions. Their contributions to the four-graviphoton bubble coefficients are as follows:
\begin{center}
\quad
\begin{tabular}{ | c | c | }
\hline
$\mathcal{N}=2$ & $a$ \\ \hline
graviphoton + graviton & $\frac{137}{60}$ \\ \hline
2 gravitino & $-\frac{137}{60}$ \\ \hline
$n_m\times$(Maxwell) photon+scalar& $-\frac{n_m}{20}$ \\ \hline
$n_m\times$(Maxwell) photino & $\frac{n_m}{20}$ \\ \hline
$n_h\times$(hyper) scalar& $\frac{n_h}{30}$ \\ \hline
$n_h\times$(hyper) fermion& $-\frac{n_h}{30}$ \\ \hline
\end{tabular}
\end{center}
Note that the supergravity multiplet, hypermultiplet, and super-Maxwell multiplet each cancel separately. It is enlightening to understand the source of cancellation within the matter multiplet. As previously noted, additional minimally coupled massless states with helicity $<3/2$ always contribute positively to the divergence. Thus, nonminimal coupling must be present to account for this cancellation. Indeed, for the Maxwell multiplet, the complex scalar and photon couple with the graviphoton via the dimension-five operator $\phi F_g F_m$, where $F_m$ and $F_g$ denote the matter and graviphoton field strengths, respectively. Similarly for the hypermultiplet, the fermions couple to the graviphoton with gravitational strength through the dipole moment operator $\bar\psi F\psi $. These are exactly the couplings responsible for the negative contributions in the above table.
For theories containing a massless spin-$3/2$ particle, consistent factorization of the four-particle amplitude forces the presence of the complete supersymmetric spectrum~\cite{McGady:2013sga}. Unlike the gravitino, which comes hand-in-hand with SUSY and which leads to vanishing beta function beyond $\mathcal{N}=1$, the above dimension-five operators can be independently introduced. This motivates us to study the fate of the beta function under these nonminimal couplings in a more general setup.
\subsection[Negative running from nonminimal couplings in non-SUSY theories]{Negative running from nonminimal couplings in non-SUSY \linebreak theories} \label{sec:running}
Let us now generalize to nonminimally coupled matter fields. We will organize our analysis around specific black hole solutions. In particular, since the dimension-five operators modify the equations of motion, we will only consider cases where the original background is still a solution to the new equations of motion.
We begin by augmenting Einstein-Maxwell theory with an arbitrary number of scalars and vectors, coupled through $\phi F F$ operators as follows:
\begin{eqnarray}
\mathcal{L} = \mathcal{L}_{\rm EM} - \frac{1}{2}\sum_{i}(\partial\phi_i)^2 + \sum_{i,j,k}g_{ijk}\phi_i F_jF_k.
\end{eqnarray}
The index $i$ labels the scalar species, while $j,k$ label the vectors. Let us consider a RN black hole charged under one of the photons, say the $j=1$ photon; i.e., the only field strength with nontrivial profile is $F_1$, with $F_{1}^2=2(P^2 - Q^2)/r^4$. If $g_{i11}$ is nonvanishing, the scalar equations of motion will be modified by terms proportional to $F_1^2$. Since the RN black hole has a trivial scalar profile, we need to set $P=Q$ and thus consider a dyonic black hole.
Note that with the additional scalars, the dimension-eight operators appearing at one loop now include $(\partial\phi)^2F^2$ and $(\partial\phi)^4$. However, again since the scalar vanishes on the RN solution, such terms will not influence the leading correction to the extremal charge-to-mass ratio. As $F_1^2 =0$, this ratio is only corrected by $a_2(F_1\widetilde{F}_1)^2$. Explicit computation leads to the following modification of the extremal condition:\footnote{Throughout Sec.~\ref{sec:running}, we will suppress the explicit logarithmic factor in the Wilson coefficients, and instead for brevity simply write $a$, $a_1$, or $a_2$ for the coefficient multiplying $\log (s/\mu^2)$, as in \Eq{eq:logs}.}
\begin{equation}\label{srnc}
\left.\frac{Q}{M}\right|_{\rm ext}=1+\frac{8}{5Q^2}a_2\, ,
\end{equation}
where
\begin{equation}\label{eq:a2RNphi}
\begin{aligned}
a_2=&\,\frac{1}{12}\sum _{j=2}^{n_p}\left[\left(\sum _{i=1}^{n_s} g _{i1j}^2 - \frac{1}{2}\right)^2 +8 \left(\sum _{i=1}^{n_s} g_{i11} g_{i1j}\right)^2\right] +\frac{4}{3} \left(\sum _{i=1}^{n_s} g_{i11}^2 - \frac{3}{8}\right)^2 \\& + \frac{1}{12}\sum _{\substack{k,l=2\\ k\neq l}}^{n_p} \left(\sum _{i=1}^{n_s} g_{i1k} g_{i1l}\right)^2 + \frac{1}{480}(182+2n_p+n_s),
\end{aligned}
\end{equation}
writing $n_p$ and $n_s$ for the number of vectors and scalars, respectively.
We see that $a_2$ is given by a sum of positive definite terms.
For vanishing $g_{i11}$, the pure electric RN solution becomes a viable background. Furthermore, since in that case $F\widetilde{F}=-4PQ/r^4 = 0$, we can introduce axion couplings as well. To this end, let us consider:
\begin{equation}
\begin{aligned}
\mathcal{L} = \mathcal{L}_{\rm EM} - \frac{1}{2}\sum_{i=2}^{N}\left[\frac{1}{2}F_i^2 + (\partial\phi_i)^2 + (\partial\chi_i)^2\right] + \sum_{i=2}^{N}g_{i1i}\left(\phi_i F_1F_i-\chi_i F_1\widetilde{F}_i\right).\label{eq:counter1}
\end{aligned}
\end{equation}
For notational simplicity, we label the scalars and axions $\phi_i, \chi_i$ with $i=2,\ldots, N$; i.e., we have $N-1$ scalar and axion fields and $N$ vector fields. In $\mathcal{N}=2$ supergravity theories, the scalar and axion combine into the complex scalar in the Maxwell multiplet. Direct computation yields:
\begin{equation}
a_1=a_2=\frac{137}{240}+\sum_{i=2}^N\left(\frac{7}{240}-\frac{g_{i1i}^2}{6}+\frac{g_{i1i}^4}{6}\right).\label{eq:axiondilaton}
\end{equation}
Note that the minimum occurs when $g_{i1i}^2=1/2$, in which case the expression in parentheses is in fact negative, reaching $-1/80$.
This special combination of scalar and axion, as well as the value for $g_{i1i}$, leads precisely to the cancellation of the four-graviphoton divergence observed for the Maxwell multiplets in ${\cal N}=2$ supergravity.
Any deviation from these values leads to a net positive contribution, as is the case when the external photon is one of the matter photons rather than the graviphoton.
At this special value of $g_{i1i}$, we have $a_1 = a_2 = (140-3N)/240$, so that if we set $N>46$, the beta function would cause the Wilson coefficient to run negative for sufficiently large black holes.
Analogous results also occur for the other dimension-five operator. The electric RN solution also allows us to add the fermion couplings $\bar{\psi}\gamma^{\mu}\gamma^{\nu} F_{\mu\nu}\psi$:
\begin{eqnarray}
\mathcal{L}=\mathcal{L}_{\rm EM}- \bar{\psi}\slashed{\nabla}\psi + g \bar{\psi} \gamma^{\mu}\gamma^{\nu}F_{\mu\nu}\psi.\label{eq:dim5op}
\end{eqnarray}
With $N$ copies of such a fermion, we find:
\begin{equation}
a_1 = a_2 = \frac{137}{240} + N\left(\frac{1}{160} -\frac{1}{6}g^2 + \frac{2}{3}g^4 \right).\label{eq:a1a2N}
\end{equation}
In $(g,N)$ parameter space, there is a region where the $a_1$ and $a_2$ divergences~\eqref{eq:a1a2N} go negative, as first pointed out in~Ref.~\cite{Charles}. For fixed $N$, \Eq{eq:a1a2N} is minimized for $g^2=1/8$, which is precisely the value for the photino couplings to the graviphoton in extended supergravity theories. At this value we find $a_1 = a_2 = (137-N)/240$, so that the beta function flips sign for $N>137$.
As in the previous case, scanning over all $(g,N)$ one finds that this sign flip occurs only in a window of $g \sim {\cal O}(1)$ in Planck units, with the minimal critical $N$ realized at the supergravity value of the coupling.
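Both minimizations in this subsection are elementary; as an illustrative cross-check, the following \texttt{sympy} sketch reproduces the critical values quoted above (writing $x=g^2$):
\begin{verbatim}
import sympy as sp

x, N = sp.symbols('x N')   # x stands for the squared coupling g^2

# Scalar/axion case, Eq. (axiondilaton):
scal = sp.Rational(7, 240) - x/6 + x**2/6
xs = sp.solve(sp.diff(scal, x), x)[0]
assert xs == sp.Rational(1, 2) and scal.subs(x, xs) == -sp.Rational(1, 80)
assert sp.simplify(sp.Rational(137, 240) + (N - 1)*scal.subs(x, xs)
                   - (140 - 3*N)/240) == 0          # negative for N > 46

# Fermion dipole case, Eq. (a1a2N):
ferm = sp.Rational(1, 160) - x/6 + 2*x**2/3
xf = sp.solve(sp.diff(ferm, x), x)[0]
assert xf == sp.Rational(1, 8) and ferm.subs(x, xf) == -sp.Rational(1, 240)
assert sp.simplify(sp.Rational(137, 240) + N*ferm.subs(x, xf)
                   - (137 - N)/240) == 0            # negative for N > 137
\end{verbatim}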
\subsection{Causality}\label{sec:causality}
The ``wrong-sign'' running of the $F^4$ operator generated by the theories in \Sec{sec:running} would a priori seem to present a problem for causality:
If the $F^4$ operator has negative Wilson coefficient at some (sufficiently-low) energy scale, what is to prevent us from sending signals superluminally using the constructions of Ref.~\cite{Adams:2006sv}?
Let us examine such a thought experiment more closely, in the context of the dipole example in \Eq{eq:dim5op}.
The EFT at scale $E$ will contain a negative $F^4$ correction,
\begin{equation}
+\frac{\log(E/m_{*})}{m_{\rm Pl}^4} F^4,\label{eq:F4opneg}
\end{equation}
where $m_{*} \gg E$ is the scale of the UV completion of the dimension-five dipole operator (and so $\log(E/m_{*})<0$ is large).
Suppose we try to exploit this operator to detect superluminality.
We arrange for an electromagnetic field of strength $B$ in a bubble of size $L$.
The time advance associated with the $F^{4}$ operator in \Eq{eq:F4opneg} will be
\begin{equation}
t_{\rm adv} \sim \frac{B^{2}L}{m_{{\rm Pl}}^{4}}\log(m_{*}/E).
\end{equation}
Meanwhile, the total mass of the bubble is $B^{2}L^{3}$, and the gravitational time delay the signal incurs in crossing the bubble is
\begin{equation}
t_{{\rm del}}\sim\frac{B^{2}L^{3}}{m_{{\rm Pl}}^{2}}.
\end{equation}
Thus, in order to obtain net superluminality, we would need $t_{{\rm del}}<t_{{\rm adv}}$, which implies an exponentially small value for the scale $E$:
\begin{equation}
\frac{E}{m_{*}}<e^{-m_{{\rm Pl}}^{2}L^{2}}.
\end{equation}
Now, the relevant scale of the EFT is the wavelength of the perturbation of the photon field we are using to send signals within the bubble. Since this must be smaller than the bubble itself, we must have $L>1/E$, so we require
\begin{equation}
\frac{E}{m_{{\rm Pl}}}e^{m_{{\rm Pl}}^{2}/E^{2}}<\frac{m_{*}}{m_{{\rm Pl}}}.\label{eq:finaltune}
\end{equation}
In order for us to treat this theory within QFT, we must have $m_{*}<m_{{\rm Pl}}$. But the function $x\,e^{1/x^{2}}$ is never less than $1$ for positive $x$, so the condition $m_{*}/m_{{\rm Pl}}<1$ is impossible to meet given Eq.~\eqref{eq:finaltune}.
Thus, we have found that it is not possible for the superluminal time advance to triumph over the gravitational time delay, while allowing the experiment to remain within a consistent EFT.
The same analysis would also apply in the Maxwell multiplet-inspired case considered in \Eq{eq:counter1}.
We note that the fact that the ``wrong-sign'' beta function is set by the Planck scale, leading to the $m_{\rm Pl}$ suppression in \Eq{eq:F4opneg}, was crucial in order for the gravitational time delay to cancel the would-be time advance.
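The elementary fact about $x\,e^{1/x^{2}}$ invoked above is also easy to confirm; a minimal \texttt{sympy} check of the minimum:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)
f = x*sp.exp(1/x**2)
xmin = sp.solve(sp.diff(f, x), x)[0]
assert xmin == sp.sqrt(2)
assert f.subs(x, xmin) == sp.sqrt(2)*sp.exp(sp.Rational(1, 2))  # ~2.33, never below 1
\end{verbatim}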
\subsection{UV completion and tuning}\label{sec:tuning}
In addition to the logarithmic running from the beta functions, the Wilson coefficients of the Einstein-Maxwell EFT contain extra, finite contributions that we have so far neglected.
That is, a given $F^4$ coefficient $a$ evaluated at scale $E$ can be expanded as $a_{\rm UV} - a_\beta \log (E/m_{*})$, so that the log-dependent Wilson coefficients we have been computing correspond to $a_\beta$.
The proper scale for $E$ is the Compton wavelength of the horizon $\sim 1/r_{\rm H}$, so the shift in charge-to-mass ratio is, schematically,
\begin{equation}
\Delta(Q/M) = \frac{1}{r_{\rm H}^{2}}\left[a_{\rm UV}+a_\beta\log(r_{\rm H} m_{*})\right],\label{eq:extremalshifta}
\end{equation}
as depicted in \Fig{fig:qmcurvecounter}. Now, the WGC demands the existence of some state for which $\Delta (Q/M) > 0$, enabling RN black holes to decay.
All extremal black holes with $M>M_0$ can decay provided there exists some $M_{1}<M_{0}$ such that $\Delta (Q/M)|_{M_{1}}>0$.
For $a_\beta < 0$ as in the theories in \Sec{sec:running}, if $a_{\rm UV} > 0$ there still exists a window of black hole masses in the range $1 \ll r_{\rm H} m_{*} < \exp(-a_{\rm UV}/a_\beta)$ that will have $\Delta(Q/M) > 0$, satisfying the WGC.
As long as $a_{\rm UV}$ is positive, which we will demonstrate by unitarity in \Sec{sec:squares}, we are guaranteed to have extremal black holes near the threshold region with charge-to-mass ratio greater than 1.
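Concretely, setting \Eq{eq:extremalshifta} to zero determines the crossover radius beyond which the negative running wins; a minimal \texttt{sympy} sketch (writing $a_\beta = -a_b$ with $a_b>0$, so that both symbols are positive):
\begin{verbatim}
import sympy as sp

rH, ms, aUV, ab = sp.symbols('r_H m_star a_UV a_b', positive=True)
# Eq. (extremalshifta) with a_beta = -a_b < 0:
shift = (aUV - ab*sp.log(rH*ms))/rH**2
rcross = sp.solve(sp.Eq(shift, 0), rH)[0]
assert sp.simplify(rcross - sp.exp(aUV/ab)/ms) == 0
# Delta(Q/M) > 0 only in the window 1 << r_H m_star < exp(a_UV/a_b).
\end{verbatim}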
Let us consider more closely the question of the UV completion of the higher-dimension operators in the theories of \Sec{sec:running}.
If we assume a perturbative field-theoretic UV completion, to generate the dimension-five operators of \Sec{sec:running} one must have charged states in the UV. For example, suppose we wish to UV-complete the dipole operator in \Eq{eq:dim5op}. We can imagine a massive complex scalar $\sigma$ with charge $e$ along with a massive fermion $\xi$, both with mass $m_{*}$, that interact with the massless fermions $\psi$ through the Yukawa coupling $y\psi\xi\sigma+{\rm h.c.}$
Then the dipole operator in \Eq{eq:dim5op} is generated at one loop by integrating out $\sigma$ and $\xi$, with Wilson coefficient $g \sim y^2 e/m_{*}$.
To match the example in \Sec{sec:running}, we must have $g\sim 1/m_{\rm Pl}$.
This leaves two possibilities.
Either $y\lesssim 1$ and $e/m_{*} > 1$ in Planck units, in which case $\sigma$ is a particle that already satisfies the WGC, or else $e/m_{*} < 1$, requiring $y \gtrsim 1$, violating perturbativity.
Thus, the existence of a perturbative UV completion of the dipole operator, at least in this example, seems to require the existence of a state in the completion itself that satisfies the WGC.
In the case of negative running, it is intriguing to ask if this implies that in the asymptotic IR one can actually find scales at which the extremal charge-to-mass ratio is less than unity.
In order to find such an object, one would be required to consider black holes with exponentially large masses.
For example, from a state at $m_{*}$ we might expect threshold corrections in $a_{\rm UV}$ in \Eq{eq:extremalshifta} going like $m_{*}^{-2}$, and as we will discuss in \Sec{sec:squares}, these contributions will be positive.
In the models discussed in \Sec{sec:running}, such terms compete against the negative correction to the charge-to-mass ratio going like $a_\beta \sim -m_{\rm Pl}^{-2}$, which is enhanced by $\log(r_{\rm H}m_{*})$.
For the net extremality correction to be negative, we must consider black holes with horizon size $r_{{\rm H}}>m_{*}^{-1} \exp(m_{{\rm Pl}}^{2}/m_{*}^{2})$.
However, the presence of an exponentially large distance scale suggests sensitivity to the cosmological constant (CC), which we should expect to be nonzero in a SUSY-breaking theory~\cite{Banks:2000fe}.
Let us write the energy density of the CC in a model-independent manner as $m_{\rm Pl}^2 H^2$ for Hubble parameter $H$.
Requiring that the black hole be smaller than the Hubble radius, we must have $r_{\rm H} H < 1$.
Then in order to find an extremal black hole with $Q/M<1$, we must have an extraordinarily exponentially tuned CC, with $H < m_{*} \exp(-m_{{\rm Pl}}^{2}/m_{*}^{2})$.
To get a sense of how extreme this tuning is, let us input physically motivated numbers.
Minimizing the tuning by taking $m_{*}$ as large as is reasonable, at the GUT scale of $\sim 10^{-2} m_{\rm Pl}$, we would find the requirement that $H \lesssim 10^{-4345}m_{\rm Pl}$, over four thousand orders of magnitude below the Hubble scale $\sim 10^{-60} m_{\rm Pl}$ associated with our own highly tuned CC of $10^{-120}\,m_{\rm Pl}^2$.
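The quoted exponent is a one-line computation (here $m_* = 10^{-2}m_{\rm Pl}$ is the GUT-scale benchmark from the text):
\begin{verbatim}
import math

mstar = 1e-2                       # m_* in units of m_Pl (GUT-scale benchmark)
log10_H = math.log10(mstar) - (1/mstar**2)/math.log(10)  # H < m_* exp(-mPl^2/m_*^2)
print(f"H < 10^({log10_H:.0f}) m_Pl")                    # ~ 10^(-4345) m_Pl
\end{verbatim}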
Even adding back in the scaling with $N\gg 46$ scalars or $N\gg 137$ fermions in $a_\beta$ does not alleviate this tuning. While the enhancement turns the condition on the CC into $H < m_{*} \exp(-m_{\rm Pl}^{2}/Nm_{*}^{2})$, the large number of species renormalizes the effective Planck mass by $\delta m_{\rm Pl}^2 \sim N m_{\rm cutoff}^2$~\cite{Dvali:2007hz,Arkani-Hamed:2005zuc,Dimopoulos:2005ac,Cheung:2014vva}, canceling the $N$-dependence.
Hence, in order to engineer a black hole in tension with the spirit of the WGC, one must also posit either unreasonable tuning of the CC or a desert scenario with no additional degrees of freedom coupling to the photon below the Planck scale.
Such unwelcome features suggest that these models are consigned to the swampland.
\section{Extremality and the action}\label{sec:action}
So far, we have computed the leading-order correction to the charge-to-mass ratio for extremal black holes in the regime of asymptotically large black holes, depicted in yellow in Figs.~\ref{fig:qmcurve} and \ref{fig:qmcurvecounter}.
Here, the Wilson coefficients were dominated by massless loops, and the sign of the extremality correction was fixed by beta functions.
Motivated by the importance of the contributions from the UV completion in \Sec{sec:tuning}, we now turn toward the threshold regime in green, where the dominant contribution to the Wilson coefficients will be from massive states below the Planck scale.
Before examining particular theories, we first review a profound relationship between the on-shell action and the extremality shift induced by higher-dimension operators shown in Ref.~\cite{dyonic}.
Remarkably, the value of the shift in $Q/M$ at leading order in the EFT is given simply by the value of $\Delta {\cal L}$ itself, evaluated on the on-shell extremal black hole solution in the two-derivative theory.
This fact extends beyond the RN case, generalizing to spinning, multicharge, dyonic, or even dilatonic black holes, and applies for any leading operators $\Delta {\cal L}$, for any number of derivatives.
Consider a Kerr-Newman (KN) black hole with ADM mass $8\pi m/\kappa^2$, electric charge $4\sqrt{2}\pi q/\kappa$, and angular momentum $J=8\pi m a/\kappa^2$, where we reintroduce explicit Planck masses for clarity.
It will be convenient to define the extremality parameter $\zeta = \sqrt{q^2 + a^2}/m$, so that we have $\zeta \in [0,1]$ for physical black holes in Einstein-Maxwell theory.
The event horizon is located at radius $r_{\rm H} = m(1+\sqrt{1-\zeta^2})$ in Boyer-Lindquist coordinates, so that the extremal case corresponds to $r_{\rm H} = m$, and we define a spin parameter $\nu = a/r_{\rm H}$.
In terms of these parameters, the extremality shift satisfies a beautiful relation,
\begin{equation}
\Delta \zeta = \frac{\kappa^2(1+\nu^2)}{8\pi m} \lim_{\zeta \rightarrow 1}\left(\int{\rm d}^3 x \sqrt{-g}\,\Delta{\cal L}|_{\rm KN}\right),\label{eq:Dzeta}
\end{equation}
with the integral evaluated on the KN solution outside the event horizon at fixed $t$.
While we leave the full proof of \Eq{eq:Dzeta} to Refs.~\cite{Cheung:2018cwt,dyonic}, the essential elements of the derivation proceed as follows.
By definition of the horizon, in the extremal limit we mechanically have $\Delta \zeta\propto \Delta g^{rr}$ and $\Delta g^{rr} \propto \Delta r$, the shift in horizon radius at fixed ADM charges. This follows from imposing the condition $g^{rr} = 0$ to locate the horizon, and fixing either the charge or the radial derivative of $g^{rr}$, which vanishes in the extremal limit; see \App{Proof} for details.
In turn, the shift $\Delta S$ in the black hole's Wald entropy from $\Delta {\cal L}$ at fixed charges can be shown to be dominated in the extremal limit by the area shift, so $\Delta r \propto \Delta S$.
Finally, by standard thermodynamic identities in Euclidean quantum gravity, one has a Smarr relation $\Delta S \propto \Delta {\cal L}$~\cite{Reall:2019sah}, which ultimately leads to \Eq{eq:Dzeta}.
The relationship between entropy and extremality was generalized beyond the context of black holes in Ref.~\cite{Goon:2019faz}.
Despite the fact that the relation~\eqref{eq:Dzeta} was derived using a combination of steps in both general relativity and thermodynamics, the final result depends only on the well-defined observables of the on-shell action and extremal charge.
This suggests that an entirely geometrical argument may be possible to mechanically arrive at the extremality/action relation, without appealing to thermodynamics; we leave this possibility to future work.
A particularly useful aspect of \Eq{eq:Dzeta} is that it allows the extremality shift to be computed without solving the higher-derivative-deformed Einstein equations for the perturbed black hole metric as in Refs.~\cite{Kats:2006xp,Cheung:2018cwt}.
The agreement of \Eq{eq:Dzeta} with the result obtained via the brute-force method has been explicitly checked for arbitrary four-derivative operators for dyonic RN black holes~\cite{Cheung:2018cwt,dyonic}.
As an example application of \Eq{eq:Dzeta}, consider a theory in which the higher-dimension operators are quartic in the Riemann tensor (cf. type II string theory~\cite{Gross:1986iv}),
\begin{equation}
\Delta {\cal L} = \frac{c}{\kappa^2 m_*^6}(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})^2 + \frac{\widetilde c}{\kappa^2 m_*^6} (R_{\mu\nu\rho\sigma}\widetilde R^{\mu\nu\rho\sigma})^2,
\end{equation}
where $\widetilde R_{\mu\nu\rho\sigma} = \epsilon_{\mu\nu\alpha\beta}R^{\alpha\beta}_{\;\;\;\;\rho\sigma}/2$. By unitarity~\cite{Bellazzini:2015cra} and causality~\cite{Gruzinov:2006ie}, $c$ and $\widetilde c$ are nonnegative.
For nonspinning black holes, the corrected metric satisfies
\begin{equation}
\begin{aligned}
-g_{tt} &= \bar g^{rr} - \frac{64m^3 c}{715 m_*^6 r^{14}}\times [2860(11m-8r)r^4 +572 mr^3 z^2(-208m+141r) \\&\hspace{3.8cm} + 104m^2 r^2 z^4(1593m-925r) +520m^3 r z^6(-163m+77r) \\&\hspace{3.8cm} + 12705 m^5 z^8] \\
g^{rr} &=\bar g^{rr} - \frac{64m^3 c}{715 m_*^6 r^{14}}\times [2860(67m-36r)r^4 + 1716 mr^3 z^2 (-521m+250r)
\\&\hspace{3.8cm}+ 260m^2 r^2 z^4 (5733 m - 2266r) +5720 m^3 r z^6(-185m+49r) \\&\hspace{3.8cm} + 252945 m^5 z^8],
\end{aligned}\label{eq:metricR4}
\end{equation}
where $\bar g^{rr} = 1-(2m/r)+(q^2/r^2)$. The shift in the extremality parameter is
\begin{equation}
\Delta(q/m) = +\frac{2208 c}{715m_*^6 r_{\rm H}^6}.\label{eq:R4Dz}
\end{equation}
This can be computed either directly from the metric~\eqref{eq:metricR4} or using \Eq{eq:Dzeta}, and we find agreement.
Generalizing to spinning charged black holes, the correction to the KN extremality parameter is
\begin{equation}
\begin{aligned}
\Delta\zeta & =\;\;\frac{c}{201600 m_*^{6}r_{\rm H}^{6}\nu^{13}(1+\nu^{2})^{4}}\times\Bigl[3239775\nu^{29}+13046250\nu^{27}+21354690\nu^{25}\\
& \hspace{2cm} +21664770\nu^{23} +19661192\nu^{21} +14479886\nu^{19}+13943647\nu^{17}\\
& \hspace{2cm}
+5093180\nu^{15}+8429455\nu^{13}+20545166\nu^{11}+24409088\nu^{9} \\& \hspace{2cm}
+11319666\nu^{7}+7270410\nu^{5}+8419530\nu^{3}+3239775\nu\\
& \hspace{2cm}+315(1+\nu^{2})^{5}\,{\rm arctan}\,\nu\times(10285\nu^{20}-6580\nu^{18}+10734\nu^{16}-2548\nu^{14}\\
& \hspace{3cm}+709\nu^{12}-10285\nu^{8}+21268\nu^{6}-34566\nu^{4}+21268\nu^{2}-10285)\Bigr]\\
& \;\;+\frac{\widetilde c}{22400 m_*^{6}r_{\rm H}^{6}\nu^{13}(1+\nu^{2})^{4}}\times\Bigl[225225\nu^{29}+1183350\nu^{27}+2582790\nu^{25} \\&\hspace{2cm}
+3052830\nu^{23} +2193344\nu^{21}+1104562\nu^{19}+518289\nu^{17}
\\&\hspace{2cm}+1336220\nu^{15}+1603745\nu^{13}+1589258\nu^{11}+1577984\nu^{9}
\\&\hspace{2cm}+1619958\nu^{7} +1505910\nu^{5}+898590\nu^{3}+225225\nu\\
& \hspace{2cm}+315(1+\nu^{2})^{5}\,{\rm arctan}\,\nu\times(715\nu^{20}+420\nu^{18}+138\nu^{16}+84\nu^{14}\\
& \hspace{3cm}+43\nu^{12}-715\nu^{8}+484\nu^{6}-938\nu^{4}+484\nu^{2}-715)\Bigr].
\end{aligned}\label{eq:bigdeltazeta}
\end{equation}
Since $c,\widetilde c>0$, it follows from \Eq{eq:Dzeta} that for all corrected KN black holes (i.e., for all $\nu\in[0,1]$), we have $\Delta \zeta >0$; see \Fig{fig:R4}. As a consistency check of \Eq{eq:Dzeta}, we find that in the $\nu \rightarrow 0$ limit, \Eq{eq:bigdeltazeta} reduces to \Eq{eq:R4Dz} as required.
While $\Delta \zeta > 0$ was not required for the WGC for $\nu \neq 0$, since spinning black holes can shed their angular momentum by emitting Hawking radiation in nonzero orbital angular momentum states, \Eq{eq:bigdeltazeta} shows that such black holes can also decay directly to other spinning black holes with zero orbital angular momentum; such black hole self-sufficiency was also shown for ${\cal O}(\partial^4)$ operators in Ref.~\cite{dyonic}.
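As a further illustrative check (not part of the derivation), one can verify the stated $\nu\rightarrow 0$ limit directly. The \texttt{sympy} sketch below transcribes the $c$-dependent bracket from \Eq{eq:bigdeltazeta} (the $\widetilde c$ term works analogously) and confirms that all singular terms in the small-$\nu$ expansion cancel, with the finite piece reproducing \Eq{eq:R4Dz}:
\begin{verbatim}
import sympy as sp

nu = sp.symbols('nu')
# The c-dependent bracket of Eq. (bigdeltazeta):
P = (3239775*nu**29 + 13046250*nu**27 + 21354690*nu**25 + 21664770*nu**23
     + 19661192*nu**21 + 14479886*nu**19 + 13943647*nu**17 + 5093180*nu**15
     + 8429455*nu**13 + 20545166*nu**11 + 24409088*nu**9 + 11319666*nu**7
     + 7270410*nu**5 + 8419530*nu**3 + 3239775*nu)
Q = (10285*nu**20 - 6580*nu**18 + 10734*nu**16 - 2548*nu**14 + 709*nu**12
     - 10285*nu**8 + 21268*nu**6 - 34566*nu**4 + 21268*nu**2 - 10285)
bracket = P + 315*(1 + nu**2)**5*sp.atan(nu)*Q
ser = sp.series(bracket, nu, 0, 14).removeO()
assert all(ser.coeff(nu, k) == 0 for k in range(13))       # singular terms cancel
assert ser.coeff(nu, 13)/201600 == sp.Rational(2208, 715)  # matches Eq. (R4Dz)
\end{verbatim}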
\begin{figure}[t]
\begin{center}
\hspace{-11mm}\includegraphics[width=0.75\columnwidth]{R4figure.pdf}
\end{center}\vspace{-7mm}
\caption{Shift in KN extremality parameter $\zeta$ induced by the quartic Riemann terms.
}
\label{fig:R4}
\end{figure}
\section{Actions and perfect squares} \label{sec:squares}
In the previous section, we have seen the intriguing relation between the shift of the extremal parameter $\zeta$ and the leading correction for the on-shell action $\Delta {\cal L}$. This prompts us to ask: Is $\Delta {\cal L}$ always positive? Note that from the start, the question is ill-posed due to the simple fact that the leading kinetic term does not have a definite sign.
For concreteness, let us consider a theory with multiple scalars to illustrate this point. Consider a multiplet of real, shift-symmetric, massless scalars $\phi_{i}, i=1,\ldots,N$, where the action takes the form,
\begin{equation}
\mathcal{L}=-\frac{1}{2}\partial_\mu \phi_i\partial^\mu \phi_i + c_{ijkl} (\partial_\mu \phi_i \partial^\mu \phi_j)(\partial_\nu \phi_k \partial^\nu \phi_l).\label{eq:DLmultiphi}
\end{equation}
A particular solution to the two-derivative part of the action can take the form
\begin{equation}
\partial_\mu \phi_i=v_i f_\mu,\label{eq:vf}
\end{equation}
where $v_i$ is some ``flavor'' vector and $f$ is a spacetime-dependent four-vector. Solutions obeying this ansatz transform simply under the ${\rm O}(N)$ symmetry of the two-derivative action. On such a solution, the kinetic term is proportional to $f^2 v^2$ and, depending on whether $f_\mu$ is timelike or spacelike, can take either sign.
Similar behavior can be found for the leading four-derivative correction. The coupling $c_{ijkl}$ by definition satisfies
\begin{equation}
c_{ijkl}=c_{klij}=c_{jikl}=c_{ijlk}.\label{eq:sym}
\end{equation}
This is reducible and we can further decompose it by symmetrizing and antisymmetrizing on $j,k$, while continuing to respect the symmetry in \Eq{eq:sym}. Doing so, we write $c_{ijkl} = c^S_{ijkl}{+}c^A_{ijkl}$, defining
\begin{equation}
\begin{aligned}
c^S_{ijkl} &= c_{(ijkl)} \\
&=\frac{1}{3}(c_{ijkl}+c_{ikjl}+c_{ilkj})\\
c^A_{ijkl} &= c_{i[jk]l}+\text{symmetrize on Eq. }\eqref{eq:sym}\\
&= \frac{1}{3}(2c_{ijkl}-c_{ikjl}-c_{ilkj}),
\end{aligned}\label{eq:cSA}
\end{equation}
where we use round and square brackets to denote normalized (anti-)symmetrization.
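Before proceeding, we note that the decomposition \eqref{eq:cSA} is easy to verify numerically; a minimal \texttt{numpy} sketch builds a random $c_{ijkl}$ with the symmetries of \Eq{eq:sym} and checks that $c^S$ coincides with the total symmetrization $c_{(ijkl)}$:
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
r = rng.standard_normal((3, 3, 3, 3))
# Impose the symmetries of Eq. (sym) by averaging over the group they generate:
group = [(0,1,2,3), (1,0,2,3), (0,1,3,2), (1,0,3,2),
         (2,3,0,1), (3,2,0,1), (2,3,1,0), (3,2,1,0)]
c = sum(r.transpose(p) for p in group)/8

cS = (c + c.transpose(0,2,1,3) + c.transpose(0,3,2,1))/3
cA = (2*c - c.transpose(0,2,1,3) - c.transpose(0,3,2,1))/3
assert np.allclose(cS + cA, c)
# Given Eq. (sym), cS equals the total symmetrization c_{(ijkl)}:
cSym = sum(c.transpose(p) for p in permutations(range(4)))/24
assert np.allclose(cS, cSym)
\end{verbatim}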
We can then decompose $\Delta {\cal L}$ as
\begin{equation}
\begin{aligned}
\Delta{\cal L} &= \Delta{\cal L}^S + \Delta{\cal L}^A \\
\Delta {\cal L}^S &= c_{ijkl}^S (\partial_\mu \phi_i \partial^\mu \phi_j)(\partial_\nu \phi_k \partial^\nu \phi_l)\\
\Delta {\cal L}^A &= c_{ijkl}^A (\partial_\mu \phi_i \partial^\mu \phi_j)(\partial_\nu \phi_k \partial^\nu \phi_l).\label{eq:DLphidecomp}
\end{aligned}
\end{equation}
Now since $\Delta {\cal L}^A=0$ on \Eq{eq:vf}, deformations around this background can take either sign irrespective of the coefficient $c^{A}_{ijkl}$, and thus it is not possible for $\Delta {\cal L}^A$ to have a definite sign in general.
On the other hand, $\Delta {\cal L}^S$ does not vanish on \Eq{eq:vf} and therefore has a chance at being positive on such backgrounds.
Remarkably, precisely these terms, which have the possibility of being positive on factorized backgrounds of the form \eqref{eq:vf}, are in fact guaranteed to be so by unitarity and causality. (However, neither $\Delta {\cal L}^S$ nor $\Delta {\cal L}^A$ is positive for completely arbitrary field configurations.) As we will see, this conclusion comes from the dispersive representation for $c_{ijkl}$.
These statements can be extended to other operators quartic in field strengths, e.g., those involving $N$ ${\rm U}(1)$ gauge fields. Importantly, black hole backgrounds are precisely those for which the analogue of $\Delta {\cal L}^A$ vanishes, so the correction to the extremal parameter is solely determined by $\Delta {\cal L}^S$, which we will find is positive for such backgrounds. By virtue of the action/extremality relation in \Eq{eq:Dzeta}, this implies that the shift in charge-to-mass ratio is positive in the green threshold region of Figs.~\ref{fig:qmcurve} and \ref{fig:qmcurvecounter}.
Hence, the finite corrections from the UV states themselves imply that black holes satisfy the WGC.
Throughout this section, we will restrict consideration to actions starting at quartic order in the massless modes.\footnote{This is consistent from an EFT perspective and corresponds to a UV completion that does not modify the three-point amplitudes of the low-energy modes at leading order.
In particular, the $R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}$ operator will be excluded.}
\subsection{Causality and unitarity}\label{sec:generalizedunitarity}
Taking the Wilson coefficients in \Eq{eq:DLmultiphi} to be generated by integrating out massive particles parametrically lighter than the Planck scale, we can constrain $c_{ijkl}$ using dispersion relations.
The most general set of bounds on $c_{ijkl}$ comes from making use of unitarity in the form of the generalized optical theorem~\cite{ZZ}. In particular, such bounds can be stricter than any dispersive relation one obtains from elastic scattering of superpositions of states.
Define $2\,M^{ijkl}\,{=}\,{\rm d}^2 M_{ij\rightarrow kl}(s,t{=}0)/{\rm d}s^2$, where we write $M_{ij\rightarrow kl}(s,t)$ for the amplitude for $\phi_i \phi_j \rightarrow \phi_k \phi_l$, working in cyclic formalism so that elastic scattering would correspond to $i=l$ and $j=k$. By the standard construction of the analytic dispersion relation~\cite{Adams:2006sv},
\begin{equation}
\hspace{-2mm}M^{ijkl}= \frac{1}{2\pi i} \int_{0}^{\infty}\frac{{\rm d}s}{s^{3}}\left[{\rm disc}\,M_{ij\rightarrow kl}(s)+{\rm disc}\,M_{ik\rightarrow jl}(s)\right].
\end{equation}
Though we write the lower limit of integration as zero, in practice we could instead start the integral---and evaluate the Wilson coefficients---at a finite threshold scale below the mass of the UV states generating $\Delta {\cal L}$.
Now, define the amplitude for $\phi_{i}(p_1)\phi_{j}(p_2)\rightarrow X$, where $X$ is any intermediate state, and write its real and imaginary parts as ${\cal A}_{ij\rightarrow X}=m_{R_{X}}^{ij}+i\,m_{I_{X}}^{ij}$.
We thus have
\begin{equation}
\begin{aligned}
M^{ijkl}&=\frac{1}{\pi}\int_{0}^{\infty}\frac{{\rm d}s}{s^{3}}\sum_{X}\Big(m_{R_{X}}^{ij}m_{R_{X}}^{kl} +m_{I_{X}}^{ij}m_{I_{X}}^{kl} +m_{R_{X}}^{ik}m_{R_{X}}^{lj} +m_{I_{X}}^{ik}m_{I_{X}}^{lj}\Big),
\end{aligned}\label{eq:generaldisp}
\end{equation}
where we drop the boundary term at infinity by requiring sufficiently well-behaved scaling ($\lesssim s^2$) of the forward amplitude at large momentum~\cite{Adams:2006sv}.
Now, $m_{R_{X}}^{ij}$ and $m_{I_{X}}^{ij}$ are some unknown, arbitrary, real-valued matrices.
Thus, the $M^{ijkl}$ allowed by unitarity are given by the full set of positive sums of $m^{ij}m^{kl}+m^{ik}m^{lj}$, where the $m^{ij}$ are real $N$-by-$N$ matrices.
The set of quadratic outer products of matrices defines a cone ${\cal C}$: given any two $M^{ijkl}$ in ${\cal C}$, say $M_{1}$ and $M_{2}$, one has $\lambda_{1}M_{1}+\lambda_{2}M_{2}\in{\cal C}$ for all $\lambda_{1},\lambda_{2}>0$.
Computing the amplitudes from \Eq{eq:DLmultiphi}, we find $M^{ijkl}=2(c_{ijkl} + c_{ikjl})$, so the result of the generalized optical theorem is that
\begin{equation}
c_{ijkl} + c_{ikjl} = \sum_m (m^{ij}m^{kl}+ m^{ik} m^{lj}).\label{eq:cc}
\end{equation}
Using the symmetries of $c_{ijkl}$ in \Eq{eq:sym} as described in \App{app:cc}, we can show that \Eq{eq:cc} is equivalent to an elegant expression for $c_{ijkl}$ alone,
\begin{equation}
c_{ijkl}=\sum_{m}\left(m^{(ij)}m^{(kl)}+m^{[il]}m^{[kj]}+m^{[ik]}m^{[lj]}\right),\label{eq:ccfinal}
\end{equation}
which immediately leads to
\begin{equation}
c^S_{ijkl}=\sum_{m}\,m^{(ij}m^{kl)},
\end{equation}
where we have symmetrized over all indices. We note that unitarity generates more bounds in \Eq{eq:ccfinal} than obtained by considering elastic scattering of arbitrary superpositions of scalars as in Ref.~\cite{Andriolo:2020lul}.
For arbitrary $\phi_i$ backgrounds, using the results of Eq.~\eqref{eq:ccfinal} one finds that neither $\Delta {\cal L}^S$ nor $\Delta {\cal L}^A$ is positive in general.
Defining
\begin{equation}
T^{ij}=\partial_\mu \phi_i \partial^\mu \phi_j,
\end{equation}
we can consider the cases in which $T_{ij}T_{kl}$ obeys the symmetries of $c^S$ or $c^A$ as defined in \Eq{eq:cSA},
\begin{equation}
\begin{aligned}
(T_{ij}T_{kl})^S &= \frac{1}{3}(T_{ij}T_{kl} + T_{ik}T_{jl} + T_{il}T_{kj}) \\
(T_{ij}T_{kl})^A &= \frac{1}{3}(2T_{ij}T_{kl} - T_{ik}T_{jl} - T_{il}T_{kj}).
\end{aligned}
\end{equation}
First, note that $(TT)^S c^A=(TT)^A c^S=0$. Thus, depending on the symmetry property of $(TT)$, either $\Delta \mathcal{L}^S$ or $\Delta \mathcal{L}^A$ can vanish, so neither can have a definite sign on arbitrary backgrounds, irrespective of the signs of $c^S$ and $c^A$. However, if $(TT)$ has the symmetry property that kills $c^S$, i.e., if $(TT) = (TT)^A$, then $(TT)c^A$ reduces to $\sum_m \left(T_{ij}m^{ij}\right)^2 \geq 0$, and vice versa for $S\leftrightarrow A$. That is, if $(TT)^{S,A} = 0$, then $(TT)c^{A,S}$ is positive.
In particular, for backgrounds of the form in Eq.~\eqref{eq:vf}, we find:
\begin{equation}
\Delta{\cal L}^S=(f^2)^2\sum_{m}(v\cdot m\cdot v)^2>0.\label{eq:DLsquare}
\end{equation}
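These statements are straightforward to check numerically. As an illustrative sketch, the following \texttt{numpy} snippet constructs $c_{ijkl}$ from random matrices via \Eq{eq:ccfinal}, verifies the symmetries \eqref{eq:sym}, and confirms the positivity \eqref{eq:DLsquare} on a factorized background:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4
sym = lambda a: (a + a.T)/2
asym = lambda a: (a - a.T)/2
ms = [rng.standard_normal((N, N)) for _ in range(3)]  # arbitrary real m^{ij}
# Build c_{ijkl} from the dispersive representation, Eq. (ccfinal):
c = sum(np.einsum('ij,kl->ijkl', sym(m), sym(m))
        + np.einsum('il,kj->ijkl', asym(m), asym(m))
        + np.einsum('ik,lj->ijkl', asym(m), asym(m)) for m in ms)
# c automatically has the symmetries of Eq. (sym):
for p in [(2,3,0,1), (1,0,2,3), (0,1,3,2)]:
    assert np.allclose(c, c.transpose(p))
# On the factorized background dphi_i = v_i f_mu, Eq. (DLsquare):
v = rng.standard_normal(N)
f2 = rng.standard_normal()            # f_mu f^mu may have either sign
dL = f2**2*np.einsum('ijkl,i,j,k,l', c, v, v, v, v)
assert np.isclose(dL, f2**2*sum((v @ m @ v)**2 for m in ms)) and dL >= 0
\end{verbatim}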
The ansatz \eqref{eq:vf} obeys a no-hair theorem of sorts, in that we have required $f_\mu$ to be independent of the direction of $v_i$ under which the solution is ``charged'' in flavor space.
Indeed, such a solution would govern the scalar profile of a generalization of a dilatonic black hole to a theory in which the dilaton is replaced by a multiplet $\phi_i$.
The positivity statement of \Eq{eq:DLsquare} can be generalized to any theory where the quartic amplitude at leading order goes like $s^2$, with arbitrary states replacing the scalar multiplet.
For example, consider a CP-conserving EFT with gauge group $\Pi_{i=1}^N {\rm U}(1)_i$:
\begin{equation}
\Delta {\cal L} = c_{ijkl}(F^{i}F^{j})(F^{k}F^{l})+\widetilde c_{ijkl}(F^{i}\widetilde{F}^{j})(F^{k}\widetilde{F}^{l}).\label{eq:DLF4multi}
\end{equation}
If we scatter photons of arbitrary flavor with parallel or perpendicular linear polarizations, only one of the two operators (the first or second, respectively) contributes to the forward amplitude.
Writing the real or imaginary parts of the amplitudes for $\gamma_i(p_1)\gamma_j(p_2)\rightarrow X$ with parallel (perpendicular) linear polarizations as $m^{ij}$ ($\widetilde m^{ij}$), \Eq{eq:ccfinal} immediately generalizes for \Eq{eq:DLF4multi}:
\begin{equation}
\begin{aligned}
c_{ijkl}&=\sum_{m}\left(m^{(ij)}m^{(kl)}+m^{[il]}m^{[kj]}+m^{[ik]}m^{[lj]}\right)\\
\widetilde c_{ijkl}&=\sum_{\tilde m}\left(\widetilde m^{(ij)}\widetilde m^{(kl)}+\widetilde m^{[il]}\widetilde m^{[kj]}+\widetilde m^{[ik]}\widetilde m^{[lj]}\right).
\end{aligned} \label{eq:ccfinalgauge}
\end{equation}
Consider a gauge field background that factorizes analogously to \Eq{eq:vf},
\begin{equation}
F^i = Q_i\, f+ P_i\, \star\! f,\label{eq:Fform}
\end{equation}
where $Q_i$ and $P_i$ are arbitrary $N$-component vectors, $f$ is an arbitrary spacetime-dependent two-form with components $f_{\mu\nu}$, and $\star f$ is its Hodge dual with components that we write as $\widetilde f_{\mu\nu} = \epsilon_{\mu\nu\rho\sigma} f^{\rho\sigma}/2$.
(Note in particular that the field strength for an arbitrary dyonic black hole charged under this multi-${\rm U}(1)$ theory is a particular example of \Eq{eq:Fform}, with $F^{i}=\frac{Q_{i}}{r^{2}}\,{\rm d}t\wedge{\rm d}r+P_{i}\sin\theta\,{\rm d}\theta\wedge{\rm d}\phi$.)
From \Eq{eq:ccfinalgauge}, unitarity implies that for solutions with the form \eqref{eq:Fform}, $\Delta {\cal L}$ is positive:
\begin{equation}
\begin{aligned}
\Delta{\cal L } &=\phantom{+}\sum_{m}\Big\{\left[(Q\cdot m\cdot Q-P\cdot m\cdot P)(f^{2})+(Q\cdot m\cdot P+P\cdot m\cdot Q)(f\widetilde{f})\right]^{2} \\& \qquad\qquad + \left[(Q\cdot m\cdot P)-(P\cdot m\cdot Q)\right]^{2}\left[(f^{2})^{2}+(f\widetilde{f})^{2}\right] \Big\}
\\&\phantom{=}+\sum_{\tilde m}\Big\{\left[(P\cdot\widetilde{m}\cdot Q+Q\cdot\widetilde{m}\cdot P)(f^{2})-(Q\cdot\widetilde{m}\cdot Q-P\cdot\widetilde{m}\cdot P)(f\widetilde{f})\right]^{2}\\&\qquad\qquad+\left[(Q\cdot\widetilde{m}\cdot P)-(P\cdot\widetilde{m}\cdot Q)\right]^{2}\left[(f^{2})^{2}+(f\widetilde{f})^{2}\right]\Big\}
\\& > 0,
\end{aligned}\label{eq:gaugepositivity}
\end{equation}
writing $P\cdot m\cdot Q$ for $m^{ij}P_{i}Q_{j}$, $P\cdot\widetilde{m}\cdot Q$ for $\widetilde{m}^{ij}P_{i}Q_{j}$, etc.
(Indeed, writing the analogues of $c^S$ and $c^A$ in \Eq{eq:cSA} for \Eq{eq:ccfinalgauge}, both the symmetric and antisymmetric parts contribute to \Eq{eq:gaugepositivity}, but the antisymmetric part drops out if either $Q_i$ or $P_i$ vanishes in \Eq{eq:Fform}.)
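To illustrate how \Eq{eq:gaugepositivity} arises, the following hedged \texttt{numpy} sketch builds $c_{ijkl}$ from a single intermediate state via \Eq{eq:ccfinal}, evaluates $\Delta{\cal L}$ on a factorized background of the form \eqref{eq:Fform}, and checks that it matches the corresponding sum of squares (only the $c$ term is shown; the $\widetilde c$ term is analogous):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 3
sym = lambda a: (a + a.T)/2
asym = lambda a: (a - a.T)/2
m = rng.standard_normal((N, N))       # one intermediate state in Eq. (ccfinal)
c = (np.einsum('ij,kl->ijkl', sym(m), sym(m))
     + np.einsum('il,kj->ijkl', asym(m), asym(m))
     + np.einsum('ik,lj->ijkl', asym(m), asym(m)))

Q, P = rng.standard_normal(N), rng.standard_normal(N)
A, B = rng.standard_normal(2)         # A = f^2, B = f ftilde on Eq. (Fform)
# (F^i F^j) on F^i = Q_i f + P_i *f, using ftilde.ftilde = -f.f:
G = A*(np.outer(Q, Q) - np.outer(P, P)) + B*(np.outer(Q, P) + np.outer(P, Q))
dL = np.einsum('ijkl,ij,kl', c, G, G)
square = ((A*(Q@m@Q - P@m@P) + B*(Q@m@P + P@m@Q))**2
          + (Q@m@P - P@m@Q)**2*(A**2 + B**2))
assert np.isclose(dL, square) and dL >= 0
\end{verbatim}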
In addition to positivity, we further expect that $\Delta {\cal L}$ is convex as a functional of the fields, as discussed in \App{app:convex}.
The result of \Eq{eq:Dzeta} and the argument for positivity of $\Delta {\cal L}$ on black hole solutions---for quartic and higher contact operators in unitary UV completions---is that for any such deformation of Einstein-Maxwell theory, including beyond the leading four-derivative terms, the extremality parameter receives a positive correction.
In particular, higher-derivative corrections of the form $F^4$, $F^8$, $R^4$, $R^2 F^2$, etc. should contribute positively to the extremal charge-to-mass ratio, thus allowing black holes themselves to be the states demanded by the WGC.
While the sign of the action can be ambiguous in the case of cubic terms---particularly the $R_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}$ operator for extremal KN black holes with small spin---we expect that $\Delta \zeta$ will be positive for any stable background~\cite{dyonic}. As noted previously, it is consistent from an EFT perspective to consider a theory in which the first corrections in $\Delta {\cal L}$ occur at quartic order and higher.
\subsection{Scalar examples}
Let us consider some particular example EFTs with tree-level completions in more detail.
Starting with the Einstein-Maxwell-dilaton Lagrangian,
\begin{eqnarray}
\mathcal{L}_{\rm EMD}=R-2(\partial\phi)^2-\frac{1}{2}e^{-2\lambda\phi}F^2,\label{eq:EMDL}
\end{eqnarray}
the resulting charge-to-mass ratio of the extremal magnetic dilaton black hole (the GHS solution~\cite{GHS} for arbitrary dilaton coupling $\lambda$) is modified by dimension-eight operators,\footnote{See \App{GHSSol} for details. Here we are assuming that the operators are dressed with appropriate powers of exponential dilaton factors such that in the string frame, the Lagrangian has a universal dilaton factor.}
\begin{equation}
\begin{aligned}
\Delta {\cal L} = a_1e^{-6\lambda\phi}(F^2)^2 + b_1e^{-4\lambda\phi}(\partial\phi)^2 F^2 + c\,e^{-2\lambda\phi}(\partial\phi)^4,
\end{aligned}\label{eq:EMDEFT}
\end{equation}
in terms of which we find:
\begin{equation}
\begin{aligned}
&\left.\frac{P}{\sqrt{2(1 + \lambda^2)}M}\right|_{\rm ext} =1 + \frac{16(1 + \lambda^2)^2a_1 + 4\lambda^2(1 + \lambda^2)b_1 + \lambda^4c}{10(1+\lambda^2)^4P^2}.\label{eq:Deltazdilaton}
\end{aligned}
\end{equation}
Let us now give examples for $(a_1,b_1,c)$ by considering a massive scalar $X$ coupled to photons and dilatons $\phi$ as
\begin{equation}
f_1XF^2,\quad f_2 X(\partial \phi )^2.
\end{equation}
In the low-energy theory that emerges from integrating out the massive scalar at tree level, we find:
\begin{equation}
a_1=\frac{f_1^2}{8m_X^2},\;\;b_1=\frac{f_1 f_2}{4m_X^2},\;\; c=\frac{f_2^2}{8m_X^2}. \label{eq:dilatonabc}
\end{equation}
Importantly, note that $b_1$ can be negative. However, we find the charge-to-mass ratio from Eq.~\eqref{eq:Deltazdilaton} is now given by
\begin{equation} \label{Answ}
\left.\frac{P}{\sqrt{2(1+\lambda^2)}M}\right|_{\rm ext}=1+\frac{[4(1+\lambda^2)f_1+\lambda^2f_2]^2}{80(1+\lambda^2)^4P^2m_X^2}.
\end{equation}
In other words, the sign-ambiguous term $f_1f_2$ gets combined with others to form a perfect square.
Thus we see that, in this simple scalar example, the individual coefficients of the four-derivative operators may not have a definite sign, but their contribution to the charge-to-mass ratio does.
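The perfect-square structure is simple to verify symbolically; the following \texttt{sympy} sketch substitutes \Eq{eq:dilatonabc} into \Eq{eq:Deltazdilaton} and also checks the on-shell action discussed next (here $D$, \texttt{F2}, and $Y$ are shorthands for $e^{-2\lambda\phi}$, $F^2$, and $(\partial\phi)^2$):
\begin{verbatim}
import sympy as sp

lam, f1, f2, mX, P = sp.symbols('lam f1 f2 m_X P', positive=True)
a1, b1, c = f1**2/(8*mX**2), f1*f2/(4*mX**2), f2**2/(8*mX**2)  # Eq. (dilatonabc)
shift = (16*(1 + lam**2)**2*a1 + 4*lam**2*(1 + lam**2)*b1 + lam**4*c) \
        /(10*(1 + lam**2)**4*P**2)                             # Eq. (Deltazdilaton)
square = (4*(1 + lam**2)*f1 + lam**2*f2)**2/(80*(1 + lam**2)**4*P**2*mX**2)
assert sp.simplify(shift - square) == 0                        # Eq. (Answ)

# The square descends from the on-shell action itself:
F2, Y, D = sp.symbols('F2 Y D', positive=True)   # F^2, (dphi)^2, exp(-2 lam phi)
DL = D/(8*mX**2)*(f1*D*F2 + f2*Y)**2
assert sp.expand(DL - (a1*D**3*F2**2 + b1*D**2*Y*F2 + c*D*Y**2)) == 0
\end{verbatim}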
The perfect square above is reflecting the fact that the on-shell action itself is a perfect square. Substituting \Eq{eq:dilatonabc} into $\Delta {\cal L}$, we have:
\begin{equation}
\Delta {\cal L} = \frac{e^{-2\lambda\phi}}{8m_X^2} \left[ f_1 e^{-2\lambda\phi}F^2 + f_2 (\partial\phi)^2\right]^2.
\end{equation}
Indeed, substituting the GHS solution and integrating from the horizon to infinity, one reproduces \Eq{Answ}. Remarkably, we find that the extremality/action relation in \Eq{eq:Dzeta} also holds for the higher-derivative-corrected GHS black hole.
That is, one finds by explicit calculation (in appropriate units of mass and Newton's constant) that the correction to the extremal charge-to-mass ratio computed directly from the equations of motion in \Eq{eq:Deltazdilaton} and \App{GHSSol} satisfies the $\Delta \zeta \sim \Delta {\cal L}$ relation in \Eq{eq:Dzeta} for arbitrary dilaton coupling constant.
This is surprising, since extremal GHS black holes are qualitatively very different from extremal KN---in particular, the former have vanishing entropy and area---and the mechanics of the proof of \Eq{eq:Dzeta} above relied on the perturbation of a nonzero horizon area in order to relate the charge-to-mass shift to the on-shell action.
However, in string frame, the GHS black hole has recently been shown to exhibit an ${\rm AdS}_2\times S^2$ geometry~\cite{Porfyriadis:2021zfb}, and in Einstein frame the higher-derivative terms can themselves similarly regularize the singular extremal GHS limit with a Bertotti-Robinson-like spacetime akin to near-horizon RN with nonzero entropy~\cite{Herdeiro:2021gbw}.
Such results may provide a path to understanding the empirical observation that the extremality/action relation in \Eq{eq:Dzeta} nonetheless holds for a GHS black hole.
Similar perfect-square behavior occurs in a related example, where we allow operators of different mass dimensions to contribute.
While the signs of the couplings of the $F^4$ and quartic Riemann operators are constrained by analyticity of scattering amplitudes~\cite{Adams:2006sv,Cheung:2014ega,Bellazzini:2015cra}, the signs of six-derivative operators of the form $R^2 F^2$ are not constrained without invoking additional assumptions.
This leads to a natural question: In EFTs with nonzero $F^4$ and $R^4$ terms, derived from a well-defined UV completion, can the couplings be such that the indefinite-sign $R^2 F^2$ operators overwhelm the definite-sign operators of higher and lower dimension, making the sign of the shift in charge-to-mass ratio indefinite?
As a concrete example, consider Einstein-Maxwell theory with a massive scalar $X$ coupled to $F^2$ and the various $R^2$ terms as:
\begin{equation}
{\cal L}_{{\rm UV}}=\frac{R}{2\kappa^{2}}-\frac{1}{4}F^{2}-\frac{1}{2}(\partial X)^{2}-\frac{1}{2}m_{X}^{2}X^{2} + g_X X \left(a_{1}R^{2} {+} a_{2}R_{\mu\nu}R^{\mu\nu} {+} a_{3}R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} {+} \epsilon bF^{2}\right),
\end{equation}
where $a_{1,2,3}$ and $b$ are ${\cal O}(1)$ constants in Planck units, $\epsilon$ is a unitless parameter that we can tune, and $g_X \ll 1$ is a small unitless coupling. Integrating out $X$, we have an EFT where $\Delta {\cal L}$ is given by
\begin{equation}
\frac{g_X^2}{2m_{X}^{2}}\left(a_{1}R^{2} +a_{2}R_{ab}R^{ab} +a_{3}R_{abcd}R^{abcd} + b \epsilon F^2 \right)^{2}.\label{eq:DLphi}
\end{equation}
When $\epsilon \sim 1$, the $F^4$ terms in \Eq{eq:DLphi} dominate, while when $\epsilon \ll 1$, the $R^4$ terms dominate, but for intermediate $\epsilon$, one might imagine that the $R^2 F^2$ terms, with Wilson coefficients of indefinite sign, will dominate and spoil the positivity of the extremality shift of RN black holes.
However, computing the corrected metric explicitly for an arbitrary dyonic black hole, we find that the shift in the charge-to-mass ratio can be written in a form that manifestly satisfies the WGC:
\begin{equation}
\begin{aligned}
\left.\frac{\sqrt{Q^2 + P^2}}{\sqrt{2}M}\right|_{\rm ext} & =1+\frac{\kappa^{4}g_X^2}{m^{6}m_{X}^{2}} \left[\frac{46656}{614185}\left(\frac{95}{108}a_{2}+a_{3}\right)^{2}\right. \\&\qquad\qquad\qquad +\frac{2352}{9449} \left( a_{2} + \frac{17}{7}a_{3} - \frac{9449}{5292}\frac{b\epsilon m^{2}}{\kappa^{2}}\gamma \right)^{2}\\&\qquad\qquad\qquad +\left.\frac{383}{59535}\frac{b^{2}\epsilon^{2}m^{4}}{\kappa^{4}}\gamma^2\right],
\end{aligned}\label{eq:Dzcompetition}
\end{equation}
writing $\gamma = (Q^2 - P^2)/(Q^2 + P^2)$.
Here, we must take the coupling $g_X$ small, so that second-order back reaction effects from two insertions of the $F^4$ operator (which would contribute $\propto g_X^4$) do not compete with the $R^2 F^2$ or $R^4$ effects in \Eq{eq:Dzcompetition} ($\propto g_X^2$).
While this conspiracy of operators of different mass dimension to arrange themselves such that the charge-to-mass ratio increases in a well-defined UV completion may seem mysterious at the level of \Eq{eq:Dzcompetition}, it is directly connected to the sum-of-squares form of the higher-derivative terms in the action that arises as a consequence of unitarity.
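Since the bracketed expression in \Eq{eq:Dzcompetition} is a sum of three squares, its positivity is manifest for any values of the couplings. The following sketch (ours, purely illustrative; the variable \texttt{b\_eps} collapses the combination $b\epsilon m^2/\kappa^2$) scans random coefficients as a numerical sanity check:
\begin{verbatim}
import random

def bracket(a2, a3, b_eps, gamma):
    # The three manifestly nonnegative terms of Eq. (eq:Dzcompetition)
    t1 = (46656/614185) * ((95/108)*a2 + a3)**2
    t2 = (2352/9449) * (a2 + (17/7)*a3 - (9449/5292)*b_eps*gamma)**2
    t3 = (383/59535) * b_eps**2 * gamma**2
    return t1 + t2 + t3

random.seed(0)
for _ in range(100000):
    a2, a3, b_eps = (random.uniform(-10, 10) for _ in range(3))
    gamma = random.uniform(-1, 1)  # gamma = (Q^2 - P^2)/(Q^2 + P^2)
    assert bracket(a2, a3, b_eps, gamma) >= 0  # a sum of squares
\end{verbatim}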
\section{Outlook}\label{sec:outlook}
In this paper, we have investigated the behavior of corrections to the extremality condition for black holes in two regimes: the asymptotic IR and the threshold regime.
In the former, the charge-to-mass ratio of an extremal black hole is quantum mechanically modified by $T_{\mu\nu}T^{\mu\nu}$ corrections to the EFT induced by loops of massless states.
We showed that these corrections are always positive, except in two special cases involving large numbers of massless particles with specific nonminimal, Planck-suppressed couplings; in these cases, we argued that, despite appearances, causality is not in danger and the WGC can be preserved by threshold effects.
In the threshold regime, where finite contributions to the EFT from integrating out massive states below $m_{\rm Pl}$ are dominant, we reviewed a result of Refs.~\cite{Cheung:2018cwt,dyonic}, giving a profound connection between the extremality correction and the on-shell EFT action.
We performed nontrivial checks of this result using both GHS black holes and quartic Riemann operators.
We then employed locality and unitarity---in the form of analytic dispersion relations and the generalized optical theorem---to show that healthy EFTs with higher-derivative terms at quartic order or higher must have an action in the form of a perfect square.
We proved this statement explicitly in the case of four-derivative operators in theories with an arbitrary number of scalars or photons.
Unitarity and the WGC are fundamentally interconnected both by their overlapping consequences for the positive-definite form of the on-shell Lagrangian and their connections to the running of EFT operators. Remaining open questions include more closely studying the fate of the nonminimally coupled, nonsupersymmetric theories with $>137$ fermions or $>46$ bosons, in order to more conclusively decide whether they belong to the landscape or the swampland, and understanding whether unitarity and causality can put practically useful bounds on the leading cubic higher-dimension operators. More broadly, the application of constraints from causality and unitarity to prove or sharpen other swampland conjectures is an interesting direction for future work.
The unexpected utility of the classical Lagrangian as a means of computing the black hole extremality bound in general theories suggests that other applications of the on-shell action could also be worthy of investigation.
Often, discussions of swampland criteria emphasize their nature as ``intrinsically quantum gravitational'' statements, with no field-theoretic avatars. But at least in the context of the WGC, we have seen that the consequences of ``special theories that provide healthy UV completions of quantum gravity'' and ``universal constraints from unitarity and causality'' are instead part of a continuum of statements. If we look at the $Q/M$ curve as a function of $M$ for theories with any amount of supersymmetry where the asymptotic flat-space region can be parametrically realized, we find at all masses that there are massive states above the extremal line. For masses larger than $M \gtrsim m_{\rm Pl}^2/m_*$, where the black hole solutions can be trusted in the effective field theory, the ``reason'' for this is given by the constraints from causality, unitarity, and dispersion relations. For exponentially large $M \gtrsim (m_{\rm Pl}^2/m_*)\exp(m_{\rm Pl}^p/m_*^p)$, it is the logarithmic running of the operators that guarantees the negative shift. Of course, any statement for masses $M \lesssim m_{\rm Pl}$ depends on detailed knowledge of an actual UV completion, and it is here that the beautiful facts about the existence of {\it light} states satisfying the WGC in all known examples in string theory are relevant. But these three regimes appear to be united in suggesting that the $Q/M$ curve is convex as a function of $M$, approaching ``$1$'' monotonically from above. Such monotonicity properties were already noted in the spectrum of charged states in the heterotic string in Ref.~\cite{ArkaniHamed:2006dz}. More recently, an interesting connection to the convexity of the spectrum of charged operators in AdS has been discovered in Ref.~\cite{Ofer}.
Studies of the WGC have seen that many fundamental aspects of physics, ranging from unitarity and causality to black hole decay, are intertwined and mutually reinforcing.
The ultimate nature and full consequences of these connections in unraveling the workings of quantum gravity, and their ability to make nontrivial predictions for real-world physics, remain a worthy challenge for future investigation.
\pagebreak
\begin{center}
{\bf Acknowledgments}
\end{center}
\noindent
We thank Clifford Cheung, Hirosi Ooguri, Timothy Trott, and Cumrun Vafa for useful discussions and comments.
{N.A-H.} is supported by DOE grant DE-SC0009988. Y.-t.H. and J.-Y.L. are supported by MoST grant 109-2112-M-002-020-MY3.
G.N.R. is supported at the Kavli Institute for Theoretical Physics by the Simons Foundation (Grant~No.~216179) and the National Science Foundation (Grant~No.~NSF PHY-1748958) and at the University of California, Santa Barbara by the Fundamental Physics Fellowship.
\section{Introduction}
Given a family of hypergraphs $\mathcal{F}$, the $k$-color Ramsey number for $\mathcal{F}$ is the minimum $n$ such that any edge coloring of the complete $r$-uniform hypergraph on $n$ vertices with $k$ colors contains a monochromatic copy of some $F\in \mathcal{F}$. We will denote this quantity by $R_r(\mathcal{F};k)$. The study of graph and hypergraph Ramsey numbers represents a huge body of research, and we refer the reader to the surveys \cite{CFS} and \cite{MS}.
In this paper we will be interested in hypergraph Ramsey numbers where the number of colors goes to infinity. We will focus on families of hypergraphs which are Berge-$G$ for some graph $G$, defined as follows. Given a ($2$-uniform) graph $G$, we say that a hypergraph $H$ is a {\em Berge-$G$} if $V(G) \subset V(H)$ and there is a bijection $\phi:E(G) \to E(H)$ such that $e\subset \phi(e)$ for all $e \in E(G)$. In other words, $H$ is a Berge-$G$ if we can embed a single edge into each hyperedge of $H$ and create a copy of $G$. When $G$ is a path or cycle, this definition agrees with the definition of a Berge path or Berge cycle. Note that many nonisomorphic hypergraphs may be a Berge-$G$, and we denote the family of all such hypergraphs by $\mathcal{B}(G)$. The notion of the family of Berge-$G$ for general graphs $G$ was initiated in \cite{GP} and since then extensive research has been done on extremal problems related to $\mathcal{B}(G)$ for various graphs $G$.
The Tur\'an number for a family $\mathcal{F}$ is denoted by $\mathrm{ex}_r(n, \mathcal{F})$ and is the maximum number of edges in an $n$-vertex $r$-uniform hypergraph that does not contain any $F\in \mathcal{F}$ as a subgraph. Early work on extremal problems for Berge hypergraphs focused on Tur\'an numbers of $\mathcal{B}(G)$. Since the introduction of the Berge-Tur\'an problem, a long list of papers have been written about it, far too many to cite here, and we recommend \cite{GMP} for a partial history. More recently Ramsey problems have also been considered, see for example \cite{AG, BGKW, BZ, G, GMOV, GLSS, GS, LW, NV, P, STWZ}. The two problems are related, as in any coloring avoiding a monochromatic member of $\mathcal{B}(G)$, every color class contains at most $\mathrm{ex}_r(n, \mathcal{B}(G))$ edges. It is therefore not surprising that the order of magnitude for $R_2(C_{2m};k)$, the multicolor Ramsey number of an even cycle, is known only when $m\in \{2,3,5\}$. In these cases, Li and Lih \cite{LL} showed that $R_2(C_{2m};k) = \Theta\left( k^{\frac{m}{m-1}}\right)$. Our main result is a generalization of this to hypergraphs. We prove our main result as a corollary of some more general theorems which may be useful for future hypergraph Ramsey problems.
Lower bounds for $R_r(\mathcal{F}; k)$ may be proved by considering the dual problem of minimizing the number of colors necessary to partition the edge set of $K_n^{(r)}$ such that each color class is $\mathcal{F}$-free. Our first theorem reduces this dual problem to covering the edges of complete $r$-partite $r$-uniform hypergraphs. We use $K_n^{(r)}$ to denote the complete $r$-uniform hypergraph on $n$ vertices and $K_{n,\cdots, n}^{(r)}$ to denote the complete $r$-partite $r$-uniform hypergraph with $n$ vertices in each part.
\begin{theorem} \label{thm: uniform to complete}
Let $r$ be fixed and $\beta>r-2$ and let $\mathcal{F}$ be a family of connected hypergraphs. If there exists an edge coloring of $K_{n,\cdots, n}^{(r)}$ with $O(n^\beta)$ colors with no monochromatic $F \in \mathcal{F}$, then there exists a coloring of the edges of $K_n^{(r)}$ with $O(n^\beta)$ colors with no monochromatic $F \in \mathcal{F}$.
\end{theorem}
We use this theorem to prove our main result, which determines the order of magnitude of the multicolor Ramsey number for Berge cycles of certain lengths and certain uniformities.
\begin{theorem}\label{thm: cycles ramsey number}
For $m\in \{2,3,5\}$, if $r<4m-1$, then $R_r(\mathcal{B}(C_{2m});k)$ and $R_r(\mathcal{B}(C_{2m+1});k)$ are each $\Theta \left(k^{\frac{m}{rm-m-1}}\right)$.
\end{theorem}
We note that our proof yields that if one could show that the order of magnitude of the graph multicolor Ramsey number for $C_{2m}$ is $\Theta(k^{\frac{m}{m-1}})$ for some $m\not\in \{2,3,5\}$, then this would also determine, for all $r < 4m-1$, that the order of magnitude of the $r$-uniform multicolor Ramsey number for $\mathcal{B}(C_{2m})$ and for $\mathcal{B}(C_{2m+1})$ is $\Theta \left(k^{\frac{m}{rm-m-1}}\right)$. Using similar techniques, we are also able to give lower bounds on $R_r(\mathcal{B}(K_{a,b});k)$ for some choices of $r, a, b$.
\begin{theorem}\label{thm: biclique ramsey number}
Let $b\geq 2$ and $a > (b-1)!$. Then for all $r < 2(a+b)-1$ we have
\[
R_r(\mathcal{B}(K_{a,b});k) = \Omega\left(k^{\frac{b}{(r-2)b+1}} \right).
\]
Furthermore, when $b=2$ or $a+b \leq r < 2(a+b)-1$, for $a> (b-1)!$ we have
\[
R_r(\mathcal{B}(K_{a,b});k) = \Theta\left(k^{\frac{b}{(r-2)b+1}} \right).
\]
\end{theorem}
\section{Preliminaries}
\begin{definition}
For a given $r$, there are finitely many vectors $(\rho_1,...,\rho_r)$ such that $\rho_i \in \mathbb{N}\cup \{0\}$ and $\sum \rho_i = r$. We will call the set of these vectors $P_r$. Given a particular vector $\pmb{\rho} \in P_r$, we have the following shorthand for describing specific features of this vector.
The maximal entry is $\pmb{\rho}_{max}$.
The number of nonzero entries $\rho_i$ is $|\text{supp}(\pmb{\rho})|$, which we denote by $\pmb{\hat{\rho}}$ for brevity.
\end{definition}
\begin{definition}
We define $(P_r,\prec)$ to be the following partial ordering of these weak compositions of $r$. For all $\pmb{\rho},\pmb{\tau} \in P_r$, we set $\pmb{\rho} \prec \pmb{\tau}$ if $\pmb{\hat{\rho}}> \pmb{\hat{\tau}}$ and there exists an ordering of a partition of $\pmb{\rho}$ into $\pmb{\hat{\tau}}$ subsets such that the sum of the elements in the $i$'th ordered subset of $\pmb{\rho}$ is equal to the $i$'th nonzero entry of $\pmb{\tau}$ for all $i$. \end{definition}
By considering any linear extension of the poset $(P_r,\prec)$, we arrive at a total ordering of $P_r$ with smallest element $(1,1,...,1)$, over which we can then induct.
\begin{definition}
We define $H^{(r)}_{(\rho_1,...,\rho_r)}(n)$ to be the hypergraph with vertex set $V = V_1 \cup V_2 \cup ... \cup V_r$ where $|V_i|=n$ and edge set $\{e : |e \cap V_i|=\rho_i \}$.
\end{definition}
We will need the following procedure, which takes a bipartite graph and transforms it into a hypergraph of higher uniformity.
\begin{definition}[Enlarging]
Let $G$ be a bipartite graph with partite sets $A$ and $B$ and let $a,b\in \mathbb{N}$. Define an $(a+b)$-uniform hypergraph $H$ as follows. For each $v\in A$ let $v_1,\cdots, v_a$ be $a$ disjoint vertices and for each $u\in B$ let $u_1,\cdots, u_b$ be $b$ disjoint vertices. Then
\[
V(H) = \left(\bigcup_{v\in A} \{v_1,\cdots , v_a\}\right) \cup \left( \bigcup_{u \in B} \{u_1, \cdots, u_b\}\right),
\]
\[
E(H) = \left \{ \{v_1,\cdots, v_a, u_1,\cdots, u_b\}: uv\in E(G) \right\}.
\]
We say that $H$ {\em is the hypergraph obtained by enlarging each vertex in $A$ to $a$ vertices and each vertex in $B$ to $b$ vertices}.
\end{definition}
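To make the enlarging operation concrete, the following is a minimal sketch in Python (the function name \texttt{enlarge} and the tuple encoding of vertices are ours, for illustration only):
\begin{verbatim}
def enlarge(edges, a, b):
    # Enlarge a bipartite graph into an (a+b)-uniform hypergraph.
    # edges is a list of pairs (u, v) with u in part A and v in part B.
    # Each u is blown up into copies ('A', u, 0..a-1) and each v into
    # copies ('B', v, 0..b-1); each graph edge uv becomes the single
    # hyperedge consisting of all copies of u and all copies of v.
    hyperedges = []
    for (u, v) in edges:
        e = ({('A', u, i) for i in range(a)} |
             {('B', v, j) for j in range(b)})
        hyperedges.append(frozenset(e))
    return hyperedges

# Example: enlarging K_{2,2} with a = 2, b = 1 gives four 3-uniform edges.
H = enlarge([(0, 0), (0, 1), (1, 0), (1, 1)], a=2, b=1)
assert len(H) == 4 and all(len(e) == 3 for e in H)
\end{verbatim}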
As stated in the introduction, determining the minimum $n$ such that any coloring of $K_n^{(r)}$ has a monochromatic $F$ is equivalent to the dual problem of minimizing the number of colors necessary to color $K_n^{(r)}$ such that no color class contains an $F$. We formalize this with the following function.
\begin{definition}
Let $H$ be a hypergraph and $\mathcal{F}$ be a family of hypergraphs. Define the function $C(H, \mathcal{F})$ to be the minimum number of colors necessary to color the edge set of $H$ such that no color class contains any $F\in \mathcal{F}$.
\end{definition}
\section{Proof of Theorem \ref{thm: uniform to complete}}
Let $\beta>r-2$ and let $\mathcal{F}$ be a fixed family of connected hypergraphs, and assume that we can color the edges of the complete $r$-partite $r$-uniform hypergraph with $O(n^{\beta})$ colors so that there is no monochromatic copy of a hypergraph in $\mathcal{F}$. That is, there exists a constant $c_{1,\cdots, 1}$ such that $C(H_{1,\cdots, 1}^{(r)}(n), \mathcal{F}) \leq c_{1,\cdots, 1} n^{\beta}$ for all $n$. We aim to show that $C(K_n^{(r)}, \mathcal{F}) = O(n^{\beta})$. To do this, we will split the edge set of $K_n^{(r)}$ into a bounded number of parts, each associated to an element of the poset $P_r$, and show that each of these sets can be colored with $O(n^{\beta})$ colors.
Since $C(K_n^{(r)}, \mathcal{F})$ is monotone in $n$, we assume without loss of generality that $n$ is divisible by $r$. Divide the vertex set into $V_1,..., V_r$, each of size $\frac{n}{r}$. For each edge $e$ there is a vector $(e_1,...,e_r) \in P_r$ where $e_i = |e\cap V_i|$, and we may partition the edge set of $K^{(r)}_n$ into sets depending on which vector in $P_r$ each edge is associated with. For a given vector $\pmb{\rho}\in P_r$, the set of edges with vector $\pmb{\rho}$ forms a subhypergraph isomorphic to $H_{\pmb{\rho}}^{(r)}\left(\frac{n}{r}\right)$, and hence
\[K_n^{(r)} = \bigcup\limits_{\pmb{\rho} \in P_r} H_{\pmb{\rho}}^{(r)}\left(\frac{n}{r}\right).\]
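This decomposition is straightforward to realize programmatically; the following sketch (ours, for illustration) classifies each edge of $K_n^{(r)}$ by its vector in $P_r$:
\begin{verbatim}
from itertools import combinations
from collections import defaultdict

def partition_by_type(n, r):
    # Split vertices 0..n-1 into r blocks V_1,...,V_r of size n/r and
    # assign each r-set e its vector of intersection sizes with the blocks.
    assert n % r == 0
    size = n // r
    classes = defaultdict(list)
    for e in combinations(range(n), r):
        rho = tuple(sum(1 for v in e if v // size == i) for i in range(r))
        classes[rho].append(e)
    return classes

# Example: the edges of K_6^{(3)} split among (1,1,1) and the permutations
# of (2,1,0); blocks here have size 2, so no block contains all 3 vertices.
classes = partition_by_type(6, 3)
assert sum(len(es) for es in classes.values()) == 20  # binom(6,3) = 20
\end{verbatim}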
Since the number of vectors in $P_r$ is a constant that depends only on $r$, it suffices to show that for each $\pmb{\rho}\in P_r$ we have that $C(H_{\pmb{\rho}}^{(r)}, \mathcal{F}) = O(n^{\beta})$. We will proceed by induction on (any linear extension of) $P_r$. Since by the assumption we have that $C(H_{1,\cdots, 1}^{(r)}(n), \mathcal{F}) \leq c_{1,\cdots, 1} n^{\beta}$, the base case is satisfied. Now fix $\pmb{\rho} = (\rho_1,\cdots, \rho_r)\in P_r$ and assume that for all $\pmb{\tau} \prec \pmb{\rho}$ there is a constant $c_{\pmb{\tau}}$ such that $C(H_{\pmb{\tau}}^{(r)}(n), \mathcal{F}) \leq c_{\pmb{\tau}} n^{\beta}$ for all $n$.
Note that if $\rho_i = 0$, then $V_i$ is not incident with any hyperedges of $H_{\pmb{\rho}}^{(r)}(n)$. Without loss of generality we can assume that $\rho_1$ through $\rho_{\pmb{\hat{\rho}}}$ are non-zero. Split each $V_i$ where $\rho_i > 0$ into $\pmb{\rho}_{max}$ parts $V_{i, 1},\cdots, V_{i,\pmb{\rho}_{max}}$ (again without loss of generality assume that $n$ is divisible by $\pmb{\rho}_{max}$). Divide the edges of $H_{\pmb{\rho}}^{(r)}\left(n\right)$ as follows. Call an edge $e$ {\em Type I} if for all $i$ there exists a $j$ such that $e\cap V_{i,j} = e\cap V_i$. Call the other edges {\em Type II}. We will show that we may cover the Type I and Type II edges with $O(n^{\beta})$ $\mathcal{F}$-free hypergraphs by induction on $n$ and by the induction hypothesis on $P_r$ respectively.
First we take care of the Type II edges. For any choice $U_1,\cdots, U_r$ of distinct sets from $\{V_{i,j}\}_{i,j}$ we may consider the subhypergraph of Type II edges which are induced by $U_1,\cdots, U_r$. If this subhypergraph contains edges, then for each edge $e$ one may consider the vector $(e'_1,\cdots, e'_r)$ where $e'_i = |U_i \cap e|$. By definition of Type II, the vector $(e'_1,\cdots, e'_r)$ is strictly less than $\pmb{\rho}$ in $P_r$. Therefore, by the induction hypothesis (on $P_r$), this subhypergraph of edges may be covered by $O(n^\beta)$ hypergraphs each of which is $\mathcal{F}$-free. Since the number of choices for $U_1,\cdots, U_r$ is a constant that depends only on $r$ and $\pmb{\rho}_{max}$, we have that there is an absolute constant $C:=C_{r, \pmb{\rho}}$ so that the Type II edges may be covered with at most $C n^{\beta}$ $\mathcal{F}$-free subhypergraphs.
Next we take care of the Type I edges by induction on $n$. Define $C_1$ to be a constant that satisfies $C + C_1 \pmb{\rho}_{max}^{\pmb{\hat{\rho}} - 1 - \beta} < C_1$. This is possible since $\beta > r-2$ and $\pmb{\hat{\rho}} \leq r-1$ for any $\pmb{\rho} \not= (1,\cdots, 1)$. For the induction hypothesis, assume that for any $k < n$ we have that $C(H_{\pmb{\rho}}^{(r)}(k), \mathcal{F}) \leq C_1 k^{\beta}$. For any $\mathbf{j} = (j_1,\cdots, j_{\hat{\pmb{\rho}}}) \in \{1,\cdots, {\pmb{\rho}_{max}}\}^{\hat{\pmb{\rho}}}$ the graph of Type I edges induced by $V_{1, j_1},\cdots, V_{{\hat{\pmb{\rho}}}, j_{\hat{\pmb{\rho}}}}$ is isomorphic to $H_{\pmb{\rho}}^{(r)}\left(\frac{n}{\pmb{\rho}_{max}}\right)$. By the induction hypothesis (on $n$) there are $\mathcal{F}$-free hypergraphs $G_1(\mathbf{j}),\cdots, G_{T}(\mathbf{j})$ which cover the Type I edges induced by $V_{1, j_1},\cdots, V_{{\hat{\pmb{\rho}}}, j_{\hat{\pmb{\rho}}}}$ where $T = C_1\left( \frac{n}{ \pmb{\rho}_{max}}\right)^\beta$. Naively, we could use such a set of hypergraphs for each $\mathbf{j}$, but unfortunately this is not a small enough number in total. In order to reduce the total number of hypergraphs used, we will combine those which are edge-disjoint. Note that because $\mathcal{F}$ contains only connected hypergraphs, the disjoint union of $\mathcal{F}$-free graphs is still $\mathcal{F}$-free.
For each $\mathbf{j}$ assume that we have $\mathcal{F}$-free hypergraphs $G_1(\mathbf{j}),\cdots, G_T(\mathbf{j})$ which partition the Type I edges induced by $V_{1, j_1},\cdots, V_{{\hat{\pmb{\rho}}}, j_{\hat{\pmb{\rho}}}}$ and $T = C_1\left( \frac{n}{ \pmb{\rho}_{max}}\right)^\beta$. We combine disjoint copies of these as follows. For $k_2,\cdots, k_{\pmb{\hat{\rho}}}$ any ${\pmb{\hat{\rho}}}-1$ (not necessarily distinct) integers in $\{0,\cdots, \pmb{\rho}_{max}-1\}$, consider the vectors $\mathbf{j}_1 = (1, 1+k_2,\cdots, 1+k_{\pmb{\hat{\rho}}})$, $\mathbf{j}_2 = (2, 2+k_2, \cdots, 2+k_{\pmb{\hat{\rho}}})$, \dots, $\mathbf{j}_{\pmb{\hat{\rho}}} = ({\pmb{\hat{\rho}}}, {\pmb{\hat{\rho}}}+k_2, \cdots, {\pmb{\hat{\rho}}}+k_{\pmb{\hat{\rho}}})$, where addition is done on $\{1,\cdots, \pmb{\rho}_{max}\}$ mod $\pmb{\rho}_{max}$. Then for any $t$ the graphs $G_t(\mathbf{j}_1), \cdots ,G_t(\mathbf{j}_{\pmb{\hat{\rho}}})$ are disjoint. Let their union be called $G_t(k_2,\cdots, k_{\pmb{\hat{\rho}}})$. Then as $k_2,\cdots, k_{\pmb{\hat{\rho}}}$ vary we have
\[
\bigcup_{t=1}^T \bigcup_{k_2,\cdots, k_{\pmb{\hat{\rho}}}} G_t(k_2,\cdots, k_{\pmb{\hat{\rho}}}) = \bigcup_{t=1}^T \bigcup_{\mathbf{j}} G_t(\mathbf{j}),
\]
and this union covers all of the Type I edges. The total number of graphs $G_t(k_2,\cdots, k_{\pmb{\hat{\rho}}})$ is $\pmb{\rho}_{max}^{\pmb{\hat{\rho}}-1} \cdot T = C_1 \left(\pmb{\rho}_{max}^{\pmb{\hat{\rho}} -1 - \beta }\right)n^\beta$. Combining the graphs used to cover the Type I edges with the graphs used to cover the Type II edges we have that
\[
C(H_{\pmb{\rho}}^{(r)}(n), \mathcal{F}) \leq Cn^{\beta} + C_1 \left(\pmb{\rho}_{max}^{{\pmb{\hat{\rho}}}-1 - \beta }\right)n^\beta < C_1 n^{\beta},
\]
where the last inequality follows by the choice of $C_1$.
\section{Proof of Theorems \ref{thm: cycles ramsey number} and \ref{thm: biclique ramsey number}}
We need the following lemmas, which take a graph and transform it into a hypergraph avoiding the relevant family of Berge subgraphs. Lemma \ref{lem: cycle blowup} has been noted before, see Construction 1.9 in \cite{GL2} for example, but we include a proof for completeness.
\begin{lemma}\label{lem: cycle blowup}
Let $G$ be a bipartite graph with no $C_3, C_4,...,C_{2m},C_{2m+1}$. Let $H$ be the $(s+t)$-uniform hypergraph obtained by enlarging each vertex in one part of $G$ to $s$ vertices and each vertex in the other part of $G$ to $t$ vertices. Then if $s < 2m$ and $t < 2m$, $H$ is $\mathcal{B}(C_{2m})$-free and $\mathcal{B}(C_{2m+1})$-free.
\end{lemma}
\begin{proof}
By contrapositive, let $g\in \{2m, 2m+1\}$ and assume that $H$ contains a Berge-$C_g$ with vertex set $v_1,...,v_{g}$ and edge set $e_1,...,e_{g}$ such that $v_i,v_{i+1} \in e_i$ (subscripts considered modulo $g$). Let $A$ and $B$ be the partite sets of graph $G$. For each $v_j$ let $w_j$ be the vertex in $G$ which was enlarged to create $v_j$. Note that the $w_j$ may not be distinct, but in the sequence $(w_1,\cdots, w_g)$ a vertex may appear at most $s$ times if it is in $A$ and at most $t$ times if it is in $B$. For each $j$, if $w_j$ and $w_{j+1}$ are distinct, then $w_j \sim w_{j+1}$ in $G$. Then, ignoring repeated vertices, the sequence $w_1, w_2, \dots, w_g, w_1$ corresponds to a closed walk in $G$ of length $\ell \leq g$. Furthermore, since $e_1,\cdots, e_g$ are distinct hyperedges, the edges in this closed walk must be distinct. Since $g \geq 2m$ and $s < 2m$ and $t<2m$, we have that $\ell \geq 3$. Therefore, there is a cycle in $G$ of length at least $3$ and at most $g$.
\end{proof}
We prove a similar lemma regarding enlarging graphs that are $K_{a,b}$-free.
\begin{lemma}\label{lem: biclique blowup}
Let $a,b\geq 2$ and $G$ be a bipartite graph with no $K_{a,b}$. Let $H$ be the $(s + t)$-uniform hypergraph obtained by enlarging each vertex in one part of $G$ to $s$ vertices and each vertex in the other part of $G$ to $t$ vertices. Then if $s<a+b$ and $t<a+b$, $H$ does not contain a Berge-$K_{a,b}$.
\end{lemma}
\begin{proof}
By contrapositive, assume $H$ contains a Berge-$K_{a,b}$ with vertex set $v_1,\cdots, v_a, u_1,\cdots, u_b$ and edge set $\{e_{i,j}\}$ where $\{v_i, u_j\} \subset e_{i,j}$. Let the partite sets of $G$ be $A'$ and $B'$, and let $A$ be the set of vertices that came from enlarging vertices in $A'$ and $B$ be the set of vertices that came from enlarging vertices in $B'$. For each $u_i$ and $v_j$, let $u'_i$ and $v'_j$ be the vertex in $G$ that was enlarged to create $u_i$ or $v_j$, respectively. First, the set $\{v'_1,\cdots, v'_a, u'_1,\cdots, u'_b\}$ contains more than one vertex, because each vertex of $G$ was enlarged to either $s$ or $t$ vertices in $H$ and both $s$ and $t$ are at most $a+b-1$ by assumption. Therefore, there exist $u'_i$ and $v'_j$ that are adjacent in $G$, and we may assume without loss of generality that $v'_j \in A'$ and $u'_i \in B'$ (and therefore $v_j\in A$ and $u_i \in B$).
Next we will show that $v_k$ and $u_k$ are in $A$ and $B$, respectively, for all $k$. The only vertices in $A$ that $v_i$ shares edges with are those that came from enlarging $v'_i$. Therefore, if $u_k \in A$ for some $k$ we must have that $u'_k = v'_i$. But then this forces all vertices in $A$ to come from enlarging either $v'_i$ or $u'_j$. For $a>1$ this is a contradiction, for then the map from the edges of the Berge-$K_{a,b}$ to the edges of $K_{a,b}$ will not be a bijection. A similar contradiction occurs if $v_k\in B$ for some $k$.
Since all $v_i$ are in $A$ and all $u_j$ are in $B$, we must also have that all $v'_i$ and all $u'_j$ are distinct; otherwise again, since $a,b\geq 2$, the map from the edges of the Berge-$K_{a,b}$ to the edges of $K_{a,b}$ will not be a bijection. But now if all $v'_i$ and $u'_j$ are distinct, we have that $v'_i$ and $u'_j$ are adjacent in $G$ for all $i$ and $j$, i.e., there is a $K_{a,b}$ in $G$.
\end{proof}
We will also use the following general theorem which allows one to obtain a coloring of $K_{n,\cdots, n}^{(r)}$ given a coloring of $K_{n,n}$.
\begin{theorem}\label{thm: hypergraph partitioning}
Let $G_1,\cdots, G_T$ be bipartite graphs on partite sets $A$ and $B$ whose union is $K_{n,n}$. For each $j$ let $H_j$ be the hypergraph obtained by enlarging each vertex in $A$ to $s$ vertices and each vertex in $B$ to $t$ vertices. Assume that $\mathcal{F}$ is a family of hypergraphs such that $H_i$ is $\mathcal{F}$-free for all $i$. Then there is a partition of the edge set of the complete $(s+t)$-partite $(s+t)$-uniform hypergraph with $n$ vertices in each part into $T \cdot n^{s + t - 2}$ subgraphs each of which is $\mathcal{F}$-free.
\end{theorem}
\begin{proof}
Let $A$ and $B$ be identified with $\mathbb{Z}/ n\mathbb{Z}$, and let $A_1,\cdots, A_s$ and $B_1,\cdots, B_t$ be disjoint sets of vertices, also each identified with $\mathbb{Z} /n\mathbb{Z}$. For $a_2, \cdots, a_s$ and $b_2, \cdots, b_t$ arbitrary elements of $\mathbb{Z} / n\mathbb{Z}$ and $1\leq i\leq T$, define $H_i (a_2, \cdots, a_s, b_2, \cdots, b_t)$ to be the $(s+t)$-partite $(s+t)$-uniform hypergraph on partite sets $A_1,\cdots, A_s, B_1,\cdots, B_t$ with edge set
\[
\{(u, u+a_2, \cdots, u+a_s, v, v+b_2, \cdots, v+b_t): uv\in E(G_i) \},
\]
where vertices in coordinates $1$ through $s$ are in parts $A_1,\cdots, A_s$ respectively and vertices in coordinates $s+1,\cdots s+t$ are in parts $B_1,\cdots, B_t$ respectively.
Note that $H_i(0, \cdots, 0, 0,\cdots, 0)$ is isomorphic to the hypergraph obtained by enlarging each vertex in $G_i$ to $s$ vertices if it is in $A$ and to $t$ vertices if it is in $B$, and hence it is $\mathcal{F}$-free. Furthermore, for any choice $a_2,\cdots, a_s, b_2, \cdots, b_t$, the hypergraph $H_i(a_2,\cdots, a_s, b_2, \cdots, b_t)$ is isomorphic to $H_i(0,\cdots, 0, 0, \cdots, 0)$ via the explicit isomorphism
\[ u\mapsto
\begin{cases}
u & u\in A_1 \cup B_1\\
u+a_i & u\in A_i, i\geq 2\\
u+b_i & u\in B_i, i\geq 2
\end{cases}
\]
Note that as $i$ ranges from $1$ to $T$ and $a_2,\cdots, a_s, b_2, \cdots, b_t$ vary over all choices in $\mathbb{Z} / n\mathbb{Z}$, we have $T\cdot n^{s+t-2}$ hypergraphs $H_i(a_2,\cdots, a_s, b_2, \cdots, b_t)$. It only remains to show that
\[
\bigcup_{i=1}^T \bigcup_{a_2,\cdots, a_s, b_2, \cdots, b_t} H_i(a_2,\cdots, a_s, b_2, \cdots, b_t)
\]
covers all of the hyperedges of the complete $r$-partite $r$-uniform hypergraph on partite sets $A_1, \cdots, A_s, B_1,\cdots, B_t$. To do this, consider an arbitrary hyperedge $(v_1,\cdots, v_{s+t})$. Let $i$ be the index such that $v_1 v_{s+1} \in E(G_i)$ (this is well-defined since the union of $G_1,\cdots, G_T$ is $K_{n,n}$). Then by the definitions we have that $(v_1,\cdots, v_{s+t})$ is an edge of the hypergraph
\[
H_i(v_2-v_1, v_3-v_1, \cdots, v_s - v_1, v_{s+2} - v_{s+1}, \cdots, v_{s+t} - v_{s+1}).
\]
\end{proof}
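The construction above is explicit and can be checked directly on small cases. Below is a minimal sketch (ours) of the shifted hypergraphs $H_i(a_2,\cdots, a_s, b_2,\cdots, b_t)$, together with a coverage check for the trivial case of a single color class:
\begin{verbatim}
from itertools import product

def shifted_hypergraph(G_edges, n, shifts_a, shifts_b):
    # An edge uv of G_i yields the hyperedge
    # (u, u+a_2, ..., u+a_s, v, v+b_2, ..., v+b_t), entries taken mod n.
    return {tuple((u + a) % n for a in (0,) + shifts_a)
            + tuple((v + b) % n for b in (0,) + shifts_b)
            for (u, v) in G_edges}

n, s, t = 3, 2, 2
G = [{(u, v) for u in range(n) for v in range(n)}]  # one class: all of K_{n,n}
covered = set()
for g in G:
    for sh in product(range(n), repeat=(s - 1) + (t - 1)):
        covered |= shifted_hypergraph(g, n, sh[:s - 1], sh[s - 1:])
# T * n^{s+t-2} = 9 shifted hypergraphs cover all n^{s+t} = 81 hyperedges.
assert len(covered) == n ** (s + t)
\end{verbatim}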
\begin{proof}[Proof of Theorem \ref{thm: cycles ramsey number}]
It is known \cite{GL} that the Tur\'an numbers for Berge cycles satisfy
\begin{align*}
\mathrm{ex}_r(n, \mathcal{B}(C_{2m})) &= O\left( n^{1+\frac{1}{m}}\right)\\
\mathrm{ex}_r(n, \mathcal{B}(C_{2m+1})) &= O\left( n^{1+\frac{1}{m}}\right)
\end{align*}
Applying this result and the pigeonhole principle yields the upper bound. For the lower bound, showing that $R_r(\mathcal{B}(C_{2m}); k)$ and $R_r(\mathcal{B}(C_{2m+1}); k)$ are $\Omega\left(k^{\frac{m}{rm-m-1}}\right)$ is equivalent to showing that $K_n^{(r)}$ can be partitioned into $O\left(n^{r - 1 - \frac{1}{m}} \right)$ subgraphs each of which is $\mathcal{B}(C_{2m})$-free and $\mathcal{B}(C_{2m+1})$-free, respectively; indeed, inverting $k = \Theta\left(n^{r-1-\frac{1}{m}}\right)$ gives $n = \Theta\left(k^{\frac{m}{rm-m-1}}\right)$, since $r-1-\frac{1}{m} = \frac{rm-m-1}{m}$. Let $s$ and $t$ be defined so that $s+t = r$ and $s,t < 2m$; such a choice exists since $r < 4m-1$.
It is known that for $m\in \{2,3,5\}$, $K_{n,n}$ can be partitioned into $O\left(n^{1-\frac{1}{m}}\right)$ subgraphs each of which has girth at least $2m+2$ (see Lemma 5 of \cite{LL} for the case $m=2$ and Proposition 3.1 of \cite{T} for the cases when $m=3$ and $m=5$). Therefore, for $T = O\left(n^{1 - \frac{1}{m}}\right)$, assume that $G_1, \cdots, G_T$ are graphs each of which has girth at least $2m+2$ and whose union is $K_{n,n}$. By Lemma \ref{lem: cycle blowup}, for each $G_i$ the hypergraph obtained by enlarging each vertex in one partite set to $s$ vertices and each vertex in the other partite set to $t$ vertices is both $\mathcal{B}(C_{2m})$-free and $\mathcal{B}(C_{2m+1})$-free. Then, by applying Theorem \ref{thm: hypergraph partitioning}, we obtain a set of $O\left(n^{r-1-\frac{1}{m}}\right)$ subgraphs which are $\mathcal{B}(C_{2m})$-free and $\mathcal{B}(C_{2m+1})$-free and whose union covers the edges of $K_{n,\cdots, n}^{(r)}$. Applying Theorem \ref{thm: uniform to complete}, we may partition the edge set of $K_n^{(r)}$ into $O\left(n^{r-1-\frac{1}{m}}\right)$ subgraphs each of which is $\mathcal{B}(C_{2m})$-free and $\mathcal{B}(C_{2m+1})$-free. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: biclique ramsey number}]
The lower bound is similar to the proof of the lower bound in Theorem \ref{thm: cycles ramsey number}. We leave the details to the reader and only note that one uses Lemma \ref{lem: biclique blowup} and the result from \cite{ARS} that for $a>(b-1)!$, the edge set of $K_n$ may be partitioned into $\Theta(n^{1/b})$ subgraphs each of which is $K_{a,b}$-free.
For the upper bound, when $b=2$ we use the result from \cite{GMP} that
\[
\mathrm{ex}_r(n, \mathcal{B}(K_{2,t})) = O\left(n^{3/2} \right),
\]
for all $r$ and $t$, and the result from \cite{GP} that
\[
\mathrm{ex}_r(n, \mathcal{B}(K_{a,b})) = O(n^{2-1/b})
\]
whenever $r\geq a+b$. The bound then follows from the pigeonhole principle.
\end{proof}
\section{Conclusion}
In this paper we determined the order of magnitude for the multicolor Ramsey numbers of Berge cycles of length $4$, $5$, $6$, $7$, $10$, or $11$, as long as the uniformity is small enough. Extending our theorem to other cycle lengths or uniformities is out of reach at the current time, for in these cases we do not even know the order of magnitude of the corresponding Tur\'an numbers. Our main result follows from a more general setup that allows one to go from a construction in the graph case to a construction in the hypergraph case. Because of this we were also able to give the order of magnitude for $R_r(\mathcal{B}(K_{a,b});k)$ for some choices of $r,a,b$. The lower bound in Theorem \ref{thm: biclique ramsey number} is not tight in general. It is known (see \cite{GMP}) that
\[
\mathrm{ex}(n, K_r, F) \leq \mathrm{ex}_r(n, \mathcal{B}(F)) \leq \mathrm{ex}(n, K_r, F) + \mathrm{ex}(n, F),
\]
where $\mathrm{ex}(n, K_r, F)$ denotes the maximum number of copies of $K_r$ in an $n$-vertex $F$-free graph. Combining this with results from \cite{AS} gives that for $a>(b-1)!$ and $3\leq r\leq \tfrac{a}{2} + 1$,
\[
\mathrm{ex}_r(n, \mathcal{B}(K_{a,b})) = \Theta \left( n^{r - \binom{r}{2}/a}\right).
\]
The upper bound that one gets from the pigeonhole principle for such $r,a,b$ does not match our lower bound in Theorem \ref{thm: biclique ramsey number}. Perhaps one could leverage the projective norm graphs to improve on our result in these cases. When $\tfrac{a}{2} + 1 < r < a+b$ the order of magnitude for $\mathrm{ex}_r(n, \mathcal{B}(K_{a,b}))$ is not known and this would have to be determined before answering the Ramsey question. It would be interesting to determine the order of magnitude for the multicolor Ramsey number of $\mathcal{B}(G)$ for other graphs $G$.
Throughout this paper we did not try to optimize our multiplicative constants because doing so would not have given us an asymptotic formula in any of the cases. We note that in all of the constructions as they are written, there are pairs of color classes that correspond to edge-disjoint hypergraphs, and these could be combined to reduce the total number of colors used. It is not clear what the best way to do this systematically is, but for example, we can obtain a lower bound for $R_3(\mathcal{B}(C_4);k)$ of \[\left(\frac{(3\sqrt{2}-4)(3\sqrt{3}-1)}{2} -o(1)\right)^{2/3}k^{2/3} \approx 0.63756 k^{2/3}.\] Furthermore, in some cases it is possible to extend Theorem \ref{thm: uniform to complete} to some $\beta \leq r-2$. Determining an asymptotic formula for any of the Ramsey numbers studied in this paper would be very interesting but would require new ideas, as even asymptotics for the corresponding Tur\'an numbers are not known (cf.\ \cite{EGM, EGMST, GL} and Section 5 of \cite{GMP}). In the specific case of $3$-uniform hypergraphs of girth $5$, it is known \cite{LV} that
\[
\mathrm{ex}_3(n, \{\mathcal{B}(C_2), \mathcal{B}(C_3), \mathcal{B}(C_4)\}) \sim \frac{1}{6} n^{3/2}.
\]
One construction showing the lower bound is to take the vertex set to be the $1$-dimensional subspaces of $\mathbb{F}_q^3$, where $3$ subspaces form an edge if and only if they are an orthogonal basis. It would be interesting to try to use automorphisms of this hypergraph to show (if it is true) that \[R_3(\{\mathcal{B}(C_2), \mathcal{B}(C_3), \mathcal{B}(C_4)\}; k) \sim k^{2/3}.\]
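For small prime $q$, this hypergraph is easy to generate explicitly; the sketch below (ours, assuming $q$ prime so that field arithmetic is simply mod $q$) builds it and compares the edge count against the asymptotic $\frac{1}{6}n^{3/2}$:
\begin{verbatim}
from itertools import combinations

def orthogonal_basis_hypergraph(p):
    # Vertices: 1-dimensional subspaces of F_p^3, via canonical
    # representatives whose first nonzero coordinate is 1.
    pts = [(1, y, z) for y in range(p) for z in range(p)]
    pts += [(0, 1, z) for z in range(p)] + [(0, 0, 1)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % p
    det = lambda u, v, w: (u[0]*(v[1]*w[2] - v[2]*w[1])
                           - u[1]*(v[0]*w[2] - v[2]*w[0])
                           + u[2]*(v[0]*w[1] - v[1]*w[0])) % p
    # Edge: three pairwise orthogonal subspaces that also form a basis.
    edges = [e for e in combinations(pts, 3)
             if all(dot(u, v) == 0 for u, v in combinations(e, 2))
             and det(*e) != 0]
    return len(pts), edges

n, E = orthogonal_basis_hypergraph(7)   # n = 7^2 + 7 + 1 = 57 vertices
print(n, len(E), n**1.5 / 6)            # edge count vs. (1/6) n^{3/2}
\end{verbatim}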
\bibliographystyle{plain}
\onecolumn
\section*{\centering Appendix}
\section{Dataset Statistics}
\label{app:stata}
\begin{table*}[h!]
\centering
\begin{tabular}{c|c|c|c|c}
\toprule
\textbf{} & \textbf{Language} & \textbf{NgramShift} & \textbf{ClauseShift} & \textbf{RandomShift} \\
\midrule
\textbf{num. tokens} &
\begin{tabular}{@{}c@{}c@{}}\textbf{Ru} \\ \textbf{En} \\ \textbf{Sv}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 105.8k \\ 128.5k \\ 134.1k \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 199.7k \\ 198.6k \\ 192.9k \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 95.6k \\ 111.1k \\ 100.7k \end{tabular}
\\ \midrule
\textbf{unique tokens} &
\begin{tabular}{@{}c@{}c@{}}\textbf{Ru} \\ \textbf{En} \\ \textbf{Sv}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 25.2k \\ 19.2k \\ 23.2k \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 46.1k \\ 25.1k \\ 25.7k \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 27.8k \\ 22.8k \\ 17.8k \end{tabular}
\\ \midrule
\textbf{tokens / sentence} &
\begin{tabular}{@{}c@{}c@{}}\textbf{Ru} \\ \textbf{En} \\ \textbf{Sv}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 10.9 \\ 12.9 \\ 13.4 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 19.9 \\ 19.9 \\ 19.3 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 10.5 \\ 11.1 \\ 10.1 \end{tabular}
\\ \bottomrule
\end{tabular}
\caption{Summary statistics of the controlled perturbation datasets. Languages: \textbf{Ru}=Russian, \textbf{En}=English, \textbf{Sv}=Swedish.}
\label{tab:stata}
\end{table*}
\clearpage
\section{Parameter-free Probing}
\label{app:pfp}
\begin{figure*}[h!]
\centering
\includegraphics[width=.8\textwidth]{images/complexity_M-BART.jpeg}
\caption{The task-wise heatmaps depicting the $\delta$ UUAS scores by M-BART for each language. Method=\textbf{Self-Attention Probing}. PE=\textbf{absolute}. X-axis=Attention head index. Y-axis=Layer index. Tasks: \textbf{Ngramshift} (top); \textbf{ClauseShift} (middle); \textbf{RandomShift} (bottom). Languages: \textbf{En}=English (left); \textbf{Sv}=Swedish (middle); \textbf{Ru}=Russian (right).}
\label{fig:mbart_complexity}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=.8\textwidth]{images/en_M-BERT_positional.jpeg}
\caption{The task-wise heatmaps depicting the $\delta$ UUAS scores by M-BERT for each language. Method=\textbf{Self-Attention Probing}. PE: \textbf{absolute} (left); \textbf{random} (middle); \textbf{zero} (right). X-axis=Attention head index. Y-axis=Layer index. Tasks: \textbf{Ngramshift} (top); \textbf{ClauseShift} (middle); \textbf{RandomShift} (bottom).}
\label{fig:mbert_complexity_pe}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=.85\textwidth]{images/l2_perturbed_ru_M-BERT.pdf}
\caption{The Euclidean distance between the impact matrices computed by M-BERT with different PEs over each pair of sentences ($s$, $s'$) for Russian. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-ru-mbert}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=.85\textwidth]{images/l2_perturbed_en_M-BERT.pdf}
\caption{The Euclidean distance between the impact matrices computed by M-BERT with different PEs over each pair of sentences ($s$, $s'$) for English. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-en-mbert}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=.85\textwidth]{images/l2_perturbed_sv_M-BART.pdf}
\caption{The Euclidean distance between the impact matrices computed by M-BART with different PEs over each pair of sentences ($s$, $s'$) for Swedish. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-sv-mbart}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=.85\textwidth]{images/l2 perturbed en M-BART.pdf}
\caption{The Euclidean distance between the impact matrices computed by M-BART with different PEs over each pair of sentences ($s$, $s'$) for English. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-en-mbart}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=.85\textwidth]{images/l2 perturbed ru M-BART.pdf}
\caption{The Euclidean distance between the impact matrices computed by M-BART with different PEs over each pair of sentences ($s$, $s'$) for Russian. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-ru-mbart}
\end{figure*}
\begin{table*}[ht!]
\centering
\begin{tabular}{c|c|c|c}
\toprule
\textbf{} & \textbf{Language} & \textbf{M-BERT} & \textbf{M-BART} \\
\midrule
\textbf{NgramShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.3155 - 0.3293 \\ 0.3007 - 0.3123 \\ 0.3602 - 0.3813 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.3144 - 0.322 \\ 0.3024 - 0.3075 \\ 0.3649 - 0.3757 \end{tabular}
\\ \midrule
\textbf{ClauseShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.2043 - 0.214 \\ 0.203 - 0.213 \\ 0.1961 - 0.2081 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.2028 - 0.2067 \\ 0.2037 - 0.2086 \\ 0.2023 - 0.2047 \end{tabular}
\\ \midrule
\textbf{RandomShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.3659 - 0.3788 \\ 0.3819 - 0.412 \\ 0.3915 - 0.4163 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.3666 - 0.3726 \\ 0.3716 - 0.3887 \\ 0.4011 - 0.4105 \end{tabular}
\\ \bottomrule
\end{tabular}
\caption{The UUAS scores for the \textbf{Self-Attention Probing} method. The minimum and maximum values are given (min--max). Languages: \textbf{Ru}=Russian, \textbf{En}=English, \textbf{Sv}=Swedish.}
\label{tab:uuas-sap}
\end{table*}
\begin{table*}[t!]
\centering
\begin{tabular}{c|c|c|c}
\toprule
\textbf{} & \textbf{Language} & \textbf{M-BERT} & \textbf{M-BART} \\
\midrule
\textbf{NgramShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.3506 - 0.4397 \\ 0.3682 - 0.4601 \\ 0.4244 - 0.5229 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.3097 - 0.4397 \\ 0.4295 - 0.533 \\ 0.4553 - 0.5364 \end{tabular}
\\ \midrule
\textbf{ClauseShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.2331 - 0.2915 \\ 0.2463 - 0.3264 \\ 0.2279 - 0.2748 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.2331 - 0.2915 \\ 0.2379 - 0.3264 \\ 0.2219 - 0.2748 \end{tabular}
\\ \midrule
\textbf{RandomShift} &
\begin{tabular}{@{}c@{}c@{}}\textbf{En} \\ \textbf{Sv} \\ \textbf{Ru}\end{tabular}
& \begin{tabular}{@{}c@{}c@{}} 0.3932 - 0.4718 \\ 0.4585 - 0.533 \\ 0.4553 - 0.5364 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 0.3878 - 0.4719 \\ 0.4346 - 0.533 \\ 0.4341 - 0.5364 \end{tabular}
\\ \bottomrule
\end{tabular}
\caption{The UUAS scores for the \textbf{Token Perturbed Masking} probe. The minimum and maximum values are given (min--max). Languages: \textbf{Ru}=Russian, \textbf{En}=English, \textbf{Sv}=Swedish.}
\label{tab:uuas-tpm}
\end{table*}
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{images/en gr.png}
\caption{\texttt{original}}
\label{fig:tr-en-rshift-gr}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{images/en un.png}
\caption{\texttt{perturbed}}
\label{fig:tr-en-rshift-ungr}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{images/en ud.png}
\caption{\texttt{gold}}
\label{fig:tr-en-rshift-ud}
\end{subfigure}
\caption{Graphical representations of the syntactic trees inferred for the English sentence \textit{Iyassu stoned me with gold} and its perturbed version. \texttt{original}=original sentence; \texttt{perturbed}=the perturbed version; \texttt{gold}=gold standard. Task=\textbf{RandomShift}. Model=\textbf{M-BERT} (Layer: 11; Head: 2). Method=\textbf{Self-Attention Probing}. The perturbation is underlined with red, and incorrectly assigned dependency heads are marked with red arrows.
}
\label{fig:tr-en}
\end{figure*}
\clearpage
\section{Representation Analysis}
\label{app:repr}
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{images/ti_res_ru_ngram_shift.pdf}
\caption{\textbf{NgramShift}}
\label{fig:ti-ru-nshift}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{images/ti_res_ru_clause_shift.pdf}
\caption{\textbf{ClauseShift}}
\label{fig:ti-ru-cshift}
\end{subfigure}
\caption{Token identifiability (TI) by layer for M-BERT and M-BART on the \textbf{NgramShift} (left) and \textbf{ClauseShift} (right) tasks for Russian. Dashed lines represent the scores computed over the intact sentences. X-axis=Layer index. Y-axis=TI.
}
\label{fig:ti-ru}
\centering
\includegraphics[width=\linewidth]{images/SAD SV random.png}
\caption{Self-Attention Distance (SAD) by layer for M-BART and M-BERT with absolute (left) and zeroed (right) positional embeddings on the \textbf{RandomShift} task for Swedish. X-axis=Layer index. Y-axis=SAD.
}
\label{fig:sad-sv}
\end{figure*}
\clearpage
\section{Acceptability Judgements}
\label{app:pppl}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{images/mbart_dist.png}
\caption{The \textit{MeanLP} distributions for the perturbed (ungrammatical) and intact (grammatical) sentences by M-BART. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:pppl-mbart}
\centering
\includegraphics[width=0.9\textwidth]{images/M-BERT Mean LP.png}
\caption{The \textit{MeanLP} distributions for the perturbed (ungrammatical) and intact (grammatical) sentences by M-BERT. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:pppl-mbert}
\end{figure*}
\end{document}
\section{Introduction}
An extensive body of work is devoted to analyzing the syntactic knowledge of Transformer language models (LMs) \cite{vaswani2017attention,clark-etal-2019-bert,goldberg2019assessing,belinkov-glass-2019-analysis}. BERT-based LMs \cite{devlin-etal-2019-bert} have demonstrated the ability to encode various linguistic and hierarchical properties \cite{lin-etal-2019-open,jawahar2019does,jo2020roles} that have a positive effect on downstream performance \cite{liu-etal-2019-linguistic,miaschi-etal-2020-linguistic} and serve as an inspiration for syntax-oriented architecture improvements \cite{wang2019structbert,bai-etal-2021-syntax,ahmad-etal-2021-syntax,sachan-etal-2021-syntax}. In addition, a variety of pre-training objectives has been introduced \cite{liu2020survey}, some of which model the reconstruction of perturbed word order \cite{lewis-etal-2020-bart,tao-etal-2021-learning,panda-etal-2021-shuffled}.
Recent research has adopted a new experimental direction aimed at exploring the syntactic knowledge of LMs and their sensitivity to word order by employing \emph{text perturbations} \cite{futrell2018rnns,futrell-etal-2019-neural,ettinger-2020-bert}. Some studies show that shuffling word order causes significant performance drops on a wide range of QA tasks \cite{si2019does,sugawara2020assessing}. However, a number of works demonstrate that such permutations have little to no impact during the pre-training and fine-tuning stages \cite{pham2020out,sinha2020unnatural,DBLP:journals/corr/abs-2104-06644,o2021context,hessel-schofield-2021-effective,gupta2021bert}. The latter contradict the common understanding of how hierarchical and structural information is encoded in LMs \cite{rogers-etal-2020-primer}, and may even call into question whether word order is modeled by the position embeddings \cite{wang2020position,dufter2021position}.
This has stimulated a targeted probing of the internal LM representations generated from original texts and their permuted counterparts \cite{DBLP:journals/corr/abs-2104-06644,hessel-schofield-2021-effective}. A new type of \emph{controllable} probes has been proposed, designed to test the LMs' sensitivity to granular character- and sub-word-level manipulations \cite{clouatre2021demystifying}, as well as structured syntactic perturbations \cite{alleman-etal-2021-syntactic}. Despite the emerging interest in the field, little has been investigated for languages other than English, specifically those with flexible word order.
This paper extends the ongoing research on syntactic sensitivity to three Indo-European languages with a varying degree of word order flexibility: English, Swedish, and Russian. The contributions of this work are summarized as follows. First, we propose nine probing datasets in the languages mentioned above, organized by the type of controllable syntactic perturbation: N-gram perturbation (\textbf{NgramShift}), shuffling parts of the syntactic clauses (\textbf{ClauseShift}), and randomizing word order (\textbf{RandomShift}). Although randomizing word order has been studied from many perspectives (see Section \ref{related_work}), \textbf{NgramShift} differs from similar approaches \cite{conneau-etal-2018-cram,ravishankar-etal-2019-probing,eger-etal-2020-probe,alleman-etal-2021-syntactic} in that the N-grams correspond to \emph{only} syntactic phrases (e.g. prepositional or numerical phrases) rather than random word spans. \textbf{ClauseShift} is a previously unexplored type of syntactic perturbation adopted from the syntactic tree augmentation method \cite{sahin-steedman-2018-data}. Second, we apply a combination of parameter-free interpretation methods to test the sensitivity of two multilingual Transformer LMs: M-BERT \cite{devlin-etal-2019-bert} and M-BART \cite{liu-etal-2020-multilingual-denoising}. We hypothesize that M-BART is more robust to the perturbations than M-BERT, since it is trained to restore shuffled input during pre-training. We evaluate the discrepancy between the syntactic trees induced by the models from perturbed sentences and those from the original ones, along with the ability to distinguish between them by judging their linguistic acceptability \cite{lau-etal-2020-furiously}. Finally, we analyze the relationship between the models' probe performance and position embeddings (PEs). To the best of our knowledge, this is one of the first attempts to introspect PEs with respect to structural probing, particularly in the light of syntactic perturbations. The code and datasets are publicly available\footnote{\url{https://github.com/evtaktasheva/dependency_extraction}}.
\section{Related Work}
\label{related_work}
\paragraph{Syntax Probing}
Most of the previous studies on the syntactic knowledge of LMs are centered around the concept of probing tasks, where a simple classifier is trained to predict a particular linguistic property based on the model internal representations \cite{conneau-etal-2018-cram}. The scope of the properties ranges from dependency relations \cite{tenney2018you} to the depth of a syntax tree, and top constituents \cite{conneau-etal-2018-cram}. A variety of probing datasets and benchmarks have been developed. To name a few, \citet{liu-etal-2019-linguistic} create a probing suite focused on fine-grained linguistic phenomena, including hierarchical knowledge. SyntaxGym \cite{gauthier2020syntaxgym} and LINSPECTOR \cite{sahin-etal-2020-linspector} allow for targeted evaluation of the LMs linguistic knowledge in a standardized and reproducible environment.
These studies have shown that LMs are capable of encoding linguistic and hierarchical information \cite{belinkov-glass-2019-analysis,rogers-etal-2020-primer}. However, the probing paradigm has lately been criticized for relying on \emph{supervised} probes, which can learn linguistic properties given the supervision and make the results challenging to interpret because of the additional set of parameters \cite{hewitt-liang-2019-designing,belinkov2021probing}. Towards that end, \citet{hewitt-manning-2019-structural} introduce a \emph{structural} probe to explore a linear transformation of the embedding space which best approximates the distance between words and the depth of the parse tree. The method has been shown to infer the hierarchical structure without any linguistic annotation \cite{kim2020pre}. \citet{maudslay2021syntactic} propose a \emph{Jabberwocky} probing suite of semantically nonsensical but syntactically well-formed sentences. The results demonstrate that BERT-based LMs do not isolate semantics from syntax, which motivates further development of the probing field.
\paragraph{Acceptability Judgements} Another line of works relies on the concept of acceptability judgments. The CoLA benchmark \cite{warstadt-etal-2019-neural} and its counterpart for Swedish \cite{volodina2021dalaj} test LMs ability to identify various linguistic violations. Although Transformer LMs have outperformed the CoLA human solvers on the GLUE leaderboard \cite{wang-etal-2018-glue}, a granular linguistic analysis \cite{DBLP:journals/corr/abs-1901-03438} shows that the models struggle with long-distance syntactic phenomena as opposed to more local ones. Similar in spirit, BLiMP \cite{warstadt-etal-2020-blimp-benchmark}, and CLiMP \cite{xiang-etal-2021-climp} allow to evaluate the LMs with respect to the acceptability contrasts, framing the task as ranking sentences in minimal pairs.
\paragraph{Text Perturbations} Recent research has adopted a scope of novel approaches to investigating the LMs' sensitivity to syntax corruption and input data manipulations. Starting from studies on randomized word order in LSTMs \cite{hill-etal-2016-learning,khandelwal-etal-2018-sharp,sankar2019neural,nie2019analyzing}, text perturbations have emerged as an audacious experimental direction under the ``pre-train \& fine-tune'' paradigm along with the interpretation methods of modern LMs. \citet{si2019does,sugawara2020assessing} show that N-gram permutations and shuffled word order in the fine-tuning data cause BERT performance drops of up to 22\% on a wide range of QA tasks. In contrast, several works report that models fine-tuned on such perturbed data still produce high-confidence predictions and perform close to their counterparts on many tasks, including the GLUE benchmark \cite{ahmad-etal-2019-difficulties,sinha2020unnatural,liu2021importance,hessel-schofield-2021-effective,gupta2021bert}. Similar results are demonstrated by the RoBERTa model \cite{liu2019roberta} when the word order perturbations are incorporated into the pre-training objective \cite{panda-etal-2021-shuffled} or tested as a part of full pre-training on the perturbed corpora \cite{DBLP:journals/corr/abs-2104-06644}. \citet{DBLP:journals/corr/abs-2104-06644} find that the randomized RoBERTa models are similar to their naturally pre-trained peer according to parametric probes but perform worse according to the non-parametric ones.
Recognizing the need to further explore the LMs' sensitivity to word order, \citet{clouatre2021demystifying} and \citet{alleman-etal-2021-syntactic} conduct an interpretation analysis of LMs by means of \emph{controllable} text perturbations. \citet{clouatre2021demystifying} propose two metrics that score the local and global structure of sentences perturbed at the granularity of characters and sub-words. The metrics reveal that both conventional and Transformer LMs rely on the local order of tokens more than on the global one. \citet{alleman-etal-2021-syntactic} find that BERT builds syntactic complexity towards the output layer and demonstrates a growing sensitivity to the hierarchical phrase structure across layers. In line with these studies, we analyze the syntactic sensitivity of Transformer-based LMs, extending the experimental setup to the multilingual setting.
\section{Controllable Perturbations}
This work proposes three types of \emph{controllable} syntactic perturbations varying in the extent of sentence corruption. We construct nine probing tasks\footnote{We use sentences from the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Texts to Universal Dependencies \cite{ginter@conll}.} for three Indo-European languages\footnote{\url{https://wals.info}}: English (West Germanic, analytic), Swedish (North Germanic, analytic), and Russian (Balto-Slavic, fusional). Based on the dominant constituent order, all three languages are classified as SVO (Subject-Verb-Object) languages. Nevertheless, they differ in word order flexibility. Russian is known to exhibit free word order, as all possible constituent reorderings are acceptable: SOV, OSV, SVO, OVS, VSO, VOS \cite{bailyn2012syntax}. English allows only two of them, namely SVO and OSV \cite{prince1988pragmatic}. Swedish belongs to the verb-second languages, which poses different restrictions on the possible constituent reorderings \cite{borjars2003subject}. Each dataset\footnote{Brief statistics are outlined in Appendix~\ref{app:stata}.} consists of 10k pairs of a perturbed sentence and its original.
\vspace{0.5em}\noindent \textbf{NgramShift} tests the LM sensitivity to \emph{local} perturbations that take the syntactic structure into account. We use a set of carefully designed morphosyntactic patterns to perturb \emph{only} N-grams that correspond to syntactic phrases, such as numeral phrases, determiner phrases, compound noun phrases, and prepositional phrases. To this end, we apply TF-IDF weighting from the scikit-learn library \cite{pedregosa2011scikit} to build a ranked N-gram feature matrix from the corpora, which is then used for the N-gram inversion. We use the N-gram range $\in [2; 4]$ for each language; note that the number of words that change their absolute positions is similar for different values of $N$. Figure \ref{fig:example-nshift} illustrates the shift of the head in the prepositional phrase \textit{``to school''} for the sentence \textit{``He did not go to school yesterday''}.
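As a rough illustration of the ranking step, the following sketch builds the TF-IDF-weighted N-gram matrix and returns the top-ranked N-grams; the morphosyntactic filtering is omitted and the helper name is ours:
\begin{verbatim}
# Illustrative sketch only: the morphosyntactic filtering patterns
# are omitted and the helper name is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_ngrams(corpus, n_min=2, n_max=4, top_k=100):
    vectorizer = TfidfVectorizer(ngram_range=(n_min, n_max))
    tfidf = vectorizer.fit_transform(corpus)   # (n_sentences, n_ngrams)
    scores = tfidf.sum(axis=0).A1              # aggregate weight per N-gram
    ngrams = vectorizer.get_feature_names_out()
    ranked = sorted(zip(ngrams, scores), key=lambda x: -x[1])
    return ranked[:top_k]
\end{verbatim}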
\begin{figure}[ht]
\centering
\fbox{\begin{minipage}{0.95\columnwidth}
\setstretch{1.5}
\parbox{\columnwidth}{
\centering
\textcolor{cb-blue-green}{He did not go} \textcolor{cb-brown}{to} \textcolor{cb-blue}{school} \textcolor{cb-rose}{yesterday} \\
\textbf{En:} \textcolor{cb-blue-green}{He did not go} \textcolor{cb-blue}{school} \textcolor{cb-brown}{to} \textcolor{cb-rose}{yesterday}\\
\textbf{Ru:} \textcolor{cb-rose}{Vchera} \textcolor{cb-blue-green}{on ne poshel} \textcolor{cb-blue}{shkolu} \textcolor{cb-brown}{v} \\
\textbf{Sv:} \textcolor{cb-blue-green}{Han gick inte} \textcolor{cb-blue}{skolan} \textcolor{cb-brown}{till} \textcolor{cb-rose}{igår}
}
\end{minipage}
}
\caption{Examples of the N-gram perturbations (\textbf{NgramShift}). Languages: \textbf{En}=English, \textbf{Ru}=Russian, \textbf{Sv}=Swedish. The English sentence is translated into the other languages for illustrative purposes.
}
\label{fig:example-nshift}
\end{figure}
\vspace{0.5em}\noindent \textbf{ClauseShift} probes the LM sensitivity to \emph{distant} perturbations at the level of syntactic clauses. We use the syntactic tree augmentation method \cite{csahin2019data} to rotate sub-trees around the root of the dependency tree of each sentence, forming a new synthetic sentence. We then apply a set of manually curated language-specific heuristics to filter out sentences left uncorrupted by the rotation procedure. Figure \ref{fig:example-clauseshift} outlines an example of the clause rotation perturbation for the sentence \textit{``He manages to tell her that she has been resurrected''}.
\begin{figure}[ht]
\centering
\fbox{\begin{minipage}{0.95\columnwidth}
\setstretch{1.5}
\parbox{\columnwidth}{
\centering
\small
\textcolor{cb-blue-green}{He manages to tell her} \textcolor{cb-rose}{that she has been resurrected}\\
\textbf{En:} \textcolor{cb-rose}{That she has been resurrected} \textcolor{cb-blue-green}{he manages to tell her} \\
\textbf{Sv:} \textcolor{cb-rose}{Att hon har uppstått} \textcolor{cb-blue-green}{han lyckas berätta för henne}\\
\textbf{Ru:} \textcolor{cb-rose}{Chto ona byla voskreshena} \textcolor{cb-blue-green}{on smog rasskazat' ej}
}
\end{minipage}
}
\caption{Examples of the clause rotation perturbation (\textbf{ClauseShift}). Languages: \textbf{En}=English, \textbf{Ru}=Russian, \textbf{Sv}=Swedish. The English sentence is translated into the other languages for illustrative purposes.
}
\label{fig:example-clauseshift}
\end{figure}
\vspace{0.5em}\noindent \textbf{RandomShift} tests the LM sensitivity to \emph{global} perturbations obtained by shuffling the word order. This type represents an extreme case of sentence permutation and is useful for comparing the behavior of the models across the scale of perturbation complexity. An example of the randomized word order perturbation for the sentence \textit{``She wanted to go to London''} is presented in Figure \ref{fig:example-randomshift}.
\begin{figure}[ht]
\centering
\fbox{\begin{minipage}{0.95\columnwidth}
\setstretch{1.5}
\parbox{\columnwidth}{
\centering
\textcolor{cb-blue-green}{She} \textcolor{cb-brown}{wanted}
\textcolor{cb-blue}{to go}
\textcolor{cb-rose}{to London}\\
\textbf{En:} \textcolor{cb-brown}{Wanted} \textcolor{cb-rose}{London}
\textcolor{cb-blue}{go}
\textcolor{cb-blue-green}{she}
\textcolor{cb-blue}{to}
\textcolor{cb-rose}{to} \\
\textbf{Sv:} \textcolor{cb-brown}{Ville} \textcolor{cb-rose}{London}
\textcolor{cb-blue}{åka}
\textcolor{cb-blue-green}{hon}
\textcolor{cb-rose}{till}
\textcolor{cb-blue}{att}\\
\textbf{Ru:} \textcolor{cb-brown}{Hotela} \textcolor{cb-rose}{London}
\textcolor{cb-blue}{poehat'}
\textcolor{cb-blue-green}{ona}
\textcolor{cb-rose}{v}}
\end{minipage}
}
\caption{Examples of the word order shuffling (\textbf{RandomShift}). Languages: \textbf{En}=English, \textbf{Ru}=Russian, \textbf{Sv}=Swedish. The English sentence is translated into the other languages for illustrative purposes.
}
\label{fig:example-randomshift}
\end{figure}
\section{Experimental Setup}
\label{setup}
\subsection{Models}
\label{setup:models}
The experiments are run on two 12-layer multilingual Transformer models distributed via the HuggingFace library \cite{wolf-etal-2020-transformers}:
\vspace{0.5em}\noindent \textbf{M-BERT}\footnote{Model name: \texttt{bert-base-multilingual-cased}.} is pre-trained using masked language modeling (MLM) and next sentence
prediction objectives, over concatenated monolingual
Wikipedia corpora in 104 languages.
\vspace{0.5em}\noindent \textbf{M-BART}\footnote{Model name: \texttt{facebook/mbart-large-cc25}.} is a sequence-to-sequence model that comprises a BERT encoder and an autoregressive GPT-2 decoder \cite{radford2019language}. The model is pre-trained on the CC25 corpus in 25 languages using text infilling and sentence shuffling objectives, where it learns to predict masked word spans and reconstruct the permuted input. We use only the encoder in our experiments.
\subsection{Interpretation Methods}
\label{setup:methods}
\paragraph{Parameter-free Probing} We apply two unsupervised probing methods to reconstruct syntactic trees from self-attention (\textbf{Self-Attention Probing}) and so-called ``impact'' (\textbf{Token Perturbed Masking}) matrices, computed by feeding the MLM models with each sentence $s$ and its perturbed version $s'$. The trees are induced by the Chu-Liu-Edmonds algorithm \cite{chu1965shortest,edmonds1968optimum}, which computes the Maximum Spanning Tree starting from the root of the corresponding gold dependency tree \cite{raganato-tiedemann-2018-analysis,htut2019attention,wu-etal-2020-perturbed}. The probing performance is evaluated by the Undirected Unlabeled Attachment Score (UUAS), which reflects the percentage of words that have been assigned the correct head, without taking the direction of relations and dependency labels into account \cite{klein-manning-2004-corpus}.
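For concreteness, a minimal sketch of the scoring step is given below; it induces an undirected tree from a symmetrized weight matrix and computes UUAS against the gold edges, a simplification of the directed Chu-Liu-Edmonds procedure used in the experiments (function names are ours):
\begin{verbatim}
# Simplified, undirected variant for illustration; the actual pipeline
# runs Chu-Liu-Edmonds on the directed graph rooted at the gold root.
import networkx as nx

def uuas_from_weights(W, gold_edges):
    # W: (n, n) attention or impact matrix over words (numpy array)
    sym = (W + W.T) / 2.0                     # symmetrize
    g = nx.from_numpy_array(sym)
    mst = nx.maximum_spanning_tree(g)         # undirected MST
    pred = {frozenset(e) for e in mst.edges()}
    gold = {frozenset(e) for e in gold_edges}
    return len(pred & gold) / len(gold)       # UUAS
\end{verbatim}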
\vspace{0.5em}\noindent \textbf{Self-Attention Probing} \cite{htut2019attention} explores whether attention heads encode complete syntactic trees. To this end, each layer-head attention matrix is treated as a weighted directed graph, where the vertices represent words in the input sentence and the edges are the attention weights. Model-specific special tokens such as \texttt{[CLS]}, \texttt{[SEP]}, \texttt{<s>}, \texttt{</s>} are excluded at the pre-processing stage to eliminate their impact on other tokens.
\vspace{0.5em}\noindent \textbf{Token Perturbed Masking} \cite{wu-etal-2020-perturbed} extracts global syntactic information by measuring the impact one word has on the prediction of another in an MLM. The impact matrix is similar to the self-attention matrix as it reflects the inter-word relationships in terms of Euclidean distance, except that it is derived from the outputs of the MLM head. For the sake of space, we refer the reader to \citet{wu-etal-2020-perturbed} for more details.
\paragraph{Representation Analysis} \citet{hessel-schofield-2021-effective} propose two metrics to compare the contextualized representations and self-attention matrices produced by the model for each pair of sentences $s$ and $s'$. \textit{Token Identifiability (TI)} evaluates the similarity of the LM's contextualized representations of a particular token in $s$ and $s'$; it is high if the token representations are similar to one another. \textit{Self-Attention Distance (SAD)} measures whether each token in $s$ relates to similar words in $s'$ by computing the row-wise Jensen-Shannon divergence between the two self-attention matrices; it is low if an LM attends to the same words despite the perturbations.
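A compact sketch of the SAD computation, assuming the attention rows of $s$ and $s'$ have already been aligned token-wise, might look as follows; note that SciPy's \texttt{jensenshannon} returns the square root of the divergence:
\begin{verbatim}
# Hedged sketch: assumes attention rows are token-aligned between s, s'.
import numpy as np
from scipy.spatial.distance import jensenshannon

def self_attention_distance(attn_s, attn_s_prime):
    # row-wise Jensen-Shannon distance, averaged over tokens
    rows = [jensenshannon(p, q) for p, q in zip(attn_s, attn_s_prime)]
    return float(np.mean(rows))
\end{verbatim}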
\paragraph{Pseudo-perplexity} Pseudo-perplexity (PPPL) is an intrinsic measure that estimates the probability of a sentence with an MLM, similar to that of conventional LMs \cite{salazar-etal-2020-masked}. PPPL-based measures have proved to correlate with human ratings \cite{lau2017grammaticality}, match or outperform autoregressive LMs (GPT-2) in ranking hypotheses for downstream tasks and the BLiMP benchmark \cite{salazar-etal-2020-masked}, and perform at the human level in acceptability judgments \cite{lau-etal-2020-furiously}. We use two PPPL-based measures from the implementation\footnote{\url{https://github.com/jhlau/acceptability-prediction-in-context}} by \citet{lau-etal-2020-furiously} to infer probabilities of the sentences and their perturbed counterparts. Both \textit{MeanLP} and \textit{PenLP} are computed as the sum of pseudo-log-likelihood scores over the tokens in the sentence, normalized by a length term: \textit{MeanLP} divides by the total number of tokens, while \textit{PenLP} scales the denominator with an exponent $\alpha$ to penalize the effect of high scores.
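As an illustration, given per-token pseudo-log-likelihood scores, the two measures can be computed along the following lines; the length-penalty form and the value of $\alpha$ are assumptions based on common choices and may differ from the cited implementation:
\begin{verbatim}
# Illustrative only; alpha and the exact penalty form are assumptions.
def mean_lp(token_logprobs):
    return sum(token_logprobs) / len(token_logprobs)

def pen_lp(token_logprobs, alpha=0.8):
    penalty = ((5 + len(token_logprobs)) / (5 + 1)) ** alpha
    return sum(token_logprobs) / penalty
\end{verbatim}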
\subsection{Positional Encoding} Various PEs have been proposed to make use of word order information in Transformer-based LMs \cite{wang2020position,dufter2021position}. Surprisingly, little is known about what PEs capture and how well they learn the meaning of positions. \citet{wang-chen-2020-position} are among the first to present an extensive study of the properties captured by PEs in different pre-trained Transformers, empirically evaluating their impact on downstream performance for many NLP tasks. In the spirit of this work, we analyze the impact of PEs on syntactic probe performance. To this end, we consider the following three configurations of PEs of the M-BERT and M-BART models: (1) \textbf{absolute}=frozen PEs; (2) \textbf{random}=randomly initialized PEs; and (3) \textbf{zero}=zeroed PEs.
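A minimal sketch of how these configurations can be obtained for a HuggingFace BERT-style encoder is shown below; the attribute path holds for \texttt{bert-base-multilingual-cased}, and the M-BART case is analogous but not shown:
\begin{verbatim}
# Sketch for M-BERT; the attribute path is standard for BertModel.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-multilingual-cased")
pe = model.embeddings.position_embeddings.weight
with torch.no_grad():
    # zero:     pe.zero_()
    # random:   pe.normal_(0.0, model.config.initializer_range)
    # absolute: keep the pre-trained weights as-is
    pass
pe.requires_grad_(False)  # PEs stay frozen in all three configurations
\end{verbatim}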
\section{Results}
\label{res}
\begin{figure*}[t!]
\centering
\includegraphics[width=.7\textwidth]{images/complexity_M-BERT.jpeg}
\caption{The task-wise heatmaps depicting the $\delta$ UUAS scores by M-BERT for each language. Method=\textbf{Self-Attention Probing}. PE=\textbf{absolute}. X-axis=Attention head index. Y-axis=Layer index. Tasks: \textbf{NgramShift} (top); \textbf{ClauseShift} (middle); \textbf{RandomShift} (bottom). Languages: \textbf{En}=English (left); \textbf{Sv}=Swedish (middle); \textbf{Ru}=Russian (right).}
\label{fig:mbert_complexity}
\end{figure*}
\subsection{Parameter-free Probing}
\label{pfp}
The discrepancy between the syntactic trees induced from the original sentences and from their perturbed analogs is measured as the difference between the corresponding UUAS scores ($\delta$ UUAS). The lower the $\delta$ UUAS, the better the syntax tree reconstructed from $s'$ with respect to the UUAS score for $s$.
\paragraph{Self-Attention Probing} Figures \ref{fig:mbert_complexity} and \ref{fig:mbart_complexity} in Appendix \ref{app:pfp} outline the task-wise heatmaps with the $\delta$ UUAS scores achieved by the M-BERT and M-BART models, respectively, with \textbf{absolute} PEs for each layer-head pair. The models exhibit similar behavior, demonstrating a positive correlation between the $\delta$ UUAS scores and the granularity of the perturbation. The overall pattern for both models is that they display little to no sensitivity to \emph{local} and \emph{distant} perturbations (\textbf{NgramShift}, \textbf{ClauseShift}), in contrast to the \emph{global} ones (\textbf{RandomShift}). We provide examples of the dependency trees extracted from the self-attention matrices of the M-BERT model for the Swedish \textbf{NgramShift} task in Figure \ref{fig:tr-sv}. The trees from both the original (see Figure \ref{fig:tr-sv-nshift-gr}) and the perturbed (see Figure \ref{fig:tr-sv-nshift-ungr}) sentence versions receive a UUAS score of 0.86, demonstrating little change in the assigned dependency heads under the local perturbation. In contrast, randomizing the word order (\textbf{RandomShift}) corrupts the syntactic structure significantly, with a $\delta$ UUAS score of 0.33 (see Figure \ref{fig:tr-en}, Appendix \ref{app:pfp}).
\paragraph{Token Perturbed Masking} The models show results similar to those under \textbf{Self-Attention Probing} with regard to the perturbation granularity (see Figure \ref{fig:perturbed}). Nevertheless, the performance on \textbf{NgramShift} and \textbf{ClauseShift} reveals some differences between the encoders. M-BART generally achieves lower, close-to-zero $\delta$ UUAS scores, meaning that it better restores the hierarchical information from the perturbed sentences (e.g., \textbf{ClauseShift}: [\textbf{Sv, Ru}]). We relate this to the fact that M-BART is pre-trained with the sentence shuffling objective.
\paragraph{Language-wise Comparison} Another observation is that there are more insensitive attention heads on the Russian tasks, possibly indicating that the perturbations are harder to detect in Russian than in English and Swedish, particularly on the \textbf{ClauseShift} task, whose sentences are typically longer and syntactically more complex (see Figures \ref{fig:mbert_complexity}, \ref{fig:mbart_complexity}, Appendix \ref{app:pfp}). As for Swedish, whose syntactic structure is similar to English but stricter, M-BART tends to induce correct syntactic trees from the permuted sentences more frequently, as indicated by negative $\delta$ UUAS scores on most tasks.
\paragraph{Positional Encoding} Despite the common belief that positional information contributes most to the encoding of syntactic structure, the models do not seem to rely on it as much as might be expected. Figure \ref{fig:mbert_complexity_pe} (see Appendix \ref{app:pfp}) illustrates the distribution of $\delta$ UUAS scores for M-BERT with different PEs on the English tasks. The heatmaps show that \textbf{zero} and \textbf{random} PEs only slightly affect the probe performance of the self-attention heads.
\begin{figure*}[h]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{images/sv_gr.pdf}
\caption{\texttt{original}}
\label{fig:tr-sv-nshift-gr}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{images/sv_un.pdf}
\caption{\texttt{perturbed}}
\label{fig:tr-sv-nshift-ungr}
\end{subfigure}
\begin{subfigure}[b]{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{images/sv_ud.png}
\caption{\texttt{gold}}
\label{fig:tr-sv-nshift-ud}
\end{subfigure}
\caption{Graphical representations of the syntactic trees inferred for the Swedish sentence \textit{Treubiaceae är en familj av bladmossor} 'Treubiaceae is a family of mosses' and its perturbed version. \texttt{original}=the original sentence; \texttt{perturbed}=the perturbed version; \texttt{gold}=gold standard. Task=\textbf{NgramShift}. Model=\textbf{M-BERT} (Layer: 11; Head: 2). Method=\textbf{Self-Attention Probing}. The perturbation is underlined with red, and incorrectly assigned dependency heads are marked with red arrows.
}
\label{fig:tr-sv}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=.82\textwidth]{images/deltas_perturbed.jpeg}
\caption{The probing performance in $\delta$ UUAS across layers under \textbf{Token Perturbed Masking}. PE=\textbf{absolute}. The scores are averaged over attention heads at each layer. X-axis=Layer index. Y-axis=$\delta$ UUAS.}
\label{fig:perturbed}
\end{figure*}
To analyze the impact of PEs from another perspective, for each pair ($s$, $s'$) we compute the Euclidean distance (L2) between the corresponding impact (\textbf{Token Perturbed Masking}) and self-attention (\textbf{Self-Attention Probing}) matrices described in Section \ref{setup:methods}. A difference in the impact matrices produced by the M-BERT model is generally observed only in the setting with \textbf{zero} PEs (see Figure \ref{fig:l2-perturbed-sv-mbert}; Figures \ref{fig:l2-perturbed-ru-mbert}-\ref{fig:l2-perturbed-en-mbert}, Appendix \ref{app:pfp}). In contrast, there is almost no difference between the representations generated by M-BART across all configurations of the PEs (see Figures \ref{fig:l2-perturbed-sv-mbart}-\ref{fig:l2-perturbed-ru-mbart}, Appendix \ref{app:pfp}). This behavior is consistent with the head-wise results under \textbf{Self-Attention Probing} for all languages.
\subsection{Representation Analysis}
\label{res:repr}
\paragraph{Token Identifiability} The overall pattern for both models under the representation analysis is that, for \emph{local} and \emph{distant} perturbations, TI steadily decreases towards the output layer with rapid increases at layers $[1, 10]$ (see Figure \ref{fig:ti-ru}, Appendix \ref{app:repr}), while it remains high for \emph{global} perturbations (\textbf{RandomShift}). TI decreases when the perturbed inputs generate embeddings different from the intact ones. Although higher layers in both models are more sensitive, the perturbed representations remain similar to those of the original \cite{hessel-schofield-2021-effective}.
\paragraph{Self-Attention Distance} The SAD results show that both models score significantly lower with \textbf{random} and \textbf{zero} PEs (see Figure \ref{fig:sad-sv}, Appendix \ref{app:repr}), meaning lower sensitivity to the perturbations, which is supported by the probing results (Section \ref{pfp}). This provides evidence that the encoders rely only marginally on the positional information to induce the syntactic structure, even though the distributions of the self-attention weights for the intact and perturbed sentences may differ according to the Jensen-Shannon divergence.
\subsection{Pseudo-perplexity}
\label{res:pppl}
Consistent with the results under parameter-free probing (Section \ref{pfp}) and representation analysis (Section \ref{res:repr}), the PPPL-based acceptability judgements\footnote{We present the results obtained with the \textit{MeanLP} measure, which are consistent with those of \textit{PenLP}.} indicate that the encoders distinguish between the perturbations depending on their granularity. The overall trend is that, for all languages, the sentence pseudo-log-probability inferred from both LMs decreases as the perturbation complexity increases, as demonstrated by higher acceptability scores on \textbf{NgramShift} but significantly lower scores on \textbf{ClauseShift} and \textbf{RandomShift} (see Figures \ref{fig:pppl-mbart}-\ref{fig:pppl-mbert}, Appendix \ref{app:pppl}). The statistical significance of the differences between the PPPL distributions is confirmed with Kolmogorov–Smirnov and Wilcoxon signed-rank tests (p-value $<$ 0.01).
\begin{figure*}[t!]
\centering
\includegraphics[width=.82\textwidth]{images/l2_perturbed_sv_M-BERT.pdf}
\caption{The Euclidean distances between the impact matrices computed by M-BERT with different PEs over each pair of sentences ($s$, $s'$) for Swedish. The distances are averaged over attention heads at each layer. Method: \textbf{Token Perturbed Masking}. Tasks: \textbf{NgramShift} (left); \textbf{ClauseShift} (middle); \textbf{RandomShift} (right).}
\label{fig:l2-perturbed-sv-mbert}
\end{figure*}
\section{Discussion}
\paragraph{The syntactic sensitivity depends upon language} At present, English remains the focal point of NLP research, leaving other languages understudied. Our probing experiments on the less explored languages with different word order flexibility show that M-BERT and M-BART behave slightly differently on Swedish and Russian. While M-BART better restores the corrupted syntactic structure on most of the tasks for Swedish, fewer attention heads are sensitive to the perturbations in Russian, as revealed by the examination of head-wise attention patterns of both models. Besides, the encoders achieve lower probing performance for Russian, which can be attributed to its more complex syntax and flexible word order.
\paragraph{Pre-training objectives can help to improve syntactic robustness} Analysis of the M-BERT and M-BART LMs, which differ in their pre-training objectives, shows that M-BERT achieves higher $\delta$ UUAS scores across all languages than M-BART, which is pre-trained with the sentence shuffling objective. The lower $\delta$ UUAS probing performance indicates that M-BART better induces syntactic trees from both perturbed and intact sentences (see Section \ref{pfp}). Despite this, the representation and acceptability analyses demonstrate that M-BART is also capable of distinguishing between the perturbations (see Sections \ref{res:repr}-\ref{res:pppl}). A fruitful direction for future work is to analyze more LMs that differ in architecture design and pre-training objectives.
\paragraph{The LMs are less sensitive to more granular perturbations} The results of the parameter-free probing show that M-BERT and M-BART exhibit little to no sensitivity to \emph{local} perturbations within syntactic groups (\textbf{NgramShift}) and \emph{distant} perturbations at the level of syntactic clauses (\textbf{ClauseShift}). In contrast, the \emph{global} perturbations (\textbf{RandomShift}) are best distinguished by the encoders. As the granularity of the syntactic corruption increases, we observe worse probing performance under all considered interpretation methods. Namely, the results are supported by the representation analysis metrics (see Section \ref{res:repr}), which indicate higher susceptibility to major changes in the sentence structure (\textbf{RandomShift}, \textbf{ClauseShift}), and by the PPPL-based measures (see Section \ref{res:pppl}), which assign higher acceptability scores to sentences with more granular perturbations (\textbf{NgramShift}). We also find that the sensitivity to the hierarchical corruption grows across layers together with the increase of the perturbation complexity, which is in line with \citet{alleman-etal-2021-syntactic}.
\paragraph{M-BERT and M-BART barely use positional information to induce syntactic trees} Previous research has shown that token embeddings capture enough semantic information to restore the syntactic structure \cite{vilares2020parsing, kim2020pre, rosa2019inducing}. \citet{maudslay2021syntactic} claim that the syntactic abilities of BERT-based LMs are overestimated and raise the problem of isolating semantics from syntax. However, more recent studies show that Transformer encoders encode redundant information \cite{luo2021positional} and may not sufficiently capture the meaning of positions, which may be unimportant for downstream tasks \cite{wang-chen-2020-position}, including the setting with perturbed fine-tuning data \cite{clouatre2021demystifying}. In the spirit of the latter studies, our results under different PE configurations reveal that M-BERT and M-BART do not need precise position information to restore the syntactic tree from their internal representations. The overall behavior is that zeroed (except for M-BERT) or even randomly initialized PEs can result in probing performance on par with that of absolute positions. We suppose that, although the absolute positions of words change during the N-gram permutation and sub-tree rotation procedures, the word order within the clauses remains almost the same as in the intact sentence (\textbf{NgramShift}, \textbf{ClauseShift}). That is, the more granular perturbations only marginally confuse the LMs when: (i) predicting the masked word under \textbf{Token Perturbed Masking}, which can be performed using \emph{only} attention \cite{wang-chen-2020-position}, or (ii) judging the acceptability of a sentence, where low token pseudo-log-probabilities can occur at the juxtaposition of syntactic groups and clauses \cite{alleman-etal-2021-syntactic}. We leave a more detailed exploration of the relationship between PEs and probing analysis for future work.
\section{Conclusion}
This paper extends ongoing research on controllable text perturbations to the multilingual setting and to the introspection of positional embeddings in pre-trained LMs. We introduce nine probing datasets for three Indo-European languages varying in word order flexibility: English, Swedish, and Russian. The suite is constructed using language-specific heuristics carefully designed with linguistic expertise, and is organized by three types of syntactic perturbations: randomization of word order, studied by previous research from many perspectives, and the less explored permutations within syntactic phrases and clauses. The method includes a combination of parameter-free probing based on intermediate self-attention and contextualized representations, novel metrics for representation analysis, and acceptability judgments with pseudo-perplexity. We conduct a line of experiments to probe the syntactic sensitivity of two multilingual Transformers, M-BERT and M-BART, the latter of which is trained to reconstruct the word order during pre-training. The LMs are less sensitive to more granular perturbations and build hierarchical complexity towards the output layer. The analysis of the understudied relationship between position embeddings and syntactic probe performance reveals that position information is not necessary for inducing the hierarchical structure, which is a promising direction for a more detailed investigation. The results also show that the syntactic sensitivity may depend on the language and can be enhanced by pre-training objectives. We believe there is still room for exploring the sensitivity to word order and the syntactic abilities of modern LMs, specifically across a more diverse set of languages and models varying in architecture design choices.
\section*{Acknowledgements}
Ekaterina Taktasheva and Ekaterina Artemova are partially supported by the framework of the HSE University Basic Research Program.
\subsection{Pick-and-place Manipulation for Novel Objects}
Robot manipulation systems for tight packing \cite{shome2019towards} or placement in constrained spaces \cite{haustein2019object} often assume the availability of complete 3D object models. For novel object instances, a majority of recent work focuses on task-agnostic picking \cite{mahler2019learning, zeng2018robotic}, while others resort to shape completion, performed either via category-level reasoning \cite{gao2019kpam, gualtierirobotic} or given physical consistency constraints \cite{agnew2020amodal}. Nevertheless, the output of shape completion may not be precise enough and can lead to collisions when the object is placed in a constrained space. Recent work \cite{mitash2020task} proposes to use a conservative shape representation for pre-pick planning to ensure safe manipulation, and to reconstruct the shape of the object in-hand if the task requires it. In certain scenarios, however, the conservative estimate might be too restrictive for the constrained placement task. To address this, the current work proposes to recognize previously seen objects and perform lifelong model reconstruction over many manipulation runs.
\vspace{-0.05in}
\subsection{Simultaneous Tracking and Object Reconstruction}
Object models are often generated by using a turntable \cite{calli2015yale}, via manual scanning \cite{wang2019hand}, or with a robotic arm \cite{krainin2010manipulator}, and then post-processed. These models are then used for single-shot pose estimation \cite{mitash2018robust, sui2017sum} or model-based object tracking \cite{choi2012robust}. Model-free manipulation research has used local scan matching \cite{icp} with an occupancy grid structure \cite{mitash2020task} to simultaneously track and reconstruct a conservative object volume. Another popular surface reconstruction technique, often used in SLAM, is the Truncated Signed Distance Function (TSDF) \cite{curless1996volumetric, newcombe2011kinectfusion}. It fuses multiple depth observations from a moving sensor and maintains in each voxel a signed distance to the closest zero-crossing (representing the surface). The current work leverages TSDF within a particle filter for simultaneously tracking and reconstructing a manipulated object.
\vspace{-0.05in}
\subsection{Object Identification and Pose Estimation}
Previous work \cite{sharif2014cnn, babenko2014neural} has shown that features trained on large-scale classification datasets allow for image matching. The current work leverages such pre-trained features to store object viewpoints and re-identify object instances. Pose estimation based on particle filters \cite{sui2017sum} has been used before for matching complete object models with object segments in the scene. The current work utilizes a similar technique but with partial object models constructed from past manipulations.
\section{Introduction}
General-purpose and flexible robot manipulators should be able to manipulate object instances they have never seen before. Once an object has been manipulated, however, the robot should be able to leverage its prior experience in future encounters of a similar object. Such abilities allow the deployment of robots that self-learn to precisely manipulate objects and improve their performance over time.
Recent work on manipulating novel objects either completes their shape based on category-level reasoning \cite{gao2019kpam, gualtierirobotic} or utilizes physical constraints \cite{agnew2020amodal}. Single-view 3D shape completion, however, is often not precise and safe enough for many manipulation tasks, such as pick-and-place in constrained spaces. To ensure safety during the manipulation of novel objects, recent prior work \cite{mitash2020task} considers a conservative estimate of an object's volume. The conservative estimate includes the observed surface of the object together with the volume attached to it that has not yet been observed, bounded by physical constraints, such as the presence of a support surface. To achieve this, the prior work uses a simple volumetric object representation similar to occupancy grids \cite{occ_grid}, and proposes action primitives, such as \textit{handoffs} and \textit{active perception}, to reduce the shape uncertainty of novel objects during manipulation. While this prior system can deal with constrained placement tasks for novel objects, its success rate and efficiency sometimes suffer. This is because every object is treated as novel, even if instances of the same object have been seen before. The current work aims to improve the efficiency of such manipulation pipelines by recognizing previously manipulated objects and reusing the models constructed online during prior manipulation operations. During the development of the proposed work, it was identified that occupancy-grid representations are not precise enough for reusing the reconstructed object model in future tasks.
To address these issues, this paper proposes a manipulation pipeline that performs object picking and constrained placement via life-long object model reconstruction and reuse. It is based on the hypothesis that some objects will reappear over multiple manipulation tasks. For instance, this can occur in logistics setups where singulated objects are dropped from a conveyor belt and then need to be picked and placed in a container to be shipped. To achieve more accurate reconstruction and model reuse, the Truncated Signed Distance Function (TSDF) is adopted to represent partial models. Similarly, a variant of a standard particle filter \cite{6696485, 10.1145/3240765.3243493, chen2019grip} is used for performing pose estimation and tracking of the partial TSDF models. This variant prunes pose hypotheses that violate viewpoint or physical constraints, and rejects models of falsely recognized objects from being reused. It achieves speed advantages by rendering objects in a region of interest instead of a full image. Furthermore, this work presents an effective way to construct a dataset on the fly that stores partially reconstructed object models for future tasks. This dataset also stores a set of color features for each object that are the output of a clustering algorithm given previous object viewpoints. The clustering is experimentally shown to result in efficient and accurate object recognition.
A sequence of real-world experiments with a manipulator and a set of objects shows that more successful and more efficient robot manipulation can be achieved over time by proper reconstruction and reuse of object models. Compared to the baseline \cite{mitash2020task}, the proposed robotic system not only achieves a higher success rate (by a 13\% margin), but also significantly improves efficiency. In particular, it reduces \textit{handoff} actions by 31\% and \textit{active perception} actions by 49\% over the same sequence of manipulation tasks.
\section{Related Work}
\input{related_work}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{figrues/pipeline.png}
\vspace{-.25in}
\caption{Proposed pipeline: For each object, recognition is first executed given an object dataset. If the object is recognized, pose estimation given its existing (perhaps partial) model is performed and the model is reused. If not, a new, partial model is initialized from current data. Either way, a perception and manipulation process is executed. If the model is not complete enough, additional sensing and reorientation actions may be needed to place the object in a constrained area. Visual tracking is executed in parallel to dynamically update the object model. The latest model is stored in the dataset.}
\label{fig:pipeline}
\vspace{-.2in}
\end{figure*}
\section{Problem Setup and Notation}
\textbf{Object Representation}
The $i^{th}$ rigid object to be manipulated is defined by the volume it occupies $O^i \subset \mathbb{R}^3$ in a local reference frame. Given a pose $P^i \in SE(3)$, the 3D region occupied by $O^i$ in the global frame is denoted as $O^i_{P^i}$. Note that the ground truth geometric model of this object is not available, i.e., $O^i$ is unknown. The estimated object representation $\hat{O}^i$ is composed of the object's surface $S^i$ and a conservative volume $U^i$, which has not yet been viewed but may contain part of the target object, i.e., $\hat{O}^i = S^i \cup U^i$.
\textbf{Object Recognition and Pose Estimation:} One singulated object $O^i$ appears for picking in each task $i$. The robot first determines whether $O^i$ has been manipulated before, given the initial RGB-D observation $I^i_{init}$. If $O^i$ is recognized as an instance of the same object as $O^j$, where $j < i$, then the reconstructed model $\hat{O}^j$ is reused to initialize $\hat{O}^i$, which is the registered output of the current object point cloud in $I^i_{init}$ and $\hat{O}^j$. The initial pose $P^i_{init}$ is set to $\hat{P}^i$, the estimated pose of the object model $\hat{O}^i$ given $I^i_{init}$. If $O^i$ is recognized as a novel object, then $\hat{O}^i$ is directly initialized from the object's point cloud in $I^i_{init}$.
\textbf{Constrained placement}
Given an object at a pose $P^i_{init}$, the goal of constrained placement is to move $O^i$ to a pose $P^i_{target}$, such that $O^i_{P^i_{target}} \subset R^i_{place}$, where $R^i_{place} \subset \mathbb{R}^3$ is a target placement region. To accomplish this, a sequence of manipulation actions, i.e., \textit{pick}, \textit{handoff}, and \textit{place}, is performed. Since the ground truth $O^i$ is unknown, perceptive actions that sense the object may also be needed.
\section{System Design and Implementation}
The proposed pipeline is shown in Fig. \ref{fig:pipeline}. At the beginning of each task, the robot first determines whether the target object has been seen before. If an object is considered novel, then its model will be initialized based on the current RGB-D observation. Otherwise, a previously reconstructed model, which may be partially complete, is registered to the current observation and reused in the current task. An integrated perception and manipulation planning process is performed thereafter to accomplish a constrained placement task. During manipulation, the object is tracked and its model is dynamically updated. The reconstructed model is also used by the manipulation planning process, which forms a feedback loop. A dataset, which stores object information, is updated after each manipulation task to benefit future tasks.
\smallskip
\subsection{Object Initialization} An object model is initialized when the target object is considered unknown. A truncated signed distance function (TSDF) \cite{tsdf} representation is then used as the object model. TSDF has been widely used for high-quality scene reconstruction \cite{kinectfusion, bundlefusion}. Each voxel in a TSDF volume stores the signed distance $d$ to its closest surface, where the sign of $d$ indicates whether the voxel is in free space $(d > 0)$ or in a conservative estimate of the object's volume $(d < 0)$. The surface point cloud $S^i$ of the object can be extracted at the zero crossings of the function, either by ray-casting or by a marching cubes algorithm \cite{lorensen1987marching}. An object's TSDF volume is initialized from a minimum oriented 3D bounding box that encloses the observed point cloud of the object and its occluded region; standard methods are used to approximately compute this bounding box \cite{malandain2002,barequet2001}. The point cloud segment of the object can be easily obtained since each task only contains one object. The voxel size of a TSDF volume is set to 1mm in the accompanying implementation, which is small enough to capture object details for both 3D registration and grasp pose detection.
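The per-voxel fusion follows the standard weighted running average of Curless and Levoy; a minimal sketch (variable names and the truncation value are ours) is:
\begin{verbatim}
# Standard TSDF voxel update (weighted running average); illustrative.
import numpy as np

def update_voxel(d_stored, w_stored, d_obs, trunc=0.005, w_obs=1.0):
    d_obs = np.clip(d_obs, -trunc, trunc)  # truncate near the surface
    d_new = (w_stored * d_stored + w_obs * d_obs) / (w_stored + w_obs)
    return d_new, w_stored + w_obs
\end{verbatim}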
\subsection{Dataset Construction} \label{dataset_construction}
A dataset is constructed from scratch to store information about manipulated objects. For each object instance, a reconstructed TSDF model and a set of RGB features are stored. Cropped RGB images captured during manipulation are fed to a neural network that extracts features representing the object. The implementation uses ResNet50 \cite{He_2016_CVPR} pretrained on ImageNet \cite{5206848} as the feature extractor, similar to related work \cite{8461044}. The choice of the specific feature extractor is not the main focus of this work and can be replaced with alternatives. Since an object may look very different from different viewpoints, a set of feature vectors computed via clustering is used to represent an object, instead of a single feature vector. In particular, the mini-batch KMeans algorithm, designed for efficient incremental clustering of new data \cite{sculley2010web, scikit-learn}, is adopted to cluster features from similar viewpoints; it is called after each manipulation task. Given an ablation study (Sec. \ref{obj_recognition}), a value of $K=64$ for the number of clusters per object provides good viewpoint diversity and near-instant fitting speed.
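A minimal sketch of this per-object bookkeeping is given below; class and method names are ours, and the exact update schedule follows the text (clustering re-run after each task on the stored features):
\begin{verbatim}
# Hedged sketch of the per-object feature store; illustrative only.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

class ObjectFeatureStore:
    def __init__(self, k=64):
        self.k = k
        self.features = []          # one ResNet50 embedding per view

    def add_task_views(self, feats):
        # fold in views from the latest task, then re-fit the clusters
        self.features.extend(feats)
        X = np.asarray(self.features)
        k = min(self.k, len(X))     # guard for few stored views
        self.kmeans = MiniBatchKMeans(n_clusters=k).fit(X)

    def centers(self):
        return self.kmeans.cluster_centers_
\end{verbatim}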
\begin{figure}[h]
\vspace{-.1in}
\centering
\includegraphics[width=0.48\textwidth]{figrues/recognition.png}
\vspace{-.25in}
\caption{Object Recognition and Pose Estimation. A segmented observation is first augmented by rotation and then fed to a neural network for feature extraction. The features are compared with the centers of the feature clusters of objects in the dataset. The object with the smallest cosine distance below a threshold ($\delta = 0.15$) is considered a matching candidate. Pose estimation with viewpoint constraints is performed to reject false positives.}
\label{fig:estimation}
\vspace{-.2in}
\end{figure}
\subsection{Object Recognition and Pose Estimation}
\label{sec:objrecposeest}
Given an initial observation $I^i_{init}$ for task $i$, the robot first attempts to recognize $O^i$ and estimate its pose $P^i_{init}$. Despite notable progress in object recognition and 6D pose estimation, these problems remain challenging in the considered setup, since: 1) the dataset is constructed from scratch and the data collected in one task can be insufficient to train a deep model; 2) retraining a deep model after each task is time consuming and conflicts with the objective of performing efficient manipulation. This work proposes a two-stage method that performs object recognition and pose estimation without retraining a feature extractor multiple times, as shown in Fig. \ref{fig:estimation}.
First, a cropped observation is augmented rotation-wise 8 times and fed to a feature extractor. The features of each rotated image are then compared, using cosine distance, against the $K$ cluster centers of the feature sets selected to represent each object. Among the nearest neighbors of all rotated images within a cosine distance $d < \delta = 0.15$, the most similar nearest neighbor is selected as a matching candidate $\bar{O}^i = \hat{O}^j$. If no nearest neighbor has a distance $d < \delta$, the object is considered to be novel. An ablation study of the threshold $\delta$ is provided in Section \ref{obj_recognition}.
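The matching step can be sketched as follows; helper names are ours, and \texttt{object\_centers} is assumed to map object IDs to their $K$ cluster centers:
\begin{verbatim}
# Illustrative recognition step; returns None for a novel object.
import numpy as np

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recognize(query_feats, object_centers, delta=0.15):
    best_id, best_d = None, delta
    for obj_id, centers in object_centers.items():
        for f in query_feats:          # features of the 8 rotated crops
            d = min(cosine_dist(f, c) for c in centers)
            if d < best_d:
                best_id, best_d = obj_id, d
    return best_id
\end{verbatim}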
\vspace{-.1in}
\begin{algorithm}[h!]
\caption{6DoF Pose Estimation}\label{alg:pf}
\begin{algorithmic}[1]
\Require Observed depth image $I$, object TSDF volume $V_{obj}$, extrinsic matrix $E$, intrinsic matrix $C$, number of particles $M$, number of iterations $K$, pixel depth inlier threshold $d_{thres}$, rejection ratios $\beta_1, \beta_2$.
\noindent
\State Generate scene TSDF volume $V_{scene}$ from $I$, $E$, and $C$.
\State Filter table and get object region of interest for rendering.
\State Compute 3D centroid $c$ of $I_{roi}$ projected in 3D space.
\State Initialize a set of $M$ particles at $t=0$, $\mathcal{X} = \{x^1_t, ..., x^M_t\}$ located at position $c$ with random orientation in $SO(3)$.
\For{$t = 1$ to $K$}
\State $\mathcal{\hat{X}}_t = \mathcal{X}_t = \emptyset$
\For{$m = 1$ to $M$}
\State Diffuse $x^m_t \sim p(x_t | u_t, x^m_{t-1})$ \Comment{$u_t$ is zero.} \label{ln:diffuse}
\State Render object depth $I_r$ in RoI given $V_{obj}$ and $x^m_t$
\State $w^m_t =$ count\_pix\_inliers($I_{roi}$, $I_{r}$, $d_{thres}=1cm$)
\State $\mathcal{\hat{X}}_t = \mathcal{\hat{X}}_t$ + $\langle x^m_t, w^m_t \rangle$
\EndFor
\For{$m = 1$ to $M$}
\State Draw $x^i_t$ with probability $\propto$ $w^i_t$
\State $\mathcal{X}_t = \mathcal{X}_t + x^i_t$
\EndFor
\EndFor
\State Sort $\mathcal{X}_K$ based on particle weights (descending order)
\State $x_{best} \gets$ None
\For{$m = 1$ to $M$}
\State Render object $I_r$ at $x^m_K$ and project to $V_{scene}$
\State $\beta_{free} \gets$ pts\_ratio\_in\_freespace($I_r, V_{scene}, C, E$)
\State $\beta_{collide} \gets$ pts\_ratio\_below\_table($I_r, V_{scene}, C, E$)
\If{$\beta_{free} < \beta_1$ and $\beta_{collide} < \beta_2$}
\State $x_{best} \gets x^m_K$
\State \textbf{break}
\EndIf
\EndFor
\State \textbf{Return} $x_{best}$
\end{algorithmic}
\end{algorithm}
\vspace{-.1in}
Then, a particle filter variant is used to estimate the pose of the object candidate $\bar{O}^i$, which can help reject potential false positives in recognition. The variant is detailed in Alg. \ref{alg:pf} and is adapted from existing Monte Carlo localization methods \cite{6696485, 10.1145/3240765.3243493, chen2019grip}, with the following differences: 1) It can work on partially reconstructed TSDF volumes rather than complete mesh models; 2) Rendering is only performed in a Region of Interest (RoI, referred to as $I_{roi}$ in the algorithm), which is the smallest 2D bounding box of the conservative volume estimate augmented by a $30\%$ margin. This makes the algorithm more efficient ($\sim30ms/iter$ vs $\sim60ms/iter$ \cite{chen2019grip}) with the same number of particles ($N=625$); 3) Two rejection criteria are introduced to prune bad pose hypotheses that violate viewpoint or physical constraints, i.e., a registered model should not lie in free space or collide with the supporting plane. If the ratios of pixels violating these criteria over the total number of pixels in the RoI are less than $\beta_1$ and $\beta_2$ respectively, where $\beta_1$ and $\beta_2$ are predefined thresholds ($\beta_1 = \beta_2 = 0.95$ in the accompanying implementation), then the registration is considered good. Otherwise, $O^i$ is considered novel and will be initialized from scratch. These additional pose rejection criteria minimize the chance that a falsely recognized object is registered and fails a manipulation task. The algorithm is implemented using PyCUDA \cite{kloeckner_pycuda_2012}, and the authors have open-sourced it on GitHub as a 6DoF pose annotation tool.
\subsection{Simultaneous Reconstruction and Tracking} \label{tracking}
The object model $\hat{O}^i$ is tracked and reconstructed over time. The same particle filter variant as in Alg. \ref{alg:pf} is reused with the following changes: 1) The object transition model (line \ref{ln:diffuse}) between two time steps is set to $u_t = \Delta E^i_{t-1:t}$, instead of 0, where $E^i_t$ is the end-effector's pose computed via forward kinematics; 2) One iteration is performed for each new observation; 3) The RoI is computed by first rendering the object at $\hat{x} = u_t \cdot \hat{x}_{t-1}$, where $\hat{x}_{t-1}$ is the most likely estimate, and then finding its minimum 2D bounding box augmented by a 30\% size increase; 4) The rejection criteria are not used for tracking. New observations are then integrated into the object's TSDF volume after the arm and the end-effector are filtered out. Since the ground-truth, frame-to-frame tracking pose of a model under reconstruction is not available, the tracking quality is implicitly evaluated by the final object reconstruction (Sec. \ref{obj_reconsturction}).
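Under the assumption that the grasped object moves rigidly with the end-effector, the transition input can be computed from consecutive forward-kinematics poses, e.g.:
\begin{verbatim}
# Hedged sketch: assumes the object is rigidly attached to the
# end-effector; E_prev, E_curr are 4x4 homogeneous pose matrices.
import numpy as np

def transition_input(E_prev, E_curr):
    return E_curr @ np.linalg.inv(E_prev)   # u_t = Delta E_{t-1:t}

def predict_pose(x_prev, E_prev, E_curr):
    return transition_input(E_prev, E_curr) @ x_prev
\end{verbatim}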
\begin{figure}[h]
\vspace{-.15in}
\centering
\includegraphics[width=0.48\textwidth]{figrues/pf.png}
\vspace{-.3in}
\caption{Illustration of simultaneous reconstruction and tracking. Given a new observation at $T=t$ (a), a transition model with a diffusion process is applied to the particle estimates sampled from the previous time step, and the particles are rendered in the region of interest (b). The particle with the largest weight is selected as the pose prediction at $T=t$ (c). The current observation is integrated into the object representation and the particles are then resampled (d) to become available for the next time step (e).}
\label{fig:tracking}
\vspace{-.15in}
\end{figure}
\subsection{Perception and Manipulation Planning}
Given a specified constrained area $R^i_{place}$, the robot first checks whether the target object can be directly grasped by its end-effectors given the object's conservative volume. The dual-arm robot prioritizes the two-finger gripper, as the orientation of the suction gripper is limited. Grasp poses are computed over the surface representation $\hat{S}^i$ to ensure stable geometric interaction \cite{ten2017grasp}. For the suction gripper, suction points are sampled on the object surface $\hat{S}^i$ where the normalized normal $N$ is close to the global $z$ axis, i.e., $\abs{N_z} > 0.8$. Suction points are further ranked in quality according to their distances from the center of the surface.
Given the constrained area $R^i_{place}$ and the object representation $\hat{O}^i$, two bounding boxes are computed: 1) the maximum bounding box inside the constrained area, i.e., $B^i_{place} \subset R^i_{place}$, and 2) a minimal bounding box $B_{O^i}$ that encloses $\hat{O}^i$. A discrete set of 24 configurations for the object's bounding box is computed by placing $B_{O^i}$ at the center of $B^i_{place}$ and validating all axis-aligned rotations. If no placement can be found, an active perception action is taken to move the object in front of the camera and rotate it along the z-axis by $180$ degrees to reduce model uncertainty, after which the object bounding box and placement are recomputed. If there exists a placement but the target pose is beyond the reachability limits of the suction gripper, a \textit{handoff} action transfers the object from the suction to the parallel gripper.
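Assuming both boxes are axis-aligned in this check, the 24 axis-aligned rotations reduce to the 6 permutations of the box extents; a minimal sketch of the feasibility test (helper name is ours) is:
\begin{verbatim}
# Illustrative fit test; for axis-aligned boxes, the 24 rotations
# collapse to the 6 permutations of the side lengths.
from itertools import permutations

def fits(obj_extents, place_extents):
    return any(all(o <= p for o, p in zip(perm, place_extents))
               for perm in permutations(obj_extents))
\end{verbatim}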
As part of the planning framework, a probabilistic roadmap ({\tt PRM}$^*$) \cite{kavraki1996probabilistic, Karaman:2011aa} is pre-computed for each of the arms, which takes into account collisions with static obstacles, such as the table. To generate informative paths for the considered setup, the configurations along the roadmap are sampled so that the end-effector is within the camera's view. This allows the object to be tracked during arm movement. Based on the precomputed {\tt PRM}$^*$ roadmap, a lazy version of the {\tt A}$^*$ algorithm is used online for computing a shortest path on the roadmap, where lazy collision checks with the object are performed after an initial solution path is found. Once a collision has been detected, the roadmap is modified and a new {\tt A}$^*$ query is triggered. The loop continues until a valid path is confirmed.
\section{Experiments}
Experiments are designed to showcase the effectiveness and efficiency of the proposed robotic manipulation system. It is compared with a baseline system \cite{mitash2020task}, which was designed to perform similar manipulation of unknown objects but considers all objects as novel, without learning object information across tasks or reusing reconstructed object models in future tasks. In addition, evaluations and ablation studies are performed to show the efficacy of the system's submodules. In particular, the proposed system is evaluated from the following perspectives: 1) success rate, 2) system efficiency, 3) reduced shape uncertainty after registration, 4) object recognition, and 5) object reconstruction.
\subsection{Hardware and Experimental Setup}
The hardware setup is shown in Fig. \ref{fig:hardware}. It comprises a dual-arm manipulator (Yaskawa Motoman) with two 7-DoF arms. The left arm is fitted with a suction gripper, while the right arm is fitted with a Robotiq 2-fingered gripper. A single RGB-D sensor (RealSense L515) is mounted on the robot torso overlooking the workspace. The camera is configured to capture 480p RGB-D images at a frequency of 30 Hz.
Randomly ordered constrained placement tasks are performed with 4 objects (shown in Fig. \ref{fig:reconstruction_qualitative}). Each task requires the robot to pick up an object from the table and place it in a constrained area. The target object is randomly positioned on the 2D plane within the reach of both grippers. The objects are placed on different sides and rotated along the z-axis by a predefined angle for each experiment. Since the \textit{bleach} object cannot be placed stably on its side but can only stand or lie flat on the table, only 10 experiments were performed for it. For \textit{cheezit}, \textit{sugar}, and \textit{mustard}, 15 experiments were performed each. In total, 110 real-world pick-and-place experiments were executed across the two systems for comparison.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth, height=5cm]{figrues/hardware_setup.png}
\caption{The hardware setup involving a dual-arm manipulator with a suction and a parallel gripper, manipulating objects in the presence of a static RGB-D camera so as to achieve placement in a target region.}
\label{fig:hardware}
\vspace{-.2in}
\end{figure}
\subsection{Results}
\subsubsection{Success Rate}
There are certain object configurations that correspond to a large initial conservative estimate of an object, such that no safe grasps or top-down suction points can be found, e.g., a standing bleach cleanser. In these cases, the task will fail if uncertainty is not reduced. In a sequence of 55 constrained placement experiments, eight tasks failed for the baseline system, while only one task failed for the proposed system, due to a standing \textit{mustard} not being recognized. Table \ref{table:1} provides the statistics.
{\renewcommand{\arraystretch}{1.2}%
\vspace{-.1in}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& {\bf Success Rate} & \# Handoffs & \# Active Perception \\
\hline
Baseline \cite{mitash2020task} & 47/55(85\%) & 37/55(67\%) & 35/55(63\%) \\
\hline
Proposed & 54/55(98\%) & 20/55(36\%) & 8/55(14\%) \\
\hline
\end{tabular}
\caption{Task success rate (higher is better)\\ and ratio of primitive actions used (lower is better)}
\label{table:1}
\vspace{-.3in}
\end{table}
}
\subsubsection{System Efficiency}
This is evaluated by counting the number of times two action primitives are used, i.e., \textit{active perception} and \textit{handoff}. \textit{Active perception} means that the robot moves the object in front of the camera and rotates it along $Z$ to reduce shape uncertainty. \textit{Handoff} means that the robot transfers the object between grippers to achieve a grasp that allows placement. Both actions can potentially be avoided given a better model. The fewer times these actions are taken, the more efficient the manipulation process is.
\subsubsection{Reduced Shape Uncertainty}
By registering a previously reconstructed model, the initial conservative estimate of an object is greatly reduced. This allows the robot to detect more potential grasps and increases the collision-free space for motion planning. As a reminder, the conservative estimate of an object's volume is defined as the ground truth volume of that object together with the volume attached to it that has not yet been observed. The shape uncertainty is then defined as the ratio of the conservative estimate of an object's volume after a model has been registered over the conservative estimate of that object given only the current observation. The result for each experiment is shown in Fig. \ref{fig:reduced_uncertainty}. The smaller this ratio is, the more uncertainty is reduced by registering against a previously constructed model. Across all experiments, reusing a partially reconstructed model reduces this ratio by $32\%$ on average, and by up to $75\%$ in some cases.
\begin{figure}[h]
\vspace{-.1in}
\centering
\includegraphics[width=0.45\textwidth]{figrues/reduced_uncertainty.png} \vspace{-.15in}
\caption{Remaining uncertainty of each experiment.}
\label{fig:reduced_uncertainty}
\vspace{-.15in}
\end{figure}
\subsubsection{Object Recognition} \label{obj_recognition}
The target object is correctly recognized in $49/55$ of the real experiments. In $5/55$ of the experiments, the object is erroneously not recognized as a previously manipulated object (false negative). Only one object was initially falsely recognized as a previously manipulated object (false positive), but it was then rejected in the pose estimation stage. A small cosine distance threshold $\delta = 0.15$ is used for feature matching to minimize false positives (as in Sec. \ref{sec:objrecposeest}), and a value $K = 64$ is used for the mini-batch K-Means clustering approach (as in Sec. \ref{dataset_construction}), which represents the features of each manipulated object from different viewpoints. False negatives typically occur when the currently observed object part has not been seen in previous experiments. While false positives may cause task failures, false negatives only decrease efficiency, as the object will be considered novel. To make the results more statistically meaningful, the data collected from this sequence of experiments were shuffled and an ablation study was performed over the values of the $\delta$ and $K$ parameters. Table \ref{table:2} shows how these two values affect precision and recall. The results are averaged over 30 randomly shuffled sequences.
{\renewcommand{\arraystretch}{1.2}%
\vspace{-.1in}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& K = 1 & K = 16 & K = 64 & K = 128 \\
\hline
$\delta$=0.14 & 0.995/0.131 & 0.975/0.755 & 0.981/0.831 & 0.988/0.844 \\
\hline
$\delta$=0.15 & 0.996/0.211 & 0.955/0.842 & 0.958/0.886 & 0.964/0.891 \\
\hline
$\delta$=0.16 & 0.978/0.315 & 0.920/0.892 & 0.932/0.927 & 0.940/0.923 \\
\hline
$\delta$=0.17 & 0.931/0.484 & 0.848/0.936 & 0.862/0.961 & 0.844/0.964 \\
\hline
\end{tabular}
\caption{Ablation study of $\delta$ and $K$. Each cell shows the precision/recall of the recognition.}
\label{table:2}
\vspace{-.3in}
\end{table}
}
Table \ref{table:2} shows that a smaller $\delta$ tends to increase precision but decrease recall. A good balance is achieved when precision is high $(>0.95)$, while recall is kept at a level that preserves manipulation efficiency. Simply using the mean feature (K = 1) to represent an object is not ideal, as the recall is very low. Increasing K is often beneficial, but the benefit diminishes when K becomes very large (e.g., K=128), while training time keeps increasing.
\subsubsection{Object Reconstruction} \label{obj_reconsturction}
Results of shape reconstruction are shown in Fig. \ref{fig:reconstruction_qualitative}. The first row shows the ground truth mesh models of objects, and the second row shows reconstructed models after a sequence of manipulation tasks. For objects that are stored multiple times in the dataset due to failures in recognition, this figure only shows the most complete one. Fig. \ref{fig:reconstruction_qualitative} also presents the quantitative evaluation by comparing the distance between the aligned ground truth model $P_{gt}$ and the reconstructed model $P_{rec}$ using Chamfer distance, i.e.
\vspace{-0.05in}
$$
D(P_{gt}, P_{rec}) = \frac{1}{|P_{gt}|}\sum_{p_i \in P_{gt}}d(p_i, p_r),
\vspace{-0.05in}
$$
where $p_r$ is the closest point in $P_{rec}$ to $p_i$. It can be seen that the reconstructed model is close to the ground truth (Chamfer distance $D < 5mm$ for all four objects) with a few noisy points inside. Such noise is caused by tracking errors when an object is highly occluded, but it does not affect tracking or pose estimation since it will not be rendered by ray casting, and can be further removed by post-processing.
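A minimal implementation of this (one-sided) Chamfer distance, assuming the point clouds are given as $(N,3)$ NumPy arrays:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p_gt, p_rec):
    """For each ground-truth point, the distance to its nearest
    neighbour in the reconstruction, averaged over P_gt."""
    dists, _ = cKDTree(p_rec).query(p_gt)
    return float(np.mean(dists))
\end{verbatim}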
\begin{figure}[h]
\vspace{-.1in}
\centering
\includegraphics[width=0.48\textwidth]{figrues/reconstruction.png}
\vspace{-.25in}
\caption{Qualitative results of object reconstruction after 55 experiments. Ground truth models are shown in the first row and reconstructed models are shown in the second row. $D$ is the Chamfer distance between the ground truth model and the reconstructed model.}
\vspace{-.1in}
\label{fig:reconstruction_qualitative}
\end{figure}
\section{CONCLUSION}
This work proposes a robotic system which utilizes object model reconstruction and reuse to achieve lifelong robot manipulation. By using TSDF representations of objects and a particle filter approach for simultaneous reconstruction and tracking, object models are incrementally reconstructed over a sequence of manipulation tasks. An efficient object dataset construction method is proposed to store the color and geometry information of manipulated objects, which makes models reusable and assists future manipulation tasks. Real-world experiments demonstrate the efficiency of the proposed pipeline.
While this pipeline has been designed to work for most novel rigid objects, it faces challenges with certain objects, such as bowls, which have thin surfaces. This is mainly because the TSDF representation is not suitable for reconstructing thin structures, and may be improved by considering alternatives. Future work will focus on manipulation tasks in cluttered scenes, improving tracking and reconstruction for objects with thin surfaces, and task planning that maximizes information gain while placing novel objects.
\bibliographystyle{IEEEtran}
\section*{Introduction}
Recently, Graph Neural Networks (GNNs) have become one of the most active research fields in Artificial Intelligence \cite{ZHOU202057}. GNNs are a class of Deep Learning methods introduced to analyze data which display a graph structure. Graphs represent the topology of a great variety of data structures in which objects (nodes) are connected with each other by some kind of relation (edges).
Due to the very general nature of graphs, applications of GNNs are found in very different contexts, such as computer vision\cite{8578815, 6128754,DBLP:journals/corr/abs-1711-11575,9010993}, natural language processing\cite{Yao_Mao_Luo_2019, huang-etal-2019-text,bastings-etal-2017-graph, marcheggiani-etal-2018-exploiting}, social sciences \cite{9139346,10.3389/fdata.2019.00002}, and natural sciences including biology \cite{NIPS2017_f5077839}, particle physics \cite{Shlomi_2021} and astrophysics \cite{inproceedings}.
The topology of a graph can also reflect that of atomistic crystal structures: indeed, a graph can be generated by connecting each atom (nodes) with its neighbors (edges), within a specified cutoff radius \cite{doi:10.1063/1.5126336}. The Message Passing Neural Networks framework (MPNN) \cite{pmlr-v70-gilmer17a} has been introduced as a common GNN paradigm for atomistic structures in quantum chemistry applications.
Within an atomistic GNN, the atoms and their connections are associated with numerical lists of ``features'', also named {\it embedding} vectors.
Features are updated by the Message Passing framework, which is a two-step process. In the first step, each atom receives a message that is an aggregate of its neighbours' embeddings. In the second step, an updated embedding of the atom is evaluated by means of a function that depends on the message and on the current atomic embedding.
By iterating this scheme $n$ times, each atom will receive messages from atoms that are distant up to $n$ connections, thus accounting for long-range interactions.
A GNN model for atomistic graphs is therefore determined: (i) by the nature and the size of the embeddings, which convey the informative content of the specific atomistic system and (ii) by the operations it executes on them, i.e. the procedure used to aggregate and update embeddings.
Once the aforementioned characteristics are defined, the model can be trained to predict the system potential energy surface (PES).
A number of GNN models have been proposed in the recent years to model atomistic systems. Most of them were first introduced in molecular research and further applied to crystals.
Deep Tensor Neural Networks (DTNN) \cite{Sch_tt_2017} and PhysNet \cite{Unke_2019} aggregate the atomic embeddings by means of {\it filters} that ensure that the resulting message changes smoothly with respect to small changes of the interatomic distances. The main difference between DTNN and PhysNet lies in how distances are represented and how messages are aggregated.
Crystal Graph Convolutional Neural Networks (CGCNN) \cite{Xie_2018} were explicitly developed to deal with materials displaying a crystal structure, such as metals.
Unlike DTNN and PhysNet, CGCNN considers both atomic (node) and edge embeddings; however, distances are not regularized with continuous functions: the range of the distances is partitioned in ten equally spaced segments, and interatomic distances are encoded within a single vector in which all components are zero but the one associated with the matching segment. Thus, this model lacks the ability to smoothly change the embeddings with respect to small displacements of the atomic positions.
SchNet \cite{10.5555/3294771.3294866} is based on DTNN and introduces continuous filter convolutions: distances are used as input of a neural layer that generates a continuous mapping to an embedding space. In an updated version \cite{doi:10.1063/1.5019779}, periodic boundary conditions (PBCs) have been introduced, and the model has been applied to the prediction of formation energies of bulk crystals. Also inspired by {SchNet} and sharing its overall architecture, the ``Neural message passing with edge updates'' \cite{jorgensen2018neural} uses both node and edge embeddings in the form of a concatenation of the two connected atoms embeddings.
This makes edge embeddings directional as they depend on the order of the concatenated elements.
MatErials Graph Network (MEGNet) \cite{Chen_2019} leverages a similar scheme, by incorporating both directional edge and node updates, while also introducing a global state vector which stores the molecule/crystal-level or state attributes, e.g. the temperature of the system. Updates of atoms, bonds and global state vector are performed in a sequence.
All these approaches employ filters that rely only on the distance between pairs of atoms to aggregate and update the atomic embeddings.
It is well-established that classical, empirical interatomic potentials \cite{PhysRevB.29.6443} that rely on pairwise interactions often fail to reproduce structural changes \cite{Lee2012AtomisticMO} and some crucial properties of dislocations in metals \cite{2018npjCM...4...69M}. In the case of phase transitions, the addition of directionality, i.e. angular dependence of the interatomic potential, as well as second nearest-neighbor interactions, has led to the improved qualitative reproduction of quantum-mechanical PES \cite{Lee2012AtomisticMO}.
Within the context of GNNs, there is a remarkable shortage of approaches that rely also on the angle between edges connecting atomic pairs. Embeddings of edges connecting triplets of atoms convey the angular information, and once they are updated via the message passing scheme, they can be used to update the atomic embeddings.
With this aim, {DimeNet}\cite{Klicpera2020Directional} also leverages the Directional Message (hence the name) by considering the direction of the pairwise connections and by introducing the angle between two edges connected within atomic triplets.
{DimeNet} employs a continuous filter convolution by expanding both distances and angles in a Bessel-Fourier basis. However, to date, {DimeNet} has been applied only to isolated molecules and has not been investigated for modeling crystals such as metals.
Although GNNs have been scarcely explored in the context of interatomic potentials for metals, they introduce a number of advantages with respect to other ML methods \cite{doi:10.1063/1.5126336}. First, interactions among neighbouring atoms are straightforwardly modeled as pair-wise connections. Previous ML approaches need to introduce specific geometrical descriptors of the environment around atoms (within a cut-off radius), such as atom-centered symmetry functions in Neural Network Potentials \cite{PhysRevLett.98.146401}, or bispectrum components and, later, the smooth overlap of atomic positions (SOAP) \cite{PhysRevB.87.184115} in Gaussian Approximation Potentials (GAP) \cite{PhysRevLett.104.136403}. Second, iterating the process makes the model able to consider the contributions of distant atoms, so as to mimic the influence of long-range interactions beyond the cut-off distance that limits pairwise interactions. This can be easily achieved by stacking message-passing layers in the network. Previous ML approaches either lack these long-range contributions or account for them by adding extra long-range terms to the total energy, e.g. for electrostatic interactions \cite{https://doi.org/10.1002/qua.24890}.
Third, the GNN approach guarantees scalability of the system, as the pair-wise nature of the connections means that complex clusters of atoms can be modeled by simply increasing the number of iterations, at a limited computational cost.
Finally, since the approach is only dependent on the relative positions of the atoms which determine the connections inside the cut-off radius, it is also invariant with respect to isometric transformations, i.e. reflections, translations, rotations, and combinations of those, and to permutation of atoms.
Here, we use GNNs to explore their ability to reproduce with quantum-accuracy the potential energy surface (PES) of metals, by taking as a reference the challenging and technologically crucial example of ferromagnetic body-centered-cubic (BCC) iron. We consider {SchNet} as a prototypical GNN framework that is based on the distance of atomic pairs, and we consider {DimeNet} to assess the performance of a GNN scheme that also includes angular (three-body) interactions. To this purpose, we have implemented periodic boundary conditions (PBCs) and made the new {DimeNet} implementation that includes PBCs available at \url{https://github.com/AilabUdineGit/GNN_atomistics/}.
In order to machine-learn the GNN interatomic potential, we use an existing database \cite{dragoni_db} that was previously used to develop a Gaussian Approximation Potential (GAP) \cite{PhysRevMaterials.2.013808}.
The remainder of this paper is organized as follows.
Section Results is divided into two main subsections:
``Implementation and training'' reports a summary of computational details, together with some performance metrics;
``Testing the GNN interatomic potentials for bcc Fe'' shows a comparative analysis of the networks based on their predictions of the properties of iron.
A general summary of the methodology and its achievements, together with suggestions for future improvements, is provided in the ``Discussion and Conclusions'' section.
Finally, section Methods details the approach and the implementation of the networks, and is organized in three subsections:
``Graph Neural Networks and Message Passing'' contains a formal description of the Message Passing paradigm applied to GNNs for atomistic systems;
``Network models'' provides details of both the networks {SchNet} and {DimeNet};
``Dataset'' reports a summary of the data used.
\section*{Results}
\newcommand{\textbf{x}}{\textbf{x}}
\newcommand{\textbf{e}}{\textbf{e}}
\newcommand{\textbf{m}}{\textbf{m}}
\newcommand{\textbf{t}}{\textbf{t}}
\newcommand{\textbf{r}}{\textbf{r}}
\subsection*{Implementation and training}
To model bulk crystal structures, the simulated atomic cluster must be embedded in an effectively infinite medium. This is achieved by using periodic boundary conditions (PBCs), which are already implemented in {SchNet}. Here, PBCs have been implemented also for {DimeNet}.
The training strategy is the same for both SchNet\ and DimeNet.
All data used for the training are from a large, existing, highly-converged DFT database \cite{dragoni_db} of bcc ferromagnetic iron that includes both pristine configurations and configurations with defects such as free surfaces, vacancies and interstitials (see Database section for details).
A GAP potential that accurately reproduces DFT vibrational and thermodynamic properties\cite{PhysRevMaterials.2.013808} is also trained, and employed as a baseline in the comparison of the GNN models.
The training dataset is built as a subset of 80\% of the database;
samples are randomly shuffled to avoid bias. The remaining 20\% of the samples is used to test the trained model; samples are not shuffled in this case. To regularize the distribution of the data and improve training efficiency, the per-atom energies of the whole dataset have been standardized by subtracting the mean value and dividing by the standard deviation.
Data samples are then batched with batch size $N=6$.
A random seed is set to enable reproducibility of the process.
The objective function, or loss, to minimize is the mean absolute error (MAE) between the predicted energy $\hat{E}_i$ and its target value $E_i$, averaged over the batch:
\begin{equation}
\mathcal{L}_{MAE} = \frac{1}{N} \sum_{i=1}^N |\hat{E}_i - E_i| \ .
\label{eq:mae}
\end{equation}
For each batch, the gradient of the loss is evaluated with respect to all the trainable parameters (weights and biases) of the network. Then, the optimization algorithm minimizes the loss by adapting the parameter values. At the end of each epoch (when all the batches are evaluated) the training convergence is assessed by evaluating the MAE over all the test data.
In our setting an Adam \cite{article, loshchilov2018decoupled} optimizer was adopted. The initial learning rates, $\alpha = 10^{-4}$ for {DimeNet} and $\alpha = 10^{-3}$ for {SchNet}, have been fixed by performing preliminary tests.
A scheduler was used to reduce the learning rate if the loss did not decrease significantly; more precisely, for {DimeNet} ({SchNet}, respectively) the learning rate is reduced by a factor of $1/10$ (respectively, $1/2$) each time the test loss has not improved by at least $1\%$ (respectively, $5\%$) over the last 10 (respectively, 3) epochs.
The stricter schedule adopted for {SchNet} is due to its higher computational cost and to the observed difficulty of its loss to converge to the minimum.
The training is stopped when $100$ training epochs have been performed.
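A minimal PyTorch sketch of this training procedure is given below; it assumes a \texttt{model} mapping batched structures to predicted energies and loaders yielding (batch, energy) pairs, and uses \texttt{ReduceLROnPlateau} to approximate the plateau-triggered schedule described above (here with the {DimeNet} settings: factor $1/10$, threshold $1\%$, patience 10 epochs).
\begin{verbatim}
import torch

def train(model, train_loader, test_loader, lr=1e-4, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, factor=0.1, threshold=0.01, patience=10)
    mae = torch.nn.L1Loss()  # mean absolute error (MAE) loss, eq:mae
    for epoch in range(epochs):
        model.train()
        for batch, energy in train_loader:
            opt.zero_grad()
            loss = mae(model(batch), energy)
            loss.backward()
            opt.step()
        # assess convergence on the test set at the end of each epoch
        model.eval()
        with torch.no_grad():
            test_mae = sum(mae(model(b), e).item()
                           for b, e in test_loader) / len(test_loader)
        sched.step(test_mae)  # reduce lr when the test MAE plateaus
\end{verbatim}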
Using a Tesla P100 GPU with 16GB RAM, the training time amounts to $\sim{11}$ min/epoch for {DimeNet} and $\sim{22}$ min/epoch for {SchNet}, which means a total training time of $\sim{18}$ and $\sim{37}$ hours, respectively.
For a rough comparison, we also trained GAP on the same dataset, by using Intel Xeon E7 4860v2 CPU with $\sim{317}$GB RAM, and the training lasted $\sim{60}$ hours.
Final values of the test MAE are of the order of tens of meV.
Inference latencies have been evaluated for lattices of 54 and 128 atoms and are of the order of tens of milliseconds, with the exception of 104 milliseconds for {SchNet} on the smaller lattice: being a lighter model, {SchNet} exploits the GPU less than {DimeNet}, using only $\sim{1}\%$ of its resources during inference on 54 atoms, whereas {DimeNet} uses $\sim{25}\%$.
With the more demanding 128-atom lattice, the latencies are closer, as both models make better use of the GPU resources.
Metrics about training time, test MAE and inference latency are summarized in Table \ref{tab:efficiency}.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Metric & Unit & SchNet & DimeNet\\% & GAP\\
\hline
Training time & min./epoch & $\sim{22}$ & $\sim{11}$\\% & ...\\
Test MAE & meV & 54.8 & 23.3\\% & ...\\
Inference latency (54 atoms cube) & sec. & 0.104 & 0.040\\% & ...\\
Inference latency (128 atoms cube) & sec. & 0.041 & 0.053\\% & ...\\
\hline
\end{tabular}
\caption{Training time, test MAE and inference latency for {SchNet} and {DimeNet}.
}
\label{tab:efficiency}
\end{table}
In the original papers\cite{doi:10.1063/1.5019779, Klicpera2020Directional} atomic embeddings have size of $F=64$ for {SchNet} and $F=128$ for {DimeNet}.
We tested both values on both models: while {DimeNet} improves only slightly from 64 to 128 (test MAE from 24.85 to 23.3 meV), {SchNet} improves markedly (test MAE from 76.0 to 54.8 meV).
Consequently, an embedding size of 128 was set for both models. We consider this aspect interesting and worthy of future investigation.
The cutoff value is determined as a trade-off between two competing requirements: on one hand, the larger the cutoff, the larger the number of connected atoms within an interaction block; on the other hand, the larger this number, the higher the computational cost during training. For this reason, and considering that {DimeNet} is a much more complex network in which also triplets of atoms are considered, the cutoff radius differs between the two models: $r_{cut}=5.0$ {\AA} for {SchNet} and $3.5$ {\AA} for {DimeNet}. Using a larger cutoff (up to $4$ {\AA}) for {DimeNet} did not increase the accuracy but did increase the computational time.
The presence of seven interaction blocks in {DimeNet}, with respect to three in {SchNet}, compensates for the shorter $r_{cut}$, allowing the network to receive messages from distant atoms and to adequately model long-range interactions.
\subsection*{Testing the GNN interatomic potentials for bcc Fe}
The {SchNet} and {DimeNet} Fe potentials are benchmarked against either published DFT data\cite{PhysRevMaterials.2.013808} or data computed with Quantum Espresso based on settings (k-mesh and energy convergence) consistent with the training database \cite{PhysRevMaterials.2.013808}.
The equation of state is computed with GNNs by varying the lattice constant $a_0 = 2.834$ {\AA} of the primitive unit cell within a range of $\pm 5\%$ volumetric change around the equilibrium volume computed with DFT.
As shown in Fig. \ref{fig:eos}, both GNNs reproduce the DFT data with high accuracy.
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{figures/results/EOS_GNN.pdf}
\caption{\textbf{a} Equation of state of SchNet and DimeNet
compared with DFT data and the EAM\cite{EAM_PhysRevB.79.174101} and MEAM\cite{MEAM_PhysRevB.89.094102} empirical potentials. \textbf{b} Equation of state of the DimeNet
compared with DFT data and the state-of-the-art GAP iron potential \cite{PhysRevMaterials.2.013808}.}
\label{fig:eos}
\end{figure}
To compare the performance of GNNs with empirical potentials, we compute the equation of state, equilibrium volume and bulk modulus with two broadly used empirical potentials: EAM \cite{EAM_PhysRevB.79.174101}, which is based on pairwise interactions; and MEAM \cite{MEAM_PhysRevB.89.094102}, which includes higher-order interactions (e.g. angular-dependent terms). Both GNN potentials reproduce the DFT results with high accuracy, while both the equilibrium volume and the curvature of the empirical potentials are far from the DFT results (see Fig. \ref{fig:eos}a). One reason for the discrepancy is that the empirical potentials are fitted to the experimental data of the equilibrium volume $V_0 = 11.7$ \AA$^3$, which is obtained by extrapolation to T=0K \cite{MEAM_PhysRevB.89.094102}. However, despite being fitted to such value, both EAM and MEAM visibly underpredict the experimental equilibrium volume. In contrast, both GNNs closely reproduce the dataset they have been trained on and, as shown in Fig. \ref{fig:eos}b, the level of accuracy is comparable with the state-of-the-art GAP interatomic potential for BCC iron \cite{PhysRevMaterials.2.013808}.
The equilibrium volume and bulk modulus of iron are computed by fitting the Birch-Murnaghan equation of state to the energy-volume curve. The result of the fitting for the GNNs and DFT data is reported in Table \ref{tab:bulk}.
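For reference, a sketch of such a fit with SciPy follows (third-order Birch-Murnaghan; energies in eV/atom and volumes in \AA$^3$/atom are assumed, with $160.2177$ converting eV/\AA$^3$ to GPa):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p
        + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

def fit_eos(volumes, energies):
    """Fit the E-V scan; returns V0 and B0 (converted to GPa)."""
    p0 = (energies.min(), volumes[np.argmin(energies)], 1.0, 4.0)
    (E0, V0, B0, B0p), _ = curve_fit(
        birch_murnaghan, volumes, energies, p0=p0)
    return V0, B0 * 160.2177
\end{verbatim}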
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Property & Unit & DFT & SchNet & $\varepsilon_{SN}$ & DimeNet & $\varepsilon_{DN}$ & GAP \cite{PhysRevMaterials.2.013808} & $\varepsilon_{GAP}$\\
\hline
$\text{a}_0$ & \AA & $2.834$ & $2.834$ & 0.0\% & $2.834$ & 0.0\% & $2.834$ & 0.0\% \\
$\text{B}_0$ & GPa & $199.8 \pm 0.1$ & $199.0$ & -0.4\% & $199.4$ & -0.2\% & 198.2 & -0.8\% \\
\hline
\end{tabular}
\caption{T=0K lattice parameter $a_0$ and bulk modulus $B_0$ for $\alpha$-iron.
GNN results are compared to DFT data. The relative errors of SchNet\ ($\varepsilon_{SN}$), DimeNet\ ($\varepsilon_{DN}$) and GAP ($\varepsilon_{GAP}$) with respect to DFT are also shown.}
\label{tab:bulk}
\end{table}
As indicated by the relative errors $\varepsilon_{SN}$ and $\varepsilon_{DN}$, both SchNet\ and DimeNet\ reproduce the equilibrium lattice parameter and the bulk modulus with an accuracy comparable to GAP. Both models achieve DFT-accurate results for the equation of state, with a maximum energy difference $<0.1$ meV in the volume range [11.0, 12.0] {\AA}$^3$. These results thus reveal no apparent difference between the performance of SchNet\ and DimeNet.
In order to assess the ability of GNNs to reproduce tetragonal lattice distortions, the Bain path is evaluated and compared with DFT data (Figure \ref{fig:bain}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.42]{figures/results/Bain_GNN.pdf}
\caption{
\textbf{a} DFT Bain path compared to the Bain path obtained using the GNN potentials and the empirical potentials. \textbf{b} DFT Bain path compared to the Bain path computed with DimeNet and GAP. In both panels, grey dots represent the cloud of the training data.
}%
\label{fig:bain}
\end{figure}
In the figure, DFT is used to compute the energy as a function of a distorted primitive cell. The cell is distorted at constant volume, that is, by increasing one axis, $c$, while reducing the two other axes, $a$, so that the volume is kept constant. Both the Fe SchNet\ and DimeNet\ potentials are then used to compute the same path. Volume optimization, i.e. finding the minimum energy configuration at the prescribed $c/a$ by adjusting the volume, has also been performed with the GNN potentials to verify that the path does not deviate strongly from the assumed tetragonal distortion at constant volume; no strong qualitative changes were found with respect to the result obtained with the constrained Bain path. The plot also shows the $c/a$ distortions present in the training database. Fig. \ref{fig:bain}a shows that SchNet\ interpolates well within the training set while it extrapolates poorly, with a discontinuous behaviour of the energy as a function of the $c/a$ ratio. Instead, DimeNet\ can extrapolate fairly well outside of the training database, also reproducing qualitatively the energy barrier at $c/a\sim1.4$ as well as the subsequent local energy minimum around $c/a\sim1.65$. This specific capability sets DimeNet\ apart from SchNet, making it a more promising GNN for atomistic simulations of metals with structural transformations. Moreover, DimeNet\ outperforms the EAM potential, which shows no metastable minimum for BCC ferromagnetic Fe at $c/a\sim1.65$. MEAM was fitted on data including both BCC and FCC configurations, and for this reason it deviates strongly from the DFT results, which are based on BCC ferromagnetic configurations only. Fig. \ref{fig:bain}b shows that DimeNet\ approaches the transferability of GAP for the Bain path.
Finally, the vacancy formation energy and the surface energies of a number of crystal planes have been predicted. The vacancy formation energy is calculated by using a $3\times3\times3$ cubic supercell. First, one atom of the supercell is removed and a DFT calculation is performed to relax the atoms around the vacancy. Then, the DFT total energy $E_{\rm def}$ of the vacancy-containing configuration is computed. The total energy $E_{\rm bulk}$ of the bulk defect-free supercell is also computed. The vacancy formation energy equals
\begin{equation}
E_{\rm v} = E_{\rm def} - \frac{N-1}{N} E_{\rm bulk}\ ,
\end{equation}
where $N$ is the number of atoms in the bulk system ($N=54$ atoms in this case).
The surface energy is evaluated for four crystallographic planes, i.e. $\{100\}$, $\{110\}$, $\{111\}$ and $\{112\}$. The surface is generated by creating a supercell with a vacuum region, the energy of which is indicated as $E_{\rm split}$. The surface energy is computed as
\begin{equation}
E_{\rm surf} = (E_{\rm split} - E_{\rm bulk})/2A
\end{equation}
where $A$ is the newly created surface area. The results are plotted in Fig. \ref{fig:vacsurf}.
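Both quantities reduce to one-line formulas once the total energies are available; a minimal sketch:
\begin{verbatim}
def vacancy_formation_energy(E_def, E_bulk, N):
    """E_v = E_def - (N - 1)/N * E_bulk, with N atoms in the
    defect-free supercell (here N = 54)."""
    return E_def - (N - 1) / N * E_bulk

def surface_energy(E_split, E_bulk, A):
    """E_surf = (E_split - E_bulk) / (2A); the factor 2 accounts
    for the two surfaces created by the vacuum region."""
    return (E_split - E_bulk) / (2.0 * A)
\end{verbatim}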
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{figures/results/VacSurf_GNN.pdf}
\caption{
Vacancy formation energies and surface energies. The $\{100\}$, $\{110\}$, $\{111\}$ and $\{112\}$ surfaces are considered.
}%
\label{fig:vacsurf}
\end{figure}
Fig. \ref{fig:vacsurf}a shows that both GNNs reproduce all the considered DFT energies mostly within 10\% accuracy. The GNN potentials largely outperform both the EAM and MEAM empirical potentials, which consistently underpredict the energies. {DimeNet} shows its largest deviation ($\sim 7\%$) for the $\{112\}$ surface energy, and otherwise approaches the predictive capabilities of the GAP potential (Fig. \ref{fig:vacsurf}b).
\section*{Discussion and Conclusions}
In this paper we presented a comparative study on the application of two GNN models, {SchNet} and {DimeNet}, to the prediction of properties of bcc iron.
Since {DimeNet} was previously tested only on molecules and not on periodic structures/crystals, we implemented a version with PBCs and made it publicly available. Both models predict with DFT accuracy the energy-volume curve and related properties such as the bulk modulus and the equilibrium lattice parameter. This result is consistent with the fact that the energy-volume curve includes datapoints close to those of the training database. The investigated GNN potentials outperform closed-form empirical interatomic potentials (e.g. EAM and MEAM) and approach the accuracy of state-of-the-art interatomic potentials such as GAP. This makes the present GNN implementations interesting for application to other metallic systems.
A different performance of {DimeNet} with respect to {SchNet} is found for configurations including tetragonal distortions (Bain path), point defects and planar defects. {DimeNet} can predict the energy of these configurations within the MAE, while the predictive capability of {SchNet} is limited. We attribute this difference to the fact that, in {DimeNet}, the energy depends explicitly on the angular, three-body contributions that are essential for structural transformations and for local shape distortions, while {SchNet} only depends on pairwise contributions. It is also remarkable that {DimeNet} has better transferability, e.g. considering the Bain path.
By showing the capabilities of GNNs and especially the importance of three-body terms, this work supports the further investigation of GNNs and specifically {DimeNet}.
Activity is currently ongoing in the following directions:
\begin{itemize}
\item There are a number of potential improvements in terms of efficiency and accuracy of the model, related to hyperparameter optimization. Further investigations will involve finding a trade-off between choosing larger cutoff radii and/or increasing the number of interaction layers, in order to ensure the efficient description of short- and long-range interactions with high accuracy.
\item Another aspect to be investigated is the number of features of both atom and edge embeddings, and their initialization. These are crucial characteristics in modeling the atomic environment, encoding properties such as the nature of the atom and of the pair interactions, and are expected to impact the model efficiency, e.g. in the convergence of the training.
\item The implementation of the developed GNN Fe {DimeNet} potential within the LAMMPS\cite{plimpton1995fast} open-source package is currently ongoing and will enable the systematic simulation of thermoelastic properties, as well as linear and planar defects such as dislocations and cracks that are relevant for the investigation of the mechanical properties of metals.
\item We expect that, in the spirit of Atomic Cluster Expansions (ACE) \cite{PhysRevB.99.014104}, the transferability of GNN potentials will be improved by including more terms in the angular descriptions, by using a different choice of the radial function (e.g. based on Chebyshev polynomials), or by setting different values of the parameters $l, m$ in the angular functions (which, in the current {DimeNet} implementation, are spherical harmonics with $m=0$), and/or by introducing higher-body terms, beyond the three-body term currently used in {DimeNet} (see Methods). This is also the subject of current research.
\end{itemize}
\section*{Methods}
\subsection*{Graph Neural Networks and Message Passing}
A graph \cite{kipf2017semi} is a pair $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ where $i \in \mathcal{V}$ are the $N$ nodes and $(i, j) \in \mathcal{E}$ are the edges. The connections among the nodes of a graph can be stored in an adjacency matrix $A \in \mathbb{R}^{N \times N}$ containing the pairs $(i, j)\in \mathcal{E}$. At both nodes and edges, vectors of features (or embeddings) are defined as $\textbf{x}_i \in \mathbb{R}^F$ and $\textbf{e}_{ij} \in \mathbb{R}^D$, respectively, where $F, D$ are model specific parameters.
In the message passing with node update, node embeddings are updated iteratively; each iteration is executed by a message passing layer $l$ as follows:
\begin{equation}
\textbf{x}^{(l+1)}_i = \gamma (\textbf{x}^{(l)}_i, \sum_{j\in \mathcal{N}_i} \mu (\textbf{e}^{(l)}_{ij}, \textbf{x}^{(l)}_j))
\label{eq:node-update}
\end{equation}
where $\mathcal{N}_i$ is the set of the nodes connected to node $i$, $\mu$ is a differentiable function of the nodal and edge embeddings, the sum aggregates the contributions of atoms $j$,
and $\gamma$ is a differentiable function which evaluates the update of node embedding.
In the message passing with edge update \cite{31bef22ac7034baca72d1f08d3b16c4b}, edge embeddings are updated by following a similar scheme:
\begin{equation}
\textbf{e}^{(l+1)}_{ij} = \kappa (\textbf{e}^{(l)}_{ij}, \sum_{k\in \mathcal{N}_j \setminus\{i\}} \nu (\textbf{x}^{(l)}_j, \textbf{e}^{(l)}_{jk}, \textbf{x}^{(l)}_k))\ .
\label{eq:edge-update}
\end{equation}
With the same conventions as in the previous case, $\kappa$ and $\nu$ are differentiable functions of the nodal and edge embeddings, analogous to $\gamma$ and $\mu$. Note that the edges connected to $(i, j)$ are the edges $(j, k)$ linking node $j$ to nodes $k \neq i$, hence the index of the summation.
At the next iteration, the message is evaluated by the layer $l+1$ by aggregating embeddings $\textbf{x}^{(l+1)}_i$ ($\textbf{e}^{(l+1)}_{ij}$) from the neighbours, which in turn have received a message from their own neighbours: stacking together $L$ layers means that the final update is performed by using messages coming from a distance up to $L$ neighbors away, see fig. \ref{fig:message_passing}.
Once iteratively updated via message passing, the embeddings are processed by a readout function
\begin{equation}
y = f (\{\textbf{x}^{(L)}_i, \textbf{e}^{(L)}_{ij}\})\
\label{eq:readout}
\end{equation}
which performs a further aggregation of all the embeddings and outputs the prediction $y \in \mathbb{R}$ of the network.
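As an illustration of Eq. \ref{eq:node-update}, a node-update layer can be sketched as follows; this is a minimal plain-PyTorch version in which $\mu$ and $\gamma$ are reduced to single dense layers, whereas the actual models use deeper blocks and learned filters:
\begin{verbatim}
import torch
import torch.nn as nn

class NodeUpdateLayer(nn.Module):
    def __init__(self, f_node, f_edge):
        super().__init__()
        self.mu = nn.Linear(f_node + f_edge, f_node)    # message function
        self.gamma = nn.Linear(2 * f_node, f_node)      # update function

    def forward(self, x, e, edge_index):
        # x: (n_nodes, f_node); e: (n_edges, f_edge)
        # edge_index: (2, n_edges) long tensor of (source j, target i)
        j, i = edge_index
        msg = self.mu(torch.cat([x[j], e], dim=-1))     # mu(e_ij, x_j)
        agg = torch.zeros_like(x).index_add_(0, i, msg) # sum over N_i
        return self.gamma(torch.cat([x, agg], dim=-1))  # gamma(x_i, agg)
\end{verbatim}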
\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{figures/message_passing_2.pdf}
\caption{
Message Passing with node update. Atomic environment as seen by node $i$:
layer $l=1$ aggregates messages from connected neighbours (red area);
layer $l=2$ acts in the same way, but the connected neighbours have already been updated by messages from their own neighbours at the previous layer, so the signals received by atom $i$ now come from a distance of up to two edges (orange area). A sequence of $L$ layers means messages coming from nodes at a distance of up to $L$ edges.
A similar scheme works for message passing with edge update.
}
\label{fig:message_passing}
\end{figure}
Within the context of crystalline materials, it is straightforward to consider the nodes of a graph as the atoms and to connect by edges the pairs of atoms that lie within a specified interaction radius. Let $\textbf{r}_i \in \mathbb{R}^3$ be the coordinates of the atom $i$. Then, the graph is defined by connecting all the atoms $j$ that are inside the cutoff radius $r_{cut}>||\textbf{r}_i - \textbf{r}_j||$.
Atomic embeddings $\textbf{x}_i$ are vectors of learnable numerical features.
They are randomly initialized, and atoms with the same set of relevant atomic properties, such as atomic number $Z$, have the same initial embeddings.
Edge embeddings $\textbf{e}_{ij}$ are similar, with properties related to pairs of connected atoms, such as the interatomic distance $d_{ij}$.
The message and update functions $\mu, \gamma$ (and, analogously, $\nu, \kappa$; Eqs. \ref{eq:node-update}, \ref{eq:edge-update}) are neural layers which add learnable weights and define the form of the convolutional filter and of the update procedure.
Hence, the prediction of the potential energy $E \in \mathbb{R}$ of a crystal as a function of the atomic coordinates is a regression task performed on such a graph (Eq. \ref{eq:readout}).
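The cutoff-based graph construction itself can be sketched as follows. This is a naive $O(N^2)$ loop using the minimum-image convention, valid only for orthorhombic cells with $r_{cut}$ smaller than half the shortest cell vector; production codes use proper neighbour lists with explicit periodic images.
\begin{verbatim}
import numpy as np

def build_graph(positions, cell_lengths, r_cut):
    """Edge list (i, j) of all atom pairs within r_cut under PBCs.
    positions: (N, 3) Cartesian coordinates; cell_lengths: (3,)."""
    n = len(positions)
    edges = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[j] - positions[i]
            d -= cell_lengths * np.round(d / cell_lengths)  # min. image
            if np.linalg.norm(d) < r_cut:
                edges.append((i, j))
    return np.array(edges).T  # shape (2, n_edges)
\end{verbatim}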
\subsection*{Network models}
\label{subsection:network_models}
In this paper we use two recent models of Graph Neural Networks based on the Message Passing framework: SchNet \cite{doi:10.1063/1.5019779} and DimeNet \cite{Klicpera2020Directional}.
There are two main differences between them. The first is related to the embeddings: {SchNet} relies on atomic embeddings, while {DimeNet} also uses edge embeddings in the form of pairs of atom embeddings to account for the directionality of the message passing. The second difference is the learned convolutional filters used to aggregate embeddings: while {SchNet} employs a filter that accounts only for the distance between pairs of atoms, {DimeNet} considers also the angles formed by pairs of edges, or triplets of atoms.
The general scheme of both models is shown in fig. \ref{fig:models_schema}. At a high level of abstraction, they can be described in terms of block diagrams, with each block representing a set of specific neural layers that performs some operations on input data and generates output data.
Blocks with the same name in both models perform similar general operations.
\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{figures/schnet_dimenet.pdf}
\caption{
Block diagram for {SchNet} (left) and {DimeNet} (right).
Outputs generated by each block are shown near the arrows.
In both models the starting point is the Embedding block that maps atom or edge features in a vector space, generating embeddings.
For {SchNet}, the Interaction blocks are in a sequence, the output of one being the input of the next; the final Output block evaluates the total energy $\hat{E}$.
In {DimeNet} the output of each block is both passed sequentially to the next and further elaborated by the Output block, and then summed to obtain the energy $\hat{E}$.
}
\label{fig:models_schema}
\end{figure}
\subsubsection*{Filters and physical representation of the atomic environment}
\label{subsubsection:distances-angles}
To take into account the physical environment surrounding each atom, the convolutional filter assigns weights to the embeddings received by the neighbours (see below the description of Embedding and Interaction blocks).
Filter weights are learned during training and have to change smoothly with respect to small atomic shifts.
Therefore, distances and angles are {\it expanded}, that is, represented as feature vectors whose components are given by sets of continuous basis functions.
In {SchNet} the filter depends only on the interatomic distance $d$,
expanded by a set of radial, Gaussian (G) basis functions:
\begin{equation}
\phi^G_k(d) = \exp \left(-\frac{\left(d - \mu_k\right)^2}{2 \sigma^2}\right)
\label{eq:sch-filter}
\end{equation}
with $\mu_k$ equally spaced in the interval $[0, r_{cut}]$, and $\sigma$ representing the scale of the distances.
The hyperparameters $k$ (the number of Gaussians) and $\sigma$ define the granularity of the representation and determine the precision of the filter.
The spacing $r_{cut}/k$ and the scale $\sigma$ are both set to 0.1 {\AA} in the original paper \cite{doi:10.1063/1.5019779}; in order to improve the precision and allow a fairer comparison with {DimeNet}, we set them to 0.04 {\AA}.
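The expansion of Eq. \ref{eq:sch-filter} is straightforward to implement; a sketch with the settings used here:
\begin{verbatim}
import numpy as np

def gaussian_expansion(d, r_cut, spacing=0.04, sigma=0.04):
    """Expand distances d on Gaussians with centers mu_k equally
    spaced in [0, r_cut]; spacing and sigma are in Angstrom."""
    mu = np.arange(0.0, r_cut + spacing, spacing)
    d = np.atleast_1d(d)[:, None]
    return np.exp(-(d - mu) ** 2 / (2.0 * sigma ** 2))
\end{verbatim}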
{DimeNet} introduces two different filters: one radial depending only on distances, used to weight the embeddings received by atoms;
and one radial-angular which takes into account also the angles to weight the embeddings passed to the edges.
Both distances and angles are expanded in a 2D Bessel-Fourier basis, whose elements are solutions of the related time-independent Schr\"odinger equation and represent the electron density of the system inside the cutoff radius. For the first, purely radial, filter, distances $d$ are expanded in a feature vector whose components are given by the Radial Basis Functions (RBF):
\begin{equation}
\phi^{RBF}_k(d) = \sqrt{\frac{2}{r_{cut}}} \frac{\sin{\left( \frac{k\pi}{r_{cut}} d\right)}}{d} \ .
\label{eq:dist-filter}
\end{equation}
The second filter depends on the distance $d$ and the angle $\theta$. The components of the bidimensional feature vectors are given in terms of the Spherical Basis Functions (SBF):
\begin{equation}
\phi^{SBF}_{l,k}(d, \theta) = \sqrt{\frac{2}{r_{cut}^3 j^2_{l+1}\left(z_{lk}\right) }} j_l \left( \frac{z_{lk}}{r_{cut}} d\right) Y^0_l (\theta)
\label{eq:angle-filter}
\end{equation}
where $j_l$ are the spherical Bessel functions of the first kind and $Y^m_l$ are the spherical harmonics; $z_{lk}$ is the $k$-th root of the $l$-order Bessel function.
Settings for the non-learnable parameters are as in the original paper \cite{Klicpera2020Directional}, namely: in eq. \ref{eq:dist-filter}, $k \in [1, \dots, 6]$, while in eq. \ref{eq:angle-filter}, $k \in [1, \dots, 6]$ and $l \in [0, \dots, 5]$.
To avoid the discontinuity imposed by the boundary condition $\phi(d) = 0$ for $d > r_{cut}$, the functions \ref{eq:dist-filter} and \ref{eq:angle-filter} are multiplied by a smoothing polynomial $u(d) \sim \mathcal{O}(d^8)$ that decays smoothly to zero, with a root of multiplicity 3 at $d = r_{cut}$.
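A sketch of the radial basis of Eq. \ref{eq:dist-filter} together with the smoothing polynomial follows; the envelope is written here with exponent $p=6$, so that the highest power is $d^8$ and $u$, $u'$, $u''$ all vanish at $r_{cut}$ (the root of multiplicity 3):
\begin{verbatim}
import numpy as np

def envelope(d, r_cut, p=6):
    """Smoothing polynomial u(d): u, u', u'' all vanish at r_cut."""
    x = d / r_cut
    a = -(p + 1) * (p + 2) / 2.0
    b = p * (p + 2)
    c = -p * (p + 1) / 2.0
    return (1 + a * x**p + b * x**(p + 1) + c * x**(p + 2)) * (x < 1)

def radial_bessel(d, r_cut, num_k=6):
    """Radial Basis Functions phi_k(d), smoothed by the envelope."""
    k = np.arange(1, num_k + 1)
    d = np.atleast_1d(d)[:, None]
    rbf = np.sqrt(2.0 / r_cut) * np.sin(k * np.pi * d / r_cut) / d
    return rbf * envelope(d, r_cut)
\end{verbatim}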
For both {SchNet} and {DimeNet} the expanded representations are passed through dense neural layers (see below) which add the learnable weights.
The filter is therefore a mapping of the physical representation of angles and distances to a vector space with dimensions matching the ones of the embeddings to weight.
The general aspect of the filters and an intuition of how they work are shown in fig. \ref{fig:filters}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/filtri_creazione.pdf}
\caption{
Filters and physical representation of the atomic environment.
(a) Starting from the positions of atoms, distances between pairs (top, SchNet and DimeNet) and distances and angles between triplets (bottom, DimeNet) are evaluated.
(b) Distances $d$ are expanded in a Gaussian basis (top, SchNet, eq. \ref{eq:sch-filter}) or in a radial Bessel basis (middle, DimeNet, eq. \ref{eq:dist-filter}), while distances $d$ and angles $\theta$ for triplets are expanded in a 2D Bessel-Fourier basis (bottom, DimeNet, eq. \ref{eq:angle-filter}).
(c) The convolutional filters: expansions are passed through dense layers whose weights are optimized during training. Learned weights are the convolutional filters. The first three and the last component are extracted and shown for all the cases.
}
\label{fig:filters}
\end{figure}
\subsubsection*{Dense layers}
The dense layer is the basic building block of a neural network. Given an input $\textbf{x} \in \mathbb{R}^k$, it is defined as
\begin{equation}
\textbf{y} = \sigma (\textbf{W} \cdot \textbf{x} + \textbf{b})
\label{eq:dense}
\end{equation}
where $\textbf{W} \in \mathbb{R}^{m \times k}$, $\textbf{b} \in \mathbb{R}^m$ are the learnable weights and bias, $\cdot$ is the matrix multiplication operator and $\sigma$ is the \emph{activation}, i.e. a differentiable nonlinear function.
The activation is the shifted softplus for SchNet, $ssp(x) = \ln (0.5\cdot e^x + 0.5)$, and the self-gated Swish for DimeNet, $sgs(x) = x \cdot \mathrm{sigmoid}(x)$.
In terms of vector algebra a dense layer projects the input vector $\textbf{x} \in \mathbb{R}^k$ into a vector space $\mathbb{R}^m$ with $m \neq k$ in general, and then applies the function $\sigma$ element-wise.
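In code, a dense layer with the two activations reads as follows (a PyTorch sketch; the layer sizes are arbitrary):
\begin{verbatim}
import math
import torch

def shifted_softplus(x):
    # SchNet activation: ssp(x) = ln(0.5 e^x + 0.5) = softplus(x) - ln 2
    return torch.nn.functional.softplus(x) - math.log(2.0)

def swish(x):
    # DimeNet activation (self-gated Swish): x * sigmoid(x)
    return x * torch.sigmoid(x)

dense = torch.nn.Linear(128, 64)   # learnable W and b
y = shifted_softplus(dense(torch.randn(10, 128)))
\end{verbatim}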
\subsubsection*{Embedding block}
For SchNet, atom embeddings are defined as vectors $\textbf{x}_i \in \mathbb{R}^F$; initial values $\textbf{x}_i^{(0)}$ are randomly initialized.
For DimeNet, similarly defined atom embeddings are concatenated in pairs to generate an initial edge embedding $\textbf{e}_{ji}^{(0)} = (\textbf{x}_j^{(0)} || \textbf{x}_i^{(0)} || \phi^{RBF}_k(d_{ji}))$. Note that this definition guarantees the directionality, as in general $\textbf{e}_{ji} \neq \textbf{e}_{ij}$.
Once initialized, embeddings are passed through dense layers.
\subsubsection*{Interaction block}
The message passing paradigm is implemented in the Interaction blocks.
Multiple Interaction blocks are generally stacked together. Each of them performs a convolution by aggregating embeddings from the directly connected entities, and then updating them.
The output of one block is passed as the input to the next.
Let $l$ be the generic Interaction block. In {SchNet}, the embedding $\textbf{x}_j^{(l)}$ received by atom $i$ from neighbour $j \in \mathcal{N}_i$ is first weighted by the Gaussian radial filter depending on $\phi^G(d)$ (eq. \ref{eq:sch-filter}). The weighted embeddings are then aggregated, and the resulting embedding is summed to $\textbf{x}_i^{(l)}$ and passed through a dense layer to update it to $\textbf{x}_i^{(l+1)}$ (eq. \ref{eq:node-update}).
In {DimeNet}, the edge $(j, i)$ receives message embeddings $\textbf{e}_{kj}^{(l)}$ from edges $(k, j)$ that are first weighted by means of the radial filter based on $\phi^{RBF}(d)$ (eq. \ref{eq:dist-filter}) with $d = d_{ji}$, and then by the radial-angular filter based on the Bessel-Fourier basis $\phi^{SBF}(d, \theta)$ (eq. \ref{eq:angle-filter}), where $\theta$ is the angle formed by $(j, i)$ and $(k, j)$ and $d = d_{kj}$.
Again, exchanged messages are aggregated, summed over $k$ to the embedding $\textbf{e}_{ji}^{(l)}$ relative to edge $(j, i)$ and then passed through the dense layers to obtain the updated $\textbf{e}_{ji}^{(l+1)}$ edge embedding (eq. \ref{eq:edge-update}).
For an intuition of how the filters are applied see fig. \ref{fig:filters_application}.
In {DimeNet}, updated messages are given as input to the next interaction block \emph{and} to the related output block, see below.
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{figures/filtri_applicazione.pdf}
\caption{
Schematic of the application of the filters for {DimeNet}.
Features of the edge embedding $e_{kj}$ are first multiplied element-wise with the value at the point $d_{ji}$ of the components of the radial filter.
Then another element-wise multiplication is performed with the value at the point $(d_{kj}, \theta)$ of the components of the radial-angular filter.
An analogous scheme works for {SchNet}, limited to the purely radial filter.
}
\label{fig:filters_application}
\end{figure}
\subsubsection*{Output block}
Output blocks are responsible for lowering the dimensions of the atom embeddings, reducing them to scalars.
SchNet\ has only one Output block, at the end of the Interaction block stack; it is a sequence of dense layers whose task is to reduce the embedding to a scalar $\textbf{x}_i^{(L+1)} \rightarrow x_i^{(L+1)}$, to be interpreted as the atom-wise contribution to the total potential energy.
The final prediction is evaluated as the sum of atom-wise contributions $\hat{E} = \sum_i x_i^{(L+1)}$.
DimeNet~ performs another convolution here, resulting in the update of the atomic embeddings.
Embeddings $\textbf{e}_{ij}^{(l+1)}$ from the related interaction block $l$ (and of the embedding block, $l=0$) are further convoluted by means of a radial filter based on $\phi^{RBF}(d)$: $\textbf{e}_{ij}^{(l+1)} \rightarrow \Tilde{\textbf{e}}_{ij}^{(l+1)}$.
The update of the embedding of atom $i$ is then evaluated as $\textbf{x}_i^{(l+1)} = \sum_j \Tilde{\textbf{e}}_{ij}^{(l+1)}$.
Multiple dense layers are applied to reduce the dimension to 1, $\textbf{x}_i^{(l+1)} \rightarrow x_i^{(l+1)}$, to be interpreted as the per-atom contribution of the level-$l$ blocks to the output of the model.
Finally, they are summed atom-wise and level-wise to evaluate the final prediction of the network, $\hat{E} = \sum_l \sum_i x_i^{(l+1)}$.
\subsection*{Dataset}
\label{subsection:dataset}
We use a DFT database\cite{PhysRevMaterials.2.013808} of bcc ferromagnetic iron in our study. The database was generated by carefully converged collinear spin-polarized plane-wave DFT computations, and includes the following subsets.
\begin{itemize}
\item DB1: Primitive unit cell under arbitrary pressures at 300K
\item DB2: $3\times 3\times 3$ and $4\times 4\times 4$ supercell under a range of pressures and temperatures
\item DB3: $3\times 3\times 3$ supercell containing a vacancy, under the same range of pressures and temperatures as DB2
\item DB4: $4\times 4\times 4$ supercell with divacancies at 800K
\item DB5: $4\times 4\times 4$ supercell with 3, 4 and 5 vacancies at 800-1000K
\item DB6: $4\times 4\times 4$ supercell containing self- and di-interstitials at 100-300K
\item DB7: $1\times 1\times 6$ supercell with (100), (110), (111) and (112) free surfaces
\item DB8: $1\times 1\times 6$ supercell with $\gamma$ surfaces on (110) and (112) plane
\end{itemize}
All structures in DBs other than DB1 are bcc lattices; structures in DB1 are primitive unit cells of bcc lattices.
More details about the database can be found in the original paper \cite{PhysRevMaterials.2.013808}. The DFT database is computed by using the open-source code QUANTUM ESPRESSO\cite{Giannozzi_2009,Giannozzi_2017}. An ultrasoft GGA-PBE pseudopotential from pslibrary $0.2.1$ is employed. The kinetic energy cutoffs for the wavefunctions and the charge density are set to $90$ and $1080$ Ry, respectively. The $k$-point spacing is set to be less than $0.03$ \AA$^{-1}$.
\section*{Data and code availability}
The data used for training and testing the system is publicly available at the \emph{Materials Cloud} site: \url{https://archive.materialscloud.org/record/2017.0006/v2}.
The code generated to obtain the data reported in the paper can be found at the GitHub repository of the project: \url{https://github.com/AilabUdineGit/GNN_atomistics/}
\end{document}
\section{Introduction}
A typical visualization of a binary data matrix is a hierarchically-clustered heatmap with dendrograms, in which the higher-level clusters are recursively comprised of smaller clusters, the hierarchy being computed with an agglomeration strategy involving a distance function defined pairwise between the samples (or features) to be clustered. In favorable cases a cluster may appear at some level of the hierarchy which is especially characteristic of an important underlying state or measure, i.e. an outcome: for example, the likelihood of a favorable response to some medical treatment.
But it is often difficult to decide whether a cluster found this way, or any other way, could just as easily have occurred by random chance. This is obviously a primary concern in the unsupervised context, where outcomes which might guide cluster assessment are not present. It is also a concern in the supervised context, due to the possibility of overfitting or multiple-hypothesis false discovery.
As an example, in recent work of the authors\autocite{psgpaper}, a subtype of several types of cancers (including lung and uterine cancers) was identified which exhibited a molecular signature defined by about 10 genes, the PSGs. Network analysis methods implicated the gene subset, but initially confidence in its actual significance was low. Pearson correlation analysis was inconclusive due to the presence of outliers, and the rarity of the subtype displaying the full signature added to this uncertainty. Ultimately Kaplan-Meier analysis did show that the PSG+ phenotype confers a poor prognosis, confirming the biological significance of this subtype, but we still lacked an objective basis for any claim of statistical significance of the signature/subtype itself. The exact test we introduce in this article turns out to provide such a basis, as described in Figure \ref{psgfigure}.
\section{Theory}
\subsection{Setup}
Let $\overline{M}$ be a binary matrix of shape $(N, K)$. We call the $K$ columns \emph{features} and the $N$ rows \emph{samples}. Given a $k$-element subset $F$ of the feature set, let $M$ denote the restriction of $\overline{M}$ to the columns $F$, and let $v:=(v_1, \ldots, v_k)$ denote the corresponding column sums. Let $S$ denote the set of samples which have all of the features $F$. That is,
\begin{align*}
S = \{ \thinspace s \quad \vert \quad M(s,f)=1 \quad \forall f\in F \}
\end{align*}
A pair $(F, S)$ obtained as above may be called a \emph{maximal bicluster} or a \emph{formal concept}. We shall use the term \emph{signature}, emphasizing the feature set $F$, and call $S$ the set of samples \emph{displaying signature} $F$. In appendix \ref{fca_methods} we explain how to identify, in practice, many examples of $(F, S)$ for which $S$ is non-empty and relatively large. Of course, any other signature discovery method may be used instead.
We propose to assess the significance of a given signature finding in terms of the size $|S|$, under the intuition that simultaneous display of multiple features by a large set of samples indicates a non-trivial relation between the features. We call this size the \emph{incidence} or \emph{intersection} statistic, and denote it $I$.
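For concreteness, the incidence statistic can be computed directly from the binary matrix. The following minimal Python sketch (the function name and the NumPy dependency are ours, for illustration only) returns $I$ for a matrix \texttt{M} and a feature subset \texttt{F} given as column indices:
\begin{verbatim}
import numpy as np

def incidence(M, F):
    # Number of samples (rows of the binary matrix M) positive for
    # every feature in the column subset F.
    return int(np.all(M[:, list(F)] == 1, axis=1).sum())
\end{verbatim}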
\subsection{Binary matrix configurations}
We are concerned with binary matrices $M$. If $M$ has $k$ columns, it will be convenient to do some calculations in the ring of formal power series $T:=\mathbb{Z}[[t_1, \ldots, t_k]]$. This is because of the correspondence between:
\begin{enumerate}
\itemsep0em
\item{Multiplicity-free monomials in $T$, i.e. elements of the form $t_J:=\prod\limits_{j\in J}t_j$ for some $J\subset \{1,\ldots,k\}$}
\item{Subsets $J\subset \{1,\ldots, k\}\thinspace$ (i.e. $J\in \mathscr{P}_{k}$)}
\item{Possible rows $r=(r^{1},\ldots,r^{k})$ of $M$}
\end{enumerate}
The correspondence is
\begin{align*}
t_J \quad \longleftrightarrow \quad J \quad \longleftrightarrow \quad r=(r^{1},\ldots r^{k}),\thinspace r^j = \begin{cases} 1 \mbox{ if } j\in J \\ 0 \mbox{ if } j\notin J\end{cases}
\end{align*}
Denote by $\mathscr{F}(k)$ the set defined by any of these 3 equivalent descriptions. (Here $\mathscr{F}$ stands for ``features''.)
Symmetrically, if $M$ has $n$ rows, we consider the ring $W:=\mathbb{Z}[[s_1,\ldots,s_n]]$, and the 3 sets in correspondence:
\begin{enumerate}
\itemsep0em
\item{Multiplicity-free monomials in $W$, i.e. elements of the form $s_U:=\prod\limits_{u\in U}s_u$ for some $U\subset \{1,\ldots,n\}$}
\item{Subsets $U\subset \{1,\ldots, n\}\thinspace$ (i.e. $U\in \mathscr{P}_{n}$)}
\item{Possible columns $c=(c^{1},\ldots,c^{n})$ of $M$}
\end{enumerate}
Denote this set by $\mathscr{S}(n)$. (Here $\mathscr{S}$ stands for ``samples''.)
In these terms, the set of all $M$ is naturally identified with $(\mathscr{F}(k))^{n}$ and with $(\mathscr{S}(n))^{k}$ by regarding $M$ as an $n$-tuple of rows or, respectively, as a $k$-tuple of columns.
We will also call the matrices $M$ \emph{configurations}, writing
\[(\mathscr{F}(k))^{n}\cong (\mathscr{S}(n))^{k}=: \mathscr{C}\]
\[(J_1,\ldots,J_n)\quad \longleftrightarrow\quad (U_1,\ldots,U_k) \quad \longleftrightarrow M\]
In counting configurations satisfying certain conditions, we will appeal to the notation introduced above for corresponding elements in lieu of explicit notation for the bijection functions.
\subsection{Incidence statistic, its PMF, and CDF\label{formuladist}}
Define integers $a(n,v)$, for integers $n\geq 0$ and $v=(v_1,\ldots, v_k)$ with $v_j\geq 0\thinspace \forall j$, by the generating function:
\begin{align*}
(f(t)-t_1\cdots t_k)^{n}& =: \sum_{v} a(n,v) t_1^{v_1}\cdots t_k^{v_k} \\
f(t) &:= (1+t_1)\cdots(1+t_k)
\end{align*}
The following counting theorem is the underlying fact needed to prove a formula for the probability mass function (PMF) of the incidence statistic.
\begin{theorem}\label{intersectioncounting}\hfill
\begin{enumerate}
\itemsep0em
\item{\label{interpretation_intersection}$a(n,v)$ is the number of configurations in which the mutual intersection of the $U_j$ is empty, that is $\cap_{j=1}^{j=k}U_j=\emptyset$, and such that $|U_j|=v_j$ for each $j$.}
\item{\label{formula_intersection}$a(n,v)=\sum\limits_{m=0}^{m=n}(-1)^{n+m}\binom{n}{m}\prod\limits_{j=1}^{j=k}\binom{m}{n-v_j} $}
\end{enumerate}
\end{theorem}
\begin{proof} (\ref{interpretation_intersection}) By expansion, $f(t)$ consists of the sum of all the monomials in $\mathscr{F}(k)$. So $f(t)-t_1\cdots t_k$ is the sum of all the monomials except $t_1\cdots t_k$. Before collecting terms with the same monomial part, the terms of $(f(t)-t_1\cdots t_k)^n$ are labelled by ordered $n$-tuples of elements of $\mathscr{F}(k)\backslash \{t_1\cdots t_k\}$. That is, by certain elements of $\mathscr{C}$. Thus the notation we have introduced for elements of $\mathscr{C}$ may be brought to bear. In particular, the monomial part of a given term is
\[ t_1^{|U_1|}\cdots t_k^{|U_k|} \]
It follows that the coefficient of $t_1^{v_1}\cdots t_k^{v_k}$ is the number of configurations, in which no $J_i$ equals the whole set $\{1,\ldots,k\}$ (due to the missing element $t_1\cdots t_k$), such that $|U_j|=v_j$ for all $j$. The condition that no $J_i$ be equal to the whole set is equivalent to the mutual intersection of the $U_j$ being empty.
\noindent(\ref{formula_intersection}) We apply the binomial theorem $1+k$ times:
\begin{align*}
(f(t)-t_1\cdots t_k)^{n} &= \sum\limits_{m=0}^{m=n} (-1)^{n-m} \binom{n}{m} (f(t))^{m} (t_1^{n-m}\cdots t_k^{n-m}) \\
&=\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} (1+t_1)^{m}\cdots(1+t_k)^{m} (t_1^{n-m}\cdots t_k^{n-m})\\
&=\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} \left(\sum\limits_{u=0}^{u=m}\binom{m}{u}t_1^{u}\right)\cdots\left(\sum\limits_{u=0}^{u=m}\binom{m}{u}t_k^{u}\right) (t_1^{n-m}\cdots t_k^{n-m})\\
&=\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} \left(\sum\limits_{v}\prod\limits_{j=1}^{j=k}\binom{m}{v_j}t_1^{v_1}\cdots t_k^{v_k}\right) (t_1^{n-m}\cdots t_k^{n-m})\\
&=\sum\limits_{v}\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} \prod\limits_{j=1}^{j=k}\binom{m}{v_j}t_1^{n-m+v_1}\cdots t_k^{n-m+v_k}\\
&=\sum\limits_{v}\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} \prod\limits_{j=1}^{j=k}\binom{m}{v_j-(n-m)}t_1^{v_1}\cdots t_k^{v_k}\\
&=\sum\limits_{v}\sum\limits_{m=0}^{m=n} (-1)^{n+m} \binom{n}{m} \prod\limits_{j=1}^{j=k}\binom{m}{n-v_j}t_1^{v_1}\cdots t_k^{v_k}
\end{align*}
\end{proof}
The proof above is clarified somewhat by the observation that $f(t)$ can be expressed as a specialization of the power series in $W\otimes_{\mathbb{Z}}T$,
\[g(s,t):=\prod\limits_{u,j}(1+s_u t_j)\quad,\]
namely $f(t)=g(\mathbf{1},t)$ where $\mathbf{1} = (1, \ldots, 1)$.
\begin{theorem} Fix integers $i\geq 0$, $v=(v_1,\ldots,v_k)$, $v_j\geq 0$, and $n>0$. Consider the $n\times k$ configurations $M$ in which:
\begin{enumerate}
\itemsep0em
\item{$|U_j|=v_j$ for each $j$.}
\item{The cardinality of the intersection of the $U_j$ is exactly $i$, that is $|\cap_{j=1}^{j=k}U_j|=i$.}
\end{enumerate}
The number of such configurations is given by the formula:
\[ \binom{n}{i}\sum\limits_{m=0}^{m=n-i}(-1)^{n-i+m}\binom{n-i}{m}\prod\limits_{j=1}^{j=k}\binom{m}{n-v_j} \]
\end{theorem}
\begin{proof} The indicated set of configurations is partitioned equally into $\binom{n}{i}$ sets, according to which $i$-element sample subset is the mutual intersection, denoted $X$. By construction the reduced configuration matrix, not involving the elements of $X$, must consist of $k$ features with sample sets of sizes $(v_1-i, \ldots, v_k-i)$ and with no intersection. Thus the size of each part of the partition is $a(n-i, (v_1-i,\ldots,v_k-i))$. The number of configurations is therefore
\[\binom{n}{i}a(n-i, (v_1-i,\ldots, v_k-i))\]
The result follows from the formula for $a$ given in Theorem \ref{intersectioncounting}.\ref{formula_intersection}.
\end{proof}
The null assumption we make for our test is the one that is made implicitly in a standard permutation test, namely the uniform distribution on the subset of $\mathscr{C}$ defined by $|U_{j}|=v_j$, given $v=(v_1,\ldots,v_k)$. Note that this entails that we do \emph{not} assume $M$ is comprised of $n$ independent and identically distributed (iid) samples. Also, despite the fact that $M$ appears to be $n$ samples from a set of binary discrete variables, it is definitely not $n$ samples of Bernoulli variables; for example, the variance of the number of positives is 0 for each feature, rather than $np(1-p)$ for some positivity rate $p$.
Under this assumption the incidence statistic $I$ is an integer-valued random variable. The following corollary provides a formula for its PMF.
\begin{corollary}\label{stattest}Consider $n$ samples observed with $k$ binary features of respective frequencies $v_1,\ldots v_k$. The probability of observing exactly $i$ samples positive for all $k$ features is:
\[ p(I=i) = \frac{\binom{n}{i}\sum\limits_{m=0}^{m=n-i}(-1)^{n-i+m}\binom{n-i}{m}\prod\limits_{j=1}^{j=k}\binom{m}{n-v_j}}{\prod\limits_{j=1}^{j=k}\binom{n}{v_j}} \]
\end{corollary}
By summing over several values of $i$ in Corollary \ref{stattest}, one can compute a value of the cumulative distribution function (CDF) of $I$. This is (one minus) the $p$-value for the proposed ``exact test for coincidence''.
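A direct transcription of Corollary \ref{stattest} into Python, using only the standard library, might read as follows (a sketch; the function names are ours and not part of the released package API):
\begin{verbatim}
from math import comb, prod

def coincidence_pmf(i, n, v):
    # P(I = i) for n samples and feature frequencies v = (v_1, ..., v_k).
    # Note comb(m, r) = 0 whenever r > m, as the formula requires.
    numerator = comb(n, i) * sum(
        (-1) ** (n - i + m) * comb(n - i, m)
        * prod(comb(m, n - vj) for vj in v)
        for m in range(n - i + 1)
    )
    return numerator / prod(comb(n, vj) for vj in v)

def coincidence_pvalue(i, n, v):
    # P(I >= i), by direct summation of the PMF.
    return sum(coincidence_pmf(u, n, v) for u in range(i, min(v) + 1))
\end{verbatim}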
The next theorem provides an alternative, more closed-form calculation of the CDF, with significantly decreased computational complexity compared with direct summation of PMF values, namely $O(n)$ rather than $O(n^2)$.
The proof of this theorem depends on two basic lemmas.
\begin{lemma}
\[ \binom{a}{b} \binom{b}{c} = \binom{a-c}{a-b} \binom{a}{c} \]
\end{lemma}
\begin{proof}
\begin{align*}
\frac{a!}{(a-b)!b!}\cdot \frac{b!}{(b-c)!c!} = \frac{1}{(a-b)!(b-c)!}\cdot \frac{a!}{c!} = \frac{(a-c)!}{(a-b)!(b-c)!}\cdot \frac{a!}{(a-c)!c!}
\end{align*}
\end{proof}
\begin{lemma}
\[ \sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g}{h} = (-1)^{l} \binom{g-1}{l} \]
\end{lemma}
\begin{proof}
By induction. Base case $g=1$:
\begin{align*}
(-1)^{0}\binom{1}{0} &= 1 = (-1)^{0}\binom{0}{0} \\
\binom{1}{0} - \binom{1}{1} &= 0 = (-1)^{1}\binom{0}{1}
\end{align*}
Now assume the formula holds (for all $l$) for a fixed $g\geq 1$.
\begin{align*}
\sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g+1}{h} &= \sum\limits_{h=0}^{h=l}(-1)^{h}\left(\binom{g}{h}+\binom{g}{h-1}\right) \\
&= \sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g}{h} + \sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g}{h-1} \\
&= \sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g}{h} + \sum\limits_{h=1}^{h=l}(-1)^{h}\binom{g}{h-1}\\
&= \sum\limits_{h=0}^{h=l}(-1)^{h}\binom{g}{h} - \sum\limits_{h=0}^{h=l-1}(-1)^{h}\binom{g}{h}\\
&= (-1)^{l}\binom{g}{l}
\end{align*}
\end{proof}
\begin{theorem}\label{cdf}\hfill
\begin{align*}
&\sum\limits_{u=i}^{u=n}p(I=u)=\frac{N}{D}
\end{align*}
where
\begin{align*}
&N:=\sum\limits_{m=\text{\emph{max}}\{n-v_j\}}^{m=n-i}(-1)^{m} \binom{n}{m} \left( (-1)^{\text{\emph{max}}\{n-v_j\}}\binom{n-m-1}{n-\text{\emph{max}}\{n-v_j\}} + (-1)^{n-i}\binom{n-m-1}{i-1} \right) \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j}\\
&D:=\prod\limits_{j=1}^{j=k}\binom{n}{v_j}
\end{align*}
\end{theorem}
\begin{proof} First note that $p(I=u)=0$ if $u>\text{min}\{v_j\}$, so the sum stops at $u=\text{min}\{v_j\}$. We compute the numerator $N$ by applying the counting formula $\binom{n}{u}a(n-u,(v_1-u,\ldots,v_k-u))$ to each term; dividing the result by $D$ then gives the claim:
\begin{align*}
&\sum\limits_{u=i}^{u=\text{min}\{v_j\}}\binom{n}{u}a(n-u, (v_1-u,\ldots, v_k-u)) = \sum\limits_{u=i}^{u=\text{min}\{v_j\}}\binom{n}{u}\sum\limits_{m=0}^{m=n-u}(-1)^{n-u+m}\binom{n-u}{m}\prod\limits_{j=1}^{j=k}\binom{m}{n-v_j} \\
&= (-1)^{n} \sum\limits_{m=0}^{m=\infty}(-1)^{m} \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \sum\limits_{u=i}^{u=\text{min}\{v_j\}} (-1)^{u} \binom{n}{n-u} \binom{n-u}{m} \\
&= (-1)^{n} \sum\limits_{m=0}^{m=\infty}(-1)^{m} \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \sum\limits_{u=i}^{u=\text{min}\{v_j\}} (-1)^{u} \binom{n-m}{u} \binom{n}{m} \\
&= (-1)^{n} \sum\limits_{m=0}^{m=\infty}(-1)^{m} \binom{n}{m} \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \sum\limits_{u=i}^{u=\text{min}\{v_j\}} (-1)^{u} \binom{n-m}{u} \\
&= (-1)^{n} \sum\limits_{m=0}^{m=\infty}(-1)^{m} \binom{n}{m} \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \left( (-1)^{\text{min}\{v_j\}}\binom{n-m-1}{\text{min}\{v_j\}} - (-1)^{i-1}\binom{n-m-1}{i-1} \right) \\
&= (-1)^{n} \sum\limits_{m=0}^{m=\infty}(-1)^{m} \binom{n}{m} \left( (-1)^{\text{min}\{v_j\}}\binom{n-m-1}{\text{min}\{v_j\}} + (-1)^{i}\binom{n-m-1}{i-1} \right) \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \\
&= \sum\limits_{m=0}^{m=\infty}(-1)^{m} \binom{n}{m} \left( (-1)^{n-\text{min}\{v_j\}}\binom{n-m-1}{\text{min}\{v_j\}} + (-1)^{n-i}\binom{n-m-1}{i-1} \right) \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \\
&= \sum\limits_{m=0}^{m=\infty}(-1)^{m} \binom{n}{m} \left( (-1)^{\text{max}\{n-v_j\}}\binom{n-m-1}{n-\text{max}\{n-v_j\}} + (-1)^{n-i}\binom{n-m-1}{i-1} \right) \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j} \\
&= \sum\limits_{m=\text{max}\{n-v_j\}}^{m=n-i}(-1)^{m} \binom{n}{m} \left( (-1)^{\text{max}\{n-v_j\}}\binom{n-m-1}{n-\text{max}\{n-v_j\}} + (-1)^{n-i}\binom{n-m-1}{i-1} \right) \prod\limits_{j=1}^{j=k} \binom{m}{n-v_j}
\end{align*}
\end{proof}
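Theorem \ref{cdf} transcribes just as directly; the following sketch (continuing the one above, and valid for $i \geq 1$, since $p(I \geq 0) = 1$) avoids the double summation:
\begin{verbatim}
def coincidence_pvalue_closed_form(i, n, v):
    # P(I >= i) via Theorem (cdf); requires i >= 1.
    a = max(n - vj for vj in v)
    numerator = sum(
        (-1) ** m * comb(n, m)
        * ((-1) ** a * comb(n - m - 1, n - a)
           + (-1) ** (n - i) * comb(n - m - 1, i - 1))
        * prod(comb(m, n - vj) for vj in v)
        for m in range(a, n - i + 1)
    )
    return numerator / prod(comb(n, vj) for vj in v)
\end{verbatim}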
Plots of the PMF/CDFs for some values of the parameters are shown in Figure \ref{pdfs}. The behavior of the test in an example case is illustrated in Figure \ref{illustration1case}.
\subsection{CDF generating function and incomplete beta function}
The generating function for the values of CDF$(i)$, that is with $i$ and $n$ fixed and $v=(v_1,\ldots, v_k)$ variable, is nearly expressible as the regularized incomplete beta function $I_{x}(a,b)$ with certain arguments, establishing a strong analogy with the binomial distribution. The number of configurations with incidence statistic at most $i$ is given by the generating function:
\begin{align*}
\sum\limits_{v}&\sum\limits_{u=0}^{u=i}\binom{n}{u}a(n-u, (v_1-u,\ldots, v_k-u))t^{v} = \sum\limits_{u=0}^{u=i}\binom{n}{u} (f(t)-t_1\cdots t_k)^{n-u}(t_1\cdots t_k)^{u}\\
&= f(t)^{n} \sum\limits_{u=0}^{u=i} \binom{n}{u}\left(1 - \frac{t_1\cdots t_k}{f(t)}\right)^{n-u}\left( \frac{t_1\cdots t_k}{f(t)}\right)^{u}\\
&= f(t)^{n} \thinspace I_{1 - \frac{t_1\cdots t_k}{f(t)}}(n-i, i+1)
\end{align*}
The last equation above is a ``formal'' application of the expression for the CDF of a binomial distribution with $n$ trials, that is,
\begin{align*}
\sum\limits_{u=0}^{u=i} \binom{n}{u} p^{u}(1-p)^{n-u} = I_{1-p}(n-i, i+1)
\end{align*}
except that instead of the usual real parameter $p\in [0,1]$ of such a distribution, $p$ must be permitted to be equal to the power series $\frac{t_1\cdots t_k}{f(t)}$ which tabulates information across all of the different values of the parameters $v=(v_1,\ldots, v_k)$.
The total number of configurations is given by the generating function $(f(t))^{n}$, so the generating function for CDF$(i)$ is the ratio:
\begin{align*}
&f(t)^{n}I_{1 - \frac{t_1\cdots t_k}{f(t)}}(n-i, i+1)\thinspace // \thinspace f(t)^{n}
\end{align*}
Here the double division symbol $//$ means the coefficient-wise ratio of the multi-dimensional series represented by the respective generating functions. Thus, despite the analogy with the binomial distribution, the generating function for CDF$(i)$ is not literally equal to $I_{1 - \frac{t_1\cdots t_k}{f(t)}}(n-i, i+1)$.
\section{Software implementation}
\subsection{Python package}
A Python package \texttt{\href{https://pypi.org/project/coincidencetest/}{coincidencetest}} is released on PyPI. It contains a self-contained module, with no dependencies beyond the standard library, that calculates the $p$-value for the test.
\subsection{Command-line tool}
A command-line tool is distributed with \texttt{coincidencetest} that bundles together a basic, lightweight signature discovery algorithm as well as test evaluation on an input binary matrix file. This may be run in a non-interactive context on a remote server or as part of a pipeline.
\subsection{Web application}
A simple GUI performs signature discovery and evaluation in real-time after user upload of a binary matrix file. A screenshot is shown in Figure \ref{screenshot_gui}.
\subsection{Testing}
The Python package contains a test suite which verifies the $p$-value formulas (i.e. the PMF and CDF) against brute-force enumerations for several small values of the parameters, furnishing rigorous computational evidence for the main theorems in addition to the proofs.
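As an illustration of the kind of check involved, a brute-force enumeration over all configurations with prescribed column sums can be written in a few lines (a sketch of the idea, not the package's actual test suite):
\begin{verbatim}
from itertools import combinations, product

def brute_force_pmf(i, n, v):
    # Enumerate all configurations with |U_j| = v_j and count those
    # whose mutual intersection has size exactly i.
    hits = total = 0
    for cols in product(*(combinations(range(n), vj) for vj in v)):
        inter = set(range(n))
        for col in cols:
            inter &= set(col)
        hits += (len(inter) == i)
        total += 1
    return hits / total
\end{verbatim}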
\section{Related work}
The test turns out to specialize to the Fisher exact test\autocite{fisher1922} in the case of 2 features, $|F|=2$. The incidence statistic and the frequencies of each feature provide the same information as a $2 \times 2$ integer contingency table, and the formula for the probability value agrees with ours in this case.
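Concretely, for $k=2$ our null distribution of $I$ is hypergeometric, so the agreement can be checked numerically, e.g. against SciPy (a sketch reusing the \texttt{coincidence\_pmf} function above):
\begin{verbatim}
from scipy.stats import hypergeom

# For k = 2, I ~ Hypergeom(n, v1, v2) under the null.
n, v1, v2 = 100, 30, 40
for i in range(min(v1, v2) + 1):
    assert abs(coincidence_pmf(i, n, (v1, v2))
               - hypergeom.pmf(i, n, v1, v2)) < 1e-9
\end{verbatim}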
The Fisher exact test has been generalized to larger, $r \times c$ contingency tables\autocite{highercontingency}. Whether such tables are regarded as pertaining to 2 categorical variables with $r$ and $c$ categories respectively, or as pertaining to pairs of binary variables, one from a list of $r$ variables and one from a list of $c$ variables, contingency table methods are second-order in that they only involve interactions between pairs of variables. Much work on exact inference generally has focused on contingency tables, with multi-dimensional generalizations appearing in the literature up to order 3 (e.g. $I\times J\times K$ tables\autocite{agresti1992survey}).
By contrast our test is inherently higher-order, depending, albeit in a simple way, on the mutual interaction of all $k$ variables. As for other higher-order methods, an investigation of the joint distribution of Bernoulli variables under certain constraints has been published\autocite{kolev2006joint}, and this may yield a test with comparable domain of applicability as our test. However, as indicated in section \ref{formuladist}, the Bernoulli context involves a different null assumption.
In Good\autocite{good1976application} a generating function very similar to our $g(s,t)$ is identified as a tabulation of the number of \emph{contingency tables} (not binary matrix configurations) with fixed column and row sums. The function is $g(-s,t)^{-1}=g(s,-t)^{-1}$ (cf. page 1166, item 5.6, and page 1182, ``$f(\mathbf{z})$''). This connection may help to explain the appearance of the beta function in the generating function for the CDF of the incidence statistic.
\section{$f$-Cal{}: Variational Inference for Aleatoric Uncertainty Calibration}
\label{sec:approach}
In this section we present $f$-Cal{}, a principled approach to obtain calibrated aleatoric uncertainty from neural nets.
\subsection{Calibration as Distribution Matching}
Following the definition of distributional calibration (Def.~\ref{definition:calibration}), $f$-Cal{} formulates a variational minimization objective to calibrate the uncertainty estimates from a deep network.
In the case of a traditional (non-deep-learning-based) sensor, we would calibrate the noise distribution with the following procedure:
\begin{enumerate}
\item Choose a distribution family for the noise
\item For a fixed and known input value, draw multiple samples of the output observations
\item Fit the output samples to the distribution family
\end{enumerate}
In the DNN case, we only have one sample for any given input and no knowledge of the ground-truth (noise-free) label. We can similarly choose a distribution family for our model, but we cannot assume that any of the parameters are fixed across samples. Our approach to overcome this problem is to assume that there is some canonical element of the distribution family to which we can transform each predictive distribution. Specifically, we seek to approximate the empirical posterior over some canonical transformation of the target variables $Y$ by a simpler (tractable) target distribution $Q$ (a modeling choice).
This enables us to leverage an abundant class of distribution matching metrics, $f$-divergences, to formulate a loss function enforcing distributional calibration.
For tractable inference, we assume i.i.d. mini-batches of training data and instead impose distribution matching losses over empirical error residuals across each batch.
We assume that we can transform each training sample output distribution to some canonical element of the distribution family. For instance, Gaussian predictions are canonicalized by centering (subtracting the predicted mean from the label), followed by normalization (scaling the result by the inverse standard deviation). These canonical elements are used (in conjunction with the labels) to determine the \emph{empirical} error distribution.
$f$-Cal{} then performs distribution matching across this empirical and a target distribution.
\subsection{$f$-Cal{} Algorithm}
Given a mini-batch containing $N$ inputs $x_i$, a probabilistic regressor predicts $N$ sets of distributional parameters $f_p(x_i) = \phi_i$ ($\phi_i \in \Phi$) of the corresponding probability distribution $s(y_i; \phi_i)$. Define $g: \mc{Y} \times \Phi \mapsto \mc{Z}$ as the function that maps the target random variable $y_i$ to a random variable $z_i$ which follows a known canonical distribution. Under calibration, these residuals $\{z_1, z_2, \ldots, z_N\}$ should follow a chosen calibrating (target) distribution $Q$:
\begin{equation}
z_i = g(y_i, \phi_i) \sim Q
\label{eq:z}
\end{equation}
The key difference between \eqref{eq:y} and \eqref{eq:z} is that \eqref{eq:z} now applies \textbf{for all samples} in the dataset, as opposed to just a single sample. As a result, we can now follow a procedure similar to the one used with a traditional sensor: compute the empirical statistics of the residuals $z_i$ across the entire set (or, in practice, across a mini-batch) to fit a proposal distribution $P_z$, and minimize the distributional distance from the canonical distribution $Q$.
This minimization can be performed with a variational loss function that minimizes an $f$-divergence, $D_f(P_z||Q)$, between these two distributions.
In summary, we propose a distribution matching loss function that augments typical supervised regression losses, and results in the neural regressor being calibrated to the target distribution:
\begin{align}
\label{eq:fcal}
\mathcal{L} &= (1 - \lambda) R_{emp}(f_p) + \lambda \mathcal{L}_{\text{$f$-Cal{}}} \\
& = (1 - \lambda) R_{emp}(f_p) + \lambda D_f(P_z || Q) \nonumber
\end{align}
\noindent
where $\lambda$ is a hyper-parameter to balance the two loss terms (we provide a thorough analysis of the choice of $\lambda$ in Sec.~\ref{sec:results}).
We experiment with a number of $f$-divergence choices, and identify KL-divergence and Wasserstein distance as viable choices.
Importantly, $f$-Cal{} is agnostic to the choice of probabilistic deep neural regression model or task. In practice, it is a straightforward modification to the training loss function that can also be applied as a fine-tuning step to a previously partially trained model.
\subsection{$f$-Cal{} for Gaussian calibration}
\label{sec:gaussian-calibration}
The $f$-Cal{} framework is generic and can be applied to arbitrary distributions. In this section we consider the case when the distribution $s(y_i;\phi_i)$ is Gaussian with $\phi_i \triangleq (\mu_i,\sigma_i)$. The variance $\sigma_i^2$ denotes the aleatoric uncertainty in this case. The error residuals are computed as $z_i = \frac{y_i - \mu_i}{\sigma_i}$, where $\mu_i$ and $\sigma_i$ are the predicted mean and standard deviation of the $i$th Gaussian output of the neural network for input $x_i$. If $y_i \sim \mathcal{N}(\mu_i,\sigma_i^2)$, then $z_i \sim \mathcal{N}(0,1)$.
\begin{algorithm} [tb]
\SetCustomAlgoRuledWidth{0.6\textwidth} %
\SetAlgoLined
\SetKwInOut{KInput}{Input}
\DontPrintSemicolon
\KInput{Dataset $D$, probabilistic neural regressor, $f_p$, degrees of freedom $K$, batch size $N$, number of samples for hyper-constraint $H$}
\For{$i = 1 \ldots N$}{
$(\mu_i, \sigma_i) \gets f_p(x_i)$ \\
$z_i \gets \frac{y_i - \mu_i}{\sigma_i}$ \\
}
$C = \emptyset$ \tcp*{\textnormal{Samples from Chi-squared distribution}}
\For{$i = 1 \ldots H$}{
\tcp{\textnormal{Create a chi-squared hyper-constraint}}
$\displaystyle q_i \gets \sum_{j = 1}^{K} z_{ij}^2, \quad z_{ij} \sim \{z_1, \ldots, z_N\}$ \\
C.append($q_i$)
}
$P_z \gets \textnormal{Fit-Chi-Squared-Distribution}(C)$ \\
$\mathcal{L}_{\text{$f$-Cal{}}} \gets D_f(P_z || \chi_K^2)$ \\
\Return $\mathcal{L}_{\text{$f$-Cal{}}}$
\caption{$f$-Cal{} for Gaussian uncertainties}
\label{algorithm-gaussian}
\end{algorithm}
Optionally, one may apply several transforms to the random variables $y_i$ and impose distributional \emph{hyper-}constraints over the transformed variables. In practice, we find that this can improve the stability of the training process and enforces more stable calibration. In this case we compute the sum of squared error residuals $q = \sum_{i=1}^{K} z_i^2$, and enforce the resulting distribution to be chi-squared with parameter $K$, i.e., $q \sim \chi^2_K$, so in this case the target distribution is $Q = \chi_K^2$. Subsequently, we note that as the degrees of freedom $K$ of a chi-squared distribution increase, it can be approximated by a Gaussian of mean $K$ and variance $2K$ through the central limit theorem:
\begin{align*}
\frac{\chi^2_K -K}{\sqrt{2K}} \xrightarrow[K \to \infty]{d} \mathcal{N}(0,1),
\quad \text{so for large } K, \quad \chi^2_K \approx \mathcal{N}(K,2K)
\label{cs_central_limit_theorem}
\end{align*}
In practice, this variation of the central limit theorem for Chi-squared random variables holds for moderate values of $K$ (i.e., $K > 50$). This is practical to ensure, particularly in dense regression tasks such as bounding box object detection (where hundreds of proposals have to be scored per image) and per-pixel regression. We summarize the process for generating the calibration loss in Alg. \ref{algorithm-gaussian}. This loss is then combined with the typical empirical risk as given by \eqref{eq:fcal}.
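For illustration, a minimal PyTorch sketch of Alg.~\ref{algorithm-gaussian} with the KL variant is given below (the tensor shapes and function name are our assumptions; the chi-squared fit uses the Gaussian moment approximation just described):
\begin{verbatim}
import torch

def fcal_gaussian_loss(y, mu, sigma, K=64, H=128):
    # y, mu, sigma: 1-D tensors of length N >= K.
    z_sq = ((y - mu) / sigma) ** 2      # chi^2_1 under perfect calibration
    N = z_sq.shape[0]
    # H hyper-constraints, each summing K squared residuals sampled
    # uniformly without replacement: chi^2_K samples under the null.
    idx = torch.stack([torch.randperm(N)[:K] for _ in range(H)])
    Q = z_sq[idx].sum(dim=1)
    mu_q, var_q = Q.mean(), Q.var()
    # Closed-form KL( N(mu_q, var_q) || N(K, 2K) ), using the Gaussian
    # CLT approximation of chi^2_K for moderate K.
    return (0.5 * torch.log(2 * K / var_q)
            + (var_q + (mu_q - K) ** 2) / (4 * K) - 0.5)
\end{verbatim}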
\section{Related Work}
\label{sec:relatedwork}
\vspace{-0.2cm}
The rapidly growing field of \textbf{Bayesian deep learning} has resulted in the development of models that estimate a \emph{distribution} over the output space~\cite{gal2016uncertainty,gal2016dropout, kendall2017uncertainties,lakshminarayanan2017simple, blundell2015weight}. There is a distinction between uncertainty that is due to the stochasticity of the underlying process (\emph{aleatoric}) versus uncertainty that is due to the model being insufficiently trained (\emph{epistemic})~\cite{kendall2017uncertainties}.
\textbf{Epistemic} uncertainty is often estimated by either using ensembles of neural networks or by stochastic regularization at inference time (Monte-Carlo dropout)~\cite{gal2016dropout, lakshminarayanan2017simple, tagasovska2019single}.
\textbf{Distributional} uncertainty is also being extensively studied, to detect out-of-training-distribution examples~\cite{hendrycks2016baseline, lakshminarayanan2017simple, hein2019relu,liang2017enhancing, guo2017calibration, hendrycks2018deep, mohseni2020self, malinin2018predictive, sehwag2019analyzing}. However, there is no direct approach to address distributional uncertainty in regression settings.
In this work, we assume distributional and epistemic uncertainty to be low (i.e., in-distribution setting with reasonably well-trained models such as those common in robot perception), and focus specifically on calibrating \textbf{aleatoric uncertainty} estimates in regression problems. Such challenging settings have received far less attention in terms of uncertainty estimation~\cite{kuleshov2018accurate, gp-beta, levi2019evaluating, bosch-calib}. Existing calibration techniques are post-hoc and either require a large held-out calibration dataset~\cite{bosch-calib} and/or add parameters to the model after training~\cite{bosch-calib, isotonic}.
Quantile regression methods~\cite{chung2020beyond,tagasovska2019single, ho2005calibrated, rueda2007calibration, taillardat2016calibrated} quantify uncertainty by the fraction of predictions in each quantile.
Other methods, such as isotonic regression and temperature scaling, have also been extended to the regression setting \cite{kuleshov2018accurate, bosch-calib}. The authors of \cite{gast2018lightweight} propose an alternative architecture for aleatoric uncertainty estimation. However, $f$-Cal{} is completely architecture agnostic, and can be applied to any probabilistic neural regressor. More recently, a \emph{calibration loss} was proposed in \cite{bosch-calib} that enforces the predicted variances to be equal to per-sample errors, thus grounding each prediction. However, this takes a \emph{local view} of the calibration problem, and while individual samples might appear well-calibrated, the overall distribution of the regressor errors exhibits a strong deviation from the expected target distribution (\emph{cf.} Sec.~\ref{sec:results}).
A recent approach that is somewhat similar to ours in spirit is Gaussian process beta calibration (GP-beta)~\cite{gp-beta}. It is a post-hoc approach that employs a Gaussian process model (with a beta-link function prior) to calibrate uncertainties during inference. This requires the computation of pairwise statistics, increasing inference time. The maximum mean discrepancy (MMD) loss \cite{cui2020calibrated} performs distribution matching to achieve calibration, but it was proposed for small datasets and does not scale well with input size. $f$-Cal{} is a better-performing loss function that requires the same inference time as typical Bayesian neural networks~\cite{kendall2015bayesian}.
\section{Introduction}
\label{sec:intro}
The \textit{performance} of deep neural network-based visual perception systems has increased dramatically in recent years. However, for safety-critical \textit{embodied} applications, such as autonomous driving, performance alone is not sufficient.
The absence of reliable and \emph{calibrated} uncertainty estimates in neural network predictions precludes us from incorporating these into downstream sensor fusion \cite{shin2018direct} or probabilistic planning \cite{romandhaivat, gopalakrishnan2017prvo, blackmore2006probabilistic} components.
The tools of probabilistic robotics require calibrated confidence/uncertainty measures, in the form of a \emph{measurement model} $z=h(x, \nu)$. For a traditional sensor, this model $h$ is specified by the designer's understanding of the physical sensing processes, and the noise distribution parameters $\nu$ are estimated by controlled calibration experiments with known ground truth states $x^*$ and sensor observations $z$.
However, for deep neural networks (DNNs) to be used as \emph{sensors} in typical robotic perception stacks, estimating the noise distribution is a much more challenging task, for several reasons. First, the domain of inputs is extremely high dimensional (e.g., RGB images); generating a calibration setup for every possible input is infeasible. Second, the noise distribution is input dependent (heteroscedastic). Finally, neural networks typically transform the inputs via millions of nonlinear operations, preventing approximation by simpler (e.g., piecewise affine) models.
We envision a \textbf{deep neural network (DNN) as a sensor} paradigm where a DNN outputs calibrated predictive distributions that may directly be used in probabilistic planning or sensor fusion. The challenge, however, is that these predictive distributions must be learned solely from training data, with neither additional postprocessing nor architectural modifications.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/intro_corl.png}
\caption{\textbf{$f$-Cal{}} enables us to obtain \emph{calibrated} measures of uncertainty from otherwise blackbox neural networks used in robot perception tasks. This didactic example demonstrates how $f$-Cal{} can estimate the \emph{aleatoric} uncertainty from object detectors. (a) depicts a ground-truth bounding box, a single-sample (Dirac delta) distribution; (b) and (c) denote \emph{uncalibrated} probabilistic outputs from a Bayesian neural network -- (b) is overconfident and inconsistent, (c) is consistent but underconfident; (d) denotes a calibrated estimate, i.e., the error ellipses correspond to the \emph{true} underlying aleatoric uncertainty.
}
\label{fig:intro_figure}
\end{figure}
Our key insight is that \textbf{distributional calibration cannot be achieved by a loss function that operates over individual samples}. This motivates a new loss function that enforces calibration through a distributional constraint that is imposed upon uncertainty estimates across multiple (i.i.d.) samples.
Specifically, our approach $f$-Cal{}, minimizes an $f$-divergence between a specified canonical distribution and an empirical distribution generated from neural network predictions, as shown in Fig. \ref{fig:pipeline}.
Unlike prior approaches~\cite{gp-beta, isotonic, bosch-calib}, we neither require a held-out calibration dataset nor impose any inference time overhead.
For a given performance threshold, $f$-Cal{} achieves better calibration compared to current art.
We demonstrate the effectiveness, scalability and widespread applicability of this approach on large-scale, real-world tasks such as object detection and depth estimation.
\section{Problem Setup}
\label{sec:background}
\subsection{Preliminaries}
\label{sec:preliminaries}
We assume a regression problem over an i.i.d. labelled training dataset
$\mathcal{D} \triangleq \{(x_i,y_i)\}_{i = 1 \ldots |\mc{D}|}$ with $x_i \in \mathcal{X}$ where $\mathcal{X}$ is the ($n$-dimensional) input space and $y_i \in \mathcal{Y}$ where $\mc{Y} \subseteq \mathbb{R}^n$ is the output space.
A \textit{deterministic} model $f_{d}:\mathcal{X}\mapsto\mathcal{Y}$~\footnote{In practice these models are assumed to be neural networks with parameters $\theta$ but we omit the $\theta$ for clarity at this stage.} directly learns the mapping from the input to the output space by minimizing a loss function $\mathcal{L}: \mathcal{Y}\times\mathcal{Y} \mapsto \mathbb{R}$, for example through empirical risk minimization:
\begin{equation}
R_{emp}(f_d) = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(f_d(x_i),y_i).
\label{eq:empirical_risk}
\end{equation}
Equation \ref{eq:empirical_risk} is typically estimated over a mini-batch of size $N \ll |\mc{D}|$ during stochastic gradient descent (SGD). Following the notation in \cite{gp-beta}, we desire a \textit{probabilistic} model $f_p:\mathcal{X}\mapsto \mc{S_Y}$ where $\mc{S_Y}$ is the space of all probability density functions $s(y)$ over $\mc{Y}$ ($s: \mc{Y} \mapsto [0,\infty)$ and $\int s(y)dy = 1$). The cumulative distribution function (CDF) associated with a probability density function (PDF) is $S(y) = \int_{-\infty}^y s(y')dy'$.
\subsection{Uncertainty Calibration}
Calibrated uncertainty estimates are those where the output uncertainties can be exactly interpreted as confidence intervals of the underlying target label distribution.
This allows uncertainty estimates across multiple samples (and models) to be compared. Intuitively, uncertainty calibration means that if we repeated a stochastic experiment a large number of times, for example by asking many different people to label the same image, the ``label generating distribution'' would match the predictive distribution of the model:
\vspace{-0.3cm}
\begin{equation}
y_i \sim f_p(x_i)
\label{eq:y}
\end{equation}
However, it is impractical to label every piece of data multiple times. Instead, we aggregate the labels across many different inputs to produce calibrated predictive distributions. Using our definitions from Sec. \ref{sec:preliminaries} and adapting from \cite{gp-beta}, we can define what we desire in terms of calibration in the case of a deep neural regressor as follows:
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{figures/pipeline-ICRA-dhaivat.png}
\caption{\textbf{$f$-Cal{} pipeline}: We make a conceptually simple tweak to the loss function in a typical (deterministic) neural network training pipeline. In addition to the empirical risk (e.g., $L1$, $L2$, etc.) terms, we impose a distribution matching constraint ($\mathcal{L}_{f-\textnormal{Cal}}$) over the error residuals across a mini-batch.
By encouraging the distribution of these error residuals to match a target \emph{calibrating distribution} (e.g., Gaussian), we ensure the neural network predictions are \emph{calibrated}.
Compared to prior approaches, most of which perform post-hoc calibration, or require large held-out calibration datasets, $f$-Cal{} does not impose an inference time overhead.
$f$-Cal{} is task and architecture agnostic, and we apply it to robot perception problems such as object detection and depth estimation.
}
\label{fig:pipeline}
\vspace{-0.3cm}
\end{figure*}
\begin{definition}[\textbf{Uncertainty Calibration}]
A neural regressor $f_p$ is calibrated if and only if \footnote{Referring to Fig.~\ref{fig:intro_figure}, the requirement for calibration is more stringent than that of consistency, which is a one-way constraint at an arbitrary confidence bound $c$: $p(Y \leq y | s(y)) \leq c$ }:
\begin{equation}
p(Y \leq y | s(y)) = \int_{-\infty}^y s(y')dy' \text{ } \forall y \in \mc{Y}
\end{equation}
\label{definition:calibration}
\end{definition}
In the above definition, $Y$ is the random variable of which $y$ is an instantiation. If we can assume that the noise is sampled from a parametric distribution $s(y;\phi)$, then the probabilistic model need only output the parameters associated with each sample.
In this case, we can consider the model to be calibrated if and only if the aggregated error statistics over multiple outputs of a model align with the parameters predicted by the model.
\subsection{Loss Attenuation (Negative Log-Likelihood - NLL)}
The most widely used technique for estimating heteroscedastic aleatoric uncertainty is \textit{loss attenuation} \cite{kendall2017uncertainties, loss_att}, which performs maximum likelihood estimation by minimizing the negative log-likelihood loss:
\begin{align}
R_{emp}(f_p) & = - \frac{1}{N}\sum_{i=1}^{N} \mathcal{L_{\text{LA}}}(f_p(x_i),y_i) \nonumber \\
& = - \frac{1}{N}\sum_{i=1}^{N} \log{s(y_i; f_p(x_i))}
\end{align}
\noindent
since $f_p(x_i)$ outputs the parameters of the distribution.
For example, if the aleatoric uncertainty is characterized by a Gaussian random variable ($\phi \triangleq (\mu,\sigma)$), the above expression becomes
\begin{equation}
R_{emp}(f_p) = \frac{1}{N} \sum_{i = 1}^{N} \frac{1}{2} \left( \frac{(y_i - \mu_i)^2}{\sigma_i^2} + \log{\sigma_i^2} \right)
\label{equation:nll}
\end{equation}
We refer to the loss in \eqref{equation:nll} as the \textbf{NLL} loss in the experiments. However, probabilistic neural regressors trained using this NLL objective typically lack \textbf{calibration} according to Def.~\ref{definition:calibration}. %
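For reference, the Gaussian NLL objective of \eqref{equation:nll} is a one-liner in PyTorch (a sketch, up to additive constants):
\begin{verbatim}
import torch

def nll_loss(y, mu, sigma):
    # Loss attenuation: mean of 0.5 * (residual^2 / var + log var).
    return 0.5 * (((y - mu) / sigma) ** 2 + torch.log(sigma ** 2)).mean()
\end{verbatim}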
\section{Deriving the $f$-Cal{} Loss}
In this section, we derive the KL-divergence and W-dist losses presented in this work.
Suppose the neural regressor predicts $N$ regression variables over an entire batch of inputs. We use the following notation for the predictions and ground truth.
\begin{itemize}
\item predicted means: $\mu_1, \mu_2, \ldots, \mu_N$
\item predicted variances: $\sigma_1^2, \sigma_2^2, \ldots, \sigma_N^2$
\item ground truth: $y_1, y_2, \ldots, y_N$
\item $K$ = degrees of freedom of a chi-squared random variable; generally $K > 50$.
\end{itemize}
Here $N$ is assumed to be large, generally $N > 1000$, and $K$ is a hyper-parameter.
\begin{itemize}
\item $z_i^2 = \frac{(y_i - \mu_i)^2}{\sigma_i^2} \sim \chi_1^2$, $i \in \{1, 2, \ldots, N\}$, are squared Mahalanobis distances with 1 degree of freedom.
\end{itemize}
\begin{align*}
& Q_i = \sum_{j = 1}^{K} z_{ij}^2 = \sum_{j = 1}^{K} \frac{(y_{ij} - \mu_{ij})^2}{\sigma_{ij}^2} \\
& y_{ij} \sim \{y_1, y_2, \ldots, y_N\} \\
& \boxed{Q_i = \sum_{j = 1}^{K} \frac{(y_{ij} - \mu_{ij})^2}{\sigma_{ij}^2} \sim \chi_K^2}
\end{align*}
Here, $\mu_{ij}$ and $\sigma_{ij}$ are the predictions corresponding to $y_{ij}$, which is sampled uniformly without replacement. We have $H$ such $Q_i$, each distributed as a chi-squared random variable with $K$ degrees of freedom; $H$ is a hyper-parameter, the number of chi-squared samples.
The empirical distribution of these $H$ random variables approximates a chi-squared distribution, and for $K > 50$, $\chi_K^2 \approx \mathcal{N}(K, 2K)$.
The empirical mean $\mu_{\chi_K^2}$ and variance $\sigma_{\chi_K^2}^2$ of this chi-squared sample can be written as follows:
\begin{align*}
& \mu_{\chi_K^2} = \frac{1}{H} \sum_{i = 1}^{H} Q_i = \frac{1}{H} \sum_{i = 1}^H \sum_{j = 1}^{K} \frac{(y_{ij} - \mu_{ij})^2}{\sigma_{ij}^2} , \qquad \sigma_{\chi_K^2}^2 = \frac{1}{H-1} \sum_{i = 1}^{H}(Q_i - \mu_{\chi_K^2})^2 \\
& \sigma_{\chi_K^2}^2 = \frac{1}{H-1} \sum_{i = 1}^{H} \bigg( \sum_{j = 1}^{K} \frac{(y_{ij} - \mu_{ij})^2}{\sigma_{ij}^2} - \mu_{\chi_K^2} \bigg)^2
\end{align*}
The above equations give the empirical mean and variance of our chi-squared sample.
By the central limit theorem, a chi-squared distribution with $K$ degrees of freedom ($K \geq 50$) approximately follows a Gaussian distribution with mean $K$ and variance $2K$. Hence the target mean $\hat{\mu}_{\chi_K^2}$ and target variance $\hat{\sigma}_{\chi_K^2}^2$ are
\begin{align*}
\hat{\mu}_{\chi_K^2} = K,
\hat{\sigma}_{\chi_K^2}^2 = 2K
\end{align*}
Proposal distribution: $ \boxed{p(x) = \mathcal{N}( \mu_{\chi_K^2}, \sigma_{\chi_K^2}^2)} $
Target distribution: $ \boxed{ q(x) = \mathcal{N}(\hat{\mu}_{\chi_K^2}, \hat{\sigma}_{\chi_K^2}^2) } $ \\
We now have the statistics of the proposal distribution $p(x)$ and the target distribution $q(x)$. The closed-form KL-divergence and Wasserstein distance between two univariate normal distributions can be expressed as follows:
\begin{equation*}
\begin{aligned}
\textnormal{KLD} & = KL(p||q) = \frac{1}{2} \log \bigg( \frac{\hat{\sigma}_{\chi_K^2}^2}{\sigma_{\chi_K^2}^2} \bigg) + \frac{\sigma_{\chi_K^2}^2 + (\mu_{\chi_K^2} - \hat{\mu}_{\chi_K^2})^2}{2\hat{\sigma}_{\chi_K^2}^2} - \frac{1}{2} \\
\end{aligned} \\
\end{equation*}
\begin{equation*}
\boxed{
\textnormal{KLD} = KL(p||q) = \frac{1}{2} \log \bigg( \frac{2K}{\sigma_{\chi_K^2}^2} \bigg) + \frac{\sigma_{\chi_K^2}^2 + (\mu_{\chi_K^2} - K)^2}{4K} - \frac{1}{2}
}
\end{equation*}
where $\mu_{\chi_K^2}$ and $\sigma_{\chi_K^2}^2$ are the empirical statistics defined above.
\begin{equation*}
\begin{aligned}
\textnormal{W-dist} & = W(p,q) = (\mu_{\chi_K^2} - \hat{\mu}_{\chi_K^2})^2 + (\hat{\sigma}_{\chi_K^2}^2 + \sigma_{\chi_K^2}^2 - 2\sigma_{\chi_K^2}\hat{\sigma}_{\chi_K^2}) \\
\end{aligned} \\
\end{equation*}
\begin{equation*}
\boxed{
\textnormal{W-dist} = W(p,q) = (\mu_{\chi_K^2} - K)^2 + \Big( \sigma_{\chi_K^2} - \sqrt{2K} \Big)^2
}
\end{equation*}
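These closed forms are straightforward to sanity-check numerically, e.g. against the Gaussian KL divergence implemented in \texttt{torch.distributions} (a sketch with arbitrary example moments):
\begin{verbatim}
import torch
from torch.distributions import Normal, kl_divergence

K = 100.0
p = Normal(torch.tensor(103.0), torch.tensor(15.0))        # proposal
q = Normal(torch.tensor(K), torch.tensor((2 * K) ** 0.5))  # target N(K, 2K)
kld = (torch.log(q.scale / p.scale)
       + (p.scale ** 2 + (p.loc - q.loc) ** 2) / (2 * q.scale ** 2) - 0.5)
assert torch.allclose(kld, kl_divergence(p, q))
w_dist = (p.loc - q.loc) ** 2 + (p.scale - q.scale) ** 2   # squared 2-Wasserstein
\end{verbatim}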
\section{Implementation Details}
\subsection{Bokeh: A synthetic disc-tracking benchmark}
\textbf{Dataset}: We design Bokeh with the complexities of a typical computer-vision regression problem in mind.
The goal is to predict the 2D location of the centre of a red disc from an input image containing other distractor discs.
All discs are sampled from a known data-generating distribution\footnote{This is important, as it frees us from the usual handicaps of real data, where one may not have access to the label-generating distribution.}.
Randomly coloured discs are added to the image to occlude the red disc and act as distractors. The locations and radii of these distractor discs are sampled from a uniform distribution. We generate homoscedastic and heteroscedastic variants of the dataset.
We introduce noise to the ground-truth labels and create two separate synthetic datasets, one where the noise is homoscedastic and another where it is heteroscedastic. The noise in $x$ and $y$ is sampled independently from a Gaussian distribution. In the homoscedastic case, the noise-generating distribution is $\mathcal N(0, \sigma)$, where $\sigma$ is a fixed value. In the heteroscedastic case, the noise is generated from the distribution $\mathcal N(0, \sigma(x))$, where $\sigma$ is a function of the input image $x$. $\sigma(x)$ depends on the proximity of the distractor discs to the red disc: if the distractor discs are nearby or occluding the red disc, $\sigma(x)$ is high, and it is low when they are far away. We split the dataset into training, validation and test sets in the proportion 3:1:1.
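For concreteness, a hypothetical sketch of the heteroscedastic label-noise generation follows; the exact functional form of $\sigma(x)$ used in Bokeh is described only qualitatively above, so the exponential decay below is purely illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def noisy_center(center, distractor_distance, sigma_max=5.0, scale=20.0):
    # Hypothetical sigma(x): label noise grows as distractor discs
    # approach the red disc.
    sigma = sigma_max * np.exp(-distractor_distance / scale)
    return center + rng.normal(0.0, sigma, size=2)
\end{verbatim}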
\textbf{Training:}
We train $f$-Cal{}-KLD and $f$-Cal{}-Wass with the KL-divergence and the Wasserstein distance, respectively, as the loss measuring the distance between the predicted and target chi-squared distributions. All the baseline calibration methods (Table~\ref{table:toy_exp_full}) were initialized with NLL-loss-trained weights.
\begin{figure*}[ht]
\centering
\includegraphics[width=\columnwidth]{figures/datasets_corl.png}
\caption{\textbf{Datasets}: We evaluate baselines, current-art, and $f$-Cal{} on $3$ datasets and multiple tasks. a) We create a \textbf{synthetic dataset (Bokeh)} where the task is to regress the coordinates of the center of a unique red disk. b) \textbf{KITTI Object Detection} and \textbf{KITTI Depth Estimation} benchmark datasets~\cite{kitti} c) Object detection on \textbf{Cityscapes}~\cite{cordts2016cityscapes}}
\label{fig:dataset}
\end{figure*}
\subsection{KITTI Depth Estimation}
To test the scalability of $f$-Cal{} to real-world robotics tasks, we evaluate it on depth estimation and object detection. For depth estimation, we use the KITTI depth estimation benchmark dataset. We modify the BTS\cite{big2small} model into a Bayesian neural network by adding an uncertainty decoder, which predicts the standard deviation of the output distribution. As in BTS\cite{big2small}, we train our network on a subset of nearly 26K images from KITTI\cite{kitti}, corresponding to scenes disjoint from the 697-image test set. The depth maps have an upper bound of 80 meters. All models were initialized with NLL-loss-trained weights.
The loss function used for training these models is given in Eq. \ref{eq:depth_loss}.
\begin{equation}
\mathcal{L} = \mathcal{L}_{reg} + \lambda \mathcal{L}_{cal}
\label{eq:depth_loss}
\end{equation}
Here $\mathcal{L}_{reg}$ is the SiLog loss function used in the BTS\cite{big2small} paper, and $\mathcal{L}_{cal}$ can be the NLL, calibration, or $f$-Cal{} losses. As mentioned in the main paper, $\lambda$ is a trade-off parameter which controls the balance between calibration and deterministic performance.
\subsection{Object Detection}
We use the popular Faster R-CNN~\cite{ren2015faster} with a feature pyramid network~\cite{lin2017feature} and a ResNet-101~\cite{he2016deep} backbone. We use the publicly available detectron2~\cite{wu2019detectron2} and PyTorch~\cite{paszke2019pytorch} implementations and extend the model to regress uncertainty estimates.
For uncertainty estimation in object detectors, we add an uncertainty head in stage 2 of the network. We employ the \textit{xyxy} bounding box parameterization used in~\cite{he2019bounding}, as opposed to the \textit{xywh} parameterization used in~\cite{ren2015faster}. With the \textit{xyxy} parameterization, only linear transformations are applied to the predictions to obtain the final bounding box, which ensures that the final prediction over the bounding box remains a Gaussian distribution; a Gaussian passed through a nonlinear transformation would lose its Gaussianity. The uncertainty head consists of a fully connected layer followed by a generalized sigmoid nonlinearity, $g(x) = \alpha + \frac{\beta - \alpha}{1 + \exp{(-\eta x)}}$. Here, $\beta$ is the upper asymptote, $\alpha$ is the lower asymptote, and $\eta$ is the sharpness. For all the experiments in this section, we set $\alpha = 0$, $\beta = 50$ and $\eta = 0.15$. These hyperparameters are chosen to allow a wide range of uncertainty estimates. The generalized sigmoid bounds the variance predictions and provides stable training dynamics. All the baseline calibration models were initialized with NLL-loss-trained weights and trained with a learning rate of $10^{-4}$.
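For concreteness, the generalized sigmoid with the stated hyper-parameters is a direct transcription:
\begin{verbatim}
import torch

def generalized_sigmoid(x, alpha=0.0, beta=50.0, eta=0.15):
    # Bounds the uncertainty-head outputs to (alpha, beta);
    # eta controls the sharpness.
    return alpha + (beta - alpha) / (1 + torch.exp(-eta * x))
\end{verbatim}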
For \textbf{KITTI}, the annotated data contains 7481 images, divided into train/val/test splits: 4500 images for training, 500 for validation, and the remaining 2481 for testing.
For temperature scaling~\cite{bosch-calib}, we use the validation set to learn the temperature parameter, training it until its value stabilizes.
We follow exactly the same procedure for the \textbf{Cityscapes} dataset. Cityscapes provides 3475 annotated images, of which 2500 are used for training, 475 for validation, and 500 for testing. We perform the same holdout validation as for KITTI, choose the best-performing model for testing, and use the same temperature-scaling procedure.
\input{tables/toy-experiments-full}
\section{Additional Results}
\subsection{Bokeh }
\label{suppl:bokeh_results}
Table~\ref{table:toy_exp_full} shows the results for $f$-Cal{} and the baseline calibration techniques on the Bokeh dataset with both homoscedastic and heteroscedastic noise. $f$-Cal{} outperforms all baseline methods on all calibration metrics while maintaining similar, and sometimes better, deterministic performance compared to the base model (i.e., NLL loss). Fig.~\ref{fig:toy_reliability} shows qualitatively that the output distributions of $f$-Cal{}-trained models are much closer to the ground-truth distribution than those of the baselines, and that the reliability curves of $f$-Cal{} models lie much closer to the diagonal line representing perfect calibration. Apart from the baselines in Table~\ref{table:toy_exp_full}, we also trained MMD~\cite{cui2020calibrated}, but the model fails to converge on Bokeh, the simplest of the three datasets used in this paper.
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=0.8\textwidth]{figures/toy_exps/Toy_distribution_and_reliability_plots.png}
\caption{\textbf{Distributional and Reliability Diagrams (Bokeh)}: (Col 1) with Homoscedastic Noise (Col 2) with Heteroscedastic noise. (Row 1) shows the chi-squared distributional comparison of the predicted outputs with the target. (Row 2) shows the reliability diagram with chi-squared distribution. (Row 3) shows the standard-normal distributional comparison. (Row 4) shows the reliability curve with standard-normal variables.}
\label{fig:toy_reliability}
\end{center}
\end{figure*}
\subsection{Object detection}
\label{suppl:object_detection}
In this section, we report extensive quantitative results on object detection with $f$-Cal{}. In the main paper, we reported expected calibration error (ECE) and negative log-likelihood (NLL). Here, we additionally report the maximum calibration error, KLD, and the Wasserstein distance between the proposed and target distributions.
These metrics reflect the calibration quality of the models. For object detection, we additionally modify existing popular metrics such as mAP~\cite{lin2014microsoft} and PDQ~\cite{hall2020probabilistic} to report the consistency of the models. Calibration generally implies consistency (the converse does not hold). Calibration is defined in the main paper; here we define consistency as follows:
\begin{definition}[\textbf{Consistency}]
A neural regressor $f_p$ is consistent for an arbitrary confidence bound $c$ if
\begin{equation}
p(Y \leq y | s(y)) \leq c
\end{equation}
\label{definition:consistency}
\end{definition}
From the above definition, the requirement for calibration is more stringent than that for consistency: by predicting arbitrarily high uncertainty (Intro figure (c)), we can always obtain consistent predictions, even though the uncertainties can then no longer be interpreted as confidence scores. Through these new metrics, we show that consistency is a byproduct of calibration: we do not enforce any explicit consistency constraint, yet our results show highly consistent predictions. It is important to note that the consistency metrics we report do not automatically imply calibration, so they should be interpreted in conjunction with calibration metrics.
To this end, we modify two popular object detection evaluation metrics, mAP and PDQ, for consistency estimation. The new metrics are minor modifications of mAP and PDQ, designed to evaluate the consistency of the object detector, built on Definition~\ref{definition:consistency}.
Formally, let the ground-truth bounding box be $B_g = [x_1^{g}, y_1^{g}, x_2^{g}, y_2^{g}]$, represented by the top-left and bottom-right corners. The predicted box is represented by $B_d = \mathcal{N}(\mu_d, \Sigma_d)$, where $\mu_d = [\mu_{x_1}^{d}, \mu_{y_1}^{d}, \mu_{x_2}^{d}, \mu_{y_2}^{d}]$ and $\Sigma_d$ is the $4 \times 4$ covariance matrix of the bounding box. It can be either a full or a diagonal covariance matrix; in this work we assume a diagonal covariance, although the proposed metric applies to full covariance matrices as well. Given this, the loss attenuation formulation reads
\begin{equation}
\label{loss_att}
L_{la} = (B_g - \mu_d)^T \Sigma_d^{-1} (B_g - \mu_d) + \log(\det(\Sigma_d))
\end{equation}
In Equation~\ref{loss_att}, the first term is the squared Mahalanobis distance, which characterizes the number of standard deviations a point lies from the mean of a distribution. The squared Mahalanobis distance follows a chi-squared distribution with $p$ degrees of freedom. We obtain the Mahalanobis distance threshold $\mathcal{M}_{thresh}$ by evaluating the chi-squared distribution with $p$ degrees of freedom at confidence level $\alpha$; it denotes the probability of the ground truth lying inside the hyper-ellipse defined by the squared Mahalanobis distance.
In this work, we propose to use the confidence level between the ground truth and the predicted distribution as a quality measure for detection: the smaller the Mahalanobis distance, the closer the ground truth is to the predicted distribution, and the smaller the corresponding confidence contour.
We now formally define the confidence contour and the corresponding Mahalanobis distance. Suppose a confidence contour of a Gaussian distribution encloses probability mass $\alpha$, i.e., the probability of a random variable $X$ falling inside this contour is $\alpha$; the probability of exceeding the critical value is then $\beta = 1 - \alpha$, and the squared Mahalanobis distance threshold is $\mathcal{M}_{thresh} = \tilde{\chi}^2(p, \beta)$. That is, for a normal distribution $\mathcal{N}(\mu, \Sigma)$, if $X$ falls in the confidence contour $\alpha$, then $(X- \mu)^T\Sigma^{-1}(X - \mu) \leq \tilde{\chi}^2(p, 1 - \alpha)$.
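As an illustration, the resulting true-positive test used by the consistency metrics below can be sketched as follows (NumPy/SciPy assumed; this is our own sketch, not the exact evaluation code):
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.stats import chi2

def is_true_positive(b_g, mu_d, sigma_d, contour=0.8, p=4):
    # Squared Mahalanobis distance between the ground-truth box and the
    # predicted Gaussian (sigma_d is the 4 x 4 covariance matrix).
    diff = b_g - mu_d
    d2 = float(diff @ np.linalg.inv(sigma_d) @ diff)
    # The ground truth lies inside the `contour` confidence contour iff
    # d2 is below the chi-squared quantile with p degrees of freedom.
    return d2 <= chi2.ppf(contour, df=p)
\end{lstlisting}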
\begin{figure*}[h]
\begin{center}
\centering
\includegraphics[width=1.0\columnwidth]{figures/object/mMAP.png}
\vspace{0.3em}
\caption{The definition of a true positive changes once we adopt a confidence-contour-based criterion. The blue box represents the ground-truth object in both images. In deterministic detection (a), a proposal is a true positive if its IoU with the actual object exceeds a threshold. In probabilistic detection (b), uncertainty is represented by ellipses; the ellipses shown correspond to the 80\% confidence contour, and since the ground truth falls within this contour, the probabilistic detection is classified as a true positive.}
\label{fig:prob_eval}
\end{center}
\end{figure*}
\noindent
\textbf{\textit{Mean Mahalanobis average precision (mMAP):}}
Mean average precision (mAP)~\cite{lin2014microsoft} has been the most popular metric for evaluating object detectors. For consistency estimation, we modify it to incorporate probabilistic bounding-box predictions. In mAP, precision is calculated at IoU thresholds from $0.5$ to $0.95$ in steps of $0.05$. In mMAP, we replace the IoU thresholds with confidence-contour thresholds and determine true positives based on the Mahalanobis distance, as explained in Figure~\ref{fig:prob_eval}. In this work, we use thresholds of [0.999, 0.995, 0.99, 0.95, 0.9, 0.85, 0.8, 0.7]. We find mMAP to be a more informative metric for analyzing probabilistic consistency: the definition of consistency (Definition~\ref{definition:consistency}) requires the prediction to lie within a given confidence-contour bound ($c$), and this metric simply evaluates that requirement over multiple thresholds and averages across thresholds and categories, just like mAP.
\noindent
\textbf{\textit{Probability-based detection quality (PDQ):}}
\input{tables/cityscapes-plots}
PDQ (Probability-based Detection Quality)~\cite{hall2020probabilistic} is a recently proposed metric for evaluating probabilistic object detectors. It incorporates spatial and label quality into the evaluation criteria and explicitly rewards probabilistically accurate detections. Both spatial and label quality are computed between all possible pairs of detections and ground truths; the geometric mean of the two is then used to find the optimal assignment between detections and ground truths.
In PDQ, spatial quality is calculated by fusing background and foreground losses, computed from the ground-truth segmentation mask and the probabilistic detection. This requires masks at test time, which may be unavailable for bounding-box-based object detection datasets. Moreover, the evaluation objective of this spatial quality differs from the training objective of probabilistic object detectors, which are trained with an NLL loss. In contrast, our modified criterion (Figure~\ref{fig:prob_eval}) evaluates spatial quality without any segmentation mask. Incorporating the Mahalanobis-distance-based criterion enables the modified metric to evaluate consistency: the Mahalanobis distance reflects how many standard deviations the mean prediction lies from the sample (here, the ground truth), so a smaller Mahalanobis distance can be interpreted as better spatial quality. We redefine the spatial quality as
\begin{equation}
\label{spatialquality}
Q_s(B_g, B_d) = \exp\left( -\frac{(B_g - \mu_d)^T \Sigma_d^{-1} (B_g - \mu_d)}{T}\right)
\end{equation}
where $T$ is a temperature parameter (not to be confused with the temperature parameter of the temperature scaling method). It determines how strongly the Mahalanobis distance penalizes spatial quality: the higher the temperature, the lower the penalty. In our experiments, we set $T = 10$.
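The redefined spatial quality then amounts to (a NumPy sketch under the definitions above; names are ours):
\begin{lstlisting}[language=Python]
import numpy as np

def spatial_quality(b_g, mu_d, sigma_d, temperature=10.0):
    # Exponentiated, temperature-scaled negative squared Mahalanobis
    # distance between the ground-truth box and the predicted Gaussian.
    diff = b_g - mu_d
    d2 = float(diff @ np.linalg.inv(sigma_d) @ diff)
    return np.exp(-d2 / temperature)
\end{lstlisting}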
Both metrics complement mAP: mAP estimates how accurate the bounding boxes are, while these metrics indicate how consistent the uncertainty values are.
\input{tables/kitti-plots}
\noindent
\textbf{\textit{Cityscapes results:}}
In Tables~\ref{table:cs-consistency} and \ref{table:cs-calibration}, we report results on the Cityscapes~\cite{cordts2016cityscapes} dataset. $f$-Cal{} obtains highly consistent and calibrated uncertainty estimates. NLL Loss (full) produces overconfident uncertainty estimates, resulting in poor consistency, while baselines such as NLL Loss (base) and Calibration Loss achieve high consistency but poorer calibration, a result of inflated uncertainty estimates. $f$-Cal{} and temperature scaling yield calibrated and consistent uncertainty estimates while retaining deterministic performance; however, temperature scaling uses a holdout validation set to tune its temperature parameter, whereas the $f$-Cal{} results are obtained using training data only. We also note that $f$-Cal{} enables the model to learn uncertainty-aware representations, whereas under temperature scaling the learned representations are identical to those of the uncalibrated model.
\noindent
\textbf{\textit{KITTI results:}}
\input{tables/kitti-fulltable}
\input{tables/cityscapes-fulltable}
In Tables~\ref{table:kitti-consistency} and \ref{table:kitti-calibration}, we report extensive results across metrics and baselines. We see that $f$-Cal{} obtains highly consistent and calibrated results. When the entire model is trained with the NLL loss\footnote{In practice, the NLL loss is trained without freezing any part of the model. In this work, we train only the uncertainty head with the NLL loss to obtain a base model, which is used as the weight initializer for the other methods. NLL Loss (base) yields better consistency and calibration than NLL Loss (full); hence, in the main paper we report results for the base model rather than the full model. If the entire model is instead trained with loss attenuation, as done in practice, its mean-seeking nature yields extremely poor consistency.}~(\cite{loss_att}), its mean-seeking nature drives it to maximize the likelihood and predict very low uncertainty values, resulting in overconfident uncertainty estimates and inconsistent predictions, as evident from the mMAP, PDQ, and PDQ (spatial) values. Note that many baselines achieve high consistency but poorer calibration, a result of highly inflated uncertainty estimates; high consistency is of little use without good calibration, so consistency metrics must be interpreted in conjunction with calibration metrics to draw accurate conclusions. Overall, $f$-Cal{} achieves state-of-the-art calibration while maintaining competitive consistency. In Table~\ref{table:kitti-calibration}, we also report the maximum calibration error (MCE), the Wasserstein distance, and the KLD between the proposed and target distributions.
\subsection{KITTI Depth Estimation}
\label{suppl:depth_results}
\input{tables/depth_suppl}
We evaluate $f$-Cal{} and the baseline calibration methods using SiLog and RMSE for deterministic performance, and ECE(z), ECE(q), and NLL for calibration performance. We run every experiment over 5 seeds to ensure reproducibility and establish statistical significance. Table~\ref{table:kitti_depth_suppl} shows the full results, with both mean and standard deviation over the 5 seeds. $f$-Cal{} models outperform the baseline calibration techniques on all calibration metrics at similar deterministic scores. Unlike on Bokeh and object detection, temperature scaling struggles to calibrate uncertainties with a single scale/temperature parameter for depth estimation, owing to the complexity of the task and the size of the dataset. The ECE(q) values are also larger than in the Bokeh and object detection results; this can be attributed to the non-i.i.d. nature of nearby pixels in depth estimation, which makes it difficult for models to obtain a low ECE(q) score.
\vspace{0.5cm}
\section{$f$-Cal{} code snippet}
\vspace{0.5cm}
This is a Python implementation of $f$-Cal{} with the Wasserstein distance as the divergence minimization criterion. As can be seen, $f$-Cal{} is easily implemented using standard Python packages such as PyTorch~\cite{paszke2019pytorch} and NumPy~\cite{harris2020array}.
\vspace{0.5cm}
\lstinputlisting[language=Python]{code-snippets/f-cal-gaussian.py}
\newpage
\section{$f$-Cal{} beyond Gaussians:}
\vspace{0.5cm}
In this work, we proposed a way to calibrate aleatoric uncertainty modeled as Gaussian error. In many real-life settings, however, this assumption may not hold, and methods are needed to calibrate uncertainty in non-Gaussian setups as well. In this section, we show that $f$-Cal{} extends easily to such setups.
Given a mini-batch containing $N$ inputs $x_i$, a probabilistic regressor predicts $N$ sets of parameters $f_p(x_i) = \phi_i$ of the corresponding probability distributions $s(y_i; \phi_i)$. Define $h: \mc{Y} \times \Phi \mapsto \mc{Z}$ as the function that maps the target random variable $y_i$ to a random variable $z_i$ following a known canonical distribution. When $s$ is Gaussian, $\phi_i \triangleq (\mu_i,\sigma_i)$, and we choose $h(y_i; \phi_i) = \frac{y_i - \mu_i}{\sigma_i}$, so that the residuals $\{z_1, z_2, \ldots, z_N\}$ follow a standard normal distribution. We choose our target distribution $Q$ to be a chi-squared distribution, compute the empirical statistics of the residuals to fit a proposal distribution $P$ of the same family as $Q$, and define a variational loss function that minimizes the $f$-divergence between the two distributions.
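To make this construction concrete for the Gaussian case, consider the following minimal sketch (PyTorch assumed; the batch-sampling details are ours, and the code snippet provided earlier in this supplementary is authoritative):
\begin{lstlisting}[language=Python]
import torch

def chi_squared_hyperconstraints(y, mu, sigma, K=64, H=256):
    # Standard-normal residuals: z_i = (y_i - mu_i) / sigma_i.
    z = ((y - mu) / sigma).flatten()
    # Each hyper-constraint sums K squared residuals sampled from the
    # batch; for a calibrated model they follow chi-squared with K dof.
    idx = torch.randint(0, z.numel(), (H, K))
    return (z[idx] ** 2).sum(dim=1)
\end{lstlisting}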
For a non-Gaussian distribution, we must define the transformation $h$ so that the residuals $z_i$ follow a known canonical distribution. Here, we propose constructing $h$ through a series of transformations such that the final residuals are distributed as samples of a standard normal distribution, which lets us construct chi-squared hyper-constraints easily. Let $S(y; \phi) = \int_{-\infty}^y s(y', \phi)dy'$ be the cumulative distribution function (CDF) of the predicted probability distribution $s(y_i; \phi_i)$. Below, we revisit the definition of calibration,
\begin{equation}
p(Y \leq y \,|\, s(y)) = \int_{-\infty}^y s(y')\,dy' \quad \forall y \in \mc{Y} \nonumber
\end{equation}
\noindent
\begin{theorem}
\label{theorem1}
If $x$ is a univariate random variable with continuous and strictly increasing cumulative distribution function $F$, then transforming the random variable by its own CDF always leads to the same distribution, the standard uniform~\cite{embrechts2013note}. Hence, if $y = F(x)$, then $y \sim U[0,1]$.
\end{theorem}
Here, $p(Y \leq y | s(y))$ is the cumulative distribution function $S(y; \phi)$, so by Theorem~\ref{theorem1}, $p(Y \leq y | s(y)) \sim U[0, 1]$. Under a non-Gaussian modeling assumption, we transform the predictions to a uniform distribution using Theorem~\ref{theorem1} and then apply the quantile function of a standard normal distribution ($\mathcal{N}(0,1)$) to obtain standard normal residuals; thereafter, we follow the same procedure for constructing chi-squared hyper-constraints as in the main paper. We provide a Colab notebook with this supplementary material to illustrate the applicability of Theorem~\ref{theorem1}. If $S(y; \phi_i)$ is the CDF of $s(y)$ and $F^{-1}_{\mathcal{N}}(\cdot)$ is the inverse CDF of a standard normal distribution, the modified algorithm for the non-Gaussian case can be expressed as follows:
\begin{algorithm}[h]
\SetCustomAlgoRuledWidth{\textwidth} %
\SetAlgoLined
\SetKwInOut{KInput}{Input}
\DontPrintSemicolon
\KInput{Dataset $D$, probabilistic neural regressor, $f_p$, degrees of freedom $K$, batch size $N$, number of samples for hyper-constraint $H$}
\For{$i = 1 \ldots N$}{
$ \phi_i \gets f_p(x_i)$ \\
$z_i \gets F^{-1}_{\mathcal{N}}(S(y_i; \phi_i)) $
}
$C = \emptyset$ \tcp*{\textnormal{Samples from Chi-squared distribution}}
\For{$i = 1 \ldots H$}{
\tcp{\textnormal{Create a chi-squared hyper-constraint}}
$\displaystyle q_i \gets \sum_{j = 1}^{K} z_{ij}^2, \quad z_{ij} \thicksim \{z_1, \ldots, z_N\}$ \\
C.append($q_i$)
}
$P \gets \textnormal{Fit-Chi-Squared-Distribution}(C)$ \\
$\mathcal{L}_{\text{$f$-Cal{}}} \gets D_f(P || \chi_K^2)$ \\
\Return $\mathcal{L}_{\text{$f$-Cal{}}}$
\caption{$f$-Cal{} for non-Gaussian uncertainties}
\label{algorithm-non-gaussian}
\end{algorithm}
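The key differentiable step in Algorithm~\ref{algorithm-non-gaussian} is the composition $z_i = F^{-1}_{\mathcal{N}}(S(y_i; \phi_i))$. For a Laplace predictive distribution, it can be sketched as follows (PyTorch assumed; the clamping constant is ours, added for numerical stability; the full loss construction appears in the snippet below):
\begin{lstlisting}[language=Python]
import torch

def laplace_to_std_normal(y, mu, b, eps=1e-6):
    # u = S(y; phi): probability integral transform through the
    # predicted Laplace CDF; u ~ U[0, 1] if the model is calibrated.
    u = torch.distributions.Laplace(mu, b).cdf(y).clamp(eps, 1.0 - eps)
    # z = Phi^{-1}(u): standard-normal residuals via the normal quantile.
    return torch.distributions.Normal(0.0, 1.0).icdf(u)
\end{lstlisting}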
Note that the above algorithm requires the CDF $S(\cdot)$ to be continuous and differentiable. The rest of the pipeline is already differentiable and can be implemented using standard automatic differentiation~\cite{paszke2019pytorch}. Below, we present a code snippet for the case where uncertainties are modeled with a Laplace distribution,
\vspace{0.5cm}
\lstinputlisting[language=Python]{code-snippets/f-cal-non-gaussian.py}
\section{Experiments}
\label{sec:results}
We conduct experiments on a number of large-scale perception tasks, on both synthetic and real-world datasets. We report the following key findings, which we elaborate on in the remainder of this section.
\begin{compactenum}
\item $f$-Cal{} achieves significantly superior calibration compared to existing methods for calibrating aleatoric uncertainty.
\item These performance trends are consistently observed across multiple regression setups, neural network architectures, and dataset sizes.
\item We demonstrate that there is a trade-off between deterministic and calibration performance by varying the $\lambda$ hyper-parameter. This trade-off has been established in previous literature \cite{bosch-calib, guo2017calibration}. However, we further demonstrate empirically that this trade-off is inherently caused by a mismatch between the chosen noise distribution family and the true underlying noise distribution.
\end{compactenum}
\subsection{Regression tasks}
We consider 3 regression tasks: a synthetic disc tracking dataset (Bokeh), KITTI depth estimation~\cite{kitti} and KITTI object detection~\cite{kitti}. These tasks are chosen to span the range of regression tasks relevant for robotics applications: sparse (one output per image in disc tracking), semi-dense (object detection), and pixelwise (fully) dense (depth estimation).
Unless otherwise specified, we model aleatoric uncertainty using heteroscedastic Gaussian distributions.
\subsection{Baselines}
We compare $f$-Cal{} models with the following baselines: \textbf{NLL loss}~\cite{loss_att, gal2016uncertainty}, \textbf{Temperature scaling}~\cite{bosch-calib}, \textbf{Isotonic regression}~\cite{isotonic}, \textbf{Calibration loss}~\cite{bosch-calib} and \textbf{GP-beta}~\cite{gp-beta}. We report results for $f$-Cal{}, with KL-divergence (\textbf{$f$-Cal{}-KL}) and Wasserstein distance (\textbf{$f$-Cal{}-Wass}) as losses for distribution matching.
We also experimented with a recently proposed maximum mean discrepancy based method~\cite{cui2020calibrated}; being designed for very low data regimes, it failed to solve any of the tasks considered here.
GP-Beta~\cite{gp-beta} and isotonic regression~\cite{isotonic} solve our synthetic task but do not scale to large, real-world tasks.
\input{tables/all-results-table}
\subsection{Evaluation metrics}
We evaluate the accuracy in calibration by means of the following widely used metrics.
The \textbf{expected calibration error} (ECE)~\cite{naeini2015obtaining, bosch-calib} measures the discrepancy between the predicted distribution of the neural regressor and the label distribution. We divide the predicted distribution into $S$ intervals of size $\frac{1}{S}$; ECE is then the weighted difference between the empirical bin frequency and the true frequency ($\frac{1}{S}$). With $P$ total samples and $B_s$ the set of samples in bin $s$, $\mathrm{ECE} = \sum_{s = 1}^{S} \frac{|B_s|}{P} \left| \frac{1}{S} - \frac{|B_s|}{P} \right|$.
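A sketch matching this formula (NumPy assumed; \texttt{cdf\_values} are the predicted CDFs evaluated at the labels, which are uniform on $[0,1]$ for a perfectly calibrated model):
\begin{lstlisting}[language=Python]
import numpy as np

def expected_calibration_error(cdf_values, num_bins=10):
    # Histogram the predicted CDF values into S = num_bins equal bins.
    counts, _ = np.histogram(cdf_values, bins=num_bins, range=(0.0, 1.0))
    freq = counts / len(cdf_values)  # empirical frequency |B_s| / P
    # Weighted deviation of each bin's frequency from the ideal 1 / S.
    return float(np.sum(freq * np.abs(1.0 / num_bins - freq)))
\end{lstlisting}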
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{figures/qualitative_result_f_cal-KITTI-OD.png}
\vspace{-0.2cm}
\caption{\textbf{Qualitative results}: Uncertainty calibration for object detection models (Faster RCNN) over the KITTI~\cite{kitti} dataset. (\textit{Left}) Models trained using an NLL loss term produce overconfident predictions (notice how the model outputs small, low uncertainty, ellipses for the occluded cars). (\textit{Right}) $f$-Cal{}, on the other hand, produces calibrated uncertainty estimates (notice the large covariances for occluded cars, and the car in the foreground, whose endpoints are indeed uncertain).
}
\label{fig:qualitative_results}
\end{figure}
We report ECE scores for the standard normal and chi-squared distributions, denoted ECE(z) and ECE(q), respectively.
We also plot \textbf{reliability diagrams}, which visually depict the amount of miscalibration over the support of the distribution. A perfectly calibrated distribution has a diagonal reliability plot; portions of a curve above the diagonal are over-confident regions, while those below it are under-confident.
\subsection{Bokeh: A synthetic disc-tracking benchmark}
\label{subsec:bokeh}
Since ground-truth estimates of aleatoric uncertainty are extremely challenging to obtain from real-world datasets, we first validate our proposed approach in simulation.
\textbf{Setup}: We design a synthetic dataset akin to~\cite{backpropkf} for a \emph{disc-tracking} task.
The goal is to predict the 2D location of the centre of a unique red disc from an input image containing other distractor discs.
All disc locations are sampled from a known data-generating distribution.
\textbf{Models}: We use a 3-layer ConvNet architecture with an uncertainty prediction head.
We train a model using the NLL loss~\cite{loss_att} for our baseline probabilistic regressor.
We then train two models using our proposed $f$-Cal{} loss ($f$-Cal{}-KL and $f$-Cal{}-Wass).
\textbf{Results:} Table~\ref{table:all-results}(a) compares $f$-Cal{} to the aforementioned baselines, evaluating \emph{performance} (i.e., the accuracy of the estimated mean) and \emph{calibration} quality.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{figures/depth_qualitative_results.png}
\vspace{0.1cm}
\caption{\textbf{Qualitative results} for depth estimation models on the KITTI~\cite{kitti} benchmark. (\textit{Top}) Input image; (\textit{Middle}) Predicted depth; (\textit{Bottom}) Predicted uncertainty.
}
\label{fig:depth_qualitative}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{figures/object/kitti-plots-2x2.png}
\vspace{0.1cm}
\caption{\textbf{Calibration plots} on KITTI~\cite{kitti} object detection: \textit{Top:} predicted chi-squared distributions (using hyper-constraints) and standard normal distributions from the residuals; \textit{Bottom:} corresponding reliability diagrams in chi-squared and standard normal space. $f$-Cal{} consistently yields superior calibration curves in both chi-squared and standard normal space. These curves correspond to the results reported in Table~\ref{table:all-results}.}
\vspace{-0.3cm}
\label{fig:kitti-results}
\end{figure}
We report performance (Smooth-L1 error), denoted by L1 in Table~\ref{table:all-results}, with respect to both the \emph{noise-free} ground truth (in typical ML settings we never have access to this variable, only to the noisy ground-truth labels) and the \emph{noisy} ground truth (accounting for label generation error).
We see in Table \ref{table:all-results} that $f$-Cal{} outperforms all baselines considered.
It is worth noting that we perform better than temperature scaling~\cite{bosch-calib} despite this being a somewhat unfair comparison (temperature scaling leverages a large held-out calibration dataset, while we do not use any additional data). $f$-Cal{} gives well-calibrated uncertainty estimates without sacrificing the deterministic performance (more discussion of this point in Sec.~\ref{subsec:analysis}).
\subsection{KITTI Depth Estimation}
\vspace{-0.1cm}
\textbf{Setup}: We evaluate $f$-Cal{} on real-world robotics tasks: depth estimation and object detection (Sec.~\ref{subsec:object_detection}). We train $f$-Cal{} and several baseline calibration techniques on the KITTI depth estimation benchmark dataset~\cite{kitti}. We modify the BTS model~\cite{big2small} for supervised depth estimation into a Bayesian neural network by adding a variance decoder. We evaluate deterministic performance using the SiLog and RMSE metrics, and calibration using ECE and NLL.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figures/depth/ece_vs_silog.png}
\vspace{-0.1cm}
\caption{\textbf{Calibration-vs-deterministic performance trade-off:} We see that this trade-off is observed for all three calibration techniques. For similar deterministic performance, $f$-Cal{} models achieve smaller ECE values (i.e., better calibration).}
\vspace{-0.6cm}
\label{fig:trade-off}
\end{figure}
\textbf{Discussion}: Through our experiments, we conclude that there is a trade-off between deterministic and calibration performance, as shown in Fig.~\ref{fig:trade-off} (also established in \cite{guo2017calibration, bosch-calib}). We can control this trade-off by varying $\lambda$ in \eqref{eq:fcal}; plotting SiLog against ECE for different values of $\lambda$ lets us analyze the trade-off for each calibration technique. We note that the appropriate $\lambda$ may be application-dependent. To our knowledge, our method is the first that enables this trade-off to be controlled easily with a single parameter.
In Table~\ref{table:all-results}-(b), for every method we select a $\lambda$ that best balances deterministic performance and calibration. For this fixed $\lambda$, we run the experiment over multiple seeds and report mean scores.
We see that $f$-Cal{} outperforms all baselines on all calibration metrics. We also observe that, unlike on Bokeh (Table~\ref{table:all-results}-(a)), temperature scaling struggles to calibrate uncertainties by tuning a single temperature parameter on a task as large and complex as depth estimation. Qualitative depth estimation results are shown in Figure~\ref{fig:depth_qualitative}.
\vspace{-0.15cm}
\subsection{Object detection}
\label{subsec:object_detection}
\vspace{-0.1cm}
\textbf{Setup}: We now consider the task of object detection in an autonomous driving setting. We calibrate probabilistic object detectors trained on the KITTI~\cite{kitti} and Cityscapes~\cite{cordts2016cityscapes} datasets.%
We use the popular Faster R-CNN~\cite{ren2015faster} model with a feature pyramid network~\cite{lin2017feature} and a Resnet-101~\cite{he2016deep} backbone. We use the publicly available detectron2~\cite{wu2019detectron2} implementation and extend the model to output variances.
\textbf{Discussion}: We summarize the results of our object detection experiments in Table~\ref{table:all-results}-(c, d) and Fig.~\ref{fig:kitti-results}. Table~\ref{table:all-results} shows that the $f$-Cal{} variants, while having competitive regression performance (in terms of mAP), exhibit far superior calibration, as reflected in the ECE scores.
In Fig.~\ref{fig:kitti-results}, the reliability plots show that the baseline methods yield inferior calibration and are farther from the ground-truth distribution. Notably, even though the calibration loss~\cite{bosch-calib} predicts a distribution close to standard normal, it is still not as well calibrated as the $f$-Cal{} estimates; this is reflected in the reliability diagram for the chi-squared distribution, which separates the methods much more clearly than the curve for the standard normal distribution. Fig.~\ref{fig:kitti-results} also shows that loss attenuation yields very over-confident uncertainty predictions, corroborated by the qualitative results in Figure~\ref{fig:qualitative_results}. By employing hyper-constraints over the proposed distribution, $f$-Cal{} enforces regularization at the batch level, which leads to superior calibration performance.
\section{Discussion and Conclusion}
\label{sec:limitations}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figures/toy_exps/ablation_plot.png}
\caption{\textbf{Ablation}: (left) We plot the \% drop in deterministic performance relative to a deterministic model for different noise distributions. For a large shape parameter, the Gamma distribution converges to a Gaussian, resulting in nearly identical performance to a deterministic model. (right) Effect of $K$ on the performance of $f$-Cal{}: as long as $K > 50$, the central limit theorem holds and we obtain good calibration.}
\label{fig:ablation}
\end{figure}
\textbf{Impact of modeling assumption}:
\label{subsec:analysis}
We postulate that for real-world datasets such as KITTI \cite{kitti}, the tradeoff in calibration and deterministic performance occurs due to poor modeling assumptions (i.e., modeling uncertainty using a distribution that is quite different from the underlying label error distribution).
To investigate this, we introduce a mismatch between the true distribution, a Gamma distribution parameterized by $\gamma$, and the assumed distribution, a Gaussian distribution, on the synthetic (Bokeh) dataset (Fig. \ref{fig:ablation} (left)). For lower distributional mismatch, the performance gap between the calibrated and deterministic models is reduced. We attribute the deterministic performance drop for KITTI results to this phenomenon.
This facet of our approach has significant implications: by experimenting with different modeling assumptions and examining the resulting trade-off, we may be able to infer properties of the underlying noise distribution, something that is typically very hard to do.
\textbf{Effect of degrees of freedom (K)}:
We analyze how the number of degrees of freedom ($K$) impacts calibration performance by training models with different values of $K$ and measuring the degree of calibration. In Fig.~\ref{fig:ablation} (right), we observe that for $K > 50$ the central limit theorem holds and calibration is superior to models trained with $K \leq 50$, where our Gaussian approximation breaks down, resulting in poor calibration. For object detection (where thousands of proposals are scored) and per-pixel depth estimation, the minibatch size satisfies $N \gg K$, which allows us to construct hyper-constraints effectively.
\section{Introduction} \label{sec:intro}
The recent emergence of Digital Rock Physics (DRP) has revolutionized the way we study porous media. It is now possible to directly characterize the pore structure of subsurface systems and perform three-dimensional direct numerical simulations of fluid flow in digital models of rock samples that approach the size of a Representative Elementary Volume (REV). As such, DRP has transformed our capacity to characterize and predict fluid flow in soils, sedimentary rocks, hydrocarbon reservoirs, and engineered porous systems \citep{Mehmani2019a,Han2020}. The computation of rock transport parameters including absolute permeability \citep{Spanne1994}, dispersion coefficients \citep{Bijeljic2013a,Soulaine2021c}, relative permeabilities, and capillary pressures \citep{Raeini2014,Prodanovic2015} has had direct impacts in the fields of reservoir engineering, hydrology, and $\mathrm{CO_2}$ sequestration \citep{Blunt2013,Soulaine2021a}.
\subsection{Rock Imaging Techniques and Sub-Resolution Porosity}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{Fig1.pdf}
\caption{\label{fig:concept} A) Conceptual representation of the Multiphase Micro-Continuum approach. The image shows an advancing fluid-fluid interface in a system that contains impermeable rock (top), free fluid (center), and a permeable porous medium (bottom). The two immiscible fluids are shown in different shades of blue (left and right) and the inset shows a sample REV over which the model's equations are averaged. B) SEM image of a shaly sandstone obtained from \cite{Peters2009} showing the distribution of its components: porous clay (dark grey), non-porous sand (light grey), and open pore space (black).}
\end{center}
\end{figure*}
Digital Rock Physics is made possible by advances in high resolution imaging techniques, notably X-ray Microtomography (XCT) \citep{Baker2012,Singh2018,kohanpur_valocchi_2020} and focused ion beam scanning electron microscopy (FIB-SEM) \citep{Cnudde2013,Kelly2016,Welch2017,ruspini2021}. The first method, XCT, involves recording hundreds or thousands of two-dimensional (2-D) X-ray scans through a sample that are then computationally reconstructed to create a 3-D image. This method enables detailed volumetric representations of rock core samples spanning several cubic millimeters with a resolution of about 1 cubic micrometer \citep{Wildenschild2013,Blunt2013}. The second method, FIB-SEM, involves repeated etching and imaging of a sample through alternating application of focused ion beams and scanning electron microscopy at considerably smaller scales. It yields images spanning $\sim$5 cubic micrometers with an associated resolution of $\sim$5 cubic nanometers \citep{Dewers2012}. These two techniques highlight an important limitation of current imaging techniques: the existence of an unavoidable trade-off between image resolution and field-of-view.
The inherent complexity of most natural rocks further complicates the imaging and characterization process. More often than not, rocks such as sandstones, carbonates, and shales exhibit heterogeneities that span several length scales \citep{Bear1988,Mousavi2013,Akbarabadi2017,Beckingham2017}, some of which cannot be properly resolved by the aforementioned imaging techniques. A common way to simplify the imaging process while accounting for these heterogeneities is to designate a ``cutoff'' voxel size that resolves the largest pores within a given rock sample (or other features of interest such as cracks or fractures) while simultaneously acting as a ``filter'' for any pores smaller than that particular size. These small pores, which are not individually resolved, are then designated ``sub-resolution porosity'' (SRP) and labeled as a third phase in the rock-pore-SRP system during the eventual segmentation of sample images. The final result is a reconstructed image with an acceptable trade-off between resolution and field of view \citep{Scheibe2015}.
Until recently, and despite its abundance in reconstructed natural rock scans, SRP was generally assumed to have little influence on rock hydraulic properties predicted from flow simulations. Most computational models were based on the simplifying assumption that transport within the SRP is dominated by diffusion and thus contributes negligibly to fluid flow \citep{Haggerty1995,Carrera1998,Gouze2008,Shabro2011,Gjetvaj2015}. However, recent studies have shown that this assumption breaks down whenever SRP contributes significantly to the rock's percolating path by forming bridges between resolved pore spaces. In these cases, SRP can impact permeability by a factor larger than 2 even when contributing only $\sim$2\% of the total porosity \citep{Churchel1991,Tanino2012,Soulaine2016_subresolution,wu_tahmasebi_lin_munawar_cnudde_2019}. In addition, recent evidence suggests that SRP can also have important impacts on multiphase flow, as shown by observations of dramatic changes in relative permeability curves and overall flow behaviour associated with differences in SRP wetting properties in mixed-wet or strongly-wetting rocks \citep{Zou2018,Rucker2019,Fan2020,Garfi2020}.
\subsection{Multiscale Models}
A potential path towards resolving the influence of SRP on hydrologic processes is provided by sustained efforts to develop numerical techniques designed to account for this feature combined with steady advances in high-performance parallel computing \citep{Arbogast1993,Arbogast1993b,Moctezuma2004,Javadapour2009,Bauer2011,Jiang2013b}. In particular, the development of multiscale/dual-porosity pore network models (D-PNM) has allowed for relatively fast and accurate assessment of the permeability of rocks containing multiscale heterogeneity \citep{Bekri1995}. Classical PNMs rely on approximating the 3-dimensional resolved pore space through a series of ideally-shaped pore ``nodes'' and ``throats'' \citep{Fatt1956}. The result is a system where the relevant fluid dynamics can be readily solved through idealized equations for flow \citep{Dong2009,Joekar-Niasar2012,Jiang2012,Blunt2013,Huang2016,Suo2020}. In D-PNMs, the presence of SRP is accounted for through the implementation of an additional fine-scale pore network \citep{Ioannidis2000,Jiang2013,Prodanovic2015,sadeghnejad_gostick_2020,moslemipour_sadeghnejad_2020} or through the creation of ``micro-links'' forming percolation paths between large pores \citep{Bultreys2015,xu_lin_jiang_ji_cao_2021}. Accurate definition of SRP connectivity within these networks remains a challenge \citep{zhao_shang_jin_jia_2017,petrovskyy_dijke_jiang_geiger_2020}, as, by definition, there are no discernible features from which to inform assignments of pore network topology within the SRP.
The expansion of multiscale models into multiphase flow further complicates matters, as the effects of capillarity and wettability need to be modeled through representative relative permeability and capillary pressure models in order to obtain accurate flow representations within the SRP \citep{Carrillo2020}. For this reason, the few studies that implemented multiphase D-PNMs have relied on the assumption of quasi-static fluid displacement, an assumption valid for simulating flow at low capillary numbers \citep{Mehmani2013b,Bultreys2015,xu_lin_jiang_ji_cao_2021} and where both phases are effectively set at a given saturation. These studies have leveraged D-PNMs to study how the amount and distribution of SRP affects the relative permeability behaviour of artificial rock samples \citep{Mehmani2014} and how SRP characterization and connectivity affect the wetting properties of natural rocks \citep{Bultreys2016,song_yao_zhang_sun_yang_2021,isah_adebayo_mahmoud_babalola_el-husseiny_2020}. Unfortunately, due to the simplifying assumptions of D-PNMs outlined above, extension of these studies to dynamic systems with mixed-wet SRP or systems with viscously-dominated flow remains impossible.
The Micro-Continuum approach presents an alternative route to simulating dynamic flow processes in systems with SRP. This approach relies on locally-averaged Navier-Stokes equations that asymptotically approach Darcy's law in regions with SRP and the Navier-Stokes equations in fully resolved pores. This model has proven fairly flexible and has been used to evaluate the effects of static \citep{Knackstedt_2006,Apourvari2014,Scheibe2015,Soulaine2016_microcontinuum,Guo2018,Kang2019,Singh2019}, reactive \citep{Soulaine2017_dissolutionSinglePhase,Noiriel2021}, and deformable \citep{Carrillo2019} SRP on the permeability of heterogeneous porous media. Furthermore, through careful consideration of capillary and viscous effects within the SRP (i.e., fluid mobility, relative permeabilities, and capillary pressures), recent investigations have successfully expanded and validated the Micro-Continuum Approach for situations involving the flow of multiple fluids in multiscale porous media \citep{Soulaine2018_twoPhaseDissolution,Carrillo2020,Carrillo2020MDBB,Carrillo2021}. In this approach, the impact of simplifying model assumptions is greatly reduced relative to the D-PNM approach at the expense of relatively high computational costs. As such, this approach allows for the simulation of dynamic multiscale systems in domain sizes that approach that of an REV.
\subsection{Objective of this Paper}
In this study, we leverage the capabilities of the Multiphase Micro-Continuum Approach to systematically examine the influence of SRP properties (permeability, porosity, wettability) on Direct Numerical Simulation predictions of multiphase flow in a digital model of a carbonate rock. In particular, we characterize the rock's absolute permeability, relative permeability curves, residual permeabilities, and fluid breakthrough times on the $\sim$30 mm$^3$ scale of an XCT image. We hypothesise that the SRP properties outlined above have even greater impacts on multiphase flow than on single phase flow, such that their neglect or misrepresentation leads to inaccurate predictions of rock hydraulic properties. To the best of our knowledge, this is the first application of Direct Numerical Simulations to multiphase flow in rock samples containing unresolved porosity and the first computational effort to systematically examine the impacts of SRP wetting properties on the aforementioned rock flow properties.
\section{Materials and Methods}\label{sec:Methods}
\subsection{Mathematical model}\label{sec:mathematical_model}
The Multiphase Micro-Continuum framework for incompressible immiscible flow in rigid porous media consists of three volume-averaged partial differential equations. They describe the conservation and transport of fluid mass (Eqn.~\ref{eq:mass_conservation}), fluid saturation (Eqn.~\ref{eq:saturation_eq}), and fluid momentum (Eqn.~\ref{eq:momentum_conservation}). Once implemented in a suitable numerical solver, these equations are used to solve for the single-field pressure ($p$), the single-field fluid velocity ($\boldsymbol{U}$), and the wetting-fluid saturation ($\alpha_w$). A full description of the model can be found in \cite{Carrillo2020}. Here, we have:
\begin{equation}
\nabla\cdot\boldsymbol{U}=0,
\label{eq:mass_conservation}
\end{equation}
\begin{equation}
\frac{\partial \phi \alpha_w}{\partial t} + \nabla\cdot\left(\alpha_w \boldsymbol{U}\right) + \nabla\cdot\left(\phi \alpha_w\alpha_n\boldsymbol{U}_r\right)=0,
\label{eq:saturation_eq}
\end{equation}
\begin{equation}
\frac{1}{\phi }\left( \frac{\partial \rho \boldsymbol{U}}{\partial t} +\nabla \cdot \left( \frac{\rho}{\phi }\boldsymbol{U} \boldsymbol{U} \right)\right)=-\nabla p
+ \nabla\cdot\boldsymbol{S} -\mu k^{-1} \boldsymbol{U} + \boldsymbol{F}_c,
\label{eq:momentum_conservation}
\end{equation}
\noindent where the subscripts $w$ and $n$ refer to the wetting and non-wetting fluids, $\phi$ is the cell porosity, $\rho$ is the single-field density, $\mu k^{-1}$ is the drag coefficient of the unresolved porous media (a function of the cell permeability, saturation, and fluid viscosities), $\boldsymbol{F}_c$ are the capillary forces, and $\boldsymbol{S}={\mu}(\nabla\boldsymbol{U}+{(\nabla\boldsymbol{U})}^T)$ is the averaged single-field shear stress tensor. Here, gravity is neglected and the phrase ``single-field'' refers to averaged variables that depend on the properties of both fluids \citep{Maes2020}.
A key feature of Eqns. \ref{eq:mass_conservation}-\ref{eq:momentum_conservation} is that they are valid in control volumes that contain any combination of the three relevant phases (porous solid, wetting fluid, non-wetting fluid), meaning that they can be applied to systems that contain both solid-free ($\phi =1$) and porous regions ($\phi <1$). Due to the scale separation hypothesis \citep{Whitaker1986}, this unique set of equations tends towards distinct solutions in solid-free and porous regions. Notably, the single-field momentum equation tends to a solution that can be asymptotically matched to the two-phase Navier-Stokes equations in solid-free regions and to two-phase Darcy's law in porous regions \citep{Carrillo2020}:
\begin{equation} \label{eq:SolidFree_Darcy}
\begin{cases}
\frac{\partial {\rho }{\boldsymbol{U}}}{\partial t}+\nabla\cdot\left({\rho }{{\boldsymbol{U}} \boldsymbol{U}}\right)=-\nabla p+\nabla\cdot\boldsymbol{S}+{\boldsymbol{F}}_c, & \textnormal{if } \phi = 1, \\
{\boldsymbol{U}}=-\frac{k}{\mu }\left(\nabla p-{\boldsymbol{F}}_c\right), & \textnormal{if } \phi < 1.
\end{cases}
\end{equation}
As such, the Multiphase Micro-Continuum model is ideally suited for simulating multiphase flow in XCT images that contain SRP, as illustrated schematically in Figure~\ref{fig:concept}.
The asymptotic matching noted above requires appropriate definitions of the relative velocity $\boldsymbol{U}_r$, drag force $\mu k^{-1} \boldsymbol{U}$, and capillary forces $\boldsymbol{F}_c$. These variables reflect the influence of sub-grid-scale structure and dynamics, including the fluid distribution and the impact of porous micro-structure on flow within the SRP. For this reason, these parameters are defined differently in the solid-free region ($\phi =1$) and porous regions ($\phi <1$). In particular, the single-field drag force is negligible in solid-free regions and, in porous regions, depends on absolute ($k_0$) and relative permeabilities ($k_{r,i}$) within the SRP:
\begin{equation} \label{perm}
\mu k^{-1}=
\begin{cases}
0, & \textnormal{if } \phi = 1,\\
k_0^{-1}\left(\frac{k_{r,w}}{\mu_{w}}+\frac{k_{r,n}}{\mu_{n}}\right)^{-1}, & \textnormal{if } \phi < 1.
\end{cases}
\end{equation}
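For illustration, Eq.~\ref{perm} amounts to the following cell-wise computation (a Python sketch under the stated definitions, independent of the OpenFOAM implementation):
\begin{lstlisting}[language=Python]
def drag_coefficient(phi, k0, krw, krn, mu_w, mu_n):
    # Single-field drag mu * k^{-1}: zero in solid-free cells (phi = 1),
    # inverse total mobility k0 * (krw / mu_w + krn / mu_n) in the SRP.
    if phi >= 1.0:
        return 0.0
    return 1.0 / (k0 * (krw / mu_w + krn / mu_n))
\end{lstlisting}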
The capillary forces within the solid-free region are proportional to the surface tension $\gamma$ and the curvature of the fluid-fluid interface as described by the Continuum Surface Force formulation \citep{Brackbill1992}. In the porous region, capillary forces are a function of the fluid mobilities ($M_i=k_0k_{i,r}/{\mu }_i; \ M=M_w+M_n$) and the average capillary pressure $p_c$:
\begin{equation}
\boldsymbol{F}_{c}=\begin{cases}
-\gamma\nabla\cdot\left(\hat{\boldsymbol{n}}_{w,n}\right)\nabla \alpha_{w}, & \textnormal{if } \phi = 1,\\
\left[M^{-1}\left( M_w \alpha_n - M_n \alpha_w \right)\left(\frac{\partial p_c}{\partial \alpha_w }\right) - p_c \right]\nabla \alpha_w, & \textnormal{if } \phi < 1,
\end{cases}
\label{eq:surface_tension_forces}
\end{equation}
\noindent where the normal at the fluid-fluid interface, $\hat{\boldsymbol{n}}_{w,n}$, is given by
\begin{equation}\label{normal}
\hat{\boldsymbol{n}}_{w,n}=\begin{cases}
-\frac{\nabla\alpha_w}{\left|\nabla\alpha_w\right|}, & \textnormal{if } \phi = 1,\\
\cos \theta_p \boldsymbol{n}_{wall} + \sin \theta_p \boldsymbol{t}_{wall}, & \textnormal{at the SRP surface}.
\end{cases}
\end{equation}
Equation \ref{normal} imposes a contact angle $\theta_p$ at the SRP surface following the approach developed by \cite{Horgue2014}, where $\boldsymbol{t}_{wall}$ and $\boldsymbol{n}_{wall}$ are the tangential and normal directions relative to the SRP surface. The specification of the contact angle at non-porous rock surfaces, $\theta_r$, follows a similar implementation.
The relative fluid velocity is given by:
\begin{equation}
\boldsymbol{U}_{r}=\begin{cases}\label{Ur}
C_\alpha \max \left(\left| \boldsymbol{U} \right|\right) \frac{\nabla \alpha_w}{\left| \nabla \alpha_w \right|}, & \textnormal{if } \phi = 1, \\
{\phi}^{-1}\left[ \begin{array}{c}
-\left(\frac{M_w}{\alpha_w} - \frac{M_n}{\alpha_n}\right)\nabla p \\ +\left(\frac{M_w\alpha_n}{\alpha_w} +\frac{M_n\alpha_w}{\alpha_n} \right)\nabla p_c \\ - \left(\frac{M_w}{\alpha_w} - \frac{M_n}{\alpha_n} \right)p_c \nabla \alpha_w \end{array}\right], & \textnormal{if } \phi < 1,
\end{cases}
\end{equation}
\noindent where $C_{\alpha}$ is an interface compression parameter used in the Volume-of-Fluid method (typically set to values between 1 and 4), and the expression within the SRP is imposed by asymptotic matching to two-phase Darcy's law \citep{Carrillo2020}.
Lastly, closure of the system of equations presented above requires appropriate constitutive models to solve for $p_c$ and $k_{r,i}$ within the SRP. For simplicity, we use the well-known Van Genuchten model \citep{VanGenutchen1980}:
\begin{equation}
k_{r,n} = (1-\alpha_{w})^{1/2}(1-\alpha_{w}^{1/m})^{2m},
\label{eq:rel_perm}
\end{equation}
\begin{equation}
k_{r,w}= \alpha_{w}^{1/2}(1-(1-\alpha_{w}^{1/m})^m)^{2},
\end{equation}
\begin{equation}
p_c\ ={p_{c,0}\left({\left({\alpha }_{w}\right)}^{-\frac{1}{m}}-1\right)}^{1-m},
\label{eq:p_cap}
\end{equation}
\noindent where $m$ is a wetting parameter that controls the internal wettability of the SRP and $p_{c,0}$ is the entry capillary pressure of the SRP. The SRP is internally water-wet if $m > 1$ and oil-wet if $m < 1$. Note that the sign of the entry capillary pressure is changed for $m > 1$ to prevent unphysical parameterizations in which the SRP is simultaneously water-wet and oil-wet.
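For reference, these constitutive relations reduce to a few lines (a Python sketch under the definitions of Eqs.~\ref{eq:rel_perm}--\ref{eq:p_cap}, independent of the OpenFOAM implementation; the sign convention for $p_{c,0}$ when $m > 1$ is applied by the caller as described above):
\begin{lstlisting}[language=Python]
def van_genuchten(alpha_w, m, pc0):
    # Relative permeabilities and capillary pressure within the SRP,
    # valid for saturations 0 < alpha_w < 1.
    krn = (1.0 - alpha_w) ** 0.5 * (1.0 - alpha_w ** (1.0 / m)) ** (2.0 * m)
    krw = alpha_w ** 0.5 * (1.0 - (1.0 - alpha_w ** (1.0 / m)) ** m) ** 2
    pc = pc0 * (alpha_w ** (-1.0 / m) - 1.0) ** (1.0 - m)
    return krw, krn, pc
\end{lstlisting}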
\subsection{Numerical Implementation}
The mathematical model presented in Section \ref{sec:mathematical_model} was numerically implemented in OpenFOAM\textsuperscript{\textregistered}, a free, parallel, C++ simulation platform that uses the Finite Volume Method to discretize and solve partial differential equations on three-dimensional grids. Mass conservation and incompressibility (Eqns.~\ref{eq:mass_conservation} and \ref{eq:momentum_conservation}) were ensured through the Pressure-Implicit with Splitting of Operators (PISO) algorithm \citep{Issa1986}. The evolution of the fluid-fluid interface (Eqn.~\ref{eq:saturation_eq}) was solved using the Multidimensional Universal Limiter with Explicit Solution (MULES) algorithm \citep{Marquez2013} and a Piecewise-Linear Interface Calculation (PLIC) compression scheme. Extensive validation of the modeling framework is presented in \cite{Carrillo2020} and the open-source implementation is available from the author's \href{https://github.com/Franjcf}{GitHub} repository \citep{hybridPorousInterFoam_code}.
\subsection{Studied Rock Sample} \label{sec:sample}
Simulations were performed on a reconstructed 3-D XCT scan of an Estaillades Carbonate sample obtained from \cite{sample_dataset} through the \href{https://www.digitalrocksportal.org/}{Digital Rock Portal}. This set of images has been used in several previous D-PNM studies \citep{Bultreys2015,Bultreys2016}. The sample (1000 by 1000 by 1000 voxels, $3.1 \ \mu \mathrm{m}$ per voxel) is ideally suited for our purposes, as it is a mono-mineralic calcite rock containing both intergranular macropores and unresolved intragranular micropores (i.e., SRP). Voxels containing solid rock, resolved pores, and unresolved pores were identified through a 3-phase segmentation procedure following the steps outlined in \citet{Bultreys2015}. This yielded a sample with $56.2 \%$ solid rock voxels, $11.8\%$ resolved pore voxels, and $32\%$ microporous (SRP) voxels (Figure \ref{fig:sample}).
Due to the computational cost associated with performing direct numerical simulations on such a large physical space, we extracted a 200 by 200 by 200 voxel sub-sample from the original scan in order to perform our simulations. The computational cost was further reduced by removing all grid cells corresponding to solid rock voxels in the resulting computational mesh, yielding a sample of about 3.2 million cells (see Fig. \ref{fig:porosity_distribution}). In order to maintain adequate mesh resolution while properly representing the mobile fluid-fluid interface within the open pore space, we implemented a dynamic mesh refinement algorithm that allowed the mesh to become up to 16 times finer at said interface. No mesh refinement was carried out within the SRP. Lastly, as is customary for these types of simulations, and to properly control the flow rate into the sample, we added two ``buffer'' regions at the inlet and outlet boundaries of our sample. All other boundaries were assigned no-flow boundary conditions.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig2.pdf}
\caption{\label{fig:sample} Representative cross-section of the XCT scan of Estaillades carbonate rock used in this study \citep{sample_dataset}. A) Full 2-D view of the sample, which is 7 mm in diameter with a resolution of 3.1 $\mu \mathrm{m}$ per voxel. B) 500 by 500 voxel cropped sample. C) Corresponding segmented image. In all panels, black is open pore space, dark grey corresponds to domains that contain SRP, and the lightest color is solid calcite.}
\end{center}
\end{figure}
\section{Base Simulation Setup and Upscaling}\label{sec:simulation_setup}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig3.pdf}
\caption{\label{fig:porosity_distribution} Spatial distribution of the SRP (red), pore space (blue), and solid rock (transparent) within the extracted 3-D rock representation (200 by 200 by 200 cells). A) The complete computational mesh. B) The corresponding SRP, which accounts for $21\%$ of the voxels. C) The associated open pore space, which accounts for $40 \%$ of the voxels.}
\end{center}
\end{figure}
\subsection{Base Simulation Parameterization}\label{sec:base_simulation_parameterization}
The main purpose of our simulations is to perform a sensitivity analysis of the impact of SRP properties on single and multiphase flow at the scale of the full digital rock image, hereafter referred to as the macroscopic scale. For this, we first introduce a ``base'' simulation that will be parameterized using experimental values and then used as a template for the systematic variation of SRP properties. This workflow is conceptually similar to the one performed in \cite{hashemi_blunt_hajibeygi_2021}.
Our base simulation involves the injection of oil into a fully-water-saturated rock sample at a constant rate of 0.1 $\mu$L s$^{-1}$ until the simulation reaches a steady-state. The choice of the labels ``oil'' and ``water'' is an arbitrary one: our main goal is to examine the flow of two immiscible and incompressible fluids. The advancing fluid is non-wetting in our base simulation, but the wettability of the solid by the two fluids is reversed in some of our simulations. The rock and fluid properties are summarized in Tables \ref{table:fluid_parameters} and \ref{table:rock_parameters}. The subsequent sensitivity analysis was performed by independently modifying the SRP's porosity ($\phi = 0$ to $1$), absolute permeability ($k_0 = 10^{-12}$ to $10^{-17} \ m^2$), internal wetting properties ($m = 0.2$ to $1.5$ in Eqns.~\ref{eq:rel_perm}-\ref{eq:p_cap}), and the contact angles formed by fluid-fluid interfaces on the external surface of SRP and impermeable rock domains ($\theta_p$ and $\theta_r = 30^{\circ}$ to $150^{\circ}$).
The decoupling of the contact angle at the SRP and impermeable rock surfaces allows the investigation of mixed-wet systems \citep{Song2015,Huang2016,Akbarabadi2017} and establishes the possibility of defining a roughness- or saturation-dependent contact angle in future studies \citep{Wenzel1936,Whyman2008}. The decoupling of internal ($m$) and surface ($\theta$) wetting properties allows us to differentiate between macroscopic and microscopic wetting effects, where $\theta$ impacts multiphase flow in large pores and $m$ impacts multiphase flow within the SRP.
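To make the structure of this one-at-a-time design explicit, the sketch below enumerates the simulation cases; the intermediate parameter values are illustrative placeholders, since only the base values and range endpoints are fixed by Tables \ref{table:fluid_parameters} and \ref{table:rock_parameters}.
\begin{verbatim}
# One-at-a-time (OAT) sweep around the base case. Intermediate values
# are illustrative; only the base values and range endpoints are
# specified in the text.
base = {"phi": 0.5, "k0": 1e-13, "theta_r": 30.0, "theta_p": 30.0, "m": 1.0}
sweeps = {
    "phi":     [0.0, 0.25, 0.75, 1.0],
    "k0":      [1e-12, 1e-14, 1e-15, 1e-16, 1e-17],
    "theta_r": [90.0, 150.0],
    "theta_p": [90.0, 150.0],
    "m":       [0.2, 0.5, 1.5],
}
cases = [dict(base, case="base")]
for name, values in sweeps.items():
    for v in values:
        cases.append(dict(base, **{name: v}, case=f"{name}={v}"))
\end{verbatim}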
Lastly, we carried out additional single-phase flow simulations for each case where we varied the SRP porosity and permeability. This was necessary to calculate each case's absolute permeability and relative permeability curves (see Section \ref{sec:relative_perm_calc}). On average, each multiphase simulation ran for approximately 120 hours on ten 28-core Broadwell Xeon nodes.
\begin{table}[!htb]
\centering
\setcellgapes{2pt}
\makegapedcells
\medskip
\begin{tabular}{||c c||}
\hline
Property & Value \\ [0.5ex]
\hline\hline
$\rho_w$ & \SI{1000}{kg.m^{-3}} \\
\hline
$\mu_w$ &\SI{0.001}{Pa.s}\\
\hline
$\rho_n$ & \SI{800}{kg.m^{-3}} \\
\hline
$\mu_n$ & \SI{0.1}{Pa.s} \\
\hline
$\gamma$ & \SI{0.03}{kg.s^{-2}} \\
\hline
\end{tabular}
\caption{Simulated Fluid Properties. These were kept constant for all simulations.}
\label{table:fluid_parameters}
\end{table}
\begin{table}[!htb]
\centering
\setcellgapes{2pt}
\makegapedcells
\medskip
\begin{tabular}{||c c c||}
\hline
Property & Base Value & Range \\ [0.5ex]
\hline\hline
\(\phi\) & 0.5 & $0 - 1$ \\
\hline
\(k_0 \) & \SI{e-13}{m^2} & \SI{e-12}{} -- \SI{e-17}{m^2} \\
\hline
\(\theta_r \) & $30^{\circ}$ & $30^{\circ} - 150^{\circ}$ \\
\hline
\(\theta_p \) & $30^{\circ}$ & $30^{\circ} - 150^{\circ}$ \\
\hline
$m$ & 1 & $0.2-1.5$ \\
\hline
\(p_{c,0}\) & $\pm$ \SI{1.35e4}{Pa} & n/a \\
\hline
\end{tabular}
\caption{Simulated Rock and SRP Parameters. The second column gives each parameter's value in the base simulation and the third column shows the range over which each parameter was varied. These ranges were chosen to create representative samples of the associated parameter space: from strongly hydrophobic to strongly hydrophilic systems, and from systems with permeable or impermeable SRP to systems with no SRP at all.}
\label{table:rock_parameters}
\end{table}
\subsection{Calculation of Absolute Permeability and Relative Permeability Curves}\label{sec:relative_perm_calc}
Relative permeabilities were calculated through modification of the upscaling approach presented in \cite{Raeini2014}, where macroscopic relative permeability $K_{r,i}$ is defined as the ratio between the apparent permeability $K_i$ calculated from transient, multi-phase flow experiments and the upscaled absolute permeability $K_0$ calculated from steady-state, single-phase flow experiments:
\begin{equation}\label{eq:Kr}
K_{r,i} = \frac{K_i}{K_0} = \frac{Q_i/\Delta P_i}{Q_{i,s}/\Delta P_{i,s}}.
\end{equation}
In Equation \ref{eq:Kr}, the subscript $i$ identifies properties pertaining to either fluid and the subscript $s$ refers to quantities obtained from single-phase experiments. Furthermore, $Q_i = \int \boldsymbol{U}\cdot \boldsymbol{n}\alpha_i dA$ is the volumetric fluid flow rate of phase $i$ passing through an area $A$ into the porous medium, and $\Delta P_i$ is the pressure drop in phase $i$ across said medium. The latter is defined as follows:
\begin{align}\label{eq:deltaP}
\Delta P_{i} \equiv & -\frac{1}{Q_{i}} \int_{V_f}\left( -\nabla p + \boldsymbol{F}_c \right)\cdot \boldsymbol{U}\,dV_{f,i} \\
= & -\frac{1}{Q_{i}}\int_{V_{f}} \left(\frac{\mathrm{D}}{\mathrm{D}t}(\rho\boldsymbol{U}) - \nabla\cdot\boldsymbol{S} + \mu k^{-1} \boldsymbol{U}\right) \cdot \boldsymbol{U}\,dV_{f,i}, \nonumber
\end{align}
\noindent where $V_f$ is the fluid volume of the sample excluding the buffer zones. A drag term ($\mu k^{-1} \boldsymbol{U}$) is included in Equation \ref{eq:deltaP} to account for the momentum dissipation (i.e., pressure drop) induced by the presence of SRP in the sample. The calculation of $\Delta P_{i,s}$ follows Eqn. \ref{eq:deltaP} without the capillary force term.
Relative permeability curves were constructed by matching each $K_{r,i}$ value to the corresponding saturation in the porous medium at a specific point in time. This so-called ``unsteady'' approach, where $K_{r,i}$ values are not calculated at steady state \citep{Amaefule1982,Johnson1959}, enables calculating relative permeability curves without carrying out a distinct steady-state multiphase simulation for each data point, a necessity for rock models with realistically complex pore structures given current computational capabilities. However, this comes at the expense of accuracy or, more precisely, at the risk that the resulting relative permeability curves may be sensitive to the fluid flow rate \citep{Diamantopoulos2012}. To minimize the impact of this approximation, we focus on characterizing the sensitivity of $K_{r,i}$ to different SRP properties, as opposed to absolute values of $K_{r,i}$. In other words, we aim to gain insight into the magnitude of the different impacts of SRP on multiphase flow, as opposed to quantitatively matching experimental results.
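A minimal sketch of this upscaling step is given below; the transient series and single-phase reference values are synthetic stand-ins for the actual solver output.
\begin{verbatim}
# Sketch of the "unsteady" upscaling step with synthetic stand-ins for
# the solver output (the real Q_i, dP_i, and S_w series come from the
# transient simulation; Q_s, dP_s from the single-phase run).
import numpy as np

def rel_perm(Q_i, dP_i, Q_s, dP_s):
    # K_{r,i} = (Q_i / dP_i) / (Q_s / dP_s)
    return (Q_i / dP_i) / (Q_s / dP_s)

t    = np.linspace(0.0, 1.0, 50)
S_w  = 1.0 - 0.6 * t                  # water saturation over time
Q_w  = 1e-10 * (1.0 - 0.8 * t)        # water flow rate, m^3/s
dP_w = 2.0e3 * np.ones_like(t)        # water pressure drop, Pa
Q_s, dP_s = 1e-10, 1.5e3              # single-phase reference values

Kr_w = rel_perm(Q_w, dP_w, Q_s, dP_s)
curve = np.column_stack([S_w, Kr_w])  # (saturation, K_r) pairs
\end{verbatim}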
\section{Impact of SRP on Absolute Permeability}\label{sec:SRP_effects_on_absolute_perm}
In the following three sections, we quantify the effects of SRP properties on the rock's overall absolute permeability (this section), relative permeability curves (Section \ref{sec: SRP_effects_on_rel_perm}), and time-dependent saturation profiles (Section \ref{sec:SRP_Effects_On_Saturation}). For the remainder of this study, each simulation case is identified by the variable that is changed with respect to the base simulation established in Section \ref{sec:base_simulation_parameterization} and parameterized according to Tables \ref{table:fluid_parameters} and \ref{table:rock_parameters}. We now start by evaluating the effect of SRP on the rock's absolute permeability.
\begin{figure}[h!]
\includegraphics[width=0.48\textwidth]{Fig4.pdf}
\caption{Sample absolute permeability as a function of SRP properties. Each label shows the only varied parameter with respect to the base simulation. Values indicated to the right of each bar show the percent change in absolute permeability relative to the base simulation described in Section \ref{sec:base_simulation_parameterization}.\label{fig:abs_perm}}
\end{figure}
Figure \ref{fig:abs_perm} shows that the sample's absolute permeability is overestimated by $57\%$ if the SRP is neglected and assumed to be open pore space ($\phi = 1$) and underestimated by $34\%$ if it is assumed to be impermeable ($\phi = 0$); the former permeability is more than double the latter. The overall trend in Figure \ref{fig:abs_perm} is fairly intuitive: as the SRP's porosity and/or permeability increases, so does the rock's absolute permeability. This is in line with the findings of \citet{Mehmani2014} and \citet{Soulaine2016_subresolution}. However, whereas some previous studies have observed that SRP can have a disproportionately large impact on permeability, implying that it forms key percolation pathways for single-phase flow \citep{Soulaine2016_subresolution}, the factor of $\sim$2 impact of SRP on absolute permeability observed here is roughly consistent with the predictions of the well-known Kozeny-Carman equation, implying that SRP is relatively uniformly distributed in the studied rock sample (in close agreement with \citet{Bultreys2016}). As examined in the following sections, greater impacts of SRP are observed in systems with multiple fluid phases, where SRP wettability and relative permeability become key factors controlling fluid flow.
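For reference, a commonly used form of the Kozeny--Carman relation (quoted here only to make the comparison concrete; the prefactor depends on grain geometry and specific surface area) is
\begin{equation*}
K_0 \;\propto\; \frac{\phi^{3}}{\left(1-\phi\right)^{2}},
\end{equation*}
so that order-one changes in the porosity available for flow produce comparable, order-one changes in permeability, rather than the order-of-magnitude changes expected when SRP forms critical percolation pathways.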
\section{Impact of SRP on Relative Permeability Curves} \label{sec: SRP_effects_on_rel_perm}
Changes in sample relative permeability as a function of SRP porosity, wetting properties, and absolute permeability are not particularly intuitive. These changes often involve non-linear behaviors brought about by the combination of capillary forces and the sample's geometry. Throughout the following discussion we will see that the SRP has two primary, competing effects: it \textit{enhances} flow by connecting otherwise-isolated macroscopic flow paths, but it \textit{reduces} flow by being less permeable than the open pore space. We will show that the balance between these two roles is strongly dictated by SRP properties.
The four sets of relative permeability curves presented in Figure \ref{fig:all_curves} exhibit two distinct behaviors reflecting different responses to changes in SRP properties. In one observed behavior, the curves for both fluids shift up (or down) \textit{in the same direction} with respect to the y-axis. This implies that the sample becomes more (or less) permeable to \textit{both} phases simultaneously. In the other observed behavior, the water and oil relative permeability curves shift up or down \textit{in opposite directions}, indicating that an increased permeability to one fluid is associated with a decreased permeability to the other fluid.
\begin{figure*}[h!]
\includegraphics[width=1\textwidth]{Fig5.pdf}
\caption{\label{fig:all_curves} Sensitivity of drainage and imbibition relative permeability curves to different SRP properties. A) Sensitivity to SRP absolute permeability, from $k_0 = 10^{-17}$ to $10^{-12}$ m$^2$. B) Sensitivity to SRP internal wettability, from oil-wetting ($m<1$) to water-wetting ($m>1$). C) Sensitivity to the external wettability of rock and SRP domains, from water-wetting ($\theta_r = 30^{\circ}, \ \theta_p = 30^{\circ}$), to mixed-wetting ($\theta_r = 150^{\circ}, \ \theta_p = 30^{\circ}$ and $\theta_r = 30^{\circ}, \ \theta_p = 150^{\circ}$), to oil-wetting ($\theta_r = 150^{\circ}, \ \theta_p = 150^{\circ}$). D) Sensitivity to SRP porosity, from $\phi = 0$ to $\phi = 1$. Unless specified otherwise, all parameters not indicated in the legend are held constant and equal to the values described in Tables \ref{table:fluid_parameters} and \ref{table:rock_parameters}. Each color pair represents the oil (top) and water (bottom) relative permeability curves for a given simulated case. The base simulation is shown in black in all panels.}
\end{figure*}
\subsection{Sensitivity to SRP Absolute Permeability}\label{sec:SRP_absolute_perm}
Figure \ref{fig:all_curves}A demonstrates that an increase in SRP absolute permeability enhances the relative permeability curves of \textit{both} oil and water. This enhancement occurs in addition to the enhancement in absolute permeability presented in Fig. \ref{fig:abs_perm}. The enhancement of water relative permeability is entirely expected as the SRP is water-wet (and mostly water-saturated) in this scenario, such that greater SRP permeability naturally facilitates the flow of water. The enhancement of oil permeability is less intuitive. Since oil minimally accesses the SRP in this scenario, this enhancement is likely indirect, i.e., greater SRP permeability facilitates water drainage from the open pore space, which in turn facilitates the flow of oil.
We note that both effects essentially disappear at SRP permeabilities below $\sim$10$^{-17}$ m$^2$, as shown in Fig. SI1 in the Supporting Information. In short, SRP permeability is only important if it is sufficiently large for flow to actually occur within the SRP.
\subsection{Sensitivity to SRP Internal Wettability}\label{sec:SRP_internal_wetting_properties}
Figure \ref{fig:all_curves}B shows that an increase in SRP internal wettability, from oil-wetting ($m<1$) to water-wetting ($m>1$), also enhances the flow potential of both fluids. This effect is likely analogous to that observed for SRP absolute permeability: a more hydrophilic SRP should remain more fully water-saturated, and hence more permeable to water (because of the impact of saturation on relative permeability within the SRP). As in Fig. \ref{fig:all_curves}A, this greater ability of water to flow through the SRP indirectly facilitates oil flow, likely by aiding water drainage from the open pore space.
\subsection{Sensitivity to SRP and Rock Surface Contact Angles}\label{sec:SRP_wetting_properties}
Figure \ref{fig:all_curves}C shows that the relative permeability curves shift in \textit{opposite} directions in response to changes in the external wettability of the rock or SRP surfaces. Specifically, as the pore walls become more hydrophobic, permeability to water decreases, while permeability to oil increases. The impact on oil flow is relatively small, likely because of the partial cancellation of two competing effects: more hydrophobic surfaces should inhibit oil flow by causing this flow to occur preferentially in smaller pores or closer to the pore walls; simultaneously, more hydrophobic surfaces should enhance oil flow by minimizing the tendency towards trapping of oil droplets through capillary effects. We therefore posit that a decrease in capillary number (Ca) or in sample homogeneity would likely enhance the trapping effect and might reverse the order of the oil relative permeability curves.
The impact on water flow is larger, which is counter-intuitive. If water flows predominantly within the SRP, the impact of surface contact angles on water flow should be minimal. Alternatively, if water flows predominantly in the open pore space, surface contact angles should have a relatively minor impact on relative permeability to water because of the competing effects noted above in the case of oil. In fact, an increase in water relative permeability with $\theta$ (opposite to that observed here) was reported by \cite{Fan2020}. A possible explanation of our results is that residual water flow in our simulated system relies on the \textit{combination} of SRP and residual macropore water flow (previous studies have largely ignored the presence of SRP). In systems with no microporosity, water can be retained in the open pore space through capillary forces, such as in capillary film coatings on rough pore walls \citep{Tokunaga1997,Khishvand2016}. Hydrophobic microporous walls would eliminate the capillary macropore water component of these residual flow paths.
\subsection{Sensitivity to SRP Porosity}
The effects of modulating SRP internal porosity between 0 and 1 are shown in Figure \ref{fig:all_curves}D. The overall magnitude of the relative permeability changes is in close agreement with \cite{Mehmani2014}, where the authors found that the addition of pore-clogging SRP can modify the relative permeability of the wetting and non-wetting phases by about a factor of 2. We note, again, that this effect occurs in addition to the significant impact of SRP porosity on absolute permeability presented in Fig. \ref{fig:abs_perm}.
In addition to this significant influence of SRP porosity on relative permeability, our results also show unexpected complexity. In particular, the impact of SRP porosity on water flow is non-monotonic, with minimum water relative permeabilities observed at either $\phi = 0$ or $1$ and larger water relative permeabilities observed at intermediate $\phi$ values. This observation is consistent with the expected trend if residual water flow relies on a combination of both SRP and residual macropore water, as suggested above: values of $\phi = 0$ or $1$ would inhibit water flow by eliminating the SRP water component of these residual flow paths.
\section{Impact of SRP on Residual Relative Permeability}
As noted above, our results strongly suggest that the SRP can function as an efficient and persistent connector between otherwise-disconnected water bodies, particularly at low water saturations. We call this increase in permeability the `SRP-enhanced relative permeability'. A key manifestation of this is the persistence of significant relative permeability in the water phase at water saturations below 0.5, in agreement with experimental observations for rocks with significant microporosity \citep{Bennion2010}. In contrast, pore network model simulations of multiphase flow generally predict that relative permeability to water is nearly zero at water saturations below $\sim$0.2 to 0.5 \citep{Prodanovic2015,Huang2016}.
A convenient way to characterize this effect is by ranking the relative permeabilities of water once each system has achieved a steady state, as seen in Figure \ref{fig:residual_relative_perm}. The overarching trend is clear: increasing the SRP's permeability to water also increases the steady-state relative permeability of said fluid (up to 20 times). The reason for this is not obvious: higher SRP permeability should lead to greater displacement of the defending fluid, lower residual saturations, and thus lower (not higher!) steady-state permeabilities. This leads us to believe that increasing the flow capability of the SRP also leads to the creation of enhanced percolation pathways that are persistent and remain connected throughout the sample, even at low water saturations. This phenomenon is consistent with experiments in mixed-wet porous media \citep{AlRatrout2018} and somewhat analogous to thin-film flow in soils, where small amounts of water facilitate transport above the soil's water table \citep{Tokunaga1997,Lebeau2010}. This persistence of significant residual relative permeability to water has potentially important implications for the physics of soil drying \citep{Or2013} and for hydrocarbon recovery from tight sandstone formations \citep{Tian2019}.
\begin{figure}[h!]
\includegraphics[width=0.48\textwidth]{Fig6.pdf}
\caption{\label{fig:residual_relative_perm} Steady-state water relative permeability for select cases. Each label shows the only varied parameter with respect to the base simulation. The percentages to the right of the bars show the percent change in residual permeability with respect to the aforementioned base simulation specified in Section \ref{sec:base_simulation_parameterization}.}
\end{figure}
\section{Impact of SRP on Dynamic Saturation Evolution}\label{sec:SRP_Effects_On_Saturation}
The presence of SRP has the following competing effects on the evolution of oil saturation within the sample during oil-flooding: 1) It increases the residual saturation of its wetting phase (be it oil or water) by acting as a fluid reservoir that ``defends'' itself against the non-wetting phase. 2) It decreases the residual saturation of the defending fluid phase by adding additional inter-pore connectivity and outflow routes \citep{Mehmani2014}. The balance between these two effects is dictated by the flow properties of the SRP.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.99\textwidth]{Fig7.pdf}
\caption{\label{fig:graph_saturations} The evolution of oil saturation vs time for all studied cases. Each label shows the only varied parameter with respect to the base simulation. Note that oil breakthrough occurs when about $1/2$ of the macropore space still contains water, suggesting that much of the later (and slower) increase in oil content corresponds to water drainage from open pore space. The insets at the top left and bottom right show the final configurations for the cases with the highest and lowest final oil saturations, respectively. In said insets, the blue phase represents oil within open pore space and red represents the SRP that has been invaded by oil.}
\end{center}
\end{figure*}
Figure \ref{fig:graph_saturations} shows that fluid injection into the sample follows two characteristic behaviors: 1) an initial linear increase in saturation, where the slope is primarily dictated by the injection rate, and 2) a non-linear, plateauing increase dictated by the slow drainage of the defending fluid through the SRP and the flow of the injected fluid into the SRP, both of which are influenced by the SRP's flow properties. The transition point between these two primary flow mechanisms is set by the ``breakthrough time'', the point at which the injected fluid first reaches the sample's outlet boundary. The next two sections leverage the information within the oil-flooding saturation curves in Figure \ref{fig:graph_saturations} to study the effects of the SRP on the dynamic and static properties of these experiments.
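As a simple illustration of how the breakthrough time is extracted, consider the sketch below; the outlet-saturation series and the detection threshold are synthetic stand-ins rather than solver output.
\begin{verbatim}
import numpy as np

def breakthrough_time(t, s_out, threshold=1e-3):
    # First time the injected phase is detected at the outlet plane.
    # `threshold` is an illustrative tolerance, not a value from the text.
    hit = s_out > threshold
    return t[np.argmax(hit)] if hit.any() else np.inf

t = np.linspace(0.0, 10.0, 200)
s_out = np.clip(0.3 * (t - 4.0), 0.0, 1.0)  # synthetic outlet saturation
print(breakthrough_time(t, s_out))          # ~4.0 in this toy series
\end{verbatim}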
\subsection{Impact on Breakthrough Time}
We now present a general ranking of the breakthrough times for oil flooding as a function of SRP properties, obtained from the results in Figure \ref{fig:graph_saturations} (and Figure SI2). The samples are well distributed around the standard base case and obey the following trends: The slowest breakthrough times correspond to cases with oil-wetting surface contact angles, where the oil explores more of the porous medium before reaching the outlet, in agreement with experimental observations of multiphase flow in bead-packs and micromodels with no SRP \citep{Zhao2016,Hu2017}. These are followed by the sample case with no SRP, where the reasoning is the same as above. Sample cases with a less water-wet SRP (decreasing $m$) or with lower SRP permeability or porosity further decrease the breakthrough times by limiting the ability of water to drain through the SRP, such that the oil explores less of the sample before reaching the outlet. Overall, our results show that oil breakthrough times are sensitive to SRP parameters ($\pm 30\%$) even though drainage occurs predominantly in the larger pores, a result with potentially important implications for enhanced oil recovery and geologic CO$_2$ sequestration.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.483\textwidth]{Fig8.pdf}
\caption{\label{fig:sat_vs_time} Residual water saturations vs breakthrough times for all studied cases. Note that the longer it takes for oil to break through the sample, the lower the final water saturation. This trend holds for all cases with an SRP porosity of $\phi=0.5$ (blue circles). Changes in SRP porosity have the additional effect of changing the water storage capacity of the sample (red stars).
}
\end{center}
\end{figure}
\subsection{Impact on Residual Saturations}
Finally, we observe in Figure \ref{fig:sat_vs_time} (and Fig. SI3) that residual water saturations are highly correlated with oil breakthrough times: samples with faster breakthrough times generally have higher residual water saturations at steady state. The reasoning behind this behavior is very similar to the one developed to explain the differences in oil breakthrough times: if more residual defending fluid is present, the invading fluid explores less of the available space and hence travels through the sample more rapidly. Overall, this analysis indicates that SRP has a considerable impact on a sample's residual saturations ($\pm 400\%$), strongly implying that it should not be neglected during the design of subsurface fluid extraction and sequestration processes.
\section{Conclusions}\label{sec:conclusion}
In this paper we studied the effects of XCT Sub-Resolution Porosity (SRP) on a rock's absolute permeability, relative permeability, residual saturations, and fluid breakthrough times. Our results quantify how these four properties react to changes in the porosity, permeability, and wettability of the SRP. One notable finding is that SRP can function as a persistent connector between otherwise-isolated fluid clusters during multiphase flow, even at low saturations. These results were obtained from numerical simulations performed with our newly-developed Multiphase Micro-Continuum framework. To the best of our knowledge, this is the first two-phase flow model and study to take into account SRP without having to rely on a quasi-static assumption or simplified pore-network models.
As such, this investigation establishes a framework for performing two-phase flow simulations in digital rock systems that have two characteristic length scales. Potential improvements to our methodology include the simulation of larger and more diverse rock samples, an increasingly attainable task given the continued growth of high-performance computing resources.
Finally, our results suggest potentially fruitful opportunities for future work aimed at quantifying the effects of SRP on upscaled capillary pressure curves, and broadening the investigated parameter space to different types of rocks involving different geometries, different amounts of SRP, and different SRP-induced connectivity. These avenues will more extensively test the conclusions presented in this study and lead the way towards greater understanding of multiscale rock physics and the development of more accurate and predictive upscaled permeability models.
\paragraph*{Acknowledgements}
This work was supported by the National Science Foundation, Division of Earth Sciences, Early Career program through Award EAR-1752982. F.J.C. was additionally supported by a Mary and Randall Hack '69 Research Award from the High Meadows Environmental Institute at Princeton University. C.S. was sponsored by the French Agency for Research (Agence Nationale de la Recherche, ANR) through the labex Voltaire ANR-10-LABX-100-01 and the grant FraMatI ANR-19-CE05-0002. We do not report any conflicts of interest. The code for the computational model used in this manuscript is archived at \url{https://doi.org/10.5281/zenodo.4013969} \citep{hybridPorousInterFoam_code} and can also be found at \url{https://github.com/Franjcf}. The Estaillades carbonate rock sample was obtained from \cite{sample_dataset} through the \href{https://www.digitalrocksportal.org/}{Digital Rocks Portal}.
\section{Nomenclature}
\nomenclature{$\rho_i$}{Density of phase $i$ ($\unit{kg/m^3}$) }%
\nomenclature{$\rho$}{Single-field fluid density ($\unit{kg/m^3}$) }%
\nomenclature{$\boldsymbol{U}$}{Single-field fluid velocity ($\unit{m/s}$)}%
\nomenclature{$\boldsymbol{U}_r$}{Relative fluid velocity ($\unit{m/s}$)}%
\nomenclature{$m$}{van Genuchten wettability parameter}%
\nomenclature{$p$}{Single-field fluid pressure ($\unit{Pa}$)}%
\nomenclature{$p_c$}{Average capillary pressure ($\unit{Pa}$) }%
\nomenclature{$\boldsymbol{S}$}{Single-field fluid viscous stress tensor ($\unit{Pa}$)}%
\nomenclature{$Q$}{Volumetric fluid flow rate ($\unit{m^3/s}$)}%
\nomenclature{$\Delta P$}{Macroscopic pressure difference ($\unit{Pa}$)}%
\nomenclature{$\gamma$}{Fluid-fluid interfacial tension ($\unit{Pa.m}$)}%
\nomenclature{$\phi$}{Porosity field}%
\nomenclature{$\alpha_w$}{Saturation of the wetting phase}%
\nomenclature{$\alpha_n$}{Saturation of the non-wetting phase}%
\nomenclature{$\mu_i$}{Viscosity of phase $i$ ($\unit{Pa.s}$) }%
\nomenclature{$k_0$}{SRP absolute permeability ($\unit{m^2}$) }%
\nomenclature{$K_0$}{Sample absolute permeability ($\unit{m^2}$) }%
\nomenclature{$k_{r,i}$}{SRP relative permeability for fluid $i$ }%
\nomenclature{$K_{r,i}$}{Sample relative permeability for fluid $i$ }%
\nomenclature{$\boldsymbol{F}_{c}$}{Average capillary forces ($\unit{Pa/m}) $}%
\nomenclature{$C_{\alpha}$}{Parameter for the compression velocity model}%
\nomenclature{$M_i$}{Mobility of phase $i$ ($\unit{m^3/kg.s}$)}%
\nomenclature{$M$}{Total mobility ($\unit{m^3/kg.s}$)}%
\nomenclature{$\theta_r$}{Rock surface contact angle}%
\nomenclature{$\theta_p$}{SRP surface contact angle}%
\nomenclature{$\boldsymbol{n}_{wall}$}{Normal vector to the porous surface}%
\nomenclature{$\boldsymbol{t}_{wall}$}{Tangent vector to the porous surface}%
\nomenclature{$p_{c,0}$}{Entry capillary pressure ($\unit{Pa}$)}%
\nomenclature{$V_f$}{Total volume of fluid in the sample ($\unit{m^3}$) }%
\nomenclature{$V_{f,i}$}{Total volume of fluid $i$ in the sample ($\unit{m^3}$) }%
\printnomenclature\label{nom}
\bibliographystyle{chicago}
\section{Additional Relative Permeability Curves}\label{additionalrelperm}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{S1.pdf}
\caption{A) The effects of removing the SRP ($\phi =1$) and solidifying the SRP ($\phi = 0$). B) The effects of changing the wettability properties of an SRP with an absolute permeability of $k_0 = 1\times10^{-17}~\mathrm{m}^2$. Unless specified otherwise, all parameters not indicated in the legend are held constant and equal to the values described in Tables 1 and 2 in the main text. Each color pair represents the oil (top) and water (bottom) relative permeability curves for a given simulated case.}
\end{center}
\end{figure}
\newpage
\section{SRP Impact on Oil Breakthrough Times}\label{times}
\begin{figure}[h!]
\includegraphics[width=0.9\textwidth]{breakthrough_time.pdf}
\caption{\label{fig:breakthrough_time} Normalized oil breakthrough times for select cases. Each label shows the only varied parameter with respect to the base simulation. The percentages to the right of the bars show the percent change in breakthrough time with respect to the aforementioned base simulation specified in Section 3.}
\end{figure}
\newpage
\section{SRP Impact on Residual Water Saturations}\label{residualwater}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\textwidth]{Residual_Water_Saturation.pdf}
\caption{\label{fig:residual_saturations} Residual water saturations for select cases. Each label shows the only varied parameter with respect to the base simulation. The percentages to the right of the bars show the percent change in residual saturations with respect to the base simulation specified in Section 3.}
\end{center}
\end{figure}
\bibliographystyle{elsarticle-harv}
\section*{Acknowledgements}
Fred Roosta was partially supported by the Australian Research Council through a Discovery Early Career Researcher Award (DE180100923).
Stephen Wright was partially supported by NSF Awards 1740707 and 2023239; DOE ASCR under Subcontract 8F-30039 from Argonne National Laboratory; Award N660011824020 from the DARPA Lagrange Program; and AFOSR under subcontract UTA20-001224 from the University of Texas-Austin. Michael Mahoney would also like to acknowledge DARPA, NSF, and ONR for providing partial support of this work.
\bibliographystyle{abbrvnat}
\section{Introduction}
We consider the following unconstrained optimization problem
\begin{equation}
\label{eqn:basic_problem}
\min_{{\bf x}\in{\mathbb R}^d} f({\bf x}),
\end{equation}
where $f:\bbR^d \to \bbR $ is a smooth but nonconvex function.
At the heart of many machine learning and scientific computing applications lies the problem of finding an (approximate) minimizer of \cref{eqn:basic_problem}.
Faced with modern ``big data'' problems, many classical optimization algorithms \citep{numopt,bertsekas1999nonlinear} are inefficient in terms of memory and/or computational overhead.
Much recent research has focused on approximating various aspects of these algorithms.
For example, efficient variants of first-order algorithms, such as the stochastic gradient method, make use of inexact approximations of the gradient.
The defining element of second-order algorithms is the use of the curvature information from the Hessian matrix.
In these methods, the main computational bottleneck lies with evaluating the Hessian, or at least being able to perform matrix-vector products involving the Hessian.
Evaluation of the gradient may continue to be an unacceptably expensive operation in second-order algorithms too.
Hence, in adapting second-order algorithms to machine learning and scientific computing applications, we seek to {\em approximate} the computations involving the Hessian and the gradient, while preserving much of the convergence behavior of the exact underlying second-order algorithm.
Second-order methods use curvature information to nonuniformly rescale the gradient in a way that often makes it a more ``useful'' search direction, in the sense of providing a greater decrease in function value.
Second-order information also opens the possibility of convergence to points that satisfy second-order necessary conditions for optimality, that is, ${\bf x}$ for which $\|\nabla f({\bf x})\|= 0$ and $\nabla^2 f({\bf x})\succeq \bm{0}$.
For nonconvex machine learning problems, first-order stationary points include saddle points, which are undesirable for obtaining good generalization performance \citep{dauphin2014identifying,choromanska2015loss,saxe2013exact,lecun2012efficient}.
The canonical example of second-order methods is the classical Newton's method, which in its pure form is often written as
\begin{align*}
{\bf x}_{k+1} = {\bf x}_k + \alpha_k {\bf d}_k, \quad \mbox{where ${\bf d}_k =-{\bf H}_k^{-1} {\bf g}_k$,}
\end{align*}
where ${\bf H}_k = \nabla^2 f({\bf x}_k)$ is the Hessian, ${\bf g}_k = \nabla f({\bf x}_k)$ is the gradient, and $\alpha_k $ is some appropriate step-size, often chosen using an Armijo-type line-search \cite[Chapter~3]{numopt}.
A more practical variant for large-scale problems is Newton-Conjugate-Gradient (Newton-CG), in which the linear system ${\bf H}_k {\bf d}_k = -{\bf g}_k$ is solved inexactly using the conjugate gradient (CG) algorithm \citep{steihaug1983conjugate}.
Such an approach requires access to the Hessian matrix only via matrix-vector products; it does not require ${\bf H}_k$ to be evaluated explicitly.
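As a point of reference for what follows, here is a minimal sketch of such a matrix-free Newton-CG step (a generic illustration, not the safeguarded procedure analyzed below); note that plain CG assumes a positive definite ${\bf H}_k$, the assumption that Capped CG removes.
\begin{verbatim}
# Minimal matrix-free Newton-CG step: solve H d = -g with CG, accessing
# the Hessian only through Hessian-vector products. Plain CG assumes a
# positive definite H; the capped variant discussed below removes that.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def newton_cg_step(grad_f, hess_vec, x, maxiter=100):
    g = grad_f(x)
    H = LinearOperator((x.size, x.size), matvec=lambda v: hess_vec(x, v))
    d, _ = cg(H, -g, maxiter=maxiter)
    return d

# Toy usage on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
x_new = x + newton_cg_step(lambda x: A @ x - b, lambda x, v: A @ v, x)
print(x_new)  # ~ A^{-1} b after a single Newton step
\end{verbatim}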
Recently, a new variant of the Newton-CG algorithm was proposed in \cite{royer2018newton} that can be applied to large-scale non-convex problems.
This algorithm is equipped with certain safeguards and enhancements that allow worst-case complexity to be bounded in terms of the number of iterations and the total running time.
However, this approach relies on the exact evaluation of the gradient and on matrix-vector multiplication involving the exact Hessian at each iteration.
Such operations can be prohibitively expensive in machine learning problems.
For example, when the underlying optimization problem has the finite-sum form
\begin{equation}
\label{eq:finte_sum_problem}
\min_{{\bf x}\in{\mathbb R}^d} f({\bf x}) = \sum_{i=1}^n f_i({\bf x}),
\end{equation}
exact computation of the Hessian/gradient can be costly when $ n \gg 1 $, requiring a complete pass through the training data set.
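For such finite sums, one natural way (among several) to form inexact gradients and Hessian-vector products is uniform subsampling, sketched below; the per-component oracles \texttt{grad\_i} and \texttt{hvp\_i} are placeholders supplied by the user's model.
\begin{verbatim}
# Mini-batch (subsampled) gradient and Hessian-vector product estimates
# for f(x) = sum_i f_i(x). One possible source of the inexact g_t and
# H_t used later; `grad_i` and `hvp_i` are user-supplied oracles.
import numpy as np

def subsampled_derivatives(grad_i, hvp_i, x, n, batch, rng):
    S = rng.choice(n, size=batch, replace=False)
    g = (n / batch) * sum(grad_i(i, x) for i in S)
    hvp = lambda v: (n / batch) * sum(hvp_i(i, x, v) for i in S)
    return g, hvp
\end{verbatim}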
Our work here builds upon that of \cite{royer2018newton} but allows for \refboth{inexactness in computation of gradients and Hessians, while obtaining a similar complexity result to the earlier paper.}
\subsection{Related work}
Since deep learning became ubiquitous, first-order methods such as gradient descent and its adaptive, stochastic variants \citep{kingma2014adam,duchi2011adaptive} have become the most popular class of optimization algorithms in machine learning; see the recent textbooks \cite{beck2017first,lan2020first,lin2020accelerated,WriR21} for in-depth treatments.
These methods are easy to implement, and their per-iteration cost is low compared to second-order alternatives.
Although classical theory for first-order methods guarantees convergence only to first-order optimal (stationary) points, \cite{ge2015escaping,jin2017escape,levy2016power} argued that stochastic variants of certain first-order methods such as SGD have the potential of escaping saddle points and converging to second-order stationary points.
The effectiveness of such methods usually requires painstaking fine-tuning of their (often many) hyperparameters, and the number of iterations they require to escape saddle regions can be large.
By contrast, second-order methods can make use of curvature information (via the Hessian) to escape saddle points efficiently and ultimately converge to second-order stationary points. This behavior is seen in trust-region methods \citep{conn2000trust,curtis2014trust,Cur19a}, cubic regularization \cite{nesterov2006cubic} and its adaptive variants (ARC) \citep{cartis2011adaptiveI,cartis2011adaptiveII}, as well as line-search based second-order methods \citep{royer2018complexity,royer2018newton}.
Subsequent to \cite{cartis2011adaptiveI,cartis2011adaptiveII,cartis2012complexity}, which were among the first works to study Hessian approximations in ARC and trust-region algorithms, respectively, \cite{xuNonconvexTheoretical2017} analyzed the optimal complexity of both trust-region and cubic regularization methods, in which the Hessian matrix is approximated under milder conditions.
Extension to gradient approximations was then studied in \cite{tripuraneni2017stochasticcubic,yao2018inexact}.
\refone{A novel take on inexact gradient and dynamic Hessian accuracy is investigated in \cite{bellavia2021stochastic}.}
The analysis in \cite{gratton2018complexity,cartis2018global,blanchet2019convergence} relies on probabilistic models whose quality is ensured with a certain probability, but which also allow for approximate evaluation of the objective function.
\refone{Alternative approximations of the function and its derivative are considered in \cite{bellavia2019adaptive}.}
A notable difficulty of these methods concerns the solution of their respective subproblems, which can themselves be nontrivial nonconvex optimization problems.
Some exceptions are \cite{royer2018newton,liu2019stability,roosta2018newton}, whose fundamental operations are linear algebra computations, which are much better understood. While \cite{liu2019stability,roosta2018newton} are limited in their scope to invex problems \citep{mishra2008invexity}, the method in \cite{royer2018newton} can be applied to more general non-convex settings.
In fact, \cite{royer2018newton} enhances the classical Newton-CG approach with safeguards to detect negative curvature in the Hessian, during the solution of the Newton equations to obtain the step ${\bf d}_k$.
Negative curvature directions can subsequently be exploited by the algorithm to make significant progress in reducing the objective.
Moreover, \cite{royer2018newton} gives complexity guarantees that have been shown to be optimal in certain settings.
(Henceforth, we use the term ``Newton-CG'' to refer specifically to the algorithm in \cite{royer2018newton}.)
\subsection{Contribution}
\refone{We describe two new variants of the Newton-CG algorithm of \cite{royer2018newton} in which, to reduce overall computational costs, approximations of gradient and Hessian are employed.
The first variant (\cref{alg:inexact_withlinesearch}) is a line-search method in which only approximate gradient and Hessian information is needed at each step, but it resorts to the use of exact function values in performing a backtracking line search at each iteration.
This requirement is not ideal, since exact evaluation of the objective function can be prohibitive.
To partially remedy this situation, we propose a second variant (\cref{alg:fixed_stepsize}) which, by employing constant step-sizes, obviates the need for exact evaluations of functions, gradients, or Hessians.
The main drawback of this variant is that the fixed step-size depends on bounds on problem-dependent quantities.
While these are available in several problems of interest in machine learning and statistics (see \cref{tab:L_H_bound,tab:K_H_K_g_bound}), they may be hard to estimate for other practical problems.
Moreover, the step-sizes obtained from these bounds tend to be conservative, a situation that arises often in fixed-step optimization methods.}
\refboth{For both these algorithms, we show that the convergence and complexity properties of the original exact algorithm from \cite{royer2018newton} are largely retained.}
Specifically, to achieve ($\epsilon$, $\sqrt{\epsilon}$)-optimality (see Definition~\ref{def:optimality} below) under Condition \ref{cond:opt_epsilong_epsilonh} on gradient and Hessian approximations (see below, in Section \ref{sec:opt}), we show the following.
\begin{itemize}
\item Inexact Newton-CG with backtracking line search (\cref{alg:inexact_withlinesearch}), achieves the optimal iteration
complexity of $\mathcal{O}(\epsilon^{-3/2})$; see Section \ref{sec:opt}.
\item Inexact Newton-CG in which a predefined step size replaces the backtracking line searches (\cref{alg:fixed_stepsize}) achieves the same optimal iteration
complexity of $\mathcal{O}(\epsilon^{-3/2})$; see Section~\ref{sec:fixed_step_size}.
\item We obtain estimates of oracle complexity in terms of $\epsilon$ for both variants.
\item The accuracy required in our gradient approximation changes adaptively with the current gradient size.
One consequence of this feature is to allow cruder gradient approximations in the regions with larger gradients, translating to a more efficient algorithm overall.
\item We empirically illustrate the advantages of our methods on several real datasets; see Section \ref{sec:numerical_experiments}.
\end{itemize}
We note that \cref{alg:inexact_withlinesearch} \refboth{may not be computationally feasible as written, because
the backtracking line searches require repeated (exact) evaluation of $f$.
This requirement} can be prohibitive in situations in which \refboth{exact evaluations of $f$} are impractical.
By contrast, \cref{alg:fixed_stepsize} does not require exact function values and can be implemented strictly as written, given knowledge of the appropriate Lipschitz constant. The steplengths used in Algorithm~\ref{alg:fixed_stepsize} are, however, quite conservative, and better computational results will almost certainly be obtained with Algorithm~\ref{alg:inexact_withlinesearch}, modified to use \refboth{approximations to $f({\bf x})$}; see the numerical examples in \cref{sec:numerical_experiments}.
\section{Algorithms and analysis}
\label{sec:main_result}
We describe our algorithms and present our main theoretical results in this section.
We start with background (Section~\ref{sec:notation_and_assumption}) and important technical ingredients (Section~\ref{sec:main_ingredients}), and then we proceed to our two main algorithms (Section~\ref{sec:opt} and Section~\ref{sec:fixed_step_size}).
\subsection{Notation, definitions, and assumptions}
\label{sec:notation_and_assumption}
Throughout this paper, scalar constants are denoted by regular lower-case and upper-case letters, e.g., $c$ and $K$.
We use bold lowercase and blackboard bold uppercase letters to denote vectors and matrices, e.g., ${\bf a}$ and ${\bm{A}}$, respectively.
The transpose of a real vector ${\bf a}$ is denoted by $ {\bf a}^{T} $.
For a vector ${\bf a}$, and a matrix ${\bm{A}}$, $\|{\bf a}\|$ and $\|{\bm{A}}\|$ denote the vector $\ell_{2}$ norm and the matrix spectral norm, respectively.
Subscripts (as in ${\bf a}_{t}$) denote iteration counters.
The smallest eigenvalue of a symmetric matrix ${\bm{A}}$ is denoted by $\lambda_{\min}({\bm{A}})$.
For any $ {\bf x},{\bf y} \in {\mathbb R}^{d}$, $ \left[{\bf x}, {\bf y}\right] $ denotes the line segment between ${\bf x}$ and ${\bf y}$, i.e., $\left[{\bf x}, {\bf y}\right] = \left\{{\bf z} \mid {\bf z} = {\bf x} + \tau ({\bf y} - {\bf x}), \; 0 \leq \tau \leq 1 \right\}$.
We are interested in expressing certain bounds in terms of their dependence on the small positive convergence tolerance $\epsilon$, especially on certain negative powers of this quantity, ignoring the dependence on all other quantities in the problem, such as dimension, Lipschitz constants, etc. For example, we use ${\mathcal O}(\epsilon^{-1})$ to denote a bound that depends linearly on $\epsilon^{-1}$ and $\tilde{\mathcal O}(\epsilon^{-1})$ for linear dependence on $\epsilon^{-1} |\log\epsilon|$.
For nonconvex problems, the determination of near-optimality can be much more complicated than for convex problems; see the examples of~\cite{murty1987some,hillar2013most}.
In this paper, as in earlier works (see for example \cite{royer2018newton}), we make use of approximate second-order optimality, defined as follows.
\begin{definition}[$(\epsilon_g,\epsilon_H)$-optimality]\label{def:optimality}
Given $0<\epsilon_g,\epsilon_H<1$, ${\bf x}$ is an $(\epsilon_g,\epsilon_H)$-optimal solution of \eqref{eqn:basic_problem} if
\begin{align}\label{cond:stop_cr}
\|\nabla f({\bf x})\| \leq \epsilon_g \quad \text{and} \quad \lambda_{\min}( \nabla^2 f({\bf x}) ) \geq -\epsilon_H.
\end{align}
\end{definition}
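For small $d$, \cref{def:optimality} can be checked directly by brute force; the following sketch is for illustration only, since the algorithms below certify these conditions without ever forming $\nabla^2 f$.
\begin{verbatim}
# Direct check of (eps_g, eps_H)-optimality; only sensible for small d,
# since it forms the full Hessian. The algorithms in this paper certify
# the same conditions matrix-free, via CG and Lanczos.
import numpy as np

def is_eps_optimal(grad_f, hess_f, x, eps_g, eps_H):
    g_ok = np.linalg.norm(grad_f(x)) <= eps_g
    lam_min = np.linalg.eigvalsh(hess_f(x))[0]  # smallest eigenvalue
    return g_ok and lam_min >= -eps_H
\end{verbatim}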
\begin{assumption}\label{ass:lip}
The smooth nonconvex function $f$ is bounded below by the finite value $f_{\text{\rm low}}$. \reftwo{It also has compact sub-level sets, i.e., the set $ \mathcal{L}({\bf x}_{0}) = \left\{{\bf x} \mid f({\bf x}) \leq f({\bf x}_{0})\right\} $ is compact.}
Moreover, on an open set $\mathcal{B} \subset {\mathbb R}^d$ containing all line segments $[{\bf x}_k,{\bf x}_k+{\bf d}_k]$ for iterates ${\bf x}_k$ and search directions ${\bf d}_k$ generated by our algorithms, the objective function has Lipschitz continuous gradient and Hessian, that is, there are constants $0 < L_g < \infty$ and $0 < L_H < \infty$ such that for any $ {\bf x},{\bf y} \in \mathcal{B} $, we have
\[
\norm{\nabla f({\bf x}) - \nabla f({\bf y})} \le L_g \norm{{\bf x} - {\bf y}} \quad \text{and} \quad
\norm{\nabla^2 f({\bf x}) - \nabla^2 f({\bf y})} \le L_H \norm{{\bf x} - {\bf y}} .
\]
\end{assumption}
\refone{Although \cref{ass:lip} is typical in the optimization literature, it nonetheless implies somewhat strong smoothness assumptions on the function. Some related works on various Newton-type methods, e.g., \cite{bellavia2019adaptive,bellavia2021stochastic}, obtain second-order complexity guarantees that require only Lipschitz continuity of the Hessian. It would be interesting to investigate whether our analysis can be modified to allow for such relaxations. We leave such investigations for future work.}
Consequences of Lipschitz continuity of the Hessian, which we will use in later results, include the following bounds for any ${\bf x},{\bf y} \in \mathcal{B}$:
\begin{subequations}
\begin{align}
\label{eq:lipH}
& \norm{ \nabla f({\bf x}) - \nabla f({\bf y}) - \nabla^2 f({\bf y})({\bf x}-{\bf y})} \le \frac{L_H}{2} \norm{{\bf x}-{\bf y}}^2 \\
\label{eq:lipH2}
& f({\bf x}) \le f({\bf y}) + \nabla f({\bf y})^T({\bf x}-{\bf y}) + \frac12 ({\bf x}-{\bf y})^T\nabla^2 f({\bf y})({\bf x}-{\bf y}) + \frac{L_H}{6} \| {\bf x}-{\bf y}\|^3.
\end{align}
\end{subequations}
\refone{An interesting avenue for future research is to try to replace these Lipschitz continuity conditions with milder variants in which the gradient and/or Hessian are required to maintain Lipschitz continuity only along a given set of directions, e.g., the piecewise linear path generated by the iterates such as the corresponding assumption in \cite{xuNonconvexTheoretical2017}.
Our current proof techniques do not allow for such relaxations, but we will look into this possibility in future work.}
For our inexact Newton-CG algorithms, we also require that the approximate gradient and Hessian satisfy the following conditions, for prescribed positive values $ \delta_{g,t} $ and $ \delta_H $.
\begin{condition}\label{cond:appr_gh}
For given $\delta_{g,t}$ and $\delta_H$, we say that the approximate gradient ${\bf g}_t$ and Hessian ${\bf H}_t$ at iteration $t$ are $ \delta_{g,t}$-accurate and $\delta_H$-accurate if
\[
\|{\bf g}_t - \nabla f({\bf x}_t)\| \leq \delta_{g,t} \quad \text{and} \quad\|{\bf H}_t - \nabla^2 f({\bf x}_t)\| \leq \delta_H,
\]
respectively.
\end{condition}
Under these assumptions and conditions, it is easy to show that there exist constants $U_g$ and $U_H$ such that the following are satisfied for all iterates ${\bf x}_t$ in the set defined in Assumption~\ref{ass:lip}:
\begin{equation} \label{eq:bounds}
\|{\bf g}_t\| \leq U_g\quad \text{and} \quad\|{\bf H}_t\| \leq U_H.
\end{equation}
\subsection{Key ingredients of the Newton-CG method}
\label{sec:main_ingredients}
We present the two major components from \cite{royer2018newton} that are also used in our inexact variant of the Newton-CG algorithm.
The first ingredient, Procedure~\ref{alg:capped_cg} (referred to in some places as ``Capped CG''), is a version of the conjugate gradient~\citep{shewchuk1994introduction} algorithm that is used to solve a damped Newton system of the form $\bar {\bf H} {\bf d} = -{\bf g}$, where $\bar{\bf H} = {\bf H} + 2\epsilon \bf{I}$ for some positive parameter $\epsilon$.
Procedure~\ref{alg:capped_cg} is modified to detect indefiniteness in the matrix ${\bf H}$ and, when this occurs, to return a direction along which the curvature of ${\bf H}$ is at most $-\epsilon$.
The second ingredient,
Procedure~\ref{alg:minimum_eigenvalue} (referred to as the ``Minimum Eigenvalue Oracle'' or ``MEO''), checks whether a direction of negative curvature (less than $-\epsilon$ for a given positive argument $\epsilon$) exists for the given matrix ${\bf H}$.
We now discuss each of these procedures in more detail.
\input{capped_conjugate_gradient.tex}
\paragraph{Procedure~\ref{alg:capped_cg} (Capped-CG).}
The well-known classical CG algorithm \citep{shewchuk1994introduction} is used to solve linear systems involving positive definite matrices.
However, this positive-definite requirement is often violated during the iterations for non-convex optimization due to the indefiniteness of Hessians encountered at some iterates.
Capped-CG, proposed by \cite{royer2018newton} and presented in Procedure~\ref{alg:capped_cg} for completeness, is an original way to detect and leverage such negative curvature directions when they are encountered during the CG iterations.
Lines 13-17 in Procedure~\ref{alg:capped_cg} contain the standard CG operations.
When ${\bf H} \succeq -\epsilon {\bf I}$, the tests in lines 22, 26, and 28 that indicate negative curvature will not be activated, and Capped-CG will return an approximate solution ${\bf d} \approx -\bar{\bf H}^{-1}{\bf g}$.
However, when ${\bf H} \not\succeq -\epsilon {\bf I}$, Capped-CG will identify and return a direction of ``sufficient negative curvature'' --- a direction ${\bf d}$ satisfying ${\bf d}^T{\bf H}{\bf d} \leq -\epsilon\|{\bf d}\|^2$.
Such a negative curvature direction is obtained under two circumstances.
First, when the intermediate step ${\bf d}$ (either ${\bf y}_j$ or ${\bf p}_j$) satisfies the negative curvature condition ${\bf d}^T\bar{\bf H} {\bf d} \leq \epsilon \|{\bf d}\|^2$ (Lines \ref{line:nca} and \ref{line:ncb}), Procedure~\ref{alg:capped_cg} terminates and returns that intermediate step.
Second, when the residual ${\bf r}_j$ decays at a slower rate than anticipated by standard CG analysis (Line \ref{line:ncc}), a negative curvature direction can be recovered by the procedure of Lines \ref{line:nc1}, \ref{line:nc2}, and \ref{line:nc3}.
Note that Procedure~\ref{alg:capped_cg} can be called with an optional input $M$, which is an upper bound on $\|{\bf H}\|$.
However, even without a priori knowledge of this upper bound, $M$ can be updated so that, at any point in the execution of the procedure, $M$ is an upper bound on the maximum curvature of ${\bf H}$ revealed up to that point.
Other parameters ($\kappa$, $\tilde \zeta$, $\tau$, $T$) are also updated whenever the value of $M$ changes.
It is not hard to see that $M$ is bounded by $U_H$ throughout the execution of Procedure~\ref{alg:capped_cg}, provided that any initial value of $M$ supplied to the procedure satisfies $M \le U_H$.
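To make the mechanism concrete, the following simplified sketch isolates the curvature-monitoring idea; the residual-decay test of Line \ref{line:ncc}, the checks on the iterates ${\bf y}_j$, and the adaptive parameter updates of the actual procedure are omitted.
\begin{verbatim}
# Simplified sketch of the Capped-CG idea: run CG on Hbar = H + 2*eps*I
# and stop whenever a search direction exposes Hbar-curvature below eps
# (equivalently, H-curvature below -eps). The residual-decay monitor
# and the adaptive parameter updates of Procedure 1 are omitted here.
import numpy as np

def capped_cg_sketch(Hv, g, eps, maxiter=200, tol=1e-8):
    Hbar = lambda v: Hv(v) + 2.0 * eps * v
    y = np.zeros_like(g)
    r = g.copy()                      # residual of Hbar y = -g at y = 0
    p = -r
    for _ in range(maxiter):
        Hp = Hbar(p)
        if p @ Hp < eps * (p @ p):    # sufficient negative curvature
            return p, "negative_curvature"
        alpha = (r @ r) / (p @ Hp)
        y = y + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            return y, "solution"      # approximate Newton direction
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return y, "maxiter"
\end{verbatim}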
Lemma~\ref{lemma:capped_cg_iter_bounded} gives a bound on the number of iterations performed by Procedure~\ref{alg:capped_cg}.
\begin{lemma}[{\citet[Lemma 1]{royer2018newton}}]
\label{lemma:capped_cg_iter_bounded}
The number of iterations of Procedure~\ref{alg:capped_cg} is bounded by
\[
\min \, \left\{ d, J(M, \epsilon, \zeta) \right\},
\]
where $J = J(M,\epsilon, \zeta)$ is the smallest integer such that $\sqrt T (1-\tau)^{J/2} \leq \hat \zeta$. The number of matrix-vector products required is bounded by $2\min\{d, J(M, \epsilon, \zeta)\}+1$, unless all iterates ${\bf y}_i,~i=1,2,\ldots$ are stored, in which case it is $\min\{d, J(M, \epsilon, \zeta)\} + 1$.
For the upper bound of $J(M, \epsilon, \zeta)$, we have
\begin{equation}
J(M, \epsilon, \zeta) \leq \min \, \left\{d, \tilde{\mathcal O}(\epsilon^{-1/2})\right\}.
\end{equation}
\end{lemma}
When the slow decrease in residual is detected (Line 21), a direction of negative curvature for ${\bf H}$ can be extracted from the previous intermediate solutions, as the following result describes.
\begin{lemma}[{\citet[Theorem 2]{royer2018newton}}]
\label{lemma:capped_cg_last_condition_gaurantee}
Suppose that the loop of Procedure~\ref{alg:capped_cg} terminates with $j=\hat J$, where
\[
\hat J \in \{1,2,\ldots, \min\{d, J(M, \epsilon, \zeta)\}\}
\]
satisfies
\[
\|{\bf r}_{\hat J}\| > \max\{\hat\zeta, \sqrt T (1-\tau)^{\hat J/2}\}\|{\bf r}_0\|.
\]
Suppose further that ${\bf y}_{\hat J}^T\bar {\bf H} {\bf y}_{\hat J} \geq \epsilon \|{\bf y}_{\hat J}\|^2$, so that ${\bf y}_{\hat J + 1}$ is computed.
Then we have
\[
\frac{({\bf y}_{\hat J +1} - {\bf y}_i)^T\bar {\bf H} ({\bf y}_{\hat J +1} - {\bf y}_i)}{\|{\bf y}_{\hat J +1} - {\bf y}_i\|^2} < \epsilon, \quad \text{for some } i\in\{0,\ldots,\hat J -1\}.
\]
\end{lemma}
Note that ${\bf d}^T \bar{\bf H} {\bf d} \le \epsilon \|{\bf d}\|^2 \Longleftrightarrow {\bf d}^T {\bf H} {\bf d} \le -\epsilon \|{\bf d}\|^2$.
Procedure~\ref{alg:capped_cg} is invoked by the Newton-CG procedure, Algorithm~\ref{alg:inexact_withlinesearch} (described in \cref{sec:opt}), when the current iterate ${\bf x}_k$ has $\| {\bf g}_k \| \ge \epsilon_g>0$.
Procedure~\ref{alg:capped_cg} can either return the approximate Newton direction or a negative curvature one.
After describing how this output vector is modified by Algorithm~\ref{alg:inexact_withlinesearch}, in the next section, we state a result (Lemma~\ref{lemma:d_from_cappedcg}) about the properties of the resulting step.
In the case of $\|{\bf g}_k \|<\epsilon_g$, Algorithm~\ref{alg:inexact_withlinesearch} calls Procedure~\ref{alg:minimum_eigenvalue} to explicitly seek a direction of sufficient negative curvature.
We describe this procedure next.
\input{min_eigenvalue_procedure}
\paragraph{Procedure~\ref{alg:minimum_eigenvalue} (Minimum Eigenvalue Oracle).}
This procedure searches for a direction spanned by the negative spectrum of a given symmetric matrix or, alternately, verifies that the matrix is (almost) positive definite.
Specifically, for a given $\epsilon>0$, Procedure~\ref{alg:minimum_eigenvalue} finds a negative curvature direction ${\bf v}$ of ${\bf H}$ such that ${\bf v}^T{\bf H}{\bf v}\leq -\epsilon\|{\bf v}\|^2/2$, or else certifies that ${\bf H} \succeq -\epsilon {\bf I}$.
The probability that the certificate is issued but $\lambda_{\min} ({\bf H}) < -\epsilon$ is bounded above by some (small) specified value $\delta$.
As indicated in \cite{royer2018newton}, this minimum eigenvalue oracle can be implemented using the Lanczos process or the classical CG algorithm. (In this paper, we choose the former.)
Both of these approaches have the same complexity, given in the following result.
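A compact way to realize such an oracle with off-the-shelf tools is sketched below, using SciPy's Lanczos-based eigensolver started from a random vector; this is a convenience stand-in, since it does not expose the explicit iteration cap through which Lemma~\ref{lemma:fail_min_eigen_oracle} controls the failure probability $\delta$.
\begin{verbatim}
# Lanczos-based sketch of the minimum eigenvalue oracle: estimate the
# smallest eigenpair of H from a random start vector; return a direction
# of sufficient negative curvature, or (probabilistically) certify that
# H >= -eps*I. SciPy's eigsh is a convenience stand-in for Procedure 2.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def minimum_eigenvalue_oracle(Hv, d, eps, rng):
    H = LinearOperator((d, d), matvec=Hv)
    v0 = rng.standard_normal(d)
    lam, V = eigsh(H, k=1, which="SA", v0=v0, tol=eps / 2)
    v = V[:, 0]
    if lam[0] <= -eps / 2:
        return v, lam[0]   # direction with v^T H v <= -(eps/2)||v||^2
    return None, lam[0]    # certificate: lambda_min(H) >= -eps (w.h.p.)
\end{verbatim}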
\begin{lemma}[{\citet[Lemma 2]{royer2018newton}}]
\label{lemma:fail_min_eigen_oracle}
Suppose that the Lanczos method is used to estimate the smallest eigenvalue of ${\bf H}$ starting from a random vector drawn from the uniform distribution on the unit sphere, where $\|{\bf H} \| \le M$.
For any $\delta \in (0,1)$, this approach finds the smallest eigenvalue of ${\bf H}$ to an absolute precision of $\epsilon/2$, together with a corresponding direction ${\bf v}$, in at most
\begin{equation}\label{eqn:iter_neg}
\min\, \left\{d, 1+ \left\lceil \frac{\ln(2.75d/\delta^2)}{2}\sqrt{\frac{M}{\epsilon}} \right\rceil \right\} \quad \mbox{iterations,}
\end{equation}
with probability at least $1-\delta$. Each iteration requires evaluation of a matrix-vector product involving ${\bf H}$.
\end{lemma}
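To make the oracle concrete, the following Python sketch implements a Lanczos-based variant that accesses ${\bf H}$ only through matrix-vector products, with the iteration cap taken from \eqref{eqn:iter_neg}. It is a minimal illustration under our own naming conventions (the bound $M$ on $\|{\bf H}\|$ is assumed to be supplied by the caller), not the implementation used elsewhere in this paper.
\begin{verbatim}
import numpy as np

def min_eig_oracle(hvp, d, eps, delta, M, rng=None):
    """Sketch of a Lanczos-based minimum eigenvalue oracle.
    hvp(v) returns H @ v for a symmetric H with ||H|| <= M.
    Returns ('NC', v) with v^T H v <= -(eps/2)||v||^2, or ('PD', None),
    certifying lambda_min(H) >= -eps w.p. at least 1 - delta."""
    rng = np.random.default_rng() if rng is None else rng
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)               # uniform start on the unit sphere
    kmax = min(d, 1 + int(np.ceil(0.5 * np.log(2.75 * d / delta**2)
                                  * np.sqrt(M / eps))))
    Q = np.zeros((d, kmax))
    alphas, betas = [], []
    q_prev, beta = np.zeros(d), 0.0
    for j in range(kmax):
        Q[:, j] = q
        w = hvp(q) - beta * q_prev       # one Hessian-vector product
        a = q @ w
        w = w - a * q
        w = w - Q[:, :j+1] @ (Q[:, :j+1].T @ w)   # reorthogonalize
        alphas.append(a)
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        lam, U = np.linalg.eigh(T)       # Ritz pairs of tridiagonal T
        if lam[0] <= -eps / 2.0:         # negative curvature certified
            v = Q[:, :j+1] @ U[:, 0]
            return 'NC', v / np.linalg.norm(v)
        beta = np.linalg.norm(w)
        if beta == 0.0:                  # Krylov subspace is invariant
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    return 'PD', None
\end{verbatim}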
\subsection{Inexact Newton-CG algorithm with line search}
\label{sec:opt}
\input{inexact_alg_withlinesearch}
\cref{alg:inexact_withlinesearch} shows our inexact damped Newton-CG algorithm, which calls Procedures~\ref{alg:capped_cg} and~\ref{alg:minimum_eigenvalue}. In this section, we establish worst case iteration complexity to achieve $(\epsilon_g,\epsilon_H)$-optimality according to \cref{def:optimality}.
Under mild conditions on the approximate gradient and Hessian, the complexity estimate is the same as for the exact Newton-CG algorithm described in \cite{royer2018newton}.
\refone{For \cref{alg:inexact_withlinesearch}, approximations of the Hessian and gradient can be used throughout. However, to obtain the step-size $ \alpha_k $, \cref{alg:inexact_withlinesearch} requires exact evaluation of the function.
We avoid the need for these exact evaluations in the fixed-step variant, \cref{alg:fixed_stepsize}, to be studied in \cref{sec:fixed_step_size}.}
Apart from the use of approximate Hessian and gradient, Lines \ref{line:threshold}-\ref{line:threshold_end} constitute a notable difference between our algorithm and the exact counterpart of \cite{royer2018newton}: in these lines, our method calls Procedure~\ref{alg:minimum_eigenvalue} to obtain a direction of sufficient negative curvature whenever the direction ${\bf d}_k$ derived from Procedure~\ref{alg:capped_cg} is small, specifically when $\|{\bf d}_k\|\leq \epsilon_g/\epsilon_H$.
If such a direction is found, we perform a backtracking line search along it.
Otherwise, if Procedure~\ref{alg:minimum_eigenvalue} certifies that no direction of sufficient negative curvature exists, we terminate and return the point ${\bf x}_k+{\bf d}_k$, which already satisfies the second-order optimality condition.
In theory, this modification is critical to obtaining the optimal worst-case complexity.
\reftwo{In practice, however, we have observed that performing a line search along such $ {\bf d}_k $, despite the fact that $\|{\bf d}_k\|\leq \epsilon_g/\epsilon_H$, results in acceptable progress in reducing the function. In other words, we believe that Lines 9-16 of \cref{alg:inexact_withlinesearch,alg:fixed_stepsize} serve a mainly theoretical purpose and can be safely omitted in practical implementations.}
\refboth{Another notable difference with previous versions of this general approach is the use of a ``bidirectional'' line search when ${\bf d}_k$ is a negative curvature direction.
We do backtracking along both positive and negative directions, ${\bf d}_k$ and $-{\bf d}_k$, because we are unable to determine with certainty the sign of ${\bf d}_k^T \nabla f({\bf x}_k)$, since we have access only to the approximation ${\bf g}_k$ of $\nabla f({\bf x}_k)$.
This additional algorithmic feature causes only modest changes to the analysis of the function decrease along negative curvature directions, as we point out in the appropriate results below.}
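As an illustration of this feature, the following sketch implements the bidirectional backtracking loop, assuming the acceptance test \eqref{eq:suffdecr} has the cubic form $f({\bf x}_k) - f({\bf x}_k + \alpha {\bf d}_k) \ge \frac{\eta}{6}|\alpha|^3\|{\bf d}_k\|^3$ used in the proofs below; the function names are ours, and the sketch is not the verbatim algorithm listing.
\begin{verbatim}
import numpy as np

def bidirectional_backtracking(f, x, d, eta, theta, j_max=50):
    """Try alpha = +theta^j and alpha = -theta^j for j = 0, 1, ...
    until the cubic sufficient-decrease test is met.  Both signs are
    tried because only an approximation g of grad f(x) is available,
    so the sign of d^T grad f(x) is uncertain."""
    fx = f(x)
    dnorm3 = np.linalg.norm(d) ** 3
    for j in range(j_max + 1):
        for alpha in (theta ** j, -theta ** j):
            if fx - f(x + alpha * d) >= (eta / 6) * abs(alpha)**3 * dnorm3:
                return alpha             # accepted step size
    return 0.0  # not reached under the assumptions of the lemmas below
\end{verbatim}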
We begin our complexity analysis with a result that summarizes important properties of the direction ${\bf d}_k$ that is derived from the capped CG algorithm, Procedure~\ref{alg:capped_cg}.
(The proof is identical to that of the cited result \cite[Lemma~3]{royer2018newton}, except that we use approximate values of the Hessian and gradient of $f$ here.)
\begin{lemma}[{\citet[Lemma~3]{royer2018newton}}]
\label{lemma:d_from_cappedcg}
Suppose that Assumption~\ref{ass:lip} is satisfied.
Suppose that Procedure~\ref{alg:capped_cg} is invoked at an iterate ${\bf x}_k$ of Algorithm~\ref{alg:inexact_withlinesearch} (so that $\|{\bf g}_k\| \ge \epsilon_g>0$) with inputs ${\bf H}={\bf H}_k$, ${\bf g}={\bf g}_k$, $\epsilon=\epsilon_H$, and $\zeta$.
Suppose that ${\bf d}_k$ in Algorithm~\ref{alg:inexact_withlinesearch} is obtained from the output vector ${\bf d}$ of Procedure~\ref{alg:capped_cg}, after possible scaling and change of sign.
Then one of the two following statements holds.
\begin{enumerate}
\item $d_{\text{\rm type}}=\text{\sc SOL}$ and ${\bf d}_k={\bf d}$ satisfies
\begin{subequations}
\begin{align}\label{eqn:pos}
{\bf d}_k^T {\bf H}_k {\bf d}_k \geq -\epsilon_H\|{\bf d}_k\|^2,
\end{align}
\begin{align}\label{eqn:dk_sol_upper}
\|{\bf d}_k\| \leq 1.1 \epsilon_H^{-1} \|{\bf g}_k\|,
\end{align}
\begin{align}\label{eqn:dk_sol_rk}
\|\hat {\textnormal{r}}_k\| \leq \frac12 \epsilon_H\zeta\|{\bf d}_k\|,
\end{align}
\end{subequations}
where
\begin{align}\label{eq:r_k}
\hat {\textnormal{r}}_k = ({\bf H}_k + 2\epsilon_H{\bf I}){\bf d}_k + {\bf g}_k.
\end{align}
\item $d_{\text{\rm type}}=\text{\sc NC}$ and ${\bf d}_k$ satisfies
\[
{\bf d}_k = -\mathrm{sgn}({\bf d}^T{\bf g}_k)\frac{|{\bf d}^T{\bf H}_k{\bf d}|}{\norm{{\bf d}}^2} \frac{{\bf d}}{\norm{{\bf d}}},
\]
and ${\bf d}_k$ satisfies
\begin{equation}\label{eqn:dk_nc}
\frac{{\bf d}_k^T{\bf H}_k{\bf d}_k}{\| {\bf d}_k \|^2} = -\| {\bf d}_k \| \le -\epsilon_H.
\end{equation}
\end{enumerate}
\end{lemma}
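The rescaling in the NC case can be written compactly. The following sketch (a helper of our own, with \texttt{hvp} denoting an approximate Hessian-vector product) produces ${\bf d}_k$ from the raw output ${\bf d}$ so that \eqref{eqn:dk_nc} holds.
\begin{verbatim}
import numpy as np

def scale_nc_direction(d, g, hvp):
    """Rescale a negative curvature output d of the capped CG procedure:
    d_k = -sgn(d^T g) * (|d^T H d| / ||d||^2) * d / ||d||,
    so that ||d_k|| = |d^T H d| / ||d||^2 and
    d_k^T H d_k / ||d_k||^2 = -||d_k||."""
    Hd = hvp(d)                          # approximate H_k @ d
    nrm = np.linalg.norm(d)
    curvature = abs(d @ Hd) / nrm**2     # |d^T H d| / ||d||^2
    return -np.sign(d @ g) * curvature * d / nrm
\end{verbatim}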
In order to establish the iteration complexity of Algorithm~\ref{alg:inexact_withlinesearch}, we first present a sufficient condition on the degree of the inexactness of the gradient and Hessian.
\begin{condition}\label{cond:opt_epsilong_epsilonh}
We require the inexact gradient ${\bf g}_k$ and Hessian ${\bf H}_k$ to satisfy Condition \ref{cond:appr_gh} with
\[
\delta_{g, k} \leq \frac{1-\zeta}{8}\max\Big(\epsilon_g, \min\left(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|\right)\Big),~~~\text{and}~~~\delta_H \leq \left(\frac{1-\zeta}{4}\right)\epsilon_H.
\]
\end{condition}
One could simplify Condition \ref{cond:opt_epsilong_epsilonh} to have an iteration-independent condition on $ \delta_{g,k} \equiv \delta_{g} $, namely,
\begin{align*}
\delta_{g} \leq \frac{1-\zeta}{8} \epsilon_g.
\end{align*}
However, the adaptivity of the iteration-dependent version of Condition \ref{cond:opt_epsilong_epsilonh} through ${\bf g}_k$ and ${\bf g}_{k+1}$ offers practical advantages.
Indeed, in many iterations, one can expect $\| {\bf g}_k \|$ and $\| {\bf g}_{k+1}\|$ to be of similar magnitudes.
Also, as shown in Lemma~\ref{lemma:d_from_cappedcg}, we have $\|{\bf d}_k\| \leq 1.1 \epsilon_H^{-1} \|{\bf g}_k\|$.
Thus, the three terms in $\min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)$ are often roughly of the same order, and usually larger than $\epsilon_g$.
These observations suggest that when the true gradient is large, we can employ loose approximations.
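For concreteness, the right-hand side of the bound on $\delta_{g,k}$ can be evaluated as in the following sketch (a helper of our own, not part of the algorithm listings). Since $\|{\bf d}_k\|$ and $\|{\bf g}_{k+1}\|$ are available only after the step is computed, in practice the condition would be enforced a posteriori, refining the gradient estimate if the check fails.
\begin{verbatim}
def grad_tol_bound(eps_g, eps_H, zeta, norm_dk, norm_gk, norm_gk1):
    """Adaptive upper bound on delta_{g,k} from the condition above."""
    adaptive = min(eps_H * norm_dk, norm_gk, norm_gk1)
    return (1.0 - zeta) / 8.0 * max(eps_g, adaptive)
\end{verbatim}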
Given Condition~\ref{cond:opt_epsilong_epsilonh}, the proofs of the complexity bounds boil down to three parts.
First, we bound the decrease in the objective function $f({\bf x}_k)$ (Lemma~\ref{lemma:dtype_sol_opt}) when taking the damped Newton step ${\bf d}_k$ (that is, when $d_{\text{\rm type}}=\text{\sc SOL}$ on return from Procedure~\ref{alg:capped_cg} and $\|{\bf d}_k\|$ is not too small).
Second, we bound the decrease in the objective when a negative curvature direction is encountered in Procedure~\ref{alg:capped_cg} (Lemma~\ref{lemma:nc_from_cappedcg}) or Procedure~\ref{alg:minimum_eigenvalue} (Lemma~\ref{lemma:nc_both_procedure}).
Third, for Lines 9-18 in Algorithm~\ref{alg:inexact_withlinesearch}, we show that the algorithm can be terminated after the update in Line \ref{line:term1}.
In particular, when the direction returned by Procedure~\ref{alg:capped_cg} is sufficiently small and Procedure~\ref{alg:minimum_eigenvalue} has not detected a direction of large negative curvature, Line \ref{line:term1} terminates at a point satisfying the required optimality conditions (Lemma~\ref{lemma:terminate_step_condition_withlinesearch}).
We start with the case in which an inexact Newton step is used.
\begin{lemma}
\label{lemma:dtype_sol_opt}
Suppose that \cref{ass:lip} is satisfied and that Condition \ref{cond:opt_epsilong_epsilonh} holds for all $k$.
Suppose that at iteration $k$ of \cref{alg:inexact_withlinesearch}, we have $\|{\bf g}_k\| \ge \epsilon_g$, so that Procedure~\ref{alg:capped_cg} is called.
When Procedure~\ref{alg:capped_cg} outputs a direction ${\bf d}_k$ with $d_{\text{\rm type}}=\text{\sc SOL}$ and $\norm{{\bf d}_k} > {\epsilon_g}/\epsilon_H$,
then the backtracking line search terminates with $\alpha_k=\theta^{j_k}$ for some $j_k\leq j_{\text{\rm sol}}+1$, where
\[
j_{\text{\rm sol}} = \left\lceil \frac{1}{2} \log_\theta \left( \frac{3(1 - \zeta)\epsilon_H^2}{4.4 U_g(L_H + \eta)} \right) \right\rceil,
\]
and the resulting step ${\bf x}_{k+1}={\bf x}_k+\alpha_k{\bf d}_k$ satisfies
\begin{equation} \label{eq:csol}
f({\bf x}_k) -f({\bf x}_{k+1}) \geq c_{\text{\rm sol}} \max\left\{0, \min \left( \frac{(\|{\bf g}_{k+1} \| - \delta_{g,k} - \delta_{g,k+1})^3}{(2.5 \epsilon_H)^3}, (2.5 \epsilon_H)^3, \epsilon_g^{3/2}\right)\right\},
\end{equation}
where
\[
c_{\text{\rm sol}} = \frac{\eta}{6} \min\left\{ \frac{1}{(1+2L_H)^{3/2}}, \left[ \frac{3\theta^2(1 - \zeta)}{4(L_H + \eta)}\right]^{3/2} \right\}.
\]
\end{lemma}
\begin{proof}
When $d_{\text{\rm type}}=\text{\sc SOL}$, ${\bf d}_k$ is the solution of the inexact regularized Newton equations.
We first prove that the inner product ${\bf d}_k^T\nabla f({\bf x}_k)$ is negative, so that ${\bf d}_k$ is a descent direction for $f$:
\begin{equation*}
\begin{aligned}
{\bf d}_k^T \nabla f({\bf x}_k) &\leq {\bf d}_k^T{\bf g}_k + \delta_{g,k}\|{\bf d}_k\| \\
& = {\bf d}_k^T \hat {\textnormal{r}}_k - {\bf d}_k^T({\bf H}_k+2\epsilon_H{\bf I}){\bf d}_k + \delta_{g,k} \|{\bf d}_k\| && (\mbox{from \cref{eq:r_k}})\\
& \leq \|{\bf d}_k\| \|\hat {\textnormal{r}}_k\| - \epsilon_H \|{\bf d}_k\|^2 + \delta_{g,k}\|{\bf d}_k\| &&(\mbox{from \cref{eqn:pos}})\\
& \leq \frac12 \epsilon_H \zeta \| {\bf d}_k \|^2 - \epsilon_H \|{\bf d}_k\|^2 + \delta_{g,k}\|{\bf d}_k\| && (\mbox{from \cref{eqn:dk_sol_rk}}) \\
&\leq -\frac{1}{2}\epsilon_H \|{\bf d}_k\|^2 + \frac{1-\zeta}{8}\max\left(\epsilon_g, \epsilon_H\|{\bf d}_k\|\right) \|{\bf d}_k\| && (\mbox{from $\zeta \in (0,1)$ and Condition~\ref{cond:opt_epsilong_epsilonh}}) \\
& = -\frac{1}{2}\epsilon_H \|{\bf d}_k\|^2 + \frac{1-\zeta}{8}\epsilon_H\|{\bf d}_k\|^2 && (\mbox{from $\norm{{\bf d}_k} > {\epsilon_g}/\epsilon_H$})\\
&< -\frac{3}{8}\epsilon_H \|{\bf d}_k\|^2.
\end{aligned}
\end{equation*}
We consider two cases here.
\textbf{Case 1:} Consider first the case in which the value $\alpha_k=1$ is accepted by the backtracking line search procedure.
We first note that in the case $\| {\bf g}_{k+1} \| - \delta_{g,k} - \delta_{g,k+1} \le 0$, the claim \eqref{eq:csol} is satisfied trivially, because $f({\bf x}_{k+1})<f({\bf x}_k)$ and the right-hand side of \eqref{eq:csol} is $0$.
Thus we assume in the rest of the argument for this case that $\| {\bf g}_{k+1} \| - \delta_{g,k} - \delta_{g,k+1} > 0$. We have
\begin{align*}
& \|{\bf g}_{k+1}\| = \|{\bf g}_{k+1}-{\bf g}_k + {\bf g}_k\| \\
& = \|{\bf g}_{k+1}-\nabla f_{k+1} + \nabla f_{k+1} - {\bf g}_k - \nabla f_k + \nabla f_{k} -\nabla^2 f({\bf x}_k) {\bf d}_k - 2\epsilon_H {\bf d}_k + \nabla^2 f({\bf x}_k) {\bf d}_k - {\bf H}_k{\bf d}_k + \hat{\textnormal{r}}_k \|\\
&\leq \delta_{g,k} + \delta_{g,k+1} + \|\nabla f_{k+1}-\nabla f_k -\nabla^2 f({\bf x}_k) {\bf d}_k\|+\| 2\epsilon_H {\bf d}_k\|+\| \nabla^2 f({\bf x}_k) {\bf d}_k - {\bf H}_k{\bf d}_k\|+\|\hat{\textnormal{r}}_k \|\\
&\leq \delta_{g,k} + \delta_{g,k+1} + \frac{L_H}2 \|{\bf d}_k\|^2 + 2\epsilon_H\|{\bf d}_k\| + \delta_H\|{\bf d}_k\| + \frac12 \epsilon_H\zeta\|{\bf d}_k\| \quad \quad (\mbox{from \cref{eqn:dk_sol_rk}})\\
& = \delta_{g,k} + \delta_{g,k+1} + \left(2\epsilon_H + \delta_H + \frac12 \epsilon_H\zeta \right)\|{\bf d}_k\| + \frac{L_H}2 \|{\bf d}_k\|^2 \\
& \leq \delta_{g,k} + \delta_{g,k+1} + \left(2\epsilon_H + \frac{1-\zeta}{2}\epsilon_H + \frac12 \epsilon_H\zeta \right)\|{\bf d}_k\| + \frac{L_H}2 \|{\bf d}_k\|^2 \quad \quad (\mbox{from Condition~\ref{cond:opt_epsilong_epsilonh}}) \\
&= \delta_{g,k} + \delta_{g,k+1} + 2.5 \epsilon_H \|{\bf d}_k\| + \frac{L_H}2 \|{\bf d}_k\|^2 .
\end{align*}
We thus have $A \|{\bf d}_k \|^2 + B\|{\bf d}_k \| - C \ge 0$, where $A=L_H/2$, $B=2.5 \epsilon_H$, and $C = \| {\bf g}_{k+1}\| - \delta_{g,k} - \delta_{g,k+1} > 0$. Since for any $D \ge 0$ and $t \ge 0$ we have $-1 + \sqrt{1+Dt} \ge \left(-1 + \sqrt{1+D}\right) \min\left\{t,1\right\}$ (see~\citet[Lemma 17]{royer2018complexity}), it follows that
\begin{align*}
\| {\bf d}_k \| \ge \frac{-B + \sqrt{B^2+4AC}}{2A} & = \left( \frac{-1+ \sqrt{1+4AC/B^2}}{2A} \right) B \ge \left( \frac{-1+\sqrt{1+4A}}{2A} \right) \min \left\{C/B,B\right\}\\
& = \left( \frac{2}{\sqrt{1+4A}+1} \right) \min \left\{C/B,B\right\} \ge \left( \frac{1}{\sqrt{1+4A}} \right) \min \left\{C/B,B\right\},
\end{align*}
where the last step follows from $A>0$.
By substituting for $A$, $B$, and $C$, we obtain
\[
\|{\bf d}_k\| \geq \frac{1}{\sqrt{1+2L_H}} \min\left\{\frac{\|{\bf g}_{k+1}\|-\delta_{g,k} - \delta_{g,k+1}}{2.5 \epsilon_H}, 2.5 \epsilon_H\right\}.
\]
Since $\alpha_k=1$ was accepted by the backtracking line search, we have
\begin{align*}
f({\bf x}_k) - f({\bf x}_k+{\bf d}_k) &\geq \frac{\eta}6 \|{\bf d}_k \|^3 \\
& \ge \frac{\eta}6 \frac{1}{(1+2L_H)^{3/2}} \min\left\{\frac{(\|{\bf g}_{k+1}\|-\delta_{g,k} - \delta_{g,k+1})^3}{(2.5 \epsilon_H)^3}, (2.5 \epsilon_H)^3 \right\}.
\end{align*}
By combining this inequality with the trivial inequality obtained when $\|{\bf g}_{k+1}\|-\delta_{g,k} - \delta_{g,k+1} \le 0$, we obtain \cref{eq:csol} for the case of $\alpha_k=1$.
\textbf{Case 2:}
As a preliminary step, note that for any $\alpha \in [0,1]$, we have the following:
\begin{align}
\nonumber
& \alpha{\bf g}_k^T{\bf d}_k + \tfrac12{\alpha^2} {\bf d}_k^T {\bf H}_k{\bf d}_k \\
\nonumber
&= \alpha \left[ \hat{\textnormal{r}}_k - ({\bf H}_k + 2 \epsilon_H I) {\bf d}_k \right]^T {\bf d}_k
+ \tfrac12{\alpha^2} {\bf d}_k^T {\bf H}_k{\bf d}_k \quad \quad (\mbox{from \cref{eq:r_k}}) \\
\nonumber
& \le \alpha \| \hat{\textnormal{r}}_k \| \| {\bf d}_k \|
- \alpha \left( 1- \tfrac12 \alpha \right) {\bf d}_k^T ({\bf H}_k+2 \epsilon_H I) {\bf d}_k
- \alpha^2\epsilon_H \| {\bf d}_k \|^2\\
\nonumber
& \le \alpha \| \hat{\textnormal{r}}_k \| \| {\bf d}_k \|
- \alpha \left( 1- \tfrac12 \alpha \right) {\bf d}_k^T ({\bf H}_k+2 \epsilon_H I) {\bf d}_k \\
\nonumber
& \le \tfrac12 \alpha \epsilon_H \zeta \|{\bf d}_k \|^2 - \tfrac12 \alpha \epsilon_H \| {\bf d}_k \|^2 \quad \quad (\mbox{from $1-\tfrac12 \alpha \ge \tfrac12$, \cref{eqn:pos}, and \cref{eqn:dk_sol_rk}})\\
\label{eq:ic1}
&= \tfrac12{\alpha} \epsilon_H (\zeta-1) \|{\bf d}_k \|^2.
\end{align}
\reftwo{Now consider the case where $ \alpha_k = 1 $ is not accepted by the line search.
In this case, let $j \ge 0$ be any integer for which the step acceptance condition is not satisfied.}
For such $j$, we have the following:
\begin{alignat*}{2}
& -\frac{\eta}6 \theta^{3j} \|{\bf d}_k\|^3 \\
&\leq f({\bf x}_k + \theta^j{\bf d}_k) - f({\bf x}_k) \\
&\leq \theta^{j}\nabla f_k^T{\bf d}_k + \frac{\theta^{2j}}{2} {\bf d}_k^T\nabla^2 f({\bf x}_k) {\bf d}_k + \frac{L_H}6 \theta^{3j} \|{\bf d}_k\|^3 \;\; && (\mbox{from \cref{eq:lipH2}})\\
& \le \theta^{j}{\bf g}_k^T{\bf d}_k + \frac{\theta^{2j}}{2} {\bf d}_k^T {\bf H}_k{\bf d}_k + \theta^j\delta_{g,k}\|{\bf d}_k\| + \frac{\theta^{2j}}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \theta^{3j}\|{\bf d}_k\|^3 \;\; && (\mbox{from Definition~\ref{cond:appr_gh}})\\
& \leq -\frac{\theta^j}2 (1-\zeta)\epsilon_H\|{\bf d}_k\|^2 + \theta^j\delta_{g,k}\|{\bf d}_k\| + \frac{\theta^{2j}}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \theta^{3j}\|{\bf d}_k\|^3 \; && (\mbox{from \cref{eq:ic1}})\\
& \leq -\frac{\theta^j}2 \|{\bf d}_k\|^2 \big((1-\zeta)\epsilon_H-\delta_H\big) + \theta^j\delta_{g,k}\|{\bf d}_k\| + \frac{L_H}6 \theta^{3j}\|{\bf d}_k\|^3\; && (\mbox{from $ 0< \theta < 1 $}).
\end{alignat*}
By rearranging this expression, we obtain
\[
\theta^{2j} \geq \left( \frac{3}{L_H+\eta} \right) \left( \frac{\big((1-\zeta)\epsilon_H-\delta_H\big) \|{\bf d}_k\| - 2 \delta_{g,k}}{\|{\bf d}_k\|^2} \right).
\]
From Condition~\ref{cond:opt_epsilong_epsilonh}, we have $\delta_H \le {(1-\zeta)} \epsilon_H/2$, so this bound implies that
\begin{equation} \label{eq:dj1}
\theta^{2j} \geq \left( \frac{3}{L_H+\eta} \right) \frac{(1-\zeta)\epsilon_H \|{\bf d}_k\| - 4\delta_{g,k}}{2 \|{\bf d}_k\|^2}.
\end{equation}
Since by assumption $\norm{{\bf d}_k} \ge {\epsilon_g}/{\epsilon_H}$, we have from Condition~\ref{cond:opt_epsilong_epsilonh} that either
\begin{equation}\label{eq:diff2}
\delta_{g,k} \le \frac{1-\zeta}{8} \epsilon_g = \frac{1-\zeta}{8} \epsilon_H {\frac{\epsilon_g}{\epsilon_H}}\le \frac{(1-\zeta)\epsilon_H\norm{{\bf d}_k}}{8},
\end{equation}
or else
\begin{equation}\label{eq:diff2_2}
\delta_{g,k} \le \frac{1-\zeta}8 \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|) < \frac{(1-\zeta)\epsilon_H\norm{{\bf d}_k}}{8}.
\end{equation}
In either case, we have that $(1-\zeta) \epsilon_H \|{\bf d}_k \| - 4\delta_{g,k} \ge (1-\zeta) \epsilon_H \|{\bf d}_k \|/2$, so we have from \eqref{eq:dj1} that
\begin{equation} \label{eq:dj2}
\theta^{2j} \geq \left( \frac{3}{L_H+\eta} \right)\left( \frac{(1 - \zeta)\epsilon_H}{4\norm{{\bf d}_k}}\right).
\end{equation}
Since in the case under consideration, the acceptance condition for the backtracking line search fails for $j=0$, the latter expression holds with $j=0$, and we have
\begin{align}\label{eq:dk_lowerbound2}
\norm{{\bf d}_k} \ge \frac{3(1-\zeta)\epsilon_H}{4(L_H + \eta)}.
\end{align}
From \eqref{eq:dj2}, \eqref{eqn:dk_sol_upper}, and \eqref{eq:bounds}, we know that
\begin{equation} \label{eq:dj3}
\theta^{2j} \ge \frac{3(1-\zeta)\epsilon_H}{4(L_H + \eta)} \norm{{\bf d}_k}^{-1} \ge \frac{3(1-\zeta)\epsilon_H}{4(L_H + \eta)} \frac{\epsilon_H}{1.1 U_g}.
\end{equation}
Since
\[
j_{\text{\rm sol}} = \left\lceil \frac{1}{2} \log_\theta \frac{3(1 - \zeta)\epsilon_H^2}{4.4 U_g(L_H + \eta)} \right\rceil,
\]
then for any $j > j_{\text{\rm sol}}$, we have
\[
\theta^{2j} < \theta^{2j_{\text{\rm sol}}} \le \frac{3(1 - \zeta)\epsilon_H^2}{4.4 U_g(L_H + \eta)}.
\]
By comparing this expression with \eqref{eq:dj3}, we conclude that the line-search acceptance condition cannot fail for $j >j_{\text{\rm sol}}$, so the step taken is $\alpha_k = \theta^{j_k}$ for some $j_k \le j_{\text{\rm sol}}+1$. From \eqref{eq:dj3}, the preceding index $j=j_k-1$ satisfies
\[
\theta^{2j_k -2} \ge \frac{3(1 - \zeta)\epsilon_H}{4(L_H + \eta)} \norm{{\bf d}_k}^{-1},
\]
so that
\[
\theta^{j_k} \ge \sqrt{\frac{3\theta^2(1 - \zeta)}{4(L_H + \eta)}} \epsilon_H^{1/2}\norm{{\bf d}_k}^{-1/2}.
\]
Then, we have
\begin{align}
\nonumber
f({\bf x}_k) - f({\bf x}_k + \theta^{j_k}{\bf d}_k) & \ge \frac{\eta}{6} \theta^{3j_k} \norm{{\bf d}_k}^3 \\
\nonumber
& \ge \frac{\eta}{6} \left[ \frac{3\theta^2(1 - \zeta)}{4(L_H + \eta)}\right]^{3/2} \epsilon_H^{3/2} \norm{{\bf d}_k}^{3/2} \\
& \ge \frac{\eta}{6} \left[ \frac{3\theta^2(1 - \zeta)}{4(L_H + \eta)}\right]^{3/2} \epsilon_g^{3/2},
\label{eq:sol_lower_bound_improvement}
\end{align}
where the last inequality follows from $\|{\bf d}_k \| \ge \epsilon_g/\epsilon_H$.
We obtain the result by combining the two cases above.
\end{proof}
Next, we deal with the negative curvature directions, for which $d_{\text{\rm type}}=\text{\sc NC}$ \refboth{and for which a backtracking bidirectional line search is used}.
Lemmas~\ref{lemma:nc_from_cappedcg} and \ref{lemma:nc_both_procedure} bound the amount of decrease obtained from the negative curvature directions obtained in Procedures~\ref{alg:capped_cg} and~\ref{alg:minimum_eigenvalue}, respectively.
\begin{lemma}
\label{lemma:nc_from_cappedcg}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh} holds for all $k$.
Suppose that at iteration $k$ of Algorithm~\ref{alg:inexact_withlinesearch}, we have $\|{\bf g}_k\| \ge \epsilon_g$, so that Procedure~\ref{alg:capped_cg} is called.
When Procedure~\ref{alg:capped_cg} outputs a direction ${\bf d}_k$ with $d_{\text{\rm type}}=\text{\sc NC}$ that is subsequently used as a search direction, the backtracking \refboth{bidirectional line search terminates with \eqref{eq:suffdecr} satisfied by either $\alpha_k =\theta^{j_k}$ or $\alpha_k =-\theta^{j_k}$, with $j_k\leq j_{\text{\rm nc}}+1$,} where
\[
j_{\text{\rm nc}} = \left\lceil \log_\theta \frac{3}{2(L_H+\eta)} \right\rceil.
\]
The resulting step ${\bf x}_{k+1}={\bf x}_k+\alpha_k{\bf d}_k$ satisfies
\[
f({\bf x}_k) -f({\bf x}_{k+1}) \geq c_{\text{\rm nc}} \epsilon_H^3,
\]
where
\[
c_{\text{\rm nc}} = \frac{\eta}{6} \min \left\{ \left[\frac{3\theta}{2(L_H+\eta)}\right]^3,1\right\}.
\]
\end{lemma}
\begin{proof}
Note first that by \cref{eqn:dk_nc}, we have $\|{\bf d}_k\| = |{\bf d}^T{\bf H}_k{\bf d}|/\|{\bf d}\|^2 \ge \epsilon_H$.
Thus, if \refboth{$\alpha_k=\pm 1$, we have by \eqref{eq:suffdecr} that} $f({\bf x}_k) -f({\bf x}_{k+1}) \geq \frac{\eta}{6} \| {\bf d}_k \|^3 \ge \frac{\eta}{6} \epsilon_H^3$, so the result holds in this case.
When \refboth{$|\alpha_k|<1$}, using \cref{eqn:dk_nc} again, we have
\[
{\bf d}_k^T{\bf H}_k{\bf d}_k = -\|{\bf d}_k\|^3 \leq -\epsilon_H\|{\bf d}_k\|^2.
\]
We have from Definition~\ref{cond:appr_gh} that
\[
|{\bf d}_k^T({\bf H}_k-\nabla^2 f({\bf x}_k)){\bf d}_k | \leq \delta_H \|{\bf d}_k\|^2,
\]
so by combining the last two expressions, we have
\begin{equation} \label{eq:rd9}
{\bf d}_k^T\nabla^2 f({\bf x}_k){\bf d}_k \leq -\|{\bf d}_k\|^3 + \delta_H \|{\bf d}_k\|^2.
\end{equation}
\refboth{Let $j \ge 0$ be an integer such that neither $\theta^j$ nor $-\theta^j$ satisfies the criterion \eqref{eq:suffdecr}.
Supposing first that $\nabla f({\bf x}_k)^T{\bf d}_k\leq 0$, we have
from \eqref{eq:lipH2} and \eqref{eq:rd9} that}
\begin{align}
\nonumber
-\frac{\eta}6 \theta^{3j} \|{\bf d}_k\|^3
&\leq f({\bf x}_k + \theta^j{\bf d}_k) - f({\bf x}_k) \\
\nonumber
&\leq \theta^{j}\nabla f({\bf x}_k)^T{\bf d}_k + \frac{\theta^{2j}}{2} {\bf d}_k^T\nabla^2 f({\bf x}_k) {\bf d}_k + \frac{L_H}6 \theta^{3j} \|{\bf d}_k\|^3\\
\label{eq:sj88}
&\leq -\frac{\theta^{2j}}{2}\|{\bf d}_k\|^3 + \frac{\theta^{2j}}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \theta^{3j} \|{\bf d}_k\|^3.
\end{align}
\refboth{Supposing instead that $\nabla f({\bf x}_k)^T{\bf d}_k> 0$, we have by considering the step $-\theta^j$ that
\begin{align*}
-\frac{\eta}6 \theta^{3j} \|{\bf d}_k\|^3
&\leq f({\bf x}_k - \theta^j{\bf d}_k) - f({\bf x}_k) \\
&\leq - \theta^{j}\nabla f({\bf x}_k)^T{\bf d}_k + \frac{\theta^{2j}}{2} {\bf d}_k^T\nabla^2 f({\bf x}_k) {\bf d}_k + \frac{L_H}6 \theta^{3j} \|{\bf d}_k\|^3\\
&\leq -\frac{\theta^{2j}}{2}\|{\bf d}_k\|^3 + \frac{\theta^{2j}}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \theta^{3j} \|{\bf d}_k\|^3,
\end{align*}
yielding the same inequality as \eqref{eq:sj88}.
After rearrangement of this inequality} and using $\|{\bf d}_k \| \ge \epsilon_H$, it follows that
\begin{equation} \label{eq:fp2}
\theta^j \geq \left(\frac{6}{L_H+\eta}\right)\left( \frac{\|{\bf d}_k\| - \delta_H}{2\|{\bf d}_k\|}\right) = \frac{3}{L_H+\eta} - \frac{3\delta_H}{(L_H+\eta)\|{\bf d}_k\|} \geq \frac{3}{L_H+\eta} - \frac{3\delta_H}{(L_H+\eta)\epsilon_H}.
\end{equation}
Since from Condition~\ref{cond:opt_epsilong_epsilonh}, we have $\delta_H \le (1-\zeta)\epsilon_H/4 < \epsilon_H/4$, then
\begin{equation} \label{eq:rd8}
\theta^j \ge \frac{3}{2(L_H + \eta)}.
\end{equation}
Meanwhile, we have for $j > j_{\text{\rm nc}}$ that
\[
\theta^j < \theta^{j_{\text{\rm nc}}} \le \frac{3}{2(L_H + \eta)}.
\]
\refboth{The last two inequalities together imply that $j \le j_{\text{\rm nc}}$,} so the line search must terminate with \refboth{$\alpha_k = \pm \theta^{j_k}$} for some $j_k \le j_{\text{\rm nc}}+1$. Since \eqref{eq:rd8} must hold for $j=j_k-1$, we have
\[
\theta^{j_k-1} \geq \frac{3}{2(L_H+\eta)} \implies \refboth{|\alpha_k|} = \theta^{j_k} \ge \frac{3\theta}{2(L_H+\eta)}.
\]
Thus, from the step acceptance condition \eqref{eq:suffdecr} together with \cref{eqn:dk_nc} and the definition of $c_{\text{\rm nc}}$, we have
\[
f({\bf x}_k) -f({\bf x}_{k+1}) \geq \frac{\eta}{6} \refboth{|\alpha_k|^3} \| {\bf d}_k \|^3 \ge c_{\text{\rm nc}} \epsilon_H^3,
\]
so the required claim also holds in the case of \refboth{$|\alpha_k|<1$}, completing the~proof.
\end{proof}
We now turn our attention to the property of Procedure~\ref{alg:minimum_eigenvalue}.
The following lemma shows that when a negative curvature direction is obtained from Procedure~\ref{alg:minimum_eigenvalue}, we can guarantee descent in the function in a similar fashion to Lemma~\ref{lemma:nc_from_cappedcg}.
\begin{lemma}\label{lemma:nc_both_procedure}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh} holds for all $k$.
Suppose that at iteration $k$ of Algorithm~\ref{alg:inexact_withlinesearch}, the search direction ${\bf d}_k$ is a negative curvature direction for ${\bf H}_k$, obtained from Procedure~\ref{alg:minimum_eigenvalue}.
Then the \refboth{backtracking bidirectional} line search terminates with step size \refboth{either $\alpha_k = \theta^{j_k}$ or $\alpha_k = -\theta^{j_k}$} with $j_k \le j_{\text{\rm nc}} + 1$ where $j_{\text{\rm nc}}$ is defined as in Lemma~\ref{lemma:nc_from_cappedcg}.
Moreover, the decrease in function value resulting from the chosen step size satisfies
\begin{equation}
f({\bf x}_k) - f({\bf x}_k + \alpha_k {\bf d}_k) \ge \frac{c_{\text{\rm nc}}}{8} \epsilon_H^3,
\end{equation}
where $c_{\text{\rm nc}}$ is defined in Lemma~\ref{lemma:nc_from_cappedcg}.
\end{lemma}
\begin{proof}
Note that
\[
{\bf d}_k^T{\bf H}_k{\bf d}_k \leq - \|{\bf d}_k\|^3 \leq - \frac{\epsilon_H}{2}\|{\bf d}_k\|^2,
\]
so that $\|{\bf d}_k\|\geq \epsilon_H/2$. In the first part of the proof, for the case \refboth{$\alpha_k= \pm 1$}, we have
\[
f({\bf x}_k) - f({\bf x}_{k+1}) \ge \frac{\eta}{6} \| {\bf d}_k \|^3 \ge \frac{\eta}{6} \frac{1}{8} \epsilon_H^3 \ge \frac{c_{\text{\rm nc}}}{8} \epsilon_H^3,
\]
so the result holds in this case.
The analysis of the case \refboth{$|\alpha_k|<1$} proceeds as in the proof of Lemma~\ref{lemma:nc_from_cappedcg} until the lower bound on $\theta^j$ in \eqref{eq:fp2}, where because of $\| {\bf d}_k \| \ge \epsilon_H/2$, we have
\[
\theta^j \ge \frac{3}{L_H+\eta} - \frac{6\delta_H}{(L_H+\eta)\epsilon_H},
\]
which, because of $\delta_H \le \epsilon_H/4$, still yields the lower bound \eqref{eq:rd8}, allowing the rest of the proof to proceed as in the earlier result, except for the factor of $1/8$.
\end{proof}
Now comes a crucial step. When the output direction ${\bf d}_k$ from Procedure~\ref{alg:capped_cg} satisfies $\|{\bf d}_k\|\leq \epsilon_g/\epsilon_H$ and Procedure~\ref{alg:minimum_eigenvalue} detects no significant negative curvature in the Hessian,
the update of ${\bf x}_k$ with a unit step along ${\bf d}_k$ is the final step of Algorithm~\ref{alg:inexact_withlinesearch}.
Dealing with this case is critical to obtaining the convergence rate of our inexact damped Newton-CG algorithm.
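Schematically, the branch just described can be sketched as follows. This is a reconstruction from the surrounding discussion, not the verbatim algorithm listing; \texttt{meo()} stands for a call to Procedure~\ref{alg:minimum_eigenvalue} returning either a negative curvature direction or a positive-definiteness certificate.
\begin{verbatim}
import numpy as np

def choose_step(dk, d_type, eps_g, eps_H, meo):
    """Decide between terminating and line searching, given the capped
    CG output (dk, d_type) at an iterate with ||g_k|| >= eps_g."""
    if d_type == 'SOL' and np.linalg.norm(dk) <= eps_g / eps_H:
        flag, v = meo()                  # seek negative curvature
        if flag == 'PD':
            # certificate H_k >= -eps_H I: x_k + d_k is approximately
            # second-order stationary, so take the unit step and stop
            return 'terminate', dk
        return 'line_search', v          # bidirectional search along v
    return 'line_search', dk
\end{verbatim}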
\begin{lemma}
\label{lemma:terminate_step_condition_withlinesearch}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh} holds for all $k$.
Suppose that \cref{alg:inexact_withlinesearch} terminates at iteration $k$ at line~\ref{line:term1}, and returns ${\bf x}_k+{\bf d}_k$, where ${\bf d}_k$ is obtained from Procedure~\ref{alg:capped_cg} and satisfies $\norm{{\bf d}_k} \le {\epsilon_g}/{\epsilon_H}$. Then we have
\[
\norm{\nabla f({\bf x}_k + {\bf d}_k)} \le \frac{L_H}2\dfrac{\epsilon_g^2}{\epsilon_H^2} + 4 \epsilon_g.
\]
If in addition the property ${\bf H}_k \succeq -\epsilon_H I$ holds, then
\[
\lambda_{\min}(\nabla^2 f({\bf x}_k + {\bf d}_k)) \ge -\left(\frac54 \epsilon_H + L_H\dfrac{\epsilon_g}{\epsilon_H} \right).
\]
\end{lemma}
\begin{proof} Note that termination at line~\ref{line:term1} occurs only if $d_{\text{\rm type}}=\text{\sc SOL}$, so Part 1 of Lemma~\ref{lemma:d_from_cappedcg} holds. For the gradient norm at ${\bf x}_k+{\bf d}_k$, we have
\begin{align*}
\norm{\nabla f({\bf x}_k + {\bf d}_k)}
&\le \norm{\nabla f({\bf x}_k+{\bf d}_k) - \nabla f({\bf x}_k) - \nabla^2 f({\bf x}_k) {\bf d}_k + {\bf H}_k {\bf d}_k+ {\bf g}_k}
\\ & \quad\quad +\norm{\nabla f({\bf x}_k) - {\bf g}_k} + \norm{\nabla^2 f({\bf x}_k){\bf d}_k - {\bf H}_k{\bf d}_k}
\\
& \le \norm{\nabla f({\bf x}_k+{\bf d}_k) - \nabla f({\bf x}_k) - \nabla^2 f({\bf x}_k) {\bf d}_k } + \norm{{\bf H}_k{\bf d}_k + {\bf g}_k} + \delta_{g,k} + \delta_H\norm{{\bf d}_k}
\\
& \le \norm{\nabla f({\bf x}_k+{\bf d}_k) - \nabla f({\bf x}_k) - \nabla^2 f({\bf x}_k) {\bf d}_k } + \norm{\hat{\textnormal{r}}_k} + 2\epsilon_H \norm{{\bf d}_k} + \delta_{g,k} + \delta_H\norm{{\bf d}_k}
\\
& \le \frac{L_H}2\norm{{\bf d}_k}^2 + \frac{1}{2}\epsilon_H \zeta\norm{{\bf d}_k} + (2\epsilon_H + \delta_H)\norm{{\bf d}_k} + \delta_{g,k} \quad \quad \mbox{(from \eqref{eq:lipH} and \eqref{eqn:dk_sol_rk})}
\\
& \le \frac{L_H}2\norm{{\bf d}_k}^2 + 3\epsilon_H \norm{{\bf d}_k} + \delta_{g,k}
\quad \quad \quad \quad \mbox{(since $\zeta \in (0,1)$ and $\delta_H \le \epsilon_H/2$)}
\\
& \le \frac{L_H}2\norm{{\bf d}_k}^2 + 3\epsilon_H \norm{{\bf d}_k} + \left( \frac{1-\zeta}{8} \right) \max\left(\epsilon_g, \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)\right)
\\
& \le \frac{L_H}2\dfrac{\epsilon_g^2}{\epsilon_H^2} + 3 \epsilon_H \norm{{\bf d}_k} + \frac{1-\zeta}{8}\max\left(\epsilon_g, \epsilon_H\|{\bf d}_k\|\right)
\\
& \le \frac{L_H}2\dfrac{\epsilon_g^2}{\epsilon_H^2} + 3\epsilon_g + \frac{1-\zeta}{8} \epsilon_g \quad \quad \quad \quad
\mbox{(since $\| {\bf d}_k \| \le \epsilon_g/\epsilon_H$)}
\\
& \le \frac{L_H}2\dfrac{\epsilon_g^2}{\epsilon_H^2} + 4 \epsilon_g,
\end{align*}
as required.
For the second-order condition, since ${\bf H}_k \succeq -\epsilon_H I$ and $\delta_H \le \epsilon_H/4$ (from Condition~\ref{cond:opt_epsilong_epsilonh}), we have
\[
\nabla^2 f({\bf x}_k + {\bf d}_k) \succeq \nabla^2 f({\bf x}_k) - L_H\norm{{\bf d}_k} I
\succeq {\bf H}_k - \delta_H I - L_H\dfrac{\epsilon_g}{\epsilon_H} I \succeq -\left(\frac54 \epsilon_H + L_H\dfrac{\epsilon_g}{\epsilon_H} \right) I.
\]
This completes the proof.
\end{proof}
Now, combining Lemmas \ref{lemma:dtype_sol_opt}--\ref{lemma:terminate_step_condition_withlinesearch}, we obtain the iteration complexity for \cref{alg:inexact_withlinesearch}.
\begin{theorem}
\label{thm:opt_iteration_comp}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh} holds for all $k$.
For a given $\epsilon>0$, let $\epsilon_H = \sqrt{L_H \epsilon}, \epsilon_g = \epsilon$.
Define
\begin{equation}\label{eq:k}
\bar K :=
\left\lceil \frac{3(f({\bf x}_0)- f_\text{low})}{\min \left( \frac{1}{64 L_H^{3/2}} c_{\text{\rm sol}}, 8 L_H^{3/2} c_{\text{\rm sol}}, L_H^{3/2} c_{\text{\rm nc}}/8 \right)} \epsilon^{-3/2} \right \rceil + 5,
\end{equation}
where $c_{\text{\rm sol}}$ and $c_{\text{\rm nc}}$ are defined in Lemmas \ref{lemma:dtype_sol_opt} and \ref{lemma:nc_from_cappedcg}, respectively.
Then \cref{alg:inexact_withlinesearch} terminates in at most $\bar{K}$ iterations at a point satisfying
\[
\norm{\nabla f({\bf x})} \lesssim \epsilon.
\]
Moreover, with probability at least $(1 - \delta)^{\bar K }$ the point returned by \cref{alg:inexact_withlinesearch} also satisfies the approximate second-order condition
\begin{equation} \label{eqn:2oeps}
\lambda_{\min}(\nabla^2 f({\bf x})) \gtrsim -\sqrt{L_H\epsilon}.
\end{equation}
\reftwo{Here, $\lesssim$ and $\gtrsim$ denote that the corresponding inequality holds up to a certain constant that is independent of $ \epsilon $ and $ L_{H} $.}
\end{theorem}
\begin{proof}
Note first that for our choices of $\epsilon_g$ and $\epsilon_H$, the threshold $\epsilon_g/\epsilon_H$ for $\| {\bf d}_k \|$ in line~\ref{line:threshold} of Algorithm~\ref{alg:inexact_withlinesearch} becomes $\sqrt{\epsilon/L_H}$.
We show first that \cref{alg:inexact_withlinesearch} terminates after at most $\bar K$ steps. We taxonomize the iterations into five classes. To specify these classes, we denote by ${\bf d}_k$ and $d_{\text{\rm type}}$ the values of these variables {\em immediately before a step is taken or termination is declared}, bearing in mind that these variables can be reassigned during iteration $k$, in Line \ref{line:newdk}.
Supposing for contradiction that \cref{alg:inexact_withlinesearch} runs for at least $K$ steps, for some $K>\bar{K}$, we define the five classes of indices as follows.
\begin{align*}
\mathcal K_1 &:= \{k=0,1,2,\dotsc, K-1 \, | \, \norm{{\bf g}_k} < \epsilon \} \\
\mathcal K_2 &:= \{k=0,1,2,\dotsc, K-1 \, | \, \norm{{\bf g}_k} \ge \epsilon, \, d_{\text{\rm type}}=\text{\sc SOL}, \, \|{\bf d}_k \| > \sqrt{\epsilon/L_H}, \, \norm{{\bf g}_{k+1}} < \epsilon \} \\
\mathcal K_3 &:= \{k=0,1,2,\dotsc, K-1 \, | \, \norm{{\bf g}_k} \ge \epsilon, \, d_{\text{\rm type}}=\text{\sc SOL}, \, \|{\bf d}_k \| > \sqrt{\epsilon/L_H}, \, \norm{{\bf g}_{k+1}} \ge \epsilon \} \\
\mathcal K_4 &:= \{k=0,1,2,\dotsc, K-1 \, | \, \norm{{\bf g}_k} \ge \epsilon, \, d_{\text{\rm type}}=\text{\sc SOL}, \, \| {\bf d}_k \| \le \sqrt{\epsilon/L_H}\} \\
\mathcal K_5 &:= \{k=0,1,2,\dotsc, K-1 \, | \, \norm{{\bf g}_k} \ge \epsilon, \, d_{\text{\rm type}}=\text{\sc NC}\}.
\end{align*}
Obviously, $K= \Abs{\mathcal K_1} + \Abs{\mathcal K_2} + \Abs{\mathcal K_3} + \Abs{\mathcal K_4} + \Abs{\mathcal K_5} $. We consider each of these types of steps in turn.
\paragraph{Case 1:} $k \in \mathcal K_1$.
The update ${\bf d}_k$ in this case must come from Procedure~\ref{alg:minimum_eigenvalue}. Either the method terminates (which happens at most once!) or from Lemma~\ref{lemma:nc_both_procedure}, we have that
\begin{equation}
f({\bf x}_k) - f({\bf x}_{k+1}) \geq \frac18 c_{\text{\rm nc}} \epsilon_H^3 =\frac18 L_H^{3/2} c_{\text{\rm nc}} \epsilon^{3/2}.
\end{equation}
Thus the total amount of decrease that results from steps in $\mathcal K_1$ is at least $(\Abs{\mathcal K_1} -1) L_H^{3/2} c_{\text{\rm nc}} \epsilon^{3/2}/8$.
\paragraph{Case 2:} $k \in \mathcal K_2$.
With \cref{lemma:dtype_sol_opt}, we can guarantee only that $f({\bf x}_k) - f({\bf x}_{k+1}) \ge 0$. However, since $\norm{{\bf g}_{k+1}} < \epsilon$, the {\em next} iterate must belong to class $\mathcal K_1$. Therefore we have $\Abs{\mathcal K_2} \le \Abs{\mathcal K_1}$.
\paragraph{Case 3:} $k \in \mathcal K_3$.
Here the step ${\bf d}_k$ is an approximate solution of the damped Newton equations, and we can apply \cref{lemma:dtype_sol_opt} to obtain a nontrivial lower bound on the decrease in $f$.
By Condition \ref{cond:opt_epsilong_epsilonh}, we have
\begin{align*}
\delta_{g,k} & \le \frac18 \max\left(\epsilon_g, \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)\right) \le \frac18 \max( \epsilon_g, \| {\bf g}_{k+1} \|) = \frac18 \| {\bf g}_{k+1} \|, \\
\delta_{g,k+1} & \le \frac18 \max\left(\epsilon_g, \min(\epsilon_H\|{\bf d}_{k+1}\|, \|{\bf g}_{k+1}\|, \|{\bf g}_{k+2}\|)\right) \le \frac18 \max( \epsilon_g, \| {\bf g}_{k+1} \|) = \frac18 \| {\bf g}_{k+1} \|,
\end{align*}
so that
\[
\|{\bf g}_{k+1} \| - \delta_{g,k} - \delta_{g,k+1} \ge \frac34 \| {\bf g}_{k+1} \| \ge \frac34 \epsilon_g = \frac34 \epsilon.
\]
Thus, from \eqref{eq:csol} in Lemma~\ref{lemma:dtype_sol_opt}, we have for this type of step that
\begin{align*}
f({\bf x}_k) -f({\bf x}_{k+1}) & \ge c_{\text{\rm sol}} \max\left\{0, \min \left( \frac{(\|{\bf g}_{k+1} \| - \delta_{g,k} - \delta_{g,k+1})^3}{(2.5 \epsilon_H)^3}, (2.5 \epsilon_H)^3, \epsilon_g^{3/2}\right)\right\} \\
& \ge c_{\text{\rm sol}} \min \left( \frac{(\tfrac34 \epsilon)^3}{(2.5 \sqrt{L_H \epsilon})^3}, (2.5 \sqrt{L_H \epsilon})^3, \epsilon^{3/2}\right) \\
& \ge c_{\text{\rm sol}} \min \left( \frac{1}{64L_H^{3/2}}, 8 L_H^{3/2}, 1 \right) \epsilon^{3/2} =
c_{\text{\rm sol}} \min \left( \frac{1}{64L_H^{3/2}}, 8 L_H^{3/2} \right) \epsilon^{3/2},
\end{align*}
where the last inequality uses $(3/4)^3/2.5^3 = (3/10)^3 \ge 1/64$ and $2.5^3 \ge 8$, and the final equality holds because the product of the first two arguments of the $\min$ is $1/8<1$.
\paragraph{Case 4:} $k \in \mathcal K_4$.
In this case, Procedure~\ref{alg:capped_cg} outputs $d_{\text{\rm type}}=\text{\sc SOL}$ along with a ``small'' value of ${\bf d}_k$. Subsequently, Procedure~\ref{alg:minimum_eigenvalue} was called, but it must have returned with a certification of near-positive-definiteness of ${\bf H}_k$, since $d_{\text{\rm type}}$ was not switched to $\text{\sc NC}$.
Thus, according to \cref{lemma:terminate_step_condition_withlinesearch}, termination occurs with output ${\bf x}_k+{\bf d}_k$. Hence this case can occur at most once, and we have $|\mathcal K_4| \le 1$.
\paragraph{Case 5:} $k \in \mathcal K_5$.
In this case, either the algorithm terminates and outputs ${\bf x}={\bf x}_k$ (which happens at most once), or else a step is taken along a negative curvature direction for ${\bf H}_k$, detected either in Procedure \ref{alg:capped_cg} or Procedure \ref{alg:minimum_eigenvalue}. In the former case (detection in Procedure \ref{alg:capped_cg}), we have from
\cref{lemma:nc_from_cappedcg} that
$f({\bf x}_k) -f({\bf x}_{k+1}) \ge c_{\text{\rm nc}} \epsilon_H^3 = c_{\text{\rm nc}} L_H^{3/2} \epsilon^{3/2}$, while in
the latter case (detection in Procedure \ref{alg:minimum_eigenvalue}), we have from \cref{lemma:nc_both_procedure} that
$f({\bf x}_k) - f({\bf x}_{k+1}) \geq \frac18 L_H^{3/2} c_{\text{\rm nc}} \epsilon^{3/2}$.
Thus, the total decrease in $f$ resulting from steps of this class is bounded below by
$(| \mathcal K_5|-1) \frac18 L_H^{3/2} c_{\text{\rm nc}} \epsilon^{3/2}$.
The total decrease of $f$ over all $K$ steps cannot exceed $f({\bf x}_0) - f_\text{low}$. We thus have
\begin{align*}
f({\bf x}_0) - f_\text{low} & \ge \sum_{k=0}^{K-1} (f({\bf x}_k) - f({\bf x}_{k+1})) \\
& \ge \sum_{k\in \mathcal K_1}(f({\bf x}_k) - f({\bf x}_{k+1})) + \sum_{k\in \mathcal K_3}(f({\bf x}_k) - f({\bf x}_{k+1})) + \sum_{k\in \mathcal K_5}(f({\bf x}_k) - f({\bf x}_{k+1})) \\
& \ge (\Abs{\mathcal K_1} + \Abs{\mathcal K_5} -2 ) \frac18 L_H^{3/2} c_{\text{\rm nc}} \epsilon^{3/2} + \Abs{\mathcal K_3} c_{\text{\rm sol}} \min \left( \frac{1}{64 L_H^{3/2}}, 8 L_H^{3/2}\right) \epsilon^{3/2}.
\end{align*}
Therefore, we have
\begin{align*}
\Abs{\mathcal K_1} + \Abs{\mathcal K_5} -2 & \le \frac{f({\bf x}_0) - f_\text{low}}{L_H^{3/2} c_{\text{\rm nc}} /8}\epsilon^{-3/2}, \\
\Abs{\mathcal K_3} & \le \frac{f({\bf x}_0)- f_\text{low}}{c_{\text{\rm sol}} \min \left( \frac{1}{64 L_H^{3/2}}, 8 L_H^{3/2}\right)} \epsilon^{-3/2}.
\end{align*}
Finally, we have
\begin{align*}
K &= \Abs{\mathcal K_1} + \Abs{\mathcal K_2} + \Abs{\mathcal K_3} + \Abs{\mathcal K_4} + \Abs{\mathcal K_5} \\
& \le 2\Abs{\mathcal K_1} + \Abs{\mathcal K_3} + 1 + \Abs{\mathcal K_5} \\
& \le 2 (\Abs{\mathcal K_1} + \Abs{\mathcal K_5} -2) + \Abs{\mathcal K_3} + 5 \\
& \le \frac{2(f({\bf x}_0) - f_\text{low})}{L_H^{3/2} c_{\text{\rm nc}} /8}\epsilon^{-3/2}
+ \frac{f({\bf x}_0)- f_\text{low}}{c_{\text{\rm sol}} \min \left( \frac{1}{64 L_H^{3/2}}, 8 L_H^{3/2} \right)} \epsilon^{-3/2} + 5 \\
& \le \frac{3(f({\bf x}_0)- f_\text{low})}{\min \left( \frac{1}{64 L_H^{3/2}} c_{\text{\rm sol}}, 8 L_H^{3/2} c_{\text{\rm sol}}, L_H^{3/2} c_{\text{\rm nc}}/8 \right)} \epsilon^{-3/2} + 5 \le \bar{K},
\end{align*}
which contradicts our assumption that $K> \bar{K}$. Thus \cref{alg:inexact_withlinesearch} terminates in at most $\bar{K}$ steps.
Note that if termination occurs at Line \ref{line:term2} of \cref{alg:inexact_withlinesearch}, the returned value of ${\bf x} = {\bf x}_{k}$ certainly has $\| \nabla f({\bf x}) \| \lesssim \epsilon_g = \epsilon$. This is because when $\|{\bf g}_k \| \le \epsilon_g$, we have from Condition~\ref{cond:opt_epsilong_epsilonh} that $\delta_{g,k} \le {(1-\zeta)} \epsilon_g/8$, so that $\| \nabla f({\bf x}) \| \le \|{\bf g}_k\| + \delta_{g,k} \lesssim \epsilon_g$.
Alternatively, if termination occurs at Line~\ref{line:term1}, for the returned value of ${\bf x} = {\bf x}_{k} + {\bf d}_{k}$ we have
\[
\norm{\nabla f({\bf x})} \le \frac{L_H}2\dfrac{\epsilon_g^2}{\epsilon_H^2} + 4 \epsilon_g = \frac{L_H}{2} \frac{\epsilon^2}{L_H \epsilon} + 4 \epsilon = \frac92 \epsilon.
\]
Thus, the claim $\norm{\nabla f({\bf x})} \lesssim \epsilon$ at the termination point ${\bf x}$ holds.
We now verify the claims about probability of failure and the second-order conditions. Note that for both types of termination (at Lines~\ref{line:term1} and \ref{line:term2} of \cref{alg:inexact_withlinesearch}), Procedure~\ref{alg:minimum_eigenvalue} issues a certificate that $\lambda_{\min}({\bf H}_k) \ge -\epsilon_H$.
Subject to this certificate being correct, we show now that our claim \eqref{eqn:2oeps} holds. When termination occurs at line~\ref{line:term1}, we have in this case from Lemma~\ref{lemma:terminate_step_condition_withlinesearch} that at the returned point ${\bf x}={\bf x}_k+{\bf d}_k$, we have
\[
\lambda_{\min} (\nabla^2 f({\bf x})) \ge -\left( \frac54 \epsilon_H + L_H \frac{\epsilon_g}{\epsilon_H} \right) = -\frac94 \sqrt{L_H \epsilon},
\]
as required. For termination at Line~\ref{line:term2}, we have directly that $\lambda_{\min} (\nabla^2 f({\bf x})) \ge -\epsilon_H = -\sqrt{L_H \epsilon}$, again verifying the claim.
We now calculate a bound on the probability of incorrect termination, which can occur at either Line~\ref{line:term1} or Line \ref{line:term2} when Procedure~\ref{alg:minimum_eigenvalue} issues a certificate that $\lambda_{\min}({\bf H}_k) \ge -\epsilon_H$, whereas in fact $\lambda_{\min}({\bf H}_k) < -\epsilon_H$.
\reftwo{The proof is a simple adaptation from \citet[Theorem~2]{XieW21a} and \citet[Theorem~4.6]{Cur19a}, the adaptations for inexactness being fairly straightforward. We include the argument here for the sake of completeness.}
The probability of such an event on any individual call to Procedure~\ref{alg:minimum_eigenvalue} is bounded above by $\delta$. For all iterates $k$, we denote by $\tilde{P}_k$ the probability that \cref{alg:inexact_withlinesearch} reaches iteration $k$ with $\lambda_{\min}({\bf H}_k) < -\epsilon_H$, and denote by $P_k$ the probability that \cref{alg:inexact_withlinesearch} reaches iteration $k$ with $\lambda_{\min}({\bf H}_k) < -\epsilon_H$, yet the algorithm terminates because Procedure~\ref{alg:minimum_eigenvalue} issues an incorrect certificate. Clearly, we have $P_k \le \delta \tilde{P}_k$ for all $k=0,1,\dotsc,\bar{K}$. Since it is trivially true for all $k$ that
\[
\tilde{P}_k + \sum_{i=0}^{k-1} P_i \le 1,
\]
we have for all $k$ that
\begin{equation} \label{eq:ys9}
P_k \le \delta \tilde{P}_k \le \delta \left( 1- \sum_{i=0}^{k-1} P_i \right).
\end{equation}
Now let $M_k$ be the total number of calls to Procedure~\ref{alg:minimum_eigenvalue} that have occurred up to and including iteration $k$ of \cref{alg:inexact_withlinesearch}. We prove by induction that $\sum_{i=0}^k P_i \le 1-(1-\delta)^{M_k}$ for all $k$. For $k=0$, the claim holds trivially, both in the case of $M_0=0$ (in which case $P_0=0$) and $M_0=1$ (in which case $P_0 \le \delta$). Supposing now that the claim is true for some $k \ge 0$, we show that it continues to hold for $k+1$. If \cref{alg:inexact_withlinesearch} reaches iteration $k+1$ with $\lambda_{\min}({\bf H}_{k+1}) < -\epsilon_H$, and Procedure~\ref{alg:minimum_eigenvalue} is {\em not} called at this iteration, then $M_{k+1}=M_k$ and $P_{k+1}=0$, so by the induction hypothesis we have
\[
\sum_{i=0}^{k+1} P_i = \sum_{i=0}^k P_i \le 1-(1-\delta)^{M_k} = 1-(1-\delta)^{M_{k+1}},
\]
as required. In the other case in which \cref{alg:inexact_withlinesearch} reaches iteration $k+1$ with $\lambda_{\min}({\bf H}_{k+1}) < -\epsilon_H$, and Procedure~\ref{alg:minimum_eigenvalue} {\em is} called at this iteration, then $M_{k+1}=M_k+1$, so by using \eqref{eq:ys9} and the inductive hypothesis, we have
\begin{align*}
\sum_{i=0}^{k+1} P_i & = \sum_{i=0}^k P_i + P_{k+1} \\
& \le \sum_{i=0}^k P_i + \delta \left( 1- \sum_{i=0}^k P_i \right) \\
& = \delta + (1-\delta) \sum_{i=0}^k P_i \\
& \le \delta + (1-\delta) \left( 1- (1-\delta)^{M_k} \right) \\
& = 1- (1-\delta)^{M_k+1} = 1- (1-\delta)^{M_{k+1}},
\end{align*}
as required. Since $M_k \le k \le \bar{K}$ for all $k=1,2,\dotsc,\bar{K}$, we have that the probability that \cref{alg:inexact_withlinesearch} terminates incorrectly on {\em any} iteration is bounded above by $1-(1-\delta)^{\bar{K}}$. So when termination occurs, the condition \eqref{eqn:2oeps} holds at the termination point with probability at least $(1-\delta)^{\bar{K}}$, as claimed.
\end{proof}
By incorporating the complexity of Procedures~\ref{alg:capped_cg} and \ref{alg:minimum_eigenvalue}, as described in Lemmas~\ref{lemma:capped_cg_iter_bounded} and \ref{lemma:fail_min_eigen_oracle}, we can obtain an upper bound on the number of approximate gradient and approximate Hessian-vector product evaluations required during a run of \cref{alg:inexact_withlinesearch}. The iteration count for the algorithm is bounded by ${\mathcal O}(\epsilon^{-3/2})$ in \cref{thm:opt_iteration_comp}, and each iteration requires one approximate gradient evaluation. Additionally, each iteration of \cref{alg:inexact_withlinesearch} may require a call to Procedure~\ref{alg:capped_cg}, which by \cref{lemma:capped_cg_iter_bounded} requires $\tilde{\mathcal O}(\epsilon_H^{-1/2}) = \tilde{\mathcal O}(\epsilon^{-1/4})$ approximate Hessian-vector products. A call to Procedure~\ref{alg:minimum_eigenvalue} may also be required on some iterations; by \cref{lemma:fail_min_eigen_oracle}, this call requires a further ${\mathcal O}(\epsilon_H^{-1/2}) = {\mathcal O}(\epsilon^{-1/4})$ approximate Hessian-vector products. We summarize these observations in the following corollary.
\begin{corollary}\label{cor:total_complexity}
Suppose that the assumptions of \cref{thm:opt_iteration_comp} hold. Let $\epsilon_g=\epsilon, \epsilon_H=\sqrt{L_H\epsilon}$, and $\bar K$ be defined as in \cref{eq:k}. Then for $d$ sufficiently large relative to $\epsilon^{-1/2}$, \cref{alg:inexact_withlinesearch} terminates after at most $\tilde{\mathcal O}(\epsilon^{-7/4})$ matrix-vector products with the approximate Hessians and at most ${\mathcal O}(\epsilon^{-3/2})$ evaluations of approximate gradients.
With probability at least $(1-\delta)^{\bar{K}}$, it returns a point that satisfies the approximate first- and second-order conditions described in \cref{thm:opt_iteration_comp}.
\end{corollary}
\subsection{Evaluation complexity of Algorithm~\ref{alg:fixed_stepsize} for finite-sum problems}
\label{sec:finite-sum}
When $f$ has finite-sum form \cref{eq:finte_sum_problem} for $n \gg 1$, we consider subsampling schemes for estimating ${\bf g}_k$ and ${\bf H}_k$, as in \cite{roosta2019sub,xuNonconvexTheoretical2017}.
We can define the subsampled quantities as follows:
\begin{align}\label{eq:sub_gh}
{\bf g} \triangleq \frac{1}{\Abs{\mathcal S_g}} \sum_{i\in\mathcal S_g} \nabla f_i({\bf x}), ~~ \text{and} ~~~ {\bf H} \triangleq \frac{1}{\Abs{\mathcal S_H}} \sum_{i\in\mathcal S_H} \nabla^2 f_i({\bf x}),
\end{align}
where $\mathcal S_g, \mathcal S_H\subset\{1,\cdots, n\}$ are the subsample batches for the estimates of the gradient and Hessian, respectively.
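A minimal sketch of these estimators follows; here \texttt{grads[i]} and \texttt{hvps[i]} are assumed to return $\nabla f_i({\bf x})$ and $\nabla^2 f_i({\bf x}){\bf v}$, respectively, so that the subsampled Hessian is accessed only through matrix-vector products, as in the procedures above.
\begin{verbatim}
import numpy as np

def subsampled_grad_hvp(grads, hvps, x, v, rng, n, size_g, size_H):
    """Uniformly subsampled gradient and Hessian-vector product."""
    Sg = rng.integers(0, n, size=size_g)   # uniform, with replacement
    SH = rng.integers(0, n, size=size_H)
    g = sum(grads[i](x) for i in Sg) / size_g
    Hv = sum(hvps[i](x, v) for i in SH) / size_H
    return g, Hv
\end{verbatim}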
As shown in \citet[Lemmas 1 and 2]{roosta2019sub} and \citet[Lemma 16]{xuNonconvexTheoretical2017}, a uniform sampling strategy yields the following guarantee.
\begin{lemma}[Sampling complexity \citep{roosta2019sub,xuNonconvexTheoretical2017}]
\label{lemma:sampling}
Suppose that Assumption~\ref{ass:lip} is satisfied, and let $\bar\delta \in (0,1)$ be given.
Suppose that at iteration $k$ of Algorithm~\ref{alg:fixed_stepsize}, $\delta_{g,k}$ and $\delta_H$ are as defined in Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize}.
Also, let $ 0< K_g,K_H < \infty $ be such that $ \norm{\nabla f_i({\bf x})} \le K_g$ and $\norm{\nabla^2 f_i({\bf x})} \le K_H$ for all ${\bf x}$ belonging to the set defined in Assumption~\ref{ass:lip}.
For ${\bf g}_k$ and ${\bf H}_k$ defined as in \eqref{eq:sub_gh} with ${\bf x}={\bf x}_k$, and subsample sets $\mathcal S_g = \mathcal S_{g,k}$ and $\mathcal S_H$ satisfying
\begin{align*}
\Abs{\mathcal S_{g,k}} \ge \frac{16K_g^2}{\delta_{g,k}^2} \log\frac{1}{\bar\delta} \quad \text{and} \quad \Abs{\mathcal S_H} \ge \frac{16K_H^2}{\delta_H^2} \log\frac{2d}{\bar\delta},
\end{align*}
we have with probability at least $1-\bar\delta$ that Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} holds for the given values of $\delta_{g,k}$ and $\delta_H$.
\end{lemma}
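The batch sizes prescribed by the lemma are straightforward to compute. The following sketch evaluates them for given constants $K_g$ and $K_H$ (assumed available; bounds for typical machine learning objectives are tabulated later in this section).
\begin{verbatim}
import numpy as np

def sample_sizes(Kg, KH, delta_g, delta_H, d, bar_delta):
    """Lower bounds on |S_g| and |S_H| from the sampling lemma."""
    n_g = int(np.ceil(16 * Kg**2 / delta_g**2 * np.log(1 / bar_delta)))
    n_H = int(np.ceil(16 * KH**2 / delta_H**2 * np.log(2 * d / bar_delta)))
    return n_g, n_H
\end{verbatim}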
For the choices of $\epsilon_g$ and $\epsilon_H$ being used in this section, and assuming that $\delta_{g,k}$ and $\delta_{H}$ are set to their upper bounds in Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize}, we can derive a uniform condition on the required subsample~sizes.
\begin{lemma} \label{lemma:sampling2}
Suppose the conditions of Lemma~\ref{lemma:sampling} hold, and that for some $\epsilon>0$, we set $\epsilon_H = \sqrt{L_H \epsilon}$ and $\epsilon_g = \epsilon$.
Suppose that at some iteration $k$, $\delta_{g,k}$ and $\delta_H$ are set to their upper bounds in Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize}. Then we have that $\delta_{g,k} \ge \bar\delta_g$ for all $k$ and $\delta_H = \bar\delta_H$, where
\begin{equation} \label{eq:bardgdh}
\reftwo{\bar\delta_g = \frac{1-\zeta}{8} \min \left( \frac{3 L_H \epsilon }{65(L_H+\eta)}, \epsilon \right) = \mathcal{O}(\epsilon),} \quad
\bar\delta_H = \left( \frac{1-\zeta}{4} \right) \sqrt{L_H \epsilon} = \mathcal{O}(\epsilon^{1/2}).
\end{equation}
Moreover, when ${\bf g}_k$ and ${\bf H}_k$ are estimated from \eqref{eq:sub_gh} with ${\bf x}={\bf x}_k$ and subsample sets $\mathcal S_g = \mathcal S_{g,k}$ and $\mathcal S_H$ satisfying
\begin{align*}
\Abs{\mathcal S_g} \ge \frac{16K_g^2}{\bar\delta_g^2} \log\frac{1}{\bar\delta} = \mathcal{O}(\epsilon^{-2}), \quad \Abs{\mathcal S_H} \ge \frac{16K_H^2}{\bar\delta_H^2} \log\frac{2d}{\bar\delta} = \mathcal{O}(\epsilon^{-1}),
\end{align*}
then Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} is satisfied at iteration $k$ with probability at least $1-\bar\delta$.
\end{lemma}
\begin{proof}
The right-hand side of the bound on $\delta_{g,k}$ in Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} is bounded below by
\[
\frac{1-\zeta}{8} \min \left( \frac{3 \epsilon_H^2}{65(L_H+\eta)}, \epsilon_g \right) =
\frac{1-\zeta}{8} \min \left( \frac{3 L_H \epsilon }{65(L_H+\eta)}, \epsilon \right) = \bar\delta_g = \mathcal{O}(\epsilon),
\]
as claimed.
The claims concerning $\bar\delta_H$ are immediate.
\end{proof}
By combining \cref{lemma:sampling2} with Theorem~\ref{thm:fixed_stepsize_complexity}, we can obtain an oracle complexity result in which the oracle is either an evaluation of a gradient $\nabla f_i$ for some $i=1,2,\dotsc,n$ or a Hessian-vector product of the form $\nabla^2 f_i({\bf x}) {\bf v}$, for some $i=1,2,\dotsc,n$ and some ${\bf x}, {\bf v} \in {\mathbb R}^d$.
The result is complicated by the fact that there is a probability of failure to satisfy Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} at each $k$, to go along with the possible failure, noted in the previous section, to detect negative curvature when Procedure~\ref{alg:minimum_eigenvalue} is invoked.
For our result below, we consider the case in which failure to satisfy Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} never occurs at any iteration. Since there are at most $\bar{K}_2$ iterations, this case occurs with probability at least $(1-\bar\delta)^{\bar{K}_2}$.
\begin{corollary}[Evaluation Complexity of Algorithm~\ref{alg:fixed_stepsize} for finite-sum problem \cref{eq:finte_sum_problem}]
\label{cor:tr_prob}
Suppose that Assumption~\ref{ass:lip} is satisfied.
Let $\bar\delta \in (0,1)$ be given, and suppose that at each iteration $k$, ${\bf g}_k$ and ${\bf H}_k$ are obtained from \eqref{eq:sub_gh}, with $\mathcal S_g = \mathcal S_{g,k}$ and $\mathcal S_H$ satisfying the lower bounds in Lemma~\ref{lemma:sampling}, where $\delta_{g,k} \ge \bar\delta_g$ and $\delta_H \ge \bar\delta_H$, with $\bar\delta_g$ and $\bar\delta_H$ defined in \eqref{eq:bardgdh}.
For a given $\epsilon>0$, let $\epsilon_H = \sqrt{L_H \epsilon}, \epsilon_g = \epsilon$.
Let $\bar{K}_2$ be defined as in \eqref{eq:k2}.
Then with probability at least $(1-\bar\delta)^{\bar{K}_2}(1-\delta)^{\bar{K}_2}$, Algorithm~\ref{alg:fixed_stepsize} terminates in at most $\bar K_2$ iterations at a point ${\bf x}$ satisfying
$\norm{\nabla f({\bf x})} \lesssim \epsilon$ and $\lambda_{\min}(\nabla^2 f({\bf x})) \gtrsim -\sqrt{L_H\epsilon}$.
\reftwo{Again, $\lesssim$ and $\gtrsim$ denote that the corresponding inequality holds up to a certain constant that is independent of $ \epsilon $ and $ L_{H} $.}
Moreover, the total number of oracle calls is bounded by
\begin{align*}
& \underbrace{\left(2\left\lceil \frac{(f({\bf x}_0) - f_{\text{\rm low}})}{\min\{\bar{c}_{\text{\rm sol}},\bar{c}_{\text{\rm nc}}/8\}} \epsilon^{-3/2} \right \rceil + 3 \right)}_{\bar K_2} \cdot \left(\underbrace{\frac{16K_g^2}{\bar\delta_{g}^2} \log\frac{1}{\bar\delta}}_{\text{Gradient Sampling}} + \underbrace{\frac{16K_H^2}{\bar\delta_H^2} \log\frac{2d}{\bar\delta}}_{\text{Hessian Sampling}}\cdot \left(\underbrace{\tilde{\mathcal O}(\epsilon^{-1/4})}_{Procedure~\ref{alg:capped_cg}} + \underbrace{\mathcal{O}(\epsilon^{-1/4})}_{Procedure~\ref{alg:minimum_eigenvalue}} \right) \right) \\
& = \mathcal{O}(\epsilon^{-3/2}) \cdot({\mathcal O}(\epsilon^{-2}) + \tilde{\mathcal O}(\epsilon^{-1} \times \epsilon^{-1/4})) \\
& = {\mathcal O}(\epsilon^{-7/2}).
\end{align*}
\end{corollary}
\refone{
As mentioned earlier, \cref{alg:fixed_stepsize} requires knowledge of an upper bound on the Lipschitz constant $L_H$ of the Hessian matrix.
In addition, the sample complexity derived in \cref{cor:tr_prob} depends on upper estimates of $K_g$ and $K_H$, which may be unavailable for many non-convex problems. Fortunately, for many non-convex objectives of interest in machine learning and statistical analysis, we can readily obtain reasonable estimates of these quantities.
\cref{tab:L_H_bound} provides estimates on $L_H$ for some examples of such objectives. See \cref{tab:K_H_K_g_bound} for upper bounds on $K_g$ and $K_H$ for such problems.
Equipped with these estimates, we can give a more refined complexity analysis tailored for the problems in \cref{tab:L_H_bound,tab:K_H_K_g_bound}.
Indeed, since the constants $\bar{c}_{\text{\rm sol}}$ and $\bar{c}_{\text{\rm nc}}$ in \cref{lemma:fixed_stepsize_nc_from_cappedcg,lemma:fixed_stepsize_with_sol} satisfy
$\bar{c}_{\text{\rm sol}} \in \Omega(1/L_H^3)$ and $\bar{c}_{\text{\rm nc}} \in \Omega(1/L_H^3)$, it follows from \cref{tab:L_H_bound,tab:K_H_K_g_bound,cor:tr_prob} that the total number of oracle calls for these problems is at most
\begin{align*}
& \tilde {\mathcal O} \left[ \left( \left( \max_{i} \|{\bf a}_{i}\|^{9} \right) (f({\bf x}_0) - f_{\text{\rm low}}) \epsilon^{-3/2} \right) \right] \cdot \left( \tilde {\mathcal O} \left( \left( \max_{i} \|{\bf a}_{i}\|^{2} \right) \epsilon^{-2} \right) + \tilde {\mathcal O} \left( \left( \max_{i} \|{\bf a}_{i}\| \right) \epsilon^{-5/4}\right) \right) \\
& = \tilde {\mathcal O}\left(\epsilon^{-7/2} (f({\bf x}_0) - f_{\text{\rm low}}) \max_{i} \left\{1,\|{\bf a}_{i}\|\right\}^{11} \right),
\end{align*}
where for simplicity we have assumed $ |b_{i}| \leq 1 $, e.g., binary classification problems.
\begin{table}[!htb]
\centering
\refone{\caption{The upper bound of $K_g$ and $K_H$ for the non-convex finite-sum minimization problems of \cref{tab:L_H_bound}. \label{tab:K_H_K_g_bound}}}
\centering
\begin{adjustbox}{width=1\linewidth}
\refone{\begin{tabular}{lccc}
\toprule
Problem Formulation & Predictor Function & \multicolumn{1}{m{6cm}}{\centering Upper bound of $K_g$} & \multicolumn{1}{m{5cm}}{\centering Upper bound of $K_H$} \\
\midrule
$\sum\limits_{i=1}^n(b_i-\phi(\langle {\bf a}_i, {\bf x}\rangle))^2$ & $\phi(z) =1/{(1+e^{-z})}$
& \multicolumn{1}{m{6cm}}{\centering$\displaystyle \max_{i=1,\ldots,n} \; \left(|b_i| + 1\right)\|{\bf a}_{i}\|/2$} & $\displaystyle \max_{i=1,\ldots,n} \; \left(|b_i| + 2\right)\|{\bf a}_{i}\|^{2}$ \\ \\
$\sum\limits_{i=1}^n(b_i-\phi(\langle {\bf a}_i, {\bf x}\rangle))^2$ & $\phi(z) ={(e^{z}-e^{-z})}/{(e^{z}+e^{-z})}$
& \multicolumn{1}{m{6cm}}{\centering$\displaystyle \max_{i=1,\ldots,n} \; 2 \left(|b_i| + 1\right)\|{\bf a}_{i}\|$} & $\displaystyle \max_{i=1,\ldots,n} \; \left(|b_i| + 2\right)\|{\bf a}_{i}\|^{2}$ \\\\
$\sum\limits_{i=1}^n\phi(b_i - \langle {\bf a}_i, {\bf x}\rangle)$ & $\phi(z) ={(1 -e^{-\alpha z^2})}/{\alpha}$ & \multicolumn{1}{m{6cm}}{\centering$\displaystyle \sqrt{2/\alpha}\max_{i=1,\ldots,n} \; \|{\bf a}_{i}\|$} & $\displaystyle 2 \max_{i=1,\ldots,n} \;\|{\bf a}_{i}\|^{2}$\\
\bottomrule
\end{tabular}}
\end{adjustbox}
\end{table}
}
\subsection{Inexact Newton-CG algorithm without line search}
\label{sec:fixed_step_size}
Although \cref{alg:inexact_withlinesearch} employs approximate gradients and Hessians \refone{at various steps, the use of backtracking line search to compute the stepsize $\alpha_k$ requires exact evaluations of the function $f$}. This setting has indeed been considered in some previous work, e.g., \cite{yao2018inexact,roosta2019sub}.
When \refboth{gradient evaluation has similar computational cost to the corresponding function evaluation, we may not save much in computation by requiring only an approximate gradient.}
We show in this section that a pre-defined (``fixed'') value of the step length $\alpha_k$ can be carefully chosen to obviate the need for function evaluations.
\refone{The advantage of not requiring exact evaluations of functions is considerable, but there are disadvantages too.}
First, \refone{the computed fixed step-size is conservative, so} the guaranteed descent in the objective generally will be smaller than in \cref{alg:inexact_withlinesearch}; see \cref{lemma:dtype_sol_opt,lemma:nc_from_cappedcg,lemma:nc_both_procedure}.
Second, our approach makes use of an approximate upper bound $L_H$ on the Lipschitz constant of the Hessian, which might not be readily available.
Fortunately, there are many important instances (especially in machine learning) where an estimate of
$L_H$ can be obtained easily; for example, empirical risk minimization problems involving the squared loss \citep{xuNonconvexEmpirical2017} and Welsch's exponential variant \citep{zhang2019robustness}. See \cref{tab:L_H_bound} for details.
\begin{table}[!htb]
\centering
\caption{The upper bound of $L_H$ for some non-convex finite-sum minimization problems of the form \eqref{eq:finte_sum_problem}. Here, we consider $\{({\bf a}_i, b_i)\}_{i=1}^n$ as training data where ${\bf a}_i\in\bbR^d$ and $b_i\in\bbR$.
For Welsch's exponential function $\phi$, $\alpha$ is a positive parameter.
\label{tab:L_H_bound}}
\centering
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{lccc}
\toprule
Problem Formulation & Predictor Function & \multicolumn{1}{m{6cm}}{\centering Upper bound of $L_H$ for single data point $({\bf a},b)$} & \multicolumn{1}{m{5cm}}{\centering Upper bound of $L_H$ for entire problem} \\
\midrule
$\sum\limits_{i=1}^n(b_i-\phi(\langle {\bf a}_i, {\bf x}\rangle))^2$ & $\phi(z) =1/{(1+e^{-z})}$
& \multicolumn{1}{m{6cm}}{\centering$2\|{\bf a}\|^3(|b\phi'''(z)| + 3|\phi'(z)\phi''(z)|+|\phi(z)\phi'''(z)|)\leq 2(|b|+4)\|{\bf a}\|^3$} & $\displaystyle \max_{i=1,\ldots,n} \; 2(|b_{i}|+4)\|{\bf a}_i\|^3$ \\ \\
$\sum\limits_{i=1}^n(b_i-\phi(\langle {\bf a}_i, {\bf x}\rangle))^2$ & $\phi(z) ={(e^{z}-e^{-z})}/{(e^{z}+e^{-z})}$
& \multicolumn{1}{m{6cm}}{\centering$2\|{\bf a}\|^3(|b\phi'''(z)| + 3|\phi'(z)\phi''(z)|+|\phi(z)\phi'''(z)|)\leq 2(|b|+4)\|{\bf a}\|^3$} & $\displaystyle \max_{i=1,\ldots,n} \; 2(|b_{i}|+4)\|{\bf a}_i\|^3$ \\\\
$\sum\limits_{i=1}^n\phi(b_i - \langle {\bf a}_i, {\bf x}\rangle)$ & $\phi(z) ={(1 -e^{-\alpha z^2})}/{\alpha}$ & $\|{\bf a}\|^3|\phi'''(z)|$ & $9\alpha^{3/2} \displaystyle \max_{i=1,\ldots,n} \; \|{\bf a}_i\|^3$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
We state our variant of the Inexact Newton-CG Algorithm that does not require line search as \cref{alg:fixed_stepsize}.
Lines \ref{line:dif_01}, \ref{line:dif_02}, \ref{line:dif_03}, and \ref{line:dif_04}-\ref{line:dif_05} constitute the main differences between \cref{alg:inexact_withlinesearch,alg:fixed_stepsize}.
\input{inexact_alg_with_fixed_stepsize.tex}
The analysis of this section makes use of the following condition.
\begin{condition}\label{cond:opt_epsilong_epsilonh_fixedstepsize}
The inexact gradient ${\bf g}_k$ and Hessian ${\bf H}_k$ satisfy Condition \ref{cond:appr_gh} with
\begin{align*}
\delta_{g, k} &\leq \frac{1-\zeta}{8}\min\left( \frac{3\epsilon_H^2}{65(L_H+\eta)}, \max\big(\epsilon_g, \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)\big)\right)\\
\text{and}~~~\delta_H &\leq \frac{1-\zeta}{4}\epsilon_H.
\end{align*}
Throughout this section, we fix $\epsilon_H = \sqrt{L_H\epsilon_g}$, so that $\epsilon_g/\epsilon_H = \sqrt{\epsilon_g/L_H}$.
\end{condition}
In the next three lemmas, we show that the choices of $\alpha_k$ in \cref{alg:fixed_stepsize} lead to the step length acceptance condition used in \cref{alg:inexact_withlinesearch} being satisfied, that is,
\begin{equation} \label{eq:bf1}
-\frac{\eta}6 \alpha_k^3 \|{\bf d}_k\|^3 \geq f({\bf x}_k + \alpha_k{\bf d}_k) - f({\bf x}_k).
\end{equation}
We now show that the fixed step size results in sufficient decrease in the objective $f$ when $d_{\text{\rm type}}=\text{\sc SOL}$ and $\|{\bf d}_k\|\geq \sqrt{{\epsilon_g}/{L_H}}$.
The following lemma can be viewed as a fixed-step-size analogue of Lemma \ref{lemma:dtype_sol_opt}.
\begin{lemma}
\label{lemma:fixed_stepsize_with_sol}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} holds for all $k$.
Suppose that at iteration $k$ of Algorithm~\ref{alg:fixed_stepsize}, we have $\|{\bf g}_k\| \ge \epsilon_g$, so that Procedure~\ref{alg:capped_cg} is called.
When Procedure~\ref{alg:capped_cg} outputs a direction ${\bf d}_k$ with $d_{\text{\rm type}}=\text{\sc SOL}$ and $\|{\bf d}_k\|\geq\epsilon_g/\epsilon_H$, Algorithm~\ref{alg:fixed_stepsize} sets
\[
\alpha_k = \left[ \frac{3(1 - \zeta)}{4(L_H + \eta)}\right]^{1/2} \frac{\epsilon_H^{1/2}}{\|{\bf d}_k\|^{1/2}}.
\]
The resulting step ${\bf x}_{k+1}={\bf x}_k+\alpha_k{\bf d}_k$ satisfies
\[
f({\bf x}_k) -f({\bf x}_{k+1}) \geq \bar{c}_{\text{\rm sol}} \epsilon_H^3,
\]
where
\[
\bar{c}_{\text{\rm sol}} = \frac{\eta}{6} \left[ \frac{3(1 - \zeta)}{4L_H(L_H + \eta)}\right]^{3/2}.
\]
\end{lemma}
\begin{proof}
First, we prove that $\alpha_k \leq 1$. We have, using $\epsilon_H = \sqrt{L_H \epsilon_g}$, that
\[
\alpha_k^2
= \frac{3(1 - \zeta)\epsilon_H}{4(L_H + \eta)\|{\bf d}_k\|}
\leq \frac{3(1 - \zeta)\epsilon_H^{2}}{4(L_H + \eta)\epsilon_g}
= \frac{3(1 - \zeta)L_H}{4(L_H + \eta)} < 1.
\]
If we can show that \eqref{eq:bf1} holds, then we obtain the conclusion of the lemma by substituting the formula for $\alpha_k$ into this expression and using $\| {\bf d}_k \| \ge \epsilon_g/\epsilon_H$ and $\epsilon_H = \sqrt{L_H\epsilon_g}$.
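In more detail, this substitution gives
\begin{align*}
f({\bf x}_k) - f({\bf x}_{k+1}) &\geq \frac{\eta}6 \alpha_k^3 \|{\bf d}_k\|^3
= \frac{\eta}{6} \left[ \frac{3(1 - \zeta)}{4(L_H + \eta)}\right]^{3/2} \epsilon_H^{3/2} \|{\bf d}_k\|^{3/2} \\
&\geq \frac{\eta}{6} \left[ \frac{3(1 - \zeta)}{4(L_H + \eta)}\right]^{3/2} \epsilon_H^{3/2} \left(\frac{\epsilon_H}{L_H}\right)^{3/2}
= \bar{c}_{\text{\rm sol}}\, \epsilon_H^3,
\end{align*}
where the last inequality uses $\|{\bf d}_k\| \ge \epsilon_g/\epsilon_H = \epsilon_H/L_H$.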
Suppose for contradiction that condition \eqref{eq:bf1} is not satisfied. Then we have
\begin{align*}
-\frac{\eta}6 \alpha_k^3 \|{\bf d}_k\|^3
&\leq f({\bf x}_k + \alpha_k{\bf d}_k) - f({\bf x}_k) \\
&\leq \alpha_k\nabla f_k^T{\bf d}_k + \frac{\alpha_k^2}{2} {\bf d}_k^T\nabla^2 f({\bf x}_k) {\bf d}_k + \frac{L_H}6 \alpha_k^3 \|{\bf d}_k\|^3\\
& = \alpha_k{\bf g}_k^T{\bf d}_k + \frac{\alpha_k^2}{2} {\bf d}_k^T {\bf H}_k{\bf d}_k + \alpha_k(\nabla f_k-{\bf g}_k)^T{\bf d}_k + \frac{\alpha_k^2}{2} {\bf d}_k^T(\nabla^2 f({\bf x}_k)-{\bf H}_k) {\bf d}_k \\
&~~~+ \frac{L_H}6 \alpha_k^3 \|{\bf d}_k\|^3 \\
& \leq -\frac{\alpha_k}2 (1-\zeta)\epsilon_H\|{\bf d}_k\|^2 + \alpha_k\delta_{g,k}\|{\bf d}_k\| + \frac{\alpha_k^2}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \alpha_k^3\|{\bf d}_k\|^3 \quad \quad (\mbox{from \cref{eq:ic1}}) \\
& < \alpha_k\delta_{g,k}\|{\bf d}_k\|-\frac{\alpha_k}2 \|{\bf d}_k\|^2 \big((1-\zeta)\epsilon_H-\delta_H\big) + \frac{L_H}6 \alpha_k^3\|{\bf d}_k\|^3 \quad \quad (\mbox{since $\alpha_k<1$}).
\end{align*}
By rearrangement, it follows that
\begin{equation} \label{eq:sj8}
\frac{L_H+\eta}{6} \alpha_k^2 \| {\bf d}_k \|^2 - \frac12 \big((1-\zeta)\epsilon_H-\delta_H\big) \| {\bf d}_k \| + \delta_{g,k} >0.
\end{equation}
By substituting the definition of $\alpha_k$ and using $\delta_H\leq {(1-\zeta)}\epsilon_H/{2}$ into the formula above, we have that \eqref{eq:sj8} implies
\begin{align*}
\frac{L_H+\eta}{6}\left[ \frac{3(1 - \zeta)}{4(L_H + \eta)}\right] \frac{\epsilon_H}{\|{\bf d}_k\|}\|{\bf d}_k\|^2 - \frac{(1-\zeta)\epsilon_H}{4}\|{\bf d}_k\| + \delta_{g,k} & >0 \\
\Leftrightarrow -\frac{(1-\zeta)}{8} \epsilon_H \| {\bf d}_k \| + \delta_{g,k} & >0.
\end{align*}
By using $\delta_{g,k} \le (1-\zeta) \max\left(\epsilon_g, \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)\right)/8$, this inequality implies that
\begin{equation} \label{eq:sj9}
- \epsilon_H \|{\bf d}_k \| + \max\left(\epsilon_g, \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)\right) >0.
\end{equation}
If $\epsilon_g > \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)$, since $\epsilon_H = \sqrt{L_H\epsilon_g}$, we have from \eqref{eq:sj9} that
\[
- \sqrt{L_H\epsilon_g}\|{\bf d}_k\| + \epsilon_g > 0 \Rightarrow \|{\bf d}_k\| < \sqrt{\epsilon_g/L_H},
\]
which contradicts our assumption $\|{\bf d}_k\| \geq \sqrt{{\epsilon_g}/{L_H}}=\epsilon_g/\epsilon_H$.
Alternatively, if we assume that $\epsilon_g \le \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|)$, then from \eqref{eq:sj9}, it follows that
\[
0 < - \epsilon_H\|{\bf d}_k\| + \min(\epsilon_H\|{\bf d}_k\|, \|{\bf g}_k\|, \|{\bf g}_{k+1}\|) \le -\epsilon_H\|{\bf d}_k\| + \epsilon_H\|{\bf d}_k\| =0,
\]
which is again a contradiction. Hence, our chosen value of $\alpha_k$ must satisfy \eqref{eq:bf1}, completing the~proof.
\end{proof}
Next, let us deal with the case when $d_{\text{\rm type}}=\text{\sc NC}$, which can be considered as a fixed-step alternative to Lemma~\ref{lemma:nc_from_cappedcg}.
\begin{lemma}
\label{lemma:fixed_stepsize_nc_from_cappedcg}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} holds for all $k$.
Suppose that at iteration $k$ of Algorithm~\ref{alg:fixed_stepsize}, we have $\|{\bf g}_k\| > \epsilon_g$, so that Procedure~\ref{alg:capped_cg} is called.
When Procedure~\ref{alg:capped_cg} outputs a direction ${\bf d}_k$ with $d_{\text{\rm type}}=\text{\sc NC}$, we can choose the pre-defined step size
\[
\alpha_k = \left( \frac{{(\|{\bf d}_k\|-\delta_H)}/{2} + \sqrt{({(\|{\bf d}_k\|-\delta_H)}/{2})^2 - 4{(L_H+\eta)}\delta_{g,k}/6}}{{(L_H+\eta)\|{\bf d}_k\|}/3} \right) \tilde\theta,
\]
where $\tilde\theta$ is a parameter satisfying $(2-\sqrt{3})^2<\tilde\theta<1$.
The resulting step ${\bf x}_{k+1}={\bf x}_k+\alpha_k{\bf d}_k$ satisfies
$f({\bf x}_k) -f({\bf x}_{k+1}) \geq \bar{c}_{\text{\rm nc}} \epsilon_H^3$,
where
\[
\bar{c}_{\text{\rm nc}} := \frac{\eta}{6}\left[\frac{3 \tilde\theta}{4 (L_H+\eta)}\right]^3.
\]
\end{lemma}
\begin{proof}
We start by noting that under the assumptions of the lemma, we have
\begin{equation} \label{nncc.1}
{\bf d}_k^T {\bf H}_k {\bf d}_k \le -\epsilon_H \| {\bf d}_k \|^2, \quad \| {\bf d}_k \| \ge \epsilon_H.
\end{equation}
We replace the lower bound on $\| {\bf d}_k \|$ by the weaker bound $\| {\bf d}_k \| \ge \tfrac12 \epsilon_H$ (so that we can reuse our results in the next lemma) to obtain
\begin{equation} \label{eq:lw2}
\| {\bf d}_k \| \ge \frac12 \epsilon_H, \quad \delta_H \le \frac14 \epsilon_H \le \frac12 \| {\bf d}_k \| \;\; \mbox{and so} \;\; \| {\bf d}_k \| - \delta_H \ge \frac12 \| {\bf d}_k \| \ge \frac14 \epsilon_H.
\end{equation}
Note too that ${\bf d}_k^T {\bf g}_k \le 0$ by design, so that from Condition~\ref{cond:appr_gh} (the definition of $\delta_{g,k}$), we have
\begin{equation} \label{eq:es4}
{\bf d}_k^T \nabla f({\bf x}_k) \le {\bf d}_k^T {\bf g}_k + \| {\bf d}_k \| \| \nabla f({\bf x}_k) - {\bf g}_k \| \le \delta_{g,k} \| {\bf d}_k \|.
\end{equation}
We therefore have
\begin{align*}
f({\bf x}_k + \alpha_k{\bf d}_k) - f({\bf x}_k)
&\leq \alpha_k\nabla f({\bf x}_k)^T{\bf d}_k + \frac{\alpha_k^{2}}{2} {\bf d}_k^T\nabla^2 f({\bf x}_k) {\bf d}_k + \frac{L_H}6 \alpha_k^{3} \|{\bf d}_k\|^3\\
&\leq \alpha_k\delta_{g,k}\|{\bf d}_k\| -\frac{\alpha_k^2}{2}\|{\bf d}_k\|^3 + \frac{\alpha_k^2}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \alpha_k^3 \|{\bf d}_k\|^3. \quad \quad (\mbox{from \cref{eq:rd9}})
\end{align*}
Thus condition \eqref{eq:bf1} will be satisfied provided that
\[
\alpha_k\delta_{g,k}\|{\bf d}_k\| -\frac{\alpha_k^2}{2}\|{\bf d}_k\|^3 + \frac{\alpha_k^2}{2}\delta_H\|{\bf d}_k\|^2 + \frac{L_H}6 \alpha_k^3 \|{\bf d}_k\|^3 \le -\frac{\eta}6 \alpha_k^{3} \|{\bf d}_k\|^3.
\]
By rearranging and dividing by $\alpha_k \| {\bf d}_k \|$, we find that $\alpha_k$ satisfies \eqref{eq:bf1} provided that the following quadratic inequality in $\alpha_k$ is satisfied:
\begin{equation} \label{eq:qe2}
\left( \frac{(L_H+\eta)\|{\bf d}_k\|^2}{6}\right) \alpha_k^2 - \left( \frac{\|{\bf d}_k\|(\|{\bf d}_k\|-\delta_H)}{2} \right) \alpha_k + \delta_{g,k} \le 0.
\end{equation}
In fact this inequality is satisfied provided that $\alpha_k \in [\beta_2,\beta_1]$, where
\begin{align*}
\beta_1 & := \frac{{(\|{\bf d}_k\|-\delta_H)}/{2} + \sqrt{({(\|{\bf d}_k\|-\delta_H)}/{2})^2 - 4{(L_H+\eta)}\delta_{g,k}/6}}{{(L_H+\eta)\|{\bf d}_k\|}/3}, \\
\beta_2 &:= \frac{{(\|{\bf d}_k\|-\delta_H)}/{2} - \sqrt{({(\|{\bf d}_k\|-\delta_H)}/{2})^2 - 4{(L_H+\eta)}\delta_{g,k}/6}}{{(L_H+\eta)\|{\bf d}_k\|}/3}.
\end{align*}
To verify that the quantity under the square root is positive, we use \eqref{eq:lw2} to write
\begin{align*}
\left(\frac{\|{\bf d}_k\|-\delta_H}{2} \right)^2 - 4\frac{(L_H+\eta)}6 \delta_{g, k}
& \geq \frac{1}{16}\|{\bf d}_k\|^2 - \frac{2(L_H+\eta)}{3}\delta_{g, k} \\
& \geq \frac{1}{64} \epsilon_H^2 - \frac{2(L_H+\eta)}{3}\delta_{g, k}
> 0,
\end{align*}
where the last inequality follows from Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize}, since
\[
\delta_{g,k} \le \frac{3}{2\times 65} \frac{\epsilon_H^2}{L_H+\eta} < \frac{3}{128} \frac{\epsilon_H^2}{L_H+\eta}.
\]
(Note that $0<\beta_2<\beta_1$.)
Next, we show that our choice of $\alpha_k$, which equals $\tilde\theta \beta_1$, lies in the interval $(\beta_2,\beta_1)$.
First, we have $\alpha_k=\tilde\theta\beta_1 < \beta_1$ since $\tilde\theta<1$.
Second, proving $\alpha_k > \beta_2$ is equivalent to showing that $\tilde\theta > \beta_2 / \beta_1$. Defining
\[
z:= \frac{\| {\bf d}_k \|-\delta_H}{2}, \quad c:= \frac23 (L_H+\eta) \delta_{g,k},
\]
we see that
\[
\beta_1 = \frac{z+\sqrt{z^2-c}}{(L_H+\eta)\|{\bf d}_k \|/3}, \quad
\beta_2 = \frac{z-\sqrt{z^2-c}}{(L_H+\eta)\|{\bf d}_k \|/3},
\]
so that the required condition is
\[
\tilde\theta > \beta_2 / \beta_1 = \frac{z-\sqrt{z^2-c}}{z+\sqrt{z^2-c}} = \frac{(z-\sqrt{z^2-c})^2}{c}.
\]
We have from \eqref{eq:lw2} and Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} that
\[
z^2 = \left( \frac{\|{\bf d}_k \| - \delta_H}{2} \right)^2 \ge \frac{\epsilon_H^2}{64} > \frac{\epsilon_H^2}{65} \ge \frac{8}{3} (L_H+\eta) \delta_{g,k} = 4c.
\]
Since $z-\sqrt{z^2-c}$ is a decreasing function of $z$ for all $z^2 >c>0$, we have by using $z^2>4c$ that
\[
\frac{\beta_2}{\beta_1} = \frac{(z-\sqrt{z^2-c})^2}{c} < \frac{(2 \sqrt{c} - \sqrt{4c-c})^2}{c} = (2-\sqrt{3})^2 < \tilde\theta.
\]
We have therefore proved that $\alpha_k \in [\beta_2,\beta_1]$, so that $\alpha_k$ satisfies \eqref{eq:bf1}.
From \eqref{eq:lw2}, we have
\begin{align} \nonumber
\alpha_k = \tilde\theta \beta_1
& = \tilde\theta\frac{{(\|{\bf d}_k\|-\delta_H)}/{2} + \sqrt{({(\|{\bf d}_k\|-\delta_H)}/{2})^2 - 4{(L_H+\eta)}\delta_{g,k}/6}}{{(L_H+\eta)\|{\bf d}_k\|}/3} \\
\label{eq:lw3}
& \geq \tilde\theta\frac{\|{\bf d}_k\|/4}{{(L_H+\eta)\|{\bf d}_k\|}/3}
= \frac{3}{4}\frac{\tilde\theta}{L_H+\eta}.
\end{align}
The final claim of the lemma is obtained by substituting this lower bound on $\alpha_k$ into \eqref{eq:bf1}, and using $\|{\bf d}_k \| \ge \epsilon_H$.
\end{proof}
The next lemma shows that when $d_{\text{\rm type}}=\text{\sc NC}$ is obtained from Procedure~\ref{alg:minimum_eigenvalue}, the same fixed step size as in \cref{lemma:fixed_stepsize_nc_from_cappedcg} can be used, with the same lower bound on improvement in $f$.
\begin{lemma}\label{lemma:fixedstepsize_nc_proc2}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} holds for all $k$.
Suppose that at iteration $k$ of Algorithm~\ref{alg:fixed_stepsize}, the step ${\bf d}_k$ is of negative curvature type, obtained from Procedure~\ref{alg:minimum_eigenvalue}. Then when we define $\alpha_k$ as in \cref{lemma:fixed_stepsize_nc_from_cappedcg}, we obtain
\begin{equation}
f({\bf x}_k) - f({\bf x}_k + \alpha_k {\bf d}_k) \ge \frac18 \bar{c}_{\text{\rm nc}} \epsilon_H^3,
\end{equation}
where $\bar{c}_{\text{\rm nc}}$ is defined in \cref{lemma:fixed_stepsize_nc_from_cappedcg}.
\end{lemma}
\begin{proof}
Note that for ${\bf d}_k$ obtained from Procedure~\ref{alg:minimum_eigenvalue}, we have
\[
{\bf d}_k^T{\bf H}_k{\bf d}_k \leq -\frac{1}{2}\epsilon_H\|{\bf d}_k\|^2, \quad \| {\bf d}_k \| \ge \frac12 \epsilon_H.
\]
Since the bulk of the proof of \cref{lemma:fixed_stepsize_nc_from_cappedcg} uses only the latter lower bound on $\|{\bf d}_k\|$, we can use this proof to derive the same lower bound \eqref{eq:lw3} on $\alpha_k$. The result follows by substituting this lower bound together with $\|{\bf d}_k \| \ge \epsilon_H/2$ into \eqref{eq:bf1}.
\end{proof}
Using \cref{lemma:fixed_stepsize_with_sol,lemma:fixed_stepsize_nc_from_cappedcg,lemma:fixedstepsize_nc_proc2,lemma:terminate_step_condition_withlinesearch}, we are now ready to give the iteration complexity of \cref{alg:fixed_stepsize}.
\begin{theorem}
\label{thm:fixed_stepsize_complexity}
Suppose that Assumption~\ref{ass:lip} is satisfied and that Condition~\ref{cond:opt_epsilong_epsilonh_fixedstepsize} holds for all $k$.
For a given $\epsilon>0$, let $\epsilon_H = \sqrt{L_H \epsilon}, \epsilon_g = \epsilon$.
Define
\begin{equation}\label{eq:k2}
\bar K_2 := 2 \left\lceil \frac{f({\bf x}_0) - f_{\text{\rm low}}}{\min\{\bar{c}_{\text{\rm sol}},\bar{c}_{\text{\rm nc}}/8\} L_H^{3/2}} \epsilon^{-3/2} \right \rceil + 3,
\end{equation}
where $\bar{c}_{\text{\rm sol}}$ and $\bar{c}_{\text{\rm nc}}$ are defined in \cref{lemma:fixed_stepsize_with_sol} and \cref{lemma:fixed_stepsize_nc_from_cappedcg}, respectively.
Then \cref{alg:fixed_stepsize} terminates in at most $\bar K_2$ iterations at a point ${\bf x}$ satisfying
$\norm{\nabla f({\bf x})} \lesssim \epsilon$.
Moreover, with probability at least $(1 - \delta)^{\bar K_2}$, the point returned by \cref{alg:fixed_stepsize} also satisfies the approximate second-order condition
$\lambda_{\min}(\nabla^2 f({\bf x})) \gtrsim -\sqrt{L_H\epsilon}$.
\reftwo{Here again, $\lesssim$ and $\gtrsim$ denote that the corresponding inequality holds up to a certain constant that is independent of $ \epsilon $ and $ L_{H} $.}
\end{theorem}
\begin{proof}
The proof tracks that of \cref{thm:opt_iteration_comp} closely, so we omit much of the detail and discussion.
For contradiction, we assume that \cref{alg:fixed_stepsize} runs for at least $K$ steps, where $K>\bar K_2$. We partition the set of iteration indices $\{1,2,\dotsc,K \}$ into the same sets $\mathcal K_1, \dotsc, \mathcal K_5$ as in the proof of \cref{thm:opt_iteration_comp}.
Considering each of these sets in turn, we have the following.
\paragraph{Case 1:} $k \in \mathcal K_1$. Either \cref{alg:fixed_stepsize} terminates (which happens at most once for $k \in \mathcal K_1$) or we achieve a reduction in $f$ of at least $\tfrac18 \bar{c}_{\text{\rm nc}} \epsilon_H^3 = \tfrac18 \bar{c}_{\text{\rm nc}} L_H^{3/2} \epsilon^{3/2}$ (\cref{lemma:fixedstepsize_nc_proc2}).
\paragraph{Cases 2 and 3:} $k \in \mathcal K_2 \cup \mathcal K_3$. $f$ is reduced by at least $\bar{c}_{\text{\rm sol}} L_H^{3/2} \epsilon^{3/2}$ (\cref{lemma:fixed_stepsize_with_sol}).
\paragraph{Case 4:} $k \in \mathcal K_4$. The algorithm terminates, so we must have $|\mathcal K_4| \le 1$.
\paragraph{Case 5:} $k \in \mathcal K_5$. Either the algorithm terminates, or we achieve a reduction of at least $\bar{c}_{\text{\rm nc}} L_H^{3/2} \epsilon^{3/2}$ (\cref{lemma:fixed_stepsize_nc_from_cappedcg}).
Reasoning as in the proof of \cref{thm:opt_iteration_comp}, we have that
\[
f({\bf x}_0) - f_{\text{\rm low}} \ge (| \mathcal K_1|-1) \tfrac18 \bar{c}_{\text{\rm nc}} L_H^{3/2} \epsilon^{3/2} + (| \mathcal K_2| + | \mathcal K_3| ) \bar{c}_{\text{\rm sol}} L_H^{3/2} \epsilon^{3/2} + (| \mathcal K_5|-1) \bar{c}_{\text{\rm nc}} L_H^{3/2} \epsilon^{3/2},
\]
from which we obtain
\begin{align*}
| \mathcal K_1| + | \mathcal K_5| -2 & \le \frac{f({\bf x}_0)-f_{\text{\rm low}}}{\tfrac18 \bar{c}_{\text{\rm nc}} L_H^{3/2}} \epsilon^{-3/2}, \\
| \mathcal K_2| + | \mathcal K_3| & \le \frac{f({\bf x}_0)-f_{\text{\rm low}}}{\bar{c}_{\text{\rm sol}} L_H^{3/2}}\epsilon^{-3/2}.
\end{align*}
By using these bounds along with $| \mathcal K_4| \le 1$, we obtain
\[
K \le \sum_{i=1}^5 | \mathcal K_i| \le 2 \frac{f({\bf x}_0) - f_{\text{\rm low}}}{\min(\bar{c}_{\text{\rm nc}}/8,\bar{c}_{\text{\rm sol}}) L_H^{3/2}} \epsilon^{-3/2} + 3,
\]
which contradicts our assumption that $K > \bar K_2$.
The proof of the remaining claim, concerning the approximate second-order condition, is identical to the corresponding section in the proof of \cref{thm:opt_iteration_comp}.
\end{proof}
Note that the worst-case iteration complexity of \cref{alg:fixed_stepsize} has the same dependence on $\epsilon$ as \cref{alg:inexact_withlinesearch}, despite function evaluations no longer being required. The terms in the bound that do not depend on $\epsilon$ are, however, generally worse for \cref{alg:fixed_stepsize}.
\reftwo{We conclude with a discussion of Conditions \ref{cond:opt_epsilong_epsilonh} and \ref{cond:opt_epsilong_epsilonh_fixedstepsize}.
These conditions allow the accuracy of ${\bf g}_k$ to be chosen adaptively, depending on problem-dependent constants, algorithmic parameters, the desired solution tolerances $\epsilon_g$ and $\epsilon_H$, and the quantities $\|{\bf d}_k\|$, $\|{\bf g}_{k+1}\|$, and $\|{\bf g}_k\|$. The quantity $\|{\bf g}_k\|$ is easy to evaluate (since, after all, ${\bf g}_k$ is the quantity actually calculated). However, the dependence on the quantities $\| {\bf d}_k\|$ and $\| {\bf g}_{k+1} \|$ is more problematic, since ${\bf g}_k$ is needed to compute both ${\bf d}_k$ and ${\bf g}_{k+1}$.
Thus, the bounds on $\delta_{g,k}$ in Conditions \ref{cond:opt_epsilong_epsilonh} and \ref{cond:opt_epsilong_epsilonh_fixedstepsize} can be checked only ``in retrospect,'' not enforced as an a priori condition.
We can deal with this issue by checking the bound on $\delta_{g,k}$ after the step to ${\bf x}_{k+1}$ has been taken;
if it fails to be satisfied, we can improve the accuracy of ${\bf g}_k$ and redo iteration $k$.
If we halve $\delta_{g,k}$ each time the step is recomputed, the number of recomputations is at worst a multiple of $\log (1/\epsilon_g)$ (since the bound on $\delta_{g,k}$ in both conditions is at least $(1-\zeta)\epsilon_g/8$), so our complexity bounds are not affected significantly.
We elide this fairly uninteresting issue in our analysis, and simply assume that the relevant bound on $\delta_{g,k}$ holds at each iteration.
}
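A minimal Python sketch of this retrospective strategy follows; the \texttt{oracle} object and all of its methods are hypothetical stand-ins, not part of the algorithms of this paper.
\begin{verbatim}
# Retry loop for the "check in retrospect" strategy described above.
def step_with_retrospective_check(x_k, sample_size, oracle, max_tries=50):
    for _ in range(max_tries):
        g_k = oracle.sample_gradient(x_k, sample_size)        # inexact g_k
        x_next, d_k = oracle.newton_cg_step(x_k, g_k)         # one iteration
        g_next = oracle.sample_gradient(x_next, sample_size)  # inexact g_{k+1}
        # delta_bound stands for the right-hand side of the bound on
        # delta_{g,k}, which depends on ||d_k||, ||g_k||, ||g_{k+1}||,
        # all available once the trial step has been taken.
        if oracle.gradient_error(x_k, sample_size) <= \
                oracle.delta_bound(d_k, g_k, g_next):
            return x_next
        # quadruple the sample to roughly halve the Monte Carlo gradient
        # error, which scales like sample_size ** (-1/2)
        sample_size *= 4
    return x_next
\end{verbatim}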
\iffalse
\reftwo{Before ending this section, we emphasize that although the adaptivity of Conditions \ref{cond:opt_epsilong_epsilonh} and \ref{cond:opt_epsilong_epsilonh_fixedstepsize} is a desirable theoretical property, the main drawback lies in the difficulty of enforcing them in practice.
Indeed, a priori enforcing Conditions \ref{cond:opt_epsilong_epsilonh} and \ref{cond:opt_epsilong_epsilonh_fixedstepsize} requires one to have already taken the $k^{\textnormal{th}}$ iteration, which itself can be done only after computing ${\bf g}_{k}$, hence creating a vicious circle.
A posteriori guarantees can be given if one obtains a lower-bound estimate on the yet-to-be-computed $ \|{\bf g}_{k}\| $ and $ \|{\bf g}_{k+1}\| $, i.e., to have $ g_{0}>0 $ such that $ g_{0} \leq \min\{\|{\bf g}_{k}\|,\|{\bf g}_{k+1}\|\} $.
This allows one to consider a stronger, but practically enforceable, condition. However, to obtain such a lower-bound estimate on $\|{\bf g}_{k}\|$ and $\|{\bf g}_{k+1}\|$, one has to resort to a recursive procedure, which necessitates repeated constructions of the approximate gradient and subsequent solutions of the corresponding sub-problems.
Clearly, this procedure will result in a significant computational overhead and will lead to undesirable theoretical complexities.
An important area for future work is the investigation of ways to modify the adaptivity of Conditions~\ref{cond:opt_epsilong_epsilonh} and \ref{cond:opt_epsilong_epsilonh_fixedstepsize} in such a way that they can be enforced in practice.}
\fi
\section{Numerical evaluation}
\label{sec:numerical_experiments}
In this section, we evaluate the performance of \cref{alg:inexact_withlinesearch,alg:fixed_stepsize} on three model problems in the form of finite-sum minimization: nonlinear least squares (NLS), multilayer perceptron (MLP), and variational autoencoder (VAE).
Our aim here is to illustrate the efficiency gained from gradient and Hessian approximations as compared with the exact counterpart in \cite{royer2018newton}.
More specifically, in our numerical examples, we consider the following algorithms.
\begin{itemize}[leftmargin=*,wide=0em
\item {\texttt {Full NTCG}}: Newton Method with Capped-CG solver with full gradient and Hessian evaluations, as developed in \cite{royer2018newton}.
\item {\texttt {SubH NTCG}} ({\bf this work}): Variant of \cite{royer2018newton} where the Hessian is approximated. We consider this setting as an intermediary between the full algorithm and those where both the gradient and the Hessian are approximated. The sample sizes for approximating the Hessian in the experiments using NLS, MLP, and VAE are $ 0.01n $, $ 0.02n $, and $ 0.02n $, respectively.
\item {\texttt {Inexact NTCG Full-Eval}} ({\bf this work}): Newton Method with Capped-CG solver with back-tracking line-search where both the gradient and the Hessian are approximated.
To perform the backtracking line search, we employ the full dataset to evaluate the objective function.
The sample size for estimating the gradient is adaptively calculated as follows: if $\|{\bf g}_{t}\| \geq 1.2 \|{\bf g}_{t-1}\|$ or $\|{\bf g}_{t}\| \leq \|{\bf g}_{t-1}\|/1.2$, then the sample size is decreased or increased, respectively, by a factor of 1.2;
otherwise, we maintain the same sample size as in the previous iteration (see the sketch following this list). \reftwo{The initial sample size to approximate the gradient for the experiments of \cref{sec:nls} is set to $0.05 n$, while for the experiments of \cref{sec:mlp,sec:vae}, we use an initial sample size of 10,000.}
The sample size for approximating the Hessian is set the same as that in \texttt{SubH NTCG}.
\item {\texttt {Inexact NTCG Fixed}} ({\bf this work}): Newton Method with Capped-CG solver, using approximations of both the gradient and the Hessian and fixed step-sizes.
The step sizes are predefined as follows: for NLS experiments, we use $\alpha_k=0.04$ for $d_{\text{\rm type}}=\text{\sc NC}$ and $\alpha_k=0.2$ for $d_{\text{\rm type}}=\text{\sc SOL}$, while for simulations on MLP/VAE models, we consider $\alpha_k=0.1$ for $d_{\text{\rm type}}=\text{\sc NC}$ and $\alpha_k=\sqrt{0.1}$ for $d_{\text{\rm type}}=\text{\sc SOL}$.
The gradient and Hessian approximations are done as in the previous two variants.
\item {\texttt {Inexact NTCG Sub-Eval}}: This method is almost identical to \texttt{Inexact NTCG Full-Eval}; however,
the backtracking line search is performed on estimates of the objective function, using the same samples as those used in the gradient approximation.
Of course, our theoretical analysis does not immediately support this variant. However, we have found this strategy to be highly effective in practice, and we intend to theoretically investigate it in future work.
\end{itemize}
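The following Python sketch illustrates the gradient sample-size adaptation rule of \texttt{Inexact NTCG Full-Eval} described in the list above; the factor $1.2$ comes from that description, while the function name and the clipping to $[1,n]$ are our own illustrative choices.
\begin{verbatim}
# Adapt the gradient sample size from the observed gradient norms.
def update_gradient_sample_size(n_sample, g_norm, g_norm_prev,
                                n_total, factor=1.2):
    if g_norm >= factor * g_norm_prev:
        n_sample = n_sample / factor     # gradient norm grew: decrease
    elif g_norm <= g_norm_prev / factor:
        n_sample = n_sample * factor     # gradient norm shrank: increase
    # otherwise keep the previous sample size
    return int(min(max(n_sample, 1), n_total))
\end{verbatim}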
\reftwo{In all of our experiments, we run each stochastic method five times (starting from the same initial point), and plot the average run (solid line) and 1-standard deviation band (shaded regions).
To avoid cluttering the plots, we only show the upper deviation from the average, since the lower deviation band is almost identical on all of our experiments.}
\refone{We note that the step-size implied by \cref{alg:fixed_stepsize} is very pessimistic and hence small. This is a byproduct of our worst-case analysis, which accumulates descent from a sequence of conservative steps. Requiring small step-lengths to provide a convergence guarantee is perhaps the main drawback of the worst-case style of analysis, which is almost ubiquitous within the optimization literature, e.g., the fixed step-size of length $1/L_g$ for gradient descent on smooth unconstrained problems.
Our numerical example shows that much larger step-sizes than those prescribed by \cref{alg:fixed_stepsize} can be employed in practice.
We suspect this to be the case for most practical applications.}
\reftwo{
Although in Algorithms \ref{alg:inexact_withlinesearch} and \ref{alg:fixed_stepsize}, the case where $ \|{\bf d}_{k}\| $ is small (relative to the ratio $\epsilon_{g}/\epsilon_{H}$) is crucial in obtaining theoretical guarantees, in all of our simulations, we have found that performing line search directly with such small $ {\bf d}_{k} $ and without resorting to Procedure \ref{alg:minimum_eigenvalue} in fact yields reasonable progress. In this light, in all of our implementations, we have made the practical decision to omit Lines 9-16 of Algorithms \ref{alg:inexact_withlinesearch} and \ref{alg:fixed_stepsize}.}
Similar to \cite{xuNonconvexEmpirical2017,yao2018inexact}, the performance of all the algorithms is measured by tallying the total \textit{number of propagations}, that is, the number of oracle calls of function, gradient, and Hessian-vector products. This is because comparing algorithms in terms of ``wall-clock'' time can be highly affected by their particular implementation details as well as system specifications. In contrast, counting the number of oracle calls, as an implementation- and system-independent unit of complexity, is the most appropriate and fair choice.
More specifically, after computing $ f_{i}({\bf x}) $, which accounts for one oracle call, computing the corresponding gradient $ \nabla f_{i}({\bf x}) $ is equivalent to one additional function evaluation, i.e., two oracle calls are needed to compute $ \nabla f_{i}({\bf x}) $. Our implementations are Hessian-free, i.e., we merely require Hessian-vector products instead of using the explicit Hessian. For this, each Hessian-vector product $ \nabla^{2} f_{i}({\bf x}) {\bm{v}} $ amounts to two additional function evaluations, as compared with gradient evaluation, i.e., four oracle calls are used to evaluate $ \nabla^{2} f_{i}({\bf x}) {\bm{v}}$.
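A minimal Python sketch of this accounting convention follows; the class and method names are our own illustrative choices.
\begin{verbatim}
# Oracle-call bookkeeping: 1 call per f_i evaluation, 2 per gradient
# of f_i, and 4 per Hessian-vector product with f_i.
class PropagationCounter:
    COST = {"func": 1, "grad": 2, "hess_vec": 4}

    def __init__(self):
        self.total = 0

    def record(self, kind, n_samples):
        # n_samples = number of component functions f_i touched
        self.total += self.COST[kind] * n_samples

counter = PropagationCounter()
counter.record("grad", 1000)     # sampled gradient over 1000 components
counter.record("hess_vec", 500)  # one Hv product over 500 components
print(counter.total)             # 2*1000 + 4*500 = 4000
\end{verbatim}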
\subsection{Nonlinear least squares}
\label{sec:nls}
We first consider the simple, yet illustrative, non-linear least squares problems arising from the task of binary classification with squared loss.\footnote{Logistic loss, the ``standard'' loss used in this task, leads to a convex objective. We use squared loss to obtain a nonconvex objective.}
Given training data $\{{\bf a}_i,b_i\}_{i=1}^n$, where ${\bf a}_i\in\bbR^d, b_i\in\{0,1\}$, we solve the empirical risk minimization problem
\begin{align*}
\min_{{\bf x}\in\bbR^d} \frac1n \sum_{i=1}^n \Big(b_i-\phi\big(\langle{\bf a}_i,{\bf x}\rangle\big)\Big)^2,
\end{align*}
where $\phi(z)$ is the sigmoid function: $\phi(z) = 1/(1+e^{-z})$.
Datasets are taken from \texttt{LIBSVM} library~\citep{libsvm}; see \cref{tab:data1} for details.
We use the same setup as in~\cite{yao2018inexact}.
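For concreteness, a minimal Python sketch of the corresponding loss, gradient, and Hessian-vector-product oracle follows; it is an illustration of the objective above, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Oracle for (1/n) * sum_i (b_i - sigmoid(<a_i, x>))^2.
def nls_oracle(x, A, b):
    n = A.shape[0]
    p = sigmoid(A @ x)
    r = b - p                  # residuals
    s = p * (1.0 - p)          # sigmoid'(z)
    loss = np.mean(r ** 2)
    grad = A.T @ (-2.0 * r * s) / n
    def hess_vec(v):
        # second derivative of (b - phi(z))^2 w.r.t. z is
        # 2 phi'(z)^2 - 2 (b - phi(z)) phi''(z), with phi'' = s (1 - 2p)
        w = 2.0 * s ** 2 - 2.0 * r * s * (1.0 - 2.0 * p)
        return A.T @ (w * (A @ v)) / n
    return loss, grad, hess_vec
\end{verbatim}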
\begin{table}[!htb]
\centering
\caption{Datasets used for NLS experiments.}
\label{tab:data1}
\begin{tabular}{lcc}
\toprule
\sc Data & $n$ & $d$ \\
\midrule
{\texttt{ covertype}} & 464,810 & 54
\\
{\texttt{ ijcnn1}} & 49,990 & 22
\\
\bottomrule
\end{tabular}
\end{table}
The comparison between different NTCG algorithms is shown in Figure~\ref{fig:nls_result_ntcg}.
\reftwo{It is clear that, for a given value of the loss, all inexact variants in the \texttt{Inexact NTCG} family converge faster, i.e., with fewer oracle calls.}
Clearly, the lower per-iteration cost of \texttt{Inexact NTCG Fixed} comes at the cost of slower overall convergence as compared with \texttt{Inexact NTCG Sub-Eval}. This is mainly because the step size obtained from the line-search procedure generally results in a larger decrease in function value.
\reftwo{For this problem we could refer to \cref{tab:L_H_bound} and explicitly compute the fixed step-size prescribed by \cref{alg:fixed_stepsize}.
As mentioned earlier, the resulting step size is overly conservative.
Our simulations show that much larger step-sizes yield convergent algorithms. In this light, our fixed step-sizes are chosen without regard to the value prescribed in \cref{alg:fixed_stepsize}, but are based rather on numerical experience.}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.49\textwidth]{figures/ijcnn1.pdf}
\includegraphics[width=.49\textwidth]{figures/covtype2.pdf}
\end{center}
\caption{
Comparison between all variants of \texttt{NTCG} on \texttt{ijcnn1} and \texttt{covertype} datasets.
}
\label{fig:nls_result_ntcg}
\end{figure}
\subsection{Multilayer perceptron}
\label{sec:mlp}
Here, we consider a slightly more complex setting than simple NLS and evaluate the performance of \cref{alg:inexact_withlinesearch,alg:fixed_stepsize} on several MLPs in the context of the image classification problem.
For our experiments here, we will make use of the \texttt{MNIST} dataset, which is also available from \texttt{LIBSVM} library \citep{libsvm}.
We consider three MLPs with one hidden layer, involving $ 16 $, $ 128 $, and $ 1024 $ neurons, respectively. All MLPs contain one output layer to determine the assigned class of the input image.
The intermediate activation is chosen as the SoftPlus function~\citep{glorot2011deep}, which amounts to a smooth optimization problem.
Table~\ref{tab:mlp} summarizes the total dimensions, in terms of $ n $ and $ d $, of the resulting optimization problems.
\begin{table}[!htb]
\centering
\caption{The problem size for various MLPs.
\label{tab:mlp}}
\centering
\begin{tabular}{lcc}
\toprule
\multicolumn{1}{m{2cm}}{\centering Hidden Layer Size} & $n$ & $ d $ \\
\midrule
16 & 60,000 & 12,704 \\
128 & 60,000 & 101,632 \\
1,024 & 60,000 & 813,056 \\
\bottomrule
\end{tabular}
\end{table}
\cref{fig:mlp_result_ntcg} depicts the performance of all variants of NTCG that we consider in this paper.
As can be seen, for all cases, our \texttt{Inexact NTCG Full-Eval} and \texttt{Inexact NTCG Sub-Eval} have the fastest convergence rate and achieve lower training loss as compared to alternatives.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.325\textwidth]{figures/16.pdf}
\includegraphics[width=.325\textwidth]{figures/128.pdf}
\includegraphics[width=.325\textwidth]{figures/1024.pdf}
\end{center}
\caption{
Comparison between all variants of \texttt{NTCG} on several MLPs with different hidden-layer sizes: 16 (left), 128 (middle), and 1024 (right).
}
\label{fig:mlp_result_ntcg}
\end{figure}
\subsection{Variational autoencoder}
\label{sec:vae}
We now evaluate the performance of
\cref{alg:inexact_withlinesearch,alg:fixed_stepsize}
using a more complex setting of variational autoencoder (VAE) model.
Our VAE model consists of six fully-connected layers, which are structured as $784\rightarrow512\rightarrow256\rightarrow2\rightarrow256\rightarrow512\rightarrow784$.
The intermediate activation and the output truncation functions are chosen as SoftPlus \citep{glorot2011deep} and Sigmoid, respectively. We again consider the \texttt{MNIST} dataset.
The results are shown in Figure~\ref{fig:vae}.
Although we did not fine-tune the fixed step-sizes used within \texttt{Inexact NTCG Fixed} (as evidenced by its clear non-monotonic behavior), one can see that \texttt{Inexact NTCG Fixed} exhibits competitive performance.
Again, as observed previously, \texttt{Inexact NTCG Full-Eval} and \texttt{Inexact NTCG Sub-Eval} have the fastest convergence rate among all of the variants.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.45\textwidth]{figures/mnist_vae.pdf}
\end{center}
\caption{
Comparison between all variants of \texttt{NTCG} on VAE.
}
\label{fig:vae}
\end{figure}
\section{Conclusion}
We have considered inexact variants of the Newton-CG algorithm in which \refboth{approximations of gradient and Hessian are used.}
\refone{\cref{alg:inexact_withlinesearch} employs approximations to the gradient and Hessian matrix at each step, and this inexact information is used to obtain an approximate Newton direction in Procedure \ref{alg:capped_cg}.
However, to obtain the step-size, \cref{alg:inexact_withlinesearch} requires exact function values.
This issue is partially addressed in \cref{alg:fixed_stepsize}, where fixed step-sizes replace the line search.
The drawbacks of the latter approach are that the fixed step-sizes are conservative and that they depend on some problem-dependent quantities that are generally unavailable, though known for some important classes of machine learning problems.
An ``ideal'' algorithm would allow for line searches using inexact function evaluations.
One might be able to derive such a version using some further assumptions on the inexact function and the inexact gradient, such as those considered in \cite{paquette2020stochastic}, and by introducing randomness into the algorithm and the use of concentration bounds in the analysis.
We intend to investigate these topics in future research.}
We are especially interested in problems in which the objective has a ``finite-sum'' form, so the approximated gradients and Hessians are obtained by sampling randomly from the sum.
For all of our proposed variants, we showed that the iteration complexities needed to achieve approximate second-order criticality are essentially the same as that of the exact variants.
In particular, a variant that uses a fixed step size, rather than a step chosen adaptively by a backtracking line search, attains the same order of complexity as the other variants, despite never needing to evaluate the function itself.
\refone{The dependence of our algorithms on Procedure \ref{alg:minimum_eigenvalue} implies the probabilistic nature of our results, which can be shown to hold with high probability over the run of the algorithm.}
We demonstrate the advantages and shortcomings of the approach, in comparison with other methods, using several test problems.
|
{
"timestamp": "2022-04-12T02:29:23",
"yymm": "2109",
"arxiv_id": "2109.14016",
"language": "en",
"url": "https://arxiv.org/abs/2109.14016"
}
|
\section{Introduction}\label{Intro}
A regular clique in a finite regular graph is a clique such that every vertex that does not belong to the clique is adjacent to the same positive number of vertices in the clique. A regular clique in a graph can be equivalently viewed as a clique which is a part of an
equitable 2-partition (see \cite{BH12, GR01}), or a clique which is a completely regular code of radius 1 (see \cite{N92} and
\cite[p. 345]{BCN89}) in this graph.
It is well known that a clique in a strongly regular graph is regular if and
only if it is a Delsarte clique (see \cite{BHK07,BCN89}).
In \cite{N81}, A. Neumaier posed the problem of whether there exists a non-complete, edge-regular,
non-strongly regular graph containing a regular clique. A \emph{Neumaier graph}
is a non-complete edge-regular graph containing a regular clique and a \emph{strictly
Neumaier graph} is a non-strongly regular Neumaier graph. (These definitions are analogous to the definitions of Deza graphs and strictly Deza graphs \cite{EFHHH99}.)
Two families of strictly Neumaier graphs with $1$-regular cliques were found in \cite{GK18} and \cite{GK19}. In \cite{EGP19}, two strictly Neumaier graphs with $2^i$-regular cliques for every positive integer $i$ were constructed. In \cite{ADDK21}, strictly Neumaier graphs with few eigenvalues were studied.
In this paper we present a generalisation of the constructions found in \cite{GK18} and \cite{GK19}; the general construction requires the existence of edge-regular graphs admitting a partition into perfect 1-codes of a certain size. In Section \ref{sec:Prelims} we give several definitions related to strictly Neumaier graphs, and the definition of WQH-switching. In Section \ref{sec:GeneralConstruction} we present the main results of this paper. A general construction of Neumaier graphs containing 1-regular cliques is given in Theorem \ref{GeneralConstruction}, followed by a criterion for the resulting graph to be a strictly Neumaier graph in Corollary \ref{cor:t2strict}. Further, we use a WQH-switching to show that our construction creates cospectral Neumaier graphs in Proposition \ref{Switching}.
In Section \ref{sec:Examples}, we present several examples of small strictly Neumaier graphs, and show that each of these graphs can be found using our general construction. For each of these graphs, we give a geometric or algebraic description of their structure. Finally, in Section \ref{sec:FurtherProblems} we present families of infinite edge-regular graphs, and ask if we can use these graphs to construct infinite families of strictly Neumaier graphs. In finding such families, we would be extending the constructions of certain graphs found in Section \ref{sec:Examples} to infinite families of graphs.
We would like to note that the main result of this paper, Theorem \ref{GeneralConstruction}, has already been referenced, presented, adjusted and used in \cite{ACDKZ21}. Furthermore, the strictly Neumaier graph we construct in Example \ref{graph65} was found independently by the authors of \cite{ACDKZ21}, who present an infinite family of strictly Neumaier graphs containing this graph as its smallest graph. In \cite{ACDKZ21}, there are also some interesting non-existence results for Neumaier graphs.
\section{Preliminaries}\label{sec:Prelims}
In this paper we only consider undirected graphs that contain no loops or multiple
edges. Let $\Gamma$ be such a graph. We denote by $V(\Gamma)$ the vertex set of $\Gamma$ and $E(\Gamma)$ the edge set of $\Gamma$. For a vertex $u \in V(\Gamma)$ we define the \emph{neighbourhood} of $u$ in $\Gamma$ to be the
set $\Gamma(u) = \{w \in V(\Gamma) : uw \in E(\Gamma)\}$.
Let $\Gamma$ be a graph and $v = |V(\Gamma)|$. The graph $\Gamma$ is called \emph{$k$-regular} if every vertex has a
neighbourhood of size $k$. The graph $\Gamma$ is \emph{edge-regular} if it is non-empty, $k$-regular, and every pair of
adjacent vertices have exactly $\lambda$ common neighbours. Then $\Gamma$ is said to be edge-regular
with \emph{parameters} $(v, k, \lambda)$. The graph $\Gamma$ is \emph{co-edge-regular} if it is non-complete, $k$-regular and every pair of distinct non-adjacent vertices have exactly $\mu$ common neighbours. Then $\Gamma$ is said to be co-edge-regular with \emph{parameters} $(v, k, \mu)$.
The graph $\Gamma$ is \emph{strongly regular} if it is both edge-regular and co-edge-regular. If $\Gamma$ is
edge-regular with parameters $(v, k, \lambda)$, and co-edge-regular with parameters $(v, k, \mu)$, the
graph is called \emph{strongly regular} with \emph{parameters} $(v, k, \lambda, \mu)$.
A \emph{clique} in a graph $\Gamma$ is a set of pairwise adjacent vertices of $\Gamma$, and a clique of size $s$
is called an \emph{$s$-clique}. A clique $S$ in a regular graph $\Gamma$ is \emph{regular} if every vertex that does not belong to $S$ is
adjacent to the same number $m > 0$ of vertices in $S$. We then say that $S$ has \emph{nexus}
$m$ and is \emph{$m$-regular}. A set of cliques of a graph $\Gamma$ that partition the vertex set of $\Gamma$ is called a \emph{spread}
in $\Gamma$.
A \emph{Neumaier graph} with \emph{parameters} $(v, k, \lambda; m, s)$ is a non-complete edge-regular graph with parameters $(v,k,\lambda)$ containing an $m$-regular $s$-clique. A \emph{strictly
Neumaier graph} is a Neumaier graph that is not strongly regular.
\subsection{Perfect codes}
Let $\Gamma$ be a simple undirected graph and $e \ge 1$ an integer. The \emph{ball} with radius $e$
and centre $u \in V(\Gamma)$ is the set of vertices of $\Gamma$ with distance at most $e$ to $u$ in $\Gamma$. A subset $C$ of $V(\Gamma)$ is called a \emph{perfect $e$-code} in $\Gamma$ if the balls with radius $e$ and centres in $C$ form a partition
of $V(\Gamma)$ (see \cite{B73, K86}). In particular, a perfect 1-code is a subset of vertices $C$ such that every vertex not in $C$ is adjacent to a unique element of $C$.
\subsection{Wang-Qiu-Hu switching}
The \emph{spectrum} of a graph $\Gamma$ is the multiset of eigenvalues of the adjacency matrix of $\Gamma$ (see \cite{BH12}). Two graphs are \emph{cospectral} if they have the same spectra.
The following switching, which produces cospectral graphs, was discovered in \cite{WQH19} and applied in \cite{IM19} to obtain new strongly regular graphs.
\begin{lemma}[WQH-switching]\label{WQHswitching}
Let $\Gamma$ be a graph whose vertex set is partitioned as $C_1 \cup C_2 \cup D$. Assume that $|C_1| =|C_2|$ and that the induced subgraphs on $C_1, C_2$, and $C_1 \cup C_2$ are regular, where the degrees in the induced subgraphs on $C_1$ and $C_2$ are the same. Suppose that all $x\in D$ satisfy one of the following:
\begin{enumerate}
\item $|\Gamma(x) \cap C_1| = |\Gamma(x) \cap C_2|$, or
\item $\Gamma(x) \cap (C_1 \cup C_2) \in \{C_1,C_2\}$.
\end{enumerate}
Construct a graph $\Gamma'$ from $\Gamma$ by modifying the edges between $C_1 \cup C_2$ and $D$ as follows:
\begin{equation*}
\Gamma'(x)\cap(C_1 \cup C_2) =
\begin{cases}
C_1, & \text{if } \Gamma(x)\cap(C_1 \cup C_2) = C_2;\\
C_2, & \text{if } \Gamma(x)\cap(C_1 \cup C_2) = C_1;\\
\Gamma(x)\cap(C_1 \cup C_2), & \text{otherwise},
\end{cases}
\end{equation*}
for all $x \in D$. Then $\Gamma'$ is cospectral with $\Gamma$.
\end{lemma}
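A minimal Python sketch of the switching operation on an adjacency matrix follows; it assumes that the given partition already satisfies the hypotheses of the lemma, and only performs the edge modification.
\begin{verbatim}
import numpy as np

def wqh_switch(adj, C1, C2):
    adj = adj.copy()
    C1, C2 = set(C1), set(C2)
    D = [x for x in range(adj.shape[0]) if x not in C1 | C2]
    for x in D:
        nbrs1 = {y for y in C1 if adj[x, y]}
        nbrs2 = {y for y in C2 if adj[x, y]}
        if nbrs1 == C1 and not nbrs2:    # N(x) meets C1 u C2 in C1
            for y in C1: adj[x, y] = adj[y, x] = 0
            for y in C2: adj[x, y] = adj[y, x] = 1
        elif nbrs2 == C2 and not nbrs1:  # N(x) meets C1 u C2 in C2
            for y in C2: adj[x, y] = adj[y, x] = 0
            for y in C1: adj[x, y] = adj[y, x] = 1
    return adj
\end{verbatim}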
\section{A general construction of strictly Neumaier graphs with 1-regular cliques}\label{sec:GeneralConstruction}
In this section we present a construction of Neumaier graphs with 1-regular cliques, which generalises the constructions found in \cite{GK18} and \cite{GK19}. We note that our construction was first introduced in the PhD thesis of Rhys. J. Evans, the first author of this paper (see \cite[Theorem 5.1]{E20}).
Let $\Gamma^{(1)},\dots,\Gamma^{(t)}$ be edge-regular graphs with parameters $(v,k,\lambda)$, and such that each $\Gamma^{(i)}$ has a partition of its vertices into perfect $1$-codes of size $a$, where $a$ is a proper divisor of $\lambda+2$. Further, we define $t=(\lambda+2)/a$.
For any $\ell\in \{1, \ldots , t\}$, let $H_{1}^{(\ell)},\dots,H_{v/a}^{(\ell)}$ denote the perfect $1$-codes that partition the vertex set of $\Gamma^{(\ell)}$.
If $t\geq 2$, we also take a $(t-1)$-tuple of permutations from
$\text{Sym}(\{1, \ldots , v/a \})$, denoted by $\Pi =(\pi_{2},\dots,\pi_{t})$.
Using these graphs and the permutation $\Pi$, we define the graph $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ as follows.
\begin{enumerate}
\item Take the disjoint union of the graphs $\Gamma^{(1)},\ldots,\Gamma^{(t)}$.
\item For any $i \in \{1,\ldots,v/a\}$, add an edge between any two distinct vertices from $H^{(1)}_i \cup H^{(2)}_{\pi_2(i)} \cup \ldots \cup H^{(t)}_{\pi_t(i)}$ (which forms a $1$-regular clique of size $at$).
\end{enumerate}
Note that for $t=1$, our definition of $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})=F_\Pi(\Gamma^{(1)})$ does not use $\Pi$, so we do not need to choose $\Pi$ to construct our graph.
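A minimal Python sketch of this construction follows (using \texttt{networkx}; the function name is our own, and the graphs are assumed to have vertex labels $0,\dots,v-1$). As a check, it rebuilds the icosahedral example of Section \ref{ex:icos} and verifies that the result is $8$-regular on $24$ vertices.
\begin{verbatim}
import itertools
import networkx as nx

# F_Pi: disjoint copies of the given graphs, with matched perfect
# 1-codes (via perms, perms[0] being the identity) turned into cliques.
def f_pi(graphs, codes, perms):
    G = nx.disjoint_union_all(graphs)   # copy l gets offset l * v
    v = graphs[0].number_of_nodes()
    for i in range(len(codes[0])):
        clique = [l * v + u
                  for l in range(len(graphs))
                  for u in codes[l][perms[l][i]]]
        G.add_edges_from(itertools.combinations(clique, 2))
    return G

# Demo: two icosahedra, with the six antipodal pairs as 1-codes.
ico = nx.icosahedral_graph()
dist = dict(nx.all_pairs_shortest_path_length(ico))
pairs, seen = [], set()
for u in ico.nodes:
    if u not in seen:
        w = next(x for x in ico.nodes if dist[u][x] == 3)  # antipode
        pairs.append([u, w])
        seen.update({u, w})
identity = list(range(len(pairs)))
G = f_pi([ico, ico], [pairs, pairs], [identity, identity])
assert G.number_of_nodes() == 24
assert all(d == 8 for _, d in G.degree())
\end{verbatim}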
Now we show that the graph $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ is a Neumaier graph, and express the parameters of this graph in terms of the parameters of the original graphs, $t$ and $a$.
\begin{theorem}\label{GeneralConstruction}
Let $\Gamma^{(1)},\dots,\Gamma^{(t)}$ be edge-regular graphs with parameters $(v,k,\lambda)$ and let $\Pi =(\pi_{2},\dots,\pi_{t})$ be a $(t-1)$-tuple of permutations from
$\text{Sym}(\{1, \ldots , v/a \})$ when $t\geq 2$.
Further, suppose that each graph $\Gamma^{(\ell)}$ has a partition of its vertices into perfect $1$-codes $H_{1}^{(\ell)},\dots,H_{v/a}^{(\ell)}$, each of size $a$, where $a$ is a proper divisor of $\lambda+2$ and $t=(\lambda+2)/a$.
Then
\begin{enumerate}
\item $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$ has (a spread of) $1$-regular cliques, each of size $\lambda+2$;
\item $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$ is an edge-regular graph with parameters \\$(vt,k+\lambda+1,\lambda).$
\end{enumerate}
\end{theorem}
\begin{proof}
1. For $i\in \{1, \ldots , v/a\}$, consider the union of the sets $H_{i}^{(1)},H_{\pi_{2}(i)}^{(2)},\dots,H_{\pi_{t}(i)}^{(t)}$, where we write $\pi_1$ for the identity permutation. As $H_{\pi_j(i)}^{(j)}$ is a perfect $1$-code in $\Gamma^{(j)}$, every vertex in $V(\Gamma^{(j)})\setminus H_{\pi_j(i)}^{(j)}$ is adjacent to a unique element of $H_{\pi_j(i)}^{(j)}$. Therefore, each vertex of $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$ outside $H_{i}^{(1)}\cup H_{\pi_{2}(i)}^{(2)}\cup \dots\cup H_{\pi_{t}(i)}^{(t)}$ is adjacent to a unique vertex of this union. After adding all possible edges in $H_{i}^{(1)}\cup H_{\pi_{2}(i)}^{(2)}\cup\dots\cup H_{\pi_{t}(i)}^{(t)}$, we see that this set is a $1$-regular clique.
2. The graph $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$ has $vt$ vertices by definition.
Let $w$ be a vertex in $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$, where $w\in H_{i}^{(j)}$ for some $i,j$. Note that $w$ is adjacent to $k$ vertices of $\Gamma^{(j)}$ before adding the edges of the construction. Then $w$ gains $a-1$ new neighbours from $H_{i}^{(j)}$, and $a$ new neighbours from $H_{\pi_{m}(\pi^{-1}_{j}(i))}^{(m)}$, for each $m\not=j$. Therefore, the degree of $w$ is $k+a-1+a(t-1)=k+\lambda+1$.
Now let $u,w$ be adjacent vertices in $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$. First we suppose $u,w$ lie in the same clique $H_{i}^{(1)}\cup H_{\pi_{2}(i)}^{(2)}\cup\dots\cup H_{\pi_{t}(i)}^{(t)}$, for some $i$. As this clique is $1$-regular, $u,w$ must have $at-2=\lambda$ common neighbours. Otherwise, by definition of $F_{\Pi}(\Gamma^{(1)},\dots,\Gamma^{(t)})$, there must be integers $m,n$ and $j$ such that $m\not=n$ and $u\in H_{m}^{(j)},w\in H_{n}^{(j)}$. As $H_{m}^{(j)}$ and $H_{n}^{(j)}$ are disjoint perfect $1$-codes in $\Gamma^{(j)}$, and $w$ is already adjacent to $u\in H_{m}^{(j)}$ (respectively, $u$ to $w\in H_{n}^{(j)}$), the vertex $w$ has no other neighbour in $H_{m}^{(j)}$ (respectively, $u$ has no other neighbour in $H_{n}^{(j)}$); moreover, the cliques of the construction through $u$ and through $w$ are disjoint. Hence no new common neighbours of $u,w$ arise from adding the edges in the construction of the graph. Therefore, all adjacent vertices have $\lambda$ common neighbours.
\end{proof}
\begin{remark}
We note that a converse to this theorem is also true. Let $\Gamma$ be a Neumaier graph with parameters $(v,k,\lambda;1,s)$ that has a spread of $1$-regular cliques. The graph $\Gamma^{\circ}$ created by removing the edges of the cliques of this spread from $\Gamma$ is also edge-regular by Soicher \cite[Theorem 6.1]{S15}. It also follows that the connected components of $\Gamma^{\circ}$ can be partitioned into perfect $1$-codes, and that the parameters satisfy the same restrictions as in the statement of Theorem \ref{GeneralConstruction}.
\end{remark}
In Section \ref{sec:Examples}, we show that the case $t=1$ can occur, and that the construction can then result in a strictly Neumaier graph; however, this is not guaranteed in all cases when $t=1$. The following corollary shows that for $t\geq 2$, our construction always results in a strictly Neumaier graph.
\begin{corollary}\label{cor:t2strict}
Let $\Gamma^{(1)},\dots,\Gamma^{(t)}$ be non-complete edge-regular graphs with parameters $(v,k,\lambda)$ and let $\Pi =(\pi_{2},\dots,\pi_{t})$ be a $(t-1)$-tuple of permutations from
$\text{Sym}(\{1, \ldots , v/a \})$.
Further, suppose that each graph $\Gamma^{(\ell)}$ has a partition of its vertices into perfect $1$-codes $H_{1}^{(\ell)},\dots,H_{v/a}^{(\ell)}$, each of size $a$, where $a$ is a proper divisor of $\lambda+2$ and $t=(\lambda+2)/a$.
If $t\geq 2$, then $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ is a strictly Neumaier graph.
\end{corollary}
\begin{proof}
By Theorem \ref{GeneralConstruction}, we know that $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ is a Neumaier graph. Therefore, we need to prove that $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ is not a co-edge-regular graph.
Consider the subgraphs $\Gamma^{(1)}$ and $\Gamma^{(2)}$ of $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$. Let $x \in H^{(1)}_{i_1}$ and $y \in H^{(2)}_{i_2}$, where $i_2 \ne \pi_2(i_1)$. Then $x$ and $y$ have exactly two common neighbours in $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$: one common neighbour in $H^{(2)}_{\pi_2(i_1)}$ and one common neighbour in $H^{(1)}_{i_3}$, where $\pi_2(i_3) = i_2$.
Now consider the subgraph $\Gamma^{(1)}$ of $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$. Let $u,z\in V(\Gamma^{(1)})$ be distinct vertices at distance 2 in $\Gamma^{(1)}$ (such a pair exists as $\Gamma^{(1)}$ is not complete). Note that this means $u$ and $z$ belong to different perfect 1-codes that partition $\Gamma^{(1)}$. Let $m,n$ be the distinct integers such that $u \in H^{(1)}_{m},z \in H^{(1)}_{n}$. Since $u,z$ are at distance 2 in $\Gamma^{(1)}$, they have at least one common neighbour in $\Gamma^{(1)} \setminus (H^{(1)}_{m} \cup H^{(1)}_{n})$. Moreover, in $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$, the vertices $u$ and $z$ have one common neighbour in $H^{(1)}_{m}$ and one common neighbour in $H^{(1)}_{n}$. Thus, $u$ and $z$ have at least three common neighbours in $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$.
We have shown that in $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$, there is a pair of non-adjacent vertices with exactly $2$ common neighbours and a pair of non-adjacent vertices with at least $3$ common neighbours. Therefore, $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ is not a co-edge-regular graph.
\end{proof}
In Theorem \ref{GeneralConstruction}, we can use an arbitrary $(t-1)$-tuple of permutations $\Pi$. In the following result, we show that a certain WQH-switching in our graphs is equivalent to a certain change of tuple $\Pi$. We note that for $t=1$, our choice of partition also satisfies the criteria for a switching in Lemma \ref{WQHswitching}, but the switching operation does not change the graph.
\begin{proposition}\label{Switching}
Let $\Gamma^{(1)},\dots,\Gamma^{(t)}$ be edge-regular graphs with parameters $(v,k,\lambda)$ and let $\Pi =(\pi_{2},\dots,\pi_{t})$ be a $(t-1)$-tuple of permutations from
$\text{Sym}(\{1, \ldots , v/a \})$ when $t\geq 2$.
Further, suppose that each graph $\Gamma^{(\ell)}$ has a partition of its vertices into perfect $1$-codes $H_{1}^{(\ell)},\dots,H_{v/a}^{(\ell)}$, each of size $a$, where $a$ is a proper divisor of $\lambda+2$ and $t=(\lambda+2)/a$.
Then for any non-empty subset $I\subseteq \{1,\dots,t\}$ containing 1 and distinct $i,j\in \{1,\dots,v/a\}$, the partition $$C_1:=\bigcup_{\ell\in I} H^{(\ell)}_{\pi_\ell(i)},$$
$$C_2:=\bigcup_{\ell\in I} H^{(\ell)}_{\pi_\ell(j)},$$ where $\pi_1$ denotes the identity permutation,
and
$$D:=V(F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})) \setminus (C_1\cup C_2 )$$
satisfies the conditions of Lemma \ref{WQHswitching}.
In fact, we have the equality $(F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)}))' = F_{\Pi'}(\Gamma^{(1)},\ldots,\Gamma^{(t)})$,
where
\begin{equation*}
(\Pi')_r =\begin{cases}
\pi_r & \text{if } r\in I \\
(i~j)\circ \pi_r & \text{if } r\not\in I
\end{cases}
\end{equation*}
\end{proposition}
\begin{proof}
Let $\Gamma=F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$. First note that $|C_1|=|C_2|=a|I|$, that the induced subgraphs on $C_1$ and $C_2$ are complete (and hence regular of the same degree), and that every vertex of $C_1$ has exactly one neighbour in $C_2$ and vice versa (since, for each $\ell\in I$, the sets $H^{(\ell)}_{\pi_\ell(i)}$ and $H^{(\ell)}_{\pi_\ell(j)}$ are distinct perfect $1$-codes in $\Gamma^{(\ell)}$), so the induced subgraph on $C_1\cup C_2$ is regular. We have the following cases for vertices $u\in D$:
\begin{enumerate}
\item $u\in H_{\pi_m(i)}^{(m)}$ for some $m\not\in I$. By the definition of our construction,\linebreak $\Gamma(u)\cap (C_1 \cup C_2) = C_1$.
\item $u\in H_{\pi_m(j)}^{(m)}$ for some $m\not\in I$. By the definition of our construction,\linebreak $\Gamma(u)\cap (C_1 \cup C_2) = C_2$.
\item $u\in H_{\pi_m(h)}^{(m)}$ for some $m,h$ where $m\not\in I,h\not\in \{i,j\}$. This means that $u$ is not from any $\Gamma^{(\ell)}$ for $\ell\in I$, or the cliques containing $C_1$ or $C_2$ that are created by our construction. Therefore, $|\Gamma(u)\cap C_1|=|\Gamma(u)\cap C_2|=0$.
\item $u\in H_{\pi_m(h)}^{(m)}$ for some $m\in I$, $h\not\in \{i,j\}$. As $H_{\pi_m(i)}^{(m)}$ and $H_{\pi_m(j)}^{(m)}$ are perfect 1-codes in $\Gamma^{(m)}$ and $u$ lies outside both, while the clique of the construction through $u$ is disjoint from $C_1\cup C_2$, we have $|\Gamma(u)\cap C_1|=|\Gamma(u)\cap C_2|=1$.
\end{enumerate}
We have shown that the partition satisfies the conditions of Lemma \ref{WQHswitching}. From the cases for $u\in D$ above and the definition of WQH-switching, it also follows that $(F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)}))' = F_{\Pi'}(\Gamma^{(1)},\ldots,\Gamma^{(t)})$
with the stated $\Pi'$.
\end{proof}
Given a graph property, it is common in spectral graph theory to ask whether a graph with this property is determined by its spectrum. Proposition \ref{Switching} gives a method for constructing cospectral Neumaier graphs. Therefore, we can investigate whether (strictly) Neumaier graphs with a 1-regular clique are determined by their spectra. The following shows that this is not true for strictly Neumaier graphs with 1-regular cliques.
From Proposition \ref{Switching}, we can see that for any two $(t-1)$-tuples $\Pi,\Pi'$ of elements of $\text{Sym}(\{1,\dots,v/a\})$, we can obtain $F_{\Pi'}(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ from $F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ by applying a sequence of appropriate WQH-switchings. As the switching operation does not change the spectrum of the resulting graph, we get the following result.
\begin{corollary}
For any $\Pi,\Pi'$, $(t-1)$-tuples of elements of $\text{Sym}(\{1,\dots,v/a\})$, the graphs
$F_\Pi(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ and $F_{\Pi'}(\Gamma^{(1)},\ldots,\Gamma^{(t)})$ are cospectral.
\end{corollary}
It would be interesting to investigate how many pairwise non-isomorphic graphs can be constructed using our construction. In doing so, we may find a prolific construction of cospectral strictly Neumaier graphs. Although this has not been investigated in detail, we have already observed several pairwise non-isomorphic graphs with relatively small order.
\begin{corollary}\label{Neumaier24AreCospectral}
There are four strictly Neumaier graphs with parameters $(24,8,2;1,4)$ which are cospectral.
\end{corollary}
\begin{proof}
This can be shown by applying iterations of WQH-switchings to a given graph from Example \ref{ex:icos}. It is also easily verified by direct calculation of the spectra of the four graphs from Example \ref{ex:icos}.
The adjacency matrices of these graphs can be found in \cite[Section 4.4.1]{E20}. The spectra of the four strictly Neumaier graphs are all equal to
$\left[(-1-\sqrt{5})^6, (-2)^5, (-1+\sqrt{5})^6, 2^5, 4^1, 8^1 \right]$.
\end{proof}
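For readers wishing to reproduce this check, the following minimal Python sketch compares the spectra of two graphs numerically; the concrete adjacency matrices are those of \cite[Section 4.4.1]{E20} and are assumed to be given as \texttt{numpy} arrays \texttt{A1}, \texttt{A2} (all function names are illustrative).
\begin{verbatim}
# Minimal sketch: numerical cospectrality check for two graphs,
# given their 0/1 adjacency matrices A1, A2 as numpy arrays
# (e.g., the matrices from [E20, Section 4.4.1]).
import numpy as np

def spectrum(A, decimals=8):
    # Sorted eigenvalues of a symmetric adjacency matrix.
    return tuple(np.round(np.sort(np.linalg.eigvalsh(A)), decimals))

def are_cospectral(A1, A2):
    return spectrum(A1) == spectrum(A2)
\end{verbatim}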
\section{Some examples of strictly Neumaier graphs given by the general construction}\label{sec:Examples}
In this section, we give several small strictly Neumaier graphs which can be constructed using Theorem \ref{GeneralConstruction}. Some of these graphs are new, and have not been found as part of an infinite family of strictly Neumaier graphs at the time of writing this paper. In these graphs, we find our underlying edge-regular graphs from various geometric and algebraic constructions.
\subsection{Construction given by a pair of icosahedra}\label{ex:icos}
The icosahedral graph is an edge-regular graph with parameters $(12,5,2)$ that admits a partition into six perfect 1-codes of size $a = 2$. Thus, we can use $t = (\lambda+2)/a= 2$ copies of the icosahedral graph in the general construction to produce four pairwise non-isomorphic strictly Neumaier graphs (depending on the choice of the permutation $\pi_2$) with parameters $(24,8,2;1,4)$. Note that these four graphs were found as Cayley-Deza graphs in \cite{GS14} and listed here: \url{http://alg.imm.uran.ru/dezagraphs/deza_cayleytab.html}.
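The partition into perfect 1-codes used here can be verified computationally; the following minimal Python sketch (assuming the \texttt{networkx} package) checks that the six antipodal pairs of the icosahedral graph are indeed perfect 1-codes.
\begin{verbatim}
# Sketch: the six antipodal pairs of the icosahedral graph form
# a partition into perfect 1-codes (assumes networkx is available).
import itertools
import networkx as nx

G = nx.icosahedral_graph()        # edge-regular with parameters (12,5,2)
dist = dict(nx.all_pairs_shortest_path_length(G))
pairs = [{u, v} for u, v in itertools.combinations(G, 2)
         if dist[u][v] == 3]      # antipodal pairs

def is_perfect_1_code(G, C):
    # Closed neighbourhoods of the code vertices partition V(G).
    balls = [set(G[c]) | {c} for c in C]
    return (set().union(*balls) == set(G)
            and sum(len(b) for b in balls) == G.number_of_nodes())

assert len(pairs) == 6 and all(is_perfect_1_code(G, C) for C in pairs)
\end{verbatim}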
\subsection{Construction given by a pair of dodecahedra}
Take a pair of dodecahedral graphs $\Gamma_1$ and $\Gamma_2$ and consider the natural matching between their vertices. Note that the dodecahedral graph has diameter 5 and admits a partition into ten perfect 2-codes of size 2. For every vertex $x_1 \in \Gamma_1$ and its matched vertex $x_2 \in \Gamma_2$, connect $x_1$ with the six vertices of $\Gamma_2$ that are at distance 2 from $x_2$ and, symmetrically, connect $x_2$ with the six vertices of $\Gamma_1$ that are at distance 2 from $x_1$. We obtain an edge-regular graph with parameters $(40,9,2)$ that admits a partition into ten perfect 1-codes of size $a = 4$, where every perfect 1-code is a union of matched perfect 2-codes in the original dodecahedral graphs. We then apply the general construction to the edge-regular graph with parameters $(40,9,2)$, which gives a strictly Neumaier graph with parameters $(40,12,2;1,4)$.
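The stated parameters can be verified numerically. The following sketch (assuming \texttt{networkx} and \texttt{numpy}; the matrix $B$ below is the distance-2 adjacency matrix of the dodecahedral graph) builds the graph just described and checks that it is edge-regular with parameters $(40,9,2)$.
\begin{verbatim}
# Sketch: the (40,9,2) edge-regular graph from two matched
# dodecahedral graphs (assumes networkx and numpy).
import numpy as np
import networkx as nx

D = nx.dodecahedral_graph()                    # 3-regular on 20 vertices
nodes = list(D)
A = nx.to_numpy_array(D, nodelist=nodes)
sp = dict(nx.all_pairs_shortest_path_length(D))
B = np.array([[1.0 if sp[u][v] == 2 else 0.0 for v in nodes]
              for u in nodes])                 # distance-2 adjacency
A2 = np.block([[A, B], [B, A]])                # two copies + cross edges

degrees = set(A2.sum(axis=1))
N = A2 @ A2                                    # common-neighbour counts
lambdas = {int(N[u, v]) for u in range(40) for v in range(40)
           if A2[u, v] == 1}
assert degrees == {9.0} and lambdas == {2}     # edge-regular (40,9,2)
\end{verbatim}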
\subsection{A graph on 65 vertices}\label{graph65}
Consider the Cayley graph $Cay(\mathbb{Z}^+_{65},\{2^i \bmod 65 : i \in \{0,\ldots, 11\}\})$, which is edge-regular with parameters $(65,12,3)$ and admits a partition into perfect 1-codes of size $a = 5$ given by the cosets of the subgroup $\{0,13,26,39,52\}$. We then apply the general construction, which gives a strictly Neumaier graph with parameters $(65,16,3;1,5)$.
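A direct check of these parameters is straightforward; the following Python sketch verifies the edge-regularity parameters and the perfect 1-code property of the subgroup $\{0,13,26,39,52\}$.
\begin{verbatim}
# Sketch: parameters of Cay(Z_65, {2^i mod 65 : 0 <= i <= 11}) and
# the perfect 1-code given by the subgroup {0,13,26,39,52}.
n = 65
S = {pow(2, i, n) for i in range(12)}
assert S == {(-s) % n for s in S}            # connection set is symmetric

k = len(S)                                   # degree
lambdas = {len(S & {(s + t) % n for t in S}) for s in S}
assert (k, lambdas) == (12, {3})             # edge-regular (65,12,3)

H = {13 * j for j in range(5)}               # subgroup of order 5
balls = [{h} | {(h + s) % n for s in S} for h in H]
assert set().union(*balls) == set(range(n))
assert sum(len(b) for b in balls) == n       # closed balls are disjoint
\end{verbatim}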
\subsection{Construction given by the 6-regular triangular grid}
The Eisenstein integers are the complex numbers $\mathbb{Z}[\omega] = \{b+c\omega : b,c \in \mathbb{Z}\}$, where $\omega = \frac{-1+i\sqrt{3}}{2}$. They form a ring with respect to the usual addition and multiplication.
The norm mapping $N:\mathbb{Z}[\omega]\to \mathbb{N} \cup \{0\}$ is defined by $N(b+c\omega) = b^2+c^2-bc$; it is multiplicative. It is well known that $\mathbb{Z}[\omega]$ is a Euclidean domain (in particular, a principal ideal domain).
The units of $\mathbb{Z}[\omega]$ are $\{\pm 1, \pm\omega, \pm\omega^2\}$.
The natural geometric interpretation of the Eisenstein integers is the 6-regular triangular grid in the complex plane. When no confusion arises, we use the same notation $\mathbb{Z}[\omega]$ for the triangular grid.
The grid $\mathbb{Z}[\omega]$ has exactly six elements of norm 7; these are $\{\pm(1+3\omega),\pm(3+2\omega),\pm(2-\omega)\}$. Consider the ideal $I$ generated by an element of norm 7 (say, by the element $2-\omega$). The elements of $I$ form a perfect 1-code in the triangular grid (the balls of radius 1 centred at dark blue vertices are shown in Figure \ref{Ideal}). Note that $I$ is an additive subgroup of index 7 in $\mathbb{Z}[\omega]$; we denote it by $I^+$.
The seven cosets in $\mathbb{Z}[\omega]/I^+$ give a partition of the triangular grid into seven perfect 1-codes, formed by the vertices of the same color (see Figure \ref{Partition}). For example, the dark blue vertices are formed by the elements of the ideal $I$.
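For illustration, the following sketch implements Eisenstein arithmetic on pairs $(b,c)\leftrightarrow b+c\omega$, checks the norm-7 elements, and verifies the perfect 1-code property of $I$; the coset colouring $\omega\mapsto 2$ is one convenient choice of the isomorphism $\mathbb{Z}[\omega]/I^+\cong\mathbb{Z}_7$.
\begin{verbatim}
# Sketch: Eisenstein integers as pairs (b, c) <-> b + c*omega,
# with omega^2 = -1 - omega.
def norm(b, c):
    return b * b + c * c - b * c

def mul(z, w):
    b1, c1 = z; b2, c2 = w
    return (b1 * b2 - c1 * c2, b1 * c2 + c1 * b2 - c1 * c2)

def in_ideal(z):
    # (2 - omega)(3 + omega) = 7, so (2 - omega) | z iff
    # 7 | z * (3 + omega) componentwise.
    b, c = mul(z, (3, 1))
    return b % 7 == 0 and c % 7 == 0

def color(b, c):
    # Coset of I^+ via omega |-> 2 in Z_7 (one convenient choice).
    return (b + 2 * c) % 7

assert all(norm(*z) == 7 for z in [(1, 3), (3, 2), (2, -1)])
assert in_ideal((2, -1)) and not in_ideal((1, 0))
# The closed unit ball around 0 meets all seven cosets exactly once,
# i.e., I is a perfect 1-code:
units = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
assert sorted(color(b, c) for b, c in units) == [1, 2, 3, 4, 5, 6]
\end{verbatim}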
Consider a quadruple (block) of the balls of radius 1 centred at dark blue vertices (see Figure \ref{Block}); these balls have 28 vertices. Take the following two additive subgroups of $\mathbb{Z}[\omega]$:
$$
T_1:=\{2(-2+\omega)x+14y~|~x,y\in \mathbb{Z}\},
$$
$$
T_2:=\{(5+\omega)x+28y~|~x,y\in \mathbb{Z}\}.
$$
Since $-2+\omega$, $7$ and $5+\omega$ are divisible by $2-\omega$, a generator of $I$, we have that $T_1$ and $T_2$ are subgroups in $I^+$ (see Figures \ref{T1} and \ref{T2}). Note that the additive shifts of the block of four balls by the elements of $T_1$ and $T_2$ give two tessellations of $\mathbb{Z}[\omega]$.
Consider the quotient groups $$G_1 := \mathbb{Z}[\omega]/T_1$$ and $$G_2:=\mathbb{Z}[\omega]/T_2,$$
where $G_1 \cong \mathbb{Z}_2 \oplus \mathbb{Z}_{14}$ and $G_2 \cong \mathbb{Z}_{28}$.
Define two Cayley graphs
$$\Delta_1:=Cay(G_1,\{\pm(1+T_1), \pm(\omega+T_1), \pm(\omega^2+T_1)\}),$$
$$\Delta_2:=Cay(G_2,\{\pm(1+T_2), \pm(\omega+T_2), \pm(\omega^2+T_2)\}).$$
Note that $\Delta_1$ and $\Delta_2$ can be interpreted as quotient graphs of the triangular grid by $T_1$ and $T_2$, respectively.
Each of the graphs $\Delta_1$ and $\Delta_2$ is edge-regular with parameters $(28,6,2)$ and admits a partition into perfect 1-codes of size $a = 4$; these partitions are given by the original partition of the triangular grid into perfect 1-codes (see Figures \ref{Delta1} and \ref{Delta2}). We then apply the general construction, which gives two strictly Neumaier graphs with parameters $(28,9,2;1,4)$. The graph obtained from $\Delta_1$ is isomorphic to the smallest graph found in \cite{GK18} (see Example 1) and the graph obtained from $\Delta_2$ is new.
\subsection{Graphs on 78 vertices from the tetrahedral-octahedral honeycomb}
Similarly to the case of the triangular grid, there exists a partition of the tetrahedral-octahedral honeycomb into thirteen perfect 1-codes. Taking different quotients preserving the partition, we get at least eight edge-regular graphs with parameters $(78,12,4)$. We then apply the general construction to these graphs, which gives at least eight strictly Neumaier graphs with parameters $(78,17,4;1,6)$.
\section{Further problems: Infinite edge-regular lattices}\label{sec:FurtherProblems}
In Section \ref{sec:Examples} we have seen several strictly Neumaier graphs constructed using infinite lattices which are edge-regular. In this section, we give certain families of infinite edge-regular lattices, and find new strictly Neumaier graphs in certain cases. We also present open problems relevant to the construction of new infinite families of strictly Neumaier graphs from these lattices.
In the following, we consider countably infinite graphs with vertices consisting of elements of the vector space $\mathbb{R}^n$, for some integer $n\geq 3$. The elements of $\mathbb{R}^n$ are called \emph{($n$-dimensional) vectors}, and we identify the elements with their coordinates with respect to the standard basis of $\mathbb{R}^n$.
Let $x\in \mathbb{R}^n$. For a set $A\subseteq \mathbb{R}$, the vector $x$ is an \emph{$A$-vector} if all of its entries lie in $A$. The \emph{weight} of $x$ is the number of its non-zero entries.
Let $n \ge 3$ be a positive integer and let $m$ be an even positive integer.
Let $S_{n,m}^{(1)}$ denote the set of all $n$-dimensional $\{1,-1,0\}$-vectors of weight $m$ whose sum of coordinates is zero.
Let $S_{n,m}^{(2)}$ denote the set of all $n$-dimensional $\{1,-1,0\}$-vectors of weight $m$.
Let $G_{n,m}^{(1)}$ and $G_{n,m}^{(2)}$ be the groups generated by $S_{n,m}^{(1)}$ and $S_{n,m}^{(2)}$ respectively.
\begin{proposition}
For any positive even integer $m$ and any integer $n$ such that $n \ge m+1$, the following statements hold.
\begin{enumerate}
\item $G_{n,m}^{(1)}$ is equal to $G_{n,2}^{(1)}$, which consists of all $n$-dimensional vectors with integer coordinates such that the sum of coordinates is equal to $0$.
\item $G_{n,m}^{(2)}$ is equal to $G_{n,2}^{(2)}$, which consists of all $n$-dimensional vectors with integer coordinates such that the sum of coordinates is even.
\end{enumerate}
\end{proposition}
\begin{proof}
1. Since $n \ge m+1$, every $\{1,-1,0\}$-vector of weight 2 with zero sum of coordinates can be expressed in terms of $\{1,-1,0\}$-vectors of weight $m$ with zero sum of coordinates. It is easy to see that every $n$-dimensional vector with integer coordinates such that the sum of coordinates is equal to $0$ can be expressed in terms of
$\{1,-1,0\}$-vectors of weight 2 with zero sum of coordinates.
2. Since $n \ge m+1$, every $\{1,-1,0\}$-vector of weight 2 can be expressed in terms of $\{1,-1,0\}$-vectors of weight $m$. It is easy to see that every $n$-dimensional vector with integer coordinates such that the sum of coordinates is even can be expressed in terms of $\{1,-1,0\}$-vectors of weight 2.
\end{proof}
From now on, we let $$G_{n}^{(1)}:=G_{n,2}^{(1)},$$
$$G_{n}^{(2)}:=G_{n,2}^{(2)},$$
and define graphs $$\Gamma_{n,m}^{(1)}:=Cay(G_{n}^{(1)},S_{n,m}^{(1)})$$ and
$$\Gamma_{n,m}^{(2)}:=Cay(G_{n}^{(2)},S_{n,m}^{(2)}).$$
A graph $\Gamma$
with infinitely many vertices is \emph{edge-regular} with \emph{parameters} $(k,\lambda)$ if it is $k$-regular and each pair of adjacent vertices has exactly $\lambda$ common neighbours.
In the following, we show that $\Gamma_{n,m}^{(1)}$ and $\Gamma_{n,m}^{(2)}$ are infinite edge-regular graphs, and give the parameters of these graphs in terms of binomial coefficients. In our notation, we have $C_a^b = {a\choose b}$.
\begin{proposition} For any positive even integer $m$ and any integer $n$ such that $n \ge m+1$, the following statements hold.
\begin{enumerate}
\item The graph $\Gamma_{n,m}^{(1)}$ is an induced subgraph in $\Gamma_{n,m}^{(2)}$.
\item $\Gamma_{n,m}^{(1)}$ is an infinite edge-regular graph with parameters $(k_1,\lambda_1)$, such that $$k_1 = C_n^mC_m^{\frac{m}{2}},$$ $$\lambda_1 = \sum\limits_{i=0}^{\frac{m}{2}}C_{\frac{m}{2}}^i C_{\frac{m}{2}}^{\frac{m}{2}-i}C_{n-m}^{\frac{m}{2}-i}C_{n-\frac{3m}{2}+i}^{i}.$$
\item $\Gamma_{n,m}^{(2)}$ is an infinite edge-regular graph with parameters $(k_2,\lambda_2)$, such that
$$k_2 = 2^mC_n^m,$$
$$\lambda_2 =
C_m^{\frac{m}{2}}C_{n-m}^{\frac{m}{2}}2^{\frac{m}{2}}.$$
\end{enumerate}
\end{proposition}
\begin{proof}
The elements of $G_{n}^{(1)}$ form a subset in the set of elements of $G_{n}^{(2)}$, which means that $\Gamma_{n,m}^{(1)}$ is an induced subgraph in $\Gamma_{n,m}^{(2)}$. The formulas for parameters $k_1, \lambda_1, k_2, \lambda_2$ can be obtained by applying standard counting arguments.
\end{proof}
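These counting arguments can be cross-checked by brute force for small $n$ and $m$; the following sketch does so for $n=6$, $m=4$ (the smallest case with $n\ge m+1$ and $n\ge 3m/2$).
\begin{verbatim}
# Sketch: brute-force check of (k_1, lambda_1) and (k_2, lambda_2)
# for small n, m (here n = 6, m = 4).
import itertools
from math import comb

def gens(n, m, zero_sum):
    out = set()
    for pos in itertools.combinations(range(n), m):
        for signs in itertools.product((1, -1), repeat=m):
            if zero_sum and sum(signs) != 0:
                continue
            v = [0] * n
            for p, s in zip(pos, signs):
                v[p] = s
            out.add(tuple(v))
    return out

def params(n, m, zero_sum):
    S = gens(n, m, zero_sum)
    # Common neighbours of 0 and v: u in S with u - v in S.
    lam = {sum(1 for u in S
               if tuple(a - b for a, b in zip(u, v)) in S) for v in S}
    assert len(lam) == 1           # same count for every edge at 0
    return len(S), lam.pop()

n, m = 6, 4
k1 = comb(n, m) * comb(m, m // 2)
l1 = sum(comb(m // 2, i) * comb(m // 2, m // 2 - i)
         * comb(n - m, m // 2 - i) * comb(n - 3 * m // 2 + i, i)
         for i in range(m // 2 + 1))
assert params(n, m, True) == (k1, l1)

k2 = 2 ** m * comb(n, m)
l2 = comb(m, m // 2) * comb(n - m, m // 2) * 2 ** (m // 2)
assert params(n, m, False) == (k2, l2)
\end{verbatim}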
\begin{remark}
Note that if $n < \frac{3m}{2}$, then $\lambda_1 = 0$ and $\lambda_2 = 0$. Otherwise, $\lambda_1 > 0$ and $\lambda_2 > 0$.
\end{remark}
\begin{remark}
The generating sets $S_{n,2}^{(1)}$ and $S_{n,2}^{(2)}$ are known as root systems $A_{n-1}$ and $D_n$, respectively (see \cite[Chapter 8]{BH12}).
\end{remark}
The $A_2$ root lattice is isomorphic to the 6-regular triangular grid. The root lattices $A_3$ and $D_3$ are both isomorphic to the tetrahedral-octahedral honeycomb.
In Tables \ref{tab:gamma1} and \ref{tab:gamma2}, we present the number of cases for which we find strictly Neumaier graphs using the graphs $\Gamma_{n,m}^{(1)}$ and $\Gamma_{n,m}^{(2)}$, respectively. The first column of the tables gives the corresponding value of $n$. The second column gives the Neumaier graph parameters of the graphs we find through the construction. The last column gives the number of pairwise non-isomorphic strictly Neumaier graphs we find from the construction.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$n$ & parameters of SNG & $\#$ \\
\hline
3 & $(28,9,2;1,4)$ & 2 \\
\hline
4 & $(78,17,4;1,6)$ & $\ge 8$ \\
\hline
5 & $(168,27,6;1,8)$ & $\ge 12$ \\
\hline
6 & $(310,39,8;1,10)$ & $\ge 1$ \\
\hline
\end{tabular}
\caption{Number of strictly Neumaier graphs from quotients of $\Gamma_{n,2}^{(1)}$.}\label{tab:gamma1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$n$ & parameters of SNG & $\#$ \\
\hline
3 & $(78,17,4;1,6)$ & $\ge 8$ \\
\hline
4 & $(250,33,8;1,10)$ & $\ge 16$ \\
\hline
\end{tabular}
\caption{Number of strictly Neumaier graphs from quotients of $\Gamma_{n,2}^{(2)}$.}\label{tab:gamma2}
\end{table}
However, we have not been able to find more examples of perfect codes and quotients of $\Gamma_{n,2}^{(1)}$ and $\Gamma_{n,2}^{(2)}$ that lead to strictly Neumaier graphs. Therefore, we ask the following.
\begin{problem}
What strictly Neumaier graphs can be obtained as quotients of infinite edge-regular graphs $\Gamma_{n,m}^{(1)}$ and $\Gamma_{n,m}^{(2)}$?
\end{problem}
We can also use two infinite edge-regular graphs to obtain a new infinite edge-regular graph by taking the Cartesian product of the graphs (for the definition of the Cartesian product of graphs, see \cite{BH12}, for example).
\begin{proposition}\label{CartProd}
Let $\Gamma_1$ and $\Gamma_2$ be two infinite edge-regular graphs with parameters $(k_1,\lambda)$ and $(k_2,\lambda)$, respectively. Then the Cartesian product of $\Gamma_1$ and $\Gamma_2$ is an edge-regular graph with parameters $(k_1+k_2, \lambda)$.
\end{proposition}
\begin{proof}
It follows from the definition of the Cartesian product: a common neighbour of two adjacent vertices that differ in the $\Gamma_1$-coordinate must agree with both of them in the $\Gamma_2$-coordinate, so the common neighbours are exactly the $\lambda$ common neighbours in the corresponding copy of $\Gamma_1$, and symmetrically for edges in the $\Gamma_2$-direction.
\end{proof}
Consider the Cartesian product of two 6-regular triangular grids; the resulting infinite graph is edge-regular with parameters $(12,2)$. This graph has a perfect 1-code, and there exists an edge-regular quotient graph with parameters $(52,12,2)$. We then apply the general construction to this graph, which gives a strictly Neumaier graph having parameters $(52,15,2;1,4)$ and isomorphic to the graph constructed in \cite[Theorem 3.6]{GK18}. Motivated by this example, we ask the following.
\begin{problem}
What strictly Neumaier graphs can be obtained as quotients of Cartesian products of infinite edge-regular graphs?
\end{problem}
\section*{Acknowledgements} \label{Ack}
The work of Rhys~J.~Evans, Elena~V.~Konstantinova, and Alexander~D.~Mednykh was supported by the Mathematical Center in Akademgorodok, under agreement No. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation. Sergey Goryainov also thanks the Mathematical Center in Akademgorodok for organising his visit to Akademgorodok in July 2021. Finally, Sergey Goryainov and Elena V. Konstantinova thank Denis Krotov for useful discussions.
\section{Introduction}
\IEEEPARstart{I}{n} robot motion planning, the reference path generation is often performed independently of the feedback control design for path following.
While such a two-stage procedure leads to a suboptimal policy in general, it simplifies the problem to be solved and the resulting performance loss is acceptable in many applications.
The two-stage procedure also benefits from powerful geometry-based trajectory generation algorithms (e.g., A* \cite{hart1968formal}, PRM \cite{kavraki1996probabilistic}, RRT \cite{lavalle2001rapidly}), as other factors such as dynamic constraints, stochasticity and uncertainty can often be resolved in the control design stage.
Motion planning is more challenging if the robot's configuration is only partially observable through noisy measurements.
A common approach to such problems is via the \emph{belief state formalism} \cite{platt2010belief,van2011lqg}. In this approach, the Bayesian estimate (i.e., a probability distribution) of the robot’s state is considered as a new state, called the \emph{belief state}, whereby the original stochastic optimal control problem with a partially observable state is converted into an equivalent stochastic optimal control problem with a fully observable state. The belief state formalism makes the aforementioned two-stage motion planning strategy applicable in a similar manner, except that both path generation and tracking are performed in the space of belief states (\emph{belief space}). This approach is powerful especially if the belief space is representable by a small number of parameters (e.g., Gaussian beliefs, which can be parametrized by mean and covariance only).
In this paper, we consider the problem of generating a reference path in the Gaussian belief space such that the path length with respect to a particular quasi-pseudometric on the belief manifold is minimized. The quasi-pseudometric we choose is interpreted as the weighted sum of the Euclidean travel distance and the information gain required to steer the belief state. Solving the shortest path problem therefore means finding a joint sensing and control strategy for a robot to move from a given initial Gaussian belief to a target Gaussian belief while minimizing the weighted sum of the travel distance and the cost of sensing.
\begin{figure}[t]
\centering
\includegraphics[trim = 0cm 0cm 0cm 0cm, clip=true, width = 0.75\columnwidth]{fig/Scenario_sim_nolabel.eps}
\caption{Simulation results of the proposed algorithm. Path A prioritizes minimizing the Euclidean travel distance, while Path B prioritizes reducing the information gain required to follow the path.
}
\label{fig:intro_example}
\end{figure}
\subsection{Motivation}
The shortest path problem we formulate is motivated by the increasing need for simultaneous perception and action planning in modern information-rich autonomy.
Due to the wide availability of low-cost and high-performance sensing devices, obtaining a large amount of sensor data has become easier in many applications. Nevertheless, operating a sensor at its full capacity may not be the best strategy for resource-constrained robots, especially if it drains the robot's scarce power or computational resources with little benefit.
As sensor modalities increase, how to achieve a given task with minimal perceptual resources (e.g., with reduced sensing frequencies or sensor gains) becomes increasingly relevant. In navigation tasks, the required sensing effort critically depends on the geometry of the planned paths. For example, Path A in Fig.~\ref{fig:intro_example} offers a shorter travel distance; however, the sensing effort required to trace it is high, as the robot's locational uncertainty needs to be kept small. Depending on the cost of perception, taking a longer path (such as Path B) that is traceable with less sensing cost may be preferable. Thus, in this paper, we aim to develop a path planning methodology that allows ``minimum sensing'' navigation that can flexibly comply with the robot's perceptual resource constraints.
\subsection{Related Work}
This subsection provides a non-exhaustive list of related works categorized from the perspective of 1) belief space planning, 2) chance-constrained path planning, 3) information theory in path planning, and 4) controlled sensing.
\subsubsection{Belief space planning} Belief space path planning for uncertain systems \cite{lavalle2006planning} has been studied in various forms in the literature.
The work \cite{alterovitz2007stochastic} studied path planning for systems with uncertain dynamics within a fully observable and geometrically known environment.
The work \cite{agha2014firm} generalized the results of \cite{alterovitz2007stochastic} by incorporating the sensing uncertainty and presented a feedback-based information roadmap.
The belief-space probabilistic roadmap (BRM) is presented in \cite{prentice2009belief}, wherein a factored form of the covariance matrix is used, leading to efficient posterior belief predictions.
The work \cite{roy1999coastal}
presented the advantage of a path plan (the \emph{coastal navigation} strategy) that best assists the robot's perception during the navigation. The framework of safe path-planning \cite{lambert2003safe, pepy2006safe} is also
established for path planning in the belief space to provide a planned path with a safety guarantee.
The probability distribution of closed-loop trajectories under linear feedback policies has been characterized in \cite{bry2011rapidly, van2011lqg} which allows the evaluation of the probability of collision with obstacles. The work \cite{van2011lqg} uses the probability of collision to search for ``safe'' trajectories among the ones generated by RRT. Instead, \cite{bry2011rapidly} incrementally constructs a graph of ``safe'' trajectories with the aid of RRT*.
In \cite{van2012motion, van2017motion}, belief space path planning for robots with imperfect state information is studied.
In \cite{van2012motion}, a belief space variant of stochastic dynamic programming is introduced to find the optimal belief trajectory.
Alternatively, \cite{van2017motion} used the belief-state iterative LQG,
which is shown to have lower computational complexity and better numerical stability compared with \cite{van2012motion}. The authors of \cite{sun2016stochastic}
proposed the stochastic extended linear quadratic regulator for motion planning in Gaussian belief space, which simultaneously computes the optimal trajectory and the associated linear control policy for following that trajectory. Belief space path planning in environments with discontinuities in sensing domains is studied in \cite{patil2014gaussian}.
In addition to generating a nominal path, many of the aforementioned works (e.g., \cite{van2012motion,agha2014firm, van2017motion, sun2016stochastic}) provide local controllers to stabilize the system around the nominal path. These controllers eliminate the need for extensive replanning during the execution. A belief space path-following algorithm is also considered in \cite{platt2010belief},
where the nonlinear stochastic dynamics of belief state is linearized, and a local LQR controller is used. In \cite{platt2010belief}, a replanning strategy is also proposed to update the reference trajectory when divergence from the planned trajectory is too large to be handled by the LQR controller.
\subsubsection{Chance-constrained path planning} Belief space path planning is closely related to the large body of literature on chance-constrained (CC) path planning \cite{blackmore2006probabilistic, blackmore2011chance,vitus2011closed}, where the focus is on the development of path planning algorithms under probabilistic safety constraints (see e.g., \cite{ono2008iterative, jasour2015semidefinite}). Basic CC methods for linear-Gaussian systems have been extended to address non-linear and non-Gaussian problems \cite{blackmore2010probabilistic, wang2020non} and to handle the joint chance constraints \cite{ono2015chance}.
While CC formulations often suffer from poor scalability \cite{aoude2013probabilistically}, a method to overcome such difficulties is proposed in \cite{dai2019chance}. The computational complexity of CC algorithms is studied in \cite{da2019collision}. Alternatively, \cite{luders2010chance} proposed a sampling-based method called CC-RRT that allows efficient computation of feasible paths. The CC-RRT algorithm is generalized in \cite{du2011robot, kothari2013probabilistically} for CC path planning in dynamic environments.
The works \cite{luders2013robust, liu2014incremental} introduced several variants of CC-RRT* algorithm where the convergence to optimal trajectory is guaranteed.
\subsubsection{Information theory (IT) in path planning} Information-theoretic concepts have been utilized in path planning problems by several prior works, albeit differently from our approach in this paper. The work \cite{he2008planning} proposed a modification for BRM \cite{prentice2009belief} in which the states where informative measurements are available (i.e., where a large entropy reduction is expected) are sampled more frequently. Informative path planning is investigated in \cite{levine2013information}, where the sensing agents seek to both maximize the information gathered about the target position (or the environment), quantified by Fisher information matrix, and minimize the cost of traversing to its goal state.
In \cite{folsom2021scalable}, information-theoretic path planning is studied, where a Mars helicopter uses RRT*-IT to explore and obtain information about the surface of Mars, expressed in terms of the reduction in the standard deviation of the belief terrain-type distribution, in the shortest time.
IT is also used to reduce the computational complexity of path planning in \cite{larsson2020information}, where it is suggested to obtain abstractions of the search space with the aid of IT and to perform path planning in the abstracted representation of the search space.
\subsubsection{Controlled sensing} The works mentioned above assume either no or fixed sensor modalities. In modern autonomy where variable sensor modality is available, it is becoming increasingly meaningful to model the strategic sensing aspect explicitly in the problem formulation. The partially observable Markov decision process (POMDP) framework is widely used for controlled sensing design \cite{krishnamurthy2016partially}.
The work \cite{carlone2018attention} proposed a greedy algorithm for strategic sensing in vision-based navigation, where during the path execution, the robot chooses only a small number of landmarks that are most relevant to its task.
The authors of \cite{tzoumas2020lqg} incorporated a restriction on sensing budget into the optimal control problem, and proposed an algorithm for control and sensing co-design. The optimal estimation through a network of sensors operated under sensing constraints is explored in \cite{hashemi2020randomized}.
\subsection{Proposed Approach}
\label{sec:proposed_approach}
Noticing that belief-space path planning and strategic sensing are inseparable problems,
we propose a belief space path planning method in which the expected perception cost required for path following is explicitly incorporated in the objective function of the shortest path problem. In the proposed approach, the expected perception cost for path-following is estimated purely from the geometry of the planned belief path. To this end, we first identify the cost of transitioning from one Gaussian belief state to another with a weighted sum of the Euclidean distance between their means and the information gain (i.e., the entropy reduction) required to update the belief covariance. The ``distance'' notion introduced this way defines a quasi-pseudometric on the belief manifold, making the shortest path problem on the Gaussian belief space well-defined. Since an analytical expression of the shortest path is usually not available when the belief space is filled with obstacles, we propose to apply an RRT*-based path planner \cite{karaman2011sampling} to find an approximate solution.
The fact that the perception cost is computed purely from the geometry of a Gaussian belief path without assuming particular sensor models
makes the proposed motion planning strategy applicable to scenarios where perception costs are concerned but accurate sensor models are cumbersome to obtain. While estimating perception costs without assuming particular sensor models may appear unrealistic, the utility of such an approach can be understood by invoking the utility of geometry-based path planners (e.g., A*, PRM, RRT), which are typically model-free (actuator models are not required) but are widely used in practice.
Even for scenarios where sensor models are available,
our method provides an optimized reference belief path that guides how to operate the sensors at hand to control belief states in real time.
In this sense, the proposed approach integrates strategic sensing and path planning in a single framework.
Since a sample path of a belief trajectory is realized only after sensor data becomes available, a belief path generated in our framework only serves as a reference trajectory to be tracked. We thus invoke the two-stage design philosophy discussed above, assuming that an appropriate belief space path following algorithm such as \emph{belief LQR} \cite{platt2010belief} is used in real-time implementations. In this paper, we demonstrate the effectiveness of such a two-stage motion planning approach in numerical simulations.
\subsection{Contribution}
The technical contributions of this paper are as follows:
\begin{itemize}
\item[(a)] We formulate a shortest path problem in a Gaussian belief space with respect to a novel ``distance'' function $\mathcal{D}$ (defined by \eqref{eq:def_D} below) that characterizes the weighted sum of the Euclidean travel distance and the sensing cost required for trajectory following assuming simple mobile robot dynamics (equation \eqref{eq:euler3} below).
We first show that $\mathcal{D}$ is a quasi-pseudometric, i.e., $\mathcal{D}$ satisfies the triangular inequality (Theorem~\ref{theo:tria}) but fails to satisfy symmetry and the identity of indiscernibles. We then introduce a path length concept using $\mathcal{D}$, for which the shortest path problem is formulated as \eqref{eq:main_problem}.
\item[(b)] We develop an RRT*-based algorithm for the shortest path problem described in part (a). Besides a basic version (Algorithm~\ref{algo:1}), we also develop a modified algorithm with improved computational efficiency (Algorithm~\ref{algo:2}) and a backward-in-time version (Algorithm~\ref{algo:Backward}). We also show how the sensing constraints can be incorporated into the developed algorithm.
\item[(c)] We prove that the path length function characterized in the problem formulation in part (a) is continuous with respect to the topology of total variation. We believe this result is critical to prove the asymptotic optimality of the proposed RRT*-based algorithm, although a complete proof must be postponed as future work.
\item[(d)] The practical usefulness of the proposed motion planning approach is demonstrated by simulation studies. We show that the algorithms in part (b) can be easily combined with the existing ideas for belief path following (e.g., belief LQR with event-based sensing or greedy sensor selection) to efficiently reduce sensing efforts (e.g., frequency of sensing or the number of sensors to be used simultaneously) for both the scenario when the robot's dynamics are close to \eqref{eq:euler3}, and the scenario when the dynamics are significantly different from it.
\end{itemize}
In our previous work \cite{pedram2021rationally}, we proposed the distance function $\mathcal{D}$ while the analysis on this metric was limited to 1-D spaces. In this paper, the analyses are extended to $N$ dimensional spaces. To establish these results, we provide novel proofs (summarized in Appendix \ref{ap:zero}, \ref{ap:B}, and \ref{ap:C}) which are significantly different from our previous work.
We also develop three novel algorithms as described in (b). One of the major improvements is the introduction of the concept of \emph{losslessness}. This concept allows us to develop computationally efficient algorithms and to establish a continuity result in Theorem~\ref{theo:continuity2}. Finally, this paper provides comprehensive simulation results demonstrating the effectiveness of the proposed method for mitigating sensing costs during the path following phase, which was not discussed in the previous work.
\subsection{Outline of the Paper}
The rest of the paper is organized as follows:
In Section~\ref{sec:prelim}, we summarize basic information-geometric concepts that are necessary to formally state the shortest path problem in Section~\ref{sec:formulation}.
In Section~\ref{sec:algorithm}, we present the proposed RRT*-based algorithms and prove the continuity of the path length function to shed light on its asymptotic optimality.
Section~\ref{sec:simulation} presents
simulation studies demonstrating the effectiveness of the proposed planning strategy.
We conclude with a list of future work in Section~\ref{sec:conclusion}.
\subsection{Notation and Convention}
Vectors and matrices are represented by lower-case and upper-case symbols, respectively. Random variables are denoted by bold symbols such as $\mathbf{x}$. The following notation will be used: $\mathbb{S}^d=\big\{P \in \mathbb{R}^{d \times d} : \text{ $P=P^\top$} \big\}$, $\mathbb{S}^d_{++}=\big\{P \in \mathbb{S}^d : P \succ 0 \big\}$, and $\mathbb{S}^d_{\rho}=\big\{P \in \mathbb{S}^d : P \succeq \rho I \}$. $\bar{\sigma}(M)$ and $\|M\|_F$ represent the maximum singular value and the Frobenius norm of the matrix $M$, respectively. The vector $2$-norm is denoted by $\|\cdot\|$. $\mathcal{N}(x, P)$ represents a Gaussian random variable with mean $x$ and covariance of $P$.
\section{Preliminaries}
\label{sec:prelim}
In this section, we introduce an appropriate distance notion on a Gaussian belief space which will be needed to formulate a shortest path problem in Section~\ref{sec:formulation}. We also study the mathematical properties of the introduced distance notion.
\subsection{Assumed Dynamics}
The distance concept we introduce can be interpreted as a navigation cost for a mobile robot whose location uncertainty (covariance matrix) grows linearly with the Euclidean travel distance when no sensor is used.
Specifically, suppose that the reference trajectory for the robot is given as a sequence of way points $\{x_k\}_{k=0,1, ... , K}$ in the configuration space $\mathbb{R}^d$, and that the robot is commanded with a constant unit velocity input
\[
v_k := \frac{x_{k+1}-x_k}{\|x_{k+1}-x_k\|}
\]
to move from $x_k$ to $x_{k+1}$. Let $t_k$ be the time that the robot is scheduled to visit the $k$-th way point $x_k$, defined sequentially by $t_{k+1}-t_k=\|x_{k+1}-x_k\|$.
We assume that the actual robot motion is subject to stochastic disturbance.
Let ${\bf x}(t_k)$ be the random vector representing the robot's actual position at time $t_k$. In an open-loop control scenario, it is assumed to satisfy
\begin{equation}
\label{eq:euler3}
{\bf x}(t_{k+1})={\bf x}(t_k)+(t_{k+1}-t_k)v_k+{\bf n}_k
\end{equation}
where ${\bf n}_k \sim \mathcal{N}(0, \|x_{k+1}-x_k\|W)$ is a Gaussian disturbance whose covariance matrix is proportional to the commanded travel distance. In feedback control scenarios, the command input $v_k$ is allowed to be dependent on sensor measurements.
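For concreteness, the following minimal sketch (function and variable names are illustrative) rolls out the open-loop model \eqref{eq:euler3} along a given sequence of way points.
\begin{verbatim}
# Sketch: open-loop rollout of the assumed dynamics; the noise
# covariance grows linearly with the commanded travel distance.
import numpy as np

def simulate_open_loop(waypoints, W, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(waypoints[0], dtype=float)
    traj = [x.copy()]
    for k in range(len(waypoints) - 1):
        step = np.asarray(waypoints[k + 1], dtype=float) \
               - np.asarray(waypoints[k], dtype=float)
        dist = np.linalg.norm(step)           # = t_{k+1} - t_k
        noise = rng.multivariate_normal(np.zeros(x.size), dist * W)
        x = x + step + noise                  # (t_{k+1}-t_k) v_k = step
        traj.append(x.copy())
    return np.array(traj)
\end{verbatim}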
We emphasize that the simple dynamics \eqref{eq:euler3} is assumed solely for the purpose of introducing a distance notion on a Gaussian belief space in the sequel. The algorithm we develop in Section~\ref{sec:algorithm} can be used even if the actual robot dynamics are significantly different from \eqref{eq:euler3}.
This is similar to the fact that RRT* for Euclidean distance minimization is widely used even in applications where Euclidean distance in the configuration space does not capture the motion cost accurately.\footnote{However, we also note that there are many works that incorporate non-Euclidean metrics in RRT* to better approximate true motion costs.}
To follow this philosophy, we strategically adopt a simple model \eqref{eq:euler3} and leave more realistic dynamic constraints to be addressed in the path following control phase.
\subsection{Gaussian Belief Space and Quasi-pseudometric}
In Gaussian belief space planning, a reference trajectory is given as a sequence of belief way points $b_k=(x_k,P_k), k=0, 1, ... , K$, where $x_k\in\mathbb{R}^d$ and $P_k\in \mathbb{S}_{++}^d$ are planned mean and covariance of the random vector ${\bf x}(t_k)$.
In the sequel, we call $\mathbb{B}:=\mathbb{R}^d \times \mathbb{S}_{++}^d$ the \emph{Gaussian belief space} or simply the \emph{belief space}.
We first introduce an appropriate directed distance function from a point $b_k=(x_k, P_k)$ to another $b_{k+1}=(x_{k+1}, P_{k+1})$.
The distance function is interpreted as the cost of steering the Gaussian probability density characterized by $b_k$ to the one characterized by $b_{k+1}$.
We assume that the distance function is a weighted sum of the travel cost $\mathcal{D}_{\text{travel}}(b_k, b_{k+1})$ and the information cost $\mathcal{D}_{\text{info}}(b_k, b_{k+1})$.
\subsubsection{Travel cost}
We assume that the travel cost is simply the commanded travel distance:
\[
\mathcal{D}_{\text{travel}}(b_k, b_{k+1}):=\|x_{k+1}-x_k\|.
\]
\subsubsection{Information cost}
Assuming that no sensor measurement is utilized while the deterministic control input $v_k$ is applied to \eqref{eq:euler3}, the covariance at time step $k+1$ is computed as
\begin{equation}
\label{eq:p_prior}
\hat{P}_{k+1}:=P_k+\|x_{k+1}-x_k\|W.
\end{equation}
We refer to $\hat{P}_{k+1}$ as the prior covariance at time step $k+1$.
Suppose that the prior covariance is updated to the posterior $P_{k+1}(\preceq \hat{P}_{k+1})$ by a sensor measurement ${\bf y}_{k+1}$ at time step $k+1$. (See Section~\ref{sec:formulation_interpret} for a discussion on sensing actions enabling this transition).
The minimum information gain required for this transition is given by the entropy reduction:
\begin{align}
\mathcal{D}_{\text{info}}(b_k, b_{k+1})&= h({\bf x}_{k+1}|{\bf y}_0, \cdots ,{\bf y}_k)-h({\bf x}_{k+1}|{\bf y}_0, \cdots, {\bf y}_{k+1}) \nonumber \\
&=\frac{1}{2}\log\det \hat{P}_{k+1} - \frac{1}{2}\log\det P_{k+1}. \label{eq:info_gain1}
\end{align}
Here, $h(\cdot|\cdot)$ denotes conditional differential entropy.
Note that for any physically ``meaningful'' belief update, the inequality $P_{k+1}\preceq \hat{P}_{k+1}$ should be satisfied, as the posterior uncertainty $P_{k+1}$ should be ``smaller'' than the prior uncertainty $\hat{P}_{k+1}$. In the sequel, we say that a transition from $b_k$ to $b_{k+1}$ is \emph{lossless} if the inequality $P_{k+1}\preceq \hat{P}_{k+1}$ is satisfied.
If the transition from $b_k$ to $b_{k+1}$ is lossless, the formula \eqref{eq:info_gain1} takes a non-negative value and hence it can be used in the definition of a (directed) distance from $b_k$ to $b_{k+1}$.
However, in order for the shortest path problem on a Gaussian belief space $\mathbb{B}$ to be well-defined, the distance function must be well-defined for arbitrary pairs $(b_k, b_{k+1})$. To generalize \eqref{eq:info_gain1} to pairs $(b_k, b_{k+1})$ that are not necessarily lossless, we adopt the following definition:
\begin{subequations}
\label{eq:d_info_general0}
\begin{align}
\mathcal{D}_{\text{info}}(b_k, b_{k+1})=\min_{Q_{k+1}\succeq 0} & \ \frac{1}{2}\log\det \hat{P}_{k+1}-\frac{1}{2}\log\det Q_{k+1} \label{eq:d_info_general}\\
\text{s.t.\ \ } &\quad Q_{k+1} \preceq P_{k+1}, \;\; Q_{k+1} \preceq \hat{P}_{k+1}.\label{eq:d_info_general1}
\end{align}
\end{subequations}
Notice that for any given pair $(P_k, P_{k+1})$, \eqref{eq:d_info_general0} takes a non-negative value, and \eqref{eq:d_info_general0} coincides with \eqref{eq:info_gain1} if the transition from $b_k$ to $b_{k+1}$ is lossless.
To see why \eqref{eq:d_info_general0} is a natural generalization of \eqref{eq:info_gain1}, consider a two-step procedure $\hat{P}_{k+1}\rightarrow Q_{k+1}\rightarrow P_{k+1}$ to update the prior covariance $\hat{P}_{k+1}$ to the posterior covariance $P_{k+1}$. In the first step, the uncertainty is ``reduced'' from $\hat{P}_{k+1}$ to $Q_{k+1}(\preceq \hat{P}_{k+1})$. The associated information gain is $\frac{1}{2}\log\det \hat{P}_{k+1}-\frac{1}{2}\log\det Q_{k+1}$.
In the second step, the covariance $Q_{k+1}$ is ``increased'' to $P_{k+1}(\succeq Q_{k+1})$.
This step incurs no information cost, since the location uncertainty can be increased simply by ``forgetting'' the prior knowledge. The optimization problem \eqref{eq:d_info_general0} is interpreted as finding the optimal intermediate step $Q_{k+1}$ to minimize the information gain in the first step.
\begin{remark}
The expression \eqref{eq:d_info_general0} characterizes $\mathcal{D}_{\text{info}}(b_k, b_{k+1})$ as a value of convex program (more precisely, the max-det program \cite{vandenberghe1998determinant}).
Lemma~\ref{lemma:explicit} in Appendix~\ref{ap:zero} provides a method to solve \eqref{eq:d_info_general0} directly using the singular value decomposition.
\end{remark}
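Numerically, \eqref{eq:d_info_general0} can also be handled by an off-the-shelf convex solver; the following sketch (assuming the \texttt{cvxpy} package) is one way to evaluate $\mathcal{D}_{\text{info}}$. For lossless pairs it reproduces \eqref{eq:info_gain1} up to solver tolerance.
\begin{verbatim}
# Sketch: solving the max-det program for D_info with cvxpy.
import numpy as np
import cvxpy as cp

def d_info(P_k, P_next, x_k, x_next, W):
    d = P_k.shape[0]
    P_hat = P_k + np.linalg.norm(x_next - x_k) * W   # prior covariance
    Q = cp.Variable((d, d), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.log_det(Q)),
                      [Q << P_next, Q << P_hat])
    prob.solve()
    # D_info = (1/2) log det P_hat - (1/2) max log det Q
    return 0.5 * (np.linalg.slogdet(P_hat)[1] - prob.value)
\end{verbatim}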
Since only lossless transitions are physically meaningful, the path planning algorithms we develop in Section~\ref{sec:algorithm} below are designed to produce a sequence of lossless transitions as an output. In Theorem~\ref{theo:loss-lessmod} in Section~\ref{sec:formulation}, we will formally prove that the optimal solution to the shortest path problem can be assumed lossless without loss of generality.
\subsubsection{Total cost}
The total cost to steer the belief state from $b_k=(x_k,P_k)$ to $b_{k+1}=(x_{k+1},P_{k+1})$
is a weighted sum of $\mathcal{D}_{\text{travel}}(b_k, b_{k+1})$ and $\mathcal{D}_{\text{info}}(b_k, b_{k+1})$.
Introducing $\alpha>0$, we define the total cost as
\begin{equation}
\label{eq:def_D}
\mathcal{D}(b_k, b_{k+1}):= \; \mathcal{D}_{\text{travel}}(b_k, b_{k+1})+\alpha \mathcal{D}_{\text{info}}(b_k, b_{k+1})
\end{equation}
Throughout this paper, the total cost function \eqref{eq:def_D} serves as a distance metric with which the lengths of the belief paths are measured. Before the shortest path problem is formally formulated in the next section, it is worthwhile to note the following key properties of the function \eqref{eq:def_D}:
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{D}(b_1, b_2)\geq 0 \;\; \forall b_1, b_2 \in \mathbb{B}$;
\item $\mathcal{D}(b, b)= 0 \;\; \forall b \in \mathbb{B}$; and
\item $\mathcal{D}(b_1, b_2)\leq \mathcal{D}(b_1, b_3)+\mathcal{D}(b_3, b_2) \;\; \forall b_1, b_2, b_3 \in \mathbb{B}$.
\end{enumerate}
The first two properties are straightforward to verify. The third property (the triangular inequality) has been shown in \cite{pedram2021rationally} for special cases with $d=1$.
As the first technical result of this paper, we prove the triangular inequality in full generality as follows:
\begin{theorem}
\label{theo:tria}
In obstacle free space, the optimal path cost between $b_1=(x_1, P_1)$ and $b_2=(x_2,P_2)$ is equal to $\mathcal{D}(b_1, b_2)$ or equivalently
\begin{equation}
\mathcal{D}(b_1, b_2) \leq \mathcal{D}(b_1, b_{int})+\mathcal{D}(b_{int}, b_2)
\end{equation}
for any intermediate $b_{int}$.
\end{theorem}
\begin{proof}
See Appendix~\ref{ap:B}.
\end{proof}
Theorem~\ref{theo:tria} implies that the shortest path from $(x_1, P_1)$ to $(x_2, P_2)$ is obtained by first making a sensing-free travel from $(x_1, P_1)$ to $(x_2, \hat{P}_2)$ followed by a covariance reduction from $\hat{P}_2$ to $P_2$. In other words, the ``move-and-sense'' strategy is optimal for transitioning in an obstacle-free space.
It is also noteworthy that \eqref{eq:def_D} fails to satisfy symmetry, i.e., $\mathcal{D}(b_1, b_2)\neq \mathcal{D}(b_2, b_1)$ in general.
Consequently, the notion of path length we introduce below is direction-dependent. This asymmetry is also illustrated by a numerical example in Section~\ref{sec:asymmetry}.
The function \eqref{eq:def_D} also fails to satisfy the identity of indiscernibles since $\mathcal{D}(b_1, b_2)=0$ does not necessarily imply $b_1=b_2$. (Consider $b_1=(x_1, P_1)$ and $b_2=(x_2, P_2)$ with $x_1=x_2$ and $P_1 \preceq P_2$.)
Due to the lack of symmetry and the identity of indiscernibles, the function \eqref{eq:def_D} fails to be a metric.
However, with properties (i)-(iii) above, $\mathcal{D}$ is a \emph{quasi-pseudometric} on the belief space $\mathbb{B}$.
A shortest path problem on $\mathbb{B}$ is then well-defined with respect to $\mathcal{D}$, as we discuss in the next section.
Finally, from the perspective of information geometry, \eqref{eq:def_D} is one of various possible quasi-pseudometrics, alongside the KL-divergence and the Wasserstein distance; such metrics are used in the covariance steering problems in \cite{chen2015optimal, okamoto2018optimal, balci2020covariance} and references therein.
We remark that any choice of quasi-pseudometric on $\mathbb{B}$ leads to a well-defined shortest path problem on the belief space, and the RRT*-based algorithm we present in Section~\ref{sec:algorithm} can easily be modified to incorporate different choices.
\section{Problem formulation}
\label{sec:formulation}
In this section, we define the length of general paths in the belief space $\mathbb{B}$ using the distance function \eqref{eq:def_D} and formally state the shortest path problem.
\subsection{Belief Chains and Belief Paths}
In the sequel, we use the term \emph{belief chain} to refer to a sequence of transitions from $b_k=(x_k, P_k)\in\mathbb{B}$ to $b_{k+1}=(x_{k+1}, P_{k+1})\in\mathbb{B}$, $k=0, 1, 2, ... , K-1$, where $K$ is a finite integer.
We also use the term \emph{belief path} to refer to a function $\gamma: [0,T]\rightarrow \mathbb{B}$, $\gamma(t)=b(t)$ with $b(t)=(x(t), P(t))$.
The origin and the end point of the path $\gamma$ are denoted by $\gamma(0)$ and $\gamma(T)$, respectively.
The parameter $t$ is often referred to as \emph{time}, but we remark that $t$ does not necessarily correspond to the physical time. The time of arrival of the robot at the end point depends on the length of the path and the travel speed of the robot.
\subsubsection{Lossless chains and paths}
Recall that a transition from $b_k=(x_k, P_k)$ to $b_{k+1}=(x_{k+1}, P_{k+1})$ is said to be \emph{lossless} if
\begin{equation}
\label{eq:lossless_def}
P_{k+1} \preceq \hat{P}_{k+1}(:=P_k+\|x_{k+1}-x_k\|W)
\end{equation}
If every transition in the belief chain $\{(x_k, P_k)\}_{k=0,1, ... , K}$ is lossless, we say that the belief chain is lossless.
Let $\gamma: [0,T]\rightarrow \mathbb{B}$, $\gamma(t)=(x(t), P(t))$ be a belief path.
The \emph{travel length} of the path $\gamma$ from time $t=t_a$ to time $t=t_b$ is defined as
\[
\ell(x[t_a, t_b])=\sup_{\mathcal{P}} \sum_{k=0}^{K-1} \|x(t_k)-x(t_{k+1})\|
\]
where the supremum is over the space of all partitions $\mathcal{P}=(t_a=t_0<t_1<\cdots < t_N=t_b), N\in\mathbb{N}$. We say that a path $\gamma$ is \emph{lossless} if the condition
\begin{equation}
P(t_b) \preceq P(t_a)+\ell(x[t_a, t_b])W
\end{equation}
holds for all $0\leq t_a < t_b \leq T$. A path $\gamma(t)= (x(t),P(t))$ is said to be \emph{finitely lossless} if there exists a finite partition $\mathcal{P}=(0=t_0<t_1<\dots<t_K=T)$ such that for any refinement $\mathcal{P}'=(0=t'_0<t'_1<\dots<t'_{K'}=T)$ of $\mathcal{P}$ (i.e., $\mathcal{P}' \supseteq \mathcal{P}$), the belief chain $\{(x(t'_{k'}),P(t'_{k'}))\}_{k'=0, 1,\dots, K'}$ is lossless.
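As a simple illustration (function names are illustrative), losslessness of a belief chain can be checked by testing positive semidefiniteness of $\hat{P}_{k+1}-P_{k+1}$ at every transition.
\begin{verbatim}
# Sketch: losslessness check for a belief chain {(x_k, P_k)}.
import numpy as np

def is_lossless_chain(xs, Ps, W, tol=1e-9):
    for k in range(len(xs) - 1):
        P_hat = Ps[k] + np.linalg.norm(xs[k + 1] - xs[k]) * W
        # Lossless iff P_hat - P_{k+1} is positive semidefinite.
        if np.linalg.eigvalsh(P_hat - Ps[k + 1]).min() < -tol:
            return False
    return True
\end{verbatim}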
\subsubsection{Collision-free chains and paths}
Let $\mathcal{X}^{l}_{\text{obs}}\subset \mathbb{R}^d$ be a closed convex subset representing the obstacle $l \in \{1, \dots, M\}$.
Consider a robot moving from a way point $x_k\in \mathbb{R}^d$ to $x_{k+1}\in \mathbb{R}^d$. Using $0\leq \lambda \leq 1$, the line segment connecting $x_k$ and $x_{k+1}$ is parametrized as
\[
x[\lambda]=(1-\lambda)x_k+\lambda x_{k+1}.
\]
Assuming that the robot's initial covariance is $P_k$, the evolution of the covariance matrix subject to the model \eqref{eq:euler3} is written as
\[
P[\lambda]=P_k+\lambda\|x_{k+1}-x_k\|W.
\]
For a fixed confidence level parameter $\chi^2>0$, we say that the transition from $x_k$ to $x_{k+1}$ with initial covariance $P_k$ is \emph{collision-free} if
\begin{align}
&(x[\lambda]-x_{\text{obs}})^\top P[\lambda]^{-1}(x[\lambda]-x_{\text{obs}}) \geq \chi^2 \nonumber \\
&\forall \lambda\in [0,1], \quad \forall x_{\text{obs}}\in \mathcal{X}^l_{\text{obs}}, \quad \forall l \in\{1, \dots, M\}.
\end{align}
\begin{remark}
We say that a collision with obstacle $l$ is detected when
\begin{equation}
(x[\lambda]-x_{\text{obs}})^\top P[\lambda]^{-1}(x[\lambda]-x_{\text{obs}}) < \chi^2\quad
\end{equation}
for some $ \lambda\in [0,1]$ and $x_{\text{obs}}\in \mathcal{X}^l_{\text{obs}}$. Collision detection can be formulated as a feasibility problem
\begin{align}
\label{eq:collision_checker}
&\begin{bmatrix}
\chi^2 & \text{sym.} \\
(1-\lambda)x_k+\lambda x_{k+1}-x_{\text{obs}} & P_k+\lambda\|x_{k+1}-x_k\|W
\end{bmatrix}\succ 0, \nonumber \\
&0\leq \lambda \leq 1, \quad x_{\text{obs}}\in \mathcal{X}^l_{\text{obs}}
\end{align}
which is a convex program for each convex obstacle $\mathcal{X}^l_{\text{obs}}$.
\qed
\end{remark}
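The following sketch (assuming \texttt{cvxpy} and a polytopic obstacle $\{x : A_{\text{obs}}x \le b_{\text{obs}}\}$; the strict inequality is relaxed to semidefiniteness) expresses \eqref{eq:collision_checker} as a feasibility problem.
\begin{verbatim}
# Sketch: collision detection along one transition via the LMI
# feasibility problem (strict > relaxed to >=); the obstacle is the
# polytope {x : A_obs x <= b_obs}, with illustrative names.
import numpy as np
import cvxpy as cp

def collision_detected(x_k, x_next, P_k, W, chi2, A_obs, b_obs):
    d = len(x_k)
    lam = cp.Variable(nonneg=True)
    x_obs = cp.Variable(d)
    v = x_k + lam * (x_next - x_k) - x_obs          # x[lambda] - x_obs
    P = P_k + lam * np.linalg.norm(x_next - x_k) * W
    M = cp.bmat([[np.array([[chi2]]), cp.reshape(v, (1, d))],
                 [cp.reshape(v, (d, 1)), P]])
    prob = cp.Problem(cp.Minimize(0),
                      [lam <= 1, A_obs @ x_obs <= b_obs, M >> 0])
    prob.solve()
    return prob.status == cp.OPTIMAL                # feasible = collision
\end{verbatim}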
We say that a belief chain $\{(x_k, P_k)\}_{k=0, 1, ... , K-1}$ is \emph{collision-free} if for each $k=0, 1, ... , K-1$, the transition from $x_k$ to $x_{k+1}$ with the initial covariance $P_k$ is collision-free. We say that a belief path $\gamma: [0,T]\rightarrow \mathbb{B}$, $\gamma(t)=(x(t), P(t))$ is \emph{collision-free} if
\begin{equation}
\begin{split}
&(x(t)-x_{\text{obs}})^\top P^{-1}(t)(x(t)-x_{\text{obs}}) \geq \chi^2, \;\;\\
& \quad \forall t\in [0, T], \;\; \forall x_{\text{obs}}\in \mathcal{X}^l_{\text{obs}}, \ \ \forall l \in \{1, \dots, M\}.
\end{split}
\end{equation}
\subsection{Path Length}
Let $\gamma: [0,T]\rightarrow \mathbb{B}$, $\gamma(t)=(x(t), P(t))$ be a path, and $\mathcal{P}=(0=t_0<t_1<\cdots < t_K=T)$ be a partition.
The length of the path $\gamma$ with respect to the partition $\mathcal{P}$ is defined as
\begin{equation}
c(\gamma;\mathcal{P})=\sum_{k=0}^{K-1} \mathcal{D}(\gamma(t_k), \gamma(t_{k+1}))
\end{equation}
where the function $\mathcal{D}$ is defined by \eqref{eq:def_D}.
The length of a path $\gamma$ is defined as the supremum of $c(\gamma;\mathcal{P})$ over all partitions
\begin{equation}
\label{eq:def_path_length}
c(\gamma):=\sup_\mathcal{P} c(\gamma;\mathcal{P}).
\end{equation}
The definition \eqref{eq:def_path_length} means that for each path with a finite length, there exists a sequence of partitions
$\{\mathcal{P}_i\}_{i\in\mathbb{N}}$ such that
$\lim_{i \rightarrow \infty} c(\gamma; \mathcal{P}_i) =c(\gamma)$.
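For a belief chain, $c(\gamma;\mathcal{P})$ is simply a sum of one-step costs; a minimal sketch, reusing the $\mathcal{D}_{\text{info}}$ routine sketched in Section~\ref{sec:prelim}, reads as follows.
\begin{verbatim}
# Sketch: c(gamma; P) for a belief chain, as a sum of one-step costs
# (d_info as in the earlier sketch).
import numpy as np

def chain_length(xs, Ps, W, alpha):
    return sum(np.linalg.norm(xs[k + 1] - xs[k])
               + alpha * d_info(Ps[k], Ps[k + 1], xs[k], xs[k + 1], W)
               for k in range(len(xs) - 1))
\end{verbatim}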
\begin{remark}
If $\gamma(t)$ is differentiable, then the losslessness condition \eqref{eq:lossless_def} is equivalent to
$\left\|\frac{d}{dt}x(t)\right\| W \succeq \frac{d}{dt} P(t), \forall t \in [0, T]$. In this case, the path length can be expressed as:
\begin{align}
c(\gamma) \!= \!\!\int_{0}^T \left[ \left\|\frac{d}{dt}x(t)\right\|+ \frac{\alpha}{2} {\rm Tr}\Big( \big(\left\|\tfrac{d}{dt}x(t)\right\| W-\frac{d}{dt}P(t)\big)P^{-1}(t)\Big)\right] dt.
\end{align}
\end{remark}
\subsection{Topology on the Path Space}
The proofs of asymptotic optimality of the original RRT* algorithm \cite{karaman2010incremental,karaman2011sampling} critically depends on the continuity of the path length function $c(\gamma)$.
In this subsection, we introduce an appropriate topology on the space of belief paths $\gamma: [0,T]\rightarrow \mathbb{B}$ with respect to which the path length function $c(\gamma)$ is shown to be continuous in Theorem~\ref{theo:continuity2} below.
The space of all belief paths $\gamma: [0,T]\rightarrow \mathbb{B}$
can be thought of as an open subset (convex cone) of the space of \emph{generalized paths} $\gamma: [0,T]\rightarrow \mathbb{R}^d\times \mathbb{S}^d$. The space of generalized paths is a vector space on which addition and scalar multiplication are defined as $(\gamma_1+\gamma_2)(t)=(x_1(t)+x_2(t), P_1(t)+P_2(t))$ and $a \gamma(t)=(a x(t), a P(t))$ for $a\in\mathbb{R}$, respectively.
Let $\mathcal{P}=(0=t_0<t_1<\cdots < t_K=T)$ be a partition. The variation of a generalized path $\gamma$ with respect to $\mathcal{P}$ is defined as
\begin{align*}
V(\gamma; \mathcal{P})&:=\|x(0)\|\bar{\sigma}(W)+\bar{\sigma}(P(0)) \\
&+\sum_{k=0}^{K-1}\Bigl[\|x(t_{k+1})-x(t_k)\|\bar{\sigma}(W)+\bar{\sigma}(P(t_{k+1})-P(t_k))\Bigr].
\end{align*}
The total variation of a generalized path $\gamma$ is defined as
\[
|\gamma|_{\text{TV}}:=\sup_{\mathcal{P}}V(\gamma; \mathcal{P}).
\]
Notice that $|\cdot|_{\text{TV}}$ defines a norm on the space of generalized paths. If we introduce
\[
\|\gamma\|_\infty:=\sup_{t\in[0,T]} \|x(t)\|\bar{\sigma}(W)+\bar{\sigma}(P(t))
\]
then $\|\gamma\|_\infty \leq |\gamma|_{\text{TV}}$ holds \cite[Lemma 13.2]{carothers2000real}.
In what follows, we assume the topology of total variation metric $|\gamma_1-\gamma_2|_{\text{TV}}$ on the space of generalized paths $\gamma: [0,T]\rightarrow \mathbb{R}^d\times \mathbb{S}^d$, which is then inherited to the space of belief paths $\gamma: [0,T]\rightarrow \mathbb{B}(=\mathbb{R}^d\times \mathbb{S}_{++}^d)$. We denote by $\mathcal{BV}[0, T]$ the space of belief paths $\gamma: [0,T]\rightarrow \mathbb{B}$ such that $|\gamma|_{\text{TV}}<\infty$.
\subsection{The Shortest Belief Path Problem}
Let $b_0=(x_0, P_0)\in \mathbb{B}$ be a given initial belief state, $\mathcal{B}_{\text{target}} \subset \mathbb{B}$ be a given closed subset representing the target belief region, and
$\mathcal{X}^{l}_{\text{obs}}\subset \mathbb{R}^d$ be the given obstacle $l \in\{1, \dots, M\}$. Given a confidence level parameter $\chi^2>0$, the shortest path problem is formulated as
\begin{equation}
\label{eq:main_problem}
\begin{split}
\min_{\gamma \in \mathcal{BV}[0, T]} \;\; & c(\gamma) \\
\text{ s.t. }\;\;\;\; & \gamma(0)=b_0, \; \gamma(T)\in \mathcal{B}_{\text{target}} \\
& (x(t)-x_{\text{obs}})^\top P^{-1}(t)(x(t)-x_{\text{obs}}) \geq \chi^2 \\
& \forall t\in [0, T], \;\; \forall x_{\text{obs}}\in \mathcal{X}^{l}_{\text{obs}}, \ \ \forall l \in \{1, \dots, M\}.
\end{split}
\end{equation}
We make the following mild assumption which will be needed in the development of Section~\ref{sec:algorithm}.
\begin{assumption}
There exists a feasible path $\gamma(t)=(x(t), P(t))$ for \eqref{eq:main_problem} such that $P(t)\in \mathbb{S}_\rho^d$ and $\mathrm{Tr}(P(t))\leq R$ for all $t\in[0,T]$, where $R>0$ and $\rho>0$ are constants.
\end{assumption}
The next theorem also plays a key role in the development of our algorithm in Section~\ref{sec:algorithm}.
\begin{theorem}
\label{theo:loss-lessmod}
For any collision-free belief chain $\{b_k = (x_k, P_k)\}_{k=0, 1, ... , K}$, there exists a collision-free and lossless chain $\{b'_k=(x'_k, P'_k )\}_{k=0, 1, ... , K}$ with $x_k=x'_k$ and $P'_k \preceq P_k$ for $k=0, \dots, K$ that has a shorter (or equal) length in that $\sum_{k=0}^{K-1} \mathcal{D}(b'_k,b'_{k+1}) \leq \sum_{k=0}^{K-1} \mathcal{D}(b_k, b_{k+1}).$
\end{theorem}
\begin{proof}
See Appendix~\ref{ap:C}.
\end{proof}
Theorem~\ref{theo:loss-lessmod} implies that
the shortest path problem \eqref{eq:main_problem} always admits a ``physically meaningful'' path as an optimal solution. We also use Theorem~\ref{theo:loss-lessmod} to restrict the search for an optimal solution to the space of lossless paths in the algorithms we develop in the sequel.
\subsection{Interpretation}
\label{sec:formulation_interpret}
By tuning the parameter $\alpha>0$, the shortest path problem formulation \eqref{eq:main_problem} is able to incorporate information cost at various degrees. As we will show in Section~\ref{sec:simulation_alpha}, different choices of $\alpha$ lead to qualitatively different optimal paths.
Notice that the proposed problem formulation \eqref{eq:main_problem} is purely geometric and does not use any particular sensor models to estimate sensing costs. This makes our motion planning strategy model-agnostic and widely applicable to scenarios where information cost for path following is concerned but actual models of sensors are not available or too complex to be utilized.
Another advantage of the proposed approach is that, from the obtained belief path $(x(t), P(t))$, one can \emph{synthesize} a sensing strategy under which the planned covariance $P(t)$ is obtained as an outcome of Bayesian filtering.
To see this, assume that a desired belief path is given by a lossless belief chain $\{(x_k, P_k)\}_{k=0,1, ... , K}$. At every way point, the prior covariance
\begin{equation}
\label{eq:sensor_reconstruction1}
\hat{P}_k=P_{k-1}+\|x_k-x_{k-1}\|W
\end{equation}
needs to be updated to a posterior covariance $P_k$.
Such a belief update occurs as a consequence of a linear measurement
\begin{equation}
\label{eq:sensor_reconstruction2}
{\bf y}_{k}=C_k{\bf x}_{k}+{\bf v}_{k}
\end{equation}
with Gaussian noise ${\bf v}_{k}\sim\mathcal{N}(0, V_k)$, provided that $C_k$ and $V_k$ are chosen to satisfy
\begin{equation}
\label{eq:sensor_reconstruction3}
C_k^\top V_k^{-1}C_k=P_{k}^{-1}-\hat{P}_{k}^{-1}.
\end{equation}
Notice that \eqref{eq:sensor_reconstruction3} together with \eqref{eq:sensor_reconstruction1} are the standard Riccati recursion for Kalman filtering.
Moreover, it can be shown that the linear sensing strategy \eqref{eq:sensor_reconstruction2} incurs the designated information gain (i.e., the equality \eqref{eq:info_gain1} holds), and is information-theoretically optimal in the sense that no other sensing strategy, including nonlinear ones, allows the covariance update from $\hat{P}_k$ to $P_k$ with less information gain.
In other words, \eqref{eq:sensor_reconstruction2} for $k=0,1, ... , K$ provides an optimal sensing strategy that perceives ``minimum yet critical'' information from the environment to perform the path following task.
References \cite{tanaka2015sdp,tanaka2016semidefinite,tanaka2017lqg} elaborate on an information-theoretic interpretation of the sensing mechanism \eqref{eq:sensor_reconstruction2} as an optimal source-coder (data-compressor) for networked LQG control systems.
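To make the synthesis step concrete, the following Python sketch recovers one admissible pair $(C_k, V_k)$ from a planned lossless transition via \eqref{eq:sensor_reconstruction1} and \eqref{eq:sensor_reconstruction3}; the factorization with $V_k = I$ is one convenient choice among many, and the function name is ours.
\begin{verbatim}
import numpy as np

def synthesize_sensor(P_prev, P_post, step_len, W):
    # Prediction step: prior covariance after traveling step_len.
    P_hat = P_prev + step_len * W
    # Required information gain; PSD whenever the transition is lossless.
    Delta = np.linalg.inv(P_post) - np.linalg.inv(P_hat)
    # Factor Delta = C^T V^{-1} C; choosing V = I gives Delta = C^T C.
    w, U = np.linalg.eigh(Delta)
    w = np.clip(w, 0.0, None)        # guard small negative eigenvalues
    C = np.diag(np.sqrt(w)) @ U.T
    V = np.eye(C.shape[0])
    return C, V
\end{verbatim}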
While information-theoretically optimal, the sensing strategy \eqref{eq:sensor_reconstruction2} may not be feasible in reality if the robot is not equipped with an adequate set of sensors.
In such cases, the planned sequence of covariance matrices $\{P_k\}_{k=0,1, ... , K}$ cannot be traced exactly. Even if the sensing strategy \eqref{eq:sensor_reconstruction2} is feasible (and thus the planned sequence of covariance matrices $\{P_k\}_{k=0,1, ... , K}$ is traceable), the realization of the mean $\hat{{\bf x}}_k=\mathbb{E}[{\bf x}_k|{\bf y}_0, \cdots , {\bf y}_k]$ inevitably deviates from the planned trajectory $\{x_k\}_{k=0,1, ... , K}$ because $\hat{{\bf x}}_k$ is a random process whose realization depends on the realization of ${\bf y}_k$.
For these reasons, the belief path we obtain by solving \eqref{eq:main_problem} can only serve as a reference trajectory to be tracked in real-time implementations. Even though the optimal belief path cannot be traced perfectly, various approaches can be taken to design a joint sensing and control policy for trajectory tracking.
In Section~\ref{sec:simulation}, we present simulation results showing that such a joint sensing and control strategy helps the robot to mitigate sensing cost (e.g., the frequency of sensing actions, the number of sensors that must be used simultaneously) during the path following phase.
\section{Algorithm}
\label{sec:algorithm}
We utilize the RRT* algorithm \cite{karaman2011sampling} as a numerical method to solve the shortest path problem \eqref{eq:main_problem}.
In this section, we develop four different variations of the algorithm. While they operate differently, they share the same structure: they all incrementally construct directed graphs $G=(B,E)$ with randomly sampled belief nodes $B$ and edges $E$.
To shed light on the asymptotic optimality of the proposed algorithms, we show that the path length function \eqref{eq:def_path_length} is continuous.
\subsection{Basic Algorithm}
\begin{figure*}[t!]
\centering
\subfloat[]{\includegraphics[clip,width=4.1cm]{fig/lossless_mod/LossLess_mod_1_trim.eps}
\label{fig:LossLess_1}} \quad
%
\subfloat[]{\includegraphics[clip,width=4.3cm]{fig/lossless_mod/LossLess_mod_2_trim.eps}
\label{fig:LossLess2}} \quad
%
\subfloat[]{\includegraphics[clip,width=4.3cm]{fig/lossless_mod/LossLess_mod_3_trim.eps}
\label{fig:LossLess3}} \quad
%
\subfloat[]{\includegraphics[clip,width=4cm]{fig/lossless_mod/LossLess_mod_4_trim.eps}
\label{fig:LossLess4}}
\caption{The lossless modification executed by the $\textsc{\fontfamily{cmss}\selectfont LossLess}$ function and its propagation to the descendants. (a) The $\textsc{\fontfamily{cmss}\selectfont Generate}(i)$ function samples the new node $b_{\rm new}$ (a blue ellipse). In the rewiring process, the black ellipse $b_j$ located at the center is selected as an element of $B_{\rm nbors}$. (b) The prior covariance after the travel from $x_{\rm new}$ to $x_{j}$ is depicted as a blue ellipse in the center. The $\textsc{\fontfamily{cmss}\selectfont LossLess}(b_{\rm new}, b_j)$ function calculates the optimal solution of \eqref{eq:d_info_general0} to achieve the lossless transition. (c) $\textsc{\fontfamily{cmss}\selectfont LossLess}$ is propagated to the descendants of $b_j$ to create the lossless chain. (d) As a consequence of the rewiring and $\textsc{\fontfamily{cmss}\selectfont LossLess}$, the algorithm generates a path with lower $\mathcal D(b_{\rm init}, b_k)$ while achieving smaller covariances for $b_j$ and $b_k$.}
\label{fig:LossLess_mod}
\end{figure*}
The basic implementation of RRT* in the belief space for the cost function \eqref{eq:def_path_length} is summarized in Algorithm~\ref{algo:1}.
Sampling of a new node (Lines~3-7), the addition of a new edge connecting $b_j\in B$ and $b_{\rm new}$ (Lines~8-14), and the rewiring process (Lines~15-19) are performed similarly to those of the original RRT* \cite{karaman2011sampling}. However, the new distance function ${\mathcal D}$ and its directional dependency necessitate the introduction of new functionalities.
\begin{algorithm}[ht]
\label{algo:1}
\footnotesize{
$B \leftarrow \{b_{\text{init}}\}$; $E \leftarrow \emptyset$; cost($b_{\text{init}}$)$\leftarrow 0$; $G\leftarrow (B,E)$\;
\For{$i = 2:N$}{
$b_i = (x_i,P_i) \leftarrow \textsc{\fontfamily{cmss}\selectfont Generate}(i)$\;
$b_{\text{near}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Nearest}(B,b_i)$\;
$b_{\text{new}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Scale} (b_{\text{near}},b_i,\hat{D}_{\text{min}})$\;
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_{\text{near}},b_{\text{new}}) = \text{ \textup{True}}$}{
$B \leftarrow B \cup b_{\text{new}}$\;
$B_{\text{nbors}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Neighbor} (B,b_{\text{new}},\hat{D}_{\text{min}})$\;
$\text{cost}(b_{\text{new}}) \leftarrow realmax$\;
\For{$b_j \in B_{\text{nbors}}$}{
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_j,b_{\text{new}}) = \text{\normalfont True}$ \textbf{\textup{ and}}
$\textup{cost} (b_j) + \mathcal{D}(b_j,b_{\text{new}}) < \textup{cost}(b_{\text{new}}) $}{
$\text{cost}(b_{\text{new}}) \leftarrow \text{cost} (b_{j}) + \mathcal{D}(b_j,b_{\text{new}}) $\;
$b_{\text{nbor}}^* \leftarrow b_j$\;
}
}
$E \leftarrow E \cup \left[b_{\text{nbor}}^*,b_{\text{new}} \right] $\;
\For{$b_j \in B_{\text{nbors}} \: \backslash \: b_{\text{nbor}}^*$}{
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_{\text{new}}, b_j) = \text{ \normalfont True} $ \textbf{\textup{ and}} $ \textup{cost} (b_{\text{new}}) + \mathcal{D}(b_{\text{new}},b_j) < \textup{cost} (b_j)$}{
$value \leftarrow \text{cost} (b_{\text{new}}) + \mathcal{D}(b_{\text{new}}, b_j) - \text{cost} (b_j)$\;
$E \leftarrow E \cup \left[ b_{\text{new}}, b_j \right] \backslash \left[ \text{parent}(b_j), b_j \right]$\;
$\textsc{\fontfamily{cmss}\selectfont UpdateDes}(G, b_j, value)$\;
}
}
}\Return $G = (B, E)$
}
}
\caption{Information-Geometric RRT* Algorithm }
\end{algorithm}
Algorithm~\ref{algo:1} begins with the graph $G$ containing the initial node $b_{\rm init}$ and an empty edge set.
At each iteration, the $\textsc{\fontfamily{cmss}\selectfont Generate}(i)$ function creates a new point $b_i$ in the obstacle-free space $\mathcal{B}_{\text{free}}$ by randomly sampling an obstacle-free spatial location ($x\in \mathbb{R}^d$) and a covariance ($P \in \mathbb{S}^d_{++}$).
The $\textsc{\fontfamily{cmss}\selectfont Nearest}$ function finds the nearest point $b_{\text{near}}$ in the set $B$ from the newly generated node $b_i$ in the metric $\hat{\mathcal{D}}(b,b'):=\|x-x'\|+\|P-P'\|_{F}$.
The $\textsc{\fontfamily{cmss}\selectfont Scale}(b_{\text{near}},b_i,\hat{D}_{\text{min}})$ function linearly shifts the generated point $b_i$ to a new location as:
\begin{equation*}
b_{\text{new}} \!=\!\!
\begin{cases}
b_{\text{near}} \!+\! \frac{\hat{D}_{\text{min}}}{\hat{\mathcal{D}}(b_i,b_\text{near})}\left(b_i-b_{\text{near}}\right)~ \text{if}~\hat{\mathcal{D}}(b_i,b_{\text{near}}) > \hat{D}_\text{min}, \\
b_i \hspace{3.8cm} \text{otherwise,}
\end{cases}
\end{equation*}
where $\hat{D}_\text{min} := \min\{ED_{\text{min}},\ r \left( \frac{\log n}{n} \right)^{\frac{1}{d}}\}$. Here, $ED_{\text{min}}$ is a user-defined constant, $r$ is the connection radius, and $n$ is the number of nodes in the graph $G$.
The $\textsc{\fontfamily{cmss}\selectfont Scale}$ function also checks that the $\chi^2$ confidence region of $b_{\rm new}$ does not interfere with any obstacle.
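For concreteness, a minimal Python sketch of the steering step is given below; it assumes a belief is stored as a pair of NumPy arrays $(x, P)$ and omits the obstacle check.
\begin{verbatim}
import numpy as np

def scale(b_near, b_i, D_min):
    # Steer b_i toward b_near so that the returned node is within D_min
    # of b_near in the auxiliary metric ||x - x'|| + ||P - P'||_F.
    x_n, P_n = b_near
    x_i, P_i = b_i
    dist = np.linalg.norm(x_i - x_n) + np.linalg.norm(P_i - P_n, 'fro')
    if dist <= D_min:
        return (x_i, P_i)
    t = D_min / dist
    return (x_n + t * (x_i - x_n), P_n + t * (P_i - P_n))
\end{verbatim}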
The function $\textsc{\fontfamily{cmss}\selectfont FeasCheck}(b_{i}, b_{j}) = \textsc{\fontfamily{cmss}\selectfont IsLossless}(b_{i}, b_{j}) \ \textbf{\textup{and}} \ $ $ \textsc{\fontfamily{cmss}\selectfont ObsCheck}(b_{i}, b_{j})$ is a logical function.
It ensures that the transition $b_{i} \rightarrow b_{j}$ is lossless, i.e., it checks whether
$P_{j} \preceq P_{i}+ \|x_{j}-x_{i}\|W$. It also ensures that the $\chi^2$ confidence bound in this transition does not intersect with any obstacle $\ell\in \{1, \dots, M\}$ by solving problem~\eqref{eq:collision_checker}. The function $\textsc{\fontfamily{cmss}\selectfont Neighbor}(B, b_{\text{new}}, \hat{D}_{\text{min}})$ returns the subset of nodes $ B_{\text{nbors}} = \{ b_i \in B: \hat{\mathcal{D}}(b_i,b_\text{new}) \leq \hat{D}_{\text{min}} \}$.
To find the parent node for the sampled node $b_{\text{new}}$, Lines 11-13 of Algorithm~\ref{algo:1} attempt connections from the neighboring nodes $B_{\text{nbors}}$ to $b_{\text{new}}$.
Among the nodes in $B_{\text{nbors}}$ from which there exists a collision-free and lossless path, the node that results in the minimum $\textup{cost}({b_{\text{new}}})$ is selected as the parent of $b_{\text{new}}$, where $\textup{cost}({b})$ denotes the cost of the path from $b_{\rm init}$ to node $b$. Note that the transitions are lossless and thus $\mathcal{D}(b_j, b_{\text{new}}) =\|x_{\text{new}}-x_j\| + \frac{\alpha}{2}\left[\log\det(P_j+\|x_{\text{new}}-x_j\|W)- \log\det P_{\text{new}}\right]$. Line 14 establishes a new edge between the sought parent and $b_{\text{new}}$.
In the rewiring step (Lines 15-19), the algorithm replaces the parents of nodes $b_j$ in $B_{\text{nbors}}$ with $b_{\text{new}}$ if doing so results in a lower $\textup{cost}(b_j)$. In Line 16, the $\textsc{\fontfamily{cmss}\selectfont FeasCheck}$ function is called again because $\textsc{\fontfamily{cmss}\selectfont FeasCheck}(b_j, b_{\text{new}}) = \textup{True}$ does not necessarily imply $\textsc{\fontfamily{cmss}\selectfont FeasCheck}(b_{\text{new}}, b_j) = \textup{True}$. Finally, for each rewired node $b_j$, its cost (i.e., $ \textup{cost}({b_j})$) and the cost of its descendants are updated via the $\textsc{\fontfamily{cmss}\selectfont UpdateDes}(G, b_j, value)$ function in Line 19 as $\textup{cost}(\cdot) \leftarrow \textup{cost}(\cdot) + value $.
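A minimal sketch of the edge cost evaluation for a lossless transition follows; the tuple representation $b=(x,P)$ is ours, and \texttt{slogdet} is used for numerical stability.
\begin{verbatim}
import numpy as np

def edge_cost(b_from, b_to, W, alpha):
    # D(b_from, b_to) when the transition is lossless, i.e. when
    # P_to <= P_from + ||x_to - x_from|| W in the Loewner order.
    x0, P0 = b_from
    x1, P1 = b_to
    travel = np.linalg.norm(x1 - x0)
    P_hat = P0 + travel * W                       # prior after travel
    _, logdet_hat = np.linalg.slogdet(P_hat)
    _, logdet_post = np.linalg.slogdet(P1)
    info_gain = 0.5 * (logdet_hat - logdet_post)  # entropy reduction
    return travel + alpha * info_gain
\end{verbatim}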
\subsection{Improvement of Algorithm~\ref{algo:1} }
While Algorithm~\ref{algo:1} is simple to implement and easy to analyze, it can be modified in at least two ways to improve its computational efficiency. Algorithm~\ref{algo:2} shows the modified algorithm.
\subsubsection{Branch-and-Bound}
As the first modification, we deploy a branch-and-bound technique as detailed in \cite{karaman2011anytime}. For a given tree $G$, let $b_\text{min} $ be the node with the lowest cost among the nodes of $G$ within $\mathcal{B}_{\text{target}}$. It follows from the triangle inequality (Theorem~\ref{theo:tria}) that $\mathcal{D}(b,b_\text{goal})$, the cost of traversing from $b$ to the goal ignoring obstacles, is a lower bound on the cost of any feasible path from $b$ to $b_\text{goal}$. The $\textsc{\fontfamily{cmss}\selectfont BranchAndBound(G)}$ function, Line 22 in Algorithm~\ref{algo:2}, periodically deletes the nodes $B'' = \{b\in B: \textup{cost}(b) +\mathcal{D}(b, b_\text{goal}) \geq \textup{cost}({b_\text{min}})\}$, which can never improve upon the incumbent solution. This elimination of non-optimal nodes speeds up the RRT* algorithm.
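A sketch of the pruning rule is given below; the names are illustrative, \texttt{cost} is a node-to-cost map, and \texttt{lower\_bound} evaluates the obstacle-free cost $\mathcal{D}(b, b_{\rm goal})$.
\begin{verbatim}
def branch_and_bound(nodes, cost, lower_bound, cost_min):
    # Keep only nodes whose optimistic total cost can still beat the
    # best known cost of reaching the goal region.
    return [b for b in nodes
            if cost[b] + lower_bound(b) < cost_min]
\end{verbatim}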
\begin{algorithm}[ht]\label{algo:2}
\footnotesize{
$B \leftarrow \{b_{\text{init}}\}$; $E \leftarrow \emptyset$; $\textup{cost}(b_{\text{init}}) \leftarrow 0$; $G\leftarrow (B,E)$\;
\For{$i = 2:N$}{
$b_i = (x_i,P_i) \leftarrow \textsc{\fontfamily{cmss}\selectfont Generate}(i)$\;
$b_{\text{near}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Nearest}(B,b_i)$\;
$b_{\text{new}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Scale} (b_{\text{near}},b_i,\hat{D}_{\text{min}})$\;
\If{$\textsc{\fontfamily{cmss}\selectfont ObsCheck} (b_{\text{near}},b_{\text{new}}) = \text{ \normalfont True}$}{
$B_{\text{nbors}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Neighbor} (B,b_{\text{new}},\hat{D}_{\text{min}})$\;
$\text{cost}(b_{\text{new}}) \leftarrow realmax$\;
\For{$b_j \in B_{\text{nbors}}$}{
\If{$\textsc{\fontfamily{cmss}\selectfont ObsCheck} (b_j,b_{\text{new}}) = \text{\normalfont True} $ \textbf{\textup{and}} $ \text{cost}(b_j) + \mathcal{D}(b_j,b_{\text{new}}) < \text{cost}(b_{\text{new}}) $}{
$\text{cost}(b_{\text{new}}) \leftarrow \text{cost} (b_{j}) + \mathcal{D}(b_j,b_{\text{new}}) $\;
$b_{\text{nbor}}^* \leftarrow b_j$\;
}
}
$b_{\text{new}} \leftarrow \textsc{\fontfamily{cmss}\selectfont LossLess} (b_{\text{nbor}}^*, b_{\text{new}})$\;
$B \leftarrow B \cup b_{\text{new}}$\;
$E \leftarrow E \cup \left[b_{\text{nbor}}^*,b_{\text{new}} \right]$\;
\For{$b_j \in B_{\text{nbors}} \: \backslash \: b_{\text{nbor}}^*$}{
\If{$\textsc{\fontfamily{cmss}\selectfont ObsCheck} (b_{\text{new}},b_j) = \text{\normalfont True} $ \text{\textup{ and}} $ \text{cost} (b_{\text{new}}) + \mathcal{D}(b_{\text{new}},b_j) < \text{cost} (b_j)$}{
$b_{j} \leftarrow \textsc{\fontfamily{cmss}\selectfont LossLess} ( b_{\text{new}}, b_j)$\;
$E \leftarrow E \cup \left[ b_{\text{new}}, b_j \right] \backslash \left[ \text{parent}(b_j), b_j \right]$\;
$\text{cost}(b_j) \leftarrow \text{cost} (b_{\text{new}}) + \mathcal{D}(b_{\text{new}},b_j) $\;
$\textsc{\fontfamily{cmss}\selectfont UpdateDes}(G, b_j)$\;
}
}
} $\textsc{\fontfamily{cmss}\selectfont BranchAndBound} (G)$\;
\Return $G = (B, E)$
}
}
\caption{Improved Information-Geometric RRT* Algorithm}
\end{algorithm}
\subsubsection{Lossless Modification}
Simulation studies with Algorithm~\ref{algo:1} show that the $\textsc{\fontfamily{cmss}\selectfont IsLossless}(b_{i}, b_{j})$ check often returns $\textup{False}$, meaning that Lines 11-13 and Lines 16-19 are skipped frequently. Consequently, an extremely large $N$ may be required for Algorithm~\ref{algo:1} to produce meaningful results.
To resolve this issue, we adopt an extra step called \emph{lossless modification} which, as shown graphically in Fig.~\ref{fig:LossLess_mod}, ensures that the existing links are all lossless. With the lossless modification in place, Algorithm~\ref{algo:2} no longer needs
to call $\textsc{\fontfamily{cmss}\selectfont IsLossless}(b_i,b_j)$ to verify that a connected link $b_i \rightarrow b_j$ is lossless. Therefore, in Algorithm~\ref{algo:2}, we employ $\textsc{\fontfamily{cmss}\selectfont ObsCheck}(b_i, b_j)$ instead of $\textsc{\fontfamily{cmss}\selectfont FeasCheck}(b_i, b_j)$. This substitution reduces the computational burden.
In Algorithm~\ref{algo:2}, we first find the parent $b_{\text{nbor}}^*$ for $b_{\text{new}}$ in Lines 6-12. Then in Line 13, the covariance component of $b_{\text{new}}$ is modified so that the transition from its parent becomes lossless. Specifically, the $\textsc{\fontfamily{cmss}\selectfont LossLess}$ function computes $\textsc{\fontfamily{cmss}\selectfont LossLess} ( b_j, b_{\text{new}})= (x_{\text{new}}, Q^*)$, where $Q^*$ is the minimizer of \eqref{eq:d_info_general0} in computing $\mathcal{D}_{\text{info}} (b_j, b_{\text{new}})$. In the rewiring step, a similar lossless modification is performed for the rewired node $b_j$ in Line 18 to ensure that the transition $b_{\text{new}} \rightarrow b_j$ is lossless. After modifying the rewired node $b_j$, all its descendant belief nodes are modified sequentially so that all transitions become lossless, as detailed in Algorithm~\ref{algo:update_des}. The process of lossless modification
is similar to the method introduced in Appendix~\ref{ap:C} for constructing, from a given collision-free chain, a lossless collision-free chain with a lower cost.
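Using the closed-form minimizer of Lemma~\ref{lemma:explicit} in Appendix~\ref{ap:zero}, the $\textsc{\fontfamily{cmss}\selectfont LossLess}$ step can be sketched in Python as follows; the tuple representation $b=(x,P)$ and the helper name are ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def lossless(b_parent, b_child, W):
    # Shrink the child's covariance to the minimizer Q* of the
    # information cost (cf. the lemma in Appendix A).
    x_p, P_p = b_parent
    x_c, P_c = b_child
    P_hat = P_p + np.linalg.norm(x_c - x_p) * W   # prior at the child
    P_half = np.real(sqrtm(P_c))
    P_half_inv = np.linalg.inv(P_half)
    sig, U = np.linalg.eigh(P_half_inv @ P_hat @ P_half_inv)
    S_star = np.diag(np.minimum(1.0, sig))
    Q_star = P_half @ U @ S_star @ U.T @ P_half   # Q* <= P_c, Q* <= P_hat
    return (x_c, Q_star)
\end{verbatim}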
\begin{algorithm}[ht]\label{algo:update_des}
\footnotesize{
$Parent\_list \leftarrow \{b_{\text{rewired}}\}$; $Child\_list \leftarrow \emptyset $\;
\While {$Parent\_list \neq \emptyset$}{
\For{$b_i \in Parent\_list $}{
$Child\_list.append(\text{Children}(b_i))$ \;
}
\For {$b_j \in Child\_list$}{
$b_{j} \leftarrow \textsc{\fontfamily{cmss}\selectfont LossLess} ( \text{parent}(b_j), b_j)$\;
$\text{cost}(b_j) \leftarrow \text{cost}(\text{parent}(b_j)) + \mathcal{D}(\text{parent}(b_j), b_j) $\;
}
$Parent\_list \leftarrow Child\_list$\;
$Child\_list \leftarrow \emptyset$\;
}
}
\caption{$\textsc{\fontfamily{cmss}\selectfont UpdateDes}(G, b_{\text{rewired}})$}
\end{algorithm}
\subsection{Forward vs Backward Algorithm}
In the path following phase, a feedback control mechanism is responsible for suppressing the deviation of the robot from the nominal path. In ``noisy'' environments where the disturbance input is highly stochastic, the deviation can occasionally be large and replanning of the nominal path may be necessary. However, frequent online execution of RRT* is costly or even prohibitive in many applications.
For such scenarios, it is beneficial to recall the principle of backward dynamic programming \cite{bertsekas2000dynamic} which, once executed offline, eliminates the need for replanning.
The backward RRT* algorithm attempts to pre-compute the shortest path from any given belief state $b \in \mathcal{B}_{\text{free}}$ to $b_{\text{goal}}$ prior to the real-time implementation. When a significant deviation from the nominal path is detected, the robot looks up a new nominal path to the goal in the tree.
The proposed backward RRT* is shown in Algorithm~\ref{algo:Backward}.
It develops a random tree with its root in the goal region spanning the entire $\mathcal{B}_{\text{free}}$.
Steps in Algorithm~\ref{algo:Backward} are similar to those of Algorithm~\ref{algo:1}, except that the direction of each edge is reversed: an edge now points from a node to its parent (i.e., toward the goal) rather than from a parent to one of its children.
In Lines 10-13, connections from the sampled node $b_{\text{new}}$ to the neighboring nodes $B_{\text{nbors}}$ are attempted. Conversely, in the rewiring step (Lines 15-19), connections from the nodes $ b_j \in B_{\text{nbors}}$ to $b_{\text{new}}$ are checked. In this algorithm, $\textup{cost}(b_j)$ coincides with the optimal cost of traversing from node $b_j$ to $b_{\text{goal}}$, commonly known as the cost-to-go function \cite{bertsekas2000dynamic}.
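When a large deviation is detected online, the robot can query the backward tree for a new nominal path; a sketch follows, with the distance function and feasibility test passed in as callables since their implementations appear earlier in this section.
\begin{verbatim}
def replan(nodes, cost_to_go, b_current, D, feasible):
    # Reconnect the current belief to the backward tree: choose the
    # reachable node minimizing (edge cost) + (cost-to-go).
    candidates = [b for b in nodes if feasible(b_current, b)]
    if not candidates:
        return None
    return min(candidates,
               key=lambda b: D(b_current, b) + cost_to_go[b])
\end{verbatim}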
\begin{algorithm}[h]\label{algo:Backward}
\footnotesize{
$B \leftarrow \{b_{\text{goal}}\}$; $E \leftarrow \emptyset$; $\textup{cost} (b_{\text{goal}}) \leftarrow 0$; $G\leftarrow (B,E)$\;
\For{$i = 2:N$}{
$b_i = (x_i,P_i) \leftarrow \textsc{\fontfamily{cmss}\selectfont Generate}(i)$\;
$b_{\text{near}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Nearest}(B,b_i)$\;
$b_{\text{new}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Scale} (b_{\text{near}}, b_i,\hat{D}_{\text{min}})$\;
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_{\text{new}},b_{\text{near}}) = \text{\normalfont True}$}{
$B \leftarrow B \cup b_{\text{new}}$\;
$B_{\text{nbors}} \leftarrow \textsc{\fontfamily{cmss}\selectfont Neighbor} (B,b_{\text{new}},\hat{D}_{\text{min}})$\;
$\textup{cost}(b_{\text{new}}) \leftarrow realmax$\;
\For{$b_j \in B_{\text{nbors}}$}{
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_{\text{new}},b_j) = \text{ \normalfont True}$ \textbf{\textup{ and}} $ \textup{cost} (b_j) + \mathcal{D}(b_{\text{new}}, b_j) < \textup{cost}(b_{\text{new}}) $}{
$\textup{cost}(b_{\text{new}}) \leftarrow \text{cost} (b_{j}) + \mathcal{D}(b_{\text{new}},b_j) $\;
$b_{\text{nbor}}^* \leftarrow b_j$\;
}
}
$E \leftarrow E \cup \left[b_{\text{new}}, b_{\text{nbor}}^* \right] $\;
\For{$b_j \in B_{\text{nbors}} \: \backslash \: b_{\text{nbor}}^*$}{
\If{$\textsc{\fontfamily{cmss}\selectfont FeasCheck} (b_j, b_{\text{new}}) = \text{ \normalfont True}$ \textbf{\textup{ and}} $ \text{cost} (b_{\text{new}}) + \mathcal{D}(b_j, b_{\text{new}}) < \textup{cost} (b_j)$}{
$value \leftarrow \textup{cost} (b_{\text{new}}) + \mathcal{D}(b_j, b_{\text{new}}) - \textup{cost} (b_j)$\;
$E \leftarrow E \cup \left[b_j, b_{\text{new}} \right] \backslash \left[b_j, \text{parent}(b_j) \right]$\;
$\textsc{\fontfamily{cmss}\selectfont UpdateDes}(G, b_j, value)$\;
}
}
}\Return $G = (B, E)$
}
}
\caption{Backward Information-Geometric RRT* Algorithm}
\end{algorithm}
\subsection{Incorporating Sensor Constraints}
\label{sec:sensor_constraint}
Algorithms~\ref{algo:1}, \ref{algo:2} and \ref{algo:Backward} are purely geometric in the sense that they ignore physical constraints of the robot's hardware, including sensors. While this is an advantage in the sense we discussed in Section~\ref{sec:proposed_approach}, it is also a limitation.
In particular, a possible glitch of a belief chain synthesized by our algorithms is that it can be ``physically unrealizable'' if the robot is not equipped with sensors that can perform the necessary measurements.
For example, a robot equipped only with a camera with a bounded field of view (FoV) cannot obtain information from outside the FoV. Drones relying on GPS signals for localization may not have access to sensor measurements in GPS-denied regions.
Such sensor constraints restrict the way in which belief states can be updated.
Fortunately, there is a simple remedy for this issue. Consider the transition from $b_i=(x_i, P_i)$ to $b_j=(x_j, P_j)$, and assume that the only sensor available at the belief state $b_j$ is ${\bf y}_{j}=C_j{\bf x}_{j}+{\bf m}_{j}$ with ${\bf m}_{j}\sim\mathcal{N}(0, V_j)$.
In this transition, the covariance $P_i$ first
grows into $\hat{P}_j = P_i+\|x_j-x_i\| W $ in the prediction step, which is then reduced to $\tilde{P}_j := (\hat{P}_j^{-1}+ C_j ^\top V_j^{-1} C_j)^{-1}$ in the update step. Thus, the transition from $b_i$ to $b_j$ is clearly feasible if $P_j \succeq \tilde{P}_j$.
Therefore, to incorporate the sensor constraint, the function $\textsc{\fontfamily{cmss}\selectfont FeasCheck}$ in Algorithm~\ref{algo:1} simply needs to be replaced with $\textsc{\fontfamily{cmss}\selectfont FeasCheck2}(b_i, b_j):= \textsc{\fontfamily{cmss}\selectfont FeasCheck}(b_i,b_j) \ \textbf{\textup{and}} \ (P_j \succeq \tilde{P}_j)$.
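A sketch of the modified feasibility test is given below; the geometric test is passed in as a callable, and the tolerance and names are ours.
\begin{verbatim}
import numpy as np

def is_psd(A, tol=1e-9):
    # Check A >= 0 in the Loewner order, up to a numerical tolerance.
    return np.linalg.eigvalsh((A + A.T) / 2).min() >= -tol

def feas_check2(b_i, b_j, W, C_j, V_j, feas_check):
    # Require, in addition to the geometric check, that the planned
    # covariance P_j dominates the best posterior attainable with the
    # only available sensor (C_j, V_j).
    x_i, P_i = b_i
    x_j, P_j = b_j
    P_hat = P_i + np.linalg.norm(x_j - x_i) * W
    info = np.linalg.inv(P_hat) + C_j.T @ np.linalg.inv(V_j) @ C_j
    P_tilde = np.linalg.inv(info)
    return feas_check(b_i, b_j) and is_psd(P_j - P_tilde)
\end{verbatim}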
\begin{figure*}[t!]
\centering
%
\subfloat[$\alpha = 0$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/RRT_Output/alpha_00_TRO_new.eps}
\label{fig:sim_snap_nocbf}} \quad
%
\subfloat[$\alpha = 0.3$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/RRT_Output/alpha_03_TRO_new.eps}
\label{fig:sim_snap_cbf19}} \quad
%
\subfloat[$\alpha = 0.7$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/RRT_Output/alpha_07_TRO_new.eps}
\label{fig:sim_snap_cbf19_1}} \quad
\caption{Simulation results with $\alpha = 0, 0.3, 0.7$ in the presence of multiple obstacles. The disturbance noise intensity is set to $W = 10^{-3} I$, and the confidence ellipses represent $90 \%$ certainty regions. The boundaries of the plots are considered as obstacles.}
\label{fig:2D_mult}
\end{figure*}
\subsection{Asymptotic Optimality}
The RRT* algorithm \cite{karaman2011sampling} is an improvement of the RRT algorithm \cite{lavalle2001randomized} to achieve asymptotic optimality (the cost of the best path discovered converges to the optimal one almost surely as the number of samples increases).
Since the algorithms introduced in this section are RRT*-based, it is natural to conjecture that they are asymptotically optimal.
Unfortunately, it is not straightforward to prove such a property in our setting because of a number of differences between the problem formulation \eqref{eq:main_problem} and the premises utilized in the original proof of asymptotic optimality \cite{karaman2010incremental,karaman2011sampling} (see also \cite{solovey2020revisiting}).
One of the premises that the original proof critically relies on is the continuity of the path cost function. Thus, to understand the asymptotic optimality of the algorithms introduced in this section, it is essential to understand the continuity of the path length function $c(\gamma)$ introduced in \eqref{eq:def_path_length}.
The next theorem shows that the function $c(\gamma)$ is continuous in the space of finitely lossless paths with respect to the topology of total variation:
\begin{theorem}
\label{theo:continuity2}
Let $\gamma: [0,T]\rightarrow \mathbb{R}^d\times \mathbb{S}_\rho^d$ and $\gamma': [0,T]\rightarrow \mathbb{R}^d\times \mathbb{S}_\rho^d$ be paths. Suppose $\gamma \in \mathcal{BV}[0, T]$ and $\gamma' \in \mathcal{BV}[0, T]$ and they are both finitely lossless. Then, for each $\epsilon > 0$, there exists $\delta > 0$ such that
\[
|\gamma'-\gamma|_{\text{TV}}\leq \delta \quad \Rightarrow \quad |c(\gamma')-c(\gamma)| \leq \epsilon.
\]
\end{theorem}
\begin{proof}
See Appendix~\ref{ap:A}, where without loss of generality we assume $T=1$.
\end{proof}
A complete proof of asymptotic optimality requires substantial additional work and is left for future research.
\section{Numerical Experiments}
\label{sec:simulation}
\subsection{Impact of Changing $\alpha$}
\label{sec:simulation_alpha}
In this experiment, Algorithm~\ref{algo:2} is tested with $\alpha = 0.0,\ 0.3,$ and $0.7$ in a two-dimensional configuration space containing multiple obstacles shown in Fig.~\ref{fig:2D_mult}.
The paths shown in Fig.~\ref{fig:2D_mult} are generated by sampling 20,000 nodes. Sampled covariance ellipses are shown in black, and the propagation between samples is shown in blue.
As shown in Fig.~\ref{fig:2D_mult}~(a), the algorithm yields a path with the shortest Euclidean length when $\alpha = 0$. If the weight is increased to $\alpha=0.7$, the algorithm finds a longer path, depicted in Fig.~\ref{fig:2D_mult}~(c).
When $\alpha = 0.3$, the path illustrated in Fig.~\ref{fig:2D_mult}~(b) is obtained.
Numerical experiments show that the optimal paths are homotopic to those in Fig.~\ref{fig:2D_mult}~(a), (b), and (c) when $\alpha \leq 0.3$, $0.3 < \alpha \leq 0.5$, and $0.5 < \alpha$, respectively. In the sequel, red, purple, and blue colors are used to refer to the paths that are homotopic to the paths shown in Fig.~\ref{fig:2D_mult}~(a), (b), and (c), respectively.
Fig.~\ref{fig:cost}~(a) and (b) display the travel and information costs as functions of $\alpha \in \{0.1, \dots, 1 \}$ for the environment shown in Fig.~\ref{fig:2D_mult}.
Fig.~\ref{fig:cost}~(b)
shows a decreasing trend of the information cost as a function of $\alpha$. It is not monotonically decreasing (even within the same homotopy class) because of the probabilistic nature of RRT*; for each $\alpha$, the generated path itself is a random variable.
\begin{figure}[t!]
\centering
%
\subfloat[Travel cost]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/cost_alpha_var/TC.eps}
\label{fig:travel_cost}} \quad
%
\subfloat[Information cost]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/cost_alpha_var/IC.eps}
\label{fig:info_cost}}
\caption{Travel and information costs as functions of $\alpha$ for the simulations tested in the environment shown in Fig.~\ref{fig:2D_mult}.}
\label{fig:cost}
\end{figure}
\begin{figure}[t]
\centering
{\includegraphics[trim = 0.9cm 0.7cm 0cm 0cm, clip=true, width=\columnwidth]{fig/asym/new_asym.eps}}
\caption{Results of Algorithm~\ref{algo:2} with $10,000$ nodes in a two-dimensional space containing two path corridors, A and B, separated by a diagonal wall. The black line is the shortest path with the associated covariance ellipses. The blue ellipses illustrate the propagation of covariance between nodes. The simulation was completed with $W = 10^{-3} I$ and $\chi^2$ covariance ellipses representing $90 \%$ certainty regions. The boundaries of the plots are considered as obstacles.}
\label{fig:2D_asym}
\end{figure}
\subsection{Asymmetry of Path Length}
\label{sec:asymmetry}
The asymmetric nature of $\mathcal{D}$ makes the path length function $c(\gamma)$ direction-dependent.
Here, we demonstrate this nature using a sample configuration space (Fig.~\ref{fig:2D_asym}) in which there exist two paths having the same Euclidean length but different $c(\gamma)$.
Fig.~\ref{fig:2D_asym} shows a two-dimensional configuration space with a diagonal wall. The initial state of the robot is marked by the red dot and the target region is shown as the green rectangle at the upper-right corner. The path is obtained by running Algorithm~\ref{algo:2} for $N=10,000$ nodes.
Notice that in Fig.~\ref{fig:2D_asym}, there are two homotopy classes of paths from the initial state to the goal, as shown by A and B in Fig.~\ref{fig:2D_asym}.
Although they have similar Euclidean lengths, the algorithm selects path B.
The reason why B is preferred can be understood by invoking the optimality of the ``move-and-sense'' strategy for transitioning between two points. (Recall the comment after Theorem~\ref{theo:tria}).
In view of this, path B in Fig.~\ref{fig:2D_asym} allows a strategy close to ``move-and-sense,'' as the covariance is allowed to grow freely until the robot comes near the goal region where a one-time covariance reduction is performed.
In contrast, path A requires covariance reductions multiple times as the passage narrows.
\begin{figure}[t!]
\centering
%
\subfloat[Number of measurements using moderate precision sensor $V=10^{-3} I$. ]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.0cm, clip=true, width=0.45\columnwidth]{fig/meas_deadbeat/NM_DB_low.eps}
\label{fig:low_prec}} \quad
%
\subfloat[Number of measurements using high precision sensor $V=10^{-4} I$.]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.0cm, clip=true, width=0.45\columnwidth]{fig/meas_deadbeat/NM_DB_high.eps}
\label{fig:high_prec}}
\caption{Number of required measurements in path following for the single integrator robot using dead-beat controller and event-based measurement.}
\label{fig:num_meas}
\end{figure}
\begin{figure*}[t!]
\centering
%
\subfloat[$\alpha = 0.1$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/sample_path/SP_DI_01.eps}
\label{fig:EV_alpha_01}} \quad
%
\subfloat[$\alpha = 0.3$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/sample_path/SP_DI_03.eps}
\label{fig:EV_alpha_03}} \quad
%
\subfloat[$\alpha = 0.7$]
{\includegraphics[trim = 0.3cm 0cm 1.46cm 0.70cm, clip=true, width=0.64\columnwidth]{fig/sample_path/SP_DI_07.eps}
\label{fig:EV_alpha_07}} \quad
\caption{
The reference paths generated with $\alpha=0.1, 0.3$ and $0.7$ are followed by an event-based LQG controller using the high-precision sensor with $Z=10^{-4} I$. Three sample trajectories (shown in red, blue, and yellow) are plotted in each case.
}
\label{fig:Event_based}
\end{figure*}
\subsection{Event-based Control with Velocity Input}
\label{subsec:event}
So far, we have been considering $\mathcal{D}_{\text{info}}$ (i.e., the entropy reduction) as the cost of perception without showing the connections between $\mathcal{D}_{\text{info}}$ and more concrete metrics of perception costs (e.g., sensing power or sensing frequency).
In this section, we consider a mobile robot with a noisy location sensor, and demonstrate that the belief path with a small $\mathcal{D}_{\text{info}}$ helps the robot to navigate with less frequent measurements.
Once again, consider the environment shown in Fig.~\ref{fig:2D_mult}. As in \eqref{eq:euler3}, assume a simple robot dynamics
\[{\bf x}_{k+1} ={\bf x}_{k}+u_k+ {\bf w}_{k}, \quad {\bf w}_k\sim \mathcal{N} (0,\Delta t\, W), \]
where ${\bf x}$ is the 2-D position of the robot and $u$ is the velocity input with $W=10^{-4} I$ and $ \Delta t= 5\times 10^{-4}$ s. The robot is equipped with a sensor and can perform measurements
\begin{equation}
\label{eq:event_measurement}
{\bf y}_{k} ={\bf x}_{k}+ {\bf v}_{k}, \quad {\bf v}_k\sim \mathcal{N} (0,V).
\end{equation}
Two values, $V=10^{-3}I$ and $V=10^{-4}I$, are considered to model moderate and high precision measurements, respectively.
The robot uses a Kalman filter (KF) to obtain the estimate $\hat{{\bf x}}_k \sim (\hat{x}_k, P_{k}^{\text{KF}})$ and a dead-beat controller (i.e., $u_k = x_{k+1}^{\text{ref}} - \hat{x}_k$) to follow the reference trajectory $\{x_{k}^{\text{ref}}\}_{k=0}^{N}$.
To follow the reference covariance $\{P_{k}^{\text{ref}}\}_{k=0}^{N}$ with infrequent measurements, here we adopt an event-based sensing strategy. In particular, the robot performs a measurement \eqref{eq:event_measurement} only when the confidence ellipse corresponding to $(\hat{x}_k, P_{k}^{\text{KF}})$ is not contained in the planned ellipse corresponding to $(x_k^{\text{ref}}, P^{\text{ref}}_k)$ under a fixed confidence parameter $\chi^2$.
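For the planar case, the containment test that triggers a measurement can be approximated by checking sampled boundary points of the estimated ellipse; a sketch follows (an exact test would use the S-procedure, and $4.605$ is the $90\%$ $\chi^2$ quantile for two degrees of freedom).
\begin{verbatim}
import numpy as np

def ellipse_contained(c1, P1, c2, P2, chi2=4.605, n_samples=64):
    # Approximate test that the chi^2 ellipse of (c1, P1) lies inside
    # that of (c2, P2), by checking sampled boundary points of the first.
    L = np.linalg.cholesky(chi2 * P1)
    th = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    pts = c1[:, None] + L @ np.vstack([np.cos(th), np.sin(th)])
    d = pts - c2[:, None]
    quad = np.einsum('ik,ij,jk->k', d, np.linalg.inv(P2), d)
    return bool(np.all(quad <= chi2))
\end{verbatim}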
Fig.~\ref{fig:num_meas} shows the number of measurements performed in the simulations with different values of $\alpha \in \{0.1, \dots, 1\}$.
First, the number of measurements performed to follow the reference trajectory decreases as $\alpha$ increases. Second, with a higher-precision sensor, fewer sensing actions are required. The non-monotonicity observed in Fig.~\ref{fig:num_meas} is primarily due to the stochastic nature of the RRT* algorithm.
\subsection{Event-based Control with Acceleration Input}
To demonstrate the utility of the proposed path planning method for robots whose dynamics are different from
\eqref{eq:euler3}, we now consider the same setup as in Subsection~\ref{subsec:event} except that
the robot is a point mass with acceleration input. Denoting by $[x_{1,k} \; x_{2,k}]^\top$ and $[v_{1,k} \; v_{2,k}]^\top$ the position and the velocity of the robot, the dynamics are described as
\begin{equation*}
\begin{bmatrix}
{\bf x}_{1,k+1} \\ {\bf x}_{2,k+1} \\ {{\bf v}}_{1,k+1} \\ {{\bf v}}_{2,k+1}
\end{bmatrix} \!=\!
\begin{bmatrix}
I_2 & \Delta t I_2 \\
0_2 & I_2
\end{bmatrix}
\begin{bmatrix}
{\bf x}_{1,k} \\ {\bf x}_{2,k} \\ {{\bf v}}_{1,k} \\ {{\bf v}}_{2,k}
\end{bmatrix} \!+\!
\begin{bmatrix}
0 \\ 0 \\ a_{1,k} \\ a_{2,k}
\end{bmatrix} \Delta t + {\bf w}_k,~{\bf w}_k \sim \mathcal N(0, \Delta t\, W),
\end{equation*}
where the acceleration $a_k = [a_{1,k} \; a_{2,k}]^\top$ is the control input with $\Delta t = \frac{1}{30}$ and $W = {\rm diag} (10^{-4}, 10^{-4},0,0)$. The robot can observe its position through noisy measurements
\[ {\bf y}_k = \begin{bmatrix}
{\bf y}_{1,k} \\ {\bf y}_{2,k}
\end{bmatrix} = \begin{bmatrix}
{\bf x}_{1,k} \\ {\bf x}_{2,k}
\end{bmatrix} + {\bf z}_k, \quad {\bf z}_k \sim \mathcal{N} (0, Z),\]
where $Z=10^{-3} I$ and $Z=10^{-4} I$ are used. As in the previous subsection, the robot conducts a measurement only when the confidence ellipse is not contained in the planned ellipse. The reference control input to follow the reference trajectory is generated using a linear quadratic tracker for the nominal speed of 0.1\,m/s.
An LQG controller is used in the path following phase.
Fig.~\ref{fig:Event_based} shows three sample trajectories of $[{x}_{1,k} \; {x}_{2,k}]^\top$ obtained for each of the scenarios with $\alpha = 0.1, 0.3$ and $0.7$. Fig.~\ref{fig:num_meas_di} shows the number of measurements averaged over $500$ sampled trajectories. It illustrates that the robot following the path generated for higher $\alpha$ performs fewer measurements. It also shows that the number of measurements can be reduced by using sensors with a higher precision.
\begin{figure}[t!]
\centering
%
\subfloat[The number of measurements using moderate precision sensor $Z=10^{-3} I_2$. ]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.0cm, clip=true, width=0.45\columnwidth]{fig/meas_double/NM_DI_low.eps}
\label{fig:low_prec_di}} \quad
%
\subfloat[The number of measurements using high precision sensor $Z=10^{-4} I_2$.]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.0cm, clip=true, width=0.45\columnwidth]{fig/meas_double/NM_DI_high.eps}
\label{fig:high_prec_di}}
\caption{The number of required measurements for a double integrator robot using event-based LQG controller. The results are averaged over $500$ randomly generated paths.}
\label{fig:num_meas_di}
\end{figure}
\begin{figure}[t!]
\centering
%
\subfloat[Planned path with $\alpha = 0.4$]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/path_follow/path_alp_04_sm.eps}
\label{fig:plan_al04}} \quad
%
\subfloat[Planned path with $\alpha = 1.6$]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/path_follow/path_alp_1_6_sm.eps}
\label{fig:plan_al16}} \\
%
\subfloat[Average number of measured landmarks when following the path shown in Fig.~\ref{fig:path_follow}(a)]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.2cm, clip=true, width=0.45\columnwidth]{fig/path_follow/lm_num_alpha0_4_sm.eps}
\label{fig:num_lm_al04}} \quad
%
\subfloat[Average number of measured landmarks when following the path shown in Fig.~\ref{fig:path_follow}(b)]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.2cm, clip=true, width=0.45\columnwidth]{fig/path_follow/lm_num_alpha1_6_sm.eps}
\label{fig:num_lm_al16}}
\caption{Simulation results in the path following and self-localization scenario. (a) and (b) show the path generated by Algorithm~\ref{algo:2} with $N=30,000$, $W = 10^{-3} I$ and $\alpha = 0.4, 1.6$, respectively. The $18$~obstacles, also utilized as landmarks in the self-localization during the path following, are shown in red rectangles. (c) and (d) illustrate the average number of measured landmarks in the self-localization at each time step for 1000 runs.}
\label{fig:path_follow}
\end{figure}
\subsection{Path Following with Landmark Selections}
Previous subsections demonstrated a correlation between $\alpha$ and the frequency of sensing actions. In this subsection, we consider a scenario where multiple sensors are available to the robot.
We show that the proposed planning strategy is effective in reducing the number of sensors that must be activated simultaneously within a single time step.
More concretely, we consider a scenario in which a robot follows a path generated by Algorithm~\ref{algo:2} while localizing its position by an omnidirectional camera that provides the relative angle between itself and obstacles (which also serve as landmarks).
We show that the number of measured obstacles during the path following phase decreases as $\alpha$ is increased in the path planning phase.
The state of the robot at time step $k$ comprises the 2-D position $[x_k~y_k]^\top$ and the orientation $\theta_k$.
The dynamics of the robot are governed by the unicycle model perturbed by i.i.d. Gaussian noise
\begin{equation} \label{eq:uni_cycle}
\begin{bmatrix}
{\bf x}_{k+1} \\ {\bf y}_{k+1} \\ {\boldsymbol{\theta}}_{k+1}
\end{bmatrix} \!=\!
\begin{bmatrix}
{\bf x}_k \\ {\bf y}_k \\ {\boldsymbol{\theta}}_{k}
\end{bmatrix} \!+\!
\begin{bmatrix}
v_k \cos{{\boldsymbol{\theta}}_k} \\ v_k \sin{{\boldsymbol{\theta}}_k} \\ \omega_k
\end{bmatrix} \Delta t + {\bf w}_k,~{\bf w}_k \sim \mathcal N(0, W),
\end{equation}
with the velocity and angular velocity input $u_k = [v_k~\omega_k]^\top$.
In this simulation, we set $\Delta t = \frac{1}{30}$s and $W = {\rm diag}(\Delta t\times10^{-4}, \Delta t\times10^{-4}, \frac{1}{100}\times\frac{\pi}{180})$.
The relative angles between the centers of the obstacles and the robot are extracted via computer vision techniques \cite{Lowry16_visual_place} and a camera model \cite{Kawai11_panorama_cam}.
The measurement model can be expressed as
\begin{equation} \label{eq:sens_model_sim}
{\boldsymbol{y}}_{k} =
\begin{bmatrix}
\arctan{\left( \frac{m_{1,y}-{\bf y}_k}{m_{1,x}-{\bf x}_k} \right)} - {\boldsymbol{\theta}}_k
\\ \vdots \\
\arctan{\left( \frac{m_{M,y}-{\bf y}_k}{m_{M,x}-{\bf x}_k} \right)} - {\boldsymbol{\theta}}_k
\end{bmatrix}
+ {\bf v}_k,~ {\bf v}_k \sim \mathcal N(0, \hat V),
\end{equation}
where $m_j = [m_{j,x}~m_{j,y}]^\top$, $j \in \{1,\ldots,M\}$, are the known positions of the obstacles, and $\hat V = {\rm diag}(\{\hat{V}_{j}\}_{j \in \{1,\ldots,M\}})$ is the noise covariance when the robot measures all obstacles.
We set $\hat V_j = 0.0305$ for all landmarks, which corresponds to an angular standard deviation of 10\,deg.
The reference trajectories generated by Algorithm~\ref{algo:2} with $N = 30,000$, $W=10^{-3}I$, and $\alpha = 0.4, 1.6$ are depicted in Fig.~\ref{fig:path_follow}(a) and (b), respectively.
Similarly to the results in Section\,\ref{sec:simulation_alpha}, the smaller $\alpha$ forces the robot to shrink the covariance to go through the narrower region surrounded by obstacles shown in red rectangles.
We again assume that measurements are performed only when the covariance ellipse is not contained in the planned one.
When measurements are needed, the robot is allowed to observe multiple landmarks within a single time step. Landmarks are selected greedily: the landmark that reduces the determinant of the covariance matrix the most is selected, one after another, until the confidence ellipse is contained in the planned one (see the sketch after this paragraph). The robot follows the reference trajectory with the nominal speed of 0.1\,m/s.
For control and estimation, an LQG controller and an extended Kalman filter are employed to mitigate the deviation from the reference trajectory.
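A sketch of the greedy selection loop follows; the linearized bearing Jacobians $H_j$ and scalar noise variances are assumed to be precomputed per landmark, and the containment test is simplified to the matrix inequality $P \preceq P_{\rm ref}$.
\begin{verbatim}
import numpy as np

def greedy_landmarks(P_hat, H_list, v_list, P_ref, tol=1e-9):
    # Add, one at a time, the landmark whose (scalar) measurement most
    # reduces det(P), until P is dominated by the planned covariance.
    P, selected = P_hat.copy(), []
    while np.linalg.eigvalsh(P_ref - P).min() < -tol:
        best, best_P = None, None
        for j, (H, v) in enumerate(zip(H_list, v_list)):
            if j in selected:
                continue
            S = H @ P @ H + v                    # innovation variance
            P_new = P - np.outer(P @ H, H @ P) / S
            if best_P is None or np.linalg.det(P_new) < np.linalg.det(best_P):
                best, best_P = j, P_new
        if best is None:                         # no landmarks left
            break
        selected.append(best)
        P = best_P
    return selected, P
\end{verbatim}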
Figure~\ref{fig:path_follow}(c) and (d) show how many obstacles are measured at each time step of the path following averaged over 1,000 runs with the paths in Fig.~\ref{fig:path_follow}(a) and (b), respectively.
Fig.~\ref{fig:path_follow}(c) reveals that the robot measures many obstacles from time step $k=140$ to $k=190$, which corresponds to the period in which the robot must follow a narrow passage between obstacles.
That is, the robot is forced to localize its own position accurately to avoid collisions with obstacles, which results in navigation with many selected landmarks.
In the last few time steps, the robot conducts measurements again to obtain a smaller covariance that can fit in the goal region.
In contrast, Fig.~\ref{fig:path_follow}(d) illustrates that the robot following the path with $\alpha = 1.6$ performs fewer measurements than the one with $\alpha=0.4$.
\subsection{Path Planning under Sensor Constraints}
In this subsection, we consider a scenario in which constraints on the robot's sensing capability must be taken into account to generate a meaningful motion plan.
As discussed in Section~\ref{sec:sensor_constraint}, sensor constraints can be incorporated into the proposed algorithms. To demonstrate this, consider a point robot with velocity input navigating through the environment shown in Fig.~\ref{fig:sens_cons}.
Assume the robot is able to localize itself accurately (e.g., by GPS) only in the white region.
The dynamic and measurement models are assumed as
\begin{subequations}
\begin{align}
{\bf x}_{k+1} &={\bf x}_{k}+u_k+ {\bf w}_k, \quad {\bf w}_k\sim \mathcal{N}(0,\|\Delta x_k\|W),\\
{\bf y}_{k} &= {\bf x}_k+ {\bf v}_k, \quad {\bf v}_k \sim \mathcal{N}(0, V),
\end{align}
\end{subequations}
where the state ${\bf x}$ is the 2-D position of the robot, control input $u$ is the robot’s velocity, and ${\bf y}$ is the measured value. Here, $V= 10^{-3} I$ in the dark region and $V= 10^{-5} I$ in the white region. The simulations are carried out with $W = 5\times 10^{-4} I$, where the confidence ellipses correspond to $90\%$ safety.
Fig.~\ref{fig:sens_cons} visualizes the result of the simulation using Algorithm~\ref{algo:1} with function $\textsc{\fontfamily{cmss}\selectfont FeasCheck2}$.
Since accurate location data is unavailable in the dark region, a direct move from the start point to the goal region (which lies entirely in the dark region) results in an unacceptably large covariance at the end.
The result plotted in Fig.~\ref{fig:sens_cons} shows a feasible solution; the robot visits the white region to make a high-precision measurement shortly before moving toward the goal.
\begin{figure}[t]
\centering
\includegraphics[trim = 0cm 0cm 0cm 0cm, clip=true, width = 0.75\columnwidth]{fig/N_20000_06.eps}
\caption{Simulation results generated from $N=10,000$ nodes for navigation in the dark-white environment.}
\label{fig:sens_cons}
\end{figure}
\subsection{Quadrotor}
\begin{figure}[t!]
\centering
%
\subfloat[$\alpha=0.2$. ]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/quadrotor/quad_traj_alpha_02.eps}
\label{fig:quad_traj_low}} \quad
%
\subfloat[ $\alpha=2.0$.]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.45\columnwidth]{fig/quadrotor/quad_traj_alpha_2.eps}
\label{fig:quad_traj_high}}
\caption{The paths generated by Algorithm~\ref{algo:2} for $N=10000$ and $W= 0.1 I$.}
\label{fig:quad_traj}
\end{figure}
\begin{figure*}[t]
\centering
%
\subfloat[Horizontal position of CM (X-Y plane) for $\alpha = 0.2$.]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.7\columnwidth]{fig/quadrotor/sample_XY_02.eps}
\label{fig:sample_XY_02}}
%
\subfloat[Horizontal position of CM (X-Y plane) for $\alpha = 2.0$.]
{\includegraphics[trim = 0.1cm 0cm 1cm 0.3cm, clip=true, width=0.7\columnwidth]{fig/quadrotor/sample_XY_2.eps}
\label{fig:sample_XY_2}}
%
\subfloat[Vertical deviation of the position of CM for $\alpha = 0.2$ and $2$.]
{\includegraphics[trim = 0.0cm 0cm 0cm 0.2cm, clip=true, width=0.7\columnwidth]{fig/quadrotor/sample_Z.eps}
\label{fig:sample_Zt}}
\caption{ Smoothed reference trajectories (blue) and sampled paths (red).}
\label{fig:quad_sample}
\end{figure*}
To further demonstrate the effectiveness of the proposed path planning strategy for reducing sensing costs, this subsection considers the problem of navigating a 6 DoF quadrotor.
The dynamic models of the quadrotor, rotors' thrusts and response times, motor dynamics, and the aerodynamic effects are adopted from \cite{hoffmann2007quadrotor}.
The quadrotor is equipped with a navigation-grade IMU that measures both acceleration and angular rate. The quadrotor can communicate with a global navigation satellite system (GNSS) and uses the real-time kinematic method \cite{mannings2008ubiquitous} to achieve precise localization. An unscented Kalman filter is deployed for state estimation. The source code used for simulating the quadrotor, the IMU unit, and the GNSS unit is adapted from \href{https://gitlab.com/todd.humphreys/game-engine-student}{https://gitlab.com/todd.humphreys/game-engine-student}.
The IMU suffers from drift (i.e., the estimation error accumulates over time), and thus the quadrotor requires GNSS data for successful navigation. The quadrotor seeks to minimize the frequency of communication with the GNSS without compromising safety. We show that this goal can be achieved using the proposed framework.
Fig.~\ref{fig:quad_traj} shows a $10\,{\rm m} \times 10\,{\rm m}$ environment with obstacles and the belief paths generated by Algorithm~\ref{algo:2} for $\alpha =0.2$ and $\alpha=2$, with $W=0.1 I$ and a $90\%$ safety level.
To obtain a nominal reference, the paths generated by Algorithm~\ref{algo:2} are smoothed by fitting a 9th-degree polynomial to them. The quadrotor follows the reference with the nominal velocity of $1\,{\rm m/s}$ and is commanded at $200\,{\rm Hz}$. The quadrotor is an under-actuated system (only 4 of its 6 DoF are controllable). In this simulation, the 3-D position of the center of mass (CM) and the yaw angle are controlled with the aid of a PID controller. The quadrotor communicates with the GNSS if and only if the error covariance does not fully reside inside the confidence ellipse planned by Algorithm~\ref{algo:2}. Fig.~\ref{fig:quad_sample} shows the smoothed paths and a few sample trajectories. Fig.~\ref{fig:meas_dist} depicts the distribution of the required communications with the GNSS over 1000 runs. It shows a significant drop in communications during navigation, from almost $75$ on average for $\alpha=0.2$ to $25$ for $\alpha=2.0$.
\begin{figure}[h]
\centering
\includegraphics[trim = 0cm 0cm 0cm 0cm, clip=true, width =0.85 \columnwidth]{fig/quadrotor/meas_dist_new.eps}
\caption{Histogram of required number of communications with GNSS unit for the reference trajectories for $\alpha =0.2$ and $\alpha = 2.0$ shown in Fig.~\ref{fig:quad_traj}.}
\label{fig:meas_dist}
\end{figure}
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we proposed an information-geometric method to generate a reference path that is traceable with moderate sensing costs by a mobile robot navigating through an obstacle-filled environment.
In Section~\ref{sec:prelim}, we introduced a novel distance metric on the Gaussian belief manifold which captures the cost of steering the belief state. Based on this distance concept, in Section~\ref{sec:formulation}, we formulated a shortest path problem that characterizes the desired belief path. In Section~\ref{sec:algorithm}, we proposed an RRT*-based algorithm to solve the shortest path problem. A few variations of the algorithm were also proposed to improve computational efficiency and to accommodate various scenarios.
The continuity of the path length function with respect to the topology of total variation was proved, which is expected to be a key step towards the proof of asymptotic optimality of the proposed RRT*-based algorithms.
Section~\ref{sec:simulation} presented simulation results that confirmed the effectiveness of the proposed planning strategy to mitigate sensing costs for self-navigating mobile robots in several practical scenarios.
There are several directions to explore in the future:
\begin{itemize}
\item Computational efficiency of the proposed algorithms can be improved further by incorporating existing methodologies, such as informed RRT* or k-d trees.
\item Asymptotic optimality of Algorithm~\ref{algo:1} is suggested by the continuity of the path length function (Theorem~\ref{theo:continuity2}) and should be investigated further.
\item Path ``smoothing'' algorithms similar to \cite{cubuktepe2020scalable} need to be developed to fine-tune the path obtained by RRT*-based algorithms.
\item Connections between information theory and sensing costs in practical contexts need to be further investigated. Although our simulation studies empirically confirmed that minimizing the information gain is an effective strategy to mitigate the expected sensing cost in some practical scenarios, theoretical reasoning for these results needs to be developed to fully understand the application domain where the proposed method yields practically useful results.
\end{itemize}
\appendices
\section{Explicit expression for $\mathcal{D}_{\text{info}}(k)$}
\label{ap:zero}
\begin{lemma}
\label{lemma:explicit}
Let $[U,\Sigma]$ be the eigen-decomposition of $P_{k+1}^{-\frac{1}{2}}\hat{P}_{k+1} P_{k+1}^{-\frac{1}{2}}$, i.e., $U \Sigma U^\top=P_{k+1}^{-\frac{1}{2}}\hat{P}_{k+1} P_{k+1}^{-\frac{1}{2}}$, where $\Sigma= \mathrm{diag} (\sigma_1, \dots, \sigma_n) \succeq 0$ and $U$ is an orthogonal matrix (meaning that $UU^\top= U^\top U=I$). Then, $Q^*= P_{k+1}^{\frac{1}{2}} U S^* U^\top P_{k+1}^{\frac{1}{2}}$ is the optimal solution of \eqref{eq:d_info_general}, where $S^*:=\mathrm{diag} (\min \{1, \sigma_1\}, \dots, \min\{1, \sigma_n\})$.
\end{lemma}
\begin{proof}
The term $\frac{1}{2} \log\det \hat{P}_{k+1}$ is a constant in \eqref{eq:d_info_general} and thus $Q^*$, the optimal solution for \eqref{eq:d_info_general0}, can be computed as
\begin{equation}
\label{eq:explicit_one}
\begin{split}
Q^*=\argmin_{Q_{k+1}\succeq 0} & \quad -\frac{1}{2}\log\det Q_{k+1} \\
\text{s.t. } &\quad Q_{k+1} \preceq P_{k+1}, \;\; Q_{k+1} \preceq \hat{P}_{k+1}.
\end{split}
\end{equation}
If we define a new variable $ R_{k+1} := P_{k+1}^{-\frac{1}{2}} Q_{k+1} P_{k+1}^{-\frac{1}{2}}$, problem ~\eqref{eq:explicit_one} can be rewritten as
\begin{equation}
\label{eq:explicit_two}
\begin{split}
R^*=\argmin_{R_{k+1}\succeq 0} & \quad -\frac{1}{2}\log\det R_{k+1} \\
\text{s.t. } &\quad R_{k+1} \preceq I, \;\; R_{k+1} \preceq \bar{P}_{k+1},
\end{split}
\end{equation}
where $\bar{P}_{k+1} := P_{k+1}^{-\frac{1}{2}} \hat{P}_{k+1} P_{k+1}^{-\frac{1}{2}}$. Using the eigen-decomposition, $\bar{P}$ can be written in the canonical form $\bar{P}= U \Sigma U^\top$, where $\Sigma=\mathrm{diag}(\sigma_1, \dots, \sigma_n) \succeq 0$ and $U$ is an orthogonal matrix. By defining the variable $S:= U^\top R U$, problem~\eqref{eq:explicit_two} can be cast as
\begin{equation}
\label{eq:explicit_three}
\begin{split}
S^*=\argmin_{S_{k+1}\succeq 0} & \quad -\frac{1}{2}\log\det S_{k+1} \\
\text{s.t. } &\quad S_{k+1} \preceq I, \;\; S_{k+1} \preceq \Sigma = \mathrm{diag} (\sigma_1, \dots, \sigma_n).
\end{split}
\end{equation}
It is straightforward to check that the optimal solution to Problem~\eqref{eq:explicit_three} is $S^*=\mathrm{diag}(\min\{1, \sigma_1\}, \dots, \min\{1, \sigma_n\})$, which completes the proof.
\end{proof}
\section{Proof of the Triangle Inequality}
\label{ap:B}
\subsection{Proof Overview}
In what follows, we will establish the following chain of inequalities:
\begin{subequations}
\begin{align}
\nonumber
&\mathcal{D}(b_1,b_{int})+\mathcal{D}(b_{int},b_{2})
\\ \nonumber
&= \|x_{int}-x_1\|+\|x_2-x_{int}\|\\ \nonumber
+& \frac{\alpha}{2} \left(\begin{array}{cc}
&\!\!\!\!\!\!\!\!\min_{Q_{1}\succeq 0} \log \det (P_1+\|x_{int}-x_1\|W)-\log \det Q_{1} \\
& \text{s.t.} \quad Q_{1}\preceq P_1+\|x_{int}-x_1\|W, \quad Q_{1} \preceq P_{int}
\end{array} \right)\\ \label{equ:a}
+& \frac{\alpha}{2} \left(\begin{array}{cc}
&\!\!\!\!\!\!\!\! \min_{Q_{2}\succeq 0} \log \det (P_{int}+\|x_{2}-x_{int}\|W)-\log \det Q_{2} \\
& \text{s.t.} \quad Q_{2}\preceq P_{int}+\|x_2-x_{int}\|W, \quad Q_{2} \preceq P_{2}
\end{array} \right) \\ \nonumber
& \overset{(A)}{\geq} \|x_2-x_1\| \\ \label{equ:b}
+& \frac{\alpha}{2} \left(\begin{array}{cc}
& \!\!\!\!\!\!\!\!\min_{Q\succeq 0} \log \det (P_1+(\|x_{int}-x_1\|+ \|x_2-x_{int}\|)W)\\
&-\log \det Q \\
& \text{s.t.} \quad Q\preceq P_1+(\|x_{int}-x_1\|+ \|x_2-x_{int}\|)W, Q \preceq P_2
\end{array} \right)\\ \nonumber
& \overset{(B)}{\geq} \|x_2-x_1\| \\ \label{equ:c}
+& \frac{\alpha}{2} \left(\begin{array}{cc}
& \!\!\!\!\!\!\!\!\min_{Q\succeq 0} \log \det (P_1+\|x_2-x_1\|W)-\log \det Q \\
& \text{s.t.} \quad Q\preceq P_1+\|x_2-x_1\|W, \quad Q \preceq P_2
\end{array} \right)\\ \nonumber
=& \mathcal{D}(b_1, b_2).
\end{align}
\end{subequations}
We will prove the inequalities (B) and (A) in Appendix~\ref{subsec:2} and Appendix~\ref{subsec:3}, respectively.
\subsection{Proof of Inequality (B)}
\label{subsec:2}
Define a function $f: \mathbb{S}^d_{++} \rightarrow \mathbb{R}$, parameterized by a fixed $Y \in \mathbb{S}^d_{++}$, as
\begin{equation}
\label{def:f_1}
\begin{split}
&f(X):= \min_{Q\succeq 0} \log \det X -\log \det Q\\
& \quad \quad \quad \text{s.t.} \quad Q\preceq X, \quad Q\preceq Y.
\end{split}
\end{equation}
Equivalently, $f(X)$ can also be defined as
\begin{equation}
\label{def:f_2}
\begin{split}
& f(X):= \min_{R\succeq 0} \log \det X +\log \det R\\
& \quad \quad \quad \text{s.t.} \quad R\succeq X^{-1}, \quad R \succeq Y^{-1}.
\end{split}
\end{equation}
\begin{lemma}
\label{lem:app}
$f$ is monotone in the sense that if $0 \preceq X_1 \preceq X_2$, then $f(X_1) \leq f(X_2)$.
\end{lemma}
\begin{proof}
Set $X=X_2$ and let $R^*$ be the minimizer of the right-hand side of (\ref{def:f_2}), i.e.,
$f(X_2)= \log \det X_2 +\log \det R^*$. Introducing $F:=R^*-X_2^{-1}\succeq 0$, we have $R^*=X_2^{-1}+F$ and $f(X_2)=\log \det X_2 +\log \det (X_2^{-1}+F)= \log \det(I+ F^{\frac{1}{2}}X_2F^{\frac{1}{2}})$.
On the other hand, set $X=X_1$ in (\ref{def:f_2}); then $R':= X_1^{-1}+F$ is a feasible point for (\ref{def:f_2}). Namely, $R'= X_1^{-1}+F \succeq X_1^{-1}$ and
\begin{align*}
R'& = X_1^{-1}+R^*-X_2^{-1}\\
& \succeq R^* \quad ( \text{since } X_1^{-1} \succeq X_2^{-1})\\
& \succeq Y^{-1} \quad ( \text{since $R^*$ is feasible for (\ref{def:f_2}) with $X=X_2$}).
\end{align*}
Therefore,
\begin{align*}
f(X_1) \leq & \log \det X_1 +\log \det R' \\
=& \log \det X_1 +\log \det (X_1^{-1}+F)\\
=& \log \det(I+ F^{\frac{1}{2}}X_1F^{\frac{1}{2}})\\
\leq & \log \det(I+ F^{\frac{1}{2}}X_2F^{\frac{1}{2}}) = f(X_2),
\end{align*}
where the first inequality uses the feasibility of $R'$ and the last inequality uses $X_1 \preceq X_2$.
\end{proof}
The inequality (B) is an application of Lemma~\ref{lem:app} with $X_1= P_1+\|x_2-x_1\|W$, $X_2= P_1+(\|x_{int}-x_1\|+ \|x_2-x_{int}\|)W$, and $Y=P_2$, which clearly satisfy $0 \prec X_1 \preceq X_2$.
\subsection{Proof of Inequality (A)}
\label{subsec:3}
Let $P_1\succ 0$, $W_1\succeq 0$, $P_2\succ 0$, and $W_2\succeq 0$ be given matrix-valued constants. To complete the proof of (A), we consider
\begin{equation}
\label{def:F_12}
\min_{P_{int}\succeq 0} F_1(P_{int})+F_2(P_{int}),
\end{equation}
where
\begin{equation}
\label{def:F_1}
\begin{split}
&F_1(P_{int}):= \min_{Q_1\succeq 0} \log \det (P_1+W_1) -\log \det Q_1\\
& \quad \quad \quad \text{s.t.} \quad Q_1\preceq P_1+W_1, \quad Q_1\preceq P_{int}.
\end{split}
\end{equation}
\begin{equation}
\label{def:F_2}
\begin{split}
&F_2(P_{int}):= \min_{Q_2\succeq 0} \log \det (P_{int}+W_2) -\log \det Q_2\\
& \quad \quad \quad \text{s.t.} \quad Q_2\preceq P_{int}+W_2, \quad Q_2\preceq P_2.
\end{split}
\end{equation}
and show that the optimal value is attained by $P^*_{int}= P_1+W_1$. First, we show that optimality is attained by some $P^*_{int} \preceq P_1+W_1$.
\begin{proposition}
\label{prop_a}
There exists an optimal solution for (\ref{def:F_12}) that belongs to the set $\mathbb{P}^{ineq}_{int}:= \{P_{int} \succeq 0: P_{int} \preceq P_1+W_1\}$.
\end{proposition}
\begin{proof}
We show that for any optimal solution candidate $P_{int}^* \succeq 0$, there exists an element $P'_{int}\in \mathbb{P}^{ineq}_{int}$ such that
\[F_1(P'_{int})+F_2(P'_{int}) \leq F_1(P_{int}^*)+F_2(P_{int}^*).\]
Set $P_{int}=P^*_{int}$ in (\ref{def:F_1}), and let $Q^*$ be the unique optimal solution, i.e., $F_1(P^*_{int})= \log \det (P_1+W_1) -\log \det Q^*$. Take $P'_{int}=Q^*$ as a new solution candidate. Note that $Q^* \in \mathbb{P}^{ineq}_{int}$ since it is a feasible solution for (\ref{def:F_1}) with $P_{int}=P^*_{int}$. Additionally, $Q_1=Q^*$ is a feasible point for (\ref{def:F_1}) with $P_{int}=P'_{int}$. Therefore,
\begin{equation}
\label{ineq1}
F_1(P'_{int}) \leq F_1(P^*_{int}).
\end{equation}
Moreover, since $P'_{int}\preceq P^*_{int}$, Lemma~\ref{lem:app} (applied with $X_1=P'_{int}+W_2$ and $X_2=P^*_{int}+W_2$) gives
\begin{equation}
\label{ineq2}
F_2(P'_{int}) \leq F_2(P^*_{int}).
\end{equation}
Finally, (\ref{ineq1}) and (\ref{ineq2}) yield $F_1(P'_{int})+F_2(P'_{int}) \leq F_1(P_{int}^*)+F_2(P_{int}^*)$.
\end{proof}
\begin{proposition}
\label{prob_b}
The optimum of (\ref{def:F_12}) is attained at $P_{int}=P_1+W_1$.
\end{proposition}
\begin{proof}
Let $P^*_{int} \in \mathbb{P}^{ineq}_{int}$ (i.e., $P^*_{int} \preceq P_1+W_1$) be any solution candidate from Proposition~\ref{prop_a}. We will show that if we pick a new solution $P'_{int}:=P_1+W_1$, then
\begin{equation}
\label{equ:appb_final}
F_1(P'_{int})+F_2(P'_{int}) \leq F_1(P_{int}^*)+F_2(P_{int}^*).
\end{equation}
Note that for $P_{int} = P^*_{int} \preceq P_1+W_1$, the constraints in (\ref{def:F_1}) reduce to $Q_1 \preceq P^*_{int}$. Hence, $Q_1=P^*_{int}$ is the unique minimizer and
\begin{align}
\nonumber
F_1(P^*_{int})&=\log \det (P_1+W_1) -\log \det P^*_{int}\\ \label{equ:appb_final_1}
&= \log \det (P'_{int}) -\log \det P^*_{int}.
\end{align}
On the other hand, it is easy to verify that
\begin{align}
\label{equ:appb_final_2}
F_1(P'_{int})=0.
\end{align}
Let $Q^*_2$ be the unique minimizer of (\ref{def:F_2}) for $P_{int}=P^*_{int}$. Then,
\begin{align}
\label{equ:appb_final_3}
F_2(P^*_{int})&=\log \det (P^{*}_{int}+W_2) -\log \det Q_2^*
\end{align}
On the other hand, since $P^*_{int} \preceq P'_{int}$, $Q^*_2$ is a feasible point for (\ref{def:F_2}) with $P_{int}=P'_{int}$. Hence,
\begin{align}
\label{equ:appb_final_4}
F_2(P'_{int})\leq \log \det (P'_{int}+W_2) -\log \det Q_2^*
\end{align}
Now, from (\ref{equ:appb_final_1}), (\ref{equ:appb_final_2}), (\ref{equ:appb_final_3}), and (\ref{equ:appb_final_4}), we have
\begin{align*}
&F_1(P^*_{int})+F_2(P^*_{int})- F_1(P'_{int})-F_2(P'_{int})\\
& \geq \log \det (P'_{int}) -\log \det P^*_{int} \\
& \quad + \log \det (P^{*}_{int}+W_2) -\log \det Q_2^*\\
& \quad -\log \det (P'_{int}+W_2) +\log \det Q_2^*\\
& = \log \det (P^{*}_{int}+W_2)-\log \det P^*_{int}\\
& \quad -\log \det (P'_{int}+W_2) + \log \det (P'_{int})\\
&= \log \det(I+W_2^{\frac{1}{2}} P^{*-1}_{int}W_2^{\frac{1}{2}})- \log \det(I+W_2^{\frac{1}{2}} P'^{-1}_{int}W_2^{\frac{1}{2}}) \geq 0,
\end{align*}
since $P^{*-1}_{int} \succeq P'^{-1}_{int}$. Therefore, (\ref{equ:appb_final}) holds.
\end{proof}
Inequality (A) is an application of Proposition~\ref{prob_b} with $W_1= \|x_{int}-x_1\|W$ and $W_2=\|x_2-x_{int}\|W$, together with the standard triangle inequality $\|x_2-x_1\|\leq \|x_{int}-x_1\|+\|x_2-x_{int}\|$.
\section{Proof of Theorem~\ref{theo:loss-lessmod}}
\label{ap:C}
We prove the existence by constructing a collision-free lossless chain $\{b'_k\}_{k=0, 1, \dots, K-1}$ from the initial chain $\{b_k\}_{k=0, 1, \dots, K-1}$. The construction is performed in $K-1$ steps, where in the $k$-th step, the belief $b_{k}=(x_k,P_k)$ is shrunk to $b'_k=(x_k,P'_k)$, where $ P'_k \preceq P_k$ and the transition $b_{k-1}$ to $b'_k$ becomes lossless. More precisely, $P'_k$ is selected as the minimizer of \eqref{eq:def_D} for computing $\mathcal{D}(b_{k-1},b_k)$, where from \eqref{eq:d_info_general1} we have $P'_k \preceq P_k$. The fact that $P_k$ does not increase after performing a step automatically guarantees that the transitions $b_{k-1}\rightarrow b'_k$ and $b'_{k}\rightarrow b_{k+1}$ are collision-free. (They reside completely inside $b_{k-1}\rightarrow b_k$ and $b_k \rightarrow b_{k+1}$, respectively.) This means that the chain stays collision-free after each step, and in particular the final chain is collision-free.
Next, we show that after step $k \in \{1, \dots, K-1\}$, the length of the chain does not increase. Note that at step $k$, no transitions change except those to and from the $k$-th belief.
For the transition into the $k$-th belief, it is trivial to see that $\mathcal{D}(b_{k-1},b_k)=\mathcal{D}(b_{k-1},b'_k)$, as $P'_k$ is the minimizer of \eqref{eq:def_D}. For the transition out of the $k$-th belief, we have $\mathcal{D}(b'_k, b_{k+1}) \leq \mathcal{D}(b'_k, b_{k}) + \mathcal{D}(b_k, b_{k+1})$ from the triangle inequality established in Theorem~\ref{theo:tria}. Since $x'_k=x_k$ and $P'_k \preceq P_k$, it is easy to verify that $\mathcal{D}(b'_k, b_{k})=0$, which yields $\mathcal{D}(b'_k, b_{k+1}) \leq \mathcal{D}(b_k, b_{k+1})$. Hence the length of the chain does not increase after step $k$, which completes the proof.
\section{Proof of Theorem~\ref{theo:continuity2}}
\label{ap:A}
\subsection{Preparation}
\begin{lemma}
\label{lem1}
Let $M$ and $N$ be $d \times d$ symmetric matrices such that $M \succeq \kappa I$ and $N \succeq \kappa I$. Then
\[
|\log\det M - \log\det N| \leq \frac{d}{\kappa }\bar{\sigma}(M-N).
\]
\end{lemma}
\begin{proof}
Set $\mathcal{X}=\{X\in \mathbb{S}^d: X \succeq \kappa I\}$ and define $f: \mathcal{X}\rightarrow \mathbb{R}$ by $f(X)=\log\det X$.
The directional derivative $\nabla_Y f(X)$ of $f(X)$ in the direction $Y$ is given by $\nabla_Y f(X)=\text{Tr}(X^{-1}Y)$.
Suppose $M, N \in \mathcal{X}$ and define
\[
X(t)=tM+(1-t)N, \;\; t \in [0, 1].
\]
Since $\nabla_{M-N} f(X(t))=\text{Tr}(X(t)^{-1}(M-N))$, by the mean value theorem, there exists $t\in [0, 1]$ such that
\begin{align*}
f(M)-f(N)&=\nabla_{M-N} f(X(t)) \cdot (1-0) \\
&=\text{Tr}(X(t)^{-1}(M-N)).
\end{align*}
Now,
\begin{align*}
|f(M)-f(N)|&= |\text{Tr}(X(t)^{-1}(M-N))| \\
&\leq \|X^{-1}(t)\|_F \|M-N\|_F \\
&\leq d \bar{\sigma}(X^{-1}(t)) \bar{\sigma}(M-N) \\
&\leq d \bar{\sigma}(\frac{1}{\kappa }I) \bar{\sigma}(M-N) \\
&=\frac{d}{\kappa } \bar{\sigma}(M-N).
\end{align*}
This completes the proof.
\end{proof}
\begin{lemma}
\label{lem2}
Let $X, Y$ and $\Theta$ be symmetric matrices such that $0\preceq X \preceq \frac{1}{\epsilon}I$ and $0\preceq Y \preceq \frac{1}{\epsilon}I$. Then
\[
\bar{\sigma}(X\Theta X-Y\Theta Y)\leq \frac{3\bar{\sigma}(\Theta)}{\epsilon}\bar{\sigma}(X-Y).
\]
\end{lemma}
\begin{proof}
By assumption, we have $\bar{\sigma}(X-Y)\leq \frac{1}{\epsilon}$. Notice that
\begin{align*}
X\Theta X-Y\Theta Y&=(Y+X-Y)\Theta (Y+X-Y) -Y\Theta Y \\
&=(X-Y)\Theta Y + Y \Theta (X-Y) + (X-Y) \Theta (X-Y).
\end{align*}
Therefore,
\begin{align*}
\bar{\sigma}(X\Theta X-Y\Theta Y)
&\leq 2\bar{\sigma}(X-Y)\bar{\sigma}(\Theta)\bar{\sigma}(Y)+\bar{\sigma}(X-Y)^2 \bar{\sigma}(\Theta) \\
&\leq \frac{2}{\epsilon} \bar{\sigma}(X-Y)\bar{\sigma}(\Theta)+\frac{1}{\epsilon} \bar{\sigma}(X-Y)\bar{\sigma}(\Theta) \\
&=\frac{3\bar{\sigma}(\Theta)}{\epsilon} \bar{\sigma}(X-Y).
\end{align*}
\end{proof}
\begin{lemma}
\label{lem3}
Let $X$ and $Y$ be symmetric matrices and suppose $X \succeq \epsilon I$ and $Y \succeq \epsilon I$ for some $\epsilon>0$. Then,
\[
\bar{\sigma}(X^{-1}-Y^{-1})\leq \frac{1}{\epsilon^2} \bar{\sigma}(X-Y).
\]
\end{lemma}
\begin{proof}
By assumption, we have $\bar{\sigma}(X^{-1})\leq \frac{1}{\epsilon}$ and $\bar{\sigma}(Y^{-1})\leq \frac{1}{\epsilon}$. Therefore,
\begin{align*}
\bar{\sigma}(X^{-1}-Y^{-1})&=\bar{\sigma}(X^{-1}(Y-X)Y^{-1}) \\
&\leq \bar{\sigma}(X^{-1})\bar{\sigma}(Y^{-1})\bar{\sigma}(X-Y) \\
&\leq \frac{1}{\epsilon^2}\bar{\sigma}(X-Y).
\end{align*}
\end{proof}
\begin{lemma}
\label{lem4}
Let $X$ and $Y$ be symmetric matrices. Suppose $X \succeq \epsilon_1 I$ and $Y \succeq \epsilon_2 I$ hold for some $\epsilon_1>0$ and $\epsilon_2>0$. Then,
\[
\bar{\sigma}(X^{\frac{1}{2}}-Y^{\frac{1}{2}})\leq \frac{1}{\sqrt{\epsilon_1}+\sqrt{\epsilon_2}} \bar{\sigma}(X-Y).
\]
\end{lemma}
\begin{proof}
See \cite[Lemma 2.2]{schmitt1992perturbation}.
\end{proof}
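The bounds in Lemmas~\ref{lem1}--\ref{lem4} are elementary to check numerically. The following NumPy sketch (illustrative, with randomly generated matrices; not part of the proofs) verifies each bound on a random instance.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, kappa, eps = 4, 0.5, 0.2
sig = lambda A: np.linalg.norm(A, 2)          # largest singular value
logdet = lambda A: np.linalg.slogdet(A)[1]
rand_psd = lambda: (lambda A: A @ A.T)(rng.standard_normal((d, d)))

def msqrt(A):                                  # PSD matrix square root
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T

M = rand_psd() + kappa * np.eye(d)             # M, N >= kappa I
N = rand_psd() + kappa * np.eye(d)
assert abs(logdet(M) - logdet(N)) <= d / kappa * sig(M - N)  # Lemma 1

X = rand_psd(); X /= eps * sig(X)              # 0 <= X, Y <= (1/eps) I
Y = rand_psd(); Y /= eps * sig(Y)
Th = rand_psd()
assert sig(X @ Th @ X - Y @ Th @ Y) \
    <= 3 * sig(Th) / eps * sig(X - Y)          # Lemma 2

Xe = rand_psd() + eps * np.eye(d)              # Xe, Ye >= eps I
Ye = rand_psd() + eps * np.eye(d)
assert sig(np.linalg.inv(Xe) - np.linalg.inv(Ye)) \
    <= sig(Xe - Ye) / eps**2                   # Lemma 3
assert sig(msqrt(Xe) - msqrt(Ye)) \
    <= sig(Xe - Ye) / (2 * np.sqrt(eps))       # Lemma 4
\end{verbatim}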
\subsection{Continuity of $c(\gamma)$ with respect to the topology of total variation}
Consider transitions from $(x_k, P_k)$ to $(x_{k+1}, P_{k+1})$ and from $(x'_k, P'_k)$ to $(x'_{k+1}, P'_{k+1})$.
Assume the following:
\begin{itemize}
\item There exists a positive constant $\rho$ such that
\[
\rho I \preceq P_k,\quad \rho I \preceq P_{k+1},\quad \rho I \preceq P'_k, \quad \rho I \preceq P'_{k+1}.
\]
\item Perturbations $\Delta x_k:= x'_k-x_k$, $\Delta x_{k+1}:= x'_{k+1}-x_{k+1}$, $\Delta P_k:= P'_k-P_k$, $\Delta P_{k+1}:= P'_{k+1}-P_{k+1}$ are bounded by a constant $\delta < \frac{\rho}{4}$ as
\begin{align}
&\|\Delta x_{k}\|\leq \frac{\delta}{\bar{\sigma}(W)}, \quad \bar{\sigma}(\Delta P_{k})\leq \delta, \nonumber \\
& \|\Delta x_{k+1}\|\leq \frac{\delta}{\bar{\sigma}(W)}, \quad \bar{\sigma}(\Delta P_{k+1})\leq \delta. \label{eq:deviation_delta}
\end{align}
\item Transition from $(x_k, P_k)$ to $(x_{k+1}, P_{k+1})$ is lossless.
\item Transition from $(x'_k, P'_k)$ to $(x'_{k+1}, P'_{k+1})$ is lossless.
\end{itemize}
Based on these assumptions, we have
\begin{align}
&\Big|\mathcal{D}(x'_k, x'_{k+1}, P'_k, P'_{k+1})-\mathcal{D}(x_k, x_{k+1}, P_k, P_{k+1})\Big| \nonumber \\
&\leq \Big|\|x'_{k+1}-x'_k\|\bar{\sigma}(W) - \|x_{k+1}-x_k\|\bar{\sigma}(W) \nonumber \\
&\qquad +\frac{1}{2}\log \det (P'_k+ \|x'_{k+1}-x'_k\|W) - \frac{1}{2}\log \det P'_{k+1} \nonumber\\
&\qquad -\frac{1}{2}\log \det (P_k+ \|x_{k+1}-x_k\|W) + \frac{1}{2}\log \det P_{k+1} \Big| \nonumber\\
&\leq \big| \|x'_{k+1}-x'_k\|- \|x_{k+1}-x_k\| \big| \bar{\sigma}(W) \nonumber\\
&\quad +\frac{1}{2}\Big| \log \det(P'_k+\|x'_{k+1}-x'_k\|W)- \log \det P'_{k+1} \nonumber \\
&\quad\qquad - \log \det(P_k+\|x_{k+1}-x_k\|W)+ \log \det P_{k+1} \Big| \label{eq:continuous1}
\end{align}
Using the triangle inequality:
\begin{align}
\big| \|x_{k+1}+\Delta x_{k+1}-x_k-\Delta x_k \| - \|x_{k+1}-x_k \| \big| &\leq \| \Delta x_{k+1}-\Delta x_k \| \nonumber \\ &\leq \frac{2\delta}{\bar{\sigma}(W)} \label{eq:mu_bound}
\end{align}
the first term of \eqref{eq:continuous1} can be upper bounded by $\|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(W)$.
Writing $\hat{P}_k=P_k+\|x_{k+1}-x_k\|W$, the second term of \eqref{eq:continuous1} can be expressed as
\begin{subequations}
\begin{align}
&\frac{1}{2}\bigg|\log \det \left(\hat{P}_k+\Delta P_k+(\|x'_{k+1}-x'_k\|-\|x_{k+1}-x_k\|)W\right) \nonumber \\
&\qquad -\log \det (P_{k+1}+\Delta P_{k+1}) - \log \det \hat{P}_k + \log \det P_{k+1} \bigg| \nonumber \\
&=\frac{1}{2}\bigg| \log \det \Big(I+\hat{P}_k^{-\frac{1}{2}}\Delta P_k \hat{P}_k^{-\frac{1}{2}} \nonumber \\
&\hspace{10ex}+(\|x'_{k+1}-x'_k\|-\|x_{k+1}-x_k\|)\hat{P}_k^{-\frac{1}{2}}W\hat{P}_k^{-\frac{1}{2}}\Big) \nonumber \\
&\qquad - \log \det (I+P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}}) \bigg| \nonumber \\
&\leq \frac{1}{2}\cdot 4d \bar{\sigma}
\Big( \hat{P}_k^{-\frac{1}{2}}\Delta P_k \hat{P}_k^{-\frac{1}{2}}-P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}} \nonumber \\
&\hspace{10ex}+\left(\|x'_{k+1}-x'_k\|-\|x_{k+1}-x_k\|\right)\hat{P}_k^{-\frac{1}{2}}W\hat{P}_k^{-\frac{1}{2}} \Big) \label{eq:continuous2_a}\\
&\leq 2d \bar{\sigma}\left( \hat{P}_k^{-\frac{1}{2}}\Delta P_k \hat{P}_k^{-\frac{1}{2}}-P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}}\right) \nonumber \\
&\qquad +2d \|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(\hat{P}_k^{-\frac{1}{2}}W\hat{P}_k^{-\frac{1}{2}}) \nonumber \\
&\leq 2d \bar{\sigma}\Big( \hat{P}_k^{-\frac{1}{2}}(\Delta P_k-\Delta P_{k+1}) \hat{P}_k^{-\frac{1}{2}} \nonumber \\
&\hspace{10ex}+\hat{P}_{k}^{-\frac{1}{2}}\Delta P_{k+1} \hat{P}_{k}^{-\frac{1}{2}}-P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}}\Big) \nonumber \\
&\qquad +2d \|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(\hat{P}_k^{-\frac{1}{2}}W\hat{P}_k^{-\frac{1}{2}}) \nonumber \\
&\leq \frac{2d}{\rho}\|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(W) + \frac{2d}{\rho}\bar{\sigma}(\Delta P_k-\Delta P_{k+1}) \nonumber \\
& \qquad +2d \bar{\sigma}\left(\hat{P}_{k}^{-\frac{1}{2}}\Delta P_{k+1} \hat{P}_{k}^{-\frac{1}{2}}-P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}}\right). \label{eq:continuous2_b}
\end{align}
\end{subequations}
To see \eqref{eq:continuous2_a}, notice the following inequalities hold from \eqref{eq:deviation_delta} and \eqref{eq:mu_bound}:
\begin{align*}
&\bar{\sigma}(\hat{P}_k^{-\frac{1}{2}}\Delta P_k \hat{P}_k^{-\frac{1}{2}})
\leq \bar{\sigma}(\hat{P}_k^{-\frac{1}{2}})^2 \bar{\sigma}(\Delta P_k) \leq \frac{\delta}{\rho} < \frac{1}{4}\\
&\bar{\sigma}(P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}})
\leq \bar{\sigma}(P_{k+1}^{-\frac{1}{2}})^2 \bar{\sigma}(\Delta P_{k+1}) \leq \frac{\delta}{\rho} < \frac{1}{4}\\
&\bar{\sigma}\left((\|x'_{k+1}-x'_k\|-\|x_{k+1}-x_k\|) \hat{P}_k^{-\frac{1}{2}} W \hat{P}_k^{-\frac{1}{2}}\right) \\
&\qquad \leq \| \Delta x_{k+1}-\Delta x_k \| \bar{\sigma}(\hat{P}_k^{-\frac{1}{2}})^2 \bar{\sigma}(W) \leq \frac{2\delta}{\rho}<\frac{1}{2}.
\end{align*}
Therefore, we have
\begin{align*}
I\!+\!\hat{P}_k^{-\frac{1}{2}}\Delta P_k \hat{P}_k^{-\frac{1}{2}} \!+(\|x'_{k+1}\!-\!x'_k\|\!-\!\|x_{k+1}\!-\!x_k\|) \hat{P}_k^{-\frac{1}{2}} W \hat{P}_k^{-\frac{1}{2}}& \succeq \frac{1}{4}I \\
I+P_{k+1}^{-\frac{1}{2}}\Delta P_{k+1} P_{k+1}^{-\frac{1}{2}} &\succeq \frac{1}{4}I
\end{align*}
and thus Lemma~\ref{lem1} with $\kappa=\frac{1}{4}$ is applicable.
The last term in \eqref{eq:continuous2_b} is upper bounded as follows:
\begin{subequations}
\begin{align}
&2d \bar{\sigma}\left( \hat{P}_k^{-\frac{1}{2}}\Delta P_{k+1} \hat{P}_k^{-\frac{1}{2}} - P_{k+1}^{-\frac{1}{2}} \Delta P_{k+1} P_{k+1}^{-\frac{1}{2}}\right) \nonumber \\
&\leq 2d \cdot \frac{3\bar{\sigma}(\Delta P_{k+1})}{\sqrt{\rho}}
\bar{\sigma}\left( \hat{P}_k^{-\frac{1}{2}}-P_{k+1}^{-\frac{1}{2}} \right) \label{eq:continuous3_a} \\
&\leq 2d \cdot \frac{3\delta}{\sqrt{\rho}}\cdot \frac{1}{\rho}\bar{\sigma}\left( \hat{P}_k^{\frac{1}{2}}-P_{k+1}^{\frac{1}{2}} \right) \label{eq:continuous3_b} \\
&\leq 2d \cdot \frac{3\delta}{\sqrt{\rho}}\cdot \frac{1}{\rho} \cdot \frac{1}{2\sqrt{\rho}}\bar{\sigma}\left( \hat{P}_k-P_{k+1} \right) \label{eq:continuous3_c} \\
&= \frac{3\delta d }{\rho^2}\bar{\sigma}\left(P_k-P_{k+1}+\|x_{k+1}-x_k\|W\right) \nonumber \\
&\leq \frac{3\delta d}{\rho^2} \bigg\{ \bar{\sigma}(P_k-P_{k+1})+\|x_{k+1}-x_k\|\bar{\sigma}(W) \bigg\}. \nonumber
\end{align}
\end{subequations}
Lemmas \ref{lem2}, \ref{lem3}, and \ref{lem4} were used in steps \eqref{eq:continuous3_a}, \eqref{eq:continuous3_b}, and \eqref{eq:continuous3_c}, respectively.
Combining the results so far, we obtain an upper bound for $\big|\mathcal{D}(x'_k, x'_{k+1}, P'_k, P'_{k+1})-\mathcal{D}(x_k, x_{k+1}, P_k, P_{k+1})\big|$ as follows:
\begin{align}
&\Big|\mathcal{D}(x'_k, x'_{k+1}, P'_k, P'_{k+1})-\mathcal{D}(x_k, x_{k+1}, P_k, P_{k+1}) \Big| \nonumber \\
& \leq \left(1+\frac{2d}{\rho}\right)\|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(W)
+\frac{2d}{\rho}\bar{\sigma}(\Delta P_{k+1}-\Delta P_k) \nonumber \\
& \qquad +\frac{3\delta d }{\rho^2} \Big\{\bar{\sigma}(P_{k+1}-P_k) +\|x_{k+1}-x_k\|\bar{\sigma}(W) \Big\}. \label{continuous4}
\end{align}
The result in this subsection is summarized as follows:
\begin{lemma}
\label{lem:main2}
Let $\delta$ and $\rho$ be any constants satisfying $0< \delta <\frac{\rho}{4}$.
Suppose that $(x_k, P_k)$, $(x_{k+1}, P_{k+1})$, $(x'_k, P'_k)=(x_k+\Delta x_k, P_k+\Delta P_k)$ and $(x'_{k+1}, P'_{k+1})=(x_{k+1}+\Delta x_{k+1}, P_{k+1}+\Delta P_{k+1})$ are points in the uncertain configuration space $\mathbb{R}^d\times \mathbb{S}_\rho^d$.
If the transitions from $(x_k, P_k)$ to $(x_{k+1}, P_{k+1})$ and from $(x'_k, P'_k)$ to $(x'_{k+1}, P'_{k+1})$ are both lossless, then there exists a positive constant $L_\rho$ such that
\begin{align*}
&|\mathcal{D}(x'_k, x'_{k+1}, P'_k, P'_{k+1})-\mathcal{D}(x_k, x_{k+1}, P_k, P_{k+1})| \\
& \leq L_\rho \bigg[ \|\Delta x_{k+1}-\Delta x_k\|\bar{\sigma}(W) + \bar{\sigma}(\Delta P_{k+1}-\Delta P_k) \\
&\qquad + \delta \Big\{ \|x_{k+1}-x_k\|\bar{\sigma}(W)+\bar{\sigma}(P_{k+1}-P_k) \Big\}
\bigg].
\end{align*}
\end{lemma}
\begin{proof}
The result follows from \eqref{continuous4} by setting $L_\rho=\max \left\{1+\frac{2d}{\rho}, \frac{3d}{\rho^2} \right\}$.
\end{proof}
\subsection{Proof of Theorem~\ref{theo:continuity2}}
We prove that the choice $\delta = \frac{\epsilon}{L_\rho (1+|\gamma|_{\text{TV}})}$ suffices, where $L_\rho$ is defined in Lemma~\ref{lem:main2}.
Suppose $\gamma(t)=(x(t), P(t))$, $\gamma'(t)=(x'(t), P'(t))$ and both $\gamma$ and $\gamma'$ are finitely lossless with respect to a partition $\mathcal{P}_0$.
Let $\mathcal{P}=(0=t_0<t_1<\cdots < t_K=1)$ be any partition such that $\mathcal{P}\supseteq \mathcal{P}_0$. Then the following chain of inequalities holds:
\begin{subequations}
\label{eq:chain}
\begin{align}
&\big|c(\gamma'; \mathcal{P})-c(\gamma; \mathcal{P})\big| \nonumber \\
&\leq \sum_{k=0}^{K-1}\Big|\mathcal{D}(x'(t_k), x'(t_{k+1}), P'(t_k), P'(t_{k+1})) \nonumber \\
&\qquad \qquad -\mathcal{D}(x(t_k), x(t_{k+1}), P(t_k), P(t_{k+1})) \Big| \\
&\leq L_\rho \sum_{k=0}^{K-1} \bigg[
\|x'(t_{k+1})-x(t_{k+1})-x'(t_k)+x(t_k)\|\bar{\sigma}(W) \nonumber \\
&\quad +
\bar{\sigma} \big(P'(t_{k+1})-P(t_{k+1})-P'(t_k)+P(t_k)\big) \nonumber \\
&\quad + \delta \Big\{ \|x(t_{k+1})-x(t_k)\| \bar{\sigma}(W)+\bar{\sigma}\big(P(t_{k+1})-P(t_k)\big) \Big\}
\bigg] \label{eq:15d} \\
&= L_\rho \big( V(\gamma'-\gamma; \mathcal{P}) + \delta V(\gamma; \mathcal{P})\big) \\
&\leq L_\rho \big(|\gamma'-\gamma|_{\text{TV}} + \delta |\gamma|_{\text{TV}} \big)\\
&\leq L_\rho (1 + |\gamma|_{\text{TV}}) \delta\\
&=\epsilon
\end{align}
\end{subequations}
The inequality \eqref{eq:15d} follows from Lemma~\ref{lem:main2}, noticing that both $\gamma$ and $\gamma'$ are finitely lossless with respect to $\mathcal{P}$.
Let $\{\mathcal{P}_i\}_{i\in\mathbb{N}}$ and $\{\mathcal{P}'_i\}_{i\in\mathbb{N}}$ be sequences of partitions such that $\mathcal{P}_i \supseteq \mathcal{P}_0$ and $\mathcal{P}'_i \supseteq \mathcal{P}_0$ for each $i\in\mathbb{N}$, and
\begin{equation}
\label{eq:p_sequence}
\lim_{i \rightarrow \infty} c(\gamma; \mathcal{P}_i) =c(\gamma), \;\;
\lim_{i \rightarrow \infty} c(\gamma'; \mathcal{P}'_i) =c(\gamma').
\end{equation}
Let $\{\mathcal{P}''_i\}_{i\in\mathbb{N}}$ be the sequence of partitions such that for each $i\in\mathbb{N}$, $\mathcal{P}''_i$ is a common refinement of $\mathcal{P}_i$ and $\mathcal{P}'_i$. Since
$c(\gamma;\mathcal{P}_i) \leq c(\gamma;\mathcal{P}''_i) \leq c(\gamma)$ and
$c(\gamma';\mathcal{P}'_i) \leq c(\gamma';\mathcal{P}''_i) \leq c(\gamma')$
hold for each $i\in\mathbb{N}$, \eqref{eq:p_sequence} implies
\begin{equation}
\label{eq:p_sequence2}
\lim_{i \rightarrow \infty} c(\gamma; \mathcal{P}''_i) =c(\gamma), \;\;
\lim_{i \rightarrow \infty} c(\gamma'; \mathcal{P}''_i) =c(\gamma').
\end{equation}
Now, since the chain of inequalities \eqref{eq:chain} holds for any partition $\mathcal{P}\supseteq \mathcal{P}_0$ and since $\mathcal{P}''_i \supseteq \mathcal{P}_0$ for each $i\in\mathbb{N}$,
\[
\big| c(\gamma; \mathcal{P}''_i) - c(\gamma'; \mathcal{P}''_i) \big| \leq \epsilon
\]
holds for all $i\in\mathbb{N}$. Therefore, we obtain
$|c(\gamma)-c(\gamma')|=\lim_{i\rightarrow \infty} |c(\gamma; \mathcal{P}''_i) - c(\gamma'; \mathcal{P}''_i)| \leq \epsilon$.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Heavy duty truck driving performance is highly sensitive to configuration and operating details \cite{lattemann2004predictive, kirches2013mixed, bae2003parameter, vahidi2003simultaneous, druzhinina2002speed, lu2005heavy}. A typical passenger car achieves fairly consistent braking distances, while the braking distance of a truck varies significantly within a single trip: before and after hooking up a trailer, before and after trailer loading, and the distance can double as the brakes warm up during the trip. Modeling accuracy has been shown to be a significant (and limiting) factor for precision driving maneuvers \cite{lu2017integrated, spielberg2019neural}. Deep learning has been shown to improve modeling accuracy compared to state-of-the-art classical models for passenger cars \cite{da2019modelling, spielberg2019neural}. However, the literature on deep-learning-based modeling of heavy duty trucks is still sparse. Heavy duty trucks are typically configured and tailor-built to their expected mission requirements. The detailed underlying physics and internal states of trucks are configuration specific and often differ from the more exhaustively modeled components of passenger cars. In this article, we develop a deep-learning-based longitudinal model for heavy duty trucks and validate its modeling accuracy for different configurations, both in simulation and on real-physical trucks.
Model-free deep reinforcement learning has been shown to achieve improved performance in many applications, in addition to simplifying several previously intractable problems. Transfer of learned policies from simulation is often challenged, however, by the reality gap (the mismatch between a model and the corresponding real-physical system). This article studies the application of deep learning to longitudinal modeling of heavy duty trucks and its use in minimizing the reality gap for transferable deep reinforcement learning continuous control policies, as shown in Figure~\ref{fig:deep-truck-process-diagram}.
The process uses deep learning to build deep replica models for each truck in a real vehicle pool. These deep replica models are used to develop deep environments suitable for deep reinforcement learning continuous control tasks. The article takes into consideration several of the factors that traditionally impact, or are expected to impact, modeling and control performance, such as vehicle mechanical configuration, operational scope, setup, and traffic scenarios.
Deep learning and deep reinforcement learning offer potential for improved performance at the expense of guarantees, such as bounds on control error, that are better understood using classical methods. To compensate, more in-depth evaluation is required. To simplify the investigation and avoid a combinatorial explosion of experiments, however, this article focuses on presenting the process and the experimental evaluation of the relevant components, and leaves additional investigations such as robustness to later articles.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/deep-process-diagram/deep-truck-process-diagram.png} \\
\vspace{5pt}\hline\vspace{5pt}\\
\includegraphics[width=0.6\linewidth]{figures/migrated-deep-truck/deep-process-diagram/deep-truck-pools-listings.png}
\caption{The deep truck process for the development of field-testable deep RL continuous control policies for longitudinal automation of heavy duty trucks and a sample of the pools and operational variations relevant to this work.}
\label{fig:deep-truck-process-diagram}
\end{figure*}
\section{Modeling problem formulation}
\label{sec:deep-learning-model}
In this article, we formulate heavy duty truck longitudinal dynamics modeling as a time-series supervised deep learning problem. The longitudinal dynamics model $f_{DT}$, detailed in the next section, is represented as:
\begin{equation}
{\begin{bmatrix} x(k+1) \\ y(k+1) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} x(k) \\ y(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-base-form},
\end{equation}
where $x$ represents the internal state, $y$ the truck response, $u$ the controllable inputs to the truck, $w$ the uncontrollable conditions relevant to the dynamics, and $\Phi$ the model parameters. The initial conditions are given by $x(k = 0) = x_o$ and $y(k = 0) = y_o$.
Model parameters $\Phi$ are trained by solving the following optimization problem:
\[
\min_{\Phi} \quad \sum_{k}
{
\left\lVert
\hat{y}(k| \Phi) - y(k)
\right\rVert_2^2
}
\]
where $\hat{y}$ is the model-based estimate of $y$ given ground truth historical driving time-series data $y$, $u$, $w$, initial state vector $x(k=0)$, and proper recursive substitution of the estimate of the internal state vector $\hat{x}(k| \Phi)$ for $x$ as follows:
\begin{equation}
{\begin{bmatrix} \hat{x}(k+1| \Phi) \\ \hat{y}(k+1| \Phi) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} \hat{x}(k| \Phi) \\ y(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-training-form}.
\end{equation}
The model is trained using a $K$-step unfolded time-series mini-batch Adagrad (Adaptive stochastic gradient) algorithm \cite{duchi2011adaptive, mcmahan2010adaptive}. Each gradient step is estimated from M independent samples (time-series model evaluations) each of $K$ time steps as follows:
\[
\sum_{m = 0 \dots M} \sum_{k = 0 \dots K}
{
\left\lVert
\hat{y}_{n, m}(k| \Phi) - y_{n, m}(k)
\right\rVert_2^2
},
\]
where the sub-indices abstract the time-series splits and $n = 0 \dots N$ is the mini-batch index. Training is initialized with random deep network parameters and with $\hat{x}_{n, m}(k = 0| \Phi) = 0$ for all $n$ and $m$. Given the trained model, truck simulations are generated from:
\begin{equation}
{\begin{bmatrix} \hat{x}(k+1| \Phi) \\ \hat{y}(k+1| \Phi) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} \hat{x}(k| \Phi) \\ \hat{y}(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-deployment-form},
\end{equation}
and initialized using $\hat{x}(k = 0| \Phi) = 0$ and $\hat{y}(k = 0) = y_o$, where $y_o$ represents the observable initial condition of the truck dynamics.
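To make the training form concrete, the following is a minimal PyTorch sketch of the $K$-step unfolded training step; it assumes an \texttt{f\_dt} module with signature \texttt{f\_dt(uw, x, y) -> (x\_next, y\_next)} and illustrative tensor shapes, and it is a sketch rather than the implementation used for the experiments.
\begin{verbatim}
import torch

def train_step(f_dt, optimizer, u, w, y, K):
    # One Adagrad step over a mini-batch of M unfolded series of K steps.
    # Shapes: u (M, K, dim_u), w (M, K, dim_w), y (M, K+1, dim_y).
    M = y.shape[0]
    x_hat = torch.zeros(M, f_dt.state_dim)   # x_hat(k=0) = 0
    loss = 0.0
    for k in range(K):
        # Training form: the measured output y(k) is fed back, while the
        # internal state estimate x_hat(k) is recursively substituted.
        uw = torch.cat([u[:, k], w[:, k]], dim=-1)
        x_hat, y_hat = f_dt(uw, x_hat, y[:, k])
        loss = loss + ((y_hat - y[:, k + 1]) ** 2).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adagrad(f_dt.parameters(), lr=1e-2)
\end{verbatim}
The deployment form replaces \texttt{y[:, k]} with the model's own previous output estimate after the initial step.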
Variable instantiations are detailed for each respective experiment in the later sections; however, we assume in general, for longitudinal dynamics,
\begin{align*}
u(k) &= \begin{bmatrix} E_\text{cmd}(k) \\ B_\text{cmd}(k) \end{bmatrix} \\
y(k) &= \begin{bmatrix} v(k) \\ a(k) \\ f_\text{rate}(k) \end{bmatrix} \\
w(k) &= \theta_\text{rdg}(k),
\end{align*}
where $E_\text{cmd}(k)$ is the engine command, $B_\text{cmd}(k)$ is the brake command, $v(k)$ is vehicle speed, $a(k)$ is vehicle acceleration, $f_\text{rate}(k)$ is fuel rate, and $\theta_\text{rdg}(k)$ is road grade, each at discrete time step $k$.
\section{Deep model}
\label{sec:deep-truck}
This section presents the \(f_{DT}\) model structure used to represent the longitudinal dynamics of the heavy duty truck. The model assumes that only the controllable inputs to the truck, the uncontrollable driving environment variables, and the truck responses are known and measurable, while the configuration of the truck and the relevant internal state variables are not specified.
The state model,
\[x(k+1) = H(u(k), w(k) | x(k), y(k)),\]
represents integrated state observer and tracker equations together with state update and encoder equations. The state model, $H(\cdot)$, is represented in this article as a long short-term memory (LSTM) recurrent neural network (RNN). The use of a single deep network unit to represent this model enables parameter sharing across the state observer, tracker, updater, and encoder functions.
The output model,
\[y(k) = G(x(k)),\]
represents an integrated state decoder equation and an explicit output constraints model. The output model is represented by a cascade of a state decoder $D$ and an explicit constraints model $C$ such that $y(k) = C(D(x(k)))$. The state decoder is implemented as a fully connected feedforward neural network parameterized by $\Phi_{nn}$. In the remainder of this article, we implement a discrete-time longitudinal kinematics model as the explicit constraint model for the longitudinal response output variables:
\[
v(k + 1) = v(k) + a(k) \cdot dt,
\]
where $v(k)$ and $a(k)$ are longitudinal velocity and acceleration respectively, and $dt$ is the discrete time step. The other output variables are left unconstrained.
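A minimal PyTorch sketch of this structure, compatible with the training loop sketched in Section~\ref{sec:deep-learning-model}, is given below; the layer sizes, the output ordering $y=(v, a, f_\text{rate})$, and the packing of the LSTM hidden and cell states into $x$ are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class DeepTruckModel(nn.Module):
    # Sketch of f_DT: LSTM state model H, feedforward decoder D, and the
    # explicit kinematic constraint C on the velocity output.
    def __init__(self, dim_u=2, dim_w=1, dim_y=3, hidden=64, dt=0.1):
        super().__init__()
        self.state_dim = 2 * hidden          # pack LSTM (h, c) into x
        self.dt = dt
        self.H = nn.LSTMCell(dim_u + dim_w + dim_y, hidden)
        self.D = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                               nn.Linear(64, 2))  # decodes [a, f_rate]

    def forward(self, uw, x, y):
        h, c = x.chunk(2, dim=-1)            # unpack internal state
        h, c = self.H(torch.cat([uw, y], dim=-1), (h, c))
        a, f_rate = self.D(h).unbind(dim=-1)
        v = y[..., 0] + a * self.dt          # constraint: v(k+1)=v(k)+a dt
        y_next = torch.stack([v, a, f_rate], dim=-1)
        return torch.cat([h, c], dim=-1), y_next
\end{verbatim}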
\section{Driving cycles for data collection}
\label{sec:driving-cycles}
In this article, we assume that the internal dynamics of trucks can be observed from datasets where $y = (v, a)$, $u = (E_\text{cmd}, B_\text{cmd})$, and $w = \theta_\text{rdg}$ are jointly spanning. We consider that, without specialized driving data collection facilities, a human driver is most practical for data collection. Internal truck control signals $u$ are often not accessible through the human driver interface (pedals), but are instead processed through vehicle manufacturer proprietary control systems, as shown in Figure~\ref{fig:truck-interfaces-diagram}. We thus approximate such a spanning dataset by a driving cycle consisting of (1) $w$- and $y$-spanning arbitrary acceleration/deceleration profiles, (2) $w$- and $v$-spanning coasting ($u = 0$), and (3) $w$- and $B_\text{cmd}$-spanning braking to zero speed.
For the field experiments presented in this article, these instructions were given to a human driver to execute for data collection. In simulation, we used a random generative model to approximate such a driving cycle. The generative model utilizes a moving average random walk model as a road profile generator. Coasting and braking to zero episodes are generated using direct randomized initializations. The spanning arbitrary acceleration/deceleration profiles were generated from the model presented in the next section.
\subsection{Generative model for state space spanning driving cycles}
For the numerical experiments presented in this article, we simulate arbitrary driving cycles using a speed profile generative model based on a time-adaptive unstable stochastic speed controller. We designed it as a random speed profile (driving cycle) generator that samples the state space ``fairly'' uniformly.
A double integrator model is used with a hard saturation limit at the desired maximum and minimum speeds as follows:
\[
v(t) = \text{max}(\text{min}(v(t-dt) + a(t) \cdot dt, v_\text{max}), v_\text{min}),
\]
where $v_\text{min}$ and $v_\text{max}$ are desired minimum and maximum speeds of generated profile. Acceleration is sampled from a normal distribution as:
\[
a(t) = \mathcal{N}(\mu_\text{a, scaling} \cdot \mu_\text{a}(t), \sigma_\text{a, scaling} \cdot \sigma_\text{a}(t)),
\]
where $\mu_\text{a, scaling}$ and $\sigma_\text{a, scaling}$ are tuning parameters.
Acceleration statistics are designed based on speed dependant unstable feedback control. Average acceleration is given by:
\[
\mu_\text{a}(t) = 1 - \frac{v(T_i)}{v_\text{ref}},
\]
where $v_\text{ref}$ is control reference speed, here set to $\frac{v_\text{min}+v_\text{max}}{2}$. The standard deviation is designed to allow for bursts of spontaneous high accelerations but discourage it at extreme speeds ($v_\text{min}$, and $v_\text{max}$) as follows:
\[
\sigma_\text{a}(t) = \frac{v(T_i)}{v_\text{ref}} \cdot \left(1 - \frac{v(T_i)}{v_\text{max}}\right).
\]
$T_i$ is used for acceleration-based adaptive temporal discretization when sampling the acceleration statistics, and is designed to make high acceleration episodes short lived. When $t$ reaches $T_{i+1}$, the non-negative integer index is updated to $i \coloneqq i + 1$ and the next switching time is sampled as follows:
\[
T_{i+1} = T_i + \lceil \text{max}(\mathcal{N}(\mu_T) \cdot (1 - | \mu_\text{a}(t) | ), dt) \rceil,
\]
where $\mu_T$ is a tuning parameter.
To smooth out the noise, we pass the generated acceleration signal through a moving average filter to get $a_f(t)$, and re-integrate speed with a sigmoid-based soft saturation operator as follows:
\[
v_f(t) = \frac{\max(v_f(t-1) + a_f(t) \cdot dt, v_\text{min})}{1 + e^{\frac{1}{2} \cdot (\max(v_f(t-1) + a_f(t) \cdot dt, v_\text{min}) - v_\text{max})}},
\]
noting that the acceleration signal has to be recalculated, as needed, from the speed signal after this step.
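A minimal NumPy sketch of this generator is shown below; it omits the moving-average smoothing and soft saturation steps, and the tuning values are illustrative assumptions rather than the settings used in our experiments.
\begin{verbatim}
import numpy as np

def generate_cycle(T=600.0, dt=0.1, v_min=0.0, v_max=35.0,
                   mu_scale=1.0, sigma_scale=1.0, mu_T=5.0, seed=0):
    rng = np.random.default_rng(seed)
    v_ref = 0.5 * (v_min + v_max)
    n = int(T / dt)
    v = np.empty(n)
    v[0] = rng.uniform(v_min, v_max)
    t_next, a = 0.0, 0.0
    for k in range(1, n):
        t = k * dt
        if t >= t_next:                    # resample at t = T_{i+1}
            v_held = v[k - 1]              # v(T_i)
            mu_a = 1.0 - v_held / v_ref    # unstable feedback mean
            sigma_a = (v_held / v_ref) * (1.0 - v_held / v_max)
            a = rng.normal(mu_scale * mu_a,
                           sigma_scale * max(sigma_a, 0.0))
            # keep high-acceleration episodes short lived
            t_next = t + max(rng.normal(mu_T) * (1.0 - abs(mu_a)), dt)
        v[k] = np.clip(v[k - 1] + a * dt, v_min, v_max)  # saturation
    return v
\end{verbatim}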
\section{Deep-RL continuous longitudinal control}
\label{sec:deep-rl-control}
In this article, we use deep reinforcement learning to design end-to-end heavy duty truck controllers, allowing us to offline (1) design and tune the controller, (2) calibrate the control module to the specific physics of each truck, and (3) design an embedded state observer/tracker. In developing these controllers we assume limited observable input and output (IO), unknown truck mechanical configuration, and unknown relevant internal state. We formulate the problem as a \emph{Partially Observable Markov Decision Process} (POMDP) and solve it using the deep reinforcement learning framework.
\subsection{POMDP and The deep-RL framework}
Continuous control design problems can be formulated as deep-RL problems modeled as a POMDP defined by the tuple \((S, P, OS, OP, A, r, \rho_o, \gamma, T)\), where \(S\) represents the state space of an RL environment (system); \(P\) is the state transition probability space (governing the dynamics of the system); \(OS\) represents the observable state space (the space of system outputs); \(OP\) is the probability distribution of the observation space (governing the dynamics of the observation model); \(A\) represents the action space (actuation and control variables) of an RL agent (a decision function, a policy, or a controller); \(r\) is the reward function (system performance metric); \(\rho_o\) is the initial state distribution; \(\gamma\) is the reward discount factor over time; and \(T\) is the time horizon. In this manuscript, we use model-free policy gradient learning \cite{schulman2015trust, duan2016benchmarking} to computationally optimize the expected discounted cumulative reward for an agent policy $\pi_\theta$ parameterized by $\theta$:
\[
\theta^* = \underset{\theta}{\mathrm{argmax}} \sum_{t=1}^{T} {E_{(s_t, a_t) \sim p_{\theta}(s_t, a_t)} \left[ r(s_t, a_t) \right] }
\]
where $s_t$ and $a_t$ are the state and action at time step $t$, and $p_{\theta}$ is the probability distribution over the state and action spaces.
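For illustration, the following is a minimal REINFORCE-style sketch of this objective in PyTorch; the experiments in this article use trust-region policy gradient methods via RLLab, so this simpler estimator is an illustrative assumption, not the training algorithm used.
\begin{verbatim}
import torch

def policy_gradient_step(policy, optimizer, trajectories, gamma=0.9999):
    # trajectories: list of (obs, actions, rewards) tensors per rollout.
    loss = 0.0
    for obs, actions, rewards in trajectories:
        returns = torch.zeros(len(rewards))  # discounted returns-to-go
        g = 0.0
        for t in reversed(range(len(rewards))):
            g = rewards[t] + gamma * g
            returns[t] = g
        dist = policy(obs)                   # e.g., Gaussian MLP policy
        loss = loss - (dist.log_prob(actions).sum(-1) * returns).mean()
    optimizer.zero_grad()
    (loss / len(trajectories)).backward()
    optimizer.step()
\end{verbatim}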
\subsection{Deep-RL cooperative adaptive cruise control}
\label{sec:deep-rl-cacc}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth, trim={0cm 0 0.5cm 0},clip]{figures/migrated-deep-truck/cacc-illustration/cacc-illustration-v3.png}
\caption{Two truck cooperative adaptive cruise control system setup.}
\label{fig:two-truck-cacc-illustration}
\end{figure}
This section formulates an end-to-end two-truck cooperative adaptive cruise control (CACC)~\cite{lu2017integrated} system using deep RL, based on the longitudinal truck model developed in this article. In this system, we consider a human-driven leader vehicle and a semi-automated follower that simultaneously regulates speed and time gap, as shown in Figure~\ref{fig:two-truck-cacc-illustration}.
The environment is modeled by a two-point-mass system representing the two trucks. Both vehicles' dynamics are modeled using the double integrator kinematic model. The leader vehicle (leader) dynamics are simplified to a linear system. The velocity of the controlled truck (ego) is modeled as a nonlinear system according to the deep truck model $f_{DT}$ presented earlier in this article. This results in the following environment model:
\begin{align*}
\begin{bmatrix}
p_\text{leader}(k+1) \\
v_\text{leader}(k+1) \\
p_\text{ego}(k+1) \\
v_\text{ego}(k+1)
\end{bmatrix}
=
\begin{bmatrix}
1 & dt & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & dt \\
0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
p_\text{leader}(k) \\
v_\text{leader}(k) \\
p_\text{ego}(k) \\
v_\text{ego}(k)
\end{bmatrix}
\\
+
\begin{bmatrix}
0 \\
1 \\
0 \\
0
\end{bmatrix} \cdot
a_\text{leader}(k) \cdot dt
\\
+
\begin{bmatrix}
0 \\
0 \\
0 \\
1
\end{bmatrix} \cdot
f_{DT, ego, v}\left(
{\begin{bmatrix} E_\text{cmd, ego}(k) \\ B_\text{cmd, ego}(k) \\ \theta_\text{rdg, ego}(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} x_\text{ego}(k| \Phi) \\ v_\text{ego}(k) \end{bmatrix}},
\Phi
\right),
\end{align*}
where $f_{DT, \text{ego}, v}$ represents the velocity component from the deep truck model for the ego truck, and relevant variables ($E_\text{cmd, ego}$, $B_\text{cmd, ego}$, $\theta_\text{rdg, ego}$, $x_\text{ego}$, and $\Phi$) are as defined in Section~\ref{sec:deep-learning-model}. As shown in Figure~\ref{fig:two-truck-cacc-illustration}, $p_\text{leader}$, $v_\text{leader}$, and $a_\text{leader}$ are absolute longitudinal position, velocity, and acceleration representing the leading vehicle, and $p_\text{ego}$ and $v_\text{ego}$ are absolute longitudinal position and velocity of the ego vehicle. Time step size is represented by $dt$.
The agent is represented by the probability distribution function $\pi(a_k|o_k, \Phi_\text{agent})$, where $a_k$ represents the agent action, $o_k$ the observation at time step $k$, and $\Phi_\text{agent}$ the agent parameters. The corresponding control $u(k)$ is implemented as:
\begin{align*}
u(k) &= \begin{bmatrix} E_\text{cmd, ego}(k) \\ B_\text{cmd, ego}(k) \end{bmatrix} = f_\pi\left( \begin{bmatrix} v_\text{leader}(k) \\ v_\text{ego}(k) \\ p_\text{leader}(k) - p_\text{ego}(k) \\ v_\text{ego}(k) \cdot Tg_\text{target} \\ \theta_\text{rdg}(k) \end{bmatrix} \right) \\ &= E\left(\pi\left(a_k \bigg| o_k = \begin{bmatrix} v_\text{leader}(k) \\ v_\text{ego}(k) \\ p_\text{leader}(k) - p_\text{ego}(k) \\ v_\text{ego}(k) \cdot Tg_\text{target} \\ \theta_\text{rdg}(k) \end{bmatrix}, \Phi_\text{agent}\right)\right)
\end{align*}
representing the mean value for a Multi-Layer Perceptron (MLP) Gaussian distribution model.
The reward function is designed to simultaneously regulate the time gap between ego and leader to a given desired time gap, and to regulate the velocity of the ego to match that of the leader. The agent is penalized for actuation cost, here approximated by the engine and brake commands. The safety constraint is implemented as a very large penalty term applied when the minimum safety distance is violated. The reward function is modeled as:
\begin{multline*}
r(k) = - \alpha_p (p_\text{leader}(k)-p_\text{ego}(k)-v_\text{ego}(k) \cdot Tg_\text{target})^2 \\ - \alpha_v (v_\text{leader}(k)-v_\text{ego}(k))^2 - \alpha_E E_\text{cmd}^2(k) - \alpha_B B_\text{cmd}^2(k) \\ - \alpha_\text{crash} \cdot (p_\text{leader}(k)-p_\text{ego}(k) \leq d_\text{safety}),
\end{multline*}
where $Tg(k)$ is the actual time gap between the leader tail and the ego head, $Tg_\text{target}$ is the target (desired) time gap, \(\alpha_p\), \(\alpha_v\), \(\alpha_E\), and \(\alpha_B\) are positive constants, $\alpha_\text{crash}$ is a large positive constant, $d_\text{safety}$ is the minimum safety distance, and all other variables are as defined earlier in this section.
Each training episode is initialized using leader position $p_\text{leader}(k=0) = 0$, random initial ego truck position error $p_\text{leader}(k=0) - p_\text{ego}(k=0) - v_\text{ego}(k=0) \cdot Tg_\text{target}$ from a \(\text{uniform} (p_{o, min}, p_{o, max})\), random initial leader speed \(v_\text{leader}(k=0)\) from \(\text{uniform} (v_{o, min}, v_{o, max})\), random initial ego truck speed error \(v_\text{ego}(k=0) - v_\text{leader}(k=0)\) from \(\text{uniform} (v_{o, min}, v_{o, max})\) distribution, random desired time gap $Tg_\text{target}$ from a \(\text{uniform} (Tg_{o, min}, Tg_{o, max})\) distribution, and random constant road grade $\theta_\text{rdg}$ from a \(\text{uniform} (\theta_\text{rdg, o, min}, \theta_\text{rdg, o, max})\) distribution. To simplify the setup, we also assume $a_\text{leader}(k) = 0$. All distribution boundaries are positive constants chosen to cover the desired operational state-space of the CACC system and be constrained by the state-space covered by the deep model where appropriate.
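A minimal sketch of the environment step and reward described above is given below; the coefficient values are illustrative assumptions, and \texttt{f\_dt\_ego} stands for a wrapper around the deep truck model that returns the updated internal state and velocity.
\begin{verbatim}
import numpy as np

def reward(p_lead, v_lead, p_ego, v_ego, e_cmd, b_cmd, tg_target,
           alpha_p=1.0, alpha_v=1.0, alpha_e=1e-4, alpha_b=1e-4,
           alpha_crash=1e6, d_safety=5.0):
    gap_err = p_lead - p_ego - v_ego * tg_target
    r = (-alpha_p * gap_err**2 - alpha_v * (v_lead - v_ego)**2
         - alpha_e * e_cmd**2 - alpha_b * b_cmd**2)
    if p_lead - p_ego <= d_safety:       # safety-constraint penalty
        r -= alpha_crash
    return r

def env_step(state, action, f_dt_ego, dt=0.1, a_lead=0.0):
    # state = (p_lead, v_lead, p_ego, v_ego, x_ego); action = (E, B).
    p_lead, v_lead, p_ego, v_ego, x_ego = state
    p_lead += v_lead * dt
    v_lead += a_lead * dt
    p_ego += v_ego * dt
    x_ego, v_ego = f_dt_ego(action, x_ego, v_ego)  # deep truck model
    return (p_lead, v_lead, p_ego, v_ego, x_ego)
\end{verbatim}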
\section{Vehicle pool}
We primarily utilize three trucks with three different mechanical configurations for the study presented in this article, as shown in Figure~\ref{fig:deep-truck-process-diagram} and Figure~\ref{fig:truck-pool-images}. One truck is simulation-based and is used primarily for numerical experiments. The remaining two trucks are full-size real-physical trucks that had been modeled using two different physics-based powertrain models in \cite{XYLu2005HDVModelandLongControl} and \cite{lu2017integrated} and used to develop high precision control systems within each respective article.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/truck-pool-images/trucks-images.png}
\caption{Real full-size and simulations trucks of multiple mechanical configurations used in this research.}
\label{fig:truck-pool-images}
\end{figure*}
\textbf{Simulation framework and simulation truck mechanical configuration.} Simulation experiments in this article are conducted in TruckSim \cite{sayers1996modeling}, a black-box state-of-the-art commercial software framework with high fidelity modeling capabilities and detailed vehicle and vehicle-component libraries.
The truck, shown in Figure~\ref{fig:truck-pool-images}, is equipped with a 402 hp engine. The engine shaft is connected to one side of the transmission via a clutch. The clutch allows a speed difference between the engine and the transmission during gear shifts. The transmission has ten forward gears and one reverse gear. The other side of the transmission is connected to the rear wheels via a differential gear with a fixed reduction ratio. The truck is equipped with an air-brake system. The front air-brakes have a capacity of 7.5 kN-m on each wheel, and the rear brakes a capacity of 10 kN-m on each wheel. The actuation control inputs to the truck are engine torque and brake cylinder pressure. These details are presented here for completeness and reporting purposes, but are irrelevant to the deep model.
\textbf{Real full-size Freightliner truck mechanical configuration.} The Freightliner truck used for the results in this section is a tractor-only Freightliner Century truck driven by a 435 hp turbocharged Detroit Diesel engine and equipped with a six-gear true-automatic Allison transmission system (equipped with a torque converter). The service brake is drive-by-wire all the way to the wheels. The truck is not equipped with road grade sensors.
\textbf{Real full-size Volvo truck mechanical configuration.} The second set of experiments was conducted using a Volvo VNL truck (with and without a trailer) driven by a 500 hp engine. The most significant mechanical differentiator of this truck from the Freightliner is the transmission system, which is an automated manual transmission (equipped with clutches).
\section{Vehicle interface}
Access to the vehicle powertrain is often primarily provided through a human driver interface (pedals) and is mediated by proprietary controllers, as shown in Figure~\ref{fig:truck-interfaces-diagram}. For precision sensitive applications, however, it is often desirable to probe as close to the powertrain as possible (e.g., engine torque or engine fuel rate control signals). We access these signals through a custom-built automated driver interface connected to the J-1939 vehicle communication backbone. The interface provides access to powertrain and sensor signals; however, architectural details and signal accessibility vary between truck platforms. Multiple layers of fail-safe safety systems were implemented to ensure experiments remain faithful to the published description while maintaining safety on the road. Parallel interfaces and system architecture are used for the simulation truck.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/truck-interfaces/truck-interfaces-diagram.png}
\caption{Interface architecture for deep modeling and control of heavy duty trucks.}
\label{fig:truck-interfaces-diagram}
\end{figure*}
\section{Experiments}
This section presents an experimental evaluation of the process detailed in this article. The section starts by applying the process to a simulation-based truck to present detailed performance statistics, and then reapplies the process to full-size trucks.
\subsection{Deep modeling of the simulation truck}
This section presents experimental results for the development of a deep learning model as described in Section~\ref{sec:deep-learning-model} for the simulation truck.
\subsubsection{Deep model specifications}
In this experiment, the uncontrollable condition $w(k) = \theta_\text{rdg}(k)$ represents road grade in [$\%$]. The controllable input to the truck is given by $u(k) = [ E_\text{cmd}(k), B_\text{cmd}(k)],$ where $E_\text{cmd}(k)$ is engine torque in [$N-m$] and $B_\text{cmd}(k)$ is service brake master cylinder pressure in [$0-100\%$].
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
\subsubsection{Driving datasets}
For training, we simulated a total of four hours of driving using the data collection strategy presented in Section~\ref{sec:driving-cycles}. We generated another three-hour set for testing and validation to evaluate modeling performance on unseen data. All datasets span speeds from zero to $35$ $m/s$ and road grades within $\pm3\%$. A sample of the dataset is shown in Figure~\ref{fig:trucksim-sample-dataset-model-inputs} and Figure~\ref{fig:trucksim-sample-dataset-model-outputs}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-dataset-sample-model-inputs.png}
\caption{A ground truth sample dataset representing inputs to the deep model.}
\label{fig:trucksim-sample-dataset-model-inputs}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-dataset-sample-model-outputs.png}
\caption{A ground truth sample dataset representing outputs from the deep model.}
\label{fig:trucksim-sample-dataset-model-outputs}
\end{figure}
\subsubsection{Model learning curves}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/trucksim-model-deep-learning-curves/clean_trucksim_modeling_learning_curves.png}
\caption{Learning curve---min/max and mean from 30 seeds---for deep modeling of the TruckSIM truck.}
\label{fig:trucksim-model-learning-curve}
\end{figure}
This section presents learning curves for training the deep truck model for the simulation truck using the data presented in this section. Figure~\ref{fig:trucksim-model-learning-curve} shows the loss function statistics (mean, min, and max) based on the training form Equation~\eqref{eq:model-training-form} and the deployment form Equation~\eqref{eq:model-deployment-form} from 30 seeds. Each curve is produced using a separate dataset, both unseen during training.
Both learning curves stabilize and converge by the 600th epoch. They exhibit spikes, which we speculate are a symptom of the inherent stochasticity of the mini-batch algorithm we used. An expected loss gap between the training-form curve and the deployment-form curve is observed.
\subsubsection{Results and model validation}
This section presents model validation results using an unseen validation dataset. Figure~\ref{fig:sim-modeling-error-stats} shows modeling error statistics as a function of model simulation time from 90 independent random trials. Error statistics are generated as:
\[\text{ErrorStatistic}(k)= \text{Statistic}_m\left(\hat{y}_{m}(k| \Phi) - y_{m}(k)\right)\]
where $m$ is the trial number and $\hat{y}$ follows the deployment form Equation~\eqref{eq:model-deployment-form}. Distributions (initial speed distribution, visited speeds over time and across all trials, visited road grades over time and across all trials) of the validation dataset are shown in Figure~\ref{fig:sim-modeling-validation-dataset-distributions}.
The mean modeling error stays bounded near zero over the 40 second simulation time. On average, acceleration deviates by less than $0.5 m/s^{2}$ and fuel rate deviates by less than $10^{-3}$ at any given time. The statistics also show that the modeled speed error is expected to remain within $1.5m/s$ over a 40 second simulation time.
A sample model validation dataset is shown in Figure~\ref{fig:trucksim-model-validation-sample-dataset} and Figure~\ref{fig:trucksim-model-validation-error-sample-dataset}. In this validation experiment, the model is initialized once at $k = 0$ and then simulated for 2000 time steps ($t_\text{end} = 200s$). The dataset exhibits a large initial error transient with significant model response delay estimation error. Error statistics appear (by visual inspection) to be stationary, consistent with the error statistics in Figure~\ref{fig:sim-modeling-error-stats}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\linewidth, trim={18cm 0 16cm 0},clip]{figures/migrated-deep-truck/trucksim-model-error-stats/clean_trucksim_model_error_stats.png}
\caption{Model error statistics---standard deviation (red shaded areas) and mean (blue curves) from 90 trials---for deep modeling of the TruckSIM truck.}
\label{fig:sim-modeling-error-stats}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.70\linewidth, trim={0cm 4cm 0cm 2cm},clip]{figures/trucksim-modeling-dataset/clean_trucksim_validation_dataset_distributions.png}
\caption{Scenario distribution of the dataset used to compute model validation statistics presented in Figure~\ref{fig:sim-modeling-error-stats}.}
\label{fig:sim-modeling-validation-dataset-distributions}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-validation-sample-dataset.png}
\caption{A sample ground truth and model output using a sample validation set.}
\label{fig:trucksim-model-validation-sample-dataset}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-validation-error-sample-dataset.png}
\caption{A sample error over time between ground truth and model output using a sample validation set.}
\label{fig:trucksim-model-validation-error-sample-dataset}
\end{figure}
\subsection{Deep-RL control of the simulation truck}
This section presents experimental results for the development of a deep-RL CACC as described in Section~\ref{sec:deep-rl-control} for the simulation truck.
\subsubsection{Training setup and learning curves}
For this experiment, the sampling rate is set to $10Hz$ ($dt = 0.1s$) and we assume a flat driving environment with no other relevant driving environment variables; thus we substitute $w(k)$ with the empty set. The controllable input to the ego truck (agent output) is given by $u(k) = [E_\text{cmd, ego}(k), B_\text{cmd, ego}(k)],$
where $E_\text{cmd, ego}(k)$ is requested engine torque in [$N-m$] and $B_\text{cmd, ego}(k)$ is requested service brake master cylinder pressure percentage [$0-100\%$]. The agent $\pi$ is modeled using an ANN that has 3 hidden layers, each of size 25.
Each training episode is initialized using $p_\text{leader}(k=0) = 0$, a random initial ego truck position $p_\text{ego}(k=0)$ drawn as \(- (v_\text{ego}(k=0) \cdot Tg_\text{target} + \text{uniform} (-1.39, 1.39))\), a random initial leader speed \(v_\text{leader}(k=0)\) from \(\text{uniform} (8.3, 22.2)\), \(v_\text{ego}(k=0)\) from \(v_\text{leader}(k=0) + \text{uniform} (-1.39, 1.39)\), and a random desired time gap $Tg_\text{target}$ from a \(\text{uniform} (2, 5)\) distribution. To simplify the setup, we also assume $a_\text{leader}(k) = 0$.
The deep-RL controller was trained on RLLab \cite{duan2016benchmarking} using a batch size of 20000, a max path length of 800 (sampled at 10Hz), and a discount factor of 0.9999. We trained ten policies (ten seeds). The average discounted return plot is shown in Figure~\ref{fig:trucksim-rl-cacc-learning-curves}. The trained policies shown in the plot converged after 500 iterations. The observed sharp drops toward large negative return values are caused by crashes between the two trucks inside the training environment, as specified by the reward function presented in Section~\ref{sec:deep-rl-cacc}. These crashes happen as the deep reinforcement learning agent explores the state-action space, which is implemented here by means of a stochastic agent policy.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/deep-policy-training-trucksim/adr-deep-trucksim-deeprl-cacc.png}
\caption{Learning curve---min/max (light red shaded area) and mean (dark red curve) from ten seeds---for deep cruise control policy based on deep model of the TruckSIM truck.}
\label{fig:trucksim-rl-cacc-learning-curves}
\end{figure}
\subsubsection{Control validation results}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/trucksim_truck_deepenv_cacc_validation/trucksim-truck-deepenv-rl-cacc-validation.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/trucksim_truck_trucksimenv_cacc_validation/trucksim_rl_cacc_ten_rand_sims.png}
\end{subfigure}
\caption{Control error statistics---min/max (dashed curves), standard deviation (red shaded areas) and mean (blue curves)---for the deep policy evaluated against the deep environment and against TruckSIM environment.}
\label{fig:trucksim-truck-rl-cacc-experiment}
\end{figure*}
This section validates the performance of the deep-RL cooperative adaptive cruise controller against both the deep environment and transfer into TruckSIM, as shown in Figure~\ref{fig:trucksim-truck-rl-cacc-experiment}. The DeepEnv-set experiment is a replication of the training setup and consists of 100 rollouts drawn from the same training distributions (environment model and initialization distributions). The same controller is zero-shot transferred to TruckSIM to produce the TruckSIM-set, consisting of 10 rollouts drawn from the same initialization distributions.
The policy is designed to simultaneously regulate speed and time gap. In the DeepEnv-set, time-gap error converges to a steady state error within $\pm0.05s$ within $10s$ from the start of the experiment, while speed error converges to a steady state error within $\pm0.03m/s$ within $25s$ from the start, both with a mean error of approximately zero.
The TruckSIM-set evaluates the transfer of the same policy to the TruckSIM environment with the same random distributions. The observed shift in control performance is caused by a shift in the truck model distribution due to the model mismatch discussed in the modeling experiments. Time-gap error converges to a steady state error between $0.04s$ and $-0.2s$ within $10s$ from the start time of the experiment, while speed error converges to a steady state error within $\pm0.02m/s$ within $25s$ from the start time. The time-gap mean error converges to $-0.06s$, while the speed mean error converges to approximately zero. The controller exhibits a nonlinear bimodal speed control over/undershoot.
Figure~\ref{fig:trucksim-truck-rl-cacc-experiment} shows preliminary learning results for the deep-RL cooperative adaptive cruise controller, and provides preliminary evidence that only marginal shifts in error statistics should be expected when transferring the policy from the deep-truck environment to the ``real'' environment (here represented by the simulated truck).
\subsection{Deep modeling of full-size trucks (field experiments)}
This section presents field experimental results for the model described in this article. The section documents experiments conducted using two differently configured real-physical heavy duty trucks. These same two trucks were modeled using two different physics-based powertrain models in \cite{XYLu2005HDVModelandLongControl} and \cite{lu2017integrated}, which were used to develop high precision control systems within each respective article.
\subsubsection{Configuration one: Freightliner}
In this experiment, the truck is not equipped with any sensors relevant to the driving environment (e.g. road grade) and thus we substitute $w(k)$ with the empty set. The controllable input to the truck is given by $u(k) = [ E_\text{cmd}(k), B_\text{cmd}(k)], $ where $E_\text{cmd}(k)$ is requested percentage engine torque in [$0-100\%$] and $B_\text{cmd}(k)$ is service brake pedal position [$0-100\%$].
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
Experiments for this truck were carried out on a nearly flat test track with straight roads, the longest of which is around 300 meters, at the Richmond Field Station in California. The truck was driven for about 16 minutes to collect a primarily low-speed dataset covering zero to $18m/s$. The dataset was split into 85 percent for training and 15 percent for testing modeling performance on an unseen dataset.
\subsubsection{Configuration two: Volvo}
The truck is equipped with a road grade sensor where $w(k) = \theta_\text{rdg}(k) [\%]$. The controllable input to the truck is given by $u(k) = [E_\text{cmd}(k), B_\text{cmd}(k)],$ where $E_\text{cmd}(k)$ is requested percentage engine torque in [$0-100\%$] and $B_\text{cmd}(k)$ is service brake command [$m/s^2$].
Actuation and accessible signal measurement of the brake system in this truck is asymmetric. The service brake system in this truck is not directly actuatable (and signals not interceptable); instead, desired deceleration is processed through Volvo propriety systems. After collecting the data, we substitute brake commands with observed deceleration gated by brake pedal gating switch signal.
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
This truck was primarily driven over non-flat open freeways. The truck was driven for about 24 minutes to collect primarily freeway speed driving dataset covering speeds from $20m/s$ to $30m/s$. The dataset was split into 85 percent for training and 15 percent to test modeling performance on an unseen dataset.
\subsubsection{Results and model validation}
We validate modeling performance against an unseen ground truth dataset from each truck configuration as shown in Figure~\ref{fig:field-experiments-error-stats}. In this figure, mean and standard deviation for acceleration, speed, and fuel rate modeling errors are charted as a function of model simulation time. The statistics were produced from an ensemble of ten timeseries simulations. Each simulation is fresh initialized at time zero, and simulated using knowledge of inputs and the uncontrollable conditions only. Error statistics are generated as $\text{ErrorStatistic}(k)= \text{Statistic}_m\left(\hat{y}_{m}(k| \Phi) - y_{m}(k)\right)$ where $m$ is trial number, and $\hat{y}$ follow the deployment form Equation~\eqref{eq:model-deployment-form}.
In this figure, acceleration error is bounded between $\pm0.5m/s^2$. For the Freightliner, speed error remained bounded between $\pm0.5m/s$ mean of speed error $0.12m/s$ after the initial transient. For the Volvo, speed error remained between $-0.5m/s$ and $1m/s$ with a significant error bias approaching $0.5m/s$ during the 15 seconds of simulation. Fuel rate modeling error is bounded between $\pm1$ once the initial transient decays. We speculate that model performance degradation for the Volvo truck is influenced by insufficient data to model truck dynamics over graded roads.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/field-modeling-error-stats/clean_fl_vl_modeling_error_stats_batches10.png}
\caption{Model error statistics---standard deviation (red shaded areas) and mean (blue curves) from 10 trials---for deep modeling of the Freightliner and the Volvo trucks.}
\label{fig:field-experiments-error-stats}
\end{figure*}
\subsection{Deep-RL control of full-size trucks (field experiments)}
This section presents control experiments for the deep-RL CACC system presented earlier. Due lab access limitations during the CoVID-19 pandemic, the system was operated as a two vehicle ACC (using radar instead of direct communications) system on non-flat open freeways. The leader is a passenger car and the follower is the Volvo truck presented earlier.
Figure~\ref{fig:volvo_rl_acc_gap_closing} shows gap closing regulation performance where the leader drove at nearly constant speed with initial speed error of $2m/s$, initial time gap error of $1s$, and a desired time-gap setting of $1.5s$. The gap was closed within 15 seconds and to within error bound of $\pm0.2m/s$ and $0.35s$. Leader conducted a quick successive changes of speed towards the end of experiment causing the observed speed ripple after time $17s$.
Figure~\ref{fig:volvo_rl_acc_long_run} shows tracking performance over an arbitrary driving cycle conducted by the leader vehicle with a desired time-gap setting of $1.5s$. Speed error was regulated to within $\pm0.5m/s$ and time gap was regulated to between $0.05m/s$ and $0.3m/s$. A lane change maneuver was conducted at time $63s$ causing a momentary misalignment between ego vehicle's sensor line-of-sight with the leader. Speed and distance measurements of a farther vehicle down stream was detected causing the observed discontinuity.
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_gap_closing_speed_t_gap.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_gap_closing.png}
\end{subfigure}
\caption{Tracking speed, time-gap, and control error for the Volvo deep CACC policy evaluated against the real environment---gap closing maneuver.}
\label{fig:volvo_rl_acc_gap_closing}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_long_run_speed_t_gap.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_long_run.png}
\end{subfigure}
\caption{Tracking speed, time-gap, and control error for the Volvo deep CACC policy evaluated against the real environment---leader following maneuver.}
\label{fig:volvo_rl_acc_long_run}
\end{figure*}
\section{Conclusion}
Detailed study of each heavy duty truck in some pool of trucks has historically been required to develop and fit precise analytical models and controls. This article discusses the application of deep learning and deep reinforcement learning as an approach to simplify the process and abstract detailed vehicle underlying mechanics with a potential for improving modeling and control precision. A brief experimental evaluation is presented as a walk through the process and as preliminary performance validation.
The deep models and deep-RL controls presented in this article successfully (1) infers relevant latent and state variables (such as gearbox), (2) performs dynamic state estimation (such as selected gear and brake cylinder pressure at \(t=0\)) and tracking (latent state variable values for \(t>0\)), and (3) successfully performs system identification and parameter estimation (such as the aerodynamic drag effect and its coefficient).
This article focuses on outlining the process of applying deep learning and deep reinforcement learning for modeling and control of heavy duty trucks. More extensive experimentation and comparison with established classical approaches is still required for validation and performance evaluation. Furthermore, the process presented here still requires full replication for each target truck, and each truck combination (multi-truck environments). Further investigation is still required to introduce transfer learning of longitudinal dynamics across mechanical configurations. Data sampling efficiency and utilization of existing first-principle models could also be investigated to improve the process presented here.
\section*{Acknowledgements}
The authors would also like to acknowledge John Spring and David Nelson for their technical conversations about automation software and hardware for heavy duty trucks and their support in the field. This research work was supported in part by King Abdulaziz City for Science and Technology (KACST).
\section{Introduction}
Heavy duty truck driving performance is highly sensitive to configuration and operating details \cite{lattemann2004predictive, kirches2013mixed, bae2003parameter, vahidi2003simultaneous, druzhinina2002speed, lu2005heavy}. A typical passenger car achieves fairly consistent braking distances, while the braking distance of a truck varies significantly within a single trip: before and after hooking up to a trailer, before and after trailer loading, and the distance can double as the brakes warm up during the trip. Modeling accuracy has been shown to be a significant (and limiting) factor for precision driving maneuvers \cite{lu2017integrated, spielberg2019neural}. Deep learning has been shown to improve modeling accuracy compared to state-of-the-art classical models for passenger cars \cite{da2019modelling, spielberg2019neural}. However, the literature on deep-learning-based modeling of heavy duty trucks is still sparse. Heavy duty trucks are typically configured and tailor-built to their expected mission requirements. The detailed underlying physics and internal states of trucks are configuration specific and often differ from the more exhaustively modeled components of passenger cars. In this article, we develop a deep-learning-based longitudinal model for heavy duty trucks and validate its modeling accuracy for heavy duty trucks of different configurations, both in simulation and using real-physical trucks.
Model-free deep reinforcement learning has been shown to achieve improved performance in many applications, in addition to simplifying several previously intractable problems. Transferring learned policies from simulation, however, is often challenged by the reality gap (the mismatch between a model and the corresponding real-physical system). This article studies the application of deep learning to longitudinal modeling of heavy duty trucks and its use in minimizing the reality gap for transferable deep reinforcement learning continuous control policies, as shown in Figure~\ref{fig:deep-truck-process-diagram}.
The process uses deep learning to build a deep replica model of each truck in some real vehicle pool. These deep replica models are used to develop deep environments suitable for deep reinforcement learning continuous control tasks. The article takes into consideration several of the factors that traditionally impact, or are expected to impact, modeling and control performance, such as vehicle mechanical configuration, operational scope, setup, and traffic scenarios.
Deep learning and deep reinforcement learning offer potential for improved performance at the expense of guarantees, such as bounds on control error, that are better understood using classical methods. To compensate, more in-depth evaluation is always required. To simplify the investigation and avoid a combinatorial explosion of experiments, however, the article focuses on presenting the process and an experimental evaluation of the relevant components, and leaves additional investigations, such as robustness, to later articles.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/deep-process-diagram/deep-truck-process-diagram.png} \\
\vspace{5pt}\hrule\vspace{5pt}
\includegraphics[width=0.6\linewidth]{figures/migrated-deep-truck/deep-process-diagram/deep-truck-pools-listings.png}
\caption{The deep truck process for the development of field-testable deep RL continuous control policies for longitudinal automation of heavy duty trucks and a sample of the pools and operational variations relevant to this work.}
\label{fig:deep-truck-process-diagram}
\end{figure*}
\section{Modeling problem formulation}
\label{sec:deep-learning-model}
In this article, we formulate heavy duty truck longitudinal dynamics modeling as a time-series supervised deep learning problem. The longitudinal dynamics model $f_{DT}$, detailed in the next section, is represented as:
\begin{equation}
{\begin{bmatrix} x(k+1) \\ y(k+1) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} x(k) \\ y(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-base-form},
\end{equation}
where $x$ represents internal state, $y$ represents truck response, $u$ represents controllable inputs to the truck, $w$ represents uncontrollable conditions relevant to the dynamics, and $\Phi$ represents model parameters. The initial conditions are given by $x(k = 0) = x_o$ and $y(k = 0) = y_o$.
Model parameters $\Phi$ are trained by solving the following optimization problem:
\[
\min_{\Phi} \quad \sum_{k}
{
\left\lVert
\hat{y}(k| \Phi) - y(k)
\right\rVert_2^2
}
\]
where $\hat{y}$ is the model-based estimate of $y$ given ground truth historical driving time-series data $y$, $u$, $w$, initial state vector $x(k=0)$, and proper recursive substitution of the estimate of the internal state vector $\hat{x}(k| \Phi)$ for $x$ as follows:
\begin{equation}
{\begin{bmatrix} \hat{x}(k+1| \Phi) \\ \hat{y}(k+1| \Phi) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} \hat{x}(k| \Phi) \\ y(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-training-form}.
\end{equation}
The model is trained using a $K$-step unfolded time-series mini-batch Adagrad (adaptive gradient) algorithm \cite{duchi2011adaptive, mcmahan2010adaptive}. Each gradient step is estimated from $M$ independent samples (time-series model evaluations), each of $K$ time steps, as follows:
\[
\sum_{m = 0 \dots M} \sum_{k = 0 \dots K}
{
\left\lVert
\hat{y}_{n, m}(k| \Phi) - y_{n, m}(k)
\right\rVert_2^2
},
\]
where the sub-indices abstract time-series splits and $n = 0 \dots N$ represents the mini-batch index. Training is initialized with random deep network parameters, and with $\hat{x}_{n, m}(k = 0| \Phi) = 0$ for all $n$ and $m$. Given the trained model, truck simulations are generated from:
\begin{equation}
{\begin{bmatrix} \hat{x}(k+1| \Phi) \\ \hat{y}(k+1| \Phi) \end{bmatrix}}
=
f_{DT}\left(
{\begin{bmatrix} u(k) \\ w(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} \hat{x}(k| \Phi) \\ \hat{y}(k) \end{bmatrix}},
\Phi
\right)\label{eq:model-deployment-form},
\end{equation}
and initialized using $\hat{x}(k = 0| \Phi) = 0$ and $\hat{y}(k = 0) = y_o$, where $y_o$ represents the observable initial condition of truck dynamics.
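To make the distinction between the training form \eqref{eq:model-training-form} (which conditions on ground-truth responses) and the deployment form \eqref{eq:model-deployment-form} (which feeds back its own estimates) concrete, the following Python sketch illustrates both rollouts. It is our illustration, not the authors' implementation; \texttt{f\_dt} is a hypothetical callable implementing one step of $f_{DT}$, and the Adagrad parameter update itself is omitted.
\begin{verbatim}
import numpy as np

def rollout(f_dt, u, w, y, y0, phi, nx, teacher_force):
    """Roll the deep truck model forward for K steps.

    teacher_force=True  -> training form: condition on ground-truth y(k).
    teacher_force=False -> deployment form: feed back the estimate y_hat(k).
    f_dt(u_k, w_k, x_k, y_k, phi) is assumed to return (x_next, y_next).
    """
    K = len(u)
    x_hat = np.zeros(nx)            # x_hat(0) = 0, as in the article
    y_hat = [np.asarray(y0)]        # y_hat(0) = observed initial response
    for k in range(K):
        y_in = y[k] if teacher_force else y_hat[k]
        x_hat, y_next = f_dt(u[k], w[k], x_hat, y_in, phi)
        y_hat.append(y_next)
    return np.stack(y_hat)

def k_step_loss(f_dt, batch, phi, nx):
    """Mini-batch loss: sum of squared K-step errors with teacher forcing."""
    loss = 0.0
    for (u, w, y, y0) in batch:     # M independent K-step samples
        y_hat = rollout(f_dt, u, w, y, y0, phi, nx, teacher_force=True)
        loss += np.sum((y_hat[1:] - np.asarray(y)[1:]) ** 2)
    return loss
\end{verbatim}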
Variable instantiations are detailed for each respective experiment in the later sections; however, we assume in general, for longitudinal dynamics,
\begin{align*}
u(k) &= \begin{bmatrix} E_\text{cmd}(k) \\ B_\text{cmd}(k) \end{bmatrix} \\
y(k) &= \begin{bmatrix} v(k) \\ a(k) \\ f_\text{rate}(k) \end{bmatrix} \\
w(k) &= \theta_\text{rdg}(k),
\end{align*}
where $E_\text{cmd}(k)$ is engine command, $B_\text{cmd}(k)$ is brake command, $v(k)$ is vehicle speed, $a(k)$ is vehicle acceleration, $f_\text{rate}(k)$ is fuel rate, and $\theta_\text{rdg}(k)$ is road grade, each at discrete time step $k$.
\section{Deep model}
\label{sec:deep-truck}
This section presents the \(f_{DT}\) model structure used to represent the longitudinal dynamics of the heavy duty truck. The model assumes that only controllable inputs to the truck, uncontrollable driving environment variables, and truck responses are known and measurable while the configuration of the truck and relevant internal state variables are not specified.
The state model,
\[x(k+1) = H(u(k), w(k) | x(k), y(k)),\]
represents integrated state observer and tracker equations together with state update and encoder equations. The state model, $H(\cdot)$, is represented in this article as a long short-term memory (LSTM) recurrent neural network (RNN). The use of a single deep network unit to represent this model enables parameter sharing among the state observer, tracker, updater, and encoder functions.
The output model,
\[y(k) = G(x(k)),\]
represents an integrated state decoder equation and an explicit output constraints model. The output model is represented by a cascade of a state decoder $D$ and an explicit constraints model $C$ such that $y(k) = C(D(x(k)))$. The state decoder is implemented as a fully connected feedforward neural network parameterized by $\Phi_{nn}$. In the remainder of this article, we implement a discrete-time longitudinal kinematics model as the explicit constraint model for the longitudinal response output variables:
\[
v(k + 1) = v(k) + a(k) \cdot dt,
\]
where $v(k)$ and $a(k)$ are longitudinal velocity and acceleration respectively, and $dt$ is the discrete time step. Other variables were left unconstrained.
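As a minimal sketch of this structure (our illustration under assumed layer sizes and framework; the article specifies neither), the state model $H$, decoder $D$, and kinematic constraint $C$ could be composed as follows in PyTorch:
\begin{verbatim}
import torch
import torch.nn as nn

class DeepTruck(nn.Module):
    """Sketch of f_DT: LSTM state model H, MLP decoder D, constraint C."""
    def __init__(self, n_in=3, n_out=3, n_state=64, dt=0.1):
        super().__init__()
        # H: integrated observer/tracker/update/encoder over (u, w, y)
        self.H = nn.LSTMCell(n_in + n_out, n_state)
        # D: fully connected state decoder
        self.D = nn.Sequential(nn.Linear(n_state, 64), nn.Tanh(),
                               nn.Linear(64, n_out))
        self.dt = dt

    def forward(self, u_k, w_k, y_k, hc_k=None):
        """One step; y is ordered [a, v, F_rate] as in the article."""
        inp = torch.cat([u_k, w_k, y_k], dim=-1)
        h, c = self.H(inp, hc_k)            # x(k+1) = H(u, w | x, y)
        y_next = self.D(h)                  # unconstrained decoder output
        # C: overwrite speed with the kinematic constraint v + a * dt
        v_next = (y_k[..., 1] + y_k[..., 0] * self.dt).unsqueeze(-1)
        y_next = torch.cat([y_next[..., :1], v_next, y_next[..., 2:]],
                           dim=-1)
        return y_next, (h, c)
\end{verbatim}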
\section{Driving cycles for data collection}
\label{sec:driving-cycles}
In this article, we assume that the internal dynamics of trucks can be observed from datasets where $y = (v, a)$, $u = (E_\text{cmd}, B_\text{cmd})$, and $w = \theta_\text{rdg}$ are jointly spanning. We consider that, without specialized driving data collection facilities, a human driver is most practical for data collection. Internal truck control signals $u$, however, are often not accessible through the human driver interface (pedals), but are processed through vehicle manufacturer proprietary control systems as shown in Figure~\ref{fig:truck-interfaces-diagram}. We thus approximate such a spanning dataset by a driving cycle consisting of (1) $w$ and $y$ spanning arbitrary acceleration/deceleration profiles, (2) $w$ and $v$ spanning coasting ($u = 0$), and (3) $w$ and $B_\text{cmd}$ spanning braking to zero speed.
For the field experiments presented in this article, these instructions were given to a human driver to execute for data collection. In simulation, we used a random generative model to approximate such a driving cycle. The generative model utilizes a moving average random walk model as a road profile generator. Coasting and braking to zero episodes are generated using direct randomized initializations. The spanning arbitrary acceleration/deceleration profiles were generated from the model presented in the next section.
\subsection{Generative model for state space spanning driving cycles}
For the numerical experiments presented in this article, we simulate arbitrary driving cycles using a speed profile generative model based on a time-adaptive unstable stochastic speed controller. We designed it as a random speed profile (driving cycle) generator that samples the state space ``fairly'' uniformly.
A double integrator model is used with a hard saturation limit at the desired maximum and minimum speeds as follows:
\[
v(t) = \text{max}(\text{min}(v(t-dt) + a(t) \cdot dt, v_\text{max}), v_\text{min}),
\]
where $v_\text{min}$ and $v_\text{max}$ are desired minimum and maximum speeds of generated profile. Acceleration is sampled from a normal distribution as:
\[
a(t) = \mathcal{N}(\mu_\text{a, scaling} \cdot \mu_\text{a}(t), \sigma_\text{a, scaling} \cdot \sigma_\text{a}(t)),
\]
where $\mu_\text{a, scaling}$ and $\sigma_\text{a, scaling}$ are tuning parameters.
Acceleration statistics are designed based on speed-dependent unstable feedback control. Average acceleration is given by:
\[
\mu_\text{a}(t) = 1 - \frac{v(T_i)}{v_\text{ref}},
\]
where $v_\text{ref}$ is the control reference speed, here set to $\frac{v_\text{min}+v_\text{max}}{2}$. The standard deviation is designed to allow for bursts of spontaneous high accelerations but discourage them at the extreme speeds ($v_\text{min}$ and $v_\text{max}$) as follows:
\[
\sigma_\text{a}(t) = \frac{v(T_i)}{v_\text{ref}} \cdot \left(1 - \frac{v(T_i)}{v_\text{max}}\right).
\]
$T_i$ is used for acceleration-based adaptive temporal discretization for sampling the acceleration statistics, and is designed to make high acceleration episodes short-lived. When $t$ reaches the current switching time $T_i$, the non-negative integer index is updated to $i \coloneqq i + 1$ and the next switching time $T_{i+1}$ is sampled as follows:
\[
T_{i+1} = T_i + \lceil \text{max}(\mathcal{N}(\mu_T) \cdot (1 - | \mu_\text{a}(t) | ), dt) \rceil,
\]
where $\mu_T$ is a tuning parameter.
To smooth out the noise, we pass the generated acceleration signal through a moving average filter to get $a_f(t)$, and re-integrate speed with a softmax operator as follows:
\[
v_f(t) = \frac{\max(v_f(t-1) + a_f(t) \cdot dt, v_\text{min})}{1 + e^{\frac{1}{2} \cdot (\max(v_f(t-1) + a_f(t) \cdot dt, v_\text{min}) - v_\text{max})}},
\]
noting that the acceleration signal has to be recalculated, as needed, from the speed signal after this step.
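Putting the pieces together, a minimal sketch of the generator following the formulas above might read as follows (all parameter values are illustrative assumptions, not the article's tuning):
\begin{verbatim}
import numpy as np

def generate_cycle(t_end=600.0, dt=0.1, v_min=0.0, v_max=35.0,
                   mu_scale=1.0, sigma_scale=1.0, mu_T=5.0,
                   smooth_n=20, seed=0):
    """Random speed-profile (driving cycle) generator sketch."""
    rng = np.random.default_rng(seed)
    v_ref = 0.5 * (v_min + v_max)           # control reference speed
    n = int(t_end / dt)
    v = np.empty(n); v[0] = rng.uniform(v_min, v_max)
    a = np.zeros(n)
    T_next = 0.0                            # next switching time T_{i+1}
    for k in range(1, n):
        t = k * dt
        if t >= T_next:                     # re-sample statistics at T_i
            mu_a = 1.0 - v[k - 1] / v_ref   # unstable speed feedback
            sigma_a = (v[k - 1] / v_ref) * (1.0 - v[k - 1] / v_max)
            # adaptive holding time: high |mu_a| -> short-lived episode
            T_next = t + np.ceil(max(rng.normal(mu_T)
                                     * (1.0 - abs(mu_a)), dt))
        a[k] = rng.normal(mu_scale * mu_a,
                          max(sigma_scale * sigma_a, 1e-6))
        # double integrator with hard saturation at v_min / v_max
        v[k] = np.clip(v[k - 1] + a[k] * dt, v_min, v_max)
    # moving-average smoothing, then soft re-integration near v_max
    a_f = np.convolve(a, np.ones(smooth_n) / smooth_n, mode="same")
    v_f = np.empty(n); v_f[0] = v[0]
    for k in range(1, n):
        raw = max(v_f[k - 1] + a_f[k] * dt, v_min)
        v_f[k] = raw / (1.0 + np.exp(0.5 * (raw - v_max)))
    return v_f, np.gradient(v_f, dt)   # speed, recomputed acceleration
\end{verbatim}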
\section{Deep-RL continuous longitudinal control}
\label{sec:deep-rl-control}
In this article, we use deep reinforcement learning to design end-to-end heavy duty truck controllers, allowing us to offline (1) design and tune the controller, (2) calibrate the control module to the specific physics of each truck, and (3) design an embedded state observer/tracker. In developing these controls we assume limited observable input and output (IO), unknown truck mechanical configuration, and unknown relevant internal state. We formulate the problem as a \emph{Partially Observable Markov Decision Process} (POMDP) and solve it using a deep reinforcement learning framework.
\subsection{POMDP and the deep-RL framework}
Continuous control design problems can be formulated as deep-RL problems modeled as a POMDP defined by the tuple \((S, P, OS, OP, A, r, \rho_o, \gamma, T)\), where \(S\) represents the state space of an RL environment (system); \(P\) is the state transition probability space (governing the dynamics of the system); \(OS\) represents the observable state space (the space of system outputs); \(OP\) is the probability distribution of the observation space (governing the dynamics of the observation model); \(A\) represents the action space (actuation and control variables) of an RL agent (a decision function, a policy, or a controller); \(r\) is the reward function (system performance metric); \(\rho_o\) is the initial state distribution; \(\gamma\) is the reward discount factor over time; and \(T\) is the time horizon. In this manuscript, we use model-free policy gradient learning \cite{schulman2015trust, duan2016benchmarking} to computationally optimize the expected discounted cumulative reward for an agent policy $\pi_\theta$ parameterized by $\theta$:
\[
\theta^* = \underset{\theta}{\mathrm{argmax}} \sum_{t=1}^{T} {E_{(s_t, a_t) \sim p_{\theta}(s_t, a_t)} \left[ r(s_t, a_t) \right] }
\]
where $s_t$ and $a_t$ are the state and action at time step $t$, and $p_{\theta}$ is the state-action distribution induced by the policy $\pi_\theta$.
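For concreteness, a minimal REINFORCE-style surrogate for this objective is sketched below; this is a generic illustration of policy gradient learning, not the specific algorithm of \cite{schulman2015trust} used in the experiments, and \texttt{policy} is assumed to return a \texttt{torch} distribution object:
\begin{verbatim}
import torch

def policy_gradient_loss(policy, trajectories, gamma=0.9999):
    """REINFORCE surrogate whose gradient estimates the objective above.

    `policy(obs)` is assumed to return a torch.distributions.Distribution
    over actions; `trajectories` is a list of (obs, actions, rewards).
    """
    loss = 0.0
    for obs, actions, rewards in trajectories:
        T = len(rewards)
        rtg = torch.zeros(T)                 # discounted reward-to-go
        running = 0.0
        for t in reversed(range(T)):
            running = rewards[t] + gamma * running
            rtg[t] = running
        logp = policy(obs).log_prob(actions).sum(-1)  # log pi(a_t | o_t)
        loss = loss - (logp * rtg).mean()
    return loss / len(trajectories)
\end{verbatim}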
\subsection{Deep-RL cooperative adaptive cruise control}
\label{sec:deep-rl-cacc}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth, trim={0cm 0 0.5cm 0},clip]{figures/migrated-deep-truck/cacc-illustration/cacc-illustration-v3.png}
\caption{Two truck cooperative adaptive cruise control system setup.}
\label{fig:two-truck-cacc-illustration}
\end{figure}
This section formulates an end-to-end two-truck cooperative adaptive cruise control (CACC)~\cite{lu2017integrated} using deep-RL, based on the longitudinal truck model developed in this article. In this system, we consider a human-driven leader vehicle and a follower that is semi-automated to simultaneously regulate speed and time gap as shown in Figure~\ref{fig:two-truck-cacc-illustration}.
The environment is modeled by a two-point-mass system representing the two trucks. Both vehicles' dynamics were modeled using the double integrator kinematic model. The leader vehicle (leader) dynamics were simplified as a linear system. The velocity of the controlled truck (ego) is modeled as a nonlinear system according to the deep truck model $f_{DT}$ presented earlier in this article. This results in the following environment model:
\begin{align*}
\begin{bmatrix}
p_\text{leader}(k+1) \\
v_\text{leader}(k+1) \\
p_\text{ego}(k+1) \\
v_\text{ego}(k+1)
\end{bmatrix}
=
\begin{bmatrix}
1 & dt & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & dt \\
0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
p_\text{leader}(k) \\
v_\text{leader}(k) \\
p_\text{ego}(k) \\
v_\text{ego}(k)
\end{bmatrix}
\\
+
\begin{bmatrix}
0 \\
1 \\
0 \\
0
\end{bmatrix} \cdot
a_\text{leader}(k) \cdot dt
\\
+
\begin{bmatrix}
0 \\
0 \\
0 \\
1
\end{bmatrix} \cdot
f_{DT, ego, v}\left(
{\begin{bmatrix} E_\text{cmd, ego}(k) \\ B_\text{cmd, ego}(k) \\ \theta_\text{rdg, ego}(k) \end{bmatrix}}
\bigg|
{\begin{bmatrix} x_\text{ego}(k| \Phi) \\ v_\text{ego}(k) \end{bmatrix}},
\Phi
\right),
\end{align*}
where $f_{DT, \text{ego}, v}$ represents the velocity component from the deep truck model for the ego truck, and relevant variables ($E_\text{cmd, ego}$, $B_\text{cmd, ego}$, $\theta_\text{rdg, ego}$, $x_\text{ego}$, and $\Phi$) are as defined in Section~\ref{sec:deep-learning-model}. As shown in Figure~\ref{fig:two-truck-cacc-illustration}, $p_\text{leader}$, $v_\text{leader}$, and $a_\text{leader}$ are absolute longitudinal position, velocity, and acceleration representing the leading vehicle, and $p_\text{ego}$ and $v_\text{ego}$ are absolute longitudinal position and velocity of the ego vehicle. Time step size is represented by $dt$.
The agent is represented by the probability distribution function $\pi(a_k|o_k, \Phi_\text{agent})$, where $a_k$ represents the agent action, $o_k$ represents the observation at time step $k$, and $\Phi_\text{agent}$ represents the agent parameters. The corresponding control $u(k)$ is implemented as:
\begin{align*}
u(k) &= \begin{bmatrix} E_\text{cmd, ego}(k) \\ B_\text{cmd, ego}(k) \end{bmatrix} = f_\pi\left( \begin{bmatrix} v_\text{leader}(k) \\ v_\text{ego}(k) \\ p_\text{leader}(k) - p_\text{ego}(k) \\ v_\text{ego}(k) \cdot Tg_\text{target} \\ \theta_\text{rdg}(k) \end{bmatrix} \right) \\ &= E\left(\pi\left(a_k \bigg| o_k = \begin{bmatrix} v_\text{leader}(k) \\ v_\text{ego}(k) \\ p_\text{leader}(k) - p_\text{ego}(k) \\ v_\text{ego}(k) \cdot Tg_\text{target} \\ \theta_\text{rdg}(k) \end{bmatrix}, \Phi_\text{agent}\right)\right)
\end{align*}
representing the mean value of a Gaussian distribution whose mean is parameterized by a multi-layer perceptron (MLP).
The reward function is designed to simultaneously regulate the time gap between ego and leader to a given desired time gap, and regulate the velocity of the ego to match that of the leader. The agent is penalized for actuation cost, here approximated by the engine and brake commands. A safety constraint is implemented as a very large penalty term applied when the minimum safety distance is violated. The reward function is modeled as:
\begin{multline*}
r(k) = - \alpha_p (p_\text{leader}(k)-p_\text{ego}(k)-v_\text{ego}(k) \cdot Tg_\text{target})^2 \\ - \alpha_v (v_\text{leader}(k)-v_\text{ego}(k))^2 - \alpha_E E_\text{cmd}^2(k) - \alpha_B B_\text{cmd}^2(k) \\ - \alpha_\text{crash} \cdot (p_\text{leader}(k)-p_\text{ego}(k) \leq d_\text{safety}),
\end{multline*}
where $Tg(k)$ is the actual time gap between the leader's tail and the ego's head, $Tg_\text{target}$ is the target (desired) time gap, \(\alpha_p\), \(\alpha_v\), \(\alpha_E\), and \(\alpha_B\) are positive constants, $\alpha_\text{crash}$ is a large positive constant, $d_\text{safety}$ is the minimum safety distance, and all other variables are as defined earlier in this section.
Each training episode is initialized using leader position $p_\text{leader}(k=0) = 0$, random initial ego truck position error $p_\text{leader}(k=0) - p_\text{ego}(k=0) - v_\text{ego}(k=0) \cdot Tg_\text{target}$ from a \(\text{uniform} (p_{o, min}, p_{o, max})\) distribution, random initial leader speed \(v_\text{leader}(k=0)\) from a \(\text{uniform} (v_{o, min}, v_{o, max})\) distribution, random initial ego truck speed error \(v_\text{ego}(k=0) - v_\text{leader}(k=0)\) from a \(\text{uniform} (v_{o, min}, v_{o, max})\) distribution, random desired time gap $Tg_\text{target}$ from a \(\text{uniform} (Tg_{o, min}, Tg_{o, max})\) distribution, and random constant road grade $\theta_\text{rdg}$ from a \(\text{uniform} (\theta_\text{rdg, o, min}, \theta_\text{rdg, o, max})\) distribution. To simplify the setup, we also assume $a_\text{leader}(k) = 0$. All distribution boundaries are positive constants chosen to cover the desired operational state-space of the CACC system and to be constrained by the state-space covered by the deep model where appropriate.
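A minimal sketch of the resulting environment step, combining the point-mass dynamics and the reward above, is given below; \texttt{f\_dt\_ego\_v} stands in for the velocity component of the trained deep model, and all names and coefficient values are placeholders of our own:
\begin{verbatim}
import numpy as np

def cacc_step(state, x_ego, action, f_dt_ego_v, params, dt=0.1):
    """One environment step for the two-vehicle CACC POMDP sketch."""
    p_l, v_l, p_e, v_e = state
    e_cmd, b_cmd = action
    grade = params["theta_rdg"]
    # point-mass kinematics; a_leader = 0 in the simplified setup
    p_l_next = p_l + v_l * dt
    v_l_next = v_l
    p_e_next = p_e + v_e * dt
    # ego velocity comes entirely from the deep truck model
    v_e_next, x_ego_next = f_dt_ego_v(e_cmd, b_cmd, grade, x_ego, v_e)
    # reward: gap and speed regulation, actuation cost, crash penalty
    gap_err = p_l_next - p_e_next - v_e_next * params["tg_target"]
    r = (- params["alpha_p"] * gap_err ** 2
         - params["alpha_v"] * (v_l_next - v_e_next) ** 2
         - params["alpha_E"] * e_cmd ** 2
         - params["alpha_B"] * b_cmd ** 2
         - params["alpha_crash"] * float(p_l_next - p_e_next
                                         <= params["d_safety"]))
    obs = np.array([v_l_next, v_e_next, p_l_next - p_e_next,
                    v_e_next * params["tg_target"], grade])
    return (p_l_next, v_l_next, p_e_next, v_e_next), x_ego_next, obs, r
\end{verbatim}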
\section{Vehicle pool}
We primarily utilize three trucks with three different mechanical configurations for the study presented in this article, as shown in Figure~\ref{fig:deep-truck-process-diagram} and Figure~\ref{fig:truck-pool-images}. One truck is simulation-based and is used primarily for numerical experiments. The remaining two trucks are full-size real-physical trucks that had been modeled using two different physics-based power-train models in \cite{XYLu2005HDVModelandLongControl} and \cite{lu2017integrated} and used to develop high precision control systems within each respective article.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/truck-pool-images/trucks-images.png}
\caption{Real full-size and simulated trucks of multiple mechanical configurations used in this research.}
\label{fig:truck-pool-images}
\end{figure*}
\textbf{Simulation framework and simulation truck mechanical configuration.} Simulation experiments in this article are conducted in TruckSim \cite{sayers1996modeling}, a black-box state-of-the-art commercial software framework with high fidelity modeling capabilities and detailed vehicle and vehicle-component libraries.
The truck, shown in Figure~\ref{fig:truck-pool-images}, is equipped with a 402hp engine. The engine shaft is connected to one side of the transmission via a clutch. The clutch allows a speed difference between the engine and the transmission during gear shifts. The transmission has ten forward gears and one reverse gear. The other side of the transmission is connected to the rear wheels via a differential gear with a fixed reduction ratio. The truck is equipped with an air-brake system. The front air-brakes have a capacity of 7.5 kN-m on each wheel. The rear brakes have a capacity of 10 kN-m on each wheel. Actuation control inputs to the truck are engine torque and brake cylinder pressure. The details are presented here for completeness and for reporting purposes, but are irrelevant to the deep model.
\textbf{Real full-size Freightliner truck mechanical configuration. } The Freightliner truck used for the results in this section is a tractor-only Freightliner Century truck driven by a 435 hp turbocharged Detroit Diesel engine and equipped with a 6-gear true-automatic Allison transmission system (equipped with a torque converter). The service brake is drive-by-wire all the way to the wheels. The truck is not equipped with road grade sensors.
\textbf{Real full-size Volvo truck mechanical configuration. } The second set of experiments were conducted using a Volvo VNL truck (with and without a trailer) driven by a 500 hp engine. The most significant mechanical differentiator of this truck from the Freightliner is its transmission system, which is an automated manual transmission (equipped with clutches).
\section{Vehicle interface}
Access to the vehicle powertrain is often primarily provided through a human driver interface (pedals) and is mediated by proprietary controllers as shown in Figure~\ref{fig:truck-interfaces-diagram}. For precision-sensitive applications, however, it is often desirable to probe as close to the powertrain as possible (e.g. engine torque or engine fuel rate control signals). We access these signals through a custom-built automated driver interface connected to the vehicle communication backbone (J-1939). The interface provides access to powertrain and sensor signals; however, architectural details and signal accessibility vary between truck platforms. Multiple layers of fail-safe safety systems were implemented to ensure experiments remain faithful to the published description while maintaining safety on the road. A parallel interface and system architecture is used for the simulation truck.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/truck-interfaces/truck-interfaces-diagram.png}
\caption{Interface architecture for deep modeling and control of heavy duty trucks.}
\label{fig:truck-interfaces-diagram}
\end{figure*}
\section{Experiments}
This section presents an experimental evaluation of the process detailed in this article. The section starts by applying the process to a simulation-based truck to present detailed performance statistics. The section then reapplies the process to full-size trucks.
\subsection{Deep modeling of the simulation truck}
This section presents experimental results for the development of a deep learning model as described in Section~\ref{sec:deep-learning-model} for the simulation truck.
\subsubsection{Deep model specifications}
In this experiment, the uncontrollable conditions $w(k) = \theta_\text{rdg}(k) [\%]$ represent road grade. The controllable input to the truck is given by $u(k) = [ E_\text{cmd}(k), B_\text{cmd}(k)], $ where $E_\text{cmd}(k)$ is engine torque in [$N-m$] and $B_\text{cmd}(k)$ is service brake master cylinder pressure [$0-100\%$].
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
\subsubsection{Driving datasets}
For training, we simulated a total of four hours of driving using the data collection strategy presented in Section~\ref{sec:driving-cycles}. We generated another three-hour set for testing and validation to evaluate modeling performance on unseen data. All datasets span speeds from zero to $35$ $m/s$ and road grades within $\pm3\%$. A sample of the dataset is shown in Figure~\ref{fig:trucksim-sample-dataset-model-inputs} and Figure~\ref{fig:trucksim-sample-dataset-model-outputs}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-dataset-sample-model-inputs.png}
\caption{A ground truth sample dataset representing inputs to the deep model.}
\label{fig:trucksim-sample-dataset-model-inputs}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-dataset-sample-model-outputs.png}
\caption{A ground truth sample dataset representing outputs from the deep model.}
\label{fig:trucksim-sample-dataset-model-outputs}
\end{figure}
\subsubsection{Model learning curves}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/trucksim-model-deep-learning-curves/clean_trucksim_modeling_learning_curves.png}
\caption{Learning curve---min/max and mean from 30 seeds---for deep modeling of the TruckSIM truck.}
\label{fig:trucksim-model-learning-curve}
\end{figure}
This section presents learning curves for training the deep truck model for the simulation truck using the data presented in this section. Figure~\ref{fig:trucksim-model-learning-curve} shows the loss function statistics (mean, min, and max) based on the training form, Equation~\eqref{eq:model-training-form}, and the deployment form, Equation~\eqref{eq:model-deployment-form}, from 30 seeds. Each curve is produced using a separate dataset, both unseen during training.
Both learning curves stabilize and converge by the 600th epoch. They exhibit spikes that we speculate are a symptom of the inherent stochasticity of the mini-batch algorithm we used. An expected loss gap between the training form curve and the deployment form curve is observed.
\subsubsection{Results and model validation}
This section presents model validation results using an unseen validation dataset. Figure~\ref{fig:sim-modeling-error-stats} shows modeling error statistics as a function of model simulation time from 90 independent random trials. Error statistics are generated as:
\[\text{ErrorStatistic}(k)= \text{Statistic}_m\left(\hat{y}_{m}(k| \Phi) - y_{m}(k)\right)\]
where $m$ is the trial number, and $\hat{y}$ follows the deployment form, Equation~\eqref{eq:model-deployment-form}. Distributions (initial speed distribution, visited speeds over time and across all trials, visited road grades over time and across all trials) of the validation dataset are shown in Figure~\ref{fig:sim-modeling-validation-dataset-distributions}.
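Computationally, these ensemble statistics reduce to per-time-step moments over the trial axis; a minimal sketch (array shapes are our assumption) is:
\begin{verbatim}
import numpy as np

def error_statistics(y_hat, y):
    """y_hat, y: arrays of shape (M_trials, K_steps, n_outputs)."""
    err = y_hat - y                           # per-trial error time series
    return err.mean(axis=0), err.std(axis=0)  # mean/std over trials per step
\end{verbatim}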
The mean of the modeling error stays bounded near zero over the 40-second simulation time. On average, acceleration deviates by less than $0.5 m/s^{2}$ and fuel rate deviates by less than $10^{-3}$ $cm^3/s$ at any given time. The statistics also show that the error of the modeled speed is expected to remain within $1.5m/s$ over a 40-second simulation time.
A sample model validation dataset is shown in Figure~\ref{fig:trucksim-model-validation-sample-dataset} and Figure~\ref{fig:trucksim-model-validation-error-sample-dataset}. In this validation experiment, the model is initialized once at $k = 0$ and then simulated for 2000 time steps ($t_\text{end} = 200s$). The dataset exhibits a large initial error transient with a significant model response delay estimation error. The error statistics appear to be (by visual inspection) stationary, consistent with the error statistics in Figure~\ref{fig:sim-modeling-error-stats}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\linewidth, trim={18cm 0 16cm 0},clip]{figures/migrated-deep-truck/trucksim-model-error-stats/clean_trucksim_model_error_stats.png}
\caption{Model error statistics---standard deviation (red shaded areas) and mean (blue curves) from 90 trials---for deep modeling of the TruckSIM truck.}
\label{fig:sim-modeling-error-stats}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.70\linewidth, trim={0cm 4cm 0cm 2cm},clip]{figures/trucksim-modeling-dataset/clean_trucksim_validation_dataset_distributions.png}
\caption{Scenario distribution of the dataset used to compute model validation statistics presented in Figure~\ref{fig:sim-modeling-error-stats}.}
\label{fig:sim-modeling-validation-dataset-distributions}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-validation-sample-dataset.png}
\caption{A sample ground truth and model output using a sample validation set.}
\label{fig:trucksim-model-validation-sample-dataset}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.85\linewidth]{figures/trucksim-modeling-dataset/trucksim-validation-error-sample-dataset.png}
\caption{A sample error over time between ground truth and model output using a sample validation set.}
\label{fig:trucksim-model-validation-error-sample-dataset}
\end{figure}
\subsection{Deep-RL control of the simulation truck}
This section presents experimental results for the development of a deep-RL CACC as described in Section~\ref{sec:deep-rl-control} for the simulation truck.
\subsubsection{Training setup and learning curves}
For this experiment, the sampling rate is set to $10Hz$ ($dt = 0.1s$) and we assume a flat driving environment with no other relevant driving environment variables; thus we substitute $w(k)$ with the empty set. The controllable input to the ego truck (agent output) is given by $u(k) = [E_\text{cmd, ego}(k), B_\text{cmd, ego}(k)],$
where $E_\text{cmd, ego}(k)$ is requested engine torque in [$N-m$] and $B_\text{cmd, ego}(k)$ is requested service brake master cylinder pressure percentage [$0-100\%$]. The agent $\pi$ is modeled using an ANN with 3 hidden layers, each of size 25.
Each training episode is initialized using $p_\text{leader}(k=0) = 0$; random initial ego truck position $p_\text{ego}(k=0)$ given by \(- (v_\text{ego}(k=0) \cdot Tg_\text{target} + \text{uniform} (-1.39, 1.39))\); random initial leader speed \(v_\text{leader}(k=0)\) from a \(\text{uniform} (8.3, 22.2)\) distribution and \(v_\text{ego}(k=0)\) from a \(v_\text{leader}(k=0) + \text{uniform} (-1.39, 1.39)\) distribution; and random desired time gap $Tg_\text{target}$ from a \(\text{uniform} (2, 5)\) distribution. To simplify the setup, we also assume $a_\text{leader}(k) = 0$.
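Written out, this initialization amounts to the following sampling sketch (SI units as given above; the function name is our own):
\begin{verbatim}
import numpy as np

def sample_episode_init(rng):
    """One training-episode initialization for the simulation-truck CACC."""
    tg_target = rng.uniform(2.0, 5.0)            # desired time gap [s]
    v_leader = rng.uniform(8.3, 22.2)            # initial leader speed [m/s]
    v_ego = v_leader + rng.uniform(-1.39, 1.39)  # ego speed near leader speed
    p_leader = 0.0
    # ego placed behind the leader at the desired gap plus a perturbation
    p_ego = -(v_ego * tg_target + rng.uniform(-1.39, 1.39))
    return p_leader, v_leader, p_ego, v_ego, tg_target

rng = np.random.default_rng(0)
print(sample_episode_init(rng))
\end{verbatim}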
The deep-RL controller was trained with RLLab \cite{duan2016benchmarking} using a batch size of 20000, a max path length of 800 (sampled at 10Hz), and a discount factor of 0.9999. We trained ten policies (ten seeds). The average discounted returns are plotted in Figure~\ref{fig:trucksim-rl-cacc-learning-curves}. The trained policies shown in the plot converged after 500 iterations. The observed sharp, effectively negative-infinity return values are caused by crashes between the two trucks inside the training environment, as specified by the reward function presented in Section~\ref{sec:deep-rl-cacc}. These crashes happen as the deep reinforcement learning agent explores the state-action space, which is implemented here by means of a stochastic agent policy.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/migrated-deep-truck/deep-policy-training-trucksim/adr-deep-trucksim-deeprl-cacc.png}
\caption{Learning curve---min/max (light red shaded area) and mean (dark red curve) from ten seeds---for deep cruise control policy based on deep model of the TruckSIM truck.}
\label{fig:trucksim-rl-cacc-learning-curves}
\end{figure}
\subsubsection{Control validation results}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/trucksim_truck_deepenv_cacc_validation/trucksim-truck-deepenv-rl-cacc-validation.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/trucksim_truck_trucksimenv_cacc_validation/trucksim_rl_cacc_ten_rand_sims.png}
\end{subfigure}
\caption{Control error statistics---min/max (dashed curves), standard deviation (red shaded areas) and mean (blue curves)---for the deep policy evaluated against the deep environment and against TruckSIM environment.}
\label{fig:trucksim-truck-rl-cacc-experiment}
\end{figure*}
This section validates the performance of the deep-RL cooperative adaptive cruise controller against both the deep environment and a zero-shot transfer into TruckSIM, as shown in Figure~\ref{fig:trucksim-truck-rl-cacc-experiment}. The DeepEnv-set experiment is a replication of the training setup and consists of 100 rollouts drawn from the same training distributions (environment model and initialization distributions). The same controller is zero-shot transferred to TruckSIM to produce the TruckSIM-set, consisting of 10 rollouts drawn from the same initialization distributions.
The policy is designed to simultaneously regulate speed and time gap. In the DeepEnv-set, the time-gap error converges to a steady-state error within $\pm0.05s$ within $10s$ of the start of the experiment, while the speed error converges to a steady-state error within $\pm0.03m/s$ within $25s$ of the start of the experiment, both with a mean error of approximately zero.
The TruckSIM-set evaluates the transfer of the same policy to the TruckSIM environment with the same random distributions. The observed shift in control performance is caused by a shift in the truck model distribution due to the model mismatch discussed in the modeling experiments. The time-gap error converges to a steady-state error between $-0.2s$ and $0.04s$ within $10s$ of the start of the experiment, while the speed error converges to a steady-state error within $\pm0.02m/s$ within $25s$ of the start. The time-gap mean error converges to $-0.06s$, while the speed mean error converges to approximately zero. The controller exhibits a nonlinear bimodal speed control over/undershoot.
Figure~\ref{fig:trucksim-truck-rl-cacc-experiment} shows preliminary learning results for the deep-RL cooperative adaptive cruise controller and provides preliminary evidence that only marginal shifts in error statistics should be expected when transferring the policy from the deep-truck environment to the ``real'' environment (here conducted using a simulated truck).
\subsection{Deep modeling of full-size trucks (field experiments)}
This section presents field experimental results for the model described in this article. The section documents experiments conducted using two differently configured real-physical heavy duty trucks. These same two trucks were modeled using two different physics-based power-train models in \cite{XYLu2005HDVModelandLongControl} and \cite{lu2017integrated}, which were used to develop high precision control systems within each respective article.
\subsubsection{Configuration one: Freightliner}
In this experiment, the truck is not equipped with any sensors relevant to the driving environment (e.g. road grade) and thus we substitute $w(k)$ with the empty set. The controllable input to the truck is given by $u(k) = [ E_\text{cmd}(k), B_\text{cmd}(k)], $ where $E_\text{cmd}(k)$ is requested percentage engine torque in [$0-100\%$] and $B_\text{cmd}(k)$ is service brake pedal position [$0-100\%$].
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
Experiments for this truck were carried out on a nearly flat test track with straight roads, the longest of which is around 300 meters, at the Richmond Field Station in California. The truck was driven for about 16 minutes to collect a primarily slow-speed dataset covering speeds from zero to $18m/s$. The dataset was split into 85 percent for training and 15 percent for testing modeling performance on an unseen dataset.
\subsubsection{Configuration two: Volvo}
The truck is equipped with a road grade sensor where $w(k) = \theta_\text{rdg}(k) [\%]$. The controllable input to the truck is given by $u(k) = [E_\text{cmd}(k), B_\text{cmd}(k)],$ where $E_\text{cmd}(k)$ is requested percentage engine torque in [$0-100\%$] and $B_\text{cmd}(k)$ is service brake command [$m/s^2$].
Actuation and accessible signal measurement of the brake system in this truck are asymmetric. The service brake system in this truck is not directly actuatable (and its signals are not interceptable); instead, a desired deceleration is processed through Volvo proprietary systems. After collecting the data, we substitute brake commands with the observed deceleration gated by the brake pedal switch signal.
The output (truck response) vector is given by $y(k) = [a(k), v(k), F_\text{rate}(k)],$ where $a(k)$ is longitudinal acceleration in [$m/s^2$], $v(k)$ is longitudinal speed in [$m/s$], and $F_\text{rate}(k)$ is fuel rate in [$cm^3/s$].
This truck was primarily driven over non-flat open freeways. The truck was driven for about 24 minutes to collect a primarily freeway-speed driving dataset covering speeds from $20m/s$ to $30m/s$. The dataset was split into 85 percent for training and 15 percent for testing modeling performance on an unseen dataset.
\subsubsection{Results and model validation}
We validate modeling performance against an unseen ground truth dataset from each truck configuration, as shown in Figure~\ref{fig:field-experiments-error-stats}. In this figure, the mean and standard deviation of the acceleration, speed, and fuel rate modeling errors are charted as a function of model simulation time. The statistics were produced from an ensemble of ten time-series simulations. Each simulation is freshly initialized at time zero and simulated using knowledge of the inputs and the uncontrollable conditions only. Error statistics are generated as $\text{ErrorStatistic}(k)= \text{Statistic}_m\left(\hat{y}_{m}(k| \Phi) - y_{m}(k)\right)$, where $m$ is the trial number and $\hat{y}$ follows the deployment form, Equation~\eqref{eq:model-deployment-form}.
In this figure, acceleration error is bounded within $\pm0.5m/s^2$. For the Freightliner, speed error remained bounded within $\pm0.5m/s$ with a mean speed error of $0.12m/s$ after the initial transient. For the Volvo, speed error remained between $-0.5m/s$ and $1m/s$ with a significant error bias approaching $0.5m/s$ during the 15 seconds of simulation. Fuel rate modeling error is bounded within $\pm1cm^3/s$ once the initial transient decays. We speculate that the model performance degradation for the Volvo truck is influenced by insufficient data to model truck dynamics over graded roads.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/field-modeling-error-stats/clean_fl_vl_modeling_error_stats_batches10.png}
\caption{Model error statistics---standard deviation (red shaded areas) and mean (blue curves) from 10 trials---for deep modeling of the Freightliner and the Volvo trucks.}
\label{fig:field-experiments-error-stats}
\end{figure*}
\subsection{Deep-RL control of full-size trucks (field experiments)}
This section presents control experiments for the deep-RL CACC system presented earlier. Due to lab access limitations during the COVID-19 pandemic, the system was operated as a two-vehicle ACC system (using radar instead of direct communications) on non-flat open freeways. The leader is a passenger car and the follower is the Volvo truck presented earlier.
Figure~\ref{fig:volvo_rl_acc_gap_closing} shows gap-closing regulation performance where the leader drove at nearly constant speed, with an initial speed error of $2m/s$, an initial time-gap error of $1s$, and a desired time-gap setting of $1.5s$. The gap was closed within 15 seconds to within error bounds of $\pm0.2m/s$ and $0.35s$. The leader conducted quick successive speed changes towards the end of the experiment, causing the observed speed ripple after time $17s$.
Figure~\ref{fig:volvo_rl_acc_long_run} shows tracking performance over an arbitrary driving cycle conducted by the leader vehicle with a desired time-gap setting of $1.5s$. Speed error was regulated to within $\pm0.5m/s$ and the time-gap error was regulated to between $0.05s$ and $0.3s$. A lane change maneuver was conducted at time $63s$, causing a momentary misalignment of the ego vehicle's sensor line-of-sight with the leader. Speed and distance measurements of a farther vehicle downstream were detected, causing the observed discontinuity.
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_gap_closing_speed_t_gap.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_gap_closing.png}
\end{subfigure}
\caption{Tracking speed, time-gap, and control error for the Volvo deep CACC policy evaluated against the real environment---gap closing maneuver.}
\label{fig:volvo_rl_acc_gap_closing}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_long_run_speed_t_gap.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\linewidth, trim={4cm 0 3cm 0},clip]{figures/migrated-deep-truck/vl-acc-validation/volvo_rl_acc_long_run.png}
\end{subfigure}
\caption{Tracking speed, time-gap, and control error for the Volvo deep CACC policy evaluated against the real environment---leader following maneuver.}
\label{fig:volvo_rl_acc_long_run}
\end{figure*}
\section{Conclusion}
A detailed study of each heavy duty truck in a given pool of trucks has historically been required to develop and fit precise analytical models and controls. This article discusses the application of deep learning and deep reinforcement learning as an approach to simplify this process and abstract away the detailed underlying vehicle mechanics, with the potential to improve modeling and control precision. A brief experimental evaluation is presented as a walk-through of the process and as a preliminary performance validation.
The deep models and deep-RL controls presented in this article successfully (1) infer relevant latent and state variables (such as the gearbox state), (2) perform dynamic state estimation (such as the selected gear and brake cylinder pressure at \(t=0\)) and tracking (latent state variable values for \(t>0\)), and (3) perform system identification and parameter estimation (such as the aerodynamic drag effect and its coefficient).
This article focuses on outlining the process of applying deep learning and deep reinforcement learning for modeling and control of heavy duty trucks. More extensive experimentation and comparison with established classical approaches are still required for validation and performance evaluation. Furthermore, the process presented here still requires full replication for each target truck and each truck combination (multi-truck environments). Further investigation is still required to introduce transfer learning of longitudinal dynamics across mechanical configurations. Data sampling efficiency and the utilization of existing first-principle models could also be investigated to improve the process presented here.
\section*{Acknowledgements}
The authors would like to acknowledge John Spring and David Nelson for their technical conversations about automation software and hardware for heavy duty trucks and for their support in the field. This research work was supported in part by King Abdulaziz City for Science and Technology (KACST).
\bibliographystyle{apalike}
\section{Introduction}
\subsection{Description of the problem}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
In this article, we consider a model describing inelastic strains in solids that are subject to the effects of temperature. The mathematical model describing such dynamics consists of the classical continuum mechanical models (see for example \cite{Alber,temam}) and a nonlinear heat equation derived from the first law of thermodynamics. From a mathematical point of view, a nonzero thermal expansion of the material makes the system very complicated, and the mathematical analysis of equations describing such a process is one of the challenging problems in applied mathematics. One of the main reasons is that the nonlinearities occurring in the system are only integrable functions and the standard energy methods do not work. By analyzing the literature (which we will cover in detail later) we can conclude that the original model has not been satisfactorily researched from a mathematical point of view. In the models considered so far, simplifications are made which allow one to obtain results concerning the existence of very weak solutions. Here, we study a system that has not previously been considered in the generality presented. For such a system, we use our proposed approximations, thanks to which we show the existence of classical weak solutions. The approach presented in this article is completely new and has not been used in the framework under consideration.
Let us assume the body occupies the bounded domain $\Omega\subset\R^3$ with smooth boundary $\partial\Omega$ and fix a positive real number $\T>0$. The dynamics of inelastic strains in solids can be described in two ways. The standard unknowns in this theory are the displacement field $u:\Omega\times[0,\T]\rightarrow \R^3$ and the stress tensor $\sigma:\Omega\times[0,\T]\rightarrow \S$, where $\S$ denotes the set of symmetric $3\times3$-matrices. A very popular description is given in Alber's book \cite{Alber}, which has often been used in recent years, see for example \cite{AlberChelminski1, chegwia2, ChelminskiOwczarekthermoII, GKS15, GKO, Roubicek}. In this approach the inelastic part of the infinitesimal strain tensor $\ve(u)=\mathrm{sym}(\nabla_x u)$ is described by an additional internal variable $\varepsilon^p$, i.e.
$$\ve(u)=(\ve(u)-\varepsilon^p)+\varepsilon^p$$
and the first part on the right-hand side describes the pure elastic deformations ($\mathrm{sym}(\nabla_x u)$ denotes the symmetric part of the gradient of the displacement). Then, the elastic constitutive equation takes the form of a generalized Hooke's law:
$$\sigma = \mathbb{C}(\ve(u)-\varepsilon^p)\,,$$
where the operator $\mathbb{C}:\S\rightarrow\S$ is a classical $4^{\mathrm{th}}$ order
elasticity tensor. The inelastic constitutive equation is given in the form of evolution equation for $\varepsilon^p$
\begin{equation}
\label{fl}
\ve^p_t=\, G(\sigma,\ve^p)\,,
\end{equation}
where $G$ is a given constitutive function (in general $G$ may be a multifunction) and $\ve_t^p$ denotes the time derivative of the tensor $\ve^p$. In the literature we can find various examples of the function $G$ (for instance \cite{Alber,chegwia1,owcz2,ChelNeffOwczarek14} and many others). Observe that different selections of the vector field $G$ lead to different models.
Here, we are going to use a different description, one that was used by Roger Temam in the book \cite{temam} and also in the works \cite{temam1, Bensoussan1996,Bensoufrehsebook,Iosofonea}. This approach does not introduce an additional unknown $\varepsilon^p$. It is based only on the standard unknowns $u$ and $\sigma$. Assuming small deformations and taking into account a special density of external forces acting on the material (a damping term), the slow motion of the body is governed by the classical balance of momentum
\begin{equation}
\label{BM}
\mathrm{div}(\sigma)=-F-\mathrm{div}\,\D (\ve(u_t))\,,
\end{equation}
where the acceleration term is omitted since we consider a quasi-static problem. The function $F:\Omega\times[0,\T]\rightarrow \R^3$ in \eqref{BM} describes the applied body forces. The general equation \eqref{BM} must be completed by constitutive relations. The body is subject to thermal expansion, therefore the Cauchy stress tensor consists of two parts,
\begin{equation}
\label{CE}
\sigma = T-f(\theta)\,\id\,,
\end{equation}
where $T:\Omega\times[0,\T]\rightarrow \S$ is the elastic stress, the second term is the thermal stress, and $\theta:\Omega\times [0,\T]\rightarrow \R$ is the temperature of the material. The function $f\colon\R\to \R$ is a given nonlinear constitutive function depending on the material under consideration. In this paper we deal with isotropic materials. This means that the 4th order tensor of elastic constants $\D$ is defined by the Lam\'e constants $\lambda$ and $\mu$, i.e.
\begin{equation}
\label{CE1}
\D_{ijkl}=\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})+\lambda \delta_{ij}\delta_{kl}\,,
\end{equation}
where $\delta$ denotes the Kronecker symbol. Then, the elastic constitutive equation takes the form of a generalized Hooke's law:
\begin{equation}
\label{CE2}
T = 2\mu\ve(u)+\lambda\mathrm{tr}(\ve(u))\id\,.
\end{equation}
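Indeed, for a symmetric tensor $\xi\in\S$, definition \eqref{CE1} gives
\begin{equation*}
(\D\xi)_{ij}=\D_{ijkl}\,\xi_{kl}=\mu(\xi_{ij}+\xi_{ji})+\lambda\,\delta_{ij}\,\xi_{kk}=2\mu\,\xi_{ij}+\lambda\,\mathrm{tr}(\xi)\,\delta_{ij}\,,
\end{equation*}
which is exactly \eqref{CE2} with $\xi=\ve(u)$, i.e.\ $T=\D\ve(u)$.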
The assumptions $\mu>0$ and $3\lambda+2\mu>0$ mean that the elastic energy is positive definite, and they imply that the inverse operator of $\D$ exists. Hence, inverting the relation \eqref{CE2} we obtain
\begin{equation}
\label{CE3}
\ve(u)=\D^{-1}T\,,
\end{equation}
where $\D^{-1}:\S\rightarrow\S$ is a positive definite operator which leads to
\begin{equation}
\label{CE4}
\ve(u) = \frac{1}{2\mu} T-\frac{\lambda}{2\mu(2\mu+3\lambda)}\mathrm{tr}(T)\id\,.
\end{equation}
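Relation \eqref{CE4} can be verified directly: taking the trace of \eqref{CE2} gives $\mathrm{tr}(T)=(2\mu+3\lambda)\,\mathrm{tr}(\ve(u))$ (here the assumption $3\lambda+2\mu>0$ is used), and substituting $\mathrm{tr}(\ve(u))=\frac{\mathrm{tr}(T)}{2\mu+3\lambda}$ back into \eqref{CE2} yields
\begin{equation*}
\ve(u)=\frac{1}{2\mu}\Big(T-\lambda\,\frac{\mathrm{tr}(T)}{2\mu+3\lambda}\,\id\Big)\,,
\end{equation*}
which is \eqref{CE4}.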
In this paper we consider a visco-elastic model in which the viscous properties are associated with the inelastic deformation. The inelastic constitutive relation then takes the form
\begin{equation}
\label{ICE}
\D^{-1}T_t+ G(T,\theta)=\ve(u_t)\,,
\end{equation}
where $G$ is a given constitutive function, which in our model depends on $T$ and $\theta$. We are going to deal with a generalization of the Norton-Hoff type constitutive function, which is well known and very popular in the literature, see \cite{temam1, GKO,ChelminskiOwczarekthermoII,GKS15,Bensoussan1996}. More precisely, we replace equation \eqref{ICE} by
\begin{equation}
\label{ICE1}
\D^{-1} T_t+\big\{|\dev(T)|-\beta(\theta)\big\}_{+}^{r}\,\frac{\dev(T)}{|\dev(T)|}=\ve(u_t)\,,
\end{equation}
where $\dev (T)=T-\frac{1}{3}\,\mathrm{tr}\,(T)\cdot\id$ denotes the deviator of a $3\times 3$-tensor and the symbol $\{\xi\}_{+}$ denotes the nonnegative part of the real number $\xi$. The function $\beta:\R\rightarrow\R$ is given and depends on the material under consideration, while $r>1$ is a fixed number. The equations \eqref{BM}, \eqref{CE}, \eqref{CE2} and \eqref{ICE1} are completed by a heat equation for the temperature function $\theta$. As a consequence of the first law of thermodynamics, the heat transfer is governed by the equation
\begin{equation}
\label{HT}
\theta_t-\Delta\theta+f(\theta)\mathrm{div}\,u_t=\big\{|\dev(T)|-\beta(\theta)\big\}_{+}^{r}|\dev(T)|\,.
\end{equation}
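Let us point out the structure of the nonlinearity in \eqref{ICE1} and \eqref{HT}: whenever $|\dev(T)|\leq\beta(\theta)$, we have
\begin{equation*}
\big\{|\dev(T)|-\beta(\theta)\big\}_{+}^{r}=0\,,
\end{equation*}
so \eqref{ICE1} reduces to the purely elastic relation $\D^{-1}T_t=\ve(u_t)$ and the right-hand side of \eqref{HT} vanishes; heat is produced by the inelastic deformation only when the stress deviator exceeds the temperature-dependent threshold $\beta(\theta)$.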
In conclusion, the problem that we study in this article is the following: for a fixed positive real number $\T>0$, find the displacement field $u:\Omega\times[0,\T]\rightarrow \R^3$, the stress tensor $T:\Omega\times[0,\T]\rightarrow \S$ and the temperature $\theta:\Omega\times [0,\T]\rightarrow \R$ solving the system of equations
\begin{equation}
\label{Main}
\begin{split}
\mathrm{div}(T-f(\theta)\id )&=-F-\mathrm{div}\,\D (\ve(u_t))\,,\\[1ex]
\D^{-1} T_t+\big\{|\dev(T)|-\beta(\theta)\big\}_{+}^{r}\,\frac{\dev(T)}{|\dev(T)|}&=\ve(u_t)\,,\\[1ex]
\theta_t-\Delta\theta+f(\theta)\mathrm{div}\,u_t&=\big\{|\dev(T)|-\beta(\theta)\big\}_{+}^{r}|\dev(T)|\,,
\end{split}
\end{equation}
where the tensor $\D$ is defined in \eqref{CE1}. The equations in \eqref{Main} are studied for $x\in\Omega$ and time $t\in [0,\T]$.
The system (\ref{Main}) is considered with the nonhomogeneous Dirichlet boundary condition for the displacement and with the nonhomogeneous Neumann boundary condition for the temperature
\begin{equation}
\label{BC}
\begin{split}
u(x,t)&=g_D(x,t)\quad\, \textrm{ for}\quad x\in \partial \Omega \quad\textrm{and}\quad t\geq 0\,,\\[1ex]
\frac{\partial\,\theta}{\partial\,n}(x,t)&=g_{\theta}(x,t)\quad\,\,\, \textrm{ for}\quad x\in \partial \Omega \quad\textrm{and}\quad t\geq 0\,.
\end{split}
\end{equation}
Finally, we adjoin to the system (\ref{Main}) the following initial conditions
\begin{equation}
\label{IC}
u(x,0)=u_0(x),\quad T(x,0)=T_0(x),\quad \theta(x,0)=\theta_0(x)\,.
\end{equation}
There are many different possible ways to deal with problems in which thermal effects are included; however, none of them guarantees solvability of the problem in the general case. One can linearize the system of equations, i.e.\ take $f(\theta)=c_\theta(\theta-\theta_0)$, in which case the heat equation takes the form
\[
\theta_t - \Delta\theta +c_\theta(\theta-\theta_0)\, \mathrm{div} u_t=r\,.
\]
The nonlinear term is usually approximated in the linear theory by a linear term
$c_0\dyw u_t$ with the physical argument that the temperature in the
deformation process remains close to the reference temperature, see e.g. \cite{KlaweOwczarek,GKO,Bartczak12,Haupt}. The first mathematical analysis of thermoelasticity including the nonlinear heat equation was done in \cite{BlanchardGuibe97} and later in \cite{BlanchardGuibe00}. The authors consider the visco-elastic \emph{Kelvin-Voigt
model}, i.e.\ they assume that the constitutive relation is of the form
\begin{equation}
\sigma=A\sym(\nabla u) +B\sym(\nabla u_t)-f(\theta)\,\id\,,\nn
\end{equation}
where $A,B$ are symmetric and positive definite linear operators acting on
symmetric matrices and $f$ is a constitutive function describing the
thermal part of the stress tensor. The additional term $B\sym(\nabla u_t)$ in the constitutive relation allows one to control the very difficult term $f(\theta)\dyw u_t$ in the heat equation. Under the fundamental assumption of sublinear growth of the function $f$, the authors proved the existence of \emph{renormalized solutions} of the considered system of equations. Another approach is to add an additional damping term to the right-hand side of the momentum equation, see \cite{ChelminskiOwczarekthermoI,ChelminskiOwczarekthermoII,barowcz2}. This term also helps to control the difficult term in the heat equation. Moreover, in these articles the coupling with thermal effects occurs only in the elastic constitutive relation (the constitutive function $G$ does not depend on $\theta$).
Additionally, it is worth emphasizing the works \cite{GKS15,GwiazdaKlaweSwierczewska014,tve-Orlicz} and \cite{KlaweOwczarek,GKO}, where the authors deal with thermo-visco-elasticity systems. However, in these works the thermal expansion does not appear, which means that the Cauchy stress tensor does not depend on the temperature; the coupling between temperature and displacement occurs only in the inelastic constitutive equation. An important point is that in the systems considered by Gwiazda et al.\ the total energy is conserved, contrary to the systems analysed in \cite{Haupt,ChelRacke,Bartczak14,Bartczak12}, in which conservation of the total energy fails. This is caused by the linearisation: the temperature occurring in the nonlinear dissipation term of the heat equation is linearised, without any linearisation of the Cauchy stress tensor.
Another point of view on thermo-mechanical problems is presented by S. Bartels and T.~Roub{\'{\i}}{\v{c}}ek (see for example \cite{BarlesRoubicek} and \cite{Roubicek}), where the authors use the so-called enthalpy transformation and consider energetic solutions. In both of these papers the authors study a Kelvin-Voigt viscous material, but in the article \cite{BarlesRoubicek} they consider a plasticity model with hardening in the quasistatic case, while in \cite{Roubicek} perfect plasticity in the dynamical case is considered.
We emphasize that in the present article we consider a model in which the coupling with thermal effects occurs both in the generalized Hooke's law (equation \eqref{CE}) and in the inelastic constitutive equation \eqref{ICE1}. The total energy of the system \eqref{Main} is also conserved.
\subsection{Main assumptions and main result}
The dissipation term $f(\theta)\dyw u_t$ on the right-hand side of the heat equation $\eqref{Main}_3$ is expected to belong only to $L^1(\Omega\times (0,\T))$. It is known that, in general, for integrable data a weak solution might not exist. From the articles \cite{BoccardoGallouet}, \cite{BlanchardMurat} and \cite{dall96} we can deduce that a solution of the heat equation with $L^1$-data is expected to be a function from $L^p((0,\T)\times\Omega)$ for $p<\frac{N+2}{N}$, where $N$ denotes the dimension of the space in which the heat equation is considered. For this reason, in this paper it is assumed that the function $f:\R\rightarrow\R$ is continuous and satisfies the following growth condition
\begin{equation}
\label{warwzrostu}
|f(r)|\leq a+B|r|^{\alpha}\qquad \mathrm{for\,all}\qquad r\in\R_{+}\,,\,\, a,\,B\geq 0\,\,\, \mathrm{and}\,\,\, \alpha\in\Big(\frac{1}{2},\frac{5}{6}\Big)
\end{equation}
and that there exists a constant $\tilde{B}>0$ such that
\begin{equation}
\label{warwzrostu1}
|f(r)|\leq \tilde{B}(1+|r|)^{\frac{1}{2}}\qquad \mathrm{for\,all}\qquad r\in\R_{-}\,.
\end{equation}
The above growth assumptions on the function $f$ were first used in the articles \cite{BlanchardGuibe97}, \cite{BlanchardGuibe00} and then also in \cite{ChelminskiOwczarekthermoII}, \cite{barowcz2}.
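As an illustration (this particular choice plays no role in the proofs), a continuous function satisfying both \eqref{warwzrostu} and \eqref{warwzrostu1} is
\begin{equation*}
f(r)=B\big(1+\{r\}_{+}\big)^{\alpha}\,,\qquad B>0\,,\quad \alpha\in\Big(\frac{1}{2},\frac{5}{6}\Big)\,,
\end{equation*}
since $(1+r)^{\alpha}\leq 1+r^{\alpha}$ for $r\geq 0$, while $f(r)=B\leq B(1+|r|)^{\frac{1}{2}}$ for $r<0$.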
Additionally, let us assume that the function $\beta:\R\rightarrow\R$ satisfies the following conditions:
\begin{description}
\item[(C1)] $\beta\in C^1(\R;\R)$,
\item[(C2)] there exists $d>0$ such that $\beta(r)\in [0,d]$ for all $r\in\R$,
\item[(C3)] there exists $\tilde{d}>0$ such that $|\beta'(r)|\in [0,\tilde{d}]$ for all $r\in\R$.
\end{description}
Recall that the Prandtl-Reuss inelastic flow rule with the von Mises yield function takes the form \cite{Mises1913,Lionsfr}
\begin{equation}
\label{P-R}
\ve(u_t)-\D^{-1} T_t\in \partial I_{K}(T) \,,
\end{equation}
where the set of admissible elastic stresses $K$ is defined in the following form
$$K = \{T \in \S : |\dev(T)| \leq k \}\,,$$
with $k > 0$ a given material constant (the yield limit). The function $I_{K}$ is the indicator function of the set $K$ and $\partial I_{K}$ denotes the subgradient of the convex, proper, lower semicontinuous function $I_K$ in the sense of convex analysis (for more details we refer to \cite{AubFran}). As is known, the flow rule \eqref{P-R} can be obtained as a limit of the visco-elastic (Norton-Hoff) flow rule (see for example \cite{temam1})
\begin{equation}
\label{N-H}
\ve(u_t)-\D^{-1} T_t=\big\{|\dev(T)|-k\big\}_{+}^{r}\,\frac{\dev(T)}{|\dev(T)|}
\end{equation}
with $r\geq 1$. A natural extension of \eqref{P-R}, by including heat effects, is the following flow rule
\begin{equation}
\label{P-R1}
\ve(u_t)-\D^{-1} T_t\in \partial I_{K(\theta)}(T) \,,
\end{equation}
where the set $K(\theta)$ is of the form $K(\theta) = \{T \in \S : |\dev(T)| \leq k-\theta \}$. The natural range used in practice is $0\leq\theta\leq k$ (more information can be found for example in \cite{ChelRacke}). Then the approximation of equation \eqref{P-R1} takes the form
\begin{equation}
\label{N-H1}
\ve(u_t)-\D^{-1} T_t=\big\{|\dev(T)|-(k-\theta)\big\}_{+}^{r}\,\frac{\dev(T)}{|\dev(T)|}\,.
\end{equation}
Assuming in the equation $\eqref{Main}_2$ that $\beta(\theta)=k-\theta$ and $0\leq\theta\leq k$, we see that the assumptions (C1), (C2) and (C3) are naturally fulfilled. Therefore, the system considered in this article can be treated as an approximation of the Prandtl-Reuss model in which thermal effects are taken into account.
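For instance (a purely illustrative choice, not used in the proofs), a globally defined function satisfying (C1)--(C3) and coinciding with $k-\theta$ to first order near $\theta=0$ is
\begin{equation*}
\beta(r)=k-k\tanh\Big(\frac{r}{k}\Big)\,,
\end{equation*}
since $\beta\in C^{\infty}(\R;\R)$, $\beta(r)\in(0,2k)$ for all $r\in\R$, and $|\beta'(r)|=\mathrm{sech}^2\big(\frac{r}{k}\big)\leq 1$.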
We will assume that the given data $F$, $g_D$, $g_{\theta}$, $u_0$, $T_0$, $\theta_0$ have the regularities
\begin{equation}
\label{regularity}
\begin{split}
F\in L^{\frac{1}{1-\alpha}}(0,\T;L^\frac{1}{1-\alpha}(\Omega;\R^3))\,,&\quad g_D\in W^{1,\frac{1}{1-\alpha}}(0,\T;W^{\alpha,\frac{1}{1-\alpha}}(\partial\Omega;\R^3))\,, \\[1ex]
u_0\in H^1(\Omega;\R^3)\,,&\quad (T_0,\dev(T_0))\in L^2(\Omega;\S)\times L^{2r}(\Omega;\SS)\,,\\[1ex]
g_{\theta}\in L^2(0,\T;L^2(\partial\Omega;\R))\,,&\quad \theta_0\in L^1(\Omega;\R)\,,
\end{split}
\end{equation}
where $\alpha$ comes from assumption \eqref{warwzrostu}; observe that $\alpha\in\big(\frac{1}{2},\frac{5}{6}\big)$ implies $\frac{1}{1-\alpha}\in (2,6)$. Such atypical assumptions on the data $F$ and $g_D$ are related to the nonhomogeneous Dirichlet boundary condition for the displacement vector. Systems with a nonhomogeneous boundary condition for the displacement, in which the temperature dependence occurs in both constitutive equations, have not been investigated so far.
Let us introduce a definition of a solution for the system \eqref{Main}. \begin{de}
\label{Maindef} Let $1<q<\frac{5}{4}$. We say that a vector $(u,T,\theta)$ is a solution of the system \eqref{Main} with the boundary and initial conditions \eqref{BC} and \eqref{IC} if:\\[1ex]
{\bf 1.}\hspace{2ex} it has the following regularities:
\begin{equation}
\label{regularity1}
\begin{split}
u\in H^{1}(0,\T;H^1_{g_D}(\Omega;\R^3))\,,&\quad T\in L^{\infty}(0,\T;L^2(\Omega;\S))\,,\\[1ex]
\dev(T)\in L^{r+1}(0,\T;L^{r+1}(\Omega;\SS))\,,&\quad T_t\in L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\S))\,,\\[1ex]
\theta\in L^{q}(0,\T;W^{1,q}(\Omega))& \cap C([0,\T];L^1(\Omega))\,,\\[1ex]
f(\theta)\in L^2(0,\T;L^2(\Omega))\,,&\quad \theta_t\in L^1\big(0,\T;\big(W^{1,q'}(\Omega)\big)^{\ast}\big)\,,
\end{split}
\end{equation}
where $H^1_{g_D}(\Omega;\R^3):=\{u\in H^1(\Omega;\R^3):\, u=g_D\,\,\mathrm{on}\,\,\partial\Omega\}$ and the space $\big(W^{1,q'}(\Omega)\big)^{\ast}$ is the space of all linear bounded functionals on $W^{1,q'}(\Omega)$ $(\frac{1}{q}+\frac{1}{q'}=1)$.\\[1ex]
{\bf 2.}\hspace{2ex} the equations \eqref{Main} are satisfied in the following form
\begin{equation}
\int_0^\T\int_{\Omega} \big(T - f(\theta)\id\big) \ve(\psi)\, \di x\,\di t + \int_0^\T\int_{\Omega}\D\varepsilon(u_{t}) \,\varepsilon(\psi)\,\di x\,\di t =\int_0^\T\int_{\Omega} F\,\psi\,\di x\,\di t
\label{balancede}
\end{equation}
for every function $\psi\in C^\infty([0,\T];H^1_0(\Omega;\R^3))$,
\begin{equation}
\begin{split}
\int_0^{\T}\int_{\Omega}\D^{-1} T_{t}\,\varphi\,\di x\,\di t+& \int_0^{\T}\int_{\Omega}\big\{|\dev(T)|-\beta(\theta)\big\}^{r}_{+}\frac{\dev(T)}{|\dev(T)|}\,\dev(\varphi)\,\di x\,\di t\\[1ex]
&=\int_0^{\T}\int_{\Omega}\ve(u_{t})\,\varphi\,\di x\,\di t
\end{split}
\label{421de}
\end{equation}
for all $\varphi \in L^{r+1}(0,\T;L^{r+1}(\Omega;\S))$ and
\begin{equation}
\label{tempde}
\begin{split}
&\int_0^\T\int_{\Omega}\theta_{t}\, \phi\,\di x\,\di t + \int_0^\T\int_{\Omega}\nabla\theta\, \nabla\phi\, \di x\,\di t +\int_0^\T\int_{\Omega} f\big(\theta )\mathrm{div} (u_{t})\, \phi \,\di x\,\di t\\[1ex]
&\hspace{2ex}= \int_0^\T\int_{\Omega} \big\{|\dev(T)|-\beta(\theta)\big\}^{r}_{+}|\dev(T)|\,\phi\, \di x\,\di t+\int_0^\T\int_{\partial\Omega}g_{\theta}\,\phi\,\di S(x)\,\di t
\end{split}
\end{equation}
for all $\phi\in C^\infty([0,\T];C^\infty(\overline{\Omega}))$.\\[1ex]
{\bf 3.}\hspace{2ex} for almost all $x\in\Omega$ the initial conditions
\begin{equation}
\label{ICde}
u(x,0)=u_0(x),\quad T(x,0)=T_0(x),\quad \theta(x,0)=\theta_0(x)\,.
\end{equation}
are met.
\end{de}
\begin{remark}
Taking in the equation \eqref{balancede} test functions from the space $C^\infty([0,\T];C^\infty_0(\Omega;\R^3))$, we deduce that the weak divergence $\div (T-f(\theta)\id+\D\ve(u_t))$ fulfills
$$\div (T-f(\theta)\id+\D\ve(u_t))=-F \in L^{\frac{1}{1-\alpha}}(0,\T;L^\frac{1}{1-\alpha}(\Omega;\R^3))$$
and in that sense the equation $\eqref{Main}_1$ holds for almost all $(x,t)\in \Omega\times (0,\T)$. Additionally, the equation \eqref{421de} holds for all functions in the linear space $ L^{r+1}(0,\T;L^{r+1}(\Omega;\S))$, therefore
$$\D^{-1} T_t(x,t)+\big\{|\dev(T(x,t))|-\beta(\theta(x,t))\big\}_{+}^{r}\,\frac{\dev(T(x,t))}{|\dev(T(x,t))|}=\ve(u_t(x,t))$$
for almost all $(x,t)\in \Omega\times (0,\T)$, which means that the inelastic constitutive relation is fulfilled in a strong sense. Moreover, it is not difficult to conclude that equation \eqref{tempde} holds for almost every $t\in (0,\T)$, i.e.
\begin{equation}
\label{tempde1}
\begin{split}
&\int_{\Omega}\theta_{t}\, \phi\,\di x\ +\int_{\Omega}\nabla\theta\, \nabla\phi\, \di x +\int_{\Omega} f\big(\theta )\mathrm{div} (u_{t})\, \phi \,\di x\\[1ex]
&\hspace{2ex}= \int_{\Omega} \big\{|\dev(T)|-\beta(\theta)\big\}^{r}_{+}|\dev(T)|\,\phi\, \di x+\int_{\Omega}g_{\theta}\,\phi\,\di S(x)\,.
\end{split}
\end{equation}
\end{remark}
\begin{tw}$\mathrm{(Main\, result)}$\\[1ex]
\label{Mainresult}
Let us assume that the growth assumptions \eqref{warwzrostu}, \eqref{warwzrostu1} and the assumptions (C1), (C2), (C3) are satisfied. Additionally, assume that the given data $F$, $g_D$, $g_{\theta}$, $u_0$, $T_0$, $\theta_0$ have the regularities specified in \eqref{regularity}. Then there exists a solution of the system \eqref{Main} with the boundary and initial conditions \eqref{BC} and \eqref{IC} in the sense of Definition \ref{Maindef}.
\end{tw}
The main idea of the proof of Theorem \ref{Mainresult} is our proposed approximation of the inelastic constitutive relation $\eqref{Main}_2$. Simultaneously we make a truncation in the thermal part of the stress tensor and in the nonlinear terms occurring in the heat equation $\eqref{Main}_3$. The applied approximation allows us to obtain $L^2$-strong solutions (Definition \ref{defk}) at every step of the approximation. Comparing our results with others from the literature, one can conclude that in many cases the approximations used did not allow the models to be considered in the generality presented here. For example, in the articles \cite{ChelminskiOwczarekthermoI,ChelminskiOwczarekthermoII} and \cite{barowcz2} the authors use the Yosida approximation to show existence within the approximation used. The Yosida approximation did not allow one to consider a model in which the flow rule depends on the temperature; additionally, solutions were obtained only in the renormalised sense. In the works \cite{GKS15,GwiazdaKlaweSwierczewska014,tve-Orlicz} and \cite{GKO}, Gwiazda et al.\ proposed a rather involved two-level Galerkin approximation for the visco-elastic strain tensor. With the help of this approximation, they were able to consider flow rules which depend on the temperature. However, the dependence on temperature in the elastic constitutive relation occurs only in elementary cases such as homogeneous thermal expansion \cite{GKO}. In these articles, for the models in question, very weak solutions were obtained, where the visco-elastic strain tensor can be recovered only from the equation for its evolution. We also want to mention two articles, \cite{HERZOGtermoplast} and \cite{KlaweOwczar20}, in which the dependence on temperature occurs in both the elastic and the inelastic constitutive equations. However, setting aside the differences in the constitutive relations under consideration, there it is assumed that the nonlinear function $f$ is continuous and bounded (in the article \cite{HERZOGtermoplast} $f$ is even a Lipschitz function), which makes the model easier to analyze. Summarizing, Theorem \ref{Mainresult} provides the existence of a weak solution for model \eqref{Main} in which only the heat equation is satisfied in a weak sense. To pass to the limit in the approximation used, we apply Young measures, the approach of Boccardo and Gallou{\"e}t, truncation methods for the heat equation used by D. Blanchard, and the Minty-Browder trick.
\section{Truncation of the problem}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
Before we propose an approximation for system \eqref{Main}, let us consider two auxiliary initial-value problems:
\begin{equation}
\left\{
\begin{aligned}
-{\div}\D\varepsilon(\tilde{u}_t)&= F & \qquad\mbox{in } \Omega\times (0,\T), \\
\tilde{u}_{t} &= g_{D_{,t}} & \mbox{on } \partial\Omega\times (0,\T),
\\
\tilde{u}(x,0) &= 0 & \mbox{in } \Omega
\end{aligned}
\right.
\label{war_brz_u}
\end{equation}
and
\begin{equation}
\left\{
\begin{aligned}
\tilde{\theta}_t -\Delta \tilde{\theta} &= 0 & \qquad\mbox{in } \Omega\times (0,\T), \\
\frac{\partial\tilde{\theta}}{\partial n} &= g_{\theta} & \mbox{on } \partial\Omega\times (0,\T), \\
\tilde{\theta}(x,0) &= 0 & \mbox{in } \Omega,
\end{aligned}
\right.
\label{war_brz_t}
\end{equation}
where $F$ is a given volume force, and $g_D$ and $g_\theta$ are given boundary values for the displacement and the thermal flux, respectively; $g_{D_{,t}}$ denotes the time derivative of the function $g_D$. Existence and regularity of solutions to these systems are standard results. Indeed, the regularity assumptions \eqref{regularity} allow us to use Corollary 4.4 of \cite{Valent} and obtain one and only one solution $\tilde{u}_t$ of \eqref{war_brz_u} belonging to $L^{\frac{1}{1-\alpha}}(0,\T;W^{1,\frac{1}{1-\alpha}}(\Omega;\R^3))$. The initial condition in \eqref{war_brz_u} gives $\tilde{u}\in W^{1,\frac{1}{1-\alpha}}(0,\T;W^{1,\frac{1}{1-\alpha}}(\Omega;\R^3))$. Additionally, using Corollary 5.7 of \cite{Valent} we get the solution $\tilde{\theta}$ of \eqref{war_brz_t} such that $\tilde{\theta}\in L^2(0,\T;H^1(\Omega;\R))$ and $\tilde{\theta}_t\in L^2(0,\T;L^2(\Omega;\R))$.\\[1ex]
If we denote by $(\hat{u},T,\hat{\theta})$ the solution of the problem \eqref{Main} - \eqref{IC} and define $u=\hat{u}-\tilde{u}$ and $\theta=\hat{\theta}-\tilde{\theta}$, we observe that we can write the investigated problem equivalently as follows: for $x\in \Omega$ and $t\in [0,\T]$
\begin{equation}
\label{Main1}
\begin{split}
\mathrm{div}(T-f(\theta+\tilde{\theta})\id )&=-\mathrm{div}\,\D (\ve(u_t))\,,\\[1ex]
\D^{-1} T_t+\big\{|\dev(T)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T)}{|\dev(T)|}&=\ve(u_t)+\ve(\tilde{u}_t)\,,\\[1ex]
\theta_t-\Delta\theta+f(\theta+\tilde{\theta})\mathrm{div}(u_t+\tilde{u}_t)&=\big\{|\dev(T)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}|\dev(T)|\,,
\end{split}
\end{equation}
with the initial-boundary conditions in the following form
\begin{equation}
\label{eq:Ini-Bond}
\begin{array}{rclrcl}
u_{|_{\partial\Omega}}&=&0,\quad&
\frac{\partial\,\theta}{\partial\,n}_{|_{\partial\Omega}}&=&0, \\[1ex]
\theta(0)=\hat{\theta}_0&=&\theta_{0},&
u(0)&=&\hat{u}_0=u_0,\quad
T(0)=T_0.
\end{array}
\end{equation}
It follows from the above considerations that in order to prove Theorem \ref{Mainresult} it is enough to find a solution of the initial-boundary value problem \eqref{Main1} and \eqref{eq:Ini-Bond}, i.e.\ one which satisfies the conditions of Definition \ref{Maindef}. Therefore, it is sufficient to propose an approximation for system \eqref{Main1}. This approximation will use a truncation function.
For any positive real number $k>0$, let us define the truncation function $\TC_k$ at height $k$, i.e.\ $\TC_{k}(r)=\min\{k,\max(r,-k)\}$. Notice that $\TC_k(\cdot)$ is a real-valued Lipschitz function. Moreover, let us define $\varphi_k(r)=\int_0^r \TC_k(s)\,\di s$, hence
\begin{equation}
\label{pierwotna}
\varphi_k(r) = \left\{ \begin{array}{ll}
\frac{1}{2}r^2 & \textrm{if}\quad |r|\leq k\,,\\[1ex]
\frac{1}{2}k^2+k(|r|-k) & \textrm{if}\quad |r|>k\\
\end{array} \right.
\end{equation}
and $\varphi_k$ is a $W^{2,\infty}(\R;\R)$-function with linear growth at infinity.\\
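The following elementary properties of $\varphi_k$ follow directly from \eqref{pierwotna} and are what is typically needed when $\varphi_k$ enters energy estimates: $\varphi_k'=\TC_k$, $0\leq\varphi_k''\leq 1$ almost everywhere, and
\begin{equation*}
k|r|-\frac{k^2}{2}\,\leq\,\varphi_k(r)\,\leq\, k|r|\qquad\textrm{for all}\quad r\in\R\,,
\end{equation*}
where the lower bound is a consequence of $\frac{1}{2}r^2-k|r|+\frac{k^2}{2}=\frac{1}{2}(|r|-k)^2\geq 0$.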
We propose the following approximation of the system \eqref{Main1}: for $k>0$ we consider the system
\begin{equation}
\label{AMain1}
\begin{split}
\mathrm{div}\big(T^k-f\big(\TC_k(\theta^k+\tilde{\theta})\big)\id \big)=-\mathrm{div}\,\D (\ve(u^k_t))&\,,\\[1ex]
\D^{-1} T^k_t+\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}
+\frac{1}{k}|\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|} &=\ve(u^k_t)+\ve(\tilde{u}_t)\,,\\[1ex]
\theta^k_t-\Delta\theta^k+f\big(\TC_k(\theta^k+\tilde{\theta})\big)\mathrm{div}(u^k_t+\tilde{u}_t)= \TC_k\big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}&|\dev(T^k)|\big)\,.
\end{split}
\end{equation}
The system \eqref{AMain1} is considered with the same boundary conditions as the system \eqref{Main1} and with the following initial conditions
\begin{equation}
\label{eq:Ini-Bondk}
\theta^k(0) = \TC_k(\theta_{0}),\quad
u^k(0)=\hat{u}_0=u_0,\quad
T^k(0)=T_0.
\end{equation}
In order to state the existence result for the system \eqref{AMain1}, we start with the following notion of solution.
\begin{de}
\label{defk}
Suppose that the given data satisfy \eqref{regularity}. We say that for each positive number $k>0$, the vector $(u^k, T^k, \theta^k)$ is a solution of the truncated system \eqref{AMain1} with boundary conditions \eqref{eq:Ini-Bond} and initial conditions \eqref{eq:Ini-Bondk} if
\begin{equation*}
\begin{split}
& u^k \in H^1(0,\T;H^1(\Omega;\R^3))\,,\quad T^k\in H^1(0,\T;L^2(\Omega;\S))\,,\\[1ex]
&\dev(T^k)\in L^{2r}(0,\T; L^{2r}(\Omega;\SS))\,,\quad\theta^k\in L^{\infty}(0,\T;H^1(\Omega))\cap H^1(0,\T;L^2(\Omega))
\end{split}
\end{equation*}
and the first equation in \eqref{AMain1} is satisfied in the following sense
\begin{equation*}
\int_{\Omega} \big(T^k - f\big(\TC_k(\tilde{\theta}+\theta^k )\big)\id\big) \varepsilon(w)\, \di x + \int_{\Omega}\D(\varepsilon(u^k_{t})) \,\varepsilon(w)\,\di x = 0
\end{equation*}
for all $w\in H^1_0(\Omega;\R^3)$ and almost all $t\in (0,\T)$. The equation $\eqref{AMain1}_2$ is fulfilled in the following sense
\begin{equation*}
\begin{aligned}
\int_{\Omega}\D^{-1} T^k_{t}\,\tau\,\di x&+ \int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\tau\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\tau\,\di x
\\[1ex]
&=\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)\tau\,\di x
\end{aligned}
\end{equation*}
for all $(\tau,\dev(\tau))\in L^2(\Omega;\S)\times L^{2r}(\Omega;\SS)$ and almost all $t\in (0,\T)$, and the equation $\eqref{AMain1}_3$ is satisfied in the following sense
\begin{equation*}
\begin{split}
\int_{\Omega}&\theta^k_{t}\, v\,\di x + \int_{\Omega}\nabla\theta^k\nabla v\, \di x
+\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div} (\tilde{u}_t + u^k_{t})\, v \,\di x\\[1ex]
=& \int_{\Omega} \TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,v\, \di x\,.
\end{split}
\end{equation*}
for all $v\in H^1(\Omega)$ and almost all $t\in (0,\T)$.
\end{de}
\begin{remark}
It is worth noting that if the functions $u^k$, $T^k$ and $\theta^k$ are solutions of the truncated system \eqref{AMain1} in the sense of Definition \ref{defk}, then the equations appearing in this system are satisfied almost everywhere for $(x,t)\in \Omega\times (0,\T)$. The first equation in \eqref{AMain1} is understood in the sense that the weak divergence of the function $$T^k-f\big(\TC_k(\theta^k+\tilde{\theta})\big)\id +\D (\ve(u^k_t))$$
is equal to zero. This type of definition of solution for systems occurring in the mechanics of continuum can be found in the literature under the name $L^2-$strong solution (compare with \cite{ChelRacke} and \cite{ChelNeffOwczarek15}).
\end{remark}
\begin{tw}$\mathrm{(Existence\, for\, each\, approximation\, step)}$\hspace{1ex} Let us assume that the given data have the regularities specified in \eqref{regularity}. Then for all $\T>0$ and $k>0$ the system \eqref{AMain1} with boundary conditions \eqref{eq:Ini-Bond} and initial conditions \eqref{eq:Ini-Bondk} possesses a solution $(u^k, T^k, \theta^k)$ in the sense of Definition \ref{defk}.
\label{istnieniek}
\end{tw}
Theorem \ref{istnieniek} is crucial in the proof of Theorem \ref{Mainresult}. Thanks to this theorem we can pass to the limit in the system \eqref{AMain1} and then obtain a solution of \eqref{Main} in the sense of Definition \ref{Maindef}. Writing the system \eqref{AMain1} in the form used by R. Temam allows us to use a single-level Galerkin approximation. With the additional term in the equation $\eqref{AMain1}_2$ we show $L^2(L^2)$-estimates for the time derivatives, which are independent of the Galerkin approximation step. Then, using the Young measure approach, we pass to the limit in the nonlinearities occurring in the system \eqref{AMain1}.
\subsection{Existence for the truncation system}
We are going to use the Galerkin approximation. First, we focus on the basis for the displacement. Let us consider the space $L^2(\Omega;\mathcal{S}^3)$ with the scalar product defined by
\begin{equation}
(\xi,\eta)_{\D}:= \int_\Omega {\D}^\frac{1}{2}\xi\,{\D}^\frac{1}{2}\eta\, \di x
\quad\mbox{for }\xi,\,\eta\in L^2(\Omega,\mathcal{S}^3),
\label{eq:defD}
\end{equation}
where ${\D}^\frac{1}{2}$ is the square root of the tensor $\D$. We denote by $\{w_i\}_{i=1}^{\infty}$ the set of eigenfunctions of the operator $-\mathrm{div}\, \D\,\varepsilon(\cdot)$ with domain $H_0^{1}(\Omega;\mathbb{R}^3)$ and by $\{ \lambda_i \}_{i=1}^{\infty}$ the set of corresponding eigenvalues, i.e.
\begin{equation}
\int\limits_{\Omega}\D\,\varepsilon(w_i)\,\varepsilon(w_j)\, \di x = \lambda_i \int\limits_{\Omega}w_i\cdot w_j\, \di x.
\end{equation}
We assume that $\{w_i\}_{i=1}^{\infty}$ is orthonormal in $H^{1}_0(\Omega;\mathbb{R}^3)$ with the inner product
\begin{equation}
( w, v)_{H^{1}_0(\Omega)}=( \varepsilon(w), \varepsilon(v))_{\D}
\end{equation}
and orthogonal in $L^2(\Omega;\R^3)$. Since the coefficients $\D_{ijkl}$ are constant and the boundary of $\Omega$ is of class $C^2$, each function from the basis $\{w_i\}_{i=1}^{\infty}$ belongs to $H^{3}(\Omega;\mathbb{R}^3)$, see \cite{Brezis1}.\\
Bases for temperature and stress are constructed in a standard way. Let $\{v_i\}_{i=1}^\infty$ be the subset of $H^{1}(\Omega)$ such that
\begin{equation}
\label{wektoryw}
\int_{\Omega} \nabla v_i\,\nabla\phi\,\di x = \mu_i\int_{\Omega} v_i\,\phi\di x,
\end{equation}
holds for every function $\phi\in C^{\infty}(\overline{\Omega})$, see \cite{Alt,strauss}. Moreover, $\{v_i\}_{i=1}^{\infty}$ is orthonormal in $H^{1}(\Omega)$ and orthogonal in $L^2(\Omega)$. By $\{\mu _i\}_{i=1}^{\infty}$ we denote the set of corresponding eigenvalues.\\
The idea of Galerkin approximation for the stress tensor $T$ is taken from \cite{temam1}. Let $\{\tau_i\}_{i=1}^\infty$ be a basis of $L^2(\Omega;\S)$ such that $\dev(\tau_i)\in L^{2r}(\Omega;\SS)$.\\[1ex]
For every $m\in\mathbb{N}$, we consider approximate solutions in the following form
\begin{equation}
\begin{aligned}
u_{m}^k & = \sum_{n=1}^m\alpha_{k,m}^n(t) w_n\,,
\\
\theta_{m}^k & = \sum_{n=1}^m\beta_{k,m}^n(t) v_n\,,
\\
T_{m}^k & = \sum_{n=1}^m\gamma_{k,m}^n(t) \tau_n\,\,,
\end{aligned}
\label{eq:postac}
\end{equation}
where $k>0$ is the fixed truncation level. The triple $(u_{m}^k, \theta_{m}^k, T_m^k)$ defined in \eqref{eq:postac} is a solution of the approximate system of equations
\begin{equation}
\begin{aligned}[b]
\int_{\Omega} \big(T_{m}^k - f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big)\id\big) \varepsilon(w_n)\, \di x &+ \int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\varepsilon(w_n)\,\di x = 0\,,
\\[1ex]
\int_{\Omega}\D^{-1} T^k_{m,_{t}}\tau_n\,\di x&+ \int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\,\tau_n\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\,\tau_n\,\di x
\\[1ex]
&=\int_{\Omega}\big(\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\big)\,\tau_n\,\di x\,,
\\[1ex] \label{app_system}
\int_{\Omega}(\theta^k_{m,_{t}})\, v_n\,\di x + \int_{\Omega}\nabla\theta^k _{m}\nabla v_n\, \di x & +\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\mathrm{div} (\tilde{u}_t + u^k _{m,_{t}})\, v_n \,\di x
\\[1ex]
& = \int_{\Omega} \TC_k \big(\big\{|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\,v_n\, \di x
\end{aligned}
\end{equation}
for a.a. $t\in (0,\T]$ and for every $n=1, ... ,m$. Initial conditions for \eqref{app_system} have the following form
\begin{equation}
\begin{split}
\left( u^k_{m}(x,0), w_n\right) &= \left( u_0,w_n \right) \qquad n=1,..,m, \\
\left( \theta^k_{m}(x,0), v_n\right) &= \left( \TC_k(\theta_0),v_n \right) \,\,\, n=1,..,m, \\
\left( T^k_m(x,0), \tau_n \right) &= \left( T_0,\tau_n \right) \qquad n=1,..,m,
\end{split}
\label{eq:warunki_pocz_app}
\end{equation}
where $\big(\cdot,\cdot\big)$ denotes the inner product in $L^2(\Omega)$ or in $L^2(\Omega,\mathbb{R}^3)$ or in $L^2(\Omega,\S)$.
Using the form of the approximate solutions \eqref{eq:postac} in the momentum equation \eqref{app_system}$_{(1)}$ and the fact that the set $\{\varepsilon( w_n)\}_{n=1}^m$ is orthogonal, we obtain
\begin{equation}
\label{aproxmomentum}
\lambda_n(\alpha_{k,m}^n(t))_t=-\int_{\Omega}\Big(\sum_{i=1}^m\gamma_{k,m}^i(t) \tau_i-f\big(\TC_k(\tilde{\theta}+\sum_{i=1}^m\beta_{k,m}^i(t) v_i)\big)\,\id\Big)\varepsilon(w_n)\, \di x
\end{equation}
for every $n=1,..,m$. Considering the heat equation \eqref{app_system}$_{(3)}$ we have
\begin{equation}
\begin{aligned}
(\beta_{k,m}^n(t))_t&+\mu_n \beta_{k,m}^n(t) +\sum_{i=1}^m(\alpha_{k,m}^i(t))_t\int_{\Omega}f\big(\TC_k(\tilde{\theta}+ \sum_{i=1}^m\beta_{k,m}^i(t) v_i)\big){\div}(w_i) v_n \di x
\\[1ex]&
+ \int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \sum_{i=1}^m \beta_{k,m}^i(t) v_i)\big) {\div}\tilde{u}_t\,
v_n\di x
\\
&= \int_{\Omega} \TC_k \Big(\big\{\big|\sum_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big|-\beta\big(\sum_{i=1}^m \beta_{k,m}^i(t) v_i+\tilde{\theta}\big)\big\}^{r}_{+}\big|\sum_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big| \Big)\,v_n\, \di x
\end{aligned}
\label{app_system20aa}
\end{equation}
for every $n=1,..,m$. Considering the equation \eqref{app_system}$_{(2)}$ for the stress tensor we get
\begin{equation}
\begin{split}
&\sum_{i=1}^m(\gamma_{k,m}^i(t))_t\int_{\Omega}\D^{-1}\tau_i\,\tau_n\,\di x\\[1ex]
&+\int_{\Omega}\big\{\big|\sum_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big|-\beta\big(\sum_{i=1}^m \beta_{k,m}^i(t) v_i+\tilde{\theta}\big)\big\}^{r}_{+}\,\frac{\sum\limits_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)}{\big|\sum\limits_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big|}\,\tau_n\,\di x
\\[2ex]
&+\frac{1}{k}\int_{\Omega}\big|\sum_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big|^{2r-1}\,\frac{\sum\limits_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)}{\big|\sum\limits_{i=1}^m\gamma_{k,m}^i(t) \dev(\tau_i)\big|}\,\tau_n\,\di x
\\[1ex]
&=\sum_{i=1}^m(\alpha_{k,m}^i(t))_t\int_{\Omega}\ve(w_i)\,\tau_n\,\di x+\int_{\Omega}\ve(\tilde{u}_t)\,\tau_n\,\di x\,,
\end{split}
\label{app_systemforT}
\end{equation}
for every $n=1,..,m$. Let us define
\begin{equation}
\chi(t) = (\alpha_{k,m}^1(t),...,\alpha_{k,m}^m(t),\beta_{k,m}^1(t),...,\beta_{k,m}^m(t),\gamma_{k,m}^1(t),..., \gamma_{k,m}^m(t))^T .
\nonumber
\end{equation}
In this article we consider isotropic materials, so the operator $\D^{-1}$ is positive definite; since the functions $\{\tau_i\}_{i=1}^m$ are linearly independent, the Gram matrix $\{(\D^{-1}\tau_i,\tau_n)\}_{i,n=1}^m$ is nonsingular. Multiplying the system of equations \eqref{app_systemforT} by the inverse of the matrix $\{(\D^{-1}\tau_i,\tau_n)\}_{i,n=1}^m$ we obtain
\begin{equation}
(\gamma_{k,m}^n(t))_t=\tilde{G}(\chi(t),t)\,,
\label{app_systemforT1}
\end{equation}
where, for fixed approximation parameters $k,m\in\mathbb{N}$, the function $\tilde{G}(\cdot,\cdot)$ is measurable with respect to $t$, continuous with respect to $\chi$, and for every $t$ the function $\tilde{G}(\cdot,t)$ is bounded. From \eqref{aproxmomentum}, \eqref{app_system20aa} and \eqref{app_systemforT1} we deduce that the system \eqref{app_system} with the initial conditions \eqref{eq:warunki_pocz_app} can be written in the following form
\begin{equation}\label{47}
\begin{split}
&\frac{\di\chi}{\di t} = G(\chi(t),t)\,,
\qquad
t\in [0,\T),
\\
&\chi(0) =\chi_{0}\,.
\end{split}
\end{equation}
For fixed approximation parameters $k,m\in\mathbb{N}$, the function $G(\cdot,\cdot)$ is measurable with respect to $t$, continuous with respect to $\chi$, and for every $t$ the function $G(\cdot,t)$ is bounded. Using the Carath\'eodory theorem, see \cite[Theorem 3.4, Appendix]{maleknecas} or \cite[Appendix $(61)$]{zeidlerB}, we obtain that for some positive $t^*$ there exists an absolutely continuous solution $\chi$ on the time interval $[0,t^*]$. Thus, there exist absolutely continuous functions $\alpha_{k,m}^n(t)$, $\beta_{k,m}^n(t)$ and $\gamma_{k,m}^n(t)$ on the time interval $[0,t^*]$, for every $n \leq m$.
\begin{remark}
Due to the Carath\'eodory theorem we obtain the local existence of approximate solutions. Global existence of approximate solutions is a consequence of the uniform boundedness of solutions, which will be proved in the next subsection.
\end{remark}
\subsection{Boundedness of approximate solutions}
In this section, with $k\in \mathbb{N}$ fixed, we focus on estimating the sequence $(u_{m}^k, \theta_{m}^k, T_m^k)$ uniformly with respect to the parameter $m$. Firstly, we will prove the following energy estimate.
\begin{tw}
\label{oszacowanie0}
For every $k\in \mathbb{N}$, the following energy estimate
\begin{equation*}
\begin{aligned}
&\sup_{t\in (0,\T)}\|T^k_{m}(t)\|^2_{L^2(\Omega)} + \sup_{t\in (0,\T)}\|\theta_m^k(t)\|^2_{L^2(\Omega)}+
\|\ve(u^k _{m,_{t}})\|^2_{L^2(0,\T;L^2(\Omega))}
\\[1ex]
&\hspace{2ex}+ \|\theta_m^k\|^2_{L^2(0,\T;H^1(\Omega))}+ \frac{1}{k}\int_0^\T\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\,\di\tau\\[1ex]
&\hspace{4ex}+ \int_0^\T\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x\,\di\tau \hspace{1ex}\leq \hspace{1ex} C
\hspace{2ex}
\end{aligned}
\label{oszacowanie15}
\end{equation*}
is satisfied, where the constant $C>0$ does not depend on $m$.
\end{tw}
\begin{proof}
Multiplying the equation $(\ref{app_system})_1$ by $(\alpha_{k,m}^n(t))_{,_{t}}$, the equation $(\ref{app_system})_2$ by $\gamma_{k,m}^n(t)$, the equation $(\ref{app_system})_3$ by $\beta_{k,m}^n(t)$ and summing up the results over $n=1,\ldots,m$ we obtain
\begin{equation}
\begin{aligned}
\int_{\Omega} (T_{m}^k - f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big)\id) \ve(u^k _{m,_{t}})\, \di x &+ \int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x = 0\,,
\\[1ex]
\int_{\Omega}\D^{-1} T^k_{m,_{t}}\,T^k_m\,\di x+ \int_{\Omega}\big\{&|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x
\\[1ex]
&=\int_{\Omega}\big(\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\big)\,T^k_m\,\di x\,,
\\[1ex]
\int_{\Omega}(\theta^k_{m,_{t}})\, \theta_m^k\,\di x + \int_{\Omega}|\nabla\theta^k _{m}|^2\, \di x & +\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\mathrm{div} (\tilde{u}_t + u^k _{m,_{t}})\, \theta_m^k \,\di x
\\[1ex]
= \int_{\Omega} \TC_k \big(\big\{&|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\,\theta_m^k\, \di x\,.
\end{aligned}
\label{oszacowanie11}
\end{equation}
Adding up all equations in \eqref{oszacowanie11} we get
\begin{equation}
\begin{aligned}
&\frac{1}{2}\frac{\di}{\di t}\int_{\Omega}\D^{-1} T^k_{m}\,T^k_m\,\di x+
\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\\[1ex]
&\hspace{2ex}+ \int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x \\[1ex]
&\hspace{4ex}+\frac{1}{k}\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x +\frac{1}{2}\frac{\di}{\di t} \int_{\Omega}|\theta_m^k|^2\,\di x + \int_{\Omega}|\nabla\theta^k _{m}|^2\, \di x
\\[1ex]
&= \int_{\Omega} f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big) \mathrm{div}(u^k _{m,_{t}})\, \di x +\int_{\Omega}\ve(\tilde{u}_t)\,T^k_m\,\di x\\[1ex]
&\hspace{2ex}- \int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\mathrm{div} (\tilde{u}_t + u^k _{m,_{t}})\, \theta_m^k \,\di x
\\[1ex]
&\hspace{4ex}+\int_{\Omega} \TC_k \big(\big\{|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\,\theta_m^k\, \di x\,.
\end{aligned}
\label{oszacowanie12}
\end{equation}
For a.a. $t\in[0,\T]$ we integrate the above equation over $(0,t)$ and use the H\"older inequality. This leads us to
\begin{equation}
\begin{aligned}
&\frac{1}{2}\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x+
\int_0^t\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\, \di\tau\\[1ex]
&\hspace{2ex}+ \int_0^t\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x\,\di\tau\\[1ex]
&\hspace{4ex}+\frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\,\di\tau+ \frac{1}{2}\int_{\Omega}|\theta_m^k(t)|^2\,\di x + \int_0^t\int_{\Omega}|\nabla\theta^k _{m}|^2\, \di x\,\di\tau
\\[1ex]
&\leq \hspace{1ex} c\| T^k_{m}(0)\|^2_{L^2(\Omega)} + \frac{1}{2}\|\theta_m^k(0)\|^2_{L^2(\Omega)}+ \int_0^t\big\|f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big)\big\|_{L^2(\Omega)} \|\ve(u^k _{m,_{t}})\|_{L^2(\Omega)}\,\di\tau \\[1ex]
&\hspace{2ex} + \int_0^t\|\ve(\tilde{u}_t)\|_{L^2(\Omega)}\|T^k_m\|_{L^2(\Omega)}\,\di\tau\\[1ex]
&\hspace{4ex}+ \int_0^t \big\|f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\big\|_{L^\infty(\Omega)}\|\tilde{u}_t + u^k _{m,_{t}}\|_{L^2(\Omega)}\|\theta_m^k\|_{L^2(\Omega)} \,\di\tau
\\[1ex]
&\hspace{6ex}+\int_0^t\big\|\TC_k \big(\big\{|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\big\|_{L^2(\Omega)}\|\theta_m^k\|_{L^2(\Omega)}\, \di \tau\,,
\end{aligned}
\label{oszacowanie123}
\end{equation}
where the constant $c>0$ does not depend on $m$. The assumptions on the initial data imply that the initial terms on the right-hand side of \eqref{oszacowanie123} are bounded independently of $m$. Furthermore, the properties of the cut-off function $\TC_k$ yield
\begin{equation}
\label{oszacowanie13} \big\|f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\big\|_{L^{\infty}(0,\T;L^\infty(\Omega))}\leq \sup\limits_{-k\leq \xi\leq k}|f(\xi)|\,.
\end{equation}
The continuity of the function $f$ entails that the sequence $\{f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\}_{m=1}^{\infty}$ is uniformly bounded in $L^{\infty}(0,\T;L^{\infty}(\Omega))$. Applying the Cauchy inequality with a small weight to all integrals occurring on the right-hand side of \eqref{oszacowanie123} we obtain
\begin{equation}
\begin{aligned}
&\frac{1}{2}\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x+
\int_0^t\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\, \di\tau\\[1ex]
&\hspace{2ex}+ \int_0^t\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x\,\di\tau \\[1ex]
&\hspace{4ex}+\frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\,\di\tau+ \frac{1}{2}\int_{\Omega}|\theta_m^k(t)|^2\,\di x + \int_0^t\int_{\Omega}|\nabla\theta^k _{m}|^2\, \di x\,\di\tau
\\[1ex]
&\leq \hspace{1ex} C+ \eta \int_0^t \|\ve(u^k _{m,_{t}})\|^2_{L^2(\Omega)}\,\di\tau + \eta\int_0^t\|T^k_m\|^2_{L^2(\Omega)}\,\di\tau + \eta\int_0^t\|u^k _{m,_{t}}\|^2_{L^2(\Omega)}\,\di\tau\\[1ex]
&\hspace{2ex} +D(\eta)\int_0^t\|\theta_m^k\|^2_{L^2(\Omega)} \,\di\tau\,,
\hspace{2ex}
\end{aligned}
\label{oszacowanie14}
\end{equation}
where $C$ is a positive constant that does not depend on $m$ and $\eta>0$ is a sufficiently small constant. Observe that the constant $C$ depends only on the given data having the regularity specified in \eqref{regularity}. Using Korn's inequality in the third integral on the right-hand side of \eqref{oszacowanie14}, the properties of the elasticity tensor $\D$ and selecting a sufficiently small $\eta>0$, we arrive at the following estimate
\begin{equation}
\begin{aligned}
&\frac{1}{2}\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x+
\int_0^t\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\, \di\tau
\\[1ex]
&\hspace{2ex}+ \int_0^t\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x\,\di\tau\\[1ex]
& \hspace{4ex}+ \frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\,\di\tau
+ \frac{1}{2}\int_{\Omega}|\theta_m^k(t)|^2\,\di x + \int_0^t\int_{\Omega}|\nabla\theta^k _{m}|^2\, \di x\,\di\tau
\\[1ex]
&\leq \hspace{1ex} C+D\int_0^t\|\theta_m^k\|^2_{L^2(\Omega)} \,\di\tau\,.
\hspace{2ex}
\end{aligned}
\label{oszacowanie151}
\end{equation}
An application of Gronwall's lemma completes the proof.
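In more detail (a standard step, spelled out for convenience): setting
\begin{equation*}
y(t):=\frac{1}{2}\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x+\frac{1}{2}\int_{\Omega}|\theta_m^k(t)|^2\,\di x\,,
\end{equation*}
all remaining terms on the left-hand side of \eqref{oszacowanie151} are nonnegative and $\|\theta_m^k(\tau)\|^2_{L^2(\Omega)}\leq 2\,y(\tau)$, so \eqref{oszacowanie151} yields $y(t)\leq C+2D\int_0^t y(\tau)\,\di\tau$, hence $y(t)\leq C e^{2D\T}$ for a.a. $t\in (0,\T)$. Inserting this bound back into \eqref{oszacowanie151} bounds all remaining terms uniformly with respect to $m$.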
\end{proof}
The next step is to obtain estimates for the time derivatives of the sequence $(\theta_{m}^k, T_m^k)$, uniform with respect to $m$. These will result from the following two theorems, which are crucial in the proof of Theorem \ref{istnieniek}.
\begin{tw}
There exists a constant $\tilde{C}$ independent of $m$ such that the following inequality holds
\begin{equation*}
\|\theta_{m,_{t}}^k\|^2_{L^2(0,\T;L^2(\Omega))} +
\sup\limits_{t\in(0,\T]}\|\theta_m^k(t)\|^2_{H^1(\Omega)} \leq\hspace{1ex} \tilde{C}\,.
\end{equation*}
\label{oszacowanie1}
\end{tw}
\begin{proof}
Multiplying the equation $(\ref{app_system})_3$ by $(\beta_{k,m}^n(t))_t$ and summing up over $n=1,\ldots,m$ we obtain
\begin{equation}
\begin{split}
\int_{\Omega}|\theta^k_{m,_{t}}|^2\,\di x & + \int_{\Omega}\nabla\theta^k _{m}\, \nabla \theta^k_{m,_{t}} \, \di x +\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\mathrm{div} (\tilde{u}_t + u^k _{m,_{t}})\, \theta^k_{m,_{t}} \,\di x
\\[1ex]
& = \int_{\Omega} \TC_k \big(\big\{|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\,\theta^k_{m,_{t}}\, \di x\,.
\end{split}
\end{equation}
Theorem \ref{oszacowanie0} implies that the sequence $\{\ve(u^k _{m,_{t}})\}_{m=1}^\infty$ is bounded (independently of $m$) in $L^2(0,\T;L^2(\Omega;\S))$. Therefore, integrating the above equation over $(0,t)$ and using standard tools for parabolic equations, see e.g. Evans [25], we complete the proof.
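In more detail (a sketch of the standard argument): the second term equals $\frac{1}{2}\frac{\di}{\di t}\|\nabla\theta^k_m\|^2_{L^2(\Omega)}$; by \eqref{oszacowanie13} the third term is bounded by $\sup_{|\xi|\leq k}|f(\xi)|\,\|\mathrm{div}(\tilde{u}_t+u^k_{m,_{t}})\|_{L^2(\Omega)}\|\theta^k_{m,_{t}}\|_{L^2(\Omega)}$, and, thanks to the truncation, the right-hand side is bounded by $k\,|\Omega|^{\frac{1}{2}}\,\|\theta^k_{m,_{t}}\|_{L^2(\Omega)}$. Applying the Cauchy inequality with a small weight, absorbing $\|\theta^k_{m,_{t}}\|^2_{L^2(\Omega)}$ into the left-hand side and integrating in time then gives the asserted bounds.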
\end{proof}
\begin{tw}
For fixed $k>0$, the Galerkin sequences $\theta_{m}^k$ and $T_m^k$ also satisfy
\label{oszacowanie2}
\begin{equation*}
\begin{split}
\int_0^t\int_{\Omega}&\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x\,\di\tau+ \frac{1}{r+1} \int_{\Omega}\big\{|\dev(T^k_m(t))|-\beta\big(\theta^k_m(t)+\tilde{\theta}(t)\big)\big\}^{r+1}_{+}\,\di x
\\[1ex]
&+\frac{1}{2kr}\int_{\Omega}|\dev(T^k_m(t))|^{2r}\,\di x \hspace{1ex}\leq \hspace{1ex} \hat{C}
\end{split}
\end{equation*}
where the constant $\hat{C}$ is independent of $m$, for a.a. $t\in (0,\T)$.
\end{tw}
\begin{proof}
Multiplying the equation $(\ref{app_system})_2$ by $(\gamma_{k,m}^n(t))_t$ and summing up over $n=1,\ldots,m$ we get
\begin{equation}
\begin{split}
\int_{\Omega}&\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x+ \int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\, T^k_{m,_{t}}\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|} T^k_{m,_{t}}\,\di x =\int_{\Omega}\big(\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\big)\,T^k_{m,_{t}}\,\di x\,.
\end{split}
\label{oszacowanie21}
\end{equation}
Using the fact that the deviatoric part of the stress is orthogonal to its volumetric part, we conclude that
\begin{equation}
\begin{split}
\int_{\Omega}&\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x+\frac{1}{r+1}\frac{\di}{\di t}\Big( \int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r+1}_{+}\,\di x\Big)
\\[1ex]
&+\frac{1}{2kr}\frac{\di}{\di t}\Big(\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\Big) =\int_{\Omega}\big(\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\big)\,T^k_{m,_{t}}\,\di x\\[1ex]
&\hspace{2ex}+\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\beta'(\theta^k_m+\tilde{\theta})\big(\theta^k_{m,_{t}}+\tilde{\theta}_{t}\big)\,\di x\,.
\end{split}
\label{oszacowanie22}
\end{equation}
Integrating \eqref{oszacowanie22} with respect to time and using the H\"older inequality we have
\begin{equation}
\begin{split}
\int_0^t\int_{\Omega}&\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x\,\di\tau+ \frac{1}{r+1} \int_{\Omega}\big\{|\dev(T^k_m(t))|-\beta\big(\theta^k_m(t)+\tilde{\theta}(t)\big)\big\}^{r+1}_{+}\,\di x
\\[1ex]
&+\frac{1}{2kr}\int_{\Omega}|\dev(T^k_m(t))|^{2r}\,\di x \leq \frac{1}{r+1} \int_{\Omega}\big\{|\dev(T^k_m(0))|-\beta\big(\theta^k_m(0)+\tilde{\theta}(0)\big)\big\}^{r+1}_{+}\,\di x\\[1ex]
&\hspace{2ex}+\frac{1}{2kr}\|\dev(T^k_m(0))\|^{2r}_{L^{2r}(\Omega)}
+\int_0^t\|\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\|_{L^2(\Omega)}\|T^k_{m,_{t}}\|_{L^2(\Omega)}\,\di \tau\\[1ex]
+\int_0^t\|\ve(u^k_{m,_{t}})+\ve(\tilde{u}_t)\|_{L^2(\Omega)}\|T^k_{m,_{t}}\|_{L^2(\Omega)}\,\di \tau\\[1ex]
&\hspace{4ex}+\|\beta'(\theta^k_m+\tilde{\theta})\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}\int_0^t\big\|\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\big\|_{L^2(\Omega)}\|\theta^k_{m,_{t}}+\tilde{\theta}_{t}\|_{L^2(\Omega)}\,\di \tau\,.
\end{split}
\label{oszacowanie23}
\end{equation}
Theorems \ref{oszacowanie0} and \ref{oszacowanie1} yield that the sequences $\{\ve(u^k _{m,_{t}})\}_{m=1}^\infty$ and $\{\theta^k_{m,_{t}}\}_{m=1}^\infty$ are bounded in $L^2(0,\T;L^2(\Omega;\S))$ and $L^2(0,\T;L^2(\Omega))$, respectively. Assumption (C3) implies that the norm $\|\beta'(\theta^k_m+\tilde{\theta})\|_{L^{\infty}(0,\T;L^{\infty}(\Omega))}$ is finite. Additionally, the assumptions on the initial data and the Cauchy inequality with a small weight entail the following inequality
\begin{equation}
\begin{split}
\int_0^t\int_{\Omega}&\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x\,\di\tau+ \frac{1}{r+1} \int_{\Omega}\big\{|\dev(T^k_m(t))|-\beta\big(\theta^k_m(t)+\tilde{\theta}(t)\big)\big\}^{r+1}_{+}\,\di x
\\[1ex]
&+\frac{1}{2kr}\int_{\Omega}|\dev(T^k_m(t))|^{2r}\,\di x \leq C(\nu)+\nu\int_0^t\|T^k_{m,_{t}}\|^2_{L^2(\Omega)}\,\di \tau\\[1ex]
&\hspace{4ex}+D\int_0^t\big\|\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\big\|^2_{L^2(\Omega)}\,\di \tau\,,
\end{split}
\label{oszacowanie24}
\end{equation}
where $\nu$ is any positive constant and the constants $C(\nu)$ and $D$ do not depend on $m$. Choosing $\nu$ small enough we obtain
\begin{equation}
\begin{split}
\int_0^t&\int_{\Omega}\D^{-1} T^k_{m,_{t}}\,T^k_{m,_{t}}\,\di x\,\di\tau+ \frac{1}{r+1} \int_{\Omega}\big\{|\dev(T^k_m(t))|-\beta\big(\theta^k_m(t)+\tilde{\theta}(t)\big)\big\}^{r+1}_{+}\,\di x
\\[1ex]
&+\frac{1}{2kr}\int_{\Omega}|\dev(T^k_m(t))|^{2r}\,\di x \leq C +D\int_0^t\big\|\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\big\|^2_{L^2(\Omega)}\,\di \tau\,.
\end{split}
\label{oszacowanie25}
\end{equation}
Observe that for almost every $(x,\tau)\in \Omega\times (0,t)$ such that $$|\dev(T^k_m(x,\tau))|\leq\beta(\theta^k_m(x,\tau)+\tilde{\theta}(x,\tau))\,,$$
the integrand in the last term on the right-hand side of \eqref{oszacowanie25} vanishes. Let us introduce the set
$$Q=\{(x,\tau)\in\Omega\times (0,t):\,|\dev(T^k_m(x,\tau))|>\beta(\theta^k_m(x,\tau)+\tilde{\theta}(x,\tau))\}\,,$$
then, since $0\leq\beta(\theta^k_m+\tilde{\theta})\leq |\dev(T^k_m)|$ on $Q$ by (C2) and the definition of $Q$,
\begin{equation}
\begin{split}
\int_0^t\big\|\big\{|\dev(T^k_m)|&-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\big\|^2_{L^2(\Omega)}\,\di \tau = \int_{Q} \big(|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big)^{2r}
\,\di x\,\di\tau\\[1ex]
&\leq 2^{2r-1}\int_{Q} |\dev(T^k_m)|^{2r}+|\beta(\theta^k_m+\tilde{\theta})|^{2r}
\,\di x\,\di\tau\\[1ex]
&\leq 2^{2r-1}\int_{Q} |\dev(T^k_m)|^{2r}+|\dev(T^k_m)|^{2r} \leq 2^{2r}\int_{Q} |\dev(T^k_m)|^{2r}
\,\di x\,\di\tau\\[1ex]
&\leq 2^{2r}\int_0^t\int_{\Omega} |\dev(T^k_m)|^{2r}\,\di x\,\di\tau\,.
\end{split}
\label{oszacowanie26}
\end{equation}
Theorem \ref{oszacowanie0} entails that the integral on the right hand side of \eqref{oszacowanie26} is bounded independently of $m$, which completes the proof.
\end{proof}
\begin{concl}
\label{wnioski}
Summarizing the results of Theorems \ref{oszacowanie0}, \ref{oszacowanie1} and \ref{oszacowanie2} we obtain that
\begin{enumerate}[i)]
\item the sequence $\{T^k_m\}_{m=1}^\infty$ is bounded in $H^1(0,\T;L^2(\Omega;\S))$.
\item the sequence $\{u^k_m\}_{m=1}^\infty$ is bounded in $H^1(0,\T;H^1_0(\Omega;\R^3))$.
\item the sequence $\{\theta^k_m\}_{m=1}^\infty$ is bounded in $L^{\infty}(0,\T;H^1(\Omega))\cap H^1(0,\T;L^2(\Omega))$, hence the Aubin--Lions compactness lemma (see for example \cite{Roubicekbook}) implies that the sequence $\{\theta^k_m\}_{m=1}^\infty$ is relatively
compact in $L^2(0,\T;L^2(\Omega))$. Therefore it contains a subsequence (again denoted using the superscript $m$) such that $\theta^k_m\rightarrow \theta^k$ a.e. in $\Omega\times(0,\T)$. The continuity of $f$ and $\TC_k$ yields that
$$f\big(\TC_{k}(\theta^k_m+\tilde{\theta})\big)- f\big(\TC_{k}(\theta^k+\tilde{\theta})\big)\rightarrow 0 \quad \mathrm{a.e.\, in}\quad \Omega\times(0,\T)$$
and $\big|\,f\big(\TC_{k}(\theta^k_m+\tilde{\theta})\big)- f\big(\TC_{k}(\theta^k+\tilde{\theta})\big)\,\big|$ is bounded independently of $m$. From the Lebesgue dominated convergence theorem we conclude that for all $q\geq 1$
\begin{equation}
\label{mocnatemp}
f\big(\TC_{k}(\theta^k_m+\tilde{\theta})\big)- f\big(\TC_{k}(\theta^k+\tilde{\theta})\big)\rightarrow 0\quad \mathrm{in}\quad L^q(0,\T;L^q(\Omega;\R))\,.
\end{equation}
\item the sequence $\{\dev(T^k_m)\}_{m=1}^\infty$ is bounded in $L^{\infty}(0,\T;L^{2r}(\Omega;\SS))$.
\item the sequence $\Big\{\big\{|\dev(T^k_m)|-\beta\big(\theta^k_m+\tilde{\theta}\big)\big\}^{r+1}_{+}\Big\}_{m=1}^\infty$ is bounded in $L^{\infty}(0,\T;L^1(\Omega))$.\\[1ex]
\end{enumerate}
\end{concl}
\begin{lem}
\label{ogrniel}
The sequences $\big\{|\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\big\}_{m=1}^\infty$ and\\ $\Big\{\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\Big\}_{m=1}^\infty$ are bounded in $L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\SS))$ and $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))$, respectively.
\end{lem}
\begin{proof}
Observe that for $t\in (0,\T)$ we have
\begin{equation*}
\int_0^t\int_{\Omega}\Big||\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\Big|^{\frac{2r}{2r-1}}\,\di x\, \di \tau= \int_0^t\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\, \di \tau
\end{equation*}
and from Conclusion \ref{wnioski} we receive the first statement of this lemma. Next, let us notice that
\begin{equation*}
\int_0^t\int_{\Omega}\Big|\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\Big|^{\frac{r+1}{r}}\,\di x\, \di \tau= \int_0^t\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r+1}_{+}\,\di x\, \di \tau\,.
\end{equation*}
Then, using Conclusion \ref{wnioski} the proof is complete.
\end{proof}
The above uniform estimates allow us to conclude that, at least for a subsequence, the following convergences hold
\begin{eqnarray}
\label{weaklimm}
T^k_m \rightharpoonup T^k &\hspace{2ex} \mbox{weakly in } H^1(0,\T;L^2(\Omega;\S)),\nn\\[1ex]
\dev(T^k_m) \rightharpoonup \dev(T^k) & \hspace{2ex} \mbox{weakly in } L^{2r}(0,\T;L^{2r}(\Omega;\SS)),\nn\\[1ex]
u^k_m\rightharpoonup u^k & \hspace{2ex} \mbox{weakly in } H^1(0,\T;H^1_0(\Omega;\R^3)),\nn\\[1ex]
\theta^k_m \rightarrow \theta^k & \hspace{2ex} \mbox{in } L^2(0,\T;L^2(\Omega)),\\[1ex]
\frac{1}{k}|\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|} \rightharpoonup \chi^k &\hspace{2ex} \mbox{weakly in } L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\SS)),\nn\\[2ex]
\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\rightharpoonup \psi^k & \hspace{2ex} \mbox{weakly in } L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))\nn
\end{eqnarray}
as $m \rightarrow \infty$. Combining the last two convergences in \eqref{weaklimm} we may write
\begin{equation*}
\frac{1}{k}|\dev(T^k_m)|^{2r-1}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}+\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k_m)}{|\dev(T^k_m)|}\rightharpoonup \chi^k+\psi^k=\omega^k
\end{equation*}
in $L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\SS))$.
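Here we use that $\frac{r+1}{r}\geq \frac{2r}{2r-1}$ for $r\geq 1$ (indeed, $(r+1)(2r-1)-2r^2=r-1\geq 0$), so that on the set $\Omega\times(0,\T)$ of finite measure $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))\hookrightarrow L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\SS))$, and both weak limits may be taken in the latter space.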
From \eqref{weaklimm} and \eqref{mocnatemp} we can pass to the limit in the equations $\eqref{app_system}_1$ and $\eqref{app_system}_2$ as $m \rightarrow \infty$. The standard tools in the Galerkin method allow us to write
\begin{equation}
\begin{aligned}
\int_{\Omega} \big(T^k - f\big(\TC_k(\theta^k + \tilde{\theta})\big)\id\big) \varepsilon(w)\, \di x &+ \int_{\Omega}\D(\varepsilon(u^k _{t})) \,\varepsilon(w)\,\di x = 0\,,
\\[1ex]
\int_{\Omega}\D^{-1} T^k_{t}\tau\,\di x + \int_{\Omega}\omega^k\,\tau\,\di x
&=\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)\,\tau\,\di x
\end{aligned}
\label{weaklimit12}
\end{equation}
for almost all $t\in (0,\T)$, where the first equation in \eqref{weaklimit12} is satisfied for all $w\in H_0^1(\Omega;\R^3)$ and the second one for all $(\tau,\dev(\tau))\in L^2(\Omega;\S)\times L^{2r}(\Omega;\SS)$. To complete the proof of existence of solutions to the truncated problem \eqref{AMain1}, we need to characterize the weak limit $\omega^k$. We are going to use the Young measure approach. In order to be able to apply it, the main condition that the nonlinearity must satisfy is the following inequality (see for example \cite{chelgwiaz2007,tve-Orlicz,GWIAZDA2005923}).
\begin{tw}
\label{lmimsup}
The following inequality holds for solutions of the approximate system
\begin{equation}
\label{limsupmain}
\begin{split}
\limsup_{m\rightarrow\infty}\Big[& \int_{0}^{t}\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)| \,\di x\, \di \tau\\[1ex]
&+ \frac{1}{k}\int_{0}^{t}\int_{\Omega}|\dev(T^k_m)|^{2r} \,\di x\, \di \tau\Big]\leq
\int_{0}^{t}\int_{\Omega}\omega^k\, \dev(T^k) \di x\,\di \tau
\end{split}
\end{equation}
for all $t\in (0,\T]$.
\end{tw}
\begin{proof}
From the formulas $\eqref{oszacowanie11}_1$ and $\eqref{oszacowanie11}_2$ we have
\begin{equation}
\begin{aligned}
&\frac{1}{2}\frac{\di}{\di t}\int_{\Omega}\D^{-1} T^k_{m}\,T^k_m\,\di x+
\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\\[1ex]
&\hspace{2ex}+ \int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x \\[1ex]
&\hspace{4ex}+\frac{1}{k}\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x
= \int_{\Omega} f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big) \mathrm{div}(u^k _{m,_{t}})\, \di x +\int_{\Omega}\ve(\tilde{u}_t)\,T^k_m\,\di x\,.
\end{aligned}
\label{limsup1}
\end{equation}
Testing $\eqref{weaklimit12}_1$ by $w=u^k_t$ and $\eqref{weaklimit12}_2$ by $\tau=T^k$ we have
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{\di}{\di t}\int_{\Omega}&\D^{-1} T^k\,T^k\,\di x + \int_{\Omega}\D(\varepsilon(u^k _{t})) \,\varepsilon(u^k_t)\,\di x+\int_{\Omega}\omega^k\,\dev(T^k) \,\di x\\[1ex]
&= \int_{\Omega} f\big(\TC_k(\theta^k + \tilde{\theta})\big) \mathrm{div}(u^k_t)\, \di x + \int_{\Omega}\ve(\tilde{u}_t)T^k\,\di x\,.
\end{aligned}
\label{limsup2}
\end{equation}
After integrating \eqref{limsup1} and \eqref{limsup2} over the time interval $(0,t)$ and subtracting the resulting formulas from each other, we get
\begin{equation}
\begin{aligned}
&\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x+
\int_0^t\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\,\di \tau\\[1ex]
&\hspace{2ex} +\int_0^t\int_{\Omega}\big\{|\dev(T^k_m)|-\beta(\theta^k_m+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k_m)|\,\di x\,\di \tau +\frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k_m)|^{2r}\,\di x\,\di\tau\\[1ex]
&=\int_{\Omega}\D^{-1} T^k_m(0)\,T^k_m(0)\,\di x + \int_0^t\int_{\Omega}\D(\varepsilon(u^k _{t})) \,\varepsilon(u^k_t)\,\di x\,\di\tau + \int_0^t\int_{\Omega}\omega^k\, \dev(T^k) \,\di x\,\di\tau\\[1ex]
&\hspace{2ex}+ \int_0^t\int_{\Omega} f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big) \mathrm{div}(u^k _{m,_{t}})\, \di x\,\di\tau +\int_0^t\int_{\Omega}\ve(\tilde{u}_t)\,T^k_m\,\di x\,\di\tau\\[1ex]
&\hspace{4ex}- \int_0^t\int_{\Omega} f\big(\TC_k(\theta^k + \tilde{\theta})\big) \mathrm{div}(u^k_t)\, \di x\,\di\tau - \int_0^t\int_{\Omega}\ve(\tilde{u}_t)\,T^k\,\di x\,\di\tau\,.
\end{aligned}
\label{limsup3}
\end{equation}
Observe that for almost all $(x,t)\in \Omega\times (0,\T)$ the quadratic forms $\D^{-1}(\cdot)(\cdot)$ and $\D(\cdot)(\cdot)$ are convex, hence weak lower semi-continuity in $ L^2(0,\T;L^2(\Omega;\S))$ yields
\begin{equation}
\label{lowersemi1}
\liminf_{m\rightarrow\infty}\Big(\int_{\Omega}\D^{-1} T^k_{m}(t)\,T^k_m(t)\,\di x\Big)\geq \int_{\Omega}\D^{-1} T^k(t)\,T^k(t)\,\di x
\end{equation}
for almost all $t\in(0,\T)$ and
\begin{equation}
\label{lowersemi2}
\liminf_{m\rightarrow\infty}\Big( \int_0^t\int_{\Omega}\D(\varepsilon(u^k _{m,_{t}})) \,\ve(u^k _{m,_{t}})\,\di x\,\di \tau\Big)\geq \int_0^t\int_{\Omega}\D(\varepsilon(u^k _{t})) \,\varepsilon(u^k_t)\,\di x\,\di\tau\,.
\end{equation}
Additionally, \eqref{weaklimm} and Conclusion \ref{wnioski}~$(iii)$ imply
\begin{equation}
\label{limsup4}
\int_0^t\int_{\Omega} f\big(\TC_k(\tilde{\theta}+\theta_{m}^k )\big) \mathrm{div}(u^k _{m,_{t}})\, \di x\,\di\tau\rightarrow \int_0^t\int_{\Omega} f\big(\TC_k(\theta^k + \tilde{\theta})\big) \mathrm{div}(u^k_t)\, \di x\,\di\tau
\end{equation}
and
\begin{equation}
\label{limsup5}
\int_0^t\int_{\Omega}\ve(\tilde{u}_t)\,T^k_m\,\di x\,\di\tau \rightarrow \int_0^t\int_{\Omega}\ve(\tilde{u}_t)\,T^k\,\di x\,\di\tau
\end{equation}
as $m\rightarrow\infty$. Taking the limit superior of \eqref{limsup3}, we complete the proof.
\end{proof}
\subsection{Young measures tools}
In this section, for the convenience of the reader, we present all the tools concerning Young measures which we need to characterize the weak limit $\omega^k$. For more details and proofs, we refer to \cite[Corollaries 3.2-3.4]{MUller1999} and \cite[Theorem 2.9]{Alibert1997}, see also \cite{maleknecas}.
Let us consider a measurable set $E\subset\R^n$ and denote by $C_0(\R^m)$ the closure, with respect to the norm $\|f\|_{\infty}=\sup_{\lambda\in\R^m}|f(\lambda)|$, of the space of continuous functions on $\R^m$ with compact support; $C_0(\R^m)$ equipped with this norm is a Banach space. By the Riesz representation theorem the dual of $C_0(\R^m)$ can be identified with the space $\mathcal{M}(\R^m)$ of bounded Radon measures on $\R^m$. The duality pairing between $C_0(\R^m)$ and $\mathcal{M}(\R^m)$
is defined by
\begin{equation}
\label{paradualna}
\langle\nu,f\rangle=\int_{\R^m}f(\lambda)\,\di\nu(\lambda)\,.
\end{equation}
We will start with the Fundamental theorem on Young measures, see Theorem 3.1 of \cite{MUller1999}.
\begin{tw}
\label{Ball}
Let $E\subset\R^n$ be a measurable set of finite measure and let $z_j:E\rightarrow\R^m$ be a sequence of measurable functions.
Then there exist a subsequence (still denoted by $z_j$) and a weakly$^{\ast}$ measurable map $\nu_x:E\rightarrow \mathcal{M}(\R^m)$ such that the following holds
\begin{enumerate}[(i)]
\item \,$\nu_x\geq0$,\, $\|\nu_x\|_{\mathcal{M}(\R^m)}=\int_{\R^m}\,\di\nu_x\leq 1$ for a.e. $x\in E$.
\item For all $f\in C_0(\R^m)$
\begin{equation*}
f(z_j) \overset{\ast}{\rightharpoonup} \bar{f}\quad \textrm{in}\quad L^{\infty}(E)\,,
\end{equation*}
where
\begin{equation*}
\bar{f}(x)=\langle \nu_x,f\rangle=\int_{\R^m} f(\lambda)\,\di \nu_x(\lambda)\,.
\end{equation*}
\item Let $K\subset\R^m$ be compact. If $\mathrm{dist}\,(z_j,K)\rightarrow 0$ in measure, then $\mathrm{supp}\,\nu_x\subset K$.
\item Furthermore, $\|\nu_x\|_{\mathcal{M}(\R^m)}=1$ for a.a. $x\in E$ if and only if the sequence does not escape to infinity, i.e. if
\begin{equation*}
\lim_{M\rightarrow \infty}\,\sup_j\big|\{|z_j|\geq M\}\big|=0\,.
\end{equation*}
\item Assume that $\|\nu_x\|_{\mathcal{M}(\R^m)}=1$ for a.a. $x\in E$, $A\subset E$ is measurable, $f\in C(\R^m)$ and the set $\{f(z_j)\}$ is relatively weakly compact in $L^1(A)$. Then
\begin{equation*}
f(z_j) \rightharpoonup \bar{f}\quad \textrm{in}\quad L^{1}(A)\,,\quad \bar{f}(x)=\langle \nu_x,f\rangle=\int_{\R^m} f(\lambda)\,\di \nu_x(\lambda)\,.
\end{equation*}
\end{enumerate}
\end{tw}
Theorem \ref{Ball} refers to the existence of Young measures. Let us now formulate a few properties related to these measures.
\begin{lem}
\label{deltadiraca} Let us assume that a sequence $z_j$ of measurable functions from $E$ to $\R^m$ generates the Young measure $\nu:E\rightarrow \mathcal{M}(\R^m)$. Then $z_j\rightarrow z$ in measure if and only if $\nu_x=\delta_{z(x)}$ a.e.
\end{lem}
\begin{lem}
\label{slpolciaYoung}
Suppose that the sequence of maps $z_j:E\rightarrow \R^m$ generates the Young measure $\nu_x:E\rightarrow \mathcal{M}(\R^m)$. Let $f:E\times \R^m\rightarrow\R$ be a Carath\'eodory function (i.e. measurable in the first argument and continuous in the second one). Let us also assume that the sequence of negative parts $\{f(x,z_j(x))\}_{-}$ is weakly relatively compact in $L^1(E)$. Then
\begin{equation}
\label{miarapol} \liminf_{j\rightarrow\infty}\int_{E}f(x,z_j(x))\,\di x\geq \int_{E}\int_{\R^m}f(x,\lambda)\,\di \nu_x (\lambda)\,\di x\,.
\end{equation}
Additionally, if the sequence of functions $x\mapsto |f|(x,z_j(x))$ is weakly relatively compact in $L^1(E)$, then
\begin{equation*}
f(\cdot,z_j(\cdot))\rightharpoonup \int_{\R^m}f(\cdot,\lambda)\,\di\nu_x(\lambda) \quad\mathrm{in} \quad L^1(E)\,.
\end{equation*}
\end{lem}
The last property reads as follows.
\begin{lem}
\label{product}
Let $u_j:E\rightarrow \R^n$, $v_j:E\rightarrow\R^m$ be measurable and suppose that $u_j\rightarrow u$ a.e. while $v_j$ generates the Young measure $\nu$. Then the sequence of pairs $(u_j,v_j):E\rightarrow\R^{n+m}$ generates the Young measure $x\rightarrow \delta_{u(x)}\otimes\nu_x$.
\end{lem}
\subsection{Passing to the limit with Galerkin approximation}
To characterise the weak limits of the nonlinearities we are going to improve the convergence of the sequence $\{\dev(T^k_m)\}_{m=1}^\infty$. To do this we use the idea from \cite{GWIAZDA2005923,tve-Orlicz}, where Young measure tools were used. Let us define the operator $G:\R\times \SS\rightarrow\SS$ by the formula
\begin{equation}
\label{operator}
G(\theta,S):= \big\{|\dev S|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev S}{|\dev S|} + \frac{1}{k}|\dev S|^{2r-1}\,\frac{\dev S}{|\dev S|}\,.
\end{equation}
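Pairing \eqref{operator} with $\dev S$ and using $\dev S\cdot\dev S=|\dev S|^2$, we note for later use that
\begin{equation*}
G(\theta,S)\cdot \dev S= \big\{|\dev S|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}\,|\dev S| + \frac{1}{k}|\dev S|^{2r}\geq 0\,,
\end{equation*}
since both summands are nonnegative.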
It is worth emphasizing that the article \cite{GWIAZDA2005923} proves a general theorem (Theorem $1.2$) that could be applied to the operator $G$ from \eqref{operator}. However, our vector field explicitly fails to meet one of the main assumptions of this theorem, so we decided to present the full calculation for the field \eqref{operator}. Let us consider the function $G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m$. As observed above, $G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m\geq 0$ for every $m\in\mathbb{N}$, hence the sequence of negative parts of $G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m$ is weakly relatively compact in $L^1(\Omega\times (0,\T))$. Lemma \ref{slpolciaYoung} yields
\begin{equation}
\label{operator1}
\liminf_{m\rightarrow\infty}\int_{Q} G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m\,\di x\,\di t\geq \int_{Q} \int_{\R\times \R^6}G(s,\lambda)\cdot\lambda\,\di \mu_{(x,t)}(s,\lambda)\,\di x\, \di t\,,
\end{equation}
where $Q=\Omega\times (0,\T)$ and $\mu_{(x,t)}$ is the Young measure generated by the sequence\\ $\{(\theta^k_m,\dev T^k_m)\}_{m=1}^\infty$. Using Lemma \ref{product}, we can characterise this measure more precisely. We know that $\theta_m^k\rightarrow \theta^k$ a.e. in $\Omega\times (0,\T)$ and that the sequence $\{\dev T_m^k\}_{m=1}^\infty$ generates the Young measure $\nu_{(x,t)}$, so
\begin{equation}
\label{measure}
\mu_{(x,t)}(s,\lambda)=\delta_{\theta^k(x,t)}(s)\otimes \nu_{(x,t)}(\lambda)
\end{equation}
and
\begin{equation}
\label{operator2}
\int_{Q} \int_{\R\times \R^6}G(s,\lambda)\cdot\lambda\,\di \mu_{(x,t)}(s,\lambda)\,\di x\, \di t= \int_{Q} \int_{\R^6} G(\theta^k,\lambda)\cdot\lambda\,\di \nu_{(x,t)}(\lambda)\,\di x\, \di t\,.
\end{equation}
The last two convergences in \eqref{weaklimm} imply that the sequence $\{G(\theta^k_m,\dev T^k_m)\}_{m=1}^\infty$ is bounded in $L^{\frac{2r}{2r-1}}(\Omega\times(0,\T))$ (note that $\frac{2r}{2r-1}<\frac{r+1}{r}$ for $r>1$), hence it is weakly relatively compact in $L^1(\Omega\times (0,\T))$. Theorem \ref{Ball}~$(v)$ implies
\begin{equation}
\label{operator3}
\omega^k(x,t)=\int_{\R^6} G(\theta^k,\lambda)\,\di \nu_{(x,t)}(\lambda)\,.
\end{equation}
Similarly, we conclude that $\dev T^k(x,t)=\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)$. Combining \eqref{operator2} and \eqref{operator1} with \eqref{limsupmain} we obtain
\begin{equation}
\label{operator4}
\begin{split}
\int_{Q} \int_{\R\times \R^6}& G(s,\lambda)\cdot\lambda\,\di \mu_{(x,t)}(s,\lambda)\,\di x\, \di t\leq \liminf_{m\rightarrow\infty}\int_{Q} G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m\,\di x\,\di t\\[1ex]
&\leq \limsup_{m\rightarrow\infty}\int_{Q} G(\theta^k_m,\dev T^k_m)\cdot \dev T^k_m\,\di x\,\di t\\[1ex]
&\leq\int_{Q} \Big(\int_{\R^6} G(\theta^k,\lambda)\,\di \nu_{(x,t)}(\lambda)\Big)\cdot \Big(\int_{\R^6}\lambda \,\di \nu_{(x,t)}(\lambda)\Big) \,\di x\,\di t\,.
\end{split}
\end{equation}
The above information will allow us to prove that the Young measure $\nu_{(x,t)}$ is a Dirac measure, i.e. $\nu_{(x,t)}=\delta_{\dev T^k(x,t)}$ for almost all $(x,t)\in \Omega\times (0,\T)$. This will be accomplished by showing that the integral
\begin{equation}
\label{Young1}
\int_{Q}\int_{\R^6} h(\xi)\,\di\nu_{(x,t)}(\xi)\,\di x\,\di t
\end{equation}
is equal to $0$, where the function $h(\cdot)$ is defined by the formula
\begin{equation}
\label{Young2}
h(\xi):=\Big( G(\theta^k,\xi)- G\Big(\theta^k,\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big)\Big)\cdot\Big(\xi-\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big)\,.
\end{equation}
Monotonicity of the operator $G(\theta^k,\cdot)$ yields $h(\xi)\geq 0$ for every $\xi\in\R^6$; in particular \begin{equation}
\label{Young3}
\int_{Q}\int_{\R^6} h(\xi)\,\di\nu_{(x,t)}(\xi)\,\di x\,\di t\geq 0\,.
\end{equation}
Therefore,
\begin{equation}
\label{Young4}
\begin{split}
\int_{Q}\int_{\R^6}& h(\xi)\,\di\nu_{(x,t)}(\xi)\,\di x\,\di t\\[1ex]
&= \int_{Q}\int_{\R^6} G(\theta^k,\xi)\cdot\Big(\xi-\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big) \,\di\nu_{(x,t)}(\xi)\,\di x\,\di t\\[1ex]
&\hspace{2ex}-\int_{Q}\int_{\R^6} G\Big(\theta^k,\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big)\cdot\Big(\xi-\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big) \,\di\nu_{(x,t)}(\xi)\,\di x\,\di t\,.
\end{split}
\end{equation}
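Since $\nu_{(x,t)}$ is a probability measure and the factor $G\big(\theta^k,\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\big)$ does not depend on the integration variable $\xi$, we have
\begin{equation*}
\int_{\R^6}\Big(\xi-\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big)\,\di\nu_{(x,t)}(\xi)= \int_{\R^6}\xi\,\di\nu_{(x,t)}(\xi)-\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)=0\,.
\end{equation*}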
Hence the second term on the right-hand side of \eqref{Young4} is equal to zero and \eqref{Young4} takes the following form
\begin{equation}
\label{Young5}
\begin{split}
\int_{Q}\int_{\R^6} h(\xi)\,\di\nu_{(x,t)}(\xi)\,\di x\,\di t
&= \int_{Q}\int_{\R^6} G(\theta^k,\xi)\cdot\xi \,\di \nu_{(x,t)}(\xi)\,\di x\,\di t\\[1ex]
&\hspace{2ex} - \int_{Q}\Big(\int_{\R^6} G(\theta^k,\xi)\,\di\nu_{(x,t)}(\xi)\Big)\cdot\Big(\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)\Big) \,\di x\,\di t\,.
\end{split}
\end{equation}
Combining \eqref{Young5} with \eqref{operator2} and \eqref{operator4}, we conclude that
\begin{equation}
\label{Young6}
\int_{Q}\int_{\R^6} h(\xi)\,\di\nu_{(x,t)}(\xi)\,\di x\,\di t\leq 0\,.
\end{equation}
Let us observe that the vector field $G(\theta^k,\cdot)$ is strictly monotone, i.e. for all $S_1$, $S_2\in\SS$ such that $S_1\neq S_2$ we have
\begin{equation}
\label{striclymono}
(G(\theta^k, S_1)-G(\theta^k,S_2))\cdot(S_1-S_2)>0\,.
\end{equation}
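Indeed, writing the second component of $G(\theta^k,\cdot)$ as $S\mapsto \frac{1}{k}|S|^{2r-2}S$ (on deviators), we have the elementary estimate
\begin{equation*}
\big(|S_1|^{2r-2}S_1-|S_2|^{2r-2}S_2\big)\cdot(S_1-S_2)\geq \big(|S_1|^{2r-1}-|S_2|^{2r-1}\big)\big(|S_1|-|S_2|\big)\geq 0\,,
\end{equation*}
where the first inequality follows from the Cauchy--Schwarz bound $S_1\cdot S_2\leq |S_1|\,|S_2|$, and equality throughout forces $S_1=S_2$; hence this component is strictly monotone, while the first component of $G(\theta^k,\cdot)$ is monotone.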
Consequently \eqref{striclymono} is true. In particular, $h(\xi)>0$ for every $\xi\neq\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)$ and $h(\xi)=0$ otherwise. Since $h\geq 0$, we have $\int_{\R^6}h(\xi)\,\di\nu_{(x,t)}(\xi)\geq 0$ for a.a. $(x,t)\in\Omega\times(0,\T)$, and the inequality \eqref{Young6} forces
\begin{equation}
\label{Young7}
\int_{\R^6} h(\xi)\,\di\nu_{(x,t)}(\xi)= 0
\end{equation}
for almost all $(x,t)\in \Omega\times (0,\T)$. The measure $\nu_{(x,t)}$ is a probability measure ($\nu_{(x,t)}\geq 0$ and $\|\nu_{(x,t)}\|_{\mathcal{M}(\R^6)}=1$), so \eqref{Young7} implies that $\nu_{(x,t)}$ is concentrated on the set where $h$ vanishes for a.a. $(x,t)\in \Omega\times (0,\T)$. This yields $$\mathrm{supp}\,\nu_{(x,t)}\subset\Big\{\xi\in\R^6:\, \xi=\int_{\R^6}\lambda\,\di \nu_{(x,t)}(\lambda)=\dev T^k(x,t) \Big\}$$
for almost all $(x,t)\in \Omega\times (0,\T)$, hence $\nu_{(x,t)}=\delta_{\dev T^k(x,t)}$ for almost all $(x,t)\in \Omega\times (0,\T)$. A direct application
of Lemma \ref{deltadiraca} leads to $\dev T^k_m\rightarrow \dev T^k$ in measure, hence (up to a subsequence)
\begin{equation}
\label{punktowazb}
\dev T^k_m(x,t)\rightarrow \dev T^k(x,t) \quad \mathrm{a.e.\, in}\quad \Omega\times(0,\T)\,,
\end{equation}
which immediately gives
\begin{equation*}
\psi ^k(x,t)=\big\{|\dev(T^k(x,t))|-\beta(\theta^k(x,t)+\tilde{\theta}(x,t))\big\}^{r}_{+}\,\frac{\dev(T^k(x,t))}{|\dev(T^k(x,t))|}\quad\mathrm{a.\,e.}\quad (x,t)\in\Omega\times (0,\T)
\end{equation*}
and
\begin{equation*}
\chi ^k(x,t)=\frac{1}{k}|\dev(T^k(x,t))|^{2r-1}\,\frac{\dev(T^k(x,t))}{|\dev(T^k(x,t))|}\quad\mathrm{a.\,e.}\quad (x,t)\in\Omega\times (0,\T)\,.
\end{equation*}
Let us define
\begin{equation*}
g(\theta, S):=\TC_k \big(\big\{|\dev(S)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}|\dev(S)| \big)
\end{equation*}
for $\theta\in\R$ and $S\in\S$. The convergence \eqref{punktowazb} and Conclusion \ref{wnioski}~$(iii)$ entail that
\begin{equation*}
g\big(\theta^k_m(x,t), T^k_m(x,t)\big)-g\big(\theta^k(x,t), T^k(x,t)\big)\rightarrow 0 \quad \mathrm{a.e.\, in}\quad \Omega\times(0,\T)\,.
\end{equation*}
Additionally, $\big|g\big(\theta^k_m(x,t), T^k_m(x,t)\big)-g\big(\theta^k(x,t), T^k(x,t)\big)\big|\leq 2k$, so Lebesgue's dominated convergence theorem implies
\begin{equation}
\label{charakteryzacja1}
g\big(\theta^k_m, T^k_m\big)-g\big(\theta^k, T^k\big)\rightarrow 0 \quad \mathrm{in}\quad L^2(0,\T;L^2(\Omega;\R))\,.
\end{equation}
The convergence obtained in \eqref{charakteryzacja1} allows us to pass to the limit in system \eqref{app_system} as $m\rightarrow\infty$. Notice that we only have to do it in equation $\eqref{app_system}_3$, because for the other equations this has already been done in \eqref{weaklimit12}. Let us fix a natural number $N$ and consider a function $v\in C^1([0,\T];H^1(\Omega))$ of the form
\begin{equation}
\label{form1}
v(t)=\sum_{l=1}^N d^l(t) v_l\,,
\end{equation}
where $\{d^l\}_{l=1}^N$ are given smooth functions and $v_l$ are the solutions of \eqref{wektoryw}, which are also smooth. Assume that $m\geq N$; then multiplying $\eqref{app_system}_3$ by $d^l(t)$, summing over $l$ up to $N$ and integrating over time, we get
\begin{equation}
\label{temp}
\begin{split}
\int_0^\T\int_{\Omega}&(\theta^k_{m,_{t}})\, v(t)\,\di x\,\di t + \int_0^\T\int_{\Omega}\nabla\theta^k _{m}\nabla v(t)\, \di x\,\di t\\[1ex]
&+\int_0^{\T}\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big)\mathrm{div} (\tilde{u}_t + u^k _{m,_{t}})\, v(t) \,\di x\,\di t\\[1ex]
=& \int_0^\T\int_{\Omega} \TC_k \big(\big\{|\dev(T^k_m)|-\beta(\theta_m^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k_m)| \big)\,v(t)\, \di x\,\di t\,.
\end{split}
\end{equation}
The most problematic term in \eqref{temp} is the third term on the left-hand side. Observe that
\begin{equation}
\label{temp1}
\begin{split}
\int_0^\T\int_{\Omega}& \big|f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big) v(t)-f\big(\TC_k(\tilde{\theta}+ \theta^k )\big) v(t)\big|^2\,\di x\,\di t\\[1ex]
&\leq \int_0^\T \big\|\big(f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big) -f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\big)^2\big\|_{L^2(\Omega)}\|v^2(t)\|_{L^2(\Omega)}\,\di t\\[1ex]
& = \int_0^\T \big\|f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big) -f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\big\|^2_{L^4(\Omega)}\|v(t)\|^2_{L^4(\Omega)}\,\di t\\[1ex]
&\leq \big\|f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big) -f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\big\|^2_{L^4(0,\T;L^4(\Omega))}\|v\|^2_{L^4(0,\T;L^4(\Omega))}\,.
\end{split}
\end{equation}
The convergence \eqref{mocnatemp} and the regularity of the function $v$ imply that the right-hand side of \eqref{temp1} tends to zero, hence
\begin{equation}
\label{temp2}
f\big(\TC_k(\tilde{\theta}+ \theta_{m}^k )\big) v\rightarrow f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)v\quad \mathrm{in}\quad L^2(0,\T;L^2(\Omega))\,.
\end{equation}
The information \eqref{temp2} allows us to use standard tools in the Galerkin method and obtain
\begin{equation}
\label{temp3}
\begin{split}
\int_{\Omega}&\theta^k_{t}\, v\,\di x + \int_{\Omega}\nabla\theta^k\nabla v\, \di x
+\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div} (\tilde{u}_t + u^k_{t})\, v \,\di x\\[1ex]
=& \int_{\Omega} \TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,v\, \di x
\end{split}
\end{equation}
for all $v\in H^1(\Omega)$ and almost all $t\in (0,\T)$. Additionally, we obtain
\begin{equation}
\int_{\Omega} \big(T^k - f\big(\TC_k(\tilde{\theta}+\theta^k )\big)\id\big) \varepsilon(w)\, \di x + \int_{\Omega}\D(\varepsilon(u^k_{t})) \,\varepsilon(w)\,\di x = 0
\label{balancek}
\end{equation}
for all $w\in H^1_0(\Omega;\R^3)$ and almost all $t\in (0,\T)$ and
\begin{equation}
\begin{aligned}
\int_{\Omega}\D^{-1} T^k_{t}\,\tau\,\di x&+ \int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\tau\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\tau\,\di x
\\[1ex]
&=\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)\tau\,\di x
\end{aligned}
\label{flowrulek}
\end{equation}
for all $\tau\in L^2(\Omega;\S)$ with $\dev(\tau)\in L^{2r}(\Omega;\SS)$ and almost all $t\in (0,\T)$. Formula \eqref{flowrulek} completes the proof of the existence of solutions for every approximation step $k> 0$.
\section{Proof of the Main Result}
\subsection{Estimates independent of the truncation}
In this section we pass to the limit $k\rightarrow\infty$ and obtain solutions in the sense of Definition \ref{Maindef}.
At the beginning we prove some a priori estimates for the sequence of approximate solutions $\{(T^k, u^{k},\theta^{k})\}_{k>0}$. First, the energy estimate is established.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
\begin{tw}
\label{tw:4.1}
Assume that the given data satisfy all requirements of \eqref{regularity}.
Then there exists a positive constant $C(\T)$ (not depending on $k>0$) such that the following inequality holds
\begin{eqnarray*}
\|T^{k}(t)\|^2_{L^2(\Omega)} + \frac{1}{k}\|\dev(T^{k}(t))\|^{2r}_{L^{2r}(\Omega)}+ \int_0^t\|\ve(u_t^{k})\|^2_{L^2(\Omega)}\,\di\tau+\int_{\Omega}|\theta^{k}(t)|\,\di x \leq\,\, C(\T)\,,
\end{eqnarray*}
where $0<t\leq \T$.
\end{tw}
\begin{proof}
The proof of the above inequality is based on the proof of Theorem 4.1 from \cite{ChelminskiOwczarekthermoII}. In this article, we consider a nonhomogeneous Dirichlet boundary condition for the displacement vector, which was not taken into account in \cite{ChelminskiOwczarekthermoII}. As a consequence we get an additional term that needs to be bounded independently of $k>0$. Theorem \ref{tw:4.1} is an essential part of the proof of the main result, so we decided to present it in complete form. Fix $M>0$. Testing equation \eqref{balancek} with $w=M u_t^{k}$, equation \eqref{flowrulek} with $\tau=M T^k$ and equation \eqref{temp3} with $v=\TC_M(\theta^{k})$ we obtain
\begin{equation}
\label{41}
\begin{split}
M\int_{\Omega} \big(T^k &- f\big(\TC_k(\tilde{\theta}+\theta^k )\big)\id\big) \varepsilon(u_t^{k})\, \di x + M\int_{\Omega}\D(\varepsilon(u^k_{t})) \,\varepsilon(u_t^{k})\,\di x = 0\,,\\[1ex]
M\int_{\Omega}\D^{-1} T^k_{t} T^k\,\di x&+ M\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\,\di x
\\[1ex]
&+\frac{M}{k}\int_{\Omega}|\dev(T^k)|^{2r}\,\di x
=M\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big) T^k\,\di x\,,
\\[1ex]
\int_{\Omega}\theta^k_{t}\, \TC_M(\theta^{k})\,\di x &+ \int_{\Omega}|\nabla\TC_M(\theta^{k})|^2\, \di x
+\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div} (\tilde{u}_t + u^k_{t})\, \TC_M(\theta^{k}) \,\di x\\[1ex]
=& \int_{\Omega} \TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,\TC_M(\theta^{k})\, \di x\,.
\end{split}
\end{equation}
Integrating \eqref{41} with respect to time and adding all the equations in \eqref{41} we arrive at the following identity
\begin{equation}
\begin{split}
&M\int_{\Omega}\D^{-1}T^{k}(t)\,T^{k}(t)\di x +M\int_0^t\int_{\Omega}\D\ve(u^{k}_t)\,\ve(u^{k}_t)\,\di x\,\di\tau\\[1ex]
&\hspace{2ex}+M\int_0^t\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\,\di x\,\di\tau \\[1ex]
&\hspace{4ex} +\frac{M}{k}\int_0^t\int_{\Omega}|\dev(T^k)|^{2r}\,\di x\,\di\tau+\int_{\Omega}\varphi_M(\theta^{k}(t))\,\di x+ \int_0^t\int_{\Omega}|\nabla \TC_M(\theta^{k})|^2\,\di x\,\di\tau\\[1ex]
&= M\int_{\Omega}\D^{-1}T_0\,T_0\,\di x+
\int_{\Omega}\varphi_M(\TC_k(\theta_0))\,\di x+M\int_0^t\int_{\Omega}\ve(\tilde{u}_t)\,T^k\,\di x\,\di\tau\\[1ex]
&\hspace{2ex}+\int_0^t\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div}\, u^k_{t}\,\big(M-\TC_M(\theta^{k})\big)\,\di x\,\di\tau\\[1ex]
&\hspace{4ex}+\int_0^t\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div}\, \tilde{u}_t\,\TC_M(\theta^{k})\,\di x\,\di\tau\\[1ex]
&\hspace{6ex}+ \int_0^t\int_{\Omega}\TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,\TC_M(\theta^{k})\,\di x\,\di\tau\,,
\end{split}
\label{42}
\end{equation}
where the function $\varphi_M(\cdot)$ is defined by the formula \eqref{pierwotna}.
Notice that the first integral of \eqref{42} containing the initial data is independent of $k>0$. On the other hand, the linear growth of the function $\varphi_M$ at infinity allows us to estimate the second one as follows
\begin{equation}
\label{thetazero}
\int_{\Omega}\varphi_M(\TC_k(\theta_0))\,\di x\leq C\|\theta_0\|_{L^1(\Omega)}\,
\end{equation}
for every $k>0$, where the constant $C>0$ is independent of $k$. Moreover,
\begin{equation}
\begin{split}
\int_0^t\int_{\Omega}&\TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,\TC_M(\theta^{k})\,\di x\,\di\tau\\[1ex]
&\leq M\int_0^t\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\,\di x\,\di\tau\,.
\end{split}
\label{43}
\end{equation}
Using the H\"older, the Young inequalities and regularity of the given data we conclude that
\begin{equation}
\begin{split}
&M\int_{\Omega}\D^{-1}T^{k}(t)\,T^{k}(t)\di x +M\int_0^t\int_{\Omega}\D\ve(u^{k}_t)\,\ve(u^{k}_t)\,\di x\,\di\tau\\[1ex]
&\hspace{2ex} +\frac{M}{k}\int_0^t\int_{\Omega}|\dev(T^k)|^{2r}\,\di x\,\di\tau+\int_{\Omega}\varphi_M(\theta^{k}(t))\,\di x+ \int_0^t\int_{\Omega}|\nabla \TC_M(\theta^{k})|^2\,\di x\,\di\tau\\[1ex]
&\leq\quad\tilde{C}(\T)+M\int_0^t\int_{\Omega}|T^k|^2\,\di x\,\di\tau+ \nu\int_0^t\int_{\Omega}|\ve(u^{k}_t)|^2\,\di x\,\di\tau\\[1ex]
&\hspace{2ex}+D\int_0^t\int_{\Omega} f^2\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\,\big(M-\TC_M(\theta^{k})\big)^2\,\di x\,\di\tau\\[1ex]
&\hspace{4ex}+\int_0^t\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div}\, \tilde{u}_t\,\TC_M(\theta^{k})\,\di x\,\di\tau\,,
\end{split}
\label{44}
\end{equation}
where $\nu>0$ is an arbitrary positive constant and the constants $\tilde{C}(\T)$ and $D$ do not depend on $k>0$. Let us start by estimating the last integral in \eqref{44}, which arises from the nonhomogeneous Dirichlet boundary condition (note that $\frac{1}{1-\alpha}>2$). Recalling \eqref{warwzrostu} we have
\begin{equation}
\begin{split}
\int_0^t&\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div}\, \tilde{u}_t\,\TC_M(\theta^{k})\,\di x\,\di\tau\leq M \int_0^t\int_{\Omega} \big|f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\big||\mathrm{div}\, \tilde{u}_t|\,\di x\,\di\tau\\[1ex]
&\leq M \int_0^t\int_{\Omega} \big(a+B|\TC_k(\tilde{\theta}+\theta^k)|^{\alpha}\big)|\mathrm{div}\, \tilde{u}_t|\,\di x\,\di\tau\\[1ex]
&\leq M \int_0^t\int_{\Omega} \big(a+B|\tilde{\theta}+\theta^k|^{\alpha}\big)|\mathrm{div}\, \tilde{u}_t|\,\di x\,\di\tau\\[1ex]
& \leq M \int_0^t\Big(\int_{\Omega} \big(a+B|\tilde{\theta}+\theta^k|^{\alpha}\big)^{\frac{1}{\alpha}}\,\di x\Big)^{\alpha}\Big(\int_{\Omega}|\mathrm{div}\, \tilde{u}_t|^{\frac{1}{1-\alpha}}\,\di x\Big)^{1-\alpha}\,\di\tau\\[1ex]
&\leq M \int_0^t\Big(\int_{\Omega} 2^{\frac{1}{\alpha}-1}(a^{\frac{1}{\alpha}}+B^{\frac{1}{\alpha}}|\tilde{\theta}+\theta^k|)\,\di x\Big)^{\alpha}\Big(\int_{\Omega}|\mathrm{div}\, \tilde{u}_t|^{\frac{1}{1-\alpha}}\,\di x\Big)^{1-\alpha}\,\di\tau\\[1ex]
&\leq M \Big(\alpha\int_0^t\int_{\Omega} 2^{\frac{1}{\alpha}-1}(a^{\frac{1}{\alpha}}+B^{\frac{1}{\alpha}}|\tilde{\theta}+\theta^k|)\,\di x\,\di\tau+(1-\alpha)\int_0^t\int_{\Omega}|\mathrm{div}\, \tilde{u}_t|^{\frac{1}{1-\alpha}}\,\di x\,\di\tau\Big)\\[1ex]
&\leq \quad\hat{C}(\T)+M 2^{\frac{1}{\alpha}-1}\alpha\int_0^t\int_{\Omega} |\theta^k|\,\di x\,\di\tau\,,
\end{split}
\label{45}
\end{equation}
where the constant $\hat{C}(\T)$ does not depend on $k>0$. To complete the proof, we have one more integral to estimate on the right-hand side of \eqref{44}. This estimate is the same as in \cite{ChelminskiOwczarekthermoII}, but for the convenience of the reader we have decided to present it here. Let us define two sets $Q_1=\{(x,\tau)\in\Omega\times (0,t):\,\theta^{k}(x,\tau)\leq -M\}$ and\\ $Q_2=\{(x,\tau)\in\Omega\times (0,t):\,-M<\theta^{k}(x,\tau)<M\}$. For almost every $(x,\tau)\in\Omega\times (0,t)$ the following inequality is satisfied
\begin{equation}
\label{46}
\begin{split}
\Big(f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\Big)^2\big(M-\TC_M(\theta^{k})\big)^2& \leq 4M^2f^2\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\chi_{Q_1}\\[1ex]
&\hspace{2ex}+4M^2f^2\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\chi_{Q_2}\,,
\end{split}
\end{equation}
where $\chi_{Q_1}$ and $\chi_{Q_2}$ are the characteristic functions of the sets $Q_1$ and $Q_2$, respectively. The inequality \eqref{46} yields
\begin{equation}
\label{471}
\begin{split}
\int_0^t\int_{\Omega} \Big(f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\Big)^2\big(M-\TC_M(\theta^{k})\big)^2\,\di x\,\di\tau&\leq 4M^2 \int_{Q_1}f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di \tau\\[1ex]
&+ 4M^2 \int_{Q_2}f\big(\TC_k(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di\tau\,.
\end{split}
\end{equation}
The regularity of $\tilde{\theta}$ and the growth condition \eqref{warwzrostu} entail that the second integral on the right-hand side of \eqref{471} is bounded. Let us define the next two sets\\ $Q_1'=\{(x,\tau)\in Q_1:\,\theta^{k}(x,\tau)+\tilde{\theta}(x,\tau)\geq 0\}$ and
$Q_1''=\{(x,\tau)\in Q_1:\,\theta^{k}(x,\tau)+\tilde{\theta}(x,\tau) < 0\}$. Therefore the first term on the right-hand side of \eqref{471} is estimated as follows
\begin{equation}
\label{48}
\int_{Q_1} f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di\tau= \int_{Q_1'} f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di \tau+\int_{Q_1''} f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di\tau\,.
\end{equation}
Again the regularity of $\tilde{\theta}$ and the growth conditions on $f$ yield that the first integral on the right-hand side of \eqref{48} is bounded. Using the assumption \eqref{warwzrostu1} we obtain
\begin{equation}
\label{49}
\begin{split}
4M^2\int_{Q_1''} f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)^2\,\di x\,\di t&\leq 4CM^2\int_{Q_1''}(1+|\theta^{k}+\tilde{\theta}|)\,\di x\,\di\tau\\[1ex]
&\leq D+4CM^2\int_{Q_1''}|\theta^{k}|\,\di x\,\di\tau\\[1ex]
&=D+4CM\int_{Q_1''}(\varphi_M(\theta^{k})+\frac{1}{2}M^2)\,\di x\,\di\tau\,.
\end{split}
\end{equation}
Notice that for $r\in\R$
\begin{displaymath}
\varphi_M(r) \geq \left\{ \begin{array}{ll}
\frac{1}{2}|r|^2 & \textrm{if}\quad |r|\leq M\,,\\[1ex]
\frac{1}{2}M|r| & \textrm{if}\quad |r|>M\,.\\
\end{array} \right.
\end{displaymath}
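This elementary bound can be verified directly, assuming (as in \eqref{pierwotna}, consistently with the computation in \eqref{49}) that $\varphi_M$ is the primitive of the truncation $\TC_M$, i.e. $\varphi_M(r)=\frac{1}{2}r^2$ for $|r|\leq M$ and $\varphi_M(r)=M|r|-\frac{1}{2}M^2$ for $|r|>M$; in the latter case the inequality $M|r|-\frac{1}{2}M^2\geq \frac{1}{2}M|r|$ is equivalent to $|r|\geq M$.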
Applying inequalities \eqref{45}--\eqref{49} in \eqref{44}, choosing the constant $\nu>0$ appropriately small and using Gronwall's inequality, we complete the proof.
\end{proof}
\noindent
Following the ideas of \cite{BoccardoGallouet,GKS15,ChelminskiOwczarekthermoII,barowcz2}, we are going to use Boccardo's and Gallou{\"e}t's approach to the heat equation occurring in the system \eqref{AMain1}. To this end we need the boundedness of the right-hand side of the heat equation in $L^1(0,\T;L^1(\Omega))$. We obtain this information in the following lemma.
\begin{lem}
\label{tw:4.7}
The sequences $\Big\{\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \Big\}_{k>0}$ and\\
$\big\{\frac{1}{k}|\dev(T^k)|^{2r}\big\}_{k>0}$ are uniformly bounded in the space $L^{1}(0,\T;L^1(\Omega))$.
\end{lem}
\begin{proof}
Putting $\tau=T^k$ in \eqref{flowrulek} we have
\begin{equation}
\begin{aligned}
\int_{\Omega}&\D^{-1} T^k_{t}\,T^k\,\di x + \int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k)|\,\di x
\\[1ex]
&+\frac{1}{k}\int_{\Omega}|\dev(T^k)|^{2r}\,\,\di x
=\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)T^k\,\di x\,.
\end{aligned}
\label{flowrulek1}
\end{equation}
Integrating with respect to time we get
\begin{equation}
\begin{aligned}
&\int_0^t\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k)|\,\di x\,\di\tau
+\frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k)|^{2r}\,\di x\,\di\tau\\[1ex]
&=\frac{1}{2}\int_{\Omega}\D^{-1} T_0\,T_0\,\di x-\frac{1}{2}\int_{\Omega}\D^{-1} T^k(t)\,T^k(t)\,\di x
+\int_0^t\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)T^k\,\di x\,\di\tau\,.
\end{aligned}
\label{flowrulek2}
\end{equation}
The assumptions on the initial data \eqref{regularity} imply that the first integral on the right-hand side of \eqref{flowrulek2} is bounded, and the second one is nonpositive. Theorem \ref{tw:4.1} yields that the sequences $\{u^{k}_t\}_{k>0}$ and $\{T^{k}\}_{k>0}$ are bounded in the spaces $L^2(0,\T;H^1_0(\Omega;\R^3))$ and $L^{\infty}(0,\T;L^2(\Omega;\S))$, respectively.
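Hence the last integral in \eqref{flowrulek2} is controlled by the Cauchy--Schwarz inequality,
\begin{equation*}
\Big|\int_0^t\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)T^k\,\di x\,\di\tau\Big|\leq \big(\|\ve(u^k_{t})\|_{L^2(0,\T;L^2(\Omega))}+\|\ve(\tilde{u}_t)\|_{L^2(0,\T;L^2(\Omega))}\big)\,\|T^k\|_{L^2(0,\T;L^2(\Omega))}\,,
\end{equation*}
and the right-hand side is bounded independently of $k>0$, which completes the proof.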
\end{proof}
\noindent
Theorem \ref{tw:4.1} and Lemma \ref{tw:4.7} imply that the sequences\\ $\Big\{\TC_k\big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\big) \Big\}_{k>0}$ and $\{\mathrm{div}\,u^{k}_t\}_{k>0}$ are bounded in $L^1(0,\T;L^{1}(\Omega))$ and $L^2(0,\T;L^2(\Omega))$, respectively. This information allows us to use Boccardo's and Gallou{\"e}t's approach and obtain the following lemma.
\begin{lem}
\label{lem:4.2}
The sequence $\{\theta^{k}\}_{k>0}$ is uniformly bounded in the space $L^{q}(0,\T;W^{1,q}(\Omega))$ for all $1<q<\frac{5}{4}$.
\end{lem}
\noindent
In the theory of inelastic deformations taking heat flow into account, Lemma \ref{lem:4.2} has become standard. The original idea comes from Boccardo and Gallou{\"e}t \cite{BoccardoGallouet}. The complete proof of Lemma \ref{lem:4.2} can be found in \cite{ChelminskiOwczarekthermoII} and \cite{barowcz2}, so we decided to omit it (see also \cite{GKS15} and \cite{RoubicekL1}, where this approach has been applied to similar problems).
\begin{remark}
\label{col:4.5}
The growth condition \eqref{warwzrostu} yields
\begin{equation}
\label{eq:47}
\int\limits _0^\T\int\limits _{\Omega}|f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)|^2\,\di x\,\di\tau\leq A+\tilde{M} \int\limits _0^\T\int\limits _{\Omega}|\theta^{k}|^{2\alpha}\,\di x\,\di\tau\,,
\end{equation}
where the constants $A$ and $\tilde{M}$ do not depend on $k>0$. Let us assume that $\frac{4}{3}\leq 2\alpha<\frac{5}{3}$ and $2\alpha=\frac{4}{3}q$. Using the interpolation inequality we obtain
\begin{equation}
\label{eq:48}
\|\theta^{k}(t)\|_{L^{2\alpha}(\Omega)} \leq \|\theta^{k}(t)\|^{s_1}_{L^{1}(\Omega)}\|\theta^{k}(t)\|^{1-s_1}_{L^{q^{\ast}}(\Omega)}
\end{equation}
for almost every $t\leq \T$, where $q^{\ast}=\frac{3q}{3-q}$ and $\frac{1}{2\alpha}=\frac{s_1}{1}+\frac{1-s_1}{q^{\ast}}$. Using inequality \eqref{eq:48} in \eqref{eq:47}, applying Theorem \ref{tw:4.1} and the Sobolev embedding theorem we deduce that the sequence $\{f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\}_{k>0}$ is bounded in $L^2(0,\T;L^2(\Omega;\R))$. For $1<2\alpha<\frac{4}{3}$ the last statement is also correct since the following inequality is true
\begin{equation}
\label{eq:411}
\|\theta^{k}(t)\|_{L^{2\alpha}(\Omega)} \leq D_5\|\theta^{k}(t)\|_{L^{\frac{4}{3}}(\Omega)}
\end{equation}
for almost every $t\leq \T$. Summarizing the above statements we get that the sequence
\begin{equation}
\label{eq:412}
\Big\{f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big) \mathrm{div}\,u^{k}_t + \TC_k\Big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\Big) \Big\}_{k>0}
\end{equation}
is bounded in $L^1(0,\T;L^1(\Omega))$. It follows that the sequence \eqref{eq:412} is bounded in\\ $L^1\big(0,\T;\big(W^{1,q'}(\Omega)\big)^{\ast}\big)$, where $\big(W^{1,q'}(\Omega)\big)^{\ast}$ denotes the space of all linear bounded functionals on $W^{1,q'}(\Omega)$ $(\frac{1}{q}+\frac{1}{q'}=1)$. This information entails that the sequence $\{\theta^{k}_t\}_{k>0}$ is bounded in $L^1\big(0,\T;\big(W^{1,q'}(\Omega)\big)^{\ast}\big)$. Using the Aubin--Lions compactness lemma we derive that the sequence $\{\theta^{k}\}_{k>0}$ is relatively compact in $L^1(0,\T;L^1(\Omega))$. It contains a subsequence (again denoted using the superscript $k$) such that $\theta^{k}\rightarrow \theta$ a.e. in $\Omega\times(0,\T)$.
The continuity of $f$ entails that
$$f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\rightarrow f(\theta+\tilde{\theta})\quad \mathrm{a.e.\,\, in}\quad \Omega\times(0,\T)\,.$$
Additionally, from Lemma \ref{lem:4.2} we know that the sequence $\{\theta^{k}\}_{k>0}$ is bounded in the space $L^{p}(0,\T;L^{p}(\Omega))$ for $1\leq p<\frac{5}{3}$. Let us select $p\in\R$ such that $2\alpha<p<\frac{5}{3}$; then the growth condition on $f$ yields that the sequence $\left\{f\left(\TC_{k}(\theta^{k}+\tilde{\theta})\right)\right\}_{k>0}$ is bounded in $L^{\frac{p}{\alpha}}(\Omega\times(0,\T))$. Since $\frac{p}{\alpha}>2$, equi-integrability finally gives one of the most important pieces of information in the proof of the main result:
$$
f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\rightarrow f(\theta+\tilde{\theta})\quad \mathrm{in}\quad L^2(0,\T;L^2(\Omega))\,.\\[1ex]
$$
\end{remark}
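\noindent
Let us also record the exponent arithmetic behind Remark \ref{col:4.5}: the choice $2\alpha=\frac{4}{3}q$ exactly matches the available integrability. Indeed, inserting $q^{\ast}=\frac{3q}{3-q}$ into the interpolation identity we obtain
\begin{equation*}
\frac{1}{2\alpha}=s_1+\frac{1-s_1}{q^{\ast}}=1-(1-s_1)\,\frac{4q-3}{3q}\,,\qquad\textrm{hence}\qquad 1-s_1=\frac{3}{4}\quad\textrm{and}\quad 2\alpha(1-s_1)=q\,,
\end{equation*}
so after raising \eqref{eq:48} to the power $2\alpha$ and integrating in time, the factor $\|\theta^{k}(t)\|^{2\alpha(1-s_1)}_{L^{q^{\ast}}(\Omega)}$ is integrable precisely by Lemma \ref{lem:4.2} combined with the Sobolev embedding $W^{1,q}(\Omega)\hookrightarrow L^{q^{\ast}}(\Omega)$, while the factor $\|\theta^{k}(t)\|^{2\alpha s_1}_{L^{1}(\Omega)}$ is controlled by Theorem \ref{tw:4.1}.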
\noindent
In the next part of this section we address the boundedness of the nonlinearities associated with the inelastic constitutive equation $\eqref{AMain1}_2$.
\begin{remark}
\label{col:4.6}
Let us examine the following integral
\begin{equation}
\label{412}
\begin{split}
\int_0^t&\int_{\Omega}\Big|\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\Big|^{\frac{r+1}{r}}\,\di x\, \di \tau\\[1ex]
&= \int_0^t\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r+1}_{+}\,\di x\, \di\tau
\end{split}
\end{equation}
for $t\leq\T$. Observe that for almost every $(x,\tau)\in \Omega\times (0,t)$ such that $$|\dev(T^k(x,\tau))|\leq\beta(\theta^k(x,\tau)+\tilde{\theta}(x,\tau))\,,$$
the integrand on the right-hand side of \eqref{412} vanishes. Let us denote by
$$Q_1=\{(x,\tau)\in\Omega\times (0,t):\,|\dev(T^k(x,\tau))|>\beta(\theta^k(x,\tau)+\tilde{\theta}(x,\tau))\}\,,$$
then
\begin{equation}
\label{413}
\begin{split}
\int_0^t&\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r+1}_{+}\,\di x\, \di\tau\\[1ex]
&=\int_{Q_1}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\big(|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big)\,\di x\, \di\tau\\[1ex]
&=\int_{Q_1}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&\hspace{2ex}-
\int_{Q_1}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\beta(\theta^k+\tilde{\theta})\,\di x\, \di\tau\,.
\end{split}
\end{equation}
The last term on the right-hand side of \eqref{413} is non-positive (assumption (C2)), hence Lemma \ref{tw:4.7} implies that the sequence $\Big\{\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\Big\}_{k>0}$ is bounded in $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))$. Additionally,
\begin{equation}
\label{414}
\frac{1}{k}\int_0^t\int_{\Omega}\Big||\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\Big|^{\frac{2r}{2r-1}}\,\di x\, \di \tau= \frac{1}{k}\int_0^t\int_{\Omega}|\dev(T^k)|^{2r}\,\di x\, \di \tau\,,
\end{equation}
therefore using Lemma \ref{tw:4.7} again we obtain that the quantity
\begin{equation}
\label{415}
\Big(\frac{1}{k}\Big)^{\frac{2r-1}{2r}}\Big\||\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\Big\|_{L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega))}
\end{equation}
is bounded independently of $k>0$.
\end{remark}
\noindent
To pass to the limit in equations \eqref{temp3}, \eqref{balancek} and \eqref{flowrulek}, we also need boundedness of the sequences $\{\dev(T^k)\}_{k>0}$ and $\{T^k_t\}_{k>0}$.
\begin{remark}
\label{col:4.7}
Let us introduce $Q_2= \Omega\times(0,t)\setminus Q_1$ for $t\in (0,\T)$, where $Q_1$ is defined in Remark \ref{col:4.6}. Then
\begin{equation}
\label{416}
\begin{split}
\int_0^t&\int_{\Omega}|\dev(T^k)|^{r+1}\,\di x\, \di\tau= \int_0^t\int_{\Omega}|\dev(T^k)|^{r}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&=\int_{Q_1}|\dev(T^k)|^{r}|\dev(T^k)|\,\di x\, \di\tau+ \int_{Q_2}|\dev(T^k)|^{r}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&=\int_{Q_1}\big(|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})+\beta(\theta^k+\tilde{\theta})\big)^{r}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&\hspace{2ex}+\int_{Q_2}|\dev(T^k)|^{r}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&\leq 2^{r-1}\int_{Q_1}\big(|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big)^{r}|\dev(T^k)|\,\di x\, \di\tau\\[1ex]
&\hspace{2ex}+ 2^{r-1}\int_{Q_1}\beta^r(\theta^k+\tilde{\theta})|\dev(T^k)|\,\di x\, \di\tau
+\int_{Q_2}|\dev(T^k)|^{r}|\dev(T^k)|\,\di x\, \di\tau\,.
\end{split}
\end{equation}
Lemma \ref{tw:4.7} implies that the first integral on the right-hand side of \eqref{416} is bounded independently of $k>0$. Theorem \ref{tw:4.1} yields that the sequence $\{T^k\}_{k>0}$ is bounded in\\ $L^\infty(0,\T;L^2(\Omega;\S))$, therefore by assumption (C2) the penultimate integral in \eqref{416} is bounded. On the set $Q_2$ the deviatoric part of $T^k$ is bounded in $L^\infty(0,\T;L^\infty(\Omega;\SS))$, hence the last integral in \eqref{416} is also bounded. Summarizing, we obtain that the sequence $\{\dev(T^k)\}_{k>0}$ is bounded in the space $L^{r+1}(0,\T;L^{r+1}(\Omega;\SS))$.
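In the estimate above we used the convexity inequality $(a+b)^{r}\leq 2^{r-1}\big(a^{r}+b^{r}\big)$ for $a,b\geq 0$ and $r\geq 1$, which follows from Jensen's inequality applied to the convex function $t\mapsto t^{r}$:
\begin{equation*}
\Big(\frac{a+b}{2}\Big)^{r}\leq \frac{a^{r}+b^{r}}{2}\,.
\end{equation*}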
\end{remark}
\begin{lem}
\label{lem47} The sequence $\{T^k_t\}_{k>0}$ is bounded in $ L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\S))$.
\end{lem}
\begin{proof}
Let $\varphi \in L^{2r}(0,\T;L^{2r}(\Omega;\S))$. The function $\varphi$ can be used as a test function in \eqref{flowrulek}, hence integrating \eqref{flowrulek} with respect to time we obtain
\begin{equation}
\begin{aligned}
\int_0^{\T}\int_{\Omega}\D^{-1} T^k_{t}\,\varphi\,\di x\,\di\tau&+ \int_0^{\T}\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\varphi\,\di x\,\di\tau
\\[1ex]
&+\frac{1}{k}\int_0^{\T}\int_{\Omega}|\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\,\varphi\,\di x\,\di\tau
\\[1ex]
&=\int_0^{\T}\int_{\Omega}\big(\ve(u^k_{t})+\ve(\tilde{u}_t)\big)\varphi\,\di x\,\di\tau\,,
\end{aligned}
\label{417}
\end{equation}
therefore
\begin{equation}
\begin{aligned}
\big|&\int_0^{\T}\int_{\Omega}\D^{-1} T^k_{t}\,\varphi\,\di x\,\di\tau\big|\\[1ex]
&\leq \big\|\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\big\|_{L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega))}\|\varphi\|_{ L^{2r}(0,\T;L^{2r}(\Omega))}
\\[1ex]
&\hspace{2ex}+\Big(\frac{1}{k}\Big)^{\frac{1}{2r}}\Big(\frac{1}{k}\Big)^{\frac{2r-1}{2r}}\Big\||\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|}\Big\|_{L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega))}\|\varphi\|_{ L^{2r}(0,\T;L^{2r}(\Omega))}
\\[1ex]
&\hspace{4ex}+\|\ve(u^k_{t})+\ve(\tilde{u}_t)\|_{L^{2}(0,\T;L^{2}(\Omega))}\|\varphi\|_{ L^{2r}(0,\T;L^{2r}(\Omega))}\,.
\end{aligned}
\label{418}
\end{equation}
Remark \ref{col:4.6} together with \eqref{418} shows that
\begin{equation}
\sup_{\substack{\varphi \in L^{2r}(0,\T;L^{2r}(\Omega;\S))\\[1ex]
\|\varphi\|_{L^{2r}(0,\T;L^{2r}(\Omega;\S))}\leq 1}}\,\,\big|\int_0^{\T}\int_{\Omega}\D^{-1} T^k_{t}\,\varphi\,\di x\,\di\tau\big|
\end{equation}
is bounded independently of $k>0$, and the proof is complete.
\end{proof}
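\noindent
Let us note that the last step used the standard duality characterisation of the norm with $p=2r$, $p'=\frac{2r}{2r-1}$,
\begin{equation*}
\|g\|_{L^{p'}(0,\T;L^{p'}(\Omega;\S))}=\sup_{\substack{\varphi \in L^{p}(0,\T;L^{p}(\Omega;\S))\\ \|\varphi\|_{L^{p}(0,\T;L^{p}(\Omega;\S))}\leq 1}}\,\int_0^{\T}\int_{\Omega} g\,\varphi\,\di x\,\di\tau\,,
\end{equation*}
applied to $g=\D^{-1}T^k_t$; the bound for $T^k_t$ itself then follows, since the operator $\D^{-1}$ is bounded and invertible (we consider isotropic materials).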
\subsection{Passing to the limit with $k\rightarrow +\infty$}
All the bounds established in Section $4.1$ lead to (passing to subsequences if necessary)
\begin{equation}
\begin{array}{cl}
T^k \rightharpoonup T & \mbox{weakly in } L^2(0,\T;L^2(\Omega;\S))\,,\\[1ex]
T^k_t \rightharpoonup T_t & \mbox{weakly in } L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\S))\,,\\[1ex]
\dev(T^k) \rightharpoonup \dev(T) & \mbox{weakly in } L^{r+1}(0,\T;L^{r+1}(\Omega;\SS))\,,\\[1ex]
u^k\rightharpoonup u & \mbox{weakly in } H^1(0,\T;H^1_0(\Omega;\R^3))\,,\\[1ex]
\theta^k \rightharpoonup \theta & \mbox{weakly in } L^q(0,\T;W^{1,q}(\Omega))\,,\\[1ex]
\theta^k \rightarrow \theta & \mbox{in } L^1(0,\T;L^1(\Omega))\,,\\[1ex]
f\big(\TC_{k}(\theta^{k}+\tilde{\theta})\big)\rightarrow f(\theta+\tilde{\theta})& \mbox{in } L^2(0,\T;L^2(\Omega))\,,\\[1ex]
\frac{1}{k}|\dev(T^k)|^{2r-1}\,\frac{\dev(T^k)}{|\dev(T^k)|} \rightharpoonup 0 & \mbox{weakly in } L^{\frac{2r}{2r-1}}(0,\T;L^{\frac{2r}{2r-1}}(\Omega;\SS)),\\[2ex]
\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,\frac{\dev(T^k)}{|\dev(T^k)|}\rightharpoonup \psi & \mbox{weakly in } L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))
\end{array}
\label{weaklimk}
\end{equation}
as $k \rightarrow \infty$.
\begin{lem}
\label{lem48}
The weak limit $T_t$ belongs to $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\S))$.
\end{lem}
\begin{proof}
Using \eqref{weaklimk} we can pass to the limit with $k\rightarrow\infty$ in \eqref{417} and obtain
\begin{equation}
\int_0^{\T}\int_{\Omega}\D^{-1} T_{t}\,\varphi\,\di x\,\di\tau+ \int_0^{\T}\int_{\Omega}\psi\,\dev(\varphi)\,\di x\,\di\tau
=\int_0^{\T}\int_{\Omega}\big(\ve(u_{t})+\ve(\tilde{u}_t)\big)\varphi\,\di x\,\di\tau
\label{421}
\end{equation}
for all $\varphi \in L^{2r}(0,\T;L^{2r}(\Omega;\S))$, which means that
\begin{equation}
\D^{-1} T_{t}(x,t)= -\psi(x,t)+ \ve(u_{t}(x,t))+\ve(\tilde{u}_t(x,t))
\label{422}
\end{equation}
almost everywhere in $\Omega\times(0,\T)$. Regularities of the functions $\psi$, $u_{t}$ and $\tilde{u}_t$ complete the proof.
\end{proof}
\begin{col}
\label{col:4.8}
Formula \eqref{422} yields
\begin{equation}
\begin{split}
\mathrm{tr}(\D^{-1}T_{t}(x,t))&= -\mathrm{tr}(\psi(x,t))+ \mathrm{tr}\big(\ve(u_{t}(x,t))+\ve(\tilde{u}_t(x,t))\big)\\[1ex]
&=\mathrm{tr}\big(\ve(u_{t}(x,t))+\ve(\tilde{u}_t(x,t))\big)\,,
\end{split}
\label{423}
\end{equation}
where we used that $\psi$ is deviatoric, hence trace free. The regularity of the functions $u_t$ and $\tilde{u}_t$ gives $\mathrm{tr}(\D^{-1}T_{t})\in L^2(0,\T;L^2(\Omega))$. The properties of the operator $\D$ (we consider isotropic materials) imply
\begin{equation}
\D^{-1}T_{t}(x,t)T(x,t)= \D^{-1}\mathrm{tr}(T_{t}(x,t))\mathrm{tr}(T(x,t)) +\D^{-1}\dev (T_t(x,t))\dev (T(x,t))\,.
\label{424}
\end{equation}
Integrating \eqref{424} over $\Omega\times (0,t)$ for $t\in (0,\T]$ we get
\begin{equation}
\begin{split}
\int_0^{t}\int_{\Omega}\D^{-1}T_{t}T\,\di x\, \di \tau&= \int_0^{t}\int_{\Omega}\D^{-1}\mathrm{tr}(T_{t})\mathrm{tr}(T)\,\di x\, \di \tau +\int_0^{t}\int_{\Omega}\D^{-1}\dev (T_t)\dev (T)\,\di x\, \di\tau\\[1ex]
&=\int_0^{t}\frac{1}{2}\frac{\di}{\di \tau}\Big( \int_{\Omega}\D^{-1}\mathrm{tr}(T)\,\mathrm{tr}(T)+ \D^{-1}\dev (T)\,\dev (T)\,\di x\Big)\,\di\tau\\[1ex]
&= \int_0^{t}\frac{1}{2}\frac{\di}{\di \tau}\Big( \int_{\Omega}\D^{-1}T\,T\,\di x\Big)\,\di\tau\\[1ex]
&= \frac{1}{2}\int_{\Omega}\D^{-1}T(t)\,T(t)\,\di x-\frac{1}{2}\int_{\Omega}\D^{-1}T(0)\,T(0)\,\di x\,.
\end{split}
\label{425}
\end{equation}
On the other hand, using the formula \eqref{422} we obtain
\begin{equation}
\begin{split}
\frac{1}{2}\int_{\Omega}\D^{-1}T(t)\,T(t)\,\di x-\frac{1}{2}\int_{\Omega}\D^{-1}T(0)\,T(0)\,\di x&= -\int_0^{t}\int_{\Omega}\psi\,\dev(T)\,\di x\,\di\tau\\[1ex]
&\hspace{2ex}+\int_0^{t}\int_{\Omega}\big(\ve(u_{t})+\ve(\tilde{u}_t)\big)T\,\di x\,\di\tau\,.
\end{split}
\label{426}
\end{equation}
\end{col}
\noindent
The formula \eqref{426} is crucial to characterize the weak limit $\psi$.
\begin{tw}
\label{lmimsup1}
The following inequality holds for solutions of the approximate system
\begin{equation}
\label{limsupmain1}
\limsup_{k\rightarrow\infty} \int_{0}^{t}\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k)| \,\di x\, \di \tau\leq
\int_{0}^{t}\int_{\Omega}\psi\, \dev(T) \,\di x\,\di \tau
\end{equation}
for all $t\in (0,\T]$.
\end{tw}
\noindent
The formula \eqref{426} makes the proof of Theorem \ref{lmimsup1} almost identical to the proof of Theorem \ref{lmimsup}, thus we decided to omit it.
\begin{lem}
\label{lem:4.4}
The following characterisation holds\\[1ex]
$$\psi(x,t)=\big\{|\dev(T(x,t))|-\beta(\theta(x,t)+\tilde{\theta}(x,t))\big\}^{r}_{+}\frac{\dev(T(x,t))}{|\dev(T(x,t))|}$$
for almost all $(x,t)\in\Omega\times(0,\T)$.
\end{lem}
\begin{proof}
Let us introduce
\begin{equation}
\label{427a}
G(\theta, S):=\big\{|\dev(S)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}\frac{\dev(S)}{|\dev(S)|}
\end{equation}
for $\theta\in\R$ and $S\in\S$ (with a slight abuse of notation we reuse the letter $G$ from \eqref{operator}, now without the penalty term).
The monotonicity of the function $G(\theta,\cdot)$ implies
\begin{equation}
\label{428}
\int_0^{\T}\int_{\Omega}\big(G(\theta^k,\dev(T^k))-G(\theta^k,\dev(W))\big) \big(\dev(T^{k})-\dev(W)\big)\,\di x\,\di\tau\geq 0\\
\end{equation}
for all $W\in L^{r+1}(0,\T;L^{r+1}(\Omega;\S))$. Therefore
\begin{equation}
\label{429}
\begin{split}
\int_0^{\T}\int_{\Omega}&G(\theta^k,\dev(T^k))\dev(T^{k})\,\di x\,\di\tau -\int_0^{\T}\int_{\Omega}G(\theta^k,\dev(T^k)) \dev(W)\,\di x\,\di\tau\\[1ex]
& -\int_0^{\T}\int_{\Omega}G(\theta^k,\dev(W)) \big(\dev(T^{k})-\dev(W)\big)\,\di x\,\di\tau\geq 0\,.\\
\end{split}
\end{equation}
The pointwise convergence of the sequence $\{\theta^k\}_{k>0}$ yields that the sequence $G(\theta^k,\dev(W))$ converges pointwise to $G(\theta,\dev(W))$. Additionally, the sequence $\{G(\theta^k,\dev(W))\}_{k>0}$ is bounded in $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))$. Lebesgue's dominated convergence theorem implies that $G(\theta^k,\dev(W))\rightarrow G(\theta,\dev(W))$ in $L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\SS))$. This information is sufficient to pass to the limit in the last integral in \eqref{429}, thus taking the limit superior of \eqref{428} as $k\rightarrow\infty$ and using Theorem \ref{lmimsup1} we deduce
\begin{equation}
\label{430}
\int_0^{\T}\int_{\Omega}\big(\psi-G(\theta,\dev(W))\big) \big(\dev(T)-\dev(W)\big)\,\di x\,\di\tau\geq 0\,.
\end{equation}
Now the standard Minty--Browder trick finishes the proof; we sketch it for completeness.
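Namely, taking $W=T-sV$ in \eqref{430} with $s>0$ and arbitrary $V\in L^{r+1}(0,\T;L^{r+1}(\Omega;\S))$, dividing by $s$ and letting $s\rightarrow 0^{+}$ (using the continuity of $G(\theta,\cdot)$ together with the dominated convergence theorem) we arrive at
\begin{equation*}
\int_0^{\T}\int_{\Omega}\big(\psi-G(\theta,\dev(T))\big)\,\dev(V)\,\di x\,\di\tau\geq 0\,,
\end{equation*}
and since $V$ is arbitrary (we may replace $V$ by $-V$), it follows that $\psi=G(\theta,\dev(T))$ a.e. in $\Omega\times(0,\T)$, which is the claimed characterisation.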
\end{proof}
\begin{lem}
\label{lem:4.5}
The following formula holds
\begin{equation}
\begin{split}
&\lim_{k\rightarrow\infty}\,\int_{0}^{\T}\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T^k)| \,\di x\, \di \tau\\[1ex]
&\hspace{2ex}=\int_{0}^{\T}\int_{\Omega}\big\{|\dev(T)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}\,|\dev(T)| \,\di x\, \di \tau\,.
\end{split}
\end{equation}
\end{lem}
\begin{proof}
Using the monotonicity of $G(\theta,\cdot)$ defined in \eqref{427a} we have
\begin{equation}
\label{eq:432}
\begin{split}
0&\leq\int_0^{\T}\int_{\Omega}\big(G(\theta^k,\dev(T^k))-G(\theta^k,\dev(T))\big) \big(\dev(T^{k})-\dev(T)\big)\,\di x\,\di\tau\\[1ex]
&= \int_0^{\T}\int_{\Omega}G(\theta^k,\dev(T^k)) \big(\dev(T^{k})-\dev(T)\big)\,\di x\,\di\tau\\[1ex]
&\hspace{2ex}-\int_0^{\T}\int_{\Omega}G(\theta^k,\dev(T)) \big(\dev(T^{k})-\dev(T)\big)\,\di x\,\di\tau\,.
\end{split}
\end{equation}
From the proof of Lemma \ref{lem:4.4} we conclude that the last integral converges to zero as $k\rightarrow\infty$. Taking the limit superior of \eqref{eq:432} we obtain
\begin{equation}
\label{eq:4.33}
\begin{split}
0&\leq\limsup_{k\rightarrow\infty} \int_{0}^{\T}\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}\frac{\dev(T^k)}{|\dev(T^k)|}\big(\dev(T^{k})-\dev(T)\big) \,\di x\, \di \tau\\[1ex]
&=\limsup_{k\rightarrow\infty} \int_{0}^{\T}\int_{\Omega}\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^{k})|\,\di x\, \di \tau\\[1ex]
&\hspace{2ex}-\int_{0}^{\T}\int_{\Omega}\big\{|\dev(T)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}|\dev(T)|\,\di x\, \di \tau\leq 0\,,
\end{split}
\end{equation}
where the last inequality follows from \eqref{limsupmain1} and the proof is complete.
\end{proof}
\noindent
The above convergences allow us to pass to the limit in the system \eqref{AMain1} as $k\rightarrow \infty$. Let us assume that $\psi_1\in C_0^\infty([0,\T])$; then from \eqref{balancek} we have
\begin{equation}
\int_0^\T\int_{\Omega} \big(T^k - f\big(\TC_k(\tilde{\theta}+\theta^k )\big)\id\big) \psi_1(t) \varepsilon(w)\, \di x\,\di t + \int_0^\T\int_{\Omega}\D(\varepsilon(u^k_{t})) \,\psi_1(t)\varepsilon(w)\,\di x\,\di t =0
\label{balancek1}
\end{equation}
for all $w\in H^1_0(\Omega;\R^3)$. Using the convergences \eqref{weaklimk} and keeping in mind the auxiliary boundary value problems \eqref{war_brz_u} and \eqref{war_brz_t}, we conclude that
\begin{equation}
\int_0^\T\int_{\Omega} \big(T - f(\tilde{\theta}+\theta)\id\big) \ve(\psi)\, \di x\,\di t + \int_0^\T\int_{\Omega}\D(\varepsilon(u_{t}+\tilde{u}_t)) \,\varepsilon(\psi)\,\di x\,\di t =\int_0^\T\int_{\Omega} F\,\psi\,\di x\,\di t
\label{balance}
\end{equation}
for every test function $\psi\in C^\infty_0([0,\T];H^1_0(\Omega;\R^3))$. In particular, we also obtain \eqref{balancede}. The passage to the limit in the inelastic constitutive equation \eqref{flowrulek} has actually been done in \eqref{421} (see Lemma \ref{lem:4.4}). It is enough to note that since $T_t\in L^{\frac{r+1}{r}}(0,\T;L^{\frac{r+1}{r}}(\Omega;\S))$, we can take test functions from the space $L^{r+1}(0,\T;L^{r+1}(\Omega;\S))$. It remains to pass to the limit in the heat equation \eqref{temp3}. Let us suppose that $\psi_2\in C_0^\infty([0,\T])$; then \eqref{temp3} yields
\begin{equation}
\label{tempkoncowe}
\begin{split}
&\int_0^\T\int_{\Omega}\theta^k_{t}\, \psi_2(t)v\,\di x\,\di t + \int_0^\T\int_{\Omega}\nabla\theta^k\, \psi_2(t)\nabla v\, \di x\,\di t\\[1ex]
&\hspace{2ex}+\int_0^\T\int_{\Omega} f\big(\TC_k(\tilde{\theta}+ \theta^k )\big)\mathrm{div} (\tilde{u}_t + u^k_{t})\, \psi_2(t)v \,\di x\,\di t\\[1ex]
=& \int_0^\T\int_{\Omega} \TC_k \big(\big\{|\dev(T^k)|-\beta(\theta^k+\tilde{\theta})\big\}^{r}_{+}|\dev(T^k)| \big)\,\psi_2(t)v\, \di x\,\di t\,.
\end{split}
\end{equation}
Again \eqref{weaklimk}, Remark \ref{col:4.5} and Lemma \ref{lem:4.5} give us
\begin{equation}
\label{tempkoncowe1}
\begin{split}
&\int_0^\T\int_{\Omega}(\theta+\tilde{\theta})_{t}\, \phi\,\di x\,\di t + \int_0^\T\int_{\Omega}\nabla(\theta+\tilde{\theta})\, \nabla\phi\, \di x\,\di t\\[1ex] &\hspace{2ex}+\int_0^\T\int_{\Omega} f\big(\tilde{\theta}+ \theta \big)\mathrm{div} (\tilde{u}_t + u_{t})\, \phi \,\di x\,\di t\\[1ex]
&= \int_0^\T\int_{\Omega} \big\{|\dev(T)|-\beta(\theta+\tilde{\theta})\big\}^{r}_{+}|\dev(T)|\,\phi\, \di x\,\di t+\int_0^\T\int_{\Omega}g_{\theta}\,\phi\,\di S(x)\,\di t
\end{split}
\end{equation}
for all $\phi\in C^\infty([0,\T];C^\infty(\overline{\Omega}))$, where the function $\tilde{\theta}$ is the solution of \eqref{war_brz_t}. To complete the proof of the main result, we need to make sense of the initial condition for the temperature. The following lemma ensures that the initial condition for the temperature is satisfied in the standard sense.
\begin{lem}
\label{lem:46}
The sequence $\{\theta^k\}_{k>0}$ converges strongly to $\theta$ in $C([0,\T];L^1(\Omega))$.
\end{lem}
\begin{proof}
The idea of the proof is taken from Lemma 1 of \cite{Blanchard}. Fix $M>0$. Selecting the test function $v=\TC_M(\theta^{k}-\theta^{l})$ in the difference of the heat equations for $\theta^{k}$ and $\theta^{l}$ (see \eqref{temp3}) we get
\begin{equation}
\label{431}
\begin{split}
\int_{\Omega}&\varphi_M(\theta^{k}-\theta^{l})(t)\,\di x+ \int_0^{t}\int_{\Omega}|\nabla \TC_M(\theta^{k}-\theta^{l})|^2\,\di x\,\di\tau\\[1ex]
&=\int_0^{t}\int_{\Omega}\Big(G^{k}-G^{l}\Big)\TC_M(\theta^{k}-\theta^{l})\,\di x\,\di\tau+\int_{\Omega}\varphi_M(\theta^{k}-\theta^{l})(0)\,\di x\,,
\end{split}
\end{equation}
for almost every $t\leq \T$, where
\begin{equation}
\label{eq:4123}
G^i=f\big(\TC_{i}(\theta^{i}+\tilde{\theta})\big) \mathrm{div}\,u^{i}_t + \TC_i\Big(\big\{|\dev(T^i)|-\beta(\theta^i+\tilde{\theta})\big\}^{r}_{+}|\dev(T^i)|\Big)
\end{equation}
for $i=k,\,l$. Observe that $\TC_M(\theta^{k}-\theta^{l})\rightarrow 0$ for almost all $(x,\tau)\in\Omega\times(0,t)$ as $k,\,l\rightarrow \infty$ (recall that $\theta^{k}\rightarrow\theta$ a.e.). Additionally, $\TC_M(\theta^{k}-\theta^{l})$ is uniformly bounded. Egorov's theorem entails that for every $\nu>0$ there exists a measurable subset $A$ of $\Omega\times(0,t)$ such that $|A|<\nu$ and $\TC_M(\theta^{k}-\theta^{l})$ converges to $0$ uniformly on $\Omega\times(0,t)\setminus A$. Therefore,
\begin{equation}
\label{eq:4.32}
\begin{split}
\int_{\Omega}&\varphi_M(\theta^{k}-\theta^{l})(t)\,\di x
\leq \int_0^{t}\int_{\Omega}\Big(G^{k}-G^{l}\Big)\TC_M(\theta^{k}-\theta^{l})\,\di x\,\di\tau+\int_{\Omega}\varphi_M(\theta^{k}-\theta^{l})(0)\,\di x\\[1ex]
&=\int_{\Omega\times(0,t)\setminus A} \Big(G^{k}-G^{l}\Big)\TC_M(\theta^{k}-\theta^{l})\,\di x\,\di \tau\\[1ex]
&\hspace{2ex}+\int_{A}\Big(G^{k}-G^{l}\Big)\TC_M(\theta^{k}-\theta^{l})\di x\,\di \tau+ \int_{\Omega}\varphi_M(\theta^{k}-\theta^{l})(0)\,\di x\,.
\end{split}
\end{equation}
The sequence $\{G^{k}\}_{k>0}$ is bounded in $L^1(0,\T;L^1(\Omega))$ (see Remark \ref{col:4.5}), hence, by the uniform convergence of $\TC_M(\theta^{k}-\theta^{l})$ on $\Omega\times(0,t)\setminus A$, the first integral on the right-hand side of (\ref{eq:4.32}) tends to zero. Notice that $(\theta^{k}-\theta^{l})(0)=\TC_{k}(\theta_0)-\TC_{l}(\theta_0)\rightarrow 0$ in $L^1(\Omega)$ as $k,\,l\rightarrow \infty$. The functions $G^{k}$ and $G^{l}$ are uniformly integrable, thus the second integral on the right-hand side of (\ref{eq:4.32}) is arbitrarily small for small $|A|$. Moreover, for $r\in\R$
\begin{displaymath}
\varphi_M(r) \geq \left\{ \begin{array}{ll}
\frac{1}{2}|r|^2 & \textrm{if}\quad |r|\leq M\,,\\[1ex]
\frac{1}{2}M|r| & \textrm{if}\quad |r|>M\\
\end{array} \right.
\end{displaymath}
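Splitting $\Omega$ into the sets where $|\theta^{k}-\theta^{l}|\leq M$ and $|\theta^{k}-\theta^{l}|>M$, this lower bound yields, uniformly in $t$,
\begin{equation*}
\int_{\Omega}|\theta^{k}-\theta^{l}|(t)\,\di x\leq \Big(2|\Omega|\int_{\Omega}\varphi_M(\theta^{k}-\theta^{l})(t)\,\di x\Big)^{\frac{1}{2}}+\frac{2}{M}\int_{\Omega}\varphi_M(\theta^{k}-\theta^{l})(t)\,\di x\,,
\end{equation*}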
hence the sequence $\{\theta^{k}\}_{k>0}$ is a Cauchy sequence in the space $C([0,\T];L^1(\Omega))$, which completes the proof.
\end{proof}
\begin{remark}
\label{col:4.10}
From Lemma \ref{lem:46}, we know that the sequence $\theta^k(\cdot,0)$ converges to $\theta(\cdot,0)$ in $L^1(\Omega)$. It is not hard to observe that $\theta^k(\cdot,0)=\TC_k(\theta_0)\rightarrow \theta_0$ in $L^1(\Omega)$. Hence $\theta_0=\theta(\cdot,0)$ and the initial condition for the temperature is satisfied in the classical sense.
\end{remark}
\noindent
Remark \ref{col:4.10} completes the proof of the existence of solutions to the considered system \eqref{Main} with the boundary conditions \eqref{BC} and the initial conditions \eqref{IC}.
\bibliographystyle{plain}
\section{Introduction}
Anomaly detection represents a challenging problem, where the goal is to distinguish \textit{anomalous} data from data considered to be \textit{normal}\footnote{The term \textit{normal} describes data that conforms to some predefined characteristics and is, in general, application dependent.}~\cite{perera2021one}. Most recent solutions approach this problem from a \textit{one-class} classification perspective and attempt to learn detection models using normal training data only. Such an approach has led to successful deployment of anomaly detection techniques in a wide variety of application domains, where anomalous data is not readily available or is difficult to collect, including (visual) quality inspection \cite{racki2018compact,zavrtanik2020reconstruction}, surveillance and security \cite{Akcay2018GANomaly,Doshi_CVPR2020_ADV_ideo,Park_CVPR2020_MemAug, sabokrou2017deep}, information forensics \cite{Khalid_CVPRW2020_OC-FakeDect, Wang_ijcai2020_FakeSpotter}, biometrics \cite{fatemifar2019combining,oza2019active,yadav2020relativistic,perera2019learning} or medical imaging~\cite{Schlegl2017AnoGAN,Schlegl2019fAnoGAN} among others.
\begin{figure}[!t]
\begin{center}
\centering
\includegraphics[width=0.95\linewidth, trim = 12mm 0 0 0, clip]{teaser.pdf}\vspace{-1mm}
\end{center}
\caption{We propose Y-GAN, a novel anomaly detection model built around a Y-shaped auto-encoder network. The model disentangles semantically-relevant image information from irrelevant, residual characteristics and facilitates efficient anomaly detection based on selective image encoding.
As illustrated on the right, the removal of residual characteristics allows for an easier detection of the digit ``$1$'', considered anomalous in this illustrative example. \label{fig:teaser} \vspace{-3mm}}
\end{figure}
Contemporary research on one-class anomaly detection is dominated by reconstruction-based models and typically relies on powerful auto-encoders \cite{Gong_ICCV2019_MemAug_AE, Nguyen_ICCV2019_AD_AE} or generative adversarial networks (GANs) \cite{Schlegl2019fAnoGAN,P-Net_ECCV2020}. These models commonly learn some latent representation that can be used to reconstruct normal data samples with high-fidelity. Because no anomalous data is seen during training, the basic assumption here is that such (anomalous) samples will lead to poor reconstructions. As a result, differences in reconstruction quality are commonly exploited to differentiate between normal and anomalous data. Reconstructive approaches have been shown to perform well across a broad range of anomaly detection tasks and to provide competitive results across several popular benchmarks~\cite{Akcay2018GANomaly,Akcay2019Skip-GANomaly}.
However, as emphasized in~\cite{fei2020attribute}, the learning objectives typically utilized for learning reconstructive models predominantly focus on low-level pixel comparisons instead of image semantics intrinsic to the training data. This results in latent representations that encode low-level data characteristics that are likely to be shared between normal and anomalous data samples~\cite{dosovitskiy2016generating} instead of more discriminative higher-level semantics. Additionally, when data with rich visual characteristics and complex appearances is used for training, the likelihood of high-fidelity reconstructions of anomalous data increases as well, rendering reconstruction-based models less effective in such cases. This problem is further exacerbated by the high generalization capabilities of modern generative models, where high-quality reconstructions of anomalous samples can already be expected under more relaxed assumptions~\cite{fei2020attribute, Gong_ICCV2019_MemAug_AE}. The key challenge with these techniques is, therefore, to learn latent representations that encode important image semantics and are uninformative with respect to low-level visual characteristics commonly shared by normal and anomalous data.
Based on this insight, we propose in this paper a novel anomaly detection model, called Y-GAN, that aims to address the above mentioned challenges. As illustrated in Fig. \ref{fig:teaser}, Y-GAN is designed around a Y-shaped auto-encoder model that
encodes input images in two distinct latent representations. The first representation captures semantically meaningful image characteristics, useful for representing key properties of normal data, while the second encodes irrelevant, residual data characteristics. This dual encoding is enabled by an efficient disentanglement procedure that can be learnt automatically in a \textit{one-class learning} setting, i.e., without the use of anomalous data. To control the information content in the two latent representations, Y-GAN utilizes a latent classifier and trains it to discriminate between sub-classes/groups of normal data. In other words, it exploits differences within the normal data to learn meaningful data semantics that can later be used for anomaly detection. Additionally, a novel \textit{representation consistency} loss is introduced for the training procedure of Y-GAN that ensures that the encoded information in the dual latent representations is mutually exclusive. Using this approach, Y-GAN is able to learn highly descriptive data representations that facilitate efficient anomaly detection across a variety of problem settings.
The model is evaluated in extensive experiments on four anomaly detection benchmarks and compared with several state-of-the-art anomaly detection models presented recently in the literature. The results of the evaluations show that Y-GAN offers significant performance improvements over all considered competing models.
In summary, our key contributions in this paper are:
\begin{itemize}
\item We propose Y-GAN, a novel anomaly detection method that disentangles semantically-relevant image characteristics from residual information for efficient data representation and addresses some of the key challenges associated with reconstructive anomaly-detection models.
\item We introduce a novel disentanglement strategy that enforces representation consistency and allows Y-GAN to exclude uninformative image information from the anomaly detection task in a \textit{one-class} learning setting. We note that the same strategy is also applicable to other problem domains in need of efficient disentanglement.
\item We show the benefit of the proposed dual data representation over several state-of-the-art anomaly detectors by reporting superior results on multiple benchmarks and across different anomaly detection tasks.
\end{itemize}
\section{Background and Related Work}
A considerable amount of research has been conducted in the field of anomaly detection over the years. While early appro\-aches considered statistical models \cite{eskin2000anomaly,yamanishi2004line,xu2012robust}, one-class classifiers~\cite{scholkopf2001estimating,tax2004support,lanckriet2003robust} or sparse representations \cite{cong2011sparse,lu2013abnormal,zhao2011online} for this task, more recent solutions leverage advances in deep learning to learn powerful (one-class) anomaly detectors,
e.g., \cite{Schlegl2019fAnoGAN, abati2019latent, Ruff_2018_Deep_SVDD, Markovitz_2020_CVPR,Bergmann_2020_CVPR, Pang_2020_CVPR, Zaheer_CVPR2020_OGNet,Bergman_ICLR_2020_GOAD}. In this section, we present the most important background information with respect to
such one-class models to provide the necessary context for our work. For a more comprehensive coverage of the area of anomaly detection and a broader discussion of existing solutions, the reader is referred to some of the excellent surveys in this field, e.g., \cite{perera2021one,chalapathy2019deep,chandola2009anomaly, pang2021deep}.
\subsection{Reconstruction-Based Anomaly Detection}
Reconstruction-based models represent one of the most widely studied groups of anomaly detectors in the literature.
Such models try to discriminate between normal and anomalous data by evaluating reconstruction errors produced by generative networks trained exclusively on normal data. Schlegl \textit{et al.}~\cite{Schlegl2017AnoGAN}, for example, proposed to project probe samples into a GAN latent space learned in this manner in their AnoGAN model and generate reconstructions from the computed latent representation for scoring. While this approach relied on two separate steps (i.e., latent-space learning and reconstruction), later improvements, such as f-AnoGAN\cite{Schlegl2019fAnoGAN} or EGBAD~\cite{Zenati2018EGBAD}, demonstrated the benefits of learning the latent representation jointly with the reconstructive mapping. Akcay \textit{et al.}~\cite{Akcay2018GANomaly} further enhanced the capability of reconstruction-based models with an adversarial auto-encoder, called GANomaly. Different from previous work, the model derived an anomaly score by comparing latent representations of original and reconstructed images. In their follow up work, the same authors also introduced Skip-GANomaly~\cite{Akcay2019Skip-GANomaly} (a U-Net~\cite{Ronneberger_MICCAI2015_U-Net} based GANomaly extension) in an attempt to capture descriptive multi-scale information.
More recent work on reconstruction-based models capitalized on the importance of designing informative/discriminative latent spaces that can widen the gap between reconstruction errors observed with normal and anomalous data.
Perera \textit{et al.} \cite{Perera_2019_CVPR_OCGAN}, for example, designed a constrained latent space for their OCGAN model, such that only samples belonging to the class observed during training are reconstructed well, while anomalous samples are not. Zhao \textit{et al.}~\cite{P-Net_ECCV2020} split images into two distinct parts (i.e., texture and structure) in their P-Net model. The two parts were then encoded
separately with the goal of making the generated representations more informative for anomaly detection. Park \textit{et al.}~\cite{Park_CVPR2020_MemAug} integrated memory modules into their anomaly-detection approach to lessen the representation capacity of their reconstruction-based model for anomalous data and reported highly competitive detection results.
A conceptually similar solution was also presented in~\cite{Gong_ICCV2019_MemAug_AE}.
The Y-GAN model, proposed in this paper, follows the general idea of reconstruction-based methods, but unlike competing solutions encodes the input data in two distinct latent representations that allow for the separation of relevant information from information irrelevant for the anomaly detection task. As we show in the following sections, this separation of information
is: $(i)$ achieved without any assumptions regarding the source of relevant information (e.g., texture, color, structure, etc.), $(ii)$ is learned automatically in an end-to-end manner from normal data only, and $(iii)$ leads to significant performance improvements over existing reconstruction-based models on various anomaly detection tasks.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.85\textwidth]{model.pdf}\vspace{1.5mm}
\caption{Overview of the proposed Y-GAN model and training loss terms. The model consist of a Y-shaped auto-encoder with encoders, $E_{s}$ and $E_{r}$, for dual data representation, a decoder $D$ for image reconstruction, a latent classifier $C$ for data disentanglement, and an adversarial discriminator $Ds$, which ensures that data reconstructions $\hat{x}$ cannot be distinguished from the original input samples $x$.
Using an efficient disentanglement procedure, Y-GAN aims to learn a semantically meaningful data representation in $z_{s}$, that encodes only characteristics relevant for representing normal data, while capturing irrelevant, residual data characteristics in $z_{r}$.
The figure is best viewed in color.\label{fig:Model}\vspace{-1mm}}
\end{figure*}
\subsection{Anomaly Detection with Proxy Tasks}
To address some of the limitations of reconstruction-based anomaly detectors, another major group of existing models uses proxy tasks when learning to discriminate between normal and anomalous data.
The main idea behind solutions from this group is that models trained on normal data will fare badly in the considered proxy task when subjected to anomalous samples. Ye \textit{et al.}~\cite{fei2020attribute}, for instance, explored image restoration in this context and showed that differences in restoration performance can be used for anomaly detection. Noroozi \textit{et al.}~\cite{Noroozi_ECCV_2016_jigsaw} investigated jigsaw solving as a proxy task, the solutions from
\cite{zavrtanik2020reconstruction, Haselmann_ICMLA_2018_inpainting, Stain_AD_ICPR2020, zavrtanik_ICCV2021_draem} utilized image inpainting as the proxy for anomaly detection, and the work from \cite{Bergman_ICLR_2020_GOAD, NEURIPS2019_a2b15837, GidarisSK_ICLR_2018, Golan_NIPS2018_GeoTrans} investigated classification objectives defined over self-annotated recognition problems to facilitate anomaly detection. The proposed Y-GAN is related to these models in that it also relies on a proxy classifier, which, however, aims at distinguishing between different sub-groups of the normal data. By defining the proxy task as a classification problem over normal data, Y-GAN is able to: $(i)$ ensure a \textit{compact} representation of the normal data in the model's latent space, and $(ii)$ automatically learn semantically relevant information for the anomaly detection task. Both of these characteristics are beneficial for anomaly detection performance, as we demonstrate in the experimental section.
\subsection{Pretrained Models for Anomaly Detection}
In an effort to capture the most discriminative characteristics of the input data, many recent anomaly detection techniques try to
leverage the representational power of features extracted by pretrained (large-scale) classification models. Defard \textit{et al.}~\cite{PaDiM_ICPR2020} and Rippel \textit{et al.}~\cite{rippel2020modeling}, for example, have shown that such features can be successfully used for explicitly modelling the distribution of normal samples. Anomalies are in this case detected as out-of-distribution samples. Similar solutions based on shallower models have also been investigated in literature, e.g., \cite{SPADE_Cohen_2020, PatchCore_Roth_2021}. Reiss \textit{et al.}~\cite{Reiss_2021_CVPR_PANDA} proposed to fine-tune existing pretrained models using normal training data and showed that such an approach leads to impressive anomaly detection performance. Mai \textit{et al.} explored the use of knowledge distillation to limit the set of pretrained features considered when building anomaly detection models. Similar ideas have also been pursued in
\cite{Bergmann_2020_CVPR, wang2021student_teacher_pyramid, KD_Salehi_2020}. Wang \textit{et al.}~\cite{Wang_2021_CVPR} extended these studies by fine-tuning the distilled student network and further improved the detection rates on several anomaly detection benchmarks. In contrast to the outlined solutions, Y-GAN does not rely on pretrained models for data representation but instead learns discriminative encodings from scratch by separating uninformative data characteristics from meaningful data semantics relevant with respect to the normal training data. By doing so, it is able to selectively encode part of the characteristics that are relevant for discrimination without the need for large-scale datasets and resource hungry (pretrained) classification models.
\section{Methodology}\label{sec:methods}
\subsection{Proposed Model}
Y-GAN, illustrated in Fig.~\ref{fig:Model}, represents a generative adversarial network built around a Y-shaped auto-encoder.
The key idea behind Y-GAN is to split the latent space of the auto-encoder into two distinct parts by disentangling
informative \textit{image semantics} (e.g., shapes, appearances, textures), relevant with respect to some \textit{normal} training data, from uninformative, \textit{residual image information} (e.g., noise, background, illumination changes). This separation of image content is achieved through an efficient disentanglement procedure (facilitated by a latent classifier) and allows our model to learn highly descriptive data representations for anomaly detection even in the challenging one-class learning regime. Details on the individual components of Y-GAN are given below.
\textbf{The Auto-Encoder Network.} To be able to separate relevant image content from residual information,
we design a Y-shaped auto-encoder network and use it as the generator of Y-GAN. As illustrated in Fig.~\ref{fig:Model}, this Y-shaped network consists of two separate encoders and a single decoder. The two encoders are identical from an architectural point of view, but have distinct parameters that are learned independently of each other. The first encoder $E_s$ maps the input image $x\in\mathbb{R}^{w\times h}$ into a \textit{semantically-relevant} latent representation $z_s$ and the second $E_r$ maps $x$ into the dual, \textit{residual} representation $z_r$, i.e.:
\begin{equation}
z_{s} = E_{s}(x)\in\mathbb{R}^d \ \text{and} \ z_{r} = E_{r}(x)\in\mathbb{R}^d,
\end{equation}
where $d$ stands for the dimensionality of $z_{s}$ and $z_{r}$. The complete latent representation of $x$ is computed as a concatenation of the two partial representations, i.e., $z = z_{s}\oplus z_{r}$, and passed to the decoder $D$ for reconstruction, i.e.,
\begin{equation}
\hat{x}=D(z)\in\mathbb{R}^{w\times h}.
\end{equation}
To ensure that all of the image content in $x$ is captured by the concatenated representation $z$, we use a standard $L_1$ reconstruction loss $\mathcal{L}_{rec}$ when learning the parameters of the auto-encoder~\cite{Isola_CVPR2017_L1_loss}:
\begin{equation} \label{eq:reconstruction_loss}
\mathcal{L}_{rec} = \| x-\hat{x} \|_1 = \| x-D\big(E_{s}(x)\oplus E_{r}(x)\big) \|_1.
\end{equation}
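For illustration, a minimal PyTorch sketch of the Y-shaped auto-encoder and the reconstruction loss from Eq.~\eqref{eq:reconstruction_loss} is given below; the layer counts and channel widths are simplifying assumptions rather than the exact DCGAN-based configuration described in Sec.~\ref{SubSec: ImplDetails}.
\begin{verbatim}
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a (B, C, 32, 32) image to a d-dimensional latent vector."""
    def __init__(self, in_ch=1, d=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),         # 32 -> 16
            nn.LeakyReLU(0.2), nn.BatchNorm2d(64),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),           # 16 -> 8
            nn.LeakyReLU(0.2), nn.BatchNorm2d(128),
            nn.Conv2d(128, d, 8),                                 # 8 -> 1
        )

    def forward(self, x):
        return self.net(x).flatten(1)                             # (B, d)

class Decoder(nn.Module):
    """Reconstructs an image from the concatenated latent z = z_s (+) z_r."""
    def __init__(self, out_ch=1, d=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(2 * d, 128, 8),                    # 1 -> 8
            nn.ReLU(), nn.BatchNorm2d(128),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(), nn.BatchNorm2d(64),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),                                            # bounded output
        )

    def forward(self, z):
        return self.net(z[:, :, None, None])

E_s, E_r, D = Encoder(), Encoder(), Decoder()
x = torch.randn(8, 1, 32, 32)
z_s, z_r = E_s(x), E_r(x)                  # dual latent representations
x_hat = D(torch.cat([z_s, z_r], dim=1))    # decode z = z_s (+) z_r
loss_rec = nn.functional.l1_loss(x_hat, x)
\end{verbatim}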
Moreover, an adversarial loss term and additional learning objectives that control the information content in the latent representations $z_{s}$ and $z_{r}$ are also utilized during training. We discuss these in the following sections.
\textbf{The Discriminator.} The expressive power of the latent representations, $z_{s}$ and $z_{r}$, critically depends on the fidelity of the image reconstructions $\hat{x}$. To improve fidelity and ensure that the reconstructed samples follow the distribution of the normal training data, we include a discriminator $Ds$ in the training procedure of Y-GAN and use an additional adversarial loss $\mathcal{L}_{adv}$ when learning the model.
Following the recommendations from~\cite{Salimans_NIPS2016}, we update the weights of the auto-encoder, i.e., the generator of Y-GAN, based on the following feature-matching objective that reduces training instability and avoids GAN over-training, i.e.:
\begin{equation} \label{eq:adversarial_loss}
\mathcal{L}_{adv} = \| f(x)-f(\hat{x}) \|_2^2,
\end{equation}
where $f(\cdot)$, in our case, denotes the activations of the last convolutional layer of $Ds$. Conversely, we encourage the discriminator to distinguish between real and fake images by optimizing a standard binary cross-entropy loss:
\begin{equation} \label{eq:bce_loss}
\mathcal{L}_{bce} = -\big[\log(Ds(x)) + \log(1-Ds(\hat{x}))\big]
\end{equation}
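A hedged sketch of the two adversarial objectives in Eqs.~\eqref{eq:adversarial_loss} and \eqref{eq:bce_loss} is given below; the accessor \texttt{Ds.features}, exposing the activations of the last convolutional layer of $Ds$, is a hypothetical helper and not part of any standard API.
\begin{verbatim}
import torch
import torch.nn.functional as F

def generator_adv_loss(Ds, x, x_hat):
    """Feature matching: align discriminator statistics of real and
    reconstructed images; Ds.features is an assumed accessor."""
    return F.mse_loss(Ds.features(x_hat), Ds.features(x).detach())

def discriminator_loss(Ds, x, x_hat):
    """Standard binary cross-entropy; Ds outputs sigmoid probabilities."""
    real, fake = Ds(x), Ds(x_hat.detach())
    return (F.binary_cross_entropy(real, torch.ones_like(real))
            + F.binary_cross_entropy(fake, torch.zeros_like(fake)))
\end{verbatim}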
\textbf{The Latent Classifier.}
The Y-shaped design of Y-GAN's auto-encoder allows for the partitioning of the latent space into two representations, $z_{s}$ and $z_{r}$.
We force these representations
to encode mutually exclusive information by using a disentanglement procedure based on a latent classifier $C$.
The goal of this classifier is two-fold: $(i)$ to encourage semantic information relevant for representing normal data to be encoded in $z_{s}$ and $(ii)$ to force the irrelevant residual information into the latent representation $z_{r}$.
To be able to learn $C$, we assume that the normal training data can be partitioned into $N$ sub-classes\footnote{Note that this is a reasonable assumption for many applications and holds for a wide variety of datasets and experimental protocols from the literature~\cite{perera2021one,Ruff_deepAndShallowADReview, STL-10, Abnormal101, tang2019_PCB}, including the ones used in this paper.}. The classifier is then trained to predict correct class labels from the latent representations $z_s$ and to misclassify input samples $x$ given their latent representation $z_r$. This training procedure controls the information content in the dual latent representations and helps to learn meaningful data characteristics for anomaly detection without examples of anomalous data.
A cross-entropy loss is utilized to maximize the classification performance of $C$ based on the semantically-relevant latent representation $z_{s}$, i.e.:
\begin{equation} \label{eq:classifier_loss_zs}
\mathcal{L}_{s} = -\sum_{i=1}^{N} y(i)\,\log(\hat{y}_s(i)),
\end{equation}
where $y\in\mathbb{R}^N$ is the one-hot
encoded ground truth label
of $x$ and $\hat{y}_{s}=C(E_s(x))\in\mathbb{R}^N$ is the classifier prediction.
While minimizing $\mathcal{L}_{s}$ forces the latent representation $z_{s}$ to be informative with respect to the classification of the normal data, our goal is to achieve the opposite for $z_r$. To this end, we transform $z_r$ with a gradient reversal layer~\cite{ganin2015_gradRev}. This transformation layer $R$ acts as an identity function in the forward pass through the model, i.e., $R(z_{r})=z_{r}$, but reverses the gradients from the subsequent layer during back-propagation, i.e.,
\begin{equation}
\frac{dR}{dz_{r}}=-\lambda_RI,
\end{equation}
where $\lambda_R$ is a hyper-parameter and $I\in \mathbb{R}^{d\times d}$ is an identity matrix. The minimization of relevant information content in $z_{r}$ is then achieved through the following objective:
\begin{equation} \label{eq:classifier_loss_zres}
\mathcal{L}_{r} = -\sum_{i=1}^{N} y(i)\,\log(\hat{y}_r(i)),
\end{equation}
where $\hat{y}_{r}=C(R(E_r(x)))\in\mathbb{R}^N$.
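In code, the gradient reversal layer is conveniently realized as a custom autograd function; the sketch below follows the standard construction from~\cite{ganin2015_gradRev}.
\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda_R backward."""
    @staticmethod
    def forward(ctx, z, lambda_R):
        ctx.lambda_R = lambda_R
        return z.view_as(z)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_R * grad_output, None

def R(z_r, lambda_R=1.0):
    return GradReverse.apply(z_r, lambda_R)

# usage in the two classifier objectives (C is the latent classifier):
#   loss_s = F.cross_entropy(C(z_s), y)                # L_s
#   loss_r = F.cross_entropy(C(R(z_r, lambda_R)), y)   # L_r
\end{verbatim}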
\begin{figure}[!t]
\begin{center}
\centering
\includegraphics[width = \columnwidth]{consistency_loss.pdf}
\end{center}
\vspace{-3mm}
\caption{Illustration of the procedure used to enforce \textit{representation consistency}. The input image is represented in two latent spaces that need to encode mutually exclusive information. Y-GAN ensures that the representation in $z_s$ is independent of that in $z_r$ by encouraging the model to produce the same $z_s$ even when changes in $z_r$ are introduced. Shown is an illustrative toy example involving digit shapes (assumed to be relevant) and background textures (assumed to be irrelevant). \label{fig:Consistency} \vspace{-2mm}}
\end{figure}
\textbf{Enforcing Representation Consistency.} The loss functions in Eqs.~\eqref{eq:classifier_loss_zs}~and~\eqref{eq:classifier_loss_zres} provide for a first level of disentanglement, but do not completely prevent information leakage between the latent representations $z_{s}$ and $z_{r}$.
For this purpose, we introduce a novel \textit{consistency loss} ($\mathcal{L}_{con}$) that penalizes the encoder $E_{s}$ in case it extracts inconsistent (semantically-relevant) information in $z_s$, when changes in the residual representation $z_r$ are introduced. The overall idea of this procedure is illustrated on the right part of Fig.~\ref{fig:Model} and using a simple visual example in Fig.~\ref{fig:Consistency}.
To calculate $\mathcal{L}_{con}$, we first randomly shuffle the set of residual representations $z_{r}$ generated from the samples in a given training batch, so that each $z_{s}$ vector is concatenated with a $z^\prime_{r}$ vector belonging to a randomly chosen sample from the batch. These artificially created concatenations are then passed to the decoder $D$, which generates hybrid reconstructions $\hat{x}^\prime$, such that $\hat{x}^\prime = D(z_{s}\oplus z^\prime_{r})$. Next, the reconstructed images are fed to the encoder $E_{s}$, which is expected to extract latent representations $\hat{z}^\prime_{s}$ that are equivalent to the vectors $z_{s}$ initially used for the generation of the hybrid reconstructions $\hat{x}^\prime$. An angular dissimilarity measure is used to penalize differences between $z_{s}$ and $\hat{z}^\prime_{s}$,
i.e.:
\begin{equation} \label{eq:disentanglement_loss}
\mathcal{L}_{con} = -\frac{z_{s}\cdot \hat{z}^\prime_{s}}{\| z_{s}\| \cdot \| \hat{z}^\prime_{s} \|}.
\end{equation}
By minimizing $\mathcal{L}_{con}$, we encourage the encoder $E_{s}$ to extract image information that is invariant to the residual data characteristics encoded in $z_r$. Note that $\mathcal{L}_{con}$ is calculated and optimized for each batch in the training set separately. The size of the training batches, therefore, has to be greater than or equal to two, otherwise $\mathcal{L}_{con}$ has no effect. It is also worth noting that $\mathcal{L}_{con}$ does not control what is encoded in the latent representations $z_{s}$ and $z_{r}$, but only ensures that the information in the two representations is mutually exclusive, or in other words, that the representations are properly disentangled.
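A possible implementation of the batch-wise shuffling and the consistency loss of Eq.~\eqref{eq:disentanglement_loss} is sketched below, reusing the encoder and decoder sketches from above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def consistency_loss(E_s, D, z_s, z_r):
    """Shuffle the residual codes within the batch, decode the hybrids,
    re-encode them with E_s and penalize the angular deviation of the
    re-extracted codes from the original z_s."""
    perm = torch.randperm(z_r.size(0), device=z_r.device)  # batch size >= 2
    x_hybrid = D(torch.cat([z_s, z_r[perm]], dim=1))
    z_s_hat = E_s(x_hybrid)
    return -F.cosine_similarity(z_s, z_s_hat, dim=1).mean()
\end{verbatim}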
\subsection{Y-GAN Training}
Y-GAN is trained in an end-to-end fashion, using a multi-step procedure. For each training batch, the losses from Eqs.~(\ref{eq:reconstruction_loss}) to (\ref{eq:classifier_loss_zs}) are calculated first. Next, the set of residual representations $z_{r}$ in the given batch is randomly shuffled and processed, as described above
for the calculation of the consistency loss $\mathcal{L}_{con}$ in Eq.~\eqref{eq:disentanglement_loss}. Finally, the weights of the generator (i.e., the Y-shaped auto-encoder) are updated based on the combined objective $\mathcal{L_G}$, i.e.:
\begin{equation}
\mathcal{L_G}=\lambda_{1}\mathcal{L}_{rec}+\lambda_{2}\mathcal{L}_{adv}+\lambda_{3}\mathcal{L}_{s}+\lambda_{4}\mathcal{L}_{r}+\lambda_{5}\mathcal{L}_{con}.
\label{Eq:g_loss}
\end{equation}
Similarly, the weights of the adversarial discriminator $Ds$ are updated based on the combined loss $\mathcal{L_D}$, i.e.:
\begin{equation}
\mathcal{L_D}=\lambda_{1}\mathcal{L}_{rec}+\lambda_{6}\mathcal{L}_{bce}
\label{Eq:d_loss}
\end{equation}
The generator and discriminator are updated alternately for a fixed number of epochs during training.
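The following sketch summarizes one such alternating update, combining Eqs.~\eqref{Eq:g_loss} and \eqref{Eq:d_loss} and reusing the helper functions sketched above; the optimizer setup and the weight dictionary \texttt{lam} are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(x, y, models, opts, lam, lambda_R):
    """One alternating update over a batch (x, y); lam maps the index i of
    lambda_i to its value, opts["G"] is assumed to cover the auto-encoder
    and the latent classifier, opts["D"] the discriminator."""
    E_s, E_r, D, Ds, C = models
    z_s, z_r = E_s(x), E_r(x)
    x_hat = D(torch.cat([z_s, z_r], dim=1))

    # generator update: combined objective L_G
    loss_G = (lam[1] * F.l1_loss(x_hat, x)
              + lam[2] * generator_adv_loss(Ds, x, x_hat)
              + lam[3] * F.cross_entropy(C(z_s), y)
              + lam[4] * F.cross_entropy(C(R(z_r, lambda_R)), y)
              + lam[5] * consistency_loss(E_s, D, z_s, z_r))
    opts["G"].zero_grad()
    loss_G.backward()
    opts["G"].step()

    # discriminator update: L_rec carries no gradient w.r.t. Ds and is
    # kept only to mirror the combined objective L_D
    with torch.no_grad():
        x_hat = D(torch.cat([E_s(x), E_r(x)], dim=1))
    loss_D = (lam[1] * F.l1_loss(x_hat, x)
              + lam[6] * discriminator_loss(Ds, x, x_hat))
    opts["D"].zero_grad()
    loss_D.backward()
    opts["D"].step()
    return loss_G.item(), loss_D.item()
\end{verbatim}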
\begin{figure}[!t]
\centering
\begin{minipage}[b]{\columnwidth}
\centering
\includegraphics[width=0.99\linewidth]{MNIST.pdf}
{\footnotesize (a) MNIST: Examples of the 10 digit categories in the dataset.}
\end{minipage}
\centering
\begin{minipage}[b]{\columnwidth}
\centering
\vfill \vspace{3mm}
\includegraphics[width=0.99\linewidth]{FMNIST.pdf}
{\footnotesize (b) FMNIST: Examples of the 10 fashion-item categories in the dataset.}
\end{minipage}
\centering
\begin{minipage}[b]{\columnwidth}
\centering
\vfill \vspace{3mm}
\includegraphics[width=0.99\linewidth]{CIFAR10.pdf}
{\footnotesize (c) CIFAR10: Examples of the 10 object categories in the dataset.}
\vspace{-1.5mm}
\end{minipage}
\caption{Selected samples from the three standard anomaly detection benchmarks used in the experiments: (a) MNIST~\cite{MNIST_paper_lecun1998gradient, MNIST}, (b) FMNIST~\cite{FMNIST} and (c) CIFAR10~\cite{CIFAR10}. Each dataset consists of 10 different object classes.\label{fig:k_classes_out}}
\end{figure}
\subsection{Anomaly Detection with Y-GAN}\label{subsec:evaluation}
Similarly to~\cite{Golan_NIPS2018_GeoTrans}, the predictions of the latent classifier $C$ are used to calculate anomaly scores. Given a probe sample $x$, the latent representation $z_{s}=E_{s}(x)$ is first computed and passed to the classifier $C$.
Next, the activations of the output layer of the classifier are normalized to $\{p_i(x)\}_{i=1}^N$ so that they behave like probabilities, i.e., $\sum_{i=1}^Np_i(x)=1$. Finally, the highest of these ``probabilities'' is used to compute the anomaly score $s$, i.e.:
\begin{equation} \label{eq: our_score}
s = 1 - \max_{i}\, p_i(x),
\end{equation}
for the given test sample $x$.
Due to the normalization procedure, the generated anomaly scores are bounded to $s\in[0,1]$, with $0$ representing \textit{ideal normal} data.
Note that for the calculation of the anomaly scores only $E_{s}$ and $C$ are needed, which significantly shortens inference time.
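The scoring step amounts to a few lines; a softmax normalization of the classifier outputs is assumed here, since the text above only requires the outputs to sum to one.
\begin{verbatim}
import torch

@torch.no_grad()
def anomaly_score(E_s, C, x):
    """Only E_s and C are needed at test time."""
    probs = torch.softmax(C(E_s(x)), dim=1)  # assumed normalization
    return 1.0 - probs.max(dim=1).values     # s in [0, 1]; 0 = ideal normal
\end{verbatim}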
\section{Experimental Datasets and Setup}
\begin{figure}[!t]
\begin{center}
\centering
\includegraphics[width=0.99\linewidth]{PlantVillage.pdf}\vspace{-2mm}
\end{center}
\caption{Selected samples from the real-world PlantVillage dataset~\cite{PlantVillage_Hughes}. The dataset consists of healthy and diseased leaves of $14$ different plant species. Anomalies usually manifest as changes in the leaf shape and color, or appear as irregular patterns on the leaf's surface. Anomalous samples are marked in red.\label{fig:plantVillage}}\vspace{-2mm}
\end{figure}
\subsection{Datasets}
We evaluate Y-GAN on three standard anomaly detection benchmarks, i.e., MNIST \cite{MNIST_paper_lecun1998gradient, MNIST}, FMNIST \cite{FMNIST}, and CIFAR10 \cite{CIFAR10}, as well as on the real-world PlantVillage dataset from~\cite{PlantVillage_Hughes}. A few example images from the four datasets are presented in Figs.~\ref{fig:k_classes_out} and \ref{fig:plantVillage}. We provide details on the selected datasets below.
\begin{itemize}
\item \textbf{MNIST} contains $70,000$ grayscale images of handwritten digits, divided into $10$ (approximately balanced) classes, where each class represents one digit ($0$ through $9$). The images ship with a resolution of $28\times 28$ pixels and exhibit variations in terms of digit appearance \cite{MNIST_paper_lecun1998gradient, MNIST}.
\item \textbf{FMNIST} (Fashion MNIST) was developed as a more comprehensive alternative to MNIST. The dataset again consists of $70,000$ grayscale images of size $28 \times 28$ pixels split into $10$ balanced classes. However, FMNIST exhibits a larger degree of appearance variability than MNIST. Images in FMNIST depict clothing items grouped into different categories, i.e., T-shirt, trousers, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot \cite{FMNIST}.
\item \textbf{CIFAR10} contains $60,000$ color images of size $32\times32$ pixels representing animals and vehicles from $10$ categories, i.e., airplane, car, bird, cat, deer, dog, frog, horse, ship, truck.
Different from MNIST or FMNIST, images in CIFAR10 do not have a uniform background and exhibit considerable diversity in terms of appearance even within the same category. These characteristics make it particularly challenging for anomaly detection tasks \cite{CIFAR10}.
\item \textbf{PlantVillage} is a recent real-world dataset of leaves and contains $54,305$ color images of size $256\times256$ pixels. Each image shows a single leaf photographed on a homogeneous background. The images are divided into $14$ unbalanced categories of different plant species, $9$ of which contain both healthy and diseased leaves. $3$ categories contain only healthy samples, while the remaining $2$ categories have no disease-free leaves. Plant diseases usually manifest as changes in the shape and the color of the leaf, but they can also appear as a subtle pattern that covers the leaf base, as shown in Fig.~\ref{fig:plantVillage}.
\end{itemize}
The selected datasets allow for the evaluation of anomaly detection models in different problem settings, i.e.: $(i)$ with anomalies representing homogeneous classes in MNIST, FMNIST and CIFAR10, and $(ii)$ with challenging and diverse real-world anomalies in the PlantVillage dataset.
As we show in the results section, Y-GAN achieves state-of-the-art performance for both types of problems.
\subsection{Experimental Setup}\label{SubSec: setup}
All models evaluated in the experimental section
are trained in a {\em one-class learning} setting, where no examples of anomalous data are seen during training.
Experiments on the MNIST, FMNIST and CIFAR10 datasets are conducted in accordance with the standard {\em k-classes-out} experimental setup~\cite{Akcay2018GANomaly,Zenati2018EGBAD,Ruff_deepAndShallowADReview}, where nine classes are defined as normal, while the tenth class is considered anomalous. We implement the standard $80/20$ split rule commonly used in the literature~\cite{Perera_2019_CVPR_OCGAN}, where $80\%$ of the normal data is randomly selected for training, while the rest is combined with anomalous samples, forming a balanced testing set. All experiments are repeated ten times, each time with a new class defined as anomalous. Images from CIFAR10 are used at their original resolution, while MNIST and FMNIST images are re-scaled to $32\times32$ pixels to fit Y-GAN's architecture.
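For concreteness, the index split underlying this protocol may be sketched as follows; exactly how the test set is balanced (here: subsampling to the smaller of the two groups) is our assumption.
\begin{verbatim}
import numpy as np

def k_classes_out_split(labels, anomalous_class, seed=0):
    """80% of the normal data for training; a balanced test set built
    from the remaining normal data and the anomalous class."""
    rng = np.random.default_rng(seed)
    normal = rng.permutation(np.flatnonzero(labels != anomalous_class))
    anomalous = np.flatnonzero(labels == anomalous_class)
    n_train = int(0.8 * len(normal))
    n_test = min(len(normal) - n_train, len(anomalous))
    train_idx = normal[:n_train]
    test_idx = np.concatenate([normal[n_train:n_train + n_test],
                               rng.choice(anomalous, n_test, replace=False)])
    test_y = np.concatenate([np.zeros(n_test), np.ones(n_test)])  # 1 = anomalous
    return train_idx, test_idx, test_y
\end{verbatim}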
For the experiments with the PlantVillage dataset, we again follow the $80/20$ split rule, randomly selecting $80\%$ of the normal data for training purposes. The rest of the normal data, along with anomalous samples, is used for evaluation. However, because the number of training samples in PlantVillage is relatively small and not sufficient for deep learning tasks, the training data is augmented. For pixel-level augmentations we use techniques such as image sharpening, embossing, histogram manipulations and random changes of brightness and contrast.
Additionally, we also apply horizontal flipping and random affine transformations. By carefully adjusting parameter values of the augmentation operations we ensure that the original dataset is significantly enlarged without inducing anomaly-like samples.
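An illustrative pipeline with comparable operations from a recent torchvision release is sketched below; the parameter values are chosen for illustration only (the exact settings are not listed above), and embossing has no direct torchvision counterpart, so it is omitted.
\begin{verbatim}
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=15, translate=(0.05, 0.05),
                            scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.3),
    transforms.RandomEqualize(p=0.2),  # histogram manipulation
    transforms.ToTensor(),
])
\end{verbatim}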
Following prior
work~\cite{Akcay2018GANomaly,Schlegl2017AnoGAN, Zaheer_CVPR2020_OGNet}, we report the Area Under the Receiver Operating Characteristic (ROC) curve (AUC) as a scalar performance score in the experiments. In PlantVillage, we also report true positive (TPR) and true negative rates (TNR) for each class in the dataset. Both metrics are calculated at the equal error rate (EER) point of the ROC curve generated for all training samples from the dataset.
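Both scores can be computed with standard scikit-learn tooling, e.g.:
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

scores = np.array([0.1, 0.9, 0.2, 0.8, 0.4, 0.7])  # anomaly scores
y_true = np.array([0, 1, 0, 1, 0, 1])              # 1 = anomalous

auc = roc_auc_score(y_true, scores)

# TPR / TNR at the equal error rate (EER) point of the ROC curve
fpr, tpr, _ = roc_curve(y_true, scores)
eer_idx = np.argmin(np.abs(fpr - (1.0 - tpr)))     # where FPR ~ FNR
tpr_eer, tnr_eer = tpr[eer_idx], 1.0 - fpr[eer_idx]
\end{verbatim}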
\begin{figure}[!t]
\centering
\begin{minipage}[b]{0.58\columnwidth}
\includegraphics[width=0.99\linewidth]{disentanglement.pdf}
\centering
{\small (a) Disentanglement analysis }
\end{minipage}
\hfill
\hspace{0.1mm}
\begin{minipage}[b]{0.01\textwidth}
\tikz{\draw[-,gray, densely dashed, thick] (0.1,0) -- (0.1,5.7);}
\end{minipage}
\vspace{2mm}
\begin{minipage}[b]{0.36\columnwidth}
\includegraphics[width=0.95\linewidth]{tsne.pdf}
\centering
{\small (b) $t$-SNE plots }
\end{minipage}
\caption{Proof-of-concept study. Y-GAN is trained on normal data (digits $1$ to $9$) with the goal of flagging anomalous data (digit $0$). (a) Disentanglement results: the model learns to separate digits from background in the latent spaces $z_{s}$ and $z_{r}$. Shown are examples of hybrid reconstructions, where $z_{s}$ is taken from the examples on the top and $z_{r}$ from the samples on the left. (b) $t$-SNE plots in 2D: normal data forms compact well-separated clusters in the semantically-relevant latent space (marked $z_{s}$) and overlaps considerably in the residual latent space (marked $z_{r}$). }
\label{fig:disentanglement}
\end{figure}
\begin{figure}[!t]
\begin{center}
\centering
\includegraphics[width=0.48\columnwidth,trim = 0mm 25mm 0 0, clip]{disentanglement_CIFAR10.pdf}\hspace{2mm}
\includegraphics[width=0.48\columnwidth,trim = 0mm 0 0 25mm, clip]{disentanglement_CIFAR10.pdf}\hspace{-2mm}
\end{center}
\caption{Disentanglement results on samples from two different CIFAR10 categories. The synthesized samples show that the model learns to successfully disentangle semantically relevant from residual image characteristics. Each object's shape is inferred from the source image of the semantically-relevant vector $z_s$ (first row), while color and style are inferred from the source images of $z_r$ (left most image in each example). Best viewed electronically.\label{fig:CIFAR10_disentanglement}}
\end{figure}
\subsection{Implementation Details}\label{SubSec: ImplDetails}
\textbf{Model Architecture.} Y-GAN consists of five architectural building blocks, i.e., two encoders, $E_{s}$ and $E_{r}$, a decoder, $D$, a discriminator, $Ds$, and a classifier, $C$. The
first four components are designed after
DCGAN~\cite{Chintala_ICLR2016_DCGAN}, whereas
the topology of the classifier is determined experimentally.
The two encoders, $E_{s}$ and $E_{r}$, consist of convolutional layers with stride $2$. Each convolutional layer is followed by a Leaky ReLU activation with negative slope $0.2$ and a batch normalization layer. The two encoders have an identical architecture and each map the input image $x$ to a $d=100$ dimensional latent vector.
The upscaling in the decoder $D$ is performed with transposed convolutions with stride $2$, each followed by ReLU activations and a batch normalization layer. The last convolutional layer uses a {\em tanh} activation function for bounded support.
The discriminator $Ds$ has the same architecture as $E_{s}$ and $E_{r}$ up until the last convolutional layer, which is followed by a standard \textit{sigmoid} activation.
The latent classifier $C$ is a multi-layer perceptron (MLP) with one hidden layer and $30$ hidden units. The size of the input layer is determined by the dimensionality of the latent representation $z_{s}$, and the size of the output layer by the number of classes $N$ of the (normal) training data. In our case, $N=9$ for the experiments on MNIST, FMNIST and CIFAR10, since there are $9$ normal sub-classes defined by our experimental protocol. For PlantVillage, $N$ is set to $N=12$, which is the number of non-anomalous plant categories in the dataset\footnote{Recall that $2$ out of the total of $14$ classes have only anomalous samples and are not considered during training.}.
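In code, the latent classifier amounts to a small MLP; the hidden activation is not specified above, so ReLU is an assumption.
\begin{verbatim}
import torch.nn as nn

def make_classifier(d=100, N=9):
    """One hidden layer with 30 units, N output classes."""
    return nn.Sequential(nn.Linear(d, 30), nn.ReLU(), nn.Linear(30, N))
\end{verbatim}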
\begin{table*}[!t]
\centering
\resizebox{0.83\textwidth}{!}{%
\begin{tabular}{l| l | c c c c c c c c c c | c}
\hline\hline
Model & Type$^\dagger$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & Mean $\pm$ Std \\
\hline
\rowcolor{claY} GANomaly~\cite{Akcay2018GANomaly} & RB & $0.899$ & $0.701$ & $0.954$ & $0.820$ & $0.829$ & $0.891$ & $0.875$ & $0.735$ & $0.926$ & $0.667$ & $0.830\pm 0.094$\\
\rowcolor{claY} Skip-GANomaly~\cite{Akcay2019Skip-GANomaly} & RB& $0.845$ & $0.919$ & $0.754$ & $0.734$ & $0.530$ & $0.573$ & $0.761$ & $0.532$ & $0.765$ & $0.637$ & $0.705\pm 0.127$\\
\rowcolor{claY} OCGAN~\cite{Perera_2019_CVPR_OCGAN} & RB& {\color{red} $\mathbf{0.958}$} & {\color{red} $\mathbf{0.934}$} & {\color{red} $\mathbf{0.959}$} & {\color{red} $\mathbf{0.969}$} & {\color{red} $\mathbf{0.929}$} & {\color{red} $\mathbf{0.920}$} & {\color{red} $\mathbf{0.904}$} & $0.775$ & {\color{red} $\mathbf{0.970}$} & {\color{red} $\mathbf{0.732}$} & {\color{red} $\mathbf{0.905}\pm \mathbf{0.079} $}\\
\rowcolor{claY} f-AnoGAN~\cite{Schlegl2019fAnoGAN} & RB& $0.880$ & $0.983$ & $0.954$ & {\color{red} $\mathbf{0.969}$} & $0.928$ & $0.896$ & $0.892$ & {\color{red} $\mathbf{0.782}$} & $0.949$ & $0.702$ & $0.894\pm 0.084$\\
P-Net~\cite{P-Net_ECCV2020} & RB & $0.788$ & $0.608$ & $0.678$ & $0.553$ & $0.528$ & $0.467$ & $0.612$ & $0.526$ & $0.618$ & $0.474$ & $0.585\pm 0.093$\\
\rowcolor{claB} ARNet~\cite{fei2020attribute} & PT& $0.879$ & $0.798$ & $0.880$ & $0.752$ & $0.767$ & $0.816$ & $0.940$ & $0.636$ & $0.811$ & $0.685$ & $0.796\pm 0.087$\\
\rowcolor{claB} Patch SVDD~\cite{Patch_SVDD_2020_ACCV} & PT& $0.774$ & $0.709$ & $0.849$ & $0.646$ & $0.570$ & $0.656$ & $0.730$ & $0.515$ & $0.573$ & $0.609$ & $0.663\pm 0.098$\\
\rowcolor{claC} PaDiM~\cite{PaDiM_ICPR2020} & PC & $0.551$ & $0.670$ & $0.828$ & $0.647$ & $0.608$ & $0.690$ & $0.820$ & $0.779$ & $0.610$ & $0.561$ & $0.676\pm 0.097$ \\ \hline
Y-GAN [Ours] & RB* & {\color{blue} $\mathbf{0.993}$} & {\color{blue} $\mathbf{0.993}$} & {\color{blue} $\mathbf{0.984}$} & {\color{blue} $\mathbf{0.989}$} & {\color{blue} $\mathbf{0.984}$} & {\color{blue} $\mathbf{0.986}$} & {\color{blue} $\mathbf{0.985}$} & {\color{blue} $\mathbf{0.980}$} & {\color{blue} $\mathbf{0.988}$} & {\color{blue} $\mathbf{0.987}$} & {\color{blue} $\mathbf{0.987}\pm \mathbf{0.004}$}\\
\hline\hline
\multicolumn{13}{l}{$^{\dagger}$\small \colorbox{claY}{RB - reconstruction based,} \colorbox{claB}{PT - proxy task based,} \colorbox{claC}{PC - utilizing pretrained classification models}; RB* - reconstruction based but with latent proxy task}
\end{tabular}
}
\caption{MNIST results in terms of AUC scores. The best model in each column is marked blue, the runner-up red.\label{tab:MNIST_results}}
\end{table*}
\begin{table*}[!t]
\centering
\resizebox{0.88\textwidth}{!}{%
\begin{tabular}{l | l |c c c c c c c c c c |c}
\hline\hline
Model & Type$^\dagger$ & T-shirt & Trousers & Pullover & Dress & Coat & Sandal & Shirt & Sneaker & Bag & Ankle boot & Mean $\pm$ Std \\
\hline
\rowcolor{claY} GANomaly~\cite{Akcay2018GANomaly} & RB& $0.607$ & {\color{red} $\mathbf{0.936}$} & $0.600$ & $0.767$ & $0.678$ & {\color{blue} $\mathbf{0.973}$} & $0.533$ & {\color{red} $\mathbf{0.895}$} & {\color{red} $\mathbf{0.961}$} & $0.867$ & $0.782\pm 0.158$\\
\rowcolor{claY} Skip-GANomaly~\cite{Akcay2019Skip-GANomaly} & RB& $0.615$ & $0.639$ & $0.698$ & $0.523$ & $0.632$ & $0.816$ & $0.612$ & $0.683$ & $0.780$ & $0.722$ & $0.672 \pm 0.082$\\
\rowcolor{claY} OCGAN~\cite{Perera_2019_CVPR_OCGAN} & RB& $0.681$ & $0.912$ & $0.643$ & $0.680$ & $0.615$ & $0.942$ & $0.563$ & $0.679$ & $0.959$ & $0.844$ & $0.752 \pm 0.140$\\
\rowcolor{claY} f-AnoGAN~\cite{Schlegl2019fAnoGAN}& RB & $0.642$ & $0.875$ & {\color{red} $\mathbf{0.758}$} & $0.656$ & $0.671$ & $0.889$ & {\color{red} $\mathbf{0.690}$} & $0.852$ & $0.945$ & {\color{red} $\mathbf{0.911}$} & $0.789\pm 0.112$\\
\rowcolor{claY} P-Net~\cite{P-Net_ECCV2020} & RB& $0.590$ & $0.586$ & $0.566$ & $0.564$ & $0.694$ & $0.340$ & $0.473$ & $0.542$ & $0.720$ & $0.714$ & $0.579 \pm 0.110$\\
\rowcolor{claB} ARNet~\cite{fei2020attribute} & PT& $0.647$ & {\color{blue} $\mathbf{0.966}$} & $0.749$ & {\color{red} $\mathbf{0.773}$} & {\color{red} $\mathbf{0.740}$} & $0.877$ & $0.687$ & $0.751$ & $0.993$ & $0.896$ & {\color{red} $\mathbf{0.808}\pm \mathbf{0.112}$}\\
\rowcolor{claB} Patch SVDD~\cite{Patch_SVDD_2020_ACCV} & PT & \color{red} $\mathbf{0.742}$ & $0.572$ & $0.661$ & $0.597$ & $0.536$ & $0.727$ & $0.674$ & $0.577$ & $0.876$ & $0.763$ & $0.673 \pm 0.101$\\
\rowcolor{claC} PaDiM~\cite{PaDiM_ICPR2020} & PC & $0.729$ & $0.676$ & $0.475$ & $0.642$ & $0.497$ & $0.821$ & $0.502$ & $0.615$ & $0.915$ & $0.794$ & $0.667\pm 0.142$ \\ \hline
Y-GAN [Ours] & RB* & {\color{blue} $\mathbf{0.912}$} & $0.915$ & {\color{blue} $\mathbf{0.904}$} & {\color{blue} $\mathbf{0.949}$} & {\color{blue} $\mathbf{0.880}$} & {\color{red} $\mathbf{0.957}$} & {\color{blue} $\mathbf{0.877}$} & {\color{blue} $\mathbf{0.903}$} & {\color{blue} $\mathbf{0.982}$} & {\color{blue} $\mathbf{0.975}$} & {\color{blue} $\mathbf{0.925}\pm \mathbf{0.036}$}\\
\hline\hline
\multicolumn{13}{l}{$^{\dagger}$\small\colorbox{claY}{RB - reconstruction based,} \colorbox{claB}{PT - proxy task based,} \colorbox{claC}{PC - utilizing pretrained classification models}; RB* - reconstruction based but with latent proxy task}
\end{tabular}}
\caption{FMNIST results in terms of AUC scores. The best model in each column is marked blue, the runner-up red.\label{tab: FMNIST_results}}
\end{table*}
\begin{table*}[!t]
\centering
\resizebox{0.86\textwidth}{!}{%
\begin{tabular}{l | l |c c c c c c c c c c |c}
\hline\hline
Model & Type$^\dagger$& Airplane & Car & Bird & Cat & Deer & Dog & Frog & Horse & Ship & Truck & Mean $\pm$ Std\\
\hline
\rowcolor{claY} GANomaly~\cite{Akcay2018GANomaly} & RB& \color{red} $\mathbf{0.655}$ & $0.705$ & $0.420$ & $0.580$ & $0.359$ & $0.588$ & $0.515$ & $0.571$ & \color{red} $\mathbf{0.630}$ & $0.712$ &$0.574\pm 0.109$ \\
\rowcolor{claY} Skip-GANomaly~\cite{Akcay2019Skip-GANomaly} & RB& $0.581$ & $0.718$ & $0.494$ & $0.487$ & \color{red} $\mathbf{0.518}$ & $0.480$ & \color{red} $\mathbf{0.722}$ & $0.559$ & $0.554$ & $0.701$ & $0.581\pm 0.092$\\
\rowcolor{claY} OCGAN~\cite{Perera_2019_CVPR_OCGAN} & RB& $0.548$ & \color{red} $\mathbf{0.731}$ & $0.386$ & $0.582$ & $0.348$ & $0.597$ & $0.422$ & $0.626$ & $0.488$ & $0.713$ & $0.544\pm 0.125$\\
\rowcolor{claY} f-AnoGAN~\cite{Schlegl2019fAnoGAN} & RB& $0.578$ & $0.692$ & \color{red} $\mathbf{0.564}$ & $0.555$ & $0.459$ & $0.580$ & $0.591$ & $0.643$ & $0.610$ & $0.682$ & $0.595\pm 0.064$\\
\rowcolor{claY} P-Net~\cite{P-Net_ECCV2020} & RB& $0.582$ & $0.671$ & $0.455$ & $0.611$ & $0.476$ & $0.596$ & $0.602$ & $0.538$ & $0.523$ & $0.629$ & $0.563\pm 0.065$\\
\rowcolor{claB} ARNet~\cite{fei2020attribute} & PT & $0.598$ & $0.635$ & $0.466$ & \color{red} $\mathbf{0.706}$ & $0.435$ & \color{red} $\mathbf{0.697}$ & $0.512$ & \color{red} $\mathbf{0.662}$ & \color{red} $\mathbf{0.630}$ & \color{red} $\mathbf{0.727}$ & \color{red} $\mathbf{0.607}\pm \mathbf{0.098}$\\
\rowcolor{claB} Patch SVDD~\cite{Patch_SVDD_2020_ACCV} & PT & $0.517$ & $0.542$ & $0.505$ & $0.548$ & $0.493$ & $0.567$ & $0.529$ & $0.538$ & $0.511$ & $0.542$ & $0.529\pm 0.021$\\
\rowcolor{claC} PaDiM~\cite{PaDiM_ICPR2020} & PC & $0.542$ & $0.668$ & $0.502$ & $0.546$ & $0.335$ & $0.612$ & $0.433$ & $0.549$ & $0.386$ & $0.625$ & $0.520\pm 0.102$ \\ \hline
Y-GAN [Ours] & RB*& \color{blue} $\mathbf{0.729}$ & \color{blue} $\mathbf{0.767}$ & \color{blue} $\mathbf{0.749}$ & \color{blue} $\mathbf{0.768}$ & \color{blue} $\mathbf{0.759}$ & \color{blue} $\mathbf{0.764}$ & \color{blue} $\mathbf{0.778}$ & \color{blue} $\mathbf{0.780}$ & \color{blue} $\mathbf{0.722}$ & \color{blue} $\mathbf{0.811}$ & \color{blue} $\mathbf{0.763\pm 0.024}$\\
\hline\hline
\multicolumn{13}{l}{$^{\dagger}$\small\colorbox{claY}{RB - reconstruction based,} \colorbox{claB}{PT - proxy task based,} \colorbox{claC}{PC - utilizing pretrained classification models}; RB* - reconstruction based but with latent proxy task}
\end{tabular}}
\caption{CIFAR10 results in terms of AUC scores. The best model in each column is marked blue, the runner-up red.
\label{tab: CIFAR10_results}}
\end{table*}
\textbf{Training Setting.} The learning objectives in Eqs. \eqref{Eq:g_loss} and \eqref{Eq:d_loss} are minimized using the Adam~\cite{Kingma_ICLR2015_Adam} optimizer with a learning rate of ${l_r}=0.0002$ and momentum parameters $\beta_1=0.5$ and $\beta_2=0.999$~\cite{Chintala_ICLR2016_DCGAN}. The
weights in $\mathcal{L_G}$ and $\mathcal{L_D}$ are determined empirically through an optimization procedure on validation data.
For the experiments we use $\lambda_{1}=\lambda_{5}=50$, and $\lambda_{2}=\lambda_{3}=\lambda_{4}=\lambda_{6}=1$.
While these weights are kept constant, the weight associated with the gradient reversal layer is initialized to a value of $\lambda_R=0$ and is then gradually increased as the training progresses,
as suggested in~\cite{ganin2015_gradRev}.
All models are trained for $100$ epochs on MNIST, FMNIST and CIFAR10 and for $200$ epochs on PlantVillage, where less data is available for the learning procedure.
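The optimizer setup and a gradual schedule for $\lambda_R$ may be sketched as follows, assuming the modules from the earlier sketches have been instantiated; the saturating schedule (with $\gamma=10$) is the one proposed in~\cite{ganin2015_gradRev}, and its use here is an assumption.
\begin{verbatim}
import math
import torch

# generator-side parameters: both encoders, the decoder and the latent
# classifier; the discriminator Ds is optimized separately
params_G = (list(E_s.parameters()) + list(E_r.parameters())
            + list(D.parameters()) + list(C.parameters()))
opt_G = torch.optim.Adam(params_G, lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(Ds.parameters(), lr=2e-4, betas=(0.5, 0.999))

def lambda_R_schedule(epoch, n_epochs, gamma=10.0):
    """Grows from 0 towards 1 as training progresses."""
    p = epoch / n_epochs
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
\end{verbatim}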
\textbf{Implementation.} Y-GAN is implemented in Python 3.7 using PyTorch 1.5 and CUDA 10.2. All source code, model definitions and trained weights are made publicly available to facilitate reproducibility\footnote{URL will be added after review.}.
Using a personal desktop computer with an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i7-8700K CPU and an NVIDIA\textsuperscript{\textregistered} GeForce RTX 2080 Ti GPU, it takes around four hours to train Y-GAN on MNIST, FMNIST and CIFAR10.
For the higher resolution images in PlantVillage the training stage takes around five hours. Once the model is learned, a single image is processed in around $2.6\, ms$ for the smallest $32\times32$ images and $13.5\, ms$ for the largest $256\times256$ images.
\begin{table*}[!t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l |l | l|c c c c c c c c c c c c c c |c}
\hline\hline
Model & Type$^{\dagger}$ & Error & Apple & Blueberry & Cherry & Corn & Grape & Orange & Peach & Pepper & Potato & Raspberry & Soybean & Squash & Strawberry & Tomato & AUC\\
\hline
\rowcolor{claY} & & TNR & $0.228$ & $0.482$ & \color{red} $\mathbf{0.953}$ & $0.644$ & \color{blue} $\mathbf{0.941}$ & / & $0.375$ & $0.635$ & \color{blue} $\mathbf{0.968}$ & $0.853$ & \color{red} $\mathbf{0.911}$ & / & $0.750$ & \color{red} $\mathbf{0.680}$ & \\
\rowcolor{claY} \multirow{-2}{*}{GANomaly~\cite{Akcay2018GANomaly}} & \multirow{-2}{*}{RB} & TPR & $0.630$ & / & $0.436$ & $0.826$ & $0.864$ & \color{red} $\mathbf{0.926}$ & $0.784$ & $0.790$ & $0.601$ & / & / & $0.214$ & $0.711$ & $0.656$ & \multirow{-2}{*}{\color{red} $\mathbf{0.781}$}\\
\hline
\rowcolor{claY} & & TNR & \color{red} $\mathbf{0.675}$ & \color{red} $\mathbf{0.847}$ & $0.842$ & $0.253$ & \color{red} $\mathbf{0.929}$ & / & $0.222$ & \color{blue} $\mathbf{0.774}$ & $0.742$ & \color{red} $\mathbf{0.867}$ & $0.876$ & / & \color{blue} $\mathbf{0.880}$ & $0.085$ & \\
\rowcolor{claY} \multirow{-2}{*}{Skip-GANomaly~\cite{Akcay2019Skip-GANomaly}}& \multirow{-2}{*}{RB}& TPR & $0.632$ & / & \color{red} $\mathbf{0.832}$ & \color{blue} $\mathbf{0.989}$ & $0.309$ & $0.843$ & $0.917$ & $0.578$ & $0.569$ & / & / & \color{blue} $\mathbf{0.969}$ & $0.390$ & $0.656$ & \multirow{-2}{*}{$0.746$}\\
\hline
\rowcolor{claY} & & TNR & $0.584$ & $0.266$ & $0.877$ & $0.605$ & $0.882$ & / & $0.111$ & $0.338$ & $0.484$ & $0.640$ & $0.763$ & / & $0.315$ & $0.389$ & \\
\rowcolor{claY} \multirow{-2}{*}{OCGAN~\cite{Perera_2019_CVPR_OCGAN}}& \multirow{-2}{*}{RB}& TPR & $0.562$ & / & $0.216$ & $0.758$ & $0.793$ & $0.426$ & $0.598$ & \color{blue} $\mathbf{0.943}$ & $0.736$ & / & / & $0.201$ & $0.886$ & $0.547$ & \multirow{-2}{*}{$0.608$}\\
\hline
\rowcolor{claY} & & TNR & $0.375$ & $0.645$ & $0.579$ & $0.224$ & $0.847$ & / & \color{red} $\mathbf{0.486}$ & $0.625$ & $0.581$ & $0.840$ & $0.767$ & / & $0.587$ & $0.260$ & \\
\rowcolor{claY} \multirow{-2}{*}{f-AnoGAN~\cite{Schlegl2019fAnoGAN}}& \multirow{-2}{*}{RB} & TPR & $0.592$ & / & $0.465$ & $0.772$ & $0.282$ & $0.574$ & $0.655$ & $0.541$ & $0.474$ & / & / & $0.845$ & $0.192$ & $0.631$ & \multirow{-2}{*}{$0.623$}\\
\hline
\rowcolor{claY} && TNR & $0.495$ & $0.532$ & $0.520$ & $0.528$ & $0.529$ & / & $0.444$ & $0.530$ & $0.613$ & $0.467$ & $0.526$ & / & $0.533$ & $0.489$ & \\
\rowcolor{claY} \multirow{-2}{*}{P-Net~\cite{P-Net_ECCV2020}}& \multirow{-2}{*}{RB}& TPR & $0.508$ & / & $0.520$ & $0.517$ & $0.519$ & $0.511$ & $0.532$ & $0.524$ & $0.505$ & / & / & $0.525$ & $0.526$ & $0.517$ & \multirow{-2}{*}{$0.524$}\\
\hline
\rowcolor{claB} && TNR & $0.608$ & $0.698$ & $0.889$ & $0.116$ & $0.824$ & / & $0.000$ & $0.767$ & \color{red} $\mathbf{0.774}$ & $0.813$ & $0.846$ & / & $0.511$ & $0.633$ & \\
\rowcolor{claB} \multirow{-2}{*}{ARNet~\cite{fei2020attribute}}& \multirow{-2}{*}{PT}& TPR & $0.586$ & / & $0.669$ & \color{red} $\mathbf{0.979}$ & \color{red} $\mathbf{0.928}$ & $0.237$ & $0.909$ & $0.782$ & \color{red} $\mathbf{0.760}$ & / & / & $0.835$ & \color{red} $\mathbf{0.905}$ & \color{red} $\mathbf{0.672}$ & \multirow{-2}{*}{$0.736$}\\
\hline
\rowcolor{claB} && TNR & $0.672$ & $0.299$ & $0.708$ & \color{red} $\mathbf{0.682}$ & \color{red} $\mathbf{0.929}$ & / & $0.375$ & $0.578$ & \color{red} $\mathbf{0.774}$ & $0.707$ & $0.730$ & / & $0.326$ & $0.404$ & \\
\rowcolor{claB} \multirow{-2}{*}{Patch SVDD~\cite{Patch_SVDD_2020_ACCV}}& \multirow{-2}{*}{PT}& TPR & $0.512$ & / & $0.339$ & $0.772$ & $0.746$ & $0.512$ & $0.659$ & $0.857$ & $0.687$ & / & / & $0.335$ & \color{blue} $\mathbf{0.933}$ & $0.593$ & \multirow{-2}{*}{$0.670$}\\
\hline
\rowcolor{claC} & & TNR & $0.605$ & $0.751$ & $0.409$ & $0.176$ & $0.529$ & / & $0.111$ & $0.507$ & $0.645$ & \color{red} $\mathbf{0.867}$ & $0.796$ & / & $0.674$ & $0.580$ & \\
\rowcolor{claC} \multirow{-2}{*}{PaDiM~\cite{PaDiM_ICPR2020}}& \multirow{-2}{*}{PC}& TPR & \color{red} $\mathbf{0.736}$ & / & $0.526$ & $0.966$ & $0.878$ & $0.120$ & \color{red} $\mathbf{0.928}$ & \color{red} $\mathbf{0.887}$ & $0.754$ & / & / & $0.882$ & $0.835$ & $0.556$ & \multirow{-2}{*}{0.671}\\
\hline
\multirow{2}{*}{Y-GAN~[Ours]} & \multirow{2}{*}{RB*}& TNR & \color{blue} $\mathbf{0.833}$ & \color{blue} $\mathbf{0.993}$ & \color{blue} $\mathbf{0.965}$ & \color{blue} $\mathbf{1.000}$ & $0.847$ & / & \color{blue} $\mathbf{0.847}$ & \color{red} $\mathbf{0.770}$ & $0.677$ & \color{blue} $\mathbf{0.893}$ & \color{blue} $\mathbf{0.945}$ & / & \color{red} $\mathbf{0.772}$ & \color{blue} $\mathbf{0.950}$ & \multirow{2}{*}{\color{blue} $\mathbf{0.962}$} \\
& & TPR & \color{blue} $\mathbf{0.802}$ & / & \color{blue} $\mathbf{0.909}$ & $0.800$ & \color{blue} $\mathbf{0.943}$ & \color{blue} $\mathbf{0.964}$ & \color{blue} $\mathbf{0.934}$ & $0.795$ & \color{blue} $\mathbf{0.932}$ & / & / & \color{red} $\mathbf{0.946}$ & $0.722$ & \color{blue} $\mathbf{0.926}$ & \\
\hline\hline
\multicolumn{18}{l}{$^{\dagger}$\small\colorbox{claY}{RB - reconstruction based,} \colorbox{claB}{PT - proxy task based,} \colorbox{claC}{PC - utilizing pretrained classification models}; RB* - reconstruction based but with latent proxy task}
\end{tabular}}
\caption{PlantVillage results in terms of per-class TPR and TNR scores and mean AUC values over all classes. The best model in each column and for each performance score is marked blue, the runner-up is marked red. The TPR and TNR scores were computed at a (global) decision threshold defined by the equal error rate (EER) on the training data of all classes. Note that the two scores are not necessarily well calibrated for each category in the dataset -- see the results for the Corn, Grape or Potato leaves.
\label{tab: PlantVillage_results}}
\end{table*}
\section{Proof-of-Concept Study}
To explore the characteristics of the dual latent-space representation and evaluate the effectiveness of the disentanglement process performed by Y-GAN, we first conduct a proof-of-concept study. To this end, we generate a color version of MNIST (\textit{Color-MNIST} hereafter)
by inverting the thresholded black and white images of the dataset and replacing the white background
with one of the following colors: red, green, blue, cyan, yellow, purple, violet, brown, dark green or orange. We make sure all colors are represented equally in the generated dataset.
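The construction can be sketched in a few lines; the concrete RGB values for the ten colors are our choice.
\begin{verbatim}
import numpy as np

PALETTE = [  # assumed RGB values for the ten background colors
    (255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 255, 255), (255, 255, 0),
    (128, 0, 128), (238, 130, 238), (139, 69, 19), (0, 100, 0), (255, 165, 0),
]

def colorize(img, color, threshold=128):
    """img: (28, 28) uint8 MNIST digit (white digit on black background).
    After inverting and thresholding the digit is black; the background
    (originally black) is painted with the given color."""
    out = np.zeros((*img.shape, 3), dtype=np.uint8)
    out[img < threshold] = color
    return out

# cycling through the palette keeps all colors equally represented:
# colored = [colorize(im, PALETTE[i % 10]) for i, im in enumerate(digits)]
\end{verbatim}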
Next, we train Y-GAN on the constructed dataset by considering digits $1$ to $9$ as normal data, and $0$ as anomalous. In the test phase, we present the model with unseen samples and compute their latent representations, $z_{s}$ and $z_{r}$. Finally, we generate hybrid reconstructions by concatenating the latent vector $z_{s}$ taken from one test sample with the latent vector $z_{r}$ of another randomly selected test sample and pass the concatenated vector through the decoder.
Example reconstructions produced with this process are shown in Fig.~\ref{fig:disentanglement}(a).
As can be seen, Y-GAN learns to disentangle data characteristics relevant for the digit representation task from characteristics that are irrelevant, i.e., the background color in this toy example. Consequently, replacing the original latent vector $z_{r}$ causes a change in the background color, which is now inherited from the randomly selected sample shown on the left part of Fig.~\ref{fig:disentanglement}(a). Meanwhile, the shape of the digit in the original image is preserved well.
Next, we use $t$-distributed Stochastic Neighbor Embeddings ($t$-SNE)~\cite{van2008visualizing} to visualize the distribution of the generated data in the dual latent spaces in Fig.~\ref{fig:disentanglement}(b). Here, $250$ random samples of each of the Color-MNIST digit classes are used for visualization. Note that for the semantically-relevant latent space, samples corresponding to digits $1$ to $9$ form compact and well separated clusters (marked $z_{s}$), while samples for the anomalous $0$ are considerably less compact despite the fact that they come from a single (homogeneous) class. Nevertheless, they do not overlap (significantly) with the normal data. In the residual latent space (marked $z_{r}$), $10$ clusters corresponding to the
background colors used in Color-MNIST can be identified in the $t$-SNE plot. However, each cluster contains samples from all $10$ digits, suggesting that this representation has limited discriminative power for anomaly detection.
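The embeddings themselves are obtained with standard scikit-learn tooling, e.g.:
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

z = np.random.randn(2500, 100)  # stand-in for 250 latent codes per class
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(z)
\end{verbatim}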
To test the behavior of Y-GAN on more complex data, we repeat the same experiment using CIFAR10. Selected synthesized samples and their respective source images are shown in Fig.~\ref{fig:CIFAR10_disentanglement}. We again observe that Y-GAN learnt to successfully disentangle semantically relevant attributes from those that are irrelevant for representing classes in the normal data. The shape of the objects and other visually important characteristics are clearly inferred from the source image of the relevant latent vector $z_s$, while the background style and colors are inferred from the residual latent vector $z_r$. Different from the Color-MNIST example, where the digits represent homogeneous classes with limited variability, the large intra-class variability of CIFAR10 images leads to lower quality reconstructions, which is expected given Y-GAN's learning objectives. Nevertheless, the results validate that a meaningful separation of information content is achieved in the latent space even with challenging input images.
\section{Results and Discussion}
To illustrate the performance of Y-GAN, we report in this section results that: $(i)$ compare Y-GAN to state-of-the-art techniques from the literature, $(ii)$ were generated through a comprehensive ablation study and demonstrate the contribution of various components of Y-GAN, $(iii)$ highlight some of the model's characteristics, and $(iv)$ investigate the behavior of Y-GAN in a qualitative manner.
\subsection{Quantitative Evaluation}
We evaluate Y-GAN in comparative experiments with several state-of-the-art anomaly-detection models. Specifically, we compare against GANomaly~\cite{Akcay2018GANomaly}, Skip-GANomaly~\cite{Akcay2019Skip-GANomaly}, OCGAN~\cite{Perera_2019_CVPR_OCGAN}, f-AnoGAN~\cite{Schlegl2019fAnoGAN}, and P-Net~\cite{P-Net_ECCV2020}, which represent powerful reconstruction-based (RB) anomaly detection models and are Y-GAN's \textit{main competitors}. Additionally, we also include the recent ARNet~\cite{fei2020attribute} and Patch SVDD~\cite{Patch_SVDD_2020_ACCV} approaches as representatives of proxy-task (PT) models, and the PaDiM technique from~\cite{PaDiM_ICPR2020} as an example of solutions utilizing pretrained classification (PC) models in the evaluation. For a fair comparison, the official GitHub implementations
are used for the experiments (where available)
together with the advocated hyper-parameters to ensure optimal performance.
\textbf{MNIST.} The MNIST results, reported in Table \ref{tab:MNIST_results}, show that Y-GAN significantly outperforms all evaluated baselines. It improves on the mean AUC score of the runner-up OCGAN by $9\%$ and on the standard deviation (computed over all runs) by a factor of close to $20$. Compared to the competing models, Y-GAN ensures the most consistent results, regardless of which class is considered anomalous. This can be seen particularly well from the results for digits $7$ and $9$, where all tested models exhibit a drop in AUC scores, while Y-GAN retains performance similar to other settings.
\textbf{FMNIST.} Compared to MNIST, the AUC scores obtained on FMNIST are lower for all tested models due to the larger image diversity in this dataset, as summarized in Table~\ref{tab: FMNIST_results}. The proposed Y-GAN achieves a mean AUC score of $0.925$, compared to $0.808$ for ARNet, $0.789$ for f-AnoGAN and $0.782$ for GANomaly,
which are the next three models (in this order) in terms of performance. A performance improvement of more than $14\%$ over the second best performing model, ARNet, points to the descriptiveness of the representation learnt by Y-GAN in the semantically-relevant latent space.
Y-GAN again achieves the most consistent results across different experimental runs.
\textbf{CIFAR10.} Images in CIFAR10 were captured in unconstrained settings, which makes
this dataset extremely challenging, as evidenced by the results in Table~\ref{tab: CIFAR10_results}. All competing models yield mean AUC scores close to (or below) $0.6$,
which speaks of the difficulty of learning meaningful representations on CIFAR10. The dual representation strategy of Y-GAN, on the other hand, yields a mean AUC score of $0.763$, improving on the runner-up, ARNet, by more than $25\%$. The proposed model also convincingly outperforms all competing models in all $10$ experimental runs.
\textbf{PlantVillage.} Results for the
PlantVillage dataset are reported in Table~\ref{tab: PlantVillage_results}. As can be seen, Y-GAN is again the top performer with an AUC of $0.962$, outperforming the state-of-the-art runner-up GANomaly by more than $23\%$. Y-GAN exceeds the detection accuracy of the other models on both normal and anomalous samples and generates fewer misses on average, as shown by the TPR and TNR scores. However, we also observe that the (global) calibration of the models results in unbalanced TPR and TNR values for certain classes (e.g., Corn, Grape, Potato). Despite these calibration issues, Y-GAN performs best on average even if the individual TPR and TNR scores are considered. We attribute this performance to the implemented dual data representation strategy, which allows irrelevant data characteristics to be excluded when deciding whether a sample is anomalous or not.
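As a side note, the global EER-based decision threshold underlying the reported TPR and TNR scores can be sketched as follows; this is a hedged illustration, not the exact evaluation code, and all names are ours.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve

def eer_threshold(y_true, scores):
    # scores: higher = more anomalous; threshold where FPR ~= FNR
    fpr, tpr, thr = roc_curve(y_true, scores)
    fnr = 1.0 - tpr
    return thr[np.nanargmin(np.abs(fpr - fnr))]

def tpr_tnr(y_true, scores, t):
    pred = scores >= t                  # predicted anomalous
    tpr = np.mean(pred[y_true == 1])    # true positive rate
    tnr = np.mean(~pred[y_true == 0])   # true negative rate
    return tpr, tnr
\end{verbatim}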
\subsection{Ablation Study}\label{SubSec: Ablation study}
To demonstrate the importance of different components of Y-GAN,
we perform a two-part ablation study, where we first remove various parts of the learning objective $\mathcal{L_G}$, and then ablate parts of the model architecture.
\begin{table}[!t]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|cccc}
\hline\hline
\multicolumn{1}{l|}{\textbf{Ablation study}} & MNIST & FMNIST & CIFAR10 & PlantVillage \\
\hline
Complete Y-GAN & $0.987$ & $0.925$ & $0.763$ & $0.962 $ \\
\hline
\rowcolor{claB}A1: $\mathcal{L_G}$ w/o $\mathcal{L}_{con}$& $0.987$ & $0.890$ & $0.745$ & $0.918$ \\
\rowcolor{claB}A2: $\mathcal{L_G}$ w/o $\mathcal{L}_{r}$& $0.980$ & $0.832$ & $0.682$ & $0.865$ \\
\rowcolor{claB}A3: $\mathcal{L_G}$ w/o $\mathcal{L}_{r}$ and $\mathcal{L}_{con}$ & $0.962$ & $0.823$ & $0.660$ & $0.861$ \\
\hline
\rowcolor{claC}B1: Y-GAN w/o dual encoders & $0.979$ & $0.881$ & $0.734$ & $0.915$ \\
\rowcolor{claC}B2: B1 w/o $z_{r}$& $0.956$ & $0.810$ & $0.659$ & $0.807$ \\
\rowcolor{claC}B3: B1 w/o $z_{r}$ and $C$ & $0.640$ & $0.689$ & $0.531$ & $0.610$ \\
\rowcolor{claC}B4: Y-GAN w/o $Ds$ & $0.982$ & $0.896$ & $0.719$ & $0.921$ \\
\hline\hline
\multicolumn{5}{l}{Color coding: \small\colorbox{claB}{A - learning objective ablation,} \colorbox{claC}{B - architecture ablation}}
\end{tabular}}
\caption{Y-GAN ablation study. Results are reported in the form of AUC scores. The first part of the ablation study (marked A) explores the impact of loss terms, the second part (marked B) the impact of architectural components.}
\label{tab:ablation_1}
\end{table}
\textbf{Impact of Loss Terms.} For the first part of the ablation study, three Y-GAN variants are trained with different versions of the generator loss $\mathcal{L_G}$: $(i)$ $\mathcal{L_G}$ without the consistency loss $\mathcal{L}_{con}$ (A1), $(ii)$ $\mathcal{L_G}$ without the
residual information loss $\mathcal{L}_{r}$ (A2), and $(iii)$ $\mathcal{L_G}$ without both $\mathcal{L}_{con}$ and $\mathcal{L}_{r}$ (A3). The results in Table \ref{tab:ablation_1} show that the removal of both loss terms causes a considerable drop in the AUC scores on FMNIST, CIFAR10 and PlantVillage.
Compared to MNIST, where a smaller drop is observed, these three datasets contain a greater amount of residual, semantically-irrelevant information (e.g., various clothing prints, background style, etc.). In such cases, both disentanglement terms, $\mathcal{L}_{con}$ and $\mathcal{L}_{r}$, play a significant role in the extraction of semantically-relevant information. Although the two losses are complementary, $\mathcal{L}_{r}$ results in a slightly greater performance drop than $\mathcal{L}_{con}$ when removed from the training objective. These results suggest that all loss terms are important and contribute to the performance of Y-GAN.
\begin{table}[!t]
\centering
\resizebox{0.99\columnwidth}{!}{%
\begin{tabular}{l |c c c c}
\hline\hline
Model & MNIST & FMNIST & CIFAR10 & PlantVillage\\
\hline
Y-GAN (ground truth labels) & $0.987$ & $0.925$ & $0.763$ & $0.962$\\
Y-GAN* (weak labels) & $0.964$ & $0.892$ & $0.733$ & $0.846$\\
\hline\hline
\end{tabular}}\vspace{2mm}
\caption{Mean AUC scores for Y-GAN models trained with: $(i)$ ground truth labels on the sub-class structure of the normal training data, and $(ii)$ weak labels generated through a $k$-means clustering procedure. Note that even in a completely unsupervised setting, where (noisy) weak labels are inferred directly from the normal training data, Y-GAN generates state-of-the-art results that outperform all of the competing models on all four experimental datasets.
\label{tab: results_weak}}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth, trim=0mm 0mm 12mm 0mm, clip]{score_analysis.pdf}
\vspace{-6mm}
\caption{Score analysis on the CIFAR10 and PlantVillage datasets. Various anomaly scores, defined in the image space ($s_x$), over the latent representations ($s_z, s_{zs}$), or based on the sub-class structure of the normal data ($s_{zp},s_{zw}, s_c$, and $s$), are explored to better understand the expressive power of different representations generated within Y-GAN. Note that the proposed anomaly score achieves the strongest performance on both datasets. \label{fig: score_analysis}}
\end{figure}
\textbf{Impact of Architecture.} Y-GAN uses a dual encoder to generate latent representations. Four versions of the model are implemented to highlight the importance of the design choices made around this topology:
$(i)$ a model without the dual encoder (a single encoder $E$ is used), where the generated latent representation $z=E(x)\in\mathbb{R}^{2d}$ is split into two equally sized vectors, $z_{s}\in\mathbb{R}^{d}$ and
$z_{r}\in\mathbb{R}^{d}$, on top of which $\mathcal{L_G}$~(\ref{Eq:g_loss}) is applied (B1),
$(ii)$ the model from B1 but with a single (entangled) latent representation - no $z_{r}$ and associated losses ($\mathcal{L}_{r}$, $\mathcal{L}_{con}$) are used (B2),
$(iii)$ the model from B2 but without the classifier $C$, i.e., an auto-encoder with a reconstruction-based anomaly score (B3), and
$(iv)$ the proposed Y-GAN model without the adversarial discriminator $Ds$, i.e., a dual encoder generator trained without $\mathcal{L}_{adv}$ (B4).
The results in Table \ref{tab:ablation_1} suggest that the removal of the dual encoder increasingly impacts results as the complexity of the data (from an anomaly detection point of view) increases. The Y-shaped architecture thus contributes to a more efficient disentanglement of semantically-relevant and residual information. It can also be seen (from B2) that using a single entangled latent space representation is detrimental to the performance of the anomaly detection task, especially for the more challenging CIFAR10 and PlantVillage datasets. The disentanglement of irrelevant information and its removal from the decision making process is, hence, critical for the success of Y-GAN. The exclusion of the classifier also causes a large decrease in the overall anomaly detection accuracy across all datasets, suggesting that steered representation learning is key for Y-GAN - see the B3 results. Finally, training the originally proposed Y-GAN auto-encoder without the adversarial discriminator seems to have the least significant impact on the overall detection accuracy in comparison to the other Y-GAN components (B4). Nevertheless, the discriminator does contribute to the quality of the generated image reconstructions, which can further affect the performance of the disentanglement process in more complex images, e.g., in CIFAR10 and PlantVillage.
\subsection{Model Characteristics}
\textbf{Unlabeled Normal Data.} Y-GAN assumes that the (normal) training data comes from multiple sub-classes/groups and that labels for these sub-classes are readily available. This assumption allows for the inclusion of the latent classifier $C$ in the training process, which was shown to be critical for the overall performance of Y-GAN.
Here, we show that it is possible to relax this assumption and train Y-GAN with unlabeled training data in a completely unsupervised manner. To this end,
we run a clustering procedure over the training data and \textit{generate weak labels} that can be utilized when learning Y-GAN. Specifically, we first compute feature representations from the training images used in the given experimental run with the EfficientNet-B4~\cite{pmlr-EfficientNet_2019} model, pre-trained on ImageNet. Given input images $x$, the model generates $1792$-dimensional feature representations. For efficiency reasons, we reduce the dimensionality of these representations to $100$ using Principal Component Analysis (PCA)~\cite{turk1991eigenfaces} and cluster the data with $k$-means. We determine the optimal number of clusters based on the \textit{average silhouette method}~\cite{kaufman2009finding}. Finally, we utilize the generated cluster assignments as weak labels for Y-GAN training. A minimal sketch of this pipeline is given below.
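The sketch assumes the EfficientNet-B4 features have already been extracted into \texttt{feats}; variable names are illustrative, not from the actual training code.
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# feats: (n_images, 1792) EfficientNet-B4 features, computed beforehand
def weak_labels(feats, k_range=range(2, 21)):
    z = PCA(n_components=100).fit_transform(feats)
    best_k, best_s = 2, -1.0
    for k in k_range:                  # average silhouette model selection
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(z)
        s = silhouette_score(z, labels)
        if s > best_s:
            best_k, best_s = k, s
    return KMeans(n_clusters=best_k, n_init=10,
                  random_state=0).fit_predict(z)
\end{verbatim}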
\begin{itemize}[leftmargin=3mm]
\item \textbf{K--Classes--Out Results.} On MNIST, FMNIST and CIFAR10 the clustering procedure identifies either $9$ or $10$ clusters in any given experimental run ($9$ classes are in fact represented).
An analysis of the generated clusters shows that $95\%$ of MNIST samples in each cluster share the same ground truth label. This percentage is a bit lower on FMNIST and CIFAR10, where it equals $91\%$ and $87\%$, respectively. This suggests that the clustering approximates the actual data classes reasonably well, but also that part of the data is not assigned the correct class labels.
A comparison between Y-GAN trained with the ground truth labels and the weak labels generated with the clustering procedure (denoted as Y-GAN*) is presented in Table~\ref{tab: results_weak}. As can be seen, the weak labels result in slight performance degradations compared to the original Y-GAN. However, the Y-GAN* model achieves competitive results on all three datasets and still outperforms all competing baselines evaluated in Tables \ref{tab:MNIST_results}, \ref{tab: FMNIST_results} and \ref{tab: CIFAR10_results}.
The presented results suggest that learning data representations through a latent (proxy) classifier that considers differences between different sub-classes of normal training data is beneficial for performance, even if the sub-classes are not necessarily homogeneous and contain label noise.
\item \textbf{PlantVillage Results.} For this dataset, $12$ distinct clusters are identified by $k$-means, which corresponds to the number of original categories that include non-anomalous/normal samples. However, the partitioning of the PlantVillage data is in this case less accurate in comparison to the $k$-classes-out datasets, due to similarities between objects from different classes. After the clustering process, only $80\%$ of the PlantVillage images in each cluster represent the same plant species and share the same ground truth label. Although such weak labels decrease the anomaly detection performance by approximately $12\%$, Y-GAN* learned without any supervision still outperforms the second best competing model, GANomaly, by $7.7\%$ (see Table~\ref{tab: PlantVillage_results} for reference). Overall, these results support the observation
that a competitive Y-GAN model can be trained even without access to ground truth class labels for the normal training data.
\end{itemize}
\begin{figure}[!t]
\centering
\includegraphics[width=0.96\columnwidth]{gradcam.pdf}
\caption{Sample images of correctly detected normal and anomalous samples (marked red) with corresponding Grad-CAM visualizations~\cite{Selvaraju_2017_ICCV_Grad-CAM}. As can be seen, the model focuses on
the global appearance of objects on MNIST and FMNIST, while local spatial characteristics are more informative on CIFAR10. On PlantVillage, on the other hand, Y-GAN appears to be simultaneously focusing on both global and local object characteristics, due to the large intra-class variability of normal samples.\label{fig:grad_cam}}
\end{figure}
\textbf{Anomaly Score Analysis.} The proposed Y-GAN uses an anomaly score derived from the output of the latent classifier $C$ to detect anomalous data. However, previous work has used other definitions of anomaly scores, including distances between the input images and their reconstructions or $L_p$ norms over latent representations, among others~\cite{Perera_2019_CVPR_OCGAN,Akcay2019Skip-GANomaly}. In this section, we compare the score utilized with Y-GAN to several other possibilities. These experiments are meant to provide additional insight into the model and the characteristics of the various representations it generates. For the analysis we only use the CIFAR10 and PlantVillage datasets, which contain more complex data than the other two datasets. We implement the following competing anomaly scores for the comparison (a code sketch of several of these scores is given after the list):
\begin{itemize}[leftmargin=*]
\item Image score, $s_x$:
\begin{equation}
s_x(x,\hat{x}) = ||x-\hat{x}||_2^2,
\end{equation}
where the anomaly score is computed based on the reconstruction quality. Here, $x$ is the input image and $\hat{x}$ is the reconstruction generated by Y-GAN.
\item Latent score, $s_z$:
\begin{equation}
s_{z}(z,\hat{z}) = ||z-\hat{z}||_2^2,
\end{equation}
where the anomaly score is computed in the latent space using the combined latent representations $z=E_{s}(x)\oplus E_{r}(x)$ and $\hat{z}=E_{s}(\hat{x})\oplus E_{r}(\hat{x})$. $\oplus$ is a concatenation operator.
\item Semantic latent score, $s_{zs}$:
\begin{equation}
s_{zs}(z_s,\hat{z_s}) = ||z_s-\hat{z_s}||_2^2,
\end{equation}
where the score is computed based on the semantic latent space representation only, i.e., $z_s = E_{s}(x)$ and $\hat{z}_s = E_{s}(\hat{x})$.
\item Prototype-based semantic latent score with ground truth labels, $s_{zp}$:
\begin{equation}
s_{zp}(z_s,z_s^{(C_i)}) = \min_i||z_s-z_s^{(C_i)}||_2^2,
\end{equation}
where the (semantically-meaningful) latent probe vector $z_s$ is compared to the class prototypes $z_s^{(C_i)}=1/|C_i|\sum_{z_s\in{C_i}}z_s$, computed for the $N$ sub-classes of the normal training data $\{C_i\}_{i=1}^N$ and the minimum distance is used as the anomaly score. $|\cdot|$ is the cardinality of the class.
\item Prototype-based semantic latent score with weak class labels, $s_{zw}$:
\begin{equation}
s_{zw}(z_s,z_s^{(C^*_i)}) = \min_i||z_s-z_s^{(C^*_i)}||_2^2,
\end{equation}
where the latent probe vector $z_s$ is compared to the class prototypes $z_s^{(C^*_i)}=1/|C^*_i|\sum_{z_s\in{C^*_i}}z_s$, computed for the $N$ sub-classes of the normal training data $\{C^*_i\}_{i=1}^N$ defined through $k$-means clustering. The minimum distance over all class prototypes is used as the anomaly score.
\item Classifier uncertainty, $s_c$:
\begin{equation}
s_c = -\sum_i p_i\log p_i,
\end{equation}
where $p=[p_1,p_2,\ldots,p_N]\in\mathbb{R}^N$ is the probability distribution for the $N$ sub-classes of the normal data computed by subjecting the output of the latent classifier $C$ to a softmax function given an input probe sample $x$.
\end{itemize}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.96\columnwidth]{missed_anomalies_standard.pdf}\vspace{-2mm}
\end{center}
{\scriptsize \hspace{3.5mm} (a) Anomaly: Digit $4$ \hspace{5mm} (b) Anomaly: Pullover \hspace{6mm} (c) Anomaly: Horse}\vspace{-1mm}
\caption{Examples of edge cases with the $k$-classes-out datasets (MNIST, FMNIST, CIFAR10). Undetected normal samples in the top two rows exhibit visual similarities with the anomalous class or are poor representatives of the normal data. Similarly, the appearance of undetected anomalous samples in the bottom row (red) is close to the appearance of classes in the normal data. \label{fig:failure_cases_standard_data}}
\vspace{-5mm}
\end{figure}
We note that all latent representations used in the above definitions of latent scores are normalized to unit norm prior to score calculation.
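For clarity, a few of these scores are sketched below in NumPy; this is a minimal illustration under the unit-norm convention above, the names are ours, and \texttt{logits} denotes the output of the latent classifier $C$.
\begin{verbatim}
import numpy as np

def s_x(x, x_hat):                 # image (reconstruction) score
    return np.sum((x - x_hat) ** 2)

def s_zs(z_s, z_s_hat):            # semantic latent score
    return np.sum((z_s - z_s_hat) ** 2)

def s_zp(z_s, prototypes):         # prototype score; prototypes: (N, d)
    return np.min(np.sum((prototypes - z_s) ** 2, axis=1))

def s_c(logits):                   # classifier uncertainty (entropy)
    p = np.exp(logits - logits.max())
    p /= p.sum()                   # softmax over the N sub-classes
    return -np.sum(p * np.log(p + 1e-12))
\end{verbatim}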
The results, presented in Fig.~\ref{fig: score_analysis}, show that a simple reconstruction-based score ($s_x$) results in modest performance on both datasets. The latent space score $s_z$ is slightly more informative on CIFAR10, but generates weaker results on images from PlantVillage.
If the residual latent space is removed from the decision making process, we observe additional improvements on both datasets. Thus, the anomaly score defined in the semantically meaningful latent space, $s_{zs}$, already ensures better results on CIFAR10 than all of the competing state-of-the-art models evaluated in Table~\ref{tab: CIFAR10_results} and yields detection results comparable to a large portion of the tested models on PlantVillage. If anomaly scores are defined by also considering the sub-classes present in the (normal) training data (i.e., $s_{zp}$, $s_{zw}$, $s_{c}$, and the proposed Y-GAN score $s$), we see another significant jump in AUC results on both datasets, which suggests that the structure (or distribution) of the normal data is an important source of information that can be exploited to improve anomaly detection performance.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{failure_cases.pdf}
\caption{Sample images of undetected normal and anomalous samples (marked red) from the PlantVillage dataset. Errors with the normal data occur due to uncommon leaf shapes, holes in healthy leaves and unusual positions/orientations. With anomalous samples, problems appear due to subtle, often unnoticeable changes in leaf color or local textures. \label{fig:failure_cases}}
\end{figure}
\subsection{Visual/Qualitative Evaluation}
\textbf{Grad-CAM Analysis.} We conduct a qualitative analysis to gain better insight into the behavior of the proposed Y-GAN model. To this end, we generate Grad-CAM visualizations~\cite{Selvaraju_2017_ICCV_Grad-CAM} of image regions that are most informative with respect to the anomaly detection task.
This is possible due to the latent (proxy) classifier $C$ used for the computation of anomaly scores. Examples of correctly classified normal and anomalous samples (marked red) with superimposed heatmaps are shown in Fig.~\ref{fig:grad_cam}. As can be seen, the global appearance of the objects is critical for anomaly detection on datasets with homogeneous backgrounds and small intra-class variability, such as MNIST and FMNIST. Conversely, detection on datasets with more complex visual data (such as CIFAR10) is primarily based on local object and texture characteristics. Different from MNIST, FMNIST and CIFAR10, PlantVillage exhibits relatively large intra-class variability in terms of the size, shape, illumination and orientation of the leaves in the images. Additionally, anomalies in this dataset can appear either as inconsistencies in the overall shape and color or, alternatively, impact the texture of the leaves at an arbitrary spatial location. Therefore, both global and local image characteristics appear to play an important role in the detection of anomalous leaves, as seen in Fig.~\ref{fig:grad_cam}. Interestingly, Y-GAN is able to adapt to the anomaly detection task and learn descriptive and informative features from the input data, regardless of whether these features correspond to global or local (or both) image characteristics.
\textbf{Visual Evaluation.} To better understand why the model fails to classify certain normal and anomalous samples, we perform an additional visual inspection of a few edge cases from the four experimental datasets. In Fig.~\ref{fig:failure_cases_standard_data} we show results for the $k$-classes-out datasets MNIST, FMNIST and CIFAR10, where the class listed below the images was considered anomalous with the presented examples. As can be seen, the undetected normal samples correspond to objects with uncommon appearance for the considered normal data, i.e., oddly shaped digits for MNIST, ambiguous fashion classes for FMNIST, and unusual object appearances for CIFAR10. Difficult anomalous samples, on the other hand, often resemble certain classes from the normal data or exhibit ambiguous appearance. Fig.~\ref{fig:failure_cases} presents edge cases for the PlantVillage dataset. Here, severely folded healthy leaves and distorted leaf shapes are often detected as anomalous. Similar outcomes are also observed with leaves with holes, although such holes do not necessarily indicate an illness. Shadows darkening various parts of non-anomalous leaves can also trick the model into misclassifying normal samples. Conversely, undetected anomalies typically represent subtle, unnoticeable changes in the leaf color or local textures.
\section{Conclusion}
The paper introduced a reconstruction-oriented auto-encoder based anomaly detection model, called Y-GAN. Different from competing approaches, Y-GAN learns to disentangle image characteristics that are relevant for representing normal data from irrelevant residual data characteristics and derives anomaly scores from selectively encoded image information. The model was shown to significantly outperform several state-of-the-art anomaly detection models on the MNIST, FMNIST, CIFAR10 and PlantVillage
datasets and provide the most consistent performance across different anomaly detection tasks among all tested models. As part of our future work, we plan to extend the model, so it allows for additional functionality, such as anomaly localization/segmentation, which is of interest for various anomaly detection tasks.
\section*{Acknowledgment}
Supported in parts by the ARRS Research Program P2-0250 (B)
and the ARRS Research Project J2-9433 (B).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Technological breakthroughs of the past decade have led to
increasingly common human-robot co-working
environments~\cite{cheng2018autonomous}. Navigating among humans is
an imperative task that most cobots, ranging from industrial to
service robots, are expected to perform safely and
efficiently. Although
motion planning for autonomous robots has been studied from multiple
perspectives~\cite{chen2017socially,kuderer2012feature,trautman2013robot},
these approaches focus on movement actions and do not address the
problem of using communication to resolve situations that require
extensive human-robot interaction.
The objective of this paper is to develop a unified paradigm for
computing movement and communication strategies that improve
efficiency and reduce movement conflicts in co-working scenarios (see Fig.\,\ref{fig:example problem}).
\begin{figure}[t]
\centering
\includegraphics[scale=0.25]{realworld}
\caption{An example of a social navigation scenario in a confined environment where the robot's movement cannot reveal any information about its future intentions.}
\label{fig:example problem}
\end{figure}
Although the problem of integrated task and motion planning has received significant research attention \cite{kaelbling2010hierarchical,garrett2020pddlstream,srivastava2014combined,shah2020anytime, dantam2018incremental,dantam2018task} the integration of these deliberative processes with communication has not been studied sufficiently. Prior work on this topic includes extensions to sampling based motion
planning paradigms that model
pedestrians as moving obstacles~\cite{sucan2012open,kuffner2000rrt}. While these extensions provide
valuable enhancements of well-known and efficient algorithms, they
view humans as impervious entities and have limited applicability in
co-working scenarios where both the human and the robot need to adjust their
behavior to allow feasible
solutions.
On the other hand, there are approaches that employ disjoint
prediction models to establish simple interactions with humans to
generate safer and more risk-aware motion plans~\cite{nishimura2020risk}. Since these approaches neglect the effect of
the robot's motion on the human's behavior, they suffer from \emph{the robot freezing problem} where the robot cannot find any safe
solution. To address this limitation,
\emph{socially compliant} methods consider potential
human-robot cooperation via learning and planning techniques to produce legible plans or plans subject
to stipulations on the information divulged during plan
execution \cite{trautman2015robot,kulkarni2019unified,zhang2018finding}.
\cite{kretzschmar2016socially} employs inverse reinforcement learning
(IRL) to learn interactive models of pedestrians in the environment
for social compliant path planning. Further,~\cite{kivrak2021social}
presents a social navigation framework that adapts the social force model
(SFM) to generate human-friendly collision-free motion plans in
unknown environments.
Although these approaches model the effect of the robot's movement on
the humans' behavior for legible motion planning,
relying purely on motion actions, taxonomically known as
implicit communication~\cite{knepper2017implicit}, could be misleading
for the human~\cite{habibian2021here} and may lead
to deadlocks in confined environments.
Clearly,
employing explicit communication~\cite{baraka2018mobile}
coupled with the robot's movements would
enrich the human-robot interaction. The approach in~\cite{che2020efficient} uses IRL to model the effects of both explicit and implicit actions of
the robot on the human's behavior. Further, a robot planner relies on
this model to produce communicative actions to maximize the robot's
clarity. Since this method assumes predefined behavior modes for the robot and human (robot priority and human priority), the solution always impels one agent to slow down, which
degrades the planning effectiveness.
In contrast, we formalize a unified deliberative communication planning
problem that addresses the joint problem of computing the robot's
communication strategy and movements while taking into account the
human's imperfect perception about the robot and its communications
(Sec.\,\ref{sec:methodology}). We use a noisy communication model to
estimate the results of robot's communications on the human's belief
of the robot's possible locations. In contrast to the human prediction
framework in~\cite{che2020efficient}, which requires the robot's
future trajectories (the need for socially compliant planning
illustrates the difficulty of obtaining such inputs), our approach supports arbitrary human movement prediction models that can predict human
behaviors given a set of possible obstacles. Our solution paradigm
derives estimates of the human's belief on the robot's positions to
compute robot communication and movement plans
(Sec.\,\ref{sec:communication planner}). This is done using a hierarchical search process with a socially compliant motion planner, Control Barrier
Function enabled Time-Based RRT (CBF-TB-RRT)~\cite{majd2021safe}
(Sec.\,\ref{sec:motionplanner}). Theoretical results and extensive
simulations on various test environments show that this approach
efficiently avoids deadlocks and computes mutually efficient solutions
without requiring preset behavior modes.
\section{Preliminaries}
\subsection{Control Barrier Function (CBF)}
Assume that the robot $R$ is following a nonlinear control affine dynamics as
\begin{align}
\label{eq: sys_dyn1}
\dot{\mathbf{s}}_r =\mathbf{f}_r(\mathbf{s}_r)+\mathbf{g}_r(\mathbf{s}_r)\mathbf{a}_r,
\end{align}
where $\mathbf{s}_r\in \mathcal{S}_R \subseteq \mathbb{R}^{n}$ denotes the state of $R$, $\mathbf{a}_r\in \mathcal{A}_R \subseteq \mathbb{R}^{m}$ is the control input, and $\mathbf{f}_r:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ and $\mathbf{g}_r:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n \times m}$ are locally Lipschitz functions.
A function $\alpha: \mathbb{R} \rightarrow \mathbb{R}$ is an extended class $\mathcal{K}$ function iff it is strictly increasing and $\alpha(0) = 0$ \cite{ames2019control}.
A set $\mathcal{C} \subseteq \mathbb{R}^{n}$ is forward invariant w.r.t the system (\ref{eq: sys_dyn1}) iff for every initial state $\mathbf{s}^0_r \in \mathcal{C}$, its solution satisfies $\mathbf{s}^t_r\in \mathcal{C}$ for all $t \geq 0$ \cite{blanchini1999set}.
\begin{definition}[Control Barrier Function~\cite{ames2019control}]
A continuously differentiable function $B(\mathbf{s}_r)$ is a Control Barrier Function (CBF) for the system (\ref{eq: sys_dyn1}), if there exists a class $\mathcal{K}$ function $\alpha$ s.t. $\forall \mathbf{s}_r\in \mathcal{C}$ :
\begin{equation}\label{eq: CBF}
\sup_{a_r\in \mathcal{A}_R}\big(L_{f_r} B(\mathbf{s}_r) +L_{g_r} B(\mathbf{s}_r) \mathbf{a}_r +\alpha(B(\mathbf{s}_r))\big)\geq 0
\end{equation}
where $L_{f_r} B(\mathbf{s}_r) = \frac{\partial B}{\partial \mathbf{s}_r}^\top f_r(\mathbf{s}_r), L_{g_r} B(\mathbf{s}_r)= \frac{\partial B}{\partial \mathbf{s}_r}^\top g_r(\mathbf{s}_r)$ are the first order Lie derivatives of the system.
\end{definition}
Any Lipschitz continuous controller $\mathbf{a}_r \in K_{cbf}(\mathbf{s}_r) = \{\mathbf{a}_r\in \mathcal{A}_R\;|\;L_{f_r} B(\mathbf{s}_r) +L_{g_r} B(\mathbf{s}_r) \mathbf{a}_r +\alpha(B(\mathbf{s}_r))\geq 0\}$ renders the set $\mathcal{C}$ forward invariant for the system (\ref{eq: sys_dyn1}).
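For a single CBF, the quadratic program that minimally modifies a nominal control so that it lies in $K_{cbf}(\mathbf{s}_r)$ admits a closed-form solution. The following Python sketch illustrates this safety filter under a linear class-$\mathcal{K}$ function; it is a hedged illustration, not code from \cite{majd2021safe}.
\begin{verbatim}
import numpy as np

def cbf_filter(a_ref, LfB, LgB, B, alpha=lambda b: 1.0 * b):
    """Return the control closest to a_ref satisfying
       LfB + LgB @ a + alpha(B) >= 0 (single CBF constraint)."""
    slack = LfB + LgB @ a_ref + alpha(B)
    if slack >= 0:
        return a_ref                 # nominal control is already safe
    # minimal-norm correction along the constraint normal LgB
    return a_ref - slack * LgB / np.dot(LgB, LgB)
\end{verbatim}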
\subsection{Control Barrier Function Enabled Time-Based Rapidly-exploring Random Tree (CBF-TB-RRT)}
CBF-TB-RRT, proposed in~\cite{majd2021safe}, provides a real-time solution to the start-to-goal motion planning problem with probabilistic safety guarantees. At each time step, given a probabilistic trajectory of dynamic agents, this method extracts ellipsoidal reachable sets for the agents over a given time horizon with a bounded probability. It extends time-based RRT (each node of a TB-RRT denotes a specific state at a specific time), proposed in~\cite{sintov2014time}, in conjunction with CBFs to generate path segments for $R$ (Eq. (\ref{eq: sys_dyn1})) that avoid the agents' reachable sets while moving toward the goal. If the probability distribution over the dynamic agents' future trajectories for a given finite time horizon is accurate, the control generated by CBF-TB-RRT guarantees that the probability of collision at each time step is bounded.
\section{Deliberative Communication Planning} \label{problem}
We formulate the deliberative communication planning problem $\mathcal{P_{DC}}$ as the problem of jointly computing communication signals with corresponding feasible motion plans for $R$ in a social navigation scenario. As a starting point, we focus on settings with a single robot and a single human $H$. In such problems, $R$'s actions $\mathcal{A}$ include communication as well as movement actions. In order to model realistic scenarios, we use potentially noisy models of $H$'s movement ($T_H$) and of $H$'s sensing ($O$) of $R$'s communications. We use these models to evaluate possible courses of action while computing efficient, collision-free communication and movement plans for $R$.
Intuitively, $T_H$ maps the current state of $H$ and $H$'s belief about the possible positions of $R$ at the next planning cycle to possible motion plans for $H$. We model $H$'s sensor model $O$ as a variation of the standard noisy sensor paradigm used in planning under partial observability. $O$ relates $H$'s current state, $R$'s communication action and $R$'s intended next state to the observation signal that $H$ receives. In this formulation, $H$ \emph{need not know $R$'s current/intended states nor the exact communication that it executed} -- $H$ only receives an observation signal. Such sensor models are very general: they can capture a variety of scenarios ranging from perfect communication to imperfect communication settings where $H$ may not have a perfect understanding or observation of $R$'s communications and may conflate $R$'s communication actions with each other.
\begin{definition} \label{def:problem}
A deliberative communication planning problem is a tuple $\mathcal{P_{DC}} =\langle \mathcal{S}, s^0, \mathcal{A}, T, \mathcal{G},O, J\rangle$, where:
\begin {itemize}
\item $\mathcal{S} = \mathcal{S}_R\times \mathcal{S}_H$ is the set of states consisting of $R$'s and $H$'s states, respectively.
\item $s^0 = s^0_r\times s^0_h$ are the initial states of $R$ and $H$, respectively, where $s^0_r\in \mathcal{S}_R$ and $s^0_h\in \mathcal{S}_H$.
\item $\mathcal{A}$ is the set of $R$'s actions defined as $\mathcal{A} = \mathcal{A}_c \cup \mathcal{A}_m$, where $\mathcal{A}_c$ is a set of communication signals that includes the null communication, and $\mathcal{A}_m$ is the implicitly defined, uncountable set of $R$'s feasible motion plans. Each feasible motion plan $\pi_R\in \mathcal{A}_m$ is a continuous function $\pi_R:[0,1]\rightarrow \mathcal{S}_R$ where $\pi_R(0) = s^0_r$ and $\pi_R(1) \in \mathcal{S}_R$.
\item $\mathcal{G}=\langle \mathcal{G}_R, \mathcal{G}_H\rangle$ is the goal pair where $\mathcal{G}_R\subseteq \mathcal{S}_R$ is $R$'s goal set and $\mathcal{G}_H\subseteq \mathcal{S}_H$ is $H$'s goal set.
\item $T = \langle T_R, T_H \rangle$ constitutes transition/movement models of both agents where $T_R$ is $R$'s transition function defined as $T_R: \mathcal{S}_R \times \mathcal{A}_m \rightarrow \mathcal{S}_R$, and $T_H: \mathcal{S}_H \times \mathcal{G}_H \times \mathcal{B}^{R'}_H \rightarrow 2^{\Pi_H}$ denotes $H$'s movement model where $\mathcal{B}^{R'}_H$ is the set of possible beliefs over the state of $R$ at the next planning cycle and $\Pi_H$ is the set of feasible $H$ movement plans within $\mathcal{S}_H$. $T_H$
may be available as a simulator that yields a sample of the possible $H$ plans.
\item $O$ is $H$'s sensor model defined as $O: \mathcal{S}_H\times \mathcal{A}_c \times \mathcal{S}_R \rightarrow \Omega $, where $\Omega$ denotes $H$'s observation. Situations where $H$ cannot perfectly understand or observe $R$'s communication can be modeled by mapping multiple tuples $\langle \mathbf{s}_h, a_c, \mathbf{s}_r\rangle$ to the same $\omega\in \Omega$, where $a_c \in \mathcal{A}_c$.
\item $J: \mathcal{S}_H\times \mathcal{S}_R\times \mathcal{A} \rightarrow \mathbb{R}$ is a utility function denoting the value of a joint $H$-$R$ state and a communication-motion action. In practice, we express $J$ as a cost function.
\end {itemize}
\end{definition}
A solution to $\mathcal{P_{DC}}$ is a sequence of communication actions and motion plans that satisfies $\mathcal{G}_R$, and is defined as follows.
\begin{definition}
A solution to the deliberative communication planning problem $\mathcal{P_{DC}} =\langle \mathcal{S}, s^0, \mathcal{A}, T, \mathcal{G}, O, J\rangle$ is a finite sequence of communication and movement actions: $\Psi = \langle (a_c^1, \pi_R^1), (a_c^2, \pi_R^2), \cdots,(a_c^q,\pi_R^q) \rangle$, where $a_c^i \in \mathcal{A}_c$, $\pi_R^i \in \mathcal{A}_m$, $\pi_R^{1}(0)=s_r^0$, $\pi_R^i(1)=\pi_R^{i+1}(0)$, and $\pi_R^q(1)\in \mathcal{G}_R$ for $i=1,\cdots,q$.
\end{definition}
\section{Methodology} \label{sec:methodology}
\subsection {Overview} \label{sec:overview}
In the proposed paradigm of joint communication and motion planning, a motion planner (MP) returns a set of feasible and collision-free motion plans $\Pi_R \subset \mathcal{A}_m$ considering the goal set $\mathcal{G}$. Accordingly, a communication planner (CP) uses a search tree to select a combination of a communication action and a motion plan at each planning cycle that minimizes $J$.
Each node of this search tree is defined by $\langle \mathbf{s}, a_c, \pi_R\rangle$ where $\mathbf{s}\in \mathcal{S}$, $a_c \in \mathcal{A}_c$ and $\pi_R \in \Pi_R$. $a_c$ denotes the communication action being considered at this node while $\pi_R$ denotes one of the plans returned by MP.
Fig.\,\ref{fig:blockdiagram} illustrates the mechanism by which MP and CP interact. MP utilizes CBF-TB-RRT with $H$'s movement model and $R$'s dynamic model to produce a finite set of feasible motion plans $\Pi_R \subset \mathcal{A}_m$ (Sec.\,\ref{sec:motionplanner}). Starting with a node representing the current state, CP creates a successor node for each combination of a feasible plan in $\Pi_R$ and a communication action from $\mathcal{A}_c$. For each such combination, it uses a belief update process to compute and store an estimate of $H$'s next belief if $R$ were to use the corresponding communication action. At each planning cycle, CP selects a node of the tree that minimizes $J$ (CP is described in Sec.\,\ref{sec:communication planner}). An important property of this approach is that our solution algorithms are independent of the choice of $R$, the environment, and $H$'s movement and sensor models.
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{block_diagram_revised}
\caption{An overview of our approach.}
\label{fig:blockdiagram}
\end{figure}
\subsection {CBF-based TB-RRT as MP} \label{sec:motionplanner}
We obtain a set $\Pi_R$ of diverse plans in Alg. \ref{alg: select_plan_min} by employing CBF-TB-RRT~\cite{majd2021safe} as MP.
We modified the original CBF-TB-RRT method to better serve our hierarchical framework as follows. First, the set of possible future trajectories for $H$ can either be given by a stochastic $T_H$ or by a deterministic $T_H$ with an $\varepsilon$ tube around the predicted trajectory. We denote this region by $\mathcal{S}_R^{unsafe}$. MP maintains a continually updated estimate of $R$'s safe states, $\mathcal{S}^{safe}_R=\mathcal{S}_R\setminus \mathcal{S}^{unsafe}_R$, where $\mathcal{S}_R^{safe}$ would be collision-free with respect to the predicted trajectories of $H$. Second, the original CBF-TB-RRT expands a tree for a finite time horizon and applies the control only for the first time-step of each planning cycle. In contrast, here, we let $R$ execute the full returned partial plan. Finally, instead of selecting one plan to execute, we select a set of $p$ plans $\Pi_R\subseteq \mathcal{A}_m$ consisting of $\Bar{\pi}_{R,j}: [t_0,t_j] \rightarrow \mathcal{S}^{safe}_R$ for $j=\{1,2,\cdots,p\}$. Here, each plan $\Bar{\pi}_{R,j}$ represents a path segment from the initial vertex $\nu_0$ at location $\Bar{\pi}^{t_0}_R$ at time $t_0$ to another vertex $\nu_j$ at location $\Bar{\pi}^{t_j}_R$ at time $t_j$. Letting $c_j$ be the cost of vertex $\nu_j$ in the set of all expanded RRT vertices $\mathcal{V}$, we minimize the following cost to select $p$ diverse plans $\Bar{\pi}_{R,j}$ with the minimum costs $c_j$ for $j\in \{1,2,\cdots,p\}$,
\begin{align}
& \underset{\mathbf{r}}{\text{min}}
& & J_d = \sum^{\lvert\mathcal{V}\rvert}_{i=0}\frac{w_c r_i c_i}{w_d\sum^{\lvert\mathcal{V}\rvert}_{j=1, i\neq j}r_j d_{ij}}, \nonumber\\
& \label{eq: select_plan_min} \text{s.t.} & & \sum_{i=0}^{\lvert\mathcal{V}\rvert} r_i = p, \\
& & & r_i\in \{0,1\}, & & & \text{for }i=0,\cdots,\lvert \mathcal{V}\rvert, \nonumber
\end{align}
where $w_c$ and $w_d$ are the numerator and denominator weights, respectively, $d_{ij}$ is the Euclidean distance between vertices $i$ and $j$, and $\mathbf{r}$ is a vector of binary values $r_i$, $i=0,\cdots,\lvert \mathcal{V}\rvert$, that determines the selected plans (vertices). Given the expanded RRT at each planning cycle, we minimize (\ref{eq: select_plan_min}) using Alg.\,\ref{alg: select_plan_min} to find $p$ diverse $\Bar{\pi}_R$ plans.
\begin{algorithm}[h]
\DontPrintSemicolon
\SetAlgoLined
\KwInput{$\mathcal{V}$ and $p$ }
\KwOutput{$\Pi_R$}
$\mathcal{P}\leftarrow$ Randomly select $p$ vertices from $\mathcal{V}$\\
$\textsc{Opt\_Cost}\leftarrow$ Calculate the cost $J_d$ for the vertices in $\mathcal{P}$ \\
\While{CONVERGE}{
\For{$\nu\in \mathcal{V}\setminus\mathcal{P}$}{\label{alg: select_plan_min-line4}
Calculate the cost $J_d$ for all $p$-combinations of $\mathcal{P}\cup \{\nu\}$ and update $\textsc{Opt\_Cost}$ and $\mathcal{P}$ with the minimum cost combination
}
}
$\Pi_R\leftarrow$ Extract the path segments $\Bar{\pi}_R$ from $\nu_0$ to each $p$ vertex in $\mathcal{P}$
\caption{RRT Plan $\Pi_R$ Generation (MP)}
\label{alg: select_plan_min}
\end{algorithm}
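A Python sketch of Alg.\,\ref{alg: select_plan_min}, assuming the vertex costs $c_i$ and pairwise distances $d_{ij}$ have been precomputed (variable names are illustrative):
\begin{verbatim}
import numpy as np
from itertools import combinations

def Jd(sel, costs, D, w_c=1.0, w_d=1.0):
    # cost (eq: select_plan_min) of a selected vertex set `sel`
    sel = list(sel)
    total = 0.0
    for i in sel:
        denom = w_d * sum(D[i, j] for j in sel if j != i)
        total += w_c * costs[i] / max(denom, 1e-12)
    return total

def select_diverse_plans(costs, D, p, max_iter=50):
    n = len(costs)
    P = set(np.random.choice(n, p, replace=False).tolist())
    best = Jd(P, costs, D)
    for _ in range(max_iter):
        improved = False
        for v in set(range(n)) - P:
            # all p-combinations of P u {v}, as in line 5 of Alg. 1
            for cand in combinations(P | {v}, p):
                c = Jd(cand, costs, D)
                if c < best:
                    best, P, improved = c, set(cand), True
        if not improved:
            break                    # converged
    return sorted(P)                 # indices of the selected vertices
\end{verbatim}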
\begin{proposition}
Given that RRT includes a finite set of vertices and $J_d\geq 0$, Alg.\,\ref{alg: select_plan_min} terminates in a finite time.
\end{proposition}
\begin{Assumption}\label{assum: human-pred}
The future human motion remains within the unsafe region $\mathcal{S}^{unsafe}_R$ predicted by $T_H$.
\end{Assumption}
\begin{Lemma} \label{lem:cbf safe}
Under Assn. \ref{assum: human-pred}, all path segments $\Bar{\pi}_{R,j}$, $j=1,\cdots,\lvert\mathcal{V}\rvert$, generated by CBF-TB-RRT are guaranteed to remain in $\mathcal{S}^{safe}_R$ if $\Bar{\pi}^{t_0}_R\in \mathcal{S}^{safe}_R$.
\end{Lemma}
\begin{proof}
This proof is immediate following \cite[Prop. 1]{majd2021safe}.
\end{proof}
\subsection {Communication Planner Module (CP)} \label{sec:communication planner}
As discussed in Sec.\,\ref{sec:overview}, CP builds a search tree to select an optimal combination of communication action and motion plan. Recall that each node in the search tree consists of a state $\mathbf{s}$, a communication action $a_c$ and a motion plan $\pi_R$. Here, $\pi_R$ denotes the discretization of the continuous-time path segment $\Bar{\pi}_R$ given by MP. We use a belief-space formulation to represent the set of locations where $H$ might expect $R$ to be at the next planning cycle $k+1$. Thus, the set of all possible beliefs of $H$ is the power set of $\mathcal{S}_R$. However, in practice $H$ needs to keep track of only a subset of possible locations, in a small neighbourhood around $H$.
\begin{definition}
A \emph{\textbf{$\delta$-local neighborhood}} of $H$ is a subset $\mathcal{L}\subseteq \mathcal{S}_R$ s.t. for every state $\mathbf{s} \in \mathcal{L}$, the Euclidean distance $d(\mathbf{s}_{xyz}, \mathcal{S}_H)$ of $R$'s base coordinates $\mathbf{s}_{xyz}$ in state $\mathbf{s}$ from $H$ is less than $\delta$.
\end{definition}
We maintain a bounded, discretized set of regions to approximate $H$'s belief about $R$'s presence in their $\delta$-local neighborhood. Let $\mathcal{L}_H$ be the set of these discretized zones $\{l_1,\ldots, l_{\ell}\}$. Collectively these regions can represent neighborhoods in domain-specific configurations (e.g., an $H$-centered forward-biased cone or a rectangular region around $H$ with discretized cells). Given a state $(\mathbf{s}_R, \mathbf{s}_H)\in \mathcal{S}$ we use $\mathbf{s}_R\in l_i(\mathbf{s}_H)$ to express that when $R$'s state is $\mathbf{s}_R$ and $H$'s state is $\mathbf{s}_H$, $R$ will be in the region $l_i$ in $H$'s local neighborhood.
In this notation, $H$'s belief is a Boolean vector of dimension $\lvert \mathcal{L}_H\rvert$, where $b_i=1$ in a belief $\mathbf{b}$ represents that $\mathbf{s}_R\in l_i(\mathbf{s}_H)$ is possible at the next time step.
Given a starting belief $\mathbf{b}^k$ and an observation symbol $\omega$, we can invert the sensor model and the transition function to derive a logical-filtering belief update equation for computing $\mathbf{b}^{k+1}$ as follows. Let $\varphi_1 (\mathbf{s}_R^{k+1}, i)$ denote that $R$ at $\mathbf{s}_R^{k+1}$ would be in $H$'s $i^{th}$ neighborhood zone, i.e., $\mathbf{s}_R^{k+1}\in l_i(\mathbf{s}_H^{k})$; $\varphi_2(\mathbf{s}_R^k,j)$ denote that $b_j^k$ was 1 with $R$ at $\mathbf{s}_R^k$, i.e., $b^k_j=1 \land \mathbf{s}_R^k\in l_j(\mathbf{s}_H^{k-1})$; $\varphi_3(\mathbf{s}_R^k, \mathbf{s}_R^{k+1})$ denote that $R$ can move from $\mathbf{s}_R^{k}$ to $\mathbf{s}_R^{k+1}$, i.e., $\exists a_m \in \mathcal{A}_m: T_R(\mathbf{s}_R^k,a_m) =\mathbf{s}_R^{k+1}$; and $\varphi_4(\mathbf{s}_H^k,\omega, \mathbf{s}_R^{k+1})$ denote that $R$ may have executed a communication action $a_c$ that resulted in observation $\omega$, i.e., $\exists a_c\in \mathcal{A}_c: O(\mathbf{s}_H^k, a_c, \mathbf{s}_R^{k+1})=\omega$. Inverting the sensor model and the transition function gives us $b_i^{k+1}=1$ \emph{iff} $\exists \mathbf{s}_R^{k}, \mathbf{s}_R^{k+1}\in \mathcal{S}_R,\, j \in [1, \ell]:$ $\varphi_1(\mathbf{s}_R^{k+1}, i) \land \varphi_2(\mathbf{s}_R^k, j) \land \varphi_3(\mathbf{s}_R^k, \mathbf{s}_R^{k+1})\land\varphi_4(\mathbf{s}_H^k,\omega, \mathbf{s}_R^{k+1})$. CP uses this expression to compute $R$'s estimate of $H$'s belief $\mathbf{b}^{k+1}$ given the belief $\mathbf{b}^k$ at the parent node and the observation $\omega$ that $H$ would receive as a result of the communication action being considered at that node. We use $\mathbf{b}(n)$ to denote this belief for node $n$.
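The belief update can be sketched directly from $\varphi_1$--$\varphi_4$; in the following Python illustration, \texttt{zone}, \texttt{reachable} and \texttt{obs} are assumed implementations of $l_i(\cdot)$, $T_R$ and $O$, respectively (all names are ours):
\begin{verbatim}
def belief_update(b_k, omega, s_h_prev, s_h, states_R, actions_c,
                  zone, reachable, obs, num_zones):
    b_next = [0] * num_zones
    for s_r0 in states_R:
        j = zone(s_r0, s_h_prev)
        if j is None or not b_k[j]:
            continue                 # phi_2 fails: s_r0 inconsistent with b^k
        for s_r1 in reachable(s_r0): # phi_3: T_R allows s_r0 -> s_r1
            if not any(obs(s_h, a_c, s_r1) == omega for a_c in actions_c):
                continue             # phi_4 fails: no a_c explains omega
            i = zone(s_r1, s_h)      # phi_1: next-step zone of R
            if i is not None:
                b_next[i] = 1
    return b_next
\end{verbatim}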
CP uses a cost function $J$ to evaluate a node $n=\langle \mathbf{s}, a_c, \pi_R\rangle$ in the search tree. Intuitively, $J$ needs to consider $H$ and $R$'s future paths $\Gamma_H$ and $\Gamma_R$, respectively. $\tilde{\Gamma}_R(n)$ is an estimate for $\Gamma_R$ based on $\pi_R$.
However, we do not have an accurate future path for $H$ and we use $\mathbf{b}(n)$ and the human movement model $T_H$ to obtain an estimate $\tilde{\Gamma}_H(n)$. We omit the node argument unless required for clarity.
For computational efficiency, we discretize $\Gamma_R$ and $\Gamma_H$ as sequences of waypoints: $\Gamma_R=\{ \gamma_R^i \}_{i=1}^{i_{max}}$ and $\Gamma_H = \{ \gamma_H^i \}_{i=1}^{i_{max}}$. W.l.o.g., both sequences have the same length, as the agent with the shorter path can be assumed to stay at its final location for the remainder of the other agent's path execution. Let $c(\tilde{\Gamma})$ be the sum of pairwise distances between successive waypoints in a path $\tilde{\Gamma}$, and let $\delta(\tilde{\Gamma}_1, \tilde{\Gamma}_2) = \max(d_{min}(\tilde{\Gamma}_1, \tilde{\Gamma}_2) - \sigma^{safe}, 0)$, where $\sigma^{safe}$ denotes the safety threshold and $d_{min}(\tilde{\Gamma}_1, \tilde{\Gamma}_2) = \min_{i=1,\ldots, i_{max}} \{d(\gamma_1^i, \gamma_2^i) \}$ is the minimum Euclidean distance between $\tilde{\Gamma}_1$ and $\tilde{\Gamma}_2$. Finally, let $c_C(a_c)$ be the cost of executing the communication action $a_c$, and $\eta_R$, $\eta_H$, $\eta_P$, and $\eta_C$ be the weights of the cost function. Using this notation, we define $J(n)$ as follows:
\begin{align}
J(n) = \eta_R\, c(\tilde{\Gamma}_R(n)) +\eta_H\, c(\tilde{\Gamma}_H(n))+ \nonumber \\
\frac{\eta_P}{\delta(\tilde{\Gamma}_R(n),\tilde{\Gamma}_H(n))} + \eta_C\, c_C(a_c) \label{eq:highcost}
\end{align}
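A NumPy sketch of this cost, with $\tilde{\Gamma}_R$ and $\tilde{\Gamma}_H$ given as equal-length waypoint arrays (a hedged illustration; the names are ours):
\begin{verbatim}
import numpy as np

def path_length(gamma):             # c(Gamma): sum of successive distances
    return float(np.sum(np.linalg.norm(np.diff(gamma, axis=0), axis=1)))

def clearance(gamma_R, gamma_H, sigma_safe):
    d_min = float(np.min(np.linalg.norm(gamma_R - gamma_H, axis=1)))
    return max(d_min - sigma_safe, 0.0)   # delta(Gamma_R, Gamma_H)

def J(gamma_R, gamma_H, c_comm, eta_R, eta_H, eta_P, eta_C, sigma_safe):
    delta = clearance(gamma_R, gamma_H, sigma_safe)
    penalty = eta_P / delta if delta > 0 else float("inf")
    return (eta_R * path_length(gamma_R) + eta_H * path_length(gamma_H)
            + penalty + eta_C * c_comm)
\end{verbatim}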
In Alg.\,\ref{alg:highlevel}, at each planning iteration (lines 3-20), CP gets a library of motion plans $\Pi_R$ from MP. In lines 7-11, a branch of the tree is created for each $a_c$ and $\pi_R$. As required by (\ref{eq:highcost}), the paths-to-goal of $H$ and $R$ are needed to compute a cost value for each branch. $\tilde{\Gamma}_H$ is directly given by $T_H$, as mentioned in line \ref{alg:highlevel:th}. On the other hand, since $\pi_R$ is likely a partial path, $T_R$ is utilized in line \ref{alg:highlevel:tr} to compute a complete path-to-goal for $R$ given $\pi_R$.
\begin{algorithm}
\DontPrintSemicolon
\SetAlgoLined
\KwInput{$\mathcal{P_{DC}}$}
\KwOutput{$\Psi$}
initialize: $\mathbf{b}_{0}$ and $\mathcal{S}^0$\;
\While{$\textsc{Goal\_Test}(\mathcal{G}_R, \mathcal{S}_R)== \textsc{False}$}{ \label{alg:highlevel-while-start}
$\Pi_R \leftarrow$ get the plans from the MP\;
$\textsc{Min\_Cost} \leftarrow \infty$\;
\For{$\pi_R \in \Pi_R$} {
\For{$a_c \in \mathcal{A}_c$} {
$\omega_{k+1} \leftarrow O(a_c, \mathcal{S}^k$)\;
$\mathbf{b}_{k+1} \leftarrow \textsc{Update}(\mathbf{b}_{k},\omega_{k+1}$)\;
$\tilde{\Gamma}_H \leftarrow T_H(\mathcal{S}_H^k,\mathcal{G}_H,\mathbf{b}_{k+1})$\; \label{alg:highlevel:th}
$\tilde{\Gamma}_R \leftarrow T_R(\mathcal{S}_R,\pi_R)$\; \label{alg:highlevel:tr}
$c_{branch} \leftarrow J(\tilde{\Gamma}_R,\tilde{\Gamma}_H,a_c)$\;
\If{$c_{branch} < \textsc{Min\_Cost}$}{
$\textsc{Min\_Cost} \leftarrow c_{branch} $\\
$\textsc{Best\_Action} \leftarrow \langle \pi_R,a_c\rangle$ }
}
} \label{alg:highlevel-while-end}
$\textsc{Execute}(\textsc{Best\_Action})$ \;
$\mathcal{S}^k \leftarrow$ $\mathcal{S}^{k+1}$ \;
$\Psi.\textsc{append}(\textsc{Best\_Action})$
}
\caption{Communication Planner}
\label{alg:highlevel}
\end{algorithm}
Fig.\,\ref{fig:example} exemplifies two branches of the CP search tree evaluated in lines 7-11 of Alg.\,\ref{alg:highlevel}. In each example, $H$ is shown at the center of its \emph{$\delta$-local neighborhood}, visualized as a set of nine squares around her, where the colored squares stand for $\mathbf{b}_k$. Besides, $R$ is pictured at the bottom of each example with a partially expanded RRT, where the dark gray branches of the RRT represent the plans $\Pi_R$ selected by MP. In Fig.\,\ref{fig:example}(a), $R$ communicates $a_c =$ ``Right'' and $\pi_R$ is the right branch of the RRT, emphasized by a star, which makes $H$ believe that $R$ will be in one of the squares on her left. In Fig.\,\ref{fig:example}(b), $R$ goes forward and communicates ``Forward'' as well, which makes $H$ believe that $R$ will be in one of the middle squares. In scenario (b), $R$ takes a shorter path to the goal, but scenario (a) results in less conflicting paths for both $R$ and $H$. Thus, the preferred branch is determined by the weights of $J$ in (\ref{eq:highcost}).
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{exmp}
\caption{Two examples of the reasoning procedure of CP for a branch of the search tree.}
\label{fig:example}
\end{figure}
\begin{Assumption}\label{assum: human-pred-highlevel}
The predicted trajectories $\Gamma_H$ given by $T_H$ in the discretized domain are an over-approximation of the trajectories predicted by $T_H$ in the continuous domain.
\end{Assumption}
\begin{Assumption}\label{assum: robot-pred-highlevel}
The discretized projection of $\Bar{\pi}_R$ on $\Gamma_R$ ($\pi_R$ in discretized domain) is an over-approximation of $\Bar{\pi}_R$ in the continuous domain.
\end{Assumption}
\begin{theorem}
Let $P_{DC}=\langle \mathcal{S}, s^0, \mathcal{A}, T, \mathcal{G}, O, J\rangle $ be a deliberative communication problem and let $\Psi^*=\langle(a^i_c,\pi^i_R)\rangle^{q}_{i=1}$ be its
solution computed by Alg.\,\ref{alg:highlevel} using the cost function J in (\ref{eq:highcost}). Let $\Gamma_R$ be the discretized waypoints of $R$ in $\Psi^*$ defined as $\Gamma_R = \langle \pi_R^i\rangle_i $, and $\Gamma_H$ be a corresponding discretized waypoint sequence of a trajectory for $H$ predicted by $T_H$ and starting at $s^0$ with the goal $G_H$. If Assn. 1-3 hold, $\Gamma_R$ will either lie within $\Bar{\mathcal{S}}_R^{safe}$ or it will satisfy $d_{min}(\Gamma_R, \Gamma_H) >\sigma^{safe}$.
\end{theorem}
\begin{proof}
Since $R$ has a null communication action that does not alter $H$'s belief, Alg.\,\ref{alg:highlevel} always has a node reflecting the default behavior of CBF-TB-RRT with cost $<\infty$. In this case, Lemma\,\ref{lem:cbf safe} guarantees that $R$'s trajectory does not leave $\Bar{\mathcal{S}}_R^{safe}$.
If Assn.\,\ref{assum: human-pred-highlevel} and\,\ref{assum: robot-pred-highlevel} hold, and if Alg.\,\ref{alg:highlevel} selects a node other than the default CBF-TB-RRT behavior, the minimum distance will be at least $\sigma^{safe}$; otherwise, for all $\eta_P > 0$, $J$ would be $\infty$ and the default CBF-TB-RRT behavior would be selected.
\end{proof}
\section {Empirical Evaluation}
We conducted extensive experiments in various simulation environments to evaluate the proposed method. These experiments 1) draw a comparison between the proposed method and the baseline method CBF-TB-RRT, and 2) illustrate the performance of the proposed method in deadlock situations.
\subsection {Implementation}
\subsubsection{CBF-TB-RRT Design}
In our implementation, we consider the nonholonomic unicycle model for $R$'s dynamics,
\begin{align}\label{rot-model}
\dot{\mathbf{s}}_r=\mathbf{g}_r(\mathbf{s}_r)\mathbf{a}_r ={\small\begin{bmatrix}
\cos(\theta_r) & 0\\
\sin(\theta_r) & 0\\
0 & 1
\end{bmatrix}}\mathbf{a}_r,
\end{align}
where the state is $\mathbf{s}_r\!=\![x_r,y_r,\theta_r ]^T\!\in\!\mathcal{S}_R\!\subseteq \mathbb{R}^2\!\times\![-\pi,\pi)$ and the control input is $\mathbf{a}_r=[v_r,\omega_r]^T\in \mathcal{A}_R\subseteq \mathbb{R}^2$. The parameters $x_r$, $y_r$, and $\theta_r$ denote the longitudinal position, the lateral position, and the heading angle of $R$, respectively. The controls $v_r$ and $\omega_r$ represent the linear and angular velocities of $R$, respectively.
Moreover, the goal set $\mathcal{S}_{g}\subset\mathcal{S}_R$ of $R$ describes a set of position states in $\mathbb{R}^2$ as follows,
\begin{align}\label{G-set2}
\mathcal{S}_g = \big\{\mathbf{s}_r \in \mathcal{S}_R\;|\; \big\lVert [x_r,y_r]^T-\mathbf{s}_g\big\rVert^2_2 - r_g^2 \leq 0 \big\},
\end{align}
where $\lVert\cdot\rVert_2$ denotes the Euclidean norm, $\mathbf{s}_g=[x_g,y_g]^T$ is the center, and $r_g$ is the radius of the goal set.
While expanding the RRT tree, the following cost $c_i$ is assigned to each vertex $\nu_i\in\mathcal{V}$ for $i=0,1,\cdots,\lvert\mathcal{V}\rvert$,
\begin{align}
c_i = w^G_dc^G_d+w^H_dc^H_d+w_gc_g+w_tc_t,
\end{align}
where $c^G_d$ is the Euclidean distance between vertex $i$ and the goal point, $c^H_d$ is the Euclidean distance between vertex $i$ and $H$, $c_g$ is the heading cost, $c_t$ is the trap cost, and $w^G_d$, $w^H_d$, $w_g$, and $w_t$ are weight terms. The heading cost $c_g$ is the angular difference between the sampled vertex heading and the heading toward the goal. To calculate the trap cost $c_t$, the algorithm checks the waypoints of a discretized straight line from the sampled vertex to the goal point; $c_t$ is then the number of waypoints lying within occupied regions. Readers are referred to \cite{majd2021safe} for further details on CBF-TB-RRT tree expansion.
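As an illustration, the following is a minimal Python sketch of this vertex-cost computation; the helper names and the occupancy predicate \texttt{occupied} are our assumptions, not part of the CBF-TB-RRT implementation of \cite{majd2021safe}.
\begin{verbatim}
import numpy as np

def vertex_cost(vertex, goal, human, occupied, weights, n_samples=20):
    # vertex = [x, y, theta]; goal, human = [x, y]; occupied(p) -> bool
    w_dG, w_dH, w_g, w_t = weights
    pos = np.asarray(vertex[:2])
    c_dG = np.linalg.norm(pos - goal)        # distance to goal
    c_dH = np.linalg.norm(pos - human)       # distance to human
    # heading cost: angular difference between the vertex heading
    # and the bearing toward the goal, wrapped to [-pi, pi]
    bearing = np.arctan2(goal[1] - pos[1], goal[0] - pos[0])
    diff = vertex[2] - bearing
    c_g = abs(np.arctan2(np.sin(diff), np.cos(diff)))
    # trap cost: waypoints of the discretized straight line to the
    # goal that lie within occupied regions
    line = np.linspace(pos, np.asarray(goal), n_samples)
    c_t = sum(occupied(p) for p in line)
    return w_dG * c_dG + w_dH * c_dH + w_g * c_g + w_t * c_t
\end{verbatim}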
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{special_case_revised}
\caption{An example of a potential deadlock in confined environments.}
\label{fig:deadlock}
\end{figure}
\subsubsection{Human Movement Model}
We assumed $H$'s movement is described by a deterministic kinematic motion transition function ($T_H$), and we used the Dynamic Window Approach (DWA) in MP, proposed in \cite{fox1997dynamic}, to predict $H$'s shortest trajectory to the goal over a finite time horizon. Since DWA is a deterministic prediction method, we assumed an $\varepsilon$ bound around the human's predicted trajectory, following Assn. \ref{assum: human-pred}, to derive the CBF safety constraints. Given the human's predicted trajectory $\mathbf{s}_h$, we define the safe set $\mathcal{S}_R^{safe}\subseteq \mathcal{S}_R$ as $\mathcal{S}_R^{safe} = \big\{ \mathbf{s}_r \in \mathcal{S}_R, \mathbf{s}_h \in \mathcal{S}_H~\lvert ~B(\mathbf{s}_r,\mathbf{s}_h)\geq 0\big\}$, where $B(\mathbf{s}_r,\mathbf{s}_h)$ is a continuously differentiable safety measure defined as
\begin{align}\label{eq: safe-measure}
B(\mathbf{s}_r,\mathbf{s}_h) = \lVert[x_r,y_r]^T - \mathbf{s}_h\rVert_2^2 - (\varepsilon + r_h + r_r)^2,
\end{align}
where $r_h$ and $r_r$ are the radii of the human and robot, respectively. The safety measure $B(\mathbf{s}_r,\mathbf{s}_h)$ is employed as a CBF to impose the safety constraint (\ref{eq: CBF}) on the control input $\mathbf{a}_r$ in a Quadratic Program (QP) to generate safe plans $\pi_R$ \cite{majd2021safe}.
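The safety measure itself is straightforward to transcribe. The following minimal Python sketch implements (\ref{eq: safe-measure}) only; the QP with the CBF constraint (\ref{eq: CBF}) is left to \cite{majd2021safe}.
\begin{verbatim}
import numpy as np

def safety_measure(s_r, s_h, eps, r_h, r_r):
    # s_r = [x_r, y_r, theta_r]; s_h = predicted human position [x, y]
    # B >= 0 characterizes the safe set S_R^safe
    d2 = np.sum((np.asarray(s_r[:2]) - np.asarray(s_h)) ** 2)
    return d2 - (eps + r_h + r_r) ** 2
\end{verbatim}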
As illustrated in Sec. \ref{sec:overview}, CP also utilizes $T_H$ to predict a trajectory-to-goal for $H$ for each branch of the search tree. Moreover, in contrast to the requirements of the motion-planning module, the prediction of $H$'s movement must be provided for the whole horizon in the communication-planning module. Therefore, for the sake of computational efficiency, CP utilizes a different $H$ movement model than DWA: it considers a grid-based abstraction of the environment and utilizes the A* search algorithm to predict a path-to-goal for $H$. In general, this abstraction could be derived using methods for automatically predicting reliable state and action abstractions such as \cite{shah2022using}.
\begin{Assumption}
Predictions drawn from the A* and DWA approaches complied with Assn. \ref{assum: human-pred-highlevel} in all our experiments.
\end{Assumption}
\subsubsection{Human Motion Execution Model}
We utilized the Social Forces model \cite{helbing_social_1995} to simulate the human movement, as it is very fast and scalable, yet describes observed pedestrian behaviors realistically. We modeled both $H$ and $R$ as pedestrians. To mimic $H$'s reactivity to $R$'s communication action $a_c$, the model creates multiple virtual agents moving from $R$'s current position to all $x$-$y$ projections of discretized zones $l_i\in l_H$ in CP's belief model for which $b_i = 1$. If $\mathbf{b}_k = \varnothing $, $R$'s goal is computed as a linear projection from its current position based on its current velocity, i.e., $H$ makes no assumptions about $R$'s future trajectory. Thus, in our experiments, the models used by $H$ are different from the model of $H$ used by $R$, as is likely in a real-world setting.
\subsection {Experimental Setup}
\emph{Test environments:} Fig. \ref{fig:test_envs} shows the environments used in our experiments. The basic floor map exemplifies spacious environments, while the hallway and intersection floor maps model more restricted and confined environments.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{maps}
\caption{Schematic illustration of diversified test environments that capture various conflicting situations.}
\label{fig:test_envs}
\end{figure}
\begin{table*}[h!]
\centering
\begin{threeparttable}
\caption{\label{tab:results}Comparison with CBF-TB-RRT.}
\begin{tabular}{|c|cccc|cccc|}
\hline
\multirow{2}{*}{\diagbox[width=7em]{Maps}{Measures}}
& \multicolumn{4}{c|}{Our approach}
& \multicolumn{4}{c|}{CBF-TB-RRT}\\
& {$R$ cost-to-goal} & {$H$ cost-to-goal} & {$PI$} & {$PC$} & {$R$ cost-to-goal} & {$H$ cost-to-goal} & {$PI$} & {$PC$} \\
\hline \hline
Basic
& 5.65\textendash5.68 & 7.33\textendash7.52 & 2\textendash2 & 0.50\textendash0.53
& 5.51\textendash6.27 & 6.90\textendash6.99 & 46\textendash113 & 0.21\textendash0.57 \\
Intersection
& 3.63\textendash3.88 & 6.10\textendash6.29 & 2\textendash2 & 0.22\textendash0.24
& 4.20\textendash4.24 & 5.76\textendash5.90 & 51\textendash98 & 0.34\textendash $\infty$ \\
Hallway
& 10.12\textendash10.58 & 6.85\textendash7.39 & 4\textendash4 & 0.72\textendash0.89
& 10.27\textendash10.30 & 6.65\textendash6.77 & 121\textendash123 & $\infty$ \textendash $\infty$ \\
\hline
\end{tabular}
\begin{tablenotes}
\small
\item The results show the range of the measurements in 10 trials per map; PI: planning iterations; PC: proximity cost.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.33]{weight_charts_revised}
\caption{Flexible prioritization of $H$ and $R$ in different test environments, where $F=1$ prioritizes the robot.}
\label{fig:weights}
\end{figure*}
\noindent \emph{Measurements:}
Aside from cost-to-goal of $R$ and $H$, there are four more quantitative measures to evaluate the performance and effectiveness of the proposed method:
\begin {itemize}
\item $R$'s normalized speed (RNS): $RNS = \nicefrac{c_R^*}{time_R^{actual}}$ measures $R$'s normalized mean speed from $\mathbf{s}^0_r$ to $\mathcal{G}_R$, where $c_R^*$ and $time_R^{actual}$ denote the optimal cost-to-goal of $R$ and $R$'s actual travel time respectively.
\item $H$'s normalized speed (HNS): $HNS = \nicefrac{c_H^*}{time_H^{actual}}$ measures $H$'s normalized average speed from $\mathbf{s}^0_h$ to $\mathcal{G}_H$, where $c_H^*$ and $time_H^{actual}$ denote the optimal cost-to-goal of $H$ and $H$'s actual travel time respectively.
\item Planning iterations (PI): PI denotes the number of iterations of lines\,\ref{alg:highlevel-while-start} to\,\ref{alg:highlevel-while-end} in Alg.\,\ref{alg:highlevel}.
\item Proximity cost (PC): PC measures the closeness of $R$ and $H$ during an experiment. Let $\Gamma_R = \{\gamma_R^i\}_{i=1}^{i_{max}}$ be $R$'s discretized trajectory given by a solution $\Psi$ and $\Gamma_H = \{\gamma_H^i\}_{i=1}^{i_{max}}$ be the corresponding discretized waypoint sequence of an actual trajectory for $H$. We defined PC using (\ref{eq: safe-measure}) as follows (a Python sketch of this computation is given after the list).
\begin{align}
Z = & \{ \zeta_i = B(\gamma_R^i, \gamma_H^i) \;|\; \zeta_i < thresh \}_{i=1}^{i_{max}} \\
PC = &
\begin{cases}
\infty & \text{if $\exists \zeta_i \in Z, \: \zeta_i < 0$}\\
\nicefrac{1}{\sum_{i=1}^{i_{max}}\zeta_i} & \text{otherwise}\\
\end{cases},
\end{align}
\end {itemize}
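A minimal Python sketch of the PC computation above (our transcription; the handling of an empty set $Z$ is our convention, as the definition leaves this case implicit):
\begin{verbatim}
import math

def proximity_cost(gamma_R, gamma_H, B, thresh):
    # gamma_R, gamma_H: aligned discretized waypoints; B: safety measure
    Z = [z for z in (B(r, h) for r, h in zip(gamma_R, gamma_H))
         if z < thresh]
    if any(z < 0 for z in Z):
        return math.inf          # safety violation
    return 1.0 / sum(Z) if Z else 0.0
\end{verbatim}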
\noindent \emph{Hypotheses:} Throughout the experiments, we evaluate the following hypotheses: 1) In confined environments, the chances of a deadlock are higher; communication is therefore more effective at avoiding such deadlocks. 2) The proposed deliberative communication approach not only results in less conflicting social navigation, but also prevents deadlock situations where non-communicative approaches fail to find a solution. 3) By adjusting the weight vector of the cost function $J$, $H$ or $R$ can be prioritized; accordingly, the non-prioritized agent is expected to have a decreased normalized average speed due to an increased cost-to-goal.
\subsection{Results}
\subsubsection{Comparison with CBF-TB-RRT} \label{sec:compare}
In this section, we aim to demonstrate that the proposed method performs as well as CBF-TB-RRT in terms of traveled distances, while it reduces the conflict between $H$ and $R$. In Table \ref{tab:results}, the results are presented as the range over 10 experiments for each test environment of Fig. \ref{fig:test_envs}, where $\eta_R=1.5, \eta_H=0.25, \eta_P = 3$, $\eta_C = 1$, and $A_c = \{ north, south, east, west \}$.
Our results show that the $PC$ of the baseline drastically increases in more confined environments. For example, $PC$ has a finite range in the basic environment since the room is spacious, while the $PC$ range reaches infinity in the intersection environment, where the floor map is confined and only one agent can pass through a corridor at a time. The situation is even more severe in the hallway environment, in which the baseline method results in an infinite $PC$ in all 10 experiments. These observations validate Hypothesis 1. In contrast, the proposed method handles the conflicting situations of the intersection and hallway environments effectively.
The $PC$ values of our method in all environments are dramatically lower compared to the baseline method, while the cost-to-goal values of $R$ and $H$ do not increase noticeably.
Moreover, employing the proposed method eliminates the necessity for frequent re-planning as $PI$ drops significantly compared to the experiments with the baseline method.
\subsubsection{Handling potential deadlocks}
According to Sec. \ref{sec:compare}, the proposed method is significantly more effective in reducing $PC$ in confined environments while maintaining efficiency in terms of $c_R$ and $c_H$. This property is particularly important for preventing potential deadlocks in narrow passages, where a lower $PC$ implies less conflicting paths for $H$ and $R$. Fig. \ref{fig:deadlock} demonstrates a pervasive case where lack of communication leads to a freezing situation. In this example, at the first planning iteration, $R$ transmits an ``east'' signal, selected automatically by CP, to $H$, by which $H$ is informed about $R$'s plan before she enters the narrow corridor. As shown in Fig. \ref{fig:deadlock} (top left), this communication signal updates $H$'s belief about $R$'s next location adequately and impels $H$ to clear the passage. At the second planning iteration, $R$ has already passed through the intersection, so it remains silent, and $H$'s belief indicates no collisions, as depicted in Fig. \ref{fig:deadlock} (bottom left).
In the same scenario, the baseline method performs ineffectively since $H$ enters the left corridor before $R$ departs it. When $H$ gets closer to $R$, there is not enough room for the RRT to be expanded, and a deadlock occurs since the passage remains blocked for $R$ permanently. This analysis supports Hypothesis 2 regarding the capability of the proposed method to handle potential deadlocks.
\subsubsection{Flexible prioritization}
$H$ or $R$ can be prioritized flexibly by adjusting the weights of $J$. A parameter study on $\eta_R$ and $\eta_H$ reveals the way that each agent is favored in different social navigation scenarios, as shown in Fig. \ref{fig:weights}. In these experiments, the weights are adjusted
as $\eta_R = F \eta_{const}$ and $\eta_H = (1-F) \eta_{const}$, where $F\in[0,1]$ denotes the priority factor ($R$ is fully prioritized for $F=1$) and $\eta_{const}=1.5$. In all three environments, prioritizing $R$ increases $R$'s normalized speed significantly. Fig.\,\ref{fig:weights} shows that in the basic environment, $R$'s normalized speed increases by 2.7 times when $R$ is prioritized, compared to the case where $H$ is highly prioritized. Likewise, $H$ speeds up when she is prioritized in the basic and intersection environments. However, in the hallway environment, the whole $H$-$R$ interaction is relatively smoother and less conflicting when $R$ has a higher priority. Together, the present findings support Hypothesis 3. Furthermore, the results show that the proposed method maintains a reasonably low $PC$ in all test environments no matter which agent is prioritized. In other words, the proposed method can be used to identify appropriate priorities for smooth social navigation.
\section {Conclusion}
This paper proposes a joint communication and motion planning framework that selects from an arbitrary input set of communication signals while computing the robot motion plans. The simulation results demonstrated that the presented framework avoids potential deadlocks in confined environments by leveraging explicit communications coupled with robot motion plans. We found that the proposed method produces less conflicting trajectories for the robot in confined environments, leading to drastically lower proximity costs and hence a lower chance of deadlock. We also observed that the proposed method does not degrade the robot's efficiency (in terms of traveled distances) compared to CBF-TB-RRT. In contrast, the non-communicative baseline method resulted in high proximity costs overall, showing that it is incapable of generating viable solutions when extensive human-robot interaction is required. Furthermore, the proposed method can flexibly prioritize either the robot or the human while maintaining its effectiveness in handling potential deadlocks.
\bibliographystyle{ieeetr}
\section{Introduction}
\section{Basic and multi-adjoint logics}
\label{sec:dataset_ml}
\section{Relation between BL and ML}
\label{sec:explanations}
A first comparison between the logic BL and the logic ML was given in [2], from which we obtained that ML has a weaker axiomatization than BL. Specifically, we analyzed each axiom of ML and verified that its translation into a formula of BL yields a provable formula. It is easy to see that Axiom P1 is formula (3) of Lemma 1, P2 is Axiom BL1, P3 follows from Axiom BL1 and formula (1) of Lemma 1, P4 is Axiom BL7, and P5 is obtained from formula (1) of Lemma 1 together with the fact that $T$ is provable. Moreover, identifying the implication symbols of ML with the implication symbol of PC($*$), Axioms M1 and M2 are BL5a and BL5b, respectively. Finally, Axiom M3 follows by a reasoning similar to the one given in the proof of formula (3) of Lemma 1.
On the other hand, we will show how the axioms of the logic BL can be rewritten in the language of ML. Clearly, of the eight axioms, we have already written four of them in the language of ML. Axiom BL2 translates into the formula $(\alpha \wedge_i \psi) \rightarrow \alpha$, where $\wedge_i$ is any conjunction symbol associated with an adjoint conjunctor $\&_i$. This formula does not hold in ML, because it is a consequence of the property $x \mathbin{\&_i} y \preceq x$, for all $x, y \in P$, which does not hold in general in a right-order multi-adjoint algebra. However, the following property arises.
Axiom BL3 expresses the commutativity of the operator $*$ in BL, which is rewritten in the language of ML as $(\varphi \wedge_i \psi) \rightarrow (\psi \wedge_i \varphi)$, where $\wedge_i$ is any conjunction symbol associated with an adjoint conjunctor $\&_i$. However, this formula is not provable in ML in general, because $\&_i$ need not be commutative in a right-order multi-adjoint algebra.
Finally, Hájek argued that BL6 is a variant of proof by cases, that is, ``if $\chi$ follows from $\varphi \rightarrow \psi$, and $\chi$ also follows from $\psi \rightarrow \varphi$, then $\chi$''. However, this ``proof by cases'' viewpoint is only provable on totally ordered sets and is therefore meaningless in our framework.
We now analyze other interesting properties, introduced in [10], that allow us to establish a closer relation between BL and ML.
The first property to analyze is formula (1) of Lemma 1. By Proposition 1, this formula is provable if and only if $(\varphi \wedge_i \psi) \rightarrow \varphi$ is provable, which, as discussed above, is associated with Axiom BL2. Hence this property does hold in general.
Proposition 2
Formula (2) of Lemma 1 is not satisfied since, as the result above shows, it is related to the commutativity of the conjunctors of a right-order multi-adjoint algebra, which is not required.
Formula (3) of Lemma 1 is simply Axiom P1. As for formulas (4) and (5) of Lemma 1, we obtain analogous properties in ML after a suitable translation, as we show next.
Note that formula (Pr1) is dual to formula (4) of Lemma 1, since the conjunctor symbol does not satisfy commutativity (Axiom BL3 of BL) in general, as discussed above. The following lemma introduces the ML formula analogous to formula (6) of Lemma 1.
\section{Conclusions and future work}
We have shown that ML provides an axiomatization of multi-adjoint algebras and that it is a more general logic than BL, as it considers a weaker axiomatization. In addition, several interesting properties of BL have been translated into and studied in ML. In the future, ML will be extended to lattices [3] and to the inclusion of other operators; truth-stressing and truth-depressing operators [8, 13] and their particular cases, the very-true operator [11] and the Monteiro operator [1, 10], will be analyzed and included in ML.
\newpage
\section{Review}
The work is theoretical in nature and presents a comparative study between the basic logic (BL) introduced by Petr Hájek and the multi-adjoint logic (ML), showing that ML is a more general logic than BL. Specifically, it is proved that the axioms given in ML are provable formulas in BL, and it is analyzed why different axioms of BL cannot be proven in ML. Furthermore, several interesting properties of BL have been translated into and studied in ML. It is also shown how the axioms of BL can be rewritten in the language of ML.
This article is a continuation of previous works by the authors in which they had introduced a logical characterization of multi-adjoint algebras.
In general, the paper has been written with care and the results appear intuitively and technically sound to me.
More detailed comments/questions for the authors:
- In the formula BL6 on page 4 the initial parenthesis is missing.
Therefore, ML takes into account two kinds of implications, where the connective $\rightarrow$ is associated with the bounded poset and the connectives $\rightarrow_i$ are associated with the adjoint implications. This missing point is not well explained in the present work.
\bibliographystyle{splncs04}
\section{Introduction}
When Artificial Intelligence (AI) techniques are applied in a sensitive domain such as Healthcare, providing an explanation for the results is sometimes as important as the accuracy or correctness of those results, if not more.
Take, for instance, the liver transplantation domain and the problem of deciding a donor-receiver matching using a good prediction of the graft survival.
If significantly enough data are available, Machine Learning (ML) algorithms may obtain highly accurate predictions, but many ML techniques act as black boxes, failing to provide a verifiable explanation for their results.
This lack of explanations is especially problematic when the final decision may have critical consequences on the patients in the waiting list or even lead to legal implications.
Thus, most hospitals use the gravity of the patient's health as the only priority for the waiting list: the potential survival of the transplantation is just disregarded due to the lack of clear and fair rules that can be properly justified.
One ML technique that does not suffer from this black box limitation is \emph{decision tree} (DT) learning~\cite{Quin86}.
In DT, the result of the learning process is a tree whose nodes check conditions about the input features.
A prediction can be easily explained in human terms by following the corresponding path in the tree.
In the recent survey~\cite{connor20} of articles that apply AI to organ transplantation, more than half of the approaches include a DT learning algorithm.
In the case of liver transplantation, Bertsimas et al.~\cite{Bertsimas2018} used a large dataset with 1,618,966 observations to predict 3-month mortality or removal from the waiting list for a given patient.
ML techniques were applied to obtain two DT models: one for patients with hepatocellular carcinoma (HCC) and a different one for the rest.
An interactive online tool\footnote{\url{http://www.opom.online/}} was built, including a simulator to make a prediction using 4 input variables from some patient's data.
The tool does not provide a specific explanation for the simulated prediction but, instead, it allows browsing the general DTs graphically, collapsing or expanding some parts of the tree.
In particular, the overwhelming width and detail of the non-HCC tree makes it very difficult to follow a certain path, even with the provided interactive browser.
So, strictly speaking, we can obtain an explanation for each prediction but, from a practical perspective, this approach lacks the simplicity and ease of use that is expected from an explainable AI tool.
In this work, we present a flexible method for explaining, in human readable terms, the 5-year survival predictions made by a DT trained on a dataset of liver transplantations.
The dataset was collected at the Digestive Service of the Coru\~na University Hospital Center (CHUAC), Spain.
The method consists in representing the DT as a logic program and using the tool \texttt{xclingo}~\cite{xclingo20} to annotate the program with natural language tags and construct the compound explanations.
We propose two different translations into logic programming, each with its benefits and disadvantages, which will be discussed.
\section{An example of DT learning for predicting liver survival}
\label{sec:dataset_ml}
The dataset consisted of 258 transplants dating from 2009 to 2014, and each sample comprises 66 features from both the receiver and the donor.
As a target variable, we used the Boolean feature \texttt{goal\_death}, which indicates whether the patient died during the 5 years following the surgery.
Numerical features were previously discretised using another DT, as described in \cite{ibm}.
The feature selection consisted of a chi-square test performed for each candidate feature against the target.
We took the 7 features with lowest $p$-value (i.e. highest significance for predicting the target variable) that are shown in Table~\ref{tab:pvalues} (prefixes ``don\_'' and ``rec\_'' respectively stand for donor and receiver).
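For concreteness, this selection step can be sketched with scikit-learn as follows; \texttt{X} and \texttt{y} stand for the encoded feature matrix and the target, and all names are our assumptions rather than the original code.
\begin{lstlisting}[frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
from sklearn.feature_selection import SelectKBest, chi2

# chi-square test of each candidate feature against goal_death;
# keep the 7 features with the highest chi2 scores (lowest p-values)
selector = SelectKBest(score_func=chi2, k=7).fit(X, y)
selected = selector.get_feature_names_out()
p_values = selector.pvalues_
\end{lstlisting}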
\begin{table}[!htb]
\begin{minipage}{.4\linewidth}
\caption{Top 7 significant input features}
\label{tab:pvalues}
\centering
\begin{tabular}{|l|c|}
\hline
Feature & P-value \\
\hline
rec\_vhc & 0.015 \\
rec\_afp & 0.042 \\
rec\_abdominal\_surgery & 0.049 \\
don\_microesteatosis & 0.082 \\
rec\_hypertension & 0.111 \\
rec\_provenance & 0.138 \\
don\_acv & 0.146 \\
\hline
\end{tabular}
\end{minipage}%
\begin{minipage}{.6\linewidth}
\centering
\caption{Grid Search parameters}
\label{tab:gridseach}
\begin{tabular}{|l|c|}
\hline
Parameter & Possible values \\
\hline
maximum depth & {5,9,11} \\
splitting criterion & {entropy, gini-importance} \\
maximum features & {$\sqrt{n\_features}$, $log_2(n\_features)$} \\
\hline
Best parameters & 9, entropy, $\sqrt{n\_features}$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
Due to the unbalanced distribution of the target class (death (0.76), alive (0.24)), a stratified splitting was used for dividing the data into train and test sets (75:25) while preserving the ratio of the target class.
Categorical features were label-encoded before training so that the decision tree could process them.
The best parameters for training the decision tree were estimated by performing a stratified, 5-fold cross-validation grid search over the training set.
Table~\ref{tab:gridseach} shows the parameter grid and the best parameters found.
The best parameters were used to train a DT that eventually produced an accuracy of 0.789 with a kappa value of 0.312.
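A minimal scikit-learn sketch of this model-selection step (our reconstruction, not the original code; \texttt{X\_train} and \texttt{y\_train} denote the stratified 75\% training split):
\begin{lstlisting}[frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

param_grid = {
    "max_depth": [5, 9, 11],
    "criterion": ["entropy", "gini"],
    "max_features": ["sqrt", "log2"],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=StratifiedKFold(n_splits=5))
search.fit(X_train, y_train)
best_tree = search.best_estimator_  # best found: 9, entropy, sqrt
\end{lstlisting}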
\section{DTs as Explainable Logic Programs}
\label{sec:explanations}
Answer Set Programming (ASP)~\cite{brewka2011} is a declarative problem solving paradigm where problems are represented as a set of rules in a logic program and solutions to those problems are obtained in the form of answer sets.
ASP rules have the general form of ``\texttt{head :- body.}'' where both head and body are lists of \textit{literals}, that is, predicate atoms optionally preceded by \texttt{not}.
Intuitively, the rule head is derived as true if all the literals in the body are also true.
Rules with an empty body are called \textit{facts}, and their heads are always derived.
If we call the widely used ASP solver \texttt{clingo}~\cite{gekakaosscwa16a} on the following program:
\begin{flushleft}\ttfamily\small
holds(55,vhc,true). holds(55,don\_acv,true).\\
bad(P) :- holds(P,vhc,true), holds(P,don\_acv,true).
\end{flushleft}
we obtain an answer set including the two facts in the first line plus the atom {\tt bad(55)} derived from the rule in the second line.
\texttt{xclingo}~\cite{xclingo20} is an ASP tool built on top of \texttt{clingo} that explains why a given atom was derived by tracing the relevant fired rules.
To this aim, we may use textual descriptions to annotate specific rules (with the \texttt{\%!trace\_rule} directive) or any derivation of a given atom (with the \texttt{\%!trace} directive).
As an illustration, Listing~\ref{lst:xclingo_example} shows an annotated version of the previous example program and Listing~\ref{lst:xclingo_example_output}, the \texttt{xclingo} output.
The original obtained DT was automatically encoded into two different \texttt{xclingo} annotated programs: \texttt{nodes.lp} and \texttt{paths.lp}\footnote{All files publicly available in \url{https://github.com/bramucas/crystal-tree}}.
We also use two additional files: \texttt{extra.lp} which contains common code for \texttt{nodes.lp} and \texttt{paths.lp}; and \texttt{cases.lp} which contains the data from the transplant cases to be predicted.
\begin{minipage}{.52\textwidth}
\begin{lstlisting}[caption=\texttt{xclingo} annotated program.\label{lst:xclingo_example},frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
holds(55,vhc,true).
holds(55,don_acv,true).
bad(P) :- holds(P,vhc,true), holds(P,don_acv,true).
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[caption=\texttt{xclingo}'s explanation for \texttt{bad(55)}\label{lst:xclingo_example_output},frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
>> bad(55) [1]
*
|__"Patient 55 may fail"
| |__"rec_vhc is true"
| |__"don_acv is true"
\end{lstlisting}
\end{minipage}
\begin{lstlisting}[caption=Fragment from \texttt{nodes.lp}.\label{lst:nodes},frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
tree_node(0,P,left) :- holds(P,rec_hypertension,false).
tree_node(1,P,left) :- holds(P,rec_vhc,false), tree_node(0,P,left).
(...)
tree_node(6,P,left) :- le(P,rec_afp,184), tree_node(5,P,left).
alive(P) :- tree_node(6,P,left).
\end{lstlisting}
\noindent
\begin{minipage}{.45\textwidth}
\begin{lstlisting}[caption=Fragment from \texttt{paths.lp}.\label{lst:paths},frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
alive(P) :-
holds(P,rec_vhc,true),
holds(P,rec_abdominal_surgery,false),
holds(P,rec_hypertension,true),
le(P,rec_afp,20994),
le(P,don_microesteatosis,50).
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}{.52\textwidth}
\begin{lstlisting}[caption=Traces used by \texttt{paths.lp}. \label{lst:path-traces},frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
\end{lstlisting}
\end{minipage}
The \texttt{nodes.lp} program (partially shown in Listing~\ref{lst:nodes}) directly represents each DT edge using predicate \texttt{tree\_node(N,Patient,Dir)} where {\tt N} is the child node to be activated and {\tt Dir} its direction below the tree (left or right).
As we can see, each rule is annotated with a \texttt{\%!trace\_rule} describing the decision condition.
Leaves are encoded as rules with \texttt{alive(P)} or \texttt{not\_alive(P)}.
The following is an example of obtained explanation:
\begin{lstlisting}[frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
>> prediction(14) [1]
*
|__"Bad (<5years)"
| |__"rec_afp > 509"
| | |__"don_microesteatosis <= 50"
| | | |__"rec_afp <= 635"
| | | | |__"rec_abdominal_surgery is false"
| | | | | |__"don_acv is true"
| | | | | | |__"rec_afp <= 1244"
| | | | | | | |__"rec_vhc is true"
| | | | | | | | |__"rec_hypertension is false"
\end{lstlisting}
whose cascade form reflects the order in which conditions are applied when traversing the tree, which follows from the (decreasing) discriminatory power of each condition.
However, as a tree grows in depth, such explanations become less readable, and the most discriminant features tend to be used repeatedly with different thresholds (as happens above with \texttt{rec\_afp}), making the explanation less clear.
On the other hand, \texttt{paths.lp} (Listing~\ref{lst:paths}) just encodes a rule per each leaf in the original tree.
The head of the rule encodes the class of the leaf, and the body is a conjunction of all conditions traversed in the path.
An example of explanation from this second encoding would be:
\begin{lstlisting}[frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
>> prediction(14) [1]
*
|__"Bad forecast (<5years)"
| |__"rec_abdominal_surgery is false"
| |__"don_acv is true"
| |__"rec_vhc is true"
| |__"rec_hypertension is false"
| |__"rec_afp in (509,635]"
| |__"don_microesteatosis <= 50"
\end{lstlisting}
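To illustrate how such per-leaf rules can be produced mechanically, the following Python sketch walks a trained scikit-learn tree and emits one ASP rule per leaf; it simplifies the threshold rendering compared to the actual encoder in the repository.
\begin{lstlisting}[frame=tlrb, basicstyle=\ttfamily\scriptsize, breaklines=true]{Name}
def paths_lp(tree, feature_names, class_names):
    t = tree.tree_  # sklearn DecisionTreeClassifier internals
    rules = []

    def walk(node, body):
        if t.children_left[node] == -1:  # leaf: emit one rule
            cls = class_names[t.value[node][0].argmax()]
            rules.append(f"{cls}(P) :- {', '.join(body)}.")
            return
        f, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node],  body + [f"le(P,{f},{thr:.0f})"])
        walk(t.children_right[node], body + [f"gt(P,{f},{thr:.0f})"])

    walk(0, [])
    return "\n".join(rules)
\end{lstlisting}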
\section{Conclusions}
As we can see, {\tt paths.lp} resulted in shorter explanations that are no longer phrased in terms of the DT traversal.
In the example, we replaced the cascade of 8 conditions obtained before by a group of 6 conditions (some of them in terms of intervals).
By grouping conditions, the explanation includes each feature used by the tree at most once, which guarantees readable explanations, even for the deepest paths.
Also, explanations can be easily adapted for different explanation needs (language, level of expertise, etc.) by modifying the text within {\tt trace} directives.
In our case, simpler and more general explanations for the patient could be provided while keeping all the detail in the explanations for the doctor.
As future work, we plan to improve the \texttt{paths.lp} by including probabilities extracted from the DT learning.
We also plan to keep improving the accuracy of the obtained models by applying balancing techniques for the target feature.
Lastly, we also plan to continue collecting more transplant cases for the dataset.
\bibliographystyle{splncs04}
\section{Introduction}
\textit{Ab initio} simulation methods
are standard practice for predicting the
chemical and physical properties of molecules and materials.
At the level of simulating electronic structure the
majority of approaches are either directly based
upon Kohn-Sham density-functional theory~(DFT) or Hartree-Fock~(HF)
or use these techniques as starting points for more accurate
post-DFT or post-HF developments.
Both HF and DFT ground states are commonly found by solving
the self-consistent field~(SCF) equations,
which for both types of methods are very similar in structure.
Given their fundamental role in electronic-structure simulations,
substantial effort has been devoted in the past to developing
efficient and widely applicable SCF algorithms.
We refer to \citet{Woods2019} and \citet{Lehtola2020}
for recent reviews on this subject.
However, the advent of both cheap computational power
as well as the introduction of data-driven approaches to materials modelling
has caused simulation practice to change noticeably.
In particular in domains such as catalysis or battery research
where experiments are expensive or time-consuming,
it is now standard practice
to perform systematic computations on thousands to millions of compounds.
The aim of such high-throughput calculations is to either
(i) generate data for training sophisticated surrogate models
or to (ii) directly screen complete design spaces for relevant compounds.
The development of such data-driven strategies has already accelerated
research in these fields and enabled
the discovery of novel semiconductors, electrocatalysts,
materials for hydrogen storage or for Li-ion batteries%
~\cite{Jain2016,Alberi2019,Luo2021}.
Compared to the early years where the aim was to perform
a small number of computations on hand-picked systems,
high-throughput screening approaches have much stronger requirements.
In particular the key bottleneck is the required human time
to set up and supervise computations.
To minimize manual effort, state-of-the-art high-throughput frameworks%
~\cite{Curtarolo2012, %
Jain2011, %
Huber2020} %
provide a set of heuristics
which automatically select computational parameters based on prior experience.
In case of a failing calculation such heuristics may also be employed
for parameter adjustment and automatic rescheduling.
While this empirical approach manages to take care of the majority of failures
automatically, it is far from perfect.
First, state-of-the-art heuristic approaches cannot capture all cases, and
keeping in mind the large absolute number of calculations, already a 1\% fraction
of cases requiring human attention easily amounts to hundreds or thousands of calculations.
This causes idle time and severely limits the overall throughput of a study.
Second, any failing calculation, whether automatically caught
by a high-throughput framework or not, needs to be redone, implying
wasted computational resources that contribute to the
already noteworthy environmental footprint of supercomputing~\cite{Feng2007,Feng2008}.
The objective in improving the algorithms employed in high-throughput workflows
is therefore to increase their inherent reliability as well as to reduce the number of
parameters that need to be chosen.
Ideally each building block of a simulation workflow would be entirely black-box
and automatically self-adapt to each simulated system.
To some extent this amounts to taking the existing empirical wisdom
already implemented in existing high-throughput frameworks
and converting it into simulation algorithms with convergence guarantees
using a mixture of both mathematical and physical arguments.
With this objective in mind, this work will focus on improving the robustness
of self-consistent field~(SCF) algorithms,
as mentioned above
one of the most fundamental components of electronic-structure simulations.
Our main motivation
and application are DFT simulations discretized in plane wave or
``large'' basis sets, for which it is only feasible to store and
compute with orbitals, densities and potentials, and not the full
density matrix or Fock matrix.
In this setting, the standard SCF approach consists of damped, preconditioned self-consistent iterations.
Using an approach based on potential-mixing
the next SCF iterate is found as
\begin{equation}
V_\text{next} = V_\text{in} + \alpha P^{-1} (V_\text{out} - V_\text{in}),
\label{eqn:potmixsimple}
\end{equation}
where $V_\text{in}$ and $V_\text{out}$ are the input and output potentials
to a simple SCF step, $\alpha$ is a fixed damping parameter and $P$ is a preconditioner.
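In schematic terms, \eqref{eqn:potmixsimple} amounts to the following simple loop; this is a Python sketch, with \texttt{scf\_step} mapping $V_\text{in}$ to $V_\text{out}$ and \texttt{precondition} applying $P^{-1}$, both placeholders rather than the interface of any particular code.
\begin{verbatim}
import numpy as np

def damped_scf(V0, scf_step, precondition, alpha=0.3,
               tol=1e-8, maxiter=200):
    V = V0
    for _ in range(maxiter):
        residual = scf_step(V) - V          # V_out - V_in
        if np.linalg.norm(residual) < tol:
            break
        V = V + alpha * precondition(residual)
    return V
\end{verbatim}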
It is well-known that simple SCF iterations (where $P$ is the identity)
can converge poorly for many systems due to a number of instabilities~\cite{ldos}.
Examples are the large-wavelength divergence due to the Coulomb operator
leading to the ``charge-sloshing'' behavior in metals
or the effect of strongly localized states near the Fermi level,
e.g.~due to surface states or $d$- or $f$-orbitals.
To accelerate the convergence of the SCF iteration despite these instabilities,
one typically aims to employ a preconditioner $P$ matching the underlying
system.
Despite some recent progress towards cheap self-adapting
preconditioning strategies~\cite{ldos}
for the charge-sloshing-type instabilities,
choosing a matching preconditioner is still not a straightforward task
for other types of instabilities.
For example currently no cheap preconditioner is available to
treat the instabilities due to strongly localized states near the Fermi level,
such that in such systems using a suboptimal preconditioning strategy is unavoidable.
While convergence acceleration techniques are usually crucial in such cases,
these also complicate the choice of an appropriate damping parameter $\alpha$
to achieve the fastest and most reliable convergence.
As we will detail in a series of example calculations
on some transition metal systems the interplay of mismatching preconditioner
and convergence acceleration can lead to a very unsystematic pattern
between the chosen damping parameter $\alpha$ and obtaining a successful
or failing calculation.
Especially for such cases finding a good combination of preconditioning strategy
and damping parameter can require substantial trial and error.
As an alternative approach to a fixed damping selected by a user
\textit{a priori} Cancès and Le Bris suggested the optimal damping
algorithm~(ODA)~\cite{Cances2000,Cances2000a}. In this algorithm the
damping parameter is obtained automatically by performing a line
search along the update suggested by a simple SCF step. Following this
strategy, the ODA ensures a monotonic decrease of the energy, which
leads to strong convergence guarantees. This can be improved using the
history to improve convergence, such as in the EDIIS
method~\cite{Kudin2002}, or trust-region strategies
\cite{francisco2004globally,francisco2006density}. These approaches
are successfully employed for SCF calculations on atom-centered basis
sets, where an explicit representation of the density matrix is
possible. However, their use with plane-wave DFT methods, where only
orbitals, densities and potentials are ever stored, does not appear to
be straightforward, in particular in conjunction with accelerated
methods.
Another development towards finding a DFT ground state in a
mathematically guaranteed fashion are approaches based on a direct
minimization of the DFT energy as a function of the orbitals and
occupations, not using the self-consistency principle (see Reference
\onlinecite{cances2020convergence} for a mathematical comparison).
Although direct minimization methods are often quite efficient for
gapped systems, their use for metals requires a minimization over
occupation numbers \cite{marzari1997ensemble,freysoldt2009direct},
which is potentially costly and unstable. For this reason such
approaches seem to be less used than the SCF schemes in solid-state
physics.
In the realm of self-consistent iterations, variable-step methods have
been successfully used \cite{Marks2021,marks2008robust} to increase
robustness. These methods are based on a minimization of the residual.
Although this often proves efficient in practice, this has a number of
disadvantages. First, the residual might go up and then down on the way to
a solution, making it rather hard to design a line-search algorithm.
Second, this forces an algorithm to select an appropriate notion of a
residual norm, with results potentially sensitive to this choice.
Third, there is the possibility of getting stuck in local minima of
the residual, or a saddle point of the energy. By contrast, we aim to
find a scheme ensuring energy decrease as an important ingredient to
ensure robustness. Indeed, under mild conditions, a scheme that
decreases the energy monotonically is guaranteed to converge to a
solution of the Kohn-Sham equations (see Theorems 1 and 2 below). This
is in contrast to residual-based schemes, which afford no such
guarantee. Understanding the very good practical performance of these
schemes, despite the lack of global theoretical guarantees, is an
interesting direction for future research.
Our goal in this work is to design a mixing scheme that (a) is
applicable to plane-wave DFT, and involves only quantities such as
densities and potentials; (b) is based on an energy minimization, to
ensure robustness; (c) is based on the self-consistent iterations; (d)
is compatible with acceleration and preconditioning. Our scheme is based on a minimal modification of the damped
preconditioned iterations \eqref{eqn:potmixsimple}. Similar to the ODA
approach we employ a line search procedure to choose the damping
parameter automatically. Our algorithm builds upon ideas of the
potential-based algorithm of \citet{gonze1996towards} to construct an
efficient SCF algorithm. In combination with Anderson acceleration on
challenging systems we show our adaptive damping scheme to be less
sensitive than the approach based on a fixed damping parameter.
In contrast to the fixed damping approach the scheme does
not require a manual damping selection from the user.
The outline of the paper is as follows.
Section \ref{sec:analysis} presents
the mathematical analysis of the self-consistent field iterations
justifying our algorithmic developments.
In particular it presents a justification for global convergence
of the SCF iterations. The proofs for the results presented in this section are given in the appendix.
Section \ref{sec:adaptive} discusses the adaptive damping algorithm itself
followed by numerical tests (Section \ref{sec:tests}) to illustrate
and contrast cost and performance compared to the standard fixed-damping approach.
Concluding remarks and some outlook on future work are given in Section \ref{sec:conclusion}.
\section{Analysis}
\label{sec:analysis}
\subsection{Preliminaries}
\label{sec:prelim}
We use similar notation to those in \citet{cances2020convergence},
extend the analysis in that paper to the finite-temperature
case~\cite{mermin1965thermal}, and
introduce the potential mixing algorithm. We
work in the \textit{grand-canonical ensemble}: we fix a chemical
potential (or Fermi level) $\mu$ and an inverse temperature $\beta$.
In particular, the number of electrons is not fixed. This is for
mathematical convenience: fixing the number of electrons $N$ instead
of $\mu$ does not change our results. We assume that space has been
discretized in a finite-dimensional orthogonal basis (typically,
plane-waves) of size $N_{\rm b}$, and will not treat either spin or
Brillouin zone sampling explicitly for notational simplicity, although
of course the formalism can be extended easily. In this section we
will work with the formalism of density matrices, self-adjoint
operators $P$ satisfying $0 \le P \le 1$. Such operators can be
diagonalized as
\begin{equation}
P = \sum_{i=1}^{N_{\rm b}} f_{i} |\phi_{i}\rangle\langle \phi_{i}|.
\end{equation}
The numbers $0 \le f_{i} \le 1$ are the occupation numbers, and $\phi_{i}$
are the orbitals. Either density matrices or the set of occupation
numbers and orbitals can be taken as the primary unknowns in the
self-consistency problem. Density matrices are impractical numerically
in plane-wave basis sets,
since they are $N_{\rm b} \times N_{\rm b}$; however,
they are very convenient to formulate and analyze algorithms.
Accordingly, we will use them in this theoretical section, but
implement the resulting algorithms using orbitals only.
We work on the sets
\begin{align}
\mathcal H &= \{H \in {\mathbb R}^{N_{\rm b} \times N_{\rm b}}, H^{T} = H\}\\
\mathcal P &= \{P \in \mathcal H, 0 < P < 1\}
\end{align}
of Hamiltonians and density matrices, equipped with the standard
Frobenius metric. Here and in the following, inequalities between matrices are
understood in the sense of symmetric matrices. The closure
\mbox{$\overline{\mathcal P} = \{P \in \mathcal H, 0 \le P \le 1\}$} is
compact. Let ${\mathcal E}_{0}$ be a twice continuously differentiable function on
$\overline{\mathcal P}$: we aim to solve the problem
\begin{align}
\min_{P \in \mathcal P} {\mathcal E}_0(P).
\end{align}
Let
\begin{align}
H_{\rm KS}(P) = \nabla{\mathcal E}_{0}(P)
\end{align}
be its gradient, and
\begin{align}
\label{eqn:kernel4}
\bm K(P) = \bm{d^{2}{\mathcal E}_{0}}(P) = \bm{d \nabla{\mathcal E}_{0}}(P)
\end{align}
be its Hessian. We will denote in bold ``super-operators'' or ``four-point operators'', operators from $\mathcal H$ to
$\mathcal H$. Let $s$ be the fermionic entropy
\begin{align}
s(p) &= -(p\log p + (1-p)\log (1-p)),
\end{align}
with derivatives
\begin{align}
s'(p) &= \log\left( \frac {1-p}{p} \right), \quad s''(p) = -\frac{1}{p(1-p)}.
\end{align}
Let
\begin{align}
{\mathcal E}(P) &= {\mathcal E}_{0}(P) - \frac 1 \beta {\rm Tr}(s(P)) - \mu {\rm Tr} P
\end{align}
be the free energy of a density matrix, where here and in the
following we use functional calculus implicitly to define $s(P) \in
\mathcal H$.
${\mathcal E}$ diverges on the boundary of $\mathcal P$, whose closure is
compact, and therefore ${\mathcal E}$ has at least one minimizer in
$\mathcal P$. The first-order optimality condition
$\nabla {\mathcal E}(P) = 0$ gives
\begin{align}
H_{\rm KS}(P) -\mu - \frac 1 \beta s'(P) = 0,
\end{align}
and therefore
\begin{align}
\label{eqn:scfdm}
P = f_{\rm FD}(H_{\rm KS}(P)),
\end{align}
where we define the Fermi-Dirac map $f_{\rm FD}$ by
\begin{align}
\label{eqn:fermidirac}
f_{\rm FD}(H) = \frac{1}{1+e^{\beta(H-\mu)}}.
\end{align}
Here we have used the equation
$s'(f_{\rm FD}(\varepsilon)) = \beta(\varepsilon - \mu)$, which will also be
useful in the following. Although we use the Fermi-Dirac smearing
function for concreteness, our results apply just as well to Gaussian
smearing, for instance; however, they do not apply to schemes with
non-monotonous occupations such as the Methfessel-Paxton
scheme \cite{methfessel1989high}.
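For concreteness, the map \eqref{eqn:fermidirac} can be evaluated by functional calculus through an eigendecomposition. The following minimal numpy sketch (ours, not tied to any code) uses the identity $1/(1+e^{x}) = (1-\tanh(x/2))/2$ for numerical stability.
\begin{verbatim}
import numpy as np

def fermi_dirac(H, mu, beta):
    # P = 1 / (1 + exp(beta (H - mu))), H symmetric
    eps, phi = np.linalg.eigh(H)
    f = 0.5 * (1.0 - np.tanh(0.5 * beta * (eps - mu)))
    return (phi * f) @ phi.T    # phi diag(f) phi^T
\end{verbatim}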
\subsection{The dual energy}
Reformulating the ideas in \citet{gonze1996towards}, we now define a ``dual'' energy
\begin{align}
{\mathcal I}(H) &= {\mathcal E}(f_{\rm FD}(H)).
\end{align}
Since the map $f_{\rm FD}$ is a
bijection from $\mathcal H$ to $\mathcal P$, we have
\begin{align}
\min_{H \in \mathcal H} {\mathcal I}(H) = \min_{P \in \mathcal P} {\mathcal E}(P).
\end{align}
This is analogous to convex duality since the unknown in this
formulation is now $H = \nabla {\mathcal E}(P)$.
We can compute the derivative $\bm{\chi_{0}}= d f_{\rm FD}$ of $f_{\rm FD}$ (see
Lemma \ref{sec:lemma} in the Appendix for details):
\begin{equation}
\label{eqn:chi4}
\begin{aligned}
&\bm{\chi_{0}}\left(\sum_{i=1}^{N_{\rm b}} \varepsilon_{i} |\phi_{i}\rangle\langle \phi_{i}|\right) \cdot \delta H \\
&\hspace{1.9em}= \sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac{f_{\rm FD}(\varepsilon_{i}) - f_{\rm FD}(\varepsilon_{j})}{\varepsilon_{i}-\varepsilon_{j}}\langle \phi_{i}, \delta H \phi_{j} \rangle |\phi_{i}\rangle\langle \phi_{j}|
\end{aligned}
\end{equation}
The linear map $\bm{\chi_{0}}$ is a ``four-point'' generalization of the
independent-particle polarizability. It describes the response of the
density matrix of a system of independent electrons to a change in the Fock matrix.
We then have
\begin{equation}
\begin{aligned}
\nabla {\mathcal I}(H) &= \bm{\chi_{0}}(H) \nabla {\mathcal E}(f_{\rm FD}(H)) \\
&= \bm{\chi_{0}}(H) \Big( H_{\rm KS}(f_{\rm FD}(H)) -\mu - \frac 1 \beta s'(f_{\rm FD}(H))\Big)\\
&= \bm{\chi_{0}}(H) ( H_{\rm KS}(f_{\rm FD}(H)) -H)
\end{aligned}
\label{eqn:gradI}
\end{equation}
where again we used $s'(f_{\rm FD}(\varepsilon)) = \beta(\varepsilon - \mu)$.
The Hessian of ${\mathcal I}$ is a complicated object due to the derivative of
$\bm{\chi_{0}}(H)$. However, at a solution of $H_{\rm KS}(f_{\rm FD}(H_{*})) = H_{*}$,
this term vanishes, and we have the simple result
\begin{equation}
\label{eqn:hessI}
\bm{d^{2} {\mathcal I}}(H_{*}) = -\bm{\chi_{0}}(H_{*}) (1- \bm K(H_{*}) \bm{\chi_{0}}(H_{*})).
\end{equation}
To better understand this object, we compute the Hessian of ${\mathcal E}$.
From $\nabla {\mathcal E}(P) = H_{\rm KS}(P) - \frac 1 \beta s'(P) - \mu$ we get
\begin{equation}
\begin{aligned}
\bm{d^{2} {\mathcal E}}(P) \cdot \delta P
&= \bm K(H_{\rm KS}(P)) \cdot \delta P \\&- \frac 1 \beta \sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac{s'(p_{i})-s'(p_{j})}{p_{i}-p_{j}}\langle \phi_{i}, \delta P \phi_{j} \rangle |\phi_{i}\rangle\langle \phi_{j}|.
\end{aligned}
\end{equation}
Defining
\begin{equation}
\begin{aligned}
&\bm \Omega\left(\sum_{i=1}^{N_{\rm b}} \varepsilon_{i} |\phi_{i}\rangle\langle \phi_{i}|\right) \cdot \delta P\\
&\hspace{2.5em}= -\sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac{\varepsilon_{i}-\varepsilon_{j}}{f_{\rm FD}(\varepsilon_{i}) - f_{\rm FD}(\varepsilon_{j})}\langle \phi_{i}, \delta P \phi_{j} \rangle |\phi_{i}\rangle\langle \phi_{j}|
\end{aligned}
\end{equation}
we get
\begin{align}
\label{eqn:hessE}
\bm{d^{2} {\mathcal E}}(P) &= \bm K(H_{\rm KS}(P)) + \bm \Omega(f_{\rm FD}^{-1}(P)).
\end{align}
The point of this formula is to recognize now that $\bm \Omega(H) = -\bm{\chi_{0}}(H)^{-1}$. This links the Hessians of ${\mathcal E}$ and ${\mathcal I}$: at
a fixed point $H_{*} = H_{\rm KS}(f_{\rm FD}(H_{*}))$,
\begin{align}
\label{eqn:hess_both}
\bm{d^{2}{\mathcal E}}(f_{\rm FD}(H_{*})) = \bm \Omega(H_{*}) {\bm{d^{2}{\mathcal I}}}(H_{*}) \bm \Omega(H_{*}).
\end{align}
Since $\bm \Omega$ is self-adjoint and positive definite, both Hessians
have the same inertia (number of negative eigenvalues).
\subsection{Hamiltonian mixing}
\label{sec:hammix}
The very simplest Hamiltonian mixing algorithm is
\begin{align}
H_{n+1} = H_{\rm KS}(f_{\rm FD}(H_{n})).
\end{align}
As already recognized in Reference \onlinecite{gonze1996towards}, \eqref{eqn:gradI}
makes it possible to reinterpret this simple algorithm in a new light:
it is a gradient descent algorithm on ${\mathcal I}$ with step $1$,
preconditioned by $\bm{\chi_{0}}(H_{n})^{-1}$. It is natural to use a
smaller stepsize to try to ensure convergence, and indeed this is
guaranteed to work:
\begin{theorem}
\label{thm:algorithm}
Let $H_{0} \in \mathcal H$. There is $\alpha_{0} > 0$ such that, for
all $0 < \alpha < \alpha_{0}$, the algorithm
\begin{align}
H_{n+1} = H_{n} + \alpha(H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n})
\end{align}
satisfies $H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n} \to 0$. If furthermore ${\mathcal E}$ is
analytic, $H_{n}$ converges to a solution of the equation $H_{\rm KS}(f_{\rm FD}(H))=H$.
\end{theorem}
Adaptive-step schemes can also ensure guaranteed convergence:
\begin{theorem}
\label{thm:damping}
Fix $H_{0} \in \mathcal H$, and constants $0 < \alpha_{\rm max} < 1$,
$0 < c < 1, 0 < \tau < 1$. Consider the algorithm
\begin{align}
H_{n+1} = H_{n} + \alpha_{n}(H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n})
\end{align}
where $\alpha_{n}$ is chosen in the following way: starting from
$\alpha_{n} = \alpha_{\rm max}$, multiply $\alpha_{n}$ by the factor $\tau$
until the Armijo line search condition
\begin{equation}
\label{eq:armijo}
\begin{aligned}
&{\mathcal I}(H_{n} + \alpha_{n}(H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n})) \\
&\hspace{1.0em}\le {\mathcal I}(H_{n}) - \alpha_{n} c \langle \bm \Omega(f_{\rm FD}(H_{n})) \nabla{\mathcal I}(H_{n}), \nabla{\mathcal I}(H_{n}) \rangle
\end{aligned}
\end{equation}
is satisfied. Then this algorithm satisfies $H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n} \to 0$. If furthermore ${\mathcal E}$ is
analytic, $H_{n}$ converges to a solution of the equation $H_{\rm KS}(f_{\rm FD}(H))=H$.
\end{theorem}
The proofs of both these statements are found in the Appendix.
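Schematically, one step of the backtracking rule of Theorem \ref{thm:damping} reads as follows. This is a Python sketch under stated assumptions: \texttt{I} evaluates the dual energy, \texttt{scf\_direction} returns $H_{\rm KS}(f_{\rm FD}(H)) - H$, \texttt{omega\_grad\_sq} evaluates the inner product on the right-hand side of \eqref{eq:armijo}, and the minimal step size is a practical safeguard not present in the theorem.
\begin{verbatim}
def adaptive_step(H, I, scf_direction, omega_grad_sq,
                  alpha_max=0.9, c=1e-4, tau=0.5, alpha_min=1e-6):
    d = scf_direction(H)             # H_KS(f_FD(H)) - H
    I0, g = I(H), omega_grad_sq(H)   # current energy, decrease model
    alpha = alpha_max
    # shrink alpha by tau until the Armijo condition holds
    while alpha > alpha_min and I(H + alpha * d) > I0 - alpha * c * g:
        alpha *= tau
    return H + alpha * d
\end{verbatim}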
The adaptive-step scheme above however suffers from two important
drawbacks. First, it is costly (requiring several SCF steps per
iteration). Second, it is incompatible with preconditioned or
accelerated schemes because, in contrast to the SCF direction, there
is no guarantee in these cases that the chosen direction is a descent
direction to the energy. This would make a straightforward
implementation of the above algorithm uncompetitive for ``easy''
systems, and therefore motivates the search for a compromise algorithm
that tries to recover some robustness properties while not sacrificing
performance.
\subsection{Potential mixing}
We now specialize the above discussion to our case of interest of semi-local
density-functional theory~(DFT) models. We introduce the operators
\mbox{$\diag : {\mathbb R}^{N_{\rm b} \times N_{\rm b}} \to {\mathbb R}^{N_{\rm b}}$} and
\mbox{$\diagm: {\mathbb R}^{N_{\rm b}} \to {\mathbb R}^{N_{\rm b} \times N_{\rm b}}$}. The
$\diag$ operator takes the diagonal (in real space) of a density
matrix, yielding a density. The $\diagm$ operator constructs a Fock
matrix contribution with a given local potential.
The two operators are adjoints of each other.
With these notations, the energy function takes the form
\begin{equation}
{\mathcal E}_0(P) = {\rm Tr}(H_0 P) + g\left(\diag(P)\right),
\label{eqn:trace}
\end{equation}
where $H_0$ is a given operator (the core Hamiltonian) and $g$ is a
nonlinear function (the Hartree-exchange-correlation energy). For these models the gradient
of ${\mathcal E}_0(P)$ (the Fock matrix) depends on $\diag(P)$ (the density) only:
\begin{equation}
H_\text{KS}(P) = H_0 + \diagm(V(\diag(P))),
\end{equation}
with the potential
\begin{equation}
V(\rho) = \nabla g(\rho) \in {\mathbb R}^{N_{\rm b}}.
\end{equation}
Based on \eqref{eqn:fermidirac} and the definition of the density
we define the potential-to-density mapping
\begin{equation}
\rho(V) = \diag(f_{\rm FD}(H_0 + \diagm(V))),
\label{eqn:potdensmap}
\end{equation}
which allows one to solve the self-consistency problem
\eqref{eqn:scfdm} by iteration in the potential $V$ only:
\newcommand{\delta V_n}{\delta V_n}
\begin{equation}
\label{eqn:potmix}
V_{n+1} = V_n + \alpha \delta V_n,
\end{equation}
where we defined the search direction
\begin{equation}
\delta V_n = V(\rho(V_n)) - V_n.
\end{equation}
The corresponding energy functional minimized by this fixed-point problem is
\begin{equation}
\label{eqn:energyV}
{\mathcal I}(V) = {\mathcal E}(f_{\rm FD}(H_0 + \diagm(V))).
\end{equation}
Compared to an algorithm based on Kohn-Sham Hamiltonians
as suggested in Section \ref{sec:hammix}
this formulation has the advantage that only vector-sized potentials $V_n$
instead of matrix-sized quantities need to be handled.
The analysis of the previous sections carries forward straightforwardly
to the potential mixing setting.
In particular one identifies as the analogue of $\bm{K}$
the Hessian of $g$, i.e.~the (two-point) Hartree-exchange-correlation kernel $K$,
and as the analogue of $\bm{\chi_0}$
the derivative of $V(\rho)$, which is the independent-particle susceptibility $\chi_0$.
The latter becomes apparent by comparing \eqref{eqn:chi4}
to the Adler-Wiser formula for $\chi_0$~\cite{Adler1962,Wiser1963}
\begin{equation}
\label{eqn:chi}
\begin{aligned}
\chi_0(V)
&= \sum_{i=1}^{N_b} \sum_{j=1}^{N_b}
\frac{f_{\rm FD}(\varepsilon_i) - f_{\rm FD}(\varepsilon_j)}{\varepsilon_i - \varepsilon_j} |\phi_i^\ast \phi_j\rangle
\langle \phi_i^{*} \phi_j|
\end{aligned}
\end{equation}
in which $(\varepsilon_i, \phi_i)$ denotes the eigenpairs of $H_0 + \diagm(V)$.
Both $K$ and $\chi_0$ arise naturally when considering the Jacobian matrix
\begin{equation}
J_\alpha = 1 - \alpha \big(1 - K(V_\ast) \chi_0(V_\ast) \big)
\label{eqn:Jacobian}
\end{equation}
of the potential-mixing SCF iteration \eqref{eqn:potmix} near a fixed point $V_\ast$.
If the eigenvalues of $J_\alpha$ lie strictly between $-1$ and $1$, the potential-mixing
SCF iterations converge.
By analogy with Hamiltonian mixing, Theorem \ref{thm:algorithm}
guarantees that global convergence can always be ensured by selecting $\alpha$ small enough.
In this respect our results from Section \ref{sec:hammix} strengthen
a number of previous results~\cite{dederichs1983self,gonze1996towards,cances2020convergence},
which established local convergence for sufficiently small $\alpha$.
\subsection{Improving the search direction $\delta V_n$: Preconditioning and acceleration}
The Jacobian matrix \eqref{eqn:Jacobian} involves the dielectric
matrix $\epsilon(V) = 1 - K(V) \chi_0(V)$, which can become
badly conditioned for many systems. In such cases a very small step
must be employed to ensure stability (smallest eigenvalue of
$J_{\alpha}$ larger than $-1$), which forces the largest eigenvalue of
$J_{\alpha}$ very close to $1$ and slows convergence down to an
impractical level. A solution is to improve the search direction $\delta V_n$
to ensure faster convergence~\cite{Woods2019}. This is usually
achieved by a combination of techniques jointly referred to as
``mixing'', which amend $\delta V_n$ using both preconditioning and
convergence acceleration.
Employing a preconditioned search direction
\begin{equation}
\delta V_n = P^{-1} [V(\rho(V_n)) - V_n]
\label{eqn:search_precon}
\end{equation}
in a damped SCF iteration, the corresponding Jacobian becomes
\begin{equation}
J_\alpha = 1 - \alpha P^{-1} \epsilon(V).
\end{equation}
Provided that the inverse $P^{-1}$ approximates the inverse dielectric
matrix $\epsilon^{-1}$ sufficiently well, the spectrum of
$P^{-1} \epsilon$ is close to $1$, so that a larger damping $\alpha$
and a faster iteration are possible. While suitable cheap preconditioners
$P$ are not yet known for all sources of bad conditioning in SCF
iterations, a number of successful strategies have been suggested.
Examples include Kerker mixing~\cite{Kerker1981} to improve SCF
convergence in metals or LDOS-based mixing~\cite{ldos} to tackle
heterogeneous metal-vacuum or metal-insulator systems. For a more
detailed discussion on this matter we refer the reader to
Reference~\onlinecite{ldos}.
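As an illustration, a sketch of Kerker-type preconditioning on a 1D toy grid might
look as follows; the reciprocal-space filter $|G|^2/(|G|^2 + k_0^2)$ is the standard
Kerker form, while the grid, the screening scale $k_0$ and the function names are
our own illustrative choices.
\begin{verbatim}
using FFTW  # assumed available (JuliaMath/FFTW.jl)

# Kerker-type preconditioning: scale the residual in reciprocal space by
# |G|^2 / (|G|^2 + k0^2), damping the long-wavelength (small |G|) components
# responsible for charge sloshing. The G = 0 component is projected out.
function kerker_apply(R::Vector{Float64}, boxlength::Real; k0=0.8)
    N = length(R)
    G = 2pi / boxlength .* fftfreq(N, N)   # reciprocal-space grid
    Rhat = fft(R)
    Rhat .*= abs2.(G) ./ (abs2.(G) .+ k0^2)
    real(ifft(Rhat))
end
\end{verbatim}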
An additional possibility to speed up convergence is to use black-box
convergence accelerators. These techniques
build up a history of the previous iterates $V_1, \ldots, V_n$
as well as the previous preconditioned residuals $P^{-1} R_1, \ldots, P^{-1} R_n$
(with $R_n = V(\rho(V_n)) - V_n$) and use this information to obtain the
next search direction $\delta V_n$.
The most frequently used acceleration technique in this context is variously
known as Pulay/DIIS/Anderson mixing/acceleration,
which we will refer to as Anderson acceleration.
This method obtains the search direction as a linear combination
\begin{equation}
\begin{aligned}
\delta V_n &= P^{-1} R_{n} \\
&\hspace{1.3em}+ \frac 1 \alpha \sum_{i=1}^{n-1} \beta_{i}\big(V_{i} + \alpha P^{-1} R_{i} - V_{n} - \alpha P^{-1} R_{n}\big)
\end{aligned}
\label{eqn:Anderson}
\end{equation}
where the expansion coefficients $\beta_{i}$ are found by minimizing
\begin{equation}
\norm{P^{-1} R_n + \sum_{i=1}^{n-1} \beta_i \left(P^{-1} R_i - P^{-1} R_n\right)}.
\label{eqn:AndersonMinimization}
\end{equation}
In practice it is infeasible to keep a potentially large number of
past iterates, so only the last 10 iterates are taken into
account. Furthermore, the associated linear least-squares problem can become
ill-conditioned \cite{Walker2011}. We use the simple strategy of
discarding past iterates to ensure a maximal conditioning of $10^{6}$.
This method is known to be equivalent to a multisecant Broyden method.
In the linear regime and with infinite history, Anderson acceleration
is further equivalent to the well-known GMRES method for solving linear
equations. For details see Reference~\cite{Chupin2020} and references therein.
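For concreteness, a minimal Julia sketch of a single Anderson step following
\eqref{eqn:Anderson} and \eqref{eqn:AndersonMinimization} could look as follows;
the column-wise storage of the histories and all names are our own conventions,
and the conditioning-based discarding of old iterates is omitted for brevity.
\begin{verbatim}
using LinearAlgebra

# One Anderson step: Vs and PRs are hypothetical histories whose column i
# holds the iterate V_i and the preconditioned residual P^{-1} R_i.
function anderson_direction(Vs::AbstractMatrix, PRs::AbstractMatrix, alpha::Real)
    n = size(Vs, 2)                # the current iterate is column n
    PRn = PRs[:, n]
    n == 1 && return PRn           # no history yet: plain preconditioned step
    A = PRs[:, 1:n-1] .- PRn       # columns P^{-1}R_i - P^{-1}R_n
    beta = A \ (-PRn)              # linear least-squares for the coefficients
    dV = copy(PRn)
    for i in 1:n-1                 # assemble (eqn:Anderson)
        dV .+= beta[i] / alpha .* (Vs[:, i] .+ alpha .* PRs[:, i]
                                   .- Vs[:, n] .- alpha .* PRn)
    end
    dV
end
\end{verbatim}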
Provided that nonlinear effects are
negligible, Anderson acceleration typically inherits the favorable
convergence properties of Krylov methods~\cite{Saad2003}, explaining
their frequent use in the DFT context. However, especially at the
beginning of the SCF iterations or when treating systems that feature
many close SCF minima, nonlinear effects can become important. In
such cases the behavior of Anderson acceleration is more complex and
mathematically not yet fully understood. In particular the dependence
of the convergence behavior on numerical parameters such as the
chosen damping can become less regular and harder to interpret, as we
will see in our numerical examples in Sections \ref{sec:nonlinear} and
\ref{sec:tm}.
\section{Adaptive damping algorithm}
\label{sec:adaptive}
\newcommand{\atrial}{\widetilde{\alpha}}
Up to now we have assumed that the step size $\alpha$ is constant,
reflecting common practice in plane-wave DFT computations. We now describe the
main contribution of this paper, an algorithm to adapt this step size
to increase robustness and minimize user intervention into the
convergence process. At step $n$ of the algorithm, given a trial
potential $V_{n}$, we compute the search direction $\delta V_{n}$
through \eqref{eqn:Anderson}, and look for a step $\alpha_{n}$ to take as
\begin{align}
V_{n+1} = V_{n} + \alpha_{n} \delta V_{n}
\end{align}
Note that the definition of $\delta V_{n}$ itself in
\eqref{eqn:Anderson} depends on a stepsize $\alpha$; since our scheme
will adapt $\alpha_{n}$ to $\delta V_{n}$, we cannot just take
$\alpha=\alpha_{n}$ in \eqref{eqn:Anderson}, and so we use for
$\alpha$ a trial damping $\widetilde{\alpha}$ (to be discussed in Section
\ref{sec:atrial}).
To select $\alpha_{n}$, we could try to minimize ${\mathcal I}(V_{n+1})$, or
employ an Armijo line-search strategy. However,
each evaluation of ${\mathcal I}$ is very costly, and it is therefore desirable
to obtain efficient approximate schemes. The energy
${\mathcal I}(V_n + \alpha \delta V_n)$ can be expanded as
\begin{equation}
\begin{aligned}
{\mathcal I}(V_n + \alpha \delta V_n)
&= {\mathcal I}(V_n) + \alpha \langle \chi_0(V_n) R_{n}, \delta V_n\rangle\\
&+ \frac12 \alpha^2 \langle \delta V_n , d^2 {\mathcal I}(V_n) \cdot \delta V_n \rangle
+ O(\alpha^3 \|\delta V_n\|^{3}),
\end{aligned}
\label{eq:energy_expansion}
\end{equation}
where we have used $\nabla {\mathcal I}(V_n) = \chi_{0}(V_{n}) R_{n}$. This
approximation is good for small dampings $\alpha$ and/or close
to the solution, when $\delta V_n$ is small.
The object $d^{2} {\mathcal I}$ is complicated and expensive to compute in
general. However, close to a fixed point, we can use the expression
\eqref{eqn:hessI} to write $d^{2} {\mathcal I}(V_{n}) \approx
\chi_{0}(V_{n})(1-K(V_n)) \chi_0(V_n)$. We can then approximate the terms in \eqref{eq:energy_expansion},
leading to the model
\begin{equation}
\begin{aligned}
\varphi_n(\alpha) &= {\mathcal I}(V_n)
+ \alpha \left\langle R_{n}, \chi_0(V_n) \delta V_n \right\rangle\\
&- \frac12 \alpha^2 \left\langle \chi_0(V_n) \delta V_n,
\big[1-K(V_n) \chi_0(V_n)\big] \delta V_n \right\rangle
\end{aligned}
\label{eqn:modelFirst}
\end{equation}
for the energy, where we have used the self-adjointness of
$\chi_0(V_n)$ to make it act only on $\delta V_n$. To compute the coefficients in this model, we still need to
compute $\chi_{0}(V_{n}) \delta V_n$, a costly operation. However, for all
$\alpha$ we have to first order
\begin{align}
\alpha \chi_{0}(V_{n}) \delta V_n = \rho(V_{n}+\alpha \delta V_n) - \rho(V_{n}) + O(\alpha^{2}\|\delta V_n\|^{2}).
\label{eqn:model_coeffs}
\end{align}
Note that if we set $V_{n+1} = V_{n} + \alpha_{n} \delta V_n$ and then proceed along
the iterative algorithm, we will have to compute $\rho(V_{n+1})$ in any case.
An approximation to the coefficients of the model $\varphi_n$ can therefore
be constructed without any extra diagonalization.
This is the basis of the adaptive damping scheme described in Algorithm \ref{alg:adaptive}.
Since $\rho(V_{n})$ is already known (it is needed to construct $\delta V_n$),
the only expensive step in this algorithm is the computation of $\rho(V_{n+1})$,
which occurs only once per loop iteration.
In particular, when set to always accept $V_{n+1}$, this algorithm reduces to the
standard damped SCF algorithm. Notice that the algorithm only allows the magnitude
of $\alpha_n$ to shrink between loop iterations.
As a result, (i) the model $\varphi_n$ provides better and better damping predictions
and (ii), keeping in mind our analysis of Section \ref{sec:hammix},
the proposed tentative steps $V_{n+1}$ become more likely to be accepted.
\begin{algorithm}[H]
\caption{Adaptive damping algorithm}
\label{alg:adaptive}
\begin{algorithmic}[1]
\Require{
Current iterate $V_{n}$, search direction $\delta V_n$, trial damping $\widetilde{\alpha}$
}
\Ensure{Damping $\alpha_{n}$, next iterate $V_{n+1}$}
%
\State $\alpha_n \gets \widetilde{\alpha}$
\Loop
\State Make tentative step $V_{n+1} = V_n + \alpha_n \delta V_n$
\State Compute $\rho(V_{n+1}), {\mathcal I}(V_{n+1})$ (the expensive step)
\If{accept $V_{n+1}$ (see Section \ref{sec:step_acceptance})}
\State \textbf{break}
\Else
\State Build the coefficients of the model $\varphi_n$
%
\If{model $\varphi_n$ is good (see Section \ref{sec:quality_model})}
\State $\alpha_n \gets \argmin_\alpha \varphi_n(\alpha)$
\State Scale $\alpha_n$ to ensure $\abs{\alpha_n}$ is strictly decreasing
\Else
\State $\alpha_n \gets \frac{\alpha_n}{2}$
\EndIf
\EndIf
\EndLoop
\end{algorithmic}
\end{algorithm}
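In code, the control flow of Algorithm \ref{alg:adaptive} can be sketched as below;
the problem-specific pieces are passed in as hypothetical callbacks
(\texttt{accept} for \eqref{eqn:acceptance}, \texttt{fit\_model} building $\varphi_n$
from $\rho(V_{n+1})-\rho(V_n)$, \texttt{model\_ok} and \texttt{model\_min} for
Section \ref{sec:quality_model}), so this is an illustration of the loop structure
rather than the DFTK implementation.
\begin{verbatim}
# Skeleton of Algorithm 1 with problem-specific callbacks; an illustration
# of the control flow only, not the DFTK implementation.
function adaptive_step(Vn, dVn, alpha_trial; accept, fit_model, model_ok, model_min)
    alpha = alpha_trial
    while true
        Vtrial = Vn + alpha * dVn               # tentative step
        accept(Vtrial) && return alpha, Vtrial  # expensive rho/energy inside
        phi = fit_model(Vn, dVn, alpha, Vtrial)
        if model_ok(phi)
            astar = model_min(phi)              # argmin_alpha phi_n(alpha)
            # keep |alpha| strictly decreasing to guarantee termination
            alpha = sign(astar) * min(abs(astar), 0.95 * abs(alpha))
        else
            alpha /= 2
        end
    end
end
\end{verbatim}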
We complete the description of the algorithm by specifying some practical points:
when to accept a step,
how to determine whether a model is good, how to select the initial
trial step $\widetilde \alpha$ and how to integrate adaptive damping
with Anderson acceleration.
\subsection{Step acceptance}
\label{sec:step_acceptance}
We accept the step as soon as the proposed next iterate
$V_{n+1} = V_n + \alpha_n \delta V_n$ satisfies
\begin{equation}
\begin{aligned}
{\mathcal I}(V_{n+1}) &< {\mathcal I}(V_n)
&\text{or}\quad
%
\norm{P^{-1} R_{n+1}} &< \norm{P^{-1} R_n},
\end{aligned}
\label{eqn:acceptance}
\end{equation}
i.e. if either the energy or the preconditioned residual decreases.
Although accepting steps higher in energy may decrease the robustness
of the algorithm, we found in practice that accepting steps which
decrease the residual helps keep the method effective in the later
stages of convergence, when the Anderson acceleration is able to take
efficient steps that may slightly increase the energy but are not
worth reverting.
\subsection{Quality of the model $\varphi_n$}
\label{sec:quality_model}
Our model $\varphi_n$ makes various assumptions that might not hold in
practice, especially far from convergence. However, by comparing
the actual energy ${\mathcal I}(V_n+\alpha_{n}\delta V_n)$ to the prediction $\varphi_n(\alpha_{n})$, we can
inexpensively check the quality of the model. We do this by computing the ratio
\begin{equation}
r_n = \frac{\abs{{\mathcal I}(V_n + \alpha_{n} \delta V_n) - \varphi_n(\alpha_{n})}}{\abs{{\mathcal I}(V_n + \alpha_{n} \delta V_n) - {\mathcal I}(V_n)}},
\label{eqn:relerrormodel}
\end{equation}
which should be small if the model is accurate.
We deem the model good enough if
\begin{equation}
r_n < 0.1 \qquad \text{and} \qquad \text{$\varphi_n$ has a minimum}.
\label{eqn:modelacceptance}
\end{equation}
Notice that the minimizer of $\varphi_n$ is not necessarily positive.
For particularly accurate models ($r_n < 0.01$) we additionally
allow backward steps (i.e.~$\alpha_n < 0$),
which turned out to improve overall convergence in our tests.
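The test itself is cheap; writing the quadratic model as
$\varphi_n(\alpha) = e_0 + b\,\alpha + \tfrac{1}{2} c\,\alpha^2$, it reduces to a
few scalar operations, as in the following sketch (variable names ours).
\begin{verbatim}
# Quality ratio (eqn:relerrormodel) and minimizer of the quadratic model
# phi(alpha) = e0 + b*alpha + c*alpha^2/2, which has a minimum iff c > 0,
# located at alpha* = -b / c.
model_error_ratio(I_trial, I_curr, phi_pred) =
    abs(I_trial - phi_pred) / abs(I_trial - I_curr)

model_minimum(b, c) = c > 0 ? -b / c : nothing   # nothing: no minimum

# Acceptance rule (eqn:modelacceptance); backward steps (alpha* < 0) are
# only allowed for particularly accurate models (r < 0.01).
good_model(r, astar) =
    r < 0.1 && astar !== nothing && (astar > 0 || r < 0.01)
\end{verbatim}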
\subsection{Choice of the trial step $\widetilde \alpha$}
\label{sec:atrial}
To ensure that as many SCF steps as possible require only a single line-search step,
we dynamically adjust $\widetilde{\alpha}$ between two subsequent SCF steps.
A natural approach is to reuse the adaptively determined damping $\alpha_n$
as the $\widetilde{\alpha}$ in the next line search,
which effectively shrinks $\widetilde{\alpha}$ between SCF steps.
However, the algorithm may need small values of $\widetilde{\alpha}$ in the
initial stages of convergence, and keeping these small values for too
long limits the eventual convergence rate.
To counteract the decreasing trend, we allow $\widetilde{\alpha}$ to increase
if a line search was immediately successful (i.e.~$\alpha_n = \widetilde{\alpha}$).
In this case we again use the model $\varphi_n$. If it is sufficiently
good (as described in Section \ref{sec:quality_model}), we set
\begin{equation}
\widetilde{\alpha} \gets \max\Big(\widetilde{\alpha}, \ 1.1 \cdot \argmin_\alpha \varphi_n(\alpha) \Big).
\end{equation}
Otherwise $\widetilde{\alpha}$ is left unchanged.
As an additional measure, to keep the SCF from stagnating we prevent
$\widetilde{\alpha}$ from undershooting a minimal trial damping $\atrial_\text{min}$. We mostly
used $\atrial_\text{min} = 0.2$ as a baseline and report on varying this parameter
in the numerical experiments.
With this dynamic adjustment of $\widetilde{\alpha}$, we checked that its initial
value $\widetilde{\alpha}_0$, i.e.~the value used in the first SCF step, has
little influence on the overall convergence behavior. However,
in well-behaved cases too small a value for this parameter
leads to an unnecessary slowdown of the first few SCF steps.
We therefore settled on $\widetilde{\alpha}_0 = 0.8$, similar to standard recommendations
for the default damping~\cite{Kresse1996}.
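The bookkeeping between SCF steps then amounts to the following sketch
(flag and variable names are ours, not DFTK API):
\begin{verbatim}
# Trial-damping update between SCF steps: reuse the adapted damping, allow
# growth after an immediately successful line search with a good model, and
# never undershoot the baseline alpha_trial_min.
function update_trial(alpha_trial, alpha_n, astar, alpha_trial_min; model_is_good)
    if alpha_n == alpha_trial && model_is_good && astar !== nothing
        alpha_trial = max(alpha_trial, 1.1 * astar)
    else
        alpha_trial = alpha_n
    end
    max(alpha_trial, alpha_trial_min)
end
\end{verbatim}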
\section{Numerical tests}
\label{sec:tests}
\label{sec:implementation}
The adaptive damping algorithm described in Section \ref{sec:adaptive}
was compared against a conventional preconditioned damped potential-mixing SCF scheme
featuring only a fixed damping.
For this we employed three kinds of test problems.
The first are calculations on aluminium systems of various size
including cases with an unsuitable
computational setup, i.e.~where charge sloshing is not prevented
by employing the Kerker preconditioner.
These are discussed in more detail in Section~\ref{sec:aluminium}.
The second, discussed in Section~\ref{sec:nonlinear},
is a gallium arsenide system which we previously found to feature
strongly nonlinear behavior in the initial SCF steps~\cite{ldos}.
Lastly, in Section~\ref{sec:tm} we consider
Heusler systems and other transition-metal compounds,
which are generally found to be difficult to converge.
\begin{table*}
\centering
\begin{tabular}{%
l@{\extracolsep{1em}}l%
@{\extracolsep{1em}}c@{\extracolsep{1em}}c%
*{9}{@{\extracolsep{0.5em}}c}%
@{\extracolsep{1em}}c@{\extracolsep{1em}}c%
}
\hline \hline
System&Precond.&&\multicolumn{10}{c}{fixed damping $\alpha$}&&adaptive\\
&&& 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 &&damping\\
\hline
\ce{Al8} supercell & Kerker\textsuperscript{a} && $\times$ & 58 & 37 & 27 & 21 & 16 & 13 & \textbf{11} & 12 & 18 && 17 \\
\ce{Al8} supercell & None\textsuperscript{a} && $\times$ & \textbf{52} & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ && 24 \\
\ce{Al40} supercell & Kerker && 19 & 15 & 14 & 12 & \textbf{11} & 12 & 12 & 12 & 12 & 12 && 12 \\
\ce{Al40} supercell & None && \textbf{38} & 40 & 40 & 39 & 44 & 50 & 49 & $\times$ & 76 & $\times$ && 44 \\
\ce{Al40} surface & Kerker && $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ && $\times$ \\
\ce{Al40} surface & None && \textbf{46} & 48 & 50 & 49 & 51 & 60 & 61 & 66 & 89 & $\times$ && 49 \\
\hline
\ce{Ga20As20} supercell & None && \textbf{26} & 33 & 40 & 42 & 45 & 44 & 70 & 70 & 65 & 76 && 26 \\
\hline
\ce{CoFeMnGa} & Kerker && $\times$ & $\times$ & $\times$ & $\times$ & 28 & \textbf{21} & 24 & 28 & 22 & 22 && 30 \\
\ce{Fe2CrGa} & Kerker && $\times$ & $\times$ & $\times$ & 27 & $\times$ & $\times$ & \textbf{19} & 25 & $\times$ & 22 && 39 \\
\ce{Fe2MnAl} & Kerker && $\times$ & 48 & $\times$ & $\times$ & $\times$ & 20 & 21 & 17 & 16 & \textbf{15} && 34 \\
\ce{FeNiF6} & Kerker && $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & 23 & 22 & \textbf{21} && 24 \\
\ce{Mn2RuGa} & Kerker && $\times$ & $\times$ & $\times$ & $\times$ & 37 & 24 & 23 & \textbf{22} & 23 & 23 && 36 \\
\ce{Mn3Si} & Kerker && $\times$ & $\times$ & $\times$ & $\times$ & 26 & 30 & 22 & \textbf{20} & $\times$ & $\times$ && $\times$ \\
\ce{Mn3Si} (AFM)\textsuperscript{b} & Kerker && $\times$ & $\times$ & 58 & 29 & 31 & 30 & \textbf{20} & 22 & 26 & 28 && 35 \\
\hline
\ce{Cr19} defect & Kerker && $\times$ & $\times$ & $\times$ & 74 & 46 & 48 & 46 & \textbf{41} & 47 & 53 && 48 \\
\ce{Fe28W8} multilayer & Kerker && \textbf{32} & 34 & 37 & 34 & 38 & 43 & 41 & 48 & $\times$ & $\times$ && 37 \\
\hline \hline
\end{tabular}
\begin{minipage}{0.64\textwidth}
\vspace{0.15em}
\begin{flushleft}
\footnotesize
\textsuperscript{a}without Anderson acceleration\\
\textsuperscript{b}initial guess with antiferromagnetic spin ordering
\end{flushleft}
\end{minipage}
\caption{Number of Hamiltonian diagonalizations required to obtain convergence
in the energy to $10^{-10}$ with a cross ($\times$)
denoting a failure to converge within $100$ diagonalizations.
Except where otherwise noted Anderson acceleration has been employed
and for the transition-metal systems (third/fourth group of compounds)
a ferromagnetic initial guess has been used.
On supercells atomic positions were slightly randomised.
Computational details are given in the text.
Notice that the transition-metal systems
may not converge to the same SCF solution for each calculation.
}
\label{tab:convtable}
\end{table*}
For our tests we used the implementation of the adaptive damping algorithm available
in the density-functional toolkit (DFTK)~\cite{DFTKjcon,DFTK},
a flexible open-source Julia package for plane-wave density-functional theory simulations.
For all calculations we used the Perdew-Burke-Ernzerhof~(PBE) exchange-correlation
functional~\cite{Perdew1996} as implemented in the
libxc~\cite{Lehtola2018} library, and Goedecker-Teter-Hutter
pseudopotentials~\cite{Goedecker1996}.
Depending on the system, a kinetic energy cutoff between $20$ and $45$ Hartree
as well as an unshifted Monkhorst-Pack grid with a $k$-point spacing of
at most $0.14$ inverse Bohrs was used.
For the Heusler systems the spacing was reduced to at most $0.08$ inverse Bohrs.
With the exception of the gallium arsenide system,
a Gaussian smearing scheme with a width of 0.001 Hartree was employed.
For the systems containing transition-metal elements collinear spin polarization
was allowed and the initial guess was constructed assuming
ferromagnetic spin ordering except when otherwise noted.
Notice that this initial guess is generally not close to the
final spin ordering; see Section \ref{sec:tm} for a discussion of this choice.
The full computational details for each system (including the employed structures)
as well as instructions on how to reproduce all results of this paper
can be found in our repository of supporting information~\cite{reproducers}.
Table~\ref{tab:convtable} summarizes the required number of
Hamiltonian diagonalizations to converge the SCF energy to an error of
$10^{-10}$ Hartree for various fixed dampings $\alpha$ as well as the
adaptive damping algorithm. We carefully verified the obtained solutions
to be stationary points by monitoring the SCF residual $R_n$.
Note that for the adaptive damping
procedure the number of SCF steps is not identical to the number of
Hamiltonian diagonalizations, since multiple tentative steps might be
required until a step is accepted. Since iterative diagonalization
dominates the overall cost of the SCF procedure, the number of
diagonalizations provides a better metric for comparing the cost
of the two damping strategies.
\subsection{Inadequate preconditioning: Aluminum}
\label{sec:aluminium}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Al_nodiis.pdf}\\[0.5em]
\includegraphics[width=0.48\textwidth]{Al_nodiis_Kerker.pdf}
\caption{
SCF convergence for a randomized aluminium supercell (8 atoms)
without Anderson acceleration
and using simple mixing (top) as well as Kerker mixing (bottom).
The adaptive scheme converges robustly even in the unpreconditioned case,
without requiring the manual selection of a step.
}
\label{fig:Al}
\end{figure}
To investigate the influence of a suboptimal choice of preconditioner
on the convergence of both the fixed and adaptive damping strategies,
we considered three aluminium test systems:
two elongated bulk supercells with 8 or 40 atoms
as well as a surface with 40 atoms
and a portion of vacuum of identical size.
For the elongated supercells both the initial guess and the atomic
positions were slightly rattled.
The results are summarized in the first segment of
Table~\ref{tab:convtable}. For the small \ce{Al8} system, where
Anderson acceleration was not used, representative convergence curves
are shown in Figure~\ref{fig:Al}. Due to the well-known
charge-sloshing behavior, SCF iterations on such metallic systems are
ill-conditioned. Without preconditioning, small fixed damping values
$\alpha$ are thus required to obtain convergence, with only a small
window of damping values achieving convergence within
100 Hamiltonian diagonalizations. On the other hand, in combination with
the matching Kerker preconditioning strategy~\cite{Kerker1981} large
fixed damping values generally converge more quickly.
In contrast the adaptive damping strategy is much less sensitive to
the choice of the minimal trial damping $\atrial_\text{min}$. Moreover, it leads
to a much improved convergence for the case without suitable
preconditioning while still maintaining similar costs if Kerker mixing
is employed.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{AlVac.pdf}
\caption{
SCF convergence without preconditioning
for an elongated supercell of an aluminium surface
with 40 aluminium atoms and a vacuum portion of equal size.
A fixed step of $\alpha=0.1$ was optimal here,
but the adaptive scheme achieves very similar
performance without manual step-size selection.
}
\label{fig:AlVac}
\end{figure}
These observations carry over to cases including Anderson acceleration
and larger aluminium systems, see Figure~\ref{fig:AlVac} for a
representative computation on an aluminium surface.
Notice that Kerker mixing is extremely badly suited to the
large aluminium surface, such that convergence is not obtained within
100 Hamiltonian diagonalizations for any of the damping strategies;
see Reference \onlinecite{ldos} for a better preconditioner
for such inhomogeneous systems.
Overall employing adaptive damping therefore makes the reliability and
efficiency of the SCF less dependent on the choice of the
preconditioning strategy, while not requiring the user to manually
select a damping parameter.
\subsection{Strong nonlinear effects: Gallium arsenide}
\label{sec:nonlinear}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{GaAs.pdf}
\caption{Elongated gallium arsenide supercell
(20 gallium and 20 arsenic atoms)
with slightly randomized atomic positions.
For all cases, simple mixing and Anderson acceleration are
employed. }
\label{fig:GaAs}
\end{figure}
In previous work we identified elongated supercells of gallium arsenide
with slightly perturbed atomic positions to
be a simple system that still exhibits strong nonlinear effects
when the SCF is far from convergence~\cite{ldos}.
In the convergence profiles of these systems
this manifests itself in the error shooting up abruptly, with Anderson
acceleration failing to recover quickly. In Figure~\ref{fig:GaAs}, for example, the error
increases steeply between Hamiltonian diagonalizations 3 and 6 for the
fixed-damping approaches with stepsizes beyond $0.1$. It should be
noted that this behavior is an artefact of the interplay of Anderson
acceleration and damped SCF iterations on these systems; it is not
observed if Anderson acceleration is not employed. For more
details see the discussion of the gallium arsenide case in
Ref.~\onlinecite{ldos}.
For the calculations employing a fixed damping strategy only a small
damping value of $\alpha=0.1$ is able to prevent this behavior.
Already slightly larger damping values noticeably increase the number
of Hamiltonian diagonalizations required to reach convergence (compare
Table~\ref{tab:convtable}), and thus a careful selection of the
damping value is in order for such systems. In contrast the proposed
adaptive damping strategy with our baseline minimal trial damping of
$\atrial_\text{min} = 0.2$ automatically detects the unsuitable Anderson steps and
downscales them. As a result an optimal or near-optimal cost is
obtained without any manual parameter tuning. For comparison, we also
display in Figure~\ref{fig:GaAs} the results with a large value of
$\atrial_\text{min}=0.5$, which prevents the damping algorithm from avoiding the
nonlinear effects.
\subsection{Challenging transition-metal compounds}
\label{sec:tm}
In this section we discuss two types of transition-metal systems.
First, we consider a selection of smaller primitive unit cells,
including the mixed iron-nickel fluoride \ce{FeNiF6}
as well as a number of Heusler-type alloy structures,
see the third group of Table \ref{tab:convtable}.
These structures were found in the course of high-throughput computations
to be difficult to converge~\cite{tricky}.
Moving to larger systems we considered
an elongated chromium supercell with a single vacancy defect
as well as a layered iron-tungsten system,
see the fourth group of Table \ref{tab:convtable}.
Both test cases were taken from previous studies~\cite{Winkelmann2020,Marks2021}
on SCF algorithms.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fe2CrGa_nodiis.pdf}\\[0.5em]
\includegraphics[width=0.48\textwidth]{Fe2CrGa.pdf}
\caption{
Convergence of the \ce{Fe2CrGa} Heusler alloy with Kerker mixing
without Anderson acceleration (top)
and
with Anderson acceleration (bottom).
Notice that the SCF calculations do not necessarily converge to
the same local SCF minimum.
}
\label{fig:Fe2CrGa}
\end{figure}
In particular the Heusler
compounds are known to exhibit rich and unusual magnetic and electronic properties.
From our test set, for example, \ce{Fe2MnAl} shows half-metallic behaviour,
i.e.~a vanishing density of states at the Fermi level in only the minority
spin channel~\cite{Belkhouane2015}.
Other compounds, such as \ce{Mn2RuGa} or \ce{CoFeMnGa},
show an involved pattern of ferromagnetic and antiferromagnetic coupling
of the neighboring transition-metal sites~\cite{Wollmann2015,Shi2020}.
Such effects are closely linked to the $d$-orbitals forming localized states
near the Fermi level~\cite{He2018,Jiang2021}
and imply that there are multiple accessible spin configurations,
which are close in energy.
Unfortunately these two properties also make Heusler compounds
difficult to converge using standard methods.
First, localized states near the Fermi level are a source of ill-conditioning for the
SCF fixed-point problem~\cite{ldos}, with no cheap and widely
applicable preconditioning strategy being available.
Second, the abundance of multiple spin configurations
implies a more involved SCF energy landscape with multiple SCF minima.
On such a landscape convergence may easily ``hesitate'' between
different local minima or stationary points.
Furthermore the setup of an appropriate initial guess, which ideally guides
the SCF towards the final spin ordering, requires human expertise and is
hard to automate in a high-throughput setting.
Albeit not fully appropriate for the systems we consider,
we followed the guess setup also used in the
aforementioned high-throughput procedure~\cite{tricky},
namely starting the calculations from an initial guess based on
ferromagnetic~(FM) spin ordering.
As a result in our tests,
calculations on Heusler systems without Anderson acceleration require
very small fixed damping values below $0.1$ even if the Kerker preconditioner is
used, see Figure~\ref{fig:Fe2CrGa} (a). The adaptive damping strategy
improves the convergence behavior and in agreement with our previous
results partially corrects for the mismatch in preconditioner and initial guess.
Still, convergence is extremely slow.
Acceptable convergence is only obtained in combination with
Anderson acceleration.
However, the Anderson-accelerated
fixed-damping SCF is very sensitive to the chosen damping $\alpha$,
see Figure~\ref{fig:Fe2CrGa} (b). In particular the lowest-energy SCF
minimum is only found within 100 Hamiltonian diagonalizations for
$\alpha=0.4$, $\alpha=0.7$, $\alpha=0.8$ and $\alpha=1.0$. Other fixed
damping values initially converge, but then convergence stagnates and
the error decreases only very slowly beyond around $30$ diagonalizations.
We investigated the source of this pathological behavior by restarting
the iterations after stagnation.
This did not noticeably alter the behavior,
eliminating the possibility that the history of the iterates
within the Anderson acceleration scheme
somehow ``jams'' the SCF into stagnation
in the strongly nonlinear regime,
as can happen for instance in nonlinear
conjugate gradient methods~\cite{hager2006survey}.
Another possibility is that the iterations
somehow got into a particularly rough region of the SCF energy landscape
between multiple stationary points, which is simply hard to escape.
However, this is not the case either. For instance on the \ce{Fe2CrGa}
system with a fixed damping of $0.3$, the restarted iterations did
converge quickly using an Anderson scheme with a small maximal
conditioning of $10^{2}$ for the linear least squares problem.
It would therefore appear that this phenomenon is due to inadequate
regularization of the least squares problem. We expect more
sophisticated techniques for controlling the Anderson
history~\cite{Chupin2020} to be worth investigating for such systems
in the future.
Because of this stagnation issue we found Anderson-accelerated SCF
iterations to become unreliable for our transition-metal test systems:
for fixed damping values below $\alpha=0.5$, hardly any calculation
converges. Notably, due to the non-trivial interplay with the Anderson
scheme, this result is the exact opposite of our theoretical
developments on damped SCF iterations in Section \ref{sec:hammix},
which suggested reducing the damping to achieve reliable convergence.
Overall the transition-metal cases emphasize the difficulty in manually choosing
an appropriate fixed damping.
For a number of cases the window of converging damping values
is rather narrow, e.g.~for \ce{Fe2CrGa}, \ce{FeNiF6}
or \ce{Mn3Si} (with the FM guess),
and in our tests only a single damping value of $\alpha=0.8$ fortuitously
manages to converge all systems.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Mn3Si.pdf}
\caption{Convergence of the \ce{Mn3Si} Heusler compound
with Kerker mixing and Anderson acceleration
and starting from a ferromagnetically ordered initial guess.
Small dampings are susceptible to stagnation
induced by Anderson instabilities.
}
\label{fig:Mn3Si}
\end{figure}
In contrast the proposed adaptive damping strategy with our baseline
value of $\atrial_\text{min} = 0.2$ is less susceptible to the stagnation issue.
Across the unit cells and extended transition-metal systems we considered
we observed a convergence failure in only one test case, namely the
\ce{Mn3Si} Heusler alloy with the FM guess, see Figure \ref{fig:Mn3Si}.
For some test cases,
adaptive damping did cause a noticeable computational overhead: in
extreme cases (such as \ce{Fe2CrGa} or \ce{Fe2MnAl}) the number of
required Hamiltonian diagonalizations almost doubles. However, it
should be emphasized that no user adjustments were needed to obtain
these results, even though adaptive damping has been constructed on
the principle, invalid here, that smaller damping increases reliability.
Yet even in cases where Anderson instabilities cause non-convergence
of the adaptive scheme, fine-tuning is possible. Increasing the
minimal trial damping from $\atrial_\text{min}=0.2$ to $\atrial_\text{min}=0.5$, for example,
increases the minimal step size and thus lowers the risk of Anderson
stagnation. For all transition-metal cases we considered $\atrial_\text{min}=0.5$
strictly reduces the number of diagonalizations required to reach
convergence compared to $\atrial_\text{min}=0.2$, see for example
Figure~\ref{fig:Fe2CrGa} (b). Moreover this parameter adjustment even
resolves the convergence issues of \ce{Mn3Si}, see Figure~\ref{fig:Mn3Si}.
If manual intervention is possible,
another option is to incorporate prior knowledge of the
final ground-state electronic structure into the initial guess.
For the \ce{Mn3Si} case, for example, an improved initial guess based on
an antiferromagnetic spin ordering (AFM) between adjacent manganese layers
simplifies the SCF problem, such that both a larger range of fixed
damping values as well as the adaptive damping strategy give rise to
converging calculations.
\section{Conclusion}
\label{sec:conclusion}
We proposed a new linesearch strategy for SCF computations, based
on an efficient approximate model for the energy as a function of the
damping. Our algorithm follows four general principles: (a) the
algorithm should need no manual intervention from the user; (b) it
should be combinable with known effective mixing techniques such as
preconditioning and Anderson acceleration; (c) in ``easy'' cases where
convergence with a fixed damping is satisfactory it should not slow
down too much; and (d) it should be possible to relate it to schemes
with proven convergence guarantees. We demonstrated that our proposed
scheme fulfills all these objectives. With our default parameter choice
of $\atrial_\text{min} = 0.2$ the resulting adaptively damped SCF algorithm
is able to converge all of the ``easy'' cases faster or almost as fast as
the fixed-damping method with the best damping.
Simultaneously it is more robust than the fixed-damping method
on the ``hard'' cases we considered,
such as elongated bulk metals and metal surfaces without proper preconditioning
or Heusler-type transition-metal alloys.
In particular the latter kind of system features a very irregular convergence
behavior with respect to the damping parameter, making a robust manual damping
selection very challenging.
In practice the classification between ``easy'' and ``hard'' cases may well depend
on the considered system and the details of the computational setup,
e.g.~the employed mixing and acceleration techniques.
However, our scheme makes no assumptions about
how a proposed SCF step has been obtained.
We therefore believe adaptive damping to be a black-box stabilisation technique
for SCF iterations, which applies beyond the Anderson-accelerated setting
we have considered here.
Still, our results on these ``hard'' cases also highlight poorly-understood
limitations of the commonly used Anderson acceleration process. For example,
despite following standard recommendations to increase Anderson robustness,
we frequently observe SCF iterations to stagnate.
A more thorough understanding of this effect would
be an interesting direction for future research.
Our scheme was applied to semilocal density functionals in a
plane-wave basis set. It is not specific to plane-wave basis sets, and
we expect it to be similarly efficient in other ``large'' basis sets
frequently used in condensed-matter physics. For atom-centered basis sets,
like those common in quantum chemistry, direct mixing of the
density matrix is feasible, and likely more efficient. Our scheme does
not apply directly to hybrid functionals, where orbitals or Fock
matrices have to be mixed also; an extension to this case would be an
interesting direction for future research.
\section*{Acknowledgements}
This project has received funding from
the European Research Council (ERC) under
the European Union's Horizon 2020 research and innovation program
(grant agreement No 810367).
We are grateful to Marnik Bercx and Nicola Marzari
for pointing us to the challenging transition-metal structures
that stimulated most of the presented developments.
Fruitful discussions with Eric Cancès, Xavier Gonze and Benjamin Stamm
as well as computational time provided at École des Ponts and RWTH Aachen University
are gratefully acknowledged.
\section*{Appendix: mathematical proofs}
\label{sec:proofs}
\noindent
\begin{lemma}
\label{sec:lemma}
Let
\begin{align}
H_{*} = \sum_{i=1}^{N_{\rm b}} \varepsilon_{i} |\phi_{i}\rangle\langle \phi_{i}|
\end{align}
with orthonormal $\phi_{i}$ and non-decreasing $\varepsilon_{i}$. Let
$f : {\mathbb R} \to {\mathbb R}$ be a real analytic function in a neighborhood of $[\varepsilon_{1},\varepsilon_{N_{\rm b}}]$. Then the
map $H \mapsto f(H)$ is analytic in a neighborhood of $H_{*}$, and
\begin{align}
{\bf df}(H_{*}) \cdot \delta H = \sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac{f(\varepsilon_{i})-f(\varepsilon_{j})}{\varepsilon_{i}-\varepsilon_{j}}\langle \phi_{i}, \delta H \phi_{j} \rangle |\phi_{i}\rangle\langle \phi_{j}|
\end{align}
with the convention that $\frac{f(\varepsilon_{i}) -
f(\varepsilon_{i})}{\varepsilon_{i}-\varepsilon_{i}} = f'(\varepsilon_{i})$.
\end{lemma}
\noindent \textbf{Proof of Lemma 1.}\qquad
This is a classical result, known as the Daleckii-Krein theorem in
linear algebra; see for instance Higham~\cite{higham2008functions}. To keep
this paper self-contained, we
follow here the proof in Levitt~\cite{levitt2020screening} in the analytic
case. Since $f$ is analytic on
$[\varepsilon_{1},\varepsilon_{N_{\rm b}}]$, it is analytic in a
complex neighborhood. Let $\mathcal C$ be a positively oriented
contour enclosing $[\varepsilon_{1},\varepsilon_{N_{\rm b}}]$. Then,
for $H$ close enough to $H_{*}$, we have
\begin{align}
f(H) = \frac{1}{2\pi i}\oint_{\mathcal C} f(z) \frac {1} {z-H} dz
\end{align}
and analyticity of $H \mapsto f(H)$ follows. For $\delta H$ small enough,
\begin{equation}
\begin{aligned}
&f(H_{*}+\delta H) \\
&= \frac 1 {2\pi i} \oint_{\mathcal C} f(z) \frac {1} {z-H_{*}-\delta H} dz\\
&\approx f(H_{*}) + \frac 1 {2\pi i} \oint_{\mathcal C} f(z) \frac 1 {z-H_{*}}\delta H \frac 1 {z-H_{*}}dz\\
&= f(H_{*}) + \frac 1 {2\pi i} \oint_{\mathcal C} {}\sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac {f(z) \langle \phi_{i}, \delta H \phi_{j} \rangle }{(z-\varepsilon_{i})(z-\varepsilon_{j})} |\phi_{i}\rangle\langle \phi_{j}|dz\\
&= f(H_{*}) + \sum_{i=1}^{N_{\rm b}}\sum_{j=1}^{N_{\rm b}} \frac{f(\varepsilon_{i})-f(\varepsilon_{j})}{\varepsilon_{i}-\varepsilon_{j}}\langle \phi_{i}, \delta H \phi_{j} \rangle |\phi_{i}\rangle\langle \phi_{j}|
\end{aligned}
\end{equation}
where $\approx$ means
up to terms of order $O\left(\|\delta H\|^{2}\right)$.
\qed
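As a quick numerical sanity check of the lemma (not needed for the proof), one can
compare the divided-difference formula against a finite difference on a random
Hermitian matrix; $f = \tanh$ serves as a stand-in analytic function here.
\begin{verbatim}
using LinearAlgebra

# f(A) via the spectral decomposition of a Hermitian matrix A.
matfun(f, A) = (F = eigen(Hermitian(A));
                F.vectors * Diagonal(f.(F.values)) * F.vectors')

# Daleckii-Krein derivative df(H).dH in the eigenbasis of H (Lemma 1).
function df_daleckii_krein(H, dH, f, fprime)
    ev, Phi = eigen(Hermitian(H))
    dHt = Phi' * dH * Phi                      # perturbation in eigenbasis
    M = [ei == ej ? fprime(ei) : (f(ei) - f(ej)) / (ei - ej)
         for ei in ev, ej in ev]               # divided differences
    Phi * (M .* dHt) * Phi'
end

H  = Symmetric(randn(5, 5)); dH = Symmetric(randn(5, 5))
t  = 1e-6                                      # finite-difference step
fd = (matfun(tanh, H + t * dH) - matfun(tanh, H)) / t
@assert norm(fd - df_daleckii_krein(H, dH, tanh, x -> sech(x)^2)) < 1e-4
\end{verbatim}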
\noindent \textbf{Proof of Theorem 1.}\qquad
If $\alpha_{0} \le 1$, $H_{n}$ belongs to the
convex hull of $H_{0}$ and $\{H_{\rm KS}(f_{\rm FD}(H)), H \in \mathcal
H\}$. On this compact set $X$, $f_{\rm FD}$, ${\mathcal I}$ and their derivatives are
bounded. We have for all $H \in X$
\begin{equation}
\label{eq:expansion_alphasquare}
\begin{aligned}
&{\mathcal I}(H + \alpha (H_{\rm KS} - H)) \\
&= {\mathcal I}(H) - \alpha \langle \bm \Omega^{-1} ( H_{\rm KS} -H), ( H_{\rm KS} -H) \rangle + O(\alpha^{2})\\
&= {\mathcal I}(H) - \alpha \langle \bm \Omega \nabla{\mathcal I}(H), \nabla{\mathcal I}(H) \rangle + O(\alpha^{2})
\end{aligned}
\end{equation}
where in this expression the functions $\bm \Omega$ and $H_{\rm KS}$ are
evaluated at $f_{\rm FD}(H)$, and the constant in the $O(\alpha^{2})$ term is uniform in $n$.
It follows that for $\alpha_{0}$ small enough, there is $c > 0$ such that
\begin{align}
\label{eq:guaranteed_energy_decay}
{\mathcal I}(H_{n+1}) \le {\mathcal I}(H_{n}) - \alpha c \|\nabla{\mathcal I}(H_{n})\|^{2},
\end{align}
and therefore \mbox{$\nabla{\mathcal I}(H_{n}) \to 0$}, so that
\mbox{$H_{\rm KS}(f_{\rm FD}(H_{n})) - H_{n} \to 0$}.
We now proceed as in \citet{levitt2012convergence}. Let
${\mathcal I}_{*} = \lim_{n \to \infty}{\mathcal I}(H_{n})$. The set
$\Gamma = \{H \in X,$ \mbox{${\mathcal I}(H) = \lim_{n \to \infty}{\mathcal I}(H_{n})\}$} is
non-empty and compact. Furthermore, $d(H_{n}, \Gamma) \to 0$; if
this were not the case, we could by compactness of $X$ extract a
subsequence at positive distance from $\Gamma$ converging to some
$H_{*} \in X$ satisfying ${\mathcal I}(H_{*}) = \lim_{n \to \infty}{\mathcal I}(H_{n})$,
which would imply that $H_{*} \in \Gamma$, a contradiction.
At every point $H$ of $\Gamma$, by analyticity there is a neighborhood of $H$ in
$\mathcal H$ such that the \L{}ojasiewicz inequality
\begin{align}
|{\mathcal I}(H') - {\mathcal I}_{*}|^{1-\theta_{H}} \le \kappa_{H} \|\nabla{\mathcal I}(H')\|
\end{align}
holds for some constants $\theta_{H} \in (0,1/2]$, $\kappa_{H} > 0$ \cite{levitt2012convergence,lojasiewicz1965ensembles}.
By compactness, we can extract a finite covering of these
neighborhoods, and obtain a \L{}ojasiewicz inequality with universal
constants $\theta \in (0,1/2], \kappa > 0$ in a neighborhood of
$\Gamma$. Therefore, for $n$ large enough, using the concavity
inequality $x^{\theta} \le y^{\theta} + \theta y^{\theta-1}(x-y)$ with
$x = {\mathcal I}(H_{n+1}) - {\mathcal I}_{*}$, $y = {\mathcal I}(H_{n}) - {\mathcal I}_{*}$, we get
\begin{equation}
\begin{aligned}
\|\nabla{\mathcal I}(H_{n})\|^{2} &\le \frac 1 {\alpha c}{{\mathcal I}(H_{n}) - {\mathcal I}(H_{n+1})}\\
& \le \frac 1 {\theta \alpha c} ({\mathcal I}(H_{n}) - {\mathcal I}_{*})^{1-\theta}\\
&\hspace{1cm}\cdot
\Big[\left({\mathcal I}(H_{n}) - {\mathcal I}_{*}\right)^{\theta} - ({\mathcal I}(H_{n+1})-{\mathcal I}_{*})^{\theta} \Big]\\
&\le \frac \kappa {\theta \alpha c} \|\nabla{\mathcal I}(H_{n})\|\\
&\hspace{1cm}\cdot
\Big[({\mathcal I}(H_{n}) - {\mathcal I}_{*})^{\theta} - ({\mathcal I}(H_{n+1})-{\mathcal I}_{*})^{\theta} \Big]\\
\|\nabla{\mathcal I}(H_{n})\| &\le \frac \kappa {\theta \alpha c} \Big[({\mathcal I}(H_{n}) - {\mathcal I}_{*})^{\theta} - ({\mathcal I}(H_{n+1})-{\mathcal I}_{*})^{\theta} \Big]
\end{aligned}
\end{equation}
It follows that $\|\nabla{\mathcal I}(H_{n})\| $ is summable, and therefore
that $\|H_{n+1} - H_{n}\|$ is; this implies convergence of $H_{n}$ to
some $H_{*}$.
When $\theta=1/2$ (or, in light of \eqref{eqn:hess_both}, when
$\bm{d^{2} {\mathcal E}}(f_{\rm FD}(H_{*}))$ is positive definite), we can get exponential convergence \cite{levitt2012convergence}.
\qed
Note that the bounds used in the proof of the above statement (for
instance, on $\alpha_{0}$) are extremely pessimistic, since they
rely on the fact that the set of possible $P$ is bounded, and
therefore all density matrices of the form $f_{\rm FD}(H_{\rm KS}(P))$ have
occupations bounded away from $0$ and $1$, which results in bounded
derivatives for ${\rm Tr}(s(P))$. A more careful analysis is needed to
obtain better bounds (for instance, bounds that are better behaved
in the zero temperature limit).
\noindent \textbf{Proof of Theorem 2.}
From \eqref{eq:expansion_alphasquare} it is easily seen that the
linesearch process stops in a finite number of iterations, independent
of $n$. This ensures that there is $\alpha_{\rm min} > 0$ such that
$\alpha_{\rm min} \le \alpha_{n} \le \alpha_{\rm max}$. From
\eqref{eq:armijo} it follows that a similar inequality to
\eqref{eq:guaranteed_energy_decay} holds, and the rest of the proof
proceeds as in that of Theorem 1.
\section{S.I. AAH Background}\label{measureAAHsi}
The main results of this work generalize to multiple classes of quasiperiodic models, but the Andre-Aubry model (almost-Mathieu operator) is the best studied. We introduce it along with some background on current methods in the study of single-particle quasi-periodic models. The Hamiltonian is very simple, but presents a rich playground for new techniques in analysis and single-particle physics:
\begin{eqnarray}\label{HamiltonianAAHeqRS2}
\hat{H} = \sum_{x} t(\hat{c}^{\dagger}_{x+1}\hat{c}_{x}+\hat{c}_{x+1}\hat{c}^{\dagger}_{x})+2V\cos(\Theta x+\delta_{x})\hat{c}^{\dagger}_{x}\hat{c}_{x}.\quad
\end{eqnarray}
Here $\Theta = 2\pi a$, and we take $a\in\mathbb{R}-\mathbb{Q}$. Note that the case $a\in\mathbb{Q}$ is called the Mathieu operator and is simply a periodic 1D band model.
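For orientation, the finite-size model is straightforward to set up numerically; the
following Julia sketch builds the Hamiltonian of Eq.~\eqref{HamiltonianAAHeqRS2} on
$L$ sites with open boundaries and computes the inverse participation ratio of each
eigenstate (parameter values are illustrative only).
\begin{verbatim}
using LinearAlgebra

# Dense single-particle AAH Hamiltonian on L sites, open boundaries.
function aah_hamiltonian(L; t=1.0, V=2.0, a=(sqrt(5) - 1) / 2, delta=0.0)
    onsite = [2V * cos(2pi * a * x + delta) for x in 1:L]
    SymTridiagonal(onsite, fill(t, L - 1))
end

# Inverse participation ratio sum_x |psi_x|^4: O(1) for localized states,
# O(1/L) for extended ones, diagnosing the V = t transition numerically.
ipr(psi) = sum(abs2.(psi) .^ 2)

E, Psi = eigen(aah_hamiltonian(987))    # 987: a Fibonacci length for golden a
iprs = [ipr(Psi[:, j]) for j in 1:size(Psi, 2)]
\end{verbatim}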
\indent To understand why the almost-Mathieu operator is interesting to mathematics, beyond the very physically interesting metal-insulator transition, we introduce the notion of a spectral measure. For any self-adjoint linear operator $T$ on a Hilbert space $\mathcal{H}$, one can decompose its spectral measure into \textit{absolutely continuous}, \textit{singularly continuous}, and \textit{pure point-like} components. The spectral measure of $T$ is defined with respect to a vector $h\in\mathcal{H}$ via the positive linear functional $f \mapsto \bra{h}f(T)\ket{h} = \int_{\sigma(T)}f\,d\mu_{h}$, where $\sigma(T)$ is the spectrum of the operator $T$ and $\mu_{h}$ is the unique measure associated with $h$ and $T$. The portion of the Hilbert space, i.e.~the subspace of vectors, for which $\mu_{h}$ is dominated by the Lebesgue measure (for every measurable set $A$, if the Lebesgue measure $L(A)= 0$, then $\mu_{h}(A) = 0$) is absolutely continuous. By contrast, the pure point-like component is the discrete portion of the spectrum, where individual points can have finite $\mu_{h}$-measure even though points have zero Lebesgue measure. The singularly continuous part of the spectrum is the singular part (the part for which $\mu_{h}$ is concentrated on a set of zero Lebesgue measure) which is not pure point-like.
\indent The original metal-insulator transition was shown non-rigorously through the duality of the Andre-Aubry model under a Fourier transform-like operation, $\hat{c}_{k} = \sum_{x}\exp(i\Theta k x) \hat{c}_{x}$,
\begin{eqnarray}\label{HamiltonianAAHeqKS2}
{\tilde{\hat{H}}} = \sum_{k} V(\hat{c}^{\dagger}_{k+1}\hat{c}_{k}+\hat{c}_{k+1}\hat{c}^{\dagger}_{k})+ 2t\cos(\Theta k+\delta_{k}) \hat{c}^{\dagger}_{k}\hat{c}_{k}\quad,
\end{eqnarray}
see S.I.~\ref{ogargumentsi}. The model has a self-dual point at $V = t$, fixing a transition from momentum-like to position-like eigenfunctions. A more complete formulation of the problem, however, was later constructed and proven for almost-Mathieu operators. It was proven that the spectrum of the almost-Mathieu operator is (setting $t = 1$)
\begin{enumerate}
\item Absolutely continuous for all $\Theta$ and $\delta_{x}$ when $V < 1$
\item Singularly continuous for all $\Theta$ and $\delta_{x}$ when $V = 1$
\item Pure point-like for almost all $\Theta$ and $\delta_{x}$ when $V > 1$
\end{enumerate}
A pure point-like spectrum guarantees Anderson localization as it corresponds to eigenfunctions having finite measure at the eigenvalues and zero measure elsewhere. Thus, an eigenfunction is either exponentially decaying on the lattice or not normalizable. More formally, the pure point-like spectrum forces eigenfunctions to be semi-uniformly localized eigenstates \cite{deift1983almost}. By contrast, an absolutely continuous spectrum guarantees delocalization if the spectrum has finite measure, which has been shown to be the case for the almost-Mathieu operator. Much less is known about the singularly continuous case, which has been the topic of multiple famous problems proposed by Barry Simon \cite{simon1984fifteen,simon2000schrodinger,simon2020twelve}. One of the few results on the singularly continuous spectrum is its existence deep in the pure point-like regime for Liouville $a = \Theta/2\pi$, i.e.~when a sequence of rational approximates $\lbrace p_n/q_n\rbrace$ exists such that $\vert a - p_{n}/q_{n}\vert < n^{-q_{n}}$ \cite{last2007exotic}. In fact, for such numbers the pure point-like transition occurs at $\lambda = e^{\beta}$ with $\beta = \limsup_{n\rightarrow\infty}\ln(q_{n+1})/q_{n}$ \cite{avila2017sharp}.
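The rational approximates $p_n/q_n$ entering these statements are the
continued-fraction convergents; a short sketch of the standard recurrence, together
with the bound $\delta_N = \vert a - p_N/q_N\vert < 1/q_N^2$ used in
S.I.~\ref{siboundstransfer}, follows (the golden-ratio value and the number of terms
are illustrative).
\begin{verbatim}
# Continued-fraction convergents p_n/q_n via the standard recurrence
# p_n = a_n p_{n-1} + p_{n-2}, q_n = a_n q_{n-1} + q_{n-2}.
function convergents(x::Float64, N::Int)
    out = Tuple{Int,Int}[]
    p_prev, q_prev = 1, 0
    p, q = floor(Int, x), 1
    push!(out, (p, q))
    frac = x - floor(x)               # assumes x irrational (frac never 0)
    for _ in 2:N
        frac = 1 / frac
        a = floor(Int, frac)
        frac -= a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        push!(out, (p, q))
    end
    out
end

golden = (sqrt(5) - 1) / 2
for (p, q) in convergents(golden, 8)
    @assert abs(golden - p / q) < 1 / q^2   # delta_N < 1/q_N^2 for convergents
end
\end{verbatim}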
In this language, the almost-Mathieu operator becomes a clear bridge between the well-understood Mathieu operator (periodic operators) and random disorder. Understanding localization for the almost-Mathieu operator directly links to our understanding of chaos and localization in disordered systems. And yet, we still do not understand the full parameter space of a 1D nearest-neighbor hopping lattice model with a cosine potential. The almost-everywhere part of this problem is important as it determines the physical stability of the model. Modern techniques in the field rely on cocycle theory \cite{avila2006reducibility,avila2009ten,jitomirskaya1999metal,bourgain2002continuity}, and the absolutely continuous part of the spectrum is conjectured to be equivalent to the almost-reducibility of the corresponding cocycle \cite{avila2006reducibility}. The connection with cocycle theory further highlights the importance of this problem, as the reducibility classes of $SL(2,\mathbb{R})$ cocycles are known to describe the onset of quantum chaos and directly link to the Lyapunov exponent \cite{avila2006reducibility,avila2006solving,bourgain2002continuity}.
\section{S.I. The Andre Aubry's Argument}\label{ogargumentsi}
Early studies of quasi-periodic system dynamics focused on the inductive construction of eigenfunctions from sequences of rational approximates \cite{dinaburg1975one,frohlich1990localization}, while the original work by Andre and Aubry \cite{aubry1980annals} relied on the continuity of the Thouless exponent and self-dual models to explain the transition. The RG-like induction methods proved rigorously the existence of a localized phase. For the AAH model \cite{dinaburg1975one,frohlich1990localization} and similar quasi-periodic potentials \cite{frohlich1990localization}, these methods demonstrated the emergence of a \textit{pure point-like} spectrum for strong enough onsite potentials (relative to hopping terms). A pure point-like spectrum enforces localized eigenfunctions, as eigenfunctions lack support across any continuous energy window \cite{dinaburg1975one,frohlich1990localization,jitomirskaya1998anderson,jitomirskaya1999metal,jitomirskaya2019critical}. Below we introduce the simple AAH model and note key insights about the breakdown of eigenfunction ergodicity.
\indent The original paper by Andre and Aubry \cite{aubry1980annals} rests on two fundamental requirements for a quasi-periodic Hamiltonian, its self-duality and its fidelity to a sequence of rational approximates. It proposed a Hamiltonian, the AAH model, which satisfies a self-duality constraint under a real-space to dual-space (momentum space in the continuum limit) transformation, $\hat{c}_{k} = \sum_{x}\exp(i\Theta k x) \hat{c}_{x}$:
\begin{eqnarray}\label{aahrealspacetop}
\hat{H} = \sum_{x} t(\hat{c}^{\dagger}_{x+1}\hat{c}_{x}+\hat{c}_{x+1}\hat{c}^{\dagger}_{x})+2V\cos(\Theta x+\delta_{x})\hat{c}^{\dagger}_{x}\hat{c}_{x}\quad\\
{\tilde{H}} = \sum_{k} V(\hat{c}^{\dagger}_{k+1}\hat{c}_{k}+\hat{c}_{k+1}\hat{c}^{\dagger}_{k})+2t\cos(\Theta k+\delta_{k})\hat{c}^{\dagger}_{k}\hat{c}_{k}\quad \label{aahkspacetop}
\end{eqnarray}
\indent Here $\Theta/2\pi$ is irrational, and clearly for $t = V$ the Hamiltonian is self-dual, indicating the existence of a transition. One can introduce a sequence of rational approximates, $\lbrace a_{n}/b_{n}\rbrace_{n\in\mathbb{N}}$ with $a_{n},b_{n}\in\mathbb{Z}$ and $\lim_{n\rightarrow\infty} a_{n}/b_{n} = \Theta$. The sequence of Hamiltonians with periodic potentials links the density of states on either side of the duality transformation because there are well-defined bands. One can then write down the corresponding Thouless exponent for each side of the transition:
\begin{eqnarray}
\gamma(E) = \int_{-\infty}^{\infty}\log\vert E-E'\vert dN(E')\label{thouless_exp_gen}
\end{eqnarray}
For rational $a_{n}/b_{n}$, with $t = 1$ and $V = \lambda$, the transformation from Eq.~\eqref{aahrealspacetop} to Eq.~\eqref{aahkspacetop} takes $\tilde{V}\rightarrow1/\lambda$ and $E\rightarrow \tilde{E}/\lambda$, which implies $N_{\lambda,k} (E) = \tilde{N}_{1/\lambda,k}(E/\lambda)$ \cite{aubry1980annals}. So,
\begin{eqnarray}\label{thoulessduality}
\gamma(E) &=& \int_{-\infty}^{\infty}\log\vert \frac{E-E'}{\lambda}\vert d\tilde{N}\left(\frac{E'}{\lambda}\right) +\log\vert\lambda\vert\nonumber\\
\gamma(E) &=& \tilde{\gamma}(\frac{E}{\lambda})+\log\vert\lambda\vert.
\end{eqnarray}
Since quasiperiodic systems do not have bands, but rather protected band gaps (discussed below \cite{jitomirskaya1998anderson,jitomirskaya1999metal,avila2011holder,zhao2020holder,amor2009holder,jitomirskaya2019critical}), the Thouless exponent must be non-negative by construction in 1D \cite{aubry1980annals}. Thus, for $\lambda>1$, $\gamma(E)>0$ and states are exponentially localized. This all relies on the continuity of the Thouless exponent, only proven in 2002 \cite{bourgain2002continuity}. Here, the 1D rational approximates differ drastically from the irrational limit, but the density of states is well described by the approximation. In fact, these spectral properties are topologically protected by the quasiperiodic pattern's robustness \cite{prodan2015}, as further expanded below.
\indent Via the above arguments, Andre and Aubry demonstrated the existence of a non-zero Thouless exponent for $V >1$. And, by the duality of the model, $\gamma(E)$ must be zero for $V<1$. This transition is unusually sharp, exhibiting exponential localization on either side due to the relation between the Thouless exponents of the self-dual models. Further, the methodology is quite general in 1D and can be extended to other self-dual models, even if the duality is energy dependent, see S.I.~\ref{gaahexamplesi}. The argument breaks down in higher dimensions as the Thouless exponent is no longer guaranteed to be non-negative \cite{aubry1980annals}.
\indent Returning to the Hamiltonian in Eq.~\eqref{aahrealspacetop}, note the phase $\delta_{x}$ sets the ``origin'' of the pattern. The eigenvalues cannot depend on the phase $\delta_{x}$ in the thermodynamic limit. However, for $\lambda >1$, if a state of energy $E$ is localized to site $x$ when $\delta_{x} = 0$, then the state localized to site $x-\delta/\Theta$ has energy $E$ for the shifted Hamiltonian with phase $\delta_{x} = \delta$. Thus, the eigenfunctions of each eigenvalue do depend directly on the phase. In \cite{aubry1980annals,aubry1981bifurcation}, this is described as a gauge-group symmetry breaking transition.
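The Thouless construction can also be probed directly in finite volume by
approximating $dN(E')$ with the normalized eigenvalue counting measure of a long
chain, $\gamma(E) \approx \frac{1}{L}\sum_j \ln\vert E - E_j\vert$, which for
$V = \lambda > 1$ (and $t=1$) should approach $\ln\lambda$ up to finite-size and gap
effects. A hedged sketch, with illustrative system size and parameters:
\begin{verbatim}
using LinearAlgebra, Statistics

L, lambda = 987, 2.0                       # Fibonacci-length chain, V = 2
a = (sqrt(5) - 1) / 2                      # golden-mean frequency
H = SymTridiagonal([2lambda * cos(2pi * a * x) for x in 1:L],
                   fill(1.0, L - 1))
E = eigvals(H)
Emid = (E[div(L, 2)] + E[div(L, 2) + 1]) / 2   # energy between adjacent levels
gamma_est = mean(log.(abs.(Emid .- E)))        # compare against log(2) ~ 0.693
\end{verbatim}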
\begin{widetext}
\section{S.I. Transfer Matrix}\label{transfermatrixsi}
The explicit construction of a $2\times2$ transfer matrix starts with a reduced SVD of the hopping matrix $J_{N}$,
\begin{eqnarray}
J_{N} &=& V_{N} D_{N} W^{\dagger}_{N}\\
J_{N}^{\dagger} &=& W_{N} D_{N}^{\dagger} V^{\dagger}_{N}
\end{eqnarray}
with $V^{\dagger}V = W^{\dagger}W = \mathbb{1}$ and $W^{\dagger} V = 0$, and in the case of $J_{N}$,
\begin{eqnarray}
D_{N} = D_{N}^{\dagger}=\begin{pmatrix}
t & 0 & \ldots &0\\
0 & 0 & \ldots & 0\\
\vdots& \vdots&\ddots & \vdots\\
0 & 0 & \ldots & 0
\end{pmatrix},
\end{eqnarray}
Since our hopping matrix is rank 1, we truncate $D_{N} = t$ and, correspondingly, the $q_{N}\times 1$ dimensional operators become
\begin{eqnarray}
W_{N}= \begin{pmatrix}
0 \\ \vdots \\ 0 \\(-1)^{q_{N}}
\end{pmatrix},
V_{N}= \begin{pmatrix}
(-1)^{q_{N}} \\0\\ \vdots \\ 0
\end{pmatrix}
\end{eqnarray}
Rewriting our unit cell Green's function as
\begin{eqnarray}
G_{N}(\omega) = (\omega - M_{N})^{-1}
\end{eqnarray}
The transfer matrix equation reduces to
\begin{eqnarray}
\Psi_{n} &=& G_{N}J_{N}\Psi_{n+1} + G_{N}J_{N}^{\dagger}\Psi_{n-1}\\
\Psi_{n} &=& G_{N}V_{N}D_{N}W^{\dagger}_{N}\Psi_{n+1} + G_{N}W_{N}D_{N}V^{\dagger}_{N}\Psi_{n-1}
\end{eqnarray}
Projecting into the $V_{N},W_{N}$ subspaces of $\Psi_{n}$,
\begin{eqnarray}
V^{\dagger}_{N}\Psi_{n} &=& V^{\dagger}_{N}G_{N}V_{N}D_{N}W^{\dagger}_{N}\Psi_{n+1} + V^{\dagger}_{N}G_{N}W_{N}D_{N}V^{\dagger}_{N}\Psi_{n-1}\\
W^{\dagger}_{N}\Psi_{n} &=& W^{\dagger}_{N}G_{N}V_{N}D_{N}W^{\dagger}_{N}\Psi_{n+1} + W^{\dagger}_{N}G_{N}W_{N}D_{N}V^{\dagger}_{N}\Psi_{n-1}.
\end{eqnarray}
This reduces the transfer matrix equation to \cite{dwivedi2016bulk}, setting $t =1$,
\begin{align}
\begin{pmatrix}
(W_{N}^{\dagger}G_{N}V_{N})^{-1} & -(W_{N}^{\dagger}G_{N}V_{N})^{-1}(W_{N}^{\dagger}G_{N}W_{N})\\
(V_{N}^{\dagger}G_{N}V_{N})(W_{N}^{\dagger}G_{N}V_{N})^{-1} & V_{N}^{\dagger}G_{N}W_{N}- V_{N}^{\dagger}G_{N}V_{N}(W_{N}^{\dagger}G_{N}V_{N})^{-1}W_{N}^{\dagger}G_{N}W_{N}
\end{pmatrix}\begin{pmatrix}
V_{N}^{\dagger}\Psi_{n} \\ W_{N}^{\dagger}\Psi_{n-1}
\end{pmatrix} =
\begin{pmatrix}
V_{N}^{\dagger}\Psi_{n+1} \\ W_{N}^{\dagger}\Psi_{n}
\end{pmatrix}
\end{align}
Notice that all of the elements in the $2\times2$ transfer matrix are effectively scalars and thus commute with each other, so we can factor out the common term $(W_{N}^{\dagger}G_{N}V_{N})^{-1}$,
\begin{eqnarray}
(W_{N}^{\dagger}G_{N}V_{N})^{-1}\begin{pmatrix}
1 & -(W_{N}^{\dagger}G_{N}W_{N})\\
V_{N}^{\dagger}G_{N}V_{N} & V_{N}^{\dagger}G_{N}W_{N}(W_{N}^{\dagger}G_{N}V_{N})- V_{N}^{\dagger}G_{N}V_{N}W_{N}^{\dagger}G_{N}W_{N}
\end{pmatrix}
\begin{pmatrix}
V_{N}^{\dagger}\Psi_{n} \\ W_{N}^{\dagger}\Psi_{n-1}
\end{pmatrix} =
\begin{pmatrix}
V_{N}^{\dagger}\Psi_{n+1} \\ W_{N}^{\dagger}\Psi_{n}
\end{pmatrix}
\end{eqnarray}
When $W_{N}^{\dagger}G_{N}W_{N}\neq 0$ and $V_{N}^{\dagger}G_{N}V_{N}\neq 0$, the transfer matrix $\hat{T}_{q_N,n}$ has unit determinant and thus reciprocal eigenvalues, $\lambda_{T,1}\lambda_{T,2} = 1$. The spectrum, $E\in\Sigma$, is formed by energies for which $\left\vert\lambda_T\right\vert = 1$. By contrast, the spectral gaps, $E\in\mathbb{R}-\Sigma$, are the energies for which $\left\vert\lambda_T\right\vert \in (0,1)\cup(1,\infty)$, see Fig.~\ref{fig:tme}.
If ever $(V_{N}^{\dagger}G_{N}V_{N})=0$ and $(V_{N}^{\dagger}G^{2}_{N}V_{N})=0$, or $(W_{N}^{\dagger}G_{N}W_{N})=0$ and $(W_{N}^{\dagger}G_{N}^{2}W_{N})=0$, the transfer matrix has rank 1 and the only non-vanishing solution is localized at the edge. The corresponding energy is not in the bulk spectrum, $\omega\notin\Sigma$. The above condition occurs when the corresponding matrix elements of $G_{N}(\omega) = (\omega-M_{N})^{-1}$ vanish. These zeros correspond to protected obstructions of the spectrum and are important to quasi-periodic localization \cite{paper2}.
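Numerically, the spectral criterion can be evaluated by multiplying the elementary
site-to-site transfer matrices over one unit cell; their ordered product is expected
to be equivalent to the $2\times2$ unit-cell transfer matrix constructed above (up
to the sign conventions of the SVD). A hedged Julia sketch, with illustrative
parameters:
\begin{verbatim}
using LinearAlgebra

# Ordered product of site transfer matrices over one unit cell of the
# rational approximant p_N/q_N; each factor has unit determinant, so the
# eigenvalues of the product come in reciprocal pairs.
function unit_cell_transfer(E; t=1.0, V=0.5, p=377, q=610, phi=0.0)
    T = Matrix{Float64}(I, 2, 2)
    for x in 1:q
        v = 2V * cos(2pi * p / q * x + phi)
        T = [((E - v) / t) (-1.0); 1.0 0.0] * T
    end
    T
end

lamT = eigvals(unit_cell_transfer(0.0))
in_spectrum = isapprox(maximum(abs.(lamT)), 1.0; atol=1e-6)  # band vs. gap
\end{verbatim}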
\section{S.I. Bulk to Projected Green's Function}\label{siboundstransfer}
Our rational approximation to the irrational Hamiltonian follows the continued fraction approximation of irrational parameter $\alpha$ described above.
We define,
\begin{eqnarray}
\hat{H}_{N} = \sum_{x}t(\hat{c}^{\dagger}_{x+1}\hat{c}_{x}+\hat{c}^{\dagger}_{x}\hat{c}_{x+1})+2V\cos{(2\pi \frac{p_{N}}{q_{N}}x+\phi)}\hat{c}_{x}^{\dagger}\hat{c}_{x}
\end{eqnarray}
for which we can define a $q_{N}$-site unit cell and Fourier transform to obtain (setting $t = 1$)
\begin{eqnarray}\label{fullmatrix}
\tilde{H}_{N} = \sum_{k}e^{ikx}c_{k}^{\dagger}c_{k}\begin{pmatrix}
2V\cos{(2\pi \frac{p_{N}}{q_{N}}1+\phi)} & 1 &0& \ldots & e^{ik}\\
1 & 2V\cos{(2\pi \frac{p_{N}}{q_{N}}2+\phi)} & 1& \ldots & 0\\
0 & 1 & 2V\cos{(2\pi \frac{p_{N}}{q_{N}}3+\phi)}& \ddots& 0\\
\vdots & \vdots &\ddots& \ddots & \vdots\\
e^{-ik} & \ldots &0& 1 & 2V\cos{(2\pi \frac{p_{N}}{q_{N}}q_{N}+\phi)}
\end{pmatrix}.
\end{eqnarray}
The bulk Green's function is simply $G(\omega,k) = (\omega-\tilde{H}_{N}(k))^{-1}$, and the corresponding ``projected'' Green's function is
\begin{eqnarray}
G_{\perp,N}(\omega)=\int \frac{dk}{2\pi}G(\omega,k)
\end{eqnarray}
We examine the difference between the almost tridiagonal matrix in Eq.~\eqref{fullmatrix} and the irrational Hamiltonian on a given unit cell.
First, we set $\phi = 0$ w.l.o.g. and notice that
\begin{eqnarray}
V\cos(2\pi\alpha x) = V\cos(2\pi(\frac{p_{N}}{q_{N}}+\delta_{N}) x) = V\cos(2\pi\frac{p_{N}}{q_{N}} x)- 2V\sin{(\pi\delta_{N}x)} \sin{(2\pi\frac{p_{N}}{q_{N}} x+\pi\delta_{N}x)}
\end{eqnarray}
So,
\begin{eqnarray}\label{diffbound}
\vert 2V\cos(2\pi\alpha x)-2V\cos(2\pi\frac{p_{N}}{q_{N}} x)\vert < \vert4V\sin{(\pi\delta_{N}x)}\vert = \vert 4V\sum_{n = 1}^{\infty} \frac{(-1)^{n-1}}{(2n-1)!} (\pi\delta_{N}x)^{2n-1}\vert<\vert 4V\pi\delta_{N}q_{N}\vert
\end{eqnarray}
where in the last inequality we used that an alternating, uniformly convergent series is bounded by its first term and that $x\leq q_{N}$. Notice that for any irrational number the continued fraction convergents satisfy $\delta_{N}<\frac{1}{\sqrt{5}q_{N}^{2}}$, and thus
$$\vert 4V\pi\delta_{N}q_{N}\vert < \frac{4V\pi}{\sqrt{5}q_{N}}\rightarrow 0$$ in the limit of large $q_{N}$.
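As a quick numerical aid (a sketch; the golden-mean-type $\alpha$ and the number of partial quotients below are arbitrary illustrative choices), the convergents $p_{N}/q_{N}$ and the scaling $\delta_{N}q_{N}^{2}\sim 1/\sqrt{5}$ can be checked in Python:
\begin{verbatim}
import math

def convergents(partial_quotients):
    # Continued-fraction convergents p_N/q_N via the standard recurrence
    p0, q0, p1, q1 = 1, 0, partial_quotients[0], 1
    for a in partial_quotients[1:]:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

alpha = (math.sqrt(5) - 1) / 2            # continued fraction [0; 1, 1, 1, ...]
for p, q in convergents([0] + [1] * 20):
    print(q, abs(alpha - p / q) * q**2)   # hovers near 1/sqrt(5) ~ 0.447
\end{verbatim}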
However, we keep $\delta_{N}$ explicit going forward, to account for special cases with stronger bounds on $\delta_{N}$, e.g. Liouville numbers. Expanding the full irrational-parameter on-site Green's function,
\begin{eqnarray}
G_{\perp,\alpha}(\omega)=\int\frac{dk}{2\pi}G_{N}(\omega)\left(\mathbb{1} + G_{N}(\omega)\left(\sum_{n=1}^{q_{N}}\left[2V\cos(2\pi\alpha n)-2V\cos(2\pi\frac{p_{N}}{q_{N}} n)\right]\ket{n}\bra{n}\right)\right)^{-1}
\end{eqnarray}
we see that
\begin{eqnarray}
G_{\perp,\alpha}(\omega)-G_{\perp,N}(\omega) &=& -\int\frac{dk}{2\pi}G_{N}^{2}(\omega)
\frac{\left(\sum_{n=1}^{q_{N}}\left[2V\cos(2\pi\alpha n)-2V\cos(2\pi\frac{p_{N}}{q_{N}} n)\right]\ket{n}\bra{n}\right)}{\left(\mathbb{1} + G_{N}(\omega)\left(\sum_{n=1}^{q_{N}}\left[2V\cos(2\pi\alpha n)-2V\cos(2\pi\frac{p_{N}}{q_{N}} n)\right]\ket{n}\bra{n}\right)\right)}.
\end{eqnarray}
We can bound the above expression in terms of the operator norm by using our above bound on
$$\vert 2V\cos(2\pi\alpha x)-2V\cos(2\pi\frac{p_{N}}{q_{N}} x)\vert<\vert4V\pi\delta_{N}q_{N}\vert.$$
Convergence then depends on the uniform convergence of
\begin{eqnarray}
\left\Vert G_{N}(\omega)\left(\sum_{n=1}^{q_{N}}\left[2V\cos(2\pi\alpha n)-2V\cos(2\pi\frac{p_{N}}{q_{N}} n)\right]\ket{n}\bra{n}\right)\right\Vert^{k}<\Vert\left(G_{N}(\omega)4V\pi\delta_{N}q_{N}
\right)\Vert^{k}
\end{eqnarray}
as $k\rightarrow\infty$, where we have used the operator norm $\Vert A\Vert = \sup_{\psi\in \mathcal{H}} (\Vert A\psi\Vert/\Vert\psi\Vert)$. We need
\begin{eqnarray}
\Vert\left(G_{N}(\omega)4V\pi\delta_{N}q_{N}
\right)\Vert < 1
\end{eqnarray}
We have taken $\omega$ outside the spectrum $\Sigma$ of the operator, so $\Vert G_{N}(\omega)\Vert<\infty$, but this does not necessarily bound $\Vert G_{N}(\omega)\Vert<1/(\delta_N q_N)$. The operator norm of $G_{N}(\omega)$ is $\vert\omega - E^{*}\vert^{-1}$, with $E^{*}$ the nearest spectral value, i.e. approximately the inverse of the gap width. The minimum gap width for the rational Hofstadter approximates is known to scale as $\sim V^{q_N/2}$ \cite{jitomirskaya2020spectrum}, which means $\Vert G_{N}(\omega)\Vert \sim V^{-q_N/2}$. Taking this approach, there will always be gaps for which the pGF does not converge.
Instead, consider a small shift of $\omega$ into the upper or lower half of the complex plane, $\omega\rightarrow\omega \pm i\epsilon$. This bounds the maximum, $\Vert G_{N}(\omega\pm i\epsilon)\Vert < \epsilon^{-1}$. We can always take $q_N$ large enough that $\delta_{N}q_{N} < \epsilon$ for any $\epsilon>0$. However, we now need both the imaginary and real parts of the offset pGF to converge.
\begin{align}
\mathrm{Re}\,{(G_{\perp,\alpha}(\omega\pm i\epsilon) - G_{\perp,N}(\omega\pm i\epsilon))} = -\int\frac{dk}{2\pi}\frac{\left[(\omega-\tilde{H}_{N})^{2} + (\omega-\tilde{H}_{N})(\tilde{H}_{N}-\tilde{H}_{\alpha})-\epsilon^{2}\right](\tilde{H}_{\alpha}-\tilde{H}_{N})}{((\omega-\tilde{H}_{N})^{2}+\epsilon^{2})^{2}\left(\mathbb{1}+\frac{2\omega (\tilde{H}_{N} - \tilde{H}_{\alpha})+\tilde{H}_{N}^{2}- \tilde{H}_{\alpha}^{2}}{((\omega-\tilde{H}_{N})^{2}+\epsilon^{2})}\right)}\\
\mathrm{Im}\,{(G_{\perp,\alpha}(\omega\pm i\epsilon) - G_{\perp,N}(\omega\pm i\epsilon))} = \int\frac{dk}{2\pi}\frac{\left[\pm 2i\epsilon(\omega-\tilde{H}_{N})+i\epsilon(\tilde{H}_{N} -\tilde{H}_{\alpha})\right](\tilde{H}_{\alpha}-\tilde{H}_{N})}{((\omega-\tilde{H}_{N})^{2}+\epsilon^{2})^{2}\left(\mathbb{1}+\frac{2\omega (\tilde{H}_{N} - \tilde{H}_{\alpha})+\tilde{H}_{N}^{2}- \tilde{H}_{\alpha}^{2}}{((\omega-\tilde{H}_{N})^{2}+\epsilon^{2})}\right)}
\end{align}
Now we use that $\Vert(\omega-\tilde{H}_{N})^{2}+\epsilon^{2}\Vert >\epsilon^{2}$ to control the factors $\sim \frac{1}{\epsilon^{2}}$ in the denominator. And we can use the bound from above, $\Vert\tilde{H}_{N} - \tilde{H}_{\alpha}\Vert<2\pi\sqrt{\frac{1}{15q_{N}}}$, to get
\begin{align}
\Vert \mathrm{Re}\,{(G_{\perp,\alpha}(\omega\pm i\epsilon) - G_{\perp,N}(\omega\pm i\epsilon))}\Vert < \int\frac{dk}{2\pi}\left\Vert\epsilon^{-4}\left[-\epsilon^{2}2\pi\sqrt{\frac{1}{15q_{N}}}+ (\omega-\tilde{H}_{N})\left(2\pi\sqrt{\frac{1}{15q_{N}}}\right)^{2} \right]\right\Vert\\
\Vert\mathrm{Im}\,{(G_{\perp,\alpha}(\omega\pm i\epsilon) - G_{\perp,N}(\omega\pm i\epsilon))}\Vert< \int\frac{dk}{2\pi}\epsilon^{-3}\left[\epsilon8\pi\sqrt{\frac{1}{15q_{N}}}+\left(2\pi\sqrt{\frac{1}{15q_{N}}}\right)^{2}\right].
\end{align}
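As an aside, the elementary resolvent bound used above, $\Vert G_{N}(\omega\pm i\epsilon)\Vert\leq\epsilon^{-1}$ for Hermitian $\tilde{H}_{N}$, is easy to verify numerically; a sketch with an arbitrary Hermitian test matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = (A + A.T) / 2                      # arbitrary Hermitian test Hamiltonian
omega, eps = 0.3, 1e-2
G = np.linalg.inv((omega + 1j * eps) * np.eye(50) - H)
print(np.linalg.norm(G, 2), 1 / eps)   # spectral norm is bounded by 1/eps
\end{verbatim}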
Clearly the imaginary part will converge to zero for large $q_N$. We bound the real part by showing
\begin{eqnarray}\label{ourboundeq}
\Vert\omega -\tilde{H}_{N}\Vert < C\sqrt{q_N},
\end{eqnarray}
so that we can always take $q_N$ big enough to make $\left(2\pi\sqrt{\frac{1}{15q_{N}}}\right)^{2} C\sqrt{q_N}<\epsilon^{4}$ for any $\epsilon>0$ to get the uniform convergence above, for all Diophantine $\alpha$.
The operator norm obeys the triangle inequality, so
\begin{eqnarray}\label{triangleq}
\left\vert \vert\omega\vert- \Vert\tilde{H}_{N}\Vert\right\vert\leq\Vert\omega-\tilde{H}_{N}\Vert \leq\vert\omega\vert+\Vert\tilde{H}_{N}\Vert
\end{eqnarray}
We can then use the lower bound to prove divergence for $V>1$ and the upper bound to prove convergence for $V<1$.
By \cite{jitomirskaya2019critical}, the semi-infinite spectrum of $\tilde{H}_{N}$ may contain up to 2 eigenvalues inside each gap, and up to 1 in each of the infinite intervals above and below the bulk. We need to bound the eigenvalues above and below the gap (which are fortunately symmetric). We do this by using Lagrange's inequality to bound the roots of the characteristic polynomial of $\tilde{H}_{N}$
\begin{align}
\vert\vert\tilde{H}_{N}\vert\vert < 1+\max_{0<i<q_N}\left\lbrace \left\vert\frac{a_i}{a_{q_N}}\right\vert\right\rbrace
\end{align}
with the $a_i$ the polynomial coefficients, the largest being $a_0 = \det{\tilde{H}_N}$. It thus suffices to prove that the largest coefficient, the determinant, is finite for some parameter regime of $\tilde{H}_N$. We do this via an arithmetic mean-geometric mean (AM-GM) type argument. Consider, without loss of generality, $t=1$; then
\begin{eqnarray}
\det{\vert \tilde{H}_{N}\vert} &\leq& \left(\mathrm{Tr}{\left\vert \frac{\tilde{H}_{N}}{q_{N}} \right\vert}\right)^{q_N} = \left\vert\sum_{x=1}^{q_N}\vert 2 V\cos{(\Theta x)}\vert/q_{N}\right\vert^{q_N}\nonumber\\
&\leq& \left(\frac{2V}{2\pi} \int_{0}^{2\pi}\vert\cos{x} \vert dx\right)^{q_{N}} = (4V/\pi)^{q_N}.
\end{eqnarray}
Taking $V<\pi/4$, the bound holds and the rational transfer matrix approximates converge to the irrational transfer matrix. We extend this bound to $V < 1$ numerically (intuitively seen by setting $\vert 2V \cos{x}\vert\rightarrow\max_x{\vert 2V \cos{x}\vert} = 2V$ to bound $\det{\tilde{H}_{N}}$). We leave the use of the lower bound in Eq.~\eqref{triangleq} to prove divergence to a companion paper \cite{paper2} and numerically show the divergence in Fig.~\ref{fig2}.
Thus, for $V<1$ and any $\epsilon>0$, both the imaginary and real parts of the rational pGFs converge for $q_N$ such that $\delta_{N}q_{N}<\epsilon^{4}$. This implies the uniform convergence of the rational approximates, and the Green's function of the irrational limit is arbitrarily close to that of the translation-invariant Green's function. Using the transfer matrix construction above, the quasi-periodic transfer matrix is arbitrarily close to the translation-invariant transfer matrix. This is similar to almost-reducibility as defined by Artur Avila in \cite{avila2006reducibility}.
Now, consider the case of $\delta_{N} = e^{-\beta q_{N}}$, when $\alpha$ is Liouville. Here,
\begin{eqnarray}
\Vert\left(G_{N}(\omega)4V\pi\delta_{N}q_{N}
\right)\Vert < \Vert\left(G_{N}(\omega)4V\pi e^{-\beta q_{N}}q_{N}
\right)\Vert
\end{eqnarray}
Now, even if $V>1$, our bound holds,
\begin{eqnarray}
\det{\vert \tilde{H}_{N}\vert} \leq (4V/\pi)^{q_N}e^{-\beta q_{N}}q_{N},
\end{eqnarray}
and the rational approximate pGFs converge for $V\lesssim e^{\beta}$. Thus, for $1<V<e^{\beta}$, both the horizontal and vertical unit cells are convergent -- this reproduces results in \cite{avila2017sharp}.
\end{widetext}
\subsection{S.I. Gauge Transformation}
If $V>1$, the above convergence proof fails and our rational Green's function approximates are no longer guaranteed to converge to the irrational Green's function. Consequently, the rational approximate transfer matrices no longer converge to the quasi-periodic transfer matrix, and the quasi-periodic eigenfunctions are no longer the limit of the delocalized eigenfunctions of the rational approximate transfer matrices, defined by unit cells along the $x$-direction of the 2D parent Hamiltonian lattice.
Instead we must apply a 2D gauge transformation to the 2D Hamiltonian, such that the ``magnetic'' flux is contained in the $x$-direction and the unit cell is constructed in the $y$-direction. The resulting transfer matrix will be constructed from on-site projected Green's functions of the form
\begin{widetext}
\begin{eqnarray}
G_{\perp,N} = \int\frac{d\phi}{2\pi}\left[\omega\mathbb{1}-\begin{pmatrix}
2\cos{(2\pi \frac{p_{N}}{q_{N}}1+k_x)} & V &0& \ldots & Ve^{i\phi}\\
V & 2\cos{(2\pi \frac{p_{N}}{q_{N}}2+k_x)} & V& \ldots & 0\\
0 & V & 2\cos{(2\pi \frac{p_{N}}{q_{N}}3+k_x)}& \ddots& \vdots\\
\vdots & \vdots &\ddots& \ddots & V\\
Ve^{-i\phi} & \ldots &0& V & 2\cos{(2\pi \frac{p_{N}}{q_{N}}q_{N}+k_x)} \end{pmatrix} \right]^{-1}.\nonumber
\end{eqnarray}
We can factor out a $V$ from the entire expression above and generate the same Green's function as above with $V\rightarrow 1/V$ and $\omega\rightarrow \omega/V$. As a consequence, we have clear convergence for $1/V<1$, i.e. $V>1$, for the projected Green's function along a vertical unit cell in the limit $q_N\rightarrow\infty$.
As a subtle point, while this gauge transformation is well defined in 2D for Liouville numbers, it fails to manifest in 1D, as the ``magnetic flux'' can only shift to the hopping elements in $\hat{H}$ under a quasi-Fourier transformation. When the rational approximates in the horizontal unit cell converge, the quasi-Fourier transform is close (exponentially for $\alpha$ Liouville) to a rational Fourier transform, and we simply obtain the horizontal unit cell rational approximates. This point only manifests itself for Liouville choices of $\alpha$, such that the convergence regime extends into the convergence regime of the gauge transformed Green's function, $V>1$.
\end{widetext}
For Liouville $\alpha$, multiple gauge choices and unit cell configurations define convergent approximations to the full quasi-periodic projected Green's function, but the projection to 1D is biased towards the horizontal unit cells, as the eigenfunctions survive this projection and the ``gauge group symmetry'' -- phase shift invariance -- is preserved \cite{paper2}.
\section{S.I. Projected Green's Function Formalism}\label{zerospolessi}
\indent In translation-invariant systems, the Brillouin zone allows for flexibility in writing locally computable formulas for topological invariants. In this language, Green's function zeros are singular and carry topological significance \cite{bernevig2013topological,Slager2015, Rhim2018, Borgnia2020, volovik2003universe, slager2019translational, Gurarie2011,mong2011edge}. More recently, it was noticed that bound state formation criteria along an edge are also defined by Green's function zeros \cite{Slager2015,Rhim2018,Borgnia2020,jitomirskaya2019critical,mong2011edge,volovik2003universe}, thereby tracking both topological invariants and their corresponding edge modes.
Extending this methodology beyond translation invariant systems consists of two steps. One must show both that Green's function zeros are still of topological significance and that edge formation criteria are still described by the presence of in-gap zeros. We first show the latter.
\indent The poles of the Green's function restricted to a particular site in position space correspond to an energy state at that particular site. Here restricted refers to the projection of the system Green's function, $G$, to a single site,
\begin{eqnarray}\label{gprojectsum}
G(\omega,\mathbf{r}_{\perp},\alpha_{\parallel}) = \sum_{\alpha} \vert\braket{\alpha\vert\mathbf{r}_{\perp}}\vert^{2}G(\omega,\alpha),
\end{eqnarray}
where $\alpha$ generically labels the eigenvalues and $\alpha_\parallel$ is the remaining index after the contraction with $\mathbf{r}_{\perp}$. Generically, there will be many poles corresponding to the spectrum at $\mathbf{r}_{\perp}$, but they are not universal.
By adding on-site impurities and considering $G(\omega,\mathbf{r}_{\perp},\alpha_{\parallel})$ only in the band gap of the bare Green's function, any poles will be a result of the impurity potential, $\mathcal{V}(r) = \mathcal{V}\delta(r - \mathbf{r}_{\perp})$, binding a state in the gap. Then, by constructing an appropriate impurity geometry and taking $|\mathcal{V}|\rightarrow\infty$, an edge is formed. Therefore, the condition for impurity-localized states as $|\mathcal{V}|\rightarrow\infty$ is equivalent to the criterion for the formation of edge-localized modes. And, impurity bound states correspond to zeros of the restricted in-gap Green's function. This is most readily seen by factoring the full Green's function, $G$, of some system with Hamiltonian $H_{0}$ and an impurity potential $\mathcal{V}$. That is, the full Green's function $G$ can be written in terms of the Green's function $G_{0} = (\omega - H_{0})^{\text{-}1}$ of the original system without the impurity,
\begin{eqnarray}\label{factoring}
G(\omega,\alpha) &=& (\omega - (H_{0}+\mathcal{V}))^{\text{-}1}
= (1-G_{0}\mathcal{V})^{\text{-}1}G_{0}.
\end{eqnarray}
Correspondingly, impurity bound states (poles of $G$) in the gap (not a pole of $G_{0}$) must be a pole of $(1-\mathcal{V}G_{0})^{\text{-}1}$,
\begin{equation}\label{pgfdeteq} \det \left[ G_{0}(\omega,\alpha) \mathcal{V} - \mathbf{1} \right] = 0.
\end{equation}
For $|\mathcal{V}|\rightarrow\infty$, solutions require $G_{0} \rightarrow 0$, and the zeros of $G_{0}$ correspond to poles of $G$. Hence, the zeros of the restricted in-gap Green's function, $G(\omega,\mathbf{r}_{\perp},\alpha_{\parallel})$, correspond to edge modes, just as in the translation-invariant case \cite{Slager2015}.
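For a rank-1 on-site impurity $\mathcal{V}\ket{\mathbf{r}_{\perp}}\bra{\mathbf{r}_{\perp}}$, Eq.~\eqref{pgfdeteq} reduces to $1-\mathcal{V}G_{0}(\omega;\mathbf{r}_{\perp},\mathbf{r}_{\perp})=0$, so the in-gap pole of $G$ sits where $G_{0}=1/\mathcal{V}$ and migrates to the zero of $G_{0}$ as $\vert\mathcal{V}\vert\rightarrow\infty$. A minimal Python sketch of this migration (the gapped dimerized test chain and all parameter values are illustrative assumptions, not the models studied above):
\begin{verbatim}
import numpy as np

def bare_onsite_gf(omega, L=60, t1=1.0, t2=0.5, site=30):
    # Bare on-site Green's function of a gapped (dimerized) open test chain
    hop = np.array([t1 if i % 2 == 0 else t2 for i in range(L - 1)])
    H = np.diag(hop, 1) + np.diag(hop, -1)
    return np.linalg.inv(omega * np.eye(L) - H)[site, site]

omegas = np.linspace(-0.4, 0.4, 81)   # energies inside the bulk gap
g0 = np.array([bare_onsite_gf(w) for w in omegas])
for Vimp in [5.0, 50.0, 500.0]:
    # impurity bound state condition: G0(omega) = 1/Vimp
    idx = np.argmin(np.abs(g0 - 1.0 / Vimp))
    print(Vimp, omegas[idx])          # approaches the zero of G0 as Vimp grows
\end{verbatim}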
\indent The above requires in-gap bound states of an aperiodic system to appear as zeros of the projected Green's function. We now derive the conditions under which such states are fixed by the pattern topology. In these cases, in-gap states survive small disorder \cite{prodan2015}, and impose constraints on system dynamics. The fundamental difference between the translation invariant and aperiodic cases comes down to the existence of a good momentum quantum number. For translation invariant systems, the $x$-basis is dual to the Brillouin zone momentum basis. Thus, each momentum eigenfunction is equally weighted in the projection of Eq.~\eqref{gprojectsum} at position $0$, and singular points of the projected Green's function directly relate to an obstruction to consistently writing the Green's function over $k$-space. Heuristically, for two bands separated by a topologically non-trivial gap, the eigenfunctions switch eigenvalues \cite{Slager2015,Borgnia2020,Rhim2018}.
There is no such guarantee for generic aperiodic systems, but we can reduce the constraints of translation invariance to a single condition under which the sum in Eq.~\eqref{gprojectsum} is in fact reducible to a sum over the topologically-fixed IDoS. We now focus on 1D systems (higher dimensions generalize by choosing a codimension-1 surface \cite{Borgnia2020,Slager2015,Rhim2018}), where the projected Green's function on the site $x_{0}$ reduces to
\begin{eqnarray}\label{1Dgprojectsum}
G_{\perp}(\omega,x_{0}) = \bra{x_{0}}\left[\sum_{\alpha}G(\omega,\alpha)\ket{\alpha}\bra{\alpha}\right]\ket{x_{0}},
\end{eqnarray}
with $\alpha$ indexing the eigenfunctions of the system. In 1D, it is clear that any shift of the projection site $x_{0}$ can be absorbed as a phase shift in the pattern. We illustrate this with the AAH model, whose Hamiltonian reads
\begin{align}\label{aahrealspace}
\hat{H} = \sum_{x} t(\hat{c}^{\dagger}_{x+1}\hat{c}_{x}+\hat{c}_{x+1}\hat{c}^{\dagger}_{x})+2V\cos(\Theta x+\delta_{x})\hat{c}^{\dagger}_{x}\hat{c}_{x}.
\end{align}
Here $\Theta$ is some irrational multiple of $2\pi$ generating the quasiperiodic pattern. The dynamics are generated by translations $\tau_{x}$, taking $V\cos(\Theta x+\delta_{x})\rightarrow V\cos(\Theta (x+1)+\delta_{x})$. And, we can shift $x_{0}\rightarrow x$ by taking $\delta_{x} = -\Theta(x-x_{0})$. We can clearly index the Hamiltonian (and corresponding Green's function) by the real-space phase $\delta_{x}$. This is a general property of quasiperiodic patterns, and the spectrum is invariant under this shift \cite{bourne2018non}. This is guaranteed by choosing the hull of the pattern to define our unital algebra, as the system dynamics guarantee any initial point can be translated into any other point on the hull. We can therefore rewrite Eq.~\eqref{1Dgprojectsum} as
\begin{eqnarray}\label{1Dgprojectsumavg}
G_{\perp}(\omega,x_{0}) = \frac{1}{N}\sum_{x}\bra{x}\left[\sum_{\alpha}G(\omega,\alpha)\ket{\alpha(\delta_{x})}\bra{\alpha(\delta_{x})}\right]\ket{x},\nonumber\\
\end{eqnarray}
with $\delta_{x} = -\Theta(x-x_{0})$ depending on $x$; the eigenfunctions depend on the choice of phase, i.e. an eigenfunction localized at $x_{0}$ becomes localized at site $x$, but with the same energy and corresponding Green's function component, $G(\omega,\alpha)$. If this were a translation-invariant setting, it would be clear that the choice of $x_{0}$ cannot matter, and thus the sum in Eq.~\eqref{1Dgprojectsumavg} must reduce to
\begin{align}\label{1Dgprojectsum_nophase}
G_{\perp}(\omega,x_{0}) = \frac{1}{N}\sum_{\alpha}G(\omega,\alpha).
\end{align}
If translation invariance holds, as for the AAH model with $V<t$ discussed above, then the quantization of the IDoS fixes the sum in Eq.~\eqref{1Dgprojectsum_nophase} and all states are summed over. As a consequence, for an $\omega$ in a spectral gap, states above and below the gap contribute to the sum, such that for some $\omega_{*}$, $G(\omega)\vert_{\omega =\omega_{*}} = 0$. In translation invariant systems, the location of $\omega_{*}$ is usually protected by symmetries such as chirality, fixing states to be at equal energies above and below the gap. In quasiperiodic systems, the IDoS fixes the number of states above and below the gap \cite{bourne2018non}. For some gap labeled $F$, the integrated density of states (IDoS) below the gap is fixed, i.e. for the AAH model $\text{IDoS}(F) = \lbrace m+n\Theta\rbrace\cap[0,N]$ \cite{bourne2018non}. Thus, for each gap $F$, the relevant $\omega_{F}$ for which $G(\omega\in F)\vert_{\omega = \omega_{F}} = 0$ is also fixed. This observation motivates our generalization to aperiodic systems.
\section{S.I. Topological Criterion} \label{topologicalchoiceSI}
As discussed in the main text, the transition is completely constrained by the projected Green's function zeros, forcing a sharp binary choice between the absolutely continuous and pure-point spectra.
We present a short review of the origins of non-commutative topological invariants in quasiperiodic systems. Not only do these systems exhibit a non-vanishing strong 1D topological invariant, its existence consequently dominates the bulk dynamics.
\begin{widetext}
\subsection{S.I. AAH Algebra}\label{siaahalgebra}
We follow the work of Prodan \cite{prodan2015} in deriving explicit topological invariants in the AAH model context. We construct the unital algebra and use it to label the resulting spectral gaps of the AAH Hamiltonian. Recall that it reads
\begin{eqnarray}\label{1Dham}
\mathcal{H}_{\delta_{x}} =\sum_{n}t \hat{c}_{n+1}^{\dagger}\hat{c}_{n} + \textnormal{h.c.} + 2V\cos(\Theta n+\delta_{x})\hat{c}_{n}^{\dagger}\hat{c}_{n},
\end{eqnarray}
in terms of the creation operators $\hat{c}^{\dagger}_{n}$, hopping $t$, and a potential of strength $V$ that depends on the position $n$ and is indexed by the phase $\delta_{x}$.
The model exhibits a duality under the pseudo-Fourier transformation $\hat{c}_{k} = \sum_{n} \exp(-ikn)\hat{c}_{n}$ \cite{aubry1980annals}. Considering $\delta_{x} = 0$, one obtains
\begin{eqnarray}
\tilde{\mathcal{H}}(k) &=& \sum_{k,k',n}t e^{ik(n+1)-ik'n}\hat{c}_{k}^{\dagger}\hat{c}_{k'}+t^{*} e^{ikn-ik'(n+1)}\hat{c}_{k}^{\dagger}\hat{c}_{k'} + V(e^{2\pi i an +i(k-k')n}+e^{-2\pi i an+i(k-k')n})\hat{c}_{k}^{\dagger}\hat{c}_{k'}\nonumber\\
\tilde{\mathcal{H}} &=& \sum_{k} 2t\cos(\Theta k)\hat{c}_{k}^{\dagger}\hat{c}_{k} + V(\hat{c}_{k+1}^{\dagger}\hat{c}_{k} +\textnormal{h.c.}),
\end{eqnarray}
where in the last line we have set $k = \Theta m$ and used $\sum_{n} \exp(i\Theta n(m-m')) = \delta(m-m')$ in the limit of infinite system size.
A natural equivalence emerges between $\mathcal{H}$ and $\tilde{\mathcal{H}}$ under $V\leftrightarrow t$, implying the model undergoes a transition at $V = t$; this is the well-known 1D metal-insulator transition.
Considering the limits $V = 0$ and $t = 0$, the duality relates extended (momentum-localized) eigenfunctions to position-localized states.
\indent The duality in the AAH model has been the focus of many localization studies, past and present \cite{aubry1980annals,jitomirskaya2019critical}. The model took on new light, however, when \cite{kraus2012topological} noticed it could be parameterized by the phase choice $\delta_{x}$ \cite{kraus2012topological,jitomirskaya2019critical,avila2006solving,bellissard1982quasiperiodic}. Naively, this phase choice is irrelevant, as it corresponds to a shift in the initial position of an infinite chain, but the 2D {\it parent} Hamiltonian, as a function of $x$ and $\delta_{x}$, carries a topological structure. In particular, it corresponds to a 2D tight-binding model with an irrational magnetic flux per plaquette. Explicitly,
\begin{eqnarray}\label{2Dhamiltonian2}
\mathcal{H} &=&\sum_{n,\delta_{x}}t \hat{c}_{n+1,\delta_{x}}^{\dagger}\hat{c}_{n,\delta_{x}} + t^{*} \hat{c}_{n,\delta_{x}}^{\dagger}\hat{c}_{n+1,\delta_{x}} + 2V\cos(\Theta n+\delta_{x})\hat{c}_{n,\delta_{x}}^{\dagger}\hat{c}_{n,\delta_{x}},\nonumber\\
\tilde{\mathcal{H}} &=&\sum_{n,m,m'}\delta_{m,m'}(t\,\hat{c}_{n+1,m}^{\dagger}\hat{c}_{n,m'} + t^{*} \hat{c}_{n,m}^{\dagger}\hat{c}_{n+1,m'} )+V\left(e^{i\Theta n}\delta_{m+1,m'}+e^{-i\Theta n}\delta_{m-1,m'}\right)\hat{c}_{n,m}^{\dagger}\hat{c}_{n,m'},\nonumber\\
\tilde{\mathcal{H}} &=&\sum_{n,m}t (\hat{c}_{n+1,m}^{\dagger}\hat{c}_{n,m} + \hat{c}_{n,m}^{\dagger}\hat{c}_{n+1,m} )+ V(e^{i\Theta n}\hat{c}_{n,m+1}^{\dagger}\hat{c}_{n,m} + e^{-i\Theta n}\hat{c}_{n,m-1}^{\dagger}\hat{c}_{n,m}).
\end{eqnarray}
The 2D spectrum amounts to a Hofstadter butterfly when varying the flux per plaquette, $\Theta$. For any rational flux, $\Theta/2\pi = p/q \in\mathbb{Q}$, one can define a magnetic unit cell specifying bands that carry Chern numbers, which sum to zero. This is, however, not possible for an irrational flux. In this case, strategies outlining sequences of rational approximates, with similar band gaps, were employed to find topological invariants \cite{PhysRevB.91.014108}.
\end{widetext}
\indent Hamiltonian \eqref{2Dhamiltonian2} is manifestly topological for all rational fluxes $a$ \cite{PhysRevB.91.014108}. We can therefore create a sequence of rational approximates to an irrational flux, $\lbrace a_{n}\rbrace$, such that $\lim_{n\rightarrow\infty} a_{n} = a_{*}$. The problem, however, arises when projecting back down into one dimension. This is made most clear by contrasting the spectrum for different phase choices $\delta_{x}$ in the irrational and rational cases. For example, for $a = 1/2$, the choice of $\delta_{x}$ changes the maximum amplitude of the on-site potential. In the irrational case, however, it has no effect on the spectrum and acts as a translation. Hence, the projection of the sequence of rational approximates {\it does not} create a 1D sequence of rational approximates to the AAH model \cite{prodan2015}. Instead, more recent works leverage powerful tools from non-commutative geometry to tackle the problem conclusively, finding deep connections between non-commutative topological invariants and the inherited topology of the 1D projection, i.e. the AAH model. In fact, the methods described below describe both the inheritance of a 2D topological invariant and a bulk-boundary correspondence in the model \cite{prodan2015}.
\begin{figure*}
\centering
\includegraphics[scale =.1]{pattern2.pdf}
\caption{(a) Quasiperiodic spectrum as a function of plaquette flux $\Theta = 2\pi\alpha$. Note that the gap labeling index corresponds to the slope of the line $y = m\alpha + n$ with $m,n\in\mathbb{Z}$. (b) Illustration of a quasi-periodic pattern generating a minimal surface (hull). This forms the underlying unital algebra, taking the place of a Brillouin zone.}
\label{patternfigure}
\end{figure*}
\subsubsection{Non-commutative topological characterization} We formalize the ideas behind a parent Hamiltonian for the AAH model and show that the AAH model obeys the same algebra as the Hofstadter Hamiltonian (defined below), forming a non-commutative torus. We show these two Hamiltonians are equivalent up to representation, and all topological properties carry over \cite{prodan2015}.
\indent Consider the translation operator, acting on $\mathcal{H}_{\delta_{x}}$ as
\begin{align}\label{xspacetranslation}
T^{n}\mathcal{H}_{\delta_{x}}T^{\dagger n} = \mathcal{H}_{\delta_{x}+n\Theta}.
\end{align}
For $\Theta/2\pi\notin\mathbb{Q}$, the repeated action of the translation operator parameterizes the motion of the phase, $\phi\in\mathbf{S}$, along the unit circle. On the set of continuous functions over $\mathbf{S}$, $C(\mathbf{S}): \mathbf{S}\rightarrow\mathbb{C}$, the translation operator acts similarly. That is, for some $f \in C(\mathbf{S})$,
\begin{align}
\alpha_{n}:f(\phi) \rightarrow f(\phi+\Theta n).
\end{align}
More formally, one parameterizes the action of $\mathbb{Z}$ on $\mathbf{S}$ as a dynamical system and then constructs its dual, i.e. the pattern hull described in the main text. The key step, however, is to define a unitary which acts in place of the translation operator on functions $f\in C(\mathbf{S})$:
\begin{align}
u^{n}f(\phi)u^{-n} = f(\phi+\Theta n)
\end{align}
In this language, we can define elements of the space $C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z}$, the crossed product between complex continuous functions on $\mathbf{S}$ and the translations $\mathbb{Z}$ generated by $\alpha: f(\phi) \xrightarrow{\alpha_{n}} f(\phi+\Theta n)$, as
\begin{align}
\mathbf{a} = \sum_{n\in\mathbb{Z}} a_{n}u^{n}.
\end{align}
In the above $a_{n} \in C(\mathbf{S})$, and $u^{n}$ corresponds to a translation along $\mathbb{Z}$. The main benefit of defining this operator algebra corresponding to the 1D translations is that we can pick a representation of the Hilbert space,
\begin{align}\label{aahrep}
\pi_{\phi}(\mathbf{a}) = \sum_{n,x\in\mathbb{Z}}a_{n}(\phi+\Theta x)\ket{x}\bra{x}T^{n},
\end{align}
with $\phi \in [0,2\pi]$. In this representation, the elements $\mathcal{H}_{\delta_{x}}$ are simply represented as $\pi_{\phi}(\mathbf{h})$. Here,
\begin{align}\label{Celement}
\mathbf{h} = u+u^{-1} + 2V\cos(\phi)
\end{align}
is an element of $C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z}$, and the potential shifts when $T$ acts on $\phi$ at each site, i.e. $(\phi + \Theta n)\,\mathrm{mod}\,2\pi$.
\indent As done for Eq.~\eqref{2Dhamiltonian2}, we now show that this element $\mathbf{h}\in C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z}$ also generates the Hofstadter Hamiltonian, see also \cite{prodan2015}. We first rewrite the Hofstadter Hamiltonian as
\begin{align}\label{hofstaderhamiltonian2d}
H_{\Theta} = \sum_{x,y} T_{x}+T_{x}^{-1}+V(T_{y}+T_{y}^{-1}),
\end{align}
where
\begin{eqnarray}
T_{x}\ket{x,y} = \ket{x+1,y} \ \ \textnormal{and} \ \
T_{y} \ket{x,y} = e^{-i\Theta x}\ket{x,y+1}\nonumber
\end{eqnarray}
are magnetic translations with commutation relations $T_{x}T_{y} = e^{i\Theta}T_{y}T_{x}$. In this form, we define unitary operators $u$, as before, and $z = \exp{i\phi}$ acting on $C(\mathbf{S})$ corresponding to the translations along $x$ and $y$, respectively. These have the same commutation relations $uz = e^{i\Theta}zu$ and allow for a representation of the Hilbert space of $l^{2}$-normed functions on $\mathbb{Z}^{2}:$
\begin{align}\label{2delements}
\pi' (\textbf{a}) = \sum_{n,m} f_{n,m}T_{x}^{n}T_{y}^{m}.
\end{align}
Then, $H_{\Theta} = \pi'(\textbf{h})$, for
\begin{align}
\mathbf{h} = u+u^{-1} + V(z+ z^{-1}) = u+u^{-1} + V (e^{i\phi} + e^{-i\phi})
\end{align}
as in Eq. \eqref{Celement}.
\indent Framing the problem in terms of the operator algebra $C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z}$ allows one to use techniques from non-commutative geometry to solve the problem immediately. In particular, this is a unital *-algebra for which a non-commutative calculus can be defined. The elements of the algebra are
\begin{align}
\mathbf{a} = \sum_{m,n\in\mathbb{Z}}f_{m,n}z^{m}u^{n}.
\end{align}
\indent We can further use tools from non-commutative geometry to compute topological invariants. Details can be found in \cite{prodan2015}; we simply state the results here.
One can define differentiation intuitively along each direction:
\begin{eqnarray}
\partial_{1}\mathbf{a} &=& i\sum_{m,n\in\mathbb{Z}}m f_{m,n}z^{m}u^{n},\nonumber\\
\partial_{2}\mathbf{a} &=& i\sum_{m,n\in\mathbb{Z}}n f_{m,n}z^{m}u^{n}.
\end{eqnarray}
And, then integration follows as the inverse operation, $\mathcal{I}(\mathbf{a}) = f_{00}$, i.e. the constant term. These operations along with the algebra define the non-commutative Brillouin torus \cite{bellissard1986gaplabeling,bellissard1986k,bellissard2000hull}, $(C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z},\partial,\mathcal{I})$, and form a special case of a spectral triple.
\indent Thus far, this section has only been a formalization of the concepts explained above. However, expressing the system as a spectral triple allows us to bring down the hammer of non-commutative geometry. In particular, there has been a careful formulation of K-theory in the case of spectral triples \cite{bellissard1986k,bourne2018non}. For the particularly simple case of a non-commutative torus, one can write down a locally computable index formula \cite{bourne2018non,bellissard1982quasiperiodic,bellissard1986gaplabeling,bellissard2000hull,bellissard1986k}, and compute a Chern number.
We introduce a projection operator, $\mathbf{p} = \frac{1}{2}(1+\mathrm{sgn}(\epsilon_{F}-\mathbf{h}))$, which defines a filling of the spectrum below some Fermi level, $\epsilon_{F}$. The first non-commutative Chern number is then given by \cite{prodan2015,bellissard1986gaplabeling,bellissard1986k,bellissard2000hull,prodan2013non}
\begin{align}
\textnormal{Ch}_{1}(\mathbf{p}) = 2\pi \mathcal{I}(\mathbf{p}[\partial_{1}\mathbf{p},\partial_{2}\mathbf{p}]).
\end{align}
Like the normal Chern invariant, this is well defined as long as there is a finite spectral gap.
Here we replicate the key result of \cite{prodan2015} by computing this integral in the representation given by Eq.~\eqref{aahrep}, where $\mathcal{I}(\mathbf{a}) = \frac{1}{2\pi}\int_{\mathbf{S}}d\phi\, f_{0}(\phi)$. In this simple case, the integral reduces to
\begin{align}
\mathcal{I}(\mathbf{a}) &= \lim_{N\rightarrow\infty} \frac{1}{2N}\sum_{-N\leq x\leq N}f_{0}(\phi+\Theta x) = \mathrm{Tr}_{L}(\pi_{\phi}(\mathbf{a}))
\end{align}
where we have used that $f_{0}(\phi+\Theta x) = \bra{x}\pi_{\phi}(\mathbf{a})\ket{x}$, and $\mathrm{Tr}_{L}$ is the normalized trace. In this representation
\begin{align}
\textnormal{Ch}_{1} = 2\pi i \mathrm{Tr}_{L}\left(\pi_{\phi}(\mathbf{p}[\partial_{1}\mathbf{p},\partial_{2}\mathbf{p}])\right).
\end{align}
Using that
\begin{eqnarray}
\pi_{\phi} (\partial_{1}\mathbf{p})=\partial_{\phi}\pi_{\phi}(\mathbf{p}) \ \ \textnormal{and} \ \ \pi_{\phi} (\partial_{2}\mathbf{p})=i\left[X,\pi_{\phi}(\mathbf{p})\right],\nonumber
\end{eqnarray}
and defining $P_{\phi} = \pi_{\phi}(\mathbf{p})$, the projection operator in the AAH representation, the Chern number takes on a simple form,
\begin{align}
\textnormal{Ch}_{1} = -2\pi \mathrm{Tr}_{L}\left(P_{\phi}[\partial_{\phi}P_{\phi},[X,P_{\phi}]]\right).
\end{align}
\indent Therefore, the Hofstadter and AAH Hamiltonians are generated by the same element of $C(\mathbf{S})\rtimes_{\alpha}\mathbb{Z}$. This proves that the topological invariant of the 2D Hofstadter Hamiltonian is inherited by the 1D AAH model and explains the natural appearance of bulk-boundary correspondence -- the existence of boundary localized states reflecting the bulk topological invariant. The topological invariant is robust to disorder, and the edge spectrum is gapless when cycling through $\phi$ \cite{prodan2015}.
An interesting consequence of this pGF transfer matrix formalism is the clear connection, via the transfer matrix poles, between the non-commutative geometry of the system and the spectral measure. Interestingly, this topological criterion was also noticed by \cite{jitomirskaya2019critical} using a different set of techniques to analyze the semi-infinite AAH model.
\subsection{S.I. Gap-Labeling Theorems}\label{gltheoremsi}
A well studied question arises from this topological criterion. Non-commutative geometry predicts the existence of gaps in the quasi-periodic integrated density of states (IDoS) depending on the irrational parameter $\alpha$ in the AAH Hamiltonian. Given the clear role topology plays in the dynamics, it was predicted and shown that these gaps in the IDoS form open sets in the complement of the quasi-periodic spectrum \cite{bellissard1986gaplabeling,bellissard1982quasiperiodic,bellissard1986k}.
Quasi-periodic systems can be indexed by patterns generating a ``deterministic'' disorder \cite{bourne2018non,bellissard2000hull,prodan2013non}. In Fig.~\ref{patternfigure}b, peaks can be labeled by a coordinate, $P = \lbrace p_{i}\rbrace_{i\in\mathbb{Z}}$, forming a pattern. In the absence of a Brillouin zone, we consider the pattern as a dynamical system and find its hull - the minimal surface into which it can be embedded, $\Omega$. For example, for a simple generator such as $G = \cos(\Theta x)$ with $\Theta/2\pi \notin\mathbb{Q}$, it forms an ellipse. However, we can similarly find the hull for more complicated patterns upon defining the map $f(p_{i}) = (p_{i+1} - p_{i},p_{i+2}-p_{i+1},\ldots)\in X$ for $p_{i}\in P$, where $X$ is a hyper-cube of edge length defined by the pattern $P$. Although each element of the pattern is assigned a coordinate in an arbitrarily high dimensional space, there are only as many linearly independent coordinates as there are generators of the pattern. For the aforementioned sinusoidal generator of fixed amplitude, the element $p_{i+1} - p_{i}$ sets the period, and further elements - $p_{i+2}-p_{i+1},\ldots$ - are linearly dependent on the first two by a translation. Consequently, it forms the anticipated ellipse in any dimensional hypercube rather than a higher dimensional surface, see Fig.~\ref{patternfigure}b. Ergodicity on this minimal embedding implies there exists a trajectory from any initial point approaching (arbitrarily closely) any other point on the surface. On the pattern hull, the notion of gauge invariance for quasiperiodic eigenfunctions mentioned above is similar to gauge invariance in a Brillouin zone, taking $\ket{k}\rightarrow\ket{k+\delta}$.
\indent Thus, we consider the space of continuous functions on the hull of the pattern, $\mathcal{C}(\Omega)$, the direct analog of a Brillouin zone, and introduce dynamics by defining the action of pattern translations, $\tau$, on these functions, defining a so-called $C^{*}$-algebra, $\mathcal{C}(\Omega)\rtimes_{\tau}\mathbb{G}$, where $\mathbb{G}$ is the group generated by translations. Elements of this unital algebra are the non-commutative analogs of Hamiltonians on the momentum space torus. Adding information about the on-site Hilbert space - bands in the translation invariant case - and a differential operator extracts sufficient information from the quasiperiodic system to define topology in the same way as in conventional translation invariant band topology. More specifically, one relies on a generalization of the Atiyah-Singer index theorem to spectral triples by Connes and Teleman \cite{connes1994quasiconformal}.
\indent As discussed in S.I.~\ref{siaahalgebra}, the unital algebra generated by a quasiperiodic pattern with real-space translations is a non-commutative $n$-torus -- with dimension corresponding to the number of generators of the pattern and system dynamics. As a direct consequence, the spectral gaps of quasiperiodic systems as a function of the incommensurate parameter, $\Theta \in [0,1]$, can be labeled by integers, see Fig.~\ref{patternfigure}a; i.e. $\lbrace m+n\Theta\vert m,n\in\mathbb{Z}\rbrace\cap[0,N]$, where $N$ is the system size, labels the gaps in the IDoS of the AAH model. By construction, this labeling is invariant to small disorder as long as the pattern is well defined and the gaps remain open \cite{bourne2018non,prodan2015,prodan2013non}.
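The gap labeling is simple to check numerically on a rational approximate: the IDoS evaluated inside a spectral gap lands on an integer multiple of $1/q_{N}$ (equivalently, on $\lbrace m+np_{N}/q_{N}\rbrace\,\mathrm{mod}\,1$). A short Python sketch (finite open chain; the flux, potential, and length are arbitrary illustrative choices, with $O(1/L)$ edge corrections expected):
\begin{verbatim}
import numpy as np

qN, pN, V, L = 5, 3, 1.0, 1000
x = np.arange(L)
H = np.diag(2 * V * np.cos(2 * np.pi * pN / qN * x))
H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
evals = np.sort(np.linalg.eigvalsh(H))

# Locate the qN - 1 = 4 largest spectral gaps and evaluate the IDoS inside each
for i in sorted(np.argsort(np.diff(evals))[-4:]):
    idos = (i + 1) / L                # fraction of states below the gap
    print(idos, (idos * qN) % 1)      # ~ multiple of 1/qN, up to O(1/L)
\end{verbatim}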
\section{S.I. Generalized Andre-Aubry-Harper Model}\label{gaahexamplesi}
While the AAH model is the canonical example of a 1D quasi-periodic system, there are many generalizations of the Hamiltonian, modifying the on-site potential into other quasi-periodic patterns and/or dressing the hopping terms. These generalizations introduce new features to both the spectrum and the wave-functions. Here we focus on a particular generalization parameterized by the onsite potential,
\begin{eqnarray}
V(x) = 2V \frac{\cos(\Theta x+\delta_{x})}{1-b\cos(\Theta x+\delta_{x})},
\end{eqnarray}
where $b\in(-1,1)$ detunes the model from the AAH model. As mentioned in the main text, this model hosts a \textit{mobility edge} -- states undergo a localized to delocalized transition at a fixed energy. It originates from its modified duality \cite{ganeshan2015nearest}, which depends on the energy $E$ of the eigenfunction in question
\begin{eqnarray}
b E = 2\,\textnormal{sgn}(\lambda)(\vert t\vert-\vert\lambda\vert)
\end{eqnarray}
Here, $\lambda$ plays the role of our $V$. We reproduce this duality transformation from \cite{ganeshan2015nearest} below to show the immediate benefits of the above perspectives. In particular, the duality does not survive for Liouville $\alpha$, and the mobility edge arises from the absence of a gauge transformation reflecting the 1D duality in the 2D parent Hamiltonian.
We begin with the full Hamiltonian,
\begin{eqnarray}\label{gaahsi}
\hat{H} = \sum_{x}t\hat{c}^{\dagger}_{x+1}\hat{c}_{x} +t^{*}\hat{c}^{\dagger}_{x}\hat{c}_{x+1} + 2V \frac{\cos(\Theta x+\delta_{x})}{1-b\cos(\Theta x+\delta_{x})}\hat{c}^{\dagger}_{x}\hat{c}_{x}.\nonumber\\
\end{eqnarray}
Then, we rewrite it in the following way, acting on a particle at site $x$,
\begin{eqnarray}\label{gaahduality}
tu_{x+1}+t^{*}u_{x-1} + g\chi_{x}(\beta,\delta_{x})u_{x} = (E + 2V\cosh{\beta})u_{x}\nonumber\\
\end{eqnarray}
with $\cosh{\beta} = 1/b$, $g = 2V\cosh^{2}{\beta}/\sinh{\beta}$, and
\begin{eqnarray}\label{onsitepotenstialsi}
\chi_{x}(\beta,\delta_{x}) = \frac{\sinh{\beta}}{\cosh{\beta}-\cos{(\Theta x+\delta_{x})}}
\end{eqnarray}
Note, all sign dependence can be modulated by choosing a phase shift $\delta_{x}$. While \cite{ganeshan2015nearest} uses this to simplify the duality transformation, we notice it implies that a phase shift changes the duality condition by changing the relative signs of $\cosh{\beta}$ and $V$. Continuing, one can decompose the onsite potential further,
\begin{eqnarray}\label{geomsumsi}
\chi_{x}(\beta,\delta_{x}) = \sum_{r=-\infty}^{\infty}e^{-\beta\vert r\vert}e^{ir(\Theta x+\delta_{x})}
\end{eqnarray}
In this form, it becomes clear that a Fourier-like transform will result in a new effective hopping term.
We first apply the transformation
\begin{eqnarray}\label{GAAHtransformsi1}
b_{n} = \sum_{x}e^{i(\Theta nx)}u_{x}
\end{eqnarray}
\begin{widetext}
To keep the phase dependences, we choose $t = \tau e^{i\delta_{k}}$. The resulting Hamiltonian is:
\begin{eqnarray}\label{gaahduality1}
2\tau\cos{(\Theta n+\delta_{k})}b_{n} + \sum_{x}ge^{i\Theta nx}\chi_{x}(\beta)u_{x} &=& \sum_{x}(E + 2V\cosh{\beta})e^{i(\Theta nx)}u_{x}\nonumber\\
\sum_{x}ge^{i\Theta nx}\sum_{r=-\infty}^{\infty}e^{-\beta\vert r\vert}e^{ir(\Theta x+\delta_{x})}u_{x} &=& (E+2V\cosh{\beta} - 2\tau\cos{(\Theta n+\delta_{k})})b_{n}\nonumber\\
g\sum_{r=-\infty}^{\infty}e^{-\beta\vert r-n\vert}e^{i(r-n)\delta_{x}}b_{r} &=& (E+2V\cosh{\beta} - 2\tau\cos{(\Theta n+\delta_{k})})b_{n}\nonumber\\
g\sum_{r=-\infty}^{\infty}e^{-\beta\vert r-n\vert}e^{i(r-n)\delta_{x}}b_{r} &=& \omega\chi^{-1}_{n}(\beta_{0},\delta_{k})b_{n}
\end{eqnarray}
where $\omega = 2\tau\sinh{\beta_{0}}$ and $\cosh{\beta_{0}} = \frac{1}{2\tau}(2V\cosh{\beta}+E)$. One can then apply the transformation:
\begin{eqnarray}\label{GAAHtransformsi2}
v_{m} = \sum_{n}e^{i\Theta mn}\chi^{-1}_{n}(\beta_{0},\delta_{k})b_{n}
\end{eqnarray}
resulting in
\begin{align}\label{gaahduality2}
\sum_{n}g\sum_{r'=-\infty}^{\infty}e^{-\beta\vert r'-n\vert}e^{i(r'-n)\delta_{x}}e^{i\Theta mn}b_{r'} = \sum_{n}\omega\chi^{-1}_{n}(\beta_{0},\delta_{k})e^{i\Theta mn}b_{n}\nonumber\\
\sum_{n}g\sum_{r'=-\infty}^{\infty}e^{-\beta\vert r'-n\vert}e^{i(r'-n)\delta_{x}}e^{i\Theta m(n-r')}\sum_{r=-\infty}^{\infty} e^{-\beta_{0}\vert r\vert}e^{ir(\Theta r'+\delta_{k})} e^{i\Theta mr'}\chi_{r'}^{-1}(\beta_{0},\delta_{k})b_{r'} = \omega v_{m}\nonumber\\
g\sum_{r=-\infty}^{\infty} e^{-\beta_{0}\vert r-m\vert}e^{i(r-m)\delta_{k}} v_{r} = \omega \chi^{-1}_{m}(\beta,-\delta_{x})v_{m}
\end{align}
We can now take the final step and define $f_{k} = \sum_{m}e^{i\Theta mk} v_{m}$, allowing us to rewrite the Hamiltonian as:
\begin{eqnarray}\label{gaahduality3}
\sum_{m}g\sum_{r=-\infty}^{\infty} e^{-\beta_{0}\vert r-m\vert}e^{i(r-m)\delta_{k}} e^{i\Theta mk} v_{r} &=& \omega \sum_{m}\chi^{-1}_{m}(\beta,-\delta_{x})e^{i\Theta mk}v_{m}\nonumber\\
\sum_{r}g\sum_{m=-\infty}^{\infty} e^{-\beta_{0}\vert r-m\vert}e^{i(r-m)\delta_{k}} e^{i\Theta (m-r)k}e^{i\Theta kr} v_{r} &=& \omega \sum_{m}\chi^{-1}_{m}(\beta,-\delta_{x})e^{i\Theta mk}v_{m}\nonumber\\
g\chi_{k}(\beta_{0},-\delta_{k}) \sum_{r}e^{i\Theta kr} v_{r} &=& 2\tau\sinh{\beta_{0}} \sum_{m}\frac{\cosh{\beta}-\cos{(\Theta m+\delta_{x})}}{\sinh{\beta}}e^{i\Theta mk}v_{m}\nonumber\\
(\tau e^{i\delta_{x}}f_{k+1}+\tau e^{-i\delta_{x}}f_{k-1})+g\frac{\sinh{\beta}}{\sinh{\beta_{0}}}\chi_{k}(\beta_{0},-\delta_{k}) f_{k} &=& 2\tau\cosh{\beta}f_{k}
\end{eqnarray}
Thus, as shown in \cite{ganeshan2015nearest}, if $\beta_{0} = \beta$ (and $\delta_{k}=-\delta_{x}$), the new Hamiltonian under the complete transformation,
\begin{eqnarray}\label{dualitytransformationsi}
f_{k} = \sum_{m,n,x} e^{i\Theta(mk+mn+nx)}\chi_{n}^{-1}(\beta_{0},\delta_{k})u_{x},
\end{eqnarray}
is self dual. This results in the cited condition,
\begin{eqnarray}\label{derivedcondition}
2V/b + E = 2\tau/b \implies bE = 2(\tau-V).
\end{eqnarray}
This result is an exciting example of a quasi-periodic system with a mobility edge, and is suggestive of a connection between such models and random disorder. However, examining the problem from the pGF perspective presented here, we see the constraints are indeed topological, unlike random disorder. The proof follows as in section S.I.~\ref{siboundstransfer}.
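For reference, the self-dual mobility edge of Eq.~\eqref{derivedcondition} is trivial to evaluate; a minimal sketch (the parameter values mirror Fig.~\ref{fig:gaah} and are otherwise arbitrary):
\begin{verbatim}
import numpy as np

def gaah_potential(x, Theta, delta, V, b):
    # GAAH on-site potential with detuning b in (-1, 1)
    return 2 * V * np.cos(Theta * x + delta) / (1 - b * np.cos(Theta * x + delta))

def mobility_edge(t, V, b):
    # Self-duality condition bE = 2(t - V)
    return 2 * (t - V) / b

print(mobility_edge(t=1.0, V=0.7, b=0.5), mobility_edge(t=1.0, V=0.7, b=0.25))
\end{verbatim}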
\begin{figure*}
\centering
\includegraphics[scale = .5]{gaahmobility.pdf}
\caption{Plot of pGF convergence for the GAAH model at $V = 0.7, t = 1$ for $b = 1/2, 1/4$ and $N = 4096$. We notice the shift in the mobility edge from the convergence criteria. Note, blue/yellow lines indicate the analytic mobility edges, and blue/yellow boxes indicate the spectral gaps in which those mobility edges lie. The blue line occurs deep in a spectral gap for which states above are localized and states below are delocalized, while the yellow line is closer to the gap edge. Since the quasi-periodic spectrum forms a Cantor set, all mobility edges fall in a spectral gap, but the numerical precision of pGF convergence is better in large gaps.}
\label{fig:gaah}
\end{figure*}
\subsection{GAAH Projected Green's Function}
We begin by constructing the 2D rational approximates. In Eq.~\eqref{gaahduality} we can redefine $E' = E + 2V\cosh{\beta}$, and operators can be assigned a corresponding phase, $\delta_x$. Then, using the expansion in Eq.~\eqref{geomsumsi}, we perform an inverse Fourier transform to arrive at
\begin{eqnarray}\label{2DlongrangeGAAHsi}
\mathcal{H}_{2D} &=& \sum_{x,\delta_x} \left[ t\hat{c}_{x+1,\delta_x}^{\dagger}\hat{c}_{x,\delta_x} + h.c.+g \sum_{r\geq0}e^{-\beta\vert r\vert} \cos{(\Theta x r +\delta_x)}\hat{c}^{\dagger}_{x,\delta_x}\hat{c}_{x,\delta_x} \right]\nonumber\\
\mathcal{H}_{2D} &=&\sum_{x,y} \left[ t\hat{c}_{x+1,y}^{\dagger}\hat{c}_{x,y} + g/2 \sum_{r = -\infty}^{\infty}e^{-\beta\vert r\vert} e^{ir\Theta x}\hat{c}^{\dagger}_{x,y+r}\hat{c}_{x,y} + h.c.\right]
\end{eqnarray}
Notice that the 2D parent Hamiltonian of the GAAH model has long-range hopping along the ``phase'' coordinate and short-range hopping along the ``real space'' coordinate. As such, no simple gauge transformation will generate the 1D duality transformation in 2D. And, the rational approximates from S.I.~\ref{siboundstransfer} will only host finite unit cells for a horizontal unit cell choice.
We take the approximating sequence,
\begin{eqnarray}\label{2DlongrangeGAAHrationalssi}
\mathcal{H}_{2D} &=& \sum_{x,\delta_x} \left[ t\hat{c}_{x+1,\delta_x}^{\dagger}\hat{c}_{x,\delta_x} + h.c.+g \sum_{r\geq0}e^{-\beta\vert r\vert} \cos{(\frac{p_N}{q_N} x r +\delta_x)}\hat{c}^{\dagger}_{x,\delta_x}\hat{c}_{x,\delta_x} \right],
\end{eqnarray}
where $\frac{p_N}{q_N}$ is the $N$-th continued fraction approximation of $\alpha$. We bound the difference in onsite potentials for the rational approximates vs $\mathcal{H}_{2D}$ as in S.I.~\ref{siboundstransfer}, using
\begin{eqnarray}
\vert2g\cos{(2\pi\alpha x r +\delta_x)} -2g\cos{(2\pi\frac{p_N}{q_N} x r +\delta_x)}\vert < \vert 4g \sin{(\pi\delta_N rx)}\vert < \vert 4g\pi\delta_N q_N r\vert,
\end{eqnarray}
where $\delta_N = \vert\alpha -\frac{p_N}{q_N}\vert$ and, for $\alpha$ Diophantine, $\delta_N<\frac{1}{\sqrt{5}q_{N}^{2}}$. Unlike above, the dependence on $r$ makes this term unbounded, but we have the exponential factor, $e^{-\beta\vert r\vert}$, which reduces the difference to
\begin{eqnarray}
\vert2g\chi_{\alpha}(\beta,\delta_x) - 2g\chi_{N}(\beta,\delta_x)\vert < \left\vert\sum_{r\geq 0} 4g\pi\delta_N q_N r e^{-\beta r} \right\vert = \vert 4g\pi\delta_N q_N \vert\frac{e^{-\beta}}{(1-e^{-\beta})^{2}}< \frac{C}{q_N}.
\end{eqnarray}
The same argument regarding an $i\epsilon$ prescription applies, and we need to bound
\begin{eqnarray}
\Vert\omega-\mathcal{H}_{2D,N}\Vert< C\sqrt{q_N}.
\end{eqnarray}
Unlike above, where self-duality meant we had no hope of a stronger bound, here we can keep $\omega$ to obtain a tighter bound on the convergent parameter space. Notice the arithmetic mean of $\cosh{\beta}\vert\chi(\beta)/\sinh{\beta}\vert$ over all $x$ is just the inverse geometric mean of $\vert 1 -b\cos(\Theta x)\vert$. So, $\det{\vert\mathcal{H}_{2D,N}\vert}< (2(V-t)/b)^{q_{N}}$.
We obtain $\det(\omega - \mathcal{H}_{2D,N}) <\infty$ if
$$\omega + 2V/b < 2t/b.$$
For horizontal unit cells and $\alpha$ Diophantine, the pGF converges when $bE<2(t-V)$ with positive $t,V$, see Fig.~\ref{fig:gaah}.
\end{widetext}
\end{document}
\section{Introduction}
Most of today's fast-paced markets are organised as \emph{order-driven} markets, examples being found among the world's largest equity exchanges, such as the NASDAQ, the NYSE, Hong Kong, Shanghai, Shenzhen, London and Toronto Stock Exchange, and the Euronext. The \emph{order flow} is the dynamic sequence of orders submitted by traders in an order-driven market, and is the lever that causes the variation of prices at high-frequency timescales, such as intra-day price variations.
The state-of-the-art in the application of deep learning for modelling high-frequency price variations has focused on directly predicting the directional price change \cite{tsantekidis2017using,tsantekidis2017forecasting,dixon2018sequence,passalis2018temporal,sirignano2018universal,lim2020deep}. Although this previous work has reported promising results, there are a number of advantages in addressing the generative modelling of the order flow itself, including the computation of future order intensities for high-frequency trading strategies \cite{avellaneda2008high}, the provision of data-driven insights into the market microstructure \cite{o2015high}, and as a simulator for evaluating and back-testing trading strategies \cite{hu2014agent}, with the advantage addressed here being the simulation of future price variations using the generated order sequences.
To our knowledge there is currently a gap in the machine learning literature in applying deep learning, or any machine learning models, to modelling the order flow. This paper fills this gap by introducing the \emph{Sequence Generative Adversarial Network} (SeqGAN) \cite{yu2017seqgan} for modelling the order flow. Since there is currently no related work in the machine learning literature, a well-known model from the quantitative finance literature is selected as benchmark. Model performance is evaluated by performing a statistical analysis of the simulated intra-day price variation resulting from the generated sequences, and comparing the results to ones acquired from corresponding analyses of real data.
\section{Related Work}
\label{section:related_work}
There are two philosophies of statistical modelling when deriving conclusions from data. One assumes a data generating process, while the other uses algorithmic models that treat the data mechanism as unknown. In modelling order flow data, the former approach gives rise to the \emph{zero-intelligence} models in the quantitative finance literature, while machine learning models fall into the latter class of algorithmic models.
Zero-intelligence models assume that the order flow is governed by stochastic processes without any assumptions about rational trader behaviour. The current state-of-the-art uses a framework that models the irregularly-spaced market, limit, and cancellation orders using independent counting processes. Most commonly the multiple Poisson process is used, where each process models the independent arrival of an order at a given price \cite{smith2003statistical,cont2010stochastic}. These zero-intelligence models are able to reproduce many empirical regularities found in real price variations. However, due to simplified assumptions about the data-generating mechanism, these models are sensitive to regime shifts, and lack generalisation power. Tractability and parametric estimation can also be an issue.
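As a sketch of this zero-intelligence benchmark, independent Poisson order arrivals can be simulated in a few lines of Python (the event types and rate values are arbitrary placeholders, not fitted parameters):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson_flow(rates, horizon):
    # One independent Poisson process per (event type, relative price) pair:
    # draw the count ~ Poisson(rate * horizon), then uniform arrival times
    events = []
    for key, rate in rates.items():
        n = rng.poisson(rate * horizon)
        events.extend((t, key) for t in rng.uniform(0.0, horizon, n))
    return sorted(events)

rates = {("limit_buy", q): 1.0 / (1 + q) for q in range(5)}  # toy depth profile
flow = simulate_poisson_flow(rates, horizon=10.0)
print(flow[:3])
\end{verbatim}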
These drawbacks may be overcome by machine learning approaches that learn directly from data without assuming any data-generating process. However, to our knowledge, no previous work has applied machine learning to the modelling of the order flow. The closest related work applying deep learning to order flow data is \cite{tsantekidis2017using,tsantekidis2017forecasting,dixon2018sequence,passalis2018temporal,sirignano2018universal,lim2020deep}, which predicts high-frequency price variations using the limit order book or order flow related data, though it does not address the problem of modelling the order flow sequences. \cite{zhang2019stock,zhou2018stock,takahashi2019modeling} have applied GANs to directly model price time-series; however, in this paper, we are instead interested in learning the data-generating process that produces these price time-series.
\section{Domain Background}
\label{section:domain_background}
In this section, some features of order-driven markets are briefly introduced. Readers are directed to \cite{gould2013limit} for further technical details of order-driven markets.
The order flow is the sequence of placement and cancellation of \emph{limit orders} and \emph{market orders} by traders in order-driven markets. A limit order is a type of order to buy or sell a volume of a traded asset at a specified price or better. Although the price is guaranteed, the filling of a limit order is not. If a submitted limit order cannot be immediately executed against an existing order in the \emph{limit order book} (LOB), then the limit order is added to the LOB until it is cancelled, amended, or executed against subsequent orders. Meanwhile, market orders are immediately executed against limit orders queued at the best price in the order book, as fully as possible. Any unfilled portion may then be converted to limit orders at the same price, or executed at the next best available price until the market order is fully executed.
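These matching mechanics can be made concrete with a minimal price-level order book; the Python sketch below is illustrative only (volumes aggregated per price, no time priority within a level, no amendments), not an exchange-grade matching engine:
\begin{verbatim}
from collections import defaultdict

class SimpleLOB:
    def __init__(self):
        self.bids = defaultdict(float)   # price -> resting volume
        self.asks = defaultdict(float)

    def _cross(self, side, price, vol):
        # Execute against the opposite side while the order is marketable
        opp = self.asks if side == "buy" else self.bids
        while vol > 0 and opp:
            best = min(opp) if side == "buy" else max(opp)
            if (side == "buy" and price < best) or (side == "sell" and price > best):
                break
            traded = min(vol, opp[best])
            opp[best] -= traded
            vol -= traded
            if opp[best] == 0:
                del opp[best]
        return vol

    def limit_order(self, side, price, vol):
        rest = self._cross(side, price, vol)
        if rest > 0:                     # unfilled remainder rests in the book
            (self.bids if side == "buy" else self.asks)[price] += rest

    def market_order(self, side, vol):
        # Walks successive best prices, as fully as possible
        self._cross(side, float("inf") if side == "buy" else float("-inf"), vol)

    def mid_price(self):
        if self.bids and self.asks:
            return (max(self.bids) + min(self.asks)) / 2
\end{verbatim}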
At any given time, the total volumes of limit orders in the LOB are grouped by price. This is the common state of the LOB as visualised and evaluated by traders. An example is shown in Figure \ref{section:domain_background:fig:lob}. Buy limit orders are on the left side, sorted by the highest price to the right, while sell limit orders are on the right side, sorted by the lowest price to the left.
\begin{figure}[ht!]
\centering
\caption{A sample visualisation of a limit order book.}
\includegraphics[width=0.99\columnwidth]{figs/lob}
\label{section:domain_background:fig:lob}
\end{figure}
Some necessary common terminology for various measures in the LOB must now be defined. The highest buy (or, lowest sell) price in the LOB at time $t$ is the \emph{best bid} $b(t)$ (or, \emph{best ask} $a(t)$). The \emph{mid-price} is $\frac{a(t)+b(t)}{2}$, and is the most common reference price when trading at high-frequency time-scales. In the example of Figure \ref{section:domain_background:fig:lob}, the mid-price is \$54.70. Finally, we will need to define the \emph{relative price}. Since the prices in an LOB are constantly changing, it is more useful to have a relative measure of the price rather than any specific price. From a modelling perspective, this will also naturally normalise any price variables. The relative price of a buy (or, sell) order at time $t$ is the number of \emph{ticks} from $a(t)$ (or, $b(t)$), where a tick is the smallest permissible price change imposed by the exchange. In the example of Figure \ref{section:domain_background:fig:lob}, and assuming a tick of \$0.01, the relative price for the orders at \$54.68 is exactly 3 since it is 3 ticks away from $a(t)=\$54.71$.
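In code, this relative price convention reads as follows (a sketch; the tick size is the \$0.01 of the example):
\begin{verbatim}
def relative_price(order_price, side, best_bid, best_ask, tick=0.01):
    # Buy orders are measured in ticks from the best ask a(t),
    # sell orders from the best bid b(t)
    ref = best_ask if side == "buy" else best_bid
    return round(abs(ref - order_price) / tick)

# The example above: a buy order at $54.68 with a(t) = $54.71 is 3 ticks deep
assert relative_price(54.68, "buy", best_bid=54.69, best_ask=54.71) == 3
\end{verbatim}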
\section{Technical Background}
\label{section:technical_background}
Though recurrent neural networks (RNNs) have been successful in modelling sequence data \cite{graves2013generating}, RNNs trained with maximum likelihood suffer from \emph{exposure bias} in the inference stage when generating sequences \cite{bengio2015scheduled}. The generative adversarial networks (GAN) framework \cite{goodfellow2014generative} is a potential solution to the exposure bias problem. GANs are made up of a generator $G_\theta$ and a discriminator $D_\phi$, parameterised by $\theta$ and $\phi$ respectively. The generator $G_\theta$ is trained to produce a sequence $Y_{1:T} = (y_1, \dots, y_T )$, where $y_t \in \mathcal{Y}$ and $\mathcal{Y}$ is some set of discrete tokens. The discriminator $D_\phi$ is a binary classifier trained to distinguish the generated sequence $Y_{1:T}$ from a real sequence $X_{1:T}$. The probability $D_\phi(Y_{1:T})$, which measures how likely it is that the generated sequence is real, is used as feedback for further improving $G_\theta$. However, standard GANs are designed for real-valued continuous data, which is not appropriate for the order flow data considered in this paper, since the orders are naturally discrete event tokens.
The work in \cite{yu2017seqgan} addresses the adversarial learning of discrete sequences by introducing the SeqGAN framework. In a SeqGAN, the training of the generator is treated as a reinforcement learning (RL) problem. At any given timestep $t$, the state $s$ is the sequence produced thus far, $y_1,\dots,y_{t-1}$. The action $a$ is then the choice of the next token $y_t \in \mathcal{Y}$. The action to be taken in a given state is determined by the generator $G_\theta(y_t|Y_{1:t-1})$, which is a stochastic policy parameterised by $\theta$. The generator is updated via policy gradient, using rewards in the form of the output of the discriminator $D_\phi$. The action value defined by the authors of \cite{yu2017seqgan} for updating the generator is as follows:
\begin{equation}
\begin{aligned}
Q^{G_\theta}_{D_\phi}&(s=Y_{1:t-1},a=y_t) = \\
&\begin{cases}
\frac{1}{N}\sum^N_{n=1} D_\phi(Y^n_{1:T}), & \text{if } t<T, \\
D_\phi(Y_{1:t}), & \text{if } t=T,
\end{cases}
\end{aligned}
\label{section:technical_background:eq:action_value}
\end{equation}
\noindent
where $Y^n_{1:T}\in\text{MC}^{G_\theta}(Y_{1:t};N)$ is a sampled sequence, and $\text{MC}^{G_\theta}(Y_{1:t};N)$ represents an $N$-times Monte Carlo search algorithm that samples the unknown last $T-t$ tokens using the generator $G_\theta$ as the rollout policy. The gradient for the generator objective $J(\theta)$ is computed as:
\begin{equation}
\nabla_\theta J(\theta) \approx \sum^T_{t=1} \nabla_\theta \log G_\theta(y_t|Y_{1:t-1}) \cdot Q^{G_\theta}_{D_\phi}(Y_{1:t-1}, y_t).
\label{section:technical_background:eq:dj_dt}
\end{equation}
After improving the generator via policy gradient update for a number of epochs, $D_\phi$ is re-trained using negative examples produced from the improved $G_\theta$ by minimising the binary cross-entropy loss. Readers are directed to the original paper \cite{yu2017seqgan} for more detailed explanations and derivations.
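To make the action-value and gradient expressions above concrete, the following is a minimal sketch of one generator update in PyTorch-style Python. It is an illustration under our own conventions, not the authors' implementation: \texttt{generator}, \texttt{discriminator}, and the \texttt{rollout} helper are hypothetical stand-ins for $G_\theta$, $D_\phi$, and $\text{MC}^{G_\theta}$.
\begin{verbatim}
import torch  # assumed: generator(prefix) returns a Categorical distribution

def rollout(generator, prefix, T):
    # Complete the unknown last T - t tokens, using G_theta as rollout policy.
    seq = list(prefix)
    while len(seq) < T:
        seq.append(generator(seq).sample())
    return seq

def generator_step(generator, discriminator, start, T, N, optimizer):
    # One SeqGAN policy-gradient step (a sketch, not the authors' code).
    seq, log_probs = [], []
    for t in range(T):
        dist = generator(start + seq)    # policy G_theta(. | Y_{1:t-1})
        y_t = dist.sample()              # action: choose the next token
        log_probs.append(dist.log_prob(y_t))
        seq.append(y_t)
    q = []  # action values (cf. the action-value equation above)
    for t in range(1, T + 1):
        if t < T:                        # mean discriminator score of N rollouts
            scores = [discriminator(rollout(generator, seq[:t], T)).detach()
                      for _ in range(N)]
            q.append(sum(scores) / N)
        else:
            q.append(discriminator(seq).detach())
    # Negate to ascend the policy-gradient objective above
    loss = -sum(lp * q_t for lp, q_t in zip(log_probs, q))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}
Note that the discriminator outputs are detached from the computation graph, since they are treated as constant rewards when updating $\theta$.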
\section{Methodology}
\label{section:methodology}
\subsection{Problem Formulation}
\label{section:methodology:subsection:problem_formulation}
An order in the order flow is defined in this paper to be a \emph{discrete event token} that represents a buy or sell market, limit, or cancellation order at a given relative price $q$. Let $\mathcal{O}$ be this set of event tokens, defined as follows:
\begin{gather}
\mathcal{O} = \mathcal{L} \cup \mathcal{M} \cup \mathcal{C} \cup \mathcal{E}, \\
\mathcal{L} = \{ l_{B,1}, \dots, l_{B,Q},l_{A,1}, \dots, l_{A,Q} \}, \\
\mathcal{C} = \{ c_{B,1}, \dots, c_{B,Q}, c_{A,1}, \dots, c_{A,Q} \}, \\
\mathcal{M} = \{ \mu_{B}, \mu_{A} \}, \\
\mathcal{E} = \{ \eta_B, \eta_A \},
\end{gather}
\noindent
where $\mathcal{L}$ is the set of all limit order event tokens, $\mathcal{M}$ is the set of all market order event tokens, $\mathcal{C}$ is the set of all cancellation event tokens, and $\mathcal{E}$ is the set of all other event tokens.
Each event token in $\mathcal{L}$, $\mathcal{M}$, $\mathcal{C}$, and $\mathcal{E}$, can be described as follows. The token $l_{B,q}$ represents a bid limit order at a relative price $q$ from the best ask, while $l_{A,q}$ represents an ask limit order at a relative price $q$ from the best bid. Using a similar notation, $c_{B,q}$ and $c_{A,q}$ are tokens for the cancellation of active bid and ask limit orders. All limit and cancellation orders not within $Q$ relative prices are represented by a single token $\eta_A$ for the ask side, and $\eta_B$ for the bid side of the order book, respectively. Finally, market orders arriving at the best bid and best ask are represented by $\mu_{B}$ and $\mu_{A}$ respectively. For the limit and cancellation orders, prices more than $Q$ ticks away from the best bid and best ask are not considered, since trading activities that impact the market occur mostly at prices closer to the best bid and best ask \cite{cont2014price}.
Given this set $\mathcal{O}$ of order event tokens, the order flow is defined here as a finite-length sequence $O_{1:T} = \{o_1, \dots, o_T \}$, where $o_{t} \in \mathcal{O}$ is a token indicating the type of order event arriving at a given relative price. We therefore have a discrete token sequence modelling problem: we aim to train $G_\theta$ on $O_{1:T}$ to produce novel sequences of orders $O'_{1:T} = \{o'_1,\dots,o'_T\}$ such that the distributional difference between generated and real sequences is minimised.
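As a concrete illustration (a sketch using our own token spellings, not code from the paper), the vocabulary $\mathcal O$ for a depth of $Q$ relative prices can be enumerated as follows; note that $|\mathcal O| = 4Q+4$.
\begin{verbatim}
def build_vocabulary(Q):
    # Enumerate the order event tokens in the set O for depth Q.
    limit  = [f"l_{side},{q}" for side in ("B", "A") for q in range(1, Q + 1)]
    cancel = [f"c_{side},{q}" for side in ("B", "A") for q in range(1, Q + 1)]
    market = ["mu_B", "mu_A"]    # market orders at the best bid/ask
    other  = ["eta_B", "eta_A"]  # limit/cancellation events beyond Q ticks
    return limit + cancel + market + other

tokens = build_vocabulary(Q=10)  # |O| = 44 tokens for Q = 10
token_to_id = {tok: i for i, tok in enumerate(tokens)}
\end{verbatim}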
\subsection{SeqGAN Modelling of Order Flow Sequences}
The algorithm for training the generator $G_\theta$ and discriminator $D_\phi$ using the SeqGAN framework is described in Algorithm \ref{section:methodology:subsection:seqgan_order:algorithm:token_train}. A recurrent neural network (RNN) with long short-term memory cells \cite{hochreiter1997long} is implemented as the generator model $G_\theta$, while the discriminator $D_\phi$ is implemented by a convolutional neural network \cite{zhang2015sensitivity}. In both networks, the tokens $(y_1,\dots,y_T)$ are embedded into a continuous space $(x_1,\dots,x_T)$ using a fully connected layer.
\begin{algorithm}[ht!]
\LinesNumbered
\KwIn{\\
\Indp
Order flow real sequences $X = \{O_{1:T}\}_{1:N}$; \\
Order flow start sequences $S = \{O_{-T+1:0}\}_{1:N}$
}
Initialise $G_\theta$ and $D_\phi$ with random parameters $\theta$ and $\phi$\;
Pre-train $G_\theta$ using MLE on $X$ with starting sequences $S$\;
Generate samples using $G_\theta$ using starting state $S$\;
Pre-train $D_\phi$ by minimising the cross-entropy loss, with generated samples as negative examples and $X$ as positive examples\;
\Repeat(){SeqGAN converges}{
\For{g-steps}{
Uniformly sample starting sequence $s$ from $S$\;
Generate $\{o'_1,\dots,o'_T\}$ using $G_\theta$ with starting state $s$\;
\For{$t$ in $1:T$}{
Compute $Q(a=o'_t, s=O'_{1:t-1})$ using Eq. \ref{section:technical_background:eq:action_value}\;
}
Update $\theta$ using Eq. \ref{section:technical_background:eq:dj_dt}\;
}
\For{d-steps}{
\For{each $s$ in $S$}{
Generate sequence sample using $G_\theta$ with starting state $s$\;
Append sequence sample to array of negative examples\;
}
Uniformly sample an equal number of negative examples and of positive examples from $X$\;
Use the bootstrapped data to train $D_\phi$ for a number of epochs by minimising the cross-entropy loss\;
}
}
\caption{Algorithm for training the SeqGAN generator and discriminator on the order flow.}
\label{section:methodology:subsection:seqgan_order:algorithm:token_train}
\end{algorithm}
In the original SeqGAN paper \cite{yu2017seqgan}, the start state $s_0$ in the SeqGAN framework is a special token marking the start of a sequence, as commonly used in natural language processing datasets. For the work here, it is proposed that the start state be a sequence of order flow, since, unlike a text sentence, an order flow does not start abruptly. Therefore, to generate an order flow $O'_{1:T}$, a start sequence is defined as $O_{-T+1:0}=\{o_{-T+1}, o_{-T+2},\dots,o_0 \}$, where the length of the start sequence is set to be the same as the length of the sequence to be generated. A given start sequence is always associated with a positive sequence example in the training set, such that $O_{-T+1:0}$ concatenated with a positive example $O_{1:T}$ forms a continuous sequence of order flow that exists in the real data. When generating a simulated sequence $O'_{1:T}$, the start state is uniformly sampled with replacement from the set of start sequences.
\subsection{Benchmark Model}
\label{section:methodology:subsection:benchmark_model}
Since the generative modelling of order flow sequences in this paper is novel, no direct comparison with existing approaches in the machine learning literature can be made. However, the quantitative finance literature contains well-known stochastic process approaches for the modelling of order flow, as presented in Section \ref{section:related_work}. Among these, the \emph{multiple Poisson process} \cite{smith2003statistical,cont2010stochastic} is the most suitable and reliable benchmark here, due to its ubiquity in practice and its simplicity of parameter estimation. In the multiple Poisson model, one independent Poisson process models the arrivals of each order event token in the set $\mathcal{O}$. Interested readers are directed to \cite{cont2010stochastic} for more details of the model and how it is fitted. After the arrival rate parameter of each process is fitted, generating a sequence of tokens is straightforward: for each process, the arrival times of its token are sampled, and then all of the generated token sequences are merged into a single data structure and sorted by time to obtain the generated order flow.
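A minimal sketch of this generation procedure is given below, assuming the per-token arrival rates have already been estimated from the training data; the function and variable names are ours.
\begin{verbatim}
import numpy as np

def sample_poisson_flow(rates, horizon, rng=np.random.default_rng(0)):
    # Simulate the multiple Poisson benchmark: one independent homogeneous
    # Poisson process per token, with exponential inter-arrival times.
    # rates: dict token -> estimated arrival rate; horizon: seconds simulated.
    events = []
    for token, lam in rates.items():
        t = rng.exponential(1.0 / lam)   # first arrival of this process
        while t < horizon:
            events.append((t, token))
            t += rng.exponential(1.0 / lam)
    events.sort()                        # merge all processes by arrival time
    return events
\end{verbatim}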
\section{Dataset}
\label{section:dataset}
Order flow data from most stock exchanges is either very expensive or difficult to obtain for the typical researcher. However, cryptocurrency exchanges provide the same kind of order flow data as regular stock exchanges, at virtually no cost. For this reason, the data for our experiments are obtained from Coinbase, a digital currency exchange. In this paper we gathered the order flow for the BTC-USD currency pair in the period between 4 Nov 2017 and 1 Dec 2017.
Data from the period between 4 Nov and 29 Nov is used for training. In this period, the order flow is partitioned into slices of 400 events. Each of the slices is split equally into two to obtain the real order flow sequence $\hat{O}_{1:T}$ and the start sequence $\hat{O}_{-T+1:0}$. All of the real order flow sequences are concatenated into a single dataset for training the discriminator $D_\phi$, while the start sequences are concatenated into a dataset to be used for generating a sequence of order flow in the generator $G_\theta$. For testing, we use the data from the period between 30 Nov and 1 Dec.
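The partitioning described above can be sketched as follows, where \texttt{flow} stands for the chronological list of token ids and the helper name is ours.
\begin{verbatim}
def make_pairs(flow, slice_len=400):
    # Split the order flow into (start, real) sequence pairs, 200 tokens each.
    half = slice_len // 2
    starts, reals = [], []
    for i in range(0, len(flow) - slice_len + 1, slice_len):
        starts.append(flow[i:i + half])             # O_{-T+1:0}
        reals.append(flow[i + half:i + slice_len])  # O_{1:T}
    return starts, reals
\end{verbatim}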
\section{Results: Macro-Behaviour Analyses}
\label{section:results:subsection:macro-behaviour}
For model evaluation, a set of macro-behaviour analyses is conducted to investigate how well the intra-day mid-price variations of the simulated order flow from both models reproduce important empirical regularities found in real mid-price variations. Specifically, this section compares the mid-price log-returns distribution and the mid-price volatility of both models to those of the real mid-price series, over the test period between 30 Nov and 1 Dec 2017.
To simulate the intra-day price variation, the order volume and the inter-order arrival time for each generated order need to be sampled from an empirical distribution. The benchmark model needs only to sample the order volume, since the multiple Poisson model naturally models the arrival time of each order. The empirical distributions for the order volume and inter-order arrival time are estimated from the data in the training period. We generate enough order flow data from each model to produce a mid-price time series at 1-minute intervals over a period of 48 hours.
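Sampling from these empirical distributions amounts to resampling the training observations with replacement; a minimal sketch (our own helper, not the authors' code):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_empirical(observed, size):
    # Draw i.i.d. samples from the empirical distribution of `observed`.
    return rng.choice(np.asarray(observed), size=size, replace=True)

# e.g. volumes = sample_empirical(train_volumes, size=num_generated_orders)
\end{verbatim}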
\subsection{Mid-Price Returns Distribution}
We first compare the log-returns distributions of the simulated mid-prices to that of the real mid-price time series using the two-sample Kolmogorov-Smirnov (K-S) test. Denoting by $A$ the dataset of log-returns computed from one sample of a simulated time series, and by $B$ the dataset of log-returns from the real mid-price series, the K-S test is performed under the null hypothesis that $A$ and $B$ are sampled from the same distribution. Since 100 samples of the simulated order flow sequences were obtained for each model, the K-S test is performed 100 times per model.
However, we now encounter the issue of multiple comparisons: the more samples of the simulated mid-prices we test, the more likely it is that one of them passes the K-S test by chance. To avoid this bias, Hochberg's step-up procedure \cite{dunnett1992step} is implemented as an additional step to control the outcome of the multiple K-S tests. The procedure sorts the hypotheses of the 100 K-S tests by p-value and determines which of the hypotheses, those with the lowest p-values, should be rejected. For these tests, a larger than usual significance level of 0.1 is chosen, since simulating noisy financial time-series is an immense challenge. Then, comparing the SeqGAN model and the benchmark, we say that the model with the fewest hypotheses rejected by Hochberg's step-up procedure is the one more likely to produce an order flow with realistic macro-behaviour.
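This evaluation loop can be sketched with standard scientific-Python tools: \texttt{ks\_2samp} (SciPy) and \texttt{multipletests} with the Simes--Hochberg step-up method (statsmodels) are existing functions, while the data arrays are placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp
from statsmodels.stats.multitest import multipletests

def count_rejections(simulated_returns, real_returns, alpha=0.1):
    # K-S test each of the 100 simulated samples against the real log-returns,
    # then apply Hochberg's step-up procedure at significance level alpha.
    pvals = [ks_2samp(sim, real_returns).pvalue for sim in simulated_returns]
    reject, _, _, _ = multipletests(pvals, alpha=alpha,
                                    method="simes-hochberg")
    return int(np.sum(reject))
\end{verbatim}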
\begin{table}[ht!]
\centering
\caption{Number of Kolmogorov-Smirnov test hypotheses (out of 100 samples) rejected in Hochberg's step-up procedure. The length column refers to the first 1, 6 and 48 hours for each of the 100 samples.}
\begin{tabular}{||l|c|c||}
\hline
Time-Series Length & SeqGAN Model & Benchmark Model \\ \hline \hline
1 Hour & \textbf{73} & 86 \\
6 Hours & \textbf{88} & 91 \\
48 Hours & \textbf{98} & 100 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:hypo_reject}
\end{table}
Table \ref{section:results:subsection:macro-behaviour:table:hypo_reject} shows the number of hypotheses rejected for the SeqGAN model and benchmark model, with the experiments replicated for the first 1 hour, 6 hours, and 48 hours of the mid-price time-series. It can be observed that as the time-series length is increased, the similarity between the log-return distributions of the simulated order flow and the real order flow deteriorates, as would be expected. For the longer time-series, quite a large number of the samples are rejected for both models, but this is again as expected since high-frequency financial time-series are extremely challenging to realistically replicate, especially for long time periods.
Recall that the simulated order flow for the mid-price time series is produced iteratively: initially, a new simulated sequence is generated from a starting sequence of real order flow; this generated sequence is then fed back as a starting sequence to generate another new sequence, and so on. The performance for time-series of different lengths in Table \ref{section:results:subsection:macro-behaviour:table:hypo_reject} suggests that as each new sequence is generated, conditioned on a previously generated sequence, the statistical behaviour of the mid-price log-returns drifts away from that of the actual ones. Although generative adversarial networks should in theory mitigate this exposure bias problem, in this experiment the problem nevertheless persists over longer horizons.
Nonetheless, the results here show that the simulated order flow produced by the SeqGAN model is better at reproducing the mid-price log-returns of real data than the benchmark for all three time-series lengths in the experiment.
\subsection{Mid-Price Tail Exponents}
Next, the tails of the absolute log-returns distributions for the simulated mid-price of each of the models are compared to those of the real data. Empirical studies have reported strong evidence of power law behaviour \cite{gould2013limit} in the absolute log-return distributions of financial time series. Power law distributions have the form $p(x) \propto x^{-\alpha}$ and are ``heavy-tailed'': their right tails retain a great deal of probability mass. It is the tail-exponent $\alpha$ that is the subject of the analysis in this section.
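The estimation method for $\alpha$ is not specified above; purely for illustration, the following sketch uses the Hill estimator over the largest order statistics of the absolute log-returns (with $p(x)\propto x^{-\alpha}$, the survival function decays like $x^{-(\alpha-1)}$, whose exponent the Hill estimator recovers).
\begin{verbatim}
import numpy as np

def hill_tail_exponent(returns, tail_fraction=0.05):
    # Hill estimator of the power-law exponent alpha for p(x) ~ x^(-alpha).
    x = np.sort(np.abs(np.asarray(returns)))[::-1]  # descending order stats
    k = max(int(tail_fraction * len(x)), 2)
    tail = x[:k]
    gamma = np.mean(np.log(tail[:-1] / tail[-1]))   # estimates 1/(alpha - 1)
    return 1.0 + 1.0 / gamma
\end{verbatim}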
The Jarque-Bera (JB) test \cite{jarque1980efficient} is first applied to the real mid-price series, for the first 1 hour, 6 hours and 48 hours, to determine if there are heavy tails in the absolute log-returns distribution. From Table \ref{section:results:subsection:macro-behaviour:table:fat_tail}, it can be observed that the kurtosis of the test distributions is much larger than 3, indicating heavy tails with very high statistical significance.
\begin{table}[ht!]
\centering
\caption{Test period kurtosis and p-values from the Jarque-Bera test, and computed tail-exponents, for the real mid-price time-series absolute log-returns. The length column refers to the first 1, 6, and 48 hours of the series.}
\begin{tabular}{||l|ccc||}
\hline
Time-Series Length & Tail-Exponent & Kurtosis & p-value \\ \hline \hline
1 Hour & 3.67 & 8.79 & 0.00 \\
6 Hours & 2.98 & 8.46 & 0.00 \\
48 Hours & 3.30 & 10.98 & 0.00 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:fat_tail}
\end{table}
We then equivalently test the absolute log-returns of the simulated mid-price series for heavy tails. Table \ref{section:results:subsection:macro-behaviour:table:fat_tail_models} shows the aggregated results of the Jarque-Bera test across the 100 samples generated by the SeqGAN and benchmark models. The measured kurtoses are averaged, and Hochberg's step-up procedure is again applied, at a 1\% significance level and after controlling for repetition bias, to determine how many of the tests should be rejected. It can be observed that the average kurtosis is much larger than 3, and the null hypotheses of the Jarque-Bera test across all samples are rejected. The results thus strongly indicate that the simulated mid-price series for both the SeqGAN and the benchmark models replicate the heavy tails reported for real financial time-series.
\begin{table}[ht!]
\centering
\caption{Mean kurtosis from the Jarque-Bera test, and the number of tests rejected by Hochberg's step-up procedure, for the 100 mid-price time-series samples generated by the SeqGAN and benchmark models. The length column refers to the first 1, 6, and 48 hours for each of the 100 samples.}
\begin{tabular}{||l|l|cc||}
\hline
Length & Metric & SeqGAN Model & Benchmark Model \\ \hline \hline
\multirow{2}{*}{1 Hour} & Mean Kurtosis & 7.31 & 6.97 \\
& Rejection Count & 0 & 0 \\ \hline
\multirow{2}{*}{6 Hours} & Mean Kurtosis & 7.49 & 7.19 \\
& Rejection Count & 0 & 0 \\ \hline
\multirow{2}{*}{48 Hours} & Mean Kurtosis & 8.80 & 7.53 \\
& Rejection Count & 0 & 0 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:fat_tail_models}
\end{table}
We next compare the tail-exponents of the simulated mid-price distribution to those of the real data in Table \ref{section:results:subsection:macro-behaviour:table:fat_tail}. First, the distribution of tail-exponents is computed for the sampled mid-price time-series of each model. We then apply the one-sample two-tailed Student t-test between the tail-exponents from the simulated mid-price series and the real mid-price series. Table \ref{section:results:subsection:macro-behaviour:table:tail_expo} shows the resulting p-values and t-statistics. We see that the null hypotheses of the tests for both models are rejected with high confidence, implying that neither model realistically reproduces the tail-exponents. It can also be observed from the t-statistics that the distribution tails of both models are lighter than those of the real returns distribution. However, the t-statistics of the SeqGAN model are much smaller than those of the benchmark model, indicating that the SeqGAN model simulates mid-price variations with tail-exponent behaviour closer to that of the real data.
\begin{table}[ht!]
\centering
\caption{Test period results of one-sample two-tailed Student t-tests for the tail-exponent distributions of each model against the real tail-exponents, rounded to two decimal places. The length column refers to the first 1, 6 and 48 hours for each of the 100 samples.}
\begin{tabular}{||l|l|c|c||}
\hline
Length & Metric & SeqGAN Model & Benchmark Model \\ \hline \hline
\multirow{2}{*}{1 Hour} & p-value & 0.00 & 0.00 \\
& t-statistic & \textbf{2.66} & 3.29 \\ \hline
\multirow{2}{*}{6 Hours} & p-value & 0.00 & 0.00 \\
& t-statistic & \textbf{2.84} & 4.05 \\ \hline
\multirow{2}{*}{48 Hours} & p-value & 0.00 & 0.00 \\
& t-statistic & \textbf{2.23} & 3.71 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:tail_expo}
\end{table}
\subsection{Mid-Price Volatility}
Finally, the volatility of the mid-price produced by the two models is compared to the volatility of the real mid-price in the test set. Volatility is one of the most important measures of an asset's value, as it measures the risk undertaken when trading the security and is crucial in the construction of optimal portfolios. There are a number of measures of volatility defined in the literature, and the choice depends on the purpose. For intra-day price variations, the volatility definitions of particular importance are the \emph{realised volatility} $v_r$, the \emph{realised volatility per trade} $v_p$, and the \emph{intraday volatility} $v_d$, as described in more detail in \cite{gould2013limit}.
Table \ref{section:results:subsection:macro-behaviour:table:real_vol} shows each of these volatilities computed from the real mid-price in the test period. A comparison to the mid-price volatilities of the SeqGAN and benchmark models is performed as follows. First, the empirical distributions of the volatility measures are computed across the simulated mid-price series produced by each model. A one-sample two-tailed Student t-test is then applied between the data in the empirical distributions and the volatility computed from the real mid-price. The results from the tests are given in Table \ref{section:results:subsection:macro-behaviour:table:compare_vol}, where it can be observed that the null hypotheses for all the tests are rejected with high confidence. This implies that neither model can generate an order flow able to reproduce the volatility of the real mid-price time series, with the negative t-statistics implying that the simulated mid-price time-series have much lower volatility than the real time-series. However, comparing the SeqGAN and benchmark models, it can be observed from the t-statistics in Table \ref{section:results:subsection:macro-behaviour:table:compare_vol} that the volatility of the mid-price is in general better replicated by the SeqGAN model. An exception to this is the intraday volatility for time-series lengths of 6 hours and 48 hours, which the t-statistics show were reproduced more closely by the benchmark model than by the SeqGAN model.
\begin{table}[ht!]
\centering
\caption{Volatility measures computed from the real mid-price time-series in the test period. The length column refers to the first 1, 6, and 48 hours for each of the 100 samples.}
\begin{tabular}{||l|c|c|c||}
\hline
Time-Series Length & $v_r$ & $v_p$ & $v_d$ \\ \hline \hline
1 Hour & 0.00177 & 0.00149 & 0.0308 \\
6 Hours & 0.00186 & 0.00153 & 0.099 \\
48 Hours & 0.00257 & 0.00211 & 0.178 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:real_vol}
\end{table}
\begin{table}[ht!]
\centering
\caption{p-values and t-statistics of the one-sample two-tailed Student t-test between the volatility distributions of the SeqGAN and benchmark models and the real volatility measures, rounded to two decimal places. The length column refers to the first 1, 6, and 48 hours for each of the 100 samples.}
\begin{tabular}{||l|l|cc|cc||}
\hline
\multirow{2}{*}{Length} & \multirow{2}{*}{Volatility} & \multicolumn{2}{c|}{SeqGAN Model} & \multicolumn{2}{c||}{Benchmark Model} \\ \cline{3-6}
& & t-statistics & p-value & t-statistics & p-value \\ \hline \hline
\multirow{3}{*}{1 Hour} & $v_r$ & \textbf{-0.92} & 0.00 & -0.99 & 0.00 \\
& $v_p$ & \textbf{-0.99} & 0.00 & -1.10 & 0.00 \\
& $v_d$ & \textbf{-0.89} & 0.00 & -0.93 & 0.00 \\ \hline
\multirow{3}{*}{6 Hours} & $v_r$ & \textbf{-1.04} & 0.00 & -1.13 & 0.00 \\
& $v_p$ & \textbf{-0.99} & 0.00 & -1.19 & 0.00 \\
& $v_d$ & -1.11 & 0.00 & \textbf{-0.95} & 0.00 \\ \hline
\multirow{3}{*}{48 Hours} & $v_r$ & \textbf{-1.32} & 0.00 & -1.46 & 0.00 \\
& $v_p$ & \textbf{-1.27} & 0.00 & -1.43 & 0.00 \\
& $v_d$ & -1.18 & 0.00 & \textbf{-1.03} & 0.00 \\ \hline
\end{tabular}
\label{section:results:subsection:macro-behaviour:table:compare_vol}
\end{table}
\section{Conclusion}
\label{section:conclusion}
A novel application of the SeqGAN framework for generating simulated order flow sequences was introduced and benchmarked against a well-known model from the quantitative finance literature. An analysis of the macro-behaviour of the mid-price movements showed that the SeqGAN model is substantially better able than the benchmark model to replicate the overall returns distribution, the returns distribution tails, and the volatility of the real mid-price time-series. While the results showed that there is further work to be done to improve this approach to the generative modelling of the order flow, financial sequences are in general hard to predict and even harder to simulate. Future work could improve the architecture by injecting other covariates into the input of the generator, or extend it to jointly model the event tokens, order volumes, and inter-order arrival times. Further analysis of the actual sequences generated by the models could also determine what is needed to improve the model. On the basis of its current performance, and with further work along these lines, including comparison of this method against an increased number of benchmarks, we believe the SeqGAN model could be of substantial practical value to the financial community in the future.
\bibliographystyle{IEEEtran}
|
{
"timestamp": "2021-09-29T02:26:23",
"yymm": "2109",
"arxiv_id": "2109.13905",
"language": "en",
"url": "https://arxiv.org/abs/2109.13905"
}
|
\section{Motivation and results}
We are interested in holomorphic Poisson structures on Calabi--Yau threefolds
that contain a contractible rational curve. Here we consider the local situation. Hence,
we study Poisson structures on Calabi--Yau threefolds that are the total space of
a rank 2 vector bundle on $\mathbb P^1$. A result of Jim\'enez \cite{J}
says that the contraction of a smooth rational curve on a threefold may happen in exactly 3 cases,
namely when the normal bundle to such a
curve is one of
$$\mathcal O_{\mathbb P^1}(-1) \oplus \mathcal O_{\mathbb P^1}(-1), \quad
\mathcal O_{\mathbb P^1}(-2) \oplus \mathcal O_{\mathbb P^1}(0), \quad \text{or} \quad
\mathcal O_{\mathbb P^1}(-3) \oplus \mathcal O_{\mathbb P^1}(1),$$
although only in the first case does the curve contract to an isolated singularity.
In this work we describe completely the local case, that is,
we classify all isomorphism classes of holomorphic Poisson structures on the local Calabi--Yau threefolds
$$W_k\mathrel{\mathop:}= \Tot (\mathcal O_{\mathbb P^1}(-k) \oplus \mathcal O_{\mathbb P^1}(k-2)), \quad k=1,2,3,$$
calculate their Poisson cohomology, describe their symplectic foliations and some properties of their moduli.
Polishchuk shows a correspondence between Poisson structures on a scheme $X$ and a blow-up $\widetilde{X}$,
which applies to the cases we study \cite[Thm.\thinspace 8.2, 8.4]{Po}. Hence,
describing Poisson structures on $W_k$
is equivalent to describing Poisson structures on the singular threefolds obtained from them by contracting the rational curve
to a point.
Poisson structures are parametrized by those elements of $H^0(W_k,\Lambda^2 TW_k)$ which are integrable.
We briefly recall some basic definitions from Poisson cohomology, for details see \cite[Ch.\thinspace 4]{LPV}.
Let $(M, \pi)$ be a Poisson manifold. The graded algebra $\mathfrak{X}^{\bullet}(M) = \Gamma(\wedge^\bullet TM)$
and the degree-$1$ differential operator $d_\pi = [\pi, \cdot]$ define the {\it Poisson Cohomology} of $(M, \pi)$.
The first cohomology groups have clear geometric meaning:
$\H^0(M, \pi) = \ker [\pi, \cdot] = \mbox{Cas}(\pi)$ is the space of holomorphic functions on $M$ which are constant along symplectic leaves;
these are the Casimir functions of $(M,\pi)$.
$\H^1(M, \pi) = \dfrac{\mbox{Poiss}(\pi)}{\mbox{Ham}(\pi)}$ is the quotient of Poisson vector fields by Hamiltonian vector fields.
We compute Poisson cohomology groups and use them to distinguish Poisson structures, identifying their degeneracy loci.
The \emph{$r^\text{th}$ degeneracy locus} of a holomorphic Poisson structure $\sigma$ on a complex manifold or algebraic variety $X$ is defined as
\[
D_{2r} (\sigma) \mathrel{\mathop:}= \{ x \in X \mid \mbox{rank}\, \sigma (x) \leq 2r \} \text{ ,}
\]
where $\sigma$ is viewed as a map $\mathcal T_X^* \to \mathcal T_X$
by contracting a $1$-form with the bivector field $\sigma$.
At a given point of a complex threefold, a holomorphic Poisson structure has rank either $2$ or $0$.
Therefore, for the threefolds $W_k$ we name
$$D (\sigma) \mathrel{\mathop:}= D_0 (\sigma)$$
the \emph{degeneracy locus} of $\sigma$; it consists of the points where $\sigma$ has rank $0$.
A nondegenerate holomorphic Poisson structure $\sigma$ is called a {\it holomorphic symplectic} structure,
since $\sigma$ determines a nondegenerate closed holomorphic $2$-form $\omega$ by setting
\[
\omega (X_f, X_g) = \{ f, g \}_\sigma \text{ ,}
\]
where $X_f$ denotes the Hamiltonian vector field associated to a function $f$.
\begin{remark}\label{leaves}
Each Poisson structure determines a symplectic foliation, whose leaves consist of
maximal symplectic submanifolds.
In particular, in the case of threefolds,
the degeneracy locus $D(\sigma)$ is formed by leaves consisting of a single point each,
and all other leaves have complex dimension 2.
\end{remark}
Describing holomorphic Poisson structures on the Calabi--Yau threefolds $W_k$
can also be regarded as describing their first-order
noncommutative deformations. The commutative deformation theory of these threefolds $W_k$ and
the structure of moduli of vector bundles on them are described in detail in \cite{BGS}.
The surfaces $Z_k \mathrel{\mathop:}= \Tot (\mathcal O_{\mathbb P^1}(-k))$,
which were discussed in \cite{BG1,BG2} and \cite{BeG}, occur here as useful tools.
Motivated by the definition of {\it moduli space of Poisson structures} given in \cite[Sec.\thinspace 1.2]{Pym}, namely
the quotient $\mbox{Poiss}(X)/\mbox{Aut}(X)$,
we also describe some isomorphisms among Poisson structures.
We note that the space of Poisson structures on a threefold can be seen as a cone over global functions in
the following sense:
\begin{proposition}\label{subm}
Let $X$ be a smooth complex threefold and $u$ a Poisson structure on $X$, i.e. an integrable bivector field. Then $fu$ is
integrable for all $f\in \mathcal O(X)$.
\end{proposition}
\begin{proof}
Since $fu$ is a holomorphic bivector field, we only need to check that it is integrable, and this is a local condition. Locally $u$ is
the product of an element of $K_X^{-1}$ and a holomorphic one-form $w$, and the integrability of $u$ is equivalent to
$w \wedge dw = 0$ \cite[Eq.\thinspace 4]{Pym}. We have $(fw)\wedge d(fw) = f^2\,w\wedge dw + f\,w\wedge df\wedge w = 0 + 0$.
\end{proof}
Among our local threefolds, the most famous is certainly $W_1$,
known in the physics literature as the resolved conifold, since it occurs as the crepant resolution of
the double point singularity $xy-zw=0$ in $\mathbb C^4$ known as the conifold.
The conifold singularity is extremely popular in string theory
because it can be removed in two different ways: by
a 2-sphere (the small resolution) or by a 3-sphere (the deformation). This leads to what is known as a
geometric transition and establishes dualities between distinct theories in physics, such as
gauge--gravity and open--closed string duality, see \cite{BBR} and references therein.
We start with bivector fields on $W_1$:
\begin{lemma*}[\ref{biW1}]
The space $M_1 = H^0(W_1,\Lambda^2TW_1)$
parametrizing all holomorphic bivector fields on $W_1$
has the following structure as a module over global holomorphic functions:
$$M_1 = \langle e_1,e_2,e_3,e_4 \rangle / \langle zu_2e_1-zu_1 e_2- u_2 e_3 +u_1e_4 \rangle \text{ .}$$
\end{lemma*}
Describing obstructions to integrability, we obtain an explicit description of Poisson structures
on $W_1$. For ${\bf p}= (p^1,p^2,p^3,p^4) \in M_1$ we
describe a differential operator
$B({\bf p}) = {\bf p}^tQ{\bf p}$ (\Cref{operator}).
\begin{theorem*}[\ref{t1}]
Every holomorphic Poisson structure on $W_1$ has the form $\sum_{i=1}^4p^ie_i$ where
$ (p^1,p^2,p^3,p^4) \in B^{-1}(0)$.
\end{theorem*}
\begin{theorem*}[\ref{iso}]
The Poisson structures $e_1,e_2,e_3,e_4$ are all pairwise isomorphic.
\end{theorem*}
Since the generators give isomorphic Poisson structures, it is enough to describe the foliation
corresponding to one of them; we choose $e_2= \partial_0 \wedge \partial_1$.
\begin{theorem*}[\ref{w1fol}]
The symplectic foliation for $(W_1, \partial_0 \wedge \partial_1)$ is given by:
\begin{itemize}
\item $ \partial_0 \wedge \partial_1$ has degeneracy locus on the line
$\{v_2= \xi=0\}$, where the leaves are $0$-dimensional, consisting of single points, and
\item
the $2$-dimensional symplectic leaves are cut out on the $U$ chart by $u_2$ constant.
\end{itemize}
\end{theorem*}
We summarize this result in a small table.
\begin{center}
\begin{tabular}{c|c|l}
\multicolumn{3}{c}{\sc $W_1$ Poisson structures }\\
\multicolumn{3}{c}{} \\
$\pi$ & degeneracy & Casimir \\ \hline
$e_2$ & \chfan &$ f(u_2)$
\end{tabular}
\end{center}
We then see the Poisson structures as determined by surface embeddings.
\begin{theorem*}[\ref{princ}]
The 4 principal embeddings of the Poisson surface $(Z_1,\pi_0)$ generate all Poisson structures on $W_1$.
\end{theorem*}
From the viewpoint of Poisson structures, $W_2$ is the best of our local Calabi--Yaus, since it
is the only one that admits a nondegenerate Poisson structure, see \Cref{degeneracy2},
although this comes as no surprise since $W_2 \simeq T^*\mathbb P^1\times \mathbb C$
is a product of symplectic manifolds.
\begin{lemma*}[\ref{W2gens}]
The space $M_2 $
of holomorphic bivector fields on $W_2$
has the following structure as a module over global holomorphic functions:
$$M_2 = \langle e_1,e_2,e_3,e_4, e_5 \rangle / \langle u_1e_3-zu_1e_1,
u_2e_5-zu_2e_3-2zu_2e_2 \rangle \text{ .} $$
\end{lemma*}
By \Cref{iso1-5}, $e_1$ and $e_5$ give isomorphic Poisson structures, whereas
the others are distinct, giving interesting symplectic foliations, as follows.
\begin{theorem*}[\ref{foliations2}]
The symplectic foliations on $W_2$
have $0$-dimensional leaves consisting of single points over each
of their corresponding degeneracy loci described in \Cref{degeneracy2},
and their generic leaves, which are $2$-dimensional, are as follows:
\begin{itemize}
\item surfaces of constant $u_1$ for $e_1$ and $e_3$, one of them isomorphic to $\mathbb P^1 \times \mathbb C$.
\item isomorphic to $\mathbb C^*\times \mathbb C$ for $e_2$ (contained in the fibers of the projection to $\mathbb P^1$).
\item isomorphic to the surface $Z_2$ and cut out by $u_2=v_2$ constant for $e_4$.
\end{itemize}
\end{theorem*}
We depict their degeneracy loci and Casimir functions in the following table.
\begin{center}
\begin{tabular}{c|c|l}
\multicolumn{3}{c}{\sc $W_2$ Poisson structures }\\
\multicolumn{3}{c}{} \\
$\pi$ & degeneracy & Casimir \\ \hline
$e_1$ & \ccfan & $ f(u_1)$\\ \hline
$e_2$ & \Zzero& $ f(z)$ \\ \hline
$e_3$ & \ccfan $\cup$ \cfan & $f(u_1)$ \\ \hline
$e_4$ & $\emptyset$ & $f(u_2)$
\end{tabular}
\end{center}
\vspace{3mm}
We then continue onto the case of $W_3$, obtaining:
\begin{lemma*}[\ref{W3gens}]
The space $M_3 $
of holomorphic bivector fields on $W_3$
has the following structure as a module over global holomorphic functions:
$$M_3= \mathbb C \langle e_1, \dots, e_{13} \rangle / R$$
where the set of relations $R$ is the ideal generated by the expressions
$$\begin{array}{l}
u_1 e_2 - u_1u_2 e_1 \\
u_1 e_{10} -u_1 u_2 e_3 \\
u_1e_{13} - u_1u_2e_7 \\
\end{array}
\quad\quad \begin{array}{l}
zu_1e_{12} - u_1u_2 e_6 \\
zu_1e_{13} - u_1u_2e_8 \\
u_1e_{11} - zu_1 e_{10} \\
\end{array}
\quad \begin{array}{l}
u_1 e_4 - z u_1e_3 \\
u_1 e_5 - zu_1e_4 \\
u_1e_8 - zu_1e_7 \\
\end{array} \quad
\begin{array}{l}
u_1 e_6 - zu_1 e_5 - 3z^2u_1e_1 \\
u_1e_9 - zu_1e_8 + zu_1e_2 \\
u_1e_{12} - zu_1e_{11} - 3zu_1e_1 .\\
\end{array}$$
\end{lemma*}
Subsequently, we describe some features of the symplectic foliations
corresponding to the generating Poisson structures on $W_3$.
\begin{theorem*}[\ref{W3foliation}]
The symplectic foliations on $W_3$
have 0-dimensional leaves consisting of single points over each
of their corresponding degeneracy loci described in
the proofs of Lemmata \ref{alphas}, \ref{betas}, \ref{gammas}, and their generic leaves, which are $2$-dimensional,
are as follows:
\begin{itemize}
\item Isomorphic to $\mathbb C^*\times \mathbb C$ for $e_1$.
\item Isomorphic to $\mathbb C^*\times \mathbb C^*$ for $e_2$.
\item Surfaces of constant $u_1$, for $e_3, e_4,e_5, e_{10}, e_{11}$ and $e_{13}$.
\item Surfaces of constant $u_2$, for $e_7$ and $e_8$.
\end{itemize}
\end{theorem*}
Geometrically, these structures can be obtained from surface embeddings.
\begin{theorem*}[\ref{emb3}]
The embeddings of Poisson surfaces $j_0(\mathbb C^2, \pi_0),$ $j_1(Z_3,\pi_i)$ with $i=0,1,2$ and $j_2(Z_{-1},\pi_i)$
with $i=0,1,2,3$
generate
all Poisson structures on $W_3$.
\end{theorem*}
We finish by showing that except for $e_6,e_9, e_{12}$, the Poisson structures $e_i$
are all pairwise non-isomorphic, which can be seen from their degeneracy loci.
\begin{center}
\begin{tabular}{c|c|c}
\multicolumn{3}{c}{\sc $W_3$ Poisson structures }\\
\multicolumn{3}{c}{} \\
$\pi$ & degeneracy & Casimir \\ \hline
$e_1$ & \Zmone $\cup$ \ccfan & $f(z)$ \\ \hline
$e_2$ & \Zmone $\cup$ \Zthree & $ f(z)$ \\ \hline
$e_3$ & \ccfan & $ f(u_1)$ \\ \hline
$e_4$ &\ccfan $\cup$ \ccfan & $f(u_1)$ \\ \hline
$e_5$ & \ccfan $\cup$ \cfan & $ f(u_1)$ \\ \hline
$e_7$ & \Zmone $\cup$ \ccfan & $ f(u_2)$ \\ \hline
$e_8$ & \Zmone $\cup$ \ccfan $\cup$ \cfan & $ f(u_2)$ \\ \hline
$e_{10}$ & \Zthree $\cup$ \ccfan & $ f(u_1)$ \\ \hline
$e_{11}$ & \Zmone $\cup$ \ccfan $\cup$ \cfan & $f(u_1)$ \\ \hline
$e_{13}$ & \Zmone $\cup$ \Zthree & $ f(u_2)$
\end{tabular}
\end{center}
\begin{remark}
\Cref{biW1} (resp.\ \ref{W2gens}, \ref{W3gens}) shows that
the space of holomorphic bivector fields on $W_1$ (resp.\ $W_2$, $W_3$) is generated
as a module over global holomorphic functions by $4$ (resp.\ $5$, $13$) holomorphic bivectors.
Moreover, we find that $\mathbb{C}$-linear combinations of the basis vectors are integrable,
and by \Cref{subm} any multiples thereof by holomorphic functions are integrable, too. In the
case of $W_1$, \Cref{t1} describes the space of integrable bivectors as the kernel of
an explicit differential operator.
\end{remark}
However, we note that general combinations of the basis vectors with global functions as coefficients
may not be integrable. For example, on $W_1$ the expression $zu_2e_1+e_3$ gives a nonintegrable bivector field,
despite the fact that both $e_1$ and $e_3$ are integrable.
\section{Vector fields on \texorpdfstring{$W_k$}{W\_k}}
\begin{definition}\label{WKdef}For integers $k_1$ and $k_2$, we set
\[
W_{k_1,k_2} = \Tot (\mathcal{O}_{\mathbb{P}^1}(-k_1) \oplus \mathcal{O}_{\mathbb{P}^1}(-k_2)) \text{ .}
\]
The complex manifold structure can be described by gluing the open sets
$$U = \mathbb{C}^3_{\{z,u_1,u_2\}} \quad \mbox{and} \quad V = \mathbb{C}^3_{\{\xi,v_1,v_2\}}$$
by the relation
\begin{equation}\label{canonical}
(\xi, v_1, v_2) = (z^{-1}, z^{k_1} u_1, z^{k_2} u_2)
\end{equation}
whenever $z$ and $\xi$ are not equal to 0.
We call \eqref{canonical} the canonical coordinates for $W_{k_1,k_2}$.
\end{definition}
\begin{lemma}
The threefold $W_{k_1,k_2}$ is Calabi--Yau if and only if $k_1 + k_2 = 2$.
\end{lemma}
\begin{proof}
The canonical bundle is given by the transition
$-z^{k_1+k_2-2}$,
so it is trivial if and only if $k_1 + k_2 = 2$.
\end{proof}
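This transition can also be checked mechanically. The following short sympy computation of the Jacobian determinant of the gluing \eqref{canonical} is our verification, not part of the argument:
\begin{verbatim}
import sympy as sp

z, u1, u2 = sp.symbols('z u1 u2')
k1, k2 = sp.symbols('k1 k2', integer=True)

# Gluing map (xi, v1, v2) = (z^{-1}, z^{k1} u1, z^{k2} u2)
phi = sp.Matrix([1/z, z**k1 * u1, z**k2 * u2])
J = phi.jacobian(sp.Matrix([z, u1, u2]))

print(sp.powsimp(J.det()))   # -> -z**(k1 + k2 - 2)
\end{verbatim}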
\begin{notation}
We denote by $W_k$ the Calabi--Yau threefold
\[
W_k\mathrel{\mathop:}= W_{k,-k+2} = \Tot (\mathcal{O}_{\mathbb{P}^1}(-k) \oplus \mathcal{O}_{\mathbb{P}^1}(k-2)) \text{ .}
\]
\end{notation}
Let $U \subset W_k$ be our usual chart with coordinates $\{z, u_1, u_2\}$.
As a module over the ring of functions $H^0(U; \mathcal{O})$, the module of
global sections of vector fields over $U$, $H^0(U; \; TU)$ is spanned by the
coordinate partial derivatives, which we relabel for convenience:
\[ \pd{z} \equiv \frac{\partial}{\partial x^0} \equiv \partial_0 \text{ ,\quad}
\pd{u_1} \equiv \frac{\partial}{\partial x^1} \equiv \partial_1 \text{ , and }
\pd{u_2} \equiv \frac{\partial}{\partial x^2} \equiv \partial_2 \text{ .}
\]
The exterior powers are spanned by the appropriate wedge products:
\begin{eqnarray*}
H^0(U; \; \Lambda^1 TU) & = & \bigl\langle \left\{ \partial_i \right\}_{i=0}^{2} \bigr\rangle \\
H^0(U; \; \Lambda^2 TU) & = & \left\langle \left\{ b_0 \equiv \partial_1 \wedge \partial_2 \text{ , \ }
b_1 \equiv \partial_2 \wedge \partial_0 \text{ , \ }
b_2 \equiv \partial_0 \wedge \partial_1 \right\} \right\rangle \\
H^0(U; \; \Lambda^3 TU) & = & \left\langle \left\{ \partial_0 \wedge \partial_1 \wedge \partial_2 \right\} \right\rangle
\end{eqnarray*}
We are interested in bivectors, i.e.\ elements of $H^0(U; \; \Lambda^2 TU)$.
We write a bivector field as
\begin{equation}\label{eq.defq}
q = q^i b_i = \frac{1}{2} q^i \varepsilon_{i}^{jk} \partial_j \wedge \partial_k \text{ ,}
\end{equation}
where the coefficients $q^i$ are functions on $U$.
We are using Einstein summation convention throughout,
and we write $f_{,i}$ for $\frac{\partial f}{\partial x^i}$.
We collect a few useful identities involving Lie brackets, and some
preliminary expressions used to compute the Schouten--Nijenhuis
brackets. Let $X$, $Y$ be vector fields and $f$, $g$ be functions. Then:
\begin{itemize}
\item
For the coordinate partial derivatives, the Lie bracket vanishes: $[\partial_j, \partial_k] = 0$ for all $j$, $k$.
\item
$[X, g Y] = X(g)Y + g [X,Y]$, so in particular, $[\partial_j, g \partial_k] = \frac{\partial g}{\partial x^j}\partial_k$
and $[f \partial_j, \partial_k] = -\frac{\partial f}{\partial x^k}\partial_j$.
\item
The SN-bracket of two bivectors is symmetric and results in a trivector,
i.e.\ a function multiple of $\partial_0 \wedge \partial_1 \wedge \partial_2$. On basis elements, it is given by:
\begin{align}\label{eqn.snbasic}
\bigl[ f \, \partial_j \wedge \partial_k, g \, \partial_m \wedge \partial_n \bigr] = & \
\bigl[ f \, \partial_j, g \, \partial_m \bigr] \wedge \partial_k \wedge \partial_n - \nonumber
\bigl[ f \, \partial_j, \partial_n \bigr] \wedge \partial_k \wedge g \, \partial_m \\ & -
\bigl[ \partial_k, g \, \partial_m \bigr] \wedge f \, \partial_j \wedge \partial_n +
\cancel{\bigl[ \partial_k, \partial_n \bigr]} \wedge f \, \partial_j \wedge g \, \partial_m \nonumber \\ = & \
f g_{,j} \,\trwdg{m}{k}{n} - g f_{,m} \,\trwdg{j}{k}{n} \nonumber \\ & +
g f_{,n} \,\trwdg{j}{k}{m} - f g_{,k} \,\trwdg{m}{j}{n}.
\end{align}
\end{itemize}
We are now in a position to compute the self-bracket of a general bivector field $q$:
\[ \bigl[ q, q \bigr] =
\bigl[ q^0 \partial_1 \wedge \partial_2 + q^1 \partial_2 \wedge \partial_0 + q^2 \partial_0 \wedge \partial_1, \;
q^0 \partial_1 \wedge \partial_2 + q^1 \partial_2 \wedge \partial_0 + q^2 \partial_0 \wedge \partial_1 \bigr]. \]
Consider distributing the sums out of this expression.
From the basis expression in Equation~\eqref{eqn.snbasic} we see that terms vanish unless
the indices in the triple wedge product are pairwise distinct, so that self-brackets of individual summands vanish.
Furthermore, commutativity of the SN-bracket on bivectors means that the cross terms group in pairs, so we have:
\[ \bigl[ q, q \bigr] = 2 \times \Bigl(
\bigl[ q^0 \partial_1 \wedge \partial_2,\; q^1 \partial_2 \wedge \partial_0 \bigr] +
\bigl[ q^1 \partial_2 \wedge \partial_0,\; q^2 \partial_0 \wedge \partial_1 \bigr] +
\bigl[ q^2 \partial_0 \wedge \partial_1,\; q^0 \partial_1 \wedge \partial_2 \bigr] \Bigr). \]
Now we apply Equation~\eqref{eqn.snbasic} to each term and group the results. Since the
four indices $j$, $k$, $m$, $n$ take only three distinct values, with $k = m$, only two terms are non-zero, namely
$- g f_{,m} \,\trwdg{j}{k}{n} - f g_{,k} \,\trwdg{m}{j}{n} = \bigl(f g_{,k} - g f_{,k} \bigr) \, \trwdg{j}{k}{n}$.
We find:
\[ \bigl[ q, q \bigr] = 2 \, \trwdg{0}{1}{2} \Bigl(
q^1 q^2_{,0} - q^2 q^1_{,0} + q^2 q^0_{,1} - q^0 q^2_{,1} + q^0 q^1_{,2} - q^1 q^0_{,2}\Bigr). \]
A bivector field $q$ is called a \emph{Poisson bivector} if it is integrable,
which happens if and only if its SN-self-bracket vanishes, $[q, q] = 0$.
If $q$ is given in coordinates by Equation~\eqref{eq.defq}, with $q^0, q^1, q^2 \in H^0(U; \mathcal{O})$,
then the integrability condition is:
\begin{align}\label{eq.sn0}
0 &= q^1 \frac{\partial q^2}{\partial x^0} - q^2 \frac{\partial q^1}{\partial x^0}
+ q^2 \frac{\partial q^0}{\partial x^1} - q^0 \frac{\partial q^2}{\partial x^1}
+ q^0 \frac{\partial q^1}{\partial x^2} - q^1 \frac{\partial q^0}{\partial x^2} \nonumber \\
&= q^1 \frac{\partial q^2}{\partial z} - q^2 \frac{\partial q^1}{\partial z}
+ q^2 \frac{\partial q^0}{\partial u_1} - q^0 \frac{\partial q^2}{\partial u_1}
+ q^0 \frac{\partial q^1}{\partial u_2} - q^1 \frac{\partial q^0}{\partial u_2}.
\end{align}
Note that by \Cref{subm}, $\mathcal{O}$-multiples of Poisson bivectors are themselves Poisson,
which we can also see directly from the above explicit expressions.
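For instance, the following sympy sketch (our verification, not part of the text) evaluates the left-hand side of \eqref{eq.sn0} for the bivector with $U$-chart components $(u_1, z, 0)$, which will reappear as a generator on $W_1$ below, and for an $\mathcal{O}$-multiple of it:
\begin{verbatim}
import sympy as sp

z, u1, u2 = sp.symbols('z u1 u2')

def self_bracket(q0, q1, q2):
    # Coefficient of [q,q] from the integrability condition, chart (z, u1, u2)
    d = sp.diff
    return sp.expand(q1*d(q2, z) - q2*d(q1, z)
                     + q2*d(q0, u1) - q0*d(q2, u1)
                     + q0*d(q1, u2) - q1*d(q0, u2))

print(self_bracket(u1, z, 0))        # -> 0: integrable
f = z*u2                             # a global function (cf. the Proposition)
print(self_bracket(f*u1, f*z, 0))    # -> 0: O-multiples stay integrable
\end{verbatim}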
Some Poisson structures on local surfaces will be useful. We summarize a few results.
\begin{remark}[surfaces]\label{Zs}
Using canonical coordinate charts on the surfaces
$Z_k= \Tot(\mathcal O_{\mathbb P^1}(-k))$,
\cite[Lem.\thinspace 2.8]{BG2} calculated all of their
Poisson structures, obtaining generators
as:
$ (1,-\xi), (z,-1) $ for $k =1$;
$(1,-1)$ for $k =2$;
$(u,-\xi^2v), (zu, -\xi v), (z^2u,-v)$ for $k \geq 3$,
written in the basis $(\partial_z \wedge \partial_u, \partial_\xi \wedge\partial_v ).$
We will also use the generators for Poisson structures on $Z_0$ which are $(1,-\xi^2),(z,-\xi ),(z^2,-1)$, and
for $Z_{-1}$ which are $(1,-\xi^3),(z,-\xi^2 ),(z^2,-\xi), (z^3,-1)$.
\end{remark}
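Under the gluing $(\xi,v)=(z^{-1}, z^k u)$ of $Z_k$ one computes $\partial_z\wedge\partial_u = -z^{k-2}\,\partial_\xi\wedge\partial_v$, so a pair $(a,b)$ above must satisfy $b=-z^{k-2}a$. The following sympy sketch (our own check) recovers the listed generators for $k=1$ and $k=-1$:
\begin{verbatim}
import sympy as sp

z, u, xi, v = sp.symbols('z u xi v')

def v_chart(a, k):
    # Express a*(d_z ^ d_u) on the V chart: b = -z**(k-2)*a, in (xi, v).
    b = -z**(k - 2) * a
    return sp.simplify(b.subs({z: 1/xi, u: xi**k * v}))

print(v_chart(sp.Integer(1), 1), v_chart(z, 1))  # Z_1:    -xi, -1
print([v_chart(z**i, -1) for i in range(4)])     # Z_{-1}: -xi**3, ..., -1
\end{verbatim}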
\section{Poisson structures on \texorpdfstring{$W_1$}{W\_1}}
Let $\imath\colon U \hookrightarrow W_1$ denote the inclusion. We
actually demand that the coefficients of $q$ are functions on all of
$W_1$, i.e.\ that they should be in the image of $\imath^* \colon
R \mathrel{\mathop:}= H^0(W_1;\mathcal{O}_{W_1}) \to H^0(U;\mathcal{O}_{U})$.
(We will not distinguish between $R$ and its image over $U$:
we are only working in local coordinates on $U$, but with the understanding
that we are describing global objects on $W_1$.)
In local coordinates on $U$, $R$ consists of convergent power series in
\[ \bigl\{ 1, u_1, z u_1, u_2, z u_2 \bigr\} \text{ .} \]
This imposes additional conditions on the coefficients $q^i$.
\begin{lemma}\label{biW1}
The space $M_1 = H^0(W_1,\Lambda^2TW_1)$
parametrizing all holomorphic bivector fields on $W_1$
has the following structure as a module over global holomorphic functions:
$$M_1 = \langle e_1,e_2,e_3,e_4 \rangle / \langle zu_2e_1-zu_1 e_2- u_2 e_3 +u_1e_4 \rangle \text{ .}$$
\end{lemma}
\begin{proof}
Since $M_1$ is given by global holomorphic sections of $\Lambda^2TW_1$,
using \v{C}ech cohomology, we search for $a,b,c$ holomorphic functions on $U$ such that
$$
\left[
\begin{matrix}
z^2 & -zu_1 & -zu_2 \\
0 & z^{-1} & 0 \\
0 & 0 & z^{-1}
\end{matrix}
\right]
\left[
\begin{matrix}
a \\
b \\
c
\end{matrix}
\right]
$$
is holomorphic on $V$.
We begin with $\displaystyle a= \sum_{l=0}^\infty\sum_{i=0}^\infty\sum_{s=0}^\infty a_{lis} z^lu_1^ iu_2^s$,
and similarly for $b$ and $c$.
Direct calculation on the formal neighbourhoods of $\mathbb P^1 \subset W_1$ gives the expressions
of the sections. It turns out that all the generators we need already appear in the second formal
neighbourhood, where we have:
$$\left[\begin{matrix}
a \\
b \\
c
\end{matrix}\right]=
b_{000}
\left[
\begin{matrix}0 \\ 1\\ 0
\end{matrix}\right]+
c_{000}
\left[
\begin{matrix}0 \\ 0\\ 1
\end{matrix}\right]+
b_{100}
\left[\begin{matrix} u_1\\ z \\0
\end{matrix}\right]
+
c_{100}
\left[\begin{matrix} u_2\\ 0 \\ z
\end{matrix}\right]
+
a_{020}
\left[
\begin{matrix} u_1^2\\ 0\\ 0
\end{matrix}\right]+
a_{002}
\left[
\begin{matrix} u_2^2 \\ 0\\ 0
\end{matrix}\right]+
a_{011}
\left[\begin{matrix} u_1u_2\\ 0 \\0
\end{matrix}\right]
$$
$$+b_{010}
\left[
\begin{matrix}0 \\ u_1\\ 0
\end{matrix}\right]+
b_{110}
\left[
\begin{matrix}0 \\ zu_1\\ 0
\end{matrix}\right]+
b_{210}
\left[
\begin{matrix}zu_1^2 \\ z^2u_1\\ 0
\end{matrix}\right]+
b_{001}
\left[
\begin{matrix}0 \\ u_2\\ 0
\end{matrix}\right]+
b_{101}
\left[
\begin{matrix}0 \\ zu_2\\ 0
\end{matrix}\right]+
b_{201}
\left[
\begin{matrix}zu_1u_2 \\ z^2u_2\\ 0
\end{matrix}\right].$$
At this point we have 13 generators of $M_1$ as a vector space over $\mathbb C$:
$$e_1\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ 1\\ 0
\end{matrix}\right],
e_2\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ 0\\ 1
\end{matrix}\right],
e_3\mathrel{\mathop:}=
\left[\begin{matrix} u_1\\ z \\0
\end{matrix}\right],
e_4\mathrel{\mathop:}=
\left[\begin{matrix} u_2\\ 0 \\ z
\end{matrix}\right],
e_5\mathrel{\mathop:}=
\left[
\begin{matrix} u_1^2\\ 0\\ 0
\end{matrix}\right],
e_6\mathrel{\mathop:}=
\left[
\begin{matrix} u_2^2 \\ 0\\ 0
\end{matrix}\right],
e_7\mathrel{\mathop:}=
\left[\begin{matrix} u_1u_2\\ 0 \\0
\end{matrix}\right],
$$
$$e_8\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ u_1\\ 0
\end{matrix}\right],
e_9\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ zu_1\\ 0
\end{matrix}\right],
e_{10}\mathrel{\mathop:}=
\left[
\begin{matrix}zu_1^2 \\ z^2u_1\\ 0
\end{matrix}\right],
e_{11}\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ u_2\\ 0
\end{matrix}\right],
e_{12}\mathrel{\mathop:}=
\left[
\begin{matrix}0 \\ zu_2\\ 0
\end{matrix}\right],
e_{13}\mathrel{\mathop:}=
\left[
\begin{matrix}zu_1u_2 \\ z^2u_2\\ 0
\end{matrix}\right].$$
These satisfy the set of relations:
$$ zu_2e_1-zu_1e_2-u_2 e_3 +u_1e_4=0, \quad u_2^2e_5 - u_1^2e_6=0,$$
$$e_5-u_1e_3+zu_1e_1=0, \quad e_6-u_2e_4+zu_2e_2=0,$$
$$e_7-u_2e_3+zu_2e_1=0, \quad e_7-u_1e_4+zu_1e_2=0,$$
$$ u_2e_5 - u_1e_7=0, \quad u_1e_6- u_2e_7=0.$$
We then proceed to obtain simpler presentations of $M_1$. For instance, the relations
on lines 2 and 3
may clearly be used to remove $e_5,e_6,e_7$, simplifying the presentation of $M_1$ to
a set of 10 generators with 4 relations, and so on.
After a long series of reductions, or else using a computer
algebra system, we arrive at a far simpler presentation:
$M_1= \langle e_1,e_2,e_3,e_4 \rangle$ with the single relation
$$ zu_2e_1-zu_1 e_2- u_2 e_3 +u_1e_4=0.$$
\end{proof}
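As a sanity check (ours, not part of the proof), one can apply the transition matrix from the proof to the four generators and confirm that their $V$-chart components are holomorphic:
\begin{verbatim}
import sympy as sp

z, u1, u2, xi, v1, v2 = sp.symbols('z u1 u2 xi v1 v2')

# Transition matrix for bivector components from the U to the V chart
M = sp.Matrix([[z**2, -z*u1, -z*u2],
               [0,     1/z,   0   ],
               [0,     0,     1/z ]])

gens = {'e1': (0, 1, 0), 'e2': (0, 0, 1),
        'e3': (u1, z, 0), 'e4': (u2, 0, z)}

for name, g in gens.items():
    w = (M * sp.Matrix(g)).subs({z: 1/xi})
    w = sp.simplify(w.subs({u1: xi*v1, u2: xi*v2}))
    print(name, list(w))  # entries polynomial in (xi, v1, v2): holomorphic
\end{verbatim}
For example, $e_3$ becomes $(0,1,0)$ and $e_1$ becomes $(-v_1,\xi,0)$ on the $V$ chart.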
\begin{theorem}\label{t1}
Every holomorphic Poisson structure on $W_1$ has the form $\sum_{i=1}^4p^ie_i$ where
$ (p^1,p^2,p^3,p^4) \in B^{-1}(0)$, where $B$ is the quasi-linear differential operator described in \Cref{operator}.
\end{theorem}
Specifically, by \Cref{biW1}
global bivectors on $W_1$ are generated by four elements over $R$,
given on the $U$ chart by
$$e_1=
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right] ,
e_2=
\left[\begin{matrix}
0 \\
0 \\
1
\end{matrix}\right] ,
e_3=
\left[\begin{matrix}
u_1 \\
z \\
0
\end{matrix}\right],
e_4=
\left[\begin{matrix}
u_2 \\
0 \\
z
\end{matrix}\right].
$$
Now write $ p = \sum_{h=1}^4 p^h e_h $
for a bivector field $p$ that extends to all of $W_1$, $p \in H^0(W_1; \Lambda^2 TW_1)$, that is,
\begin{align*}
p &= p^1 b_1 + p^2 b_2 + p^3 (u_1 b_0 + z b_1) + p^4 (u_2 b_0 + z b_2) \\
&= (\underbrace{u_1 p^3 + u_2 p^4}_{q^0}) b_0 + (\underbrace{p^1 + z p^3}_{q^1}) b_1 + (\underbrace{p^2 + z p^4}_{q^2}) b_2 \text{ ,}
\end{align*}
where $p^h \in R$ for $h=1, 2, 3, 4$.
We consider the integrability condition \eqref{eq.sn0} with:
\begin{align*}
q^0(z, u_1, u_2) &= u_1 p^3(z, u_1, u_2) + u_2 p^4(z, u_1, u_2) \\
q^1(z, u_1, u_2) &= p^1(z, u_1, u_2) + z p^3(z, u_1, u_2) \\
q^2(z, u_1, u_2) &= p^2(z, u_1, u_2) + z p^4(z, u_1, u_2)
\end{align*}
The condition becomes:
\begin{align}\label{eq.sn0w1}
0 =& \ (p^1 + z p^3) \frac{\partial(p^2 + z p^4)}{\partial z} - (p^2 + z p^4) \frac{\partial(p^1 + z p^3)}{\partial z} \nonumber \\
&+ (p^2 + z p^4) \frac{\partial(u_1 p^3 + u_2 p^4)}{\partial{u_1}} - (u_1 p^3 + u_2 p^4) \frac{\partial(p^2 + z p^4)}{\partial{u_1}} \nonumber \\
&- (p^1 + z p^3) \frac{\partial(u_1 p^3 + u_2 p^4)}{\partial{u_2}} + (u_1 p^3 + u_2 p^4) \frac{\partial(p^1 + z p^3)}{\partial{u_2}} \nonumber \\[1em]
=& \ (p^1 + z p^3)(p^2_{,0} + p^4 + z p^4_{,0}) - (p^2 + z p^4)(p^1_{,0} + p^3 + z p^3_{,0}) \nonumber \\
&+ (p^2 + z p^4)(p^3 + u_1 p^3_{,1} + u_2 p^4_{,1}) - (u_1 p^3 + u_2 p^4)(p^2_{,1} + z p^4_{,1}) \nonumber \\
&- (p^1 + z p^3)(p^4 + u_2 p^4_{,2} + u_1 p^3_{,2}) + (u_1 p^3 + u_2 p^4)(p^1_{,2} + z p^3_{,2}) \nonumber \\[1em]
=& \ \pbra p120 + z(\pbra p140 + \pbra p320) + z^2(\pbra p340) \nonumber \\
&+ u_1(\pbra p231 + \pbra p312) + zu_1(\pbra p431) \nonumber \\
&+ u_2(\pbra p241 + \pbra p412) + zu_2(\pbra p432).
\end{align}
\begin{note}
The vectors $\langle \{ e_1, e_2, e_3, e_4 \} \rangle_{\mathbb{C}}$ generate a
submodule of Poisson bivector fields over $R$: any such vector field
is of the form $\mu_1 p e_1 + \mu_2 p e_2 + \mu_3 p e_3 + \mu_4 p e_4$
for some $\mu_1, \mu_2, \mu_3, \mu_4 \in \mathbb{C}$ and $p \in R$,
say, which can be seen to satisfy Equation~\eqref{eq.sn0w1}, with $p^h = \mu_h p$ for $h=1,2,3,4$:
each antisymmetric term $\pbra pijk = \mu_i \mu_j (p\,p_{,k} - p\,p_{,k})$ vanishes.
The connection between the $U$ and the $V$ chart is described above.
\end{note}
For $h=1, 2, 3, 4$, write:
\begin{align}
p^h(z, u_1, u_2) &= \sum_{s=0}^{\infty}\sum_{t=0}^{\infty} \sum_{r=0}^{s+t} p^h_{rst} z^r u_1^s u_2^t \label{eq.p_series} \\
p^h_{,0}(z, u_1, u_2) &= \sum_{s=0}^{\infty}\sum_{t=0}^{\infty} \sum_{r=1}^{s+t} p^h_{rst} r z^{r-1} u_1^s u_2^t \label{eq.p_series_z} \\
p^h_{,1}(z, u_1, u_2) &= \sum_{s=1}^{\infty}\sum_{t=0}^{\infty} \sum_{r=0}^{s+t} p^h_{rst} s z^r u_1^{s-1} u_2^t \label{eq.p_series_u1} \\
p^h_{,2}(z, u_1, u_2) &= \sum_{s=0}^{\infty}\sum_{t=1}^{\infty} \sum_{r=0}^{s+t} p^h_{rst} t z^r u_1^s u_2^{t-1} \label{eq.p_series_u2}.
\end{align}
We can substitute these power series expansions into condition
\eqref{eq.sn0w1} and derive conditions on every infinitesimal
neighbourhood, i.e.\ for bounded values of $s$ and $t$:
The restriction to the $n^\text{th}$ infinitesimal neighbourhood
sets to zero all terms $u_1^s u_2^t$ for which $s + t > n$.
Note that the series for $p^h$ has $(n + 1)^2$ terms on the $n^\text{th}$ infinitesimal neighbourhood
(more precisely: in the kernel of $\mathcal{O}_{\ell^{(n)}} \to \mathcal{O}_{\ell^{(n - 1)}}$),
where $n = s + t$.
\begin{note}
The expression in Equation~\eqref{eq.sn0w1} is an element of $R$,
i.e.\ a globally holomorphic function. This is to be expected, since
$p = \sum_{h=1}^4 p^h e_h$ is (the restriction to $U$ of) a global
bivector field, and the NS-bracket maps global (multi)vector fields to
global (multi)vector fields (being a composition of Lie brackets,
which map vector fields to vector fields). We can also verify this in
local coordinates: Let $[p, p] = f(p^i, p^i_{,j}) \; \partial_0 \wedge
\partial_1 \wedge \partial_2$, so that $f$ is the right-hand side of
Equation~\eqref{eq.sn0w1}. Note that $\partial_0 \wedge \partial_1
\wedge \partial_2 = \partial_{\tilde{0}} \wedge \partial_{\tilde{1}}
\wedge \partial_{\tilde{2}}$ on $U \cap V$ (after all, $W_1$ is
Calabi--Yau); we show that $f$ is globally holomorphic on $W_1$: If
$p^h \in R$, then $p^h_{,0}$ and $zp^h_{,0}$ are in $R$, too, as is
clear from considering \eqref{eq.p_series_z}. Terms $u_1 p^h_{,1}$
and $u_2 p^h_{,1}$ are holomorphic, as can be seen from
\eqref{eq.p_series_u1}, and similarly for $u_1 p^h_{,2}$ and $u_2
p^h_{,2}$. The remaining terms are not individually globally
holomorphic, but they group as follows:
\[ p^3 \Bigl( z^2 p^4_{,0} - z u_1 p^4_{,1} - z u_2 p^4_{,2} \Bigr) -
p^4 \Bigl( z^2 p^3_{,0} - z u_1 p^3_{,1} - z u_2 p^3_{,2} \Bigr) .\]
By considering \eqref{eq.p_series_z}, \eqref{eq.p_series_u1}, and
\eqref{eq.p_series_u2}, we see that the only non-holomorphic terms are
$z^{s + t + 1}u_1^s u_2^t$, and those appear with coefficient
$\bigl((s + t) - s - t\bigr)\bigl(p^4_{s+t,s,t} - p^3_{s+t,s,t}\bigr) = 0$.
\end{note}
\begin{note}\label{operator}
The quasi-linear differential operator $B$ defined above can be written as follows:
\begin{multline*}
B(p^1, p^2, p^3, p^4) = \mathbf{p}^T Q \, \mathbf{p} = \\
\Bigl[ p^1 \ p^2 \ p^3 \ p^4 \Bigr]
\begin{bmatrix}
0 & \partial_0 & -u_1 \partial_2 & z \partial_0 - u_2 \partial_2 \\
-\partial_0 & 0 & -z \partial_0 + u_1 \partial_1 & u_2 \partial_1 \\
u_1\partial_2 & z\partial_0 -u_1\partial_1 & 0 & z^2 \partial_0 - zu_1 \partial_1 - zu_2 \partial_2 \\
-z \partial_0 + u_2 \partial_2 & -u_2 \partial_1 & -z^2 \partial_0 + z u_1 \partial_1 + z u_2 \partial_2 & 0
\end{bmatrix}
\begin{bmatrix}p^1 \\ p^2 \\ p^3 \\ p^4 \end{bmatrix}
\end{multline*}
where we have expressed $f$ using the quadratic form $Q$. We may linearize
this differential equation around a fixed solution $\mathbf{p}^T = [p^1 \ p^2 \ p^3 \ p^4]$:
\[ \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \Bigl( f(\mathbf{p} + \varepsilon \Delta\mathbf{p}) - f(\mathbf{p}) \Bigr) =
\Delta\mathbf{p}^T Q \, \mathbf{p} + \mathbf{p}^T Q \, \Delta\mathbf{p} .\]
\end{note}
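Since $B(\mathbf{p})$ is bilinear in $\mathbf{p}$, the displayed limit is just the product rule. The following sympy sketch verifies the formula with the differential-operator entries of $Q$ replaced by scalar symbols, an assumption made only for this illustration:
\begin{verbatim}
# Linearization of the bilinear expression f(p) = p^T Q p: the derivative
# at p in direction dp is dp^T Q p + p^T Q dp.  Q is taken constant here
# (scalars in place of the differential operators), which suffices to
# illustrate the bilinearity argument.
import sympy as sp

eps = sp.Symbol('epsilon')
Q = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'q{i}{j}'))
p = sp.Matrix(sp.symbols('p1:5'))
dp = sp.Matrix(sp.symbols('dp1:5'))

f = lambda v: (v.T * Q * v)[0]
deriv = sp.limit((f(p + eps * dp) - f(p)) / eps, eps, 0)
assert sp.expand(deriv - (dp.T * Q * p + p.T * Q * dp)[0]) == 0
\end{verbatim}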
\subsection{Symmetries and embeddings}
We now give isomorphisms between the Poisson structures on $W_1$.
\begin{remark} Note that there are two clear symmetries of $W_1$:
\begin{itemize}
\item exchanging the radial directions $s_0(z,u_1,u_2) = (z,u_2,u_1)$ and
\item exchanging the charts $U$ and $V$, that is, $s_1(z,u_1,u_2)= (\xi,v_1,v_2).$
\end{itemize}
\end{remark}
These symmetries are automorphisms of $W_1$ and are also Poisson isomorphisms between some structures
on $W_1$ as shown in the diagram below:
\begin{center}
\begin{tikzcd}
(W_1,e_3) \arrow[<->]{r}{s_1}
\arrow[swap,<->]{d}{s_0}
&
(W_1,e_1) \arrow[<->]{d}{s_0}
\\
(W_1,e_2) \arrow[swap,<->]{r}{s_1}
&
(W_1,e_4)
\end{tikzcd}.
\end{center}
In other words, the diagram gives
$e_3= s_0^*(e_2)$, $e_4= s_1^*(e_2)$, and $e_1 =s_1^* s_0^* (e_2)$.
\begin{theorem}\label{iso}
The Poisson structures $e_1,e_2,e_3,e_4$ are all pairwise isomorphic.
\end{theorem}
There are two obvious inclusions of the surface $Z_1$ into the threefold $W_1$.
\begin{notation}\label{pis} We denote by $\pi_{i}$ the Poisson structure on $Z_k$
that is given on the $U$-chart by $z^iu^\varepsilon$ where
$\varepsilon = 0$ if $i \leq 2$ and $\varepsilon = 1$ if $i \geq 3$.
We denote by $j_1\colon Z_1 \rightarrow W_1$ (resp. $j_2$) the inclusion into the first (resp. second) fiber coordinate, that is,
on the $U$-chart $j_1(z,u) = (z,u,0)$ (resp. $j_2(z,u) = (z,0,u)$).
We call $j_1 $, $s_0j_1, s_1 j_1, s_1s_0j_1$ the {\bf principal embeddings} of $Z_1$ into $W_1$.
\end{notation}
\begin{theorem}\label{princ}
The 4 principal embeddings of the Poisson surface $(Z_1,\pi_0)$ generate all Poisson structures on $W_1$.
\end{theorem}
\begin{proof}
Let $j_1(Z_1)$ (resp. $j_2(Z_1)$) be the embedding of the surface $Z_1$ into $W_1$
by $u_2=0$ and $v_2=0$ (resp. $u_1=0$ and $v_1=0$). Then Poisson structures induced by the first embedding are:
$$(j_1)_*(1)_U= \left[\begin{matrix}0 \\ 0 \\ 1\end{matrix}\right]_U,\quad
(j_1)_*(-\xi)_V= \left[\begin{matrix}0 \\ 0 \\ -\xi\end{matrix}\right]_V, \quad \text{hence} \quad
e_2\vert_{j_1(Z_1)}=(j_1)_*\pi_0, $$
analogously,
$e_4\vert_{j_1(Z_1)}=(j_1)_*(s \pi_0)$.
The induced Poisson structures by the second embedding are
$$(j_2)_*(1)_U= \left[\begin{matrix}0 \\ 1 \\ 0\end{matrix}\right]_U, \quad
(j_2)_*(-\xi)_V= \left[\begin{matrix}0 \\ -\xi\\ 0\end{matrix}\right]_V, \quad \text{hence} \quad
e_3\vert_{j_2(Z_1)}=(j_2)_*\pi_0,$$
analogously
$e_1\vert_{j_2(Z_1)} =(j_2)_*(s\pi_0) $.
\end{proof}
\subsection{Symplectic foliations on \texorpdfstring{\except{toc}{$\bm{W_1}$}\for{toc}{$W_1$}}{W\_1}}\label{leaf1}
Since $e_1,e_2,e_3,e_4$ are all isomorphic, to understand their corresponding
symplectic foliations, it is enough to describe the symplectic foliation in one case.
We consider $e_2$ whose expression in canonical coordinates is
$\{f,g\}_{e_2} = (df\wedge dg) \lrcorner (\frac{\partial}{\partial z} \wedge \frac{\partial}{\partial u_1})=
(df\wedge dg) \lrcorner (\partial_0 \wedge \partial_1).$
\begin{theorem}\label{w1fol}
The symplectic foliation for $(W_1, \partial_0 \wedge \partial_1)$ is given by:
\begin{itemize}
\item $ \partial_0 \wedge \partial_1$ has degeneracy locus on the line
$\{v_2= \xi=0\}$, where the leaves are $0$-dimensional, consisting of single points, and
\item
$2$-dimensional symplectic leaves elsewhere, cut out on the $U$ chart by $u_2$ constant.
\end{itemize}
\end{theorem}
\begin{proof}
To find the symplectic leaves we compute Poisson cohomology $H^0(W_1,e_2)$,
obtaining that $ e_2 = \, \partial_0 \wedge \partial_1$
has 2 dimensional symplectic leaves cut out on the $U$ chart by $u_2$ constant (the Casimir functions),
and
next changing coordinates
$$
\left[
\begin{matrix}
z^2 & -zu_1 & -zu_2 \\
0 & z^{-1} & 0 \\
0 & 0 & z^{-1}
\end{matrix}
\right]
\left[
\begin{matrix}
0 \\0 \\ 1
\end{matrix}
\right]= \left[
\begin{matrix}
-zu_2 \\
0 \\
z^{-1}
\end{matrix}
\right]=
\left[
\begin{matrix}
-v_2 \\
0 \\
\xi
\end{matrix}
\right]
$$
we see that the expression of $e_2$ in $V$-coordinates
is $-v_2\frac{\partial}{\partial v_1}\wedge \frac{\partial}{\partial v_2}+\xi \frac{\partial}{\partial \xi} \wedge \frac{\partial}{\partial v_1}$, which
vanishes when $\xi=v_2=0$.
Hence
$e_2$ has degeneracy locus on the line
$D(e_2)=\{v_2= \xi=0\}$, where the leaves are
$0$-dimensional, consisting of the single points of the line $\xi=v_2=0$.
\end{proof}
\section{Poisson structures on \texorpdfstring{$W_2$}{W\_2}}
The Calabi--Yau threefold we consider in this section is
$$W_2\mathrel{\mathop:}= \Tot (\mathcal O_{\mathbb P^1}(-2) \oplus \mathcal O_{\mathbb P^1}) = Z_2 \times \mathbb C.$$
We will carry out calculations using the canonical coordinates $W_2= U \cup V$
where $U \simeq \mathbb C^3 \simeq V$ with coordinates $U = \{z,u_1,u_2\}$, $V= \{\xi, v_1,v_2\}$,
and change of coordinates on $U\cap V \simeq \mathbb C^* \times \mathbb C\times \mathbb C$ given by
\[ \bigl\{ \xi = z^{-1} \text{ ,\quad}
v_1 = z^2 u_1 \text{ ,\quad}
v_2 = u_2 \bigr\} \text{ ,} \]
so that $z = \xi^{-1}$, $u_1 = \xi^2 v_1$, and $u_2 = v_2$.
The transition matrix for the tangent bundle is the Jacobian matrix of the change of
coordinates, and taking the second exterior power we obtain the transition matrix for $\Lambda^2TW_2$:
\[
\left[
\begin{matrix}
z^2 & -2zu_1 & 0 \\
0 & -z^{-2} & 0 \\
0 & 0 & -1
\end{matrix}
\right]
.\]
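This matrix can be checked directly. The following sympy sketch, assuming the cyclic ordering $(\partial_1\wedge\partial_2,\ \partial_2\wedge\partial_0,\ \partial_0\wedge\partial_1)$ for the components of a bivector (consistent with the chart expressions used below), recovers it from the Jacobian:
\begin{verbatim}
# Recover the transition matrix of Lambda^2 T W_2 from the change of
# coordinates xi = 1/z, v1 = z^2 u1, v2 = u2.  A column [a, b, c] means
# the bivector a d1^d2 + b d2^d0 + c d0^d1.
import sympy as sp

z, u1, u2, a, b, c = sp.symbols('z u1 u2 a b c')
J = sp.Matrix([1/z, z**2*u1, u2]).jacobian([z, u1, u2])

# The bivector as an antisymmetric matrix P with P[1,2] = a,
# P[2,0] = b, P[0,1] = c; components transform as P' = J P J^T.
P = sp.Matrix([[0, c, -b], [-c, 0, a], [b, -a, 0]])
Pp = sp.expand(J * P * J.T)
ap, bp, cp = Pp[1, 2], Pp[2, 0], Pp[0, 1]

M = sp.Matrix([[sp.diff(x, y) for y in (a, b, c)] for x in (ap, bp, cp)])
print(sp.simplify(M))  # [[z**2, -2*z*u1, 0], [0, -1/z**2, 0], [0, 0, -1]]
\end{verbatim}
Applying the resulting matrix to $[0,1,0]^T$ recovers the $V$-chart expression of $e_1$ computed in \Cref{iso1-5} below.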
Let $\imath\colon U \hookrightarrow W_2$ denote the inclusion. We
actually demand that the coefficients of $q$ are functions on all of
$W_2$, i.e.\ that they should be in the image of $\imath^* \colon
R \mathrel{\mathop:}= H^0(W_2;\mathcal{O}_{W_2}) \to H^0(U;\mathcal{O}_{U})$.
(We will not distinguish between $R$ and its image over $U$:
we are only working in local coordinates on $U$, but with the understanding
that we are describing global objects on $W_2$.)
In local coordinates on $U$, $R$ consists of convergent power series in
\[ \bigl\{ 1, u_1,zu_1,z^2u_1,u_2 \bigr\} \text{ .} \]
Now write $p = \sum_{h=1}^5 p^h e_h$
for a bivector field $p$ that extends to all of $W_2$, $p \in H^0(W_2; \Lambda^2 TW_2)$.
\begin{lemma} \label{W2gens}The space $M_2 = H^0(W_2,\Lambda^2TW_2)$
parametrizing all holomorphic bivector fields on $W_2$
has the following structure as a module over global holomorphic functions:
$$M_2 = \langle e_1,e_2,e_3,e_4, e_5 \rangle / \langle u_1e_3-zu_1e_1,
u_1e_5-zu_1e_3-2zu_1e_2\rangle.$$
\end{lemma}
\begin{proof}
To find
$H^0(W_2,\Lambda^2TW_2)$
we need global holomorphic sections, that is, we must find $a,b,c$ holomorphic on $U$ such that
$$
\left[
\begin{matrix}
z^2 & -2zu_1 & 0 \\
0 & -z^{-2} & 0 \\
0 & 0 & -1
\end{matrix}
\right]
\left[
\begin{matrix}
a \\
b \\
c
\end{matrix}
\right]
$$
is holomorphic on $V$.
We start with $\displaystyle a= \sum_{l=0}^\infty\sum_{i=0}^\infty\sum_{s=0}^\infty a_{lis} z^lu_1^iu_2^s$
and similarly for $b$ and $c$.
We proceed by calculations on formal neighborhoods of the $\mathbb P^1 \subset W_2$
and verify that generators for all global sections are already found on the first formal neighborhood,
where the general expression of a section of $\Lambda^2TW_2$ is:
$$
\left[\begin{matrix}
a \\
b \\
c
\end{matrix}\right] =
c_{000}
\left[\begin{matrix}
0 \\
0 \\
1
\end{matrix}\right]
+c_{010}
\left[\begin{matrix}
0 \\
0 \\
u_1
\end{matrix}\right]
+c_{110}
\left[\begin{matrix}
0 \\
0 \\
zu_1
\end{matrix}\right]
+c_{210}
\left[\begin{matrix}
0 \\
0 \\
z^2u_1
\end{matrix}\right]
+c_{001}
\left[\begin{matrix}
0 \\
0 \\
u_2
\end{matrix}\right]
+a_{010}
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right] $$
$$
+b_{000}
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right]
+b_{100}
\left[\begin{matrix}
0 \\
z \\
0
\end{matrix}\right]
+b_{200}
\left[\begin{matrix}
2 zu_1 \\
z^2 \\
0
\end{matrix}\right]
+b_{010}
\left[\begin{matrix}
0 \\
u_1 \\
0
\end{matrix}\right]\\
+b_{110}
\left[\begin{matrix}
0 \\
zu_1 \\
0
\end{matrix}\right]
+b_{210}
\left[\begin{matrix}
0 \\
z^2u_1 \\
0
\end{matrix}\right] \\
+b_{310}
\left[\begin{matrix}
0 \\
z^3u_1 \\
0
\end{matrix}\right].
$$
We then need the structure of $M_2 = H^0(W_2,\Lambda^2TW_2)$ as a module over global functions.
At first this gives us potentially 13 generators, but since $u_1,zu_1,z^2u_1,u_2$
are global functions, we obtain that in fact all sections can be obtained from the smaller set of
generators:
$$
e_1=
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right],
e_2=
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right] ,
e_3=
\left[\begin{matrix}
0 \\
z \\
0
\end{matrix}\right],
e_4=\left[\begin{matrix}
0 \\
0 \\
1
\end{matrix}\right] ,
e_5=
\left[\begin{matrix}
2zu_1 \\
z^2 \\
0
\end{matrix}\right].
$$
To describe the module structure over global sections,
we write relations among the generators.
We have the equations:
$$e_3-ze_1=0$$
$$e_5-ze_3-2ze_2 =0.$$
Note that there are no equations involving $e_4$. This corresponds to the fact that the geometry of $W_2= Z_2 \times \mathbb C$
is that of a product of a surface with $\mathbb C$. Accordingly, $u_2$ does not appear in the relations obtained from these equations.
To get relations as an $\mathcal O(W_2)$-module, we multiply the equations by $u_1$, obtaining:
$$u_1e_3-zu_1e_1=0$$
$$u_1e_5-zu_1e_3-2zu_1e_2 =0.$$
\end{proof}
Next we discuss which of these bivector fields give isomorphic Poisson structures.
\begin{lemma}\label{iso1-5}
The Poisson manifolds $(W_2, e_1)$ and $(W_2,e_5)$ are isomorphic.
\end{lemma}
\begin{proof}
Note that by writing $e_1$ in $V$-coordinates we get:
\[
\left[
\begin{matrix}
z^2 & -2zu_1 & 0 \\
0 &- z^{-2} & 0 \\
0 & 0 & -1
\end{matrix}
\right]
\left[
\begin{matrix}
0 \\ 1 \\ 0
\end{matrix}
\right]
=
\left[
\begin{matrix}
-2zu_1 \\ -z^{-2} \\ 0
\end{matrix}
\right]
=
\left[
\begin{matrix}
-2\xi v_1 \\ -\xi^2 \\ 0
\end{matrix}
\right].
\]
So we get an isomorphism between $(W_2, e_1)$ and $(W_2,-e_5)$ by mapping the $U$-chart of one to the $V$-chart of the other and vice versa. The desired isomorphism then follows from the fact that $(W_2,e_5)$ and $(W_2,-e_5)$ are isomorphic.
\end{proof}
We now describe the loci where Poisson structures on $W_2$ degenerate.
\begin{lemma} \label{degeneracy2}
The degeneracy loci of Poisson structures on $W_2$ are:
\begin{itemize}
\item isomorphic to $\mathbb C^2$ for $e_1$,
\item isomorphic to $\mathbb P^1 \times \mathbb C$ for $e_2$,
\item isomorphic to $\mathbb C^2\cup \mathbb C$ for $e_3$, and
\item empty for $e_4$.
\end{itemize}
\end{lemma}
\begin{proof} The coefficients of the Poisson structures in coordinate charts are:
$$
e_1 \mathrel{\mathop:}= \left[\begin{matrix} 0 \\ 1 \\ 0 \end{matrix} \right] _U = \left[\begin{matrix} -2 \xi v_1 \\ -\xi^2 \\ 0 \end{matrix} \right] _V , \qquad
e_2 \mathrel{\mathop:}= \left[\begin{matrix} u_1 \\ 0 \\ 0 \end{matrix} \right] _U = \left[ \begin{matrix} v_1 \\ 0 \\ 0 \end{matrix} \right] _V ,
$$
$$
e_3 \mathrel{\mathop:}= \left[\begin{matrix} 0 \\ z \\ 0 \end{matrix} \right] _U = \left[\begin{matrix} -2 v_1 \\ -\xi \\ 0 \end{matrix} \right] _V , \qquad
e_4 \mathrel{\mathop:}= \left[ \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right] _U = \left[ \begin{matrix} 0 \\ 0 \\ -1 \end{matrix} \right] _V.
$$
On the $V$ chart we have that $e_1$ degenerates when $\xi=0$, which is a copy of $\mathbb C^2$.
The structure $e_2$ degenerates when $u_1=v_1=0$, which gives a trivial product $\mathbb P^1 \times \mathbb C$.
On the $U$ chart we have that $e_3$ degenerates when $z=0$, which is a copy of $\mathbb C^2$; on the $V$ chart $e_3$ also degenerates when $\xi=v_1=0$, which is a copy of $\mathbb C$.
For $e_4$ the degeneracy locus is empty.
\end{proof}
\begin{corollary} The brackets $e_1, e_2, e_3, e_4$ give $W_2$ non-isomorphic Poisson structures.
\end{corollary}
There are natural inclusions of the surfaces $\mathbb C^2$, $Z_2$, and $Z_0$ into $W_2$:
\begin{notation}
We denote by $j_s $ for $s=0,1,2$ the inclusions of $\mathbb C^2$, $Z_2$, and $Z_0$ into the threefold $W_2$.
Hence, in coordinates we have:
\begin{itemize}
\item $j_0\colon \mathbb C^2 \rightarrow W_2$ includes $\mathbb C^2$ as the fiber $z=\xi=1$,
\item $j_1\colon Z_2 \rightarrow W_2$ includes $Z_2$ as the surface $u_2=v_2=0$,
\item $j_2\colon Z_0 \rightarrow W_2$ includes $Z_0$ as the surface $u_1=\xi^2 v_1=0$.
\end{itemize}
\end{notation}
\begin{theorem} \label{emb2}The embedded Poisson surfaces $j_0(\mathbb C^2, \pi_0),$ $j_1(Z_2,\pi_0)$ and $j_2(Z_0,\pi_i)$
with $i=0,1,2,$
generate
all Poisson structures on $W_2$.
\end{theorem}
\begin{proof}
Let $j_1(Z_2)$ (resp. $j_2(Z_0)$) be the embedding of the surface $Z_2$ (resp. $Z_0$) into $W_2$
cut out by $u_2=v_2=0$ (resp. $u_1=\xi^2v_1=0$).
Then the Poisson structure induced by the first embedding is:
$$(j_1)_*(1)_U= \left[\begin{matrix}0 \\ 0 \\ 1\end{matrix}\right]_U,\quad
(j_1)_*(-1)_V= \left[\begin{matrix}0 \\ 0 \\ -1\end{matrix}\right]_V, \quad \text{hence} \quad
e_4\vert_{j_1(Z_2)}=(j_1)_*\pi_0. $$
The Poisson structures induced by the second embedding are
$$(j_2)_*(1)_U = \left[\begin{matrix}0 \\ 1 \\ 0\end{matrix}\right]_U, \quad
(j_2)_*(-\xi^2)_V = \left[\begin{matrix}0 \\ -\xi^2\\ 0\end{matrix}\right]_V, \quad \text{hence} \quad
e_1\vert_{j_2(Z_0)}=(j_2)_*\pi_0,$$
and analogously
$e_3\vert_{j_2(Z_0)} =(j_2)_*\pi_1$ and $e_5\vert_{j_2(Z_0)} =(j_2)_*\pi_2$.
Since $j_0$ has image the fiber $z=\xi=1$, we obtain
$$(j_0)_*(\pi_0)= \left[\begin{matrix}1 \\ 0 \\ 0\end{matrix}\right]_U = \left[\begin{matrix}1 \\ 0 \\ 0\end{matrix}\right]_V$$
along the fiber; multiplying by the global function $u_1$ gives
$\left[\begin{matrix}u_1 \\ 0 \\ 0\end{matrix}\right]_U = \left[\begin{matrix}v_1 \\ 0 \\ 0\end{matrix}\right]_V$,
hence $e_2\vert_{j_0(\mathbb C^2)}=(j_0)_*\pi_0.$
\end{proof}
\subsection{Symplectic foliations on \texorpdfstring{$\bm{W_2}$}{W\_2}}
In this section we perform the cohomological calculations and identify the leaves of the symplectic foliation associated
to each Poisson structure on $W_2$.
\begin{lemma}\label{coh-beta0}
$H^0(W_2, e_1) = \{ f \in \mathcal{O}(W_2) / f = f(u_1) \}$
\end{lemma}
\begin{proof}
Recall that $e_1 = - \partial_0 \wedge \partial_2$. Then we have
\[
[f, e_1] = -[f, \partial_0 \wedge \partial_2] = -[f, \partial_0] \wedge \partial_2 + \partial_0 \wedge [f, \partial_2] =
\dfrac{\partial f}{\partial u_2} \partial_0 - \dfrac{\partial f}{\partial z} \partial_2,
\]
so that $ f \in \ker [e_1, \cdot] $ if and only if
$\displaystyle
\dfrac{\partial f}{\partial z} = \dfrac{\partial f}{\partial u_2} = 0,
$
i.e., $f$ does not depend on $z$ and $u_2$.
\end{proof}
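The same verification can be automated. A minimal sympy sketch, with the hypothetical helper \texttt{ham\_field} implementing $X_f^j = \sum_i \pi^{ij}\partial_i f$:
\begin{verbatim}
# Casimir condition for e_1 = -d0 ^ d2 on the U chart of W_2: the
# Hamiltonian vector field X_f, with components
# X_f^j = sum_i Pi[i,j] df_i, must vanish identically.
import sympy as sp

z, u1, u2 = sp.symbols('z u1 u2')
f = sp.Function('f')(z, u1, u2)

def ham_field(Pi, f):
    df = sp.Matrix([sp.diff(f, x) for x in (z, u1, u2)])
    return Pi.T * df

Pi_e1 = sp.Matrix([[0, 0, -1], [0, 0, 0], [1, 0, 0]])  # pi^{02} = -1
print(ham_field(Pi_e1, f).T)  # [df/du2, 0, -df/dz]: zero iff f = f(u1)
\end{verbatim}
The component matrices of $e_2$, $e_3$ and $e_4$ below can be fed to the same helper.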
\begin{lemma}\label{coh-alpha}
$H^0(W_2, e_2 ) = \{ f \in \mathcal{O}(W_2) / f = f(z) \}$
\end{lemma}
\begin{proof}
Recall that $e_2 = u_1 \partial_1 \wedge \partial_2$. Then we have
\[
\[
[f, e_2] = [f, u_1 \partial_1 \wedge \partial_2] = [f, u_1 \partial_1] \wedge \partial_2 - u_1 \partial_1 \wedge [f, \partial_2] =
u_1 \dfrac{\partial f}{\partial u_1} \partial_2 - u_1\dfrac{\partial f}{\partial u_2} \partial_1,
\]
so that $ f \in \ker [e_2, \cdot] $ if and only if
$\displaystyle
\dfrac{\partial f}{\partial u_1} = \dfrac{\partial f}{\partial u_2} = 0,
$
i.e., $f$ does not depend on $u_1$ and $u_2$.
\end{proof}
\begin{lemma}\label{coh-beta1}
$H^0(W_2, e_3) = \{ f \in \mathcal{O}(W_2) / f = f(u_1) \}$
\end{lemma}
\begin{proof}
Recall that $e_3 = - z\partial_0 \wedge \partial_2$. Then we have
\[
[f, e_3] = -[f, z \partial_0 \wedge \partial_2] = -[f, z\partial_0] \wedge \partial_2 + z\partial_0 \wedge [f, \partial_2] = z\dfrac{\partial f}{\partial u_2} \partial_0 - z\dfrac{\partial f}{\partial z} \partial_2,
\]
so that $ f \in \ker [e_3, \cdot] $ if and only if
$\displaystyle
\dfrac{\partial f}{\partial z} = \dfrac{\partial f}{\partial u_2} = 0,
$
i.e., $f$ does not depend on $z$ and $u_2$.
\end{proof}
\begin{lemma}\label{coh-gamma}
$H^0(W_2, e_4) = \{ f \in \mathcal{O}(W_2) / f = f(u_2) \}$.
\end{lemma}
\begin{proof}
Recall that $e_4 = \partial_0 \wedge \partial_1$. Then we have
\[
[f, e_4] = [ f, \partial_0 \wedge \partial_1 ] = [f, \partial_0] \wedge \partial_1 - \partial_0 \wedge [f, \partial_1] =
\dfrac{\partial f}{\partial z} \partial_1 - \dfrac{\partial f}{\partial u_1} \partial_0,
\]
so that $f \in \ker [e_4, \cdot]$ if and only if
$\displaystyle
\dfrac{\partial f}{\partial z} = \dfrac{\partial f}{\partial u_1} = 0,
$
i.e., $f$ does not depend on $z$ and $u_1$.
\end{proof}
We then obtain the description of the symplectic foliations on $W_2$ determined by these Poisson structures.
\begin{theorem}\label{foliations2}
The symplectic foliations on $W_2$
have $0$-dimensional leaves consisting of single points over each
of their corresponding degeneracy loci described in \Cref{degeneracy2},
and their generic leaves, which are $2$-dimensional, are as follows:
\begin{itemize}
\item surfaces of constant $u_1$ for $e_1$ and $e_3$, one of them isomorphic to $\mathbb P^1 \times \mathbb C$.
\item isomorphic to $(\mathbb C\setminus\{0\}) \times \mathbb C$ for $e_2$ (contained in the fibers of the projection to $\mathbb P^1$).
\item isomorphic to the surface $Z_2$ and cut out by $u_2=v_2$ constant for $e_4$.
\end{itemize}
\end{theorem}
\vspace{3mm}
\section{Poisson structures on \texorpdfstring{$W_3$}{W\_3}}
The Calabi--Yau threefold we consider in this section is
$W_3\mathrel{\mathop:}= \Tot (\mathcal O_{\mathbb P^1}(-3) \oplus \mathcal O_{\mathbb P^1}(1))$.
We will carry out calculations using the canonical coordinates $W_3= U \cup V$
with $U \simeq \mathbb C^3 \simeq V$ with coordinates $U = \{z,u_1,u_2\}$ and $V= \{\xi, v_1,v_2\}$
with change of coordinates on $U\cap V \simeq \mathbb C^* \times \mathbb C\times \mathbb C$ given by
\[ \bigl\{ \xi = z^{-1} \text{ ,\quad}
v_1 = z^3 u_1 \text{ ,\quad}
v_2 = z^{-1}u_2 \bigr\} \text{ ,} \]
so that $z = \xi^{-1}$, $u_1 = \xi^3 v_1$, and $u_2 = \xi^{-1} v_2$.
In these coordinates, the transition matrix for the tangent bundle of $W_3$ is the Jacobian matrix of the change of
coordinates, and taking $\Lambda^2$ we obtain the transition matrix for the second exterior power of the tangent bundle:
\[
\left[
\begin{matrix}
z^2 & -3zu_1 & zu_2 \\
0 & -z^{-3} & 0 \\
0 & 0 & -z
\end{matrix}
\right]
.\]
Let $\imath\colon U \hookrightarrow W_3$ denote the inclusion. We
actually demand that the coefficients of $q$ are functions on all of
$W_3$, i.e.\ that they should be in the image of $\imath^* \colon
R \mathrel{\mathop:}= H^0(W_3;\mathcal{O}_{W_3}) \to H^0(U;\mathcal{O}_{U})$.
(We will not distinguish between $R$ and its image over $U$:
we are only working in local coordinates on $U$, but with the understanding
that we are describing global objects on $W_3$.)
In local coordinates on $U$, $R$ consists of convergent power series in
\[ \bigl\{ 1, u_1,zu_1,z^2u_1, z^3u_1,u_1u_2, zu_1u_2, z^2u_1u_2 \bigr\} \text{ .} \]
Holomorphic Poisson structures on $W_3$ are parametrized by elements of
$M_3\mathrel{\mathop:}= H^0(W_3,\Lambda^2TW_3)$, which is infinite dimensional as a vector
space over $\mathbb C$. We will describe the structure of $M_3$ as
a module over global functions.
\begin{lemma}\label{W3gens} The space $M_3 = H^0(W_3,\Lambda^2TW_3)$
parametrizing all holomorphic bivector fields on $W_3$
has the following structure as a module over global holomorphic functions:
$$M_3= \langle e_1, \dots, e_{13} \rangle / \mathcal R
$$
with the set of relations $\mathcal R$ given by
$$\begin{array}{l}
u_1 e_2 - u_1u_2 e_1 \\
u_1 e_{10} -u_1 u_2 e_3 \\
u_1e_{13} - u_1u_2e_7 \\
\end{array}
\quad\quad \begin{array}{l}
zu_1e_{12} - u_1u_2 e_6 \\
zu_1e_{13} - u_1u_2e_8 \\
u_1e_{11} - zu_1 e_{10} \\
\end{array}
\quad \begin{array}{l}
u_1 e_4 - z u_1e_3 \\
u_1 e_5 - zu_1e_4 \\
u_1e_8 - zu_1e_7 \\
\end{array} \quad
\begin{array}{l}
u_1 e_6 - zu_1 e_5 - 3z^2u_1e_1 \\
u_1e_9 - zu_1e_8 + zu_1e_2 \\
u_1e_{12} - zu_1e_{11} - 3zu_1e_2 .\\
\end{array}$$
\end{lemma}
\begin{remark}
There is no natural way to simplify the presentation of $M_3$,
in fact, computer algebra calculations (for example in Macaulay2)
also give the same expression for the minimal presentation of $M_3$.
So, we really need all 13 generators and 13 relations to describe the space of Poisson structures
on $W_3$ as a module over global functions. As a complex vector space it is infinite dimensional.
\end{remark}
\begin{proof}[Proof of \Cref{W3gens}]
To find $H^0(W_3,\Lambda^2TW_3)$
we need global holomorphic sections so that we must find $a,b,c$ holomorphic on $U$ such that
$$
\left[
\begin{matrix}
z^2 & -3zu_1 & zu_2 \\
0 & -z^{-3} & 0 \\
0 & 0 & -z
\end{matrix}
\right]
\left[
\begin{matrix}
a \\
b \\
c
\end{matrix}
\right]
$$
is holomorphic on $V$.
We start with
$\displaystyle a= \sum_{l=0}^\infty\sum_{i=0}^\infty\sum_{s=0}^\infty a_{lis} z^lu_1^iu_2^s$
and similarly for $b$ and $c$.
We will give a presentation of $M_3 = H^0(W_3,\Lambda^2TW_3)$ as a module over global functions.
Here we will need to perform calculations up to at least neighborhood 2, unlike the case of $W_2$
where neighborhood 1 was enough.
Thus, to calculate the module structure here,
we need the expressions of sections on the second formal neighborhood, which
consist of linear combinations of the following 42 terms:
\begin{flalign*}
\left[\begin{matrix}
0 \\
0 \\
z^lu_1
\end{matrix}\right]_{0\leq l\leq 1};
\left\{ \left[\begin{matrix}
0 \\
z^l \\
0
\end{matrix}\right];
\left[\begin{matrix}
0 \\
z^lu_2 \\
0
\end{matrix}\right] \right\}_{0\leq l\leq 2};
\left\{ \left[\begin{matrix}
z^lu_1^2 \\
0 \\
0
\end{matrix}\right];
\left[\begin{matrix}
0 \\
0 \\
z^lu_1^2
\end{matrix}\right];
\left[\begin{matrix}
0 \\
z^lu_1u_2 \\
0
\end{matrix}\right] \right\}_{0\leq l\leq 4};
\left[\begin{matrix}
0 \\
z^{l}u_1 \\
0
\end{matrix}\right]_{0\leq l\leq 5};
\left[\begin{matrix}
0 \\
z^lu_1^2 \\
0
\end{matrix}\right]_{0\leq l\leq 8}; &&
\end{flalign*}
\begin{flalign*}
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right];
\left[\begin{matrix}
u_1u_2 \\
0 \\
0
\end{matrix}\right];
\left[\begin{matrix}
0 \\
0 \\
u_1u_2
\end{matrix}\right];
\left[\begin{matrix}
3z ^2u_1 \\
z^3 \\
0
\end{matrix}\right];
\left[\begin{matrix}
3zu_1u_2 \\
z^2u_2 \\
0
\end{matrix}\right];
\left[\begin{matrix}
3z^5u_1^2 \\
z^{6}u_1 \\
0
\end{matrix}\right];
\left[\begin{matrix}
3z^8u_1^3 \\
z^9u_1^2 \\
0
\end{matrix}\right];
\left[\begin{matrix}
3z^4u_1^2u_2 \\
z^5u_1u_2 \\
0
\end{matrix}\right];
\left[\begin{matrix}
-zu_1u_2 \\
0 \\
z^2u_1
\end{matrix}\right];
\left[\begin{matrix}
-u_1u_2^2 \\
0 \\
zu_1u_2
\end{matrix}\right] .&&
\end{flalign*}
\vspace{3mm}
Upon removing all vectors that can be obtained from others by multiplication by
a global function, we reduce the expression of a global section to:
$$
\left[\begin{matrix}
a \\
b \\
c
\end{matrix}\right] =
a_{010}
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right]
+a_{011}
\left[\begin{matrix}
u_1u_2 \\
0 \\
0
\end{matrix}\right]
+b_{000}
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right]
+b_{100}
\left[\begin{matrix}
0 \\
z \\
0
\end{matrix}\right]
+b_{200}
\left[\begin{matrix}
0 \\
z^2 \\
0
\end{matrix}\right]+
b_{300}
\left[\begin{matrix}
3z ^2u_1 \\
z^3 \\
0
\end{matrix}\right]
+b_{001}
\left[\begin{matrix}
0 \\
u_2 \\
0
\end{matrix}\right] +$$
$$
+b_{101}
\left[\begin{matrix}
0 \\
zu_2 \\
0
\end{matrix}\right]
+b_{201}
\left[\begin{matrix}
3zu_1u_2 \\
z^2u_2 \\
0
\end{matrix}\right]
+c_{010}
\left[\begin{matrix}
0 \\
0 \\
u_1
\end{matrix}\right]
+c_{110}
\left[\begin{matrix}
0 \\
0 \\
zu_1
\end{matrix}\right]
+c_{210}
\left[\begin{matrix}
-zu_1u_2 \\
0 \\
z^2u_1
\end{matrix}\right]
+c_{011}
\left[\begin{matrix}
0 \\
0 \\
u_1u_2
\end{matrix}\right].
$$
We now need the structure of $M_3 = H^0(W_3,\Lambda^2TW_3)$ as a module over global functions.
We first write the generators and the relations among them.
We establish the notation for the generators:
\[
e_1=
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right],
e_2=
\left[\begin{matrix}
u_1u_2 \\
0 \\
0
\end{matrix}\right],
e_3=
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right],
e_4=
\left[\begin{matrix}
0 \\
z \\
0
\end{matrix}\right],
e_5=
\left[\begin{matrix}
0 \\
z^2 \\
0
\end{matrix}\right] ,
e_6=
\left[\begin{matrix}
3z ^2u_1 \\
z^3 \\
0
\end{matrix}\right] ,
e_7=
\left[\begin{matrix}
0 \\
0 \\
u_1
\end{matrix}\right] ,
\]
\[
e_8=
\left[\begin{matrix}
0 \\
0 \\
zu_1
\end{matrix}\right],
e_9=
\left[\begin{matrix}
-zu_1u_2 \\
0 \\
z^2u_1
\end{matrix}\right],
e_{10}=
\left[\begin{matrix}
0 \\
u_2 \\
0
\end{matrix}\right],
e_{11}=
\left[\begin{matrix}
0 \\
zu_2 \\
0
\end{matrix}\right],
e_{12}=
\left[\begin{matrix}
3zu_1u_2 \\
z^2u_2 \\
0
\end{matrix}\right],
e_{13}=
\left[\begin{matrix}
0 \\
0 \\
u_1u_2
\end{matrix}\right].
\]
These then satisfy the equations:
$$\begin{array}{l}
e_2 - u_2 e_1=0 \\
e_{10} - u_2 e_3 =0\\
e_{13} - u_2e_7=0 \\
\end{array}
\quad\quad \begin{array}{l}
ze_{12} - u_2 e_6=0 \\
ze_{13} - u_2e_8=0 \\
e_{11} - z e_{10}=0 \\
\end{array}
\quad \begin{array}{l}
e_4 - z e_3 =0 \\
e_5 - ze_4 =0 \\
e_8 - ze_7=0 \\
\end{array} \quad
\begin{array}{l}
e_6 - z e_5 - 3z^2e_1=0 \\
e_9 - ze_8 + ze_2=0 \\
e_{12} - ze_{11} - 3ze_2=0 .\\
\end{array}$$
Since neither $z$ nor $u_2$ is a global function,
we multiply the equations by $u_1$ to obtain relations over
$\mathcal O(W_3)$, which yields the claimed module structure.
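The twelve identities above can also be checked mechanically; a minimal sympy sketch, with the generators written as component vectors on the $U$ chart:
\begin{verbatim}
# Verify the twelve identities among the generators e_1, ..., e_13 of
# H^0(W_3, Lambda^2 T W_3), written as component vectors on the U chart.
import sympy as sp

z, u1, u2 = sp.symbols('z u1 u2')
V = lambda a, b, c: sp.Matrix([a, b, c])
e = {1: V(u1, 0, 0), 2: V(u1*u2, 0, 0), 3: V(0, 1, 0), 4: V(0, z, 0),
     5: V(0, z**2, 0), 6: V(3*z**2*u1, z**3, 0), 7: V(0, 0, u1),
     8: V(0, 0, z*u1), 9: V(-z*u1*u2, 0, z**2*u1), 10: V(0, u2, 0),
     11: V(0, z*u2, 0), 12: V(3*z*u1*u2, z**2*u2, 0), 13: V(0, 0, u1*u2)}

relations = [
    e[2] - u2*e[1], e[10] - u2*e[3], e[13] - u2*e[7],
    z*e[12] - u2*e[6], z*e[13] - u2*e[8], e[11] - z*e[10],
    e[4] - z*e[3], e[5] - z*e[4], e[8] - z*e[7],
    e[6] - z*e[5] - 3*z**2*e[1], e[9] - z*e[8] + z*e[2],
    e[12] - z*e[11] - 3*z*e[2],
]
assert all(sp.expand(r) == sp.zeros(3, 1) for r in relations)
\end{verbatim}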
\end{proof}
We now proceed to investigate the question of isomorphism of Poisson structures.
\begin{lemma}\label{isos3}
There are isomorphisms
$e_3 \simeq e_6$,
$e_7 \simeq e_9$, and
$e_{10} \simeq e_{12}$.
\end{lemma}
\begin{proof}
For each isomorphism use the transition function of $\Lambda^2TW_3$ and then exchange the $U$ and $V$ charts
as in the proof of \Cref{iso1-5}.
\end{proof}
There are natural inclusions of the surfaces $\mathbb C^2$, $Z_3$, and $Z_{-1}$ into $W_3$:
\begin{notation}
We denote by $j_s $ for $s=0,1,2$ the inclusions of $\mathbb C^2$, $Z_3$, and $Z_{-1}$ into the threefold $W_3$.
Hence, in coordinates we have:
\begin{itemize}
\item $j_0\colon \mathbb C^2 \rightarrow W_3$ includes $\mathbb C^2$ as the fiber $z=0$,
taking $\frac{\partial}{\partial u} \wedge \frac{\partial}{\partial v} \mapsto \partial_1 \wedge \partial_2$
\item $j_1\colon Z_3 \rightarrow W_3$ includes $Z_3$ as $u_2=0$, taking $\frac{\partial}{\partial z} \wedge \frac{\partial}{\partial u} \mapsto \partial_0 \wedge \partial_1$
\item $j_2\colon Z_{-1} \rightarrow W_3$ includes $Z_{-1}$ as $u_1=0$, taking $\frac{\partial}{\partial z} \wedge \frac{\partial}{\partial u} \mapsto \partial_0 \wedge \partial_2$.
\end{itemize}
\end{notation}
\begin{theorem} \label{emb3}
The embeddings of Poisson surfaces $j_0(\mathbb C^2, \pi_0),$ $j_1(Z_3,\pi_i)$ with $i=0,1,2$ and $j_2(Z_{-1},\pi_i)$
with $i=0,1,2,3$
generate
all Poisson structures on $W_3$.
\end{theorem}
\begin{proof}
Let $j_1(Z_3)$ (resp. $j_2(Z_{-1})$) be the embedding of the surface
$Z_3$ (resp. $Z_{-1}$) into $W_3$ by $u_2=v_2=0$ (resp. $u_1=v_1=0$).
Then Poisson structures induced by the first embedding are:
$$(j_1)_*(1)_U= \left[\begin{matrix}0 \\ 0 \\ 1\end{matrix}\right]_U,\quad
(j_1)_*(-\xi^2 v)_V= \left[\begin{matrix}0 \\ 0 \\ -\xi^2 v\end{matrix}\right]_V, \quad \text{hence} \quad
e_7\vert_{j_1(Z_3)}=(j_1)_*\pi_0. $$
Analogously, $e_8\vert_{j_1(Z_3)}=(j_1)_*\pi_1$ and $e_9\vert_{j_1(Z_3)}=(j_1)_*\pi_2$.
The Poisson structures induced by the second embedding are
$$(j_2)_*(1)_U = \left[\begin{matrix}0 \\ 1 \\ 0\end{matrix}\right]_U, \quad
(j_2)_*(-\xi^3)_V = \left[\begin{matrix}0 \\ -\xi^3\\ 0\end{matrix}\right]_V, \quad \text{hence} \quad
e_3\vert_{j_2(Z_{-1})}=(j_2)_*\pi_0.$$
Analogously
$e_4\vert_{j_2(Z_{-1})}=(j_2)_*\pi_1$,
$e_5\vert_{j_2(Z_{-1})}=(j_2)_*\pi_2$.
Next, take $g \colon Z_{-1} \to \mathbb{C}$ defined by
$g\vert_U (z,u) = u$ and $g\vert_V (\xi,v) = \xi^{-1}v$,
then
$e_{10}\vert_{j_2(Z_{-1})} =( j_2)_* (g .(1,-\xi^3))$
and
$e_{11}\vert_{j_2(Z_{-1})} =( j_2)_*(g. (z,-\xi^2 ))$.
Finally,
$(j_0)_*(u\,\pi_0) = u_1 \partial_1 \wedge \partial_2 = e_1$ and
$(j_0)_*(uv\,\pi_0) = u_1u_2 \partial_1 \wedge \partial_2 = e_2$.
\end{proof}
\subsection{Poisson cohomology for \texorpdfstring{$\bm{W_3}$}{W\_3}}\label{cohw3}
\begin{lemma}\label{3classes}
The generators of Poisson structures on $W_3$ are divided into three groups
according to their Casimir functions:
$\{e_1, e_2\}, \quad \{e_3, e_4,e_5,e_{10},e_{11}\}, \quad \{e_7, e_8,e_{13}\}.$
\end{lemma}
\begin{proof}
By \Cref{W3gens} and \Cref{isos3}, all isomorphism classes of Poisson structures on $W_3$
are generated by
$e_1, e_2, e_3, e_4, e_5, e_7, e_8, e_{10}, e_{11}, e_{13}$, and
calculating the $0$-th Poisson cohomology, we obtain:
$$\mbox{Cas}(\pi) =\left\{\begin{array}{lll}
f \in \mathcal{O}(W_3) / f = f(z) & \text{if} & \pi = e_1, e_2,\\
f \in \mathcal{O}(W_3) / f = f(u_1)& \text{if} & \pi=
e_3, e_4,e_5,e_{10},e_{11},\\
f \in \mathcal{O}(W_3) / f = f(u_2) & \text{if} & \pi = e_7, e_8,e_{13}.
\end{array}
\right.$$
\end{proof}
\begin{lemma}\label{alphas}
The Poisson manifolds $(W_3,e_1)$ and $(W_3,e_2)$ are not isomorphic.
\end{lemma}
\begin{proof} We have that $e_1 = u_1 \partial u_1\wedge \partial u_2$ and $e_2= u_2 e_1$,
or written as section of $\Lambda^2TW_3$ we have
$$
e_1 =
\left[\begin{matrix}
u_1 \\
0 \\
0
\end{matrix}\right]_U
= \left[\begin{matrix}
\xi v_1 \\
0 \\
0
\end{matrix}\right]_V , \quad
e_2 =
\left[\begin{matrix}
u_1u_2 \\
0 \\
0
\end{matrix}\right]_U=
\left[\begin{matrix}
v_1v_2 \\
0 \\
0
\end{matrix}\right]_V
$$
Therefore the degeneracy loci have the following irreducible components:
$$D(e_1) = \{u_1=0\} \cup\{\xi=0\} \cup \{v_1=0\},$$
$$D(e_2) = \{u_1=0\} \cup\{u_2=0\} \cup \{v_1=0\} \cup\{v_2=0\}.$$
These have different numbers of irreducible components, implying that $(W_3,e_1)$ is not
isomorphic to $(W_3,e_2)$.
\end{proof}
\begin{lemma}\label{betas}
The Poisson manifolds $(W_3,e_3)$ , $(W_3,e_4)$, $(W_3,e_5)$, $(W_3,e_{10})$ and $(W_3,e_{11})$
are pairwise nonisomorphic.
\end{lemma}
\begin{proof}
We compute the degeneracy loci of the Poisson structures.
We have that
$$
e_3 =
\left[\begin{matrix}
0 \\
1 \\
0
\end{matrix}\right]_U
= \left[\begin{matrix}
-3\xi^2v_1 \\
-\xi^3\\
0
\end{matrix}\right]_V , \quad
e_4 =
\left[\begin{matrix}
0 \\
z \\
0
\end{matrix}\right]_U=
\left[\begin{matrix}
-3\xi v_1 \\
-\xi^2 \\
0
\end{matrix}\right]_V,
e_5 =
\left[\begin{matrix}
0 \\
z^2 \\
0
\end{matrix}\right]_U
=
\left[\begin{matrix}
-3v_1 \\
-\xi \\
0
\end{matrix}\right]_V
$$
$$
e_{10} =
\left[\begin{matrix}
0 \\
u_2 \\
0
\end{matrix}\right]_U
= \left[\begin{matrix}
-3\xi v_1v_2 \\
-\xi^2 v_2 \\
0
\end{matrix}\right]_V , \quad
e_{11} =
\left[\begin{matrix}
0 \\
zu_2 \\
0
\end{matrix}\right]_U=
\left[\begin{matrix}
-3v_1v_2 \\
-\xi v_2\\
0
\end{matrix}\right]_V.
$$
The degeneracy loci are:
\begin{align*}
D(e_3) & = \{ \xi=0 \}, \\
D(e_4) & = \{ z=0 \} \cup \{ \xi=0 \}, \\
D(e_5) & = \{ z=0 \} \cup \{ \xi=v_1=0 \}, \\
D(e_{10}) & = \{u_2=0\} \cup \{\xi=0\} \cup \{v_2=0\}, \\
D(e_{11}) & = \{z=0\} \cup \{u_2=0\} \cup \{v_2=0\} \cup \{\xi=v_1=0\},
\end{align*}
these are pairwise nonisomorphic, and thus so are
the corresponding Poisson structures.
\end{proof}
\begin{lemma}\label{gammas}
The Poisson manifolds $(W_3,e_7)$, $(W_3,e_8)$ and $(W_3,e_{13})$ are pairwise nonisomorphic.
\end{lemma}
\begin{proof}
We compute the degeneracy loci of the Poisson structures.
We have that
$$
e_7 =
\left[\begin{matrix}
0 \\
0 \\
u_1
\end{matrix}\right]_U
= \left[\begin{matrix}
\xi v_1 v_2\\
0\\
-\xi^2v_1
\end{matrix}\right]_V,
e_8 =
\left[\begin{matrix}
0 \\
0\\
zu_1
\end{matrix}\right]_U=
\left[\begin{matrix}
v_1v_2 \\
0\\
-\xi v_1
\end{matrix}\right]_V,
e_{13} =
\left[\begin{matrix}
0 \\
0 \\
u_1u_2
\end{matrix}\right]_U
=
\left[\begin{matrix}
v_1v_2^2 \\
0 \\
- \xi v_1v_2
\end{matrix}\right]_V.
$$
So the degeneracy loci
\begin{align*}
D(e_7) & = \{u_1 = 0\} \cup \{ \xi=0 \} \cup \{ v_1=0 \}, \\
D(e_8) & =\{z = 0\} \cup \{ u_1 = 0 \} \cup \{ \xi = v_2 = 0\} \cup \{v_1 = 0\}, \\
D(e_{13}) & = \{ u_1 = 0 \} \cup \{ u_2 = 0 \} \cup \{ v_1 = 0 \} \cup \{ v_2 = 0 \},
\end{align*}
are pairwise nonisomorphic, and therefore the corresponding Poisson
structures are distinct.
\end{proof}
This concludes the lemmata showing that the 10 isomorphism classes of Poisson structures
listed above are indeed pairwise distinct.
\subsection{Symplectic foliations on \texorpdfstring{$\bm{W_3}$}{W\_3}}
We have seen that all possible Poisson structures on $W_3$ can be obtained from
$e_1 ,
e_2 ,
e_3 ,
e_4 ,
e_5,
e_7,
e_8,
e_{10} ,
e_{11} ,
e_{13}$ given in \Cref{W3gens}. Their corresponding symplectic foliations have
$0$-dimensional leaves consisting of single points inside their degeneracy loci described in the proofs of Lemmata \ref{alphas}, \ref{betas} and \ref{gammas},
and outside these loci, all symplectic leaves
are $2$-dimensional and can be described as follows.
\begin{theorem}\label{W3foliation}
The symplectic foliations on $W_3$
have 0-dimensional leaves consisting of single points over each
of their corresponding degeneracy loci described in
the proofs of Lemmata \ref{alphas}, \ref{betas}, \ref{gammas}, and their generic leaves, which are 2-dimensional,
are as follows:
\begin{itemize}
\item Isomorphic to $\mathbb C^*\times \mathbb C$ for $e_1$.
\item Isomorphic to $\mathbb C^*\times \mathbb C^*$ for $e_2$.
\item Surfaces of constant $u_1$, for $e_3, e_4, e_5, e_{10}$ and $e_{11}$.
\item Surfaces of constant $u_2$, for $e_7$, $e_8$ and $e_{13}$.
\end{itemize}
\end{theorem}
\begin{proof}
Combine the Casimir functions given in the proof of \Cref{3classes} with \Cref{leaves}.
\end{proof}
\paragraph{\bf Acknowledgements}
We are grateful to Brent Pym for kindly explaining to us some of the fundamental notions of Poisson geometry.
E. Ballico is a member of GNSAGA of INdAM (Italy).
E. Gasparim thanks the Department of Mathematics of the University of Trento for the
support and hospitality.
B. Suzuki was supported by the ANID-FAPESP cooperation 2019/13204-0.
\section{Introduction}
Let $\GL_{n+1}$ denote the group of all real invertible
$(n+1) \times (n+1)$ matrices,
$\Up_{n+1} \subset \GL_{n+1}$ the subgroup of upper triangular matrices.
For a permutation $\sigma$ in the symmetric group $S_{n+1}$
of permutations of $\llbracket n+1 \rrbracket=\{1, 2, \ldots, n, n+1 \}$,
let $P_\sigma$ be the permutation matrix with entries
$(P_\sigma)_{i,i^{\sigma}} = 1$ and $0$ otherwise, and
define the Bruhat cell of $\sigma$ in $\GL_{n+1}$ as
\[ \operatorname{Bru}_\sigma^{\GL} = \{ U_0 P_\sigma U_1;\;U_0, U_1 \in \Up_{n+1} \}
\subset \GL_{n+1}.\]
Let $\Lo_{n+1}^{1}$ denote the unipotent group
of lower triangular matrices with diagonal entries equal to $1$.
We are interested in the sets
\[\operatorname{BL}_\sigma = \Lo_{n+1}^1\cap \operatorname{Bru}_\sigma^{\GL}.\]
The set $\operatorname{BL}_{\sigma}$ is homeomorphic to the intersection of two Bruhat cells,
namely of a top-dimensional cell with an arbitrary one.
The study of the intersections of pairs of Bruhat/Schubert cells
appears naturally in many areas beyond topology, such as
representation theory, singularity theory, Kazhdan-Lusztig theory~\cite{SSV4},~\cite{SSV3},~\cite{SV} and locally convex curves~\cite{Goulart-Saldanha0}.
See~\cite{Alves-Saldanha} for a longer list of references.
There have been many advances in this topic. For instance, the
number of connected components of the intersection of two big Bruhat cells in generic position is well known.
More precisely: let $\eta$ denote the Coxeter element in the symmetric group $S_{n+1}$.
The number of connected components of $\operatorname{BL}_\eta$ is $2, 6, 20, 52$ for $n = 1, 2, 3, 4$,
respectively; for $n \geq 5$, the number is
$3 \cdot 2^n$, see~\cite{SSV1} and~\cite{SSV2}.
This problem of counting connected components can be interpreted
as counting orbits for a certain finite group of symplectic transvections
acting on a finite-dimensional vector space:
the group and the vector space are uniquely determined by the pair of Bruhat cells~\cite{Se},~\cite{SSVZ}.
It would be interesting to clarify whether it is possible
to use the techniques developed in~\cite{SSV1},~\cite{SSV2},~\cite{Se} and~\cite{SSVZ}
to get information about
the low-dimensional homology groups for arbitrary intersections of pairs of Bruhat/Schubert cells.
In~\cite{Alves-Saldanha} we introduce a stratification of the sets $\operatorname{BL}_\sigma$, and
with the techniques developed in that work we were able to compute the homotopy type of several examples.
In particular, we counted the number of connected components of the sets $\operatorname{BL}_\sigma$
for all permutations $\sigma \in S_{n+1}$ and $n = 2, 3, 4$:
it turns out that all these connected components are contractible, but some long computations are required in the hardest cases.
The aim of this work is to give more examples and technical remarks that we believe
can illuminate the understanding of this topic.
In Section~\ref{section:codimtwo} we present the computations for the hardest example with $n=4$.
\bigskip
We denote a permutation $\sigma$ in several notations.
We use the complete notation,
a list of the values of $1^\sigma, 2^\sigma, \cdots, (n+1)^\sigma$ enclosed in square brackets; for example, $\sigma=[45132]$ satisfies
$1^\sigma = 4$, $2^\sigma = 5$, $3^\sigma = 1$, $4^\sigma = 3$ and $5^\sigma=2$.
Another useful notation is to write $\sigma$ as product of cycles: $\sigma = (143)(25) = [45132]$.
The most important notation for us is to write $\sigma$ as a reduced word.
Let us denote by $a_1, \cdots, a_n$ the standard Coxeter-Weyl generators of the symmetric
group $S_{n+1}$, where $a_i = (i, i + 1)$.
We denote by $\inv(\sigma)$ the number of inversions of $\sigma$.
A reduced word for $\sigma \in S_{n+1}$ is a product $\sigma = a_{i_1} \cdots a_{i_\ell}$, where
$\ell=\inv(\sigma)$ is the length of $\sigma$ in the generators $a_i$, $i \in \llbracket n \rrbracket=\{1, \ldots, n\}$.
For instance, we have $\sigma = [45132] = a_2a_1a_3a_2a_4a_3a_2 = a_2a_3a_1a_2a_4a_3a_2$;
notice that usually there is more than one reduced word.
A reduced word can be represented by a wiring diagram, as illustrated in Figure~\ref{fig:45132_a}.
We read from left to right, and each crossing corresponds to a generator.
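These conventions are easy to check by machine. A minimal Python sketch, with the hypothetical helpers \texttt{times\_gen} and \texttt{inversions} and permutations acting on the right:
\begin{verbatim}
# Check that a_2 a_1 a_3 a_2 a_4 a_3 a_2 is a word for sigma = [45132]
# and that its length equals inv(sigma).  Permutations act on the right,
# i^(sigma a_k) = (i^sigma)^(a_k), and perm[i-1] stores i^sigma.
def times_gen(perm, k):
    """Right-multiply by the transposition a_k = (k, k+1)."""
    swap = {k: k + 1, k + 1: k}
    return [swap.get(v, v) for v in perm]

def inversions(perm):
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

sigma = [1, 2, 3, 4, 5]
for k in [2, 1, 3, 2, 4, 3, 2]:
    sigma = times_gen(sigma, k)
print(sigma, inversions(sigma))  # [4, 5, 1, 3, 2] 7
\end{verbatim}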
\begin{figure}[ht!]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n22.center) -- (n32.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\end{scope}
\begin{scope}[shift={(5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\end{scope}
\end{tikzpicture}}
\caption{Examples of reduced words for $\sigma = [45132] \in S_5$. On the left $a_2a_1a_3a_2a_4a_3a_2$, and on the right $a_2a_3a_1a_2a_4a_3a_2$.}
\label{fig:45132_a}
\end{figure}
An ancestry for a reduced word is obtained by marking crossings in the wiring diagram of $\sigma$
with black and white squares and disks.
Alternatively, an ancestry is a sequence $\varepsilon \in \{\pm 1, \pm 2 \}^\ell$ (where $\ell = \inv(\sigma)$)
satisfying suitable conditions (see Sections 1, 4 and 7 in~\cite{Alves-Saldanha}).
\begin{figure}[ht!]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-9.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (3.5,4.5) (m2) {};
\node[bigroundnode] at (2.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigdiamondnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (3.5,4.5) (m2) {};
\node[bigroundnode] at (2.5,2.5) (m3) {};
\node[bigdiamondnodew] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(8.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigdiamondnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (3.5,4.5) (m2) {};
\node[bigdiamondnode] at (2.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigdiamondnodew] at (6.5,2.5) (m6) {};
\node[bigdiamondnodew] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{Examples of ancestries for the reduced word $a_2a_3a_1a_2a_4a_3a_2$ for $\sigma = [45132] \in S_5$:
ancestries $\varepsilon_0=(+1,-1,+1,-1,-1,-1,+1)$, $\varepsilon_1=(-2,-1,-1,+2,+1,-1,-1)$
and $\varepsilon_2=(-2,-1,-1,+2,+1,-1,-1)$ with $\dim(\varepsilon_i)=i$ for $i=0, 1, 2$.}
\label{fig:45132_b}
\end{figure}
For each ancestry $\varepsilon$ we define a stratum $\operatorname{BLS}_\varepsilon \subset \operatorname{BL}_\sigma$
that is a smooth contractible submanifold of codimension
\[d = \dim (\varepsilon) = |\{k \,;\, \varepsilon(k) =-2 \}|=|\{k \,;\, \varepsilon(k) =+2 \}|.\]
Further, $\operatorname{BLS}_\varepsilon \subset \operatorname{BL}_\sigma$ is diffeomorphic to ${\mathbb{R}}^{\ell-d}$.
Moreover, for distinct ancestries the respective strata are disjoint, and their union over all ancestries
is the whole of $\operatorname{BL}_\sigma$.
The sets $\operatorname{BL}_\sigma$ are homotopy equivalent to a finite
CW complex $\operatorname{BLC}_\sigma$, with one cell of codimension $d$ for each ancestry of dimension $d$.
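In particular, the codimension of a stratum can be read off directly from the sequence. A minimal Python sketch, assuming $\varepsilon$ is given as a tuple over $\{\pm 1, \pm 2\}$; the test values are $\varepsilon_0$ and $\varepsilon_1$ from Figure~\ref{fig:45132_b}:
\begin{verbatim}
# Dimension of an ancestry: the (equal) number of -2's and +2's.
def ancestry_dim(eps):
    assert eps.count(-2) == eps.count(+2)
    return eps.count(-2)

print(ancestry_dim((+1, -1, +1, -1, -1, -1, +1)))  # 0
print(ancestry_dim((-2, -1, -1, +2, +1, -1, -1)))  # 1
\end{verbatim}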
\section{Preliminaries}
\label{section:preliminaries}
In this section we briefly review some concepts and results from previous works,
especially~\cite{Alves-Saldanha} and \cite{Goulart-Saldanha0}.
The spin group $\Spin_{n+1}$ is the double universal cover of $\SO_{n+1}$ and comes with a natural projection $\Pi: \Spin_{n+1} \rightarrow \SO_{n+1}$.
Let $B_{n+1}$ be the Coxeter-Weyl group of signed permutation matrices and set $B_{n+1}^+ = B_{n+1} \cap \SO_{n+1}$.
Denote by $\Diag_{n+1}^+ \subset B_{n+1}^+$ the normal subgroup of diagonal matrices.
Defining $\tilde{B}_{n+1}^+ = \Pi^{-1}[B_{n+1}^+]$ and $\Quat_{n+1} = \Pi^{-1}[\Diag_{n+1}^+]$, we have the exact sequences
\[1 \rightarrow \Quat_{n+1} \rightarrow \tilde{B}_{n+1}^+ \rightarrow S_{n+1} \rightarrow 1, \quad 1 \rightarrow \{ \pm 1 \} \rightarrow \Quat_{n+1} \rightarrow \Diag_{n+1}^+ \rightarrow 1.\]
Let ${\mathfrak a}_i$ be the $(n+1) \times (n+1)$ real skew-symmetric matrix whose only nonzero entries
are $({\mathfrak a}_i)_{i+1,i} = 1$ and $({\mathfrak a}_i)_{i,i+1} = -1$.
Set $\alpha_i(\theta) = \exp(\theta {\mathfrak a}_i)$, defining the one-parameter subgroup $\alpha_i: {\mathbb{R}} \to \SO_{n+1}$.
More explicitly, $\alpha_i: {\mathbb{R}} \to \SO_{n+1}$ is given by
\[
\alpha_i(\theta) =
\begin{pmatrix} I & & & \\
& \cos(\theta) & -\sin(\theta) & \\
& \sin(\theta) & \cos(\theta) & \\
& & & I \end{pmatrix},\]
where the central block occupies rows and columns $i$ and $i+1$.
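The block form can be confirmed by exponentiating $\theta\,{\mathfrak a}_i$ directly; a minimal sympy sketch, where the sizes $n=3$ and $i=2$ are arbitrary test choices:
\begin{verbatim}
# alpha_i(theta) = exp(theta a_i) in SO_{n+1}: the exponential of the
# elementary skew-symmetric matrix is a rotation block in rows/columns
# i, i+1 (1-based) and the identity elsewhere.
import sympy as sp

theta = sp.symbols('theta', real=True)
n, i = 3, 2  # arbitrary test values
A = sp.zeros(n + 1)
A[i, i - 1], A[i - 1, i] = 1, -1  # (a_i)_{i+1,i} = 1, (a_i)_{i,i+1} = -1
print(sp.simplify((theta * A).exp()))  # rotation block at rows i, i+1
\end{verbatim}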
Since $\spin_{n+1}$ and $\so_{n+1}$ are isomorphic, we denote by the same symbol
the one-parameter subgroup $\alpha_i: {\mathbb{R}} \to \Spin_{n+1}$.
Set $\acute a_j = \alpha_j(\pi/2)$ and $\grave a_j = (\acute a_j)^{-1} \in \tilde{B}_{n+1}^+$;
the elements $\acute a_j$ generate the group $\tilde{B}_{n+1}^+$.
The elements $\hat a_j = (\acute a_j)^2 \in \Quat_{n+1}$ generate the group $\Quat_{n+1}$.
We interpret the spin group $\Spin_{n+1}$ as a subset of the Clifford algebra $\Cl_{n+1}^0$.
This is the subalgebra $\Cl_{n+1}^0 \subset \Cl_{n+1}$ of the even elements of the Clifford algebra $\Cl_{n+1}$.
As in~\cite{Goulart-Saldanha0} and~\cite{Alves-Saldanha} we refer to $\Cl_{n+1}^0$ as the Clifford algebra.
The Clifford algebra $\Cl_{n+1}^0$ is generated by the elements $\hat a_1, \ldots, \hat a_n$.
A basis of the Clifford algebra as a real vector space is
\[ 1, \; \hat a_1, \; \hat a_2, \; \hat a_1 \hat a_2, \; \hat a_3, \; \hat a_1 \hat a_3, \; \hat a_2 \hat a_3, \; \hat a_1 \hat a_2 \hat a_3, \; \hat a_4, \ldots ; \]
this basis is orthonormal for the usual inner product in $\Cl_{n+1}^0$.
For $z \in \Cl_{n+1}^0$ we denote by $\Re(z) = \langle 1,z \rangle \in {\mathbb{R}}$ its real part.
In this notation, we have $\alpha_i(\theta) = \cos\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{2}\right) \hat a_i$.
For $\sigma \in S_{n+1}$, let $\operatorname{Bru}_\sigma \subset \Spin_{n+1}$ be the set of $z \in \Spin_{n+1}$
such that there exist $U_0, U_1 \in \Up_{n+1}$ with $\Pi(z)=U_0 P_\sigma U_1$,
where $P_\sigma$ is the permutation matrix of $\sigma$.
Each connected component of $\operatorname{Bru}_\sigma$ contains exactly one element of $\tilde{B}_{n+1}^+$;
if $z \in \tilde{B}_{n+1}^+$ belongs to a connected component of $\operatorname{Bru}_\sigma$,
we denote that component by $\operatorname{Bru}_z \subset \Spin_{n+1}$.
The smooth map ${\mathbf Q}: \Lo_{n+1}^1 \rightarrow \Spin_{n+1}$ is defined by
${\mathbf Q}(I)=1$ and $L = \Pi({\mathbf Q}(L))R$ for some $R \in \Up_{n+1}^+$.
The (possibly empty) subsets $\operatorname{BL}_z \subset \Lo_{n+1}^1$ are defined by $\operatorname{BL}_z = {\mathbf Q}^{-1}[\operatorname{Bru}_z]$.
The sets $\operatorname{BL}_\sigma$ are partitioned into subsets $\operatorname{BL}_z$:
\begin{equation}
\label{equation:BLz}
\operatorname{BL}_\sigma = \bigsqcup_{z \in \acute\sigma \Quat_{n+1}} \operatorname{BL}_z.
\end{equation}
Here and in~\cite{Alves-Saldanha} we discuss the homotopy type of the sets $\operatorname{BL}_z$:
for $n \leq 4$, the connected components of $\operatorname{BL}_z$ are contractible.
To this end, we construct here and in~\cite{Alves-Saldanha} a finite stratification of $\operatorname{BL}_z$;
from it we also obtain a CW complex $\operatorname{BLC}_z$ with the same homotopy type as $\operatorname{BL}_z$.
Equation~\eqref{theo:N} below (based on Theorem 4 from~\cite{Alves-Saldanha})
gives a formula for the number of strata of codimension $0$.
More precisely, given $z \in \acute{\sigma} \Quat_{n+1}$ the equation gives a formula for the number $N(z)$
of ancestries $\varepsilon$ with $\dim(\varepsilon)=0$ such that $\operatorname{BLS}_\varepsilon \subseteq \operatorname{BL}_z$.
In particular, it allows us to identify the few special cases when $\operatorname{BL}_z$ is empty.
Following~\cite{Alves-Saldanha}, a permutation $\sigma \in S_{n+1}$ {\em blocks at the entry $k$},
$1 \leq k \leq n $,
if and only if for all $j$,
$j \le k$ implies $j^\sigma \le k$.
Given $\sigma$, let $\operatorname{Block}(\sigma)$
be the set of values of $j$ such that $\sigma$ blocks at $j$.
Observe that, given a subset $B \subseteq \llbracket n \rrbracket$, the set $H_B$ of all permutations $\sigma$
such that $\operatorname{Block}(\sigma) \supseteq B$
is the subgroup of $S_{n+1}$ generated by $a_i$, $i \notin B$.
Denote by $\tilde H_B \subseteq \tilde B_{n+1}^{+}$
the subgroup generated by $\acute a_i$,
$i \in \llbracket n \rrbracket \smallsetminus B$.
Given $\sigma \in S_{n+1}$ and $z \in \acute \sigma \Quat_{n+1} \subset \tilde B_{n+1}^{+}$,
set $\ell = \inv(\sigma)$, $B = \operatorname{Block}(\sigma) \subseteq \llbracket n \rrbracket$, and
$b = |B|$. In the above notation,
if $z \notin \tilde H_B$ then $N(z) = N(-z) = 0$;
otherwise
\begin{equation}
N(z) = 2^{\ell-n+b-1} + 2^{\frac{\ell}{2}-1}\,\Re(z).
\label{theo:N}
\end{equation}
This equation is a restatement of Theorem 4 of~\cite{Alves-Saldanha}.
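The right-hand side of Equation~\eqref{theo:N} is elementary to evaluate.
The following sketch (our own helper, with $\Re(z)$ supplied by hand) computes $N(z)$
from $\ell$, $n$, $b$ and $\Re(z)$, assuming $z \in \tilde H_B$:
\begin{verbatim}
import math

def N(ell, n, b, re_z):
    # Illustrative sketch of Equation (theo:N); only valid when z lies
    # in the subgroup H~_B, otherwise N(z) = 0 by the dichotomy above.
    return 2 ** (ell - n + b - 1) + 2 ** (ell / 2 - 1) * re_z

# Worked example below: sigma = [4312], ell = 5, b = 0, Re(z1) = sqrt(2)/4.
print(N(5, 3, 0, math.sqrt(2) / 4))  # -> 3.0 (up to rounding)
\end{verbatim}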
We denote by $\Diag_{n+1}$ the group of diagonal matrices with diagonal entries in $\{\pm 1\}$.
The quotient ${\cal E}_n = \Diag_{n+1}/\{\pm I\}$
is naturally isomorphic to $\{ \pm 1\}^{\llbracket n \rrbracket}$:
take $D \in \Diag_{n+1}$ to $E \in \{ \pm 1\}^{\llbracket n \rrbracket}$ with
$E_i=D_{i,i}D_{i+1,i+1}$.
The group $\Diag_{n+1}$ acts by conjugation on $\SO_{n+1}$: since $-I$ acts trivially,
this can be considered an action of ${\cal E}_n$.
This action can be lifted to $\Spin_{n+1}$ and then extended to $\Cl_{n+1}^0$.
Specifically, each $E \in {\cal E}_n$ defines automorphisms of $\Spin_{n+1}$ and $\Cl_{n+1}^0$ by
\[ (\hat a_i)^E = E_i \hat a_i, \quad (\alpha_i(\theta))^E = \alpha_i(E_i \theta). \]
For $z \in \Spin_{n+1}$ we denote by ${\cal E}_z$ the isotropy group of $z$, i.e.,
\[ {\cal E}_z = \{ E \in {\cal E}_n \;|\; z^{E} = z \}. \]
Recall that a set $X \subseteq \llbracket n+1 \rrbracket$ is $\sigma$-invariant
if and only if $X^\sigma = X$,
where $X^\sigma = \{x^\sigma \,;\, x \in X\}$.
This happens
if and only if $X$ is a disjoint union of cycles of $\sigma$.
Given $\sigma \in S_{n+1}$,
there exist $2^c$ $\sigma$-invariant sets $X \subseteq \llbracket n+1 \rrbracket$,
where $c = \nc(\sigma)$ is the number of cycles of
the permutation $\sigma$.
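For illustration, the number of cycles (and hence the count $2^c$ of $\sigma$-invariant subsets)
is easy to compute; a minimal sketch, with a helper of our own, follows:
\begin{verbatim}
def num_cycles(sigma):
    # Illustrative sketch: number of cycles of a permutation given in
    # one-line notation, e.g. [4, 5, 1, 3, 2] for sigma = [45132].
    seen, c = set(), 0
    for start in range(len(sigma)):
        if start not in seen:
            c += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = sigma[j] - 1   # follow the cycle (values are 1-based)
    return c

sigma = [4, 5, 1, 3, 2]   # one-line notation for [45132]
c = num_cycles(sigma)     # c = 2: the cycles (1 4 3) and (2 5)
print(2 ** c)             # 4 sigma-invariant subsets of {1, ..., 5}
\end{verbatim}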
\begin{remark}
\label{remark_orbits}
Consider $\sigma \in S_{n+1}$, $z_0 \in \acute{\sigma} \Quat_{n+1}$
and $Q_0 = \Pi(z_0) \in B_{n+1}^+$ (for $\Pi: \Spin_{n+1} \to \SO_{n+1}$).
The orbit ${\mathcal O}_{Q_0}$ of ${Q_0}$
has cardinality $2^{n-c+1}$.
Concerning the action of ${\cal E}_n$ on $ \acute{\sigma} \Quat_{n+1}$,
there are two possibilities for the size of the orbit ${\mathcal O}_{z_0}$.
If there exists $E \in {\cal E}_n$
with $z_0^{E} = -z_0$, we set $c_{\anti}(z_0) = 1$;
otherwise $c_{\anti}(z_0) = 0$.
If $c_{\anti}(z_0) = 1$ the orbit ${\mathcal O}_{z_0}$ is $\Pi^{-1}[{\mathcal O}_{Q_0}]$,
with cardinality $2^{n-c+2}$.
If $c_{\anti}(z_0) = 0$
the orbits ${\mathcal O}_{z_0}$ and ${\mathcal O}_{-z_0}$ are disjoint,
with cardinality $2^{n-c+1}$ and with union $\Pi^{-1}[{\mathcal O}_{Q_0}]$;
we then say the orbit splits.
\end{remark}
\section{Strata of codimension zero, one or two}
\label{section:codimone}
In this section we describe the strata of codimension $0$, $1$ or $2$.
These correspond to cells of dimension $0$, $1$ or $2$, for which we describe gluing instructions.
This allows us to compute
the connected components of $\operatorname{BL}_\sigma$ or $\operatorname{BL}_z$. \\
We denote by $\lo_{n+1}^1$ the Lie algebra of $\Lo_{n+1}^1$,
i.e., the set of strictly lower triangular matrices.
Let $\fl_j \in \lo_{n+1}^1$ be the matrix whose only nonzero entry is
$(\fl_j)_{j+1,j} = 1$, for $j \in \llbracket n \rrbracket$.
Denote by $\lambda_j(t)$ its one-parameter subgroup, i.e., $\lambda_j(t) = \exp(t \fl_j) \in \Lo_{n+1}^1$.
Given a reduced word $\sigma = a_{i_1} \cdots a_{i_\ell} \in S_{n+1}$ where
$\ell = \inv(\sigma)$,
consider the product
\begin{equation}
\label{equation:Lproduct}
L = \lambda_{i_1}(t_1) \cdots \lambda_{i_\ell}(t_\ell).
\end{equation}
It is well known that if $L \in \operatorname{BL}_{\sigma}$
can be written as in \eqref{equation:Lproduct} then the vector
$(t_1, \ldots, t_\ell)$ is unique~\cite{Berenstein-Fomin-Zelevinsky}.
Also, for almost all $L \in \operatorname{BL}_\sigma \subset \Lo_{n+1}^1$,
there exists a vector $(t_1, \ldots, t_\ell) \in ({\mathbb{R}} \smallsetminus \{0\})^\ell$
for which \eqref{equation:Lproduct} holds.
If $\varepsilon$ is an ancestry of dimension $0$, then, by definition,
\[\operatorname{BL}_{\varepsilon} = \{ \lambda_{i_1}(t_1) \cdots \lambda_{i_\ell}(t_\ell);\;
\operatorname{sign}(t_j) = \varepsilon(j) \}
\subseteq \operatorname{BL}_z,\]
where $z=(\acute a_{i_1})^{\operatorname{sign}(\varepsilon(1))} \cdots (\acute a_{i_\ell})^{\operatorname{sign}(\varepsilon(\ell))}$.
\begin{example}
\label{example:45132_definition}
We can take $n=4$ and $\sigma=[45132]$. Let us fix the reduced word $\sigma=a_2a_3a_1a_2a_4a_3a_2$.
If $\varepsilon$ is an ancestry of dimension $0$, then the matrices $L \in \operatorname{BL}_\varepsilon$ can be written as the following product
\begin{equation}
L = \lambda_2(t_1) \lambda_3(t_2) \lambda_1(t_3) \lambda_2(t_4) \lambda_4(t_5) \lambda_3(t_6) \lambda_2(t_7);
\end{equation}
where $\operatorname{sign}(t_j) = \varepsilon(j)$.
For instance
\[ L_0 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\ -3 & 1 & 0 & 0 & 0 \\
-3 & -3/2 & 1 & 0 & 0 \\ 0 & -7 & 3 & 1 & 0 \\ 0 & 4 & -2 & -2 & 1
\end{pmatrix} \in \operatorname{BL}_{\varepsilon_0} \subset \operatorname{BL}_{z_0}, \]
where $\varepsilon_0=(+1, +1, -1, -1, -1, +1, -1)$, so that $z_0=\acute a_2 \acute a_3 \grave a_1 \grave a_2 \grave a_4 \acute a_3 \grave a_2$.
\end{example}
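Products of the form \eqref{equation:Lproduct} are easy to experiment with numerically.
Since $\fl_j^2 = 0$, we have $\lambda_j(t) = I + t\,\fl_j$, which the following sketch
(our own helper, purely illustrative) uses to multiply the factors for a given reduced word:
\begin{verbatim}
import numpy as np

def lam(j, t, n):
    # Illustrative sketch: lambda_j(t) = exp(t*l_j) = I + t*l_j,
    # since l_j squares to zero.
    m = np.eye(n + 1)
    m[j, j - 1] = t   # entry (j+1, j) in the 1-based indexing of the text
    return m

def bl_product(word, ts, n):
    # L = lambda_{i_1}(t_1) ... lambda_{i_l}(t_l)
    L = np.eye(n + 1)
    for i, t in zip(word, ts):
        L = L @ lam(i, t, n)
    return L

# Reduced word a2 a3 a1 a2 a4 a3 a2 for sigma = [45132], signs as in eps_0:
word = [2, 3, 1, 2, 4, 3, 2]
ts = [+1, +1, -1, -1, -1, +1, -1]   # any t_j with sign(t_j) = eps_0(j)
print(bl_product(word, ts, 4))
\end{verbatim}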
Not all matrices in $\operatorname{BL}_\sigma$, however, can be written as a product of the lower triangular factors above.
In what follows we describe how to determine the ancestry to which a given matrix in $\operatorname{BL}_\sigma$ corresponds.
As usual, assume $\sigma \in S_{n+1}$ and a reduced word
$\sigma = a_{i_1}\cdots a_{i_\ell}$ to be fixed.
As we saw in~\cite{Alves-Saldanha}, a matrix $L \in \operatorname{BL}_\sigma$ can be identified with $\tilde{z}_\ell={\mathbf Q}(L) \in \grave \eta \operatorname{Bru}_{\acute \eta}$ and
with a sequence $(z_k)_{0 \le k \le \ell}$ of elements of $\Spin_{n+1}$
with $z_0 = 1$, $z_k = z_{k-1} \alpha_{i_k}(\theta_k)$,
$\theta_k \in (0,\pi)$,
$z_\ell = \tilde{z}_\ell q_\ell \in \operatorname{Bru}_{\acute\sigma} \cap (\grave\eta\operatorname{Bru}_{\eta})$, $q_\ell \in \Quat_{n+1}$.
The values of $\theta_k$ are smooth functions of $L \in \operatorname{BL}_\sigma$.
Define $\varrho_k \in \tilde{B}_{n+1}^+$ such that $z_k \in \operatorname{Bru}_{\varrho_k}$.
For all $k$ we have $\varrho_k=\varrho_{k-1} (\acute a_{i_k})^{\xi(k)}$, $\xi(k) \in \{0, 1, 2\}$.
The function $\xi: \{1, \ldots, \ell \} \to \{0, 1, 2\}$ is the corresponding ancestry.\\
We now recall the $\varepsilon$ notation for ancestries.
Let $\rho_k = \Pi(\varrho_k) \in S_{n+1}$, for the natural projection $\Pi: \tilde{B}_{n+1}^+ \to S_{n+1}$.
For each $k$, define $\tilde{z}_k \in \grave \eta \operatorname{Bru}_{\acute \rho_k}$ by $z_k=\tilde{z}_k q_k$, $q_k \in \Quat_{n+1}$.
Let $\tilde \theta_k \in (-\pi, \pi)$ be defined by $\tilde{z}_k=\tilde{z}_{k-1} \alpha_{i_k}(\tilde{\theta}_k)$.
The function $\varepsilon: \{1, \ldots, \ell \} \to \{\pm 1, \pm 2 \}$ is defined by $\operatorname{sign}(\varepsilon(k))=\operatorname{sign}(\tilde{\theta}_k)$ and
$| \varepsilon(k) | = 2$ if and only if $\rho_k \ne \rho_{k-1}$.
The functions $\xi$ and $\varepsilon$ are alternative descriptions of an ancestry.
Equations (7.1) and (7.2) in~\cite{Alves-Saldanha} show how to translate between the two notations. \\
Assume $L \in \operatorname{BL}_{\xi} \subset \operatorname{BL}_{\sigma}$,
where $\xi$ is an ancestry of positive dimension $d$.
We can construct a transversal section to $\operatorname{BL}_{\xi}$
by keeping fixed the values of $\theta_k$ if
either $\xi(k) \ne 1$ or $\varrho_k \ge \varrho_{k-1}$.
There are $d$ values of $k$ for which
$\xi(k) = 1$ and $\varrho_k < \varrho_{k-1}$:
for these values of $k$ we allow the coordinates $\theta_k$
to vary freely (and independently)
in a small neighborhood of their original values.
We must then determine the ancestries of the perturbed strata.
An understanding of a transversal section yields
a description of the boundary map.
\begin{lemma}
\label{lemma:codimone}
If $k_1, k_2$ satisfy
\begin{equation}
\label{equation:k1k2}
i_{k_1} = i_{k_2}, \qquad
\forall k, (k_1 < k < k_2) \to (i_k \ne i_{k_1}),
\end{equation}
then
any function $\xi: \llbracket \ell \rrbracket \to \{0,1,2\}$
with $\xi^{-1}[\{1\}] = \{k_1,k_2\}$ is an ancestry of dimension $1$.
Conversely,
if $\xi$ is an ancestry of dimension $d = 1$
then $\xi^{-1}[\{1\}] = \{k_1,k_2\}$ where
$k_1 < k_2$ satisfy the condition in Equation \eqref{equation:k1k2}.
Then there are precisely two ancestries $\tilde\xi$
of dimension $0$ with $\xi \preceq \tilde\xi$.
We can call them $\xi_0, \xi_2$ with $\xi_i(k_1) = i$.
For $k \notin \{k_1,k_2\}$ we have
$\xi_0(k) = \xi_2(k) = \xi(k)$.
The set $\operatorname{BL}_\xi \subset \operatorname{BL}_\sigma$
is a submanifold of codimension one
with $\operatorname{BL}_{\xi_0}$ on one side and $\operatorname{BL}_{\xi_2}$ on the other side.
In the CW complex, $\xi$ is represented by an edge from
$\xi_0$ to $\xi_2$.
\end{lemma}
\begin{proof}
The first two claims follow directly from the definition of ancestries.
Let $\xi$ be an ancestry of dimension $d = 1$,
with $k_1 < k_2$ as above.
We then have
\[ \rho_k = \begin{cases}
\eta a_{i_{k_1}},& k_1 \le k < k_2, \\
\eta,& \textrm{otherwise.}
\end{cases} \]
If $\tilde\xi \succ \xi$ we must have
$\tilde\rho_k = \eta$ for $k < k_1$ or $k \ge k_2$:
we thus also have $\tilde\varrho_k = \varrho_k$
and therefore $\tilde\xi(k) = \xi(k)$
for $k < k_1$ or $k > k_2$.
For $k_1 \le k < k_2$ we must have
$\tilde\rho_k \in \{\eta, \eta a_{i_{k_1}}\}$.
If $k_1 \le k-1 < k < k_2$ we have
either $\tilde\rho_k = \tilde\rho_{k-1}$ or $\tilde\rho_k = \tilde\rho_{k-1} a_{i_k}$:
the second case contradicts the previous facts.
We thus have $\tilde\rho_k = \eta$ for all $k$.
In particular, $\dim(\tilde\xi) = 0$.
If $w_0, w_1 \in \tilde B_{n+1}^{+}$,
$w_0 < w_1$ and $\Pi(w_0) = \eta a_{i_{k_1}}$
then either
$w_1 = w_0 \acute a_{i_{k_1}}$ or $w_1 = w_0 \grave a_{i_{k_1}}$.
Thus, if $\tilde\xi \succ \xi$ there exists
$\tilde\varepsilon: \{k_1, \ldots, k_2 - 1\} \to \{\pm 1\}$
such that, for all $k$, $k_1 \le k < k_2$ implies
$\tilde\varrho_k =
\varrho_k (\acute a_{i_{k_1}})^{\tilde\varepsilon(k)}$.
For $k_1 < k < k_2$, we have
\begin{align*}
\tilde\varrho_k
&=
\varrho_k (\acute a_{i_{k_1}})^{\tilde\varepsilon(k)}
=
\varrho_{k-1} (\acute a_{i_k})^{\xi(k)}
(\acute a_{i_{k_1}})^{\tilde\varepsilon(k)} \\
&=
\tilde\varrho_{k-1} (\acute a_{i_k})^{\tilde\xi(k)}
=
\varrho_{k-1} (\acute a_{i_{k_1}})^{\tilde\varepsilon(k-1)}
(\acute a_{i_k})^{\tilde\xi(k)}
\end{align*}
and therefore
$(\acute a_{i_k})^{\xi(k)}
(\acute a_{i_{k_1}})^{\tilde\varepsilon(k)}
=
(\acute a_{i_{k_1}})^{\tilde\varepsilon(k-1)}
(\acute a_{i_k})^{\tilde\xi(k)}$.
If $|i_k - i_{k_1}| = 1$ and $\xi(k) = 2$
this implies $\tilde\xi(k) = \xi(k)$ and
$\tilde\varepsilon(k) = - \tilde\varepsilon(k-1)$.
Otherwise,
this implies $\tilde\xi(k) = \xi(k)$ and
$\tilde\varepsilon(k) = \tilde\varepsilon(k-1)$.
In either case, this implies $\tilde\xi(k) = \xi(k)$
for all $k \notin \{k_1,k_2\}$, as desired.
Furthermore, a choice of $\tilde\xi(k_1)$
uniquely determines $\tilde\xi(k)$ for $k_1 < k < k_2$.
Similarly, we have
\[ \tilde\varrho_{k_2}
=
\varrho_{k_2}
=
\varrho_{{k_2}-1} \acute a_{i_{k_1}}
=
\tilde\varrho_{{k_2}-1} (\acute a_{i_{k_2}})^{\tilde\xi({k_2})}
=
\varrho_{{k_2}-1} (\acute a_{i_{k_1}})^{\tilde\varepsilon({k_2}-1)}
(\acute a_{i_{k_1}})^{\tilde\xi({k_2})}
\]
and therefore
$\tilde\xi(k_2) = 1 - \tilde\varepsilon(k_2-1)$,
completing the proof that there are exactly two ancestries
$\tilde\xi$ with $\tilde\xi \succ \xi$.
The other claims follow by construction.
\end{proof}
An $\varepsilon$-ancestry of dimension $0$ can be represented
over a diagram for $\sigma$ by indicating a sign at each intersection.
The edges are then constructed as follows.
A bounded connected component of the complement
of the diagram has vertices
$k_1$ and $k_2$ on row $i_{k_1}$
plus all vertices $k$ with $k_1 < k < k_2$
and $|i_k - i_{k_1}| = 1$.
If $k_1$ and $k_2$ have opposite signs
we can click on that connected component,
with the effect of changing all signs on its boundary.
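In this language a click is just a sign flip along the boundary of the chosen region;
a minimal sketch (with the region's vertex set supplied by hand) reads:
\begin{verbatim}
def click(eps, region):
    # Illustrative sketch. eps: dict k -> +1/-1, an ancestry of
    # dimension 0; region: the vertices on the boundary of a bounded
    # connected component, with extremes k1 = min and k2 = max.
    k1, k2 = min(region), max(region)
    assert eps[k1] != eps[k2], "a click needs opposite signs at k1, k2"
    return {k: (-s if k in region else s) for k, s in eps.items()}
\end{verbatim}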
\begin{example}
\label{example:45132-a1}
Figure~\ref{fig:45132x} shows an example of the construction above.
We take $n = 4$ and $\sigma = [45132] = a_2a_3a_1a_2a_4a_3a_2$;
Figure~\ref{fig:45132} shows this reduced word as a diagram.
\begin{figure}[ht!]
\centering
\resizebox{0.2\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\end{scope}
\end{tikzpicture}}
\caption{The permutation $\sigma \in S_5$.}
\label{fig:45132}
\end{figure}
As an example, take
$z_0 = \hat a_1 \acute \sigma$, where
\[ \acute\sigma =
\frac{-1 + \hat a_1\hat a_2
+ \hat a_3 - \hat a_1\hat a_2\hat a_3
+ \hat a_1\hat a_4 + \hat a_2\hat a_4
-\hat a_1 \hat a_3 \hat a_4 - \hat a_2\hat a_3\hat a_4}{2\sqrt{2}}. \]
\begin{figure}[h!]
\centering
\resizebox{0.50\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-8.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\draw[ultra thick] (8.5,3) -- (10.5,3);
\draw[ultra thick] (18.5,3) -- (21.5,3);
\draw[ultra thick] (14.5,1) -- (14.5,-5);
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (3.5,4.5) (m2) {};
\node[bigroundnodew] at (2.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(1.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (3.5,4.5) (m2) {};
\node[bigroundnode] at (2.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(1.5,-7)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (3.5,4.5) (m2) {};
\node[bigroundnodew] at (2.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(11.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (3.5,4.5) (m2) {};
\node[bigroundnode] at (2.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{The stratification of $\operatorname{BL}_{z_0}$.}
\label{fig:45132x}
\end{figure}
\begin{comment}
\begin{figure}[h!]
\centering
\resizebox{0.50\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-8.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\draw[ultra thick] (8.5,3) -- (10.5,3);
\draw[ultra thick] (18.5,3) -- (21.5,3);
\draw[ultra thick] (14.5,1) -- (14.5,-5);
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(1.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(1.5,-7)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(11.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{The stratification of $\operatorname{BL}_{z_0}$.}
\label{fig:45132x}
\end{figure}
\end{comment}
\newpage
A case-by-case verification shows that $\operatorname{BL}_{z_0}$
has $4$ strata of dimension $0$,
$3$ strata of dimension $1$ and
no strata of dimension higher than $1$.
It follows that $\operatorname{BL}_{z_0}$ is homotopically equivalent
to the graph in Figure~\ref{fig:45132x}
and therefore contractible.
In the figure, black indicates $\varepsilon(k) = -1$
and white indicates $\varepsilon(k) = +1$.
\end{example}
\begin{example}
\label{section:n3}
Take $n=3$ and $\sigma = [4312]$.
Let us fix the reduced word $a_1a_2a_3a_1a_2$,
shown in Figure~\ref{fig:4312}.
\begin{figure}[ht!]
\centering
\resizebox{0.2\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\end{scope}
\end{tikzpicture}}
\caption{The permutation $\sigma \in S_4$.}
\label{fig:4312}
\end{figure}
Write $L \in \Lo_4^1$ as
\[ L = \begin{pmatrix}
1 & & & \\ x & 1 & & \\
u & y & 1 & \\ w & v & z & 1
\end{pmatrix}, \, u, v, w, x, y, z \in \mathbb{R}. \]
Applying the definition, the set $\operatorname{BL}_\sigma$ is
\[ \operatorname{BL}_\sigma = \bigg\{ L \, | \, w=0, \, u \neq 0, v \neq 0, xyz-xv-uz \neq 0 \bigg\} \subset \Lo^1_4.\]
A computation shows that $\acute{\sigma}=\frac{\sqrt{2}}{4}(-1+ \hat{a}_1 +\hat{a}_2 + \hat{a}_1 \hat{a}_2 + \hat{a}_3 + \hat{a}_1\hat{a}_3+ \hat{a}_2\hat{a}_3-\hat{a}_1\hat{a}_2\hat{a}_3)$; notice that $\left|-\frac{\sqrt{2}}{4}\right|=2^{-(n+1-c)/2}$.
We have $\ell = 5$, $b = |\operatorname{Block}(\sigma)| = 0$ and $c = \nc(\sigma) = 1$;
for every $z \in \acute{\sigma} \Quat_{4}$, $\Re(z)$
equals $-\frac{\sqrt{2}}{4}$ or $\frac{\sqrt{2}}{4}$.
In particular $\Re(z) \neq 0$; since the action of ${\cal E}_n$ preserves $\Re$, no $E$ takes $z$ to $-z$, so $c_{\anti}(z) = 0$ and, by Remark~\ref{remark_orbits}, for all $z \in \acute{\sigma} \Quat_{4}$ the orbit ${\mathcal O}_{z}$ has cardinality $2^{n-c+1}=8$.
Now we describe $\operatorname{BL}_z$, for $z \in \acute{\sigma} \Quat_{4}$.
The matrices $L \in \operatorname{BLS}_\varepsilon$,
where $\varepsilon$ is an ancestry of dimension $0$, can be written as the following product
\[ L = \lambda_1(t_1) \lambda_2(t_2) \lambda_3(t_3) \lambda_1(t_4)\lambda_2(t_5) =
\begin{pmatrix}
1 & & & \\ t_1+t_4 & 1 & & \\
t_2 t_4 & t_2 + t_5 & 1 & \\ 0 & t_3t_5 & t_3 & 1
\end{pmatrix} \]
so that $u = t_2 t_4$, $v = t_3 t_5$ and $xyz-xv-uz = t_1 t_2 t_3$.
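These identities can also be checked symbolically; a short sympy sketch (illustrative only) confirms them:
\begin{verbatim}
import sympy as sp

t1, t2, t3, t4, t5 = sp.symbols('t1:6')

def lam(j, s):
    # lambda_j(s) = I + s*E_{j+1,j} in Lo_4^1 (l_j squares to zero)
    m = sp.eye(4)
    m[j, j - 1] = s
    return m

L = lam(1, t1) * lam(2, t2) * lam(3, t3) * lam(1, t4) * lam(2, t5)
x, y, z = L[1, 0], L[2, 1], L[3, 2]
u, v, w = L[2, 0], L[3, 1], L[3, 0]
assert (u, v, w) == (t2*t4, t3*t5, 0)
print(sp.expand(x*y*z - x*v - u*z))   # prints t1*t2*t3
\end{verbatim}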
Set $z_1=\acute{a}_1 \acute{a}_2 \grave{a}_3 \acute{a}_1 \grave{a}_2=\frac{\sqrt{2}}{4}(1+ \hat{a}_1 +\hat{a}_2 - \hat{a}_1 \hat{a}_2 + \hat{a}_3 - \hat{a}_1\hat{a}_3- \hat{a}_2\hat{a}_3-\hat{a}_1\hat{a}_2\hat{a}_3)$.
From Equation~\eqref{theo:N}, $N(z_1)=2^{5-3+0-1}+2^{5/2-1}\left(\tfrac{\sqrt{2}}{4}\right)=2+1=3$;
therefore the set $\operatorname{BL}_{z_1}$ has $3$ ancestries of dimension $0$.
Moreover,
$\operatorname{BL}_{z_1}$ has
$2$ ancestries of dimension $1$ and
no ancestries of dimension higher than $1$.
Explicitly, the set $\operatorname{BL}_{z_1}$ is given by
\[ \operatorname{BL}_{z_1}=\bigg\{ L \, | \, w=0, \, u > 0, v > 0, xyz-xv-uz > 0 \bigg\}. \]
Therefore the set $\operatorname{BL}_{z_1}$ contains three open strata
\[ \operatorname{BL}_{(-1,-1,+1,-1,+1)} = \{ L \, | \, z > 0, \, yz - v < 0 \}, \]
\[ \operatorname{BL}_{(-1,+1,+1,-1,-1)} = \{ L \, | \,z < 0, \, yz - v < 0 \}, \]
\[ \operatorname{BL}_{(+1,-1,-1,-1,-1)} = \{ L \, | \, z < 0, \, yz - v > 0 \}, \]
and two strata of dimension $1$
\[ \operatorname{BL}_{(-1,-2,+1,-1,+2)} = \{ L \, | \, z = 0, \, yz - v < 0 \}, \]
\[ \operatorname{BL}_{(-2,+1,-1,+2,-1)} = \{ L \, | \, z < 0, \, yz - v = 0 \}. \]
Finally, the set $\operatorname{BL}_{z_1}$ is homotopically equivalent to the CW complex in Figure~\ref{fig:4312xx}.
It follows that $\operatorname{BL}_{z_1}$ is contractible.
\begin{figure}[ht!]
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-19.5,2.5) -- (6.5,2.5);
\begin{scope}[shift={(-24.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,4.5) -- (6.5,4.5) -- (6.5,0.5) -- cycle;
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,1.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,2.5) (m5) {};
\end{scope}
\begin{scope}[shift={(-17.5,3)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,4.5) -- (6.5,4.5) -- (6.5,0.5) -- cycle;
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigdiamondnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,1.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigdiamondnodew] at (5.5,2.5) (m5) {};
\end{scope}
\begin{scope}[shift={(-10.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,4.5) -- (6.5,4.5) -- (6.5,0.5) -- cycle;
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,1.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,2.5) (m5) {};
\end{scope}
\begin{scope}[shift={(-3.5,3)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,4.5) -- (6.5,4.5) -- (6.5,0.5) -- cycle;
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\node[bigdiamondnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,1.5) (m3) {};
\node[bigdiamondnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,2.5) (m5) {};
\end{scope}
\begin{scope}[shift={(3.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,4.5) -- (6.5,4.5) -- (6.5,0.5) -- cycle;
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node[roundnode] at (6,4) (n64) {};
\node[roundnode] at (6,3) (n63) {};
\node[roundnode] at (6,2) (n62) {};
\node[roundnode] at (6,1) (n61) {};
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n31.center);
\draw[very thick] (n24.center) -- (n44.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n41.center);
\draw[very thick] (n31.center) -- (n42.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n41.center) -- (n61.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n62.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,1.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,2.5) (m5) {};
\end{scope}
\end{tikzpicture}}
\caption{A CW complex $\operatorname{BLC}_{z_1}$ which is homotopy equivalent to $\operatorname{BL}_{z_1}$.}
\label{fig:4312xx}
\end{figure}
The construction for $\operatorname{BLC}_{-z_1}$ is completely similar.
Therefore $\operatorname{BL}_\sigma$ has $16$ connected components, all contractible.
\end{example}
\bigskip
Ancestries and strata of dimension $2$ also admit
a reasonably simple description.
Understanding them allows us to compute
the fundamental group of each connected component.
For small values of $n$, when there are no ancestries (or strata)
of dimension $3$ or higher, we obtain a full description
of the spaces, in many cases allowing us to deduce
that their connected components are contractible.
Let $\varepsilon_0$ be a preancestry of dimension $2$,
and let $k_1 < k_2 < k_3 < k_4$ be the positions with $|\varepsilon_0(k_i)|=2$.
We prefer to break the discussion into two cases, which we call type I and type II.
In both cases we have $\varepsilon_0(k_1)=-2$ and $\varepsilon_0(k_4)=+2$.
A preancestry $\varepsilon_0$ of dimension $2$ is said to be of type I
when one of the following situations occurs:
either $\varepsilon_0(k_2)=+2$, in which case $\varepsilon_0(k_3)=-2$,
$i_{k_1} = i_{k_2}$ and $i_{k_3} = i_{k_4}$;
or $\varepsilon_0(k_2)=-2$ and $|i_{k_1} - i_{k_2}| > 1$,
in which case the preancestry again has two pairs of consecutive intersections on two rows.
In Figure~\ref{fig:types}, the first and second diagrams are examples of preancestries of dimension $2$ of type I.
A preancestry $\varepsilon_0$ of dimension $2$ is said to be of type II if $\varepsilon_0(k_2)=-2$ and
$|i_{k_1} - i_{k_2}| = 1$. In this case we have $i_{k_1} = i_{k_4}$ and
$i_{k_2} = i_{k_3}$; see the third diagram of Figure~\ref{fig:types}.
Every preancestry of dimension $2$ is therefore either of type I or of type II;
an ancestry of dimension $2$ is said to be of the type of its underlying preancestry.
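Although we make no formal use of it, the case analysis above is easily mechanized.
The following Python sketch is purely illustrative; the encoding of a preancestry
by the tuple of values $\varepsilon_0(k_j)$ together with the rows $i_{k_j}$ is our
assumption here, not notation from the text.
\begin{verbatim}
def preancestry_type(values, rows):
    # values: (eps0(k1), eps0(k2), eps0(k3), eps0(k4)),
    #         with eps0(k1) = -2 and eps0(k4) = +2.
    # rows:   (i_{k1}, i_{k2}, i_{k3}, i_{k4}).
    assert values[0] == -2 and values[3] == +2
    if values[1] == +2:
        # Then eps0(k3) = -2, i_{k1} = i_{k2}, i_{k3} = i_{k4}: type I.
        return "I"
    # Here eps0(k2) = -2; the type depends on |i_{k1} - i_{k2}|.
    return "I" if abs(rows[0] - rows[1]) > 1 else "II"

# Third diagram of Figure "types": the two rows differ by 1, so type II.
assert preancestry_type((-2, -2, +2, +2), (1, 2, 2, 1)) == "II"
\end{verbatim}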
\begin{figure}[ht!]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-9.5,0)}]
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (7,4) (n74) {};
\node[roundnode] at (7,3) (n73) {};
\node[roundnode] at (7,2) (n72) {};
\node[roundnode] at (7,1) (n71) {};
\draw[very thick] (n14) -- (n24.center);
\draw[very thick] (n13) -- (n22.center);
\draw[very thick] (n12) -- (n23.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n22.center) -- (n31.center);
\draw[very thick] (n21.center) -- (n32.center);
\draw[very thick] (n34.center) -- (n64.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n31.center) -- (n41.center);
\draw[very thick] (n43.center) -- (n53.center);
\draw[very thick] (n42.center) -- (n51.center);
\draw[very thick] (n41.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n53.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n74.center);
\draw[very thick] (n62.center) -- (n71.center);
\draw[very thick] (n61.center) -- (n72.center);
\node[bigdiamondnode] at (1.5,2.5) (m1) {};
\node[bigdiamondnodew] at (3.5,2.5) (m3) {};
\node[bigdiamondnode] at (4.5,1.5) (m4) {};
\node[bigdiamondnodew] at (6.5,1.5) (m6) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (7,4) (n74) {};
\node[roundnode] at (7,3) (n73) {};
\node[roundnode] at (7,2) (n72) {};
\node[roundnode] at (7,1) (n71) {};
\draw[very thick] (n14) -- (n24.center);
\draw[very thick] (n13) -- (n22.center);
\draw[very thick] (n12) -- (n23.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n22.center) -- (n31.center);
\draw[very thick] (n21.center) -- (n32.center);
\draw[very thick] (n34.center) -- (n64.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n31.center) -- (n41.center);
\draw[very thick] (n43.center) -- (n53.center);
\draw[very thick] (n42.center) -- (n51.center);
\draw[very thick] (n41.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n53.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n74.center);
\draw[very thick] (n62.center) -- (n71.center);
\draw[very thick] (n61.center) -- (n72.center);
\node[bigdiamondnode] at (2.5,1.5) (m2) {};
\node[bigdiamondnode] at (2.5,3.5) (m3) {};
\node[bigdiamondnodew] at (4.5,1.5) (m4) {};
\node[bigdiamondnodew] at (6.5,3.5) (m6) {};
\end{scope}
\begin{scope}[shift={(8.5,0)}]
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node[roundnode] at (7,4) (n74) {};
\node[roundnode] at (7,3) (n73) {};
\node[roundnode] at (7,2) (n72) {};
\node[roundnode] at (7,1) (n71) {};
\draw[very thick] (n14) -- (n24.center);
\draw[very thick] (n13) -- (n22.center);
\draw[very thick] (n12) -- (n23.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n22.center) -- (n31.center);
\draw[very thick] (n21.center) -- (n32.center);
\draw[very thick] (n34.center) -- (n64.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n31.center) -- (n41.center);
\draw[very thick] (n43.center) -- (n53.center);
\draw[very thick] (n42.center) -- (n51.center);
\draw[very thick] (n41.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n64.center);
\draw[very thick] (n53.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n74.center);
\draw[very thick] (n62.center) -- (n71.center);
\draw[very thick] (n61.center) -- (n72.center);
\node[bigdiamondnode] at (2.5,1.5) (m1) {};
\node[bigdiamondnode] at (3.5,2.5) (m3) {};
\node[bigdiamondnodew] at (5.5,2.5) (m4) {};
\node[bigdiamondnodew] at (6.5,1.5) (m6) {};
\end{scope}
\end{tikzpicture}}
\caption{Three preancestries of dimension $2$: the first two (left and center) are of type I, the third (right) is of type II.}
\label{fig:types}
\end{figure}
\begin{example}
\label{example:45132-b}
Set $\sigma = a_2 a_3 a_1 a_2 a_4 a_3 a_2 = [4 5 1 3 2]$.
In cycle notation, $\sigma = (143)(25)$;
therefore $n = 4$, $\ell = 7$, $c = 2$ and $b = 0$.
It follows from Equation~\ref{theo:N} that for $z \in \acute\sigma \Quat_5$,
we have $N(z) = 4 + 4\sqrt{2} \Re(z)$.
In addition,
\[ \acute\sigma =
\frac{-1 + \hat a_1\hat a_2
+ \hat a_3 - \hat a_1\hat a_2\hat a_3
+ \hat a_1\hat a_4 + \hat a_2\hat a_4
-\hat a_1 \hat a_3 \hat a_4 - \hat a_2\hat a_3\hat a_4}{2\sqrt{2}}, \]
therefore $z \in \acute\sigma \Quat_5$ implies
$\Re(z) \in \{0, \pm \sqrt{2}/4\}$.
Next we choose the representatives of interest in each orbit.
If $z \in \Pi^{-1}[\{ \sigma \}]$ and $\Re(z) \neq 0$ then $c_{\anti}(z)=0$;
if $\Re(z) = 0$ then $c_{\anti}(z) = 1$.
The set $\acute\sigma \Quat_5$ therefore has $3$ orbits under ${\cal E}_4$,
determined by the real part,
of sizes $8$, $16$ and $8$:
\begin{gather*}
{\mathcal O}_{\acute\sigma}, \quad \Re(z) = -\frac{\sqrt{2}}{4}, \quad
N(z) =2, \quad N_{\thin}(z) = 2, \\
{\mathcal O}_{\hat a_1\acute\sigma}, \quad \Re(z) = 0, \quad
N(z) = 4, \quad N_{\thin}(z) = 0, \\
{\mathcal O}_{-\acute\sigma}, \quad \Re(z) = \frac{\sqrt{2}}{4}, \quad
N(z) = 6, \quad N_{\thin}(z) = 0.
\end{gather*}
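Indeed, substituting $\Re(z) \in \{-\sqrt{2}/4,\, 0,\, \sqrt{2}/4\}$ into
$N(z) = 4 + 4\sqrt{2}\,\Re(z)$ yields $N(z) = 2$, $4$ and $6$, respectively,
in agreement with the list above.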
If $\Re(z)=-\frac{\sqrt{2}}{4}$, the set $\operatorname{BL}_z$ has two thin connected components
(and no thick one).
Moreover, the stratification of $\operatorname{BL}_{\hat a_1 \acute \sigma}$ is discussed in Example~\ref{example:45132-a1};
we recall that it is connected and contractible.
Now we detail the stratification of $\operatorname{BL}_{-\acute{\sigma}}$.
We start with the graphical representation in Figure~\ref{fig:45132xx}.
\begin{figure}[ht]
\centering
\resizebox{0.4\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-12.5,3) -- (6.5,3);
\draw[ultra thick] (-12.5,-3) -- (6.5,-3);
\draw[ultra thick] (-16,4) -- (-16,-3);
\draw[ultra thick] (4,4) -- (4,-3);
\begin{scope}[shift={(-20.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-10.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-20.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-10.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-0.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n35.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n45.center) -- (n85.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{The stratification of $\operatorname{BL}_{-\acute{\sigma}}$.}
\label{fig:45132xx}
\end{figure}
In $\operatorname{BL}_{- \acute{\sigma}}$ there are $6$ strata of dimension $0$,
$6$ strata of dimension $1$ and
exactly one stratum of dimension $2$,
with ancestry $\varepsilon = (-2,-2,-1,-1,+1,+2,+2)$
and corresponding $\xi$-ancestry $\xi = (1,1,0,2,0,1,1)$.
The ancestry $\varepsilon$ is of type II.
To obtain the stratification above we perform the computations in the orthogonal group.
In Section~\ref{section:preliminaries} we defined the one-parameter subgroups
$\alpha_i: {\mathbb{R}} \to \SO_{n+1}$, given by
\[
\alpha_i(\theta) =
\begin{pmatrix} I & & & \\
& \cos(\theta) & -\sin(\theta) & \\
& \sin(\theta) & \cos(\theta) & \\
& & & I \end{pmatrix},\]
where the central block occupies rows and columns
$i$ and $i+1$.
To make the computations more algebraic, set $t = \tan\left(\frac{\theta}{2}\right)$;
then
\[
\zeta_i(t)=\alpha_i(2\arctan(t))=
\begin{pmatrix} I & & & \\
& \frac{1-t^2}{1+t^2} & -\frac{2t}{1+t^2} & \\
& \frac{2t}{1+t^2} & \frac{1-t^2}{1+t^2} & \\
& & & I \end{pmatrix}.\]
In order to study a transversal section to $\operatorname{BL}_{\varepsilon}$,
we take
\[ z_{7} = \zeta_2(-1+x_1) \zeta_3(-1+x_2) \zeta_1(-1/2) \zeta_2(-1/2) \zeta_4(1/2) \zeta_3(1/2) \zeta_2(1/2).\]
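Since the entries of $\zeta_i(t)$ are rational in $t$, such products are easy to
verify and assemble symbolically. The following SymPy sketch is illustrative only;
the helper \texttt{zeta} and its $1$-based indexing are our assumptions. It builds
$\zeta_i(t)$ for $n=4$, checks that it lies in $\SO_5$, and forms the product $z_7$
displayed above.
\begin{verbatim}
import sympy as sp

def zeta(i, t, n=4):
    # zeta_i(t) = alpha_i(2 arctan t): a rotation in coordinates
    # i, i+1 (1-based) of R^(n+1), with entries rational in t.
    M = sp.eye(n + 1)
    c = (1 - t**2) / (1 + t**2)   # cos(2 arctan t)
    s = 2*t / (1 + t**2)          # sin(2 arctan t)
    M[i-1, i-1], M[i-1, i] = c, -s
    M[i, i-1], M[i, i] = s, c
    return M

t, x1, x2 = sp.symbols('t x1 x2', real=True)
Z = zeta(2, t)
assert sp.simplify(Z.T * Z) == sp.eye(5)   # orthogonality
assert sp.simplify(Z.det()) == 1           # determinant one

h = sp.Rational(1, 2)
z7 = (zeta(2, -1 + x1) * zeta(3, -1 + x2) * zeta(1, -h)
      * zeta(2, -h) * zeta(4, h) * zeta(3, h) * zeta(2, h))
\end{verbatim}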
In order to determine the position
of a point relative to the strata above,
we must study the signs of
\begin{gather*}
p_1(x_1, x_2)=x_1, \quad p_2(x_1, x_2)=x_2 \\
p_3(x_1,x_2)=5 x_1^2 x_2^2 - 10 x_1^2 x_2 - 2 x_1 x_2^2 + 10 x_1^2 + 4 x_1 x_2 - 8 x_2^2 - 20 x_1 + 16 x_2.
\end{gather*}
These three expressions have pairwise linearly independent derivatives
at the origin.
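Concretely, $\nabla p_1(0,0)=(1,0)$, $\nabla p_2(0,0)=(0,1)$ and
$\nabla p_3(0,0)=(-20,16)$. A short SymPy sketch (again purely illustrative)
confirming the pairwise independence:
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p = [x1,
     x2,
     5*x1**2*x2**2 - 10*x1**2*x2 - 2*x1*x2**2 + 10*x1**2
     + 4*x1*x2 - 8*x2**2 - 20*x1 + 16*x2]

# Gradients at the origin: (1, 0), (0, 1) and (-20, 16).
grads = [sp.Matrix([q.diff(x1), q.diff(x2)]).subs({x1: 0, x2: 0})
         for q in p]

# Pairwise linear independence: every 2x2 determinant is nonzero.
for a in range(3):
    for b in range(a + 1, 3):
        assert sp.Matrix.hstack(grads[a], grads[b]).det() != 0
\end{verbatim}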
The transversal section is shown in Figure~\ref{fig:codim2II3}.
Thus, in the CW complex shown in Figure~\ref{fig:45132xx},
the cell of dimension $2$ glues in the obvious way.
The set $\operatorname{BL}_{-\acute{\sigma}}$ is therefore contractible.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.5]{45132_stratum.pdf}
\end{center}
\caption{A transversal section to a stratum of dimension $2$.}
\label{fig:codim2II3}
\end{figure}
Summing up, $\operatorname{BL}_{\sigma}$ has $40$ connected components,
all contractible.
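Indeed, the count is $8 \times 2 + 16 \times 1 + 8 \times 1 = 40$:
two thin components for each of the $8$ elements of ${\mathcal O}_{\acute{\sigma}}$,
and one component for each element of ${\mathcal O}_{\hat a_1\acute{\sigma}}$
and of ${\mathcal O}_{-\acute{\sigma}}$.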
\end{example}
\label{section:n4}
\section{Higher-dimensional strata}
\label{section:codimtwo}
In this section we present examples that contain
higher-dimensional strata.
We do not state theorems in their most general form,
but we hope that the examples convey the ideas of the theory.
The last example is the most important one for $n=4$, and the computations help to clarify the theory;
in this example there appear strata of dimension up to $4$.
\begin{comment}
\begin{example}
Consider now $\sigma = [4 5 1 3 2] = a_2a_1a_3a_2a_4a_3a_2$;
Figure~\ref{fig:45132} shows this reduced word as a diagram.
\begin{figure}[ht!]
\centering
\resizebox{0.3\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
bigdiamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\end{scope}
\end{tikzpicture}}
\caption{The permutation $\sigma \in S_5$.}
\label{fig:45132}
\end{figure}
In the notation of cycles, $\sigma = (143)(25)$;
we therefore have $n = 4$, $\ell = 7$, $c = 2$ and $b = 0$.
Equation~\ref{theo:N} tells us that, for $z \in \acute\sigma \Quat_5$,
we have $N(z) = 4 + 4\sqrt{2} \Re(z)$.
We have
\[ \acute\sigma =
\frac{-1 + \hat a_1\hat a_2
+ \hat a_3 - \hat a_1\hat a_2\hat a_3
+ \hat a_1\hat a_4 + \hat a_2\hat a_4
-\hat a_1 \hat a_3 \hat a_4 - \hat a_2\hat a_3\hat a_4}{2\sqrt{2}} \]
therefore $z \in \acute\sigma \Quat_5$ implies
$\Re(z) \in \{0, \pm \sqrt{2}/4\}$.
It turns out that $\Re(z) = 0$ implies $c_{\anti} = 1$:
the set $\acute\sigma \Quat_5$ thus has $3$ orbits under ${\cal E}_4$,
of sizes $8$, $16$ and $8$:
\begin{gather*}
{\mathcal O}_{\acute\sigma}, \quad \Re(z) = -\frac{\sqrt{2}}{4}, \quad
N(z) =2, \quad N_{\thin}(z) = 2, \\
{\mathcal O}_{\hat a_1\acute\sigma}, \quad \Re(z) = 0, \quad
N(z) = 4, \quad N_{\thin}(z) = 0, \\
{\mathcal O}_{-\acute\sigma}, \quad \Re(z) = \frac{\sqrt{2}}{4}, \quad
N(z) = 6, \quad N_{\thin}(z) = 0.
\end{gather*}
Therefore the set $\operatorname{BL}_{\acute{\sigma}}$ has two connected components, but
$\operatorname{BL}_{\hat{a}_1\acute{\sigma}}$ is connected.
See Figure~\ref{fig:45132x} below.
\begin{figure}[h!]
\centering
\resizebox{0.70\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-4,-3) -- (2.5,-3);
\draw[ultra thick] (-4,4) -- (-4,-7);
\begin{scope}[shift={(-22.5,-2)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-22.5,-10)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-8.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-8.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(1.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-8.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{The stratification of $\operatorname{BL}_{\acute{\sigma}}$ and $\operatorname{BL}_{\hat{a}_1\acute{\sigma}}$.}
\label{fig:45132x}
\end{figure}
Figure~\ref{fig:45132xx} contains a stratification of $\operatorname{BL}_{-\acute{\sigma}}$
and this set is also connected.
Therefore $\operatorname{BL}_{\sigma}$ has $40$ connected components,
all contractible:
$8 \times 2$ (${\mathcal O}_{\acute{\sigma}}$),
$16 \times 1$ (${\mathcal O}_{\hat{a}_1\acute{\sigma}}$) and
$8 \times 1$ (${\mathcal O}_{-\acute{\sigma}}$).
\begin{figure}[ht]
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigdiamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick, inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-12.5,3) -- (6.5,3);
\draw[ultra thick] (-12.5,-3) -- (6.5,-3);
\draw[ultra thick] (-16,4) -- (-16,-3);
\draw[ultra thick] (4,4) -- (4,-3);
\begin{scope}[shift={(-20.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-10.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-20.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnode] at (1.5,3.5) (m1) {};
\node[bigroundnode] at (2.5,4.5) (m2) {};
\node[bigroundnodew] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-10.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnode] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\end{scope}
\begin{scope}[shift={(-0.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (8.5,5.5) -- (8.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node[roundnode] at (8,5) (n85) {};
\node[roundnode] at (8,4) (n84) {};
\node[roundnode] at (8,3) (n83) {};
\node[roundnode] at (8,2) (n82) {};
\node[roundnode] at (8,1) (n81) {};
\draw[very thick] (n15) -- (n25.center);
\draw[very thick] (n14) -- (n23.center);
\draw[very thick] (n13) -- (n24.center);
\draw[very thick] (n12) -- (n32.center);
\draw[very thick] (n11) -- (n51.center);
\draw[very thick] (n25.center) -- (n34.center);
\draw[very thick] (n24.center) -- (n35.center);
\draw[very thick] (n23.center) -- (n33.center);
\draw[very thick] (n35.center) -- (n85.center);
\draw[very thick] (n34.center) -- (n44.center);
\draw[very thick] (n33.center) -- (n42.center);
\draw[very thick] (n32.center) -- (n43.center);
\draw[very thick] (n44.center) -- (n53.center);
\draw[very thick] (n43.center) -- (n54.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n61.center) -- (n81.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n72.center) -- (n82.center);
\node[bigroundnodew] at (1.5,3.5) (m1) {};
\node[bigroundnodew] at (2.5,4.5) (m2) {};
\node[bigroundnode] at (3.5,2.5) (m3) {};
\node[bigroundnodew] at (4.5,3.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\end{scope}
\end{tikzpicture}}
\caption{The stratification of $\operatorname{BL}_{-\acute{\sigma}}$.}
\label{fig:45132xx}
\end{figure}
\end{comment}
\begin{example}
Consider $\sigma = [4 3 5 2 1]$. Let us fix the reduced word $\sigma=a_1 a_3 a_2 a_1 a_4 a_3 a_2 a_1$
(see Figure~\ref{fig:43521}).
\begin{figure}[h!]
\centering
\resizebox{0.2\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\end{scope}
\end{tikzpicture} }
\caption{The permutation $\sigma = [4 3 5 2 1] \in S_5$.}
\label{fig:43521}
\end{figure}
\medskip
We have $\ell = 8$, $b = |\operatorname{Block}(\sigma)| = 0$, $c = \nc(\sigma) = 1$, and
\begin{gather*}
\acute{\sigma}=
\frac{1}{4} \bigg(
-1 - \hat a_1 + \hat a_2 - \hat a_1 \hat a_2 - \hat a_3 + \hat a_1 \hat a_3
- \hat a_2 \hat a_3 - \hat a_1 \hat a_2 \hat a_3 - \hat a_4 \qquad \qquad \\ \qquad \qquad
+ \hat a_1 \hat a_4 + \hat a_2 \hat a_4 + \hat a_1 \hat a_2 \hat a_4
- \hat a_3 \hat a_4 - \hat a_1 \hat a_3 \hat a_4 - \hat a_2 \hat a_3 \hat a_4
+ \hat a_1 \hat a_2 \hat a_3 \hat a_4
\bigg).
\end{gather*}
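As a quick sanity check, the following Python sketch (a verification aid of our own, not part of the construction; the helper names are illustrative) confirms that the word $a_1 a_3 a_2 a_1 a_4 a_3 a_2 a_1$ represents $\sigma = [4 3 5 2 1]$ and is reduced. We assume here the convention that each letter $a_i$ acts as the adjacent transposition $s_i = (i,\,i+1)$ and that the letters are applied from left to right.
\begin{verbatim}
# Check that a_1 a_3 a_2 a_1 a_4 a_3 a_2 a_1 is a reduced word for
# sigma = [4 3 5 2 1] in S_5 (assumed convention: letters act as the
# adjacent transpositions s_i = (i, i+1), applied from left to right).
word = [1, 3, 2, 1, 4, 3, 2, 1]

def apply_word(x, word):
    # Follow the point x through the transpositions one by one.
    for i in word:
        if x == i:
            x = i + 1
        elif x == i + 1:
            x = i
    return x

sigma = [apply_word(x, word) for x in range(1, 6)]
assert sigma == [4, 3, 5, 2, 1]

# A word is reduced iff its length equals the inversion number ell.
ell = sum(1 for i in range(5) for j in range(i + 1, 5)
          if sigma[i] > sigma[j])
assert len(word) == ell == 8
\end{verbatim}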
It follows from Theorem~\ref{theo:N} that
$N(z) = 8 + 8 \Re(z)$.
In this example, it turns out that $\Pi^{-1}[\{\sigma\}]$ contains
$16$ elements with $\Re(z) = \frac14$ and
$16$ elements with $\Re(z) = -\frac14$.
In particular, $\Re(z) \neq 0$ for every $z \in \Pi^{-1}[\{\sigma\}]$, and hence $c_{\anti}(z) = 1$.
The set $\Pi^{-1}[\{\sigma\}]$ thus splits into two orbits, each of size $16$:
\begin{align*}
{\mathcal O}_{\acute\sigma}, &\quad \Re(z)=-\frac14, \quad N(z) = 6, \quad N_{\thin}(z) = 1, \\
{\mathcal O}_{\hat a_1 \hat a_2 \hat a_3 \acute{\sigma}}, &\quad \Re(z)=\frac14, \quad N(z) = 10, \quad N_{\thin}(z) = 0.
\end{align*}
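Indeed, substituting the two real parts into the formula for $N$ gives
\[
N(z) = 8 + 8\cdot\tfrac14 = 10,
\qquad
N(z) = 8 + 8\cdot\bigl(-\tfrac14\bigr) = 6,
\]
in agreement with the values just listed.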
The set $\operatorname{BL}_{\hat a_1 \hat a_2 \hat a_3 \acute\sigma}$ is connected,
whereas the set $\operatorname{BL}_{\acute\sigma}$ has two connected components; the thick part of $\operatorname{BL}_{\acute\sigma}$ is, however, connected.
In Figures~\ref{fig:54231BLacutesigma} and~\ref{fig:54231BLacuteminussigma} we draw the CW complex of one representative of each orbit.
The total number of connected components of $\operatorname{BL}_{\sigma}$ is therefore $48 = 16 \times 1 + 16 \times 2$.
Moreover, all of these connected components are contractible.
\end{example}
\begin{figure}[h!]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-11.5,3) -- (-9.5,3);
\draw[ultra thick] (4.5,0.5) -- (4.5,-1);
\draw[ultra thick] (-5.5,0.5) -- (-5.5,-1);
\draw[ultra thick] (-15.5,0.5) -- (-15.5,-1);
\draw[ultra thick] (4.5,-5.5) -- (4.5,-7.5);
\draw[ultra thick] (-5.5,-5.5) -- (-5.5,-7.5);
\draw[ultra thick] (-15.5,-5.5) -- (-15.5,-7.5);
\begin{scope}[shift={(-20.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\draw[ultra thick] (-1.0,3) -- (1.5,3);
\begin{scope}[shift={(-10.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-20.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-10.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\draw[ultra thick] (-1,-3) -- (1.5,-3);
\begin{scope}[shift={(-0.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\draw[ultra thick] (9,-3) -- (12,-3);
\begin{scope}[shift={(9.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-20.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\draw[ultra thick] (-11,-9) -- (-9.5,-9);
\begin{scope}[shift={(-10.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\draw[ultra thick] (-1,-9) -- (1.5,-9);
\begin{scope}[shift={(-0.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-20.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-10.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-0.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\end{tikzpicture}
}
\caption{The CW complex $\operatorname{BLC}_{\hat a_1 \hat a_2 \hat a_3 \acute{\sigma}}$.}
\label{fig:54231BLacutesigma}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-11.5,3) -- (-9.5,3);
\draw[ultra thick] (-5.5,0.5) -- (-5.5,-1);
\draw[ultra thick] (-1,3) -- (1.5,3);
\draw[ultra thick] (-5.5,-5.5) -- (-5.5,-7.5);
\begin{scope}[shift={(-20.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-0.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-10.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnode] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(9.5,-9)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnodew] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnodew] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-10.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnodew] at (4.5,4.5) (m4) {};
\node[bigroundnode] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnode] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\begin{scope}[shift={(-10.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (9.5,5.5) -- (9.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node[roundnode] at (9,5) (n95) {};
\node[roundnode] at (9,4) (n94) {};
\node[roundnode] at (9,3) (n93) {};
\node[roundnode] at (9,2) (n92) {};
\node[roundnode] at (9,1) (n91) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n22.center);
\draw[very thick] (n11) -- (n21.center);
\draw[very thick] (n25.center) -- (n45.center);
\draw[very thick] (n24.center) -- (n34.center);
\draw[very thick] (n23.center) -- (n32.center);
\draw[very thick] (n22.center) -- (n33.center);
\draw[very thick] (n21.center) -- (n51.center);
\draw[very thick] (n34.center) -- (n43.center);
\draw[very thick] (n33.center) -- (n44.center);
\draw[very thick] (n32.center) -- (n52.center);
\draw[very thick] (n35.center) -- (n45.center);
\draw[very thick] (n42.center) -- (n52.center);
\draw[very thick] (n43.center) -- (n63.center);
\draw[very thick] (n44.center) -- (n55.center);
\draw[very thick] (n45.center) -- (n54.center);
\draw[very thick] (n51.center) -- (n62.center);
\draw[very thick] (n52.center) -- (n61.center);
\draw[very thick] (n53.center) -- (n63.center);
\draw[very thick] (n54.center) -- (n74.center);
\draw[very thick] (n55.center) -- (n85.center);
\draw[very thick] (n61.center) -- (n91.center);
\draw[very thick] (n62.center) -- (n73.center);
\draw[very thick] (n63.center) -- (n72.center);
\draw[very thick] (n64.center) -- (n74.center);
\draw[very thick] (n72.center) -- (n92.center);
\draw[very thick] (n73.center) -- (n84.center);
\draw[very thick] (n74.center) -- (n83.center);
\draw[very thick] (n83.center) -- (n93.center);
\draw[very thick] (n85.center) -- (n94.center);
\draw[very thick] (n84.center) -- (n95.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,2.5) (m2) {};
\node[bigroundnodew] at (3.5,3.5) (m3) {};
\node[bigroundnode] at (4.5,4.5) (m4) {};
\node[bigroundnodew] at (5.5,1.5) (m5) {};
\node[bigroundnode] at (6.5,2.5) (m6) {};
\node[bigroundnodew] at (7.5,3.5) (m7) {};
\node[bigroundnode] at (8.5,4.5) (m8) {};
\end{scope}
\end{tikzpicture}}
\caption{The CW complex $\operatorname{BLC}_{\acute{\sigma}}$.}
\label{fig:54231BLacuteminussigma}
\end{figure}
\newpage
\begin{example}
\label{example:54321}
Set $\sigma = \eta = a_1a_2a_1a_3a_2a_1a_4a_3a_2a_1$; Figure~\ref{diagrampermutation54321} shows this reduced word as a diagram.
\begin{figure}[ht!]
\centering
\resizebox{0.30\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\begin{scope}[shift={(-5.5,0)}]
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\end{scope}
\end{tikzpicture} }
\caption{The permutation $\eta \in S_5$.}
\label{diagrampermutation54321}
\end{figure}
In cycle notation, $\eta = (15)(24)(3)$. Therefore, we have $\ell = 10$, $b = |\operatorname{Block}(\eta)| = 0$ and $c = \nc(\eta) = 3$. Moreover,
$$ \Pi^{-1} [\{ \eta \}] = \bigg\{ \frac{\pm 1 \pm \hat a_2\hat a_3
\pm \hat a_1\hat a_4 \pm \hat a_1 \hat a_2\hat a_3\hat a_4}{2},
\frac{\pm \hat a_1 \pm \hat a_1\hat a_2\hat a_3 \pm \hat a_4 \pm \hat a_2\hat a_3\hat a_4}{2},$$
$$\frac{\pm \hat a_1 \hat a_2 \pm \hat a_1 \hat a_3 \pm \hat a_2 \hat a_4
\pm \hat a_1 \hat a_4}{2}, \frac{ \pm \hat a_2 \pm \hat a_3 \pm \hat a_1 \hat a_2 \hat a_4 \pm \hat a_1 \hat a_3 \hat a_4}{2} \bigg \}$$
where we must take an even number of `$-$' signs.
Therefore, $\Pi^{-1}[\{\eta\}]$ contains
$4$ elements with $\Re(z) = \frac12$,
$4$ elements with $\Re(z) = -\frac12$ and
$24$ elements with $\Re(z) = 0$ (so that $|\Pi^{-1}[\{\eta\}]| = 32$).
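Indeed, each of the four families above carries four independent signs subject to the even-parity constraint, hence $2^{4-1} = 8$ elements per family, and
$$|\Pi^{-1}[\{\eta\}]| = 4 \cdot 2^{3} = 32.$$
Only the first family has a nonzero real part, and among its $8$ elements the sign of the constant term is `$+$' in exactly $4$ and `$-$' in exactly $4$, which yields the counts above.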
From Theorem~\ref{theo:N}, we have
$N(z) = 32 + 16 \Re(z)$.
It turns out that, for $z \in \Pi^{-1}[\{\eta\}]$,
$c_{\anti}(z) = 1$ if and only if $\Re(z) = 0$.
Therefore, by Remark~\ref{remark_orbits}, the set $\Pi^{-1}[\{\eta\}]$ has orbits of sizes $8$ and $4$.
If $\Re(z)=0$ then the orbit ${\mathcal O}_z$ has cardinality $2^{n-c+2}=2^{4-3+2}=8$,
and if $\Re(z)\neq0$ then the orbit ${\mathcal O}_z$ has cardinality $2^{n-c+1}=2^{4-3+1}=4$.
The action of ${\mathcal E}_4$ splits the set $\acute\eta\Quat_{n+1}$
into $5$ orbits, of sizes $8, 4, 4, 8, 8$, shown below.
\begin{align*}
{\mathcal O}_{\acute\eta} &= \left\{
\frac{\pm \hat a_1 \pm \hat a_1\hat a_2\hat a_3
\pm \hat a_4 \pm \hat a_2\hat a_3\hat a_4}{2}
\right\},
\quad N(z) = 32,
\quad N_{\thin}(z) = 2, \\
{\mathcal O}_{\hat a_1\acute\eta} &= \left\{
\frac{1 \pm \hat a_2\hat a_3
\pm \hat a_1\hat a_4 \pm \hat a_1\hat a_2\hat a_3\hat a_4}{2}
\right\}, \quad N(z) = 40,
\quad N_{\thin}(z) = 0, \\
{\mathcal O}_{-\hat a_1\acute\eta} &= \left\{
\frac{-1 \pm \hat a_2\hat a_3
\pm \hat a_1\hat a_4 \pm \hat a_1\hat a_2\hat a_3\hat a_4}{2}
\right\}, \quad N(z) = 24,
\quad N_{\thin}(z) = 0, \\
{\mathcal O}_{\hat a_2 \acute\eta} &= \left\{
\frac{\pm \hat a_1 \hat a_2 \pm \hat a_1 \hat a_3
\pm \hat a_2\hat a_4 \pm \hat a_1\hat a_4}{2}
\right\}, \quad N(z) = 32,
\quad N_{\thin}(z) = 0, \\
{\mathcal O}_{\hat a_1\hat a_2\acute\eta} &= \left\{
\frac{\pm \hat a_2 \pm \hat a_3
\pm \hat a_1\hat a_2\hat a_4 \pm \hat a_1\hat a_3\hat a_4}{2}
\right\}, \quad N(z) = 32,
\quad N_{\thin}(z) = 0.
\end{align*}
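This bookkeeping can be checked mechanically. The following Python sketch is only a hypothetical aid: it does not model the map $\Pi$ or the group action themselves, but only the sign-pattern description above, reading $\Re(z)$ off the sign of the constant term in the first family and verifying the counts, the values $N(z) = 32 + 16\Re(z)$, and the orbit sizes for $n = 4$, $c = 3$.
\begin{verbatim}
from itertools import product
from fractions import Fraction

n, c = 4, 3  # rank parameter and number of cycles of eta

# Each z in Pi^{-1}[{eta}] lies in one of four families
# (+- t1 +- t2 +- t3 +- t4)/2 with an even number of '-' signs;
# only the first family contains the constant term 1.
res = []
for family in range(4):
    for signs in product([1, -1], repeat=4):
        if signs.count(-1) % 2:
            continue  # odd number of '-' signs: excluded
        res.append(Fraction(signs[0], 2) if family == 0 else Fraction(0))

assert len(res) == 32  # |Pi^{-1}[{eta}]| = 32
assert [res.count(Fraction(1, 2)), res.count(Fraction(-1, 2)),
        res.count(0)] == [4, 4, 24]
assert sorted({32 + 16 * r for r in res}) == [24, 32, 40]  # values of N(z)

# Orbit sizes: 2^(n-c+2) = 8 if Re(z) = 0, else 2^(n-c+1) = 4,
# so 24/8 + 8/4 = 5 orbits, of sizes 8, 4, 4, 8, 8.
assert 24 // 2 ** (n - c + 2) + 8 // 2 ** (n - c + 1) == 5
\end{verbatim}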
In order to count connected components
and obtain further information about the topology
of the sets $\operatorname{BL}_z$, $z \in \acute\eta\Quat_{n+1}$,
we can pick one representative from each orbit
and draw the strata.
In~\cite{Alves-Saldanha} we already constructed the CW complex for $z = - \hat{a}_1 \acute{\eta} = - \acute{\eta}\hat{a}_4$
(see~\cite{Alves-Saldanha} for more details about this construction).
The set $\operatorname{BL}_{-\hat{a}_1 \acute{\eta}}$ is homotopy equivalent to the disjoint union of two points;
in other words, the two connected components
of $\operatorname{BL}_{-\hat a_1\acute\eta}$ are contractible.
Since the orbit ${\mathcal O}_{-\hat{a}_1 \acute{\eta}}$ has $4$ elements, each contributing two such components, it contributes $8$ connected components to $\operatorname{BL}_{\eta}$.
Now we want to explore the decomposition into strata of $\operatorname{BL}_{\acute{\eta}}$.
Here instead of $+1$, $-1$, $+2$ and $-2$ we use $\makebox[4mm]{$\smallcircle$}$, $\makebox[4mm]{$\smallblackcircle$}$, $\makebox[4mm]{$\meddiamond$}$ and $\makebox[4mm]{$\medblackdiamond$}$, respectively.
Notice the similarity with diagrams.
For example, we use $(\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$})$
instead of $(-2, +1, +2, -2, +1, +1, -1, +2, +1, -1)$.
A computation shows that $\operatorname{BL}_{\acute{\eta}}$ has: $32$ strata of dimension $0$ (two of them are thin and obviously contractible), $48$ strata of dimension $1$, $22$ strata of dimension $2$, $3$ strata of dimension $3$, and no strata of higher dimension. There are exactly three ancestries of dimension $3$:
\begin{gather*}
(\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$}), \\
(\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$}), \\
(\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$}).
\end{gather*}
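As a quick consistency check, if each stratum is counted as a cell of the indicated dimension in the CW structure, the alternating sum of the counts above gives the Euler characteristic
$$\chi(\operatorname{BL}_{\acute{\eta}}) = 32 - 48 + 22 - 3 = 3.$$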
In what follows we construct the CW complex of the ancestry \\
$(\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$})$.
A sketch of this $3$-dimensional cell appears in Figure~\ref{54321_dimension3_dot}; in
Figure~\ref{54321_flattening} we represent a flattening of this cell.
Note that the outer hexagon in Figure~\ref{54321_flattening} corresponds to the $2$-dimensional cell
with ancestry $(\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$})$.
The construction of the other two cells of dimension $3$ is similar.
These three cells of dimension $3$ have in common the $2$-dimensional cell with ancestry
$(\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$})$.
Along this hexagon we glue the three cells of dimension $3$ in the obvious way.
Therefore this connected component is contractible.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{54321_dimension3_dot.pdf}
\caption{A sketch of one $3$-dimensional cell in $\operatorname{BL}_{\acute{\eta}}$.}
\label{54321_dimension3_dot}
\end{center}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tikzpicture}[roundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,inner sep=2pt},
squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=5mm},
diamondnode/.style={draw,diamond, fill=black, minimum size=1mm,very thick,inner sep=4pt},
diamondnodew/.style={draw,diamond, fill=white, minimum size=1mm,very thick,inner sep=4pt},
bigroundnode/.style={circle, draw=black, fill=black, minimum size=0.1mm,very thick, inner sep=4pt},
bigroundnodew/.style={circle, draw=black, fill=white, minimum size=0.1mm,very thick, inner sep=4pt},]
\draw[ultra thick] (-12.5,3) -- (17.5,3);
\draw[ultra thick] (-21,3) -- (-21,-15);
\draw[ultra thick] (17,3) -- (17,-15);
\draw[ultra thick] (-15,1) -- (-15,-15);
\draw[ultra thick] (10,1) -- (10,-15);
\begin{scope}[shift={(-22.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(-8,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(6.5,0)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\draw[ultra thick] (-12.5,-3) -- (15.5,-3);
\begin{scope}[shift={(-20.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(-8,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(4.5,-6)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnodew] at (8.5,2.5) (m8) {};
\node[bigroundnodew] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\draw[ultra thick] (-12.5,-9) -- (15.5,-9);
\begin{scope}[shift={(-20.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(-8,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(4.5,-12)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnode] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\draw[ultra thick] (-12.5,-15) -- (17.5,-15);
\begin{scope}[shift={(-22.5,-18)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnodew] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(-8,-18)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnodew] at (2.5,3.5) (m2) {};
\node[bigroundnodew] at (3.5,4.5) (m3) {};
\node[bigroundnode] at (4.5,2.5) (m4) {};
\node[bigroundnode] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\begin{scope}[shift={(6.5,-18)}]
\draw[ultra thick,fill=white] (0.5,0.5) -- (0.5,5.5) -- (11.5,5.5) -- (11.5,0.5) -- cycle;
\node[roundnode] at (1,5) (n15) {};
\node[roundnode] at (1,4) (n14) {};
\node[roundnode] at (1,3) (n13) {};
\node[roundnode] at (1,2) (n12) {};
\node[roundnode] at (1,1) (n11) {};
\node at (2,5) (n25) {};
\node at (2,4) (n24) {};
\node at (2,3) (n23) {};
\node at (2,2) (n22) {};
\node at (2,1) (n21) {};
\node at (3,5) (n35) {};
\node at (3,4) (n34) {};
\node at (3,3) (n33) {};
\node at (3,2) (n32) {};
\node at (3,1) (n31) {};
\node at (4,5) (n45) {};
\node at (4,4) (n44) {};
\node at (4,3) (n43) {};
\node at (4,2) (n42) {};
\node at (4,1) (n41) {};
\node at (5,5) (n55) {};
\node at (5,4) (n54) {};
\node at (5,3) (n53) {};
\node at (5,2) (n52) {};
\node at (5,1) (n51) {};
\node at (6,5) (n65) {};
\node at (6,4) (n64) {};
\node at (6,3) (n63) {};
\node at (6,2) (n62) {};
\node at (6,1) (n61) {};
\node at (7,5) (n75) {};
\node at (7,4) (n74) {};
\node at (7,3) (n73) {};
\node at (7,2) (n72) {};
\node at (7,1) (n71) {};
\node at (8,5) (n85) {};
\node at (8,4) (n84) {};
\node at (8,3) (n83) {};
\node at (8,2) (n82) {};
\node at (8,1) (n81) {};
\node at (9,5) (n95) {};
\node at (9,4) (n94) {};
\node at (9,3) (n93) {};
\node at (9,2) (n92) {};
\node at (9,1) (n91) {};
\node at (10,5) (n105) {};
\node at (10,4) (n104) {};
\node at (10,3) (n103) {};
\node at (10,2) (n102) {};
\node at (10,1) (n101) {};
\node[roundnode] at (11,5) (n115) {};
\node[roundnode] at (11,4) (n114) {};
\node[roundnode] at (11,3) (n113) {};
\node[roundnode] at (11,2) (n112) {};
\node[roundnode] at (11,1) (n111) {};
\draw[very thick] (n15) -- (n24.center);
\draw[very thick] (n14) -- (n25.center);
\draw[very thick] (n13) -- (n23.center);
\draw[very thick] (n12) -- (n42.center);
\draw[very thick] (n11) -- (n71.center);
\draw[very thick] (n25.center) -- (n35.center);
\draw[very thick] (n24.center) -- (n33.center);
\draw[very thick] (n23.center) -- (n34.center);
\draw[very thick] (n35.center) -- (n44.center);
\draw[very thick] (n34.center) -- (n45.center);
\draw[very thick] (n33.center) -- (n43.center);
\draw[very thick] (n45.center) -- (n55.center);
\draw[very thick] (n44.center) -- (n54.center);
\draw[very thick] (n43.center) -- (n52.center);
\draw[very thick] (n42.center) -- (n53.center);
\draw[very thick] (n55.center) -- (n65.center);
\draw[very thick] (n54.center) -- (n63.center);
\draw[very thick] (n53.center) -- (n64.center);
\draw[very thick] (n52.center) -- (n72.center);
\draw[very thick] (n65.center) -- (n74.center);
\draw[very thick] (n64.center) -- (n75.center);
\draw[very thick] (n63.center) -- (n83.center);
\draw[very thick] (n62.center) -- (n72.center);
\draw[very thick] (n75.center) -- (n105.center);
\draw[very thick] (n74.center) -- (n94.center);
\draw[very thick] (n73.center) -- (n83.center);
\draw[very thick] (n72.center) -- (n81.center);
\draw[very thick] (n71.center) -- (n82.center);
\draw[very thick] (n83.center) -- (n92.center);
\draw[very thick] (n82.center) -- (n93.center);
\draw[very thick] (n94.center) -- (n103.center);
\draw[very thick] (n93.center) -- (n104.center);
\draw[very thick] (n92.center) -- (n112.center);
\draw[very thick] (n81.center) -- (n111.center);
\draw[very thick] (n103.center) -- (n113.center);
\draw[very thick] (n104.center) -- (n115.center);
\draw[very thick] (n105.center) -- (n114.center);
\node[bigroundnodew] at (1.5,4.5) (m1) {};
\node[bigroundnode] at (2.5,3.5) (m2) {};
\node[bigroundnode] at (3.5,4.5) (m3) {};
\node[bigroundnodew] at (4.5,2.5) (m4) {};
\node[bigroundnodew] at (5.5,3.5) (m5) {};
\node[bigroundnode] at (6.5,4.5) (m6) {};
\node[bigroundnode] at (7.5,1.5) (m7) {};
\node[bigroundnode] at (8.5,2.5) (m8) {};
\node[bigroundnode] at (9.5,3.5) (m9) {};
\node[bigroundnode] at (10.5,4.5) (m10) {};
\end{scope}
\end{tikzpicture} }
\caption{ A flattening of the cell above.}
\label{54321_flattening}
\end{figure}
We have carried out the construction of the CW complex for one representative of each of the orbits
${\mathcal O}_{-\hat{a}_1 \acute{\eta}}$ (see~\cite{Alves-Saldanha}) and ${\mathcal O}_{\acute{\eta}}$.
The construction of the CW complex for one representative of the orbits
${\mathcal O}_{\hat{a}_2 \acute{\eta}}$ and ${\mathcal O}_{\hat{a}_1 \hat{a}_2\acute{\eta}}$
is similar to, and simpler than, the construction above, and is left to the reader.
Finally, we discuss the case of $\operatorname{BL}_{\hat{a}_1\acute{\eta}}$.
This orbit has
$40$ strata of dimension $0$,
$72$ strata of dimension $1$,
$42$ strata of dimension $2$,
$10$ strata of dimension $3$ and
$1$ stratum of dimension $4$.
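Note, as a quick consistency check, that the alternating sum of these numbers of strata equals
\[
\chi \;=\; 40 - 72 + 42 - 10 + 1 \;=\; 1 ,
\]
the Euler characteristic of a point, in agreement with the contractibility established below.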
The CW complex associated with $\operatorname{BL}_{\hat a_1 \acute\eta}$ is homotopy equivalent to a $4$-dimensional disk ${\mathbb{D}}^4$;
as a consequence, it is connected and contractible.
We briefly describe the construction of this CW complex in what follows.
First, a solid torus arises from the gluing of $6$ of the $10$ cells of dimension $3$.
Then two of the $3$-cells fill in the boundary of the solid torus,
which leads to a CW complex homotopy equivalent to ${\mathbb{S}}^2$.
The remaining two $3$-dimensional cells fill in this ${\mathbb{S}}^2$, one on each side, yielding ${\mathbb{S}}^3$.
Finally, the $4$-dimensional cell is attached to the previous construction, leading to ${\mathbb{D}}^4$.
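Recall that the penultimate step is the standard decomposition of the $3$-sphere into two balls glued along their common boundary,
\[
{\mathbb{S}}^3 \;=\; {\mathbb{D}}^3 \cup_{{\mathbb{S}}^2} {\mathbb{D}}^3 ,
\]
which is why filling in both sides of the ${\mathbb{S}}^2$ above yields ${\mathbb{S}}^3$.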
Because a graphical representation of this construction is too complicated, we exhibit instead
a sequence of collapses starting with the initial CW complex for $\operatorname{BL}_{\hat a_1 \acute\eta}$
and ending with a CW complex which is homeomorphic to a disk of dimension $2$, shown in Figure~\ref{fig:54321x}.
Each elementary collapse is described by a pair of cells of adjacent dimensions,
the lower-dimensional cell of the pair being a free face of the higher-dimensional one
(see the sketch following this paragraph).
The reader will then have no difficulty obtaining a (rather long!) sequence of elementary collapses ending with a point.
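Such a collapse sequence can also be checked mechanically.
The following minimal Python sketch is purely illustrative: the encoding of a regular CW complex by its codimension-one face relation, the helper names \texttt{cofaces} and \texttt{collapse}, and the toy triangle input are ours and play no role in the construction above.
\begin{verbatim}
# Illustrative sketch: elementary collapses on a toy regular CW complex.
# A pair (sigma, tau) is an elementary collapse when dim(sigma) =
# dim(tau) + 1 and tau is a free face of sigma, i.e. sigma is the only
# cell having tau on its boundary.

def cofaces(cx, tau):
    """Cells of cx having tau as a codimension-one face."""
    return {c for c, faces in cx.items() if tau in faces}

def collapse(cx, sigma, tau):
    """Remove the pair (sigma, tau), checking that tau is free."""
    assert tau in cx[sigma], "tau must be a face of sigma"
    assert cofaces(cx, tau) == {sigma}, "tau must be a free face"
    del cx[sigma], cx[tau]

# Toy input: a filled triangle (one 2-cell f, edges e01, e12, e02,
# vertices v0, v1, v2), which collapses to the single vertex v0.
cx = {
    "v0": set(), "v1": set(), "v2": set(),
    "e01": {"v0", "v1"}, "e12": {"v1", "v2"}, "e02": {"v0", "v2"},
    "f": {"e01", "e12", "e02"},
}
for pair in [("f", "e12"), ("e01", "v1"), ("e02", "v2")]:
    collapse(cx, *pair)
assert set(cx) == {"v0"}
\end{verbatim}
In the lists that follow, each cell is named by a string of symbols, and each displayed pair records one elementary collapse.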
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$},\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$})\]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$}) \]
\[ (\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$}) \]
We now proceed to remove all remaining cells of dimension $3$.
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$}) \]
At this stage we have a CW complex of dimension $2$.
\begin{figure}[p]
\begin{center}
\includegraphics[scale=0.25]{tikz_54321.pdf}
\end{center}
\caption{The stratification of a CW complex homotopy equivalent to $\operatorname{BL}_{\hat a_1\acute\eta}$, obtained after a sequence of collapses.
This CW complex is homeomorphic to a disk of dimension $2$.}
\label{fig:54321x}
\end{figure}
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\meddiamond$}) \]
\[ (\makebox[4mm]{$\medblackdiamond$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\meddiamond$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$},\makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$} \makebox[4mm]{$\smallblackcircle$}) \]
At this point, the CW complex is a disk of dimension $2$, as shown in Figure~\ref{fig:54321x}.
The reader is invited to complete the very long remaining sequence of elementary collapses, which ends with a single vertex, as desired.
This completes the proof that all $52$ connected components of $\operatorname{BL}_{\eta}$ are contractible.
\end{example}
\bigskip
\section*{Introduction}
Traditionally, a Hamiltonian action is an action of a Lie group $G$ on a symplectic manifold $(S,\omega)$, equipped with an equivariant momentum map:
\begin{equation*} J:(S,\omega)\to \mathfrak{g}^*,
\end{equation*} taking values in the dual of the Lie algebra of $G$. Over the years, variations on this notion have been explored, many of which share the common feature that the momentum map:
\begin{equation}\label{mommapintro0} J:(S,\omega)\to (M,\pi),
\end{equation} is a Poisson map taking values in a specified Poisson manifold (see for instance \cite{McD,Lu,MeWo,We6}). In \cite{WeMi}, such momentum map theories were unified by introducing the notion of Hamiltonian actions for symplectic groupoids, in which the momentum map takes values in the Poisson manifold integrated by a given symplectic groupoid. In this paper, we show that the transverse momentum map of such Hamiltonian actions admits a natural stratification, provided the given symplectic groupoid is proper. To be more precise, let $(\mathcal{G},\Omega)\rightrightarrows (M,\pi)$ be a proper symplectic groupoid with a Hamiltonian action along a momentum map (\ref{mommapintro0}). The symplectic groupoid generates a partition of $(M,\pi)$ into symplectic manifolds, called the symplectic leaves of $(\mathcal{G},\Omega)$. On the other hand, the $(\mathcal{G},\Omega)$-action generates the partition of $(S,\omega)$ into orbits. We denote the spaces of orbits and leaves as:
\begin{equation*} \underline{S}:=S/\mathcal{G} \quad\&\quad \underline{M}:=M/\mathcal{G}.
\end{equation*}
The momentum map (\ref{mommapintro0}) descends to a map:
\begin{center}
\begin{tikzcd} (S,\omega)\arrow[r,"J"] \arrow[d] & (M,\pi) \arrow[d]\\
\underline{S}\arrow[r,"\underline{J}"] & \underline{M}
\end{tikzcd}
\end{center}
that we call the \textbf{transverse momentum map}. Because we assume $\mathcal{G}$ to be proper, by the results of \cite{PfPoTa,CrMe} (which we recall in Section \ref{stratsec}) both the orbit space $\underline{S}$ and the leaf space $\underline{M}$ admit a canonical Whitney stratification: $\S_\textrm{Gp}(\underline{S})$ and $\S_\textrm{Gp}(\underline{M})$, induced by the proper Lie groupoids $\mathcal{G}\ltimes S$ (the action groupoid) and $\mathcal{G}$. These, however, do not form a stratification of the transverse momentum map, in the sense that $\underline{J}$ need not send strata of $\S_\textrm{Gp}(\underline{S})$ into strata of $\S_\textrm{Gp}(\underline{M})$ (see Example \ref{hamGspex} below). Our first main result is Theorem \ref{canhamstratthm}, which shows that there is a natural refinement $\S_\textrm{Ham}(\underline{S})$ of $\S_\textrm{Gp}(\underline{S})$ that, together with the stratification $\S_\textrm{Gp}(\underline{M})$, forms a constant rank stratification of $\underline{J}$. This means that:
\begin{itemize}\item $\underline{J}$ sends strata of $\S_\textrm{Ham}(\underline{S})$ into strata of $\S_\textrm{Gp}(\underline{M})$,
\item the restriction of $\underline{J}$ to each pair of strata is a smooth map of constant rank.
\end{itemize} Theorem \ref{canhamstratthm} further shows that $\S_\textrm{Ham}(\underline{S})$ is in fact a Whitney stratification of the orbit space. We call $\S_\textrm{Ham}(\underline{S})$ the \textbf{canonical Hamiltonian stratification} of $\underline{S}$.
\begin{Ex}\label{hamGspex} Let $G$ be a compact Lie group with Lie algebra $\mathfrak{g}$ and let $J:(S,\omega)\to \mathfrak{g}^*$ be a Hamiltonian $G$-space with equivariant momentum map. In this case, $(\mathcal{G},\Omega)=(T^*G,-\d\lambda_\textrm{can})$ (cf. Example \ref{exhamGsp}), $\underline{S}=S/G$, $\underline{M}=\mathfrak{g}^*/G$, and $\S_\textrm{Gp}(\underline{S})$ and $\S_\textrm{Gp}(\underline{M})$ are the stratifications by connected components of the orbit types of the $G$-actions. The stratification $\S_\textrm{Ham}(\underline{S})$ can be described as follows. Let us call a pair $(K,H)$ of subgroups $H\subset K\subset G$ conjugate in $G$ to another such pair $(K',H')$ if there is a $g\in G$ such that $gKg^{-1}=K'$ and $gHg^{-1}=H'$. Consider the partition of $\underline{S}$ defined by the equivalence relation:
\begin{equation}\label{eqrelorbleaftyp} \O_p\sim \O_q\iff (G_{J(p)},G_p) \text{ is conjugate in $G$ to }(G_{J(q)},G_q),
\end{equation} where $G_p$ and $G_q$ denote the isotropy groups of the action on $S$, whereas $G_{J(p)}$ and $G_{J(q)}$ denote the isotropy groups of the coadjoint action on $\mathfrak{g}^*$. The connected components of the members of this partition form the stratification $\S_\textrm{Ham}(\underline{S})$. When $G$ is abelian, $\S_\textrm{Ham}(\underline{S})$ and $\S_\textrm{Gp}(\underline{S})$ coincide, but in general they need not (consider, for example, the cotangent lift of the action by left translation of a non-abelian compact Lie group $G$ on itself).
\end{Ex}
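To make the parenthetical example explicit (using the left trivialization $T^*G\cong G\times\mathfrak{g}^*$): the cotangent lift of left translation is a free action, so all isotropy groups $G_p$ are trivial and, since $\underline{S}=T^*G/G\cong \mathfrak{g}^*$ is connected, $\S_\textrm{Gp}(\underline{S})$ consists of a single stratum. By (\ref{eqrelorbleaftyp}), on the other hand, two orbits are equivalent precisely when the coadjoint isotropy groups $G_{J(p)}$ and $G_{J(q)}$ are conjugate in $G$, and since $J$ is surjective, $\S_\textrm{Ham}(\underline{S})$ gets identified with the stratification of $\mathfrak{g}^*$ by connected components of the orbit types of the coadjoint action; this is a strict refinement whenever the compact group $G$ is non-abelian.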
Our second main result is Theorem \ref{poisstratthm}$b$, which states that $\S_\textrm{Ham}(\underline{S})$ is in fact a constant rank Poisson stratification of the orbit space and gives a description of the symplectic leaves in terms of the fibers of the transverse momentum map. To elaborate, let us first provide some further context. The singular space $\underline{S}$ has a natural algebra of smooth functions $C^\infty(\underline{S})$: the algebra consisting of $\mathcal{G}$-invariant smooth functions on $S$. This is a Poisson subalgebra of:
\begin{equation*} (C^\infty(S),\{\cdot,\cdot\}_\omega).
\end{equation*} Hence, it inherits a Poisson bracket, known as the reduced Poisson bracket. Geometrically, this is reflected by the fact that $\S_\textrm{Gp}(\underline{S})$ is a Poisson stratification of the orbit space (see Definition \ref{poisstratdef} and Theorem \ref{poisstratthm}$a$). In particular, each stratum of $\S_\textrm{Gp}(\underline{S})$ admits a natural Poisson structure, induced by the Poisson bracket on $C^\infty(\underline{S})$. Closely related to this is the singular symplectic reduction procedure of Lerman-Sjamaar \cite{LeSj}, which states that for each symplectic leaf $\L$ of $(\mathcal{G},\Omega)$ in $M$, the symplectic reduced space:
\begin{equation}\label{sympredspintro} \underline{S}_\L:=J^{-1}(\L)/\mathcal{G}
\end{equation} admits a natural symplectic Whitney stratification. Let us call this the Lerman-Sjamaar stratification of (\ref{sympredspintro}). This is related to the Poisson stratification $\S_\textrm{Gp}(\underline{S})$ by the fact that each symplectic stratum of such a reduced space $(\ref{sympredspintro})$ coincides with a symplectic leaf of a stratum of $\S_\textrm{Gp}(\underline{S})$.
\begin{Rem} The facts mentioned above are stated more precisely in Theorems \ref{poisstratthm}$a$, \ref{redspstratthm} and \ref{poisstratthm}$c$. Although these theorems should be known to experts, we could not find written proofs in the literature at the level of generality of Hamiltonian actions of symplectic groupoids (see e.g. \cite{FeOrRa,LeSj} for the case of Lie group actions); we have therefore included proofs.
\end{Rem}
Returning to our second main result: Theorem \ref{poisstratthm}$b$ states first of all that, like $\S_\textrm{Gp}(\underline{S})$, the canonical Hamiltonian stratification $\S_\textrm{Ham}(\underline{S})$ is a Poisson stratification of the orbit space, the leaves of which coincide with symplectic strata of the Lerman-Sjamaar stratification of the reduced spaces (\ref{sympredspintro}). In addition, it has the following properties:
\begin{itemize}
\item in contrast to $\S_\textrm{Gp}(\underline{S})$, the Poisson structure on each stratum of $\S_\textrm{Ham}(\underline{S})$ is regular (meaning that the symplectic leaves have constant dimension),
\item the symplectic foliation on each stratum $\underline{\Sigma}\in \S_\textrm{Ham}(\underline{S})$ coincides, as a foliation, with that by the connected components of the fibers of the constant rank map $\underline{J}\vert_{\underline{\Sigma}}$.
\end{itemize}
The reduced spaces $(\ref{sympredspintro})$ are, as topological spaces, the fibers of $\underline{J}$. As stratified spaces (equipped with the Lerman-Sjamaar stratification), these can now be seen as the fibers of the stratified map: \begin{equation*} \underline{J}:(\underline{S},\S_\textrm{Ham}(\underline{S}))\to (\underline{M},\S_\textrm{Gp}(\underline{M})).
\end{equation*}
Our third main result is Theorem \ref{poisstratintgrthm}, which says that, besides the fact that the Poisson structure on each stratum of $\S_\textrm{Ham}(\underline{S})$ is regular, these Poisson manifolds admit natural proper symplectic groupoids integrating them.
\begin{Ex}\label{identityactionex} Let $(\mathcal{G},\Omega)\rightrightarrows (M,\pi)$ be a proper symplectic groupoid. Then $(\mathcal{G},\Omega)$ has a canonical (left) Hamiltonian action on itself along the target map $t:(\mathcal{G},\Omega)\to M$. In this case, $(S,\omega)=(\mathcal{G},\Omega)$ and the orbit space $\underline{S}$ is $M$, with orbit projection the source map of $\mathcal{G}$. The stratification $\S_\textrm{Ham}(\underline{S})$ is the canonical stratification $\S_\textrm{Gp}(M)$ induced by the proper Lie groupoid $\mathcal{G}$ (as in Example \ref{exmortyp}). So, Theorems \ref{poisstratthm} and \ref{poisstratintgrthm} imply that each stratum of $\S_\textrm{Gp}(M)$ is a regular, saturated Poisson submanifold of $(M,\pi)$ that admits a natural proper symplectic groupoid integrating it. This recovers a result of \cite{CrFeTo2}.
\end{Ex}
Regular proper symplectic groupoids have been studied extensively in \cite{CrFeTo} and have been shown to admit a transverse integral affine structure. In particular, the proper symplectic groupoids over the strata of the canonical Hamiltonian stratification admit transverse integral affine structures. As it turns out, the leaf space of the proper symplectic groupoid over any stratum of $\S_\textrm{Ham}(\underline{S})$ is smooth, and the transverse momentum map descends to an integral affine immersion into the corresponding stratum of $\S_\textrm{Gp}(\underline{M})$. This is reminiscent of the findings of \cite{CoDaMo,Zu}. \\
\textbf{\underline{Brief outline:}} In Part 1 we generalize the Marle-Guillemin-Sternberg normal form for Hamiltonian actions of Lie groups to those of symplectic groupoids (Theorem \ref{normhamthm}). From this we derive a simpler normal form for the transverse momentum map (Example \ref{locmodmoreq}), using a notion of equivalence for Hamiltonian actions that is analogous to Morita equivalence for Lie groupoids (Definition \ref{moreqdefHam}). Part 1 provides the main tools for the proofs in Part 2, where we introduce the canonical Hamiltonian stratification and prove the main theorems mentioned above (Theorems \ref{canhamstratthm}, \ref{redspstratthm}, \ref{poisstratthm} and \ref{poisstratintgrthm}). A more detailed outline is given at the start of each of these parts.\\
\textbf{\underline{Acknowledgements:}}
I would like to thank my PhD supervisor Marius Crainic for his guidance. Marius suggested that I try to prove the aforementioned normal form theorem by means of Theorem \ref{sympmoreqasrep}. Moreover, he commented on an earlier version of this paper, which most certainly helped me to improve the presentation. I would further like to thank him, Rui Loja Fernandes and David Mart\'{i}nez Torres for sharing some of their unpublished work with me, and I am grateful to David and Rui for their lectures at the summer school of Poisson 2018; all of this has been an important source of inspiration for Theorem \ref{poisstratintgrthm}. This work was supported by NWO (Vici Grant no. 639.033.312).\\
\textbf{\underline{Conventions:}} Throughout, we require smooth manifolds to be both Hausdorff and second countable and we require the same for both the base and the space of arrows of a Lie groupoid.
\section{The normal form theorem}
In this part we prove a version of the Marle-Guillemin-Sternberg normal form theorem for Hamiltonian actions of symplectic groupoids.
\begin{thm}\label{normhamthm} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $\O$ be the orbit of the action through some $p\in S$ and $\L$ the leaf of $\mathcal{G}$ through $x:=J(p)$. If $\mathcal{G}$ is proper at $x$ (in the sense of Definition \ref{propatxdefi}), then the Hamiltonian action is neighbourhood-equivalent (in the sense of Definition \ref{nhoodeq1defi}) to its local model around $\O$ (as constructed in Section \ref{locmodsec}).
\end{thm}
Both the local model and the proof of this theorem are inspired by those of two existing normal form theorems: the MGS-normal form \cite{Ma1,GS4} of Marle and of Guillemin-Sternberg on the one hand, and the normal forms for proper Lie groupoids \cite{We4,Zu,CrStr,FeHo} and symplectic groupoids \cite{Zu,CrMar1,CrFeTo1,CrFeTo2} on the other.\\
We split the proof of this theorem into a rigidity theorem (Theorem \ref{righamthm}) and the construction of a local model out of a certain collection of data that can be associated to any orbit $\O$ of a Hamiltonian action. In Section \ref{backhamactsec} and Section \ref{normrephamsec} we introduce the reader to this data and in Section \ref{locmodsec} we construct the local model.
To prove Theorem \ref{normhamthm}, we are then left to prove the rigidity theorem, which is the content of Section \ref{normformpfsubsec}.
Lastly, in Section \ref{translocmodsec} we introduce a notion of Morita equivalence between Hamiltonian actions that allows us to make sense of a simpler normal form for the transverse momentum map. We then study some elementary invariants for this notion of equivalence, analogous to those for Morita equivalence between Lie groupoids, which will lead to further insight into the proof of Theorem \ref{normhamthm}. This will also be important later in our definition of the canonical Hamiltonian stratification and our proof of Theorem \ref{canhamstratthm} and Theorem \ref{redspstratthm}.
\subsection{Background on Hamiltonian groupoid actions}\label{backhamactsec}
\subsubsection{Poisson structures and symplectic groupoids}\label{backhamactsubsec1}
Recall that a \textbf{symplectic groupoid} is a pair $(\mathcal{G},\Omega)$ consisting of a Lie groupoid $\mathcal{G}$ and a symplectic form $\Omega$ on $\mathcal{G}$ which is \textbf{multiplicative}. That is, it is compatible with the groupoid structure in the sense that:
\begin{equation*} (\textrm{pr}_1)^*\Omega=m^*\Omega-(\textrm{pr}_2)^*\Omega,
\end{equation*} where we denote by:
\begin{equation*} m,\textrm{pr}_1,\textrm{pr}_2:\mathcal{G}^{(2)}\to \mathcal{G}
\end{equation*} the groupoid multiplication and the projections from the space of composable arrows $\mathcal{G}^{(2)}$ to $\mathcal{G}$. Given a symplectic groupoid $(\mathcal{G},\Omega)\rightrightarrows M$, there is a unique Poisson structure $\pi$ on $M$ with the property that the target map $t:(\mathcal{G},\Omega)\to (M,\pi)$ is a Poisson map. The Lie algebroid of $\mathcal{G}$ is canonically isomorphic to the Lie algebroid $T^*_\pi M$ of the Poisson structure $\pi$ on $M$, via:
\begin{equation}\label{imsymp} \rho_\Omega:T^*_\pi M\to \text{Lie}(\mathcal{G}), \quad \iota_{\rho_\Omega(\alpha)}\Omega_{1_x}=(\d t_{1_x})^*\alpha, \quad \forall\alpha\in T^*_xM,\text{ } x\in M.
\end{equation} The symplectic groupoid $(\mathcal{G},\Omega)$ is said to \textbf{integrate} the Poisson structure $\pi$ on $M$.
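Multiplicativity has a well-known equivalent formulation: the condition on $\Omega$ above holds precisely when the graph of the multiplication,
\begin{equation*} \textrm{graph}(m)=\{(g,h,m(g,h)):(g,h)\in\mathcal{G}^{(2)}\}\subset (\mathcal{G},\Omega)\times(\mathcal{G},\Omega)\times(\mathcal{G},-\Omega),
\end{equation*} is an isotropic submanifold; a dimension count (using that $\dim\mathcal{G}=2\dim M$ for a symplectic groupoid) shows that it is then in fact Lagrangian.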
\begin{ex}\label{liealgintex} The dual of a Lie algebra $\mathfrak{g}$ is naturally a Poisson manifold $(\mathfrak{g}^*,\pi_{\textrm{lin}})$, equipped with the so-called Lie-Poisson structure. Given a Lie group $G$ with Lie algebra $\mathfrak{g}$, the cotangent groupoid $(T^*G,-\d\lambda_{\textrm{can}})$ is a symplectic groupoid integrating $(\mathfrak{g}^*,\pi_{\textrm{lin}})$. The groupoid structure on $T^*G$ is determined by the fact that, via left-multiplication on $G$, it is isomorphic to the action groupoid $G\ltimes \mathfrak{g}^*$ of the coadjoint action.
\end{ex}
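For concreteness, let us record the explicit formula for this bracket (with one common choice of sign; conventions differ across the literature). For $f,g\in C^\infty(\mathfrak{g}^*)$ and $\xi\in\mathfrak{g}^*$:
\begin{equation*} \{f,g\}_{\pi_{\textrm{lin}}}(\xi)=\langle \xi,[\d_\xi f,\d_\xi g]\rangle,
\end{equation*} where the differentials $\d_\xi f,\d_\xi g\in (\mathfrak{g}^*)^*\cong \mathfrak{g}$ are regarded as elements of the Lie algebra. In particular, the symplectic leaves of $(\mathfrak{g}^*,\pi_{\textrm{lin}})$ are exactly the coadjoint orbits.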
\subsubsection{Momentum maps and Hamiltonian actions} To begin with, recall:
\begin{defi}[\cite{WeMi}]\label{hamactdef} Let $(S,\omega)$ be a symplectic manifold. A left action of a symplectic groupoid $(\mathcal{G},\Omega)\rightrightarrows M$ along a map $J:(S,\omega)\to M$ is called \textbf{Hamiltonian} if it satisfies the multiplicativity condition:
\begin{equation}\label{hammultcond} (\textrm{pr}_\mathcal{G})^*\Omega=(m_S)^*\omega-(\textrm{pr}_S)^*\omega,
\end{equation} where we denote by:
\begin{equation*} m_S,\textrm{pr}_S:\mathcal{G}\ltimes S\to S,
\quad \textrm{pr}_\mathcal{G}:\mathcal{G}\ltimes S\to \mathcal{G},
\end{equation*} the map defining the action and the projections from the action groupoid to $S$ and $\mathcal{G}$. Right Hamiltonian actions are defined similarly.
\end{defi} The infinitesimal counterparts of Hamiltonian actions of symplectic groupoids are momentum maps. To be more precise, by a \textbf{momentum map} we mean a Poisson map $J:(S,\omega)\to (M,\pi)$ from a symplectic manifold into a Poisson manifold. That is, for all $f,g\in C^\infty(M)$ it holds that:
\begin{equation*} J^*\{f,g\}_\pi=\{J^*f,J^*g\}_\omega.
\end{equation*}
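Equivalently, and often convenient in computations: writing $X_h$ for the Hamiltonian vector field of $h\in C^\infty(S)$ (so that $\iota_{X_h}\omega=\d h$), the map $J$ is Poisson precisely when, for every $f\in C^\infty(M)$, the vector field $X_{J^*f}$ is $J$-related to $\pi^\sharp(\d f)$:
\begin{equation*} \d J(X_{J^*f})=\pi^\sharp(\d f)\circ J.
\end{equation*} (Here the sign conventions for $X_h$ and $\pi^\sharp$ are assumed to be matched with those for the two brackets.)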
Every momentum map comes with a symmetry, in the form of a Lie algebroid action. Indeed, a momentum map $J:(S,\omega)\to (M,\pi)$ is acted on by the Lie algebroid $T^*_\pi M$ of the Poisson structure $\pi$. Explicitly, the Lie algebroid action $a_J:\Omega^1(M)\to \mathcal{X}(S)$ along $J$ is determined by the \textbf{momentum map condition}:
\begin{equation}\label{mommapcond} \iota_{a_J(\alpha)}\omega=J^*\alpha, \quad \forall \alpha\in \Omega^1(M).
\end{equation}
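In the setting of Examples \ref{liealgintex} and \ref{exhamGsp} this recovers the classical momentum map condition. Indeed, for $\xi\in\mathfrak{g}$ let $\ell_\xi:=\langle\cdot,\xi\rangle\in C^\infty(\mathfrak{g}^*)$ be the associated linear function; taking $\alpha=\d \ell_\xi$ in (\ref{mommapcond}) gives
\begin{equation*} \iota_{a_J(\d\ell_\xi)}\omega=\d J^\xi, \qquad J^\xi:=\langle J,\xi\rangle\in C^\infty(S),
\end{equation*} so that $a_J(\d \ell_\xi)$ is the Hamiltonian vector field of $J^\xi$; for a Hamiltonian $G$-space this vector field is, up to the usual sign conventions, the generating vector field of $\xi$.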
Hamiltonian actions integrate such Lie algebroid actions, in the following sense.
\begin{prop} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and let $\pi$ be the induced Poisson structure on $M$ (as in Subsection \ref{backhamactsubsec1}). Suppose that we are given a left Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Then $J:(S,\omega)\to (M,\pi)$ is a momentum map and the Lie algebroid action: \begin{equation}\label{assliealgact} a:\Omega^1(M)\to \mathcal{X}(S)
\end{equation} associated to the Lie groupoid action (via (\ref{imsymp})) coincides with the canonical $T^*_\pi M$-action along $J$. In other words, (\ref{assliealgact}) satisfies the momentum map condition (\ref{mommapcond}). A similar statement holds for right Hamiltonian actions.
\end{prop} An appropriate converse to this statement holds as well; see for instance \cite{BuCr}.
\begin{ex}\label{exhamGsp} Continuing Example \ref{liealgintex}: as observed in \cite{WeMi}, the data of a Hamiltonian $G$-action with equivariant momentum map $J:(S,\omega)\to \mathfrak{g}^*$ is the same as that of a Hamiltonian action of the symplectic groupoid $(G\ltimes \mathfrak{g}^*,-\d\lambda_{\textrm{can}})$ along $J$.
\end{ex}
\begin{ex} Any symplectic groupoid has canonical left and right Hamiltonian actions along its target and source map, respectively.
\end{ex}
\subsection{The local invariants}\label{normrephamsec}
\subsubsection{The leaves and normal representations of Lie and symplectic groupoids} To start with, we introduce some more terminology. Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid and $x\in M$. By the \textbf{leaf of $\mathcal{G}$ through $x$} we mean the set $\L_x$ consisting of points in $M$ that are the target of an arrow starting at $x$. By the \textbf{isotropy group of $\mathcal{G}$} at $x$ we mean the group $\mathcal{G}_x:=s^{-1}(x)\cap t^{-1}(x)$ consisting of arrows that start and end at $x$. In general, $\mathcal{G}_x$ is a submanifold of $\mathcal{G}$ and as such it is a Lie group. The leaf $\L_x$ is an initial submanifold of $M$, with smooth manifold structure determined by the fact that: \begin{equation}\label{sfibprinbun} t:s^{-1}(x)\to \L_x\end{equation} is a (right) principal $\mathcal{G}_x$-bundle. Notice that a leaf of $\mathcal{G}$ may be disconnected. Given a leaf $\L\subset M$ of $\mathcal{G}$, we let $\mathcal{G}_\L:=s^{-1}(\L)$
denote the restriction of $\mathcal{G}$ to $\L$. This is a Lie subgroupoid of $\mathcal{G}$. In all of our main theorems, we assume at least that $\mathcal{G}$ is proper at points in the leaves under consideration, in the sense below.
\begin{defi}[\cite{CrStr}]\label{propatxdefi} A Hausdorff Lie groupoid $\mathcal{G}$ is called \textbf{proper at $x\in M$} if the map \begin{equation*} (t,s):\mathcal{G}\to M\times M
\end{equation*} is proper at $(x,x)$, meaning that any sequence $(g_n)$ in $\mathcal{G}$ such that $(t(g_n),s(g_n))$ converges to $(x,x)$ admits a convergent subsequence.
\end{defi} If $\mathcal{G}$ is proper at some (or equivalently every) point $x\in \L$, then $\L$ and the Lie subgroupoid $\mathcal{G}_\L$ are embedded submanifolds of $M$ and $\mathcal{G}$ respectively, and the isotropy group $\mathcal{G}_x$ is compact. Returning to a general leaf $\L$, the normal bundle $\mathcal{N}_\L$ to the leaf in $M$ is naturally a representation:
\begin{equation*} \mathcal{N}_\L\in \textrm{Rep}(\mathcal{G}_\L)
\end{equation*} of $\mathcal{G}_\L$, with the action defined as:
\begin{equation}\label{normrepleaf} g\cdot[v]=[\d t(\hat{v})]\in \mathcal{N}_{t(g)}, \quad g\in \mathcal{G}_\L,\text{ } [v]\in \mathcal{N}_{s(g)},
\end{equation} where $\hat{v}\in T_g\mathcal{G}$ is any tangent vector satisfying $\d s(\hat{v})=v$. We call this the \textbf{normal representation of $\mathcal{G}$ at $\L$}. It encodes first order data of $\mathcal{G}$ in directions normal to $\L$ (see also \cite{CrStr}). Given $x\in \L$, so that $\L=\L_x$, this restricts to a representation:
\begin{equation}\label{normreppt} \mathcal{N}_x\in \textrm{Rep}(\mathcal{G}_x)
\end{equation}
of the isotropy group $\mathcal{G}_x$ on the fiber $\mathcal{N}_x$ of $\mathcal{N}_\L$ over $x$, which we refer to as the \textbf{normal representation of $\mathcal{G}$ at $x$}. Without loss of information, one can restrict attention to the normal representation at a point, which will often be more convenient for our purposes. This is because the transitive Lie groupoid $\mathcal{G}_\L$ is canonically isomorphic to the gauge-groupoid of the principal bundle (\ref{sfibprinbun}), and the normal bundle $\mathcal{N}_\L$ is canonically isomorphic to the vector bundle associated to the principal bundle (\ref{sfibprinbun}) and the representation (\ref{normreppt}).
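In a formula: the canonical isomorphism alluded to here is
\begin{equation*} \mathcal{N}_\L\cong s^{-1}(x)\times_{\mathcal{G}_x}\mathcal{N}_x,
\end{equation*} the vector bundle associated to the principal $\mathcal{G}_x$-bundle (\ref{sfibprinbun}) and the representation (\ref{normreppt}).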
\begin{ex} For the holonomy groupoid of a foliation (assumed to be Hausdorff here), the leaves are those of the foliation and the normal representation at $x$ is the linear holonomy representation (the linearization of the holonomy action on a transversal through $x$).
\end{ex}
\begin{ex} For the action groupoid of a Lie group action, the leaves are the orbits of the action and the normal representation at $x$ is the representation induced by the isotropy representation of $G_x$ on the tangent space at $x$.
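Explicitly, the leaf through $x$ is the orbit $\O_x=G\cdot x$ and, writing $g:M\to M$ for the diffeomorphism defined by acting with $g\in G_x$, the action on $\mathcal{N}_x=T_xM/T_x\O_x$ reads:
\begin{equation*} g\cdot [v]=[\d g_x(v)], \quad [v]\in \mathcal{N}_x.
\end{equation*}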
\end{ex}
For a symplectic groupoid, the basic facts stated below hold; they follow from the multiplicativity of the symplectic form on the groupoid (see e.g. \cite{BuCrWeZh} for background on multiplicative $2$-forms).
\begin{prop}\label{normrepsymp} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and let $\pi$ be the induced Poisson structure on $M$. Let $x\in M$, let $\L$ be the leaf of $\mathcal{G}$ through $x$ and $\mathcal{G}_\L$ the restriction of $\mathcal{G}$ to $\L$.
\begin{itemize}\item[a)] There is a unique symplectic form $\omega_\L$ on $\L$ such that:
\begin{equation*} \Omega\vert_{\mathcal{G}_\L}=t^*\omega_\L-s^*\omega_\L\in \Omega^2(\mathcal{G}_\L).
\end{equation*} The connected components of $(\L,\omega_\L)$ are symplectic leaves of the Poisson manifold $(M,\pi)$.
\item[b)] The normal representation (\ref{normreppt}) is isomorphic (via (\ref{imsymp})) to the coadjoint representation:
\begin{equation*} \mathfrak{g}_x^*\in \textrm{Rep}(\mathcal{G}_x).
\end{equation*}
\end{itemize}
\end{prop}
\subsubsection{The orbits, leaves and normal representations of Hamiltonian actions} Next, we will study the leaves and the normal representations for the action groupoid of a Hamiltonian action. Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and suppose that we are given a left Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $p\in S$, $x:=J(p)\in M$, let $(\L,\omega_\L)$ be the symplectic leaf of $(\mathcal{G},\Omega)$ through $x$ (as in Proposition \ref{normrepsymp}) and let $\mathcal{G}_x$ be the isotropy group of $\mathcal{G}$ at $x$. By the \textbf{orbit of the action through $p$} we mean:
\begin{equation*} \O_p:=\{g\cdot p\mid g\in s^{-1}(x)\}\subset S,
\end{equation*} and by the \textbf{isotropy group of the $\mathcal{G}$-action at $p$} we mean the closed subgroup:
\begin{equation*} \mathcal{G}_p:=\{g\in \mathcal{G}_x\mid g\cdot p=p\}\subset \mathcal{G}_x.
\end{equation*} Note that these coincide with the leaf and the isotropy group at $p$ of the action groupoid. We let
\begin{equation}\label{normrepactgpoid} \mathcal{N}_p\in \textrm{Rep}(\mathcal{G}_p)
\end{equation} denote the normal representation of the action groupoid at $p$. There are various relationships between the orbits, leaves and the normal representations at $p$ and $x$. To state these, consider the symplectic normal space to the orbit $\O$ at $p$:
\begin{equation}\label{sympnormsp} \S\mathcal{N}_p:=\frac{T_p\O^\omega}{T_p\O\cap T_p\O^\omega},
\end{equation}
where we denote the symplectic orthogonal of the tangent space $T_p\O$ to the orbit through $p$ as:
\begin{equation}\label{symporthorb} T_p\O^\omega:=\{v\in T_pS\mid\omega(v,w)=0,\text{ }\forall w\in T_p\O\}.\end{equation}
Further, consider the annihilator of $\mathfrak{g}_p$ in $\mathfrak{g}_x^*$:
\begin{equation}\label{annisoliealg} \mathfrak{g}_p^0\subset \mathfrak{g}_x^*.
\end{equation}
\begin{prop}\label{normrepham} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and suppose that we are given a left Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $\O$ be the orbit of the action through $p\in S$.
\begin{itemize}\item[a)] The map $J$ restricts to a surjective submersion $J_\O:\O\to \L$ from the orbit $\O$ onto a leaf $\L$ of $\mathcal{G}$. Moreover, the restriction $\omega_\O\in \Omega^2(\O)$ of $\omega$ coincides with the pull-back of $\omega_\L$:
\begin{equation}\label{pullbackorbitform} \omega_\O=(J_\O)^*\omega_\L.
\end{equation}
\item[b)] The symplectic normal space (\ref{sympnormsp}) to $\O$ at $p$ is a subrepresentation of the normal representation (\ref{normrepactgpoid}) of the action at $p$. In fact, (\ref{sympnormsp}), (\ref{normrepactgpoid}) and (\ref{annisoliealg}) fit into a canonical short exact sequence of $\mathcal{G}_p$-representations:
\begin{equation}\label{ses1pois} 0\to \S\mathcal{N}_p\to \mathcal{N}_p\to \mathfrak{g}_p^0\to 0.
\end{equation}
\item[c)] The normal representation (\ref{normreppt}) of $\mathcal{G}$ at $x:=J(p)$ fits into the canonical short exact sequence of $\mathcal{G}_p$-representations:
\begin{equation}\label{ses2pois} 0\to \mathfrak{g}_p^0\to \mathfrak{g}^*_x\to \mathfrak{g}_p^*\to 0.
\end{equation}
\end{itemize}
\end{prop}
\begin{proof} That $J$ maps $\O$ submersively onto a leaf $\L$ follows from the axioms of a Lie groupoid action. The equality (\ref{pullbackorbitform}) is readily derived from (\ref{hammultcond}). Part $c$ is immediate from Proposition \ref{normrepsymp}$b$. To prove part $b$ and provide some further insight into part $c$, observe that $J$ induces a $\mathcal{G}_p$-equivariant map:
\begin{equation*} \underline{\d J}_p:\mathcal{N}_p\to \mathcal{N}_x.
\end{equation*} Therefore we have two short exact sequences of $\mathcal{G}_p$-representations:
\begin{align} &0\to \ker(\underline{\d J}_p)\to \mathcal{N}_p\to \textrm{Im}(\underline{\d J}_p)\to 0 \label{ses1}\\
&0\to \textrm{Im}(\underline{\d J}_p)\to \mathcal{N}_x\to \text{CoKer}(\underline{\d J}_p)\to 0 \label{ses2}
\end{align}
Using the proposition below, the short exact sequence (\ref{ses2}) translates into the short exact sequence (\ref{ses2pois}), whereas (\ref{ses1}) translates into (\ref{ses1pois}). In particular, this proves part $b$.
\end{proof}
\begin{prop}\label{infmomact} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and suppose that we are given a left Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Further, let $p\in S$.
\begin{itemize}\item[a)] The symplectic orthogonal (\ref{symporthorb}) of the tangent space $T_p\O$ to the orbit $\O$ through $p$ coincides with $\ker{(\d J_p)}$.
\item[b)] The isotropy Lie algebra $\mathfrak{g}_p$, viewed as a subset of $T^*_xM$ via (\ref{imsymp}), is the annihilator of $\textrm{Im}(\d J_p)$ in $T^*_xM$, where $x=J(p)$.
\end{itemize}
\end{prop}
This is readily derived from the momentum map condition $(\ref{mommapcond})$.
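Indeed, since $T_p\O$ is spanned by the values $a_J(\alpha)_p$ of the Lie algebroid action, the momentum map condition (\ref{mommapcond}) gives:
\begin{equation*} \omega(a_J(\alpha)_p,v)=\langle \alpha,\d J_p(v)\rangle, \quad \forall\text{ }\alpha\in T^*_xM,\text{ } v\in T_pS,
\end{equation*} so that $v\in T_p\O^\omega$ if and only if $\d J_p(v)=0$, proving part $a$. Part $b$ follows from the same identity together with non-degeneracy of $\omega$, using that $\alpha\in \mathfrak{g}_p$ precisely when $a_J(\alpha)_p=0$.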
\subsubsection{The symplectic normal representation} Notice that the symplectic form $\omega$ on $S$ descends to a linear symplectic form $\omega_p$ on the symplectic normal space (\ref{sympnormsp}).
\begin{prop} $(\S\mathcal{N}_p,\omega_p)$ is a symplectic $\mathcal{G}_p$-representation.
\end{prop}
\begin{proof} We ought to show that $\omega_p$ is $\mathcal{G}_p$-invariant. Note that, for any $v\in \ker(\d J_p)$ and $g\in \mathcal{G}_p$: \begin{equation*} g\cdot[v]=[\d m_{(g,p)}(0,v)].
\end{equation*} So, using Proposition \ref{infmomact}$a$ we find that for all $v,w\in T_p\O^\omega$ and $g\in \mathcal{G}_p$:
\begin{equation*} \omega_p(g\cdot[v],g\cdot[w])=(m^*\omega)_{(g,p)}((0,v),(0,w))=\omega_p([v],[w]),
\end{equation*} where in the last step we applied (\ref{hammultcond}).
\end{proof}
\begin{defi} Given a Hamiltonian action as above, we call \begin{equation}\label{sympnormrep} (\S\mathcal{N}_p,\omega_p)\in \textrm{SympRep}(\mathcal{G}_p)
\end{equation} its \textbf{symplectic normal representation at $p$}.
\end{defi}
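For example, if $p$ is a fixed point of the action, then $T_p\O=0$, so $T_p\O^\omega=T_pS$ and the symplectic normal representation is simply the isotropy representation of $\mathcal{G}_p$ on the tangent space:
\begin{equation*} (\S\mathcal{N}_p,\omega_p)=(T_pS,\omega_p)\in \textrm{SympRep}(\mathcal{G}_p).
\end{equation*}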
Given any symplectic representation $(V,\omega_V)$ of a Lie group $H$, the $H$-action is Hamiltonian with quadratic momentum map:
\begin{equation}\label{quadsympmommap} J_V:(V,\omega_V)\to \mathfrak{h}^*, \quad \langle J_V(v),\xi \rangle=\frac{1}{2}\omega_V(\xi\cdot v,v).
\end{equation} As we will now show, given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$, the quadratic momentum map:
\begin{equation}\label{quadsympmommap2} J_{\S\mathcal{N}_p}:(\S\mathcal{N}_p,\omega_p)\to \mathfrak{g}_p^*
\end{equation} of the symplectic normal representation at $p$ can be expressed in terms of the quadratic differential of $J$ at $p$. Recall from \cite{AGV1} that the \textbf{quadratic differential} of a map $F:S\to M$ at $p\in S$ is defined to be the quadratic map:
\begin{equation*} \d^2F_p:\ker(\d F_p)\to \text{CoKer}(\d F_p),\quad \d^2F_p(v)=\left[\frac{1}{2}\left.\frac{\d^2}{\d t^2}\right|_{t=0}(\psi\circ F\circ \phi^{-1})(tv)\right],
\end{equation*} where $\phi:(U,p)\to (T_pS,0)$ and $\psi:(V,x)\to (T_{x}M,0)$ are any two open embeddings, defined on open neighbourhoods of $p$ and $x:=F(p)$ such that $F(U)\subset V$, with the property that their differentials at $p$ and $x$ are the respective identity maps. Returning to the momentum map $J$, by Proposition \ref{infmomact} its quadratic differential becomes a map:
\begin{equation}\label{quadsympmommap3} \d^2J_p:T_p\O^\omega\to \mathfrak{g}_p^*.
\end{equation}
\begin{prop}\label{quaddifmommap} Let $J:(S,\omega)\to M$ be the momentum map of a Hamiltonian action and $p\in S$. Then the quadratic differential (\ref{quadsympmommap3}) is the composition of the quadratic momentum map (\ref{quadsympmommap2}) with the canonical projection $T_p\O^\omega\to \S\mathcal{N}_p$:
\begin{center}
\begin{tikzcd} T_p\O^\omega\arrow[d] \arrow[r,"\d^2J_p"] & \mathfrak{g}_p^* \\
\S\mathcal{N}_p\arrow[ru,"J_{\S\mathcal{N}_p}"'] &
\end{tikzcd}
\end{center}
\end{prop}
For the proof, we use an alternative description of the quadratic differential. Recall that, given a vector bundle $E\to S$ and a germ of sections $e\in \Gamma_p(E)$ vanishing at $p\in S$, the linearization of $e$ at $p$ is the linear map:
\begin{equation*} e^\textrm{lin}_p:T_pS \to E_p,\quad e^\textrm{lin}_p:=\textrm{pr}_{E_p}\circ (\d e)_p,
\end{equation*} where we view the differential $(\d e)_p$ of the map $e$ at $p$ as a map into $E_p\oplus T_pS$, via the canonical identification of $(TE)_{(p,0)}$ with $E_p\oplus T_pS$. With this, one can define the \textbf{intrinsic Hessian} of a map $F:S\to M$ at $p$ to be the symmetric bilinear map:
\begin{equation*} \textrm{Hess}_p(F):\ker(\d F_p)\times \ker(\d F_p)\to \text{CoKer}(\d F_p), \quad (X_p,Y_p)\mapsto\left[\frac{1}{2} (\d F(Y))^\textrm{lin}_p(X_p)\right],
\end{equation*} where $Y\in \mathcal{X}_p(S)$ is any germ of vector fields extending $Y_p$ and we see $\d F(Y)$ as a germ of sections of $F^*(TM)$.
The quadratic differential is now given by the quadratic form:
\begin{equation*} \d^2F_p(v)=\textrm{Hess}_p(F)(v,v), \quad v\in \ker(\d F_p).
\end{equation*}
We will further use the following immediate, but useful, observation.
\begin{lemma}\label{linseclem1} Let $\Phi:E\to F$ be a map of vector bundles over the same manifold, covering the identity map. If $e\in \Gamma_p(E)$ is a germ of sections that vanishes at $p$, then so does $\Phi(e)\in \Gamma_p(F)$ and we have:
\begin{equation*} \Phi(e)^\textrm{lin}_p=\Phi\circ e^\textrm{lin}_p.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Proposition \ref{quaddifmommap}] Let $\alpha_x\in \mathfrak{g}_p \subset T_x^*M$ and $X_p\in \ker(\d J_p)=T_p\O^\omega$. We have to prove: \begin{equation*} \langle J_{\S\mathcal{N}_p}([X_p]), \alpha_x\rangle=\langle \alpha_x, \d^2J_p(X_p)\rangle.
\end{equation*} This will follow by linearizing both sides of equation (\ref{mommapcond}). Let $\alpha\in \Omega^1(M)$ and $X\in \mathcal{X}(S)$ be extensions of $\alpha_x$ and $X_p$, respectively. On one hand, we have:
\begin{align*} \langle (\iota_{a(\alpha)}\omega)^\textrm{lin}_p(X_p),X_p\rangle &=\omega_p(a(\alpha)^\textrm{lin}_p(X_p),X_p)\\
&=2\langle J_{\S\mathcal{N}_p}([X_p]), \alpha_x\rangle.
\end{align*} Here we have first used that, given a $k$-form $\beta$ and a vector field $Y$ that vanishes at $p$, it holds that \begin{equation*} (\iota_Y\beta)^\textrm{lin}_p(X_p)=\iota_{Y^\textrm{lin}_p(X_p)}\beta_p,\end{equation*} as follows from Lemma \ref{linseclem1}. Furthermore, for the second step we have used that the Lie algebra representation $\mathfrak{g}_p\to \mathfrak{sp}(\S\mathcal{N}_p,\omega_p)$ induced by the symplectic normal representation is given by:
\begin{equation*} \alpha_x\cdot[X_p]=[a(\alpha)^{\textrm{lin}}_p(X_p)].
\end{equation*}
On the other hand, linearizing the right-hand side of (\ref{mommapcond}) we find (as desired):
\begin{align*} \langle (J^*\alpha)^\textrm{lin}_p(X_p),X_p\rangle&=(\alpha(\d J(X)))^\textrm{lin}_p(X_p)\\
&=2\langle \alpha_x, \d^2J_p(X_p)\rangle.
\end{align*} Here we have first used that, given a vector field $Y$ and a $k$-form $\beta$ that vanishes at $p$, it holds that
\begin{equation*} (\iota_Y\beta)^\textrm{lin}_p(X_p)=\iota_{Y_p}(\beta^\textrm{lin}_p(X_p)),\end{equation*}
as follows from Lemma \ref{linseclem1}. Furthermore, for the second step we have again used Lemma \ref{linseclem1}.
\end{proof}
\subsubsection{Neighbourhood equivalence and rigidity}\label{normformthmsec}
We now turn to the notion of neighbourhood equivalence, used in the statement of Theorem \ref{normhamthm}. In view of Proposition \ref{normrepsymp}, the restriction of a symplectic groupoid $(\mathcal{G},\Omega)$ to a leaf $\L$ gives rise to the data of:
\begin{itemize}\item a symplectic manifold $(\L,\omega_\L)$,
\item a transitive Lie groupoid $\mathcal{G}_\L\rightrightarrows \L$ equipped with a closed multiplicative $2$-form $\Omega_\L$,
\end{itemize} subject to the relation:
\begin{equation}\label{data0} \Omega_\L=t_{\mathcal{G}_\L}^*\omega_\L-s_{\mathcal{G}_\L}^*\omega_\L.
\end{equation}
\begin{defi}\label{data0defi} We call a collection of such data a \textbf{zeroth-order symplectic groupoid data}.
\end{defi}
Further, using Proposition \ref{normrepham}$a$, we observe that the restriction of a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$ to an orbit $\O$ (with corresponding leaf $\L=J(\O)$) encodes the data of:
\begin{itemize}\item a zeroth-order symplectic groupoid data $(\mathcal{G}_\L,\Omega_\L)\rightrightarrows (\L,\omega_\L)$,
\item a pre-symplectic manifold $(\O,\omega_\O)$,
\item a transitive Lie groupoid action of $\mathcal{G}_\L$ along a map $J_\O:\O\to \L$,
\end{itemize} subject to the relations:
\begin{equation}\label{data1} (\textrm{pr}_{\mathcal{G}_\L})^*\Omega_\L=(m_\O)^*\omega_\O-(\textrm{pr}_\O)^*\omega_\O \quad \&\quad \omega_\O=(J_\O)^*\omega_\L,
\end{equation} where we denote by: \begin{equation*} m_\O,\textrm{pr}_\O:\mathcal{G}_\L\ltimes \O\to \O,
\quad \textrm{pr}_{\mathcal{G}_\L}:\mathcal{G}_\L\ltimes \O\to \mathcal{G}_\L,
\end{equation*} the map defining the action and the projections from the action groupoid to $\O$ and $\mathcal{G}_\L$.
\begin{defi}\label{data1defi} We call a collection of such data a \textbf{zeroth-order Hamiltonian data}.
\end{defi}
Next, we define realizations of such zeroth-order data and neighbourhood equivalences thereof.
\begin{defi}\label{nhoodeq0defi} By a \textbf{realization} of a given \textbf{zeroth-order symplectic groupoid data}:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G}_\L,\Omega_\L)$};
\node (M1) at (0,-1.3) {$(\L,\omega_\L)$};
\node (G) at (2.7,0) {$(\mathcal{G},\Omega)$};
\node (M) at (2.7,-1.3) {$(M,\pi)$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[right hook->] (0.8,-0.65) -- (2,-0.65) node[pos=0.4,above] {$\text{ }\text{ }i$};
\draw[->,transform canvas={xshift=-\shift}](G) to node[midway,left] {}(M);
\draw[->,transform canvas={xshift=\shift}](G) to node[midway,right] {}(M);
\end{tikzpicture}
\end{center} we mean an embedding of Lie groupoids $i:\mathcal{G}_\L\hookrightarrow \mathcal{G}$ with the property that $\Omega$ pulls back to $\Omega_\L$ and that $\mathcal{G}_\L$ embeds as the restriction of $\mathcal{G}$ to a leaf. Of course, $(\L,\omega_\L)$ then automatically embeds as a symplectic leaf of $(\mathcal{G},\Omega)$. We call two realizations $i_1$ and $i_2$ of the same zeroth-order symplectic groupoid data \textbf{neighbourhood-equivalent} if there are opens $V_1$ and $V_2$ around $\L$ in the respective bases $M_1$ and $M_2$ of the two realizations, together with an isomorphism of symplectic groupoids:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G}_1,\Omega_1)\vert_{V_1}$};
\node (M1) at (0,-1.3) {$(V_1,\pi_1)$};
\node (G) at (2.7,0) {$(\mathcal{G}_2,\Omega_2)\vert_{V_2}$};
\node (M) at (2.7,-1.3) {$(V_2,\pi_2)$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[transparent] (0.2,-0.65) -- (1.4,-0.65) node[opacity=1] {\resizebox{0.8cm}{0.2cm}{$\cong$}};
\draw[->,transform canvas={xshift=-\shift}](G) to node[midway,left] {}(M);
\draw[->,transform canvas={xshift=\shift}](G) to node[midway,right] {}(M);
\end{tikzpicture}
\end{center}
that intertwines $i_1$ with $i_2$.
\end{defi}
\begin{defi}\label{nhoodeq1defi} By a \textbf{realization} of a given \textbf{zeroth-order Hamiltonian data}:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G}_\L,\Omega_\L)$};
\node (M1) at (0,-1.3) {$(\L,\omega_\L)$};
\node (O) at (2,0) {$(\O,\omega_\O)$};
\node (G) at (5,0) {$(\mathcal{G},\Omega)$};
\node (M) at (5,-1.3) {$(M,\pi)$};
\node (S) at (6.8,0) {$(S,\omega)$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->](O) to node[pos=0.25, below] {$\text{ }\text{ }J_\O$} (M1);
\draw[->] (1.2,-0.15) arc (315:30:0.25cm);
\draw[right hook->] (2.7,-0.65) -- (4.4,-0.65) node[pos=0.4,above] {$\text{ }\text{ }(i,j)$};
\draw[->,transform canvas={xshift=-\shift}](G) to node[midway,left] {}(M);
\draw[->,transform canvas={xshift=\shift}](G) to node[midway,right] {}(M);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }J$} (M);
\draw[->] (6.1,-0.15) arc (315:30:0.25cm);
\end{tikzpicture}
\end{center}
we mean a pair $(i,j)$ consisting of:
\begin{itemize}\item a realization $i$ of the zeroth-order symplectic groupoid data $(\mathcal{G}_\L,\Omega_\L)\rightrightarrows (\L,\omega_\L)$,
\item an embedding $j:\O\hookrightarrow S$ that pulls back $\omega$ to $\omega_\O$ and is compatible with $i$, in the sense that $i$ and $j$ together intertwine $J_\O$ with $J$, and the actions along these maps.
\end{itemize}
We call two realizations $(i_1,j_1)$ and $(i_2,j_2)$ of the same zeroth-order Hamiltonian data \textbf{neighbourhood-equivalent} if there are opens $V_1$ and $V_2$ around $\L$ in $M_1$ and $M_2$ respectively, a $\mathcal{G}_1\vert_{V_1}$-invariant open $U_1$ and a $\mathcal{G}_2\vert_{V_2}$-invariant open $U_2$ around $\O$ in $J_1^{-1}(V_1)$, respectively $J_2^{-1}(V_2)$, together with:
\begin{itemize}
\item an isomorphism $(\mathcal{G}_1,\Omega_1)\vert_{V_1}\cong (\mathcal{G}_2,\Omega_2)\vert_{V_2}$ that intertwines $i_1$ with $i_2$,
\item a symplectomorphism $(U_1,\omega_1)\cong (U_2,\omega_2)$ that intertwines $j_1$ with $j_2$ and is compatible with the above isomorphism of symplectic groupoids, in the sense that together these intertwine $J_1:U_1\to V_1$ with $J_2:U_2\to V_2$, and the actions along these maps.
\end{itemize}
In other words, we have an isomorphism of Hamiltonian actions:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G}_1,\Omega_1)\vert_{V_1}$};
\node (M1) at (0,-1.3) {$(V_1,\pi_1)$};
\node (O) at (2,0) {$(U_1,\omega_1)$};
\node[transparent] (A) at (2.2,-0.65) {$A$};
\node[transparent] (B) at (3.4,-0.65) {$B$};
\node (G) at (4,0) {$(\mathcal{G}_2,\Omega_2)\vert_{V_2}$};
\node (M) at (4,-1.3) {$(V_2,\pi_2)$};
\node (S) at (6,0) {$(U_2,\omega_2)$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->](O) to node[pos=0.25, below] {$\text{ }\text{ }J_1$} (M1);
\draw[->] (1.3,-0.15) arc (315:30:0.25cm);
\draw[transparent] (A) edge node[opacity=1] {\resizebox{0.8cm}{0.2cm}{$\cong$}} (B);
\draw[->,transform canvas={xshift=-\shift}](G) to node[midway,left] {}(M);
\draw[->,transform canvas={xshift=\shift}](G) to node[midway,right] {}(M);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }J_2$} (M);
\draw[->] (5.3,-0.15) arc (315:30:0.25cm);
\end{tikzpicture}
\end{center} that intertwines the embeddings of zeroth-order data. Usually the embeddings are clear from the context and we simply call the two Hamiltonian actions neighbourhood-equivalent around $\O$.
\end{defi} We can now state the rigidity result mentioned in the introduction to this section.
\begin{thm}\label{righamthm} Suppose that we are given two realizations of the same zeroth-order Hamiltonian data with orbit $\O$ and leaf $\L$. Fix $p\in \O$ and let $x=J_\O(p)\in \L$. If both symplectic groupoids are proper at $x$ (in the sense of Definition \ref{propatxdefi}), then the realizations are neighbourhood-equivalent if and only if their symplectic normal representations at $p$ are isomorphic as symplectic $\mathcal{G}_p$-representations.
\end{thm}
In the next subsection, we give an explicit construction to show:
\begin{prop}\label{existlocmodprop} For any zeroth-order Hamiltonian data with orbit $\O$, any choice of $p\in \O$ and any symplectic representation $(V,\omega_V)$ of the isotropy group $\mathcal{G}_p$, there is a realization of the zeroth-order data that has $(V,\omega_V)$ as symplectic normal representation at $p$.
\end{prop}
Given a Hamiltonian action, the realization constructed out of the zeroth-order Hamiltonian data obtained by restriction to $\O$, together with the symplectic normal representation at $p$, is called \textbf{the local model} of the Hamiltonian action around $\O$ (we disregard the choice of $p\in \O$, as different choices result in isomorphic local models). Applying Theorem \ref{righamthm} to the given Hamiltonian action on one hand and, on the other hand, to its local model around $\O$, Theorem \ref{normhamthm} follows. Hence, after the construction of this local model, it remains for us to prove Theorem \ref{righamthm}.
\subsection{The local model}\label{locmodsec}
\subsubsection{Reorganization of the zeroth-order Hamiltonian data}\label{reorghamdat}
Before constructing the local model, we rearrange the zeroth-order data (defined in the previous subsection) into a simpler form. First, due to the relations (\ref{data0}) and (\ref{data1}), the triple of $2$-forms $\Omega_\L$, $\omega_\L$ and $\omega_\O$ can be fully reconstructed from the single $2$-form $\omega_\L$. Therefore, a collection of zeroth-order Hamiltonian data can equivalently be defined as the data of:
\begin{itemize}\item a symplectic manifold $(\L,\omega_\L)$,
\item a transitive Lie groupoid $\mathcal{G}_\L\rightrightarrows \L$,
\item a transitive Lie groupoid action of $\mathcal{G}_\L$ along a map $J_\O:\O\to \L$.
\end{itemize} After the choice of a point $p\in \O$, this can be simplified further to a collection consisting of:
\begin{itemize}
\item a symplectic manifold $(\L,\omega_\L)$,
\item a Lie group $G$ (corresponding to $\mathcal{G}_x$),
\item a (right) principal $G$-bundle $P\to \L$ (corresponding to $t:s^{-1}(x)\to \L$),
\item a closed subgroup $H$ of $G$ (corresponding to $\mathcal{G}_p$).
\end{itemize}
To see this, fix a point $p\in \O$ and let $x=J_\O(p)\in \L$. Since $\mathcal{G}_\L$ is transitive, the choice of $x\in \L$ induces an isomorphism between $\mathcal{G}_\L$ and the gauge-groupoid:
\begin{equation}\label{gaugegpoidprinbunlocmod} s^{-1}(x)\times_{\mathcal{G}_x}s^{-1}(x) \rightrightarrows \L,
\end{equation} of the principal $\mathcal{G}_x$-bundle $t:s^{-1}(x)\to \L$. In particular, $\mathcal{G}_\L$ is entirely encoded by this principal bundle. Furthermore, due to transitivity the $\mathcal{G}_\L$-action along $J_\O$ is entirely determined by this principal bundle and the subgroup $\mathcal{G}_p$ of $\mathcal{G}_x$. Indeed, the map $J_\O$ can be recovered from this, for we have a commutative square:
\begin{center}
\begin{tikzcd}
s^{-1}(x)/{\mathcal{G}_p} \arrow[r]\arrow{d}[rotate=90, xshift=-0.8ex, yshift=0.7ex]{\sim} & s^{-1}(x)/{\mathcal{G}_x}\arrow{d}[rotate=90, xshift=-0.8ex, yshift=0.7ex]{\sim} \\
\O \arrow[r,"J_\O"] & \L
\end{tikzcd}
\end{center} where the left vertical map is defined by acting on $p$ and the upper horizontal map is the canonical one. Moreover, the action can be recovered as the action of the groupoid (\ref{gaugegpoidprinbunlocmod}) along the upper horizontal map, given by $[q_1,q_2]\cdot [q_2]=[q_1]$ for $q_1,q_2\in s^{-1}(x)$.
\subsubsection{Construction of the local model for the symplectic groupoid}\label{locmodconstsec}
The construction presented here is well-known. For other (more Poisson geometric) constructions of this local model, see \cite{CrMar1,Mar1}. The local model for the symplectic groupoid is built out of the zeroth-order symplectic groupoid data, encoded as above by:
\begin{itemize}
\item a symplectic manifold $(\L,\omega_\L)$,
\item a Lie group $G$,
\item a (right) principal $G$-bundle $P\to \L$.
\end{itemize}
To construct the local model, we make an auxiliary choice of a connection $1$-form $\theta\in \Omega^1(P;\mathfrak{g})$ and define:\begin{equation}\label{hattheta} \hat{\theta}\in \Omega^1(P\times \mathfrak{g}^*), \quad \hat{\theta}_{(q,\alpha)}=\langle \alpha, \theta_q\rangle. \end{equation} Then, we use the symplectic structure $\omega_\L$ on $\L$ to define:
\begin{equation}\label{omegathetalocmod} \omega_\theta=(\textrm{pr}_\L)^*\omega_\L-\d\hat{\theta}\in \Omega^2(P\times \mathfrak{g}^*),
\end{equation} where by $\textrm{pr}_\L$ we denote the composition $P\times \mathfrak{g}^*\xrightarrow{\textrm{pr}_1} P\to \L$. The $2$-form $\omega_\theta$ is closed, non-degenerate at all points of $P\times \{0\}$, and $(P\times \mathfrak{g}^*,\omega_\theta)\to \mathfrak{g}^*$ is a (right) pre-symplectic Hamiltonian $G$-space. Therefore, the open subset $\Sigma_\theta\subset P\times \mathfrak{g}^*$ on which $\omega_\theta$ is non-degenerate is a $G$-invariant neighbourhood of $P\times \{0\}$. Since the action is free and proper, the symplectic form $\omega_\theta$ descends to a Poisson structure $\pi_\theta$ on the open neighbourhood $M_\theta$ of the zero-section $\L$, defined as:
\begin{equation*} M_\theta:=\Sigma_\theta/G\subset P\times_G\mathfrak{g}^*.
\end{equation*} This is the base of the local model. For the construction of the integrating symplectic groupoid, notice first that the pair groupoid: \begin{equation}\label{symppairlocmod} \left(\Sigma_\theta\times \Sigma_\theta,\omega_\theta\oplus -\omega_\theta\right)
\end{equation} is a symplectic groupoid and, furthermore, it is a (right) free and proper Hamiltonian $G$-space (being a product of two such spaces). Therefore, the symplectic form $\omega_\theta\oplus-\omega_\theta$ descends to the symplectic reduced space at $0\in \mathfrak{g}^*$: \begin{equation}\label{locmodsymgp} (\mathcal{G}_\theta,\Omega_\theta):=\left( (\Sigma_\theta\times\Sigma_\theta)\sslash G,\Omega_\textrm{red}\right).
\end{equation} The pair groupoid structure on $\Sigma_\theta\times \Sigma_\theta$ descends to a Lie groupoid structure on (\ref{locmodsymgp}), making it a symplectic groupoid integrating $(M_\theta,\pi_\theta)$. This is the symplectic groupoid in the local model. It is canonically a realization of the given zeroth-order symplectic groupoid data: the gauge-groupoid of the principal $G$-bundle $P\to \L$ (corresponding to (\ref{gaugegpoidprinbunlocmod})) embeds into $(\ref{locmodsymgp})$ via the zero-section.
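As a sanity check, consider the case in which $\L$ is a point and $P=G$, with $\theta$ the left-invariant Maurer-Cartan form. Under the left trivialization $T^*G\cong G\times \mathfrak{g}^*$, the $1$-form $\hat{\theta}$ of (\ref{hattheta}) is the canonical $1$-form $\lambda_{\textrm{can}}$, so $\omega_\theta=-\d\lambda_{\textrm{can}}$ is non-degenerate everywhere. Hence $\Sigma_\theta=T^*G$, $M_\theta=\mathfrak{g}^*$ and (up to the standard identifications, which we do not spell out here) the construction returns the cotangent groupoid of Example \ref{liealgintex}:
\begin{equation*} (\mathcal{G}_\theta,\Omega_\theta)=\left((T^*G\times T^*G)\sslash G,\Omega_{\textrm{red}}\right)\cong (T^*G,-\d\lambda_{\textrm{can}}).
\end{equation*}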
\subsubsection{Construction of the local model for Hamiltonian actions}\label{hamlocmodconsec} The construction below generalizes that in \cite{Ma1,GS4}. The local model is built out of a zeroth-order Hamiltonian data and a symplectic representation of an isotropy group of the action, encoded as in Subsection \ref{reorghamdat} by:
\begin{itemize}
\item a symplectic manifold $(\L,\omega_\L)$,
\item a Lie group $G$,
\item a (right) principal $G$-bundle $P\to \L$,
\item a closed subgroup $H$ of $G$,
\item a symplectic $H$-representation $(V,\omega_V)$.
\end{itemize} Choose an auxiliary connection $1$-form $\theta\in \Omega^1(P;\mathfrak{g})$ and define $\omega_\theta$, $\Sigma_\theta$ and $M_\theta$ as in the construction of the local model for symplectic groupoids. To construct a Hamiltonian action of the symplectic groupoid (\ref{locmodsymgp}), consider the product of the Hamiltonian $H$-spaces:
\begin{equation*} \textrm{pr}_{\mathfrak{h}^*}:(\Sigma_\theta,\omega_\theta)\xrightarrow{\textrm{pr}_{\mathfrak{g}^*}}\mathfrak{g}^*\to \mathfrak{h}^*\quad \& \quad J_V:(V,\omega_V)\to \mathfrak{h}^*,
\end{equation*}
where $J_V$ is as in (\ref{quadsympmommap}). This is another (right) Hamiltonian $H$-space:
\begin{equation*} J_H:(\Sigma_\theta\times V, {\omega}_\theta\oplus\omega_V)\to \mathfrak{h}^*,\quad (q,\alpha,v)\mapsto \alpha\vert_\mathfrak{h}-J_V(v),
\end{equation*}
where the action is the diagonal one, which is free and proper. The symplectic manifold in the local model is the reduced space at $0\in \mathfrak{h}^*$:
\begin{equation}\label{locmodham1} (S_\theta,\omega_{S_\theta}):=\left((\Sigma_\theta\times V)\sslash H,\omega_{\textrm{red}}\right).
\end{equation}
To equip this with a Hamiltonian action of (\ref{locmodsymgp}), observe that, on the other hand, the symplectic pair groupoid (\ref{symppairlocmod}) acts along: \begin{equation*} \text{pr}_{\Sigma_\theta}:(\Sigma_\theta\times V, {\omega}_\theta\oplus\omega_V)\to \Sigma_\theta\end{equation*} in a Hamiltonian fashion as: $(\sigma,\tau)\cdot(\tau,v)=(\sigma,v)$ for $\sigma,\tau\in \Sigma_\theta$ and $v\in V$. This descends to a Hamiltonian action of (\ref{locmodsymgp}) that fits into a diagram of commuting Hamiltonian actions:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$\left(\mathcal{G}_\theta,\Omega_\theta\right)$};
\node (M1) at (0,-1.3) {$M_\theta$};
\node (S) at (3.3,0) {$(\Sigma_\theta\times V, {\omega}_\theta\oplus\omega_V)$};
\node (M2) at (6.6,-1.3) {$\mathfrak{h}^*$};
\node (G2) at (6.6,0) {$(T^*H,-\d \lambda_{\textrm{can}})$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }\text{ }\text{ }\textrm{pr}_{M_\theta}$} (M1);
\draw[->] (1.75,-0.15) arc (315:30:0.25cm);
\draw[<-] (4.8,0.15) arc (145:-145:0.25cm);
\draw[->](S) to node[pos=0.25, below] {$J_H$\text{}} (M2);
\end{tikzpicture}
\end{center} with the property that the momentum map of each one is invariant under the action of the other. It therefore follows that the left-hand action descends to a Hamiltonian action along the map:
\begin{equation}\label{locmodham2} J_\theta:\left(S_\theta,\omega_{S_\theta}\right)\to M_\theta, \quad [\sigma,v]\mapsto [\sigma].
\end{equation} This is the Hamiltonian action in the local model. It is canonically a realization of the given zeroth-order Hamiltonian data: as in the previous subsection the gauge-groupoid of the principal $G$-bundle $P\to \L$ embeds into $(\ref{locmodsymgp})$ via the zero-section and similarly $P/H$ embeds into (\ref{locmodham1}). This completes the construction of the local model. Finally, given the starting data in Proposition \ref{existlocmodprop}, one readily verifies that the symplectic normal representation at $p$ of the resulting Hamiltonian action of (\ref{locmodsymgp}) along (\ref{locmodham2}) is isomorphic to $(V,\omega_V)$ as symplectic $\mathcal{G}_p$-representation. So, this also completes the proof of Proposition \ref{existlocmodprop}.
\begin{rem}\label{splitremlocmod} Under the assumption that the short exact sequence:
\begin{equation}\label{ses2poisabs} 0\to \mathfrak{h}^0\to \mathfrak{g}^*\to \mathfrak{h}^*\to 0
\end{equation} splits $H$-equivariantly (which holds if $H$ is compact), the local model can be put in the more familiar form of a vector bundle over $\O$. Indeed,
let $\mathfrak{p}:\mathfrak{h}^*\to \mathfrak{g}^*$ be such a splitting. Then we have an open embedding:
\begin{equation}\label{locmodvecbun} S_\theta\to P\times_H(\mathfrak{h}^0\oplus V), \quad [p,\alpha,v]\mapsto [p,\alpha-\mathfrak{p}(J_V(v)),v],
\end{equation} onto an open neighbourhood of the zero-section, which identifies the momentum map (\ref{locmodham2}) with the restriction to this open neighbourhood of the map:
\begin{equation}\label{mommaplocmod2} P\times_H(\mathfrak{h}^0\oplus V)\to P\times_G\mathfrak{g}^*,\quad [p,\alpha,v]\mapsto [p,\alpha+\mathfrak{p}(J_V(v))].
\end{equation}
To identify the action accordingly, observe that, as Lie groupoid, $(\ref{locmodsymgp})$ embeds canonically onto an open subgroupoid of:
\begin{equation}\label{simpmodgp} (P\times P)\times_{G}\mathfrak{g}^*\rightrightarrows P\times_G\mathfrak{g}^*,
\end{equation} which inherits its Lie groupoid structure from the submersion groupoid of $\textrm{pr}_{\mathfrak{g}^*}:P\times \mathfrak{g}^*\to \mathfrak{g}^*$, being a quotient of it. This identifies the action of (\ref{locmodsymgp}) along (\ref{mommaplocmod2}) with (a restriction of) the action of (\ref{simpmodgp}) along (\ref{mommaplocmod2}), given by:
\begin{equation*} [p_1,p_2,\alpha+\mathfrak{p}(J_V(v))]\cdot [p_2,\alpha,v]=[p_1,\alpha,v], \quad p_1,p_2\in P,\quad \alpha\in \mathfrak{h}^0,\quad v\in V.
\end{equation*}
\end{rem}
\subsubsection{Relation to the Marle-Guillemin-Sternberg model}\label{MGSrelsec} Let $G$ be a Lie group and consider a Hamiltonian $G$-space $J:(S,\omega)\to \mathfrak{g}^*$. As remarked in Example \ref{exhamGsp}, this is the same as a Hamiltonian action of the cotangent groupoid $(G\ltimes \mathfrak{g}^*,-\d \lambda_{\textrm{can}})\rightrightarrows \mathfrak{g}^*$ along $J$. Let $p\in S$, $\alpha=J(p)$ and suppose that $G\ltimes \mathfrak{g}^*$ is proper at $\alpha$ (in the sense of Definition \ref{propatxdefi}). In this case, our local model around the orbit $\O$ through $p$ is equivalent to the local model in the Marle-Guillemin-Sternberg (MGS) normal form theorem for Hamiltonian $G$-spaces (recalled below). To see this, first note that, since the isotropy group $G_\alpha$ is compact, the short exact sequence of $G_\alpha$-representations:
\begin{equation}\label{splitseqcotangentgpoid} 0\to \mathfrak{g}_\alpha^0\to \mathfrak{g}^*\to \mathfrak{g}_\alpha^*\to 0
\end{equation} is split. Let $\sigma:\mathfrak{g}_\alpha^*\to \mathfrak{g}^*$ be a $G_\alpha$-equivariant splitting of (\ref{splitseqcotangentgpoid}) and consider the connection one-form $\theta\in \Omega^1(G;\mathfrak{g}_\alpha)$ on $G$ (viewed as right principal $G_\alpha$-bundle) obtained by composing the left-invariant Maurer-Cartan form on $G$ with $\sigma^*:\mathfrak{g}\to \mathfrak{g}_\alpha$. The leaf $\L$ through $\alpha$ is a coadjoint orbit and $\omega_{\L}$ is the KKS-symplectic form, which is invariant under the coadjoint action. Therefore, the $2$-form $\omega_\theta\in \Omega^2(G\times \mathfrak{g}^*_\alpha)$, defined as in (\ref{omegathetalocmod}), is not only invariant under the right diagonal action of $G_\alpha$, but it also invariant under the left action of $G$ by left translation on the first factor. This implies that the open $\Sigma_\theta$ on which $\omega_\theta$ is non-degenerate is of the form $G\times W$ for a $G_\alpha$-invariant open $W$ around the origin in $\mathfrak{g}_\alpha^*$. The local model for the cotangent groupoid around $\L$ becomes: \begin{equation*}
(G\ltimes (G\times_{G_\alpha} W), \Omega_\theta)\rightrightarrows G\times_{G_\alpha}W,
\end{equation*} the groupoid associated to the action of $G$ by left translation on the first factor. To compare this to the cotangent groupoid itself, consider the $G$-equivariant map:
\begin{equation*}\label{lincoadform} \phi:G\times_{G_\alpha}W\to \mathfrak{g}^*, \quad [g,\beta]\mapsto g\cdot(\alpha+\sigma(\beta)).
\end{equation*} Since $G\ltimes \mathfrak{g}^*$ is proper at $\alpha$, we can shrink $W$ so that $\phi$ becomes an embedding onto a $G$-invariant open neighbourhood of $\L$. Then $\phi$ lifts canonically to an isomorphism of symplectic groupoids:
\begin{equation}\label{isolocmodmgsmod} (G\ltimes (G\times_{G_\alpha}W), \Omega_\theta)\xrightarrow{\sim} (G\ltimes \mathfrak{g}^*,-\d\lambda_{\textrm{can}})\vert_{\phi(G\times_{G_\alpha}W)},
\end{equation} and this is a neighbourhood equivalence around $G\ltimes \L$ (with respect to the canonical embeddings). Our local model for $(S,\omega)$ around $\O$ coincides with that appearing in the MGS normal form theorem, and via (\ref{isolocmodmgsmod}) the Hamiltonian action in our local model is identified with the Hamiltonian $G$-space in the MGS local model. In particular the momentum map (\ref{mommaplocmod2}) is identified with:
\begin{equation*} J_{\textrm{MGS}}:G\times_{G_p}(\mathfrak{g}_p^0\oplus \S\mathcal{N}_p)\to \mathfrak{g}^*, \quad [g,\beta,v]\mapsto g\cdot\left(\alpha+\sigma\left(\beta+\mathfrak{p}\left(J_{\S\mathcal{N}_p}\left(v\right)\right)\right)\right).
\end{equation*}
\begin{rem}\label{sharphamliegpactrem} As will be clear from the proof of Theorem \ref{righamthm}, the conclusion of Theorem \ref{normhamthm} can be sharpened for Hamiltonian Lie group actions: if we start with a Hamiltonian $G$-space, then under the assumptions of Theorem \ref{normhamthm} we can in fact find a neighbourhood equivalence in which the isomorphism of symplectic groupoids is the explicit isomorphism (\ref{isolocmodmgsmod}). In particular, this neighbourhood equivalence is defined on $G$-invariant neighbourhoods of $\O$ in $S$ and $\L$ in $\mathfrak{g}^*$.
\end{rem}
\subsection{The proof}\label{normformpfsubsec}
\subsubsection{Morita equivalence of groupoids}
To prove Theorem \ref{righamthm} (and hence Theorem \ref{normhamthm}), we will reduce to the case where $\O\subset J_X^{-1}(0)$ is an orbit of a Hamiltonian $G$-space $J_X:(X,\omega_X)\to \mathfrak{g}^*$ (with $G$ a compact Lie group), to which we can apply the Marle-Guillemin-Sternberg theorem. The idea of such a reduction is by no means new; in fact, it appears already in the work of Guillemin and Sternberg. To do so, we use the fact that Morita equivalent symplectic groupoids have equivalent categories of modules. In preparation for this, we will now first recall the definition, some useful properties and examples of Morita equivalence.
\begin{defi} Let $\mathcal{G}_1\rightrightarrows M_1$ and $\mathcal{G}_2\rightrightarrows M_2$ be Lie groupoids. A \textbf{Morita equivalence} from $\mathcal{G}_1$ to $\mathcal{G}_2$ is a principal $(\mathcal{G}_1,\mathcal{G}_2)$-bi-bundle $(P,\alpha_1,\alpha_2)$. This consists of:
\begin{itemize} \item A manifold $P$ with two surjective submersions $\alpha_i:P\to M_i$.
\item A left action of $\mathcal{G}_1$ along $\alpha_1$ that makes $\alpha_2$ into a principal $\mathcal{G}_1$-bundle.
\item A right action of $\mathcal{G}_2$ along $\alpha_2$ that makes $\alpha_1$ into a principal $\mathcal{G}_2$-bundle.
\end{itemize} Furthermore, the two actions are required to commute. We depict this as:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$\mathcal{G}_1$};
\node (M1) at (0,-1.3) {$M_1$};
\node (S) at (1.4,0) {$P$};
\node (M2) at (2.7,-1.3) {$M_2$};
\node (G2) at (2.7,0) {$\mathcal{G}_2$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }\alpha_1$} (M1);
\draw[->] (0.8,-0.15) arc (315:30:0.25cm);
\draw[<-] (1.9,0.15) arc (145:-145:0.25cm);
\draw[->](S) to node[pos=0.25, below] {$\alpha_2$\text{ }} (M2);
\end{tikzpicture}
\end{center}
For every leaf $\L_1\subset M_1$, there is a unique leaf $\L_2\subset M_2$ such that $\alpha_1^{-1}(\L_1)=\alpha_2^{-1}(\L_2)$; such leaves $\L_1$ and $\L_2$ are called \textbf{$P$-related}. When $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ are symplectic groupoids, then a \textbf{symplectic Morita equivalence} from $(\mathcal{G}_1,\Omega_1)$ to $(\mathcal{G}_2,\Omega_2)$ is a Morita equivalence with the extra requirement that $(P,\omega_P)$ is a symplectic manifold and both actions are Hamiltonian.
\end{defi}
Morita equivalence is an equivalence relation that, heuristically speaking, captures the geometry transverse to the leaves. The simplest motivation for this principle is the following basic result.
\begin{prop}\label{transgeomgpoid} Let $(P,\alpha_1,\alpha_2)$ be a Morita equivalence from $\mathcal{G}_1\rightrightarrows M_1$ to $\mathcal{G}_2\rightrightarrows M_2$.
\begin{enumerate} \item[a)] The map \begin{equation}\label{leafsphommoreq} h_P:\underline{M}_1\to \underline{M}_2, \quad \L_1\mapsto \alpha_2(\alpha_1^{-1}(\L_1))\end{equation} that sends a leaf $\L_1$ of $\mathcal{G}_1$ to the unique $P$-related leaf of $\mathcal{G}_2$ is a homeomorphism.
\item[b)] Suppose that $x_1\in M_1$ and $x_2\in M_2$ belong to $P$-related leaves and let $p\in P$ such that $\alpha_1(p)=x_1$ and $\alpha_2(p)=x_2$. Then the map:
\begin{equation}\label{isotgpisomoreq} \Phi_p:(\mathcal{G}_1)_{x_1}\to (\mathcal{G}_2)_{x_2}
\end{equation} defined by the relation:
\begin{equation*} g\cdot p=p\cdot \Phi_p(g), \quad g\in (\mathcal{G}_1)_{x_1},
\end{equation*} is an isomorphism of Lie groups. Furthermore, the map: \begin{equation}\label{isotrepisomoreq} \phi_p:\mathcal{N}_{x_1}\to \mathcal{N}_{x_2},\quad [v]\mapsto [\d\alpha_2(\hat{v})],
\end{equation}
where $\hat{v}\in T_pP$ is any tangent vector such that $\d\alpha_1(\hat{v})=v$, is a compatible isomorphism between the normal representations at $x_1$ and $x_2$.
\end{enumerate}
\end{prop}
\begin{ex}\label{idmoreqex} Any Lie groupoid $\mathcal{G}\rightrightarrows M$ is Morita equivalent to itself via the canonical bi-bundle $(\mathcal{G},t,s)$. The same goes for symplectic groupoids. Another simple example: any transitive Lie groupoid is Morita equivalent to a Lie group (viewed as a groupoid over the one-point space); as a particular case of this, the pair groupoid of a manifold is Morita equivalent to the unit groupoid of the one-point space.
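Explicitly, in the transitive case one can take, for any $x\in M$, the source fiber $s^{-1}(x)$ as a bi-bundle between $\mathcal{G}$ and the isotropy group $\mathcal{G}_x$, with $\alpha_1=t:s^{-1}(x)\to M$, with $\alpha_2$ the map to the one-point space, and with both actions given by composition of arrows:
\begin{equation*} g\cdot h:=gh, \quad h\cdot k:=hk, \qquad g\in \mathcal{G},\text{ } h\in s^{-1}(x),\text{ } k\in \mathcal{G}_x,
\end{equation*} whenever the compositions are defined. Principality of the right action is precisely the statement that (\ref{sfibprinbun}) is a principal $\mathcal{G}_x$-bundle.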
\end{ex}
\begin{ex}\label{crucexmoreq2} Morita equivalences can be restricted to opens. Indeed, let $(P,\alpha_1,\alpha_2)$ be a Morita equivalence between $\mathcal{G}_1\rightrightarrows M_1$ and $\mathcal{G}_2\rightrightarrows M_2$, and let $V_1$ be an open in $M_1$. Then $V_2:=\alpha_2(\alpha_1^{-1}(V_1))$ is an invariant open in $M_2$ and $(\alpha_1^{-1}(V_1),\alpha_1,\alpha_2)$ is a Morita equivalence between $\mathcal{G}_1\vert_{V_1}$ and $\mathcal{G}_2\vert_{V_2}$. In particular, given a Lie groupoid $\mathcal{G}\rightrightarrows M$ and an open $V\subset M$, letting $\widehat{V}:=s(t^{-1}(V))$ denote the saturation of $V$ (the smallest invariant open containing $V$), the first Morita equivalence in Example \ref{idmoreqex} restricts to one between $\mathcal{G}\vert_V$ and $\mathcal{G}\vert_{\widehat{V}}$. The same goes for symplectic Morita equivalences.
\end{ex}
\begin{ex}\label{crucexmoreq}
The following example plays a crucial role in our proof of Theorem \ref{righamthm}. Consider the set-up of Subsection \ref{locmodconstsec}. There is a canonical symplectic Morita equivalence:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$\left(\mathcal{G}_\theta,\Omega_\theta\right)$};
\node (M1) at (0,-1.3) {$M_\theta$};
\node (S) at (3.3,0) {$(\Sigma_\theta,\omega_\theta)$};
\node (M2) at (6.6,-1.3) {$W_\theta$};
\node (G2) at (6.6,0) {$(G\ltimes \mathfrak{g}^*,-\d \lambda_{\textrm{can}})\vert_{W_\theta}$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](S) to node[pos=0.25, below] {$\quad\quad\textrm{pr}_{M_\theta}$} (M1);
\draw[->] (2.25,-0.15) arc (315:30:0.25cm);
\draw[<-] (4.35,0.15) arc (145:-145:0.25cm);
\draw[->](S) to node[pos=0.25, below] {$\textrm{pr}_{\mathfrak{g}^*}$\text{ }} (M2);
\end{tikzpicture}
\end{center}
between (\ref{locmodsymgp}) and the restriction of the cotangent groupoid to the $G$-invariant open $W_\theta:=\textrm{pr}_{\mathfrak{g}^*}(\Sigma_\theta)$ around the origin in $\mathfrak{g}^*$. This relates the central leaf $\L$ in $M_\theta$ to the origin in $\mathfrak{g}^*$.
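Indeed, the preimages of these two leaves coincide:
\begin{equation*} (\textrm{pr}_{\mathfrak{g}^*})^{-1}(0)=P\times\{0\}=(\textrm{pr}_{M_\theta})^{-1}(\L)\subset \Sigma_\theta,
\end{equation*} so that $\L$ and the origin are $P$-related in the sense defined above.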
\end{ex}
\subsubsection{Equivalence between categories of modules} Next, we recall how a Morita equivalence induces an equivalence between the categories of modules. Given a Lie groupoid $\mathcal{G}\rightrightarrows M$, by a \textbf{$\mathcal{G}$-module} we simply mean a smooth map $J:S\to M$ equipped with a left action of $\mathcal{G}$. A morphism from a $\mathcal{G}$-module $J_1:S_1\to M$ to $J_2:S_2\to M$ is a smooth map $\phi:S_1\to S_2$ that intertwines $J_1$ and $J_2$ and is $\mathcal{G}$-equivariant. This defines a category $\textsf{Mod}(\mathcal{G})$.
\begin{ex}\label{restrinvopmodex} Let $\mathcal{G}\rightrightarrows M$ be a Lie groupoid and let $W$ be an invariant open in $M$. Consider the full subcategory $\textsf{Mod}_W(\mathcal{G})$ of $\textsf{Mod}(\mathcal{G})$ consisting of those $\mathcal{G}$-modules $J:S\to M$ with the property that $J(S)\subset W$. There is a canonical equivalence of categories between $\textsf{Mod}_W(\mathcal{G})$ and $\textsf{Mod}(\mathcal{G}\vert_W)$.
\end{ex}
\begin{ex}\label{eqmapmod} Let $G$ be a Lie group and $M$ a left $G$-space. Consider the category $\textsf{Hom}_G(-,M)$ of smooth $G$-equivariant maps from left $G$-spaces into $M$. A morphism between two such maps $J_1:S_1\to M$ and $J_2:S_2\to M$ is a smooth $G$-equivariant map $\phi:S_1\to S_2$ that intertwines $J_1$ and $J_2$. There is a canonical equivalence of categories between $\textsf{Hom}_G(-,M)$ and $\textsf{Mod}(G\ltimes M)$.
\end{ex}
We now recall:
\begin{thm}\label{moreqasrep} A Morita equivalence $(P,\alpha_1,\alpha_2)$ between two Lie groupoids $\mathcal{G}_1$ and $\mathcal{G}_2$ induces an equivalence of categories between $\textsf{Mod}(\mathcal{G}_1)$ and $\textsf{Mod}(\mathcal{G}_2)$, explicitly given by (\ref{moreqfunct}).
\end{thm}
\begin{proof} To any $\mathcal{G}_1$-module $J:S\to M_1$ we can associate a $\mathcal{G}_2$-module, as follows. The Lie groupoid $\mathcal{G}_1$ acts diagonally on the manifold $P\times_{M_1} S$ along the map $\alpha_1\circ \textrm{pr}_1$, in a free and proper way. Hence, the quotient:
\begin{equation*} P\ast_{\mathcal{G}_1} S:=\frac{(P\times_{M_1} S)}{\mathcal{G}_1}
\end{equation*} is smooth. Moreover, since the actions of $\mathcal{G}_1$ and $\mathcal{G}_2$ commute and $\alpha_1$ is $\mathcal{G}_2$-invariant, we have a left action of $\mathcal{G}_2$ along:
\begin{equation}\label{asmod} P_*(J):P\ast_{\mathcal{G}_1} S\to M_2, \quad [p_P,p_S]\mapsto \alpha_2(p_P),
\end{equation} given by:
\begin{equation*} g\cdot [p_P,p_S]=[p_P\cdot g^{-1},p_S].
\end{equation*} We call this the $\mathcal{G}_2$-module associated to the $\mathcal{G}_1$-module $J$. For any morphism of $\mathcal{G}_1$-modules there is a canonical morphism between the associated $\mathcal{G}_2$-modules. So, this defines a functor:
\begin{align}\label{moreqfunct} \textsf{Mod}(\mathcal{G}_1)&\to\textsf{Mod}(\mathcal{G}_2) \\
(J:S\to M_1)&\mapsto (P_*(J):P\ast_{\mathcal{G}_1}S\to M_2). \nonumber
\end{align} An analogous construction from right to left gives an inverse to this functor.
\end{proof}
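As a consistency check: for the canonical self-equivalence $(\mathcal{G},t,s)$ of Example \ref{idmoreqex}, the functor (\ref{moreqfunct}) is naturally isomorphic to the identity functor, via the isomorphisms of $\mathcal{G}$-modules:
\begin{equation*} \mathcal{G}\ast_{\mathcal{G}}S\xrightarrow{\sim}S, \quad [g,p_S]\mapsto g^{-1}\cdot p_S.
\end{equation*}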
Next, we recall the analogue for symplectic groupoids. Given a symplectic groupoid $(\mathcal{G},\Omega)\rightrightarrows M$, by a \textbf{Hamiltonian $(\mathcal{G},\Omega)$-space} (called \textbf{symplectic left $(\mathcal{G},\Omega)$-module} in \cite{Xu}) we mean a smooth map $J:(S,\omega)\to M$ equipped with a Hamiltonian $(\mathcal{G},\Omega)$-action. A morphism $\phi$ from $J_1:(S_1,\omega_1)\to M$ to $J_2:(S_2,\omega_2)\to M$ is a morphism of $\mathcal{G}$-modules satisfying $\phi^*\omega_2=\omega_1$. This defines a category $\textsf{Ham}(\mathcal{G},\Omega)$.
\begin{ex}\label{restrinvopsympmodex} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and let $W$ be an invariant open in $M$. The equivalence in Example \ref{restrinvopmodex} restricts to an equivalence between the category $\textsf{Ham}_W(\mathcal{G},\Omega)$, consisting of Hamiltonian $(\mathcal{G},\Omega)$-spaces with the property that $J(S)\subset W$, and $\textsf{Ham}((\mathcal{G},\Omega)\vert_W)$.
\end{ex}
\begin{ex}\label{eqcatmodHamGex} Let $G$ be a Lie group and consider the category $\textsf{Ham}(G)$ of left Hamiltonian $G$-spaces. Here, a morphism between Hamiltonian $G$-spaces $J_1:(S_1,\omega_1)\to \mathfrak{g}^*$ and $J_2:(S_2,\omega_2)\to \mathfrak{g}^*$ is a $G$-equivariant map $\phi:S_1\to S_2$ that intertwines $J_1$ and $J_2$ and satisfies $\phi^*\omega_2=\omega_1$. The equivalence in Example \ref{eqmapmod} restricts to one between $\textsf{Ham}(G)$ and $\textsf{Ham}(G\ltimes \mathfrak{g}^*,-\d\lambda_{\textrm{can}})$. This refines the statement in Example \ref{exhamGsp}.
\end{ex}
\begin{thm}[\cite{Xu}]\label{sympmoreqasrep} A symplectic Morita equivalence $(P,\omega_P,\alpha_1,\alpha_2)$ between two symplectic groupoids $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ induces an equivalence of categories between $\textsf{Ham}(\mathcal{G}_1,\Omega_1)$ and $\textsf{Ham}(\mathcal{G}_2,\Omega_2)$, explicitly given by (\ref{sympmoreqfunct}).
\end{thm}
\begin{proof} Let $(P,\omega_P,\alpha_1,\alpha_2)$ be a symplectic Morita equivalence between symplectic groupoids $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ and let $J:(S,\omega_S)\to M_1$ be a Hamiltonian $(\mathcal{G}_1,\Omega_1)$-space. The symplectic form $(-\omega_P)\oplus \omega_S$ descends to a symplectic form $\omega_{PS}$ on $P\ast_{\mathcal{G}_1}S$ and the $(\mathcal{G}_2,\Omega_2)$-action along the associated module $P_*(J)$, as in (\ref{asmod}), becomes Hamiltonian. As before, this extends to a functor:
\begin{align}\label{sympmoreqfunct} \textsf{Ham}(\mathcal{G}_1,\Omega_1)&\to\textsf{Ham}(\mathcal{G}_2,\Omega_2) \\
(J:(S,\omega_S)\to M_1)&\mapsto (P_*(J):(P\ast_{\mathcal{G}_1}S,\omega_{PS})\to M_2) \nonumber
\end{align} and an analogous construction from right to left gives an inverse functor.
\end{proof}
\subsubsection{Proof of rigidity}
The proof of Theorem \ref{righamthm} hinges on the following two known results. The first is a rigidity theorem for symplectic groupoids.
\begin{thm}[\cite{CrFeTo1}]\label{normformsymgp} Suppose that we are given two realizations of the same zeroth-order symplectic groupoid data with leaf $\L$. Fix $x\in \L$. If both symplectic groupoids are proper at $x$ (in the sense of Definition \ref{propatxdefi}), then the realizations are neighbourhood-equivalent.
\end{thm}
\begin{rem} The assumption appearing in \cite[Thm 8.2]{CrFeTo1} is that $\mathcal{G}$ is proper, which is stronger than properness at $x$. However, if $\mathcal{G}$ is proper at $x$, then there is an open $U$ around the leaf $\L$ through $x$ such that $\mathcal{G}\vert_U$ is proper (see e.g. \cite[Remark 5.1.4]{Hoy}).
\end{rem}
The second result that we will need is the following rigidity theorem for Hamiltonian $G$-spaces.
\begin{thm}[\cite{Ma,GS4}]\label{mgsrigzerofib} Let $G$ be a compact Lie group and let $J_1:(S_1,\omega_1)\to \mathfrak{g}^*$ and $J_2:(S_2,\omega_2)\to \mathfrak{g}^*$ be Hamiltonian $G$-spaces. Suppose that $p_1\in J_1^{-1}(0)$ and $p_2\in J_2^{-1}(0)$ are such that $G_{p_1}=G_{p_2}$. Then there are $G$-invariant neighbourhoods $U_1$ of $p_1$ and $U_2$ of $p_2$, together with an isomorphism of Hamiltonian $G$-spaces that sends $p_1$ to $p_2$:
\begin{center}
\begin{tikzcd} (U_1,\omega_1,p_1)\arrow[rr,"\sim"]\arrow[dr,"J_1"'] & & (U_2,\omega_2,p_2)\arrow[dl, "J_2"] \\
& \mathfrak{g}^* &
\end{tikzcd}
\end{center} if and only if there is an equivariant symplectic linear isomorphism:
\begin{equation*} (\S\mathcal{N}_{p_1},\omega_{p_1})\cong (\S\mathcal{N}_{p_2},\omega_{p_2}).
\end{equation*}
\end{thm}
The main step in the proof of Theorem \ref{righamthm} is the following generalization of Theorem \ref{mgsrigzerofib}.
\begin{thm}\label{loceqthmhamact} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid that is proper at $x\in M$. Suppose that we are given two Hamiltonian $(\mathcal{G},\Omega)$-spaces $J_1:(S_1,\omega_1)\to M$ and $J_2:(S_2,\omega_2)\to M$. Let $p_1\in S_1$ and $p_2\in S_2$ be such that $J_1(p_1)=J_2(p_2)=x$ and $\mathcal{G}_{p_1}=\mathcal{G}_{p_2}$. Then there are $\mathcal{G}$-invariant open neighbourhoods $U_1$ of $p_1$ and $U_2$ of $p_2$, together with an isomorphism of Hamiltonian $(\mathcal{G},\Omega)$-spaces that sends $p_1$ to $p_2$:
\begin{center}
\begin{tikzcd} (U_1,\omega_1,p_1) \arrow[rr,"\sim"] \arrow[dr, "J_1"'] & & (U_2,\omega_2,p_2) \arrow[dl,"J_2"] \\
& M &
\end{tikzcd}
\end{center} if and only if there is an equivariant symplectic linear isomorphism:
\begin{equation*} (\S\mathcal{N}_{p_1},\omega_{p_1})\cong (\S\mathcal{N}_{p_2},\omega_{p_2}).
\end{equation*}
\end{thm}
To prove this we further use the lemma below.
\begin{lemma}\label{assrepprop} Let $(P,\omega_P,\alpha_1,\alpha_2)$ be a symplectic Morita equivalence between $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$. Further, let $J:(S,\omega_S)\to M$ be a Hamiltonian $(\mathcal{G}_1,\Omega_1)$-space, let $p_S\in S$ and fix a $p_P\in P$ such that $\alpha_1(p_P)=J(p_S)$. Then the isomorphism (\ref{isotgpisomoreq}) restricts to an isomorphism:
\begin{equation*} \Phi_{p_P}:\mathcal{G}_{p_S}\xrightarrow{\sim} \mathcal{G}_{[p_P,p_S]},
\end{equation*}
and there is a compatible symplectic linear isomorphism:
\begin{equation*} \left(\S\mathcal{N}_{p_S},(\omega_S)_{p_S}\right)\cong\left(\S\mathcal{N}_{[p_P,p_S]},(\omega_{PS})_{[p_P,p_S]}\right)
\end{equation*} between the symplectic normal representation at $p_S$ of the Hamiltonian $(\mathcal{G}_1,\Omega_1)$-space $J$ and the symplectic normal representation at $[p_P,p_S]$ of the associated Hamiltonian $(\mathcal{G}_2,\Omega_2)$-space $P_*(J)$ of Theorem \ref{sympmoreqasrep}.
\end{lemma}
Although this lemma can be verified directly, we postpone its proof to Subsection \ref{bashammorinv}, where we give a more conceptual explanation. With this at hand, we can prove the desired theorems.
\begin{proof}[Proof of Theorem \ref{loceqthmhamact}] The forward implication is straightforward. Let us prove the backward implication. Throughout, let $G:=\mathcal{G}_x$ denote the isotropy group of $\mathcal{G}$ at $x$. To begin with, observe that, since $\mathcal{G}$ is proper at $x$, there is an invariant open neighbourhood $V$ of the leaf $\L$ through $x$ and a $G$-invariant open neighbourhood $W$ of the origin in $\mathfrak{g}^*$, together with a symplectic Morita equivalence:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G},\Omega)\vert_V$};
\node (M1) at (0,-1.3) {$V$};
\node (S) at (3.3,0) {$(P,\omega_P)$};
\node (M2) at (6.6,-1.3) {$W$};
\node (G2) at (6.6,0) {$(G\ltimes \mathfrak{g}^*,-\d \lambda_{\textrm{can}})\vert_W$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](S) to node[pos=0.25, below] {$\quad\quad\alpha_1$} (M1);
\draw[->] (2.25,-0.15) arc (315:30:0.25cm);
\draw[<-] (4.35,0.15) arc (145:-145:0.25cm);
\draw[->](S) to node[pos=0.25, below] {$\alpha_2$\text{ }} (M2);
\end{tikzpicture}
\end{center}
that relates the leaf $\L$ to the origin in $\mathfrak{g}^*$. Indeed, this follows by first applying Theorem \ref{normformsymgp} to:
\begin{itemize}\item the zeroth-order data of $(\mathcal{G},\Omega)$ at $\L$,
\item the canonical realization $(\mathcal{G},\Omega)$,
\item the realization (\ref{locmodsymgp}),
\end{itemize} and then combining the neighbourhood-equivalence of symplectic groupoids obtained thereby with Examples \ref{crucexmoreq2} and \ref{crucexmoreq}. Since $V$ is $\mathcal{G}$-invariant, so are $J_1^{-1}(V)$ and $J_2^{-1}(V)$ and we can consider the Hamiltonian $(\mathcal{G},\Omega)$-spaces:
\begin{equation}\label{restmodloceqthmhamact} {J_1^V}:(J_1^{-1}(V),\omega_1)\to M \quad \&\quad {J_2^V}:(J_2^{-1}(V),\omega_2)\to M
\end{equation} obtained by restricting the given Hamiltonian $(\mathcal{G},\Omega)$-spaces $J_1$ and $J_2$. By Theorem \ref{sympmoreqasrep}, combined with Examples \ref{restrinvopsympmodex} and \ref{eqcatmodHamGex}, the above Morita equivalence induces an equivalence of categories (with explicit inverse) between the category of Hamiltonian $(\mathcal{G},\Omega)$-spaces $J:(S,\omega)\to M$ with $J(S)\subset V$ and the category of Hamiltonian $G$-spaces $J:(S,\omega)\to \mathfrak{g}^*$ with $J(S)\subset W$. Consider the Hamiltonian $G$-spaces associated to $(\ref{restmodloceqthmhamact})$:
\begin{equation*} P_*(J_1^V):(P\ast_{(\mathcal{G}\vert_V)} J_1^{-1}(V),\omega_{PS_1})\to \mathfrak{g}^* \quad \& \quad P_*(J_2^V):(P\ast_{(\mathcal{G}\vert_V)} J_2^{-1}(V),\omega_{PS_2})\to \mathfrak{g}^*,
\end{equation*} and fix a $p\in P$ such that $\alpha_1(p)=x$. We will show that these Hamiltonian $G$-spaces satisfy the assumptions of Theorem \ref{mgsrigzerofib} for the points $[p,p_1]$ and $[p,p_2]$. First of all, since the leaf $\L$ is $P$-related to the origin in $\mathfrak{g}^*$, it must be that $\alpha_2(p)=0$. Therefore, we find:
\begin{equation*} P_*(J_1^V)([p,p_1])=\alpha_2(p)=0 \quad \& \quad P_*(J_2^V)([p,p_2])=\alpha_2(p)=0.
\end{equation*} Second, Lemma \ref{assrepprop} implies that $G_{[p,p_1]}=G_{[p,p_2]}$, as both coincide with the image of $\mathcal{G}_{p_1}=\mathcal{G}_{p_2}$ under $\Phi_p:\mathcal{G}_x\to G$. Third, by the same lemma, there are symplectic linear isomorphisms:
\begin{equation*} \psi_1:\left(\S\mathcal{N}_{p_1},(\omega_1)_{p_1}\right)\xrightarrow{\sim} \left(\S\mathcal{N}_{[p,p_1]},(\omega_{PS_1})_{[p,p_1]}\right)\quad \& \quad \psi_2:\left(\S\mathcal{N}_{p_2},(\omega_2)_{p_2}\right)\xrightarrow{\sim} \left(\S\mathcal{N}_{[p,p_2]},(\omega_{PS_2})_{[p,p_2]}\right),
\end{equation*} that are both compatible with the isomorphism of Lie groups:
\begin{equation*} \mathcal{G}_{p_1}\xrightarrow{\Phi_p} G_{[p,p_1]}=\mathcal{G}_{p_2}\xrightarrow{\Phi_p} G_{[p,p_2]}.
\end{equation*}
By assumption, there is an equivariant symplectic linear isomorphism:
\begin{equation*} \psi:(\S\mathcal{N}_{p_1},\omega_{p_1})\xrightarrow{\sim} (\S\mathcal{N}_{p_2},\omega_{p_2}).
\end{equation*} Altogether, the composition:
\begin{equation*} \psi_2\circ \psi\circ \psi_1^{-1}: \left(\S\mathcal{N}_{[p,p_1]},(\omega_{PS_1})_{[p,p_1]}\right)\xrightarrow{\sim} \left(\S\mathcal{N}_{[p,p_2]},(\omega_{PS_2})_{[p,p_2]}\right)
\end{equation*} becomes an equivariant symplectic linear isomorphism. So, the assumptions of Theorem \ref{mgsrigzerofib} hold, which implies that there are $G$-invariant opens $U_{[p,p_1]}$ around $[p,p_1]$ and $U_{[p,p_2]}$ around $[p,p_2]$, together with an isomorphism of Hamiltonian $G$-spaces that sends $[p,p_1]$ to $[p,p_2]$:
\begin{center}
\begin{tikzcd} \left(U_{[p,p_1]},\omega_{PS_1},[p,p_1]\right)\arrow[rr,"\sim"]\arrow[dr,"P_*(J_1^V)"'] & & \left(U_{[p,p_2]},\omega_{PS_2},[p,p_2]\right)\arrow[dl, "P_*(J_2^V)"] \\
& \mathfrak{g}^* &
\end{tikzcd}
\end{center} One readily verifies that, by passing back through the above equivalence of categories via the explicit inverse functor, we obtain $\mathcal{G}$-invariant opens $U_1$ around $p_1$ and $U_2$ around $p_2$, together with an isomorphism of Hamiltonian $(\mathcal{G},\Omega)$-spaces from $J_1:(U_1,\omega_1)\to M$ to $J_2:(U_2,\omega_2)\to M$ that sends $p_1$ to $p_2$, as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{righamthm}] As in the previous proof, the forward implication is straightforward. For the backward implication, let $(i_1,j_1)$ and $(i_2,j_2)$ be two realizations of the same zeroth-order Hamiltonian data (with notation as in Definition \ref{nhoodeq1defi}). Let $p\in \O$ and $x=J_\O(p)$ and suppose that their symplectic normal representations at $p$ are isomorphic as symplectic $\mathcal{G}_p$-representations. By Theorem \ref{normformsymgp} there are respective opens $V_1$ and $V_2$ around $\L$ in $M_1$ and $M_2$, together with an isomorphism:
\begin{equation*} \Phi:(\mathcal{G}_1,\Omega_1)\vert_{V_1}\xrightarrow{\sim}(\mathcal{G}_2,\Omega_2)\vert_{V_2}
\end{equation*}
that intertwines $i_1$ with $i_2$. Consider, on one hand, the Hamiltonian $(\mathcal{G}_1,\Omega_1)\vert_{V_1}$-space obtained from the given Hamiltonian $(\mathcal{G}_1,\Omega_1)$-space $J_1$ by restriction to $V_1$ and, on the other hand, the Hamiltonian $(\mathcal{G}_1,\Omega_1)\vert_{V_1}$-space $\Phi^*(J_2)$ obtained from the given Hamiltonian $(\mathcal{G}_2,\Omega_2)$-space $J_2$ by restriction to $V_2$ and pullback along $\Phi$. These two Hamiltonian $(\mathcal{G}_1,\Omega_1)\vert_{V_1}$-spaces meet the assumptions of Theorem \ref{loceqthmhamact} at the points $j_1(p)$ and $j_2(p)$. So, there are $(\mathcal{G}_1\vert_{V_1})$-invariant opens $U_1\subset J_1^{-1}(V_1)$ and $U_2\subset J_2^{-1}(V_2)$, together with an isomorphism of Hamiltonian $(\mathcal{G}_1,\Omega_1)\vert_{V_1}$-spaces that sends $j_1(p)$ to $j_2(p)$:
\begin{center}
\begin{tikzcd} (U_1,\omega_1,j_1(p)) \arrow[rr,"\Psi"] \arrow[dr, "J_1"'] & & (U_2,\omega_2,j_2(p)) \arrow[dl,"\Phi^*(J_2)"] \\
& V_1 &
\end{tikzcd}
\end{center} As one readily verifies, the pair $(\Phi,\Psi)$ is the desired neighbourhood equivalence.
\end{proof}
\subsection{The transverse part of the local model}\label{translocmodsec}
\subsubsection{Hamiltonian Morita equivalence} In order to define a notion of Morita equivalence between Hamiltonian actions, we first consider a natural equivalence relation between Lie groupoid maps (resp. groupoid maps of Hamiltonian type, defined below). In the next subsection we explain how this restricts to an equivalence relation between Lie groupoid actions (resp. Hamiltonian actions).
\begin{defi}\label{moreqdefLie} Let $\mathcal{J}_1:\H_1\to \mathcal{G}_1$ and $\mathcal{J}_2:\H_2\to \mathcal{G}_2$ be maps of Lie groupoids. By a \textbf{Morita equivalence} from $\mathcal{J}_1$ to $\mathcal{J}_2$ we mean the data consisting of:
\begin{itemize}\item a Morita equivalence $(P,\alpha_1,\alpha_2)$ from $\mathcal{G}_1$ to $\mathcal{G}_2$,
\item a Morita equivalence $(Q,\beta_1,\beta_2)$ from $\H_1$ to $\H_2$,
\item a smooth map $j:Q\to P$ that intertwines $J_i\circ\beta_i$ with $\alpha_i$ and that intertwines the $\H_i$-action with the $\mathcal{G}_i$-action via $\mathcal{J}_i$, for both $i=1,2$.
\end{itemize} We depict this as:
\begin{center}
\begin{tikzpicture}
\node (H1) at (-0.8,0) {$\H_1$};
\node (S1) at (-0.8,-1.3) {$S_1$};
\node (Q) at (1.35,0) {$Q$};
\node (S2) at (3.5,-1.3) {$S_2$};
\node (H2) at (3.5,0) {$\H_2$};
\node (G1) at (0,-3) {$\mathcal{G}_1$};
\node (M1) at (0,-4.3) {$M_1$};
\node (P) at (1.35,-3) {$P$};
\node (M2) at (2.7,-4.3) {$M_2$};
\node (G2) at (2.7,-3) {$\mathcal{G}_2$};
\draw[->, bend right=50](H1) to node[pos=0.45,below] {$\mathcal{J}_1\text{ }\text{ }\text{ }\text{ }$} (G1);
\draw[->, bend right=20](S1) to node[pos=0.45,below] {$J_1\text{ }\text{ }\text{ }\text{ }$} (M1);
\draw[->, bend left=50](H2) to node[pos=0.45,below] {$\text{ }\text{ }\text{ }\text{ }\mathcal{J}_2$} (G2);
\draw[->, bend left=20](S2) to node[pos=0.45,below] {$\text{ }\text{ }\text{ }\text{ }J_2$} (M2);
\draw[->](Q) to node[pos=0.45,below] {$j\text{ }\text{ }\text{ }\text{ }$} (P);
\draw[->,transform canvas={xshift=-\shift}](H1) to node[midway,left] {}(S1);
\draw[->,transform canvas={xshift=\shift}](H1) to node[midway,right] {}(S1);
\draw[->,transform canvas={xshift=-\shift}](H2) to node[midway,left] {}(S2);
\draw[->,transform canvas={xshift=\shift}](H2) to node[midway,right] {}(S2);
\draw[->](Q) to node[pos=0.25, below] {$\text{ }\text{ }\beta_1$} (S1);
\draw[->] (0.3,-0.15) arc (315:30:0.25cm);
\draw[<-] (2.4,0.15) arc (145:-145:0.25cm);
\draw[->](Q) to node[pos=0.25, below] {$\beta_2$\text{ }} (S2);
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](P) to node[pos=0.25, below] {$\text{ }\text{ }\alpha_1$} (M1);
\draw[->] (0.8,-3.15) arc (315:30:0.25cm);
\draw[<-] (1.9,-2.85) arc (145:-145:0.25cm);
\draw[->](P) to node[pos=0.25, below] {$\alpha_2$\text{ }} (M2);
\end{tikzpicture}
\end{center}
\end{defi}
As an analogue of this in the Hamiltonian setting, we propose the following definitions (more motivation for which will be given in the coming subsections).
\begin{defi}\label{gpoidmaphamtyp} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a symplectic groupoid and let $\H\rightrightarrows (S,\omega)$ be a Lie groupoid over a pre-symplectic manifold. We call a Lie groupoid map $\mathcal{J}:\H\to \mathcal{G}$ of \textbf{Hamiltonian type} if:
\begin{equation*} \mathcal{J}^*\Omega=(t_\H)^*\omega-(s_\H)^*\omega.
\end{equation*}
\end{defi}
\begin{defi}\label{moreqdefHam} Let $\mathcal{J}_1:\H_1\to \mathcal{G}_1$ and $\mathcal{J}_2:\H_2\to \mathcal{G}_2$ be of Hamiltonian type. By a \textbf{Hamiltonian Morita equivalence} from $\mathcal{J}_1$ to $\mathcal{J}_2$ we mean: a Morita equivalence (in the sense of Definition \ref{moreqdefLie}) with the extra requirement that $(P,\omega_P,\alpha_1,\alpha_2)$ is a symplectic Morita equivalence and that:
\begin{equation}\label{hameqmor1} j^*\omega_P=(\beta_1)^*\omega_1-(\beta_2)^*\omega_2.
\end{equation}
\end{defi}
The same type of arguments as for Morita equivalence of Lie and symplectic groupoids (see \cite{Xu}) show that Hamiltonian Morita equivalence indeed defines an equivalence relation.
\subsubsection{Morita equivalence between groupoid maps of action type} To see that the equivalence relation(s) in the previous subsection induce an equivalence relation between Lie groupoid actions (resp. Hamiltonian actions), the key remark is that a left action of a Lie groupoid $\mathcal{G}$ along a map $J:S\to M$ gives rise to a map of Lie groupoids covering $J$:
\begin{equation}\label{asslgpoidmapact} \textrm{pr}_\mathcal{G}:\mathcal{G}\ltimes S\to \mathcal{G}.
\end{equation}
Further, notice that the groupoid map (\ref{asslgpoidmapact}) is of Hamiltonian type precisely when the action is Hamiltonian (that is, when (\ref{hammultcond}) holds).
\begin{defi}\label{moreqdefHamacttyp} By a \textbf{Morita equivalence between (left) Lie groupoid actions} we mean a Morita equivalence between their associated Lie groupoid maps (\ref{asslgpoidmapact}). Similarly, by a \textbf{Morita equivalence between (left) Hamiltonian actions} we mean a Hamiltonian Morita equivalence between their associated groupoid maps (\ref{asslgpoidmapact}).
\end{defi}
In the remainder of this subsection, we further unravel what it means for two Hamiltonian actions to be Morita equivalent. The starting point for this is the following example, which concerns the modules appearing in Theorems \ref{moreqasrep} and \ref{sympmoreqasrep}.
\begin{ex}\label{asrepmoreq} Let $\mathcal{G}_1\rightrightarrows M_1$ be a Lie groupoid acting along $J:S\to M_1$ and suppose that we are given a Morita equivalence $(P,\alpha_1,\alpha_2)$ from $\mathcal{G}_1$ to another Lie groupoid $\mathcal{G}_2\rightrightarrows M_2$. Consider the associated $\mathcal{G}_2$-action along $P_*(J):P\ast_{\mathcal{G}_1}S\to M_2$. The Morita equivalence from $\mathcal{G}_1$ to $\mathcal{G}_2$ extends to a canonical Morita equivalence between these two actions:
\begin{center}
\begin{tikzpicture}
\node (H1) at (-1,0) {$\mathcal{G}_1\ltimes S$};
\node (S1) at (-1,-1.3) {$S$};
\node (Q) at (1.35,0) {$P\times_{M_1} S$};
\node (S2) at (3.9,-1.3) {$P\ast_{\mathcal{G}_1}S$};
\node (H2) at (3.9,0) {$\mathcal{G}_2\ltimes \left(P\ast_{\mathcal{G}_1}S\right)$};
\node (G1) at (0,-3) {$\mathcal{G}_1$};
\node (M1) at (0,-4.3) {$M_1$};
\node (P) at (1.35,-3) {$P$};
\node (M2) at (2.7,-4.3) {$M_2$};
\node (G2) at (2.7,-3) {$\mathcal{G}_2$};
\draw[->, bend right=50](H1) to node[pos=0.45,below] {$\textrm{pr}_{\mathcal{G}_1}\quad\quad$} (G1);
\draw[->, bend right=20](S1) to node[pos=0.45,below] {$J\text{ }\text{ }\text{ }\text{ }$} (M1);
\draw[->, bend left=50](H2) to node[pos=0.45,below] {$\quad\quad\quad\textrm{pr}_{\mathcal{G}_2}$} (G2);
\draw[->, bend left=20](S2) to node[pos=0.45,below] {$\quad\quad\text{ }\text{ }P_*(J)$} (M2);
\draw[->](Q) to node[pos=0.45,below] {$\textrm{pr}_P\quad\quad$} (P);
\draw[->,transform canvas={xshift=-\shift}](H1) to node[midway,left] {}(S1);
\draw[->,transform canvas={xshift=\shift}](H1) to node[midway,right] {}(S1);
\draw[->,transform canvas={xshift=-\shift}](H2) to node[midway,left] {}(S2);
\draw[->,transform canvas={xshift=\shift}](H2) to node[midway,right] {}(S2);
\draw[->](Q) to node[pos=0.25, below] {$\quad\text{ }\textrm{pr}_{S}$} (S1);
\draw[->] (0.5,-0.15) arc (315:30:0.25cm);
\draw[<-] (2.2,0.15) arc (145:-145:0.25cm);
\draw[->](Q) to node[pos=0.25, below] {$\textrm{pr}_{PS}$\text{ }} (S2);
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](P) to node[pos=0.25, below] {$\text{ }\text{ }\alpha_1$} (M1);
\draw[->] (0.8,-3.15) arc (315:30:0.25cm);
\draw[<-] (1.9,-2.85) arc (145:-145:0.25cm);
\draw[->](P) to node[pos=0.25, below] {$\alpha_2$\text{ }} (M2);
\end{tikzpicture}
\end{center}
Here the upper left action is induced by the diagonal $\mathcal{G}_1$-action, whereas the upper right action is induced by the $\mathcal{G}_2$-action on the first factor. If $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ are symplectic groupoids, the action along $J:(S,\omega_S)\to M_1$ is Hamiltonian and the Morita equivalence $(P,\omega_P,\alpha_1,\alpha_2)$ is symplectic, then the associated $(\mathcal{G}_2,\Omega_2)$-action along $P_*(J):(P\ast_{\mathcal{G}_1}S,\omega_{PS})\to M_2$ is Hamiltonian. In this case, the above Morita equivalence is Hamiltonian.
\end{ex}
In fact, we will show that more is true:
\begin{prop}\label{moreqactgpoids} Every Morita equivalence between two Lie groupoid maps that are both of action type is of the form of Example \ref{asrepmoreq}. The same holds for Hamiltonian Morita equivalence.
\end{prop}
Here, for convenience, we used the following terminology.
\begin{defi} Let $\mathcal{J}:\H\to \mathcal{G}$ be a map of Lie groupoids covering $J:S\to M$. We say that $\mathcal{J}$ is of \textbf{action type} if there is a smooth left action of $\mathcal{G}$ along $J$ and an isomorphism of Lie groupoids from $\mathcal{G}\ltimes S$ to $\H$ that covers the identity on $S$ and makes the diagram:
\begin{center}
\begin{tikzcd}
\mathcal{G}\ltimes S\arrow[rr,"\sim"]\arrow[dr,"\textrm{pr}_\mathcal{G}"'] & & \H \arrow[ld,"\mathcal{J}"] \\
& \mathcal{G} &
\end{tikzcd}
\end{center}
commute.
\end{defi}
This has the following more insightful characterization.
\begin{prop}\label{acttypcharprop} A Lie groupoid map $\mathcal{J}:\H\to\mathcal{G}$ is of action type if and only if for every $p\in S$ the map $\mathcal{J}$ restricts to a diffeomorphism from the source-fiber of $\H$ over $p$ onto that of $\mathcal{G}$ over $J(p)$. \end{prop}
This is readily verified. To prove Proposition \ref{moreqactgpoids} we use the closely related lemma below, the proof of which is also left to the reader.
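For instance, for the groupoid map (\ref{asslgpoidmapact}) associated to an action of $\mathcal{G}$ along $J:S\to M$, the source-fiber of $\mathcal{G}\ltimes S$ over $p\in S$ is:
\begin{equation*} s_{\mathcal{G}\ltimes S}^{-1}(p)=\left\{(g,p)\mid g\in s_\mathcal{G}^{-1}(J(p))\right\},
\end{equation*} which $\textrm{pr}_\mathcal{G}$ maps diffeomorphically onto $s_\mathcal{G}^{-1}(J(p))$; this makes the forward implication of Proposition \ref{acttypcharprop} explicit for action groupoids.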
\begin{lemma}\label{acttypeprop} Let $\mathcal{J}_1:\H_1\to \mathcal{G}_1$ and $\mathcal{J}_2:\H_2\to \mathcal{G}_2$ be maps of Lie groupoids and let a Morita equivalence between them (denoted as in Definition \ref{moreqdefLie}) be given. Let $q\in Q$, and denote $p=j(q)$, $p_i=\beta_i(q)$ and $x_i=J_i(p_i)$ for $i=1,2$. Then we have a commutative square:
\begin{center}
\begin{tikzcd} \beta_2^{-1}(p_2)\arrow[r,"j"] & \alpha_2^{-1}(x_2)\\
s_{\H_1}^{-1}(p_1)\arrow[r,"\mathcal{J}_1"] \arrow[u,"m_q"] & s^{-1}_{\mathcal{G}_1}(x_1) \arrow[u,"m_p"]
\end{tikzcd}
\end{center}
in which all vertical arrows are diffeomorphisms. In particular, $\mathcal{J}_1$ is of action type if and only if $j$ restricts to a diffeomorphism between the $\beta_2$- and $\alpha_2$-fibers. Analogous statements hold for $\mathcal{J}_2$, replacing $\alpha_2$ and $\beta_2$ by $\alpha_1$ and $\beta_1$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{moreqactgpoids}] Suppose that we are given Lie groupoids $\mathcal{G}_1\rightrightarrows M_1$, together with a $\mathcal{G}_1$-module $J_1:S_1\to M_1$, a Lie groupoid $\mathcal{G}_2\rightrightarrows M_2$ with a $\mathcal{G}_2$-module $J_2:S_2\to M_2$, and a Morita equivalence between the associated Lie groupoid maps (\ref{asslgpoidmapact}), denoted as in Definition \ref{moreqdefLie}. It follows from Lemma \ref{acttypeprop} that the map:
\begin{equation}\label{isomoreqactgpoidspf} (j,\beta_1):Q\to P\times_{M_1}S_1
\end{equation} is a diffeomorphism. The diagonal action of $\mathcal{G}_1$ along $\alpha_1\circ \textrm{pr}_P:P\times_{M_1}S_1\to M_1$ induces an action of $\mathcal{G}_1\ltimes S_1$ along $\textrm{pr}_{S_1}:P\times_{M_1}S_1\to S_1$, which is the upper left action in Example \ref{asrepmoreq}. The diffeomorphism $(\ref{isomoreqactgpoidspf})$ intertwines $\beta_1$ with $\text{pr}_{S_1}$ and is equivariant with respect to this action. In particular, by principality of the $\mathcal{G}_1\ltimes S_1$-action, there is an induced diffeomorphism:
\begin{equation}\label{isomoreqactgpoidspf2} S_2\xrightarrow{\sim} P\ast_{\mathcal{G}_1}S_1.
\end{equation} One readily verifies that, when identifying $Q$ with $P\times_{M_1}S_1$ via (\ref{isomoreqactgpoidspf}) and $S_2$ with $P\ast_{\mathcal{G}_1}S_1$ via (\ref{isomoreqactgpoidspf2}), the given Morita equivalence is identified with that in Example \ref{asrepmoreq}. Furthermore, when $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ are symplectic groupoids, $(S_1,\omega_1)$ and $(S_2,\omega_2)$ are symplectic manifolds, the given actions along $J_1$ and $J_2$ are Hamiltonian and the Morita equivalence between $(\mathcal{G}_1,\Omega_1)$ and $(\mathcal{G}_2,\Omega_2)$ is symplectic, then one readily verifies that (\ref{isomoreqactgpoidspf2}) is a symplectomorphism from $(S_2,\omega_2)$ to $(P\ast_{\mathcal{G}_1}S_1,\omega_{PS_1})$ if and only if the relation (\ref{hameqmor1}) is satisfied. This proves the proposition.
\end{proof}
\subsubsection{The transverse local model}
In this paper we will mainly be interested in Hamiltonian Morita equivalences between Hamiltonian actions, rather than between the more general groupoid maps of Hamiltonian type (as in Definition \ref{gpoidmaphamtyp}). There is, however, one important exception to this:
\begin{ex}\label{locmodmoreq} This example gives a Hamiltonian Morita equivalence between the local model for Hamiltonian actions and a groupoid map $\mathcal{J}_\mathfrak{p}$ that is built out of less data and is often easier to work with. The use of this Morita equivalence makes many of the proofs in Section \ref{canhamstratsec} both simpler and more conceptual. Let $(\mathcal{G}_\theta,\Omega_\theta)$ be the symplectic groupoid (\ref{locmodsymgp}) and let $J_\theta:(S_\theta,\omega_{S_\theta})\to M_\theta$ be the Hamiltonian $(\mathcal{G}_\theta,\Omega_\theta)$-space (\ref{locmodham2}). The Morita equivalence of Example \ref{crucexmoreq} extends to a Hamiltonian Morita equivalence between the action along $J_\theta$ and a groupoid map of Hamiltonian type from $H\ltimes (\mathfrak{h}^0\oplus V)$ to $G\ltimes \mathfrak{g}^*$ (restricted to appropriate opens). To see this, let $\mathfrak{p}:\mathfrak{h}^*\to \mathfrak{g}^*$ be an $H$-equivariant splitting of (\ref{ses2poisabs}). Consider the $H$-equivariant map: \begin{equation}\label{slicemommap} J_\mathfrak{p}:\mathfrak{h}^0\oplus V\to \mathfrak{g}^*, \quad (\alpha,v)\mapsto \alpha+\mathfrak{p}(J_V(v)),
\end{equation} where $J_V:V\to \mathfrak{h}^*$ is the quadratic momentum map (\ref{quadsympmommap}). By $H$-equivariance, this lifts to a groupoid map:
\begin{equation}\label{bigJ_p} \mathcal{J}_\mathfrak{p}:H\ltimes (\mathfrak{h}^0\oplus V)\to G\ltimes \mathfrak{g}^*,\quad (h,\alpha,v)\mapsto (h,J_\mathfrak{p}(\alpha,v)).
\end{equation} This groupoid map is not of action type, but it is of Hamiltonian type with respect to the pre-symplectic form $0\oplus \omega_V$ on $\mathfrak{h}^0\oplus V$ and there is a canonical Hamiltonian Morita equivalence:
\begin{center}
\begin{tikzpicture}
\node (H1) at (-0.85,0) {$\mathcal{G}_\theta\ltimes S_\theta$};
\node (S1) at (-0.85,-1.3) {$(S_\theta,\omega_{S_\theta})$};
\node (Q) at (3.3,0) {$\Sigma_\theta \tensor[_{\textrm{pr}_{\mathfrak{h}^*}}]{\times}{_{J_V}} V$};
\node (S2) at (7.45,-1.3) {$(U_\theta,0\oplus \omega_V)$};
\node (H2) at (7.45,0) {$H\ltimes (\mathfrak{h}^0\oplus V)\vert_{U_\theta}$};
\node (G1) at (0,-3) {$(\mathcal{G}_\theta,\Omega_\theta)$};
\node (M1) at (0,-4.3) {$M_\theta$};
\node (P) at (3.3,-3) {$(\Sigma_\theta,\omega_\theta)$};
\node (M2) at (6.6,-4.3) {$W_\theta$};
\node (G2) at (6.6,-3) {$(G\ltimes \mathfrak{g}^*,-\d \lambda_{\textrm{can}})\vert_{W_\theta}$};
\draw[->, bend right=80](H1) to node[pos=0.55,below] {$\textrm{pr}_{\mathcal{G}_\theta}\quad\quad$} (G1);
\draw[->, bend right=20](S1) to node[pos=0.15,below] {$J_\theta\text{ }\text{ }\text{ }\text{ }$} (M1);
\draw[->, bend left=80](H2) to node[pos=0.55,below] {$\quad\quad\mathcal{J}_\mathfrak{p}$} (G2);
\draw[->, bend left=20](S2) to node[pos=0.15,below] {$\text{ }\text{ }\text{ }\text{ }J_\mathfrak{p}$} (M2);
\draw[->](Q) to node[pos=0.45,below] {$\textrm{pr}_{\Sigma_\theta}\quad\quad\quad$} (P);
\draw[->,transform canvas={xshift=-\shift}](H1) to node[midway,left] {}(S1);
\draw[->,transform canvas={xshift=\shift}](H1) to node[midway,right] {}(S1);
\draw[->,transform canvas={xshift=-\shift}](H2) to node[midway,left] {}(S2);
\draw[->,transform canvas={xshift=\shift}](H2) to node[midway,right] {}(S2);
\draw[->](Q) to node[pos=0.25, below] {$\text{ }\text{ }\textrm{pr}_{S_\theta}$} (S1);
\draw[->] (1.25,-0.15) arc (315:30:0.25cm);
\draw[<-] (5.35,0.15) arc (145:-145:0.25cm);
\draw[->](Q) to node[pos=0.25, below] {$\beta_\mathfrak{p}$\text{ }} (S2);
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](P) to node[pos=0.35, below] {$\quad\quad\textrm{pr}_{M_\theta}$} (M1);
\draw[->] (2.25,-3.15) arc (315:30:0.25cm);
\draw[<-] (4.35,-2.85) arc (145:-145:0.25cm);
\draw[->](P) to node[pos=0.25, below] {$\textrm{pr}_{\mathfrak{g}^*}$\quad\quad} (M2);
\end{tikzpicture}
\end{center}
that relates the central orbit in $S_\theta$ to the origin in $\mathfrak{h}^0\oplus V$. Here $W_\theta:=\textrm{pr}_{\mathfrak{g}^*}(\Sigma_\theta)$ and $U_\theta:=J_\mathfrak{p}^{-1}(W_\theta)$ are invariant open neighbourhoods of the respective origins in $\mathfrak{g}^*$ and $\mathfrak{h}^0\oplus V$. Furthermore, the map $\beta_\mathfrak{p}$ is defined as:
\begin{equation*}\beta_\mathfrak{p}: \Sigma_\theta \tensor[_{\textrm{pr}_{\mathfrak{h}^*}}]{\times}{_{J_V}} V\to U_\theta, \quad (p,\alpha,v)\mapsto (\alpha-\mathfrak{p}(J_V(v)),v).
\end{equation*} With this in mind, we think of the groupoid map $\mathcal{J}_\mathfrak{p}$ as a local model for the ``transverse part'' of a Hamiltonian action near a given orbit.
\end{ex}
\subsubsection{Elementary Morita invariants}\label{bashammorinv}
As will be apparent in the rest of this paper, many invariants for Morita equivalence between Lie groupoids have analogues for Morita equivalence between Hamiltonian actions \textemdash in fact, the canonical Hamiltonian stratification can be thought of as an analogue of the canonical stratification on the leaf space of a proper Lie groupoid. In this subsection we give analogues of Proposition \ref{transgeomgpoid}. We start with a version for Lie groupoid maps.
\begin{prop}\label{transgeommap} Let $\mathcal{J}_1:\H_1\to \mathcal{G}_1$ and $\mathcal{J}_2:\H_2\to \mathcal{G}_2$ be maps of Lie groupoids and let a Morita equivalence between them (denoted as in Definition \ref{moreqdefLie}) be given.
\begin{itemize}
\item[a)] The induced homeomorphisms between the orbit and leaf spaces (\ref{leafsphommoreq}) intertwine the maps induced by $J_1$ and $J_2$. That is, we have a commutative square:
\begin{center}
\begin{tikzcd}
\underline{S}_1\arrow[d,"\underline{J}_1"] \arrow[r,"h_Q"] & \underline{S}_2 \arrow[d,"\underline{J}_2"]\\
\underline{M}_1 \arrow[r, "h_P"] & \underline{M}_2
\end{tikzcd}
\end{center}
\end{itemize}
Further, suppose that $p_1\in S_1$ and $p_2\in S_2$ belong to $Q$-related orbits and let $q\in Q$ be such that $\beta_1(q)=p_1$ and $\beta_2(q)=p_2$. Let $p=j(q)$, $x_1=J_1(p_1)$ and $x_2=J_2(p_2)$.
\begin{itemize}
\item[b)] The induced isomorphisms of isotropy groups (\ref{isotgpisomoreq}) intertwine the maps induced by $\mathcal{J}_1$ and $\mathcal{J}_2$. That is, we have a commutative square:
\begin{center}
\begin{tikzcd}
(\H_1)_{p_1} \arrow[d,"\mathcal{J}_1"] \arrow[r, "\Phi_q"] & (\H_2)_{p_2} \arrow[d,"\mathcal{J}_2"]\\
(\mathcal{G}_1)_{x_1} \arrow[r,"\Phi_p"] & (\mathcal{G}_2)_{x_2}
\end{tikzcd}
\end{center}
\item[c)] The induced isomorphisms of normal representations (\ref{isotrepisomoreq}) intertwine the maps induced by $J_1$ and $J_2$. That is, we have a commutative square:
\begin{center}
\begin{tikzcd}
\mathcal{N}_{p_1}\arrow[d,"\underline{\d J}_1"]\arrow[r,"\phi_q"] & \mathcal{N}_{p_2}\arrow[d,"\underline{\d J}_2"]\\
\mathcal{N}_{x_1} \arrow[r,"\phi_p"] & \mathcal{N}_{x_2}
\end{tikzcd}
\end{center}
\end{itemize}
\end{prop}
The proof is straightforward.
\begin{ex} The Morita equivalence in Example \ref{locmodmoreq} induces an identification (of maps of topological spaces) between the transverse momentum map $\underline{J_\theta}$ and (a restriction of) the map:
\begin{equation*} \underline{J_\mathfrak{p}}:(\mathfrak{h}^0\oplus V)/H\to \mathfrak{g}^*/G.
\end{equation*}
\end{ex}
We now turn to Morita equivalences between Hamiltonian actions.
\begin{prop}\label{transgeomham} Let a Hamiltonian $(\mathcal{G}_1,\Omega_1)$-action along $J_1:(S_1,\omega_1)\to M_1$, a Hamiltonian $(\mathcal{G}_2,\Omega_2)$-action along $J_2:(S_2,\omega_2)\to M_2$ and a Hamiltonian Morita equivalence between them (denoted as in Definitions \ref{moreqdefLie} and \ref{moreqdefHam}) be given. Suppose that $p_1\in S_1$ and $p_2\in S_2$ belong to $Q$-related orbits and let $q\in Q$ be such that $\beta_1(q)=p_1$ and $\beta_2(q)=p_2$. Let $p=j(q)$, $x_1=J_1(p_1)$ and $x_2=J_2(p_2)$.
\begin{itemize}\item[a)]The isomorphism $\Phi_p:(\mathcal{G}_1)_{x_1}\xrightarrow{\sim}(\mathcal{G}_2)_{x_2}$ restricts to an isomorphism:
\begin{equation*} (\mathcal{G}_1)_{p_1}\xrightarrow{\sim}(\mathcal{G}_2)_{p_2}.
\end{equation*}
\item[b)] There is a compatible symplectic linear isomorphism:
\begin{equation*} (\S\mathcal{N}_{p_1},(\omega_1)_{p_1})\cong(\S\mathcal{N}_{p_2},(\omega_2)_{p_2})
\end{equation*} between the symplectic normal representations at $p_1$ and $p_2$.
\end{itemize}
\end{prop}
\begin{proof} Part $a$ is immediate from Proposition \ref{transgeommap}$b$. For the proof of part $b$ observe that, by Proposition \ref{transgeommap}$c$, the isomorphism $\phi_q$ restricts to one between $\ker(\underline{\d J}_1)_{p_1}$ and $\ker(\underline{\d J}_2)_{p_2}$, so that we obtain an isomorphism of representations, compatible with part $a$, and given by:
\begin{equation}\label{indisosympnormrephammoreq} \S\mathcal{N}_{p_2}\to \S\mathcal{N}_{p_1},\quad [v]\mapsto [\d \beta_1(\hat{v})],
\end{equation} where $\hat{v}\in T_qQ$ is any vector such that $\d \beta_2(\hat{v})=v$ and $\d j(\hat{v})=0$. Note here (to see that such $\hat{v}$ exists) that, given $v\in \ker(\d J_2)_{p_2}$ and $\hat{w}\in T_qQ$ such that $\d \beta_2(\hat{w})=v$, we have $\d j(\hat{w})\in \ker(\d\alpha_2)$, hence by Lemma \ref{acttypeprop} there is a $\hat{u}\in \ker(\d\beta_2)_q$ such that $\d j(\hat{u})=\d j(\hat{w})$, so that $\hat{v}:=\hat{w}-\hat{u}$ has the desired properties. With this description of (\ref{indisosympnormrephammoreq}) it is immediate from (\ref{hameqmor1}) that $(\ref{indisosympnormrephammoreq})$ pulls $(\omega_1)_{p_1}$ back to $(\omega_2)_{p_2}$, which concludes the proof.
\end{proof}
We can now give a more conceptual proof of Lemma \ref{assrepprop}.
\begin{proof}[Proof of Lemma \ref{assrepprop}] Apply Proposition \ref{transgeomham} to Example \ref{asrepmoreq}.
\end{proof}
For Hamiltonian Morita equivalences as in Example \ref{locmodmoreq} (where one of the two groupoid maps is not of action type) it is not clear to us whether there is a satisfactory generalization of Proposition \ref{transgeomham}. The arguments in the proof of that proposition do show the following, which will be enough for our purposes.
\begin{prop}\label{finremhammoreq} Let a Hamiltonian Morita equivalence (denoted as in Definitions \ref{moreqdefLie} and \ref{moreqdefHam}) between groupoid maps $\mathcal{J}_1:\H_1\to (\mathcal{G}_1,\Omega_1)$ and $\mathcal{J}_2:\H_2\to (\mathcal{G}_2,\Omega_2)$ of Hamiltonian type be given. Suppose that $p_1\in S_1$ and $p_2\in S_2$ belong to $Q$-related orbits and let $q\in Q$ be such that $\beta_1(q)=p_1$ and $\beta_2(q)=p_2$. Further, assume that $\mathcal{J}_1$ is of action type and the canonical injection:
\begin{equation*} \S\mathcal{N}_{p_2}:=\frac{\ker(\d J_2)_{p_2}}{\ker(\d J_2)_{p_2}\cap T_{p_2}\O}\hookrightarrow \ker(\underline{\d J}_2)_{p_2}
\end{equation*} is an isomorphism (note that the pre-symplectic form $\omega_2$ on the base $S_2$ of $\H_2$ is allowed to be degenerate). Then:
\begin{itemize}\item[a)] the form $\omega_2$ descends to a linear symplectic form $(\omega_2)_{p_2}$ on $\S\mathcal{N}_{p_2}$, which is invariant under the $(\H_2)_{p_2}$-action defined by declaring the isomorphism with $\ker(\underline{\d J}_2)_{p_2}$ to be equivariant,
\item[b)] there is a symplectic linear isomorphism $(\S\mathcal{N}_{p_1},\omega_{p_1})\cong (\S\mathcal{N}_{p_2},\omega_{p_2})$ that is compatible with the isomorphism of Lie groups $\Phi_q:(\H_1)_{p_1}\xrightarrow{\sim}(\H_2)_{p_2}$.
\end{itemize}
\end{prop}
\section{The canonical Hamiltonian stratification}
In this part we apply our normal form results to study stratifications on orbit spaces of Hamiltonian actions. To elaborate: in Section \ref{stratsec} we give background on Whitney stratifications of reduced differentiable spaces and we discuss the canonical Whitney stratification of the leaf space of a proper Lie groupoid. A novelty in our discussion is that we point out a criterion (Lemma \ref{whitreghomprop}) for a partition into submanifolds of a reduced differentiable space to be Whitney regular, which may be of independent interest. Furthermore, we give a similar criterion (Corollary \ref{fibwhitstratcor}) for the fibers of a map between reduced differentiable spaces to inherit a natural Whitney stratification from a constant rank partition of the map. In Section \ref{canhamstratsec} we introduce the canonical Hamiltonian stratification and prove Theorem \ref{canhamstratthm} and Theorem \ref{redspstratthm}, by verifying that the canonical Hamiltonian stratification of the orbit space and the Lerman-Sjamaar stratification of the symplectic reduced spaces meet the aforementioned criteria, using basic features of Hamiltonian Morita equivalence and the normal form theorem. In Section \ref{regpartsec} we study the regular (or principal) parts of these stratifications. There we will also consider the infinitesimal analogue of the canonical Hamiltonian stratification on $S$, because its regular part turns out to be better behaved. Section \ref{poisstratthmsec} concerns the Poisson structure on the orbit space. The main theorem of this section shows that the canonical Hamiltonian stratification is a constant rank Poisson stratification of the orbit space, and describes the symplectic leaves in terms of the fibers of the transverse momentum map. Finally, in Section \ref{sympintstratsec} we construct explicit proper integrations of the Poisson strata of the canonical Hamiltonian stratification. Section \ref{regpartsec} can be read independently of Section \ref{poisstratthmsec} and Section \ref{sympintstratsec}.
\subsection{Background on Whitney stratifications of reduced differentiable spaces}\label{stratsec}
\subsubsection{Stratifications of topological spaces}\label{stratdefsec}
In this paper, by a stratification we mean the following.
\begin{defi}\label{topstratdefi} Let $X$ be a Hausdorff, second-countable and paracompact topological space. A \textbf{stratification} of $X$ is a locally finite partition $\S$ of $X$ into smooth manifolds (called \textbf{strata}), that is required to satisfy:
\begin{itemize} \item[i)] Each stratum $\Sigma\in \S$ is a connected and locally closed topological subspace of $X$.
\item[ii)] For each $\Sigma\in \S$, the closure $\overline{\Sigma}$ in $X$ is a union of $\Sigma$ and strata of strictly smaller dimension.
\end{itemize} The second of these is called the \textbf{frontier condition}. A pair $(X,\S)$ is called a \textbf{stratified space}. By a map of stratified spaces $\phi:(X,\S_X)\to (Y,\S_Y)$ we mean a continuous map $\phi:X\to Y$ with the property that for each $\Sigma_X\in \S_X$:
\begin{itemize}\item[i)] There is a stratum $\Sigma_Y\in \S_Y$ such that $\phi(\Sigma_X)\subset \Sigma_Y$.
\item[ii)] The restriction $\phi:\Sigma_X\to \Sigma_Y$ is smooth.
\end{itemize}
\end{defi}
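As a minimal illustration of this definition, consider the following standard example.
\begin{ex} Let $X=[0,\infty)\subset \mathbb{R}$, partitioned into the strata $\Sigma_0=\{0\}$ and $\Sigma_1=(0,\infty)$. Both strata are connected and locally closed, and $\overline{\Sigma}_1=\Sigma_0\cup\Sigma_1$ with $\dim(\Sigma_0)=0<1=\dim(\Sigma_1)$, so the frontier condition holds and this partition is a stratification of $X$.
\end{ex}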
Due to the connectedness assumption on the strata, the frontier condition (a priori of a global nature) can be verified locally with the lemma below.
\begin{lemma}\label{frontcondloc} Let $X$ be a topological space and $\S$ a partition of $X$ into connected manifolds (equipped with the subspace topology). Then $\S$ satisfies the frontier condition if and only if for every $x\in X$ and every $\Sigma\in \S$ such that $x\in \overline{\Sigma}$ and $x\notin\Sigma$ the following hold (here $\Sigma_x$ denotes the member of $\S$ containing $x$):
\begin{itemize}
\item[i)] there is an open neighbourhood $U$ of $x$ such that $U\cap \Sigma_x\subset \overline{\Sigma}$,
\item[ii)] $\dim(\Sigma_x)<\dim(\Sigma)$.
\end{itemize}
\end{lemma}
\begin{rem}\label{compremstratdefi} Throughout, we will make reference to various texts that use slightly different definitions of stratifications. After restricting attention to Whitney stratifications (Definition \ref{whitstratdef}), the differences between these definitions become significantly smaller (also see Remark \ref{passtoconncomprem}). A comparison of Definition \ref{topstratdefi} with the notion of stratification in \cite{Mat,Pfl} can be found in \cite{CrMe}.
\end{rem}
The constructions of the stratifications in this paper follow a general pattern: one first defines a partition $\P$ of $X$ into manifolds (possibly disconnected, with connected components of varying, but bounded, dimension) which in a local model for $X$ have a particularly simple description. This partition $\P$ is often natural to the given geometric situation from which $X$ arises. Then, one passes to the partition $\S:=\P^\textrm{c}$ consisting of the connected components of the members of $\P$, and verifies that $\S$ is a stratification of $X$.
\begin{rem} When speaking of a manifold, we always mean that its connected components are of one and the same dimension, unless explicitly stated otherwise (such as above).
\end{rem}
\begin{ex}\label{exmortyp} The leaf space of a proper Lie groupoid admits a canonical stratification. To elaborate, let $\mathcal{G}\rightrightarrows M$ be a proper Lie groupoid, meaning that $\mathcal{G}$ is Hausdorff and the map $(t,s):\mathcal{G}\to M\times M$ is proper. This is equivalent to requiring that $\mathcal{G}$ is proper at every $x\in M$ (as in Definition \ref{propatxdefi}) and that its leaf space $\underline{M}$ is Hausdorff \cite[Proposition 5.1.3]{Hoy}. In fact, $\underline{M}$ is locally compact, second countable and Hausdorff (so, in particular it is paracompact). To define the stratifications of $M$ and $\underline{M}$, first consider the partition $\P_\mathcal{M}(M)$ of $M$ by \textbf{Morita types}. This is given by the equivalence relation: $x_1\sim_\mathcal{M} x_2$ if and only if there are invariant opens $V_1$ and $V_2$ around $\L_{x_1}$ and $\L_{x_2}$, respectively, together with a Morita equivalence:
\begin{equation*} \mathcal{G}\vert_{V_1}\simeq \mathcal{G}\vert_{V_2},
\end{equation*} that relates $\L_{x_1}$ to $\L_{x_2}$. Its members are invariant and therefore descend to a partition $\P_\mathcal{M}(\underline{M})$ of the leaf space $\underline{M}$. The partitions $\S_\textrm{Gp}(M)$ and $\S_\textrm{Gp}(\underline{M})$ obtained from $\P_\mathcal{M}(M)$ and $\P_\mathcal{M}(\underline{M})$ after passing to connected components form the so-called \textbf{canonical stratifications} of the base $M$ and the leaf space $\underline{M}$ of the Lie groupoid $\mathcal{G}$. These indeed form stratifications. This is proved in \cite{PfPoTa} and \cite{CrMe}, using the local description given by the linearization theorem for proper Lie groupoids (see \cite{We4,Zu,CrStr,FeHo}). There, the partition by Morita types is defined by declaring that $x,y\in M$ belong to the same Morita type if and only if there is an isomorphism of Lie groups:
\begin{equation*} \mathcal{G}_x\cong \mathcal{G}_y
\end{equation*}
together with a compatible linear isomorphism:
\begin{equation*} \mathcal{N}_x\cong \mathcal{N}_y
\end{equation*} between the normal representations of $\mathcal{G}$ at $x$ and $y$, as in (\ref{normreppt}). This is equivalent to the description given before, as a consequence of Proposition \ref{transgeomgpoid}$b$ and the linearization theorem.
\end{ex}
Often there are various different partitions that, after passing to connected components, induce the same stratification. This too can be checked locally, using the following lemma.
\begin{lemma}\label{passconncomp} Let $\P_1$ and $\P_2$ be partitions of a topological space $X$ into manifolds (equipped with the subspace topology) with connected components of possibly varying dimension. Then the partitions $\P_1^c$ and $\P_2^c$, obtained after passing to connected components, coincide if and only if every $x\in X$ admits an open neighbourhood $U$ in $X$ such that
\begin{equation*} P_1\cap U=P_2 \cap U,
\end{equation*} where $P_1$ and $P_2$ are the members of $\P_1$ and $\P_2$ through $x$.
\end{lemma}
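For instance, the partitions $\P_1=\{\{0\},\mathbb{R}\setminus\{0\}\}$ and $\P_2=\{\{0\},(-\infty,0),(0,\infty)\}$ of $X=\mathbb{R}$ are different, yet they satisfy the criterion of this lemma (for $x\neq 0$, take $U$ to be the connected component of $\mathbb{R}\setminus\{0\}$ containing $x$), in accordance with the fact that $\P_1^c=\P_2^c$.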
\begin{ex}\label{exisotyp} Given a proper Lie groupoid $\mathcal{G}\rightrightarrows M$, there is a coarser partition of $M$ (resp. $\underline{M}$) that yields the canonical stratification on $M$ (resp. $\underline{M}$) after passing to connected components: the partition by \textbf{isomorphism types}. On $M$, this partition is given by the equivalence relation: $x\cong y$ if and only if the isotropy groups $\mathcal{G}_x$ and $\mathcal{G}_y$ are isomorphic (as Lie groups). We denote this partition by $\P_{\cong}(M)$. Its members are invariant and therefore descend to a partition $\P_{\cong}(\underline{M})$ of the leaf space $\underline{M}$. The fact that these indeed induce the canonical stratifications $\S_\textrm{Gp}(M)$ and $\S_\textrm{Gp}(\underline{M})$ follows from Lemma \ref{passconncomp} and the linearization theorem for proper Lie groupoids.
\end{ex}
\begin{ex}\label{exorbtyp} The canonical stratification on the orbit space of a proper Lie group action is usually defined using the partition by \textbf{orbit types}. To elaborate, let $M$ be a manifold, acted upon by a Lie group $G$ in a proper fashion. The partition $\P_\sim(M)$ by orbit types is defined by the equivalence relation: $x\sim y$ if and only if the isotropy groups $G_x$ and $G_y$ are conjugate subgroups of $G$. Its members are $G$-invariant, and hence this induces a partition $\P_\sim(\underline{M})$ of the orbit space $\underline{M}:=M/G$ as well. The partitions obtained from $\P_\sim(M)$ and $\P_\sim(\underline{M})$ after passing to connected components coincide with the canonical stratifications $\S_\textrm{Gp}(M)$ and $\S_\textrm{Gp}(\underline{M})$ of the action groupoid $G\ltimes M$ (as in Example \ref{exmortyp}). Another interesting partition that induces the canonical stratifications in this way is the partition by \textbf{local types}, defined by the equivalence relation: $x\cong y$ if and only if there is a $g\in G$ such that $G_x=gG_yg^{-1}$, together with a compatible linear isomorphism $\mathcal{N}_x\cong \mathcal{N}_y$ between the normal representations at $x$ and $y$. That these partitions induce the canonical stratifications follows from Lemma \ref{passconncomp} and the tube theorem for proper Lie group actions (see e.g. \cite{DuKo}).
\end{ex}
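\begin{ex} As a concrete instance of Example \ref{exorbtyp}: for the action of $G=S^1$ on $M=\mathbb{R}^2$ by rotations, the orbit types are $\{0\}$ (with isotropy group $S^1$) and $\mathbb{R}^2\setminus\{0\}$ (with trivial isotropy groups), and the induced partition of the orbit space $\underline{M}\cong [0,\infty)$ consists of $\{0\}$ and $(0,\infty)$.
\end{ex}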
\begin{rem} The discussion above is largely a recollection of parts of \cite{CrMe}. There the reader can find most details and proofs of the claims made in this subsection. A further discussion can be found in \cite{CrFeTo2}, where the canonical stratifications are studied in the context of Poisson manifolds of compact types.
\end{rem}
\subsubsection{Reduced differentiable spaces}\label{reddiffspsec} Further interesting properties of a stratified space can be defined when the space $X$ comes equipped with the structure of a reduced differentiable space (a notion of smooth structure on $X$) and the stratification is compatible with this structure. We now recall what this means. Throughout, a sheaf will always mean a sheaf of $\mathbb{R}$-algebras.
\begin{defi} A \textbf{reduced ringed space} is a pair $(X,\O_X)$ consisting of a topological space $X$ and a subsheaf $\O_X$ of the sheaf of continuous functions $\mathcal{C}_X$ on $X$ that contains all constant functions. We refer to $\O_X$ as the \textbf{structure sheaf}. A \textbf{morphism of reduced ringed spaces}:
\begin{equation}\label{morphringsp} \phi:(X,\O_X)\to (Y,\O_Y)
\end{equation} is a continuous map $\phi:X\to Y$ with the property that for every open $U$ in $Y$ and every function $f\in \O_Y(U)$, it holds that $f\circ \phi\in \O_X(\phi^{-1}(U))$. Given such a morphism, we let
\begin{equation}\label{pullbackringmor1} \phi^*:\O_Y\to \phi_*\O_X
\end{equation} denote the induced map of sheaves over $Y$ and we use the same notation for the corresponding map of sheaves over $X$: \begin{equation}\label{pullbackringmor2}
\phi^*:\phi^*\O_Y\to \O_X.
\end{equation}
\end{defi}
\begin{ex} Let $M$ be a smooth manifold and $\mathcal{C}^\infty_M$ its sheaf of smooth functions. Then $(M,\mathcal{C}^\infty_M)$ is a reduced ringed space. A map $M\to N$ between smooth manifolds is smooth precisely when it is a morphism of reduced ringed spaces $(M,\mathcal{C}^\infty_M)\to (N,\mathcal{C}^\infty_N)$.
\end{ex}
\begin{ex}\label{smoothfunsubsp} Let $Y$ be a subspace of $\mathbb{R}^n$. We call a function defined on (an open in) $Y$ smooth if it extends to a smooth function on an open in $\mathbb{R}^n$. This gives rise to the sheaf of smooth functions $\mathcal{C}^\infty_Y$ on $Y$.
\end{ex}
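For instance, for $Y=[0,\infty)\subset\mathbb{R}$, the function $y\mapsto \sqrt{y}$ is continuous on $Y$ but not smooth in this sense, since a smooth extension to an open neighbourhood of $0$ in $\mathbb{R}$ would in particular be differentiable at $0$.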
\begin{ex}\label{smoothfunleafsp} The leaf space $\underline{M}$ of a Lie groupoid $\mathcal{G}\rightrightarrows M$ is naturally a reduced ringed space, with structure sheaf $\mathcal{C}^\infty_{\underline{M}}$ given by:
\begin{equation*} \mathcal{C}^\infty_{\underline{M}}(\underline{U})=\{f\in \mathcal{C}_{\underline{M}}(\underline{U})\mid f\circ q\in \mathcal{C}^\infty_M(q^{-1}(\underline{U}))\},
\end{equation*} where $q:M\to \underline{M}$ denotes the projection onto the leaf space. We simply refer to this as the \textbf{sheaf of smooth functions on the leaf space}. Often we implicitly identify $\mathcal{C}^\infty_{\underline{M}}$ with (the push-forward of) the sheaf of $\mathcal{G}$-invariant smooth functions on $M$, via $q^*:\mathcal{C}_{\underline{M}}\to q_*\mathcal{C}_M$.
\end{ex}
\begin{defi}[\cite{GoSa}]\label{reddifsp} A \textbf{reduced differentiable space} is a reduced ringed space $(X,\O_X)$ with the property that for every $x\in X$ there is an open neighbourhood $U$, a locally closed subspace $Y$ of $\mathbb{R}^n$ (where $n$ may depend on $x$) and a homeomorphism $\chi:U\to Y$ that induces an isomorphism of reduced ringed spaces:
\begin{equation*} (U,\O_X\vert_U)\cong (Y,\mathcal{C}^\infty_Y).
\end{equation*} We call such a homeomorphism $\chi$ a \textbf{chart} of the reduced differentiable space. A \textbf{morphism of reduced differentiable spaces} is simply a morphism of the underlying reduced ringed spaces.
\end{defi}
\begin{ex} A reduced differentiable space $(X,\O_X)$ is an $n$-dimensional smooth manifold if and only if around every $x\in X$ there is a chart for $(X,\O_X)$ that maps onto an open in $\mathbb{R}^n$.
\end{ex}
\begin{ex}\label{leafspreddiffspex} The leaf space $\underline{M}$ of a proper Lie groupoid $\mathcal{G}\rightrightarrows M$, equipped with the structure sheaf of Example \ref{smoothfunleafsp}, is a reduced differentiable space. The proof of this will be recalled at the end of this subsection.
\end{ex}
\begin{rem}\label{partofunityreddiffbsp} A reduced differentiable space $(X,\O_X)$ is locally compact. So, if it is Hausdorff and second countable, then it is also paracompact. Moreover, it then admits $\O_X$-partitions of unity subordinate to any open cover (this can be proved as for manifolds, see e.g. \cite{GoSa}).
\end{rem}
To say what it means for a stratification to be compatible with the structure of reduced differentiable space, we will need an appropriate notion of submanifold.
\begin{defi}\label{embredringdef} Let $(Y,\O_Y)$ and $(X,\O_X)$ be reduced ringed spaces and $i:Y\hookrightarrow X$ a topological embedding. We call $i$ an \textbf{embedding of reduced ringed spaces} if it is a morphism of reduced ringed spaces and $i^*:\O_X\vert_Y\to \O_Y$ is a surjective map of sheaves. In other words, $\O_Y$ coincides with the image sheaf of the map $i^*:\O_X\vert_Y\to \mathcal{C}_Y$, meaning that for every open $U$ in $Y$:
\begin{equation*} \O_Y(U)=\{f\in \mathcal{C}_Y(U)\mid \forall y\in U,\text{ }\exists (\widehat{f})_{i(y)}\in (\O_X)_{i(y)} : (f)_y=(\widehat{f}\vert_Y)_y\}.
\end{equation*}
\end{defi}
\begin{rem}\label{indfuncstr} Let us stress that for any subspace $Y$ of a reduced ringed space $(X,\O_X)$ there is a unique subsheaf $\O_Y\subset \mathcal{C}_Y$ making $i:(Y,\O_Y)\hookrightarrow (X,\O_X)$ into an embedding of reduced ringed spaces. We will call this the \textbf{induced structure sheaf} on $Y$. Note that, if $(X,\O_X)$ is a reduced differentiable space and $Y$ is locally closed in $X$, then $Y$, equipped with its induced structure sheaf, is a reduced differentiable space as well, because charts for $X$ restrict to charts for $Y$.
\end{rem}
\begin{ex}\label{embredringspex} Here are some examples of embeddings:
\begin{itemize} \item[i)] For maps between smooth manifolds, the above notion of embedding is the usual one.
\item[ii)] In Example \ref{smoothfunsubsp}, the inclusion $i:(Y,\mathcal{C}^\infty_Y)\hookrightarrow (\mathbb{R}^n,\mathcal{C}^\infty_{\mathbb{R}^n})$ is an embedding.
\item[iii)] Let $(X,\O_X)$ be a reduced ringed space and $U\subset X$ open. A homeomorphism $\chi:U\to Y$ onto a locally closed subspace $Y$ of $\mathbb{R}^n$ is a chart if and only if $\chi:(U,\O_X\vert_U)\to (\mathbb{R}^n,\mathcal{C}^\infty_{\mathbb{R}^n})$ is an embedding.
\end{itemize}
\end{ex}
\begin{rem}\label{morphsmstrspsimp} Let $\phi:(X_1,\O_{X_1})\to (X_2,\O_{X_2})$ be a morphism of reduced ringed spaces and let $Y_1\subset X_1$ and $Y_2\subset X_2$ be subspaces such that $\phi(Y_1)\subset Y_2$. Then $\phi$ restricts to a morphism of reduced ringed spaces $(Y_1,\O_{Y_1})\to (Y_2,\O_{Y_2})$ with respect to the induced structure sheaves.
\end{rem}
\begin{defi} Let $(X,\O_X)$ be a reduced differentiable space and $Y$ a locally closed subspace of $X$. We call $Y$ a \textbf{submanifold} of $(X,\O_X)$ if, when endowed with its induced structure sheaf, it is a smooth manifold.
\end{defi}
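For instance, with $X=[0,\infty)\subset \mathbb{R}$ equipped with the sheaf of Example \ref{smoothfunsubsp}, both $\{0\}$ and $(0,\infty)$ are submanifolds of $(X,\mathcal{C}^\infty_X)$, of dimensions $0$ and $1$ respectively.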
\begin{rem} Let $(X,\O_X)$ be a reduced differentiable space. Let $Y$ be a subspace of $X$. Then $Y$ is a $d$-dimensional submanifold of $(X,\O_X)$ if and only if for every chart $(U,\chi)$ of $(X,\O_X)$ the image $\chi(U\cap Y)$ is a $d$-dimensional submanifold of $\mathbb{R}^n$.
\end{rem}
\begin{ex}\label{canstratsubmanex} Let $\mathcal{G}\rightrightarrows M$ be a proper Lie groupoid. Each Morita type in $\underline{M}$ is a submanifold of the leaf space $(\underline{M},\mathcal{C}^\infty_{\underline{M}})$. The same holds for each stratum of the canonical stratification.
\end{ex}
We end this subsection by recalling proofs of the claims in Examples \ref{leafspreddiffspex} and \ref{canstratsubmanex}. The following observation will be useful for this and for later reference.
\begin{prop}\label{globchar} Let $(Y,\O_Y)$ be a reduced ringed space, $(X,\O_X)$ a Hausdorff and second countable reduced differentiable space. Suppose that $i:Y\hookrightarrow X$ is both a topological embedding and a morphism of reduced ringed spaces. Then $i$ is an embedding of reduced ringed spaces if and only if every global function $f\in \O_Y(Y)$ extends to a function $g\in \O_X(U)$ defined on some open neighbourhood $U$ of $i(Y)$ in $X$. Moreover, if $i(Y)$ is closed in $X$, then $U$ can be chosen to be $X$.
\end{prop}
\begin{proof} For the forward implication, let $f\in \O_Y(Y)$. Since $i$ is an embedding of reduced ringed spaces, for every $y\in Y$ there is a local extension of $f$, defined on an open around $i(y)$ in $X$. By Remark \ref{partofunityreddiffbsp}, any open in $X$ admits $\O_X$-partitions of unity subordinate to any open cover. So, using the standard partition of unity argument we can construct, out of the local extensions, an extension $g\in \O_X(U)$ of $f$ defined on an open neighbourhood $U$ of $i(Y)$ in $X$, which can be taken to be all of $X$ if $i(Y)$ is closed in $X$. For the backward implication, it suffices to show that every germ in $\O_Y$ can be represented by a globally defined function in $\O_Y(Y)$. For this, it is enough to show that for every $y\in Y$ and every open neighbourhood $U$ of $y$ in $Y$, there is a function $\rho\in \O_Y(Y)$, supported in $U$, such that $\rho=1$ on an open neighbourhood of $y$ in $U$. To verify the latter, let $y$ and $U$ be as above. Let $V$ be an open in $X$ around $i(y)$ such that $V\cap i(Y)=i(U)$. Using a chart for $(X,\O_X)$ around $i(y)$, we can find a function $\rho_X \in \O_X(X)$, supported in $V$, such that $\rho_X=1$ on an open neighbourhood of $i(y)$ in $V$. Now, $\rho:=i^*(\rho_X)\in \O_Y(Y)$ is supported in $U$ and equal to $1$ on an open neighbourhood of $y$ in $U$. This proves the proposition.
\end{proof}
Returning to Example \ref{leafspreddiffspex}: first consider the case of a compact Lie group $G$ acting linearly on a real finite-dimensional vector space $V$ (that is, $V$ is a representation of $G$). The algebra $P(V)^G$ of $G$-invariant polynomials on $V$ is finitely generated. Given a finite set of generators $\{\rho_1,...,\rho_n\}$ of $P(V)^G$, one can consider the polynomial map:
\begin{equation}\label{orbmaprep} \rho=(\rho_1,...,\rho_n):V\to \mathbb{R}^n.
\end{equation} We call this a \textbf{Hilbert map} for the representation $V$. Any such map factors through an embedding of topological spaces $\underline{\rho}:V/G\to \mathbb{R}^n$ onto a closed subset of $\mathbb{R}^n$. Furthermore:
\begin{thm}[\cite{Schw1}] Let $G$ be a compact Lie group, $V$ a real finite-dimensional representation of $G$ and $\rho:V\to \mathbb{R}^n$ a Hilbert map. Then the associated map (\ref{orbmaprep}) satisfies:
\begin{equation*} \rho^*(C^\infty(\mathbb{R}^n))=C^\infty(V)^G.
\end{equation*}
\end{thm}
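For instance, for the $\mathbb{Z}/2$-representation $V=\mathbb{R}$ on which the non-trivial element acts as $v\mapsto -v$, the algebra $P(V)^{\mathbb{Z}/2}$ is generated by $\rho_1(v)=v^2$, so that $\rho=\rho_1:\mathbb{R}\to\mathbb{R}$ is a Hilbert map. In this case $\underline{\rho}$ identifies $V/G$ with the closed subset $[0,\infty)$ of $\mathbb{R}$, and the theorem recovers the classical fact that every even smooth function on $\mathbb{R}$ is of the form $g(v^2)$ for some $g\in C^\infty(\mathbb{R})$.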
So, in view of Proposition \ref{globchar}, the morphism of reduced ringed spaces:
\begin{equation}\label{schwarzchart} \underline{\rho}:(V/G,\mathcal{C}^\infty_{V/G})\to (\mathbb{R}^n,\mathcal{C}^\infty_{\mathbb{R}^n}).
\end{equation}
is in fact an embedding of reduced ringed spaces (Definition \ref{embredringdef}), and hence a globally defined chart for the orbit space $V/G$ (by Example \ref{embredringspex}). Next, we show how this leads to charts for the leaf space of a proper Lie groupoid. Recall:
\begin{prop}\label{moreqisoredring} The homeomorphism of leaf spaces (\ref{leafsphommoreq}) induced by a Morita equivalence of Lie groupoids is an isomorphism of reduced ringed spaces:\begin{equation*}h_P:(\underline{M}_1,\mathcal{C}^\infty_{\underline{M}_1})\xrightarrow{\sim} (\underline{M}_2,\mathcal{C}^\infty_{\underline{M}_2}).
\end{equation*}
\end{prop}
\begin{proof} Suppose we are given a Morita equivalence between Lie groupoids:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$\mathcal{G}_1$};
\node (M1) at (0,-1.3) {$M_1$};
\node (S) at (1.4,0) {$P$};
\node (M2) at (2.7,-1.3) {$M_2$};
\node (G2) at (2.7,0) {$\mathcal{G}_2$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->,transform canvas={xshift=-\shift}](G2) to node[midway,left] {}(M2);
\draw[->,transform canvas={xshift=\shift}](G2) to node[midway,right] {}(M2);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }\alpha_1$} (M1);
\draw[->] (0.8,-0.15) arc (315:30:0.25cm);
\draw[<-] (1.9,0.15) arc (145:-145:0.25cm);
\draw[->](S) to node[pos=0.25, below] {$\alpha_2$\text{ }} (M2);
\end{tikzpicture}
\end{center}
Then, given two $P$-related invariant opens $U_1\subset M_1$ and $U_2\subset M_2$, we have algebra isomorphisms:
\begin{center}
\begin{tikzpicture} \node (S_1) at (0,0) {$\mathcal{C}_{M_1}^\infty(U_1)^{\mathcal{G}_1}$};
\node (S_2) at (8,0) {$\mathcal{C}_{M_2}^\infty(U_2)^{\mathcal{G}_2}$};
\node (Q) at (4,1) {$\mathcal{C}_P^\infty(\alpha_1^{-1}(U_1))^{\mathcal{G}_1}\cap \mathcal{C}_P^\infty(\alpha_2^{-1}(U_2))^{\mathcal{G}_2}$};
\node (X_1) at (0,-2) {$\mathcal{C}_{\underline{M}_1}^\infty(\underline{U}_1)$};
\node (X_2) at (8,-2) {$\mathcal{C}_{\underline{M}_2}^\infty(\underline{U}_2)$};
\draw[->](S_1) to node[pos=0.45, below] {$\text{ }\text{ }\alpha_1^*$} (Q);
\draw[->](S_2) to node[pos=0.45, below] {$\text{ }\text{ }\alpha_2^*$} (Q);
\draw[->](X_1) to node[pos=0.55, right] {$q_1^*\text{ }\text{ }$} (S_1);
\draw[->](X_2) to node[pos=0.55, left] {$\text{ }\text{ }q_2^*$} (S_2);
\draw[<-, dashed](X_1) to node[pos=0.45, below] {$\text{ }\text{ }h_P^*$} (X_2);
\end{tikzpicture}
\end{center} that complete to a commutative diagram via $h_P^*:\mathcal{C}^\infty_{\underline{M}_2}\to (h_P)_*\mathcal{C}^\infty_{\underline{M}_1}$.
\end{proof}
Now, the linearization theorem for proper Lie groupoids implies that, given a proper Lie groupoid $\mathcal{G}\rightrightarrows M$ and an $x\in M$, there is an invariant open neighbourhood $U$ of $x$ in $M$ and a Morita equivalence between $\mathcal{G}\vert_U$ and the action groupoid $\mathcal{G}_x\ltimes \mathcal{N}_x$ of the normal representation at $x$, as in (\ref{normreppt}), that relates $\L_x$ to the origin in $\mathcal{N}_x$.
So, applying Proposition \ref{moreqisoredring} we find an isomorphism:
\begin{equation}\label{locmodliegpoidchart} (\underline{U},\mathcal{C}^\infty_{\underline{M}}\vert_{\underline{U}})\cong (\mathcal{N}_x/\mathcal{G}_x,\mathcal{C}^\infty_{\mathcal{N}_x/\mathcal{G}_x}),
\end{equation} which composes with the embedding (\ref{schwarzchart}) to a chart for $(\underline{M},\mathcal{C}^\infty_{\underline{M}})$, as desired. We conclude that $(X,\mathcal{C}^\infty_X)$ is a reduced differentiable space, as claimed in Example \ref{leafspreddiffspex}.

To see why the claims in Example \ref{canstratsubmanex} hold true, let $\underline{\Sigma}\in \P_\mathcal{M}(\underline{M})$ be a Morita type and suppose that $\L_x\in \underline{\Sigma}$. The isomorphism (\ref{locmodliegpoidchart}) identifies $\underline{U}\cap \underline{\Sigma}$ with the Morita type of $\mathcal{G}_x\ltimes \mathcal{N}_x$ through the origin, which is the fixed point set $\mathcal{N}_x^{\mathcal{G}_x}$, a submanifold of $\mathcal{N}_x/\mathcal{G}_x$. Therefore $\underline{\Sigma}$ is a submanifold of $\underline{M}$ near $\L_x$. This being true for all points in $\underline{\Sigma}$, it follows that $\underline{\Sigma}$ is a submanifold with connected components of possibly varying dimension. The dimension of the connected component through $\L_x$ is $\dim(\mathcal{N}_x^{\mathcal{G}_x})$, and it then follows from Proposition \ref{transgeomgpoid}$b$ that all connected components of $\underline{\Sigma}$ in fact have the same dimension. So, the Morita types are indeed submanifolds of the leaf space, and so are their connected components.
\subsubsection{Whitney stratifications of reduced differentiable spaces}\label{whitneysec}
\begin{defi}\label{smoothstratspdefi} Let $(X,\O_X)$ be a Hausdorff and second countable reduced differentiable space. A \textbf{stratification} $\S$ of $(X,\O_X)$ is a stratification of $X$ by submanifolds of $(X,\O_X)$. That is, $\S$ is a stratification of $X$ with the property that the given smooth structure on each stratum coincides with its induced structure sheaf. We call the triple $(X,\O_X,\S)$ a \textbf{smooth stratified space}. A \textbf{morphism of smooth stratified spaces} is a morphism of the underlying stratified spaces that is simultaneously a morphism of the underlying reduced ringed spaces.
\end{defi}
\begin{rem} As noted in \cite{PfPoTa}, the notion of smooth stratified space is equivalent (up to the slight difference pointed out in Remark \ref{compremstratdefi}) to the notion of stratified space with smooth structure in \cite{Pfl}, which is defined starting from an atlas of compatible singular charts, rather than a structure sheaf.
\end{rem}
On stratifications of reduced differentiable spaces, we can impose an important extra regularity condition: Whitney's condition (b). We now recall this, starting with:
\begin{defi} Let $R$ and $S$ be disjoint submanifolds of $\mathbb{R}^n$, and let $y\in S$. Then $R$ is called \textbf{Whitney regular} over $S$ at $y$ if the following is satisfied. For any two sequences $(x_n)$ in $R$ and $(y_n)$ in $S$ that both converge to $y$, with $x_n\neq y_n$ for all $n$, and that satisfy:
\begin{itemize}\item[i)] $T_{x_n}R$ converges to some $\tau$ in the Grassmannian of $\dim(R)$-dimensional subspaces of $\mathbb{R}^n$,
\item[ii)] the sequence of lines $[x_n-y_n]$ in $\mathbb{R} P^{n-1}$ converges to some line $\ell$,
\end{itemize} it must hold that $\ell \subset \tau$.
\end{defi}
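As a quick illustration of this definition (a standard computation): let $S=\{0\}$ and let $R=\{(t^2,t^3)\mid t\neq 0\}$ be the punctured cuspidal curve in $\mathbb{R}^2$. For any sequence $x_n=(t_n^2,t_n^3)$ in $R$ converging to the origin, the tangent line $T_{x_n}R$ is spanned by $(2t_n,3t_n^2)$, hence by $(2,3t_n)$, and so converges to the horizontal line $\tau=\mathbb{R}\times\{0\}$; likewise, the secant lines $[x_n-0]$ are spanned by $(1,t_n)$ and converge to $\ell=\mathbb{R}\times\{0\}$. Hence $\ell\subset \tau$ for all such sequences, so $R$ is Whitney regular over $S$ at the origin.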
Using charts, this generalizes to reduced differentiable spaces, as follows.
\begin{defi} Let $(X,\O_X)$ be a reduced differentiable space and let $R$ and $S$ be disjoint submanifolds. Then $R$ is called \textbf{Whitney regular} over $S$ at $y\in S$ if for every chart $(U,\chi)$ around $y$, the submanifold $\chi(R\cap U)$ of $\mathbb{R}^n$ is Whitney regular over $\chi(S\cap U)$ at $\chi(y)$. We call $R$ Whitney regular over $S$ if it is so at every $y\in S$. Moreover, we call a partition $\P$ of $(X,\O_X)$ into submanifolds Whitney regular if every member of $\P$ is Whitney regular over each other member.
\end{defi}
\begin{defi}\label{whitstratdef} A smooth stratified space $(X,\O_X,\S)$ is called a \textbf{Whitney stratified space} when the partition $\S$ of $(X,\O_X)$ is Whitney regular.
\end{defi}
To verify Whitney regularity of $R$ over $S$ at $y$, it is enough to do so in a single chart around $y$. To see this, the key remark is the proposition below, combined with the fact that Whitney regularity is invariant under smooth local coordinate changes of the ambient space $\mathbb{R}^n$.
\begin{prop} Let $(X,\O_X)$ be a reduced differentiable space. Any two charts $(U_1,\chi_1)$ and $(U_2,\chi_2)$ onto locally closed subsets of $\mathbb{R}^n$ are smoothly compatible, in the sense that: for any $y\in U_1\cap U_2$, there is a diffeomorphism $H:O_1\to O_2$ from an open neighbourhood $O_1$ of $\chi_1(y)$ in $\mathbb{R}^n$ onto an open neighbourhood $O_2$ of $\chi_2(y)$ in $\mathbb{R}^n$ such that:
\begin{equation*} H\vert_{O_1\cap\chi_1(U_1\cap U_2)}=\chi_2\circ (\chi_1^{-1})\vert_{O_1\cap\chi_1(U_1\cap U_2)}.
\end{equation*}
\end{prop}
\begin{proof} Although this is surely known, we could not find a proof in the literature. The argument below is closely inspired by that of \cite[Proposition 1.3.10]{Pfl}. It is enough to show that, given two subspaces $Y_1,Y_2\subset \mathbb{R}^n$ and an isomorphism of reduced ringed spaces:
\begin{equation*} \phi:(Y_1,\mathcal{C}^\infty_{Y_1})\xrightarrow{\sim}(Y_2,\mathcal{C}^\infty_{Y_2}),
\end{equation*} there are, for every $y\in Y_1$, an open $U_1$ in $\mathbb{R}^n$ around $y$ and a smooth open embedding $\widehat{\phi}:U_1\to \mathbb{R}^n$ such that $\widehat{\phi}\vert_{U_1\cap Y_1}=\phi\vert_{U_1\cap Y_1}$. To this end, let us first make a general remark. Given $Y\subset \mathbb{R}^n$ and $y\in Y$, let $\mathfrak{m}^Y_y$ and $\mathfrak{m}^{\mathbb{R}^n}_y$ denote the respective maximal ideals in the stalks $(\mathcal{C}^\infty_Y)_y$ and $(\mathcal{C}^\infty_{\mathbb{R}^n})_y$, consisting of germs of those functions that vanish at $y$. Further, let $(\mathcal{I}_Y)_y$ denote the ideal in $(\mathcal{C}^\infty_{\mathbb{R}^n})_y$ consisting of germs of those functions that vanish on $Y$. Notice that we have a canonical short exact sequence:
\begin{equation*} 0\to \left((\mathcal{I}_Y)_y+(\mathfrak{m}_y^{\mathbb{R}^n})^2\right)/(\mathfrak{m}_y^{\mathbb{R}^n})^2 \to \mathfrak{m}_y^{\mathbb{R}^n}/(\mathfrak{m}_y^{\mathbb{R}^n})^2 \xrightarrow{(i_Y)_y^*} \mathfrak{m}_y^{Y}/(\mathfrak{m}_y^{Y})^2\to 0.
\end{equation*} Furthermore, recall that there is a canonical isomorphism of vector spaces:
\begin{equation*} \mathfrak{m}_y^{\mathbb{R}^n}/(\mathfrak{m}_y^{\mathbb{R}^n})^2\xrightarrow{\sim} T^*_y\mathbb{R}^n,\quad (f)_y\mod (\mathfrak{m}_y^{\mathbb{R}^n})^2\mapsto \d f_y.
\end{equation*} It follows that, for any $(h_1)_y,...,(h_k)_y\in \mathfrak{m}^{\mathbb{R}^n}_y$ that project to a basis of $\mathfrak{m}_y^{Y}/(\mathfrak{m}_y^{Y})^2$, we can find $(h_{k+1})_y,...,(h_n)_y\in (\mathcal{I}_Y)_y$ such that $\d (h_1)_y,...,\d (h_n)_y\in T^*_y\mathbb{R}^n$ form a basis, or in other words, such that $(h_1,...,h_n)_y$ is the germ of a diffeomorphism from an open neighbourhood of $y$ in $\mathbb{R}^n$ onto an open neighbourhood of the origin in $\mathbb{R}^n$. Now, we return to the isomorphism $\phi$. Let $k$ be the dimension of $\mathfrak{m}_{\phi(y)}^{Y_2}/(\mathfrak{m}_{\phi(y)}^{Y_2})^2$. Using the above remark we can, first of all, find a diffeomorphism:
\begin{equation*} f=(f_1,...,f_n):U_2\xrightarrow{\sim} V_2
\end{equation*} from an open $U_2$ in $\mathbb{R}^n$ around $\phi(y)$ onto an open $V_2$ in $\mathbb{R}^n$ around the origin, such that:
\begin{equation*} (f_1)_{\phi(y)},...,(f_k)_{\phi(y)}\in \mathfrak{m}_{\phi(y)}^{\mathbb{R}^n}
\end{equation*} project to a basis of $\mathfrak{m}_{\phi(y)}^{Y_2}/(\mathfrak{m}_{\phi(y)}^{Y_2})^2$ and such that $f_{k+1},...,f_n$ vanish on $U_2\cap Y_2$. Since $\phi$ is an isomorphism of reduced ringed spaces, it induces an isomorphism:
\begin{equation*} (\phi^*)_y: \mathfrak{m}_{\phi(y)}^{Y_2}/(\mathfrak{m}_{\phi(y)}^{Y_2})^2 \xrightarrow{\sim} \mathfrak{m}_{y}^{Y_1}/(\mathfrak{m}_{y}^{Y_1})^2,
\end{equation*} which maps the above basis to a basis of $\mathfrak{m}_{y}^{Y_1}/(\mathfrak{m}_{y}^{Y_1})^2$. Using this and the remark above once more, we can find a diffeomorphism:
\begin{equation*} g=(g_1,...,g_n):U_1\xrightarrow{\sim} V_1,
\end{equation*} from an open $U_1$ in $\mathbb{R}^n$ around $y$ such that $\phi(U_1\cap Y_1)\subset U_2$, onto an open $V_1\subset V_2$ around the origin in $\mathbb{R}^n$, with the property that:
\begin{equation*} g_j\vert_{U_1\cap Y_1}=f_j\circ (\phi\vert_{U_1\cap Y_1}), \quad \forall j=1,...,k,
\end{equation*} and that $g_{k+1},...,g_n$ vanish on $U_1\cap Y_1$. Then, in fact $g\vert_{U_1\cap Y_1}=f\circ (\phi\vert_{U_1\cap Y_1})$, so that the smooth open embedding:
\begin{equation*} \widehat{\phi}:=f^{-1}\circ g:U_1\to \mathbb{R}^n,
\end{equation*} restricts to $\phi$ on $U_1\cap Y_1$, as desired.
\end{proof}
\begin{rem}\label{passtoconncomprem} Continuing Remark \ref{compremstratdefi}:
\begin{itemize} \item[i)] Let $(X,\O_X)$ be a Hausdorff and second countable reduced differentiable space and let $\P$ be a locally finite partition of $(X,\O_X)$ into submanifolds. In the terminology of \cite{GWPL}, such a partition $\P$ would be called a stratification. If $\P$ is Whitney regular, then the partition $\P^c$ (obtained after passing to connected components) is locally finite and satisfies the frontier condition. Hence, $\P^c$ is then a Whitney stratification of $(X,\O_X)$. In the case that $(X,\O_X)$ is a locally closed subspace of $\mathbb{R}^n$ equipped with its induced structure sheaf, this statement is proved in \cite{GWPL} using the techniques developed in \cite{Th,Mat,Mat1}. The general statement follows from this case by using charts and Lemma \ref{frontcondloc}.
\item[ii)] Combined with the discussion in \cite[Section 4.1]{CrMe} and \cite[Proposition 1.2.7]{Pfl}, the previous remark shows that the notion of Whitney stratified space used here is actually equivalent to that in \cite{Pfl}.
\end{itemize}
\end{rem}
\subsubsection{Semi-algebraic sets and homogeneity}
For proofs of the facts on semi-algebraic sets that we use throughout, we refer to \cite{BoCoRo}; see also \cite{GWPL} for a concise introduction. By a semi-algebraic subset of $\mathbb{R}^n$ we mean a finite union of subsets defined by real polynomial equalities and inequalities. Semi-algebraic sets are rather rigid geometric objects: any semi-algebraic set $A\subset \mathbb{R}^n$ has a finite number of connected components and admits a canonical Whitney stratification with finitely many strata (by contrast, every closed subset of $\mathbb{R}^n$, however wild, is the zero-set of some smooth function). As remarked in \cite{GWPL}, there is a useful criterion for stratifications in $\mathbb{R}^n$ to be Whitney regular, when the strata are semi-algebraic. This criterion can be extended to smooth stratified spaces, as follows.
\begin{defi}\label{locsemalgdefi} We call a partition $\P$ of a reduced differentiable space $(X,\O_X)$ \textbf{locally semi-algebraic} at $x\in X$ if there is a chart $(U,\chi)$ around $x$ that maps every member of $\P\vert_U$ onto a semi-algebraic subset of $\mathbb{R}^n$. We call the partition locally semi-algebraic if it is so at every $x\in X$.
\end{defi}
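For orientation, we note some standard facts that delimit this condition: the parabola $\{(x,y)\in\mathbb{R}^2\mid y=x^2\}$ and the half-plane $\{y\geq 0\}$ are semi-algebraic, whereas the graph $\{(x,\sin(x))\mid x\in \mathbb{R}\}$ is not. Indeed, the intersection of two semi-algebraic sets is semi-algebraic, yet the intersection of this graph with the line $\{y=0\}$ has infinitely many connected components, while any semi-algebraic set has only finitely many.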
\begin{defi}\label{homdefi} We call a partition $\P$ of a topological space $X$ \textbf{homogeneous} if for any two $x_1,x_2\in X$ that belong to the same member of $\P$, there is a homeomorphism:
\begin{equation*} h:U_1\xrightarrow{\sim} U_2
\end{equation*} from an open $U_1$ around $x_1$ onto an open $U_2$ around $x_2$ in $X$, with the property that $h(x_1)=x_2$ and for every $\Sigma\in \P$:
\begin{equation*} h(U_1\cap \Sigma)=U_2\cap \Sigma.
\end{equation*} If $(X,\O_X)$ is a reduced differentiable space and the members of $\P$ are submanifolds, then we call $\P$ \textbf{smoothly homogeneous} if the homeomorphisms $h$ can in fact be chosen to be isomorphisms of reduced differentiable spaces:
\begin{equation*} h:(U_1,\O_X\vert_{U_1})\xrightarrow{\sim} (U_2,\O_X\vert_{U_2}).
\end{equation*}
\end{defi}
\begin{rem} Notice that: \begin{itemize}\item[i)] Homogeneity of a partition $\P$ of $X$ implies that $\P$ satisfies the topological part of the frontier condition: the closure of any member $\Sigma\in\P$ is a union of $\Sigma$ with other members.
\item[ii)] If $\P$ is smoothly homogeneous, then a map $h$ as above restricts to diffeomorphisms between the members of $\P\vert_{U_1}$ and $\P\vert_{U_2}$ (by Remark \ref{morphsmstrspsimp}).
\end{itemize}
\end{rem}
Together the above conditions give a criterion for Whitney regularity.
\begin{lemma}\label{whitreghomprop} Let $(X,\O_X)$ be a reduced differentiable space and let $\P$ be a partition of $(X,\O_X)$ into submanifolds. If $\P$ is smoothly homogeneous and locally semi-algebraic, then it is Whitney regular.
\end{lemma}
\begin{proof} Let $R,S\in \P$ be two distinct members. Since $\P$ is smoothly homogeneous, either $R$ is Whitney regular over $S$ at all points of $S$, or at no point at all; indeed, Whitney regularity is invariant under isomorphisms of reduced differentiable spaces. As $\P$ is locally semi-algebraic, the latter option cannot occur, so the partition must be Whitney regular. To see why, suppose first that $R,S\subset \mathbb{R}^n$ are semi-algebraic and submanifolds of $\mathbb{R}^n$ (also called Nash submanifolds of $\mathbb{R}^n$). Consider the set of bad points: \begin{equation*} \mathcal{B}(R,S),
\end{equation*} which consists of those $y\in S$ at which $R$ is not Whitney regular over $S$. The key fact is that, because $R$ and $S$ are semi-algebraic, the subset $\mathcal{B}(R,S)$ has empty interior in $S$ (see \cite{Wall} for a concise proof); in particular, it cannot be all of $S$. In general, we can pass to a chart around any $y\in S$ in which the strata $R$ and $S$ are semi-algebraic, and the same argument applies, because Whitney regularity can be verified in a single chart. \end{proof}
To exemplify the use of Lemma \ref{whitreghomprop}, let us point out how it leads to a concise proof of:
\begin{thm}[\cite{PfPoTa}]\label{stratleafspthm} The canonical stratification of the leaf space of a proper Lie groupoid is a Whitney stratification.
\end{thm}
To verify the criteria of Lemma \ref{whitreghomprop}, we use:
\begin{prop}[\cite{Bi}]\label{semialgstratrep} Let $G$ be a compact Lie group and let $V$ be a real finite-dimensional representation of $G$. Then any Hilbert map $\rho:V\to \mathbb{R}^n$ (see Subsection \ref{reddiffspsec}) identifies the strata of the canonical stratification $\S_\textrm{Gp}(V/G)$ with semi-algebraic subsets of $\mathbb{R}^n$.
\end{prop}
See also \cite[Theorem 1.5.2]{Schw2} for a more elementary proof.
\begin{proof}[Proof of Theorem \ref{stratleafspthm}] Let $\mathcal{G}\rightrightarrows M$ be a proper Lie groupoid. We return to the discussion at the end of Subsection \ref{reddiffspsec}. As recalled there, for any $x\in M$ there is an open $\underline{U}$ around the leaf $\L_x\in \underline{M}$ and an isomorphism (\ref{locmodliegpoidchart}) that identifies $\underline{U}$, as a reduced differentiable space, with $\mathcal{N}_x/\mathcal{G}_x$. Furthermore, (\ref{locmodliegpoidchart}) identifies the partition $\P_\mathcal{M}(\underline{M})\vert_{\underline{U}}$ by Morita types of $\mathcal{G}\vert_U$ with the partition of $\mathcal{N}_x/\mathcal{G}_x$ by Morita types of $\mathcal{G}_x\ltimes \mathcal{N}_x$. Recall that the canonical stratification on the orbit space of a real, finite-dimensional representation of a compact Lie group has finitely many strata (see e.g. \cite[Proposition 2.7.1]{DuKo}). In combination with Proposition \ref{semialgstratrep}, this implies that a Hilbert map $\rho:\mathcal{N}_x\to \mathbb{R}^n$ for the normal representation $\mathcal{N}_x$ maps the Morita types in $\mathcal{N}_x/\mathcal{G}_x$ onto semi-algebraic subsets of $\mathbb{R}^n$. This shows that $\P_\mathcal{M}(\underline{M})$ is locally semi-algebraic.

Secondly, Proposition \ref{moreqisoredring} implies that the partition by Morita types is smoothly homogeneous. To see this, note that by the very definition of the partition by Morita types on $\underline{M}$, for any two leaves $\L_1$ and $\L_2$ in the same Morita type, there are invariant opens $V_1$ around $\L_1$, $V_2$ around $\L_2$ in $M$ and a Morita equivalence $\mathcal{G}\vert_{V_1}\simeq \mathcal{G}\vert_{V_2}$ relating $\L_1$ to $\L_2$. The homeomorphism of leaf spaces induced by this Morita equivalence is an isomorphism of reduced differentiable spaces:
\begin{equation*} (\underline{V}_1,\mathcal{C}^\infty_{\underline{M}} \vert_{\underline{V}_1})\cong (\underline{V}_2,\mathcal{C}^\infty_{\underline{M}} \vert_{\underline{V}_2})
\end{equation*} that identifies $\L_1$ with $\L_2$ and $\underline{V}_1\cap \underline{\Sigma}$ with $\underline{V}_2\cap \underline{\Sigma}$ for every Morita type $\underline{\Sigma}$. So, the partition by Morita types is indeed smoothly homogeneous. In light of Lemma \ref{whitreghomprop}, it follows that the partition by Morita types is Whitney regular. Hence, passing to connected components, we find that $\S_\textrm{Gp}(\underline{M})$ is a Whitney stratification of the leaf space $(\underline{M},\mathcal{C}^\infty_{\underline{M}})$ (as in Remark \ref{passtoconncomprem}).
\end{proof}
\subsubsection{Constant rank stratifications of maps}\label{constrkstratsec} Finally, we turn to constant rank stratifications of maps between reduced differentiable spaces. In this subsection, let $(X,\O_X)$ and $(Y,\O_Y)$ be Hausdorff, second countable reduced differentiable spaces.
\begin{defi}\label{constrkstratdefi} By a \textbf{partition of a morphism $f:(X,\O_X)\to (Y,\O_Y)$ into submanifolds} we mean a pair $(\P_X,\P_Y)$ consisting of a partition $\P_X$ of $(X,\O_X)$ and a partition $\P_Y$ of $(Y,\O_Y)$ into submanifolds, such that $f$ maps every member of $\P_X$ into a member of $\P_Y$. We call this a \textbf{constant rank partition} of $f$ if in addition, for every $\Sigma_X\in \P_X$ and $\Sigma_Y\in \P_Y$ such that $f(\Sigma_X)\subset \Sigma_Y$, the smooth map $f:\Sigma_X\to \Sigma_Y$ has constant rank. Furthermore, by a \textbf{constant rank stratification} of $f$ we mean a constant rank partition for which both partitions are stratifications.
\end{defi}
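To fix ideas, here is a minimal (and standard) example of a constant rank stratification. Consider:
\begin{equation*} f:\mathbb{R}^2\to \mathbb{R},\quad f(x,y)=x^2+y^2,
\end{equation*} with $\P_X$ consisting of $\{0\}$ and $\mathbb{R}^2\setminus\{0\}$, and $\P_Y$ consisting of $\{0\}$, $(-\infty,0)$ and $(0,\infty)$. Then $f$ maps $\{0\}$ to $\{0\}$ with rank $0$ and maps $\mathbb{R}^2\setminus\{0\}$ into $(0,\infty)$ with rank $1$ at every point (as $\d f=(2x,2y)$ does not vanish there), and both partitions are stratifications. So $(\P_X,\P_Y)$ is a constant rank stratification of $f$, and the induced partition of the fiber $f^{-1}(c)$, for $c>0$, is the circle $\{x^2+y^2=c\}$ itself.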
In the remainder of this subsection we focus on the partition induced on the fibers of a morphism $f:(X,\O_X)\to (Y,\O_Y)$ by a constant rank partition. The fibers of such a morphism are the reduced differentiable spaces $(f^{-1}(y),\O_{f^{-1}(y)})$, equipped with the induced structure sheaf as in Remark \ref{indfuncstr}. Given a constant rank partition $(\P_X,\P_Y)$ of $f$, its fibers have an induced partition:
\begin{equation}\label{indpartfibconstrk} \P_X\vert_{f^{-1}(y)}=\{ \Sigma_X\cap f^{-1}(y) \mid \Sigma_X\in \P_X\},
\end{equation} the members of which are submanifolds, being the fibers of the constant rank maps obtained by restricting $f$ to the members of $(\P_X,\P_Y)$. The example below shows that the connected components of the members of (\ref{indpartfibconstrk}) need not form a stratification, even if $\P_X^c$ and $\P_Y^c$ are Whitney stratifications.
\begin{ex} Consider the polynomial map:
\begin{equation*} f:\mathbb{R}^3\to \mathbb{R},\quad f(x,y,z)=x^2-zy^2.
\end{equation*} The fiber of $f$ over the origin in $\mathbb{R}$ is the Whitney umbrella. Consider the stratification of $\mathbb{R}^3$ by the five strata $\{y<0\}$, $\{y>0\}$, $\{y=0,x<0\}$, $\{y=0,x>0\}$ and the $z$-axis $\{x=y=0\}$. Together with the stratification of $\mathbb{R}$ consisting of a single stratum, this forms a constant rank stratification of $f$. The induced partition (\ref{indpartfibconstrk}) of the fiber of $f$ over the origin consists of two connected surfaces and the $z$-axis. This does not satisfy the frontier condition: the negative part of the $z$-axis is not contained in the closure of the two surfaces, since on $\{x^2=zy^2,\ y\neq 0\}$ we have $z=x^2/y^2\geq 0$, so that this closure is contained in $\{z\geq 0\}$.
\end{ex}
We will now give a criterion that does ensure that the induced partitions (\ref{indpartfibconstrk}) of the fibers form stratifications. Recall that a map between semi-algebraic sets is called semi-algebraic when its graph is a semi-algebraic set. Below, let $f:(X,\O_X)\to (Y,\O_Y)$ be a morphism and $(\P_X,\P_Y)$ a partition of $f$ into submanifolds.
\begin{defi}\label{locsemalgdefimap} We call $(f,\P_X,\P_Y)$ \textbf{locally semi-algebraic} at $x\in X$ if there are a chart $(U,\chi)$ around $x$ and a chart $(V,\phi)$ around $f(x)$ with $f(U)\subset V$, that map the respective members of $\P_X\vert_U$ and $\P_Y\vert_V$ onto semi-algebraic sets, and have the property that the coordinate representation $\phi\circ f\circ \chi^{-1}$ restricts to semi-algebraic maps between the members of $\chi(\P_X\vert_U)$ and $\phi(\P_Y\vert_V)$. We call $(f,\P_X,\P_Y)$ locally semi-algebraic if it is so at every $x\in X$.
\end{defi}
\begin{defi}\label{homdefimap} We call $(f,\P_X,\P_Y)$ \textbf{smoothly homogeneous} if for any two $x_1,x_2\in X$ that belong to the same member of $\P_X$, there are isomorphisms of reduced differentiable spaces:
\begin{equation*} h_X:(U_1,\O_X\vert_{U_1})\xrightarrow{\sim} (U_2,\O_X\vert_{U_2}) \quad \& \quad h_Y:(V_1,\O_Y\vert_{V_1})\xrightarrow{\sim} (V_2,\O_Y\vert_{V_2})
\end{equation*}
from an open $U_1$ around $x_1$ onto an open $U_2$ around $x_2$ in $X$, and from an open $V_1$ around $f(x_1)$ onto an open $V_2$ around $f(x_2)$ in $Y$, that fit in a commutative diagram:
\begin{center}
\begin{tikzcd}
(U_1,\O_X\vert_{U_1},x_1)\arrow[d,"f"]\arrow[r,"h_X"] & (U_2,\O_X\vert_{U_2},x_2) \arrow[d,"f"] \\
(V_1,\O_Y\vert_{V_1},f(x_1)) \arrow[r,"h_Y"] & (V_2,\O_Y\vert_{V_2},f(x_2))
\end{tikzcd}
\end{center}
and have the property that for all $\Sigma_X\in \P_X$, $\Sigma_Y\in \P_Y$:
\begin{equation*} h_X(U_1\cap \Sigma_X)=U_2\cap \Sigma_X \quad \& \quad h_Y(V_1\cap \Sigma_Y)=V_2\cap \Sigma_Y.
\end{equation*}
\end{defi}
\begin{rem}\label{smoothhomimpliesconstrk} Notice that if $(f,\P_X,\P_Y)$ is smoothly homogeneous, then $(\P_X,\P_Y)$ is necessarily a constant rank partition of $f$. Indeed, the isomorphisms $h_X$ and $h_Y$ identify the restriction $f:\Sigma_X\to \Sigma_Y$ near $x_1$ with the same restriction near $x_2$, so that the rank of this map is the same at any two points of $\Sigma_X$.
\end{rem}
The following shows that, if both of these criteria are met, then the fibers of $f$ meet the criteria of Lemma \ref{whitreghomprop}.
\begin{prop}\label{whitreghompropmap} Let $f:(X,\O_X)\to (Y,\O_Y)$ be a morphism and $(\P_X,\P_Y)$ a constant rank partition of $f$. If $(f,\P_X,\P_Y)$ is smoothly homogeneous and locally semi-algebraic, then so are the induced partitions (\ref{indpartfibconstrk}) of the fibers of $f$.
\end{prop}
The proof of this is straightforward. Appealing to Lemma \ref{whitreghomprop} and Remark \ref{passtoconncomprem} we obtain:
\begin{cor}\label{fibwhitstratcor} Let $f:(X,\O_X)\to (Y,\O_Y)$ be a morphism and $(\P_X,\P_Y)$ a constant rank partition of $f$. Suppose that $\P_X$ is locally finite and that $(f,\P_X,\P_Y)$ is smoothly homogeneous and locally semi-algebraic. Then the partitions of the fibers of $f$ obtained from (\ref{indpartfibconstrk}) after passing to connected components are Whitney stratifications of the fibers.
\end{cor}
\subsection{The stratifications associated to Hamiltonian actions}\label{canhamstratsec}
\subsubsection{The canonical Hamiltonian stratification and Hamiltonian Morita types}
Throughout, let $(\mathcal{G},\Omega)$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $\underline{S}:=S/\mathcal{G}$ denote the orbit space of the action and $\underline{M}:=M/\mathcal{G}$ the leaf space of the groupoid. The construction of the canonical Hamiltonian stratifications on $S$ and $\underline{S}$ is of the sort outlined in Subsection \ref{stratdefsec}. To begin with, we give a natural partition that, after passing to connected components, will induce the desired stratification.
\begin{defi} The partition $\P_{\textrm{Ham}}(S)$ of $S$ by \textbf{Hamiltonian Morita types} is defined by the equivalence relation: $p_1\sim p_2$ if and only if there are invariant opens $V_i$ around $\L_{J(p_i)}$ in $M$, invariant opens $U_i$ around $\O_{p_i}$ in $J^{-1}(V_i)$, together with a Hamiltonian Morita equivalence (as in Definition \ref{moreqdefHamacttyp}) that relates $\O_{p_1}$ to $\O_{p_2}$:
\begin{center}
\begin{tikzpicture} \node (G1) at (0,0) {$(\mathcal{G},\Omega)\vert_{V_1}$};
\node (M1) at (0,-1.3) {$V_1$};
\node (O) at (2,0) {$(U_1,\omega)$};
\node[transparent] (A) at (2.2,-0.65) {$A$};
\node[transparent] (B) at (3.4,-0.65) {$B$};
\node (G) at (4,0) {$(\mathcal{G},\Omega)\vert_{V_2}$};
\node (M) at (4,-1.3) {$V_2$};
\node (S) at (6,0) {$(U_2,\omega)$};
\draw[->,transform canvas={xshift=-\shift}](G1) to node[midway,left] {}(M1);
\draw[->,transform canvas={xshift=\shift}](G1) to node[midway,right] {}(M1);
\draw[->](O) to node[pos=0.25, below] {$\text{ }\text{ }J$} (M1);
\draw[->] (1.3,-0.15) arc (315:30:0.25cm);
\draw[transparent] (A) edge node[opacity=1] {\resizebox{0.8cm}{0.2cm}{$\simeq$}} (B);
\draw[->,transform canvas={xshift=-\shift}](G) to node[midway,left] {}(M);
\draw[->,transform canvas={xshift=\shift}](G) to node[midway,right] {}(M);
\draw[->](S) to node[pos=0.25, below] {$\text{ }\text{ }J$} (M);
\draw[->] (5.3,-0.15) arc (315:30:0.25cm);
\end{tikzpicture}
\end{center}
The members of $\P_\textrm{Ham}(S)$ are invariant with respect to the $\mathcal{G}$-action, so that $\P_{\textrm{Ham}}(S)$ descends to a partition $\P_{\textrm{Ham}}(\underline{S})$ of $\underline{S}$.
\end{defi}
\begin{rem}\label{improphammortyp} Let us point out some immediate properties of these partitions.
\begin{itemize} \item[i)] They are invariant under Hamiltonian Morita equivalence, meaning that the homeomorphism of orbit spaces induced by a Hamiltonian Morita equivalence (Proposition \ref{transgeommap}$a$) identifies the partitions by Hamiltonian Morita types.
\item[ii)] The transverse momentum map sends each member of $\P_\textrm{Ham}(\underline{S})$ into a member of $\P_\mathcal{M}(\underline{M})$ (the partition of $\underline{M}$ by Morita types of $\mathcal{G}$; see Example \ref{exmortyp}).
\end{itemize}
\end{rem}
In analogy with Example \ref{exmortyp}, the partition by Hamiltonian Morita types has the following alternative characterization.
\begin{prop}\label{eqcharhammortyp} Two points $p,q\in S$ belong to the same Hamiltonian Morita type if and only if there is an isomorphism of pairs of Lie groups:
\begin{equation*} (\mathcal{G}_{J(p)},\mathcal{G}_p)\cong(\mathcal{G}_{J(q)},\mathcal{G}_q)
\end{equation*} together with a compatible symplectic linear isomorphism:
\begin{equation*} (\S\mathcal{N}_p,\omega_p)\cong (\S\mathcal{N}_q,\omega_q).
\end{equation*}
\end{prop}
\begin{proof} The forward implication is immediate from Proposition \ref{transgeomham}. For the converse, notice the following. Let $p\in S$, write $G=\mathcal{G}_{J(p)}$, $H=\mathcal{G}_p$ and $(V,\omega_V)=(\S\mathcal{N}_p,\omega_p)$, and let $\mathfrak{p}:\mathfrak{h}^*\to \mathfrak{g}^*$ be any choice of $H$-equivariant splitting of (\ref{ses2poisabs}). Then from the normal form theorem, Example \ref{locmodmoreq} and Example \ref{crucexmoreq2}, it follows that there are invariant opens $W$ around $\L_x$ in $M$ and $U$ around $\O_p$ in $J^{-1}(W)$, together with a Hamiltonian Morita equivalence between the Hamiltonian $(\mathcal{G},\Omega)\vert_W$-action along $J:(U,\omega)\to W$ and (a restriction of) the groupoid map of Hamiltonian type (\ref{bigJ_p}) (to invariant opens around the respective origins in $\mathfrak{h}^0\oplus V$ and $\mathfrak{g}^*$), that relates $\O_p$ to the origin in $\mathfrak{h}^0\oplus V$. With this at hand the backward implication is clear, for (\ref{bigJ_p}) is built naturally out of the pair $(G,H)$, the symplectic representation $(V,\omega_V)$ and the splitting $\mathfrak{p}$.
\end{proof}
We now turn to the stratifications induced by the Hamiltonian Morita types.
\begin{defi} Let $\S_\textrm{Ham}(S)$ and $\S_\textrm{Ham}(\underline{S})$ denote the partitions obtained from the Hamiltonian Morita types on $S$ and $\underline{S}$, respectively, by passing to connected components. We call $\S_\textrm{Ham}(S)$ and $\S_\textrm{Ham}(\underline{S})$ the \textbf{canonical Hamiltonian stratifications}.
\end{defi}
The main aim of this section will be to prove:
\begin{thm}\label{canhamstratthm} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$.
\begin{itemize}
\item[a)] The partition $\S_\textrm{Ham}(\underline{S})$ is a Whitney stratification of the orbit space $(\underline{S},\mathcal{C}^\infty_{\underline{S}})$.
\item[b)] The pair consisting of the canonical Hamiltonian stratification of the orbit space $\underline{S}$ and the canonical stratification of the leaf space $\underline{M}$ of $\mathcal{G}$:
\begin{equation*} \left(\S_\textrm{Ham}(\underline{S}),\S_\textrm{Gp}(\underline{M})\right)
\end{equation*} is a constant rank stratification (as in Definition \ref{constrkstratdefi}) of the transverse momentum map:
\begin{equation}\label{transmommap} \underline{J}:(\underline{S},\mathcal{C}^\infty_{\underline{S}})\to (\underline{M},\mathcal{C}^\infty_{\underline{M}}).
\end{equation}
\end{itemize}
\end{thm}
The fiber of (\ref{transmommap}) over a leaf $\L$ of $(\mathcal{G},\Omega)$ is (as a topological space) the quotient $J^{-1}(\L)/\mathcal{G}$. This is the reduced space at $\L$ appearing in the procedure of symplectic reduction. Throughout, we will denote this as:
\begin{equation*} \underline{S}_\L:=\underline{J}^{-1}(\L),
\end{equation*} and we will simply denote the induced structure sheaf on the fiber space as $\mathcal{C}^\infty_{\underline{S}_\L}$. As we will show, $(\P_\textrm{Ham}(\underline{S}),\P_\mathcal{M}(\underline{M}))$ is a constant rank partition of the transverse momentum map (\ref{transmommap}), so that (as discussed in Subsection \ref{constrkstratsec}) the fiber $(\underline{S}_\L,\mathcal{C}^\infty_{\underline{S}_\L})$ has a natural partition into submanifolds:
\begin{equation}\label{indpartfibtrmommap} \P_\textrm{Ham}(\underline{S}_\L):=\{P\cap \underline{S}_\L\mid P\in \mathcal{P}_\textrm{Ham}(\underline{S})\}.
\end{equation}
Besides Theorem \ref{canhamstratthm}, in this section we will prove:
\begin{thm}\label{redspstratthm} The fibers $(\underline{S}_{\L},\mathcal{C}^\infty_{\underline{S}_{\L}})$ of the transverse momentum map, endowed with the partition $\S_\textrm{Ham}(\underline{S}_{\L})$ obtained from (\ref{indpartfibtrmommap}) after passing to connected components, are Whitney stratified spaces.
\end{thm}
In the case of a Hamiltonian action of a compact Lie group, the stratification $\S_\textrm{Ham}(\underline{S}_{\L})$ coincides with that in \cite{LeSj} (see also Remark \ref{hamliegpactdiffpartex}). \\
The partition $\S_\textrm{Ham}(S)$ of the smooth manifold $S$ turns out to be a Whitney stratification as well. Furthermore, in contrast to the stratification $\S_\textrm{Gp}(S)$ associated to the action groupoid, it is a constant rank stratification of the momentum map $J:S\to M$. This can be proved using the normal form theorem. Here we will not go into details on this, but rather focus on the proof of the theorems concerning the transverse momentum map. We can already give an outline of this.
\begin{proof}[Outline of the proof of Theorems \ref{canhamstratthm} and \ref{redspstratthm}] In the coming subsection we will show that the Hamiltonian Morita types are submanifolds of the orbit space. By part ii) of Remark \ref{improphammortyp}, it then follows that the pair $(\P_\textrm{Ham}(\underline{S}),\P_\mathcal{M}(\underline{M}))$ is a partition of the transverse momentum map (\ref{transmommap}) into submanifolds (as in Definition \ref{constrkstratdefi}). In complete analogy with our proof of Theorem \ref{stratleafspthm}, Propositions \ref{transgeommap}$a$ and \ref{moreqisoredring} imply that the triple $(\underline{J},\P_\textrm{Ham}(\underline{S}),\P_\mathcal{M}(\underline{M}))$ is smoothly homogeneous (as in Definition \ref{homdefimap}). In particular, $(\P_\textrm{Ham}(\underline{S}),\P_\mathcal{M}(\underline{M}))$ is a constant rank partition of (\ref{transmommap}) (see Remark \ref{smoothhomimpliesconstrk}). In Subsection \ref{locpropcanhamstratpropsec} we further prove that $\P_\textrm{Ham}(\underline{S})$ is locally finite and that $(\underline{J},\P_\textrm{Ham}(\underline{S}),\P_\mathcal{M}(\underline{M}))$ is locally semi-algebraic (as in Definition \ref{locsemalgdefimap}). Combining Lemma \ref{whitreghomprop} with part i) of Remark \ref{passtoconncomprem}, it then follows that $\S_\textrm{Ham}(\underline{S})$ is indeed a Whitney stratification of the orbit space, completing the proof of Theorem \ref{canhamstratthm}. Furthermore, Theorem \ref{redspstratthm} is then a consequence of Corollary \ref{fibwhitstratcor}.
\end{proof}
In the coming subsections we will address the remaining parts of the proof.
\subsubsection{Different partitions inducing the canonical stratifications} In this and the next subsection we study various local properties of the partition by Hamiltonian Morita types. To this end, it will be useful to consider the coarser partitions:
\begin{equation*} \P_{\cong_J}(S):=\P_{\cong}(S)\cap J^{-1}(\P_{\cong}(M))\quad \&\quad \P_{\cong_J}(\underline{S}):=\P_{\cong}(\underline{S})\cap \underline{J}^{-1}(\P_{\cong}(\underline{M})),
\end{equation*} where we take memberwise pre-images and intersections. Explicitly: $p,q\in S$ belong to the same member of $\P_{\cong_J}(S)$ if and only if $\mathcal{G}_{p}\cong \mathcal{G}_q$ and $\mathcal{G}_{J(p)}\cong \mathcal{G}_{J(q)}$.
\begin{defi} We call $\P_{\cong_J}(S)$ and $\P_{\cong_J}(\underline{S})$ the partitions by \textbf{$J$-isomorphism types}.
\end{defi}
In the remainder of this subsection, we will prove:
\begin{prop}\label{partprop} Both on $S$ and $\underline{S}$, the following hold.
\begin{itemize}\item[a)] Each $J$-isomorphism type is a submanifold with connected components of possibly varying dimension.
\item[b)] The $J$-isomorphism types and the Hamiltonian Morita types yield the same partition after passing to connected components.
\item[c)] Each Hamiltonian Morita type is (in fact) a submanifold with connected components of a single dimension.
\end{itemize} Moreover, the orbit projection $S\to \underline{S}$ restricts to a submersion between the Hamiltonian Morita types (respectively the $J$-isomorphism types).
\end{prop}
To prove this proposition we will compute the Hamiltonian Morita types and the $J$-isomorphism types in the local model for Hamiltonian actions. Two important remarks simplify this computation: first, the partitions by $J$-isomorphism types introduced above make sense for any Lie groupoid map; second, they are invariant under Morita equivalence of Lie groupoid maps. Therefore, the computation reduces to that for the groupoid map $\mathcal{J}_\mathfrak{p}$ of Example \ref{locmodmoreq}, which is the content of the lemma below.
\begin{lemma}\label{techlemisotype} Let $G$ be a compact Lie group, $H\subset G$ a closed subgroup and $(V,\omega_V)$ a symplectic $H$-representation. Fix an $H$-equivariant splitting $\mathfrak{p}:\mathfrak{h}^*\to \mathfrak{g}^*$ of (\ref{ses2poisabs}). Consider the groupoid map $\mathcal{J}_\mathfrak{p}$ defined in (\ref{bigJ_p}).
\begin{itemize}
\item[a)] The $J_\mathfrak{p}$-isomorphism type through the origin in $\mathfrak{h}^0\oplus V$ is equal to the linear subspace: \begin{equation*} (\mathfrak{h}^0)^G\oplus V^H
\end{equation*} where $(\mathfrak{h}^0)^G$ and $V^H$ are the sets of points in $\mathfrak{h}^0$ and $V$ that are fixed by $G$ and $H$.
\item[b)] The $G$-isomorphism type through the origin in $\mathfrak{g}^*$ is equal to $(\mathfrak{g}^*)^G$.
\item[c)] The restriction of $J_\mathfrak{p}$ to these isomorphism types is given by:
\begin{equation}\label{mommaplocmodcentisotyp} (\mathfrak{h}^0)^G\oplus V^H\to (\mathfrak{g}^*)^G, \quad (\alpha,v)\mapsto \alpha.
\end{equation}
\item[d)] Considered as subspace of the reduced differentiable space $(\mathfrak{h}^0\oplus V)/H$ (resp. $\mathfrak{g}^*/G$), the $J_\mathfrak{p}$-isomorphism type $(\mathfrak{h}^0)^G\oplus V^H$ (resp. $G$-isomorphism type $(\mathfrak{g}^*)^G$) is a closed submanifold.
\end{itemize}
\end{lemma}
\begin{proof} We use a standard fact: given a compact Lie group $H$ and a closed subgroup $K$, if $K$ is diffeomorphic to $H$, then $K=H$. Since the origin is fixed by $H$ it follows from this fact that for $(\alpha,v)\in \mathfrak{h}^0\oplus V$ we have:
\begin{align*} (\alpha,v)\cong (0,0) &\iff H_{(\alpha,v)}\cong H\\
&\iff H_{(\alpha,v)}=H\\
&\iff \alpha\in (\mathfrak{h}^0)^H \quad \&\quad v\in V^H.
\end{align*}
Similarly, for $\alpha\in \mathfrak{g}^*$, it follows that:
\begin{equation*} \alpha\cong 0\iff \alpha\in (\mathfrak{g}^*)^G.
\end{equation*} Moreover, (\ref{quadsympmommap}) implies that $J_V$ vanishes on $V^H$ and hence $J_\mathfrak{p}(\alpha,v)=\alpha$ for $v\in V^H$. Therefore: \begin{align*}
(\alpha,v)\cong_J(0,0)&\iff (\alpha,v)\cong (0,0)\quad \&\quad \alpha\cong 0, \\
&\iff \alpha\in (\mathfrak{h}^0)^G\quad \&\quad v\in V^H,
\end{align*} and we conclude that both $a$ and $b$ hold. Since $J_V$ vanishes on $V^H$, part $c$ follows as well. As for part $d$, it is clear that the canonical inclusion $(\mathfrak{h}^0)^G\oplus V^H\hookrightarrow (\mathfrak{h}^0\oplus V)/H$ is a closed topological embedding and a morphism of reduced differentiable spaces with respect to the standard manifold structure on the vector space $(\mathfrak{h}^0)^G\oplus V^H$. Furthermore, choosing an $H$-invariant linear complement to $(\mathfrak{h}^0)^G\oplus V^H$, we can extend any smooth function defined on an open in the vector space $(\mathfrak{h}^0)^G\oplus V^H$ (by zero) to an $H$-invariant smooth function defined on an open in $\mathfrak{h}^0\oplus V$. So, $(\mathfrak{h}^0)^G\oplus V^H$ is indeed a closed submanifold of $(\mathfrak{h}^0\oplus V)/H$. The argument for $(\mathfrak{g}^*)^G$ in $\mathfrak{g}^*/G$ is the same.
\end{proof}
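For a concrete instance of the lemma (a standard example; we use that the coadjoint representation of $\mathrm{SO}(3)$ on $\mathfrak{so}(3)^*\cong \mathbb{R}^3$ is isomorphic to its standard representation): take $G=\mathrm{SO}(3)$, $H=\mathrm{SO}(2)$ and $V=\mathbb{C}$ with $H$ acting by rotations and $\omega_V$ the standard symplectic form. Then $(\mathfrak{g}^*)^G=\{0\}$, hence also $(\mathfrak{h}^0)^G=\{0\}$, while $V^H=\{0\}$. So in this case both isomorphism types through the respective origins reduce to the origins themselves:
\begin{equation*} (\mathfrak{h}^0)^G\oplus V^H=\{0\}\subset \mathfrak{h}^0\oplus V, \qquad (\mathfrak{g}^*)^G=\{0\}\subset \mathfrak{g}^*.
\end{equation*}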
\begin{proof}[Proof of Proposition \ref{partprop}] Near a given orbit in $S$, we can identify the member of $\P_{\cong_J}(S)$ (resp. $\P_\textrm{Ham}(S)$) through this orbit (via the normal form theorem) with the corresponding member through the orbit $\O:=P/H$ in the local model for the Hamiltonian action (in the notation of Subsection \ref{hamlocmodconsec}). Using the Morita equivalence of Example \ref{locmodmoreq}, combined with Lemma \ref{techlemisotype} and the Morita invariance of the partitions by isomorphism types, we find that the $J_\theta$-isomorphism type through the orbit $\O$ in $S_\theta$ is a submanifold, being an open around $\O$ in:
\begin{equation}\label{centhamtyp} \O\times \left((\mathfrak{h}^0)^G\oplus V^H\right).
\end{equation} Therefore, the $J$-isomorphism types are submanifolds of $S$ with connected components of possibly varying dimension. Passing to the orbit space of the local model, we can again use the Morita equivalence of Example \ref{locmodmoreq} to identify the orbit space of the local model with an open neighbourhood of the origin in $(\mathfrak{h}^0\oplus V)/H$, as reduced differentiable spaces (see Proposition \ref{moreqisoredring}). By Lemma \ref{techlemisotype} and Morita invariance of the partitions by isomorphism types, the $J_\theta$-isomorphism type through $\O$ is identified with an open neighbourhood of the origin in the submanifold $(\mathfrak{h}^0)^G\oplus V^H$ of $(\mathfrak{h}^0\oplus V)/H$. Therefore, the $J$-isomorphism types are submanifolds of $(\underline{S},\mathcal{C}^\infty_{\underline{S}})$ with connected components of possibly varying dimension. This proves part $a$.

For part $b$, it suffices to prove that the Hamiltonian Morita type through the orbit $\O$ in the local model coincides with the $J_\theta$-isomorphism type computed above (by Lemma \ref{passconncomp} and the normal form theorem). That is, we have to verify that all $[p,\alpha,v]\in S_\theta$ such that $(\alpha,v)\in (\mathfrak{h}^0)^G\oplus V^H$ belong to the same Hamiltonian Morita type. To this end, we again use the Hamiltonian Morita equivalence of Example \ref{locmodmoreq}. Let $[p,\alpha,v]$ be as above. Then the Morita equivalence relates $[p,\alpha,v]$ to $(\alpha,v)$. Since $v\in V^H$, it holds for all $w\in V$ that:
\begin{equation}\label{quadmomfixset}
J_V(w+v)=J_V(w),
\end{equation} as follows from (\ref{quadsympmommap}). This implies that $\S\mathcal{N}_{(\alpha,v)}=V$ and therefore the conditions in Proposition \ref{finremhammoreq} are satisfied for the aforementioned Morita equivalence, at the points $[p,\alpha,v]$ and $(\alpha,v)$. Moreover, we have $H_{(\alpha,v)}=H$, $G_{J_\mathfrak{p}(\alpha,v)}=G$ and, by linearity of the $H$-action, $\S\mathcal{N}_{(\alpha,v)}$ and $V$ in fact coincide as $H$-representations. So, applying the proposition, we obtain an isomorphism $\mathcal{G}_{J_\theta([p,\alpha,v])}\cong G$ that restricts to an isomorphism $\mathcal{G}_{[p,\alpha,v]}\cong H$, together with a compatible isomorphism of symplectic representations: \begin{equation*} (\S\mathcal{N}_{[p,\alpha,v]},\omega_{[p,\alpha,v]})\cong (V,\omega_V).\end{equation*} So, all such $[p,\alpha,v]$ indeed belong to the same Hamiltonian Morita type.

For part $c$, it remains to show that, for each Hamiltonian Morita type in $S$ or $\underline{S}$, all connected components have the same dimension. This follows from Proposition \ref{transgeomham} and a dimension count.

Finally, in the above description of the Hamiltonian Morita types and $J$-isomorphism types in $S$ and $\underline{S}$ through $\O$, the orbit projection is identified (near $\O$) with the projection $\O\times \left((\mathfrak{h}^0)^G\oplus V^H\right)\to (\mathfrak{h}^0)^G\oplus V^H$. This shows that it restricts to a submersion between the members in $S$ and $\underline{S}$.
\end{proof}
\begin{rem}\label{hamliegpactdiffpartex} Let $G$ be a compact Lie group and $J:(S,\omega)\to \mathfrak{g}^*$ a Hamiltonian $G$-space. The partition in Example \ref{hamGspex}, which is an analogue of the partition by orbit types for proper Lie group actions (cf. Example \ref{exorbtyp}), induces the canonical Hamiltonian stratification as well after passing to connected components. Another interesting partition that induces the canonical Hamiltonian stratification in this way can be defined by the equivalence relation: $p\sim q$ if and only if there is a $g\in G$ such that $G_p=gG_qg^{-1}$ and $G_{J(p)}=gG_{J(q)}g^{-1}$, together with a compatible symplectic linear isomorphism $(\S\mathcal{N}_p,\omega_p)\cong (\S\mathcal{N}_q,\omega_q)$. This is an analogue of the partition by local types for proper Lie group actions. The fact that these indeed induce the canonical Hamiltonian stratification follows from the same arguments as in the proof above, using the normal form theorem with the explicit isomorphism of symplectic groupoids (\ref{isolocmodmgsmod}) (see Remark \ref{sharphamliegpactrem}). Similarly, the partition (\ref{indpartfibtrmommap}) of $\underline{S}_\L$ and the partition used in \cite{LeSj} (given by: $\O_p\sim \O_q$ if and only if there is a $g\in G$ such that $G_p=gG_qg^{-1}$) yield the same partition after passing to connected components.
\end{rem}
\begin{ex} Let $G$ be a compact Lie group and $J:(S,\omega)\to \mathfrak{g}^*$ a Hamiltonian $G$-space. The fixed point set $S^G$ is a member of the partition in Example \ref{hamGspex} (provided it is non-empty). From the above remark we recover the well-known fact that for any two points $p,q\in S$ belonging to the same connected component of $S^G$, the symplectic $G$-representations $(T_pS,\omega_p)$ and $(T_qS,\omega_q)$ are isomorphic.
\end{ex}
\begin{ex} Let $T$ be a torus and $J:(S,\omega)\to \t^*$ a Hamiltonian $T$-space. In this case, the partition in Example \ref{hamGspex} coincides with the partition by orbit types of the $T$-action. Furthermore, the above remark implies that for any two points $p,q\in S$ belonging to the same connected component of an orbit type with isotropy group $H$, the symplectic normal representations at $p$ and $q$ are isomorphic as symplectic $H$-representations.
\end{ex}
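To make this concrete in the simplest case (a standard example; we use the identification $\t^*\cong \mathbb{R}$, and sign conventions for the momentum map may differ from those used elsewhere): let $T=\mathbb{S}^1$ act on $(S,\omega)=(\mathbb{C},\d x\wedge \d y)$ by rotations, with momentum map
\begin{equation*} J(z)=\tfrac{1}{2}|z|^2.
\end{equation*} There are two orbit types: the fixed point $\{0\}$, at which the symplectic normal representation is $(\mathbb{C},\omega)$ with the weight one $\mathbb{S}^1$-action, and the open dense orbit type $\mathbb{C}\setminus\{0\}$, on which the action is free and the symplectic normal representations are trivial. Both orbit types are connected, so the conclusion of the previous paragraph is immediate in this example.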
\subsubsection{End of the proof}\label{locpropcanhamstratpropsec}
To complete the proof of Theorems \ref{canhamstratthm} and \ref{redspstratthm}, it remains to show:
\begin{prop}\label{weakwhitstratthm} The partition by Hamiltonian Morita types $\P_\textrm{Ham}(\underline{S})$ is locally finite and the triple $(\underline{J},\P_\textrm{Ham}(\underline{S}),\P_{\mathcal{M}}(\underline{M}))$ is locally semi-algebraic (as in Definition \ref{locsemalgdefimap}).
\end{prop}
\begin{proof}[Proof of Proposition \ref{weakwhitstratthm}] Let $p\in S$, let $\O_p$ be the orbit through $p$ and let $\L_x=J(\O_p)$ be the corresponding leaf through $x=J(p)$. Further, let $G=\mathcal{G}_x$ denote the isotropy group of $\mathcal{G}$ at $x$, $H=\mathcal{G}_p$ the isotropy group of the action at $p$, and let $(V,\omega_V)=(\S\mathcal{N}_p,\omega_p)$ denote the symplectic normal representation at $p$. As in the proof of Proposition \ref{eqcharhammortyp}, there are invariant opens $W$ around $\L_x$ in $M$ and $U$ around $\O_p$ in $J^{-1}(W)$, together with a Hamiltonian Morita equivalence between the action of $(\mathcal{G},\Omega)\vert_W$ along $J:U\to W$ and a restriction of the groupoid map (\ref{bigJ_p}), that relates $\O_p$ to the origin in $\mathfrak{h}^0\oplus V$. Here, we can arrange the opens in $\mathfrak{h}^0\oplus V$ and $\mathfrak{g}^*$ to which $(\ref{bigJ_p})$ is restricted to be invariant open balls $B_{\mathfrak{g}^*}\subset \mathfrak{g}^*$ and $B_{\mathfrak{h}^0\oplus V}\subset J_\mathfrak{p}^{-1}(B_{\mathfrak{g}^*})$ (with respect to a choice of invariant inner products) centered around the respective origins. Let $\rho:\mathfrak{h}^0\oplus V\to \mathbb{R}^n$ and $\sigma:\mathfrak{g}^*\to \mathbb{R}^m$ be Hilbert maps (see Subsection \ref{reddiffspsec}). By the same reasoning as in \cite[Example 6.5]{LeSj}, since $J_\mathfrak{p}:\mathfrak{h}^0\oplus V\to \mathfrak{g}^*$ is an $H$-equivariant and polynomial map, there is a polynomial map $P:\mathbb{R}^n\to \mathbb{R}^m$ that fits into a commutative square:
\begin{center}
\begin{tikzcd}
(\mathfrak{h}^0\oplus V)/H \arrow[d,"\underline{J_\mathfrak{p}}"] \arrow[r,"\underline{\rho}"] & \mathbb{R}^n\arrow[d,"P"] \\
\mathfrak{g}^*/G \arrow[r,"\underline{\sigma}"] & \mathbb{R}^m
\end{tikzcd}
\end{center}
In view of Proposition \ref{transgeommap}$a$, Proposition \ref{moreqisoredring} and the discussion at the end of Subsection \ref{reddiffspsec}, we obtain a diagram of reduced differentiable spaces:
\begin{center}
\begin{tikzcd}
\left(\underline{U},\mathcal{C}^\infty_{\underline{S}} \vert_{\underline{U}}\right) \arrow[r,"\sim"] \arrow[d,"\underline{J}"] & \left(\underline{B}_{\mathfrak{h}^0\oplus V},\mathcal{C}^\infty_{(\mathfrak{h}^0\oplus V)/H}\vert_{\underline{B}_{\mathfrak{h}^0\oplus V}}\right) \arrow[d,"\underline{J_\mathfrak{p}}"] \arrow[r,"\sim"] & \left(\rho(B_{\mathfrak{h}^0\oplus V}), \mathcal{C}^\infty_{\rho(B_{\mathfrak{h}^0\oplus V})}\right)\arrow[d,"P"] \\
\left(\underline{W},\mathcal{C}^\infty_{\underline{M}} \vert_{\underline{W}}\right) \arrow[r,"\sim"] & \left(\underline{B}_{\mathfrak{g}^*},\mathcal{C}^\infty_{\mathfrak{g}^*/G}\vert_{\underline{B}_{\mathfrak{g}^*}}\right) \arrow[r,"\sim"] & \left(\sigma(B_{\mathfrak{g}^*}),\mathcal{C}^\infty_{\sigma(B_{\mathfrak{g}^*})}\right)
\end{tikzcd}
\end{center}
in which all horizontal arrows are isomorphisms. Due to Morita invariance of the partitions by isomorphism types, the partition of $\underline{U}$ by $J$-isomorphism types is identified with the partition of $\rho(B_{\mathfrak{h}^0\oplus V})$ consisting of the subsets of the form:
\begin{equation*} \rho(B_{\mathfrak{h}^0\oplus V})\cap\rho(\Sigma_{\mathfrak{h}^0\oplus V})\cap P^{-1}(\sigma(\Sigma_{\mathfrak{g}^*})) , \quad\quad \Sigma_{\mathfrak{h}^0\oplus V}\in \P_{\cong}(\mathfrak{h}^0\oplus V), \quad \Sigma_{\mathfrak{g}^*}\in \P_{\cong}(\mathfrak{g}^*).
\end{equation*}
Recall from the proof of Theorem \ref{stratleafspthm} that the canonical stratification of the orbit space of a real, finite-dimensional representation of a compact Lie group has finitely many strata, each of which is mapped onto a semi-algebraic set by any Hilbert map. The same must then hold for the partition by isomorphism types of such a representation. The above partition of $\rho(B_{\mathfrak{h}^0\oplus V})$ therefore also has finitely many members, each of which is semi-algebraic, for $P$ is polynomial and $\rho(B_{\mathfrak{h}^0\oplus V})$ is semi-algebraic (being the image of a semi-algebraic set under a semi-algebraic map). The same then holds for the partition obtained after passing to connected components, because any semi-algebraic set has finitely many connected components, each of which is again semi-algebraic. By Proposition \ref{partprop}$b$, the members of $\P_\textrm{Ham}(\underline{S})\vert_{\underline{U}}$ are unions of the connected components of the $J$-isomorphism types in $\underline{U}$. So, $\P_\textrm{Ham}(\underline{S})\vert_{\underline{U}}$ has finitely many members, each of which is mapped onto a semi-algebraic set by the above chart for $(\underline{S},\mathcal{C}^\infty_{\underline{S}})$, the image being a finite union of semi-algebraic sets. By similar reasoning, the above chart for $(\underline{M},\mathcal{C}^\infty_{\underline{M}})$ maps the members of $\P_\mathcal{M}(\underline{M})\vert_{\underline{W}}$ onto semi-algebraic sets. Since $P$ is polynomial, it restricts to semi-algebraic maps between the images of these members under the above charts. So, this proves the proposition.
\end{proof}
We end this section with a concrete example, similar to that in \cite{ArCuGot}.
\begin{ex}\label{concreteex} Let $G=\textrm{SU}(2)\times\textrm{SU}(2)$ and consider the circle in $G$ given by the closed subgroup:
\begin{equation*} H=\left\{ \left(\begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}, \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}\right)\in \textrm{SU}(2)\times \textrm{SU}(2) \textrm{ }\middle\vert\textrm{ }\theta\in \mathbb{R}\right\}.
\end{equation*} The cotangent bundle $T^*(G/H)$ of the homogeneous space $G/H$ is naturally a Hamiltonian $G$-space, and the canonical Hamiltonian strata can be realized as concrete semi-algebraic submanifolds of $\mathbb{R}^5$, as follows. The orbit space of the $G$-action on $T^*(G/H)$ can be canonically identified with the orbit space of the linear $H$-action on $\mathfrak{h}^0$ induced by the coadjoint action of $G$ on $\mathfrak{g}^*$, and the transverse momentum map becomes the map $\underline{J}:\mathfrak{h}^0/H\to \mathfrak{g}^*/G$ induced by the inclusion $\mathfrak{h}^0\hookrightarrow \mathfrak{g}^*$. To find Hilbert maps for $\mathfrak{g}^*$ and $\mathfrak{h}^0$, consider the $\text{SU}(2)$-invariant inner product on $\mathfrak{su}(2)$ given by:
\begin{equation}\label{invinprodsu(2)} \langle A, B \rangle_{\mathfrak{su}(2)}=-\textrm{Trace}(AB)\in \mathbb{R},
\end{equation} and notice that under the identification of $\mathfrak{su}(2)$ with $\mathbb{R}\times \mathbb{C}$ obtained by writing:
\begin{equation*} \mathfrak{su}(2)=\left\{\begin{pmatrix} i\theta & -\bar{z} \\ z & -i\theta \end{pmatrix}\in \mathfrak{gl}(2,\mathbb{C}) \text{ }\middle\vert\text{ } \theta \in \mathbb{R},\text{ } z\in \mathbb{C}\right\},
\end{equation*} the inner product (\ref{invinprodsu(2)}) corresponds (up to a factor of $2$) to the standard Euclidean inner product. Using the induced $G$-invariant inner product on $\mathfrak{g}=\mathfrak{su}(2)\times \mathfrak{su}(2)$, we identify $\mathfrak{g}^*$ with $\mathfrak{g}$. The orbits of the adjoint $\textrm{SU}(2)$-action on $\mathfrak{su}(2)$ are the origin and the concentric spheres centered at the origin. Using this, one readily sees that the algebra of $\textrm{SU}(2)$-invariant polynomials on $\mathfrak{su}(2)$ is generated by the single polynomial given by the square of the norm induced by (\ref{invinprodsu(2)}). So, the algebra of $G$-invariant polynomials on $\mathfrak{g}^*$ is generated by:
\begin{equation*} \sigma_1(\theta_1,z_1,\theta_2,z_2)=\theta_1^2+|z_1|^2, \quad \sigma_2(\theta_1,z_1,\theta_2,z_2)=\theta_2^2+|z_2|^2, \quad \theta_1,\theta_2\in \mathbb{R}, \quad z_1,z_2\in \mathbb{C}.
\end{equation*}
On the other hand, $\mathfrak{h}^0$ is identified with the orthogonal complement:
\begin{equation*} \mathfrak{h}^{\perp}=\left\{ \left(\begin{pmatrix} i\theta & -\bar{z}_1 \\ z_1 & -i\theta \end{pmatrix}, \begin{pmatrix} -i\theta & -\bar{z}_2 \\ z_2 & i\theta \end{pmatrix}\right)\in \mathfrak{su}(2)\times \mathfrak{su}(2) \textrm{ }\middle\vert\textrm{ }\theta\in \mathbb{R},\text{ }z_1,z_2\in \mathbb{C}\right\}.
\end{equation*} Identifying $\mathfrak{h}^\perp$ with $\mathbb{R}\times \mathbb{C}^2$ accordingly, the $H$-orbits are identified with those of the $\mathbb{S}^1$-action:
\begin{equation*} \lambda\cdot (\theta,z_1,z_2)=(\theta,\lambda z_1,\lambda z_2), \quad \lambda\in \mathbb{S}^1, \quad (\theta,z_1,z_2)\in \mathbb{R}\times \mathbb{C}^2.
\end{equation*} In light of this, the algebra of $H$-invariant polynomials on $\mathfrak{h}^0$ is generated by:
\begin{align*} \rho_1(\theta,z_1,z_2)&=\theta, & \rho_2(\theta,z_1,z_2)&=|z_1|^2, & \rho_3(\theta,z_1,z_2)&=|z_2|^2,\\
\rho_4(\theta,z_1,z_2)&=\textrm{Re}(z_1\bar{z}_2), & \rho_5(\theta,z_1,z_2)&=\textrm{Im}(z_1\bar{z}_2).
\end{align*}
Now, consider the polynomial map:
\begin{equation*} P:\mathbb{R}^5\to \mathbb{R}^2, \quad P(x_1,...,x_5)=\left(x_1^2+x_2,x_1^2+x_3\right).
\end{equation*} Then we have a commutative square:
\begin{center}
\begin{tikzcd} \mathfrak{h}^0/H\arrow[r,"\underline{\rho}"]\arrow[d,"\underline{J}"] & \mathbb{R}^5 \arrow[d,"P"]\\
\mathfrak{g}^*/G \arrow[r,"\underline{\sigma}"] & \mathbb{R}^2
\end{tikzcd}
\end{center} Indeed, in the coordinates above, $\sigma_1\vert_{\mathfrak{h}^0}=\theta^2+|z_1|^2=\rho_1^2+\rho_2$ and $\sigma_2\vert_{\mathfrak{h}^0}=\theta^2+|z_2|^2=\rho_1^2+\rho_3$, which is precisely the commutativity of the square. The image of $\mathfrak{h}^0/H$ under $\underline{\rho}$ is the semi-algebraic subset of $\mathbb{R}^5$ given by:
\begin{equation*}\{x_2\geq 0,\textrm{ }x_3\geq 0, \textrm{ }x_4^2+x_5^2=x_2x_3\},
\end{equation*} (the relation $x_4^2+x_5^2=x_2x_3$ reflecting the identity $|z_1\bar{z}_2|^2=|z_1|^2|z_2|^2$), whereas the image of $\mathfrak{g}^*/G$ under $\underline{\sigma}$ is the semi-algebraic subset of $\mathbb{R}^2$ given by:
\begin{equation*} \{y_1\geq 0,\text{ }y_2\geq 0\}.
\end{equation*} The canonical stratification of the orbit space of the $G$-action on $T^*(G/H)$ has two strata, corresponding to the semi-algebraic submanifolds of $\mathbb{R}^5$ given by:
\begin{align}\label{orbtypstratex1} &\{x_2=x_3=x_4=x_5=0\}, \\
\label{orbtypstratex2} &\{x_4^2+x_5^2=x_2x_3\}\cap (\{x_2> 0\}\cup\{x_3>0\}).
\end{align} On the other hand, the canonical stratification of $\mathfrak{g}^*/G$ has four strata, corresponding to the semi-algebraic submanifolds of $\mathbb{R}^2$ given by: \begin{equation*} \{y_1=y_2=0\},\quad \{y_1>0,\text{ }y_2=0\},\quad \{y_1=0,\text{ }y_2>0\}, \quad\{y_1>0,\text{ }y_2>0\}.\end{equation*} From this we see that the canonical Hamiltonian stratification of the orbit space of the Hamiltonian $G$-space $T^*(G/H)$ has six strata, three of which correspond to the semi-algebraic submanifolds of (\ref{orbtypstratex1}) given by the respective intersections of (\ref{orbtypstratex1}) with $\{x_1<0\}$, $\{x_1=0\}$ and $\{x_1>0\}$, and the other three of which correspond to the semi-algebraic submanifolds of (\ref{orbtypstratex2}) given by:
\begin{align*} \{&x_1=0,\text{ }x_2>0,\text{ }x_3=x_4=x_5=0\},\quad \{x_1=x_2=0,\text{ }x_3>0,\text{ }x_4=x_5=0\},\\
&\{x_1^2+x_2>0,\text{ }x_1^2+x_3>0,\text{ }x_4^2+x_5^2=x_2x_3\}\cap (\{x_2> 0\}\cup\{x_3>0\}).
\end{align*} The restriction of $P$ to any of the first five strata is injective, hence its fibers are points. The restriction of $P$ to the last stratum has $2$-dimensional fibers. In fact, given $y_1,y_2>0$ the fiber of this restricted map over $(y_1,y_2)\in \mathbb{R}^2$ is projected diffeomorphically onto the semi-algebraic submanifold of $\mathbb{R}^3$ given by:
\begin{equation*} \{(x_1,x_4,x_5)\in \mathbb{R}^3\mid x_4^2+x_5^2=(y_1-x_1^2)(y_2-x_1^2),\text{ }x_1^2<\max(y_1,y_2)\},
\end{equation*} which is semi-algebraically diffeomorphic to a $2$-sphere if $y_1\neq y_2$, whereas it is semi-algebraically diffeomorphic to a $2$-sphere with two points removed if $y_1=y_2$.
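Both claims follow by elementary computation. If $y_1=y_2=y$, the defining equation becomes $x_4^2+x_5^2=(y-x_1^2)^2$ with $x_1^2<y$, so the slice at each $x_1\in (-\sqrt{y},\sqrt{y})$ is a circle of positive radius $y-x_1^2$; the fiber is an open cylinder, that is, a $2$-sphere with two points removed. If instead $y_1<y_2$ (say), then non-negativity of $x_4^2+x_5^2=(y_1-x_1^2)(y_2-x_1^2)$ together with $x_1^2<y_2$ forces $x_1^2\leq y_1$, and the slices are circles that shrink to single points precisely at $x_1=\pm\sqrt{y_1}$; the fiber is therefore a $2$-sphere.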
\end{ex}
\subsection{The regular parts of the stratifications}\label{regpartsec}
\subsubsection{The regular part of a stratification}
To start with, we give a reminder on the regular part of a stratification, mostly following the exposition in \cite{CrMe}. A stratification $\S$ of a space $X$ comes with a natural partial order given by:
\begin{equation}\label{partordstrat} \Sigma\leq \Sigma' \quad \iff \quad \Sigma \subset \overline{\Sigma'}.
\end{equation} We say that a stratum $\Sigma\in \S$ is \textbf{maximal} if it is maximal with respect to this partial order. Maximal strata can be characterized as follows.
\begin{prop}\label{regpartopen} Let $(X,\S)$ be a stratified space. Then $\Sigma \in\S$ is maximal if and only if it is open in $X$. Moreover, the union of all maximal strata is open and dense in $X$.
\end{prop}
\begin{defi} The union of all maximal strata of a stratified space $(X,\S)$ is called the \textbf{regular part} of the stratified space.
\end{defi}
Given a stratification $\S$, an interesting question is whether it admits a greatest element with respect to the partial order (\ref{partordstrat}). This is equivalent to asking whether the regular part of $\S$ is connected: by Proposition \ref{regpartopen} the regular part is an open and dense union of the (connected) maximal strata, so it is connected if and only if there is exactly one maximal stratum, which is then dense in $X$ and hence the greatest element.
\begin{ex}\label{prinorbthm} Let $G$ be a Lie group acting properly on a manifold $M$. The partition by orbit types $\P_\sim(\underline{M})$ (see Example \ref{exorbtyp}) comes with a partial order of its own. Namely, if $\underline{M}_x$ and $\underline{M}_y$ denote the orbit types containing the respective orbits $\O_x$ and $\O_y$, then by definition:
\begin{equation*} \underline{M}_x\leq \underline{M}_y\quad \iff G_y \text{ is conjugate in $G$ to a subgroup of }G_x.
\end{equation*} The principal orbit type theorem states that, if $\underline{M}$ is connected, then there is a greatest element with respect to this partial order, called the principal orbit type, which is connected, open and dense in $\underline{M}$. In this case, the regular part of $\S_\textrm{Gp}(\underline{M})$ coincides with the principal orbit type; in particular, it is connected. On the other hand, the regular part of $\S_\textrm{Gp}(M)$ need not be connected, even if $M$ is connected.
\end{ex}
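A minimal instance of this last phenomenon: let $\mathbb{Z}_2$ act on $M=\mathbb{R}$ by $x\mapsto -x$. The orbit types are $\{0\}$ and $\mathbb{R}\setminus\{0\}$, so $\S_\textrm{Gp}(M)$ has the three strata $(-\infty,0)$, $\{0\}$ and $(0,\infty)$, and the regular part $\mathbb{R}\setminus\{0\}$ is disconnected, even though $M$ is connected. On the orbit space $\underline{M}\cong [0,\infty)$ the strata are $\{0\}$ and $(0,\infty)$, and the regular part is the connected principal orbit type $(0,\infty)$. Note that the disconnectedness on $M$ is caused by the codimension one stratum $\{0\}$ (compare with Lemma \ref{cod1stratlem} below).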
\begin{ex}\label{princtypliegpoidex} Let $\mathcal{G}\rightrightarrows M$ be a proper Lie groupoid. We denote the respective regular parts of $\S_\textrm{Gp}(M)$ and $\S_\textrm{Gp}(\underline{M})$ as $M^\textrm{princ}$ and $\underline{M}^\textrm{princ}$. From the linearization theorem it follows that a point $x$ in $M$ belongs to $M^\textrm{princ}$ if and only if the action of $\mathcal{G}_x$ on $\mathcal{N}_x$ is trivial. From this it is clear that $M^\textrm{princ}$ and $\underline{M}^\textrm{princ}$ are unions of Morita types. The analogue of the principal orbit type theorem for Lie groupoids \cite[Theorem 15]{CrMe} states that, if $\underline{M}$ is connected, then $\underline{M}^\textrm{princ}$ is connected.
\end{ex}
The lemma below gives a useful criterion for the regular part to be connected.
\begin{lemma}\label{cod1stratlem} Let $M$ be a connected manifold and $\S$ a stratification of $M$ by submanifolds. If $\S$ has no codimension one strata, then the regular part of $\S$ is connected.
\end{lemma}
\begin{proof} As in the proof of \cite[Proposition 2.8.5]{DuKo}, by a transversality principle \cite[pg. 73]{GuPo} any smooth path $\gamma$ that starts and ends in the regular part is homotopic in $M$ to a path $\widetilde{\gamma}$ that intersects only strata of codimension at most $1$ and starts and ends at the same points as $\gamma$. Since $\S$ has no codimension one strata, $\widetilde{\gamma}$ intersects only open strata and hence stays in the regular part. So, any two points of the regular part can be joined by a path inside the regular part.
\end{proof}
\begin{ex}\label{infstratex} Although $\S_\textrm{Gp}(M)$ may have codimension one strata, the base $M$ of a proper Lie groupoid $\mathcal{G}$ admits a second interesting Whitney stratification that does not have codimension one strata: the \textbf{infinitesimal stratification} $\S_\textrm{Gp}^\textrm{inf}(M)$. As for the canonical stratification, the infinitesimal stratification is induced by various different partitions of $M$. Indeed, each of the partitions mentioned in Subsection \ref{stratdefsec} has an infinitesimal analogue, obtained by replacing the Lie groups in their defining equivalence relations by the corresponding Lie algebras. Yet another partition that induces the infinitesimal stratification on $M$ is the partition $\P_\textrm{dim}(M)$ of $M$ by \textbf{dimension types}, defined by the equivalence relation: $x\sim y$ if and only if $\dim(\L_x)=\dim(\L_y)$, or equivalently, $\dim(\mathfrak{g}_x)=\dim(\mathfrak{g}_y)$. The members of each of these partitions are invariant. Therefore, each of these descends to a partition of $\underline{M}$. However, the members of $\S_\textrm{Gp}^\textrm{inf}(\underline{M})$ may fail to be submanifolds of the leaf space. For this reason we only consider the stratification on $M$.
We let $M^\textrm{reg}$ denote the regular part of the infinitesimal stratification $\S_\textrm{Gp}^\textrm{inf}(M)$. As for the canonical stratification, this has a Lie theoretic description: a point $x$ in $M$ belongs to $M^\textrm{reg}$ if and only if the action of $\mathfrak{g}_x$ on $\mathcal{N}_x$ is trivial. Since the infinitesimal stratification has no codimension one strata, Lemma \ref{cod1stratlem} applies. Therefore, $M^\textrm{reg}$ is connected if $M$ is connected.
\end{ex}
\subsubsection{The infinitesimal Hamiltonian stratification}
In the remainder of this section we will study the regular part of both the canonical Hamiltonian stratification and of a second stratification associated to a Hamiltonian action of a proper symplectic groupoid, that we call the \textbf{infinitesimal Hamiltonian stratification}. We include the latter in this section, because a particularly interesting property of this stratification is that its regular part is better behaved than that of the canonical Hamiltonian stratification. To introduce the infinitesimal Hamiltonian stratification, let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Each of the partitions of $S$ defined in Section \ref{canhamstratsec} has an infinitesimal counterpart, obtained by replacing the role of the isotropy Lie groups by the corresponding isotropy Lie algebras. For example, by definition two points $p,q\in S$ belong to the same \textbf{infinitesimal Hamiltonian Morita type} if there is an isomorphism of pairs of Lie algebras:
\begin{equation*} (\mathfrak{g}_{J(p)},\mathfrak{g}_p)\cong (\mathfrak{g}_{J(q)},\mathfrak{g}_q)
\end{equation*} together with a compatible symplectic linear isomorphism:
\begin{equation*} (\S\mathcal{N}_p,\omega_p)\cong (\S\mathcal{N}_q,\omega_q),
\end{equation*} where compatibility is now meant with respect to the Lie algebra actions. These partitions induce, after passing to connected components, one and the same Whitney stratification $\S_\textrm{Ham}^\text{inf}(S)$ of $S$: the infinitesimal Hamiltonian stratification. There is in fact an even simpler partition that induces this stratification, obtained from the partitions by dimensions of the orbits on $S$ and the leaves of $M$ (see Example \ref{infstratex}):
\begin{equation}\label{J-dimtypdef} \P_{\textrm{dim}_J}(S):=\P_\textrm{dim}(S)\cap J^{-1}(\P_\textrm{dim}(M)),
\end{equation} where we take memberwise intersections. Explicitly, two points $p,q\in S$ belong to the same member of (\ref{J-dimtypdef}) if and only if $\dim(\O_p)=\dim(\O_q)$ and $\dim(\L_{J(p)})=\dim(\L_{J(q)})$. That the members of the above partitions are submanifolds of $S$ (with connected components of possibly varying dimension) and that all of these partitions indeed yield one and the same partition $\S_\textrm{Ham}^\textrm{inf}(S)$ after passing to connected components follows from the same type of arguments as in the proof of Proposition \ref{partprop}. From the normal form theorem it further follows that $\S_\textrm{Ham}^\textrm{inf}(S)$ is a constant rank stratification of the momentum map.
\subsubsection{Lie theoretic description of the regular parts}\label{liedescrpregpartsec}
Given a proper symplectic groupoid $(\mathcal{G},\Omega)$ and a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$, we will use the following notation for the regular parts of the various stratifications that we consider.
\begin{itemize} \item For the canonical Hamiltonian stratifications $\S_\textrm{Ham}(S)$ and $\S_\textrm{Ham}(\underline{S})$, and the infinitesimal Hamiltonian stratification $\S_\textrm{Ham}^\text{inf}(S)$ of the Hamiltonian $(\mathcal{G},\Omega)$-action:
\begin{equation*} S^\textrm{princ}_\textrm{Ham}, \quad \quad \underline{S}^\textrm{princ}_\textrm{Ham},\quad\quad S^\textrm{reg}_\textrm{Ham}.
\end{equation*}
\item For the canonical stratifications $\S_\textrm{Gp}(S)$ and $\S_\textrm{Gp}(\underline{S})$ and the infinitesimal stratification $\S^\textrm{inf}_\textrm{Gp}(S)$ of the $\mathcal{G}$-action:
\begin{equation*} S^\textrm{princ}, \quad \quad \underline{S}^\textrm{princ},\quad\quad S^\textrm{reg}.
\end{equation*}
\item For the stratification $\S_\textrm{Ham}(\underline{S}_\L)$ on the reduced space over a leaf $\L$:
\begin{equation*}
\underline{S}_\L^\textrm{princ}.
\end{equation*}
\end{itemize}
\begin{rem} Proposition \ref{regpartopen}, together with the fact that the orbit projection $q$ is open, implies:
\begin{equation*} S^\textrm{princ}=q^{-1}(\underline{S}^\textrm{princ})\quad\& \quad S^\textrm{princ}_\textrm{Ham}=q^{-1}(\underline{S}^\textrm{princ}_\textrm{Ham}).
\end{equation*} Furthermore, there are obvious inclusions:
\begin{center}
\begin{tikzcd} & S^\textrm{princ} \arrow[hookrightarrow]{rd} & \\
S^\textrm{princ}_\textrm{Ham} \arrow[hookrightarrow]{ru} \arrow[hookrightarrow]{rd} & & S^\textrm{reg} \\
& S^\textrm{reg}_\textrm{Ham} \arrow[hookrightarrow]{ru} &
\end{tikzcd}
\end{center}
\end{rem} We have the following Lie theoretic description of the regular parts.
\begin{prop}\label{regpartalg} Let $p\in S$ and denote $x=J(p)\in M$. Then the following hold.
\begin{itemize}\item[a)] $p\in S^\textrm{princ}$ if and only if the actions of $\mathcal{G}_p$ on both $\mathfrak{g}_p^0$ and on $\S\mathcal{N}_p$ are trivial.
\item[b)] $p\in S^\textrm{reg}$ if and only if the actions of $\mathfrak{g}_p$ on both $\mathfrak{g}_p^0$ and on $\S\mathcal{N}_p$ are trivial.
\item[c)] $p\in S^\textrm{princ}_\textrm{Ham}$ if and only if $p\in S^\textrm{princ}$ and $\mathcal{G}_x$ fixes $\mathfrak{g}_p^0$.
\item[d)] $p\in S^\textrm{reg}_\textrm{Ham}$ if and only if $p\in S^\textrm{reg}$ and $\mathfrak{g}_x$ fixes $\mathfrak{g}_p^0$.
\item[e)] $\O_p\in \underline{S}^\textrm{princ}_\L$ if and only if the action of $\mathcal{G}_p$ on $(J_{\S\mathcal{N}_p})^{-1}(0)$ is trivial.
\end{itemize}
\end{prop}
\begin{proof} We will only prove statement $c$, as the other statements follow by entirely similar reasoning. In view of the above remark, we may as well work on the level of $\underline{S}$. Let $G=\mathcal{G}_x$, $H=\mathcal{G}_p$ and $V=\S\mathcal{N}_p$. As in the proof of Proposition \ref{partprop}, near $\O_p$ we can identify the orbit space $\underline{S}$ with an open neighbourhood of the origin in $(\mathfrak{h}^0\oplus V)/H$, in such a way that $\O_p$ is identified with the origin and the stratum $\Sigma\in \S_\textrm{Ham}(\underline{S})$ through $\O_p$ is identified (near $\O_p$) with an open in $(\mathfrak{h}^0)^G\oplus V^H$. By invariance under scaling, the origin lies in the interior of $(\mathfrak{h}^0)^G\oplus V^H$ in $(\mathfrak{h}^0\oplus V)/H$ if and only if $(\mathfrak{h}^0)^G=\mathfrak{h}^0$ and $V=V^H$. So, statement $c$ follows.
\end{proof}
Proposition \ref{regpartalg} has the following direct consequence.
\begin{cor}\label{prinJstrat} The canonical Hamiltonian stratification $\S_\textrm{Ham}(S^\textrm{princ})$ of the restriction of the Hamiltonian $(\mathcal{G},\Omega)$-action on $S$ to $S^\textrm{princ}$ consists of strata of $\S_\textrm{Ham}(S)$. In particular, the regular part of $\S_\textrm{Ham}(S^\textrm{princ})$ coincides with $S^\textrm{princ}_\textrm{Ham}$. The same goes for the stratifications on $\underline{S}$ and the infinitesimal counterparts on $S$.
\end{cor}
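Another direct consequence of Proposition \ref{regpartalg}: if the isotropy groups of $\mathcal{G}$ are abelian (as happens, for instance, for Hamiltonian torus actions), then the coadjoint-type actions of $\mathcal{G}_x$ and $\mathfrak{g}_x$ on $\mathfrak{g}_p^0\subset \mathfrak{g}_x^*$ are trivial, so that parts $c$ and $d$ reduce to parts $a$ and $b$; that is:
\begin{equation*} S^\textrm{princ}_\textrm{Ham}=S^\textrm{princ} \quad\&\quad S^\textrm{reg}_\textrm{Ham}=S^\textrm{reg}.
\end{equation*}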
\subsubsection{Principal type theorems}
Next, for each of the stratifications listed before, we address the question of whether the regular part is connected. As in \cite[Section 2.8]{DuKo}, our strategy to answer this will be to study the occurrence of codimension one strata. First of all, we have:
\begin{thm}\label{printypthminf} The infinitesimal Hamiltonian stratification $\S_\textrm{Ham}^\text{inf}(S)$ has no codimension one strata. In particular, if $S$ is connected, then $S^\textrm{reg}_\textrm{Ham}$ is connected as well.
\end{thm}
The following will be useful to prove this.
\begin{lemma}\label{onedimreplem} Let $H$ be a compact Lie group and $W$ a real one-dimensional representation of $H$. Then $H$ acts by reflection in the origin. In particular, if $H$ is connected, then $H$ acts trivially.
\end{lemma}
\begin{proof} By compactness of $H$, there is an $H$-invariant inner product $g$ on $W$. Therefore the representation $H\to \textrm{GL}(W)$ takes image in the orthogonal group $\textrm{O}(W,g)=\{\pm 1\}$. If $H$ is connected, its image is a connected subgroup of the discrete group $\{\pm 1\}$ and is therefore trivial.
\end{proof}
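Both possibilities in the lemma occur: $\mathbb{Z}_2$ acts on $\mathbb{R}$ by the sign representation, whereas every real one-dimensional representation of the connected group $\mathbb{S}^1$ is trivial.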
\begin{proof}[Proof of Theorem \ref{printypthminf}] We will argue by contradiction. Suppose that $p\in S$ belongs to a codimension one stratum. Let $H$ and $G$ denote the respective identity components of $\mathcal{G}_p$ and $\mathcal{G}_{J(p)}$, and let $V=\S\mathcal{N}_p$. The normal form theorem and a computation analogous to the one for Lemma \ref{techlemisotype} show that $(\mathfrak{h}^0)^{G}\oplus V^{H}$ must have codimension one in $\mathfrak{h}^0\oplus V$. Since $H$ is compact, $V^{H}\subset V$ is a symplectic linear subspace, and so it has even codimension. Therefore $V^H=V$ and $(\mathfrak{h}^0)^G$ has codimension one in $\mathfrak{h}^0$. Appealing to Lemma \ref{onedimreplem}, we find that $H$ acts trivially on any $H$-invariant linear complement to $(\mathfrak{h}^0)^G$ in $\mathfrak{h}^0$. By compactness of $H$ we can always find such a complement, hence $H$ fixes all of $\mathfrak{h}^0$. Therefore, $\mathfrak{h}$ is a Lie algebra ideal in $\mathfrak{g}$. Since $G$ is connected, this means that $\mathfrak{h}^0$ is invariant under the coadjoint action of $G$. Arguing as for $H$, it now follows that $G$ must actually fix all of $\mathfrak{h}^0$, contradicting the fact that $(\mathfrak{h}^0)^G$ has positive codimension in $\mathfrak{h}^0$.
\end{proof}
The situation for $\S_\textrm{Ham}(\underline{S})$ and $\S_\textrm{Ham}(S)$ is more subtle. Indeed, the regular parts of the canonical Hamiltonian stratification on both $S$ and $\underline{S}$ can be disconnected, even if both $S$, as well as the source-fibers and the base of $\mathcal{G}$ are connected. This is shown by the example below.
\begin{ex}\label{prinpartdiscon} Consider the circle $\mathbb{S}^1$, the real line $\mathbb{R}$ and the $2$-dimensional torus $\mathbb{T}^2$ equipped with the $\mathbb{Z}_2$-actions given by:
\begin{equation*} (\pm1)\cdot e^{i\theta}=e^{\pm i\theta}, \quad\quad (\pm1)\cdot x=\pm x, \quad\quad (\pm 1)\cdot (e^{i\theta_1},e^{i\theta_2})=(\pm e^{i\theta_1}, e^{i\theta_2}).
\end{equation*} Now, consider the proper Lie groupoid:
\begin{equation}\label{gpoiddisconprincpartex} (\mathbb{T}^2\times \mathbb{T}^2)\times_{\mathbb{Z}_2} (\mathbb{S}^1\times \mathbb{R})\rightrightarrows \mathbb{T}^2\times_{\mathbb{Z}_2}\mathbb{R}
\end{equation} with source, target and multiplication given by:
\begin{align*} s([e^{i\theta_1},e^{i\theta_2},e^{i\theta_3}, e^{i\theta_4},e^{i\theta},x])&=[e^{i\theta_3},e^{i\theta_4},x],\\
t([e^{i\theta_1},e^{i\theta_2},e^{i\theta_3}, e^{i\theta_4},e^{i\theta},x])&=[e^{i\theta_1},e^{i\theta_2},x], \\
m([e^{i\theta_1},e^{i\theta_2},e^{i\theta_3}, e^{i\theta_4},e^{i\theta},x],[e^{i\theta_3},e^{i\theta_4},e^{i\theta_5}, e^{i\theta_6},e^{i\phi},x])&=[e^{i\theta_1},e^{i\theta_2},e^{i\theta_5}, e^{i\theta_6}, e^{i(\theta+\phi)},x].
\end{align*}
This becomes a symplectic groupoid when equipped with the symplectic form induced by:
\begin{equation*} \d\theta_1\wedge\d\theta_2-\d\theta_3\wedge\d\theta_4-\d\theta\wedge\d x\in \Omega^2(\mathbb{T}^2\times \mathbb{T}^2\times \mathbb{S}^1\times \mathbb{R}).
\end{equation*} Furthermore, this symplectic groupoid acts in a Hamiltonian fashion along:
\begin{equation*} J:(\mathbb{T}^2\times \mathbb{S}^1\times \mathbb{R}, \d\theta_1\wedge \d\theta_2-\d\theta\wedge \d x)\to \mathbb{T}^2\times_{\mathbb{Z}_2}\mathbb{R}, \quad (e^{i\theta_1},e^{i\theta_2},e^{i\theta},x)\mapsto [e^{i\theta_1},e^{i\theta_2},x],
\end{equation*} with the action given by:
\begin{equation*} [e^{i\theta_1},e^{i\theta_2},e^{i\theta_3}, e^{i\theta_4},e^{i\theta},x]\cdot (e^{i\theta_3},e^{i\theta_4},e^{i\phi},x)=(e^{i\theta_1},e^{i\theta_2},e^{i(\theta+\phi)},x).
\end{equation*}
This action is free and its orbit space is canonically diffeomorphic to $\mathbb{R}$. The canonical Hamiltonian stratification on the orbit space consists of three strata: $\{x>0\}$, $\{x<0\}$ and the origin $\{x=0\}$, because the isotropy groups of (\ref{gpoiddisconprincpartex}) at points in $\mathbb{T}^2\times_{\mathbb{Z}_2}\mathbb{R}$ with $x\neq 0$ are isomorphic to $\mathbb{S}^1$, whilst those at points with $x=0$ are isomorphic to $\mathbb{Z}_2\ltimes \mathbb{S}^1$. So, we see that its regular part is disconnected. \end{ex}
The following theorem provides a criterion that does ensure connectedness of the regular part.
\begin{thm}\label{printypthm} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. The following conditions are equivalent.
\begin{itemize}\item[a)] For every $p\in S$ that belongs to a codimension one stratum of the canonical Hamiltonian stratification $\S_\textrm{Ham}(S)$, the action of $\mathcal{G}_p$ on $\mathfrak{g}_p^0$ is non-trivial.
\item[b)] The regular part $S^\textrm{princ}$ of $\S_\textrm{Gp}(S)$ (as in Subsection \ref{liedescrpregpartsec}) does not contain codimension one strata of $\S_\textrm{Ham}(S)$.
\end{itemize} Furthermore, if $\underline{S}$ is connected and the above conditions hold, then $\underline{S}^\textrm{princ}_\textrm{Ham}$ is connected as well. If in addition the orbits of the action are connected, then $S^\textrm{princ}_\textrm{Ham}$ is also connected.
\end{thm}
\begin{proof} As in the proof of Theorem \ref{printypthminf} it follows that if $p\in S$ belongs to a codimension one stratum of $\S_\textrm{Ham}(S)$, then the action of $\mathcal{G}_p$ on $\S\mathcal{N}_p$ is trivial. So, by Proposition \ref{regpartalg}$a$, for such $p\in S$ the action of $\mathcal{G}_p$ on $\mathfrak{g}_p^0$ is trivial if and only if $p\in S^\textrm{princ}$. From this it is clear that the two given conditions are equivalent. Furthermore, if $\underline{S}$ is connected, then by the principal type theorem for proper Lie groupoids (see Example \ref{princtypliegpoidex}), $\underline{S}^\textrm{princ}$ is connected. So, in light of Corollary \ref{prinJstrat} and Lemma \ref{cod1stratlem}, $\underline{S}^\textrm{princ}_\textrm{Ham}$ will be connected if in addition $\underline{S}^\textrm{princ}$ does not contain codimension one strata of $\S_\textrm{Ham}(\underline{S})$, or equivalently, if in addition condition $b$ holds. The final statement follows because $S^\textrm{princ}_\textrm{Ham}=q^{-1}(\underline{S}^\textrm{princ}_\textrm{Ham})$ and the preimage of a connected set under an open continuous surjection with connected fibers is connected.
\end{proof}
The proposition below gives a criterion for the conditions in the previous theorem to hold.
\begin{prop} If $p\in S$ belongs to a codimension one stratum of $\S_\textrm{Ham}(S)$ and the coadjoint orbits of $\mathcal{G}_{J(p)}$ are connected, then the action of $\mathcal{G}_p$ on $\mathfrak{g}_p^0$ is non-trivial.
\end{prop}
\begin{proof} The same reasoning as in the proof of Theorem \ref{printypthminf} shows that if the action of $\mathcal{G}_p$ on $\mathfrak{g}_p^0$ would be trivial, then the identity component of $\mathcal{G}_{J(p)}$ would fix all of $\mathfrak{g}_p^0$. By connectedness of its coadjoint orbits, the entire group $\mathcal{G}_{J(p)}$ would then fix all of $\mathfrak{g}_p^0$, which, as in the aforementioned proof, leads to a contradiction.
\end{proof}
\begin{cor}\label{printypHamGspacethm} Let $G$ be a compact and connected Lie group and let $J:(S,\omega)\to \mathfrak{g}^*$ be a connected Hamiltonian $G$-space. Then $\underline{S}^\textrm{princ}_\textrm{Ham}$ is connected.
\end{cor}
\begin{proof} For $G$ compact and connected, the isotropy groups of the coadjoint $G$-action are connected: under a $G$-invariant identification $\mathfrak{g}^*\cong \mathfrak{g}$, the stabilizer of a point is the centralizer of the torus obtained as the closure of the one-parameter subgroup generated by that point, and centralizers of tori in compact connected Lie groups are connected. In particular, the coadjoint orbits of these isotropy groups are connected, so the previous proposition ensures that condition $a$ in Theorem \ref{printypthm} is satisfied.
\end{proof}
\begin{ex} Let $G$ be a compact and connected Lie group and let $J:(S,\omega)\to \mathfrak{g}^*$ be a connected Hamiltonian $G$-space. We return to the partition in Example \ref{hamGspex}. This comes with a partial order, defined as follows. If $\underline{S}_p$ and $\underline{S}_q$ denote the members through the respective orbits $\O_p$ and $\O_q$, then by definition:
\begin{equation*} \underline{S}_{p}\leq \underline{S}_{q} \iff (G_{J(q)},G_{q})\text{ is conjugate in $G$ to a pair of subgroups of }(G_{J(p)},G_{p}).
\end{equation*}
In analogy with the principal orbit type theorem (see Example \ref{prinorbthm}), this partial order has a greatest element, namely $\underline{S}^\textrm{princ}_\textrm{Ham}$. To see this, notice that from the normal form theorem as in Remark \ref{sharphamliegpactrem} it follows that every $\O_p\in \underline{S}$ admits an open neighbourhood $\underline{U}$ with the property that $\underline{S}_{p}\leq \underline{S}_{q}$ for all $\O_q\in \underline{U}$. From this and the fact that $\underline{S}^\textrm{princ}_\textrm{Ham}$ is connected and dense in $\underline{S}$, it follows that it is indeed a member of the partition in Example \ref{hamGspex}, and that it is the greatest element with respect to the above partial order.
\end{ex}
To end with, we note that the following generalization of \cite[Theorem 5.9, Remark 5.10]{LeSj} holds.
\begin{thm} Let $\L$ be a leaf of $\mathcal{G}$ and suppose that $\underline{S}_\L$ is connected. Then the regular part $\underline{S}^\textrm{princ}_\L$ of $\S_\textrm{Ham}(\underline{S}_\L)$ is connected as well.
\end{thm}
\begin{proof} Since $\underline{S}^\textrm{princ}_\L$ is dense in $\underline{S}_\L$ and $\underline{S}_\L$ is connected, it is enough to show that every point in $\underline{S}_\L$ admits an open neighbourhood that intersects $\underline{S}_\L^\textrm{princ}$ in a connected subspace. To this end, let $\O_p\in \underline{S}_\L$, let $H=\mathcal{G}_p$ and $V=\S\mathcal{N}_p$. Consider a Hamiltonian Morita equivalence as in the proof of Proposition \ref{weakwhitstratthm}, so that the induced homeomorphism of orbit spaces identifies an open $\underline{U}$ around $\O_p$ in $\underline{S}$ with an open $\underline{B}_{\mathfrak{h}^0\oplus V}$ around the origin in $(\mathfrak{h}^0\oplus V)/H$. Let $B$ be the intersection of $B_{\mathfrak{h}^0\oplus V}$ with $V$ and consider the Hamiltonian $H$-space:
\begin{equation*} J_B=J_V\vert_B:(B,\omega_V)\to \mathfrak{h}^*.
\end{equation*} Then $\underline{U}\cap \underline{S}_\L$ is identified with $J_B^{-1}(0)/H$, and $\underline{U}\cap \underline{S}_\L^\textrm{princ}$ is identified with the principal part of $J_B^{-1}(0)/H$ (as follows from Morita invariance of the partitions by isomorphism types). Since $J_B^{-1}(0)$ is star-shaped with respect to the origin, $J_B^{-1}(0)/H$ is connected and hence, by \cite[Theorem 5.9, Remark 5.10]{LeSj}, so is its principal part. So, we have found the desired neighbourhood of $\O_p$.
\end{proof}
\subsubsection{Relations amongst the regular parts}
In this last subsection we discuss another relationship between the regular parts of the various stratifications, starting with the following observation.
\begin{prop} Suppose that $J$ is a submersion on $S^\textrm{reg}$. Then the various regular and principal parts on $S$, $M$, $\underline{S}$ and $\underline{M}$ are related as:
\begin{equation*} S^\textrm{reg}_\textrm{Ham}=S^\textrm{reg}\cap J^{-1}(M^\textrm{reg}), \quad \quad S^\textrm{princ}_\textrm{Ham}=S^\textrm{princ}\cap J^{-1}(M^\textrm{princ}), \quad\quad \underline{S}^\textrm{princ}_\textrm{Ham}=\underline{S}^\textrm{princ}\cap (\underline{J})^{-1}(\underline{M}^\textrm{princ}).
\end{equation*}
\end{prop}
\begin{proof} We prove the equality for $S^\textrm{reg}_\textrm{Ham}$; the others are proved similarly. Let $p\in S$, $x=J(p)$ and consider the strata
$\Sigma^\textrm{Ham}_p\in \S_\textrm{Ham}^\textrm{inf}(S)$, $\Sigma^\textrm{Gp}_p\in \S_\textrm{Gp}^\textrm{inf}(S)$ and $\Sigma^\textrm{Gp}_x\in \S_\textrm{Gp}^\textrm{inf}(M)$ through $p$ and $x$. Then
\begin{equation}\label{eqregpartrel} \Sigma^\textrm{Ham}_p\subset \Sigma^\textrm{Gp}_p\cap J^{-1}\left(\Sigma^\textrm{Gp}_{x}\right)
\end{equation} is open in the right-hand space. This, combined with the fact that $J:S^\textrm{reg}\to M$ is open and continuous, implies that $\Sigma^\textrm{Ham}_p$ is open at $p$ in $S$ if and only if $\Sigma^\textrm{Gp}_p$ is open at $p$ in $S$ and $\Sigma^\textrm{Gp}_{x}$ is open at $x$ in $M$. In light of Proposition \ref{regpartopen} this means that:
\begin{equation*} S^\textrm{reg}_\textrm{Ham}=S^\textrm{reg}\cap J^{-1}(M^\textrm{reg}),
\end{equation*} as claimed.
\end{proof}
In general (that is, if $J$ is not submersive on $S^\textrm{reg}$) one would hope for a similar result. However, since the image of $J$ need not intersect $M^\textrm{reg}$, an appropriate replacement for $M^\textrm{reg}$ is needed. The proposition below gives a sufficient condition for the existence of such a replacement.
\begin{prop}\label{printypdescr} Suppose that $S^\textrm{reg}_\textrm{Ham}$ is connected. Then there is a unique stratum $\Sigma\in \S_\textrm{Gp}^\textrm{inf}(M)$ with the property that $\Sigma\cap J(S)$ is open and dense in $J(S)$. Moreover, it holds that:
\begin{equation*} S^\textrm{reg}_\textrm{Ham}=S^\textrm{reg}\cap J^{-1}(\Sigma),
\end{equation*} and $J^{-1}(\Sigma)$ is connected, open and dense in $S$. Similar conclusions hold for the principal part on $S$ (resp. $\underline{S}$), under the assumption that $S^\textrm{princ}_\textrm{Ham}$ (resp. $\underline{S}^\textrm{princ}_\textrm{Ham}$) is connected.
\end{prop}
\begin{proof} Again, we prove the result only for $S^\textrm{reg}_\textrm{Ham}$ since the other proofs are analogous. We use the notation introduced in the proof of the previous proposition. Consider $R\subset J(S)$ defined as:
\begin{equation*} R:=\{x\in J(S)\mid \Sigma^\textrm{Gp}_x\cap J(S)\text{ is open in }J(S)\}.
\end{equation*} We claim that $S^\textrm{reg}_\textrm{Ham}=S^\textrm{reg}\cap J^{-1}(R)$, that $R$ is connected, open and dense in $J(S)$ and that $J^{-1}(R)$ is connected, open and dense in $S$. The desired stratum $\Sigma$ is then the unique stratum containing $R$. To see that our claim holds, notice first that $R$ is clearly open in $J(S)$, and so $J^{-1}(R)$ is open in $S$. Moreover, by continuity of $J$ and $(\ref{eqregpartrel})$ we find that $S^\textrm{reg}\cap J^{-1}(R)$ is a union of strata of $\S_\textrm{Ham}^\textrm{inf}(S)$ contained in $S^\textrm{reg}_\textrm{Ham}$. So, if $S^\textrm{reg}_\textrm{Ham}$ is connected, then $S^\textrm{reg}\cap J^{-1}(R)$ must coincide with $S^\textrm{reg}_\textrm{Ham}$. Then since $S^\textrm{reg}_\textrm{Ham}$ is dense in $S$, so is $J^{-1}(R)$, and furthermore, $R$ must be dense in $J(S)$. Finally, because $S^\textrm{reg}_\textrm{Ham}$ is connected and dense in $J^{-1}(R)$, it follows that $J^{-1}(R)$ is connected and hence $R$ is connected as well. This proves our claim.
\end{proof}
\begin{ex} Let $G$ be a compact and connected Lie group and let $J:(S,\omega)\to \mathfrak{g}^*$ be a connected Hamiltonian $G$-space.
Let $T\subset G$ be a maximal torus, $\t^*_+$ a choice of closed Weyl chamber in $\t^*$ and $J_+(S):=J(S)\cap \t^*_+$, where $\t^*$ is canonically identified with the $T$-fixed point set $(\mathfrak{g}^*)^T$ in $\mathfrak{g}^*$. According to \cite[Theorem 3.1]{LeMeToWo}, there is a unique open face of the Weyl chamber (called the principal face) that intersects $J_+(S)$ in a dense subset of $J_+(S)$. Combining Corollary \ref{printypHamGspacethm} with Proposition \ref{printypdescr}, we recover the existence of the principal face.
\end{ex}
\subsection{The Poisson structure on the orbit space}\label{poisstratthmsec}
\subsubsection{Poisson structures on reduced differentiable spaces and Poisson stratifications} In this section we discuss the Poisson structure on the orbit space of a Hamiltonian action and discuss basic Poisson geometric properties of the various stratifications associated to such an action. First, we give some more general background.
\begin{defi} A \textbf{Poisson reduced ringed space} is a reduced ringed space $(X,\O_X)$ together with a Poisson bracket $\{\cdot,\cdot\}$ on the structure sheaf $\O_X$. A \textbf{morphism of Poisson reduced ringed spaces} is a morphism of reduced ringed spaces: \begin{equation*} \phi:(X,\O_X)\to (Y,\O_Y)\end{equation*} with the property that for every open $U$ in $Y$:
\begin{equation*} \phi^*:\left(\O_Y(U),\{\cdot,\cdot\}_U\right)\to \left(\O_X(\phi^{-1}(U)),\{\cdot,\cdot\}_{\phi^{-1}(U)}\right)
\end{equation*} is a Poisson algebra map. We will also call such $\phi$ simply a \textbf{Poisson map}. When $(X,\O_X)$ is a reduced differentiable space, we call $(X,\O_X,\{\cdot,\cdot\})$ a \textbf{Poisson reduced differentiable space}.
\end{defi}
\begin{rem}\label{globpoisbrackrem} The Poisson reduced ringed spaces in this paper will all be Hausdorff and second countable reduced differentiable spaces. For such reduced ringed spaces $(X,\O_X)$ the data of a Poisson bracket on the sheaf $\O_X$ is the same as the data of a Poisson bracket on the $\mathbb{R}$-algebra $\O_X(X)$, so that when convenient we can restrict attention to the Poisson algebra of globally defined functions. This follows as for manifolds, using bump functions in $\O_X(X)$ (cf. Remark \ref{partofunityreddiffbsp}).
\end{rem}
Next, we turn to subspaces and stratifications of Poisson reduced differentiable spaces.
\begin{defi}\label{poissubmandef} Let $(X,\O_X,\{\cdot,\cdot\}_X)$ be a Poisson reduced differentiable space. A locally closed subspace $Y$ of $(X,\O_X)$ is a \textbf{Poisson reduced differentiable subspace} if the induced structure sheaf $\O_Y$ admits a (necessarily unique) Poisson bracket for which the inclusion of $Y$ into $X$ becomes a Poisson map. If $Y$ is also a submanifold of $(X,\O_X)$, then we call it a \textbf{Poisson submanifold}.
\end{defi}
As in \cite{FeOrRa}, we use the following definition.
\begin{defi}\label{poisstratdef} Let $(X,\O_X,\{\cdot,\cdot\}_X)$ be a Hausdorff and second countable Poisson reduced differentiable space. A \textbf{Poisson stratification} of $(X,\O_X,\{\cdot,\cdot\}_X)$ is a stratification $\S$ of $(X,\O_X)$ with the property that every stratum is a Poisson submanifold. We call $(X,\O_X,\{\cdot,\cdot\}_X,\S)$ a \textbf{Poisson stratified space}. A \textbf{symplectic stratified space} is a Poisson stratified space for which the strata are symplectic. A \textbf{morphism of Poisson stratified spaces} is a morphism of the underlying stratified spaces that is simultaneously a morphism of the underlying Poisson reduced ringed spaces.
\end{defi}
As for manifolds, we have the following useful characterization.
\begin{prop}\label{poisidealcharprop} Let $(X,\O_X,\{\cdot,\cdot\}_X)$ be a Hausdorff and second countable Poisson reduced differentiable space and let $Y$ be a locally closed subspace. Then $Y$ is a Poisson reduced differentiable subspace if and only if the vanishing ideal $\mathcal{I}_Y(X)$ in $\O_X(X)$ (consisting of $f\in \O_X(X)$ such that $f\vert_Y=0$) is a Poisson ideal (meaning that: if $f,h\in \O_X(X)$ and $h\vert_Y=0$, then $\{f,h\}_X\vert_Y=0$).
\end{prop}
\begin{proof} The forward implication is immediate. For the backward implication the same argument as for manifolds applies: given $f,h\in \O_Y(Y)$, by Proposition \ref{globchar} we can choose extensions $\widehat{f},\widehat{h}\in \O_X(U)$ of $f$ and $h$ defined on some open neighbourhood $U$ of $Y$ and set:
\begin{equation*} \{f,h\}_Y:=\{\widehat{f},\widehat{h}\}_U\vert_Y.
\end{equation*} This does not depend on the choice of extensions, because for any open $U$ in $X$ the ideal $\mathcal{I}_Y(U)$ in $\O_X(U)$, consisting of functions that vanish on $U\cap Y$, is a Poisson ideal. Indeed, this follows from the assumption that $\mathcal{I}_Y(X)$ is a Poisson ideal in $\O_X(X)$, using bump functions (cf. Remark \ref{partofunityreddiffbsp}). By construction, $\{\cdot,\cdot\}_Y$ defines a Poisson bracket on $\O_Y(Y)$ (and hence on $\O_Y$, by Remark \ref{globpoisbrackrem}) for which the inclusion of $Y$ into $X$ becomes a Poisson map.
\end{proof}
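As a simple illustration of this criterion in the manifold case, consider $X=\mathbb{R}^3$ with the Poisson bracket determined by $\{x,y\}=1$ and $\{x,z\}=\{y,z\}=0$. The vanishing ideal of $Y=\{z=0\}$ is generated by the Casimir function $z$, hence is a Poisson ideal, and $Y$ is a Poisson submanifold. By contrast, the vanishing ideal of $\{y=0\}$ is not a Poisson ideal, since $\{x,y\}=1$ does not vanish on $\{y=0\}$.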
\subsubsection{The Poisson algebras of invariant functions} Next, we turn to the definition of the Poisson bracket on the orbit space of a Hamiltonian action, starting with the following observation.
\begin{prop}\label{poisalginvfunprop} Let $(\mathcal{G},\Omega)$ be a symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. The algebra of invariant smooth functions:
\begin{equation*} C^\infty(S)^\mathcal{G}=\{f\in C^\infty(S)\mid f(g\cdot p)=f(p),\text{ } \forall (g,p)\in \mathcal{G}\times_MS\}
\end{equation*} is a Poisson subalgebra of $(C^\infty(S),\{\cdot,\cdot\}_\omega)$.
\end{prop}
\begin{proof} Although this is surely known, let us give a proof. Let $f,h\in C^\infty(S)^\mathcal{G}$ and let $\Phi_f$ denote the Hamiltonian flow of $f$. Using the lemma below we find that for all $(g,p)\in \mathcal{G}\times_MS$:
\begin{align*} \{f,h\}_\omega(g\cdot p)&=\left.\frac{\d}{\d t}\right|_{t=0} h(\Phi_f^t(g\cdot p))\\
&=\left.\frac{\d}{\d t}\right|_{t=0} h(g\cdot \Phi_f^t(p))\\
&=\left.\frac{\d}{\d t}\right|_{t=0} h(\Phi_f^t(p))=\{f,h\}_\omega(p),\end{align*} so that $\{f,h\}_\omega\in C^\infty(S)^\mathcal{G}$, as required.
\end{proof}
Here we used the following lemma, which will also be useful later.
\begin{lemma}\label{hamflowinvfunlem} Let $f\in C^\infty(S)^\mathcal{G}$ and let $\Phi_f$ denote its Hamiltonian flow. Then, for every $t\in \mathbb{R}$, the domain $U_t$ and the image $V_t$ of $\Phi_f^t$ are $\mathcal{G}$-invariant and $\Phi^t_f$ is an isomorphism of Hamiltonian $(\mathcal{G},\Omega)$-spaces:
\begin{center}\begin{tikzcd} (U_t,\omega)\arrow[rr,"\Phi_f^t"]\arrow[rd,"J"'] & & (V_t,\omega) \arrow[ld,"J"]\\
& M &
\end{tikzcd}
\end{center}
\end{lemma}
\begin{proof} Invariance of $f$ implies that $X_f(p)\in T_p\O^\omega$ for all $p\in S$. From this and Proposition \ref{infmomact}$a$ it follows that $J(\Phi_f^t(p))=J(p)$ for any $p\in S$ and any time $t$ at which the flow through $p$ is defined. So, for any $(g,p)\in \mathcal{G}\times_MS$ we can consider the curve:
\begin{equation*} t\mapsto \left(g,\Phi_f^t(p)\right) \in \mathcal{G}\times_MS.
\end{equation*} Given such $(g,p)$, let $v\in T_{g\cdot p}S$ and take a tangent vector $\widehat{v}$ to $\mathcal{G}\times_M S$ at $(g,p)$ such that $\d m(\widehat{v})=v$. Then we find:
\begin{equation*} \omega\left(\left.\frac{\d}{\d t}\right|_{t=0} g\cdot\Phi^t_f(p),v\right)=(m_S^*\omega)\left(\left.\frac{\d}{\d t}\right|_{t=0}\left(g,\Phi_f^t(p)\right),\widehat{v}\right).
\end{equation*} Using (\ref{hammultcond}) this is further seen to be equal to:
\begin{equation*} \omega\left(X_f(p),\d (\textrm{pr}_S)(\widehat{v})\right)=\d (f\circ \textrm{pr}_S)(\widehat{v})=\d f(v),
\end{equation*} where in the last step we used invariance of $f$. As this holds for all such $v$, we deduce that:
\begin{equation*} X_f(g\cdot p)=\left.\frac{\d}{\d t}\right|_{t=0} g\cdot \Phi_f^t(p).
\end{equation*} This being true for all $p$ in the fiber of $J$ over $s(g)$, and in particular for all points on a maximal integral curve of $X_f$ starting in this fiber, it follows that the maximal integral curve of $X_f$ through $g\cdot p$ is given by $t\mapsto g\cdot \Phi_f^t(p)$. The lemma readily follows from this.
\end{proof}
Given a proper symplectic groupoid $(\mathcal{G},\Omega)$ and a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$, the Poisson bracket $\{\cdot,\cdot\}_\omega$ on the algebra $C^\infty(S)^\mathcal{G}$ in Proposition \ref{poisalginvfunprop} gives the orbit space $(\underline{S},\mathcal{C}^\infty_{\underline{S}})$ the structure of a Poisson reduced differentiable space, with Poisson bracket determined by the fact that the orbit projection becomes a Poisson map. Moreover, for each leaf $\L$ of $\mathcal{G}$ in $M$, the reduced space $\underline{S}_\L$ is a Poisson reduced differentiable subspace. Indeed, identifying the algebra of globally defined smooth functions on $\underline{S}$ with $C^\infty(S)^\mathcal{G}$, the vanishing ideal of $\underline{S}_\L$ is identified with the ideal $\mathcal{I}^{\mathcal{G}}_\L$ of invariant smooth functions that vanish on $J^{-1}(\L)$, which is a Poisson ideal by the proposition below. This observation is due to \cite{ArCuGot} in the setting of Hamiltonian group actions.
\begin{prop}\label{poisidealpropredsp} The ideal $\mathcal{I}_\L^\mathcal{G}$ is a Poisson ideal of $(C^\infty(S)^\mathcal{G},\{\cdot,\cdot\}_\omega)$.
\end{prop}
\begin{proof} If $f\in C^\infty(S)^\mathcal{G}$ and $h\in\mathcal{I}_\L^\mathcal{G}$ then by Lemma \ref{hamflowinvfunlem} the Hamiltonian flow of $f$ starting at $p\in J^{-1}(\L)$ is contained in a single fiber of $J$, and hence in $J^{-1}(\L)$, so that $\{f,h\}_\omega(p)=0$.
\end{proof}
We will denote the respective Poisson structures on $(\underline{S},\mathcal{C}^\infty_{\underline{S}})$ and $(\underline{S}_\L,\mathcal{C}^\infty_{\underline{S}_\L})$ by $\{\cdot,\cdot\}_{\underline{S}}$ and $\{\cdot,\cdot\}_{\underline{S}_\L}$.
\subsubsection{The Poisson stratification theorem} Now, we move to the main theorem of this section.
\begin{thm}\label{poisstratthm} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Then the following hold.
\begin{itemize} \item[a)] The stratification $\S_\textrm{Gp}(\underline{S})$ is a Poisson stratification of the orbit space: \begin{equation*} (\underline{S},\mathcal{C}^\infty_{\underline{S}},\{\cdot,\cdot\}_{\underline{S}}).\end{equation*}
\item[b)] The stratification $\S_\textrm{Ham}(\underline{S})$ is a Poisson stratification of the orbit space: \begin{equation*}(\underline{S},\mathcal{C}^\infty_{\underline{S}},\{\cdot,\cdot\}_{\underline{S}}),\end{equation*} the strata of which are regular Poisson submanifolds.
\item[c)] For each leaf $\L$ of $\mathcal{G}$ in $M$, the stratification $\S_\textrm{Ham}(\underline{S}_\L)$ is a symplectic stratification of the reduced space at $\L$: \begin{equation*} (\underline{S}_\L,\mathcal{C}^\infty_{\underline{S}_\L},\{\cdot,\cdot\}_{\underline{S}_\L}).
\end{equation*}
\end{itemize}
These are related as follows. First of all, the inclusions of smooth stratified spaces:
\begin{equation*} \left(\underline{S}_\L,\mathcal{C}^\infty_{\underline{S}_\L}, \{\cdot,\cdot\}_{\underline{S}_\L},\S_\textrm{Ham}(\underline{S}_\L)\right)\hookrightarrow \left(\underline{S},\mathcal{C}^\infty_{\underline{S}},\{\cdot,\cdot\}_{\underline{S}},\S_\textrm{Ham}(\underline{S})\right)\hookrightarrow \left(\underline{S},\mathcal{C}^\infty_{\underline{S}},\{\cdot,\cdot\}_{\underline{S}},\S_\textrm{Gp}(\underline{S})\right)
\end{equation*} are Poisson maps that map symplectic leaves onto symplectic leaves.
Moreover, for each stratum $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$, the symplectic leaves in $\underline{\Sigma}_S$ are the connected components of the fibers of the constant rank map $\underline{J}:\underline{\Sigma}_S\to \underline{\Sigma}_M$, where $\underline{\Sigma}_M\in \S_\textrm{Gp}(\underline{M})$ is the stratum such that $\underline{J}(\underline{\Sigma}_S)\subset \underline{\Sigma}_M$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{poisstratthm}] Let $\underline{\Sigma}\in \S_\textrm{Ham}(\underline{S})$ be a stratum and let $\Sigma:=q^{-1}(\underline{\Sigma})$ where $q:S\to \underline{S}$ denotes the orbit projection. Identifying the algebra of globally defined smooth functions on $\underline{S}$ with $C^\infty(S)^\mathcal{G}$, the vanishing ideal of $\underline{\Sigma}$ is identified with the ideal: \begin{equation*} \mathcal{I}_\Sigma^\mathcal{G}=\{f\in C^\infty(S)^\mathcal{G}\mid f\vert_{\Sigma}=0\}.
\end{equation*} This is a Poisson ideal of $C^\infty(S)^\mathcal{G}$, for if $f\in C^\infty(S)^\mathcal{G}$ and $h\in \mathcal{I}_\Sigma^\mathcal{G}$, then as an immediate consequence of Lemma \ref{hamflowinvfunlem}, the Hamiltonian flow of $f$ leaves $\Sigma$ invariant and therefore: \begin{equation*}
\{f,h\}_\omega\vert_{\Sigma}=(\L_{X_f} h)\vert_{\Sigma}=0.
\end{equation*} By Proposition \ref{poisidealcharprop} this means that $\underline{\Sigma}$ is a Poisson submanifold (in the sense of Definition \ref{poissubmandef}). So, $\S_\textrm{Ham}(\underline{S})$ is a Poisson stratification of the orbit space. By the same reasoning it follows that the stratifications in statements $a$ and $c$ are Poisson stratifications. From the construction of the Poisson brackets on the orbit space and the reduced spaces, it is immediate that the inclusions given in the statement of the theorem are Poisson. Hence, each stratum of $\S_\textrm{Gp}(\underline{S})$ is partitioned into Poisson submanifolds by strata of $\S_\textrm{Ham}(\underline{S})$ and each stratum of $\S_\textrm{Ham}(\underline{S})$ is partitioned into Poisson submanifolds by strata of $\S_\textrm{Ham}(\underline{S}_\L)$, for varying $\L\in \underline{M}$. If $(N,\pi)$ is a Poisson manifold partitioned by Poisson submanifolds, then the symplectic leaves of each of the Poisson submanifolds in the partition are symplectic leaves of $(N,\pi)$. This follows from the fact that each symplectic leaf of a Poisson submanifold is an open inside a symplectic leaf of the ambient Poisson manifold. Therefore, each of the inclusions given in the statement of the theorem indeed maps symplectic leaves onto symplectic leaves. \\
It remains to see that for each stratum $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$ the foliation by symplectic leaves of the Poisson structure $\pi_{\underline{\Sigma}_S}$ on $\underline{\Sigma}_S$ coincides with that by the connected components of the fibers of the constant rank map $\underline{J}:\underline{\Sigma}_S\to \underline{\Sigma}_M$, because the claims on regularity and non-degeneracy made in statements $b$ and $c$ follow from this as well. To this end, we have to show that for every orbit $\O\in \underline{\Sigma}_S$ the tangent space to the symplectic leaf at $\O$ coincides with $\ker(\d\underline{J}\vert_{\underline{\Sigma}_S})_\O$. Here the language of Dirac geometry comes in useful. We refer the reader to \cite{Cou,Bu} for background on this. Let $\Sigma_S=q^{-1}(\underline{\Sigma}_S)$ and consider the pre-symplectic form:\begin{equation*} \omega_{{\Sigma}_S}:=\omega\vert_{\Sigma_S}\in \Omega^2(\Sigma_S).
\end{equation*} We claim that the orbit projection:
\begin{equation}\label{orbprojdirac} q:(\Sigma_S,\omega_{\Sigma_S})\to (\underline{\Sigma}_S,\pi_{\underline{\Sigma}_S})
\end{equation} is a forward Dirac map. To see this, we will use the fact that a map $\phi:(Y,\omega_Y)\to (N,\pi_N)$ from a pre-symplectic manifold into a Poisson manifold is forward Dirac if for every $f\in C^\infty(N)$ there is a vector field $X_{\phi^*f}\in \mathcal{X}(Y)$ such that:
\begin{equation*} \iota_{X_{\phi^*f}}\omega_Y=\d (\phi^*f) \quad\&\quad \phi_*(X_{\phi^*f})=X_f.
\end{equation*}
Given an $f\in C^\infty(\underline{\Sigma}_S)$, choose a smooth extension $\widehat{f}$ defined on an open $\underline{U}$ around $\underline{\Sigma}_S$ in $\underline{S}$. Because $q^*\widehat{f}$ is $\mathcal{G}$-invariant, its Hamiltonian flow leaves $\Sigma_S$ invariant (as before). Therefore, we can consider: \begin{equation*} X_{q^*f}:=(X_{q^*\widehat{f}})\vert_{\Sigma_S}\in \mathcal{X}(\Sigma_S)\end{equation*} and as is readily verified this satisfies:
\begin{equation*} \iota_{(X_{q^*f})}\omega_{\Sigma_S}=\d (q^*f) \quad \& \quad q_*(X_{q^*f})=X_f.
\end{equation*} So (\ref{orbprojdirac}) is indeed a forward Dirac map. From the equality of Dirac structures $L_{\pi_{\underline{\Sigma}_S}}=q_*(L_{\omega_{\Sigma_S}})$ we read off that the tangent space to the symplectic leaf at an orbit $\O$ through $p\in S$ is given by:
\begin{equation}\label{tansplfeq} \frac{T_p\O^{(\omega_{\Sigma_S})}}{T_p\O\cap T_p\O^{(\omega_{\Sigma_S})}}\subset \frac{T_p{\Sigma_S}}{T_p\O}=T_\O(\underline{\Sigma}_S).
\end{equation} It follows from Proposition \ref{infmomact}$a$ that $T_p\O^{(\omega_{\Sigma_S})}=\ker(\d J\vert_{\Sigma_S})_p$. This implies that (\ref{tansplfeq}) equals:
\begin{equation*} \frac{\ker(\d J\vert_{\Sigma_S})_p}{T_p\O\cap \ker(\d J\vert_{\Sigma_S})_p}=\ker(\d\underline{J}\vert_{\underline{\Sigma}_S})_\O\subset T_\O(\underline{\Sigma}_S),
\end{equation*} as we wished to show.
\end{proof}
From the proof we also see:
\begin{cor}\label{orbprojstratdirac} For every stratum $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$, the orbit projection $(\ref{orbprojdirac})$ is forward Dirac. The same holds for the strata of $\S_\textrm{Gp}(\underline{S})$.
\end{cor}
\subsubsection{Dimension of the symplectic leaves} In the remainder of this section we make some further observations on the Poisson geometry of the orbit space, starting with:
\begin{prop}\label{dimsymplvslocnondecrprop} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. The dimension of the symplectic leaves in the orbit space $\underline{S}$ is locally non-decreasing. That is, every $\O\in \underline{S}$ admits an open neighbourhood $\underline{U}$ in $\underline{S}$ such that any symplectic leaf intersecting $\underline{U}$ has dimension greater than or equal to that of the symplectic leaf through $\O$.
\end{prop}
\begin{proof} First, let us make a more general remark. Let $p\in S$, let $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$ be the stratum through $\O_p$ and let $\underline{\Sigma}_M\in \S_\textrm{Gp}(\underline{M})$ be such that $\underline{J}(\underline{\Sigma}_S)\subset \underline{\Sigma}_M$. From a Hamiltonian Morita equivalence as in the proof of Proposition \ref{eqcharhammortyp} we obtain (via Proposition \ref{transgeommap}$a$) an identification of smooth maps between $\underline{J}:\underline{\Sigma}_S\to \underline{\Sigma}_M$ near $\O_p$ and the map (\ref{mommaplocmodcentisotyp}) near the origin. Therefore, the dimension of the fibers of the former map is equal to that of the latter, which is $\dim(\S\mathcal{N}_p^{\mathcal{G}_p})$, or equivalently: $\dim(\ker(\underline{\d J}_p)^{\mathcal{G}_p})$ (see the proof of Proposition \ref{normrepham}$b$). In view of Theorem \ref{poisstratthm}, this is also the dimension of the symplectic leaf through $\O_p$. To prove the proposition, it is therefore enough to show that each $p\in S$ admits an invariant open neighbourhood $U$ with the property that $\ker(\underline{\d J}_p)^{\mathcal{G}_p}$ has dimension less than or equal to that of $\ker(\underline{\d J}_q)^{\mathcal{G}_q}$ for each $q\in U$. To this end, given $p\in S$, choose an invariant open neighbourhood $U$ for which there is a Hamiltonian Morita equivalence as in the proof of Proposition \ref{eqcharhammortyp}. Then $U$ has the desired property. Indeed, in light of Proposition \ref{transgeommap}$c$, it suffices to show (using the notation of the proof of Proposition \ref{eqcharhammortyp}) that for each $\alpha\in \mathfrak{h}^0$ and $v\in V$:
\begin{equation*} \dim(V^H)\leq \dim(\ker(\underline{\d J}_\mathfrak{p})_{(\alpha,v)}^{H_{(\alpha,v)}}).
\end{equation*} To this end, consider the linear map:
\begin{equation}\label{incldimcountsymleavesmap} V^H\to \ker(\underline{\d J}_\mathfrak{p})_{(\alpha,v)}^{H_{(\alpha,v)}}, \quad w\mapsto \left[\left.\frac{\d}{\d t}\right\vert_{t=0} (\alpha,v+tw)\right].
\end{equation} Note here that this indeed takes values in $\ker(\underline{\d J}_\mathfrak{p})_{(\alpha,v)}$, because for all $w\in V^H$:
\begin{equation*} \left.\frac{\d}{\d t}\right\vert_{t=0} v+tw \in \ker(\d J_V)_{v},
\end{equation*} as follows from (\ref{quadmomfixset}). To complete the proof, we will now show that (\ref{incldimcountsymleavesmap}) is injective. Suppose that:
\begin{equation*} \left.\frac{\d}{\d t}\right\vert_{t=0} (\alpha,v+tw) \in T_{(\alpha,v)}\O.
\end{equation*} Then:
\begin{equation*} \left.\frac{\d}{\d t}\right\vert_{t=0} v+tw \in T_{v}\O\cap T_{v}(v+V^H).
\end{equation*} Because $H$ is compact, $V^H$ admits an $H$-invariant linear complement in $V$, which implies that:
\begin{equation*} T_{v}\O\cap T_{v}(v+V^H)=0.
\end{equation*} Therefore $w=0$, proving that (\ref{incldimcountsymleavesmap}) is indeed injective.
\end{proof}
\begin{rem}\label{vectspsymplfisofxpt} In the above proof we have seen that the dimension of the symplectic leaf $(\L,\omega_\L)$ through $\O_p$ is $\dim(\S\mathcal{N}_p^{\mathcal{G}_p})$. In fact, there is a canonical isomorphism of symplectic vector spaces:
\begin{equation*} (T_{\O_p}\L,(\omega_{\L})_{\O_p})\cong(\S\mathcal{N}_p^{\mathcal{G}_p},\omega_p).
\end{equation*}
\end{rem}
\subsubsection{Morita invariance of the Poisson stratifications} We end this section with:
\begin{prop}\label{hammorinvpoisbrack} Each of the stratifications in Theorem \ref{poisstratthm} is invariant under Hamiltonian Morita equivalence, as Poisson stratification.
\end{prop}
\begin{proof} Suppose we are given a Morita equivalence between two Hamiltonian actions of two proper symplectic groupoids; we use the notation of Definitions \ref{moreqdefLie} and \ref{moreqdefHam}. It is immediate that the induced homeomorphism $h_Q$ (see Proposition \ref{transgeommap}$a$) maps strata of $\S_\textrm{Ham}(\underline{S}_1)$ onto strata of $\S_\textrm{Ham}(\underline{S}_2)$, and the same goes for $\S_\textrm{Gp}(\underline{S}_1)$ and $\S_\textrm{Gp}(\underline{S}_2)$. So, in view of Proposition \ref{moreqisoredring} $h_Q$ is an isomorphism of smooth stratified spaces, for both of these stratifications. By Proposition \ref{transgeommap}$a$, $h_Q$ identifies the reduced space at a leaf $\L_1$ with the reduced space at the leaf $\L_2:=h_P(\L_1)$ (these being the fibers of $\underline{J}_1$ and $\underline{J}_2$) and it is clear that it maps strata of $\S_\textrm{Ham}(\underline{S}_{\L_1})$ onto strata of $\S_\textrm{Ham}(\underline{S}_{\L_2})$. So, by Remark \ref{morphsmstrspsimp} it restricts to an isomorphism of smooth stratified spaces between these reduced spaces. To prove the proposition it remains to show that $h_Q$ is a Poisson map, for it will then restrict to a Poisson map between the reduced spaces and between the strata as well. To this end, let $U_1$ and $U_2$ be $Q$-related invariant opens in $S_1$ and $S_2$. By the proof of Proposition \ref{moreqisoredring}, the Hamiltonian Morita equivalence induces isomorphisms:
\begin{center}
\begin{tikzpicture} \node (S_1) at (0,0) {$\mathcal{C}_{S_1}^\infty(U_1)^{\mathcal{G}_1}$};
\node (S_2) at (8,0) {$\mathcal{C}_{S_2}^\infty(U_2)^{\mathcal{G}_2}$};
\node (Q) at (4,1) {$\mathcal{C}_Q^\infty(\beta_1^{-1}(U_1))^{\mathcal{G}_1}\cap \mathcal{C}_Q^\infty(\beta_2^{-1}(U_2))^{\mathcal{G}_2}$};
\draw[->](S_1) to node[pos=0.45, below] {$\text{ }\text{ }\beta_1^*$} (Q);
\draw[->](S_2) to node[pos=0.45, below] {$\text{ }\text{ }\beta_2^*$} (Q);
\end{tikzpicture}
\end{center} and to prove that $h_Q$ is a Poisson map we have to show that $(\beta_2^*)^{-1}\circ \beta_1^*$ is an isomorphism of Poisson algebras. To see this, let $f_1,h_1\in \mathcal{C}_{S_1}^\infty(U_1)^{\mathcal{G}_1}$ and $f_2,h_2\in \mathcal{C}_{S_2}^\infty(U_2)^{\mathcal{G}_2}$ such that $\beta_1^*f_1=\beta_2^*f_2$ and $\beta_1^*h_1=\beta_2^*h_2$. Let $p_1\in U_1$, $p_2\in U_2$ and $q\in Q$ such that $p_1=\beta_1(q)$ and $p_2=\beta_2(q)$. As we have seen in Lemma \ref{hamflowinvfunlem} it holds that $X_{f_1}(p_1)\in \ker(\d J_1)$. So, as in the proof of Proposition \ref{transgeomham} we can find $\widehat{v}\in \ker(\d j_q)$ such that $\d \beta_1(\widehat{v})=X_{f_1}(p_1)$. It follows from (\ref{hameqmor1}) that:
\begin{align*} \omega_2(X_{f_2}(p_2),\d \beta_2(\cdot))&=\d(\beta_2^*f_2)_q\\
&=\d (\beta_1^*f_1)_q\\
&=(\beta_1^*\omega_1)(\widehat{v},\cdot)\\
&=(\beta_2^*\omega_2)(\widehat{v},\cdot)=\omega_2(\d \beta_2(\widehat{v}),\d \beta_2(\cdot)),
\end{align*} so that, since $\beta_2$ is a submersion, we find that $\d \beta_2(\widehat{v})=X_{f_2}(p_2)$. Using this we see that:
\begin{align*} \{f_1,h_1\}_{\omega_1}(p_1)&=\d h_1(X_{f_1}(p_1))\\
&=\d(\beta_1^*h_1)(\widehat{v})\\
&=\d(\beta_2^*h_2)(\widehat{v})\\
&=\d h_2(X_{f_2}(p_2))=\{f_2,h_2\}_{\omega_2}(p_2),
\end{align*} which proves that $(\beta_2^*)^{-1}\circ \beta_1^*$ is indeed an isomorphism of Poisson algebras.
\end{proof}
\begin{rem} From the above proposition it follows that $\mathcal{P}_\textrm{Ham}(\underline{S})$ and $\mathcal{P}_\textrm{Ham}(\underline{S}_\L)$ are in fact Poisson homogeneous, meaning that they are smoothly homogeneous as in Definition \ref{homdefi}, with the extra requirement that the isomorphisms $h$ can be chosen to be Poisson maps. This gives another proof of the fact that the Poisson structures on the strata of $\S_\textrm{Ham}(\underline{S})$ must be regular.
\end{rem}
\subsection{Symplectic integration of the canonical Hamiltonian strata}\label{sympintstratsec}
\subsubsection{The integration theorem}
The main theorem of this section is:
\begin{thm}\label{poisstratintgrthm} Let $(\mathcal{G},\Omega)$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$ and let $\pi_{\underline{\Sigma}_S}$ be the Poisson structure on $\underline{\Sigma}_S$ of Theorem \ref{poisstratthm}. There is a naturally associated proper symplectic groupoid (the symplectic leaves of which may be disconnected) that integrates $(\underline{\Sigma}_S,\pi_{\underline{\Sigma}_S})$.
\end{thm}
Our proof consists of two main steps: first we prove the theorem for Hamiltonian actions of principal type (defined below), and then we show how to reduce to actions of this type.
\subsubsection{Hamiltonian actions of principal type}
\begin{defi}\label{princtypdefi} We say that:
\begin{itemize}\item[i)] a proper Lie groupoid $\mathcal{G}\rightrightarrows M$ is of \textbf{principal type} if $M^\textrm{princ}=M$ (see Example \ref{princtypliegpoidex}),
\item[ii)] a Hamiltonian action of a proper symplectic groupoid $(\mathcal{G},\Omega)$ along $J:(S,\omega)\to M$ is of \textbf{principal type} if $S^\textrm{princ}_\textrm{Ham}=S$ and $M^\textrm{princ}=M$ (see Subsection \ref{liedescrpregpartsec}).
\end{itemize}
\end{defi}
\begin{rem}\label{pringpoidrem} Notice that:
\begin{itemize} \item[i)] a proper Lie groupoid $\mathcal{G}\rightrightarrows M$ with connected leaf space $\underline{M}$ is of principal type if and only if $\mathcal{G}_x$ is isomorphic to $\mathcal{G}_y$ for all $x,y\in M$.
\item[ii)] a Hamiltonian action of a proper symplectic groupoid $(\mathcal{G},\Omega)$ along $J:(S,\omega)\to M$ with connected orbit space $\underline{S}$ and connected leaf space $\underline{M}$ is of principal type if and only if $\mathcal{G}_p$ is isomorphic to $\mathcal{G}_q$ for all $p,q\in S$ and $\mathcal{G}_x$ is isomorphic to $\mathcal{G}_y$ for all $x,y\in M$.
\end{itemize}
\end{rem}
For the rest of this subsection, let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action of principal type along $J:(S,\omega)\to M$, for which both the orbit space $\underline{S}$ and the leaf space $\underline{M}$ are connected. Then both $\underline{S}$ and $\underline{M}$ are smooth manifolds and $J:S\to M$, as well as $\underline{J}:\underline{S}\to \underline{M}$, is of constant rank. If the action happens to be free, then $J$ is a submersion and the gauge construction (\cite[Theorem 3.2]{Xu}) yields a proper symplectic groupoid integrating $(\underline{S},\pi_{\underline{S}})$. This groupoid is obtained as quotient of the submersion groupoid:
\begin{equation*}\label{submgpoid} S\times_MS\rightrightarrows S
\end{equation*} by the diagonal action of $\mathcal{G}$ on $S\times_MS$ along $J\circ \textrm{pr}_1$. As we will now show, this construction can be generalized to arbitrary Hamiltonian actions of principal type (for which the action need not be free). To this end, we consider the subgroupoid:
\begin{equation*} \mathcal{R}=\{(p_1,p_2)\in S\times S\mid J(p_1)=J(p_2)\textrm{ and }\mathcal{G}_{p_1}=\mathcal{G}_{p_2}\}
\end{equation*} of the pair groupoid $S\times S$.
\begin{thm}\label{intgpoidpropprintyp} The groupoid $\mathcal{R}$ has the following properties.
\begin{itemize}\item[a)] It is a closed embedded Lie subgroupoid of the pair groupoid $S\times S$.
\item[b)] It is invariant under the diagonal action of $\mathcal{G}$ on $S\times_MS$, the restriction of the action to $\mathcal{R}$ is smooth, $\underline{\mathcal{R}}:=\mathcal{R}/\mathcal{G}$ is a smooth manifold and the orbit projection $\mathcal{R}\to \underline{\mathcal{R}}$ is a submersion.
\item[c)] The symplectic pair groupoid $(S\times S,\omega\oplus -\omega)$ descends to give a proper symplectic groupoid:
\begin{equation*} (\underline{\mathcal{R}},\Omega_{\underline{\mathcal{R}}})\rightrightarrows \underline{S},
\end{equation*} that integrates $(\underline{S},\pi_{\underline{S}})$.
\end{itemize}
\end{thm}
\begin{proof}[Proof of Theorem \ref{intgpoidpropprintyp}; part $a$] We will first use the normal form to study the subspace $S\times_MS$. To this end, let $(p_1,p_2)\in S\times_MS$ and let $x:=J(p_1)=J(p_2)$. Then, as in the proof of Theorem \ref{righamthm}, we can find two neighbourhood equivalences $(\Phi,\Psi_1)$ and $(\Phi,\Psi_2)$ between the given Hamiltonian action and the two local models for it around the respective orbits $\O_{p_1}$ and $\O_{p_2}$ through $p_1$ and $p_2$, using one and the same isomorphism of symplectic groupoids $\Phi$ for both neighbourhood equivalences. Using this, the subset $S\times _MS$ of $S\times S$ is identified near $(p_1,p_2)$ with the subset $S_{\theta,1}\times_{M_\theta} S_{\theta,2}$ of the product $S_{\theta,1}\times S_{\theta,2}$ of the local models around $\O_{p_1}$ and $\O_{p_2}$ (using the notation of Subsection \ref{hamlocmodconsec}) near $(\Psi_1(p_1),\Psi_2(p_2))$. Since we assume the Hamiltonian action to be of principal type, the coadjoint $\mathcal{G}_x$-action and the actions underlying the symplectic normal representations at $p_1$ and $p_2$ are trivial (cf. Proposition \ref{normrepsymp}$b$, Example \ref{princtypliegpoidex} and Proposition \ref{regpartalg}). So, denoting by $P$ the source-fiber of $\mathcal{G}$ over $x$, the momentum maps $J_{\theta,i}:S_{\theta,i}\to M_\theta$ in the local model become:
\begin{equation}\label{mommaplocmoddescpprinctyp} P/\mathcal{G}_{p_i}\times (\mathfrak{g}_{p_i}^0\oplus \S\mathcal{N}_{p_i})\to P/\mathcal{G}_x\times \mathfrak{g}_x^*, \quad ([q],\alpha,v)\mapsto ([q],\alpha),
\end{equation} (or rather, a restriction of this to an open neighbourhood of the central orbit $P/\mathcal{G}_{p_i}$) for $i\in \{1,2\}$. From this we see that $S_{\theta,1}\times_{M_\theta} S_{\theta,2}$ is a submanifold of $S_{\theta,1}\times S_{\theta,2}$ with tangent space given by all pairs of tangent vectors $(v_1,v_2)$ satisfying $\d J_{\theta,1}(v_1)=\d J_{\theta,2}(v_2)$. Passing back to $S\times S$ via $(\Psi_1,\Psi_2)$, we find that $S\times_MS$ is an embedded submanifold of $S\times S$ at $(p_1,p_2)$ with tangent space:
\begin{equation}\label{destngtspeq} \{(v_1,v_2)\in T_{p_1}S\times T_{p_2}S\mid \d J_{p_1}(v_1)=\d J_{p_2}(v_2)\}.
\end{equation} We now turn to $\mathcal{R}$. As we will show in a moment, $\mathcal{R}$ is both open and closed in $S\times_MS$. Together with the above, this would show that $\mathcal{R}$ is a closed embedded submanifold of $S\times S$ (with connected components of possibly varying dimension), the tangent space of which is given by (\ref{destngtspeq}). To then show that $\mathcal{R}$ is an embedded Lie subgroupoid (with connected components of one and the same dimension), it would be enough to show that the two projections $\mathcal{R}\to S$ are submersions. In view of the description (\ref{destngtspeq}) of the tangent space of $\mathcal{R}$ this is equivalent to the requirement that $\textrm{Im}(\d J_{p_1})=\textrm{Im}(\d J_{p_2})$ for all $(p_1,p_2)\in \mathcal{R}$, which is indeed satisfied, as follows from Proposition \ref{infmomact}$b$. So, to prove part $a$ it remains to show that $\mathcal{R}$ is both open and closed in $S\times_MS$.\\
To prove that $\mathcal{R}$ is closed in $S\times_MS$, we will show that every $(p_1,p_2)\in S\times_MS$ admits an open neighbourhood that intersects $\mathcal{R}$ in a closed subset of this neighbourhood. Given such $(p_1,p_2)$, as before, we pass to the local models around $\O_{p_1}$ and $\O_{p_2}$ using $(\Phi,\Psi_1)$ and $(\Phi,\Psi_2)$. From the description (\ref{mommaplocmoddescpprinctyp}) we find that $S_{\theta,1}\times_{M_\theta}S_{\theta,2}$ is the subset of $S_{\theta,1}\times S_{\theta,2}$ consisting of pairs:
\begin{equation*} (([q_1],\alpha_1,v_1),([q_2],\alpha_2,v_2))
\end{equation*} satisfying:
\begin{equation*} [q_1]=[q_2]\in P/\mathcal{G}_x \quad \& \quad \alpha_1=\alpha_2\in \mathfrak{g}_x^*.
\end{equation*}
Furthermore, a straightforward verification shows that $(\Psi_1,\Psi_2)$ identifies $\mathcal{R}$ near $(p_1,p_2)$ with the subset of those pairs that in addition satisfy:
\begin{equation}\label{condRset} [q_1:q_2]\in N_{\mathcal{G}_x}(\mathcal{G}_{p_1},\mathcal{G}_{p_2}):=\{g\in \mathcal{G}_x\mid g\mathcal{G}_{p_1}g^{-1}=\mathcal{G}_{p_2}\}.
\end{equation} Notice that $N_{\mathcal{G}_x}(\mathcal{G}_{p_1},\mathcal{G}_{p_2})$ is closed in $\mathcal{G}_x$ and invariant under left multiplication by elements of $\mathcal{G}_{p_2}$ and under right multiplication by elements of $\mathcal{G}_{p_1}$, so that it corresponds to a closed subset of:
\begin{equation*} \mathcal{G}_{p_2}\backslash \mathcal{G}_x /\textrm{ }\mathcal{G}_{p_1}.
\end{equation*} Hence, by continuity of the map:
\begin{equation*} (P/\mathcal{G}_{p_1})\times_{P/\mathcal{G}_x} (P/\mathcal{G}_{p_2})\to \mathcal{G}_{p_2}\backslash \mathcal{G}_x /\textrm{ }\mathcal{G}_{p_1},\quad ([q_1],[q_2])\mapsto [q_1:q_2] \mod \mathcal{G}_{p_2}\times \mathcal{G}_{p_1}
\end{equation*} it follows that (\ref{condRset}) is a closed condition in $S_{\theta,1}\times_{M_\theta}S_{\theta,2}$. So, $\mathcal{R}$ is indeed closed in $S\times_MS$.\\
To show that $\mathcal{R}$ is open in $S\times_MS$ we can argue in exactly the same way, now restricting attention to pairs $(p_1,p_2)\in \mathcal{R}$, so that the condition (\ref{condRset}) becomes:
\begin{equation*} [q_1:q_2]\in N_{\mathcal{G}_x}(\mathcal{G}_{p_1}),
\end{equation*}
where $N_{\mathcal{G}_x}(\mathcal{G}_{p_1})$ denotes the normalizer of $\mathcal{G}_{p_1}$ in $\mathcal{G}_x$, and we are left to show that $N_{\mathcal{G}_x}(\mathcal{G}_{p_1})$ is open in $\mathcal{G}_x$. To this end, recall from before that the coadjoint action of $\mathcal{G}_x$ on $\mathfrak{g}_x^*$ is trivial, hence so is the adjoint action of $\mathcal{G}_x$ on $\mathfrak{g}_x$, and therefore (via the exponential map) so is the action by conjugation of $\mathcal{G}_x$ on its identity component $\mathcal{G}^0_x$. This can be rephrased as saying that the action by conjugation of $\mathcal{G}^0_x$ on $\mathcal{G}_x$ is trivial. In particular, $\mathcal{G}^0_x$ is contained in $N_{\mathcal{G}_x}(\mathcal{G}_{p_1})$ and therefore the Lie subgroup $N_{\mathcal{G}_x}(\mathcal{G}_{p_1})$ is indeed open in $\mathcal{G}_x$. This concludes the proof of part $a$.
\end{proof}
For the proof of part $b$ we recall the lemma below, which follows from the linearization theorem for proper Lie groupoids (see e.g. the proof of \cite[Proposition 23]{CrMe} for details).
\begin{lemma}\label{orbspsmoothlem} Let $\mathcal{G}\rightrightarrows M$ be a proper Lie groupoid with a single isomorphism type, meaning that $\mathcal{G}_x$ is isomorphic to $\mathcal{G}_y$ for all $x,y\in M$ (see Example \ref{exisotyp}). Then the leaf space $(\underline{M},\mathcal{C}^\infty_{\underline{M}})$ is a smooth manifold and the projection $M\to \underline{M}$ is a submersion.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{intgpoidpropprintyp}; parts $b$ and $c$] It is readily verified that $\mathcal{R}$ is invariant under the diagonal $\mathcal{G}$-action along $J\circ \textrm{pr}_1:S\times_MS\to M$ and that the restricted action is smooth. Since $\mathcal{G}$ is proper, so is the action groupoid $\mathcal{G}\ltimes \mathcal{R}$. Furthermore, the isotropy group of the $\mathcal{G}$-action at $(p,q)\in \mathcal{R}$ is the isotropy group of the $\mathcal{G}$-action on $S$ at $p$, because $\mathcal{G}_p=\mathcal{G}_q$ for $(p,q)\in \mathcal{R}$. So, since the isotropy groups of the action on $S$ are all isomorphic (by Remark \ref{pringpoidrem}), the same holds for the isotropy groups of the action on $\mathcal{R}$. In view of Lemma \ref{orbspsmoothlem}, we conclude that part $b$ holds. \\
We turn to part $c$. One readily verifies that $\underline{\mathcal{R}}$ inherits the structure of a Lie groupoid over $\underline{S}$ from the Lie groupoid $\mathcal{R}\rightrightarrows S$. To see that the Lie groupoid $\underline{\mathcal{R}}$ is proper, suppose that we are given a sequence of $[p_n,q_n]\in \underline{\mathcal{R}}$ with the property that $t_{\underline{\mathcal{R}}}([p_n,q_n])=[p_n]$ and $s_{\underline{\mathcal{R}}}([p_n,q_n])=[q_n]$ converge in $\underline{S}$ as $n\to \infty$. We have to show that the given sequence in $\underline{\mathcal{R}}$ admits a convergent subsequence. Since the orbit projection $S\to \underline{S}$ is a surjective submersion, it admits local sections around all points in $\underline{S}$. Using this, we can (for $n$ large enough) find $g_n, h_n\in \mathcal{G}$ in the source fiber over $J(p_n)=J(q_n)$ such that $g_n\cdot p_n$ and $h_n\cdot q_n$ converge in $S$ as $n\to \infty$. Then $t_\mathcal{G}(g_nh_n^{-1})=J(g_n\cdot p_n)$ and $s_\mathcal{G}(g_nh_n^{-1})=J(h_n\cdot q_n)$ both converge in $M$ as $n\to \infty$. By properness of $\mathcal{G}$, it follows that there is a subsequence $g_{n_k}h_{n_k}^{-1}$ that converges in $\mathcal{G}$ as $k\to \infty$. Together with convergence of $h_{n_k}\cdot q_{n_k}$, this implies that $g_{n_k}\cdot q_{n_k}$ converges in $S$ as well. So, since $\mathcal{R}$ is closed in $S\times S$, it follows that $g_{n_k}\cdot (p_{n_k},q_{n_k})$ converges in $\mathcal{R}$. Therefore, $[p_{n_k},q_{n_k}]$ converges in $\underline{\mathcal{R}}$. This shows that the required subsequence exists and hence proves properness of the Lie groupoid $\underline{\mathcal{R}}$. \\
To complete the proof of $c$, we are left to show that the symplectic structure on the pair groupoid $S\times S$ descends to a symplectic structure $\Omega_{\underline{\mathcal{R}}}$ on $\underline{\mathcal{R}}$, and that $(\underline{\mathcal{R}},\Omega_{\underline{\mathcal{R}}})$ integrates $(\underline{S},\pi_{\underline{S}})$. To see that the restriction $\Omega_\mathcal{R}\in \Omega^2(\mathcal{R})$ of $\omega\oplus -\omega$ to $\mathcal{R}$ descends to a $2$-form on $\underline{\mathcal{R}}$, recall that this is equivalent to asking that $\Omega_\mathcal{R}$ is basic with respect to the $\mathcal{G}$-action on $\mathcal{R}$ (in the sense of \cite{PfPoTa,Wa,Yu}), which means that: $m_\mathcal{R}^* \Omega_\mathcal{R}=\textrm{pr}_\mathcal{R}^*\Omega_\mathcal{R}$, where $m_\mathcal{R},\textrm{pr}_\mathcal{R}:\mathcal{G}\ltimes \mathcal{R}\to \mathcal{R}$ denote the target and source map of the action groupoid. This equality is readily verified. So, $\Omega_\mathcal{R}$ indeed descends to a $2$-form $\Omega_{\underline{\mathcal{R}}}$ on $\underline{\mathcal{R}}$. Further notice that $\Omega_{\underline{\mathcal{R}}}$ is closed (because $\omega$ is closed) and it inherits multiplicativity from the multiplicative form $\omega\oplus -\omega$ on the pair groupoid $S\times S$. Moreover, using the momentum map condition (\ref{mommapcond}), Proposition \ref{infmomact} and the description (\ref{destngtspeq}) of the tangent space to $\mathcal{R}$, it is straightforward to check that $\Omega_{\underline{\mathcal{R}}}$ is non-degenerate. So, $(\underline{\mathcal{R}},\Omega_{\underline{\mathcal{R}}})$ is a symplectic groupoid. We leave it to the reader to verify that $(\underline{\mathcal{R}},\Omega_{\underline{\mathcal{R}}})$ integrates $(\underline{S},\pi_{\underline{S}})$. \end{proof}
\subsubsection{Reduction to Hamiltonian actions of principal type}
The aim of this subsection is to show that the restriction of a given Hamiltonian action (by a proper symplectic groupoid) to any stratum of $\S_\textrm{Ham}(\underline{S})$ can be reduced to a Hamiltonian action of principal type. More precisely, we prove:
\begin{thm}\label{redstratthm} Let $(\mathcal{G},\Omega)$ be a proper symplectic groupoid and suppose that we are given a Hamiltonian $(\mathcal{G},\Omega)$-action along $J:(S,\omega)\to M$. Let $\underline{\Sigma}_S\in \S_\textrm{Ham}(\underline{S})$ and let $\underline{\Sigma}_M\in \S_\textrm{Gp}(\underline{M})$ be such that $\underline{J}(\underline{\Sigma}_S)\subset\underline{\Sigma}_M$. Finally, let $q_S:S\to \underline{S}$ and $q_M:M\to \underline{M}$ be the orbit and leaf space projections, and consider $\Sigma_S=q_S^{-1}(\underline{\Sigma}_S)$ and $\Sigma_M=q_M^{-1}(\underline{\Sigma}_M)$. Then the following hold.
\begin{itemize}\item[a)] The restriction $\omega_{\Sigma_S}\in \Omega^2(\Sigma_S)$ of the symplectic form $\omega$ to $\Sigma_S$ has constant rank. Moreover, the null foliation integrating $\ker(\omega_{\Sigma_S})$ is simple, meaning that its leaf space admits a smooth manifold structure with respect to which the leaf space projection is a submersion.
\end{itemize} Let $S_{\Sigma}$ denote this leaf space and let $\omega_{S_\Sigma}$ denote the induced symplectic form on $S_{\Sigma}$.
\begin{itemize}
\item[b)] The restriction of $\Omega$ to $\mathcal{G}\vert_{\Sigma_M}$ has constant rank and the leaf space of its null foliation is naturally a proper symplectic groupoid $(\mathcal{G}_{\Sigma_M},\Omega_{\Sigma_M})$ over $\Sigma_M$.
\item[c)] The map $J$ descends to a map:
\begin{equation*} J_{S_\Sigma}:(S_{\Sigma},\omega_{S_\Sigma})\to \Sigma_M
\end{equation*} and the Hamiltonian $(\mathcal{G},\Omega)$-action along $J$ descends to a Hamiltonian $(\mathcal{G}_{\Sigma_M},\Omega_{\Sigma_M})$-action along $J_{S_\Sigma}$, which is of principal type.
\item[d)] There is a canonical Poisson diffeomorphism:
\begin{equation*} (\underline{\Sigma}_S,\pi_{\underline{\Sigma}_S})\cong (\underline{S_{\Sigma}},\pi_{\underline{S_{\Sigma}}}).
\end{equation*}
\end{itemize}
\end{thm}
Together with Theorem \ref{intgpoidpropprintyp}, this will prove Theorem \ref{poisstratintgrthm}. To prove Theorem \ref{redstratthm}, we use the following Lie theoretic description of the null foliation of $\omega_{\Sigma_S}$. Recall that, given a real finite-dimensional representation $V$ of a compact Lie group $G$, the fixed-point set $V^G$ has a canonical $G$-invariant linear complement $\c_V$ in $V$, given by the linear span of the collection:
\begin{equation*} \{v-g\cdot v\mid v\in V, g\in G\}.
\end{equation*} To see that $\c_V$ is indeed a linear complement to $V^G$, note that for any choice of $G$-invariant inner product on $V$, $\c_V$ coincides with the orthogonal complement to $V^G$ in $V$. We will call $\c_V$ the \textbf{fixed-point complement} of $V$. For the dual representation $V^*$, it holds that:
\begin{equation}\label{coinv} (V^*)^G=(\c_V)^0,
\end{equation} the annihilator of $\c_V$ in $V^*$.
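For instance, let $G=\mathbb{Z}/2\mathbb{Z}$ act on $V=\mathbb{R}^2$ by interchanging the two coordinates. Then:
\begin{equation*} V^G=\{(t,t)\mid t\in \mathbb{R}\},\quad \c_V=\textrm{span}\{v-g\cdot v\mid v\in V,g\in G\}=\{(t,-t)\mid t\in \mathbb{R}\},
\end{equation*} so that indeed $V=V^G\oplus \c_V$, and a functional on $V$ is $G$-invariant if and only if it vanishes on $\c_V$, in accordance with (\ref{coinv}). Of particular interest will be the adjoint representation.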
\begin{prop}\label{fixptcompliealg} Let $G$ be a compact Lie group. The fixed-point complement $\c_\mathfrak{g}$ of the adjoint representation is a Lie subalgebra of $\mathfrak{g}$, given by:
\begin{equation*} \c_\mathfrak{g}=\c_{Z(\mathfrak{g})}\oplus \mathfrak{g}^{\textrm{ss}},
\end{equation*} where $Z(\mathfrak{g})$ is the center (viewed as $G$-representation) and $\mathfrak{g}^\textrm{ss}=[\mathfrak{g},\mathfrak{g}]$ is the semi-simple part of $\mathfrak{g}$.
\end{prop}
\begin{proof} This follows from the observation that $Z(\mathfrak{g})$ is the fixed-point set for the adjoint action of the identity component of $G$ and $[\mathfrak{g},\mathfrak{g}]$ is the orthogonal complement to $Z(\mathfrak{g})$ in $\mathfrak{g}$ with respect to any invariant inner product.
\end{proof}
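For example, for $G=\textrm{O}(2)$ we have $\mathfrak{g}=Z(\mathfrak{g})=\mathfrak{so}(2)$ and $\mathfrak{g}^\textrm{ss}=0$; the reflections act on $\mathfrak{so}(2)$ by $-1$, so that $\c_\mathfrak{g}=\c_{Z(\mathfrak{g})}=\mathfrak{so}(2)$. For $G=\textrm{U}(n)$, on the other hand, connectedness of $G$ forces the adjoint action on $Z(\mathfrak{g})=i\mathbb{R}\cdot\textrm{id}$ to be trivial, so that $\c_{Z(\mathfrak{g})}=0$ and $\c_\mathfrak{g}=\mathfrak{g}^\textrm{ss}=\mathfrak{su}(n)$.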
We now give the aforementioned description of the null foliation.
\begin{lemma}\label{liedescrisofol} Let $p\in \Sigma_S$ and $x=J(p)\in \Sigma_M$, with notation as in Theorem \ref{redstratthm}. Let \begin{equation*} a_J:J^*(T^*M)\to TS
\end{equation*} be the bundle map underlying the infinitesimal action (\ref{assliealgact}) associated to the Hamiltonian action. Further, let $\c_{\mathfrak{g}_x}$ denote the fixed-point complement of the adjoint representation of $\mathcal{G}_x$. Then:
\begin{equation*} \ker(\omega_{\Sigma_S})_p=(a_J)_p(\c_{\mathfrak{g}_x}),
\end{equation*} where we view $\mathfrak{g}_x\subset T_x^*M$ via (\ref{imsymp}).
\end{lemma}
\begin{proof} Because (by Corollary \ref{orbprojstratdirac}) the orbit projection (\ref{orbprojdirac}) is a forward Dirac map from the pre-symplectic manifold $(\Sigma_S,\omega_{\Sigma_S})$ into a Poisson manifold, it must hold that:
\begin{equation*} \ker(\omega_{\Sigma_S})_p\subset T_p\O.
\end{equation*} Since $T_p\O\subset T_p\Sigma_S$, it also holds that:
\begin{equation*} \ker(\omega_{\Sigma_S})_p\subset T_p\O^\omega.
\end{equation*} For any Hamiltonian action, we have the equality:
\begin{equation*} T_p\O\cap T_p\O^\omega=(a_J)_p(\mathfrak{g}_x),
\end{equation*} as is readily derived from the momentum map condition (\ref{mommapcond}). So, we conclude that:
\begin{equation*} \ker(\omega_{\Sigma_S})_p\subset (a_J)_p(\mathfrak{g}_x).
\end{equation*} Now consider the composition of maps:
\begin{equation}\label{compmapsliedescrisofol} \frac{T_p\Sigma_S}{T_p\O}\hookrightarrow \mathcal{N}_p\xrightarrow{\underline{\d J}_p}\mathcal{N}_x\xrightarrow{\sim}\mathfrak{g}_x^*,
\end{equation} where the third map is dual to the canonical isomorphism between $\mathfrak{g}_x$ (which via (\ref{imsymp}) we view as the annihilator of $T_x\L$ in $T_x^*M$) and $\mathcal{N}_x^*$. Using a Hamiltonian Morita equivalence as in the proof of Proposition \ref{eqcharhammortyp}, together with Proposition \ref{normrepham}$b$, Proposition \ref{transgeommap}$c$, Lemma \ref{techlemisotype} and Morita invariance of the $J$-isomorphism types, it is readily verified that the image of (\ref{compmapsliedescrisofol}) is $(\mathfrak{g}_p^0)^{\mathcal{G}_x}$. From this and the momentum map condition (\ref{mommapcond}) it follows that, given $\alpha\in \mathfrak{g}_x$, the tangent vector $(a_J)_p(\alpha)$ belongs to $\ker(\omega_{\Sigma_S})_p$ if and only if $\alpha$ belongs to the annihilator of $(\mathfrak{g}_p^0)^{\mathcal{G}_x}$. This annihilator equals $\mathfrak{g}_p+\c_{\mathfrak{g}_x}$, as (\ref{coinv}) implies that:
\begin{equation*} (\mathfrak{g}_p^0)^{\mathcal{G}_x}:=\mathfrak{g}_p^0\cap (\mathfrak{g}_x^*)^{\mathcal{G}_x}=(\mathfrak{g}_p+\c_{\mathfrak{g}_x})^0.
\end{equation*} So, all together it follows that:
\begin{equation*} \ker(\omega_{\Sigma_S})_p=(a_J)_p(\mathfrak{g}_p+\c_{\mathfrak{g}_x})=(a_J)_p(\c_{\mathfrak{g}_x}),
\end{equation*} which proves the lemma.
\end{proof}
We can interpret this as follows: the $T_\pi^*M$-action associated to the momentum map $J$ restricts to an infinitesimal action of the bundle of Lie algebras:
\begin{equation*} \bigsqcup_{x\in \Sigma_M}\c_{\mathfrak{g}_x}\subset T^*M\vert_{\Sigma_M}
\end{equation*} and the orbit distribution of this infinitesimal action coincides with the distribution $\ker(\omega_{\Sigma_S})$. The proof of Theorem \ref{redstratthm} therefore boils down to showing that this infinitesimal action integrates to an action of a bundle of Lie groups in $\mathcal{G}$, the orbit space of which is smooth.
\begin{prop}[\cite{CrFeTo2}]\label{gpoidredstrat} Let $(\mathcal{G},\Omega)\rightrightarrows M$ be a proper symplectic groupoid, let $\underline{\Sigma}\in \S_\textrm{Gp}(\underline{M})$ and let $\Sigma=q^{-1}(\underline{\Sigma})$, for $q:M\to \underline{M}$ the leaf space projection. Consider the family of Lie groups:
\begin{equation*} \bigsqcup_{x\in \Sigma} C_{\mathfrak{g}_x}\subset \mathcal{G}\vert_{\Sigma}
\end{equation*} where $C_{\mathfrak{g}_x}$ is the unique connected Lie subgroup of $\mathcal{G}_x$ that integrates $\c_{\mathfrak{g}_x}$. This defines a closed, embedded and normal Lie subgroupoid of $\mathcal{G}\vert_{\Sigma}$ and the quotient of $\mathcal{G}\vert_{\Sigma}$ by this bundle of Lie groups is naturally a proper symplectic groupoid over $\Sigma$ of principal type.
\end{prop}
\begin{proof} First, observe that for any compact Lie group $G$, the connected Lie subgroup $C_{\mathfrak{g}}$ of $G$ with Lie algebra $\c_\mathfrak{g}$ is compact. To see this, let $G^\textrm{ss}$ be the connected Lie subgroup of $G$ with Lie algebra the compact and semisimple Lie subalgebra $\mathfrak{g}^\textrm{ss}=[\mathfrak{g},\mathfrak{g}]$ of $\mathfrak{g}$, let $G^0$ be the identity component of $G$ and let $Z(G^0)^0$ denote the identity component of the center of $G^0$. Fix $g_1,...,g_n\in G$ such that $G/G^0=\{[g_1],...,[g_n]\}$. It follows from Proposition \ref{fixptcompliealg} that $C_\mathfrak{g}$ is the image of the morphism of Lie groups:
\begin{equation*} \left(Z(G^0)^0\right)^n\times G^\textrm{ss}\to G, \quad (h_1,...,h_n,g)\mapsto [h_1,g_1]\cdot ...\cdot [h_n,g_n]\cdot g,
\end{equation*} where $[h_i,g_i]=h_ig_ih_i^{-1}g_i^{-1}$ is the commutator (which again belongs to $Z(G^0)^0$). So, since both $G^\textrm{ss}$ and $Z(G^0)^0$ are compact, $C_\mathfrak{g}$ is compact as well. Using this and the linearization theorem for proper Lie groupoids (see Subsection \ref{locmodconstsec} for the local model) one sees that the family of the Lie groups $C_{\mathfrak{g}_x}$ is a closed embedded Lie subgroupoid of $\mathcal{G}\vert_{\Sigma}$ over $\Sigma$. Furthermore, for every $g\in \mathcal{G}\vert_\Sigma$ starting at $x$ and ending at $y$, it holds that:
\begin{equation*} gC_{\mathfrak{g}_x}g^{-1}=C_{\mathfrak{g}_y}.
\end{equation*} This follows from the observation that an isomorphism of compact Lie groups $G_1\to G_2$ maps $C_{\mathfrak{g}_1}$ onto $C_{\mathfrak{g}_2}$. So, the family of Lie groups is also a normal subgroupoid of $\mathcal{G}\vert_{\Sigma}$. Therefore, the quotient of the proper Lie groupoid $\mathcal{G}\vert_{\Sigma}$ by this bundle of Lie groups is again a proper Lie groupoid. It follows from Lemma \ref{liedescrisofol}, applied to the Hamiltonian action of Example \ref{identityactionex}, that the pre-symplectic form $\Omega\vert_{(\mathcal{G}\vert_{\Sigma})}$ on $\mathcal{G}\vert_{\Sigma}$ has constant rank and its null foliation coincides with the foliation by orbits of the action on $\mathcal{G}\vert_{\Sigma}$ of this bundle of Lie groups. So, the quotient groupoid inherits a symplectic form. This symplectic form inherits multiplicativity from $\Omega$. Hence, the quotient is a symplectic groupoid. Finally, in light of Remark \ref{pringpoidrem} the quotient groupoid is of principal type, because for any $x,y\in \Sigma$ there is an isomorphism between $\mathcal{G}_x$ and $\mathcal{G}_y$, and any such isomorphism descends to one between the isotropy groups $\mathcal{G}_x/C_{\mathfrak{g}_x}$ and $\mathcal{G}_y/C_{\mathfrak{g}_y}$ of the quotient groupoid.
\end{proof}
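To illustrate the morphism appearing in this proof, take $G=\textrm{O}(2)$ as before. Then $G^0=Z(G^0)^0=\textrm{SO}(2)$, $G^\textrm{ss}$ is trivial and $G/G^0=\{[e],[r]\}$ for any reflection $r$. Since $rhr^{-1}=h^{-1}$ for every rotation $h$, the commutator $[h,r]=h^2$ ranges over all of $\textrm{SO}(2)$, so the image of the morphism is $\textrm{SO}(2)$; this is indeed the compact connected subgroup $C_\mathfrak{g}$ integrating $\c_\mathfrak{g}=\mathfrak{so}(2)$.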
We are now ready to complete the proof of the reduction theorem.
\begin{proof}[Proof of Theorem \ref{redstratthm}] Consider the family of Lie groups:
\begin{equation*} \H_{\Sigma_M}:=\bigsqcup_{x\in \Sigma_M} C_{\mathfrak{g}_x}\subset \mathcal{G}\vert_{\Sigma_M}
\end{equation*} of Proposition \ref{gpoidredstrat}. Being a closed embedded Lie subgroupoid of the proper Lie groupoid $\mathcal{G}\vert_{\Sigma_M}$, the Lie groupoid $\H_{\Sigma_M}$ is proper as well. Hence, so is any smooth action of $\H_{\Sigma_M}$. It acts along $J:\Sigma_S\to \Sigma_M$ via the action of $\mathcal{G}$. Proposition \ref{eqcharhammortyp} implies that for any $p,q\in \Sigma_S$, writing $x=J(p)$ and $y=J(q)$, there is an isomorphism of pairs of Lie groups:
\begin{equation}\label{isopairliegp} (\mathcal{G}_{x},\mathcal{G}_p)\cong (\mathcal{G}_{y},\mathcal{G}_q).
\end{equation} Such an isomorphism restricts to an isomorphism between the isotropy groups of the $\H_{\Sigma_M}$-action:
\begin{equation*} (\H_{\Sigma_M})_p=C_{\mathfrak{g}_{x}}\cap \mathcal{G}_p\cong C_{\mathfrak{g}_{y}}\cap \mathcal{G}_q=(\H_{\Sigma_M})_q.
\end{equation*} So, appealing to Lemma \ref{orbspsmoothlem}, we find that the orbit space admits a smooth manifold structure for which the orbit projection is a submersion. It follows from Lemma \ref{liedescrisofol} that the orbits of this action are the leaves of the null foliation of $\omega_{\Sigma_S}$, so this proves part $a$ of the theorem. Part $b$ of the theorem is proved in Proposition \ref{gpoidredstrat}. For part $c$, notice that $J$ factors through to a map $J_{S_\Sigma}$ (since the source and target of any element in $\H_{\Sigma_M}$ coincide) and the action of $\mathcal{G}$ along $J$ descends to an action of $\mathcal{G}_{\Sigma_M}$ along $J_{S_\Sigma}$. As the action of $(\mathcal{G},\Omega)$ along $J$ is Hamiltonian, the same follows for the action of $(\mathcal{G}_{\Sigma_M},\Omega_{\Sigma_M})$ along $J_{S_\Sigma}$. By the previous proposition, $\mathcal{G}_{\Sigma_M}$ is of principal type. Furthermore, for any two $[p],[q]\in S_{\Sigma}$ there is, as before, an isomorphism of pairs (\ref{isopairliegp}), and this descends to an isomorphism between the Lie groups:
\begin{equation*} \mathcal{G}_p/(C_{\mathfrak{g}_x}\cap \mathcal{G}_p)\cong \mathcal{G}_q/(C_{\mathfrak{g}_y}\cap \mathcal{G}_q),
\end{equation*} which are canonically isomorphic to the respective isotropy groups of the $\mathcal{G}_{\Sigma_M}$-action at $[p]$ and $[q]$. In view of Remark \ref{pringpoidrem} we conclude that the Hamiltonian $(\mathcal{G}_{\Sigma_M},\Omega_{\Sigma_M})$-action is of principal type. This completes the proof of parts $a-c$. For the final statement, consider the diagram:
\begin{center}
\begin{tikzcd} (\Sigma_S, \omega_{\Sigma_S})\arrow[r] \arrow[d] & (S_{\Sigma},\omega_{S_\Sigma}) \arrow[d] \\
(\underline{\Sigma}_S,\pi_{\underline{\Sigma}_S}) & (\underline{S_{\Sigma}},\pi_{\underline{S_{\Sigma}}})
\end{tikzcd}
\end{center} All arrows are surjective submersions and by construction (in particular, Corollary \ref{orbprojstratdirac}) each is forward Dirac. Evidently, the left vertical map factors through the composition of the other two, and vice versa. Hence, by functoriality of the push-forward construction for Dirac structures, the diagram completes to give the desired Poisson diffeomorphism.
\end{proof}
\bibliographystyle{plain}
\section{Introduction} \label{sect:intro}
Lithium (Li) is a particularly important chemical element for astrophysics, with an impact on problems ranging from the early universe to the assembly of planetary systems recently formed around young stars. The $^7$Li isotope is one of the few species created during big bang nucleosynthesis (BBN), and it burns in stellar interiors via proton capture at relatively low temperatures. Because of its fragility, stars with outer convective envelopes easily deplete any Li on their surfaces, making it a useful thermometer of the physical conditions that set the controls for the heart of the Sun and the stars.
Besides BBN, other known astrophysical sources of Li synthesis include asymptotic giant branch stars \citep{cameron71,sackmann92}, novae \citep{novae1979,tajitsu2015,izzo2015} and cosmic ray spallation \citep{spallation1970,spallation1992}. During their longer-lived evolutionary phases, stars in general are not expected to add to the Li budget of the Galaxy, but to largely deplete it.
Large spectroscopic surveys are now providing detailed abundance patterns for hundreds of thousands of stars. GALAH \citep{galah} and LAMOST \citep{LAMOST}, among others, include Li data; recent works based on these surveys include \cite{smiljanic2018,casey2019,gao2019,gao2021} and \cite{martell2021}. In this paper we focus on GALAH Li data for evolved stars.
Interpreting Li data is not straightforward. There is a large literature on Li abundances in stars; for example, see \citet{sestito} for a discussion in the open cluster context, and \cite{Lidesert} for a recent analysis of field stars. The known observational pattern is complex, but it is useful to summarize the main expectations for evolved stars (see \citealt{pinsono1997} for a more detailed discussion).
Solar-type and lower mass stars can destroy a significant amount of Li during the pre-main sequence \citep{iben1965}; this depletion may be modified by the structural effects of star spots and magnetic fields \citep{somers2016}. Stars also experience main sequence (MS) depletion (e.g., \citealt{sestito}); this depletion depends on mass, composition, and age, and there is a dispersion in depletion even between open cluster stars with the same mass. The latter provides strong evidence for rotationally induced mixing as an explanation for Li depletion, as stars that are born with different rotation rates mix at different rates \citep{pinsono1989}.
There is a Li dip, located at the transition from radiative to convective envelopes on the MS \citep{Lidip}; stars in this domain almost completely destroy Li on the MS, and this phenomenon is clearly seen in GALAH data \citep{gao2020_Lidip}. Li is difficult to measure in upper MS stars, but there is some evidence for dispersion and MS destruction from observations of red clump (RC) stars in young open clusters \citep{gilroy1989}. Once stars leave the MS, they develop deep convective envelopes and dilute their Li content. Li on the red giant branch (RGB) can further be reduced by extra mixing on the upper RGB (e.g., \citealt{shetrone2019}); mass transfer can induce severe Li depletion \citep{ryan2001}; and under some rare circumstances, Li can be produced in evolved stars \citep{cameron71,sackmann92}.
In this context, a recent work by \cite{nature} claimed the discovery of evidence for a new source of Li production that would be ubiquitous among low-mass stars and associated with some event occurring between the tip of the RGB and the subsequent helium (He) burning phase. Possibilities for the underlying mechanism behind the new production channel have since been proposed (e.g., \citealt{flash,neutrinos}). Along similar lines, an independent analysis of field giants with Li data from LAMOST and stellar masses from {\it Kepler} and {\it K2} also argued for evidence of Li production during the helium flash \citep{zhang2021}.
In GALAH data, the Li abundances observed among core He-burning, or RC, stars are higher than those observed in shell hydrogen (H)-burning, upper RGB stars. However, as noted above, Li is subject to strong mass- and composition-dependent destruction mechanisms. Selection effects are therefore a serious concern, and, in order to use the relative RC and RGB Li abundances as evidence for production, it would be necessary to establish that the distribution of Li detections is representative of the underlying population, and that the RC stars are the evolutionary successors of the RGB stars.
In this work we demonstrate that such population effects are important for interpreting Li data, and that non-detections are a serious concern. We conclude that the case for universal Li production has not been established, and that further analysis, especially including the large cohort with upper limits, is required. The GALAH RC giants are part of a field population, and, unlike the giants in a stellar cluster, span a wide range of stellar mass and composition. This is crucial for the subject at hand because low mass stars experience much more severe MS depletion than high mass stars. The same happens on the RGB, where low-mass stars deplete Li much more efficiently than stars just slightly more massive. When comparing field populations, it is therefore of paramount importance to ensure that differences in the underlying mass distributions are accounted for. This complication is avoided if looking at stellar clusters, and indeed a recent analysis of cluster data from the Gaia-ESO survey cannot confirm any universal Li production event between the upper RGB and the RC \citep{magrini2021}.
Specifically, in a field population such as that of GALAH, red giants and clump giants do not come from the same MS progenitors. Clump giants are a younger population than red giants, because the lifetime of the RC is much less mass dependent than the lifetime of the RGB. And since stars of different mass experience different amounts of Li depletion during the MS (as well as during the post-MS), the Li content at the end of the MS was also not the same for the RGB and RC field populations observed today.
A second conclusion reported in \cite{nature}, which will not be further addressed in the present work, is that the widely used threshold of Li abundance, A(Li) $= \log(N_{\rm Li}/N_{\rm H}) + 12 = 1.5$ dex, above which giants have historically been classified as Li-rich, should be revised downward. In reality, as results of the present work will also serve to show, that threshold is dependent on stellar mass, as was first reported by \cite{claudia2016,trumpler2016}, based on a study of Li enrichment in RGB stars arising from the engulfment of substellar companions.
\begin{figure*}
\centering
\includegraphics[width=17.5cm]{fig1_cropped.pdf}
\caption{The distributions of stellar mass, age, and metallicity for field RGB and RC stars (blue and red, respectively) in the {\it Kepler} field as determined by APOKASC. As a population, field RC stars are somewhat younger and more massive than field RGB stars. See text for details and discussion.
}
\label{fig:APOKASC}
\end{figure*}
This paper is organized as follows. In Section \ref{sect:samples} we discuss the properties of the GALAH populations of red giants and clump giants, where we assume that they can be well represented by the corresponding populations in the {\it Kepler} field as characterized by the Second APOKASC Catalog \citep{apokasc}. This allows us to take advantage of the availability of seismic mass and age determinations for RGB and RC stars.
In Section \ref{sect:model} we describe and discuss the results of our model for simulating the Li content of the {\it Kepler}/GALAH clump giants, starting from adequate initial conditions, and in Section \ref{sect:conclusions} we summarize our results and conclusions.
\section{Clump giants in GALAH and {\it Kepler}} \label{sect:samples}
In order to best quantify the differences between the field populations of RGB and RC stars in general, it would be convenient to rely on stars with seismic characterization. Unfortunately, the {\it Kepler} field is not in the same hemisphere as the GALAH footprint, and the overlap between GALAH and {\it K2} is small. Our analysis (Section \ref{sect:model}) will therefore combine seismic masses for stars in the {\it Kepler} field with Li abundances obtained for stars towards another (large) region of the Galaxy, effectively ignoring known differences between the stellar mass distributions of the two samples, which most likely arise from Galactic structure (e.g., \citealt{miglio2013,sharma2016}). In particular, populations in the \textit{Kepler} field are dominated by the thin disk; but the field was chosen to avoid the disk mid-plane, and stars younger than 1 Gyr are under-represented. We briefly discuss some potential consequences in the conclusions (\S\,\ref{sect:conclusions}).
We take advantage of the Second APOKASC Catalog \citep{apokasc}, which provides reliably determined evolutionary states, surface gravities, mean densities, masses, radii, and ages for more than 6,000 evolved stars. Figure \ref{fig:APOKASC} shows the mass, age and metallicity distributions of RGB and RC stars in APOKASC2. From the full sample in \citet{apokasc} we removed stars with low quality seismic solutions or ambiguous evolutionary states. High age estimates (i.e., older than the age of the universe), and their corresponding low stellar mass estimates (i.e., lower mass than that capable of evolving off the MS in a Hubble time, which we approximate as $\sim 0.8\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi$), either reflect random errors from lower true ages/higher true masses, or arise from stars that experienced binary interactions. We have retained these so as to avoid biasing the mean properties that we are interested in identifying, so they appear in Figures \ref{fig:APOKASC} and \ref{fig:formula}, but have been removed from the final results in Figure \ref{fig:model}.
The RGB and RC mass distributions peak above 1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, at 1.21 and 1.32\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, respectively. Similar masses were reported by \cite{nature} for the GALAH clump giants. However, the RC distribution appears significantly skewed towards higher masses, while the RGB distribution is much less so. This suggests that the RC population must be composed, in general, of stars younger than an old RGB star of 1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, (the model chosen by \citealt{nature} for their comparison to the GALAH clump giants). Examining the middle panel of Figure \ref{fig:APOKASC}, one can see that this is indeed the case, with the age distribution of RC stars in APOKASC being $\sim 2-3$ Gyr younger than that of RGB stars.
Given the severe known mass sensitivity of Li depletion, even 0.1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, offsets can be a major concern. In addition, we expect very different Li depletion/destruction properties for high and low mass stars. The fraction above 1.5\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, is 14\% for the RGB and 38\% for the RC. Given the strongly peaked RGB mass distribution, even these estimates are likely to be conservative: the lifetime on the RGB for massive stars is much less than that for low mass stars, and, as the latter are more numerous, the underlying population is skewed. Therefore, many of the high mass estimates for RGB stars are likely to arise from random errors in ordinary low mass giants.
A potential additional concern is that today's RGB population and the RGB progenitors of today's RC population have different metallicity distributions as well, and extra mixing on the RGB is known to be metallicity-dependent \citep{shetrone2019}. However, the rightmost panel of Figure \ref{fig:APOKASC} shows that the distributions of the two populations are basically indistinguishable above [Fe/H] $ > -1$ (although we don't show it, the lowest-mass RGB stars are found at the lower metallicities, some of which are likely associated with the halo). Therefore, while metallicity-dependent mixing may impact Li depletion, it would operate similarly in both RGB and RC populations (i.e., in the RGB progenitors of the latter), and should not be expected to contribute to selection effects in observed Li abundances.
A key aspect in the \cite{nature} analysis is the selection of the sample of RC giants. These are selected directly from the Hertzsprung-Russell (HR) diagram, restricting the analysis to stars less massive than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi. Stellar masses are estimated from the relation
\begin{equation}
\log(M/\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi) = \log(L/L_{\odot}) + 4\log(T_{\rm eff}^{\odot}/T_{\rm eff}) + \log(g/g_{\odot}),
\label{eq:mass}
\end{equation}
\noindent where the luminosities are taken from Gaia DR2 \citep{dr2}, effective temperatures ($T_{\rm eff}$) and surface gravities are from GALAH DR2, and $T_{\rm eff}^{\odot} = 5,777$ K and $\log g_{\odot} = 4.44$ are the adopted solar parameters. Finally, using asteroseismic parameters taken from the literature for a fraction of their RC sample, \cite{nature} estimate a contamination from first ascent RGB stars of $\sim 10$\%.
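As a minimal numerical sketch of Equation \ref{eq:mass} (ours, for illustration only; the function name and the example input values are hypothetical, while the solar parameters are those quoted above):
\begin{verbatim}
import numpy as np

# Adopted solar parameters, as quoted in the text.
TEFF_SUN = 5777.0   # K
LOGG_SUN = 4.44     # dex

def mass_from_stellar_params(lum_lsun, teff_k, logg):
    """Stellar mass (Msun) from Equation (1):
    log M = log L + 4 log(Teff_sun/Teff) + log(g/g_sun)."""
    log_mass = (np.log10(lum_lsun)
                + 4.0 * np.log10(TEFF_SUN / teff_k)
                + (logg - LOGG_SUN))   # log(g/g_sun), both in dex
    return 10.0 ** log_mass

# Example (hypothetical values): a clump-like giant with
# L = 50 Lsun, Teff = 4800 K, log g = 2.5 gives M ~ 1.2 Msun.
print(mass_from_stellar_params(50.0, 4800.0, 2.5))
\end{verbatim}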
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{fig2_cropped.pdf}}
\caption{Comparison between stellar masses determined by stellar parameters (Equation \ref{eq:mass}) and masses determined from asteroseismic measurements, for the APOKASC samples of red and clump giants. It can be seen that, for the RC sample, selection of stars less massive than 2.0\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, following the mass determination via stellar parameters results in including a number of RC giants with seismic masses up to 2.5\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, and beyond.}
\label{fig:formula}
\end{figure}
In Figure \ref{fig:formula} we compare the stellar masses as computed by \cite{nature} from Equation \ref{eq:mass}, against asteroseismic determinations, again for the RGB and RC populations in the {\it Kepler} field. While there is a scatter of $\sim 0.3-0.4$\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, throughout, in both cases there is a reasonably good one-to-one correspondence between the two mass determinations, with possibly a small systematic offset toward slightly larger seismic masses. However, more relevant to our discussion is the fact that, as the lower panel of Figure \ref{fig:formula} shows, a selection of RC giants less massive than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, based on a stellar mass computed from the formula in Equation \ref{eq:mass} leads to the inclusion of a non-negligible fraction of stars that are actually more massive than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi. That is, when \cite{nature} attempt to cut their RC sample at 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, stars as massive as 2.5-3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, creep into the sample. As we show in the next section, for clump giants as massive as these, it is perfectly natural to expect Li abundances as observed in the GALAH data, which cannot be compared to what is expected for an old red giant of 1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi.
\section{Standard model prediction for the Li content of field clump giants} \label{sect:model}
In order to predict the Li abundances of a given population of RC stars we need three ingredients: (1) the stellar mass distribution of the progenitors of the RC stars, (2) initial conditions for the Li content on the MS as a function of stellar mass, and (3) a stellar evolution model. The latter is then used to forward model the Li initial conditions up to the RGB tip, following the stellar mass distribution assumed for the progenitors. Given that the main goal of this work is to determine whether new physics is needed regarding the behavior of Li in low-mass giants (i.e., whether there is any need or not for new, unknown Li production/destruction mechanisms), we assume that the Li abundance computed at the RGB tip is the same as that on the horizontal branch. Comparison to the observational data will then decide whether this assumption needs to be revised or not.
The only stellar interiors ingredient that determines the outcome of this exercise is the ability of a star of a given mass to deplete Li during its post-MS evolution, so we discuss our stellar models first.
For a given MS progenitor mass and an initial Li abundance, we use standard stellar models constructed with the Yale Rotating Evolutionary Code (YREC; \citealt{pinsono1989,demarque2008}) to compute the depletion of Li from the MS turnoff until the tip of the RGB. We stress that we use standard stellar models, i.e., with no extra- nor thermohaline mixing, and where the only mixing agent is convection.
We will assume, for simplicity, solar metallicity for all the stars in our models. This assumption is justified for two reasons. First, the distribution of metallicities of field RC stars in APOKASC peaks at solar metallicity, as shown in the right-most panel of Figure \ref{fig:APOKASC}. Possibly more relevant, the stars analyzed by \citet{nature} also have a metallicity distribution that peaks almost at solar metallicity ($-0.1$ dex). Second, the main point of our exercise is to quantify the impact of mass on the predicted stellar Li depletion. A full simulation would account for chemical evolution, and both the mass and metallicity dependence of Li depletion. This would require a complete simulation of the relevant field populations, a drastic increase in scope. However, as we will see below, the mass effects alone are dramatic enough to drastically impact the interpretation of Li data.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{fig3_cropped.pdf}}
\caption{Post main sequence depletion of lithium as a function of stellar mass, for standard evolutionary models (i.e., mixing only due to convection) with solar metallicity. Note the largely different depletion factors between stars of 1 and 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, of about 7 and 2 dex, respectively.}
\label{fig:depletion}
\end{figure}
In Figure \ref{fig:depletion} we show the amount of Li depleted by our models as a function of stellar mass. This will be a good approximation for both standard models and ones with rotationally induced mixing. Two effects in rotating models that could affect the picture of Li evolution shown in Figure \ref{fig:depletion} are: 1) increasing the surface Li by mixing up previously gravitationally-settled Li; and 2) destroying additional Li as more Li is mixed to hot enough temperatures for Li destruction. Regarding the first effect, data from star clusters indicate that, at least for the Li dip, there is no such dredge-up of a sub-surface reservoir \citep{balachandran1995}. Lithium on the MS therefore is largely destroyed, rather than simply being hidden below the surface by gravitational settling. To the extent that the true Li content for field MS stars is higher than their surface abundances would indicate, we will be under-estimating the true Li abundances in evolved stars, so our approach is conservative.
For our initial conditions on Li abundance on the MS, we go to the same source as \cite{nature} and use data from GALAH DR2 to set the distribution of abundances for stars leaving the MS. Note that Li measurements in GALAH DR2 are not flagged as either detections or upper limits, even though it is to be expected that a fraction of those measurements are formally upper limits. The Third Data Release (DR3) of GALAH \citep{galahDR3} flags upper limits, but we chose to stay with the unflagged DR2 measurements because using the more complete characterization of the Li data from DR3 would amount to an important difference with respect to the analysis of \cite{nature}, hence potentially diluting one of the main points of the present work, which is about the properties of the field population.
We need to restrict ourselves to the mass range of MS stars that are young enough to reach the RC and RGB, and old enough to be close to the end of their MS lifetime. We therefore restrict our sample to stars with $\log g > 4$, and errors in A(Li) smaller than 0.04 dex, which we deem as a secure limit for good measurements based on inspection of the distribution of the overall GALAH DR2 Li data. The run of Li initial conditions on the MS as a function of stellar mass is shown as gray dots in all the left-hand panels of Figure \ref{fig:model} (ABCD-1). These are the actual GALAH DR2 data for stars with $\log g > 4$ and small errors, as mentioned above. At any given mass, there is a distribution of possible Li abundances on the MS, which we will account for in our model predictions by directly taking real Li measurements from these mass-dependent distributions. Another useful sample for obtaining initial conditions for Li on the MS is that of \cite{Lidesert}, shown in blue in the right-hand panels of Figure \ref{fig:model} (ABCD-2), but we use this sample only as a background to show the run of initial conditions of our experiments below. Our luminosities are taken directly from Gaia DR2, which reports estimates of stellar parameters, extinction, reddening, and luminosities from the Apsis data processing pipeline \citep{apsis}.
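To make the selection concrete, the following sketch (ours) applies these cuts to a synthetic stand-in for the GALAH DR2 table; the column names are hypothetical, and the mass-binned A(Li) pools built at the end are those from which our simulations draw initial conditions:
\begin{verbatim}
import numpy as np
import pandas as pd

# Synthetic stand-in for the GALAH DR2 table; columns are hypothetical.
rng = np.random.default_rng(1)
n = 10_000
galah = pd.DataFrame({
    "logg":   rng.uniform(1.0, 5.0, n),
    "a_li":   rng.uniform(-1.0, 3.5, n),
    "e_a_li": rng.uniform(0.0, 0.2, n),
    "mass":   rng.uniform(0.8, 3.0, n),   # stellar mass in Msun
})

# Main-sequence stars with secure Li measurements (cuts from the text).
ms = galah[(galah["logg"] > 4.0) & (galah["e_a_li"] < 0.04)].copy()

# Mass-binned pools of A(Li), from which initial conditions are drawn.
ms["mass_bin"] = pd.cut(ms["mass"], np.arange(0.8, 3.1, 0.1))
ali_pools = ms.groupby("mass_bin", observed=True)["a_li"].apply(np.asarray)
\end{verbatim}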
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{fig4_new_08msun_cropped.pdf}}
\caption{Initial conditions (ABCD-1) and results (ABCD-2) of our simulations, for the four experiments discussed in this paper. Cases A and B illustrate the situation when a single stellar mass is adopted for the progenitors of field RC stars (green arrows aid in signaling the location of progenitors for these two cases), while cases C and D adopt more realistic distributions of initial conditions for the RC progenitors, without and with mass loss, respectively. {\bf LEFT}: Abundances of Li on the MS as a function of stellar mass, used as initial conditions for our simulations. In gray, GALAH Li measurements for stars with $\log g > 4$ and errors smaller than 0.04 dex. Data for stars in M67 ($\sim 4$ Gyr) and NGC 6819 ($\sim 2$ Gyr) are shown in black and magenta, respectively, and are used in order to avoid initial Li abundances that would be appropriate for very young stars or the region of the Li dip (see text). In green, the randomly selected initial conditions for each simulation, which correspond to real stars in GALAH DR2, as described in the text. {\bf RIGHT}: In red, the predicted Li abundances for field RC stars for the corresponding simulation. The vertical lines indicate the location of the bulk of the RC stars observed by GALAH. Green points are the same as in the left panels, and blue points are the Li data from \cite{Lidesert} for MS stars, used here just as a background upon which to see how the (green) initial conditions change between our experiments.}
\label{fig:model}
\end{figure}
Note that stars from GALAH, which we use to obtain initial conditions for our simulations, are not all located at the MS turnoff, and include stars in earlier MS stages. The use of this sample directly would not account for possible MS depletion and would overestimate the Li in the simulated stars. We account for this by using additional cluster data to estimate the maximum Li expected for stars of different masses after they experience depletion. As a reference we consider data for M67 \citep{m67} and NGC 6819 \citep{ngc6819}, with ages of $\sim 4$ and $\sim 2$ Gyr, respectively, included in Figure \ref{fig:model}. We treat these as Li upper envelopes to our turnoff distribution, i.e., we reject any draw in our experiments above these values, lowering the Li abundance for masses in and below the Li dip, but preserving the Li of higher mass stars that do not experience significant Li depletion. Notice also that the descendants of these higher mass stars are more common in the clump (Figure \ref{fig:APOKASC}), which would explain why they have high Li relative to what we expect from lower-mass stars.
Then, our recipe for the initial Li distribution on the RGB uses the GALAH dataset with additional conditions from these clusters to account for MS depletion (see the sketch after this list):
\begin{itemize}
\item if $\mathrm{M/M_\odot}<1.3$: reject draws with A(Li)\,$>2.5$;
\item if $1.3<\mathrm{M/M_\odot}<1.4$: reject draws with A(Li)\,$>2.0$;
\item if $\mathrm{M/M_\odot}>1.4$: allow all draws.
\end{itemize}
In practice, our experiments show that there are enough progenitors of field RC stars with M $> 2$\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, that the above recipe has little effect on the final result.
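In code, this rejection step could look as follows (a minimal sketch, ours; the pool stands in for the mass-binned GALAH A(Li) values and is assumed to contain entries at or below the relevant cap):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def draw_initial_li(mass, ali_pool):
    """Draw a turnoff A(Li) for a progenitor of the given mass (Msun),
    rejecting draws above the cluster-based upper envelope."""
    if mass < 1.3:
        cap = 2.5
    elif mass < 1.4:
        cap = 2.0
    else:
        cap = np.inf   # no rejection above the Li dip
    while True:        # assumes ali_pool has entries at or below cap
        ali = rng.choice(ali_pool)
        if ali <= cap:
            return ali

# Example with a toy pool of abundances for a 1.2 Msun progenitor.
print(draw_initial_li(1.2, np.array([1.0, 2.0, 2.7, 3.1])))
\end{verbatim}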
We perform four experiments, using four different sets of assumptions. Our whole exercise is to simply take a given run of initial conditions of Li content as a function of stellar mass, and predict the distribution of Li among RC stars using evolutionary models. The four cases are illustrated on four corresponding pairs of panels in Figure \ref{fig:model}, labeled A through D. The left-hand panels (A1 through D1) show (in green) the selection of initial conditions of each experiment, i.e., the distribution of progenitor stellar masses and their starting Li abundances at the MS turnoff. The right-hand panels (A2 through D2) show the result of each experiment, i.e., the initial (green, blue) and final (red) distributions of Li abundances, at the luminosity of the MS turnoff and the RC, respectively. The two vertical lines indicate the range of Li abundances of the bulk of the RC stars as measured by GALAH. That is, one can consider that predictions are successful insofar as enough red points fall between these lines.
First, if we assume that the RC population can be well represented by progenitor stars of 1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\,(A1), the resulting Li abundances would all be very low, very far from the observed levels, as shown in panel A2 in Figure \ref{fig:model}. This is what led \cite{nature} to announce the discovery of Li production common to all low-mass stars. In order to illustrate how critical the adopted mass of the progenitors is for this problem, in panels B1-B2 of Figure \ref{fig:model} we run our simulation assuming that the RC progenitors are all of 1.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, instead, i.e., just 0.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, more massive than adopted by \cite{nature}. As can be seen, the impact is very large, with the bulk of red points ending at higher final Li abundances than in the previous case, and a small fraction of predicted RC stars now reaching the observed levels of Li in the GALAH sample. Once again, this is naturally expected, because an RGB star of 1.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, depletes Li much less efficiently than one of 1.0\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, as illustrated in Figure \ref{fig:depletion}. Conversely, although we don't show it as an example, adopting even slightly higher masses would put progenitor stars right in the Li dip (the run of magenta points from NGC 6819), and our simulation would predict only upper limits at the RC. The pattern would reverse above the Li dip, with higher mass turnoff stars having higher initial Li.
Next, we perform the exercise accounting for more realistic initial conditions. We assume the stellar mass distribution of field clump giants observed by GALAH to be well represented by that of the {\it Kepler} sample (see \S\,\ref{sect:samples}). Then, for each RC star in APOKASC, we follow the same procedure as \cite{nature} and compute the stellar mass from the stellar parameters according to Equation \ref{eq:mass}. If the resulting mass is larger than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, we do not include the star and move to the next one in the list. If the resulting mass is smaller than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, we keep the star but from then on rely on its seismic mass determination from APOKASC. As shown in the bottom panel of Figure \ref{fig:formula}, this procedure permits RC stars more massive than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, to make it into the final sample.
Note that, because of mass loss during the upper RGB, the mass of the RC stars as selected above must be smaller than the mass of their MS progenitors. In order to account for the effect of mass loss on our model predictions, we run our procedure for two cases, meant to illustrate limiting possibilities. First, a case with no mass loss at all, so that the mass of the RC star as selected above is the same as the mass of its progenitor at the MS turnoff. And second, a somewhat extreme case in which all stars lose 0.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, in the upper RGB, so that the mass of the MS progenitor at the turnoff is 0.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, larger than that of the RC star as determined earlier. Mass loss shifts the MS mass distribution of the RC stars to even higher mass, as the current mass is an underestimate of the mass at earlier stages. Panels C1-C2 in Figure \ref{fig:model} illustrate our results for the case with no mass loss, while panels D1-D2 correspond to the case where all stars lose the same amount of mass as just described.
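The bookkeeping for these two limiting cases can be sketched as follows (ours; the ``formula'' and seismic masses are as described above, and the function returns the adopted MS turnoff mass or drops the star):
\begin{verbatim}
def progenitor_mass(formula_mass, seismic_mass, rgb_mass_loss=0.0):
    """MS progenitor mass (Msun) for an APOKASC RC star: keep the star
    only if its Equation (1) mass is below 2 Msun, adopt its seismic
    mass, and add back any assumed RGB mass loss."""
    if formula_mass >= 2.0:
        return None   # star rejected from the RC sample
    return seismic_mass + rgb_mass_loss

# Case without mass loss vs. all stars losing 0.3 Msun on the upper RGB.
# Note how a star with formula mass 1.8 but seismic mass 2.3 survives
# the nominal 2 Msun cut, as discussed for the formula-vs-seismic
# mass comparison above.
print(progenitor_mass(1.8, 2.3, rgb_mass_loss=0.0))   # -> 2.3
print(progenitor_mass(1.8, 2.3, rgb_mass_loss=0.3))   # -> 2.6
\end{verbatim}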
Next, for each RC star in APOKASC (with a mass smaller than 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi\, according to Equation \ref{eq:mass}), we assign a randomly drawn initial Li abundance that follows the observed distribution of A(Li) as described above for MS stars at the corresponding MS progenitor mass. See panels C1 and D1 of Figure \ref{fig:model}, for the initial conditions for the cases without and with mass loss, respectively. It can be seen that the mass distribution of the MS progenitors for the case including mass loss is shifted to higher values exactly by 0.3\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, as designed.
The results of forward modeling these initial conditions are depicted in panels C2 and D2 of Figure \ref{fig:model}, for the cases without and with mass loss, respectively. These should be compared with the similar diagram in \cite{nature} (their Fig.~3). It can be seen that our standard model predictions, accounting for the range of stellar masses of the progenitors of clump giants, cover the region occupied by the data quite well, regardless of the assumption on mass loss. Moreover, in Figure \ref{fig:mass_code} we show an enlarged version of panel D2 of Figure \ref{fig:model} that color codes the simulated RC according to progenitor mass, thereby showing the stellar mass distribution of the field RC as predicted by our simulation. Figure \ref{fig:mass_code} shows clearly that $1\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi$ progenitors produce RC giants with low levels of Li, A(Li) $< -1$, while the RC region as observed by GALAH, with A(Li) $\sim$ 0.5 - 1.0 dex, is expected to be populated by somewhat higher mass progenitors, with M $\sim 1.5\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi$ and above, as we see in the \textit{Kepler} RC (Figure \ref{fig:APOKASC}, left panel). This prediction is testable with asteroseismic data.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{fig5_D2_panel_colorcoded.pdf}}
\caption{Same as panel D2 of Figure \ref{fig:model}, with the simulated RC color coded according to the mass of the progenitor stars. One solar mass progenitors populate the very low Li region of the RC, with A(Li) $< -1$, while the region of the RC populated by GALAH DR2 detections (region between the vertical dashed lines) is explained by higher mass progenitors, with M$\,\,\sim 1.5 \ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi$ and above.}
\label{fig:mass_code}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{fig6_cropped.pdf}}
\caption{Lithium measurements with {\it K2} asteroseismic data for stars classified as red clump (top panel) and red giant (bottom panel). Red clump and red giant branch detections are the blue (top) and red (bottom) symbols, while upper limits are denoted by triangles. Masses are taken from the \citet{zinn2021} asteroseismic factors with \citet{galahDR3} GALAH temperatures; Li measurements and limits are taken from \citet{galahDR3}. Evolutionary states are from \citet{zinn2021}.}
\label{fig:K2_galahdr3}
\end{figure}
The simulations in C2 and D2, besides naturally explaining the observed bulk distribution of Li in the RC, also predict an extended tail of RC stars with very low Li abundances, which is not seen in the GALAH data analyzed by \cite{nature}. This tail is less extended on the low-Li end when accounting for mass loss (i.e., the more realistic of the four models), which indicates that it reflects the initial conditions adopted in our models, combined with the fact that the less massive progenitors deplete Li more efficiently than the more massive ones.
A large fraction of evolved GALAH stars do not have reported Li measurements, and it is natural to associate them with this predicted population. \cite{nature} contest this on the grounds that the Li equivalent widths for the non-detections are in the detectable regime, and therefore that the non-detections are not upper limits; instead, they claim that the values are unreported for other reasons.
We find this explanation unconvincing for several reasons. First, the Cannon methodology, used in the initial GALAH data release, is not designed to measure upper limits. However, it is clear from an examination of the observed pattern that a large number of the measurements must be Li non-detections. Moreover, the Li detection limits, and measurements, are strong functions of effective temperature, with reported values all the way down to the detection limit at all values of T$_{\rm eff}$. It is very unlikely that the true abundance distribution cuts off at exactly the same place where the lines become weak. The Li equivalent width can include other nearby weak lines, and a non-zero value does not require a detectable Li abundance. Finally, open cluster samples of evolved giants have numerous true upper limits (see \citealt{magrini2021} for a recent analysis based on data from the Gaia-ESO survey), and such stars must also exist in a field sample.
We confirm these arguments with new GALAH data. The latest GALAH Data Release (DR3, \citealt{galahDR3}) uses the SME pipeline to infer Li abundances, and it explicitly includes upper limits. We cross-matched the GALAH DR3 catalog with the \citet{zinn2021} K2 asteroseismic data. We used the combination of the published GALAH effective temperatures and the seismic parameters from \citet{zinn2021} to infer masses, and we adopted the evolutionary states from that paper as well.
In Figure \ref{fig:K2_galahdr3}, we show the results for RC and RGB stars. More than 80\% of the measurements are upper limits in both the RC and RGB (top and bottom panels, respectively). The Li abundances in the two groups are comparable, with the exception of a larger population of truly Li-rich stars in a minority of clump giants. Most of the claimed detections are close to the detection limit and are marginal. We caution that the distribution of masses is not the same as that in {\it Kepler}, so our simulation is not directly comparable to these data. However, the large numbers of upper limits, along with the mean Li level in the clump, fit comfortably in the domain predicted by our simulations.
\section{Conclusions} \label{sect:conclusions}
A key lesson from our simulation is that Li depletion is a strong function of mass on the RGB, so a proper understanding of Li in evolved low-mass stars requires an understanding of the distribution of masses in the sample. Furthermore, a large number of very low Li abundances are expected for evolved stars, and a proper treatment of non-detections is therefore crucial.
We have modeled the expected distribution of Li abundances on the MS for the progenitors of present-day field clump giants. Crucially, such progenitors have significantly different properties from today’s field red giants with respect to the evolution of Li. Our procedure then uses standard stellar models to simulate the distribution of Li abundances and the luminosities of such progenitors when arriving at the horizontal branch, to be compared with the GALAH Li data of field stars at that evolutionary state. A full analysis should also attempt to match the metallicity distribution of both the RC and RGB, and to distinguish between high- and low-$\alpha$ populations, but this is beyond the scope of this paper. We adopted a stellar population model from the {\it Kepler} field, and different stellar populations in the GALAH footprint would impact our results. However, the central conclusions should be robust: the relative contributions of stars to the clump and giant branch depend sensitively on mass and composition, and we expect clump stars with detected Li to preferentially be produced in younger stellar populations relative to primarily old first-ascent giants.
If there are no unknown mechanisms of Li production operating between the MS turnoff and the subsequent core helium burning phase, our simulated Li abundances should be comparable to those for low-mass field clump giants. Our predicted distribution of Li abundances indeed matches the Li abundances of the bulk population of field RC stars as measured by GALAH. Therefore, there is no evidence among clump giants of an unknown Li production mechanism occurring between the upper RGB and the horizontal branch for low-mass stars.
Furthermore, our model makes testable predictions: that the Li detections in the RC should be preferentially higher mass stars; that there are a large number of upper limits in the RC; and that the upper limits should be preferentially low mass stars. The data from \citet{carlberg2016} are very encouraging in this regard; they report Li detections for RC stars with masses from 1.6 to 2\ifmmode {\,{\rm M}_{\odot}}\else${\,{\rm M}_{\odot}}$\fi, in the range expected for our models. A confirmed detection of Li in a large number of low-mass RC stars (i.e., beyond the classical Li-rich ones) would indeed be evidence for Li production in at least some cases. The limited overlap between our {\it K2} asteroseismic sample and GALAH DR3 consisted largely of upper limits or detections at a low level, along with a small fraction of true Li-rich stars. This is consistent with our model; a larger sample with more high-mass stars would be needed to see whether modest detections (A(Li) $\sim 1$) are associated with higher mass stars.
Finally, we stress that a crucial difference between the predictions of our simulations and the observed Li abundances of field RC stars is the presence in the data of stars with very high Li levels (up to A(Li) $\sim 3.5$ dex) that standard models cannot reproduce. These are the well known Li-rich giants, which certainly remain an open problem, and are not within the scope of this paper. However, the question of the correct threshold for classifying a giant as Li-normal or Li-rich urgently needs to be revisited and adopted by the community. Specifically, classification schemes for Li-rich giants should not be based on a fixed threshold, but on a variable one that depends on stellar mass \citep{claudia2016,trumpler2016}. Otherwise, we will continue to mislead efforts to study this truly challenging and longstanding problem in stellar evolution.
\section*{acknowledgments}
We are grateful to our referee, Sarah Martell, for an insightful and useful report that clarified our messages and improved this paper throughout. We thank Con Deliyannis for providing us his data on lithium as a function of stellar mass for stars in NGC 6819, which we use to set the initial conditions of our simulations. J.C. acknowledges support from the Agencia Nacional de Investigaci\'on y Desarrollo (ANID) via Proyecto Fondecyt Regular 1191366; and from ANID BASAL projects CATA-Puente ACE210002 and CATA2-FB210003. C.A.G. acknowledges financial support from ANID FONDECYT Postdoctoral Fellowship 2018 Project 3180668. JCZ is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001869.
\bibliographystyle{aasjournal}
\section{Introduction}
Deep neural networks (DNNs) provide a powerful framework for approximating complex mappings, possessing universal approximation properties~\cite{Cybenko1989} and flexible architectures composed of simple functions parameterized by weights. Numerous studies have shown that excellent performance can be obtained using state-of-the-art DNNs in many applications, including image processing, speech recognition, surrogate modeling, and dimensionality reduction \cite{Goodfellow:2016wc,RuthottoHaber2019,Tripathy:2018kk}. However, getting such results \textit{in practice} may be a computationally expensive and cumbersome task. The process of training DNNs, or finding the optimal weights, is rife with challenges, e.g., the optimization problem is non-convex, expressive networks require a very large number of weights, and, perhaps most critically, appropriate regularization is needed to ensure the trained network generalizes well to unseen data. Due to these challenges, it can be difficult to train a network efficiently and to sufficient accuracy, especially for large, high-dimensional datasets and complex mappings and in the absence of experience on similar learning tasks. This is a particular challenge in scientific applications that often involve unique training data sets, which limits the use of standard architectures and established hyperparameters.
While the literature on effective solvers for training DNNs is vast (see, e.g., the recent survey~\cite{bottou2016optimization}), the most popular approaches are stochastic approximation (SA) methods. SA methods are computationally appealing since only a small, randomly-chosen sample (i.e., mini-batch) from the training data is needed at each iteration to update the DNN parameters. Also, SA methods tend to exhibit good generalization properties. The most extensively studied and utilized SA method is the stochastic gradient descent (SGD) method \cite{RobbinsMonro1951} and its many popular variants such as AdaGrad \cite{duchi2011adaptive} and ADAM \cite{kingma2014adam}. Despite the popularity of SGD variants, major disadvantages include slow convergence and, most notoriously, the need to select a suitable learning rate (step size). Stochastic Newton and stochastic quasi-Newton methods have been proposed to accelerate convergence of SA methods \cite{bottou2004large,gower2017randomized,Byrd2016,wang2017stochastic, chung2017stochastic}, but including curvature information in SA methods is not trivial. In contrast to deterministic methods, which are known to benefit from the use of second-order information (consider, e.g., the natural step size of one and local quadratic convergence of Newton's method), stochastic methods may be harmed by noisy curvature estimates, which can degrade the robustness of the iterations \cite{chung2017stochastic}. Furthermore, SA methods cannot achieve a convergence rate that is faster than sublinear \cite{agarwal2012information}, and additional care must be taken to handle nonlinear, nonconvex problems arising in DNN training. The performance and convergence properties of SA methods depend heavily on the properties of the objective function and on the choice of the learning rate.
In this paper, we seek to simplify the training of DNNs by exploiting the \emph{separability} inherent in most common DNN architectures. We assume that the network, $G$, is parameterized by two blocks of weights, $\bfW$ and $\bftheta$, and is of the form
\begin{equation}\label{eq:separableDNN}
G(\cdot, \bfW, \bftheta) = \bfW F(\cdot, \bftheta),
\end{equation}
where $F$, also referred to as a feature extractor, is a parameterized, nonlinear function. The important observation here is that the DNN is nonlinear in $\bftheta$ and, crucially, is \emph{linear in $\bfW$}. Any DNN whose last layer does not contain a nonlinear activation function can be written in this form, so our definition includes many state-of-the-art DNNs; see, e.g.,~\cite{he2016deep, Raissi:2019hv, Lecun:1990mnist, Krizhevsky:2012alexnet, ronneberger2015unet} and following works like \cite{Sjoberg1997,Tripathy:2018kk,newman2020train}. In a supervised learning framework, the goal is to find a set of network weights, $(\bfW, \bftheta)$, such that $\bfW F(\bfy,\bftheta) \approx \bfc$ for all input-target pairs $(\bfy,\bfc)$ in a data space. Training the network means learning the network weights by minimizing an expected loss or discrepancy of the DNN approximation over all input-target pairs $(\bfy, \bfc)$ in a training set, while generalizing well to unobserved input-target pairs.
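As a minimal sketch of the form~\eqref{eq:separableDNN} (our notation; this is not a specific implementation from the cited works), a separable network can be built in PyTorch by composing any nonlinear feature extractor with a final linear layer that carries no activation:

\begin{verbatim}
import torch
import torch.nn as nn

# Minimal sketch of a separable DNN G(y, W, theta) = W F(y, theta):
# a nonlinear feature extractor F (weights theta) followed by a final
# linear layer W with no activation (and, for simplicity, no bias).
class SeparableNet(nn.Module):
    def __init__(self, in_dim, feat_dim, out_dim):
        super().__init__()
        self.F = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.Tanh(),
            nn.Linear(feat_dim, feat_dim), nn.Tanh(),
        )
        self.W = nn.Linear(feat_dim, out_dim, bias=False)

    def forward(self, y):
        return self.W(self.F(y))

net = SeparableNet(in_dim=2, feat_dim=8, out_dim=1)
out = net(torch.randn(5, 2))  # batch of five 2-d inputs
\end{verbatim}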
\paragraph{\textit{\textbf{Main contributions}}}
In this paper, we describe {\tt slimTrain}, a \textit{sampled limited-memory training} method that exploits the separability of the DNN architecture to leverage recently-developed sampled Tikhonov methods for automatic regularization parameter tuning \cite{newman2020train, chung2020sampled}. For the linear weights in a regression framework, we obtain a stochastic linear least-squares problem, and we use recent work on sampled limited-memory methods to approximate the global curvature of the underlying least-squares problem. Such methods can be viewed as row-action or SA methods and can speed up initial convergence and improve the accuracy of iterates \cite{chung2020sampled}. As discussed above, applying a second-order SA method to the entire problem is not trivial, and obtaining curvature information for the nonlinear weights is computationally expensive, particularly for deep networks. As our approach only incorporates curvature in the final layer of the network, where we have a linear structure, its computational overhead is minimal. In doing so, we can not only improve initial convergence of DNN training, but can also select the regularization parameter automatically by exploiting connections between the learning rate of the linear weights and the regularization parameter for Tikhonov regularization \cite{chung2019iterative}. Thus, {\tt slimTrain} is an efficient, practical method for training separable DNNs that is memory-efficient (i.e., working only on mini-batches), exhibits faster initial convergence compared to standard SA approaches (e.g., ADAM), produces networks that generalize well, and incorporates automatic hyperparameter selection.
The paper is organized as follows. In \cref{sec:separable}, we describe separable DNN architectures and review various approaches to train such networks, with special emphasis on variable projection. Notably, we provide new theoretical analysis to support a VarPro stochastic approximation method. In \cref{sec:sa}, we introduce our new {\tt slimTrain} approach that incorporates sampled limited-memory Tikhonov ({\tt slimTik}) methods within the nonlinear learning problem. Here, we describe cross-validation-based techniques to automatically and adaptively select the regularization parameter. Numerical results are provided in \cref{sec:results}, and conclusions follow in \cref{sec:conclusions}.
\section{Exploiting separability with variable projection}
\label{sec:separable}
Given the space of input features $\Ycal \subseteq \Rbb^{\nFeatIn}$ and the space of target features $\Ccal \subseteq \Rbb^{\nTargets}$, let $\Dcal \subseteq \Ycal \times \Ccal$ be the data space containing input-target pairs $(\bfy, \bfc) \in \Dcal$. We focus on separable DNN architectures that consist of two separate phases: a nonlinear feature extractor $F: \Ycal \times \Rbb^{\nWeights} \to \Rbb^{\nFeatOut}$ parametrized by $\bftheta\in \Rbb^{\nWeights}$ followed by a linear model $\bfW \in \Rbb^{\nTargets\times \nFeatOut}$. In general, the goal is to learn the network weights, $(\bfW,\bftheta)$, by solving the stochastic optimization problem
\begin{align}\label{eq:objFull}
\min_{\bfW, \bftheta} \ \Ebb \ L(\bfW F(\bfy, \bftheta), \bfc) + R(\bftheta) + S(\bfW),
\end{align}
where $L:\Rbb^{\nTargets} \times \Ccal \to \Rbb$ is a loss function, and $R:\Rbb^{\nWeights} \to \Rbb$ and $S:\Rbb^{\nTargets \times \nFeatOut} \to \Rbb$ are regularizers. Here, $\Ebb$ denotes the expected value over a distribution of input-target pairs in $\Dcal$.
Choosing an appropriate loss function $L$ is task-dependent. For example, a least-squares loss function promotes data-fitting and is well-suited for function approximation tasks whereas a cross-entropy loss function is preferred for classification tasks where the network outputs are interpreted as a discrete probability distribution~\cite{hui2020evaluation}. In this work, we focus on exploiting separability to improve DNN training for function approximation or data fitting tasks such as PDE surrogate modeling~\cite{Tripathy:2018kk,Zhu:2018ik} and dimensionality reduction such as autoencoders~\cite{Goodfellow:2016wc}. Hence, we restrict our focus to a stochastic least-squares loss function with Tikhonov regularization
\begin{align}
\label{eq:objFctnLSFull}
\min_{\bfW,\bftheta} \ \Phi(\bfW,\bftheta) &\equiv
\Ebb \ \thf \left\|\bfW F(\bfy, \bftheta) - \bfc\right\|_2^2 + \tfrac{\alpha}{2}\|\bfL\bftheta\|_2^2+ \tfrac{\lambda}{2}\|\bfW\|_{\rm F}^2,
\end{align}
where $\Phi: \Rbb^{\nTargets \times \nFeatOut} \times \Rbb^{\nWeights} \to \Rbb$ is the objective function, $\bfL$ is a user-defined operator, $\|\cdot \|_{\rm F}$ is the Frobenius norm, and $\alpha, \lambda \geq 0$ are the regularization parameters for $\bftheta$ and $\bfW$, respectively.
\subsection{SA methods that exploit separability}
A standard, and the current state-of-the-art, approach to solve~\eqref{eq:objFctnLSFull} is stochastic optimization over both sets of weights $(\bfW,\bftheta)$ simultaneously (i.e., joint estimation). While generic and straightforward, this fully-coupled approach can suffer from slow convergence (e.g., due to ill-conditioning) and does not attain potential benefits that can be achieved by treating the separate blocks of weights differently (e.g., exploiting the structure of the arising subproblems). We seek computational methods for training DNNs that exploit separability, i.e., we treat the two parameter sets $\bftheta$ and $\bfW$ differently and exploit linearity in $\bfW$. Three general approaches to numerically tackle the optimization problem~\cref{eq:objFctnLSFull} while taking advantage of the separability are as follows.
\paragraph{\textit{\textbf{Alternating directions}}}
One approach that exploits separability of the variables $\bftheta$ and $\bfW$ is alternating optimization~\cite{bezdek2002some}. For~\cref{eq:objFctnLSFull}, this corresponds to alternating between two stochastic optimization problems. Note that, for simplicity of presentation, we assume that each of the following optimization problems has a unique minimizer. Suppose we initialize $\bftheta_0$. Then, at the $k$-th iteration, we solve
\begin{equation}
\label{eq:optW}
\bfW_k = \argmin_{\bfW} \ \Phi(\bfW, \bftheta_{k-1})
\end{equation}
and
\begin{equation}
\label{eq:opttheta}
\bftheta_k = \argmin_{\bftheta} \
\ \Phi(\bfW_{k}, \bftheta).
\end{equation}
Notice that convergence of this approach can be slow when the variables are tightly coupled~\cite{Beck2013:bcgd, wright2015coordinate}. Furthermore, this approach is not practical in our setting, since the minimization problems~\cref{eq:optW} and \cref{eq:opttheta} are computationally expensive, particularly the non-convex, high-dimensional, often non-smooth optimization problem for $\bftheta$.
\paragraph{\textit{\textbf{Block coordinate descent}}}
A practical alternative to alternating directions is block coordinate descent. The general idea of a block coordinate descent approach for \cref{eq:objFctnLSFull} is to approximate the alternating optimization of \cref{eq:optW} and \cref{eq:opttheta} via iterative update schemes (e.g., one iteration of an iterative optimization step) for each set of variables~\cite{wright2015coordinate}. Note that under certain assumptions, a block coordinate descent method applied to two sets of parameters has been shown to converge~\cite{nesterov2012efficiency, richtarik2014iteration}. Although a block coordinate descent approach provides a computationally appealing alternative to the fully coupled and alternating directions approaches, this approach, like alternating directions, suffers from slow convergence when the blocks are tightly coupled.
\paragraph{\textit{\textbf{Variable projection (VarPro)}}}
A compromise between alternating directions and block coordinate descent is to solve~\cref{eq:optW} with respect to $\bfW$ while performing an iterative update method for \cref{eq:opttheta} with respect to $\bftheta$. This can be seen as a stochastic approximation version of a variable projection approach~\cite{GolubPereyra2003}. Formally, we can write the iteration in terms of the \textit{reduced} stochastic optimization problem
\begin{align}
\label{eqn:objFctnReducedphi}
\min_{\bftheta} \ \Phi^{\rm red}(\bftheta) &\equiv \Phi(\widehat{\bfW}(\bftheta),\bftheta)
\end{align}
where
\begin{equation}
\label{eq:objFctnLinphi}
\widehat\bfW(\bftheta) = \argmin_{\bfW} \Ebb \ \thf \left\|\bfW F(\bfy,\bftheta) - \bfc\right\|_2^2 + \tfrac{\lambda}{2}\|\bfW\|_{\rm F}^2.
\end{equation}
Notice that~\eqref{eq:objFctnLinphi} is a \textit{stochastic} Tikhonov-regularized \textit{linear} least-squares problem and, under mild assumptions, there exists a closed form solution, i.e.,
\begin{equation}\label{eq:What}
\widehat \bfW(\bftheta) = \bbE \bfc F(\bfy,\bftheta)\t \left(\bfSigma_{\bfy}(\bftheta) + \bfmu_\bfy(\bftheta)\bfmu_\bfy(\bftheta)\t + \lambda\bfI\right)^{-1}.
\end{equation}
Here, $\bfmu_\bfy(\bftheta) = \bbE F(\bfy,\bftheta)$ and $\bfSigma_\bfy(\bftheta) = \bbE (F(\bfy,\bftheta) -\bfmu_\bfy)(F(\bfy,\bftheta) -\bfmu_\bfy)\t$. Details of the derivation can be found in \cref{sec:stochlinearTik}.
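For intuition, the closed form~\eqref{eq:What} can be approximated from a finite sample by replacing the expectations with sample averages; the practical caveats of this approximation are discussed in \cref{sub:challengesVarPro}. A minimal NumPy sketch (illustrative only, with randomly generated stand-in data):

\begin{verbatim}
import numpy as np

# Sketch: sample estimate of (eq:What). The columns of Z are features
# F(y_i, theta); the columns of C are the targets c_i. Note that
# E[F F^T] = Sigma_y + mu_y mu_y^T, so one Gram matrix suffices.
def what_sample(Z, C, lam):
    n = Z.shape[1]
    ECF = C @ Z.T / n                  # sample estimate of E[c F^T]
    EFF = Z @ Z.T / n                  # sample estimate of E[F F^T]
    return ECF @ np.linalg.inv(EFF + lam * np.eye(Z.shape[0]))

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 200))      # 8 output features, 200 samples
C = rng.standard_normal((1, 200))
W_hat = what_sample(Z, C, lam=1e-3)    # shape (1, 8)
\end{verbatim}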
\subsection{Theoretical justification for VarPro in SA methods}
\label{sub:updatetheta}
After solving for $\widehat{\bfW}(\bftheta)$ in~\eqref{eq:objFctnLinphi}, VarPro uses an iterative scheme, typically an SGD variant, to update $\bftheta$. The key is to ensure that the mini-batch gradients used to update $\bftheta$ are unbiased. To the best of our knowledge, we provide the first theoretical analysis demonstrating that VarPro in an SA setting produces an unbiased estimate of the gradient. We note that the derivation, presented for stochastic Tikhonov-regularized least-squares problems, can be extended to any objective function which is convex with respect to the linear weights, such as when using a cross-entropy loss function.
In the context of the DNN training problem, let $\Tcal\subseteq \Dcal$ be a finite training set. At the $k$-th training iteration, we select a mini-batch of the training set, $\Tcal_k\subset \Tcal$. For $\Tcal_k$, we seek to minimize the function
\begin{equation}\label{eq:batchObjFctn}
\Phi_k(\bfW, \bftheta) = \frac{1}{|\Tcal_k|}\sum_{(\bfy,\bfc)\in \Tcal_k}\tfrac{1}{2}\norm[2]{\bfW F(\bfy,\bftheta) - \bfc}^2 + \tfrac{\alpha}{2}\norm[2]{\bfL\bftheta}^2 + \tfrac{\lambda}{2}\norm[\rm F]{\bfW}^2.
\end{equation}
A VarPro SA method applied to \cref{eqn:objFctnReducedphi} considers the reduced functional at the $k$-th iteration,
\begin{align}
\Phi_k^{\rm red}(\bftheta) & = \Phi_k(\widehat\bfW(\bftheta), \bftheta)
\end{align}
where $\widehat{\bfW}(\bftheta)$ is obtained from~\eqref{eq:objFctnLinphi}, i.e., the solution to the stochastic Tikhonov-regularized linear least-squares problem \emph{over the entire data space}.
To update the nonlinear weights, we select a ``descent'' direction $\bfp_k$ with respect to $\bftheta$ and compute the next iterate,
\begin{equation}
\bftheta_k = \bftheta_{k-1} + \gamma \bfp_k(\bftheta_{k-1};\widehat\bfW(\bftheta_{k-1})).
\end{equation}
Here, $\gamma$ denotes an appropriate learning rate and $\bfp_k$ is a direction that is computed based on the current estimate of $\bftheta_{k-1}$ with respect to the current batch $\calT_k$. The selection of $\bfp_k$ depends on the chosen stochastic optimization method and requires knowing information about the derivative of~\eqref{eq:batchObjFctn}. Explicitly, we compute the derivative of $\Phi^{\rm red}_k$ with respect to $\bftheta$ by
\begin{equation}
\begin{split}
{\rm D}_\bftheta \Phi_k^{\rm red}(\bftheta) &= {\rm D}_\bftheta \Phi_k (\widehat\bfW(\bftheta),\bftheta) \\
& = \left[{\rm D}_{\bfW} \Phi_k(\bfW,\bftheta)\right]_{\bfW = \widehat\bfW(\bftheta)} \cdot {\rm D}_{\bftheta}\widehat\bfW(\bftheta) + \left[{\rm D}_{\widetilde \bftheta} \Phi_k(\widehat\bfW(\bftheta),\widetilde \bftheta)\right]_{\widetilde\bftheta = \bftheta}. \label{eq:Varpro_grad}
\end{split}
\end{equation}
Note that, contrary to VarPro derivations in deterministic settings \cite{GolubPereyra2003, chung2010efficient, newman2020train}, the first term in \cref{eq:Varpro_grad} does not vanish. This is because $\widehat\bfW(\bftheta)$ satisfies the optimality conditions for $\Phi$, the objective function for the expected value minimization problem \cref{eq:objFctnLinphi}, but may not be optimal for $\Phi_k$, the objective function for the current batch. However, we observe that the term vanishes in expectation over all samples, that is,
\begin{equation}\label{eq:unbiasedVarPro}
\begin{split}
\bbE \,\left( \left[{\rm D}_{\bfW} \Phi_k(\bfW,\bftheta)\right]_{\bfW = \widehat\bfW(\bftheta)} \cdot {\rm D}_{\bftheta}\widehat\bfW(\bftheta) \right)
& = \left[{\rm D}_{\bfW} \bbE \,\Phi_k(\bfW,\bftheta)\right]_{\bfW = \widehat\bfW(\bftheta)} \cdot {\rm D}_{\bftheta}\widehat\bfW(\bftheta) \\
& =
\left[{\rm D}_{\bfW} \Phi(\bfW,\bftheta)\right]_{\bfW = \widehat\bfW(\bftheta)} \cdot {\rm D}_{\bftheta}\widehat\bfW(\bftheta) \\
&=
{\bf0}.
\end{split}
\end{equation}
Because~\eqref{eq:Varpro_grad} is equal to the gradient of the full objective function $\Phi$ in expectation, we say the update for $\bftheta$ is unbiased.
Since SA methods can handle unbiased noisy gradients, one could define a VarPro SGD approach using the following unbiased estimator for the gradient,
\begin{equation}\label{eq:direction}
\bfp_k(\bftheta; \widehat\bfW(\bftheta)) = -\left[{\rm D}_{\widetilde\bftheta} \Phi_k(\widehat\bfW(\bftheta),\widetilde\bftheta)\right]_{\widetilde\bftheta = \bftheta}\t
\end{equation}
where the derivative is
\begin{equation}
\label{eq:p_approx}
\begin{split}
{\rm D}_{\bftheta} \Phi_k(\bfW,\bftheta)
%
&={\rm D}_{\bftheta}\left(\frac{1}{|\Tcal_k|}\sum_{(\bfy,\bfc)\in \Tcal_k}\tfrac{1}{2}\norm[2]{\bfW F(\bfy,\bftheta) - \bfc}^2 + \tfrac{\alpha}{2}\norm[2]{\bfL\bftheta}^2 \right)\\
%
&=\frac{1}{|\Tcal_k|}\sum_{(\bfy,\bfc)\in \Tcal_k} \left(\bfW F(\bfy,\bftheta) - \bfc\right)\t\bfW{\rm D}_{\bftheta} F(\bfy,\bftheta)
+\alpha\bftheta\t\bfL\t\bfL.
\end{split}
\end{equation}
Note that ${\rm D}_{\bftheta} F(\bfy,\bftheta)$ can be obtained through backpropagation, which can be parallelized over samples.
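In an autodiff framework, the direction~\eqref{eq:direction} is exactly what reverse-mode differentiation returns when the linear weights are treated as constants. A hedged PyTorch sketch, reusing the {\tt SeparableNet} sketch above and setting $\alpha = 0$ for brevity:

\begin{verbatim}
import torch

# Sketch: the gradient (eq:p_approx) via backpropagation with the linear
# weights held fixed (detached), so no derivative flows through W.
# Assumes a net with attributes F and W as in the sketch above; alpha = 0.
def theta_grad(net, y_batch, c_batch):
    net.zero_grad()
    feats = net.F(y_batch)                 # depends on theta
    W = net.W.weight.detach()              # treat W as a constant
    residual = feats @ W.T - c_batch
    loss = 0.5 * (residual ** 2).sum(dim=1).mean()
    loss.backward()                        # gradients w.r.t. theta only
    return [p.grad for p in net.F.parameters()]
\end{verbatim}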
\subsection{Challenges of VarPro in stochastic optimization}
\label{sub:challengesVarPro}
The appeal of a VarPro approach is marred by the impracticality of computing $\widehat\bfW(\bftheta)$ in~\eqref{eq:objFctnLinphi}. For each mini-batch update of $\bftheta$, one would need to recompute $\widehat\bfW(\bftheta)$, which requires propagating many samples through the network. Since this computation is costly, in terms of time and storage, we can only obtain an approximation of $\widehat{\bfW}(\bftheta)$ in practice.
One way to approximate $\widehat\bfW(\bftheta)$ is to replace the vector $\bfmu_\bfy(\bftheta)$ and the matrix $\bbE \bfc F(\bfy,\bftheta)\t$ with sample mean approximations and the covariance matrix $\bfSigma_\bfy(\bftheta)$ with a sample covariance matrix. The accuracy of the approximation, and hence the expected bias of the gradients for the nonlinear weights, will depend on the size of the sample. However, these quantities still depend on $\bftheta$, and hence for any iterative process where $\bftheta$ is being updated, these values need to be recomputed at each iteration.
A practical strategy to approximate $\widehat\bfW(\bftheta)$ is to use a sample average approximation (SAA) approach. In SAA methods, one first approximates the expected loss using a (large and representative) sample. The resulting optimization problem is deterministic and a wide range of optimization methods with proven theoretical guarantees can be used. For example, inexact Newton methods may be utilized to obtain fast convergence \cite{Bollapragada_2018,OLearyRoseberry:2019vf,Xu_2020}. Solving a deterministic SAA optimization problem with an efficient solver guarantees the linear model fits the sampled data optimally at each training iteration. Note that if an SAA approach were used to solve both \cref{eqn:objFctnReducedphi} and \cref{eq:objFctnLinphi} with the same (fixed) sample set, then this would be equivalent to the variable projection SAA approach described in \cite{newman2020train}. Indeed, there are various recent works~\cite{newman2020train, patel2020block, cyr2019robust} that exploit the separable structures~\eqref{eq:separableDNN} of neural networks in SAA settings in order to accelerate convergence. However, the disadvantage of SAA methods is that very large batch sizes are needed to obtain sufficient accuracy of the approximation and to prevent overfitting. Although parallel computing tools (e.g., GPU and distributed computing) and strategies such as repeated sampling may be used, the storage requirements for SAA methods remain prohibitively large.
To summarize~\cref{sec:separable}, the widely-used, fully-coupled approach (optimizing over $\bftheta$ and $\bfW$ simultaneously) and the alternating minimization approach represent two extremes: the former is a tractable approach, but ignores the separable structure while the latter exploits separability, but is computationally intractable in the stochastic setting. Although a block coordinate descent approach decouples the parameters and replaces expensive optimization solves with iterative updates, a VarPro approach can mathematically eliminate the linear weights, thereby reducing the problem to a stochastic optimization problem in $\bftheta$ only. The resulting noisy gradient estimates for $\bftheta$ are unbiased when $\widehat{\bfW}(\bftheta)$ is computed exactly, making VarPro compatible with SGD variants to update $\bftheta$. However, computing $\widehat{\bfW}(\bftheta)$ when also updating $\bftheta$ is intractable and poor approximations may lead to a large bias in the gradients for $\bftheta$. Hence, providing an effective and efficient way to approximate $\widehat{\bfW}(\bftheta)$ is crucial to obtain a practical implementation of VarPro stochastic optimization.
\section{Sampled limited-memory DNN training with {\tt slimTrain}}
\label{sec:sa}
We present {\tt slimTrain} as a tractable variant of VarPro in the SA setting, which adopts a sampled limited-memory Tikhonov scheme to approximate the linear weights and to estimate an effective regularization parameter for the linear weights. The key idea is to approximate the linear weights using the output features obtained from recent mini-batches and nonlinear weight iterates. By storing the output features from the most recent iterates, {\tt slimTrain} avoids additional forward and backward propagations through the neural network which, especially for deep networks, is computationally the most expensive part of training, and hence adds only a small computational overhead to the training.
\subsection{Sampled Tikhonov methods to approximate $\widehat{\bfW}(\bftheta)$}
\label{sub:slimTik}
As described in~\cref{sec:separable}, approximating $\widehat{\bfW}(\bftheta)$ well is challenging, but important for reducing bias in the gradient for $\bftheta$, see \cref{eq:unbiasedVarPro}. This motivates us to use state-of-the-art iterative sampling approaches to solve stochastic, Tikhonov-regularized, \textit{linear} least-squares problems. For exposition purposes, we first reformulate \cref{eq:objFctnLinphi} as
\begin{equation}
\label{eq:linstochTik}
\widehat \bfw(\bftheta) = \argmin_{\bfw} \ \Ebb \ \thf \left\| \bfA(\bfy, \bftheta) \bfw - \bfc\right\|_2^2 + \tfrac{\lambda}{2}\| \bfw\|_{\rm 2}^2,
\end{equation}
where $\bfw = {\rm vec}(\bfW) \in \Rbb^{\nTargets\nFeatOut}$ concatenates the columns of $\bfW$ into a single vector and $\bfA(\bfy, \bftheta) = F(\bfy, \bftheta)^\top \otimes \bfI_{\nTargets}$, with $\otimes$ denoting the Kronecker product. This Kronecker structure extends to a mini-batch $\Tcal_k$. Suppose we order the samples $(\bfy_i,\bfc_i) \in \Tcal_k$ for $i=1,\dots, |\Tcal_k|$. Then, the final layer can be expressed in terms of the vectorized linear weights as
\begin{align*}
\bfW \bfZ_k(\bftheta) \approx \bfC_k
\qquad \begin{array}{c}
\xrightarrow[\hspace{1cm}]{\text{vec}}\\[-1em]
\xleftarrow[\text{mat}]{\hspace{1cm}} \end{array} \qquad
\bfA_k(\bftheta) \bfw \approx \bfb_k
\end{align*}
where
\begin{align*}
\bfZ_k(\bftheta) &= \begin{bmatrix}F(\bfy_1,\bftheta) & \cdots & F(\bfy_{|\Tcal_k|},\bftheta) \end{bmatrix} &&\in \bbR^{\nFeatOut \times \batchSize},\\
\bfC_k &= \begin{bmatrix} \bfc_1 & \cdots & \bfc_{\batchSize} \end{bmatrix} && \in \bbR^{\nTargets \times\batchSize},\\
\bfA_k(\bftheta) &= \bfZ_k(\bftheta)^\top \otimes \bfI_{\nTargets} && \in \Rbb^{\batchSize\nTargets \times \nFeatOut \nTargets}, \quad\mbox{and}\\
\bfb_k &= {\rm vec}(\bfC_k) = \begin{bmatrix} \bfc_1 \\ \vdots\\ \bfc_{\batchSize} \end{bmatrix} && \in \Rbb^{\nTargets \batchSize}.
\end{align*}
Henceforth, in this section, since $\bftheta$ is fixed in~\cref{eq:linstochTik}, we use $\bfA_k = \bfA_k(\bftheta)$ for presentation purposes.
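The rewriting above rests on the vectorization identity ${\rm vec}(\bfW\bfZ_k) = (\bfZ_k\t \otimes \bfI)\,{\rm vec}(\bfW)$, which can be verified numerically; a small self-contained sketch:

\begin{verbatim}
import numpy as np

# Numerical check of vec(W Z) = (Z^T kron I) vec(W), where vec stacks
# columns (NumPy order="F").
rng = np.random.default_rng(0)
q, m, n = 3, 5, 4                         # n_targets, n_features, batch size
W = rng.standard_normal((q, m))
Z = rng.standard_normal((m, n))
A = np.kron(Z.T, np.eye(q))               # A_k = Z_k^T kron I
lhs = (W @ Z).flatten(order="F")          # vec(W Z_k)
rhs = A @ W.flatten(order="F")            # A_k vec(W)
assert np.allclose(lhs, rhs)
\end{verbatim}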
Introduced in~\cite{slagel2019sampled,chung2020sampled}, sampled Tikhonov ({\tt sTik}) and sampled limited-memory Tikhonov ({\tt slimTik}) methods are specialized iterative methods developed for solving stochastic regularized linear least-squares problems. For an initial iterate $\bfw_0$, the $k$-th {\tt sTik} iterate is given by
\begin{equation}
\label{eq:sTik}
\bfw_k(\Lambda) = \argmin_\bfw \frac{1}{2}\left\| \begin{bmatrix} \bfA_1 \\ \vdots \\ \bfA_{k-1} \\ \bfA_k \\ \sqrt{\Lambda + \sum_{i=1}^{k-1} \Lambda_i} \bfI
\end{bmatrix}
\bfw - \begin{bmatrix} \bfA_1 \bfw_{k-1} \\ \vdots \\ \bfA_{k-1} \bfw_{k-1} \\ \bfb_k \\ \frac{\sum_{i=1}^{k-1} \Lambda_i}{\sqrt{\Lambda + \sum_{i=1}^{k-1} \Lambda_i}} \bfw_{k-1}
\end{bmatrix}\right\|_2^2,
\end{equation}
where $\bfw_{k-1}$ is the previously computed estimate, $\bfA_1, \ldots, \bfA_{k-1}$ are matrices containing previously computed output features, $\Lambda +\sum_{i=1}^{k-1} \Lambda_i >0$, and $\Lambda$ is a regularization parameter estimate. The {\tt sTik} iterates can also be expressed in update form as an SA method,
\begin{equation}
\label{eq:samplediterative}
\bfw_k(\Lambda) = \bfw_{k-1} - \bfB_k(\Lambda) \bfg_k(\bfw_{k-1},\Lambda),
\end{equation}
with $\bfg_k(\bfw_{k-1},\Lambda) = \bfA_k\t (\bfA_k \bfw_{k-1} - \bfb_k) + \Lambda \bfw_{k-1}$ containing gradient information for the current mini-batch and $\bfB_k(\Lambda) = ((\Lambda + \sum_{i=1}^{k-1} \Lambda_i) \bfI + \sum_{i=1}^k \bfA_i\t \bfA_i)^{-1}$ containing global curvature information of the least-squares problem. Note that, contrary to standard SA methods, \cref{eq:samplediterative} requires neither a learning rate nor a line search parameter. The implicit learning rate can be interpreted as one, which is the optimal step size for Newton's method.
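A minimal NumPy sketch of the update form~\eqref{eq:samplediterative}, using a fixed per-batch parameter $\Lambda$ for simplicity; the dense inverse is for illustration only:

\begin{verbatim}
import numpy as np

# Sketch of the sTik update (eq:samplediterative). H accumulates the
# global curvature sum of A_i^T A_i; lam_sum the previous Lambda_i.
def stik_run(batches, n_w, Lam):
    w = np.zeros(n_w)
    H = np.zeros((n_w, n_w))
    lam_sum = 0.0
    for A, b in batches:
        H += A.T @ A                       # include current batch in B_k
        g = A.T @ (A @ w - b) + Lam * w    # mini-batch gradient information
        B = np.linalg.inv((Lam + lam_sum) * np.eye(n_w) + H)
        w = w - B @ g
        lam_sum += Lam                     # update after the step
    return w
\end{verbatim}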
Importantly, the regularization parameter $\lambda$ in \cref{eq:linstochTik}, which typically must be set in advance, has been replaced with a new parameter estimate $\Lambda$ that can be chosen adaptively at each iteration. Each $\Lambda_k$ corresponds to a regularization parameter at iteration $k$ and can change at each iteration ($\Lambda_j$, for $j = 1,\ldots,k-1$, correspond to regularization parameters from previous iterations). In fact, the parameter $\lambda$ and the $\Lambda_k$ are directly connected. After one epoch (e.g., iterating through all training samples), the {\tt sTik} iterate is identical to the Tikhonov solution of \cref{eq:linstochTik} with $\lambda = \sum_{i=1}^k \Lambda_i$, where $k$ is the number of iterations required for one epoch. We exemplify the convergence of {\tt sTik} in \cref{fig:slimTikIntuition} when approximating {\sc Matlab}'s \texttt{peaks} function \cite{HaberRuthotto2017}. Moreover, it has been shown that {\tt sTik} iterates converge asymptotically to a Tikhonov solution, and adaptive parameter selection methods were subsequently developed in \cite{slagel2019sampled}.
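The one-epoch equivalence just stated can be checked numerically with the {\tt stik\_run} sketch above; the following snippet (our illustration, with equal $\Lambda_i$ and the stacked, summed form of the Tikhonov problem) verifies it on random data:

\begin{verbatim}
import numpy as np

# Check: after one epoch, the sTik iterate matches the Tikhonov solution
# of the stacked least-squares problem with lambda = sum of the Lambda_i
# (here Lambda_i = Lam for every batch). Uses stik_run from above.
rng = np.random.default_rng(1)
batches = [(rng.standard_normal((10, 6)), rng.standard_normal(10))
           for _ in range(8)]
Lam = 0.05
w_stik = stik_run(batches, n_w=6, Lam=Lam)
A = np.vstack([A for A, _ in batches])
b = np.concatenate([b for _, b in batches])
lam = Lam * len(batches)
w_tik = np.linalg.solve(A.T @ A + lam * np.eye(6), A.T @ b)
assert np.allclose(w_stik, w_tik)
\end{verbatim}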
Since~\cref{eq:sTik} and~\cref{eq:slimTik} correspond to standard Tikhonov problems, extensions of standard regularization parameter selection methods, such as the discrepancy principle (DP), unbiased predictive risk estimation (UPRE), and generalized cross validation (GCV) techniques, can be utilized. Indeed, \emph{sampled} regularization parameter selection methods sDP, sUPRE, and sGCV for {\tt sTik} and {\tt slimTik}, and their connection to the overall regularization parameter $\lambda$, can be found in \cite{slagel2019sampled}. In this work, we focus on regularization parameter selection via sGCV since this method does not require any further hyperparameters (e.g., noise level estimates for the mini-batch), and we have observed that sGCV provides favorable $\lambda$ estimates. For details on the GCV function, see the original works~\cite{Golub1979:gcv,Wahba1977PracticalAS} and books~\cite{Hansen1998:rankDeficient,Vogel2002:inverseProblems}. The sGCV parameter at the $k$-th {\tt slimTik} iterate can be computed as
\begin{equation}\label{eq:sGCV}
\Lambda_k = \argmin_{\Lambda} \ \frac{|\calT_k| \ \|\bfA_k \bfw_k(\Lambda) - \bfb_k\|_2^2}{\left( |\calT_k| -\trace{\bfA_k \bfT_k(\Lambda) \bfA_k\t}\right)^2}
\end{equation}
where
\begin{equation}
\bfT_k(\Lambda) = \left(\left(\Lambda + \sum_{i=1}^{k-1} \Lambda_i \right)\bfI_n + \sum_{i=k-r}^k \bfA_i\t \bfA_i \right)^{-1}.
\end{equation}
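A hedged sketch of this selection step: for one {\tt slimTik} iteration, the sGCV objective~\eqref{eq:sGCV} is a scalar function of $\Lambda$ and can be minimized with an off-the-shelf one-dimensional optimizer. The sketch below is illustrative only, restricts to $\Lambda > 0$ for simplicity, and uses the update form of the iterate derived from the normal equations of~\eqref{eq:slimTik}:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of the sGCV choice (eq:sGCV) for one slimTik step. M stacks the
# r stored feature blocks; (A, b) is the current (vectorized) mini-batch;
# lam_sum is the sum of previous Lambda_i; n_batch is |T_k|.
def sgcv_lambda(M, A, b, w_prev, lam_sum, n_batch):
    H = M.T @ M + A.T @ A
    I = np.eye(H.shape[0])
    def gcv(log_lam):
        Lam = np.exp(log_lam)
        T = np.linalg.inv((Lam + lam_sum) * I + H)   # T_k(Lambda)
        w = w_prev - T @ (A.T @ (A @ w_prev - b) + Lam * w_prev)
        res = A @ w - b
        denom = n_batch - np.trace(A @ T @ A.T)
        return n_batch * (res @ res) / denom ** 2
    out = minimize_scalar(gcv, bounds=(-20.0, 5.0), method="bounded")
    return np.exp(out.x)
\end{verbatim}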
\begin{figure}[bt]
\includegraphics[width=\textwidth]{sTikIntuition.pdf}
\caption{
Illustration comparing convergence of {\tt sTik} and ADAM with a fixed regularization parameter for solving~\eqref{eq:linstochTik}. We consider approximating {\sc Matlab}'s \texttt{peaks} function, $f:\Rbb^2\to \Rbb$, using training data located on a uniform grid. We apply a fixed nonlinear transformation to each point in the domain to form the rows of $\bfA_k\in \Rbb^{|\Tcal_k|\times |\bfw|}$ and the corresponding true function values are stored in $\bfb_k\in \Rbb^{|\Tcal_k|}$ where $|\bfw|$ is the number of linear weights. The constant regularization parameters are $\Lambda_k = \frac{\lambda}{80}$ where $80$ is the number of iterations in one epoch. Both $|\bfw|$ and $\lambda$ are chosen arbitrarily and the number of iterations depends on the number of training points and the batch size. The best linear weights are given by the Tikhonov solution, $\widehat{\bfw} = (\bfA^\top \bfA +\lambda\bfI)^{-1}\bfA^\top \bfb$, and the corresponding best function approximator is $\bfA\widehat\bfw$. To the left, we plot the convergence of the relative error $\|\bfw_k - \widehat{\bfw}\|_2/\|\widehat{\bfw}\|_2$ for each iteration $k$ in a single epoch. By design, \texttt{sTik} converges to the least-squares solution in one epoch whereas ADAM makes little progress. To the right, the middle row shows the function approximations for different \texttt{sTik} iterates, $\bfA\bfw_k$, and the bottom row shows the absolute difference of the approximation with the best approximation. The top row depicts the true \texttt{peaks} function (left) and the best approximation obtained from the Tikhonov solution (right).
}
\label{fig:slimTikIntuition}
\end{figure}
For some problems, e.g., inverse problems where the $\bfA_k$ represent large-scale forward model matrices, {\tt sTik} may not be practical since each iteration requires either solving a least-squares problem \cref{eq:sTik} whose coefficient matrix grows at each iteration or updating the matrix $\bfB_k$. To alleviate the memory burden, a variant of {\tt sTik} called the sampled limited-memory Tikhonov ({\tt slimTik}) method was proposed in \cite{slagel2019sampled}. Let $r \in \bbN_0$ be a memory depth parameter. Then, the $k$-th {\tt slimTik} iterate has the form
\begin{equation}
\label{eq:slimTik}
\bfw_k (\Lambda) = \argmin_{\bfw} \frac{1}{2} \norm[2]{
\begin{bmatrix}
\bfA_{k-r} \\ \vdots \\ \bfA_{k-1} \\ \bfA_{k}\\ \sqrt{ \Lambda + \sum_{i = 1}^{k-1} \Lambda_i}\bfI
\end{bmatrix} \bfw -
\begin{bmatrix}
\bfA_{k-r} \bfw_{k-1} \\ \vdots \\ \bfA_{k-1} \bfw_{k-1} \\\bfb_k\\ \frac{\sum_{i = 1}^{k-1} \Lambda_i}{\sqrt{ \Lambda+\sum_{i = 1}^{k-1} \Lambda_i}}\bfw_{k-1}
\end{bmatrix}}^2.
\end{equation}
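A sketch of the limited-memory bookkeeping, under the same assumptions as the {\tt sTik} sketch above: only the $r$ most recent feature blocks are retained, for instance in a fixed-length buffer that drops the oldest block automatically:

\begin{verbatim}
import numpy as np
from collections import deque

# Sketch of a slimTik step with memory depth r: the curvature uses only
# the r most recent blocks A_{k-r}, ..., A_{k-1} plus the current A_k.
memory = deque(maxlen=5)                       # r = 5; oldest block dropped

def slimtik_step(A, b, w_prev, Lam, lam_sum):
    M = np.vstack(list(memory)) if memory else np.empty((0, A.shape[1]))
    H = M.T @ M + A.T @ A                      # limited-memory curvature
    T = np.linalg.inv((Lam + lam_sum) * np.eye(A.shape[1]) + H)
    w = w_prev - T @ (A.T @ (A @ w_prev - b) + Lam * w_prev)
    memory.append(A)                           # keep A_k for later steps
    return w
\end{verbatim}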
We provide a few remarks about the {\tt slimTik} method. For linear least-squares problems, it can be shown that for the case $r=0$, the {\tt slimTik} method is equivalent to the stochastic block Kaczmarz method. Furthermore, for linear least-squares problems with a fixed regularization parameter, theoretical convergence results for {\tt slimTik} with memory $r=0$ were developed in \cite{chung2020sampled}. We point out that limited memory methods like {\tt slimTik} were initially developed to address problems where the size of $\bfw$ is massive, but this is not necessarily the case in DNN training where the number of weights in $\bfw$ may be modest. However, as we will see in \cref{sub:saslimtik}, a limited memory approach is suitable and can even be desirable in the context of solving nonlinear problems, where nonlinear parameters have direct impact on the model matrices $\bfA_k$. In this work, we are interested in incorporating extensions of {\tt slimTik} with \textit{adaptive} regularization parameter selection for nonlinear problems that exploit separability.
\subsection{\texttt{slimTrain}}
\label{sub:saslimtik}
Our proposed SA algorithm, {\tt slimTrain}, takes advantage of the separable structure of many DNNs and integrates the {\tt slimTik} method for efficiently updating the linear parameters and for automatic regularization parameter tuning. We consider the {\tt slimTik} update of $\bfW$ to serve as an approximation of the eliminated linear weights in VarPro SA from~\eqref{eq:objFctnLinphi}. Specifically, at the $k$-th iteration, $\widehat \bfW(\bftheta) \approx \bfW_k = {\rm mat}(\bfw_k(\Lambda_k))$ where
\begin{equation}
\label{eq:slimTik_nonlin}
\bfw_k(\Lambda_k) = \argmin_{\bfw} \ \norm[2]{
\begin{bmatrix}
\bfM_{k} \\ \bfA_{k}(\bftheta_{k-1})\\ \sqrt{\sum_{i = 1}^{k} \Lambda_i}\bfI
\end{bmatrix} \bfw -
\begin{bmatrix}
\bfM_k \bfw_{k-1} \\\bfb_k\\ \frac{\sum_{i = 1}^{k-1} \Lambda_i}{\sqrt{\sum_{i = 1}^{k} \Lambda_i}}\bfw_{k-1}
\end{bmatrix}}^2,
\end{equation}
with
\begin{equation}\label{eq:updateM}
\bfM_k =
\begin{bmatrix}
\bfA_{k-r}(\bftheta_{k-r-1}) \\ \vdots \\ \bfA_{k-1} (\bftheta_{k-2})
\end{bmatrix}
\end{equation}
and $\Lambda_k$ is computed using the sGCV method (c.f., \cref{eq:sGCV}). Notice that this is not equivalent to the {\tt slimTik} method for $\argmin_\bfW \Phi(\bfW, \bftheta_{k-1})$, since there is no inner iterative process and because of the dependence on previous $\bftheta_j$. A summary of the algorithm is provided in \cref{alg:decoupledSA}.
\begin{algorithm}
\begin{algorithmic}[1]
\State \textbf{Training Data:} $\Tcal\subseteq \Dcal$
\State \textbf{Hyperparameters:} memory depth $r \in \bbN_0$, mini-batch size $n_{\rm batch}$, learning rate $\gamma$, regularization parameter $\alpha$
\State \textbf{Initialize:} $\bftheta_0 \in \bbR^{\nTheta}$, $\bfW_0\in \bbR^{\nTargets \times \nFeatOut}$
\medskip
\While{not converged}
\State randomly partition $\Tcal$ into mini-batches such that $\Tcal=\bigsqcup_k \Tcal_k$ and $|\Tcal_k| = n_{\rm batch}$
\For{$k=1,\dots,\lfloor |\Tcal| / n_{\rm batch}\rfloor$}
\State select mini-batch $\Tcal_k$
\State forward propagate network to obtain $\bfA_k(\bftheta_{k-1})$
\State update memory matrix $\bfM_k$ \algorithmiccomment{\cref{eq:updateM}}
\State select $\Lambda_k$ using sGCV \algorithmiccomment{\cref{eq:sGCV}}
\State compute $\bfW_k = {\rm mat}(\bfw_k(\Lambda_k))$ \algorithmiccomment{\cref{eq:slimTik_nonlin}}
\State compute $\left[{\rm D}_\bftheta \Phi_k(\bfW_k,\bftheta)\right]_{\bftheta = \bftheta_{k-1}}$ via backpropagation \algorithmiccomment{\cref{eq:batchGradExplicit}}
\State select search direction $\bfp_k$
\State update $\bftheta_k = \bftheta_{k-1} + \gamma \bfp_k(\bftheta_{k-1};\bfW_k)$
\EndFor
\EndWhile
\end{algorithmic}
\caption{{\tt slimTrain}: sampled limited-memory training for separable DNNs}
\label{alg:decoupledSA}
\end{algorithm}
We note that an SA method that incorporates the {\tt slimTik} method was considered for separable nonlinear inverse problems in \cite{chung2019iterative}, but there are some distinctions. First, the results in \cite{chung2019iterative} use a fixed regularization parameter, but here we allow for adaptive parameter choice, which has previously only been considered for linear problems. We note that updating regularization parameters in nonlinear problems (especially stochastic ones) is a challenging task, and currently there are no theoretical justifications. Second, all forward matrices were recomputed for each new set of nonlinear parameters in \cite{chung2019iterative}. That is, for updated estimate $\bftheta_{k-1},$
\begin{equation}
\bfM_k =
\begin{bmatrix}
\bfA_{k-r}(\bftheta_{k-1}) \\ \vdots \\ \bfA_{k-1}(\bftheta_{k-1})
\end{bmatrix}.
\end{equation}
Such an approach would be computationally demanding for DNN learning problems, since this would require revisiting previous mini-batches and re-computing the forward propagation matrix for new parameters $\bftheta_{k-1}$. Instead, we propose to use \cref{eq:updateM}, and we will show that these methods can perform well in practice.
\subsection{Efficient implementation}
\label{sec:implementation}
Training separable DNNs with {\tt slimTrain} adds some computational costs compared to existing SA methods like ADAM; however, these costs are modest in many cases, and the overhead in computational time can be reduced by an efficient implementation. The additional costs stem from solving for the optimal linear weights in~\eqref{eq:slimTik} and approximating the optimal regularization parameter using the sGCV function~\eqref{eq:sGCV}. The costs of these steps depend on the size of the nonlinear feature matrix, $\bfA_k \in \Rbb^{\batchSize\nTargets \times \nFeatOut\nTargets}$, the size of the memory matrix, $\bfM_k$, which contains $r$ blocks of nonlinear features from previous batches, and the number of linear weights. In the case when the linear weights are applied via a dense matrix, we can exploit the Kronecker structure in our problem; see~\cref{sub:slimTik} for details. The Kronecker structure results in solving $\nTargets$ least-squares problems simultaneously, where each problem is moderate in size (typically, on the order of $10^2$ or $10^3$). Due to the modest problem size, we use a singular value decomposition (SVD) to solve the least-squares problem, and we re-use the SVD factors to efficiently adapt the regularization parameter. For the \texttt{peaks} and surrogate modeling experiments (\cref{sec:peaks} and~\cref{sec:surrogate}), we implement the Kronecker-structure framework in {\sc Matlab}. The code is available in the {\tt Meganet.m} repository at \url{https://github.com/XtractOpen/Meganet.m}.
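As a hedged illustration of the SVD re-use mentioned above (a generic Tikhonov solve, not our exact routine): once a thin SVD of the stacked feature matrix is computed, the Tikhonov solution is cheap to re-evaluate for every candidate regularization parameter, since $\bfw(\lambda) = \bfV\,{\rm diag}\!\left(s_i/(s_i^2+\lambda)\right)\bfU\t\bfb$.

\begin{verbatim}
import numpy as np

# Sketch: re-using one thin SVD to solve min_w ||A w - b||^2 + lam ||w||^2
# for many values of lam, via w(lam) = V diag(s/(s^2+lam)) U^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 40))
b = rng.standard_normal(300)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # computed once
Utb = U.T @ b
def tik_solve(lam):
    return Vt.T @ (s / (s ** 2 + lam) * Utb)       # cheap per lam
w1, w2 = tik_solve(1e-3), tik_solve(1e-1)
\end{verbatim}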
In the case when the linear weights parameterize a linear operator (most importantly, a convolution), efficient iterative solvers, such as LSQR~\cite{Paige82lsqr:an} that only require matrix-vector products and avoid forming the matrix explicitly, can be used to find the optimal linear weights. Such methods were employed in~\cite{chung2019iterative} where the authors applied {\tt slimTik} to massive, separable nonlinear inverse problems where the data matrix could not be represented all-at-once. Modifications of the sGCV function using stochastic trace estimators can then be used for estimating the regularization parameter efficiently; for more details, see~\cite{slagel2019sampled}.
In~\cref{sec:autoencoder}, the linear weights parameterize a convolution layer with several input but only one output channel. Exploiting the separability between the different channels and the small number of weights per channel, we form the nonlinear feature matrix, $\bfA_k$, explicitly in our implementation. This allows us to use the same SVD-based automatic regularization parameter selection as in the dense case. To be precise, the columns of $\bfA_k$ are shifted copies of the batch data, which is large, but accessible (on the order of $10^5$). Importantly, the number of columns (copies of the data) is small because the number of weights parameterizing the linear operator, denoted $|\bfw|$, is small (on the order of $10^2$). We can construct the data matrix $\bfA_k$ efficiently by taking advantage of the structure of convolutional operators; each channel has its own linear weights and the samples share the same weights. For storage efficiency, we can form the smaller matrix $\bfA_k^\top \bfA_k \in \Rbb^{|\bfw|\times |\bfw|}$ one time, and use the update rule~\eqref{eq:samplediterative} to adjust the linear weights. We implement the convolutional operator framework in PyTorch~\cite{pytorch}. The code is available on GitHub at \url{https://github.com/elizabethnewman/slimTrain}.
\section{Numerical results}
\label{sec:results}
We present a numerical study of training separable DNNs using {\tt slimTrain} with automatic regularization parameter selection. General numerical considerations of our proposed method were discussed in \cref{sec:implementation}. In~\cref{sec:peaks}, we explore the relationship between various {\tt slimTrain} hyperparameters (e.g., batch size, memory depth, regularization parameters) in a function approximation task. Our results show that automatic regularization parameter selection can mitigate poor hyperparameter selection. In~\cref{sec:surrogate}, we apply {\tt slimTrain} to a PDE surrogate modeling task and show that it outperforms the state-of-the-art ADAM for the default hyperparameters. In~\cref{sec:autoencoder}, we apply {\tt slimTrain} to a dimensionality-reduction task in which the linear weights are applied via a convolution. Notably, we observe faster convergence and, particularly with limited training data, improved results compared to ADAM.
\subsection{Peaks}
\label{sec:peaks}
To explore the hyperparameters in {\tt slimTrain}, we examine a scalar function approximation task. We train a DNN to fit the \texttt{peaks} function in {\sc Matlab}, which is a mixture of two-dimensional Gaussians. We use a small residual neural network (ResNet)~\cite{he2016deep} with a width of $w=8$ and a depth of $d=8$ corresponding to a final time of $T=5$. Further details about the ResNet architecture can be found in~\cref{app:resnet}. The nonlinear feature extractor maps $F:\Rbb^2\times \Rbb^{528} \to \Rbb^8$ where $528$ is the number of weights in $\bftheta$. The final linear layer introduces the weights $\bfW\in \Rbb^{1 \times 9}$, where the number of columns equals the width of the ResNet plus an additive bias. Our training data consists of 2,000 points sampled uniformly on the domain $[-3,3]\times [-3,3]$. We display the convergence of {\tt slimTrain} for various combinations of hyperparameters in~\cref{fig:peaksConvergenceStudy}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{peaksConvergence.pdf}
\caption{Convergence of training loss for the \texttt{peaks} experiment when training with {\tt slimTrain} with a learning rate of $\gamma=10^{-3}$.
Each row corresponds to a different choice of fixed regularization parameter for $\bfW$, $\lambda=10^0, 10^{-3}, 10^{-10}$. When training with adaptive regularization parameter selection, the initial regularization parameter $\Lambda_0$ is set to be the same as the fixed regularization parameter. Each column corresponds to a different batch size, $|\Tcal_k|=1, 5, 10$. Each convergence plot consists of dashed and solid lines, corresponding to using a fixed regularization parameter and to adaptively choosing the regularization parameter via sGCV, respectively. The color of each line corresponds to memory depth $r=0,5,10$ and, additionally, $r=100$ for $|\Tcal_k|=1$.}
\label{fig:peaksConvergenceStudy}
\end{figure}
The interplay between the number of output features, the batch size, and the memory depth is apparent in~\cref{fig:peaksConvergenceStudy}. In this scalar-function example, we seek $9$ weights (i.e., $\bfW\in \Rbb^{1 \times 9}$) to fit $(r + 1)|\Tcal_k|$ samples. With small memory depth and batch size, the problem is underdetermined (or not sufficiently overdetermined) and solving for $\bfW$ significantly overfits the given batch at each iteration. This results in slow, oscillatory convergence behavior, particularly with a batch size of $|\Tcal_k|=1$ (\cref{fig:peaksConvergenceStudy}, first column). When the memory depth and batch size are large enough (e.g., $r=100$ in the $|\Tcal_k|= 1$ case), the linear least-squares problem is sufficiently overdetermined and the training loss converges faster and to a lower value (\cref{fig:peaksConvergenceStudy}, purple line in the first column).
Solving the optimization problem and decreasing the loss on the training data is a proxy for the goal of DNN training: to generalize to unseen data. To illustrate the generalizability of DNNs trained with {\tt slimTrain}, we display in~\cref{fig:peaksApproxStudy} the DNN approximations corresponding to a batch size of $|\Tcal_k|=5$ (second column of the convergence plots in \cref{fig:peaksConvergenceStudy}).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{peaksApprox.pdf}
\caption{DNN approximations for the \texttt{peaks} experiment with batch size of $|\Tcal_k|=5$ and a learning rate of $\gamma=10^{-3}$. The results use the network weights corresponding to the lowest validation loss for each training method. Each block row corresponds to a different choice of fixed regularization parameter for $\bfW$, $\lambda=10^0, 10^{-3}, 10^{-10}$. The top rows of images in each block depict the DNN approximations of the \texttt{peaks} function. The bottom rows of images in each block depict the absolute difference of the DNN approximations and the true \texttt{peaks} function. The relative error of the DNN approximation versus the true function is displayed below the corresponding absolute difference image.}
\label{fig:peaksApproxStudy}
\end{figure}
As exemplified in~\cref{fig:peaksApproxStudy}, the choice of regularization parameter for $\bfW$ significantly impacts the approximation quality of the network when training with a fixed regularization parameter (\cref{fig:peaksApproxStudy}, second column set of figures). If the optimization problem over-regularizes the linear weights ($\lambda=10^0$), the DNN approximation is smoother than the true \texttt{peaks} function and does not fit the extremes tightly (\cref{fig:peaksApproxStudy}, first row). In the under-regularized case ($\lambda=10^{-10}$) with a small memory depth ($r=0$), $\bfW$ overfits the batches and the DNN approximation does not generalize well (e.g., we miss the small peaks) (\cref{fig:peaksApproxStudy}, third row). With a well-chosen regularization parameter (here, $\lambda=10^{-3}$), the DNN approximation is close to the true \texttt{peaks} function, but tuning this regularization parameter can be costly (\cref{fig:peaksApproxStudy}, second row). In comparison, the DNN approximations obtained when automatically choosing a regularization parameter using the sGCV method are good and look similar regardless of the initial regularization parameter or memory depth (\cref{fig:peaksApproxStudy}, first column set of figures).
The selected regularization parameters are related to the ill-posedness of the problem, as illustrated for the $\lambda=10^{-10}$ case in~\cref{fig:peaksAlphaStudy}. When the batch size is $|\Tcal_k|=1$ (\cref{fig:peaksAlphaStudy}, first column), the linear least-squares problem is underdetermined for memory depths $r=0$ and $r=5$ and is overdetermined when $r=10$. To avoid overfitting in the underdetermined cases, larger regularization parameters are selected. In the overdetermined case, overfitting is less likely and thus less regularization is needed.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{peaksAlphas.pdf}
\caption{Regularization parameters selected by approximately minimizing the sGCV function in the \texttt{peaks} example for a learning rate of $\gamma=10^{-3}$ and an initial regularization parameter of $\Lambda_0=10^{-10}$. Each column corresponds to a different batch size, $|\Tcal_k|=1,5,10$, respectively. Each row corresponds to a different memory depth, $r=0,5,10$, respectively. In each image, the horizontal axis is the number of epochs, in this case $50$, and the vertical axis is the number of iterations per epoch. For example, when the batch size is $|\Tcal_k|=5$, the vertical axis has $400$ iterations (the number of training samples divided by the batch size). Each pixel corresponds to the regularization parameter used for a particular batch; the batches change because we shuffle the training data at the start of each epoch. The images are displayed in log scale. The first few regularization parameters in each case are small (top left corner of each image) because we start with a small initial regularization parameter.}
\label{fig:peaksAlphaStudy}
\end{figure}
With an adequate choice of memory depth and batch size, training a DNN with {\tt slimTrain} decreases the training loss and generalizes well to unseen data. The choice of regularization parameter significantly impacts the resulting network: too much regularization and the training stagnates; too little regularization and the training oscillates. Employing adaptive regularization parameter selection mitigates these extremes and simplifies the costly a priori step of tuning the parameter.
\subsection{PDE surrogate modeling}
\label{sec:surrogate}
Due to their approximation properties, DNNs have attracted increasing interest as efficient surrogate models for computationally expensive tasks arising in scientific applications. One common task is partial differential equation (PDE) surrogate modeling, in which a DNN replaces expensive linear system solves~\cite{olearyroseberry2021derivativeinformed, bhattacharya2021model, Zhu_2019,Tripathy:2018kk}. Here, we consider a parameterized PDE
\begin{align}
\bfc = \Pcal u \quad \text{where} \quad \Acal(u, \bfy) = 0,
\end{align}
where $u$ is the solution to a PDE defined by $\Acal$ and parameterized by $\bfy$ (which could be discrete or continuous). In our case, the solution is measured at discrete points given by the linear operator $\Pcal$ and the observations are contained in $\bfc$. The goal is to train a DNN as a surrogate mapping from parameters $\bfy$ to observables $\bfc$ and avoid costly PDE solves.
In our experiment, we consider the convection diffusion reaction (CDR) equation which models physical phenomena in many fields including climate modeling~\cite{stocker2011introduction} and mathematical biology~\cite{deVries2006:mathBio, Britton1986:cdrBiology}. As its name suggests, the CDR equation is composed of three terms: a diffusion term that encourages an even distribution of the solution $u$ (e.g., chemical concentration), a convection (or advection) term that describes how the flow (e.g., of the fluid containing the chemical) moves the concentration, and a reaction term that captures external factors that affect the concentration levels. In our example, the reaction term is a linear combination of $55$ different reaction functions and the parameters $\bfy\in \Rbb^{55}$ are the coefficients. The observables $\bfc\in \Rbb^{72}$ are measured at the same $6$ spatial coordinates and $12$ different time points; for details, see~\cite{newman2020train}. We train a ResNet with a width of $w=16$ and a depth of $d=8$ corresponding to a final time of $T=4$; see~\cref{app:resnet} for further details. The linear weights in the final, separable layer are stored as a matrix $\bfW \in \Rbb^{72\times 17}$, where the number of columns is the width of the ResNet plus an additive bias. The results of training the ResNet with {\tt slimTrain} are displayed in~\cref{fig:cdrParameterStudy}. The major takeaway is that {\tt slimTrain} exploits the separable structure of the ResNet and, as a result, trains the network faster and fits the observed data better (lower loss) than ADAM with the recommended learning rate ($\gamma=10^{-3}$).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{cdrConvergence.pdf}
\caption{Convergence results for the training loss for the CDR experiment. The rows correspond to different batch sizes, $|\Tcal_k| = 5,10$, and the columns correspond to different learning rates, $\gamma=10^{-3}, 10^{-2},10^{-1}$. The colorful, solid lines depict the convergence of the training loss using {\tt slimTrain} with sGCV regularization parameter selection. Each color corresponds to a different memory depth, $r=0,5,10$. The black line with markers depicts the convergence of the training loss using ADAM.}
\label{fig:cdrParameterStudy}
\end{figure}
\begin{table}
\setlength\extrarowheight{5pt}
\centering
\caption{Training and validation loss in the CDR experiment for batch size $|\Tcal_k|=5,10$ and learning rates $\gamma=10^{-3}, 10^{-2}, 10^{-1}$. We display the loss after the first $20$ epochs to compare early performance. Because the memory depth does not significantly impact convergence, we display the loss for {\tt slimTrain} with a memory depth of $r=0$. Closeness between the training and validation losses indicates good generalization. The best overall performance (lowest loss) is achieved by {\tt slimTrain} with a batch size of $|\Tcal_k|=10$, denoted in bold.}
\label{tab:cdrValidation}
\small
\begin{tabular}{|c|c||rr|rr|rr|}
\hline
\multicolumn{1}{|c}{} && \multicolumn{2}{c|}{$\gamma=10^{-3}$} & \multicolumn{2}{c|}{$\gamma=10^{-2}$} & \multicolumn{2}{c|}{$\gamma=10^{-1}$}\\
\cline{3-8}
\multicolumn{1}{|c}{} & & \multicolumn{1}{c}{train} & \multicolumn{1}{c|}{valid}
& \multicolumn{1}{c}{train} & \multicolumn{1}{c|}{valid}
& \multicolumn{1}{c}{train} & \multicolumn{1}{c|}{valid}\\
\hline\hline
\multirow{2}{*}{\rotatebox{90}{ \scriptsize $|\Tcal_k|=5$~~}}
& {\tt slimTrain}, $r=0$
& $42.98$ & $41.17$
& $22.06$ & $22.25$
& $18.74$ & $23.25$ \\
& ADAM
& $1453.00$ & $1338.00$
& $45.24$ & $42.73$
& $8.07$ & $8.70$ \\[1ex]
\hline
\multirow{2}{*}{\rotatebox{90}{\scriptsize $|\Tcal_k|=10$~~}} &
{\tt slimTrain}, $r=0$
& $47.65$ & $52.95$
& $\bf 4.28$ & $\bf 5.30$
& $15.61$ & $16.60$ \\
& ADAM
& $4405.00$ & $4143.00$
& $49.92$ & $41.23$
& $10.67$ & $10.71$\\[2ex]
\hline
\end{tabular}
\end{table}
In~\cref{tab:cdrValidation}, we examine whether the performance of {\tt slimTrain} and ADAM generalizes to unseen data after $20$ epochs; we choose $20$ epochs to analyze early performance and because the training loss decreases more slowly after $20$ epochs in~\cref{fig:cdrParameterStudy}. The training and validation losses are close for both {\tt slimTrain} and ADAM, indicating that both training algorithms produce networks that generalize well. For ADAM's suggested learning rate, $\gamma=10^{-3}$, {\tt slimTrain} achieves a validation loss that is two orders of magnitude less than that of ADAM. When the learning rate is tuned to $\gamma=10^{-1}$, the performance of ADAM improves, but the overall best performance is achieved by {\tt slimTrain}. Most significantly, the performance of {\tt slimTrain} is less sensitive to the choice of learning rate.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{cdrAlphas.pdf}
\caption{Effect of learning rate and memory depth on the choice of regularization parameters in the CDR experiment. The presented plots are from training using {\tt slimTrain} for a batch size of $|\Tcal_k| = 5$. We show the regularization parameters (in log scale) obtained for various learning rates (columns) and memory depths (rows).}
\label{fig:cdrRegularization}
\end{figure}
As with the numerical experiment in~\cref{sec:peaks}, there is a relationship between batch size, memory depth, and the number of output features. In this experiment, because $\bfW\in \Rbb^{72\times 17}$, we solve $72$ independent least-squares problems with $17$ unknowns in each problem. Illustrated in~\cref{fig:cdrRegularization}, when the memory depth is small ($r=0,5$), each least-squares problem is underdetermined or not sufficiently overdetermined, and hence more regularization on $\bfW$ is needed to avoid overfitting. Because we use sGCV to automatically select the regularization parameter, the training with {\tt slimTrain} achieves a comparable loss for all memory depths. In addition, the learning rate to update $\bftheta$ plays a role in the regularization parameters chosen. When the learning rate is large ($\gamma=10^{-1}$), the output features of the network can change rapidly. As a result, larger regularization parameters are selected, even in the sufficiently overdetermined case ($r=10$), to avoid fitting features that will change significantly at the next iteration.
In this surrogate modeling example, {\tt slimTrain} converges faster to the same or a better accuracy than ADAM using the recommended learning rate ($\gamma=10^{-3}$) by exploiting the separability of the DNN architecture. Tuning the learning rate can improve the results for ADAM, but training with {\tt slimTrain} produces comparable results and reaches a desirable loss in the same or fewer epochs. Using sGCV to select the regularization parameter on the weights $\bfW$ provides more robust training, adjusting automatically to the various hyperparameters (memory depth, learning rate) to produce consistent convergence.
\subsection{Autoencoders}
\label{sec:autoencoder}
Autoencoders are a dimensionality-reduction technique using two neural networks: an encoder that represents high-dimensional data in a low-dimensional space and a decoder that reconstructs the high-dimensional data from this encoding, as illustrated in \cref{fig:autoencoderIllustration}. Training an autoencoder is an unsupervised learning problem that can be phrased as the optimization problem
\begin{align}\label{eq:autoencoderObjFctn}
\min_{\bfw,\bftheta_{\rm dec}, \bftheta_{\rm enc}} \Phi_{\rm auto}(\bfw,\bftheta_{\rm dec},\bftheta_{\rm enc}) &\equiv
\Ebb \ \tfrac{1}{2}\|\bfK(\bfw)F_{\rm dec}(F_{\rm enc}(\bfy,\bftheta_{\rm enc}),\bftheta_{\rm dec}) - \bfy\|_2^2 \\
&\quad
+ \tfrac{\alpha_{\rm enc}}{2} \|\bftheta_{\rm enc}\|_2^2
+ \tfrac{\alpha_{\rm dec}}{2} \|\bftheta_{\rm dec}\|_2^2
+ \tfrac{\lambda}{2} \|\bfw\|_2^2,\nonumber
\end{align}
where the components of the objective function are the following:
\begin{itemize}
\item \textbf{Encoder:} $F_{\rm enc}: \Ycal \times \Rbb^{|\bftheta_{\rm enc}|} \to \Rbb^{n_{\rm lat}}$ is the encoding neural network that reduces the dimensionality of the input features $\nFeatIn$ to an \emph{intrinsic dimension} $n_{\rm lat}$ with $n_{\rm lat} \ll \nFeatIn$. Typically, the true intrinsic dimension is not known and must be chosen manually. The weights are $\bftheta_{\rm enc} \in \Rbb^{|\bftheta_{\rm enc}|}$, the number of encoder weights is $|\bftheta_{\rm enc}|$, and the regularization parameter is $\alpha_{\rm enc}\geq 0$.
\item \textbf{Decoder Feature Extractor:} $F_{\rm dec}: \Rbb^{n_{\rm lat}} \times \Rbb^{|\bftheta_{\rm dec}|} \to \Rbb^{\nFeatOut}$ is the decoder feature extractor. The weights are $\bftheta_{\rm dec} \in \Rbb^{|\bftheta_{\rm dec}|}$, the number of weights is $|\bftheta_{\rm dec}|$, and the regularization parameter is $\alpha_{\rm dec}\geq 0$.
\item \textbf{Decoder Final Layer:} $\bfK(\cdot): \Rbb^{|\bfw|} \to \Rbb^{\nFeatIn\times \nFeatOut}$ is a linear operator, mapping $\bfw$ to a matrix $\bfK(\bfw)$. For instance, $\bfK(\bfw)$ could be a sparse convolution matrix which can be accessed via function calls. The learnable weights $\bfw$ have a regularization parameter $\lambda\geq 0$.
\end{itemize}
For notational simplicity, we let $\bftheta = (\bftheta_{\rm enc},\bftheta_{\rm dec})$ and $\alpha = \alpha_{\rm enc} = \alpha_{\rm dec}$ for the remainder of this section.
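To make the roles of these three components concrete, the following PyTorch-style Python sketch assembles a small MNIST autoencoder whose final layer is a transposed convolution and hence linear in its weights; the channel counts and kernel sizes are illustrative choices, not the exact architecture of~\cref{app:autoencoder}.

\begin{verbatim}
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_lat=50):
        super().__init__()
        # Encoder F_enc: 28x28 image -> latent code of dimension n_lat
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # -> 14x14
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # -> 7x7
            nn.Flatten(),
            nn.Linear(16 * 7 * 7, n_lat),
        )
        # Decoder feature extractor F_dec: latent code -> feature maps
        self.decoder_features = nn.Sequential(
            nn.Linear(n_lat, 16 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (16, 7, 7)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),    # -> 14x14
            nn.ReLU(),
        )
        # Final separable layer K(w): a transposed convolution that is
        # linear in its weights w
        self.final = nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1)  # -> 28x28

    def forward(self, y):
        return self.final(self.decoder_features(self.encoder(y)))
\end{verbatim}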
\begin{figure}
\centering
\includegraphics[width=\textwidth]{autoencoderCartoon.pdf}
\caption{Illustration of autoencoder for the MNIST data. The goal is to represent high-dimensional data in a low-dimensional, latent space for dimensionality reduction and feature extraction~\cite{Goodfellow:2016wc}. The encoder is a neural network that maps input data $\bfy$ to the latent space with intrinsic dimension ${n_{\rm lat}}$ (typically user-defined). The decoder is a neural network that maps from the latent space to obtain an approximation of the original input, $\widehat{\bfy}$.}
\label{fig:autoencoderIllustration}
\end{figure}
In this experiment, we train a small autoencoder on the MNIST dataset~\cite{Lecun:1990mnist}. The data consists of 60,000 training and 10,000 test gray-scale images of size $28\times 28$ (i.e., $784$ input features). We implement convolutional neural networks for both the encoder and decoder with intrinsic dimension $n_{\rm lat}=50$; see details in~\cref{app:autoencoder}. Unlike the dense matrices in the previous experiments, the final, separable layer is a (transposed) convolution. Because convolutions use few weights and the prediction is high-dimensional, the least-squares problem is always overdetermined for this application. Hence, we require only a moderate memory depth in our experiments and, motivated by our results in~\cref{sec:peaks} and~\cref{sec:surrogate}, we use a memory depth of $r=5$ when training with {\tt slimTrain}.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=0.9\textwidth]{autoencoderConvergenceFixedAlpha.pdf}\\
DNN approximations after $1$ epoch\\
\includegraphics[width=0.9\textwidth]{autoencoderApproxEpoch1.pdf}
\end{tabular}
\caption{Training loss convergence and visualizations of MNIST autoencoder approximations. For the convergence plots, the networks are trained for $50$ epochs with the recommended learning rate of $\gamma=10^{-3}$, a batch size of $|\Tcal_k|=32$, regularization parameter $\alpha=10^{-10}$ for $\bftheta$, and 50,000 training images plus 10,000 for validation. For ADAM, we train with three different regularization parameters for $\bfw$, $\lambda=10^{0}, 10^{-1}, 10^{-10}$. When using {\tt slimTrain}, we automatically select the regularization parameters using sGCV with initial parameter $\Lambda_0=10^{-1}$ and choose a modest memory depth of $r=5$. We display the DNN approximations after the first epoch below the convergence plots. The top row of MNIST images shows, from left to right, $16$ test images, the approximations from the ADAM-trained networks with various regularization parameters on $\bfw$, and the approximation obtained from {\tt slimTrain}. The bottom row of images shows the absolute differences (in log scale) between the network approximations and the true test images. The value below each absolute difference image is the test loss over all 10,000 test images after the first epoch.}
\label{fig:autoencoderResultsFullData}
\end{figure}
The convergence results comparing {\tt slimTrain} and ADAM are presented in~\cref{fig:autoencoderResultsFullData}. Here, we see that training with {\tt slimTrain} converges faster than ADAM in the first $10$ epochs and reaches a comparable lowest loss after $50$ epochs. Each training scheme forms an autoencoder that approximates the MNIST data accurately and generalizes well, even after the first epoch. However, after the first epoch, the absolute difference between the {\tt slimTrain} approximation and the true test images is noticeably less noisy than for the ADAM-trained networks, particularly for a poor choice of regularization parameter on $\bfw$ (e.g., $\lambda=10^0$). We note that because we employ automatic regularization parameter selection, the performance of {\tt slimTrain} was nearly identical for different initial regularization parameters, $\Lambda_0$. We display the case that produced slightly less oscillatory convergence.
Using a good choice of the regularization parameter on the nonlinear weights ($\alpha=10^{-10}$) is partially responsible for the high-quality approximations obtained for each training method. The results in~\cref{fig:autoencoderAlpha} support our choice of a small regularization parameter on $\bftheta$: smaller regularization parameters on $\bftheta$ produce better DNN approximations. When $\alpha$ is poorly chosen (in this case, when $\alpha$ is large), {\tt slimTrain} produces a considerably smaller loss than training with ADAM. Hence, training with {\tt slimTrain} and sGCV can adjust to poor hyperparameter selection, even when those hyperparameters are not directly related to the regularization on $\bfw$.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{autoencoderVariedAlpha.pdf}
\caption{Effect of regularization parameters on the minimum loss. We vary the regularization parameter $\alpha$ of $\bftheta$ for both ADAM (blue) and {\tt slimTrain} (black), the regularization parameter $\lambda$ on $\bfw$ for ADAM, and the initial regularization parameter $\Lambda_0$ on $\bfw$ for {\tt slimTrain}. For simplicity, we set the (initial) regularization parameters equal, $\alpha=\lambda=\Lambda_0$. The height of each bar is the training (solid) and validation (striped) loss for the network that obtained the lowest validation loss in $50$ epochs for the given hyperparameters.}
\label{fig:autoencoderAlpha}
\end{figure}
In addition to adjusting regularization parameters for the linear weights, we found that training with {\tt slimTrain} offers significant performance benefits in the limited-data setting; see~\cref{fig:autoencoderResultsSmallData}. When only a few training samples are used, training with {\tt slimTrain} produces a lower training and validation loss. In the small training data regime, the optimization problem is more ill-posed and there are fewer network weight updates per epoch. Hence, the automatic regularization selection and fast initial convergence of {\tt slimTrain} produce a more effective autoencoder.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{autoencoderSmallData.pdf}
\caption{Mean loss of MNIST autoencoder for small number of training data with a batch size of $32$. All networks were trained for $50$ epochs for $10$ different weight initializations using the same hyperparameters as with the full training data in~\cref{fig:autoencoderResultsFullData}. For each initialization, we choose the network that produced the minimal validation loss over $50$ epochs. In the plot, each point denotes the mean loss over the $10$ runs and the bands depict one standard deviation from the mean.}
\label{fig:autoencoderResultsSmallData}
\end{figure}
Consistent with the results of our previous experiments, in the autoencoder example with a final convolutional layer, {\tt slimTrain} converges faster initially than ADAM to a good approximation and is less sensitive to the choice of regularization on the nonlinear weights, $\bftheta$. In the case of limited data, a common occurrence in scientific applications, the training problem becomes more ill-posed. Here, {\tt slimTrain} produces networks that fit and generalize better than ADAM. By solving for good weights $\bfw$ and automatically choosing an appropriate regularization parameter at each iteration, {\tt slimTrain} achieves more consistent training performance for many different choices of hyperparameters.
\section{Conclusions}
\label{sec:conclusions}
We address the challenges of training DNNs by exploiting the separability inherent in most commonly-used architectures whose output depends linearly on the weights of the final layer. Our proposed algorithm, {\tt slimTrain}, leverages this separable structure for function approximation tasks where the optimal weights of the final layer can be obtained by solving a stochastic regularized linear least-squares problem. The main idea of {\tt slimTrain} is to iteratively estimate the weights of the final layer using the sampled limited-memory Tikhonov scheme {\tt slimTik}~\cite{chung2020sampled}, which is a state-of-the-art method to solve stochastic linear least-squares problems. By using {\tt slimTik} to update the linear weights, {\tt slimTrain} provides a reasonable approximation for the optimal linear weights and simultaneously estimates an effective regularization parameter for the linear weights. The latter point is crucial -- {\tt slimTrain} does not require a difficult-to-tune learning rate and automatically adapts the regularization parameter for the linear weights, which can simplify the training process. In our numerical experiments, {\tt slimTrain} is less sensitive to the choice of hyperparameters, which can make it a good candidate to train DNNs for new datasets with limited experience and no clear hyperparameter selection guidelines.
From a theoretical perspective, {\tt slimTrain} can be seen as an inexact version of the variable projection~\cite{GolubPereyra1973,OLearyRust2013} (VarPro) scheme extended to the stochastic approximation (SA) setting. Using this viewpoint, we show in~\cref{sub:saslimtik} that we obtain unbiased gradient estimates for the nonlinear weights when the linear weights are estimated accurately. This motivates the design of {\tt slimTrain} as a tractable alternative to VarPro SA, which is infeasible as it requires re-evaluation of the nonlinear feature extractor over many samples after every training step. The computational costs of {\tt slimTrain} are limited as it re-uses features from the most recent batches and therefore adds little computational overhead; see~\cref{sec:implementation}. In addition, {\tt slimTrain} approximates the optimal linear weights obtained from VarPro, thereby reducing the bias introduced by the approximation when updating the nonlinear weights.
From a numerical perspective, the benefits of {\tt slimTrain}, and specifically automated hyperparameter selection, are demonstrated by the numerical experiments for both fully-connected and convolutional final layers. In~\cref{sec:peaks}, we explore the relationship of the {\tt slimTrain} parameters, observing that memory depth and batch size play a crucial role in determining the ill-posedness of the least-squares problem to solve for the linear weights. The regularization parameter adapts to the least-squares problem accordingly -- larger regularization parameters are selected when the problem is underdetermined. In~\cref{sec:surrogate}, we observe that {\tt slimTrain} is less sensitive to the choice of learning rate, outperforming the recommended settings for ADAM. Again, the regularization parameters adapt to the learning rate -- larger parameters are chosen when the nonlinear weights change more rapidly. In~\cref{sec:autoencoder}, we show that {\tt slimTrain} can be applied to a final convolutional layer and outperforms ADAM in the limited-data regime, which is typical in scientific applications.
\section*{Acknowledgments}
This work was initiated as a part of the SAMSI Program on Numerical Analysis in Data Science in 2020. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\bibliographystyle{siamplain}
\section{The $np\leftrightarrow d\gamma$ cross section in $\chi$EFT}
\label{sec:npdagmma}
The detailed-balance principle relates the $np\rightarrow d\gamma$ cross section $\sigma_{np}$ to the deuteron photodissociation $d\gamma \rightarrow np$ cross section $\sigma_{\gamma d}$:
\begin{equation}
\sigma_{np} = \frac{3}{2}\,\frac{(s-m_d^2)^2}{(s-4m^2)(s-\delta m^2)}\,\sigma_{\gamma d}\,,
\end{equation}
where $m_d$ is the deuteron mass, $m$ is the isospin-averaged nucleon mass, and $\delta m$ is the neutron-proton mass difference. The Mandelstam variable $s$ can be conveniently expressed in terms of the $np$ relative energy $E$, or the neutron energy $E_n$ in the rest frame of the proton,
\begin{equation}
s = ( E + 2 m )^2 = 2 m_p E_n + 4 m^2\,,
\end{equation}
where $m_p$ is the proton mass.
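For illustration, the detailed-balance conversion can be evaluated directly, as in the following Python sketch; the masses are approximate values in MeV, and the function name is ours.

\begin{verbatim}
m_p, m_n = 938.272, 939.565      # proton and neutron masses [MeV]
m   = 0.5 * (m_p + m_n)          # isospin-averaged nucleon mass
dm  = m_n - m_p                  # neutron-proton mass difference
m_d = 1875.613                   # deuteron mass [MeV]

def sigma_np(E_n, sigma_gd):
    # np -> d gamma cross section from the photodissociation cross
    # section sigma_gd via detailed balance; E_n is the neutron energy
    # in the proton rest frame [MeV].
    s = 2.0 * m_p * E_n + 4.0 * m**2     # Mandelstam s
    return 1.5 * (s - m_d**2)**2 / ((s - 4.0*m**2) * (s - dm**2)) * sigma_gd
\end{verbatim}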
The photodissociation cross section can be expressed as a function of the photon energy $\nu$ (in the rest frame of the deuteron) through
\begin{equation}
\sigma_{\gamma d}(\nu) = \frac{2\pi^2}{\nu} \, \alpha \, R_T(\nu,\nu)
\end{equation} in terms of the transverse response function of the deuteron, defined as
\begin{align}
\label{eq:r_t}
R_T(\nu,q) = \frac{1}{6} \sum_{M} \sum_{S^\prime S_z^\prime} & \sum_{T^\prime}
\int\frac{\mathrm{d}^3k}{(2\pi)^3}
\, \delta(\nu+m_d-E_+-E_-) \nonumber\\
& \sum_{\lambda=\pm1} \vert \langle\phi_{\mathbf{k},S^\prime S_z^\prime,T^\prime}\vert j_\lambda\vert\psi_{M}\rangle \vert^2\,.
\end{align}
Here, $j_\lambda$ are the spherical components of the electromagnetic current operator ${\bf j}$ at four-momentum transfer $(\nu,\mathbf{q})$. The deuteron ground state is denoted by $| \psi_{M}\rangle$, where $M$ is the projection of the total angular momentum, whereas
$|\phi_{\mathbf{k},S^\prime S_z^\prime,T^\prime}\rangle $ denotes the $pn$ scattering state with the relative momentum, total isospin, total spin and spin projection given by $\mathbf{k}$, $T^\prime$, $S^\prime$ and $S_z^\prime$ respectively. $E_\pm$ are the energies of the final-state nucleons in the rest frame of the deuteron, \emph{i.e.,} $E_\pm = \sqrt{(\mathbf{q}/2 \pm \mathbf{k})^2+m^2}$. The deuteron and the $np$ scattering-state wave functions are obtained by solving their Lippmann-Schwinger equations using the momentum-space regularized semilocal potentials of Ref.~\cite{Reinert:2017usi} at various orders in the $\chi$EFT expansion. The electromagnetic currents were first derived in $\chi$EFT in Refs.~\cite{Park:1995pn,Pastore:2008ui,Kolling:2009iq}. We use multipole expansions~\cite{Bijaya_neutrino,chi-disp-mu-d} of these operators~\cite{bibnote1} that contribute at orders $(Q/\Lambda)^{-3,-2,1,0}$, namely the one-body currents consisting of the convection and spin-magnetization terms, and the leading one-pion exchange currents. Although we retain the quadrupole and octupole operators in the multipole expansions, their contributions are negligible compared to the dominant $M1$ and $E1$ operators at low energies. To compare the relative strengths of the $M1$ and $E1$ transitions, we calculate the analyzing power, defined as $\Sigma(\theta)\equiv(N_\parallel-N_\perp)/(N_\parallel+N_\perp)$, where $N_\parallel (N_\perp)$ is the number of outgoing neutrons parallel (perpendicular) to the photon-polarization plane in a photodissociation experiment. It is related to the $M1$ and $E1$ contributions to the photodissociation cross section through~\cite{ando}
\begin{equation}
\Sigma(\theta) = \frac{\frac{3}{2}\sigma_{\gamma d}^{E1}\sin^2\theta}{\sigma_{\gamma d}^{M1}+\frac{3}{2}\sigma_{\gamma d}^{E1}\sin^2\theta} \,,
\end{equation}
where $\theta$ is the angle of the neutrons with respect to the photon beam axis.
We now present the $\chi$EFT predictions for observables related to $np\leftrightarrow d\gamma$, starting with the $p(n,\gamma)d$ cross section for thermal neutrons in Table~\ref{tab:threshold}. Here (and throughout this work), we denote cross sections calculated with an $n$th-order (N$^n$LO) potential of Ref.~\cite{Reinert:2017usi} (and electromagnetic currents fixed to one-body plus two-body one-pion exchange) as $y_n$. There is no entry $y_1$ because the $\chi$EFT potential receives no corrections at this order. At this threshold energy, our results undershoot the experimental value of Ref.~\cite{cokinos-melkonian} by a few percent. It was shown in Ref.~\cite{piarulli} that percent-level agreement with the experiment of Ref.~\cite{cokinos-melkonian} required current operators at order $(Q/\Lambda)^1$ not considered in this work; these included leading two-pion exchange, sub-leading corrections to one-pion exchange, and contact operators. Since the goal of this paper is to apply machine-learning tools for the first time to uncertainty quantification of nuclear electroweak reactions in $\chi$EFT, and since including these operators significantly complicates the calculations, we work with fixed currents and focus on analyzing convergence in the $\chi$EFT potential only, leaving the inclusion of higher-order currents to future work. We discuss the implications of our findings for such a study further below.
\begin{center}
\begin{table}[htbp]
\begin{tabularx}{0.75\columnwidth}{cccccc}
\hline
~$y_0$~ & ~$y_2$~ & ~$y_3$~ & ~$y_4$~ & ~$y_5$~ & Experiment\\
\hline
340.6~ & 325.0~ & 321.8~ & 321.0~& 322.6~ & 332.6 $\pm$ 0.7 \\
\hline
\end{tabularx}
\caption{ The $p(n,\gamma)d$ cross section $\sigma_{np}$ in mb for thermal neutrons (corresponding to $E=1.2625\times10^{-08}$~MeV) calculated using $\chi$EFT potentials at various orders. The experimental result is from Ref.~\cite{cokinos-melkonian}.
\label{tab:threshold}}
\end{table}
\end{center}
\begin{figure*}[th]
\centering
\includegraphics[width=0.8\textwidth]{npdgamma_predictions.pdf}
\caption{$\chi$EFT predictions for the $np\leftrightarrow d\gamma$ cross section and related observables at BBN energies. (a) The product of $p(n,\gamma)d$ cross section and the neutron speed versus the energy of the neutron. (b) The deuteron photodissociation cross section as a function of the photon energy in the rest frame of the deuteron. (c) The photon analyzing power for photodissociation versus its energy. Experimental data are from Refs.~\cite{suzuki}~(triangles), \cite{nagai}~(circle), \cite{hara}~(crosses), \cite{moreh}~(square) and \cite{tornow}~(diamonds). Beam-energy resolution errors are not shown. Purple hexagons are calculated using pionless EFT results of Ref.~\cite{rupak}.
}
\label{fig:compare_to_expt}
\end{figure*}
In Fig.~\ref{fig:compare_to_expt}, we compare our $\chi$EFT predictions at different orders for various observables related to $np\leftrightarrow d\gamma$ with the pionless EFT results of Ref.~\cite{rupak} and available experimental data at energies relevant for BBN and beyond. The $\chi$EFT predictions agree with the experimental data for all orders. Since Ref.~\cite{rupak} uses the experimental cross section given in Table~\ref{tab:threshold} as input, their $M1$ contribution is larger than our $\chi$EFT predictions at low energies.
\section{Gaussian process model for correlated EFT errors}
\label{sec:gpintro}
Following the formalism introduced by Melendez \emph{et al.}~\cite{Melendez:2019izc}, we consider an observable $y$ as a function of the kinematic variable $p$, which, in our case, is the $np$ relative momentum. The order-$k$ EFT prediction is written as
\begin{equation}
y_k(p) = y_\mathrm{ref}(p) \sum_{n=0}^k c_n(p) \, [Q(p)/\Lambda]^n\,,
\end{equation}
and its EFT truncation error as
\begin{equation}
\label{eq:trunc_err}
\delta y_k(p) = y_\mathrm{ref}(p) \sum_{n=k+1}^\infty c_n(p) \, [Q(p)/\Lambda]^n\,.
\end{equation}
Here $y_\mathrm{ref}(p)$ is a dimensionful quantity that sets the overall scale such that the \emph{a priori} unknown dimensionless coefficients $c_n(p)$ are smooth naturally-sized (\emph{i.e.,} of order 1) curves, provided that the EFT is converging in the expected manner. The Bayesian model for EFT error quantification seeks to build upon this prior knowledge with the data, the values of $c_{n\leq k}(p)$ evaluated from order-by-order EFT calculations up through order $k$, to refine our expectations for $c_{n>k}(p)$, and thereby obtain an estimate for $\delta y_k(p)$. Refs.~\cite{Furnstahl:2014xsa,Furnstahl:2015rha,Wesolowski:2015fqa,Melendez:2017phj,Wesolowski:2018lzj,Melendez:2019izc} developed a pointwise error model that is applicable when (i) $y$ is a number and not a function of one or more kinematic variables, or (ii) $y(p)$ is analyzed at only one value of $p$ or at multiple values of $p$ that are sufficiently far apart such that the $y(p)$ values can be safely assumed to be uncorrelated. Ref.~\cite{Melendez:2019izc} extended this framework to study curvewise convergence by employing GPs. Below we first introduce the GP error model and then present its application to $np\leftrightarrow d\gamma$.
\paragraph*{The GP error model---}The basic idea of the error model is to use GPs to build a stochastic representation (an \emph{emulator}) to serve as a surrogate for the sequence of deterministic calculations (the \emph{simulator}) that yields order-by-order EFT predictions up through order $k$. The statistical properties of the emulator are then exploited to yield estimates for the EFT truncation errors at various orders, subsuming the contributions of even those terms that have never been calculated. Specifically, we start with the assumption that $c_n(p)$ are independent draws from an underlying GP, \emph{i.e.,} they follow a multivariate normal distribution for every finite set of $p$. The GP is completely specified by the mean $\mu$ and covariance function $\bar{c}^2r(p,p^\prime;\ell)$,
\begin{equation}
\label{eq:cpgp}
c_n(p) \,\vert\, \bar{c}^2, \ell \, \stackrel{\mathrm{iid}}{\sim} \, \mathcal{GP}[\mu,\bar{c}^2r(p,p^\prime;\ell)]\,.
\end{equation}
The correlation function $r(p,p^\prime;\ell)$ is commonly chosen to have a squared-exponential form,
\begin{equation}
r(p,p^\prime;\ell) = \exp\left[-\frac{(p-p^\prime)^2}{2\ell^2}\right]\,.
\end{equation} The correlation length $\ell$, the mean $\mu$ and the marginal variance $\bar{c}^2$ are the hyperparameters of the GP, which are learned from the training data set that comprises order-by-order EFT calculations up through order $k$. Remarkably, it follows from Eq.~\eqref{eq:cpgp} that the truncation error defined by Eq.~\eqref{eq:trunc_err} has the distribution
\begin{equation}
\label{eq:trunc_err_distr}
\delta y_k(p) \,\vert\, \bar{c}^2, \ell, Q(p), \Lambda \sim
\mathcal{GP}[m_{\delta k}(p),\bar{c}^2R_{\delta k}(p,p^\prime;\ell)]\,,
\end{equation}
where
\begin{equation}
\label{eq:mean_trunc_err}
m_{\delta k}(p) = \frac{y_\mathrm{ref}(p)}{\Lambda^k}\frac{Q(p)^{k+1}}{\Lambda-Q(p)}\,\mu\,,
\end{equation}
and
\begin{align}
\label{eq:corr_trunc_err}
R_{\delta k}(p,p^\prime;\ell) = \frac{y_\mathrm{ref}(p)y_\mathrm{ref}(p^\prime)}{\Lambda^{2k}}\frac{[Q(p)Q(p^\prime)]^{k+1}}{\Lambda^2-Q(p)Q(p^\prime)}\,r(p,p^\prime;\ell)\,.
\end{align}
With point estimates for $\ell$ and $Q(p)$, the normal-inverse-$\chi^2$ prior serves as the conjugate prior for the hyperparameters $(\mu,\bar{c}^2)$ of the Gaussian processes above, \emph{i.e.,} their Bayesian posteriors can be derived analytically and have the same functional forms as the priors (see Ref.~\cite{Melendez:2019izc}). The assumptions made above are known to impose certain limitations~\cite{bastos_ohagan}; however, their validity can be assessed from several diagnostic metrics on the validation data set (see below).
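For reference, Eqs.~\eqref{eq:mean_trunc_err} and \eqref{eq:corr_trunc_err} can be evaluated directly once point estimates of the hyperparameters are available, as in the following Python sketch; the {\tt gsum} package of Ref.~\cite{Melendez:2019izc} implements this machinery in full generality, so the sketch is only illustrative.

\begin{verbatim}
import numpy as np

def truncation_error_gp(p, y_ref, Q, k, mu, cbar2, ell, Lam):
    # Mean and covariance of delta y_k on the grid p, given point
    # estimates of (mu, cbar2, ell); q = Q/Lam is the dimensionless
    # expansion parameter and r is the squared-exponential correlation.
    q = Q / Lam
    r = np.exp(-0.5 * (p[:, None] - p[None, :])**2 / ell**2)
    scale = y_ref * q**(k + 1)
    mean = scale / (1.0 - q) * mu
    cov = cbar2 * np.outer(scale, scale) / (1.0 - np.outer(q, q)) * r
    return mean, cov
\end{verbatim}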
\paragraph*{Application to $\sigma_{np}$ and $\sigma_{\gamma d}$---}We now apply the tools described above to the $np\leftrightarrow d\gamma$ cross section.
We begin by partitioning the data, \emph{i.e.,} the order-by-order $\chi$EFT results depicted in Fig.~\ref{fig:compare_to_expt}, into training and validation sets. This data set consists of 32 values of $p$ in 1~MeV increments, starting from the smallest value, which corresponds to thermal-neutron $p(n,\gamma)d$. These values span a range of approximately 0 to 1~MeV~(2~MeV) in $E$~($E_n$), and encompass the $4~\mathrm{MeV}<p<14~\mathrm{MeV}$ interval most relevant for BBN. We use every fifth point for training and the rest for validation. We will see further below that our maximum \emph{a posteriori} (MAP) value of $\ell$ vindicates this grid and training-validation splitting. Similar results are obtained for other reasonable choices. As in Ref.~\cite{Melendez:2019izc}, we take $\Lambda=600$~MeV and $Q(p) = (p^8+m_\pi^8)/(p^7+m_\pi^7)\,,$
which give an expansion parameter of approximately 0.23 with a very weak dependence on $p$. We take $y_\mathrm{ref}(p)=y_0(p)$. The coefficient $c_5(p)$ turns out to be rather unnaturally sized, particularly at smaller values of $p$. We therefore exclude it from the statistical analysis. This leaves $c_{2,3,4}(p)$ for GP modeling. Finally, we need a value for the \emph{nugget} $\sigma_n^2$, the variance of the Gaussian white noise added to the data to stabilize matrix inversions during fitting. We find that $\sigma_n^2>10^{-8}$ is required to avoid singularities and that $\sigma_n^2>10^{-3}$ can introduce noise comparable in size to our precision goal of 0.1--0.2\% on the predicted cross sections. We therefore pick $\sigma_n^2=10^{-5}$, as this value produces the best-performing model on the validation set under the diagnostic criteria discussed below.
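In practice, the coefficients follow from successive differences of the order-by-order predictions, $c_n(p)=[y_n(p)-y_{n-1}(p)]/\{y_\mathrm{ref}(p)\,[Q(p)/\Lambda]^n\}$; the following Python sketch illustrates this extraction (variable names are ours).

\begin{verbatim}
import numpy as np

m_pi, Lam = 138.0, 600.0   # pion mass and breakdown scale [MeV]

def q(p):
    # Dimensionless expansion parameter Q(p)/Lambda
    return (p**8 + m_pi**8) / ((p**7 + m_pi**7) * Lam)

def coefficients(y_list, orders, y_ref, p):
    # y_list: order-by-order predictions [y_0, y_2, y_3, y_4] on the grid p
    # orders: matching powers [0, 2, 3, 4] (the n = 1 correction vanishes)
    cs, y_prev = [], np.zeros_like(y_list[0])
    for y_n, n in zip(y_list, orders):
        cs.append((y_n - y_prev) / (y_ref * q(p)**n))
        y_prev = y_n
    return cs
\end{verbatim}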
\section{Model calibration, validation and prediction}
\label{sec:gpmodel}
\begin{figure*}[th]
\centering
\includegraphics[width=0.9\textwidth]{npdgamma_gp_modeling.pdf}
\caption{GP modeling of the $\chi$EFT expansion coefficients and its diagnostics. (a) The simulators (solid lines) along with the corresponding GP emulators (dashed lines) and their $2\sigma$ intervals (bands). The training data are denoted by filled circles; 4 validation points are located uniformly between each adjacent pair of training points. (b) The Mahalanobis distances compared to the mean (interior line), $50\%$ (box) and $95\%$ (whiskers) credible intervals of the reference distribution. (c) The pivoted Cholesky diagnostics versus the index along with $95\%$ credible intervals (gray lines). (d) The credible interval diagnostics with $1\sigma$ (dark gray) and $2\sigma$ (light gray) bands estimated by sampling 1000 GP emulators.}
\label{fig:diagnostics}
\end{figure*}
We now present the results of the GP modeling, obtained using the package {\tt gsum}~\cite{Melendez:2019izc}. Fig.~\ref{fig:diagnostics}(a) shows the coefficients $c_n(p)$ for the observable $\sigma_{np}$, along with their GP emulators. The MAP value of $\ell$ is found to be $\ell_\mathrm{MAP}=10.4$~MeV. Our choices of the grid and training-validation splitting place the training points at 5~MeV intervals. This results in 3 training data points within one correlation length, which is optimal for GP modeling. Furthermore, our data set spans approximately $3\ell_\mathrm{MAP}$, which, as a rule of thumb, is the range beyond which the data points become uncorrelated.
Fig.~\ref{fig:diagnostics}(a) provides a beautiful visual indication that the GP model has done an excellent job of emulating the actual $\chi$EFT calculations. However, detailed diagnostic checks are needed to quantitatively assess the adequacy of the model. To this end, we use the diagnostic metrics proposed in Ref.~\cite{bastos_ohagan} and first implemented in the EFT error model by Ref.~\cite{Melendez:2019izc}.
Fig.~\ref{fig:diagnostics}(b) shows the squared Mahalanobis distances, defined as
\begin{equation}
\mathrm{D}^2_\mathrm{MD} = (\mathbf{f}-\mathbf{m})^\mathrm{T} K^{-1} (\mathbf{f}-\mathbf{m})\,,
\end{equation}
where we have used the notation $\mathbf{f}$ for the vector of validation data points, $\mathbf{m}$ for the vector of means of the emulator at these points and $K$ for its covariance matrix. $\mathrm{D}^2_\mathrm{MD}$ is a generalization of the sum of squared residuals to the case of data points that are correlated across the independent variable. Values much larger than its reference distribution, a $\chi^2$, indicate conflict between the emulator and the simulator. Values much smaller than the reference, as we see in the case of $c_2$ to some extent, indicate that, given statistical fluctuations, there is an unusually close match between the emulator and the simulator.
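In code, this diagnostic amounts to a single linear solve, as in the following illustrative Python sketch ({\tt gsum} provides it together with its reference distribution).

\begin{verbatim}
import numpy as np

def mahalanobis_sq(f, m, K):
    # Squared Mahalanobis distance between the validation data f and the
    # emulator mean m, with emulator covariance matrix K.
    resid = f - m
    return float(resid @ np.linalg.solve(K, resid))
\end{verbatim}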
We now look at a more informative metric, the pivoted Cholesky diagnostic $\mathrm{D}_\mathrm{PC}$. For a ``correct" emulator, it returns draws from a standard Gaussian at every index, where the indices represent the validation points arranged such that the first element is the one with the largest predictive variance, the second element is the one with the largest variance conditioned on the first element, and so on. A group of unusually large or small $\mathrm{D}_\mathrm{PC}$ values across all indices indicates a misestimated variance, whereas a group of unusually large or small $\mathrm{D}_\mathrm{PC}$ values in the latter part of the sequence indicates an inappropriate correlation structure. Overall, Fig.~\ref{fig:diagnostics}(c) shows that the points are distributed as expected, \emph{e.g.,} 4 out of the 72 points lie outside the $-2<\mathrm{D}_\mathrm{PC}< 2$ range. From the spread of the corresponding data points, we also notice a slight indication that the variance on $c_2$ ($c_3$) might have been somewhat overestimated (underestimated).
The credible interval diagnostic involves constructing uncertainty bands at each order and checking whether they actually encompass the correction that enters at the next order. The claimed $(1-\alpha)100\%$ credible intervals are then plotted against the percentage of validation data points found within the interval---emulators that output credible intervals containing too few data points compared to the reference distribution are overconfident, and those that contain too many are underconfident. For uncorrelated data points, the reference distribution is a binomial. For correlated data points, the reference distribution is numerically estimated by sampling a large number of emulators from the underlying process. Fig.~\ref{fig:diagnostics}(d) shows that the model is performing as expected and that it is important to account for correlations when assigning truncation errors.
Having demonstrated that the coefficients $c_{2,3,4}$ can be appropriately described by a GP, we can now use Eq.~\eqref{eq:trunc_err_distr} to compute the truncation errors. We list the $np\rightarrow d\gamma$ cross section values at order $k=4$ along with their $1\sigma$ truncation errors at several energies in Table~\ref{tab:uncertainty}. We note that these errors differ from, and vary much more smoothly with $E$ than, naive estimates~\cite{PhysRevLett.115.122301} based on multiplying the largest order-to-order shift with the appropriate power of the expansion parameter in a pointwise manner at each value of $E$. In Fig.~\ref{fig:uncertainty}(a), we plot $\sigma_{np}v_n$, for which we showed order-by-order results earlier in Fig.~\ref{fig:compare_to_expt}(a), versus $E_n$. This quantity is proportional to the reaction rate in BBN. The bands represent 2$\sigma$ truncation errors, \emph{i.e.,} 95\% Bayesian credible intervals for $\sigma_{np}v_n$. In Fig.~\ref{fig:uncertainty}(b), we show these bands for $\sigma_{\gamma d}$ and compare them with the photodissociation data shown earlier in Fig.~\ref{fig:compare_to_expt}(b). Reassuringly, the experimental data as well as the highest-order theoretical calculation, $y_5$, are compatible with the truncation error estimates.
\begin{center}
\begin{table}[htbp]
\begin{tabularx}{0.75\columnwidth}{cc}
\hline
$E$ [MeV] &~~~ $\sigma_{np}$ [mb] \\
\hline
~~~$1.262500\times10^{-08}$ &~~~ $321.009 \pm 0.71496$ \\
~~~$9.607513\times10^{-03}$ &~~~ $0.32739 \pm 0.00073$ \\
~~~$3.838601\times10^{-02}$ &~~~ $0.12762 \pm 0.00029$ \\
~~~$8.633551\times10^{-02}$ &~~~ $0.06853 \pm 0.00015$ \\
~~~$1.534560\times10^{-01}$ &~~~ $0.04658 \pm 0.00010$ \\
~~~$2.397475\times10^{-01}$ &~~~ $0.03792 \pm 0.00008$ \\
~~~$3.452100\times10^{-01}$ &~~~ $0.03464 \pm 0.00007$ \\
~~~$4.698435\times10^{-01}$ &~~~ $0.03368 \pm 0.00007$ \\
~~~$6.136480\times10^{-01}$ &~~~ $0.03373 \pm 0.00007$ \\
~~~$7.766235\times10^{-01}$ &~~~ $0.03414 \pm 0.00007$ \\
~~~$9.587699\times10^{-01}$ &~~~ $0.03461 \pm 0.00007$ \\
\hline
\end{tabularx}
\caption{$\chi$EFT predictions at order $k=4$ for the $np\rightarrow d\gamma$ cross section at $np$ relative energy $E$, along with their $1\sigma$ errors from the truncation of the $\chi$EFT potential.}
\label{tab:uncertainty}
\end{table}
\end{center}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\textwidth]{photodissociation_uncertainty.pdf}
\caption{The $2\sigma$ truncation error bands on the $\chi$EFT predictions $y_k$ at $k=2,3,4$ along with the prediction $y_5$ and data from Fig.~\ref{fig:compare_to_expt}. $(a)$ The product of $p(n,\gamma)d$ cross section and the neutron speed versus the energy of the neutron. $(b)$ The deuteron photodissociation cross section as a function of the photon energy in the rest frame of the deuteron. }
\label{fig:uncertainty}
\end{figure*}
The uncertainties quoted in Table~\ref{tab:uncertainty} and Fig.~\ref{fig:uncertainty}, which amount to about 0.2$\%$, only include truncation error in the $\chi$EFT potential. The full theory uncertainty also comprises statistical error from fitting the LECs to experimental data. The framework we have adopted allows one to also incorporate fitting uncertainties on the LECs that appear up to the EFT order we have calculated~\cite{Melendez:2019izc}. Another missing source of uncertainty, which is important in the $M1$-dominated regime, is the truncation of the current. Inclusion of the $M1$ operator at order $(Q/\Lambda)^1$ is crucial for obtaining agreement with the experimental value for threshold capture given in Table~\ref{tab:threshold}~\cite{piarulli,Phillips:2016mov}, although it introduces several new LECs and thus poses significant challenges for rigorous uncertainty analysis. The fitting strategy that uses minimal assumptions about the short-distance behavior of the current operator, among several explored by Ref.~\cite{piarulli}, is the one that constrains the LECs $d_{1,2}^V$ simultaneously to $\sigma_{np}(E=1.2625\times10^{-08}~\mathrm{MeV})$ and the isovector combination of the $A=3$ magnetic moments. However, it was found that this yields unnatural values for $d_{1,2}^V$. This fine-tuning can be mitigated by including the theory uncertainty we calculated in this paper, as well as the experimental error, in the fit. Such a strategy for performing parameter estimation with $\chi$EFT truncation error included as a guard against overfitting was recently successfully pursued by Ref.~\cite{Wesolowski:2021cni} in the context of constraining 3N interactions from properties of $A=3,4$ nuclei. A calculation of $np\leftrightarrow d\gamma$ along these lines is a subject for future work.
\section{Summary and outlook}
\label{sec:conclusion}
We performed the first $\chi$EFT calculations of the energy-dependent $np\leftrightarrow d\gamma$ cross section at low energies, including the range relevant to BBN, and the first Bayesian analysis of $\chi$EFT truncation error to a nuclear reaction cross section. Working with fixed one- and two-body electromagnetic current operators, we studied the convergence of this observable in the EFT expansion of the potential. By harnessing recent progress in Bayesian analysis of EFT uncertainty,
we were able to provide statistical estimates, amounting to 0.2$\%$, for the theory uncertainty that stems from truncation of the $\chi$EFT potential.
At the $\chi$EFT order up to which we work, our calculations are pure predictions as no new LECs enter that need to be fixed.
Inclusion of the next order in the current operator adds new parameters that are not well constrained at present and will most likely require fitting to electromagnetic observables in $A=2,3$ systems. To make predictions for this cross section with sub-percent precision even at threshold, we will therefore need further investigation into the nature of the current operator at short distances, together with calculations of electromagnetic observables for $A=3$ nuclei with NN and 3N interactions truncated consistently. Finally, uncertainty analysis in theoretical calculations of deuterium burning processes will be an important development, because these strongly affect the primordial deuterium abundance and there is currently some discrepancy between theory and experiment, most notably for ${}^2$H$(p,\gamma){}^3$He~\cite{Pisanti:2020efz}.
\section*{Acknowledgements}
We are grateful to Jordan Melendez, Richard J Furnstahl and Daniel R Phillips for fruitful discussions. We would also like to thank Daniel R Phillips for a critical reading of the manuscript, Evgeny Epelbaum for providing us with computer programs for chiral potentials used in this work and the BUQEYE collaboration for making the library {\tt gsum} publicly accessible. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), through the
Cluster of Excellence [Precision Physics, Fundamental Interactions, and Structure of Matter] (PRISMA$^+$ EXC 2118/1)
within the German Excellence Strategy (Project ID 39083149).
\section{\label{sec:intro}Introduction}
A number of popular extensions of the Standard
Model (SM) of particle physics predict the existence of doubly charged scalar
particles $X^{\pm\pm}$. These include the Type-II seesaw \cite{Schechter:1980gr,
Magg:1980ut,Cheng:1980qt,Lazarides:1980nt,Mohapatra:1980yp,Lindner:2016bgg} and
the Zee-Babu \cite{Zee:1980ai,Babu:1988ki} models of neutrino masses, the
Left–Right model \cite{Pati:1974yy,Mohapatra:1974hk,Senjanovic:1975rk},
the Georgi–Machacek model \cite{Georgi:1985nv,Chanowitz:1985ug,Gunion:1989ci,
Gunion:1990dt,Ismail:2020zoz}, the 3-3-1 model \cite{CiezaMontalvo:2006zt,
Alves:2011kc} and the little Higgs model \cite{ArkaniHamed:2002qx}.
Doubly charged scalars appear also in simplified models, in which one merely
adds such scalars in a gauge invariant way in various representations of the
SM gauge group $SU(2)_L$ to the particle content of the SM.
The Lagrangian of the model is then complemented by
gauge-invariant interaction terms involving these new fields
\cite{Delgado:2011iz,Alloul:2013raa}.
Doubly charged scalars may be long-lived or even stable
\cite{Alloul:2013raa,Alimena:2019zri,Acharya:2020uwc,Hirsch:2021wge}.
As the simplest example, one can add to the SM an uncolored
$SU(2)_L$-singlet scalar field $X$ with hypercharge
$Y=2$ \cite{Alloul:2013raa}. The corresponding doubly charged particles will
couple to the neutral gauge bosons $\gamma$ and $Z^0$ and may also
interact with the SM Higgs boson $H$ through the $(H^\dag H)(X^\dag X)$
term in the Higgs potential. Gauge invariance allows, in addition, the Yukawa
coupling of $X$ to right-handed charged leptons, $h_X l_R l_R X+h.c.$
This is the only coupling that makes the $X$-particles unstable
in this model; they will be long-lived if the Yukawa coupling constants
$h_X$ are small. The Yukawa coupling of $X$ may be forbidden by e.g.\ $Z_2$
symmetry $X\to -X$, in which case the $X$-scalars will be stable.
Doubly charged scalar particles are being actively searched for
experimentally, but up to now have not been discovered.
For discussions of current experimental constraints on the doubly charged
particles and of the sensitivities to them of future experiments
see \cite{Alloul:2013raa,Alimena:2019zri,Fuks:2019clu,Padhan:2019jlc,
Acharya:2020uwc,Hirsch:2021wge,Dev:2021axj} and references therein.
In addition to interesting particle-physics phenomenology, doubly charged
scalars may have important implications for cosmology. In this paper we
will, however, consider another aspect of their possible existence.
As we shall demonstrate, doubly charged particles can catalyze fusion of
light nuclei, with potentially important applications for energy production.
The negatively charged $X^{--}$
(which we will hereafter simply refer to as
$X$) can form atomic bound systems with the nuclei of light elements, such as
deuterium, tritium or helium.
One example is the antihelium-like $(ddX)$ atom
with the $X$-particle as the ``nucleus'' and two deuterons in the 1$s$ atomic
state instead of two positrons. (Here and below we use the brackets to denote
states bound by the Coulomb force). As $X$ is expected to be very heavy, the
size of such an atomic system will in fact be determined by the deuteron
mass $m_d$ and will be of the order of the Bohr radius of the $(dX)$
ion, $a_d\simeq 7.2$ fm. Similar small-size atomic systems $(N\!N'X)$ can
exist for other light nuclei $N$ and $N'$ with charges $Z\le 2$.
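The quoted size follows from the hydrogen-like Bohr-radius formula with nuclear charge $Z_X=2$ and reduced mass $\simeq m_d$ (a good approximation since $m_X\gg m_d$):
\begin{equation*}
a_d=\frac{\hbar^2}{Z_X e^2 m_d}=\frac{\hbar c}{Z_X\alpha\, m_d c^2}
\simeq\frac{197.3~\mathrm{MeV\,fm}}{2\times(1/137)\times 1875.6~\mathrm{MeV}}
\simeq 7.2~\mathrm{fm}.
\end{equation*}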
Atomic binding of two nuclei to an $X$-particle brings them so close together
that this essentially eliminates the necessity for them to overcome the
Coulomb barrier in order to undergo fusion. The exothermic fusion reactions
can then occur unhindered and do not require high temperature or pressure.
The $X$-particle is not consumed in this process and can then facilitate
further nuclear fusion reactions, acting thus as a catalyst.
This $X$-catalyzed fusion mechanism is to some extent similar to muon
catalyzed fusion ($\mu$CF) (\cite{Frank,Sakharov,Zeldovich1,Alvarez:1957un,
Jackson:1957zza,Zeldovich2,Zeldovich3,bogd}, see \cite{Zeldovich4,GerPom,
bracci,Breunlich:1989vg,Ponomarev:1990pn,Zeldovich:1991pbl,bogd2} for
reviews), in which the role of the catalyst is played by singly negatively
charged muons. $\mu$CF of hydrogen isotopes was once considered a prospective
candidate for cold fusion. However, already rather early in its studies it
became clear that $\mu$CF suffers from a serious shortcoming that
may prevent it from being a viable mechanism of energy production. In the
fusion processes, isotopes of helium are produced,
and there is a chance that they will capture on their atomic orbits
the negative muons present in the final state of the fusion reactions.
Once this happens, muonic ions ($^3$He$\mu)$ or ($^4$He$\mu)$ are formed,
which, being positively charged, cannot catalyze further fusion reactions.
This effect is cumulative; sticking to helium nuclei thus eventually
removes the muons from the catalytic process, i.e.\ catalytic
poisoning occurs.
Out of all $\mu$CF reactions, the $d-t$ fusion has the smallest muon sticking
probability, $\omega_s\simeq 10^{-2}$. This means that a single muon will
catalyze $\sim$100 fusion reactions before it gets removed from the catalytic
process. The corresponding total produced energy is $\sim$1.7 GeV, which is
at least a factor of five smaller than the energy
needed to produce and handle one muon \cite{Jackson:1957zza}.
In addition, the muon's short lifetime makes it impractical to try to dissolve
the produced ($^3$He$\mu)$ or ($^4$He$\mu)$ bound states by irradiating them
with particle beams in order to reuse the released muons.
These considerations have essentially killed the idea of using $\mu$CF
for energy production.
There were discussions in the literature of
the possibility of energy generation through the catalysis of
nuclear fusion by hypothetic heavy long-lived or stable singly
charged \cite{Zeldovich3,Rafelski:1989pz,
Ioffe:1979tv,Hamaguchi:2006vp} or fractionally charged
\cite{Zweig:1978sb} particles. However, it has been shown in
\cite{Zeldovich3,Ioffe:1979tv,Hamaguchi:2006vp} that these processes
suffer from the same problem of catalytic poisoning
as $\mu$CF, and therefore they cannot be useful sources of energy.
In particular, in ref.~\cite{Ioffe:1979tv} it was demonstrated that
reactivating the catalyst particles by irradiating their atomic bound
states with helium nuclei using neutron beams, as suggested in
\cite{Zweig:1978sb}, would require neutron fluxes about nine orders of
magnitude higher than those currently produced by the most powerful nuclear
reactors.
In this paper we consider the fusion of light nuclei catalyzed by doubly
negatively charged $X$-particles and demonstrate that, unlike $\mu$CF, this
process may be a viable source of energy. We analyze $X$-catalyzed fusion
($X$CF) in deuterium environments and show that the catalytic poisoning may
only occur in this case due to the sticking of $X$-particles
to $^6$Li nuclei, which are produced in the fusion reactions of the third
stage. The corresponding sticking probability is shown to be very low,
and, before getting bound to $^6$Li, each $X$-particle can catalyze
$\sim 3.5\cdot 10^{9}$ fusion cycles, producing $\sim 7\cdot 10^{4}$ TeV
of energy.
To the best of the present author's knowledge, nuclear fusion
catalyzed by doubly charged particles has never been considered before.
\section{\label{sec:xcg}$X$-catalyzed fusion
in deuterium}
We will be assuming that $X$-particles interact only electromagnetically,
which in any case should be a very good approximation at low energies relevant
to nuclear fusion. Let $X$-particles be injected into pressurized D$_2$ gas or
liquid deuterium. Being very heavy and negatively charged,
the $X$-particles can easily penetrate D$_2$ molecules and D atoms,
dissociating the former and ionizing the latter and losing energy
on the way. Once the velocity of an $X$-particle becomes comparable to
atomic velocities ($v\simeq 2e^2/\hbar\sim 10^{-2}c$), it captures a deuteron
on a highly excited atomic level of the ($dX$) system, which then very
quickly de-excites to its ground state, mostly through electric dipole
radiation and inelastic scattering on the neighboring deuterium atoms.
As the ($dX$) ion is negatively charged, it swiftly picks up another
deuteron to form the ($ddX$) atom. The characteristic time of this
atomic phase of the $X$CF process is dominated by the $X$ moderation time
and is $\sim 10^{-10}$\,s at liquid hydrogen density
$N_0=4.25\times 10^{22}$ nuclei/cm$^3$ and $T\simeq 20$K and
about $10^{-7}$\,s in deuterium gas at $0^\circ$C and pressure of one bar
(see Appendix~\ref{sec:Xatom}).
After the $(ddX)$ atom has been formed, the deuterons
undergo nuclear fusion through several channels, see below.
Simple estimates show that the fusion rates are many orders of magnitude
faster than the rates of the atomic formation processes.
That is, once ($ddX$) [or similar ($N\!N'X$)] atoms are formed, the
fusion occurs practically instantaneously. The time scale of $X$CF is
therefore determined by the atomic formation times.
The rates of the fusion reactions, however, determine the branching ratios of
various fusion channels, which are important for the kinetics of the
catalytic cycle.
At the first stage of $X$CF in deuterium two deuterons fuse to produce
$^3$He, $^3$H or $^4$He. In each case there is at least one channel
in which the final-state $X$ forms an atomic bound state with one of the
produced nuclei. Stage I fusion reactions are
\begin{align}
&(ddX)\to {^3\rm He}+n+X &(Q=2.98~{\rm MeV},~29.1\%)
\tag{1a} \label{eq:r1a}\\
&(ddX)\to ({\rm ^3He}X)+n &(Q=3.89~{\rm MeV},
~19.4\%) \tag{1b} \label{eq:r1b}
\end{align}
\vglue-8mm
\begin{align}
&(ddX)\to {\rm ^3H}+p+X &&(Q=3.74~{\rm MeV},~34.4\%)
\tag{2a} \label{eq:r2a}\\
&(ddX)\to ({\rm ^3H}X)+p &&(Q=4.01~{\rm MeV}, ~6.2\%)
\tag{2b} \label{eq:r2b}\\
&(ddX)\to {\rm ^3H}+(pX) &&(Q=3.84~{\rm MeV},
~0.5\%)
\tag{2c} \label{eq:r2c}
\end{align}
\vglue-8mm
\begin{align}
&(ddX)\to {\rm ^4He}+\gamma+X &&(Q=23.6~{\rm MeV}, ~4\!\cdot\!10^{-9})
\tag{3a} \label{eq:r3a}\\
&(ddX)\to ({\rm ^4He}X)+\gamma &&(Q=24.7~{\rm MeV}, ~3\!\cdot\! 10^{-8})
\tag{3b} \label{eq:r3b}\\
&(ddX)\to {\rm ^4He}+X &&(Q=23.6~{\rm MeV}, ~10.4\%)
\tag{3c} \label{eq:r3c}
\end{align}
The parentheses show the $Q$-values and the branching ratios of the
reactions. In evaluating the $Q$-values we have taken into account
that the atomic binding of the two deuterons to $X$ in the initial state
reduces $Q$, whereas the binding to $X$ of one of the
final-state nuclei increases it. As the Bohr radii of most of the $X$-atomic
states we consider are either comparable to or smaller than the nuclear
radii, in calculating the Coulomb binding energies one has to allow for the
finite nuclear sizes. We do that by making use of a variational approach,
as described in Appendix~\ref{sec:Bind}.
The rates of reactions (\ref{eq:r1b}), (\ref{eq:r2b}), (\ref{eq:r2c}) and
(\ref{eq:r3b}) with bound $X$-particles in the final states
are proportional to the corresponding $X$-particle sticking probabilities,
$\omega_{s}$. The existence of such channels obviously affects the branching
ratios of the analogous reactions with free $X$ in the final states.
Radiative reactions (\ref{eq:r3a}) and (\ref{eq:r3b}) have tiny branching
ratios, which is related to their electromagnetic nature and to the fact that
for their $X$-less version, $d+d\to{\rm ^4He}+\gamma$, transitions
of E1 type are strictly forbidden. This comes about because the two fusing
nuclei are identical, which, in particular, means that they have the same
charge-to-mass ratio. This reaction therefore proceeds mainly
through E2 transitions \cite{bogd}. When the deuterons are bound to $X$,
the strict prohibition of E1 transitions is lifted due to possible
transitions through intermediate excited atomic states.%
\footnote{\label{fn:1}The author is grateful to M.~Pospelov for raising this
issue and suggesting an example of a route through which E1 transitions could
proceed in reaction (\ref{eq:r3b}).}
However, as shown in Appendix~\ref{sec:lift},
the resulting E1 transitions are in this case heavily hindered and their
rates actually fall below the rates of the E2 transitions.
Reaction (\ref{eq:r3c}) is an internal conversion process.
Note that, unlike for reactions (\ref{eq:r1a})--(\ref{eq:r3b}),
the $X$-less version of (\ref{eq:r3c}) does not exist: the process
$d+d\to{\rm ^4He}$ is forbidden by kinematics. For the details of the
calculation of the rate of reaction (\ref{eq:r3c}), as well as of the rates
of the other reactions discussed in this paper, see
Appendix~\ref{sec:Sfactor}.
The relevant $Q$-values of the reactions and sticking probabilities are
evaluated in Appendices~\ref{sec:Bind} and \ref{sec:sticking}, respectively.
The final states of reactions (\ref{eq:r1a}), (\ref{eq:r2a}), (\ref{eq:r3a})
and (\ref{eq:r3c}) contain free $X$-particles which are practically at rest
and can immediately capture deuterons of the medium, forming again the
($ddX$) atoms. Thus, they can again catalyze $d-d$ fusion through stage I
reactions (\ref{eq:r1a})-(\ref{eq:r3c}).
The same is also true for the $X$-particles in the final state of reaction
(\ref{eq:r2c}), which emerge bound to protons. Collisions of ($pX$)
with deuterons of the medium lead to fast replacement of the protons
by deuterons through the exothermic charge exchange reaction
$(pX)+d\to (dX)+p$ with the energy release $\sim$90~keV
(see Appendix~\ref{sec:charge}).
The produced $(dX)$ ion then picks up a deuteron to form the $(ddX)$ atom,
which can again participate in stage I reactions
(\ref{eq:r1a})-(\ref{eq:r3c}).
The situation is different for the $X$-particles
in the final states of reactions (\ref{eq:r1b}) and (\ref{eq:r2b})
forming the bound states with $^3$He and $^3$H, respectively.
They can no longer directly participate in stage I $d-d$ fusion reactions.
However, they are not lost for the fusion process: the produced
(${\rm ^3He}X$) and (${\rm ^3H}X$) can still pick up deuterons of the medium
to form the atomic bound states (${\rm ^3He}dX$) and
(${\rm ^3H}dX$), which can give rise to stage II fusion reactions, which we
will consider next.
Before we proceed, a comment is in order. While (${\rm ^3H}X$) is a singly
negatively charged ion which can obviously pick up a positively charged
deuteron to form an (${\rm ^3H}dX$) atom, (${\rm ^3He}X$) is a neutral
$X$-atom. It is not immediately obvious whether it can form a stable bound
state with $d$, which, if it exists, would be a positive ion. In the case of
the usual atomic systems, analogous (though negatively charged) states
do exist -- a well-known example is the negative ion of hydrogen H$^-$.
However, the stability of (${\rm ^3He}\,dX$) cannot be directly deduced
from the stability of H$^-$: in the latter case the two particles
orbiting the nucleus are identical electrons, whereas for
(${\rm ^3He}dX$) these are different entities -- nuclei with differing masses
and charges. Nevertheless, from the results of a general analysis of
three-body Coulomb systems carried out in \cite{Martin:1998zc,krikeb,armour}
it follows that the state (${\rm ^3He}dX$) (as well as the bound state
(${\rm ^4He}dX$) which we will discuss later on) should exist and be stable.
For additional information see Appendix~\ref{sec:posIons}.
Once (${\rm ^3He}X$) and (${\rm ^3H}X$), produced in reactions (\ref{eq:r1b})
and (\ref{eq:r2b}), have picked up deuterons from the medium and formed the
atomic bound states (${\rm ^3He}dX$) and (${\rm ^3H}dX$),
the following stage II fusion reactions occur:
\begin{align}
\!\!\!\!\!&(^3{\rm He}dX)\to {\rm ^4He}+p+X\!\!\!
&&(Q=17.4~{\rm MeV},\,94\%)
\!\!\!\tag{4a} \label{eq:r4a}\\
\!\!\!\!\!&({\rm ^3He}dX)\to ({\rm ^4He}X)+p\!\!\!
&&(Q=18.6~{\rm MeV},\,6\%) \nonumber
\!\!\!\tag{4b} \label{eq:r4b}\\
\!\!\!\!\!&({\rm ^3He}dX)\to {\rm ^4He}+(pX)\!\!\!
&&(Q=17.5~{\rm MeV},\,3\!\cdot\!10^{-4})
\!\!\!
\tag{4c} \label{eq:r4c}
\end{align}
\vglue-8.0mm
\begin{align}
\!\!\!\!\!\!&(^3{\rm H} d X)
\to {\rm ^4He}+n+X
\!\!\!&&(Q=17.3~{\rm MeV},~96\%)
\tag{5a} \label{eq:r5a}\\
\!\!\!\!\!\!&(^3{\rm H} d X)
\to ({\rm ^4He}X)+n \!\!\!
&&(Q=18.4~{\rm MeV}, ~4\%)
\tag{5b} \label{eq:r5b}
\end{align}
In these reactions the vast majority of the $X$-particles bound to $^3$He and $^3$H are
liberated; the freed $X$-particles can again form $(ddX)$ states and catalyze
stage I fusion reactions (\ref{eq:r1a})-(\ref{eq:r3c}). The same applies to
the final-state $X$-particles bound to protons, as was discussed above.
The remaining relatively small fraction of $X$-particles come out of
stage II reactions in the form of $({\rm ^4He}X)$ atoms.
Together with a very small amount of $({\rm ^4He}X)$ produced in
reaction (\ref{eq:r3b}), they pick up deuterons from the medium and form
$({\rm ^4He}dX)$ states, which undergo stage III $X$CF reactions:
\begin{align}
&(^4{\rm He} d X)\to {\rm ^6Li}+\gamma+X
\!\!\!
\!\!\!
&&(Q=0.32~{\rm MeV},
~10^{-13})
\tag{6a} \label{eq:r6a}\\
&({\rm ^4He} d X)\to ({\rm ^6Li}X)+\gamma
\!\!\!
\!\!\!
&&(Q=2.4~{\rm MeV}, ~\,2\!\cdot\! 10^{-8})
\tag{6b} \label{eq:r6b}\\
&({\rm ^4He} d X)\to {\rm ^6Li}+X \!\!\!
\!\!\!\!\!&&(Q=0.32~{\rm MeV},
\,\simeq100\%)
\tag{6c} \label{eq:r6c}
\end{align}
In these reactions, almost all previously bound $X$-particles are liberated
and are free to catalyze again nuclear fusion through $X$CF reactions of
stages I and II. The remaining tiny fraction of $X$-particles end up being
bound to the produced ${\rm ^6Li}$ nuclei through reaction (\ref{eq:r6b}).
However, as small as it is, this fraction is very important for the
kinetics of $X$CF. The bound states $({\rm ^6Li}X)$ are ions of
charge +1; they cannot form bound states with positively charged nuclei and
thus cannot participate in further $X$CF reactions. That is, with their formation
catalytic poisoning occurs and the catalytic process stops.
{}From the branching ratios of stage I, II, and III $X$CF reactions one
finds that the fraction of the initially injected $X$-particles which
end up in the $({\rm ^6Li}X)$ bound state is $\sim 2.8\times 10^{-10}$.
This means that each initial $X$-particle, before getting stuck to a $^6$Li
nucleus, can catalyze $\sim 3.5\times 10^{9}$ fusion cycles.
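This estimate can be reconstructed directly from the branching ratios listed for reactions (\ref{eq:r1a})--(\ref{eq:r6c}); the short Python sketch below is illustrative:
\begin{verbatim}
# Probability per catalytic cycle that X ends up bound to 6Li.
br_1b, br_2b, br_3b = 0.194, 0.062, 3e-8  # stage I: (3HeX), (3HX), (4HeX)
br_4b, br_5b = 0.06, 0.04                 # stage II sticking to 4He
br_6b = 2e-8                              # stage III sticking to 6Li
p_He4X = br_1b * br_4b + br_2b * br_5b + br_3b
p_stick = p_He4X * br_6b
print(p_stick, 1 / p_stick)   # ~2.8e-10 and ~3.5e9 cycles
\end{verbatim}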
Direct inspection shows that, independently of which sub-channels were
involved, the net effect of stage I, II and III
$X$CF reactions is the conversion of four deuterons to
a $^6$Li nucleus, a proton and a neutron:
\begin{equation}
4d\to {\rm ^6Li}+p+n+23.1\,{\rm MeV}\,.
\tag{7} \label{eq:7}
\end{equation}
Therefore, each initial $X$-particle will produce about $7\times 10^4$ TeV of
energy before it gets knocked out of the catalytic process. It should be
stressed that this assumes that the $X$-particles are sufficiently
long-lived to survive for $3.5\times 10^9$ fusion cycles. From our
analysis it follows that the slowest processes in the $X$CF cycle are the
formation of positive ions $({\rm ^3He}dX)$ and $({\rm ^4He}dX)$.
The corresponding formation times are estimated to be of the order
of $10^{-8}$\,s (see Appendix~\ref{sec:posIons}).
Therefore, for the $X$-particles to survive for $3.5\times 10^9$ fusion
cycles and produce $\sim 7\times 10^4$ TeV of energy, their lifetime $\tau_X$
should exceed $\sim 10^2$\,s. For shorter lifetimes the energy produced by a
single $X$-particle before it gets stuck to a $^6$Li nucleus
is reduced accordingly.
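In macroscopic units (a sketch based on the numbers quoted above):
\begin{verbatim}
eV = 1.602176634e-19            # J
E_per_X = 7e4 * 1e12 * eV       # ~7e4 TeV per X-particle, in joules
print(E_per_X)                  # ~1.1e-2 J, i.e. ~10 mJ
cycles, t_cycle = 3.5e9, 1e-8   # cycles and slowest-step time in s
print(cycles * t_cycle)         # ~35 s, hence tau_X >~ 1e2 s
\end{verbatim}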
\section{\label{sec:acquis} Acquisition and reactivation of $X$-particles}
The amount of energy produced by a single $X$-particle has to be compared
with energy expenditures related to its production.
$X$-particles can be produced in pairs in accelerator experiments, either in
$l^+l^-$ annihilation at lepton colliders or through the Drell-Yan processes
at hadronic machines. Although the energy $E\sim 7\times 10^4$ TeV
produced by one $X$-particle before it gets knocked out of the catalytic
process is quite large on a microscopic scale, it is only about
10\,mJ. This means that
$\gtrsim 10^{8}$ $X$-particles are needed to generate 1\,MJ of energy.
While colliders are better suited for the discovery of new particles,
fixed-target accelerator experiments are more appropriate for the production
of large numbers of $X$-particles. For such experiments the beam energy must
exceed the mass of the $X$-particle significantly. Currently, plans
for building such machines are being discussed \cite{Benedikt:2020ejr}.
The problem is, however, that the $X$-particle production cross section is
very small. This comes about because of their expected large mass
($m_X\gtrsim 1$\,TeV/$c^2$) and the fact
that, for the efficient moderation needed to make the formation of $(dX)$
atoms possible, $X$-particles should be produced with relatively low
velocities.
The cross section $\sigma_p$ of production of $X$-particles
with mass $m_X\simeq 1$\,TeV/$c^2$ and
$\beta=v/c\simeq 0.3$ is only $\sim 1$ fb (note that for scalar $X$-particles
$\sigma_p\propto \beta^3$). As a result, the energy spent on production of an
$X^{++}X^{--}$ pair will be far larger than the energy that can be
generated by one $X^{--}$ before it gets bound to a $^6$Li nucleus. This
means that reactivating and reusing the bound $X$-particles multiple times
would be mandatory in this case. This, in turn, implies that only very
long-lived $X$-particles with $\tau_X\gtrsim 3\times 10^{4}$\,yr
will be suitable for energy production.
Reactivation of $X$-particles bound to $^6$Li requires dissociation
of $({\rm ^6Li}X)$ ions. This could be achieved by irradiating them
with particle beams, similarly to what was suggested for reactivation of
lower-charge catalyst particles in ref.~\cite{Zweig:1978sb}.
However, it would be much more efficient to use instead $({\rm ^6Li}X)$ ions
as projectiles and irradiate a target with their beam.%
\footnote{We thank M.~Pospelov for this suggestion.}
The Coulomb binding energy of $X$ to ${\rm ^6Li}$ is about 2 MeV; to strip
them off by scattering on target nuclei with the average atomic number
$A\simeq 40$ one would have to accelerate $({\rm ^6Li}X)$ ions to velocities
$\beta\simeq 0.01$ which, for $m_X\simeq 1$\,TeV/$c^2$,
corresponds to beam energy $\sim 0.05$ GeV. At these
energies the cross section of the stripping reaction is $\gtrsim 0.1$\,b,
and $X$-particles can be liberated with high efficiency in relatively small
targets. The energy spent on the reactivation of one $X$-particle will then
only be about $10^{-9}$ of the energy it can produce before sticking to a
$^6$Li nucleus.
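The quoted beam energy follows from non-relativistic kinematics, since the ion mass is dominated by $m_X$ (illustrative sketch):
\begin{verbatim}
m_X_GeV = 1e3                     # assumed m_X ~ 1 TeV/c^2
beta = 0.01                       # stripping velocity estimated above
E_kin = 0.5 * m_X_GeV * beta**2   # kinetic energy of the (6LiX) beam ion
print(E_kin)                      # ~0.05 GeV
\end{verbatim}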
If $X$-particles are stable or practically stable, i.e.\ their lifetime
$\tau_X$ is comparable to the age of the Universe, there may exist a
terrestrial population of relic $X$-particles bound to nuclei or (in the
case of $X^{++}$) to electrons and thus forming exotic nuclei or atoms. The
possibility of the existence of exotic bound states containing charged
massive particles was suggested in ref.~\cite{Cahn:1980ss} (see also
\cite{DeRujula:1989fe}) and has been studied by many authors.
The concentration of such exotic atoms on the Earth may be very low if
reheating after inflation occurs at sufficiently low temperatures.
Note that reheating temperatures as low as a few MeV are consistent
with observations \cite{Hannestad:2004px}.
A number of searches for such superheavy exotic isotopes have been
carried out using a variety of experimental techniques,
and upper limits on their concentrations were established, see
\cite{Burdin:2014xma} for a review.
Exotic helium atoms $(X^{++}ee)$ were searched for in the Earth's
atmosphere using laser spectroscopy, and upper limits on their
concentration of $10^{-12}-10^{-17}$ per atom were established over the
mass range $20-10^4$\,GeV/$c^2$~\cite{Mueller:2003ji}.
In the case of doubly negatively charged $X$, their Coulomb binding to nuclei
of charge $Z$ would produce superheavy exotic isotopes with nuclear
properties of the original nuclei but chemical properties of atoms with
nuclear charge $Z-2$. Such isotopes could have accumulated in
continental crust and marine sediments.
Singly positively charged ions
($^6$Li$X$) and ($^7$Li$X$) chemically behave as superheavy protons; they can
capture electrons and form anomalously heavy hydrogen atoms.
Experimental searches for anomalous hydrogen in normal water have put upper
limits on its concentration at the level of $\sim 10^{-28} - 10^{-29}$ for the
mass range 12 to 1200 GeV/$c^2$ \cite{smith1} and $\sim 6\times 10^{-15}$ for
the masses between 10 and $10^5$ TeV/$c^2$ \cite{verkerk}.
If superheavy isotopes containing relic $X$-particles of cosmological origin
exist, they can be extracted
from minerals e.g.\ by making use of mass spectrometry techniques, and
their $X$-particles can then be stripped off. To estimate the required
energy, we conservatively assume that it is twice the energy needed to
vaporize the matter sample. As an example, it takes about 10 kJ to vaporize
1\,g of granite \cite{Woskov}; denoting the concentration of $X$-particles in
granite (number of $X$ per molecule) by $c_X$, we find that the energy
necessary to extract one $X$-particle is
$\sim 2.3\times 10^{-18}\,{\rm J}/c_X$.
Requiring that it does not exceed the energy one $X$-particle can produce
before getting stuck to a $^6$Li nucleus
leads to the constraint $c_X\gtrsim 2.3\times 10^{-16}$.
If it is satisfied, extracting
$X$-particles from granite would allow $X$CF to produce more energy than it
consumes, even without reactivation and recycling of the $X$-particles.
Another advantage of the extraction of relic $X$-particles from minerals
compared with their production at accelerators is that it could work even for
$X$-particles with mass $m_X\gg 1$\,TeV/$c^2$.
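For orientation, the extraction-energy estimate above can be reproduced as follows (the mean molar mass of granite used here is an assumption of this sketch):
\begin{verbatim}
N_A = 6.022e23       # Avogadro's number
E_vap = 1e4          # J to vaporize 1 g of granite
M_mol = 70.0         # assumed mean molar mass of granite, g/mol
E_per_molecule = 2 * E_vap * M_mol / N_A   # factor 2: conservative estimate
print(E_per_molecule)             # ~2.3e-18 J, i.e. ~2.3e-18/c_X per X
print(E_per_molecule / 1.1e-2)    # resulting c_X constraint, ~2e-16
\end{verbatim}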
In assessing the viability of $X$CF as a mechanism of energy generation,
in addition to pure energy considerations one should
obviously address many technical issues
related to its practical implementation,
such as collection and moderation of the produced
$X$-particles and prevention of their binding to the surrounding nuclei
(or their liberation if such binding occurs), etc. However, the corresponding
technical difficulties seem to be surmountable~\cite{Goity:1993ih}.
\section{\label{sec:disc} Discussion}
There are several obvious ways in which our analysis of $X$CF can be
generalized. Although we only considered nuclear fusion catalyzed by
scalar $X$-particles, doubly charged particles of non-zero spin can do
the job as well. While we studied $X$CF in deuterium, fusion processes
with participation of other hydrogen isotopes can also be catalyzed by
$X$-particles.
We considered $X$CF taking place in $X$-atomic states.
The catalyzed fusion can also proceed through in-flight reactions occurring
e.g.\ in $d+(dX)$ collisions. However, because even at the highest attainable
densities the average distance $\bar{r}$ between deuterons is
much larger than it is in $(ddX)$ atoms, the rates of in-flight reactions
are suppressed by a factor of the order of $(\bar{r}/a_d)^3\gtrsim 10^{9}$
compared with those of reactions occurring in $X$-atoms.
Our results depend sensitively on the properties of positive ions
$({\rm ^3He} d X)$ and $({\rm ^4He} d X)$, for which we obtained only crude
estimates. More accurate calculations of these properties and of the formation
times of these positive ions would be highly desirable.
The existence of long-lived doubly charged particles may have important
cosmological consequences. In particular, they may form exotic
atoms, which have been discussed in connection with the dark matter
problem \cite{Fargion:2005ep,Belotsky:2006pp,Cudell:2015xiw}. They may also
affect primordial nucleosynthesis in an important way.
In ref.~\cite{Pospelov:2006sc} it was suggested
that singly negatively charged heavy metastable particles may catalyze
nuclear fusion reactions at the nucleosynthesis era, possibly solving the
cosmological lithium problem. The issue has been subsequently studied by many
authors, see refs.~\cite{Pospelov:2010hj,Kusakabe:2017brd} for reviews.
Doubly charged scalars $X$ may also catalyze nuclear fusion reactions in the
early Universe and thus may have significant impact on primordial
nucleosynthesis. On the other hand, cosmology may provide important
constraints on the $X$CF mechanism discussed here. Therefore,
a comprehensive study of cosmological implications of the existence of
$X^{\pm\pm}$ particles would be of great interest.
To conclude, we have demonstrated that long-lived or stable doubly
negatively charged scalar particles $X$, if they exist, can catalyze nuclear fusion
and provide a viable source of energy. Our study gives a strong additional
motivation for continuing and extending the experimental searches for such
particles.
{\it Note added.} Recently, the ATLAS Collaboration has reported a
3.6$\sigma$ (3.3$\sigma$) local (global) excess of events with large
specific ionization energy loss $|dE/dx|$ in their search for long-lived
charged particles at LHC \cite{ATLAS:2022pib}. In the complete LHC Run 2
dataset, seven events were found for which the values of $|dE/dx|$ were
in tension with the time-of-flight velocity measurements, assuming that
the corresponding particles were of unit charge. It has been shown in
\cite{Giudice:2022bpq} that this excess could be explained as being due
to relatively long-lived doubly charged particles. It would be very
interesting to see whether the reported excess survives with the increased
statistics of the forthcoming LHC Run 3.
\section{Acknowledgments}
The author is grateful to Manfred Lindner, Alexei Smirnov and Andreas Trautner
for useful discussions. Special thanks are due to Maxim Pospelov for numerous
helpful discussions of various aspects of $X$-catalyzed fusion and
constructive criticism.
\section{Introduction}
Physics-informed learning (PIL)~\cite{karniadakis2021physics, raissi2017physics,raissi2017physics2,RAISSIdeep,Raissihidden} has been widely used in scientific applications where physical inductive biases are applicable. The integration of domain knowledge into machine learning can not only enhance generalization, but also make models more interpretable.
However, PIL implicitly requires physics properties to be \textit{discriminative}, as opposed to \textit{generative} (defined below). To complement PIL, we propose a new paradigm called physics-augmented learning (PAL) to handle generative properties, as illustrated in Figure~\ref{fig:framework}.
We define and compare \textit{discriminative} and \textit{generative} properties in Section~\ref{sec:dis_gen}, propose PAL in Section~\ref{sec:pil_pal} and compare it with PIL, and demonstrate PAL's effectiveness via numerical experiments in Section~\ref{sec:exp}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{./illustration.pdf}
\caption{Comparison of physics-informed learning (PIL, left) and physics-augmented learning (PAL, right). PIL and PAL apply to discriminative and generative properties, respectively.}
\label{fig:framework}
\end{figure}
\section{Discriminative and Generative Properties}\label{sec:dis_gen}
{\bf What is a property?} A property $P$ is a mapping from an object $f$ to a boolean variable: $P(f)$ is true if $f$ satisfies $P$, and false otherwise. For example, if $f$ refers to an individual with age $a$, and $P$ is the statement ``the individual's age is no more than 30'', then $P(f)$ is true if $a\leq 30$ and false if $a>30$.
Inspired by generative adversarial networks (GANs)~\cite{goodfellow2014generative}, we define a \textit{generator} and a \textit{discriminator} associated with a property $P$.
\begin{definition}
{\rm \bf (Generator)}. A generator can generate objects $f$ that have the property $P$.
\end{definition}
\begin{definition}
{\rm \bf (Discriminator)}. A discriminator determines if an input object $f$ has the property $P$ or not.
\end{definition}
To be maximally useful, a generator should be implementable as a symbolic formula or a feedforward neural network,
and a discriminator as a classifier or a loss function. More specifically, we define
\textit{ideal} generators and discriminators as follows:
\begin{definition}
{\rm \bf (Ideal Generator)}. An ideal generator is (1) accurate (never generates an $f$ such that $P(f)$ is false), (2) complete (can generate any $f$ such that $P(f)$ is true), (3) efficient (can generate $f$ in polynomial time), and (4) differentiable (can exploit derivative-based optimization methods such as back propagation).
\end{definition}
\begin{definition}
{\rm \bf (Ideal Discriminator)}. An ideal discriminator is (1) accurate (always computes $P(f)$ correctly), (2) efficient (computes $P(f)$ in polynomial time), and (3) differentiable (can exploit derivative-based optimization methods such as back propagation). In this paper, we deal with one specific ideal discriminator $\hat{L}$ such that $\hat{L}f=0$ when $P(f)$ is true and $\hat{L}f\neq 0$ when $P(f)$ is false.
\end{definition}
We now define discriminative and generative properties:
\begin{definition}
{\rm \bf (Generative property)}. A property $P$ is \textit{generative} if there exists an ideal generator for $P$.
\end{definition}
\begin{definition}
{\rm \bf (Discriminative property)}. A property $P$ is \textit{discriminative} if there exists an ideal discriminator for $P$.
\end{definition}
Let us clarify these abstract definitions with a few examples of properties below, summarized in Table~\ref{tab:properties}.
{\bf A. Lagrangian property}
For a physics system with generalized coordinate $\mat{q}$ and velocity $\dot{\mat{q}}$, the acceleration field $\ddot{\mat{q}}$ is {\it Lagrangian} if there exists a Lagrangian function $\mathcal{L}(\mat{q},\dot{\mat{q}})$ such that
$\ddot{\mat{q}}=(\nabla_{\dot{\mat{q}}}\nabla_{\dot{\mat{q}}}^T \mathcal{L})^{-1}(\nabla_{\mat{q}}\mathcal{L}-(\nabla_{\mat{q}}\nabla^T_{\dot{\mat{q}}}\mathcal{L})\dot{\mat{q}})$ ~\cite{cranmer2020lagrangian, nnphd}. The Lagrangian property is generative by definition but not discriminative, because (perhaps surprisingly) there is no known efficient method to determine whether a given $\ddot{\mat{q}}$ is Lagrangian or not~\cite{nnphd}.
{\bf B. Positive definiteness} By analogy with linear operators, we say that a function $f(x)$ is {\it positive definite} if there exists a function $g$ such that $f(x)=g(g(x))$. For example, the function that time-evolves a physical system during an interval $\Delta T$ is positive definite, since it is equivalent to time-evolving by $\Delta T/2$ twice.
The positive definiteness property is generative by definition, but not discriminative to the best of our knowledge.
{\bf C. Manifest symmetry} Many manifest symmetries are discriminative, with associated discriminators that can be elegantly described by partial differential equations~\cite{liu2021machinelearning}. For example,
a vector field $\mat{f}(\mat{x})$ is symmetric under a Lie group $\mathcal{G}$ if
$\mat{f}(g\mat{x})=g(\mat{f}(\mat{x}))$ for all $g\in\mathcal{G}$.
This symmetry property is discriminative because it is equivalent to
$(K_i\mat{x}\cdot\nabla-K_i)\mat{f}=0$ for all group generators $K_i$~\cite{liu2021machinelearning}.
It is also generative due to recent advances in equivariant neural networks~\cite{cohen2016group, thomas2018tensor,fuchs2020se,kondor2018clebsch,satorras2021n}.
{\bf D. Hidden symmetry} However, hidden symmetries are not discriminative, because they require coordinate transformations to `generate' manifestly symmetric objects~\cite{liu2021machinelearning}. For example, manifest Hamiltonicity is shown to be discriminative in~\cite{liu2021machinelearning}, but hidden Hamiltonicity is a generative property that is equivalent to the Lagrangian property~\cite{nnphd}.
{\bf E. Separability}: A differentiable bivariate function $f(x_1,x_2)$ is (additively) separable if there exist two unary functions $f_1$ and $f_2$ such that $f(x_1,x_2)=f_1(x_1)+f_2(x_2)$. Separability is generative by definition, and also discriminative because it is equivalent to $\hat{L}f\equiv\partial^2 f/\partial x_1\partial x_2=0$~\cite{Udrescueaay2631, udrescu2020ai}.
{\bf F. PDE satisfiability} We say that a function $f(x,t)$ satisfies a partial differential equation (PDE) if $g(t,f,f_t,f_x,\ldots)=0$. This property is discriminative by definition, with $\hat{L}f=g$. It is also generative when $f(x,t)$ can be efficiently computed by a numerical PDE solver given proper boundary conditions; this idea underlies neural ordinary/partial/stochastic differential equations~\cite{chen2019neural,hsieh2019learning,kidger2021neural}.
\begin{table}[]
\centering
\caption{Generative and discriminative properties}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
Properties & \makecell{A. Lagrangian \\ Property} & \makecell{B. Positive \\ Definiteness} & \makecell{C. Manifest \\ Symmetry} & \makecell{D. Hidden \\ Symmetry} & E. Separability & \makecell{F. PDE\\ Satisfiability} \\\hline
Generative & Yes & Yes & Yes & Yes & Yes & Yes \\\hline
Discriminative & No & No & Yes & No & Yes & Yes \\\hline
\end{tabular}}
\label{tab:properties}
\end{table}
\section{Physics-informed Learning (PIL) and Physics-augmented learning (PAL)}\label{sec:pil_pal}
In this section, we first review physics-informed learning (PIL) and its limitations, motivating our proposed physics-augmented learning (PAL) framework.
\subsection{Physics-informed Learning (PIL)}
The essence of PIL is to seamlessly integrate data and mathematical
physics models, and a common way is to add a soft penalty term $L_2$ (corresponding to physics properties) to the prediction loss $L_1$~\cite{karniadakis2021physics} (Figure \ref{fig:framework}, left panel). PIL works for {\it discriminative} properties. Indeed, one of its greatest successes lies in solving forward/inverse PDE problems~\cite{raissi2017physics,raissi2017physics2,Raissihidden}, based on the unstated fact that satisfying a PDE is discriminative. We clarify PIL with a toy example below.
{\bf Example: PIL for separability} Suppose that our training dataset ($N$ samples) is generated by the oracle $y=f_0(x_1,x_2)$ where $f_0:\mathbb{R}^2\to\mathbb{R}$ and that we want to fit the data with a parametrized neural network $f(x_1,x_2;\theta)$ that is additively separable.
PIL does this using a loss function with two terms: $L\equiv L_1+\lambda L_2$,
where the prediction loss $L_1$ and separability loss $L_2$ are
$$
L_1(\theta)\equiv\frac{1}{N}\sum_i |f_0(x_1^{(i)},x_2^{(i)})-f(x_1^{(i)},x_2^{(i)};\theta)|,
\quad
L_2(\theta)\equiv\frac{1}{N}\sum_i\left| {\partial^2 f(x_1^{(i)},x_2^{(i)};\theta)\over\partial x_1\partial x_2}\right|,
$$
and the constant $\lambda>0$ is a penalty coefficient.\footnote{Alternatively, $L_2$ can be expressed in terms of finite differences instead of derivatives; we do this for our numerical experiments. For example, additive separability corresponds to the condition listed in Table~\ref{tab:separability}, which we average over all pairs of data points.}
By definition, each discriminative property $P$ can be written as $\hat{L}f=0$ for some operator $\hat{L}$,
so $L_2\equiv |\hat{L}f|$ is a natural measure of property violation.
In contrast, non-discriminative properties, {\frenchspacing\it e.g.}, being Lagrangian or positive definite, lack a known efficiently computable criterion $\hat{L}f=0$.
Fortunately, the PAL framework proposed below can come to the rescue whenever the properties of interest are {\it generative}.
\begin{table}[]
\centering
\caption{Neural network and loss function for PIL and PAL on the separability example}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|}\hline
Paradigm & NN & \makecell{Prediction error $L_1$, \\Penalty $L_2$ }
\\\hline
PIL & $f(x_1,x_2;\theta)$ & \makecell{$\frac{1}{N}\sum_{i}|f_0(x_1^{(i)},x_2^{(i)})-f(x_1^{(i)},x_2^{(i)};\theta)|$, \\ $\frac{2}{N(N-1)}\sum_{j>i}|f(x_1^{(i)},x_2^{(i)};\theta)+f(x_1^{(j)},x_2^{(j)};\theta)-f(x_1^{(i)},x_2^{(j)};\theta)-f(x_1^{(j)},x_2^{(i)};\theta)|$} \\\hline
PAL & \makecell{$f_1(x_1;\theta_1),f_2(x_2;\theta_2),$ \\ $f_{12}(x_1,x_2;\theta_{12})$} & \makecell{$\frac{1}{N}\sum_i|f_0(x_1^{(i)},x_2^{(i)})-(f_1(x_1^{(i)};\theta_1)+f_2(x_2^{(i)};\theta_2))-f_{12}(x_1^{(i)},x_2^{(i)};\theta_{12})|$, \\ $\frac{1}{N}\sum_i|f_{12}(x_1^{(i)},x_2^{(i)};\theta_{12})|$} \\\hline
\end{tabular} }
\label{tab:separability}
\end{table}
\subsection{Physics-augmented Learning (PAL)}
Although both PIL and PAL aim to leverage inductive biases in machine learning, their approaches are quite different: while PIL is based on regularization design, PAL is based on model design. Useful regularization designs and
model designs are only available for properties that are discriminative and generative, respectively.
In the PAL paradigm, the whole model consists of two parallel modules (see Figure \ref{fig:framework}, right panel): the first module ({\tt PhyGen}) strictly satisfies the generative property, and the second module ({\tt Blackbox}) \textit{augments} the expressive power to allow violation of the property. The loss function consists of two terms: the standard prediction error $L_1$, and a penalty term $L_2$ defined as some norm of {\tt Blackbox} module output. The combined loss function is $L=L_1+\lambda L_2$.
{\bf Example: PAL for separability} In PAL we have two neural modules {\tt PhyGen} and {\tt Blackbox}. {\tt PhyGen} strictly satisfies additive separability by having two sub-networks $(f_1(x_1;\theta_1), f_2(x_2;\theta_2))$ and outputting their sum. {\tt Blackbox} is a fully-connected neural network $f_{12}(x_1,x_2;\theta_{12})$ that can universally approximate any two-variable continuous function. The whole prediction is thus $(f_1(x_1;\theta_1)+f_2(x_2;\theta_2))+f_{12}(x_1,x_2;\theta_{12})$ and the prediction loss $L_1$ is its distance from the label $y=f_0(x_1,x_2)$. The penalty loss is simply the function norm of the {\tt Blackbox} output, {\frenchspacing\it i.e.}, $L_2=|f_{12}(x_1,x_2;\theta_{12})|$.
Our PIL and PAL examples are summarized and compared in Table \ref{tab:separability}.
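A minimal PyTorch sketch of the PAL row of Table \ref{tab:separability} is given below; the network sizes, learning rate, number of steps and the toy oracle $f_0=x_1^2+x_2^2+x_1x_2$ are illustrative choices, not the exact settings of our experiments.
\begin{verbatim}
import torch, torch.nn as nn

def mlp(n_in):   # small fully-connected network
    return nn.Sequential(nn.Linear(n_in, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

f1, f2, f12 = mlp(1), mlp(1), mlp(2)   # PhyGen = f1 + f2, Blackbox = f12
params = [*f1.parameters(), *f2.parameters(), *f12.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
lam = 0.2                              # penalty coefficient lambda < 1

x = torch.rand(1024, 2) * 2 - 1        # toy data from the oracle f0
y = x[:, :1]**2 + x[:, 1:]**2 + x[:, :1] * x[:, 1:]

for step in range(5000):
    pred_gen = f1(x[:, :1]) + f2(x[:, 1:])      # strictly separable part
    pred_bb = f12(x)                            # property-violating residual
    L1 = (y - pred_gen - pred_bb).abs().mean()  # prediction loss (MAE)
    L2 = pred_bb.abs().mean()                   # norm of Blackbox output
    loss = L1 + lam * L2
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}
After training, $f_1+f_2$ should approximate the separable part $x_1^2+x_2^2$ (up to constants) and $f_{12}$ should absorb the residual $x_1x_2$.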
{\bf When should I use PAL rather than PIL?} PIL and PAL are complementary frameworks exploiting discriminative and generative properties, respectively. As a consequence, one should resort to PAL without hesitation
when the property of interest is generative and non-discriminative. Another reason to use PAL is that it provides a model decomposition into {\tt PhyGen} and {\tt Blackbox}, and interpreting them potentially produces physical insights. For example, the decomposition into conservative and non-conservative force fields enabled new-physics discovery in~\cite{nnphd}.
{\bf How to choose the hyperparameter $\lambda$?} It was recently proven that
$L=L_1+\lambda L_2$ produces a phase transition at $\lambda=1$~\cite{nnphd}\footnote{Here the loss functions $L_1$ and $L_2$ should be defined as norms to produce sharp phase transition behavior, {\frenchspacing\it e.g.}, the mean-absolute error (MAE) or the Euclidean norm.
In contrast, the mean-squared error (MSE) does not produce a sharp phase transition.}. Since $\lambda>1$ is the undesirable phase, in principle one can simply choose any $\lambda<1$ to obtain vanishing prediction loss; the numerical results of \cite{nnphd} suggest that $\lambda\in[0.02,0.5]$ produces accurate and robust results in practice. We verify these observations with the additive separability example in Appendix~\ref{app:symbolic}. In our numerical experiments, we choose $\lambda=0.2$ if not stated otherwise.
\section{Numerical Experiments}\label{sec:exp}
We demonstrate the effectiveness of PAL on two tasks: symbolic regression and dynamics prediction. PAL performs well on these applications, while PIL is inapplicable or performs worse than PAL.
\subsection{Symbolic Regression}
The goal of symbolic regression is to find a symbolic expression that matches data from an unknown function $f$. The physics-inspired AI Feynman symbolic regression module~\cite{Udrescueaay2631,udrescu2020ai}
tests if a dataset satisfies certain properties, including symmetries. However, AI Feynman can only
discover and exploit these properties if they hold to high accuracy.
To relax this, we employ PAL to first decompose the function $f$ into two parts: a property-satisfying part $f_+$ and a property-violating residual $f_-$. We then apply AI Feynman to both parts separately to obtain symbolic formulas; $f_+$ satisfies the strict property which AI Feynman can exploit.
We experiment with three properties: additive separability, rotational invariance and positive definiteness. Training details can be found in Appendix \ref{app:symbolic}. Table \ref{tab:symbolic} shows the symbolic regression results: for the first two properties, PAL decomposes the function as desired while PIL can only learn the whole function; for the positivity example, PIL is not applicable while PAL can extract a meaningful partial function $g$.
\begin{table}[]
\centering
\caption{Symbolic regression results}
\begin{tabular}{|c|c|c|c|}\hline
Property & Methods & $f_+$ & $f_-$ \\\hline
\multirow{3}{*}{\makecell{Additive Separability \\ $f=x_1^2+x_2^2+x_1x_2$}} & Truth & $(x_1^2)+(x_2^2)$ & $x_1x_2$ \\\cline{2-4}
& AI Feynman + PAL & $(x_1^2-0.02)+ (x_2^2-0.01)$ & $x_1x_2+0.03$
\\\cline{2-4}
& AI Feynman + PIL & \multicolumn{2}{c|}{$x_1^2+x_2^2+x_1x_2$} \\\hline
\multirow{3}{*}{\makecell{Rotational Invariance \\ $f=0.5(x_1^2+x_2^2)+0.32x_1$}} & Truth & $0.5(x_1^2+x_2^2)$ & $0.32x_1$ \\\cline{2-4}
& AI Feynman + PAL & $0.5(x_1^2+x_2^2)$ & $0.31998x_1$
\\\cline{2-4}
& AI Feynman + PIL & \multicolumn{2}{c|}{$0.5(x_1^2+x_2^2)+0.32x_1$} \\\hline
\multirow{3}{*}{\makecell{Positivity \\ $f={\rm sin}({\rm sin}(x))+0$}} & Truth & ${\rm sin}({\rm sin}(x))$ & $0$ \\\cline{2-4}
& AI Feynman + PAL & $g(g(x)), g=-{\rm sin}(x)+0.004$ & $0$ \\\cline{2-4}
& AI Feynman + PIL & \multicolumn{2}{c|}{Not Applicable} \\\hline
\end{tabular}
\label{tab:symbolic}
\end{table}
\begin{comment}
\subsection{Hamiltonian Learning}
{\bf Problem setting}: Given a Hamiltonian matrix (under a specific representation) whose entries are partially known and others are missing. Besides the known values, we also have access to its eigenvalues. Can we impute those missing matrix elements?
{\bf Discriminative eigen-matching problem}: given a real symmetric matrix $H\in\mathbb{R}^{N\times N}$, whether it contains eigenvalues $\lambda_i (i=1,\cdots,N)$? No shortcut solution is available other than solving the eigen problem directly $Ax=\lambda x$ and see if eigenvalues match.
{\bf Generative eigen-matching problem}: Given eigenvalues $\{\lambda_1,\cdots,\lambda_N\}$, can we generate a symmetric matrix $H$ such that it contains $\{\lambda_1,\cdots,\lambda_N\}$ as eigenvalues? In fact, one can easily generate $H$ as
\begin{equation}
\begin{aligned}
H = O
\begin{pmatrix}
\lambda_1 & 0 & 0 & 0\\
0 & \lambda_2 & 0 & 0\\
0 & 0 & \cdots & 0\\
0 & 0 & 0 & \lambda_N
\end{pmatrix}
O^T
\end{aligned}
\end{equation}
where $O\in\mathbb{R}^{N\times N}$ is an orthogonal matrix i.e. $O^TO=I$. Given the discussion above, the generative view (PAL) is more feasible than the discriminative view (PIL) in the current problem. Note that we don't even need the blackbox in PAL if we are confident that the eigenvalues are exact.
{\bf Concrete examples?}
{\bf We have assumed that some matrix elements are known, are there any experimental techniques that can determine matrix elements of a Hamiltonian? [can we justify the assumption?]}
Consider a charged particle trapped in a 2D quadratic well and infected by a uniform electric field:
$V(x,y)=\frac{1}{2}(x^2+y^2)+ax$ = (Rotational invariance) + residue
Schrodinge equation: $-\frac{1}{2}\nabla^2\psi+V\psi=0$.
Know the ground state and/or a few excited states, can we infer the potential which has minimal breakdown of rotational invariance?
Likewise, for spin-orbit coupling $H=H_L(L)+H_S(S)+H_{LS}(L,S)$. Can we obtain the Hamiltonian with the minimal interaction term $H_{LS}(L,S)$?
\end{comment}
\begin{figure}[htbp]
\vskip -0.7cm
\centering
\begin{minipage}{0.5\textwidth}
\subfloat[Trajectory]
{
\captionsetup[subfigure]{labelformat=empty}
\subfloat[Ground Truth]{
\includegraphics[width=0.48\linewidth]{./N_body_ground_truth_trajectory.pdf}
}
\subfloat[PAL \& PIL]{
\includegraphics[width=0.48\linewidth]{./N_body_PAIL_trajectory.pdf}
}
\setcounter{subfigure}{1}
\label{fig2:a}
}
\label{Fig2_a}
\end{minipage}
\begin{minipage}{0.35\textwidth}
\center
\subfloat
{
\captionsetup[subfigure]{labelformat=empty}
\subfloat[(b) Force]{
\includegraphics[width=0.96\linewidth]{./N_body_PIL_PAL_force.pdf}
}
}
\end{minipage}
\caption{Dynamics prediction results: (a) Trajectory; (b) Force.}
\label{fig2}
\vskip -0.5cm
\end{figure}
\subsection{Dynamics prediction: N-body Problem}
In this example, we aim to learn $N$-body dynamics from the initial $(t=0)$ and final $(t=1)$ states of $N$ particles. We assume that these unit mass particles ($m=1$) obey Newtonian mechanics with pairwise interactions, {\frenchspacing\it i.e.}, particle $i$ exerts a force $\mat{f}_{ij}=f(|\mat{x}_i-\mat{x}_j|)\frac{\mat{x}_i-\mat{x}_j}{|\mat{x}_i-\mat{x}_j|_2}$ on particle $j$. Our ground truth, $f(r)=r^2$, is used to compute the final states from the initial states; we pretend not to know $f$ and aim to infer it solely from the initial and final states (indicated by triangles and dots in Figure 2).
The property $P$ that we explore with PAL and PIL is the time-independence of $f$, which is clearly both discriminative and generative. Details of loss functions, data generation and training details are included in Appendix \ref{app:nn+losses} and \ref{app:n-body}.
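For concreteness, ground-truth data of this type can be generated along the following lines (a sketch assuming planar motion; the particle number, random seed and tolerances are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 4
rng = np.random.default_rng(0)

def accel(x):                  # x: (N, 2) positions, unit masses
    a = np.zeros_like(x)
    for j in range(N):
        for i in range(N):
            if i != j:
                d = x[i] - x[j]                 # force on j from i
                a[j] += np.linalg.norm(d) * d   # f(r)=r^2 => f(r) d/r = r d
    return a

def rhs(t, s):
    x, v = s[:2 * N].reshape(N, 2), s[2 * N:].reshape(N, 2)
    return np.concatenate([v.ravel(), accel(x).ravel()])

x0, v0 = rng.normal(size=(N, 2)), 0.1 * rng.normal(size=(N, 2))
s0 = np.concatenate([x0.ravel(), v0.ravel()])
sol = solve_ivp(rhs, (0.0, 1.0), s0, rtol=1e-8)
x1 = sol.y[:2 * N, -1].reshape(N, 2)        # final positions at t = 1
\end{verbatim}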
Figure 2 shows that PAL outperforms PIL in terms of both trajectory interpolation and force recovery.
In other words, although both PIL and PAL can be applied to this example, PAL reveals the underlying dynamics much more accurately than PIL.
\section{Conclusions \& Discussions}
We have proposed a new paradigm called physics-augmented learning (PAL) to effectively integrate physical properties into unconstrained neural networks. PAL complements the well-known physics-informed learning (PIL) paradigm, by applying in some cases where PIL is inapplicable and by outperforming PIL in some cases where both can be used. While PIL is based on regularization design and applies to discriminative properties, PAL is based on model design and applies to generative properties.
Although PAL in its general form is explicitly formulated for the first time in this paper, examples of it have been implicitly adopted in many successful machine learning models owing to its ability to integrate human knowledge into the model design phase. For example, AlphaFold2 demonstrated that designing deep learning models with proper inductive biases could give superior performance on an unsolved grand challenge \cite{jumper2021highly}. There is growing interest in the ML community in how inductive biases can shape deep learning models; for example, ref.~\cite{raghu2021vision} investigates how two fundamental deep learning models (CNNs and Transformers) leverage their own inductive biases and tackle challenges in the computer vision domain.
We are hopeful that the unified PAL--PIL view of inductive biases can
lead to further advances in the ML community by guiding the selection and design of inductive biases for tackling real-world challenges.
\section*{Acknowledgement}
We would like to thank Silviu Udrescu and Jiaxi Zhao for valuable discussions. We thank the Center for Brains, Minds, and Machines (CBMM)
for hospitality. This work was supported by The Casey
and Family Foundation, the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science and IAIFI through NSF grant PHY-2019786.
\section{Introduction}
In a previous work \cite{dkm}, I considered the possibility of expressing the constraint of Gauss's law
in the perturbative expansion of gauge field theories via a Lagrange multiplier
field, $\lambda$, and argued for the generation
of an effective potential term of the Coleman-Weinberg type \cite{CW} for $\lambda$,
and its relation to the problems of the mass gap and confinement
in the non-Abelian case.
Here, I elaborate on the consequences of the procedure and the proposed effective action, derive an effective Hamiltonian, and
examine the energy and stability of solutions
with ``bubbles'' of the chromoelectric field.
A discussion of the Euclidean action, the vacua and possible related vacuum transitions, confinement, and the strong-CP problem is also included.
The symmetries and other properties of the perturbative and the confining vacuum are explored, and connections are made with older phenomenological models of the strong interactions.
In particular, the works of \cite{kogut} can be mentioned, where some interesting and intuitive phenomenological models of the confining mechanism have been proposed, with the addition of a scalar field and an associated effective potential term that modify the dielectric and fermion-condensate properties of the theory at its minimum.
The present work also includes a scalar field, the Lagrange multiplier, and an associated effective potential term. Here, however, $\lambda$ has no kinetic term and no additional degrees of freedom, hence there is no symmetry breaking, and the effective potential term appears ``inverted'', with the opposite sign.
Although the effective potential term here is unbounded from below, stability is proven for all classical solutions, because of the interplay of the gauge kinetic and gradient terms, as well as the constraint of Gauss's law.
Also, two vacua emerge, a local minimum (the perturbative, Coulomb vacuum, $\Omega_0$, at $\lambda=0$) and a maximum of the effective potential (the confining vacuum, $\Omega_\mu$, at the generated mass scale $\lambda^2=\mu^2$), which are also shown to be quantum mechanically stable; there are no finite-action Euclidean solutions that mediate their decay.
There are, however, stable Lorentzian solutions, ``bubbles'' of the chromoelectric field, ``glueballs'', that connect the two vacua and can mediate the transitions between them. They are solitonic solutions with a finite mass ``gap'' of order $\mu /g^2$ (where $g$ is the coupling constant of the non-Abelian theory).
Once the vacuum structure of the theory is better understood, several properties of the Yang-Mills theory, that were also expected to be related, can be easily seen: confinement \cite{conf}, bag model \cite{bag}, chiral symmetry breaking \cite{chiral}, as well as a possible solution to the strong-CP problem \cite{cp}.
As far as the Lorentz invariance of the theory is concerned,
the two vacua admit a Lorentz invariant energy-momentum tensor, the one expected by the bag model, but they are completely stable for pure Yang-Mills theory at zero temperature, and there is no Lorentz invariant energy-momentum tensor that connects them (at least not with the effective action derived in this work, which concerns pure Yang-Mills at zero temperature). Any transitions between the two vacua happen in non-trivial backgrounds of finite temperature or fermion density.
Although the work presented here can be related to older phenomenological models, and can also be considered as an effective action that describes the properties of the strong interactions, it should be stressed that it is derived from first principles, namely the treatment of the constraints in the quantum theory, and is proposed as a complete and exact description of the Yang-Mills vacuum and associated features.
The layout of this work is as follows:
in Sec.~2, I start with a description of the combinatorics for the Abelian case
in order to explain the procedure in a simpler setting, but also to show that the method proposed here does not change the perturbative behavior of the theory.
In Sec.~3, I consider
the non-Abelian, self-interacting case, show the derivation of an effective potential term for
$\lambda$, and examine the solitonic, ``bubble'' solutions, and their stability in the Lagrangian and Hamiltonian frameworks in Minkowski space-time.
In Sec.~4, I give a preliminary discussion of the Euclidean action, the vacua and possible solutions.
In Sec.~5, the previous considerations are utilized in order to obtain a better picture of the confining vacuum and mechanism.
In Sec.~6, I give a more detailed treatment of the Euclidean action and show that there are no finite action solutions that mediate vacuum decay of either the perturbative or the confining vacuum of pure Yang-Mills theory at zero temperature.
The only solutions of the Euclidean equations are the usual Yang-Mills instantons, which exist at both the perturbative and the confining vacuum.
In Sec.~7, I examine the symmetry properties of the theory at the two vacua, discuss Lorentz invariance, the bag model, and chiral symmetry breaking.
In Sec.~8, I give some arguments towards the resolution of the strong-CP problem.
In Sec.~9, I conclude with some comments.
\section{The Abelian theory}
In order to investigate the consequences of the constraint of Gauss's law in the perturbation expansion
of gauge field theories I will start with the Abelian case, including a massive fermion, with Lagrangian $\cal L$ and action
\begin{equation}
S=\int
{\cal L}= \int -\frac{1}{4}F_{\mu\nu}^2 + \bar{\psi} (i \gamma^{\mu}D_{\mu} - m) \psi \,,
\end{equation}
where $F_{\mu\nu}=\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$ and $D_{\mu}=\partial_{\mu} + i e A_{\mu}$.
Integrations are over $d^4 x$ and
the metric conventions are $g_{\mu\nu}=(+---)$, $\partial_{\mu}=(\partial_0, \partial_i)$,
$\partial^{\mu}=(\partial_0, -\partial_i)$.
Generally,
$r$ will denote the three-dimensional, spatial distance.
Since the Lagrangian is independent of $\dot{A}_0=\partial_0 A_0$, the
respective equation of motion for that field, namely
\begin{equation}
\frac{\delta S}{\delta A_0}=0,
\label{gauss1}
\end{equation}
is not a dynamical equation, but, rather, a constraint corresponding to Gauss's law,
which can be incorporated in the perturbative expansion via a Lagrange multiplier field,
$\lambda$, in the path integral
\begin{equation}
Z(J_{\mu}, \Lambda)=\int[dA_{\mu}][d\psi][d\bar{\psi}][d\lambda] e^{i \int\tilde{\cal L}},
\end{equation}
where
\begin{eqnarray}
\nonumber
\tilde{\cal L} &=&\frac{1}{2}(\partial_0 A_i - \partial_i A_0)^2
-\frac{1}{4} F_{ij}^2\\ \nonumber
&+& \bar{\psi} (i \gamma^{\mu}\partial_{\mu} -e\gamma^0 A_0 +e \gamma^i A_i - m) \psi \\ \nonumber
&-& \lambda(\nabla^2 A_0-\partial_0\partial_i A_i + e \bar{\psi}\gamma^0\psi) \\
&-& \frac{1}{2} (\partial_0 A_0 -\partial_i A_i +\partial_0 \lambda)^2
+A_0 J_0 - A_i J_i + \lambda \Lambda.
\label{l1}
\end{eqnarray}
In the above equation the first and the second lines contain the original gauge and fermion terms,
the third line is the constraint $\lambda \frac{\delta S}{\delta A_0}$, implemented with
a gauge-invariant $\lambda$, and the last line contains
the gauge-fixing term and the sources $J_{\mu}, \Lambda$.
A special gauge-fixing condition was used, since the associated term, which can be derived by the
usual Faddeev-Popov procedure, gives the simplest set of Feynman rules.
Other gauge conditions can be used \cite{dkm}, with a similar combinatoric result as described below:
After the usual inversion procedures one obtains the propagators with momentum $k$,
\begin{eqnarray}
G_{00} &=& -\frac{1}{k^2}-\frac{1}{\vec{k}^2} \\
G_{\lambda\lambda} &=& -\frac{1}{\vec{k}^2} \\
G_{0\lambda} &=& \frac{1}{\vec{k}^2} \,=\, G_{\lambda 0}\\
G_{ii} &=& \frac{1}{k^2}.
\label{props}
\end{eqnarray}
One can easily deduce the vertices from (\ref{l1}), and observe that
the propagators combine in all interactions so as to reproduce
all the usual QED diagrams. $G_{00}$, $G_{\lambda\lambda}$ and $G_{0\lambda}$
appear together and their sum gives the ordinary $0-0$ propagator in Feynman gauge.
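Explicitly, summing the four propagators listed above,
\[
G_{00}+G_{\lambda\lambda}+G_{0\lambda}+G_{\lambda 0}
= -\frac{1}{k^2}-\frac{1}{\vec{k}^2}-\frac{1}{\vec{k}^2}
+\frac{1}{\vec{k}^2}+\frac{1}{\vec{k}^2}
= -\frac{1}{k^2}\,.
\]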
For example, for two static current sources
separated by a spatial distance, $\vec{r}$,
one obtains the usual Coulomb
interaction energy
from the sum of the diagrams in Fig.~1
in the static limit of $k_0=0$,
\begin{equation}
V_{\rm Coul}(r) =4\pi\alpha_e \int \frac{d^3 {k}}{(2 \pi)^3} \frac{e^{i\vec{k}\cdot\vec{r}}}{\vec{k}^2}
= \frac{\alpha_e}{r},
\label{coulomb}
\end{equation}
with $\alpha_e = \frac{e^2}{4\pi}$.
\section{The non-Abelian theory}
I now consider the case of the non-Abelian, Yang-Mills gauge theory, with coupling $g$, and gauge group $G$, with generators $T^a$ and structure constants $f^{abc}$, and initial action
\begin{equation}
S_0=\int -\frac{1}{4}F^a_{\mu\nu} F^{a\mu\nu} \,,
\end{equation}
with $F^a_{\mu\nu}=\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu} - g f^{abc}A^b_{\mu}A^c_{\nu}$.
The theory is gauge invariant, with
$A_{\mu}\rightarrow\omega A_{\mu} \omega^{-1} + \frac{i}{g}\omega\partial_{\mu} \omega^{-1}$
under the local gauge transformation
$\omega(\alpha) = e^{iT^a\alpha^a (x)} \in G$
(with the usual notation $A_{\mu}= T^a A^a_{\mu}$).
Although the addition of fermions will not be considered here, a massive fermion in the representation $R$
can be included with the term $ \bar{\psi} (i \gamma^{\mu}D_{\mu} - m) \psi$ in the Lagrangian
with $\psi\rightarrow\omega_R(\alpha)\psi$,
and $D_{\mu}=\partial_\mu + i g\,A_{\mu}^a \, T_R^a$.
After imposing the constraint in
\begin{equation}
\tilde{S} = S_0 + \int \lambda^a \frac{\delta S_0}{\delta A^a_0},
\label{arg1}
\end{equation}
the theory is still gauge-invariant with $\lambda\rightarrow\omega\lambda\omega^{-1}$, and can be gauge-fixed
similarly to the Abelian case.
The resulting gauge field propagators
are the same as in the Abelian theory and diagonal in color indices. Other gauge conditions are possible \cite{dkm}; the general result is, as described before, the missing Coulomb interaction and its reconstruction through the modified Feynman rules.
The incorporation of the constraint of Gauss's law via the term
$\lambda^a \frac{\delta S_0}{\delta A^a_0}$ has the additional effect
of introducing interactions between the gauge field and $\lambda$. These are
the same as the usual interactions, with one $A_0$ leg replaced by $\lambda$.
For example, in Fig.~2, a vertex of the non-Abelian theory is shown
together with the new corresponding vertex with the same value.
The usual QCD interactions can be reproduced, with the exception
that, for diagrams with external $\lambda$ legs, the Coulomb interaction is
missing in the internal propagators: in Fig.~3, this is shown for the $A_i-A_j$ propagator, with momentum $k$,
and external, constant $\lambda$ fields, where the missing Coulomb interaction
gives a factor of $g^2 C_2 \lambda^2 \frac{k_i k_j}{\vec{k}^2}$,
where $\lambda^2=\lambda^a\lambda^a$ and $f^{acd}f^{bcd}=C_2 \delta^{ab}$.
This amounts to a mass term
in loops like Fig.~4, where the
$\lambda-A_0$ and $\lambda-\lambda$ interactions cannot be inserted in the loop,
and has the effect of generating a gauge-invariant
effective potential from these terms \cite{dkm},
which would otherwise add up to zero.
It is of the Coleman-Weinberg form \cite{CW},
\begin{equation}
U(\lambda)= \frac{(\alpha_s C_2)^2}{4}\lambda^4 \left(\ln\frac{\lambda^2}{\mu^2}-\frac{1}{2}\right),
\label{m0}
\end{equation}
with $\alpha_s = g^2/4\pi$,
renormalized at a scale $\mu$ where $dU/d\lambda=0$,
and appears in the effective action with the opposite sign (it is upside-down).
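Indeed, differentiating (\ref{m0}) with respect to $\lambda=(\lambda^a\lambda^a)^{1/2}$, the $-\frac{1}{2}$ term cancels, leaving
\begin{equation}
\frac{dU}{d\lambda} = (\alpha_s C_2)^2\, \lambda^3 \,\ln\frac{\lambda^2}{\mu^2}\,,
\end{equation}
which vanishes at $\lambda=0$ and at the non-trivial extremum $\lambda^2=\mu^2$, where $U(\mu^2)=-\frac{(\alpha_s C_2)^2}{8}\mu^4<0$.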
The resulting, gauge invariant, effective action in Minkowski space,
\begin{equation}
S_{\rm M, eff}= \int -\frac{1}{4}F^a_{\mu\nu} \, F^{a \,\mu\nu} +
\lambda^a \, D_i F^{a i0} + U(\lambda),
\label{meff1}
\end{equation}
can also be written in terms of the (chromo)-electric and -magnetic fields,
\begin{equation}
E^a_i = F^a_{0i}=F^{a i0},\,\,\, B^a_i=-\frac{1}{2}\epsilon^{ijk}F^a_{jk},
\end{equation}
as
\begin{equation}
S_{\rm eff}= \int \frac{1}{2}{E_i^a}E_i^a -\frac{1}{2}{B_i^a}B_i^a +
\lambda^a\, {D_i} {E_i^a} + U(\lambda),
\label{meff2}
\end{equation}
and the variational equations become
\begin{equation}
\frac{\delta}{\delta\lambda^a}=0 \,\Rightarrow
D_i E_i^a = -\frac{\partial U}{\partial \lambda^a},
\label{m1}
\end{equation}
\begin{equation}
\frac{\delta}{\delta A_0^a} =0 \,\Rightarrow
D_i^2 \lambda^a = D_i E_i^a,
\label{m2}
\end{equation}
\begin{equation}
\frac{\delta}{\delta A_i^a} =0 \Rightarrow
D_0 \, E_i^a = (D\times B)_i^a + D_0 D_i \lambda^a.
\label{m3}
\end{equation}
Generally, their solution requires a choice of gauge, which can be imposed at this level, or added in the effective action as usual. However, one can obtain a set of solutions by inspection,
setting $E_i^a = D_i \lambda ^a$, and demanding $D_i^2 \lambda^a =-\frac{\partial U}{\partial \lambda^a}$, further setting $A_i^a=0, B_i^a=0$,
hence $E_i =-\partial_i A_0 = \partial_i \lambda$, and considering the
solutions of the equation
\begin{equation}
\nabla^2 \lambda^a =-\frac{\partial U}{\partial \lambda^a}.
\label{m4}
\end{equation}
This is the same as the equation that would describe tunneling in a three-dimensional model of a scalar field with an inverted Coleman-Weinberg potential term. There, with a potential unbounded from below, the theory would develop an instability; here, this is not clear yet, since there is no kinetic term for $\lambda$ in the effective action. The solutions of (\ref{m4}) are spherically symmetric ``bubbles'' of non-zero $\lambda^a$, with
$\lambda^a \approx \mu$ near the center, and finite radius $R\approx \frac{1} {\alpha_s \, C_2 \, \mu}$
(the color index, $a$, with the non-zero field values, is arbitrary and can be rotated by a gauge transformation).
Obviously, there is also the solution with $\lambda =0$ and the other fields zero everywhere, which corresponds to the usual, perturbative Yang-Mills vacuum.
The solution with $\lambda^2=\mu^2$, equal everywhere to the other, non-zero extremum of $U$, will be discussed later.
There are also solutions that consist of combinations of various bubbles, separated at finite distances from each other, and with varying relative gauge orientations.
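As an illustration, the radial profile of a single bubble can be obtained from (\ref{m4}) by a standard shooting method: treating $r$ as a ``time'' variable, (\ref{m4}) describes a particle moving in the potential $U$ with a friction term $\frac{2}{r}$, and one bisects on the central value $\lambda(0)$. The following minimal Python sketch is ours and purely illustrative; it assumes the hypothetical units $\alpha_s C_2=\mu=1$ and is not part of the derivation:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative units: alpha_s * C_2 = mu = 1, so that
# U = c lam^4 (ln lam^2 - 1/2) with c = (alpha_s C_2)^2 / 4 = 0.25.
c = 0.25

def dU(lam):
    # dU/dlam = 4 c lam^3 ln(lam^2); the -1/2 piece cancels on differentiation.
    return 0.0 if abs(lam) < 1e-12 else 4.0 * c * lam**3 * np.log(lam**2)

def rhs(r, y):
    lam, dlam = y
    # Radial form of eq. (m4): lam'' + (2/r) lam' = -dU/dlam.
    return [dlam, -2.0 / r * dlam - dU(lam)]

def overshoots(lam0, r_max=40.0):
    sol = solve_ivp(rhs, (1e-6, r_max), [lam0, 0.0], rtol=1e-10, atol=1e-12)
    return bool(np.any(sol.y[0] < 0.0))   # crossed lambda = 0: lam0 too large

# A profile relaxing to lambda = 0 at infinity needs U(lambda(0)) > U(0) = 0,
# i.e. lambda(0) > mu e^{1/4} ~ 1.28, so lambda(0) is indeed of order mu.
lo, hi = 1.30, 3.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if overshoots(mid) else (mid, hi)
# The critical profile decays to zero over a radius of order
# 1/(alpha_s C_2 mu), in line with the estimate quoted in the text.
\end{verbatim}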
The covariance of (\ref{m1}-\ref{m3}), as well as the invariance of the generated effective potential, implies that we can apply time-independent gauge transformations, $\omega$, to the solutions with zero $A_i$, transforming to a pure gauge $A_i= \frac{i}{g} \omega^{-1} \partial_i \,\omega$ while keeping the magnetic field equal to zero and satisfying the remaining requirements, $E_i^a = D_i \lambda ^a\,$, $D_i^2 \lambda^a =-\frac{\partial U}{\partial \lambda^a}$, for the bubble solutions. These solutions therefore also exist in the well-known, topologically non-trivial sectors of the Yang-Mills theory \cite{cw}.
In order to proceed to a canonical formalism, one can exploit the gauge invariance of (\ref{meff1}) to set $A_0^a=0$,
hence $E_i^a = \dot{A_i^a}$, and consider the effective action
\begin{equation}
S_{\rm eff, 0}=\int L_{\rm eff, 0} = \int \frac{1}{2} \dot{A_i^a}\dot{A_i^a} - \frac{1}{2} B_i^a B_i^a
+ \lambda^a D_i \dot{A_i^a} +U,
\label{meff3}
\end{equation}
from which the equivalent set of equations
\begin{equation}
\frac{\delta}{\delta \lambda^a}=0 \Rightarrow D_i \dot{A_i^a}= D_i E_i^a = -\frac{\partial U}{\partial \lambda^a},
\label{m11}
\end{equation}
\begin{equation}
\frac{\delta}{\delta A_i^a}=0 \Rightarrow \partial_0^2 A_i^a = \dot{E_i^a}=
(\nabla\times {B})^a_i +\partial_0 D_i \lambda^a,
\label{m22}
\end{equation}
is derived; these equations also admit the bubble solutions
with $E_i^a =D_i \lambda^a,\, B_i^a =0,\, D_i^2 \lambda^a = -\partial U/ \partial\lambda^a$.
Specifically, there is a gauge transformation, $\tilde{\omega}$, that changes the previous
solution $A_i^a=0, \, A_0^a =-\lambda^a, \, E_i^a =\partial_i \lambda^a$ to this gauge with
$\tilde{A_0^a}=0, \, \tilde{A_i}=\frac{i}{g} \tilde\omega^{-1} \partial_i \,\tilde\omega , \,
\tilde{E_i} =\tilde{\omega} E_i \tilde\omega^{-1}$, and still allows the freedom of time-independent gauge transformations.
Now, the three canonical variables, $Q^a_i = A^a_i$, are further constrained by (\ref{m11}), and admit
the conjugate canonical momenta,
\begin{equation}
P_i^a = \frac{\partial L_{\rm eff, 0}}{\partial \dot{A_i^a}} = \dot{A_i^a} -D_i\lambda^a = E_i^a-D_i\lambda^a .
\label{pq}
\end{equation}
An effective Hamiltonian can be defined as
\begin{eqnarray}
H_{\rm eff} &=&\int_3 P_i^a \dot{Q_i^a} - L_{\rm eff, 0} \nonumber \\
&=&\int_3 \frac{1}{2}P_i^a P_i^a + \frac{1}{2} B_i^a B_i^a +
\frac{1}{2} (D_i \lambda^a)^2 + P_i^a D_i\lambda^a - U,
\label{heff}
\end{eqnarray}
where $\int_3$ denotes integration in three-dimensional space.
The canonical equations
\begin{equation}
\dot{Q_i^a}= \frac{\delta H_{\rm eff}}{\delta P_i^a} = P_i^a + D_i \lambda^a,
\label{h11}
\end{equation}
\begin{equation}
\dot{P_i^a}= -\frac{\delta H_{\rm eff}}{\delta Q_i^a} = (\nabla\times B)_i^a,
\label{h22}
\end{equation}
together with the constraint
\begin{equation}
\frac{\delta H_{\rm eff}}{\delta \lambda}=0 \Rightarrow D_i^2 \lambda^a + D_i P_i^a = -\frac{\partial U}{\partial \lambda^a}
\label{h33}
\end{equation}
are easily seen to be equivalent to (\ref{m11}, \ref{m22}).
Substituting (\ref{pq}) in (\ref{heff}), in order to express the Hamiltonian and the energy in terms of the physical fields,
one gets
\begin{equation}
H_{\rm eff} =\int_3 \frac{1}{2} E_i^a E_i^a + \frac{1}{2} B_i^a B_i^a - U.
\label{energy}
\end{equation}
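Indeed, the $\lambda$-dependent terms in (\ref{heff}) combine into a perfect square,
\begin{equation}
\frac{1}{2}P_i^a P_i^a + P_i^a D_i\lambda^a + \frac{1}{2}(D_i \lambda^a)^2
= \frac{1}{2}\left(P_i^a + D_i\lambda^a\right)^2 = \frac{1}{2}E_i^a E_i^a \,.
\end{equation}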
As far as the bubble solution is concerned, its energy is
\begin{equation}
\epsilon_{\rm b}= \int_3 \frac{1}{2} (\nabla \lambda)^2 -U,
\label{eb}
\end{equation}
and is positive (much like the action of an instanton that mediates vacuum decay for a
three-dimensional theory with an inverted potential).
Here, dimensional arguments from (\ref{m0}, \ref{m4}) show that
$\epsilon_{\rm b} \approx \frac{\mu}{C_2 \alpha_s}$.
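Indeed, with $\lambda\approx\mu$ inside a region of size $R\approx \frac{1}{\alpha_s C_2 \mu}$, both the gradient and the potential terms are of order $(\alpha_s C_2)^2\mu^4$, so that
\begin{equation}
\epsilon_{\rm b} \approx R^3\left[(\mu/R)^2 + (\alpha_s C_2)^2\mu^4\right] \approx \frac{\mu}{\alpha_s C_2}\,.
\end{equation}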
It is well-known that instanton solutions are unstable
against expansion, so it is important to examine the stability of the present solutions.
It is easy to see, however, that the bubble solutions, as well as all solutions to the equations of motion
stemming from (\ref{meff3}) or (\ref{heff}), are classically stable, since the second variation
of the Hamiltonian (\ref{heff}) around a solution is
\begin{equation}
\delta^2 H_{\rm eff} = \int_3 (\delta P_i^a)^2 + \delta\lambda^a (-D_i^2 - U'') \delta\lambda^a
+ \delta P_i^a D_i \delta \lambda^a
\end{equation}
(without the variation from the magnetic term, which is obviously positive).
The operator $-\nabla^2 - U''$ has a negative eigenvalue, corresponding to the
similar tunneling problem mentioned before. Here, however, all variations have to satisfy the constraint
(\ref{h33}), hence
$(- D_i^2 - U'') \delta \lambda^a = D_i \delta P_i^a$.
Substituting this into the second term and integrating by parts, $\delta\lambda^a D_i \delta P_i^a = -D_i\delta\lambda^a\, \delta P_i^a$ cancels the cross term, and the solutions are stable, since then $\delta^2 H_{\rm eff} = \int_3 (\delta P_i^a)^2 \geq 0$ (plus terms from the variation of the
canonical coordinates in the magnetic field, which are also positive).
In particular, the bubble solutions derived here are stable (although not topologically protected) soliton solutions, ``glueballs'' of the (chromo)-electric field.
\section{The Euclidean action}
The Euclidean action obtained from
(\ref{meff1}) is
\begin{equation}
S_{\rm E, eff}= \int \frac{1}{2} E_i^a E_i^a + \frac{1}{2} B_i^a B_i^a +
\lambda^a D_i E_i^a - U(\lambda).
\label{ee}
\end{equation}
The rotations involved are
\begin{eqnarray}
\nonumber
S_{\rm M} &\rightarrow& i S_{\rm E} \\ \nonumber
t &\rightarrow& - i \tau \\ \nonumber
A_0, \lambda, \partial_0 &\rightarrow& i A_0, i \lambda, i \partial_\tau,
\end{eqnarray}
integrations are over $d\tau d^3x$,
and the Euclidean equations are
\begin{equation}
\frac{\delta}{\delta\lambda^a}=0 \,\Rightarrow
D_i E_i^a = \frac{\partial U}{\partial \lambda^a},
\label{e1}
\end{equation}
\begin{equation}
\frac{\delta}{\delta A_0^a} =0 \,\Rightarrow
D_i^2 \lambda^a = D_i E_i^a,
\label{e2}
\end{equation}
\begin{equation}
\frac{\delta}{\delta A_i^a} =0 \Rightarrow
D_0 \, E_i^a =- (D\times B)_i^a + D_0 D_i \lambda^a,
\label{e3}
\end{equation}
from which some preliminary observations can be made:
a) The configuration with $\lambda =0$ and the gauge fields also equal to zero is a vacuum solution, $\Omega_0$, of both the
Euclidean and Minkowski equations, with zero energy.
b) The bubble solutions are not stationary points of the Euclidean equations of motion, so, apart from their
classical stability, they may also be quantum mechanically stable against tunneling.
c) The Euclidean action has an imaginary part from the continuation of the effective potential, which grows for
large values of $\lambda$ and hints at instabilities.
d) All points that satisfy $\lambda^2 =\mu^2$, with the gauge fields equal to zero, are gauge equivalent copies of the same vacuum, $\Omega_\mu$,
a solution of both the Minkowski and the Euclidean equations, with positive Euclidean action and energy density.
Although it is a maximum of $-U$, it is classically stable by the previous discussion (as is $\Omega_0$).
Specifically, in the $A_0=0$ gauge, the vacuum $\Omega_\mu$ consists of time-independent, covariantly constant
configurations, $\lambda(\vec{x})= \omega(\vec{x}) \, \bar{\lambda}\, \omega(\vec{x})^{-1}$,
with $\bar{\lambda}$ a fixed adjoint vector with $\bar{\lambda}^2=\mu^2$ (and $F_{\mu\nu}=0$, with $A_i=\frac{i}{g}\omega(\vec{x}) \, \partial_i\, \omega(\vec{x})^{-1}$ as usual).
e) The Euclidean solutions generally satisfy $D_i^2 \lambda^a = \partial U / \partial \lambda^a$.
For $E_i=\partial_i \lambda$ and $A_i, B_i=0$ this becomes
\begin{equation}
\nabla^2 \lambda^a = \frac{d^2 \lambda^a}{d r^2} + \frac{2}{r}\frac{d \lambda^a}{d r} =
\frac{\partial U}{\partial \lambda^a},
\label{et}
\end{equation}
which, if it were not for the ``friction'' term, would describe ``rolling'' of $\lambda$, from $\mu$, down through $0$, to the opposite but equivalent point of $\Omega_\mu$.
It is similar to the equation for a three-dimensional soliton, which does not exist because of Derrick's theorem \cite{cw}.
The possibility of solutions of the full Euclidean equations will be discussed later.
f) There is no solution of (\ref{et}) starting from $\lambda=0$ that would describe decay of the $\Omega_0$ vacuum
to larger values of $\lambda$ with negative energy density; this does not rule out, of course, other Euclidean solutions with more general field configurations.
g) At both $\Omega_0$ and $\Omega_\mu$, the remaining equations in Minkowski and Euclidean spacetime (\ref{m1}-\ref{m3})
and (\ref{e1}-\ref{e3}) are equivalent to the usual Yang-Mills equations, $D_\mu F^{a \mu\nu}=0$,
so one expects the well-known, Lorentz-invariant physics and results.
\section{Confinement}
Non-zero values of $\lambda$ signal confinement.
In fact, $\Omega_\mu$, with a constant $\lambda^2=\mu^2$ and the
gauge fields equal to zero, is the confining vacuum. The diagram of Fig.~5, with external insertions of constant $\lambda$ (two smaller blobs),
gives, in the static limit, a factor of
$C_2 \frac{g^2 \lambda^2}{\vec{k}^4}$ that corresponds to a confining potential between two colored sources (two larger blobs).
Because of the combinatorics and the Feynman rules described before, there are no other diagrams that contribute to this order in the static limit; there
are higher loop diagrams, but these do not spoil the result.
Thus, the confining interaction arises from the $\lambda$-condensate, with the same mechanism that generated
the effective potential term for $\lambda$.
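Schematically (ignoring overall signs and the color factors of the external sources), the $1/\vec{k}^4$ behavior corresponds in position space to a linearly rising potential, since, in the sense of distributions,
\begin{equation}
\int \frac{d^3 {k}}{(2 \pi)^3}\, \frac{e^{i\vec{k}\cdot\vec{r}}}{\vec{k}^4} = -\frac{r}{8\pi}
\end{equation}
up to an $r$-independent constant fixed by the regularization; the insertion of Fig.~5 thus generates a potential growing linearly with the separation, with a string tension of order $\alpha_s^2 C_2\, \mu^2$ in the confining vacuum.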
Two charges that are a small distance apart in the perturbative vacuum have a Coulomb interaction (\ref{coulomb}).
As they move further apart, stable bubble solutions, with adjacent electric field dipoles, are formed between them.
Bubble condensation eventually drives the system to the confining vacuum.
As was explained before, the confining vacuum, $\Omega_\mu$, has higher energy density than the perturbative vacuum, $\Omega_0$, and
is classically stable (essentially because of kinetic and gradient terms). Its quantum mechanical stability depends on the Euclidean action and will be discussed in the following Section.
Obviously, the confining vacuum consists of the entire set of points $\lambda^2=\mu^2$, as described in comment d) of Section 4, and there is no spontaneous symmetry breaking, since $\lambda$ is not a dynamical field with additional degrees of freedom or a kinetic term.
\section{More on Euclidean solutions}
First, I will show that there are no finite action solutions of the Euclidean equations that mediate the decay of the confining vacuum, so, besides its classical stability, it is also quantum mechanically stable.
In fact, one is interested in the difference
\begin{equation}
{\cal B}=S_{\rm E, solution} -S_{\rm E, background},
\end{equation}
where, in our case, the background configuration is the confining vacuum with $\lambda^2 = \mu^2$, which decays via a solution to the Euclidean equations (with the boundary condition that it tends to $\mu^2$ at Euclidean infinity). From (\ref{ee}) we have
(integrations are over four-dimensional Euclidean spacetime)
\begin{eqnarray}
\nonumber
{\cal B} &=& \int \, \frac{1}{2} E^2 + \frac{1}{2} B^2 + \lambda^a \,D_i E_i^a -(U-U(\mu^2)) \\
& =& I_E +I_B+I_C +I_U,
\end{eqnarray}
with obvious notation, and $I_E, I_B > 0$, $I_U <0$.
After a rescaling of the fields, with $A_M (x_M) \rightarrow \alpha A_M (\alpha \, x_M),\,\, \lambda(x_M) \rightarrow \lambda(\alpha \, x_M)$, where $x_M=(x_0, x_i)$ denotes the Euclidean coordinates ($x_0$ being the Euclidean time $\tau$), we have
\begin{equation}
{\cal B}(\alpha)=I_E + I_B + \frac{1}{\alpha} I_C + \frac{1}{\alpha^4} I_U,
\end{equation}
and, if the field configuration is a solution of the Euclidean equations at $\alpha =1$, we get
$I_C + 4 I_U =0$. After a different rescaling, with $A_0 \rightarrow A_0 (x_0, \alpha x_i), \,\, A_i \rightarrow \alpha A_i (x_0, \alpha x_i),\,\, \lambda \rightarrow \lambda (x_0, \alpha x_i)$,
we have
\begin{equation}
{\cal B}(\alpha)=\frac{1}{\alpha}I_E + {\alpha}I_B + \frac{1}{\alpha} I_C + \frac{1}{\alpha^3} I_U,
\end{equation}
hence $I_E + I_C + 3 I_U = I_B$. Combining the last two results we get
$I_E = I_B + I_U$, and, in particular, $I_E < I_B$.
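For clarity, the two stationarity conditions just used are
\begin{equation}
\frac{d{\cal B}(\alpha)}{d\alpha}\bigg|_{\alpha=1}=-I_C-4I_U=0
\qquad {\rm and} \qquad
\frac{d{\cal B}(\alpha)}{d\alpha}\bigg|_{\alpha=1}=-I_E+I_B-I_C-3I_U=0
\end{equation}
for the two rescalings, respectively; eliminating $I_C=-4I_U$ from the second yields $I_E=I_B+I_U$.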
However, the two Euclidean equations (\ref{e2}) and (\ref{e3}), can be also written as
\begin{equation}
D_i (E_i^a - D_i \lambda^a )=0,
\label{e22}
\end{equation}
\begin{equation}
D_0 \,( E_i^a - D_i \lambda^a)=- (D\times B)_i^a,
\label{e33}
\end{equation}
and can be compared to the usual Euclidean equations for Yang-Mills, which, assuming $O(3)$-symmetry \cite{taubes}, are solved by
\begin{equation}
E_i^a - D_i \lambda^a = \pm B_i^a.
\label{taubes1}
\end{equation}
Equation (\ref{taubes1}) leads to
\begin{equation}
I_E =I_B + \int \frac{1}{2} (D_i \lambda)^2 \pm \partial_i (\lambda^a \, B_i^a),
\label{taubes2}
\end{equation}
hence, at least for field configurations that fall sufficiently fast and are topologically trivial at infinity, $I_E > I_B$, in contrast with the previous result.
Thus, there are no Euclidean solutions with finite ${\cal B}$ that mediate the decay of the confining vacuum.
In the previous derivation, the first Bianchi identity,
\begin{equation}
D_i B_i^a =0,
\label{bianchi1}
\end{equation}
was used,
and the second Bianchi identity,
\begin{equation}
D_0 B_i^a = - (D \times E)_i^a,
\label{bianchi2}
\end{equation}
is seen to hold also for the reshuffling of $E_i - D_i \lambda$ in (\ref{e2}, \ref{e3}).
It is in (\ref{e1}) or (\ref{et}) that the new non-linearities are expressed;
(\ref{e2}, \ref{e3}) can be treated as the usual Yang-Mills and generally lead to (\ref{taubes1}).
Next, I will show that there are no solutions of the Euclidean equations that mediate the decay of the perturbative vacuum. Since $U(\lambda^2=0)=0$, I will show that there are no finite action solutions of the Euclidean equations whatsoever.
The derivation proceeds as before, except that now the related $I_U = \int -U$ has no definite sign. Still, the relations $I_E = I_B +I_U$, $I_E +I_C +3 I_U=I_B$,
and $I_C+4 I_U=0$
are derived,
and (\ref{taubes1}) holds for every Euclidean solution with at least $O(3)$-symmetry (which will be assumed). Then (\ref{taubes2}), for sufficiently smooth and topologically trivial solutions, leads to
\begin{equation}
I_E =I_B + \int -\frac{1}{2} \lambda\, D^2_i \lambda =
I_B -\frac{1}{2} I_C
\label{taubes3},
\end{equation}
after using (\ref{e2}).
The previous relations can only be consistent if $I_U = I_C =0$.
Now $I_C = \int \lambda^a \frac{\partial U}{\partial \lambda^a}$ also holds on the Euclidean solutions by (\ref{e1}, \ref{e2}),
and, for a Coleman-Weinberg potential of the form appearing here,
$U= c \, \lambda^4 ( \ln \frac{\lambda^2}{\mu^2} -\frac{1}{2})$, with $c$ a constant, and
$\lambda^2 = \lambda^a \lambda^a$,
we have
\begin{equation}
\lambda^a \frac{\partial U}{\partial \lambda^a} = 4 U + 2 c \, \lambda^4 ,
\end{equation}
hence, since $I_U = \int -U$, $I_C = -4 I_U + 2c\int \lambda^4$. Together with $I_U=I_C=0$, this forces $\int\lambda^4=0$, so
the previous relations cannot be satisfied for a non-trivial (non-zero $\lambda$) Euclidean solution with finite action.
The only solutions to the Euclidean equations have constant $\lambda=0$ (or $\lambda^2=\mu^2$, in the previous case) and are the usual Yang-Mills instantons, with $\vec{E}^a=\pm \vec{B}^a$ and $I_E=I_B$.
For the field configurations at infinity, it should be noted that,
generally, there is no spontaneous symmetry breaking,
and there is no $U(1)$-electromagnetic charge.
This, of course, does not forbid solutions that may have an overall $U(1)$ symmetry.
As far as vacuum transitions and decays are concerned, however, it is difficult to see how they could involve a net ``magnetic'' charge.
Topologically non-trivial solutions to generalizations of the Euclidean equations (\ref{e1}, \ref{e2}, \ref{e3})
or (\ref{e1}, \ref{taubes1}) in other backgrounds may exist. Then, their
asymptotic values, with $\lambda^2$ going to $\mu^2$ at spatial infinity, may correspond to a non-trivial second homotopy group,
but they are not necessarily topologically stable. Their ``core'' is in the perturbative vacuum, $\lambda=0$, which has lower
energy and lower Euclidean action (at least in the $-U$ term). Their instabilities reflect the fact that they are possible Euclidean solutions, describing quantum tunnelling phenomena.
In fact, when one considers thermal fluctuations and matter fields, it is expected that
such solutions to the Euclidean field equations, with finite ${\cal B}$, and trivial or non-trivial topology, exist
(a generalization of the ansatz of \cite{witten} may be relevant).
At finite temperature, there are modifications to the generated effective potential; the fact that the thermal phase transition in Coleman-Weinberg models
is of first order already gives a prediction for the order of the deconfining phase transition. At finite temperature, however, the situation
is intrinsically non-covariant because of the plasma background, and there are more terms generated in the effective action that need to be considered.
Colored, fermionic, matter fields also couple to $\lambda$ via Gauss's law,
as in (\ref{l1}) and its generalization to the non-Abelian case,
and can ``tilt'' the effective potential, $U(\lambda)$, thereby enabling
finite action solutions of (\ref{et}) and (\ref{taubes1}) that connect the two vacua.
\section{Energy-momentum tensor, global current, Lorentz invariance, the bag model, chiral symmetry breaking, etc.}
Because of the effective potential term, generated by quantum effects,
the effective Lagrangian
\begin{equation}
{\cal L}_{\rm eff} = \frac{1}{2} E_i^a\,E_i^a - \frac{1}{2} B_i^a \, B_i^a +\lambda^a \, D_i E_i^a +U(\lambda)
\end{equation}
singles out Gauss's law and the time component that it involves. Eventually, however,
two vacua emerged: the perturbative, Coulomb vacuum, with $\lambda^2 =0$ and $U=0$, where
one has the usual perturbative theory, and the confining vacuum, with $\lambda^2 = \mu^2$ and positive $-U$. They are both classically and quantum mechanically stable, and one may further examine the symmetry properties of the theory, first by considering the general expression,
\begin{equation}
T^{\mu \nu}= \sum_n \frac{\partial {\cal L}_{\rm eff}}{\partial(\partial_\mu \phi_n)}
\,\partial^\nu \phi_n
- \eta^{\mu\nu} {\cal L}_{\rm eff},
\end{equation}
where $\phi_n$ denotes all the fields and their components in ${\cal L}_{\rm eff}$.
This
can be improved with the addition of
\begin{equation}
\Delta T^{\mu\nu} = -\partial_\rho (F^{a \mu\rho}A^{a \nu}),
\end{equation}
a total derivative with $\partial_{\mu} \Delta T^{\mu\nu} =0$,
and the further addition of
$\Delta {\tilde T^{\mu\nu}}$ with
\begin{equation}
\Delta {\tilde T_{00}}=\partial_i (D_i \lambda^a \,A_0^a),\,\,
\Delta {\tilde T_{i0}}=\partial_0 (D_i \lambda^a \,A_0^a),
\end{equation}
\begin{equation}
\Delta {\tilde T_{ij}}=\partial_0 (D_i \lambda^a \,A_j^a),\,\,
\Delta {\tilde T_{0j}}=\partial_i (D_i \lambda^a \,A_j^a),
\end{equation}
also a total derivative with $\partial_{\mu} \Delta {\tilde T^{\mu\nu}} =0$.
The final expression
\begin{equation}
\Theta^{\mu\nu}=T^{\mu\nu}+\Delta T^{\mu\nu} +\Delta {\tilde T^{\mu\nu}}
\end{equation}
also satisfies $\partial_\mu \Theta^{\mu\nu} =0$, and
\begin{equation}
\Theta^{00}=\frac{1}{2}E_i^a E_i^a +\frac{1}{2}B_i^a B_i^a -U ={\cal H}_{\rm eff},
\end{equation}
coincides with the expression in (\ref{energy})
(modulo surface terms of the form $\lambda^a D_i E^a_i + E^a_i D_i\lambda^a $ that will be ignored for fields that fall sufficiently fast and are topologically trivial at infinity).
However, the remaining components of $\Theta^{\mu\nu}$, for a general
$D_i \lambda^a \neq 0$, are not symmetric, nor do they form a Lorentz-invariant tensor (for example $\Theta^{0i}=(\vec{E}-\vec{D\lambda})\times \vec{B}$,
$\Theta^{i0}=\vec{E}\times \vec{B}$).
At the vacua, where $D_i\lambda^a =0$,
\begin{equation}
\Theta^{\mu\nu}=\Theta^{\mu\nu}_{Y-M} - U \, \eta^{\mu\nu}
\end{equation}
is the symmetric, conserved, Lorentz invariant, energy-momentum tensor of the theory.
$\Theta^{\mu\nu}_{Y-M}$ is the usual, traceless, energy-momentum tensor
for perturbative Yang-Mills ($\Theta^{\mu\nu}_{Y-M}=F^{a \mu \rho}{F^{a \nu}}_\rho -\frac{1}{4} \eta^{\mu\nu}F^{a\lambda\rho}F^a_{\lambda\rho})$
and coincides with $\Theta^{\mu\nu}$ at the perturbative vacuum, where $U=0$. At the confining vacuum, with $\lambda^2 = \mu^2$ one
also gets a Lorentz invariant theory, with the additional energy-momentum tensor of
a bag model, $-U\, \eta^{\mu\nu}$, with a positive constant, $-U(\mu^2)$,
and a positive, non-zero trace $\Theta^\mu_\mu = - 4 U(\mu^2)$.
Physics is Lorentz invariant at both vacua. However, the transitions between the two vacua involve non-trivial (chromo)-electric and -magnetic field configurations (for example, the solitons, ``glueballs'' of finite radius of the chromoelectric field, described earlier) and cannot be described in a Lorentz invariant manner.
Obviously, a physical quantity, like the rate of a transition from one vacuum to the other, should be Lorentz invariant.
However, for pure Yang-Mills theory, the two vacua do not decay.
In fact, the lack of a Lorentz invariant energy-momentum tensor that connects them is related to that.
The vacua may ``jump'' to configurations where $-U$ is negative, with fluctuations of $\lambda$ of order $\mu$, from where, depending on the background involved, they may settle to the absolutely stable confining vacuum via processes like the condensation of the previously derived solitons (``glueballs''). But there is no solution of the Euclidean equations, for pure Yang-Mills, that describes such a process.
Such fluctuations and transitions between the two vacua obviously happen in the presence
of background configurations with finite temperature and/or fermion density
(or other fluctuations, as in the purely academic example in which one considers two initially static, colored sources and pulls them apart)
but there, the issue of Lorentz invariance, and especially its description via an effective action, is more involved, or even moot.
Continuing the investigation of the symmetry properties of the theory,
one may consider the expression
\begin{equation}
J_\mu^a = \sum_n \frac{\partial {\cal L}_{\rm eff}}{\partial(\partial_\mu \phi_n)}
\frac{\delta \phi_n}{\delta \alpha^a},
\end{equation}
where the $\alpha^a$'s are constant, global parameters of the gauge group.
Again, for general configurations with $D_i \lambda^a \neq 0$, this expression contains factors of $D_i \lambda^a$ and is not Lorentz invariant.
At both vacua, where $D_i \lambda^a =0$, this expression is Lorentz invariant, it is the same at both vacua, and coincides with the usual expression for the Yang-Mills theory
($J_\mu^a= f^{abc} A^{b \rho} F^c_{\rho\mu}$).
It is a conserved and Lorentz invariant current, but it is not gauge invariant. The reason is the same at both vacua. Namely, even in the confining vacuum, the theory still contains the massless gluons and the Coulomb interaction, as in Fig.~1, in addition to the confining interaction of Fig.~5. The limitations of \cite{ww}, therefore, continue to hold
(it is noted again that, for both $T_{\mu\nu}$ and $J_\mu$, the calculations and improvements were made for fields that fall sufficiently fast and are topologically trivial at infinity).
Finally, although this work investigates the vacuum structure of pure Yang-Mills theory, it is also of interest to make another connection with older phenomenological models and notice that, when fermionic matter fields are included, they also couple to $\lambda$ via Gauss's law, as in (\ref{l1}) and its generalization to the non-Abelian theory.
In the confining vacuum, therefore, the $\lambda^2 = \mu^2$ condensate gives an effective interaction of the Nambu--Jona-Lasinio type,
\begin{equation}
G_{NJL} (\bar{\psi} \gamma^0 \psi)^2.
\end{equation}
In terms of the infrared and ultraviolet cut-offs, $\Lambda_{IR}, \Lambda_{UV}$, needed to define the effective interaction \cite{chiral},
$G_{NJL}\approx \frac{g^2 \, \mu^2}{\Lambda_{IR}^4}$, and chiral symmetry breaking occurs when $G_{NJL} \stackrel{>}{\sim} \frac{1}{\Lambda_{UV}^2}$,
and is also related to the confining mechanism, as expected.
\section{Comments on the strong-CP problem}
It is well-known that, in the $A_0 =0$ gauge, space-dependent gauge transformations
in the perturbative vacuum
fall into topologically distinct
sectors, $|n>$, that correspond to the winding number, $n$, of maps of the compactified three-dimensional space to
the group, $G$. These then combine into the so-called $\theta$-vacuum,
$|\theta> = \sum_n e^{-i n \theta}|n>$, and $\theta$ is equivalent to an additional periodic parameter multiplying
a CP-violating term, proportional to $\vec{E}\cdot\vec{B}$, in the action \cite{cw}.
The same situation occurs in the confining vacuum.
It also has a similar, non-trivial structure, described in comment d) of Section 4, and
the usual Yang-Mills instantons exist there too, as can be easily seen from the Euclidean equations
(\ref{e1}-\ref{e3}) or (\ref{taubes1}) and the discussion of Section~6.
The two vacua for pure Yang-Mills are completely stable, but the presence of two $\theta$-vacua, that can be connected via a multitude of physical processes,
makes it very difficult, if not impossible, for $\theta$ to have any value other than zero.
For example, the stable soliton ``glueball'' solutions derived in Section 3
are topologically trivial configurations in the perturbative vacuum, with fixed directions in color space;
one may imagine, however, their condensation in three-dimensional space with
varying such directions,
so that they cover it, for example, with winding number unity, thereby
corresponding to a topology-changing transition,
$|n>\rightarrow |n+1>$, from $\Omega_0$ to $\Omega_\mu$.
Then, since
$|\theta>\rightarrow\sum_n e^{-i n \theta}|n+1> = e^{i \theta} |\theta>$,
consistency implies that $\theta$ is essentially restricted to zero,
condemning the $\theta$-vacuum to an unglorious demise.
The fact that the usual Yang-Mills instantons (solutions of $\vec{E}^a=\pm \vec{B}^a $), as well as all the
Euclidean and Minkowski solutions of the usual Yang-Mills equations, still exist in
both the perturbative and the confining vacuum, as was repeatedly mentioned before, ensures that most of the traditional folklore
regarding, for example, the solution of the $U(1)$-problem, remains unharmed.
Both vacua, for pure Yang-Mills at zero temperature,
are completely stable.
Physically, however, the two vacua are obviously connected, as was discussed before,
and transitions between them, for example, in backgrounds of finite temperature or density, are expected to exist.
Unless every transition between the two vacua follows the $|n>\rightarrow |n>$ pattern, $\theta$ is effectively zero.
The solution to the strong-CP problem, therefore, is also related to the confining vacuum and mechanism, as
has been argued before \cite{cp}.
\section{Discussion}
In the usual process of quantization of a non-Abelian gauge theory, it is often stated that $A_0$ acts as a Lagrange
multiplier enforcing Gauss's law (a constraint equation that itself contains $A_0$, which also poses as the associated Lagrange multiplier). Sometimes a shift $A_0 + \lambda \rightarrow A_0$ is performed, one generally moves between the Hamiltonian and Lagrangian formalisms, and finally one arrives at a set of Feynman rules
that claim to express the constraint, but that could have been derived anyway, by simply inserting unity
in the path integral and splitting it via the Faddeev-Popov trick, without mentioning constraint quantization at all.
It is the claim of this work that the aforementioned shift and the general procedure of treating the constraint are not exact beyond tree level,
and that there are some ``left-over'', $\lambda^2$-dependent, terms described here. Eventually, by treating these terms, I demonstrated the mass gap of the Yang-Mills theory,
clarified the structure of the confining vacuum and the confining mechanism, their relation to the perturbative vacuum, and gave some quite suggestive arguments for the subsequent resolution of the strong-CP problem.
The Feynman rules used here came from a procedure that is not initially explicitly Lorentz invariant, but they combine to reproduce all known perturbative processes.
All classical solutions of the Lorentzian equations of motion, were shown to be classically stable, and both vacua were also shown to be quantum mechanically stable.
At both vacua, the remaining equations of motion are the usual ones, $D_\mu F^{a\mu\nu}=0$.
The energy-momentum tensor is the usual, Lorentz invariant, expression at both vacua, with the addition of a Lorentz invariant, bag model, contribution.
As was expected, the confining vacuum provides the resolution and explanation of the strong-CP problem, as well as chiral symmetry breaking, and enjoys a Lorentz invariant energy-momentum tensor, with a non-zero trace, that contributes to the baryon mass, and may have other interesting experimental and cosmological consequences.
The usual, perturbative results, like, for example, the trace anomaly, still exist, but there is no need to invoke dubious gauge, or fermion, field ``condensates'', although, of course, additional, ``non-perturbative'' effects may also exist.
Euclidean or Lorentzian solutions that connect the two vacua, like
the stable (Lorentzian) soliton solutions derived here (spherically symmetric ``glueballs'' of the chromo-electric field) obviously cannot be manifestly Lorentz-frame independent.
The two vacua, for pure Yang-Mills at zero temperature, are absolutely stable, and transitions between them can only occur via fluctuations (of order $\mu$), for example in a background of finite temperature or density. Then, the problem of Lorentz invariance of the relevant quantities (if it can be posed at all) becomes more involved and lies beyond the scope of the effective action derived and used here. As far as pure Yang-Mills at zero temperature is concerned, the two vacua are not connected, and physics is Lorentz invariant at each one.
Although many expected properties were derived with the present approach (mass gap, confinement, chiral symmetry breaking, resolution of the strong-CP problem, bag constant), the possibility of a quantum field theory (especially the ubiquitous Yang-Mills theory) with two separate, stable, Lorentz invariant vacua is quite surprising. It is also far reaching for the search for a unified theory, the applicability and generality of the Lagrangian formalism, the effective field theory approach, and other problems beyond the strong interactions.
The scales involved in quantum chromodynamics, as well as properties such as factorization \cite{al}, make it difficult to investigate the characteristics of the two vacua. For other non-Abelian gauge theories that may exist at higher energy scales, this may not be the case. The experimental and cosmological consequences of processes at these scales then will be quite distinct.
Also, the relation between different phases of a non-Abelian gauge theory, like the confining, Coulomb, and spontaneously broken Higgs phases, can hopefully be investigated in future works using extensions of the present formalism. In the weak interactions, the scales involved ensure that they are typically observed in the broken phase. Generally, for a non-Abelian gauge theory, there is an interplay between its characteristic generated scale (the scale $\mu$ derived in this work), the scales of the masses of the fermionic matter fields subject to this interaction, any ``Higgs''-type scalar fields that are involved, and the ``environment'', with its temperature and other experimental scales. This interplay leads to a rich structure of different phases (confining, Coulomb, and Higgs, among them) and of phase transitions between them, which can hopefully be investigated more thoroughly using some of the tools described in this work.
\section{Introduction}
Image scaling aims to produce a version of an image at a different size, preserving the original content as much as possible and with minor loss of quality, in two opposite directions: downscaling and upscaling.
Downscaling is a compression process by which the size of the high-resolution (HR) input image is reduced to obtain the low-resolution (LR) target image. Conversely, upscaling is an enhancement process in which the size of the LR input image is enlarged to recover the HR target image.
Image scaling (also termed image resampling or image resizing) is a widely used tool in several fields such as medical imaging, remote sensing, gaming, electronic publishing, autonomous driving, and aerial photography
\cite{Atkinson, Meijering, App1, App2, App3, App4}. For example, upscaling allows highlighting important details of the image in remote sensing and medical applications \cite{Atkinson, Meijering}, while downscaling is a fundamental operation for fast browsing or sharing purposes \cite{App1, App2}. Other applicative examples concern scenarios like deforestation monitoring, traffic surveillance, and many other engineering tasks. Sometimes image scaling is used for illicit purposes, e.g., to automatically generate camouflage images whose visual semantics change dramatically after scaling \cite{Xiao}. In these cases, it is very important to detect the scaling effects in order to defend against such attacks and adopt suitable countermeasures \cite{Lin, Bruni}.
From a computational point of view, image scaling can be addressed by different numerical methods (see Section 2), whose main critical points typically are: a) undesired effects, such as ringing artifacts and aliasing, due to the increase/decrease in the number of pixels which introduces/reduces information to the image; b) computational efficiency in performing the resampling task in real-world applications. Moreover, most existing methods treat the resampling in only one direction since downscaling and upscaling are often considered separate problems in literature \cite{scaling1}.
We aim to propose a scaling method that works in both downscaling and upscaling directions. To this aim, looking at the scaling problem as an approximation problem, we employ an interpolation polynomial based on an {\it adjustable} filter of de la Vall\'ee Poussin (briefly VP) type, which can be suitably modulated to improve the approximation (see, e.g., \cite{Them_Van_Barel,Occo_Them_LNCS,Themistoclakis_2011}).
Indeed, VP type interpolation has been introduced in the literature as a valid alternative to Lagrange interpolation to provide a better pointwise approximation, especially when the Gibbs phenomenon occurs \cite{Occo_Them_LNCS,Occo_Them_AMC,Occo_Them_Apnum}. In fact, an interesting feature of VP filtered approximation is the presence of a free additional degree parameter, which is responsible for the localization degree of the polynomial interpolation basis (the so--called fundamental VP polynomials) around the nodes.
By changing this parameter, we may modulate the typical oscillatory behavior of the fundamental Lagrange polynomials according to the data, improving the approximation without destroying the interpolation property and keeping fixed the number of the interpolation nodes.
Moreover, it is also worth noting that VP interpolation can be embedded in a wavelet scheme with decomposition and reconstruction algorithms very fast since based on fast cosine transforms \cite{CaThwave}.
From a theoretical point of view, the literature concerning VP filtered approximation provides many convergence theorems, also in the uniform norm. They estimate an error comparable with the error of the best polynomial approximation \cite{Themistoclakis_2011,WoulaL1} and allow one to predict the convergence order from the regularity of the function to be approximated \cite{Occo_Them_DRNA}.
Due to such nice behavior, VP approximation has been usefully applied as a demonstration tool to carry out proofs of different theorems \cite{mastrorussothem,Them_gauss,Mastro_them_acta,OccoRusso2011,prandtl_occo_db}.
From a more applicative point of view, it has been used to solve singular integral and integro-differential equations \cite{Air_ThMastro}, \cite{Prandtl_nostra} or derive good quadrature rules for the finite Hilbert transform \cite{Hilbert_MG,Hilbert_MG2}. However, to our knowledge, it has never been applied to Image Processing. Hence, the present paper represents the first step in investigating how the VP interpolation scheme can be usefully employed in image scaling.
To explain the proposed scaling method (shortly denoted by VPI method or simply by VPI), as a starting point, we consider that the input RGB image is represented at a continuous scale by a vectorial function (with separate channels for each color) whose sampling yields the pixels values. We globally approximate such function using suitable VP interpolation polynomials, modulated by a free parameter $\theta\in ]0,1]$ \cite{Themistoclakis_2011, Them_Van_Barel, Occo_Them_DRNA, Occo_Them_Apnum, Occo_Them_AMC}. Hence, we get the resized image by evaluating such VP polynomials in a denser (upscaling) or coarser (downscaling) grid of sampling points.
Being designed both for downscaling and upscaling, VPI method is flexible and implementable for any scale factor. The rescaling can be obtained by specifying the scale factor or, alternatively, the desired size of the image. We point out that, in the following, to distinguish between upscaling and downscaling mode, we use the notation u-VPI and d-VPI, respectively.
Both in upscaling and downscaling, for the limiting parameter choice $\theta=0$, VPI coincides with the LCI method proposed in \cite{Lagrange}, based on classical Lagrange interpolation at the same nodes. Moreover, for any choice of the parameter $\theta$, d-VPI with odd scale factors produces the same output resized image as LCI, which results from a direct assignment without any computation. In these cases, if the LR image satisfies the Nyquist-Shannon sampling theorem \cite{Shannon}, d-VPI produces an MSE not greater than the input MSE times the scale factor squared. Thus, we can get a null MSE and the best visual quality measures in case of {\it exact} input data (cf. Proposition 1). However, we point out that in cases where the downscaling size violates the sampling theorem, aliasing effects occur. Experiments in the paper also deepen this aspect, and a partial solution is proposed, leaving the problem open to further investigation.
A further contribution of this paper includes a detailed quantitative and qualitative analysis of the obtained results on several publicly available datasets commonly used in Image Processing. The experimental results confirm the effectiveness and utility of employing the VP interpolation scheme, achieving on average a good compromise between visual quality and processing time: the resized images present few blurred edges and artifacts, and the implementation is computationally simple and rather fast.
On average, VPI has a competitive and satisfactory performance, with quality measures generally higher and more stable than those of the existing scaling methods considered as a benchmark, also for high scale factors. Specifically, in downscaling, when the free parameter is not equal to zero, VPI improves on the LCI performance and turns out to be more stable than the latter, due to the uniform boundedness of the Lebesgue constants corresponding to de la Vall\'ee Poussin type interpolation. Moreover, VPI is much faster than the methods specialized in only downscaling or only upscaling.
At a visual level, VPI captures the object's visual structure by preserving the salient details, the local contrast, and the luminance of the input image, with well--balanced colors and a limited presence of artifacts. To about the same extent as the other benchmark methods, in downscaling VPI exhibits aliasing effects.
Overall, due to its features, we consider VPI suitable for real--world applications and, at the same time, regard it as a complete method, since it can perform both upscaling and downscaling with adequate performance.
The remainder of this paper is organized as follows. In Section \ref{rel_work}, we outline the related work, briefly explaining the benchmark scaling methods we employ in the experimental phase. In Section \ref{math_prel} we provide the mathematical background. In Section \ref{method}, we describe the VPI method and state its main properties. In Section 5 we provide the most relevant implementation details and the qualitative/quantitative evaluations of the experimental results taken over a significant number of different image datasets. Finally, conclusions are drawn in Section \ref{concl}.
\section{Related work}
\label{rel_work}
Image scaling has received great attention in the literature of the past decades, during which many methods based on different approaches have been developed. An overview containing pros and cons for some of them can be found in \cite{rev1, rev2}.
Traditionally, image scaling methods are grouped into two categories \cite{book1}: non-adaptive \cite{NN-NIL2, Lanczos, B-splines, Burger, ScSR} and adaptive \cite{Stentiford, Setlur, Ramella2, Zhou, Yang}. In the first category, all the pixels are equally treated. In the second, suitable changes are arranged, depending on image features and intensity values, edge information, texture, etc. The non-adaptive category includes many of the most commonly used algorithms such as the nearest neighbor, bilinear, bicubic and B-splines interpolation, and the Lanczos method \cite{book1, NN-NIL2, Lanczos, B-splines, Burger, CN}. Adaptive methods are designed to maximize the quality of the results. They are also employed in most common approaches such as context-aware computing \cite{Stentiford}, segmentation techniques \cite{Setlur}, and adaptive bilinear schemes \cite{Ramella2}. Machine Learning (ML) methods can be ascribed to the latter category, even if they are often considered as a separate problem \cite{Zhou, Yang}. The learning paradigm of ML methods aims to compensate for complete (missing) information of the downscaled (upscaled) image using a relationship between HR (LR) and LR (HR) images. Mostly, this paradigm is implemented by a training step, in which the relationship is learned, followed by a step in which the learned knowledge is applied to unseen HR (LR) images.
Usually, non-adaptive scaling methods have problems of blurring or artifacts around edges and only store the low-frequency components of the original image. On the other hand, adaptive scaling methods generally provide better image visual quality and preserve high-frequency components. However, adaptive methods take more computational time as compared to non-adaptive ones. In turn, the ML methods ensure high-quality results but, at the same time, require extensive learning based on a huge number of parameters and labeled training images.
In this section, we briefly describe only the methods considered in the validation phase of the VPI method (see Section 5), namely DPID \cite{Weber}, L$_0$ \cite{Liu}, SCN \cite{SCN}, LCI \cite{Lagrange} and BIC \cite{BIC}. The source code of these methods is made available by the authors themselves in a common language (Matlab). Except for BIC and LCI, they are designed and tested considering the problem of resizing in one direction only, i.e., in downscaling (DPID and L$_0$) or upscaling (SCN) mode.
DPID is based on the assumption that the Laplacian edge detector and adaptive low-pass filtering can be useful tools to approximate the behavior of the Human Visual System. Important details are preserved in the downscaled image by employing convolutional filters and by selecting the input pixels that contribute more to the output image the more their color deviates from their local neighborhood.
In L$_0$, an optimization framework for image downscaling is proposed, focusing on two critical issues: salient feature preservation and downscaled image construction. Accordingly, two L$_0$-regularized priors are introduced and applied iteratively until the objective function is satisfied. The first, based on the gradient ratio, allows preserving the most salient edges and the visual perceptual properties of the original image. The second optimizes the downscaled image under the guidance of the original one, avoiding undesirable artifacts.
SCN (Sparse Coding based Network) adopts a neural network based on sparse coding, trained in a cascaded structure from end to end. It introduces some improvements in terms of both recovery accuracy and human perception employing a CNN (Convolutional Neural Network) model.
In LCI, the input RGB image is globally approximated by the bivariate Lagrange interpolating polynomial at a suitable grid of first kind Chebyshev zeros. The output RGB image is obtained by sampling this polynomial at the Chebyshev grid of the desired size. Since the LCI method works both in upscaling and downscaling, according to the notation in \cite{Lagrange}, we use the notation u-LCI and d-LCI in upscaling and in downscaling, respectively.
BIC, one of the most commonly used rescaling methods, employs bicubic interpolation. It computes the unknown pixel value as a weighted average of the $4\times 4$ pixels closest to it. Note that BIC produces noticeably sharper images than the other classical non-adaptive methods, such as bilinear and nearest neighbor interpolation, offering with respect to them a favorable trade-off between image quality and processing time.
We remark that in the following, BIC is implemented by the Matlab built-in function \texttt{imresize} with \texttt{bicubic} option. For the other methods, we used the publicly available Matlab codes provided by the authors with the default parameters settings.
\section{Mathematical preliminaries}
\label{math_prel}
Let $I$ denote any color image of $n_1\times n_2$ pixels, with $n_1,n_2\in{\mathbb N}$.
As is well--known, in the RGB space $I$ is represented by means of a triad of $n_1\times n_2$ matrices that we indicate with the same letter as the image they compose, namely $I_\lambda$, with $\lambda=1:3$ (i.e., $\lambda=1,2,3$). The entries of these matrices are integers from $0$ to $\max_f$, the maximum possible pixel value (e.g., $\max_f= 255$ if the pixels are represented using 8 bits per sample).
On the other hand, such discrete values can be embedded in a vector function of the spatial coordinates, say $\f(x,y)=[f_1(x,y), f_2(x,y), f_3(x,y)]$, which represents the image at a continuous scale and whose sampling yields its digital versions of any finite size.
Hence, once fixed the sampling model, that is the system of nodes
\begin{equation}\label{X}
X_{\mu\times \nu}=\{(x_i^\mu, y_j^\nu)\}_{i=1:\mu, j=1:\nu}, \quad \mu,\nu\in{\mathbb N},
\end{equation}
we suppose that the digital image $I=[I_1,I_2,I_3]$ has behind the function $\f=[f_1,f_2,f_3]$ such that
\begin{equation}\label{I}
I(i,j)=\f(x_i^{n_1},y_j^{n_2}),\quad i=1:n_1,\quad j=1:n_2.
\end{equation}
In both downscaling and upscaling, the goal is to get an accurate reconstruction of $I$ at a different (reduced or enhanced, resp.) size. Denoting by $N_1\times N_2$ the new size that we aim to get and by $R=[R_1,R_2,R_3]$ the target resized image of $N_1\times N_2$ pixels, according to the previous settings, we have
\begin{equation}\label{R}
R(i,j)=\f(x_i^{N_1},y_j^{N_2}),\quad i=1:N_1,\quad j=1:N_2.
\end{equation}
From this viewpoint, the scaling problem becomes a typical approximation problem: how to approximate the values of $\f$ at the grid $X_{N_1\times N_2}$ once known the values of $\f$ at the finer (in downscaling) or coarser (in upscaling) grid $X_{n_1\times n_2}$.
Within this setting, the choice of the nodes system (\ref{X}) as well as the choice of the approximation tool are both decisive for the success of a scaling method. In the next subsections we introduce these two basic ingredients and the evaluation metrics we use for our scaling method.
\subsection{Sampling system}
Since it is well--known that any finite interval $[a,b]$ can be mapped onto $[-1,1]$, in the following, we suppose that each spatial coordinate belongs to the reference interval $[-1,1]$, so that the sampling system in (\ref{X}) is included in the square $[-1,1]^2$.
In literature, the equidistant nodes model is usually adopted for sampling. According to such traditional model, in (\ref{X}) the coordinates $\{x_i^{\mu}\}_i$ and $\{y_j^{\nu}\}_j$ are those nodes that divide the segment $[-1,1]$ into $(\mu+1)$ and $(\nu+1)$ equal parts, respectively.
On the other hand, we recall that other coherent choices of the sampling system (\ref{X}) have been recently investigated, for instance, in \cite{demarchi1}, \cite{demarchi2} for Magnetic Particle Imaging.
Here we follow the sampling model recently introduced in \cite{Lagrange}. According to this model, we assume that (\ref{X}) is the Chebyshev grid where the coordinates $\{x_i^{\mu}\}_i$ and $\{y_j^{\nu}\}_j$ are the zeros of the Chebyshev polynomial of first kind of degree $\mu$ and $\nu$, respectively. This means that in (\ref{X}) we are going to assume that
\begin{equation}\label{xy}
x_i^\mu= \cos (t_i^\mu) \qquad \mbox{and}\qquad
y_j^\nu= \cos (t_j^\nu)
\end{equation}
where, for all $n\in{\mathbb N}$, it is
\begin{equation}\label{t}
t_k^n=\frac{(2k-1)\pi}{2n}, \qquad k=1:n.
\end{equation}
Hence, supposed that $\f$ is the vector function representing the image at a continuous scale, at a discrete scale we interpret the digital version of the image with size $\mu\times\nu$, as resulting from the sampling of $\f$ at the Chebyshev grid $X_{\mu\times\nu}$ defined by (\ref{X}) and (\ref{xy})-(\ref{t}).
We point out that the coordinates in (\ref{xy}) are not equidistant in $[-1,1]$: they are arcsine distributed and become denser approaching the extremes $\pm 1$. Such a nodes distribution is optimal from the approximation point of view but rather unusual in image sampling. Nevertheless, from a certain perspective, our sampling model is related to the traditional sampling at equidistant nodes, since the nodes in (\ref{t}) are equally spaced in $[0,\pi]$.
Indeed, the idea behind our sampling model is to transfer the sampling question from the segment to the unit semicircle, which is divided into equal arcs by the nodes system equation (\ref{t}).
The main advantage of adopting this unusual point of view is the possibility of globally approximating the image, in a stable and near--best way, by the interpolation polynomials introduced in the next subsection.
\subsection{Filtered VP interpolation}
Regarding the approximation tool underlying our method, we consider some filtered interpolation polynomials recently studied in \cite{Occo_Them_AMC}. Such kind of interpolation is based on a generalization of the trigonometric VP means (see \cite{Filbir_Them, Them_Van_Barel}) and, besides the number of nodes, it depends on two additional parameters which can be suitable modulated in order to reduce the Gibbs phenomenon (see \cite{Occo_Them_AMC, Themistoclakis_2011}).
More precisely, for any $n_i,m_i\in{\mathbb N}$ such that $m_i\le n_i$, $i=1,2$, let
\[
\n=(n_1,n_2),\quad\mbox{and}\quad \m=(m_1,m_2),
\]
and let $n,m$ denote indifferently the first components (i.e. $n_1,m_1$ resp.) or the second components (i.e. $n_2,m_2$ resp.) of such vectors. Corresponding to these parameters,
for any $r=0:(n-1)$, we define the following {\it orthogonal VP polynomials}
\begin{equation}\label{qr}
q_{m,r}^n(\xi)=\hspace{-.1cm}\left\{
\begin{array}{l}
\cos(r t)
\qquad\qquad\mbox{if $0\le r\le (n-m)$},\\ [.1in]
\frac{n+m-r}{2m}\cos(r t)+\frac{n-m-r}{2m}\cos((2n-r)t)
\\ [.07in]
\qquad\qquad\qquad\mbox{if $n-m<r<n$},
\end{array}\right.
\end{equation}
where here and in the following $\xi\in [-1,1]$ and $t\in[0,\pi]$ are related by $\xi=\cos t$.
We recall that the polynomial system in (\ref{qr}) consists of $n$ univariate algebraic polynomials of degree at most $(n+m-1)$ that are orthogonal with respect to the scalar product
\[
<F, G>=\int_{-1}^1F(\xi)G(\xi)\frac{d\xi}{\sqrt{1-\xi^2}}.
\]
They generate the space (of dimension $n$)
\[
S_m^n:=\textrm{span}\{q_{m,r}^n:\ r=0:(n-1)\}
\]
that is an intermediate polynomial space nested between the sets of all polynomials of degree at most $n-m$ and $n+m-1$.
The space $S_m^n$ has also an interpolating basis consisting of the so--called {\it fundamental VP polynomials} that, in terms of the orthogonal basis (\ref{qr}), have the following expansion \cite{Themistoclakis_2011},\cite{Occo_Them_AMC}
\begin{equation}\label{fundVP}
\Phi_{m,k}^{n}(\xi)=\frac 2n\left[\frac 12 +\hspace{-.2cm}\sum_{r=1}^{n-1} \cos(r t_k^n) q_{m,r}^n(\xi)\right],\quad k=1:n.
\end{equation}
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[height=8cm, keepaspectratio]{vp_n5_m4.jpg}
\caption{Fundamental VP polynomials $\{\Phi_{m,k}^n\}_{k=1}^n$ for $n=5$ and $m=4$}
\end{center}
\end{figure*}
\begin{figure*} [!htbp]
\begin{center}
\includegraphics[height=8cm, keepaspectratio]{lag_n5.jpg}
\caption{Fundamental Lagrange polynomials $\{\ell_{n,k}\}_{k=1}^n$ for $n=5$}
\end{center}
\end{figure*}
In Figures 1 and 2 we show, respectively, the fundamental VP polynomials for $n=5$ and $m=4$ and, for the same $n=5$, the well-known fundamental Lagrange polynomials, defined as
\begin{equation}\label{fundLAG}
\ell_{n,k}(\xi)=\frac 2n\left[\frac 12 +\hspace{-.2cm}\sum_{r=1}^{n-1} \cos(r t_k^n) \cos(r t)\right],\quad k=1:n.
\end{equation}
We see that, similarly to $\{\ell_{n,k}(\xi)\}_{k=1}^n$,
also the fundamental VP polynomials satisfy the interpolation property
\begin{equation}\label{inter}
\Phi_{m,k}^{n}(\cos t_h^n)=\ell_{n,k}(\cos t_h^n)=\left\{
\begin{array}{lr}
1 & h=k\\
0 & h\ne k
\end{array}\right.
\end{equation}
for all $h,k=1:n$.
In addition to the number $n$ of nodes, we also have the free parameter $m$, which can be arbitrarily chosen (any $m=1:n$ is possible) without losing the interpolation property (\ref{inter}), as stated in \cite{Themistoclakis_2011}. Moreover, we note that the limiting choice $m=0$ is also possible, since in this case $S_0^n$ equals the space of polynomials of degree at most $(n-1)$ and
\begin{equation}\label{limite}
\Phi_{0,k}^{n}(\xi)=\ell_{n,k}(\xi), \qquad \forall |\xi|\le 1, \qquad k=1:n.
\end{equation}
We recall that in \cite{CaThwave} both $n$ and $m$ are chosen depending on a resolution level $\ell\in{\mathbb N}$, and the fundamental VP polynomials constitute the scaling functions generating the multiresolution spaces $V_\ell=S_m^n$.
Another choice of $m$, often suggested in the literature, is the following (see e.g. \cite{Occo_Them_Apnum,Occo_Them_DRNA})
\begin{equation}\label{mteta}
m=\lfloor\theta n\rfloor, \qquad\mbox{with $\theta\in ]0,1[$},
\end{equation}
where, $\forall a\in{\mathbb R}^+$, $\lfloor a \rfloor$ denotes the largest integer not greater than $a$.
Figure \ref{fig:1} displays the fundamental VP polynomials corresponding to fixed $n,k$ and $m$ given by (\ref{mteta}) for different values of $\theta$. Indeed, this parameter (and more generally $m$) governs the localization of the fundamental VP polynomial $\Phi_{m,k}^{n}(\xi)$ around the node $\xi_k^n=\cos t_k^n$. In fact, in Figure \ref{fig:1} we can see how the oscillations typical of the fundamental Lagrange polynomial $\ell_{n,k}$ (also plotted) are strongly damped by suitable choices of $\theta$.
\begin{figure*} [!htbp]
\begin{center}
\includegraphics[height=8cm, keepaspectratio]{curve_pol_fond_new.jpg}
\caption{Fundamental polynomials $\ell_{n,k}$ and $\Phi_{m,k}^n$ for $n=21,\ k=11$ and $m=\lfloor n\theta\rfloor,$ with $\theta \in \{\ 0.4,\ 0.6,\ 0.8\}.$}
\label{fig:1}
\end{center}
\end{figure*}
By using the fundamental VP polynomials (\ref{fundVP}), we can approximate any function $g(x,y)$ on the square $[-1,1]^2$ by means of its samples at the Chebyshev grid (\ref{X}) as follows:
\begin{equation}\label{VP}
V_{\n}^{\m}g(x,y):=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} g(x_i^{n_1}, y_j^{n_2})\Phi_{m_1,i}^{n_1}(x)\Phi_{m_2,j}^{n_2}(y).
\end{equation}
This is the definition of the {\it VP polynomial of $g$} and the approximation tool we use in our method.
By virtue of (\ref{inter}), such polynomial coincides with $g$ at the grid $X_{n_1\times n_2}$, i.e.
\begin{equation}\label{inter-g}
V_{\n}^{\m}g(x_i^{n_1},y_j^{n_2})=g(x_i^{n_1},y_j^{n_2}), \quad i=1:n_1,\ j=1:n_2.
\end{equation}
Moreover, it has been proved that, if (\ref{m-tested}) holds with an arbitrarily fixed $\theta\in ]0,1[$, then for all continuous functions $g$ we have, uniformly on $[-1,1]^2$,
\[
\lim_{\n\rightarrow \infty}\left|V_{\n}^{\m}g(x,y)-g(x,y)\right|=0,
\]
with the same convergence rate as the error of best polynomial approximation of $g$ \cite[Th.~3.1]{Occo_Them_AMC}.
\subsection{Quality metrics}
Similar to most existing methods in the literature, the performance of our method is quantitatively evaluated and compared with other scaling methods in terms of the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). For our method, such metrics measure the error between the target resized image $R=[R_1,R_2,R_3]$ and the output resized image, which in the following we denote by $\tilde R=[\tilde R_1, \tilde R_2,\tilde R_3]$.
The definition of PSNR is based on the standard definition of the Mean Squared Error between two matrices
\begin{equation}\label{def_mse}
\rm{MSE}(A,B)= \displaystyle\frac{1}{\nu\mu}\|A-B\|_F^2,\qquad \forall A,B\in {\mathbb R}^{\nu\times \mu},
\end{equation}
where $\|\cdot \|_F$ denotes the Frobenius norm, defined as
\[
\|A\|_F:=\left(\sum_{h=1}^{\nu}\sum_{k=1}^{\mu} a_{h,k}^2\right)^\frac 12, \ \forall A=(a_{h,k})\in {\mathbb R}^{\nu\times \mu}.
\]
The extension of such a definition to color digital images of $\nu\times\mu$ pixels can be performed in different ways, giving rise to different measures of the related PSNR (see e.g. \cite{Ramella1, Matlab}). More precisely, for the color images $R$ and $\tilde R$, defining their MSE as
\begin{equation}\label{MSE-mean}
\rm{MSE}(\tilde R, R)=\frac 13 \sum_{\lambda=1}^3 \rm{MSE}(\tilde R_\lambda,R_\lambda),
\end{equation}
the first, commonly adopted, definition of PSNR (used for instance in \cite{Lagrange}) is the following:
\begin{equation}\label{PSNR}
\rm{PSNR}(\tilde R, R)=20 \log_{10}\left(\frac{\max_f}{\sqrt{\rm{MSE}(\tilde R,R)}}\right).
\end{equation}
Another common way to measure the PSNR (also used in \cite{SCN}) consists in converting both RGB color images $R=[R_1,R_2,R_3]$ and $\tilde R=[\tilde R_1,\tilde R_2,\tilde R_3]$ to the YCbCr color space and separating the intensity (luma) channels Y, which we denote by $R_Y$ and $\tilde R_Y$, respectively. We recall that they are defined by the following weighted averages of the respective RGB components:
\begin{equation}\label{RY}
R_Y=\sum_{\lambda=1}^3 \alpha_\lambda R_\lambda+\alpha_4,\quad \tilde R_Y=\sum_{\lambda=1}^3 \alpha_\lambda \tilde R_\lambda+\alpha_4,
\end{equation}
with $\alpha_i,\ i=1:4$, the coefficients of the ITU-R BT.601 standard (see e.g. \cite{Burger}).
Hence, taking the MSE of the matrices $R_Y$ and $\tilde R_Y$, the second commonly used definition of PSNR refers only to the luma channel, as follows:
\begin{equation}\label{psnr}
\rm{PSNR}(\tilde R,R)=20 \displaystyle \log_{10}\left( \frac{\max_f}{\sqrt{\rm{MSE}(\tilde R_Y, R_Y)}}\right).
\end{equation}
We point out that in our experiments the PSNR has been computed using both definitions. However, for brevity, we report only the values achieved by definition (\ref{psnr}), since the results obtained with definition (\ref{PSNR}) provide no additional insight.
Finally, the SSIM metric is also defined via the luma channel, as follows \cite{SSIM}:
\begin{equation}\label{def_ssim}
\rm{SSIM}(\tilde R, R)=
\frac{\left[2\tilde\mu \mu+c_1\right]\left[2\rm{cov} +c_2\right]}
{\left[{\tilde \mu}^2+\mu^2+c_1\right]\left[{\tilde\sigma}^2+\sigma^2+c_2\right]},
\end{equation}
where $\tilde \mu, \mu$ and ${\tilde\sigma}^2, \sigma^2$ denote the mean and variance of the matrices $\tilde R_Y, R_Y$, respectively, $\rm{cov}$ indicates their covariance, and the constants are usually fixed as $c_1=(0.01\, L)^2$ and $c_2=(0.03\, L)^2$, with the dynamic range of the pixel values $L=255$ in the case of 8-bit images.
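For the reader's convenience, the following Python sketch summarizes how the above metrics can be computed (a hypothetical illustration: the coefficients $\alpha_\lambda$ of (\ref{RY}) are assumed here to be the common BT.601 full-range values with zero offset, and the SSIM below is the global, single-window variant of (\ref{def_ssim})).
\begin{verbatim}
import numpy as np

def mse(a, b):
    # mean squared error between two equally sized arrays
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def luma(rgb):
    # BT.601 luma (assumed full-range coefficients, zero offset)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def psnr_luma(out, ref, max_f=255.0):
    # PSNR on the luma channel
    e = mse(luma(out), luma(ref))
    return np.inf if e == 0 else 20.0 * np.log10(max_f / np.sqrt(e))

def ssim_global(out, ref, L=255.0):
    # global (single-window) SSIM on the luma channel
    x, y = luma(out).astype(float), luma(ref).astype(float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))
\end{verbatim}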
\section{VPI scaling method}
\label{method}
According to the notation introduced in the previous section, both $I$ and $R$ are digital versions (with $n_1\times n_2$ and $N_1\times N_2$ pixels, respectively) of the same continuous image represented by the vector function $\f=(f_1,f_2,f_3)$ (cf. (\ref{I}), (\ref{R})).
Nevertheless, to be more general, in view of the finite representation of the data and the accuracy used to store the image, we suppose that the effective input image of our method is a possibly corrupted version of $I$. We denote it by $\tilde I=[\tilde I_1,\tilde I_2,\tilde I_3]$ and assume that there exists a corrupted function $\widetilde \f=(\widetilde f_1,\widetilde f_2,\widetilde f_3)$ such that
\begin{equation}\label{I-tilde}
\tilde I(i,j)=\tilde\f(x_i^{n_1},y_j^{n_2}),\quad i=1:n_1,\quad j=1:n_2.
\end{equation}
Starting from these initial data, the VPI method computes the output image $\tilde R$, having the desired size $N_1\times N_2$ and defined as follows:
\begin{equation}\label{R-tilde}
\tilde R(i,j)=V_{\n}^{\m}\tilde \f(x_i^{N_1},y_j^{N_2}), \ i=1:N_1,\ j=1:N_2.
\end{equation}
In terms of the RGB components $\tilde R=[\tilde R_1,\tilde R_2, \tilde R_3]$, by (\ref{VP}), this means that for any $i=1:N_1,\ j=1:N_2$ and $\lambda=1:3$ we have
\begin{eqnarray}\label{Rk-tilde}
\tilde R_\lambda(i,j)&=&V_{\n}^{\m}\tilde f_\lambda(x_i^{N_1},y_j^{N_2})\\
\nonumber
&=& \sum_{u=1}^{n_1}\sum_{v=1}^{n_2} \tilde I_\lambda(u,v)\Phi_{m_1,u}^{n_1}(x_i^{N_1})\Phi_{m_2,v}^{n_2}(y_j^{N_2}),
\end{eqnarray}
that is
\begin{equation}\label{R-prod}
\tilde R_\lambda=V_1^T \tilde I_\lambda V_2, \quad \lambda=1:3,
\end{equation}
where the matrices $V_1\in {\mathbb R}^{n_1\times N_1}$ and $V_2\in {\mathbb R}^{n_2\times N_2}$ have the following entries
\begin{eqnarray}\label{V1}
V_1(i,j)&=&\Phi_{m_1,i}^{n_1}(x_j^{N_1}),\qquad i=1:n_1,\ j=1:N_1,\\
\label{V2}
V_2(i,j)&=&\Phi_{m_2,i}^{n_2}(y_j^{N_2}),\qquad i=1:n_2,\ j=1:N_2.
\end{eqnarray}
To compute $V_1,V_2$, efficient algorithms based on the Fast Fourier Transform can be implemented (see, e.g., \cite{FFT}). Moreover, by pre-computing the matrices $V_i$, the representation (\ref{R-prod}) allows one to reduce the computational effort when many images have to be resized to the same fixed sizes.
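As an illustration of (\ref{R-prod})--(\ref{V2}), a minimal Python sketch (hypothetical, reusing the functions nodes and Phi from the sketch in the previous section, and evaluating the fundamental VP polynomials directly rather than via the FFT) could read as follows.
\begin{verbatim}
import numpy as np

def vp_matrix(n, m, N):
    # V(i, j) = Phi_{m,i}^n evaluated at the j-th target angle
    tN = nodes(N)
    return np.array([[Phi(n, m, i, tN[j]) for j in range(N)]
                     for i in range(1, n + 1)])

def vpi_resize_channel(I_lam, N1, N2, theta=0.5):
    # resize one channel from n1 x n2 to N1 x N2 pixels
    n1, n2 = I_lam.shape
    m1, m2 = int(theta * n1), int(theta * n2)   # m = floor(theta n)
    V1 = vp_matrix(n1, m1, N1)                  # n1 x N1
    V2 = vp_matrix(n2, m2, N2)                  # n2 x N2
    return V1.T @ I_lam.astype(float) @ V2
\end{verbatim}
The three RGB channels are processed independently and, as observed above, the matrices $V_1,V_2$ can be cached whenever many images share the same initial and final sizes.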
Now, we note that in the previous formulas the integers $n_\ell$ and $N_\ell$, $\ell=1,2$, are determined by the initial and final sizes of the scaling problem at hand, while the parameter $m_\ell$ is free. Theoretically, it can be arbitrarily chosen among the integers between 1 and $n_\ell$. According to (\ref{mteta}), for our method we fix
\begin{equation}\label{m-tested}
m_\ell=\lfloor\theta n_\ell\rfloor, \qquad \ell=1,2,\qquad \mbox{with}\qquad \theta\in ]0,1]
\end{equation}
including the limiting case $\theta =1$.
Moreover, we also allow $\theta=0$; in this case, we remark that, by virtue of (\ref{limite}), VPI reduces to the Lagrange--Chebyshev Interpolation (LCI) scaling method recently proposed in \cite{Lagrange}. In this sense, VPI can be considered a generalization of the LCI method.
Regarding the choice of the parameter $\theta$, in the experimental validation of VPI we consider two modes, which in the sequel we call ``supervised VPI'' and ``unsupervised VPI''. In the latter case, $\theta$ must be supplied by the user as an input parameter, arbitrarily chosen in $[0,1]$, where the choice $\theta=0$ means selecting the LCI method.
Nevertheless, if a target resized image is available, VPI can be run in a supervised mode that requires the target image as an input argument instead of the parameter $\theta$. In this case, we take several choices of $\theta\in [0,1]$ and, consequently, obtain several matrices $V_1, V_2$ that determine, via (\ref{R-prod}), several resized images. Among these, the one that gives the smallest MSE with respect to the target image is chosen as the output of the supervised VPI method.
In the remainder of this section, we focus on the d-VPI method with odd scale factors $s=n_1/N_1=n_2/N_2$. In the following proposition, we suppose that the lower resolution sampling satisfies the Nyquist limit, so that the continuous image $\f$ can be uniquely reconstructed from both digital images $I$ and $R$ without any error. In this ideal case, we prove that the d-VPI method produces an MSE that is not greater than $s^2$ times the MSE of the input data (cf. (\ref{eq-mse})) and, in particular, that the MSE vanishes if the input image is {\it exact} or, at least, if some {\it crucial} pixels of it are {\it exact} (cf. Remark 1).
\begin{proposition}\label{prop}
Let $I$ and $R$ be the initial and resized true images given by (\ref{I}) and (\ref{R}), respectively, and let $\tilde I$ and $\tilde R$ be the input and output images of the d-VPI method, respectively given by (\ref{I-tilde}) and (\ref{R-tilde}), with arbitrarily fixed integer parameters $m_1<n_1$ and $m_2<n_2$. If there exists $\ell\in{\mathbb N}$ relating the initial size $n_1\times n_2$ to the final size $N_1\times N_2$ as follows
\begin{equation}\label{hp}
\frac{n_1}{N_1}=\frac{n_2}{N_2}=(2\ell-1),
\end{equation}
then we have
\begin{equation}\label{eq-mse}
\rm{MSE}(R,\tilde R)\le s^2 \rm{MSE}(I, \tilde I),\qquad s=(2\ell -1).
\end{equation}
The same estimate also holds for the luma channel and, if in addition $I=\tilde I$, then we get
\begin{equation}\label{dim_ssim}
\rm{PSNR}(R,\tilde R)=\infty,\qquad\mbox{and}\qquad \rm{SSIM} (R,\tilde R)=1.\end{equation}
\end{proposition}
{\it Proof. } Recalling the definition (\ref{MSE-mean}), to prove (\ref{eq-mse}) it is sufficient to show that the same inequality holds for the respective RGB components, i.e., that for all $\lambda=1:3$ we have
\begin{equation}\label{dim-mse}
\rm{MSE}(R_\lambda,\tilde R_\lambda)\le s^2 \rm{MSE}(I_\lambda, \tilde I_\lambda).
\end{equation}
To this aim, using the short notation $n$ and $N$ for $n_i$ and $N_i$, $i=1,2$, we note that whenever $n= s N$ with $s=(2\ell -1)$ and $\ell\in{\mathbb N}$, all the zeros of the Chebyshev polynomial of the first kind of degree $N$ (i.e. $\cos(t_h^{N})$, $h=1:N$) are also zeros of the Chebyshev polynomial of the first kind of degree $n$. More precisely, we have
\begin{equation}\label{nodi_3}
\cos(t_h^{N})=\cos(t_{i(h)}^{n}) \quad \mbox{with\ } i(h)=\frac{s(2h-1)+1}2,
\end{equation}
where we point out that, for all $h=1:N$, the index $i(h)=\frac{s(2h-1)+1}2$ is an integer between 1 and $n$ thanks to the hypothesis that $s$ is odd.
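For instance, for $s=3$ and $N=2$ (hence $n=6$), we get $i(1)=2$ and $i(2)=5$, i.e. $\cos(t_1^2)=\cos(t_2^6)$ and $\cos(t_2^2)=\cos(t_5^6)$.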
By virtue of (\ref{nodi_3}), for all $h_1=1:N_1$ and $h_2=1:N_2$, recalling (\ref{R})--(\ref{t}) we get
\begin{equation}\label{RI}
\begin{array}{rl}
R_\lambda(h_1,h_2)=&f_\lambda\left(x_{h_1}^{N_1},y_{h_2}^{N_2}\right)\\
=& f_\lambda\left(x_{i(h_1)}^{n_1},y_{i(h_2)}^{n_2}\right)\\
=& I_\lambda\left(i(h_1),\ i(h_2)\right), \qquad \lambda=1:3.
\end{array}
\end{equation}
Similarly, from (\ref{R-tilde}), (\ref{nodi_3}), (\ref{inter-g}) and (\ref{I-tilde}), we deduce
\begin{equation}\label{RI-tilde}
\begin{array}{rl}
\tilde R_\lambda(h_1,h_2)=&V_{\n}^{\m}\tilde f_\lambda\left(x_{h_1}^{N_1},y_{h_2}^{N_2}\right)\\ [.1in]
=& V_{\n}^{\m}\tilde f_\lambda\left(x_{i(h_1)}^{n_1},y_{i(h_2)}^{n_2}\right)
=\tilde f_\lambda\left(x_{i(h_1)}^{n_1},y_{i(h_2)}^{n_2}\right)\\ [.1in]
=& \tilde I_\lambda\left(i(h_1),\ i(h_2)\right), \qquad \lambda=1:3.
\end{array}
\end{equation}
Therefore, by (\ref{RI}) and (\ref{RI-tilde}), for any $\lambda=1:3$ we deduce (\ref{dim-mse}) as follows
\begin{eqnarray*}
&&\rm{MSE}(R_\lambda,\tilde R_\lambda)=\\
&=& \frac 1{N_1N_2}\sum_{h_1=1}^{N_1}\sum_{h_2=1}^{N_2}
\left[R_\lambda(h_1,h_2)-\tilde R_\lambda(h_1,h_2)\right]^2\\
&=&\frac 1{N_1N_2}\sum_{h_1=1}^{N_1}\sum_{h_2=1}^{N_2}
\left[I_\lambda\left(i(h_1),\ i(h_2)\right)-\tilde I_\lambda\left(i(h_1),\ i(h_2)\right)\right]^2\\
&\le & \frac 1{N_1N_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}
\left[I_\lambda(i,j)-\tilde I_\lambda(i,j)\right]^2\\
&=&\frac {s^2}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}
\left[I_\lambda(i,j)-\tilde I_\lambda(i,j)\right]^2\\
&=&s^2 \rm{MSE}(I_\lambda,\tilde I_\lambda).
\end{eqnarray*}
As regards the luma channel, we note that by (\ref{RI})--(\ref{RI-tilde}) and (\ref{RY}) we easily deduce that
\begin{equation}\label{RY-1}
\begin{array}{rl}
R_Y(h_1,h_2)=&I_Y(i(h_1), \ i(h_2))\\
\tilde R_Y(h_1,h_2)=&\tilde I_Y(i(h_1), \ i(h_2))
\end{array}
\end{equation}
and such identities, similarly to the case of the RGB components, easily imply
\begin{equation}\label{RY-2}
\rm{MSE}(R_Y, \tilde R_Y)\le s^2 \rm{MSE}(I_Y,\tilde I_Y).
\end{equation}
Finally, in the case that $I=\tilde I$, from (\ref{RI}) and (\ref{RI-tilde}) we deduce that
\[
R_\lambda(h_1,h_2)=\tilde R_\lambda(h_1,h_2), \quad \lambda=1:3
\]
holds for any $h_1=1:N_1$ and $h_2=1:N_2$. Consequently, we get the best possible result (\ref{dim_ssim}).
\begin{flushleft}$\diamondsuit$\end{flushleft}
\begin{remark}
The previous proof shows that the hypothesis $I=\tilde I$ can be relaxed by requiring that these images coincide only on some suitable pixels.
More precisely, in order to get (\ref{dim_ssim}) it is sufficient that
\begin{equation}\label{hp-reduced}
\begin{array}{l}
I_\lambda(i(h_1),\ i(h_2))=\tilde I_\lambda(i(h_1),\ i(h_2)), \\ [.1in]
\qquad \lambda=1:3, \quad h_1=1:N_1,\quad h_2=1:N_2
\end{array}
\end{equation}
holds, where we defined
\begin{equation}\label{ih}
i(h)=\frac{s(2h-1)+1}2.
\end{equation}
\end{remark}
\begin{remark}
From (\ref{RI-tilde}) we deduce that, in all cases of downscaling with odd scale factors, the choice of the parameter $\theta$ and, more generally, the values assigned to $\m$ do not matter, since d-VPI always returns the same output image, which coincides with the one produced by the d-LCI method. Moreover, by virtue of (\ref{RI-tilde}), starting from a given image and using the same odd scale factor in opposite directions, for any $\m$ one may run u-VPI first and then d-VPI, getting back the initial image without any error.
In all downscaling cases with odd scale factors, formula (\ref{RI-tilde}) is used instead of (\ref{R-prod}) to compute the d-VPI output image.
\end{remark}
In conclusion, we point out that the theoretical results of Proposition \ref{prop} do not exclude the possible occurrence of aliasing effects when downscaling input images with high-frequency details. In this case, even starting from uncorrupted HR images, d-VPI produces LR images with aliasing effects that, below a certain size, become more and more visible. The experimental results in the next section show that aliasing may also occur when the downscaling factor does not satisfy the hypothesis of Proposition \ref{prop}.
Following the sampling theorem \cite{Shannon}, the standard approach for minimizing aliasing artifacts consists in limiting the spectral bandwidth of the HR input image by convolving it with a kernel before subsampling. As a well-known side effect, the resulting LR output image might suffer from loss of fine details and blurring of sharp edges. Thus, many filters have been developed \cite{Wolberg} to balance mathematical optimality with the perceptual quality of the downsampled result. However, these filter-based methods can introduce undesirable ringing or over-smoothing artifacts.
Both L$_0$ and DPID focus on detail preservation. Specifically, to address the aliasing problem, L$_0$ proposes an L$_0$-regularized optimization framework where the gradient ratio and the reconstruction prior are optimized iteratively in an alternating way. In contrast, DPID uses an inverse bilateral filter to emphasize the differences between areas with small details and bigger ones with similar colors. However, the aliasing reduction process of these methods affects their performance both in quality and in CPU time, as can be seen in Section \ref{ER}.
In this paper, our attention is focused on studying the effect of VP interpolation applied to image resizing, in both upscaling and downscaling. Consequently, we limit ourselves to suggesting the employment of suitable convolutional filtering for high downscaling factors, whenever the aliasing effects are too evident (see Subsection \ref{PE}). In the meantime, we are working on better solutions to reduce the possible aliasing effects in d-VPI.
\section{Experimental Results}
\label{ER}
In this section, we describe the experimental validation of VPI. We test it on some publicly available image datasets and compare it with the methods described in Section 2; namely, we compare d-VPI with BIC, d-LCI, L$_0$, and DPID, and u-VPI with BIC, u-LCI, and SCN. Although DPID and L$_0$ (resp. SCN) can also be applied in upscaling (resp. downscaling) mode, we do not force the comparison with them outside their intended use, in order to avoid an incorrect experimental evaluation.
All methods have been run on the same computer, with an Intel Core i7 3770K CPU @ 3.50GHz, in Matlab 2018a. In the following, Subsection \ref{Data} introduces the considered datasets, while Subsections \ref{QE} and 5.3 are devoted to the quantitative and qualitative performance evaluation, respectively, both in downscaling and upscaling.
\subsection{Datasets}
\label{Data}
To be more general, besides the datasets used by the benchmark methods \cite{Weber,Liu,SCN,Lagrange,BIC}, we also consider some datasets offering different characteristics and extensively employed in Image Processing.\\
Specifically, the d-VPI performance evaluation is carried out on publicly available datasets comprising 1026 color images in total. In particular, we consider the BSDS500 dataset \cite{Martin}, available at \cite{Berkeley}, which includes 500 color images of the same size (481$\times$321 or 321$\times$481). This set, also used in \cite{Liu, Lagrange}, is sufficiently general and provides a large variety of images often employed in other image analysis tasks as well, such as image segmentation \cite{seg_survey, seg1, seg2, seg3} and color quantization \cite{Chaki, CQ1, CQ2, CQ3}.
We also consider the following datasets to favor the comparison with the benchmark methods.
\begin{itemize}
\item
The 13 natural-color images of the user study in \cite{Oztireli}, available at \cite{13US} and here denoted by 13US. They were originally taken from the MSRA Salient Object Database \cite{Liu_2011}, used in a previous study \cite{Kopf} and also employed in \cite{Weber}. These images have sizes ranging from 241$\times$400 to 400$\times$310 pixels.
\item
The two extensive sets selected in \cite{Weber} from the Yahoo 100M image dataset \cite{Thomee} and the NASA Image Gallery \cite{NASA}, available at \cite{NASA_set}. We denote by NY17 and NY96 the corresponding sets of color images extracted from them. They comprise 17 and 96 color images, respectively, with sizes ranging from 500$\times$334 to 6394$\times$3456.
\item The Urban100 dataset \cite{Urban100}, including 100 color images related to urban contexts, with one dimension at most equal to 1024 pixels and the other ranging from 564 to 1024 pixels. It has also been employed in \cite{Liu}.
\item The PEXELS300 dataset considered in \cite{Lagrange} and available with the VPI code. It consists of 300 color images randomly selected from \cite{pex}, originally of different large sizes and centrally cropped to 1800$\times$1800 pixels.
\end{itemize}
Regarding the u-VPI performance evaluation, in addition to the previous datasets, we have also used the following well--known datasets, commonly employed by the Super Resolution community \cite{Hayat, Li-DL}, for a total of 1943 color images.
\begin{itemize}
\item
The 5 images known in the literature as Set5, originally taken from \cite{Bevilacqua}, with sizes ranging from 256$\times$256 to 512$\times$512, available at \cite{Set5}.
\item
The 12 color images belonging to Set14 \cite{Zeyde}, with sizes ranging from 276$\times$276 to 512$\times$768, available at \cite{Set14}.
\item
The DIV2k (DIVerse 2k) image dataset, consisting of high-quality resolution images used for the NTIRE 2017 SR challenge (CVPR 2017 and CVPR 2018) \cite{Timofte}, available at \cite{DIV2k}. It comprises a train set (DIV2k-T) and a validation set (DIV2k-V), with 800 and 100 color images, respectively. These images have one dimension equal to 2040 pixels, while the other ranges from 768 to 2040. DIV2k has permitted testing the performance of all the benchmark methods on input images characterized by different degradations. Such input images are included in DIV2k and collected as follows:
\begin{itemize}
\item DIV2k-T-B (DIV2k-V-B), generated by BIC (-B);
\item DIV2k-T-u (DIV2k-V-u), classified as unknown (-u);
\item DIV2k-T-d (DIV2k-V-d), classified as difficult (-d);
\item DIV2k-T-m (DIV2k-V-m), classified as mild (-m).
\end{itemize}
\end{itemize}
Since DIV2k is the only dataset containing both the input and the target images, in order to implement supervised VPI with the other datasets we take the images of these datasets as target images, with $N_1\times N_2$ pixels. Hence, we generate the respective input images, of size $n_1\times n_2$, by a scaling method, which we assume to be BIC in most cases, since it can be used both in downscaling (to test supervised u-VPI) and in upscaling (to test supervised d-VPI). However, in Subsection 5.2.3 we also analyze the use of the other scaling methods from Section 2 to generate the input images.
For simplicity, in the following we indicate how the input image was generated by prefixing the name of the generating method: for instance, an input image generated by BIC is indicated as a BIC input image.
\vspace{.2cm}
\subsection{Quantitative evaluation}
\label{QE}
For the quantitative evaluation, we compute, both in upscaling and downscaling, the visual quality measures PSNR and SSIM (cf. Subsection 3.3) and the CPU time for VPI and the benchmark methods, starting from the same input image. Here we focus on the quality measures, while the CPU time is analyzed in Subsection 5.2.3.
Since the target image is necessary to compute PSNR and SSIM, we employ supervised VPI both in upscaling and downscaling. Specifically, we let the free parameter $\theta$ vary from 0.05 to 0.95 with step 0.05. In this way we get 19 resized images, and we take as output the one with minimum MSE.
First, we test supervised VPI and the benchmark methods on the DIV2k image dataset. As specified above, this is the only dataset that, for certain fixed scaling factors, includes both the input $n_1\times n_2$ images and the target $N_1\times N_2$ images. Since in DIV2k only the cases
\[
(N_1, N_2)=s (n_1, n_2), \qquad\mbox{with}\qquad s=2,3,4,
\]
are present, on DIV2k we test upscaling methods (supervised u-VPI, BIC, u-LCI, and SCN) for these scale factors.
Tables \ref{DIV2k-up234} and \ref{DIV2k-up4} show the average results of PSNR and SSIM computed with target images from both the train and valid sets (DIV2k-T and DIV2k-V, resp.), taking as input images the respective ones classified in DIV2k as BIC (-B), unknown (-u), difficult (-d), and mild (-m).
We remark that Table \ref{DIV2k-up4} concerns only the case $s=4$, since for $s=2,3$ the input LR images are not present in the DIV2k-T-d/m and DIV2k-V-d/m datasets.
To test supervised VPI and the benchmark methods on the other datasets detailed in Subsection 5.1, as previously announced, we take as target $N_1\times N_2$ images the ones in the datasets and apply BIC to them in order to generate the input $n_1\times n_2$ images. For brevity, in both upscaling and downscaling, we show only the performance results for the scale factors $s=2,3,4$, which means that the input size $n_1\times n_2$ is computed from the target size according to the following formula:
\begin{equation}\label{input-size}
n_i=\left\{\begin{array}{ll}
sN_i & \mbox{to test downscaling methods}\\ [.1in]
\left\lfloor
\frac{N_i}{s}\right\rfloor & \mbox{to test upscaling methods}
\end{array}\right.\ i=1,2.
\end{equation}
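For instance, a $481\times 321$ target image from BSDS500 with $s=2$ corresponds to a $962\times 642$ input image when testing downscaling methods, and to a $240\times 160$ input image when testing upscaling methods.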
Tables \ref{up234} and \ref{down234} concern upscaling and downscaling, respectively, and show the average PSNR and SSIM values obtained for the datasets and methods specified in the first columns. Note that for all methods the input images are generated from those in the datasets, of size $N_1\times N_2$, by applying BIC according to (\ref{input-size}).
We remark that, in upscaling, the comparison with SCN is limited to DIV2k since, for the images in the datasets of Table \ref{up234}, the SCN demo version does not always produce the exact size of the HR image, making it impossible to compute the quality measures.
The bar graphs describing the trends emerging from Tables \ref{DIV2k-up234}--\ref{down234} are shown in Figures \ref{fig:12a} and \ref{fig:11a} for the PSNR and SSIM values, respectively.
\begin{table}[!htbp]
\caption{Average performance of upscaling methods on DIV2k with input images generated by BIC (-B) and classified as unknown (-u)}
\label{DIV2k-up234}
\tiny{
\begin{tabular}{r|rr|rr|rr}
\hline
&\multicolumn{2}{c}{x2} & \multicolumn{2}{|c}{x3} & \multicolumn{2}{|c}{x4}\\ \hline
& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
\textbf {DIV2k-T-B}& & & & & & \\
BIC & 32,369 & 0,944 & 29,623 & 0,899 & 28,094 & 0,865 \\
SCN & \textbf{34,336} & \textbf{0,960} & \textbf{30,924} & \textbf{0,918} & \textbf{29,170} & \textbf{0,884}\\
u-LCI & 32,969 & 0,948 & 29,967 & 0,903 & 28,381 & 0,868 \\
u-VPI & 33,003 & 0,949 & 30,013 & 0,905 & 28,419 & 0,870 \\
\hline
\textbf {DIV2k-V-B}& & & & & & \\
BIC & 32,411& 0,940 & 29,647 & 0,891 & 28,108 & 0,853 \\
SCN & \textbf{34,513} & \textbf{0,958 }& \textbf{31,078 }& \textbf{0,912}& \textbf{29,312}&\textbf{0,875}\\
u-LCI & 33,010 & 0,944& 29,989 & 0,895 & 28,396 & 0,856 \\
u-VPI & 33,072 & 0,946 &30,036 & 0,897 & 28,433 & 0,859\\
\hline
\textbf {DIV2k-T-u}& & & & & & \\
BIC & 26,566& 0,835 & 27,292 & 0,849 & 23,409 & 0,751 \\
SCN & 26,444 & 0,831 & 27,378 & 0,853 & \textbf{27,292} & \textbf{0,849}\\
u-LCI &26,525 & 0,834 & 27,430& 0,853 & 23,357& 0,749\\
u-VPI &\textbf{26,586} & \textbf{0,836} & \textbf{27,433}& \textbf{0,854}&23,435& 0,752\\
\hline
\textbf {DIV2k-V-u}& & & & & & \\
BIC &26,536 & 0,820 & 27,279 & 0,836 & 23,233 &0,726\\
SCN & 26,412 & 0,816 & 27,368 & 0,840 & 23,074 & 0,720 \\
u-LCI & 26,496 & 0,818 & 27,418 &0,840 &23,180 & 0,724 \\
u-VPI &\textbf{26,557} & \textbf{0,821} & \textbf{27,421}& \textbf{0,841}& \textbf{23,262}& \textbf{0,727}\\
\hline
\end{tabular}
}
\end{table}
\begin{table}[!htbp]
\caption{Average performance of upscaling methods on DIV2k with input images classified as difficult (-d) and mild (-m)}
\label{DIV2k-up4}
\tiny{
\begin{tabular}{r|rr|r|rr}
\hline
\multicolumn{3}{c}{x4} & \multicolumn{3}{c}{x4}\\ \hline
& PSNR & SSIM & & PSNR & SSIM \\
\hline
\textbf {DIV2k-T-d}& & & \textbf {DIV2k-V-d} & \\
BIC & 20,056 &0,665 & BIC & \textbf{23,233} & \textbf{0,726 }\\
SCN & 19,956 &0,649 & SCN & 19,824 & 0,621 \\
u-LCI & 20,010 & 0,656 & u-LCI & 19,876 & 0,628 \\
u-VPI& \textbf{20,138} & \textbf{0,672} & u-VPI & 20,012 & 0,644\\
\hline
\hline
\textbf {DIV2k-T-m}& & & \textbf {DIV2k-V-m} & \\
BIC & 19,589& 0,652 & BIC & \textbf{23,233}& \textbf{0,726} \\
SCN & 19,475 &0,636 & SCN & 19,047 & 0,601 \\
u-LCI & 19,530 & 0,643 & u-LCI & 19,095 & 0,608 \\
u-VPI & \textbf{19,735} & \textbf{0,661} & u-VPI& 19,332 & 0,627 \\
\hline
\hline
\end{tabular}
}
\end{table}
\begin{table}[!htbp]
\caption{Average performance of upscaling methods with BIC input images}
\label{up234}
\tiny{
\begin{tabular}{r|rr|rr|rr}
\hline
&\multicolumn{2}{c}{x2} & \multicolumn{2}{|c}{x3} & \multicolumn{2}{|c}{x4}\\ \hline
& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
\textbf {BSDS500}& & & & & & \\
BIC & 27,665 & 0,886 &26,148 &0,837 &23,678 &0,701 \\
u-LCI & 27,707 &0,888 &26,196 &0,839 &23,793 &0,769 \\
u-VPI& \textbf{27,748} &\textbf{0,890} &\textbf{26,237} &\textbf{0,841} &\textbf{23,865} &\textbf{0,770}\\
\hline
\textbf {13US}& & & & & & \\
BIC & 25,429 & 0,861 &22,125 &0,734&21,906 &0,710 \\
u-LCI & 25,800&0,868 &22,127 &0,738 &22,010&0,713 \\
u-VPI& \textbf{25,859} &\textbf{0,872} &\textbf{22,194} &\textbf{0,739} &\textbf{22,045} &\textbf{0,716}\\
\hline
\textbf {NY17}& & & & & & \\
BIC & 37,638& 0,958 & 32,298& 0,924 &32,232 & 0,907 \\
u-LCI & 38,485 &0,960 &34,910& 0,924 &32,793& 0,908 \\
u-VPI & \textbf{38,540} & \textbf{0,961} & \textbf{34,976}& \textbf{0,927}&\textbf{32,830} & \textbf{0,910} \\
\hline
\textbf {NY96}& & & & & & \\
BIC & 34,979 & 0,953& 31,354 & 0,913 &30,368 & 0,891 \\
u-LCI & 35,507 & 0,955 &31,556 &0,914&30,607& 0,891 \\
u-VPI & \textbf{35,573} & \textbf{0,956} & \textbf{31,602} &\textbf{0,916}&\textbf{30,654}& \textbf{0,894} \\
\hline
\textbf {URBAN100}& & & & & & \\
BIC & 26,860 & 0,882& 22,737 & 0,755 &23,135 & 0,741 \\
u-LCI & 27,321 & 0,886 &22,755 &0,754&23,350& 0,793 \\
u-VPI & \textbf{27,387} & \textbf{0,891} & \textbf{22,802} &\textbf{0,759}&\textbf{23,388}& \textbf{0,748}\\
\hline
\textbf {PEXELS300}& & & & & & \\
BIC & 36,249 & 0,96 & 33,147 & 0,932& 31,374 & 0,908\\
u-LCI & 37,067 & \textbf{0,966}& 33,622 &0,935 &31,741 &0,910\\
u-VPI & \textbf{37,128} & \textbf{0,966}&\textbf{33,671}& \textbf{0,936}& \textbf{31,786} & \textbf{0,912}\\
\hline
\textbf {Set5}& & & & & & \\
BIC & 33,646& 0,965 & 28,596 &0,916 & 28,425 & 0,908\\
u-LCI & 34,499 & \textbf{0,969 }& 28,881 &\textbf{0,918}&28,915 &0,912\\
u-VPI&\textbf{34,540} & \textbf{0,969} & \textbf{28,924} &\textbf{0,918} &\textbf{28,946} &\textbf{0,913}\\
\hline
\textbf {Set14}& & & & & & \\
BIC & 30,375 & 0,917 & 27,597 &0,839 &26,022 & 0,807\\
u-LCI &31,020 & 0,921& 28,026 &0,841&26,333 & 0,809\\
u-VPI & \textbf{31,080} & \textbf{0,923}& \textbf{28,064} &\textbf{0,843}&\textbf{26,371} & \textbf{0,812}\\
\hline
\end{tabular}
}
\end{table}
\begin{table}[!htbp]
\caption{Average performance of downscaling methods with BIC input images (oom stands for ``out of memory'').}
\label{down234}
\tiny{
\begin{tabular}{r|rr|rr|rr}
\hline
&\multicolumn{2}{c}{:2} & \multicolumn{2}{|c}{:3} & \multicolumn{2}{|c}{:4}\\ \hline
& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
\textbf {BSDS500} & & & & & & \\
BIC & 40,152 & 0,993 &40,526 &0,993 &40,456 &0,993 \\
DPID & 43,011 & 0,996 &43,090 &0,996 &42,481 &0,996 \\
L$_0$ & 30,742 & 0,961 &34,304 &0,971 &35,585 &0,971 \\
d-LCI & 54,852 &\textbf{1,000} &\textbf{$\infty$} &\textbf{1,000} &56,890 &\textbf{1,000} \\
d-VPI & \textbf{56,025} &\textbf{1,000} &\textbf{$\infty$} &\textbf{1,000} &\textbf{60,928} &\textbf{1,000} \\
\hline
\textbf {13US}& & & & & & \\
BIC & 36,419 & 0,990 & 36,755 & 0,991 &36,683 & 0,991 \\
DPID & 39,405 & 0,996 &39,676 & 0,996 &38,988 &0,995 \\
L$_0$ & 27,008 &0,949 & 31,637 &0,972 &34,467 &0,979 \\
d-LCI &54,172 &\textbf{1,000} & \textbf{$\infty$} &\textbf{1,000} &56,541 &\textbf{1,000} \\
d-VPI & \textbf{55,039} &\textbf{1,000} & \textbf{$\infty$} &\textbf{1,000} &\textbf{59,529} &\textbf{1,000} \\
\hline
\textbf {NY17}& & & & & & \\
BIC & 47,688 &0,995& 48,849& 0,996& 48,706 & 0,996\\
DPID& 49,218&0,998& 49,212& 0,998& 48,706& 0,997 \\
L$_0$ & 36,461 & 0,972 & 39,043& 0,979 & oom & oom \\
d-LCI & 55,497 & 0,999 &\textbf{$\infty$} & \textbf{1,000}&58,497& \textbf{1,000}\\
d-VPI & \textbf{57,632} & \textbf{1,000} &\textbf{$\infty$} & \textbf{1,000}&\textbf{ 64,042}& \textbf{1,000}\\
\hline
\textbf {NY96}& & & & & & \\
BIC & 46,261& 0,996 & 47,371 &0,997 & 47,189 & 0,996\\
DPID& 48,187& 0,998 & 48,398 &0,998& 47,901& 0,998\\
L$_0$ & 35,686 & 0,974 & oom & oom & oom & oom \\
d-LCI & 55,227 & 0,999& \textbf{$\infty$} &\textbf{1,000}&58,306 & \textbf{1,000}\\
d-VPI & \textbf{57,315} & \textbf{1,000} &\textbf{$\infty$} & \textbf{1,000}&\textbf{ 63,800}& \textbf{1,000}\\
\hline
\textbf {URBAN100}& & & & & & \\
BIC & 37,071& 0,989 &37,414&0,990 & 37,344 & 0,989\\
DPID& 40,648& 0,996& 40,858 &0,996& 40,192& 0,995\\
L$_0$ & 28,215 &0,951 & 32,608 & 0,969 & 35,185 & 0,973\\
d-LCI & 53,800 & 0,999& \textbf{$\infty$} &\textbf{1,000}&56,560 & \textbf{1,000}\\
d-VPI & \textbf{54,703} & \textbf{1,000}&\textbf{$\infty$} & \textbf{1,000}&\textbf{ 59,730}& \textbf{1,000}\\
\hline
\textbf {PEXELS300}& & & & & & \\
BIC & 46,803 & 0,997 & 47,834 & 0,997& 47,675&0,997 \\
DPID & 48,193 & 0,998 & 48,388 &0,998 & 47,935& 0,998\\
L$_0$ & 35,388 & 0,976 &37,693 & 0,980 &38,799 & 0,981 \\
d-LCI & 55,741& \textbf{1,000} & \textbf{$\infty$} & \textbf{1,000} & 58,069 & \textbf{1,000}\\
d-VPI & \textbf{57,780} & \textbf{1,000} &\textbf{$\infty$} & \textbf{1,000}& \textbf{ 63,595}& \textbf{1,000}\\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}[!htbp]
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_PSNR_zoom_average_2.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{PSNR_zoom_average_2.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_PSNR_zoom_average_3.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{PSNR_zoom_average_3.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_PSNR_zoom_average_4.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{PSNR_zoom_average_4.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{PSNR_average_down_2.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{PSNR_average_down_4.jpg}
\caption {PSNR values extracted from Tables \ref{DIV2k-up234}--\ref{down234}}
\label{fig:12a}
\end{figure*}
\begin{figure*}[!htbp]
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_SSIM_zoom_average_2.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{SSIM_zoom_average_2.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_SSIM_zoom_average_3.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{SSIM_zoom_average_3.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{Div2k_SSIM_zoom_average_4.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{SSIM_zoom_average_4.jpg}
\newline
\newline
\includegraphics[height=4.5cm,width=8.5cm]{SSIM_average_down_2.jpg}
\includegraphics[height=4.5cm,width=8.5cm]{SSIM_average_down_4.jpg}
\caption{SSIM values extracted from Tables \ref{DIV2k-up234}--\ref{down234}}
\label{fig:11a}
\end{figure*}
From the displayed average results, we observe the following.
\vspace{.3cm}\newline
$\Box$ Concerning upscaling:
\begin{enumerate}
\item[u.1]
On the datasets displayed in Table \ref{up234}, employing the methods with BIC input images, u-VPI achieves slightly higher performance than BIC and u-LCI in terms of visual quality values;
\item[u.2]
On the DIV2k dataset, taking the input images from the datasets included therein, we observe that the previous trend for BIC, u-LCI, and u-VPI is confirmed. However, SCN gives the best visual quality values when the input images are generated by BIC (i.e., belonging to the DIV2k-T-B and DIV2k-V-B datasets). Nevertheless, in the case of input images classified as unknown (i.e., belonging to DIV2k-T-u and DIV2k-V-u), slightly higher performance values are given by u-VPI, except in one case: SCN has the best performance on DIV2k-T-u when $s=4$. On the contrary, SCN always provides the lowest PSNR and SSIM values when the input images are classified as difficult or mild (see Table \ref{DIV2k-up4}). In this case, u-VPI continues to provide the best quality values for the train images (i.e., for input images in DIV2k-T-d and DIV2k-T-m), while BIC outperforms it for the validation images (i.e., with input images from DIV2k-V-d and DIV2k-V-m). Finally, nothing changes in the comparison between u-VPI and u-LCI, where u-VPI always gives slightly higher values.
\end{enumerate}
\begin{table}[!htbp]
\caption{Average of the optimal values of $\theta$ resulting from supervised VPI with BIC input images. The downscaling $:3$ case is missing since it is independent of $\theta$.}
\label{tab:0}
\scriptsize{
\begin{center}
\begin{tabular}{r|r|r|r|r|r}
\hline
&{:2} & {:4} &{x2} & {x3} & {x4}\\
\hline
\textbf {BSDS500} & 0,279&0,649 &0,198 &0,203 &0,543 \\
\textbf {13US} &0,250 &0,250 &0,135 &0,288 &0,142 \\
\textbf {NY17} &0,374 &0,688 &0,132 &0,150 &0,153 \\
\textbf {NY96} &0,363 &0,678 &0,146 &0,193 &0,183 \\
\textbf {Urban100} &0,251 &0,596 &0,127 &0,264 &0,135 \\
\textbf {Pexels300} &0,370 & 0,596&0,118 &0,122 &0,121 \\
\textbf {Set 5} & & &0,110 & 0,250&0,120 \\
\textbf {Set 14} & & &0,121 &0,108 &0,188 \\
\textbf {DIV2k-T-B} & & &0,113 &0,125 &0,125 \\
\textbf {DIV2k-V-B} & & &0,114 &0,123 &0,121 \\
\textbf {DIV2k-T-u} & & &0,607 &0,098 &0,717 \\
\textbf {DIV2k-V-u} & & &0,609 &0,093 &0,722 \\
\textbf {DIV2k-T-d} & & & & &0,894 \\
\textbf {DIV2k-V-d} & & & & &0,896 \\
\textbf {DIV2k-T-m} & & & & & 0,892 \\
\textbf {DIV2k-V-m} & & & & & 0,912 \\
\hline
\end{tabular}
\end{center}
}
\end{table}
\begin{figure}[!htbp]
\begin{center}
\includegraphics [height=4.5cm,width=8.5cm]{boxplotdown.jpg}
\noindent
\newline
\newline
\includegraphics [height=4.5cm,width=8.5cm]{boxplotup.jpg}
\caption{Boxplot derived from Table \ref{tab:0}}
\label{fig:bp}
\end{center}
\end{figure}
\noindent
$\Box$ Concerning downscaling (with BIC input images):
\begin{enumerate}
\item[d.1]
The best PSNR and SSIM values are achieved by d-VPI, followed in order by d-LCI ({\it ex aequo} in some cases), DPID, BIC, and L$_0$. For all datasets and scale factors displayed in Table \ref{down234}, there is a consistent gap between the PSNR values of d-VPI and those provided by BIC, DPID, and L$_0$. In agreement with the results in \cite{Lagrange}, d-LCI also provides good values but, in general, d-VPI outperforms it, even if by a smaller gap.
\item[d.2] The demo version of L$_0$ has memory problems with large input images. In fact, in the case of target images from the NY17 and NY96 datasets, L$_0$ gives no output for the scale factors $s=4$ and $s=3,4$, respectively, which, according to (\ref{input-size}), generate a large input size.
\item[d.3]
When $s=3$, d-VPI confirms the optimal performance proved in Proposition 1 for odd scale factors. We note that this case is missing in Figures \ref{fig:12a}--\ref{fig:11a} since (\ref{dim_ssim}) holds for both d-LCI and d-VPI. Nevertheless, we point out that (\ref{dim_ssim}) has been proved under the ideal assumptions that the required LR sampling satisfies the Nyquist--Shannon theorem and that the initial data are not corrupted; hence it is not always true. Indeed, we have verified that (\ref{dim_ssim}) continues to hold starting from HR input images generated by the Nearest-Neighbor and Bilinear methods \cite{NN-NIL2} (using Matlab \texttt{imresize} with the \texttt{nearest} and \texttt{bilinear} options, respectively), but in Subsection 5.2.3 we show that it does not hold if we use SCN to generate the input HR image.
\end{enumerate}
\subsubsection{Parameter modulation}
\label{sub4}
In this subsection, we use the previous quantitative analysis to give the user a hint for setting the parameter $\theta$ in practice, when the target image is unavailable.
To this aim, in the previous tests employing supervised VPI with BIC input images, for each dataset we compute the average of the optimal values of $\theta$, i.e. of those corresponding to the output images with minimal MSE. For both upscaling and downscaling, we report these results in Table \ref{tab:0} and show the related boxplots in Figure \ref{fig:bp}. We remark that the downscaling case with scale factor $s=3$ is missing since, as previously stated, the d-VPI output is independent of $\theta$.
\begin{table}[!htbp]
\caption{Average CPU time in the upscaling cases of Table \ref{up234}}
\label{tempi-up}
\scriptsize{
\begin{center}
\begin{tabular}{r|r|r|r}
\hline\hline
& & & \\
& {x2} & {x3} & {x4} \\ \hline\hline
\textbf{BSDS500} & & & \\
BIC & 0,003 & 0,002 & 0,003 \\
u-LCI & 0,014 & 0,010 & 0,008 \\
u-VPI & 0,013 & 0,012 & 0,010 \\
\hline
\textbf {13US} & & & \\
BIC & 0,002 & 0,002 & 0,002 \\
u-LCI & 0,012 & 0,010 & 0,009 \\
u-VPI & 0,014 & 0,009 & 0,012 \\
\hline
\textbf{NY17} & & & \\
BIC & 0,091 & 0,074 & 0,071 \\
u-LCI & 1,357 & 0,839 & 0,634 \\
u-VPI & 1,314 & 0,930 & 0,732 \\
\hline
\textbf{NY96} & & & \\
BIC & 0,056 & 0,055 & 0,051 \\
u-LCI & 0,812 & 0,567 & 0,459 \\
u-VPI & 0,864 & 0,606 & 0,487 \\
\hline
\textbf{URBAN100} & & & \\
BIC & 0,009 & 0,008 & 0,008 \\
u-LCI & 0,051 & 0,051 & 0,059 \\
u-VPI & 0,076 & 0,058 & 0,050 \\
\hline
\textbf{PEXELS300} & & & \\
BIC & 0,041& 0,037 & 0,035 \\
u-LCI & 0,300 & 0,216 & 0,185 \\
u-VPI & 0,366 & 0,300 & 0,255 \\
\hline
\textbf{SET5} & & & \\
BIC & 0,003 & 0,003 & 0,002 \\
u-LCI & 0,014 & 0,008 & 0,006 \\
u-VPI & 0,015 & 0,009 & 0,009 \\
\hline
\textbf{SET14} & & & \\
BIC & 0,003& 0,003 & 0,005 \\
u-LCI & 0,020& 0,014 & 0,013 \\
u-VPI & 0,022 & 0,017 & 0,015 \\
\end{tabular}
\end{center}
}
\end{table}
\begin{table}[!htbp]
\caption{Average CPU time in the upscaling cases of Tables \ref{DIV2k-up234}--\ref{DIV2k-up4}}
\label{tempi-up-DIV2k}
\scriptsize{
\begin{center}
\begin{tabular}{r|r|r|r}
\hline\hline
& & & \\
& {x2} & {x3} & {x4} \\ \hline\hline
\textbf{DIV2k-T-B} & & & \\
BIC & 0,036 & 0,031 & 0,032 \\
SCN & 14,353 & 30,972 & 17,879\\
u-LCI & 0,245 & 0,175 & 0,165 \\
u-VPI & 0,318 & 0,225 & 0,187\\
\hline
\textbf{DIV2k-V-B} & & & \\
BIC & 0,035 & 0,029 & 0,030 \\
SCN & 14,822 & 32,872 & 18,855 \\
u-LCI & 0,252 & 0,197 & 0,152 \\
u-VPI & 0,315 & 0,234 & 0,192 \\
\hline
\textbf{DIV2k-T-u} & & & \\
BIC & 0,035 & 0,025 & 0,023 \\
SCN & 14,442 & 29,500 & 17,568 \\
u-LCI & 0,233 & 0,176 & 0,149 \\
u-VPI & 0,322 & 0,224 & 0,190 \\
\hline
\textbf{DIV2k-V-u} & & & \\
BIC & 0,029 & 0,029 &0,027 \\
SCN & 14,976 & 32,270 & 18,362 \\
u-LCI & 0,248 & 0,179 & 0,147 \\
u-VPI & 0,333 & 0,231 & 0,214 \\
\hline
\textbf{DIV2k-T-d} & & & \\
BIC & & & 0,022 \\
SCN & & & 17,842 \\
u-LCI & & & 0,150 \\
u-VPI & & & 0,206 \\
\hline
\textbf{DIV2k-V-d} & & & \\
BIC & & & 0,023 \\
SCN & & & 17,608 \\
u-LCI & & & 0,151 \\
u-VPI & & & 0,211 \\
\hline
\textbf{DIV2k-T-m} & & & \\
BIC & & & 0,024 \\
SCN & & & 18,532 \\
u-LCI & & & 0,150 \\
u-VPI & & & 0,206 \\
\hline
\textbf{DIV2k-V-m} & & & \\
BIC & & & 0,024 \\
SCN & & & 18,496 \\
u-LCI & & & 0,150 \\
u-VPI & & & 0,213 \\
\end{tabular}
\end{center}
}
\end{table}
\begin{table}[!htbp]
\caption{Average CPU time in the downscaling tests of Table \ref{down234} (oom stands for ``out of memory'').}
\label{tempi-down}
\scriptsize{
\begin{center}
\begin{tabular}{r|r|r|r}
\hline\hline
& & & \\
& {:2} & {:3} & {:4} \\ \hline\hline
\textbf{BSDS500} & & & \\
BIC & 0,006 & 0,009 & 0,017 \\
DPID & 7,696 & 12,246 & 18,615 \\
L0 & 3,647 & 8,020 & 14,295 \\
d-LCI & 0,057 & 0,001 & 0,137 \\
d-VPI & 0,046 & 0,001 & 0,117 \\
\hline
\textbf{13US} & & & \\
BIC & 0,005 & 0,009 & 0,013 \\
DPID & 5,593 & 8,905 & 13,819 \\
L0 & 2,572 & 5,706 & 9,939 \\
d-LCI & 0,042 & 0,001 & 0,108 \\
d-VPI & 0,034 & 0,001 & 0,086 \\
\hline
\textbf{NY17} & & & \\
BIC & 0,229 & 0,325 & 0,639 \\
DPID & 448,023 & 731,498 & 1098,113 \\
L0 & 208,574 & 617,386 & oom \\
d-LCI & 6,614 & 0,023 & 22,859 \\
d-VPI & 7,246 & 0,023 & 20,194 \\
\hline
\textbf{NY96} & & & \\
BIC & 0,155 & 0,219 & 0,422\\
DPID & 291,68 & 479,638 & 714,908\\
L0 & 133,19 & oom & oom \\
d-LCI & 4,698 & 0,016 & 13,898\\
d-VPI & 4,704 & 0,016 & 13,949\\
\hline
\textbf{URBAN100} & & & \\
BIC & {0,027} & {0,041} & 0,068\\
DPID & 37,592 & 60,834 & 93,996\\
L0 & 13,267 & 28,845 & 50,312 \\
d-LCI & 0,234 & 0,002 & 0,618\\
d-VPI & 0,259 & 0,002 & 0,732\\
\hline
\textbf{PEXELS300} & & & \\
BIC & 0,088 & 0,120 & 0,225 \\
DPID & 151,800 &246,573 &374,155 \\
L0 & 56,454 &125,413 &228,683 \\
d-LCI & 1,560 & 0,009 &4,977 \\
d-VPI & 1,689 & 0,009 & 4,406\\
\end{tabular}
\end{center}
}
\end{table}
\subsubsection{CPU time analysis}
\label{sub5}
The CPU time required by each scaling method is also an important aspect of the quantitative performance evaluation. For this reason, in the previous experiments, besides PSNR and SSIM, we measured the CPU time each method takes to produce the output images. Tables \ref{tempi-up}--\ref{tempi-down} show the average CPU times we computed, for each scaling factor and dataset, by employing the displayed methods, in upscaling (Tables \ref{tempi-up}, \ref{tempi-up-DIV2k}) and downscaling (Table \ref{tempi-down}).
Regarding VPI, we point out that in Tables \ref{DIV2k-up234}--\ref{down234} we tested supervised VPI, which is structured to produce 19 resized images, corresponding to unsupervised VPI called with 19 equidistant values of the input parameter $\theta$. For this reason, for the comparison with the benchmark methods, we report in Tables \ref{tempi-up}--\ref{tempi-down} the average CPU time that unsupervised VPI takes when the input $\theta$ equals the average value reported in Table \ref{tab:0} for each employed dataset and for the scale factors 2, 3, 4. For other values of $\theta$, we did not observe significant variations with respect to the displayed results.
Inspecting Tables \ref{tempi-up}--\ref{tempi-down}, we observe the following.
\vspace{.2cm}\newline
$\Box$ Concerning upscaling:
\vspace{.2cm}\newline
The method requiring the least CPU time is BIC. At a short distance, we find u-LCI and u-VPI, with CPU times very close to each other. A much higher computation time is required by SCN, which is always the slowest. Table \ref{tempi-up-DIV2k} shows that this trend is independent of how the input image is generated (i.e., -B, -u, -d, -m).
\vspace{.2cm}\newline
$\Box$ Concerning downscaling:
\vspace{.2cm}\newline
Also in this case, the method requiring the least computation time is BIC, closely followed by d-LCI and d-VPI, which coincide and are much faster when the scale factor is 3, due to (\ref{RI-tilde}).
In the ranking, L$_0$ and DPID follow, with much higher computation times than d-VPI, especially on datasets characterized by larger image sizes (such as NY17, NY96, and PEXELS300). In particular, L$_0$ gives no output for target images in NY17 and NY96 with scale factors $s=4$ and $s\ge 3$, respectively, while it is faster than DPID in the remaining cases.
\begin{table*}[!htbp]
\caption {Average performance results of upscaling methods (first column) on PEXELS300 dataset with input images generated by BIC, u-LCI, u-VPI, DPID and $L_0$ }
\label{cambioGin-up}
\scriptsize{
\begin{center}
\begin{tabular}{l||cc||cc||cc||cc||cc}
\hline
&\multicolumn{2}{c|}{\bf BIC input image} & \multicolumn{2}{|c|}{\bf LCI input image} &
\multicolumn{2}{|c|}{\bf VPI input image}& \multicolumn{2}{|c|}{\bf DPID input image}
& \multicolumn{2}{|c}{\bf $L_0$ input image}
\\ \hline
& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
{\bf x2} &&&&&&&& \\
{BIC} &
36.249 & 0.962 & 36.274 & 0.963 & 37.047 & 0.968 & 36.336 & {\bf 0.966} & 35.194 & {\bf 0.964} \\
{u-LCI} &
37.067 & 0.966 & 35.302 & 0.950 & 36.436 & 0.959 & 35.567 & 0.957 & 33.589 & 0.945 \\
{u-VPI} &
37.128 & 0.966 & {\bf 36.482} & {\bf 0.964} & {\bf 37.397} & {\bf 0.969} & {\bf 36.427} & {\bf 0.966} & {\bf 35.230} & {\bf 0.964} \\
{SCN} &
{\bf 37.916} & {\bf 0.971} & 33.939 & 0.945 & 35.184 & 0.957 & 34.325 & 0.953 & 31.818 & 0.936 \\
\hline
{\bf x3} &&&&&&&& \\
{BIC} &
33.147 & 0.932 & 32.409 & 0.929 & 32.409 & 0.929 & 32.988 & {\bf 0.935} & {\bf 32.055} & {\bf 0.929} \\
{u-LCI} &
33.622 & 0.935 & 31.444 & 0.908 & 31.444 & 0.908 & 32.494 & 0.924 & 31.173 & 0.913 \\
{u-VPI} &
33.671 & 0.936 & {\bf 32.535} & {\bf 0.930} & {\bf 32.535} & {\bf 0.930} & {\bf 33.050} & {\bf 0.935} & 32.048 & 0.928 \\
{SCN} &
{\bf 34.462} & {\bf 0.944} & 30.245 & 0.904 & 30.245 & 0.904 & 31.849 & 0.926 & 30.145 & 0.906 \\
\hline
{\bf x4} &&&&&&&& \\
{BIC} &31.374 &0.908& 30.333 &0.903& 30.589 &0.907& 31.113 & {\bf 0.910}& 31.406 & {\bf 0.913} \\
{u-LCI} &31.741 &0.910& 29.414 &0.880& 29.721 &0.885& 30.689 &0.899& 31.001&0.901 \\
{u-VPI} &31.786 &0.912& {\bf 30.458} &{\bf 0.905}& {\bf 30.683} &{\bf 0.908}& {\bf 31.151} &{\bf 0.910}& {\bf 31.473}& {\bf 0.913} \\
{SCN} &{\bf 32.551} &{\bf 0.922}& 28.147 &0.872& 28.585 &0.882& 30.098 &0.902& 30.619 &0.907\\
\hline
\end{tabular}
\end{center}
}
\end{table*}
\begin{table*}[!htbp]
\caption {Average performance results of downscaling methods (first column) on BSDS500 dataset with input images generated by BIC, d-LCI, d-VPI and SCN}
\label{cambioGin-down}
\scriptsize{
\begin{center}
\begin{tabular}{l||cc||cc||cc||cc}
\hline\\
&\multicolumn{2}{c}{\bf BIC input image} & \multicolumn{2}{|c}{\bf LCI input image} &
\multicolumn{2}{|c}{\bf VPI input image}& \multicolumn{2}{|c}{\bf SCN input image}
\\
\hline\hline
& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
{\bf :2} &&&&&&&& \\
{BIC} &
40.080 & 0.992 & 42.906 & 0.996 & 40.610 & 0.993 & {\bf 47.910} & {\bf 0.999} \\
{d-LCI} &
54.840 & {\bf 1.000} & 57.392 & {\bf 1.000} & 58.509 & {\bf 1.000} & 36.931 & 0.987 \\
{d-VPI} &
{\bf 55.993} & {\bf 1.000} & {\bf 62.467} & {\bf 1.000} & {\bf 61.901} & {\bf 1.000} & 46.983 & {\bf 0.999} \\
{DPID} &
41.065 & 0.991 & 40.256 & 0.990 & 40.942 & 0.991 & 37.173 & 0.985 \\
{$L_0$} &
37.539 & 0.990 & 36.943 & 0.989 & 37.250 & 0.989 & 32.057 & 0.966 \\
\hline
{\bf :3} &&&&&&&& \\
{BIC} &
40.452 & 0.993 & 43.325 & 0.996 & 40.860 & 0.994 & {\bf 47.453} & {\bf 0.999 } \\
{d-LCI} &
$\mathbf\infty$ & {\bf 1.000} & $\mathbf\infty$ & {\bf 1.000} & $\mathbf\infty$ & {\bf 1.000} & 36.643 & 0.986 \\
{d-VPI} &
$\mathbf\infty$ & {\bf 1.000} & $\mathbf\infty$ & {\bf 1.000} & $\mathbf\infty$ & {\bf 1.000} & 36.643 & 0.986 \\
{DPID} &
42.282 & 0.994 & 40.981 & 0.992 & 42.021 & 0.994 & 38.212 & 0.988 \\
{$L_0$} &
36.828 & 0.987 & 36.348 & 0.985 & 36.680 & 0.986 & 33.027 & 0.969 \\
\hline
{\bf :4} &&&&&&&& \\
{BIC} &
40.383 & 0.993 & 43.218 & 0.996 & 40.806 &0.994& {\bf 47.501} & {\bf 0.999} \\
{d-LCI} &
56.846 &{\bf 1.000}& 59.981 &{\bf 1.000}& 60.319 &{\bf 1.000}& 35.911 & 0.984 \\
{d-VPI} &
{\bf 60.861} &{\bf 1.000}& $\mathbf\infty$ &{\bf 1.000}& {\bf 71.533} &{\bf 1.000}& 40.619 & 0.994 \\
{DPID} &
42.281 &0.994& 40.735 &0.993& 41.991 &0.994& 38.165 & 0.989 \\
{$L_0$} &
45.123 &0.997& 46.087 &0.998& 45.449 &0.998& 38.712 &0.991 \\
\hline
\end{tabular}
\end{center}
}
\end{table*}
\subsubsection{Input image dependency}
\label{sub3}
In this subsection, we study the dependency of the VPI performance on the way the input images are generated. To this aim, we repeat the previous quantitative analysis, computing the average PSNR and SSIM values for supervised VPI and the benchmark methods, but we vary the scaling method that generates the input images from the target ones in the dataset. More precisely, in downscaling we provide input HR images generated by the upscaling methods BIC, u-LCI, u-VPI, and SCN, while in upscaling we get the input LR images by applying the downscaling methods BIC, d-LCI, d-VPI, DPID, and L$_0$. We point out that, whenever u-VPI or d-VPI is employed to generate the input image, we use the unsupervised mode with the default value $\theta =0.5$.
As above, we consider the scale factors $s=2,3,4$ and we require for the input images the size $n_1\times n_2$ determined by (\ref{input-size}), where $N_1\times N_2$ is the size of the target images in the dataset.
Since we have verified that the demo codes of L$_0$ and SCN have problems in processing large images, we do not consider all datasets for this test, but focus on the PEXELS300 dataset in upscaling and on the BSDS500 dataset in downscaling. The average performance results are shown in Tables \ref{cambioGin-up} and \ref{cambioGin-down}, respectively.
From these tables, we observe the following.
\vspace{.2cm}\newline
$\Box$ Concerning upscaling:
\begin{itemize}
\item SCN provides the highest quality measures only in the case of BIC input images, confirming the trend displayed in Table \ref{DIV2k-up234}.
\item In the case of input images generated by downscaling methods different from BIC, SCN always provides the lowest values and the best performance is attained by u-VPI, except in the upscaling x3 case with L$_0$ input images, where BIC presents slightly higher performance values than u-VPI.
\item Similarly to BIC, u-VPI has a rather stable behavior with respect to variations of the input image. The quality measures of u-VPI are always higher than those of u-LCI, which behaves better than BIC only with BIC input images.
\item In upscaling (x3), we note the same performance values for u-LCI and u-VPI input images (third and fourth column pairs of Table \ref{cambioGin-up}), confirming that in downscaling (:3) both d-LCI and d-VPI generate the same input images.
\end{itemize}
$\Box$ Concerning downscaling:
\begin{itemize}
\item For even scale factors ($s=2,4$), d-VPI always provides much higher performance values than DPID and L$_0$. The best performance is achieved by d-VPI, followed by d-LCI, except in the case of SCN input images, where BIC holds the record, followed by d-VPI.
For odd scale factors, it is confirmed that d-VPI reduces to d-LCI, reaching the optimal quality measures in the case of input images generated by BIC, u-LCI, or u-VPI. Nevertheless, for SCN input images, BIC again holds the record, here followed in order by DPID, d-LCI = d-VPI, and L$_0$.
\end{itemize}
\subsection{Qualitative evaluation}
\label{PE}
We test VPI and the benchmark methods for scale factors varying from 2 to very large values, both in supervised and unsupervised mode. In this subsection, some visual results of the numerous tests performed are given.
\vspace{.2cm}\newline
$\bullet$ Concerning the supervised mode:
\vspace{.2cm}\newline
we show some examples of performance results in Figures \ref{fig:6}--\ref{fig:7} for upscaling and in Figures \ref{fig:8}--\ref{fig:9} for downscaling, with different BIC input images and scale factors 2, 3, 4. In these figures, some Regions Of Interest (ROIs) are shown in order to highlight the results at a perceptual level. The visual inspection of these results confirms the quantitative evaluation in terms of PSNR and SSIM exhibited in Subsection \ref{QE}. Hence, we deduce that: a) the observable structure of the objects is captured; b) local contrast and luminance of the input image are preserved; c) small details and most of the salient edges are maintained; d) the presence of ringing and over-smoothing artifacts is very limited; e) the resized image is not noticeably blurred.
\vspace{.2cm}\newline
$\bullet$ Concerning the unsupervised mode:
\vspace{.2cm}\newline
we set, as default, the free parameter $\theta$ equal to 0.5 and take the input images directly from the datasets. Unlike the supervised mode, we cannot compute the PSNR and SSIM quality measures since the target image is missing. Consequently, in the following, we evaluate the performance by human perceptual judgment, taking into account the CPU time (briefly denoted by T) when the results are almost equivalent in terms of perceived quality.
Firstly, we consider as input two images already displayed for the supervised mode and used in that case as target images (see Figure \ref{fig:6} and Figure \ref{fig:8}). We show the performance results at the scale factor 2 (upscaling) in Figure \ref{fig:10} and at the scale factor 4 (downscaling) in Figure \ref{fig:11}. A careful examination of Figure \ref{fig:10} does not reveal significant perceptual differences in the upscaled images produced by the various methods. However, the required CPU times for u-VPI, u-LCI, and BIC are very close, while SCN takes much more processing time. On the other hand, since the input image in Figure \ref{fig:11} has many high-frequency details, unlike the one in Figure \ref{fig:8} obtained from a BIC input image, aliasing effects are visually detectable for all methods. In particular, d-LCI and d-VPI present more evident aliasing effects than the other methods, although with a smaller CPU time than L$_0$ and DPID.
Even if avoiding aliasing effects is an important aspect of downscaling methods, this is beyond the scope of the present paper, which intends to show a different point of view for both upscaling and downscaling by using a specific Approximation Theory tool in the Image Processing framework. Consequently, in Figure \ref{fig:11}, as well as in the remaining experiments, we just show the performance result obtained by a pre-filtering combined with d-VPI (denoted as f-d-VPI). We point out that in f-d-VPI the type of filter to employ is selected based on the image features. Our selection includes the following 2-D filters: a) averaging filter ('average'); b) circular averaging filter ('disk'); c) Gaussian filter ('gaussian'); d) motion filter ('motion'); they are all implemented in Matlab using \texttt{fspecial} to specify the filter type.
As mentioned in Section \ref{method}, this solution only partially reduces the aliasing effect in Figure \ref{fig:11}. It does not affect the processing time too much since the CPU time of f-d-VPI is much smaller than that of L$_0$ and DPID, and very close to the CPU time of BIC.
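For illustration, the following minimal Python sketch shows how such a pre-filtering step could be realized before downscaling (our own sketch, not the authors' Matlab code; images are assumed to be grayscale 2-D arrays, and \texttt{d\_vpi} is a hypothetical placeholder for the actual d-VPI routine):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def prefilter(img, kind="disk", size=3, sigma=1.0):
    # Low-pass filter the HR image before downscaling, to attenuate
    # the frequencies that would otherwise alias (cf. f-d-VPI).
    if kind == "average":
        return ndimage.uniform_filter(img, size=size)
    if kind == "gaussian":
        return ndimage.gaussian_filter(img, sigma=sigma)
    if kind == "disk":
        r = size  # circular averaging kernel of radius r
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        kernel = (x**2 + y**2 <= r**2).astype(float)
        kernel /= kernel.sum()
        return ndimage.convolve(img, kernel, mode="reflect")
    raise ValueError("unknown filter kind: " + kind)

# f-d-VPI = pre-filtering followed by d-VPI (placeholder call):
# lr = d_vpi(prefilter(hr_image, kind="disk", size=3), scale=4)
\end{verbatim}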
To highlight the influence of aliasing, in the sequel we consider different kinds of input images extracted from the PEXELS300 dataset (thus having the same input size 1800$\times$1800), and we apply to them unsupervised d-VPI and the benchmark methods with the same downscaling factor.
In Figure \ref{fig:12}, downscaling with scale factor 3 is applied to the images (1640882 and 163064 from PEXELS300) displayed at the top. Some ROIs of the resulting output images are shown in the middle and bottom rows in order to emphasize the aliasing phenomenon. By visually inspecting these ROIs, we can observe a different behavior of d-VPI which, in this case, coincides with d-LCI, the LR image being computed by (\ref{ih}). Indeed, we note that for the input image on the right (163064) the aliasing effect is not appreciable (see, for instance, the diagonal line in the ROIs), while it becomes visible for the input image on the left (1640882). In the latter case, aliasing occurs for all methods to a different extent, but BIC, L$_0$, and DPID perform better since their downscaled images are affected by aliasing to a lesser extent than that of d-VPI (see the vertical elements of the railing). However, in the image resized by f-d-VPI, the aliasing effect is comparable to that of the other methods, without a significant computational burden: f-d-VPI turns out to be the second-fastest method and is competitive with BIC (L$_0$ and DPID have a greater CPU time).
Finally, in Figure \ref{fig:13}, we test all downscaling methods at the scale factor 8 on the input image displayed at the top (3472764 from PEXELS300). In this case, d-VPI and d-LCI produce better visual results since the stars in the sky are more adequately preserved in terms of number and shape. BIC and L$_0$ reduce the number of stars too much and introduce a blurring effect, while DPID provides a downscaled image where the stars are almost all reshaped and doubled. Moreover, f-d-VPI does not seem to give new insights. Note that aliasing is visible in other areas of the image (see, for example, the mountain ridge area) with almost the same intensity for all methods. As for CPU time, also in this case DPID and L$_0$ are the most expensive.
In conclusion, we point out that the aliasing effect does not always occur at the same scale factor and does not always influence the downscaling performance in the same way. Moreover, in some contexts, even the downscaling methods designed to reduce aliasing can prove inadequate to manage this problem and can introduce distortions even greater than the aliasing itself. In these cases, as well as when the aliasing is not visible and when the quality of the downscaled image is visually equivalent, d-VPI may prove preferable since it provides a good compromise in terms of quality and CPU time.
\begin{figure*}[!htbp]
\begin{center}
\normalsize{ \hspace{2cm} \textbf{x2} \hspace{5cm} \textbf{x3}\hspace{5cm}\textbf{x4}}
\newline
\newline
\includegraphics[height=3.5cm,width=5.1cm]{0023.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{S14.2_ret.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{N2_ret.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_0023.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_S14.2.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_N2_ret.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{Target image} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{0023 (1356$\times$2040) } & \hspace *{1.8cm} \textbf{S14.2 (576$\times$720)} &\hspace *{1.8cm} \textbf{N2 (2304$\times$3072)}\\
\hspace *{1.8cm} \textbf{from DIV2k} & \hspace *{1.8cm} \textbf{from Set14} &\hspace *{1.8cm} \textbf{from NY17}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{0023_BIC_zoom2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{S14.2_BIC_zoom3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{N2_BIC_zoom4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_0023_BIC_zoom2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_S14.2_BIC_zoom3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_N2_BIC_zoom4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{BIC} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=45.773} & \hspace *{2.2cm} \textbf{PSNR=26.204} &\hspace *{2.1cm} \textbf{ PSNR=39.220}\\
\hspace *{1.8cm} \textbf{SSIM=0.995} & \hspace *{2.2cm} \textbf{SSIM=0.811} &\hspace *{2.2cm} \textbf{SSIM=0.952}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{0023_SCN_zoom2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{S14.2_SCN_zoom3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{N2_SCN_zoom4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_0023_SCN_zoom2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_S14.2_SCN_zoom3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_N2_SCN_zoom4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{SCN} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=36.143} & \hspace *{2.2cm} \textbf{PSNR=26.274} &\hspace *{2.1cm} \textbf{ PSNR=31.637}\\
\hspace *{1.8cm} \textbf{SSIM=0.975} & \hspace *{2.2cm} \textbf{SSIM=0.821} &\hspace *{2.2cm} \textbf{SSIM=0.958}\\
\end{tabular}
\caption{Examples of supervised upscaling performance results at the scale factor 2 (left), at the scale factor 3 (middle), at the scale factor 4 (right).}
\label{fig:6}
\end{figure*}
\begin{figure*}[!htbp]
\begin{center}
\normalsize{ \hspace{2cm} \textbf{x2} \hspace{5cm} \textbf{x3}\hspace{5cm}\textbf{x4}}
\newline
\newline
\includegraphics[height=3.5cm,width=5.1cm]{0023_L_zoom2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{S14.2_L_zoom3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{N2_L_zoom4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_0023_L_zoom2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_S14.2_L_zoom3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_N2_L_zoom4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{LCI} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=46.688} & \hspace *{2.1cm} \textbf{PSNR=26.585} &\hspace *{2.1cm} \textbf{ PSNR=40.068}\\
\hspace *{1.8cm} \textbf{SSIM=0.996} & \hspace *{2.1cm} \textbf{SSIM=0.822} &\hspace *{2.2cm} \textbf{SSIM=0.957}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{0023_VPI_zoom2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{S14.2_VPI_zoom3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{N2_VPI_zoom4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_0023_VPI_zoom2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_S14.2_VPI_zoom3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_N2_VPI_zoom4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{VPI} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=46.745} & \hspace *{2.1cm} \textbf{PSNR=26.590} &\hspace *{2.1cm} \textbf{ PSNR=40.113}\\
\hspace *{1.8cm} \textbf{SSIM=0.996} & \hspace *{2.1cm} \textbf{SSIM=0.822} &\hspace *{2.2cm} \textbf{SSIM=0.957}\\
\end{tabular}
\caption{Examples of supervised upscaling performance results at the scale factor 2 (left), at the scale factor 3 (middle), at the scale factor 4 (right).}
\label{fig:7}
\end{figure*}
\begin{figure*}[!htbp]
\begin{center}
\normalsize{ \hspace{2cm} \textbf{:2} \hspace{5cm} \textbf{:3}\hspace{5cm}\textbf{:4}}
\newline
\newline
\includegraphics[height=3.5cm, width=5.1cm]{135069.jpg}
\includegraphics[height=3.5cm, width=5.1cm]{NA38.jpg}
\includegraphics[height=3.5cm, width=5.1cm]{img075.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_OR_down2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_OR_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_OR_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{Target image} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{135069 (321$\times$481)} & \hspace *{1.8cm} \textbf{NA38 (480$\times$640)} &\hspace *{1.8cm} \textbf{img075 (680$\times$1024)}\\
\hspace *{1.8cm} \textbf{from BSDS500} & \hspace *{1.8cm} \textbf{from NY96} &\hspace *{1.8cm} \textbf{from Urban100}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{135069_BIC_down2.jpg}
\includegraphics[height=3.5cm, width=5.1cm]{NA38_BIC_down3.jpg}
\includegraphics[height=3.5cm, width=5.1cm]{img075_BIC_down4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_BIC_down2.jpg} \hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_BIC_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_BIC_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{BIC} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=50.266} & \hspace *{2.1cm} \textbf{PSNR=39.885} &\hspace *{2.3cm} \textbf{PSNR=41.035}\\
\hspace *{1.8cm} \textbf{SSIM=1.000} & \hspace *{2.1cm} \textbf{SSIM=0.995} &\hspace *{2.3cm} \textbf{SSIM=0.997}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{135069_DPID_down2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{NA38_DPID_down3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{img075_DPID_down4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_DPID_down2.jpg}\hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_DPID_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_DPID_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{DPID} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=54.342} & \hspace *{2.1cm} \textbf{PSNR=42.037} &\hspace *{2.3cm} \textbf{ PSNR=44.545}\\
\hspace *{1.8cm} \textbf{SSIM=1.000} & \hspace *{2.1cm} \textbf{SSIM=0.997} &\hspace *{2.4cm} \textbf{SSIM=0.999}\\
\end{tabular}
\caption{Examples of supervised downscaling performance results at the scale factor 2 (left), at the scale factor 3 (middle), at the scale factor 4 (right). }
\label{fig:8}
\end{figure*}
\begin{figure*}[!htbp]
\begin{center}
\normalsize{ \hspace{2cm} \textbf{:2} \hspace{5cm} \textbf{:3}\hspace{5cm}\textbf{:4}}
\newline
\newline
\includegraphics[height=3.5cm,width=5.1cm]{135069_L0_down2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{NA38_L0_down3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{img075_L0_down4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_L0_down2.jpg}\hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_L0_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_L0_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.8cm}\textbf{ L$_0$} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=39.771} & \hspace *{2.1cm} \textbf{PSNR=33.064} &\hspace *{2.1cm} \textbf{ PSNR=37.198}\\
\hspace *{1.8cm} \textbf{SSIM=0.997} & \hspace *{2.1cm} \textbf{SSIM=0.971} &\hspace *{2.2cm} \textbf{SSIM=0.991}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{135069_L_down2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{NA38_L_down3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{img075_L_down4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_L_down2.jpg}\hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_L_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_L_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{LCI} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=56.984} & \hspace *{2.1cm} \textbf{PSNR={$\infty$}} &\hspace *{2.3cm} \textbf{ PSNR=57.600}\\
\hspace *{1.8cm} \textbf{SSIM=1.000} & \hspace *{2.1cm} \textbf{SSIM=1.000} &\hspace *{2.4cm} \textbf{SSIM=1.000}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm,width=5.1cm]{135069_VPI_down2.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{NA38_VPI_down3.jpg}
\includegraphics[height=3.5cm,width=5.1cm]{img075_VPI_down4.jpg}\\
\includegraphics[height=1cm, keepaspectratio]{Rit_135069_VPI_down2.jpg}\hspace{3.2cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_NA38_VPI_down3.jpg}\hspace{3.3cm}
\includegraphics[height=1cm, keepaspectratio]{Rit_img075_VPI_down4.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{1.9cm}\textbf{VPI} & \hspace *{1.8cm} &\hspace *{1.8cm}\\
\hspace *{1.8cm} \textbf{PSNR=62.452} & \hspace *{2.1cm} \textbf{PSNR={$\infty$}} &\hspace *{2.3cm} \textbf{ PSNR=62.536}\\
\hspace *{1.8cm} \textbf{SSIM=1.000} & \hspace *{2.1cm} \textbf{SSIM=1.000} &\hspace *{2.4cm} \textbf{SSIM=1.000}\\
\end{tabular}
\caption{Examples of supervised downscaling performance results at the scale factor 2 (left), at the scale factor 3 (middle), at the scale factor 4 (right). For layout reasons, the performance results both for d-LCI and d-VPI are reported although they coincide for the downscaling factor 3.}
\label{fig:9}
\end{figure*}
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[height=3.5cm, width=5.1cm]{0023_BIC_resized.jpg}
\hspace *{1cm}
\includegraphics[height=3.5cm, width=5.1cm]{0023_SCN_resized.jpg}
\end{center}
\begin{tabular}{ll}
\hspace *{3cm} \textbf{BIC (T=0.124)} &\hspace *{3.2cm} \textbf{SCN (T=42.497)}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm, width=5.1cm]{0023_LCI_resized.jpg}
\hspace *{1cm}
\includegraphics[height=3.5cm, width=5.1cm]{0023_VPI_resized.jpg}
\end{center}
\begin{tabular}{ll}
\hspace *{3cm} \textbf{u-LCI (T=1.462)} &\hspace *{3cm} \textbf{u-VPI (T=1.698)}\\
\end{tabular}
\caption{An example of unsupervised upscaling performance results at the scale factor 2 on an image extracted from DIV2k (size=1356$\times$2040). }
\label{fig:10}
\end{figure*}
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_BIC_resized.jpg}
\hspace *{0.5cm}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_DPID_resized.jpg}
\hspace *{0.5cm}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_L0_resized.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{0.2cm} \textbf{BIC (T=0.018)} &\hspace *{2.9cm} \textbf{DPID (T=5.476)}&\hspace *{2.5cm} \textbf{ L$_0$ (T=2.008)}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_LCI_resized.jpg}
\hspace *{0.5cm}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_VPI_resized.jpg}
\hspace *{0.5cm}
\includegraphics[height=3.5cm, width=5.1cm]{img_075_VPI_filtered_disk3.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{0.2cm} \textbf{d-LCI (T=0.032)} &\hspace *{2.8cm} \textbf{d-VPI (T=0.033)}&\hspace *{2.5cm} \textbf{f-d-VPI (T=0.133)}\\
\end{tabular}
\caption{An example of unsupervised downscaling performance results at the scale factor 4 on an image extracted from Urban100 (size=680$\times$1024). In f-d-VPI a circular averaging filter with size 3 is employed.}
\label{fig:11}
\end{figure*}
\begin{figure*}[!htbp]
\begin{tabular}{l}
\hspace *{7.3cm} \textbf{Input images}\\
\end {tabular}
\begin{center}
\includegraphics[height=5.2cm, width=5.2cm]{1640882.jpg}
\hspace *{2cm}
\includegraphics[height=5.2cm, width=5.2cm]{163064.jpg}
\end{center}
\begin{tabular}{ll}
\hspace *{4cm} \textbf{1640882} &\hspace *{5.3cm} \textbf{163064}\\
\newline
\newline
\newline
\end{tabular}
\begin{tabular}{lllll}
\newline
\hspace *{0.3cm} \textbf{BIC} &\hspace *{2.1cm} \textbf{ DPID}&\hspace *{1.7cm} \textbf{ L$_0$} &\hspace *{2.3cm} \textbf{ d-VPI $\equiv$ d-LCI}&\hspace *{0.5cm} \textbf{f-d-VPI}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.2cm,width=3.2cm]{1640882_BIC_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{1640882_DPID_rit1.jpg}
\includegraphics[height=3.2 cm,width=3.2cm]{1640882_L0_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{1640882_VPI_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{1640882_VPI_filtered_rit1.jpg}
\end{center}
\begin{tabular}{lllll}
\hspace *{0.2cm} \textbf{(T=0.029)} &\hspace *{1.4cm} \textbf{(T=30.251)}&\hspace *{1cm} \textbf{(T=12.368)} & \hspace *{1cm} \textbf{(T=0.002)} &\hspace *{1.1cm} \textbf{(T=0.037)}\\
\end{tabular}
\begin{center}
\includegraphics[height=3.2cm,width=3.2cm]{163064_BIC_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{163064_DPID_rit1.jpg}
\includegraphics[height=3.2 cm,width=3.2cm]{163064_L0_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{163064_VPI_rit1.jpg}
\includegraphics[height=3.2cm,width=3.2cm]{163064_VPI_filtered_rit1.jpg}
\end{center}
\begin{tabular}{lllll}
\hspace *{0.2cm} \textbf{(T=0.090)} &\hspace *{1.4cm} \textbf{(T=30.116)}&\hspace *{1cm} \textbf{(T=13.126)} & \hspace *{1cm} \textbf{(T=0.004)} &\hspace *{1.1cm} \textbf{(T=0.125)}\\
\end{tabular}
\caption{Two input images extracted from PEXELS300 (top) and ROIs of unsupervised downscaling performance results at the scale factor 3 (size: 1800$\times$1800) (middle and bottom). In f-d-VPI an average filter is employed. }
\label{fig:12}
\end{figure*}
\begin{figure*}[!htbp]
\begin{tabular}{l}
\hspace *{7.3cm} \textbf{Input image}\\
\end {tabular}
\begin{center}
\includegraphics[height=5.2cm, width=5.2cm]{3472764.jpg}
\end{center}
\begin{tabular}{l}
\hspace *{7.5cm} \textbf{3472764 }\\
\newline
\newline
\end{tabular}
\begin{center}
\includegraphics[height=5.2cm,width=5.2cm]{3472764_BIC.jpg}
\includegraphics[height=5.2cm,width=5.2cm]{3472764_DPID.jpg}
\includegraphics[height=5.2 cm,width=5.2cm]{3472764_L0.jpg}
\end {center}
\begin{tabular}{lll}
\hspace *{0.6cm} \textbf{BIC (T=0.031)} &\hspace *{2.5cm} \textbf{DPID (T=22.100)}&\hspace *{2.1cm} \textbf{ L$_0$ (T=12.996)}\\
\end{tabular}
\begin{center}
\includegraphics[height=5.2cm,width=5.2cm]{3472764_LCI.jpg}
\includegraphics[height=5.2cm,width=5.2cm]{3472764_VPI.jpg}
\includegraphics[height=5.2cm,width=5.2cm]{3472764_VPI_filtered.jpg}
\end{center}
\begin{tabular}{lll}
\hspace *{0.6cm} \textbf{d-LCI (T=0.091)} &\hspace *{2.5cm} \textbf{d-VPI (T=0.113)}&\hspace *{2.1cm} \textbf{f-d-VPI (T=0.137)}\\
\end{tabular}
\caption{Examples of unsupervised downscaling performance results at the scale factor 8 on an input image extracted from PEXELS300 (size 1800$\times$1800). The input image is shown at the same printing size as the resulting images to facilitate the visual comparison. In f-d-VPI an average filter is employed.}
\label{fig:13}
\end{figure*}
\section{Conclusions}
\label{concl}
This paper proposes a new image scaling method, VPI, which is based on non-uniform sampling grids and employs filtered de la Vall\'ee Poussin type polynomial interpolation at Chebyshev zeros of the first kind.
The VPI method is simple to implement and highly flexible since it can be applied to resize arbitrary digital images both in upscaling and downscaling by specifying the scale factor or the desired size.
VPI depends on an additional input parameter $\theta\in [0,1]$ that, if necessary, can be suitably modulated to improve the approximation. In particular, taking $\theta=0$, VPI reduces to the LCI method, introduced by the authors in \cite{Lagrange} and based on classical Lagrange interpolation at the same nodes. Nevertheless, for any $\theta\in ]0,1]$, VPI improves on the LCI performance and proves to be more stable than the latter, due to the uniform boundedness of the Lebesgue constants corresponding to de la Vall\'ee Poussin type interpolation.
The VPI performance has been evaluated using two commonly adopted quality measures, PSNR and SSIM, while also measuring the required CPU time. Comparisons with other recent resizing methods (including those specialized in only upscaling or downscaling) have been carried out on a large number of images belonging to several commonly available datasets, characterized by different contents and sizes ranging from small to large scale. During the VPI validation procedure, the modulation of the free parameter $\theta$ has also been investigated experimentally. Further, the dependency on the input image has been considered by applying different scaling methods to the target images in the datasets in order to generate the input images.
The experimental results confirm that VPI has a competitive and satisfactory performance, with quality measures generally higher and more stable than those of the benchmark methods. Moreover, VPI is much faster than the methods specialized in only downscaling or upscaling, with a CPU time close to that required by LCI and {\rm imresize}, the optimized Matlab implementation of the bicubic interpolation method (BIC).
At a visual level, VPI captures the visual structure of the objects by preserving the salient details, the local contrast, and the luminance of the input image, with well-balanced colors and a limited presence of artifacts.
One limitation of VPI concerns the downscaling performance when HR images have high-frequency details. Indeed, in downscaling with odd scale factors, VPI produces the same LR image as LCI for any value of $\theta$. In this case, if the Nyquist limit is satisfied, we give a theoretical estimate for the MSE, which, in particular, is null if the input image (or some crucial pixels of it) is ``not corrupted''. Nevertheless, even starting from ``exact'' HR images, in downscaling VPI can suffer from aliasing problems when the frequency content of the image and the required size of the LR image are such as to violate the Nyquist--Shannon theorem. In our experiments, we report cases where aliasing does not occur and cases where it does. In the latter case, we just apply an appropriate filter to the input image before running d-VPI. However, reducing aliasing effects for d-VPI remains an open problem for further investigation.
\vspace{0.7cm}
\textbf{Funding}
The research has been accomplished within RITA (Research ITalian network on Approximation) and the UMI (Unione Matematica Italiana) research groups TAA-UMI (Approximation Theory and Applications) and AI\&ML\&MAT-UMI (Mathematics for Artificial Intelligence and Machine Learning). It has been partially supported by GNCS-INdAM and the University of Basilicata (local funds).
\textbf{Acknowledgements}
The authors thank the anonymous reviewers for their helpful remarks, which allowed us to improve the quality of the paper.
\textbf{Code and supplementary materials}
The source code and the dataset PEXELS300 are openly available at the following link:\\
https://github.com/ImgScaling/VPIscaling.
\section{Introduction} \label{sec:introduction}
Recently, several attempts to use machine learning (ML) to deal with combinatorial optimization problems (COPs) have been proposed.
Indeed, ML promises to support the design of algorithms as well as the resolution process, decreasing the need for hand-crafted and specialized solution approaches, which notoriously require high expertise and a huge time investment to develop.
The incredible advances in deep learning (DL, \citet{Goodfellow-et-al-2016}), coupled with increasingly powerful hardware, have led to very promising results for a variety of COPs (\citet{bengio2020machine, mazyavkina2020reinforcement}).
In this work, we focus on COPs arising in the area of vehicle routing. Vehicle routing problems (VRPs, \citet{VRPBOOK}) are, in fact, increasingly receiving attention in the ML community both because of their relevance in real-world applications and the computational challenges they pose (\citet{Vesselinova_2020}).
Indeed, VRPs have been the test-bed for novel neural network architectures such as pointer networks (\citet{NIPS2015_29921001}), graph embedding networks (\citet{dai2016discriminative, Khalil2017learning}) and attention-based models (\citet{kool2019attention, deudon2018learning}) achieving stunning results on a variety of problems.
Currently, most of the proposed approaches aim at learning constructive heuristics, which sequentially extend a partial solution, possibly employing additional procedures such as sampling and beam search (see, e.g., \citet{bello2017neural} and \citet{hottung2021efficient}). A few others, such as \citet{wu2020learning} and \citet{NEURIPS2019_131f383b}, focus instead on learning improvement heuristics that guide the exploration of the search space and iteratively refine an existing solution.
Despite the relevance of the problems, the increasing attention they are receiving in the ML community, and the remarkable results achieved so far, these techniques have not yet become widespread in the Operations Research (OR) community.
This may be because of their novelty and the nontrivial effort required for their adoption by the OR community, which typically has a different background.
At the same time, the computational testing provided by some of the proposed ML methods may not be very convincing with respect to the standard practices of the OR and VRP communities.
The current work focuses on this second aspect by highlighting important points to consider during the computational testing and providing guidelines and methodologies that we believe are relevant from an OR perspective.
The remainder of the paper is organized as follows. Section \ref{sec:guidelines} surveys crucial aspects of computational testing, such as the selection of proper benchmark instances and baseline algorithms, as well as techniques to properly compare different solution approaches. Section \ref{sec:examples} provides concrete examples and pointers to representative computational studies found in recent papers. Finally, concluding remarks are given in Section \ref{sec:conclusions}.
\section{Guidelines for Computational Testing} \label{sec:guidelines}
In this section, we highlight important points an OR practitioner would consider crucial when examining the computational results section of a novel approach.
\subsection{Benchmark Instances and Problem Definition}
The training phase of a ML-based approach typically requires a large number of VRP instances sharing the same characteristics.
A common way to generate these instances is by randomly defining their characteristics, sampled from arbitrarily defined distributions.
In the following, we argue that adopting a common problem description as well as a common set of benchmark instances is extremely important to correctly assess the potentialities of a novel approach.
\textit{Problem description and objectives.}
The term VRP identifies a class of problems containing an enormous number of different variants. Among the most studied ones, we have the Capacitated Vehicle Routing Problem (CVRP), the Vehicle Routing Problem with Time Windows (VRPTW), and the Vehicle Routing Problem with Pickups and Deliveries.
In the following, if not specified differently, we limit our treatment to the CVRP which is often taken as a reference problem in ML-based approaches and, despite its simple definition, is still a challenging problem for both traditional and innovative approaches.
We refer to Chapter 1 of \citet{VRPBOOK} for a thorough overview of VRPs. Specific VRPs have widely accepted formulations and precise problem definitions. For example, modern CVRP instances do not fix a priori the number of routes a solution should have, and heuristic approaches to the VRPTW typically consider a hierarchical objective: first minimize the number of vehicles and then the routing cost. It is thus clear that the specific definition of the problem has a crucial impact on the obtained results.
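As a minimal sketch of such a hierarchical comparison (our own illustration, not code from any cited solver):
\begin{verbatim}
# Lexicographic comparison for VRPTW solutions: fewer vehicles always
# wins; routing cost only breaks ties in the vehicle count.
def better(sol_a, sol_b):
    # sol = (number_of_vehicles, routing_cost)
    return sol_a < sol_b  # Python compares tuples lexicographically

print(better((9, 1450.0), (10, 1200.0)))  # True: 9 vehicles beat 10
print(better((9, 1450.0), (9, 1400.0)))   # False: same fleet, higher cost
\end{verbatim}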
\textit{Test instance representativeness.} Despite VRPs being $\mathcal{NP}$-hard, the actual challenge posed by a specific instance is highly dependent on several factors such as the customer and depot locations, the vehicle capacity, and the customer demand distribution.
The VRP community has thus identified, for each specific problem in the VRP class, a set of benchmark instances that is currently considered to be relevant for testing modern approaches.
Thus, in addition to possible instances defined by the ML community, we highlight the importance of using, whenever possible, instances derived from these widely recognized, used and studied datasets.
As an example, in \citet{UCHOA2017845}, the authors introduced the $\mathbb{X}$ instances for the CVRP, thoroughly describing their generation process.
This process can then be emulated to generate more representative (possibly smaller-sized) CVRP instances, as was done in \citet{kool2021deep} and in \citet{hottung2021efficient} (appendix). In \citet{wu2020learning}, the authors directly used a subset of the $\mathbb{X}$ instances along with distributions more commonly used in other ML works.
As an additional example, consider the VRPTW, for which the so-called Solomon instances and their extension, proposed by \citet{solomon1987} and \citet{gehring1999parallel}, respectively, are the current benchmark. Here too, \citet{solomon1987} describes the procedure used to define the time window constraints, which can thus be followed when defining new instances.
\textit{Repositories of instances.}
Together with the papers introducing or using certain sets of instances (whose authors could be contacted to retrieve them), we mention two of the most popular repositories of VRP instances. Namely, \href{http://www.vrp-rep.org/}{\citeauthor{vrprep}} collects instances for more than 50 different VRP variants, best known solution values, and references to the papers obtaining these results. In addition, \href{http://vrp.atd-lab.inf.puc-rio.br/index.php/en/}{\citeauthor{CVRPLIB}} contains instances and up-to-date best-known solutions of CVRP instances.
Finally, small instances commonly used in the past or in exact methods can be found in \href{http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/}{\citeauthor{tsplib}} and \href{http://people.brunel.ac.uk/~mastjjb/jeb/info.html}{\citeauthor{orlib}}.
\subsection{Baseline Algorithms}
The selection of a proper baseline algorithm is extremely important: failing in this task would hinder the objective evaluation of the potential of a novel approach.
Indeed, since results (computing time, solution quality, and possibly more sophisticated measures, as detailed in Section \ref{sec:comparison}) are crucial to compare different solution approaches over a common set of problems and instances, a wrong baseline may distort their interpretation, undermining the whole validation process. Although the purpose of ML-based approaches is not to outperform highly specialized solvers, but rather to propose versatile tools not requiring high levels of manual engineering, the comparison should still occur against the best performing algorithms to better comprehend the tradeoff between data-driven and ad-hoc algorithms.
\textit{Include the best available algorithms.}
Along with simple baselines and competing ML-based methods, we argue that one should consider the inclusion of the best available algorithms proposed by the OR community for each specific VRP.
The selection of baseline algorithms is often guided by the availability of free-to-use or open-source reliable software packages.
Fortunately, more and more researchers are publishing the source code of heuristic as well as exact state-of-the-art VRP solvers that can be freely used for research activities.
\paragraph{Heuristic solvers}
The widely used Google OR-Tools (\citet{ortools}) is erroneously considered by most ML papers to be among the best open-source VRP solvers (see, e.g., \citet{nazari2018reinforcement}), even though it achieves far-from-optimal results on the $\mathbb{X}$ instances of the CVRP (see \citet{vidal2020hybrid}). Much better open-source solvers for the CVRP are fortunately available. Along with LKH-3 (\citet{lkh3}), which is already widely used by the ML community for solving VRPs, we mention HGS-CVRP (\citet{vidal2020hybrid}) and FILO (\citet{filo}) as highly effective and efficient open-source heuristic solvers for the CVRP that, on the widely studied $\mathbb{X}$ instances (\citet{UCHOA2017845}), produce superior results compared with LKH-3 (see \citet{lscgh}). Finally, we mention SISR (\citet{ChristiaensJanSIbS}) which, despite its source code not yet being available, is conceptually simple and easy to implement, yet provides state-of-the-art results on a great variety of VRPs.
\paragraph{Exact solvers}
Several papers solve small instances to optimality by using general-purpose optimization solvers such as Gurobi or CPLEX. Despite the noble attempt, trying to directly solve simple compact VRP formulations with a generic branch-and-cut approach soon turns out to be an extremely challenging task.
Indeed, several papers report the ability to solve only very small instances (e.g., CVRPs with about 20 customers).
Instead, VRPSolver (\citet{Pessoa2020}), a freely available (for academic purposes) exact solver specialized for routing problems, should be considered for serious and reliable testing on VRPs. Indeed, VRPSolver combines a branch-cut-and-price algorithm with other sophisticated techniques specifically designed for VRPs, such as route enumeration and state space relaxation, and is able to consistently solve CVRP instances with up to 200 customers (a size already out of reach for most ML-based approaches proposed so far).
\subsection{Algorithms Comparison} \label{sec:comparison}
Comparing algorithms having a completely different nature is an extremely challenging task. On the one hand, traditional solution approaches proposed by the OR community are designed to handle sets of instances having a broad range of different characteristics. Moreover, these approaches are typically tuned to achieve, with a single set of parameter values, results that are on average good over all tested instances.
On the other hand, ML-based approaches generally need to treat every instance distribution separately, requiring a specific tuning and thus additional training (possibly taking up to several weeks of computing time).
In addition, traditional OR algorithms are (almost) always executed on CPUs using a single thread, while ML-based approaches naturally benefit from running on massively parallel hardware architectures such as GPUs. Finally, the programming language may also play a role. In fact, traditional OR algorithms are usually implemented in highly efficient languages such as C++, whereas ML-based approaches typically use Python that mixes slow interpreted code with efficient native libraries.
\textit{Facilitate comparisons (i.e., run all solvers using a common configuration).} In production, algorithms should obviously make full use of the best available technologies.
In fact, even traditional algorithms may contain (portions of) embarrassingly parallel code that would thus benefit from being run on multiple threads.
As an example, a common dynamic programming procedure employed in VRP exact solvers would greatly benefit from a GPU implementation compared to a traditional sequential version (\citet{boschetti2017route} report speedups of up to 40 times). Another well-known example is the parallel implementation of Branch \& Bound algorithms (see, e.g., \citet{parallelbb2006}).
Despite the clear time savings of a parallel implementation, experimental evaluation requires reducing to a minimum the factors that would render comparisons among approaches unnecessarily challenging. It is thus commonly accepted to consider single-threaded algorithms run on standard CPU architectures.
A very useful piece of information, which would promote a direct comparison of newly proposed algorithms with existing ones, is the computing time taken by the algorithm when run with the above-defined settings.
If running all the experiments both on GPU and on CPU with a single thread would be computationally prohibitive, we argue that one should consider reporting a measure of the speedup associated with the model inference when run on a GPU rather than on a CPU, together with the total algorithmic time (in which everything except the model inference is run on a CPU with a single thread) and the fraction of time spent doing inference on the GPU. This way, the total running time of the algorithm on a CPU with a single thread could be easily estimated. Another possibility is to include a rough measure of the number of CPU cores needed to match the GPU capacity, as was done in \citet{hottung2021efficient}.
\textit{Convert computing times to a common scale.} When using results published in other papers, because the source code may not be available or re-running the experiments may be too time-consuming, a common practice consists in roughly scaling the computing times to a common base by using appropriate factors.
A practice increasingly adopted in the VRP community consists in using the single-thread rating defined by \citeauthor{passmark}.
For example, given a base processor, say an Intel \href{https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E3-1245+v5+\%40+3.50GHz&id=2674}{Xeon CPU E3-1245 v5} having a single-thread rating of 2277, and a target processor, say an \href{https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core2+Duo+T5500+\%40+1.66GHz&id=922}{Intel Core2 Duo T5500} having a single-thread rating of 594, the computing time of an algorithm run on the target processor is reduced by a factor of $\approx 3.83$.
The comparison is still rough, the CPU being just one among the several components affecting overall performance. However, this is one of the possible approaches generally accepted as sound by the VRP community.
Note that the above considerations assume all algorithms have been run on an unloaded system and on a CPU using a single thread.
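As a concrete sketch of this time normalization (our own illustration, using the single-thread ratings quoted above):
\begin{verbatim}
# Normalize a published CPU time to a reference processor by means of
# PassMark single-thread ratings (values quoted in the text above).
BASE_RATING = 2277    # Intel Xeon E3-1245 v5 (reference CPU)
TARGET_RATING = 594   # Intel Core2 Duo T5500 (CPU of the other paper)

def normalize_time(seconds_on_target: float) -> float:
    # Times measured on the slower target CPU are divided by
    # 2277 / 594, i.e. by roughly 3.83.
    return seconds_on_target * (TARGET_RATING / BASE_RATING)

print(normalize_time(1000.0))  # ~260.9 s on the reference CPU
\end{verbatim}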
\textit{On the comparison with exact solvers.} Exact solvers are often used as baseline algorithms when approaching small-to-medium size instances. When reporting the computing time spent by an exact solver, especially if its results are compared with those of a heuristic algorithm, it should be considered that the former may find very good quality solutions early during the run and then spend the majority of the time proving optimality.
Thus, when reporting results obtained by an exact solver, an additional column could be added showing the computing time at which the last solution (i.e., the optimal one, if the solver is not prematurely stopped) was found.
While it is true that the solver does not have a termination criterion telling whether the solution at hand is optimal, when compared with heuristics this approach would still provide a feeling for the convergence speed of the solver.
Another possibility, which however requires a much more detailed collection and processing of data, consists in realizing a convergence profile chart (see below). We finally mention the primal integral introduced by \citet{Achterberg2012}, a measure able to take into account the overall solution process in terms of convergence towards the optimal (or best known) solution over the entire solving time.
\textit{Consider the statistical relevance of the results.}
In the past, it was common practice to use just average percentage errors with respect to the best known solution to compare the performance of different algorithms. However, more recently, the inclusion of simple statistical tests to objectively assess the differences among algorithms is increasingly becoming popular in the VRP community.
A common practice consists in using a one-tailed Wilcoxon signed-rank test (see \citet{wilcoxon}) possibly coupled with correction methods when multiple comparisons involving the same data are performed (e.g., the Bonferroni correction, see \citet{dunn}).
These tests are used to determine whether two sets of paired observations, for which no assumption can be done on their distribution, are statistically different, and thus, whether two algorithms are considered to provide equivalent results.
A common tool for performing these statistical tests is the R language (\citeauthor{rcore}), which allows one to execute them with just a few lines of code.
An example of a test application can be found in Section \ref{sec:stattest}.
\textit{Always compare averages.}
When experimenting with algorithms containing randomized components, we argue that one should make comparisons by considering the average results obtained over a reasonable number of runs. Typically, 10 or 50 runs are used; despite not being statistically significant, they allow one to qualitatively identify whether an algorithm provides stable or highly variable results.
\textit{Include charts.}
Charts allow one to visually compare several algorithms at once, thus providing a fast and effective overview of the results that, when coupled with tables and statistical tests, gives a satisfying and thorough picture of the computational studies for a set of instances.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\include{teximgs/performance-chart}
\end{subfigure}
%
\begin{subfigure}[b]{0.4\textwidth}
\include{teximgs/convergence-profile-chart}
\end{subfigure}
\caption{On the left side, the performance chart showing for algorithms A, B, C, D, and E the average normalized time $t$ and solution quality obtained in a number $n$ of runs with different seeds. Algorithms C, D, and E dominate A and B. On the right side, the search trajectory defined by the average gap found at a given time instant for algorithms X, Y and Z.}
\label{fig:charts}
\end{figure}
Among the most used charts, we mention
\begin{itemize}
\item the performance chart, relating the average normalized computing time with the average solution quality (e.g., the percentage gap with respect to a reference value, which is typically that of the optimal or best known solution) of each algorithm in the comparison, see Figure \ref{fig:charts} (left);
\item the convergence profile chart, showing for each time instant the average solution quality for the compared algorithms, see Figure \ref{fig:charts} (right).
\end{itemize}
The performance chart clearly identifies Pareto-optimal algorithms and dominated ones, whereas the convergence profile chart shows their convergence speed when executed for a specific period of time. Convergence profile charts can be used for improvement heuristics, but also for simpler constructive heuristics, provided that they include additional iterative procedures used to further improve the solution quality (e.g., the active search of \citet{bello2017neural}).
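As a minimal sketch of how a convergence profile chart can be drawn from logged incumbents (the checkpoint data below are made up for illustration):
\begin{verbatim}
import matplotlib.pyplot as plt

# Hypothetical checkpoints: for each algorithm, the average gap (%)
# of the incumbent solution recorded at fixed time instants (s).
checkpoints = {
    "X": ([1, 10, 60, 300, 600], [8.0, 3.1, 1.2, 0.6, 0.4]),
    "Y": ([1, 10, 60, 300, 600], [12.0, 5.0, 2.0, 1.1, 0.9]),
}

for name, (times, gaps) in checkpoints.items():
    plt.plot(times, gaps, marker="o", label=name)
plt.xscale("log")                 # early progress is easier to read
plt.xlabel("time (s)")
plt.ylabel("average gap (%)")
plt.legend()
plt.savefig("convergence-profile.pdf")
\end{verbatim}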
\textit{On the choice of the programming language.} (Baseline) Algorithms should be implemented efficiently. This may include using appropriate data structures as well as programming languages producing directly executable machine code. An efficient implementation of a competing algorithm should not be considered negatively.
\textit{Consider the analysis of the various algorithm components.}
Several papers include additional analysis on the components of the proposed algorithm. This may include the behavior of the algorithm when some parameters are changed as well as the contribution of individual components to the overall final results (e.g., the average improvement of different local search operators analyzed throughout the algorithm execution).
This kind of analysis is both considerably appreciated in the VRP community and extremely important to gain insights into the overall contribution and usefulness of the different components of an algorithm.
\section{Examples of Computational Studies} \label{sec:examples}
To make the above suggestions more concrete, in this section we report extracts of, and pointers to, representative computational studies found in recent papers on the CVRP.
We consider two scenarios: in the first one, we assume that all source codes are available, whereas in the second one, we assume that the source code of at least one competing algorithm is not available to the tester.
\subsection{Scenario 1: all algorithms' source codes are available}
This is the simplest and most fortunate setting. Indeed, given enough time and processing power, all algorithms can be run on exactly the same platform and for the same amount of time. In this context, the convergence profile chart perfectly shows the evolution of each algorithm, making evident its speed in reaching given quality levels.
Table \ref{table:scenario1} reports a rearranged extract of Tables 1-3 proposed in \citet{vidal2020hybrid}, comparing three of the above-mentioned algorithms (more specifically HGS-CVRP, SISR, and OR-Tools) run on a common platform for the same computing time over the same $\mathbb{X}$ instances of the CVRP. In particular, the table shows, for each competing algorithm, the average solution value (Avg) and the associated average gap (Gap), computed over a certain number of runs of the algorithm when it includes randomized components. The gap is computed with respect to a reference best known solution value (BKS) as Gap = $100 \cdot (\textrm{Avg} - \textrm{BKS}) / \textrm{BKS}$. Additional useful columns could include the worst and best gaps obtained, so as to better examine the variability of the quality of the algorithm on the dataset.
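As a small illustration of this formula (our own sketch, using two rows of Table \ref{table:scenario1}):
\begin{verbatim}
def gap(avg: float, bks: float) -> float:
    # Percentage gap of an average solution value w.r.t. the BKS.
    return 100.0 * (avg - bks) / bks

# X-n101-k25 (BKS = 27591): OR-Tools average 27977.2
print(round(gap(27977.2, 27591), 2))   # 1.4
# X-n1001-k43 (BKS = 72359): HGS-CVRP average 72748
print(round(gap(72748.0, 72359), 2))   # 0.54
\end{verbatim}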
\begin{table}
\footnotesize
\centering
\begin{tabular}{lrrrrrrrrrr}
\toprule
&&& \multicolumn{2}{c}{HGS-CVRP} && \multicolumn{2}{c}{SISR} && \multicolumn{2}{c}{OR-Tools} \\
\cmidrule{4-5}
\cmidrule{7-8}
\cmidrule{10-11}
Instance & BKS && Avg & Gap && Avg & Gap && Avg & Gap \\
\toprule
X-n101-k25 & 27591 && 27591.0 & 0.00 && 27593.3 & 0.01 && 27977.2 & 1.40\\
X-n106-k14 & 26362 && 26381.4 & 0.07 && 26380.9 & 0.07 && 26757.5 & 1.50\\
X-n110-k13 & 14971 && 14971.9 & 0.00 && 14972.1 & 0.01 && 15099.8 & 0.86\\
\multicolumn{1}{c}{\ldots}\\
X-n979-k58 & 118987 && 119247.5 & 0.22 && 119108.2 & 0.10 && 123885.2 & 4.12\\
X-n1001-k43 & 72359 && 72748 & 0.54 && 72533.1 & 0.24 && 78084.7 & 7.91\\
\midrule
Mean &&& & 0.11 && & 0.19 && & 4.01\\
\bottomrule
\end{tabular}
\caption{Minimal table showing the solution quality obtained by algorithms run for the same amount of time on a common platform.}
\label{table:scenario1}
\end{table}
Finally, we refer to Figures 3 and 4 of \citet{vidal2020hybrid} for examples of convergence profile charts for the above-mentioned algorithms.
\subsection{Scenario 2: at least one algorithm's source code is not available}
This scenario, which is unfortunately still frequent in the VRP community, makes the comparison of results much more challenging. The common case sees only tables showing the performance of a proposed algorithm over a set of benchmark instances when run on a certain platform. First of all, an assumption is required when reviewing the tables reporting the computational results. In particular, we have to assume that the results published in the paper, which are inevitably obtained with a specific tuning of parameters, are the values on which the authors wish to be compared with other approaches. We shall note that these parameters do include the termination criterion, which was thus selected as a design choice and considered to provide competitive results (otherwise a different criterion would have been selected).
Since data on the convergence of the algorithm are typically not available, the comparison can then be performed over the published results by first normalizing the computing times in the best possible way and then analyzing the bi-objective perspective provided by the performance chart shown in Figure \ref{fig:charts} (left), which displays non-dominated algorithms for a fixed configuration of their parameters.
\subsection{Statistical validation of the results}
\label{sec:stattest}
In both the above scenarios (and provided that the average solution quality obtained by an algorithm is available for each instance in the dataset), the (final) solution quality can be assessed with simple statistical tests. This procedure is especially useful when the proposed methods do not clearly dominate the others, for example because they obtain similar average percentage gaps on the considered benchmark instances.
As an example, in the following we compare HGS-CVRP, SISR, and OR-Tools on the $\mathbb{X}$ instances. Data have been taken from Tables 1 and 3 of \citet{vidal2020hybrid} and are summarized in the boxplots of Figure \ref{fig:boxplots}.
\begin{figure}[b]
\centering
\include{teximgs/boxplot-groups}
\caption{Average gaps obtained on the $\mathbb{X}$ instances by HGS-CVRP, SISR and OR-Tools. Note the different y-axis for OR-Tools. The thick line identifies the median value.}
\label{fig:boxplots}
\end{figure}
Similarly to what was done in \citet{ChristiaensJanSIbS}, we can assess the results obtained by our proposed algorithm, say HGS-CVRP, against the remaining ones, by conducting a one-tailed Wilcoxon signed-rank test in which we consider a null hypothesis $H_0$
\begin{equation*}
H_0: \Call{AvgSolCost}{HGS-CVRP} = \Call{AvgSolCost}{X},
\end{equation*}
and an alternative hypothesis $H_1$
\begin{equation*}
H_1: \Call{AvgSolCost}{HGS-CVRP} > \Call{AvgSolCost}{X},
\end{equation*}
where $X$ can be SISR and OR-Tools. A hypothesis is rejected when its $p$-value is lower than a significance level $\alpha$. In particular, we have that
\begin{itemize}
\item failing to reject $H_0$ means that the average results of the two methods are not statistically different;
\item whereas, when $H_0$ is rejected, the average results are statistically different, and the alternative hypothesis $H_1$ can be tested to find whether they are greater than those of a competing method. Rejecting $H_1$ thus implies that HGS-CVRP performs better than the competing method.
\end{itemize}
Moreover, as mentioned in Section \ref{sec:comparison}, when performing multiple comparisons involving the same data, the probability of erroneously rejecting a null hypothesis increases. To control these errors, the significance level $\alpha$ is adjusted to lower values. Bonferroni correction (\citet{dunn}) is a simple method that can be used for this purpose. In particular, given $n$ comparisons, the significance level is set to $\alpha / n$.
In the above comparison, we tested a total number of $n=2$ hypotheses, corresponding to the partitioning of the instances (1, all $\mathbb{X}$ instances together) and to the two hypotheses (2, $H_0$ and $H_1$). Assuming an initial significance level $\alpha_0 = 0.025$, the adjusted value becomes $\alpha = 0.025 / 2 = 0.0125$.
The $p$-values for our analysis, shown in Table \ref{tab:pvalues}, can be computed, for example, by using the R language.
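Equivalently, the computation can be sketched in Python with \texttt{scipy} (our own sketch; the gap vectors below are hypothetical placeholders, while the real ones contain one entry per $\mathbb{X}$ instance):
\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-instance average gaps (%), paired by instance.
hgs  = np.array([0.00, 0.07, 0.00, 0.22, 0.54])
sisr = np.array([0.01, 0.07, 0.01, 0.10, 0.24])

# Two-sided test for H0 (equal average solution quality) and
# one-sided test for H1 (HGS-CVRP costs greater than SISR's).
p_h0 = wilcoxon(hgs, sisr).pvalue
p_h1 = wilcoxon(hgs, sisr, alternative="greater").pvalue

alpha = 0.025 / 2   # Bonferroni-adjusted significance level
print(p_h0 < alpha, p_h1 < alpha)
\end{verbatim}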
\begin{table}
\centering
\begin{tabular}{l rrr}
\toprule
&& SISR & OR-Tools \\
\midrule
$H_0$ && 8.27934e-06 & 3.95591e-18 \\
$H_1$ && 4.13967e-06 & 1.97796e-18 \\
\bottomrule
\end{tabular}
\caption{$p$-values obtained when comparing HGS-CVRP with SISR and OR-Tools.}
\label{tab:pvalues}
\end{table}
For both SISR and OR-Tools, the associated $p$-value is lower than the significance level $\alpha$. We can thus reject the hypothesis $H_0$ that the average results of HGS-CVRP are similar to those of SISR and OR-Tools. Finally, by testing $H_1$ we are again able to reject the hypothesis, concluding that the HGS-CVRP average results are statistically better than those obtained by SISR and OR-Tools on the $\mathbb{X}$ instances.
\section{Conclusions} \label{sec:conclusions}
With this work, we highlighted several challenges arising when comparing traditional and machine learning-based solution approaches for vehicle routing problems. Our aim was to provide a set of reference guidelines that would help the machine learning community produce computational studies that are better appreciated by the operations research, and especially the vehicle routing, community. Finally, we conclude by referring the reader to the excellent overview on experimental analysis by \citet{Johnson1999ATG}.
\printbibliography
\end{document}
\section{Introduction}
Despite a long history, starting with the 1798 measurement by Henry Cavendish~\cite{cavendish}, our current knowledge
of the universal gravitational constant $G$ is rather poor compared to other fundamental constants. Gravity is the weakest force in nature and it is impossible to shield. Measuring $G$ is a serious precision measurement challenge. The 2018 recommended value by the Committee on Data for Science and Technology (CODATA)~\cite{codata19} has a relative uncertainty of $22 \times 10^{-6}$. Figure~\ref{fig:fig1} shows the most precise measurements conducted in the past 30 years, including both the measurements used in the 2018 CODATA evaluation and two newer measurements~\cite{Li2018}. The scatter among the values from different experiments is far beyond what one would expect based on the quoted errors.
This implies a lack of complete understanding of either the physics behind gravitation, or the systematics of these measurements, or both. General relativity, the currently accepted description of gravitation, may not be complete. A successful unification of gravitation with quantum mechanics remains elusive~\cite{grreview}.
Typical $G$ experiments measure small forces,
torques, or accelerations with a relative uncertainty of about $10^{-5}$, and it is fair to say that most people believe that the scatter in this data represents unaccounted-for systematic errors which plague the metrology of small forces. Many of these metrological issues are discussed in a recent review~\cite{Rothleitner2017}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{GMike.png}
\caption{\textit{The most precise measurements of $G$ over the last 30 years. The blue points were used to produce the 2018 CODATA recommended value. The red data points come from two more recent measurements of $G$~\cite{Li2018}.}}
\label{fig:fig1}
\end{center}
\end{figure}
Given this situation, future precision $G$ measurements will justifiably be held to a
higher standard for their analysis and quantitative characterization of systematic errors.
The procedures for correcting the raw $G$ data for apparatus calibration and systematic errors using subsidiary measurements are very specific to the particular measurement apparatus and approach. However, there are some sources of systematic uncertainty that are common to almost all precision measurements of $G$, for example the metrology of the source/test masses used in the experiments~\cite{bigGreview}. A research program which successfully addresses this issue can help improve $G$ measurements.
We are working to characterize optically transparent masses for $G$ experiments. The use of transparent source/test masses in $G$ experiments enables nondestructive, quantitative internal density gradient measurements using optical interferometry and can help prepare the way for optical metrology methods for the critical distance measurements needed in many $G$ measurements. The density variations of glass and single crystals are generally much smaller than those for metals~\cite{gillies2014} and hence they are likely to be a better choice for source/test masses for experiments which will need improved systematic errors. It is also desirable to use a transparent test mass with a large density.
A typical $G$ measurement instrument involves distances on the order of 50~cm and source mass dimensions on the order of
10~cm. The masses are usually arranged in a pattern which lowers the sensitivity of the gravitational signal to
small shifts $\delta R$ in the location of the true center of mass from the geometrical center of the masses by about two orders of magnitude. Under these typical conditions, we conclude that in order to attain a 1 ppm uncertainty in mass metrology, the internal number density gradients of the test masses must be controlled at the level of about ${1 \over N} {dN \over dx}=10^{-4}$/cm, and the absolute distance measurements must reach sub-micron precision. It is also valuable to use a test mass with a high density, to maximize the gravitational signal strength while bringing the masses as close to each other as the other experimental constraints allow.
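To make the scale of this requirement concrete, the following back-of-the-envelope sketch propagates a linear fractional density gradient through the representative geometry quoted above; the bar-shaped mass, the $1/R^{2}$ scaling, and the numerical values are illustrative assumptions rather than parameters of a specific apparatus.
\begin{verbatim}
# Center-of-mass shift of a bar of length L with a linear fractional
# density gradient g = (1/rho) d(rho)/dx: integrating x*rho(x) over
# [-L/2, L/2] gives delta_R = g * L**2 / 12.
g = 1e-4            # fractional density gradient [1/cm]
L = 10.0            # source mass dimension [cm]
R = 50.0            # typical mass separation [cm]
suppression = 1e-2  # geometric suppression of sensitivity to delta_R

delta_R = g * L**2 / 12.0                 # ~8e-4 cm
# The gravitational signal scales roughly as 1/R**2, so a shift delta_R
# changes it fractionally by about 2*delta_R/R before suppression.
frac_error = 2.0 * delta_R / R * suppression
print(f"delta_R = {delta_R:.1e} cm, fractional G error ~ {frac_error:.1e}")
# -> a few parts in 10^7, consistent with the ~1 ppm goal.
\end{verbatim}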
We have chosen to characterize density gradients in lead tungstate (PbWO$_{4}$) to the precision required for ppm-precision $G$ measurements. We are performing this characterization using both optical interferometry and neutron interferometry. This paper discusses the results of the neutron interferometric characterization.
\section{Relevant Properties of PbWO$_{4}$}
Lead tungstate is a reasonable material to choose for such a characterization. It is dense, optically transparent, nonmagnetic, and machinable to the precision required to determine $G$ and to characterize its internal density gradients by optical techniques. This material can be grown in very large optically transparent single crystals in the range of sizes needed for $G$ experiments and has been developed in high energy physics as a high Z scintillating crystal. The high density uniformity and low impurity concentrations developed to meet the technical requirements for efficient transmission of the internal scintillation light inside these crystals also match the requirements for $G$ test masses. Lead tungstate is transparent for the entire visible spectrum as shown in Fig.~\ref{fig:transp}.
These crystals are non-hygroscopic and are commercially available, for example from the Shanghai Institute of Ceramics (SIC)~\cite{NISTnonendorse}. Several tons of PbWO$_{4}$ are currently in use in nuclear and high energy physics experiments all over the world. This material is actively studied and produced in large quantities for several new detectors under construction and for R\&D on future detectors to be used at the recently-approved Electron Ion Collider to be built at Brookhaven National Lab. We therefore foresee a long-term motivation for continued R\&D on crystal size and quality from nuclear and high energy physics. Based on their scintillation characteristics, the SIC grows boules of PbWO$_{4}$ measuring 34 mm$\times$34 mm$\times$360 mm, which are then diamond cut and polished to the desired size (typically 24 mm$\times$24 mm$\times$260 mm)~\cite{sic1}. SIC has recently produced crystals of 60~mm in diameter and is capable of producing crystals with a diameter of 100~mm and length of 320~mm~\cite{sic2}. These larger sizes are very suitable for $G$ experiments~\cite{decca}.
Since impurities in PbWO$_{4}$ affect the crystal quality, degrade the optical transmittance, reduce scintillation light yield, and produce radiation damage, great effort is put into minimizing or eliminating common impurities such as Mo$^{6+}$, Fe$^{2+}$, Na$^{+}$, K$^{+}$ and Y$^{3+}$. Analysis using glow discharge mass spectroscopy (GDMS) indicates that most of the common impurities can be reduced to $<$ 1 ppm by weight~\cite{sic1}. At 32 ppm by weight, Y$^{3+}$ is the largest impurity and has a direct effect on the uniformity and scintillation properties of PbWO$_{4}$. Therefore, the distribution of Y$^{3+}$ is carefully controlled to ensure uniformity in detector grade crystals~\cite{sic1}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4.8 in]{pbtransp.png}
\caption{Transmittance of PbWO$_{4}$ as a function of the wavelength of incident light.}
\label{fig:transp}
\end{center}
\end{figure}
The mass density of PbWO$_{4}$, $\rho= 8.26$ g/cm$^{3}$, is only a factor of 2 smaller than that of tungsten, the densest material commonly used in $G$ measurements. Its transparency opens up the possibility for $G$ experiments to conduct laser interferometric measurements of its dimensions and location, thereby providing a way to cross-check coordinate measuring machine metrology and independently confirm its absolute accuracy. In addition, the existence of such a source mass material might inspire new designs of $G$ apparatus which take advantage of the possibility of in-situ optical metrology to re-optimize the apparatus design tradeoffs between systematic errors and $G$ signal size. Beyond PbWO$_{4}$, several other high density materials are also optically transparent and could be used as source/test masses; a selection of them and their key physical properties is listed in Table~\ref{tab:alt_mat}. Although not listed in the table, silicon could be another choice for a $G$ test mass material in view of the large volumes and very high crystal quality available and developed over decades for the semiconductor industry. Neutron and optical interferometric methods can be applied to all of these materials.
\begin{table}[h!]
\caption{List of non-hygroscopic, optically-transparent, high-density crystals and their physical properties.}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|} \hline
Material & PbWO$_{4}$ & CdWO$_{4}$ & LSO & LYSO & BGO \\\hline
Density [g/cm$^3$] & 8.3 & 7.9 & 7.4 & 7.3 & 7.13 \\
Atomic numbers & 82, 74, 8 & 48, 74, 8 & 71, 14, 8 & 71, 39, 14, 8 & 83, 32, 8 \\
Refractive index (light) & 2.2 & 2.2-2.3 & 1.82 & 1.82 & 2.15 \\
Thermal expansion & 8.3 (para)& 10.2 & 5 & 5 & 7 \\
coefficient(s) [10$^{-6}/^{\circ}$C] & 19.7 (perp)& & & & \\ \hline
\end{tabular}
\end{center}
\label{tab:alt_mat}
\end{table}
\section{Use of Talbot-Lau Phase Contrast Imaging for Mass Density Gradient Search}
Slow neutrons can penetrate macroscopic amounts of matter, and their coherent interactions with matter dominate the low-energy dynamics and enable various forms of interferometric measurement methods. We sought a neutron interferometric phase-sensitive measurement method which would be as sensitive as possible to internal density gradients. A neutron passing through a medium of number density $N$ and thickness $L$ will accumulate a phase shift $\Phi=Nb_{c}\lambda L$ where $b_{c}$ is the coherent neutron scattering amplitude and $\lambda$ is the neutron wavelength. The quantity of interest for our study is the fractional density gradient ${1 \over N}{dN \over dx}$. The coherent neutron scattering lengths $b_{c}$ of almost all stable nuclei are known from experiment, and almost all of them possess negligible dispersion so that their values are constant to high accuracy. The neutron wavelength $\lambda$ of slow neutrons can be determined with sufficient precision using a wide variety of methods including diffraction, neutron time-of-flight, etc. Uncertainties in $L$ measurements can be quite small.
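As a rough illustration of the magnitude of this phase, the sketch below evaluates $\Phi=Nb_{c}\lambda L$ using the PbWO$_{4}$ scattering length density derived later in this paper and an assumed path near the diagonal of a 2.3 cm crystal mounted at 45 degrees.
\begin{verbatim}
import math

# Phase accumulated crossing a slab: Phi = N * b_c * lambda * L.
Nb_c = 2.043e-6                 # scattering length density [1/angstrom^2]
wavelength = 4.4                # neutron wavelength [angstrom]
L = 2.3 * math.sqrt(2) * 1e8    # ~3.25 cm diagonal path, in angstroms

Phi = Nb_c * wavelength * L     # [rad]
print(f"Phi ~ {Phi:.0f} rad")   # ~3e3 rad -- enormous, which is why the
# measurement targets the *gradient* of Phi rather than Phi itself.
\end{verbatim}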
We therefore seek an interferometric technique which can measure a phase gradient ${ \partial \Phi \over \partial x}$. This can be done in principle using different types of phase contrast imaging techniques. Here we briefly review the operation of a specific type of shearing interferometer, a Talbot-Lau interferometer for neutron phase gradient measurement and its application to our problem. We refer readers to the description in~\cite{psinTLI} and to a recent review~\cite{Momose2020} for more details.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=11cm]{tlilayout.png}
\caption{Sketch of the Talbot-Lau phase gradient measurement setup, not to scale, showing the gratings $G_{0}$, $G_{1}$, and $G_{2}$ whose functions are described in the text. The PbWO$_{4}$ crystal is placed so that each face is at approximately 45 degrees ($\beta$) with respect to the neutron beam axis; refraction in the crystal deflects the incident beam by an angle $\alpha$. The measurement consists of acquiring phase step images with the sample in the beam and then translated out of it; the phase step images were acquired by translating $G_{1}$ through one period.}
\label{fig:tlilayout}
\end{center}
\end{figure}
The Talbot-Lau interferometer sketched in Figure~\ref{fig:tlilayout} consists of three gratings $G_{0}$, $G_{1}$, and $G_{2}$ with their associated periods $p_{i}$. $G_{0}$ is an absorbing grating whose spacing is chosen to convert an incoherent beam of incident neutrons into multiple parallel independent line sources which are mutually incoherent but possess enough transverse phase coherence for Talbot-Lau interferometry. This allows the self-images formed at $G_{2}$ described below to be overlaid constructively and thereby enables phase contrast imaging even with the types of incoherent beams available at neutron sources.
The $G_{1}$ phase modulation grating, chosen in our case to produce $\pi$ phase shifts in the transmitted neutron phase, diffracts the neutron beam and produces a near-field interference pattern (the Talbot-Lau carpet) with maxima at Talbot lengths $d_{T}=p_{1}^{2}/(8n\lambda)$, where $p_{1}$ is the period of $G_{1}$ and $n$ is an odd integer (1 in our case). An absorption grating $G_{2}$ is located at one of the Talbot carpet maxima and generates a Moire pattern that can be recorded on a 2D sensitive neutron detector. This Moire pattern has the form
$$
I(x,y)=A(x,y)+B(x,y) \cos{\left[{2\pi z \over mp_{2}}\alpha(x, y) + \Delta (x, y)\right]}
$$
where $x, y$ are the coordinates transverse to the neutron beam optical axis, which points along the direction of the separation $z$ between gratings $G_{1}$ and $G_{2}$, and $m$ is an integer. A deflection of the neutron beam direction between $G_{1}$ and $G_{2}$, which in our case can be produced by neutron refraction from the sample, induces a corresponding lateral shift of the interference pattern. The neutron refraction angle $\alpha$ can be related to the gradient of the sample phase shift $\Phi(x, y)$ normal to the neutron beam direction
$$
\alpha(x, y)= {\lambda \over 2\pi} {\partial \Phi(x,y) \over \partial x}
$$
In our experiment the square cross section PbWO$_{4}$ crystals were mounted in the beam so that their faces were oriented at 45 degrees to the neutron beam axis and so that one diagonal is centered on the beam. This geometry produces transverse phase gradients, and therefore neutron refraction angles, of equal magnitude and opposite sign on either side of the neutron beam axis. If the density of the medium is uniform, $\alpha(x, y)$ is independent of $x$. A fractional density gradient ${1 \over N}{dN \over dx}$ in the PbWO$_{4}$ would generate an additional phase gradient ${\partial \Phi(x,y) \over \partial x} = b_{c}\lambda L {dN \over dx} = \Phi \, {1 \over N}{dN \over dx}$. We can therefore analyze the phase shift image to either search for or place upper bounds on the internal density gradients of interest along a direction normal to the neutron beam.
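A rough sensitivity estimate under these assumptions (a total phase $\Phi \approx 2.9\times10^{3}$ rad through the crystal, as in the sketch above, and the $10^{-4}$/cm gradient level required for $G$ metrology) shows how small the resulting extra deflection is:
\begin{verbatim}
import math

# Extra refraction angle from a fractional density gradient g = (1/N) dN/dx:
#   alpha_extra = (lambda / 2 pi) * dPhi/dx = (lambda / 2 pi) * Phi * g
wavelength_cm = 4.4e-8   # [cm]
Phi = 2.9e3              # total phase through the crystal [rad]
g = 1e-4                 # target gradient for ppm G metrology [1/cm]

alpha_extra = wavelength_cm / (2 * math.pi) * Phi * g
print(f"alpha_extra ~ {alpha_extra:.1e} rad")   # ~2e-9 rad
\end{verbatim}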
To extract the neutron phase shift information of interest from $I(x, y)$ one can measure several phase contrast maps with a grating displaced normal to the neutron beam by different values of $x$. This procedure generates a sinusoidal phase-stepping curve at each $(x, y)$ pixel of the image from which a map of the total phase can be extracted. The contribution to the total phase shift from $\Delta (x, y)$, which comes from imperfections, alignment offsets, etc. in the gratings, can be removed by taking data with the sample absent.
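The per-pixel phase extraction can be illustrated with a minimal numpy sketch; this is a schematic single-harmonic fit, not the facility's production analysis code.
\begin{verbatim}
import numpy as np

def phase_stepping_fit(I):
    # Fit I_k = A + B*cos(2*pi*k/K + phi) per pixel from K equidistant
    # phase steps; I has shape (K, ny, nx). The first Fourier component
    # of the stepping curve yields B and phi.
    K = I.shape[0]
    k = np.arange(K).reshape(-1, 1, 1)
    c = np.sum(I * np.cos(2 * np.pi * k / K), axis=0)
    s = np.sum(I * np.sin(2 * np.pi * k / K), axis=0)
    A = I.mean(axis=0)
    B = 2.0 / K * np.hypot(c, s)
    phi = np.arctan2(-s, c)
    return A, B, phi

# Synthetic check on a 2x2 "image" with a known phase.
rng = np.random.default_rng(0)
K, phi_true = 21, 0.7
k = np.arange(K).reshape(-1, 1, 1)
data = 100 + 30 * np.cos(2 * np.pi * k / K + phi_true)
_, _, phi = phase_stepping_fit(data + rng.normal(0, 1, (K, 2, 2)))
print(phi)   # ~0.7 everywhere
\end{verbatim}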
\section{Sample Characterization}
Visual inspection reveals no evidence for localized density anomalies from internal voids or inclusions of the type which might introduce systematic errors in $G$ measurements. The detailed shapes of the four 2.3 cm $\times$ 2.3 cm area surfaces for each crystal were measured by the NIST Metrology Group on a coordinate measuring machine, with a maximum permissible error (MPE) of $0.3$ $\mu$m $+ (L/1000)$ $\mu$m where $L$ is in units of mm. The angles between the surface normals for the two pairs of crystal faces to be exposed to the neutron beam varied between $179.93$ and $179.98$ degrees.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=11cm]{PbWO4crystalsonCMM.png}
\caption{Two PbWO$_{4}$ crystals mounted in the NIST coordinate measuring machine.}
\label{fig:cmmphoto}
\end{center}
\end{figure}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=11cm]{Crystal01.jpg}
\caption{Example map of the surface flatness deviations for two opposing surfaces of one of the crystals, as measured by the NIST coordinate measuring machine. The color code on the right extends over a full range of $\pm 0.01$ mm. The size of the measured deviations on the other surfaces of the test masses is similar.}
\label{fig:flatness}
\end{center}
\end{figure}
\noindent
\section{Experimental Details}
The density gradients in two nominally identical samples of PbWO$_{4}$ crystals were measured at the NCNR Cold Neutron Imaging Facility (CNIF)~\cite{Hussey2015} at the National Institute of Standards and Technology in Gaithersburg, MD. The PbWO$_{4}$ crystals, both of nominal dimensions 2.3 cm $\times$ 2.3 cm $\times$ 12 cm, were purchased from the Shanghai Institute of Ceramics in Shanghai, China. The crystals are grown along the (001) lattice direction. PbWO$_{4}$ has a body-centered tetragonal crystallographic structure with lattice parameters $a=0.54619$ nm and $c=1.2049$ nm (ICDD card no. 19-708).
The neutron beam from the NG-6 cold neutron guide was prepared by diffraction from a double-crystal pyrolytic graphite monochromator and had a mean neutron wavelength of $4.4$ Angstroms. The Talbot-Lau interferometer consists of two vertical gratings G$_{0}$ and G$_{1}$ separated by $3.5$ m, an absorption analyzer grating G$_{2}$ located at the Talbot-Lau self-image location $d_{T}/8$ where $d_{T}$ is the Talbot length, and an imaging detector (described below) in contact with G$_{2}$. The gratings employ Gd coatings to absorb the neutrons; gadolinium is used for this purpose in view of its extremely high neutron absorption cross section. The initial absorption grating G$_{0}$ consists of 5 $\mu$m thick Gd lines on a 774 $\mu$m period with a duty cycle of 60 \%. G$_{1}$ is a phase modulation grating with 32 $\mu$m deep trenches etched in Si with a period of 7.94 $\mu$m and 50 \% duty cycle. G$_{2}$ is an absorbing analyzer grating with 4 $\mu$m period and 3 $\mu$m high Gd lines. It is placed about 18.1 mm downstream of G$_{1}$, corresponding to the $1/8^{th}$ Talbot distance ($(7.94~\mu\mathrm{m})^{2}/(8 \times 0.44~\mathrm{nm}) \approx 17.9$ mm), in close proximity to the scintillator screen. The gratings were made at the NIST Center for Nanoscale Science and Technology nanofabrication facility and are described in the literature~\cite{Lee2009}. Gratings G$_{0}$ and G$_{1}$ are mounted on 6-axis manipulation stages to enable alignment with the neutron optical axis. The scintillator is a 150 $\mu$m thick LiF:ZnS screen purchased from Applied Scintillation Technologies (now Scintacor). The camera is an Andor NEO sCMOS with 6.5 $\mu$m pixel pitch, viewing the scintillator via a Nikon 50 mm f/1.2 lens for an effective pixel pitch of about 51.35~$\mu$m.
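As a consistency check on this geometry, the first fractional Talbot distance can be recomputed directly from the grating parameters (a sketch; the small difference from the quoted 18.1 mm placement is within the rounding of the inputs):
\begin{verbatim}
# d_T/8 = p1**2 / (8 * lambda) for a pi-shifting grating of period p1.
p1 = 7.94e-6     # G1 period [m]
lam = 4.4e-10    # mean neutron wavelength [m]
print(f"{p1**2 / (8 * lam) * 1e3:.1f} mm")   # ~17.9 mm
\end{verbatim}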
The phase gradient images were obtained by stepping G$_{0}$ through one grating period in 21 equidistant steps in the $x$ direction. The sample was alternately translated into and out of the neutron beam. The median of three images was taken to remove spurious background events induced by cosmic rays and by ambient neutron-capture gamma rays in the neutron guide hall interacting with the camera and the zinc sulfide scintillator. After background subtraction, the amplitude and phase of the signals in each pixel of the image were fit using an analysis procedure described in~\cite{butler2014} and implemented at CNIF in the Matlab analysis environment.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=12cm]{IMG-0147.jpg}
\caption{One of the PbWO$_{4}$ crystals mounted in the NIST CNIF beamline. The analyzer grating G$_{2}$ is visible.}
\label{fig:beamline}
\end{center}
\end{figure}
For the $i^{th}$ measurement sequence the phase shift is calculated as
$$
\Delta \Phi_{i} = \Phi_{i} - \frac{1}{2} \left( \Phi_{{\mbox{open}}_{i-1}} + \Phi_{{\mbox{open}}_{i+1}} \right)
$$
\noindent
where $\Phi_{i}$ is the phase shift of the sample for the $i^{th}$ sequence, $\Phi_{\mbox{open}_{i-1}}$ and $\Phi_{\mbox{open}_{i+1}}$ are the phase shifts of the $(i-1)^{th}$ and $(i+1)^{th}$ sequences when the sample was out of the beam. The phase gradient for the sample is found by averaging over the individual calculated gradients.
The fractional uncertainty $\delta (\Phi)$ depends on the number of neutrons registered in the detector, which in turn is the product of the neutron fluence rate, the number of phase steps, the height of the region of interest (ROI), the sampling time, and the area of a single pixel. The values of these critical parameters of the measurement are listed in Table~\ref{tab:params}.
\begin{table}[h!]
\caption{List of critical parameters}
\begin{center}
\begin{tabular}{|l|c|} \hline
Parameter & Value \\ \hline
Pixel pitch & 51.48 $\mu$m \\
Height of ROI (Sample 1) & 5.148~cm (1000 $\times$ Pixel pitch) \\
Height of ROI (Sample 2) & 5.122~cm (995 $\times$ Pixel pitch) \\
Neutron fluence rate & 10$^{6}$~cm$^{-2}$~s$^{-1}$ \\
Number of phase steps & 21 \\
Sampling time & 45 s \\
Density & 8.30 g/cm$^{3}$\\
Wavelength ($\lambda$) & 4.4~\AA \\
Talbot distance (d) & 7.08 $\times$ 10$^{8} \AA$ \\
Grating period (p$_2$) & 4.0 $\times$ 10$^{4} \AA$ \\
Inclination ($\beta$) & 45$^{\circ}$ \\
Talbot angle ($\theta_{T}$), 1 wedge & 0.7 rad \\\hline
\end{tabular}
\end{center}
\label{tab:params}
\end{table}
With these parameters the number of detected neutrons is $N=1.3 \times 10^{5}$ for sample 1 and $1.3 \times 10^{5}$ for sample 2. The relative uncertainty is therefore $\delta \phi = 2.8 \times 10^{-3}\,\phi$ for both samples. The average phase gradient and the average phase gradient uncertainty are calculated as
$$\phi_{AV}^{j} = \frac{\sum_{i=1}^{n} \phi_{i}^{j}/(\Delta \phi_{i}^{j})^{2}}{\sum_{i=1}^{n} 1/(\Delta \phi_{i}^{j})^{2}}; \hspace{5ex} \Delta \phi_{AV}^{j} = \sqrt{ \frac{1}{\sum_{i=1}^{n}\frac{1}{(\Delta \phi_{i}^{j})^{2}}}},$$
where $n$ is the number of measurements to be averaged and $j$ indexes the pixels.
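This is the standard inverse-variance weighted mean; a minimal numpy sketch with placeholder values is:
\begin{verbatim}
import numpy as np

def weighted_mean(phi, dphi):
    # Inverse-variance weighted mean of n measurements (axis 0),
    # per pixel, and its propagated uncertainty.
    w = 1.0 / dphi**2
    phi_av = np.sum(w * phi, axis=0) / np.sum(w, axis=0)
    dphi_av = np.sqrt(1.0 / np.sum(w, axis=0))
    return phi_av, dphi_av

phi  = np.array([1.00, 1.02, 0.98])   # illustrative values
dphi = np.array([0.05, 0.04, 0.06])
print(weighted_mean(phi, dphi))
\end{verbatim}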
Using the parameters in Table~\ref{tab:params} we can calculate the scattering length density as:
$$
Nb_{c} =\frac{p_{2} \theta_{T}}{\lambda^{2} d \tan(\beta)}=2.043 \times 10^{-6} \AA^{-2},
$$
and the inclination angle of the sample surface from the normal to the neutron beam calculated from the observed phase shift is:
$$
2\theta_{T} =\frac{Nb_{c} \lambda^2 d \tan(\beta)}{p_{2}}= 1.4~\mathrm{rad}.
$$
This angle is exactly twice the value in Table~\ref{tab:params} for the case of a neutron beam incident on a single wedge, since the square cross section of the PbWO$_{4}$ sample, when rotated by 45 degrees, presents two wedge surfaces at $\pm 45$ degrees to the neutron beam which both refract the beam in the same direction.
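The arithmetic behind these two expressions, together with the physical refraction angle they imply, can be checked directly from the parameters in Table~\ref{tab:params}; the sketch below assumes Moire order $m=1$.
\begin{verbatim}
import math

lam = 4.4           # neutron wavelength [angstrom]
d = 7.08e8          # Talbot distance [angstrom]
p2 = 4.0e4          # analyzer grating period [angstrom]
beta = math.radians(45.0)
theta_T = 0.7       # Talbot angle for one wedge [rad]

Nb_c = p2 * theta_T / (lam**2 * d * math.tan(beta))
print(f"Nb_c = {Nb_c:.3e} A^-2")    # ~2.043e-6, as quoted

# Beam deflection implied by theta_T (pattern phase = 2*pi*d*alpha/p2):
alpha = theta_T * p2 / (2 * math.pi * d)
print(f"alpha = {alpha:.1e} rad")   # ~6e-6 rad refraction per wedge
\end{verbatim}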
\section{Analysis}
Figures~\ref{fig:averagephasegradient} and \ref{fig:averagephasegradientplus} show the averaged phase gradients for one of the PbWO$_{4}$ samples. The opposite signs for ${\partial \Phi \over \partial x}$ on either side of the image are exactly as expected for the 45 degree-inclined square cross section sample employed. These images resolve a very small but nonzero transverse gradient in ${\partial \Phi \over \partial x}$ with the same magnitude and sign on both sides. For $0.4 < x < 1.2$ one calculates a slope of the phase gradient $d/dx[{\partial \Phi \over \partial x}]=1.9 \times 10^{-3}$ rad/cm$^{2}$. Using the relation $\Phi=Nb_{c}\lambda L$, differentiating twice, and neglecting terms involving second derivatives of $N$ and $L$, one picks out the term $2b_{c}\lambda {\partial L \over \partial x}{\partial N \over \partial x}$ of interest in this work. The slope of this phase gradient therefore places an upper bound on the fractional density gradient in the PbWO$_{4}$ of ${1\over N}{dN \over dx}<0.5 \times 10^{-6}$ cm$^{-1}$. The value on the other side of the image is similar. This value is about two orders of magnitude smaller than required for future $G$ measurements.
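A sketch of the unit conversion behind this bound, assuming $dL/dx = 2$ for the double-wedge geometry described above:
\begin{verbatim}
# (1/N) dN/dx = (d2Phi/dx2) / (2 * N*b_c * lambda * dL/dx)
Nb_c = 2.043e-6 * 1e16   # scattering length density [1/cm^2]
lam = 4.4e-8             # neutron wavelength [cm]
dLdx = 2.0               # path-length change per unit x (45-deg geometry)
slope = 1.9e-3           # measured d/dx of the phase gradient [rad/cm^2]

frac_gradient = slope / (2 * Nb_c * lam * dLdx)
print(f"(1/N) dN/dx < {frac_gradient:.1e} /cm")   # ~5e-7 /cm
\end{verbatim}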
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{Ave_phase_gradient.pdf}
\caption{Phase gradient image of the PbWO$_{4}$ sample measured by the Talbot-Lau neutron interferometer.}
\label{fig:averagephasegradient}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{Ave_phase_gradient_positive_region.pdf}
\caption{Closeup view of the phase gradient image of the PbWO$_{4}$ sample measured by the Talbot-Lau neutron interferometer. A small but nonzero change in the phase gradient is visible.}
\label{fig:averagephasegradientplus}
\end{center}
\end{figure}
\section{Conclusions and Future Work}
We have used a neutron Talbot-Lau interferometer to place an upper bound on the internal density gradient in macroscopic crystals of PbWO$_{4}$ of ${1 \over N}{dN \over dx} < 0.5 \times 10^{-6}$ cm$^{-1}$. This density gradient bound is consistent with the order-of-magnitude one might expect from the level of impurities in commercially-available PbWO$_{4}$ crystals. The size of the phase gradient is also consistent with the NIST coordinate measuring machine measurements of the small deviations of the shape of the crystals away from our assumed geometry. This density gradient value is about two orders of magnitude smaller than what is needed in typical macroscopic mechanical apparatus of the type often used to measure $G$.
The two PbWO$_{4}$ crystals used in the neutron interferometer studies are currently undergoing testing using a newly constructed laser interferometer. The density gradient will be quantified in terms of the variation in the interference fringes as a laser beam scans the crystals. The orientation of the crystal in the optical interferometer will be kept similar to the orientation in the neutron measurement. If the neutron result is confirmed by the optical interferometer measurements in progress, we will conclude that PbWO$_{4}$ is a strong candidate for use as a test mass in future $G$ measurements.
The same measurement methods used in this work would also work for many other candidate $G$ test mass materials. The neutron measurements are of course not restricted to optically transparent materials and could also be used to nondestructively inspect opaque dense test masses. If one wanted an independent confirmation of the results in this case, one would need to employ another probe which could penetrate the required thickness of material. This can be done using traditional gamma ray transmission radiography. A gamma ray radiography facility at Los Alamos can routinely penetrate several centimeters of dense material and resolve internal voids at the millimeter scale~\cite{Espy}. It has also recently been shown that an epithermal neutron beamline imaging facility at Los Alamos can simultaneously perform gamma imaging with comparable spatial resolution. At a spallation neutron source such imaging beams possess an intense gamma flash when the GeV energy proton beam strikes a high Z target to liberate neutrons by spallation, and the imaging detectors possess both neutron and gamma sensitivity. All three of these methods (neutron phase contrast, optical phase contrast, and gamma transmission imaging) are nondestructive and can therefore in principle be applied to the same $G$ test mass used in the actual experimental measurements. We are therefore encouraged to think that we are close to establishing a method which can address one of the common potential systematic error sources in measurements of $G$.
\section{Acknowledgements}
We would like to thank D. Newell and S. Schlamminger of the National Institute of Standards and Technology in Gaithersburg, MD for their help in arranging the coordinate measuring machine work. K. T. A. Assumin-Gyimah, D. Dutta, and W. M. Snow acknowledge support from US National Science Foundation grant PHY-1707988. W. M. Snow acknowledges support from US National Science Foundation grants PHY-1614545 and PHY-1914405 and the Indiana University Center for Spacetime Symmetries. C. Langlois was supported by the NSF Research Experiences for Undergraduates program.
\section{Introduction} \label{sec:intro}
It has long been known that potentially all early-type galaxies (ETGs)
contain excess flux at short (vacuum ultraviolet)
wavelengths when compared to expectations from their
old, metal-rich stellar populations extrapolated to
this wavelength regime (e.g., \citealt{Code1979,
Bertola1982,Burstein1988} -- see also
\citealt{OConnell1999,yi2010} for reviews and discussions). Typical observed $FUV-V$ colors for local ETGs are
2--3 magnitudes bluer than expected from spectrum
synthesis models (e.g., \citealt{conroy2009}) that,
with standard initial conditions (solar metallicity,
formation redshift of 4, star-formation e-folding
timescale of 0.3 Gyr\footnote{The star-formation rate decays by a factor of $e$ during this timescale.}) reproduce the observed optical and infrared colors of ETGs and their evolution even at redshift $\sim2$.
There is now a broad consensus that
hot (blue) horizontal branch (HB) stars are
responsible for producing the excess UV flux
\citep{Greggio1990,Dorman1993,Dorman1995}. These stars
have been directly identified as the sources of the
UV light in the nearby bulges of M31 and M32
\citep{Brown1998,tbrown2000b}. \cite{Rosenfield2012} find that the vast majority of the UV light in the bulge of M31 can only be produced by hot HB stars (unresolved in their imaging). Since the bulge of M31 is
both old \citep{Saglia2010} and metal-rich
\citep{Sarajedini2005}, it provides a good
counterpart to the stellar populations
of more distant ETGs. Contributions from sources other than blue HB stars are not fully consistent with observational evidence from local galaxies. Post-AGB (PAGB) stars \citep{Lee1999,Werle2020} are absent from the UV color-magnitude diagram of M32 \citep{tbrown2000b,tbrown2004} and provide only a relatively small ($\sim 20\%$ -- \citealt{Rosenfield2012}) fraction of the UV flux in the M31 bulge; those found are actually descendants of the blue HB population producing the bulk of the UV flux \citep{brown1997,Rosenfield2012}. $FUV$ spectra of nearby ETGs from {\it Astro HUT} \citep{Ferguson1993,brown1997} lack the broad absorption features that would be expected from a population of intermediate luminosity white dwarfs \citep{Werle2020}.
Young stars \citep{Vazdekis2016,Rusinol2019,Werle2020} cannot easily account for the UV excess light either. No star hotter than B1V is observed in M31's bulge \citep{OConnell1992,Rosenfield2012}. Images of several ETGs from {\it Astro UIT} \citep{Stecher1997} exhibit none of the clumpiness usually associated with star formation elsewhere. $FUV$ spectra of six nearby ETGs from \cite{brown1997} and spectral energy distributions covering 1500--3000\AA\ for several dozen ETGs in Coma and Abell 1689 \citep{ali2018a,ali2018b} are consistent with a single blackbody, unlike the flux distribution produced by young stellar populations with normal initial mass functions. Finally, estimates of the star formation rate in ETGs from \cite{Hakobyan2012} and \cite{Sedgwick2021} using Type II supernovae imply much lower star formation rates ($\sim 0.01$ to $0.1$ $M_{\odot}$ yr$^{-1}$) than needed to account for the observed UV flux; furthermore, star formation tends to occur among lower mass ETGs, whereas the UV upturn is stronger in more massive galaxies.
While neither of the above contributions to the UV upturn flux can be conclusively ruled out from current data (much of it deriving from nearly 30-year-old observations from space telescopes flown on the Space Shuttle), blue HB stars appear to
be the most likely contributor to the bulk
of the UV flux, at least for nearby galaxies. A fundamental assumption of this (and other) papers is that the redshift evolution of the UV upturn is due to one of these sources at all epochs (which of course reflects the comparatively poor spatial and spectral information available, especially for high redshift galaxies, which are extremely faint in the UV), and that such sources should also have local counterparts in the Milky Way (e.g., blue He-rich stars in globular clusters, sdB binaries in the field, etc.).
However, standard stellar
evolution models do not produce blue HB stars for
ages and metallicities typical of ETGs (e.g.,
\citealt{Catelan2009}). Blue HB stars are naturally produced by low
metallicity stellar populations (\citealt{Park1997} -- e.g., as in metal-poor globular clusters in our Galaxy). However, if
such stars existed in sufficient numbers to account
for the observed far-UV flux in ETGs, the optical
colors and spectra of these galaxies would look dramatically different.
High metallicity stars (of standard composition), on the other hand, cannot evolve to the blue HB within cosmological timescales unless they lose sufficient envelope mass, exposing the hotter inner core, during their first ascent of the red giant branch prior to the helium flash \citep{yi1997}. This is ruled out by observations showing no evidence for extra mass loss in local open and globular clusters \citep{Miglio2012,Zijlstra2015,Williams2018} over a range of 3 dex in metal abundance\footnote{e.g., \cite{Percival2011} postulate this mechanism to force the existence of blue HB stars in metal-rich isochrones.}.
Mass loss during
the evolution of close binaries \citep{Han2007,Hernandez2014},
losing their envelopes by angular momentum transfer
as one star evolves off the main sequence and expands,
has also been proposed as a possible mechanism, but
such stars are rare in the globular clusters of our
Galaxy \citep{MoniBidin2009,Kamann2020} and in the bulge
\citep{Badenes2018}. As already noted by \cite{Smith2012}, if such binaries existed, they would have to be systematically more frequent and tighter as a function of galaxy mass and metallicity to account for the observed trends of the UV upturn color with galaxy mass and metallicity \citep{Burstein1988,ali2018a}. They would also have to be tighter and more frequent as a function of galactocentric radius to explain the radial color gradients in the UV upturn color and their correlation with radial gradients in $Mg_2$ strength and overall metallicity as observed by \cite{carter2011} and \cite{jeong2012}. The $FUV-V$ color of cluster ETGs at $z > 0.6$ \citep{ali2018c} and the apparent decrease in
the fraction of blue HB stars at $0.6 < z < 1.0$ as inferred from mid-UV spectral indices in \cite{lecras2016}, are not well
reproduced by this model, where the binary contribution to the UV colors is instead nearly constant after the first Gyr.
However, stars enriched in helium can evolve to the blue HB even at high metallicities (\citealt{lee2005b}; \citealt{chung2011,chung2017}); the extra helium causes faster evolution of stars by increasing their mean molecular weight, which in turn results in smaller mass at the He-burning stage at a given age. Such He-rich stars are observed in the Milky Way globular clusters, where they produce anomalously blue HBs (\citealt{piotto2007,Gratton2012}) and are directly associated with the multiple stellar populations observed in such systems (see review by \citealt{bastian2018}). \cite{peacock2017} found evidence for hot HB stars in M87's metal rich globular clusters while \cite{Goudfrooij2018} suggests that the disintegration of large numbers of such clusters may supply the UV upturn stars in galaxies.
Each of the above models produces different scenarios
for the evolution of the UV upturn color with redshift
\citep{yi2010}. \cite{Brown1998b,Brown2000,Brown2003} obtained a UV color by differencing two STIS filters for a few bright galaxies in clusters at $z=0.33, 0.37$ and $0.55$; these data are consistent with mild or no evolution of the UV upturn over this redshift range. \cite{Ree2007} used bright ellipticals from GALEX out to $z=0.2$ and again find little evidence for any significant change in the UV upturn color. \cite{Donahue2010} and \cite{Loubser2011} study small samples of brightest cluster galaxies (BCGs) at moderate redshifts ($z < 0.2$) and also find no or modest evolution. \cite{boissier2018} use a large sample of BCGs in the background of the Virgo cluster and detect the upturn in these objects out to $z=0.35$ (showing no clear signs of evolution in their $FUV-NUV$ color), with an excess of upturn sources found at $z\sim0.25$. In a follow-up study of GAMA galaxies, \cite{Dantas2018} suggest an overabundance of upturn carriers at $z\sim0.25$, but with no clear statistical evidence for trends beyond $z=0.25$. The evolution of the fraction of galaxies carrying the UV upturn beyond this redshift remains to be probed. These data are in mild disagreement with the metal-poor and metal-rich HB scenarios (with no extra He) but do not provide strong constraints on either of the models discussed above. Most of these studies, however, have limited redshift range and/or target only a small number of very bright galaxies. We \citep{ali2018a,ali2018b,ali2018c} have used archival
data from the Hubble Space Telescope (HST) to measure
the evolution of the UV upturn out to $z=0.7$ and for
galaxies reaching to or below the $M^*$ point in the
luminosity function (i.e., the more normal population, as opposed to the brighter cluster galaxies
used in most previous studies as cited). Our data
show that there is no significant evolution in the
color and scatter of the UV upturn out to $z=0.55$,
but there is clear evidence that at $z=0.7$ the
$FUV-V$ color has become significantly redder.
Although only a lower (blue) limit to the mean UV
upturn color of red sequence ETGs at $z=0.7$ could be
derived, this is $\sim 1.2$ mag. redder (at the 3$\sigma$
level) than at lower redshifts \citep{ali2018c}. This observation is best explained by a minority population of He-rich stars; none of the other models discussed earlier for the origin of the UV upturn can reproduce this pattern of evolution. It also constrains the helium abundance of these stars to $Y > 0.42$, nearly twice the cosmological value from Big Bang nucleosynthesis. Similar evolutionary trends are also observed for group/field ETGs at $z\sim 0.05$ \citep{phillipps2020} and to $z=0.6$ \citep{Atlee2009} and $z=1$ \citep{lecras2016}.
A testable prediction of this model is that (a)
$FUV-V$ evolves rapidly to the red above a redshift
specified by the epoch of galaxy formation and the
helium abundance, and (b) the scatter in this color
decreases as the upturn fades to match the small
scatter in the optical/infrared colors of ETGs (i.e.,
the extra component due to the anomalous blue HB
disappears). However, this generally requires
deep observations in the vacuum UV that are hard to
obtain. While not ideal, near-UV colors (around 2300\AA)
are also a suitable proxy for the UV upturn, as the UV
sources contribute more than $3/4$ of the total flux in
this region of the spectrum \citep{ali2019,phillipps2020}. The UV
upturn `leaks' out to nearly 3000\AA\ and the older
stellar populations of normal composition contribute
little to this regime (e.g., see
\citealt{ali2018a,ali2018b}), although the
sensitivity to evolution is lower at increasing
wavelength.
In this paper we extend the cluster-wide analysis of
the UV upturn to red sequence galaxies of 12 clusters,
reaching out to $z\sim1$, the highest redshift allowed by currently available data at which it is possible to reach down to and beyond the $M^*$ point\footnote{The luminosity function of galaxies is well described by the Schechter function $\Phi(M)\,dM \propto 10^{-0.4(M-M^{*})(\alpha+1)} \exp\!\left(-10^{-0.4(M-M^{*})}\right)dM$, where $M^*$ is the characteristic magnitude of bright galaxies and $\alpha$ the slope of the faint end.}. At this redshift we explore
an epoch in which the hot horizontal branch stars are
no longer expected to exist, as the low mass stars in
ETGs simply have not had enough time
to occupy the late stages (core helium burning) of
stellar evolution. Typical evolutionary timescales for stars to reach the HB are $>6.5$ Gyr. Given $z=0.6$ as the point at which they appear to `switch on' the UV upturn, one infers present-day ages of around 12 Gyr, i.e., formation at $z\sim3$--$4$, similar to globular clusters. The results for cluster galaxies
between $0.3<z<1$ are compared to one another and to
predictions of theoretical models of UV upturn. We also measure
the scatter in the UV colors of cluster galaxies at
increasing redshift and compare them to theoretical
expectations. The aforementioned analyses will allow
us to place stringent constraints on any helium
enhancement present in these galaxies, their formation
epochs and probe the evolution of the upturn
phenomenon at large.
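The timescale arithmetic above is easily checked under the adopted cosmology; the sketch below uses astropy, with the $>6.5$ Gyr HB evolutionary timescale quoted earlier as input.
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=67, Om0=0.3)

# Age of the universe when the upturn appears to switch on (z ~ 0.6),
# minus the >6.5 Gyr needed for low-mass stars to reach the HB, gives
# the latest epoch at which those stars can have formed.
age_at_onset = cosmo.age(0.6)             # ~8 Gyr
formation_age = age_at_onset - 6.5 * u.Gyr
print(z_at_value(cosmo.age, formation_age))   # z_form ~ 4
\end{verbatim}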
We describe in section 2 our dataset and present
results in section 3. In section 4 we discuss our
results in the light of helium-rich models and
theories of galaxy formation and evolution. We use
the conventional cosmological parameters for this
analysis H$_0 = 67$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ (\citealt{Planck2020}). All magnitudes
quoted are in the AB system.
\section{Data and Photometry}
\begin{deluxetable*}{ccccccc}
\tablenum{1}
\tablecaption{Summary of Observational Data\label{table1}}
\tablewidth{0pt}
\tablehead{
\colhead{Cluster} & \colhead{Redshift} & \colhead{Filters} & \colhead{Rest-Frame Central} \vspace{-0.2cm} & \colhead{Total Exposure} & \colhead{Proposal ID} & \colhead{PI} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{Wavelength (\AA)} & \colhead{Time (ks)} & \colhead{} & \colhead{} \\
}
\startdata
Abell 2744 & 0.31 & F336W & 2569 & 27.7 & 13389 & Siana \\
& & F435W & 3326 & 45.7 & 13495 & Lotz \tablenotemark{1} \\
& & F606W & 4633 & 23.6 & 13495 & Lotz \tablenotemark{1}\\
& & F814W & 6223 & 104.3 & 13495 & Lotz \tablenotemark{1}\\
Abell S1063 & 0.35 & F336W & 2493 & 22.6 & 14209 & Siana \\
& & F435W & 3227 & 46.4 & 14037 & Lotz \tablenotemark{1}\\
& & F606W & 4496 & 25.8 & 14037 & Lotz \tablenotemark{1}\\
& & F814W & 6039 & 120.2 & 14037 & Lotz \tablenotemark{1}\\
Abell 370 & 0.38 & F336W & 2446 & 22.2 & 14209 & Siana \\
& & F435W & 3164 & 51.4 & 14038 & Lotz \tablenotemark{1}\\
& & F606W & 4407 & 25.3 & 14038 & Lotz \tablenotemark{1}\\
& & F814W & 5920 & 130.6 & 14038 & Lotz \tablenotemark{1}\\
MACS J0416$-$2403 & 0.40 & F336W & 2407 & 22.2 & 14209 & Siana \\
& & F435W & 3116 & 54.5 & 13496 & Lotz \tablenotemark{1}\\
& & F606W & 4341 & 33.5 & 13496 & Lotz \tablenotemark{1}\\
& & F814W & 5831 & 129.9 & 13496 & Lotz \tablenotemark{1}\\
MACS J0717.5+3745 & 0.54 & F390W & 2516 & 4.9 & 12103 & Postman \tablenotemark{2}\\
& & F555W & 3580 & 8.9 & 9722 & Ebeling \\
& & F606W & 3910 & 27.0 & 13498 & Lotz \tablenotemark{1}\\
& & F814W & 5252 & 114.6 & 13498 & Lotz \tablenotemark{1}\\
MACS J1149.5+2223 & 0.54 & F390W & 2516 & 4.9 & 12068 & Postman \tablenotemark{2}\\
& & F555W & 3580 & 9.0 & 9722 & Ebeling \\
& & F606W & 3910 & 24.8 & 13504 & Lotz \tablenotemark{1}\\
& & F814W & 5252 & 104.2 & 13504 & Lotz \tablenotemark{1}\\
MACS J2129.4$-$0741 & 0.59 & F390W & 2453 & 4.6 & 10493 & Gal-Yam \\
& & F555W & 3491 & 8.9 & 9722 & Ebeling \\
& & F814W & 5119 & 13.4 & 10493 & Gal-Yam \\
& & F125W & 7862 & 2.4 & 12100 & Postman \tablenotemark{2}\\
& & F160W & 10063 & 5.0 & 12100 & Postman \tablenotemark{2}\\
SDSS 1004+4112 & 0.68 & F435W & 2589 & 13.4 & 10509 & Kochanek \\
& & F555W & 3304 & 8.0 & 10509 & Kochanek \\
& & F814W & 4845 & 5.4 & 10509 & Kochanek \\
MACS J0744.8+3927 & 0.69 & F435W & 2574 & 4.0 & 12067 & Postman \tablenotemark{2}\\
& & F555W & 3284 & 17.8 & 12067 & Postman \tablenotemark{2}\\
& & F814W & 4817 & 25.8 & 10493 & Gal-Yam \\
& & F125W & 7396 & 2.5 & 12067 & Postman \tablenotemark{2}\\
& & F160W & 9467 & 3.1 & 12067 & Postman \tablenotemark{2}\\
\enddata
\tablenotetext{1}{\cite{lotz2017}}
\tablenotetext{2}{\cite{postman2012}}
\end{deluxetable*}
\begin{deluxetable*}{ccccccc}
\tablenum{1}
\tablecaption{Summary of Observational Data (Continued) }
\tablewidth{0pt}
\tablehead{
\colhead{Cluster} & \colhead{Redshift} & \colhead{Filters} & \colhead{Rest-Frame Central} \vspace{-0.2cm} & \colhead{Total Exposure} & \colhead{Proposal ID} & \colhead{PI} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{Wavelength (\AA)} & \colhead{Time (ks)} & \colhead{} & \colhead{} \\
}
\startdata
RCS 2327-0204 & 0.70 & F435W & 2559 & 4.2 & 10846 & Gladders \\
& & F814W & 4788 & 5.3 & 10846 & Gladders \\
& & F125W & 7353 & 4.6 & 13177 & Bradac \tablenotemark{3}\\
& & F160W & 9412 & 4.6 & 13177 & Bradac \tablenotemark{3}\\
CL 1226.9+3332 & 0.89 & F475W & 2513 & 4.4 & 12791 & Postman \tablenotemark{2}\\
& & F606W & 3206 & 32.0 & 9033 & Ebeling \\
& & F125W & 6614 & 2.5 & 12791 & Postman \tablenotemark{2}\\
& & F160W & 7407 & 2.3 & 12791 & Postman \tablenotemark{2}\\
SPT-CL 2011$-$5228 & 0.96 & F475W & 2423 & 5.4 & 14630 & Collett \\
& & F606W & 3092 & 5.4 & 14630 & Collett \\
& & F125W & 6378 & 1.4 & 14630 & Collett \\
& & F140W & 7143 & 1.4 & 14630 & Collett \\
\enddata
\tablenotetext{2}{\cite{postman2012}}
\tablenotetext{3}{\cite{bradac2014}}
\end{deluxetable*}
In order to perform a cluster-wide analysis of the upturn in the red sequence population (and not just of the brighter luminous red galaxies - LRGs; e.g. \citealt{lecras2016}, or brightest cluster galaxies - BCGs; e.g. \citealt{Brown1998b,brown2000a,Brown2003}) at high redshifts, one requires rest-frame UV data of sufficient depth below $\sim$2500\AA\ down to the Lyman limit, the wavelength regime most sensitive to the upturn. At $z < 1$ these bandpasses still reside in the vacuum ultraviolet and need deep space-based observations. In this paper we measure the evolution
of the UV upturn flux at $\sim 2400$ \AA, where the
main contribution to the observed flux is still
dominated by the UV upturn sources (from the full
UV spectral energy distributions we have derived in
our previous work -- \citealt{ali2018a,ali2018b,ali2019,phillipps2020}).
\cite{lecras2016} carry out a similar measurement
using a series of spectral indices at rest-frame
wavelengths $\sim 2000 - 3000$ \AA, i.e., within the
near ultraviolet regime we probe here.
We have identified 12 clusters in the Hubble Legacy
Archive with data in filters $F336W$, $F390W$, $F435W$
and $F475W$ probing a rest-frame bandpass centered at
$\sim 2400$\AA\ out to $z \sim 1$. The clusters
are identified in Table~\ref{table1}, where we
detail all filters used for each cluster, their rest-frame central wavelengths, the respective programs and the relative exposure times. The clusters cover the redshift range from $z=0.31$ (Abell 2744) to $z=0.96$ (CL2011). All images were retrieved as drizzled and fully processed data on which photometry could be performed. In the cases where multiple observations exist in each band, those frames were combined together using $IRAF$'s $imcombine$ function to produce the deepest possible images.
\begin{figure*}
\centering
{\includegraphics[width=0.325\textwidth]{g-r_a2744.png}}
{\includegraphics[width=0.325\textwidth]{g-r_as1063.png}}
{\includegraphics[width=0.325\textwidth]{g-r_a370.png}}
{\includegraphics[width=0.325\textwidth]{g-r_m0416.png}}
{\includegraphics[width=0.325\textwidth]{B-V_m1149.png}}
{\includegraphics[width=0.325\textwidth]{B-V_m0717.png}}
{\includegraphics[width=0.325\textwidth]{i-z_m2129.png}}
{\includegraphics[width=0.325\textwidth]{i-z_m0744.png}}
{\includegraphics[width=0.325\textwidth]{i-z_rcs2327.png}}
{\includegraphics[width=0.325\textwidth]{r-i_cl1226.png}}
{\includegraphics[width=0.325\textwidth]{r-i_cl2011.png}}
\caption{Optical color-magnitude diagrams for 11 of the clusters analysed in this paper. The red sequence galaxies are denoted with the red filled circles within the dashed lines and have photometric uncertainties of $<0.05$ magnitudes in their optical colors. The dashed lines show the selection region for quiescent cluster ETGs (as described in the text, within $\pm 0.1$ mag. of the mean ridge line for the red sequence). SDSS1004 did not have any equivalent optical (rest-frame) data to construct a corresponding CMD.}
\label{fig:1}
\end{figure*}
This provides us with a set of clusters at multiple
redshift intervals, thus allowing for a study of the
evolution of the upturn with lookback time;
particularly important as most theoretical models predict a decline in the
strength of the upturn at higher redshifts (except \cite{Han2007} where the UV upturn color is nearly constant with redshift to $z \sim 6$).
Pinpointing the specific epoch at which this
decline occurs can allow us to constrain the key
model parameters, such as the He-enhancement $Y$
and the age of formation of the stellar population.
\begin{figure*}
\centering
{\includegraphics[width=0.325\textwidth]{u-r_a2744.png}}
{\includegraphics[width=0.325\textwidth]{u-r_as1063.png}}
{\includegraphics[width=0.325\textwidth]{u-r_a370.png}}
{\includegraphics[width=0.325\textwidth]{u-r_m0416.png}}
{\includegraphics[width=0.325\textwidth]{u-v_m1149.png}}
{\includegraphics[width=0.325\textwidth]{u-v_m0717.png}}
{\includegraphics[width=0.325\textwidth]{u-r_m2129.png}}
{\includegraphics[width=0.325\textwidth]{u-g_sdss1004.png}}
{\includegraphics[width=0.325\textwidth]{u-g_m0744.png}}
{\includegraphics[width=0.325\textwidth]{u-r_cl1226.png}}
{\includegraphics[width=0.325\textwidth]{u-r_cl2011.png}}
\caption{U-band (rest-frame) color-magnitude diagrams for clusters analysed in this paper. The red sequence is denoted with the red filled circles within the dashed lines and have photometric uncertainties of $<0.05$ magnitudes in their optical colors. RCS2327 did not have any $u$-band (rest-frame) data available to construct a corresponding CMD.}
\label{fig:2}
\end{figure*}
As in our previous work we select a sample of passive
cluster ETGs using red sequence galaxies (spectroscopic samples -- e.g., \citealt{rozo2015,depropris2016} -- confirm that red sequence galaxies have $>95\%$ likelihood of being quiescent cluster members, usually having morphologies consistent with their being early-type systems). The initial red sequence in all clusters is selected from an optical color - $F606W-F814W$ for Abell 2744/Abell S1063/Abell 370/MACSJ0416/MACSJ1149/MACSJ0717, $F555W-F814W$ for SDSS1004, $F125W-F160W$ for RCS2327/MACSJ0744 and $F125W-F140W$ for CL1226/CL2011. Optical photometry for all clusters was performed using Source Extractor (\citealt{bertin1996}) to measure the \cite{Kron1980} style total magnitude and an aperture magnitude within a fixed 10 kpc diameter for each galaxy in the image. The optical color-magnitude diagrams (CMDs) and the identification of red sequence galaxies for all clusters are shown in Fig. \ref{fig:1} (apart from SDSS1004, which did not have data in multiple optical bands). All clusters exhibit a tight red sequence in their CMD with a scatter of roughly $\pm$0.1 mags, as is typical of ETGs. This red sequence is observed up to the point where the likelihood of contamination from non-cluster members and dusty foreground galaxies increases significantly, as seen in Fig. \ref{fig:1}. The dashed lines around the red sequence identify the likely early-type cluster members we analyze in this paper.
We then further constrained our sample to exclude
objects with even low levels of possible residual star
formation, by deriving red sequences in (rest-frame)
$u$-band CMDs for the optically selected red sequence
galaxies, and excluding objects bluer than the derived
red sequence in these $u$-band colors. The use of
multiple colors, particularly the $u$-band allows
for the selection of a truly passive sample of ETGs
(\citealt{ali2019,phillipps2020}). The standard $u$-band is very sensitive to even low levels of star formation, while contamination from the putative UV upturn sources is relatively small, allowing us to discriminate against red sequence galaxies with possible on-going low-level star formation. The $u-r$ vs. $r$
CMDs for all clusters are shown in Fig. \ref{fig:2}
(apart from RCS2327, which had no rest-frame $u$-band
data available). We finally check by eye all galaxies that survived both color cuts and reject any with non-early-type morphologies, in order to maintain a purely passive sample of galaxies. Several of these clusters are common to our earlier work \citep{depropris2016}, where we measured the Sersic indices of red sequence galaxies and showed that they are consistent with those of local cluster ETGs (at least to $z=0.8$). This three-fold selection method allows us to discriminate truly quiescent early-type cluster members from all
other interlopers, thus presenting us with a sample of
galaxies for which the UV SED will almost entirely be
dominated by the upturn, if present.
The rest-frame $\sim2400-optical$ color in all
clusters can then be used to probe the near-UV
output of the selected galaxies and gauge its
evolution. While this color is clearly not as
efficient for analysing the upturn as the classical
$FUV-optical$ colors used in our previous studies
\citep{ali2018a,ali2018b,ali2018c} and in other works,
it still retains reasonable sensitivity to flux from
sources of the upturn, as will be explored later. The
UV photometry was carried out using $IRAF$'s aperture
photometry package, centred at the RA and DEC of each
galaxy's optical position, within a metric 10 kpc
diameter aperture after properly aligning the UV and
optical images. This ensures the same physical
aperture sizes for the measurement of $UV-optical$
colors across all clusters, irrespective of redshift.
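For reference, the angular size of this fixed metric aperture follows directly from the adopted cosmology; the sketch below evaluates it near the extremes of our redshift range (the pixel scales are representative assumptions, not the measured values for each camera).
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67, Om0=0.3)

# 10 kpc (proper) aperture diameter in arcsec and in detector pixels.
for z, pixscale in [(0.31, 0.05 * u.arcsec), (0.96, 0.13 * u.arcsec)]:
    theta = 10 * u.kpc * cosmo.arcsec_per_kpc_proper(z)
    print(z, theta.to(u.arcsec), (theta / pixscale).decompose())
\end{verbatim}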
A 5$\sigma$ signal-to-noise photometric cut was then made and each object checked by eye to confirm its detection. In all clusters we detect galaxies in the UV about 2--3 magnitudes fainter than $M^*$ in the luminosity
function, as seen in Fig. \ref{fig:3}. However, for
CL1226/CL2011, due to the large distance and
relatively limited exposure times, we directly detect
only the brightest galaxies in these clusters (above
$M^*$). To overcome this limitation, for both clusters
we stacked all galaxies per magnitude bin in the UV from
their optical positions between
$21<M_{F125W}(r)<24$ (i.e. 3 bins) where very few
galaxies are directly detected. For each magnitude
bin the stacking procedure yielded at least a
$5\sigma$ detection, thus allowing us to push beyond
the direct detection limit and calculate the average
$UV-optical$ colors of the galaxies beyond $M^*$ in
these higher redshift clusters and compare them to
their lower redshift counterparts.
Furthermore, we also consider the potential contamination of our sample by dust-reddened green valley or blue cloud galaxies. In a previous study, \cite{phillipps2020} sought to define truly passive galaxies in the GAMA survey that exhibit a UV upturn by using a multi-band color cut similar to that employed in this paper in order to isolate even low level star-forming galaxies. For the galaxies that survived the initial optical/u-band cuts, a further cut was made in the WISE $W2-W3$ color, which is a completely independent measure of star formation (essentially from re-processed UV light) with minimal effect from internal dust extinction on the IR light directly. This color allows for the identification of potential star-forming galaxies that may have been dust-reddened into the red sequence and thought to be passive systems (\citealt{fraser2016}). Almost all galaxies that survived the initial selection but not the IR cut had $NUV-r\lesssim5$. Similar $NUV-r$ cuts have also been used in other studies to identify potential green valley galaxies and residual star-formers from true ETGs (e.g. \citealt{salim2014,crossett2014}). As such we check all clusters in our sample to identify the number of red sequence members that have colors bluer than $NUV-r=5$ (by using a simple SSP as described in the next sub-section to transform GALEX $NUV-r=5$ to the equivalent rest-frame $NUV-optical$ color of each cluster). We find that the fraction of galaxies meeting this criterion in each cluster is roughly $5\%$ or less, and even then most of these blue galaxies are not significantly bluer than $NUV-r=5$ (within approximately 0.5 mags). As such, we do not explicitly reject them from our sample, but these galaxies are identified later (in box plots) as statistical outliers in the overall UV color distribution of clusters.
\subsection{Extinction and k-corrections}
\label{sec:kcorr}
\begin{deluxetable}{ccccccc}
\tablenum{2}
\tablecaption{Extinction and k-corrections for all clusters. K-corrections are used for Fig. \ref{fig:yeps} and \ref{fig:chem}.\label{table2}}
\tablewidth{0pt}
\tablehead{\colhead{Cluster} & \colhead{Observed} \vspace{-0.2cm} & \colhead{Rest-frame} & \colhead{Extinction} & \colhead{K-correction}\\ \colhead{} & \colhead{Color} & \colhead{Color} & \colhead{Correction} & \colhead{}}
\startdata
CL2011 & F475-F125 & 2400-r & 0.168 & 0\\
CL1226 & F475-F125 & 2500-r & 0.049 & 0.31\\
RCS2327 & F435-F814 & 2550-g & 0.162 & 0.93\\
MACSJ0744 & F435-F814 & 2550-g & 0.12 & 0.96\\
SDSS1004 & F435-F814 & 2550-g & 0.038 & 0.99\\
MACSJ2129 & F390-F814 & 2450-g & 0.18 & 0.58\\
MACSJ1149 & F390-F814 & 2500-V & 0.054 & 0.6\\
MACSJ0717 & F390-F814 & 2500-V & 0.182 & 0.6\\
MACSJ0416 & F336-F814 & 2400-r & 0.119 & -0.12\\
Abell370 & F336-F814 & 2450-r & 0.095 & -0.02\\
AbellS1063 & F336-F814 & 2500-r & 0.035 & 0.1\\
Abell2744 & F336-F814 & 2550-r & 0.039 & 0.2\\
\enddata
\end{deluxetable}
\begin{figure}
\includegraphics[width=0.49\textwidth]{SED.png}
\caption{IUE far and near-UV SEDs of the giant elliptical galaxies NGC1139 and NGC4649 from \protect\cite{chavez2011}. The purple and blue curves show the observed UV upturn in their spectra while the red curve is a theoretical spectrum from a stellar population template of solar metallicity and age of 10 Gyrs from \protect\cite{chavez2009}, which represents an ETG with no UV upturn component. Plotted on top are the filter response curves of three filters used for the near-UV observations of the clusters studied in this paper, to show their coverage in the upturn dominated region of the spectra in ETGs.}
\label{fig:sed}
\end{figure}
Extinction corrections are taken from the NASA/IPAC database\footnote{https://ned.ipac.caltech.edu}, which uses the reddening maps from \cite{schlafly2011} and extrapolates to the UV following the Milky Way extinction law of \cite{Cardelli1989}. For simplicity, we assume all galaxies within a cluster to have the same line-of-sight extinction; given the distance to each cluster, there should be little variation in extinction from galaxy to galaxy. The extinction corrections made to the $UV-optical$ colors for all clusters are given in Table \ref{table2}. We also assume that all red sequence members have little to no internal dust content, as is the case for ETGs in general and as demonstrated through model fitting of Coma red sequence members in \cite{ali2018a}, where $A_{V}$ was found to be $<0.1$ in most cases. In agreement with this assumption, a measurement for nearby ETGs from the SDSS returns a mean internal extinction $A_z$ of 0 \citep{Barber2007}. Observations of several nearby ETGs from {\it Astro UIT} and {\it Astro HUT} also imply very low internal extinction ($E(B-V) < 0.03$ -- \citealt{Ferguson1993}) in the $FUV$ bandpass, while \cite{Rettura2006,Rettura2011} also find internal extinction values $E(B-V) < 0.05$ (and consistent with zero) for the majority of ETGs in clusters up to $z=1.3$.
The clusters we use in this study were carefully selected such that despite their varying redshifts, they have observations in filters (of increasing wavelength at higher cluster redshifts) that probe roughly the same wavelength range in the rest-frame UV (centred at $\sim$2400\AA). This was done in order to avoid the large k-corrections that become necessary when using the same filter to probe the upturn across large redshift ranges. It is particularly important to minimise the magnitude of the k-corrections in the UV, where the spectra of ETGs are not as rigorously studied, and the stochastic nature of the upturn (which causes a large scatter in the UV colors compared to optical) introduces uncertainty to the k-corrections applied to each galaxy without a priori knowledge of their spectra, which we of course do not have.
Despite our careful selection of clusters, it is still necessary to perform some k-corrections to the data. This k-correction is driven by two key factors: 1) the difference in the rest-frame wavelengths covered by the filters; and 2) the difference in shape of the filters used to observe the rest-frame UV. These two elements are illustrated in Fig. \ref{fig:sed}, where the different filters used do not exactly align in wavelength coverage and have varying shapes in their response curves. The k-correction term we apply seeks to correct for these two factors. However, given that the UV data for all clusters are centred between $2400\sim2500$\AA, the k-corrections only need to account for a shift of at most $\sim100$\AA\ in wavelength, so the correction is mostly dominated by the varying shape of the filters (i.e. it is largely a bandpass correction).
To perform this correction, we use the YEPS spectrophotometric models of \cite{chung2017}, which seek to replicate the UV output of ETGs through He-enhancement (any model that reproduces the observed UV SED could have been used instead, yielding similar results). We take two SSPs, both of which have $Z$=\(Z_\odot\), $z_f=4$ and an age of 12 Gyrs, but with $Y_{ini}$\footnote{The helium abundance $Y$ of a stellar population is related to the initial helium abundance $Y_{ini}$ and the metallicity $Z$ through $Y = \Delta Y/\Delta Z \times Z\, +\, Y_{ini}$, where $\Delta Y/\Delta Z$ is the galactic helium enrichment parameter, assumed to be 2.0.}$=0.23$ (no upturn) and $Y_{ini}=0.43$ (maximum upturn). The SSPs are shifted to the cluster redshift, k-corrections are calculated from both SSPs in the observed colors, and the two values are averaged. An average of the two SSPs is used because the vast majority of our observed galaxies have colors that lie between those of the two SSPs (see Fig. \ref{fig:chem} for the color evolution of the SSPs). Due to the stochastic nature of the upturn and the large scatter in the UV colors, using a single k-correction value for all galaxies in a cluster introduces an uncertainty to the results, as not all galaxies will be fit by the same SSP; this is an issue that cannot be mitigated without detailed knowledge of the UV SED of each galaxy. However, the two SSPs form the two extreme cases for the galaxies' UV colors, and their individual k-corrections differ from our adopted (averaged) values by only $\sim \pm 0.2$ mags, which we take as the error on the k-correction term for any given galaxy.
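In practice this amounts to synthetic photometry of redshifted model SEDs. A minimal sketch of the calculation (illustrative only, not our reduction code; it assumes each SSP SED and each filter curve is supplied as a pair of wavelength/throughput arrays in \AA) is:
\begin{verbatim}
import numpy as np

C_ANG = 2.99792458e18  # speed of light in Angstrom/s

def trapz(y, x):
    # simple trapezoidal integral, robust to non-uniform wavelength grids
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def ab_mag(wave, f_lam, filt_wave, filt_resp):
    # Synthetic AB magnitude of an SED (f_lambda vs. Angstrom) through a
    # photon-counting filter response; absolute normalisation cancels in colors
    r = np.interp(wave, filt_wave, filt_resp, left=0.0, right=0.0)
    f_nu = trapz(f_lam * wave * r, wave) / (C_ANG * trapz(r / wave, wave))
    return -2.5 * np.log10(f_nu) - 48.60

def observed_color(wave_rest, f_lam_rest, z, blue_filt, red_filt):
    # Observed-frame color of a rest-frame model SED placed at redshift z;
    # distance and (1+z) terms are identical in both bands and drop out
    wave_obs = wave_rest * (1.0 + z)
    return (ab_mag(wave_obs, f_lam_rest, *blue_filt)
            - ab_mag(wave_obs, f_lam_rest, *red_filt))

def k_correction(ssps, z_cl, filts_cl, z_ref, filts_ref):
    # Average the corrections from the Y_ini = 0.23 and 0.43 SSPs that bring
    # a cluster's observed color onto the reference (CL2011) observed color
    ks = [observed_color(w, f, z_cl, *filts_cl)
          - observed_color(w, f, z_ref, *filts_ref)
          for (w, f) in ssps]
    return float(np.mean(ks)), 0.5 * abs(ks[0] - ks[1])
\end{verbatim}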
The k-corrections as described above are particularly important for Fig. \ref{fig:yeps}, where we directly compare the evolution of the rest-frame $UV-optical$ colors of cluster galaxies across redshift (discussed in detail in later sections). We apply the k-correction to all clusters to bring their colors in line with CL2011's observed $F475W-F125W$ at $z=0.96$ (rest-frame $2400-r$), since the evolution of the models is also plotted in the observed (redshifted) bands of CL2011. To elaborate further on the error in the k-correction term, we can take for example MACSJ1149, where the observed color is $F390W-F814W$ -- roughly $2500-V$ in the rest-frame. The k-correction calculated (as described earlier) to bring this color to the rest-frame $2400-r$ of CL2011 is 0.6 mags. Using the $Y_{ini}=0.23$ SSP, the k-correction is 0.74 mags, and using the $Y_{ini}=0.43$ SSP yields 0.45 mags, both within 0.2 mags of our adopted correction of 0.6.
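Written out, the adopted term is simply the mean of the two extreme-case corrections:
\[
k_{\rm MACSJ1149} \;=\; \tfrac{1}{2}\left(0.74 + 0.45\right) \;\simeq\; 0.6\ {\rm mags},
\]
with the two end-points each lying within 0.2 mags of this value.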
The k-corrections as applied to all clusters in Fig. \ref{fig:yeps} and \ref{fig:chem} are shown in Table \ref{table2}. The clusters for which the rest-frame color is closer to that of CL2011 have smaller k-corrections, as expected. No age/evolutionary corrections are made, as these figures seek to analyse the age evolution of the UV colors in cluster galaxies.
A similar k-correction is also applied in each of the redshift bins of Fig. \ref{fig:3} (detailed in the next section). In each sub-plot, all clusters are k-corrected to the redshift of one selected cluster to account for the small redshift difference between clusters in each redshift bin. For example, in the top left sub-plot (which shows the colors of Abell 2744 at $z=0.31$ and Abell S1063 at $z=0.35$), a k-correction is applied to the Abell S1063 galaxies to bring them in line with the rest-frame colors of Abell 2744 (accounting for the redshift difference of 0.04). Similarly, for the rest of the sub-plots, the clusters to which the others were k-corrected are MACSJ0416, MACSJ1149, RCS2327 and CL2011. It should be noted that in each case the k-correction is rather small ($<0.2$ mags in most cases), as the redshift difference between the clusters in each bin is 0.05 or less and the clusters in each bin are observed using the same filters.
\section{Results}
\subsection{Near-UV color-magnitude diagrams}
We present in Fig. \ref{fig:3} the rest-frame $2400-r$ or $2500-g$ / $2500-V$ colors for selected cluster ETGs (as previously described) in 5 redshift bins: Abell 2744/Abell S1063 at $z=0.31\sim0.35$ (top left), Abell 370/MACSJ0416 at $z=0.38\sim0.40$ (top right), MACSJ1149/MACSJ0717/MACSJ2129 at $z=0.55\sim0.59$ (middle left), SDSS1004/RCS2327/MACSJ0744 at $z=0.68\sim0.7$ (middle right) and CL1226/CL2011 at $z=0.89\sim0.96$ (bottom). Across these redshift bins, this rest-frame color lies in the observed $F336W-F814W$ (first two bins), $F390W-F814W$, $F435W-F814W$ and $F475W-F125W$ respectively. For each set of clusters, also plotted is the prediction for the same color from the YEPS (Yonsei Evolutionary Population Synthesis) infall model of \protect\cite{chung2017} with $Z$=\(Z_\odot\), 0.5\(Z_\odot\), 2\(Z_\odot\), a fixed redshift of formation ($z_f=4$), cosmological helium ($Y_{ini}=0.23$) and a delta burst star-formation history (i.e. all stars formed in a single instantaneous burst). These models represent the UV colors of galaxies with no upturn component (and standard composition): the model colors are purely driven by age and metallicity effects, and are comparable to results from other population synthesis models that do not treat the UV upturn and that reproduce the typical optical and infrared colors of ETGs even at these redshifts -- e.g., see \cite{mei2006}.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{2500-r_a2744_as1063_infall.png}
\includegraphics[width=0.49\textwidth]{2400-r_a370_m0416_infall.png}
\includegraphics[width=0.49\textwidth]{2500-V_m1149_m0717_infall.png}
\includegraphics[width=0.49\textwidth]{2500-g_sdss1004_rcs2327_macsj0744_infall.png}
\includegraphics[width=0.49\textwidth]{2500-r_cl2011_cl1226_infall.png}
\caption{Rest-frame $2400\sim2500-optical$ CMDs of clusters in five redshift bins between $0.3<z<1$, as denoted in each figure. The vertical black line represents the respective cluster $M^{*}$ and the shaded region denotes the error in the measurement, taken and adjusted from \protect\cite{connor2017} and \protect\cite{depropris2013}. The horizontal dotted lines in each plot are the colors given by an infall model of \protect\cite{chung2017} with cosmological He ($Y_{ini}=0.23$) and $z_f$=4, but varying metallicities of $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\).}
\label{fig:3}
\end{figure*}
While the upturn has been classically analysed using photometry at shorter wavelengths, particularly the $FUV$, previous studies (\citealt{schombert2016}; \citealt{phillipps2020}) have shown that even the $NUV$ flux of quiescent ETGs is mostly dominated by the upturn. Given that the UV wavelength probed in this study is extremely close to the classical GALEX $NUV$ waveband (centred at 2250\AA), we expect the near-UV bands used here (centred at $\sim$2400\AA) to also contain significant flux from the upturn sub-population, albeit with a stronger contribution from the short wavelength tail of the blackbody emission of the majority main sequence/red giant branch stellar population. To visualise this, we plot in Fig. \ref{fig:sed} the UV SEDs below 3000\AA\ of the giant elliptical galaxies NGC1139 and NGC4649 (\citealt{chavez2011}) as taken by IUE, both galaxies known for a strong UV upturn. Also plotted for comparison is the SED of a 10 Gyr old SSP with solar metallicity from \cite{chavez2009}, which acts as a representation of ETGs with no upturn component (such a model fits well the optical and infrared colors of normal ETGs in the local universe, and even their evolution to high redshift). Superimposed on the SEDs are the response curves of the filters used to make the UV observations of the clusters analysed in this paper. From the figure we can see that at $>3000$\AA\ the SEDs of the observed ETGs and the reference model are nearly identical, while at $<3000$\AA\ the upturn starts to contribute and the UV flux of the observed galaxies rises with decreasing wavelength down to the Lyman limit. Conversely, the UV SED of the theoretical spectrum is nearly flat, with very little flux, as would be expected from a purely quiescent stellar population. Even in the near-UV region probed in this study, there is $\sim5-6$ times more flux from a galaxy with an upturn than from one without. As such, we expect this wavelength region to still be reasonably sensitive to the upturn, while we remain vigilant about the contribution from the standard stellar population in ETGs in our analysis.
In Fig. \ref{fig:3} we see in the first four redshift bins that the colors of a majority of our observed galaxies cannot entirely be reproduced with composite
stellar populations (CSPs) of solar metallicity or higher, as would otherwise be expected for standard ETGs of this luminosity. A large number of the brightest galaxies in our sample (around or above $M^*$) have UV colors that can only be reproduced with metallicities below solar, which is implausible given that the brightest cluster ETGs are known to have solar or super-solar abundances from their optical spectra/colors. Even lower mass galaxies that are several magnitudes below $M^*$, and are naturally expected to be bluer because of their lower metallicities \citep{gallazzi2005,gallazzi2014}, are much bluer than the half-solar CSP in this near-UV color. For both the high and low mass galaxies, this suggests that metallicity (and age) alone cannot account for the extremely blue UV colors observed in our galaxies. Instead there needs to exist a `second parameter' that drives the bluer UV colors in these otherwise quiescent galaxies. Interestingly, however, in the final, highest redshift bin, most galaxies across the luminosity function seem to be reasonably fit by 0.5\(Z_\odot\) $< Z <$ 2\(Z_\odot\) models with no upturn. Even for the fainter galaxies, including stacks (containing the average flux of galaxies with optical luminosities well below $M^*$), the colors are largely consistent with models down to $Z\sim0.5$\(Z_\odot\), as would be reasonably expected from the mass-metallicity relation (\citealt{gallazzi2005,gallazzi2014}). Virtually no galaxy brighter than $M^*$ (and very few overall) in these two $z\sim 0.9$ clusters has $2400-r \lesssim 5$, while most galaxies in the lower redshift clusters are this blue or even bluer. Any such object (if it existed) would be easily detected in these data to well below the $M^*$ luminosity. Similarly, if such blue objects existed they would drive the colors of galaxy stacks in these two higher redshift clusters to the blue, but this is not observed. This suggests that at this redshift the upturn is significantly weakened, in agreement with the findings of \cite{lecras2016}, based on near-UV spectral indices. We cannot however exclude that a small upturn component may still be present in the brighter, more He-rich galaxies which formed earlier (as in Coma -- \citealt{ali2018a}), and as recently discovered by \cite{lonoce2020} for one $z=1.4$ galaxy in the COSMOS field (although this is a 2$\sigma$ detection based on a single index, while all other indices are consistent with no upturn), but our data in these two $z \sim 1$ clusters are consistent with a model where the UV upturn component clearly observed at low redshift has now largely, if not totally, disappeared.
\subsection{Intrinsic scatter}
\begin{figure}
\includegraphics[width=0.49\textwidth]{scatter.png}
\caption{Observed scatter about the near-UV color-magnitude relations of Fig. \ref{fig:3} vs. cluster redshift. Also plotted are the predicted near-UV scatters for each cluster assuming a pure age spread around the color-magnitude relation from the optical data, for $z_f=4$ (green squares) and $z_f=10$ (pink diamonds). Also shown is a linear fit to the data and the 95\% confidence and prediction intervals around the fit.}
\label{fig:scatter}
\end{figure}
Another parameter of the CMDs that can be used to probe the strength of the upturn is the intrinsic scatter in the UV colors. The scatter in the optical colors of cluster ETGs is generally found to be extremely tight, of the order of $\sim0.1$ mags in $g-r$, due to their similar ages and metallicities, which leave little variance in their SEDs. However, the same galaxies exhibit several times larger scatter in the $UV-optical$ colors, e.g. $\sim1.5$ mags in $FUV-V$ (e.g. \citealt{ali2018a}). While we expect the scatter to naturally increase at shorter wavelengths, this increase can be predicted using the optical colors and compared with observations to check whether the UV scatter can be attributed to simple age/metallicity effects, as is the case in the optical regime. We use the Hyper-Fit software (\citealt{robotham2015}) to measure the intrinsic scatter in the optical and UV CMDs (as in Fig. \ref{fig:1} and \ref{fig:3}) for each cluster. The code assumes a Gaussian distribution in the data and no covariance between the errors in magnitude and color, fits the best representative linear model to the data, and estimates the intrinsic scatter about that model. In Fig. \ref{fig:scatter} we plot the observed scatter in the near-UV color against redshift for all of our clusters (apart from CL1226/CL2011, where we only individually detect a small subset of the brightest galaxies). Qualitatively, the scatter in the colors seems to show a general decrease with increasing redshift -- the largest decrease being seen around the $z=0.7$ clusters -- albeit with significant differences across the redshift range, possibly indicating cluster-to-cluster variations in the degree of helium enhancement or formation epoch. This variation is manifested statistically when we plot a linear fit to the data and the 95\% confidence and prediction intervals around the fit. The fit shows a general decreasing trend with redshift, but the confidence/prediction intervals are very wide, suggesting a large variation in cluster-to-cluster scatter. Despite this large variation, the scatter still shows a gradual decrease with redshift.
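For illustration, such a fit amounts to maximising a Gaussian likelihood in which the intrinsic scatter is added in quadrature to the photometric errors. A minimal sketch of this calculation (a simplified stand-in for the Hyper-Fit package, not the package itself, assuming negligible magnitude errors and no error covariance):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_cmr_scatter(mag, color, color_err):
    # Fit color = a*mag + b with intrinsic Gaussian scatter s added in
    # quadrature to the photometric color errors.
    def nll(p):
        a, b, log_s = p
        var = color_err**2 + np.exp(2.0 * log_s)
        resid = color - (a * mag + b)
        return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))
    res = minimize(nll, x0=(0.0, np.median(color), np.log(0.3)),
                   method="Nelder-Mead")
    a, b, log_s = res.x
    return a, b, np.exp(log_s)  # slope, intercept, intrinsic scatter (mags)
\end{verbatim}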
We then use a model from \cite{conroy2009}, with parameters as described above, to estimate the scatter in the optical colors, assuming that the entire intrinsic scatter around the color-magnitude relation is due to age, and from this derive the expected scatter in the $UV-optical$ colors for various assumed formation redshifts; these are also plotted in Fig. \ref{fig:scatter}. At $z < 0.6$ there is clear excess scatter in the near-UV colors of most clusters, for any assumed formation redshift. However, and in agreement with our earlier work and the analysis from the simple models discussed later, at $z > 0.6$ the scatter in the near-UV colors appears closer to that predicted from the optical colors, or is even lower than predicted for certain formation redshifts. This suggests a weakening of the UV upturn at these redshifts, as observed in \cite{ali2018c} and, by a different approach, in \cite{lecras2016}.
One mechanism for this is that the UV upturn is produced by a population with non-cosmological helium abundance, with a significant fraction of stars having helium abundances larger than 0.23, as observed in Galactic globular clusters (e.g., in NGC 2808). The most extreme He-enhanced stars contribute most of the $FUV$ flux. As these stars drop out at higher redshift (being younger and hence too massive to reach the extreme HB), the remaining He-rich population cannot extend as far into the blue HB and instead provides more and more of its flux to the NUV regime; this transition takes place gradually over the redshift range $0.6 < z < 1$, as observed by \cite{lecras2016}.
\section{Discussion}
\subsection{Comparison with YEPS models}
\begin{figure*}
\includegraphics[width=\textwidth]{Legend.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_04_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_04_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_04_inf.png}
\caption{YEPS infall (CSP) models showing the evolution of the rest-frame $2400-r$ (observed $F475W-F125W$ at $z=0.96$) color over redshift/lookback time for a range of initial helium abundances, $Y_{ini}=0.28, 0.33, 0.38, 0.43$ (dashed lines), and the formation redshifts ($z_f$) and metallicities detailed in the figure legends. Also included in every sub-plot is the evolution of the same color for infall models with $Y_{ini}=0.23$ (i.e. no upturn) for $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\) (solid lines) at varying formation redshifts. Plotted on top for comparison are box plots which show the rest-frame $2400-r$ colors of all clusters between $z=0.3-1$. Photometric uncertainties in color are $<0.2$ magnitudes.}
\label{fig:yeps}
\end{figure*}
In order to analyse the evolution of the upturn over cosmic time, in Fig. \ref{fig:yeps} we compare our observed colors to those from YEPS spectrophotometric models of \cite{chung2017}. The YEPS models used in this figure are CSPs (with an infall chemical history) of given $Z$ and $z_f$, but also with varying initial helium abundances - $Y_{ini}$. This He-enhancement parameter represents the degree of UV upturn in stellar populations, with a larger $Y_{ini}$ giving rise to hotter HB stars at earlier cosmic times. There have been many observations of multiple stellar populations producing hot and extended HBs in Milky Way globular clusters (e.g. \citealt{lee2005b}; \citealt{piotto2005,piotto2007}) leading to a strong vacuum UV flux (e.g., see \citealt{dalessandro2012}), with direct spectroscopic evidence of enhanced helium in stars within such systems (e.g. \citealt{marino2014} and references therein).
In Fig. \ref{fig:yeps} we plot the evolution of the rest-frame $\sim2400-r$ color with redshift/lookback time. First, in every sub-plot we show the evolution of the YEPS models with cosmological $Y_{ini}=0.23$, for $Z$=\(Z_\odot\), 0.5\(Z_\odot\), 2\(Z_\odot\) (represented as solid lines in the plot). These form an important set of baseline models that \textit{do not} exhibit a UV upturn component, with their colors driven largely by age and metallicity effects. Hence they act as a key point of comparison to both the observational data and the other models of increasing $Y_{ini}$, to determine how enhanced helium affects the observed colors.
In each sub-plot we then show the evolution of this near-UV color as given by the YEPS models for four further initial helium abundances: $Y_{ini}=0.28, 0.33, 0.38, 0.43$ (represented as dashed lines in the plot). Between the individual sub-plots we vary the metallicity and the redshift of formation of these models. Across the rows, $z_f$ takes the three values 2.5, 4 and 10 (from top to bottom), while across the columns, $Z$ takes the three values 0.5\(Z_\odot\), \(Z_\odot\) and 2\(Z_\odot\) (from left to right). This creates a grid of models of varying $Z$ and $z_f$, allowing us to examine the combined effect of these parameters, as well as $Y$, in determining the evolution of ETGs in the UV.
Alongside the models, we plot the observed colors (rest-frame $2400-r$) of ETGs in our 12 clusters from all redshift bins between $0.3<z<1$. The colors of Abell 2744, Abell S1063, Abell 370, MACSJ0416, MACSJ1149, MACSJ0717, MACSJ2129, SDSS1004, MACSJ0744, RCS2327 and CL1226 (as seen in Fig. \ref{fig:3}) were k-corrected to CL2011's rest-frame $2400-r$ ($F475W-F125W$ at $z=0.96$) using the method described in section \ref{sec:kcorr}. As detailed there, given that all clusters were observed at wavelengths close to rest-frame $2400-r$, the results could be reliably compared after applying a relatively small k-correction that took into account the difference in rest-frame wavelengths and the shapes of the filters used to observe the clusters in the UV (as seen in Fig. \ref{fig:sed}). The colors are plotted in the form of box plots for an intuitive look at the statistics of each cluster. The box plots show the median of the color distribution of each cluster and the 25\% and 75\% quartiles, with the whiskers indicating the range of the remaining colors; galaxies with colors more than 1.5 times the interquartile range above or below the quartiles are plotted as individual points and are considered outliers. The notches around the median in each box plot represent the 95\% confidence interval of the median value for each cluster. Due to the small number of directly detected galaxies in CL2011 and CL1226, the notches around the median are large, representing the uncertainty in the median value. For these clusters we also plot the stacks separately from the box plots, as these represent amalgam values of all galaxies in several magnitude bins (as seen in Fig. \ref{fig:3}). The models and the cluster data combined allow us to probe the evolution of the upturn in the general cluster population out to $z=0.96$.
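For reference, a schematic of these plotting conventions (with hypothetical placeholder data rather than our measured colors):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical placeholder colors for three clusters, for illustration only
rng = np.random.default_rng(1)
names = ["Abell 2744", "MACSJ1149", "CL2011"]
colors = [rng.normal(5.3, 0.6, 80), rng.normal(5.4, 0.5, 60),
          rng.normal(6.0, 0.3, 15)]

fig, ax = plt.subplots()
ax.boxplot(colors,
           notch=True,  # notches: 95% confidence interval on the median
           whis=1.5)    # whiskers: 1.5x the interquartile range;
                        # points beyond them are drawn as outliers
ax.set_xticklabels(names)
ax.set_ylabel("rest-frame 2400 - r (mag)")
plt.show()
\end{verbatim}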
From this figure we can see that the UV upturn is clearly detected in all clusters at $0.3<z<0.6$ and that the $2400-r$ colors are relatively consistent from cluster to cluster, showing no significant signs of evolution. A subset of galaxies can be reproduced using models of standard $z_f$ and $Z$ combinations, but even in the most extreme cases that produce the bluest YEPS $Y_{ini}=0.23$ (no upturn) colors (i.e. $z_f$=2.5 and $Z$=0.5\(Z_\odot\) -- see top left sub-plot), there still exists a large subset of bluer galaxies outside these models that can only be explained using models with enhanced $Y$. For more moderate and reasonable YEPS models with $z_f=4$ or higher (see middle and bottom row sub-plots), the argument for an enhanced $Y$ is even stronger, in order to account for the larger proportion of blue galaxies outside the bounds of the non-upturn models. Increasing $Y$ also moves the onset of the UV upturn to earlier redshifts/lookback times. This onset is affected by both $z_f$ and $Z$ at a given $Y$, with an earlier evolution to bluer colors for higher $z_f$ and lower $Z$, and vice versa. This is important as galaxies with different $z_f$ and/or $Z$ within a cluster can thus have different strengths and times for the onset of the upturn.
There is also a detection of the upturn for galaxies in the $z\sim0.7$ clusters, but at a slightly lower degree of confidence, as the `blue tail' of galaxies appears to be receding and, as seen in Fig. \ref{fig:scatter}, these clusters have generally smaller observed scatters in their near-UV colors compared to their lower redshift counterparts. These factors may indicate that we are starting to see some signs of evolution in the strength and incidence of the upturn, which is backed up by the findings of \cite{ali2018c}, where SDSS1004 galaxies, even when stacked together, showed only a weak $3\sigma$ detection at 7.2 mags in the rest-frame $1650-g$ color (about one mag. redder than similar-mass galaxies at $z=0.55$), a color much more sensitive to the upturn than $NUV-r$. The fact that we still observe an upturn, albeit weakening, in $2400-r$ in the same cluster (and those of similar redshift) may suggest that even the most extreme He-rich stars at this age are no longer able to achieve the high surface temperatures needed to produce flux in the $FUV$, as their stellar envelopes are now too massive, and are thus contributing relatively more to the near-UV flux. As such we may be seeing the first signs of the upturn sub-population starting to disappear.
This is further confirmed by the data in CL1226 and CL2011, where we see that the majority of galaxies out to 3 mags below $M^*$ can be encapsulated by a standard non-upturn $Y_{ini}=0.23$ YEPS model with $Z$ roughly between 0.5\(Z_\odot\) and 2\(Z_\odot\), assuming $z_f=2.5\sim4$ (top middle and middle sub-plots in Fig. \ref{fig:yeps}), without the need to invoke any significant He-enhancement above primordial levels. It is likely that at these redshifts even the most extreme He-rich stars are no longer old enough (having not had enough time to reach the HB) to achieve the surface temperatures required to output significant flux even in the near-UV. This gradual fading of the upturn is in excellent agreement with the results of \cite{lecras2016}, who also found the strength and incidence of the upturn to decrease from $z=0.6$ to $z=1$ from the analysis of a number of near-UV indices in a large sample of LRGs. While non-upturn YEPS models do appear to fit the data relatively well at higher redshifts, it is important to note that even at these redshifts not all galaxies will be without an upturn. A combination of earlier $z_f$ and lower metallicity can lead to an earlier onset of the upturn, which may be applicable to some galaxies in a given cluster and explain the recent finding of a massive ETG exhibiting an upturn at $z\sim1.4$ (\citealt{lonoce2020} -- although this is based on a single line at the $2\sigma$ level only, with other indicators disagreeing).
The overall color spread of the cluster galaxies is very similar across all clusters in each individual redshift bin, indicating that all clusters exhibit similar upturns in their member galaxies irrespective of size or environment, as demonstrated in previous studies (\citealt{ali2019}; \citealt{phillipps2020}). Even between the clusters in the different redshift bins, we find the observed colors to be reasonably similar out to $z\sim0.6$, suggesting that the upturn has not evolved significantly, although there is a slightly smaller fraction of very blue galaxies at higher redshifts. These results are mostly consistent with our previous studies, where we detected strong $FUV$ emission from MACSJ1149/MACSJ0717 galaxies, both directly and using stacking analysis, to several magnitudes below $M^{*}$, which also agreed with the colors measured in $z\sim0$ clusters such as Coma, Fornax and Perseus (\citealt{ali2018a,ali2018b,ali2018c}). This finding led to the conclusion that the upturn color has remained broadly similar in cluster galaxies out to $z\sim0.55$, which we reaffirm here.
Given a scenario in which the upturn color remains approximately constant out to $z\sim0.6$, then shows signs of weakening at $z\sim0.7$ and has mostly disappeared by $z\sim0.9$, the YEPS models (assuming a moderate \(Z_\odot\) and $z_f=4$ -- middle sub-plot in Fig. \ref{fig:yeps}) suggest a minimum $Y_{ini}$ of 0.42 to explain the observations. This equates to a $Y$ of 0.46 using the formula from \cite{chung2017} cited earlier ($Y=Y_{ini}+0.04$ for \(Z_\odot\)). An earlier formation epoch can somewhat reduce the required helium enhancement (by allowing more time for evolution), but this age can only be reasonably pushed back by 1 Gyr at most ($z=10$ corresponds to 0.5 Gyr after the Big Bang, which is the most likely redshift for the formation of the first galaxy of Milky Way mass -- \citealt{Naoz2006} -- whereas under the assumption of simple spherical infall the core regions of galaxies cannot form until $z=20$, yielding only an extra 200 Myr), and even in the most extreme case of $z_f=10$ (bottom middle sub-plot in Fig. \ref{fig:yeps}), the required $Y_{ini}$ only decreases to about 0.40.
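Explicitly, adopting $\Delta Y/\Delta Z = 2.0$ and $Z_\odot = 0.02$ (the values implied by the $Y=Y_{ini}+0.04$ relation quoted above), the minimum helium abundance is
\[
Y \;=\; Y_{ini} + \frac{\Delta Y}{\Delta Z}\,Z_\odot \;=\; 0.42 + 2.0 \times 0.02 \;=\; 0.46 .
\]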
\begin{figure*}
\includegraphics[width=\textwidth]{Legend.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_sim.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_ssp.png}
\caption{YEPS models with different chemical formulations showing the evolution of the rest-frame $2400-r$ (observed $F475W-F125W$ at $z=0.96$) color over redshift/lookback time for a range of initial helium abundances, $Y_{ini}=0.28, 0.33, 0.38, 0.43$ (dashed lines), with solar metallicity and a formation redshift of 4. Also included in every sub-plot is the evolution of the same color for models with $Y_{ini}=0.23$ (i.e. no upturn) for $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\) (solid lines). \textit{Left panel:} YEPS composite stellar population infall model. \textit{Middle panel:} YEPS composite stellar population simple closed box model. \textit{Right panel:} YEPS simple stellar population model. Plotted on top for comparison are box plots that show the rest-frame $2400-r$ colors of all clusters between $z=0.3-1$. Photometric uncertainties in color are $<0.2$ magnitudes.}
\label{fig:chem}
\end{figure*}
\subsubsection{Comparison with different chemical evolutionary models}
In Fig.~\ref{fig:chem} we show the look-back time evolution of YEPS models with three different assumptions on the chemical evolution history. The models are a simple stellar population and two variations of a composite stellar population, with simple (closed box) and infall chemical evolution assumptions, as described in \cite{kodama1997}. Using the metallicity distribution function of \cite{kodama1997}, simple stellar populations are summed to mimic the composite stellar populations of ETGs. For all of the models we shift the resulting SEDs to the redshift of the clusters and measure the integrated colors in the observed passbands. We overplot the observed colors of our clusters on these models as before.
The SSP models exhibit the reddest $UV-optical$ colors, the closed box models the bluest, and the infall models have colors in between. For a given $Y_{ini}$, the SSPs show a rapid onset of the upturn, while in the CSPs the colors become bluer more gradually over time. This suggests that the upturn sub-population in CSPs takes longer to fully populate the HB and become UV-bright compared to SSPs. Our observations appear to support the more gradual mode of evolution of the CSPs, given that we observe a strong upturn in all galaxies up to $z=0.6$, which then weakens (but does not fully disappear) at $z=0.7$ and has mostly ceased by $z=0.96$. Beyond our observations, CSPs generally provide a more accurate representation of ETGs, which will contain stellar populations with a range of ages and a metallicity distribution, unlike an SSP.
Among the CSP models, the closed box model, being the bluest, would require the reddest galaxies in our sample to have unusually high metallicities, much greater than 2\(Z_\odot\). Conversely, the highest metallicity required by the infall model to fit our reddest galaxies is $\sim$2\(Z_\odot\), which is a very realistic upper limit for the metallicity of large cluster ETGs given their optical spectra, as confirmed in previous studies -- e.g. in Coma (\citealt{price2011}). The infall models thus provide overall the most realistic and economical fit to our data for all three parameters $Y_{ini}$, $z_f$ and $Z$ simultaneously. Infall models are also generally held to better replicate the evolution of observed galaxies, particularly through their ability to solve the `G-dwarf' problem of closed box models.
\subsection{Comparison with other UV upturn models}
We can briefly consider the implications of these results for other models of the origin of the UV upturn.
Low metallicity HB stars or high metallicity HB stars (apart from the issues we have mentioned earlier) would start to appear only at $z\sim 0.3$ \citep{yi1997}, well below the redshift at which the presence of the UV upturn is already securely detected (at least $z=0.55$). Therefore our results confirm that these objects (with normal $Y$ abundance) cannot provide the sources of the UV upturn and account for its evolution with time.
The observed evolution, here and in previous studies \citep{lecras2016,ali2018c}, is also not consistent with the binary model of \cite{Han2007} and \cite{Hernandez2014}, in which the UV upturn color does not change significantly out to $z \sim 6$, whereas there is clear evidence of evolution in our data. Similarly, PAGB stars would only stop contributing to the UV light about 10 Gyrs prior to the present epoch \citep{Lee1999}, corresponding to $z \sim 3$, again far higher than observed here and in previous works.
\cite{Vazdekis2016} and \cite{Rusinol2019},
among others, suggest that residual star formation within ETGs
may contribute to the UV upturn. However, the results presented here (and the observed decline of the UV upturn at $z>0.6$ in \citealt{lecras2016} and \citealt{ali2018c}) cannot be explained in this fashion, as the residual star formation rate would have to decrease as a function of increasing redshift, opposite to all observed behavior for star-forming galaxies in the field and clusters \citep{Finn2005}. The galaxies in our sample, additionally, lie within rich clusters, where quenching of star formation is believed to be more efficient than in the field (but we note that the evolution of the UV upturn color appears to be similar between field and cluster environments -- \citealt{Atlee2009}, De Propris et al., in preparation).
\subsection{Implications for galaxy evolution}
While there exist several other models for the upturn, the observed evolution of the near-UV colors is best fitted by He-rich models. Since He-rich stars have been directly observed in local globular clusters and are directly linked to a stronger UV output in hot HB stars, this mechanism is the only one thus far proposed that does not require significant modifications to cosmology or theories of stellar evolution, and for which local counterparts are observed to exist.
In this scenario ETGs form, at least from the standpoint of chemical evolution, in the same manner as globular clusters, albeit at generally much higher metallicities (and He-rich, metal-rich clusters are indeed observed in M87 -- \citealt{peacock2017}). This would imply that most ETGs form their stellar populations and assemble their mass at very early times. Indeed, the stellar mass needed to provide the observed flux in the UV corresponds to about 10\% of the total stellar mass, and this must have been formed at $z > 2.5$, and probably closer to $z=4$, as discussed above. However, these are second generation stars, whose helium content must have been enriched by previous processing, likely in fast-rotating massive stars \citep[e.g.,][]{decressin2007} or in massive AGB stars \citep[e.g.,][]{ventura2001} during the third dredge-up. Given what is known about the likely yields of element production and the resulting timescales, and within a closed box model, the vast majority of the stellar mass must already have been present within ETGs at these redshifts, especially if we also need to account for the observed radial gradients in the UV upturn populations within galaxies \citep{carter2011,jeong2012}. This therefore suggests a very early period of mass assembly for ETGs, as indicated by the discovery of several massive fully formed galaxies at redshifts approaching 4 \citep{guarnieri2019}.
\subsection{Caveats}
Finally, we note here the key caveats in our data and analysis (see the body of the paper for a more thorough discussion):
\begin{itemize}
\item The upturn has been conventionally classified using $FUV-optical$ colors (centred at $\sim1500$\AA), the wavelength regime where the upturn is at its strongest, while we have used a $NUV-optical$ color for our analysis. Although this wavelength regime is not optimal, and is partly affected by the tail end of the blackbody emission of the main sequence population in ETGs, this study (e.g. Fig. \ref{fig:sed}) and others (\citealt{schombert2016}; \citealt{phillipps2020}) have shown that it is still mostly dominated by the emission from the upturn sub-population. Hence the $NUV-optical$ color should also trace the evolution of the upturn with redshift, as is predicted by models and seen in our results.
\item At high redshift, for CL1226 and CL2011, the data are not deep enough to individually detect galaxies below the cluster $M^*$, unlike in the lower redshift clusters. Clusters at higher redshift are also generally not as rich as their lower redshift counterparts, which, combined with the aforementioned detection limits, can lead to small number statistics, particularly towards the faint end of the luminosity function. To overcome this issue we have stacked all galaxies below $M^*$ and were able to clearly detect the average flux per magnitude bin in this region, reaching down to a similar point in the luminosity function as the lower redshift clusters and probing the more general ETG population. While stacking does not allow us to explore the scatter in the color caused by the upturn, it does allow us to probe the general evolution of the color with redshift.
\item We have selected our cluster galaxies using the standard red sequence selection method, as there are no spectroscopic or accurate photometric redshifts, particularly for the high redshift cluster galaxies. However, to reduce contamination from star-forming and background/foreground objects, we have selected the red sequence simultaneously in both an optical and a U-band color. This method of cluster ETG selection has been found to be rather successful in spectroscopic studies (\citealt{rozo2015}).
\item While the He-enhanced models can most consistently explain the evolution of the upturn with redshift, and have locally observed analogues such as in Milky Way globular and open clusters, there is still no theoretical construct by which one can obtain such anomalously He-enriched HB stars in old systems, potentially formed at very high redshifts ($z\sim3-4$), as required for these stars to have had time to evolve onto the HB. Some suggestions have been made, such as the disintegration of GCs providing He-enhanced HB stars in ETGs (\citealt{Goudfrooij2018}), but there is no observational evidence thus far to support such hypotheses. Further theoretical and observational efforts will be required on both the galaxy evolution and globular cluster fronts to discover any potential links between the systems (such as the He-rich metal-rich GCs observed in M87 -- \citealt{peacock2017}) and to further constrain the source of the He-enhancement in a subset of the stellar population in these systems.
\end{itemize}
\section{Conclusions}
We have measured an approximate $NUV-r$ color for ETGs within clusters at $0.3 \lesssim z \lesssim 1$. At $z < 0.6$ we observe the classical UV upturn, showing little evolution, with UV colors that are homogeneous from cluster to cluster but show a large internal spread. Above this redshift we find evidence of a decline in the strength of the upturn, which largely disappears by $z=0.96$ (our most distant target). This is most consistent with composite stellar population models with an infall chemical history, where the UV upturn is produced by a population of blue HB stars with $Z$=\(Z_\odot\) formed at $z=2.5\sim4$, in turn originating from a sub-component stellar population with high He content ($\sim 45-47\%$), similar to the second generation population in globular clusters. These models best replicate the evolution and scatter of the UV colors across the entire redshift range $0.3<z<1$ analysed in this study. Given the evolutionary timescales for these stars, the results imply a surprising degree of chemical evolution occurring within the first 2 Gyrs of the history of the Universe.
\acknowledgments
Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). C. C. acknowledges the support provided by the National Research Foundation of Korea (2017R1A2B3002919).
\section{Introduction} \label{sec:intro}
It has long been known that potentially all early-type galaxies (ETGs)
contain excess flux at short (vacuum ultraviolet)
wavelengths when compared to expectations from their
old, metal-rich stellar populations extrapolated to
this wavelength regime (e.g., \citealt{Code1979,
Bertola1982,Burstein1988} -- see also
\citealt{OConnell1999,yi2010} for reviews and discussions). Typical observed $FUV-V$ colors for local ETGs are
2--3 magnitudes bluer than expected from spectrum
synthesis models (e.g., \citealt{conroy2009}) that,
with standard initial conditions (solar metallicity,
formation redshift of 4, star-formation e-folding
timescale of 0.3 Gyr\footnote{The star-formation rate decays by a factor of $e$ during this timescale.}) reproduce the observed optical and infrared colors of ETGs and their evolution even at redshift $\sim2$.
There is now a broad consensus that
hot (blue) horizontal branch (HB) stars are
responsible for producing the excess UV flux
\citep{Greggio1990,Dorman1993,Dorman1995}. These stars
have been directly identified as the sources of the
UV light in the nearby bulges of M31 and M32
\citep{Brown1998,tbrown2000b}. \cite{Rosenfield2012} find that the vast majority of the UV light in the bulge of M31 can only be produced by hot HB stars (unresolved in their imaging). Since the bulge of M31 is
both old \citep{Saglia2010} and metal-rich
\citep{Sarajedini2005}, it provides a good
counterpart to the stellar populations
of more distant ETGs. Contributions from sources other than blue
HB stars are not fully consistent with observational evidence from local galaxies: Post-AGB (PAGB) stars \citep{Lee1999,Werle2020} are absent
from the UV color-magnitude diagram of
M32 \citep{tbrown2000b,tbrown2004} and provide only a relatively small ($\sim 20\%$ -- \citealt{Rosenfield2012}) fraction of the UV flux in the M31 bulge: those found are actually
descendants of the blue HB population
producing the bulk of the UV flux \citep{brown1997,Rosenfield2012}. $FUV$
spectra of nearby ETGs from {\it Astro HUT}
\citep{Ferguson1993,brown1997} lack the broad absorption features that would be
expected by a population of intermediate
luminosity white dwarfs \citep{Werle2020}.
Young stars \citep{Vazdekis2016,Rusinol2019,Werle2020}
cannot easily account for the UV excess light as well. No star hotter than B1V is observed
in M31's bulge \citep{OConnell1992,Rosenfield2012}. Images of several ETGs from {\it Astro UIT} \citep{Stecher1997} exhibit none of the clumpiness usually associated with star formation elsewhere. $FUV$ spectra of six nearby ETGs from \cite{brown1997} and spectral energy distributions covering
1500--3000\AA\ for several dozen ETGs in
Coma and Abell 1689 \citep{ali2018a,ali2018b} are consistent with a single blackbody and unlike the flux distribution produced by young stellar populations for normal initial mass functions. Finally, estimates of the
star formation rate in ETGs from \cite{Hakobyan2012} and \cite{Sedgwick2021}
using Type II supernovae imply much lower
star formation rates ($\sim 0.01$ to $0.1$ $M_{\odot}$ yr$^{-1}$) than needed to account
for the observed UV flux: furthermore, star formation tends to occur among lower mass ETGs, whereas the UV upturn is stronger in more massive galaxies.
While none of the above contributions to the UV upturn flux can be conclusively ruled out from current data (much of which derives from nearly 30-year-old observations with space telescopes flown on the Space Shuttle), blue HB stars appear to
be the most likely contributor to the bulk
of the UV flux, at least for nearby galaxies. A fundamental assumption of this (and other) papers is that the redshift evolution of the UV upturn is due to one of these sources at all epochs (which is of course a reflection of the comparatively poor spatial and spectral information available, especially for high redshift galaxies, which are extremely faint in the UV), and that such sources should also have local counterparts in the Milky Way (e.g., blue He-rich stars in globular clusters, sdB binaries in the field, etc.).
However, standard stellar
evolution models do not produce blue HB stars for
ages and metallicities typical of ETGs (e.g.,
\citealt{Catelan2009}). Blue HB stars are naturally produced by low
metallicity stellar populations (\citealt{Park1997} -- e.g., as in metal-poor globular clusters in our Galaxy). However, if
such stars existed in sufficient numbers to account
for the observed far-UV flux in ETGs, the optical
colors and spectra of these galaxies would look dramatically different.
High metallicity stars (of standard composition),
on the other hand, can never evolve to the blue HB within
cosmological timescales, unless they lose sufficient
envelope mass during their first ascent of the red
giant branch prior to the helium flash \citep{yi1997},
to expose the hotter inner core, but this is ruled out
by observations showing no evidence for extra mass
loss in local open and globular clusters
\citep{Miglio2012,Zijlstra2015,Williams2018} over
a range of 3 dex in metal abundance\footnote{e.g., \cite{Percival2011} postulate this mechanism to force the existence of blue HB stars in metal-rich isochrones}.
Mass loss during
the evolution of close binaries \citep{Han2007,Hernandez2014},
losing their envelopes by angular momentum transfer
as one star evolves off the main sequence and expands,
has also been proposed as a possible mechanism, but
such stars are rare in the globular clusters of our
Galaxy \citep{MoniBidin2009,Kamann2020} and in the bulge
\citep{Badenes2018}. As already noted by \cite{Smith2012}, if such binaries existed, they would have to be systematically more frequent and tighter as a function of galaxy mass and metallicity to account for the observed trends of the UV upturn color with galaxy mass and metallicity \citep{Burstein1988,ali2018a}. They would also have to be tighter and more frequent as a function of galactocentric radius to explain the radial color gradients in the UV upturn color and their correlation with radial gradients in $Mg_2$ strength and overall metallicity as observed by \cite{carter2011} and \cite{jeong2012}. The $FUV-V$ color of cluster ETGs at $z > 0.6$ \citep{ali2018c} and the apparent decrease in
the fraction of blue HB stars at $0.6 < z < 1.0$, as inferred from mid-UV spectral indices by \cite{lecras2016}, are not well reproduced by this model, in which the binary contribution to the UV colors is instead nearly constant after the first Gyr.
However, stars enriched in helium can evolve to the blue HB even at high metallicities (\citealt{lee2005b}; \citealt{chung2011,chung2017}); the extra helium causes faster evolution of stars by increasing their mean molecular weight, which in turn results in smaller mass at the He-burning stage at a given age. Such He-rich stars are observed in the Milky Way globular clusters, where they produce anomalously blue HBs (\citealt{piotto2007,Gratton2012}) and are directly associated with the multiple stellar populations observed in such systems (see review by \citealt{bastian2018}). \cite{peacock2017} found evidence for hot HB stars in M87's metal rich globular clusters while \cite{Goudfrooij2018} suggests that the disintegration of large numbers of such clusters may supply the UV upturn stars in galaxies.
Each of the above models produce different scenarios
for the evolution of the UV upturn color with redshift
\citep{yi2010}. \cite{Brown1998b,Brown2000,Brown2003} obtained a UV color by differencing two STIS filters for a few bright galaxies in clusters at $z=0.33, 0.37$ and $0.55$; these data are consistent with mild or no evolution of the UV upturn over this redshift range. \cite{Ree2007} used bright ellipticals from GALEX out to $z=0.2$ and again found little evidence for any significant change in the UV upturn color. \cite{Donahue2010} and \cite{Loubser2011} study a small sample of brightest cluster galaxies (BCGs) at moderate redshifts ($z < 0.2$) and also find no or modest evolution. \cite{boissier2018} use a large sample of BCGs in the background of the Virgo cluster and detect the upturn in these objects out to $z=0.35$ (showing no clear signs of evolution in their $FUV-NUV$ color), with an excess of upturn sources found at $z\sim0.25$. In a follow-up study of GAMA galaxies, \cite{Dantas2018} suggest an overabundance of upturn carriers at $z\sim0.25$, but with no clear statistical evidence for trends beyond this redshift. The evolution in the fraction of galaxies carrying the UV upturn beyond this redshift therefore remains to be probed. These data are in mild disagreement with the metal-poor and metal-rich HB scenarios (with no extra He) but do not provide strong constraints on either of the models discussed above. Most of these studies, however, have limited redshift range and/or target only a small number of very bright galaxies. We \citep{ali2018a,ali2018b,ali2018c} have used archival
data from the Hubble Space Telescope (HST) to measure
the evolution of the UV upturn out to $z=0.7$ and for
galaxies reaching to or below the $M^*$ point in the
luminosity function (i.e., the more normal population, as opposed to the brighter cluster galaxies
used in most previous studies as cited). Our data
show that there is no significant evolution in the
color and scatter of the UV upturn out to $z=0.55$,
but there is clear evidence that at $z=0.7$ the
$FUV-V$ color has become significantly redder.
Although only a lower (blue) limit to the mean UV
upturn color of red sequence ETGs at $z=0.7$ could be
derived, this is $\sim 1.2$ mag. redder (at the 3$\sigma$
level) than at lower redshifts \citep{ali2018c}. This observation is not consistent with the other models for the origin of the UV upturn discussed earlier, which cannot reproduce this pattern of evolution; it is best explained by a minority population of He-rich stars, and constrains their helium abundance to be $Y > 0.42$, nearly twice the cosmological value from Big Bang nucleosynthesis. Similar evolutionary trends are also observed for group/field ETGs at $z\sim 0.05$ \citep{phillipps2020} and to $z=0.6$ \citep{Atlee2009} and $z=1$ \citep{lecras2016}.
A testable prediction of this model is that (a)
$FUV-V$ evolves rapidly to the red above a redshift
specified by the epoch of galaxy formation and the
helium abundance, and (b) the scatter in this color
decreases as the upturn fades to match the small
scatter in the optical/infrared colors of ETGs (i.e.,
the extra component due to the anomalous blue HB
disappears). However, this generally requires
deep observations in the vacuum UV that are hard to
obtain. While not ideal, near-UV colors (around 2300\AA)
are also a suitable proxy for the UV upturn, as the UV
sources contribute more than three-quarters of the total flux in
this region of the spectrum \citep{ali2019,phillipps2020}. The UV
upturn `leaks' out to nearly 3000\AA\ and the older
stellar populations of normal composition contribute
little to this regime (e.g., see
\citealt{ali2018a,ali2018b}), although the
sensitivity to evolution is lower at increasing
wavelength.
In this paper we extend the cluster-wide analysis of
the UV upturn to red sequence galaxies of 12 clusters,
reaching out to $z\sim1$ - the
highest redshift as allowed by currently available
data, at which it is possible to reach down to and
beyond the $M^*$ point\footnote{The luminosity function of galaxies is well described by the Schechter function $\Phi(M)\,dM = 0.4\ln(10)\,\Phi^*\, 10^{-0.4(M-M^*)(\alpha+1)} \exp\left(-10^{-0.4(M-M^*)}\right) dM$, where $M^*$ is the characteristic magnitude of bright galaxies and $\alpha$ the slope of the faint end.}. At this redshift we explore
an epoch in which the hot horizontal branch stars are
no longer expected to exist, as the low mass stars in
ETGs simply have not had enough time
to occupy the late stages (core helium burning) of
stellar evolution. Typical evolutionary timescales for stars to reach the HB are $>6.5$ Gyrs. Given $z=0.6$ as the point at which they appear to `switch on' the UV upturn, one would infer present-day ages of around 12 Gyrs, i.e. formation at $z\sim3-4$, similar to globular clusters. The results for cluster galaxies
between $0.3<z<1$ are compared to one another and to
predictions of theoretical models of UV upturn. We also measure
the scatter in the UV colors of cluster galaxies at
increasing redshift and compare them to theoretical
expectations. The aforementioned analyses will allow
us to place stringent constraints on any helium
enhancement present in these galaxies, their formation
epochs and probe the evolution of the upturn
phenomenon at large.
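This timescale argument can be verified directly with the cosmology adopted below; a short illustrative sketch (using astropy, with the 6.5 Gyr HB timescale quoted above):
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=67, Om0=0.3)   # parameters adopted in this paper

t_onset = cosmo.age(0.6)                # cosmic age when the upturn is fully on
t_form = t_onset - 6.5 * u.Gyr          # subtract the HB evolutionary timescale
z_form = z_at_value(cosmo.age, t_form)  # -> z ~ 4, consistent with z ~ 3-4
\end{verbatim}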
We describe in section 2 our dataset and present
results in section 3. In section 4 we discuss our
results in the light of helium-rich models and
theories of galaxy formation and evolution. We use
the conventional cosmological parameters for this
analysis H$_0 = 67$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ (\citealt{Planck2020}). All magnitudes
quoted are in the AB system.
\section{Data and Photometry}
\begin{deluxetable*}{ccccccc}
\tablenum{1}
\tablecaption{Summary of Observational Data\label{table1}}
\tablewidth{0pt}
\tablehead{
\colhead{Cluster} & \colhead{Redshift} & \colhead{Filters} & \colhead{Rest-Frame Central} \vspace{-0.2cm} & \colhead{Total Exposure} & \colhead{Proposal ID} & \colhead{PI} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{Wavelength (\AA)} & \colhead{Time (ks)} & \colhead{} & \colhead{} \\
}
\startdata
Abell 2744 & 0.31 & F336W & 2569 & 27.7 & 13389 & Siana \\
& & F435W & 3326 & 45.7 & 13495 & Lotz \tablenotemark{1} \\
& & F606W & 4633 & 23.6 & 13495 & Lotz \tablenotemark{1}\\
& & F814W & 6223 & 104.3 & 13495 & Lotz \tablenotemark{1}\\
Abell S1063 & 0.35 & F336W & 2493 & 22.6 & 14209 & Siana \\
& & F435W & 3227 & 46.4 & 14037 & Lotz \tablenotemark{1}\\
& & F606W & 4496 & 25.8 & 14037 & Lotz \tablenotemark{1}\\
& & F814W & 6039 & 120.2 & 14037 & Lotz \tablenotemark{1}\\
Abell 370 & 0.38 & F336W & 2446 & 22.2 & 14209 & Siana \\
& & F435W & 3164 & 51.4 & 14038 & Lotz \tablenotemark{1}\\
& & F606W & 4407 & 25.3 & 14038 & Lotz \tablenotemark{1}\\
& & F814W & 5920 & 130.6 & 14038 & Lotz \tablenotemark{1}\\
MACS J0416$-$2403 & 0.40 & F336W & 2407 & 22.2 & 14209 & Siana \\
& & F435W & 3116 & 54.5 & 13496 & Lotz \tablenotemark{1}\\
& & F606W & 4341 & 33.5 & 13496 & Lotz \tablenotemark{1}\\
& & F814W & 5831 & 129.9 & 13496 & Lotz \tablenotemark{1}\\
MACS J0717.5+3745 & 0.54 & F390W & 2516 & 4.9 & 12103 & Postman \tablenotemark{2}\\
& & F555W & 3580 & 8.9 & 9722 & Ebeling \\
& & F606W & 3910 & 27.0 & 13498 & Lotz \tablenotemark{1}\\
& & F814W & 5252 & 114.6 & 13498 & Lotz \tablenotemark{1}\\
MACS J1149.5+2223 & 0.54 & F390W & 2516 & 4.9 & 12068 & Postman \tablenotemark{2}\\
& & F555W & 3580 & 9.0 & 9722 & Ebeling \\
& & F606W & 3910 & 24.8 & 13504 & Lotz \tablenotemark{1}\\
& & F814W & 5252 & 104.2 & 13504 & Lotz \tablenotemark{1}\\
MACS J2129.4$-$0741 & 0.59 & F390W & 2453 & 4.6 & 10493 & Gal-Yam \\
& & F555W & 3491 & 8.9 & 9722 & Ebeling \\
& & F814W & 5119 & 13.4 & 10493 & Gal-Yam \\
& & F125W & 7862 & 2.4 & 12100 & Postman \tablenotemark{2}\\
& & F160W & 10063 & 5.0 & 12100 & Postman \tablenotemark{2}\\
SDSS 1004+4112 & 0.68 & F435W & 2589 & 13.4 & 10509 & Kochanek \\
& & F555W & 3304 & 8.0 & 10509 & Kochanek \\
& & F814W & 4845 & 5.4 & 10509 & Kochanek \\
MACS J0744.8+3927 & 0.69 & F435W & 2574 & 4.0 & 12067 & Postman \tablenotemark{2}\\
& & F555W & 3284 & 17.8 & 12067 & Postman \tablenotemark{2}\\
& & F814W & 4817 & 25.8 & 10493 & Gal-Yam \\
& & F125W & 7396 & 2.5 & 12067 & Postman \tablenotemark{2}\\
& & F160W & 9467 & 3.1 & 12067 & Postman \tablenotemark{2}\\
\enddata
\tablenotetext{1}{\cite{lotz2017}}
\tablenotetext{2}{\cite{postman2012}}
\end{deluxetable*}
\begin{deluxetable*}{ccccccc}
\tablenum{1}
\tablecaption{Summary of Observational Data (Continued) }
\tablewidth{0pt}
\tablehead{
\colhead{Cluster} & \colhead{Redshift} & \colhead{Filters} & \colhead{Rest-Frame Central} \vspace{-0.2cm} & \colhead{Total Exposure} & \colhead{Proposal ID} & \colhead{PI} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{Wavelength (\AA)} & \colhead{Time (ks)} & \colhead{} & \colhead{} \\
}
\startdata
RCS 2327-0204 & 0.70 & F435W & 2559 & 4.2 & 10846 & Gladders \\
& & F814W & 4788 & 5.3 & 10846 & Gladders \\
& & F125W & 7353 & 4.6 & 13177 & Bradac \tablenotemark{3}\\
& & F160W & 9412 & 4.6 & 13177 & Bradac \tablenotemark{3}\\
CL 1226.9+3332 & 0.89 & F475W & 2513 & 4.4 & 12791 & Postman \tablenotemark{2}\\
& & F606W & 3206 & 32.0 & 9033 & Ebeling \\
& & F125W & 6614 & 2.5 & 12791 & Postman \tablenotemark{2}\\
& & F160W & 7407 & 2.3 & 12791 & Postman \tablenotemark{2}\\
SPT-CL 2011$-$5228 & 0.96 & F475W & 2423 & 5.4 & 14630 & Collett \\
& & F606W & 3092 & 5.4 & 14630 & Collett \\
& & F125W & 6378 & 1.4 & 14630 & Collett \\
& & F140W & 7143 & 1.4 & 14630 & Collett \\
\enddata
\tablenotetext{2}{\cite{postman2012}}
\tablenotetext{3}{\cite{bradac2014}}
\end{deluxetable*}
In order to perform a cluster-wide analysis of the upturn in the red sequence population (and not just of the brighter luminous red galaxies -- LRGs, e.g. \citealt{lecras2016} -- or brightest cluster galaxies -- BCGs, e.g. \citealt{Brown1998b,brown2000a,Brown2003}) at high redshifts, one requires rest-frame UV data of sufficient depth below $\sim$2500\AA\ and down to the Lyman limit, the wavelength regime most sensitive to the upturn. At $z < 1$ these bandpasses still reside in the vacuum ultraviolet and require deep space-based observations. In this paper we measure the evolution
of the UV upturn flux at $\sim 2400$ \AA, where the
main contribution to the observed flux is still
dominated by the UV upturn sources (from the full
UV spectral energy distributions we have derived in
our previous work -- \citealt{ali2018a,ali2018b,ali2019,phillipps2020}).
\cite{lecras2016} carry out a similar measurement
using a series of spectral indices at rest-frame
wavelengths $\sim 2000 - 3000$ \AA, i.e., within the
near ultraviolet regime we probe here.
We have identified 12 clusters in the Hubble Legacy
Archive with data in filters $F336W$, $F390W$, $F435W$
and $F475W$ probing a rest-frame bandpass centered at
$\sim 2400$\AA\ out to $z \sim 1$. The clusters
are identified in Table~\ref{table1}, where we
detail all filters used for each cluster, their rest-frame central wavelengths, the respective programs and the total exposure times. The clusters cover the redshift range from $z=0.31$ (Abell 2744) to $z=0.96$ (CL2011). All images were retrieved as drizzled and fully processed data on which photometry could be performed. Where multiple observations exist in a given band, the frames were combined using $IRAF$'s $imcombine$ task to produce the deepest possible images.
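Schematically, the combination step averages the aligned frames into one deeper image; a minimal Python sketch of such an operation is given below (the file names are hypothetical, the frames are assumed to be registered and of equal shape, and the actual $imcombine$ run may weight or reject pixels differently).
\begin{verbatim}
# Sketch: combine aligned exposures into a single deeper image,
# roughly mimicking IRAF imcombine in 'average' mode.
import numpy as np
from astropy.io import fits

def combine_frames(paths, out_path):
    stack = np.array([fits.getdata(p).astype(float) for p in paths])
    combined = np.nanmean(stack, axis=0)   # average of the frames
    header = fits.getheader(paths[0])      # carry the WCS of the first frame
    fits.writeto(out_path, combined, header, overwrite=True)

# combine_frames(["f336w_1.fits", "f336w_2.fits"], "f336w_deep.fits")
\end{verbatim}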
\begin{figure*}
\centering
{\includegraphics[width=0.325\textwidth]{g-r_a2744.png}}
{\includegraphics[width=0.325\textwidth]{g-r_as1063.png}}
{\includegraphics[width=0.325\textwidth]{g-r_a370.png}}
{\includegraphics[width=0.325\textwidth]{g-r_m0416.png}}
{\includegraphics[width=0.325\textwidth]{B-V_m1149.png}}
{\includegraphics[width=0.325\textwidth]{B-V_m0717.png}}
{\includegraphics[width=0.325\textwidth]{i-z_m2129.png}}
{\includegraphics[width=0.325\textwidth]{i-z_m0744.png}}
{\includegraphics[width=0.325\textwidth]{i-z_rcs2327.png}}
{\includegraphics[width=0.325\textwidth]{r-i_cl1226.png}}
{\includegraphics[width=0.325\textwidth]{r-i_cl2011.png}}
\caption{Optical color-magnitude diagrams for 11 of the clusters analysed in this paper. The red sequence galaxies are denoted with the red filled circles within the dashed lines and have photometric uncertainties of $<0.05$ magnitudes in their optical colors. The dashed lines show the selection region for quiescent cluster ETGs (as described in the text, within $\pm 0.1$ mag. of the mean ridge line for the red sequence). SDSS1004 did not have any equivalent optical (rest-frame) data to construct a corresponding CMD.}
\label{fig:1}
\end{figure*}
This provides us with a set of clusters at multiple
redshift intervals, thus allowing for a study of the
evolution of the upturn with lookback time;
particularly important as most theoretical models predict a decline in the strength of the upturn at higher redshifts (except that of \citealt{Han2007}, in which the UV upturn color remains nearly constant with redshift out to $z \sim 6$).
Pinpointing the specific epoch at which this
decline occurs can allow us to constrain the key
model parameters, such as the He-enhancement $Y$
and the age of formation of the stellar population.
\begin{figure*}
\centering
{\includegraphics[width=0.325\textwidth]{u-r_a2744.png}}
{\includegraphics[width=0.325\textwidth]{u-r_as1063.png}}
{\includegraphics[width=0.325\textwidth]{u-r_a370.png}}
{\includegraphics[width=0.325\textwidth]{u-r_m0416.png}}
{\includegraphics[width=0.325\textwidth]{u-v_m1149.png}}
{\includegraphics[width=0.325\textwidth]{u-v_m0717.png}}
{\includegraphics[width=0.325\textwidth]{u-r_m2129.png}}
{\includegraphics[width=0.325\textwidth]{u-g_sdss1004.png}}
{\includegraphics[width=0.325\textwidth]{u-g_m0744.png}}
{\includegraphics[width=0.325\textwidth]{u-r_cl1226.png}}
{\includegraphics[width=0.325\textwidth]{u-r_cl2011.png}}
\caption{U-band (rest-frame) color-magnitude diagrams for the clusters analysed in this paper. The red sequence galaxies are denoted by the red filled circles within the dashed lines and have photometric uncertainties of $<0.05$ magnitudes in their optical colors. RCS2327 did not have any $u$-band (rest-frame) data available to construct a corresponding CMD.}
\label{fig:2}
\end{figure*}
As in our previous work we select a sample of passive
cluster ETGs using red sequence galaxies (spectroscopic samples -- e.g., \citealt{rozo2015,depropris2016} -- confirm that red sequence galaxies have a $>95\%$ likelihood of being quiescent cluster members, usually with morphologies consistent with early-type systems). The initial red sequence in all clusters is selected from an optical color: $F606W-F814W$ for Abell 2744/Abell S1063/Abell 370/MACSJ0416/MACSJ1149/MACSJ0717, $F555W-F814W$ for SDSS1004, $F125W-F160W$ for RCS2327/MACSJ0744 and $F125W-F140W$ for CL1226/CL2011. Optical photometry for all clusters was performed using Source Extractor (\citealt{bertin1996}) to measure a \cite{Kron1980} style total magnitude and an aperture magnitude within a fixed 10 kpc diameter for each galaxy in the image. The optical color-magnitude diagrams (CMDs) and the identification of red sequence galaxies for all clusters are shown in Fig. \ref{fig:1} (apart from SDSS1004, which did not have data in multiple optical bands). All clusters exhibit a tight red sequence in their CMDs, with a scatter of roughly $\pm$0.1 mags, as is typical of ETGs. The red sequence is traced up to the point where the likelihood of contamination from non-cluster members and dusty foreground galaxies increases significantly, as seen in Fig. \ref{fig:1}. The two dashed lines around the red sequence bracket the likely early-type cluster members we analyze in this paper.
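Since the aperture is fixed in physical size, its angular radius must be recomputed at each cluster redshift. A simplified sketch of the aperture measurement is given below, with \texttt{photutils} standing in for the actual pipeline; the zeropoint argument and function names are assumptions of the sketch.
\begin{verbatim}
# Sketch: AB aperture magnitude within a fixed 10 kpc diameter,
# converting the metric aperture to pixels at the cluster redshift.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM
from photutils.aperture import CircularAperture, aperture_photometry

cosmo = FlatLambdaCDM(H0=67, Om0=0.3)

def aperture_mag(image, x, y, z, pixscale_arcsec, zeropoint):
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to_value(
        u.kpc / u.arcsec)
    radius_pix = (5.0 / kpc_per_arcsec) / pixscale_arcsec  # 5 kpc radius
    aper = CircularAperture([(x, y)], r=radius_pix)
    flux = aperture_photometry(image, aper)["aperture_sum"][0]
    return zeropoint - 2.5 * np.log10(flux)
\end{verbatim}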
We then further constrained our sample to exclude
objects with even low levels of possible residual star
formation, by deriving red sequences in (rest-frame)
$u$-band CMDs for the optically selected red sequence
galaxies, and excluding objects bluer than the derived
red sequence in these $u$-band colors. The use of
multiple colors, particularly the $u$-band allows
for the selection of a truly passive sample of ETGs
(\citealt{ali2019,phillipps2020}). The standard $u$-band is very sensitive to even low levels of star formation, while contamination from the putative UV upturn sources is relatively small, allowing us to discriminate against red sequence galaxies with possible on-going low-level star formation. The $u-r$ vs. $r$
CMDs for all clusters are shown in Fig. \ref{fig:2}
(apart from RCS2327, which had no rest-frame $u$-band
data available). Finally, we inspect by eye all galaxies that survived both color cuts and reject any with non-early-type morphologies, in order to maintain a purely passive sample of galaxies. Several of these clusters are common to our earlier work \citep{depropris2016}, in which we measured the Sersic indices of red sequence galaxies and showed that they are consistent with those of local cluster ETGs (at least to $z=0.8$). This three-fold selection method allows us to discriminate truly quiescent early-type cluster members from all
other interlopers, thus presenting us with a sample of
galaxies for which the UV SED will almost entirely be
dominated by the upturn, if present.
The rest-frame $\sim2400-optical$ color in all
clusters can then be used to probe the near-UV
output of the selected galaxies and gauge its
evolution. While this color is clearly not as
efficient for analysing the upturn as the classical
$FUV-optical$ colors used in our previous studies
\citep{ali2018a,ali2018b,ali2018c} and in other works,
it still retains reasonable sensitivity to flux from
sources of the upturn, as will be explored later. The
UV photometry was carried out using $IRAF$'s aperture
photometry package, centred at the RA and DEC of each
galaxy's optical position, within a metric 10 kpc
diameter aperture after properly aligning the UV and
optical images. This ensures the same physical
aperture sizes for the measurement of $UV-optical$
colors across all clusters, irrespective of redshift.
A 5$\sigma$ signal-to-noise photometric cut was then made and each object checked by eye to confirm its detection. In all clusters we detect galaxies in the UV down to about $2\sim3$ magnitudes fainter than $M^*$ in the luminosity
function, as seen in Fig. \ref{fig:3}. However, for
CL1226/CL2011, due to the large distance and
relatively limited exposure times, we directly detect
only the brightest galaxies in these clusters (above
$M^*$). To overcome this limitation, for both clusters
we stacked all galaxies per magnitude bin in the UV at their optical positions between $21<M_{F125W}(r)<24$ (i.e. 3 bins), where very few
galaxies are directly detected. For each magnitude
bin the stacking procedure yielded at least a
$5\sigma$ detection, thus allowing us to push beyond
the direct detection limit and calculate the average
$UV-optical$ colors of the galaxies beyond $M^*$ in
these higher redshift clusters and compare them to
their lower redshift counterparts.
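Schematically, the stacking amounts to averaging background-subtracted UV cutouts placed at the optical positions of the galaxies in each magnitude bin and testing the significance of the mean flux; a simplified sketch follows (the noise estimate here is cruder than in the actual analysis).
\begin{verbatim}
# Sketch: stack UV cutouts of galaxies in one optical magnitude bin
# and test whether the mean flux is recovered at >= 5 sigma.
import numpy as np

def stack_bin(cutouts, r_aper_pix):
    # cutouts: 2D arrays of equal size, centred on the optical positions
    stack = np.nanmean(np.array(cutouts), axis=0)
    ny, nx = stack.shape
    yy, xx = np.mgrid[:ny, :nx]
    in_aper = (xx - nx // 2) ** 2 + (yy - ny // 2) ** 2 <= r_aper_pix ** 2
    flux = stack[in_aper].sum()
    # crude noise estimate: off-aperture scatter scaled to the aperture area
    sigma = np.nanstd(stack[~in_aper]) * np.sqrt(in_aper.sum())
    return flux, flux / sigma   # stacked flux and its significance
\end{verbatim}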
Furthermore, we also consider potential contamination of our sample by dust-reddened green valley or blue cloud galaxies. In a previous study, \cite{phillipps2020} sought to define truly passive galaxies in the GAMA survey that exhibit a UV upturn by using a multi-band color cut, similar to that employed in this paper, in order to isolate even low-level star-forming galaxies. For the galaxies that survived the initial optical/$u$-band cuts, a further cut was made in the WISE $W2-W3$ color, which is a completely independent measure of star formation (essentially from re-processed UV light) and is minimally affected by internal dust extinction. This color allows for the identification of potential star-forming galaxies that may have been dust-reddened onto the red sequence and mistaken for passive systems (\citealt{fraser2016}). Almost all galaxies that survived the initial selection but not the IR cut had $NUV-r\lesssim5$. Similar $NUV-r$ cuts have also been used in other studies to separate potential green valley galaxies and residual star-formers from true ETGs (e.g. \citealt{salim2014,crossett2014}). As such, we check all clusters in our sample to identify the number of red sequence members with colors bluer than $NUV-r<5$ (using a simple SSP, as described in the next sub-section, to transform GALEX $NUV-r=5$ to the equivalent rest-frame $NUV-optical$ color of each cluster). We find that the fraction of galaxies meeting this criterion is roughly $\sim5\%$ or less in all clusters, and even then most of these blue galaxies are not significantly bluer than $NUV-r=5$ (within approximately 0.5 mags). As such, we do not explicitly reject them from our sample, but these galaxies are identified later (in box plots) as statistical outliers in the overall UV color distribution of clusters.
\subsection{Extinction and k-corrections}
\label{sec:kcorr}
\begin{deluxetable}{ccccc}
\tablenum{2}
\tablecaption{Extinction and k-corrections for all clusters. K-corrections are used for Fig. \ref{fig:yeps} and \ref{fig:chem}.\label{table2}}
\tablewidth{0pt}
\tablehead{\colhead{Cluster} & \colhead{Observed} \vspace{-0.2cm} & \colhead{Rest-frame} & \colhead{Extinction} & \colhead{K-correction}\\ \colhead{} & \colhead{Color} & \colhead{Color} & \colhead{Correction} & \colhead{}}
\startdata
CL2011 & F475-F125 & 2400-r & 0.168 & 0\\
CL1226 & F475-F125 & 2500-r & 0.049 & 0.31\\
RCS2327 & F435-F814 & 2550-g & 0.162 & 0.93\\
MACSJ0744 & F435-F814 & 2550-g & 0.12 & 0.96\\
SDSS1004 & F435-F814 & 2550-g & 0.038 & 0.99\\
MACSJ2129 & F390-F814 & 2450-g & 0.18 & 0.58\\
MACSJ1149 & F390-F814 & 2500-V & 0.054 & 0.6\\
MACSJ0717 & F390-F814 & 2500-V & 0.182 & 0.6\\
MACSJ0416 & F336-F814 & 2400-r & 0.119 & -0.12\\
Abell370 & F336-F814 & 2450-r & 0.095 & -0.02\\
AbellS1063 & F336-F814 & 2500-r & 0.035 & 0.1\\
Abell2744 & F336-F814 & 2550-r & 0.039 & 0.2\\
\enddata
\end{deluxetable}
\begin{figure}
\includegraphics[width=0.49\textwidth]{SED.png}
\caption{IUE far and near-UV SEDs of the giant elliptical galaxies NGC1139 and NGC4649 from \protect\cite{chavez2011}. The purple and blue curves show the observed UV upturn in their spectra while the red curve is a theoretical spectrum from a stellar population template of solar metallicity and age of 10 Gyrs from \protect\cite{chavez2009}, which represents an ETG with no UV upturn component. Plotted on top are the filter response curves of three filters used for the near-UV observations of the clusters studied in this paper, to show their coverage in the upturn dominated region of the spectra in ETGs.}
\label{fig:sed}
\end{figure}
Extinction corrections are taken from the NASA/IPAC database\footnote{https://ned.ipac.caltech.edu}, which uses the reddening maps of \cite{schlafly2011} and extrapolates to the UV following the Milky Way extinction law of \cite{Cardelli1989}. For simplicity, we assume all galaxies within a cluster to have the same line-of-sight extinction; given the distance to each cluster, there should be little variation in extinction from galaxy to galaxy. The extinction corrections made to the $UV-optical$ colors for all clusters are given in Table \ref{table2}. We also assume that all red sequence members have little to no internal dust content, as is the case for ETGs in general and as demonstrated through model fitting of Coma red sequence members in \cite{ali2018a}, where $A_{V}$ was found to be $<0.1$ in most cases. In agreement with this assumption, a measurement for nearby ETGs from the SDSS returns a mean internal extinction $A_z$ of 0 \citep{Barber2007}. Observations of several nearby ETGs from {\it Astro UIT} and {\it Astro HUT} also imply very low internal extinction ($E(B-V) < 0.03$ -- \citealt{Ferguson1993}) in the $FUV$ bandpass, while \cite{Rettura2006,Rettura2011} also find internal extinction values $E(B-V) < 0.05$ (and consistent with zero) for the majority of ETGs in clusters up to $z=1.3$.
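For illustration, the correction to a $UV-optical$ color for a given line-of-sight $E(B-V)$ under the \cite{Cardelli1989} law can be evaluated as below; in practice the corrections were taken directly from NED, and the \texttt{extinction} package and example wavelengths are our choices rather than part of the original pipeline.
\begin{verbatim}
# Sketch: Galactic extinction correction to a UV-optical color
# from E(B-V), using the Cardelli et al. (1989) law.
import numpy as np
import extinction

def color_extinction(ebv, wave_uv, wave_opt, r_v=3.1):
    # correction (mag) to subtract from the observed UV-optical color
    a_v = r_v * ebv
    a_uv, a_opt = extinction.ccm89(
        np.array([wave_uv, wave_opt], dtype=float), a_v, r_v)
    return a_uv - a_opt

# e.g. an F336W-F814W sightline with E(B-V) = 0.01:
# print(color_extinction(0.01, 3360.0, 8040.0))
\end{verbatim}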
The clusters we use in this study were carefully selected such that despite their varying redshifts, they have observations in filters (of increasing wavelength at higher cluster redshifts) that probe roughly the same wavelength range in the rest-frame UV (centred at $\sim$2400\AA). This was done in order to avoid the large k-corrections that become necessary when using the same filter to probe the upturn across large redshift ranges. It is particularly important to minimise the magnitude of the k-corrections in the UV, where the spectra of ETGs are not as rigorously studied, and the stochastic nature of the upturn (which causes a large scatter in the UV colors compared to optical) introduces uncertainty to the k-corrections applied to each galaxy without a priori knowledge of their spectra, which we of course do not have.
Despite our careful selection of clusters, it is still necessary to perform some k-corrections to the data. This k-correction is driven by two key factors - 1) the difference in the rest-frame wavelengths covered by the filters; and 2) the difference in shape of the filters used to observe the rest-frame UV. These two elements are illustrated visually in Fig. \ref{fig:sed}, where the different filters used do not exactly align in the wavelength regime and have varying shapes in their response curves. The k-correction term we apply seeks to correct for these two factors. However, given that the UV data for all clusters are centred between $2400\sim2500$\AA, k-corrections only need to account for about a $100$\AA\ shift at maximum in the wavelength regime, so the correction is mostly dominated by the varying shape of the filters (i.e. bandpass correction).
To perform this correction, we use the YEPS spectrophotometric models of \cite{chung2017}, which seek to replicate the UV output of ETGs through He-enhancement (any model that reproduces the observed UV SED could have been used instead, yielding similar results). We take two SSPs, both of which have $Z$=\(Z_\odot\), $z_f=4$ and an age of 12 Gyrs, but with $Y_{ini}$\footnote{The helium abundance $Y$ of a stellar population is related to the initial helium abundance $Y_{ini}$ and the metallicity $Z$ through $Y = (\Delta Y/\Delta Z)\,Z + Y_{ini}$, where $\Delta Y/\Delta Z$ is the galactic helium enrichment parameter, assumed to be 2.0.}$=0.23$ (no upturn) and $Y_{ini}=0.43$ (maximum upturn). The SSPs are shifted to the cluster redshift, k-corrections are calculated in the observed colors from both SSPs, and the two values are averaged. An average of the two SSPs is used because the vast majority of our observed galaxies have colors that lie between those of these two SSPs (see Fig. \ref{fig:chem} for the color evolution of the SSPs). Due to the stochastic nature of the upturn and the large scatter in the UV colors, using a single k-correction value for all galaxies in a cluster introduces an uncertainty into the results, as not all galaxies will be fit by the same SSP; this cannot be mitigated without detailed knowledge of the UV SED of each galaxy. However, for any given galaxy the k-correction term has an error of approximately $\pm 0.2$ mags, since the k-corrections calculated from the $Y_{ini}=0.23$ (no upturn) and $Y_{ini}=0.43$ (maximum upturn) SSPs, which bracket the possible UV colors of the galaxies, vary by only $\sim \pm 0.2$ mags from our adopted values.
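Schematically, each adopted k-correction is the difference between a model SED's synthetic color through the cluster's observed filter pair and through CL2011's pair, averaged over the two bracketing SSPs; the sketch below uses hypothetical SED and filter-curve arrays and makes no claim to reproduce the YEPS pipeline in detail.
\begin{verbatim}
# Sketch: bandpass/k-correction from a model SED, to be averaged over
# the Y_ini = 0.23 and Y_ini = 0.43 SSPs bracketing the observed colors.
import numpy as np

def synth_ab_mag(wave, flam, filt_wave, filt_resp):
    # synthetic AB magnitude of an SED (erg/s/cm^2/A) through a filter
    resp = np.interp(wave, filt_wave, filt_resp, left=0.0, right=0.0)
    fnu = flam * wave**2 / 2.998e18           # f_lambda -> f_nu
    num = np.trapz(fnu * resp / wave, wave)   # photon-counting convention
    den = np.trapz(resp / wave, wave)
    return -2.5 * np.log10(num / den) - 48.6

def k_correction(sed, filters_obs, filters_ref):
    # color through the cluster's filters minus color through CL2011's;
    # each filter is a (wavelength, response) pair of arrays
    wave, flam = sed
    c_obs = (synth_ab_mag(wave, flam, *filters_obs[0])
             - synth_ab_mag(wave, flam, *filters_obs[1]))
    c_ref = (synth_ab_mag(wave, flam, *filters_ref[0])
             - synth_ab_mag(wave, flam, *filters_ref[1]))
    return c_obs - c_ref
\end{verbatim}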
The k-corrections as described above are particularly important for Fig. \ref{fig:yeps}, where we directly compare the evolution of the rest-frame $UV-optical$ colors of cluster galaxies across redshift (discussed in detail in later sections). We apply the k-correction to all clusters to bring them in line with CL2011's observed $F475W-F125W$ at $z=0.96$ (rest-frame $2400-r$), because the evolution of the models is also plotted in the observed (redshifted) bands of CL2011. To elaborate further on the error in the k-correction term, we can take for example MACSJ1149, where the observed color is $F390W-F814W$ -- roughly $2500-V$ in the rest frame. The k-correction calculated (as described earlier) to bring this color to the rest-frame $2400-r$ of CL2011 is 0.6 mags. Using the $Y_{ini}=0.23$ SSP the k-correction is 0.74 mags, while the $Y_{ini}=0.43$ SSP yields 0.45 mags, both within 0.2 mags of our adopted correction of 0.6.
The k-corrections as applied to all clusters in Fig. \ref{fig:yeps} and \ref{fig:chem} are shown in Table \ref{table2}. The clusters for which the rest-frame color is closer to CL2011 have smaller k-corrections as expected. No age/evolutionary corrections are made as these figures seek to analyse the age evolution of the UV colors in cluster galaxies.
A similar k-correction is also applied in each of the redshift bins of Fig. \ref{fig:3} (detailed in the next section). In each sub-plot, all clusters are k-corrected to the redshift of one selected cluster to account for the small redshift differences between clusters in each redshift bin. For example, in the top left sub-plot (which shows the colors of Abell 2744 at $z=0.31$ and Abell S1063 at $z=0.35$), a k-correction is applied to the Abell S1063 galaxies to bring them in line with the rest-frame colors of Abell 2744 (accounting for the redshift difference of 0.04). Similarly, for the rest of the sub-plots the reference clusters to which the others were k-corrected are MACSJ0416, MACSJ1149, RCS2327 and CL2011. It should be noted that in each case the k-correction is rather small ($<0.2$ mags in most cases), as the redshift difference between the clusters in each bin is 0.05 or less and the clusters in each bin are observed using the same filters.
\section{Results}
\subsection{Near-UV color-magnitude diagrams}
We present in Fig. \ref{fig:3} the rest-frame $2400-r$ or $2500-g$ / $2500-V$ colors for selected cluster ETGs (as previously described) in 5 redshift bins: Abell 2744/Abell S1063 at $z=0.31\sim0.35$ (top left), Abell 370/MACSJ0416 at $z=0.38\sim0.40$ (top right), MACSJ1149/MACSJ0717/MACSJ2129 at $z=0.55\sim0.59$ (middle left), SDSS1004/RCS2327/MACSJ0744 at $z=0.68\sim0.7$ (middle right) and CL1226/CL2011 at $z=0.89\sim0.96$ (bottom). For these redshift bins the rest-frame color lies in the observed $F336W-F814W$ (first two bins), $F390W-F814W$, $F435W-F814W$ and $F475W-F125W$, respectively. For each set of clusters, also plotted is the prediction for the same color from the YEPS (Yonsei Evolutionary Population Synthesis) infall model of \cite{chung2017} with $Z$=\(Z_\odot\), 0.5\(Z_\odot\), 2\(Z_\odot\), a fixed redshift of formation $z_f=4$, cosmological helium ($Y_{ini}=0.23$) and a delta burst star-formation history (i.e. all stars formed in a single instantaneous burst). These models represent the UV colors of galaxies with no upturn component (and standard composition): the model colors are purely driven by age and metallicity effects, and are comparable to results from other population synthesis models that do not treat the UV upturn and reproduce the typical optical and infrared colors of ETGs even at these redshifts -- e.g., see \cite{mei2006}.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{2500-r_a2744_as1063_infall.png}
\includegraphics[width=0.49\textwidth]{2400-r_a370_m0416_infall.png}
\includegraphics[width=0.49\textwidth]{2500-V_m1149_m0717_infall.png}
\includegraphics[width=0.49\textwidth]{2500-g_sdss1004_rcs2327_macsj0744_infall.png}
\includegraphics[width=0.49\textwidth]{2500-r_cl2011_cl1226_infall.png}
\caption{Rest-frame $2400\sim2500-optical$ CMDs of clusters in five redshift bins between $0.3<z<1$, as denoted in each figure. The vertical black line represents the respective cluster $M^{*}$ and the shaded region denotes the error in the measurement, taken and adjusted from \protect\cite{connor2017} and \protect\cite{depropris2013}. The horizontal dotted lines in each plot are the colors given by an infall model from \protect\cite{chung2017} with cosmological He ($Y_{ini}=0.23$) and $z_f$=4, but varying metallicities at $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\).}
\label{fig:3}
\end{figure*}
While the upturn has been classically analysed using photometry at shorter wavelengths, particularly the $FUV$, previous studies (\citealt{schombert2016}; \citealt{phillipps2020}) have shown that even the $NUV$ flux of quiescent ETGs is mostly dominated by the upturn. Given that the UV wavelength probed in this study is extremely close to the classical GALEX $NUV$ waveband (centred at 2250\AA), we expect the near-UV bands (centred at $\sim$2400\AA) to also contain significant flux from the upturn sub-population, albeit with a stronger contribution from the short-wavelength tail of the blackbody emission of the majority main sequence/red giant branch stellar population. To visualise this, we plot in Fig. \ref{fig:sed} the UV SEDs below 3000\AA\ of the giant elliptical galaxies NGC1139 and NGC4649 (\citealt{chavez2011}), as observed by IUE. These are galaxies known for a strong UV upturn. Also plotted for comparison is the SED of a 10 Gyr old SSP with solar metallicity from \cite{chavez2009}, which acts as a representation of ETGs with no upturn component (such a model fits well the optical and infrared colors of normal ETGs in the local universe and even their evolution to high redshift). Superimposed on the SEDs are the response curves of the filters used to make the UV observations of the clusters analysed in this paper. From the figure we can see that at $>3000$\AA\ the SEDs of the observed ETGs and the reference model are nearly identical, while at $<3000$\AA\ the upturn starts to contribute and the UV flux of the observed galaxies rises with decreasing wavelength down to the Lyman limit. Conversely, the UV SED of the theoretical spectrum is nearly flat with almost no flux, as would be expected from a purely quiescent stellar population. Even in the near-UV region probed in this study, there is $\sim5-6$ times more flux from a galaxy with an upturn than from one without. As such, we expect this wavelength region to still be reasonably sensitive to the upturn, while remaining vigilant about the contribution from the standard stellar population of ETGs in our analysis.
In Fig. \ref{fig:3} we see in the first four redshift bins that the colors of a majority of our observed galaxies cannot entirely be reproduced with composite
stellar populations (CSPs) of solar metallicity or higher, as is otherwise expected for standard ETGs of this luminosity. A large number of the brightest galaxies in our sample (around or above $M^*$) have UV colors that can only be reproduced with metallicities below solar, which is implausible given that the brightest cluster ETGs are known to have solar or super-solar abundances from their optical spectra/colors. Even lower mass galaxies that are several magnitudes below $M^*$, and are naturally expected to be bluer because of their lower metallicities \citep{gallazzi2005,gallazzi2014}, are much bluer than the half-solar CSP in this near-UV color. In both instances of high and low mass galaxies, this suggests that metallicity (and age) alone cannot account for the extremely blue UV colors as observed in our galaxies. Instead there needs to exist a `second parameter' that drives the bluer UV colors in these otherwise quiescent galaxies. Interestingly however, in the final highest redshift bin, most galaxies across the luminosity function seem to be reasonably fit with 0.5\(Z_\odot\) $< Z <$ 2\(Z_\odot\) models with no upturn. Even in the fainter galaxies, including stacks (containing the average flux of galaxies with optical luminosities well below $M^*$), the colors are largely consistent with models down to $Z\sim0.5$\(Z_\odot\), as would be reasonably expected from the mass-metallicity relation (\citealt{gallazzi2005,gallazzi2014}). Virtually no galaxy brighter than $M^*$ (and very few overall) in these two $z\sim 0.9$ clusters has $2400-r \lesssim 5$ while most galaxies in the lower redshift clusters are this blue or even bluer. Any such object (if it existed) would be easily detected in these data to well below the $M^*$ luminosity. Similarly, if such blue objects existed they would drive the colors of galaxy stacks in these two higher redshift clusters to the blue, but this is not observed. This suggests that at this redshift the upturn is significantly weakened, in agreement with the findings of \cite{lecras2016}, using near-UV spectral indices. We cannot however exclude that a small upturn component may still be present for the brighter, more He-rich galaxies which formed earlier (as in Coma -- \citealt{ali2018a}) and as recently discovered by \cite{lonoce2020} for one $z=1.4$ galaxy in the COSMOS field (although this is a 2$\sigma$ detection based on a single index while all other indices are consistent with no upturn), but our data in these two $z \sim 1$ clusters are consistent with a model where the UV upturn component clearly observed at low redshift has now largely if not totally disappeared.
\subsection{Intrinsic scatter}
\begin{figure}
\includegraphics[width=0.49\textwidth]{scatter.png}
\caption{Observed scatters about the near-UV color magnitude relations of Fig \ref{fig:3} vs. redshift of cluster. Also plotted are the predicted near-UV scatters for each cluster assuming a pure age spread around the color-magnitude relation from the optical data for $z_f=4$ (green squares) and $z_f=10$ (pink diamonds). Also shown is a linear fit to the data and the 95\% confidence and prediction intervals around the fit.}
\label{fig:scatter}
\end{figure}
Another parameter of the CMDs that can be used to probe the strength of the upturn is the intrinsic scatter in the UV colors. The scatter in the optical colors of cluster ETGs is generally found to be extremely tight, of order $\sim0.1$ mags in $g-r$, due to their similar ages and metallicities, which lead to little variance in their SEDs. However, the same galaxies exhibit several times larger scatter in their $UV-optical$ colors, e.g. $\sim1.5$ mags in $FUV-V$ (e.g. \citealt{ali2018a}). While we expect the scatter to naturally increase at shorter wavelengths, this increase can be predicted from the optical colors and compared with observations to check whether the UV scatter is attributable to simple age/metallicity effects, as is the case in the optical regime. We use the Hyper-Fit software (\citealt{robotham2015}) to measure the intrinsic scatter in the optical and UV CMDs (as in Fig. \ref{fig:1} and \ref{fig:3}) for each cluster. The code assumes a Gaussian distribution in the data and no covariance between the errors in magnitude and color, fits the best representative linear model to the data, and estimates the intrinsic scatter about the model. In Fig. \ref{fig:scatter} we plot the observed scatter in the near-UV color against redshift for all of our clusters (apart from CL1226/CL2011, for which we only individually detect a small subset of the brightest galaxies). Qualitatively, the scatter in the colors seems to show a general decrease with increasing redshift -- the largest decrease being seen around the $z=0.7$ clusters -- albeit with significant differences across the redshift range, possibly indicating cluster-to-cluster variations in the degree of helium enhancement or formation epoch. This variation is manifested statistically when we plot a linear fit to the data and the 95\% confidence/prediction intervals of the fit. The fit shows a general decreasing trend with redshift, but the confidence/prediction intervals are very wide, suggesting a large cluster-to-cluster variation in the scatter. Despite this large variation, the scatter still shows a gradual decrease with redshift.
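For concreteness, a minimal one-dimensional analogue of such a fit -- a line with Gaussian intrinsic scatter estimated by maximum likelihood -- is sketched below; unlike the actual Hyper-Fit run, this simplified version ignores the errors on the magnitudes.
\begin{verbatim}
# Sketch: maximum-likelihood line fit with intrinsic Gaussian scatter,
# a simplified stand-in for the Hyper-Fit measurement.
import numpy as np
from scipy.optimize import minimize

def fit_cmr(mag, color, color_err):
    def nll(params):
        slope, intercept, log_sig = params
        var = color_err**2 + np.exp(log_sig)**2  # measurement + intrinsic
        resid = color - (slope * mag + intercept)
        return 0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

    res = minimize(nll, x0=[0.0, np.median(color), np.log(0.3)],
                   method="Nelder-Mead")
    slope, intercept, log_sig = res.x
    return slope, intercept, np.exp(log_sig)  # intrinsic scatter (mag)
\end{verbatim}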
We then use a model from \cite{conroy2009}, with parameters as described above, to estimate the scatter in the optical colors, assuming that the entire intrinsic scatter around the mass-metallicity relation is due to age; from this we derive the expected scatter in the $UV-optical$ colors for various assumed formation redshifts, which are also plotted in Fig. \ref{fig:scatter}. At $z < 0.6$ there is clear excess scatter in the near-UV colors in most clusters, for any assumed formation redshift. However, in agreement with our earlier work and the analysis from simple models discussed later, at $z > 0.6$ the scatter in the near-UV colors appears closer to that predicted from the optical colors, or is even lower than predicted for certain formation redshifts. This suggests a weakening of the UV upturn at these redshifts, as observed by \cite{ali2018c} and, by a different approach, \cite{lecras2016}.
One mechanism for this is that the UV upturn is produced by a population with non-cosmological helium abundance, in which a significant fraction of stars have helium abundances larger than 0.23, as observed in Galactic globular clusters (e.g., in NGC 2808). The most extreme He-enhanced stars contribute most of the $FUV$ flux. As these stars disappear at higher redshift (where the populations are younger and their stars still too massive to reach the extreme HB), the remaining He-rich population cannot extend as far into the blue HB and contributes more and more of its flux to the NUV regime, with the upturn appearing gradually over the redshift range $0.6 < z < 1$, as observed by \cite{lecras2016}.
\section{Discussion}
\subsection{Comparison with YEPS models}
\begin{figure*}
\includegraphics[width=\textwidth]{Legend.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=2_5_feh=0_04_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_04_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_01_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=10_feh=0_04_inf.png}
\caption{YEPS infall (CSP) models showing the evolution of the rest-frame $2400-r$ (observed $F475W-F125W$ at z=0.96) color over redshift/lookback time for a range of initial helium abundances - $Y_{ini}=0.28,0.33,0.38,0.43$ (dashed lines), formation redshifts ($z_f$) and metallicities as detailed in the figure legends. Also included in every sub-plot is the evolution of the same color for infall models with $Y_{ini}=0.23$ (i.e. no upturn) for $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\) (solid lines) at varying formation redshifts. Plotted on top for comparison are box plots which show the rest-frame $2400-r$ colors of all clusters between $z=0.3-1$. Photometric uncertainties in color are $<0.2$ magnitudes.}
\label{fig:yeps}
\end{figure*}
In order to analyse the evolution of the upturn over cosmic time, in Fig. \ref{fig:yeps} we compare our observed colors to those from YEPS spectrophotometric models of \cite{chung2017}. The YEPS models used in this figure are CSPs (with an infall chemical history) of given $Z$ and $z_f$, but also with varying initial helium abundances - $Y_{ini}$. This He-enhancement parameter represents the degree of UV upturn in stellar populations, with a larger $Y_{ini}$ giving rise to hotter HB stars at earlier cosmic times. There have been many observations of multiple stellar populations producing hot and extended HBs in Milky Way globular clusters (e.g. \citealt{lee2005b}; \citealt{piotto2005,piotto2007}) leading to a strong vacuum UV flux (e.g., see \citealt{dalessandro2012}), with direct spectroscopic evidence of enhanced helium in stars within such systems (e.g. \citealt{marino2014} and references therein).
In Fig. \ref{fig:yeps} we plot the time evolution of the rest-frame $\sim2400-r$ over redshift/lookback time. First, in every sub-plot we show the evolution of the YEPS models with solar-like $Y_{ini}=0.23$, for $Z$=\(Z_\odot\), 0.5\(Z_\odot\), 2\(Z_\odot\) (represented as solid lines in the plot). These form an important set of baseline models that \textit{do not} exhibit a UV upturn component, with the colors being driven largely by age and metallicity effects. Hence they act as a key point of comparison to both the observational data and the other models of increasing $Y_{ini}$, to determine how enhanced helium affects the observed colors.
In each sub-plot we then show the evolution of this near-UV color as given by the YEPS model for 4 more increasing initial helium abundances: $Y_{ini}=0.28, 0.33, 0.38, 0.43$ (represented as dashed lines in the plot). In the individual sub-plots we then vary the metallicity and the redshift of formation of these models. In each row, the $z_f$ is varied between three values of 2.5, 4 and 10 (from top to bottom). In each column, the $Z$ of the YEPS models are also varied between three values of 0.5\(Z_\odot\), \(Z_\odot\) and 2\(Z_\odot\) (from left to right). This creates a grid of models of varying $Z$ and $z_f$, allowing us to examine the combined effect of these parameters as well as $Y$ in determining the evolution of ETGs in the UV.
Alongside the models, we plot the observed colors (rest-frame $2400-r$) of ETGs in our 12 clusters across all redshift bins between $0.3<z<1$. The colors of Abell 2744, Abell S1063, Abell 370, MACSJ0416, MACSJ1149, MACSJ0717, MACSJ2129, SDSS1004, MACSJ0744, RCS2327 and CL1226 (as seen in Fig. \ref{fig:3}) were k-corrected, using the method described in Section \ref{sec:kcorr}, to CL2011's rest-frame $2400-r$ ($F475W-F125W$ at $z=0.96$). As detailed in Section \ref{sec:kcorr}, given that all clusters were observed at wavelengths close to rest-frame $2400-r$, the results could be reliably compared after applying a relatively small k-correction that took into account the difference in rest-frame wavelengths and the shapes of the filters used to observe the clusters in the UV (as seen in Fig. \ref{fig:sed}). The colors are plotted in the form of box plots for an intuitive look at the statistics of each cluster. The box plots show the median of the color distribution of each cluster, the 25\% and 75\% quartiles, and the entire range of colors in each cluster as indicated by the whiskers. Galaxies with colors more than 1.5 times the interquartile range above or below the quartiles are plotted as individual points and are considered outliers. The notches around the median in each box plot represent the 95\% confidence interval of the median value for each cluster. Due to the small number of directly detected galaxies in CL2011 and CL1226, the notches around the median are large, reflecting the uncertainty in the median value. For these clusters we also plot the stacks separately from the box plots, as these represent amalgam values of all galaxies in several magnitude bins (as seen in Fig. \ref{fig:3}). The models and the cluster data combined allow us to probe the evolution of the upturn in the general cluster population out to $z=0.96$.
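The box-plot conventions just described correspond to standard \texttt{matplotlib} options; a short sketch with hypothetical per-cluster color arrays is given below.
\begin{verbatim}
# Sketch: notched box plots of rest-frame 2400-r colors per cluster,
# whiskers at 1.5x the interquartile range, outliers as points.
import matplotlib.pyplot as plt

def plot_cluster_colors(colors_by_cluster, redshifts):
    # colors_by_cluster: list of 1D arrays of k-corrected 2400-r colors
    fig, ax = plt.subplots()
    ax.boxplot(colors_by_cluster, positions=redshifts, notch=True,
               whis=1.5, widths=0.02, manage_ticks=False)
    ax.set_xlabel("redshift")
    ax.set_ylabel("rest-frame 2400 - r (mag)")
    return fig
\end{verbatim}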
From this figure we can see that the UV upturn is clearly detected in all clusters at $0.3<z<0.6$ and the $2400-r$ colors are relatively consistent from cluster-to-cluster, showing no significant signs of evolution. A subset of galaxies can be reproduced using models of standard $z_f$ and $Z$ combinations, but even in the most extreme cases that produce the bluest YEPS $Y_{ini}=0.23$ (no upturn) colors (i.e. $z_f$=2.5 and Z=0.5\(Z_\odot\) - see top left sub-plot), there still exists a large subset of bluer galaxies outside these models that can only be explained using models with enhanced $Y$. For more moderate and reasonable YEPS models with $z_f=4$ or higher (see middle and bottom row sub-plots), the argument for an enhanced $Y$ is even stronger in order to account for the larger proportion of blue galaxies outside the bounds of the non-upturn models. Increasing the $Y$ gradually also brings on the onset of the UV upturn at earlier redshifts/lookback times. This is affected by both $z_f$ and $Z$ at a given $Y$, with an earlier evolution to bluer colors for higher $z_f$ and lower $Z$ and vice versa. This is important as galaxies with different $z_f$ and/or $Z$ within a cluster can thus have different strengths and times for the onset of the upturn.
There is also a detection of the upturn for galaxies in the $z\sim0.7$ clusters, but at a slightly lower degree of confidence, as the `blue tail' of galaxies appear to be receding, and as seen in figure \ref{fig:scatter}, have generally smaller observed scatters in their near-UV colors compared to their lower redshift counterparts. These factors may indicate that we are starting to see some signs of evolution in the strength and incidence of the upturn, which is backed up by the findings in \cite{ali2018c}, where SDSS1004 galaxies even when stacked together showed only a weak $3\sigma$ detection of 7.2 mags in rest-frame $1650-g$ (about one mag. redder than similar mass galaxies at $z=0.55$), in a color much more sensitive to the upturn than $NUV-r$. The fact that we still observe an upturn, albeit weakening, in $2400-r$ in the same cluster (and those of similar redshift) may suggest that even the most extreme He-rich stars at this age are no longer able to achieve the high surface temperatures needed to produce flux in the $FUV$ as their stellar envelopes are now too massive, and are thus contributing relatively more to the near-UV flux. As such we may be seeing the first signs of the upturn sub-population starting to disappear.
This is further confirmed by the data in CL1226 and CL2011, where we see that the majority of galaxies out to 3 mags below $M^*$ can be encapsulated by a standard non-upturn $Y_{ini}=0.23$ YEPS model with $Z$ roughly between 0.5\(Z_\odot\) and 2\(Z_\odot\), assuming $z_f=2.5\sim4$ (top middle and middle sub-plots in Fig. \ref{fig:yeps}), without the need to invoke any significant He-enhancement above primordial levels. It is likely that by these redshifts even the most extreme He-rich stars are no longer old enough (have not had enough time to reach the HB) to achieve the surface temperatures required to output significant flux even in the near-UV. This gradual fading of the upturn is in excellent agreement with the results of \cite{lecras2016}, who also found the strength and incidence of the upturn to decrease from $z=0.6$ to $z=1$ from the analysis of a number of near-UV indices in a large sample of LRGs. While non-upturn YEPS models do appear to fit the data relatively well at higher redshifts, it is important to note that even at these redshifts not all galaxies will be without an upturn. A combination of earlier $z_f$ and lower metallicity can lead to an earlier onset of the upturn, which may be applicable to some galaxies in a given cluster and explain the recent finding of a massive ETG exhibiting an upturn at $z\sim1.4$ (\citealt{lonoce2020} -- although this is based on a single line at the $2\sigma$ level only, with other indicators disagreeing).
The overall color spread of the cluster galaxies is very similar across all clusters in each individual redshift bin, indicating that all clusters exhibit similar upturns in their member galaxies irrespective of size or environment, as demonstrated in previous studies (\citealt{ali2019}; \citealt{phillipps2020}). Even between the clusters in the different redshift bins, we find the observed colors to be reasonably similar out to $z\sim0.6$, suggesting that the upturn has not evolved significantly, although there is a slightly smaller fraction of very blue galaxies at higher redshifts. These results are mostly consistent with our previous studies, in which we detected strong $FUV$ emission from MACSJ1149/MACSJ0717 galaxies, both directly and using stacking analysis, to several magnitudes below $M^{*}$, with colors that also agreed with those measured in $z\sim0$ clusters such as Coma, Fornax and Perseus (\citealt{ali2018a,ali2018b,ali2018c}). This finding led to the conclusion that the upturn color has remained broadly similar in cluster galaxies out to $z\sim0.55$, which we reaffirm here.
Given a scenario in which the upturn color remains approximately constant out to $z\sim0.6$, then shows signs of weakening at $z\sim0.7$ and has mostly disappeared by $z\sim0.9$, the YEPS models (assuming a moderate \(Z_\odot\) and $z_f=4$ -- middle sub-plot in Fig. \ref{fig:yeps}) suggest a minimum $Y_{ini}$ of 0.42 to explain the observations. This equates to $Y=0.46$ using the formula from \cite{chung2017} cited earlier ($Y=Y_{ini}+0.04$ for \(Z_\odot\)). An earlier formation epoch can somewhat reduce the required helium enhancement (by allowing more time for evolution), but this epoch can reasonably be pushed back by at most $\sim$1 Gyr: $z_f=10$ corresponds to 0.5 Gyr after the Big Bang, the most likely redshift for the formation of the first galaxy of Milky Way mass \citep{Naoz2006}, whereas under the assumption of simple spherical infall the core regions of galaxies cannot form until $z=20$, yielding only an extra 200 Myr. Even in the most extreme case of $z_f=10$ (bottom middle sub-plot in Fig. \ref{fig:yeps}), the required $Y_{ini}$ only decreases to about 0.40.
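For concreteness, with $\Delta Y/\Delta Z=2.0$ and taking $Z_\odot=0.02$ (the value consistent with the quoted offset of 0.04 at solar metallicity), the conversion reads
\[
Y = Y_{ini} + \frac{\Delta Y}{\Delta Z}\,Z_\odot = 0.42 + 2.0\times 0.02 = 0.46 .
\]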
\begin{figure*}
\includegraphics[width=\textwidth]{Legend.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_inf.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_sim.png}
\includegraphics[width=0.325\textwidth]{Evolution_zf=4_feh=0_02_ssp.png}
\caption{YEPS models of different chemical formulations showing the evolution of the rest-frame $2400-r$ (observed $F475W-F125W$ at z=0.96) color over redshift/lookback time for a range of initial helium abundances - $Y_{ini}=0.28,0.33,0.38,0.43$ (dashed lines) with solar metallicity and formation redshift of 4. Also included in every sub-plot is the evolution of the same color for models with $Y_{ini}=0.23$ (i.e. no upturn) for $Z$=\(Z_\odot\), 0.5\(Z_\odot\) \& 2\(Z_\odot\) (solid lines). \textit{Left column:} YEPS composite stellar population infall model. \textit{Middle column:} YEPS composite stellar population simple closed box model. \textit{Right column:} YEPS simple stellar population model. Plotted on top for comparison are box plots that show the rest-frame $2400-r$ colors of all clusters between $z=0.3-1$. Photometric uncertainties in color are $<0.2$ magnitudes.}
\label{fig:chem}
\end{figure*}
\subsubsection{Comparison with different chemical evolutionary models}
In Fig.~\ref{fig:chem} we show the look-back time evolution of YEPS models under three different assumptions for the chemical evolution history: a simple stellar population and two variations of a composite stellar population, with simple (closed box) and infall chemical histories, as described in \cite{kodama1997}. Using the metallicity distribution function from \cite{kodama1997}, simple stellar populations are summed to mimic the composite stellar population of ETGs. For all of the models we shift the resulting SEDs to the redshift of the clusters and measure the integrated color in the observed passbands. We overplot the observed colors of our clusters on these models as before.
The SSP models exhibit the reddest $UV-optical$ colors, the closed box models are the bluest, and the infall models lie in between. For a given $Y_{ini}$, the SSPs show a rapid onset of the upturn, while in the CSPs the colors become bluer more gradually over time. This suggests that the upturn sub-population in CSPs takes longer to fully populate the HB and become UV-bright than in SSPs. Our observations appear to support the more gradual mode of evolution of the CSPs, given that we observe a strong upturn in all galaxies up to $z=0.6$, which then weakens (but does not fully disappear) at $z=0.7$ and has mostly ceased by $z=0.96$. Beyond our observations, CSPs generally provide a more accurate representation of ETGs, which contain stellar populations with a range of ages and a metallicity distribution, unlike an SSP.
Among the CSP models, the closed box model, being the bluest, would require the reddest galaxies in our sample to have unusually high metallicities, much greater than 2\(Z_\odot\). Conversely, the highest metallicity required by the infall model to fit our reddest galaxies is $\sim$2\(Z_\odot\), which is a very realistic upper limit for the metallicity of large cluster ETGs from their optical spectra, as confirmed in previous studies -- e.g. in Coma (\citealt{price2011}). The infall models thus provide overall the most realistic and economical fit to our data for all three parameters $Y_{ini}$, $z_f$ and $Z$ simultaneously. Infall models are also generally held to better replicate the evolution of observed galaxies, particularly through their ability to solve the `G-dwarf' problem of closed box models.
\subsection{Comparison with other UV upturn models}
We can briefly consider the implications of these results for other models of the origin of the UV upturn.
Low metallicity HB stars or high metallicity HB stars (apart from the issues we have mentioned earlier) would start to appear only at $z\sim 0.3$ \citep{yi1997}, well below the redshift where the presence of the UV upturn is already securely detected (at least $z=0.55$). Therefore our results confirm that these objects (with normal $Y$ abundance) cannot provide the sources of the UV upturn and account for its evolution with time.
The observed evolution, here and in previous studies \citep{lecras2016,ali2018c}, is also not consistent with the binary model of \cite{Han2007} and \cite{Hernandez2014}, in which the UV upturn color does not change significantly until $z \sim 6$, whereas there is clear evidence of evolution in our data. Similarly, PAGB stars would only stop contributing to the UV light about 10 Gyrs prior to the present epoch \citep{Lee1999}, corresponding to $z \sim 3$, again far higher than observed here and in previous works.
\cite{Vazdekis2016} and \cite{Rusinol2019},
among others, suggest that residual star formation within ETGs
may contribute to the UV upturn. However, the results presented here (and the observed decline of the UV upturn at $z>0.6$ in \citealt{lecras2016} and \citealt{ali2018c}) cannot be explained in this fashion, as the residual star formation rate would have to decrease as a function of increasing redshift, opposite to all observed behavior for star-forming galaxies in the field and clusters \citep{Finn2005}. The galaxies in our sample, additionally, lie within rich clusters, where quenching of star formation is believed to be more efficient than in the field (but we note that the evolution of the UV upturn color appears to be similar between field and cluster environments -- \citealt{Atlee2009}, De Propris et al., in preparation).
\subsection{Implications for galaxy evolution}
While several other models for the upturn exist, the observed evolution of the near-UV colors is best fitted by He-rich models. Since He-rich stars have been directly observed in local globular clusters and are directly linked to a stronger UV output in hot HB stars, this mechanism is the only one proposed thus far that does not require significant modifications to cosmology or to theories of stellar evolution, and for which local counterparts are observed to exist.
In this scenario ETGs form, at least from the standpoint of chemical evolution, in the same manner as globular clusters, albeit at generally much higher metallicities (and He-rich, metal-rich clusters are indeed observed in M87 -- \citealt{peacock2017}). This would imply that most ETGs form their stellar populations and assemble their mass at very early times. Indeed, the stellar mass needed to provide the observed flux in the UV corresponds to about 10\% of the total stellar mass, and this must have been formed at $z > 2.5$, and probably closer to $z=4$, as discussed above. However, these are second generation stars, whose helium content must have been enriched by previous processing, likely in fast rotating massive stars \citep[e.g.,][]{decressin2007} or in massive AGB stars \citep[e.g.,][]{ventura2001} during the third dredge-up. Given what is known about the likely yields of element production and the resulting timescales, and within a closed box model, the vast majority of the stellar mass must already have been present within ETGs at these redshifts, especially if we also need to account for the observed radial gradients of the UV upturn populations within galaxies \citep{carter2011,jeong2012}. This therefore suggests a very early period of mass assembly for ETGs, as indicated by the discovery of several massive, fully formed galaxies at redshifts approaching 4 \citep{guarnieri2019}.
\subsection{Caveats}
Finally, we note here the key caveats in our data and analysis (see body of paper for a more thorough discussion):
\begin{itemize}
\item The upturn has been conventionally characterised using $FUV-optical$ colors (centred at $\sim1500$\AA), the wavelength regime where the upturn is at its strongest, while we have used a $NUV-optical$ color for our analysis. Although this wavelength regime is not optimal and is partly affected by the tail end of the blackbody emission of the main sequence population in ETGs, this study (e.g. Fig. \ref{fig:sed}) and others (\citealt{schombert2016}; \citealt{phillipps2020}) have shown that it is still mostly dominated by the emission from the upturn sub-population. Hence the $NUV-optical$ color should also trace the evolution of the upturn with redshift, as predicted by models and seen in our results.
\item At high redshift, for CL1226 and CL2011, the data are not deep enough to individually detect galaxies below the cluster $M^*$, unlike in the lower redshift clusters. Clusters at higher redshift are also generally not as rich as their lower redshift counterparts, which, combined with the aforementioned detection limits, can lead to small number statistics, particularly towards the faint end of the luminosity function. To overcome this issue we have stacked all galaxies below $M^*$ and were able to clearly detect the average flux per magnitude bin in this region, reaching down to a similar point in the luminosity function as in the lower redshift clusters and probing the more general ETG population. While stacking does not allow us to explore the scatter in the color caused by the upturn, it does allow us to probe the general evolution of the color with redshift.
\item We have selected our cluster members using the standard red sequence selection method, as there are no spectroscopic or accurate photometric redshifts, particularly for the high redshift cluster galaxies. However, to reduce contamination from star-forming and background/foreground objects, we have selected the red sequence simultaneously in both optical and $u$-band (rest-frame) colors. This method of cluster ETG selection is found to be rather successful in spectroscopic studies (\citealt{rozo2015}).
\item While the He-enhanced models can certainly explain the evolution of the upturn with redshift most consistently, and have locally observed analogues such as in Milky Way globular and open clusters, there is still no theoretical construct by which one obtains such anomalously He-enriched HB stars in old systems formed at very high redshifts ($z\sim3-4$), as required for these stars to have had the time to evolve onto the HB. Some suggestions have been made, such as the disintegration of GCs providing He-enhanced HB stars in ETGs (\citealt{Goudfrooij2018}), but there is no observational evidence thus far to support such hypotheses. Further theoretical and observational efforts will be required on both the galaxy evolution and globular cluster fronts to uncover any potential links between the two classes of systems (such as the He-rich, metal-rich GCs observed in M87 -- \citealt{peacock2017}) and to further constrain the source of the He-enhancement in a subset of the stellar population of these systems.
\end{itemize}
\section{Conclusions}
We have measured an approximate $NUV-r$ color for ETGs within clusters at $0.3 \lesssim z \lesssim 1$. At $z < 0.6$ we observe the classical UV upturn, showing little evolution, with UV colors that are homogeneous from cluster to cluster but show a large internal spread. Above this redshift we find evidence of a decline in the strength of the upturn, which largely disappears by $z=0.96$ (our most distant target). This is most consistent with composite stellar population models with an infall chemical history, in which the UV upturn is produced by a population of blue HB stars with $Z$=\(Z_\odot\) formed at $z=2.5\sim4$, in turn originating from a sub-component of the stellar population with high He content ($\sim 45-47\%$), similar to the second generation population in globular clusters. These models best replicate the evolution and scatter of the UV colors across the entire redshift range $0.3<z<1$ analysed in this study. Given the evolutionary timescales for these stars, the results imply a surprising degree of chemical evolution occurring within the first 2 Gyrs of the history of the Universe.
\acknowledgments
Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). C. C. acknowledges the support provided by the National Research Foundation of Korea (2017R1A2B3002919).
\section{Introduction}
The connection between the theory of electrical networks and Lie theory was discovered in the pioneering work \cite{LP}. Later, in \cite{L}, an embedding $X$ of the space $E_n$ of electrical networks on a planar circular graph with $n$ boundary points into the non-negative part of the Grassmannian $\mathrm{Gr}(n-1,2n)$ was constructed using the technique developed in the work of A. Postnikov. Furthermore, its image $X(E_n)$ was explicitly identified in combinatorial terms, and its closure was studied.
Of particular importance for us will be the operations of attaching an edge and a spike to the graph of a network. The change of $E_n$ under these operations was described in \cite{L} as an action of a certain group, which we will refer to as the Lam group $\mathcal El_n$, on a space of dimension $2n$ and therefore on $\mathrm{Gr}(n-1,2n)$. This group is closely related to the electrical group introduced in \cite{LP}.
Another description of $E_n$ was obtained in \cite{GT} using the technique from the theory of integrable models. This description provided a different embedding of $E_n$ into the same Grassmannian.
In this paper we will construct a new parametrisation of $X(E_n)$ using ideas from the work of Lam \cite{L} and Postnikov \cite{P}. First we will show that $X(E_n)$ naturally sits in a smaller manifold $\mathrm{Gr}(n-1,V)\subset \mathrm{Gr}(n-1,2n)$, where $V\subset \mathbb R^{2n}$ is a certain subspace of dimension $2n-2$. This reduction of the dimension of the ambient space is crucial for us because next we show that $X(E_n)$ sits in the Lagrangian Grassmannian $\mathrm{LG}(n-1,V)\subset \mathrm{Gr}(n-1,V)$. Since the dimension of $\mathrm{LG}(n-1,V)$ is equal to the dimension of $E_n$, the former is a natural home for the latter.
As is known from geometric representation theory, $\mathrm{LG}(n-1)$ can be identified with $\mathrm{Gr}(n-1,2n-2)\cap \mathbb{P} L$, where $L\subset \bigwedge^{n-1}\, \mathbb R^{2n-2}$ is the space of the fundamental representation of the group $Sp(2n-2)$ which corresponds to the last vertex of the Dynkin diagram. The dimension of $L$ is equal to the Catalan number $C_n$.
We will interpret in these terms the description, obtained in \cite{L}, of $X(E_n)$ as the slice of the Grassmannian $\mathrm{Gr}(n-1,2n)$ by the projectivisation of a subspace $H$ of dimension equal to the Catalan number $C_n$. It turns out that $V$ is invariant under the action of the group $\mathcal El_n$, which factors through the action of the symplectic group $Sp(2n-2)$. Furthermore, $H$ is a subspace of $\bigwedge^{n-1} V$ and it is invariant under the action of $Sp(2n-2)$ as well. Therefore its projectivisation becomes the space for the Plucker embedding of $\mathrm{LG}(n-1)$ as the orbit of the highest weight vector in the fundamental representation of the group $Sp(2n-2)$ corresponding to the last vertex of the Dynkin diagram. It means in particular that the linear relations cutting out $X(E_n)$ from $\mathrm{Gr}(n-1,2n)$ which were found in \cite{L} define the space of the above fundamental representation of $Sp(2n-2)$, connecting the combinatorics discovered in \cite{L} with the representation theory of the symplectic group.
\footnote{After the work described in this paper was finished, Thomas Lam informed us in a private communication that he had obtained the result that $X(E_n)\subset \mathrm{LG}(n-1,V)$ when \cite{L} was already published.
We have also learned that a similar result was obtained in the recent preprint \cite{CGS}.
However, the geometric interpretation of the space $H$ and the connection to the representation theory of the symplectic group appear to be new.
}
The description of $X(E_n)$ given in \cite{GT} can also be naturally identified with a subset of $\mathrm{LG}(n-1)$. Finally, we show that the descriptions of $X(E_n)$ given in \cite{GT} and \cite{L} are related by an isomorphism of the ambient space $\mathbb R^{2n-2}$.
As was pointed out in \cite{GP}, there is a striking similarity between the parametrisation of the image of the space $E_n$ in $\mathrm{Gr}(n-1,2n)$ and the parametrisation of the space of Ising models on a planar circular graph embedded into the orthogonal Grassmannian $\mathrm{OG}(n,2n)$ constructed in \cite{GP}. Our results may provide some insight into this similarity, which we plan to explore in future publications.
\subsection{Organisation of the paper}
In Section \ref{background} we collect basic facts from the theory of electrical networks, together with several constructions of T. Lam and A. Postnikov that are important for us.
In Section \ref{sec:networks} we introduce a new parametrisation of the space $E_n$ in the Grassmannian $\mathrm{Gr}(n-1,2n)$.
We study the compactification of $X(E_n)$ introduced in \cite{L} in terms of our parametrisation.
In Section \ref{sec:Lagr} we prove that $X(E_n)$ naturally sits inside $\mathrm{LG}(n-1,V)$, where $V\subset \mathbb R^{2n}$ is a subspace of co-dimension $2$.
In Section \ref{sec:representation} we obtain the main result of the paper, we show that the linear space $H$ from \cite{L} becomes the space of the projective embedding of $\mathrm{LG}(n-1, V)$, moreover $H$ is invariant under the action of the electrical group $\mathcal{E}l_n$ which factors through the action of the symplectic group $Sp(2n-2)$, and, as a representation of $Sp(2n-2)$, it is the fundamental representation which corresponds to the last vertex of the Dynkin diagram.
In Section \ref{Sec:nonneg} we show that the image of $X(E_n)$ lands in $\mathrm{Gr}_{\geq 0}(n-1,V)$, the non-negative part of $\mathrm{Gr}(n-1,V)$. This is a non-trivial refinement of the fact $X(E_n)\subset \mathrm{Gr}_{\geq 0}(n-1,2n)$ proved in \cite{L}. On the other hand $X(E_n)$ does not land in $\mathrm{LG}_{\geq 0}(n-1)$ studied in \cite{K} as we will explain in this section.
In Section \ref{Sec:app} we give some useful applications of our parametrisation of $X(E_n)$. Namely we will prove the well-known formulas from \cite{CIM} for the change of the response matrix entries after adjoining a boundary spike or a boundary bridge and give a simple proof of Theorem $1.1$ from \cite{KW}, which expresses Laurent polynomial phenomenon for the special ratios of groves partition functions.
In Section \ref{Sec:vertex}, following the work \cite{GT}, we describe the natural connection between the boundary measurement matrices assigned to elements of $E_n$ and points in $\mathrm{LG}(n-1,V)$. We show that the embedding from \cite{GT} and the Lam embedding can be identified. Finally, we present alternative proofs of Theorem \ref{laggr} and Theorem \ref{nonneglagr} using the vertex model description of electrical networks on critical graphs.
\section{Electrical networks}
\label{background}
\subsection{Postnikov's approach and Lam's embedding}
We start with some background from \cite{L}, \cite{LT} and \cite{P}.
\begin{definition} \label{l1}
A planar circular graph $\Gamma$ together with a weight function $\omega:E(\Gamma)\rightarrow \mathbb R_{\geq 0}$ is called a {\em bipartite network} $N(\Gamma, \omega)$ if the following holds:
\begin{itemize}
\item The boundary vertices $V_0(\Gamma)$ are labeled by positive integers clockwise;
\item The degrees of the boundary vertices are all equal to one and the weights of the edges incident to the boundary vertices are all equal to one;
\item All the vertices $v\in V(\Gamma)$ are coloured either black or white, and each edge is incident to vertices of different colours.
\end{itemize}
\end{definition}
\begin{definition} An {\em almost perfect matching} $\Pi$ is a collection of edges of $\Gamma$ such that
\begin{itemize}
\item Each interior vertex is incident to exactly one of the edges in $\Pi$;
\item boundary vertices may or may not be incident to the edges in $\Pi$.
\end{itemize}
The weight $\mathrm{wt}(\Pi)$ of a matching $\Pi$ is the product of the weights of all edges in the matching.
\end{definition}
For $I\subset V_0(\Gamma)$ denote by $\Pi(I)$ the set of almost perfect matchings such that the black vertices in $I$ are {\it incident} to the edges in $\Pi$ and the white vertices in $I$ are {\it not incident} to the edges from $\Pi$.
Define the following invariant of a bipartite network $N(\Gamma, \omega)$:
$$k(\Gamma)=\frac{1}{2}\left(n+\sum\limits_{v \in black}(\mathrm{deg}(v)-2)+\sum\limits_{v \in white}(2-\mathrm{deg}(v))\right),$$
here $n$ is the number of boundary vertices of the graph $\Gamma$.
For a bipartite network $N(\Gamma, \omega)$ define a collection of boundary measurements as follows:
\begin{definition} \label{ld}
For each $I\subset V_0(\Gamma)$ such that $|I|=k(\Gamma)$ the {\em boundary measurement} defined by $I$ is given by the formula:
\begin{equation}
\Delta_{I}^M=\sum \limits_{\Pi\in \Pi(I)}\mathrm{wt}(\Pi).
\end{equation}
The superscript $M$ indicates that this boundary measurement is defined via almost perfect matchings.
\end{definition}
\begin{definition}
The \textit{totally non-negative Grassmannian} (we usually call it simply the non-negative Grassmannian) is the locus in the Grassmannian where all Plucker coordinates are non-negative.
\end{definition}
\begin{theorem}\cite{LT}\label{l2}
For a bipartite network $N(\Gamma, \omega)$ the collection of boundary measurements $\Delta_{I}^M$ considered as a set of Plucker coordinates defines a point in the non-negative Grassmannian $\mathrm{Gr}(k(\Gamma), n)$. In particular the boundary measurements $\Delta_{I}^M$ satisfy the Plucker relations.
\end{theorem}
We will need a different way to calculate boundary measurements $\Delta_{I}^M$. For this purpose we define the notion of a flow.
\begin{definition}
Let $N(\Gamma, \omega)$ be a bipartite network with an orientation $O$ on the set of edges of $\Gamma$ which satisfies the following:
\begin{itemize}
\item Each internal black vertex has exactly one outgoing edge;
\item Each internal white vertex has exactly one incoming edge.
\end{itemize}
We call such an orientation {\em perfect}, call the network a {\em perfect network}, and denote it by $N(\Gamma, \omega, O)$.
\end{definition}
\begin{remark} \label{bij}
For each bipartite graph there is a natural bijection between the set of perfect orientations and the set of almost perfect matchings. Consider an almost perfect matching $\Pi$ on a bipartite graph $\Gamma$. We construct a perfect orientation $O(\Pi)$ of $\Gamma$ as follows:
\begin{itemize}
\item if $e \notin \Pi,$ then we orient the edge $e$ from white to black;
\item if $e \in \Pi,$ then we orient the edge $e$ from black to white.
\end{itemize}
It is easy to see that the constructed map is a bijection. Thus for each bipartite graph there is at least one perfect orientation.
\end{remark}
\begin{definition} \label{pf}
A subset $F$ of the set of edges $E(\Gamma)$ of a perfect network $N(\Gamma,\omega,O)$ is called a {\em flow} if for each internal vertex the number of incoming edges is equal to the number of outgoing edges.
The weight of a flow $\mathrm{wt}(F)$ is the product of the weights of its edges.
\end{definition}
In the perfect network $N(\Gamma,\omega,O)$ the set of boundary vertices is naturally split into the sources and the sinks.
The following statement is obvious from the definition:
\begin{proposition} \label{pf1}
Each flow is a union of paths connecting sources to sinks, together with a collection of cycles.
\end{proposition}
For a set $I\subset V_0$, denote by $F(I)$ the set of flows $F$ such that the sources which belong to the set $I$ are not incident to any edge of $F$ and the sinks which belong to the set $I$ are incident to some edge in $F$.
\begin{definition}\label{lp1}
The {\em boundary flow measurements} for a perfect network with chosen set $I\subset V_0$ of cardinality $k(\Gamma)$ is defined by the formula:
$$\Delta_{I}^F=\sum \limits_{F\in F(I)}\mathrm{wt}(F).$$
\end{definition}
We have the following
\begin{theorem} \cite{LT}, \cite{P} \label{lpt}
For a perfect network $N(\Gamma,\omega,O)$ the collection of boundary flow measurements $\Delta_{I}^F$ considered as a set of Plucker coordinates defines a point in the non-negative Grassmannian $\mathrm{Gr}(k(\Gamma), n)_{\geq 0}$. In particular the boundary measurements $\Delta_{I}^{F}$ satisfy the Plucker relations.
\end{theorem}
We thus have two embeddings of the space of weighted graphs equipped with a bipartite or a perfect network structure into $\mathrm{Gr}(k(\Gamma), n)_{\geq 0}$. The following theorem compares them:
\begin{theorem}\cite{LT}, \cite{P} \label{link}
Let $N(\Gamma,\omega)$ and $N(\Gamma,\omega',O)$ be a bipartite and a perfect network structure on the same graph, whose weight functions $\omega$ and $\omega'$ are connected as follows:
\begin{itemize}
\item for any $e\in E(\Gamma)$ oriented in the network $N(\Gamma, \omega', O)$ from a white vertex to a black vertex, the weight $\omega'(e)$ is the same as in the bipartite network $N(\Gamma, \omega)$;
\item for any $e\in E(\Gamma)$ oriented in the network $N(\Gamma, \omega', O)$ from a black vertex to a white vertex, the weight $\omega'(e)$ is the reciprocal of the weight of the same edge in the bipartite network $N(\Gamma, \omega)$.
\end{itemize}
Then the Plucker coordinates defined by the networks $N(\Gamma,\omega)$ and $N(\Gamma,\omega',O)$ are related as follows:
$$\Delta_{I}^F=\frac{\Delta_{I}^M}{\mathrm{wt}(\Pi_0)},$$
where $\mathrm{wt}(\Pi_0)$ is the weight of the almost perfect matching corresponding to the perfect orientation $O$, see Remark \ref{bij}.
In other words, these two networks define the same point of $\mathrm{Gr}(k(\Gamma), n)_{\geq 0}.$
\end{theorem}
Postnikov also considered a larger class of networks defined below. We will call them the {\em Postnikov networks}.
\begin{definition} \label{matrpost}
Let $(\Gamma, \omega, O)$ be a graph with an orientation $O$ (not necessarily perfect) and a weight function $\omega$. We call such a triple a {\em Postnikov network} and denote it by $PN(\Gamma,\omega, O)$ if
\begin{itemize}
\item $\omega:E(\Gamma)\rightarrow \mathbb R_{\geq 0}$;
\item There are $k$ sources and $n-k$ sinks on the boundary and the boundary vertices are labeled by positive integers clockwise;
\item The degrees of the boundary vertices are all equal to one and the weights of the edges incident to the boundary vertices are all equal to one.
\end{itemize}
\end{definition}
The labeling of the boundary vertices induces the labeling of the set of sources $I=\{i_1,\dots,i_k\}$.
\begin{definition}
Consider a Postnikov network $PN(\Gamma,\omega, O)$ and denote by $M_{i_rj}$ the sum of weights of the directed paths from $i_r$ to $j$, which is defined as follows:
$$M_{i_rj}=\sum_{p:i_r \to j}(-1)^{\mathrm{wind}(p)}\mathrm{wt}(p),$$
here $\mathrm{wt}(p)$ is the product of the weights of all edges belonging to $p$, and $\mathrm{wind}(p)$ is the winding index of the path $p$, that is, the signed number of full turns of $p$, counting counterclockwise turns as positive; see \cite{P} for more details.
\end{definition}
\begin{definition}
\label{def:extendedmatr}
The {\em extended matrix of boundary measurements} $A$ for a Postnikov network $PN(\Gamma,\omega, O)$ is the $k \times n$ matrix defined as follows:
\begin{itemize}
\item the submatrix formed by the columns labeled by the sources is the identity matrix;
\item otherwise $a_{rj}=(-1)^sM_{i_rj},$ where $s$ is the number of elements from $I$ between the source labeled $i_r$ and the sink labeled $j$.
\end{itemize}
\end{definition}
\begin{theorem} \cite{P}
For a Postnikov network $PN(\Gamma,\omega, O)$ the minors $\Delta_I(A)$ of the extended boundary measurements matrix $A$ are non-negative, hence $A$ defines a point of the non-negative Grassmannian $\mathrm{Gr}(k(\Gamma), n)_{\geq 0}$.
\end{theorem}
By construction the above point depends on the choice of the orientation; however, if we work with bipartite graphs and perfect orientations on them, this dependence is not essential. Namely, we have the following:
\begin{theorem} \cite{P} \label{chanorient}
Let $\Gamma $ be a bipartite graph and $O_1$ and $O_2$ be two different perfect orientations on $\Gamma.$ Consider two Postnikov networks $PN_1(\Gamma, \omega_1, O_1),$ $PN_2(\Gamma, \omega_2, O_2)$ such that
\begin{itemize}
\item All vertices $v\in V(\Gamma)$ are coloured black or white and each edge connects vertices of different colours;
\item Both orientations $O_1$ and $O_2$ are perfect;
\item If the orientations of an edge $e \in E(\Gamma)$ coincide in both networks, then $\omega_1(e) = \omega_2(e)$; otherwise $\omega_1(e)\omega_2(e)=1$.
\end{itemize}
Then the minors of the extended boundary measurement matrices $A_1$ and $A_2$ are related as follows:
$$\Delta_J(A_2)=\frac{\Delta_J(A_1)}{\Delta_{I_2}(A_1)},$$
where $I_2$ is the set of indices of the sources of the second network.
\end{theorem}
\subsection{Electrical networks}
In this subsection we recall some basic facts of the theory of electrical networks.
\begin{definition}
An electrical network $e(\Gamma, \omega)$ is a planar graph $\Gamma$ embedded into a disk which satisfies the following conditions:
\begin{itemize}
\item the vertices are divided into the set of interior vertices and the set of exterior, or boundary, vertices, and all boundary vertices lie on a circle;
\item the boundary vertices are enumerated counterclockwise;
\item every edge of $\Gamma$ is equipped with a positive weight, the conductivity of this edge.
\end{itemize}
We will write $e \in E_n$ if $\Gamma$ has $n$ boundary vertices.
When a voltage is applied to the boundary vertices, it extends to the interior vertices and an electrical current runs through the edges of $\Gamma$. These voltages and currents are determined by the Kirchhoff laws \cite{CIM}.
\end{definition}
One of the major objects in the theory is the {\it response matrix}.
\begin{theorem}\cite{CIM}
Consider an electrical network and apply to its boundary vertices a voltage vector $\textbf{U}$. This voltage induces a current $\textbf{I}$ running through the boundary vertices. There is a matrix $M_R(e)$ which relates $\textbf{U}$ and $\textbf{I}$ as follows:
$$M_R(e)\textbf{U}=\textbf{I}.$$
This matrix is called the {\it response matrix}.
\end{theorem}
We will need the following important properties of the response matrix.
\begin{theorem}\cite{CIM} \label{respc}
Let $e \in E_n$ and let $M_R(e)$ be its response matrix. Then the following holds:
\begin{itemize}
\item $M_R(e)$ is a symmetric matrix;
\item its off-diagonal entries are non-positive (cf. Theorem \ref{kenwil} below);
\item the sum of the entries of each column and each row is equal to $0$.
\end{itemize}
\end{theorem}
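In practice the response matrix can be computed as the Schur complement of the Kirchhoff matrix (the weighted graph Laplacian) with respect to the interior vertices, see \cite{CIM}. The following minimal numerical sketch (the function name and the star-shaped test network are our own illustration, not taken from \cite{CIM}) checks the three properties above:
\begin{verbatim}
import numpy as np

def response_matrix(n_boundary, edges, n_total):
    # Kirchhoff matrix; vertices 0..n_boundary-1 are the boundary
    # ones, the rest are interior. `edges` = (u, v, conductance).
    K = np.zeros((n_total, n_total))
    for u, v, c in edges:
        K[u, u] += c; K[v, v] += c
        K[u, v] -= c; K[v, u] -= c
    B = slice(0, n_boundary); I = slice(n_boundary, n_total)
    # Schur complement onto the boundary block.
    return K[B, B] - K[B, I] @ np.linalg.solve(K[I, I], K[I, B])

# Star network: interior vertex 3 joined to boundary vertices 0,1,2.
M = response_matrix(3, [(0, 3, 2.0), (1, 3, 3.0), (2, 3, 5.0)], 4)
assert np.allclose(M, M.T)                         # symmetric
assert ((M - np.diag(np.diag(M))) <= 1e-12).all()  # off-diagonal <= 0
assert np.allclose(M.sum(axis=1), 0)               # zero row sums
\end{verbatim}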
Now we are ready to define an embedding of $E_n$
to $\mathrm{Gr}(k(\Gamma), n)_{\geq 0}$.
\begin{definition} \label{def-grov1}
A {\em grove} $G$ on $\Gamma$ is a spanning subforest, that is, an acyclic subgraph which uses all vertices, such that each connected component $G_i \subset G$ contains at least one boundary vertex.
The {\em boundary partition} $\sigma(G)$ is the set partition of $\{\bar 1, \bar 2, \ldots, \bar n\}$ which specifies which boundary vertices lie in the same connected component of $G$. Note that since $\Gamma$ is planar,
$\sigma(G)$ must
be a {\em non-crossing partition}, also called a {\em planar set partition}.
\end{definition}
We will often write set partitions in the form $\sigma = (\bar a\bar b\bar c \bar d \bar e| \bar f \bar g|\bar h)$. Denote by $\mathcal NC_n$ the set of non-crossing partitions on $\{\bar 1,\ldots, \bar n\}$.
Each non-crossing partition $\sigma$ on $\{\bar 1, \bar 2, \ldots, \bar n\}$ has a {\em dual non-crossing partition} on $\{\tilde 1, \tilde 2, \ldots, \tilde n\}$, where by convention $\tilde i$ lies between $\bar i$ and $\overline {i + 1}$. For example, $(\bar1 \bar 4 \bar 6|\bar 2 \bar 3|\bar 5)$ is dual to $(\tilde{1} \tilde{3}|\tilde{2}|\tilde{4} \tilde{5}|\tilde{6})$.
\begin{definition} \label{def-grov2}
For a non-crossing partition $\sigma$ we define the grove measurement related to $\sigma$ as follows:
$$L_{\sigma} := \sum_{G:\,\sigma(G)=\sigma} \mathrm{wt}(G),$$
where the summation is over all groves with boundary partition $\sigma$, and $\mathrm{wt}(G)$ is the product of weights of edges in $G$.
\end{definition}
In particular we will need the grove measurements of the following non-crossing partitions, whose definitions are clear from the notation:
\begin{itemize}
\item $L:=L_{*|*|*\dots}$;
\item $L_{ij}:=L_{ij|*|*\dots}$;
\item $L_{kk}:=\sum_{i\neq k}L_{ik}.$
\end{itemize}
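For small networks the grove measurements can be enumerated by brute force directly from Definitions \ref{def-grov1} and \ref{def-grov2}. Below is a short Python sketch (the star network and all names in it are our own illustration; its edge labels differ from those of Fig. \ref{fig:grove}):
\begin{verbatim}
from itertools import combinations
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
boundary = {1, 2, 3}
# Star network: interior vertex 'v' joined to the boundary vertices.
edges = {(1, 'v'): a, (2, 'v'): b, (3, 'v'): c}

def parts(chosen):
    # Connected components of the chosen edge set.
    comp = {u: {u} for u in (1, 2, 3, 'v')}
    for (p, q) in chosen:
        if comp[p] is not comp[q]:
            comp[p] |= comp[q]
            for r in comp[q]: comp[r] = comp[p]
    return {frozenset(s) for s in comp.values()}

groves = {}
for r in range(len(edges) + 1):
    for chosen in combinations(edges, r):
        comps = parts(chosen)
        # Grove: every component meets the boundary (the star is a
        # tree, so acyclicity is automatic here).
        if any(not (s & boundary) for s in comps): continue
        sigma = tuple(sorted(tuple(sorted(s & boundary))
                             for s in comps))
        w = sp.prod([edges[e] for e in chosen])
        groves[sigma] = groves.get(sigma, 0) + w

assert groves[((1,), (2,), (3,))] == a + b + c   # L_{1|2|3}
assert groves[((1, 2), (3,))] == a*b             # L_{12|3}
assert groves[((1, 2, 3),)] == a*b*c             # L_{123}
\end{verbatim}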
\begin{definition} \label{def-elb}
For $e\in E_n$ we define a planar bipartite network $N(\Gamma, \omega)$ embedded into the disk.
If $\Gamma$ has boundary vertices $\{\bar 1, \bar 2, . . . , \bar n\}$, then $N(\Gamma, \omega)$ will have boundary vertices $\{1, 2, . . . , 2n\}$, where boundary vertex $\bar i$ is identified with $2i-1$, and
a boundary vertex $2i$ in $N(\Gamma, \omega)$ lies between $\bar i$ and $\overline {i + 1}$. The boundary vertex $2i$ can be identified with the vertex $\tilde i$ used to label dual non-crossing partitions. The planar bipartite network $N(\Gamma, \omega)$ always has boundary vertices of degree $1$.
The interior vertices of $N(\Gamma, \omega)$ are as follows: we have a black interior vertex $b_v$ for each interior vertex $v$ of $\Gamma$, and a black interior vertex $b_F$ for each interior face $F$ of $\Gamma$; we have a white interior vertex $w_e$ placed at the midpoint of each interior edge $e$ of $\Gamma$. For each
vertex $\bar i$, we also make a black interior vertex $b_i$. The edges of $N(\Gamma, \omega)$ are defined as follows:
\begin{itemize}
\item if $v$ is a vertex of an edge $e$ in $\Gamma$, then $b_v$ and $w_e$ are joined, and the weight of this edge is equal
to the weight $\omega(e)$ of $e$ in $\Gamma$,
\item if $e$ borders $F$, then $w_e$ is joined to $b_F$ by an edge with weight $1$,
\item the vertex $b_i$ is joined by an edge with weight $1$ to the boundary vertex $2i - 1$ in $N$, and $b_i$
is also joined by an edge with weight 1 to $w_e$ for any edge $e$ incident
to $\bar i$ in $\Gamma$,
\item even boundary vertices $2i$ in $N(\Gamma, \omega)$ are joined by an edge with weight $1$
to the black face vertex $b_F$ of the face $F$ that they lie in.
\end{itemize}
\end{definition}
\begin{definition} \label{conc}
We call an $(n-1)$-element subset $I \subset \{1, \dots, 2n\}$ {\em concordant} with a non-crossing partition $\sigma$ if each part of $\sigma$ and each part of the dual partition $\tilde{\sigma}$
contains exactly one element not in $I$. In this situation we also say that $\sigma$ or $(\sigma, \tilde{\sigma})$ is concordant with $I$.
\end{definition}
\begin{theorem} \cite{L} \label{elcon}
Let $e \in E_n$ and let $N(\Gamma, \omega)$ be the bipartite network associated with $e$; then the following holds:
$$
\Delta_{I}^M= \sum\limits_{\sigma} L_{\sigma},
$$
where the summation is over all $\sigma$ which are concordant with $I$.
\end{theorem}
\begin{example}
For the networks on Fig. \ref{fig:grove} we have
$$
L_{1|2|3}=a+b+c,\; L_{123}=abc,\; L_{12|3}=ac,\; L_{13|2} = bc,\; L_{23|1} =ab;
$$
$$
\Delta_{12}=\Delta_{45}=bc,\; \Delta_{23}=\Delta_{56}=ab,\; \Delta_{34}=\Delta_{16}=ac,\; \Delta_{13}=\Delta_{35}=\Delta_{15}=abc,
$$
$$
\Delta_{24}=\Delta_{46}=\Delta_{26}=a+b+c,\; \Delta_{14}=ac+bc,\;\Delta_{25}=ba+bc,\;\Delta_{36}=ab+ac.
$$
\end{example}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{picture-7.pdf}
\caption{Grove measurements and Plucker coordinates}
\label{fig:grove}
\end{figure}
Now we establish the connection between the grove measurements $L, L_{ij}, L_{kk}$ and the Plucker coordinates derived from a bipartite network:
\begin{lemma} \label{lemmal}
For $e(\Gamma,\omega)\in E_n$ and $N(\Gamma, \omega)$ as above the following holds:
$$L=\Delta_{R}^M,$$ where $R$ is any set of even indices such that $|R|=n-1$;
$$L_{ij}=\Delta_{\{2i-1\}\cup N}^M,$$ where $N$ is the set of all even indices except the two closest to $2j-1$, so that $|N|=n-2$;
$$L_{kk}=\Delta_{\{2k-1\}\cup T}^M,$$ where $T$ is the set of all even indices except the two closest to $2k-1$, so that $|T|=n-2$.
\end{lemma}
\begin{proof}
The dual of the partition $\sigma=(\bar{1}|\bar{2}|\dots|\bar{n})$ is $\widetilde{\sigma}=(\widetilde{1} \widetilde{2} \dots \widetilde{n})$. It is easy to see that $(\sigma, \widetilde{\sigma})$ is concordant with $R$ and that there is no other $\sigma$ with this property, so $L=\Delta_{R}^M$.
The dual of the partition $\sigma=(\bar{i}\bar{j}|*|\dots|*)$ has the form $\widetilde{\sigma}=(\widetilde{A}|\widetilde{B})$, where exactly one of $\widetilde{j}_1, \widetilde{j}_2$, the two dual vertices closest to $\bar{j}$, lies in $\widetilde{A}$ and the other lies in $\widetilde{B}$. It is easy to see that $\sigma$ is concordant with the set $\{2i-1\}\cup N$.
Suppose there is another $\sigma$ concordant with the set $\{2i-1\}\cup N$; then $\sigma$ must have the form $\sigma=(\bar{i}\bar{k}|*|\dots|*)$ with $k\neq j$, and its dual has the form $\widetilde{\sigma}=(\widetilde{A'}|\widetilde{B'})$. Then either $\widetilde{A'}$ or $\widetilde{B'}$ contains both $\widetilde{j}_1$ and $\widetilde{j}_2$, so this part contains two elements not in $\{2i-1\}\cup N$, which contradicts concordance. Hence $L_{ij}=\Delta_{\{2i-1\}\cup N}^M$.
It is easy to see that each partition of the form $\sigma=(\bar{k}\bar{i}|*|\dots|*)$ is concordant with the set $\{2k-1\}\cup T$.
Finally, notice that any $\sigma$ which is concordant with the set $\{2k-1\}\cup T$ must have the form $\sigma=(\bar{k} \bar{i}|*|\dots|*)$ with $i \neq k$. On the other hand, we have already listed all such $\sigma$; therefore $L_{kk}=\Delta_{\{2k-1\}\cup T}^M$.
\end{proof}
\begin{example}
For any network from $E_5$ we have $L_{13}=\Delta_{\{1,2,8,10\}}^M$ and $L_{11}=\Delta_{\{1,4,6,8\}}^M.$
\end{example}
The following theorem was proved in \cite{KW}:
\begin{theorem} \label{kenwil}
For an electrical network $e(\Gamma,\omega)\in E_n$ the off-diagonal elements of its response matrix $M_R(e)$ satisfy the following relation:
$$x_{ij}=-\frac{L_{ij|*|*|*\ldots}}{L_{*|*|*\ldots}}:=-\frac{L_{ij}}{L}.$$
\end{theorem}
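Theorem \ref{kenwil} is easy to check symbolically on the star network used above, assuming the Schur-complement construction of the response matrix; the grove measurements of the star are computed by hand (a sketch for illustration only):
\begin{verbatim}
import sympy as sp

g1, g2, g3 = sp.symbols('g1 g2 g3', positive=True)
# Kirchhoff matrix of the star (boundary vertices first, then the
# interior vertex), with conductances g1, g2, g3.
K = sp.Matrix([
    [ g1,   0,   0, -g1],
    [  0,  g2,   0, -g2],
    [  0,   0,  g3, -g3],
    [-g1, -g2, -g3, g1 + g2 + g3],
])
# Response matrix = Schur complement onto the boundary block.
M_R = K[:3, :3] - K[:3, 3:] * K[3:, 3:].inv() * K[3:, :3]

L   = g1 + g2 + g3  # L_{1|2|3}: v hangs off a single boundary vertex
L12 = g1 * g2       # L_{12|3}: the two edges at vertices 1 and 2
assert sp.simplify(M_R[0, 1] + L12 / L) == 0   # x_12 = -L_12 / L
\end{verbatim}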
\section{Electrical networks and Grassmannian}
\label{sec:networks}
In this section we define a new parametrisation of the image of $E_n$ in $\mathrm{Gr}(n-1,2n)$ constructed in \cite{L}. This parametrisation is based on ideas from Postnikov's paper \cite{P} on the parametrisation of the non-negative Grassmannian.
All necessary definitions, statements, and notation from these papers were collected in Section \ref{background}.
\subsection{New parametrisation}
\begin{definition}
Let $e(\Gamma,\omega)\in E_n$ and let $M_R(e)=(x_{ij})$ denote its response matrix. Define the point in
$\mathrm{Gr}(n-1,2n)$ associated to $e(\Gamma,\omega)$ as the row space of the matrix:
\begin{eqnarray}
\Omega_n(e)=\left(
\begin{array}{cccccccc}
x_{11} & 1 & -x_{12} & 0 & x_{13} & 0 & \cdots & (-1)^n\\
-x_{21} & 1 & x_{22} & 1 & -x_{23} & 0 & \cdots & 0 \\
x_{31} & 0 & -x_{32} & 1 & x_{33} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots
\end{array}
\right)
\end{eqnarray}
\end{definition}
\begin{example}
For $e\in E_4$, $\Omega_4(e)$ has the following form:
\begin{equation*}
\Omega_4(e) = \left(
\begin{array}{cccccccc}
x_{11}& 1 & -x_{12} & 0 & x_{13} & 0 & -x_{14} & 1 \\
-x_{21}& 1 & x_{22} & 1 & -x_{23} & 0 & x_{24} & 0 \\
x_{31}& 0 & -x_{32} & 1 & x_{33} & 1 & -x_{34} & 0 \\
-x_{41}& 0 & x_{42} & 0 & -x_{43} & 1 & x_{44} & 1 \\
\end{array}
\right).
\end{equation*}
\end{example}
The sum of the elements in each row of $M_R(e)$ is equal to zero; hence the alternating sum of the rows of $\Omega_n(e)$ vanishes, the rank of $\Omega_n(e)$ is equal to $n-1$, and it indeed defines a point in $\mathrm{Gr}(n-1,2n)$.
Here is one of the main results of the paper:
\begin{theorem}\label{maint} The row space of \, $\Omega_n(e)$ defines the same point in
$\mathrm{Gr}_{\geq 0}(n-1,2n)$ as the point defined by $e(\Gamma,\omega)$ under the Lam embedding.
\end{theorem}
Let $N(\Gamma, \omega)$ be the bipartite network associated to $e\in E_n$ and $O(I)$ be a perfect orientation on $N(\Gamma, \omega)$ with the set of sources $I$ labeled by even indices such that $|I|=n-1$. Such an orientation exists by Lemma \ref{lemmal}:
$$\Delta_{I}^M=\sum\limits_{\Pi \in \Pi(I)}\mathrm{wt}(\Pi)=L \neq 0,$$
so by Remark \ref{bij} for each $\Pi \in \Pi(I)$ there is a perfect orientation $O_{\Pi}(I)$.
Our goal is to connect the Plucker coordinates corresponding to the networks $N(\Gamma, \omega', O)$, $N(\Gamma, \omega)$, and the Postnikov network $PN(\Gamma, \omega',O)$, where the weight function $\omega'$ is defined in Theorem \ref{link}.
We use the following statement from \cite{T}:
\begin{theorem} \cite{T} \label{tal}
Let $PN(\Gamma, \omega , O)$ be a Postnikov network with the set of sources $I$ and $A$ be its extended boundary measurements matrix, then the minors of the matrix $A$ can be calculated as follows:
$$\Delta_J(A)=\frac{\sum_F \mathrm{wt}(F)}{\sum_C \mathrm{wt}(C)},$$
where the sum in the numerator is over all flows whose paths connect vertices in the set of sources $I$ with vertices in the set $J$. The sum in the denominator is over all flows which do not connect points on the boundary.
\end{theorem}
As a corollary we obtain:
\begin{corollary} \label{contal}
Let $N(\Gamma,\omega)$ be the bipartite network associated to $(\Gamma,\omega)\in E_n$ and $O(I)$ be a perfect orientation with the set of sources $I$ labeled by even indices. For a Postnikov network $PN(\Gamma, \omega',O)$ the minors of the extended boundary measurement matrix $A$ obey
$$\Delta_J(A)=\frac{\Delta_J^M}{L},$$
where $L=\Delta_I^M$ is a grove measurement as in Lemma \ref{lemmal}.
\end{corollary}
\begin{proof}
According to Theorems \ref{link} and \ref{tal} we have:
$$\Delta_J(A)=\frac{\sum_F \mathrm{wt}(F)}{\sum_C \mathrm{wt}(C)}=\frac{\Delta_J^F}{\sum_C
\mathrm{wt}(C)}=\frac{\Delta_J^M}{\mathrm{wt}(\Pi_0)\sum_C \mathrm{wt}(C)},$$
where $\mathrm{wt}(\Pi_0)$ is the weight of the almost perfect matching corresponding to the orientation $O(I)$ as explained in Remark \ref{bij}. On the one hand Definition \ref{matrpost} implies that $\Delta_I(A)=1$, on the other hand
$$\Delta_I(A)=\frac{\sum_F \mathrm{wt}(F)}{\sum_C \mathrm{wt}(C)}=\frac{\Delta_I^F}{\sum_C \mathrm{wt}(C)}=\frac{\Delta_I^M}{\mathrm{wt}(\Pi_0)\sum_C \mathrm{wt}(C)}$$
therefore $\Delta_I^M=\mathrm{wt}(\Pi_0)\sum \limits_C \mathrm{wt}(C)$. To finish the proof we use Lemma \ref{lemmal}, which implies that
$$L=\Delta_I^M=\mathrm{wt}(\Pi_0)\sum_C \mathrm{wt}(C).$$
\end{proof}
Now we are ready to prove our theorem. Fix an electrical network $e(\Gamma,\omega)\in E_n$ and an arbitrary $k \in [1\dots n]$. Let $I_k$ be the set of all even indices from $[2 \dots 2n]$ except the even index closest to $2k-1$ {\em counterclockwise}, which we denote by $j_k$. It is clear that $|I_k|=n-1$.
Denote by $j^k$ the even index closest to $2k-1$ {\em clockwise}.
For $e(\Gamma,\omega)$ and $N(\Gamma,\omega)$ as above, let $PN(\Gamma,\omega',O_k)$
be the Postnikov network where $O_k$ is a perfect orientation with the set of sources $I_k$. Let $A_k$ be the extended boundary measurement matrix of $PN(\Gamma,\omega',O_k)$, see Definition \ref{def:extendedmatr}.
\begin{theorem} \label{constr}
Denote by $a_{j^k \to m}$ the entry of $A_k$ which corresponds to the weight of the flow
from $j^k$ to $m.$ Then the following holds:
\\
1. $a_{j^k \to m}=(-1)^{s+1}x_{k\frac{m+1}{2}}$ for any sink with an odd index $m \neq 2k-1$ where $s$ is the number of the elements in $I_k$ situated between the source labeled by $j^k$ and the sink labeled by $m$;\\
2. $a_{j^k \to m}=x_{kk}$ for the sink labeled by $m = 2k-1$;\\
3. $a_{j^k \to m}=(-1)^s$ for the sink labeled by $m = j_k$ where $s$ is the number of the elements in $I_k$ situated between the source labeled by $j^k$ and the sink labeled by $j_k$;\\
4. $a_{j^k \to j^k}=1$; \\
5. $a_{j^k \to m}=0$ if there is a source labeled by $m \neq j^k$.
\end{theorem}
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{picture-9.pdf}
\caption{Example \ref{ex}}
\end{figure}
\begin{example}
\label{ex}
$$L_{1|2|3} = 1,\; L_{12|3}=b,\; L_{13|2}=a,\; L_{23|1} = c;$$
$$a_{21} = a+b,\; a_{22} = 1,\; a_{23}=b,\;a_{24} = 0,\; a_{25} = -a,\; a_{26}=-1;$$
$$x_{12} = \frac{-L_{12|3}}{L_{1|2|3}} = -b,\; x_{13} = \frac{-L_{13|2}}{L_{1|2|3}} = -a.$$
\end{example}
\begin{proof}
1. As follows from Definition \ref{matrpost}
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=(-1)^{s} a_{j^k \to m}.$$
On the other hand Corollary \ref{contal} implies that:
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}.$$
Lemma \ref{lemmal} and Theorem \ref{kenwil} now prove the claim:
$$\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}=\frac{L_{\frac{m+1}{2}k}}{L}=-x_{\frac{m+1}{2}k}=-x_{k\frac{m+1}{2}}.$$
2. As follows from Definition \ref{matrpost}
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=a_{j^k \to m}.$$
On the other hand Corollary \ref{contal} implies that
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}.$$
Lemma \ref{lemmal} and Theorem \ref{kenwil} now prove the claim:
$$\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}=\frac{L_{kk}}{L}=x_{kk}.$$
3. As follows from Definition \ref{matrpost}
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=(-1)^{s}a_{j^k\to m}.$$
On the other hand Corollary \ref{contal} implies that
$$\Delta_{\{m\}\cup I_k -\{j^k\} }(A_k)=\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}.$$
Lemma \ref{lemmal} now proves the claim:
$$\frac{\Delta_{\{m\}\cup I_k -\{j^k\} }^M}{L}=\frac{L}{L}=1.$$
4. Since $m=j^k$ is a source, the claim follows immediately from Definition \ref{matrpost}.
5. Since $m \neq j^k$ is a source, the claim follows immediately from Definition \ref{matrpost}.
\end{proof}
It remains to notice that by Theorem \ref{chanorient} all the vectors representing the rows of the matrix $\Omega_n(e)$ belong to the same subspace defined by extended boundary measurements matrix $A_k$ for any $k$.
Moreover by Corollary \ref{contal} this subspace defines the same point in $\mathrm{Gr}(n-1,2n)$ as the point defined by the bipartite network $N(\Gamma,\omega)$, and by Theorem \ref{elcon} this point belongs to $\mathrm{Gr}(n-1, 2n)_{\geq 0}$.
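Theorem \ref{maint} can be tested numerically on small examples. The sketch below (all helper names are ours) builds $\Omega_3(e)$ for the star network with unit conductances and checks that the maximal minors of a basis of its row space are all positive:
\begin{verbatim}
import numpy as np
from itertools import combinations

def omega(M):
    # Omega_n(e) from an n x n response matrix M = (x_ij): column
    # 2j-1 of row i carries (-1)^(i+j) x_ij; the even columns 2(i-1)
    # and 2i carry 1, the wrap-around entry of row 1 being (-1)^n
    # (the negative index below wraps to the last column).
    n = M.shape[0]
    W = np.zeros((n, 2 * n))
    for i in range(n):
        for j in range(n):
            W[i, 2 * j] = (-1) ** (i + j) * M[i, j]
        W[i, 2 * i + 1] = 1.0
        W[i, 2 * i - 1] = (-1) ** n if i == 0 else 1.0
    return W

# Star network with unit conductances: response matrix (3I - J)/3.
M = (3 * np.eye(3) - np.ones((3, 3))) / 3.0
W = omega(M)
assert np.linalg.matrix_rank(W) == 2    # rank n - 1
rows = W[:2]                            # a basis of the row space
minors = [np.linalg.det(rows[:, list(c)])
          for c in combinations(range(6), 2)]
assert all(m > 1e-12 for m in minors)   # a point of Gr_{>=0}(2, 6)
\end{verbatim}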
\subsection{Compactification}\label{compactification}
In this section we extend our parametrisation of $E_n$ to its compactification introduced in \cite{L}. Let $\overline{E}_n$ be the set of electrical networks in which the weights of some edges of $\Gamma$ are allowed to be equal to $0$ or $\infty$.
One can give a combinatorial interpretation of zero and infinite values of the conductivity: by the Kirchhoff laws, zero conductivity of an edge amounts to the deletion of this edge, while infinite conductivity amounts to its contraction.
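For instance, for a path $\bar 1 - v - \bar 2$ consisting of two edges with conductances $g$ and $h$, Theorem \ref{kenwil} gives $x_{12}=-gh/(g+h)$, and the two limits reproduce contraction and deletion (a small symbolic sketch, our own illustration):
\begin{verbatim}
import sympy as sp

g, h = sp.symbols('g h', positive=True)
# Path 1 -- v -- 2: L_{12} = g*h and L_{1|2} = g + h, so
# x_12 = -g*h/(g + h) (the series formula).
x12 = -g * h / (g + h)
assert sp.limit(x12, h, sp.oo) == -g  # contraction: one edge left
assert sp.limit(x12, h, 0) == 0       # deletion: 1, 2 disconnected
\end{verbatim}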
Edges connecting two boundary vertices can have conductivity equal to $\infty$ as well. This leads us to the definition of a {\em cactus network}.
\begin{definition}\label{def:comp}
Let $S$ be a circle with $n$ boundary points labeled by $1, \dots, n$ and let $\sigma\in \mathcal{NC}_n$. Identifying the boundary points according to the parts of $\sigma$ gives a hollow cactus $S_\sigma$: a union of circles glued together at the identified points. The interior of $S_\sigma$, together with $S_\sigma$ itself, is called a {\em cactus}. A {\em cactus network} is a weighted graph $\Gamma$ embedded into a cactus. In other words, we may think of cactus networks as the networks obtained by contracting the set of edges of infinite conductivity between the boundary vertices defined by $\sigma\in \mathcal{NC}_n$.
\end{definition}
Let $e(\Gamma, \omega) \in \overline E_n$ have no edges with infinite conductivity connecting boundary vertices. After deleting and contracting all edges with conductivities equal to zero and infinity, respectively, we obtain a new electrical network $e'(\Gamma', \omega')$. It is easy to see that the response matrix $M_R(e')$ is well-defined, therefore we are able to use Theorem \ref{maint} to construct the matrix $\Omega_n(e')$. However, if a network $e$ contains edges with infinite conductivities between boundary vertices, we are not able to construct the matrix $M_R(e)$ and therefore $\Omega_n(e)$. In order to resolve this problem we have to slightly modify the parametrisation defined by Theorem \ref{maint}.
\begin{definition}
We let $R_{i,j}$ denote the \textit{effective electrical resistance} between nodes $i$ and $j$ i.e. the voltage at node $i$ which, when node $j$ is held at $0$ volts, causes a unit current to flow through the circuit from node $i$ to node $j$.
\end{definition}
\begin{theorem}\cite{KW} \label{kw-rep} Let $e(\Gamma, \omega)\in E_n$ be a planar circular electrical network on a connected graph $\Gamma$ and let $M_R(e)$ be its response matrix. Denote by $M'_R(e)$ the matrix obtained from $M_R(e)$ by deleting the first row and the last column; then the following holds:
\begin{itemize}
\item $M'_R(e)$ is invertible;
\item The matrix elements of its inverse are given by the formula
\begin{equation}
M'_R(e)^{-1}_{ij}=\begin{cases}
R_{in}, & \text{if}\ i=j, \\
\frac{1}{2}(R_{in}+R _{jn}-R_{ij}), & \text{if}\ i\neq j,
\end{cases}
\end{equation}
\end{itemize}
where $R_{ij}$ is the effective resistance between the boundary points $i$ and $j$.
\end{theorem}
Let $e(\Gamma, \omega)$ be as in Theorem \ref{kw-rep}. Denote by $\Omega'_n(e)$ the matrix obtained from $\Omega_n(e)$ by deleting the last row. Note that it has the same row space as $\Omega_n(e)$, since the deleted row is a linear combination of the remaining ones. Assign to $\Omega'_n(e)$ the following matrix:
\begin{eqnarray}\label{eq:compact}
M'_R(e)^{-1}D_{n-1}\Omega'_n(e),
\end{eqnarray}
where $D_{n-1}$ is the diagonal $(n-1) \times (n-1)$ matrix with the entries $d_{ii}=(-1)^{i+1}$. The matrix \eqref{eq:compact} and $\Omega_n(e)$ define the same point of $\mathrm{Gr}(n-1,2n)$.
Using Theorem \ref{kw-rep} and the particular form of \eqref{eq:compact} we conclude:
\begin{corollary} \label{norm} The matrix entries of \eqref{eq:compact} are linear combinations of $R_{ij}$'s.
\end{corollary}
\begin{proof}
For all columns of the matrix $M'_R(e)^{-1}D_{n-1}\Omega'_n(e)$ with even indices the statement is obvious, because the columns of the matrix $D_{n-1}\Omega'_n(e)$ with even indices contain only the entries $0$, $1$, and $-1$.
For all the odd columns except the one indexed by $2n-1$ the statement follows from Theorem \ref{kw-rep}. To finish the proof, notice that due to Theorem \ref{respc} the column of $D_{n-1}\Omega'_n(e)$ indexed by $2n-1$ is a linear combination of the other odd columns with constant coefficients.
\end{proof}
If the boundary vertices $i$ and $j$ are connected by an edge with infinite conductivity, then $R_{ij}=0$. Thus we obtain the following algorithm for finding the parametrisation of the points of $\mathrm{Gr}(n-1,2n)$ associated with cactus networks.
As it is explained in Definition \ref{def:comp} an arbitrary cactus network $\overline{e}$ is obtained from a network $e(\Gamma, \omega)$ by contracting some edges $e_{ij}$ between the boundary vertices which have the infinite conductivity. Produce a new network $e_{aux}$ by changing the conductivities of $e_{ij}$ to finite quantities $c_{ij}$. For simplicity we assume at first that $\Gamma$ is connected. The matrix $\Omega'_n(e_{aux}) $ defines the same point of $\mathrm{Gr}(n-1,2n)$ as
$$M'_R(e_{aux})^{-1}D_{n-1}\Omega'_n(e_{aux}).$$
Passing to the limit when $R_{ij} \to 0$ is equivalent to passing to the limit when $c_{ij}\to \infty$ therefore we conclude that the cactus network $\overline{e}$ defines the following point of $\mathrm{Gr}(n-1,2n)$:
\begin{equation}
X(\overline{e})=\lim_{c_{ij}\to \infty}\left(M'_R(e_{aux})^{-1}D_{n-1}\Omega'_n(e_{aux})\right)=\lim_{R_{ij}\to 0}\left(M'_R(e_{aux})^{-1}D_{n-1}\Omega'_n(e_{aux})\right).
\end{equation}
\begin{remark}
For a cactus network $\overline{e}$ obtained from a network $e(\Gamma, \omega)\in E_n$ with a disconnected $\Gamma$, we apply the algorithm to each connected component.
\end{remark}
\section{Lagrangian Grassmannian}
\label{sec:Lagr}
Recall that a {\it symplectic vector space} is a vector space equipped with a non-degenerate skew-symmetric bilinear form.
An {\it isotropic subspace} of a symplectic vector space is a vector subspace on which the symplectic form vanishes.
A maximal isotropic subspace is called a {\it Lagrangian} subspace.
For a symplectic vector space $V$, the {\it Lagrangian Grassmannian} $\mathrm{LG}(n,V)$ is the space of Lagrangian subspaces. If it is clear which symplectic space we are working with, we will denote the Lagrangian Grassmannian by $\mathrm{LG}(n)$.
In this section we obtain an important consequence of our parametrisation of the set $E_n$: namely, we prove in Theorem \ref{laggr} that the matrix $\Omega_n(e)$ defines a point in the Lagrangian Grassmannian $\mathrm{LG}(n-1)$.
\begin{definition}
Define the subspace
\begin{equation*}
V:=\{v\in \Bbb C^{2n}| \sum_{i=1}^n(-1)^i v_{2i}=0\,, \sum_{i=1}^n (-1)^iv_{2i-1}=0\}.
\end{equation*}
\end{definition}
\begin{lemma}
All the rows of the matrix $\Omega_n(e)$ belong to the subspace $V$.
\end{lemma}
\begin{proof}
The proof follows from the third property of the matrix $M_R(e)$ given in the Theorem \ref{respc}.
\end{proof}
Fix a basis for the subspace $V$:
\begin{multline}
\label{V:basis}
w_1=(1, 0, 1, 0, \dots, 0, 0, 0),\; w_2=(0, 1, 0, 1, \dots, 0, 0, 0),\dots,\\
w_{2n-2}=(0, 0, 0, 0, \dots, 1, 0, 1).
\end{multline}
Expanding the rows of the matrix $\Omega_n(e)$ in this basis we obtain the main object of this section, the matrix $\widetilde{\Omega}_n(e)$ defined as
\begin{equation*}\label{expansion}
\widetilde{\Omega}_n(e)=\Omega_n(e)B_n^{-1},
\end{equation*}
here $B_n$ is the matrix whose rows are the vectors $w_i$ and $B_n^{-1}$ is its right inverse:
\begin{equation}
\label{eq:Binverse}
B_n^{-1} = \left(
\begin{array}{cccccc}
1& 0 & -1 & 0 & \dots&0 \\
0& 1 & 0 & -1 & \dots&(-1)^{n+1}\\
0& 0 & 1 & 0 & \dots&0\\
0&0&0&1&\dots&(-1)^n\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\
0& 0 & 0 & 0 & \dots&1\\
0& 0 & 0 & 0 & \dots&0\\
0& 0 & 0 & 0 & \dots&0
\end{array}
\right).
\end{equation}
\begin{example} \label{exlagr}
For $e\in E_3$, $\widetilde{\Omega}_3(e)$ has the following form:
\begin{equation*}
\widetilde{\Omega}_3(e) = \left(
\begin{array}{cccc}
x_{11}& 1 & x_{13} & -1 \\
-x_{21}& 1 & -x_{23} & 0 \\
x_{31}& 0 & -x_{31}-x_{32} & 1 \\
\end{array}
\right).
\end{equation*}
For $e\in E_4$, $\widetilde{\Omega}_4(e)$ has the following form:
\begin{equation*}
\widetilde{\Omega}_4(e) = \left(
\begin{array}{cccccc}
x_{11}& 1 & x_{13}+x_{14} & -1 & -x_{14} & 1 \\
-x_{21}& 1 & -x_{23}-x_{24} & 0 & x_{24} & 0 \\
x_{31}& 0 & -x_{31}-x_{32} & 1 & -x_{34} & 0 \\
-x_{41}& 0 & x_{41}+x_{42} & 0 & -x_{41}-x_{42}-x_{43} & 1 \\
\end{array}
\right).
\end{example}
Now we are ready to formulate the main result of this section. Let $\Lambda_{2n-2}$ be the following symplectic form on the space $\Bbb C^{2n-2}$
\begin{equation} \label{def:lambda}
\Lambda_{2n-2} =
\left(
\begin{array}{cccccccccc}
0& 1 & 0 & 0 &\ldots & 0& \\
-1& 0 &-1 & 0 &\ldots& 0& \\
0& 1 & 0 & 1 &\ldots& 0& \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \\
\end{array}
\right),
\end{equation}
then the following theorem holds:
\begin{theorem} \label{laggr} For $e\in E_n$ the matrix $\widetilde{\Omega}_n(e)$ defines a point of $\mathrm{LG}(n-1,V)$.
In other words, the following identity holds:
\begin{eqnarray}
\widetilde{\Omega}_n(e)\Lambda_{2n-2}\widetilde{\Omega}_n(e)^T=0.
\end{eqnarray}
\end{theorem}
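Before turning to the proof, here is a quick numerical check of Theorem \ref{laggr} for $n=3$: we take the star network with unit conductances, the matrix $\widetilde{\Omega}_3(e)$ of Example \ref{exlagr}, and the form $\Lambda_4$ of \eqref{def:lambda} (a sketch for illustration only):
\begin{verbatim}
import numpy as np

x = (3 * np.eye(3) - np.ones((3, 3))) / 3.0  # star, unit weights
Omega3 = np.array([
    [ x[0, 0], 1.0,  x[0, 2],           -1.0],
    [-x[1, 0], 1.0, -x[1, 2],            0.0],
    [ x[2, 0], 0.0, -x[2, 0] - x[2, 1],  1.0],
])
Lambda4 = np.array([
    [ 0.0, 1.0,  0.0, 0.0],
    [-1.0, 0.0, -1.0, 0.0],
    [ 0.0, 1.0,  0.0, 1.0],
    [ 0.0, 0.0, -1.0, 0.0],
])
# The rows span an isotropic (hence Lagrangian) plane.
assert np.allclose(Omega3 @ Lambda4 @ Omega3.T, 0)
\end{verbatim}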
We first prove some auxiliary statements, which will also be useful when we treat the vertex model representation of electrical networks.
Let $\Bbb R^{2n}$ be a vector space equipped with the symplectic form
\begin{eqnarray}
\lambda_{2n}=\left(
\begin{array}{cc}
0 & g\\
-g^T & 0
\end{array}
\right);
\end{eqnarray}
where
\begin{eqnarray}
g=\left(
\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0\\
1 & 1 & 0 & \cdots & 0\\
1 & 1 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & 1 & 1 & \cdots & 1
\end{array}
\right).
\end{eqnarray}
Let $V'\subset \Bbb R^{2n}$ be the subspace of dimension $2n-2$ defined as follows:
$$V':=\{v\in \Bbb R^{2n}\,|\, \sum_{i=1}^n v_{i}=0\ \text{and}\ \sum_{i=n+1}^{2n} v_{i}=0\}.$$
Then $2n-2$ vectors
\begin{eqnarray}
\{e_1-e_2,e_2-e_3,\ldots ,e_{n+1}-e_{n+2},\ldots ,e_{2n-1}-e_{2n}\}\label{basis-V'}
\end{eqnarray}
form a basis for $V'$, and together with $e_{1}$ and $e_{n+1}$ they form a basis of $\Bbb R^{2n}$.
\begin{lemma}
The restriction of $\lambda_{2n}$ to $V'$ in the basis \eqref{basis-V'} of $V'$ takes the form:
\begin{eqnarray}
\left(
\begin{array}{cc}
0 & -h^T\\
h & 0
\end{array}
\right);
\end{eqnarray}
where
\begin{eqnarray}
h=\left(
\begin{array}{ccccc}
-1 & 0 & 0 & \cdots & 0\\
1 &- 1 & 0 & \cdots & 0\\
0 & 1 & -1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1&-1
\end{array}
\right)
\end{eqnarray}
and is \textbf{non-degenerate}.
\end{lemma}
Let $M_R(e)=(x_{ij})$ be the response matrix for $e\in E_n$. Denote by $\Omega^{aux}(e)$ the subspace of $V'$ of dimension $n-1$ spanned by the $n$ vectors
$$(1,0,\ldots ,0,-1,x_{11},\ldots ,x_{1n}),$$
$$(-1,1,0,\ldots ,0,x_{21},\ldots ,x_{2n}),$$
$$(0,-1,1,0,\ldots ,0,x_{31},\ldots ,x_{3n}),$$
$$\ldots$$
$$(0,\ldots ,0,-1,1,x_{n1},\ldots ,x_{nn}).$$
We denote by the same symbol $\Omega^{aux}(e)$ the matrix with these vectors as rows.
\begin{lemma} \label{vertex model} The restriction of $\lambda_{2n}$ to the row space of $\Omega^{aux}(e)$ is zero for any value of the parameters therefore this subspace is a point in $\mathrm{LG}(n-1,V')$.
\end{lemma}
\begin{proof}
Indeed the sum of the elements in each row of the response matrix $M_R(e)$ is equal to zero, therefore:
\begin{equation*}
\Omega^{aux}(e)\lambda_{2n}\Omega^{aux}(e)^t=
\left(\begin{array}{ccccc}
0 & x_{21}-x_{12} & x_{31}-x_{13} & \cdots & x_{n1}-x_{1n}\\
x_{12}-x_{21}& 0 & x_{32}-x_{23} & \cdots & x_{n2}-x_{2n}\\
x_{13}-x_{31}& x_{23}-x_{32} & 0 & \cdots & x_{n3}-x_{3n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\end{array}
\right).
\end{equation*}
Since $M_R(e)$ is a symmetric matrix, we obtain $\Omega^{aux}(e)\lambda_{2n}\Omega^{aux}(e)^t=0$.
\end{proof}
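The lemma only uses the symmetry and the zero row sums of $M_R(e)$; here is a direct numerical check for $n=3$, with the star network response matrix (an illustration, not part of the proof):
\begin{verbatim}
import numpy as np

x = (3 * np.eye(3) - np.ones((3, 3))) / 3.0  # symmetric, rows sum 0
U = np.array([[1., 0., -1.], [-1., 1., 0.], [0., -1., 1.]])
Omega_aux = np.hstack([U, x])                # rows of Omega^aux(e)

g = np.tril(np.ones((3, 3)))
lam = np.block([[np.zeros((3, 3)), g], [-g.T, np.zeros((3, 3))]])
assert np.allclose(Omega_aux @ lam @ Omega_aux.T, 0)
\end{verbatim}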
\begin{lemma}\label{vertex model 2}
\begin{equation}
\Omega_n^{aux}(e)=D_{n}\Omega_n(e)\overline{T}_{2n},
\end{equation}
where $D_{n}$ is the $n\times n$ diagonal matrix with entries $d_{ii}=(-1)^{i+1}$ and $\overline{T}_{2n}$ is the $2n\times 2n$ signed permutation matrix corresponding to the permutation written in row notation as
\[(2,-4,6,\ldots,(-1)^{n+1}2n,1,-3,5,\ldots,(-1)^{n+1}(2n-1)),\]
or as a matrix:
\begin{equation*}
\overline{T}_{2n}=\left(
\begin{array}{cccccccc}
0 & 0 & 0 &\cdots & 1 & 0 &0 &\cdots\\
1 & 0 & 0 & \cdots & 0&0 & 0 & \cdots \\
0 & 0 & 0 & \cdots & 0 & -1 & 0 &\cdots \\
0 & -1 & 0 & \cdots & 0 & 0 & 0 &\cdots \\
0 & 0 & 0 & \cdots & 0 & 0 & 1 &\cdots \\
0 & 0 & 1 & \cdots & 0 & 0 &0 &\cdots \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots
\end{array}
\right).
\end{equation*}
\end{lemma}
\begin{proof}
The proof follows immediately from the structure of the matrix $\Omega_n(e)$.
\end{proof}
Putting together the facts proved above, we obtain
\[0 = \Omega_n^{aux}(e)\lambda_{2n}(\Omega_n^{aux}(e))^t=D_{n}\Omega_n(e)\overline{T}_{2n}\lambda_{2n}\overline{T}^t_{2n}(\Omega_n(e))^tD_{n}^t=\]
\[D_{n}\widetilde{\Omega}_n(e)B_n\overline{T}_{2n}\lambda_{2n}\overline{T}^t_{2n}B^t_n(\widetilde{\Omega}_n(e))^tD_{n}^t.\]
Since $D_{n}$ is invertible,
\begin{eqnarray}
\widetilde{\Omega}_n(e)B_n\overline{T}_{2n}\lambda_{2n}\overline{T}^t_{2n}B^t_n\widetilde{\Omega}_n(e)^t=0.
\end{eqnarray}
Finally, a direct calculation shows that
\begin{eqnarray}
B_n\overline{T}_{2n}\lambda_{2n}\overline{T}^t_{2n}B^t_n=\Lambda_{2n-2},
\end{eqnarray}
which finishes the proof of Theorem \ref{laggr}.
\begin{remark}
Thus we have shown that every $e\in E_n$ defines a point of $\mathrm{LG}(n-1,V)$. Using the continuity argument described in Section \ref{compactification}, we immediately conclude that every $e\in \overline{E}_n$ defines a point of $\mathrm{LG}(n-1,V)$ as well. From now on we will write $\mathrm{LG}(n-1)$ instead of $\mathrm{LG}(n-1,V)$.
\end{remark}
\section{Fundamental representation of the symplectic group}
\label{sec:representation}
In this section we connect the results from \cite{L} to representation theory of the symplectic group.
The symplectic group is a classical group defined as the set of linear transformations of a $2n$-dimensional vector space over $\Bbb C$ which preserve a non-degenerate skew-symmetric bilinear form. Such a vector space is called a symplectic vector space, and the symplectic group of an abstract symplectic vector space $V$ is denoted $Sp(V)$. Upon fixing a basis for $V$, the symplectic group becomes the group of $2n \times 2n$ symplectic matrices under the operation of matrix multiplication. This group is denoted by $Sp(2n,\Bbb C)$. If the bilinear form is represented by the nonsingular skew-symmetric matrix $\Lambda$, then
\[
{\displaystyle \operatorname {Sp} (2n,\Bbb C)=\{M\in M_{2n\times 2n}(\Bbb C):M^{\mathrm {T} }\Lambda M=\Lambda \},}\]
where $M^T$ is the transpose of $M$.
The Lie algebra of $Sp(2n, \Bbb C)$ is the set
\[{\displaystyle {\mathfrak {sp}}(2n,\Bbb C)=\{X\in M_{2n\times 2n}(\Bbb C):\Lambda X+X^{\mathrm {T} }\Lambda =0\},}\]
equipped with the commutator as its Lie bracket.
\subsection{The Lam group}
\begin{definition}
\label{def:xy}
Introduce the following auxiliary matrices:
\begin{itemize}
\item for $i \in \{1, \dots, n-1\}$, $x_i(t)$ is the upper-triangular matrix with ones on the main diagonal and the only non-zero entry $(x_i)_{i,i+1}=t$ above the diagonal;
\item for $i \in \{1, \dots, n-1\}$, $y_i(t)$ is the lower-triangular matrix with ones on the main diagonal and the only non-zero entry $(y_i)_{i+1,i}=t$ below the diagonal;
\item for $i = n$, $x_n(t)=s_{n} x_1(t) s_{n}^{-1}$;
\item for $i = n$, $y_n(t)=s_{n} y_1(t) s_{n}^{-1}$;
\end{itemize}
where
\begin{equation} \label{transpos}
s_{n} = \left(
\begin{array}{cccccccc}
0 & 1 & 0 & 0 & \cdots & 0\\
0 & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & 1\\
(-1)^{n-1} & 0 & 0 & 0 & \cdots & 0
\end{array}
\right)
\end{equation}
\end{definition}
The following proposition appeared in \cite{L} and was proved by T. Lam and A. Postnikov.
\begin{proposition}\label{u} For $1 \leq i \leq 2n$, let $u_i(t) = x_i(t)y_{i-1}(t) = y_{i-1}(t)x_i(t)$, where the indices are taken $\bmod\ 2n$, be a one-parameter subgroup of $GL(2n)$.
Then the matrices $u_i(t)$ satisfy the following relations:
\begin{itemize}
\item $u_i(t_1)u_i(t_2) = u_i(t_1 + t_2)$
\item $u_i(t_1)u_j (t_2) = u_j (t_2)u_i(t_1)$ for $|i - j| \geq 2$
\item
$u_i(t_1)u_{i\pm 1}(t_2)u_i(t_3) = u_{i\pm 1}(t_2t_3/(t_1 + t_3 + t_1t_2t_3))u_i(t_1 + t_3 + t_1t_2t_3)u_{i\pm 1}(t_1t_2/(t_1 + t_3 + t_1t_2t_3))$
\end{itemize}
The set of all matrices $u_i(t)$, where $t$ is an arbitrary parameter and $i=1, \dots, 2n$, generates the group $\mathcal El_{2n}$, which we will call the Lam group.
\end{proposition}
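For interior indices the generators are simply $u_i(t)=\mathrm{Id}+t(E_{i,i-1}+E_{i,i+1})$, and the relations can be checked directly; here is a minimal numerical sketch of the third relation in $GL(4)$ (our own illustration):
\begin{verbatim}
import numpy as np

def u(i, t, N=4):
    # u_i(t) = Id + t(E_{i,i-1} + E_{i,i+1}) for an interior index
    # i (1-based, 1 < i < N).
    U = np.eye(N)
    U[i - 1, i - 2] += t   # the y_{i-1}(t) factor
    U[i - 1, i] += t       # the x_i(t) factor
    return U

t1, t2, t3 = 0.7, 1.3, 0.4
p = t1 + t3 + t1 * t2 * t3
lhs = u(2, t1) @ u(3, t2) @ u(2, t3)
rhs = u(3, t2 * t3 / p) @ u(2, p) @ u(3, t1 * t2 / p)
assert np.allclose(lhs, rhs)
\end{verbatim}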
The Lie algebra $\mathfrak{el}_{2n}$ of $\mathcal El_{2n}$ is generated, by definition, by the logarithms of the generators $u_i(t)$. Denote these generators by $\mathfrak{u}_i$.
An explicit representation of $\mathfrak{el}_{2n}$ as a subalgebra of $\mathfrak sl_{2n}$ was given in \cite{L}:
$$\mathfrak{u}_i=e_i+f_{i-1},\,\,i=2,\ldots,2n-2,$$
$$\mathfrak{u}_1=e_1+(-1)^n[e_{1},[e_{2},[\ldots[e_{2n-2},e_{2n-1}]\ldots]]],$$
$$\mathfrak{u}_0 = \mathfrak{u}_{2n}=f_{2n-1}+(-1)^n[f_{2n-1},[f_{2n-2},[\ldots[f_{2},f_{1}]\ldots]]],$$
where $e_i = E_{i,i+1}$ and $f_i = E_{i+1,i}$.
It is easy to check $\mathfrak u_i$ satisfy the relations
\[
[\mathfrak{u}_i,\mathfrak{u}_j]=0,\,\text{\rm if}\,\, |i-j|>1\]
\[ [\mathfrak{u}_i,[\mathfrak{u}_i,\mathfrak{u}_j]]=-2\mathfrak{u}_i, \,\text{\rm if}\,\, |i-j|=1\,\text{mod\,\,2n}.
\]
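These relations are straightforward to verify for interior indices; a two-line check in $\mathfrak{sl}_4$ (our own illustration):
\begin{verbatim}
import numpy as np

u2 = np.zeros((4, 4)); u2[1, 0] = u2[1, 2] = 1.0  # u_2 = e_2 + f_1
u3 = np.zeros((4, 4)); u3[2, 1] = u3[2, 3] = 1.0  # u_3 = e_3 + f_2
br = lambda A, B: A @ B - B @ A
assert np.allclose(br(u2, br(u2, u3)), -2 * u2)
assert np.allclose(br(u3, br(u3, u2)), -2 * u3)
\end{verbatim}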
The affine version of the electrical Lie algebra, $\hat{\mathfrak{el}}_{2n}$, was introduced in \cite{BGG}; it is the affine version of the object introduced in \cite{LP}, and it is defined there together with an embedding into the affine version of the algebra $\mathfrak{sl}_{2n}$. The algebra $\hat{\mathfrak{el}}_{2n}$ has finite dimensional representations; the image of one of them has generators $\mathfrak{u}_i$ obeying the above relations.
The group generated by the formal exponentials of the elements of the affine electrical Lie algebra we will call the {\it affine electrical Lie group}.
The Lam group $\mathcal El_{2n}$ is a representation of the affine electrical Lie group.
\begin{remark}
The connection between the affine electrical Lie algebra and the affine symplectic Lie algebra is not clear to us at the moment.
\end{remark}
We write the operators $x_i(t), y_i(t)$ and $u_i(t)$ as acting on $\mathrm{Gr}(n-1,2n)$ on the {\bf right} to make this consistent with Section \ref{Sec:nonneg}, where these operators arise in the process of adding spikes and bridges to a bipartite network. We will therefore use an equivalent definition of the symplectic group and the symplectic algebra. For us, for example, the symplectic group is
\[
{\displaystyle \operatorname {Sp} (2n,\Bbb C)=\{M\in M_{2n\times 2n}(\Bbb C):M \Lambda M^{\mathrm {T}}=\Lambda \}}.\]
Recall that we have defined the subspace $V\subset \mathbb{C}^{2n}$ of dimension $2n-2$ as follows:
$$V:=\{v\in \Bbb C^{2n}\,|\, \sum_{i=1}^n(-1)^i v_{2i}=0,\ \sum_{i=1}^n (-1)^iv_{2i-1}=0\}.$$
Now, we are able to prove the main result of this section.
\begin{theorem} \label{res}
Operators $u_i(t)|_V$ preserve the symplectic form $\Lambda_{2n-2}$ defined in \eqref{def:lambda}. Moreover, the restriction of the Lam group to the subspace $V$, generated by the set of matrices $u_i(t)|_V$, $i=1, \dots, 2n$, gives a representation of the symplectic group $Sp(2n-2)$.
\end{theorem}
\begin{proof}
Using the definition of the matrix $B_n$, an explicit form of the restriction of $u_i(t)$ to the subspace $V$ in the basis \eqref{V:basis} can be computed as:
\begin{equation}
\label{uib}
u_i(t)|_V=B_nu_i(t)B_n^{-1}.
\end{equation}
In particular,
\begin{eqnarray} \label{expl-rest}
u_1(t)|_V =& (s_{n-1}|_V)^2 (u_3(t)|_V) (s_{n-1}|_V)^{-2},\\\nonumber
u_2(t)|_V =&y_1(t),\\\nonumber
u_i(t)|_V =&x_{i-2}(t)y_{i-1}(t),\;3\leq i\leq 2n-2,\\\nonumber
u_{2n-1}(t)|_V =&x_{2n-3}(t);\\\nonumber
u_{2n}(t)|_V =& (s_{n-1}|_V)^2 (u_2(t)|_V)(s_{n-1}|_V)^{-2}.\nonumber
\end{eqnarray}
here
\begin{equation}
s_{n-1}|_V = B_ns_{n-1}B_n^{-1}= \left(
\begin{array}{cccccccccc}
0& 1 & 0 & 0 &\ldots & 0& 0 & 0& \\
0& 0 & 1 & 0 &\ldots& 0& 0 & 0&\\
0& 0 & 0 & 1 &\ldots& 0& 0 & 0&\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \\
0& 0 & 0 & 0 &\ldots& 0& 0& 1& \\
(-1)^n & 0 & (-1)^{n+1} & 0 &\ldots& 0 & 1 & 0& \\
\end{array}
\right).
\end{equation}
Let us give explicit expressions for $u_1(t)|_V$ and $u_{2n}(t)|_V$.
\begin{equation*}
u_{1}(t)|_V = \left(
\begin{array}{cccccccc}
1& t & 0 & -t &\ldots & (-1)^{n+1}t& 0 & (-1)^{n}t \\
0& 1 & 0 & 0 &\ldots& 0& 0 & 0\\
0& 0 & 1 & 0 &\ldots& 0& 0 & 0\\
0& 0 & 0 & 1 &\ldots& 0& 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0& 0 & 0 & 0 &\ldots& 1& 0& 0 \\
0& 0 & 0 & 0 &\ldots& 0 & 1 & 0\\
0& 0 & 0 & 0 &\ldots& 0 & 0 & 1\\
\end{array}
\right),
\end{equation*}
\begin{equation*}
u_{2n}(t)|_V = \left(
\begin{array}{cccccccc}
1& 0 & 0 & 0 &\ldots & 0& 0 & 0 \\
0& 1 & 0 & 0 &\ldots& 0& 0 & 0\\
0& 0 & 1 & 0 &\ldots& 0& 0 & 0\\
0& 0 & 0 & 1 &\ldots& 0& 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0& 0 & 0 & 0 &\ldots& 1& 0& 0 \\
0& 0 & 0 & 0 &\ldots& 0& 1 & 0\\
(-1)^n t& 0 & (-1)^{n+1}t & 0 &\ldots& 0 & t & 1\\
\end{array}
\right).
\end{equation*}
With these formulas it is immediately checked that
$u_i(t)|_V\Lambda_{2n-2}u_i(t)^t|_V=\Lambda_{2n-2}$.
It is well known that for a simple algebraic group the image of the exponential map generates the whole group.
The generators $u_i(t)$, $2\leq i\leq 2n-1$, are the exponentials of the elements $e_{i-2}+f_{i-1}$ (where $f_{2n-2}=e_0=0$) of the Lie algebra $\mathfrak{sl}_{2n-2}$. These elements generate the symplectic Lie algebra $\mathfrak {sp}_{2n-2}$, as a dimension count shows; therefore we conclude that the above representation of the Lam group
factors through the vector representation of $Sp(2n-2)$.
\end{proof}
\subsection{The symplectic group and the Lagrangian Grassmannian}
Recall that in \cite{L} it was proved that the image of the set $\overline E_n$ can be described in combinatorial terms as follows.
\begin{definition}
Define a matrix $A_n=(a_{I\sigma})$, where $I$ runs over the subsets of $\{1,\dots, 2n\}$ with $|I|=n-1$ and $\sigma\in \mathcal{NC}_n$, as follows:
\begin{equation*}
a_{I\sigma}=\begin{cases}
1,\, \text{if}\,\, \text{$\sigma$ is concordant with $I$} \\
0,\, \text{otherwise.}\,\\
\end{cases}
\end{equation*}
Let $H$ be the column space of $A_n$; it is a subspace of $\bigwedge^{n-1}\Bbb R^{2n}$ and $\dim H=C_{n}$, the Catalan number. Denote by $\Bbb PH$ the projectivization of $H$.
\end{definition}
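For instance, for $n=3$ all five partitions of the set $\{1,2,3\}$ are non-crossing, so $A_3$ has five columns and $H\subset \bigwedge^{2}\Bbb R^{6}$ is five-dimensional, in agreement with $C_3=5$.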
We will use the following notation later. The standard basis vectors of $\Bbb R^{2n}$ will be denoted by $e_i$. For $I=(i_1, i_2, \dots, i_{n-1})$ denote by $e_{I}\in \bigwedge^{n-1}\Bbb R^{2n}$ the standard vector $e_{i_1}\wedge e_{i_2}\wedge \dots \wedge e_{i_{n-1}}$ and by $v_{\sigma}=\sum_{I}a_{I\sigma}e_{I}$ the column of $A_n$ labeled by $\sigma$.
Notice that Definition \ref{def-elb} and Definition \ref{def-grov2} naturally generalize to cactus networks; therefore we are able to refine Theorem \ref{elcon}:
\begin{theorem} \cite{L} \label{elconcom}
Let $e \in \overline{E}_n$ and let $N(\Gamma, \omega)$ be the bipartite network associated with $e$. Then the following holds:
$$\Delta_{I}^M= \sum\limits_{\sigma} a_{I\sigma} L_{\sigma},$$
where the summation is over all non-crossing partitions $\sigma.$
\end{theorem}
The next statement is an easy corollary of Theorem \ref{elconcom}.
\begin{theorem} \cite{L} \label{Lthm}
The image of the set $\overline E_n$ in
$\mathrm{Gr}(n-1, 2n)$ belongs to the
intersection $\mathrm{Gr}(n-1,2n)\cap\Bbb PH$.
\end{theorem}
We will show that $H$ is invariant under the action of the group $Sp(2n-2)$; moreover, it can be identified with the fundamental representation of the group $Sp(2n-2)$ which corresponds to the last vertex of the Dynkin diagram for the Lie algebra $\mathfrak{sp}_{2n-2}$.
Recall some basic facts about this representation. Fix a symplectic form $\omega\in \bigwedge^2\Bbb R^{2k}$ and denote by $Sp(2k)$ the subgroup of elements of $SL(2k)$ which preserve $\omega$. We refer to it as the symplectic group. The representation theory of $Sp(2k)$ is well known. The fundamental representations are irreducible representations labeled by the vertices of the Dynkin diagram for the Lie algebra $\mathfrak{sp}_{2k}$. The representation corresponding to the last vertex is a sub-representation of the fundamental representation $\bigwedge^k \Bbb C^{2k}$ of $SL(2k)$ and can be described as the kernel of the operator
\begin{equation}\label{def:Q}
Q:\bigwedge^k \Bbb C^{2k}\rightarrow \bigwedge^{k-2} \Bbb C^{2k}
\end{equation}
given by the formula
$$Q(x_1\wedge x_2\cdots \wedge x_k)=\sum_{i<j} \omega(x_i,x_j)x_1 \wedge\cdots \wedge \hat x_i \wedge\cdots \wedge \hat x_j \wedge \cdots \wedge x_k$$
where $\omega$ is the symplectic form, see \cite{FH} for example. Computing the dimension of the kernel we obtain
$$\binom{2k}{k}-\binom{2k}{k-2}=C_{k+1}.$$
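For example, for $k=2$ this gives $\binom{4}{2}-\binom{4}{0}=6-1=5=C_3$, and for $k=3$ it gives $\binom{6}{3}-\binom{6}{1}=20-6=14=C_4$.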
The appearance of the Lagrangian Grassmannian is very natural in this context.
\begin{lemma} $\mathrm{LG}(k,2k) = \mathrm{Gr}(k,2k)\cap \Bbb P(\ker Q)$.
\end{lemma}
\begin{proof}
Clearly $\mathrm{LG}(k,2k) \subset \mathrm{Gr}(k,2k) \cap \Bbb P(\ker Q)$.
For the other inclusion, if $w \in \mathrm{Gr}(k,2k) \cap \Bbb P(\ker Q)$, then $w$ is the equivalence class of $x_1 \wedge\ldots \wedge x_k$ with $x_1,\ldots,x_k$ linearly independent and $Q(w) = 0$, that is
$$Q(w) = \sum_{i<j}\omega (x_i,x_j) x_1 \wedge\ldots \wedge \hat x_i \wedge\ldots \wedge \hat x_j \wedge\ldots\wedge x_k = 0.$$
Since the vectors $x_1 \wedge\ldots \wedge \hat x_i \wedge\ldots \wedge \hat x_j \wedge\ldots\wedge x_k$, $1\leq i<j\leq k$, are linearly independent, we conclude that $\omega (x_i,x_j) =0$ for all $1\leq i < j \leq k$, i.e. $w$ is Lagrangian.
\end{proof}
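In the smallest case $k=1$ the operator $Q$ maps to $\bigwedge^{-1}\Bbb C^{2}=0$, so its kernel is all of $\bigwedge^{1}\Bbb C^{2}$ and has dimension $2=C_2$; correspondingly, every line in a two-dimensional symplectic space is Lagrangian, so $\mathrm{LG}(1,2)=\mathrm{Gr}(1,2)$.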
The above lemma suggests a way to refine the statement of Theorem \ref{Lthm}. We have already proved that the image of the set $E_n$ lands
in
$\mathrm{Gr}(n-1,2n-2)\cong\mathrm{Gr}(n-1,V)\subset \mathrm{Gr}(n-1,2n)$. Now one would expect that the subspace $H$ is the kernel of the operator $Q$ and therefore is invariant under the action of the group $Sp(2n-2)$. This is indeed the case.
\begin{theorem}\label{mainth}
The following holds
\begin{itemize}
\item $H$ is a subspace of $\bigwedge^{n-1} V$.
\item $H$ coincides with the kernel of the operator $Q$.
\item The subspace $H$ is invariant under the action of the Lam group and hence under the action of $Sp(2n-2)$. Moreover $H$ is the space of the fundamental representation of the group $Sp(2n-2)$ which corresponds to the last vertex of the Dynkin diagram.
\end{itemize}
\end{theorem}
\begin{proof}
Note that the operator $Q$ defined above descends to the projectivizations of the source and the target. Therefore it makes sense to apply it to points of $\mathrm{Gr}(n-1,2n)$.
Since the basis vectors of $H$ are labeled by $\sigma\in\mathcal{NC}_n$,
we can prove the first two statements by induction on the number $r(\sigma)$ defined for $\sigma$ by the following formula:
$$r(\sigma)=n-c(\sigma),$$
where $c(\sigma)$ is the number of connected components of $\sigma$.
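For instance, for $n=3$ the partition $(12|3)$ has $c(\sigma)=2$ and hence $r(\sigma)=1$, while the one-block partition $(123)$ has $r(\sigma)=2$.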
The basis of induction is provided by the partition $\sigma=(1|2|3|\dots|n)$ since it is the only partition with $r(\sigma)=0$.
Let $e_{\emptyset} \in E_n$ be the empty electrical network; then by a direct calculation we conclude that
\begin{eqnarray}
X(e_{\emptyset})=(e_2+e_4)\wedge (e_4+e_6)\wedge \dots \wedge(e_{2n-2}+e_{2n})= w_2 \wedge w_4 \wedge \dots \wedge w_{2n-2}.
\end{eqnarray}
It is clear that $w_i\in V$; therefore $X(e_{\emptyset})\in\bigwedge^{n-1} V$, and by Theorem \ref{laggr} it belongs to the kernel of $Q$ as well.
On the other hand $X(e_{\emptyset})=v_{1|2|3|\dots|n}$ so we conclude that the basis vector $v_{1|2|3|\dots|n}\in H$ belongs to $\bigwedge^{n-1}V$ and to the kernel of $Q$ providing therefore the basis of the induction.
Now let $\sigma$ be an arbitrary non-crossing partition and $v_\sigma\in H$ the corresponding basis vector. Consider any electrical network $e_{\sigma}(\Gamma, \omega) \in E_n$ such that the connected components of the graph $\Gamma$ induce on the boundary vertices a non-crossing partition equal to $\sigma$. The point $X(e_{\sigma})$ belongs to $\mathrm{LG}(n-1)$ by Theorem \ref{laggr}, therefore
\begin{eqnarray}
Q(X(e_{\sigma}))=Q(\sum_{I} \Delta_I e_{I})=0.
\end{eqnarray}
Using Theorem \ref{elcon} we rewrite the last identity as follows:
\begin{equation}
Q(\sum_{I} \Delta_I e_{I})=Q(\sum_{I}(\sum_{\delta\in\mathcal{NC}_n}a_{I\delta}L_{\delta})e_{I})=Q(L_{\sigma}v_{\sigma})+\sum_{\sigma'\not=\sigma}Q(L_{\sigma'}v_{\sigma'})=0.
\end{equation}
It is easy to see that each $\sigma'$ with $L_{\sigma'}$ not equal to zero has $r(\sigma')$ strictly less than $r(\sigma)$, so by the induction hypothesis each $v_{\sigma'}$ belongs to $\bigwedge^{n-1}V$ and to the kernel of $Q$; thus we obtain that $v_{\sigma}$ belongs to $\bigwedge^{n-1}V$ and to the kernel of $Q$ as well.
Since the subspace $H$ is contained in the kernel of the operator $Q$ and has the same dimension as the kernel, it coincides with it.
The restriction of the Lam group action to the subspace $V$ coincides by Theorem \ref{res} with the standard action of
$Sp(2n-2)$ on $\Bbb R^{2n-2}\cong V$. By the above, the kernel of the operator $Q$, and hence the subspace $H$, is the representation space of the fundamental representation of $Sp(2n-2)$ which corresponds to the last vertex of the Dynkin diagram.
\end{proof}
\begin{remark} Various explicit bases of this subrepresentation are of interest and have been studied extensively, see \cite{PLG}, \cite{DC} for example.
It is interesting to compare the basis found in \cite{L} to the standard bases found in \cite{DC}.
\end{remark}
Putting it all together allows us to refine the statement of Theorem \ref{Lthm}.
\begin{theorem}
The image of the set $\overline{E}_n$ lies in the projectivization of the orbit of the highest weight vector $X(e_{\emptyset})$ in the fundamental representation of the group $Sp(2n-2)$ which corresponds to the last vertex of the Dynkin diagram for the Lie algebra $\mathfrak{sp}_{2n-2}$.
\end{theorem}
\section{Non-negative Lagrangian Grassmannian}\label{Sec:nonneg}
The choice of a symplectic form on $\Bbb R^{2n}$ leads to a specific embedding $Sp(2n)\rightarrow SL(2n)$ and hence to a specific embedding $\mathrm {LG}(n)\rightarrow \mathrm {Gr}(n,2n)$.
Lusztig introduced the notion of the positive part of a homogeneous space, in particular of the type $A$ flag variety and the Grassmannian $\mathrm {Gr}(n,2n)$.
This was described explicitly in \cite{MR}.
According to \cite{K} there is a choice of the symplectic form such that the Lusztig non-negative part of $\mathrm {LG}(n)$ set-theoretically obeys the identity
$$\mathrm{LG}_{\geq 0}(n)=\mathrm{Gr}_{\geq 0}(n,2n)\cap \mathrm{LG}(n).$$
In this section we will take the above formula as the definition of the non-negative part of $\mathrm{LG}(n)$ for our choice of the symplectic form, which is different from the choice made in \cite{K}, and show that the points of $\mathrm{LG}(n-1)$ defined by $\widetilde{\Omega}_n(e)$ from (\ref{expansion}) are non-negative. We will prove it using the technique of adding boundary spikes and bridges developed in \cite{L}.
Recall that according to our convention the operators $x_i(t)$, $y_i(t)$ and $u_i(t)$ act on the right.
\begin{theorem} \cite{LT} \label{b-b}
Let $N(\Gamma, \omega)$ be a bipartite network with $n$ boundary vertices. Assume that $N'(\Gamma', \omega')$ is obtained from $N(\Gamma, \omega)$ by adding a bridge with the weight $t$, with a white vertex at $i$ and a black vertex at $i+1$. Then the points $X(N')$ and $X(N)$
of $\mathrm{Gr}(n-1,2n)$
associated with $N'(\Gamma', \omega')$ and $N(\Gamma, \omega)$ respectively are related to each other as follows:
$$X(N')=X(N) x_i(t).$$
If $N''$ is obtained from $N(\Gamma, \omega)$ by adding a bridge with the weight $t$, with a black vertex at $i$ and a white vertex at $i+1$, then the points $X(N'')$ and $X(N)$ of $\mathrm{Gr}(n-1,2n)$ associated with $N''(\Gamma'', \omega'')$ and $N(\Gamma, \omega)$ are related to each other as follows: $$X(N'')=X(N) y_i(t),$$
where the matrices $x_i(t)$ and $y_i(t)$ were introduced in Definition \ref{def:xy}.
Thus adding any bridge with a positive weight preserves non-negativity.
\end{theorem}
\begin{remark} \label{s-change}
Let $e\in E_n$ and let $e_{cl}\in E_n$ be obtained from $e$ by the clockwise shift of the indices of the boundary vertices by one. Then
$$X(N_{cl})=X(N)s_n,$$
where the matrix $s_n$ was defined in \eqref{transpos}.
\end{remark}
\begin{definition} \cite{L}
For $e(\Gamma, \omega) \in E_n$, an arbitrary integer
$k\in \{1, 2,\dots , n\}$, and a non-negative real number $t$, define $u_{2k-1}(t)(e)$ to be the electrical network obtained from $e$ by adding a new edge with the weight $\frac{1}{t}$ connecting a new boundary vertex $v$ to the old boundary vertex labeled by $k$, which becomes an interior vertex. We call this operation \textit{adding a boundary spike}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{picture-8.pdf}
\caption{Adding a boundary bridge and a boundary spike}
\end{figure}
For $e(\Gamma, \omega) \in E_n$, an arbitrary integer
$k\in \{1, 2,\dots , n\}$, and a non-negative real number $t$, define $u_{2k}(t)(e)$ to be the electrical network obtained from $e$ by adding a new edge from $k$ to $k + 1$ (indices taken modulo $n$) with the weight $t$. We call this operation \textit{adding a boundary bridge}.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{picture-11.pdf}
\caption{Adding a bridge}
\end{figure}
\begin{theorem} \cite{L} \label{s-b}
Let $e(\Gamma, \omega) \in E_n$ be an electrical network and let $N_1(\Gamma, \omega)$ be the bipartite network associated with it. Likewise, for the electrical networks $u_{2k-1}(t)(e)$ and $u_{2k}(t)(e)$ let $N_2(u_{2k-1} \Gamma, u_{2k-1} \omega)$ and $N_2(u_{2k} \Gamma, u_{2k} \omega)$ be the associated bipartite networks. Then the points $X(N_1)$ and $X(N_2)$ of the Grassmannian defined by $N_1(\Gamma, \omega)$, $N_2(u_{2k-1} \Gamma, u_{2k-1} \omega)$ and $N_2(u_{2k} \Gamma, u_{2k} \omega)$ are related to each other as follows:
$$X(N_2)=X(N_1)u_{2k-1}(t), \qquad X(N_2)=X(N_1)u_{2k}(t),$$
where $u_i(t)=x_i(t)y_{i+1}(t).$
\end{theorem}
We will illustrate this statement with a few particular examples to help the reader see what is going on.
Consider the following $e\in E_n$ and the bipartite network $N(\Gamma, \omega)$ associated with it.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{picture-1.pdf}
\caption{An electrical network and its bipartite network}
\end{figure}
Adding a boundary bridge between the vertices $1$ and $2$ we obtain the electrical network $u_2(b)(e)$ and the associated bipartite network $N(u_2\Gamma, u_2\omega).$
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{picture-2.pdf}
\caption{$u_2(b)(e)$ on the left, $N(u_2\Gamma, u_2\omega)$ on the right}
\end{figure}
Using Postnikov transformations \cite{LT} for bipartite networks we obtain that $N(u_2\Gamma, u_2\omega)$ is equivalent to the following bipartite network which can be obtained from the network $N(\Gamma,\omega)$ by adding two bridges with weights $b$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{picture-3.pdf}
\caption{Network $N(\Gamma,\omega)$ with two added bridges}
\end{figure}
\newpage
Similarly, for $e\in E_n$ on the left of Fig. \ref{fig:04}, the network $u_3(\frac{1}{a})(e)$, on the left of Fig. \ref{fig:05}, is obtained by adding a spike at the vertex $3$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{picture-4.pdf}
\caption{$e$ and associated bipartite network}
\label{fig:04}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{picture-5.pdf}
\caption{$u_3(\frac{1}{a})(e)$ and associated bipartite network $N(u_3\Gamma, u_3\omega)$}
\label{fig:05}
\end{figure}
Again using the Postnikov transformations we obtain that the bipartite network associated with $u_3(\frac{1}{a})(e)$ is equivalent to the following network which can be obtained from the network $N(\Gamma, \omega)$ by adding two bridges with weights $\frac{1}{a}$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{picture-6.pdf}
\caption{A bipartite network equivalent to $N(u_3\Gamma, u_3\omega)$}
\label{fig:bipartite}
\end{figure}
It was proved in \cite{L} that the image of $\overline{E}_n$ lands in $\mathrm{Gr}_{\geq 0}(n-1,2n)$. Now we are ready to prove the following theorem:
\begin{theorem} \label{nonneglagr}
Let $e(\Gamma, \omega)\in E_n$ and let $\widetilde{\Omega}_n(e)$ be the point of $\mathrm{LG}(n-1)$ associated with it. Then $\widetilde{\Omega}_n(e)$ belongs to $\mathrm{LG}_{\geq 0}(n-1)$.
\end{theorem}
\begin{proof}
As was demonstrated in Chapter 2.4 of \cite{CIM1}, each electrical network is equivalent to a critical electrical network \cite{Kl}. Moreover, two equivalent electrical networks define the same point of $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ \cite{L} and as a consequence the same point of $\mathrm{LG}(n-1).$ Therefore it is sufficient to prove our statement for critical electrical networks.
The proof goes by induction on the number of edges of $e \in E_n.$ Our argument is very similar to Lam's argument for the proof of the following statement from \cite{L}: for each point $X \in \mathrm{Gr}_{\geq 0}(k,n)$ there is a bipartite network $N(\Gamma, \omega)$ which defines $X.$
Lam's original proof is based on the fact that each point $X \in \mathrm{Gr}_{\geq 0}(k,n)$ can be ``reduced''.
Note that if $e$ has more than one connected component, the matrix $\widetilde{\Omega}_n(e)$ has a block structure and it is sufficient to check non-negativity for each block, i.e. we can deal with each connected component of $e$ separately; therefore let us suppose that $e$ is connected.
The basis of induction is provided by the fact that each electrical network with three or fewer edges defines a positive point in $\mathrm{LG}(n-1).$ In fact we will prove a more general statement: each $e\in E_3$ defines a point in $\mathrm{LG}_{\geq 0}(2).$ Consider an electrical network $e\in E_3$ and the associated point $\widetilde{\Omega}_3(e)\in \mathrm{LG}(2)$, which has the following form (see Example \ref{exlagr}):
\begin{equation*}
\widetilde{\Omega}_3(e) = \left(
\begin{array}{ccccc}
-x_{21}& 1 & -x_{23} & 0 & \\
x_{31}& 0 & -x_{31}-x_{32} & 1 & \\
\end{array}
\right).
\end{equation*}
The Plücker coordinates of $\widetilde{\Omega}_3(e)$ are equal to
\begin{equation*}
\Delta_{12}(\widetilde{\Omega}_3(e))=-x_{31},\, \Delta_{13}(\widetilde{\Omega}_3(e))=x_{21}x_{31}+x_{21}x_{32}+x_{23}x_{31},\, \Delta_{14}(\widetilde{\Omega}_3(e))=-x_{21},
\end{equation*}
\begin{equation*}
\Delta_{23}(\widetilde{\Omega}_3(e))=-x_{31}-x_{32},\, \Delta_{24}(\widetilde{\Omega}_3(e))=1, \, \Delta_{34}(\widetilde{\Omega}_3(e))=-x_{23}.
\end{equation*}
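For instance, $\Delta_{13}(\widetilde{\Omega}_3(e))$ is the minor formed by the first and third columns:
$$(-x_{21})(-x_{31}-x_{32})-(-x_{23})\,x_{31}=x_{21}x_{31}+x_{21}x_{32}+x_{23}x_{31}.$$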
Using the properties of a response matrix (see Theorem \ref{respc}), in particular that its off-diagonal entries $x_{ij}$, $i\neq j$, are non-positive, we obtain that each Plücker coordinate is non-negative.
Now we turn to the induction step. An edge of a critical electrical network $e\in E_n$ is called a boundary edge (respectively a boundary spike) if after its deletion (respectively contraction) we obtain an electrical network $e'$ such that $e=e'u_i(t)$, where $u_i(t)$ corresponds to the edge which we deleted (respectively contracted). Notice that $e'$ is again critical \cite{CIM1}.
Consider a connected critical electrical network $e\in E_n$ with more than three edges and more than three boundary vertices and the associated point $\widetilde{\Omega}_n(e).$ As was demonstrated in \cite{CIM1}, $e$ has at least three edges such that each of them is a boundary edge or a boundary spike (we call them boundary edges). Therefore there is a critical electrical network $e' \in E_n$ such that $e=e'u_i(t)$ for some $i=1,\dots, 2n.$ Moreover, as we have at least three boundary edges, we can choose one of them such that $2\leq i\leq 2n-1.$
By the induction assumption $\widetilde{\Omega}_n(e')$ defines a point in $\mathrm{LG}_{\geq 0}(n-1).$ Using Theorem \ref{s-b} we conclude that
$$\widetilde{\Omega}_n(e)=\widetilde{\Omega}_n(e')u_i(t)|_V.$$
Now notice that each matrix $u_{i}(t)|_V$, $i=2, \dots, 2n-1$, preserves non-negativity, as follows from Theorem \ref{b-b} and \eqref{expl-rest}. Hence we obtain that $\widetilde{\Omega}_n(e)$ defines a point in $\mathrm{LG}_{\geq 0}(n-1).$
\end{proof}
As a corollary of the previous theorem we obtain the following:
\begin{corollary}
The right action of the matrices $u_{i}(t)|_V$, $i=1, \dots, 2n$, and $s_n|_V$ preserves non-negativity of the points $\widetilde{\Omega}_n(e).$
\end{corollary}
\begin{proof}
Consider two electrical networks $e' \in E_n$ and $e \in E_n$ such that $e'=eu_i(t)$ or $e'=es_n$ (see Remark \ref{s-change} for the geometrical meaning of the action of $s_n$). Using Theorem \ref{s-b} we obtain that
$$\widetilde{\Omega}_n(e')=\widetilde{\Omega}_n(e)u_i(t)|_V$$
or
$$\widetilde{\Omega}_n(e')=\widetilde{\Omega}_n(e)s_n|_V.$$
Since we have already proved that $\widetilde{\Omega}_n(e')$ and $\widetilde{\Omega}_n(e)$ define points in $\mathrm{LG}_{\geq 0}(n-1)$, we conclude that the matrices $u_{i}(t)|_V$, $i=1, \dots, 2n$, and $s_n|_V$ preserve non-negativity.
\end{proof}
\begin{remark}
Using the continuity arguments described in Section \ref{compactification} we conclude that $X(e)\in \mathrm{LG}_{\geq 0}(n-1)$ for any $e\in \overline{E}_n$.
\end{remark}
\section{Applications of the new parametrisation}
\label{Sec:app}
We will give two applications of the parametrisation of the set $E_n$ as an orbit of the symplectic group action. Namely, we will prove the well-known formulas from \cite{CIM} for the change of the response matrix entries after adjoining a spike or an edge between boundary vertices, and give a simple proof of Theorem 1.1 from \cite{KW}.
\subsection{Adjoining edges and the change of the response matrix}
\begin{theorem} \cite{CIM}, \cite{LP}
Let $e \in E_n $ be an electrical network. If $e' \in E_n $ is obtained from $e$ by adding an edge with the weight $t$ between the boundary vertices with the indices $k$ and $k+1$ then the following formula for the response matrix entries $x'_{ij}$ of $e'$ holds:
\begin{equation} \label{brf}
x'_{ij}=x_{ij}+(\delta_{ik}-\delta_{i(k+1)})(\delta_{jk}-\delta_{j(k+1)})t,
\end{equation}
If $e' \in E_n $ is obtained from $e$ by adding a spike with the weight $\frac{1}{t}$ to a vertex with the index $k$ then the following formula for the entries $x'_{ij}$ of the response matrix of the network $e'$ holds:
\begin{equation} \label{sf}
x'_{ij}=x_{ij}-\frac{tx_{ik}x_{kj}}{tx_{kk}+1},
\end{equation}
where $x_{ij}$ are the entries of the response matrix of $e$ and $\delta_{ij}$ is the Kronecker delta.
\end{theorem}
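As a quick sanity check of these formulas, set $i=j=k$. Formula \eqref{brf} then gives $x'_{kk}=x_{kk}+t$, as expected when a new conductance $t$ is attached at the vertex $k$, while formula \eqref{sf} gives
$$x'_{kk}=x_{kk}-\frac{tx_{kk}^2}{tx_{kk}+1}=\frac{x_{kk}}{tx_{kk}+1},$$
which has the shape of the series law $\frac{ab}{a+b}$ for the conductances $a=x_{kk}$ and $b=\frac{1}{t}$, consistent with adjoining a spike of conductance $\frac{1}{t}$.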
\begin{proof}
It is sufficient to prove \eqref{brf} in the case of adjoining an edge between the vertices with the indices $1$ and $2$, because the other cases can be obtained from it by a shift of the enumeration of the vertices.
By Theorem \ref{s-b} the matrices $\Omega(e) u_2(t)$ and $\Omega(e')$ represent the same point of $\mathrm{Gr}(n-1,2n)$.
By a straightforward computation we get:
\begin{equation*}
\Omega(e) u_2(t) = \left(
\begin{array}{cccccccccc}
x_{11}+t& 1 & -x_{12}+t & 0 & x_{13} &\ldots & (-1)^n& \\
-x_{21}+t& 1 & t+x_{22} & 1 & -x_{23} &\ldots& 0&\\
x_{31}& 0 & -x_{32} & 1 & x_{33} &\ldots & 0&\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \\
(-1)^n x_{n1}& 0 & (-1)^{n+1} x_{n2} & 0 & (-1)^{n+2} x_{n3} &\ldots& 1& \\
\end{array}
\right),
\end{equation*}
The multiplication by $u_2(t)$ does not change the columns of the matrix $\Omega(e)$ with even indices, and therefore we obtain
$$\Omega(e) u_2(t)=\Omega(e'),$$ which implies \eqref{brf}.
The same argument works for each $u_{2k}(t)$.
For the proof of \eqref{sf} it is sufficient to obtain the formula in the case of adjoining a spike to the vertex with the index $2$.
As before, from Theorem \ref{s-b} we conclude that the matrices $\Omega(e')$ and $\Omega(e) u_3(t)$ represent the same point of $\mathrm{Gr}(n-1,2n)$, and we get
\begin{equation*}
Cal_2(t)\Omega(e) u_3(t) = \left(
\begin{array}{cccccccccc}
x_{11}-\frac{tx_{12}x_{21}}{x_{22}t+1}& 1 & -x_{12}+\frac{tx_{12}x_{22}}{x_{22}t+1} & 0 & x_{13}-\frac{tx_{12}x_{23}}{x_{22}t+1} &\ldots & (-1)^n& \\
-x_{21}+\frac{tx_{22}x_{21}}{x_{22}t+1}& 1 & x_{22}-\frac{tx_{22}x_{22}}{x_{22}t+1} & 1 & -x_{23}+\frac{tx_{22}x_{32}}{x_{22}t+1} &\ldots& 0&\\
x_{31}-\frac{tx_{32}x_{12}}{x_{22}t+1}& 0 & -x_{32}+\frac{tx_{32}x_{22}}{x_{22}t+1} & 1 & x_{33}-\frac{tx_{32}x_{23}}{x_{22}t+1} &\ldots & 0&\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \\
\end{array}
\right),
\end{equation*}
where
\begin{equation*}
Cal_2(t)= \left(
\begin{array}{cccccccccc}
1& \frac{tx_{12}}{1+x_{22}t} & 0 & 0 & \ldots & 0 \\
0& \frac{1}{1+x_{22}t} & 0 & 0 & \ldots & 0 \\
0& \frac{tx_{32}}{1+x_{22}t} & 1 & 0 & \ldots & 0 \\
0& \frac{tx_{42}}{1+x_{22}t} & 0 & 1 & \ldots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \\
\end{array}
\right).
\end{equation*}
Therefore
$$Cal_2(t)\Omega(e) u_3(t)=\Omega(e'),$$ which implies \eqref{sf}.
\end{proof}
\subsection{Kenyon-Wilson theorem}
We will now show that Theorem $1.1$ from \cite{KW} can be obtained as a simple consequence of the main Theorem \ref{maint}.
\begin{theorem} \cite{KW} \label{kw}
Let $e=(\Gamma, \omega) \in E_n$ and $\sigma\in \mathcal{NC}$.
Then each fraction
$$\frac{L_{\sigma}}{L}$$
can be expressed as a homogeneous polynomial of degree $n-k$ in the variables $L_{i;j}:=\frac{L_{ij}}{L}$ with integer coefficients, where $k$ is the number of connected components of $\sigma$.
\end{theorem}
For simplicity we consider only the case when $\sigma\in \mathcal{NC}$, but the statement is true even for an arbitrary $\sigma$ \cite{KW}.
The following is essentially the statement of Proposition $5.19$ from \cite{L}:
\begin{proposition} \label{gro}
Let $e=(\Gamma, \omega) \in E_n$ be an electrical network and let $N(\Gamma, \omega)$ be the bipartite network associated to it. Then for each $\sigma\in \mathcal{NC}$ there is a set of coordinates $\Delta^d_{J(\sigma)}$ such that the following identity holds:
$$L_{\sigma}=\sum \limits_{J(\sigma)} a_{J(\sigma)}\Delta^d_{J(\sigma)},$$
with $a_{J(\sigma)}\in\{1, -1\}$.
Moreover for each $J(\sigma)$ the following holds: $|J(\sigma)_{odd}|=n-k$, where $|J(\sigma)_{odd}|$ is the number of odd indices in $J(\sigma)$ and $k$ is the number of connected components of $\sigma$.
\end{proposition}
Now we are ready to prove Theorem \ref{kw}. Let $e=(\Gamma, \omega) \in E_n$ be an electrical network and let $N(\Gamma, \omega)$ be the associated bipartite network. Introduce a perfect orientation $O$ on $N(\Gamma, \omega)$ to obtain the Postnikov network $NP(\Gamma, \omega', O)$. For $\sigma\in\mathcal{NC}$ and the appropriate coordinate $L_{\sigma}$, according to Theorem \ref{contal} and Proposition \ref{gro} the following holds:
$$\frac{L_{\sigma}}{L}=\sum \limits_{J(\sigma)} \frac{a_{J(\sigma)}\Delta^d_{J(\sigma)}}{L}=\sum \limits_{J(\sigma)}a_{J(\sigma)}\Delta_{J(\sigma)}(A),$$
where $A$ is the extended matrix of boundary measurements for $NP(\Gamma, \omega', O).$
On the other hand for each $\Delta_{J(\sigma)}$ using Theorem \ref{maint} we get: $$\frac{\Delta_{J(\sigma)}(A)}{\Delta_{K}(A)}=\frac{\Delta_{J(\sigma)}(\Omega'_n(e))}{\Delta_{K}(\Omega'_n(e))},$$
where $K=\{2,4,6, \dots, 2n-2\}$. Using that $\Delta_K(A)=\Delta_K(\Omega'_n)=1$ we obtain the following identities:
$$\frac{L_{\sigma}}{L}=\sum \limits_{J(\sigma)}a_{J(\sigma)}\Delta_{J(\sigma)}(A)=\sum \limits_{J(\sigma)}a_{J(\sigma)}\frac{\Delta_{K}(A)\Delta_{J(\sigma)}(\Omega'_n(e))}{\Delta_{K}(\Omega'_n(e))}=\sum \limits_{J(\sigma)}a_{J(\sigma)}\Delta_{J(\sigma)}(\Omega'_n(e)).$$
Notice that the last expression is a homogeneous polynomial in the variables $x_{ij}$ with integer coefficients and therefore, according to Theorem \ref{kenwil}, is a polynomial in the $L_{i;j}$, since $L_{i;j}=-x_{ij}.$ It is easy to see that the degree of each summand $\Delta_{J(\sigma)}(\Omega'_n(e))$ is equal to $|J(\sigma)_{odd}|$, the number of columns with odd indices involved in the minor; recall that the columns of $\Omega'_n(e)$ with even indices contain only $0$ and $1$. Now using Proposition \ref{gro} we conclude that $|J(\sigma)_{odd}|=n-k.$
\begin{example}
Consider an electrical network $e \in E_4$ and the bipartite network $N(\Gamma, \omega)$ associated to it. Using Theorem \ref{elcon} it is easy to check that for $N(\Gamma, \omega)$ the following holds:
$$\Delta_{567}^M= L_{14|23}.$$
Therefore by the proof of Theorem \ref{kw} we obtain that
$$\frac{L_{14|23}}{L}=\Delta_{567}(\Omega'_4(e))=x_{14}x_{23}-x_{24}x_{13}=L_{14}L_{23}-L_{24}L_{13}.$$
\end{example}
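Note that $J(\sigma)=\{5,6,7\}$ contains exactly two odd indices, in agreement with Proposition \ref{gro}: here $n=4$ and $\sigma=14|23$ has $k=2$ connected components, so $|J(\sigma)_{odd}|=n-k=2$, which is indeed the degree of the polynomial above.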
\section{Vertex representation}
\label{Sec:vertex}
The authors of \cite{GT} defined for $e\in E_n$ the boundary partition function $M_B(e)$, or simply $M_B$ if it is clear which $e$ we work with. It is a matrix which depends on at most $n(n-1)/2$ parameters; these parameters are the conductivities of the edges of the network. According to \cite{GT}, such a matrix belongs to the symplectic group $Sp(n)$, where $n$ can be odd or even. We will also need the following two matrices:
\begin{eqnarray}
T_{2n}=\left(
\begin{array}{ccccccc}
1 & 0 & 0 &\cdots & 0 & 0 &\cdots\\
0 & 0 & 0 & \cdots & 1 & 0 & \cdots \\
0 & 1 & 0 & \cdots & 0 & 0 & \cdots \\
0 & 0 & 0 & \cdots & 0 & 1 & \cdots \\
0 & 0 & 1 & \cdots & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0& \cdots & 0 & 0 & \ddots
\end{array}
\right);\qquad
S_n=\left(
\begin{array}{ccccc}
1 & 0 &0 & \cdots & -1\\
-1 & 1 & 0 & \cdots & 0 \\
0 &-1 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 1
\end{array}
\right).\nonumber
\end{eqnarray}
One of the main results from \cite{GT} is
\begin{lemma} \label{lemma:w1w2}
For an electrical network $e \in E_n$ on the standard graph $\Sigma_n$ the row spaces of the matrices
\begin{eqnarray}
W_1=(S_n,M_R)\quad \mbox{and} \quad
W_2=(M_B,Id_n)S_{2n} T_{2n}\nonumber
\end{eqnarray}
define the same point in $\mathrm{Gr}(n-1,2n)$.
\end{lemma}
We define standard graphs by induction; the first few examples are presented in Figure \ref{stgr}.
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{sgraphs}
\caption{Standard graphs}
\label{stgr}
\end{figure}
Lemmas \ref{lemma:w1w2}, \ref{vertex model} and \ref{vertex model 2} imply immediately:
\begin{corollary}\label{boundary}
The subspace $W_2$ defines a point in $\mathrm{LG}(n-1,V')$ for any value of the parameters.
Moreover the presentations of $E_n$ given in \cite{L} and \cite{GT} inside of the projective space $\mathbb P(\bigwedge^{n-1}\mathbb R^{2n})$ differ by the automorphism of the ambient space $\mathbb R^{2n}$ given by the matrix $\overline{T}_{2n}$ from Lemma \ref{vertex model 2}.
\end{corollary}
\begin{proof}
The first statement follows from the fact that the subspaces $\Omega^{aux}_e$ and $W_1$ coincide.
The second statement is a direct consequence of the formula given in Lemma \ref{vertex model 2}.
\end{proof}
\subsection{Vertex approach}
In this subsection we sketch another strategy for proving Theorem \ref{laggr} and Theorem \ref{nonneglagr} using the vertex model description of electrical networks on critical graphs given above.
We start the proof of Theorem \ref{laggr} by showing that an electrical network on a well-connected critical graph \cite{CIM1} defines a point in $\mathrm{LG}(n-1,V').$ This result is simpler to obtain using the specifics of the vertex models for electrical networks on well-connected critical graphs rather than exploiting the technique of bipartite networks. Next we extend the proof to all networks by using the fact from \cite{Kl} that any network is electrically equivalent to a network on a minor of a well-connected graph, where a minor is a graph obtained from a graph by deleting or contracting some set of edges. As we explained in Section \ref{compactification}, edge deletion and contraction correspond to passing to the limit when some of the edge conductivities approach zero or infinity; therefore we can use a continuity argument to conclude that all networks define points in $\mathrm{LG}(n-1,V').$
In the same fashion we can also prove Theorem \ref{nonneglagr}. First of all let us prove it only for electrical networks $e(\Sigma_n, \omega)$ on the standard well-connected critical graphs (see Figure \ref{stgr}). Consider the electrical network $e(\Sigma_n, \omega).$ Notice that it can be obtained from the empty network $e_{\emptyset} \in E_n$ by a sequence $\{u_{i_j}(t_j)\}$, $i_j\in\{2, \dots, 2n-1\}$, $j=1,\dots,m$, of operations of adding boundary spikes and bridges (for example, the electrical network $e(\Sigma_4, \omega)$ can be obtained by these operations as shown in Figure \ref{fig:adding}; each electrical network $e(\Sigma_n, \omega)$ can be obtained in the same way). Using Theorem \ref{s-b} we conclude that
$$\widetilde{\Omega}_n(e)=\widetilde{\Omega}_n(e_{\emptyset})\prod_{j=1}^m(u_{i_j}(t_j)|_V),$$
where $\widetilde{\Omega}_n(e_{\emptyset})$ is the point of $\mathrm{LG}(n-1)$ associated with the empty network $e_{\emptyset}.$ It is easy to see that the point $\widetilde{\Omega}_n(e_{\emptyset})$ is non-negative. Also, as we already noted (see the proof of Theorem \ref{nonneglagr}), each $u_{i_j}(t_j)|_V$ with $i_j\in\{2, \dots, 2n-1\}$ preserves non-negativity. Therefore we obtain that the point
$$\widetilde{\Omega}_n(e)=\widetilde{\Omega}_n(e_{\emptyset})\prod_{j=1}^m(u_{i_j}(t_j)|_V)$$
is non-negative.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{picture-10.pdf}
\caption{Adding boundary spikes and boundary bridges for $\Sigma_4$}
\label{fig:adding}
\end{figure}
Now it remains to extend the proof of non-negativity to all networks by using the continuity argument described above. So we conclude that all networks define non-negative points in $\mathrm{LG}(n-1,V').$
Thus it is in fact sufficient to prove some theorems only for networks on well-connected critical graphs. Moreover, this idea might be useful for working with other classes of dimer models, in particular for studying the Ising model. We are planning to develop this approach in a future publication.
\section*{Acknowledgements}
We are grateful to Arkady Berenstein and Alexander Kuznetsov for useful discussions.
The authors have been supported by the RSF Grant No. 20-61-46005.
Part of the work in this project was completed while the second author visited the ICERM, USA and the MPI, Bonn, Germany.
\section{Introduction}
In \cite{RSS_Bold1}, we produced a functorial extension of the evaluation map $S^1 \times \mathcal{L} X \to X$ to transfers along finite covers and showed that this induces a natural evaluation map on the Burnside category of finite groups. The homotopy category of classifying spectra of finite groups admits an algebraic interpretation in terms of completions of Burnside modules of finite groups, but in \cite{RSS_Bold1} we showed that the natural evaluation map does not extend to the homotopy category of classifying spectra of finite groups. For a prime $p$, the homotopy category of $p$-completed classifying spectra of finite groups admits an algebraic interpretation in terms of Burnside modules for fusion systems. Our goal in this paper is to show that the natural evaluation map does extend to the Burnside category of fusion systems and also to the homotopy category of $p$-completed classifying spectra of finite groups.
The notion of a saturated fusion system is an axiomatization of the properties that can be detected about a finite group $G$ from a choice of Sylow $p$-subgroup $S \leq G$ equipped with conjugation data. There are saturated fusion systems that do not come from a choice of finite group and Sylow $p$-subgroup. Every saturated fusion system has a classifying space that is a natural generalization of the $p$-completion of the classifying space of a finite group.
The classifying spectrum of a saturated fusion system is built from the classifying space of the saturated fusion system in a simple manner. There is a close relationship between the classifying spectrum of a saturated fusion system and the $p$-completed classifying spectrum of the underlying Sylow $p$-subgroup. There is an idempotent endomorphism of the classifying spectrum of the underlying Sylow $p$-subgroup that splits off the classifying spectrum of the saturated fusion system. This idempotent is known as the characteristic idempotent of the saturated fusion system. If the saturated fusion system comes from a finite group $G$, this construction recovers the $p$-completion of the classifying spectrum of $G$.
The set of homotopy classes of maps between two $p$-completed classifying spectra of finite $p$-groups is isomorphic to the $p$-completion of the Burnside module of bisets between the two $p$-groups. The $p$-complete Burnside category of $p$-groups (in which the Burnside modules have been completed at $p$) is therefore equivalent to the homotopy category of $p$-completed classifying spectra of finite $p$-groups. The characteristic idempotent for a saturated fusion system appears as an idempotent in the $p$-completion of the double Burnside ring of the underlying Sylow $p$-subgroup. Similarly, there is a $p$-complete Burnside category for all saturated fusion systems, and this is equivalent to the homotopy category of all classifying spectra of fusion systems, which includes $p$-completed classifying spectra of all finite groups.
We extend the natural evaluation map of \cite{RSS_Bold1} from the $p$-complete Burnside category of $p$-groups to the $p$-complete Burnside category of saturated fusion systems by analyzing its effect on the characteristic idempotent. For formal reasons, this provides us with an evaluation map whose domain is a direct summand of the $p$-completed classifying spectrum of $B\mathbb{Z}/p^k \times \mathcal{L} BS$. However, work needs to be done to relate this summand to the free loop space of the classifying space of the saturated fusion system.
\subsection*{In more detail...}
Let $\mathbb{AG}$ be the Burnside category of finite groups. The objects of $\mathbb{AG}$ are finite groups and the abelian group of morphisms between two groups $G$ and $H$ is the Burnside module of $H$-free $(G,H)$-bisets. We may formally extend this category to include formal coproducts of finite groups. In this case, a map is given by a matrix of virtual bisets. The homotopy category of classifying spaces of finite groups faithfully embeds in $\mathbb{AG}$. For a finite group $G$, let $\mathcal{L} G = \coprod_{[g]} C_G(g)$, the formal coproduct over conjugacy classes of elements in $G$ of the centralizers. In \cite{RSS_Bold1}*{Section 3}, we constructed and studied a functor $\twistO n \colon \mathbb{AG} \to \mathbb{AG}$ with the property that $\twistO n(G) = (\mathbb{Z}/k)^n \times \mathcal{L}^n(G)$ (for $k$ large enough) and a natural transformation $\twistO n \Rightarrow \Id_{\mathbb{AG}}$ that extends the evaluation map natural transformation.
Fix a prime $p$ and let $\mathbb{AF}_p$ be the Burnside category of saturated fusion systems. The objects of $\mathbb{AF}_p$ are saturated fusion systems and, given two saturated fusion systems $\mathcal{F}$ on a $p$-group $S$ and $\mathcal{G}$ on a $p$-group $T$, the morphism module $\mathbb{AF}_p(\mathcal{F},\mathcal{G})$ is the submodule of $\mathbb{AG}(S,T)^{\wedge}_{p}$ consisting of bistable elements. Just as in \cite{RSS_Bold1}*{Section 3}, we may formally extend this category to include formal coproducts of fusion systems, which we call fusoids.
The category $\mathbb{AF}_p$ is a full subcategory of the homotopy category of $p$-complete spectra. Let $\hat\Sigma^{\infty}_{+} BS$ be the $p$-completion of the classifying spectrum $\Sigma^{\infty}_{+} BS$. Given a saturated fusion system $\mathcal{F}$ on a $p$-group $S$, there is a characteristic idempotent
\[
\omega_{\mathcal{F}} \in \mathbb{AG}(S,S)^{\wedge}_{p} \cong [\hat\Sigma^{\infty}_{+} BS,\hat\Sigma^{\infty}_{+} BS].
\]
We will denote the summand of $\hat\Sigma^{\infty}_{+} BS$ split off by this idempotent by $\hat\Sigma^{\infty}_{+} B\mathcal{F}$. There is a canonical isomorphism
\[
\mathbb{AF}_p(\mathcal{F},\mathcal{G}) \cong [\hat\Sigma^{\infty}_{+} B\mathcal{F}, \hat\Sigma^{\infty}_{+} B\mathcal{G}].
\]
Recall that if the fusion system $\mathcal{F}$ comes from a finite group $G$ (with Sylow $p$-subgroup $S$), then $\hat\Sigma^{\infty}_{+} B\mathcal{F}$ is equivalent to the $p$-completion of the classifying spectrum of $G$. Thus $\mathbb{AF}_p$ contains the homotopy category of $p$-completed classifying spectra of finite groups as a full subcategory.
Just as with groups, there is a notion of a centralizer fusion system of an element in the underlying $p$-group of a fusion system and we define
\[
\mathcal{L} \mathcal{F} = \coprod_{[g]} C_\mathcal{F}(g).
\]
It is important to note that $\mathcal{L} \mathcal{F}$ generally has fewer components than $\mathcal{L} S$ because $\mathcal{F}$ generally has fewer conjugacy classes. Consequently $\mathcal{L} S$ is not the ``Sylow $p$-subgroup'' of $\mathcal{L} \mathcal{F}$ even though $S$ is the Sylow subgroup of $\mathcal{F}$. Other technical challenges that we address stem from the fact that the notion of a centralizer fusion system only behaves well for certain choices of representatives for the conjugacy classes in $\mathcal{F}$.
For $e$ large enough, the evaluation map is a map of fusion systems
\[
\mathbb{Z}/p^e \times (\coprod_{[g]} C_\mathcal{F}(g)) \to \mathcal{F}.
\]
Taking $p$-completed classifying spectra, we get a map
\[
\hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} B\mathcal{F}) \to \hat\Sigma^{\infty}_{+} B\mathcal{F}.
\]
Here the free loop space $\mathcal{L} B\mathcal{F}$ is modeled algebraically by $\mathcal{L} \mathcal{F}$ as stated in Proposition \ref{propFreeLoopSpaceDef}, which is a result essentially due to Broto--Levi--Oliver \cite{BLO2}.
We show that the domain of the evaluation map above arises by two other constructions. First, applying $\Id_{\mathbb{Z}/p^e} \times \mathcal{L}(-)$ to the characteristic idempotent gives an idempotent
\[
\hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} BS) \xrightarrow{\Id_{\mathbb{Z}/p^e} \times \mathcal{L} \omega_{\mathcal{F}}} \hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} BS).
\]
Secondly, we may apply $\twistO 1$ to the characteristic idempotent $\omega_{\mathcal{F}}$ to get an idempotent
\[
\hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} BS) \xrightarrow{\twistO 1 \omega_{\mathcal{F}}} \hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} BS).
\]
Neither of these idempotents is the characteristic idempotent for $\mathbb{Z}/p^e\times \mathcal{L} \mathcal{F}$, but we prove that both of these idempotents still split off the spectrum
\[
\hat\Sigma^{\infty}_{+} (B\mathbb{Z}/p^e \times \mathcal{L} B\mathcal{F}).
\]
Since each map between fusion systems arises as a map between classifying spectra of $p$-groups that commutes with the characteristic idempotent, we get a natural transformation
\[
\twistO 1 (-) \Rightarrow \Id_{\mathbb{AF}_p}
\]
of functors from $\mathbb{AF}_p$ to $\mathbb{AF}_p$. Or, said another way, we have extended the functoriality of the evaluation map to the homotopy category of classifying spectra of fusion systems and, in particular, to the homotopy category of $p$-completed classifying spectra of finite groups.
From here, Theorem \ref{thmIntroMain} is proved for fusion systems in a reasonably formal manner as Theorem \ref{thmFusionMain}. We also derive formulas for $\twistO n$ and the evaluation map. These formulas are similar to those described for finite groups in \cite{RSS_Bold1}*{Section 3}.
\begin{theorem}\label{thmIntroMain}
We construct a family of endofunctors $\twistO n \colon \mathbb{AF}_p\to \mathbb{AF}_p$ for $n\geq 0$ with the following properties:
\begin{enumerate}
\renewcommand{\theenumi}{$(\roman{enumi})$}\renewcommand{\labelenumi}{\theenumi}
\item[$(\emptyset )$] Let $L^{\dagger,\mathbb{AG}}_n\colon \mathbb{AG} \to \mathbb{AG}$ be the functor constructed in \cite{RSS_Bold1}*{Section 3}. When restricted to the full subcategories of $\mathbb{AG}$ and $\mathbb{AF}_p$ spanned by formal unions of finite $p$-groups, the functor $\twistO n\colon \mathbb{AF}_p \to \mathbb{AF}_p$ is the $\mathbb{Z}_p$-linearization of $L^{\dagger,\mathbb{AG}}_n$.
\item\label{itemIntroFusionLnZero} $\twistO 0$ is the identity functor on $\mathbb{AF}_p$.
\item\label{itemIntroFusionLnObjects} On objects, $\twistO n$ takes a saturated fusoid $\mathcal{F}$ to the saturated fusoid
\[\twistO n(\mathcal{F}) = (\mathbb{Z}/p^e)^n\times \freeO n (\mathcal{F})=\coprod_{\underline a\in \cntuples n\mathcal{F}} (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a).\]
\item\label{itemIntroFusionEquivariant} The group $\Sigma_n$ acts on $\freeO n\mathcal{F}=\coprod_{\underline a\in \cntuples n\mathcal{F}} C_\mathcal{F}(\underline a)$ by permuting the coordinates of the $n$-tuples $\underline a$. Explicitly, if $\sigma\in \Sigma_n$ and if $\widetilde{\sigma(\underline a)}$ is the representative for the $\mathcal{F}$-conjugacy class of $\sigma(\underline a)$, then $\sigma\colon \freeO n\mathcal{F}\to \freeO n\mathcal{F}$ maps $C_\mathcal{F}(\underline a)=C_\mathcal{F}(\sigma(\underline a))$ to $C_\mathcal{F}(\widetilde{\sigma(\underline a)})$ via the isomorphism $\zeta_{\sigma(\underline a)}^{\widetilde{\sigma(\underline a)}}\in \mathbb{AF}_p(C_\mathcal{F}(\underline a),C_\mathcal{F}(\widetilde{\sigma(\underline a)}))$.
The functor $\twistO n$ is equivariant with respect to the $\Sigma_n$-action on $(\mathbb{Z}/p^e)^n\times \freeO n(-)$ that permutes the coordinates of both $(\mathbb{Z}/p^e)^n$ and $\freeO n(-)$, i.e. for every $\sigma\in\Sigma_n$ the diagonal action of $\sigma$ on $(\mathbb{Z}/p^e)^n\times \freeO n(-)$ induces a natural isomorphism $\sigma\colon \twistO n\overset\cong\Rightarrow \twistO n$.
\item\label{itemIntroFusionLnForwardMaps} Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusion systems on $R$ and $S$ respectively. For forward maps, i.e. transitive bisets $[R,\varphi]_\mathcal{E}^\mathcal{F}\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ with $\varphi\colon R\to S$ fusion preserving, the functor $\twistO n$ coincides with $(\mathbb{Z}/p^e)^n \times \freeO n(-)$ so that
\[\twistO n([R,\varphi]_\mathcal{E}^\mathcal{F})=(\mathbb{Z}/p^e)^n\times \freeO n([R,\varphi]_\mathcal{E}^\mathcal{F}).\]
In addition, $\freeO n([R,\varphi]_\mathcal{E}^\mathcal{F})$ is the biset matrix that takes a component $C_\mathcal{E}(\underline a)$ of $\freeO n\mathcal{E}$ to the component $C_\mathcal{F}(\underline b)$ of $\freeO n\mathcal{F}$ by the biset
\[
[C_R(\underline a),\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}\in \mathbb{AF}_p( C_\mathcal{E}(\underline a), C_\mathcal{F}(\underline b)),
\]
where $\underline b$ represents the $\mathcal{F}$-conjugacy class of $\varphi(\underline a)$.
\item\label{itemIntroFusionEvalSquare} For all $n \geq 0$, the functor $\twistO n$ commutes with evaluation maps, i.e. the evaluation maps $\mathord{\mathrm{ev}}_\mathcal{F}\colon (\mathbb{Z}/p^e)^n\times \freeO n(\mathcal{F})\to \mathcal{F}$ form a natural transformation $\mathord{\mathrm{ev}}\colon \twistO n \Rightarrow \Id_{\mathbb{AF}_p}$.
\item\label{itemIntroFusionLnPartialEvaluation} For all $n \geq 0$, the partial evaluation maps $\ensuremath{\partial \mathrm{ev}}_\mathcal{F}\colon \mathbb{Z}/p^e\times \freeO {n+1}(\mathcal{F}) \to \freeO n (\mathcal{F})$, given as fusion preserving maps $\ensuremath{\partial \mathrm{ev}}_{\underline a}\colon \mathbb{Z}/p^e \times C_\mathcal{F}(\underline a) \to C_\mathcal{F}(a_1,\dotsc, a_n)$ by the formula
\[
\ensuremath{\partial \mathrm{ev}}_{\underline a}(t,z)= (a_{n+1})^t\cdot z\in C_S(a_1,\dotsc,a_n), \quad \text{for } t\in \mathbb{Z}/p^e, z\in C_S(\underline a),
\] form natural transformations $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}\colon \twistO {n+1} \Rightarrow \twistO n$.
\item\label{itemIntroFusionIterateLn} For all $n,m\geq 0$, and any saturated fusoid $\mathcal{F}$ on $S$, the formal union $(\mathbb{Z}/p^e)^{n+m}\times \freeO {n+m} \mathcal{F}$ embeds into $(\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$ as the components corresponding to the commuting $m$-tuples in $(\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}$ that are zero in the $(\mathbb{Z}/p^e)^n$-coordinate, i.e. the embedding takes each component $(\mathbb{Z}/p^e)^{n+m}\times C_\mathcal{F}(\underline x,\underline y)$ to the component $(\mathbb{Z}/p^e)^m \times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)$, for $\underline x\in \cntuples n \mathcal{F}$ and $\underline y\in \cntuples m \mathcal{F}$, via the fusion preserving map given by
\[
((\underline s,\underline r),z) \mapsto (\underline r,(\underline s,z)),
\]
for $\underline s\in (\mathbb{Z}/p^e)^n$, $\underline r\in (\mathbb{Z}/p^e)^m$, and $z\in C_S(\underline x,\underline y)$.
These embeddings $(\mathbb{Z}/p^e)^{n+m}\times \freeO {n+m} \mathcal{F}\to (\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$ then form a natural transformation $\twistO {n+m}(-)\Rightarrow \twistO m(\twistO n(-))$.
\end{enumerate}
\end{theorem}
For any finite group $G$, if we work with the free loop space $\freeO{} G$ in the context of a fixed prime $p$, we may want to restrict our view to only those components of $\freeO{} G$ that correspond to conjugacy classes of $p$-power order elements in $G$.
In general for $n\geq 0$, we let $\freeO n_p G$ consist of the components in $\freeO n G$ corresponding to conjugacy classes of commuting $n$-tuples of $p$-power order elements in $G$.
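For instance, for $G=\Sigma_3$ and $p=2$, the elements of $2$-power order are the identity and the three transpositions, which form two conjugacy classes; hence $\freeO{}_2 \Sigma_3 = C_{\Sigma_3}(e)\sqcup C_{\Sigma_3}((1\,2)) \cong \Sigma_3 \sqcup \mathbb{Z}/2$, whereas $\freeO{} \Sigma_3$ has a third component $C_{\Sigma_3}((1\,2\,3))\cong \mathbb{Z}/3$.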
The functors $\twistO n\colon \mathbb{AG} \to \mathbb{AG}$ from \cite{RSS_Bold1}*{Section 3} restrict naturally from $(\mathbb{Z}/\ell)^n\times \freeO n G$ to $(\mathbb{Z}/\ell)^n\times \freeO n_p G$ and give us functors $\twistO {n,p} \colon \mathbb{AG}\to \mathbb{AG}$ for $n\geq 0$.
Separately, consider the canonical functor $\mathbb{AG} \to \Ho(\mathord{\mathrm{Sp}})$ taking finite groups to their classifying spectra and post-compose this with the $p$-completion functor for spectra $(-)^\wedge_p\colon\Ho(\mathord{\mathrm{Sp}})\to \Ho(\mathord{\mathrm{Sp}}_{p})$. The resulting functor factors through $\mathbb{AF}_p$ giving us a functor $(-)^\wedge_p\colon \mathbb{AG}\to \mathbb{AF}_p$ corresponding to the $p$-completion functor for classifying spectra.
The functor $(-)^\wedge_p$ takes a finite group to its associated fusion system at the prime $p$, and explicit formulas for the functor $(-)^\wedge_p$ were previously described in \cite{RSS_p-completion}.
Our final result describes the interplay between these functors. We prove that the functor $\twistO n\colon \mathbb{AF}_p\to \mathbb{AF}_p$, when applied to a fusion system coming from a finite group, is in essence the $p$-completion of the functor $\twistO {n,p}\colon \mathbb{AG}\to \mathbb{AG}$.
\begin{theorem}\label{thmIntroMainTheoremPCompletion}
We have $(-)^\wedge_p \circ \twistO {n,p} = \twistO n \circ (-)^\wedge_p$
as functors $\mathbb{AG}\to \mathbb{AF}_p$ for all $n\geq 0$.
\end{theorem}
\subsection*{Outline}
Section \ref{secFusionRecollections} recalls notation and terminology for saturated fusion systems, their Burnside modules, and characteristic idempotents.
Section \ref{secFusionSystems} introduces the technology needed for working with bisets between free loop spaces of fusion systems.
Section \ref{secFusionFreeLoopFunctor} proves that the mapping telescope of the idempotent $\freeO n(\omega_\mathcal{F})$ is equivalent to the classifying spectrum for the free loop space of the fusion system.
Section \ref{secFusionEvaluation} proves a similar result for the mapping telescope of $\twistO n(\omega_\mathcal{F})$. The proof reduces to the results of Section \ref{secFusionFreeLoopFunctor} by showing that $\twistO n(\omega_\mathcal{F})$ and $(\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})$ induce equivalent mapping telescopes. Section \ref{secFusionEvaluation} provides explicit formulas for $\twistO n$ applied to the Burnside category of saturated fusion systems.
Section \ref{secFusionMainTheorem} proves Theorem \ref{thmIntroMain} as Theorem \ref{thmFusionMain}, and
finally, Section \ref{secFusionPCompletionCommutes} proves Theorem \ref{thmIntroMainTheoremPCompletion} as Theorem \ref{thmTwistedLoopsAndPCompletion}.
\subsection*{Acknowledgements} It is a pleasure to thank Mike Hopkins and Haynes Miller for their suggestions. Their helpful comments led to a complete revision of these papers. Further, we thank Cary Malkiewich for generously sharing his thoughts regarding transfers and free loop spaces after his work with Lind in \cite{LindMalkiewich}. Section 2 in the prequel is based on ideas coming from discussions with him. Finally, we thank Matthew Gelvin and Erg\"un Yal\c{c}{\i}n for their comments.
While working on this paper the first author was funded by the Independent Research Fund Denmark (DFF–4002-00224) and later on by BGSMath and the María de Maeztu Programme (MDM–2014-0445). The second and third author were jointly supported by the US-Israel Binational Science Foundation under grant 2018389. The second author was partially supported by the Alon Fellowship and ISF 1588/18. The third author was partially supported by NSF grant DMS-1906236 and the SFB 1085 \emph{Higher Invariants} at the University of Regensburg. All three authors thank the HIM and the MPIM for their hospitality.
\input{"fusion_systems_II.tex"}
\section{Auxiliary results about characteristic idempotents}
In this appendix we gather a few utility results about characteristic idempotents and ``virtual double cosets.'' The results are mostly elementary. However, to our knowledge, they do not appear in the literature.
First we observe a simple principle for recognizing fusion preserving maps in terms of characteristic idempotents:
\begin{lemma}\label{lemmaFusionPreserving}
Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusion systems over $p$-groups $R$ and $S$ respectively. Then the following statements are equivalent for any homomorphism $f\colon R\to S$.
\begin{enumerate}
\renewcommand{\theenumi}{$(\roman{enumi})$}\renewcommand{\labelenumi}{\theenumi}
\item\label{itemFusionPreserving} $f$ is a fusion preserving map from $\mathcal{E}$ to $\mathcal{F}$.
\item\label{itemAbsorbsLeftIdempotent} The equation
\(
\omega_\mathcal{E} \mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F} = [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}
\)
holds in $\mathbb{AF}_p(R,S)$.
\item\label{itemMapIsLeftStable} The virtual biset $[R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}\in \mathbb{AF}_p(R,S)$ is left $\mathcal{E}$-stable.
\end{enumerate}
\end{lemma}
\begin{proof}
The properties \ref{itemAbsorbsLeftIdempotent} and \ref{itemMapIsLeftStable} are immediately seen to be equivalent since an element $X\in \mathbb{AF}_p(R,S)$ is left $\mathcal{E}$-stable if and only if $\omega_\mathcal{E}\mathbin{\odot} X=X$.
We will show that \ref{itemFusionPreserving} implies \ref{itemMapIsLeftStable}. Suppose that $f$ is fusion preserving. We will prove that $[R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}$ is left $\mathcal{E}$-stable. Suppose that $P\leq R$ and $\varphi\in \mathcal{E}(P,R)$ is any morphism in the fusion system $\mathcal{E}$. Since $f$ is fusion preserving, we have a corresponding map $\tilde\varphi\in \mathcal{F}(f(P),S)$ such that $f\circ \varphi = \tilde\varphi\circ f$ as maps $P\to S$. If we now restrict $[R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}$ along $\varphi$ on the left, we get
\begin{multline*}
[P,\varphi]_P^R\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F} = [P,f\circ \varphi]_R^S \mathbin{\odot} \omega_\mathcal{F}
\\= [P,\tilde\varphi\circ f]_R^S \mathbin{\odot} \omega_\mathcal{F} = [P,f]_P^{f(P)}\mathbin{\odot} [f(P),\tilde\varphi]_{f(P)}^S \mathbin{\odot} \omega_\mathcal{F}.
\end{multline*}
Because $\omega_\mathcal{F}$ is left $\mathcal{F}$-stable, restricting $\omega_\mathcal{F}$ along $\tilde\varphi$ is the same as restricting along the inclusion:
\begin{multline*}
[P,f]_P^{f(P)}\mathbin{\odot} [f(P),\tilde\varphi]_{f(P)}^S \mathbin{\odot} \omega_\mathcal{F} = [P,f]_P^{f(P)}\mathbin{\odot} [f(P),\incl]_{f(P)}^S \mathbin{\odot} \omega_\mathcal{F}
\\= [P,\incl]_P^{R}\mathbin{\odot} [R,f]_R^S \mathbin{\odot} \omega_\mathcal{F}.
\end{multline*}
Consequently, $[R,f]_R^S \mathbin{\odot} \omega_\mathcal{F}$ is left $\mathcal{E}$-stable as required.
Conversely, suppose that $f$ satisfies the equation \ref{itemAbsorbsLeftIdempotent}. We will then prove that $f$ is fusion preserving. To that purpose, suppose $P\leq R$ and $\varphi\in\mathcal{E}(P,R)$; we shall prove that there is a corresponding map $f(P)\to S$ inside $\mathcal{F}$. Consider the basis element of $\mathbb{AF}_p(P,\mathcal{F})$ given by the composite $f\circ \varphi$:
\[
[P,f\circ \varphi]_P^\mathcal{F} = [P,\varphi]_P^R\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}.
\]
By \ref{itemAbsorbsLeftIdempotent} we can add in $\omega_\mathcal{E}$ in the middle and we get
\begin{multline*}
[P,\varphi]_P^R\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F} = [P,\varphi]_P^R\mathbin{\odot} \omega_\mathcal{E}\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F} = [P,\incl]_P^R\mathbin{\odot} \omega_\mathcal{E}\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F}
\\ = [P,\incl]_P^R\mathbin{\odot} [R,f]_R^S\mathbin{\odot} \omega_\mathcal{F} = [P,f]_P^\mathcal{F}.
\end{multline*}
These calculations show that the two basis elements $[P,f\circ \varphi]_P^\mathcal{F}$ and $[P,f]_P^\mathcal{F}$ in $\mathbb{AF}_p(P,\mathcal{F})$ are equal. Consequently, $f\circ\varphi$ arises from $f$ by simultaneously precomposing with some conjugation $c_x$ in $P$ and postcomposing with some map $\psi$ in $\mathcal{F}$, i.e. we have:
\[f\circ \varphi= \psi\circ f\circ c_x\]
as maps $P\to S$ with $x\in N_R(P)$ and $\psi\in\mathcal{F}(f(P),S)$. We can now apply $f$ to the element $x$ as well, resulting in
\[f\circ \varphi= \psi\circ f\circ c_x = \psi\circ c_{f(x)}\circ f.\]
Hence $\tilde\varphi:=\psi\circ c_{f(x)}$ serves the purpose, proving that $f$ is in fact fusion preserving from $\mathcal{E}$ to $\mathcal{F}$.
\end{proof}
Next we will work towards writing out a double coset formula for composing virtual bisets of saturated fusion systems.
\begin{lemma}\label{lemmaBisetDoubleCosetFormula}
Let $R$, $S$, and $T$ be finite $p$-groups, and consider transitive bisets $[H,\varphi]_R^S$ and $[K,\psi]_S^T$. Suppose $X$ is a bifree $(S,S)$-biset. For each $x\in X$ the stabilizer of $x$ is given as a pair of a subgroup $S_x\leq S$ and a homomorphism $c_x\colon S_x\to S$. That is, for all $s\in S_x$, we have $s x = x c_x(s)$.
The composition $[H,\varphi]_R^S\mathbin{\odot} X\mathbin{\odot} [K,\psi]_S^T$ can then be calculated in terms of double cosets of $X$:
\[[H,\varphi]_R^S\mathbin{\odot} X\mathbin{\odot} [K,\psi]_S^T = \sum_{x\in \varphi H\backslash X / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_R^T.\]
\end{lemma}
\begin{proof}
First write the composition $[H,\varphi]_R^S\mathbin{\odot} X\mathbin{\odot} [K,\psi]_S^T$ as
\[[H,\varphi]_R^{\varphi H} \mathbin{\odot} [\varphi H,\id]_{\varphi H}^S\mathbin{\odot} X\mathbin{\odot} [K,\id]_S^K \mathbin{\odot} [K,\psi]_K^T.\]
The middle part, $[\varphi H,\id]_{\varphi H}^S\mathbin{\odot} X\mathbin{\odot} [K,\id]_S^K$, is then simply the $(S,S)$-biset $X$ restricted to the subgroup $\varphi H$ on the left and $K$ on the right.
Now the orbits of the restriction ${}_{\varphi H} X_K= [\varphi H,\id]_{\varphi H}^S\mathbin{\odot} X\mathbin{\odot} [K,\id]_S^K$ are precisely the double cosets $\varphi H \backslash X / K$. Given a representative $x\in \varphi H \backslash X / K$ for any of the orbits, the stabilizer of $x\in {}_{\varphi H} X_K$ is obtained by restricting the stabilizer of $x$ inside the $(S,S)$-biset to the smaller subgroups $\varphi H$ and $K$. Since the stabilizer of $x$ inside ${}_S X_S$ is given by $S_x\leq S$ and $c_x\colon S_x\to S$, the restriction to $(\varphi H, K)$ becomes the subgroup $\varphi H\cap S_x \cap c_x^{-1}(K)\leq \varphi H$, and we may restrict $c_x$ to a homomorphism $c_x\colon \varphi H\cap S_x \cap c_x^{-1}(K) \to K$.
When we take representatives for all the double cosets $\varphi H \backslash X / K$, we get the following expression for the restriction ${}_{\varphi H} X_K$:
\[{}_{\varphi H} X_K = \sum_{x\in \varphi H\backslash X / K} [\varphi H \cap S_x\cap c_x^{-1}(K), c_x]_{\varphi H}^K.\]
It remains to compose ${}_{\varphi H} X_K$ with $[H,\varphi]_R^{\varphi H}$ on the left and $[K,\psi]_K^T$ on the right. This amounts to composing with the isomorphisms $\varphi\colon H\to \varphi H$ and $\psi\colon K\to \psi K$ and then inducing the actions from $H$ and $\psi K$ up to $R$ and $T$ respectively. Since induction doesn't change point stabilizers, we get the final formula:
\begin{align*}
&{} [H,\varphi]_R^S\mathbin{\odot} X\mathbin{\odot} [K,\psi]_S^T
\\ ={}& [H,\id]_R^H \mathbin{\odot} [H,\varphi]_H^{\varphi H} \mathbin{\odot} \Bigl(\sum_{x\in \varphi H\backslash X / K} [\varphi H \cap S_x\cap c_x^{-1}(K), c_x]_{\varphi H}^K\Bigr) \mathbin{\odot} [K,\psi]_K^{\psi K} \mathbin{\odot} [\psi K, \id]_{\psi K}^T
\\ ={}& [H,\id]_R^H \mathbin{\odot} \Bigl(\sum_{x\in \varphi H\backslash X / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_H^{\psi K}\Bigr) \mathbin{\odot} [\psi K, \id]_{\psi K}^T
\\ ={}& \sum_{x\in \varphi H\backslash X / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_R^T. \qedhere
\end{align*}
\end{proof}
\begin{remark}\label{remarkVirtualDoubleCosetFormula}
Lemma \ref{lemmaBisetDoubleCosetFormula} is linear in $X$ with respect to addition/disjoint union of bifree $(S,S)$-bisets. We can therefore extend the lemma to cover all virtual bifree $(S,S)$-bisets $X\in \mathbb{AG}(S,S)^\wedge_p$, as long as we extend the notation
\[\sum_{x\in \varphi H\backslash X / K} \dotsb\]
linearly to virtual bisets as well. The convention will be that each virtual point $x\in X$ is assigned a weight $\varepsilon_x$ equal to the coefficient of the virtual orbit in $X$ that contains $x$. Explicitly this means that if we decompose $X$ in terms of orbits
\[X = \sum_{\substack{L\leq S\\ \rho\colon L\to S}} \varepsilon_{L,\rho}\cdot [L,\rho]_S^S,\]
then each point of $[L,\rho]_S^S$ is assigned the weight $\varepsilon_{L,\rho}$, and the formula of Lemma \ref{lemmaBisetDoubleCosetFormula} becomes
\begin{align*}
&{}[H,\varphi]_R^S\mathbin{\odot} X\mathbin{\odot} [K,\psi]_S^T
\\={}& \sum_{x\in \varphi H\backslash X / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_R^T
\\ \overset{\text{def}}={}& \sum_{\substack{L\leq S\\ \rho\colon L\to S}} \sum_{x\in \varphi H\backslash [L,\rho]_S^S / K} \varepsilon_{L,\rho}\cdot[\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_R^T.
\end{align*}
This formula is then valid for every bifree virtual biset $X\in \mathbb{AG}(S,S)^\wedge_p$.
\end{remark}
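As a sanity check for Lemma \ref{lemmaBisetDoubleCosetFormula} (and for the conventions above), consider the special case $X=[S,\id]_S^S$, that is, the set $S$ with left and right multiplication. Every point $x\in S$ has stabilizer pair $S_x=S$ and $c_x(s)=x^{-1}sx$, so that $S_x\cap c_x^{-1}(K)=xKx^{-1}$. Since $[S,\id]_S^S$ is the identity for $\mathbin{\odot}$, the lemma specializes to the classical double coset formula for transitive bisets:
\[[H,\varphi]_R^S\mathbin{\odot} [K,\psi]_S^T = \sum_{x\in \varphi H\backslash S / K} [\varphi^{-1}(xKx^{-1}), \psi\circ c_x\circ \varphi]_R^T,\]
where $\varphi^{-1}(xKx^{-1})$ denotes the preimage $\{h\in H\mid \varphi(h)\in xKx^{-1}\}$.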
\begin{prop}\label{propFusionDoubleCosetFormula}
Suppose that $\mathcal{E}$, $\mathcal{F}$, and $\mathcal{G}$ are saturated fusion systems on $p$-groups $R$, $S$, and $T$, respectively, and consider virtual bisets $[H,\varphi]_\mathcal{E}^\mathcal{F}$ and $[K,\psi]_\mathcal{F}^\mathcal{G}$, where $H\leq R$, $\varphi\colon H\to S$, $K\leq S$, and $\psi\colon K\to T$.
The composition of these bisets satisfies a double coset formula following the conventions of Lemma \ref{lemmaBisetDoubleCosetFormula} and Remark \ref{remarkVirtualDoubleCosetFormula}:
\[[H,\varphi]_\mathcal{E}^\mathcal{F}\mathbin{\odot} [K,\psi]_\mathcal{F}^\mathcal{G} = \sum_{x\in \varphi H\backslash \omega_\mathcal{F} / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_\mathcal{E}^\mathcal{G}.\]
In the special case where $K=S$ and $\psi\colon S\to T$ is a fusion preserving map from $\mathcal{F}$ to $\mathcal{G}$, the composition simplifies to
\[[H,\varphi]_\mathcal{E}^\mathcal{F} \mathbin{\odot} [S,\psi]_\mathcal{F}^\mathcal{G} = [H,\psi\circ \varphi]_\mathcal{E}^\mathcal{G}.\]
\end{prop}
\begin{proof}
By definition of the basis elements $[H,\varphi]_\mathcal{E}^\mathcal{F}$ and $[K,\psi]_\mathcal{F}^\mathcal{G}$ (see the discussion following Definition \ref{defFusionBisetModule}) the expression $[H,\varphi]_\mathcal{E}^\mathcal{F}\mathbin{\odot} [K,\psi]_\mathcal{F}^\mathcal{G}$ is shorthand for
\[[H,\varphi]_\mathcal{E}^\mathcal{F}\mathbin{\odot} [K,\psi]_\mathcal{F}^\mathcal{G} = \omega_{\mathcal{E}} \mathbin{\odot} [H,\varphi]_R^S \mathbin{\odot} \omega_{\mathcal{F}} \mathbin{\odot} [K,\psi]_S^T \mathbin{\odot} \omega_{\mathcal{G}}.\]
We apply Lemma \ref{lemmaBisetDoubleCosetFormula} to the middle triple $[H,\varphi]_R^S \mathbin{\odot} \omega_{\mathcal{F}} \mathbin{\odot} [K,\psi]_S^T$ and get
\begin{align*}
{}& [H,\varphi]_\mathcal{E}^\mathcal{F}\mathbin{\odot} [K,\psi]_\mathcal{F}^\mathcal{G}
\\ ={}& \omega_{\mathcal{E}} \mathbin{\odot} \Bigl(\sum_{x\in \varphi H\backslash \omega_\mathcal{F} / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_R^T \Bigr) \mathbin{\odot} \omega_{\mathcal{G}}
\\ ={}& \sum_{x\in \varphi H\backslash \omega_\mathcal{F} / K} [\varphi^{-1}(S_x\cap c_x^{-1}(K)), \psi\circ c_x\circ \varphi]_\mathcal{E}^\mathcal{G}.
\end{align*}
This covers the general case of the proposition.
In the special case where $K=S$ and $\psi\colon S\to T$ is fusion preserving from $\mathcal{F}$ to $\mathcal{G}$, we refer to \cite{RSS_p-completion}*{Lemma 4.6} which states that $\omega_{\mathcal{F}} \mathbin{\odot} [S,\psi]_S^T \mathbin{\odot} \omega_{\mathcal{G}} = [S,\psi]_S^T \mathbin{\odot} \omega_{\mathcal{G}}$ when $\psi$ is fusion preserving. This in turn gives us the required formula:
\begin{align*}
{}& [H,\varphi]_\mathcal{E}^\mathcal{F}\mathbin{\odot} [S,\psi]_\mathcal{F}^\mathcal{G}
\\ ={}& \omega_{\mathcal{E}} \mathbin{\odot} [H,\varphi]_R^S \mathbin{\odot} \omega_{\mathcal{F}} \mathbin{\odot} [S,\psi]_S^T \mathbin{\odot} \omega_{\mathcal{G}}
\\ ={}& \omega_{\mathcal{E}} \mathbin{\odot} [H,\varphi]_R^S \mathbin{\odot} [S,\psi]_S^T \mathbin{\odot} \omega_{\mathcal{G}}
\\ ={}& \omega_{\mathcal{E}} \mathbin{\odot} [H,\psi \circ \varphi]_R^T \mathbin{\odot} \omega_{\mathcal{G}}
\\ ={}& [H,\psi \circ \varphi]_\mathcal{E}^{\mathcal{G}}.\qedhere
\end{align*}
\end{proof}
\begin{prop}\label{propCentralizerIdempotent}
Let $\mathcal{F}$ be a saturated fusion system on $S$ and suppose $a\in S$ is central in $\mathcal{F}$, meaning that $C_\mathcal{F}(a) = \mathcal{F}$. Then $\omega_\mathcal{F}$ only contains orbits of the form $[P,\varphi]_S^S$ where $a\in P$ (and $\varphi(a)=a$, since $a$ is $\mathcal{F}$-central).
\end{prop}
\begin{proof}
Suppose $a\in S$ is central in the saturated fusion system $\mathcal{F}$, and let $\omega_\mathcal{F}$ be the characteristic idempotent.
In particular this means that if $a\leq P\leq S$ and $\varphi\in \mathcal{F}(P,S)$, then $\varphi(a)=a$.
Furthermore, we let $\bar \omega_\mathcal{F}$ be the $a$-centralizing part of $\omega_\mathcal{F}$ meaning that
\[
\bar \omega_\mathcal{F} := \sum_{\substack{a\in P\leq S\\ \varphi\in \mathcal{F}(P,S)\\ \text{up to $(\mathcal{F},\mathcal{F})$-conj.}}} c_{P,\varphi} \cdot [P,\varphi]_S^S,
\]
where $c_{P,\varphi}\in \mathbb{Z}_p$ is the coefficient of $[P,\varphi]_S^S$ in the orbit decomposition of $\omega_\mathcal{F}$. We wish to prove that $\omega_\mathcal{F}$ does not contain any additional orbits with $a\not\in P$, so we can equivalently prove that $\bar \omega_\mathcal{F} = \omega_\mathcal{F}$.
The algorithm for constructing $\omega_\mathcal{F}$ in \cite{ReehIdempotent} starts with the transitive biset $[S,\id]_S^S$ and then proceeds one $(\mathcal{F},\mathcal{F})$-conjugacy class of pairs $(P,\varphi)$ at a time (in decreasing order) and adds/subtracts orbits in the conjugacy class of $(P,\varphi)$ to make the biset stable at that conjugacy class and above.
The virtual biset $\bar \omega_\mathcal{F}$ is then the intermediate result of this algorithm where we have stabilized $[S,\id]_S^S$ only across all pairs $(P,\varphi)$ with $a\leq P$. If we can prove that $\bar\omega_\mathcal{F}$ is in fact fully $(\mathcal{F},\mathcal{F})$-stable instead of just stable for pairs containing $a$, this would mean that no further stabilization is needed and the algorithm stops with $\omega_\mathcal{F}=\bar\omega_\mathcal{F}$.
For readers who don't feel comfortable with the argument above, we give a more explicit way of finishing the proof below -- once we have proved that $\bar\omega_\mathcal{F}$ is $(\mathcal{F},\mathcal{F})$-stable.
\textbf{$\bar\omega_\mathcal{F}$ is $(\mathcal{F},\mathcal{F})$-stable:} This is essentially \cite{GelvinReeh}*{Proposition 9.10}, using that $\mathcal{F}$ is the centralizer fusion system of $a$. The difference is that \cite{GelvinReeh} deals with actual bisets, while $\bar\omega_\mathcal{F}$ here is a virtual biset. We adapt the proof used in \cite{GelvinReeh}.
One of the equivalent ways to state $(\mathcal{F},\mathcal{F})$-stability for an $(S,S)$-biset $X$ is the property that the number of fixed points for a pair $(Q,\psi)$ only depends on $(Q,\psi)$ up to $(\mathcal{F},\mathcal{F})$-conjugation:
\[\abs{X^{(Q,\psi)}} = \abs{X^{(Q',\psi')}}\]
whenever $(Q',\psi')$ is $(\mathcal{F},\mathcal{F})$-conjugate to $(Q,\psi)$. The fixed point set $X^{(Q,\psi)}$ is the set of $x\in X$ such that $q x = x\psi(q)$ for all $q\in Q$. Working with virtual fixed points, the same characterization of $(\mathcal{F},\mathcal{F})$-stability works for virtual bisets $X\in \mathbb{AG}(S,S)^\wedge_p$.
If $X\in \mathbb{AG}(S,S)^\wedge_p$ is $\mathcal{F}$-generated, such as $\omega_\mathcal{F}$ or $\bar\omega_\mathcal{F}$, then it is sufficient to check fixed points for $(Q,\psi)$ with $\psi\in \mathcal{F}(Q,S)$, where we ask that
\[\abs{X^{(Q,\psi)}} = \abs{X^{(Q',\id)}}\]
for any $\mathcal{F}$-conjugate $Q'$ to $Q$.
If $(Q,\psi)$ has $a\in Q$, then the number of fixed points $\abs{\omega_\mathcal{F}^{(Q,\psi)}}$ only depends on the orbits with stabilizers containing $Q$ and hence $a$. We thus have
\[
\abs{\bar\omega_\mathcal{F}^{(Q,\psi)}} = \abs{\omega_\mathcal{F}^{(Q,\psi)}} = \abs{\omega_\mathcal{F}^{(Q',\id)}} = \abs{\bar\omega_\mathcal{F}^{(Q',\id)}},
\]
whenever $Q'$ is $\mathcal{F}$-conjugate to $Q$, $Q$ contains $a$, and $\psi\in \mathcal{F}(Q,S)$.
Given any pair $(Q,\psi)$ with $\psi\in \mathcal{F}(Q,S)$ and $a\not\in Q$, the assumption that $a$ is central in $\mathcal{F}$ means that $\psi$ extends (uniquely) to a homomorphism $\hat\psi\colon Q\gen a \to S$ such that $\hat\psi|_Q=\psi$ and $\hat\psi(a)=a$.
Because $\bar\omega_\mathcal{F}$ only has orbits with stabilizers that contain $a$, and since the extension $\hat\psi$ is unique, we get
\[\abs{\bar\omega_\mathcal{F}^{(Q,\psi)}} = \abs{\bar\omega_\mathcal{F}^{(Q\gen a,\hat\psi)}}.\]
We already know that the fixed points are $(\mathcal{F},\mathcal{F})$-conjugation invariant for $(Q\gen a,\hat\psi)$, so the same is true for $(Q,\psi)$. Hence $\bar\omega_\mathcal{F}$ is in fact $(\mathcal{F},\mathcal{F})$-stable.
This completes the proof that the algorithm for constructing $\omega_\mathcal{F}$ stops once we have $\bar\omega_\mathcal{F}$. Alternatively, though it boils down to the same formulas, we can complete the proof of the proposition as follows.
\textbf{Alternative proof that $\bar\omega_\mathcal{F} = \omega_\mathcal{F}$:}
Let $c_{P,\varphi}$ be the coefficient of $[P,\varphi]_S^S$ in the orbit decomposition of $\bar\omega_{\mathcal{F}}$. Hence $c_{P,\varphi}=0$ unless $a\in P\leq S$ and $\varphi\in \mathcal{F}(P,S)$.
Since $\omega_\mathcal{F}$ is obtained by stabilizing the transitive biset $[S,\id]_S^S$, Remark 4.7 of \cite{ReehIdempotent} shows that the coefficients satisfy
\[
\sum_{\substack{\text{$(P',\varphi')$ up to $(S,S)$-conj.}\\ \text{s.t. $(P',\varphi')$ is $(\mathcal{F},\mathcal{F})$-conj. to $(P,\id)$}}}\hspace{-1cm} c_{P',\varphi'} = \begin{cases} 1 &\text{if $P=S$} \\ 0 &\text{otherwise},\end{cases}
\]
when $a$ is in $P$ and $P'$. When $a$ is not in the subgroups, all the coefficients are zero, so the formula still holds.
If we multiply $\bar\omega_\mathcal{F}$ by $\omega_\mathcal{F}$ on both sides, the formula above gives us
\begin{align*}
{}& \omega_\mathcal{F}\mathbin{\odot} \bar\omega_\mathcal{F} \mathbin{\odot} \omega_\mathcal{F}
\\ ={}& \sum_{\text{$(P,\varphi)$ up to $(S,S)$-conj.}} \hspace{-1cm} c_{P,\varphi}\cdot (\omega_\mathcal{F} \mathbin{\odot} [P,\varphi]_S^S \mathbin{\odot} \omega_\mathcal{F})
\\ ={}& \sum_{\text{$P\leq S$ up to $\mathcal{F}$-conj.}} \hspace{-.8cm} [P,\id]_\mathcal{F}^\mathcal{F} \cdot \Bigl( \sum_{\substack{\text{$(P',\varphi')$ up to $(S,S)$-conj.}\\\text{s.t. $P'$ is $\mathcal{F}$-conj. to $P$}\\\text{and $\varphi'\in \mathcal{F}(P',S)$}}} c_{P',\varphi'}\Bigr)
\\ ={}& [S,\id]_\mathcal{F}^\mathcal{F} \cdot 1
\\ ={}& \omega_\mathcal{F}.
\end{align*}
At the same time, $\bar\omega_\mathcal{F}$ is $(\mathcal{F},\mathcal{F})$-stable as proven earlier, so
\[\omega_\mathcal{F}\mathbin{\odot} \bar\omega_\mathcal{F}\mathbin{\odot} \omega_\mathcal{F}=\bar\omega_\mathcal{F}.\]
This finishes the alternative proof that $\omega_\mathcal{F}=\bar\omega_\mathcal{F}$.
\end{proof}
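As a degenerate sanity check for Proposition \ref{propCentralizerIdempotent}, let $\mathcal{F}$ be the trivial fusion system induced by $S$ on itself. Every $a\in Z(S)$ is then central in $\mathcal{F}$, the characteristic idempotent is simply the identity biset $\omega_\mathcal{F}=[S,\id]_S^S$, and the single orbit of $\omega_\mathcal{F}$ has stabilizer pair $(S,\id)$ with $a\in S$ and $\id(a)=a$, exactly as the proposition predicts.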
\section{Recollections about fusion systems}\label{secFusionRecollections}
We first recall the basics of the definition of a saturated fusion system. For additional details see \cite{RagnarssonStancu}*{Section 2} or \cite{AKO}*{Part I}.
\begin{definition}
A fusion system on a finite $p$-group $S$ is a category $\mathcal{F}$ with the subgroups of $S$ as objects and where the morphisms $\mathcal{F}(P,Q)$ for $P,Q\leq S$ satisfy
\begin{enumerate}
\renewcommand{\theenumi}{$(\roman{enumi})$}\renewcommand{\labelenumi}{\theenumi}
\item Every morphism $\varphi\in \mathcal{F}(P,Q)$ is an injective group homomorphism $\varphi\colon P\to Q$.
\item Every map $\varphi\colon P\to Q$ induced by conjugation in $S$ is in $\mathcal{F}(P,Q)$.
\item Every map $\varphi\in\mathcal{F}(P,Q)$ factors as $P\xrightarrow{\varphi} \varphi(P) \xrightarrow{\incl} Q$ in $\mathcal{F}$ and the inverse isomorphism $\varphi^{-1}\colon \varphi(P)\to P$ is also in $\mathcal{F}$.
\end{enumerate}
In addition, the underlying group $S$ is considered part of the structure of the fusion system, so a fusion system is really a pair $(S,\mathcal{F})$ of a $p$-group equipped with a category as above.
\end{definition}
We think of the morphisms in $\mathcal{F}$ as being conjugation maps induced by some, possibly non-existent, ambient group. Consequently, we say that two subgroups $P,Q\leq S$ are conjugate in $\mathcal{F}$ if there is an isomorphism between them in $\mathcal{F}$.
A saturated fusion system satisfies some additional axioms that we will not go through as they play almost no direct role in this paper (see e.g. \cite{RagnarssonStancu}*{Definition 2.5} instead). The important aspect of saturated fusion systems in this paper is that these are the fusion systems that have characteristic idempotents and classifying spectra as described below.
Given fusion systems $\mathcal{F}_1$ and $\mathcal{F}_2$ on $p$-groups $S_1$ and $S_2$, respectively, a group homomorphism $\phi\colon S_1\to S_2$ is said to be fusion preserving if whenever $\psi\colon P\to Q$ is a map in $\mathcal{F}_1$, there is a corresponding map $\rho\colon \phi(P)\to \phi(Q)$ in $\mathcal{F}_2$ such that $\phi|_{Q}\circ \psi = \rho\circ \phi|_P$. Note that each such $\rho$ is unique if it exists.
\begin{example}
Let $G$ be a finite group with Sylow $p$-subgroup $S$. This data determines a fusion system $\mathcal{F}_G$ on $S$. The maps in $\mathcal{F}_G(P,Q)$ for subgroups $P,Q\leq S$ are precisely the homomorphisms $P\to Q$ induced by conjugation in $G$, i.e. if $g^{-1} P g\leq Q$ for some $g\in G$, then $c_g(x) = g^{-1}xg$ defines a homomorphism $c_g\in \mathcal{F}_G(P,Q)$. Note that different group elements $g$ and $g'$ can give rise to the same homomorphism $c_g=c_{g'}$ in $\mathcal{F}_G(P,Q)$.
The fusion system $\mathcal{F}_G$ associated to a finite group at a prime $p$ is always saturated.
\end{example}
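\begin{example}
For a concrete minimal example, take $G=\Sigma_3$, the symmetric group on three letters, with $p=3$ and Sylow $3$-subgroup $S=\gen{(1\,2\,3)}\cong \mathbb{Z}/3$. Conjugation by the transposition $(1\,2)$ induces the inversion automorphism $\iota\colon x\mapsto x^{-1}$ of $S$, so $\iota\in \mathcal{F}_G(S,S)$ even though $\iota$ is not induced by conjugation inside the abelian group $S$; in fact $\mathcal{F}_G(S,S)=\{\id,\iota\}$. We will return to this small example for illustration several times below.
\end{example}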
Every saturated fusion system $\mathcal{F}$ has a classifying spectrum originally constructed by Broto-Levi-Oliver in \cite{BLO2}*{Section 5}. The most direct way of constructing this spectrum, due to Ragnarsson \cite{Ragnarsson}, is as the mapping telescope
\[
\Sigma^{\infty}_{+} B\mathcal{F} = \colim ( \Sigma^{\infty}_{+} BS \xrightarrow{\omega_{\mathcal{F}}} \Sigma^{\infty}_{+} BS \xrightarrow{\omega_{\mathcal{F}}} \dotsb),
\]
where $\omega_{\mathcal{F}} \in \mathbb{AG}(S,S)^\wedge_p$ is the characteristic idempotent of $\mathcal{F}$ (see the characterization below). By construction, $\Sigma^{\infty}_{+} B\mathcal{F}$ is a wedge summand of $\Sigma^{\infty}_{+} BS$.
\begin{comment}
The transfer map
\[
\mathord{\mathrm{tr}} \colon \Sigma^{\infty}_{+} B\mathcal{F} \rightarrow \Sigma^{\infty}_{+} BS
\]
is the inclusion of $\Sigma^{\infty}_{+} B\mathcal{F}$ as a summand and the ``inclusion" map
\[
\incl \colon \Sigma^{\infty}_{+} BS \rightarrow \Sigma^{\infty}_{+} B\mathcal{F}
\]
is the projection on $\Sigma^{\infty}_{+} B\mathcal{F}$. Thus $\incl \circ \mathord{\mathrm{tr}} = 1$ and $\mathord{\mathrm{tr}} \circ \incl=\omega_{\mathcal{F}}$.
\end{comment}
As remarked in Section 5 of \cite{BLO2}, the spectrum $\Sigma^{\infty}_{+} B\mathcal{F}$ constructed this way is in fact the suspension spectrum for the classifying space $B\mathcal{F}$ defined in \cites{BLO2, Chermak}. One way to see this is to note that $H^*(B\mathcal{F},\mathbb{F}_p)$ coincides with $\omega_\mathcal{F} \cdot H^*(BS,\mathbb{F}_p)$ as the $\mathcal{F}$-stable elements, and that the suspension spectrum of $B\mathcal{F}$ is $H\mathbb{F}_p$-local.
\begin{definition}\label{defFCharacteristic}
Let $\mathcal{F}$ be a fusion system on $S$.
A virtual biset $X\in \mathbb{AG}(S,S)^\wedge_p$ is said to be \emph{$\mathcal{F}$-characteristic} if it satisfies the following properties:
\begin{itemize}
\item $X$ is \emph{$\mathcal{F}$-generated}, meaning that $X$ is a linear combination of transitive bisets $[P,\varphi]_S^S$ with $\varphi\in \mathcal{F}(P,S)$.
\item $X$ is \emph{left $\mathcal{F}$-stable}, meaning that for all $P\leq S$ and $\varphi\in \mathcal{F}(P,S)$ the restriction of $X$ along $\varphi$ on the left,
\[X_{P,\varphi}^S = [P,\varphi]_P^S \mathbin{\odot} X \in \mathbb{AG}(P,S)^\wedge_p,\]
is isomorphic to the restriction $X_P^S= [P,\id]_P^S \mathbin{\odot} X\in \mathbb{AG}(P,S)^\wedge_p$ along the inclusion.
\item $X$ is \emph{right $\mathcal{F}$-stable}, meaning the dual property to the one above:
\[X\mathbin{\odot} [\varphi P, \varphi^{-1}]_S^P = X\mathbin{\odot} [P, \id]_S^P,\]
for all $P\leq S$ and $\varphi\in \mathcal{F}(P,S)$.
\item $\abs{X}/\abs{S}$ is invertible in $\mathbb{Z}_p$.
\end{itemize}
Whenever $\mathcal{F}$ is the fusion system associated to a finite group $G$ with Sylow $p$-subgroup $S$, we can consider $G$ itself as an $(S,S)$-biset $G_S^S$. The biset $G_S^S$ is always $\mathcal{F}$-characteristic and is the motivating example for Definition \ref{defFCharacteristic}.
According to \cite{RagnarssonStancu}, a fusion system is saturated if and only if it has a characteristic virtual biset, in which case there is a unique idempotent $\omega_\mathcal{F}$ among all the characteristic virtual bisets. An explicit description and construction of $\omega_\mathcal{F}$, and a classification of all the characteristic virtual bisets, can be found in \cite{ReehIdempotent}.
\end{definition}
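\begin{example}
As a small illustrative computation, take $G=\Sigma_3$ with $p=3$, $S=\gen{(1\,2\,3)}\cong\mathbb{Z}/3$, and $\mathcal{F}=\mathcal{F}_G$, and let $\iota\colon x\mapsto x^{-1}$ denote the inversion automorphism of $S$. The $(S,S)$-orbits of $G$ are the two double cosets $S$ and $S(1\,2)S$, with point stabilizer pairs $(S,\id)$ and $(S,\iota)$ respectively, so
\[G_S^S = [S,\id]_S^S + [S,\iota]_S^S.\]
By the discussion above, $G_S^S$ is $\mathcal{F}$-characteristic; note in particular that $\abs{G}/\abs{S}=2$ is invertible in $\mathbb{Z}_p$ for $p=3$. Since $[S,\alpha]_S^S\mathbin{\odot}[S,\beta]_S^S=[S,\beta\circ\alpha]_S^S$ for automorphisms $\alpha,\beta$ of $S$ (e.g.\ by Lemma \ref{lemmaBisetDoubleCosetFormula}), we get $G_S^S\mathbin{\odot} G_S^S = 2\cdot G_S^S$, so $\tfrac{1}{2}G_S^S$ is an idempotent characteristic element. By the uniqueness recalled above, the characteristic idempotent is
\[\omega_{\mathcal{F}} = \tfrac{1}{2}\bigl([S,\id]_S^S + [S,\iota]_S^S\bigr).\]
\end{example}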
\begin{definition}\label{defFusionBisetModule}
Let $\mathcal{F}_1$ and $\mathcal{F}_2$ be saturated fusion systems over $p$-groups $S_1$ and $S_2$, respectively. We then let $\mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_2)$ denote the submodule of $\mathbb{AG}(S_1,S_2)^\wedge_p$ consisting of all virtual $(S_1,S_2)$-bisets that are both left $\mathcal{F}_1$-stable and right $\mathcal{F}_2$-stable.
Here $\mathcal{F}_1$- and $\mathcal{F}_2$-stability can be defined as for the characteristic idempotents above. Alternatively, a virtual biset $X\in \mathbb{AG}(S_1,S_2)^\wedge_p$ is $(\mathcal{F}_1,\mathcal{F}_2)$-stable if and only if
\[\omega_{\mathcal{F}_1} \mathbin{\odot} X \mathbin{\odot} \omega_{\mathcal{F}_2} = X.\]
\end{definition}
As a special case, for the trivial fusion systems induced by $S_1,S_2$ on themselves, we have $\mathbb{AF}_p(S_1,S_2) = \mathbb{AG}(S_1,S_2)^\wedge_p$. Hence $\mathbb{AF}_p(S_1,S_2)$ is the free $\mathbb{Z}_p$-module spanned by the transitive bisets $[P,\varphi]_{S_1}^{S_2}$ for $P\leq S_1$ and $\varphi\colon P\to S_2$ up to conjugation. Similarly, $\mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_2)$ is the free $\mathbb{Z}_p$-module on basis elements of the form
\[
[P,\varphi]_{\mathcal{F}_1}^{\mathcal{F}_2} := \omega_{\mathcal{F}_1} \mathbin{\odot} [P,\varphi]_{S_1}^{S_2} \mathbin{\odot} \omega_{\mathcal{F}_2},
\]
where $P\leq S_1$ and $\varphi\colon P\to S_2$ are taken up to pre- and postcomposition with isomorphisms in $\mathcal{F}_1$ and $\mathcal{F}_2$ respectively (see \cite{Ragnarsson}*{Proposition 5.2}).
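For instance, for $\mathcal{F}_1=\mathcal{F}_2=\mathcal{F}_{\Sigma_3}$ at $p=3$ on $S=\gen{(1\,2\,3)}$ as in the examples above, a routine check shows that the pairs $(P,\varphi)$ up to this equivalence are represented by $(1,\incl)$ and $(S,\id)$; in particular $[S,\iota]_{\mathcal{F}_1}^{\mathcal{F}_2} = [S,\id]_{\mathcal{F}_1}^{\mathcal{F}_2}$ because the inversion $\iota$ lies in $\mathcal{F}_2(S,S)$. Hence $\mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_2)$ is free of rank $2$ over $\mathbb{Z}_p$, with basis $[1,\incl]_{\mathcal{F}_1}^{\mathcal{F}_2}$ and $[S,\id]_{\mathcal{F}_1}^{\mathcal{F}_2}=\omega_{\mathcal{F}_1}$.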
Suppose we have a third saturated fusion system $\mathcal{F}_3$ on $S_3$ and virtual bisets $X\in \mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_2)$ and $Y\in \mathbb{AF}_p(\mathcal{F}_2,\mathcal{F}_3)$. The composition $X\mathbin{\odot} Y = X\times_{S_2} Y \in \mathbb{AF}_p(S_1,S_3)$ is then always $(\mathcal{F}_1,\mathcal{F}_3)$-stable, so composition gives a well-defined map
\[
\mathbin{\odot}\colon \mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_2) \times \mathbb{AF}_p(\mathcal{F}_2,\mathcal{F}_3) \to \mathbb{AF}_p(\mathcal{F}_1,\mathcal{F}_3).
\]
Here we use ``right-composition'', as for bisets of finite groups in \cite[Definition 3.1]{RSS_Bold1}.
\begin{remark}
Given finite groups $G$ and $H$ with Sylow subgroups $S$ and $T$ and a $(G,H)$-biset $X$, we may restrict the $G$-action to $S$ and the $H$-action to $T$ to get an $(S,T)$-biset $X_S^T$. Let $\mathcal{F}_G$ be the fusion system associated to $G$ on $S$ and $\mathcal{F}_H$ the fusion system associated to $H$ on $T$. The restricted biset $X_S^T$ is always a stable biset and so we may further consider it as an $(\mathcal{F}_G, \mathcal{F}_H)$-biset $X_{\mathcal{F}_G}^{\mathcal{F}_H}\in \mathbb{AF}_p(\mathcal{F}_G,\mathcal{F}_H)$.
\end{remark}
\begin{convention}\label{conventionFusionOrbitDecomposition}
As with \cite[Convention 3.2]{RSS_Bold1} for groups, we allow flexibility when decomposing virtual bisets into basis elements for fusion systems.
Let $\mathcal{F}$ and $\mathcal{G}$ be saturated fusion systems over $p$-groups $S$ and $T$ respectively.
Given $X\in \mathbb{AF}_p(\mathcal{F},\mathcal{G})$, we can write $X$ as a $\mathbb{Z}_p$-linear combination of basis elements:
\[\sum_{(R,\varphi)} c_{R,\varphi}\cdot [R,\varphi]_\mathcal{F}^\mathcal{G}.\]
The summation runs over all $R\leq S$ and $\varphi\colon R\to T$ (not taken up to conjugacy). The coefficient function $c_{(-)}$ is a choice of function from the set of all pairs $(R,\varphi)$ to $\mathbb{Z}_p$ such that the sum of coefficients $c_{R',\varphi'}$ over the pairs $(\mathcal{F},\mathcal{G})$-conjugate to $(R,\varphi)$ is the number of copies of the basis element $[R,\varphi]_\mathcal{F}^\mathcal{G}$ in $X$.
As with \cite[Convention 3.2]{RSS_Bold1}, the linear combination is \emph{not} unique as several conjugate pairs $(R,\varphi)$ can contribute to the sum at the same time. If we require $c_{(-)}$ to be concentrated on chosen representatives for the conjugacy classes of pairs, then the linear combination is unique.
\end{convention}
\begin{remark}
An advantage of the flexibility in the linear combinations above is that we can use the same coefficients for \cite[Convention 3.2]{RSS_Bold1} and Convention \ref{conventionFusionOrbitDecomposition} at the same time:
Given $X\in \mathbb{AF}_p(\mathcal{F},\mathcal{G})$, we first consider the $(S,T)$-biset ${}_S X_T$ and write this as a linear combination according to \cite[Convention 3.2]{RSS_Bold1},
\[
{}_S X_T = \sum_{(R,\varphi)} c_{R,\varphi}\cdot [R,\varphi]_S^T.
\]
Recalling that $X$ is taken to be $(\mathcal{F},\mathcal{G})$-stable, we can compose with the characteristic idempotents from each side without changing $X$:
\[
X= \omega_\mathcal{F} \mathbin{\odot} {}_S X_T \mathbin{\odot} \omega_\mathcal{G} = \sum_{(R,\varphi)} c_{R,\varphi}\cdot (\omega_\mathcal{F}\mathbin{\odot} [R,\varphi]_S^T\mathbin{\odot} \omega_\mathcal{G}) = \sum_{(R,\varphi)} c_{R,\varphi}\cdot [R,\varphi]_\mathcal{F}^\mathcal{G}.
\]
Hence the coefficients $c_{R,\varphi}$ chosen when decomposing ${}_S X_T$ as an $(S,T)$-biset also work when decomposing $X$ as an $(\mathcal{F},\mathcal{G})$-biset.
\end{remark}
Suppose $\mathcal{E}$ and $\mathcal{F}$ are saturated fusion systems on $R$ and $S$ respectively. Because we construct $\hat\Sigma^{\infty}_{+} B\mathcal{F}$ as the colimit
\[
\hat\Sigma^{\infty}_{+} B\mathcal{F} = \colim(\hat\Sigma^{\infty}_{+} BS \xrightarrow{\omega_\mathcal{F}} \hat\Sigma^{\infty}_{+} BS \xrightarrow{\omega_\mathcal{F}} \dotsb)
\]
with respect to the idempotent $\omega_\mathcal{F}\in \mathbb{AF}_p(S,S)$, the stable maps from $\hat\Sigma^{\infty}_{+} B\mathcal{E}$ to $\hat\Sigma^{\infty}_{+} B\mathcal{F}$ are given by
\[
[\hat\Sigma^{\infty}_{+} B\mathcal{E}, \hat\Sigma^{\infty}_{+} B\mathcal{F}] \cong \omega_{\mathcal{E}}\mathbin{\odot} \mathbb{AF}_p(R,S)\mathbin{\odot} \omega_{\mathcal{F}} =\mathbb{AF}_p(\mathcal{E},\mathcal{F}).
\]
\section{Free loop spaces for saturated fusion systems}\label{secFusionSystems}
In order to have a framework in which to work with free loop spaces for fusion systems, we introduce formal unions of fusion systems:
\begin{definition}
Suppose we have a formal union of finite $p$-groups $S=S_1\sqcup \dotsb \sqcup S_k$. We define a \emph{fusoid} $\mathcal{F}$ on $S$ to be a collection of fusion systems $\mathcal{F}_i$ on $S_i$ for $1\leq i\leq k$, and we write $\mathcal{F}=\mathcal{F}_1\sqcup \dotsb \sqcup \mathcal{F}_k$.
Given another fusoid $\mathcal{E}$ with underlying union $R$ of $p$-groups, we define a fusion preserving map $\mathcal{E}\to \mathcal{F}$ to be a collection of homomorphisms that take each component $R_i$ of $R$ to some component $S_j$ of $S$ and where the homomorphism $R_i\to S_j$ is fusion preserving from $\mathcal{E}_i$ to $\mathcal{F}_j$.
A fusoid $\mathcal{F}$ is \emph{saturated} if each component of $\mathcal{F}$ is saturated. Furthermore each saturated fusoid has as classifying space $B\mathcal{F}$ the disjoint union of the classifying spaces of the components, and $\mathcal{F}$ has a classifying spectrum
$\hat\Sigma^{\infty}_{+} B\mathcal{F} = \colim (\hat\Sigma^{\infty}_{+} BS\xrightarrow{\omega_{\mathcal{F}}} \hat\Sigma^{\infty}_{+} BS \xrightarrow{\omega_{\mathcal{F}}} \dotsb)$, where $\omega_\mathcal{F}\in \mathbb{AG}(S,S)^\wedge_p$ is the diagonal matrix with entries $\omega_{\mathcal{F}_i}\in \mathbb{AG}(S_i,S_i)^\wedge_p$.
The classifying spectrum $\hat\Sigma^{\infty}_{+} B\mathcal{F}$ is also the sum under $\mathbb S^\wedge_p$ of the classifying spectra $\hat\Sigma^{\infty}_{+} B\mathcal{F}_i$ for the components of $\mathcal{F}$, where the copies of $\mathbb S^\wedge_p$ in the $\hat\Sigma^{\infty}_{+} B\mathcal{F}_i$ coming from the disjoint basepoints are identified with each other.
\end{definition}
\begin{definition}
For saturated fusoids $\mathcal{E}$ and $\mathcal{F}$ over unions $R$ and $S$ of $p$-groups, we define $\mathbb{AF}_p(\mathcal{E},\mathcal{F})$ similarly as for unions of groups, so that each $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ is a matrix of virtual bisets with entries $X_{i,j} \in \mathbb{AF}_p(\mathcal{E}_i,\mathcal{F}_j)$ for the corresponding components of $\mathcal{E}$ and $\mathcal{F}$. The $\mathbb{Z}_p$-module $\mathbb{AF}_p(\mathcal{E},\mathcal{F})$ is a $\mathbb{Z}_p$-submodule of the module $\mathbb{AF}_p(R,S)$ of matrices for the underlying unions of $p$-groups.
Again we define composition $\mathbin{\odot}$ in terms of matrix multiplication, and we let \emph{the biset category of fusoids} $\mathbb{AF}_p$ be the category with objects the saturated fusoids at the prime $p$ and morphism set from $\mathcal{E}$ to $\mathcal{F}$ given by $\mathbb{AF}_p(\mathcal{E},\mathcal{F})$.
The identity map $\id_{\mathcal{F}}\in \mathbb{AF}_p(\mathcal{F},\mathcal{F})$ for a saturated fusoid is just the diagonal matrix $\omega_{\mathcal{F}}$ with entry $\omega_{\mathcal{F}_i}$ for each component $\mathcal{F}_i$ of $\mathcal{F}$.
\end{definition}
There is a functor $\mathbb{AF}_p\to \Ho(\mathord{\mathrm{Sp}}_{p})$ that takes a fusoid $\mathcal{F}$ to the $p$-completed classifying spectrum $\hat\Sigma^{\infty}_{+} B\mathcal{F} \simeq \mathbb S^\wedge_p \vee \Sigma^{\infty} B\mathcal{F}$, and on morphisms it is the Segal map for fusoids, which is an isomorphism:
\[[\hat\Sigma^{\infty}_{+} B\mathcal{E},\hat\Sigma^{\infty}_{+} B\mathcal{F}] \cong \mathbb{AF}_p(\mathcal{E},\mathcal{F}).\]
As such, $\mathbb{AF}_p\to \Ho(\mathord{\mathrm{Sp}}_{p})$ is fully faithful.
A word of caution: While the restriction of actions ${}_G X_H\mapsto {}_{\mathcal{F}_G}X_{\mathcal{F}_H}$ defines a map $\mathbb{AG}(G,H)\to \mathbb{AF}_p(\mathcal{F}_G,\mathcal{F}_H)$ for any finite groups $G$ and $H$, this does not define a functor $\mathbb{AG}\to \mathbb{AF}_p$. See \cite{RSS_p-completion}*{Theorem 1.1} for a description of the functor $\mathbb{AG}\to \mathbb{AF}_p$ that corresponds to $p$-completion of spectra.
We wish to give an algebraic model for the $n$-fold free loop space $\freeO n(B\mathcal{F})$, when $\mathcal{F}$ is a saturated fusion system or fusoid. In order to do this, we first need to specify what we mean by commuting $n$-tuples in $\mathcal{F}$, their conjugacy classes, and their centralizer fusion systems.
\begin{definition}
Let $\mathcal{F}$ be a saturated fusoid or fusion system on a union $S$ of finite $p$-groups. For each $n\geq 1$, we consider
$n$-tuples $\underline a = (a_1,\dotsc, a_n)$ of commuting elements in $S$. Note that the elements of a tuple $\underline a$ are required to lie in the same component of the formal union $S$. We say that two $n$-tuples $\underline a$ and $\underline b$ are \emph{$\mathcal{F}$-conjugate} if they lie in the same component of $S$ and there is a map in $\mathcal{F}$ sending $\underline a$ to $\underline b$, i.e. a map
\[\varphi\colon \gen{\underline a} = \gen{a_1,\dotsc,a_n} \to \gen{\underline b}=\gen{b_1,\dotsc,b_n} \text{ in $\mathcal{F}$}\]
such that $\varphi(a_i)=b_i$.
We let $\cntuples n\mathcal{F}$ denote the collection of equivalence classes of commuting $n$-tuples in $S$ up to $\mathcal{F}$-conjugation, for $n>0$. For $n=0$, we consider $\cntuples 0\mathcal{F}$ to consist of a single empty/trivial $0$-tuple.
For a finite group $G$ and $n\geq 0$, let $\cnptuples nG$ denote the classes of commuting $n$-tuples of \emph{elements with $p$-power order} in $G$ up to $G$-conjugation.
\end{definition}
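\begin{example}
For $\mathcal{F}=\mathcal{F}_{\Sigma_3}$ at $p=3$ on $S=\gen{(1\,2\,3)}$ as above, the set $\cntuples 1\mathcal{F}$ has exactly two classes, represented by $1$ and $(1\,2\,3)$: the two nontrivial elements of $S$ are identified by the inversion in $\mathcal{F}(S,S)$. This matches $\cnptuples 1{\Sigma_3}$, whose classes are the identity and the single conjugacy class of $3$-cycles. For $n=2$, the nine commuting pairs in $S$ fall into five classes in $\cntuples 2\mathcal{F}$, since the inversion acts diagonally on pairs with $(1,1)$ as its only fixed point.
\end{example}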
\begin{definition}
Let $\mathcal{F}$ be a saturated fusion system or fusoid over $S$. We say that an $n$-tuple $\underline a$ in $S$ is fully $\mathcal{F}$-centralized if $\abs{C_S(\underline a)}\geq \abs{C_S(\underline a')}$ for all $n$-tuples $\underline a'$ conjugate to $\underline a$ in $\mathcal{F}$; this is the case if and only if the subgroup $\gen{a_1,\dotsc,a_n}$ is fully $\mathcal{F}$-centralized in the usual terminology of fusion systems.
When $\mathcal{F}$ is a saturated fusion system, and $\underline a$ is fully centralized in $\mathcal{F}$, we define the centralizer fusion system $C_\mathcal{F}(\underline a)$ to be the fusion system over the $p$-group $C_S(\underline a)$ with maps
\begin{multline*}
\Hom_{C_\mathcal{F}(\underline a)}(Q,P)=\{\varphi\in \mathcal{F}(Q,P) \mid \text{$\varphi$ extends to a map $\tilde\varphi\in \mathcal{F}$ defined on }\\\text{ $Q$ and each $a_i$, $1\leq i\leq n$, such that $\tilde\varphi|_Q = \varphi$ and $\tilde\varphi(a_i)=a_i$}\}
\end{multline*}
for subgroups $Q,P\leq C_S(\underline a)$.
The fusion system $C_\mathcal{F}(\underline a)$ coincides with the usual notion of the centralizer fusion system for the subgroup $\gen{a_1,\dotsc,a_n}\leq S$.
When $\mathcal{F}$ is a fusoid, and $\underline a$ is fully $\mathcal{F}$-centralized, we define the centralizer fusion system $C_\mathcal{F}(\underline a)$ to be the centralizer inside the component of $\mathcal{F}$ containing $\underline a$. As such, the centralizer $C_\mathcal{F}(\underline a)$ is always a fusion system and not a fusoid.
\end{definition}
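\begin{example}
In $\mathcal{F}=\mathcal{F}_{\Sigma_3}$ at $p=3$ on $S=\gen{(1\,2\,3)}$, every tuple is fully $\mathcal{F}$-centralized since $S$ is abelian, so that $C_S(\underline a)=S$ throughout. For the $1$-tuple $a=(1\,2\,3)$, the inversion does not fix $a$, so the only maps in $\mathcal{F}$ that extend to maps fixing $a$ are restrictions of the identity, and $C_\mathcal{F}(a)$ is the trivial fusion system on $S$. This is consistent with the group-level computation $C_{\Sigma_3}((1\,2\,3)) = S$.
\end{example}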
\begin{lemma}\label{lemmaSaturatedCentralizers}
Suppose $\underline a$ is a fully $\mathcal{F}$-centralized $n$-tuple, then the centralizer fusion system $C_\mathcal{F}(\underline a)$ is saturated.
\end{lemma}
\begin{proof}
This is just \cite{BLO2}*{Proposition A.6} applied to the subgroup $\gen{a_1,\dotsc,a_n}$ and the centralizer system $C_\mathcal{F}(\gen{a_1,\dotsc,a_n})$.
\end{proof}
\begin{remark}\label{remarkExtendingToCentralizers}
The following fact, which we will need for the next lemma, is a special case of the second saturation axiom \cite{RagnarssonStancu}*{Definition 2.5(II)}.
If a commuting $n$-tuple $\underline a$ is fully $\mathcal{F}$-centralized in a saturated fusion system $\mathcal{F}$, and if $\varphi$ in $\mathcal{F}$ takes any other $n$-tuple $\underline b$ to $\underline a$, then the map $\varphi\colon \gen{b_1,\dotsc,b_n}\to \gen{a_1,\dotsc,a_n}$ extends to a map between centralizers $\tilde\varphi\colon C_S(\underline b)\to C_S(\underline a)$ with $\tilde\varphi(b_i)=a_i$.
\end{remark}
\begin{lemma}\label{lemmaIteratedRepr}
Let $\mathcal{F}$ be a saturated fusion system on $S$, and let $\underline a= (a_1,\dotsc,a_{n+1})$ be a commuting $(n+1)$-tuple in $S$. Suppose $\partial\tup a= (a_1,\dotsc,a_n)$ is fully centralized in $\mathcal{F}$, and suppose further that $a_{n+1}\in C_S(\partial\tup a)$ is fully centralized in $C_\mathcal{F}(\partial\tup a)$; then $\underline a$ is fully centralized in $\mathcal{F}$.
Furthermore, each commuting $(n+1)$-tuple $\underline b$ in $S$ is $\mathcal{F}$-conjugate to a fully centralized tuple of the form above.
\end{lemma}
\begin{proof}
First of all, for any $(n+1)$-tuple $\underline a$ the element $a_{n+1}$ commutes with $\partial\tup a$ if and only if $a_{n+1}\in C_S(\partial\tup a)$, and in that case
\[C_S(\underline a) =C_{C_S(\partial\tup a)}(a_{n+1}).\]
Suppose $\underline a$ is a commuting $(n+1)$-tuple with $\partial\tup a$ fully $\mathcal{F}$-centralized and $a_{n+1}$ fully $C_\mathcal{F}(\partial\tup a)$-centralized.
Consider any other $(n+1)$-tuple $\underline b$ that is $\mathcal{F}$-conjugate to $\underline a$, and suppose $\varphi\in \mathcal{F}$ sends $\underline b$ to $\underline a$. Let $\partial\tup b=(b_1,\dotsc,b_n)$ and let $\varphi_\partial$ be the restriction of $\varphi$ to the subgroup generated by $\partial\tup b$, so that $\varphi_\partial(\partial\tup b)=\partial\tup a$. Since $\partial\tup a$ is fully $\mathcal{F}$-centralized, Remark \ref{remarkExtendingToCentralizers} implies that $\varphi_\partial$ extends to the centralizers as a map $\psi\colon C_S(\partial\tup b) \to C_S(\partial\tup a)$. In particular $\psi(b_{n+1})\in C_S(\partial\tup a)$.
All elements of $C_S(\underline b)\leq C_S(\partial\tup b)$ centralize $b_{n+1}$, so after applying $\psi$ we get
\[\psi(C_S(\underline b)) \leq C_{C_S(\partial\tup a)}(\psi(b_{n+1})).\]
We proceed to look at the composite
\[\varphi\circ \psi^{-1} \colon \gen{a_1,\dotsc,a_n,\psi (b_{n+1})} \to \gen{b_1,\dotsc,b_{n+1}} \to \gen{a_1,\dotsc,a_{n+1}}.\]
Note that $\varphi\circ \psi^{-1}$ maps $\psi(b_{n+1})$ to $a_{n+1}$ and at the same time maps $\partial\tup a$ identically to itself. The composite $\varphi\circ \psi^{-1}$ therefore defines a map in $C_\mathcal{F}(\partial\tup a)$ from $\psi(b_{n+1})$ to $a_{n+1}$. Hence $a_{n+1}$ and $\psi(b_{n+1})$ are conjugate in $C_\mathcal{F}(\partial\tup a)$, wherein $a_{n+1}$ was assumed to be fully centralized, so we conclude that
\[\abs{C_S(\underline a)} = \abs{C_{C_S(\partial\tup a)}(a_{n+1})} \geq \abs{C_{C_S(\partial\tup a)}(\psi(b_{n+1}))} \geq \abs{\psi(C_S(\underline b))} =\abs{C_S(\underline b)}.\]
This completes the proof that $\underline a$ is in fact fully centralized in $\mathcal{F}$.
For the second part of the lemma, let $\underline b$ be any commuting $(n+1)$-tuple in $S$. Consider the truncated $n$-tuple $\partial\tup b=(b_1,\dotsc,b_n)$, and choose any preferred fully $\mathcal{F}$-centralized conjugate $\underline a$ of $\partial\tup b$. Let $\varphi$ be a map in $\mathcal{F}$ from $\partial\tup b$ to $\underline a$; by Remark \ref{remarkExtendingToCentralizers} as above, we get an extension of $\varphi$ to $\tilde\varphi\colon C_S(\partial\tup b)\to C_S(\underline a)$. We have $b_{n+1}\in C_S(\partial\tup b)$ and hence $\tilde\varphi(b_{n+1})\in C_S(\underline a)$. Choose any $z\in C_S(\underline a)$ that is fully $C_\mathcal{F}(\underline a)$-centralized and conjugate to $\tilde\varphi(b_{n+1})$ inside $C_\mathcal{F}(\underline a)$, and let $\psi \in C_\mathcal{F}(\underline a)$ be a map that takes $\tilde\varphi(b_{n+1})$ to $z$. Then $\psi$ extends trivially onto $\underline a$, hence $\psi\circ \tilde\varphi$ takes the entire tuple $\underline b$ to $(a_1,\dotsc,a_n,z)$. Furthermore $(a_1,\dotsc,a_n,z)$ has the requested form -- and hence is fully $\mathcal{F}$-centralized by the first part of the lemma.
\end{proof}
We wish to take the formal union of centralizer fusion systems $C_\mathcal{F}(\underline a)$ with $[\underline a]\in \cntuples n\mathcal{F}$ as an algebraic model for the $n$-fold free loop space of $B\mathcal{F}$. To show that the centralizers do not depend on the choice of representatives, we give the following analogue of \cite[Lemma 3.12]{RSS_Bold1}.
\begin{lemma}\label{lemmaCentralizerInclusion}
Let $\underline a$ be an $n$-tuple of commuting elements in $S$, and suppose that $\reptup a$ is fully $\mathcal{F}$-centralized and $\mathcal{F}$-conjugate to $\underline a$. Any map $\varphi$ in $\mathcal{F}$ that takes $\underline a$ to $\reptup a$ then induces a fusion preserving injective map $C_\mathcal{F}(\underline a)\hookrightarrow C_\mathcal{F}(\reptup a)$. In $\mathbb{AF}_p$, all such inclusions give rise to the same virtual bifree biset
\[\zeta_{\underline a}^{\reptup a} \in \mathbb{AF}_p(C_S(\underline a), C_\mathcal{F}(\reptup a))\]
that is left $C_\mathcal{F}(\underline a)$-stable in addition to being right $C_\mathcal{F}(\reptup a)$-stable.
If $\reptup a'$ is another fully $\mathcal{F}$-centralized tuple that is conjugate to $\underline a$ (and therefore to $\reptup a$), then
\[\zeta_{\reptup a}^{\reptup a'}\in \mathbb{AF}_p(C_\mathcal{F}(\reptup a), C_\mathcal{F}(\reptup a')),\]
and the chosen bisets are compatible with composition
\[\zeta_{\underline a}^{\reptup a'} = \zeta_{\underline a}^{\reptup a} \mathbin{\odot} \zeta_{\reptup a}^{\reptup a'}.\]
\end{lemma}
\begin{proof}
The $\mathcal{F}$-conjugation from $\underline a$ to $\reptup a$ is given by a unique map $\varphi\colon \gen{\underline a} \to \gen{\reptup a}$ in $\mathcal{F}$.
By Remark \ref{remarkExtendingToCentralizers} the map $\varphi$ extends to a map
\[\tilde\varphi\colon C_S(\underline a) \to C_S(\reptup a)\]
such that $\tilde\varphi|_{\gen{\underline a}} = \varphi$. We define
\[\zeta_{\underline a}^{\reptup a} := [C_S(\underline a), \tilde\varphi]_{C_S(\underline a)}^{C_S(\reptup a)} \mathbin{\odot} \omega_{C_\mathcal{F}(\reptup a)} = [C_S(\underline a), \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a)}.\]
Given any other choice of extension $\psi \colon C_S(\underline a) \to C_S(\reptup a)$ that maps $\underline a$ to $\reptup a$, the composite $\psi\circ (\tilde \varphi)^{-1}$ defines a map in $\mathcal{F}$ from $\tilde\varphi(C_S(\underline a))$ to $\psi(C_S(\underline a))$. The composite $\psi\circ (\tilde \varphi)^{-1}$ maps the fully centralized tuple $\reptup a$ to itself by the identity, so $\rho=\psi\circ (\tilde \varphi)^{-1}$ defines a map in $C_\mathcal{F}(\reptup a)$ from the subgroup $\tilde\varphi(C_S(\underline a))$ to $\psi(C_S(\underline a))$. Since $\psi=\rho \circ \tilde\varphi$ and $\rho\in C_\mathcal{F}(\reptup a)$, the two maps $\psi$ and $\tilde\varphi$ give rise to the same virtual biset
\[[C_S(\underline a), \psi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a)} = [C_S(\underline a), \rho\circ \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a)} = [C_S(\underline a), \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a)}.\]
The virtual biset $\zeta_{\underline a}^{\reptup a}$ is therefore independent of the choice of extension $\tilde\varphi$ of the map $\varphi$.
We now claim that $\tilde\varphi\colon C_S(\underline a)\to C_S(\reptup a)$ is in fact fusion preserving from $C_\mathcal{F}(\underline a)$ to $C_\mathcal{F}(\reptup a)$. Given a map $\kappa\in C_\mathcal{F}(\underline a)(Q,P)$, it extends to $\tilde\kappa\colon \gen{\underline a}Q \to \gen{\underline a}P$ sending the elements of $\underline a$ to themselves. The composite $\tilde\varphi\circ \tilde\kappa \circ (\tilde\varphi)^{-1}$, defined on $\tilde\varphi(\gen{\underline a}Q)$, is a map in $\mathcal{F}$ that sends $\reptup a$ to itself, hence its restriction $\tilde\varphi\circ \kappa \circ (\tilde\varphi)^{-1}\colon \tilde\varphi(Q)\to \tilde\varphi(P)$ lies in $C_\mathcal{F}(\reptup a)$ as required, so $\tilde \varphi$ is fusion preserving. Because $\tilde\varphi$ is fusion preserving, it follows from \cite{RSS_p-completion}*{Lemma 4.6} that
\[\zeta_{\underline a}^{\reptup a} = [C_S(\underline a), \tilde\varphi]_{C_S(\underline a)}^{C_S(\reptup a)} \mathbin{\odot} \omega_{C_\mathcal{F}(\reptup a)}\]
is left $C_\mathcal{F}(\underline a)$-stable.
Given a further map $\theta$ in $\mathcal{F}$ from $\reptup a$ to $\reptup a'$, we have
\[\zeta_{\underline a}^{\reptup a} \mathbin{\odot} \zeta_{\reptup a}^{\reptup a'} = [C_S(\underline a), \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a)} \mathbin{\odot} [C_S(\reptup a), \tilde\theta]_{C_\mathcal{F}(\reptup a)}^{C_\mathcal{F}(\reptup a')} = [C_S(\underline a), \tilde \theta\circ \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a')},\]
by Proposition \ref{propFusionDoubleCosetFormula}, since $\tilde\theta$ is fusion preserving. The composite \[\tilde\theta\circ\tilde\varphi\colon C_S(\underline a) \to C_S(\reptup a')\]
sends $\underline a$ to $\reptup a'$ and is therefore a valid choice of extension $\widetilde{\theta\circ\varphi}:= \tilde\theta\circ \tilde\varphi$. Using this choice for $\widetilde{\theta\circ\varphi}$, we then get
\[\zeta_{\underline a}^{\reptup a'} = [C_S(\underline a), \tilde \theta\circ \tilde\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\reptup a')} = \zeta_{\underline a}^{\reptup a} \mathbin{\odot} \zeta_{\reptup a}^{\reptup a'}.\qedhere\]
\end{proof}
\begin{lemma}\label{lemmaChangeOfRepresentatives}
Suppose $\underline a_1,\dotsc,\underline a_r$ and $\underline b_1,\dotsc,\underline b_r$ are two choices of fully centralized representatives for $\cntuples n\mathcal{F}$, and suppose the labelling is such that $\underline a_i$ is $\mathcal{F}$-conjugate to $\underline b_i$ for $1\leq i\leq r$. There is then a canonical isomorphism of formal unions
$\displaystyle\coprod_{i} C_\mathcal{F}(\underline a_i) \xrightarrow{\cong} \coprod_{i} C_\mathcal{F}(\underline b_i)$ in $\mathbb{AF}_p$ via the diagonal matrix whose $i$th entry is $\zeta_{\underline a_i}^{\underline b_i}\in \mathbb{AF}_p(C_\mathcal{F}(\underline a_i),C_\mathcal{F}(\underline b_i))$.
Given further choices of representatives, the isomorphisms are compatible with respect to composition.
\end{lemma}
\begin{proof}
Let $\iso_a^b$ be the diagonal matrix with entries $\zeta_{\underline a_i}^{\underline b_i}$. Given a third set of fully centralized representatives $\underline c_1,\dotsc,\underline c_r$ for $\cntuples n\mathcal{F}$, the fact that the isomorphisms are compatible,
\[\iso_a^c=\iso_a^b\mathbin{\odot} \iso_b^c,\]
is immediate from the composition of diagonal entries $\zeta_{\underline a_i}^{\underline c_i} = \zeta_{\underline a_i}^{\underline b_i} \mathbin{\odot} \zeta_{\underline b_i}^{\underline c_i}$, which follows from Lemma \ref{lemmaCentralizerInclusion}.
The inverse matrix to $\iso_a^b$ is just the diagonal matrix $\iso_b^a$ with diagonal entries $\zeta_{\underline b_i}^{\underline a_i}$. The fact that $\iso_a^b$ and $\iso_b^a$ are inverses follows from the equality $\zeta_{\underline a_i}^{\underline b_i} \mathbin{\odot} \zeta_{\underline b_i}^{\underline a_i}=\zeta_{\underline a_i}^{\underline a_i}$. Here $\zeta_{\underline a_i}^{\underline a_i}$ is the identity element in $\mathbb{AF}_p(C_\mathcal{F}(\underline a_i),C_\mathcal{F}(\underline a_i))$ since $\zeta_{\underline a_i}^{\underline a_i}$ is induced by the identity map on $C_S(\underline a_i)$ in $\mathcal{F}$. The analogous statement is true for the $\underline b$'s.
\end{proof}
\begin{convention}\label{conventionTupleReps}
As with \cite[Convention 3.14]{RSS_Bold1}, we will now suppose that a choice of preferred representatives for $\cntuples n\mathcal{F}$ has been made for all $n\geq 0$. We require that each representative $n$-tuple $\underline a$ is fully $\mathcal{F}$-centralized.
Lemma \ref{lemmaIteratedRepr} furthermore enables us to choose representatives such that each chosen representative $n$-tuple $\underline a=(a_1,\dotsc,a_n)$ satisfies that $\partial\tup a=(a_1,\dotsc,a_{n-1})$ is one of the previously chosen representative $(n-1)$-tuples (and $a_n$ is fully $C_\mathcal{F}(\partial\tup a)$-centralized).
\end{convention}
Since $B\mathcal{F}$, as constructed by \cite{BLO2} and \cite{Chermak}, is the $p$-completion of the nerve of a finite category, the usual $n$-fold free loop space $\Map(B(\mathbb{Z}^n),B\mathcal{F})$ is equivalent to the colimit over cyclic $p$-groups:
\[\freeO n B\mathcal{F} \simeq \colim_{e\to \infty} \Map(B(\mathbb{Z}/p^e)^n, B\mathcal{F})\]
for any union $\mathcal{F}$ of saturated fusion systems. In the following we shall replace $S^1$ with the classifying space $B(\mathbb{Z}/p^e)$ for sufficiently large $e$. We will follow \cite{RSS_Bold1}*{Convention 3.21} and suppose $e$ is large enough to work for all the finitely many fusion systems in each calculation.
As in \cite[Definition 3.11]{RSS_Bold1} we introduce an algebraic model for the $n$-fold free loop space of $B\mathcal{F}$ as a union of centralizer fusion systems:
\begin{definition}\label{defFusionLoop}\label{defFusionAlgLoop}
Let $\mathcal{F}$ be a saturated fusion system on $S$. We define $\freeO n \mathcal{F}$ to be the saturated fusoid
\[\freeO n \mathcal{F} := \coprod_{[\underline a]\in \cntuples n\mathcal{F}} C_\mathcal{F}(\underline a),\]
where the chosen representatives are fully centralized (according to Convention \ref{conventionTupleReps}).
\end{definition}
By Lemma \ref{lemmaChangeOfRepresentatives}, different choices of representatives for the conjugacy classes result in isomorphic fusoids $\freeO n \mathcal{F}$.
Mapping spaces into $B\mathcal{F}$ are described in detail by Broto-Levi-Oliver in their paper \cite{BLO2}, and applying their results when mapping out of $B(\mathbb{Z}/p^e)^n$ essentially gives a proof that $\freeO n\mathcal{F}$ models the $n$-fold free loop space of $B\mathcal{F}$.
\begin{prop}[\cite{BLO2}]\label{propFreeLoopSpaceDef}
Let $\mathcal{F}$ be a saturated fusoid or fusion system. The fusoid $\freeO n\mathcal{F}$ is an algebraic model for the free loop space $\displaystyle\freeO n B\mathcal{F} \simeq \colim_{e\to\infty} \Map(B(\mathbb{Z}/p^e)^n , B\mathcal{F})$, that is we have homotopy equivalences
\[\freeO n B\mathcal{F} \simeq B(\freeO n \mathcal{F}) = \coprod_{[\underline a]\in \cntuples n\mathcal{F}} BC_\mathcal{F}(\underline a),\]
where the chosen representatives $\underline a$ are fully centralized.
\end{prop}
Before we go through the proof, we make the following observation based on the construction preceding Theorem 6.3 of \cite{BLO2}.
\begin{remark}\label{remarkFreeLoopSpaceDef}
Suppose $\mathcal{F}$ is a saturated fusion system. The homotopy equivalence in Proposition \ref{propFreeLoopSpaceDef} is given in terms of maps
\[f_{\underline a}\colon B(\mathbb{Z}/p^e)^n\times BC_\mathcal{F}(\underline a) \to B\mathcal{F},\]
for each representing $n$-tuple $\underline a$. Each map $f_{\underline a}$, when restricted to the underlying $p$-groups, is $B(\mathord{\mathrm{ev}}_{\underline a})$, where $\mathord{\mathrm{ev}}_{\underline a}$ is the evaluation map of \cite[Lemma 3.20]{RSS_Bold1}
\[\mathord{\mathrm{ev}}_{\underline a}\colon (\mathbb{Z}/p^e)^n\times C_S(\underline a) \to S\]
given by $\mathord{\mathrm{ev}}_{\underline a}(t_1,\dotsc,t_n,z) := (a_1)^{t_1}\dotsm(a_n)^{t_n}z$. Because $\mathord{\mathrm{ev}}_{\underline a}$ is a homomorphism of $p$-groups, when we pass to the Burnside category $\mathbb{AF}_p$, the map $f_{\underline a}$ is just
\[ [(\mathbb{Z}/p^e)^n\times C_S(\underline a) , \mathord{\mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a) }^\mathcal{F}. \]
We will see below in Lemma \ref{lemmaEvalFusionPreserving} that each $\mathord{\mathrm{ev}}_{\underline a}$ is in fact fusion preserving.
\end{remark}
\begin{proof}[Proof of Proposition \ref{propFreeLoopSpaceDef}]
It is enough to consider the case when $\mathcal{F}$ is a saturated fusion system over a finite $p$-group $S$.
Let $e\gg0$; in fact $p^e \geq \abs S$ is enough. Corollary 4.5 of \cite{BLO2} tells us that
$[B(\mathbb{Z}/p^e)^n, B\mathcal{F}]$ is in bijection with the classes of commuting $n$-tuples $\cntuples n\mathcal{F}$.
For each class in $\cntuples n\mathcal{F}$ choose a fully centralized representative $\underline a$. Then Theorem 6.3 of \cite{BLO2} states that the connected component of $\Map(B(\mathbb{Z}/p^e)^n , B\mathcal{F})$ corresponding to $\underline a$ is homotopy equivalent to $BC_\mathcal{F}(\underline a)$. In total this shows that
\[\Map(B(\mathbb{Z}/p^e)^n , B\mathcal{F}) \simeq \coprod_{[\underline a]\in \cntuples n\mathcal{F}} BC_\mathcal{F}(\underline a)\]
for sufficiently large $e$.
\end{proof}
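\begin{example}
For $\mathcal{F}=\mathcal{F}_{\Sigma_3}$ at $p=3$ on $S=\gen{(1\,2\,3)}$, the two classes in $\cntuples 1\mathcal{F}$ are represented by $1$ and $a=(1\,2\,3)$, with $C_\mathcal{F}(1)=\mathcal{F}$ and with $C_\mathcal{F}(a)$ the trivial fusion system on $S$ as computed above. Hence Proposition \ref{propFreeLoopSpaceDef} gives
\[\freeO 1 B\mathcal{F} \simeq B\mathcal{F} \sqcup BC_\mathcal{F}(a),\]
where the second component is equivalent to $BS^{\wedge}_p$, with one component of free loops for each of the two conjugacy classes.
\end{example}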
\begin{lemma}\label{lemmaEvalFusionPreserving}
Let $\mathcal{F}$ be a saturated fusion system, and let $\underline a$ be a representative $n$-tuple. The homomorphism $\mathord{\mathrm{ev}}_{\underline a}\colon (\mathbb{Z}/p^e)^n\times C_S(\underline a) \to S$ given by
\[
\mathord{\mathrm{ev}}_{\underline a}(t_1,\dotsc,t_n,z) := (a_1)^{t_1}\dotsm(a_n)^{t_n}z
\]
is fusion preserving with respect to the fusion systems $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)$ and $\mathcal{F}$.
\end{lemma}
\begin{proof}
Any morphism in the product fusion system $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)$ has the form $\id\times \varphi\colon D\to E$ for subgroups $D,E\leq (\mathbb{Z}/p^e)^n\times C_S(\underline a)$ and where $\varphi$ is a morphism in the centralizer fusion system $C_\mathcal{F}(\underline a)$.
Denote the projections of $D$ by $D_1\leq (\mathbb{Z}/p^e)^n$ and $D_2\leq C_S(\underline a)$ respectively, so $D\leq D_1\times D_2$, and similarly $E\leq E_1\times E_2$. Then $\varphi\in \Hom_{C_\mathcal{F}(\underline a)}(D_2, E_2)$ by definition of a product fusion system.
By definition of the centralizer fusion system, $\varphi$ extends to $\tilde\varphi\colon \gen{\underline a, D_2}\to \gen{\underline a, E_2}$ with $\tilde\varphi(\underline a) = \underline a$ and $\tilde \varphi\in \mathcal{F}$. Now the diagram
\[
\begin{tikzpicture}
\node [matrix of math nodes] (M) {
D &[1cm] D_1\times D_2 &[2cm] \gen{\underline a, D_2} \\[2cm]
E & E_1\times E_2 & \gen{\underline a, E_2} \\
};
\path
(M-1-1) -- node{$\leq$} (M-1-2)
(M-2-1) -- node{$\leq$} (M-2-2)
;
\path [auto, ->, arrow]
(M-1-1) edge node{$\id\times \varphi$} (M-2-1)
(M-1-2) edge node{$\id\times \varphi$} (M-2-2)
edge node{$\mathord{\mathrm{ev}}_{\underline a}$} (M-1-3)
(M-1-3) edge node{$\tilde\varphi$} (M-2-3)
(M-2-2) edge node{$\mathord{\mathrm{ev}}_{\underline a}$} (M-2-3)
;
\end{tikzpicture}
\]
commutes, so $\tilde \varphi$ satisfies $\mathord{\mathrm{ev}}_{\underline a}\circ (\id \times \varphi) = \tilde\varphi \circ \mathord{\mathrm{ev}}_{\underline a}$ as homomorphisms $D\to \mathord{\mathrm{ev}}_{\underline a}(E)$. Hence $\mathord{\mathrm{ev}}_{\underline a}$ is fusion preserving.
\end{proof}
\begin{remark}
As homotopy classes of stable maps, the isomorphism of Lemma \ref{lemmaChangeOfRepresentatives} commutes with the equivalence
\[\freeO n B\mathcal{F} \xrightarrow{\simeq} \coprod_{i} BC_\mathcal{F}(\underline a_i).\]
This can be seen in the following way, using that the evaluation maps are fusion preserving by Lemma \ref{lemmaEvalFusionPreserving}:
As mentioned in Remark \ref{remarkFreeLoopSpaceDef}, the equivalence $\freeO nB\mathcal{F} \simeq \coprod_{i} BC_\mathcal{F}(\underline a_i)$ is adjoint to the maps $f_i\colon B(\mathbb{Z}/p^e)^n \times BC_\mathcal{F}(\underline a_i) \to B\mathcal{F}_i$, where $\mathcal{F}_i$ is the component of $\mathcal{F}$ containing $\underline a_i$. The stable homotopy class of $f_i$ is represented by the element
\[ [ (\mathbb{Z}/p^e)^n\times C_S(\underline a_i), \mathord{\mathrm{ev}}_{\underline a_i}]_{ (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a_i) }^{\mathcal{F}_i} \]
in $\mathbb{AF}_p$.
By construction $\iso_a^b$ has entries
\[\zeta_{\underline a_i}^{\underline b_i}=[C_S(\underline a_i), \tilde\varphi]_{C_\mathcal{F}(\underline a_i)}^{C_\mathcal{F}(\underline b_i)},\]
where $\tilde\varphi$ is an $\mathcal{F}$-isomorphism $C_S(\underline a_i)\xrightarrow{\cong} C_S(\underline b_i)$ that sends $\underline a_i$ to $\underline b_i$.
As group homomorphisms we have $\mathord{\mathrm{ev}}_{\underline b_i} \circ (\id\times \tilde\varphi) = \tilde\varphi \circ \mathord{\mathrm{ev}}_{\underline a_i}$ since both composites send $(t_1,\dotsc,t_n,z)\in (\mathbb{Z}/p^e)^n\times C_S(\underline a_i)$ to the same element \[((\underline b_i)_1)^{t_1}\dotsm ((\underline b_i)_n)^{t_n}\cdot \tilde\varphi(z) = \tilde\varphi\bigl(((\underline a_i)_1)^{t_1}\dotsm ((\underline a_i)_n)^{t_n}\cdot z \bigr).\]
If we compose $\zeta_{\underline a_i}^{\underline b_i}$ with the evaluation map for $\underline b_i$, we can use that $\mathord{\mathrm{ev}}_{\underline b_i}$ is fusion preserving and apply the special case of Proposition \ref{propFusionDoubleCosetFormula}. This gives us
\begin{align*}
&((\mathbb{Z}/p^e)^n\times \zeta_{\underline a_i}^{\underline b_i}) \mathbin{\odot} [ (\mathbb{Z}/p^e)^n\times C_S(\underline b_i), \mathord{\mathrm{ev}}_{\underline b_i}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline b_i) }^{\mathcal{F}_i}
\\={}&
[ (\mathbb{Z}/p^e)^n\times C_S(\underline a_i), \mathord{\mathrm{ev}}_{\underline b_i}\circ (\id\times \tilde\varphi)]_{ (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a_i)}^{\mathcal{F}_i}
\\ ={}& [ (\mathbb{Z}/p^e)^n\times C_S(\underline a_i), \tilde\varphi \circ \mathord{\mathrm{ev}}_{\underline a_i}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a_i)}^{\mathcal{F}_i}
\\ ={}& [(\mathbb{Z}/p^e)^n\times C_S(\underline a_i), \mathord{\mathrm{ev}}_{\underline a_i}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a_i)}^{\mathcal{F}_i} \text{ since $\tilde\varphi\in \mathcal{F}_i$.}
\end{align*}
Taking adjoints it follows that $\zeta_{\underline a_i}^{\underline b_i}$ commutes with the maps from $BC_\mathcal{F}(\underline a_i)$ and $BC_\mathcal{F}(\underline b_i)$ to $\freeO n B\mathcal{F}$ as homotopy classes of stable maps.
\end{remark}
\section{The functor $\freeO n$ for the category of fusion systems}\label{secFusionFreeLoopFunctor}
Given a saturated fusoid $\mathcal{F}$ over a formal union of $p$-groups $S$, we can apply $\freeO n$ to the characteristic idempotent $\omega_\mathcal{F}\in\mathbb{AF}_p(S,S)$. The result is an idempotent endomorphism $\freeO n(\omega_\mathcal{F})$ from $\freeO n S$ to itself.
\begin{definition}
Let $\tel_{\freeO n(\omega_\mathcal{F})}$ denote the mapping telescope
\[\tel_{\freeO n(\omega_\mathcal{F})} = \colim(\hat\Sigma^{\infty}_{+} B\freeO n S\xrightarrow{\freeO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B\freeO n S \xrightarrow{\freeO n(\omega_\mathcal{F})} \dotsb).\]
Then $\tel_{\freeO n(\omega_\mathcal{F})}$ is the retract of $\hat\Sigma^{\infty}_{+} B\freeO n S$ with respect to the idempotent $\freeO n(\omega_\mathcal{F})\in \mathbb{AF}_p(\freeO n S,\freeO n S)$.
\end{definition}
Given any $(\mathcal{E},\mathcal{F})$-stable virtual biset $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$, if we apply $\freeO n$ to the relation $X= \omega_\mathcal{E}\mathbin{\odot} X\mathbin{\odot} \omega_\mathcal{F}$, we get
\[\freeO n(X) = \freeO n(\omega_\mathcal{E}) \mathbin{\odot} \freeO n(X) \mathbin{\odot} \freeO n(\omega_\mathcal{F}).\]
This implies that $\freeO n(X)$ descends to a map $\freeO n(X)\colon \tel_{\freeO n(\omega_\mathcal{E})} \to \tel_{\freeO n(\omega_\mathcal{F})}$.
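Explicitly, composing with $\freeO n(\omega_\mathcal{E})$ on the left and using the idempotency noted above, we get
\[\freeO n(\omega_\mathcal{E})\mathbin{\odot} \freeO n(X) = \freeO n(\omega_\mathcal{E})\mathbin{\odot} \freeO n(\omega_\mathcal{E})\mathbin{\odot} \freeO n(X)\mathbin{\odot} \freeO n(\omega_\mathcal{F}) = \freeO n(X),\]
and similarly $\freeO n(X)\mathbin{\odot} \freeO n(\omega_\mathcal{F}) = \freeO n(X)$. Hence $\freeO n(X)$ commutes with the structure maps of the two telescopes and induces a map of colimits.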
The next step for us is to compare $\tel_{\freeO n(\omega_\mathcal{F})}$ with the algebraic model for $\freeO n \mathcal{F}$ in Definition \ref{defFusionAlgLoop}.
\begin{definition}\label{defLnFRetractOfLnS}
Let $\mathcal{F}$ be a saturated fusoid with underlying union of $p$-groups $S$. We define a matrix $I_\mathcal{F}\in\mathbb{AF}_p(\freeO nS,\freeO n \mathcal{F})$ as follows:
\[
(I_\mathcal{F})_{\underline a,\underline b} = \begin{cases}
0 & \text{if $\underline a$ is not $\mathcal{F}$-conjugate to $\underline b$,}
\\ \zeta_{\underline a}^{\underline b} & \text{if $\underline b$ is the representative for the $\mathcal{F}$-conjugacy class of $\underline a$,}
\end{cases}
\]
where $(I_\mathcal{F})_{\underline a,\underline b}$ is an element of $\mathbb{AF}_p(C_S(\underline a),C_\mathcal{F}(\underline b))$.
Next we define a matrix $T_\mathcal{F}\in \mathbb{AF}_p(\freeO n\mathcal{F}, \freeO nS)$ by the formula
\[
(T_\mathcal{F})_{\underline a,\underline b} = \lc{\underline a}(\omega_\mathcal{F})^{\underline b}\in \mathbb{AF}_p(C_\mathcal{F}(\underline a),C_S(\underline b)).
\]
It follows that $(T_\mathcal{F})_{\underline a,\underline b}$ is zero unless $\underline b$ is in the $\mathcal{F}$-conjugacy class represented by $\underline a$. Furthermore, $(T_\mathcal{F})_{\underline a,\underline b} = \freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}$ by \cite[Proposition 3.15]{RSS_Bold1}, so we can obtain $T_\mathcal{F}$ from $\freeO n((\omega_\mathcal{F})_S^S)$ by deleting all rows not belonging to the chosen representatives $\underline a$ for the $\mathcal{F}$-conjugacy classes of commuting $n$-tuples in $S$.
\end{definition}
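As a quick sanity check of Definition \ref{defLnFRetractOfLnS} (not needed in what follows), take $\mathcal{F}$ to be the trivial fusion system $\mathcal{F}_S(S)$ of a $p$-group $S$, whose characteristic idempotent is the identity element $[S,\id]_S^S$. Then $\mathcal{F}$-conjugacy coincides with $S$-conjugacy, so $\cntuples n\mathcal{F}=\cntuples nS$ with the same chosen representatives, and both matrices are diagonal with entries
\[
(I_{\mathcal{F}_S(S)})_{\underline a,\underline a} = \zeta_{\underline a}^{\underline a}, \qquad
(T_{\mathcal{F}_S(S)})_{\underline a,\underline a} = \lc{\underline a}([S,\id]_S^S)^{\underline a} = [C_S(\underline a),\id]_{C_S(\underline a)}^{C_S(\underline a)},
\]
each of which is the identity on $C_{\mathcal{F}_S(S)}(\underline a)$. This agrees with the functoriality of $\freeO n$ applied to $\id_S$.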
\begin{remark}\label{remarkTransferMapWellDefined}
In order for $T_\mathcal{F}$ to be well-defined, we need $\lc{\underline a}(\omega_\mathcal{F})^{\underline b}$ to be left $C_\mathcal{F}(\underline a)$-stable. This is a consequence of the fact that $\omega_\mathcal{F}$ is $\mathcal{F}$-stable and can be seen as follows:
Suppose $P\leq C_S(\underline a)$ and $\varphi\in C_\mathcal{F}(\underline a)(P,C_S(\underline a))$. Let $\gen{\underline a}P$ be the subgroup of $S$ generated by $\underline a$ and $P$. By definition of the centralizer fusion system, $\varphi$ admits an extension $\tilde\varphi\colon \gen{\underline a}P \to C_S(\underline a)$ inside $C_\mathcal{F}(\underline a)$ with the property that $\tilde\varphi(\underline a)=\underline a$.
If we restrict $\lc{\underline a}(\omega_\mathcal{F})^{\underline b}$ along $\varphi$, we find that
\begin{align*}
[P,\varphi]_P^{C_S(\underline a)}\mathbin{\odot} \lc{\underline a}(\omega_\mathcal{F})^{\underline b} &= [P,\incl]_P^{\gen{\underline a}P}\mathbin{\odot} [\gen{\underline a}P,\tilde\varphi]_{\gen{\underline a}P}^{C_S(\underline a)}\mathbin{\odot} \lc{\underline a}(\omega_\mathcal{F})^{\underline b}
\\ &= [P,\incl]_P^{\gen{\underline a}P}\mathbin{\odot} \lc{\underline a}( [\gen{\underline a}P,\tilde\varphi]_{\gen{\underline a}P}^{S}\mathbin{\odot} \omega_\mathcal{F})^{\underline b}
\\ &= [P,\incl]_P^{\gen{\underline a}P}\mathbin{\odot} \lc{\underline a}( [\gen{\underline a}P,\incl]_{\gen{\underline a}P}^{S}\mathbin{\odot} \omega_\mathcal{F})^{\underline b} \text{ by left $\mathcal{F}$-stability of $\omega_\mathcal{F}$}
\\ &= [P,\incl]_P^{\gen{\underline a}P}\mathbin{\odot} [\gen{\underline a}P,\incl]_{\gen{\underline a}P}^{C_S(\underline a)}\mathbin{\odot} \lc{\underline a}(\omega_\mathcal{F})^{\underline b}
\\ &= [P,\incl]_P^{C_S(\underline a)}\mathbin{\odot} \lc{\underline a}(\omega_\mathcal{F})^{\underline b}.
\end{align*}
We conclude that $\lc{\underline a}(\omega_\mathcal{F})^{\underline b}$ is left $C_\mathcal{F}(\underline a)$-stable and an element of $\mathbb{AF}_p(C_\mathcal{F}(\underline a),C_S(\underline b))$ as required.
\end{remark}
We claim that $I_\mathcal{F}$ and $T_\mathcal{F}$ express $\freeO n \mathcal{F}$ as a retract of $\freeO n S$ and that $I_\mathcal{F}\mathbin{\odot} T_\mathcal{F}=\freeO n((\omega_\mathcal{F})_S^S)$. The first claim is proved as Proposition \ref{propFreeLoopRetractIdentity} below. The second claim is an easy consequence of the $\mathcal{F}$-stability of $\omega_\mathcal{F}$, so we prove it first.
\begin{lemma}\label{lemmaIdempotentOfRetract}
The matrices $I_\mathcal{F}$ and $T_\mathcal{F}$ satisfy
\[I_\mathcal{F}\mathbin{\odot} T_\mathcal{F} = \freeO n((\omega_\mathcal{F})_S^S)\in \mathbb{AF}_p(\freeO n S,\freeO nS).\]
\end{lemma}
\begin{proof}
Recall from \cite[Proposition 3.15]{RSS_Bold1} that $\freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b} = \lc{\underline a}(\omega_\mathcal{F})^{\underline b}$. Since $\omega_\mathcal{F}$ is $\mathcal{F}$-generated (Definition \ref{defFCharacteristic}), we have $\freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}=0$ unless $\underline a$ and $\underline b$ are $\mathcal{F}$-conjugate.
By construction, we also have $(I_\mathcal{F}\mathbin{\odot} T_\mathcal{F})_{\underline a,\underline b}=0$ unless $\underline a$ and $\underline b$ are $\mathcal{F}$-conjugate, in which case
\[
(I_\mathcal{F}\mathbin{\odot} T_\mathcal{F})_{\underline a,\underline b} = (I_{\mathcal{F}})_{\underline a,\underline c} \mathbin{\odot} (T_\mathcal{F})_{\underline c,\underline b} = \zeta_{\underline a}^{\underline c} \mathbin{\odot} \lc{\underline c}(\omega_\mathcal{F})^{\underline b},
\]
where $\underline c$ is the chosen representative for the $\mathcal{F}$-conjugacy class of $\underline a$ and $\underline b$.
By Lemma \ref{lemmaCentralizerInclusion}, $\zeta_{\underline a}^{\underline c}=[C_S(\underline a),\varphi]_{C_S(\underline a)}^{C_\mathcal{F}(\underline c)}$ for any $\varphi\colon C_S(\underline a)\to C_S(\underline c)$ in $\mathcal{F}$ such that $\varphi(\underline a)=\underline c$. Precomposing with $[C_S(\underline a),\varphi]$ is the same as the restriction $\Res_\varphi$ along $\varphi$. It follows that
\[
\zeta_{\underline a}^{\underline c} \mathbin{\odot} \lc{\underline c}(\omega_\mathcal{F})^{\underline b} = \lc{\underline a}(\Res_\varphi \omega_\mathcal{F})^{\underline b} = \lc{\underline a}(\Res_{C_S(\underline a)}^S \omega_\mathcal{F})^{\underline b} = \lc{\underline a}(\omega_\mathcal{F})^{\underline b},
\]
since the restrictions $\Res_\varphi \omega_\mathcal{F}$ and $\Res_{C_S(\underline a)}^S \omega_\mathcal{F}$ are equal in $\mathbb{AF}_p(C_S(\underline a),S)$ by left $\mathcal{F}$-stability of $\omega_\mathcal{F}$.
We conclude that $(I_\mathcal{F}\mathbin{\odot} T_\mathcal{F})_{\underline a,\underline b}= \freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}$ for all commuting $n$-tuples $\underline a$ and $\underline b$.
\end{proof}
\begin{prop}\label{propRetractOfSimpleLoopFunctor}
Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusoids with underlying unions of $p$-groups $R$ and $S$ respectively. Given $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$, we can apply $\freeO n$ to $X_R^S\in \mathbb{AF}_p(R,S)$ and obtain a matrix of virtual bisets $\freeO n (X_R^S)\in \mathbb{AF}_p(\freeO n R,\freeO n S)$. Precomposing with $T_\mathcal{E}$ and postcomposing with $I_\mathcal{F}$ then gives us a matrix in $\mathbb{AF}_p(\freeO n \mathcal{E},\freeO n\mathcal{F})$ with entries
\[
(T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S) \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline b} =\sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} (\freeO n(X_R^S))_{\underline a,\underline b'} \mathbin{\odot} \zeta_{\underline b'}^{\underline b} = \sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} \lc{\underline a}X^{\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b}.
\]
\end{prop}
\begin{proof}
We first calculate the entries of $T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S)$. Suppose $\underline a$ represents an $\mathcal{E}$-conjugacy class of $n$-tuples in $R$ and $\underline c$ represents an $S$-conjugacy class of $n$-tuples in $S$.
We have
\[
(T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S))_{\underline a,\underline c} = \sum_{[\underline d]\in\cntuples n R} (T_\mathcal{E})_{\underline a,\underline d}\mathbin{\odot} \freeO n(X_R^S)_{\underline d,\underline c}.
\]
By Definition \ref{defLnFRetractOfLnS}, $(T_\mathcal{E})_{\underline a,\underline d} = \freeO n((\omega_\mathcal{E})_R^R)_{\underline a,\underline d}$. We plug this in above to get
\begin{multline*}
(T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S))_{\underline a,\underline c} = \sum_{[\underline d]\in\cntuples n R} \freeO n((\omega_\mathcal{E})_R^R)_{\underline a,\underline d}\mathbin{\odot} \freeO n(X_R^S)_{\underline d,\underline c}
\\= \freeO n((\omega_\mathcal{E}\mathbin{\odot} X)_R^S)_{\underline a,\underline c} = (\freeO n(X_R^S))_{\underline a,\underline c},
\end{multline*}
making use of the fact that $X$ is left $\mathcal{E}$-stable. From here we can easily calculate $(T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S) \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline b}$, if we recall that $(\freeO n(X_R^S))_{\underline a,\underline c} = \lc{\underline a} X^{\underline c}$ by \cite[Proposition 3.15]{RSS_Bold1}. We have
\begin{multline*}
(T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S) \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline b} = \sum_{[\underline c]\in\cntuples n S} (\freeO n(X_R^S))_{\underline a,\underline c} \mathbin{\odot} (I_\mathcal{F})_{\underline c,\underline b}
\\ =\sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} (\freeO n(X_R^S))_{\underline a,\underline b'} \mathbin{\odot} \zeta_{\underline b'}^{\underline b} = \sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} \lc{\underline a}X^{\underline b'} \mathbin{\odot} \zeta_{\underline b'}^{\underline b}.\qedhere
\end{multline*}
\end{proof}
\begin{prop}\label{propFreeLoopRetractIdentity}
Let $\mathcal{F}$ be a saturated fusoid. Then $T_\mathcal{F}\mathbin{\odot} I_\mathcal{F}$ is the identity in $\mathbb{AF}_p(\freeO n\mathcal{F},\freeO n\mathcal{F})$, i.e. the diagonal matrix with diagonal entries $\omega_{C_\mathcal{F}(\underline a)}$ for $[\underline a]\in \cntuples n\mathcal{F}$.
\end{prop}
\begin{proof}
First note that
\[T_\mathcal{F} = T_\mathcal{F} \mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S).\]
As in the proof of Proposition \ref{propRetractOfSimpleLoopFunctor}, this is an easy consequence of Definition \ref{defLnFRetractOfLnS}:
\begin{multline*}
\bigl(T_\mathcal{F}\mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S)\bigr)_{\underline a,\underline b} = \sum_{[\underline c]\in\cntuples n S} \freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline c}\mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S)_{\underline c,\underline b}
\\= \freeO n((\omega_\mathcal{F}\mathbin{\odot} \omega_\mathcal{F})_S^S)_{\underline a,\underline b}
= \freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b} = (T_\mathcal{F})_{\underline a,\underline b}.
\end{multline*}
We now apply Proposition \ref{propRetractOfSimpleLoopFunctor} with $X=\omega_\mathcal{F}\in\mathbb{AF}_p(S,S)$:
\[(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline b} = (T_\mathcal{F} \mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S)\mathbin{\odot} I_\mathcal{F})_{\underline a,\underline b} = \sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} \lc{\underline a}(\omega_\mathcal{F})^{\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b}.
\]
Since $\omega_\mathcal{F}$ is $\mathcal{F}$-generated, $\lc{\underline a}(\omega_\mathcal{F})^{\underline b'} = 0$ unless $\underline a$ is $\mathcal{F}$-conjugate to $\underline b'$, and hence to $\underline b$. Since both $\underline a$ and $\underline b$ are the chosen representatives of their $\mathcal{F}$-conjugacy classes, every summand vanishes unless $\underline a=\underline b$.
Thus $T_\mathcal{F} \mathbin{\odot} I_\mathcal{F}\in \mathbb{AF}_p(\freeO n\mathcal{F},\freeO n\mathcal{F})$ is a diagonal matrix. It remains to show that each diagonal entry $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is the characteristic idempotent for $C_\mathcal{F}(\underline a)$.
We will prove that $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ has all of the properties of Definition \ref{defFCharacteristic} and is idempotent.
A direct application of Lemma \ref{lemmaIdempotentOfRetract} gives
\[T_\mathcal{F} \mathbin{\odot} I_\mathcal{F} \mathbin{\odot} T_\mathcal{F} \mathbin{\odot} I_\mathcal{F} = T_\mathcal{F} \mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S)\mathbin{\odot} I_\mathcal{F} = T_\mathcal{F} \mathbin{\odot} I_\mathcal{F}\]
so $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is idempotent.
To see that $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is $C_\mathcal{F}(\underline a)$-generated, consider the orbit decomposition of $\lc{\underline a}(\omega_\mathcal{F})^{\underline a'}$ as a virtual $(C_S(\underline a),C_S(\underline a'))$-biset for each $[\underline a']\in\cntuples n S$ with $\underline a'\sim_\mathcal{F} \underline a$. For an orbit $[P,\varphi]$, with $P\leq C_S(\underline a)$ and $\varphi\colon P\to C_S(\underline a')$, to be a summand of $\lc{\underline a}(\omega_\mathcal{F})^{\underline a'}$, we must have $\underline a\in P$ and $\varphi(\underline a)=\underline a'$.
Write $\zeta_{\underline a'}^{\underline a} = [C_S(\underline a'),\rho]_{C_S(\underline a')}^{C_S(\underline a)}\mathbin{\odot} \omega_{C_\mathcal{F}(\underline a)}$, where $\rho\colon C_S(\underline a')\to C_S(\underline a)$ is such that $\rho(\underline a')=\underline a$. Then we have
\[
[P,\varphi]_{C_S(\underline a)}^{C_S(\underline a')}\mathbin{\odot} \zeta_{\underline a'}^{\underline a} = \Bigl( [P,\varphi]_{C_S(\underline a)}^{C_S(\underline a')} \mathbin{\odot} [C_S(\underline a'),\rho]_{C_S(\underline a')}^{C_S(\underline a)} \Bigr) \mathbin{\odot} \omega_{C_\mathcal{F}(\underline a)} = [P,\rho\circ \varphi]_{C_S(\underline a)}^{C_S(\underline a)} \mathbin{\odot} \omega_{C_\mathcal{F}(\underline a)}.
\]
Now $\rho(\varphi(\underline a))=\underline a$, so $\rho\circ \varphi$ is a morphism in $C_\mathcal{F}(\underline a)$. Hence $[P,\rho\circ \varphi]$ is $C_\mathcal{F}(\underline a)$-generated, and $\omega_{C_\mathcal{F}(\underline a)}$ is $C_\mathcal{F}(\underline a)$-generated by definition of being the characteristic idempotent. Consequently the composite $[P,\rho\circ \varphi] \mathbin{\odot} \omega_{C_\mathcal{F}(\underline a)}$ is $C_\mathcal{F}(\underline a)$-generated as well. The diagonal entry $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is thus a linear combination of $C_\mathcal{F}(\underline a)$-generated elements and therefore $C_\mathcal{F}(\underline a)$-generated.
Next, $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is right $C_\mathcal{F}(\underline a)$-stable because each $\zeta_{\underline a'}^{\underline a}\in\mathbb{AF}_p(C_S(\underline a'),C_\mathcal{F}(\underline a))$ is right $C_\mathcal{F}(\underline a)$-stable. Similarly, $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is left $C_\mathcal{F}(\underline a)$-stable because each $\lc{\underline a}(\omega_\mathcal{F})^{\underline a'}$ is left $C_\mathcal{F}(\underline a)$-stable by Remark \ref{remarkTransferMapWellDefined}.
Finally, we need to show that $\abs{(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}}/\abs{C_S(\underline a)}$ is invertible in $\mathbb{Z}_p$, i.e. is not divisible by $p$.
According to \cite{ReehIdempotent}*{Theorem B}, we have
\[\abs{\lc{\underline a}(\omega_\mathcal{F})^{\underline a'}} = \frac{\abs S}{\abs{\mathcal{F}(\gen{\underline a},S)}}\in \mathbb{Z}_p,\]
and $\abs{\mathcal{F}(\gen{\underline a},S)}$ is simply the total number of $n$-tuples that are $\mathcal{F}$-conjugate to $\underline a$: a morphism $\gen{\underline a}\to S$ in $\mathcal{F}$ is determined by the images of the entries of $\underline a$, and these images form exactly such a tuple.
Since $\omega_{C_\mathcal{F}(\underline a)}$ is idempotent, $\abs{\omega_{C_\mathcal{F}(\underline a)}}/\abs{C_S(\underline a)}=1$ (alternatively just apply \cite{ReehIdempotent}*{Theorem B} to the fixed points for $\omega_{C_\mathcal{F}(\underline a)}$ with respect to the trivial subgroup of $C_S(\underline a)$).
Hence we see that
\[
\frac{\abs{\zeta_{\underline a'}^{\underline a}}}{\abs{C_S(\underline a)}} = \frac{\abs{[C_S(\underline a'),\rho]_{C_S(\underline a')}^{C_S(\underline a)}}}{\abs{C_S(\underline a)}}\cdot \frac{\abs{\omega_{C_\mathcal{F}(\underline a)}}}{\abs{C_S(\underline a)}}
= 1 \cdot 1 = 1.
\]
Putting the pieces together, we find that
\begin{align*}
\frac{\abs{(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}}}{\abs{C_S(\underline a)}} &= \sum_{\substack{[\underline a']\in \cntuples nS \\ \underline a' \sim_\mathcal{F} \underline a}} \frac{\abs{\lc{\underline a}(\omega_\mathcal{F})^{\underline a'}}}{\abs{C_S(\underline a')}}\cdot\frac{\abs{ \zeta_{\underline a'}^{\underline a}}}{\abs{C_S(\underline a)}}
\\ &= \sum_{\substack{[\underline a']\in \cntuples nS \\ \underline a' \sim_\mathcal{F} \underline a}} \frac{\abs{S}}{\abs{C_S(\underline a')}\cdot\abs{\mathcal{F}(\gen{\underline a},S)}}\cdot 1
\\ &= \frac 1{\abs{\mathcal{F}(\gen{\underline a},S)}} \sum_{\substack{[\underline a']\in \cntuples nS \\ \underline a' \sim_\mathcal{F} \underline a}} \frac{\abs{S}}{\abs{C_S(\underline a')}}
\\ &= \frac 1{\abs{\mathcal{F}(\gen{\underline a},S)}} \sum_{\substack{[\underline a']\in \cntuples nS \\ \underline a' \sim_\mathcal{F} \underline a}} \abs{[\underline a']}
\\ &= \frac 1{\abs{\mathcal{F}(\gen{\underline a},S)}} \cdot \abs{\{\text{$n$-tuples $\underline a'$ in $S$}\mid \underline a'\sim_{\mathcal{F}}\underline a\}}
\\ &= 1.
\end{align*}
This completes the proof that $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}$ is in fact $C_\mathcal{F}(\underline a)$-characteristic, hence by uniqueness of characteristic idempotents $(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}=\omega_{C_\mathcal{F}(\underline a)}$ as required.
\end{proof}
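To see the final count in the simplest situation, suppose for the moment that $S$ is abelian. Then $C_S(\underline a')=S$ for every commuting $n$-tuple, every $S$-conjugacy class $[\underline a']$ is a singleton, and the computation above collapses to
\[
\frac{\abs{(T_\mathcal{F} \mathbin{\odot} I_\mathcal{F})_{\underline a,\underline a}}}{\abs{C_S(\underline a)}}
= \frac{\abs{\{\text{$n$-tuples $\underline a'$ in $S$}\mid \underline a'\sim_\mathcal{F}\underline a\}}}{\abs{\mathcal{F}(\gen{\underline a},S)}} = 1,
\]
with the last equality holding because $\abs{\mathcal{F}(\gen{\underline a},S)}$ counts exactly the tuples $\mathcal{F}$-conjugate to $\underline a$.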
\begin{cor}\label{corFreeLoopTelescopeEquiv}
Let $\mathcal{F}$ be a saturated fusoid over a union of $p$-groups $S$. Then $I_\mathcal{F}$ and $T_\mathcal{F}$ induce inverse equivalences
\[\tel_{\freeO n(\omega_\mathcal{F})} \simeq \hat\Sigma^{\infty}_{+} B\freeO n \mathcal{F}\]
in $\Ho(\mathord{\mathrm{Sp}}_{p})$.
\end{cor}
\begin{proof}
By Lemma \ref{lemmaIdempotentOfRetract} and Proposition \ref{propFreeLoopRetractIdentity} the matrices $I_\mathcal{F}$ and $T_\mathcal{F}$ induce maps between the towers
\[\hat\Sigma^{\infty}_{+} B\freeO n S \xrightarrow{\freeO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B\freeO n S \xrightarrow{\freeO n(\omega_\mathcal{F})} \dotsb\]
and
\[\hat\Sigma^{\infty}_{+} B\freeO n\mathcal{F} \xrightarrow{\id} \hat\Sigma^{\infty}_{+} B\freeO n\mathcal{F}\xrightarrow{\id} \dotsb.\]
The composite $T_\mathcal{F}\mathbin{\odot} I_\mathcal{F}$ is simply the identity on the constant tower $\hat\Sigma^{\infty}_{+} B\freeO n \mathcal{F}$. The composite $I_\mathcal{F}\mathbin{\odot} T_\mathcal{F}$ applies $\freeO n (\omega_\mathcal{F})$ levelwise to the tower $\hat\Sigma^{\infty}_{+} B\freeO n S \xrightarrow{\freeO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B\freeO n S \xrightarrow{\freeO n(\omega_\mathcal{F})} \dotsb$, and this levelwise map induces a self-map of the colimit $\tel_{\freeO n(\omega_\mathcal{F})}$ that is homotopic to the identity.
\end{proof}
\begin{prop}\label{propSimpleFusionLoop}
The functor $\freeO n$ on unions of $p$-groups extends to a functor $\freeO n\colon \mathbb{AF}_p\to \mathbb{AF}_p$ given on objects by $\mathcal{F}\mapsto \freeO n \mathcal{F}$ and on morphisms $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ by the matrix with entries
\[\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} =\sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} (\freeO n(X_R^S))_{\underline a,\underline b'} \mathbin{\odot} \zeta_{\underline b'}^{\underline b} = \sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} \lc{\underline a}X^{\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b} \qquad\in \mathbb{AF}_p(C_\mathcal{E}(\underline a),C_\mathcal{F}(\underline b)),\]
where $R$ and $S$ are the underlying unions of $p$-groups for $\mathcal{E}$ and $\mathcal{F}$ respectively.
\end{prop}
\begin{remark} \label{rem:idempotents}
By Proposition \ref{propRetractOfSimpleLoopFunctor}, we then have $\freeO n(X_\mathcal{E}^\mathcal{F}) = T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S) \mathbin{\odot} I_\mathcal{F}$. In particular, we have $I_\mathcal{F}=\freeO n((\omega_\mathcal{F})_S^\mathcal{F})$ and $T_\mathcal{F}=\freeO n((\omega_\mathcal{F})_\mathcal{F}^S)$ for any saturated fusoid $\mathcal{F}$ over $S$.
\end{remark}
\begin{proof}
Proposition \ref{propFreeLoopRetractIdentity} states that $\freeO n$ takes the identity $\omega_\mathcal{F}$ on $\mathcal{F}$ to the identity on $\freeO n\mathcal{F}$.
Let $\mathcal{E}$, $\mathcal{F}$, and $\mathcal{G}$ be saturated fusoids over $R$, $S$, and $T$, respectively. Suppose $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ and $Y\in \mathbb{AF}_p(\mathcal{F},\mathcal{G})$. As in Remark \ref{rem:idempotents}, we have
\[\freeO n(X_\mathcal{E}^\mathcal{F}) = T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S) \mathbin{\odot} I_\mathcal{F}.\]
It follows that
\begin{align*}
\freeO n(X_\mathcal{E}^\mathcal{F})\mathbin{\odot} \freeO n(Y_\mathcal{F}^\mathcal{G}) &= T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S)\mathbin{\odot} I_\mathcal{F}\mathbin{\odot} T_\mathcal{F} \mathbin{\odot} \freeO n(Y_S^T) \mathbin{\odot} I_\mathcal{G}
\\ &= T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S)\mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^S) \mathbin{\odot} \freeO n(Y_S^T) \mathbin{\odot} I_\mathcal{G}
\\ &= T_\mathcal{E}\mathbin{\odot} \freeO n(X_R^S\mathbin{\odot} (\omega_\mathcal{F})_S^S \mathbin{\odot} Y_S^T) \mathbin{\odot} I_\mathcal{G}
\\ &= T_\mathcal{E}\mathbin{\odot} \freeO n((X\mathbin{\odot} Y)_R^T) \mathbin{\odot} I_\mathcal{G}
\\ &= \freeO n((X\mathbin{\odot} Y)_\mathcal{E}^\mathcal{G}).
\end{align*}
The characteristic idempotent $\omega_\mathcal{F}$ disappears from the middle because both $X$ and $Y$ are $\mathcal{F}$-stable (in fact, either one alone would suffice). Thus $\freeO n$ preserves composition of virtual bisets between fusoids.
\end{proof}
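As an illustration of the extended functor, consider the case $n=0$, with the natural conventions for $0$-tuples (the empty tuple $()$ is the unique commuting $0$-tuple, $C_S(())=S$, $C_\mathcal{F}(())=\mathcal{F}$, and $\lc{()}X^{()}=X$). The matrices above are then $1\times 1$, $\zeta_{()}^{()}$ is the identity $\omega_\mathcal{F}$ of $\mathcal{F}$ in $\mathbb{AF}_p$, and the formula reduces to
\[\freeO 0(X_\mathcal{E}^\mathcal{F}) = X\mathbin{\odot} \zeta_{()}^{()} = X\mathbin{\odot} \omega_\mathcal{F} = X,\]
so $\freeO 0$ is the identity functor on $\mathbb{AF}_p$, as one expects from a free loop construction with no loop coordinates.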
Similarly to \cite[Corollary 3.18]{RSS_Bold1}, there is also a formula for $\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}$ in terms of the restriction of $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ to the centralizer $C_\mathcal{E}(\underline a)$. We state the formula for fusion systems; it can easily be applied component-wise in the general case.
\begin{cor}\label{corSimpleFusionLoopOrbits}
Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusion systems over $p$-groups $R$ and $S$ respectively, and suppose $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ is a virtual biset. Furthermore, let $\underline a$ in $\mathcal{E}$ and $\underline b$ in $\mathcal{F}$ be chosen representatives for conjugacy classes of commuting $n$-tuples (according to Convention \ref{conventionTupleReps}). Consider the restriction of $X$ to the centralizer fusion system $C_\mathcal{E}(\underline a)$, and write $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ as a linear combination of basis elements (recalling Convention \ref{conventionFusionOrbitDecomposition}):
\[
X_{C_\mathcal{E}(\underline a)}^\mathcal{F} =\sum_{(P,\varphi)}c_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F},
\]
where $P\leq C_R(\underline a)$ and $\varphi\colon P\to S$.
The matrix entry $\freeO n(X)_{\underline a,\underline b}$ then satisfies the formula
\[
\freeO n(X)_{\underline a,\underline b} = \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}c_{P,\varphi}\cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b},
\]
with $P$, $\varphi$, and $c_{P,\varphi}$ as in the linear combination above.
\end{cor}
\begin{proof}
Instead of writing $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ as a linear combination of basis elements in $\mathbb{AF}_p(C_\mathcal{E}(\underline a),\mathcal{F})$, consider $X$ as an element of $\mathbb{AF}_p(C_R(\underline a),S)$:
\[
X_{C_R(\underline a)}^S = \sum_{(P,\varphi)}u_{P,\varphi} \cdot [P,\varphi]_{C_R(\underline a)}^S,
\]
with a (possibly) different collection of coefficients $u_{P,\varphi}$.
If we precompose with $\omega_{C_\mathcal{E}(\underline a)}$ and postcompose with $\omega_\mathcal{F}$, the linear combination above becomes
\[
X_{C_\mathcal{E}(\underline a)}^\mathcal{F} = \omega_{C_\mathcal{E}(\underline a)}\mathbin{\odot} X_{C_R(\underline a)}^S \mathbin{\odot} \omega_{\mathcal{F}} = \sum_{(P,\varphi)}u_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}.
\]
Thus the coefficients $u_{P,\varphi}$ and $c_{P,\varphi}$ provide different choices for the decomposition in $\mathbb{AF}_p(C_\mathcal{E}(\underline a),\mathcal{F})$, but only $u_{P,\varphi}$ provides a decomposition in $\mathbb{AF}_p(C_R(\underline a),S)$. We shall start by proving that the formula for $\freeO n(X)_{\underline a,\underline b}$ in the statement of the corollary is independent of the choice of linear combination for $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$.
Two basis elements $[P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ and $[Q,\psi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ are equal if and only if $Q$ is $C_\mathcal{E}(\underline a)$-isomorphic to $P$ and $\psi$ arises from $\varphi$ by precomposing with a map in $C_\mathcal{E}(\underline a)$ and postcomposing with a map in $\mathcal{F}$.
The total coefficient in front of a particular basis element $[P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ in the linear combinations for $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ is therefore given by the two sums
\[
\sum_{\text{$(P',\varphi')$, $(C_\mathcal{E}(\underline a),\mathcal{F})$-conj. to $(P,\varphi)$}} c_{P',\varphi'} \qquad\text{and}\qquad \sum_{\text{$(P',\varphi')$, $(C_\mathcal{E}(\underline a),\mathcal{F})$-conj. to $(P,\varphi)$}} u_{P',\varphi'}.
\]
Hence these two coefficient sums must be equal.
Suppose $(P',\varphi')$ is $(C_\mathcal{E}(\underline a),\mathcal{F})$-conjugate to a particular $(P,\varphi)$, and suppose further that $\underline a\in P$ and $\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$. Since $P'$ is isomorphic to $P$ in $C_\mathcal{E}(\underline a)$, we also have $\underline a\in P'$. Write $\varphi'=\gamma\circ \varphi\circ \alpha$ with $\alpha\in C_\mathcal{E}(\underline a)$ and $\gamma\in \mathcal{F}$; then $\alpha(\underline a)=\underline a$, and $\varphi'(\underline a)=\gamma(\varphi(\underline a))$ is $\mathcal{F}$-conjugate to $\varphi(\underline a)$ and hence to $\underline b$.
Let $\zeta_{\varphi'(\underline a)}^{\underline b}=[C_S(\varphi'(\underline a)),\rho]_{C_S(\varphi'(\underline a))}^{C_\mathcal{F}(\underline b)}$. Then $\rho\circ\gamma\colon \varphi(P) \to C_S(\underline b)$ extends to a morphism $\widetilde{\rho\circ\gamma}\colon C_S(\varphi(\underline a))\to C_S(\underline b)$ with $[C_S(\varphi(\underline a)), \widetilde{\rho\circ\gamma}]_{C_S(\varphi(\underline a))}^{C_\mathcal{F}(\underline b)}=\zeta_{\varphi(\underline a)}^{\underline b}$. We next have
\begin{multline*}
[P,\varphi']_{C_\mathcal{E}(\underline a)}^{C_S(\varphi'(\underline a))}\mathbin{\odot} \zeta_{\varphi'(\underline a)}^{\underline b} = [P,\gamma\circ \varphi\circ \alpha]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi'(\underline a))}\mathbin{\odot} \zeta_{\varphi'(\underline a)}^{\underline b} \\= [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} [C_S(\varphi(\underline a)), \widetilde{\rho\circ\gamma}]_{C_S(\varphi(\underline a))}^{C_\mathcal{F}(\underline b)} = [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}.
\end{multline*}
The pairs $(P',\varphi')$ and $(P,\varphi)$ therefore give the same contribution to the formula in the statement of the corollary whenever the pairs are $(C_\mathcal{E}(\underline a),\mathcal{F})$-conjugate. We conclude that
\begin{multline}\label{eqChangeOfCoefficientsForLinearComb}
\sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}c_{P,\varphi}\cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b} \\= \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}u_{P,\varphi}\cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}.
\end{multline}
Hence it is sufficient to prove the corollary with the coefficients $u_{P,\varphi}$.
By Proposition \ref{propSimpleFusionLoop}, we can write $\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}$ as
\begin{align*}
\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} &= \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \freeO n(X_R^S)_{\underline a,\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b}.
\end{align*}
We can then apply \cite[Corollary 3.18]{RSS_Bold1} with the linear combination for $X_{C_R(\underline a)}^S$ given by the coefficients $u_{P,\varphi}$:
\begin{align*}
\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} &= \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \freeO n(X_R^S)_{\underline a,\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b}
\\ &= \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $S$-conjugate to $\underline b'$}}}u_{P,\varphi}\cdot [P,\varphi]_{C_R(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b'}\mathbin{\odot} \zeta_{\underline b'}^{\underline b}
\\&= \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}u_{P,\varphi}\cdot [P,\varphi]_{C_R(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}.
\end{align*}
Next, note that $\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p(C_\mathcal{E}(\underline a),C_\mathcal{F}(\underline b))$ is left $C_\mathcal{E}(\underline a)$-stable and as such does not change if we precompose with $\omega_{C_\mathcal{E}(\underline a)}$:
\begin{align*}
\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} &= \omega_{C_\mathcal{E}(\underline a)}\mathbin{\odot} \freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}
\\&= \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}u_{P,\varphi}\cdot \omega_{C_\mathcal{E}(\underline a)}\mathbin{\odot} [P,\varphi]_{C_R(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}
\\&= \sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}u_{P,\varphi}\cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}.
\end{align*}
Finally, by \eqref{eqChangeOfCoefficientsForLinearComb} we can replace the coefficients $u_{P,\varphi}$ with the coefficients $c_{P,\varphi}$ to get the formula in the corollary:
\[\freeO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}=\sum_{\substack{(P,\varphi)\text{ s.t. $\underline a\in P$ and}\\\text{$\varphi(\underline a)$ is $\mathcal{F}$-conjugate to $\underline b$}}}c_{P,\varphi}\cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}.\qedhere\]
\end{proof}
\section{Evaluation maps and $\twistO n$ for fusion systems}\label{secFusionEvaluation}
Given a saturated fusoid $\mathcal{F}$ over a union of $p$-groups $S$, we can apply $\twistO n$ to the characteristic idempotent $\omega_\mathcal{F}\in \mathbb{AF}_p(S,S)$ to get an idempotent endomorphism of $(\mathbb{Z}/p^e)^n\times \freeO n S$ in $\mathbb{AF}_p$.
As in Section \ref{secFusionFreeLoopFunctor}, we let $\tel_{\twistO n(\omega_\mathcal{F})}$ denote the mapping telescope
\[\tel_{\twistO n(\omega_\mathcal{F})} = \colim(\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S)\xrightarrow{\twistO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S) \xrightarrow{\twistO n(\omega_\mathcal{F})} \dotsb).\]
This is a retract of $\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S)$.
Our next step is to prove that $\tel_{\twistO n(\omega_\mathcal{F})}$ is equivalent to $\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n)\wedge \tel_{\freeO n(\omega_\mathcal{F})}$ and hence to $\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F})$ via Corollary \ref{corFreeLoopTelescopeEquiv}.
In order to produce this equivalence, we first need a technical lemma about the characteristic idempotent $\omega_\mathcal{F}$, followed by a proposition relating the idempotents $\twistO n(\omega_\mathcal{F})$ and $\freeO n(\omega_\mathcal{F})$.
\begin{lemma}\label{lemmaIdempotentCoefficients}
Let $\mathcal{F}$ be a saturated fusion system over $S$, and let $\underline a$ be a commuting $n$-tuple in $S$. Write the characteristic idempotent $\omega_\mathcal{F}$, restricted to $C_S(\underline a)$, as a linear combination according to \cite[Convention 3.2]{RSS_Bold1}:
\[(\omega_\mathcal{F})_{C_S(\underline a)}^S = \sum_{\substack{R\leq C_S(\underline a)\\ \varphi\in\mathcal{F}( R, S)}} c_{R,\varphi}\cdot [R,\varphi]_{C_S(\underline a)}^S.\]
Then the coefficients $c_{R,\varphi}$ satisfy the following relation for each subgroup $R\leq C_S(\underline a)$:
\[
\sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in \mathcal{F}(R',S)}} c_{R',\varphi} = \begin{cases}1&\text{if $R=C_S(\underline a)$,}\\ 0&\text{otherwise.}\end{cases}
\]
\end{lemma}
\begin{proof}
The restriction $(\omega_\mathcal{F})_{C_S(\underline a)}^S$ equals $[C_S(\underline a),\incl]_{C_S(\underline a)}^S\mathbin{\odot} \omega_\mathcal{F}$.
According to \cite{ReehIdempotent}*{Corollary 5.13} the map $\pi\colon \mathbb{AF}_p(C_S(\underline a),S)\to \mathbb{AF}_p(C_S(\underline a),\mathcal{F})$ given by $X\mapsto X\mathbin{\odot} \omega_\mathcal{F}$ coincides with the map of Burnside rings $\pi'\colon A(C_S(\underline a)\times S)^\wedge_p \to A(C_S(\underline a)\times \mathcal{F})^\wedge_p$ given in \cite{ReehIdempotent}*{Theorem A} for the product fusion system $C_S(\underline a)\times \mathcal{F}$. We are particularly interested in
\[(\omega_\mathcal{F})_{C_S(\underline a)}^S = [C_S(\underline a),\incl]_{C_S(\underline a)}^S\mathbin{\odot} \omega_\mathcal{F} = \pi([C_S(\underline a),\incl]_{C_S(\underline a)}^S).\]
The result is now a consequence of \cite{ReehIdempotent}*{Remark 4.7}, which describes how the coefficients of an element $X\in A(C_S(\underline a)\times S)^\wedge_p$ relate to the coefficients of $\pi'(X)\in A(C_S(\underline a)\times \mathcal{F})^\wedge_p$. Since $\pi'$ for the product fusion system $C_S(\underline a)\times \mathcal{F}$ coincides with $\pi$ for bisets, we have the same relation between coefficients of $X\in \mathbb{AF}_p(C_S(\underline a),S)$ and $\pi(X)\in \mathbb{AF}_p(C_S(\underline a),\mathcal{F})$.
Let $c_{R',\varphi}(X)$ denote the coefficient of the orbit $[R',\varphi]_{C_S(\underline a)}^S$ in the decomposition of a general element $X\in \mathbb{AF}_p(C_S(\underline a),S)$.
\cite{ReehIdempotent}*{Remark 4.7} states that, since $(\omega_\mathcal{F})_{C_S(\underline a)}^S= \pi([C_S(\underline a),\incl]_{C_S(\underline a)}^S)$ is the image of a transitive biset, we have
\begin{multline*}
\sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in \mathcal{F}(R',S)}} c_{R',\varphi}(\pi([C_S(\underline a),\incl]_{C_S(\underline a)}^S)) \\= \sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in \mathcal{F}(R',S)}} c_{R',\varphi}([C_S(\underline a),\incl]_{C_S(\underline a)}^S) = \begin{cases}1&\text{if $R=C_S(\underline a)$,}\\ 0&\text{otherwise.}\end{cases}\qedhere
\end{multline*}
\end{proof}
\begin{prop}\label{propComposingLoopIdempotents}
Let $\mathcal{F}$ be a saturated fusion system over $S$, and let $n\geq 0$. The characteristic idempotent gives us two idempotent endomorphisms of $(\mathbb{Z}/p^e)^n\times \freeO nS$ coming from the functors $(\mathbb{Z}/p^e)^n\times \freeO n(-)$ and $\twistO n (-)$. The two resulting idempotents satisfy the following relations when composed with each other:
\begin{enumerate}
\renewcommand{\theenumi}{$(\roman{enumi})$}\renewcommand{\labelenumi}{\theenumi}
\item\label{itemIdempotentCompTwist} $\displaystyle \bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F}) = \twistO n(\omega_\mathcal{F})$
\item\label{itemIdempotentCompFree} $\displaystyle \twistO n(\omega_\mathcal{F})\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F}) \bigr)= (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})$.
\end{enumerate}
Both of these composites are taken in $\mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO nS, (\mathbb{Z}/p^e)^n\times \freeO nS)$.
\end{prop}
\begin{proof}
Let $\underline a$ and $\underline b$ be representatives for the conjugacy classes of commuting $n$-tuples in $S$. Restrict $\omega_\mathcal{F}$ to $C_S(\underline a)$ on the left and write
\[(\omega_\mathcal{F})_{C_S(\underline a)}^S = \sum_{\substack{R\leq C_S(\underline a)\\ \varphi\in\mathcal{F}( R, S)}} c_{R,\varphi}\cdot [R,\varphi]_{C_S(\underline a)}^S\]
as in Lemma \ref{lemmaIdempotentCoefficients}.
Let us first consider part \ref{itemIdempotentCompTwist} of the proposition. According to \cite[Corollary 3.18]{RSS_Bold1} the matrix $\freeO n(\omega_\mathcal{F})\in \mathbb{AF}_p(\freeO n S,\freeO nS)$ has entries
\[
\freeO n(\omega_\mathcal{F})_{\underline a,\underline c} = \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$ and}\\\text{$\varphi(\underline a)$ is $S$-conjugate to $\underline c$}}}c_{R,\varphi}\cdot [R,\varphi]_{C_S(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline c}
\]
for any representative $n$-tuple $\underline c$. Let $\psi_{\varphi(\underline a)}\colon C_S(\varphi(\underline a))\to C_S(\underline c)$ be any conjugation map in $S$ taking $\varphi(\underline a)$ to the representative of its conjugacy class. By the description of $\zeta_{\varphi(\underline a)}^{\underline c}$ in \cite[Lemma 3.12]{RSS_Bold1}, we then have
\[
\freeO n(\omega_\mathcal{F})_{\underline a,\underline c} = \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$ and}\\\text{$\varphi(\underline a)$ is $S$-conjugate to $\underline c$}}}c_{R,\varphi}\cdot [R,\psi_{\varphi(\underline a)}\circ \varphi]_{C_S(\underline a)}^{C_S(\underline c)}.
\]
When we compose $(\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})$ with $\twistO n(\omega_\mathcal{F})$, we take a sum over all conjugacy classes of commuting $n$-tuples:
\begin{align*}
&\Bigl(\bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})\Bigr)_{\underline a,\underline b}
\\={}& \sum_{[\underline c]\in \cntuples n S} \bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\underline a,\underline c}\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline c,\underline b}
\\={}& \sum_{[\underline c]\in \cntuples n S} \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$ and}\\\text{$\varphi(\underline a)$ is $S$-conjugate to $\underline c$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\psi_{\varphi(\underline a)}\circ \varphi]_{C_S(\underline a)}^{C_S(\underline c)}\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline c,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\psi_{\varphi(\underline a)}\circ \varphi]_{C_S(\underline a)}^{C_S(\psi_{\varphi(\underline a)}(\varphi(\underline a)))}\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\psi_{\varphi(\underline a)}(\varphi(\underline a)),\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\id]_{C_S(\underline a)}^R\bigr)
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times [R,\psi_{\varphi(\underline a)}\circ \varphi]_R^{C_S(\psi_{\varphi(\underline a)}(\varphi(\underline a)))}\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\psi_{\varphi(\underline a)}(\varphi(\underline a)),\underline b}.
\end{align*}
In the last line we have split $[R,\psi_{\varphi(\underline a)}\circ \varphi]_{C_S(\underline a)}^{C_S(\psi_{\varphi(\underline a)}(\varphi(\underline a)))}$ into its transfer and homomorphism parts. We can now apply \cite[Theorem 3.33.(iv)]{RSS_Bold1} to note that
\begin{multline*}
(\mathbb{Z}/p^e)^n\times [R,\psi_{\varphi(\underline a)}\circ \varphi]_R^{C_S(\psi_{\varphi(\underline a)}(\varphi(\underline a)))} \\= (\mathbb{Z}/p^e)^n\times \freeO n([R,\psi_{\varphi(\underline a)}\circ \varphi]_R^S)_{\underline a,\psi_{\varphi(\underline a)}(\varphi(\underline a))} = \twistO n([R,\psi_{\varphi(\underline a)}\circ \varphi]_R^S)_{\underline a,\psi_{\varphi(\underline a)}(\varphi(\underline a))}.
\end{multline*}
Functoriality of $\twistO n$ then gives us
\begin{align*}
&\Bigl(\bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})\Bigr)_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\id]_{C_S(\underline a)}^R\bigr)
\\* &\hspace{2.5cm}\mathbin{\odot} \twistO n([R,\psi_{\varphi(\underline a)}\circ \varphi]_R^S)_{\underline a,\psi_{\varphi(\underline a)}(\varphi(\underline a))}\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\psi_{\varphi(\underline a)}(\varphi(\underline a)),\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\id]_{C_S(\underline a)}^R\bigr)
\mathbin{\odot} \twistO n([R,\psi_{\varphi(\underline a)}\circ \varphi]_R^S\mathbin{\odot} \omega_\mathcal{F})_{\underline a,\underline b}.
\end{align*}
The characteristic idempotent $\omega_\mathcal{F}\in \mathbb{AF}_p(S,S)$ is $\mathcal{F}$-stable, so restricting $\omega_\mathcal{F}$ along the homomorphism $(\psi_{\varphi(\underline a)}\circ \varphi)\in \mathcal{F}(R,S)$ on the left is equivalent to restricting along the inclusion $\incl\colon R\to S$. This means that $[R,\psi_{\varphi(\underline a)}\circ \varphi]_R^S\mathbin{\odot} \omega_\mathcal{F} = [R,\incl]_R^S\mathbin{\odot} \omega_\mathcal{F}$
and we can run our intermediate calculations in reverse:
\begin{align*}
&\Bigl(\bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})\Bigr)_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\id]_{C_S(\underline a)}^R\bigr)
\mathbin{\odot} \twistO n([R,\incl]_R^S\mathbin{\odot} \omega_\mathcal{F})_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\id]_{C_S(\underline a)}^R\bigr)
\mathbin{\odot} \twistO n([R,\incl]_R^S)_{\underline a,\underline a}\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)\\\text{ s.t. $\underline a\in R$}}}c_{R,\varphi}\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\incl]_{C_S(\underline a)}^{C_S(\underline a)}\bigr)
\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\text{ up to $C_S(\underline a)$-conj.}\\\text{s.t. $\underline a\in R$}}} \biggl(\sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in\mathcal{F}(R',S)}} c_{R',\varphi} \biggr)\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\incl]_{C_S(\underline a)}^{C_S(\underline a)}\bigr)
\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}.
\end{align*}
Lemma \ref{lemmaIdempotentCoefficients} implies that the sum of coefficients is $0$ when $R< C_S(\underline a)$ and equals $1$ when $R=C_S(\underline a)$. We complete our calculation with
\begin{align*}
&\Bigl(\bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\mathbin{\odot} \twistO n(\omega_\mathcal{F})\Bigr)_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\text{ up to $C_S(\underline a)$-conj.}\\\text{s.t. $\underline a\in R$}}} \biggl(\sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in\mathcal{F}(R',S)}} c_{R',\varphi} \biggr)\cdot \bigl((\mathbb{Z}/p^e)^n\times [R,\incl]_{C_S(\underline a)}^{C_S(\underline a)}\bigr)
\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}
\\={}& \bigl((\mathbb{Z}/p^e)^n\times [C_S(\underline a),\id]_{C_S(\underline a)}^{C_S(\underline a)}\bigr)
\mathbin{\odot} \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}
\\={}& \twistO n(\omega_\mathcal{F})_{\underline a,\underline b}.
\end{align*}
This completes the proof of part \ref{itemIdempotentCompTwist} of the proposition.
\medskip
The proof of part \ref{itemIdempotentCompFree} uses the same techniques as the proof of part \ref{itemIdempotentCompTwist}. By \cite[Proposition 3.31]{RSS_Bold1}, the matrix $\twistO n(\omega_\mathcal{F})\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO nS, (\mathbb{Z}/p^e)^n\times \freeO nS)$ has entries
\begin{align*}
&\twistO n(\omega_\mathcal{F})_{\underline a,\underline c}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in \mathcal{F}(R,S)\\ \text{ s.t. } \varphi(\underline a^{k(\underline a,R)})\\\text{is $S$-conj. to $\underline c$}}} c_{R,\varphi}\cdot
\Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,R)}))}
\\* & \hspace{2.5cm}\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,R)})}^{\underline c}) \Bigr)
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in \mathcal{F}(R,S)\\ \text{ s.t. } \varphi(\underline a^{k(\underline a,R)})\\\text{is $S$-conj. to $\underline c$}}} c_{R,\varphi}\cdot
[\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),(\id_{(\mathbb{Z}/p^e)^n}\times(\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi))\circ \wind(\underline a, R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\underline c)},
\end{align*}
where $\psi_{\varphi(\underline a^{k(\underline a,R)})}\colon C_S(\varphi(\underline a^{k(\underline a,R)})) \to C_S(\underline c)$ is an $S$-conjugation map taking $\varphi(\underline a^{k(\underline a,R)})$ to the representative of its conjugacy class in $S$.
We will name this representative $\underline z$ to ease the notation in the following calculations. Note that $\underline z= \psi_{\varphi(\underline a^{k(\underline a,R)})}(\varphi(\underline a^{k(\underline a,R)}))$ represents the conjugacy class of $\varphi(\underline a^{k(\underline a,R)})$ and as such depends on $R$, $\varphi$, and $\underline a$.
We decompose $[\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),(\id_{(\mathbb{Z}/p^e)^n}\times(\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi))\circ \wind(\underline a, R)]$ into
\[
[\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),\wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times R}\mathbin{\odot} \Bigl((\mathbb{Z}/p^e)^n\times[R,\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi]_{R}^{C_S(\underline z)}\Bigr).
\]
Using the functoriality of $\freeO n$ and the $\mathcal{F}$-stability of $\omega_\mathcal{F}$, we then proceed as in part \ref{itemIdempotentCompTwist} to replace $\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi$ with the inclusion $\incl\colon R\to S$. The main steps are the following:
\begin{align*}
&\Bigl( \twistO n(\omega_\mathcal{F})\mathbin{\odot} \bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\Bigr)_{\underline a,\underline b}
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)}}c_{R,\varphi}\cdot [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),\wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times R}
\\* &\hspace{2.5cm}\mathbin{\odot} \Bigl((\mathbb{Z}/p^e)^n\times [R,\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi]_R^{C_S(\underline z)}\Bigr)
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\underline z,\underline b}\bigr)
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)}}c_{R,\varphi}\cdot [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),\wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times R}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n([R,\psi_{\varphi(\underline a^{k(\underline a,R)})}\circ \varphi]_R^S\mathbin{\odot} \omega_\mathcal{F})_{\underline a^{k(\underline a,R)},\underline b}\bigr)
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)}}c_{R,\varphi}\cdot [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),\wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times R}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n([R,\incl]_R^S\mathbin{\odot} \omega_\mathcal{F})_{\underline a^{k(\underline a,R)},\underline b}\bigr)
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\varphi\in\mathcal{F}(R,S)}}c_{R,\varphi}\cdot [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),(\id_{(\mathbb{Z}/p^e)^n}\times \psi_{\underline a^{k(\underline a,R)}})\circ \wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\psi_{\underline a^{k(\underline a,R)}}(\underline a^{k(\underline a,R)}))}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\psi_{\underline a^{k(\underline a,R)}}(\underline a^{k(\underline a,R)}),\underline b}\bigr)
\\={}& \sum_{\substack{R\leq C_S(\underline a)\\\text{ up to $C_S(\underline a)$-conj.}}} \biggl(\sum_{\substack{R'\sim_{C_S(\underline a)} R\\ \varphi\in\mathcal{F}(R',S)}} c_{R',\varphi} \biggr)\cdot
\\* &\hspace{2.5cm} [\mathord{\mathrm{ev}}_{\underline a}^{-1}(R),(\id_{(\mathbb{Z}/p^e)^n}\times \psi_{\underline a^{k(\underline a,R)}})\circ \wind(\underline a,R)]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\psi_{\underline a^{k(\underline a,R)}}(\underline a^{k(\underline a,R)}))}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\psi_{\underline a^{k(\underline a,R)}}(\underline a^{k(\underline a,R)}),\underline b}\bigr).
\end{align*}
We again apply Lemma \ref{lemmaIdempotentCoefficients} to remove all summands except $R=C_S(\underline a)$. For $R=C_S(\underline a)$, we have $\underline a^{k(\underline a,R)} = \underline a$ and $\wind(\underline a,C_S(\underline a)) = \id_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}$. We finish the calculation with
\begin{align*}
&\Bigl( \twistO n(\omega_\mathcal{F})\mathbin{\odot} \bigl( (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})\bigr)\Bigr)_{\underline a,\underline b}
\\={}& [\mathord{\mathrm{ev}}_{\underline a}^{-1}(C_S(\underline a)),(\id_{(\mathbb{Z}/p^e)^n}\times \psi_{\underline a})\circ \wind(\underline a,C_S(\underline a))]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\psi_{\underline a}(\underline a))}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\underline a,\underline b}\bigr)
\\={}& [(\mathbb{Z}/p^e)^n\times C_S(\underline a),\id]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}
\\* &\hspace{2.5cm}\mathbin{\odot} \bigl((\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\underline a,\underline b}\bigr)
\\={}& (\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})_{\underline a,\underline b}.
\end{align*}
This completes the proof of part \ref{itemIdempotentCompFree}.
\end{proof}
\begin{cor}\label{corEquivalentTelescopes}
The idempotents $\twistO n ((\omega_\mathcal{F})_S^S)$ and $(\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)$ induce inverse equivalences
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
\tel_{\twistO n (\omega_\mathcal{F})} &[3cm] \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n)\wedge \tel_{\freeO n(\omega_\mathcal{F})}. \\
};
\path[->,arrow,auto]
(M-1-1.north east) edge[bend left=30] node{$(\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)$} (M-1-2.north west)
(M-1-2.south west) edge[bend left=30] node{$\twistO n((\omega_\mathcal{F})_S^S)$} (M-1-1.south east)
;
\end{tikzpicture}
\]
\end{cor}
\begin{proof}
By Proposition \ref{propComposingLoopIdempotents}, we can apply $(\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)$ and $\twistO n((\omega_\mathcal{F})_S^S)$ levelwise to get maps back and forth between the two towers
\[
\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S)\xrightarrow{(\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S) \xrightarrow{(\mathbb{Z}/p^e)^n\times \freeO n(\omega_\mathcal{F})} \dotsb
\]
and
\[
\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S)\xrightarrow{\twistO n(\omega_\mathcal{F})} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S) \xrightarrow{\twistO n(\omega_\mathcal{F})} \dotsb.
\]
Again by Proposition \ref{propComposingLoopIdempotents}, the composite $\twistO n((\omega_\mathcal{F})_S^S)\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)) = (\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)$ is the identity on the telescope $\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n)\wedge \tel_{\freeO n(\omega_\mathcal{F})}$ of the first tower, and the composite $((\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S))\mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S) = \twistO n((\omega_\mathcal{F})_S^S)$ is the identity on the telescope $\tel_{\twistO n(\omega_\mathcal{F})}$ of the second tower.
\end{proof}
\begin{cor}\label{corTwistedLoopTelescopeEquiv}
Combining Corollaries \ref{corFreeLoopTelescopeEquiv} and \ref{corEquivalentTelescopes}, we have inverse equivalences
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
\tel_{\twistO n (\omega_\mathcal{F})} &[3cm] \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}), \\
};
\path[->,arrow,auto]
(M-1-1.north east) edge[bend left=30] node{$(\mathbb{Z}/p^e)^n\times I_\mathcal{F}$} (M-1-2.north west)
(M-1-2.south west) edge[bend left=30] node{$\twistT\mathcal{F}$} (M-1-1.south east)
;
\end{tikzpicture}
\]
with matrices $(\mathbb{Z}/p^e)^n\times I_\mathcal{F}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n S, (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$ and\linebreak $\twistT\mathcal{F} \in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}, (\mathbb{Z}/p^e)^n\times \freeO n S)$. These matrices have the following entries:
\[
((\mathbb{Z}/p^e)^n\times I_\mathcal{F})_{\underline a,\underline b} = \begin{cases}
0 & \text{if $\underline a$ is not $\mathcal{F}$-conjugate to $\underline b$,}
\\ (\mathbb{Z}/p^e)^n\times \zeta_{\underline a}^{\underline b} & \text{if $\underline b$ is the representative for the $\mathcal{F}$-conjugacy class of $\underline a$,}
\end{cases}
\]
with $((\mathbb{Z}/p^e)^n\times I_\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times C_S(\underline a), (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline b)) $,
and
\[
(\twistT\mathcal{F})_{\underline a,\underline b} = \twistO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b},
\]
with $(\twistT\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a),(\mathbb{Z}/p^e)^n\times C_S(\underline b))$.
\end{cor}
\begin{proof}
We compose the maps of Corollaries \ref{corFreeLoopTelescopeEquiv} and \ref{corEquivalentTelescopes}. In one direction we have the composite
\[
\tel_{\twistO n(\omega_\mathcal{F})} \xrightarrow{(\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S)} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n) \wedge \tel_{\freeO n(\omega_\mathcal{F})} \xrightarrow {(\mathbb{Z}/p^e)^n\times I_\mathcal{F}} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}).
\]
Due to Lemma \ref{lemmaIdempotentOfRetract}, we have
\[\freeO n((\omega_\mathcal{F})_S^S) \mathbin{\odot} I_\mathcal{F} = I_\mathcal{F}\mathbin{\odot} T_\mathcal{F}\mathbin{\odot} I_\mathcal{F} = I_\mathcal{F}.\]
Hence the composed equivalence above is simply
\[
\tel_{\twistO n(\omega_\mathcal{F})} \xrightarrow {(\mathbb{Z}/p^e)^n\times I_\mathcal{F}} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F})
\]
induced by $(\mathbb{Z}/p^e)^n\times I_\mathcal{F}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n S, (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$, which is invariant with respect to composition with $\twistO n((\omega_\mathcal{F})_S^S)$ on the left.
In the other direction, we have the composite
\[
\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}) \xrightarrow{(\mathbb{Z}/p^e)^n\times T_\mathcal{F}} \hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n) \wedge \tel_{\freeO n(\omega_\mathcal{F})} \xrightarrow {\twistO n((\omega_\mathcal{F})_S^S)} \tel_{\twistO n(\omega_\mathcal{F})}.
\]
By Definition \ref{defLnFRetractOfLnS}, the matrix $T_\mathcal{F}\in \mathbb{AF}_p(\freeO n \mathcal{F},\freeO n S)$ has entries $(T_\mathcal{F})_{\underline a,\underline b}=\freeO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}$, whenever $\underline a$ represents the class $[\underline a]\in \cntuples n \mathcal{F}$ and $\underline b$ represents $[\underline b]\in \cntuples n S$. Applying Proposition \ref{propComposingLoopIdempotents}\ref{itemIdempotentCompTwist}, we then calculate the entries of the composite equivalence:
\begin{align*}
\bigl(((\mathbb{Z}/p^e)^n\times T_\mathcal{F})\mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S)\bigr)_{\underline a,\underline b} &=
\sum_{[\underline c]\in \cntuples nS} ((\mathbb{Z}/p^e)^n\times T_\mathcal{F})_{\underline a,\underline c} \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S)_{\underline c,\underline b}
\\ &= \sum_{[\underline c]\in \cntuples nS} ((\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S))_{\underline a,\underline c} \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S)_{\underline c,\underline b}
\\ &= \bigl(((\mathbb{Z}/p^e)^n\times \freeO n((\omega_\mathcal{F})_S^S))\mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S)\bigr)_{\underline a,\underline b}
\\ &= \twistO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}.
\end{align*}
This is the matrix $\twistT \mathcal{F}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}, (\mathbb{Z}/p^e)^n\times \freeO nS)$ described in the statement of the corollary. Furthermore $\twistT\mathcal{F}$ is invariant with respect to $\twistO n((\omega_\mathcal{F})_S^S)$ on the right.
\end{proof}
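As a quick sanity check of Corollary \ref{corTwistedLoopTelescopeEquiv} (an informal aside, under the standard assumption that the characteristic idempotent of the trivial fusion system on $S$ is $\omega_S=[S,\id]_S^S$), consider the case where $\mathcal{F}$ is the trivial fusion system on $S$. Every $\mathcal{F}$-conjugacy class is then an $S$-conjugacy class with its unique chosen representative, we have $C_\mathcal{F}(\underline a)=C_S(\underline a)$, and $\twistO n$ takes $\id_S$ to $\id_{(\mathbb{Z}/p^e)^n\times\freeO nS}$, so both matrices should reduce to the identity:
\[
((\mathbb{Z}/p^e)^n\times I_\mathcal{F})_{\underline a,\underline b} = (\twistT\mathcal{F})_{\underline a,\underline b} =
\begin{cases}
\id_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)} &\text{if $\underline a=\underline b$,}
\\ 0 &\text{otherwise.}
\end{cases}
\]
In this case both equivalences of the corollary are the identity on $\hat\Sigma^{\infty}_{+} B((\mathbb{Z}/p^e)^n\times \freeO n S)$, as one would expect.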
We now extend the functor $\twistO n$ to saturated fusoids, in analogy with Proposition \ref{propSimpleFusionLoop}:
\begin{prop}\label{propTwistedFusionLoop}
The functor $\twistO n$ on unions of $p$-groups extends to a functor $\twistO n\colon \mathbb{AF}_p\to \mathbb{AF}_p$ given on objects by $\mathcal{F}\mapsto (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}$ and on morphisms $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ by the matrix with entries
\[\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} =\sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} (\twistO n(X_R^S))_{\underline a,\underline b'} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\underline b'}^{\underline b}),\]
with $\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a),(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline b))$, and where $R$ and $S$ are the underlying unions of $p$-groups for $\mathcal{E}$ and $\mathcal{F}$ respectively.
\end{prop}
\begin{remark} \label{remarkFusionTwistedLoopFunctor}
In the proof we shall see that $\twistO n$ can alternatively be given as the composite $\twistO n(X_\mathcal{E}^\mathcal{F}) = \twistT\mathcal{E}\mathbin{\odot} \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F})$.
\end{remark}
\begin{proof}
We first note that $\twistO n(X_\mathcal{E}^\mathcal{F}) = \twistT\mathcal{E}\mathbin{\odot} \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F})$. We check this by calculating the entries of the right hand side:
\begin{align*}
& \bigl( \twistT\mathcal{E}\mathbin{\odot} \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline a,\underline b}
\\ ={}& \sum_{[\underline c]\in \cntuples nR} (\twistT\mathcal{E})_{\underline a,\underline c} \mathbin{\odot} \bigl( \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline c,\underline b}
\\ ={}& \sum_{[\underline c]\in \cntuples nR} (\twistO n((\omega_\mathcal{E})_R^R))_{\underline a,\underline c} \mathbin{\odot} \bigl( \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline c,\underline b}
\\ ={}& \bigl( \twistO n((\omega_\mathcal{E})_R^R)\mathbin{\odot} \twistO n(X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline a,\underline b}
\\ ={}& \bigl( \twistO n((\omega_\mathcal{E})_R^R\mathbin{\odot} X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline a,\underline b}
\\ ={}& \bigl( \twistO n( X_R^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \bigr)_{\underline a,\underline b}
\\ ={}& \sum_{[\underline d]\in \cntuples nS} \twistO n( X_R^S)_{\underline a,\underline d}\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F})_{\underline d,\underline b}
\\ ={}& \sum_{\substack{[\underline b']\in \cntuples nS \\ \underline b' \sim_\mathcal{F} \underline b}} (\twistO n(X_R^S))_{\underline a,\underline b'} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\underline b'}^{\underline b})
\\ ={}& \twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}.
\end{align*}
It is now straightforward to check that $\twistO n$ is a functor. First note that $\twistO n$ takes identity maps to identity maps since
\begin{multline*}
\twistO n((\omega_\mathcal{F})_\mathcal{F}^\mathcal{F}) =\twistT\mathcal{F}\mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) \\= \twistT\mathcal{F} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F}) = \id_{(\mathbb{Z}/p^e)^n\times\freeO n\mathcal{F}}.
\end{multline*}
We have used the fact that $\twistT\mathcal{F}$ is the inverse to $((\mathbb{Z}/p^e)^n\times I_\mathcal{F})$ by Corollary \ref{corTwistedLoopTelescopeEquiv}.
Let $\mathcal{E}$, $\mathcal{F}$, and $\mathcal{G}$ be saturated fusoids over $R$, $S$, and $T$ respectively. Suppose $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ and $Y\in \mathbb{AF}_p(\mathcal{F},\mathcal{G})$. We check that $\twistO n$ preserves composition:
\begin{align*}
\twistO n(X_\mathcal{E}^\mathcal{F})\mathbin{\odot} \twistO n(Y_\mathcal{F}^\mathcal{G}) &= \twistT \mathcal{E}\mathbin{\odot} \twistO n(X_R^S)\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{F})\mathbin{\odot} \twistT \mathcal{F} \mathbin{\odot} \twistO n(Y_S^T) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{G})
\\ &= \twistT \mathcal{E}\mathbin{\odot} \twistO n(X_R^S)\mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^S) \mathbin{\odot} \twistO n(Y_S^T) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{G})
\\ &= \twistT \mathcal{E}\mathbin{\odot} \twistO n(X_R^S\mathbin{\odot} (\omega_\mathcal{F})_S^S \mathbin{\odot} Y_S^T) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{G})
\\ &= \twistT \mathcal{E}\mathbin{\odot} \twistO n((X\mathbin{\odot} Y)_S^T) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times I_\mathcal{G})
\\ &= \twistO n((X\mathbin{\odot} Y)_\mathcal{E}^\mathcal{G}).
\end{align*}
The characteristic idempotent $\omega_\mathcal{F}$ disappears from the middle because $X$ and $Y$ are $\mathcal{F}$-stable.
\end{proof}
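For a minimal illustration of the formula in Proposition \ref{propTwistedFusionLoop}, take $\mathcal{E}$ and $\mathcal{F}$ to be the trivial fusoids on $R$ and $S$, so that $\sim_\mathcal{F}$ is plain $S$-conjugacy. The only class contributing to the sum is then $[\underline b]$ itself, and $\zeta_{\underline b}^{\underline b}$ is the identity on $C_S(\underline b)$, so the formula collapses to
\[\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} = (\twistO n(X_R^S))_{\underline a,\underline b};\]
that is, the extended functor restricts to the original $\twistO n$ on unions of $p$-groups, as it must.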
We next give a formula for $\twistO n$ in terms of the decomposition of $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ into basis elements. The formula is analogous to \cite[Proposition 3.31]{RSS_Bold1} and the proof follows the same lines as the proof of Corollary \ref{corSimpleFusionLoopOrbits}.
\begin{prop}\label{propFusionTwistedLoopOrbits}
Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusion systems over $p$-groups $R$ and $S$ respectively, and suppose $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ is a virtual biset. Furthermore, let $\underline a$ in $\mathcal{E}$ and $\underline b$ in $\mathcal{F}$ be chosen representatives for conjugacy classes of commuting $n$-tuples (according to Convention \ref{conventionTupleReps}). Consider the restriction of $X$ to the centralizer fusion system $C_\mathcal{E}(\underline a)$, and write $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ as a linear combination of basis elements (recalling Convention \ref{conventionFusionOrbitDecomposition}):
\[
X_{C_\mathcal{E}(\underline a)}^\mathcal{F} =\sum_{(P,\varphi)}c_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F},
\]
where $P\leq C_R(\underline a)$ and $\varphi\colon P\to S$.
The matrix entry $\twistO n(X)_{\underline a,\underline b}$ then satisfies the formula
\begin{multline*}
\twistO n(X)_{\underline a,\underline b} = \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} c_{P,\varphi}\cdot
\Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))} \\ \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr),
\end{multline*}
with $P$, $\varphi$, and $c_{P,\varphi}$ as in the linear combination above, and with $k(\underline a,P)$ and $\wind(\underline a,P)$ as given in \cite[Definition 3.25]{RSS_Bold1} and \cite[Lemma 3.29]{RSS_Bold1}, respectively.
\end{prop}
\begin{proof}
As in the proof of Corollary \ref{corSimpleFusionLoopOrbits}, we first wish to replace the coefficients $c_{P,\varphi}$ with an alternative set of coefficients that play nicely with the underlying $p$-groups. View $X$ as an element of $\mathbb{AF}_p(C_R(\underline a),S)$ and write $X_{C_R(\underline a)}^S$ as a linear combination:
\[
X_{C_R(\underline a)}^S = \sum_{(P,\varphi)}u_{P,\varphi} \cdot [P,\varphi]_{C_R(\underline a)}^S,
\]
with a (possibly) different collection of coefficients $u_{P,\varphi}$.
If we precompose with $\omega_{C_\mathcal{E}(\underline a)}$ and postcompose with $\omega_\mathcal{F}$, the linear combination above becomes
\[
X_{C_\mathcal{E}(\underline a)}^\mathcal{F} = \omega_{C_\mathcal{E}(\underline a)}\mathbin{\odot} X_{C_R(\underline a)}^S \mathbin{\odot} \omega_{\mathcal{F}} = \sum_{(P,\varphi)}u_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}.
\]
We shall then prove that the formula for $\twistO n(X)_{\underline a,\underline b}$ in the statement of the proposition is independent of the choice of linear combination for $X_{C_\mathcal{E}(\underline a)}^\mathcal{F}$, so that we can use the coefficients $u_{P,\varphi}$ instead of $c_{P,\varphi}$.
As in the proof of Corollary \ref{corSimpleFusionLoopOrbits}, it suffices to prove that whenever two basis elements $[P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ and $[P',\varphi']_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ are equal, then the composite
\[
[\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b})
\]
and the same composite for the pair $(P',\varphi')$ are also equal.
Suppose $[P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}=[P',\varphi']_{C_\mathcal{E}(\underline a)}^\mathcal{F}$ so that $(P',\varphi')$ is $(C_\mathcal{E}(\underline a),\mathcal{F})$-conjugate to $(P,\varphi)$. Suppose further that $\varphi(\underline a^{k(\underline a,P)})$ is $\mathcal{F}$-conjugate to $\underline b$. Let $\varphi'=\gamma\circ \varphi\circ \alpha$ with $\alpha\in C_\mathcal{E}(\underline a)$ and $\gamma\in \mathcal{F}$, and where $\alpha\colon \gen{\underline a}P'\xrightarrow\cong \gen{\underline a}P$ satisfies $\alpha(\underline a)=\underline a$.
Via the isomorphism $\alpha$, it is clear that powers of elements in $\underline a$ lie in $P$ if and only if they lie in $P'$. Hence $k(\underline a,P)=k(\underline a,P')$, and
\[\wind(\underline a,P)\circ (\id_{(\mathbb{Z}/p^e)^n}\times \alpha) = (\id_{(\mathbb{Z}/p^e)^n}\times \alpha)\circ \wind(\underline a,P')\]
as maps $\mathord{\mathrm{ev}}^{-1}_{\underline a}(P') \xrightarrow\cong (\mathbb{Z}/p^e)^n\times P$.
In $S$ we have $\varphi'(\underline a^{k(\underline a,P')})=\gamma(\varphi(\underline a^{k(\underline a, P')}))$, and since $\gamma\in \mathcal{F}$, we conclude that $\varphi'(\underline a^{k(\underline a,P')})$ is also $\mathcal{F}$-conjugate to $\underline b$. Furthermore, we have
\[[(\mathbb{Z}/p^e)^n\times P,\id\times \gamma]\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times\zeta_{\varphi'(\underline a^{k(\underline a,P')})}^{\underline b}) = (\mathbb{Z}/p^e)^n\times\zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}.\]
Combining these observations, we get
\begin{align*}
& [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P'),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi')\circ \wind(\underline a, P')]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi'(\underline a^{k(\underline a,P')}))} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi'(\underline a^{k(\underline a,P')})}^{\underline b})
\\ &{}= [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P'),(\id_{(\mathbb{Z}/p^e)^n}\times(\gamma\circ\varphi\circ\alpha))\circ \wind(\underline a, P')]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi'(\underline a^{k(\underline a,P')}))} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi'(\underline a^{k(\underline a,P')})}^{\underline b})
\\ &{}= [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P'),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)\circ (\id_{(\mathbb{Z}/p^e)^n}\times\alpha)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b})
\\ &{}= [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))} \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}),
\end{align*}
where the last equality follows from the fact that $\alpha\in C_\mathcal{E}(\underline a)$.
As in the proof of Corollary \ref{corSimpleFusionLoopOrbits}, the equality
\[ \sum_{(P,\varphi)}c_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F} = X_{C_\mathcal{E}(\underline a)}^\mathcal{F} = \sum_{(P,\varphi)}u_{P,\varphi} \cdot [P,\varphi]_{C_\mathcal{E}(\underline a)}^\mathcal{F}\]
implies that we can replace $c_{P,\varphi}$ with $u_{P,\varphi}$ in the formula
\begin{multline*}
\hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} c_{P,\varphi}\cdot
\Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))} \\ \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr)
\end{multline*}
and get the same sum.
By Proposition \ref{propTwistedFusionLoop}, we can write $\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}$ as
\begin{align*}
\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b} &= \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \twistO n(X_R^S)_{\underline a,\underline b'}\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\underline b'}^{\underline b}).
\end{align*}
We then apply the $p$-group analogue of the formula, \cite[Proposition 3.31]{RSS_Bold1}, to the linear combination for $X_{C_R(\underline a)}^S$ given by the coefficients $u_{P,\varphi}$:
\begin{align*}
&\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}
\\={}& \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \twistO n(X_R^S)_{\underline a,\underline b'}\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\underline b'}^{\underline b})
\\ ={}& \sum_{\substack{[\underline b']\in \cntuples n S\\\underline b'\sim_\mathcal{F} \underline b}} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $S$-conj. to $\underline b'$}}}\hspace{-.4cm} u_{P,\varphi}\cdot \Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_R(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\\*& \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b'}) \Bigr)\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\underline b'}^{\underline b})
\\={}& \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} u_{P,\varphi}\cdot \Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_R(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\\*& \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr).
\end{align*}
Finally, note that $\twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a),(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline b))$ is left\linebreak $((\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a))$-stable and as such does not change if we precompose with the idempotent $(\mathbb{Z}/p^e)^n\times \omega_{C_\mathcal{E}(\underline a)}$:
\begin{align*}
& \twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}
\\ ={}& ((\mathbb{Z}/p^e)^n\times \omega_{C_\mathcal{E}(\underline a)})\mathbin{\odot} \twistO n(X_\mathcal{E}^\mathcal{F})_{\underline a,\underline b}
\\= {}& \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} u_{P,\varphi}\cdot ((\mathbb{Z}/p^e)^n\times \omega_{C_\mathcal{E}(\underline a)})
\\*& \mathbin{\odot} \Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_R(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr)
\\={}& \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} u_{P,\varphi}\cdot \Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\\*& \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr)
\\={}&\hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \varphi(\underline a^{k(\underline a,P)})\\\text{is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} c_{P,\varphi}\cdot \Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\\*& \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr).\qedhere
\end{align*}
\end{proof}
\begin{remark}\label{remarkFusionUntwisted}
As in \cite[Remark 3.32]{RSS_Bold1}, we can form a functor $\untwistO n$ by taking the formula in Proposition \ref{propFusionTwistedLoopOrbits} and leaving out all summands indexed by $(P,\varphi)$, where $P$ does not contain $\underline a$. By \cite[Lemma 3.27]{RSS_Bold1}, the summands for which $\underline a\in P$ are precisely those summands where $\mathord{\mathrm{ev}}^{-1}_{\underline a}(P)=(\mathbb{Z}/p^e)^n\times P$, and these are also the summands for which $k(\underline a,P)_i=1$ for all $1\leq i\leq n$.
For saturated fusion systems $\mathcal{E}$ and $\mathcal{F}$ over $p$-groups $R$ and $S$, and for $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$, we thus define $\untwistO n(X)\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \mathcal{E}, (\mathbb{Z}/p^e)^n\times \mathcal{F})$ to be the matrix with entries as in Proposition \ref{propFusionTwistedLoopOrbits} except leaving out all summands where $\underline a$ is not in $P$:
\begin{align*}
\untwistO n(X)_{\underline a,\underline b} ={}& \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. }\underline a\in P\text{ and} \\ \varphi(\underline a^{k(\underline a,P)})\text{ is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} c_{P,\varphi}\cdot
\Bigl( [\mathord{\mathrm{ev}}_{\underline a}^{-1}(P),(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)\circ \wind(\underline a, P)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a^{k(\underline a,P)}))}
\\* &\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a^{k(\underline a,P)})}^{\underline b}) \Bigr)
\\ ={}& \hspace{-.4cm} \sum_{\substack{(P,\varphi) \text{ s.t. } \underline a\in P\text{ and}\\ \varphi(\underline a)\text{ is $\mathcal{F}$-conj. to $\underline b$}}}\hspace{-.4cm} c_{P,\varphi}\cdot
\Bigl( [(\mathbb{Z}/p^e)^n\times P,(\id_{(\mathbb{Z}/p^e)^n}\times\varphi)]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{E}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_S(\varphi(\underline a))}
\\* &\mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{\varphi(\underline a)}^{\underline b}) \Bigr).
\end{align*}
Comparing with Corollary \ref{corSimpleFusionLoopOrbits}, we see that $\untwistO n(X)$ coincides with $(\mathbb{Z}/p^e)^n\times \freeO n(X)$. As mentioned, we observed this for groups in \cite[Remark 3.32]{RSS_Bold1} and for the category $\mathord{\mathrm{Cov}}$ in \cite[Remark 2.11]{RSS_Bold1}.
\end{remark}
\section{Properties of $\twistO n$}\label{secFusionMainTheorem}
Before we state the fusion system version of \cite[Theorems 2.13 and 3.33]{RSS_Bold1}, we need to discuss the auxiliary maps describing evaluation and partial evaluation for $\freeO n\mathcal{F}$, the action of $\Sigma_n$ on $\freeO n\mathcal{F}$, and the embedding of $(\mathbb{Z}/p^e)^{n+m}\times \freeO {n+m}\mathcal{F}$ into $(\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F})$.
We shall handle all four auxiliary maps according to the following common framework: first, the auxiliary map on $p$-groups gives rise to a map between towers corresponding to suitable idempotents; this map of towers induces a map between the mapping telescopes in spectra, and the telescopes are each equivalent to classifying spectra of fusoids. We then calculate the induced map between the classifying spectra of fusoids and confirm that this map takes the form we would expect for the auxiliary map in question.
Constructing the auxiliary maps in terms of maps between towers of idempotents, instead of defining them for fusion systems directly, has one significant advantage: it allows us to turn natural transformations for $p$-groups into natural transformations for fusion systems by formal, standard methods (see Proposition \ref{propExtendNaturalTransformation}). This in turn allows us to use \cite[Theorem 3.33]{RSS_Bold1} for $p$-groups to prove a significant part of Theorem \ref{thmFusionMain} for fusion systems.
\begin{lemma}\label{lemmaInducedTelescopeEquivalence}
Let $F\colon \mathbb{AF}_p\to \mathbb{AF}_p$ be a functor, and let $\mathcal{F}$ be a saturated fusoid with underlying union of $p$-groups $S$. Consider the idempotent endomorphism $F((\omega_\mathcal{F})_S^S)\in \mathbb{AF}_p(F(S),F(S))$ and the associated mapping telescope \[\tel_{F(\omega_\mathcal{F})} = \colim(\hat\Sigma^{\infty}_{+} BF(S) \xrightarrow{F((\omega_\mathcal{F})_S^S)} \hat\Sigma^{\infty}_{+} BF(S) \xrightarrow{F((\omega_\mathcal{F})_S^S)} \dotsb).\]
We then have a pair of inverse equivalences
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
\tel_{F (\omega_\mathcal{F})} &[3cm] \hat\Sigma^{\infty}_{+} BF(\mathcal{F}). \\
};
\path[->,arrow,auto]
(M-1-1.north east) edge[bend left=30] node{$F((\omega_\mathcal{F})_S^\mathcal{F})$} (M-1-2.north west)
(M-1-2.south west) edge[bend left=30] node{$F((\omega_\mathcal{F})_\mathcal{F}^S)$} (M-1-1.south east)
;
\end{tikzpicture}
\]
\end{lemma}
\begin{proof}
Consider the two towers
\[S\xrightarrow{(\omega_\mathcal{F})_S^S} S\xrightarrow{(\omega_\mathcal{F})_S^S}\dotsb\]
as well as
\[\mathcal{F}\xrightarrow{\id_\mathcal{F}} \mathcal{F} \xrightarrow{\id_\mathcal{F}} \dotsb.\]
We can apply $(\omega_\mathcal{F})_\mathcal{F}^S$ and $(\omega_\mathcal{F})_S^\mathcal{F}$ level-wise to get maps back and forth between the towers, where we recall that $\id_\mathcal{F}$ is simply $(\omega_\mathcal{F})_\mathcal{F}^\mathcal{F}\in \mathbb{AF}_p(\mathcal{F},\mathcal{F})$. The induced maps between telescopes $\tel_{\omega_\mathcal{F}}$ and $\hat\Sigma^{\infty}_{+} B\mathcal{F}$ are equivalences (or even the identity if we used $\tel_{\omega_\mathcal{F}}$ as the construction of $\hat\Sigma^{\infty}_{+} B\mathcal{F}$).
If we apply the functor $F\colon \mathbb{AF}_p\to \mathbb{AF}_p$ to the elements $(\omega_\mathcal{F})_\mathcal{F}^S$ and $(\omega_\mathcal{F})_S^\mathcal{F}$, we can apply the resulting maps $F((\omega_\mathcal{F})_\mathcal{F}^S)$ and $F((\omega_\mathcal{F})_S^\mathcal{F})$ level-wise to the towers
\[F(S)\xrightarrow{F((\omega_\mathcal{F})_S^S)} F(S)\xrightarrow{F((\omega_\mathcal{F})_S^S)}\dotsb\]
and
\[F(\mathcal{F})\xrightarrow{\id_{F(\mathcal{F})}} F(\mathcal{F}) \xrightarrow{\id_{F(\mathcal{F})}} \dotsb.\]
The composites $F((\omega_\mathcal{F})_S^\mathcal{F})\mathbin{\odot} F((\omega_\mathcal{F})_\mathcal{F}^S) = F((\omega_\mathcal{F})_S^S)$ and $F((\omega_\mathcal{F})_\mathcal{F}^S)\mathbin{\odot} F((\omega_\mathcal{F})_S^\mathcal{F}) = F((\omega_\mathcal{F})_\mathcal{F}^\mathcal{F})= \id_{F(\mathcal{F})}$ recover the idempotents of the towers, so the induced maps on telescopes
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
\tel_{F (\omega_\mathcal{F})} &[3cm] \hat\Sigma^{\infty}_{+} BF(\mathcal{F}) \\
};
\path[->,arrow,auto]
(M-1-1.north east) edge[bend left=30] node{$F((\omega_\mathcal{F})_S^\mathcal{F})$} (M-1-2.north west)
(M-1-2.south west) edge[bend left=30] node{$F((\omega_\mathcal{F})_\mathcal{F}^S)$} (M-1-1.south east)
;
\end{tikzpicture}
\]
are inverse to each other in $\Ho(\mathord{\mathrm{Sp}}_{p})$.
\end{proof}
\begin{remark}
If we apply Lemma \ref{lemmaInducedTelescopeEquivalence} to the functors $\freeO n$ and $\twistO n$, we recover Corollaries \ref{corFreeLoopTelescopeEquiv} and \ref{corTwistedLoopTelescopeEquiv} with the same equivalences. However, we need those corollaries in order to construct $\freeO n$ and $\twistO n$ as functors $\mathbb{AF}_p\to \mathbb{AF}_p$ in the first place.
\end{remark}
\begin{definition}\label{defInducedNaturalTransformation}
Suppose $F,G\colon \mathbb{AF}_p\to \mathbb{AF}_p$ are functors defined on all saturated fusoids, and suppose we have a natural transformation $\eta\colon F|_{\textup{$p$-groups}} \Rightarrow G|_{\textup{$p$-groups}}$ defined only on formal unions of $p$-groups. For each saturated fusoid $\mathcal{F}$ over $S$, we then define a map $\eta_\mathcal{F}\colon F(\mathcal{F}) \to G(\mathcal{F})$ as the composite
\[\eta_\mathcal{F} \colon F(\mathcal{F}) \xrightarrow{F((\omega_\mathcal{F})_\mathcal{F}^S)} F(S) \xrightarrow{\eta_S} G(S) \xrightarrow{G((\omega_\mathcal{F})_S^\mathcal{F})} G(\mathcal{F}).\]
When $\mathcal{F}$ is the trivial fusion system on $S$, this just recovers $\eta_S$, so there is no ambiguity of notation.
\end{definition}
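As a quick consistency check of Definition \ref{defInducedNaturalTransformation} (using only the identity $F((\omega_\mathcal{F})_\mathcal{F}^S)\mathbin{\odot} F((\omega_\mathcal{F})_S^\mathcal{F}) = F((\omega_\mathcal{F})_\mathcal{F}^\mathcal{F}) = \id_{F(\mathcal{F})}$ from the proof of Lemma \ref{lemmaInducedTelescopeEquivalence}), suppose $F=G$ and $\eta$ is the identity transformation on $p$-groups. Then
\[\eta_\mathcal{F} = F((\omega_\mathcal{F})_\mathcal{F}^S)\mathbin{\odot} \id_{F(S)}\mathbin{\odot} F((\omega_\mathcal{F})_S^\mathcal{F}) = \id_{F(\mathcal{F})},\]
so the extension of the identity transformation is again the identity.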
Since $\eta$ is a natural transformation on $p$-groups, $\eta_S$ fits in a diagram of towers:
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
F(\mathcal{F}) &[2.5cm] F(\mathcal{F}) &[2.5cm] \dotsb \\[1.5cm]
F(S) & F(S) & \dotsb\\[1.5cm]
G(S) & G(S) & \dotsb\\[1.5cm]
G(\mathcal{F}) & G(\mathcal{F}) & \dotsb.\\
};
\path[->,arrow,auto]
(M-1-1) edge node{$\id_{F(\mathcal{F})}$} (M-1-2)
edge node{$F((\omega_\mathcal{F})_\mathcal{F}^S)$} (M-2-1)
(M-1-2) edge node{$\id_{F(\mathcal{F})}$} (M-1-3)
edge node{$F((\omega_\mathcal{F})_\mathcal{F}^S)$} (M-2-2)
(M-2-1) edge node{$F((\omega_\mathcal{F})_S^S)$} (M-2-2)
edge node{$\eta_S$} (M-3-1)
(M-2-2) edge node{$F((\omega_\mathcal{F})_S^S)$} (M-2-3)
edge node{$\eta_S$} (M-3-2)
(M-3-1) edge node{$G((\omega_\mathcal{F})_S^S)$} (M-3-2)
edge node{$G((\omega_\mathcal{F})_S^\mathcal{F})$} (M-4-1)
(M-3-2) edge node{$G((\omega_\mathcal{F})_S^S)$} (M-3-3)
edge node{$G((\omega_\mathcal{F})_S^\mathcal{F})$} (M-4-2)
(M-4-1) edge node{$\id_{G(\mathcal{F})}$} (M-4-2)
(M-4-2) edge node{$\id_{G(\mathcal{F})}$} (M-4-3)
;
\end{tikzpicture}
\]
The map $\eta_S$ induces a map between the telescopes $\eta_S\colon \tel_{F(\omega_\mathcal{F})}\to \tel_{G(\omega_\mathcal{F})}$, and as such $\eta_\mathcal{F}$ is simply the composite
\[\eta_\mathcal{F}\colon \hat\Sigma^{\infty}_{+} BF(\mathcal{F}) \xrightarrow[\simeq]{F((\omega_\mathcal{F})_\mathcal{F}^S)} \tel_{F(\omega_\mathcal{F})} \xrightarrow{\eta_S} \tel_{G(\omega_\mathcal{F})} \xrightarrow[\simeq]{G((\omega_\mathcal{F})_S^\mathcal{F})} \hat\Sigma^{\infty}_{+} BG(\mathcal{F})\]
in $\Ho(\mathord{\mathrm{Sp}}_{p})$. It is now easy to prove that the extension of $\eta$ to fusoids defines a natural transformation $F\Rightarrow G$ on all of $\mathbb{AF}_p$.
\begin{prop}\label{propExtendNaturalTransformation}
Suppose $F,G\colon \mathbb{AF}_p\to \mathbb{AF}_p$ are functors defined on all saturated fusoids, and suppose we have a natural transformation $\eta\colon F|_{\textup{$p$-groups}} \Rightarrow G|_{\textup{$p$-groups}}$ defined only on formal unions of $p$-groups.
If we extend $\eta$ to all saturated fusoids by Definition \ref{defInducedNaturalTransformation}, then the extension defines a natural transformation $\eta\colon F\Rightarrow G$ on all of $\mathbb{AF}_p$.
\end{prop}
\begin{proof}
Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusoids over $R$ and $S$ respectively, and let $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ be any matrix of virtual bisets. We have to prove that $F(X_\mathcal{E}^\mathcal{F}) \mathbin{\odot} \eta_\mathcal{F} = \eta_\mathcal{E} \mathbin{\odot} G(X_\mathcal{E}^\mathcal{F})$ in $\mathbb{AF}_p(F(\mathcal{E}),G(\mathcal{F}))$.
Since $\mathbb{AF}_p\to \Ho(\mathord{\mathrm{Sp}}_{p})$ is fully faithful, it suffices to prove this as homotopy classes of maps $\hat\Sigma^{\infty}_{+} BF(\mathcal{E}) \to \hat\Sigma^{\infty}_{+} BG(\mathcal{F})$.
By the naturality of $\eta$ on $p$-groups as well as the definitions of $\eta_\mathcal{E}$ and $\eta_\mathcal{F}$, we have the following commutative diagram in $\Ho(\mathord{\mathrm{Sp}}_{p})$:
\[
\begin{tikzpicture}
\node (M) [matrix of math nodes] {
\hat\Sigma^{\infty}_{+} BF(\mathcal{E}) &[2cm] &[2cm] &[2cm] \hat\Sigma^{\infty}_{+} BG(\mathcal{E}) \\[2cm]
& \tel_{F(\omega_\mathcal{E})} & \tel_{G(\omega_\mathcal{E})} & \\[2cm]
& \tel_{F(\omega_\mathcal{F})} & \tel_{G(\omega_\mathcal{F})} & \\[2cm]
\hat\Sigma^{\infty}_{+} BF(\mathcal{F}) & & & \hat\Sigma^{\infty}_{+} BG(\mathcal{F}).\\
};
\path[->,arrow,auto]
(M-1-1) edge node{$\eta_\mathcal{E}$} (M-1-4)
edge node{$F((\omega_\mathcal{E})_\mathcal{E}^R)$} node[swap]{$\simeq$} (M-2-2)
edge node{$F(X_\mathcal{E}^\mathcal{F})$} (M-4-1)
(M-2-2) edge node{$\eta_R$} (M-2-3)
edge node{$F(X_R^S)$} (M-3-2)
(M-2-3) edge node{$G((\omega_\mathcal{E})_R^\mathcal{E})$} node[swap]{$\simeq$} (M-1-4)
edge node{$G(X_R^S)$} (M-3-3)
(M-1-4) edge node{$G(X_\mathcal{E}^\mathcal{F})$} (M-4-4)
(M-4-1) edge node[swap]{$F((\omega_\mathcal{F})_\mathcal{F}^S)$} node{$\simeq$} (M-3-2)
edge node{$\eta_\mathcal{F}$} (M-4-4)
(M-3-2) edge node{$\eta_S$} (M-3-3)
(M-3-3) edge node[swap]{$G((\omega_\mathcal{F})_S^\mathcal{F})$} node{$\simeq$} (M-4-4)
;
\end{tikzpicture}
\]
Since all the smaller squares commute, the outer square commutes as well.
\end{proof}
As the first of the four auxiliary maps needed for Theorem \ref{thmFusionMain}, let us describe the evaluation map from $(\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}$ to $\mathcal{F}$.
Consider the two endofunctors $\twistO n, \Id_{\mathbb{AF}_p}\colon \mathbb{AF}_p\to \mathbb{AF}_p$. By \cite[Theorem 3.33.(v)]{RSS_Bold1}, the evaluation maps for unions of $p$-groups define a natural transformation $\mathord{\mathrm{ev}} \colon \twistO n|_{\textup{$p$-groups}}\Rightarrow \Id_{\mathbb{AF}_p}|_{\textup{$p$-groups}}$.
We can thus extend $\mathord{\mathrm{ev}}$ by Proposition \ref{propExtendNaturalTransformation} to a natural transformation $\mathord{\mathrm{ev}}\colon \twistO n\Rightarrow \Id_{\mathbb{AF}_p}$ on all of $\mathbb{AF}_p$. Let us determine a formula for the evaluation map $\mathord{\mathrm{ev}}_\mathcal{F}\colon (\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}\to \mathcal{F}$ in this extension.
By Definition \ref{defInducedNaturalTransformation}, the biset matrix $\mathord{\mathrm{ev}}_\mathcal{F}$ is given as the composite
\[
\mathord{\mathrm{ev}}_\mathcal{F} = \twistO n((\omega_\mathcal{F})_\mathcal{F}^S) \mathbin{\odot} \mathord{\mathrm{ev}}_S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F}
\]
inside $\mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}, \mathcal{F})$. In order to calculate the matrix entries of $\mathord{\mathrm{ev}}_\mathcal{F}$, recall that $\twistO n((\omega_\mathcal{F})_\mathcal{F}^S) = \twistT \mathcal{F}$ by Remark \ref{remarkFusionTwistedLoopFunctor}, and, by Corollary \ref{corEquivalentTelescopes}, we have $(\twistT \mathcal{F})_{\underline a,\underline b}=\twistO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}$, whenever $\underline a$ represents a conjugacy class of tuples in $\mathcal{F}$ and $\underline b$ represents a class in $S$.
Calculating the entries of $\mathord{\mathrm{ev}}_\mathcal{F}\in \mathbb{AF}_p((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}, \mathcal{F})$, we then get
\begin{align*}
(\mathord{\mathrm{ev}}_\mathcal{F})_{\underline a} &= \sum_{[\underline b]\in \cntuples n S} \twistO n((\omega_\mathcal{F})_\mathcal{F}^S)_{\underline a,\underline b} \mathbin{\odot} [(\mathbb{Z}/p^e)^n\times C_S(\underline b),\mathord{\mathrm{ev}}_{\underline b}]_{(\mathbb{Z}/p^e)^n\times C_S(\underline b)}^S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F}
\\ &= \sum_{[\underline b]\in \cntuples n S} \twistO n((\omega_\mathcal{F})_S^S)_{\underline a,\underline b} \mathbin{\odot} [(\mathbb{Z}/p^e)^n\times C_S(\underline b),\mathord{\mathrm{ev}}_{\underline b}]_{(\mathbb{Z}/p^e)^n\times C_S(\underline b)}^S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F}
\\ &= (\twistO n((\omega_\mathcal{F})_S^S) \mathbin{\odot} \mathord{\mathrm{ev}}_S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F})_{\underline a}
\\ &= (\mathord{\mathrm{ev}}_S \mathbin{\odot} (\omega_\mathcal{F})_S^S\mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F})_{\underline a}
\\ &= (\mathord{\mathrm{ev}}_S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F})_{\underline a}
\\ &= [(\mathbb{Z}/p^e)^n\times C_S(\underline a),\mathord{\mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^S \mathbin{\odot} (\omega_\mathcal{F})_S^\mathcal{F}
\\ &= [(\mathbb{Z}/p^e)^n\times C_S(\underline a),\mathord{\mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^\mathcal{F}
\end{align*}
as an element of $\mathbb{AF}_p((\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a), \mathcal{F})$, where the fourth equality uses the naturality of $\mathord{\mathrm{ev}}$ for unions of $p$-groups to move $\twistO n((\omega_\mathcal{F})_S^S)$ past $\mathord{\mathrm{ev}}_S$.
By Lemma \ref{lemmaEvalFusionPreserving}, the homomorphism $\mathord{\mathrm{ev}}_{\underline a}$ is fusion preserving from $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)$ to $\mathcal{F}$.
Lemma \ref{lemmaFusionPreserving} now implies that we can also write
\[(\mathord{\mathrm{ev}}_\mathcal{F})_{\underline a} = [(\mathbb{Z}/p^e)^n\times C_S(\underline a),\mathord{\mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)}^\mathcal{F}.\]
Thus, the evaluation map $\mathord{\mathrm{ev}}_\mathcal{F}$ simply applies the fusion preserving map $\mathord{\mathrm{ev}}_{\underline a}\colon (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)\to \mathcal{F}$ to each component of $(\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}$. Let us record this fact for future reference:
\begin{lemma}\label{lemmaFusionEvaluation}
Let $\mathcal{F}$ be a saturated fusion system (or fusoid) over a finite $p$-group $S$ (or formal union of such). The evaluation map $\mathord{\mathrm{ev}}_\mathcal{F}\colon (\mathbb{Z}/p^e)^n\times\freeO n\mathcal{F} \to \mathcal{F}$ takes each component $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)$ to $\mathcal{F}$ via the fusion preserving map $\mathord{\mathrm{ev}}_{\underline a}$. As such, the entries of $\mathord{\mathrm{ev}}_\mathcal{F}$ are given by
\[(\mathord{\mathrm{ev}}_\mathcal{F})_{\underline a} = [(\mathbb{Z}/p^e)^n\times C_S(\underline a),\mathord{\mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)}^\mathcal{F}.\]
Furthermore, the maps $\mathord{\mathrm{ev}}_\mathcal{F}$ assemble into a natural transformation $\mathord{\mathrm{ev}}\colon \twistO n\Rightarrow \Id_{\mathbb{AF}_p}$ between endofunctors on $\mathbb{AF}_p$.
\end{lemma}
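Although Lemma \ref{lemmaFusionEvaluation} packages $\mathord{\mathrm{ev}}_{\underline a}$ abstractly, it may help to record what the map should do on points. Iterating the partial evaluation formula of Lemma \ref{lemmaFusionPartialEvaluation} below suggests the informal description
\[\mathord{\mathrm{ev}}_{\underline a}(\underline t,z) = a_1^{t_1}\dotsm a_n^{t_n}\cdot z, \qquad \underline t=(t_1,\dotsc,t_n)\in (\mathbb{Z}/p^e)^n,\ z\in C_S(\underline a),\]
which makes sense because the coordinates of $\underline a$ commute with each other and with every $z\in C_S(\underline a)$; we state this only as a gloss and make no use of it below.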
We could have easily defined $\mathord{\mathrm{ev}}_\mathcal{F}$ directly for fusion systems in terms of the fusion preserving maps $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a)\to \mathcal{F}$. However, by constructing $\mathord{\mathrm{ev}}_\mathcal{F}$ via Proposition \ref{propExtendNaturalTransformation}, we know that the maps $\mathord{\mathrm{ev}}_\mathcal{F}$ assemble into a natural transformation, and this will greatly simplify the proof of Theorem \ref{thmFusionMain}.
The next auxiliary map we shall work on is the partial evaluation map $\ensuremath{\partial \mathrm{ev}}_\mathcal{F}\colon \mathbb{Z}/p^e\times \freeO {n+1}\mathcal{F} \to \freeO n \mathcal{F}$. Note that for $p$-groups the partial evaluation map $\ensuremath{\partial \mathrm{ev}}_S\colon \mathbb{Z}/p^e\times \freeO {n+1}S \to \freeO n S$ of \cite[Theorem 3.33.(vi)]{RSS_Bold1}, given by
\[\ensuremath{\partial \mathrm{ev}}_S(t,z) = (a_{n+1})^t\cdot z\in C_S(a_1,\dotsc,a_n), \quad \text{for } t\in \mathbb{Z}/p^e \text{ and } z\in \coprod_{[\underline a]\in \cntuples{n+1}S} C_S(\underline a),\]
coincides with the 1-fold evaluation map for $\freeO n S$, i.e. we have $\ensuremath{\partial \mathrm{ev}}_S = \mathord{\mathrm{ev}}_{\freeO n S}\colon \mathbb{Z}/p^e\times \freeO 1(\freeO n S) \to \freeO n S$. As such, $\ensuremath{\partial \mathrm{ev}}_S$ provides a natural transformation $\ensuremath{\partial \mathrm{ev}}\colon \twistO 1(\freeO n(-))\Rightarrow \freeO n(-)$ on $p$-groups. By Lemma \ref{lemmaFusionEvaluation}, the maps $\ensuremath{\partial \mathrm{ev}}_\mathcal{F} = \mathord{\mathrm{ev}}_{\freeO n \mathcal{F}} \colon \mathbb{Z}/p^e\times \freeO {n+1}\mathcal{F} \to \freeO n \mathcal{F}$ provide a natural transformation $\ensuremath{\partial \mathrm{ev}}\colon \twistO 1(\freeO n(-))\Rightarrow \freeO n(-)$ on all of $\mathbb{AF}_p$. The entries of the biset matrix, $(\ensuremath{\partial \mathrm{ev}}_\mathcal{F})_{\underline a,\underline b}\in \mathbb{AF}_p(\mathbb{Z}/p^e \times C_\mathcal{F}(\underline a),C_\mathcal{F}(\underline b))$ have the following form for each representative $(n+1)$-tuple $\underline a = (a_1,\dotsc, a_n,a_{n+1})$:
\[
(\ensuremath{\partial \mathrm{ev}}_\mathcal{F})_{\underline a,\underline b} =\begin{cases}
[\mathbb{Z}/p^e \times C_S(\underline a), \ensuremath{\partial \mathrm{ev}}_{\underline a}]_{\mathbb{Z}/p^e \times C_\mathcal{F}(\underline a)}^{C_\mathcal{F}(a_1,\dotsc,a_n)} &\text{if }\underline b=(a_1,\dotsc,a_n),
\\ 0 &\text{otherwise.}
\end{cases}
\]
Note that $(a_1,\dotsc,a_n)$ is itself a chosen representative $n$-tuple for $\mathcal{F}$ by Convention \ref{conventionTupleReps}.
According to \cite[Theorem 3.33.(vi)]{RSS_Bold1}, the partial evaluation maps give a natural transformation $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}\colon \twistO {n+1} \Rightarrow \twistO n$ for all unions of $p$-groups. By Proposition \ref{propExtendNaturalTransformation}, we can extend this natural transformation to all saturated fusoids to get $\eta\colon \twistO {n+1} \Rightarrow \twistO n$ on $\mathbb{AF}_p$ with $\eta_\mathcal{F}$ given by
\[\eta_\mathcal{F} = \twistO{n+1}((\omega_\mathcal{F})_\mathcal{F}^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_S) \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F}).\]
We claim that $\eta_\mathcal{F}$ simply recovers $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_\mathcal{F}$ for fusoids as well. The calculation is completely analogous to the calculation preceding Lemma \ref{lemmaFusionEvaluation}.
We note that $\twistO{n+1}((\omega_\mathcal{F})_\mathcal{F}^S)_{\underline a,\underline b} = \twistO{n+1}((\omega_\mathcal{F})_S^S)_{\underline a,\underline b}$ for all $(n+1)$-tuple representatives $\underline a$ for $\mathcal{F}$ and $\underline b$ for $S$. Additionally, Remark \ref{remarkFusionTwistedLoopFunctor} implies that $\twistO n((\omega_\mathcal{F})_S^\mathcal{F})= (\mathbb{Z}/p^e)^n\times I_\mathcal{F}$ has entries
\[
\twistO n((\omega_\mathcal{F})_S^\mathcal{F})_{\underline a,\underline b} =
\begin{cases}
(\mathbb{Z}/p^e)^n\times \zeta_{\underline a}^{\underline b} &\text{if $\underline a$ is $\mathcal{F}$-conj. to $\underline b$,}
\\ 0 &\text{otherwise.}
\end{cases}
\]
The calculation of $\eta_\mathcal{F}$ then proceeds accordingly:
\begin{align*}
&(\eta_\mathcal{F})_{\underline a,\underline b}
\\ ={}& \Bigl(\twistO{n+1}((\omega_\mathcal{F})_\mathcal{F}^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_S) \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F})\Bigr)_{\underline a,\underline b}
\\ ={}& \Bigl(\twistO{n+1}((\omega_\mathcal{F})_S^S) \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_S) \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F})\Bigr)_{\underline a,\underline b}
\\ ={}& \Bigl(((\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_S) \mathbin{\odot} \twistO{n}((\omega_\mathcal{F})_S^S) \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F})\Bigr)_{\underline a,\underline b}
\\ ={}& \Bigl(((\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_S) \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F})\Bigr)_{\underline a,\underline b}
\\ ={}& \begin{cases}
\!\begin{aligned}
[(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a), (\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_{\underline a}&]_{(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n \times C_S(a_1,\dotsc,a_n)}
\\*& \mathbin{\odot} ((\mathbb{Z}/p^e)^n\times \zeta_{(a_1,\dotsc,a_n)}^{\underline b})
\end{aligned}&\text{if $(a_1,\dotsc,a_n)\sim_\mathcal{F} \underline b$},
\\ 0 &\text{otherwise.}
\end{cases}
\end{align*}
Here the third equality uses the naturality of $\ensuremath{\partial \mathrm{ev}}$ for unions of $p$-groups. If $(a_1,\dotsc,a_n)$ is $\mathcal{F}$-conjugate to $\underline b$, then the two $n$-tuples are equal as they both represent the same $\mathcal{F}$-conjugacy class, hence $\zeta_{(a_1,\dotsc,a_n)}^{\underline b}$ is the inclusion $C_S(a_1,\dotsc,a_n) \to C_\mathcal{F}(a_1,\dotsc,a_n)$.
In total we have
\[
(\eta_\mathcal{F})_{\underline a,\underline b} = \begin{cases}
[(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a), (\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n \times C_\mathcal{F}(a_1,\dotsc,a_n)} &\text{if }\underline b=(a_1,\dotsc,a_n),
\\ 0 &\text{otherwise.}
\end{cases}
\]
By Lemma \ref{lemmaEvalFusionPreserving}, $\ensuremath{\partial \mathrm{ev}}_{\underline a}\colon \mathbb{Z}/p^e \times C_\mathcal{F}(\underline a) \to C_\mathcal{F}(a_1,\dotsc,a_n)$ is fusion preserving, so
\begin{multline*}
[(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a), (\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n \times C_\mathcal{F}(a_1,\dotsc,a_n)}
\\= [(\mathbb{Z}/p^e)^{n+1} \times C_S(\underline a), (\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_{\underline a}]_{(\mathbb{Z}/p^e)^{n+1} \times C_\mathcal{F}(\underline a)}^{(\mathbb{Z}/p^e)^n \times C_\mathcal{F}(a_1,\dotsc,a_n)}
\end{multline*}
and $\eta_\mathcal{F}$ recovers $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_\mathcal{F}$ as claimed.
Again we record this as a lemma:
\begin{lemma}\label{lemmaFusionPartialEvaluation}
Let $\mathcal{F}$ be a saturated fusion system (or fusoid) over a finite $p$-group $S$ (or formal union of such). The partial evaluation map $\ensuremath{\partial \mathrm{ev}}_\mathcal{F}\colon \mathbb{Z}/p^e\times\freeO{n+1}\mathcal{F} \to \freeO n\mathcal{F}$ takes each component $\mathbb{Z}/p^e\times C_\mathcal{F}(\underline a)$ to $C_\mathcal{F}(a_1,\dotsc,a_n)$ via the fusion preserving map $\ensuremath{\partial \mathrm{ev}}_{\underline a}$ given by
\[
\ensuremath{\partial \mathrm{ev}}_{\underline a}(t,z) = (a_{n+1})^t\cdot z.
\]
The maps $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}_\mathcal{F}$ define a natural transformation $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}\colon \twistO{n+1}\Rightarrow \twistO n$ between endofunctors on $\mathbb{AF}_p$.
\end{lemma}
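For a concrete illustration of Lemma \ref{lemmaFusionPartialEvaluation} (a minimal example, using the description of the components of $\freeO{n+1}S$ from the coproduct above), let $\mathcal{F}$ be the trivial fusion system on an abelian $p$-group $S$. Then all tuples commute, every conjugacy class of tuples is a singleton, and $C_S(\underline a)=S$, so on the component indexed by $\underline a=(a_1,\dotsc,a_{n+1})$ the partial evaluation map is simply
\[\mathbb{Z}/p^e\times S\to S, \qquad (t,z)\mapsto (a_{n+1})^t\cdot z,\]
landing in the component of $\freeO n S$ indexed by $(a_1,\dotsc,a_n)$.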
Next we establish the action of $\Sigma_n$ on $\freeO n \mathcal{F}$ in a similar fashion.
Given a permutation $\sigma\in \Sigma_n$, the action of $\sigma$ on $\freeO n S$ permutes the components (as described in \cite[Theorem 3.33.(iii)]{RSS_Bold1}) by sending $C_S(\underline a)=C_S(\sigma(\underline a))$ to $C_S(\tilde{\sigma(\underline a)})$ via the isomorphism $\zeta_{\sigma(\underline a)}^{\tilde{\sigma(\underline a)}}$, where $\tilde{\sigma(\underline a)}$ is the chosen representative for the $S$-conjugacy class of $\sigma(\underline a)$.
The action of $\sigma$ defines a natural transformation $\sigma\colon \freeO n\Rightarrow \freeO n$ for all unions of $p$-groups (note that \cite[Proposition 3.15]{RSS_Bold1} is invariant with respect to permuting the $n$-tuples by $\sigma$). Proposition \ref{propExtendNaturalTransformation} then provides an extension $\sigma\colon \freeO n\Rightarrow \freeO n$ to all of $\mathbb{AF}_p$.
As for the (partial) evaluation maps, we calculate that the induced map $\sigma_\mathcal{F} = \freeO n((\omega_\mathcal{F})_\mathcal{F}^S)\mathbin{\odot} \sigma_S \mathbin{\odot} \freeO n((\omega_\mathcal{F})_S^\mathcal{F})$ is a matrix in which the only non-zero entries are
\[(\sigma_\mathcal{F})_{\underline a,\tilde{\sigma(\underline a)}} = \zeta_{\sigma(\underline a)}^{\tilde{\sigma(\underline a)}} \in \mathbb{AF}_p(C_\mathcal{F}(\underline a),C_\mathcal{F}(\tilde{\sigma(\underline a)})),\]
where $\tilde{\sigma(\underline a)}$ is the chosen representative for the $\mathcal{F}$-conjugacy class of $\sigma(\underline a)$.
Similarly, by \cite[Theorem 3.33.(iii)]{RSS_Bold1} the diagonal action of $\sigma$ on $(\mathbb{Z}/p^e)^n\times \freeO n S$ provides a natural transformation $\sigma\colon \twistO n\Rightarrow \twistO n$ for unions of $p$-groups.
Again we extend this natural transformation to a natural transformation $\eta\colon\twistO n\Rightarrow \twistO n$ on all of $\mathbb{AF}_p$, and we proceed to calculate that $\eta_\mathcal{F} = \twistO n((\omega_\mathcal{F})_\mathcal{F}^S)\mathbin{\odot} \sigma_S \mathbin{\odot} \twistO n((\omega_\mathcal{F})_S^\mathcal{F})$ is simply the diagonal action of $\sigma$ on $(\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}$. We record this fact:
\begin{lemma}\label{lemmaFusionEquivariant}
Let $\mathcal{F}$ be a saturated fusion system (or fusoid) over a finite $p$-group $S$ (or formal union of such).
The symmetric group $\Sigma_n$ acts on $\freeO n\mathcal{F}$ by permuting the coordinates of the $n$-tuples. If $\sigma\in \Sigma_n$, then the action of $\sigma$ on $\freeO n\mathcal{F}$ sends each component $C_\mathcal{F}(\underline a)=C_\mathcal{F}(\sigma(\underline a))$ to $C_\mathcal{F}(\tilde{\sigma(\underline a)})$ via the isomorphism $\zeta_{\sigma(\underline a)}^{\tilde{\sigma(\underline a)}}$, where $\tilde{\sigma(\underline a)}$ is the chosen representative for the $\mathcal{F}$-conjugacy class of $\sigma(\underline a)$.
The diagonal action of $\sigma$ on $(\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F}$ gives a natural transformation $\sigma\colon \twistO n\Rightarrow \twistO n$ between endofunctors on $\mathbb{AF}_p$.
\end{lemma}
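To make the action concrete in the smallest non-trivial case (a direct instantiation of Lemma \ref{lemmaFusionEquivariant}), take $n=2$ and $\sigma=(1\,2)$. Then $\sigma$ sends the component $C_\mathcal{F}(a_1,a_2)=C_\mathcal{F}(a_2,a_1)$ to $C_\mathcal{F}(\tilde{(a_2,a_1)})$ via the isomorphism $\zeta_{(a_2,a_1)}^{\tilde{(a_2,a_1)}}$, where $\tilde{(a_2,a_1)}$ is the chosen representative for the $\mathcal{F}$-conjugacy class of $(a_2,a_1)$, and acts on the factor $(\mathbb{Z}/p^e)^2$ of $(\mathbb{Z}/p^e)^2\times\freeO 2\mathcal{F}$ by swapping the two coordinates.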
The final auxiliary map needed for Theorem \ref{thmFusionMain} is the embedding of $(\mathbb{Z}/p^e)^{n+m}\times \freeO{n+m}\mathcal{F}$ into $(\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F})$. By \cite[Theorem 3.33.(vii)]{RSS_Bold1}, we have a natural transformation $\iota\colon \twistO{n+m}(-)\Rightarrow \twistO m(\twistO n(-))$ for all unions of $p$-groups, where $\iota_S\colon (\mathbb{Z}/p^e)^{n+m} \times \freeO{n+m}S \to (\mathbb{Z}/p^e)^m \times \freeO m((\mathbb{Z}/p^e)^n \times \freeO n S)$ takes each component
$(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y)$ to the component $(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0 \times \underline y)$ via the map
\begin{multline*}
\bigl((\underline s,\underline r), z\bigr)\in (\mathbb{Z}/p^e)^{n+m}\times C_S(\underline x,\underline y) \mapsto \bigl(\underline r, (\underline s,z) \bigr) \in (\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y),
\end{multline*}
for $\underline s\in (\mathbb{Z}/p^e)^n$, $\underline r\in (\mathbb{Z}/p^e)^m$, $\underline x\in \cntuples n S$, $\underline y\in \cntuples m {C_S(\underline x)}$, and $z\in C_S(\underline x,\underline y)$.
Suppose $(\underline x,\underline y)$ is a chosen representative $(n+m)$-tuple in $S$. Then by Convention \ref{conventionTupleReps} the commuting $n$-tuple $\underline x$ is a chosen representative in $S$ as well, and the commuting $m$-tuple $\underline y$ is a chosen representative in $C_S(\underline x)\subseteq \freeO n S$ -- note that $\underline y$ is \emph{not} necessarily a chosen representative in $S$.
The entries of the biset matrix $\iota_S\in \mathbb{AF}_p((\mathbb{Z}/p^e)^{n+m} \times \freeO{n+m}S, (\mathbb{Z}/p^e)^m \times \freeO m((\mathbb{Z}/p^e)^n \times \freeO n S))$ are given by
\begin{align*}
& (\iota_S)_{(\underline x,\underline y)\in \cntuples{n+m} S, (\underline t\times \underline w)\in \cntuples m {((\mathbb{Z}/p^e)^n\times \freeO n S)}}
\\ = {}& \begin{cases}
[(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))] & \substack{\text{if $(\underline t\times \underline w) = (\underline 0\times \underline y)$}
\\ \text{in the component $(\mathbb{Z}/p^e)^n\times C_S(\underline x)$,}}
\\ 0 & \text{otherwise.}
\end{cases}
\end{align*}
We extend $\iota$ to a natural transformation $\twistO{n+m}(-)\Rightarrow \twistO m(\twistO n(-))$ on all of $\mathbb{AF}_p$ by Proposition \ref{propExtendNaturalTransformation}, and as such we have
\[\iota_\mathcal{F} = \twistO{n+m}((\omega_\mathcal{F})_\mathcal{F}^S) \mathbin{\odot} \iota_S\mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F})).\]
As for the previous auxiliary maps, we first note that for any representative $(n+m)$-tuple $(\underline x,\underline y)$ in $\mathcal{F}$ and $m$-tuple $(\underline t\times \underline w)$ in $(\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}$ we have
\begin{align*}
(\iota_\mathcal{F})_{(\underline x,\underline y),(\underline t\times \underline w)} &= (\twistO{n+m}((\omega_\mathcal{F})_\mathcal{F}^S) \mathbin{\odot} \iota_S\mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F})))_{(\underline x,\underline y),(\underline t\times \underline w)}
\\ &= (\iota_S\mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F})))_{(\underline x,\underline y),(\underline t\times \underline w)}
\\ &= \sum_{[\underline t'\times \underline w']\in \cntuples m {((\mathbb{Z}/p^e)^n\times \freeO n S)}} (\iota_S)_{(\underline x,\underline y),(\underline t'\times \underline w')} \mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F}))_{(\underline t'\times \underline w'),(\underline t\times \underline w)}.
\end{align*}
Now the matrix entries of $\iota_S$ are zero except when $\underline t'\times \underline w' = \underline 0\times \underline y$ as $m$-tuples in the component $(\mathbb{Z}/p^e)^n\times C_S(\underline x)$ of $(\mathbb{Z}/p^e)^n\times \freeO n S$. Hence the above equation becomes
\begin{align*}
& (\iota_\mathcal{F})_{(\underline x,\underline y),(\underline t\times \underline w)}
\\={}& (\iota_S)_{(\underline x,\underline y),(\underline 0\times \underline y)} \mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F}))_{(\underline 0\times \underline y),(\underline t\times \underline w)}
\\ ={}& [(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))]_{(\mathbb{Z}/p^e)^{n+m}\times C_S(\underline x,\underline y)}^{(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y)}
\\* &\mathbin{\odot} \twistO m(\twistO n((\omega_\mathcal{F})_S^\mathcal{F}))_{(\underline 0\times \underline y),(\underline t\times \underline w)}.
\end{align*}
If we apply Proposition \ref{propFusionTwistedLoopOrbits} twice to the basis element $(\omega_\mathcal{F})_S^\mathcal{F} = [S,\id]_S^\mathcal{F} \in \mathbb{AF}_p(S,\mathcal{F})$, we see that
\begin{multline*}
\twistO m(\twistO n([S,\id]_S^\mathcal{F}))_{(\underline 0\times \underline y),(\underline t\times \underline w)}
\\ = \begin{cases}
(\mathbb{Z}/p^e)^m\times \zeta_{(\underline 0\times \underline y)}^{(\underline t\times \underline w)} & \text{if $(\underline t\times \underline w)$ represents the $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)$-conjugacy class of $(\underline 0\times \underline y)$,}
\\ 0 &\text{otherwise.}
\end{cases}
\end{multline*}
Because $(\underline x,\underline y)$ was assumed to be a representative $(n+m)$-tuple in $\mathcal{F}$, the $m$-tuple $\underline y$ is a representative in $C_\mathcal{F}(\underline x)$. At the same time, if the $m$-tuple $\underline t$ in $(\mathbb{Z}/p^e)^n$ is conjugate to $\underline 0$, then $\underline t=\underline 0$. Consequently $(\underline 0\times \underline y)$ is the chosen representative for its $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)$-conjugacy class.
We conclude that $(\iota_\mathcal{F})_{(\underline x,\underline y),(\underline t\times \underline w)}$ is zero unless $\underline t\times \underline w=\underline 0\times \underline y$ in the component $(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)$ of $(\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}$. Furthermore,
\begin{align*}
& (\iota_\mathcal{F})_{(\underline x,\underline y),(\underline 0\times \underline y)}
\\ ={}& [(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))]_{(\mathbb{Z}/p^e)^{n+m}\times C_S(\underline x,\underline y)}^{(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y)}
\\* &\mathbin{\odot} (\mathbb{Z}/p^e)^m\times \zeta_{(\underline 0\times \underline y)}^{(\underline 0\times \underline y)}
\\ ={}& [(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))]_{(\mathbb{Z}/p^e)^{n+m}\times C_S(\underline x,\underline y)}^{(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y)}
\\* &\mathbin{\odot} (\mathbb{Z}/p^e)^m\times [C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y),\id]_{C_{(\mathbb{Z}/p^e)^n\times C_S(\underline x)}(\underline 0\times \underline y)}^{C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)}
\\ ={}& [(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))]_{(\mathbb{Z}/p^e)^{n+m}\times C_S(\underline x,\underline y)}^{(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)}.
\end{align*}
As in Lemma \ref{lemmaEvalFusionPreserving}, it follows that the map $((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))$ is fusion preserving from $(\mathbb{Z}/p^e)^{n+m}\times C_\mathcal{F}(\underline x,\underline y)$ to $(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)$ and additionally we can write
\[
(\iota_\mathcal{F})_{(\underline x,\underline y),(\underline 0\times \underline y)} = [(\mathbb{Z}/p^e)^{n+m} \times C_S(\underline x,\underline y), ((\underline s,\underline r),z)\mapsto (\underline r, (\underline s,z))]_{(\mathbb{Z}/p^e)^{n+m}\times C_\mathcal{F}(\underline x,\underline y)}^{(\mathbb{Z}/p^e)^m\times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)}
\]
if we prefer. We record our final preliminary result about the auxiliary maps of the main theorem.
\begin{lemma}\label{lemmaFusionLnIteration}
Let $\mathcal{F}$ be a saturated fusion system (or fusoid) over a finite $p$-group $S$ (or formal union of such). The embedding $\iota_\mathcal{F}\colon (\mathbb{Z}/p^e)^{n+m}\times\freeO{n+m}\mathcal{F} \to (\mathbb{Z}/p^e)^m \times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n\mathcal{F})$ takes each component $(\mathbb{Z}/p^e)^{n+m}\times C_\mathcal{F}(\underline x,\underline y)$ to the component $(\mathbb{Z}/p^e)^m \times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)$ via the fusion preserving map given by
\[
((\underline s,\underline r),z) \mapsto (\underline r,(\underline s,z)).
\]
The maps $\iota_\mathcal{F}$ define a natural transformation $\twistO{n+m}(-)\Rightarrow \twistO m(\twistO n(-))$ between endofunctors on $\mathbb{AF}_p$.
\end{lemma}
Now we are finally ready to prove that our extension of $\twistO n$ to fusion systems satisfies the same properties as for finite groups and finite-sheeted covering maps. Recalling \cite[Convention 3.21]{RSS_Bold1}, we state and prove the following variant of \cite[Theorems 2.13 and 3.33]{RSS_Bold1}:
\begin{theorem}\label{thmFusionMain}
The endofunctors $\twistO n \colon \mathbb{AF}_p\to \mathbb{AF}_p$ for $n\geq 0$ of Proposition \ref{propTwistedFusionLoop} have the following properties:
\begin{enumerate}
\renewcommand{\theenumi}{$(\roman{enumi})$}\renewcommand{\labelenumi}{\theenumi}
\item[$(\emptyset )$] Let $L^{\dagger,\mathbb{AG}}_n\colon \mathbb{AG} \to \mathbb{AG}$ be the functor constructed in \cite[Section 3]{RSS_Bold1}. When restricted to the full subcategories of $\mathbb{AG}$ and $\mathbb{AF}_p$ spanned by formal unions of finite $p$-groups, the functor $\twistO n\colon \mathbb{AF}_p \to \mathbb{AF}_p$ is the $\mathbb{Z}_p$-linearization of $L^{\dagger,\mathbb{AG}}_n$.
\item\label{itemFusionLnZero} $\twistO 0$ is the identity functor on $\mathbb{AF}_p$.
\item\label{itemFusionLnObjects} On objects, $\twistO n$ takes a saturated fusoid $\mathcal{F}$ to the saturated fusoid
\[\twistO n(\mathcal{F}) = (\mathbb{Z}/p^e)^n\times \freeO n (\mathcal{F})=\coprod_{\underline a\in \cntuples n\mathcal{F}} (\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline a).\]
\item\label{itemFusionEquivariant} The group $\Sigma_n$ acts on $\freeO n\mathcal{F}=\coprod_{\underline a\in \cntuples n\mathcal{F}} C_\mathcal{F}(\underline a)$ by permuting the coordinates of the $n$-tuples $\underline a$. Explicitly, if $\sigma\in \Sigma_n$ and if $\widetilde{\sigma(\underline a)}$ is the representative for the $\mathcal{F}$-conjugacy class of $\sigma(\underline a)$, then $\sigma\colon \freeO n\mathcal{F}\to \freeO n\mathcal{F}$ maps $C_\mathcal{F}(\underline a)=C_\mathcal{F}(\sigma(\underline a))$ to $C_\mathcal{F}(\widetilde{\sigma(\underline a)})$ via the isomorphism $\zeta_{\sigma(\underline a)}^{\widetilde{\sigma(\underline a)}}\in \mathbb{AF}_p(C_\mathcal{F}(\underline a),C_\mathcal{F}(\widetilde{\sigma(\underline a)}))$.
The functor $\twistO n$ is equivariant with respect to the $\Sigma_n$-action on $(\mathbb{Z}/p^e)^n\times \freeO n(-)$ that permutes the coordinates of both $(\mathbb{Z}/p^e)^n$ and $\freeO n(-)$, i.e. for every $\sigma\in\Sigma_n$ the diagonal action of $\sigma$ on $(\mathbb{Z}/p^e)^n\times \freeO n(-)$ induces a natural isomorphism $\sigma\colon \twistO n\overset\cong\Rightarrow \twistO n$.
\item\label{itemFusionLnForwardMaps} Let $\mathcal{E}$ and $\mathcal{F}$ be saturated fusion systems on $R$ and $S$ respectively. For forward maps, i.e. transitive bisets $[R,\varphi]_\mathcal{E}^\mathcal{F}\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ with $\varphi\colon R\to S$ fusion preserving, the functor $\twistO n$ coincides with $(\mathbb{Z}/p^e)^n \times \freeO n(-)$ so that
\[\twistO n([R,\varphi]_\mathcal{E}^\mathcal{F})=(\mathbb{Z}/p^e)^n\times \freeO n([R,\varphi]_\mathcal{E}^\mathcal{F}).\]
In addition, $\freeO n([R,\varphi]_\mathcal{E}^\mathcal{F})$ is the biset matrix that takes a component $C_\mathcal{E}(\underline a)$ of $\freeO n\mathcal{E}$ to the component $C_\mathcal{F}(\underline b)$ of $\freeO n\mathcal{F}$ by the biset
\[
[C_R(\underline a),\varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))}\mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b}\in \mathbb{AF}_p( C_\mathcal{E}(\underline a), C_\mathcal{F}(\underline b)),
\]
where $\underline b$ represents the $\mathcal{F}$-conjugacy class of $\varphi(\underline a)$.
\item\label{itemFusionEvalSquare} For all $n \geq 0$, the functor $\twistO n$ commutes with evaluation maps, i.e. the evaluation maps $\mathord{\mathrm{ev}}_\mathcal{F}\colon (\mathbb{Z}/p^e)^n\times \freeO n(\mathcal{F})\to \mathcal{F}$ form a natural transformation $\mathord{\mathrm{ev}}\colon \twistO n \Rightarrow \Id_{\mathbb{AF}_p}$.
\item\label{itemFusionLnPartialEvaluation} For all $n \geq 0$, the partial evaluation maps $\ensuremath{\partial \mathrm{ev}}_\mathcal{F}\colon \mathbb{Z}/p^e\times \freeO {n+1}(\mathcal{F}) \to \freeO n (\mathcal{F})$ given as fusion preserving maps $\ensuremath{\partial \mathrm{ev}}_{\underline a}\colon \mathbb{Z}/p^e \times C_\mathcal{F}(\underline a) \to C_\mathcal{F}(a_1,\dotsc, a_n)$ in terms of the formula
\[
\ensuremath{\partial \mathrm{ev}}_{\underline a}(t,z)= (a_{n+1})^t\cdot z\in C_S(a_1,\dotsc,a_n), \quad \text{for } t\in \mathbb{Z}/p^e, z\in C_S(\underline a),
\] form natural transformations $(\mathbb{Z}/p^e)^n\times \ensuremath{\partial \mathrm{ev}}\colon \twistO {n+1} \Rightarrow \twistO n$.
\item\label{itemFusionIterateLn} For all $n,m\geq 0$, and any saturated fusoid $\mathcal{F}$ on $S$, the formal union $(\mathbb{Z}/p^e)^{n+m}\times \freeO {n+m} \mathcal{F}$ embeds into $(\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$ as the components corresponding to the commuting $m$-tuples in $(\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}$ that are zero in the $(\mathbb{Z}/p^e)^n$-coordinate, i.e. the embedding takes each component $(\mathbb{Z}/p^e)^{n+m}\times C_\mathcal{F}(\underline x,\underline y)$ to the component $(\mathbb{Z}/p^e)^m \times C_{(\mathbb{Z}/p^e)^n\times C_\mathcal{F}(\underline x)}(\underline 0\times \underline y)$, for $\underline x\in \cntuples n \mathcal{F}$ and $\underline y\in \cntuples m \mathcal{F}$, via the fusion preserving map given by
\[
((\underline s,\underline r),z) \mapsto (\underline r,(\underline s,z)),
\]
for $\underline s\in (\mathbb{Z}/p^e)^n$, $\underline r\in (\mathbb{Z}/p^e)^m$, and $z\in C_S(\underline x,\underline y)$.
These embeddings $(\mathbb{Z}/p^e)^{n+m}\times \freeO {n+m} \mathcal{F}\to (\mathbb{Z}/p^e)^m\times \freeO m((\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F})$ then form a natural transformation $\twistO {n+m}(-)\Rightarrow \twistO m(\twistO n(-))$.
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{itemFusionLnZero},\ref{itemFusionLnObjects}: Both follow immediately from the definition of $\twistO n$ in Proposition \ref{propTwistedFusionLoop}. In particular for $n=0$ and $X\in \mathbb{AF}_p(\mathcal{E},\mathcal{F})$ there is only a single $0$-tuple $()$ in each component and
\[\twistO 0(X_\mathcal{E}^\mathcal{F}) = \twistO 0(X_R^S) = X\]
in Proposition \ref{propTwistedFusionLoop}.
\ref{itemFusionEvalSquare},\ref{itemFusionLnPartialEvaluation},\ref{itemFusionEquivariant},\ref{itemFusionIterateLn}: These are Lemmas \ref{lemmaFusionEvaluation}-\ref{lemmaFusionLnIteration} respectively.
\ref{itemFusionLnForwardMaps}: With $\varphi\colon R\to S$ a fusion preserving map from $\mathcal{E}$ to $\mathcal{F}$, it follows from Lemma \ref{lemmaFusionPreserving} that
\[
X := [R,\varphi]_\mathcal{E}^\mathcal{F} = [R,\varphi]_R^\mathcal{F}.
\]
Let $\underline a$ be a representative commuting $n$-tuple in $\mathcal{E}$, then restricting to $C_R(\underline a)$ we have
\[ X_{C_\mathcal{E}(\underline a)}^\mathcal{F} = X_{C_R(\underline a)}^\mathcal{F} = [C_R(\underline a), \varphi|_{C_R(\underline a)}]_{C_R(\underline a)}^\mathcal{F} = [C_R(\underline a), \varphi|_{C_R(\underline a)}]_{C_\mathcal{E}(\underline a)}^\mathcal{F},
\]
where we note that the restriction $\varphi|_{C_R(\underline a)}$ is fusion preserving from $C_\mathcal{E}(\underline a)$ to $\mathcal{F}$.
Now Proposition \ref{propFusionTwistedLoopOrbits} and Corollary \ref{corSimpleFusionLoopOrbits} together state that
\begin{align*}
&\twistO n(X)_{\underline a,\underline b}
\\ ={}& \left.\begin{cases} (\mathbb{Z}/p^e)^n\times ([C_R(\underline a), \varphi]_{C_\mathcal{E}(\underline a)}^{C_S(\varphi(\underline a))} \mathbin{\odot} \zeta_{\varphi(\underline a)}^{\underline b} )& \text{if $\underline b$ is $\mathcal{F}$-conjugate to $\varphi(\underline a)$} \\ 0 & \text{otherwise} \end{cases}\right\}
\\ ={}& (\mathbb{Z}/p^e)^n \times \freeO n(X)_{\underline a,\underline b}.
\end{align*}
This completes the proof of \ref{itemFusionLnForwardMaps} and the theorem.
\end{proof}
\section{$\twistO n$ commutes with $p$-completion for bisets of finite groups}\label{secFusionPCompletionCommutes}
In this final section we compare the functor $\twistO n\colon \mathbb{AF}_p \to \mathbb{AF}_p$ with the functor $\twistO {n,p}\colon \mathbb{AG}\to \mathbb{AG}$ for finite groups, where we restrict to centralizers of commuting $n$-tuples of $p$-power order elements (see \cite[Proposition 3.35]{RSS_Bold1}).
We shall see that these two functors are closely related via the $p$-completion functor $(-)^\wedge_p\colon \mathbb{AG}\to \mathbb{AF}_p$ described in \cite{RSS_p-completion}, such that we have
\[
(-)^\wedge_p \circ \twistO {n,p} = \twistO n \circ (-)^\wedge_p
\]
as functors $\mathbb{AG}\to \mathbb{AF}_p$.
Suppose $S$ and $T$ are Sylow $p$-subgroups of $G$ and $H$ respectively. Let $\mathcal{F}_G=\mathcal{F}_S(G)$ and $\mathcal{F}_H=\mathcal{F}_T(H)$ be the associated fusion systems at the prime $p$. The $p$-completed classifying spectrum $\hat\Sigma^{\infty}_{+} B\mathcal{F}_G$ is equivalent to the $p$-completion of $\Sigma^{\infty}_{+} BG$ via the composite
\[\hat\Sigma^{\infty}_{+} B\mathcal{F}_G \xrightarrow{\omega_{\mathcal{F}_G}} \hat\Sigma^{\infty}_{+} BS \xrightarrow{(B\incl_S^G)^\wedge_p} \hat\Sigma^{\infty}_{+} BG.\]
Here we first include the summand $\hat\Sigma^{\infty}_{+} B\mathcal{F}_G$ into $\hat\Sigma^{\infty}_{+} BS$, and then include $S$ into $G$. The fact that this map is an equivalence is essentially due to \cite{CartanEilenberg}*{XII.10.1} and \cite{BLO2}*{Proposition 5.5} (see \cite{RSS_p-completion}*{Proposition 3.3} for additional details).
Via the Segal conjectures for finite groups and fusion systems, we can interpret the $p$-completion functor $(-)^\wedge_p$ as a functor $(-)^\wedge_p\colon \mathbb{AG} \to \mathbb{AF}_p$, where we use the particular equivalence $\hat\Sigma^{\infty}_{+} BG \simeq \hat\Sigma^{\infty}_{+} B\mathcal{F}_G$ described above:
\begin{definition}
The functor $(-)^\wedge_p\colon \mathbb{AG} \to \mathbb{AF}_p$ is defined on morphisms by
\[(-)^\wedge_p\colon \mathbb{AG}(G,H) \to [\Sigma^{\infty}_{+} BG, \Sigma^{\infty}_{+} BH] \xrightarrow{(-)^\wedge_p} [\hat\Sigma^{\infty}_{+} B\mathcal{F}_G, \hat\Sigma^{\infty}_{+} B\mathcal{F}_H] \cong \mathbb{AF}_p(\mathcal{F}_G,\mathcal{F}_H).\]
The first map is the Segal map for finite groups, and the last isomorphism is the Segal conjecture for saturated fusion systems.
\end{definition}
If we restrict the $(H,H)$-biset $H$ to a $(T,T)$-biset $H_T^T$, the resulting biset is stable with respect to $\mathcal{F}_H$, hence $H_{\mathcal{F}_H}^{\mathcal{F}_H}\in \mathbb{AF}_p(\mathcal{F}_H,\mathcal{F}_H)$. The element $H_{\mathcal{F}_H}^{\mathcal{F}_H}$ is always invertible in $\mathbb{AF}_p(\mathcal{F}_H,\mathcal{F}_H)$ by \cite{RSS_p-completion}*{Lemma 3.6}, and using the inverse, we get the following algebraic formula for the $p$-completion functor $(-)^\wedge_p\colon \mathbb{AG}\to \mathbb{AF}_p$:
\begin{prop}[\cite{RSS_p-completion}*{Theorem 1.1}]\label{propPCompletion}
The $p$-completion functor $(-)^\wedge_p \colon \mathbb{AG} \to \mathbb{AF}_p$ satisfies the following formula for any virtual biset $X_G^H\in \mathbb{AG}(G,H)$:
\[(X_G^H)^\wedge_p = X_{\mathcal{F}_G}^{\mathcal{F}_H} \mathbin{\odot} (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1} \in \mathbb{AF}_p(\mathcal{F}_G,\mathcal{F}_H).\]
\end{prop}
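As a quick sanity check of this formula (our own observation, not part of the cited theorem), suppose that $H$ is itself a finite $p$-group. Then $T=H$, the restricted biset $H_T^T$ is the identity $(H,H)$-biset $[H,\id]_H^H$, and this biset is also the identity of $\mathbb{AF}_p(\mathcal{F}_H,\mathcal{F}_H)$, namely the characteristic idempotent $\omega_{\mathcal{F}_H}$. In this case the formula collapses to
\[
(X_G^H)^\wedge_p = X_{\mathcal{F}_G}^{\mathcal{F}_H},
\]
i.e. when the target is a $p$-group, $p$-completion is simply restriction to the associated fusion systems.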
The free loop space $\freeO n G$ has components corresponding to all $G$-conjugacy classes of commuting $n$-tuples in $G$, while $\freeO n \mathcal{F}_G$ only has components corresponding to $G$-conjugacy classes of commuting $n$-tuples in $S$, i.e. tuples of $p$-power order elements. Hence we cannot hope for some sort of equivalence between $\freeO n G$ and $\freeO n \mathcal{F}_G$ on the nose.
However, recall from \cite[Proposition 3.35]{RSS_Bold1} that we define $\freeO n_p G$ to consist of the components in $\freeO n G$ corresponding to commuting tuples of $p$-power order elements up to $G$-conjugation. In this case, we do have a correspondence between the components of $\freeO n_p G$ and the components of $\freeO n \mathcal{F}_G$.
\begin{convention}\label{conventionCommonReprs}
We use the same chosen representative $n$-tuples $\underline a$ in $\mathcal{F}_G$ to represent both the components $C_{\mathcal{F}_G}(\underline a)$ in $\freeO n\mathcal{F}_G$ and the components $C_G(\underline a)$ in $\freeO n_p G$.
\end{convention}
Each component $C_{\mathcal{F}_G}(\underline a)$ in $\freeO n \mathcal{F}_G$ is the fusion system induced by $C_G(\underline a)$ on the Sylow $p$-subgroup $C_S(\underline a)$, and $C_G(\underline a)$ is the corresponding component of $\freeO n_p(G)$. Consequently the $p$-completion functor $(-)^\wedge_p\colon \mathbb{AG}\to \mathbb{AF}_p$ takes the group $C_G(\underline a)$ to $C_{\mathcal{F}_G}(\underline a)$, and thus takes all of $\freeO n_p(G)$ to $\freeO n \mathcal{F}_G$.
On morphisms, according to Proposition \ref{propPCompletion}, $(-)^\wedge_p\colon \mathbb{AG}((\mathbb{Z}/p^e)^n \times\freeO n_p G, (\mathbb{Z}/p^e)^n\times \freeO n_p H) \to \mathbb{AF}_p((\mathbb{Z}/p^e)^n \times\freeO n \mathcal{F}_G, (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}_H)$ is given on a biset matrix $X\in \mathbb{AG}((\mathbb{Z}/p^e)^n \times\freeO n_p G, (\mathbb{Z}/p^e)^n\times \freeO n_p H)$ by
\[
((X)^\wedge_p)_{\underline a,\underline b} = (X_{\underline a,\underline b})_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_G}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)} \mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}
\]
for representative commuting $n$-tuples $\underline a$ in $\mathcal{F}_G$ and $\underline b$ in $\mathcal{F}_H$.
Now suppose we start with a virtual $(G,H)$-biset $X\in \mathbb{AG}(G,H)$. If we first apply $\twistO {n,p}$ and then apply $(-)^\wedge_p$, we get a matrix in $\mathbb{AF}_p((\mathbb{Z}/p^e)^n \times\freeO n \mathcal{F}_G, (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}_H)$ consisting of virtual bisets
\begin{equation}\label{eqTwistThenComplete}
((\twistO {n,p} (X))^\wedge_p)_{\underline a,\underline b} = \Bigl(\twistO {n,p}(X)_{\underline a,\underline b}\Bigr)_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_G}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)} \mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}.
\end{equation}
If, on the other hand, we first apply $(-)^\wedge_p$ and then apply $\twistO n\colon \mathbb{AF}_p\to \mathbb{AF}_p$, we can use the fact that $\twistO n$ is a functor to get a matrix in $\mathbb{AF}_p((\mathbb{Z}/p^e)^n \times\freeO n \mathcal{F}_G, (\mathbb{Z}/p^e)^n\times \freeO n \mathcal{F}_H)$ consisting of virtual bisets
\begin{equation}\label{eqCompleteThenTwist}
\begin{split}
\twistO n ((X)^\wedge_p)_{\underline a,\underline b} &= \Bigl(\twistO n(X_{\mathcal{F}_G}^{\mathcal{F}_H} \mathbin{\odot} (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1})\Bigr)_{\underline a,\underline b}
\\ &= \Bigl(\twistO n(X_{\mathcal{F}_G}^{\mathcal{F}_H}) \mathbin{\odot} \twistO n (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline a,\underline b}.
\end{split}
\end{equation}
We claim that these two matrices are always equal (Theorem \ref{thmTwistedLoopsAndPCompletion} below), and in order to prove this we will make use of the following lemma:
\begin{lemma}\label{lemmaRestrictBifreeToSylow}
Suppose $H$ is a finite group with Sylow $p$-subgroup $T$ and let $S$ be a $p$-group.
The restriction map $\mathbb{AG}(H,S)^\wedge_p \to \mathbb{AG}(T,S)^\wedge_p$ is injective on the subgroup of bifree virtual $(H,S)$-bisets.
\end{lemma}
\begin{proof}
Suppose we have a bifree virtual biset $Y\in \mathbb{AG}(H,S)^{\wedge}_p$. Then $Y$ must be a $\mathbb{Z}_p$-linear combination of basis elements of the form $[P,\varphi]_H^S$, where $P\leq H$ is a $p$-subgroup and $\varphi\colon P\to S$ is injective. Such a linear combination is uniquely determined by its numbers of fixed points $\abs{Y^{(Q,\psi)}}\in \mathbb{Z}_p$, for all pairs $(Q,\psi)$ of a subgroup $Q\leq H$ and an injective group homomorphism $\psi\colon Q\to S$. Thus the virtual biset $Y$ is uniquely determined by $\abs{Y^{(Q,\psi)}}\in \mathbb{Z}_p$, where we only consider pairs $(Q,\psi)$ with $Q\leq H$ a $p$-subgroup.
Furthermore, since every $p$-subgroup of $H$ is $H$-conjugate to a subgroup of $T$, $Y$ is uniquely determined by the number of fixed points $\abs{Y^{(Q,\psi)}}\in \mathbb{Z}_p$ for $Q\leq T$ and $\psi\colon Q\to S$ injective. These fixed points happen to be preserved under restriction of the left action from $H$ to $T$, so it follows that any bifree $Y\in \mathbb{AG}(H,S)^{\wedge}_p$ is uniquely determined by its restriction $[T,\id]_T^H\mathbin{\odot} Y\in \mathbb{AG}(T,S)^{\wedge}_p$.
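To spell out the conjugation step (a routine observation, included here for completeness, under the convention that $H$ acts on the left, $S$ on the right, and $c_h(x)=hxh^{-1}$): if $qy=y\psi(q)$ for all $q\in Q$, then for any $h\in H$ the element $hy$ satisfies $({}^hq)(hy)=(hy)\psi(q)$, so $y\mapsto hy$ is a bijection between fixed-point sets and
\[
\abs{Y^{(Q,\psi)}} = \abs{Y^{({}^hQ,\,\psi\circ c_{h^{-1}})}}.
\]
In particular, the fixed points for an arbitrary pair $(Q,\psi)$ with $Q\leq H$ a $p$-subgroup are already determined by those with $Q\leq T$.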
\end{proof}
We now prove Theorem \ref{thmIntroMainTheoremPCompletion} from the introduction.
\begin{theorem}\label{thmTwistedLoopsAndPCompletion}
We have $(-)^\wedge_p \circ \twistO {n,p} = \twistO n \circ (-)^\wedge_p$
as functors $\mathbb{AG}\to \mathbb{AF}_p$.
\end{theorem}
\begin{proof}
We aim to prove that the matrices described in \eqref{eqTwistThenComplete} and \eqref{eqCompleteThenTwist} are equal.
Let us for a moment consider the matrix $\twistO n (X_{\mathcal{F}_G}^{\mathcal{F}_H}).$ Since restricting the action of $G$ to $\mathcal{F}_G$ is given by precomposition with $\omega_{\mathcal{F}_G}\circ [S,\id]_S^G$, functoriality of $\twistO n$ gives us
\[
\twistO n (X_{\mathcal{F}_G}^{\mathcal{F}_H}) = \twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S)\mathbin{\odot} \twistO n (X_S^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H}).
\]
Since every element in a $p$-group has $p$-power order, the functor $\twistO{n,p}$ coincides with $\twistO n$ when restricted to $p$-groups. Using functoriality of $\twistO{n,p}$, we can further decompose $\twistO n (X_{\mathcal{F}_G}^{\mathcal{F}_H})$ as
\begin{equation*}
\twistO n (X_{\mathcal{F}_G}^{\mathcal{F}_H}) = \twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G) \mathbin{\odot} \twistO {n,p} (X_G^H) \mathbin{\odot} \twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H}).
\end{equation*}
By Proposition \ref{propTwistedFusionLoop}, the entries of $\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S)$ satisfy
\[
\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S)_{\underline a,\underline s} = \twistO n((\omega_{\mathcal{F}_G})_S^S)_{\underline a,\underline s},
\]
whenever $\underline a$ is a representative $n$-tuple in $\mathcal{F}_G$, and $\underline s$ in $S$. For an $n$-tuple of $p$-power order elements $\underline g$ in $G$ and a representative $n$-tuple $\underline a$ in $\mathcal{F}_G$, the entries of the composite $\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G)$ satisfy
\begin{align*}
&(\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G))_{\underline a, \underline g}
\\ ={}& (\twistO n((\omega_{\mathcal{F}_G})_S^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G))_{\underline a, \underline g}
\\ ={}& (\twistO {n,p}((\omega_{\mathcal{F}_G})_S^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G))_{\underline a, \underline g}
\\ ={}& (\twistO {n,p}((\omega_{\mathcal{F}_G})_S^S\mathbin{\odot} [S,\id]_S^G))_{\underline a, \underline g}
\\ ={}& (\twistO {n,p}([S,\id]_S^G))_{\underline a, \underline g} \hspace{2cm} \text{since $[S,\id]_S^G$ is left $\mathcal{F}_G$-stable}.
\end{align*}
The biset $[S,\id]_S^G$ represents the forward map $\incl_S^G\colon S\to G$, hence by \cite[Theorem 3.33.(iv)]{RSS_Bold1} the entries $(\twistO {n,p}([S,\id]_S^G))_{\underline a, \underline g}$ are given by
\[
(\twistO {n,p}([S,\id]_S^G))_{\underline a, \underline g} = [(\mathbb{Z}/p^e)^n\times C_S(\underline a), \id]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_G(\underline g)},
\]
if $\underline g=\underline a$ (following Convention \ref{conventionCommonReprs}), and $0$ otherwise.
To sum up, the composite $\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G)$ is a diagonal matrix with entries
\begin{equation}\label{eqPCompletionPreComposition}
(\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G))_{\underline a, \underline a} = [(\mathbb{Z}/p^e)^n\times C_S(\underline a), \id]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_G(\underline a)},
\end{equation}
which are left $((\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_G}(\underline a))$-stable. This will be the first main ingredient in the proof of the theorem.
\medskip\noindent For the second ingredient, consider the matrix entry
\begin{equation*}
\Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}
\end{equation*}
inside the right $((\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b))$-stable elements of $\mathbb{AG}((\mathbb{Z}/p^e)^n\times C_H(\underline h), (\mathbb{Z}/p^e)^n\times C_T(\underline b))^\wedge_p$.
We claim that this entry is $0$ unless $\underline h=\underline b$, in which case it equals the composite
\[[(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_H(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}.\]
Each of the virtual bisets $\twistO{n,p}([T,\id]_H^T)$, $\twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})$ and $\twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}$ is a composite of bifree bisets and so is bifree as well -- note that $\twistO n$ preserves bifree actions, since the $\wind$-maps involved in the construction are all injective group homomorphisms.
We shall apply Lemma \ref{lemmaRestrictBifreeToSylow} to the bifree matrix entry
\[\Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}\]
in $\mathbb{AG}((\mathbb{Z}/p^e)^n\times C_H(\underline h), (\mathbb{Z}/p^e)^n\times C_T(\underline b))^\wedge_p$.
Recall \eqref{eqPCompletionPreComposition} from the first part of the proof, and replace $G$, $S$, and $\underline a$, with $H$, $T$, and $\underline h$, respectively. We then know that $\twistO n((\omega_{\mathcal{F}_H})_{\mathcal{F}_H}^T) \mathbin{\odot} \twistO{n,p}([T,\id]_T^H)$ is a diagonal matrix with entries \[
(\twistO n((\omega_{\mathcal{F}_H})_{\mathcal{F}_H}^T) \mathbin{\odot} \twistO{n,p}([T,\id]_T^H))_{\underline h, \underline h} = [(\mathbb{Z}/p^e)^n\times C_T(\underline h), \id]_{(\mathbb{Z}/p^e)^n\times C_T(\underline h)}^{(\mathbb{Z}/p^e)^n\times C_H(\underline h)}.
\]
From this it follows that restricting the virtual biset
\[\Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}\]
on the left from $(\mathbb{Z}/p^e)^n\times C_H(\underline h)$ to $(\mathbb{Z}/p^e)^n\times C_T(\underline h)$ can be achieved by precomposing the entire matrix with the diagonal matrix $\twistO n((\omega_{\mathcal{F}_H})_{\mathcal{F}_H}^T) \mathbin{\odot} \twistO{n,p}([T,\id]_T^H)$:
\begin{align*}
& [(\mathbb{Z}/p^e)^n\times C_T(\underline h),\id]_{(\mathbb{Z}/p^e)^n\times C_T(\underline h)}^{(\mathbb{Z}/p^e)^n\times C_H(\underline h)}\mathbin{\odot} \Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}
\\ ={}& \Bigl(\twistO n((\omega_{\mathcal{F}_H})_{\mathcal{F}_H}^T) \mathbin{\odot} \twistO{n,p}([T,\id]_T^H)\mathbin{\odot} \twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}
\\ ={}& \Bigl(\twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}
\\ ={}& \Bigl(\twistO n((\omega_{\mathcal{F}_H})_{\mathcal{F}_H}^{\mathcal{F}_H})\Bigr)_{\underline h,\underline b},
\end{align*}
which is the identity on $(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)$ when $\underline h=\underline b$ and $0$ otherwise.
Because $\Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline h,\underline b}$ is uniquely determined by its restriction to $(\mathbb{Z}/p^e)^n\times C_T(\underline h)$, we conclude that it is zero unless $\underline h =\underline b$.
At the same time, we can also consider the restriction of
\[[(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_H(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}\]
from $(\mathbb{Z}/p^e)^n\times C_H(\underline b)$ to $(\mathbb{Z}/p^e)^n\times C_T(\underline b)$ on the left:
\begin{align*}
& [(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_H(\underline h)}\mathbin{\odot} [(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_H(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}
\\* &\qquad\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}
\\ ={}& ((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}
\\ ={}& \id_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}.
\end{align*}
We again get the identity on $(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)$, and again since the bifree bisets are uniquely determined by their restrictions, we conclude that the matrix $\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}$ is diagonal with entries
\begin{multline}\label{eqPCompletionPostComposition}
\Bigl(\twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H})\mathbin{\odot} \twistO n(H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline b,\underline b}
\\= [(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_H(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}.
\end{multline}
This is the second main ingredient in the proof of the theorem.
The claim of the theorem that \eqref{eqCompleteThenTwist} equals \eqref{eqTwistThenComplete} is now a straightforward check of matrix entries:
\begin{align*}
& \twistO n ((X)^\wedge_p)_{\underline a,\underline b}
\\ \overset{\eqref{eqCompleteThenTwist}}={}& \Bigl(\twistO n(X_{\mathcal{F}_G}^{\mathcal{F}_H}) \mathbin{\odot} \twistO n (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline a,\underline b}
\\ ={}& \Bigl(\twistO n((\omega_{\mathcal{F}_G})_{\mathcal{F}_G}^S) \mathbin{\odot} \twistO{n,p}([S,\id]_S^G) \mathbin{\odot} \twistO {n,p} (X_G^H) \mathbin{\odot} \twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H}) \mathbin{\odot} \twistO n (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline a,\underline b}
\\ \overset{\eqref{eqPCompletionPreComposition}}={}& [(\mathbb{Z}/p^e)^n\times C_S(\underline a), \id]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_G(\underline a)} \mathbin{\odot} \Bigl(\twistO {n,p} (X_G^H) \mathbin{\odot} \twistO{n,p}([T,\id]_H^T) \mathbin{\odot} \twistO n((\omega_{\mathcal{F}_H})_T^{\mathcal{F}_H}) \mathbin{\odot} \twistO n (H_{\mathcal{F}_H}^{\mathcal{F}_H})^{-1}\Bigr)_{\underline a,\underline b}
\\ \overset{\eqref{eqPCompletionPostComposition}}={}& [(\mathbb{Z}/p^e)^n\times C_S(\underline a), \id]_{(\mathbb{Z}/p^e)^n\times C_S(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_G(\underline a)} \mathbin{\odot} \Bigl(\twistO {n,p} (X_G^H) \Bigr)_{\underline a,\underline b} \mathbin{\odot} [(\mathbb{Z}/p^e)^n\times C_T(\underline b),\id]_{(\mathbb{Z}/p^e)^n\times C_H(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_T(\underline b)}
\\* &\qquad\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}
\\ ={}& \Bigl(\twistO {n,p} (X_G^H)_{\underline a,\underline b} \Bigr)_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_G}(\underline a)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\mathbin{\odot} \Bigl(((\mathbb{Z}/p^e)^n\times C_H(\underline b))_{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}^{(\mathbb{Z}/p^e)^n\times C_{\mathcal{F}_H}(\underline b)}\Bigr)^{-1}
\\ \overset{\eqref{eqTwistThenComplete}}={}& ((\twistO {n,p} (X))^\wedge_p)_{\underline a,\underline b}.
\end{align*}
Finally we conclude that the composite functors $\twistO n \circ (-)^\wedge_p$ and $(-)^\wedge_p\circ \twistO{n,p}$ agree as functors $\mathbb{AG}\to \mathbb{AF}_p$.
\end{proof}
\section{Introduction}
Quantum Chromodynamics (QCD) is the non-abelian gauge theory which describes strongly interacting matter in terms of its fundamental degrees of freedom, namely quark and gluon fields. Despite the great success of QCD at high energy scales, where it becomes a weakly interacting theory, its non-perturbative character at low energy scales makes calculations of nucleon and nuclear matter properties extremely difficult, and alternatives to the usual perturbative approach must be considered, like lattice QCD or phenomenological nuclear physics models, such as the Skyrme model \cite{skyrme1994non}.
Similarly to other effective approaches to strongly interacting matter, like chiral perturbation theory (ChPT) or relativistic mean-field theory (RMF), the basic fields of the Skyrme model are the fields that provide the physical, asymptotic particle states, namely the meson fields. At variance with ChPT, however, a chiral expansion in powers of the typical momentum or energy of a physical process is not assumed. Instead, terms in the lagrangian with different scaling dimensions are treated on an equal footing, allowing for a balance between oppositely scaling terms and evading the Derrick theorem \cite{Derrick:1964ww}.
As a consequence of this balance, it is sufficient to introduce the mesonic fields as the basic degrees of freedom (DoF), because the baryons emerge as collective excitations or topological solitons ("skyrmions") from the nonlinear interactions of the mesons \cite{skyrme1962unified}.
The Skyrme model was, in fact, originally proposed by T. Skyrme in 1961 precisely with the aim of describing baryons within a self-interacting pion field theory.
The Skyrme model differs in this respect from both ChPT and RMF, where the baryons must be introduced as independent DoF. Furthermore, both the conservation of baryon number - which is identified with the topological charge of the skyrmions - and the extended character of baryons are built-in properties of the Skyrme model.
Later, it was shown that QCD in the large $N_c$ (the number of colors) limit reduces to a weakly interacting mesonic field theory in which baryons share the properties of topological solitons \cite{tHooft:1973alw, witten1979current}. Independent support for the Skyrme model is provided by its derivation from holographic QCD, both for the original \cite{Sakai:2004cn,Sakai:2005yt} and for the generalized Skyrme model \cite{Bartolini:2017sxi}.
First attempts at reproducing the properties of nucleons and nuclei within the Skyrme model were partially successful, but with a typical precision of only about $30\%$. In addition, there remained some relevant discrepancies, like the too large nuclear binding energies and the shell-like matter distribution of the Skyrme model solutions \cite{manton2004topological}. Recent results, however, demonstrate that both a quantization procedure beyond the rigid rotor approximation and the addition of new terms in the Skyrme lagrangian can solve many of these problems \cite{Adam_2010, adam2010bps, Gillard:2015eia, Gudnason:2016mms, Naya:2018kyi} and lead to much more precise results. Concretely, generalized Skyrme models which lead to realistic binding energies are discussed, e.g., in \cite{Adam_2010, adam2010bps, Gillard:2015eia, Gudnason:2016mms}, whereas in \cite{Naya:2018kyi} it is demonstrated that the inclusion of the rho meson allows one to find the known cluster structures of light nuclei. Finally, in \cite{Lau:2014baa} and \cite{Halcrow:2016spb}, the excitation spectra of carbon-12 and oxygen-16 are reproduced with astonishing precision where, in the latter case, the quantization of both rotational and vibrational DoF has been taken into account. There has also been important progress in the Skyrme model description of the nucleon-nucleon forces \cite{Halcrow:2020gbm}.
On the other hand, neutron stars (NS) have become our most useful resource for studying the behavior of nuclear matter at ultra-high densities \cite{Lattimer:2004pg}. Indeed, with the advent of gravitational wave astronomy, a deeper insight into the Equation of State (EoS) of strongly interacting matter has been provided by recent gravitational wave events involving binary neutron star mergers \cite{Abbott_2017}. Despite the large theoretical and observational effort of the last decades, the EoS of nuclear matter at densities much higher than the nuclear saturation density is still not fully understood. Among all the different approaches to the study of dense nuclear matter, the Skyrme model stands out as a relatively simple effective model with a low number of free parameters. Moreover, an equation of state based on this model (and its generalization) has been recently proposed in \cite{Adam:2020yfv} and shown to yield reasonable results in predicting the properties of neutron stars such as the mass-radius relations or the quasi-universal relations between the moment of inertia, the deformability and the quadrupolar moment of slowly rotating and tidally deformed stars \cite{naya2019neutron,Adam:2020aza}.
However, the EoS proposed in \cite{Adam:2020yfv} was based on the interpolation between two different submodels of the general Skyrme model, namely, the standard Skyrme model, that predicts a crystalline state of the dense nuclear matter, and the BPS submodel, in which matter behaves as a perfect fluid \cite{Adam:2014nba}. The transition between the crystal and fluid phases was modeled as a smooth crossover, where an additional free parameter describing the point at which the transition takes place had to be introduced.
Moreover, within the Skyrme model literature it has been established that the configurations that minimize the energy per baryon at large baryon number correspond to crystalline solutions, in which skyrmions (or half-skyrmions, see sect. III) are arranged in a periodic fashion respecting some particular (discrete) symmetries. Indeed, configurations with different symmetries and energies have been proposed in order to find the one with minimal energy. However, some symmetries could be more energetically favourable than others at different density regimes. This is indeed what was found for the standard Skyrme model in \cite{klebanov1985nuclear}, in which the existence of a phase transition between different crystalline configurations is predicted at a certain density.
In the present paper, our goal is to construct different solutions, both of crystalline and non-crystalline type, of the full generalized Skyrme model and to study their behavior for a wide range of densities. The main aim is to determine which configurations minimize the energy per baryon in the different regimes, and to find the corresponding equation of state. The resulting classical skyrmionic matter and its EoS should provide an interesting starting point for the description of strongly interacting matter. For a completely realistic description, however, most likely further modifications like quantum corrections or the effects of additional fields have to be taken into account.
This paper is organised as follows: In the second section we introduce the general Skyrme model, from which we will construct the crystalline solutions. In the third section we review the procedure of how to construct crystal-like solutions following \cite{kugler1989skyrmion}. In section IV, we study the problem of the inhomogeneous phase of nuclear matter at intermediate densities, in which the Skyrmion crystal ceases to be a good ansatz for the field, as the energy per baryon starts to grow. Then we use these solutions to obtain an EoS for classical skyrmionic matter. Finally, we end with some conclusions and possible future directions. We always assume units such that the speed of light $c=1$. Further, we use the mostly minus metric convention.
\section{Generalized Skyrme Lagrangian}
The general Skyrme model is described by the following lagrangian density,
\begin{align}
\notag \mathcal{L} = &\mathcal{L}_{\rm Sk} + \mathcal{L}_{\rm BPS} = \left( \mathcal{L}_2 + \mathcal{L}_4 \right) + \left( \mathcal{L}_6 + \mathcal{L}_0 \right)= \\[2mm]
\notag= -\frac{f^2_{\pi}}{16}&\Tr\left\{L_{\mu}L^{\mu}\right\} + \frac{1}{32e^2}\Tr\left\{\left[L_{\mu},L_{\nu}\right]^2\right\} \\[2mm]
&- \lambda^2 \pi^4\mathcal{B}_{\mu}\mathcal{B}^{\mu} + \frac{m^2_{\pi} f^2_{\pi}}{8}\Tr\left\{ U - I \right\}.
\label{Lag}
\end{align}
Apart from the specific choice for the potential term $\mathcal{L}_0$ - the pion mass term - which could be replaced by a more general expression, the above lagrangian density is the most general one in terms of the pion field only, which is both Poincar\'e invariant and at most quadratic in time derivatives, such that a standard hamiltonian can be defined.
We find it useful to regroup the full generalized model $\mathcal{L}$ into the standard part $\mathcal{L}_{\rm Sk} $ and the BPS part $\mathcal{L}_{\rm BPS}$, because some solutions of these submodels for large baryon number $B$ are relatively simple and have been widely studied, which will allow us to compare our full solutions to these limiting cases.
The second part $\mathcal{L}_{\rm BPS}$ is a BPS model, \textit{i.e.}, it has solutions saturating the corresponding Bogomol'nyi energy bound \cite{Adam_2010}. The lagrangian has 3 free parameters ($f_{\pi}, e, \lambda^2$) which will be used to fit the ground state of the solutions. The pion mass is set to its physical value $m_{\pi} = 140$ MeV. This model describes the dynamics of an $SU(2)$ field $U$, which always appears in the lagrangian in terms of the Maurer-Cartan left-invariant current $L_{\mu} = U^{\dagger}\partial_{\mu}U$, except for the potential term $\mathcal{L}_0$. Static configurations of the field $U$ constitute maps from $\mathbb{R}^3$ to the target space manifold, which can be identified with the three-sphere $S^3$.
For usual solitonic configurations, the requirement of finite energy implies that the field must take values in the vacuum manifold at spatial infinity, which, due to the potential term, corresponds to $U(\abs{x}\rightarrow \infty) = I$. This boundary condition, in turn, implies that finite energy configurations correspond to mappings from one-point compactified real space, $\mathbb{R}^3\cup \{\infty \} \sim S^3$ to $S^3$. These maps are classified by the third homotopy group of $S^3$, $\pi_3(S^3) = \mathbb{Z}$, so they can be labelled by an integer. Hence, the Skyrme model allows for topological soliton solutions, called Skyrmions, carrying an integer valued charge. This integer, the so-called topological degree, is identified with the baryon number $B$, and can be calculated as an integral of the topologically conserved current $\mathcal{B}^\mu$:
\begin{equation}
B = \int d^3x \:\mathcal{B}^0, \hspace{2mm} \mathcal{B}^{\mu} = \frac{1}{24\pi^2} \epsilon^{\mu\nu\alpha\beta}\Tr\left\{ L_{\nu}L_{\alpha}L_{\beta} \right\},
\label{TopoNumber}
\end{equation}
which is the same expression that appears in the sextic term ($\mathcal{L}_6$) of the lagrangian \eqref{Lag}.
Solutions of the standard Skyrme model in the $B = 1$ sector with \cite{adkins1984skyrme} and without pion mass term \cite{adkins1983static}, and including the contributions from the zero mode quantization, have been found to reproduce the nucleon properties reasonably well. Later, these calculations were extended to higher values of $B$ \cite{battye2009light} within the rational map approximation. This has also been done in the BPS model \cite{Adam:2013wya, Adam:2013tda}, obtaining quite accurate results in comparison with experimental data, and with no approximation, since the symmetries of the BPS model allow an analytical treatment of the solutions. The generalized Skyrme lagrangian \eqref{Lag} has been used to reproduce nucleons as well \cite{Ding:2007xi}. These last results will be compared to the ones we obtain from the condition to reproduce infinite nuclear matter.
To describe NS, on the other hand, we need to find solutions for $B$ of the order of $B \sim 10^{57}$, the number of baryons in the Sun. We should then think about how skyrmions arrange themselves under these conditions. It was Klebanov \cite{klebanov1985nuclear} who proposed a kind of crystalline solution with the aim of describing the highly compressed interiors of neutron stars. As usual when considering crystalline configurations, we will define a unit cell for each symmetry and work with it. Hence, in order to describe these solutions, we may define the Skyrme fields as mappings from the finite-size unit cell (which has finite energy) to the target manifold.
We would like to remark that, although the boundary conditions imposed on the Skyrme fields are different from those of regular solitonic solutions, the topological properties of the field configurations remain the same. Indeed, a cubic lattice with periodic boundary conditions is mathematically equivalent to a three-torus, $T^3$, so that crystalline configurations are described by maps $U_{\rm crystal}:T^3\rightarrow S^3$. As $T^3$ is still a compact and oriented manifold, mappings from $T^3$ to $S^3$ are still characterized by their topological degree, as ensured by Hopf's degree theorem.\footnote{In fact, the theorem asserts that the topological degree is the \emph{only} homotopy invariant in such situations.}
Thus, in the standard Skyrme model the solution that minimizes the energy is a crystalline configuration. On the other hand, we know that the BPS model solutions behave like a perfect fluid, due to the symmetry of $\mathcal{L}_{\rm BPS}$ under volume-preserving diffeomorphisms (in fact, one can exactly identify the field configurations of the BPS submodel $\mathcal{L}_{\rm BPS}$ with a perfect fluid, as can be shown from a careful analysis of the corresponding stress-energy tensor \cite{Adam_2010}). However, since the sextic term is only important at high densities \cite{Adam:2020yfv}, we expect that the crystal solution is still the ground state of the generalized Skyrme lagrangian.
In order to construct Skyrme crystal solutions it is useful to define dimensionless units of length and energy ($\vec{r} = (x,y,z)^{\rm t}$)
\begin{equation}
\vec{r} = \frac{1}{f_{\pi}e}\vec{\widetilde{r}}, \hspace{3mm} E = \frac{3\pi^2f_{\pi}}{e}\widetilde{E}.
\end{equation}
These units are frequently used in the Skyrme model literature, so they are convenient for comparing results. It is well known that the energy of the $B = 1$ skyrmion in the standard Skyrme model is $\widetilde{E} = 1.23$, whereas the topological (Skyrme-Faddeev) bound \cite{skyrme1962unified,Faddeev:1976pg} on the energy reads $\widetilde{E} \ge 1$ in these units.
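To make these units concrete, here is a minimal numerical sketch; the values of $f_{\pi}$ and $e$ below are placeholders for illustration only (they are not the fit performed in this work), and $\hbar c$ is used for the conversion to fm:
\begin{verbatim}
import math

# Dimensionless Skyrme units: length unit 1/(f_pi*e),
# energy unit 3*pi^2*f_pi/e.
HBARC = 197.327   # MeV*fm
f_pi = 129.0      # MeV  -- illustrative placeholder value
e = 5.45          #      -- illustrative placeholder value

length_unit_fm = HBARC / (f_pi * e)
energy_unit_MeV = 3 * math.pi**2 * f_pi / e

print(f"length unit: {length_unit_fm:.3f} fm")
print(f"energy unit: {energy_unit_MeV:.1f} MeV")
print(f"E(B=1) = 1.23 units = {1.23 * energy_unit_MeV:.0f} MeV")
\end{verbatim}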
The field $U$ can be parametrized as an expansion in the $SU(2)$ Lie algebra generators:
\begin{equation}
U = \sigma + i \pi_k \tau_k,
\label{Ufield}
\end{equation}
where the $\pi_k$ ($k$ = 1, 2, 3) represent the pions, $\tau_k$ are the Pauli matrices, and the fields satisfy the unitarity condition $\sigma^2 + \pi_i\pi_i = 1$. We will work with static solutions, $\partial_0 U = 0$, so the energy is simply $E = -\int d^3x \,\mathcal{L}$. Inserting \eqref{Ufield} in \eqref{Lag} and \eqref{TopoNumber}, we can calculate the energy and the baryon number:
\begin{align}
\notag\widetilde{E} = \frac{1}{24\pi^2}&\int d^3\widetilde{x} \left[ -\frac{1}{2}\Tr\left\{L_iL_i\right\} - \frac{1}{4}\Tr\left\{\left[L_i,L_j\right]^2\right\} + \right.\\[2mm]
\notag&\left. 8\lambda^2 \pi^4 f^2_{\pi}e^4 \mathcal{B}^0\mathcal{B}^0 + \frac{m^2_{\pi}}{f^2_{\pi}e^2}\Tr\left\{ I - U \right\} \right] =\\[2mm]
\notag=\frac{1}{24\pi^2}&\int d^3\widetilde{x} \left[ \partial_i n_a\partial_i n_a + \left( \partial_i n_a \partial_j n_b - \partial_i n_b\partial_j n_a \right)^2 + \right. \\[2mm]
&\left. C_6\left( \epsilon_{abcd}n_a\partial_1 n_b\partial_2 n_c \partial_3 n_d \right)^2 + C_0\left( 1-\sigma \right)\right], \label{Energy} \\[2mm]
B = -\frac{1}{2\pi^2}&\int d^3\widetilde{x} \:\epsilon_{abcd}n_a\partial_1 n_b\partial_2 n_c \partial_3 n_d,
\label{Baryon}
\end{align}
where we have defined the unit vector $n_a = (\sigma, \pi_i)$, and the constants $C_6 = 2\lambda^2f^2_{\pi}e^4$, $C_0 = \frac{2m^2_{\pi}}{f^2_{\pi}e^2}$.
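As an illustration of how the topological charge \eqref{Baryon} can be evaluated in practice, the following minimal Python sketch (our own illustration, independent of the numerical code used for the results below) computes $B$ for a field $n_a$ sampled on a periodic grid, using central finite differences:
\begin{verbatim}
import numpy as np
from itertools import permutations

def levi_civita_4d():
    """Rank-4 Levi-Civita symbol eps[a,b,c,d]."""
    eps = np.zeros((4, 4, 4, 4))
    for perm in permutations(range(4)):
        inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                         if perm[i] > perm[j])
        eps[perm] = (-1) ** inversions
    return eps

def baryon_number(n, dx):
    """Topological charge of a unit-vector field n[a, x, y, z]
    (a = 0..3 for sigma, pi_1, pi_2, pi_3) on a periodic grid with
    spacing dx, discretizing the baryon-number integral above."""
    eps = levi_civita_4d()
    # central differences with periodic boundary conditions
    d1, d2, d3 = ((np.roll(n, -1, axis=i) - np.roll(n, 1, axis=i))
                  / (2 * dx) for i in (1, 2, 3))
    dens = np.einsum('abcd,axyz,bxyz,cxyz,dxyz->xyz',
                     eps, n, d1, d2, d3)
    return -dens.sum() * dx**3 / (2 * np.pi**2)
\end{verbatim}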
\section{Crystal solutions in the Skyrme model}
Two $B=1$ skyrmions are in the maximally attractive channel if the second skyrmion is isorotated by $\pi$ relative to the first one, about an axis perpendicular to the distance vector between the two skyrmions. It can be checked easily that the maximally attractive orientation of skyrmions can be extended to a cubic arrangement, such that all skyrmions forming the cubic lattice are maximally attracted by all their nearest neighbours. This led Klebanov \cite{klebanov1985nuclear} to consider a Skyrme crystal based on an infinite periodic lattice with cubic symmetry. At low densities, the solution is described by spherically symmetric skyrmions located in the corners of the cube. The fact that nearest neighbours must be mutually isorotated to be in the maximally attractive channel translates into a particular set of symmetries for the field in the unit cell (simple cubic and periodic) that must be imposed. Then, the solution is found by minimizing the static energy functional \eqref{Energy}.
Most Skyrme crystal calculations have been performed for the standard Skyrme model $\mathcal{L}_{\rm Sk}$. We shall, therefore, briefly review these crystals and their symmetries before presenting our own results. In all cases, skyrmions (or half-skyrmions) are located at the vertices of a cubic lattice, and we call the distance between two nearest neighbours $L$ (or $\widetilde{L}$ in dimensionless units). The Skyrme fields, however, are {\em not} periodic under a lattice translation by $L$, because of the necessity to isorotate nearest neighbours. They are, however, periodic for $2L$ translations. The unit cell of the crystal is, therefore, a cube with sidelength $2L$ (or $2\widetilde{L}$) in all cases.
The total energy of a crystal is infinite since it is, by construction, infinitely extended. Nevertheless, the energy per baryon number (here $\widetilde{E}_{\text{cell}}$ and $B_{\text{cell}}$ are the dimensionless energy and the baryon number of the unit cell, and $N_{\text{cells}}$ is the number of cells),
\begin{equation}
\frac{\widetilde{E}}{B} = \frac{N_{\text{cells}}\:\widetilde{E}_{\text{cell}}}{N_{\text{cells}}\:B_{\text{cell}}},
\end{equation}
remains finite. We can then work with a single unit cell, which has finite energy, and calculate its baryon number and energy. The unit cell is characterized by the sidelength $2\widetilde{L}$, and its energy changes for different values of $\widetilde{L}$. The curve $\widetilde{E}(\widetilde{L})$ is always found to have a minimum $\widetilde{E}_{\rm min} = \widetilde{E}(\widetilde{L}_{\rm min})$ for a certain finite $\widetilde{L}_{\rm min}$, which for the case of the cubic symmetry of \cite{klebanov1985nuclear} takes the value $\widetilde{E}_{\rm min}/B = 1.08$. This value is only $8\%$ higher than the Skyrme-Faddeev bound,
which indicates that a crystalline arrangement of skyrmions is probably the field configuration with lowest energy (per baryon) for an infinite baryon number.
Different symmetries were then explored in order to get closer to the (unattainable) Skyrme-Faddeev bound. In \cite{goldhaber1987maximal}, Goldhaber and Manton proposed an additional symmetry for the Klebanov crystal, motivated by the dynamics of the two-skyrmion configuration. This introduced a new solution based on half-skyrmions, which can be thought of as a body-centered cubic (BCC) arrangement, in which one half-skyrmion is located in the center of the cube and the other half-skyrmions in the corners.
Finally, in two different but almost simultaneous papers \cite{castillejo1989dense, kugler1989skyrmion} a new phase was proposed. They computed a crystal with face-centered cubic (FCC) symmetry of half-skyrmions using two different approaches. The resulting crystal is believed to be the crystal of lowest energy and is a good candidate for the ground state of the standard Skyrme model for infinite baryon number, with a minimal energy $E_{\rm min}$ per baryon which is only $3.8\%$ above the energy bound.
The influence of the sextic and potential terms in the crystalline phases of the Skyrme model was already investigated in \cite{perapechka2017crystal} from a more formal point of view. In our paper, we extend these studies, considering physical values of the parameters of the Skyrme model and focusing on the extraction of an equation of state for the ground state of symmetric nuclear matter.
All the crystals mentioned above have the cubic symmetries in common. They are given by the following combined transformations,
\begin{align}
\notag\text{A}_1&: (x,y,z) \rightarrow (-x,y,z), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (\sigma,-\pi_1,\pi_2,\pi_3), \label{A1}\\[2mm]
\notag\text{A}_2&: (x,y,z) \rightarrow (y,z,x), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (\sigma,\pi_2,\pi_3,\pi_1).
\label{A2}
\end{align}
The simple cubic (SC) crystal of Klebanov has an additional periodicity symmetry,
\begin{align}
\notag \text{A}_3&: (x,y,z) \rightarrow (x+L,y,z), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (\sigma,-\pi_1,\pi_2,-\pi_3).
\end{align}
This symmetry locates the centers of the skyrmions in the corners of the cube, isorotated with respect to their nearest neighbours. Owing to the translational invariance of A$_3$, the energy and baryon densities are periodic in $L$. Since each skyrmion contributes 1/8 to the baryon number and the cube has 8 corners, the baryon number of this cube is 1. However, the fields are only periodic in $2L$ (as follows from the symmetry A$_3$), and the unit cell is a cube of length $2L$.
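Numerically, a symmetry like A$_3$ is easy to verify for a candidate field configuration. Below is a minimal sketch (our own illustration), assuming the fields are stored as an array \texttt{n[a,i,j,k]} on an $N^3$ grid covering the unit cell of side $2L$, so that a shift by $L$ corresponds to a roll by $N/2$ grid points:
\begin{verbatim}
import numpy as np

def check_A3(n, atol=1e-10):
    """Verify A3: the fields at (x+L, y, z) must equal
    (sigma, -pi1, pi2, -pi3) at (x, y, z), for n[a, i, j, k]
    on an N^3 grid over the unit cell of side 2L."""
    N = n.shape[1]
    shifted = np.roll(n, -N // 2, axis=1)  # x -> x + L on the grid
    signs = np.array([1.0, -1.0, 1.0, -1.0]).reshape(4, 1, 1, 1)
    return np.allclose(shifted, signs * n, atol=atol)
\end{verbatim}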
The BCC half-skyrmion phase shares the same symmetries of the SC phase ($\text{A}_1, \text{A}_2, \text{A}_3$), plus one additional symmetry,
\begin{align}
\notag\text{B}_4&: (x,y,z) \rightarrow (L/2-z,L/2-y,L/2-x), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (-\sigma,\pi_2,\pi_1,\pi_3).
\end{align}
In this phase, the cube of length $L$ has a half-skyrmion in its center (with $\sigma = -1$ at the center) and 8 other half-skyrmions (with $\sigma = +1$) in the corners. An interesting result obtained from this half-skyrmion symmetry is that the mean value of $\sigma$ over the unit cell, denoted by $\langle \sigma \rangle$, vanishes identically. From this property it is obvious that the potential term in the lagrangian will exactly scale as $L^3$ in this phase (remember that $\mathcal{L}_0 \sim \sigma -1$). Again, the unit cell has length $2L$, therefore the integrals \eqref{Energy} and \eqref{Baryon} are performed between $-L$ and $L$. It is easy to deduce that the baryon number of the unit cell is 8. A typical energy density plot is shown in \cref{Figure.B}, where blue regions correspond to low density and yellow regions to high density.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{BCC_Contourf.pdf}
\caption{\small Energy contour plots for the unit cell of the body-centered-cubic (BCC) crystal. The plots show energy density surfaces for different heights within the unit cell.}
\label{Figure.B}
\end{figure}
The FCC phase of single skyrmions has two additional symmetries besides A$_1$ and A$_2$,
\begin{align}
\notag\text{C}_3&: (x,y,z) \rightarrow (x,z,-y), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (\sigma,-\pi_1,\pi_3,-\pi_2), \\[2mm]
\notag\text{C}_4&: (x,y,z) \rightarrow (x+L,y+L,z), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (\sigma,-\pi_1,-\pi_2,\pi_3).
\end{align}
In this case the energy, the baryon number and the fields are all periodic in $2L$; a typical plot is shown in \cref{Figure.C}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{FCC_Contourf.pdf}
\caption{\small Energy contour plots for the unit cell of the face-centered cubic (FCC) crystal of skyrmions. The plots show energy density surfaces for different heights within the unit cell. Here we choose $z=0, L/2, L$ for the heights because in this case also the energy density has a $2L$ periodicity.}
\label{Figure.C}
\end{figure}
The FCC half-skyrmion phase shares $\text{C}_3$ and has, in addition,
\begin{align}
\notag\text{D}_4&: (x,y,z) \rightarrow (x+L,y,z), \\[2mm] &(\sigma,\pi_1,\pi_2,\pi_3) \rightarrow (-\sigma,-\pi_1,\pi_2,\pi_3).
\end{align}
We can recover the FCC phase symmetry C$_4$ by applying two D$_4$ transformations. In this phase, the energy and baryon number are periodic in $L$, but the fields have period $2L$; it has, in fact, the appearance of a simple cubic phase of half-skyrmions, see \cref{Figure.D}. The baryon number per unit cell $(2L)^3$ is $B_{\rm cell} =4$ for both FCC crystals. Further, as in the BCC phase, $\langle \sigma \rangle$ vanishes, so the potential term contribution is already known and scales like $L^3$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{FCC_Half_Contourf.pdf}
\caption{\small Energy contour plots for the unit cell of the face-centered cubic crystal of half-skyrmions (FCC$_{^1\!/_2}$). The plots show energy density surfaces for different heights within the unit cell.}
\label{Figure.D}
\end{figure}
From now on we will refer to the FCC phase of half-skyrmions as FCC$_{^1\!/_2}$.
Following \cite{kugler1989skyrmion}, we find that the cubic symmetries \eqref{A1}, \eqref{A2} motivate the following Fourier-like expansion of the fields,
\begin{align}
&\overline{\sigma} = \sum^{\infty}_{a,b,c = 0} \beta_{abc}\cos\left( \frac{a\pi x}{L} \right) \cos\left( \frac{b\pi y}{L} \right) \cos\left( \frac{c\pi z}{L} \right) \label{Sigmaexpan}\\[2mm]
&\overline{\pi}_1 = \sum^{\infty}_{h,k,l = 0} \alpha_{hkl} \sin\left( \frac{h\pi x}{L} \right) \cos\left( \frac{k\pi y}{L} \right) \cos\left( \frac{l\pi z}{L} \right). \label{Piexpan}
\end{align}
Then, the fields $\pi_2$ and $\pi_3$ can be constructed by applying the transformation A$_2$ to $\pi_1$. The bars over the fields indicate that they do not yet satisfy the $SU(2)$ constraint, hence we have to normalize them,
\begin{equation}
n_a = \frac{1}{\sqrt{\overline{\sigma}^2 + \overline{\pi}_k\overline{\pi}_k}}\left( \overline{\sigma}, \overline{\pi}_k \right).
\label{vector}
\end{equation}
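For illustration, the following minimal sketch (ours, not the authors' production code) builds the leading-order truncation of \eqref{Sigmaexpan}, \eqref{Piexpan} on a grid, generates $\overline{\pi}_2$, $\overline{\pi}_3$ from $\overline{\pi}_1$ via A$_2$, and applies the normalization \eqref{vector} pointwise; the two coefficients are the leading FCC$_{^1\!/_2}$ values quoted in the next subsection:
\begin{verbatim}
import numpy as np

# Leading-order truncation of the Fourier-like ansatz (illustrative):
# keep only alpha_100 (mode h,k,l = 1,0,0) and beta_111; pi2 and pi3
# follow from pi1 by the cyclic map A2: (x,y,z) -> (y,z,x).
L = 4.7
x = np.linspace(-L, L, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

alpha_100, beta_111 = 0.982, -1.110
sigma_b = beta_111 * np.cos(np.pi*X/L)*np.cos(np.pi*Y/L)*np.cos(np.pi*Z/L)
pi1_b = alpha_100 * np.sin(np.pi*X/L)
pi2_b = alpha_100 * np.sin(np.pi*Y/L)   # pi2(x,y,z) = pi1(y,z,x)
pi3_b = alpha_100 * np.sin(np.pi*Z/L)   # pi3(x,y,z) = pi2(y,z,x)

# Pointwise normalization onto S^3, Eq. (vector)
norm = np.sqrt(sigma_b**2 + pi1_b**2 + pi2_b**2 + pi3_b**2)
sigma, pi1, pi2, pi3 = (f/norm for f in (sigma_b, pi1_b, pi2_b, pi3_b))
\end{verbatim}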
Once the particular symmetries of a crystalline ansatz have been specified, the problem is reduced to a finite-dimensional optimization problem for the coefficients $\beta_{abc}$ and $\alpha_{hkl}$, which must be chosen so as to minimize the energy \eqref{Energy} of the solution. Furthermore, the symmetry properties associated with each phase can be used to reduce the number of independent parameters, since they impose constraints on the coefficients.
In the BCC phase, the nonzero coefficients $\beta_{abc}$ and $\alpha_{hkl}$ satisfy the following conditions:
\begin{itemize}
\item $h$, $k$ are odd, $l$ is even.
\item $a$, $b$ and $c$ are even.
\item $\beta_{abc} = \beta_{bca} = \beta_{cab}$.
\item $\alpha_{hkl} = -(-1)^{\frac{h+k+l}{2}}\alpha_{khl}$.
\item $\beta_{abc} = -(-1)^{\frac{a+b+c}{2}}\beta_{bac}$.
\end{itemize}
For both the FCC and the FCC$_{^1\!/_2}$ phases, the following coefficients are allowed
\begin{itemize}
\item $h$ is odd, $k$ and $l$ are even.
\item $a$, $b$, $c$ are all odd.
\end{itemize}
The FCC phase permits, in addition
\begin{itemize}
\item $h$ is even, $k$ and $l$ are odd.
\item $a$, $b$, $c$ are all even.
\end{itemize}
As we can see from these constraints, the FCC and FCC$_{^1\!/_2}$ phases share many Fourier coefficients. The FCC phase, however, has additional coefficients which are set to zero in the half-skyrmion phase. FCC$_{^1\!/_2}$ solutions are, therefore, at the same time particular FCC solutions. This implies that the ground state energy per baryon number of a FCC$_{^1\!/_2}$ solution can never be smaller than the
ground state energy per baryon number of an FCC solution. The standard Skyrme model $\mathcal{L}_{\rm Sk}$ is compatible with the FCC$_{^1\!/_2}$ symmetries (it respects the symmetry
$\sigma \to - \sigma$), so it allows both for an FCC$_{^1\!/_2}$ ground state which is symmetric w.r.t. $\sigma \to - \sigma$ and for an FCC ground state with a spontaneously broken symmetry. It turns out that for sufficiently large $L$ the FCC ground state is realised, whereas the system settles in the more symmetric FCC$_{^1\!/_2}$ ground state at higher densities. The two phases are separated by a second order phase transition at a certain critical $L_{\rm PT}$, where the additional coefficients allowed by FCC approach zero. The pion mass term, on the other hand, is not compatible with the symmetry
$\sigma \to - \sigma$, therefore the system is always in the FCC phase. At large densities, however, the pion mass term becomes irrelevant and the additional
non-zero coefficients of the FCC phase are suppressed in the limit of small $L$ \cite{Vento:2017ypn}.
\subsection{Numerical procedure}
\label{Sec:Numproc}
In order to solve the optimization problem explained above, we fix the length $\widetilde{L}$ of the unit cell, which is an input of the crystal ansatz, and then minimize the energy by varying the Fourier coefficients, using the Nelder-Mead algorithm \cite{nelder1965simplex} as implemented in the GSL C++ library. Repeating this process for many different lattice lengths yields a curve $\widetilde{E}(\widetilde{L})$ (energy of the unit cell) for each phase.
Such a procedure constitutes an efficient solution to the problem, because higher terms in the expansions \eqref{Sigmaexpan}, \eqref{Piexpan} only give very small contributions. We can, therefore, safely truncate the series to a certain finite number of terms ($N_t$) and neglect higher-order terms. We take $N_t$ such that we reproduce the results in \cite{kugler1989skyrmion} up to a precision of $1$\textperthousand, concretely an energy $\widetilde{E}/B = 1.038$ for an FCC half-skyrmion unit cell of length $\widetilde{L} = 4.7$, for which $N_t = 32$ terms are needed in total. The fast convergence is numerically checked: the first two coefficients in that symmetry are $\alpha_{100} = 0.982$ and $\beta_{111} = -1.110$, the next-to-leading-order coefficients are about $5\%$ of these, and the next order is about $0.4\%$. Due to this quick convergence, even the solution of the crystal with only 2 Fourier coefficients already provides a rather good approximation.
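Schematically, the minimization loop has the following structure (an illustrative Python sketch; the actual computation used the GSL C++ Nelder-Mead implementation, and \texttt{cell\_energy} below is a toy stand-in, not the true functional \eqref{Energy}):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cell_energy(coeffs, L):
    # Toy stand-in for the numerical evaluation of Eq. (Energy) with
    # the truncated ansatz (illustration only): quadratic in the
    # coefficients, with an L-dependent minimizer.
    c0 = np.exp(-L / 5.0) * np.ones_like(coeffs)
    return 1.0 + 2.4 / L + 0.11 * L + np.sum((coeffs - c0) ** 2)

def minimize_cell(L, coeffs0):
    res = minimize(lambda c: cell_energy(c, L), coeffs0,
                   method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-8,
                            "maxiter": 20000})
    return res.x, res.fun

L_grid = np.linspace(3.0, 8.0, 20)   # scan of lattice lengths
coeffs = np.zeros(8)                 # the text uses N_t = 32 terms
E_curve = []
for L in L_grid:                     # warm-start from the previous L
    coeffs, E = minimize_cell(L, coeffs)
    E_curve.append(E)
\end{verbatim}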
Once the values of the curve $\widetilde{E}(\widetilde{L})$ have been obtained, we fit them with the following function
\begin{equation}
\frac{\widetilde{E}}{B} = k + k_2\widetilde{L} + \frac{k_4}{\widetilde{L}} + C_6 \frac{k_6}{\widetilde{L}^3} + C_0 k_0\widetilde{L}^3,
\label{E_Fit}
\end{equation}
which is motivated by the scaling behaviour of the different terms in the Lagrangian. An interesting observation is that the contribution to the energy of each term individually can be approximately parametrized as $\widetilde{E}_i(\widetilde{L}) = K_i \widetilde{L}^{3-i}$, at least for $\widetilde{L} \lesssim \widetilde{L}_{\rm{min}}$. Here $K_i$ is almost a constant, and $i$ is the scaling dimension of each term. The energy can then be expressed as the sum of the individual contributions of each term in the Lagrangian. This suggests that, at least in the high density regime (which is the one of interest), there is an \emph{approximate perfect scaling} of each term. The precision of this approximation is characterized by the differences $K_i - k_i$ and by $k \neq 0$. This perfect scaling property will be useful to fit the values of the constants $f_{\pi}$ and $e$ in the next sections.
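The fit itself is then a standard least-squares problem; a minimal sketch, with synthetic stand-in data replacing the actual numerical scan:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C6, C0 = 1.0, 1.0    # on/off switches of the sextic/potential terms

def EB_model(L, k, k2, k4, k6, k0):
    return k + k2 * L + k4 / L + C6 * k6 / L**3 + C0 * k0 * L**3

# Synthetic stand-in for the numerical scan (illustration only)
L_data = np.linspace(3.0, 8.0, 30)
EB_data = 0.03 + 0.11 * L_data + 2.4 / L_data

popt, _ = curve_fit(EB_model, L_data, EB_data,
                    p0=[0.1, 0.1, 2.5, 0.5, 0.01])
k, k2, k4, k6, k0 = popt
\end{verbatim}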
To obtain the perfect scaling parametrization, we calculate the energy for a single value of $\widetilde{L}$ and extract the constants $K_i$ from the individual contributions of the different terms (for simplicity, we calculate the constants $K_i$ in the case $C_6 = C_0 = 1$). Then, the curve $\widetilde{E}(\widetilde{L})$ can be approximated by:
\begin{equation}
\frac{\widetilde{E}_{\text{PS}}}{B} = K_2\widetilde{L} + \frac{K_4}{\widetilde{L}} + C_6\frac{K_6}{\widetilde{L}^3} + C_0 K_0\widetilde{L}^3.
\label{Perfect_Sc}
\end{equation}
This procedure is applied in the generalized Skyrme model. However, the inclusion of the sextic and the mass terms forces us to give numerical values for the parameters even when choosing dimensionless units, as now not all the parameters can be factored out in the Lagrangian. We choose the parameter values such that we reproduce the energy density of infinite nuclear matter at saturation, which is given by (here $p$ is the pressure and $n$ is the thermodynamical, average baryon density)
\begin{equation}
\left.\frac{E}{B}\right|_{p = 0} = 923.3 \:\text{MeV}, \hspace{3mm} n(p = 0) = n_0 = 0.16\: \text{fm}^{-3},
\label{Infinite_Matter}
\end{equation}
and the point $p = 0$ is identified with the minimum $\widetilde{E}_{\rm min}$ of the curve $\widetilde{E}(\widetilde{L})$ (see next section).
The baryon density of the unit cell is the number of baryons per unit cell divided by its volume,
$n=B_\text{cell}/(2L)^3$. An important point here is that the BCC and FCC unit cells have different baryon numbers, so that the same baryon density corresponds to different lattice parameters $L$ for different phases,
\begin{equation}
n= \frac{B_{\rm cell}^{\text{FCC}}}{8L^3_{\text{FCC}}} = \frac{B_{\rm cell}^{\text{BCC}}}{8L^3_{\text{BCC}}} \longrightarrow \frac{L_{\text{BCC}}}{2^{1/3}} = L_{\text{FCC}}.
\end{equation}
Further, if we want to compare the $(E/B)(L)$ curves of different phases, these comparisons should be made at the same baryon density. We shall, therefore, assume that $L=L_\text{FCC}=L_\text{BCC}/\sqrt[3]{2}$ whenever such a comparison is made, as, e.g., in Fig. \ref{Figure.EvsL} below.
To satisfy conditions \eqref{Infinite_Matter} at the minimum, we have to find the correct values of the physical constants, and this process must be repeated iteratively until a reasonable convergence is reached. The value of the pion mass is fixed to its physical value $m_{\pi} = 140$ MeV. We take the value $\lambda^2 = 7$ MeV fm$^{3}$, motivated by the $\omega$ meson coupling \cite{Adam:2020aza}; this coupling constant is not varied in the iteration procedure. The values of $f_{\pi}$ and $e$, on the other hand, are given as initial seeds and must be varied until \eqref{Infinite_Matter} is matched.
This iterative process of fitting the constants in the generalized model greatly increases the computation time, since the curve $\widetilde{E}(\widetilde{L})$ must be recomputed to find the minimum each time $f_{\pi}$ and $e$ are changed. To avoid this cost, we take advantage of the (approximate) perfect scaling property of the curve $\widetilde{E}(\widetilde{L})$ near the minimum. In this approximation, the constants $K_i$ are already known, and only $C_6(f_{\pi}, e)$ and $C_0(f_{\pi}, e)$ change. This approach is much faster and reproduces \eqref{Infinite_Matter} with a sufficient accuracy of a few percent; we therefore use it to fit the values of $f_{\pi}$ and $e$ at the minimum. We just have to find the phase of minimum energy for each choice of the Lagrangian, and then we only need the constants $K_i$ for that specific phase.
\subsection{Results}
The values of the physical constants resulting from the perfect scaling are given in \cref{Table.Constants}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$f_{\pi}$ (MeV) & $e$ & $\lambda^2$ (MeV fm$^3$) & $m_{\pi}$ (MeV) \\ \hline
137.83 & 4.59 & 0 & 0 \\ \hline
118.83 & 4.32 & 0 & 140 \\ \hline
160.32 & 8.59 & 7 & 0 \\ \hline
136.85 & 6.46 & 7 & 140 \\ \hline
\end{tabular}
\caption{\small Values of the parameters that fit the infinite nuclear matter for each model.}
\label{Table.Constants}
\end{table}
We show the curves $\widetilde{E}/B$ for the different symmetries and for different models in \cref{Figure.EvsL}. The left upper plot (model $\mathcal{L}_{24} \equiv \mathcal{L}_{\rm Sk}$) reproduces the known results described at the beginning of this Section. A more detailed discussion of the remaining plots will be given below, where we describe the resulting phases of skyrmionic matter at different densities. In \cref{Figure.EvsL} we also use the fact that for all models except for the simplest model $\mathcal{L}_{\rm Sk}$ there exist topological energy bounds \cite{Adam:2013tga} which are tighter than the Skyrme-Faddeev bound. We plot these topological energy bounds for each model. Although the crystals do not reach the bound, they are very close to it at the minimum. We show the values of these bounds and how far the crystals are above it in \cref{Table.Bounds} (both the bounds and the plots in \cref{Figure.EvsL} are given for the values of the parameters specified in \cref{Table.Constants}).
Further, we find that the half-skyrmion phases are well fitted by the proposed parametrization \eqref{E_Fit} even for $L \geq L_{\min}$. However, this parametrization breaks down for large $L$ in the FCC phase, where a more complicated behaviour is observed. Indeed, $\langle \sigma \rangle$ does not vanish for large $L$ in the FCC phase, but has a non-trivial dependence on $L$ which can be fitted by a hyperbolic tangent. For small $L$, however, the FCC phase is either exactly equal to the FCC$_{^1\!/_2}$ phase (a phase transition occurs) or very close to it. In particular, the region where the FCC phase differs significantly from the FCC$_{^1\!/_2}$ phase is always beyond the minimum, i.e., for $L>L_{\rm min}$. As we shall argue below, in this region the FCC crystal is not relevant for the nuclear EoS. We will, therefore, ignore this problem and use the fit parameters $k_i$ that reproduce the half-skyrmion curves, which are given in Table \ref{Table.Fits}. The fit for the FCC$_{^1\!/_2}$ phase serves as a good approximation for the FCC phase in the small-$L$ region.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
\hline
Model & Bound & Crystal value ($\%$) \\ \hline
$\mathcal{L}_{24}$ & 1 & 3.7 \\ \hline
$\mathcal{L}_{240}$ & 1.07 & 5.8 \\ \hline
$\mathcal{L}_{246}$ & 1.57 & 6.2 \\ \hline
$\mathcal{L}_{2460}$ & 1.37 & 8.0 \\ \hline
\end{tabular}
\caption{\small The topological bound for each model, in dimensionless units and with $C_0$, $C_6$ chosen to reproduce infinite nuclear matter. In the right column we show $[(E_{\rm min} - E_{\rm bound})/E_{\rm bound}] \times 100$, i.e., the percentage deviation of the minimum crystal energy from the bound for the FCC lattice, which provides the lowest minimum.}
\label{Table.Bounds}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Model & $k$ & $k_2$ & $k_4$ & $k_6$ & $k_0$ \\ \hline
$\mathcal{L}^{\text{FCC}}_{24}$ & 0.029 & 0.11 & 2.38 & 0 & 0 \\ \hline
$\mathcal{L}^{\text{FCC}}_{240}$ & 0.005 & 0.11 & 2.40 & 0 & 0.008 \\ \hline
$\mathcal{L}^{\text{FCC}}_{246}$ & 0.31 & 0.089 & 2.56 & 0.85 & 0 \\ \hline
$\mathcal{L}^{\text{FCC}}_{2460}$ & 0.55 & 0.050 & 1.61 & 0.89 & 0.012 \\ \hline
$\mathcal{L}^{\text{BCC}}_{24}$ & 0.014 & 0.096 & 3.00 & 0 & 0 \\ \hline
$\mathcal{L}^{\text{BCC}}_{240}$ & 0.011 & 0.096 & 3.00 & 0 & 0.004 \\ \hline
$\mathcal{L}^{\text{BCC}}_{246}$ & 0.195 & 0.087 & 2.96 & 0.239 & 0 \\ \hline
$\mathcal{L}^{\text{BCC}}_{2460}$ & 0.139 & 0.084 & 3.05 & 1.68 & 0.005 \\ \hline
\end{tabular}
\caption{\small Fitting constants for the numerically obtained $\widetilde{E}(\widetilde{L})$ curves.}
\label{Table.Fits}
\end{table}
Furthermore, the constants $K_i$ that result from the perfect scaling are given in \cref{Table.PS}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & $K_2$ & $K_4$ & $K_6$ & $K_0$ \\ \hline
$\mathcal{L}^{\text{FCC}_{1/2}}_{24}$ & 0.111 & 2.43 & 0 & 0 \\ \hline
$\mathcal{L}^{\text{FCC}_1}_{240}$ & 0.114 & 2.41 & 0 & 0.0082 \\ \hline
$\mathcal{L}^{\text{FCC}_{1/2}}_{246}$ & 0.111 & 2.43 & 1.24 & 0 \\ \hline
$\mathcal{L}^{\text{FCC}_1}_{2460}$ & 0.115 & 2.41 & 1.13 & 0.0079 \\ \hline
\end{tabular}
\caption{\small Fitting constants for perfect scaling approximated curves.}
\label{Table.PS}
\end{table}
The values of $f_{\pi}$ and $e$ which reproduce \eqref{Infinite_Matter} can, in fact, be calculated exactly, since the dimensionless Lagrangian $\mathcal{L}_{24}$ does not depend on them,
\begin{equation}
\left( \frac{1.28}{B_{\text{cell}}} \right)^{1/3}\widetilde{L}_{\text{min}} = f_{\pi}e, \hspace{3mm} \frac{923.3}{3\pi^2} \frac{B_{\text{cell}}}{\widetilde{E}_{\text{min}}} = \frac{f_{\pi}}{e},
\label{f-pi-e-eq}
\end{equation}
where $\widetilde{L}_{\text{min}}$ and $\widetilde{E}_{\text{min}}$ denote the values of the length and energy at the minimum, and $B_{\text{cell}}$ is the baryon number of the unit cell (here we have used that the volume of the unit cell is $8\widetilde{L}^3$). In the FCC$_{^1\!/_2}$ phase, $B_{\text{cell}} = 4$, and for the model $\mathcal{L}_{24}$ the exact values are $f_{\pi} = 137.77$ MeV, and $e = 4.59$, which are quite close to those obtained within the perfect scaling approximation. These values are in fact similar to those obtained from fitting the hedgehog solution to the proton \cite{Ding:2007xi}.
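For reference, a minimal numerical transcription of \eqref{f-pi-e-eq} (here we assume that a factor $\hbar c = 197.327$ MeV fm implicitly restores physical units in the first relation; this reproduces the quoted values to within rounding):
\begin{verbatim}
import numpy as np

hbarc = 197.327                        # MeV fm (our assumption on units)
B_cell, L_min, EB_min = 4, 4.7, 1.038  # FCC_1/2 values quoted above
E_min = B_cell * EB_min

fpi_e = hbarc * (1.28 / B_cell) ** (1 / 3) * L_min    # f_pi * e, MeV
fpi_over_e = 923.3 / (3 * np.pi**2) * B_cell / E_min  # f_pi / e, MeV

f_pi = np.sqrt(fpi_e * fpi_over_e)   # ~138 MeV, cf. 137.77 MeV above
e = np.sqrt(fpi_e / fpi_over_e)      # ~4.6, cf. 4.59 above
\end{verbatim}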
For the other models, we do not attempt to calculate $f_\pi$ and $e$ exactly. Instead, we calculate them within the perfect scaling approximation, see \cref{Table.Constants}, and then use \eqref{f-pi-e-eq} to find $\widetilde{L}_{\text{min}}$ and $\widetilde{E}_{\text{min}}$.
In \cref{Figure.EvsL} we also see that the different terms that we include in the Lagrangian have the expected impact on the energy per baryon curve. The sextic term, due to its repulsive behaviour, shifts the position of the minimum to larger lengths, whereas the attractive potential term has the opposite effect. Among the four models that we have studied, the same qualitative behavior of the different crystalline phases is observed in those models which do not include the potential term, i.e., $\mathcal{L}_{24}$ and $\mathcal{L}_{246}$. Indeed, for these two models, the lowest energy phase at low densities corresponds to the FCC crystal of single skyrmions, which eventually undergoes a phase transition and becomes an FCC$_{1/2}$ crystal. Such a transition was already found in \cite{Lee:2003aq}. On the other hand, for the models including a pion mass potential, this transition only occurs asymptotically, since the symmetries of the FCC$_{1/2}$ phase are not compatible with a nonvanishing pion mass. Also, in all models a (first order) phase transition from the FCC$_{1/2}$ to the BCC phase is observed at high densities, as we shall explain in more detail below. Such a transition is also expected from symmetry considerations \cite{goldhaber1987maximal}.
In the rest of this section we will comment on the different density regimes at which each of these phases becomes relevant, and on the possible existence of phase transitions between them.
\newpage
\begin{onecolumngrid}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{EB4figsadim.pdf}
\caption{Energy per baryon as a function of the lattice length parameter for the four different models considered in this paper. Here, the energy per baryon is plotted in the dimensionless units of Table \ref{Table.Bounds}. Further, remember that $L=L_\text{FCC}=L_\text{BCC}/\sqrt[3]{2}$.}
\label{Figure.EvsL}
\end{figure*}
\end{onecolumngrid}
\twocolumngrid
\subsection{The high density phase: transition to a fluid-like configuration}
As stated above, the BPS model shares the properties of a perfect fluid \cite{adam2010bps}. The inclusion of the sextic term in the Skyrme Lagrangian, therefore, will have the effect of homogenizing the densities in the unit cell of a crystal configuration, at least in the density regime where the contribution of this term to the energy becomes relevant. A measure of this homogeneity may be obtained by comparing the energy density with its mean value over the unit cell. Since the sextic term scales as $1/L^3$, we expect that at the energy minimum the density still remains close to that of the FCC$_{^1\!/_2}$ crystal, whereas for decreasing values of $L$ the field configuration will become more similar to a perfect fluid with homogeneous energy density.
As a measure for this effect, we define the radial energy profile (REP), i.e. the energy enclosed by a sphere of radius $r$,
\begin{equation}
E(r) = \int_{|\vec{x}| \leq r} d^3x \:\varepsilon ,
\label{EnEnclosed}
\end{equation}
where $\varepsilon$ is the energy density (the integrand in \eqref{Energy}). For this concrete calculation, we only consider the BCC phase, because {\em i)} it is the relevant phase at high densities, and {\em ii)} the effect of homogenization is stronger for this phase. Further, we use the smaller ``unit cell'' of size $L$ (the energy density has periodicity $L$), surrounded by vacuum. The resulting REP grows with the radius until $r = \sqrt{3}\:L$ and takes a constant value, equal to the energy of the unit cell, for $r \ge \sqrt{3}\:L$. In the case of a fluid, $\varepsilon_{\text{fluid}}$ is constant, therefore we also compute the REP \eqref{EnEnclosed} for a unit cell with the same energy but a constant energy density. The ratio $\chi$ between these two radial energy profiles tells us how far we are from fluid-like behaviour.
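A minimal sketch of this diagnostic, assuming the energy density has already been computed on a cubic grid:
\begin{verbatim}
import numpy as np

def radial_energy_profile(eps, delta, radii, center):
    # eps: energy density on a cubic grid of spacing delta covering
    # the size-L cell (vacuum outside); returns E(r) for each radius.
    # The profile saturates at the total cell energy once the sphere
    # encloses the whole cell.
    ax = [np.arange(n) * delta for n in eps.shape]
    X, Y, Z = np.meshgrid(*ax, indexing="ij")
    R = np.sqrt((X - center[0])**2 + (Y - center[1])**2
                + (Z - center[2])**2)
    return np.array([eps[R <= r].sum() * delta**3 for r in radii])
\end{verbatim}
The fluid reference profile is obtained from the same routine with a constant density carrying the same total energy, and $\chi$ is the pointwise ratio of the two profiles.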
\begin{figure}[h!]
\centering
\includegraphics[scale=0.55]{Fluidity.pdf}
\caption{\small Influence of the sextic term on the ratio between the REPs of the crystal and the fluid, at $L_{\rm min}$ (solid), $\tfrac{2}{3}L_{\rm min}$ (dashed), and $\tfrac{1}{2}L_{\rm min}$ (dotted). The radial coordinate is rescaled by the lattice length, $\bar{r}=r/L$.}
\label{Figure.Fluid}
\end{figure}
In \cref{Figure.Fluid} we can see that the homogeneity of the energy density strongly increases with density, i.e., with decreasing values of the lattice parameter, when the sextic term is included. For the model $\mathcal{L}_{24}$ without the sextic term, on the other hand, the ratio $\chi$ between the lattice and fluid REPs is almost independent of the density and strongly deviates from unity. In other words, skyrmionic matter remains in a crystalline phase up to very high densities without the sextic term, whereas it approaches a fluid phase when the sextic term is included. We remark that the pion mass term is irrelevant for these high-density effects.
Our findings are further illustrated by the three-dimensional energy density plots in \cref{Figure.EDFCC}. There it can be seen that regions of small energy density are almost completely expelled from the unit cell for small $L$ (high density) if the sextic term is included, leading to a much more homogeneous energy density. Without the sextic term, on the other hand, the relative sizes of the regions of small and large energy density remain almost unchanged when $L$ is varied.
In addition, the inclusion of the sextic term is known to result in a much stiffer EoS for Skyrmionic matter at high densities \cite{Adam:2015lra}. This is one of the successes of the generalized model, as argued in \cite{Adam:2020yfv}, since it allows for more realistic neutron star maximum masses than the standard Skyrme model.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{ED_complete.pdf}
\caption{Evolution of the energy density of a unit cell in the BCC half-skyrmion phase with (lower row) and without (upper row) the sextic term, for $L=L_{\rm min}$, $\tfrac{2}{3}L_{\rm min}$ and $\tfrac{1}{2}L_{\rm min}$.}
\label{Figure.EDFCC}
\end{figure}
In fact, one of our main results in this paper is the numerical confirmation of the hypothesis made in \cite{Adam:2020yfv} about the smooth transition between a pure skyrmion crystal and a perfect fluid phase at higher densities, which is crucial for describing the most massive $(M\sim 2.3 M_\odot)$ neutron stars within the (generalized) Skyrme model. We have identified the two principal factors that produce this effect, namely, the inclusion of the sextic term in the generalized model, whose repulsive character tends to homogenize the energy density, and the transition from the FCC to the BCC half-skyrmion phase (see below), which further accelerates this process.
\subsection{The medium density phases and phase transitions}
\subsubsection{FCC to FCC$_{^1\!/_2}$ phase transition}
In \cref{Figure.EvsL} we see that a nonvanishing pion mass potential has a big qualitative effect on the behavior of the $E(L)$ curve of the skyrmion crystal at low densities. Indeed, without the potential term the FCC and FCC$_{1/2}$ curves join around $L_{\rm PT} = 7.7$ and $L_{\rm PT} = 15.5$ with and without the sextic term, respectively, and they have the same energy from there on. In other words, a second order phase transition from FCC to FCC$_{^1\!/_2}$ occurs at these values of the lattice parameter $L$. When we include the potential term, on the other hand, this joining never occurs exactly, since the FCC$_{^1\!/_2}$ symmetries are not respected by the Lagrangian. The FCC curve approaches the FCC$_{^1\!/_2}$ curve in the chiral limit, when $\langle \sigma \rangle \rightarrow 0$. But even in this case, the two curves are essentially indistinguishable for $L\le L_{\rm min}$.
The phase transition in the chirally symmetric case (without pion mass potential) has been previously reported in the literature \cite{Lee:2003aq}, and the vanishing of the mean value of the $\sigma$ field has been proposed as an order parameter signaling this transition, since it vanishes in the half-skyrmion crystal due to the symmetry properties of the unit cell in this phase. The physical significance of such a transition has also been extensively studied \cite{Park:2008xm,Vento:2017ypn}. Moreover, this transition, which involves a \emph{topology change} ---in the sense that the $4$ skyrmions of a unit cell must split into $8$ half-skyrmions with the same total baryon number--- \cite{Harada:2016tkf}, has been argued to be a genuine prediction of the Skyrme crystal model for dense nuclear matter, and to have nontrivial observational effects in the EoS of neutron stars.
In these investigations, the Skyrme model (and Skyrme crystal) is typically embedded into a larger effective model motivated by QCD, containing, e.g., the dilaton field in order to recover the scale invariance of Yang-Mills theory at high density. Here we want to argue, however, that at least for the pure Skyrme model without these additional DoF, the physical relevance of this phase transition is questionable. Firstly, this phase transition always occurs at an $L_{\rm PT} > L_{\rm min}$, i.e., in a region where the energy per baryon $E/B$ {\em grows} with $L$. But this corresponds to a {\em thermodynamically unstable} region with negative pressure, as was already pointed out in \cite{kugler1989skyrmion}.
Secondly, in the next section we will show that there exists another skyrmion lattice phase with lower energy per baryon than the FCC crystal of skyrmions in the region $L\ge L_{\rm min}$. Further, this phase evolves naturally towards a half-skyrmion phase without involving any sort of change in the topology of the field configurations. Concretely, this phase describes a cubic lattice of $B=4$ skyrmions, i.e., $\alpha$ particles. In this phase, the individual $\alpha$ particles are free to occupy their preferred volume within the $B=4$ unit cell, and we find that, indeed, for large $L$ they only occupy a small fraction of the total volume.
This is in accordance with the physical picture that at low densities nuclear matter clusters into larger substructures (nuclei) and not just individual nucleons. $\alpha$ particles are good candidates for these substructures, because they are strongly bound, both in nature and in the Skyrme model.
\subsubsection{FCC$_{^1\!/_2}$ to BCC phase transition}
The energies per baryon number of the FCC and BCC phases have been compared in \cref{Figure.EvsL}.
We appreciate in the plots of this figure that the intersections of the BCC and FCC curves, marked by a cross in all cases, always occur for rather small values of $L$ and, in particular, always for $L<L_{\rm min}$. In this region, the FCC and FCC$_{^1\!/_2}$ curves are indistinguishable, and we use the numerical FCC$_{^1\!/_2}$ results for our calculations.
In order to obtain the correct ground state of the crystal, we have to compare the energies per baryon number at the same baryon density. This implies $L=L_\text{FCC} = L_\text{BCC}/\sqrt[3]{2}$, as explained above. From \cref{Figure.EvsL} we find that the minimum of the energy is always given by the FCC phase, but then at some $\widetilde{L} =\widetilde{L}_{\rm T}$ the $E/B$ curves for the FCC and the BCC phases intersect.
The two curves have different slopes at their crossing, which implies that the phase transition is of first order and must be treated by a Maxwell construction, where the two phases are connected by a region of phase coexistence at constant pressure \cite{kugler1989skyrmion}. This implies that both the baryon density and the energy density suffer a sudden jump in the phase coexistence region when expressed as functions of the pressure.
The values $\widetilde{L}_{\text{T}}$ at which the intersections of the two curves occur are given in Table \ref{Table.Cuts}. In the same table, we also compare $\widetilde{L}_{\text{T}}$ to $\widetilde{L}_\text{min}$ (which gives the density of nuclear matter at saturation), and we provide the pressure at the phase transition (at phase coexistence) and the jumps suffered by the energy density and the baryon density. The values of the physical coupling constants for the different models are given in Table \ref{Table.Constants}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
 & $\mathcal{L}_{24}$ & $\mathcal{L}_{240}$ & $\mathcal{L}_{246}$ & $\mathcal{L}_{2460}$ \\ \hline
$\widetilde{L}_{\text{T}}$ & 1.13 & 0.98 & 7.03 & 5.27 \\ \hline
$\widetilde{L}_{\text{min}}$ & 4.7 & 3.8 & 10 & 6.5 \\ \hline
$\widetilde{L}_\text{T}/\widetilde{L}_{\rm{min}}$ & 0.24 & 0.26 & 0.72 & 0.80 \\ \hline
$p_{\text{T}}$ (MeV/fm$^3$) & 6905 & 5833 & 108.2 & 53.7 \\ \hline
$\Delta \rho$ (MeV/fm$^3$) & 669 & 484 & 41.9 & 25.8 \\ \hline
$\Delta n$ (fm$^{-3}$) & 0.26 & 0.19 & 0.03 & 0.02 \\ \hline
\end{tabular}
\caption{\small FCC to BCC phase transition.}
\label{Table.Cuts}
\end{table}
The transition to the BCC phase was expected, since the symmetries of this phase are mainly motivated at high densities. Without the sextic term, however, the transition occurs at densities so high that they are unreachable inside neutron stars. It is clearly seen that the inclusion of the sextic term decreases the density at which the transition occurs, making it more likely that this phase transition takes place inside NS.
\subsection{The low density phase: a lattice of $B=4$ skyrmions}
The energy per baryon of the Skyrme crystal ansatz is bounded from below, by construction. In fact, there is a topological bound which must be satisfied at any length scale. Furthermore, although $E/B$ gets rather close to the topological bound at the minimum ---as can be seen in \cref{Figure.EvsL}---, which corresponds to the nuclear saturation density $n_0$ (see next section), the fact that the energy per baryon grows with $L$ for $n<n_0$ shows that this particular ansatz is not valid for densities lower than $n_0$. It was first argued in \cite{Park:2002ie} that the correct minimum energy phase in this regime should correspond to an inhomogeneous phase in which skyrmions collapse into lumps, with most of the space filled with vacuum. Further, in \cite{Park:2019bmi} a concrete realization of such a phase was proposed, constructed in terms of planar structures from the Atiyah-Manton instanton ansatz \cite{ATIYAH1989438}. However, this phase lacks the isotropy that one would expect from infinite nuclear matter. We now argue that there is an even simpler phase which may play the role of such an inhomogeneous phase while keeping the cubic symmetry of the unit cell, namely the $\alpha$-particle lattice phase.
The key point is that when the parameter $L$ grows, the distance between half-skyrmions uniformly increases, and so do the unit cell and its energy. Nevertheless, we may assume that for lattice lengths beyond that of minimum energy, it is energetically more favourable for each unit cell to collapse into a lump with the same baryon charge, so that the Skyrme crystal fragments into a lattice of well defined $B=4$ skyrmions (one per unit cell) interacting with their neighbours. This was actually reported in \cite{SilvaLobo:2010acs} for the standard Skyrme model. Once the field has reached this phase, the length scale of each unit cell is no longer given by the size of each skyrmion, but by the distance between them, so that decreasing the density does not necessarily imply a change in the skyrmion size.
A simple way to take into account the effect of finite density is to consider skyrmions on the 3-torus (i.e., imposing periodic boundary conditions). In a first approximation, we describe the low energy skyrmion lattice using skyrmions that preserve the cubic symmetry of the unit cell (i.e., symmetries \eqref{A1}, \eqref{A2}), the simplest of them being the cubic $B=4$ skyrmion (the $\alpha$ particle). We then numerically obtain the energy per baryon number of $\alpha$ particles on the three-torus as a function of the torus size parameter $L$, where now $2L$ represents the distance between nearest-neighbour unit cells of the physical skyrmion lattice. We have done this calculation with the help of a gradient flow algorithm for energy minimization, on a cubic grid with $n^3$ points, with $n=2L/\delta+5$, $\delta$ being the distance between points in the grid. The extra points are needed due to the periodic boundary conditions, which were imposed by identifying the first and last two points of the grid in each dimension. The initial condition for the fields was generated from the $B=4$ rational map ansatz \cite{manton2004topological}, but using a rescaled radial coordinate for the profile function, of the form $f(r) = \tfrac{\pi}{1+(\alpha r/L)^2}$, to account for the squeezing of the cell. The constant $\alpha$ is chosen freely so that the initial ansatz is well behaved within the unit cell; in our case, it is sufficient to take $\alpha = 5$. Once the initial condition was obtained, we ran the gradient flow algorithm until an error of $\sim 10^{-4}$ in the baryon number and a convergence of the same order in the total energy were reached.
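Two schematic ingredients of this procedure, as a sketch (here \texttt{energy\_gradient} is a placeholder for the discretized variation of \eqref{Energy}, which we do not reproduce):
\begin{verbatim}
import numpy as np

def initial_profile(r, L, alpha=5.0):
    # Rescaled radial profile of the B = 4 rational-map initial ansatz
    return np.pi / (1.0 + (alpha * r / L)**2)

def flow_step(n, dt, energy_gradient):
    # One step of normalized gradient flow; n has shape (4, N, N, N).
    # Periodicity is imposed on the grid itself, e.g. via np.roll in
    # the finite differences inside energy_gradient.
    n = n - dt * energy_gradient(n)
    return n / np.linalg.norm(n, axis=0)   # project back onto S^3
\end{verbatim}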
In our numerical calculations, we only consider the model $\mathcal{L}_{240}$ without the sextic term.
The technical reason is that the calculations with the sextic term included become much more involved.
The physical reason is that, at low densities, where the $\alpha$ particle lattice is relevant, the contributions of the sextic term are small and should not qualitatively change our results. More precisely, in the interior of the individual $\alpha$ particles, the sextic term will have a certain influence, essentially consisting in the expulsion of low energy density regions. The symmetries of the $\alpha$ particles should remain unchanged, because the sextic term is invariant under volume-preserving shape changes (diffeomorphisms). In the large near-vacuum regions between the $\alpha$ particles, on the other hand, the sextic term can be safely neglected.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{Alpha2fcc.pdf}
\caption{$E(L)$ curves for the skyrmion crystal phases and the $\alpha$ particle lattice (asterisks).}
\label{fig:Alphalattice_EvL}
\end{figure*}
We emphasize that the symmetries \eqref{A1}, \eqref{A2} of the $\alpha$ particle lattice are a subset of the symmetries of all the crystals which we considered. That is to say, the constraints imposed on the $\alpha$ lattice field configurations form a subset of the constraints imposed on all other lattices. This implies that the energy-per-baryon curve of the $\alpha$ lattice is bounded from above by all the other $E/B$ curves, i.e., it is a better approximation to the true minimum energy configuration than the crystals. The physical expectation is that the $\alpha$ particle lattice will lead to a strictly lower energy for sufficiently low densities (large $L$), whereas the more symmetric crystals will be recovered in the high-density region, either asymptotically or via a second-order phase transition.
In \cref{fig:Alphalattice_EvL} we can see that the energy per baryon number of $\alpha$ particles on $T^3$ tends to that of the isolated $B = 4$ skyrmion at low densities, and that the $\alpha$-lattice phase has a lower energy per baryon than all the skyrmion crystal phases for $L>L_{\rm min}$, so that it is a strictly better ansatz for the low density region. Indeed, our numerical results indicate that there is a transition near the energy minimum, such that the interpolated curve between the two phases (FCC crystal before $L_{\rm min}$ and $\alpha$-lattice just after) describes the correct behavior of skyrmion matter in this range of densities. The energy density plots of \cref{fig:alpha_transition} confirm the behaviour described above. For sufficiently large $L$, the $\alpha$ particle only occupies a small fraction of the unit cell. For small $L$, instead, we recover the half-skyrmion structure of the FCC$_{^1\!/_2}$ lattice. To appreciate that the energy density of \cref{fig:alpha_transition} approaches that of \cref{Figure.D} in the limit of small $L$, we have to shift the energy density plot of \cref{Figure.D}, left panel, by $L/2$ in the $x$ and $y$ directions. The reason is that in \cref{fig:alpha_transition} the $\alpha$ particle is always placed in the center of the unit cell, whereas in \cref{Figure.D} the half-skyrmions are placed at the corners of the unit cell. We remark that the transition from the $\alpha$ lattice to the FCC$_{^1\!/_2}$ lattice happens quite naturally, and no topology change occurs, since the half-skyrmion structure of the energy density is already present in the $\alpha$ particles, as can be seen in \cref{fig:alpha_transition}.
\begin{figure}
\centering
\includegraphics[scale=0.30]{Transition_2.pdf}
\caption{Energy density contours for the minimum energy field configuration on $T^3$ for different values of the torus length ($\Tilde{L}=8,5,4,2$). The color scheme is as in \cref{Figure.D}.}
\label{fig:alpha_transition}
\end{figure}
Indeed, it is well-known that already the single $B=4$ skyrmion first reported in \cite{Braaten:1989rg}, corresponding to the $L\to \infty$ limit of our $\alpha$ particles on $T^3$, shows this half-skyrmion substructure, see e.g., \cite{Manton:2011mi}. We emphasize that this half-skyrmion substructure of the $\alpha$ particles is not imposed in the numerical procedure. Instead, it is a property of the resulting solution.
Furthermore, this transition to the $\alpha$ particle lattice renders the difference between the minimum energy and the energy at $L\rightarrow \infty$ not only finite, but very small (about $5\%$). We could reduce this difference even further by considering a larger unit cell containing, e.g., the $B = 32$ or $B = 108$ solutions, which have a slightly lower energy per baryon than the $\alpha$ particle and share its cubic symmetry. However, the improvement would be rather small, of the order of $1$ or $2$ percent, so the $\alpha$-lattice approximation, being significantly simpler from the numerical point of view, already constitutes a good candidate for the description of skyrmion matter at low densities.
\section{The Skyrme crystal EoS}
Before discussing the EoS resulting from the crystals which minimize the energy per baryon in the different density regions, it is useful for our purposes to show a figure similar to \cref{Figure.EvsL}, but where the baryon density $n$ is used as the independent variable (horizontal axis) and the $E/B$ vs. $n$ curves are shown in physical units. That is to say, the true minimum (the minimum of the FCC or FCC$_{^1\!/_2}$ curve) is located at the saturation density $n_0$ and takes the value $(E/B)_0 = 923.3\;$MeV.
It is clearly visible from \cref{Figure.Evsn} that the increase of the energy per baryon with $n$ is much steeper for models including the sextic term, so that much higher energies are reached at the same baryon density. This implies a much stiffer EoS. Further, the FCC-to-BCC phase transition occurs in a region of $3$--$5\, n_0$ when the sextic term is included, which is clearly relevant for the interior of sufficiently heavy NS. This is no longer the case without the sextic term. Also the second order phase transition from FCC to FCC$_{^1\!/_2}$ for the models without potential can be clearly identified in the unstable region $n < n_0$. When the pion mass potential is included, this phase transition turns into an asymptotic approach.
The equation of state (EoS) $\rho(p)$ is the relation between the (thermodynamical, average) energy density $\rho$ of a system and the pressure $p$ applied to it. Both magnitudes $\rho$ and $p$ as well as the baryon density $n$ can be obtained from the crystal energy, using their thermodynamical definitions
\begin{align}
\rho &= \frac{E}{V} = \frac{N_{\text{cells}}\:E_{\text{cell}}}{N_{\text{cells}}\:V_{\text{cell}}} = \frac{E_{\text{cell}}}{8L^3},
\label{eos1} \\[2mm]
p &= -\frac{\partial E_{\text{cell}}}{\partial V_{\text{cell}}} = -\frac{1}{24L^2}\frac{\partial E_{\text{cell}}}{\partial L}, \label{eos2} \\[2mm]
n &= \frac{B_{\text{cell}}}{V_{\text{cell}}} = \frac{B_{\text{cell}}}{8L^3}. \label{eos3}
\end{align}
Again, all these quantities remain finite in the thermodynamical limit. Since the energy of the unit cell is a function of $L$, Eq. \eqref{E_Fit}, we have to invert the relation $p(L)$ to obtain $L(p)$ and, finally, the EoS.
For the standard Skyrme Lagrangian this inversion can be performed analytically, but once the sextic and potential terms are included, it must be done numerically. Further, it is obvious from the above definitions that the region $L>L_\text{min}$ (or $n<n_0$), where $E/B$ grows with $L$, corresponds to a thermodynamically unstable region of negative pressure. This remains true even if the $\alpha$-particle phase is included, although this phase ameliorates the problem. We shall exclude those regions from our plots of the EoS, which are, therefore, restricted to $p\ge 0$ ($n\ge n_0$). In \cite{Adam:2020yfv} the EoS was extended to $n< n_0$ by a smooth interpolation to the standard nuclear physics EoS of \cite{Sharma:2015bna}. Below we shall discuss possibilities to overcome this restriction and to derive a purely Skyrme model EoS valid for all densities.
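A minimal sketch of this numerical inversion, using the fitted form \eqref{E_Fit}; the constants $C_6$, $C_0$ are model-dependent inputs and are set to illustrative values here:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

B, C6, C0 = 4, 1.0, 1.0                             # illustrative C6, C0
k, k2, k4, k6, k0 = 0.55, 0.050, 1.61, 0.89, 0.012  # FCC fit, L_2460

def E_cell(L):
    return B * (k + k2*L + k4/L + C6*k6/L**3 + C0*k0*L**3)

def dE_dL(L):
    return B * (k2 - k4/L**2 - 3*C6*k6/L**4 + 3*C0*k0*L**2)

def pressure(L):                   # Eq. (eos2): p = -dE/dV, V = 8 L^3
    return -dE_dL(L) / (24 * L**2)

L_min = brentq(dE_dL, 0.5, 10.0)   # p(L_min) = 0

def eos(p):                        # rho(p), n(p) on the stable branch
    L = brentq(lambda l: pressure(l) - p, 1e-2, L_min)
    return E_cell(L) / (8 * L**3), B / (8 * L**3)
\end{verbatim}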
The EoS resulting from \eqref{eos1}--\eqref{eos3} are shown in \cref{Figure.EoS}, in which the energy and baryon densities are plotted against the corresponding pressure, for a range of values which has been shown to be physically relevant for matter inside neutron stars \cite{Adam:2020aza}.
\newpage
\begin{onecolumngrid}
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{EB4figs.pdf}
\caption{ Energy per baryon of the unit cell vs baryon density of the different crystals in various models. The true (FCC) minima are fitted to the energy and baryon density of symmetric, infinite nuclear matter at saturation.}
\label{Figure.Evsn}
\end{figure*}
\end{onecolumngrid}
\twocolumngrid
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{EoS.pdf}
\caption{Equation of state for all four models.}
\label{Figure.EoS}
\end{figure}
As explained in \cref{Sec:Numproc}, the free parameters of each model are fitted so that the minimum energy per baryon of the crystal corresponds to that of saturated, infinite nuclear matter. In particular, this implies that all the EoS depicted in \cref{Figure.EoS} must converge to the same point in the $(\rho,p)$ plane as $p$ goes to zero. However, since we have determined the values of $f_{\pi}$ and $e$ using the perfect scaling approximation, these curves do not reproduce \eqref{Infinite_Matter} exactly. Nevertheless, the largest deviation occurs for the $\mathcal{L}_{246}$ case and is about $6\%$.
Furthermore, as shown in the previous section, a phase transition between the FCC$_{^1\!/_2}$ and BCC phases is expected to occur in the high density region. We take this transition into account in the equation of state by joining the corresponding EoS of the two phases via the Maxwell construction, i.e., the points at which the values of $\pdv{E_\text{cell}}{V_\text{cell}}$ of the two phases coincide are joined by a straight line tangent to both curves. This means that, at a certain value of the pressure, the baryon and energy densities suffer a sudden jump, which corresponds to a first order phase transition.
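Schematically, the construction solves two conditions (equal pressure and equal enthalpy per baryon) for the two coexistence volumes; a sketch in terms of the energy per baryon $e(v)$ as a function of the volume per baryon $v$, with the curve callables built, e.g., from the fits above:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def maxwell(e1, de1, e2, de2, v0=(1.0, 2.0)):
    # e1, e2: energy per baryon vs. volume per baryon for each phase;
    # de1, de2: their derivatives. Coexistence: p1 = p2 = -de/dv and
    # equal e + p v in both phases (common tangent).
    def conditions(v):
        v1, v2 = v
        p1, p2 = -de1(v1), -de2(v2)
        return [p1 - p2, (e1(v1) + p1 * v1) - (e2(v2) + p2 * v2)]
    v1, v2 = fsolve(conditions, v0)
    return v1, v2, -de1(v1)   # coexistence volumes and pressure p_T
\end{verbatim}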
In \cref{Figure.EoS} we see that the inclusion of the sextic term in the Lagrangian significantly stiffens the resulting EoS, which was of course expected due to the incompressible character of matter in the BPS Skyrme submodel, towards which the generalized model tends at large pressures. Another effect of the inclusion of the sextic term is that the FCC-to-BCC first-order phase transition is shifted to smaller densities, which may become relevant for the core region of heavy NS.
\subsection{Towards a description of asymmetric nuclear matter and NS crusts within the Skyrme model}
Despite constituting only $\sim 1\%$ of the total stellar mass, the crust, defined as the external region of a neutron star with densities $\rho \lesssim 10^{14} \rm g \, cm^{-3}$, plays an important role for determining many observational properties, such as the tidal deformability or the cooling rate via neutrino emission. It is also a crucial element to explain radio pulsar glitches \cite{Chamel:2008ca}.
A good effective model for nuclear matter in neutron stars, therefore, should be able to describe matter at such (and lower) densities. However, it is precisely at these density regimes where the Skyrme model approach to nuclear matter becomes problematic, because the energy density obtained from the thermodynamical definition in all the phases studied above reaches a finite value at zero pressure, due to the presence of a minimum in the curve $E(L)$.
The presence of such a minimum in the binding energy is a feature shared by all models of symmetric nuclear matter \cite{Sammarruca:2010gc,Sammarruca:2021bpn,Ekstrom:2015rta,DBHFapproachNucmat}, and signals the point at which nuclear matter is most bounded, referred to as the nuclear saturation point in standard nuclear physics literature.
This is, in fact, the main reason why we interpret the classical Skyrme crystal configurations as models for symmetric nuclear matter and identify the minimum of $E/B$ with the nuclear saturation point.
This minimum, however, does not show up in physical nuclear matter, and deviations from the symmetric nuclear matter model become relevant near this point. Indeed, our approach to nuclear matter has only taken into account the \emph{classical} properties of the Skyrmion crystals. In other words, we have not taken into account, for example, the so called \emph{symmetry energy} ---which in the Skyrme model results from the quantization of isospin---, that is of great importance when describing nuclear matter at these density regimes. The correct treatment of quantum effects, such as isospin interactions due to the difference between the proton and neutron number, as well as the Coulomb forces, require a detailed analysis that will be developed in a future publication.
It is expected \cite{Baskerville:1996he}, however, that the quantum corrections to the skyrmion crystal will only be relevant precisely in the intermediate density regime at which the $E(L)$ curve presents its minimum, and that such a minimum will disappear when these quantum effects are properly taken into account. Indeed, in \cite{Baskerville:1996he} a correction of about $4\%$ to the energy at the minimum was obtained from the isospin contribution in the half Skyrmion phase, to be compared with the $5\%$ difference in energy per baryon between the minimum and the $L\rightarrow \infty$ limit in the new $\alpha$ lattice phase.
To summarize, there are strong indications that a more complete description of Skyrmionic matter which includes both quantum and Coulomb effects can erase the minimum in $E/B$ and, therefore, lead to an EoS that is valid also at low densities $n<n_0$.
In this case, we would be able to construct a genuine equation of state for physical nuclear matter and neutron stars from the Skyrme model alone, valid for the full range of densities, hence able to describe both the ultradense NS cores and the solid NS crusts within a single effective model.
\section{Conclusions}
It was the main purpose of the present paper to provide a detailed investigation of the different phases of Skyrme crystals in the generalized Skyrme model defined in Eq. \eqref{Lag} and the resulting EoS, having in mind mainly its application to nuclear matter and the description of neutron stars. More concretely, we
\begin{enumerate}
\item confirmed the importance of the sextic term in the generalized Skyrme Lagrangian for describing nuclear matter at sufficiently high densities. As conjectured in \cite{Adam:2020yfv}, the inclusion of this term leads to an EoS which behaves like a crystal at low densities but approaches a perfect fluid in the high-density limit. The sextic term is crucial for describing NS cores, implying, in particular, their perfect-fluid behavior, because it allows us to describe NS with masses up to the recently observed $M \sim 2.3 M_\odot$.
\item presented a clear picture of the different known crystalline phases of the Skyrme model, as well as the possible transitions between phases, and discussed on physical grounds whether or not they may appear as the true ground states of symmetric nuclear matter at a given density. Specifically, we found that the FCC-to-BCC phase transition, which occurs at unrealistically high densities in the standard Skyrme model, is shifted to densities of $3$--$5\, n_0$ when the sextic term is included. This density region is certainly relevant for the inner core of sufficiently heavy neutron stars. The FCC-to-FCC$_{^1\!/_2}$ phase transition, on the other hand, is unlikely to be of relevance for the nuclear matter EoS. First of all, it occurs in the thermodynamically unstable region $n<n_0$ of the classical Skyrme crystal. Secondly, we found that there exists another phase, a lattice of $\alpha$ particles, with strictly lower energy in this region.
\item described this new phase, the $\alpha$-particle lattice, which is obtained numerically using a gradient flow procedure, and represents (to our knowledge) the best approximation for the ground state of skyrmion matter past the minimum of the energy-per-baryon curve. Furthermore, this phase has some appealing characteristics to make it a good model for nuclear matter in neutron star crusts, which are believed to consist of well defined nuclei sparsely distributed in a lattice.
\end{enumerate}
In this paper, we only investigated classical Skyrme crystals which, up to a certain degree, can be viewed as models for symmetric nuclear matter. The resulting EoS could still be used at sufficiently high densities, where it gives a reasonable description, and matched to a standard nuclear physics EoS at some density $n_* > n_0$ to calculate the resulting neutron star EoS, as we did in \cite{Adam:2020yfv}. However, our ultimate objective is to achieve a good phenomenological description of the nuclear matter EoS in all regimes of density and pressure using only the Skyrme model (or some extensions thereof) to represent baryonic DoF.
A next important step in this direction would be the inclusion of both quantum corrections and Coulomb effects into our Skyrme crystal calculations. These corrections, which would, e.g., incorporate effects of the symmetry energy, may lead to a Skyrme-model based EoS which is valid for the whole density range of neutron stars, from the inner core to the crust, thus providing us with an approximate EoS for asymmetric nuclear matter. As argued in the previous section, preliminary results involving the addition of isospin quantum corrections to the Skyrmion crystal energy per baryon are very encouraging.
Another important issue is the inclusion of further degrees of freedom besides the pions. Indeed, the appearance of hyperons and, in particular, the condensation of kaons is expected to occur at sufficiently high densities in the cores of heavy NS, leading to a softer EoS. Previous investigations suggest that in the standard Skyrme model, kaon condensation sets in at about $3.5\, n_0$ \cite{Westerberg:1994hu}. However, the magnitude of its effect on the resulting EoS, as well as the effect of the sextic term on the kaon condensation onset, are both worth investigating. Further, the importance of vector mesons, concretely the rho meson, for the correct formation of alpha particle clusters in light nuclei has been demonstrated recently in \cite{Naya:2018kyi}. It is perfectly conceivable that the inclusion of rho mesons is also required for a realistic description of nuclear matter. All these questions require further studies.
At this point, it is interesting to recall the main differences between the Skyrme model, on the one hand, and other effective field theory approaches like ChPT, on the other. In those theories, the nucleons are treated as quantum mechanical point particles, and many resulting properties of strongly interacting matter are related to the corresponding quantum effects, like the degeneracy pressure or the in-medium formation of Cooper pairs leading to a neutron superfluid. In the Skyrme model, instead, the nucleons are extended objects already classically, and the most important question for the determination of the EoS is how these finite chunks of matter must be arranged in order to minimize the energy per baryon number. Quantum corrections can, in principle, be included in the Skyrme model description of nuclear matter, but experience tells us that they are subleading in many cases.
In other words, the Skyrme model approach to nuclear matter assumes that, at least at sufficiently high densities, the extended, non-point-like character of the nucleons is their most important property.
Physical nucleons are extended objects and, in addition, the nuclear force becomes strongly repulsive at short distances; therefore, this assumption seems reasonable.
In any case, our point of view is that one should simply develop the Skyrme model predictions for the properties of strongly interacting matter as far as possible, work out their consequences, and compare with the available data, especially those extracted from neutron star observations, which currently seem to be the most reliable ones at high densities.
Such an open-minded approach is all the more justified because {\em i)} experimental results on strongly interacting matter above saturation density are still quite scarce and {\em ii)} more standard approaches face some difficulties in explaining several neutron star puzzles like, e.g., the rather high observed maximum NS masses, or the so-called hyperon puzzle. In addition, already a rather simple Skyrme-model based approach to neutron stars leads to very reasonable results for NS properties \cite{Adam:2020yfv}, as mentioned above.
To summarize, we think that our results present an important next step towards the final goal of a realistic description of nuclear matter and neutron stars within the framework of the (generalized) Skyrme model.
\begin{acknowledgements}
The authors would like to thank C. Naya for helpful discussions.
Further, the authors acknowledge financial support from the Ministry of Education, Culture, and Sports, Spain (Grant No. FPA2017-83814-P), the Xunta de Galicia (Grant No. INCITE09.296.035PR and Centro singular de investigaci\'on de Galicia accreditation 2019-2022), the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), the Mar\'ia de Maeztu Unit of Excellence MDM-2016-0692, and the European Union ERDF.
AW is supported by the Polish National Science Centre,
grant NCN 2020/39/B/ST2/01553.
AGMC is grateful to the Spanish Ministry of Science, Innovation and Universities, and the European Social Fund for the funding of his predoctoral research activity (\emph{Ayuda para contratos predoctorales para la formaci\'on de doctores} 2019). MHG is also grateful to the Xunta de Galicia (Conseller\'ia de Cultura, Educaci\'on y Universidad) for the funding of his predoctoral activity through \emph{Programa de ayudas a la etapa predoctoral} 2021.
\end{acknowledgements}
\section{Introduction}
The $K$-\emph{nearest}-\emph{neighbour} ($K$-NN) \cite{FixHodges:51,FixHodges:52}
classifier is an elegantly simple and surprisingly effective learning
machine. It takes as input a set of training objects and their labels,
and, for a given test object represented in terms of its \emph{pairwise}
\emph{distances} to the training objects, returns a label determined
by a majority vote over the labels of the $K$ nearest neighbours
in the training sample. $K$-NN is not only conceptually simple but
also very versatile because it does not require a vectorial representation
but only the pairwise distances between test and training objects.
It is thus applicable to all kinds of \emph{structural} \emph{data}
like strings or graphs as long as a meaningful (in the sense of the
classification task) distance measure can be defined. $K$-NN also
has some remarkable asymptotic properties. It is \emph{universally}
\emph{consistent} in the sense that it converges to the Bayes decision
if $K\rightarrow\infty$ and $K/m\rightarrow0$ as the training sample
size $m\rightarrow\infty$. Also under certain regularity conditions
the risk of $1$-NN for $m\rightarrow\infty$ is bounded from above
by twice the Bayes error, $R_{\infty}\left(\mathrm{NN}_{1}\right)\leq2R\left(h_{\mathrm{Bayes}}\right)$,
while for $K$-NN it can be shown that $R_{\infty}\left(\mathrm{NN}_{K}\right)\leq R\left(h_{\mathrm{Bayes}}\right)(1+\sqrt{2/K})$.
With regard to the \emph{computational} \emph{effort} a simple analysis
yields $\mathcal{O}\left(mKd\right)$ where $d$ represents the cost
of one distance evaluation. More refined analysis reveals that for
fixed $K$ and $d$ the worst case time is $\mathcal{O}\left(m^{1/d}\right)$
and the expected time is $\mathcal{O}\left(\log m\right)$. These
results and more regarding $K$-NN can be found in \cite{Devroye:96}.
In this paper we will be concerned with bounds on the risk of the
$K$-NN classifier for \emph{small} \emph{sample} \emph{size}. Why
should such an analysis be of interest? To answer this question consider
the infamous \emph{no}-\emph{free}-\emph{lunch} \emph{theorem} by
Wolpert \cite{Wolpert:95}. This theorem essentially states that averaged
over a uniform distribution over all learning problems no classifier
is better than any other. This theorem may at first glance leave no
hope for the successful development of reliable learning algorithms.
More careful analysis, however, reveals that only the objective of
developing a universally best learning machine is led ad absurdum.
What the theorem \emph{does} tell us is that given a sample and a
learning algorithm we should require the learning algorithm to output
not only a classifier but also a \emph{performance} \emph{guarantee}:
we require the learning algorithm to be \emph{self}-\emph{bounding}
\cite{Freund:98}. This performance guarantee is best given in terms
of an \emph{a-posteriori} bound on the risk of the classifier. Standard
PAC/VC theorems provide \emph{a-priori} results in the sense that
the bound value is entirely determined by the level of confidence
$1-\delta$, the number $m$ of training examples, the empirical risk
$R_{\mathrm{emp}}$, and the complexity of the hypothesis class $\mathcal{H}$
--- usually expressed in terms of its VC-dimension $d_{\mathrm{VC}}$.
These bounds can thus be evaluated \emph{before} learning if $R_{\mathrm{emp}}=0$
is enforced, or after learning when $R_{\mathrm{emp}}$ is known.
In contrast, an a-posteriori bound may only be evaluated \emph{after}
learning, because it takes into account the match between the hypothesis
class $\mathcal{H}$ and the training data $Z$, e.g.~in terms of
the \emph{margin} observed on the training sample.
The idea of a-posteriori bounds was developed in statistical learning
theory and the first conceptual framework for such bounds was \emph{structural
risk minimisation} \cite{Vapnik:98}\emph{.} The idea was further
developed to include \emph{data}-\emph{dependent} structural risk
minimisation \cite{Taylor:96} that is capable of exploiting \emph{luckiness}
w.r.t.~the match of input data and learning machine. The latest results
are now known as the \emph{PAC}-\emph{Bayesian} \emph{framework} based
on work by David McAllester \cite{McAllester:98}. Note that the PAC-Bayesian
framework also provided the basis for the discovery of a very tight
margin bound for linear classifiers in kernel spaces \cite{Herbrich:99b}.
In Section \ref{sec:pac_bayes} we introduce basic concepts and notation
and present the PAC-Bayesian results on which our analysis is based.
In Section \ref{sec:knn_review} we briefly review the definition
of the $K$-nearest-neighbour classifier. In Section \ref{sec:1nn}
the $1$-NN algorithm is formulated as the limiting case of a linear
classifier in a kernel space. This leads to an intuitive explanation
of its generalisation ability. The resulting hypothesis space is then
used in Section \ref{sec:bound} by defining a sparse prior that leads
to a PAC-Bayesian bound for $1$-NN. This result is then generalised
to $K$-NN in Section \ref{sec:knn}. Finally, we conclude and point
to ideas for future work by relating $K$-NN to Support Vector Machines
(SVM) \cite{Vapnik:98}.
Throughout the paper we denote probability measures by $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
and the related expectation by $\boldsymbol{\mathsf{E}}_{\mathsf{H}}$.
The subscript refers to the random variable.
\section{Learning in the PAC-Bayesian framework\label{sec:pac_bayes}}
We consider the learning of binary classifiers. We define learning
as the process of selecting one hypothesis $h$ from a given hypothesis
space $\mathcal{H}$ of hypotheses $h:\mathcal{O}\rightarrow\mathcal{Y}$
that map objects $\boldsymbol{\mathsf{o}}\in\mathcal{O}$ to labels
$y\in\mathcal{Y}=\left\{ -1,+1\right\} $. The selection is based
on a training sample $Z$ comprised of a set $O=\left\{ \boldsymbol{\mathsf{o}}_{1},\ldots,\boldsymbol{\mathsf{o}}_{m}\right\} \in\mathcal{O}^{m}$
of objects and their corresponding labels. We will assume the training
sample $Z$ to be drawn iid from a probability measure $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}\equiv\boldsymbol{\mathsf{P}}_{\mathsf{OY}}=\boldsymbol{\mathsf{P}}_{\mathsf{Y}|\mathsf{O}}\boldsymbol{\mathsf{P}}_{\mathsf{O}}$.
Based on these definitions let us define the risk $R\left(h\right)$
of a hypothesis $h$ by
\[
R\left(h\right)=\boldsymbol{\mathsf{P}}_{\mathsf{Z}}\left[h\left(\mathsf{O}\right)\neq\mathsf{Y}\right]\,.
\]
A reasonable criterion for learning is to try to find the hypothesis
$h^{*}=\mathrm{argmin}_{h}R\left(h\right)$ that minimises the risk.
The difficulty in this learning task lies in the fact that the probability
measure $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$ is unknown. Let us
define the empirical risk $R_{\mathrm{emp}}\left(h,Z\right)$ of an
hypothesis $h\in\mathcal{H}$ on a training sample $Z$ by
\begin{equation}
R_{\mathrm{emp}}\left(h,Z\right)=\frac{1}{m}\left|\left\{ \left(\boldsymbol{\mathsf{o}},y\right)\in Z:h\left(\boldsymbol{\mathsf{o}}\right)\neq y\right\} \right|\,.\label{eq:r_emp}
\end{equation}
The principle of \emph{empirical risk minimisation \cite{Vapnik:98}}
advocates minimising the empirical risk $R_{\mathrm{emp}}\left(h,Z\right)$
instead of the true risk $R\left(h\right)$.
An a-posteriori bound aims at bounding the risk $R\left(h\right)$
of an hypothesis $h$ based on the knowledge of $\mathcal{H}$ as
well as $Z$. We now present two theorems by D.~McAllester \cite{McAllester:98}
that require the definition of a prior measure $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
on $\mathcal{H}$ and reward the selection of a hypothesis of high
prior weight with a low bound on the generalisation error. Note that
these theorems do not depend on the correctness of $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$.
If the belief expressed in $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
turns out to be wrong, the bounds just become trivial.
\begin{thm}
\label{thm:dma1}For any probability measure $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
over an hypothesis space $\mathcal{H}$ containing a target hypothesis
$h^{*}\in\mathcal{H}$, and any probability measure $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$
on labelled objects, we have, for any $\delta>0,$ that with probability
at least $1-\delta$ over the selection of a sample $Z$ of $m$ examples,
the following holds for all hypotheses $h\in\mathcal{H}$ agreeing
with $h^{*}$ on that sample:
\[
R\left(h\right)\leq\frac{\log\frac{1}{\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(h\right)}+\log\frac{1}{\delta}}{m}\,.
\]
\end{thm}
To see that this is true, note that the probability that a hypothesis
$h$ with risk $R\left(h\right)$ is consistent with a sample of $m$
examples is bounded from above by $\left(1-R\left(h\right)\right)^{m}\leq\exp\left(-mR\left(h\right)\right)$.
If $R\left(h\right)$ is greater than the above bound the probability
that $h$ is consistent with the sample is bounded from above by $\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(h\right)\delta.$
Applying the union bound, the probability that some hypothesis $h$
violating the bound is consistent with the sample is bounded by
$\sum_{h\in\mathcal{H}}\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(h\right)\delta=\delta$.
Essentially replacing the binomial tail bound used in the above argument
by the Chernoff bound for bounded random variables leads to an agnostic
version of the above Theorem \ref{thm:dma1}.
\begin{thm}
\label{thm:dma2}For any probability measure $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
over an hypothesis space $\mathcal{H}$, and any probability measure
$\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$ on labelled objects, we have,
for any $\delta>0,$ that with probability at least $1-\delta$ over
the selection of a sample $Z$ of $m$ examples, all hypotheses $h\in\mathcal{H}$
satisfy
\[
R\left(h\right)\leq R_{\mathrm{emp}}\left(h,Z\right)+\sqrt{\frac{\log\frac{1}{\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(h\right)}+\log\frac{1}{\delta}}{2m}}\,.
\]
\end{thm}
In both theorems the complexity term as found, e.g.~in VC bounds,
is replaced by the negative $\log$-prior of the hypothesis at hand.
Thus if the prior belief in that particular hypothesis was high the
effective complexity is low and the bound gives small values. Before
we can apply these bounds to $K$-NN we need to cast this classifier
into the appropriate framework.
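As a concrete illustration, both bounds are trivial to evaluate numerically once the prior weight $\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(h\right)$ of the selected hypothesis is known; the following Python sketch (with an illustrative interface of our choosing) computes the bound values of Theorems \ref{thm:dma1} and \ref{thm:dma2}.
\begin{verbatim}
import math

def mcallester_bound(prior_h, m, delta, r_emp=None):
    # Complexity term: log(1/P_H(h)) + log(1/delta)
    complexity = math.log(1.0 / prior_h) + math.log(1.0 / delta)
    if r_emp is None:
        # Theorem 1: hypotheses consistent with the sample
        return complexity / m
    # Theorem 2: agnostic bound with empirical risk r_emp
    return r_emp + math.sqrt(complexity / (2.0 * m))
\end{verbatim}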
\section{The $K$-nearest-neighbour classifier\label{sec:knn_review}}
The $K$-nearest-neighbour classifier requires that the set $\mathcal{O}$
of objects $\boldsymbol{\mathsf{o}}\in\mathcal{O}$ be equipped with
a distance measure. Although not strictly necessary for the application
of $K$-NN we will assume that we have a \emph{metric} $d:\mathcal{O}\times\mathcal{O}\mapsto\mathbb{R}^{+}$
between objects. Then the $K$-nearest-neighbour classifier is a mapping
$\mathrm{NN}_{K}:\left(\mathcal{O}\times\mathcal{Y}\right)^{m}\times\mathcal{O}\mapsto\mathcal{Y}$
defined as follows,
\begin{eqnarray}
\mathrm{NN}_{K}\left(Z,\boldsymbol{\mathsf{o}}\right) & = & \textrm{sign}\left(\sum_{\stackrel{\left(\boldsymbol{\mathsf{o}}^{\prime},y^{\prime}\right)\in Z:}{\boldsymbol{\mathsf{o}}^{\prime}\in N_{K}(\boldsymbol{\mathsf{o}})}}y^{\prime}\right)\nonumber \\
& = & \textrm{sign}\left(\sum_{i:\boldsymbol{\mathsf{o}}_{i}\in N_{K}(\boldsymbol{\mathsf{o}})}y_{i}\right),\label{eq:knn}
\end{eqnarray}
where the $K$-neighbourhood $N_{K}\left(\boldsymbol{\mathsf{o}}\right)$
is defined for $\boldsymbol{\mathsf{o}}^{\prime},\boldsymbol{\mathsf{o}}^{\prime\prime}\in O$
as
\[
N_{K}\left(\boldsymbol{\mathsf{o}}\right)=\left\{ \boldsymbol{\mathsf{o}}^{\prime}:\left|\left\{ \boldsymbol{\mathsf{o}}^{\prime\prime}:d\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime\prime}\right)<d\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)\right\} \right|<K\right\} .
\]
Note that this definition may lead to $K$-neighbourhoods of cardinality
$\left|N_{K}\left(\boldsymbol{\mathsf{o}}\right)\right|>K$ in the
case of a distance tie $d\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)=d(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime\prime})$
for some $\boldsymbol{\mathsf{o}}^{\prime},\boldsymbol{\mathsf{o}}^{\prime\prime}\in O$.
Let us explicitly break this tie and enforce $\left|N_{K}\left(\boldsymbol{\mathsf{o}}\right)\right|=K$
by discarding those objects in the tie with a higher index.
Also, for even $K$ a voting tie may occur in the decision,
leading to $\mathrm{NN}_{K}(Z,\boldsymbol{\mathsf{o}})=0$. Of course,
a tie of the latter type may serve as an indicator of an uncertain
prediction.
The above formulation of $K$-NN reflects the basic algorithm. Extensions
have been suggested (see \cite{Devroye:96}) that allow for different
weighting factors depending on the ranks of the neighbours. For conceptual
clarity, such extensions are not considered here.
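For concreteness, the following minimal Python sketch implements the basic rule (\ref{eq:knn}) together with the index-based tie-breaking described above; the interface, which takes the vector of pairwise distances directly, is of our choosing and purely illustrative.
\begin{verbatim}
import numpy as np

def knn_predict(dist, y, K):
    # dist: distances d(o, o_i) to the m training objects
    # y:    training labels in {-1, +1}
    # A stable argsort orders equal distances by index, so tied
    # objects with the higher index are discarded, as in the text.
    order = np.argsort(dist, kind="stable")
    vote = np.sum(y[order[:K]])
    return int(np.sign(vote))  # 0 signals a voting tie for even K
\end{verbatim}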
\section{The 1-NN classifier as the limit of a kernel classifier\label{sec:1nn}}
Let us first consider the case of $K=1$. Then the definition of the
neighbourhood is reduced to
\[
N_{1}\left(\boldsymbol{\mathsf{o}}\right)=\left\{ \boldsymbol{\mathsf{o}}^{\prime}\in O:\boldsymbol{\mathsf{o}}^{\prime}=\mathrm{argmin}_{\boldsymbol{\mathsf{o}}^{\prime\prime}\in O}d\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime\prime}\right)\right\} .
\]
In order to be able to view the NN-classifier as a linear classifier
in a kernel space let us introduce a $\mathrm{softmin}$-function
so as to replace the $\mathrm{argmin}$-function. Since the kernel
used should conform to the Mercer conditions \cite{Mercer:09} in
order to ensure the desirable properties of a kernel space, we leave
the soft-min function unnormalised. This does not change the output
of the classifier under the $\mathrm{sign}$-function and leads to
\begin{eqnarray}
\mathrm{NN}_{1}\left(Z,\boldsymbol{\mathsf{o}}\right) & = & \textrm{sign}\left(\sum_{i=1}^{m}\lim_{\sigma\rightarrow0}k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i}\right)y_{i}\right).\label{eq:1nn}
\end{eqnarray}
We can use any positive definite kernel $k_{\sigma}:\mathcal{O}\times\mathcal{O}\mapsto\mathbb{R}^{+}$
with $k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)=k_{\sigma}\left(d^{2}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)\right)$
(satisfying the Mercer conditions) and for which for any countable
set $I\subset\mathbb{R}^{+}$ of positive real numbers we have
\[
\lim_{\sigma\rightarrow0}\frac{k_{\sigma}\left(d\right)}{\sum\limits _{d^{\prime}\in I}k_{\sigma}\left(d^{\prime}\right)}=\left\{ \begin{array}{ll}
1 & \qquad\textrm{ if }d=\min\left(I\right)\\
0 & \qquad\textrm{ otherwise}
\end{array}\right.\,.
\]
Such a kernel is, e.g.~given by the RBF kernel
\begin{equation}
k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)=\exp\left(-\frac{d^{2}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)}{\sigma^{2}}\right)\,,\label{eq:rbf}
\end{equation}
which we will use in the following.
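The convergence of the normalised soft-min weights to the $\mathrm{argmin}$ indicator is easily checked numerically; the following small Python sketch, using hypothetical distance values, illustrates it for the RBF kernel (\ref{eq:rbf}).
\begin{verbatim}
import numpy as np

d = np.array([0.9, 0.4, 1.3])  # hypothetical distances to 3 objects

for sigma in [5.0, 0.4, 0.02]:
    # Shifting the exponent by the minimal distance leaves the
    # normalised weights unchanged but avoids numerical underflow.
    w = np.exp(-(d**2 - d.min()**2) / sigma**2)
    print(sigma, w / w.sum())
# sigma = 5.0 yields nearly uniform weights; for sigma = 0.02 the
# nearest object (d = 0.4) receives essentially all the weight.
\end{verbatim}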
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{conv_plot}
\par\end{centering}
\caption{\label{fig:1nn}Illustration of the convergence of the classifier
based on class-conditional Parzen window density estimation to the
$1$-NN classifier in $\mathcal{O}=\left[-1,+1\right]^{2}\subset\mathbb{R}^{2}$
using $d^{2}\left(\mathbf{x},\mathbf{x}^{\prime}\right)=\left\Vert \mathbf{x}-\mathbf{x}^{\prime}\right\Vert ^{2}$.
For $\sigma=5$ the decision surface (thin line) is almost linear,
for $\sigma=0.4$ the curved line (medium line) results, and for very
small $\sigma=0.02$ the piecewise linear decision surface (thick
line) of $1$-NN is approached. For $1$-NN only the circled points
contribute to the decision surface, comparable to support vectors
in the SVM. }
\end{figure}
There exists an interesting relation to the Bayes optimal classifier
$h_{\mathrm{Bayes}}$ that can be approximated using the Parzen window
\cite{Parzen:62} kernel density estimate with kernel $k_{\sigma}$.
If the objects $\boldsymbol{\mathsf{o}}\in\mathcal{O}$ are represented
as vectors $\mathbf{x}_{\boldsymbol{\mathsf{o}}}\in\mathcal{X}\subseteq\mathbb{R}^{n}$,
i.e.~$d\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)=\left\Vert \mathbf{x}_{\boldsymbol{\mathsf{o}}}-\mathbf{x}_{\boldsymbol{\mathsf{o}}^{\prime}}\right\Vert $
then the class conditional density $\boldsymbol{\mathsf{f}}_{\mathsf{X}|\mathsf{Y}=\pm1}$
can be estimated using a Parzen window density estimator
\[
\boldsymbol{\widehat{\mathsf{f}}}_{\mathsf{X}|\mathsf{Y}=\pm1}\left(\mathbf{x}\right)=\frac{1}{m^{\pm}}\sum_{i=1}^{m^{\pm}}k_{\sigma}\left(\mathbf{x},\mathbf{x}_{\boldsymbol{\mathsf{o}}_{i}}\right)\,,
\]
where the kernel is assumed to be normalised to one, i.e.~$\int_{\mathcal{X}}k_{\sigma}\left(\mathbf{x}\right)\,d\mathbf{x}=1$.
The Bayes optimal decision at point $\mathbf{x}$ (assuming equal class priors) is given by
\[
h_{\mathrm{Bayes}}\left(\boldsymbol{\mathsf{o}}\right)=\textrm{sign}\left(\boldsymbol{\mathsf{f}}_{\mathsf{X}|\mathsf{Y}=+1}\left(\mathbf{x}_{\boldsymbol{\mathsf{o}}}\right)-\boldsymbol{\mathsf{f}}_{\mathsf{X}|\mathsf{Y}=-1}\left(\mathbf{x}_{\boldsymbol{\mathsf{o}}}\right)\right)
\]
and can be approximated by
\[
\widehat{h}_{\sigma}\left(\boldsymbol{\mathsf{o}}\right)=\textrm{sign}\left(\boldsymbol{\widehat{\mathsf{f}}}_{\mathsf{X}|\mathsf{Y}=+1}\left(\mathbf{x}_{\boldsymbol{\mathsf{o}}}\right)-\boldsymbol{\widehat{\mathsf{f}}}_{\mathsf{X}|\mathsf{Y}=-1}\left(\mathbf{x}_{\boldsymbol{\mathsf{o}}}\right)\right)\,.
\]
This estimator is shown in Figure \ref{fig:1nn} for the RBF-kernel
(\ref{eq:rbf}) and three different values of $\sigma$. The convergence
$\lim_{\sigma\rightarrow0}\widehat{h}_{\sigma}\left(\boldsymbol{\mathsf{o}}\right)=\mathrm{NN}_{1}\left(\boldsymbol{\mathsf{o}}\right)$
leads to the convergence
\[
\lim_{\sigma\rightarrow0}R\left(\widehat{h}_{\sigma}\right)=R\left(\mathrm{NN}_{1}\right)
\]
on which our analysis is based. Note that the performance of $\widehat{h}_{\sigma}$
for small sample sizes may be poor. Also, for increasing sample size
$m\rightarrow\infty$, a decreasing kernel bandwidth $\sigma\rightarrow0$
is required for consistency.
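For completeness, a direct Python sketch of the approximate classifier $\widehat{h}_{\sigma}$, assuming a vectorial representation and the RBF kernel (\ref{eq:rbf}), is given below; the common normalisation constant of the kernel cancels under the $\mathrm{sign}$ and is therefore omitted.
\begin{verbatim}
import numpy as np

def parzen_classify(X, y, X_test, sigma):
    # Squared Euclidean distances between test and training vectors
    D = ((X_test[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D / sigma**2)
    f_plus = W[:, y == +1].mean(axis=1)   # estimate of f_{X|Y=+1}
    f_minus = W[:, y == -1].mean(axis=1)  # estimate of f_{X|Y=-1}
    return np.sign(f_plus - f_minus)
\end{verbatim}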
Let us consider $k_{0}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i}\right)\equiv\lim_{\sigma\rightarrow0}k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i}\right)$
and classifiers of the form
\begin{equation}
g_{\boldsymbol{\alpha}}\left(\boldsymbol{\mathsf{o}}\right)=\textrm{sign}\left(\sum_{i=1}^{m}\alpha_{i}k_{0}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i}\right)\right)\,.\label{eq:kernel_class}
\end{equation}
The $1$-NN classifier is then given by $\mathrm{NN}_{1}\left(Z,\cdot\right)=g_{\boldsymbol{\alpha}=\mathbf{y}}\equiv g_{\mathbf{y}}$,
and can thus be expressed within this family even if we restrict the
coefficients $\alpha_{i}$ to take values only from $A=\mathcal{Y}=\left\{ -1,+1\right\} $.
Hence, the resulting hypothesis space is given by
\begin{equation}
\mathcal{G}=\left\{ g_{\boldsymbol{\alpha}}:\forall i\in\left\{ 1,\ldots,m\right\} :\alpha_{i}\in A\right\} \,.\label{eq:hypo_bin}
\end{equation}
It turns out that the $1$-NN classifier $g_{\mathbf{y}}$ is a minimiser
of the empirical risk (\ref{eq:r_emp}), i.e.~$g_{\mathbf{y}}=\mathrm{argmin}_{g\in\mathcal{G}}\,R_{\mathrm{emp}}\left(g,Z\right)$.
This is easily seen by considering that $\forall\boldsymbol{\mathsf{o}}\in O$
the $1$-neighbourhood $N_{1}\left(\boldsymbol{\mathsf{o}}\right)=\left\{ \boldsymbol{\mathsf{o}}\right\} $
and thus $\forall\left(\boldsymbol{\mathsf{o}},y\right)\in Z:g_{\mathbf{y}}\left(\boldsymbol{\mathsf{o}}\right)=y$
resulting in $R_{\mathrm{emp}}\left(g_{\mathbf{y}},Z\right)=0$. Also,
$g_{\mathbf{y}}$ is the only
minimiser of $R_{\mathrm{emp}}\left(g,Z\right)$ because each flipped
coefficient $\alpha_{i}$ incurs exactly one training error, increasing
$R_{\mathrm{emp}}\left(g_{\mathbf{y}},Z\right)$
by $1/m$. Thus the application of $1$-NN conforms to the principle
of \emph{empirical} \emph{risk} \emph{minimisation} \cite{Vapnik:98}
at vanishing training error.
Based on this view let us turn to an intuitive explanation of why
the $1$-NN classifier is able to generalise. As shown in Figure \ref{fig:1nn}
not all the objects $\boldsymbol{\mathsf{o}}\in O$ from the training
sample $O$ contribute to the decision function. In terms of the hypothesis
space defined in (\ref{eq:kernel_class}) and (\ref{eq:hypo_bin})
above this means that the respective summands could be set to nought
without changing the decision at any object $\boldsymbol{\mathsf{o}}\in\mathcal{O}$.
Let us define the set $\overline{Z}_{K}$ of subsets $Z^{\prime}\subset Z$
of training data \emph{redundant} for the $K$-NN classifier by
\[
\overline{Z}_{K}\stackrel{\mathrm{def}}{=}\left\{ Z^{\prime}:\forall\boldsymbol{\mathsf{o}}\in\mathcal{O}\;\mathrm{NN}_{K}\left(Z,\boldsymbol{\mathsf{o}}\right)=\mathrm{NN}_{K}\left(Z\setminus Z^{\prime},\boldsymbol{\mathsf{o}}\right)\right\} ,
\]
and let $Z_{K}\in\overline{Z}_{K}$ be defined as the element of
$\overline{Z}_{K}$ of maximum cardinality, $Z_{K}=\mathrm{argmax}_{Z^{\prime}\in\overline{Z}_{K}}\left(\left|Z^{\prime}\right|\right).$
Even if all the training examples in $Z_{K}$ were left out, the prediction
would not change at any object $\boldsymbol{\mathsf{o}}\in\mathcal{O}$.
Please note the interesting resemblance of $Z\setminus Z_{K}$ to
the set of support vectors in SVM learning \cite{Vapnik:98}. In order
to be able to express this sparseness of solutions in the expansion
coefficients $\alpha_{i}$ let us augment the hypothesis space $\mathcal{G}$
by allowing the coefficients $\alpha_{i}$ to take on nought as an
additional value, $\forall i\in\left\{ 1,\ldots,m\right\} \;\alpha_{i}\in A\cup\left\{ 0\right\} \equiv\widetilde{A}$.
This will allow us to express prior belief in the sparseness of a
solution by putting additional prior weight on solutions with few
non-vanishing coefficients. The augmented hypothesis space $\widetilde{\mathcal{G}}$
is given by
\begin{equation}
\widetilde{\mathcal{G}}=\left\{ g_{\boldsymbol{\alpha}}:\forall i\in\left\{ 1,\ldots,m\right\} :\alpha_{i}\in\widetilde{A}\right\} .\label{eq:hypo_tern}
\end{equation}
Then we can define the set $G_{\mathbf{y}}\subseteq\widetilde{\mathcal{G}}$
of hypotheses that are equivalent to $g_{\mathbf{y}}$ w.r.t. the
classification on $\mathcal{O}$
\begin{equation}
G_{\mathbf{y}}=\left\{ g_{\boldsymbol{\alpha}}\in\widetilde{\mathcal{G}}:\forall\boldsymbol{\mathsf{o}}\in\mathcal{O}\;g_{\boldsymbol{\alpha}}\left(\boldsymbol{\mathsf{o}}\right)=g_{\mathbf{y}}\left(\boldsymbol{\mathsf{o}}\right)\right\} .\label{eq:equivalence}
\end{equation}
The cardinality $\left|G_{\mathbf{y}}\right|$ of this set will later
serve as the crucial quantity for bounding the generalisation error.
Since the set $G_{\mathbf{y}}$ is not easily accessible we can define
a subset $G_{Z_{1}}\subseteq G_{\mathbf{y}}$ by
\[
G_{Z_{1}}=\left\{ g_{\boldsymbol{\alpha}}\in\widetilde{\mathcal{G}}:\forall\left(\boldsymbol{\mathsf{o}}_{i},y_{i}\right)\in Z\setminus Z_{1}\quad\alpha_{i}=y_{i}\right\} .
\]
The cardinality $\left|G_{Z_{1}}\right|$ of this set is then given
by $\left|G_{Z_{1}}\right|=2^{r}$ and is thus trivially related to
the number $r$ of redundant points. It will later serve as a convenient
lower bound, $\left|G_{Z_{1}}\right|\leq\left|G_{\mathbf{y}}\right|$.
The redundancy $r$ can also be viewed as a kind of luckiness in the
sense of \cite{Taylor:96}.
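Determining $Z_{1}$ exactly requires checking the predictions on all of $\mathcal{O}$, which is infeasible in general. As a heuristic only, the following Python sketch greedily removes training points whose joint removal leaves the $1$-NN predictions unchanged on a finite evaluation set, yielding an approximate estimate of the redundancy $r$; the evaluation set and all names are of our choosing.
\begin{verbatim}
import numpy as np

def estimate_redundancy(X, y, X_eval):
    def predict(Xtr, ytr):
        # 1-NN labels on the evaluation set; argmin breaks distance
        # ties by the lower index, as assumed in the text
        D = ((X_eval[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
        return ytr[np.argmin(D, axis=1)]

    reference = predict(X, y)
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        trial = keep.copy()
        trial[i] = False
        if trial.sum() == 0:
            continue
        # Remove point i only if every evaluation prediction survives
        if np.array_equal(predict(X[trial], y[trial]), reference):
            keep = trial
    return int((~keep).sum())  # approximate redundancy r
\end{verbatim}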
\section{A PAC-Bayesian bound for 1-NN\label{sec:bound}}
We would like to define a prior over $\widetilde{\mathcal{G}}$ and
apply the PAC-Bayesian Theorem \ref{thm:dma1}. However, the prior
over the hypothesis space $\mathcal{H}$ as referred to in Theorem
\ref{thm:dma1} requires us to define an hypothesis space $\mathcal{H}$
\emph{before} learning. In contrast, the hypothesis space $\widetilde{\mathcal{G}}$
defined by equations (\ref{eq:kernel_class}) and (\ref{eq:hypo_tern})
appears to be data-dependent and thus not known before the data are
considered. Let us consider an alternative hypothesis space given
by all the linear functions
\[
\mathcal{H}=\left\{ h_{\mathbf{w}}:h_{\mathbf{w}}=\mathrm{sign}\left(\left\langle \mathbf{w},\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}\right)\right\rangle _{\mathcal{K}}\right),\mathbf{w}\in\mathcal{K},\left\Vert \mathbf{w}\right\Vert _{\mathcal{K}}=1\right\} .
\]
$\mathcal{K}$ is the kernel space associated with the kernel
\[
k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)=\left\langle \boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}\right),\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}^{\prime}\right)\right\rangle _{\mathcal{K}},
\]
and $\boldsymbol{\phi}:\mathcal{O}\mapsto\mathcal{K}$. The unit length
constraint $\left\Vert \mathbf{w}\right\Vert _{\mathcal{K}}=1$ is
required in order to be able to define a proper (normaliseable) prior
measure over $\mathcal{H}$ such that $\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(\mathcal{H}\right)=1$.
Since we can expand the weight vector $\mathbf{w}$ in terms of the
objects $\boldsymbol{\mathsf{o}}_{i}\in Z$ by
\begin{equation}
\mathbf{w}=\sum_{i=1}^{m}\alpha_{i}\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}_{i}\right)\label{eq:expansion}
\end{equation}
the hypotheses as given in equation (\ref{eq:kernel_class}) can be
written as
\begin{eqnarray*}
g_{\boldsymbol{\alpha}}\left(\boldsymbol{\mathsf{o}}\right) & = & \mathrm{sign}\left(\sum_{i=1}^{m}\alpha_{i}k_{\sigma}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i}\right)\right)\\
& = & \mathrm{sign}\left(\sum_{i=1}^{m}\alpha_{i}\left\langle \boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}_{i}\right),\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}\right)\right\rangle _{\mathcal{K}}\right)\\
& = & \mathrm{sign}\left(\left\langle \mathbf{w},\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}\right)\right\rangle _{\mathcal{K}}\right)\\
& = & h_{\mathbf{w}}\left(\boldsymbol{\mathsf{o}}\right)
\end{eqnarray*}
Thus for every hypothesis $g_{\boldsymbol{\alpha}}\in\widetilde{\mathcal{G}}$
there exists a corresponding hypothesis $h_{\mathbf{w}}\in\mathcal{H}$
\emph{before} the training data $Z$ are considered. Since Theorem
\ref{thm:dma1} holds for any two probability measures $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
and $\boldsymbol{\mathsf{P}}_{\mathsf{OY}}$ it is sufficient to show
that given any prior measure $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}$
over $\widetilde{\mathcal{G}}$ there always exists a corresponding
prior measure $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$ over $\mathcal{H}$.
Let us define the $\left(\dim\left(\mathcal{K}\right)\times m\right)$-matrix
\[
\boldsymbol{\Phi}\left(O\right)=\left(\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}_{1}\right),\ldots,\boldsymbol{\phi}\left(\boldsymbol{\mathsf{o}}_{m}\right)\right)
\]
of training objects $\boldsymbol{\mathsf{o}}$ mapped to kernel space
$\mathcal{K}$. Then the linear transformation from the parameter
space $\widetilde{A}^{m}$ to kernel space \emph{$\mathcal{K}$} can
be written as $\mathbf{w}=\boldsymbol{\Phi}\left(O\right)\boldsymbol{\alpha}$
and we have for any measurable subset $H\subseteq\mathcal{H}$ a corresponding
set $\widetilde{G}\subseteq\widetilde{\mathcal{G}}$ given by
\[
\widetilde{G}\left(H,O\right)=\left\{ g_{\boldsymbol{\alpha}}:\exists\mathbf{w}\in H\quad\frac{\boldsymbol{\Phi}\left(O\right)\boldsymbol{\alpha}}{\left\Vert \boldsymbol{\Phi}\left(O\right)\boldsymbol{\alpha}\right\Vert }=\mathbf{w}\right\}
\]
The resulting prior measure $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$
is given by
\[
\boldsymbol{\mathsf{P}}_{\mathsf{H}}\left(H\right)=\boldsymbol{\mathsf{E}}_{\mathsf{O}^{m}}\left[\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(\widetilde{G}\left(H,\mathsf{O}\right)\right)\right],
\]
indicating that knowledge of the measure $\boldsymbol{\mathsf{P}}_{\mathsf{O}}$
over objects is necessary in order to determine $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$.
This does not constitute a problem, however, because explicit knowledge
of $\boldsymbol{\mathsf{P}}_{\mathsf{H}}$ is neither required for
the application of the algorithm nor for the calculation of the PAC-Bayesian
bound values.
First, let us illustrate the application of the PAC-Bayesian bound
(\ref{thm:dma1}) by constructing a very simple prior $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(\boldsymbol{\alpha}\right)$
over $\widetilde{\mathcal{G}}$. Due to the iid property of the training
sample $Z$, we have no knowledge about any specific $\alpha_{i}$
and thus choose a factorising prior
\begin{equation}
\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(g_{\boldsymbol{\alpha}}\right)=\prod_{i=1}^{m}\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}}\left(\alpha_{i}\right),\label{eq:factorise}
\end{equation}
that reflects the interchangeability of the training examples in $Z$.
Assuming no further knowledge about the plausibility of hypotheses
let us choose the prior to be uniform,
\[
\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=-1}\left(\alpha_{i}\right)=\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=1}\left(\alpha_{i}\right)=\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=0}\left(\alpha_{i}\right)=\frac{1}{3},
\]
which obviously leads to a uniform measure $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(\boldsymbol{\alpha}\right)$,
as well. This choice will later be refined in the light of general
knowledge about the sparseness of typical $1$-NN classifiers. Then
the measure of hypotheses $g_{\boldsymbol{\alpha}}\in G_{\mathbf{y}}$
equivalent to $g_{\mathbf{y}}$ on $\mathcal{O}$ is given by
\begin{equation}
\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(G_{\mathbf{y}}\right)=\frac{\left|G_{\mathbf{y}}\right|}{\left|\widetilde{\mathcal{G}}\right|}\geq\frac{\left|G_{Z_{1}}\right|}{\left|\widetilde{\mathcal{G}}\right|}=\frac{2^{r}}{3^{m}-1},\label{eq:simple_prior}
\end{equation}
because among the total of $3^{m}-1$ hypotheses in $\widetilde{\mathcal{G}}$
we have $2^{r}$ hypotheses that agree with $g_{\mathbf{y}}$ on \emph{$\mathcal{O}$}.
Then we can give the following bound on the generalisation error of
$1$-NN.
\begin{thm}
\label{thm:1nn_bound}For any probability distribution $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$
on labelled objects we have, for any $\delta>0,$ that with probability
at least $1-\delta$ over the selection of a sample of $m$ examples,
the following holds for the $1$-NN classifier $g_{\mathbf{y}}$ with
$r$ redundant examples:
\[
R\left(g_{\mathbf{y}}\right)\leq\frac{m\log3-r\log2+\log\frac{1}{\delta}}{m}.
\]
\end{thm}
\begin{figure}
\begin{centering}
\includegraphics[angle=270,width=0.9\columnwidth]{bound_b_r_s}
\par\end{centering}
\caption{\label{fig:bound_R_S}Values of the bound given in Theorem \ref{thm:1nn_bound2}
as a function of the expected sparsity $S$ for four different values
$r$ of the observed number of redundant objects. The training set
size is $m=100$ and the confidence is $95\%$ corresponding to $\delta=0.05$.
Large values of $r$ lead to lower values of the bound, but the bound
attains its minimum only if the expected sparsity $S$ matches the
number of redundant objects $r$. Note that the optimum $S_{\mathrm{opt}}$
for a given redundancy $r$ satisfies $S_{\mathrm{opt}}<\frac{r}{m}$,
the value one may have expected.}
\end{figure}
Let us refine this bound by constructing a more informative prior
$\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}$. Maintaining the
factorising property (\ref{eq:factorise}) and introducing an expected
level $S$ of sparsity we choose
\[
\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=0}\left(\alpha_{i}\right)=S\;\mathrm{and}\;\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=-1}\left(\alpha_{i}\right)=\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{A}}=1}\left(\alpha_{i}\right)=\frac{1-S}{2}
\]
The resulting prior measure $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}$
is then only a function of the sparsity $s\left(g_{\boldsymbol{\alpha}}\right)$
of an hypothesis $g_{\boldsymbol{\alpha}}$ given by $s\left(g_{\boldsymbol{\alpha}}\right)=\left|\left\{ i\in\left\{ 1,\ldots,m\right\} :\alpha_{i}=0\right\} \right|$.
We are interested in the prior measure $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(G_{\mathbf{y}}\right)$
of all those hypotheses $g_{\boldsymbol{\alpha}}\in\widetilde{\mathcal{G}}$
that behave equivalently to $g_{\mathbf{y}}$ on \emph{$\mathcal{O}$.}
This quantity is lower bounded by $\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(G_{Z_{1}}\right)$.
\begin{eqnarray}
\boldsymbol{\mathsf{P}}_{\widetilde{\mathsf{G}}}\left(G_{Z_{1}}\right) & = & \sum_{s=0}^{r}\left(\begin{array}{c}
r\\
s
\end{array}\right)\frac{S^{s}\left(\frac{1-S}{2}\right)^{m-s}}{1-S^{m}}\nonumber \\
& = & \frac{\left(\frac{1-S}{2}\right)^{m-r}}{1-S^{m}}\sum_{s=0}^{r}\left(\begin{array}{c}
r\\
s
\end{array}\right)S^{s}\left(\frac{1-S}{2}\right)^{r-s}\nonumber \\
& = & \frac{\left(\frac{1-S}{2}\right)^{m-r}\left(\frac{1}{2}\left(1+S\right)\right)^{r}}{1-S^{m}}\nonumber \\
& = & \frac{\left(1-S\right)^{m-r}\left(1+S\right)^{r}}{2^{m}\left(1-S^{m}\right)}.\label{eq:refined_prior}
\end{eqnarray}
Note that this reduces to the previous result (\ref{eq:simple_prior})
for $S=1/3.$ Using the result (\ref{eq:refined_prior}) we can give
a more refined PAC-Bayesian bound on the generalisation error of $1$-NN.
\begin{thm}
\label{thm:1nn_bound2}For any distribution $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$
over labelled objects and any sparsity value $S\in\left[0,1\right[$
chosen a-priori, we have, for any $\delta>0,$ that with probability
at least $1-\delta$ over the selection of a sample of $m$ examples,
the following holds for the $1$-NN classifier $g_{\mathbf{y}}$ with
$r$ redundant examples:
\[
R\left(g_{\mathbf{y}}\right)\leq\frac{m\log\frac{2}{1-S}+\log\left(1-S^{m}\right)+r\log\frac{1-S}{1+S}+\log\frac{1}{\delta}}{m}.
\]
\end{thm}
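The bound is straightforward to evaluate; a minimal Python sketch (with an illustrative function name) reads:
\begin{verbatim}
import math

def bound_1nn(m, r, S, delta):
    # Numerator: -log P(G_{Z_1}) of the refined prior plus log(1/delta)
    num = (m * math.log(2.0 / (1.0 - S))
           + math.log(1.0 - S**m)
           + r * math.log((1.0 - S) / (1.0 + S))
           + math.log(1.0 / delta))
    return num / m
\end{verbatim}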
In order to get a feel for the bound, consider first Figure \ref{fig:bound_R_S}.
The convex shapes of the curves clearly indicate that a wrong choice
of $S$ hurts in both cases: for over- and for underestimated redundancy.
Figure \ref{fig:bound_R_r} illustrates the behaviour of the bound
as a function of redundancy $r$. The case $S=0$ effectively corresponds
to the unaugmented hypothesis space $\mathcal{G}$ with a flat prior.
Due to the increase $\left|\mathcal{G}\right|=2^{m}$ of $\mathcal{G}$
with $m$ the resulting cardinality bound can never give values below
$\log2\approx0.69$. The case $S=0.33$ corresponds to the bound
of Theorem \ref{thm:1nn_bound} and is superior mostly in ``trivial''
regimes with $R>0.5$. Only for ``courageous'' choices of $S=0.9$
and $S=0.99$ does the bound reach non-trivial regimes. It should
be noted that standard VC-bounds often require training set sizes
of $m>100000$ for even the luckiest cases to give non-trivial bounds
($R<0.5$).
As a matter of fact, it is feasible to incorporate even more knowledge
than the level of sparsity $S$ into the bound. In addition, knowledge
about the a-priori class probabilities $\boldsymbol{\mathsf{P}}_{\mathsf{Y}}\left(\mathsf{Y}=\pm1\right)$
and knowledge about the levels of sparsity $S^{\pm}$ in each of the
classes could be incorporated in the bound.
\begin{figure}
\begin{centering}
\includegraphics[angle=270,width=0.9\columnwidth]{bound_b_r_s2}
\par\end{centering}
\caption{\label{fig:bound_R_r}Values of the bound given in Theorem \ref{thm:1nn_bound2}
as a function of the number $r$ of redundant objects for four different
values of the expected sparsity $S$. The training set size is $m=100$
and the confidence is $95\%$ corresponding to $\delta=0.05$. Large
values of $S$ lead to lower values of the bound, but only for sufficiently
large values of $r$. If the value of $S$ is chosen too optimistically,
the resulting value of the bound suffers.}
\end{figure}
\section{The general case of $K$-NN\label{sec:knn}}
In practice, people often use the $K$-NN classifier, $K>1$, rather
than the $1$-NN classifier to avoid over-fitting the data. In order
to arrive at a similar result as that obtained in Section \ref{sec:bound}
let us find a formulation for $K$-NN equivalent to that given in
(\ref{eq:1nn}) for $1$-NN. We avoid the problem of voting ties by
considering only odd values of $K$. Since the $K$ nearest neighbours
need to be selected, we use a product of kernels,
\begin{eqnarray}
\mathrm{NN}_{K}\left(Z,\boldsymbol{\mathsf{o}}\right) & = & \mathrm{sign}\left(\sum_{\stackrel{\left(\boldsymbol{\mathsf{o}}^{\prime},y^{\prime}\right)\in Z:}{\boldsymbol{\mathsf{o}}^{\prime}\in N_{K}(\boldsymbol{\mathsf{o}})}}y^{\prime}\right)\nonumber \\
& = & \mathrm{sign}\left(\sum_{\stackrel{Z^{\prime}\subseteq Z:}{\left|Z^{\prime}\right|=K}}\prod_{\left(\boldsymbol{\mathsf{o}}^{\prime},y^{\prime}\right)\in Z^{\prime}}k_{0}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}^{\prime}\right)\sum_{\left(\boldsymbol{\mathsf{o}}^{\prime},y^{\prime}\right)\in Z^{\prime}}y^{\prime}\right)\nonumber \\
 & = & \mathrm{sign}\left(\sum_{\mathbf{i}\in I}\prod_{j=1}^{K}k_{0}\left(\boldsymbol{\mathsf{o}},\boldsymbol{\mathsf{o}}_{i_{j}}\right)\sum_{l=1}^{K}y_{i_{l}}\right).\label{eq:knn_kernel}
\end{eqnarray}
The sum is over the set $I$ of index vectors $\mathbf{i}\in I$
defined as
\[
I\equiv\left\{ \mathbf{i}\in\left\{ 1,\ldots,m\right\} ^{K}:\forall j\in\left\{ 1,\ldots,K-1\right\} \:i_{j+1}>i_{j}\right\} ,
\]
and we use components $i_{j}$ of $\mathbf{i}=\left(i_{1},\ldots,i_{K}\right)^{\prime}$
for indexing. Again the above classifier can be considered a linear
classifier in a kernel space if we define an augmented product kernel
$\widetilde{k}:\mathcal{O}^{K}\times\mathcal{O}^{K}\mapsto\mathbb{R}^{+}$ by
\[
\widetilde{k}\left(\boldsymbol{\mathsf{o}}_{1},\ldots,\boldsymbol{\mathsf{o}}_{K},\boldsymbol{\mathsf{o}}_{K+1},\ldots,\boldsymbol{\mathsf{o}}_{2K}\right)\equiv\prod_{j=1}^{K}k\left(\boldsymbol{\mathsf{o}}_{j},\boldsymbol{\mathsf{o}}_{j+K}\right).
\]
The product kernel $\widetilde{k}$ retains its Mercer property due
to the closure of kernels under the tensor product \cite{Haussler:99}.
Defining coefficients $\beta_{\mathbf{i}}\equiv\sum_{l=1}^{K}\alpha_{i_{l}}$
with $\forall i\in\left\{ 1,\ldots,m\right\} \:\alpha_{i}=y_{i}$
we express the $K$-NN classifier as the limiting case of a linear
classifier
\[
\mathrm{NN}_{K}\left(Z,\boldsymbol{\mathsf{o}}\right)=\mathrm{sign}\left(\sum_{\mathbf{i}\in I}\beta_{\mathbf{i}}\widetilde{k}_{0}(\underbrace{\boldsymbol{\mathsf{o}},\ldots,\boldsymbol{\mathsf{o}}}_{K\,\mathrm{times}},\boldsymbol{\mathsf{o}}_{i_{1}},\ldots,\boldsymbol{\mathsf{o}}_{i_{K}})\right).
\]
Since the coefficients $\beta_{\mathbf{i}}=\beta_{\mathbf{i}}\left(\boldsymbol{\alpha}\right)$
are fully determined by the values of the $\alpha_{i}$ it is sufficient
to consider these. As discussed in Section \ref{sec:1nn} the $1$-NN
classifier can be considered as the \emph{empirical} \emph{risk} \emph{minimiser}
with vanishing training error. The situation is different for $K$-NN.
Consider, e.g.~the situation of a training sample of three different
objects two of which belong to one class and one of which belongs
to the other class. For $K=3$ under any metric the $3$-NN classifier
will incur a loss of $1/3$ because the single object belonging to
the minority class will be classified as belonging to the majority
class.
Again we can use the redundancy of training examples to benefit from sparse
solutions in the coefficients $\alpha_{i}$. As in the case of $1$-NN
the two types of results as in Theorems \ref{thm:1nn_bound} and \ref{thm:1nn_bound2}
are possible, this time based on Theorem \ref{thm:dma2}. We will give
here only the version corresponding to Theorem \ref{thm:1nn_bound2}
because Theorem \ref{thm:1nn_bound} follows as a special case thereof.
\begin{thm}
\label{thm:knn_bound}For any probability distribution $\boldsymbol{\mathsf{P}}_{\mathsf{Z}}$
on labelled objects and any sparsity value $S\in\left[0,1\right[$
chosen a-priori, we have, for any $\delta>0,$ that with probability
at least $1-\delta$ over the selection of a sample of $m$ examples,
the following holds for the $K$-NN classifier $g_{\mathbf{y}}^{K}$
with $r_{K}=\log_{2}\left|G_{Z_{K}}\right|$ redundant examples: The
difference
\[
\Delta R=R\left(g_{\mathbf{y}}^{K}\right)-R_{\mathrm{emp}}\left(g_{\mathbf{y}}^{K},Z\right)
\]
between actual and empirical risk is bounded from above by
\[
\Delta R\leq\sqrt{\frac{m\log\frac{2}{1-S}+\log\left(1-S^{m}\right)+r_{K}\log\frac{1-S}{1+S}+\log\frac{1}{\delta}}{2m}}.
\]
\end{thm}
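As before, the bound is simple to evaluate numerically; the following Python sketch mirrors the one given after Theorem \ref{thm:1nn_bound2}.
\begin{verbatim}
import math

def bound_knn(m, r_K, S, delta, r_emp):
    num = (m * math.log(2.0 / (1.0 - S))
           + math.log(1.0 - S**m)
           + r_K * math.log((1.0 - S) / (1.0 + S))
           + math.log(1.0 / delta))
    # Bound on the risk: empirical risk plus the square-root term
    return r_emp + math.sqrt(num / (2.0 * m))
\end{verbatim}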
While this bound behaves similarly to the one given in Theorem \ref{thm:1nn_bound2}
in terms of $r_{K}$ and $S$, it is more interesting to ask about
the dependency of $\left|G_{\mathbf{y}}\right|$ (or its lower bound
$2^{r_{K}}$) on the number $K$ of neighbours considered. Empirical
results indicate that the risk $R(g_{\mathbf{y}}^{K})$ is a bowl-shaped
function of $K$, suggesting the existence of an optimum number $K>1$.
A corresponding theoretical result together with Theorem \ref{thm:knn_bound}
would then yield a sound explanation of why $K>1$ may be preferred,
and may even serve as a guide for model selection.
\section{Conclusions and Future Work\label{sec:conclusion}}
We provided small sample size bounds on the generalisation error of
the $K$-nearest-neighbour classifier in the PAC-Bayesian framework
by viewing $K$-NN as a linear classifier in a collapsed kernel space.
Referring back to the goal set in the Introduction these bounds may
serve to make $K$-NN a self-bounding algorithm in the sense of \cite{Freund:98}.
It is left for future research to provide means for determining \emph{in}
\emph{practice} at least an estimate of the number $r_{K}$ of redundant
points.
Interestingly, our analysis involves the notion of redundant examples
and --- as a consequence --- of essential examples that bear a close
resemblance to \emph{support} \emph{vectors} \cite{Vapnik:98}.
Also, considering Figure \ref{fig:1nn} it is obvious that $1$-NN
performs a \emph{local} \emph{margin} \emph{maximisation} as opposed
to a global margin maximisation in the SVM.
Pursuing the similarity to SVMs further, note that the $K$-NN classifier
not only returns a classification for a given object $\boldsymbol{\mathsf{o}}\in\mathcal{O}$,
but also provides a discrete margin
\[
\gamma\left(\boldsymbol{\mathsf{o}}\right)=y\sum_{i:\boldsymbol{\mathsf{o}}_{i}\in N_{K}(\boldsymbol{\mathsf{o}})}y_{i}
\]
taking values $\gamma\in\left\{ -K,-K+2,\ldots,K-2,K\right\} $.
Hence, we can define the margin $\gamma_{Z}$ on the training sample
$Z$ by $\gamma_{Z}=\min_{\left(\boldsymbol{\mathsf{o}},y\right)\in Z}\gamma\left(\boldsymbol{\mathsf{o}}\right).$
Since the now famous Support Vector Machine \cite{Vapnik:98} is based
on maximising the margin and also generalisation bounds for linear
classifiers \cite{Herbrich:99b,Vapnik:98} are based on this notion
it is tempting to speculate that also for $K$-NN the margin $\gamma_{Z}$
on the training sample should play a role in the generalisation bound.
Intuitively, the relation between $\left|G_{\mathbf{y}}\right|$ and
the margin $\gamma_{Z}$ is clear: the more unanimous the outcome
of the voting on $O$, the more hypotheses $g_{\boldsymbol{\alpha}}\in\widetilde{\mathcal{G}}$
give the same classification of the training data and are therefore
more likely to agree on $\mathcal{O}$. However, at this point it is
not clear how exactly the margin $\gamma_{Z}$ is related to $\left|G_{\mathbf{y}}\right|$,
the quantity determining generalisation.
Another interesting aspect of the margin $\gamma\left(\boldsymbol{\mathsf{o}}\right)$
is its use as a confidence measure for the prediction of labels on
test objects. For linear classifiers this method has been theoretically
justified by \cite{Taylor:96a}. Indeed, for $K$-NN such a strategy
has been put forward in the form of the $\left(K,L\right)$-nearest-neighbour
rule \cite{Hellman:70} that given a parameter $L>K/2$ refuses to
make predictions at $\boldsymbol{\mathsf{o}}$ unless $\gamma\left(\boldsymbol{\mathsf{o}}\right)\geq L$.
Depending on $L$ this principle leads to a rejection rate $\rho\left(L\right)$
on a given test sample. Based on $L$ and $\rho\left(L\right)$ it
should be possible to bound the risk on the non-rejected points.
\bibliographystyle{chicago}
\section{Introduction}
The definition of Oral Reading Fluency (ORF) is ``the oral translation of text with speed and accuracy,'' see for example \cite{fuchs2001oral} and \cite{shinn1992curriculum}. Reading fluency is a skill developed during childhood that is needed to understand the meaning of texts and literary pieces. There is a strong correlation between reading fluency and reading comprehension, see \cite{allington1983fluency}, \cite{johns1983informal}, \cite{samuels1988decoding}, and \cite{schreiber1991understanding}. According to \cite{disalle2017impact}, once a student has identified a word and read it correctly, their focus generally shifts from word recognition (attempting to recognize the word) to comprehension (making meaning of the word). This leads to overall understanding of the text. These authors have claimed that inadequate ORF levels are the cause of up to 90\% of reading fluency issues. If a child does not read fluently, their reading comprehension is also hindered and they will have trouble grasping the meaning of texts. Thus, ORF is a method of evaluating whether a child is at the appropriate reading level compared to their peers, and it assists in identifying at-risk students with poor reading skills.
In this paper, we analyze ORF data collected from a sample of $508$ fourth-grade students. Each child was given one of ten available passages to read and the number of words read incorrectly (WRI) was recorded. This resulted in around 50 WRI measurements per passage. Reading sessions were recorded so that observer error in counting the number of words read correctly and incorrectly could be eliminated. The WRI scores were obtained from these recorded sessions and are assumed free of measurement error. Strong readers tend to have low WRI scores and weak readers tend to have high WRI scores. However, as the passages are not all equal in difficulty, it is important to be cautious in directly using WRI scores obtained from different passages to measure overall ORF levels in a classroom setting.
Our work is motivated by noting that, to the best of our knowledge, ORF assessment in practice neither makes any adjustments to account for variations in passage difficulty nor quantifies the differences in passage difficulty. Instead, in implementation a student is given one minute to read as many words as possible in a 250 word passage, after which an assessor calculates their words correct per minute (WCPM) score by subtracting the number of words read incorrectly from the total number of words read. This WCPM score does not make adjustments for passage difficulty and is currently still the most prevalent measure used to assess ORF, see \cite{miura2007literature}, \cite{fuchs2001oral}, and \cite{hasbrouck2006oral}.
The statistical novelty of this work stems from the use of penalized maximum likelihood to estimate parameters in a count data setting where the counts are naturally bounded (below by $0$ and above by passage length). Penalty functions are used to ``encourage'' estimated passage-specific parameters to be close to one another and/or close to zero. This particular implementation of parameter shrinkage is motivated by the structural properties of the data. Firstly, the passages in an ORF assessment differ with respect to vocabulary used and how sentences are constructed. It follows that the passages naturally vary in difficulty, although they are designed to be comparable. Secondly, passages are designed to not be overly challenging for proficient readers, meaning that it is fairly common to have WRI scores of $0$. Finally, passage-specific sample sizes are small relative to the number of passages.
There is, of course, a rich literature on parameter shrinkage in various statistical models. One of the definitive examples in the multivariate setting is the James-Stein estimator of the mean, see \cite{stein1956inadmissibility}. This estimator is often described as ``borrowing'' information between variables to obtain a more efficient estimator. Other applications of shrinkage include \cite{pandey1985bayes} and \cite{jani1991class} who considered univariate Bayes-type shrinkage in, respectively, a Weibull distribution and an exponential distribution. In the bivariate setting, shrinkage was used to estimate probabilities of the form $P(Y<X)$ for underlying exponential distributions, see \cite{baklizi2003shrinkage}.
One of the most frequently encountered applications of shrinkage is in regression models with a large number of predictor variables. The lasso, developed by \cite{tibshirani1996regression}, is one such technique which revolutionized parameter estimation in generalized linear models (GLMs). The lasso shrinks regression parameters towards zero using an $L_1$ penalty, resulting in predictors being ``dropped'' from the model by setting the corresponding coefficients equal to zero. The lasso was predated by ridge regression which uses an $L_2$ penalty, see \cite{hoerl1970ridge}. This approach results in some regression coefficients being very close to zero, but does not eliminate potential predictor variables from the model altogether. Other examples of shrinkage applied to GLMs include \cite{maansson2013developing} and \cite{qasim2020new} who developed Liu-type estimators for, respectively, a zero-inflated negative binomial regression model and a Poisson regression model. Shrinkage estimation of fixed effects in a random-effects zero-inflated negative binomial model was considered by \cite{zandi2021using}. The monographs by \cite{gruber2017improving} and \cite{hastie2019statistical} are very good resources for further exploration of shrinkage in regression models.
We would be remiss not to highlight the similarity of penalty-based frequentist estimation methods to Bayesian methods with appropriately selected prior functions. For example, \cite{efron1973stein} show how the James-Stein mean estimator belongs to a larger class of empirical Bayes estimators. Similarly, as a parallel to lasso regression, \cite{park2008bayesian} define a Bayesian lasso for sparse regression estimation. For an overview of some of the recent developments in Bayesian regularization using hierarchical models, see \cite{polson2019bayesian}.
In this paper, measures of passage-specific difficulty are of primary interest. The measure of difficulty considered here is $p=\mathrm{E}[\mathrm{WRI}/N]$ with $N$ the passage length. That is, the expected proportion of words read incorrectly in a passage is taken as its measure of difficulty. The required expected value can be expressed as a function of the underlying count data model parameters, meaning their estimation is of central importance. Parameter shrinkage applied to count data models has received limited attention in the literature. In the univariate case of estimating a binomial success probability, \cite{lemmer1981note} considered three different estimators of $p$, while \cite{lemmer1981ordinary} proposed estimators of the type $w \hat{p} + (1-w)p_0$ where $p_0$ is an \textit{a priori} guess. However, neither of these papers considers likelihood-based methods nor provides guidance on selecting the amount of shrinkage.
Our literature review brought a few papers to our attention that are similar in spirit, but consider the problem of parameter estimation through shrinkage from fundamentally different perspectives. In the frequentist paradigm, \cite{hansen2016efficient} considers three shrinkage approaches -- restricted maximum likelihood, an efficient minimum distance approach, and a projection approach -- for estimating model parameters. The work of Hansen requires the specification of a shrinkage direction, which is similar to the selection of a penalty function. In the Bayesian paradigm, \cite{agresti2005bayesian} consider hierarchical models for estimating multinomial success probabilities and \cite{datta2016bayesian} consider estimating the intensity parameter of quasi-sparse Poisson count data. The scarcity of relevant literature highlights the opportunities available to further explore shrinkage estimation methods.
The remainder of this paper proceeds as follows. In Section 2, the penalized likelihood approach is more fully developed, emphasizing the binomial distribution for clarity of exposition. In Section 3, V-fold cross-validation is presented as a data-driven approach for selecting the shrinkage level. Section 4 presents results from extensive simulation studies and the motivating data are analyzed in Section 5.
\section{Shrinkage through Penalized Likelihood Methods}\label{PenLik}
\subsection{Shrinkage through Penalized Likelihood Estimation}
Consider a collection of random variables $\bm{X}=\{X_{ij}\}$, $j=1,\ldots,n_i$, $i=1,\ldots,I$, with the $X_{ij} \sim F(\cdot|\bm{\theta}_i)$ mutually independent. Here, $F(\cdot|\bm{\theta}_i)$ denotes a distribution function with $p$-dimensional parameter $\bm{\theta}_i \in \bm{\Theta}\subset \mathbb{R}^{p}$. Let $\bm{\Theta}^{I}=\bm{\Theta}\times \cdots \times \bm{\Theta}$ denote the parameter space associated with the collection of parameters $\bm{\theta}=(\bm\theta_1,\ldots,\bm{\theta}_I)$. Also let $\ell(\bm{\theta}|\bm{X})$ denote the log-likelihood of the data $\bm{X}$ and let $\bm{\mathcal{S}}_0\subseteq \bm\Theta^I$ denote a specified subset of the parameter space that is of interest. Finally, for $\mathbf{s},\mathbf{t}\in \mathbb{R}^{p\times I}$, let $\tilde{h}(\mathbf{s},\mathbf{t})=\left\Vert \mathbf{s}-\mathbf{t}\right\Vert$ be the distance induced by a norm $\left\Vert \cdot \right\Vert$. We then define
$h(\bm\theta|\bm{\mathcal{S}}_0) = \inf_{\bm{t}\in \bm{\mathcal{S}}_0} \tilde{h}(\bm{\theta},\bm{t})$. That is, $h(\bm\theta|\bm{\mathcal{S}}_0)$ is the shortest distance between a point $\bm{\theta}$ and the set $\bm{\mathcal{S}}_0$ as measured by $\tilde{h}$. Note that whenever $h(\bm\theta_1|\bm{\mathcal{S}}_0)<h(\bm\theta_2|\bm{\mathcal{S}}_0)$, the point $\bm\theta_1$ is closer to the region $\bm{\mathcal{S}}_0$ than the point $\bm\theta_2$.
In this context, parameter shrinkage is said to be any estimation method that balances adherence to the data-generating model as measured by $\ell(\bm{\theta}|\bm{X})$ and the closeness of any estimator to $\bm{\mathcal{S}}_0$ as measured by $h(\bm{\theta}|\bm{\mathcal{S}}_0)$. One such approach is penalized maximum likelihood. Adopting the convention that $\mathrm{Pen}(\bm{\theta})=h(\bm{\theta}|\bm{\mathcal{S}}_0)$ denotes the penalty function, the penalized likelihood estimator $\tilde{\bm{\theta}}$ is found by minimizing
\begin{equation}
D(\bm{\theta}) = -\ell(\bm{\theta}|\bm{x}) + \lambda \mathrm{Pen}(\bm{\theta}) \label{eq:D(theta)}
\end{equation}
with $\lambda>0$ a specified constant. The two component functions of $D(\bm{\theta})$ often exist in some kind of tension; minimizing $-\ell(\bm{\theta}|\bm{x})$ gives the maximum likelihood estimator (MLE), while $\mathrm{Pen}(\bm{\theta})$ attains a minimum for any $\bm{\theta}$ in $\bm{\mathcal{S}}_0$ where the desired parameter constraint is fully satisfied. The tension can be ascribed to the MLE not necessarily being close to the subset of interest $\bm{\mathcal{S}}_0$. The magnitude of $\lambda$ determines the balance between these at times competing interests.
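When no closed-form minimizer of \eqref{eq:D(theta)} is available, the optimization can be carried out numerically. The following Python sketch is a generic illustration under the assumption that the model supplies the negative log-likelihood and the penalty as callables; all names are illustrative rather than prescriptive.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def penalized_mle(neg_loglik, penalty, theta0, lam, bounds=None):
    # D(theta) = -loglik(theta) + lam * Pen(theta)
    objective = lambda theta: neg_loglik(theta) + lam * penalty(theta)
    result = minimize(objective, theta0, method="L-BFGS-B", bounds=bounds)
    return result.x
\end{verbatim}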
Calculation of the penalized likelihood estimator $\tilde{\bm{\theta}}$ requires the specification of a generating model, a penalty function, and a value for the parameter $\lambda$. Throughout this paper, generating models closely related to the binomial distribution are considered. All models considered naturally accommodate counts restricted to the set $\{0,1,\ldots,N\}$. The remainder of Section 2 will consider some possible choices of the penalty function while assuming $\lambda$ is known, with the choice of $\lambda$ discussed in Section 3. Note that when it comes to the selection of a penalty function, it will often be the case that the subject-matter expert presents the statistician with a non-mathematical description of $\bm{\mathcal{S}}_0$. There may be multiple ways of constructing a set $\bm{\mathcal{S}}_0$ and a penalty function $\mathrm{Pen}(\bm{\theta})$ that satisfy the description. Therefore, the penalty functions considered in this paper should not be considered an exhaustive enumeration of the possibilities. Rather, these are intended to illustrate the many ways in which shrinkage can be implemented.
\subsection{Shrinkage to Zero in Binomial Models}
Let $x_{i}$, $i=1,\ldots,I$ denote observed realizations of independent random variables $X_i \sim \mathrm{Bin}(N_i,p_i)$, $i=1,\ldots,I$. Assume that the number of binomial trials $N_i$ are known and that estimation of the success probabilities $p_i$, $i=1,\ldots,I$, is of interest. The log-likelihood is given by
\[\ell(\bm{p}|\bm{x}) = \sum_{i = 1}^{I}\log \binom{N_i}{x_i} + \sum_{i = 1}^{I} x_i \log(p_i) + \sum_{i = 1}^{I} (N_i - x_i)\log(1- p_i).\]
Now, consider the hypothetical scenario where the subject-matter expert has expressed that the success probabilities should all be ``small.'' In the context of the WRI data, this is equivalent to expecting that only a small proportion of words will be read incorrectly by a reader at grade-level. This is consistent with setting $\bm{\mathcal{S}}_0 = \{(0,\ldots,0)\}$. There are numerous penalty functions that can assess the closeness of a potential parameter value $\bm{p}=(p_1,\ldots,p_I)$ to $\bm{\mathcal{S}}_0$. For example, both the $L_1$ norm and the squared $L_2$ norm
\begin{equation}
\mathrm{Pen}_1(\bm{p}) = \sum_{i=1}^{I} p_i \quad \mathrm{and}\quad \mathrm{Pen}_2(\bm{p}) = \sum_{i=1}^{I} p_i^2, \label{eq:L1_and_L2_pen}
\end{equation}
are candidates worth considering. In the context of binomial success probabilities, both of these functions are bounded, having $\sup_{\bm{p}} \mathrm{Pen}_1(\bm{p}) = \sup_{\bm{p}} \mathrm{Pen}_2(\bm{p}) = I.$ Figure \ref{fig:pen_illustration1} visualizes these penalties for the case $I=2$. The axes $p_1$ and $p_2$ range from $0$ to $1$ in the direction of the arrows. The value of the penalty function itself is omitted from the plot, as its magnitude is only informative up to a constant of proportionality. This emphasizes that the goal here (and with the other penalty function graphs that follow) is only to illustrate the shape of these functions.
\begin{figure}[H]
\centering
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[trim={5cm 2.5cm 5cm 18cm},clip,width=1\textwidth]{L1_pen.pdf}
\end{minipage}\hfill
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[trim={5cm 2.5cm 5cm 18cm},clip,width=1\textwidth]{L2_pen.pdf}
\end{minipage}
\caption{$L_1$ norm (left) and squared $L_2$ norm (right) penalty functions for $I=2$ binomial success probabilities.}
\label{fig:pen_illustration1}
\end{figure}
Note that as $\mathrm{Pen}_2(\bm{p}) \leq \mathrm{Pen}_1(\bm{p})$ for all $\bm{p}\in [0,1]^{I}$, the $L_1$ penalty will shrink success probabilities toward $0$ more aggressively than the $L_2$ penalty. Despite the resemblance of the $L_1$ penalty to the commonly used lasso penalty in regression, it should be pointed out that its application here will not result in shrinkage estimators exactly equal to $0$. In fact, the penalized negative log-likelihood function $D_1(\bm{p}) = -\ell(\bm{p}|\bm{x}) + \lambda \mathrm{Pen}_1(\bm{p})$
has unique solution
\[\tilde{p}_i = \frac{1}{2}\left(\frac{\lambda_i+1}{\lambda_i}\right)\left[1-\left(1-\frac{4\lambda_i\hat{p}_i}{(\lambda_i+1)^2}\right)^{1/2}\right],\ i=1,\ldots,I\]
where $\lambda_i = \lambda/N_i$ and $\hat{p}_i = x_i/N_i$ is the unpenalized MLE. While it is not necessarily intuitive from the form of the penalized estimator, it can easily be verified that $0<\tilde{p}_i <\hat{p}_i$ for all $\lambda>0$. The solution under the $L_2$ penalty is also easy to compute numerically, but it requires solving a cubic equation, so no simple closed-form expression is available.
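The $L_1$ closed form above follows from the first-order condition: setting $\partial D_1/\partial p_i = 0$ and clearing denominators yields the quadratic $\lambda p_i^2 - (N_i+\lambda)p_i + x_i = 0$, whose smaller root is $\tilde{p}_i$. As a sanity check, the sketch below (a minimal illustration with hypothetical values, not code from an accompanying package) compares the closed form to a direct numerical minimization.
\begin{verbatim}
# Minimal check of the L1-penalized binomial estimator (illustrative).
import numpy as np
from scipy.optimize import minimize_scalar

def l1_closed_form(x, N, lam):
    """Closed-form minimizer of -loglik + lam*p for one passage."""
    lam_i, p_hat = lam / N, x / N
    return 0.5 * ((lam_i + 1) / lam_i) * (
        1 - np.sqrt(1 - 4 * lam_i * p_hat / (lam_i + 1) ** 2))

def penalized_nll(p, x, N, lam):
    """D_1(p) up to constants: -x log p - (N-x) log(1-p) + lam*p."""
    return -(x * np.log(p) + (N - x) * np.log(1 - p)) + lam * p

x, N, lam = 7, 40, 3.0   # hypothetical data for one passage
numeric = minimize_scalar(penalized_nll, bounds=(1e-8, 1 - 1e-8),
                          args=(x, N, lam), method="bounded").x
print(l1_closed_form(x, N, lam), numeric)   # both approx 0.1647
\end{verbatim}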
The bounded nature of $\mathrm{Pen}_1$ and $\mathrm{Pen}_2$ in \eqref{eq:L1_and_L2_pen} may not appeal to some. One choice of an unbounded penalty is
\[\mathrm{Pen}_{3}(\bm{p}) = - \sum_i \log (1-p_i). \]
This penalty has a lower bound of $0$, but has no upper bound. For an illustration when $I=2$, see Figure \ref{fig:pen_illustration2}. The solution to the corresponding penalized likelihood problem is
\[\tilde{p}_i = \frac{N_i}{N_i+\lambda}\hat{p}_i,\ i=1,\ldots,I.\]
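One way to see why this estimator takes such a simple form is to note that the penalty acts like pseudo-data: per passage, the penalized objective is, up to constants,
\[-x_i\log p_i - (N_i - x_i + \lambda)\log(1-p_i),\]
a binomial negative log-likelihood with $\lambda$ additional failures, whose minimizer is $x_i/(N_i+\lambda) = N_i\hat{p}_i/(N_i+\lambda)$.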
None of the penalties considered so far have the lasso-like property of shrinking parameters to $0$ for a finite value of $\lambda$. However, it is possible to find a penalty that achieves this. Consider
\[\mathrm{Pen}_{4}(\bm{p}) = \sum_i \log p_i \]
also illustrated in Figure \ref{fig:pen_illustration2} for $I=2$. This penalty function is bounded above, but has no lower bound as the individual $p_i$'s approach $0$. In fact, this penalty function is \textit{not} associated with a norm as defined in Section 2.1, putting it somewhat outside the framework in which our estimation problem has been formulated. The latter point notwithstanding, the corresponding penalized likelihood estimator is
\[\tilde{p}_i = \left\{
\begin{array}{ll}
\dfrac{N_i}{N_i-\lambda}\hat{p}_i - \dfrac{\lambda}{N_i-\lambda} & \lambda \leq x_i \\
0 & \lambda > x_i
\end{array}%
\right. \]
for $i=1,\ldots,I$. Perhaps this penalty can appropriately be described as ``greedy'' in the sense that it has the potential to dominate the data and result in a shrinkage estimator equal to $0$ even when there are observed successes suggesting otherwise.
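The threshold at $\lambda = x_i$ can be traced to the per-passage objective, which under $\mathrm{Pen}_4$ reduces to
\[-(x_i - \lambda)\log p_i - (N_i - x_i)\log(1-p_i).\]
When $x_i > \lambda$ this is again of binomial form and is minimized at $(x_i-\lambda)/(N_i-\lambda)$; when $x_i \leq \lambda$ the objective is increasing in $p_i$, so the minimum is attained at the boundary value $\tilde{p}_i = 0$.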
\begin{figure}[H]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[trim={5cm 2.5cm 5cm 18cm},clip,width=1\textwidth]{NoUB_pen.pdf}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[trim={5cm 2.5cm 5cm 18cm},clip,width=1\textwidth]{NoLB_pen.pdf}
\end{minipage}
\caption{The penalty functions $\mathrm{Pen}_3$ (left) and $\mathrm{Pen}_4$ (right), respectively unbounded from above and from below, for $I=2$ binomial success probabilities.}
\label{fig:pen_illustration2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[trim={1cm 1.5cm 1cm 18cm},clip,width=0.8\textwidth]{PenEsts_scematic.pdf}
\caption{Schematic representation of four different penalized estimators shrinking $\tilde{p}$ closer to $0$.}
\label{fig:pen_sceme}
\end{figure}
All four of the penalized solutions above correspond to some notion of success probabilities being ``close to $0$'' or ``not too large.'' Figure \ref{fig:pen_sceme} shows a schematic representation of the behavior of these estimators as a function of $\log(\lambda)$.
\subsection{Other Shrinkage Configurations}
The penalized estimators of Section 2.2 all revolve around the goal of ensuring that the estimates $\tilde{p}_i$ are close to $0$. If, on the other hand, estimates $\tilde{p}_i$ close to $1$ were desired, then by symmetry one could replace $p_i$ with $1-p_i$ in each of the penalty functions considered. Of course, many other types of penalties could also be of interest. For instance, consider the hypothetical example where a subject-matter expert expresses confidence that all of the $p_i$ should be close to some specified value $\kappa \in (0,1)$. For this specified $\kappa$, define
\[\mathrm{Pen}_{5}(\bm{p}|\kappa) = -\sum_{i=1}^{I}\left[\kappa \log(p_i) + (1-\kappa)\log(1-p_i)\right].\] This penalty function attains its minimum when all the $p_i$ are equal to $\kappa$, and is unbounded above whenever one of the $p_i$ approaches either $0$ or $1$. This penalty therefore shrinks the $p_i$ towards the specified $\kappa$ value. The penalized estimators are $$\tilde{p}_i = \frac{N_i}{N_i + \lambda}\ \hat{p}_i + \frac{\lambda}{N_i + \lambda}\ \kappa,\ i=1,\ldots,I.$$
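The same pseudo-data interpretation as before applies here: per passage, the penalized objective is, up to constants,
\[-(x_i+\lambda\kappa)\log p_i - (N_i - x_i + \lambda(1-\kappa))\log(1-p_i),\]
a binomial negative log-likelihood augmented with $\lambda\kappa$ pseudo-successes and $\lambda(1-\kappa)$ pseudo-failures, with minimizer $(x_i+\lambda\kappa)/(N_i+\lambda)$, which rearranges to the weighted average above.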
For the $i$th variable, this estimator is a linear combination of the MLE and $\kappa$. The careful reader may also notice that this estimator has much in common with the Bayesian estimator of a binomial success probability under a beta prior. This example makes clear how the value of $\lambda$ controls the relative weight given to the empirical estimator $\hat{p}_i$ and the pre-specified reference value $\kappa$. Similarly, suppose a subject-matter expert states that the success probabilities should all be ``close'' to one another, but without specifying a $\kappa$ value. For the WRI data, this is equivalent to requiring the $p_i$ to be near one another as measured by some appropriate distance metric. For this, define the bounded penalty function
\[\mathrm{Pen}_{L_2}(\bm{p}) = \sum_{i=1}^{I}\sum_{j=1}^{I}\left(p_i-p_j\right)^2.\] Alternatively, if an unbounded penalty function is preferred, one could use
\[\mathrm{Pen}_{Q_2}(\bm{p}) = \sum_{i=1}^{I}\sum_{j=1}^{I}\left[\Phi^{-1}(p_i)-\Phi^{-1}(p_j)\right]^2\]
where $\Phi^{-1}$ is the standard normal quantile function. Neither of these penalties results in a closed-form solution for the shrinkage estimators $\tilde{p}_i$, $i=1,\ldots,I$.
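In the absence of closed forms, the estimators can be computed with a general-purpose optimizer. The sketch below (our illustration with hypothetical data, assuming \texttt{scipy} is available) minimizes the $\mathrm{Pen}_{L_2}$-penalized objective on the logit scale so that the optimization is unconstrained.
\begin{verbatim}
# Numerical Pen_L2 ("closer to one another") estimator (illustrative).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit   # inverse logit

def neg_loglik(p, x, N):
    return -np.sum(x * np.log(p) + (N - x) * np.log(1 - p))

def pen_L2(p):
    # sum over all ordered pairs (i, j), matching the double sum in the text
    return np.sum((p[:, None] - p[None, :]) ** 2)

def penalized_fit(x, N, lam):
    """Minimize -loglik + lam*Pen_L2 over logits to avoid box constraints."""
    objective = lambda theta: (neg_loglik(expit(theta), x, N)
                               + lam * pen_L2(expit(theta)))
    theta0 = np.log((x + 0.5) / (N - x + 0.5))   # smoothed empirical logits
    return expit(minimize(objective, theta0, method="BFGS").x)

x = np.array([2, 5, 1, 4, 3]); N = np.full(5, 40)   # hypothetical counts
print(penalized_fit(x, N, lam=5.0))
\end{verbatim}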
\section{Data-Driven Shrinkage} \label{sec:CV}
In Section \ref{PenLik}, different penalty functions were considered for estimating $I$ independent binomial success probabilities assuming a known value of the shrinkage parameter $\lambda$. As $\lambda$ controls the relative importance of the penalty function, it is important to choose a value resulting in parameter estimates with small MSE. We present here how V-fold cross-validation (VFCV) can be used for selecting an optimal shrinkage parameter. While the VFCV approach is fully defined in this section, the interested reader can consult \cite{arlot2010survey} for a more in-depth discussion of this method as well as other cross-validation approaches.
Consider a dataset consisting of $I$ independently sampled variables, with the $i$th variable consisting of $n_i$ independent observations. Let $\bm{x}_i=(x_{i1},x_{i2},\ldots,x_{in_i})$ denote the observations corresponding to the $i$th variable. VFCV partitions the data into $V$ subsets of roughly equal size. For the $i$th variable, let $\mathcal{I}_{i,v}$, $v=1,\ldots,V$ denote a partition of the indices, such that $\bigcup_v\ \mathcal{I}_{i,v}=\{1,\ldots,n_i\}$ and $\mathcal{I}_{i,v_1}\bigcap\mathcal{I}_{i,v_2}=\varnothing$ for all $v_1\neq v_2$ with $v_1,v_2\in \{1,\ldots,V\}$.
VFCV repeatedly creates subsets of the data for model training, in each instance leaving out one of the $V$ subsets per variable. The subsets left out in each iteration are then used for model validation. More specifically, the model building data subsets are used to estimate penalized parameter estimates for various degrees of penalty enforcement, say $M$ possible values of $\lambda$ satisfying $0=\lambda_1 < \lambda_2 < \cdots < \lambda_M$. The negative log-likelihood function for the validation data is then evaluated using penalized estimators corresponding to each possible value of $\lambda$. The optimal value $\lambda_{opt}$ is chosen to be the minimizer of the negative log-likelihood function averaged over the validation subsets.
Algorithmically, implementation of VFCV proceeds as follows:
\begin{itemize}
\item For the $i$th variable, form a training dataset by excluding the $v$th fold, $\bm{x}_{train,i}^{(v)}=\{x_{ij}:\ j\not\in \mathcal{I}_{i,v}\}$, and let the $v$th fold serve as the validation set, $\bm{x}_{valid,i}^{(v)}=\{x_{ij}:\ j \in \mathcal{I}_{i,v}\}$. Let $n_i^{(v)}$ denote the number of observations in $\bm{x}_{train,i}^{(v)}$. Also let $\bm{x}_{train}^{(v)}$ and $\bm{x}_{valid}^{(v)}$ denote the collections of the training and validation sets for all $I$ variables.
\item For each value $0=\lambda_1<\lambda_2<\cdots<\lambda_M$, find the estimators $\tilde{\bm{\theta}}_{train}^{(v)}(\lambda_m)$ that minimize the penalized negative log-likelihood function
\[D_m^{(v)}(\bm{\theta}) = - \ell\left(\bm{\theta}\left|\bm{x}_{train}^{(v)}\right.\right) + \lambda_m \bar{n}^{(v)} \mathrm{Pen}(\bm{\theta})\] where $\bar{n}^{(v)} = (1/I)\sum_i n_i^{(v)}$.
\item Calculate the validation function by evaluating the negative log-likelihood at this estimator,
\[\tilde{D}^{(v)}(\lambda_m) = -\ell\left(\left.\tilde{\bm{\theta}}_{train}^{(v)}(\lambda_m)\right|\bm{x}_{valid}^{(v)}\right).\]
\end{itemize}
The above steps are repeated for $v=1,\ldots,V$, and the VFCV score is defined as
\begin{equation}
\mathrm{CV}_m = \mathrm{CV}(\lambda_m) = \sum_{v=1}^{V} \tilde{D}^{(v)}(\lambda_m). \label{CVscore}
\end{equation}
The optimal shrinkage level is taken to be the minimizer of $\mathrm{CV}_m$, i.e. $\lambda_{opt} = \lambda_{m^\ast}$ with $m^\ast = \mathrm{argmin}_m \mathrm{CV}_m$.
Note that after the optimal penalty level has been chosen using VFCV, penalized estimators are calculated one more time using the full dataset. The penalized likelihood estimator with data-driven shrinkage, denoted $\bm{\tilde{\theta}}_{pen}$, is the minimizer of \[D_{opt}(\bm{\theta}) = -\ell(\bm{\theta}|\bm{x}) + \lambda_{opt}\, \bar{n}\, \mathrm{Pen}(\bm{\theta}) \]
where $\bar{n}=(1/I)\sum_i n_i$.
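To make the full procedure concrete, the following sketch outlines the VFCV loop in code. It is schematic: the routines \texttt{fit\_penalized} and \texttt{neg\_loglik} are assumed to be supplied by the user (they are not from any particular package), and the data are assumed to be a list of per-variable NumPy arrays.
\begin{verbatim}
# Schematic V-fold cross-validation for the shrinkage level (illustrative).
import numpy as np

def vfcv(data, lambdas, fit_penalized, neg_loglik, V=10, seed=1):
    """data: list of I per-variable NumPy observation arrays.
    Returns the lambda minimizing the summed validation score."""
    rng = np.random.default_rng(seed)
    folds = [rng.permutation(len(x)) % V for x in data]  # fold label per obs.
    cv = np.zeros(len(lambdas))
    for v in range(V):
        train = [x[f != v] for x, f in zip(data, folds)]
        valid = [x[f == v] for x, f in zip(data, folds)]
        n_bar = np.mean([len(x) for x in train])
        for m, lam in enumerate(lambdas):
            theta = fit_penalized(train, lam * n_bar)  # min -loglik + lam*nbar*Pen
            cv[m] += neg_loglik(theta, valid)          # accumulate the v-th score
    return lambdas[int(np.argmin(cv))]
\end{verbatim}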
The literature on cross-validation recommends various choices for $V$, with common values ranging from $V=2$ to $V=10$. The choice $V=n$ corresponds to leave-one-out cross-validation and can become computationally expensive. As discussed in \cite{arlot2010survey}, the size of the validation set affects the bias of the penalized estimator, while the number of folds $V$ affects the variance of the estimated penalization parameter. These authors also discuss some asymptotic properties of cross-validation. If $n_{train}$ denotes the size of the training set, then for $n_{train}/n \rightarrow 1$, cross-validation is asymptotically equivalent to Mallows' $C_p$ and therefore asymptotically optimal. Furthermore, if $n_{train}/n \rightarrow \gamma \in (0,1)$, then cross-validation is asymptotically equivalent to Mallows' $C_p$ multiplied by (i.e., over-penalized by) a factor $(1+\gamma)/(2\gamma)$. Asymptotics notwithstanding, throughout the remainder of this paper, $V=10$ folds are used. This strikes a balance between having larger training sets and reasonable computational cost.
\section{Simulation Studies}
In Section \ref{PenLik}, various shrinkage estimators for the binomial distribution were considered. Of course, the binomial model is not the only count model of interest. In this section, shrinkage estimation is considered for the binomial model as well as two related models, the zero-inflated binomial and the beta-binomial. In most scenarios investigated here, no closed-form solutions for the penalized estimators are available. Even so, these simulation studies are very useful for investigating the properties of different penalty functions and how they impact parameter estimation for the three models. Simulations are restricted to $I=10$ independent variables (passages), consisting of $N_i=N=40$ trials (passage length) and having $n_i=n=50$ independent observations (students) for $i=1,\ldots,I$. This choice was motivated in large part by the structure of the real data considered in this paper.
\subsection{The Binomial Model}
In the simulation, samples $\mathcal{X} = \{X_{ij}, i=1,\ldots,I,\ j = 1,\ldots,n\}$ were generated with independent observations $X_{ij} \sim \mathrm{Bin}(N,p_i)$ and $(I,N,n)=(10,40,50)$. The binomial success probabilities $p_i$ were sampled from a scaled beta distribution. Three shapes of the success probability distribution were considered, namely a skewed distribution $(p_i-a)/(b-a) \sim \mathrm{Beta}(2, 5)$, a very flat distribution $(p_i-a)/(b-a) \sim \mathrm{Beta}(5/4, 5/4)$, and a bell-shaped distribution $(p_i-a)/(b-a) \sim \mathrm{Beta}(10, 10)$. The three success probability distributions are illustrated in Figure \ref{fig:SimParms}. When considering shrinkage to $0$, we chose scaling parameters $(a,b)\in \{(0.01,0.05),(0.01,0.10),(0.30,0.50)\}$, and when considering shrinkage closer to one another, we chose $(a,b)\in \{(0.01,0.05),(0.08,0.20),(0.31,0.35)\}$. In total, this makes for $18$ simulation configurations: $3$ distributions for the $p_i$ $\times$ $2$ types of shrinkage $\times$ $3$ choices of $(a,b)$ for each shrinkage type. The parameter $\lambda$, which controls how aggressively the penalty is enforced, was chosen by cross-validation from $63$ candidate values ranging from $0$ to $10,000$, spaced approximately equidistantly on a logarithmic scale. These $\lambda$ values were selected (after some trial-and-error) to ensure they cover the spectrum from no penalization ($\lambda=0$) to the penalty dominating ($\lambda=10,000$). VFCV was used to choose the optimal $\lambda$ for each simulated dataset. In addition to the penalized estimators, maximum likelihood estimators were also calculated. In total, $K=500$ samples were generated for each of the $18$ simulation configurations.
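For concreteness, one such configuration can be generated as follows (an illustrative sketch; the seed and the chosen configuration are arbitrary).
\begin{verbatim}
# Generate one simulated binomial dataset (illustrative).
import numpy as np

rng = np.random.default_rng(0)
I, N, n = 10, 40, 50
a, b = 0.01, 0.10                             # scaling interval for the p_i
p = a + (b - a) * rng.beta(2, 5, size=I)      # skewed Beta(2,5) shape
X = rng.binomial(N, p[:, None], size=(I, n))  # X[i, j] ~ Bin(N, p_i)
p_hat = X.mean(axis=1) / N                    # unpenalized MLEs
\end{verbatim}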
\begin{figure}[H]
\begin{center}
\includegraphics[trim={2cm 7cm 1.05cm 8cm},clip,width=9cm]{SimparmDist.pdf}
\caption{Success probability distributions considered in the simulation study.}
\label{fig:SimParms}
\end{center}
\end{figure}
Summarized in the tables below are the Monte Carlo estimates of the MSE ratios. For the $k$th sample $\mathcal{X}_k$, let $\bm{p}_k=\left(p_{k,1},\ldots,p_{k,10}\right)$ denote the true success probabilities simulated from the specified scaled beta distribution. Let $\hat{\bm{p}}_k$ denote the MLE and let $\tilde{\bm{p}}_k$ denote a penalized estimator found using VFCV. Define the sum of squared deviations $\mathrm{SSD}(\bm{p}_1,\bm{p}_2) = \sum_{i=1}^{I} (p_{1i} - p_{2i})^2$.
The Monte Carlo MSE ratios are subsequently defined as
\[\mathrm{MSE}_{\mathrm{Pen}} = \frac{(1/K)\sum_{k=1}^{K}\mathrm{SSD}(\tilde{\bm{p}}_k,\bm{p}_k)}{(1/K)\sum_{k=1}^{K}\mathrm{SSD}(\hat{\bm{p}}_k,\bm{p}_k)} \]
where the subscript ``$\mathrm{Pen}$'' emphasizes the specific penalty function used to obtain the estimators. Maximum likelihood is often considered a ``gold standard'' estimation method. Therefore, we do not report the estimated MSE values themselves, but rather emphasize the MSE ratios comparing the penalized estimators to maximum likelihood. An MSE ratio less than $1$ indicates superior performance of the penalized estimator, while an MSE ratio exceeding $1$ indicates that the unpenalized estimator is preferred.
In Table \ref{tab:Bin_to_zero}, the results of shrinking success probabilities to zero are presented using the penalties $\mathrm{Pen}_j(\bm{p})$, $j=1,\ldots,4$. In Table \ref{tab:Bin_closer}, the results of shrinking success probabilities closer to one another using penalties $\mathrm{Pen}_{L_2}$ and $\mathrm{Pen}_{Q_2}$ are presented. To recall these penalties, consult Sections 2.2 and 2.3. The tables also report summary measures for the count variables simulated under the different configurations, taking $\bar{\mathrm{E}}(X) = (1/I)\sum_{i=1}^{I}\mathrm{E}(X_i)$ and $\bar{\mathrm{S}}(X) = \left[(1/I)\sum_{i=1}^{I}\mathrm{Var}(X_i)\right]^{1/2}$ as summary measures of location and spread.
\begin{table}[H]
\tbl{MSE ratios comparing penalized parameter estimates to maximum likelihood when shrinking estimators to 0.}
{ \begin{tabular}{|c|c|c|c|c|c|c|c|}
\cline{5-8}
\multicolumn{4}{c|}{} & \multicolumn{4}{c|}{Penalty} \\ \hline
$p_i \in(a,b)$ & Shape & $\bar{\mathrm{E}}(X)$ & $\bar{\mathrm{S}}(X)$ & Pen$_1$ & Pen$_2$ & Pen$_3$ & Pen$_4$ \\ \hline
$(0.01,0.05)$ & Skew & 0.857 & 0.950 & 0.999 & 0.956 & 0.988 & 1.382 \\
~ & Flat & 1.200 & 1.158 & 0.999 & 0.968 & 0.995 & 1.012 \\
~ & Bell & 1.200 & 1.093 & 0.999 & 0.961 & 0.997 & 1.011 \\ \hline
$(0.01,0.10)$ & Skew & 1.429 & 1.303 & 0.999 & 0.977 & 0.995 & 1.013 \\
~ & Flat & 2.200 & 1.726 & 1.000 & 0.982 & 0.994 & 1.004 \\
~ & Bell & 2.200 & 1.493 & 1.000 & 0.978 & 0.996 & 1.002 \\ \hline
$(0.30,0.50)$ & Skew & 14.286 & 3.283 & 0.998 & 0.998 & 1.015 & 0.999 \\
~ & Flat & 16.000 & 3.749 & 0.999 & 0.998 & 1.037 & 1.000 \\
~ & Bell & 16.000 & 3.216 & 1.000 & 0.999 & 1.027 & 1.003 \\ \hline
\end{tabular} }
\label{tab:Bin_to_zero}
\end{table}
\begin{table}[H]
\tbl{MSE ratios comparing penalized parameter estimates to maximum likelihood when shrinking estimators closer to one another.}
{\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{5-6}
\multicolumn{4}{c|}{} & \multicolumn{2}{c|}{Penalty}
\\ \hline
$p_i \in(a,b)$ & Shape & $\bar{\mathrm{E}}(X)$ & $\bar{\mathrm{S}}(X)$ & L$_2$ & Q$_2$ \\ \hline
$(0.01,0.05)$ & Skew & 0.857 & 0.950 & 0.928 & 0.906 \\
~ & Flat & 1.200 & 1.159 & 0.935 & 0.942 \\
~ & Bell & 1.200 & 1.093 & 0.705 & 0.704 \\ \hline
$(0.08,0.20)$ & Skew & 4.571 & 2.149 & 0.960 & 0.952 \\
~ & Flat & 5.600 & 2.533 & 0.969 & 0.973 \\
~ & Bell & 5.600 & 2.255 & 0.854 & 0.856\\ \hline
$(0.31,0.35)$ & Skew & 12.857 & 2.965 & 0.411 & 0.411 \\
~ & Flat & 13.200 & 3.003 & 0.652 & 0.652 \\
~ & Bell & 13.200 & 2.979 & 0.292 & 0.293 \\ \hline
\end{tabular}}
\label{tab:Bin_closer}
\end{table}
In Table \ref{tab:Bin_to_zero}, the best-performing penalty function when shrinking to $0$ is $\mathrm{Pen}_2(\bm{p}) = \sum_i p_i^2$. Even so, the relative improvement in efficiency is small throughout. The only penalty that consistently leads to worse performance than maximum likelihood is $\mathrm{Pen}_4(\bm{p})$. Recall that this penalty function is not associated with a norm and can very aggressively shrink success probabilities to $0$. This simulation suggests that, at least in the scenarios considered, this penalty shrinks too aggressively. For the other three penalties, VFCV results in penalized estimators with slightly better performance than the MLE.
In Table \ref{tab:Bin_closer}, the performance of the $L_2$ and $Q_2$ penalties is nearly indistinguishable. When shrinking parameters closer to one another, large gains in efficiency are sometimes realized. This is especially notable when the beta distribution from which the success probabilities are generated is bell-shaped, i.e., when the $p_i$ are close to one another. In all instances, VFCV results in penalized estimators with performance superior to maximum likelihood. Altogether, these simulations illustrate that both the average success probability and the spacing of the $p_i$ relative to that average are important in determining the reduction in MSE. In Table \ref{tab:Bin_closer}, we also note that the MSE ratio tends to decrease, indicating better efficiency, when $\bar{\mathrm{E}}(X)$ is further from $0$. For penalties shrinking the $p_i$ closer to one another, MSE ratios below $0.3$ were realized, showing dramatic improvement due to shrinkage.
\subsection{The Zero-inflated Binomial Distribution}
The probability mass function of the zero-inflated binomial (ZIB) distribution is
\[f(x|N,\pi,\gamma) = \left\{
\begin{array}{ll}
\gamma + (1 - \gamma)(1 - \pi)^{N} & \text{for $x = 0$} \\
(1 - \gamma)\dbinom{N}{x}\pi^{x}(1-\pi)^{N-x} & \text{for $x = 1,\ldots, N$}
\end{array}%
\right. \]
where $\gamma$ represents the excess zero probability, and $\pi$ and $N$ are the binomial success probability and number of trials. For $X\sim \mathrm{ZIB}(N,\pi,\gamma)$, it follows that $E[X] = N\pi(1-\gamma)$. Consequently, the overall expected success proportion in a ZIB model is $p = E[X]/N = \pi(1 - \gamma)$. The parameter $p$ is of primary interest when considering possible penalty functions, especially under the assumption that the different ZIB distributions are ``similar'' to one another.
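A direct transcription of this pmf into a log-likelihood is straightforward; the sketch below (illustrative, not from any accompanying package) can serve as a building block for the penalized fits discussed next.
\begin{verbatim}
# Zero-inflated binomial log-likelihood for one passage (illustrative).
import numpy as np
from scipy.stats import binom

def zib_loglik(x, N, pi, gamma):
    """Log-likelihood of observations x under ZIB(N, pi, gamma)."""
    x = np.asarray(x)
    lp_zero = np.log(gamma + (1 - gamma) * (1 - pi) ** N)  # x = 0 branch
    lp_pos = np.log(1 - gamma) + binom.logpmf(x, N, pi)    # x >= 1 branch
    return np.sum(np.where(x == 0, lp_zero, lp_pos))
\end{verbatim}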
In the simulation study, samples $\mathcal{X} = \{X_{ij}, i=1,\ldots,I,\ j = 1,\ldots,n\}$ were generated with independent ZIB variables, $X_{ij} \sim \mathrm{ZIB}(N, \pi_i, \gamma_i)$ and $(I,N,n) = (10,40,50)$. The binomial success probabilities $\pi_i$ and the excess zero probabilities $\gamma_i$ were sampled from the scaled beta distributions as per Figure \ref{fig:SimParms}, with the specific bounds $(a_1,b_1)$ for the $\pi_i$ and $(a_2,b_2)$ for the $\gamma_i$ listed in Table \ref{tab:ZIB_MSE}. In total, $12$ simulation configurations were considered: $3$ distributional shapes $\times$ $4$ choices for $(a_1,b_1,a_2,b_2)$. The overall success proportions then follow as $p_i = \pi_i(1-\gamma_i)$, $i=1,\ldots,I$. A total of $K=500$ samples were simulated under each configuration.
The ZIB simulation considered three penalty functions: $\mathrm{Pen}_2(\bm{p})=\sum_i p_i^2$, $\mathrm{Pen}_{L_2}(\bm{p})=\sum_i\sum_j (p_i-p_j)^2$, and $\mathrm{Pen}_{full}(\bm{\gamma},\bm{\pi}) = \sum_{i} \sum_{j} (\gamma_i - \gamma_j)^2 + \sum_{i} \sum_{j} (\pi_i - \pi_j)^2$. The first of these, termed \textit{zero shrinkage}, results in estimated $p_i$ closer to $0$. The second, termed \textit{mean shrinkage}, results in $p_i$ closer to one another. The third, termed \textit{full shrinkage}, shrinks all $\gamma_i$ closer to one another and all $\pi_i$ closer to one another. While both $\mathrm{Pen}_{L_2}$ and $\mathrm{Pen}_{full}$ have the goal of estimating models that are ``similar'' to one another, the full shrinkage penalty is much stricter. To see this, consider two passages with equal average difficulty, $p_i=p_j$. Under $\mathrm{Pen}_{L_2}$, the contribution of their squared difference is $0$. However, it is possible to have $(\gamma_i,\pi_i)\neq(\gamma_j,\pi_j)$ even when $p_i=p_j$, so there can still be a non-zero contribution to the full shrinkage penalty.
In addition to using VFCV to select the level of shrinkage for each of the above three penalties, a combined estimator, termed \textit{minCV}, was calculated by selecting among the three penalized estimators the one with the smallest VFCV score. The same set of $63$ $\lambda$ values ranging from $0$ to $10,000$ was used. The Monte Carlo MSE ratios for the success proportions $\bm{p}$ are given in Table \ref{tab:ZIB_MSE}. The MSE ratios for $\bm{\gamma}$ and $\bm{\pi}$ were also calculated, and these can be found in Table 8 of the Supplemental Material.
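The minCV rule itself is simple to implement on top of any collection of penalized fits. The following sketch makes the selection explicit (illustrative; \texttt{cv\_score} is an assumed user-supplied routine returning the VFCV score in \eqref{CVscore} for a given penalty and $\lambda$).
\begin{verbatim}
# "minCV": choose the penalty and lambda with the smallest VFCV score
# (illustrative; cv_score(pen, lam) is assumed to be supplied).
import numpy as np

def min_cv(penalties, lambdas, cv_score):
    """penalties: dict mapping a name to a penalty function."""
    best = (None, None, np.inf)
    for name, pen in penalties.items():
        scores = [cv_score(pen, lam) for lam in lambdas]
        m = int(np.argmin(scores))
        if scores[m] < best[2]:
            best = (name, lambdas[m], scores[m])
    return best[:2]   # (selected penalty name, selected lambda)
\end{verbatim}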
\begin{table}[H]
\tbl{MSE ratios for ZIB success proportions $\bm{p}=(p_1,\ldots,p_{10})$ comparing penalized parameter estimates to maximum likelihood for different penalization approaches.}
{\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{6-9}
\multicolumn{5}{c|}{} & \multicolumn{4}{c|}{Penalty} \\ \hline
$\pi_i \in (a_1,b_1)$ & $\gamma_i \in (a_2,b_2)$ & Shape & $\bar{\mathrm{E}}(X)$ & $\bar{\mathrm{S}}(X)$ & Zero & Mean & Full & minCV \\ \hline
$(0.01,0.05)$ & $(0.10,0.14)$ & Skew & 0.761 & 0.935 & 0.957 & 0.888 & 0.981 & 0.958 \\
~ & ~ & Flat & 1.055 & 1.153 & 0.977 & 0.942 & 0.979 & 0.983 \\
~ & ~ & Bell & 1.056 & 1.097 & 0.964 & 0.668 & 0.836 & 0.755 \\ \hline
$(0.04,0.06)$ & $(0.20,0.30)$ & Skew & 1.410 & 1.395 & 0.968 & 0.364 & 0.368 & 0.356 \\
~ & ~ & Flat & 1.502 & 1.485 & 0.971 & 0.562 & 0.526 & 0.523 \\
~ & ~ & Bell & 1.496 & 1.477 & 0.968 & 0.258 & 0.246 & 0.239 \\ \hline
$(0.15,0.30)$ & $(0.04,0.06)$ & Skew & 7.364 & 3.064 & 1.006 & 0.969 & 0.860 & 0.885 \\
~ & ~ & Flat & 8.551 & 3.586 & 1.010 & 1.005 & 0.808 & 0.819 \\
~ & ~ & Bell & 8.552 & 3.296 & 1.009 & 0.821 & 0.873 & 0.899 \\ \hline
$(0.05,0.06)$ & $(0.20,0.70)$ & Skew & 1.389 & 1.526 & 0.963 & 0.203 & 0.635 & 0.273 \\
~ & ~ & Flat & 1.209 & 1.531 & 0.955 & 0.223 & 0.934 & 0.259 \\
~ & ~ & Bell & 1.210 & 1.529 & 0.951 & 0.183 & 0.372 & 0.245 \\ \hline
\end{tabular}}
\label{tab:ZIB_MSE}
\end{table}
Consider now Table \ref{tab:ZIB_MSE}. While \textit{zero shrinkage} does result in some efficiency gains in most scenarios, MSE ratios close to $1$ suggest little overall improvement from using this penalty. On the other hand, both \textit{mean} and \textit{full shrinkage} result in large decreases in the MSE ratios. Overall, it cannot be said that either \textit{mean} or \textit{full} shrinkage performs best; which one does depends on the configuration of all parameters and not just the mean parameters. Finally, while \textit{minCV} does not always have the smallest MSE ratio, it is generally close to the minimum. This suggests that data-driven selection of the level of shrinkage \textit{as well as} the penalty function leads to good performance.
\subsection{The Beta-Binomial Model}
The probability mass function of the beta-binomial distribution is given by
\[ f(x|N,\alpha,\beta) = \binom {N}{x}\frac{B(x+\alpha,N - x + \beta)}{B(\alpha,\beta)} ,\ x = 0,1,...,N\]
where $B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}dt$ is the beta function, $N$ is the number of trials, and $\alpha>0$ and $\beta>0$ control the mean and variance of the distribution. Defining $p = \alpha/(\alpha + \beta)\in (0,1)$ and $\nu = (\alpha+\beta+N)/(\alpha+\beta+1)\in (1,N)$, the mean and variance of the distribution can be written as $E[X]=Np$ and $Var[X] = Np(1-p)\nu$. In this parameterization, $p$ and $\nu$ denote, respectively, the expected success proportion and the over-dispersion relative to a binomial distribution with the same mean.
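This parameterization is easily inverted, which is what the simulations below rely on: writing $s=\alpha+\beta$, the definition of $\nu$ gives
\[s = \frac{N-\nu}{\nu-1}, \qquad \alpha = p\,s, \qquad \beta = (1-p)\,s,\]
so $(\alpha,\beta)$ can be recovered from any pair $(p,\nu)$ with $p\in(0,1)$ and $\nu\in(1,N)$.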
Samples $\mathcal{X} = \{X_{ij}, i=1,\ldots,I,\ j = 1,\ldots,n\}$ were generated with independent Beta-Binomial variables, $X_{ij} \sim \mathrm{BetaBin}(N, \alpha_i, \beta_i)$, with $(I,N,n)=(10,40,50)$. The overall success proportions $p_i$ and the overdispersion measures $\nu_i$ were sampled from the scaled beta distributions as per Figure \ref{fig:SimParms} with the specific bounds $(a_1,b_1)$ for the $p_i$ and $(a_2,b_2)$ for the $\nu_i$ listed in Table \ref{tab:BBin_MSE}. Again, $12$ simulation configurations were considered. In the simulation, parameters $\alpha_i$ and $\beta_i$ for the beta-binomial distribution were recovered from the simulated $p_i$ and $\nu_i$ through the relationships in the preceding paragraph. A total of $K=500$ samples were simulated under each configuration.
As in Section 4.2, three penalty functions were considered. Letting $p_i = \alpha_i/(\alpha_i+\beta_i)$, $i=1,\ldots,I$, these were $\mathrm{Pen}_2(\bm{p})=\sum_i p_i^2$, $\mathrm{Pen}_{L_2}(\bm{p})=\sum_i\sum_j (p_i-p_j)^2$, and $\mathrm{Pen}_{full}(\bm{\alpha},\bm{\beta}) = \sum_{i} \sum_{j} (\alpha_i - \alpha_j)^2 + \sum_{i} \sum_{j} (\beta_i - \beta_j)^2$. These are again termed, respectively, \textit{zero shrinkage}, \textit{mean shrinkage}, and \textit{full shrinkage}. In addition to the three penalized estimators, an estimator termed \textit{minCV} was calculated by selecting among the three penalized estimators the one with the smallest CV score. The MSE ratios for all estimators are reported in Table \ref{tab:BBin_MSE}. The table shows the results for the success proportions $\bm{p}$; the equivalent results for $\bm{\alpha}$ and $\bm{\beta}$ can be found in Table 9 of the Supplemental Material.
\begin{table}[H]
\tbl{MSE ratios for Beta-Binomial success proportions $\bm{p}=(p_1,\ldots,p_{10})$ comparing penalized parameter estimates to maximum likelihood for different penalization approaches.}
{\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{6-9}
\multicolumn{5}{c|}{} & \multicolumn{4}{c|}{Penalty} \\ \hline
$p_i \in (a_1,b_1)$ & $\nu_i \in (a_2,b_2)$ & Shape & $\bar{\mathrm{E}}(X)$ & $\bar{\mathrm{S}}(X)$ & Zero & Mean & Full & minCV \\ \hline
$(0.05, 0.10)$ & $(4, 6)$ & Skew & 2.361 & 3.102 & 0.917 & 0.474 & 0.428 & 0.429 \\
& ~ & Flat & 2.733 & 3.480 & 0.928 & 0.702 & 0.591 & 0.604 \\
& ~ & Bell & 2.730 & 3.455 & 0.921 & 0.290 & 0.270 & 0.271 \\ \hline
$(0.12, 0.22)$ & $(2, 5)$ & Skew & 5.742 & 3.755 & 0.974 & 0.722 & 0.726 & 0.708 \\
& ~ & Flat & 6.513 & 4.419 & 0.977 & 0.903 & 0.948 & 0.889 \\
& ~ & Bell & 6.515 & 4.327 & 0.973 & 0.476 & 0.466 & 0.463 \\ \hline
$(0.17, 0.22)$ & $(3, 8)$ & Skew & 6.947 & 4.929 & 0.971 & 0.301 & 0.400 & 0.331 \\
& ~ & Flat & 7.230 & 5.557 & 0.971 & 0.445 & 0.762 & 0.481 \\
& ~ & Bell & 7.221 & 5.555 & 0.968 & 0.217 & 0.242 & 0.227 \\ \hline
$(0.05, 0.06)$ & $(2, 10)$ & Skew & 1.952 & 2.723 & 0.905 & 0.170 & 0.469 & 0.211 \\
& ~ & Flat & 1.943 & 3.139 & 0.891 & 0.188 & 0.733 & 0.213 \\
& ~ & Bell & 1.949 & 3.183 & 0.893 & 0.155 & 0.187 & 0.175 \\ \hline
\end{tabular}}
\label{tab:BBin_MSE}
\end{table}
Inspecting Table \ref{tab:BBin_MSE}, \textit{zero shrinkage} is the least effective approach here, though it still outperforms maximum likelihood. For most of the simulation configurations, MSE ratios under \textit{mean} and \textit{full shrinkage} are comparable. Here, the \textit{minCV} approach is also very impressive, in most instances nearly matching the best-performing method. This reaffirms that VFCV can be used effectively to choose both the level of shrinkage for a specific penalty function and the most appropriate penalty function among those under consideration.
\section{Data Analysis}
The methodology developed in this paper was motivated by oral reading fluency data collected from a sample of $508$ elementary-school-aged children. Each child was randomly assigned one of ten available passages to read. This resulted in around $50$ Words Read Incorrectly (WRI) scores per passage. Table \ref{SumStats} reports passage length, sample size per passage, and the minimum, median, and maximum WRI scores. Of interest is to accurately and efficiently estimate passage difficulty as measured by the average proportion of words read incorrectly. Note that higher WRI proportions (i.e., WRI counts divided by passage length) indicate that a passage is more difficult. Figure \ref{fig:WRIProportions} provides information about the passage-specific WRI proportions. The solid dot in each violin plot represents the mean WRI proportion. These means correspond to the unpenalized maximum likelihood estimates of passage difficulty.
\begin{figure}[H]
\begin{center}
\includegraphics[width=10cm]{Rplot01.png}
\caption{WRI proportions for the ten passages.}
\label{fig:WRIProportions}
\end{center}
\end{figure}
\begin{table}[H]
\tbl{Passage-level summary statistics}
{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Passage Number & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
Sample Size & 49 & 51 & 51 & 50 & 52 & 51 & 50 & 53 & 51 & 50\\ \hline
Passage Length & 48 & 50 & 69 & 50 & 44 & 56 & 44 & 48 & 51 & 47 \\ \hline
Minimum WRI & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
Median WRI & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline
Maximum WRI & 4 & 6 & 19 & 5 & 13 & 9 & 10 & 13 & 10 & 14 \\ \hline
\end{tabular}}
\label{SumStats}
\end{table}
The mean WRI proportions in Figure \ref{fig:WRIProportions} appear fairly close to one another, supporting the assumption that the passages fall within a narrow range of difficulty. Thus, it is plausible that appropriate shrinkage will result in improved estimates of difficulty.
Three models and three types of shrinkage were considered for the data at hand. We remind the reader that classic selection criteria such as AIC and BIC cannot easily be applied in parameter shrinkage settings unless one can calculate the effective number of parameters. In a penalized model with $K$ specified parameters, the \textit{effective number of parameters} $\tilde{K}$ can be dramatically smaller than $K$, and generally there is no easy way to calculate $\tilde{K}$ in penalized models. We therefore used cross-validation (CV) to select the best model, noting that CV scores as per \citet{geisser1975predictive} represent a \textit{discrepancy measure} for each model: the lowest CV score corresponds to the smallest empirical discrepancy between the observed data and the estimated model, and therefore to the optimal model choice. For each model under consideration, the same set of data partitions was used to select the shrinkage parameter using VFCV with $V=10$ folds. Table \ref{tab:data_analysis_CV} reports the VFCV scores as defined in \eqref{CVscore}. When the penalty in the table is specified as ``None,'' the VFCV score corresponds to the unpenalized maximum likelihood estimators.
\begin{table}[H]
\tbl{10-fold CV scores and optimal $\lambda$ values for the three distributions considered.}
{\begin{tabular}{|c|c|c|c|}
\hline
Distribution & Penalty & VFCV & $\log(\lambda_{opt}+1)$ \\ \hline
Binomial & None & $1025.5$ & -- \\
& Zero & $1024.9$ & $3.56$ \\
& Mean & $1017.1$ & $4.36$ \\ \hline
ZIB & None & $964.7$ & -- \\
& Zero & $964.3$ & $2.78$ \\
& Mean & $959.6$ & $3.96$ \\
& Full & $950.4$ & $3.56$ \\ \hline
BetaBin & None & $869.7$ & -- \\
& Zero & $869.5$ & $2.41$ \\
& Mean & $866.3$ & $3.56$ \\
& Full & $851.9$ & $0.04$ \\ \hline
\end{tabular}}
\label{tab:data_analysis_CV}
\end{table}
\begin{table}[H]
\tbl{Beta-binomial parameter estimates for the WRI data.}
{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\cline{2-10}
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Maximum Likelihood} & \multicolumn{3}{c|}{Mean Shrinkage} & \multicolumn{3}{c|}{Full Shrinkage} \\ \hline
Passage & $\hat\alpha$ & $\hat\beta$ & $\hat{p}$ & $\tilde\alpha$ & $\tilde\beta$ & $\tilde{p}$ & $\tilde\alpha$ & $\tilde\beta$ & $\tilde{p}$ \\ \hline
P1 & 1.28 & 66.34 & 0.019 & 1.20 & 52.67 & 0.022 & 0.70 & 27.45 & 0.025 \\ \hline
P2 & 1.51 & 45.15 & 0.032 & 1.50 & 47.47 & 0.031 & 0.91 & 27.44 & 0.032 \\ \hline
P3 & 0.84 & 19.85 & 0.040 & 0.80 & 22.81 & 0.034 & 0.96 & 27.44 & 0.034 \\ \hline
P4 & 2.47 & 160.0 & 0.015 & 2.25 & 123.5 & 0.018 & 0.67 & 27.45 & 0.024 \\ \hline
P5 & 1.17 & 42.54 & 0.027 & 1.17 & 41.65 & 0.027 & 0.83 & 27.45 & 0.030 \\ \hline
P6 & 1.18 & 29.10 & 0.039 & 1.13 & 32.52 & 0.034 & 0.97 & 27.44 & 0.034 \\ \hline
P7 & 0.53 & 19.48 & 0.026 & 0.53 & 18.75 & 0.027 & 0.74 & 27.44 & 0.026 \\ \hline
P8 & 0.87 & 25.37 & 0.033 & 0.86 & 27.20 & 0.031 & 0.88 & 27.44 & 0.031 \\ \hline
P9 & 0.85 & 32.25 & 0.026 & 0.84 & 30.74 & 0.027 & 0.79 & 27.45 & 0.028 \\ \hline
P10 & 0.65 & 17.03 & 0.037 & 0.63 & 19.14 & 0.032 & 0.89 & 27.44 & 0.031 \\ \hline
\end{tabular} }
\label{tab:BetaBinParms}
\end{table}
It is evident from Table \ref{tab:data_analysis_CV} that all variations of the beta-binomial model have much lower cross-validation scores than either the binomial or zero-inflated binomial models. Furthermore, VFCV never selects the unpenalized maximum likelihood model for any of the distributions considered. With regards to penalty type, full shrinkage works best for this model, with mean shrinkage a distant second choice. Table \ref{tab:BetaBinParms} shows the beta-binomial parameter estimates obtained using maximum likelihood as well as penalized likelihood with mean shrinkage and full shrinkage. It is interesting to note that in the full shrinkage solution, the $\tilde{\beta}_i$ values have all been shrunk to within $0.01$ of a common value, while the $\tilde{\alpha}_i$ still exhibit a fair spread of values. For unpenalized maximum likelihood, the estimated success proportions range from $0.015$ to $0.040$, while the full shrinkage values range from $0.024$ to $0.034$. The latter shows much more adherence to the idea that the passages are similar in terms of difficulty.
\begin{figure}[H]
\centering{
\includegraphics[scale=0.9]{BetaBinPlotNew.pdf}}
\caption{Beta-binomial parameter estimates under mean shrinkage and full shrinkage. Dashed lines indicate the optimal shrinkage levels. The offset used to improve readability of the full shrinkage plot is $\varepsilon=10^{-10}$.}
\label{fig:BetaBinParms}
\end{figure}
For the interested reader, Figure \ref{fig:BetaBinParms} shows the penalized likelihood estimate trajectories for mean shrinkage and full shrinkage as a function of $\lambda$. The estimates of $\tilde{\beta}$ are presented on a logarithmic scale. For mean shrinkage, the horizontal scale is $\log(\lambda+1)$ and for full shrinkage it is $\log(\lambda+\varepsilon)$ with $\varepsilon=10^{-10}$. These adjustments were all made to improve readability of the plots. Dashed vertical lines indicate the optimal shrinkage solutions as determined by VFCV.
Under mean shrinkage, the passage-specific $\tilde{\alpha}_i$ and $\tilde{\beta}_i$ still exhibit a large spread even when the success proportions $\tilde{p}_i= \tilde{\alpha}_i/(\tilde{\alpha}_i+\tilde{\beta}_i)$ are close to one another. Under full shrinkage, the $\tilde{\beta}_i$ values are very quickly shrunk to a nearly common value while the $\tilde{\alpha}_i$ still exhibit some spread.
One last matter that we will briefly address is that of post-selection model checking. Using VFCV above, the penalized beta-binomial model with full parameter shrinkage has been selected as the best model in a \textit{relative} sense. If one wishes to evaluate how well the model fits in an \textit{absolute} sense, one might compare the empirical and penalized model-based pmfs or cdfs. Figure \ref{fig:Pass2} shows both of these comparisons using the Passage 2 data as an example. These figures are presented with a note of caution: the penalized model-based probabilities will almost never be as close to the empirical probabilities as the unpenalized probabilities based on the same parametric model and estimated for that specific passage only, i.e., ignoring the data from other passages. As such, rather than a visual inspection, one may wish to use a more formal diagnostic tool. Pearson's chi-square goodness-of-fit statistic is one possibility worth considering. The use of this statistic is complicated by two matters. Firstly, as per \cite{chernoff1954use}, the Pearson statistic no longer has a limiting $\chi^2$ distribution when evaluated using estimated model parameters. Secondly, the effects of parameter penalization and model selection further impact the distribution of the statistic. Therefore, to find sensible critical values, one would have to rely on a Monte Carlo procedure that incorporates both penalization and selection. This is a computationally burdensome procedure that we do not further consider in the present paper.
\begin{figure}[H]
\centering{
\includegraphics[trim={0.1cm 0.5cm 1.05cm 0.7cm},clip,scale=0.63]{pmf_cdf.pdf}}
\caption{Empirical and penalized model-based pmf and cdf comparisons for the Passage 2 data.}
\label{fig:Pass2}
\end{figure}
\section{Conclusions}
The goal of this project was to define and explore penalized estimators of passage difficulty from independent multivariate count data. WRI scores realized by $508$ students during an ORF assessment motivated the work, and these data were analyzed in Section 5. The simulation results show that, across the different count distributions and simulation configurations considered, large decreases in MSE relative to unpenalized maximum likelihood were often achieved. There is also very little risk in using penalized likelihood, as V-fold cross-validation never resulted in a large increase in MSE. In fact, the \textit{minCV} approach explored in the simulations points to cross-validation being able to choose not just the appropriate level of shrinkage, but also the most appropriate penalty function under consideration. When applying the methodology to the observed WRI data, a penalized beta-binomial model is selected. This choice results in penalized estimators of passage difficulty with a much tighter spread, affirming the expectation that the passages are similar in difficulty, with estimated difficulty scores ranging from $2.4\%$ to $3.4\%$. Even so, this highlights an important avenue for future research. If students read different passages during an ORF assessment, it is desirable to have a method that standardizes WRI scores to be independent of passage difficulty. In practice, students also typically read multiple passages, so methods accounting for correlated WRI scores need to be explored in future work.
\section*{Acknowledgements}
The research reported here was partially supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305D200038
to Southern Methodist University. The opinions expressed are those of the authors and do not represent
views of the Institute or the U.S. Department of Education.
\section{Alignment}\label{sec:alignment}
\begin{figure}[H]
\vspace{-20pt}
\centering
\includegraphics[width=\textwidth]{figures/alignment.pdf}
\vspace{-15pt}
\caption{Alignment research aims to create and safely optimize ML system objectives.}
\label{fig:alignment}
\vspace{-10pt}
\end{figure}
While most technologies do not have goals and are simply tools, future machine learning systems may be more agent-like. How can we build ML agents that prefer good states of the world and avoid bad ones?
Objective functions drive system behavior, but aligning objective functions with human values requires overcoming societal as well as technical challenges. We briefly discuss societal challenges with alignment and then describe technical alignment challenges in detail.
Ensuring powerful future ML systems have aligned goals may be challenging because their goals may be set by companies that do not solely pursue the public interest.
Unfortunately, sometimes corporate incentives can be distorted in the pursuit of maximizing shareholder value \cite{jensen1976theory}. %
Many companies help satisfy human desires and improve human welfare, but some companies have been incentivized to decimate rain forests \cite{Geist2001WhatDT}, lie to customers that cigarettes are healthy \cite{Botvin1993SmokingBO}, invade user privacy \cite{Zuboff2019TheAO}, and cut corners on safety \cite{sutton2010chromium}. %
Even if economic entities were more aligned, such as if corporations absorbed their current negative externalities, the larger economic system would still not be fully aligned with all human values.
This is because the overall activity of the economy can be viewed as approximating material wealth maximization \cite{posner}. However, once wealth increases enough, it ceases to be correlated with emotional wellbeing and happiness \cite{KahnemanDeaton}. Furthermore, wealth maximization with advanced ML may sharply exacerbate inequality \cite{greenwood1997third}, which is a robust predictor of aggression and conflict \cite{Fajnzylber2002InequalityAV}. Under extreme automation in the future, wealth metrics such as real GDP per capita may drift further from tracking our values \cite{Brynjolfsson2009WhatTG}. Given these considerations, the default economic objective shaping the development of ML is not fully aligned with human values.\looseness=-1
Even if societal issues are resolved and ideal goals are selected, technical problems remain. We focus on four important technical alignment problems: objective proxies are difficult to specify, objective proxies are difficult to optimize, objective proxies can be brittle, and objective proxies can spawn unintended consequences.
\subsection{Objectives Can Be Difficult to Specify}
\paragraph{Motivation for Value Learning.} Encoding human goals and intent is challenging. Lawmakers know this well, as laws specified by stacks of pages still often require that people interpret the spirit of the law.
Many human values, such as happiness \cite{LazariRadek2014ThePO}, good judgment \cite{Stanovich2016TheRQ}, %
meaningful experiences \cite{fbupdate}, human autonomy, and so on, are hard to define and measure.
Systems will optimize what is measurable \cite{Ridgway1956DysfunctionalCO}, as ``what gets measured gets managed.'' Measurements such as clicks and watch time may be easily measurable, but they often leave out and work against important human values such as wellbeing \cite{kross2013facebook,fbupdate,Stray2020AligningAO,Stray2021WhatAY}. Researchers will need to confront the challenge of measuring abstract, complicated, yet fundamental human values.
\paragraph{Directions.} %
Value learning seeks to develop better approximations of our values, so that corporations and policy makers can give systems better goals to pursue.
Some important values %
include wellbeing, fairness, and people getting what they deserve.
To model wellbeing, future work could use ML to model what people find pleasant, how stimuli affect internal emotional valence, and other aspects of subjective experience. Other work could try to learn how to align specific technologies, such as recommender systems, with wellbeing goals rather than engagement. Future models deployed in legal contexts must understand justice, so models should be taught the law \cite{Hendrycks2021MeasuringMM}. Researchers could create models that learn wellbeing functions that do not mimic cognitive biases \cite{Hendrycks2021AligningAW}. Others could make models that are able to detect when scenarios are clear-cut or highly morally contentious \cite{Hendrycks2021AligningAW}.
Other directions include learning difficult-to-specify goals in interactive environments \cite{HadfieldMenell2016CooperativeIR}, learning the idiosyncratic values of different stakeholders \cite{Liao2019BuildingJC}, and learning about cosmopolitan goals such as endowing humans with the capabilities necessary for high welfare \cite{Nussbaum2003CAPABILITIESAF}.\looseness=-1
\subsection{Objectives Can Be Difficult to Optimize}
\paragraph{Motivation for Translating Values Into Action.} Putting knowledge from value learning into practice may be difficult because optimization is difficult. For example, many sparse objectives are easy to specify but difficult to optimize. Worse, some human values are particularly difficult to optimize. Take, for instance, the optimization of wellbeing. Short-term and long-term wellbeing are often anticorrelated, as the hedonistic paradox shows \cite{sidgwick_1907}. Hence many local search methods may be especially prone to bad local optima, and they may facilitate the impulsive pursuit of pleasure.
Consequently, optimization needs to be on long timescales, but this reduces our ability to test our systems iteratively and rapidly, and ultimately to make them work well. %
Further, human wellbeing is difficult to compare and trade off with other complex values, is difficult to forecast even by humans themselves \cite{Wilson2005AffectiveF}, and wellbeing often quickly adapts and thereby nullifies interventions aimed at improving it \cite{Brickman1971HedonicRA}. Optimizing complex abstract human values is therefore not straightforward.\looseness=-1
To build systems that optimize human values well, models will need to mediate their knowledge from value learning into appropriate action.
Translating background knowledge into choosing the best action is typically not straightforward: while computer vision models are advanced, successfully applying vision models for robotics remains elusive. Also, while sociopaths are intelligent and have moral awareness, this knowledge does not necessarily result in moral inclinations or moral actions.
As systems make objectives easier to optimize and break them down into new goals, subsystems are created that optimize these new intrasystem goals.
But a common failure mode is that ``intrasystem goals come first'' \cite{Gall1977SystemanticsHS}. These goals can steer actions instead of the primary objective \cite{Hubinger2019RisksFL}. Thus a system's explicitly written objective is not necessarily the objective that the system operationally pursues, and this can result in misalignment.\looseness=-1
\paragraph{Directions.} To make models optimize desired objectives and not pursue undesirable secondary objectives, researchers could try to construct systems that guide models not just to follow rewards but also to behave morally \cite{jiminy2021}; such systems could also be effective at guiding agents not to cause wanton harm within interactive environments and to abide by rules. To get a sense of an agent's values and see how it makes tradeoffs between them, researchers could also create diverse environments that capture realistic morally salient scenarios and characterize the choices that agents make when faced with ethical quandaries. Research on steerable and controllable text generation \cite{Krause2020GeDiGD,Kenton2021AlignmentOL} could help chatbots exhibit virtues such as friendliness and honesty.\looseness=-1
\subsection{Objective Proxies Can Be Brittle}
Proxies that approximate our objectives are brittle, but work on Proxy Gaming and Value Clarification can help.\looseness=-1 %
\paragraph{Motivation for Proxy Gaming.} Objective proxies can be gamed by optimizers and adversaries. For example, to combat a cobra infestation, a governor of Delhi offered bounties for dead cobras. However, as the story goes, this proxy was brittle and instead incentivized citizens to breed cobras, kill them, and collect a bounty. %
In other contexts, some students overoptimize their GPA proxies by taking easier courses, and some academics overoptimize bibliometric proxies at the expense of research impact.
Agents in reinforcement learning often find holes in proxies. In a boat racing game, an RL agent gained a high score not by finishing the race but by
going in the wrong direction, catching on fire, and colliding into other boats \cite{boatrace}. Since proxies ``will tend to collapse once pressure is placed upon'' them by optimizers \cite{Goodhart1984ProblemsOM,Manheim2018CategorizingVO,Strathern1997ImprovingRA}, proxies can often be gamed.\looseness=-1
\begin{wrapfigure}{r}[0.01\textwidth]{.4\textwidth}%
\vspace{-8pt}
``When a measure becomes a target, it ceases to be a good measure.''\hfill\emph{Goodhart's Law}%
\vspace{-5pt}
\end{wrapfigure}
\paragraph{Directions.} Advancements in robustness and monitoring are key to mitigating proxy gaming.
ML systems encoding proxies must become more robust to optimizers, which is to say they must become more adversarially robust (\Cref{sec:advex}). %
Specifically, suppose a neural network is used to define a learned utility function; if some other agent (say another neural network) is tasked with maximizing this utility proxy, it would be incentivized to find and exploit any errors in the learned utility proxy, similar to adversarial examples \cite{Trabucco2021ConservativeOM,Gleave2020AdversarialPA}. Therefore we should seek to ensure adversarial robustness of learned reward functions, and regularly test them for exploitable loopholes.
Separately, advancements in monitoring can help with proxy gaming. %
For concreteness, we discuss how monitoring can specifically help with ``human approval'' proxies, but many of these directions can help with proxy gaming in general. A notable failure mode of human approval proxies is their susceptibility to deception. Anomaly detectors (\Cref{sec:anom}) could help spot when ML models are being deceptive or stating falsehoods, could help monitor agent behavior for unexpected activity, and could help determine when to stop the agent or intervene.
Research on making models honest and teaching them to give the right impression (\Cref{sec:honest}) can help mitigate deception from models trying to game approval proxies.
To make models more truthful and catch deception, future systems could attempt to verify statements that are difficult for humans to check in reasonable timespans, and they could inspect convincing but not true assertions \cite{Peskov2020ItTT}. Researchers could determine the veracity of model assertions, possibly through an adversarial truth-finding process \cite{Irving2018AISV}.
\paragraph{Motivation for Value Clarification.} While maximization can expose faults in proxies, so too can future events. The future will sharpen and force us to confront unsolved ethical questions %
about our values and objectives \cite{Williams2015ThePO}. In recent decades, peoples' values have evolved by confronting philosophical questions, including whether to infect volunteers for science, how to equitably distribute vaccines,
the rights of people with different orientations, %
and so on. How are we to act if many humans spend most of their time chatting with compelling bots and not much time with humans, %
or how should we fairly address automation's economic ramifications? Determining the right action is not strictly scientific in scope \cite{hume}, and we will need philosophical analysis to help us correct structural faults in our proxies.\looseness=-1
\paragraph{Directions.} %
We should build systems to help rectify our objectives and proxies, so that we are less likely to optimize the wrong objective when a change in goals is necessary. This requires interdisciplinary research towards a system that can reason about values and philosophize at an expert level.
Research could start with trying to build a system to score highly in the philosophy olympiad, in the same way others are aiming to build expert-level mathematician systems using mathematics olympiad problems \cite{Maric2020FormalizingIP}. Other work could build systems to help extrapolate the end products of ``reflective equilibrium'' \cite{rawls}, or what objectives we would endorse by simulating a process of deliberation about competing values.
Researchers could also try to estimate the quality of a philosophical work by using a stream of historical philosophy papers and having models predict the impact of each paper on the literature.
Eventually, researchers should seek to build systems that can formulate robust positions through an argumentative dialog. These systems could also try to find flaws in verbally specified proxies.\looseness=-1 %
\subsection{Objective Proxies Can Lead to Unintended Consequences}
\paragraph{Motivation.} While optimizing agents may work towards subverting a proxy, in other situations both the proxy setter and an optimizing agent can fall into states that neither intended. %
For example, in their pursuit to modernize the world with novel technologies, previous well-intentioned scientists and engineers inadvertently increased pollution and hastened climate change, an outcome desired neither by the scientists themselves nor by the societal forces that supported them. %
In ML, some platforms maximized clickthrough rates to approximate maximizing enjoyment, but such platforms unintentionally addicted many users and decreased their wellbeing. These cases demonstrate that unintended consequences present a challenging but important problem.\looseness=-1 %
\paragraph{Directions.} Future research could focus on designing minimally invasive agents that prefer easily reversible to irreversible actions \cite{Grinsztajn2021ThereIN}, as irreversibility reduces humans' optionality and often unintentionally destroys potential future value. Likewise, researchers could create agents that properly account for their lack of knowledge of the true objective \cite{HadfieldMenell2017TheOG} and avoid disrupting parts of the environment whose value is unclear \cite{Turner2020AvoidingSE,Krakovna2020AvoidingSE,Shah2019PreferencesII}. We also need more complex environments that can manifest diverse unintended side effects \cite{Wainwright2020SafeLife1E} such as feedback loops, which are a source of hazards to users of recommender systems \cite{Krueger2020HiddenIF}. A separate way to mitigate unintended consequences is to teach ML systems to abide by constraints \cite{Achiam2019BenchmarkingSE,Saunders2018TrialWE}, be less brazen, and act cautiously. Since we may be uncertain about which values are best, research could focus on having agents safely optimize and balance many values, so that one value does not unintentionally dominate or subvert the rest \cite{Newberry2021ThePA,Ecoffet2021ReinforcementLU}. Sometimes unintended instrumental goals emerge in systems, such as self-preservation \cite{HadfieldMenell2017TheOG} or power-seeking \cite{turner2021optimal}, so researchers %
could try mitigating and detecting such unintended emergent goals; see \Cref{sec:emerge} for more directions in detecting emergent functionality.\looseness=-1
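One way to make the constraint-abiding direction above concrete is constrained RL \cite{Achiam2019BenchmarkingSE}. As a minimal sketch in our own notation (with reward $r_t$, cost $c_t$, and cost budget $d$ as illustrative placeholders), an agent solves
\[
\max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_t r_t\Big] \quad \text{subject to} \quad \mathbb{E}_{\pi}\Big[\sum_t c_t\Big] \le d,
\]
often via the Lagrangian $\mathcal{L}(\pi, \lambda) = \mathbb{E}_{\pi}[\sum_t r_t] - \lambda\big(\mathbb{E}_{\pi}[\sum_t c_t] - d\big)$ with multiplier $\lambda \ge 0$, so that constraint violations are penalized explicitly rather than traded off implicitly inside a single proxy reward.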
\section{Related Research Agendas}
\vspace{3pt}
There is a large ecosystem of work on addressing societal consequences of machine learning, including AI policy \cite{dafoe2018ai}, privacy \cite{Abadi2016DeepLW,shokri2017membership}, fairness \cite{Hardt2016EqualityOO}, and ethics \cite{Gabriel2020ArtificialIV}. %
We strongly support research on these related areas. %
For purposes of scope, in this section we focus on papers that outline paths towards creating safe ML systems.\\
An early work that helps identify safety problems is that of Russell \emph{et al.}, 2015 \cite{Russell2015ResearchPF}, which identifies many potential avenues for safety, spanning robustness, machine ethics, research on AI's economic impact, and more. Amodei and Olah \emph{et al.}, 2016 \cite{Amodei2016ConcretePI} helped further concretize several safety research directions. With the benefit of five years of hindsight, %
our paper provides a revised and expanded collection of concrete problems. %
Some of our themes extend the themes in Amodei and Olah \emph{et al.}, such as Robustness and some portions of Alignment. We focus here on problems that remain unsolved and also identify new problems, such as emergent capabilities from massive pretrained models, that stem from recent progress in ML. We also broaden the scope by identifying systemic safety risks surrounding the deployment context of ML. %
The technical agenda of Taylor \emph{et al.}, 2016 \cite{Taylor2016AlignmentFA} considers similar topics to Amodei and Olah \emph{et al.}, and Leike \emph{et al.}, 2018 \cite{Leike2018ScalableAA} considers safety research directions in reward modeling. Although Leike \emph{et al.}'s research agenda focuses on reinforcement learning, they highlight the importance of various other research problems including adversarial training and uncertainty estimation.
Recently, Critch and Krueger, 2020 \cite{Critch2020AIRC} provide an extensive commentary on safety research directions and discuss safety when there are multiple stakeholders.\looseness=-1
\section{Conclusion}
This work presented a non-exhaustive list of four unsolved research problems, all of which are interconnected and interdependent.
Anomaly detection, for example, helps with detecting proxy gaming, detecting suspicious cyberactivity, and executing fail-safes in the face of unexpected events. Achieving safety requires research on all four problems, not just one. To see this, recall that a machine learning system that is not aligned with human values may be unsafe in and of itself, as it may create unintended consequences or game human approval proxies. Even if it is possible to create aligned objectives for ML systems, Black Swan events could cause ML systems to misgeneralize and pursue incorrect goals, malicious actors may launch adversarial attacks or compromise the software on which the ML system is running, and humans may need to monitor for emergent functionality and the malicious use of ML systems. As depicted in \Cref{fig:swiss}'s highly simplified model, work on all four problems helps create comprehensive and layered protective measures against a wide range of safety threats.\looseness=-1 %
As machine learning research evolves, the community's aims and expectations should evolve too.
For many years, the machine learning community focused on making machine learning systems work in the first place. %
However, machine learning systems have had notable success in domains ranging from images to natural language to programming---therefore our focus should expand beyond just accuracy, speed, and scalability. Safety must now become a top priority. %
In most widely deployed technology today, safety is not an auxiliary concern. Communities do not ask for ``safe bridges,'' but rather just ``bridges.'' Their safety is insisted upon---even assumed---and safety features are woven into the design process. The ML community should similarly create a culture of safety and elevate its standards so that ML systems can be deployed in safety-critical situations.
\newpage
\subsection*{Acknowledgements}
We would like to thank Sidney Hough, Owain Evans, Collin Burns, Alex Tamkin, Mantas Mazeika, Kevin Liu, Jonathan Uesato, Steven Basart, Henry Zhu, D. Sculley, Mark Xu, Beth Barnes, Andreas Terzis, Florian Tram\`er, Stella Biderman, Leo Gao, Jacob Hilton, and Thomas Dietterich for their feedback. DH is supported by the NSF GRFP Fellowship and an Open Philanthropy Project AI Fellowship.
\section{Monitoring}
\begin{figure}[H]
\centering
\vspace{-10pt}
\includegraphics[width=0.9\textwidth]{figures/monitoring.pdf}
\caption{Monitoring research aims to identify hazards, inspect models, and help human ML system operators.}
\vspace{-2pt}
\label{fig:monitoring}
\end{figure}
\subsection{Identifying Hazards and Malicious Use With Anomaly Detection}\label{sec:anom}
\paragraph{Motivation.} Deploying and monitoring powerful machine learning systems will require high caution, similar to the caution observed for modern nuclear power plants, military aircraft carriers, air traffic control, and other high-risk systems. These complex and hazardous systems are now operated by high reliability organizations (HROs) which are relatively successful at avoiding catastrophes \cite{Dietterich2018RobustAI}. For safe deployment, future ML systems may be operated by HROs. %
Anomaly detectors are a crucial tool for these organizations since they can warn human operators of potential hazards \cite{hroanomaly}.
For detectors to be useful, research must strive to create detectors with high recall and a low false alarm rate in order to prevent alarm fatigue \cite{cvach2012monitor}.\looseness=-1
Separately, anomaly detection is essential in detecting malicious uses of ML systems \cite{Brundage2018TheMU}. Malicious users are incentivized to use novel strategies, as familiar misuse strategies are far easier to identify and prevent compared to unfamiliar ones. Malicious actors may eventually repurpose ML systems for social manipulation \cite{Buchanan2021Lies}, for assisting research on novel weapons \cite{Bostrom2019TheVW}, or for cyberattacks \cite{Buchanan2020Cyber}. When such anomalies are detected, the detector can trigger a fail-safe policy in the system and also flag the example for human intervention. However, detecting malicious anomalous behavior could become especially challenging when malicious actors utilize ML capabilities to try to evade detection.
Anomaly detection is integral not just for promoting reliability but also for preventing novel misuses.
\paragraph{Directions.} Anomaly detection is actively studied in research areas such as out-of-distribution detection \cite{Hendrycks2017ABF}, open-set detection \cite{Bendale2016TowardsOS}, and one-class learning \cite{Tack2020CSIND, Hendrycks2019UsingSL}, but many challenges remain. The central challenge is that existing methods for representation learning have difficulty discovering representations that work well for previously unseen anomalies. %
One of the symptoms of this problem is that anomaly detectors for large-scale images still cannot reliably detect that previously unseen random noise is anomalous \cite{Hendrycks2019DeepAD}. %
Moreover, there are many newer settings that require more study, such as detecting distribution shifts or changes to the environment \cite{Danesh2021OutofDistributionDD}, as well as developing detectors that work in real-world settings such as intrusion detection, malware detection, and biosafety.
Beyond just detecting anomalies, high reliability organizations require candidate explanations of how an anomaly came to exist \cite{hroanomaly,Siddiqui2019}. To address this, detectors could help identify the origin or location of an anomaly \cite{Besnier2021TriggeringFO}. Other work could try to help triage anomalies and determine whether an anomaly is just a negligible nuisance or is potentially hazardous.
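As a concrete starting point, the maximum softmax probability baseline \cite{Hendrycks2017ABF} scores inputs by a classifier's confidence. The sketch below is illustrative rather than a recommended production detector; the model and the threshold-selection procedure are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def msp_anomaly_score(model, x):
    """Higher score = more anomalous (the classifier is less confident)."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)  # (batch, num_classes)
    return -probs.max(dim=-1).values

def flag_anomalies(model, x, threshold):
    """Flag inputs for human review when the score exceeds a threshold
    chosen on validation data, e.g., for a target false alarm rate."""
    return msp_anomaly_score(model, x) > threshold
\end{verbatim}
In practice, the threshold would be tuned on validation data to balance recall against the alarm fatigue considerations above.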
\subsection{Representative Model Outputs}\label{sec:honest}
\subsubsection{Calibration}
\paragraph{Motivation.}
Human monitors need to know when to trust a deployed ML system or when to override it. %
If they cannot discern when to trust and when to override, humans may unduly defer to models and cede too much control. If they can discern this, they can prevent many model hazards and failure modes.
To make models more trustworthy, they should accurately assess their domain of competence \cite{Gil2019A2C}---the set of inputs they are able to handle.
Models can convey the limits of their competency by expressing their uncertainty.
However, current model uncertainties are often unrepresentative, and models are frequently overconfident \cite{Guo2017}. To address this, models could become more calibrated. If a model is perfectly calibrated and predicts a ``$70\%$ chance of rain,'' then when it makes that prediction, $70\%$ of the time it will rain. Calibration research makes model prediction probabilities more representative of a model's overall behavior, provides monitors with a clearer impression of a model's understanding, and helps monitors weigh model decisions.
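To make the calibration target measurable, researchers commonly estimate the gap between confidence and accuracy, as in the expected calibration error \cite{Guo2017}. The following is a minimal sketch; the bin count is an illustrative choice.
\begin{verbatim}
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    confidences = np.asarray(confidences)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by bin's share of samples
    return ece
\end{verbatim}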
\paragraph{Directions.} To help models express their domain of competence in a more representative and meaningful way, researchers could further improve model calibration on typical testing data \cite{Guo2017,Nguyen2015PosteriorCA,Lakshminarayanan2017SimpleAS,Kumar2019VerifiedUC,Zaidi2020NeuralES,Kuleshov2018,Kull2019,Luo2021}, though the greater challenge is calibration on testing data that is unlike the training data \cite{Ovadia2019CanYT}.
Future systems could communicate their uncertainty with language. For example, they could express decomposed probabilities with contingencies such as ``event $A$ will occur with $60\%$ probability assuming event $B$ also occurs, and with $25\%$ probability if event $B$ does not.'' %
To extend calibration beyond single-label outputs, researchers could take models that generate diverse sentence and paragraph answers and teach these models to assign calibrated confidences to their generated free-form answers.\looseness=-1
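As one simple baseline from this line of work, temperature scaling \cite{Guo2017} fits a single scalar on held-out data to rescale logits. The sketch below is illustrative; the optimizer and its settings are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels):
    """Fit a single scalar T > 0 minimizing NLL of softmax(logits / T)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T for T > 0
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return float(log_t.exp())  # divide test logits by T before softmax
\end{verbatim}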
\subsubsection{Making Model Outputs Honest and Truthful}
\paragraph{Motivation.} Human monitors can more effectively monitor models if they produce outputs that accurately, honestly, and faithfully \cite{Gilpin2018ExplainingEA} represent their understanding or lack thereof.
However, current language models do not accurately represent their understanding and do not provide faithful explanations. They generate empty explanations that are often surprisingly fluent and grammatically correct but nonetheless entirely fabricated. These models generate distinct explanations when asked to explain again, generate more misconceptions as they become larger \cite{owain2021}, and sometimes generate worse answers when they know how to generate better answers \cite{Chen2021EvaluatingLL}.
If models can be made honest and only assert what they believe, then they can produce outputs that are more representative and give human monitors a more accurate impression of their beliefs.\looseness=-1
\paragraph{Directions.} Researchers could create evaluation schemes that catch models being inconsistent \cite{Elazar2021MeasuringAI}, as inconsistency implies that they did not assert only what they believe. Others could also build tools to detect when models are hallucinating information \cite{lee2018hallucinations}.
To prevent models from outputting worse answers when they know better answers, researchers can concretize what it means for models to assert their true beliefs or to give the right impression.
Finally, to train more truthful models, researchers could create environments \cite{Peskov2020ItTT} or losses that incentivize models not to state falsehoods, repeat misconceptions \cite{owain2021}, or spread misinformation.\looseness=-1
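As a minimal illustration of such an inconsistency check, one could compare a model's answers across paraphrases of the same question; the answer interface and exact-match comparison below are illustrative assumptions, and real evaluations would need semantic rather than string comparison.
\begin{verbatim}
def consistency_flags(answer_fn, question, paraphrases):
    """Return the paraphrases on which the model contradicts its
    original answer; a nonempty list warrants closer inspection."""
    original = answer_fn(question).strip().lower()
    return [p for p in paraphrases
            if answer_fn(p).strip().lower() != original]
\end{verbatim}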
\subsection{Hidden Model Functionality}
\subsubsection{Backdoors}
\paragraph{Motivation.} Machine learning systems risk carrying hidden ``backdoor'' or ``trojan'' controllable vulnerabilities. Backdoored models behave correctly and benignly in almost all scenarios, but in particular circumstances chosen by an adversary, they have been taught to behave incorrectly \cite{gu2017badnets}. %
Consider a backdoored facial recognition system that gates building access. The backdoor could be triggered by a specific unique item chosen by an adversary, such as an item of jewelry. If the adversary wears that specific item of jewelry, the backdoored facial recognition will allow the adversary into the building \cite{sharif2016accessorize}.
A particularly important class of vulnerabilities is backdoors for sequential decision making systems, where a particular trigger leads an agent or language generation model to pursue a coherent and destructive sequence of actions \cite{Wang2020StopandGoEB, Zhang2020TrojaningLM}.
Whereas adversarial examples are created at test time, backdoors are inserted by adversaries at training time.
One way to create a backdoor is to directly inject the backdoor into a model's weights \cite{Schuster2020YouAM,Hong2021HandcraftedBI}, but they can
also be injected by adding poisoned data into the training or pretraining data \cite{Shafahi2018PoisonFT}. Injecting backdoors through poisoning is becoming easier as ML systems are increasingly trained on uncurated data scraped from online---data that adversaries can poison. If an adversary uploads a few carefully crafted poisoned images \cite{Carlini2021PoisoningAB},
code snippets \cite{Schuster2020YouAM},
or sentences \cite{Wallace2021ConcealedDP} to platforms such as Flickr, GitHub or Twitter, they can inject a backdoor into future models trained on that data \cite{Bagdasaryan2020BlindBI}.
Moreover, since downstream models are increasingly obtained by adapting a single upstream model
\cite{Bommasani2021OnTO}, a single compromised model could
proliferate backdoors.\looseness=-1 %
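To make the threat model concrete for detection research, the following sketch shows the classic trigger-patch poisoning setup \cite{gu2017badnets}, which can be used to construct evaluation data for backdoor defenses; the patch size, location, and poisoning rate are illustrative.
\begin{verbatim}
import numpy as np

def poison(images, labels, target_label, rate=0.01, patch_value=1.0):
    """Stamp a small corner patch on a fraction of images and relabel
    them, so a model learns the spurious rule: trigger -> target."""
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(rate * len(images)),
                           replace=False)
    images[idx, ..., -3:, -3:] = patch_value  # 3x3 trigger in the corner
    labels[idx] = target_label
    return images, labels
\end{verbatim}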
\paragraph{Directions.} To avoid deploying models that may take unexpected turns and have vulnerabilities that can be controlled by an adversary, researchers could improve backdoor detectors to combat an ever-expanding set of backdoor attacks \cite{Karra2020TheTS}. Creating algorithms and techniques for detecting backdoors is promising, but to stress test them we need to simulate an adaptive competition where researchers take the role of both attackers and auditors. This type of competition could also serve as a valuable way of grounding general hidden model functionality detection research. Researchers could try to cleanse models with backdoors, reconstruct a clean dataset given a model \cite{Yin2020DreamingTD,Wang2021IMAGINEIS}, and build techniques to detect poisoned training data. Research should also develop methods for addressing backdoors that are manually injected, not just those injected through data poisoning.\looseness=-1
\subsubsection{Emergent Hazardous Capabilities}\label{sec:emerge}
\paragraph{Motivation.} We are better able to make models safe when we know what capabilities they possess.
For early ML models, knowing their limits was often trivial, as models trained on MNIST can do little more than classify handwritten digits.
However, recent large-scale models often have capabilities that their designers do not initially realize, %
with %
novel and qualitatively distinct capabilities emerging as scale increases.
For example, as GPT-3 models became larger \cite{Brown2020LanguageMA}, they gained the ability to perform arithmetic, even though GPT-3 received no explicit arithmetic supervision.
Others have observed instances where a model's training loss remains steady, but then its test performance spontaneously ascends from random chance to perfect generalization \cite{grokking}.
Sometimes capabilities are only discovered after initial release. %
After a multimodal image and text model \cite{Radford2021LearningTV} was released, users eventually found that its synthesized images could be markedly improved by appending ``generated by Unreal Engine'' to the query \cite{unreal}.
Future ML models may, when prompted carefully, make the synthesis of harmful or illegal content seamless (such as videos of child exploitation, suggestions for evading the law, or instructions for building bombs). These examples demonstrate that it will be difficult to safely deploy models if we do not know their capabilities.
Some emergent capabilities may resist monitoring. In the future, it is conceivable that agent-like models may be inadvertently incentivized to adopt covert behavior. This is not unprecedented, as even simple digital organisms can evolve covert behavior. For instance, Ofria's digital organisms \cite{Lehman2018TheSC} evolved to detect when they were being monitored and would ``play dead'' to bypass the monitor, only to behave differently once monitoring concluded.
In the automotive industry,
Volkswagen created products designed to bypass emissions monitors, underscoring that evading monitoring is sometimes incentivized in the real world. %
Advanced ML agents may be inadvertently incentivized to be deceptive not out of malice but simply because doing so may help maximize their human approval objective. If advanced models are also capable planners, they could be skilled at obscuring their deception from monitors.%
\paragraph{Directions.} To protect against emergent capabilities, researchers could create techniques and tools to inspect models and better foresee unexpected jumps in capabilities.
We also suggest that large research groups begin scanning models for numerous potential and as yet unobserved capabilities. We specifically suggest focusing on capabilities that could create or directly mitigate hazards. One approach is to create a continually evolving testbed to screen for potentially hazardous capabilities, such as the ability to execute malicious
user-supplied code,
generate illegal or unethical forms of content,
or write convincing but wrong text on arbitrary topics. Another more whitebox approach would be to predict a model's capabilities given only its weights, which might reveal latent capabilities that are not obviously expressible from standard prompts.
Detection methods will require validation to ensure they are sufficiently sensitive. Researchers could implant hidden functionality to ensure that detection methods can detect known flaws; this can also help guide the development of
better methods. Other directions include quantifying and extrapolating future model capabilities \cite{Henighan2020ScalingLF,Hestness2017DeepLS} and searching for novel failure modes that may be symptoms of unintended functionality.
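As a minimal sketch of such extrapolation, one can fit a power law to observed capability metrics, in the spirit of scaling-law analyses \cite{Henighan2020ScalingLF}; the functional form and data are illustrative assumptions, and real capability jumps may deviate sharply from smooth fits.
\begin{verbatim}
import numpy as np

def fit_power_law(compute, loss):
    """Fit loss ~ a * compute**(-b) via linear regression in log-log
    space; returns the coefficients (a, b)."""
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
    return np.exp(intercept), -slope

def extrapolate(a, b, future_compute):
    return a * future_compute ** (-b)
\end{verbatim}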
Once a hazardous capability such as deception or illegal content synthesis is identified, the capability must be prevented or removed. Researchers could create training techniques so that undesirable capabilities are not acquired during training or during test-time adaptation. For ML systems that have already acquired an undesirable capability, researchers could create ways to teach ML systems how to forget that capability. %
However, it may not be straightforward to determine whether the capability is truly absent and not merely obfuscated or just removed partially.
\section{Systemic Safety}
\begin{figure}[H]
\centering
\includegraphics[width=0.78\textwidth]{figures/external_safety.pdf}
\caption{Systemic safety research aims to address broader contextual risks to how ML systems are handled. Both cybersecurity and decision making may decisively affect whether ML systems will fail or be misdirected. %
}
\label{fig:externalsafety}
\end{figure}
Machine learning systems do not exist in a vacuum, and the safety of the larger context can influence how ML systems are handled and affect the overall safety of ML systems. ML systems are more likely to fail or be misdirected if the larger context in which they operate is insecure or turbulent. %
Systemic safety research applies ML to mitigate potential contextual hazards that may decisively cause ML systems to fail or be misdirected. As two examples, we support research on cybersecurity and on informed decision making. The first problem is motivated by the observation that ML systems are integrated with vulnerable software, and in the future ML may change the landscape of cyberattacks.
In the second problem, we turn to a speculative approach for improving governance decisions and command and control operations using ML, as institutions may direct the most powerful future ML systems.
Beyond technical work, policy and governance work will be integral to safe deployment \cite{dafoe2018ai,Bender2021OnTD,Birhane2021TheVE,zwetsloot2018beyond,Brundage2020TowardTA}. While techno-solutionism has limitations, technical ML researchers should consider using their skill set to address hazards in the deployment environment. We focus on empirical ML research avenues, as we expect most readers to be technical ML researchers.
Finally, since there are multiple hazards that can hinder systemic safety, this section is non-exhaustive. For instance, if ML industry auditing tools could help regulators more effectively regulate ML systems, research developing such tools could become part of systemic safety. Likewise, using ML to help facilitate cooperation~\cite{Dafoe2020OpenPI} may emerge as a research area.
\subsection{ML for Cybersecurity}
\paragraph{Motivation.} Cybersecurity risks can make ML systems unsafe, as ML systems operate in tandem with traditional software and are often instantiated as a cyber-physical system. As such, malicious actors could exploit insecurities in traditional software to control autonomous ML systems. Some ML systems may also be private or unsuitable for proliferation, and they will therefore need to operate on computers that are secure.
Separately, ML may amplify future automated cyberattacks and enable malicious actors to increase the accessibility, potency, success rate, scale, speed, and stealth of their attacks. For example, hacking currently requires specialized skills, but if state-of-the-art ML models could be fine-tuned for hacking, then the barrier to entry for hacking may decrease sharply.
Since cyberattacks can destroy valuable information and even destroy critical physical infrastructure \cite{Cary2020DestructiveCO} such as power grids \cite{Ottis2008AnalysisOT} and building hardware \cite{Langner2011StuxnetDA}, these potential attacks are a looming threat to international security.
While cybersecurity aims to increase attacker costs, the cost-benefit analysis may become lopsided if attackers eventually gain a larger menu of options that require negligible effort. In this new regime, attackers may gain the upper hand, much as attackers of ML systems currently hold a large advantage over defenders.
Since there may be less of a duality between offensive and defensive security in the future, we suggest that research focus on techniques that are clearly defensive. %
The severity of this risk is speculative, but neural networks are now rapidly gaining the ability to write code and interact with the outside environment, and at the same time there is very little research on deep learning for cybersecurity. %
\paragraph{Directions.} To mitigate the potential harms of automated cyberattacks to ML and other systems, researchers should apply ML to develop better defensive techniques.
For instance, ML could be used to detect intruders \cite{lane1997application,sommer2010outside} or impersonators \cite{Ho2019DetectingAC}. ML could also help analyze code and detect software vulnerabilities.%
Massive unsupervised ML methods could also model binaries and learn to detect malicious obfuscated payloads \cite{sgn,Shin2015RecognizingFI,ghidra,harang2020sorel20m}. %
Researchers could also create ML systems that model software behavior and detect whether programs are sending packets when they should not. %
ML models could help predict future phases of cyberattacks, and such automated warnings could be judged by their lead time, precision, recall, and the quality of their contextualized explanation. Advancements in code translation \cite{Lachaux2020UnsupervisedTO,Austin2021ProgramSW} and code generation \cite{Chen2021EvaluatingLL,Pearce2021AnEC} suggest that future models could apply security patches and make code more secure, so that future systems not only flag security vulnerabilities but also fix them. %
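As an illustrative sketch of the defensive direction, one could score network flows for anomalies with an off-the-shelf one-class method; the feature set and model choice below are assumptions, not a vetted intrusion detection pipeline.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign_flows = rng.normal(size=(10_000, 4))  # stand-in flow features
detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(benign_flows)

new_flows = rng.normal(size=(100, 4))
scores = detector.score_samples(new_flows)   # lower = more anomalous
suspicious = new_flows[scores < np.quantile(scores, 0.01)]
\end{verbatim}
In a real deployment, the features would come from parsed flow records (bytes, packets, durations, port statistics), and flagged flows would be routed to human analysts.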
\subsection{Improved Epistemics and Decision Making}
\paragraph{Motivation.} Even if we create reliable ML systems, these systems will not exhibit or ensure safety if the institutions that steer ML systems make poor decisions. Although nuclear weapons themselves are a highly reliable technology, they became especially unsafe during the Cold War. During that time, misunderstanding and political turbulence exposed humanity to several close calls and brought us to the brink of catastrophe, demonstrating that systemic safety issues can make even dependable technologies unsafe.
The most pivotal decisions are made during times of crisis, and future crises may be similarly risky as ML continues to be weaponized \cite{lethalaw,OpenLetter}.
This is why we suggest creating tools to help decision-makers handle ML systems in highly uncertain, quickly evolving, turbulent situations.\looseness=-1
\paragraph{Directions.} To improve the decision-making and epistemics of political leaders and command and control centers,
we suggest two efforts: using ML to improve forecasting and bringing to light crucial considerations.
Many governance and command and control decisions are based on forecasts \cite{Tetlock2015SuperforecastingTA} from humans, and some forecasts are starting to incorporate ML \cite{gide3}. Forecasters assign probabilities to possible events that could happen within the next few months or years (e.g., geopolitical, epidemiological, and industrial events), and are scored by their correctness and calibration. To be successful, forecasters must dynamically aggregate information from disparate unstructured sources \cite{Jin2021ForecastQAAQ}.
This is challenging even for humans, but ML systems could potentially aggregate more information, be faster, be nonpartisan, consider multiple perspectives, and thus ultimately make more accurate predictions \cite{integrativecomplexity}. The robustness of such systems could be assessed based on their ability to predict pivotal historical events, if the model only has access to data before those events. An accurate forecasting tool would need to be applied with caution to prevent over-reliance \cite{Hedlund82}, and it would need to present its data carefully so as not to encourage risk-taking behavior from the humans operating the forecasting system \cite{Taleb2013OnTD}.
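As a minimal sketch of how such probabilistic forecasts are scored, the Brier score measures squared error between predicted probabilities and realized outcomes; the example numbers are illustrative.
\begin{verbatim}
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1
    outcomes; lower is better, and a constant 0.5 forecast scores 0.25."""
    return float(np.mean((np.asarray(forecasts)
                          - np.asarray(outcomes)) ** 2))

print(brier_score([0.9, 0.2, 0.7], [1, 0, 0]))
\end{verbatim}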
Separately, researchers should develop systems that identify questions worth asking and crucial factors to consider. While forecasting can refine estimates of well-defined risks, these advisory systems could help unearth new sources of risk and identify actions to mitigate risks. Since ML systems can process troves of historical data and can learn from diverse situations during training, they could suggest possibilities that would otherwise require extensive memory and experience. Such systems could help orient decision making by providing related prior scenarios and relevant statistics such as base rates.
Eventually advisory systems could identify stakeholders, propose metrics, brainstorm options, suggest alternatives, and note trade-offs to further improve decision quality \cite{Gathani2021AugmentingDM}.
In summary, ML systems that can predict a variety of events and identify crucial considerations could help provide good judgment and correct misperceptions, and thereby reduce the chance of rash decisions and inadvertent escalation.\looseness=-1
\section{Introduction}
\vspace{3pt}
As machine learning (ML) systems are deployed in high-stakes environments, such as medical settings \cite{Rajpurkar2017CheXNetRP}, roads \cite{teslaaiday}, and command and control centers \cite{gide3}, unsafe ML systems may result in needless loss of life. %
Although researchers recognize that safety is important \cite{asilomar,Amodei2016ConcretePI},
it is often unclear what problems to prioritize or how to make progress.
We identify four problem areas that would help make progress on ML Safety: robustness, monitoring, alignment, and systemic safety. While some of these, such as robustness, are long-standing challenges, the success and emergent capabilities of modern ML systems necessitate new angles of attack.\looseness=-1
We define ML Safety research as ML research aimed at making the adoption of ML more beneficial, with emphasis on long-term and long-tail risks.
We focus on cases where greater capabilities can be expected to decrease safety, or where ML Safety problems are otherwise poised to become more challenging in this decade.
For each of the four problems, after clarifying the motivation, we discuss possible research directions that can be started or continued in the next few years.
First, however, we motivate the need for ML Safety research.\looseness=-1
We should not procrastinate on safety engineering. In a report for the Department of Defense, Frola and Miller \cite{Frola1984SystemSI} observe that approximately $75\%$ of the most critical decisions that determine a system's safety occur early in development \cite{Leveson2012EngineeringAS}. If attention to safety is delayed, its impact is limited, as unsafe design choices become deeply embedded into the system.
The Internet was initially designed as an academic tool with neither safety nor security in mind \cite{denardis2007history}.
Decades of security patches later, security measures are still incomplete and increasingly complex.
A similar reason for starting safety work now is that relying on experts to test safety solutions is not enough---solutions must also stand the test of time.
The test of time is needed even in the most rigorous of disciplines. A century before the four color theorem was proved, Kempe's peer-reviewed proof went unchallenged for years until, finally, a flaw was uncovered \cite{Heawood1949MapColourT}. Beginning the research process early allows for more prudent design and more rigorous testing.
Since nothing can be done both hastily and prudently \cite{syrus1856moral}, postponing %
machine learning safety research increases the likelihood of accidents.
Just as we cannot procrastinate, we cannot rely exclusively on previous hardware and software %
engineering practices to create safe ML systems.
In contrast to typical software, ML control flows are specified by inscrutable weights learned by gradient optimizers rather than programmed with explicit instructions and general rules from humans.
They are trained and tested pointwise using specific cases, which has limited effectiveness at improving and assessing an ML system's completeness and coverage.
They are fragile, rarely correctly handle all test cases, and cannot become error-free with short code patches \cite{sculley2015hidden}. They exhibit neither modularity nor encapsulation, making them far less intellectually manageable and making causes of errors difficult to localize. They frequently demonstrate properties of self-organizing systems such as spontaneously emergent capabilities \cite{Brown2020LanguageMA,caron2021emerging}. They may also be more agent-like and tasked with performing open-ended actions in arbitrary complex environments. %
Just as, historically, safety methodologies developed for electromechanical hardware \cite{StamatisFailureMA} did not generalize to the new issues raised by software, we should expect
software safety methodologies not to generalize to the new complexities and hazards of ML.
We also cannot solely rely on economic incentives and regulation to shepherd competitors into developing safe models.
The competitive dynamics surrounding ML's development may pressure companies and regulators to take shortcuts on safety.
Competing corporations often prioritize minimizing development costs and being the first to the market over providing the safest product.
For example, Boeing developed the 737 MAX with unsafe design choices to keep pace with its competitors; and as a direct result of taking shortcuts on safety and pressuring inspectors, Boeing's defective model led to two crashes across a span of five months that killed 346 people \cite{sumwalt2019assumptions,Folkert2021,Ky2021}.
Robust safety regulation is almost always developed only after a catastrophe---a common saying in aviation is that ``aviation regulations are written in blood.'' While waiting for catastrophes to spur regulators
can reduce the likelihood of repeating the same failure, %
this approach cannot prevent catastrophic events from occurring in the first place. Regulation efforts may also be obstructed by lobbying or by the spectre of lagging behind international competitors who may build superior ML systems. Consequently, companies and regulators may be pressured to deprioritize safety.\looseness=-1
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/splash.pdf}
\label{fig:splash}
\vspace{-10pt}
\end{figure*}
These sources of hazards---starting safety research too late, novel ML system complexities, and competitive pressure---may result in deep design flaws. However, a strong safety research community can drive down these risks.
Working on safety proactively builds more safety into systems during the critical early design window.
This could help reduce the cost of building safe systems and reduce the pressure on companies to take shortcuts on safety.
If the safety research community grows, it can help handle the spreading multitude of hazards that continue to emerge as ML systems become more complex.
Regulators can also prescribe higher, more actionable, and less intrusive standards if the community has created ready-made safety solutions.
When especially severe accidents happen, everyone loses.
Severe accidents can cast a shadow that creates unease and precludes humanity from realizing ML's benefits. Safety engineering for powerful technologies is challenging, as the Chernobyl meltdown, the Three Mile Island accident, and the Space Shuttle Challenger disaster have demonstrated. However, done successfully, work on safety can improve the likelihood that essential technologies operate reliably and benefit humanity.%
\section{Analyzing Risks, Hazards, and Impact}
\subsection{Risk Management Framework}
\begin{table}[H]
\setlength{\tabcolsep}{6pt}
\setlength\extrarowheight{2pt}
\centering
\begin{tabularx}{1.0\textwidth}{*{1}{>{\hsize=0.85\hsize}X} *{1}{>{\hsize=4.8cm}X }
| *{1}{>{\hsize=0.9\hsize}Y} *{1}{>{\hsize=0.95\hsize}Y} *{1}{>{\hsize=0.95\hsize}Y} *{1}{>{\hsize=0.55\hsize}Y}
}
\thead[l]{Area} & \thead[l]{Problem} & \thead{ML\\System\\Risks} & \thead{Operational\\Risks} & \thead{Institutional\\ and Societal\\Risks} & \thead{Future\\Risks} \\ \hline
\parbox[t]{50mm}{\multirow{2}{*}{Robustness}}
& Black Swans and Tail Risks & \checkmark & \checkmark & & \checkmark\\
& Adversarial Robustness & \checkmark & & & \checkmark \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{3}{*}{Monitoring}}
& Anomaly Detection & \checkmark & \checkmark & & \checkmark \\
& Representative Outputs & \checkmark & \checkmark & & \checkmark \\
& Hidden Model Functionality & \checkmark & \checkmark & & \checkmark \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{5}{*}{Alignment}}
& Value Learning & & \checkmark & \checkmark & \checkmark \\
& Translating Values to Action & \checkmark & & & \checkmark \\
& Proxy Gaming & \checkmark & & & \checkmark \\
& Value Clarification & & & \checkmark & \checkmark \\
& Unintended Consequences & \checkmark & \checkmark & \checkmark & \checkmark \\
\Xhline{0.5\arrayrulewidth}
Systemic & ML for Cybersecurity & \checkmark & \checkmark & \checkmark & \checkmark \\
Safety & Informed Decision Making & & \checkmark & \checkmark & \checkmark \\
\bottomrule
\end{tabularx}
\caption{Problems and the risks they directly mitigate. Each checkmark indicates whether a problem directly reduces a risk. Notice that problems affect both near- and long-term risks. %
}
\label{tab:problemsandrisks}
\end{table}
To analyze how ML Safety progress can reduce abstract risks and hazards,\footnote{%
One can think of hazards as factors that have the potential to cause harm. One can think of risk as the hazard's prevalence multiplied by the amount of exposure to the hazard multiplied by the hazard's deleterious effect. For example, a wet floor is a hazard to humans. However, risks from wet floors are lower if floors dry more quickly with a fan (systemic safety). Risks are lower if humans heed wet floor signs and have less exposure to them (monitoring). Risks are also lower for young adults than the elderly, since the elderly are more physically vulnerable (robustness).
In other terms, robustness makes systems less vulnerable to hazards, monitoring reduces exposure to hazards, alignment makes systems inherently less hazardous, and systemic safety reduces systemic hazards.} we identify four dimensions of risk in this section and five hazards in the next section.
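As a rough formalization of the footnote's decomposition (our gloss rather than a standard formula), one can write
\[
\text{Risk} \;\approx\; \sum_{h \in \text{hazards}} \text{Prevalence}(h) \times \text{Exposure}(h) \times \text{Severity}(h),
\]
so that alignment reduces a hazard's prevalence, monitoring reduces exposure to it, robustness reduces its deleterious effect, and systemic safety addresses hazards that span the whole sum.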
The following four risk dimensions are adopted from the Department of Defense's broad risk management framework \cite{DoD}, with its personnel management risks replaced with ML system risks.
\begin{enumerate}
\item \textbf{ML System Risks} -- risks to the ability of a near-term individual ML system to operate reliably.
\item \textbf{Operational Risks} -- risks to the ability of an organization to safely operate an ML system in near-term deployment scenarios.
\item \textbf{Institutional and Societal Risks} -- risks to the ability of global society or institutions that decisively affect ML systems to operate in near-term scenarios in an efficient, informed, and prudent way.
\item \textbf{Future (ML System, Operational, and Institutional) Risks} -- risks to the ability of future ML systems, organizations operating ML systems, and institutions to address mid- to long-term challenges.
\end{enumerate}
In \Cref{tab:problemsandrisks}, we indicate whether one of these risks is reduced by progress on a given ML Safety problem.
Note that all these problems reduce risks to future ML systems, organizations, and institutions. Organizations and institutions will likely become more dependent on ML systems over time, so improvements to Black Swan robustness would, in the future, also help the operations and institutions that depend on ML systems. Since this table is a snapshot of the present, risk profiles will inevitably change.
\subsection{Hazard Management Framework}
\begin{table}[H]
\fontsize{9}{11}\selectfont
\setlength{\tabcolsep}{5pt}
\setlength\extrarowheight{2pt}
\centering
\begin{tabularx}{1.0\textwidth}{*{1}{>{\hsize=0.7\hsize}X} *{1}{>{\hsize=4.3cm}X }
| *{1}{>{\hsize=0.75\hsize}Y} *{1}{>{\hsize=0.75\hsize}Y} *{1}{>{\hsize=0.87\hsize}Y} *{1}{>{\hsize=0.4\hsize}Y} *{1}{>{\hsize=0.87\hsize}Y} }
\thead[l]{Area} & \thead[l]{Problem} & \thead{Known\\Unknowns} & \thead{Unknown\\Unknowns} & \thead{Emergence} & \thead{Long\\Tails} & \thead{Adversaries\\\& Deception} \\ \hline
\parbox[t]{50mm}{\multirow{2}{*}{Robustness}}
& Black Swans and Tail Risks & \textcolor{rightgreen}{\ding{52}} %
& \textcolor{rightgreen}{\ding{52}} & \checkmark & \textcolor{rightgreen}{\ding{52}} & \\
& Adversarial Robustness & \checkmark & & & & \textcolor{rightgreen}{\ding{52}} \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{3}{*}{Monitoring}}
& Anomaly Detection & \checkmark & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} \\
& Representative Outputs & \textcolor{rightgreen}{\ding{52}} & \checkmark & & & \textcolor{rightgreen}{\ding{52}} \\
& Hidden Model Functionality & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & & \textcolor{rightgreen}{\ding{52}} \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{5}{*}{Alignment}}
& Value Learning & \textcolor{rightgreen}{\ding{52}} & & & & \\
& Translating Values to Action & \textcolor{rightgreen}{\ding{52}} & & & \checkmark & \\
& Proxy Gaming & \checkmark & & & & \textcolor{rightgreen}{\ding{52}} \\
& Value Clarification & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & \checkmark & \checkmark & \\
& Unintended Consequences & & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & & \\
\Xhline{0.5\arrayrulewidth}
Systemic & ML for Cybersecurity & \checkmark & \checkmark & & & \textcolor{rightgreen}{\ding{52}} \\
Safety & Informed Decision Making & \textcolor{rightgreen}{\ding{52}} & \textcolor{rightgreen}{\ding{52}} & \checkmark & \checkmark & \\
\bottomrule
\end{tabularx}
\caption{Problems and the hazards they help handle. Checkmarks indicate whether a problem directly reduces vulnerability or exposure to a given hazard, and bold green checkmarks indicate an especially notable reduction.}
\label{tab:problemsandhazards}
\end{table}
We now turn from what is affected by risks to five abstract hazards that create risks.\begin{wrapfigure}{R}{0.23\textwidth}
\vspace{-22pt}
\begin{center}
\includegraphics[width=0.23\textwidth]{figures/cube.pdf}
\end{center}
\caption{A simplified model of interconnected factors for ML Safety.}
\label{fig:cube}
\vspace{-20pt}
\end{wrapfigure}
\begin{enumerate}
\item \textbf{Known Unknowns} -- Identified hazards for which we have imperfect or incomplete knowledge; we know these hazards exist, but some of their aspects remain unknown. %
\item \textbf{Unknown Unknowns} -- Hazards that have not been identified and whose properties are therefore also unknown. %
\item \textbf{Emergence} -- A hazard that comes into being as the system increases in size or as its parts are combined. Such hazards exist neither in smaller versions of the system nor in its constituent parts.
\item \textbf{Long Tails} -- Hazards that can be understood as unusual or extreme events from a long-tailed distribution.
\item \textbf{Adversaries \& Deception} -- Hazards from a person, system, or force that aims to attack, subvert, or deceive.
\end{enumerate}
This list does not enumerate all possible hazards.
For example, the problems in Systemic Safety help with turbulence hazards. Furthermore, feedback loops, which can create long tails, could become a more prominent hazard in the future when ML systems are integrated into more aspects of our lives.
The five hazards have some overlap. For instance, when something novel emerges, it is an unknown unknown; once it is detected, it can become a known unknown. Separately, long tail events are often but not necessarily unknown unknowns: the 1987 stock market crash was a long tail event, but it was a known unknown to a prescient few and an unknown unknown to nearly everyone else. Emergent hazards sometimes co-occur with long tail events, and an adversarial attack can cause long tail events.
In \Cref{tab:problemsandhazards}, we indicate whether an ML Safety problem reduces vulnerability or exposure to a given hazard. As with \Cref{tab:problemsandrisks}, the table is a snapshot of the present. For example, future adversaries could create novel unusual events or strike during tail events, so Black Swan robustness could improve adversarial robustness.
With risks, hazards, and goals now all explicated, we depict their interconnectedness in \Cref{fig:cube}.
\newpage
\subsection{Prioritization and Strategy for Maximizing Impact}
\begin{table}[ht]
\setlength\extrarowheight{2pt}
\centering
\begin{tabularx}{\textwidth}{*{1}{>{\hsize=1\hsize}X} *{1}{>{\hsize=5.2cm}X }
| *{1}{>{\hsize=1\hsize}X} *{1}{>{\hsize=1\hsize}X} *{1}{>{\hsize=1\hsize}X}}
Area & \multicolumn{1}{l|}{Problem} &
{Importance} & {Neglectedness} & {Tractability} \\ \hline
\parbox[t]{50mm}{\multirow{2}{*}{Robustness}}
& Black Swans and Tail Risks & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
& Adversarial Robustness & $\bullet$ $\bullet$ & $\bullet$ & $\bullet$ $\bullet$ \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{3}{*}{Monitoring}}
& Anomaly Detection & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ \\
& Representative Outputs & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
& Hidden Model Functionality & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
\Xhline{0.5\arrayrulewidth}
\parbox[t]{50mm}{\multirow{5}{*}{Alignment}}
& Value Learning & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
& Translating Values to Action & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ \\
& Proxy Gaming & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
& Value Clarification & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ \\
& Unintended Consequences & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ \\
\Xhline{0.5\arrayrulewidth}
Systemic & ML for Cybersecurity & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ & $\bullet$ $\bullet$ $\bullet$ \\
Safety & Informed Decision Making & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ & $\bullet$ $\bullet$ \\
\bottomrule
\end{tabularx}
\caption{Problems and three factors that influence expected marginal impact.}
\label{tab:intframework}
\end{table}
We presented several problems, but new researchers may be able to make a larger impact on some problems than others. Some problems may be important, but if they are extremely popular, the risk of scooping increases, as does the risk of researchers stepping on each other's toes. Likewise, some problems may be important and even decisive for safety if solved, yet simply infeasible at present. Consequently, we should consider the importance, neglectedness, and tractability of problems.
\begin{enumerate}
\item \textbf{Importance} -- How much potential risk does substantial progress on this problem reduce?
\begin{enumerate}[leftmargin=2\parindent,align=left,labelwidth=\parindent,labelsep=10pt]
\item[$\bullet$\phantom{ $\bullet$ $\bullet$}] Progress on this problem reduces risks of catastrophes.
\item[$\bullet$ $\bullet$\phantom{ $\bullet$}] Progress on this problem directly reduces risks from potential permanent catastrophes.
\item[$\bullet$ $\bullet$ $\bullet$] Progress on this problem directly reduces risks from more plausible permanent catastrophes.
\end{enumerate}
\item \textbf{Neglectedness} -- How much research is being done on the problem?
\begin{enumerate}[leftmargin=2\parindent,align=left,labelwidth=\parindent,labelsep=10pt]
\item[$\bullet$\phantom{ $\bullet$ $\bullet$}] The problem is one of the top ten most researched topics at leading conferences.
\item[$\bullet$ $\bullet$\phantom{ $\bullet$}] The problem receives some attention at leading ML conferences, or adjacent problems are hardly neglected.
\item[$\bullet$ $\bullet$ $\bullet$] The problem has few related papers consistently published at leading ML conferences.
\end{enumerate}
\item \textbf{Tractability} -- How much progress can we expect on the problem?
\begin{enumerate}[leftmargin=2\parindent,align=left,labelwidth=\parindent,labelsep=10pt]
\item[$\bullet$\phantom{ $\bullet$ $\bullet$}] We cannot currently expect large research efforts to be highly fruitful, possibly due to conceptual bottlenecks, or productive work on the problem likely requires far more advanced ML capabilities.
\item[$\bullet$ $\bullet$\phantom{ $\bullet$}] We expect to reliably and continually make progress on the problem.
\item[$\bullet$ $\bullet$ $\bullet$] A large research effort would be highly fruitful and there is obvious low-hanging fruit.
\end{enumerate}
\end{enumerate}
A snapshot of each problem and its current importance, neglectedness, and tractability is in \Cref{tab:intframework}. Note this only provides a rough sketch, and it has limitations. For example, a problem that is hardly neglected overall may still have neglected aspects; while adversarial robustness is less neglected than other safety problems, robustness to unforeseen adversaries is fairly neglected. Moreover, working on popular shovel-ready problems may be more useful for newcomers compared to working on problems where conceptual bottlenecks persist. Further, this gives a rough sense of marginal impact, but the entire community should not choose to act in the same way at the margin, or else neglected problems will suddenly become overcrowded.
These three factors are merely prioritization factors and do not define a strategy. Rather, a potential strategy for ML Safety is as follows.
\begin{enumerate}
\item Force Management: Cultivate and maintain a force of ready personnel to implement safety measures into advanced ML systems and operate ML systems safely.
\item Research: Build and maintain a community to conduct safety research, including the identification of potential future hazards, clarification of safety goals, reduction of the costs to adopt safety methods, research on how to incorporate safety methods into existing ML systems, and so on.
\item Protocols: Establish and incentivize adherence to protocols, precedents, standards, and research expectations such as red teaming, all for the safe development and deployment of ML systems.
\item Partnerships: Build and maintain safety-focused alliances and partnerships among academe, industry, and government.
\end{enumerate}
In closing, throughout ML Safety's development we have seen numerous proposed strategies, hazards, risks, scenarios, and problems. In safety, some previously proposed problems have been discarded, and some new problems have emerged, just as in the broader ML community. Since no individual knows what lies ahead, safety analysis and strategy will need to evolve and adapt beyond this document. Regardless of which particular safety problems turn out to be the most or least essential, the success of safety's evolution and adaptation rests on having a large and capable research community.
\end{appendices}
\section{Robustness}\label{sec:robustness}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{figures/robustness.pdf}
\caption{Robustness research aims to build systems that endure extreme, unusual, or adversarial events.%
}
\label{fig:robustness}
\end{figure}
\subsection{Black Swan and Tail Risk Robustness}\label{sec:longtail}
\paragraph{Motivation.} To operate in open-world high-stakes environments, machine learning systems will need to endure unusual events and tail risks. However, current ML systems are often brittle in the face of real-world complexity and unknown unknowns. In the 2010 Flash Crash \cite{Kirilenko2011TheFC}, automated trading systems unexpectedly overreacted to market aberrations, created a feedback loop, and wiped away a trillion dollars of stock value in a matter of minutes. This demonstrates that computer systems can both create and succumb to long tail events.
Long tails continue to thwart modern ML systems such as autonomous vehicles. This is because some of the most basic concepts in the real world are long tailed, such as stop signs, where a model error can directly cause a crash and loss of life. Stop signs may be tilted, occluded, or represented on an LED matrix; sometimes stop signs should be disregarded, for example when held upside down by a traffic officer, on open gates, on a shirt, on the side of a bus, on elevated toll booth arms, and so on. Although these long tail events are rare, they are
\begin{wrapfigure}{r}[0.01\textwidth]{.38\textwidth}%
\vspace{-8pt}%
``Things that have never happened before happen all the time.''\hfill\emph{Scott D.\ Sagan}%
\vspace{-10pt}%
\end{wrapfigure}
extremely impactful \cite{Taleb2020StatisticalCO} and can cause ML systems to crash. Leveraging existing massive datasets is not enough to ensure robustness, as models trained with Internet data and petabytes of task-specific driving data still are not robust to long tail road scenarios \cite{teslaaiday}. This decades-long challenge is only a preview of the more difficult problem of handling tail events in environments that are beyond a road's complexity.\looseness=-1 %
Long-tail robustness is unusually challenging today and may become even more so. It also requires more than human-level robustness; the 2008 financial crisis and COVID-19 have shown that even groups of humans have great difficulty mitigating and overcoming rare but extraordinarily impactful long tail events. Future ML systems will operate in environments that are broader, larger-scale, and more highly connected with more feedback loops, paving the way to more extreme events \cite{Mitzenmacher2003ABH} than those seen today.\looseness=-1
While there are incentives to make systems partly robust, systems tend not to be incentivized nor designed for long tail events outside prior experience, even though Black Swan events are inevitable
\cite{usplanning}. To reduce the chance that ML systems will fall apart in settings dominated by rare events, systems must be \emph{unusually} robust.\looseness=-1
\paragraph{Directions.} In addition to existing robustness benchmarks \cite{hendrycks2019robustness,Koh2021WILDSAB,hendrycks2021many}, researchers could create more environments and benchmarks to stress-test systems, find their breaking points, and determine whether they will function appropriately in potential future scenarios.
These benchmarks could include new, unusual, and extreme distribution shifts and long tail events, especially ones that are challenging even for humans. Following precedents from industry \cite{teslaaiday,waymo}, benchmarks could include artificial simulated data that capture structural properties of real long tail events. Additionally, benchmarks should focus %
on ``wild'' distribution shifts that cause large accuracy drops over ``mild'' shifts \cite{Mandelbrot2004TheMO}.\looseness=-1 %
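To make such evaluations concrete, the following is a minimal sketch of measuring an accuracy drop under shift, in the style of corruption benchmarks \cite{hendrycks2019robustness}; the data loaders and accuracy helper are illustrative assumptions.
\begin{verbatim}
import torch

def accuracy(model, loader):
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=-1) == y).sum().item()
            total += y.numel()
    return correct / total

def robustness_gap(model, clean_loader, shifted_loader):
    """Accuracy drop under shift; `wild' shifts yield far larger gaps."""
    return accuracy(model, clean_loader) - accuracy(model, shifted_loader)
\end{verbatim}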
Robustness work could also move beyond classification and consider \emph{competent errors} where agents misgeneralize and execute wrong routines, such as an automated digital assistant knowing how to use a credit card to book flights, but choosing the wrong destination \cite{Koch2021ObjectiveRI,Hubinger2019RisksFL}.
Interactive environments \cite{Cobbe2019QuantifyingGI} could simulate qualitatively
distinct random shocks that irreversibly shape the environment's future evolution. Researchers could also create environments where ML system outputs affect their environment and create feedback loops.%
Using such benchmarks and environments, researchers could improve ML systems to withstand Black Swans \cite{Taleb2007TheBS, Taleb2020StatisticalCO}, long tails, and structurally novel events. %
The performance of many ML systems is currently largely shaped by data and parameter count, so future research could work on creating highly unusual but helpful data sources.
The more experience a system has with unusual future situations, even ones not well represented in typical training data, the more robust it can be.
New data augmentation techniques \cite{hendrycks2021pixmix,hendrycks2020augmix} and other sources of simulated data could create inputs that are not easy or possible to create naturally. %
Since change is a part of all complex systems, and since not everything can be anticipated during training, models will also need to adapt to an evolving world and improve from novel experiences \cite{Mummadi2021TestTimeAT,Wang2021TentFT,Taleb2012AntifragileTT}. %
Future adaptation methods could improve a system's ability to adapt quickly.
Other work could defend adaptive systems against poisoned data encountered during deployment \cite{tay}.
\subsection{Adversarial Robustness}\label{sec:advex}
\paragraph{Motivation.} We now turn from %
unpredictable accidents to carefully crafted and deceptive threats.
Adversaries can easily manipulate vulnerabilities in ML systems and cause them to make mistakes \cite{biggio2013evasion,szegedy2013intriguing}. %
For example, systems may use neural networks to detect intruders \cite{Ahmad2021NetworkID} or malware \cite{Suciu2019ExploringAE}, but if adversaries can modify their behavior to deceive and bypass detectors, the systems will fail.
While defending against adversaries might seem to be a straightforward problem,
defenses are currently struggling to keep pace with attacks \cite{Athalye2018ObfuscatedGG,Tramr2020OnAA}, and much research is needed to discover how to fix these longstanding weaknesses.
\paragraph{Directions.} We encourage research on adversarial robustness to focus on broader robustness definitions.
Current research largely focuses on the problem of ``$\ell_p$ adversarial robustness,'' \cite{Madry2018TowardsDL, carlini2017towards} where an adversary attempts to induce a misclassification but can only perturb inputs subject to a small $p$-norm constraint.
While research on simplified problems helps drive progress, researchers may wish to avoid focusing too heavily on any one particular simplification.
To study adversarial robustness more broadly \cite{Gilmer2018MotivatingTR}, researchers could consider attacks that are perceptible \cite{Poursaeed2021RobustnessAG} or whose specifications are not known beforehand \cite{Kang2019TestingRA,Laidlaw2021PerceptualAR}.
For instance, there is no reason that an adversarial malware sample would have to be imperceptibly similar to some other piece of benign software---as long as the detector is evaded, the attack has succeeded \cite{pierazzi2020intriguing}. Likewise, copyright detection systems cannot reasonably assume that attackers will only construct small $\ell_p$ perturbations to bypass the system, as attackers may rotate the adversarially modified image \cite{engstrom2018rotation} or apply otherwise novel distortions \cite{Gilmer2018MotivatingTR} to the image.
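To make the $\ell_p$ threat model concrete, the following minimal Python sketch mounts a one-step $\ell_\infty$-bounded (FGSM-style) evasion attack on a toy logistic classifier; the model, data, and constants are all illustrative rather than drawn from any particular system.
\begin{verbatim}
import numpy as np

# Minimal sketch of an l_infinity-bounded, one-step (FGSM-style)
# evasion attack on a toy logistic classifier.  All names and
# values here are illustrative, not taken from any real system.
rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1      # fixed "model" weights
x, y = rng.normal(size=20), 1        # one input with label y in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1                            # l_infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)    # stays inside the eps-ball by construction

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
\end{verbatim}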
While many effective attacks assume full access to a neural network, sometimes assuming limited access is more realistic.
Here, adversaries can feed in examples to an ML system and receive the system's outputs, but they do not have access to the intermediate ML system computation \cite{brendel2017decision}.
If a blackbox ML system is not publicly released and can only be queried, it may be possible to practically defend the system against zero-query attacks \cite{Tramr2018EnsembleAT} or limited-query attacks \cite{Chen2019StatefulDO}.
On the defense side, further underexplored assumptions are that systems have multiple sensors or that systems can adapt.
Real world systems, such as autonomous vehicles, have multiple cameras. Researchers could exploit information from these different sensors and find inconsistencies in adversarial images in order to constrain and box in adversaries \cite{Xiao2018CharacterizingAE}. Additionally, while existing ML defenses are typically static, future defenses could evolve during test time to combat adaptive adversaries \cite{Wang2021FightingGW}.\looseness=-1
Future research could do more work toward creating models with adversarially robust representations \cite{Croce2020RobustBenchAS}.
Researchers could enhance data for adversarial robustness by simulating more data \cite{Zhu2021TowardsUT}, augmenting data \cite{Rebuffi2021FixingDA}, repurposing existing real data \cite{Carmon2019UnlabeledDI,Hendrycks2019UsingPC}, and extracting more information from available data \cite{Hendrycks2019UsingSL}. Others could create architectures that are more adversarially robust \cite{Xie2020SmoothAT}. Others could improve adversarial training methods \cite{Wu2020AdversarialWP} and find better losses \cite{Zhang2019TheoreticallyPT,Tack2021ConsistencyRF}.
Researchers could improve adversarial robustness certifications \cite{Raghunathan2018CertifiedDA,lecuyer2019certified,Cohen2019CertifiedAR}, so that models have verifiable adversarial robustness.
It may also be possible to unify the areas of adversarial robustness and robustness to long-tail and unusual events. %
By building systems to be robust to adversarial worst-case environments, they may also be made more robust to random worst-case environments \cite{Anderson1995ProgrammingSC,NAE}.
To study adversarial robustness on unusual inputs, researchers could also try detecting adversarial anomalies \cite{Bitterwolf2020CertifiablyAR,NAE} or assigning them low confidence \cite{Stutz2020ConfidenceCalibratedAT}.
\section{Introduction}\label{sec:introduction}
\subsection{Background}\label{subsec:background}
Prediction is a fundamental part of statistical inference.
Prediction intervals are important for assessing the uncertainty of future random variables and have applications in business, engineering, science, and other fields.
For example, manufacturers require prediction intervals for the number of warranty claims to assure that there are sufficient cash reserves and spare parts to make repairs; engineers use historical data to compute prediction intervals for the remaining lifetime of systems.
Suppose that the available data are denoted by $\boldsymbol{X}_n$, and that we want to predict a random variable denoted by $Y$ (also known as the predictand).
We use a parametric distribution to model the data and the predictand.
Specifically, we consider the case where $\boldsymbol{X}_n=\{X_1,\dots,X_n\}$ corresponds to a sample of $n$ independent and identically distributed random variables with common density/mass function $f(\cdot;\boldsymbol{\theta})$.
The density $f(\cdot;\boldsymbol{\theta})$ depends on a vector $\boldsymbol{\theta}$ of unknown parameters.
The predictand $Y$ is a scalar random variable with conditional density $g(\cdot|\boldsymbol{x}_n;\boldsymbol{\theta})$, where $\boldsymbol{X}_n=\boldsymbol{x}_n$ is the observed sample.
If $Y$ is independent of $\boldsymbol{X}_n$, then $g(\cdot|\boldsymbol{x}_n;\boldsymbol{\theta})=g(\cdot;\boldsymbol{\theta})$; further if $Y$ has the same distribution as the data, then $g(\cdot;\boldsymbol{\theta})=f(\cdot;\boldsymbol{\theta})$.
The goal is to obtain information about the unknown parameters $\boldsymbol{\theta}$ from the data $\boldsymbol{X}_n$ to construct a prediction interval for the predictand $Y$.
We use $\text{PI}_{1-\alpha}(\boldsymbol{X}_n)$ to denote a prediction interval for $Y$ with a nominal confidence level of $1-\alpha$.
Letting $\Pr_{\boldsymbol{\theta}}(\cdot|\boldsymbol{X}_n)$ be conditional probability given $\boldsymbol{X}_n$, the conditional coverage probability of $\text{PI}_{1-\alpha}(\boldsymbol{X}_n)$ is
\[
\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)|\boldsymbol{X}_n]=\Pr{}_{\!\boldsymbol{\theta}}\left[Y\in\text{PI}_{1-\alpha}(\boldsymbol{X}_n)|\boldsymbol{X}_n\right].
\]
We can obtain the unconditional coverage probability by taking the expectation of the conditional coverage probability $\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)|\boldsymbol{X}_n]$ with respect to $\boldsymbol{X}_n$,
\[
\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)]=\Pr{}_{\!\boldsymbol{\theta}}[Y\in\text{PI}_{1-\alpha}(\boldsymbol{X}_n)]=\text{E}_{\boldsymbol{\theta}}\left\{\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)|\boldsymbol{X}_n]\right\}.
\]
Unlike the conditional coverage probability, which is a random variable, the unconditional coverage probability is a fixed property of a prediction interval procedure.
Hence, the unconditional coverage probability is used to evaluate a prediction interval method, and unless stated otherwise the term ``coverage probability'' refers to the unconditional coverage probability.
If $\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)]=1-\alpha$, we say the prediction method is exact; if $\text{CP}[\text{PI}_{1-\alpha}(\boldsymbol{X}_n)]\to1-\alpha$ as $n\to\infty$, we say the prediction method is asymptotically correct.
\subsection{Related Literature}\label{subsec:literature}
Some prediction interval methods are based on a pivotal or an approximate pivotal quantity.
The main idea is to find a function of $\boldsymbol{X}_n$ and $Y$, say $q(\boldsymbol{X}_n, Y)$, that has a distribution that is free of parameters $\boldsymbol{\theta}$ (or approximately so for large samples).
Then, the distribution of $q(\boldsymbol{X}_n, Y)$ can be used to construct a $1-\alpha$ prediction region for $Y$ as
\begin{equation*}
\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)=\left\{y:q(\boldsymbol{x}_n, y)\leq q_{n, 1-\alpha}\right\},
\end{equation*}
where $\boldsymbol{X}_n=\boldsymbol{x}_n$ denotes the observed value of the sample and $q_{n,1-\alpha}$ is the $1-\alpha$ quantile of $q(\boldsymbol{X}_n, Y)$ (i.e., $\Pr{}_{\!\boldsymbol{\theta}}\left[q(\boldsymbol{X}_n, Y)\leq q_{n,1-\alpha}\right]=1-\alpha$).
If $q(\boldsymbol{x}_n, y)$ is a monotone function of $y$, then
$\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)$ provides a one-sided prediction
bound; if $-q(\boldsymbol{x}_n, y)$ is a unimodal function of $y$,
then $\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)$ becomes a (two-sided) prediction interval.
Relevant references of this pivotal method include \citet{cox1975}, \citet{atwood1984},
\citet{beran1990}, \citet{barncox1996}, \citet{nelson2000},
\citet{lawless2005}, and \citet{fonseca2012}.
One implementation of the pivotal method is through a hypothesis test.
\citet{cox1975} and \citet[p.~243]{cox1979theoretical} suggested constructing prediction intervals by inverting a hypothesis test, and gave examples with distributions having simple test statistics.
Suppose the data $\boldsymbol{X}_n$ and the predictand $Y$ have densities $f(\boldsymbol{x}_n;\boldsymbol{\theta})$ and $g(y;\boldsymbol{\theta}^\dagger)$ governed by parameters $\boldsymbol{\theta}$ and $\boldsymbol{\theta}^\dagger$, respectively, and that a hypothesis
test can be found for the null hypothesis $\boldsymbol{\theta}=\boldsymbol{\theta}^\dagger$.
Let $w_\alpha$ be a critical region of size $\alpha$ for the test of $H_0:\boldsymbol{\theta}=\boldsymbol{\theta}^\dagger$, so that we have the probability statement
\[
\Pr\left[(\boldsymbol{X}_n,Y)\in w_\alpha\right]=\alpha.
\]
Then for $\boldsymbol{X}_n=\boldsymbol{x}_n$, a $1-\alpha$ prediction
region for $Y$ can be defined as
\begin{equation}\label{eq:hypothesis-test-method}
\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)=\{y:(\boldsymbol{x}_n, y)\not\in w_\alpha\}.
\end{equation}
Thus, for the prediction region defined in (\ref{eq:hypothesis-test-method}), we have, for all $\boldsymbol{\theta}$,
\begin{equation*}
\Pr{}_{\!\boldsymbol{\theta}}\left[Y\in\mathcal{P}_{1-\alpha}(\boldsymbol{X}_n)\right]=1-\alpha,
\end{equation*}
so that (\ref{eq:hypothesis-test-method}) defines an exact prediction procedure for $Y$.
In (\ref{eq:hypothesis-test-method}), one could also potentially use a critical region $w_{\alpha}\equiv w_{\alpha,n}$ having size $\alpha$ asymptotically (i.e., $\lim_{n\to \infty }\Pr_{\boldsymbol{\theta}}\left[(\boldsymbol{X}_n,Y) \in w_{\alpha}\right]=\alpha$); then (\ref{eq:hypothesis-test-method}) becomes an asymptotically correct $1-\alpha$ prediction region for $Y$.
\citet[p.~245]{cox1979theoretical} illustrated this test-based prediction region (\ref{eq:hypothesis-test-method}) with the normal distribution.
Suppose $\boldsymbol{X}_n$ is an independent random sample from
$\text{Norm}(\mu,\sigma)$ and $Y$ is a further independent random
variable with the same distribution.
By assuming that $\boldsymbol{X}_n\sim\text{Norm}(\mu_1,\sigma)$
and $Y\sim\text{Norm}(\mu_2,\sigma)$, a test statistic for the null
hypothesis $H_0:\mu_1=\mu_2$ is
\begin{equation}\label{eq:normal-invert-t-test}
t=\frac{\bar{X}_n-Y}{s\sqrt{(n+1)/n}}\sim t_{n-1},
\end{equation}
where $s^2=\sum_{i=1}^{n}(X_i-\bar{X}_n)^2/(n-1)$ and $t_{n-1}$ denotes a $t$-random variable with $n-1$ degrees of freedom.
This corresponds to the form of a two-sample $t$-test that is often used for comparison of means.
Then a $1-\alpha$ equal-tailed (i.e., equal probability of being outside either endpoint) prediction interval based on inverting the $t$-test is
\begin{equation*}
\text{PI}_{1-\alpha}(\boldsymbol{X}_n)=\left[\bar{X}_n-t_{n-1,1-\alpha/2}s\sqrt{(n+1)/n},\,\bar{X}_n+t_{n-1,1-\alpha/2}s\sqrt{(n+1)/n}\right],
\end{equation*}
where $t_{n-1,\alpha}$ denotes the $\alpha$ quantile of a $t_{n-1}$ distribution.
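For concreteness, the interval above is straightforward to compute; the following is a minimal Python sketch (assuming NumPy and SciPy are available), with simulated data for illustration.
\begin{verbatim}
import numpy as np
from scipy import stats

# Sketch: the equal-tailed t-based prediction interval above for a
# future normal observation.  Data here are simulated for illustration.
def normal_prediction_interval(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s * np.sqrt((n + 1) / n)
    return xbar - half, xbar + half

rng = np.random.default_rng(1)
print(normal_prediction_interval(rng.normal(10, 2, size=25), alpha=0.05))
\end{verbatim}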
As a contrast to the pivotal prediction method, \citet{bjo1990} reviewed an alternative prediction method called the predictive likelihood method.
The main idea of the predictive likelihood method is to obtain an approximate density for $Y$ by eliminating the unknown parameters in the joint likelihood (or density) of the data and the predictand $(\boldsymbol{X}_n,Y)$.
The resulting predictive likelihood then provides a type of distribution for computing a prediction interval for $Y$ given $\boldsymbol{X}_n=\boldsymbol{x}_n$.
For example, a Bayesian predictive distribution for $Y$ involves steps of integrating out the unknown parameters of a posterior distribution based on the joint likelihood of the data and the predictand.
\subsection{Motivations}\label{subsec:motivations}
As reviewed in Section~\ref{subsec:literature}, prediction intervals can be constructed by inverting hypothesis tests for parameters.
However, the construction of such tests often needs to be tailored to each problem, where the determination of an appropriate hypothesis test is an essential step in the construction of such prediction intervals.
For example, in the normal distribution example, we obtain the prediction interval by inverting a $t$-test.
However, in many cases, there is no well-known or clear hypothesis test, making it difficult to implement a test-based method for obtaining prediction intervals.
As a remedy, the purpose of this paper is to propose a general prediction method based on inverting a type of LR test.
The advantage of the LR approach is that the principle applies broadly across settings where prediction intervals are needed, particularly in cases where an appropriate test statistic or pivotal quantity is not available or obvious.
In addition, we will demonstrate that this general method has desirable statistical properties.
\subsection{Overview}\label{subsec:overview}
This paper is organized as follows.
Sections~\ref{sec:method}--\ref{sec-choose-full} focus on predictions with continuous data.
Section~\ref{sec:method} describes how to construct a prediction interval by formulating a certain LR statistic.
Section~\ref{sec:pivotal-quantity} discusses situations where the proposed prediction method provides exact coverage, while Section~\ref{asymptotic-results} shows that, more broadly, the method is generally (under weak conditions) guaranteed to provide asymptotically correct coverage (i.e., improving coverage properties with increasing sample sizes).
Section~\ref{sec-choose-full} discusses some further details about constructing a suitable LR test.
Section~\ref{sec:discrete-distribution} focuses on applying the proposed method to prediction problems involving discrete data.
Section~\ref{sec-relationship-pred-dist} describes how the proposed LR prediction method compares and differs from predictions based on predictive likelihood methods (mentioned in Section~\ref{subsec:literature}).
Section~\ref{sec:conclusion} concludes by describing potential areas for future research.
\section{A General Method}\label{sec:method}
In Section~\ref{subsec:lrt}, we show how to construct general (not necessarily equal-tailed) two-sided prediction intervals with an LR statistic.
Section~\ref{subsec:one-sided} describes how to construct one-sided prediction intervals by using an LR and how this method can also be applied to calibrate equal-tailed two-sided prediction intervals.
In this section, we assume that both the data $\boldsymbol{X}_n$ and the predictand $Y$ are continuous.
For clarity in the exposition and ease of presentation, we further assume that $Y\sim g(\cdot;\boldsymbol{\theta})$ is independent of $\boldsymbol{X}_n\sim f(\cdot;\boldsymbol{\theta})$ and has the same distribution/density (i.e., $g(\cdot;\boldsymbol{\theta})=f(\cdot;\boldsymbol{\theta})$).
\subsection{Prediction Intervals Based on an LR Test}\label{subsec:lrt}
\citet{nelson2000} proposed a prediction interval method for predicting the number of failures
in a future inspection of a sample of units, based on a likelihood
ratio test in combination with Wilks' theorem.
Although \citet{nelson2000} only considered a specific prediction problem, we extend the principle of LR-based prediction interval statistics to a more general setting.
The approach may also be viewed as a generalization of test-based prediction intervals explained in \citet{cox1979theoretical}, using an LR in the role of the test statistic.
\subsubsection{Reduced and Full Models}
Recalling its traditional use for parametric inference, the LR test provides a general approach for comparing two nested models for data (or parameter configurations) based on the observed data $\boldsymbol{X}_n=\boldsymbol{x}_n$.
The null hypothesis about the parameters corresponds to a reduced model, which is nested within a larger full model (i.e., its parameter space is a subset of the full model's parameter space).
Let $\mathcal{L}_n(\boldsymbol{\theta};\boldsymbol{x}_n)$ be the likelihood function for the full model having a parameter space $\Theta$ and suppose that the reduced model (corresponding to the null hypothesis) has a constrained parameter space $\Theta_0\subset\Theta$.
The LR for testing the null hypothesis $H_0:\boldsymbol{\theta}\in\Theta_0$ is then
\begin{equation*}
\Lambda_n = \frac{\sup_{\boldsymbol{\theta}\in\Theta_0}\mathcal{L}_n(\boldsymbol{\theta};\boldsymbol{x}_n)}{\sup_{\boldsymbol{\theta}\in\Theta}\mathcal{L}_n(\boldsymbol{\theta};\boldsymbol{x}_n)},
\end{equation*}
and the log-LR statistic is $-2\log\Lambda_n$.
Generally, the distribution of $\Lambda_n$ or $-2\log\Lambda_{n}$ needs to be determined, either analytically, through approximate large-sample distributional results, or through Monte Carlo simulation.
Then, such a distribution can be used
to determine the critical region for the LR test of the null hypothesis or relatedly a confidence interval/region for parameters.
For example, if the reduced model
is true, and if Wilks' theorem (cf.~\citealt{wilks1938}) applies (as it does under particular regularity conditions), the asymptotic distribution of the log-LR statistic is given by
$-2\log\Lambda_n\xrightarrow{d}\chi^2_d$ as the sample size $n\to\infty$,
where $\chi_d^2$ denotes a chi-square random variable with $d$ degrees of freedom and where $d$ is the difference between the dimensions of $\Theta$ and $\Theta_0$.
The latter large sample chi-square distribution approximation is often used to calibrate the critical region of an LR test.
As we describe next, a log-LR statistic for model parameters can be modified to provide a log-LR statistic for a future random variable $Y$ in a general manner, which in turn can be used to construct prediction intervals for $Y$.
To outline the approach, suppose the available $\boldsymbol{X}_n$ represent an iid sample with common density $f(\cdot;\boldsymbol{\theta})$ and $Y$ denotes a future random variable with the same density $f(\cdot;\boldsymbol{\theta})$ ($Y$ is again independent of $\boldsymbol X_n$ here).
A log-LR statistic for $Y$ can then be broadly framed as a type of parameter comparison involving full versus reduced models for the joint distribution of $(\boldsymbol X_n,Y)$.
While $f(\cdot;\boldsymbol{\theta})$ denotes the true marginal density for both the data $\boldsymbol{X}_n$ and the predictand $Y$ (with parameter space $\boldsymbol{\theta} \in \Theta$), the main idea is to define a hypothesis test regarding an enlarged (and fictional) parameter space $\Theta_E \equiv \{(\boldsymbol{\theta},\boldsymbol{\theta}_y)\}$, in which the data $\boldsymbol{X}_n$ have a common density $f(\cdot;\boldsymbol{\theta})$, the predictand $Y$ has a density $f(\cdot;\boldsymbol{\theta}_y)$, and $\boldsymbol{\theta}$ and $\boldsymbol{\theta}_y$ differ in exactly one pre-selected component when $(\boldsymbol{\theta},\boldsymbol{\theta}_y)\in\Theta_E$. For example, supposing $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_k) \in \Theta$ consists of $k \geq 1$ components, we choose exactly one component, say $\theta_\ell$, from among $\{\theta_1,\ldots,\theta_k\}$ to vary, and define $\boldsymbol{\theta}_y \in \Theta$ to match $\boldsymbol{\theta}\in \Theta$ in every component except the $\ell$th, which is $\theta_\ell$ in $\boldsymbol{\theta}$ but, say, $\theta_{\ell,y}$ in $\boldsymbol{\theta}_{y}$.
This framework sets up a comparison of a contrived full model ($\boldsymbol{X}_n\sim f(\cdot;\boldsymbol{\theta})$ marginally and $Y\sim f(\cdot;\boldsymbol{\theta}_y)$ for $(\boldsymbol{\theta}, \boldsymbol{\theta}_y)\in\Theta_E$) versus a reduced model ($\boldsymbol{\theta}_y=\boldsymbol{\theta}\in\Theta$), where the parameter space of the reduced model is nested within $\Theta_E$ with the constraint $\boldsymbol{\theta}=\boldsymbol{\theta}_{y}$.
The purpose of this contrived LR test is not to conduct hypothesis tests of parameters---as we already know that the reduced model is a true model---but to construct a predictive root (i.e., a test statistic containing $\boldsymbol{X}_n$ and $Y$), which will be used to predict $Y$.
In particular, the extra degree of freedom between the parameter spaces of the full model and the reduced model is used to identify the predictand $Y$ when formulating a log-LR statistic for $\boldsymbol{\theta}=\boldsymbol{\theta}_y$.
For example, in the case of data from a normal distribution $\boldsymbol{X}_n,Y\sim\text{Norm}(\mu,\sigma)$, we may define a full model as $\boldsymbol{X}_n\sim\text{Norm}(\mu,\sigma)$ and $Y\sim \text{Norm}(\mu_y,\sigma)$ for parameters $\boldsymbol{\theta}=(\mu,\sigma)$ and $\boldsymbol{\theta}_y=(\mu_y,\sigma)\in\mathbb{R}\times(0,\infty)$, where the reduced model $\boldsymbol{\theta}_y=\boldsymbol{\theta}$ corresponds to the true underlying model $\boldsymbol{X}_n,Y\sim\text{Norm}(\mu,\sigma)$ in prediction.
Let the joint likelihood function for $(\boldsymbol{X}_n,Y)$ be
\[
\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\theta}_y;\boldsymbol{x}_n,y)=f(y;\boldsymbol{\theta}_y)\prod_{i=1}^{n}f(x_i;\boldsymbol{\theta})
\]
under the full model and suppose the maximum likelihood (ML) estimators of $(\boldsymbol{\theta},\boldsymbol{\theta}_y)$ exist under both the reduced ($\boldsymbol{\theta}=\boldsymbol{\theta}_y$) and the full ($(\boldsymbol{\theta},\boldsymbol{\theta}_y)\in\Theta_E$) models.
Then the joint LR statistic based on $(\boldsymbol{X}_n,Y)$ is
\begin{equation}\label{eq:likelihood-ratio}
\Lambda_n(\boldsymbol{X}_n,Y)=\frac{\sup_{\boldsymbol{\theta}=\boldsymbol{\theta}_y\in\Theta}\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\theta}_y;\boldsymbol{X}_n,Y)}{\sup_{(\boldsymbol{\theta},\boldsymbol{\theta}_y)\in\Theta_E}\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\theta}_y;\boldsymbol{X}_n, Y)}
\end{equation}
for the test of $\boldsymbol{\theta}=\boldsymbol{\theta}_y$.
The LR statistic in (\ref{eq:likelihood-ratio}) and its distribution can then be applied to obtain prediction intervals for the future predictand $Y$ based on observed data values $\boldsymbol{X}_n=\boldsymbol{x}_n$.
Note that the construction (\ref{eq:likelihood-ratio}) depends on which parameter from $\boldsymbol{\theta}$ is selected to vary in defining $\boldsymbol{\theta}_y$.
Typically, this selected parameter will be a mean-type parameter for purposes of identifying $Y$ in the LR statistic (\ref{eq:likelihood-ratio}); more details about this selection are given in Section~\ref{sec-choose-full}.
\subsubsection{Determining the Distribution of the LR}
\label{sec-determine}
The next step is to determine a critical region as in (\ref{eq:hypothesis-test-method}) so that we can compute the prediction region for $Y$, based on the LR statistic $\Lambda_n(\boldsymbol{X}_n,Y)$ from (\ref{eq:likelihood-ratio}).
This, however, requires the distribution of $\Lambda_n(\boldsymbol{X}_n,Y)$ (or $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$).
There are three potential approaches for determining or approximating the distribution of $-2\log\Lambda_{n}(\boldsymbol{X}_n, Y)$.
The first approach is to obtain the distribution of $-2\log\Lambda_{n}(\boldsymbol{X}_n, Y)$ analytically.
For illustration, consider the situation with an iid sample $\boldsymbol{X}_n\sim\text{Norm}(\theta,\sigma)$ where $\sigma$ is known and the future random variable $Y$ is from the same distribution. Here there is one parameter $\theta\equiv\mu$ where the full model is
$\boldsymbol{X}_n\sim\text{Norm}(\mu,\sigma)$ and $Y\sim\text{Norm}(\mu_y,\sigma)$ for $(\mu,\mu_y)\in\mathbb{R}^2$ in the LR construction of (\ref{eq:likelihood-ratio}); the corresponding log-LR statistic for $Y$ based on $\boldsymbol{X}_n$ is then
\begin{equation*}
-2\log\Lambda_n(\boldsymbol{X}_n, Y)=\frac{n}{n+1}\left(\frac{Y-\bar{X}_n}{\sigma}\right)^2\sim\chi^2_1.
\end{equation*}
Then, a $1-\alpha$ prediction region for $Y$ given $\boldsymbol{X}_n=\boldsymbol{x}_n$ is
\[
\begin{split}
\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)&=\left\{y:-2\log\Lambda_{n}(\boldsymbol{x}_n, y)\leq\chi^2_{1,1-\alpha}\right\}\\
&=\left\{y:\bar{x}_n-\sigma\sqrt{\frac{n+1}{n}\chi^2_{1,1-\alpha}}\leq y\leq\bar{x}_n+\sigma\sqrt{\frac{n+1}{n}\chi^2_{1,1-\alpha}}\right\}\\
&=\left\{y:\bar{x}_n-z_{1-\alpha/2}\sigma\sqrt{\frac{n+1}{n}}\leq y\leq\bar{x}_n+z_{1-\alpha/2}\sigma\sqrt{\frac{n+1}{n}}\right\}
\end{split}
\]
where $\chi_{1,1-\alpha}^2$ is the $1-\alpha$ quantile of $\chi^2_1$ and $z_{1-\alpha/2} = \sqrt{\chi^2_{1,1-\alpha}}$ is the $1-\alpha/2$ quantile of a standard normal variable.
In this example, because $-2\log\Lambda_n(\boldsymbol{X}_n, Y)$ is a unimodal function
of $Y$, the prediction region $\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)$
leads to a prediction interval and the LR prediction method has exact coverage probability because the log-LR statistic is a pivotal quantity (i.e., $\chi_1^2$-distributed).
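As a quick numerical illustration (not part of the original example), the exactness of this region can be checked by Monte Carlo simulation; a Python sketch follows, with all constants chosen for illustration.
\begin{verbatim}
import numpy as np
from scipy import stats

# Sketch: Monte Carlo check that the known-sigma LR prediction region
# above is exact; its empirical coverage should be close to 1 - alpha.
rng = np.random.default_rng(2)
mu, sigma, n, alpha = 5.0, 2.0, 10, 0.1
chi2_q = stats.chi2.ppf(1 - alpha, df=1)

reps, hits = 20000, 0
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    y = rng.normal(mu, sigma)
    lr = (n / (n + 1)) * ((y - x.mean()) / sigma) ** 2  # -2 log Lambda_n
    hits += lr <= chi2_q
print("empirical coverage:", hits / reps)  # approximately 0.90
\end{verbatim}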
The second approach for approximating the distribution of $-2\log\Lambda_n(\boldsymbol{X}_n,Y)$, when applicable, is to use Wilks' theorem. Under the
conditions given in \citet{wilks1938}, the LR statistic $-2\log\Lambda_{n}(\boldsymbol{X}_n, Y)\xrightarrow{d}\chi_d^2$,
where $d$ is the difference in the dimensions of the full and reduced models ($d=1$ in our prediction interval construction).
Similarly, the $1-\alpha$ prediction region based on Wilks' theorem is
\[\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)=\left\{y:-2\log\Lambda_{n}(\boldsymbol{x}_n, y)\leq\chi^2_{d,1-\alpha}\right\}.\]
Wilks' theorem, however, does {\it not} apply in all prediction problems.
There exist important cases, particularly with discrete data, where Wilks' result is valid for the log-LR statistic $-2\log\Lambda_n(\boldsymbol{X}_n,Y)$ in prediction and the chi-square-calibrated prediction region above is then appropriate; this is described in Section~\ref{sec:discrete-distribution}.
When Wilks' theorem does not apply, an alternative limiting distribution may still exist as illustrated in Section~\ref{asymptotic-results}.
The third approach, which is the most general one, is to use parametric bootstrap.
If $\lambda_{1-\alpha}$ is the $1-\alpha$ quantile of $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$, then we have the following prediction region
\begin{equation*}
\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)=\left\{y:-2\log\Lambda_n(\boldsymbol{x}_n, y)\leq\lambda_{1-\alpha}\right\}.
\end{equation*}
The idea of this approach is to use a parametric bootstrap re-creation of the data $(\boldsymbol{X}_n^*,Y^*)$, which leads to a distribution for a bootstrap version $-2 \log \Lambda_n(\boldsymbol{X}_n^*,Y^*)$ of the log-LR statistic. Then, the $1-\alpha$ quantile of the bootstrap distribution, say $\lambda_{1-\alpha}^*$, is used to approximate the unknown quantile $\lambda_{1-\alpha}$ of the true sampling distribution of $-2 \log\Lambda_n (\boldsymbol{X}_n,Y)$.
Then the resulting parametric bootstrap prediction region is
\begin{equation}\label{eq:bootstrap-find-lik-ratio}
\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)=\left\{y:-2\log\Lambda_n(\boldsymbol{x}_n, y)\leq\lambda_{1-\alpha}^\ast\right\}.
\end{equation}
An algorithm for implementing a Monte Carlo (i.e., simulation-based) approximation of the parametric bootstrap is as follows.
\begin{enumerate}
\item Compute an estimate corresponding to a consistent estimator of $\boldsymbol{\theta}$ (usually the ML estimate) using the observed data $\boldsymbol{X}_n=\boldsymbol{x}_n$, denoted by $\widehat{\boldsymbol{\theta}}_n$ (recall the data model is that the $\boldsymbol{X}_n$ are iid $f(\cdot;\boldsymbol{\theta})$).
\item Generate a bootstrap sample $\boldsymbol{x}_n^\ast$ and $y^\ast$ as iid observations drawn from $f(\cdot;\widehat{\boldsymbol{\theta}}_n)$.
\item Evaluate the LR in (\ref{eq:likelihood-ratio}) using bootstrap pair $(\boldsymbol{x}_n^\ast, y^\ast)$ to get $\lambda^\ast\equiv-2\log\Lambda_{n}(\boldsymbol{x}_n^\ast, y^\ast)$.
\item Repeat steps 2--3 $B$ times to obtain $B$ realizations of $\lambda^\ast$ as $\{\lambda^\ast_b\}_{b=1}^{B}$.
\item Use the $1-\alpha$ sample quantile of $\{\lambda^\ast_b\}_{b=1}^{B}$ as $\lambda_{1-\alpha}^\ast$ in (\ref{eq:bootstrap-find-lik-ratio}) and compute the prediction region.
\end{enumerate}
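For illustration, the following minimal Python sketch implements this Monte Carlo algorithm for iid exponential data, for which the LR reduces to a closed form (see the exponential example in Section~\ref{sec:pivotal-quantity}); the grid inversion and all constants are illustrative choices.
\begin{verbatim}
import numpy as np

# Sketch of the Monte Carlo bootstrap algorithm above for iid
# Exp(theta) data (theta = mean); the closed form for Lambda_n is
# derived in the exponential example later in the paper.
def neg2_log_lr(x, y):
    n, xbar = len(x), np.mean(x)
    theta_red = (np.sum(x) + y) / (n + 1)      # reduced-model ML estimate
    lam = y * xbar**n / theta_red**(n + 1)     # Lambda_n(x, y)
    return -2.0 * np.log(lam)

def bootstrap_prediction_interval(x, alpha=0.1, B=2000, seed=None):
    rng = np.random.default_rng(seed)
    n, theta_hat = len(x), np.mean(x)          # step 1: ML estimate
    lam_star = np.empty(B)
    for b in range(B):                         # steps 2--4
        xs = rng.exponential(theta_hat, size=n)
        ys = rng.exponential(theta_hat)
        lam_star[b] = neg2_log_lr(xs, ys)
    cutoff = np.quantile(lam_star, 1 - alpha)  # step 5
    grid = np.linspace(1e-6, 10 * theta_hat, 4000)
    inside = grid[[neg2_log_lr(x, y) <= cutoff for y in grid]]
    return inside.min(), inside.max()

x = np.random.default_rng(3).exponential(2.0, size=30)
print(bootstrap_prediction_interval(x, alpha=0.1, seed=4))
\end{verbatim}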
The prediction region $\mathcal{P}_{1-\alpha}(\boldsymbol{x}_n)$ in (\ref{eq:bootstrap-find-lik-ratio}) is a prediction interval when $\Lambda_{n}(\boldsymbol{x}_n,y)$ is a unimodal function of $y$ for a given data set $\boldsymbol{X}_n=\boldsymbol{x}_n$.
Such prediction intervals generally do not have equal-tail
probabilities. In many applications, however, the cost of the
predictand being greater than the upper bound differs from the cost
of it being less than the lower bound.
In such cases, it is better to have a prediction interval with equal-tail
probabilities. This can be achieved by calibrating separately the lower
and upper one-sided $1-\alpha/2$ prediction bounds and putting them
together to provide a two-sided $1-\alpha$ equal-tail-probability
prediction interval. The next section shows how to construct a one-sided prediction bound using the LR in (\ref{eq:likelihood-ratio}).
\subsection{Constructing One-Sided Prediction Bounds}\label{subsec:one-sided}
Suppose that the LR $\Lambda_{n}(\boldsymbol{x}_n,y)$ is a unimodal function of $y$ based on observed data $\boldsymbol{X}_n=\boldsymbol{x}_n$.
This is a common property (holding with probability 1) in most prediction problems. Note that a two-sided prediction interval (\ref{eq:bootstrap-find-lik-ratio}) for $Y$ based on $\boldsymbol{X}_n=\boldsymbol{x}_n$ is defined by a horizontal line drawn through the curve of $-2\log \Lambda_n(\boldsymbol{x}_n,y)$ (as a function of $y$) at an appropriate threshold $\lambda_{1-\alpha}$, as shown in Figure~\ref{fig:horizontal}.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{horizontal-line.pdf}
\caption{Example of log-LR statistic (as a function of $y$) for given data $\boldsymbol{x}_n$, which is an illustration of the prediction interval procedure in (\ref{eq:bootstrap-find-lik-ratio}).}
\label{fig:horizontal}
\end{figure}
Here we describe a method for calibrating one-sided bounds directly, without resorting to (\ref{eq:bootstrap-find-lik-ratio}), by adjusting the log-LR curve so that it becomes a monotone function.
For a given data set $\boldsymbol{X}_n=\boldsymbol{x}_n$, let $y_0\equiv y_0(\boldsymbol{x}_n)$ denote the value of $y$ that maximizes $\Lambda_n(\boldsymbol{x}_n,y)$, so that $\Lambda_n(\boldsymbol{x}_n,y_0)=1$. Define a signed log-LR statistic $\zeta_n(\boldsymbol{x}_n,y)$ based on (\ref{eq:likelihood-ratio}) as
\begin{equation}
\label{eq-extended-lik-ratio}
\zeta_n(\boldsymbol{x}_n,y)\equiv(-1)^{\text{I}(y \leq y_0)} \left[-2 \log \Lambda_n(\boldsymbol{x}_n,y)\right] = \left\{
\begin{array}{ll}
2 \log \Lambda_n(\boldsymbol{x}_n,y) \in (-\infty,0]& \mbox{$y \leq y_0$}\\
-2 \log \Lambda_n(\boldsymbol{x}_n,y) \in [0,\infty) & \mbox{$y \geq y_0$},\\
\end{array}
\right.
\end{equation}
where $\text{I}(\cdot)$ denotes the indicator function. That is, $(-1)^{\text I(y \leq y_0)} \left[-2 \log \Lambda_n(\boldsymbol{x}_n,y)\right]$ is a signed version of the log-LR statistic $-2 \log \Lambda_n(\boldsymbol{x}_n,y)$ which, unlike the latter statistic, is an increasing function of $y$
and is negative when $y<y_0$ (but positive when $y>y_0$).
Hence, to set a one-sided bound for $Y$, we calibrate the signed log-LR statistic $\zeta_n(\boldsymbol{x}_n,y)$ which has a one-to-one correspondence to $y$-values when $\Lambda_n(\boldsymbol{x}_n,y)$ is unimodal (unlike $\Lambda_n(\boldsymbol{x}_n,y)$ itself).
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{vertical-line.pdf}
\caption{An illustration of the one-sided prediction bound procedure in (\ref{eq-vertical}).}
\label{fig:vertical}
\end{figure}
Note that if the $1-\alpha$ quantile of the distribution of $\zeta_n(\boldsymbol{X}_n,Y)$, denoted by $\zeta_{1-\alpha}$, were known, we could set a $1-\alpha$ upper prediction bound for $Y$ given $\boldsymbol{X}_n=\boldsymbol{x}_n$ as
\begin{equation}\label{eq-vertical}
\tilde{y}_{1-\alpha}(\boldsymbol{x}_n) \equiv \sup\{y\in \mathbb{R}: \zeta_n(\boldsymbol{x}_n,y) \leq \zeta_{1-\alpha}\}.
\end{equation}
Figure~\ref{fig:vertical} provides a graphical illustration of (\ref{eq-vertical}) and the resulting prediction region.
Similar to the third approach in Section~\ref{sec-determine}, we can approximate the quantile $\zeta_{1-\alpha}$ by $\zeta_{1-\alpha}^\ast$, the $1-\alpha$ quantile of $\zeta_{n}(\boldsymbol{X}_n^\ast,Y^\ast)$, the bootstrap version of the signed log-LR statistic.
Then, a bootstrap prediction bound is obtained by replacing $\zeta_{1-\alpha}$ with $\zeta_{1-\alpha}^\ast$ in (\ref{eq-vertical}), and the $1-\alpha$ upper prediction bound $\tilde{y}_{1-\alpha}(\boldsymbol{x}_n)$ is defined as
\begin{equation}\label{eq:upper}
\tilde{y}_{1-\alpha}(\boldsymbol{x}_n)=\sup_{y\in\mathbb{R}}\left\{y:\zeta_{n}(\boldsymbol{x}_n,y)\leq \zeta_{1-\alpha}^\ast\right\}.
\end{equation}
The construction of the $1-\alpha$ lower prediction bound $\utilde{y}_{1-\alpha}(\boldsymbol{x}_n)$ is similar:
\begin{equation}\label{eq;lower}
\utilde y_{1-\alpha}(\boldsymbol{x}_n)=\sup_{y\in\mathbb{R}}\left\{y:\zeta_{n}(\boldsymbol{x}_n,y)\leq \zeta_\alpha^\ast\right\}.
\end{equation}
The following algorithm describes how to compute the $1-\alpha$ upper (and lower) prediction bound $\tilde{y}_{1-\alpha}(\boldsymbol{x}_n)$ (and $\utilde{y}_{1-\alpha}(\boldsymbol{x}_n)$) using a Monte Carlo approximation of the bootstrap distribution of $\zeta_{n}(\boldsymbol{X}_n^\ast,Y^\ast)$ and the bootstrap quantile $\zeta_{1-\alpha}^\ast$ (and $\zeta_{\alpha}^\ast$).
\begin{enumerate}
\item Compute $\widehat{\boldsymbol{\theta}}_n$ using the observed data $\boldsymbol{X}_n=\boldsymbol{x}_n$.
\item Simulate a sample $\boldsymbol{x}_n^\ast$ using a parametric bootstrap with $\widehat{\boldsymbol{\theta}}_n$ and compute $y_0(\boldsymbol{x}_n^\ast)$.
\item Simulate $y^\ast$ from distribution $f(\cdot;\widehat{\boldsymbol{\theta}}_n)$ and compute $$\zeta^\ast\equiv\zeta_n(\boldsymbol{x}_n^\ast,y^\ast)=(-1)^{\text{I}[y^\ast\leq y_0(\boldsymbol{x}_n^\ast)]}\left[-2\log\Lambda_n(\boldsymbol{x}_n^\ast,y^\ast)\right].$$
\item Repeat steps 2--3 $B$ times to obtain $B$ realizations of $\zeta^\ast$ as $\{\zeta_{i}^\ast\}_{i=1}^{B}$.
\item Use the $1-\alpha$ (or $\alpha$) sample quantile from $\{\zeta_{i}^\ast\}_{i=1}^{B}$ as $\zeta_{1-\alpha}^\ast$ (or $\zeta_{\alpha}^\ast$) in (\ref{eq:upper}) (or (\ref{eq;lower})) to compute the $1-\alpha$ upper (or lower) prediction bound.
\end{enumerate}
Note that, in the algorithm for one-sided bounds, one can simultaneously keep track of the bootstrap statistics $\lambda^\ast=\left|\zeta^\ast\right|$ for computing the two-sided bounds in (\ref{eq:bootstrap-find-lik-ratio}) (i.e., the same resamples can be used).
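The following minimal Python sketch implements the one-sided algorithm for iid $\text{Norm}(\mu,\sigma)$ data, using the closed-form LR given later in (\ref{eq:normal-two-para}), for which $y_0=\bar{x}_n$ and the signed statistic can be inverted analytically; the seeds and data are illustrative.
\begin{verbatim}
import numpy as np

# Sketch of the one-sided bound algorithm for iid Norm(mu, sigma) data.
# Closed form: -2 log Lambda_n = (n + 1) log(1 + t^2/(n - 1)), y0 = xbar.
def signed_log_lr(x, y):
    n, xbar, s = len(x), np.mean(x), np.std(x, ddof=1)
    t2 = (n / (n + 1)) * ((y - xbar) / s) ** 2
    return np.sign(y - xbar) * (n + 1) * np.log(1.0 + t2 / (n - 1))

def upper_prediction_bound(x, alpha=0.05, B=2000, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    mu_hat, sig_hat = np.mean(x), np.std(x)   # step 1: ML estimates
    zeta_star = np.empty(B)
    for b in range(B):                        # steps 2--4
        xs = rng.normal(mu_hat, sig_hat, size=n)
        ys = rng.normal(mu_hat, sig_hat)
        zeta_star[b] = signed_log_lr(xs, ys)
    q = np.quantile(zeta_star, 1 - alpha)     # step 5
    # Invert zeta_n(x, y) = q analytically (zeta is monotone in y).
    xbar, s = np.mean(x), np.std(x, ddof=1)
    t2 = (n - 1) * (np.exp(abs(q) / (n + 1)) - 1.0)
    return xbar + np.sign(q) * s * np.sqrt(((n + 1) / n) * t2)

x = np.random.default_rng(5).normal(10, 2, size=25)
print(upper_prediction_bound(x, alpha=0.05, seed=6))
\end{verbatim}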
\section{Exact Results}\label{sec:pivotal-quantity}
The LR-based prediction method can often uncover and exploit pivotal quantities involving the data $\boldsymbol{X}_n$ and the predictand $Y$ when these exist.
In these cases, the LR statistic is pivotal, often emerging as a function of another pivotal quantity from $(\boldsymbol{X}_n,Y)$.
Consequently, in these cases, prediction intervals or bounds for $Y$ based on the LR statistic (\ref{eq:likelihood-ratio}) will have exact coverage, whether based on the exact distribution of the LR statistic (when available analytically) or, more broadly, on the bootstrap.
In this section, we provide more explanation about when the LR prediction method is exact, beginning with some illustrative examples.
\noindent\textbf{Exponential Distribution}:
Suppose the data $X_1,\dots,X_n$ and the future predictand $Y$ are iid $\text{Exp}(\theta)$ with mean $\theta>0$. Letting $\widehat\theta_{\boldsymbol{x}_n,y}\equiv\left(\sum_{i=1}^{n}x_i+y\right)/(n+1)$ and $\widehat{\theta}_{\boldsymbol{x}_n}\equiv\sum_{i=1}^{n}x_i/n$ based on data $\boldsymbol{X}_n=\boldsymbol{x}_n$ and a given value $y>0$ of $Y$, the LR statistic (\ref{eq:likelihood-ratio}) is
\[
\Lambda_n(\boldsymbol{x}_n,y)=\frac{\widehat\theta^{-n-1}_{\boldsymbol{x}_n,y}\exp\left[-\frac{\sum_{i=1}^{n}x_i+y}{\widehat{\theta}_{\boldsymbol{x}_n,y}}\right]}{\widehat{\theta}_{\boldsymbol{x}_n}^{-n}\exp\left[-\frac{\sum_{i=1}^{n}x_i}{\widehat{\theta}_{\boldsymbol{x}_n}}\right]y^{-1}\exp\left(-\frac{y}{y}\right)}=\frac{y\widehat{\theta}_{\boldsymbol{x}_n}^n}{\widehat{\theta}_{\boldsymbol{x}_n,y}^{n+1}}=\frac{\left(\frac{\bar{x}_n}{y}\right)^n}{\left[\frac{n}{n+1}\frac{\bar{x}_n}{y}+\frac{1}{n+1}\right]^{n+1}},
\]
which is a function of the pivotal quantity $\bar{X}_n/Y$ and a unimodal function of $y$.
Thus, the LR prediction method is exact (when based on the $F$-distribution of $\bar{X}_n/Y\sim F_{2n,2}$ or the bootstrap as in (\ref{eq:bootstrap-find-lik-ratio})), and the prediction region becomes a prediction interval.
\noindent\textbf{Normal Distribution}:
Let $X_1,\dots,X_n,Y\sim\text{Norm}(\mu,\sigma)$, where both $\mu\in\mathbb{R}$ and $\sigma>0$ are unknown.
We construct the full model by allowing the predictand $Y$ to have a different location parameter $\mu_y$: $X_1,\dots,X_n\sim\text{Norm}(\mu,\sigma)$ and $Y\sim\text{Norm}(\mu_y,\sigma)$ (i.e., $\boldsymbol{\theta}=(\mu,\sigma)$ and $\boldsymbol{\theta}_y=(\mu_y,\sigma)$).
Then for the full model, the ML estimators are
\[
\widehat{\mu}=\bar{X}_n,\quad\widehat{\mu}_y=Y,\quad\widehat{\sigma}=\sqrt{\frac{\sum_{i=1}^{n}(X_i-\bar{X}_n)^2}{n+1}},
\]
while for the reduced model $\boldsymbol{\theta}=\boldsymbol{\theta}_y$, the ML estimators are
\[
\widehat{\mu}=\frac{\sum_{i=1}^{n}X_i+Y}{n+1},\quad\widehat{\sigma}=\sqrt{\frac{\sum_{i=1}^{n}(X_i-\widehat{\mu})^2+(Y-\widehat{\mu})^2}{n+1}}.
\]
Then the resulting LR statistic (\ref{eq:likelihood-ratio}) is
\begin{equation}\label{eq:normal-two-para}
\Lambda_n(\boldsymbol{X}_n,Y)=\left(1 + \frac{t^2}{n-1} \right)^{-(n+1)/2}
\end{equation}
where
\[
t\equiv\frac{\bar{X}_n-Y}{s}\sqrt\frac{n}{n+1},
\]
and $s^2 \equiv \sum_{i=1}^n (X_i-\bar{X}_n)^2/(n-1)$.
Here, $t \sim t_{n-1}$ has the same $t$-test statistic form as in (\ref{eq:normal-invert-t-test}).
Hence, the LR is pivotal and also $\Lambda_n(\boldsymbol{x}_n,y)$ is a unimodal function of $y$.
Thus the resulting prediction interval procedure has exact coverage probability when based on the bootstrap as in (\ref{eq:bootstrap-find-lik-ratio}) (or using the $t_{n-1}$ distribution here).
In fact, the results for the normal distribution can be generalized to the (log-)location-scale family, which contains many other important distributions.
Theorem~\ref{theorem-location-scale-family-11} says that, by allowing the location parameter of the predictand to be different from that of the data to create a full versus reduced model comparison, the resulting LR statistic (\ref{eq:likelihood-ratio}) is a pivotal quantity so that the prediction method is exact.
\begin{theorem}\label{theorem-location-scale-family-11}
(i) Suppose the LR-statistic (\ref{eq:likelihood-ratio}) is a pivotal quantity.
Then, the corresponding $1-\alpha$ prediction region (\ref{eq:bootstrap-find-lik-ratio}) for $Y$ based on the parametric bootstrap will have exact coverage.
That is,
\[
\Pr{}_{\!\boldsymbol{\theta}}\left[Y\in\mathcal{P}_{1-\alpha}(\boldsymbol{X}_n)\right]=1-\alpha.
\]
(ii) Suppose also that both the data $X_1,\dots,X_n$ and $Y$ are from a location-scale distribution with density $f(x;\mu,\sigma)=\sigma^{-1}\phi\left[(x-\mu)/\sigma\right]$ and parameters $\boldsymbol{\theta}=(\mu,\sigma)\in\mathbb{R}\times(0,\infty)$.
In the LR construction (\ref{eq:likelihood-ratio}), suppose the full model involves parameters $\boldsymbol{\theta}=(\mu,\sigma)$ and $\boldsymbol{\theta}_{y}=(\mu_y,\sigma)$ (i.e., $X_1,\dots,X_n\sim f(\cdot;\mu,\sigma)$ and $Y\sim f(\cdot;\mu_y,\sigma)$).
Then the LR statistic $\Lambda_{n}(\boldsymbol{X}_n,Y)$ (or $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$) is a pivotal quantity and the result of Theorem~\ref{theorem-location-scale-family-11}(i) holds.
\end{theorem}
\noindent The proof is given in Section~A of the supplementary material.
\noindent\textbf{Remark 1.} If the LR statistic $\Lambda_{n}(\boldsymbol{x}_n,y)$ is a unimodal function of $y\in\mathbb{R}$ with probability 1 (as determined by $\boldsymbol{X}_n$) and if the signed LR-statistic $\zeta_{n}(\boldsymbol{X}_n,Y)$ is a pivotal quantity, then the Theorem~\ref{theorem-location-scale-family-11}(i) result (i.e., exact coverage) also applies for one-sided prediction bounds based on parametric bootstrap.
For (log-)location-scale distributions as in Theorem~\ref{theorem-location-scale-family-11}(ii), the signed LR-statistic $\zeta_{n}(\boldsymbol{X}_n,Y)$ is a pivot.
We next provide some illustrative examples.
\noindent\textbf{Simple Regression}: We consider the simple linear regression model $Y\sim\text{Norm}(\beta_0+\beta_1x,\sigma)$ with given $x$ and data $Y_1,\dots,Y_n$ that satisfy $Y_i\sim\text{Norm}(\beta_0+\beta_1x_i,\sigma)$ for given covariate values $x_i$, $i=1,\dots,n$.
Similar to the normal distribution example, it is natural to choose $\beta_0+\beta_1x$ to construct the ``full'' model.
In fact, choosing $\beta_0$ or $\beta_1$ gives the same log-LR statistic as $\beta_0+\beta_1x$, which is given by
\[
(n+1)\log\left(1+\frac{1}{n-2}T^2\right),
\]
where $T$ matches the standard statistic for normal theory predictions (i.e., a studentized version of $Y-\widehat{\beta}_0-\widehat{\beta}_1x$) having a $t$-distribution with $n-2$ degrees of freedom.
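As a small aside, inverting this $T$ yields the standard normal-theory prediction interval at a new covariate value $x$; the following Python sketch computes that interval, with simulated data for illustration.
\begin{verbatim}
import numpy as np
from scipy import stats

# Sketch: classical normal-theory prediction interval at a new x,
# corresponding to the studentized T with n - 2 degrees of freedom.
def regression_prediction_interval(x, y, x_new, alpha=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    s = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    se = s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2 / sxx)
    tq = stats.t.ppf(1 - alpha / 2, df=n - 2)
    yhat = b0 + b1 * x_new
    return yhat - tq * se, yhat + tq * se

rng = np.random.default_rng(10)
xs = rng.uniform(0, 10, size=30)
ys = 1.0 + 0.5 * xs + rng.normal(0, 1, size=30)
print(regression_prediction_interval(xs, ys, x_new=5.0))
\end{verbatim}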
\noindent\textbf{Two-Parameter Exponential Distribution}:
Suppose $X_1,\dots,X_n,Y$ are independent observations from a two-parameter exponential distribution $\text{Exp}(\mu, \beta)$.
That is, $(X_i-\mu)/\beta\sim\text{Exp}(1)$, with location parameter $\mu$ and scale parameter $\beta$, so that $\boldsymbol{\theta}=(\mu,\beta)$.
Hence, by Theorem~\ref{theorem-location-scale-family-11}(ii), the LR $\Lambda_{n}(\boldsymbol{X}_n,Y)$ is a pivotal quantity and bootstrap-calibrated prediction regions for $Y$ are exact.
In fact, an exact form of the LR-statistic may be determined as
\begin{equation}\label{eq:two-parameter-exponential}
\Lambda_n(\boldsymbol{x}_n, y)=
\left[\frac{\sum_{i=1}^{n}x_i-nx_{(1)}}{\sum_{i=1}^{n}x_i+y-(n+1)\min\{x_{(1)}, y\}}\right]^{n+1}
\end{equation}
based on the observed data $\boldsymbol{x}_n=(x_1,\ldots,x_n)$, where $x_{(1)}$ denotes the first order statistic (i.e., the sample minimum).
Note that $\Lambda_n(\boldsymbol{x}_n,y)$ is a unimodal function of $y$ given $\boldsymbol{X}_n=\boldsymbol{x}_n$ (with probability 1); hence, one-sided prediction bounds for $Y$ based on a parametric bootstrap will also have exact coverage by Remark~1.
Replacing $\boldsymbol{x}_n$ and $y$ with corresponding random variables $\boldsymbol{X}_n$ and $Y$
in (\ref{eq:two-parameter-exponential}) gives
\[ \Lambda_n(\boldsymbol{X}_n, Y)\stackrel{d}{=}\left[\frac{\sum_{i=1}^{n}E_i-nE_{(1)}}{\sum_{i=1}^{n}E_i+T-(n+1)\min\{E_{(1)}, T\}}\right]^{n+1}, \]
where $E_1, \dots, E_n, T$ denote iid $\text{Exp}(1)$ random variables and $E_{(1)}$ is the first order statistic of
$E_1, \dots, E_n$; this verifies that $\Lambda_n(\boldsymbol{X}_n, Y)$ is
indeed a pivotal quantity for the exponential data case, as claimed by Theorem~\ref{theorem-location-scale-family-11}(ii).
Thus this prediction interval procedure is exact when the parametric bootstrap is used to obtain the distribution of $\Lambda_n(\boldsymbol{X}_n, Y)$.
\noindent\textbf{Uniform Distribution}:
Suppose $X_1,\dots,X_n,Y$ are iid $\text{Unif}(0,\theta)$, which is a one-parameter scale family.
The LR statistic (\ref{eq:likelihood-ratio}) has the form
\begin{equation}\label{eq-uniform-lr}
\Lambda_{n}(\boldsymbol{x}_n,y)=\frac{(x_{(n)}/y)^n}{[\max(x_{(n)}/y,1)]^{n+1}},
\end{equation}
where $x_{(n)}$ denotes the maximum of $\{x_1,\dots,x_n\}$.
Hence, $\Lambda_{n}(\boldsymbol{x}_n,y)$ is a unimodal function of $y$ given $\boldsymbol{X}_n=\boldsymbol{x}_n$ (with probability 1), and $\Lambda_{n}(\boldsymbol{X}_n,Y)$ can also be seen to be a pivotal quantity since
\[
\frac{X_{(n)}}{Y}=\frac{X_{(n)}/\theta}{Y/\theta}\stackrel{d}{=}\frac{\max\{U_1,\dots,U_n\}}{U_0},
\]
where $U_0,U_1,\dots,U_n$ denote iid $\text{Unif}(0,1)$ variables.
Hence, by Theorem~\ref{theorem-location-scale-family-11}(i) and Remark~1, both the two-sided prediction interval procedure (\ref{eq:bootstrap-find-lik-ratio}) as well as the one-sided bound procedures (\ref{eq:upper})-(\ref{eq;lower}) based on bootstrap have exact coverage.
Thus, bootstrap simulation provides an effective and unified means of estimating the distribution of $\Lambda_{n}(\boldsymbol{X}_n,Y)$ and constructing prediction intervals.
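As a quick numerical sanity check (not part of the paper's simulations), the pivotality of (\ref{eq-uniform-lr}) can be verified by simulating the LR statistic under two different values of $\theta$; the printed quantiles should agree up to Monte Carlo error.
\begin{verbatim}
import numpy as np

# Sketch: the distribution of the uniform LR statistic should not
# depend on theta; compare simulated quantiles under two theta values.
def lr_stat(x, y):
    r = np.max(x) / y
    return r ** len(x) / np.maximum(r, 1.0) ** (len(x) + 1)

rng = np.random.default_rng(7)
n, reps = 10, 20000
for theta in (1.0, 5.0):
    lam = np.array([lr_stat(rng.uniform(0, theta, size=n),
                            rng.uniform(0, theta)) for _ in range(reps)])
    print(theta, np.quantile(lam, [0.05, 0.5, 0.95]))
\end{verbatim}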
\section{General Results}\label{asymptotic-results}
Section~\ref{sec:pivotal-quantity} discusses cases where the LR prediction method is exact, particularly when the construction (\ref{eq:likelihood-ratio}) results in a pivotal quantity.
For some prediction problems, however, the LR statistic may not be a pivotal quantity, as the next example illustrates.
\noindent\textbf{Gamma Distribution}:
Let $X_1,\dots,X_n,Y$ denote iid random variables from a gamma density $f(x;\alpha,\beta)=\beta^{-\alpha} x^{\alpha-1} \exp(-x/\beta)/{\Gamma(\alpha)}$, $x>0$, with scale $\beta>0$ and shape $\alpha>0$ parameters.
In the LR construction (\ref{eq:likelihood-ratio}) with parameters $\boldsymbol{\theta}=(\beta,\alpha)$, suppose the full model involves parameters $\boldsymbol{\theta}$ and $\boldsymbol{\theta}_y=(\beta_y,\alpha)$; that is, $X_1,\dots,X_n\sim\text{Gamma}(\alpha,\beta)$ and $Y\sim\text{Gamma}(\alpha,\beta_y)$.
The LR statistic is then given by
\[
\Lambda_{n}(\boldsymbol{x}_n,y)=\frac{\sup_{\alpha>0}\, [\Gamma(\alpha)]^{-(n+1)} \left[\frac{n\bar{x}_n+y}{(n+1)\alpha}\right]^{-(n+1)\alpha} e^{-(n+1)\alpha} \left(y \prod_{i=1}^n x_i\right)^{\alpha-1}}{\sup_{\alpha>0}\, [\Gamma(\alpha)]^{-(n+1)} \left(\frac{\bar{x}_n}{\alpha}\right)^{-n\alpha} \left(\frac{y}{\alpha}\right)^{-\alpha} e^{-(n+1)\alpha} \left(y \prod_{i=1}^n x_i\right)^{\alpha-1}}.
\]
Unlike the previous examples, the LR statistic is no longer a pivotal quantity.
However, we can use the bootstrap method to approximate the distribution for $\Lambda_{n}(\boldsymbol{X}_n,Y)$.
A small simulation study was conducted to investigate the coverage probability of the LR prediction method.
Figure~\ref{fig:gamma} shows the coverage probability of 90\% and 95\% one-sided prediction bounds for a future gamma variate based on the LR prediction method (i.e., (\ref{eq:upper}) and (\ref{eq;lower})), and compares the LR prediction method with other methods.
Sample size values $n=4,5,6,7,8,9,10,30,50,70,90,100$ were used.
Without loss of generality, the scale parameter was set to $\beta=1$, and the shape parameter values $\alpha=1,2,3$ were used.
$N=2000$ Monte Carlo samples were used to compute the coverage probability and $B=2000$ bootstrap samples were used to approximate the distribution of the signed log-LR statistic.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{gamma-cp.pdf}
\caption{Coverage probabilities for predicting a gamma random variable versus the sample size $n$ for the $90\%$ and $95\%$ one-sided prediction bounds: Approximate Fiducial Prediction (AFP) (\citealt{chen2017approximate}), Calibration Bootstrap (CB) (\citealt{beran1990}), Direct-Bootstrap (DB) (\citealt{harris1989}), Likelihood Ratio Prediction (LR) (cf.~(\ref{eq:upper}) and (\ref{eq;lower})), and Plug-in (PL) methods.}
\label{fig:gamma}
\end{figure}
The simulation results show that the calibration-bootstrap method has the best coverage while the LR prediction and the approximate fiducial (or GPQ) methods also work well.
When $n=4$, the difference between the true coverage of the LR prediction method and the nominal level (combined with Monte Carlo error) is less than $2\%$.
When the sample size $n$ increases, the discrepancy quickly shrinks.
This illustrates that even when the LR statistic has a complicated and non-pivotal distribution, using the parametric bootstrap can be effective and useful.
Theorem~\ref{theorem-multiple-parameters}, given next, shows that the LR prediction method, combined with bootstrap calibration, is asymptotically correct for continuous prediction problems under general conditions.
The theorem consists of two parts: the first part establishes that the log-LR statistic $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$ has a limit distribution as $n\to\infty$.
However, this limit distribution will sometimes {\it not} be chi-square as in Wilks' theorem and may even depend on one or more of the parameters.
Nevertheless, the second part of Theorem~\ref{theorem-multiple-parameters} establishes that the bootstrap version of the log-LR statistic $-2\log\Lambda_{n}(\boldsymbol{X}_n^\ast,Y^\ast)$ can capture the distribution of $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$.
Consequently, $1-\alpha$ bootstrap-based prediction regions (\ref{eq:bootstrap-find-lik-ratio}) for $Y$ will have coverage probabilities that converge to the correct coverage level $1-\alpha$ as the sample size $n$ increases.
\begin{theorem}\label{theorem-multiple-parameters}
Suppose a random sample $\boldsymbol{X}_n$ of size $n$ and a predictand $Y$ (independent of $\boldsymbol{X}_n$) have a common density $f(\cdot;\boldsymbol{\theta})$, and that the LR construction (\ref{eq:likelihood-ratio}) is used with $\boldsymbol{\theta}= (\theta,\boldsymbol{\theta}^\prime)$ and $\boldsymbol{\theta}_y = (\theta_y,\boldsymbol{\theta}^\prime)$ having common parameters $\boldsymbol{\theta}^\prime$ (and real-valued parameters $\theta,\theta_y$ that may differ).
Then, under mild regularity conditions (detailed in the supplement),
\begin{enumerate}
\item As $n\to\infty$,
\[
-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)\xrightarrow{d}-2\log\left[\frac{f(Y;\boldsymbol{\theta}_0)}{\sup_{\theta_{y}}f(Y;\theta_{y},\boldsymbol{\theta}_{0}^\prime)}\right],
\]
where $\boldsymbol{\theta}_0=(\theta_0,\boldsymbol{\theta}_{0}^\prime)$ denotes the true value of the parameter vector $\boldsymbol{\theta}$.
\item The bootstrap provides an asymptotically consistent estimator of the distribution of the log-LR statistic; that is,
\[
\sup_{\lambda\in\mathbb{R}}\left|\Pr{}_{\!\ast}(-2\log\Lambda_{n}^\ast\leq\lambda)-\Pr\left(-2\log\Lambda_{n}\leq\lambda\right)\right|\xrightarrow{p}0,
\]
where $\Pr{}_{\!\ast}$ is the bootstrap induced probability and $\Lambda^\ast_n\equiv\Lambda_n(\boldsymbol{X}_n^\ast,Y^\ast)$.
\end{enumerate}
\end{theorem}
\noindent\textbf{Remark 2.}
Similar to Remark~1, if $\Lambda_{n}(\boldsymbol{X}_n,y)$ is a unimodal function of $y$ (with probability 1 or with probability approaching 1 as $n\to\infty$), then the signed log-LR statistic converges as well:
\[
(-1)^{\text{I}[Y\leq y_0(\boldsymbol{X}_n)]}\left[-2\log\Lambda_n(\boldsymbol{X}_n,Y)\right] \xrightarrow{d}
\begin{cases}
2\log\left[\frac{f(Y;\boldsymbol{\theta}_0)}{\sup_{\theta_y} f(Y;\theta_y,\boldsymbol{\theta}^\prime_0)}\right], & Y\leq m_0,\\
-2\log\left[\frac{f(Y;\boldsymbol{\theta}_0)}{\sup_{\theta_y} f(Y;\theta_y,\boldsymbol{\theta}^\prime_0)}\right], & Y> m_0,
\end{cases}
\]
where $m_0$ is the maximizer of ${f(y;\boldsymbol{\theta}_0)}/{\sup_{\theta_y} f(y; \theta_y,\boldsymbol{\theta}^\prime_0)}$ over $y$.
The bootstrap approximation for the signed log-LR statistic is also valid asymptotically.
The proof is described in the supplementary material along with a proof of Theorem~\ref{theorem-multiple-parameters}.
We use two examples to illustrate Theorem~\ref{theorem-multiple-parameters}.
In the uniform example of Section~\ref{sec:pivotal-quantity}, if $\theta_0>0$ denotes the true parameter value (i.e., $Y\sim\mbox{Unif}(0,\theta_0)$), then the limit distribution in Theorem~\ref{theorem-multiple-parameters}(i) for the log-LR statistic is
\begin{equation}\label{eq-new-key-1}
-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)\xrightarrow{d}-2\log\left(\frac{Y}{\theta_0}\right),
\end{equation}
which has a $\chi^2_{2}$ distribution.
This result can alternatively be verified by using the LR in (\ref{eq-uniform-lr}) to determine that
\[
\Lambda_{n}(\boldsymbol{X}_n,Y)=\frac{(X_{(n)}/Y)^n}{[\max(X_{(n)}/Y,1)]^{n+1}}=\frac{Y}{X_{(n)}}\frac{(X_{(n)}/Y)^{n+1}}{\left[\max(X_{(n)}/Y,1)\right]^{n+1}}\xrightarrow{d}\text{Unif}(0,1)
\]
from which $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)\xrightarrow{d}\chi_2^2$ follows.
While $\Lambda_{n}(\boldsymbol{X}_n,Y)$ is a pivotal quantity for any $n\geq1$ (so that bootstrap calibration is exact by Theorem~\ref{theorem-location-scale-family-11}), Theorem~\ref{theorem-multiple-parameters} shows that the bootstrap also captures the $\chi_2^2$ limiting distribution of the log-LR statistic in (\ref{eq-new-key-1}).
To consider a distribution with more than one parameter, we revisit the gamma distribution example of this section.
By Theorem~\ref{theorem-multiple-parameters}(i), the limit distribution is
\begin{equation}\label{key}
\begin{split}
-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)&\xrightarrow{d}-2\log\left[\frac{f(Y;\beta_0,\alpha_0)}{\sup_\beta f(Y;\beta,\alpha_0)}\right]\\
&=-2\log\left[\left(\frac{Y}{\alpha_0\beta_0}\right)^{\alpha_0}\exp\left(-\frac{Y}{\beta_0}+\alpha_0\right)\right]\\
&=2(Z-\alpha_0)-2\alpha_0\log\left(\frac{Z}{\alpha_0}\right),
\end{split}
\end{equation}
where $Z\equiv Y/\beta_0\sim\text{Gamma}(\alpha_0,1)$.
Even though the limit distribution (\ref{key}) of the log-LR statistic depends on the shape parameter $\alpha_0$ in this example, a bootstrap approximation for the distribution of the log-LR statistic is asymptotically correct by Theorem~\ref{theorem-multiple-parameters}.
This is demonstrated numerically through the coverage behavior of Figure~\ref{fig:gamma}.
In addition to the bootstrap (Theorem~\ref{theorem-multiple-parameters}~(ii)), the limit distribution of the log-LR statistic in Theorem~\ref{theorem-multiple-parameters}~(i) (as well as that of the signed log-LR statistic $\zeta_n$ from Remark~2) may also be used as an alternative approach to construct prediction intervals.
That is, we may use the $1-\alpha$ quantile of the limit distribution in Theorem~\ref{theorem-multiple-parameters}~(i) in place of the quantile $\lambda_{1-\alpha}^\ast$ in (\ref{eq:bootstrap-find-lik-ratio}) (which approximates the finite-sample distribution of the log-LR statistic). For example, in the uniform prediction example above, the limit distribution is $\chi_2^2$ from (\ref{eq-new-key-1}) and an approximate $1-\alpha$ prediction region for $Y$ would be $\{y : -2 \log \Lambda_n(\boldsymbol{x}_n,y) \leq \chi^2_{2,1-\alpha}\}$, which has asymptotically correct coverage by Theorem~\ref{theorem-multiple-parameters}~(i). As another example, in the gamma prediction case, we can use the $1-\alpha$ quantile of the limit distribution in (\ref{key}) to replace $\lambda_{1-\alpha}^\ast$ in (\ref{eq:bootstrap-find-lik-ratio}).
This limit distribution, however, depends on the unknown shape parameter $\alpha_0$ in (\ref{key}), which differs from the uniform case where the log-LR statistic has a limit distribution in (\ref{eq-new-key-1}) that is free of unknown parameters. However, in prediction cases such as the gamma distribution, where the limit distribution of log-LR statistic from Theorem~\ref{theorem-multiple-parameters}~(i) does depend on unknown parameters, we can still approximate and use the limit distribution by replacing any unknown parameters with consistent estimators.
To illustrate with gamma predictions, we may estimate the unknown shape parameter $\alpha_0$ in (\ref{key}) with the ML estimate $\widehat{\alpha}$ from the data $\boldsymbol{X}_n=\boldsymbol{x}_n$ and compute the $1-\alpha$ quantile of the ``plug-in'' version of the limit distribution, namely $2(Z-\widehat\alpha)-2\widehat\alpha\log(Z/\widehat\alpha)$ with $Z\sim\text{Gamma}(\widehat{\alpha},1)$.
Such use of the limit distribution of the log-LR statistic (Theorem~\ref{theorem-multiple-parameters}~(i)), possibly with plug-in estimation, can be computationally simpler than the parametric bootstrap and may have advantages for large sample sizes or when the numerical costs of repeated ML estimation (i.e., as in the bootstrap) are prohibitive.
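For illustration, the $1-\alpha$ quantile of this plug-in limit distribution can be approximated by Monte Carlo simulation; a minimal Python sketch (ours, with an illustrative value of $\widehat{\alpha}$) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha_hat = 2.3        # illustrative ML estimate of the shape parameter
reps, conf = 200_000, 0.95
z = rng.gamma(shape=alpha_hat, scale=1.0, size=reps)
stat = 2 * (z - alpha_hat) - 2 * alpha_hat * np.log(z / alpha_hat)
lam = np.quantile(stat, conf)  # replaces the quantile lambda_{1-alpha} in (4)
\end{verbatim}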
\section{How to Choose the Full Model}\label{sec-choose-full}
When $\boldsymbol{\theta}$ is a parameter vector, construction of the LR statistic depends on which parameter component in $\boldsymbol{\theta}$ is varied to create $\boldsymbol{\theta}_y$ in a full model, where $(\boldsymbol{\theta},\boldsymbol{\theta}_{y})$ again differ in exactly one component.
Our recommendation is to choose a parameter that is most readily identifiable from a single observation.
In other words, we can envision maximizing $f(y;\boldsymbol{\theta})$, the density of one observation, with respect to a single unknown parameter of our choice, holding all remaining parameters fixed at arbitrary values; the parameter for which this single maximization step of $f(y;\boldsymbol{\theta})$ is simplest is a good choice in the LR construction, and choosing such a parameter can simplify computation.
After a one-to-one reparameterization, if necessary, such a parameter is often the mean or the median of the model density $f(y;\boldsymbol{\theta})$, which can naturally be identified from a single observation $Y$.
This approach is also supported by Theorem~\ref{theorem-multiple-parameters} where the limiting distribution of the LR statistic is determined by a single-parameter maximization.
We provide some examples in the rest of this section.
\noindent\textbf{Normal Distribution}: For $\text{Norm}(\mu,\sigma)$ with unknown $\mu,\sigma$, consider maximizing the normal density $f(y;\mu,\sigma)$ of a single observation $Y$ with respect to one parameter while the other parameter is fixed.
If choosing $\mu$, then we can estimate $\mu$ simply as $\widehat{\mu}_y=y$.
However, if choosing $\sigma$, we have $\widehat{\sigma}_y^2=(y-\mu)^2$, which is less simple.
More technically, the LR construction for the normal model then eventually involves a complicated estimation of the remaining parameter $\mu$ (from a ``full'' model sample $x_1,\dots,x_n,y$) as the maximizer of $-2\log|y-\mu|-n\log[\sum_{i=1}^{n}(x_i-\mu)^2]$, which can exhibit numerical sensitivity to the value of $y$.
We have seen in Section~\ref{sec:pivotal-quantity} that choosing the mean parameter $\mu$ gives a LR statistic with a nice form and coverage properties, but choosing $\sigma$ results in a much less tractable LR statistic.
\noindent\textbf{Gamma Distribution}: For a gamma distribution with shape $\alpha$ and scale $\beta$, we select the parameter that most easily maximizes a single gamma density $f(y;\alpha,\beta)$ when the other parameter is fixed.
Choosing $\beta$ is simpler than choosing $\alpha$ because the maximizer of the gamma pdf with respect to $\beta$ is $\widehat{\beta}=y/\alpha$, while choosing $\alpha$ does not yield a closed-form maximizer.
Also, choosing $\alpha$ leads to a more complicated LR statistic and a less tractable limit distribution, from Theorem~\ref{theorem-multiple-parameters}.
Alternatively, to more closely align parameter choice in the gamma distribution with parameter identification from one observation, we use another parameterization $(\alpha\beta,\alpha)$ and choose the mean $\alpha\beta$ (i.e., estimated as $y$ analogously to the normal case).
This choice will produce the same LR statistic as choosing $\beta$ in the parameterization $(\beta,\alpha)$.
Hence, choosing a parameter with the simplest stand-alone maximization step in a parameterization and choosing a parameter based on identifiability considerations (e.g., means) in a second parameterization are related concepts.
\noindent\textbf{Generalized Gamma Distribution}:
The (extended) generalized gamma distribution, using the \citet{farewell1977study} parameterization (see also Section~4.13 of \citealt{meeker2021}), has a location parameter $\mu$ and a scale parameter $\sigma$ (both on the log scale) and a shape parameter $\lambda$, with pdf given by
\[
f(y;\mu,\sigma,\lambda)=
\begin{cases}
\dfrac{|\lambda|}{\sigma y}\phi_{lg}[\lambda\omega+\log(\lambda^{-2});\lambda^{-2}]\quad&\text{if}\quad\lambda\neq0\\
\dfrac{1}{\sigma y}\phi_{\text{norm}}(\omega)&\text{if}\quad\lambda=0,
\end{cases}
\]
where $y>0$, $\omega=[\log(y)-\mu]/\sigma$, $-\infty<\mu<\infty$, $-\infty<\lambda<\infty$, $\sigma>0$, $\phi_{\text{norm}}(\cdot)$ is the pdf of $\text{Norm}(0,1)$, and
$
\phi_{lg}(z;\kappa)=\exp\left[\kappa z-\exp(z)\right]/{\Gamma(\kappa)}
$.
When considering the maximization of a single density $f(y;\mu,\sigma,\lambda)$ for one of the three parameters (with others fixed), the ML estimator of $\mu$ has the simplest form as $\widehat{\mu}=\log(y)$.
Hence, we choose $\mu$ to construct the ``full'' model.
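For reference, the following Python sketch (an illustrative implementation of ours, using \texttt{scipy} for $\log\Gamma$) evaluates the pdf above and confirms by a grid search that, as a function of $\mu$, the density of a single observation is maximized at $\widehat{\mu}=\log(y)$:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def gengamma_pdf(y, mu, sigma, lam):
    """Extended generalized gamma pdf in the (mu, sigma, lambda)
    parameterization above."""
    w = (np.log(y) - mu) / sigma
    if lam == 0.0:                      # lognormal special case
        return np.exp(-0.5 * w**2) / (np.sqrt(2.0 * np.pi) * sigma * y)
    k = lam**-2.0
    z = lam * w + np.log(k)
    return np.abs(lam) / (sigma * y) * np.exp(k * z - np.exp(z) - gammaln(k))

y = 2.0
mus = np.linspace(np.log(y) - 1.0, np.log(y) + 1.0, 201)
dens = [gengamma_pdf(y, m, 1.0, 0.5) for m in mus]
print(mus[np.argmax(dens)], np.log(y))  # maximizing mu equals log(y)
\end{verbatim}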
A small simulation study was done to investigate the coverage probability of one-sided prediction bounds.
Fixing the location parameter at $\mu=0$ and the scale parameter at $\sigma=1$ without loss of generality, we consider four different levels for the shape parameter $\lambda=0.5, 1, 1.5, 2$.
The Monte Carlo sample size was set to $N=2000$ and the bootstrap sample size to $B=2000$.
The data sample sizes are $n=30,35,40,45,50,55,60,70,80,90,100$.
Figure~\ref{fig:gengamma-cp} shows the coverage probabilities for the LR prediction method and for the plug-in method (where the unknown parameters are replaced with the ML estimates) versus the sample size.
We can see that the LR method has good coverage probability and consistently outperforms the plug-in method.
Also, when the coverage probability of the LR method deviates from the nominal confidence level, it errs on the conservative side.
The plug-in method, however, is always anti-conservative in this simulation study.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{figures/gengamma-cp}
\caption{Coverage probabilities of 95\% upper bounds using LR prediction method (LR) and naive plug-in method (PL) versus the sample size $n$.}
\label{fig:gengamma-cp}
\end{figure}
\section{Discrete Distributions}\label{sec:discrete-distribution}
Prediction methods for discrete distributions are less well developed when compared to those for continuous distributions.
Many methods (e.g., the calibration-bootstrap method
proposed by \citealt{beran1990}) that generally work in continuous
settings are not applicable for certain discrete data models.
This section presents three prediction applications based on discrete distributions and
shows that the LR prediction method not only works for discrete
distributions but also performs comparably to existing methods that were specifically tailored to these particular discrete prediction problems. Because the LR prediction method is generally applicable, its good performance against specialized alternatives in these discrete cases also suggests that the LR approach may work well in other prediction problems.
\subsection{Binomial Distribution}\label{subsec:binomial-dist}
We consider the prediction problem where there are two independent binomial
samples with the same probability $p$. The initial sample $X$ has a distribution
$\text{Binom}(n, p)$ and the predictand $Y$ has a distribution $\text{Binom}(m, p)$.
Both $n$ and $m$ are known, and note here that the data and predictand have related, though not identical, distributions (unlike predictions in Sections~\ref{sec:pivotal-quantity}-\ref{asymptotic-results} with continuous $Y$).
The goal is to construct a prediction interval for
$Y$ given observed data $X=x$.
Using the fact that the conditional distribution of $X$ given the sum $X+Y$ does
not depend on the parameter $p$, \citet{thatcher1964relationships} proposed a prediction
method based on the cdf of the hypergeometric distribution.
\citet{faulkenberry1973method} proposed a similar method using the conditional
distribution of $Y$ given the sum $X+Y$, which is also free of the parameter $p$.
\citet{nelsonapplied} proposed a different approach using the asymptotic normality of an approximate pivotal
statistic. However, numerical studies in \citet{wang2008coverage} and
\citet{krishnamoorthy2011improved} showed that Nelson's method has poor coverage probability; those studies also proposed alternative prediction methods based on asymptotic normality (e.g., inverting a score-like statistic instead of a Wald-like statistic).
To construct prediction intervals using the LR prediction method,
the reduced model is that $X$ and $Y$ have the same parameter $p$ while the
full model allows $X$ and $Y$ to have different values of $p$ in the construction (\ref{eq:likelihood-ratio}).
The LR statistic is then
\[ \Lambda_{n,m}(x, y)=\frac{\text{dbinom}(x, n, \widehat{p}_{xy})\times\text{dbinom}(y, m, \widehat{p}_{xy})}
{\text{dbinom}(x, n, \widehat{p}_{x})\times\text{dbinom}(y, m, \widehat{p}_{y})}=\frac{(\widehat{p}_{xy})^{x+y}(1-\widehat{p}_{xy})^{n+m-x-y}}{(\widehat{p}_x)^{x}(1-\widehat{p}_x)^{n-x}(\widehat{p}_y)^{y}(1-\widehat{p}_y)^{m-y}},
\]
where $\text{dbinom}$ is the binomial pmf, $\widehat{p}_{x}=x/n$, $\widehat{p}_y=y/m$,
and $\widehat{p}_{xy}=(x+y)/(n+m)$. The asymptotic distribution of
the log-LR statistic is $-2\log\Lambda_{n,m}(X,Y)\xrightarrow{d}\chi^2_1$ as
$n\to\infty$ and $m\to\infty$; this theoretical result is explained further in Section~\ref{sec-theories-discrete} for discrete data.
The prediction region is defined as
\begin{equation}\label{eq:binomial-bounds}
\mathcal{P}_{1-\alpha}(x)=\{y:-2\log\Lambda_{n,m}(x, y)\leq\chi_{1,1-\alpha}^2\},
\end{equation}
which gives an approximate $1-\alpha$ prediction interval procedure that has, asymptotically, equal-tail probabilities.
Due to the discrete nature of data here, we can further refine the LR prediction method by making a continuity
correction at the extreme values $x=0$ or $x=n$ and $y=0$ or $y=m$.
We first define $x^\prime\equiv x+0.5\text{I}_{x=0}-0.5\text{I}_{x=n}$
and $y^\prime\equiv y+0.5\text{I}_{y=0}-0.5\text{I}_{y=m}$ and further define
$\widehat{p}^\prime_{x}\equiv x^\prime/n$, $\widehat{p}^\prime_y\equiv y^\prime/m$,
and $\widehat{p}^\prime_{xy}\equiv(x^\prime+y^\prime)/(n+m)$. The corrected LR statistic is then
\begin{equation*}
\Lambda_{n,m}^{\prime}(x,y)=\frac{(\widehat{p}_{xy}^\prime)^{x^\prime+y^\prime}(1-\widehat{p}_{xy}^\prime)^{n+m-x^\prime-y^\prime}}{(\widehat{p}_x^\prime)^{x^\prime}(1-\widehat{p}_x^\prime)^{n-x^\prime}(\widehat{p}_y^\prime)^{y^\prime}(1-\widehat{p}_y^\prime)^{m-y^\prime}}.
\end{equation*}
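To make the construction concrete, the following Python sketch (an illustrative implementation of ours, with hypothetical function names) computes the corrected statistic and the prediction set in (\ref{eq:binomial-bounds}); because $m$ is finite, the set can be found by a direct scan over $y=0,\dots,m$:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def neg2_log_lr_binom(x, n, y, m, correct=True):
    # -2 log LR for binomial prediction; continuity correction as above
    if correct:
        x = x + 0.5 * (x == 0) - 0.5 * (x == n)
        y = y + 0.5 * (y == 0) - 0.5 * (y == m)
    px, py, pxy = x / n, y / m, (x + y) / (n + m)
    def ll(k, N, p):  # binomial log-likelihood kernel
        return k * np.log(p) + (N - k) * np.log(1.0 - p)
    return -2 * (ll(x, n, pxy) + ll(y, m, pxy) - ll(x, n, px) - ll(y, m, py))

def prediction_set(x, n, m, alpha=0.05):
    ys = np.arange(m + 1)
    stat = neg2_log_lr_binom(x, n, ys, m)
    return ys[stat <= chi2.ppf(1.0 - alpha, df=1)]

print(prediction_set(7, 15, 15))  # c-LR prediction set for x = 7
\end{verbatim}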
\begin{figure}[!ht]
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n15m15.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n20m10.pdf}
\end{subfigure}
\medskip
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n50m50.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n50m20.pdf}
\end{subfigure}
\medskip
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n100m50.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{n100m20.pdf}
\end{subfigure}
\caption{Coverage probabilities of 95\% lower and upper prediction bounds using the corrected LR prediction method (c-LR),
joint sampling prediction method (JS), and LR prediction method (LR) as a function of $p$.}
\label{fig:binomial}
\end{figure}
A numerical study was done to investigate the coverage probability of the LR prediction methods
and we also used the joint sampling prediction method as a benchmark for comparison because of its good
coverage probability (cf.~\citealt{krishnamoorthy2011improved}). The results in Figure~\ref{fig:binomial} show that
the original LR prediction method can have poor coverage for small sample sizes (e.g., $n=15$) when $p$ is near 0
or 1. However, with the continuity correction, the coverage probability of the corrected
LR prediction method is comparable to that of the joint sampling prediction method.
Unlike the joint sampling prediction method, though, the LR prediction method is a general approach: it applies beyond binomial prediction problems and was not specifically designed for this purpose.
The numerical results here aim to provide evidence that the LR prediction method can be a generally effective procedure for prediction.
\subsection{Poisson Distribution}\label{subsec:poisson-dist}
Suppose $X\sim\text{Pois}(n\lambda)$ and $Y\sim\text{Pois}(m\lambda)$, where
$n$ and $m$ are known positive integers and $\lambda>0$ is unknown. The goal is to construct
prediction intervals for $Y$ based on data $X=x$. Similar to the binomial example, one can construct
prediction intervals using the fact that the conditional distribution of $X$ or $Y$
given $X+Y$ is binomial while \citet{nelsonapplied} and \citet{krishnamoorthy2011improved}
proposed alternative methods using a Wald-like approximate pivotal quantity.
To construct prediction intervals using the LR prediction method, the reduced model for the LR statistic (\ref{eq:likelihood-ratio}) is that
$X$ and $Y$ have the same $\lambda$ parameter while for the full model,
$X$ and $Y$ may not have the same $\lambda$ parameter. The LR statistic is given by
\[ \Lambda_{n,m}(x, y)=\frac{\text{dpois}(x, n\widehat{\lambda}_{xy})\times\text{dpois}(y, m\widehat{\lambda}_{xy})}
{\text{dpois}(x, n\widehat{\lambda}_x)\times\text{dpois}(y, m\widehat{\lambda}_y)}=\frac{\exp\left[-(n+m)\widehat{\lambda}_{xy}\right](n\widehat{\lambda}_{xy})^{x}(m\widehat{\lambda}_{xy})^{y}}{\exp\left(-n\widehat{\lambda}_x-m\widehat{\lambda}_y\right)(n\widehat{\lambda}_x)^{x}(m\widehat{\lambda}_y)^{y}}, \]
where $\widehat{\lambda}_{xy}=(x+y)/(n+m)$, $\widehat{\lambda}_x=x/n$, $\widehat{\lambda}_y=y/m$, and $\text{dpois}$ is the Poisson pmf.
The prediction interval can be obtained using
the same procedure in (\ref{eq:binomial-bounds}); see Section~\ref{sec-theories-discrete} for justification.
We can also refine the LR prediction method with a continuity
correction at the extremes $x=0$ or $y=0$ by letting $x^\prime\equiv x+0.5\text{I}_{x=0}$ and $y^\prime\equiv y+0.5\text{I}_{y=0}$.
Then define $\widehat{\lambda}_{xy}^\prime\equiv (x^\prime+y^\prime)/(n+m)$,
$\widehat{\lambda}^\prime_x\equiv x^\prime/n$, and $\widehat{\lambda}_y^\prime\equiv y^\prime/m$ so that the corrected LR statistic is
\begin{equation*}
\Lambda_{n,m}^\prime(x,y)=\frac{\exp\left[-(n+m)\widehat{\lambda}_{xy}^\prime\right](n\widehat{\lambda}_{xy}^\prime)^{x^\prime}(m\widehat{\lambda}_{xy}^\prime)^{y^\prime}}{\exp\left(-n\widehat{\lambda}_x^\prime-m\widehat{\lambda}_y^\prime\right)(n\widehat{\lambda}_x^\prime)^{x^\prime}(m\widehat{\lambda}_y^\prime)^{y^\prime}}.
\end{equation*}
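An analogous Python sketch (again illustrative, with our own function names) computes the corrected Poisson statistic; the prediction set is then obtained exactly as in (\ref{eq:binomial-bounds}), scanning $y=0,1,2,\dots$ until the statistic exceeds the $\chi_{1,1-\alpha}^2$ cutoff:
\begin{verbatim}
import numpy as np

def neg2_log_lr_pois(x, n, y, m, correct=True):
    # -2 log LR for Poisson prediction; continuity correction as above
    if correct:
        x = x + 0.5 * (x == 0)
        y = y + 0.5 * (y == 0)
    lx, ly, lxy = x / n, y / m, (x + y) / (n + m)
    def ll(k, w, lam):  # Poisson log-likelihood kernel
        return k * np.log(w * lam) - w * lam
    return -2 * (ll(x, n, lxy) + ll(y, m, lxy) - ll(x, n, lx) - ll(y, m, ly))
\end{verbatim}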
A numerical study was done to investigate the coverage probability of
the proposed methods. Similar to the binomial example, the joint sampling
method from \citet{krishnamoorthy2011improved} was used for comparison
because of its good coverage properties. Figure~\ref{fig:poisson} shows that
the continuity correction improves the poor coverage of the LR prediction method when $\lambda$ is small. The coverage
probability of the corrected LR prediction method is comparable to
that of the joint sampling method. In the bottom-right subplot of Figure~\ref{fig:poisson},
the corrected method has better performance than the joint sampling method.
Again, unlike the joint sampling prediction method, the LR prediction method is general and not specifically designed for Poisson predictions.
\begin{figure}[ht!]
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn1m1.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn4m4.pdf}
\end{subfigure}
\medskip
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn5m2.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn2m5.pdf}
\end{subfigure}
\medskip
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn10m1.pdf}
\end{subfigure}\hspace*{\fill}
\begin{subfigure}{0.55\textwidth}
\includegraphics[width=\linewidth]{pn1m10.pdf}
\end{subfigure}
\caption{Coverage probabilities of 95\% Poisson lower and upper prediction bounds using the corrected LR prediction method (c-LR),
joint sampling prediction method (JS), and LR prediction method (LR) as a function of $\lambda$.}
\label{fig:poisson}
\end{figure}
\subsection{Predicting the Number of Future Events}
\label{subsec:within-sample-prediction}
Suppose $n$ units start service at time $t=0$ and that the lifetime of each unit has a continuous parametric distribution with cdf $F(t;\boldsymbol{\theta})$ and density $f(t;\boldsymbol{\theta})$.
At a data freeze date, the unfailed units have accrued $t_c$ time units of service (e.g., hours or months in service) while $r_n$ failures have occurred and the failure
times (all less than $t_c$) are known. A prediction interval
for the number of failures that will occur in the interval $(t_c, t_w]$, $t_w>t_c$, is required.
This problem is called within-sample prediction because the predictand
and the observed Type-I censored data come from the same sample.
Within-sample prediction and related problems have been studied in \citet{elawqm1999} using a calibration method.
Similar problems have been studied in \citet{nelson2000} and \citet{nordman2002weibull} based on an LR statistic without calibration.
\citet{tian2020pred} showed that the simple plug-in method (where ML estimates replace the unknown parameters in the distribution of the predictand and the $\alpha/2$ and $1-\alpha/2$ quantiles of the resulting distribution define an approximate $1-\alpha$ prediction interval procedure) is not asymptotically correct and proposed three alternative methods, based on parametric bootstrap samples, that are asymptotically correct.
In this paper, we propose another solution based on an LR statistic,
that does not require bootstrap samples.
Suppose that a random sample $T_1, \dots, T_n\sim F(t;\boldsymbol{\theta})$ is observed under
Type-I censoring at time $t_c$, where $r_n=\sum_{i=1}^{n}\text{I}(T_i\leq t_c)$ is the number of
observed failures.
The predictand is the number $Y=\sum_{i=1}^{n}\text{I}(t_c<
T_i\leq t_w)$ of events occurring in the future interval $(t_c,t_w]$.
For each of the $n-r_n$ units surviving at $t_c$,
the conditional probability of failing in $(t_c, t_w]$, given survival to $t_c$, is
\begin{equation}\label{eq:conditional-prob}
p\equiv \Pr\left(t_c<T_1\leq t_w|T_1>t_c\right).
\end{equation}
\subsubsection{Implementing the LR Prediction Method}
To implement the LR prediction method, we specify a reduced model versus~full model comparison in order to construct an LR statistic analogous to (\ref{eq:likelihood-ratio}).
Such models will be formulated in terms of the value (\ref{eq:conditional-prob}) of the conditional probability $p$ for the interval $(t_c,t_w]$, recalling that the predictand $Y$ is the number of failures (out of $n-r_n$ possible) that will occur in this interval.
For the reduced model, we assume that the time-to-failure process is
governed by $F(t;\boldsymbol{\theta})$ in the interval $(0, t_w]$ and that the conditional probability (\ref{eq:conditional-prob}) of a failure in $(t_c,t_w]$ is
\[
p=\frac{F(t_w;\boldsymbol{\theta})-F(t_c;\boldsymbol{\theta})}{1-F(t_c;\boldsymbol{\theta})}.
\]
The likelihood function for the reduced model is
\begin{equation}\label{eq:reduced-lik}
\mathcal{L}_1(\boldsymbol{\theta};\boldsymbol{t}_n, y)=\binom{n-r_n}{y}\prod_{i=1}^{r_n}f(t_{(i)};\boldsymbol{\theta})
\left[F(t_w;\boldsymbol{\theta})-F(t_c;\boldsymbol{\theta})\right]^{y}\left[1-F(t_w;\boldsymbol{\theta})\right]^{n-y-r_n}.
\end{equation}
For the full model, $F(t;\boldsymbol{\theta})$ will still be the time-to-failure distribution in the interval $(0, t_c]$ but not $(t_c, t_w]$, so that the value (\ref{eq:conditional-prob}) of the conditional probability $p\in(0,1]$ becomes one additional parameter.
The likelihood function for the full model is
\begin{equation}\label{eq:full-lik}
\mathcal{L}_2(\boldsymbol{\theta}, p;\boldsymbol{t}_n, y)=\binom{n-r_n}{y}\prod_{i=1}^{r_n}f(t_{(i)};\boldsymbol{\theta})
\left[1-F(t_c;\boldsymbol{\theta})\right]^{n-r_n}p^y(1-p)^{n-r_n-y}.
\end{equation}
By maximizing the likelihood functions in (\ref{eq:reduced-lik}) and
(\ref{eq:full-lik}), the LR statistic is
\[ \Lambda_n(\boldsymbol{t}_n, y)=\frac{\sup_{\boldsymbol{\theta}}\mathcal{L}_1(\boldsymbol{\theta};\boldsymbol{t}_n, y)}{\sup_{\boldsymbol{\theta}, p}\mathcal{L}_2(\boldsymbol{\theta},p;\boldsymbol{t}_n, y)}. \]
The asymptotic (as $n\to\infty$) distribution of $-2\log\Lambda_n(\boldsymbol{T}_n, Y)$ is $\chi_1^{2}$,
because the full model has one more parameter than
the reduced model and standard regularity conditions hold (see also Section~\ref{sec-theories-discrete}). An approximate $1-\alpha$ prediction region is defined as
\begin{equation}\label{eq:within-sample-prediction-set}
\{y:-2\log\Lambda_n(\boldsymbol{t}_n, y)\leq\chi_{1,1-\alpha}^2\},
\end{equation}
where $\chi_{1,1-\alpha}^2$ is the $1-\alpha$ quantile of the $\chi_1^2$ distribution.
Because $\Lambda_n(\boldsymbol{t}_n, y)$ is a unimodal function of $y$,
the prediction region in (\ref{eq:within-sample-prediction-set}) provides the desired approximate prediction interval.
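The maximizations defining $\Lambda_n(\boldsymbol{t}_n,y)$ generally have no closed form. The following Python sketch (an illustrative Weibull implementation of ours using \texttt{scipy}; the free probability $p$ is profiled out at $\widehat{p}=y/(n-r_n)$) shows one way to compute the statistic for the prediction region in (\ref{eq:within-sample-prediction-set}):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min, chi2

def neg_ll_reduced(par, t_fail, n, y, tc, tw):
    # reduced-model likelihood L1 (binomial coefficient omitted; it cancels)
    beta, eta = np.exp(par)        # log-scale parameters keep both positive
    r = len(t_fail)
    Fc = weibull_min.cdf(tc, beta, scale=eta)
    Fw = weibull_min.cdf(tw, beta, scale=eta)
    return -(weibull_min.logpdf(t_fail, beta, scale=eta).sum()
             + y * np.log(Fw - Fc) + (n - r - y) * np.log1p(-Fw))

def neg_ll_full_theta(par, t_fail, n, tc):
    # theta-part of the full-model likelihood L2; the profiled p-hat term
    # is added back in neg2_log_lr below
    beta, eta = np.exp(par)
    r = len(t_fail)
    Fc = weibull_min.cdf(tc, beta, scale=eta)
    return -(weibull_min.logpdf(t_fail, beta, scale=eta).sum()
             + (n - r) * np.log1p(-Fc))

def neg2_log_lr(t_fail, n, y, tc, tw, start=(0.0, 0.0)):
    r = len(t_fail)
    f1 = minimize(neg_ll_reduced, start, args=(t_fail, n, y, tc, tw),
                  method="Nelder-Mead").fun
    f2 = minimize(neg_ll_full_theta, start, args=(t_fail, n, tc),
                  method="Nelder-Mead").fun
    p_hat = y / (n - r)
    lbp = 0.0 if y in (0, n - r) else (y * np.log(p_hat)
                                       + (n - r - y) * np.log1p(-p_hat))
    return 2.0 * (f1 - f2 + lbp)

cutoff = chi2.ppf(0.95, df=1)
# prediction interval: keep y in {0, ..., n - r_n} with
# neg2_log_lr(t_fail, n, y, tc, tw) <= cutoff
\end{verbatim}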
\subsubsection{A Simulation Study}
A simulation study was done to examine the coverage probability of the
LR prediction method for the within-sample prediction problem.
We simulated Type-I censored data with censoring time $t_c$ using the Weibull distribution
\[
F(t;\beta,\eta)=1-\exp\left[-\left(\frac{t}{\eta}\right)^\beta\right],\quad t>0.
\]
Then we constructed
prediction intervals for the number of failures in the future time interval $(t_c, t_w]$ using
several methods: plug-in, LR, direct-bootstrap, GPQ-bootstrap, and calibration-bootstrap methods.
As mentioned earlier, the plug-in method, which replaces the unknown parameter $\boldsymbol{\theta}=(\beta,\eta)$ with a consistent
estimate $\widehat{\boldsymbol{\theta}}_n$, fails to provide asymptotically correct prediction intervals (cf.~\citealt{tian2020pred}).
The last three methods are from \citet{tian2020pred} and have been established to be asymptotically correct.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta2delta0.1}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), LR, and plug-in (PL) methods when $d=0.1$ and $\beta=2$.}
\label{fig:within-sample-pred}
\end{figure}
The factors for this simulation study include
\begin{enumerate}
\item The probability that a unit fails before the censoring time $t_c$:
$p_{f1}=F(t_c;\beta,\eta)$.
\item The expected number of failures at the censoring time $t_c$: $\text{E}(r_n)=np_{f1}$.
\item The probability that a unit fails in the future time interval $(t_c, t_w]$: $d\equiv p_{f2}-p_{f1}$,
where $p_{f2}=F(t_w;\beta,\eta)$.
\item The Weibull shape parameter: $\beta$.
\end{enumerate}
We set the Weibull scale parameter as $\eta=1$ and, for other factors, we use the following
factor levels: (i) $p_{f1}=0.05, 0.1, 0.2$; (ii) $\text{E}(r_n)=5, 15, 25, 35, 45$; (iii) $d=0.1, 0.2$; (iv) $\beta=0.8, 1, 2, 4$. For the methods that involve bootstrap simulation, the bootstrap sample size is $B=5000$. The unconditional coverage probability is computed by averaging $N=5000$ conditional coverage probabilities
(i.e., the Monte Carlo sample size is $N=5000$).
Figure~\ref{fig:within-sample-pred} compares the coverage probabilities for the plug-in, direct-bootstrap, GPQ-bootstrap, calibration-bootstrap, and LR prediction methods when $d=0.1$ and $\beta=2$.
We can see that the LR, direct-bootstrap, and GPQ-bootstrap prediction methods have similar coverage probabilities for within-sample prediction, where the latter two methods rely on the bootstrap and the LR interval does not.
That is, the LR prediction method based on chi-square calibration has the advantage of being computationally easier than the direct-bootstrap or GPQ-bootstrap methods for this prediction problem, while providing comparable performance.
This pattern is consistent in the simulation results of other factor combinations (given in the online supplementary material).
While we have considered the LR prediction method for within-sample prediction for illustration and comparison, the LR prediction method is again general and not specific to within-sample prediction.
\subsection{Validating the Asymptotic Distribution}
\label{sec-theories-discrete}
In Sections~\ref{subsec:binomial-dist}--\ref{subsec:within-sample-prediction}, we construct the prediction intervals for certain discrete predictands $Y$ using the fact that the log-LR statistic has a chi-square limit with 1 degree of freedom in these prediction problems.
This section provides justification for these asymptotic results.
The prediction problems in Sections~\ref{subsec:binomial-dist} and \ref{subsec:poisson-dist} are similar in that the predictand $Y$ (as a $\text{Binom}(m,p)$ or $\text{Pois}(m\lambda)$ random variable) can be seen to have the same distribution as a sum of iid variables in both cases (i.e., $m$ iid $\text{Bern}(p)$ or $\text{Pois}(\lambda)$ random variables).
As a consequence, the log-LR statistic from Section~\ref{subsec:binomial-dist}, constructed on the basis of using $X\sim\mbox{Binom}(n,p)$ to predict
$Y\sim\mbox{Binom}(m,p)$, is the same as the log-LR statistic given in Theorem~\ref{theorem-discrete} based on $X_1,\ldots,X_n$
and $Y_1,\ldots,Y_m$ being iid $\mbox{Binom}(1,p)$.
A similar statement holds for the Poisson prediction problem from Section~\ref{subsec:poisson-dist}.
Hence, the chi-square limit for the log-LR statistic in Sections~\ref{subsec:binomial-dist} and \ref{subsec:poisson-dist} follows from Theorem~\ref{theorem-discrete} below.
We provide Theorem~\ref{theorem-discrete} as a general result with standard regularity conditions given in the supplementary material.
For the prediction problem in Section~\ref{subsec:within-sample-prediction}, the proof is similar to that of Theorem~\ref{theorem-discrete}.
See Section A.3 of the online supplementary material for details.
\begin{theorem}\label{theorem-discrete}
Suppose $X_1,\dots,X_n$ are iid random variables with common density $f(\cdot;\theta_1)$ and, independently, $Y_1,\dots,Y_m$ are iid random variables with a common density $f(\cdot;\theta_2)$, where $\theta_1,\theta_2\in \Theta$ denote real-valued parameters.
Suppose further that mild regularity conditions hold (as described in Section~A.2 of the supplement).
Then, if $\theta_1=\theta_2$, the log-LR statistic for testing $\theta_1=\theta_2$ has a limiting chi-square distribution with 1 degree of freedom as $n,m\to\infty$; that is,
\[
-2\log\left\{\frac{\sup_{\theta}\left[\prod_{i=1}^{n}f(x_i;\theta)\prod_{j=1}^{m}f(y_j;\theta)\right]}{\left[\sup_{\theta_1}\prod_{i=1}^{n}f(x_i;\theta_1)\right]\left[\sup_{\theta_2}\prod_{j=1}^{m}f(y_j;\theta_2)\right]}\right\}\stackrel{d}{\rightarrow} \chi_1^2.
\]
\end{theorem}
\section{Comparison with the Predictive Likelihood Methods}\label{sec-relationship-pred-dist}
The predictive likelihood method, introduced in Section~\ref{subsec:literature}, is an important prediction method. While the names sound similar, the LR prediction method is different from the predictive likelihood method. The LR prediction method
may be classified as a type of test-based method (cf.~Section~\ref{subsec:literature}) for prediction intervals,
which also shares connections to approximate pivotal quantities (though, technically, the LR statistic may not always be pivotal, even asymptotically, as shown in Section~\ref{asymptotic-results}; its limiting distribution may then be estimated by the bootstrap).
This section describes two specific types of predictive likelihood methods.
However, these predictive likelihood methods can fail to provide desirable prediction intervals in some prediction problems, where the LR prediction method emerges as having better properties.
\subsection{Profile Predictive Likelihood Method}
The profile predictive likelihood $\widetilde{\mathcal{L}}_p(\boldsymbol{x}_n,y)$ function for $y$ given data values $\boldsymbol{X}_n=\boldsymbol{x}_n$ is obtained by maximizing out the parameters in the joint likelihood function,
\[
\widetilde{\mathcal{L}}_p(\boldsymbol{x}_n,y)\equiv\sup_{\boldsymbol{\theta}}f(y;\boldsymbol{\theta})\prod_{i=1}^{n}f(x_i;\boldsymbol{\theta}).
\]
Then, the predictive likelihood is normalized to give a predictive density function for $Y$,
\[
f_p(y;\boldsymbol{x}_n)=\frac{\widetilde{\mathcal{L}}_p(\boldsymbol{x}_n,y)}{\int_{-\infty}^{\infty}\widetilde{\mathcal{L}}_p(\boldsymbol{x}_n,y)dy},
\]
which is viewed as a univariate distribution, depending on $\boldsymbol{X}_n=\boldsymbol{x}_n$, for calibrating prediction intervals for $Y$.
Note that $\widetilde{\mathcal{L}}_p(\boldsymbol{x}_n,y)$ is the numerator of the LR statistic in (\ref{eq:likelihood-ratio}), so that obtaining the profile
predictive likelihood may be viewed as a step in constructing LR-based prediction intervals.
However, in some prediction problems, discussed next, the profile predictive likelihood does not lead to an exact prediction interval for the predictand $Y$, whereas the LR prediction method does.
To illustrate this, consider a sample $\boldsymbol{X}_n$ from a normal distribution, and consider constructing prediction intervals for a future random variable $Y$ from the same distribution. From \citet{lejeune1982simple}, the profile predictive likelihood for $Y$ given data $\boldsymbol{X}_n=\boldsymbol{x}_n$ (i.e., the distribution to be used for predicting $Y$, as implied by the profile predictive likelihood density) is given by the distribution of
\begin{equation*}
\bar{x}_n+s\sqrt{\frac{n^2-1}{n^2}}T,
\end{equation*}
where $\bar{x}_n$ is the sample mean, $s^2$ is the sample variance, and $T$ is an independent random variable having a $t$-distribution with $n$ degrees of freedom. However, in order
for the profile predictive likelihood method to produce an exact prediction interval for $Y$, the degrees of freedom for the $t$-distribution of $T$ above should be $n-1$ instead of $n$ (see (\ref{eq:normal-invert-t-test})).
Consequently, the profile predictive likelihood method is not exact in this example.
The LR prediction method, however, has exact coverage for this prediction problem, as shown in Section~\ref{sec:pivotal-quantity}.
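A small Monte Carlo sketch (ours, in Python) makes the discrepancy visible: with $n=5$, the interval implied by the profile predictive likelihood undercovers, while the $t_{n-1}$-calibrated interval attains the nominal level up to simulation error.
\begin{verbatim}
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)
n, alpha, reps = 5, 0.05, 200_000
x = rng.standard_normal((reps, n))
y = rng.standard_normal(reps)
xbar, s = x.mean(axis=1), x.std(axis=1, ddof=1)
# half-widths: profile predictive (t_n) versus exact (t_{n-1}) calibration
half_pp = t_dist.ppf(1 - alpha / 2, df=n) * s * np.sqrt((n**2 - 1) / n**2)
half_ex = t_dist.ppf(1 - alpha / 2, df=n - 1) * s * np.sqrt(1 + 1 / n)
print(np.mean(np.abs(y - xbar) <= half_pp))  # below the nominal 0.95
print(np.mean(np.abs(y - xbar) <= half_ex))  # approximately 0.95
\end{verbatim}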
\subsection{Approximate Predictive Likelihood Method}
\citet{davison1986} proposed an approximate predictive likelihood method that involves maximizing likelihood functions.
Let $\widehat{\boldsymbol{\theta}}$ be the maximizer of $\mathcal{L}(\boldsymbol{\theta};\boldsymbol{x}_n)$, which is
the likelihood function for data $\boldsymbol{x}_n$ alone and $\widehat{\boldsymbol{\theta}}_y$ be the maximizer of the joint likelihood
function for $\boldsymbol{X}_n$ and $Y$, say $\mathcal{L}(\boldsymbol{\theta};\boldsymbol{x}_n,y)$.
Then the approximate predictive likelihood is defined as
\begin{equation*}
\widetilde{\mathcal{L}}(\boldsymbol{x}_n,y)=\frac{\mathcal{L}(\widehat{\boldsymbol{\theta}}_y;\boldsymbol{x}_n,y)|J_1(\widehat{\boldsymbol{\theta}})|^{1/2}}{\mathcal{L}(\widehat{\boldsymbol{\theta}};\boldsymbol{x}_n)|J_2(\widehat{\boldsymbol{\theta}}_y)|^{1/2}},
\end{equation*}
where $J_1(\boldsymbol{\theta})$ is the minus Hessian of $\log\mathcal{L}(\boldsymbol{\theta};\boldsymbol{x}_n)$, $J_2(\boldsymbol{\theta})$ is the minus
Hessian of $\log\mathcal{L}(\boldsymbol{\theta};\boldsymbol{x}_n,y)$,
and $|\cdot|$ is the determinant.
Suppose that $\boldsymbol{X}_n$ and $Y$ are mutually independent with a
common exponential distribution. From \citet{davison1986}, the approximate
predictive likelihood for $Y$ is
$$
\widetilde{\mathcal{L}}(\boldsymbol{x}_n,y)\propto\left(\sum_{i=1}^{n}x_i\right)^{n-1}\left(\sum_{i=1}^{n}x_i+y\right)^{-n}.
$$
Then, prediction intervals for $Y$ are computed from the density on $y\in(0,\infty)$ obtained by normalizing $\widetilde{\mathcal{L}}(\boldsymbol{x}_n,y)$ with respect to $y$.
Moreover, as noted by \citet{hall1999}, the approximate predictive likelihood method is not exact here and has a coverage probability error of order $O(1/n)$.
For the LR prediction method, however, the LR statistic (\ref{eq:likelihood-ratio}) is
$$
\Lambda_n(\boldsymbol{x}_n,y)=(n+1)^{n+1}\,\frac{\bar{x}_n^{\,n}\,y}{(n\bar{x}_n+y)^{n+1}},
$$
which, in this case, is a function of the pivotal quantity $Y/\bar{X}_n$.
This implies that the LR prediction method, based on bootstrap calibration, for example, has exact coverage probability, according to Theorem~\ref{theorem-location-scale-family-11} (see also Section~\ref{sec:pivotal-quantity}).
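Because $\Lambda_n$ depends on the data only through $W=Y/\bar{X}_n$, its parameter-free distribution can also be checked numerically; the following sketch (ours) simulates the log-LR statistic under two different exponential scales and recovers the same quantile:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n, reps = 8, 200_000

def neg2_log_lr(w, n):
    # -2 log Lambda_n with Lambda_n = (n+1)^(n+1) w / (n+w)^(n+1), w = y/xbar
    return -2 * ((n + 1) * np.log(n + 1) + np.log(w)
                 - (n + 1) * np.log(n + w))

for scale in (1.0, 7.0):
    x = rng.exponential(scale, size=(reps, n))
    y = rng.exponential(scale, size=reps)
    w = y / x.mean(axis=1)
    print(scale, np.quantile(neg2_log_lr(w, n), 0.95))  # same for both scales
\end{verbatim}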
\section{Concluding Remarks}\label{sec:conclusion}
In this paper, we propose a general prediction procedure based on inverting an LR test.
The construction of the LR test requires enlarging the parameter space to create a quasi ``full model.''
To compute prediction intervals, we need to find the distribution of the LR statistic.
Apart from finding the distribution of the LR statistic analytically when possible, we may use the chi-square distribution to calibrate its distribution when Wilks' theorem is applicable; we have demonstrated this for predictions involving discrete random variables.
Furthermore, we can use a parametric bootstrap as a general approach to approximate the distribution of the LR statistic, particularly in those cases where Wilks' theorem does not apply.
The proposed method will generally discover a pivotal quantity if one exists.
In such cases, the procedure will have exact coverage probability.
When a pivotal quantity is not available, we have shown that the LR method is asymptotically correct.
When the LR statistic is unimodal (as a function of $y$), then the proposed prediction region will correspond to an interval.
Relatedly, when the LR statistic is again unimodal, we provide an approach in Section~\ref{subsec:one-sided} to compute one-sided bounds in a computationally efficient manner (which is related to, but simpler than, working directly from the two-sided intervals in Section~\ref{subsec:lrt} in determining the endpoint for a one-sided bound).
While not encountered in any work for this paper, when the LR statistic is not unimodal, the prediction regions in Section~\ref{subsec:lrt} are still valid, but these regions may be a union of several disconnected intervals and the algorithm of Section~\ref{subsec:one-sided} for finding one-sided bounds will not be applicable; one-sided bounds then need to be determined from the prediction regions of Section~\ref{subsec:lrt}.
We see several potential future research topics and list three below: (a) we only consider scalar random variables for prediction in this paper, but the proposed LR prediction framework could be extended to construct two-dimensional (or higher-dimensional) prediction regions using the same method as in (\ref{eq:bootstrap-find-lik-ratio}).
The main change is that $\boldsymbol{Y}$ in the joint likelihood function $\mathcal{L}(\boldsymbol{X}_n, \boldsymbol{Y})$ becomes a random vector.
(b) The proposed prediction framework could be applied to problems involving complicated data with regressors.
Examples include data with different types of censoring, mixed linear models, and generalized linear model structures.
(c) The LR prediction method could also be extended to dependent data.
We discuss an example involving dependence in Section~\ref{subsec:within-sample-prediction}.
But in future research, we might apply the LR prediction method to problems with non-trivial dependence structure such as time series or Markov Random Fields.
\section*{Acknowledgments}
We want to thank the anonymous reviewers and the editor, Galit Shmueli, who provided comments and suggestions that improved our paper.
Research was partially supported by NSF DMS-2015390.
\bibliographystyle{apalike}
\section{Proof of Theoretical Results}
\label{sec:proof-of-theorems}
\subsection{Proof of Theorem~\ref{theorem-location-scale-family-11}}
For completeness, we first restate Theorem~\ref{theorem-location-scale-family-11} and then provide its proof.
\begin{theorem}\label{theorem-location-scale-family-11}
(i) Suppose the LR-statistic (3) is a pivotal quantity.
Then, the corresponding $1-\alpha$ prediction region (4) for $Y$ based on the parametric bootstrap will have exact coverage.
That is,
\[
\Pr\left[Y\in\mathcal{P}_{1-\alpha}(\boldsymbol{X}_n)\right]=1-\alpha.
\]
(ii) Suppose also that both the data $X_1,\dots,X_n$ and $Y$ are from a location-scale distribution with density $f(x;\mu,\sigma)=\sigma^{-1}\phi\left[(x-\mu)/\sigma\right]$ and parameters $\boldsymbol{\theta}=(\mu,\sigma)\in\mathbb{R}\times(0,\infty)$.
In the LR construction (3), suppose the full model involves parameters $\boldsymbol{\theta}=(\mu,\sigma)$ and $\boldsymbol{\theta}_{y}=(\mu_y,\sigma)$ (i.e., $X_1,\dots,X_n\sim f(\cdot;\mu,\sigma)$ and $Y\sim f(\cdot;\mu_y,\sigma)$).
Then the LR statistic $\Lambda_{n}(\boldsymbol{X}_n,Y)$ (or $-2\log\Lambda_{n}(\boldsymbol{X}_n,Y)$) is a pivotal quantity and the result of Theorem~\ref{theorem-location-scale-family-11}(i) holds.
\end{theorem}
\begin{proof}
\noindent To establish~Theorem~\ref{theorem-location-scale-family-11}(i), note that because the LR statistic $\Lambda_n(\boldsymbol{X}_n,Y)$ from (3) is a pivotal quantity, its bootstrap counterpart $\Lambda_n(\boldsymbol{X}^\ast_n, Y^\ast)$ has the same (pivotal) distribution, or that
\[
\Lambda_n(\boldsymbol{X}_n,Y)\stackrel{d}{=}\Lambda_n(\boldsymbol{X}^\ast_n, Y^\ast)
\]
for any observed sample $\boldsymbol{X}_n=\boldsymbol{x}_n$.
Consequently, the $1-\alpha$ quantile $\lambda_{1-\alpha}^\ast$ of $-2\log\Lambda_n(\boldsymbol{X}_n^\ast,Y^\ast)$ equals the corresponding quantile $\lambda_{1-\alpha}$ of $-2\log\Lambda_n(\boldsymbol{X}_n,Y)$;
that is, $\lambda_{1-\alpha}=\lambda_{1-\alpha}^\ast$.
The coverage probability of the bootstrap prediction region in (4) then follows as
\[
\begin{split}
\Pr\left[Y\in\mathcal{P}_{1-\alpha}(\boldsymbol{X}_n)\right]&=\Pr\left[-2\log\Lambda_n(\boldsymbol{X}_n, Y)\leq\lambda_{1-\alpha}^\ast\right]\\
&=\Pr\left[-2\log\Lambda_n(\boldsymbol{X}_n, Y)\leq\lambda_{1-\alpha}\right]\\&=1-\alpha.
\end{split}
\]
\noindent We next establish~Theorem~\ref{theorem-location-scale-family-11}(ii). The full model is that $X_1,\dots,X_n$ have density $\sigma_1^{-1}\phi[(x-\mu_1)/\sigma_1]$ and $Y$ has density $\sigma_1^{-1}\phi[(y-\mu_2)/\sigma_1]$, and the reduced model is that
$X_1,\dots,X_n$ and $Y$ have common density $\sigma^{-1}\phi[(x-\mu)/\sigma]$, where $\phi(\cdot)$ is a known pdf.
Under this formulation, the LR-statistic from (3) at values $(\boldsymbol{X}_n,Y)=(\boldsymbol{x}_n,y)$ is given by
\begin{equation*} \Lambda_n(\boldsymbol{x}_n,y)=\frac{\widehat{\sigma}^{-n-1}\prod_{i=1}^{n}\phi(\frac{x_i-\widehat{\mu}}{\widehat{\sigma}})\phi(\frac{y-\widehat{\mu}}{\widehat{\sigma}})}{\widehat{\sigma}_1^{-n-1}\prod_{i=1}^{n}\phi(\frac{x_i-\widehat{\mu}_1}{\widehat{\sigma}_1})\phi(\frac{y-\widehat{\mu}_2}{\widehat{\sigma}_1})},
\end{equation*}
where $(\widehat{\mu},\widehat{\sigma})$ is the ML estimator of $(\mu,\sigma)$ under the reduced model and $(\widehat{\mu}_1,\widehat{\mu}_2,\widehat\sigma_1)$ is the ML estimator
of $(\mu_1,\mu_2,\sigma_1)$ under the full model.
Denoting the mode of $\phi(\cdot)$ by $y_0$, the full-model ML estimate of $\mu_2$ satisfies $\widehat\mu_2=y-\widehat\sigma_1y_0$, so that the value of $\phi[(y-\widehat{\mu}_2)/\widehat{\sigma}_1]$ is the constant
$C\equiv\phi(y_0)>0$. Then, the likelihood ratio can be re-written as
\begin{equation}\label{eq:exact-ls-dist}
\Lambda_n(\boldsymbol{x}_n,y)=\frac{1}{C}\left(\frac{\widehat{\sigma}_1/\sigma}{\widehat{\sigma}/\sigma}\right)^{n+1}\frac{\prod_{i=1}^{n}\phi\left(\frac{x_i-\mu}{\sigma}\frac{\sigma}{\widehat{\sigma}}+\frac{\mu-\widehat{\mu}}{\sigma}\frac{\sigma}{\widehat{\sigma}}\right)\phi\left(\frac{y-\mu}{\sigma}\frac{\sigma}{\widehat{\sigma}}+\frac{\mu-\widehat{\mu}}{\sigma}\frac{\sigma}{\widehat{\sigma}}\right)}{\prod_{i=1}^{n}\phi\left(\frac{x_i-\mu}{\sigma}\frac{\sigma}{\widehat{\sigma}_1}+\frac{\mu-\widehat{\mu}_1}{\sigma}\frac{\sigma}{\widehat{\sigma}_1}\right)}.
\end{equation}
In (\ref{eq:exact-ls-dist}), the distributions of the quantities $\widehat{\sigma}/\sigma$, $(\mu-\widehat{\mu})/\sigma$, $(Y-\mu)/\sigma$, and $(X_i-\mu)/\sigma$, $i=1,\ldots,n$, do not depend
on any parameters (cf.~\citet[Appendix E]{lawless_statistical_2002}). Hence,
if $\widehat{\sigma}_1/\sigma$ and $(\mu-\widehat{\mu}_1)/\sigma$ are likewise shown to have distributions not depending on any parameters (i.e., are
pivots), then the LR-statistic will be a pivot. The parametric
bootstrap will also then yield the exact distribution of $\Lambda_n(\boldsymbol{x}_n,y)$, so that the bootstrap prediction method is exact.
To prove that $\widehat{\sigma}_1/\sigma$ and $(\mu-\widehat{\mu}_1)/\sigma$ are indeed pivots,
we observe from (\ref{eq:exact-ls-dist}) that
$( \widehat{\mu}_1, \widehat{\sigma}_1)$ is the maximizer of
\begin{equation}
\label{eqn:shift}
\mathcal{L}(\mu_1,\sigma_1;\boldsymbol{x}_n)=\frac{1}{\sigma_1^{n+1}}\prod_{i=1}^{n}\phi\left(\frac{x_i-\mu_1}{\sigma_1}\right).
\end{equation}
If the data $\boldsymbol{x}_n$ and parameter values $(\mu_1,\sigma_1)$ in (\ref{eqn:shift}) are scaled by a given $d>0$ and shifted by a given $c\in\mathbb{R}$, we may denote the resulting values as $\boldsymbol{x}_n^\prime=d\boldsymbol{x}_n+c$, $\mu^\prime_1=d\mu_1+c$, and $\sigma_1^\prime=d\sigma_1$, and we note that the corresponding objective function (\ref{eqn:shift}) then becomes
\begin{equation*}
\mathcal{L}(\mu_1^\prime,\sigma_1^\prime;\boldsymbol{x}_n^\prime)=(\sigma_1^\prime)^{-n-1}\prod_{i=1}^{n}\phi\left(\frac{x_i^\prime-\mu_1^\prime}{\sigma_1^\prime}\right)=(d\sigma_1)^{-n-1}\prod_{i=1}^{n}\phi\left(\frac{x_i-\mu_1}{\sigma_1}\right),
\end{equation*}
which would have a maximizer as $(\widehat{\mu}_1^\prime, \widehat{\sigma}_1^\prime)=(d\widehat{\mu}_1+c,d\widehat{\sigma}_1)$. This result implies that
$(\widehat{\mu}_1,\widehat{\sigma}_1)$ is an equivariant estimator.
Because $(\widehat{\mu}_1,\widehat{\sigma}_1)$ is equivariant and
$(X_i-\mu)/\sigma,i=1,\dots,n$ are pivots, the two quantities
\begin{equation*}
\begin{split}
\widehat{\mu}_1\left(\frac{X_1-\mu}{\sigma},\dots,\frac{X_n-\mu}{\sigma}\right)=\frac{1}{\sigma}\left[\widehat{\mu}_1(X_1,\dots,X_n)-\mu\right],\\
\widehat{\sigma}_1\left(\frac{X_1-\mu}{\sigma},\dots,\frac{X_n-\mu}{\sigma}\right)=\frac{\widehat{\sigma}_1(X_1,\dots,X_n)}{\sigma}
\end{split}
\end{equation*}
do not depend on any unknown parameters; thus, $\widehat{\sigma}_1/\sigma$ and $(\mu-\widehat{\mu}_1)/\sigma$ are pivotal quantities.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem-1}}
For simplicity and clarity in presentation, we first state and prove a version of Theorem~\ref{theorem-1} in the case of single-parameter distributions. The assumptions and regularity
conditions described in the theorem statement are mild and will be discussed further after the theorem statement. After establishing
this version of Theorem~\ref{theorem-1},
we then discuss the extension to the multiple-parameter case.
\begin{theorem} (Scalar parameter case.) \label{theorem-1}
Assume iid data $X_1,\dots,X_n$ and independent predictand $Y$ have common pdf $f(\cdot;\theta)$ depending on real-valued $\theta\in\Theta$. Let $\theta_0$ denote the true parameter value.\\
(i) Suppose the following conditions (a)-(e) hold:
\begin{enumerate}[label=(\alph*)]
\item The ML estimator $\tilde\theta_n$ based on $(X_1,\dots,X_n)$ satisfies $\tilde\theta_n\xrightarrow{p}\theta_0$.
\item The log-density $\log f(x;\theta)$ is twice continuously differentiable in a neighborhood $O\subset\Theta$ of $\theta_0$.
\item The first and second derivatives of $\log f(X_1;\theta)$ at $\theta_0$ have moments as
\begin{equation*}
\begin{split}
&\text{E}_{\theta_0}\left[\frac{d\log f(X_1;\theta)}{d\theta}\bigg|_{\theta=\theta_0}\right]=0,\\
&\text{I}(\theta_0)\equiv\text{E}_{\theta_0}\left[\left(\frac{d\log f(X_1;\theta)}{d\theta}\bigg|_{\theta=\theta_0}\right)^2\right]=-\text{E}_{\theta_0}\left[\frac{d^2\log f(X_1;\theta)}{d\theta^2}\bigg|_{\theta=\theta_0}\right]\in(0,\infty).
\end{split}
\end{equation*}
\item Moments of second derivatives of $\log f(X_1;\theta)$ satisfy a continuity condition at $\theta_0$:
\begin{equation*}
\text{E}_{\theta_0}\left[\sup_{|\theta-\theta_0|<\delta}\left|\frac{d^2\log f(X_1;\theta_0)}{d\theta^2} -\frac{d^2\log f(X_1;\theta)}{d\theta^2} \right|\right]\to0\text{ as }\delta\to0.
\end{equation*}
\item $\sup_{\theta\in\Theta}f(X_1;\theta)$ and $f(X_1;\theta_0)/\sup_{\theta\in\Theta}f(X_1;\theta)$ are positive continuous random variables.
\end{enumerate}
Then, the asymptotic distribution of the LR statistic, as $n\to \infty$, is given by
\[
-2\log\Lambda_n(\boldsymbol{X}_n,Y) \xrightarrow{d}-2\log\left[\frac{f(Y;\theta_0)}{\sup_{\theta\in\Theta}f(Y;\theta)}\right].
\]
(ii) In addition to conditions (a)-(e), further assume the following:
\begin{enumerate}[label=(\alph*)]
\addtocounter{enumi}{5}
\item Moments of second derivatives of $\log f(X_1;\theta)$ satisfy integrability conditions
\begin{equation*}
\begin{split}
&\sup_{|\theta-\theta_0|<\delta}\text{E}_{\theta}\left[\sup_{|\theta^\dagger-\theta|<\delta}\left|\frac{d^2\log f(X_1;\theta^\dagger)}{d\theta^2}-\frac{d^2\log f(X_1;\theta )}{d\theta^2}\right|\right]\to0\text{ as }\delta\to0,\\
&\sup_{\theta\in O}\text{E}_{\theta}\left[\left|\frac{d^2\log f(X_1;\theta_0)}{d\theta^2}\right|\text{I}\left(\left|\frac{d^2\log f(X_1;\theta_0)}{d\theta^2}\right|\geq M\right)\right]\to0\text{ as }M\to\infty,
\end{split}
\end{equation*}
where above $X_1\sim f(\cdot;\theta)$ under expectation $\text{E}_{\theta}$.
\item $x$-discontinuities of $f(x;\theta_0)$, $d \log f(x;\theta_0)/ d \theta $, $d^2 \log f(x;\theta_0)/ d \theta^2$, or of $\sup_{\theta\in\Theta}f(x;\theta)$ have probability 0 under $\Pr_{\theta_0}$.
\item For any generic sequence $\{\theta_m\}$ of parameter values and associated random variables $Z_m\sim f(\cdot;\theta_m)$, if $\theta_m \to \theta_0$ holds, then $Z_m\xrightarrow{d} X_1$ where $X_1\sim f(\cdot;\theta_0)$.
\item
Letting $X_1^*,\ldots,X_n^*,Y^*$ denote iid bootstrap observations from $f(\cdot; \tilde\theta_n )$ (where $\tilde\theta_n$ is the ML estimator from $(X_1,\dots,X_n)$) and letting $\widehat{\theta}^\ast_{n}$ and $\tilde{\theta}_n^\ast$ denote
ML estimators from $(X_1^*,\ldots,X_n^*,Y^*)$ and $(X_1^*,\ldots,X_n^*)$, respectively, it holds that
$\rho(|\widehat{\theta}^\ast_{n}-\tilde{\theta}_n|,0)+\rho(|\tilde{\theta}_n^\ast-\tilde{\theta}_n|,0) \stackrel{p}{\rightarrow}0$ as $n\to \infty$,
where $\rho(\cdot,\cdot)$ denotes any metric for the distance between distributions that can be used to describe weak convergence.
\end{enumerate}
Then, the bootstrap method is asymptotically correct for the distribution of the LR-statistic: as $n\to \infty$,
\[
\sup_{\lambda\in\mathbb{R}^{+}}\left|\Pr{}_{\!\!\ast}\left[-2 \log \Lambda_n(\boldsymbol{X}_n^*,Y^*)\leq\lambda\right]-\Pr\left[-2 \log \Lambda_n(\boldsymbol{X}_n,Y) \leq\lambda\right]\right|\xrightarrow{p}0.
\]
\end{theorem}
We comment on the conditions before presenting the proof of the scalar-parameter version of Theorem~\ref{theorem-1}.
Conditions (a)--(e) are used to determine the limit distribution of the log-LR statistic and correspond
to standard regularity conditions often applied in likelihood theory. For example, conditions (b)--(c) require that the score-function
exists, with the usual mean zero and a variance interpretable as an information number.
Condition~(e) is natural for continuous data and
ensures that the logarithms of these quantities are well-defined random variables.
Conditions~(f)--(i) are then further imposed
to establish the validity of the bootstrap approximation, though these conditions are also generally mild.
Condition~(f) provides a type of uniform integrability and convergence of moments arising from derivatives of the score-function (i.e., in a neighborhood
of the true parameter $\theta_0$). Conditions~(g)--(h) are basic smoothness conditions on the marginal data density $f(\cdot;\theta_0)$;
condition~(h) states that convergence of parameters implies convergence of the underlying distributions, which would follow by Scheff\'e's theorem, for example (i.e., by verifying pointwise convergence of densities). Condition~(i) corresponds to a basic condition on bootstrap parameter estimates,
which is technical due to the nature of bootstrap distributions being defined by observed data; this condition says that the distributions of bootstrap quantities $|\widehat{\theta}^\ast_{n}-\tilde{\theta}_n|$
and $|\tilde{\theta}^\ast_{n}-\tilde{\theta}_n|$, which are analogous to $|\tilde{\theta}_n-\theta_0|$ in the bootstrap world, converge to zero in distribution (i.e., holding with high probability as sample size $n$ increases).
This condition is much weaker than the standard use of the bootstrap for parameter estimation, which often requires that $\sqrt{n}(\tilde{\theta}^\ast_{n}-\tilde{\theta}_n)$ and $\sqrt{n}( \tilde{\theta}_n-\theta_0)$ have the same non-degenerate limit distribution. In condition~(i), the exact distance metric $\rho(\cdot,\cdot)$ is not important (e.g., the Prokhorov or L\'evy metrics may be used); for generic random variables, $\rho(Z_m,Z_0)\to 0$ must simply have the equivalent interpretation that $Z_m \stackrel{d}{\rightarrow}Z_0$.
\begin{proof}
\noindent To establish Theorem~\ref{theorem-1}(i) for a scalar parameter, recall that the ML estimator $\tilde\theta_n\equiv \tilde\theta_n(X_1,\dots,X_n)$ from the data $(X_1,\dots,X_n)$ (i.e., an iid sample of size $n$) maximizes the log-likelihood function
\[
l_n(\theta) \equiv \sum_{i=1}^n \log f(X_i;\theta),
\]
while the ML estimator $\widehat\theta_n \equiv \widehat\theta_n(X_1,\dots,X_n,Y)$ from the data $(X_1,\dots,X_n,Y)$
(i.e., an iid sample $X_1,\dots,X_n,Y\equiv X_{n+1}$ of size $n+1$) maximizes the log-likelihood function
\[
l_{2,n}(\theta) \equiv \log f(Y;\theta) + \sum_{i=1}^n \log f(X_i;\theta) =\log f(Y;\theta) + l_n(\theta).
\]
For simplicity in the following, we denote the first and second derivatives of $l_n(\theta),l_{2,n}(\theta)$ with respect to $\theta$ as
$l_n^\prime(\theta),l_{2,n}^\prime(\theta)$ and $l_n^{\prime\prime}(\theta),l_{2,n}^{\prime\prime}(\theta)$, when such derivatives appropriately exist.
Note that conditions~(a)-(b) imply that, with arbitrarily high probability
for large $n$, both $\tilde\theta_n$ and $\widehat\theta_n$ lie in the neighborhood $O$ of $\theta_0$ and are roots of the log-likelihood equations
\begin{equation}
\label{eqn:score}
0=l_n^\prime(\tilde\theta_n)\equiv \sum_{i=1}^n \frac{d \log f(X_i; \tilde\theta_n)}{d\theta}, \qquad 0= l_{2,n}^\prime(\widehat\theta_n) \equiv \frac{d \log f(Y; \widehat{\theta}_n)}{d\theta} +l_n^{\prime}(\widehat\theta_n).
\end{equation}
By Taylor expansion under condition~(b) and $l^\prime_n(\tilde{\theta}_n)=0$, we may then write the difference of the log-likelihood functions as
\begin{equation}
\label{eqn:a1}
l_n(\widehat\theta_n)-l_n(\tilde{\theta}_n)
=l_n^\prime(\tilde{\theta}_n)(\widehat{\theta}_n-\tilde{\theta}_n)
+\frac{1}{2}l_n^{\prime\prime}(\tau_n)(\tilde{\theta}_n-\widehat{\theta}_n)^2 = \frac{1}{2}l^{\prime\prime}_n(\tau_n)(\tilde{\theta}_n-\widehat{\theta}_n)^2,
\end{equation}
in terms of some value $\tau_n $ between $\tilde{\theta}_n$ and $\widehat{\theta}_n$. In addition, we may
re-write the score equation $0=l_{2,n}^\prime(\widehat\theta_n )$ in (\ref{eqn:score}) as
\begin{eqnarray*}
0 = l_{2,n}^\prime(\widehat\theta_n ) = \frac{d\log f(Y;\widehat{\theta}_n)}{d\theta}+l_n^\prime(\tilde{\theta}_n)+l_n^{\prime\prime}(\tau_{2,n})(\tilde{\theta}_n-\widehat{\theta}_n),
\end{eqnarray*}
using a Taylor expansion of $l_n^\prime(\widehat{\theta}_n)$ around $\tilde{\theta}_n$, where above $\tau_{2,n}$ denotes some value between $\tilde{\theta}_n$ and $\widehat{\theta}_n$;
because $l_n^\prime(\tilde{\theta}_n)=0$ in (\ref{eqn:score}), the above leads to
\begin{equation}
\label{eqn:a2}
\frac{d\log f(Y; \widehat{\theta}_n)}{d\theta} +l_n^{\prime\prime}(\tau_{2,n})(\tilde{\theta}_n-\widehat{\theta}_n)=0.
\end{equation}
Based on the developments above, define a scaled difference
$\Delta_n$ of second derivatives of log-likelihood functions as either $\Delta_n \equiv [ l_n^{\prime\prime}(\tau_{2,n}) - l_n^{\prime\prime}(\theta_0)]/n $
based on $\tau_{2,n}$ from (\ref{eqn:a2}) or as $\Delta_n \equiv [ l_n^{\prime\prime}(\tau_n) - l_n^{\prime\prime}(\theta_0)]/n $
based on $\tau_n$ from (\ref{eqn:a1}); both $\tau_{2,n}$ and $\tau_n$ similarly denote values between
$\widehat{\theta}_n$ and $\tilde{\theta}_n$. To complete the proof of Theorem~\ref{theorem-1}(i), we shall establish
the following three results, appearing in (\ref{eqn:a3})-(\ref{eqn:a5}) as $n\to \infty$:
\begin{equation}
\label{eqn:a3}
\Delta_n \stackrel{p}{\rightarrow} 0;
\end{equation}
\begin{eqnarray}
\label{eqn:a4} \frac{d\log f(Y; \widehat{\theta}_n)}{d\theta} =\frac{d\log f(Y;\theta_0)}{d\theta} + o_p(1), \\
\nonumber \log f(Y; \widehat{\theta}_n) = \log f(Y;\theta_0) + o_p(1);\end{eqnarray}
\begin{equation} \label{eqn:a5} n|\widehat{\theta}_n - \tilde{\theta}_n| = O_p(1).
\end{equation}
When the above hold, we may re-write (\ref{eqn:a1}) as
\begin{equation}
\label{eqn:a6}
l_n(\widehat{\theta}_n)-l_n(\tilde{\theta}_n)=\frac{l_n^{\prime\prime}(\tau_{n})}{2n}\left[\sqrt{n}(\widehat{\theta}_n-\tilde{\theta}_n)\right]^2 = O_p(1)o_p(1)=o_p(1)
\end{equation}
using that $l_n^{\prime\prime}(\tau_{n})/n\xrightarrow{p} -I(\theta_0) \in (-\infty,0)$ by (\ref{eqn:a3}) combined with
$l_n^{\prime\prime}(\theta_0)/n\xrightarrow{p} -I(\theta_0)$ by the SLLN (cf.~condition~(c)), while $\sqrt{n}|\widehat{\theta}_n-\tilde{\theta}_n|=o_p(1)$ by (\ref{eqn:a5}). From (\ref{eqn:a4}) with (\ref{eqn:a6}), we then obtain
the distribution of the log-LR statistic as
\begin{eqnarray*}
-2\log\Lambda_n(\boldsymbol{X}_n,Y) &=& -2 l_n(\widehat{\theta}_n) -2 \log f(Y;\widehat{\theta}_n) + 2 l_n(\tilde{\theta}_n) + 2 \log \sup_{\theta\in\Theta} f(Y;\theta)\\&=& -2 \log f(Y;\theta_0) + 2 \log \sup_{\theta\in\Theta} f(Y;\theta) + o_p(1),
\end{eqnarray*}
where $\log \sup_{\theta\in\Theta} f(Y;\theta)$ and $\log f(Y;\theta_0) -\log \sup_{\theta\in\Theta} f(Y;\theta)$ are well-defined random variables (cf.~condition~(e) where $Y\stackrel{d}{=}X_1$);
that is,
\[
-2\log\Lambda_n(\boldsymbol{X}_n,Y)\xrightarrow{d}-2\left[\log f(Y;\theta_0)-\log\sup_{\theta\in\Theta}f(Y;\theta)\right].
\]
We next establish (\ref{eqn:a3})-(\ref{eqn:a5}), beginning with (\ref{eqn:a3}).
Consider $\Delta_n \equiv [ l_n^{\prime\prime}(\tau_{2,n}) - l_n^{\prime\prime}(\theta_0)]/n $
with $\tau_{2,n}$ from (\ref{eqn:a2}), lying between $\widehat{\theta}_n$ and $\tilde{\theta}_n$;
both latter estimators are consistent for $\theta_0$ by condition~(a). For any $\epsilon>0$, we may pick $\delta$ to bound the probability
(under $\Pr\equiv \Pr_{\theta_0}$)
\begin{equation}\label{eq:conv}
\begin{split}
\Pr(|\Delta_n|>\epsilon)&\leq\Pr\left(\left|\Delta_n\right|>\epsilon,|\widehat{\theta}_n-\theta_0|<\delta/2, |\tilde{\theta}_n-\theta_0|<\delta/2\right)+\\
&\Pr(|\widehat{\theta}_n-\theta_0|\geq\delta/2)+\Pr(|\tilde{\theta}_n-\theta_0|\geq\delta/2).
\end{split}
\end{equation}
Using condition~(d) with Markov's inequality, we may bound the probability
\begin{equation*}
\begin{split}
&\Pr\left(\left|\Delta_n\right|>\epsilon,|\widehat{\theta}_n-\theta_0|<\delta/2, |\tilde{\theta}_n-\theta_0|<\delta/2\right)
\leq\Pr(\left|\Delta_n\right|>\epsilon, \left|\tau_{2,n}-\theta_0\right|<\delta)\\
\leq&\Pr\left[\frac{1}{n}\sum_{i=1}^{n}\sup_{|\theta-\theta_0|<\delta}\left|\frac{d^2\log f(X_i;\theta)}{d\theta^2} -\frac{d^2\log f(X_i;\theta_0)}{d\theta^2} \right|>\epsilon\right]\\
\leq&\frac{h(\delta)}{\epsilon}, \quad h(\delta)\equiv \text{E}_{\theta_0} \sup_{|\theta-\theta_0|<\delta}\left|\frac{d^2\log f(X_1;\theta)}{d\theta^2} -\frac{d^2\log f(X_1;\theta_0)}{d\theta^2} \right|.
\end{split}
\end{equation*}
Hence, for fixed $\epsilon,\delta>0$, taking limits as $n\to \infty$ in (\ref{eq:conv}) yields
\begin{eqnarray*}
\limsup_{n\to \infty}\Pr\left(|\Delta_n|>\epsilon\right)
&\leq&\frac{h(\delta)}{\epsilon}+\limsup_{n\to \infty}\Pr(|\widehat{\theta}_n-\theta_0|\geq\delta/2)+\limsup_{n\to \infty}\Pr(|\tilde{\theta}_n-\theta_0|\geq\delta/2)\\
&
\leq& \frac{h(\delta)}{\epsilon},
\end{eqnarray*}
by consistency of $\widehat{\theta}_n$ and $\tilde{\theta}_n$ (e.g., $\lim_{n\to \infty}\Pr(|\tilde{\theta}_n-\theta_0|\geq\delta/2)=0$). Because $\delta>0$ was arbitrary, letting $\delta \to 0$ (i.e., using $\lim_{\delta \to 0}h(\delta) =0$ by condition~(d)) then yields $\limsup_{n\to \infty}\Pr\left(|\Delta_n|>\epsilon\right)=0$.
As $\epsilon>0$ was arbitrary, (\ref{eqn:a3}) now follows for $\Delta_n \equiv [ l_n^{\prime\prime}(\tau_{2,n}) - l_n^{\prime\prime}(\theta_0)]/n $; the argument is the same for $\Delta_n \equiv [ l_n^{\prime\prime}(\tau_n) - l_n^{\prime\prime}(\theta_0)]/n $ with $\tau_n $ from (\ref{eqn:a1}).
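As an aside, the quantity $h(\delta)$ from condition~(c) can be made concrete. In the Poisson model $f(x;\theta)=e^{-\theta}\theta^x/x!$ (an illustrative choice, not from the manuscript), $d^2\log f(x;\theta)/d\theta^2=-x/\theta^2$, and for $0<\delta<\theta_0$ one obtains the closed form $h(\delta)=\theta_0[(\theta_0-\delta)^{-2}-\theta_0^{-2}]\to 0$ as $\delta\to0$. The sketch below compares a Monte Carlo estimate to this closed form.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
theta0 = 2.0
x = rng.poisson(theta0, 100000)

for delta in [0.5, 0.1, 0.01]:
    # sup over |theta - theta0| < delta of
    # |d2 log f(x;theta)/dtheta2 - d2 log f(x;theta0)/dtheta2|
    # equals x * ((theta0 - delta)**-2 - theta0**-2) in the Poisson model
    h_mc = np.mean(x * ((theta0 - delta) ** -2 - theta0 ** -2))
    h_exact = theta0 * ((theta0 - delta) ** -2 - theta0 ** -2)
    print(delta, h_mc, h_exact)
\end{verbatim}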
To show (\ref{eqn:a4}), we first expand $d\log f(Y;\widehat{\theta}_n)/d\theta$ around $\theta_0$, writing
\begin{equation}\label{eq:exp2}
\frac{d\log f(Y;\widehat{\theta}_n)}{d\theta} =\frac{d\log f(Y;\theta_0)}{d\theta} +\frac{d^2\log f(Y;\tau_{3,n})}{d\theta^2} (\widehat{\theta}_n-\theta_0),
\end{equation}
where $\tau_{3,n}$ is some value between $\theta_0$ and $\widehat{\theta}_n$. Picking some small $\delta>0$,
when $|\widehat{\theta}_n-\theta_0|\leq\delta$ holds (with arbitrarily high probability when $n$ is large),
then $|\tau_{3,n}-\theta_0|\leq\delta$ also holds so that we may bound
\[
\left|\frac{d^2\log f(Y;\tau_{3,n})}{d\theta^2} \right|\leq\left|\frac{d^2\log f(Y;\theta_0)}{d\theta^2} \right|+\sup_{|\theta-\theta_0|<\delta}\left|\frac{d^2\log f(Y;\theta)}{d\theta^2}-\frac{d^2\log f(Y;\theta_0)}{d\theta^2} \right|,
\]
where the sum on the right-hand side above has finite/bounded expectation by conditions (c)--(d). Consequently,
it follows that $ | d^2\log f(Y;\tau_{3,n})/d\theta^2 |=O_p(1)$ so that, from (\ref{eq:exp2}), we have
\[
\frac{d\log f(Y;\widehat{\theta}_n)}{d\theta} =\frac{d\log f(Y;\theta_0)}{d\theta} + O_p(1)o_p(1)
\]
in (\ref{eqn:a4}); the argument for $\log f(Y;\widehat{\theta}_n) = \log f(Y;\theta_0) + o_p(1)$ follows similarly.
To show (\ref{eqn:a5}), we may apply (\ref{eqn:a2}), (\ref{eqn:a3}), and (\ref{eqn:a4}) to write
\begin{equation}\label{find-diff-tilde-hat}
\frac{d\log f(Y;\theta_0)}{d\theta}=o_p(1)+n(\tilde{\theta}_n-\widehat{\theta}_n)\cdot I(\theta_0)(1+o_p(1)),
\end{equation}
where $l_n^{\prime\prime}(\tau_{2,n})/n=-I(\theta_0)(1+ o_p(1))$ follows by (\ref{eqn:a3}) along with
$l_n^{\prime\prime}(\theta_0)/n\xrightarrow{p} -I(\theta_0)\in(-\infty,0) $ by the SLLN.
Because the $o_p(1)$ terms in (\ref{find-diff-tilde-hat}) will be less than $1/2$ (say) with high probability for large $n$, it follows that
\[
n\left|\tilde{\theta}_n-\widehat{\theta}_n\right|\leq \frac{1}{I(\theta_0)}+\frac{2}{I(\theta_0)} \left|\frac{d\log f(Y;\theta_0)}{d\theta}\right|
\]
holds (with high probability for large $n$); note that the right-hand side has finite expectation and is therefore bounded in probability.
Consequently, it follows that $n|\tilde{\theta}_n-\widehat{\theta}_n|=O_p(1)$, which establishes (\ref{eqn:a5}).
To establish the bootstrap result of Theorem~\ref{theorem-1}(i) in the scalar parameter case, we establish
convergence in probability for the bootstrap approximation through a characterization of almost sure convergence
along subsequences. Let $\{n_j\}\subset \{n\}$ denote a positive integer subsequence, and note that we may assume
the random variables $X_{n}$, $n \geq 1$, and $Y$ are defined on a common probability space $(\Omega,\mathcal{F}, \Pr)$ with $\sigma$-algebra $\mathcal{F}$ (and $\Pr \equiv \Pr_{\theta_0}$).
Then, under conditions~(a) and (i), there
exists a further subsequence $\{n_k\}\subset \{n_j\}$ and an event $A\in \mathcal{F}$ with $\Pr(A)=1$ such that, pointwise for $\omega \in A$, it holds that $\tilde{\theta}_{n_k} \equiv \tilde{\theta}_{n_k}(\omega)\rightarrow \theta_0$ and
$\rho( |\widehat{\theta}_{n_k}^\ast-\tilde{\theta}_{n_k}|,0)(\omega) + \rho( |\tilde{\theta}_{n_k}^\ast-\tilde{\theta}_{n_k}|,0)(\omega)
\rightarrow 0$
as $n_k \to \infty$. Note that, pointwise for each $\omega \in A$ (we will suppress the dependence of random variables $X_i, Y$ and estimators $\tilde{\theta}_{n}$ on $\omega$ hereafter), there is a sequence of bootstrap distributions, with each distribution indexed by $n_k$ and defined by bootstrap observations $X_1^*,\ldots, X_{n_k}^*, Y^*\equiv Y^*_{n_k}$ as iid draws
from $f(\cdot; \tilde{\theta}_{n_k})$ based on the ML estimator $\tilde{\theta}_{n_k}$ from observed data $X_1,\ldots, X_{n_k}$.
That is, for fixed $\omega \in A$, we have $\tilde{\theta}_{n_k} \rightarrow \theta_0$, $\widehat{\theta}_{n_k}^\ast-\tilde{\theta}_{n_k}
\stackrel{p*}{\rightarrow}0$ and $\tilde{\theta}_{n_k}^\ast-\tilde{\theta}_{n_k}
\stackrel{p*}{\rightarrow}0$ as $n_k \to \infty$, where $\stackrel{p*}{\rightarrow}$ denotes convergence in bootstrap probability, and
we consider establishing that the bootstrap log-LR statistic $-2\log\Lambda_{n_k}^\ast(\boldsymbol{X}_{n_k}^*,Y^*)$ converges in distribution as
$n_k \to \infty$, denoted as $\stackrel{d*}{\rightarrow}$, to the same limit $-2 \log [ f(Y;\theta_0)/\sup_{\theta \in \Theta} f(Y;\theta)]$ as the original log-LR statistic (i.e., $Y \sim f(\cdot;\theta_0)$).
The bootstrap proof is similar to that of Theorem~\ref{theorem-1}(i), with some modifications for the bootstrap version $Y^*\equiv Y_{n_k}^*\sim f(\cdot;\tilde{\theta}_{n_k})$
of the predictand described next.
As $\widehat{\theta}^\ast_{n_k}\stackrel{p*}{\rightarrow} \theta_0$ and $\tilde{\theta}^\ast_{n_k}\stackrel{p*}{\rightarrow} \theta_0$ (so that $\widehat{\theta}^\ast_{n_k},\tilde{\theta}^\ast_{n_k}$ lie in a neighborhood $O$ of $\theta_0$ in condition~(b)), we may
apply the same Taylor expansions used in the proof of Theorem~\ref{theorem-1}(i) at the bootstrap level.
Define $l_{n_k}^*(\theta) $ and $l_{2,n_k}^*(\theta)$ to be the bootstrap counterparts of log-likelihood functions $l_{n_k}(\theta) $ and $l_{2,n_k}(\theta)$ (based on $X_1^*,\ldots,X_{n_k}^*,Y^*$ in place of $X_1,\ldots,X_{n_k},Y$), with corresponding derivatives $l_{n_k}^{*\prime}(\theta) ,l_{2,n_k}^{*\prime}(\theta)$ and second derivatives $l_{n_k}^{*\prime\prime}(\theta) ,l_{2,n_k}^{*\prime \prime}(\theta)$. Then, the same expansions as in (\ref{eqn:a1})-(\ref{eqn:a2}) at the original data level apply for bootstrap data as
\begin{eqnarray}
\label{eqn:aa1}
l_{n_k}^*(\widehat\theta_{n_k}^*)-l_{n_k}^*(\tilde{\theta}_{n_k}^*) = \frac{1}{2}l^{*\prime\prime}_{n_k}(\tau_{n_k}^*)(\tilde{\theta}_{n_k}^*-\widehat{\theta}_{n_k}^*)^2,\\
\label{eqn:aa2}\frac{d\log f(Y_{n_k}^*; \widehat{\theta}_{n_k}^*)}{d\theta} +l_{n_k}^{*\prime\prime}(\tau_{2,n_k}^*)(\tilde{\theta}_{n_k}^*-\widehat{\theta}_{n_k}^*)=0,
\end{eqnarray}
in terms of some values $\tau_{n_k}^*, \tau_{2, n_k}^*$ between $\tilde{\theta}_{n_k}^*$ and $\widehat{\theta}_{n_k}^*$.
Define a bootstrap difference as $\Delta_{n_k}^* \equiv [ l_{n_k}^{*\prime\prime}(\tau_{2,n_k}^*) - l_{n_k}^{*\prime\prime}(\theta_0)]/n_k $
based on $\tau_{2,n_k}^*$ from (\ref{eqn:aa2}) or as $\Delta_{n_k}^* \equiv [ l_{n_k}^{*\prime\prime}(\tau_{n_k}^*) - l_{n_k}^{*\prime\prime}(\theta_0)]/n_k $
based on $\tau_{n_k}^*$ from (\ref{eqn:aa1}). Similar to the proof of Theorem~\ref{theorem-1}(i), to complete the proof of Theorem~\ref{theorem-1}(ii), we shall establish analog versions of (\ref{eqn:a3})-(\ref{eqn:a5}) given by
\begin{equation}
\label{eqn:aa23}
l_{n_k}^{*\prime\prime}(\theta_0)/n_k \stackrel{p*}{\rightarrow} - I(\theta_0) \in (-\infty,0);
\end{equation}
\begin{equation}
\label{eqn:aa3}
\Delta_{n_k}^* \stackrel{p*}{\rightarrow} 0;
\end{equation}
\begin{eqnarray}
\label{eqn:aa4} \frac{d\log f(Y_{n_k}^*; \widehat{\theta}_{n_k}^*)}{d\theta} =\frac{d\log f(Y_{n_k}^*;\theta_0)}{d\theta} + o_{p*}(1), \\
\nonumber \frac{d\log f(Y_{n_k}^*;\theta_0)}{d\theta} =O_{p*}(1),\\
\nonumber \log f(Y_{n_k}^*; \widehat{\theta}_{n_k}^*) = \log f(Y_{n_k}^*; \theta_0) + o_{p*}(1);\end{eqnarray}
\begin{equation} \label{eqn:aa5} n_k|\widehat{\theta}_{n_k}^* - \tilde{\theta}_{n_k}^*| = O_{p*}(1),
\end{equation}
where above $O_{p*}(1)$ (i.e., bounded in bootstrap probability) and $o_{p*}(1)$ (i.e., converging to zero in bootstrap probability) denote bootstrap probability orders along the subsequence $n_k$.
When (\ref{eqn:aa23})-(\ref{eqn:aa5}) hold, we can re-write (\ref{eqn:aa1}) as
\begin{equation}
\label{eqn:aa6}
l_{n_k}^*(\widehat\theta_{n_k}^*)-l_{n_k}^*(\tilde\theta_{n_k}^*) = \frac{1}{2n_k}l^{*\prime\prime}_{n_k}(\tau_{n_k}^*)\left[\sqrt{n_k}(\widehat{\theta}_{n_k}^*-\tilde{\theta}_{n_k}^*)\right]^2=
O_{p*}(1)o_{p*}(1)=o_{p*}(1)
\end{equation}
using (\ref{eqn:aa23})-(\ref{eqn:aa3}) (which together give $l^{*\prime\prime}_{n_k}(\tau_{n_k}^*)/n_k \stackrel{p*}{\rightarrow} -I(\theta_0) \in (-\infty,0)$) combined with (\ref{eqn:aa5}).
From (\ref{eqn:aa4}) with (\ref{eqn:aa6}), we then write the bootstrap log-LR statistic as
\begin{eqnarray}
\label{eqn:end}
&&-2\log\Lambda_{n_k}^*(\boldsymbol{X}_{n_k}^*,Y_{n_k}^*) \\
\nonumber &=& -2 l_{n_k}^*(\widehat{\theta}_{n_k}^*) -2 \log f(Y_{n_k}^*;\widehat{\theta}_{n_k}^*) + 2 l_{n_k}^*(\tilde{\theta}_{n_k}^*) + 2 \log \sup_{\theta\in\Theta} f(Y_{n_k}^*;\theta)\\&=& \nonumber -2 \log f(Y_{n_k}^*;\theta_0) + 2 \log \sup_{\theta\in\Theta} f(Y_{n_k}^*;\theta) + o_{p*}(1).
\end{eqnarray}
Because $Y^*\equiv Y_{n_k}^\ast\sim f(\cdot;{\tilde{\theta}_{n_k}})$ and $\tilde{\theta}_{n_k}\to\theta_0$, we have $Y^\ast_{n_k} \stackrel{d*}{\rightarrow} Y_0\sim f(\cdot;\theta_0)$ under condition~(h); from this, by the continuous mapping theorem under conditions~(e) and (g), it holds that the random pair
\[
\left(\log f(Y^\ast_{n_k};\theta_0), \log \sup_{\theta \in \Theta}f(Y^\ast_{n_k};\theta) \right) \stackrel{d*}{\rightarrow}
\left(\log f(Y_0;\theta_0),
\log \sup_{\theta \in \Theta}f(Y_0;\theta)\right)
\]
converges in distribution. Consequently, the limit
\[-2\log\Lambda_{n_k}^*(\boldsymbol{X}_{n_k}^*,Y_{n_k}^*) \stackrel{d*}{\rightarrow} -2 \log f(Y_0;\theta_0) + 2 \log \sup_{\theta\in\Theta} f(Y_0;\theta)
\]
then follows in (\ref{eqn:end}) by Slutsky's theorem and the continuous mapping theorem; note that this limit corresponds to the same (continuous) distributional limit as the original log-LR statistic in Theorem~\ref{theorem-1}(i) (i.e., $Y_0\sim f(\cdot;\theta_0)$).
By Polya's theorem, we then may write
\[
\lim_{n_k \to \infty}\sup_{\lambda\in\mathbb{R}^{+}}|\Pr{}_{\!\!\ast}[-2\log\Lambda_{n_k}^*(\boldsymbol{X}_{n_k}^*,Y_{n_k}^*)\leq\lambda]-
\Pr[-2\log\Lambda_{n_k}(\boldsymbol{X}_{n_k},Y)\leq\lambda]| =0.
\]
As this convergence above holds (almost surely) along the subsequence $\{n_k\}\subset \{n_j\}$, where the latter subsequence was arbitrary, we now have
\[\sup_{\lambda\in\mathbb{R}^{+}}|\Pr{}_{\!\!\ast}[-2\log\Lambda_{n}^*(\boldsymbol{X}_n^*,Y_{n}^*)\leq\lambda]-
\Pr[-2\log\Lambda_{n}(\boldsymbol{X}_{n},Y)\leq\lambda]| \stackrel{p}{\rightarrow}0
\]
as $n\to \infty$ in Theorem~\ref{theorem-1}(ii).
Finally, we establish (\ref{eqn:aa23})-(\ref{eqn:aa5}) to complete the proof, beginning with (\ref{eqn:aa23}).
To show (\ref{eqn:aa23}), we first require some bootstrap moment results for the second derivatives of log-densities.
Because again $\tilde{\theta}_{n_k} \to \theta_0$ and $X_1^\ast\equiv X_{1,n_k}^\ast\sim f(\cdot;\tilde{\theta}_{n_k})$, then $X_1^\ast \stackrel{d*}{\rightarrow} Y_0\sim f(\cdot;\theta_0)$ as $n_k\to \infty$ by condition~(h) and we also subsequently have
\begin{equation}
\label{eqn:help}
\frac{d^2\log f(X_1^\ast;\theta_0)}{d\theta^2}\stackrel{d*}{\rightarrow}\frac{d^2\log f(Y_0;\theta_0)}{d\theta^2}
\end{equation}
by condition~(g) and the continuous mapping theorem. Noting that the bootstrap sample $X_1^*,\ldots,X_{n_k}^*$ is iid under $f(\cdot;\tilde{\theta}_{n_k})$, we shall truncate each $d^2\log f(X_i^\ast; \theta_0)/d\theta^2$, $i=1,\ldots,n_k$, as
\[
T_{i,{n_k}}^\ast(M)\equiv\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\text{I}\left(\left|\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\right|\leq M\right),
\]
where $M$ is a continuity point in the distribution (cdf) of $d^2\log f(Y_0;\theta_0)/d\theta^2$ and
$Y_0\sim f(\cdot;\theta_0)$ (i.e., there can be only countably many discontinuity points). Then, at any given continuity point $M$,
we have from the continuous mapping theorem and (\ref{eqn:help}) that
\[
T_{1,{n_k}}^\ast(M)\stackrel{d*}{\rightarrow} \frac{d^2\log f(Y_0;\theta_0)}{d\theta^2}\text{I}\left(\left|\frac{d^2\log f(Y_0;\theta_0)}{d\theta^2}\right|\leq M\right)
\]
for $Y_0\sim f(\cdot;\theta_0)$. Because $T_{1,{n_k}}^\ast(M)$ is bounded (i.e., uniformly integrable), the latter convergence in distribution implies the following convergence of bootstrap moments
\begin{eqnarray}\label{key1}
\text{E}_{\tilde{\theta}_{n_k}}^\ast\left(\frac{1}{n_k}\sum_{i=1}^{n_k}T_{i,{n_k}}^\ast(M) \right) &=& \text{E}_{\tilde{\theta}_{n_k}}^\ast [T_{1,{n_k}}^\ast(M)] \\\nonumber &\rightarrow &\text{E}_{\theta_0}\left[\frac{d^2\log f(Y_0;\theta_0)}{d\theta^2}\text{I}\left(\left|\frac{d^2\log f(Y_0;\theta_0)}{d\theta^2}\right|\leq M\right)\right]\\
\nonumber &\equiv& Q(M),
\end{eqnarray}
using that $\{T_{i,{n_k}}^\ast(M)\}_{i=1}^{n_k}$ are iid bootstrap variables.
By condition~(b) and the dominated convergence theorem, note that
$Q(M)\to-I(\theta_0)$ as $M\to\infty$. We now bound the bootstrap expectation (absolutely) between
$l_{n_k}^{\ast\prime\prime}(\theta_0)$ and $-I(\theta_0)$ as
\begin{eqnarray}\label{eq:2333}
&&\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{1}{n_k}l_{n_k}^{\ast\prime\prime}(\theta_0)+I(\theta_0)\right|\\
\nonumber &\leq&
\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{1}{n_k}\sum_{i=1}^{n_k} [T_{i,{n_k}}^\ast(M)-\text{E}^\ast_{\tilde{\theta}_{n_k}}T_{i,{n_k}}^\ast(M)]\right| +\left|\text{E}^\ast_{\tilde{\theta}_{n_k}}T_{1,{n_k}}^\ast(M)+I(\theta_0)\right|\\
\nonumber \qquad && + \frac{1}{n_k}\sum_{i=1}^{n_k}\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\right|\text{I}\left(\left|\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\right|>M\right)\\
\nonumber & \equiv& a_{1,n_k}(M) + a_{2,n_k}(M)+a_{3,n_k}(M).
\end{eqnarray}
Note that, for fixed $M>0$, $[a_{1,n_k}(M)]^2$ is bounded by the bootstrap variance
\begin{eqnarray}
\label{eqn:help2}
[a_{1,n_k}(M)]^2 &\leq &\text{E}^\ast_{\tilde{\theta}_{n_k}}\left\{\frac{1}{n_k}\sum_{i=1}^{n_k}\left[T_{i,{n_k}}^\ast(M) -\text{E}^\ast_{\tilde{\theta}_{n_k}}(T_{i,{n_k}}^\ast(M)) \right]\right\}^2\\
\nonumber &=& \frac{1}{n_k}\text{E}^\ast_{\tilde{\theta}_{n_k}}\left[ T_{1,{n_k}}^\ast(M) -\text{E}^\ast_{\tilde{\theta}_{n_k}}(T_{1,{n_k}}^\ast(M) )\right]^2 \\
\nonumber &\leq& \frac{M^2}{n_k},
\end{eqnarray}
using that $T_{i,{n_k}}^\ast(M)$, $i=1,\ldots,n_k$ are iid and bounded by $M$. Additionally, we may bound
\begin{eqnarray*}
a_{3,n_k}(M)& \equiv &\frac{1}{n_k}\sum_{i=1}^{n_k}\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\right|\text{I}\left(\left|\frac{d^2\log f(X_i^\ast;\theta_0)}{d\theta^2}\right|>M\right)
\\&= &\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{d^2\log f(X_1^\ast;\theta_0)}{d\theta^2}\right|\text{I}\left(\left|\frac{d^2\log f(X_1^\ast;\theta_0)}{d\theta^2}\right|>M\right)\\& \leq &\sup_{\theta \in O}\text{E}_{\theta}\left|\frac{d^2\log f(X_1;\theta_0)}{d\theta^2}\right|\text{I}\left(\left|\frac{d^2\log f(X_1;\theta_0)}{d\theta^2}\right|>M\right) \equiv a_3(M),
\end{eqnarray*}
under condition~(f) (e.g., note $\tilde{\theta}_{n_k}\to\theta_0\in O$) where $\lim_{M\to \infty}a_3(M)=0$.
Now fixing $M>0$ and taking limits as $n_k\to \infty$ in (\ref{eq:2333}), we have
\[
\limsup_{n_k \to \infty}\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{1}{n_k}l_{n_k}^{\ast\prime\prime}(\theta_0)+I(\theta_0)\right| \leq |Q(M)+I(\theta_0)| + a_3(M)
\]
using that $\limsup_{n_k \to \infty} a_{1,n_k}(M)=0$ by (\ref{eqn:help2}), $\limsup_{n_k \to \infty} a_{2,n_k}(M)=|Q(M)+I(\theta_0)|$
by (\ref{key1}), and $\limsup_{n_k \to \infty} a_{3,n_k}(M) \leq a_3(M)$. Now letting $M\to \infty$ and using $\lim_{M\to \infty}Q(M)=-I(\theta_0)$ and $\lim_{M\to \infty}a_3(M)=0$, we find
\[
\limsup_{n_k \to \infty}\text{E}_{\tilde{\theta}_{n_k}}^\ast\left|\frac{1}{n_k}l_{n_k}^{\ast\prime\prime}(\theta_0)+I(\theta_0)\right| =0;
\]
this establishes (\ref{eqn:aa23}).
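As an aside, the convergence (\ref{eqn:aa23}) is easy to visualize in a model where all quantities are explicit. In the (illustrative) Poisson model, $d^2\log f(x;\theta_0)/d\theta^2=-x/\theta_0^2$ and $I(\theta_0)=1/\theta_0$, so the truncation step above is not even needed; the sketch below shows the bootstrap average $l_{n}^{*\prime\prime}(\theta_0)/n$ approaching $-I(\theta_0)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
theta0 = 2.0
I0 = 1.0 / theta0                      # Fisher information of Poisson(theta0)

for n in [50, 500, 5000]:
    x = rng.poisson(theta0, n)
    theta_tilde = x.mean()             # ML estimate from the observed sample
    xb = rng.poisson(theta_tilde, n)   # parametric bootstrap sample
    l2 = np.mean(-xb / theta0 ** 2)    # l''_n(theta0)/n at the bootstrap level
    print(n, l2, -I0)
\end{verbatim}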
To show (\ref{eqn:aa3}), note that $\Delta_{n_k}^* \stackrel{p*}{\rightarrow} 0$ follows from $\tilde{\theta}_{n_k}^* \stackrel{p*}{\rightarrow} \theta_0$ and $\widehat{\theta}_{n_k}^* \stackrel{p*}{\rightarrow} \theta_0$ along with condition~(f) (e.g., note $\tilde{\theta}_{n_k}\to\theta_0\in O$). To establish (\ref{eqn:aa4}), as $\widehat{\theta}^\ast_{n_k}\stackrel{p*}{\rightarrow} \theta_0$ (so that $\widehat{\theta}^\ast_{n_k}$ lies in a neighborhood $O$ of $\theta_0$ in condition~(b)), we may
expand $d\log f(Y_{n_k}^\ast;\widehat{\theta}^\ast_{n_k})/d \theta$ around $\theta_0$ as
\[
\frac{d\log f(Y_{n_k}^\ast; \widehat{\theta}_{n_k}^\ast)}{d \theta}=\frac{d\log f(Y_{n_k}^\ast; \theta_0)}{d \theta} +\frac{d^2\log f(Y^\ast_{n_k};\zeta_{n_k}^\ast)}{d\theta^2}(\widehat{\theta}^\ast_{n_k}-\theta_0),
\]
where $\zeta_{n_k}^\ast$ is between $\theta_0$ and $\widehat{\theta}_{n_k}^\ast$. Because
$|\zeta_{n_k}^\ast-\theta_0| \leq |\widehat{\theta}_{n_k}^\ast-\theta_0| \stackrel{p*}{\rightarrow}0$
and because $Y^\ast_{n_k} \sim f(\cdot;\tilde{\theta}_{n_k})$ where $ \tilde{\theta}_{n_k} \rightarrow \theta_0 $,
it follows from condition~(f) that the bootstrap expectation $\text{E}_{\tilde{\theta}_{n_k}}^* |d^2\log f(Y_{n_k}^\ast;\zeta_{n_k}^\ast)/d\theta^2|I(| \zeta_{n_k}^\ast-\theta_0| <\delta)$ is a bounded sequence in $n_k$ (for a given
small $\delta$), which implies that $d^2\log f(Y_{n_k}^\ast;\zeta_{n_k}^\ast)/d\theta^2=O_{p*}(1)$, i.e., bounded in bootstrap probability.
Consequently, we have
\[
\frac{d\log f(Y_{n_k}^\ast; \widehat{\theta}_{n_k}^\ast)}{d \theta}=\frac{d\log f(Y_{n_k}^\ast; \theta_0)}{d \theta} + o_{p*}(1)
\]
in (\ref{eqn:aa4}).
Additionally, because $Y^\ast_{n_k} \sim f(\cdot;\tilde{\theta}_{n_k})$ where $ \tilde{\theta}_{n_k} \rightarrow \theta_0 $, it follows from conditions (g)-(h) with the continuous mapping theorem that $d\log f(Y_{n_k}^\ast; \theta_0)/d \theta \stackrel{d*}{\rightarrow} d\log f(Y_0; \theta_0)/d \theta $
and $\log \sup_{\theta \in \Theta}f(Y^\ast_{n_k};\theta)\stackrel{d*}{\rightarrow}\log \sup_{\theta \in \Theta}f(Y_0;\theta)$
for $Y_0\sim f(\cdot;\theta_0)$ in (\ref{eqn:aa4}) (e.g., consequently $d\log f(Y_{n_k}^\ast; \theta_0)/d \theta =O_{p*}(1)$ is bounded in bootstrap probability). Finally, to establish (\ref{eqn:aa5}),
we may apply the same arguments used to derive (\ref{find-diff-tilde-hat}) to find
\[
\frac{d\log f(Y^\ast_{n_k};\widehat{\theta}^*_{n_k})}{d\theta} =o_{p*}(1)+I(\theta_0)(\tilde{\theta}_{n_k}^\ast-\widehat{\theta}_{n_k}^\ast)n_k[1+o_{p*}(1)],
\]
implying $n_k|\tilde{\theta}_{n_k}^\ast-\widehat{\theta}_{n_k}^\ast|=O_{p*}(1)$.
\end{proof}
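Before the remarks, we note as an illustration (outside the formal development) that Theorem~\ref{theorem-1}(ii) justifies a simple calibration recipe: simulate the log-LR statistic in a bootstrap world generated from $f(\cdot;\tilde{\theta}_n)$ and use its quantiles as cutoffs for a prediction region. The sketch below assumes the Gaussian location model of the earlier asides; the helper name \texttt{neg2\_log\_lr} is ours, not the manuscript's.
\begin{verbatim}
import numpy as np

def neg2_log_lr(x, y):
    """-2 log Lambda_n in the N(theta, 1) model (constants cancel)."""
    th_red = (x.sum() + y) / (len(x) + 1)
    ll_full = -0.5 * ((x - x.mean()) ** 2).sum()  # sup over y's parameter adds 0
    ll_red = -0.5 * ((x - th_red) ** 2).sum() - 0.5 * (y - th_red) ** 2
    return 2.0 * (ll_full - ll_red)

rng = np.random.default_rng(3)
theta0, n, B = 1.0, 500, 2000
x = rng.normal(theta0, 1.0, n)               # observed data
theta_tilde = x.mean()                       # ML estimate

boot = np.empty(B)
for b in range(B):                           # bootstrap world: iid f(.; theta_tilde)
    xb = rng.normal(theta_tilde, 1.0, n)
    yb = rng.normal(theta_tilde, 1.0)
    boot[b] = neg2_log_lr(xb, yb)

q = np.quantile(boot, 0.95)                  # calibrated cutoff
# 95% prediction region: all y with neg2_log_lr(x, y) <= q
print(q)
\end{verbatim}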
\noindent\textbf{Remark~1}.
Here we discuss the extension to the case of multiple parameters $\theta$. To connect the multiple parameter
setting to scalar parameter version of Theorem~\ref{theorem-1} and the notation/proof developed there,
we write the multiple parameter as $\boldsymbol{\theta} = (\phi, \vartheta)\in \boldsymbol\Theta$, where $\phi$ is real-valued and $\vartheta$
may be vector valued. For constructing an LR statistic (3), in this notation the full model
is given by $X_1,\ldots,X_n \sim f(\cdot; \phi, \vartheta)$ and $Y\sim f(\cdot; \varphi, \vartheta)$
while the reduced model is $X_1,\ldots,X_n,Y \sim f(\cdot; \phi, \vartheta)$; that is, under the full model,
the density of the predictand $Y$ involves a real parameter $\varphi$ that may differ from the counterpart parameter
$\phi$ in the density of the data $X_1,\ldots,X_n$, though the remaining parameters $\vartheta$ are common to these densities in the full model.
In this notation, define $\tilde{\boldsymbol{\theta}}_n$ as the maximizer of the log-likelihood $l_n(\boldsymbol{\theta}) \equiv \sum_{i=1}^n \log f(X_i;\boldsymbol{\theta})$
and define $\widehat{\boldsymbol{\theta}}_n$ as the maximizer of the log-likelihood $l_{2,n}(\boldsymbol{\theta}) \equiv \log f(Y;\boldsymbol{\theta}) +\sum_{i=1}^n \log f(X_i;\boldsymbol{\theta})$.
The definitions of $\tilde{\boldsymbol{\theta}}_n,\widehat{\boldsymbol{\theta}}_n$ here also match those used in the proof of the scalar case
of Theorem~\ref{theorem-1}. Next define $\bar{\boldsymbol{\theta}}_n$ as the maximizer
of $l_{3,n}(\boldsymbol{\theta}) \equiv l_{3,n}(\phi,\vartheta) \equiv \log \sup_{\varphi} f(Y;\varphi,\vartheta) +\sum_{i=1}^n \log f(X_i;\boldsymbol{\theta})$.
In the scalar parameter case, the distinction between $l_n(\boldsymbol{\theta})$ and $l_{3,n}(\boldsymbol{\theta})$ as log-likelihood functions is small
in the sense that the difference $l_{3,n}(\boldsymbol{\theta})-l_n(\boldsymbol{\theta})$ does not depend on $\boldsymbol{\theta}$ (i.e., when there
is one parameter, we can write the difference $l_{3,n}(\boldsymbol{\theta})-l_n(\boldsymbol{\theta}) = \log \sup_{\phi} f(Y;\phi)$ in a way that does not depend on the parameter $\boldsymbol{\theta}$). That is, when there is one parameter, $\tilde{\boldsymbol{\theta}}_n$ and $\bar{\boldsymbol{\theta}}_n$ are the same; however, $\tilde{\boldsymbol{\theta}}_n$ and $\bar{\boldsymbol{\theta}}_n$ may not be the same estimators of $\boldsymbol{\theta}$ in the multi-parameter setting. Hence, to connect the construction
of LR statistics between the multiple and scalar parameter case, we can write the LR statistic (3) as
\begin{eqnarray*}
-2 \log \Lambda_n(\boldsymbol{X}_n,Y) &=& -2 [l_n(\widehat{\boldsymbol{\theta}}_n) - l_{3,n}(\bar{\boldsymbol{\theta}}_n) ]\\
& =& -2 [l_n(\widehat{\boldsymbol{\theta}}_n) - l_{2,n}(\tilde{\boldsymbol{\theta}}_n) ] + 2 [l_{3,n}(\bar{\boldsymbol{\theta}}_n)-l_{2,n}(\tilde{\boldsymbol{\theta}}_n)].
\end{eqnarray*}
Under the same conditions and arguments used in the scalar version of Theorem~\ref{theorem-1}, the component $-2 [l_n(\widehat{\boldsymbol{\theta}}_n) - l_{3,n}(\bar{\boldsymbol{\theta}}_n) ]$ above behaves the same way in the multiple parameter case as in the scalar case; that is,
\[
-2 [l_n(\widehat{\boldsymbol{\theta}}_n) - l_{3,n}(\bar{\boldsymbol{\theta}}_n) ] = -2 \log f(Y;\boldsymbol{\theta}_0) + o_p(1)
\]
holds where $\boldsymbol{\theta}_0$ denotes the true parameter value (e.g., $\boldsymbol{\theta}_0=(\phi_0, \vartheta_0)$ for multiple parameters).
However, the component $[l_{3,n}(\bar{\boldsymbol{\theta}}_n)-l_{2,n}(\tilde{\boldsymbol{\theta}}_n)]$ behaves slightly differently between scalar and multiple parameter cases. For a single parameter $\boldsymbol{\theta}$, we have $2[l_{3,n}(\bar{\boldsymbol{\theta}}_n)-l_{2,n}(\tilde{\boldsymbol{\theta}}_n)]=2 \log\sup_{\varphi} f(Y;\varphi)$ exactly, where $Y\sim f(\cdot;\boldsymbol{\theta}_0)$ again, and there are then no further steps needed to obtain the limit of
$-2 \log \Lambda_n(\boldsymbol{X}_n,Y)$. For multiple parameters though, we repeat an expansion of $2[l_{3,n}(\bar{\boldsymbol{\theta}}_n)-l_{2,n}(\tilde{\boldsymbol{\theta}}_n)]$, that is similar to that for $-2 [l_n(\widehat{\boldsymbol{\theta}}_n) - l_{2,n}(\tilde{\boldsymbol{\theta}}_n) ]$, to obtain
\[
2[l_{3,n}(\bar{\boldsymbol{\theta}}_n)-l_{2,n}(\tilde{\boldsymbol{\theta}}_n)] = 2 \log \sup_{\phi} f(Y;\phi,\vartheta_0) + o_p(1),
\]
where $\boldsymbol{\theta}_0=(\phi_0, \vartheta_0)$ again denotes the true parameter value. Upon this step, the limit of the log-LR statistic $-2 \log \Lambda_n(\boldsymbol{X}_n,Y)$ follows in a unified way for both scalar and multiple parameter cases.
The conditions required in the multiple parameter case largely match those given in the scalar version of Theorem~\ref{theorem-1},
with the understanding that first/second derivatives in the condition statements are replaced by first/second partial derivatives
and that parameter estimators $\tilde{\boldsymbol{\theta}}_n, \widehat{\boldsymbol{\theta}}_n, \bar{\boldsymbol{\theta}}_n$ are defined as above (in agreement between scalar and multiple parameter cases).
The other modifications to the conditions given in the scalar version of Theorem~\ref{theorem-1} are as follows.
Let $\boldsymbol{\theta}_0=(\phi_0,\vartheta_0)$ denote the true parameter value.
Condition (a) is augmented to include $\bar{\boldsymbol{\theta}}_n \stackrel{p}{\rightarrow}\boldsymbol{\theta}_0$; condition~(h)
is augmented to include $\rho(|\bar{\boldsymbol{\theta}}_n^*- \tilde{\boldsymbol{\theta}}_n|,0)\stackrel{p}{\rightarrow}0$; and ``$\sup_{\boldsymbol{\theta} \in \Theta} f(x;\boldsymbol{\theta})$"
is replaced by ``$\sup_{\phi} f(x;\phi,\vartheta_0)$" in conditions (e) and (g). Finally, one additional assumption is required
that $g(x;\vartheta)\equiv \sup_{\phi} f(x;\phi,\vartheta)$ is continuously differentiable in a neighborhood $\mathcal{O}$ of $\vartheta_0$ and
that, over a neighborhood $O$ of $\boldsymbol{\theta}_0$, the expectation $\sup_{\boldsymbol{\theta} \in O}E_{\boldsymbol{\theta}} \sup_{ \vartheta \in \mathcal{O}} |\partial g(X_1;\vartheta)/\partial \vartheta |<\infty$ is finite,
where $\text{E}_{\boldsymbol{\theta}}$ denotes expected value with respect to $X_1\sim f(\cdot;\boldsymbol{\theta})$.\\
\noindent\textbf{Remark~2}. Here we describe the derivation of the limit distribution of the signed log-LR statistic, as given in Remark~2 from
the main manuscript. Let $\Lambda_n(\boldsymbol{X}_n,y)$ denote an LR statistic (3), as a function of $y$ for a given data $\boldsymbol{X}_n$,
which is assumed to be unimodal (either with probability 1 or with probability approaching 1 as $n\to \infty$) with a mode at $y_0\equiv y_0(\boldsymbol{X}_n)$. The limit of the signed log-LR statistic $-(-1)^{I(Y \leq y_0)}2 \log\Lambda_n(\boldsymbol{X}_n,Y)$ follows
from the established limit of the log-LR statistic $-2 \log\Lambda_n(\boldsymbol{X}_n,Y)$ in Theorem~\ref{theorem-1} combined with
showing $(-1)^{I(Y \leq y_0)} \stackrel{p}{\rightarrow} (-1)^{I(Y \leq m_0)}$ as $n\to \infty$, where $m_0$ is the mode/maximizer
of $h(y) = f(y; \boldsymbol{\theta}_0)/ \sup_{\vartheta} f(y;\vartheta,\boldsymbol{\theta}_0^\prime)$ and
$\boldsymbol{\theta}_0 = (\vartheta_0, \boldsymbol{\theta}_0^\prime)$ denotes the true parameter value (in the notation of Remark~2).
Because $Y\sim f(\cdot; \boldsymbol{\theta}_0)$ is a continuous random variable, the convergence $(-1)^{I(Y \leq y_0)} \stackrel{p}{\rightarrow} (-1)^{I(Y \leq m_0)}$ follows by showing $ y_0 \stackrel{p}{\rightarrow} m_0$; from this, $(Y,y_0) \stackrel{d}{\rightarrow} (Y,m_0)$ then holds and the continuous mapping theorem then yields $(-1)^{I(Y \leq y_0)} \stackrel{p}{\rightarrow} (-1)^{I(Y \leq m_0)}$.
To establish $ y_0(\boldsymbol{X}_n)\equiv y_0 \stackrel{p}{\rightarrow} m_0$, we pick/fix some small $\epsilon>0$. Then, the LR statistic $\Lambda_n(\boldsymbol{X}_n,m)\stackrel{p}{\rightarrow} h(m)$ for $m\in\{m_0,\ m_0 \pm \epsilon\}$. Because $h(m_0-\epsilon)<h(m_0)$
and $h(m_0+\epsilon)<h(m_0)$, and because $\Lambda_n(\boldsymbol{X}_n,y)$ is unimodal in $y$,
the maximizer $y_0$ of $\Lambda_n(\boldsymbol{X}_n,y)$
must lie in the interval $(m_0-\epsilon,m_0+\epsilon)$ (with arbitrarily high probability for large $n$), due to the fact
that $\Lambda_n(\boldsymbol{X}_n,y)$ at the endpoints $y=m_0\pm \epsilon$ is smaller than $\Lambda_n(\boldsymbol{X}_n,y)$ at $y=m_0$.
This shows $\lim_{n\to \infty} \Pr(|y_0(\boldsymbol{X}_n) - m_0|<\epsilon)=1$ and, since $\epsilon>0$ was arbitrary, $y_0(\boldsymbol{X}_n)\equiv y_0 \stackrel{p}{\rightarrow} m_0$ follows.
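As a numerical aside, the mode $y_0(\boldsymbol{X}_n)$ and its limit $m_0$ are easy to exhibit. In the (illustrative) Gaussian location model, $h(y)=\sqrt{2\pi}\,f(y;\theta_0)$ is maximized at $m_0=\theta_0$, and a grid search over $y$ recovers $y_0\approx \bar{X}_n\stackrel{p}{\rightarrow}\theta_0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
theta0, n = 1.0, 400
x = rng.normal(theta0, 1.0, n)

def neg2_log_lr(x, y):                  # as in the earlier sketch
    th_red = (x.sum() + y) / (len(x) + 1)
    ll_full = -0.5 * ((x - x.mean()) ** 2).sum()
    ll_red = -0.5 * ((x - th_red) ** 2).sum() - 0.5 * (y - th_red) ** 2
    return 2.0 * (ll_full - ll_red)

grid = np.linspace(-2.0, 4.0, 2001)
vals = np.array([neg2_log_lr(x, y) for y in grid])
y0 = grid[vals.argmin()]                # mode of Lambda_n(X_n, y) over the grid
print(y0, x.mean())                     # y0 tracks the sample mean -> m_0 = theta0
\end{verbatim}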
\subsection{Proof of Theorem~\ref{theorem-chi-square-1}}
After presenting a proof of Theorem~\ref{theorem-chi-square-1}, we provide some further explanation (cf.~Remark~3 below) of the theoretical details mentioned in Section 6.4 of the main manuscript. These details concern how Theorem~\ref{theorem-chi-square-1} applies for justifying likelihood ratio statistics used in prediction problems for discrete random variables (e.g., binomial and Poisson predictions from Sections~6.1-6.2).
\begin{theorem}\label{theorem-chi-square-1}
Suppose iid data $X_1,\dots,X_n$ have a marginal density
$f(\cdot;\theta)$, depending on scalar parameter $\theta\in \Theta$, and satisfy Assumptions~(a)-(d) of Theorem~\ref{theorem-1}
with true parameter value $\theta_0$. Suppose further that, independently, $Y_1,\dots,Y_m$ are iid random variables with the same marginal density.
Consider a hypothesis test where
the null hypothesis (or reduced model) is that $X_1,\dots,X_n$ and $Y_1,\dots,Y_m$ have the same density $f(\cdot;\theta)$, while
the alternative hypothesis (or full model) is that
\[
X_1,\dots,X_n\sim f(\cdot;\theta),\quad Y_1,\dots,Y_m\sim f(\cdot;\theta+\delta);
\]
above $\delta=0$ corresponds to the null hypothesis and the assumed true data distribution.
Denoting the parameter vector as $\boldsymbol{\xi}=(\delta,\theta)$ and the log-likelihood function based on $(X_1,\dots,X_n,Y_1,\dots,Y_m)$ as $l_{n,m}(\boldsymbol{\xi})$, let $\tilde{\boldsymbol\xi} = (0,\tilde{\theta})$ and $\widehat{\boldsymbol\xi} = (\widehat{\delta},\widehat{\theta})$ denote the ML estimators under the reduced and full models, respectively.
Then, as $n,m\to \infty$,
the likelihood ratio statistic has a chi-square limit with 1 degree of freedom, namely
\[
-2\log\Lambda_{n,m}\equiv 2\left[l_{n,m}(\widehat{\boldsymbol{\xi}})-l_{n,m}(\tilde{\boldsymbol{\xi}})\right]\stackrel{d}{\rightarrow} \chi_1^2.
\]
\end{theorem}
\begin{proof} Let $\boldsymbol{\xi}_0=(0,\theta_0)$ denote the true parameter. We first provide some notation to describe partial derivatives of the log-likelihood function. For the density $f(y;\theta+\delta)$ of $Y_i$ under the full model and letting $\eta=\theta+\delta$, note that
\[
\frac{\partial\log f(y;\theta+\delta)}{\partial\theta}=\frac{d\log f(y;\eta)}{d\eta}=\frac{\partial\log f(y;\theta+\delta)}{\partial\delta},
\]
and that
\begin{equation}\label{second-for-y}
\frac{\partial^2\log f(y;\theta+\delta)}{\partial\theta^2}=\frac{\partial^2\log f(y;\theta+\delta)}{\partial\theta\partial\delta}=\frac{\partial^2\log f(y;\theta+\delta)}{\partial\delta^2};
\end{equation}
for the density $f(x;\theta)$ of $X_i$ (under full or reduced models), we have
\begin{equation}\label{second-for-x}
\frac{\partial\log f(x;\theta)}{\partial\delta}= \frac{\partial^2\log f(x;\theta)}{\partial\delta^2}=\frac{\partial^2\log f(x;\theta)}{\partial\theta\, \partial\delta}=\frac{\partial^2\log f(x;\theta)}{ \partial\delta\, \partial\theta}=0.
\end{equation}
Then, the bivariate vector of first partial derivatives of the log-likelihood, at $\boldsymbol{\xi}=(\delta,\theta)$, is given by
\[
l_{n,m}^{\prime }(\boldsymbol{\xi}) \equiv \left[\begin{array}{l }
\displaystyle{\sum_{j=1}^{m} \frac{\partial \log f(Y_j;\theta+\delta)}{\partial\delta }} \\
\displaystyle{\sum_{i=1}^{n}\frac{\partial \log f(X_i;\theta)}{\partial\theta }+\sum_{j=1}^{m}\frac{\partial \log f(Y_j;\theta+\delta)}{\partial\theta }}
\end{array}\right],
\]
while,
by (\ref{second-for-y})-(\ref{second-for-x}), the second partial derivative matrix of the log-likelihood evaluated at $\boldsymbol{\xi}=(\delta,\theta)$, is given by
\[
l_{n,m}^{\prime\prime}(\boldsymbol{\xi})\equiv \left[\begin{array}{lcl}
\displaystyle{\sum_{j=1}^{m} \frac{\partial^2\log f(Y_j;\theta+\delta)}{\partial\delta^2}} &&\displaystyle{ \sum_{j=1}^{m}\frac{\partial^2\log f(Y_j;\theta+\delta)}{\partial\theta\partial\delta} }\\
\displaystyle{\sum_{j=1}^{m}\frac{\partial^2\log f(Y_j;\theta+\delta)}{\partial\theta\partial\delta} }& & \displaystyle{\sum_{i=1}^{n}\frac{\partial^2\log f(X_i;\theta)}{\partial\theta^2}+\sum_{j=1}^{m}\frac{\partial^2\log f(Y_j;\theta+\delta)}{\partial\theta^2}}
\end{array}\right].
\]
Furthermore, at the true parameter $\boldsymbol{\xi}_0=(0,\theta_0)$, we have
\[
\text{E}_{\theta_0} \frac{\partial\log f(X_1;\theta_0)}{\partial\theta} = \text{E}_{\theta_0} \frac{\partial\log f(Y_1;\theta_0)}{\partial\theta}=
\text{E}_{\theta_0} \frac{\partial\log f(Y_1;\theta_0)}{\partial\delta} = 0,
\]
while
\begin{eqnarray}
\label{second-equal}
\text{E}_{\theta_0} \frac{\partial^2\log f(X_1;\theta_0)}{\partial\theta^2} = \text{E}_{\theta_0} \frac{\partial^2\log f(Y_1;\theta_0)}{\partial\theta^2} &=& \text{E}_{\theta_0} \frac{\partial^2\log f(Y_1;\theta_0)}{\partial\theta \partial \delta}\\
\nonumber & =& \text{E}_{\theta_0} \frac{\partial^2\log f(Y_1;\theta_0)}{\partial \delta \partial\theta} = -I_{\theta_0}
\end{eqnarray}
holds along with
\begin{equation}
\label{second-equal2}
\text{Var}_{\theta_0}\left( \frac{\partial\log f(X_1;\theta_0)}{\partial\theta} \right) = \text{Var}_{\theta_0}\left( \frac{\partial\log f(Y_1;\theta_0)}{\partial\theta} \right)= \text{Var}_{\theta_0}\left( \frac{\partial\log f(Y_1;\theta_0)}{\partial\delta} \right)=I_{\theta_0}
\end{equation}
for an information number $I_{\theta_0}\in (0,\infty)$.
Under the conditions, both $\tilde{\boldsymbol\xi} \stackrel{p}{\rightarrow} \boldsymbol{\xi}_0$ and $\widehat{\boldsymbol\xi} \stackrel{p}{\rightarrow} \boldsymbol{\xi}_0$ hold as $n,m\to \infty$
so that estimators lie in a neighborhood of $\boldsymbol{\xi}_0=(0,\theta_0)$ (for large $m,n$) and the log-likelihood $l_{n,m}(\boldsymbol{\xi})$
is twice continuously differentiable in this parameter neighborhood.
To determine the distributional limit of the log-likelihood ratio statistic
\[
-2\log\Lambda_{n,m}=2\left[l_{n,m}(\widehat{\boldsymbol\xi})-l_{n,m}(\tilde{\boldsymbol\xi})\right],
\]
we may expand $l_{n,m}(\tilde{\boldsymbol\xi})$ at $\widehat{\boldsymbol\xi}$ to find
\[
l_{n,m}(\tilde{\boldsymbol\xi})=l_{n,m}(\widehat{\boldsymbol\xi})+l_{n,m}^\prime(\widehat{\boldsymbol\xi})(\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})+\frac{1}{2}(\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})^Tl_{n,m}^{\prime\prime}(\boldsymbol\xi^\ast)(\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi}),
\]
where $\boldsymbol\xi^\ast$ is between $\tilde{\boldsymbol\xi}$ and $\widehat{\boldsymbol\xi}$.
Because $l_{n,m}^\prime(\widehat{\boldsymbol\xi})=\boldsymbol{0}\equiv (0,0)^{T}$ holds for the maximizer $\widehat{\boldsymbol\xi}$, the likelihood ratio statistic may be written as
\begin{equation}\label{eq-lik-ratio}
-2\log\Lambda_{n,m}=(n+m)(\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})^T\left[-\frac{l_{n,m}^{\prime\prime}(\boldsymbol\xi^\ast)}{n+m}\right](\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi}).
\end{equation}
Let
$\{(n_j,m_j)\}$ denote an arbitrary subsequence of $\{(n,m)\}$. Then, there exists a further subsequence $\{(n_k,m_k)\}\subset \{(n_j,m_j)\}$ and a value $c\in [0,1]$ such that $m_k/(m_k+n_k)\rightarrow c$ as $k \to \infty$, due to the boundedness of $\{m/(n+m)\}$.
To show the chi-square limit of the log-likelihood ratio statistic (\ref{eq-lik-ratio}), it suffices to establish that $-2\log\Lambda_{n_k,m_k} \stackrel{d}{\rightarrow} \chi_1^2$ holds. We establish this by considering three cases $c\in (0,1)$, $c=0$, or $c=1$. For simplicity in the following, we will suppress the subsequence notation and consider sample sizes ``$(n,m)$" in place of ``$(n_k,m_k)$."
By using the smoothness conditions and the law of large numbers along with (\ref{second-equal}) and $m/(m+n)\to c\in [0,1]$ in (\ref{eq-lik-ratio}), we have
\begin{equation}\label{eq-fisher-inf}
-\frac{l_{n,m}^{\prime\prime}(\boldsymbol\xi^\ast)}{n+m}\xrightarrow{p} \begin{bmatrix}
c & c\\
c & 1
\end{bmatrix} I_{\theta_0} \equiv \boldsymbol{I}(\boldsymbol \xi_0).
\end{equation}
We next discuss the cases where (i) $0<c<1$; (ii) $c=0$; or (iii) $c=1$.
\noindent\textbf{(i)} When $0<c<1$, note that the matrix $\boldsymbol{I}(\boldsymbol \xi_0)$ is invertible.
From (\ref{eq-lik-ratio}) and (\ref{eq-fisher-inf}), the limit distribution of the log-likelihood statistic will follow from the distribution of
$\sqrt{n+m}(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})$. To determine the latter,
we start by expanding $l^\prime_{n,m}(\Tilde{\boldsymbol\xi})$ at $\widehat{\boldsymbol\xi}$ to find
\begin{equation}\label{eq-expand-2}
\frac{1}{\sqrt{n+m}}l^\prime_{n,m}(\Tilde{\boldsymbol\xi})=\frac{1}{\sqrt{n+m}}l^\prime_{n,m}(\widehat{\boldsymbol\xi})+\frac{l_{n,m}^{\prime\prime}(\boldsymbol\xi^{\ast}_2)}{n+m}\sqrt{n+m}(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi}),
\end{equation}
where $\boldsymbol{\xi}^{\ast}_2$ denotes some value between $\tilde{\boldsymbol{\xi}}$ and $\widehat{\boldsymbol{\xi}}$.
Because $l^\prime_{n,m}(\widehat{\boldsymbol{\xi}})=\textbf0$ holds and because $[-{l_{n,m}^{\prime\prime}(\boldsymbol\xi^{\ast}_2)}/{(n+m)}]^{-1} \xrightarrow{p} [\boldsymbol{I}(\boldsymbol \xi_0)]^{-1}$ as in (\ref{eq-fisher-inf}), we may find the limit distribution of $l^\prime_{n,m}(\Tilde{\boldsymbol\xi})/\sqrt{n+m}$ to determine the distribution of $\sqrt{n+m}(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})$ in (\ref{eq-expand-2}).
We define a matrix $H$ as
\[
H=\begin{bmatrix}
0 & 0\\
0 & 1/I_{\theta_0}
\end{bmatrix}
\]
so that
\[
Hl_{n,m}^{\prime}(\tilde{\boldsymbol{\xi}})=\textbf{0}
\]
holds because the second element of $l_{n,m}^{\prime}(\Tilde{\boldsymbol\xi})$ is 0 (maximizing the likelihood with respect to $\theta$ when $\delta=0$).
We then expand $l^\prime_{n,m}(\Tilde{\boldsymbol\xi})/\sqrt{n+m}$ at $\boldsymbol\xi_0$ to find
\begin{equation}\label{eq:expand-1}
\frac{l_{n,m}^\prime(\Tilde{\boldsymbol\xi})}{\sqrt{n+m}}=\frac{l_{n,m}^\prime(\boldsymbol\xi_0)}{\sqrt{n+m}}+\frac{l_{n,m}^{\prime\prime}(\boldsymbol\xi^{\ast}_3)}{n+m}\sqrt{n+m}(\tilde{\boldsymbol\xi}-\boldsymbol\xi_0),
\end{equation}
where $\boldsymbol{\xi}^{\ast}_3$ is some value between $\tilde{\boldsymbol{\xi}}$ and $\boldsymbol{\xi}_0$.
By multiplying (\ref{eq:expand-1}) with $H$, we have
\[
H\frac{l_{n,m}^\prime(\Tilde{\boldsymbol\xi})}{\sqrt{n+m}}=H\frac{l_{n,m}^\prime(\boldsymbol\xi_0)}{\sqrt{n+m}}+H\frac{l_{n,m}^{\prime\prime}
(\boldsymbol\xi^{\ast}_3)}{n+m}\sqrt{n+m}(\tilde{\boldsymbol{\xi}}-\boldsymbol{\xi}_0)=\boldsymbol0;
\]
using $\|-l_{n,m}^{\prime\prime}(\boldsymbol\xi_3^*)/(n+m) -\boldsymbol{I}(\boldsymbol \xi_0)\| =o_p(1)
$ as in
(\ref{eq-fisher-inf}) along with $\|\tilde{\boldsymbol{\xi}} -\boldsymbol{\xi}_0\|=\|(0,\tilde{\theta}) - (0,\theta_0)\|=|\tilde{\theta} - \theta_0| = O_p((n+m)^{-1/2})$ (this order following from the CLT applied to the second entry in (\ref{eq:expand-1})), we may further
re-write as
\begin{eqnarray}
\label{eq-whatever-key}
H\frac{l_{n,m}^\prime(\boldsymbol\xi_0)}{\sqrt{n+m}} + \boldsymbol{R}_{n,m} &
=&H \boldsymbol{I}(\boldsymbol \xi_0) \sqrt{n+m}(\tilde{\boldsymbol{\xi}}-\boldsymbol{\xi}_0)
\\
\nonumber &=& \sqrt{n+m}H \boldsymbol{I}(\boldsymbol \xi_0) (\tilde{\boldsymbol{\xi}}-\boldsymbol{\xi}_0) \\
\nonumber &=&
\sqrt{n+m}\begin{bmatrix}
0 & 0\\
0 & 1/I_{\theta_0}
\end{bmatrix}
I_{\theta_0}
\begin{bmatrix}
c & c\\
c & 1
\end{bmatrix}
\begin{bmatrix}
0\\
\tilde{\theta}-\theta_0
\end{bmatrix}\\
\nonumber&=&\sqrt{n+m}(\tilde{\boldsymbol{\xi}}-\boldsymbol{\xi}_0),
\end{eqnarray}
where the bivariate $\boldsymbol{R}_{n,m}$ denotes a remainder term with $\|\boldsymbol{R}_{n,m}\|=o_p(1)$.
By replacing this form (\ref{eq-whatever-key}) of $\sqrt{n+m}(\tilde{\boldsymbol{\xi}}-\boldsymbol{\xi}_0)$ in (\ref{eq:expand-1})
and using $\|-l_{n,m}^{\prime\prime}(\boldsymbol\xi_3^*)/(n+m) - \boldsymbol{I}(\boldsymbol \xi_0) \|=o_p(1)$ again,
we have
\begin{equation}\label{eqn:end2}
\frac{l_{n,m}^\prime(\Tilde{\boldsymbol\xi})}{\sqrt{n+m}}=\frac{\textbf{I}-\boldsymbol{I}(\boldsymbol\xi_0)H}{\sqrt{n+m}}
l_{n,m}^\prime(\boldsymbol\xi_0)+
\tilde{\boldsymbol{R}}_{n,m}
\end{equation}
where $\textbf{I}$ is a $2\times2$ identity matrix and $\tilde{\boldsymbol{R}}_{n,m}$ is a bivariate remainder term with $\|\tilde{\boldsymbol{R}}_{n,m}\|=o_p(1)$.
Using the standard CLT with (\ref{second-equal2}), we have that
\[
\frac{l^\prime_{n,m}(\boldsymbol\xi_0)}{\sqrt{n+m}}=\sqrt{n+m}
\left(\frac{l^\prime_{n,m}(\boldsymbol\xi_0)}{n+m}-\boldsymbol0\right)\xrightarrow{d}W \sim \text{MVN}\left(\boldsymbol0,{\boldsymbol{I}(\boldsymbol\xi_0)}\right)
\]
as $n,m\to\infty$ due to independence and $m/(n+m)\to c$, where $W$ has a bivariate normal distribution; the continuous mapping theorem with (\ref{eqn:end2}) then gives
\begin{equation}\label{eq-almost-there}
\frac{l_{n,m}^\prime(\tilde{\boldsymbol{\xi}})}{\sqrt{n+m}}\xrightarrow{d}\left[\textbf{I}-\boldsymbol{I}(\boldsymbol\xi_0)H\right]W.
\end{equation}
Then, by applying (\ref{eq-almost-there}) with $\|-l_{n,m}^{\prime\prime}(\boldsymbol\xi_2^*)/(n+m) -\boldsymbol{I}(\boldsymbol \xi_0)\| =o_p(1)
$ in (\ref{eq-expand-2}), the limit distribution of $\sqrt{n+m}(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})$ follows as
\[
\sqrt{n+m}(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})\xrightarrow{d}-[\boldsymbol{I}(\boldsymbol\xi_0)]^{-1}[\textbf{I}-\boldsymbol{I}(\boldsymbol\xi_0)H]W,
\]
so that the log-likelihood ratio statistic in (\ref{eq-lik-ratio}) has an asymptotic distribution as
\[
\begin{split}
-2\log\Lambda_{n,m}&\xrightarrow{d} W^T[\textbf{I}-\boldsymbol{I}(\boldsymbol\xi_0)H]^T [\boldsymbol{I}(\boldsymbol\xi_0)]^{-1}[\textbf{I}-\boldsymbol{I}(\boldsymbol\xi_0)H]W\\
&=Z^T (\textbf{I} -[\boldsymbol{I}(\boldsymbol\xi_0)]^{1/2} H [\boldsymbol{I}(\boldsymbol\xi_0)]^{1/2}) Z \sim \chi_1^2
\end{split}
\]
using that $Z\sim\text{MVN}(0,\textbf{I})$ and that $\textbf{I} -[\boldsymbol{I}(\boldsymbol\xi_0)]^{1/2} H [\boldsymbol{I}(\boldsymbol\xi_0)]^{1/2}$ is an idempotent matrix of rank/trace $1$.
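The idempotency and trace claims in this last step can be verified numerically (an aside, with illustrative values of $c$ and $I_{\theta_0}$):
\begin{verbatim}
import numpy as np

c, I0 = 0.3, 2.0                                  # illustrative: c in (0,1), I0 > 0
Ixi = I0 * np.array([[c, c], [c, 1.0]])           # I(xi_0) from (eq-fisher-inf)
H = np.array([[0.0, 0.0], [0.0, 1.0 / I0]])

w, V = np.linalg.eigh(Ixi)                        # symmetric square root I(xi_0)^{1/2}
S = V @ np.diag(np.sqrt(w)) @ V.T

P = np.eye(2) - S @ H @ S                         # I - I^{1/2} H I^{1/2}
print(np.allclose(P @ P, P))                      # True: idempotent
print(np.trace(P))                                # 1.0: rank/trace one
\end{verbatim}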
\noindent\textbf{(ii)} When $c=0$, we use a similar argument to find
$\|\tilde{\boldsymbol{\xi}} -\boldsymbol{\xi}_0\|=\|(0,\tilde{\theta}) - (0,\theta_0)\|=|\tilde{\theta} - \theta_0| = O_p((n+m)^{-1/2})$ (following from the CLT and law of large numbers applied to the second entry in (\ref{eq:expand-1})); note this implies
$\sqrt{m}\|\tilde{\boldsymbol{\xi}} -\boldsymbol{\xi}_0\| = o_p(1)$ since $m/(n+m)\rightarrow c=0$. Now we multiply
(\ref{eq:expand-1}) by $\sqrt{n+m}/\sqrt{m}$ and consider the first components, say $l_{n,m}^{\prime(1)}(\Tilde{\boldsymbol\xi})$ and $l_{n,m}^{\prime(1)}( \boldsymbol{\xi}_0)$, of
$l_{n,m}^{\prime }(\Tilde{\boldsymbol\xi})$ and $l_{n,m}^{\prime}( \boldsymbol{\xi}_0)$ in (\ref{eq:expand-1}), respectively,
along with the first row, say $l_{n,m}^{\prime\prime (1) }(\boldsymbol{\xi}^*_3)$, of $l_{n,m}^{\prime\prime }(\boldsymbol{\xi}^*_3)$
in (\ref{eq:expand-1}); we then have from (\ref{eq:expand-1}) that
\begin{eqnarray}
\nonumber \frac{l_{n,m}^{\prime(1)}(\Tilde{\boldsymbol\xi})}{\sqrt{m}} &=& \frac{l_{n,m}^{\prime(1)}( \boldsymbol{\xi}_0)}{\sqrt{m}} +
\frac{l_{n,m}^{\prime\prime (1) }(\boldsymbol{\xi}^*_3)}{m} \sqrt{m}(\tilde{\boldsymbol{\xi}} -\boldsymbol{\xi}_0)\\
\label{eqn:gr2} & \stackrel{d}{\rightarrow}& N(0,I_{\theta_0})
\end{eqnarray}
as $n,m\to \infty$, using above that $l_{n,m}^{\prime(1)}( \boldsymbol{\xi}_0)/\sqrt{m} \stackrel{d}{\rightarrow} N(0,I_{\theta_0})$
by the standard CLT along with $ l_{n,m}^{\prime\prime (1) }(\boldsymbol{\xi}^*_3)/m \stackrel{p}{\rightarrow} (-I_{\theta_0},-I_{\theta_0})$
by the law of large numbers and with $\sqrt{m}\|\tilde{\boldsymbol{\xi}} -\boldsymbol{\xi}_0\| = o_p(1)$.
We now write the $2\times 2$ matrix $l_{n,m}^{\prime\prime }(\boldsymbol{\xi}^*_2)$ in (\ref{eq-expand-2}) as
\[
l_{n,m}^{\prime\prime }(\boldsymbol{\xi}^*_2) = \left[\begin{array}{lcl}
a_{n,m,1} && a_{n,m,2}\\
a_{n,m,2} && a_{n,m,3}
\end{array}\right],
\]
noting that the component sample averages satisfy (for the information number $I_{\theta_0}\in (0,\infty)$)
\begin{equation}
\label{eqn:gr}
\frac{a_{n,m,1}}{m}\stackrel{p}{\rightarrow}-I_{\theta_0}, \qquad \frac{a_{n,m,2}}{m} \stackrel{p}{\rightarrow}-I_{\theta_0}, \qquad
\frac{a_{n,m,3}}{n+m} \stackrel{p}{\rightarrow}-I_{\theta_0}
\end{equation} (i.e., as $a_{n,m,3}$ involves a sum of $n+m$ terms while $ a_{n,m,1}, a_{n,m,2}$ are sums of $m$ terms). Recalling that $(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi}) = ( -\widehat{\delta}, \tilde{\theta}-\widehat{\theta})$, the second component of (\ref{eq-expand-2}) entails that $\tilde{\theta}-\widehat{\theta} = \widehat{\delta} a_{n,m,2}/a_{n,m,3}$; substitution of this quantity into
the first component of (\ref{eq-expand-2}) then gives that
\begin{eqnarray*}
\frac{l_{n,m}^{\prime(1)}(\Tilde{\boldsymbol\xi})}{\sqrt{m}} &=& \frac{a_{n,m,2}}{m} \sqrt{m}(\tilde{\theta}-\widehat{\theta}) -\frac{a_{n,m,1}}{m} \sqrt{m}\widehat{\delta}\\
&=& \left(\frac{a_{n,m,2}}{m} \frac{a_{n,m,2}}{a_{n,m,3}} -\frac{a_{n,m,1}}{m} \right) \sqrt{m}\widehat{\delta}.
\end{eqnarray*}
From this with (\ref{eqn:gr}) and $m/(n+m)\to 0$, we have
\begin{eqnarray*}
\frac{a_{n,m,2}}{m} \frac{a_{n,m,2} }{a_{n,m,3} } -\frac{a_{n,m,1}}{m} & = &\frac{m}{n+m} \frac{a_{n,m,2}}{m} \frac{a_{n,m,2}/m}{a_{n,m,3}/(n+m)} -\frac{a_{n,m,1}}{m} \\&=& o_p(1)-\frac{a_{n,m,1}}{m} = I_{\theta_0}(1+o_p(1))
\end{eqnarray*}
and then that
\begin{equation}
\label{eqn:gr3}
\sqrt{m}\widehat{\delta} \stackrel{d}{\rightarrow} N(0,1/I_{\theta_0})
\end{equation}
by (\ref{eqn:gr2}) and Slutsky's theorem.
Now similarly write the $2\times 2$ matrix $l_{n,m}^{\prime\prime }(\boldsymbol{\xi}^*)$ in (\ref{eq-lik-ratio}) as
\[
l_{n,m}^{\prime\prime }(\boldsymbol{\xi}^*) = \left[\begin{array}{lcl}
\tilde{a}_{n,m,1} && \tilde{a}_{n,m,2}\\
\tilde{a}_{n,m,2} && \tilde{a}_{n,m,3}
\end{array}\right],
\]
where (\ref{eqn:gr}) likewise holds for the counterpart sample averages $ \tilde{a}_{n,m,1}/m$, $\tilde{a}_{n,m,2}/m$, $\tilde{a}_{n,m,3}/(n+m)$;
then re-writing (\ref{eq-lik-ratio}) in terms of $(\Tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi}) = ( -\widehat{\delta}, \tilde{\theta}-\widehat{\theta})$, where $\tilde{\theta}-\widehat{\theta} = \widehat{\delta} a_{n,m,2}/a_{n,m,3}$ again, we have
\begin{eqnarray*}
-2\log\Lambda_{n,m}&=& - m(\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})^T\left[ \frac{l_{n,m}^{\prime\prime}(\boldsymbol\xi^\ast)}{m}\right](\tilde{\boldsymbol\xi}-\widehat{\boldsymbol\xi})
\\&=&
-m (\widehat{\delta})^2 \left[\begin{array}{cc}
-1 & \frac{a_{n,m,2}}{a_{n,m,3}}
\end{array}
\right]\left[\begin{array}{lcl}
\tilde{a}_{n,m,1}/m && \tilde{a}_{n,m,2}/m\\
\tilde{a}_{n,m,2}/m && \tilde{a}_{n,m,3}/m
\end{array}\right] \left[\begin{array}{c}
-1 \\ \frac{a_{n,m,2}}{a_{n,m,3}}
\end{array}
\right]
\end{eqnarray*}
in matrix form.
Note that $a_{n,m,2}/a_{n,m,3} = m/(m+n)\cdot (a_{n,m,2}/m)/(a_{n,m,3}/(n+m))= o_p(1)$ while
\[
\frac{\tilde{a}_{n,m,3}}{m} \frac{a_{n,m,2}}{a_{n,m,3}}= \frac{a_{n,m,2}}{m} \frac{\tilde{a}_{n,m,3}/(n+m)}{a_{n,m,3}/(n+m)}\stackrel{p}{\rightarrow} -I_{\theta_0},
\]
and $ \tilde{a}_{n,m,1}/m \stackrel{p}{\rightarrow} -I_{\theta_0} $, $ \tilde{a}_{n,m,2}/m \stackrel{p}{\rightarrow} -I_{\theta_0} $
as in (\ref{eqn:gr}); consequently, we then have
\[-2\log\Lambda_{n,m} = m(\widehat{\delta})^2 I_{\theta_0}(1+o_p(1)) \stackrel{d}{\rightarrow} \chi_1^2,
\]
by (\ref{eqn:gr3}) and the continuous mapping theorem (i.e., $\sqrt{I_{\theta_0}}\sqrt{m} \widehat{\delta}$ has a standard normal limit).
\noindent\textbf{(iii)} The argument when $c=1$ is the same as that for $c=0$ upon reversing the roles of $X_1,\ldots,X_n$
and $Y_1,\ldots,Y_{m}$; that is, we construct the log-likelihood ratio statistic assuming a full model with $X_1,\ldots,X_n \sim f(\cdot;\delta+\theta)$ and $Y_1,\ldots,Y_m \sim f(\cdot; \theta)$ and with a reduced model where $X_1,\ldots,X_n, Y_1,\ldots,Y_m \sim f(\cdot; \theta)$. This does not change the likelihood ratio statistic, though the proof when $c=0$ (or $m/(n+m)\rightarrow 0$) now applies
for the case $c=1$ with the roles of sample sizes $n,m$ reversed (i.e., $n/(n+m)\rightarrow 0$ holds in the $m/(n+m)\rightarrow c=1$ case).
\end{proof}
\noindent\textbf{Remark~3}. In support of Section~6.4 of the main manuscript, here we provide details regarding how Theorem~\ref{theorem-chi-square-1}
applies for establishing the chi-square limit (i.e., namely $\chi_1^2$) of the log-likelihood ratio statistic
for prediction in some important discrete data cases.
The binomial prediction problem from Section~6.1 involves the
prediction of $Y \sim \mbox{Binom}(m,p)$ from available data $X \sim \mbox{Binom}(n,p)$, where $n,m$ are integers and $p \in (0,1)$. The log-likelihood statistic $-2 \log \Lambda_{m,n}$ based on $(X,Y)$ from Section~6.1 is the same as the log-likelihood statistic $-2 \log \Lambda_{m,n}$ from Theorem~\ref{theorem-chi-square-1},
when the latter is based on iid $X_1,\ldots,X_{n},Y_1,\ldots,Y_m\sim\mbox{Binom}(1,p)$ (i.e., involving a comparison of full and reduced models
as $X_1,\ldots,X_{n}\sim\mbox{Binom}(1,p)$ with $Y_1,\ldots,Y_{m}\sim\mbox{Binom}(1,p_1 = p+\delta)$ for the full model vs.~$X_1,\ldots,X_{n},Y_1,\ldots,Y_m\sim\mbox{Binom}(1,p)$ for the reduced model).
Consequently, as $m,n\to \infty$, the limit of the log-likelihood statistic $-2 \log \Lambda_{m,n} \stackrel{d}{\rightarrow}\chi_1^2$ follows from
Theorem~\ref{theorem-chi-square-1}.
The equivalence of the likelihood ratio statistics owes to the fact that $(X,Y)$ has the same distribution as $(\sum_{i=1}^n X_i, \sum_{i=1}^m Y_i)$ here.
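As a numerical check of this equivalence and of the $\chi^2_1$ limit (an aside, with illustrative settings), the sketch below evaluates $-2\log\Lambda_{m,n}$ for the binomial problem in closed form, using the full-model MLEs $\widehat{p}=X/n$, $\widehat{p}_1=Y/m$ and the reduced-model MLE $\tilde{p}=(X+Y)/(n+m)$; binomial coefficients cancel between the two models.
\begin{verbatim}
import numpy as np
from scipy import stats
from scipy.special import xlogy          # xlogy(0, 0) = 0 handles boundary MLEs

def neg2_log_lr(x, n, y, m):
    ph, p1h = x / n, y / m               # full-model MLEs
    pt = (x + y) / (n + m)               # reduced-model MLE
    ll = lambda k, N, p: xlogy(k, p) + xlogy(N - k, 1.0 - p)
    return 2.0 * (ll(x, n, ph) + ll(y, m, p1h) - ll(x, n, pt) - ll(y, m, pt))

rng = np.random.default_rng(5)
p, n, m, reps = 0.3, 400, 300, 5000
x = rng.binomial(n, p, reps)             # data X ~ Binom(n, p)
y = rng.binomial(m, p, reps)             # predictand Y ~ Binom(m, p)
stat = neg2_log_lr(x, n, y, m)

print(np.quantile(stat, [0.9, 0.95, 0.99]))
print(stats.chi2(df=1).ppf([0.9, 0.95, 0.99]))   # close for large n, m
\end{verbatim}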
Similarly, the Poisson prediction problem from Section~6.2 involves the
prediction of $Y \sim \mbox{Poi}(m\lambda)$ from available data $X \sim \mbox{Poi}(n\lambda)$, where $n,m$ are integers and $\lambda>0$. The log-likelihood statistic $-2 \log \Lambda_{m,n}$ given from $(X,Y)$ in Section~6.2 is likewise the same as the log-likelihood statistic $-2 \log \Lambda_{m,n}$ from Theorem~\ref{theorem-chi-square-1},
when the latter is based on iid $X_1,\ldots,X_{n},Y_1,\ldots,Y_m\sim\mbox{Poi}(\lambda)$. Hence, in the Poisson case, the limit of $-2 \log \Lambda_{m,n} \stackrel{d}{\rightarrow}\chi_1^2$ again follows from
Theorem~\ref{theorem-chi-square-1} as $m,n\to \infty$. Note here $(X,Y)$ has again the same distribution as $(\sum_{i=1}^n X_i, \sum_{i=1}^m Y_i)$.
The within-sample prediction problem of Section~6.3 involves predicting a binomial count $Y \equiv \sum_{i=1}^n I(T_i \in (t_c, t_w])$ based on
event time random variables $T_1,\ldots,T_n$. Here there are two counts (i.e., discrete random variables) involving
the observed number, say $X \equiv \sum_{i=1}^n I(T_i \leq t_c)$, of times occurring before a censoring point $t_c$ along with the number of times $Y \equiv \sum_{i=1}^n I(T_i \in (t_c, t_w])$ occurring in a future interval $(t_c,t_w]$. (Technically, the value of $T_i$ is also assumed to be available whenever $T_i \leq t_c$ occurs.) The structure of this prediction problem is similar to the binomial prediction case,
though the counts $(X,Y)$ here are multinomial (instead of two independent binomial variables). The proof of the chi-square limit of the log-likelihood statistic $-2 \log \Lambda_{m,n} \stackrel{d}{\rightarrow}\chi_1^2$ follows with an argument similar to that of Theorem~\ref{theorem-chi-square-1}.
\newpage
\section{Simulation Results}
\label{sec:simulation-results}
This section provides simulation results for other factor combinations in Section~6.3.2.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{{beta0.8delta0.1}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.1$ and $\beta=0.8$.}
\label{fig:within-sample-pred-1123123}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{{beta0.8delta0.2}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.2$ and $\beta=0.8$.}
\label{fig:within-sample-pred-14341}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta1delta0.1}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.1$ and $\beta=1$.}
\label{fig:within-sample-pred-1434}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta1delta0.2}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.2$ and $\beta=1$.}
\label{fig:within-sample-pred-121}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta2delta0.2}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.2$ and $\beta=2$.}
\label{fig:within-sample-pred-341}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta4delta0.1}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.1$ and $\beta=4$.}
\label{fig:within-sample-pred-11}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{{beta4delta0.2}.pdf}
\caption{Coverage probabilities versus expected number of events (failures) for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), likelihood-ratio (LR), and plug-in (PL) methods when $d=0.2$ and $\beta=4$.}
\label{fig:within-sample-pred-12}
\end{figure}
\clearpage
\bibliographystyle{apalike}
\section{Introduction}
Quantum Key Distribution (QKD)~\cite{GisinQKD,ScaraniSecurityQKD,DiamantiQKD, pir2019advances} has the potential to allow secure communication between any two points on Earth. In a future continental-scale quantum network (or quantum internet)~\cite{Peev2009,Sasaki2011,Kimble08,Pirandolacomment,Wehnereaam9288} satellite, fiber, and free-space links will be required to operate jointly, in an inter-modal configuration.
While satellite-to-ground QKD has been demonstrated~\cite{Micius_Liao2017,Micius_BBM92_Yin2017,Bedington2017,Agnesi2018,Khan2018} and the development of fiber-based QKD is technologically mature~\cite{BoaronRecord,Yoshino2013,Islam2017,Yuan2018,MinderPittaluga2019,Optica_Agnesi2019,experimental_twin_field_Liu2019,centro_di_calcolo_Avesani2021}, the inter-modal operation of free-space and fiber links has only recently started to be investigated~\cite{Liao2017_daylight, Gong2018, qcosone_Avesani2021}. Free-space ground-to-ground links, although more lossy than a fiber equivalent over the same distance, require a lighter infrastructure investment, may exploit mobile stations and offer connectivity in remote locations.
An inter-modal QKD network must guarantee the compatibility of the free-space links with the fiber-based infrastructure, which is based on the achievement of stable coupling of the free-space signal into a single-mode fiber (SMF) and on the use of a shared signal wavelength, typically in the telecom band. Coupling the received signal into a SMF brings several advantages in itself: the narrow field-of-view of the fiber limits the amount of background solar radiance that can reach the detector, and the small mode-field diameter of a standard SMF (typically $10~\mu$m) allows the use of detectors with a small active area, which are typically faster than larger detectors. This opens the way to daylight free-space QKD, thus enabling continuous-time operation~\cite{qcosone_Avesani2021}.
However, the SMF coupling efficiency is strongly affected by the wavefront perturbations introduced by atmospheric turbulence, requiring the introduction of mitigation techniques such as Adaptive Optics (AO)~\cite{Jian2014}.
The performance of fiber-based QKD systems was studied in detail by Ref.~\cite{Rusca2018}, where the authors calculated the secret key rate (SKR), optimal decoy-state parameters, and key block length for the finite-key analysis, considering the channel loss as a fixed parameter. This approach is not appropriate for the case of free-space channels, since the statistics of atmospheric turbulence induce a random fading of the transmitted signal.
The statistics of the free-space channel transmission was derived in~\cite{Vasylyev2016, Vasylyev2018} to calculate the SKR of decoy-state QKD including the effect of collection losses due to beam-wander and scintillation. This treatment is however limited to the case of a QKD receiver with free-space detectors, and thus excludes single-mode fiber-coupled receivers. A similar approach was recently adopted in~\cite{Pirandola2021free,Pirandola2021sat}, for the specific case of continuous-variable (CV) QKD.
The effect of wavefront perturbations and atmospheric scintillation was calculated in~\cite{Canuet2018}, where the single-mode fiber-coupling probability distribution was derived to extract the fading statistics of a satellite-to-ground link, considering the effect of a partial AO correction of the received perturbed wavefront.
This approach was found by~\cite{qcosone_Avesani2021} to be applicable also to ground-to-ground links.
In this article, we develop a comprehensive model of the performance of a free-space ground-to-ground QKD system. Differently from Ref.~\cite{Pirandola2021free}, we focus on the commonly used efficient-BB84 protocol with active decoy states, in the one-decoy variant of Ref.~\cite{Rusca2018}.
We generalize the approach of~\cite{Vasylyev2018} and~\cite{Canuet2018} to include both the collection losses at the receiver aperture, and the losses due to single-mode fiber-coupling.
Moreover, the model of~\cite{Canuet2018} is further extended to include the effect of the finite control bandwidth of the AO system.
To calculate the overall channel loss, the model considers the effect of atmospheric absorption; the receiver collection efficiency as a function of beam broadening, beam wandering, and atmospheric scintillation; and the SMF coupling in the presence of atmospheric turbulence, with partial AO correction of the wavefront deformations and finite AO control bandwidth. The finite efficiency and saturation of the single-photon detectors are also included.
The expected error rate is calculated considering the intrinsic coding error caused by imperfect preparation and measurement of quantum states, the noise introduced by the detectors (dark counts and afterpulses), and the amount of diffuse atmospheric background coupled into the receiver in daylight operation.
The present model gives as output the obtainable SKR, taking into account the atmospheric channel contribution, the transmitter and receiver design constraints, the parameters of the quantum source and detectors, and the finite-key analysis, so as to produce a set of requirements and optimal design choices for a QKD system operating under specific free-space channel conditions.
The workflow of our QKD model is sketched in Fig.~\ref{fig:model_workflow}.
\begin{figure}[b]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/FinalMap.pdf}
\caption{Sketch of the workflow of the model.}
\label{fig:model_workflow}
\end{figure}
In Sec.~\ref{sec:Efficiency}, we study the several contributions to the channel efficiency of ground-to-ground links, which reduce the signal detection rate.
In Sec.~\ref{sec:Errors}, we consider the effects which introduce errors in the exchange of qubits and reduce the secret key rate.
Finally, in Sec.~\ref{sec:Secret}, we combine the channel analysis with the decoy-state and finite-key analysis to estimate the final SKR.
\section{Channel efficiency and detection rate}
\label{sec:Efficiency}
The channel efficiency $\eta_{\rm CH}$ is given by the product of three terms and can be written as:
\begin{equation}
\eta_{\rm CH} = \eta_\alpha \ \eta_{D_{\rm Rx}} \ \eta_{\rm SMF}
\label{eqn:eta_tot}
\end{equation}
where $\eta_\alpha$ denotes the atmospheric absorption, $\eta_{D_{\rm Rx}}$ the receiver collection efficiency, and $\eta_{\rm SMF}$ the SMF coupling efficiency.
As a first-order analysis, we will consider the effect that atmospheric turbulence has on the \textit{average} value of the different terms composing the channel efficiency. However, when the receiver rate approaches the saturation limit of the single-photon detectors, the statistics of the collection and single-mode coupling efficiencies can no longer be ignored, and the whole probability distribution $p(\eta_{\rm CH})$ has to be considered for the expected SKR to be estimated correctly (see Sec.~\ref{ss:detect_saturation}).
In our model, the probability distributions are numerically calculated and normalized as weight functions over a discretized array $\lbrace\eta_1,\dots,\eta_N\rbrace$, so that:
\begin{equation}
\sum_{i=1}^N p(\eta_i) = 1~,
\end{equation}
where $p(\eta_i)$ is the probability that the efficiency lies within the interval $[\eta_{i-1}, \eta_i]$, with $\delta\eta_i = \eta_i - \eta_{i-1}$ the bin spacing.
Starting from the analytic probability density function (\textit{pdf}), for sufficiently fine binning we have
\begin{equation}
p(\eta_i) = pdf(\eta_i)\delta\eta_i \ .
\end{equation}
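As an illustration of this discretization, the following Python sketch converts an analytic \textit{pdf} into normalized weights on a logarithmically spaced grid; the log-normal \textit{pdf} used here is only a placeholder for the actual channel distributions derived below:
\begin{verbatim}
import numpy as np

def discretize_pdf(pdf, eta_grid):
    # p(eta_i) ~ pdf(eta_i) * delta_eta_i, normalized so the weights sum to 1
    delta = np.diff(eta_grid)             # delta_eta_i = eta_i - eta_{i-1}
    weights = pdf(eta_grid[1:]) * delta
    return eta_grid[1:], weights / weights.sum()

# eta = 10^[-8 : 0.02 : 0], the kind of grid used for the figures below
eta_grid = np.logspace(-8, 0, 401)
lognormal = lambda x: np.exp(-(np.log(x) + 5.0)**2 / 2.0) / (x * np.sqrt(2 * np.pi))
eta, p = discretize_pdf(lognormal, eta_grid)
print(p.sum())  # -> 1.0
\end{verbatim}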
\subsection{Atmospheric absorption}
The channel absorption $\eta_\alpha$ for a link distance $z$ depends on the absorption coefficient $A(\lambda)$ at the signal wavelength $\lambda$ as:
\begin{equation}
\eta_\alpha = 10^{-A(\lambda)\cdot z} ~,
\end{equation}
where the absorption coefficient is assumed constant for a horizontal link.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/attenuation.pdf}
\caption{Atmospheric absorption coefficient computed by LOWTRAN for a horizontal path (sub-arctic winter atmospheric model).}
\label{fig:lowtran_absorption_coefficient}
\end{figure}
An established tool for calculating the spectral properties of the atmosphere is the LOWTRAN software package~\cite{LOWTRAN}, which can be used to predict atmospheric absorption and scattering over a wide wavelength range, on horizontal or slanted paths, taking into account both geographical and seasonal atmospheric variations.
In Fig.~\ref{fig:lowtran_absorption_coefficient}, we show the atmospheric absorption coefficient computed by LOWTRAN as a function of wavelength for a horizontal link, considering a sub-arctic winter atmosphere.
\subsection{Collection efficiency}
\subsubsection{Turbulence-induced beam broadening}
In vacuum, the beam size $W(z)$ of a collimated Gaussian beam of waist $W_0$ and wavelength $\lambda$ propagating for a distance $z$ is given by the formula for diffraction-limited propagation:
\begin{equation}
W(z) = W_0\sqrt{1+\left(\frac{\lambda z}{\pi W_0^2}\right)^2} ~.
\label{eqn:waist_z_diff_limit}
\end{equation}
According to Kolmogorov's theory of turbulence~\cite{Andrews_book}, when the beam propagates through turbulent air, the index of refraction is treated as a random field fluctuating around a mean value, which perturbs the wavefront and results in an overall loss of coherence of the optical wave. The atmospheric perturbation is captured by the so-called \emph{power spectral density} $\Phi_n(\kappa)$, the Fourier transform of the refractive-index covariance function in terms of the spatial frequency $\kappa$. In our model, we use the well-known Kolmogorov spectrum of atmospheric turbulence
\begin{equation}
\Phi_n(\kappa) = 0.033 ~C_n^2~\kappa^{-11/3} \ ,
\label{eqn:kolmogorov_spectrum}
\end{equation}
which is widely used in theoretical calculations.
The strength of the turbulence is parametrized by the refractive-index structure constant $C_n^2$, which may be considered constant along a horizontal link.
The effect of turbulence-induced coherence loss on Gaussian-beam propagation was studied in~\cite{Ricklin2002,Ricklin2003}, where the following formula was derived:
\begin{equation}
W(z) = W_0\sqrt{1+\left(1+\frac{2W_0^2}{\rho_0^2(z)}\right)\left(\frac{\lambda z}{\pi W_0^2}\right)^2} ~,
\label{eqn:waist_z_turbulence}
\end{equation}
where
\begin{equation}
\rho_0(z) = (0.55~C_n^2k^2z)^{-3/5}
\end{equation}
is the spherical-wave atmospheric spatial coherence radius, with $k=2\pi/\lambda$ the wave-number.
While in the diffraction-limited case a larger beam waist $W_0$ implies a smaller intrinsic divergence $\theta_0 = \lambda/\pi W_0$, in the turbulence-affected case the larger the ratio $W_0/\rho_0$, the stronger the loss of coherence.
Indeed, at large propagation distances the two competing effects, diffraction broadening and turbulence broadening, compensate each other, and the beam size tends to a value that is independent of the initial waist $W_0$
\begin{equation}
W(z) \overset{z\to\infty}{\sim}\frac{\lambda\sqrt{2}}{\pi}\left(0.55~C_n^2k^2\right)^{3/5} z^{8/5} \ ,
\end{equation}
as shown in Fig.~\ref{fig:beampropagation} for different values of $W_0$ and $C_n^2$.
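For reference, the following Python sketch is a direct transcription of Eq.~\eqref{eqn:waist_z_turbulence} together with the coherence radius $\rho_0(z)$; the numerical values in the example call are arbitrary:
\begin{verbatim}
import numpy as np

def beam_size(z, W0, lam, Cn2):
    # Long-term beam size W(z) of a collimated Gaussian beam in turbulence
    k = 2 * np.pi / lam
    rho0 = (0.55 * Cn2 * k**2 * z)**(-3.0 / 5.0)  # spherical-wave coherence radius
    diff = (lam * z / (np.pi * W0**2))**2         # diffraction term
    return W0 * np.sqrt(1 + (1 + 2 * W0**2 / rho0**2) * diff)

# Example: W0 = 50 mm, lambda = 1550 nm, Cn2 = 1e-14 m^(-2/3), z = 10 km
print(beam_size(10e3, 50e-3, 1550e-9, 1e-14))  # beam size in metres
\end{verbatim}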
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/waist_z_turb.pdf}
\caption{Beam size as a function of distance for different values of $C_n^2$ and $W_0$: $C_n^2(1)=10^{-15}~\rm m^{-2/3}$, $C_n^2(2)=10^{-13}~\rm m^{-2/3}$, $W_0(1)=10~\rm mm$, $W_0(2)=50~\rm mm$, $W_0(3)=200~\rm mm$.}
\label{fig:beampropagation}
\end{figure}
Assuming for simplicity that the pointing error is negligible, the average contribution to the collection efficiency $\eta_{D_{\rm Rx}}$ caused by diffraction and beam broadening for a receiver of finite aperture diameter $D_{\rm Rx}$ is given by
\begin{equation}
\expval{\eta_{D_{\rm Rx}}} = 1 - \exp\left[-\frac{D_{\rm Rx}^2}{2W(z)^2}\right] \ ,
\label{eqn:eta_diffraction}
\end{equation}
which is the integral of a Gaussian distribution of standard deviation $W(z)/2$ over a concentric circular area of diameter $D_{\rm Rx}$.
\subsubsection{Beam wander and scintillation}
The beam size $W(z)$ appearing in Eq.~\eqref{eqn:eta_diffraction} is the so-called \textit{long-term} spot size, which represents the size of the beam averaged over a timescale much longer than the turbulence dynamics.
The instantaneous \textit{short-term} (ST) beam size $W_{\rm ST}(z)$ at a distance $z$ is given by:
\begin{equation}
W_{\rm ST}(z) = \sqrt{W^2(z) - \expval{r_c^2}} ~,
\label{eqn:waist_short_term}
\end{equation}
where $\expval{r_c^2}$ is the variance of beam-wander fluctuations at the receiver aperture plane.
Beam wandering is caused by the larger-sized turbulence eddies, and results in a shift of the short-term spot on the receiver aperture. For a collimated beam transmitted along a horizontal channel one has~\cite{Andrews_book}:
\begin{equation}
\expval{r_c^2} = 2.42~ C_n^2 z^3 W_0^{-1/3} ~.
\label{eqn:beam_wander_variance}
\end{equation}
In addition to beam wander, one must also consider the effect of atmospheric scintillation, which introduces random fluctuations in the beam irradiance profile.
The probability density function $p_{D_{\rm Rx}}$ of $\eta_{D_{\rm Rx}}$ was derived analytically in~\cite{Vasylyev2016, Vasylyev2018} exploiting the law of total probability and separating the contributions from turbulence-induced beam wandering and atmospheric scintillation.
The distribution $p_{D_{\rm Rx}}$ varies depending on the strength of turbulence, parametrized by the Rytov variance $\sigma_R^2$, defined as:
\begin{equation}
\sigma_R^2 = 1.23~C_n^2k^{7/6}z^{11/6} ~.
\label{eqn:rytov_variance}
\end{equation}
For weak turbulence ($\sigma_R^2<1$) $p_{D_{\rm Rx}}$ resembles a log-negative Weibull distribution, while for stronger turbulence ($\sigma_R^2>1$) one finds a truncated log-normal distribution.
The exact form and derivation of $p_{D_{\rm Rx}}$ can be found in~\cite{Vasylyev2018} and requires the knowledge of the following quantities: the average collection efficiency $\expval{\eta_{D_{\rm Rx}}}$ of Eq.~\eqref{eqn:eta_diffraction}, the collection efficiency calculated using the short-term waist $W_{\rm ST}$ of Eq.~\eqref{eqn:waist_short_term}, the beam-wander variance $\expval{r_c^2}$, and the mean-squared efficiency $\expval{\eta_{D_{\rm Rx}}^2}$, that is:
\begin{equation}
\expval{\eta_{D_{\rm Rx}}^2} = \expval{\eta_{D_{\rm Rx}}}(1 + \sigma_{\rm I}^2(D_{\rm Rx})) ~,
\end{equation}
where $\sigma_{\rm I}^2(D_{\rm Rx})$ is the aperture-averaged scintillation index (\textit{flux variance})~\cite{Andrews_book}:
\begin{align}
\sigma_{\rm I}^2(D_{\rm Rx}) &=
\exp\left[
\frac{0.49\beta_0^2}
{\left( 1 + 0.18d^2 + 0.56\beta_0^{12/5} \right)^{7/6}} \right. \nonumber \\
&+\left. \frac{ 0.51\beta_0^2\left( 1 + 0.69\beta_0^{12/5} \right)^{-5/6}}
{1 + 0.90d^2 + 0.62d^2\beta_0^{12/5}}
\right] - 1 ~,
\label{eqn:scintillation_flux_variance}
\end{align}
with $d = \sqrt{\frac{kD_{\rm Rx}^2}{4z}}$ and $\beta_0^2=0.4065~\sigma_R^2$.
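A direct Python transcription of Eq.~\eqref{eqn:scintillation_flux_variance}, together with the Rytov variance of Eq.~\eqref{eqn:rytov_variance}, reads as follows (the example parameters are arbitrary; note that $\beta_0^{12/5}=(\beta_0^2)^{6/5}$):
\begin{verbatim}
import numpy as np

def scintillation_index(D_rx, z, lam, Cn2):
    # Aperture-averaged scintillation index (flux variance)
    k = 2 * np.pi / lam
    sigma_R2 = 1.23 * Cn2 * k**(7.0 / 6.0) * z**(11.0 / 6.0)  # Rytov variance
    b02 = 0.4065 * sigma_R2                                   # beta_0^2
    d2 = k * D_rx**2 / (4.0 * z)                              # d^2
    t1 = 0.49 * b02 / (1 + 0.18 * d2 + 0.56 * b02**(6.0 / 5.0))**(7.0 / 6.0)
    t2 = (0.51 * b02 * (1 + 0.69 * b02**(6.0 / 5.0))**(-5.0 / 6.0)
          / (1 + 0.90 * d2 + 0.62 * d2 * b02**(6.0 / 5.0)))
    return np.exp(t1 + t2) - 1.0

# Example: D_Rx = 100 mm, z = 5 km, lambda = 1550 nm, Cn2 = 1e-14 m^(-2/3)
print(scintillation_index(0.1, 5e3, 1550e-9, 1e-14))
\end{verbatim}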
Some examples of probability distributions $p_{D_{\rm Rx}}$ calculated in this way are provided in Fig.~\ref{fig:prob_wander_scintillation}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/collection_distribution_W050mm_D100mm_Cn2_14.pdf}
\caption{Probability distribution $p_{D_{\rm Rx }}$ of the collection efficiency $\eta_{D_{\rm Rx}}$ for a fixed receiver aperture $D_{\rm Rx}=100~\rm mm$, turbulence parameter $C_n^2=10^{-14}~\rm m^{-2/3}$, transmitter waist $W_0=50~\rm mm$, wavelength $\lambda=1550~\rm nm$, and different link distances: $z=1~\rm km$ ($\sigma_R=0.4$), $z=2~\rm km$ ($\sigma_R=0.8$), $z=5~\rm km$ ($\sigma_R=2$), $z=10~\rm km$ ($\sigma_R=3.7$), $z=15~\rm km$ ($\sigma_R=5.3$), $z=30~\rm km$ ($\sigma_R=10.1$). The probability is given as a weight function, given an array with logarithmic spacing: $\eta_{D_{\rm Rx}} = 10^{[-8~:~0.02~:~0]}$.}
\label{fig:prob_wander_scintillation}
\end{figure}
\subsection{Fiber coupling efficiency}
The fiber coupling efficiency is given by the normalized overlap integral between the fiber mode and the incident optical field $U(\vec{r},t)$,
\begin{equation}
U(\vec{r},t) = U_0(\vec{r},t)\exp\left[\chi(\vec{r},t)+i \Psi(\vec{r},t)\right] \ ,
\end{equation}
where $\chi$ is the log-amplitude perturbation term, and $\Psi$ is the wavefront phase term. The different origins of $\chi$ and $\Psi$ perturbations imply the statistical independence of scintillation and phase effects~\cite{Canuet2018,Fried1966}, and the average coupling efficiency $\eta_{\rm SMF}$ can be factorized into three terms:
\begin{equation}
\eta_{\rm SMF} =\eta_0 \ \eta_{\rm AO} \ \eta_S \
,
\end{equation}
where $\eta_0$ is the optical efficiency of the receiver telescope, $\eta_{\rm AO}$ is the coupling efficiency due to wavefront perturbations that may be partially corrected by AO, and $\eta_S$ is the coupling efficiency due to the spatial structure of atmospheric scintillation. We now discuss the three terms separately.
\subsubsection{Optical coupling efficiency}
The \emph{optical coupling efficiency} $\eta_0$ of an optical system measures the matching between an unperturbed received beam and the mode-field diameter (MFD) of the SMF. $\eta_0$ is determined by the design optics of the receiving telescope, and particularly by the ratio $\alpha = D_{\rm Obs}/D_{\rm Rx}$ between the diameters of the central obscuration and of the telescope aperture~\cite{Ruilier2001}. The ideal coupling efficiency can be parametrized by
\begin{equation}
\eta_0(\alpha,\beta) = 2\left[\frac{\exp(-\beta^2) -\exp(-\beta^2\alpha^2)}{\beta\sqrt{1-\alpha^2}} \right]^2 \ ,
\label{eqn:eta0}
\end{equation}
where $\beta$ is given by
\begin{equation}
\beta = \frac{\pi D_{\rm Rx}}{2\lambda} \frac{\rm MFD}{f} \ ,
\end{equation}
with $D_{\rm Rx}$ the receiver diameter, $f$ the effective focal length of the optical system, and {\rm MFD} the mode field diameter of the SMF.
Given a particular $\alpha$, the value of $\beta$ can be optimized to achieve the optimal coupling efficiency $\eta_0^{\rm (opt)} = \eta_0(\alpha,\beta_{\rm opt})$. Knowing $\beta_{\rm opt}$ allows one to choose the optimal design value of $f$, since the working wavelength $\lambda$, the fiber ${\rm MFD}$, and the receiver diameter are typically constrained. Fig.~\ref{fig:eta0_beta_alpha} shows $\eta_0^{\rm (opt)}$ and $\beta_{\rm opt}$ as a function of the obscuration ratio $\alpha$.
For $\alpha=0$, we have $\beta_{\rm opt}=1.12$, which yields a maximum optical coupling efficiency of $81.5\% \approx -0.89$~dB.
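In practice, $\beta_{\rm opt}$ can be found numerically; a simple grid search over Eq.~\eqref{eqn:eta0} suffices, as in the sketch below (the search range is our own arbitrary choice):
\begin{verbatim}
import numpy as np

def eta0(alpha, beta):
    # Ideal SMF coupling efficiency, Eq. (eta0)
    num = np.exp(-beta**2) - np.exp(-beta**2 * alpha**2)
    return 2.0 * (num / (beta * np.sqrt(1.0 - alpha**2)))**2

alpha = 0.0                           # unobstructed aperture
betas = np.linspace(0.5, 2.0, 10001)  # grid search over beta
effs = eta0(alpha, betas)
print(betas[np.argmax(effs)], effs.max())  # -> ~1.12, ~0.815
\end{verbatim}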
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/ideal_efficiency_beta_alpha.pdf}
\caption{Maximum ideal SMF coupling efficiency and optimum $\beta$ parameter as a function of obscuration ratio $\alpha$. }
\label{fig:eta0_beta_alpha}
\end{figure}
\subsubsection{Effect of wavefront perturbations and adaptive optics correction}
As first derived in Refs.~\cite{Fried1965} and~\cite{Noll1976},
the instantaneous wavefront aberration $\Psi(\vec{r},t)$ introduced by the turbulent channel at a point $\vec{r}$ on the receiver aperture can be decomposed into a superposition of Zernike polynomials defined over the normalized pupil coordinates $(r,\varphi)$, with $r = 2|\vec{r}|/D_{\rm Rx}$. In the decomposition of $\Psi(\vec{r},t)$, each polynomial term $Z_n^m(r,\varphi)$ of radial degree $n$ and azimuthal degree $m$ is weighted by a time-dependent coefficient $b_n^m(t)$, yielding
\begin{equation}
\Psi(r,\varphi,t) = \sum_{n,m} b_n^m(t) Z_n^m(r,\varphi) ~.
\end{equation}
The Zernike coefficient variances $\expval{b_n^{m2}}$ represent the statistical strength of a particular aberration order, and depend on the ratio of the receiver aperture $D_{\rm Rx}$ to the atmospheric coherence width $r_0=2.1\rho_0$ (also known as the Fried parameter), with a modal term scaling with the radial order $n$~\cite{Noll1976, Boreman1996}:
\begin{equation}
\expval{b_n^{m2}} = \left( \frac{D_{\rm Rx}}{r_0} \right) ^{\frac{5}{3}} \frac{n+1}{\pi} \frac{ \Gamma \left( n-\frac{5}{6} \right) \Gamma \left(\frac{23}{6}\right) \Gamma \left(\frac{11}{6}\right) \sin \left( \frac{5}{6}\pi \right)}{ \Gamma \left(n+\frac{23}{6}\right)} ~.
\label{eqn:zer_variance}
\end{equation}
An expression of the instantaneous coupling efficiency in the presence of wavefront perturbations was derived by~\cite{Ma2015} and~\cite{Canuet2018} directly in terms of the Zernike coefficients $b_n^m$:
\begin{equation}
\eta_{\rm AO}(t) = \exp \left[ - \sum_{n,m} b_n^m(t)^2\right] ~.
\label{eqn:eta_smf_instant}
\end{equation}
From Eq.~\eqref{eqn:eta_smf_instant}, knowing that the coefficients are independent, Gaussian-distributed random variables with zero mean and variance given by Eq.~\eqref{eqn:zer_variance} -- i.e. $b_n^m\sim\mathcal{N}(0,\expval{b_n^{m2}})$ -- we derive the average coupling efficiency:
\begin{equation}
\expval{\eta_{\rm AO}} = \prod_{n,m} \frac{1}{\sqrt{1+2\expval{b_n^{m2}}}} ~.
\label{eqn:eta_smf_average}
\end{equation}
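For a single mode, this follows from the Gaussian integral
\begin{equation*}
\expval{e^{-b^2}} = \int_{-\infty}^{\infty} \frac{e^{-b^2}}{\sqrt{2\pi\expval{b_n^{m2}}}}\, e^{-b^2/2\expval{b_n^{m2}}}\,{\rm d}b = \frac{1}{\sqrt{1+2\expval{b_n^{m2}}}} ~,
\end{equation*}
while the independence of the coefficients turns the average of the product into the product of the averages.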
Eq.~\eqref{eqn:eta_smf_average} makes it straightforward to calculate the average SMF coupling efficiency in the presence of a partial adaptive-optics compensation of turbulence up to an order $n_{\rm max}$.
Assuming an ideal AO system with infinite control bandwidth, it is sufficient to completely suppress the coefficients in the product corresponding to radial orders $n\leq n_{\rm max}$. Fig.~\ref{fig:smf_n_inf_bw}
shows the SMF coupling efficiency as a function of the ratio $D_{\rm Rx}/r_0$ of receiver diameter to atmospheric coherence width for increasing order of AO correction, assuming infinite control bandwidth.
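A possible numerical evaluation of Eq.~\eqref{eqn:eta_smf_average} with ideal correction up to $n_{\rm max}$ is sketched below; the truncation of the product at a finite radial order \texttt{n\_cut} is our own choice, and each radial order $n$ contributes $n+1$ azimuthal modes:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def zernike_var(n, D_over_r0):
    # Variance of a Zernike coefficient of radial order n, Eq. (zer_variance)
    return (D_over_r0**(5.0 / 3.0) * (n + 1) / np.pi
            * gamma(n - 5.0 / 6.0) * gamma(23.0 / 6.0) * gamma(11.0 / 6.0)
            * np.sin(5.0 * np.pi / 6.0) / gamma(n + 23.0 / 6.0))

def eta_ao_mean(D_over_r0, n_max, n_cut=60):
    # <eta_AO> with radial orders n <= n_max perfectly suppressed;
    # the piston term (n = 0) is excluded from the product
    eff = 1.0
    for n in range(max(1, n_max + 1), n_cut + 1):
        var = zernike_var(n, D_over_r0)
        eff *= (1.0 + 2.0 * var)**(-(n + 1) / 2.0)  # (n+1) modes of order n
    return eff

print(eta_ao_mean(D_over_r0=10.0, n_max=4))
\end{verbatim}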
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/AO_efficiency_infbw_nmax.pdf}
\caption{SMF coupling efficiency as a function of the ratio of receiver diameter to atmospheric coherence width for increasing order of AO correction, assuming infinite control bandwidth.}
\label{fig:smf_n_inf_bw}
\end{figure}
From Eq.~\eqref{eqn:zer_variance}, the strength of the aberrations decreases for higher orders. It is then interesting to evaluate how many Zernike modes should be compensated to regain near diffraction-limited wavefront quality, as shown in Fig.~\ref{fig:required_zernike_correction}. This can be done by referring to the Rayleigh criterion~\cite{malacara_book}, according to which aberrations with ${\rm RMS}\leq 0.05\lambda$ are below the threshold for diffraction-limited quality.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/required_zernike_correction.pdf}
\caption{Required Zernike correction shown as the maximum order (radial $n$ and single OSA/ANSI index $j = [n(n+1)+m]/2$) that is above the threshold for diffraction-limited quality as a function of the ratio of receiver diameter to Fried parameter.}
\label{fig:required_zernike_correction}
\end{figure}
As discussed in Ref.~\cite{Roddier_AO}, the effect of control bandwidth limitations can be taken into account by introducing a mode attenuation factor $\gamma_n^2$ for the $n$-th order aberration coefficients:
\begin{equation}
\gamma_n^2 = \frac{\int |W_n(\nu)|^2|\varepsilon(\nu) |^2\rm d \nu}
{\int |W_n(\nu)|^2 \rm d \nu} ~,
\label{eqn:gamma_nu_bandwidth}
\end{equation}
where $|W_n(\nu)|^2$ represents the power spectral density of the temporal spectrum of the $n$-th order aberrations, and $\varepsilon(\nu)$ represents the transfer function between the
residual phase and the turbulent wavefront fluctuations, and depends on the AO system's open-loop transfer function $G(\nu)$:
\begin{equation}
\varepsilon(\nu) = \frac{1}{1+G(\nu)} ~.
\label{eqn:epsilon_nu}
\end{equation}
In this work, we consider a typical AO system with a pure-integrator control based on wavefront sensing with a Shack-Hartmann wavefront sensor, and correction with a deformable piezo-electric mirror. The open-loop transfer function is then given by
\begin{equation}
G(\nu) = K_i\frac{e^{-\tau \nu}\left(1-e^{-T\nu} \right)}{(T\nu)^2} ~,
\label{eqn:g_open}
\end{equation}
where $K_i$ is the gain of the integrator, $\tau$ is the overall latency of the control-actuator stage, and $T$ is the inverse of the wavefront sensor frame rate.
The temporal power spectra of the different aberration orders have been studied in Ref.~\cite{Conan1995}, where the authors find that the power spectral density scales polynomially with a cut-off frequency $\nu_c^{(n)}$ depending on the radial order $n$, average wind velocity $\bar v$ and receiver diameter $D_{\rm Rx}$:
\begin{equation}
|W_n(\nu)|^2 \sim
\begin{cases}
\nu^{-2/3} & \nu\leq\nu_c, n=1\\
\nu^{0} & \nu\leq\nu_c, n\neq1\\
\nu^{-17/3} & \nu>\nu_c
\end{cases}
\end{equation}
with
\begin{equation}
\nu_c^{(n)}=0.3(n+1)\bar{v}/D_{\rm Rx} \ .
\end{equation}
The SMF efficiency of Eq.~\eqref{eqn:eta_smf_average} is thus modified in the case of finite control bandwidth as:
\begin{align}
\expval{\eta_{\rm AO}} &= \prod_{\substack{n,m \\n\leq n_{\rm max}}} \frac{1}{\sqrt{1+2\gamma_n^2\expval{b_n^{m2}}}} \nonumber \\
& \quad \times \prod_{\substack{n,m \\n>n_{\rm max}}} \frac{1}{\sqrt{1+2\expval{b_n^{m2}}}} \ ,
\end{align}
with $n_{\rm max}$ the maximum aberration order corrected.
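A sketch of the numerical evaluation of $\gamma_n^2$ follows. We assume that $\nu$ enters the transfer functions through the Laplace variable $s=2\pi i\nu$ (a common convention, not stated explicitly above), and the gain, latency, frame time, and integration grid are placeholder values:
\begin{verbatim}
import numpy as np

def rejection2(nu, Ki, tau, T):
    # |epsilon(nu)|^2 for the pure-integrator loop, Eqs. (epsilon_nu)-(g_open),
    # assuming nu enters through s = 2*pi*i*nu
    s = 2j * np.pi * nu
    G = Ki * np.exp(-tau * s) * (1 - np.exp(-T * s)) / (T * s)**2
    return np.abs(1.0 / (1.0 + G))**2

def W2(nu, n, nu_c):
    # Piecewise temporal power spectrum of order-n aberrations (up to a constant)
    low = nu**(-2.0 / 3.0) if n == 1 else np.ones_like(nu)
    low_c = nu_c**(-2.0 / 3.0) if n == 1 else 1.0
    return np.where(nu <= nu_c, low, low_c * (nu / nu_c)**(-17.0 / 3.0))

def gamma_n2(n, v_bar, D_rx, Ki=0.5, tau=0.5e-3, T=1e-3):
    nu = np.linspace(0.01, 5e3, 500000)           # integration grid [Hz]
    nu_c = 0.3 * (n + 1) * v_bar / D_rx           # cut-off frequency
    w2 = W2(nu, n, nu_c)
    return np.trapz(w2 * rejection2(nu, Ki, tau, T), nu) / np.trapz(w2, nu)

print(gamma_n2(n=2, v_bar=3.0, D_rx=0.4))
\end{verbatim}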
Fig.~\ref{fig:eta_SMF_vs_n_BW} shows $\expval{\eta_{\rm AO}}$ as a function of the corrected aberration order and wavefront sensor integration time, in a scenario with $D_{\rm Rx}/r_0=17$ and an average wind velocity corresponding to a light breeze.
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/AO_efficiency_realbw_nmax_new.pdf}
\caption{Coupling efficiency $\expval{\eta_{\rm AO}}$ as a function of maximum aberration order corrected $n_{\rm max}$, and integration time of the wavefront sensor $T$, for a scenario with $\lambda= 1550$~nm, $z= 20$~km, $D_{\rm Rx}= 400$~mm, $D_{\rm Rx}/r_0= 17$, $C_n^2= 10^{-14}{\rm m}^{-2/3}$, and $\bar{v}=3~\rm m/s$.}
\label{fig:eta_SMF_vs_n_BW}
\end{figure}
\subsubsection{Effect of atmospheric scintillation}
In addition to phase perturbations, we also take into account the irradiance fluctuations introduced by atmospheric scintillation, which result in a random apodization of the pupil transmittance function and, in turn, affect the maximum SMF coupling efficiency.
A rigorous calculation of the scintillation contribution $\eta_S$ to the SMF efficiency, which considers the optical-system modulation transfer function and the log-amplitude spatial covariance function $C_\chi(r)$, can be found in Ref.~\cite{Canuet2018} and is based on the results of Refs.~\cite{Fried1965,Fried1966}. Nonetheless, a good approximation for the average value of $\eta_S$ depends only on the on-axis pupil-plane scintillation index $\sigma_I^2$, which is given by Eq.~\eqref{eqn:scintillation_flux_variance} in the limit of an infinitesimal aperture ($d=0$):
\begin{equation}
\expval{\eta_S} \approx \exp[-C_\chi(0)] = \exp[-\sigma_\chi^2] = (1+\sigma_I^2)^{-\frac{1}{4}} ~,
\end{equation}
where we use the fact that the log-amplitude variance $\sigma_\chi^2$ is related to the scintillation index through $\sigma_I^2=\exp(4\sigma_\chi^2)-1$.
Fig.~\ref{fig:eta_scintill} shows a plot of $\expval{\eta_S}$ and $\sigma_I^2$ as a function of the Rytov variance \eqref{eqn:rytov_variance}, highlighting the behavior of $\sigma_I^2$, which increases in the \textit{weak fluctuation} regime, reaches a maximum in the \textit{focusing} regime, and then tends to $\sigma_I^2\sim 1$ in the \textit{strong fluctuation} regime.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/eta_scintill_sigma_i.pdf}
\caption{Scintillation contribution to the SMF coupling efficiency and scintillation index as a function of the Rytov variance $\sigma_R$ (see Eq.~\eqref{eqn:scintillation_flux_variance}).}
\label{fig:eta_scintill}
\end{figure}
\subsubsection{Optimum receiver diameter}
We have seen that the average collection efficiency $\expval{\eta_{D_{\rm Rx}}}$ of Eq.~\eqref{eqn:eta_diffraction} increases as the receiver diameter increases.
Conversely, for a fixed AO correction order $n$, the SMF coupling efficiency $\expval{\eta_{\rm AO}}$ of Eq.~\eqref{eqn:eta_smf_average} decreases with increasing receiver diameter.
This leads to a trade-off between the collection efficiency and the fiber-coupling efficiency (represented in Fig.~\ref{fig:efficiency_vs_D}): once the other link parameters, such as wavelength, link distance, transmitter waist, and turbulence strength, are fixed, we can find the optimum receiver diameter $D_{\rm opt}$ that maximizes the overall channel efficiency $\eta_{\rm CH}$.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{Figures/efficiency_vs_D.pdf}
\caption{Trade-off between collection efficiency and fiber-coupling efficiency. Fixed link parameters: $\lambda=1550~\rm nm$, $z= 5~\rm km$, $W_0=50~\rm mm$, and $C_n^2= 10^{-14}{\rm m}^{-2/3}$.}
\label{fig:efficiency_vs_D}
\end{figure}
Fig.~\ref{fig:optimum_diameter} shows the optimum receiver diameter as a function of link distance, for different orders of aberration correction and a moderate turbulence strength of $C_n^2=10^{-14}~\rm m^{-2/3}$.
For short link distances (shorter than the transmitter Rayleigh distance $z_0=\pi W_0^2/\lambda$) the beam size does not diverge much, and the fiber-coupling term dominates, causing the optimal diameter to decrease with increasing distance.
For link distances $z\sim z_0$, the beam size starts increasing, the collection efficiency term dominates, leading to an increasing optimum beam diameter.
For longer propagation distances ($z\gg z_0$), the decrease in spatial coherence of the beam is more severe and the turbulence term dominates again, leading to a smaller optimum diameter.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/optimum_diameter.pdf}
\caption{Optimum receiver beam diameter as a function of link distance. Fixed link parameters: $\lambda=1550~\rm nm$, $W_0=50~\rm mm$, $z_0=5~\rm km$, and $C_n^2= 10^{-14}{\rm m}^{-2/3}$.}
\label{fig:optimum_diameter}
\end{figure}
\subsubsection{Probability distribution of coupling efficiency}
As anticipated, when the receiver rate approaches the saturation limit of the single-photon detectors, the full statistics of the channel efficiency can no longer be ignored, and the whole probability distribution $p_{\rm CH}$ has to be considered for the expected SKR to be estimated correctly, as we will show in Sec.~\ref{ss:detect_saturation}.
The derivation of the probability distribution of the SMF coupling efficiency with partial adaptive optics correction can be found in Ref.~\cite{Canuet2018}, which, however, does not include the effect of finite control bandwidth.
Since the irradiance fluctuation statistical contribution is already taken into account for the collection efficiency, we restrict the calculation of the SMF coupling distribution to phase distortions only, and present some examples of probability distributions calculated as a function of the maximum corrected aberration order (Fig.~\ref{fig:pdf_smf_n}), and varying the bandwidth of the AO control loop (Fig.~\ref{fig:pdf_smf_bw}).
Given a set of Zernike coefficients $\lbrace b_j\rbrace$ with variances $\langle b_j^2\rangle$, which may be corrected by a compensation factor $\gamma_j^2$ -- where $\gamma_j^2=0$ for perfect compensation, and $\gamma_j^2=1$ for uncorrected coefficients~\cite{Canuet2018} -- we define the quantity $z(t)$ as the instantaneous sum of the squared Zernike coefficients:
\begin{equation}
z(t) = \sum_j b_j^2(t) ~.
\end{equation}
The probability distribution of $z$, including the effect of finite AO control bandwidth, is then, using the result of Ref.~\cite{gil_pelaez}:
\begin{equation}
p_z(z) = \frac{1}{\pi} \int_0^\infty
\frac{\cos\left[\sum_j\frac{1}{2}\arctan\left(2\gamma_j^2\langle b_j^2\rangle u\right) -zu\right]}
{\prod_j \left[1+\left(2\gamma_j^2\langle b_j^2\rangle u\right)^2 \right]^{1/4}}{\rm d}u ~,
\end{equation}
where the denominator follows from the modulus of the characteristic function of each $b_j^2$, consistently with the phase term in the numerator.
The probability distribution of $\eta_{\rm SMF}$ is then:
\begin{equation}
p_{\rm SMF}(\eta_{\rm SMF}|\eta_{\rm max}) = \frac{1}{\eta_{\rm SMF}}p_z\left[\log\left(\frac{\eta_{\rm max}}{\eta_{\rm SMF}}\right)\right] ~, \label{eq_32}
\end{equation}
where $\eta_{\rm max}=\eta_0\cdot\eta_S$ is the maximum normalized coupled flux, given by the product of the optical coupling efficiency of the system $\eta_0$, and the spatial scintillation term $\eta_S$.
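A minimal numerical inversion for $p_z$ could look as follows; the discretization, the truncation of the $u$ integral, and the example variances are our own arbitrary choices:
\begin{verbatim}
import numpy as np

def p_z(z, b2_vars, gamma2):
    # Numerical inversion for the pdf of z = sum_j b_j^2
    u = np.linspace(1e-6, 300.0, 300000)
    s = np.asarray(gamma2) * np.asarray(b2_vars)   # gamma_j^2 <b_j^2>
    arg = 2.0 * np.outer(u, s)
    phase = 0.5 * np.arctan(arg).sum(axis=1)
    mag = np.prod((1.0 + arg**2)**0.25, axis=1)
    return np.trapz(np.cos(phase - z * u) / mag, u) / np.pi

# Example: three residual modes, two of them attenuated by the AO loop
print(p_z(0.5, b2_vars=[0.4, 0.2, 0.1], gamma2=[0.1, 0.3, 1.0]))
\end{verbatim}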
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/coupling_distribution_W050mm_D400mm_Cn2_14_z10.pdf}
\caption{Probability distribution of the SMF efficiency for a fixed link distance $z=10~\rm km$, turbulence parameter $C_n^2=10^{-14}~\rm m^{-2/3}$, transmitter waist $W_0=50~\rm mm$, wavelength $\lambda=1550~\rm nm$, ideal design efficiency $\eta_0=0.8145$, receiver apertures $D_{\rm Rx}=400~\rm mm$, for different orders of AO correction, assuming infinite correction bandwidth. The probability is given as a weight function, given an array with logarithmic spacing: $\eta= 10^{[-6 : 0.05 : 0]}$.}
\label{fig:pdf_smf_n}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/coupling_distribution_W050mm_D400mm_Cn2_14_z10_n4_nozero.pdf}
\caption{Probability distribution of the SMF efficiency for a fixed link distance $z=10~\rm km$, turbulence parameter $C_n^2=10^{-14}~\rm m^{-2/3}$, transmitter waist $W_0=50~\rm mm$, wavelength $\lambda=1550~\rm nm$, ideal design efficiency $\eta_0=0.8145$, receiver aperture $D_{\rm Rx}=400~\rm mm$, AO correction order $n=4$, as a function of the wavefront sensor integration time. The probability is given as a weight function, given an array with logarithmic spacing: $\eta= 10^{[-8 : 0.05 : 0]}$.}
\label{fig:pdf_smf_bw}
\end{figure}
\subsection{Channel probability distribution}\label{sec:prob_ch}
Based on the results of Ref.~\cite{Canuet2018}, which calculated the channel loss term due to SMF coupling, and of Ref.~\cite{Vasylyev2018}, which calculated the channel loss due to the finite receiver aperture in the presence of beam broadening, beam wandering, and scintillation, we calculate the overall channel transmittance probability distribution considering both contributions (pupil-plane and focal-plane losses) and exploiting the law of total probability to write $p_{\rm CH}(\eta_{\rm CH})$ as:
\begin{equation}
p_{\rm CH}(\eta_{\rm CH}) = \int p_{\rm SMF}(\eta_{\rm CH}|\eta_0 \eta_S \eta_{D_{\rm Rx}})p_{D_{\rm Rx}}(\eta_{D_{\rm Rx}}) \rm d \eta_{D_{\rm Rx}} ~,
\end{equation}
where $p_{\rm SMF}(\eta_{\rm CH}|\eta_0\eta_S\eta_{D_{\rm Rx}})$ is the probability of obtaining a normalized flux $\eta_{\rm CH}$ in the SMF fiber, given a maximum input normalized flux $\eta_0 \ \eta_S \ \eta_{D_{\rm Rx}}$, through Eq.~\eqref{eq_32}.
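Schematically, the discretized mixture can be computed as follows (a sketch: \texttt{p\_smf\_given\_max} stands for the conditional distribution of Eq.~\eqref{eq_32}, discretized as weights on \texttt{eta\_grid} as in Sec.~\ref{sec:Efficiency}):
\begin{verbatim}
import numpy as np

def p_channel(eta_grid, eta_drx_grid, p_drx, p_smf_given_max, eta_0, eta_s):
    # Law of total probability over the collection-efficiency bins
    p_ch = np.zeros_like(eta_grid)
    for eta_d, w in zip(eta_drx_grid, p_drx):
        eta_max = eta_0 * eta_s * eta_d        # maximum coupled flux for this bin
        p_ch += w * p_smf_given_max(eta_grid, eta_max)
    return p_ch / p_ch.sum()                   # renormalize the discrete weights
\end{verbatim}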
\begin{table*}[htb!]
\centering
\begin{tabular}{lccccccccc}
\toprule
& Case 1 & Case 2 & Case 3 & Case 4 & Case 5 & Case 6 & Case 7 & Case 8 &\\
\midrule
$\langle\eta_{\rm CH}\rangle$ & -7 & -15 & -17 & -23 & -25 & -38 & -43 & -48 & $[\rm dB]$\\
\midrule
$C_n^2$ & $10^{-13}$ & $10^{-14}$ & $10^{-13}$ &
$10^{-14}$ & $10^{-14}$ & $10^{-14}$ & $10^{-14}$ & $10^{-14}$ & $\rm[m^{-2/3}]$\\
$W_0$ & 25 & 60 & 25 & 60 & 60 & 60 & 60 & 25 & $[\rm mm]$\\
$D_{\rm Rx}$ & 50.8 & 200 & 50.8 & 200 & 50 & 200 & 400 & 200 & $[\rm mm]$\\
$z$ & 1 & 10 & 2 & 10 & 10 & 20 & 20 & 30 & $[\rm km]$\\
$n_{\rm max}$ & 1 & 4 & 1 & 1 & 1 & 1 & 2 & 1\\
$\eta$ & $10^{[-3:0.02:0]}$ & $10^{[-5:0.05:0]}$ & $10^{[-5:0.05:0]}$ & $10^{[-8:0.1:0]}$ & $10^{[-8:0.1:0]}$ & $10^{[-12:0.1:0]}$ & $10^{[-15:0.1:0]}$ & $10^{[-15:0.1:0]}$ \\
\bottomrule
\end{tabular}
\caption{Input parameters for the simulation of the eight case studies.}
\label{tab:case_studies}
\end{table*}
In Fig.~\ref{fig:joint_distributions} we show some examples of channel probability distributions for eight case studies, with average link losses ranging from $-7$ to $-48~\rm dB$ and input parameters summarized in Table~\ref{tab:case_studies}. In all case studies, we assume an unobstructed receiver aperture and a maximum optical efficiency of $\eta_0=81.5\%$. Cases 1 and 3 correspond to a short urban link with strong turbulence, small-aperture Tx/Rx telescopes, and tip/tilt correction only.
Cases 4 and 5 correspond to a scenario with moderate turbulence and a longer link distance, and highlight the effect of a smaller/larger receiver aperture, which leads to a trade-off between a large collection efficiency and the minimum aberration order corrected.
Cases 2 and 6 show the effect of AO on longer, moderately turbulent links with large aperture receivers.
Cases 7 and 8 show examples of highly lossy channels.
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/joint_distributions_together.pdf}
\caption{Overall channel probability distributions for the case studies described in Table~\ref{tab:case_studies}.}
\label{fig:joint_distributions}
\end{figure}
\subsection{Effect of detector saturation}
\label{ss:detect_saturation}
The key performance indicators of QKD, such as the SKR and error rate, are estimated using large samples of data acquired during a long experiment, averaging over the fluctuations of the channel efficiency.
This might suggest that these fluctuations can be neglected and only their mean value is relevant.
Yet, this would be equivalent to calculating the expected value of a function by applying it to the expected values of its arguments.
Since the functions that model the generation of a key are not all affine, i.e., compositions of a translation and a linear map, neglecting the fluctuations and using only the mean values is in general incorrect.
The most important non-affine effect is caused by the saturation of the detectors.
Single-photon detectors are blinded just after a detection event and may be kept off for longer to counter afterpulsing, which would otherwise increase the noise.
This so-called dead time $T_d$ implies that there is a maximum rate of output signals $R_{\rm sat} = 1/T_d$ that the detectors can produce.
If $R_0(\eta_{\rm CH})$ is the rate of photons reaching a detector multiplied by its finite efficiency, the output detection rate is~\cite{Muller1974}
\begin{equation}
R_{\rm det} = \frac{R_0(\eta_{\rm CH})\cdot R_{\rm sat}}{R_0(\eta_{\rm CH}) + R_{\rm sat}} \ .
\label{eq:Saturation}
\end{equation}
Because this is not an affine function of $R_0(\eta_{\rm CH})$, its fluctuations cannot be neglected.
Although this expression is derived for a continuous source, it is approximately valid also for a pulsed one if the repetition rate is much greater than $R_{\rm sat}$.
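To illustrate why the fluctuations matter, the following toy Python example compares the exact average of Eq.~\eqref{eq:Saturation} over a deliberately exaggerated two-point efficiency distribution with the naive estimate based on the mean efficiency; all numbers are placeholders:
\begin{verbatim}
import numpy as np

R_sat = 1.0e5                        # saturation rate 1/T_d for T_d = 10 us

def detected(R0):
    # Output rate of a non-paralyzable detector, Eq. (Saturation)
    return R0 * R_sat / (R0 + R_sat)

# Toy channel distribution: two equally likely efficiencies (strong fading)
eta = np.array([1e-4, 1e-2])
p = np.array([0.5, 0.5])
R_source = 1.0e9                     # photons/s impinging on the channel

exact = np.sum(p * detected(R_source * eta))    # average over fluctuations
naive = detected(R_source * np.sum(p * eta))    # use the mean efficiency only
print(exact, naive)   # naive > exact: the mean-only estimate is too optimistic
\end{verbatim}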
To quantify the importance of these fluctuations, we estimate the raw key rate of a QKD system for the eight cases of Fig.~\ref{fig:joint_distributions}, first considering the entire distributions and then only their mean values.
In Fig.~\ref{fig:multiplot} we show the overestimation factor (converted to dB to visualize it better) caused by neglecting fluctuations.
This can reach a value of almost $9$ dB for the distribution of case $8$.
The effect is larger when $\langle R_0\rangle\approx R_{\rm sat}$ and vanishes for $\langle R_0\rangle\gg R_{\rm sat}$ or $\langle R_0\rangle\ll R_{\rm sat}$, when Eq.~\eqref{eq:Saturation} is well approximated by its linearization.
The distributions for which this error is greater are those with heavier tails (i.e., a high kurtosis).
Indeed, neglecting fluctuations means neglecting the suppression of the tails caused by the saturation of the detectors: the heavier the tails, the larger the error caused by neglecting them.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/ComparisonApprox.pdf}
\caption{Overestimation caused by neglecting fluctuations for the different distributions of Fig.~\ref{fig:joint_distributions}.
The effect is larger when $\langle R_0\rangle\approx R_{\rm sat}$ and for distributions of high kurtosis.}
\label{fig:multiplot}
\end{figure}
In Fig.~\ref{fig:trend}, we consider an arbitrary QKD scenario featuring a 1 GHz source, $T_d= 10~\mu$s, and $15\%$ detection efficiency.
We show a simulation of the raw key rate (as a function of the channel efficiency $\eta_{\rm CH}$) which neglects fluctuations, and compare it with the more correct values which consider this effect.
We can see a clear separation between the two methods of estimation, which grows larger when $\langle R_0\rangle\approx R_{\rm sat}$ and for distributions of greater kurtosis.
This shows that performance predictions that consider only the mean value of the channel efficiency can be severely inaccurate.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/Trend.pdf}
\caption{Comparison between a simulation of the raw key rate that neglects fluctuations and the ones which consider them for the eight distributions of Fig.~\ref{fig:joint_distributions} (to which the numbers refer).}
\label{fig:trend}
\end{figure}
\section{Contributions to the error rate}
\label{sec:Errors}
Mismatches between Alice's and Bob's raw keys influence the performance of QKD in two ways.
First, the more errors there are, the more bits must be published to correct them.
Second, they indicate the amount of information leaked to an attacker.
In the security scenario in which QKD operates, all errors are attributed to attacks and reduce the length of the final secret key.
Therefore, when simulating a QKD system, several physical sources of error must be considered.
One is intrinsic to the signal: inaccurate quantum state preparation or measurement might cause a mismatch between Alice's encoded bit and Bob's decoded one, even if the carrier photon arrives at the detector.
We quantify this with the coding error, which we define as the conditional probability of a mismatch given that a signal photon is detected.
In principle, the channel can also increase it if it changes the state of the photons, but this does not happen in typical stationary free-space systems with polarization encoding, because the medium in which light travels is not birefringent.
The only way to reduce the coding error is to build better quantum state encoders and decoders, and better systems to align them to each other~\cite{Agnesi2019,Avesani2020}.
Then, there is random noise.
A portion of it is caused by single-photon detectors, in the form of dark counts and afterpulses.
The former are random events that happen even in the total absence of light, whereas the latter are spurious signals caused by true ones and are typical of avalanche diodes.
Another portion is introduced by the channel background light, especially in the free-space case that we are studying.
In Sec.~\ref{ss:Background} we quantify this background light and in Sec.~\ref{ss:TemporalFiltering} we study a way to counter noise with temporal filtering.
\subsection{Diffuse atmospheric background}
\label{ss:Background}
A crucial requirement for the realization of daylight free-space QKD is the successful filtering of the background radiation.
Fig.~\ref{fig:scatterradiance_lowtran} shows the spectrum of the diffuse atmospheric radiance $I_{\rm diff}$, extracted with LOWTRAN.
As we can see, the spectrum peaks at blue wavelengths and decreases towards the infrared.
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{Figures/diffuse.pdf}
\caption{Diffuse atmospheric radiance spectrum, horizontal path.}
\label{fig:scatterradiance_lowtran}
\end{figure}
This value accounts only for the sky brightness and neglects the fact that the transmitter partially blocks the field-of-view (FOV) of the receiver; it is therefore an overestimate in realistic scenarios, especially for short links.
Nonetheless, we will use it in the following discussion, which focuses on the impact of background on the performance of QKD.
As a first-order approximation, we can consider the diffuse radiance to be uniform over the receiver FOV.
We can estimate the detection rate of background photons per detection window, $\expval{r_{\rm sky}}$, at the quantum signal wavelength as a function of the receiver aperture $D_{\rm Rx}$, the solid-angle field of view $\Omega$, and the filtering bandwidth $\delta\lambda$ ($h$ is Planck's constant and $c$ the speed of light):
\begin{equation}
\expval{r_{\rm sky}} = \frac{I_{\rm diff}\cdot \pi\left(\frac{D_{\rm Rx}}{2}\right)^2 \cdot \Omega \cdot \delta\lambda }{ hc/\lambda} ~.
\label{eq:rskyGeneral}
\end{equation}
For the typically small field of view characteristic of free-space communication systems, $\Omega$ can be approximated by
\begin{equation}
\Omega=2\pi\left(1-\cos({\rm FOV})\right)\approx \pi {\rm FOV}^2 ~,
\end{equation}
where ${\rm FOV}$ is the one-dimensional field-of-view.
The FOV of the receiver optical system depends on the optical design:
if the optical fiber or free-space detector is placed on an image plane of the entrance pupil, which is the case for free-space detectors with a large active area ($\sim 150~\mu\rm m$) or large-core multi-mode fibers (MMF), the FOV is essentially a free parameter, limited only by the size of the optical elements (lenses, mirrors) used in the optical system, and may be as large as $400~\mu\rm rad$.
In the case of single-mode fibers or free-space detectors with small active area $<10~\mu$m, the optimal choice is to place them on the focal plane of the optical system. In this configuration the field of view is constrained by the size of the active area or fiber mode field diameter (MFD). Since the receiver focal length $f$ is chosen so that the optical system efficiency in Eq.~\eqref{eqn:eta0} is maximized, we have
\begin{equation}
{\rm MFD} = \beta_{\rm opt}\frac{2}{\pi} \lambda \frac{f}{D_{\rm Rx}} ~,
\end{equation}
where $\beta_{\rm opt}$ is shown in Fig.~\ref{fig:eta0_beta_alpha} and $\beta_{\rm opt}=1.12$ for unobstructed apertures.
If we define the FOV as the pupil incidence angle at which the spot on the focal plane is deflected to a distance equal to half the MFD, then we have that:
\begin{equation}
{\rm FOV} = \frac{\rm MFD}{2 f} =\frac{\beta_{\rm opt}}{\pi} \frac{\lambda}{D_{\rm Rx}} ~.
\label{eq:FOVSMF}
\end{equation}
Another consequence of this constraint is that the detection rate of diffuse background photons coupled into the system becomes almost independent of the receiver optical system parameters. Indeed, combining Eqs. \eqref{eq:rskyGeneral} and \eqref{eq:FOVSMF}, we find that
\begin{equation}
\expval{r_{\rm sky}}_{\rm SMF} = \frac{\beta_{\rm opt}^2}{4}\frac{I_{\rm diff}}{hc} \lambda^3 \delta\lambda
\end{equation}
depends neither on $f$ nor on $D_{\rm Rx}$.
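Numerically, this expression can be evaluated in a few lines of Python; the radiance value below is a placeholder for illustration only, since actual values must be read off the LOWTRAN spectrum:
\begin{verbatim}
import numpy as np

h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]

def r_sky_smf(I_diff, lam, dlam, beta_opt=1.12):
    # Background count rate into a SMF-coupled receiver (independent of f, D_Rx)
    return beta_opt**2 / 4.0 * I_diff / (h * c) * lam**3 * dlam

# Placeholder radiance, assumed for illustration only
I_diff = 1.0e6       # [W m^-2 sr^-1 m^-1]
print(r_sky_smf(I_diff, 1550e-9, 1e-9))  # counts per second
\end{verbatim}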
Fig.~\ref{fig:noise_count_smf} shows the expected noise count rate for a SMF coupled receiver.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/Fig18.pdf}
\caption{Noise count rate due to the diffuse atmospheric background as a function of qubit wavelength and linewidth of the spectral filter, for the SMF receiver case. Note that the dark bands in the figure also correspond to absorption windows of the atmosphere, see Fig.~\ref{fig:lowtran_absorption_coefficient}.}
\label{fig:noise_count_smf}
\end{figure}
\subsection{Temporal gating}
\label{ss:TemporalFiltering}
Typical DV-QKD systems apply a temporal filter to all detected events, with the purpose of reducing the impact of noise.
Indeed, the latter is uniformly distributed in time whereas signal photons, being emitted at regular intervals, have a predictable time of arrival.
In post-processing, one can apply a $T_{\rm gat}$-wide temporal window centered at this time and discard all events that fall outside of it, thus suppressing noise by a factor $T_{\rm gat}/\tau$, where $\tau$ is the repetition period of the source.
However, there is a tradeoff, because the temporal distribution of the signal events is enlarged by the optical pulse width, by the jitter of the source, of the detectors, and of the time-digitizing hardware.
A small value of $T_{\rm gat}$, while strongly reducing noise, might discard too much of the signal, negatively impacting the final SKR.
Assuming a normal distribution of standard deviation $J$ for the signal time of arrival, the filter reduces the signal detection rate by a factor $\erf\left(\frac{T_{\rm gat}}{2\sqrt{2}J}\right)$.
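The two competing factors can be made explicit in a few lines of Python; the jitter and repetition period below are arbitrary examples:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def signal_fraction(tgat_over_j):
    # Fraction of normally distributed signal arrivals inside the window
    return erf(tgat_over_j / (2.0 * np.sqrt(2.0)))

def noise_fraction(tgat, tau):
    # Fraction of uniformly distributed noise inside the window
    return tgat / tau

J, tau = 100e-12, 1e-9          # jitter and source repetition period
for x in (6.0, 3.0, 1.0):       # candidate Tgat / J ratios
    s, n = signal_fraction(x), noise_fraction(x * J, tau)
    print(x, s, s / n)          # retained signal and SNR improvement factor
\end{verbatim}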
A numerical study of the tradeoff can guide the choice of $T_{\rm gat}$. The figure of merit to maximize is the final SKR, which includes the contribution of the detection and error rates.
We focus on the ratio $T_{\rm gat}/J$ between the gating window and the standard deviation $J$ of the temporal distribution of the signal (including all the aforementioned jitter contributions).
We can expect the tradeoff to be influenced by (i) the signal-to-noise ratio (SNR) when the noise is gated to a window as wide as the signal ($\pm 3$ times the standard deviation $J$) and (ii) the coding error.
The SNR obtained before any gating is not sufficient to describe the situation, because it does not consider the width of the signal.
Intuitively, for the same ungated SNR, a temporally wider signal favors smaller values of $T_{\rm gat}/J$ to eliminate more noise.
Our definition of the SNR, by considering only the portion of noise that falls under the signal, effectively combines the ungated SNR and the width of the signal in a single parameter.
Other protocol parameters such as decoy intensity levels and probabilities also have an importance, but we focus only on the two quantities above for simplicity.
All the parameters of the model except the SNR and coding error are arbitrarily fixed to a realistic QKD scenario.
In Fig.~\ref{fig:Gating}, we can see the results of an optimization with the Nelder-Mead algorithm~\cite{Nelder1965} of the $T_{\rm gat}/J$ ratio to maximize the SKR, for a grid of values of the SNR (noise gated at $\pm 3J$) and coding error.
Predictably, the smaller the SNR, the smaller the gating window should be, in order to discard more noise.
High values of the coding/decoding error decrease the optimal $T_{\rm gat}/J$.
Indeed, although temporal gating alone cannot change the coding/decoding error, the higher the total quantum bit error rate (QBER), the more important it is to reduce it, even at the expense of discarding part of the signal.
This shifts the balance of the tradeoff towards smaller windows, as can be seen in the top-left corner of the figure.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/SNRvsCE_context.pdf}
\caption{Optimization of $T_{\rm gat}/J$ to maximize the secret key rate, for a grid of values of the SNR (with noise gated to a $\pm 3J$-wide window) and the coding/decoding error. The quantitative details are influenced by the specific scenario used in the simulation, but the behavior of $T_{\rm gat}/J$ is quite general.}
\label{fig:Gating}
\end{figure}
The small ridge in Fig.~\ref{fig:Gating} indicates a sudden jump in the optimal value of $T_{\rm gat}/J$.
This is because there are several terms contributing to the secret key rate (see Eq.~\eqref{eq:SKR}), and each is computed with several methods~\cite{Rusca2018}, choosing the best for every configuration of the parameters.
This leads to the presence of multiple local maxima: when one of them is promoted to global maximum, overcoming another, the optimal $T_{\rm gat}/J$ changes.
The exact position and entity of the ridge is not universal and depends on the specific scenario that we chose in this simulation, but its presence is to be expected in general when the SKR is optimized.
\section{Estimation of the SKR}
\label{sec:Secret}
The rate of production of the secret key in a QKD experiment is calculated from the detection and error rates through a security analysis which bounds the amount of information leaked to an adversary Eve.
For our choice of protocol, efficient BB84 with decoy states, we follow Ref.~\cite{Rusca2018} and find
\begin{equation}
\mathrm{SKR} = \frac{1}{t}\left(s_{Z,0} + s_{Z,1}(1 - h(\phi_Z)) - \ell_{\rm EC} -\ell_{\rm c} - \ell_{\rm sec}\right) .
\label{eq:SKR}
\end{equation}
Here, $s_{Z,0}$ and $s_{Z,1}$ are the lower bounds on the number of vacuum and single-photon detections in the key-generating $Z$ basis, $\phi_{Z}$ is the upper bound on the phase error rate corresponding to single photon pulses, $h(\cdot)$ is the binary entropy, $\ell_{\rm EC}$ and $\ell_{\rm c}$ are the number of bits published during the error correction and confirmation of correctness steps, and $\ell_{\rm sec}=6 \log_2(\frac{19}{\epsilon_{\rm sec}})$, where $\epsilon_{\rm sec}=10^{-9}$ is the secrecy parameter associated to the key.
Finally, $t$ is the duration of the qubit transmission.
Terms $s_{Z,0}$, $s_{Z,1}$, and $\phi_{Z}$ are estimated from detections and errors observed in the experiment and depend on protocol parameters and statistical effects.
In what follows, we study how to optimize these parameters to maximize the SKR and finally estimate it in some exemplary scenarios.
\subsection{Finite-key effects}
\label{ss:Finite}
Because actual experiments accumulate only a finite amount of data, the parameters mentioned above cannot be estimated with perfect precision, leaving an opening for potential attackers.
To counter this, QKD uses a broad range of statistical analyses~\cite{Renner2005,Scarani2008,Tomamichel2012}.
Following Ref.~\cite{Rusca2018}, we construct confidence intervals based on the Hoeffding inequality~\cite{Hoeffding1963} around the parameters of interest, and then use the pessimistic extrema of the intervals as our estimates.
This penalizes the performance of the system, but guarantees that the key is secure with a very high probability $1-\epsilon_\mathrm{sec}$.
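As an example of this statistical penalty, a one-sided Hoeffding bound of the kind used in Ref.~\cite{Rusca2018} can be sketched as follows; the observed count, block length, and failure probability are arbitrary placeholders:
\begin{verbatim}
import numpy as np

def hoeffding_delta(n, eps):
    # Half-width such that an n-trial sum deviates by more than delta
    # from its mean with probability at most eps (one-sided Hoeffding bound)
    return np.sqrt(n / 2.0 * np.log(1.0 / eps))

n, k, eps = 10**7, 12500, 1e-9      # block length, observed count, failure prob.
delta = hoeffding_delta(n, eps)
k_minus, k_plus = k - delta, k + delta  # pessimistic extrema used in the proof
print(delta, k_minus, k_plus)
\end{verbatim}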
Fortunately, we can mitigate this cost by optimizing some parameters of the protocol.
These are: the probability $p_Z$ that each of Alice and Bob choose the key basis $Z$ for their preparations and measurements, the probability $p_\mu$ that Alice chooses the stronger intensity level, and the intensities $\mu, \nu$ of each level.
Their optimal values that maximize the SKR change depending on the amount of data (block length) that is used in the statistical analysis.
In our study, we consider two example QKD scenarios: a high-end system with SNSPDs, a GHz source, and a low coding error (scenario A), and a less expensive one with SPADs, a slower source, and a higher coding error (scenario B).
The most relevant parameters of each scenario are listed in Table~\ref{tab:FiniteScenarios}.
\begin{table}
\centering
\begin{tabular}{c|cc}
\toprule
Quantity & Scenario A & Scenario B \\
\midrule
Source repetition rate & 1 GHz & 100 MHz \\
Detector efficiency & 80 \% & 15 \% \\
Coding Error & 0.5 \% & 1.5 \% \\
Dark count rate & 10 Hz & 2 kHz \\
Dead time & 10 ns & 20 $\mu$s \\
Afterpulse probability & 0 & 10 \% \\
Temporal jitter & 10 ps & 200 ps \\
Additional receiver losses & 3 dB & 3 dB \\
Wavelength & 1550 nm & 1550 nm \\
Background photons rate & 5 kHz & 5 kHz \\
\bottomrule
\end{tabular}
\caption{Relevant parameters for the two scenarios considered in the studies of Secs. \ref{ss:Finite} and \ref{ss:SKRTypical}.
For the former, $\langle \eta_{\rm CH}\rangle \approx -7$ dB, whereas for the latter, the distributions of Fig.~\ref{fig:joint_distributions} are used.}
\label{tab:FiniteScenarios}
\end{table}
\begin{figure}[b]
\centering
\subfloat[Scenario A.]{\includegraphics[width=0.495\textwidth]{opt_FK_case_1_1km_RFI_7dB_A.pdf}}\hfill
\subfloat[Scenario B.\label{fig:OptimalProtocolParamsB}]{\includegraphics[width=0.495\textwidth]{opt_FK_case_1_1km_RFI_7dB_B.pdf}}
\caption{Optimization of the protocol parameters against the block length to maximize the secret key rate.}
\label{fig:OptimalProtocolParams}
\end{figure}
In Fig.~\ref{fig:OptimalProtocolParams} we show the results of the optimization (with a simulated annealing procedure~\cite{Xiang1997}) when the channel is characterized by the distribution of Case 1 (Fig.~\ref{fig:joint_distributions}), with $\langle \eta_{\rm CH}\rangle \approx -7$ dB.
We can see an upward trend of $p_Z$ and $p_\mu$ for growing block length: indeed, the larger the total sample size, the easier it is to accumulate the needed statistics in the check basis $X$ (mutually unbiased with respect to $Z$) and in the weaker intensity level $\nu$, even if they are chosen rarely.
The mean photon numbers $\mu$ and $\nu$ show more stability, with a slight downward trend for $\nu$ in the right-hand side of the plot.
This is justified considering that its limit for infinite block length is zero, as this makes the bounds of the decoy-state method tighter.
For shorter block lengths, $\nu$ must grow to increase the detection rate and accumulate the needed statistics.
The optimal value of $\mu$ is the one that strikes the right balance between a low multi-photon emission probability and a high detection rate.
However, it is also influenced by other factors like afterpulses and detector saturation, which is why it is smaller in scenario B.
For low and decreasing block length, as $p_\mu$ shrinks to increase the statistics accumulated in the low intensity level, $\mu$ grows to keep the signal rate high, and $\nu$ responds by slightly decreasing to tighten the bounds.
The jump which can be seen in Fig.~\ref{fig:OptimalProtocolParamsB} is similar to the ridge of Fig.~\ref{fig:Gating}: a local maximum is promoted to global and the optimal parameters suddenly move.
The exact position and entity of the jump depends strongly on the scenario, but its presence is to be expected in optimizations like these.
\begin{figure}
\centering
\subfloat[Scenario A.]{\includegraphics[width=0.495\textwidth]{cost_FK_case_1_1km_RFI_7dB_A.pdf}}\hfill
\subfloat[Scenario B.]{\includegraphics[width=0.495\textwidth]{cost_FK_case_1_1km_RFI_7dB_B.pdf}}
\caption{Cost of finite-key effects, represented by the ratio between the SKR and $\mathrm{SKR}_\infty$, which would be obtained by optimizing the protocol parameters for infinite length and by accumulating an infinitely long key block.}
\label{fig:CostFinite}
\end{figure}
In Fig.~\ref{fig:CostFinite} we show the cost of finite-key effects and of using a wrong set of parameters.
We express it as the ratio between the SKR and $\mathrm{SKR}_\infty$, i.e., the SKR that would be obtained by optimizing the above parameters for infinite length and by accumulating an infinitely long key block.
The dashed line corresponds to the choice of optimizing the parameters for a fixed block length of $10^7$ bits.
Generating a key is possible but the SKR quickly drops to zero if block lengths shorter than $10^7$ are used.
This happens more slowly in scenario B because the noisier detectors reduce the impact of tuning signal-related parameters.
The solid line shows what happens if the parameters are optimized for each block length.
The results improve drastically, underlining the importance of optimizing the protocol parameters for the predicted size that will be used operatively.
However, even with a large block length of $10^9$ bits, the SKR is reduced by a sizeable portion with respect to $\mathrm{SKR}_\infty$.
While this precise value depends on the chosen scenarios, the fact that finite-key effects should not be neglected even for large block lengths is general.
\subsection{SKR in typical scenarios}
\label{ss:SKRTypical}
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/AllSKR.pdf}
\caption{Secret key rate for the two scenarios and the eight cases of Fig.~\ref{fig:joint_distributions} (to which the numbers refer), considering all the effects studied in this work.}
\label{fig:AllSKR}
\end{figure}
To conclude our analysis and give an example of the capabilities of our full model, we calculate the final SKR of a QKD system considering all the effects we have studied in this work.
We do this for the two scenarios of Table~\ref{tab:FiniteScenarios} and the eight channel efficiency distributions of Fig.~\ref{fig:joint_distributions}.
For each configuration, we use the simulated annealing algorithm to optimize the protocol parameters (the same of Sec. \ref{ss:Finite}) and the temporal gating, for a fixed block length of $10^7$ bits.
The results are shown in Fig.~\ref{fig:AllSKR}.
We can observe the characteristic linear behavior of the SKR with transmittance, and glimpse a drop for strong losses caused by the prevalence of noise.
However, thanks to the breadth of phenomena that our model includes, these values go beyond the simple verification of this typical trend, and are accurate estimates of the performance of the considered QKD systems.
Scenario B is strongly penalized by its slower source, lower detection efficiency, afterpulses and dead time.
Because of this, only the first four distributions yield a positive SKR.
\section{Conclusion and Outlook}
In this work, we studied many of the relevant phenomena that influence the performance of a ground-to-ground QKD BB84 system.
Particular focus was given to the channel model that estimates the efficiency of the link, considering atmospheric absorption, turbulence-induced beam broadening, wandering, scintillation, and the effect of single-mode fiber coupling.
We showed how adaptive optics can reduce losses and suggested ways to optimize the receiver diameter.
We found that calculating only the mean efficiency can sometimes be insufficient, and the entire probability distribution is needed.
This is because of the saturation of single-photon detectors, which may suppress the high tail of the distribution, reducing the detection rate more than one would expect by considering only the mean.
We analyzed most of the sources of error in QKD and some mitigation techniques, showing how to find the best temporal gating to filter out noise.
We also included the finite-key effects that reduce the performance because of imperfect parameter estimation.
We highlighted how optimizing the probabilities of basis choice and the properties of the decoy states can alleviate this cost.
Finally, we put everything together to estimate the final secret key rate in some example scenarios.
Our model can be expanded further, for instance to include tracking imprecision in moving links and imperfect quantum state preparation beyond the coding error.
However, it is comprehensive enough to guide the design of QKD systems and underline what problems should be considered.
This can help the implementation and deployment of free-space daylight links in future QKD networks.
\begin{acknowledgments}
This work was supported by Agenzia Spaziale Italiana, project {\it Q-SecGroundSpace} (Accordo n. 2018-14-HH.0, CUP: E16J16001490001). CloudVeneto is acknowledged for the computational resources.
\end{acknowledgments}
\vfill
\bibliographystyle{apsrev4-2}
\section{Introduction}
During the last 90 years, several breakthroughs in the field of particle physics and high energies have been brought about by studying the properties of neutrinos. This historical fact motivates scrutinizing neutrino interactions at all accessible energies to look for signs of new non-standard interactions. BOREXINO has made precise measurements of the scattering of solar neutrinos, with energies from sub-MeV up to 14~MeV, on electrons. The scattering of neutrinos with energies of a few tens of GeV off nuclei has been studied with great precision by experiments like NOMAD \cite{NOMAD:2001xxt}. Finally, neutrino telescopes such as ICECUBE, DEEPCORE, ANTARES, ARCA and ORCA detect the scattering of higher energy atmospheric and cosmic neutrinos on nuclei, but these detectors cannot resolve the fine details of the scattering processes in case an intermediate new particle is produced.
Neutrinos of all flavors with 100~GeV$-$1~TeV energies can be produced at the Interaction Points of the LHC. The main detectors of the LHC, CMS and ATLAS, are not designed to detect these neutrinos and miss them.
Indeed, interactions of colliding protons produce a large flux of quarks along the beamline which hadronize into pions, kaons, and charmed hadrons. The eventual decays of these hadrons emit a large flux of neutrinos in the forward direction. During Run III of the LHC in 2022-2024, two new detectors called FASER$\nu$ \cite{Abreu:2019yak} and SND@LHC
\cite{SND} will detect these large fluxes of neutrinos emitted in the forward direction. These detectors are designed to resolve even the short track of the $\tau$-lepton produced by Charged Current (CC) $\nu_\tau$ interactions. That is, they have a superb spatial resolution, which makes them ideal for probing new feebly interacting particles that go through chain decays.
Ref.~\cite{Bakhti:2020vfq} demonstrates that FASER$\nu$ can explore a variety of light dark matter models in which dark matter abundance is set via freeze-out scenario with annihilation into pairs of intermediate light new neutral particles.
Ref. \cite{Falkowski:2021bkq} studies the capability of FASER$\nu$ to probe effective beyond-standard-model couplings between neutrinos and quarks that may lead to anomalous Charged Current (CC) interactions of neutrinos: $\nu+{\rm nucleus}\to l+X$.
Ref. \cite{Ismail:2020yqc} makes a forecast of the impact of the Neutral Current (NC) Non-Standard Interaction (NSI) of form $[(V-A)(V \pm A)]$ on FASER$\nu$ data: $\nu +{\rm nucleus} \to \nu +X$.
Ref.~\cite{Kling:2020iar} investigates the effects of light flavor gauge bosons at FASER$\nu$ and SND@LHC.
Ref.~\cite{Bakhti:2020szu} as well as Refs.~\cite{Jho:2020jfz,Jodlowski:2020vhr} scrutinize $\nu+{\rm nucleus} \to N+X$ in which $N$ is a heavy neutrino. In the previous studies, the intermediate particle mediating $\nu+{\rm nucleus} \to N+X$ is either a SM photon (interacting via a dipole interaction \cite{Jodlowski:2020vhr,Ismail:2021dyp}) or a dark photon or $Z'$ \cite{Bakhti:2020szu,Jho:2020jfz}.
Large fluxes of $\nu_\mu$ and $\bar{\nu}_\mu$ at FASER$\nu$ and at SND@LHC
provide a unique opportunity to study possible interactions of the second generation leptons with new particles. Such new interactions are also motivated by the famous $(g-2)_\mu$ anomaly. In this paper, we build a model in which the second generation left-handed leptons interact with a new right-handed neutrino $N$ and a new scalar $SU(2)$ doublet that also couples to the quarks. $N$ heavier than a few GeV is uncharted territory that could not be explored by previous neutrino scattering experiments but, as we shall show, FASER$\nu$ and SND@LHC can probe $N$ with a mass up to $\sim 15$~GeV. We outline the signals of the model at forward experiments and show that the signals are background free. We propose methods to derive the mass and the lifetime of $N$ from the data. We forecast the bounds that can be extracted from the data of SND@LHC, FASER$\nu$ and its upgrade for the HL-LHC on the relevant effective couplings. We also argue that the model can be tested by the main detectors of the LHC. Indeed, a single signal event at these forward experiments will be a great motivation to look for signatures of the model at CMS and ATLAS.
We discuss the contribution to $(g-2)_\mu$ and find that to explain the anomaly, the model should be extended to include more generations of right-handed neutrinos and/or scalar doublets. Adding more scalar doublets may increase the new effective coupling between $\bar{N}\nu$ and the quarks, increasing the statistics of the signal at the forward experiments. Multiple $N$ can have a more spectacular effect, leading to chain decays of right-handed neutrinos at both forward experiments and at the main detectors of the LHC ({\it i.e.,} at CMS and ATLAS). In the process of chain decay, multiple charged leptons can be emitted.
Recent studies report such multilepton signals in the LHC data \cite{vonBuddenbrock:2017gvy,Fischer:2021sqw,Hernandez:2019geu}. As discussed in \cite{Sabatta:2019nfg},
the $(g-2)_\mu$ anomaly and these multilepton excesses at the LHC may be related.
We discuss the signatures of the variant of the model
with multiple $N$ at forward experiments and show how such a variant can be distinguished from the minimal version of the model with only a single $N$.
Finally, we add a lighter singlet scalar, $S$, mixed with the neutral component of the scalar electroweak doublet. This makes the model a two Higgs doublet plus $S$ model which is also motivated by some anomalies reported in the LHC data
\cite{vonBuddenbrock:2017gvy,Fischer:2021sqw,Hernandez:2019geu,
Mathaha:2021buc,Crivellin:2021ubm} (see, however, \cite{Fowlie:2021ldv}). We show that adding such a light scalar can dramatically increase the signal statistics at forward experiments.
This paper is organized as follows. In sect.~\ref{FE}, we describe the characteristics of the forward experiments that are relevant for our analysis. We then outline the concept of deriving information on the properties of new neutral intermediate particles which might be produced and subsequently decay inside the detector. In sect.~\ref{model}, we propose the minimalistic version of our model and describe its signals at forward experiments. We assess the background for the signal. We compute the cross section of $N$ production by $\nu_\mu$ and $\bar{\nu}_\mu$ scattering off nucleons. We forecast the bounds from forward experiments on the couplings in case of null signals. We also compute the maximum number of signal events saturating the present bounds and discuss how the parameters of the underlying model can be extracted in the fortunate situation of a relatively large signal sample.
In sect.~\ref{phen}, we compute the contribution from the new interactions to $(g-2)_\mu$ and to the neutrino masses. We then describe the potential signals at the main detectors of the LHC, CMS and ATLAS.
In sect.~\ref{multiple}, we describe the signatures of a variant of the model with multiple $N$ at forward experiments as well as at the main detectors of the LHC. In sect.~\ref{light-singlet}, we show that by adding a singlet scalar, the $N$ production cross section can be significantly increased.
In sect.~\ref{summary}, we summarize the results and briefly discuss the implications for the detection of atmospheric neutrinos by neutrino telescopes.
\section{Forward experiments \label{FE}}
In this section,
we briefly review the setup and capabilities of SND@LHC, FASER$\nu$ and its upgrades for the high luminosity LHC. FASER$\nu$ and SND@LHC will take data during Run III of the LHC. They will both be located at a distance of 480~m from the ATLAS Interaction Point (IP) but at opposite sides, in the TI12 and TI18 tunnels \cite{Kling:2021gos}. These detectors are designed to detect neutrinos emitted from the IP in the forward direction. The main sources of neutrinos are the decays of hadrons such as pions, kaons, and charmed hadrons produced at the IP with a momentum along the beamline. Although the distances of both detectors from the IP are equal, the fluxes of neutrinos at SND@LHC are predicted to be softer than those at FASER$\nu$ because, while FASER$\nu$ is located exactly in the beamline direction (right before the FASER detector), SND@LHC will be located slightly off axis.
The upgrade of FASER$\nu$ will be located at the proposed Forward Physics Facility (FPF) at a distance of 620~m from the IP.
While during Run III (2022-2024) an integrated luminosity of 150 fb$^{-1}$ will be collected at the ATLAS IP, during the high luminosity upgrade the integrated luminosity will increase to 3000 fb$^{-1}$. Ref.~\cite{Kling:2021gos} predicts the fluxes of neutrinos in these three detectors.
The FASER$\nu$ and SND@LHC detectors are both made of Tungsten, with masses of
1.2~tons \cite{Abreu:2019yak} and 850~kg \cite{SND}, respectively. Indeed, FASER$\nu$ is made of 1000 layers of emulsion films interleaved with 1-mm-thick Tungsten plates. The size of FASER$\nu$ is $25~{\rm cm}\times 25~{\rm cm} \times 1.3~{\rm m}$~\cite{Abreu:2019yak}.
That of SND@LHC is $41.6~{\rm cm} \times 38.7~{\rm cm} \times 32~{\rm cm}$. The upgrade of FASER$\nu$ will be of size $50~{\rm cm}\times 50~{\rm cm} \times 5~{\rm m}$ and will weigh 10~tons~\cite{felix_kling_2020_4059893}.
Notice that FASER$\nu$, being an emulsion detector,
cannot record the timing of events. The whole data-taking period of Run III will be divided into periods with 10-50 fb$^{-1}$ of integrated luminosity, after each of which the emulsion will be processed \cite{Abreu:2019yak}. As we will see in sect.~\ref{model}, such a division significantly reduces the accidental background from standard model NC and CC events to our signal.
The great advantage of these detectors is their spatial resolution, which makes them capable of resolving even short $\tau$-tracks.
For example, FASER$\nu$ can resolve vertices with a precision of $\sigma_{pos}=0.4~\mu$m. The angular resolution of tracks will be $\sqrt{2} \sigma_{pos}/L_{tr}$, where $L_{tr}$ is the track size. As discussed in \cite{Bakhti:2020vfq}, this makes FASER$\nu$ an ideal setup for studying dark sector particles that go through chain decays. The energy resolution is however modest, of order of 30~\% \cite{Abreu:2019yak}.
Let us consider a new particle produced at FASER$\nu$ decaying to $n_f$ final charged particles. The mass of this particle can be extracted by measuring the invariant mass of these final particles with a relative error of $30 \% \sqrt{n_f}$. If $N_{new}$ such events are observed, the relative precision will be improved to $30\% \sqrt{n_f/N_{new}}$.
The great precision in the angular
resolution can also help us to find out whether the decay products of the new particle include an invisible neutral particle or not. This can be done by reconstructing the missing transverse momentum relative to the direction of the momentum of the intermediate new particle. Let us consider a new particle (charged or neutral) whose production and decay vertices are both inside the FASER$\nu$ detector and are reconstructed. Let us denote the distance between these two vertices by $L_{new}$. The transverse momentum of a possible invisible particle can be reconstructed by projecting the momenta of the final charged particles onto the plane perpendicular to the line connecting the two vertices. If the final charged particles make an angle smaller than $90^\circ$ with each other, the transverse momentum of the missing light particle will be of order of $m_N/2$. If $m_N/(2E_{new}) > \sqrt{2} \sigma_{pos}/L_{new}$, the precision will be enough to distinguish the emission of a neutral invisible particle in the final state. By measuring $L_{new}$, the lifetime of the new particle can also be extracted. The uncertainty in the determination of the lifetime will be mostly dominated by statistics: $\Delta \tau_{new}/\tau_{new} =2 \log 2/\sqrt{N_{new}}$.
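
These criteria translate into a short numerical check. In the sketch below the input values (displacement, mass, energy, and event counts) are purely illustrative; only the formulas are taken from the text.
\begin{verbatim}
import math

sigma_pos = 0.4e-6     # position resolution [m]
m_N, E_new = 10.0, 200.0   # mass and energy of the new particle [GeV]
L_new = 100e-6         # displacement between the two vertices [m]

# Invisible-particle emission is distinguishable if the typical angle
# m_N/(2 E_new) exceeds the angular resolution sqrt(2)*sigma_pos/L_new.
print("resolvable:",
      m_N / (2 * E_new) > math.sqrt(2) * sigma_pos / L_new)

# Relative precision on mass and lifetime from N_new observed events
# with n_f charged decay products and ~30% energy resolution each.
n_f, N_new = 3, 25
print("dm/m     ~", 0.30 * math.sqrt(n_f / N_new))
print("dtau/tau ~", 2 * math.log(2) / math.sqrt(N_new))
\end{verbatim}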
\section{The minimal model and its prediction for forward experiments \label{model}}
In subsect.~\ref{Lag}, we introduce the field content of the minimal model and its Lagrangian. In
subsect.~\ref{Nprod}, we compute the cross section of $\nu_\mu+{\rm nucleon}\to N+X$ and predict the number of events at the forward experiments. In subsect.~\ref{Ndec}, we study the decay modes of $N$. We calculate the average energy of $N$ produced by $\nu_\mu$ with a given energy and use it to estimate the decay length of $N$ at the forward experiments. In subsect.~\ref{sig}, we describe the signals and estimate the potential backgrounds for it. We then forecast the bounds that can be derived on the effective coupling by the forward experiments if no signal is observed.
\subsection{The model content and Lagrangian \label{Lag}}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{img/sigma_tot.pdf}
\caption{Cross section of $\nu_\mu+{\rm nucleon}\to N+X$ versus the energy of the initial neutrino for various $N$ mass. We have taken $G_u=G_d=10^{-5}$~GeV$^{-2}$. }
\label{fig:sigma_tot}
\end{figure}
In this subsection, we introduce the Lagrangian of the minimalistic version of the model.
We add a scalar doublet
\be \Phi=\left[ \begin{matrix} \Phi^+ \cr \Phi^0 \end{matrix} \right]\ee
and a right-handed singlet fermion, $N$ to the standard model. The right-handed $N$ can be either Majorana or Dirac. We turn on the following Yukawa interactions
\be Y_\alpha \bar{N} \Phi^TcL_\alpha+ Y_d \bar{d} \Phi^{\dagger}Q +Y_u \bar{u} \Phi^{T}cQ
+{\rm H.c.}\ee
where $L_\alpha$ is the left-handed lepton doublet of flavor $\alpha$ and $Q$ is the first generation of left-handed quarks.
As long as the masses of $\Phi^0$ and $\Phi^+$ are heavier than $\sim 300$ GeV and $Y_u,Y_d\stackrel{<}{\sim}0.3$, the present bounds from the direct production of these particles at CMS and ATLAS can be avoided.
Moreover, as long as the splitting between the components is small, the bounds from precision data can be satisfied \cite{Haller:2018nnx}.
However, $\Phi^0$ and $\Phi^+$ will be within the reach of the high
luminosity phase of the LHC or even that of Run III. We shall discuss the possible signatures of direct $\Phi$ production in sect.~\ref{phen}. The Yukawa coupling to the muon, $Y_\mu$, can also lead to a contribution to $(g-2)_\mu$. As we shall see in sect.~\ref{phen}, explaining the famous $(g-2)_\mu$ anomaly motivates large values of $Y_\mu$, saturating the perturbativity condition, $Y_\mu\sim 3-4$. Ref.~\cite{Allwicher:2021rtd} shows that models explaining the $(g-2)_\mu$ anomaly with new Yukawa couplings generally need large Yukawa couplings. Here, our main aim is not to explain the $(g-2)_\mu$ anomaly, but we take $Y_\mu \sim O(1)$ to have a high rate of $N$ production at the FASER$\nu$ and SND@LHC detectors due to $\nu_\mu$ interactions. Integrating out $\Phi^0$ and $\Phi^+$, we obtain the following effective Lagrangian
\be G_{u} \bar{N}_R\nu_\mu \bar{u}_Lu_R+
G_{d} \bar{N}_R\nu_\mu \bar{d}_Rd_L+
G_{L} \bar{N}_R\mu_L \bar{d}_Ru_L+
G_{R} \bar{N}_R\mu_L \bar{d}_Lu_R+{\rm H.c.}\label{GuGd} \ee
where
$$ G_u=\frac{Y_\mu Y_u^*}{m_{\Phi^0}^2}, \ G_d=-\frac{Y_\mu Y_d}{m_{\Phi^0}^2}, \
G_L=\frac{Y_\mu Y_d}{m_{\Phi^+}^2}, \ {\rm and } \
G_R=\frac{Y_\mu Y_u^*}{m_{\Phi^+}^2}. $$
Taking $Y_u \sim Y_d \sim 0.3$, $Y_\mu \sim 3$ and $m_{\Phi^0}\sim m_{\Phi^+}\sim 300$ GeV, we find $G_u\sim G_d \sim G_L\sim G_R \sim 10^{-5}$ GeV$^{-2}$.
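
As a quick arithmetic check of this benchmark point:
\begin{verbatim}
Y_u = Y_d = 0.3
Y_mu = 3.0
m_phi = 300.0                  # GeV, common Phi^0 / Phi^+ mass
G_u = Y_mu * Y_u / m_phi**2    # 0.9 / 9e4 = 1e-5 GeV^-2
print(f"G_u = G_d ~ {G_u:.0e} GeV^-2")
\end{verbatim}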
Throughout this paper, we assume that only the second generation leptons couple to $\Phi$ and $N$. We can also impose the following global $U(1)$ symmetry to set $Y_e=Y_\tau=0$:
\be \Phi \to e^{i \alpha}\Phi, \ \ L_\mu \to e^{-i\alpha}L_\mu, \ \ \mu_R \to e^{-i\alpha}\mu_R, \ \ N\to N, \ \ u\to e^{i\alpha}u, \ \ {\rm and} \ \ d\to e^{-i\alpha}d, \label{glob} \ee
with the rest of SM fields including $Q$, the SM Higgs and the first and third generation leptons being invariant.
Notice that the $u$-quark and $d$-quark masses as well as neutrino mixing break this $U(1)$ symmetry. As a result, this symmetry explains the smallness of the masses of the first generation quarks relative to the rest of quark masses, as a bonus. We shall discuss in sect.~\ref{phen} that this symmetry also prevents a large 1-loop contribution to the neutrino mass in case $N$ is of Majorana type.
\subsection{$N$ production by neutrino scattering on nuclei \label{Nprod}}
Let us consider a neutrino of energy $E_\nu^{lab}$ colliding on a quark which carries a fraction $x$ of the proton momentum.
The $s$ Mandelstam variable of the quark neutrino system will be
\be s=x^2m_p^2+2xm_p E_\nu^{lab} \ee
where the first term is negligible. The energy of the neutrino and quark in the center of mass is
\be E_\nu=\left( \frac{xm_p E_\nu^{lab}}{2}.\right)^{1/2}.\ee
The cross section of scattering off a $u$ quark at the center of mass frame with a scattering angle of $\theta$ can be written as
\be \frac{d\sigma_u}{d\cos \theta}=\frac{G_u^2}{32 \pi v_{rel}}
\left( 1-\frac{m_N^2}{4 E_\nu^2}\right)^2\left( \frac{m_{\Phi^0}^2}{t-m_{\Phi^0}^2}\right)^2\left(
E_\nu^2(1-\cos \theta)^2 +\frac{m_N^2}{4}(1-\cos^2\theta)\right)\label{diff-cross}\ee
where {$m_N$ is the mass of particle $N$}. The Mandelstam variable $t$ can be written as
$$t=2E_\nu\left(E_\nu-\frac{m_N^2}{4 E_\nu}\right)(\cos\theta -1), \qquad |t|\stackrel{<}{\sim} O[(10~{\rm GeV})^2]\ll m_{\Phi^0}^2,$$ so the second parenthesis in Eq.~(\ref{diff-cross}) can be approximated by 1.
The cross section of scattering off a $d$-quark is given by a similar formula, replacing $G_u$ with $G_d$. Since the mediator is a scalar, the cross sections for scattering off a quark and off an antiquark are equal. Moreover, the cross sections for neutrinos and antineutrinos will be equal. We can therefore write the total cross section of scattering on a nucleon ($i \in \{{\rm proton,~neutron}\}$) as
\be\sigma_{tot}^i=
\int_{-1}^1\int_{x_{min}}^1\left[\frac{d\sigma_{u}}{d \cos \theta}(f_u^i(x,t)+ f_{\bar{u}}^i(x,t))+\frac{d\sigma_{d}}{d \cos \theta}(f_d^i(x,t)+ f_{\bar{d}}^i(x,t))\right] dx d\cos \theta, \label{eq:Tocross} \ee
where $x_{min}=m_N^2/(2 m_p E_\nu^{lab})$.
The nucleon parton distribution functions, $f^i_q(x,t)$, are computed with LHAPDF6~\cite{buckley2015lhapdf6} using NNPDF3.1~\cite{ball2017parton}. We have set $G_u=G_d=10^{-5}$ GeV$^{-2}$ throughout this paper. Notice that, since we have taken $G_u$ equal to $G_d$, the cross sections on the $u$-quark and the $d$-quark, and as a result on the proton and the neutron, are equal.
That is, from $G_u=G_d$, we conclude $d\sigma_u/d\cos\theta=d\sigma_d/d\cos\theta$
and therefore $\sigma_{tot}^n=\sigma_{tot}^p$.
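
A schematic numerical evaluation of Eq.~(\ref{eq:Tocross}) is sketched below. A toy parton density stands in for the NNPDF3.1 sets used in the actual computation, and the relative velocity is set to its ultra-relativistic value; the resulting numbers are therefore purely illustrative.
\begin{verbatim}
import math
from scipy.integrate import dblquad

G = 1e-5          # G_u = G_d [GeV^-2]
m_p = 0.938       # proton mass [GeV]
v_rel = 2.0       # assumed relative velocity (ultra-relativistic limit)

def toy_pdf(x):
    # Illustrative combined u+ubar+d+dbar density, NOT a fitted PDF;
    # in the actual computation the NNPDF3.1 sets are read via LHAPDF6.
    return 4.0 * (1 - x) ** 3 / math.sqrt(x)

def dsigma_dcos(cos_t, x, E_lab, m_N):
    E = math.sqrt(x * m_p * E_lab / 2)      # CM energy of each collider
    if 2 * E <= m_N:                        # below production threshold
        return 0.0
    # the (m_phi^2/(t - m_phi^2))^2 factor is ~1 since |t| << m_phi^2
    return (G ** 2 / (32 * math.pi * v_rel)
            * (1 - m_N ** 2 / (4 * E ** 2)) ** 2
            * (E ** 2 * (1 - cos_t) ** 2
               + m_N ** 2 / 4 * (1 - cos_t ** 2)))

def sigma_tot(E_lab, m_N):
    x_min = m_N ** 2 / (2 * m_p * E_lab)
    val, _ = dblquad(
        lambda c, x: dsigma_dcos(c, x, E_lab, m_N) * toy_pdf(x),
        x_min, 1.0, -1.0, 1.0)
    return val                      # [GeV^-2]; 1 GeV^-2 ~ 0.389 mb

print(sigma_tot(E_lab=500.0, m_N=3.0))
\end{verbatim}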
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\textwidth]{img/Flux_nu_Fa.pdf}
\caption{The number of $N$ and $\bar{N}$ particles produced by muon neutrino and antineutrino fluxes at each energy bin at FASER$\nu$ during run III of the LHC. We have taken $G_u=G_d=10^{-5}$~GeV$^{-2}$. The fluxes of neutrinos and antineutrinos are taken from \cite{Kling:2021gos}. }
\label{fig:N_tot}
\end{figure}
The total cross section shown in Eq.~(\ref{eq:Tocross}) versus $E_{\nu}^{lab}$ is illustrated in Fig.~\ref{fig:sigma_tot}, taking $m_N=0.1, 1, 3, 8, 10$~GeV and $15$~GeV. As seen from the figure, the curves converge for large $E_\nu^{lab}$. This is expected because when the energy of the center of mass is much larger than the masses of final particles, the dependence of the cross section on the masses becomes weaker. Fig.~\ref{fig:sigma_tot} also shows that for $m_N<1$ GeV and $E_\nu^{lab}>100$~GeV, the cross section is almost independent of $m_N$. This means that the major contribution to the cross section comes from $x>{\rm few }\times 10^{-2}$. Even in the limit $m_N\to 0$ and taking $G_u=G_d\sim G_F$, the cross section in Eq.~(\ref{eq:Tocross}) is about one order of magnitude smaller than the cross section of SM neutral current. This is partly due to the spinorial difference of the amplitudes and partly due to the difference in the numerical factors in the definitions of $G_F$ and $G_u$ or $G_d$ (cf., the definitions of $G_u$ and $G_d$ in Eq.~(\ref{GuGd}) with the Fermi interaction, $(4 G_F/\sqrt{2})(\bar{d}_L\gamma^\mu u_L)(\bar{\nu}_\mu \gamma_\mu \mu_L)$).
\begin {table}
\caption {The total predicted numbers of $N+\bar{N}$ to be detected by FASER$\nu$ and SND@LHC during run III of the LHC as well as by FASER$\nu 2$ during HL-LHC era, taking $G_u=G_d=10^{-5}$ GeV$^{-2}$ and different values of $m_N$ as shown in the second row. } \label{tab:1}
\begin{center}
\begin{tabular} {| l | c c c c c c|}
\hline
\multicolumn{7}{|c|}{Number of $N+\bar{N}$ events} \\
\hline
$m_N$ (GeV) & 0.1 & 1 & 3 & 8 & 10 & 15 \\
\hline
SND@LHC & 19 & 18 & 13 & 5 & 3 & 1 \\
FASER$\nu$ & 113 & 109 & 90 & 46 & 35 & 17 \\
FASER$\nu 2$ & 7685 & 7394 & 6045 & 3019 & 2229 & 1015 \\
\hline
\end{tabular}
\end{center}
\end{table}
The total number of $N$ and $\bar{N}$ produced at the detector will be \be
\frac{M_{det}}{m_p} \int_{m_N^2/(2m_p )}(F_{\nu_\mu}(E_\nu^{lab})+F_{\bar{\nu}_\mu}(E_\nu^{lab}))
(r_n \sigma_{tot}^n+r_p \sigma_{tot}^p) d E_\nu^{lab}\ee
where $M_{det}$ is the detector mass and $r_n$ and $r_p$ are fractions of the neutron and proton in the nuclei of the detector, respectively: $r_p=Z/A$ and $r_n=(A-Z)/A$.
$F_{\nu_\mu}$ and $F_{\bar{\nu}_\mu}$ \cite{Kling:2021gos} are the time-integrated fluxes per unit area at the detector. Fig.~\ref{fig:N_tot} illustrates the spectrum of $N+\bar{N}$ produced during Run III of the LHC at FASER$\nu$. The total numbers of events at SND@LHC, FASER$\nu$, and FASER$\nu 2$ for various values of $m_N$ are shown in Table~\ref{tab:1}.
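
Schematically, this integral can be evaluated as a sum over energy bins. In the sketch below, all flux and cross-section values are invented placeholders and not the tabulated fluxes of Ref.~\cite{Kling:2021gos}.
\begin{verbatim}
m_p_kg = 1.67e-27      # nucleon mass [kg]
M_det = 1.2e3          # FASER-nu Tungsten mass [kg]

# (E_nu^lab [GeV], time-integrated nu_mu + nubar_mu flux [cm^-2],
#  sigma_tot per nucleon [cm^2]) -- invented, illustrative numbers
bins = [(100.0, 5e8, 3e-38), (300.0, 3e8, 1e-37), (1000.0, 5e7, 3e-37)]

# With sigma_n = sigma_p (since G_u = G_d), r_n + r_p = 1, so the
# Tungsten proton/neutron composition drops out of the sum.
n_events = (M_det / m_p_kg) * sum(f * s for _, f, s in bins)
print(f"expected number of N + Nbar events: {n_events:.0f}")
\end{verbatim}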
\subsection{$N$ decay \label{Ndec}}
The produced $N$ travels a distance of $l\sim \Gamma_N^{-1} E_N/m_N$ where
$\Gamma_N$ is the total decay rate of $N$. Taking $N$ heavier than $\sim 3$~GeV, the following decay modes are available for the
$N$ decay
\be \Gamma(N\to \nu_\mu u\bar{u})=\frac{|G_u|^2}{|G_d|^2}\Gamma(N\to \nu_\mu d\bar{d})=\frac{|G_u|^2}{|G_L|^2+|G_R|^2}\Gamma(N\to \mu u\bar{d}) =\frac{G_u^2 m_N^5}{1024\pi^3} \label{decay}\ee
where we have neglected the masses of the final particles. Notice that the factor of three that usually appears in the denominator from the phase space integration of three-body decay modes has been canceled in Eq.~(\ref{decay}) by the factor of three in the numerator coming from the summation over the colors of the final quarks. Moreover, we have assumed that $N$ is heavy enough that the meson resonances are not relevant for the $N$ decay.
Of course, if $N$ is of Majorana type, it can also decay into the charge conjugates of the above final particles.
The average energy of $N$ produced by a neutrino of $E_\nu^{lab}$ can be estimated as
\be \langle E_N^{lab}\rangle =\frac{
\int_{-1}^1\int_{x_{min}}^1 E_N^{lab}\left[\frac{d\sigma_{u}}{d \cos \theta}(f_u(x,t)+ f_{\bar{u}}(x,t))+\frac{d\sigma_{d}}{d \cos \theta}(f_d(x,t)+ f_{\bar{d}}(x,t))\right] dx d\cos \theta}{\sigma_{tot}}, \ee
where $E_N^{lab}$ in the integrand can be written as
$$ E_N^{lab}= \frac{E_\nu^{lab}}{2}(1+\cos\theta) +
\frac{m_N^2}{4 x m_p}(1-\cos\theta).$$
$\langle E_N^{lab}\rangle$ versus $E_{\nu}^{lab}$ is shown in Fig.~\ref{fig:E_N}. As expected, $N$ carries a fraction of $O(0.3)$ of the energy of the initial neutrino. For heavier $N$, this fraction is larger because the energy available for the jets produced along with $N$ is lower.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\textwidth]{img/E_N_AVE.pdf}
\caption{Average energy of $N$ particles produced by incoming neutrinos with energy of $E_\nu^{lab}$. }
\label{fig:E_N}
\end{figure}
The momentum of $N$ produced by the scattering on a parton of associated momentum fraction $x$ will make an angle of $\sim \gamma^{-1}=(2xm_p/E_\nu^{lab})^{1/2}\sim 10^{-2}$ with the beamline. It will decay after traveling a distance of
\be l=\Gamma_N^{-1} \gamma_N=3~\mu{\rm m}\frac{\left(10^{-5}~{\rm GeV}\right)^2 }{|G_u|^2+ |G_d|^2+|G_L|^2+|G_R|^2}\left(\frac{10~{\rm GeV}}{m_N}\right)^6 \left(\frac{E_N^{lab}}{200~{\rm GeV}}\right).\label{decayLENGHT} \ee
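
For orientation, Eq.~(\ref{decayLENGHT}) can be tabulated for a few benchmark masses, assuming couplings saturating $G\sim 10^{-5}$~GeV$^{-2}$:
\begin{verbatim}
def decay_length_um(m_N, E_N, G2_sum):
    # Eq. (decayLENGHT): l in micrometers, masses/energies in GeV,
    # G2_sum = |G_u|^2 + |G_d|^2 + |G_L|^2 + |G_R|^2 in GeV^-4.
    return 3.0 * (1e-5) ** 2 / G2_sum * (10.0 / m_N) ** 6 * (E_N / 200.0)

for m_N in (3.0, 8.0, 10.0, 14.0):
    l = decay_length_um(m_N, E_N=200.0, G2_sum=4 * (1e-5) ** 2)
    print(f"m_N = {m_N:4.1f} GeV  ->  l ~ {l:.3g} um")
\end{verbatim}
Note the very steep $m_N^{-6}$ dependence: the displacement shrinks from millimeters to sub-micron as $m_N$ grows from a few GeV to $\sim 15$ GeV.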
\subsection{Signal and background at forward experiments \label{sig}}
The signature of the $N$ production vertex is jets, similar to the SM neutral current neutrino interactions. As seen from Eq.~(\ref{decayLENGHT}), the dependence of the $N$ decay vertex displacement on $m_N$ is very strong. For values of the effective coupling saturating the upper limit and masses up to $m_N\sim 14$~GeV, the displacement can be large enough to be resolved, thanks to the superb position resolution of 0.4~$\mu$m of FASER$\nu$. Taking $m_N>2$ GeV, the displacement will be smaller than 10~cm and therefore within the size of the detector. Since $N$ is also relativistic, its decay products will be emitted within a cone with an opening angle of $m_N/E_N \sim 10^{-3}-10^{-2}$. Again thanks to its excellent angular resolution, FASER$\nu$ can resolve the two jets associated with $u \bar{u}$ (or $d \bar{d}$) as well as the two jets associated with $u \bar{d}$ and the charged lepton. The topology of the events is schematically shown in Fig.~\ref{fig:topology}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/topology.pdf}
\caption{Schematic topology of the signal event. The green arrows show the final particles from the $N$ decay. The solid green arrows show jets. The dashed green arrow denotes the lepton. }
\label{fig:topology}
\end{figure}
The model therefore predicts two signals:
\begin{itemize}
\item A neutral-current-like event associated with the $N$ production plus another vertex with two jets associated with $N \to \nu_\mu u \bar{u}$ or $\nu_\mu d \bar{d}$. The second vertex lies within a cone with an apex at the $N$ production vertex and an opening angle of $\gamma^{-1}\sim 10^{-2}$. The angular separation of the two final jets from each other, as well as from the $N$ track ({\it i.e.,} from the line connecting the two vertices), will also be of order of $10^{-2}$. The sum of the momenta of the two jets projected onto the plane perpendicular to the direction of the $N$ momentum ({\it i.e.,} to the direction of the line connecting the $N$ production and the $N$ decay vertices) will be of order of a fraction of $m_N$, which should be balanced by the transverse momentum of the final neutrino, which escapes detection. In almost half of the events, the transverse components of the jet momenta make an angle smaller than $90^\circ$ with each other. For such events, the missing transverse momentum will be relatively large and close to $m_N/2$. As discussed in sect.~\ref{FE}, if the decay length of the intermediate $N$ is larger than $\sqrt{2}\sigma_{pos}(2E_{N}/m_N) \sim 20$~$\mu$m, the missing transverse momentum can be reconstructed with enough precision to verify the emission of an invisible particle along with the jets.
\item Again a neutral-current-like event associated with the $N$ production, followed by a vertex of two jets plus a muon track (associated with $N \to \mu u \bar{d}$). The second vertex lies within a cone with an apex at the $N$ production vertex and an opening angle of $\gamma^{-1}\sim 10^{-2}$. In this case there should not be any missing transverse momentum. By measuring the four-momenta of the decay products of $N$ and computing their invariant mass, the mass of $N$ can be reconstructed. By measuring their total momenta, the energy of $N$ can be derived. Then, if the statistics is sufficient, the average displacement of the vertices (average $l$) gives the lifetime of $N$. The relative errors in extracting the mass and the lifetime of $N$ from $N_{\mu u\bar{d}}$ events of this type will respectively be $40 \%/\sqrt{N_{\mu u\bar{d}}}$ and $2\log 2 /\sqrt{N_{\mu u\bar{d}}}$, as shown in Ref.~\cite{Bakhti:2020vfq}.
\end{itemize}
Let us now estimate the backgrounds for the signals, starting with the main source of background, which is the accidental alignment of two SM neutrino scattering vertices. If two separate neutral current events accidentally occur along the beam direction from each other within one data collecting period, they can mimic the first signal described above when the emulsion is processed. Similarly, if a $\nu_\mu$ charged current event lies ahead of a neutral current event, it can mimic the second signal described above.
The volume of a cone with a height of $l$ and an opening angle
of $\gamma^{-1}$ is $\pi l^3 \gamma^{-2}/3$. The probability of one event to lie accidentally within a cone with the apex at the other vertex is $p = (\pi l^3 \gamma^{-2}/3)/({\rm detector~ size}).$ The numbers of fake signals from pile-up of the SM vertices are therefore equal to $\mathcal{N}_{NC}^2\times p/2 $ and $\mathcal{N}_{NC}\times \mathcal{N}_\mu \times p $ where $\mathcal{N}_{NC}$ and $\mathcal{N}_{\mu}$ are respectively the numbers of SM NC and $\nu_\mu$ CC vertices passing the applied cuts in each data collecting period. Of course, for heavier $N$, $l$ is smaller and the relevant probabilities decrease very fast with a factor of $l^3 \propto (2~{\rm GeV}/m_N)^{18}$.
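
The cone-volume estimate is easily evaluated numerically; in the sketch below, the vertex counts per period are placeholders to be taken from the cited SM rate estimates.
\begin{verbatim}
import math

l = 0.01            # displacement cut [m] (~1 cm)
gamma_inv = 1e-2    # opening angle of the cone
V_det = 0.25 * 0.25 * 1.3                 # FASER-nu volume [m^3]
p = (math.pi * l**3 * gamma_inv**2 / 3) / V_det

# Illustrative vertex counts per data-collecting period (placeholders)
N_NC, N_mu = 300.0, 1200.0
print(f"fake NC+NC signals: {N_NC**2 * p / 2:.1e}")
print(f"fake NC+CC signals: {N_NC * N_mu * p:.1e}")
\end{verbatim}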
Fig.~\ref{fig:Acc-Back} shows the numbers of accidental backgrounds for $N\to \nu q \bar{q}$ (marked with NC) as well as for $N\to \mu q' \bar{q}$ (marked with CC) at FASER$\nu$ and at FASER$\nu$2 versus the displacement of the second vertex, $l$. The second vertex is supposed to be located within a cone with an apex at the first vertex and an opening angle of $10^{-2}$, oriented in the forward direction as depicted in Fig.~\ref{fig:topology}. We have assumed data accumulation periods of 10, 20, 50 and 75 fb$^{-1}$. To draw the figures, we have used the results of \cite{Ismail:2020yqc,Kling:2021gos} to estimate $\mathcal{N}_{NC}$ and $\mathcal{N}_{\mu}$. As seen from the figure, with $l$ below a few cm, the accidental background will be negligible. Notice that for the $m_N$ range of our interest, $l$ below a few cm corresponds to $G>{\rm few}\times 10^{-7}$~GeV$^{-2}$, which fortunately coincides with the coupling range giving rise to a fairly significant number of events at the forward experiments.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{img/N_b_1.pdf}
\caption{Number of accidental backgrounds for the $N\to \nu q \bar{q}$ and $N\to \mu q' \bar{q}$ signals at FASER$\nu$ and FASER$\nu$2 for data taking periods with collected data of 10, 20, 50 and 75 fb$^{-1}$. Background consists of a NC or CC vertex (respectively, marked as NC or CC) located within a narrow cone of opening angle of $10^{-2}$ and a height of $l$ in front of another neutral current vertex with a topology and orientation as depicted in Fig.~\ref{fig:topology}. The solid and dashed lines respectively correspond to FASER$\nu$ and FASER$\nu$2. }
\label{fig:Acc-Back}
\end{figure}
We now discuss other possible sources of background. Neutral hadrons such as neutrons produced in neutral current events may mimic the signal
associated with $N$ decaying into $\nu q \bar{q}$. The number of such events has not been calculated but is expected to be much smaller than that of muon-induced neutral hadrons \cite{Ismail:2020yqc}. Similarly to the discrimination between NC events and neutral-hadron-induced events \cite{Ismail:2020yqc}, our signal can be distinguished from this background by measuring the transverse momenta of the visible final tracks. The interaction length of neutral hadrons is of order of 10~cm. Imposing a cut of a few cm on the displacement of the second vertex, as well as using the criteria for selecting NC events (described in \cite{Ismail:2020yqc}), can eliminate this background. Another potential source of background is the displaced vertex caused by $\tau$ production by $\nu_\tau$ and its subsequent decay in the detector.
By measuring the total charge of the $N$ decay products, $N$ can be discriminated against the $\tau$ lepton produced by the charged current interaction of $\nu_\tau$. If the decay length of $N$ is of order of 100-500 $\mu$m, it may be mistaken for a neutral $D$-meson decaying into $\mu$. However, $D$-mesons are mostly produced by the charged current interaction of $\nu_\mu$ off the strange sea quarks inside the proton and neutron. That is, the signature of a $D$-meson at FASER$\nu$ is a charged current vertex followed by a displaced vertex. Since in our model the first vertex is of neutral current type, $D$-meson production will not constitute a background to our signals. In principle, $D$-mesons can also be produced via the neutral current interaction of neutrinos on the intrinsic $c$ quark in the nucleons, but the rate will be too low to cause a significant background to our signal. We therefore conclude that our signals with $l$ below a few cm are practically background-free. As a result, detection of even a single event will be an indication of new physics.
Fig.~\ref{fig:bond} shows the bound that can be set on $G_u=G_d$ versus $m_N$ by FASER$\nu$ and SND@LHC during Run III of the LHC, as well as by the upgrade of FASER$\nu$ during the high luminosity run of the LHC. That is, the curves correspond to a number of produced $N$ equal to one. As seen from the figure, in case of null results FASER$\nu$ can set a bound of $10^{-6}$~GeV$^{-2}$ on $G_u=G_d$, improving the theoretical limit by a factor of 10. Its upgrade can improve this by another order of magnitude. As shown in the figure, for $m_N<15$ GeV, the dependence of the bound from forward experiments on $m_N$ is mild. If $N$ is lighter than 3~GeV, it should have already been discovered by lower energy neutrino scattering experiments such as NOMAD \cite{NOMAD:2001xxt}. This is yet another reason why we focus on $m_N>3$~GeV.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\textwidth]{img/bond.pdf}
\caption{Forecasted upper bound on $G_u=G_d$ from SND@LHC, FASER$\nu$ and its upgrade FASER$\nu$2 in case of null signal. We have assumed integrated luminosity of 150~fb$^{-1}$ and 3000~fb$^{-1}$, respectively, for run III of the LHC (SND@LHC and FASER$\nu$) and for HL-LHC (FASER$\nu$2). }
\label{fig:bond}
\end{figure}
\section{Neutrino mass, $(g-2)_\mu$ and production of the heavy states at the LHC\label{phen}}
As mentioned before, the $Y_\mu$ coupling gives a contribution to $(g-2)_\mu$ which can be written as \cite{Lavoura:2003xp,Farzan:2009ji}
$$\Delta a_\mu=\delta \left(\frac{g-2}{2}\right)=\frac{Y_\mu^2}{16\pi^2}\frac{m_\mu^2}{m_{\Phi^+}^2}K(m_N^2/m_{\Phi^+}^2),
$$
where
$$K(t)=\frac{2t^2+5t-1}{12(t-1)^3}-\frac{t^2 \log t}{2(t-1)^4}.$$
For $m_N^2/m_{\Phi^+}^2\to 0$, $K\to 1/12$, so
$$\Delta a_\mu= \delta \left(\frac{g-2}{2}\right)=5 \times 10^{-10} \left(\frac{Y_\mu}{3}\right)^2 \left(\frac{300~{\rm GeV}}{m_{\Phi^+}}\right)^2 .$$
This contribution has the right sign to explain the anomaly but its magnitude is too small to explain the deviation of the experimental result from the SM prediction \cite{Muong-2:2021ojo}. If we want to explain the $(g-2)_\mu$ anomaly, we have to add more $\Phi$ or $N$ to the model. For example, four $\Phi$ with couplings of $O(3)$ to the muon and $N$ can account for the deviation.
In this case, the effective couplings can increase by a factor of $\simeq 4$, leading to an $O(16)$-fold increase in the statistics at forward experiments such as FASER$\nu$. In case the model has more than one $N$ with masses smaller than $O(10~{\rm GeV})$ and with couplings of $O(3)$ to the muon and $\Phi$, the signal of the model at forward experiments as well as at CMS and ATLAS will be completely different. We shall discuss the phenomenology of such a variant of the model with multiple $N$ in sect. \ref{multiple}. Let us now focus on the consequences of the minimal version with a single $N$ and $\Phi$.
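
For reference, the loop function $K$ and the benchmark estimate above can be reproduced as follows (masses in GeV):
\begin{verbatim}
import math

def K(t):
    # Loop function quoted in the text
    return ((2*t**2 + 5*t - 1) / (12*(t - 1)**3)
            - t**2 * math.log(t) / (2*(t - 1)**4))

def delta_a_mu(Y_mu, m_phi, m_N, m_mu=0.1057):
    return (Y_mu**2 / (16 * math.pi**2)
            * m_mu**2 / m_phi**2 * K(m_N**2 / m_phi**2))

print(f"{delta_a_mu(Y_mu=3.0, m_phi=300.0, m_N=10.0):.1e}")  # ~5e-10
\end{verbatim}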
In our model, the new scalar does not develop a VEV so neutrinos do not obtain a Dirac mass term at the tree level. The neutrino mass can receive a contribution at one loop level provided that (i) $N$ is a Majorana fermion and (ii) there is a splitting between real (CP-even) and imaginary (CP-odd) components of $\Phi^0$ which originates from a quartic coupling of form
$\lambda_{H \Phi} (H^\dagger\cdot \Phi)^2+H.c.$
\cite{Farzan:2009ji,Boehm:2006mi}. Thus, if we take $N$ to be of Majorana type, the smallness of the neutrino
mass constrains the splitting of the real and imaginary components of $\Phi^0$, or equivalently $\lambda_{H\Phi}$. The smallness of $\lambda_{H\Phi}$ can be explained by the global $U(1)$ symmetry introduced in Eq.~(\ref{glob}).
Such a symmetry not only explains the smallness of $\lambda_{H \Phi}$ and the splitting but also explains the smallness of the $u$ and $d$ masses in comparison to those of higher generation quarks as mentioned before.
At the interaction points of the LHC, the components of $\Phi$ can be pair produced via electroweak interactions with a cross section of $\sim 10$ fb \cite{Farzan:2010fw}. The $\Phi$ components can also be singly produced in association with a recoiling gluon via the $Y_u$ and $Y_d$ couplings. Again, the cross section is expected to be of order of 10~fb, as $\alpha_s Y_u^2, \alpha_s Y_d^2\sim e^2 \alpha/\sin^4\theta_W$. Since $Y_\mu \gg Y_u,Y_d,e/\sin\theta_W$, the dominant decay modes are $\Phi^0 \to N \bar{\nu}_\mu$ and $\Phi^+ \to N \mu^+$. Subsequently, $N$ will decay to $\mu^-+$two jets or $\nu_\mu+$two jets. If $N$ is of Majorana type, it can also decay into the charge conjugates of these final states.
Let us shortly discuss two possible signals:
\begin{itemize}
\item $\Phi^+$ produced in association with a gluon, showing up as a $\mu^+$ with a displaced vertex of $\mu^-+$two jets. If $N$ is of Majorana type, it can even decay into $\mu^++$two jets, providing a distinctive same-sign signature.
\item $\Phi^+$ produced in association with a gluon, showing up as a $\mu^+$ with a displaced vertex of two jets+missing energy.
\end{itemize}
Similarly, $\Phi^0$ can be produced in association with the gluon. Moreover, the pair productions $\Phi^+\Phi^-,\Phi^+\bar{\Phi}^0, \Phi^-{\Phi}^0,\Phi^0\bar{\Phi}^0$ and subsequent decays can take place. Exploring all the signals at CMS and ATLAS and assessing their background is beyond the scope of the present paper. However, this rich phenomenology with its distinctive signatures sounds very promising. If FASER$\nu$ finds signals for this model, it will be a great motivation for a dedicated search in the CMS and ATLAS data for the distinctive predictions of the present model.
Notice that FASER$\nu$ cannot distinguish the nature of $N$ ({\it i.e.,} Majorana vs Dirac). The reason is that there is no way to know whether the detected $\mu^-$ is initiated by $\nu_\mu$ via lepton number conserving processes or by $\bar{\nu}_\mu$ via lepton number violating processes involving Majorana $N$.
However, at CMS and ATLAS, same sign muon signals coming from $\Phi^+\to \mu^+ N \to \mu^+\mu^+ d \bar{u}$ and their charge conjugates will testify for the lepton number violation and a Majorana type $N$.
\section{Model with multiple sterile neutrinos \label{multiple}}
In this section, we shall discuss the phenomenological consequences of a variant of the model introduced in sect.~\ref{phen} with more than one right-handed neutrino, with couplings of the form
\be Y_{\alpha i}\bar{N}_i\Phi^TcL_\alpha+{\rm H.c.}\label{Ni}\ee
We shall take $Y_{\alpha i}\sim 3$. As we discussed in sect.~\ref{phen}, adding multiple $N_i$ is motivated by the $(g-2)_\mu$ anomaly. We shall take all $N_i$ heavier than $\sim 3$ GeV to avoid the bounds from the early universe, core-collapse supernovae and meson decays, as well as from lower energy neutrino scattering experiments such as NOMAD. Integrating out the heavy $\Phi$ states, the coupling in Eq.~(\ref{Ni}) yields
\be G_{ij}^\mu (\bar{N}_i\mu)(\bar{\mu}N_j) +G_{ij}^\nu (\bar{N}_i\nu_\mu)(\bar{\nu}_{\mu}N_j)\ee in which
\be
G_{ij}^\mu=
\frac{Y_{\mu i} Y_{\mu j}^*}{m_{\Phi^+}^2} \ \ \ {\rm and}\ \ \ G_{ij}^\nu=
\frac{Y_{\mu i} Y_{\mu j}^*}{m_{\Phi^0}^2} +{\rm H.c}. \ee
Taking $Y_{\mu i}\sim Y_{\mu j}\sim 3$ and $m_{\Phi^+}\sim m_{\Phi^0}\sim 300$~GeV, we find
$G_{ij}^\nu\sim G_{ij}^\mu\sim 10^{-4}~{\rm GeV}^{-2}$.
All $N_i$ can be produced via the $\nu_\mu$ interaction in the forward experiments as described in the previous section. They can also be produced via the decay of the $\Phi$ components
at the Interaction Point of the LHC. The lightest $N_i$ decays into $\nu$+two jets or $\mu$+two jets as described for the minimal model in the previous section.
The heavier $N_i$ will however dominantly decay into lighter $N_j$ because $Y_{\mu i}\gg Y_u,Y_d$:
$$ N_i \to N_j \mu\bar{\mu} \ \ \ {\rm and} \ \ \ N_i \to N_j \nu_\mu\bar{\nu}_\mu.$$
For $m_{N_j}^2\ll m_{N_i}^2$, the decay length will be given by Eq.~(\ref{decayLENGHT}), replacing $|G_u|^2+|G_d|^2+|G_L|^2+|G_R|^2$ with $\sum_j(|G_{ij}^\mu|^2+|G_{ij}^\nu|^2)/3$, where $j$ runs over all $N_j$ states lighter than $N_i$. The factor of 3 is due to the summation over color in the hadronic decay case. The factor $\sum_j(|G_{ij}^\mu|^2+|G_{ij}^\nu|^2)/3$ can be $\sim 30$ times larger than $|G_u|^2+|G_d|^2+|G_L|^2+|G_R|^2$, making the decay length 30 times smaller. The whole decay chain of $N_i$ will take place inside the detector. As long as $N_i$ is lighter than $8-9$~GeV, the displacement of the vertex can still be resolved at FASER$\nu$.
In the case of $N_i \to \mu^-\mu^+ N_j$, the di-muon can be resolved at FASER$\nu$, providing a background-free signal even without resolving the displacement of the vertex \cite{Bakhti:2020szu}. For $N_i \to \nu \bar{\nu} N_j$, the intermediate vertices in the decay chain of $N_i$ cannot, however, be identified and located. The decay vertex of the final $N$ is nevertheless guaranteed to be resolved, because the final vertex involves two detectable jets. If the lightest $N_i$ decays into a muon plus two jets, the total momenta of the final particles can be measured. If there is a missing transverse momentum in the plane perpendicular to the line connecting the first neutral-current-type vertex and the second $\mu+$two jets vertex, this would indicate that the final $N$ was produced via a decay chain in which $\nu\bar{\nu}$ pair(s)
were emitted, rather than directly by the interaction of a $\nu_\mu$ from the IP on the Tungsten in the detector.
\section{Model with a light scalar singlet \label{light-singlet}}
Within the model described in the previous sections, the $N$ particles are produced via the $G_u$ and $G_d$ couplings, which are suppressed by $m_{\Phi^0}^{-2}$. If we build a model in which the neutral mediator is lighter, the statistics can be higher. Let us consider a new singlet scalar $S$ which has a mixing angle $\theta$ with $\Phi^0$. Such a mixing may originate from a trilinear coupling of the form $AS^\dagger H^\dagger \cdot \Phi$. To preserve the global symmetry introduced in Eq.~(\ref{glob}), the relevant $U(1)$ charge of $S$ has to be equal to that of $\Phi$. The cross section of $N$ production shown in Eq.~(\ref{diff-cross}) will then be modified with the replacement
\be \label{rep} \left( \frac{m_{\Phi^0}^2}{t-m_{\Phi^0}^2}\right)^2\rightarrow \left( \frac{m_{\Phi^0}^2\cos^2\theta }{t-m_{\Phi^0}^2}+ \frac{m_{\Phi^0}^2\sin^2\theta}{t-m_{S}^2}\right)^2.\ee
If $S$ is heavier than a few GeV, the bounds from meson decays can be avoided. Notice that $m_{\Phi^0}^2 \sin^2 \theta$ should not be much larger than $m_{S}^2$; otherwise, the model will suffer from fine tuning. Taking $|t|<m_{S}^2< m_{\Phi^0}^2$ and a sizable mixing, the replacement in Eq.~(\ref{rep})
enhances the $N$ production cross section and therefore the statistics by a factor of
$$4\times 10^2\left(\frac{30~{\rm GeV}}{m_{S}}\right)^4\left(\frac{\sin^2\theta}{0.2}\right)^2.$$
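
This reference value can be checked directly from the replacement of Eq.~(\ref{rep}) in the limit $|t|\ll m_S^2$, using the benchmark values above:
\begin{verbatim}
m_phi, m_S, sin2 = 300.0, 30.0, 0.2   # GeV, GeV, sin^2(theta)
# Amplitude ratio relative to the pure-Phi^0 case, for |t| << m_S^2:
amp_ratio = (1 - sin2) + sin2 * m_phi**2 / m_S**2
print(f"cross-section enhancement ~ {amp_ratio**2:.0f}")  # ~4e2
\end{verbatim}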
In sect.~\ref{phen}, we have observed that in the minimal version of the model, the number of signal events cannot exceed $\sim 200$ for $m_N>3$~GeV. A number of signal events above $O(4000)$ would indicate a light singlet scalar mixed with $\Phi^0$. Such a scalar can be also produced at the Interaction Point of the LHC via mixing with $\Phi^0$. The produced $S$ decays into $N$ and $\nu_\mu$ with signatures that were already discussed in sect.~\ref{phen}.
Let us discuss the possibility of coherent enhancement of the cross section. As argued before $|t|\sim (10~{\rm GeV})^2(1-\cos \theta)$. In order for the amplitudes of the scatterings off various nucleons of Tungsten to sum up coherently, $|t|$ should be of order of $(0.1~{\rm GeV})^2$ which corresponds to $(1-\cos\theta)\stackrel{<}{\sim}10^{-4}$.
Due to coherence, an enhancement of $A_W=183$ (corresponding to the mass number of Tungsten) would be expected. As long as $m_S\gg 0.1$~GeV, no enhancement of the amplitude in this region is expected, so the contribution of coherent scattering with $|t|<(0.1~{\rm GeV})^2$ to the total cross section of scattering off a Tungsten nucleus will be of order $A_W \Delta \cos\theta\sim 10^{-2}$ and therefore negligible.
\section{Summary and discussion\label{summary}}
We have proposed a model in which the left-handed doublet, $L_\mu=(\nu_\mu,\ \mu_L)$ couples to new scalar doublet(s), $\Phi=(\Phi^+, \ \Phi^0)$ and right-handed neutrino(s), $N$. If the components of $\Phi$ are heavier than $\sim 300$~GeV and their coupling to quarks is of order of $O(0.3)$ or smaller, they can escape the bounds from direct production at the colliders as well as the bounds from precision data, such as oblique parameters. Satisfying these bounds, the effective coupling between $L_\mu$ and quarks after integrating out the heavy $\Phi$ components can be as large as $10^{-5}$ GeV$^{-2}$.
In the minimal version of the model with only a single $N$, the signature of the model at forward experiments will be a multiple-jet vertex due to the $N$ production (similar to that appearing due to the neutral current interaction of neutrinos in the SM), followed by a displaced vertex within a cone with an apex at the first vertex, aligned in the direction of the beam, with a small opening angle of size $10^{-2}$. The topology of the event is shown in Fig.~\ref{fig:topology}.
The decay of $N$ can produce either two jets plus a charged muon $(u \bar{d}\mu)$ or two jets plus muon neutrino ($u \bar{u}\nu_\mu$ or $d\bar{d} \nu_\mu$). In the former case, all the final particles are detectable so the mass of $N$ can be reconstructed at forward experiments by measuring the four momenta of the final particles. In the case of $N\to u \bar{u}\nu_\mu, d \bar{d}\nu_\mu$, we have formulated the condition under which the emission of $\nu_\mu$ can be observationally distinguished by measurement of the transverse momenta of the final jets.
Notice that $\Phi$ and $N$ can also couple to the first and third generations of left-handed leptons, $L_e$ and $L_\tau$. We have however focused on the second generation as it is less constrained than the first generation. Moreover, with a coupling to the second generation, the possibility of an observable signal at forward experiments is higher thanks to the larger fluxes of $\nu_\mu$ and $\bar{\nu}_\mu$ compared to those of $\nu_e$, $\bar{\nu}_e$, $\nu_\tau$ and $\bar{\nu}_\tau$.
We have shown that the minimal model with only a single $N$ and $\Phi$ can at most account for 25 \% of the $(g-2)_\mu$ anomaly, but by adding more generations of $\Phi$ and/or $N$, the anomaly can be completely explained. In this case, the heavier $N$ will go through chain decays to the lightest $N$, emitting either $\mu^-\mu^+$ pairs or $\nu_\mu \bar{\nu}_\mu$ pairs in the process. The final $N$ will decay into jets plus a lepton, similarly to the case of a single $N$. The $\mu^-\mu^+$ pairs emitted through the chain decay can of course be observed at forward experiments. We have shown that even in the case that the decays of the intermediate $N$ particles produce only invisible $\nu_\mu \bar{\nu}_\mu$ pairs, their production can be confirmed by measuring the transverse momenta of the final particles.
We have argued that both in the minimal version of the model with a single $N$
and in its variants with multiple $N$, the predicted signals at FASER$\nu$ will be background-free. As a result, even a single event observed at forward experiments can count as a discovery. We have shown that in the case of null results at FASER$\nu$ or SND@LHC, the bound on the relevant effective couplings can be lowered down to $10^{-6}$ GeV$^{-2}$.
If $N$ is heavier than $O(3)$ GeV, it could not be produced at lower energy scattering experiments such as NOMAD \cite{NOMAD:2001xxt}, CHORUS \cite{CHORUS:2007wlo}, Miner$\nu$a \cite{MINERvA:2021csy}, CHARM II \cite{CHARMII:2008qag} and MicroBooNE \cite{MicroBooNE:2021fdt}.
For the same reason, the future state-of-the-art DUNE experiment cannot test the model either. The energy of the neutrino beam at the NuTeV experiment \cite{Dore:2018ldz} was a few hundred GeV, so $N$ particles heavier than 3~GeV could be produced at this experiment; however, the spatial resolution of NuTeV \cite{NuTeV:1999uck} was not fine enough to disentangle the second displaced vertex, so the signals would have been mistaken for SM NC and CC vertices. If FASER$\nu$ discovers signals of $N$, the NuTeV data should be re-analyzed to correct the parton distribution functions derived from it. Thus, the bound from FASER$\nu$ and SND@LHC will be the strongest, only to be surpassed by the bounds to be provided by their own upgrades.
The upgrade of FASER$\nu$ for the high luminosity phase of the LHC (FASER$\nu$2) can improve the bound down to $10^{-7}$ GeV$^{-2}$.
If a discovery is made by forward experiments, it will be a strong motivation for customized searches for the $\Phi^+$ and $\Phi^0$ production at the CMS and ATLAS. These particles can be pair produced via electroweak interactions or can be singly produced in association of a gluon via their Yukawa couplings to the quarks at Interaction Point. They will then go through decays as $\Phi^+ \to N\mu^+ \to {\rm two ~ jets}+\mu^+\mu^-$ or $\to{\rm two ~ jets}+\mu^+\nu_\mu$ and
$\Phi^+ \to N\bar{\nu}_\mu \to {\rm two ~ jets}+\mu^-\nu_\mu$ or $\to {\rm two ~ jets}+\bar{\nu}_\mu\nu_\mu$. Moreover, if $N$ is a Majorana particle, it can decay into $\mu^+$ instead of $\mu^-$, producing a same sign muon signal. Thus, the Majorana nature of $N$ can be established by CMS and ATLAS via detecting same sign muon signals.
If $N$ is of Majorana type, its couplings to $\nu_\mu$ can contribute to the $\mu\mu$ component of neutrino mass matrix. Then, the smallness of neutrino mass imposes a bound on the mass splittings between the CP-even and CP-odd components of $\Phi^0$. We have devised a global U(1) symmetry explaining this smallness. The same symmetry can also explain the smallness of the masses of the first generation quarks as a bonus.
The $N$ particles can also be produced by high energy atmospheric neutrinos scattering off nuclei inside the neutrino telescopes. The production of $N$ will lead to a cascade similar to those produced by the SM neutral current interactions. Since the cross section of the new interactions is at most 10 \% of the SM neutral current cross section, the new physics can account for less than $10\%$ of the cascade events registered by ICECUBE. The produced $N$ will decay after traveling $\sim 2~{\rm m}(3~{\rm GeV}/m_N)^6 (E_N/100~{\rm TeV})$. For heavy $N$, the decay length will be too short to be resolved by ICECUBE. Moreover, the particles from the $N$ decay will be emitted almost in parallel, making a small angle of $m_N/(2E_N)$ with each other. Since their total electric charge is zero, their propagation will resemble that of a neutral particle in ice, so the Cherenkov emission from the $N$ decay products may be too faint to be detected at neutrino telescopes.
\acknowledgments
This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN. YF has received financial support from Saramadan under contract No.~ISEF/M/400279 and No.~ISEF/M/99169.
SA is supported by a grant from Basic Sciences Research Fund (No. BSRF-phys-399-01).
\bibliographystyle{JHEP}
\section{Introduction \label{sec:intro}}
The availability and affordability of whole-genome sequencing technology has made the use of genomic data increasingly common in epidemiological modeling studies \citep{Loman, polonsky2019outbreak}. Sequencing data from bacteriological pathogens provide insight into the spread of infectious diseases through populations, and numerous methods have been developed to infer individual-to-individual transmission. The simplest approach uses the difference in single nucleotide polymorphisms (SNPs) between two samples and assigns any pairs separated by fewer than a threshold number of SNPs to a transmission cluster \citep{RN11}. Other approaches make use of phylogenetic trees, a reconstruction of the estimated evolutionary history of a pathogen based on genetic sequencing data \citep{Delsuc}. For example, the patristic distance, the summed length of the tree branches separating two isolates, is a similar measure of genetic distance, but this approach leverages prior biological assumptions about the substitution rate to generate a more accurate measure than SNP distance \citep{Poon}. More complex approaches make use of transmission trees, based on phylogenetic analyses, to estimate the asymmetric probability of direct transmission between pairs of cases \citep{Didelot2017, Klinkenberg, Campbell2019}.
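
As a toy illustration of the SNP-threshold rule, pairs separated by fewer than a threshold number of SNPs can be linked and the connected components of the resulting graph taken as putative transmission clusters. The sketch below is not the TreeCluster algorithm itself (which operates on phylogenetic trees rather than raw SNP distances); all distances and the threshold are invented for illustration.
\begin{verbatim}
# Pairwise SNP distances between isolates (invented values)
snp_dist = {("A", "B"): 3, ("A", "C"): 25, ("B", "C"): 22,
            ("C", "D"): 4, ("D", "E"): 40}
threshold = 12

parent = {i: i for pair in snp_dist for i in pair}

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

for (i, j), d in snp_dist.items():
    if d < threshold:
        parent[find(i)] = find(j)       # merge the two clusters

clusters = {}
for i in parent:
    clusters.setdefault(find(i), set()).add(i)
print(list(clusters.values()))  # three clusters: {A,B}, {C,D}, {E}
\end{verbatim}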
Understanding how potentially modifiable factors contribute to the increased likelihood of disease transmission between two individuals is important for infection control. However, while a robust set of methods exists to estimate novel measures of genetic relatedness between case pairs, advanced statistical methods to analyze these dyadic data and their associations with other factors may require modifications/extensions due to the unique features of the outcomes (e.g., spatial correlation, distributional characteristics).
Fixed effect regression-based approaches that do not account for correlation between the dyadic outcomes may have serious inferential limitations. For example, we may expect correlation between dyadic outcomes $Y_{ij}$ and $Y_{kj}$, describing the genetic relatedness between individuals $i,j$ and $k,j$ respectively, because individual $j$ is present in both pairs; particularly if individual $j$ is a major driver of transmission in the population. Ignoring this potentially positive correlation during analysis can result in overly optimistic measures of uncertainty for regression parameter estimates, leading to incorrect conclusions regarding statistical significance \citep{hoffbook}.
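
A minimal simulation illustrates this point, assuming for illustration a simple additive individual random-effect structure: dyadic outcomes for pairs sharing an individual are markedly correlated, while outcomes for disjoint pairs are not.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 2000
shared, disjoint = [], []
for _ in range(reps):
    theta = rng.normal(0.0, 1.0, n)              # individual effects
    eps = rng.normal(0.0, 0.5, (n, n))           # pair-level noise
    Y = theta[:, None] + theta[None, :] + eps    # dyadic outcome Y_ij
    shared.append((Y[0, 2], Y[1, 2]))            # (i,j), (k,j) share j
    disjoint.append((Y[0, 2], Y[3, 4]))          # no common individual
print("corr, sharing j :", np.corrcoef(np.array(shared).T)[0, 1])
print("corr, disjoint  :", np.corrcoef(np.array(disjoint).T)[0, 1])
\end{verbatim}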
Several different analysis frameworks have been developed for modeling network dependence in dyadic data \citep{kenny2020dyadic}. Random effect regression models have been widely used in this setting due to their ability to flexibly characterize the correlation, ability to handle different outcome types, and relative ease in making statistical inference \citep{warner1979new, wong1982round, gill2001statistical, hoff2002latent, hoffbook, hoff2005bilinear, hoff2021additive}. However, current models often ignore other important sources of correlation that may be unique to the infectious disease setting (e.g., spatial correlation due to unmeasured transmission dynamics) and/or are limited in their ability to describe non-standard distributions that may be seen when analyzing novel genetic relatedness outcomes (e.g., zero-inflation), with a few exceptions. \cite{beck2006space} and \cite{neumayer2010spatial} discuss the use of spatial lag regression models for analyzing spatially-referenced dyadic data while \cite{austin2013covariate} and \cite{ciminelli2019social} integrate spatially correlated random effects within a network analysis by extending the latent position distance model of \cite{hoff2002latent}. With respect to non-standard distributions, \cite{simpson2015two} introduce a non-spatial (in terms of the random effect parameters) two-part mixed model for analyzing the probability of connection in whole-brain network dyadic data.
In this work, we develop hierarchical Bayesian spatial methods for analyzing dyadic genetic relatedness data in the form of patristic distances and transmission probabilities by extending the existing random effect frameworks to better reflect features of the genetic relatedness outcomes. Specifically, instead of inducing spatial correlation in the data through the latent position distance components of the model as in \cite{austin2013covariate} and \cite{ciminelli2019social}, we incorporate spatial structure directly into the individual-level random effect parameters. We fully investigate the implications of this choice on the induced correlation structure of the data. Additionally, we accommodate the distinctive distributional features of the considered genetic relatedness outcomes during modeling. Finally, we develop efficient Markov chain Monte Carlo (MCMC) sampling algorithms for several genetic relatedness outcomes, along with an R package, to facilitate posterior sampling in future work.
Using a simulation study, we show the importance of these methods for conducting accurate statistical inference for key regression associations and the limitations of existing approaches. We also apply our methods to a unique dataset of \emph{Mycobacterium tuberculosis} isolates and associated patient data from the Republic of Moldova and show the benefits of the new methodology with respect to model fit and uncovering new insights into potentially important transmission factors. Analyzing the posterior distributions of individual-specific random effect parameters is shown to be important for understanding how specific individuals in the population contribute to the transmission dynamics and the role of spatial versus individual variability in this process. Given the increasing availability and interest in these types of data, and the limitations of existing approaches, we anticipate that these methods will represent important tools for researchers looking to correctly identify factors associated with genetic relatedness between individuals in future studies.
In Section 2 we describe the data from the Republic of Moldova while the new statistical methods are presented in Section 3. Sections 4 and 5 represent the simulation study and real data application, respectively. We close in Section 6 with conclusions and further discussion.
\section{Motivating data and application}
In this study, we analyze data previously described by \cite{yang2022phylogeography}. In that work, the authors performed whole genome sequencing on \emph{Mycobacterium tuberculosis} isolates from 2,236 of the 2,770 non-incarcerated adults diagnosed with culture-positive tuberculosis (TB) in the Republic of Moldova between January 1, 2018 and December 31, 2019. They constructed a maximum likelihood phylogenetic tree with RAxML \citep{RAxML} and identified broad, putative transmission clusters using TreeCluster \citep{TreeCluster} with a threshold of 0.001 substitutions per site.
\begin{table}[ht!]
\centering
\caption{Summary of the dyadic genetic relatedness data in the Republic of Moldova study population by transmission probability (TP) status. Means, with interquartile ranges given in parentheses, are shown for continuous variables and percentages are shown for categorical variables.}
\begin{tabular}{lrr}
\hline
Effect & TP $= 0$ ($n=6,958$) & TP $>0$ ($n=2,744$) \\
\hline
Distance Between Villages (km) & 87.49 (67.92) & 94.76 (78.13) \\
Same Village (\% Yes) & 0.56 & 1.28 \\
Date of Diagnosis Difference (Day) & 225.09 (233.75) & 202.41 (211.00) \\
Age Difference (Year) & 13.57 (14.00) & 13.33 (15.00) \\
Sex (\%): & & \\
\ \ \ Both Male & 56.54 & 58.89 \\
\ \ \ Both Female & 6.11 & 4.63 \\
\ \ \ Mixed Pair & 37.35 & 36.48 \\
Residence Location (\%): & & \\
\ \ \ Both Urban & 14.62 & 14.18 \\
\ \ \ Both Rural & 37.55 & 38.16 \\
\ \ \ Mixed Pair & 47.83 & 47.67 \\
Working Status (\%): & & \\
\ \ \ Both Employed & 0.99 & 0.77 \\
\ \ \ Both Unemployed & 80.81 & 80.50 \\
\ \ \ Mixed Pair & 18.19 & 18.73 \\
Education (\%): & & \\
\ \ \ Both $<$ Secondary & 10.30 & 10.02 \\
\ \ \ Both $\geq$ Secondary & 45.33 & 46.21 \\
\ \ \ Mixed Pair & 44.37 & 43.77 \\
\hline
\end{tabular}
\end{table}
From these analyses, \cite{yang2022phylogeography} produced estimates of genetic relatedness among sequences using two metrics. First, they computed patristic distance between any pair of isolates within a cluster. Patristic distance is a measure of genetic relatedness between two sequences in a phylogenetic tree expressed in substitutions per site. Second, the authors estimated the probability that one individual infected another individual. These probabilities are obtained using TransPhylo \citep{Didelot2017}, a Bayesian approach that augments timed phylogenetic trees with who-infected-whom information while accounting for the possibility of unsampled individuals. TransPhylo uses an MCMC framework to obtain a posterior collection of transmission trees, accounting for the time from infection to infecting others (generation time) and the time from infection to sampling. For most pairs $(i,j)$, there is no posterior transmission tree that includes a transmission event from $i$ to $j$ or from $j$ to $i$. When the posterior does contain samples in which $i$ or $j$ infected the other, the resulting posterior probability is not necessarily symmetric. This can occur, for example, because $i$ was sampled many months prior to $j$, making it more likely that $i$ infected $j$ than vice versa. For all possible pairs within a transmission cluster, symmetric estimates of patristic distance (i.e., $Y_{ij} = Y_{ji}$) and asymmetric estimates of transmission probability (i.e., $Y_{ij}$ and $Y_{ji}$ need not be equal) are available. To our knowledge, this is the first study to analyze transmission probabilities in the infectious disease setting.
In addition to estimates of genetic relatedness, demographic data are available for each TB case. These data include individual characteristics such as age (in years), sex, education status (less than secondary, secondary or higher), working status (employed, unemployed), and residence type (urban, not urban). With these data, we calculate characteristics of the pair of individuals, such as an indicator for whether the individuals in the pair reside in the same village, the Euclidean distance between their villages of residence (in kilometers), the difference between their dates of diagnosis (in days), and the absolute difference between their ages (in years).
\begin{figure}[ht!]
\begin{center}
\includegraphics[trim={1.95cm 0.0cm 1.00cm 0.0cm}, clip, scale = 0.38]{Figures/Patristic_Hist.pdf}
\includegraphics[trim={1.95cm 0.0cm 1.00cm 0.0cm}, clip, scale = 0.38]{Figures/Trans_Hist.pdf}
\includegraphics[trim={1.95cm 0.0cm 0.0cm 0.0cm}, clip, scale = 0.38]{Figures/loc_map.pdf}\\
\caption{Patristic distances (substitutions per site; panel 1), transmission probabilities that are $> 0$ (panel 2), and village locations (panel 3) from the largest putative cluster in the Republic of Moldova data analysis. In panel 3, gray, blue, orange, and red points represent villages with one, two, three, and five cases, respectively.}
\end{center}
\end{figure}
In this work, we analyze data from individuals in the largest putative transmission cluster. After removing six individuals with missing covariates, this cluster includes 99 individuals resulting in 4,851 symmetric patristic distance pairs and 9,702 asymmetric transmission probability pairs. Characteristics of the dyadic data are shown in Table 1 while in Figure 1 we display distributions of the different genetic relatedness outcomes along with the village locations for the included individuals.
\section{Methods}
We develop models to analyze spatially-referenced dyadic genetic relatedness outcomes (i.e., patristic distances, transmission probabilities) while accounting for multiple sources of correlation between responses and important features of the outcomes. The methods are designed to yield accurate statistical inference for the primary regression associations of interest while also identifying individuals/locations that play a more important role in the transmission dynamics within the population.
\subsection{Patristic distances}
We model the patristic distance (log scale) between individuals $i$ and $j$ as a function of pair- and individual-level covariates as well as spatially-referenced, individual-specific random effect parameters such that \begin{equation}\ln\left(P_{ij}\right) = \textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta} + \left(\textbf{d}_{i} + \textbf{d}_{j}\right)^{\text{T}}\boldsymbol{\gamma} + \theta_i + \theta_j + \epsilon_{ij},\end{equation} $i=1, \hdots, n - 1,\ j=i + 1,\hdots, n,$ where $\epsilon_{ij}|\sigma^2_{\epsilon} \stackrel{\text{iid}}{\sim} \text{N}\left(0, \sigma^2_{\epsilon}\right)$ is the error term; $n$ is the total number of individuals in the study; $P_{ij} > 0$ is the symmetric patristic distance between a unique pair of individuals $i$ and $j$ (i.e., $P_{ij} = P_{ji}$ for all $i \neq j$); $\textbf{x}_{ij}$ is a vector of covariates describing differences between individuals $i$ and $j$ (e.g., spatial distance), with $\boldsymbol{\beta}$ the corresponding vector of regression parameters; $\textbf{d}_{i}$ is a vector of covariates specific to individual $i$ where the impact of the covariates on the response, described by the $\boldsymbol{\gamma}$ vector, is assumed to be the same across all individuals; and $\theta_{i}$ is a spatially-referenced, individual-specific random effect parameter which describes individual $i$'s role in transmission events within the population (both as infector and infectee). Small $\theta_i$ values indicate that across all outcome pairs involving individual $i$, the patristic distance is smaller on average, suggestive of an increased likelihood of being in transmission pairs.
We allow for the possibility of spatial correlation between responses by modeling the random effect parameters using a spatially-referenced Gaussian process such that \begin{align} \begin{split}
&\theta_i = \eta\left\{h\left(\textbf{s}_i\right)\right\} + \zeta_i,\ i=1,\hdots,n, \\
&\boldsymbol{\eta}^{\text{T}} = \left\{\eta\left(\textbf{s}_1^*\right), \hdots, \eta\left(\textbf{s}_m^*\right)\right\} | \phi, \tau^2 \sim \text{MVN}\left\{\boldsymbol{0}_m, \tau^2 \Sigma\left(\phi\right)\right\}, \text{ and}\\
&\Sigma\left(\phi\right)_{ij} = \text{Corr}\left\{\eta\left(\textbf{s}_i^*\right), \eta\left(\textbf{s}_j^*\right)\right\} = \exp\left\{-\phi \left\|\textbf{s}_i^* - \textbf{s}_j^* \right\|\right\}
\end{split} \end{align} where $\theta_i$ is decomposed into two pieces: one that is purely spatial, $\eta\left\{h\left(\textbf{s}_i\right)\right\}$, and one that is non-spatial, $\zeta_i$. This allows for each parameter to be individual-specific and not driven solely by spatial location. In the case where individuals are co-located, the function $h\left(.\right)$ maps the spatial location of an individual to an entry within a smaller set of $m < n$ unique locations such that $h\left(\textbf{s}_i\right) \in \left\{\textbf{s}_1^*, \hdots, \textbf{s}_m^*\right\}$, where it is possible that $h\left(\textbf{s}_i\right) = h\left(\textbf{s}_j\right) = \textbf{s}^*_k$ for some $i \neq j$. When all individuals have a unique location, $h\left(\textbf{s}_i\right) = \textbf{s}^*_i$ for all $i$ and therefore, $m = n$.
The vector of purely spatial random effect parameters, $\boldsymbol{\eta}$, is modeled using a Gaussian process centered at zero (i.e., $\textbf{0}_m$ is an $m$ length vector of zeros) with correlation structure defined by the Euclidean distances between spatial locations (i.e., $\left\|\textbf{s}^*_i - \textbf{s}^*_j\right\|$), where $\phi > 0$ controls the level of spatial correlation between the parameters and $\tau^2$ the total variability of the spatial process. Small values of $\phi$ indicate strong spatial correlation even at larger distances. Finally, the $\zeta_i|\sigma^2_{\zeta} \stackrel{\text{iid}}{\sim}\text{N}\left(0, \sigma^2_{\zeta}\right)$ are individual-specific parameters that account for the possibility that two people could be in very similar spatial locations but have vastly different patterns of behavior that impact their likelihood of being transmitted to and/or transmitting to others.
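To make the generative structure of (1, 2) concrete, the following sketch simulates the random effects and log patristic distances (our illustration, not the \texttt{GenePair} implementation; all dimensions and parameter values are hypothetical, we use intercept-only fixed effects, and we assume unique locations so that $m = n$):
\begin{verbatim}
# Sketch: simulate from (1)-(2) with hypothetical parameter values.
import numpy as np

rng = np.random.default_rng(1)
n, tau2, sigma2_zeta, sigma2_eps, phi = 20, 0.5, 0.2, 0.1, 1.0
s = rng.uniform(0, 10, size=(n, 2))      # unique spatial locations (m = n)

# Exponential correlation: Sigma_ij = exp(-phi * ||s_i - s_j||).
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
Sigma = np.exp(-phi * d)

eta = rng.multivariate_normal(np.zeros(n), tau2 * Sigma)  # spatial piece
zeta = rng.normal(0.0, np.sqrt(sigma2_zeta), size=n)      # non-spatial piece
theta = eta + zeta

beta0 = -2.0                             # hypothetical intercept
log_P = np.array([beta0 + theta[i] + theta[j]
                  + rng.normal(0.0, np.sqrt(sigma2_eps))
                  for i in range(n - 1) for j in range(i + 1, n)])
\end{verbatim}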
\subsubsection{Prior distributions}
We assign weakly informative prior distributions to the remaining model parameters when appropriate. The regression parameters are specified as $\beta_{j}, \gamma_{k} \stackrel{\text{iid}}{\sim}\text{N}\left(0,100^2\right)$ for $j=1,\hdots,p_x$ and $k=1,\hdots,p_d$, where $p_x$ and $p_d$ are the lengths of the $\textbf{x}_{ij}$ and $\textbf{d}_i$ vectors, respectively; the variance parameters as $\sigma^2_{\epsilon}, \tau^2, \sigma^2_{\zeta} \stackrel{\text{iid}}{\sim} \text{Inverse Gamma}\left(0.01, 0.01\right)$; and the spatial correlation parameter as $\phi \sim \text{Gamma}\left(1.00, 1.00\right)$. We scale the spatial distances used in the analysis to allow the prior distribution for $\phi$ to be minimally informative (i.e., ranging from relatively weak to strong spatial correlation at both short and long spatial distances).
\subsection{Transmission probabilities}
Next, we introduce a method for analyzing transmission probabilities which, unlike patristic distances, contain potentially rich information regarding the direction of transmission. As a result, the outcomes are not necessarily symmetric as they were in Section 3.1 (i.e., $T_{ij}$ and $T_{ji}$ need not be equal). Here, we introduce a framework for analyzing transmission probabilities that pursues inferential goals similar to those of the model in (1, 2) while also accounting for the asymmetry of the outcome as well as other important data features (e.g., zero-inflation).
Specifically, we model the probability that individual $j$ infected individual $i$ (i.e., $T_{ij}$) as a function of individual- and pair-specific covariates and individual/location-specific random effect parameters using a mixed-type distribution. This specification includes a binary component to account for the large proportion of exact zero transmission probabilities (see Figure 1), and a continuous piece to model the non-zero probabilities. We define the probability density function (pdf) for a transmission probability, $f_{t_{ij}}\left(t\right)$, as \begin{equation}T_{ij} \stackrel{\text{ind}}{\sim} f_{t_{ij}}\left(t\right) = \left(1 - \pi_{ij}\right)^{1\left(t = 0\right)} \left[\frac{\pi_{ij}}{t\left(1-t\right)}f_{w_{ij}}\left\{\ln\left(\frac{t}{1-t}\right)\right\}\right]^{1\left(t > 0\right)},\ t \in \left[0,1\right)\end{equation} where all pairs are now included in the analysis (i.e., $i=1,\hdots,n$; $j=1,\hdots,n$; $i \neq j$); $1\left(.\right)$ is an indicator function taking the value of one when the input statement is true and the value of zero otherwise; the transmission probabilities can be exactly equal to zero; and $\left(1 - \pi_{ij}\right)$ describes the probability that this event occurs.
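A direct transcription of the density in (3) may help fix ideas; the sketch below (with hypothetical values for $\pi_{ij}$ and the logit-scale mean) evaluates $f_{t_{ij}}$ at zero and non-zero arguments:
\begin{verbatim}
# Sketch: evaluate the zero-inflated logit-normal density in (3).
import numpy as np
from scipy.stats import norm

def f_t(t, pi_ij, mu_w, sigma2_eps):
    """Mixed-type density: point mass at 0, logit-normal on (0, 1)."""
    if t == 0.0:
        return 1.0 - pi_ij                # P(T_ij = 0)
    w = np.log(t / (1.0 - t))             # logit transform of t
    jacobian = 1.0 / (t * (1.0 - t))      # change-of-variables factor
    return pi_ij * jacobian * norm.pdf(w, loc=mu_w,
                                       scale=np.sqrt(sigma2_eps))

print(f_t(0.0, pi_ij=0.3, mu_w=-2.0, sigma2_eps=1.0))  # 0.7
print(f_t(0.1, pi_ij=0.3, mu_w=-2.0, sigma2_eps=1.0))
\end{verbatim}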
We use a logistic regression framework to connect these underlying probabilities with covariates and random effect parameters such that \begin{equation}\ln\left(\frac{\pi_{ij}}{1 - \pi_{ij}}\right) = \textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta}_z + \textbf{d}_{j}^{\text{T}}\boldsymbol{\gamma}_z^{\left(g\right)} + \textbf{d}_{i}^{\text{T}}\boldsymbol{\gamma}_z^{\left(r\right)} + \theta_{zj}^{\left(g\right)} + \theta_{zi}^{\left(r\right)} + \nu_{zi}\nu_{zj}\end{equation} where $\textbf{x}_{ij}$ and $\textbf{d}_i$ were previously described in Section 3.1. Because we now observe two responses for each pair (i.e., $T_{ij}$ and $T_{ji}$), we are able to separate the included parameters into the different roles of the individuals within a pair; specifically, we can estimate the infector or ``giver'' ($g$) and the infectee or ``receiver'' ($r$) terms. For example, $T_{ij}$ describes the probability that individual $j$ transmits to $i$, so $\textbf{d}_j^{\text{T}} \boldsymbol{\gamma}_z^{(g)}$ from (4) represents the impact of the giver's (individual $j$'s) covariates on the probability of transmission while $\textbf{d}_i^{\text{T}} \boldsymbol{\gamma}_z^{(r)}$ describes the impact of the receiver's (individual $i$'s) characteristics. Similarly, each individual now has two different additive random effect parameters, $\boldsymbol{\theta}_{zi}^{\text{T}} = \left(\theta_{zi}^{(g)}, \theta_{zi}^{(r)}\right)$, one for each of the giver and receiver roles. The $\nu_{zi}\nu_{zj}$ interaction term accounts for correlation between observations from the same pair of individuals in different roles (i.e., $\nu_{zi}\nu_{zj} = \nu_{zj}\nu_{zi}$), where $\nu_{zi}|\sigma^2_{\nu_z} \stackrel{\text{iid}}{\sim}\text{N}\left(0, \sigma^2_{\nu_z}\right)$, $i=1,\hdots,n$ \citep{hoff2005bilinear}.
In order to understand if the magnitude of a non-zero transmission probability is also impacted by covariates and random effect parameters, the pdf in (3) includes a separate regression framework for the non-zero probabilities. Specifically, $f_{w_{ij}}\left(w\right)$ is a pdf introduced to model the non-zero probabilities on the logit scale such that \begin{equation} f_{w_{ij}}\left(w\right) \equiv \text{N}\left(\textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta}_w + \textbf{d}_{j}^{\text{T}}\boldsymbol{\gamma}_w^{\left(g\right)} + \textbf{d}_{i}^{\text{T}}\boldsymbol{\gamma}_w^{\left(r\right)} + \theta_{wj}^{\left(g\right)} + \theta_{wi}^{\left(r\right)} + \nu_{wi}\nu_{wj},\ \sigma^2_{\epsilon}\right)\end{equation} where $\sigma^2_{\epsilon}$ represents the variance of the distribution and the remaining terms in (5) have been previously described. The ``$w$'' subscripts in (5) serve to differentiate these parameters from those used by the binary model in (4) (i.e., ``$z$'' subscripts).
In total, each individual in the analysis has four additive random effect parameters, $\boldsymbol{\theta}_i^{\text{T}} = \left(\boldsymbol{\theta}_{zi}^{\text{T}}, \boldsymbol{\theta}_{wi}^{\text{T}}\right)$, representing the residual (i.e., after adjustment for known risk factors and repeated pair correlation) likelihood of transmitting ($g$) or being transmitted to ($r$) in the binary ($z$) and positive ($w$) transmission probability regressions. Large values of the $z$ subscript random effect parameters suggest an increased chance of a non-zero transmission probability, while large values of the $w$ subscript parameters indicate an increasingly positive transmission probability.
As in (2), the introduced random effect parameters serve to adjust for pair- and proximity-based correlation in the outcomes. Unlike in \cite{simpson2015two}, we anticipate that the collection of parameters corresponding to the same individual may themselves be correlated and introduce a multivariate model as a result. The model for one set of the parameters is similar to (2) and is given as (for $i=1,\hdots,n$) \begin{align*} \theta_{zi}^{(g)} = \eta_z^{(g)}\left\{h\left(\textbf{s}_i\right)\right\} + \zeta_{zi}^{(g)} \end{align*} where the $\zeta_{zi}^{(g)}|\sigma^2_{\zeta^{(g)}_{z}} \stackrel{\text{iid}}{\sim}\text{N}\left(0, \sigma^2_{\zeta^{(g)}_{z}}\right)$ once again account for individual variability, and the remaining parameters, $\theta^{(r)}_{zi}$, $\theta^{(g)}_{wi}$, $\theta^{(r)}_{wi}$, are defined similarly. To account for cross-covariance and spatial correlation among the parameters, we specify a multivariate Gaussian process for the mean parameters such that \begin{align}\begin{split} &\boldsymbol{\eta}|\Omega, \phi \sim \text{MVN}\left\{\boldsymbol{0}_{4m}, \Sigma\left(\phi\right) \otimes \Omega \right\} \text{ where}\\ &\boldsymbol{\eta}^{\text{T}} = \left\{\boldsymbol{\eta}\left(\textbf{s}^*_1\right)^{\text{T}}, \hdots, \boldsymbol{\eta}\left(\textbf{s}^*_m\right)^{\text{T}}\right\} \text{ and}\\ &\boldsymbol{\eta}\left(\textbf{s}^*_i\right)^{\text{T}} = \left\{\eta_z^{(g)}\left(\textbf{s}^*_i\right), \eta_z^{(r)}\left(\textbf{s}^*_i\right), \eta_w^{(g)}\left(\textbf{s}^*_i\right), \eta_w^{(r)}\left(\textbf{s}^*_i\right)\right\}.\end{split} \end{align} The complete collection of mean parameters across all $m$ unique spatial locations is denoted by $\boldsymbol{\eta}$ (similar to (2)); $\boldsymbol{\eta}\left(\textbf{s}^*_i\right)$ represents the collection of mean parameters across each component of the model and different roles, specific to unique location $\textbf{s}_i^*$; $\Sigma\left(\phi\right)$ was previously described in (2); $\otimes$ is the Kronecker product; and $\Omega$ represents a four-by-four unstructured covariance matrix describing the cross-covariance among the set of four random effect parameters specific to a unique spatial location.
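The Kronecker structure in (6) is straightforward to construct numerically. A sketch with hypothetical $m$, $\phi$, and $\Omega$ (location-major ordering, matching the stacking of $\boldsymbol{\eta}$ above):
\begin{verbatim}
# Sketch: draw eta from MVN(0, Sigma(phi) (x) Omega); hypothetical inputs.
import numpy as np

rng = np.random.default_rng(2)
m, phi = 10, 0.5
s = rng.uniform(0, 10, size=(m, 2))                  # unique locations
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
Sigma = np.exp(-phi * d)                             # m x m spatial corr.

A = rng.normal(size=(4, 4))
Omega = A @ A.T + 4.0 * np.eye(4)   # hypothetical 4 x 4 cross-covariance

cov = np.kron(Sigma, Omega)         # block (i, j) equals Sigma_ij * Omega
eta = rng.multivariate_normal(np.zeros(4 * m), cov)
# eta[4*i : 4*(i+1)] holds (eta_z^g, eta_z^r, eta_w^g, eta_w^r) at s_i*.
\end{verbatim}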
\subsubsection{Prior distributions}
We complete the model specification by assigning prior distributions to the introduced model parameters. As in Section 3.1.1, the regression parameters are specified as $\beta_{zj}$, $\beta_{wj}$, $\gamma_{zk}^{(g)}$, $\gamma_{zk}^{(r)}$, $\gamma_{wk}^{(g)}$, $\gamma_{wk}^{(r)}$ $\stackrel{\text{iid}}{\sim}\text{N}\left(0,100^2\right)$ for $j=1,\hdots,p_x$ and $k=1,\hdots,p_d$; the variance parameters as $\sigma^2_{\epsilon}$, $\sigma^2_{\zeta_z^{(g)}}$, $\sigma^2_{\zeta_z^{(r)}}$, $\sigma^2_{\zeta_w^{(g)}}$, $\sigma^2_{\zeta_w^{(r)}}$, $\sigma^2_{\nu_z}$, $\sigma^2_{\nu_w}$ $\stackrel{\text{iid}}{\sim} \text{Inverse Gamma}\left(0.01, 0.01\right)$; the spatial correlation parameter as $\phi \sim \text{Gamma}\left(1.00, 1.00\right)$; and the cross-covariance matrix as $\Omega^{-1} \sim \text{Wishart}\left(5, I_4\right)$, a weakly informative choice resulting in uniform cross-correlations \textit{a priori} \citep{gelman2013bayesian}.
\subsection{Induced correlation structure}
The inclusion of individual-specific and spatially correlated random effect parameters in the dyadic outcome models detailed in Sections 3.1 and 3.2 results in a positive correlation between the responses whose magnitude varies depending on (i) whether the two pairs share a common individual and (ii) the geographic distances between the people in the pairs. To better understand the induced correlations, we consider two different cases specifically for patristic distances: when there is, and is not, a shared individual between the pairs. Derivations for the transmission probability model are similar but more complicated due to the mixed-type distribution and two regression frameworks used. Therefore, in Figure S1 of the Supplementary Material, we present simulation-based correlation estimates for this model over a range of spatial variance/correlation settings; see Section S1 for full details.
First, we derive the correlation between two dyadic patristic distances that consist of entirely different individuals, $\ln\left(P_{ij}\right)$ and $\ln\left(P_{kl}\right)$ where $i \neq j \neq k \neq l$. The variance of one of the responses (assuming every individual in the study has a unique spatial location) is given as \begin{align*}\begin{split} & \text{Var}\left\{\ln\left(P_{ij}\right)\right\} = \text{Var}\left(\theta_i\right) + \text{Var}\left(\theta_j\right) + \text{Var}\left(\epsilon_{ij}\right) + 2\text{Cov}\left(\theta_i,\theta_j\right)\\
&= \text{Var}\left\{\eta\left(\textbf{s}^*_i\right) + \zeta_i\right\} + \text{Var}\left\{\eta\left(\textbf{s}^*_j\right) + \zeta_j\right\} + \text{Var}\left(\epsilon_{ij}\right) + 2\text{Cov}\left\{\eta\left(\textbf{s}^*_i\right) + \zeta_i, \eta\left(\textbf{s}^*_j\right) + \zeta_j\right\}\\
&=2\tau^2\left(1 + \exp\left\{-\phi\left\|\textbf{s}^*_i - \textbf{s}^*_j \right\|\right\}\right) + 2\sigma^2_{\zeta} + \sigma^2_{\epsilon},\end{split}\end{align*} due to the spatial dependence between the $\theta_i$ parameters. The covariance between these responses is given as \begin{align*} \begin{split} &\text{Cov}\left\{\ln\left(P_{ij}\right), \ln\left(P_{kl}\right)\right\} = \text{E}\left\{\left(\theta_i + \theta_j\right)\left(\theta_k + \theta_l\right)\right\} \\
&= \text{E}\left[\left\{\eta\left(\textbf{s}^*_i\right) + \zeta_i + \eta\left(\textbf{s}^*_j\right) + \zeta_j\right\}\left\{\eta\left(\textbf{s}^*_k\right) + \zeta_k + \eta\left(\textbf{s}^*_l\right) + \zeta_l\right\}\right] \\
&= \tau^2 \sum_{p_1 \in \left\{i,j\right\}} \sum_{p_2 \in \left\{k,l\right\}} \exp\left\{-\phi\left\|\textbf{s}^*_{p_1} - \textbf{s}^*_{p_2} \right\|\right\}, \end{split}\end{align*} and is simply a function of spatial distances between the individuals in the pairs. This suggests that if spatial correlation is negligible then the covariance/correlation between outcome pairs without a shared individual is effectively equal to zero since $\exp\left\{-\phi\left\|\textbf{s}^*_i - \textbf{s}^*_j\right\|\right\} \approx 0$ for all $i,j$. In the case of strong spatial correlation (i.e., $\exp\left\{-\phi\left\|\textbf{s}^*_i - \textbf{s}^*_j\right\|\right\} \approx 1$), the variance and covariance become $4\tau^2 + 2\sigma^2_{\zeta} + \sigma^2_{\epsilon}$ and $4\tau^2$, respectively. This yields a correlation between the dyadic outcomes of $0$ and $\frac{4\tau^2}{4\tau^2 + 2\sigma^2_{\zeta} + \sigma^2_{\epsilon}}$ for the weak and strong spatial correlation settings, respectively.
Following similar derivations, the covariance between observations from two pairs that include one of the same individuals, say $\ln\left(P_{ij}\right)$ and $\ln\left(P_{ik}\right)$, is given as $$\tau^2\left(1 + \exp\left\{-\phi\left\|\textbf{s}^*_{i} - \textbf{s}^*_{j} \right\|\right\} + \exp\left\{-\phi\left\|\textbf{s}^*_{i} - \textbf{s}^*_{k} \right\|\right\} + \exp\left\{-\phi\left\|\textbf{s}^*_{j} - \textbf{s}^*_{k} \right\|\right\}\right) + \sigma^2_{\zeta}.$$ Under negligible spatial correlation, this expression approaches $\tau^2 + \sigma^2_{\zeta}$ due to the fact that the same individual is represented in both pairs; while for strong spatial dependency, it approaches $4\tau^2 + \sigma^2_{\zeta}$. This yields correlations of $\frac{\tau^2 + \sigma^2_{\zeta}}{2\tau^2 + 2\sigma^2_{\zeta} + \sigma^2_{\epsilon}}$ and $\frac{4\tau^2 + \sigma^2_{\zeta}}{4\tau^2 + 2\sigma^2_{\zeta} + \sigma^2_{\epsilon}}$ for the weak and strong spatial correlation settings, respectively. Both of these quantities are larger than the comparable versions for pairs that do not share an individual.
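These closed-form limits are easy to verify by simulation. A rough sketch for the strong-spatial-correlation limit (hypothetical variance values; $\eta$ is taken as common to all individuals to mimic $\exp\left\{-\phi\left\|\textbf{s}^*_i - \textbf{s}^*_j\right\|\right\} \approx 1$):
\begin{verbatim}
# Sketch: Monte Carlo check of the strong-spatial-correlation limits.
import numpy as np

rng = np.random.default_rng(3)
tau2, s2_zeta, s2_eps, reps = 0.5, 0.2, 0.1, 200_000

eta = rng.normal(0, np.sqrt(tau2), size=reps)           # shared spatial term
zeta = rng.normal(0, np.sqrt(s2_zeta), size=(reps, 4))  # individuals i,j,k,l
eps = rng.normal(0, np.sqrt(s2_eps), size=(reps, 3))

theta = eta[:, None] + zeta            # theta_i = eta + zeta_i for each i
y_ij = theta[:, 0] + theta[:, 1] + eps[:, 0]
y_kl = theta[:, 2] + theta[:, 3] + eps[:, 1]
y_ik = theta[:, 0] + theta[:, 2] + eps[:, 2]

# Targets: 4*tau2/(4*tau2 + 2*s2_zeta + s2_eps) = 0.80 (no shared individual)
# and (4*tau2 + s2_zeta)/(4*tau2 + 2*s2_zeta + s2_eps) = 0.88 (one shared).
print(np.corrcoef(y_ij, y_kl)[0, 1], np.corrcoef(y_ij, y_ik)[0, 1])
\end{verbatim}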
These findings, along with the results shown in Figure S1 of the Supplementary Material, suggest that there is increased correlation between outcomes when at least one individual is shared between the pairs. In the transmission probability model, there is also extremely high correlation between outcomes when the pairs include both individuals but in different roles, with spatial correlation/variability having almost no impact. However, even when pairs have no shared individuals, strong correlation can still exist depending on the strength/magnitude of spatial correlation/variability in the data. This may be an important feature to consider in the infectious disease setting where unmeasured transmission dynamics could result in residual spatial correlation between individuals.
\section{Simulation study}
We design a simulation study to investigate the implications of ignoring multiple forms of correlation between dyadic genetic relatedness outcomes when making inference on regression parameters of interest. Additionally, we aim to determine if a common Bayesian model comparison tool can be used to identify datasets that contain non-negligible levels of correlation and also differentiate the sources of the correlation. In order to avoid repetition of text, the process described throughout Section 4 is specifically for the patristic distances model. However, we carried out the same steps for transmission probabilities using the corresponding equations from Section 3.2. These full details are provided in Section S2 of the Supplementary Material.
\subsection{Data generation}
We simulate data from the model in (1) under three different scenarios. In Setting 1, we specify that there is no unmeasured correlation by setting $\theta_i = 0$ for all $i$. In Setting 2, we simulate data from (1) and (2) with $\eta\left(\textbf{s}_i^*\right) = 0$ for all $i$, resulting in only non-spatial variability in the random effect parameters. In Setting 3, we also simulate data from (1) and (2) with $\phi = -\ln\left(0.05\right)/\max\left\{\left\|\textbf{s}_i^* - \textbf{s}_j^*\right\|; i < j\right\}$ and $\zeta_i = 0$ for all $i$, resulting in spatially correlated random effect parameters with correlation that decreases to $0.05$ at the maximum distance observed in the data (i.e., the effective range).
When simulating data from these models, we use results from our data application in the Republic of Moldova (Section 5) to ensure that we are working with realistic outcomes. Specifically, we use the same sample size ($n=99$), same covariates ($\textbf{x}_{ij}$, $\textbf{d}_i$), and choose the true parameter values needed to simulate from (1) and (2) based on posterior estimates obtained from the data application. In Table S1 of the Supplementary Material, the specific values used in each simulation study are given.
When simulating data from Settings 2 and 3, we define the total variance of the random effect process as $\tau^2 + \sigma^2_{\zeta}$ and use the values in Table S1 of the Supplementary Material to estimate this quantity. In Setting 2, all of this variability is assumed to be non-spatial while in Setting 3 it is attributed entirely to spatial correlation. We generate a new vector of $\boldsymbol{\theta}$ parameters for each dataset from the setting-specific model. We also create a unique set of spatial locations, with no co-located individuals (i.e., $m = n$), for each dataset. In total, we simulate 100 datasets from each setting.
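A compact sketch of the three generation settings for $\boldsymbol{\theta}$ (our paraphrase; in the actual study the variance values come from Table S1, so the numbers below are placeholders):
\begin{verbatim}
# Sketch: theta generation under Settings 1-3 (placeholder variances).
import numpy as np

rng = np.random.default_rng(4)
n, total_var = 99, 0.7          # total_var stands in for tau2 + sigma2_zeta
s = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)

# Setting 3: correlation decays to 0.05 at the maximum observed distance.
phi = -np.log(0.05) / d.max()

theta = {
    1: np.zeros(n),                                     # no correlation
    2: rng.normal(0, np.sqrt(total_var), size=n),       # non-spatial only
    3: rng.multivariate_normal(np.zeros(n),
                               total_var * np.exp(-phi * d)),  # spatial only
}
\end{verbatim}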
\subsection{Competing models}
We apply three competing models to every dataset. The first model (i.e., \textit{Fixed}) represents a simplified fixed effects regression form of (1) where $\theta_i = 0$ for all $i$ (i.e., no random effect parameters). As \textit{Fixed} matches the data generated from Setting 1, we expect it to perform well overall in that setting. However, in Settings 2 and 3, it will likely struggle in estimating the associations of interest as it ignores all correlation. The second model (i.e., \textit{Non-spatial}) also represents a variant of (1) and (2) where $\eta\left(\textbf{s}^*_i\right) = 0$ for all $i$ (matching data generation Setting 2). Therefore, \textit{Non-spatial} accounts for correlation due to the nature of dyadic responses, but ignores the potential for spatial correlation in the data. It is currently unknown how inference for the regression parameters is impacted when dyadic correlation is accounted for but spatial correlation is ignored. The final competing model (i.e., \textit{Spatial}) is our newly developed model in Section 3.1 which can address both sources of correlation simultaneously.
We monitor several pieces of information collected from the analyzed datasets to compare the methods. First, we calculate the mean absolute error (MAE) for every regression parameter in the model using the posterior mean as the point estimate. Next, we calculate 95\% quantile-based equal-tailed credible intervals (CIs) for each regression parameter and monitor how often this interval includes the true value (ideally around 95\% of the time) and its length. Finally, we formally compare the models using the Watanabe-Akaike information criterion (WAIC), a metric that balances model fit and complexity where smaller values suggest that a model is preferred \citep{watanabe2010asymptotic}.
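For reference, one common variant of WAIC (the variance-based penalty described by \cite{watanabe2010asymptotic} and \cite{gelman2013bayesian}) can be computed from an $S \times N$ matrix of pointwise log-likelihood evaluations across posterior samples; a minimal sketch:
\begin{verbatim}
# Sketch: WAIC from pointwise log-likelihoods (rows: posterior samples,
# columns: observations); smaller WAIC values indicate a preferred model.
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))  # complexity term
    return -2.0 * (lppd - p_waic), p_waic
\end{verbatim}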
\subsection{Results}
We apply each method to each dataset and collect 20,000 posterior samples after removing the first 5,000 iterations prior to convergence of the model. Additionally, we thin the remaining samples by a factor of two to reduce posterior autocorrelation, resulting in 10,000 samples with which to make posterior inference. The priors from Sections 3.1.1 and 3.2.1 were used, except when we encountered convergence issues while applying \textit{Spatial} in the transmission probabilities framework to a few of the datasets generated from Setting 3. In that case, $\sigma^2_{\nu_z} \sim \text{Inverse Gamma}\left(100.00, 100.00\right)$ or $\text{Inverse Gamma}\left(1000.00, 1000.00\right)$ was used to stabilize estimation of the $\nu_{zi}$ parameters.
The full results are shown in Table 2 for all models where we report the average MAE, average empirical coverage (EC), and average CI length across all regression parameters and simulated datasets, as well as the average difference in WAIC values, with respect to \textit{Spatial}, across all simulated datasets.
\begin{landscape}
\begin{table}
\centering
\caption{Simulation study results. $\Delta$ WAIC = WAIC $-$ WAIC$_{\text{Spatial}}$. Averages across the 100 simulated datasets are reported with standard errors given in parentheses. Bold entries indicate the ``best'' value within a setting and outcome type across models. MAE results for the patristic distances model are multiplied by 100 for presentation purposes.}
\begin{tabular}{llrrrrrrr}
\hline
& & \multicolumn{3}{c}{Patristic} & & \multicolumn{3}{c}{Trans.\ Probs.} \\
\cline{3-5} \cline{7-9}
Metric & Setting & Fixed & Non-spatial & Spatial & & Fixed & Non-spatial & Spatial \\
\hline
MAE & 1 & \textbf{1.02} (0.03) & 1.03 (0.03) & 1.04 (0.03) & & 1.62 (0.20) & 1.53 (0.19) & \textbf{1.40} (0.16) \\
& 2 & 6.77 (0.23) & 6.21 (0.23) & \textbf{6.19} (0.23) & & 0.40 (0.00) & 0.35 (0.01) & \textbf{0.34} (0.01) \\
& 3 & 5.56 (0.20) & 4.84 (0.19) & \textbf{2.31} (0.08) & & 0.43 (0.03) & 0.61 (0.24) & \textbf{0.35} (0.08) \\
\hline
EC & 1 & \textbf{0.94} (0.01) & 0.97 (0.00) & 0.97 (0.00) & & 0.92 (0.01) & 0.92 (0.01) & \textbf{0.93} (0.01) \\
& 2 & 0.41 (0.02) & 0.93 (0.01) & \textbf{0.94} (0.01) & & 0.44 (0.01) & 0.93 (0.00) & \textbf{0.94} (0.00) \\
& 3 & 0.39 (0.01) & 0.92 (0.01) & \textbf{0.96} (0.01) & & 0.47 (0.01) & 0.90 (0.01) & \textbf{0.94} (0.01) \\
\hline
CI Length & 1 & 0.05 (0.00) & 0.06 (0.00) & 0.06 (0.00) & & 4.33 (0.41) & 4.26 (0.40) & 4.28 (0.39) \\
& 2 & 0.10 (0.00) & 0.28 (0.00) & 0.30 (0.00) & & 0.38 (0.00) & 1.61 (0.01) & 1.62 (0.01) \\
& 3 & 0.08 (0.00) & 0.21 (0.00) & 0.13 (0.00) & & 0.52 (0.09) & 1.90 (0.48) & 1.31 (0.18) \\
\hline
$\Delta$ WAIC & 1 & -31.74 (0.97) & -2.95 (0.07) & -- & & 11.28 (1.95) & -0.49 (1.74) & -- \\
& 2 & 6138.68 (56.90) & 0.57 (0.02) & -- & & 9013.90 (87.72) & 26.04 (22.47) & -- \\
& 3 & 3983.08 (98.48) & 4.89 (0.10) & -- & & 8349.95 (298.65) & 111.87 (10.67) & -- \\
\hline
\end{tabular}
\end{table}
\end{landscape}
\clearpage
As expected, all competing models perform similarly in terms of MAE, EC, and CI length in Setting 1, with \textit{Spatial} having improved MAE results for the transmission probabilities framework. For the patristic distances model, WAIC also correctly favors \textit{Fixed} on average as it most closely matches the way the data were generated. However, with respect to WAIC, \textit{Fixed} is slightly outperformed by \textit{Spatial} in this setting for transmission probabilities.
In Settings 2 and 3 where correlation is present, \textit{Spatial} consistently outperforms \textit{Fixed} and \textit{Non-spatial} across all metrics. \textit{Fixed} displays troubling behavior in these settings, particularly with respect to EC. Its 95\% CIs capture the true parameter values only around 39--47\% of the time and are much shorter on average than those from the competing methods. This suggests that failing to account for correlation may result in CIs that are too narrow, possibly leading to an inflated type I error rate.
The MAE results suggest that regression parameter inference may also suffer when correlation is ignored and/or mischaracterized. For example, in Setting 3 where the data exhibit spatial correlation, \textit{Spatial} produces substantially smaller MAEs than \textit{Non-spatial} (and \textit{Fixed}), showing the importance of accounting for spatial correlation. However, in Setting 2 where there is no spatial correlation, \textit{Non-spatial} and \textit{Spatial} perform almost identically with respect to MAE.
The WAIC results in Settings 2 and 3 are decisively in favor of \textit{Spatial} over \textit{Fixed}, with differences between \textit{Spatial} and \textit{Non-spatial} increasing under spatially correlated data. When combined with results from Setting 1, this provides evidence to suggest that WAIC may be a useful tool for differentiating datasets in terms of the presence and composition of correlation.
\section{\emph{Mycobacterium tuberculosis} in the Republic of Moldova}
We apply the methods developed in Sections 3.1 and 3.2 to better understand factors related to \emph{Mycobacterium tuberculosis} transmission dynamics in the Republic of Moldova. Specifically, $\textbf{x}_{ij}$ from (1, 4, 5) includes the pair-specific covariates described in Section 2 and Table 1 (i.e., intercept, same village indicator, distance between villages, difference in diagnosis dates, difference in ages) while $\textbf{d}_i$ includes the individual-specific covariates (i.e., age, sex, education, working status, residence type). In addition to the new methods, we also present results from the competing approaches detailed in Section 4.2 (i.e., \textit{Fixed} and \textit{Non-spatial}). We use WAIC to compare the different models, identify the source of correlation, and ultimately determine the need for the new methodology.
All models are fit in the Bayesian setting using MCMC sampling techniques, with the full conditional distributions detailed in Section S3 of the Supplementary Material. For each method, we collect 10,000 samples from the joint posterior distribution after removing the first 50,000 iterations prior to convergence and thinning the remaining 200,000 posterior samples by a factor of 20 to reduce posterior autocorrelation. For each parameter, convergence was assessed using Geweke's diagnostic \citep{geweke1991evaluating} while effective sample size was calculated to ensure we collected sufficient post-convergence samples to make accurate statistical inference. Neither tool suggested any issues of concern. We present posterior means and 95\% quantile-based equal-tailed CIs when discussing posterior inference.
\subsection{Patristic distances}
Results from the patristic distance analyses are shown in Figure 2 and Table 3. From Figure 2 it is clear that WAIC favors \textit{Spatial} over the other approaches. In the simulation study, WAIC was shown to consistently identify the correct data generating setting, suggesting that there is likely non-negligible \textbf{and} spatially structured correlation in these data. Consequently, \textit{Fixed} should not be used as it is likely to underestimate regression parameter uncertainty, resulting in CIs that are often too narrow, while \textit{Non-spatial} may result in regression parameter estimates with inflated MAE. This can partly be seen in Figure 2 where the \textit{Fixed} CIs are much shorter on average. The point estimates seen in Figure 2 are generally consistent across all methods in this case.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale = 0.45]{Figures/Patristic_Standard.pdf}
\includegraphics[trim={8.6cm 0.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Patristic_Nonspatial.pdf}
\includegraphics[trim={8.6cm 0.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Patristic_New.pdf}\\
\caption{Results from the patristic distance analyses in the Republic of Moldova. Posterior means and 95\% quantile-based equal-tailed credible intervals are shown for the raw regression parameters from each of the competing methods. Red lines indicate that the interval excludes zero. WAIC values are provided along with the expected number of parameters/model complexity term in parentheses (i.e., p$_{\text{WAIC}}$), with smaller values of WAIC preferred.}
\end{center}
\end{figure}
Posterior inference from \textit{Spatial} for the exponentiated regression parameters in Table 3 suggests that the indicator of whether the pair of individuals was in the same village is the only association whose CI excludes one. Patristic distances from pairs of individuals from the same village are around 45\% (40\%, 49\%) smaller on average than those from pairs of individuals in different villages.
\begin{table}
\centering
\caption{Results from the \textit{Spatial} patristic distance analyses in the Republic of Moldova. Posterior means and 95\% quantile-based equal-tailed credible intervals are shown for the exponentiated regression parameters (i.e., ratio of expected patristic distances per specified change in covariate value). The displayed credible intervals for the bolded entries exclude one.}
\begin{tabular}{lr}
\hline
Effect & Estimate (95\% CI) \\
\hline
Distance Between Villages (50 km) & 1.00 (0.99, 1.01) \\
Same Village (Yes vs.\ No) & \textbf{0.55 (0.51, 0.60)} \\
Date of Diagnosis Difference (1/2 Year) & 1.01 (1.00, 1.01) \\
Age Difference (10 Years) & 1.00 (0.99, 1.01) \\
Age (10 Years) & 1.00 (0.95, 1.04) \\
Sex: & \\
\ \ \ Mixed Pair vs.\ Both Female & 0.99 (0.88, 1.11) \\
\ \ \ Both Male vs.\ Both Female & 0.99 (0.78, 1.24) \\
Residence Location: & \\
\ \ \ Mixed Pair vs.\ Both Rural & 0.98 (0.88, 1.08) \\
\ \ \ Both Urban vs.\ Both Rural & 0.97 (0.78, 1.19) \\
Working Status: & \\
\ \ \ Mixed Pair vs.\ Both Unemployed & 0.98 (0.84, 1.14) \\
\ \ \ Both Employed vs.\ Both Unemployed & 1.00 (0.72, 1.35) \\
Education: & \\
\ \ \ Mixed Pair vs.\ Both $\geq$ Secondary & 1.05 (0.95, 1.17) \\
\ \ \ Both $<$ Secondary vs.\ Both $\geq$ Secondary & 1.12 (0.90, 1.37) \\
\hline
\end{tabular}
\end{table}
\subsection{Transmission probabilities}
Results from the transmission probability analyses are shown in Figure 3 and Table 4. Similar to the patristic distance analyses, WAIC also favors \textit{Spatial} over the competing approaches in this setting. Once again, \textit{Fixed} may be producing CIs that are too narrow while the \textit{Non-spatial} point estimates may have inflated MAEs. Graphical results from \textit{Spatial} show several more significant associations than observed with the competing approaches. Recall that in the transmission probabilities framework, \textit{Non-spatial} specifies that $\boldsymbol{\eta}\left(\textbf{s}^*_i\right) = \textbf{0}_4$ for all $i$, meaning that it accounts for neither spatial correlation nor cross-covariance between the parameters. This could explain the relatively large differences observed between results from the two methods.
\begin{figure}[ht!]
\centering
\includegraphics[trim={0.0cm 1.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Trans_Binary_Standard.pdf}
\includegraphics[trim={8.6cm 1.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Trans_Binary_Nonspatial.pdf}
\includegraphics[trim={8.6cm 1.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Trans_Binary_New.pdf}\\
\includegraphics[scale = 0.45]{Figures/Trans_Cont_Standard.pdf}
\includegraphics[trim={8.6cm 0.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Trans_Cont_Nonspatial.pdf}
\includegraphics[trim={8.6cm 0.0cm 0.0cm 0.0cm}, clip, scale = 0.45]{Figures/Trans_Cont_New.pdf}
\caption{Results from the transmission probability analyses in the Republic of Moldova. Posterior means and 95\% equal-tailed quantile-based credible intervals are shown for the binary component (top row) and continuous component (bottom row) raw regression parameters from each of the competing methods. Red lines indicate that the interval excludes zero. WAIC values are provided along with the expected number of parameters/model complexity term in parentheses (i.e., p$_{\text{WAIC}}$), with smaller values of WAIC preferred.}
\end{figure}
\clearpage
For the pair-specific covariates in the binary regression model, odds ratio results in Table 4 from \textit{Spatial} suggest that pairs of individuals in the same village and with similar dates of diagnosis are more likely to have non-zero transmission probabilities on average. Results from the continuous regression model are similar, with the addition of a significant finding for the distance between villages. Pairs of individuals whose villages are closer together geographically are more likely to have larger non-zero transmission probabilities.
\begin{table}[ht!]
\small
\centering
\caption{Results from the \textit{Spatial} transmission probability analyses in the Republic of Moldova. Posterior means and 95\% quantile-based equal-tailed credible intervals are shown for the exponentiated regression parameters (i.e., odds ratios). The displayed credible intervals for the bolded entries exclude one.}
\begin{tabular}{lrrr}
\hline
Effect & Binary & & Continuous \\
\hline
Distance Between Villages (50 km) & 0.99 (0.89, 1.10) & & \textbf{0.93 (0.87, 1.00)} \\
Same Village (Yes vs.\ No) & \textbf{5.41 (2.22, 11.11)} & & \textbf{10.84 (5.54, 19.12)} \\
Date of Diagnosis Difference (1/2 Year) & \textbf{0.77 (0.68, 0.86)} & & \textbf{0.75 (0.69, 0.82)} \\
Age Difference (10 Years) & 1.03 (0.93, 1.13) & & 0.93 (0.86, 1.01) \\
\hline
\textbf{Giver Role:} & & & \\
Age (10 Years) & \textbf{1.28 (1.04, 1.57)} & & 0.94 (0.86, 1.03) \\
Sex: & & & \\
\ \ \ Male vs.\ Female & 0.82 (0.46, 1.36) & & 1.15 (0.87, 1.51) \\
Residence Location: & & & \\
\ \ \ Urban vs.\ Rural & \textbf{0.35 (0.11, 0.82)} & & 1.23 (0.90, 1.65) \\
Working Status: & & & \\
\ \ \ Employed vs.\ Unemployed & \textbf{4.69 (1.23, 12.26)}& & 1.05 (0.71, 1.49) \\
Education: & & & \\
\ \ \ $<$ Secondary vs.\ $\geq$ Secondary & \textbf{3.01 (1.49, 5.61)} & & 1.08 (0.82, 1.39) \\
\hline
\textbf{Receiver Role:} & & & \\
Age (10 Years) & \textbf{1.28 (1.01, 1.62)} & & 1.00 (0.90, 1.10) \\
Sex: & & & \\
\ \ \ Male vs.\ Female & 0.65 (0.33, 1.19) & & 1.20 (0.90, 1.59) \\
Residence Location: & & & \\
\ \ \ Urban vs.\ Rural & \textbf{0.25 (0.06, 0.66)} & & 1.27 (0.90, 1.72) \\
Working Status: & & & \\
\ \ \ Employed vs.\ Unemployed & \textbf{6.18 (1.32, 18.66)}& & 1.20 (0.80, 1.73) \\
Education: & & & \\
\ \ \ $<$ Secondary vs.\ $\geq$ Secondary & \textbf{3.70 (1.60, 7.66)} & & 1.15 (0.86, 1.48) \\
\hline
\end{tabular}
\end{table}
\clearpage
For the individual-specific covariates, overall the giver and receiver results are very similar within both the binary and continuous regression models, suggesting that the impacts of the included factors do not differ based on the role the individual serves in the pair. For the binary component of the model we see that older, employed, and less formally educated individuals living in rural areas are more likely to be in pairs that have non-zero transmission probabilities. There were no statistically significant individual-level associations identified for the continuous component of the model.
\subsection{Random effect parameter analyses}
In Section S4 of the Supplementary Material, we present two additional analyses for the estimated random effect parameters, $\theta_i$ and $\left(\theta_{zi}^{(g)}, \theta_{zi}^{(r)}, \theta_{wi}^{(g)}, \theta_{wi}^{(r)}\right)^{\text{T}}$, from the fitted patristic distances and transmission probabilities models, respectively. First, we present several summaries of the estimated random effect parameters/hyperparameters with respect to the observed dyadic genetic distance outcomes, finding an intuitive connection between the random effect parameters and outcomes as well as the clear presence of spatially structured variability in the data. Results suggest that the frameworks have the ability to identify key individuals associated with increased transmission activity. Next, we use Bayesian kriging techniques \citep{banerjee2014hierarchical} to predict the random effect parameters at new/unobserved spatial locations for both outcomes across the Republic of Moldova. Posterior predicted mean and standard deviation maps are presented which allow us to determine areas of increased residual transmission risk for both outcomes.
\section{Discussion}
In this work, we presented innovative hierarchical Bayesian spatial methods for modeling two types of dyadic genetic relatedness data, patristic distances and transmission probabilities. The models account for multiple sources of correlation (i.e., dyadic and spatial) and important features of each outcome (e.g., zero-inflation). Through simulation, we showed that these approaches perform as well as \textit{Fixed} (i.e., regression while ignoring correlation) and \textit{Non-spatial} (i.e., no spatial correlation or cross-covariance) when applied to uncorrelated data, and greatly outperform them under correlation in terms of estimating and quantifying uncertainty in the regression parameters. Under any type of correlation, \textit{Fixed} produces CIs that are too narrow and as a result should not be used for estimating associations between genetic relatedness measures and other factors.
When applying the models to \emph{Mycobacterium tuberculosis} data from the Republic of Moldova, we found significant associations between genetic relatedness and spatial proximity, dates of diagnosis, and multiple individual-level factors; many of which represent new insights in this setting. Analysis of the random effect parameters identified individuals and geographic areas that are generally associated with higher levels of transmission, a potentially useful aspect of the model with respect to infection control as heterogeneity in infectiousness among TB patients has been investigated in previous studies \citep{ypma2013sign, melsew2019role}. Additionally, WAIC showed that the correlation between outcomes was at least partially spatially structured, further motivating the development of the new methodology.
The unique data in our study represent a major strength of the work; however, a number of factors inherent to \emph{Mycobacterium tuberculosis} make the study of transmission dynamics difficult. First, TB epidemics are relatively slow moving, often occurring over years \citep{Pai}, and it is difficult to observe these processes in a two-year study. Second, the time between infection and disease onset varies greatly, from weeks to years \citep{Borgdorff}. Third, not all TB cases are `culture positive', meaning it is not possible to culture, and by extension sequence, isolates for every known TB case \citep{Cruciani, Nguyen}. Taken together, these factors make it difficult to infer direct transmission events. Even within a putative transmission cluster, the probabilities of direct transmission among case pairs may be low. For this reason, it can be difficult to fit a model to these types of data. We address this challenge by including a binary component in our model specification (i.e., an indicator of a non-zero transmission probability between case pairs). Finally, as is standard in most epidemiological analyses, our data do not include information on individuals who were exposed but not infected. Therefore, the results should be interpreted conditional on both individuals being infected.
While we use patristic distance as an outcome representative of symmetric dyadic genetic relatedness data in this work, the framework we have created can be readily adapted to SNP distances by modifying the likelihood in (1) to accommodate discrete count data. In our R package \texttt{GenePair} (available at: \texttt{https://github.com/warrenjl/GenePair}), we have developed software for fitting the negative binomial and binary (i.e., genetically clustered or not) versions of the patristic distances model, along with the two versions described in Sections 3.1 and 3.2.
Future work in this area could focus on alternative techniques for accounting for the correlation caused by analyzing dyadic outcomes while also incorporating spatial correlation. Our current methods are based on the idea of shared spatially correlated random effect parameters between pairs of data including the same individual, which was shown to induce an intuitive correlation structure between observations in Section 3.3. Additionally, these parameters were shown to be useful in identifying key individuals/areas that drive transmission in the study. However, there are undoubtedly other approaches for achieving these same goals.
A meta-regression approach could also be used in future work for genetic relatedness outcomes that are estimated in preceding analyses and include measures of uncertainty (something not applicable in our data). The introduced random effects frameworks could still be used for defining the true but unobserved outcomes in this model, with a first stage that treats the observed outcome as an estimate of the true value. A joint framework that combines the model which estimates the genetic relatedness outcomes with our models for characterizing variability and exploring associations in the outcomes could also represent an important extension. However, the computational burden of this sort of approach would likely be great and could possibly require the addition of more efficient posterior sampling techniques.
Overall, we recommend the use of the newly developed methods when the goal of a study is to estimate associations between spatially-referenced genetic relatedness and other variables of interest. Even under the most simplistic, and likely unrealistic assumption of no correlation between dyadic responses, the new methodology performs well. When correlation is present, competing approaches can yield potentially overly optimistic insights about the significance of the associations and/or poorly estimate the associations of interest and should not be used for decision making.
\bibliographystyle{chicago}
\section{Induced correlation structure: Transmission probabilities}
In Figure S1, we present simulation-based correlation estimates for the transmission probabilities model over a range of spatial variance/correlation settings. Using the same sample sizes, covariates, and spatial locations as observed in the Republic of Moldova dataset described in Section 2 of the main text, as well as parameter estimates obtained by fitting the model to those data in Section 5 (see Table S1), we simulate $1,000$ datasets from the full model detailed in Section 3.2 for each unique setting of $\phi$ and $\Omega$. We vary $\phi$ from $0.0001$ to $10$ by increments of $0.10$ and multiply the estimate of $\Omega$ by a factor with the same range as $\phi$ to vary the amount of spatial correlation and variability, respectively, in the simulated responses.
For a specific setting of these parameters, the $1,000$ simulated datasets are used to estimate the correlations of interest. Specifically, we present estimates of the correlation between the $\pi_{ij}$ parameters, which are defined in (4) from the main text and control whether a transmission probability is larger than zero. Correlations between these parameters are more robustly estimated than those between the raw outcomes or the $w_{ij}$ parameters given the large proportion of zeros in the real, and therefore simulated, data. We present estimates of the correlation between $\pi_{ij}$ and $\pi_{ji}$ (i.e., two shared individuals, different roles), $\pi_{ij}$ and $\pi_{ik}$ (i.e., one shared individual, same role), $\pi_{ij}$ and $\pi_{ki}$ (i.e., one shared individual, different role), and $\pi_{ij}$ and $\pi_{kl}$ (i.e., no shared individual).
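Schematically, each reported value is a sample correlation across the simulated datasets; for instance (the function \texttt{simulate\_pi} below is a hypothetical stand-in for one draw of the $\pi$ parameters under a given $\phi$ and $\Omega$ setting):
\begin{verbatim}
# Sketch: correlation between pi_ij and pi_ji across simulated datasets.
import numpy as np

def pairwise_corr(simulate_pi, n_sims=1000, i=0, j=1):
    """simulate_pi() is assumed to return an n x n matrix of pi values."""
    a, b = np.empty(n_sims), np.empty(n_sims)
    for s in range(n_sims):
        pi = simulate_pi()
        a[s], b[s] = pi[i, j], pi[j, i]  # shared individuals, roles swapped
    return np.corrcoef(a, b)[0, 1]
\end{verbatim}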
\clearpage
\section{Simulation study details: Transmission probabilities}
\subsection{Data generation}
We simulate data from the distribution in (3) from the main text under three different scenarios. In Setting 1, we specify that there is no unmeasured correlation by setting $\theta_{zi}^{\left(g\right)} = \theta_{zi}^{\left(r\right)} = \theta_{wi}^{\left(g\right)} = \theta_{wi}^{\left(r\right)} = \nu_{zi} = \nu_{wi} = 0$ for all $i$. In Setting 2, we include random effect parameters but from (6) specify that $\eta_{z}^{\left(g\right)}\left(\textbf{s}^*_i\right) = \eta_{z}^{\left(r\right)}\left(\textbf{s}^*_i\right) = \eta_{w}^{\left(g\right)}\left(\textbf{s}^*_i\right) = \eta_{w}^{\left(r\right)}\left(\textbf{s}^*_i\right) = 0$ for all $i$, resulting in only non-spatial variability in the random effect parameters. As a result, Setting 2 also ignores cross-covariance between parameters corresponding to the same individual. In Setting 3, we also include random effect parameters but specify that $\phi = -\ln\left(0.05\right)/\max\left\{\left\|\textbf{s}_i^* - \textbf{s}_j^*\right\|; i < j\right\}$ and $\zeta_{zi}^{\left(g\right)} = \zeta_{zi}^{\left(r\right)} = \zeta_{wi}^{\left(g\right)} = \zeta_{wi}^{\left(r\right)} = 0$ for all $i$ in (6), resulting in multivariate spatial correlation that decreases to $0.05$ at the maximum distance observed in the data (i.e., the effective range) and non-zero cross-covariance.
When simulating data from Setting 2, we define the total covariance matrix of the multivariate random effect process as $\text{diag}\left(\Omega\right) + \text{diag}\left(\sigma^2_{\zeta_z^{\left(g\right)}}, \sigma^2_{\zeta_z^{\left(r\right)}}, \sigma^2_{\zeta_w^{\left(g\right)}}, \sigma^2_{\zeta_w^{\left(r\right)}}\right)$ (i.e., no cross-covariance) and use the values in Table S1 to estimate this quantity. For Setting 3, we consider the full covariance matrix as $\Omega + \text{diag}\left(\sigma^2_{\zeta_z^{\left(g\right)}}, \sigma^2_{\zeta_z^{\left(r\right)}}, \sigma^2_{\zeta_w^{\left(g\right)}}, \sigma^2_{\zeta_w^{\left(r\right)}}\right)$. In Setting 2, all of this variability is assumed to be non-spatial while in Setting 3 it is attributed entirely to spatial correlation. We generate a new vector of $\boldsymbol{\theta}$, $\boldsymbol{\nu}_{z}$, and $\boldsymbol{\nu}_{w}$ parameters for each dataset from the setting-specific model. We also create a unique set of spatial locations, with no co-located individuals (i.e., $m = n$), for each dataset. In total, we simulate 100 datasets from each setting.
\subsection{Competing models}
We apply three competing models to every dataset. The first model (i.e., \textit{Fixed}) represents a simplified fixed effects regression form of (3-5) from the main text where $\theta_{zi}^{\left(g\right)} = \theta_{zi}^{\left(r\right)} = \theta_{wi}^{\left(g\right)} = \theta_{wi}^{\left(r\right)} = \nu_{zi} = \nu_{wi} = 0$ for all $i$ (i.e., no random effect parameters). As \textit{Fixed} matches the data generated from Setting 1, we expect it to perform well overall in that setting. However, in Settings 2 and 3, it will likely struggle in estimating the associations of interest as it ignores all correlation. The second model (i.e., \textit{Non-spatial}) also represents a variant of the model (3-6) where $\eta_{z}^{\left(g\right)}\left(\textbf{s}^*_i\right) = \eta_{z}^{\left(r\right)}\left(\textbf{s}^*_i\right) = \eta_{w}^{\left(g\right)}\left(\textbf{s}^*_i\right) = \eta_{w}^{\left(r\right)}\left(\textbf{s}^*_i\right) = 0$ for all $i$ (matching data generation Setting 2). Therefore, \textit{Non-spatial} accounts for correlation due to the nature of dyadic responses, but ignores the potential for spatial correlation and cross-covariance in the data. It is currently unknown how inference for the regression parameters is impacted when dyadic correlation is accounted for but other correlation is ignored. The final competing model (i.e., \textit{Spatial}) is our newly developed model in Section 3.2 which can address all sources of correlation simultaneously.
\clearpage
\section{Model fitting details}
We use Markov chain Monte Carlo sampling techniques (i.e., Gibbs and Metropolis-within-Gibbs algorithms) to fit the newly developed models within our R package \texttt{GenePair} \newline (available at: \texttt{https://github.com/warrenjl/GenePair}) \citep{metropolis1953equation, geman1984stochastic, gelfand1990sampling}. In this section we present the full conditional distributions for all introduced model parameters needed to fit both of the models (i.e., patristic distances and transmission probabilities).
\subsection{Patristic distances}
The patristic distances model in Section 3.1 of the main text can be written more generally in linear mixed model form as $$\ln\left(\boldsymbol{P}\right) = X\boldsymbol{\delta} + Z\boldsymbol{\theta} + \boldsymbol{\epsilon}$$ where $\boldsymbol{\epsilon}|\sigma^2_{\epsilon} \sim \text{MVN}\left(\boldsymbol{0}_{n^*}, \sigma^2_{\epsilon}I_{n^*}\right)$; $n^* = \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1$ is the total number of unique-pair patristic distances in the study; $n$ is the number of individuals in the study; $I_{n^*}$ is the $n^*$ by $n^*$ identity matrix; $\boldsymbol{P}^{\text{T}} = \left\{P_{12}, \hdots, P_{n-1,n}\right\}$ is the full vector of patristic distances (i.e., $P_{ij}$ for $i < j$); $X$ is an $n^*$ by $\left(p_x + p_d\right)$ matrix of covariates with $i^{th}$ row (corresponding to pair $(j,k)$ of data for example) equal to $\left\{\textbf{x}_{jk}^{\text{T}}, \left(\textbf{d}_j + \textbf{d}_k\right)^{\text{T}}\right\}$; $\boldsymbol{\delta}^{\text{T}} = \left(\boldsymbol{\beta}^{\text{T}}, \boldsymbol{\gamma}^{\text{T}}\right)$; $Z$ is an $n^*$ by $n$ matrix with $i^{th}$ row (corresponding to pair $(j,k)$ of data for example) equal to a vector of all zeros other than entries $j$ and $k$, which are equal to one; and $\boldsymbol{\theta}^{\text{T}} = \left(\theta_1, \hdots, \theta_n\right)$.
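To make the construction of $Z$ concrete, a minimal sketch (0-indexed, purely illustrative) is given below.
\begin{verbatim}
import numpy as np
from itertools import combinations

def pairwise_incidence(n):
    # Row for pair (j, k) has ones in columns j and k, zeros elsewhere;
    # the number of rows is n* = n(n-1)/2.
    pairs = list(combinations(range(n), 2))
    Z = np.zeros((len(pairs), n))
    for r, (j, k) in enumerate(pairs):
        Z[r, j] = Z[r, k] = 1.0
    return pairs, Z
\end{verbatim}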
The prior distributions match those from Section 3.1.1 of the main text with \newline $\boldsymbol{\delta} \sim \text{MVN}\left(\boldsymbol{0}_{\left(p_x + p_d\right)}, 100^2 I_{\left(p_x + p_d\right)}\right)$, where $\boldsymbol{0}_p$ denotes a $p$-length vector of zeros; $\boldsymbol{\theta}|\boldsymbol{\eta}, \sigma^2_{\zeta} \sim \text{MVN}\left(V \boldsymbol{\eta}, \sigma^2_{\zeta} I_n\right)$ where $V$ is an $n$ by $m$ matrix with $V_{ij} = 1$ if individual $i$ resides at location $\textbf{s}^*_j$ and $V_{ij} = 0$ otherwise; $\boldsymbol{\eta}|\phi, \tau^2 \sim \text{MVN}\left\{\boldsymbol{0}_m, \tau^2 \Sigma\left(\phi\right)\right\}$; $\phi \sim \text{Gamma}\left(1.00, 1.00\right)$; and $\sigma^2_{\epsilon}, \sigma^2_{\zeta}, \tau^2 \stackrel{\text{iid}}{\sim} \text{Inverse Gamma}\left(0.01, 0.01\right)$. For complete details on the definitions of each of these variables/parameters, please see Section 3.1.
Based on these specifications, the full conditional distributions are given as:
\begin{itemize}
\item $\sigma^2_{\epsilon} | \Theta_{-\sigma^2_{\epsilon}}, \boldsymbol{P} \sim \text{Inverse Gamma}\left(\frac{n^*}{2} + 0.01, \frac{\left(\boldsymbol{P} - X\boldsymbol{\delta} - Z\boldsymbol{\theta}\right)^{\text{T}} \left(\boldsymbol{P} - X\boldsymbol{\delta} - Z\boldsymbol{\theta}\right)}{2} + 0.01\right),$ where $\Theta_{-\sigma^2_{\epsilon}}$ is the full vector of all parameters with $\sigma^2_{\epsilon}$ removed;
\item $\boldsymbol{\delta}|\Theta_{-\boldsymbol{\delta}}, \boldsymbol{P} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\delta}}, \Sigma_{\boldsymbol{\delta}}\right)$ with $\Sigma_{\boldsymbol{\delta}} = \left(\frac{X^{\text{T}}X}{\sigma^2_{\epsilon}} + \frac{I_{\left(p_x + p_d\right)}}{100^2}\right)^{-1}$ and $\boldsymbol{\mu}_{\boldsymbol{\delta}} = \frac{\Sigma_{\boldsymbol{\delta}} X^{\text{T}}\left(\boldsymbol{P} - Z\boldsymbol{\theta}\right)}{\sigma^2_{\epsilon}}$;
\item $\boldsymbol{\theta}|\Theta_{-\boldsymbol{\theta}}, \boldsymbol{P} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}, \Sigma_{\boldsymbol{\theta}}\right)$ with $\Sigma_{\boldsymbol{\theta}} = \left(\frac{Z^{\text{T}}Z}{\sigma^2_{\epsilon}} + \frac{I_n}{\sigma^2_{\zeta}}\right)^{-1}$ and $\boldsymbol{\mu}_{\boldsymbol{\theta}} = \Sigma_{\boldsymbol{\theta}} \left\{ \frac{Z^{\text{T}}\left(\boldsymbol{P} - X\boldsymbol{\delta}\right)}{\sigma^2_{\epsilon}} + \frac{V\boldsymbol{\eta}}{\sigma^2_{\zeta}}\right\}$; we implement a sum-to-zero constraint \textit{on the fly} for these parameters \citep{besag1995bayesian, berrocal2012space, warren2021spatial};
\item $\sigma^2_{\zeta} | \Theta_{-\sigma^2_{\zeta}}, \boldsymbol{P} \sim \text{Inverse Gamma}\left(\frac{n}{2} + 0.01, \frac{\left(\boldsymbol{\theta} - V\boldsymbol{\eta}\right)^{\text{T}} \left(\boldsymbol{\theta} - V\boldsymbol{\eta}\right)}{2} + 0.01\right)$;
\item $\boldsymbol{\eta}|\Theta_{-\boldsymbol{\eta}}, \boldsymbol{P} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\eta}}, \Sigma_{\boldsymbol{\eta}}\right)$ with $\Sigma_{\boldsymbol{\eta}} = \left(\frac{V^{\text{T}}V}{\sigma^2_{\zeta}} + \frac{\Sigma\left(\phi\right)^{-1}}{\tau^2}\right)^{-1}$ and $\boldsymbol{\mu}_{\boldsymbol{\eta}} = \frac{\Sigma_{\boldsymbol{\eta}} V^{\text{T}}\boldsymbol{\theta}}{\sigma^2_{\zeta}}$;
\item $\tau^2 | \Theta_{-\tau^2}, \boldsymbol{P} \sim \text{Inverse Gamma}\left(\frac{m}{2} + 0.01, \frac{\boldsymbol{\eta}^{\text{T}} \Sigma\left(\phi\right)^{-1} \boldsymbol{\eta}}{2} + 0.01\right)$; and
\item $f\left\{\ln\left(\phi\right)| \Theta_{-\phi}, \boldsymbol{P}\right\} \propto \left|\Sigma\left(\phi\right)^{-1}\right|^{1/2} \exp\left\{-\frac{1}{2\tau^2} \boldsymbol{\eta}^{\text{T}} \Sigma\left(\phi\right)^{-1} \boldsymbol{\eta}\right\}f\left\{\ln\left(\phi\right)\right\}$; during model fitting, we work with $\ln\left(\phi\right) \in \mathbb{R}$ instead of $\phi$ so that a symmetric proposal density can be used in the Metropolis algorithm (a sketch of this update follows the list).
\end{itemize}
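The $\ln(\phi)$ update from the final item above can be sketched as follows; the exponential correlation function $\Sigma(\phi)_{ij} = \exp(-\phi\, d_{ij})$ and the random-walk step size are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def ln_phi_update(ln_phi, eta, tau2, dists, step=0.10,
                  rng=np.random.default_rng()):
    def log_target(lp):
        phi = np.exp(lp)
        Sigma = np.exp(-phi * dists)          # assumed correlation form
        _, logdet = np.linalg.slogdet(Sigma)
        quad = eta @ np.linalg.solve(Sigma, eta)
        # Gamma(1, 1) prior on phi plus the Jacobian of the log transform.
        return (-0.5 * logdet - quad / (2.0 * tau2)
                + gamma.logpdf(phi, a=1.0, scale=1.0) + lp)
    prop = ln_phi + step * rng.standard_normal()  # symmetric proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(ln_phi):
        return prop
    return ln_phi
\end{verbatim}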
\subsection{Transmission probabilities}
From the transmission probabilities model introduced in Section 3.2 of the main text, we work with the complete vector of observed transmission probabilities, $\boldsymbol{T}^{\text{T}} = \left(T_{12}, \hdots, T_{1n}, T_{21}, T_{23}, \hdots, T_{n,n-1}\right)$, whose length is $n^* = \sum_{i=1}^n \sum_{j=1, i \neq j}^n 1$ (i.e., $T_{ij}$ and $T_{ji}$ included for $i \neq j$). We also introduce a vector of latent probabilities controlling whether the transmission probabilities are $> 0$ (i.e., $\boldsymbol{\pi}$) and a vector of latent continuous variables representing the logit-scaled transmission probabilities (i.e., $\boldsymbol{w}$), both of which are the same length and ordered in the same way as $\boldsymbol{T}$. We note that not every $T_{ij} > 0$, meaning that not every $w_{ij}$ is directly observed. When $T_{ij} = 0$, the corresponding $w_{ij}$ is treated as a missing variable within this framework.
We then use the regression frameworks defined in Section 3.2 of the main text to define these latent parameter vectors such that \begin{align*} &\text{logit} \left(\boldsymbol{\pi}\right) = X\boldsymbol{\delta}_z + Z^{(g)}\boldsymbol{\theta}_z^{(g)} + Z^{(r)}\boldsymbol{\theta}_z^{(r)} + \boldsymbol{\nu}_z^* \text{ and}\\ &\boldsymbol{w} = X\boldsymbol{\delta}_w + Z^{(g)}\boldsymbol{\theta}_w^{(g)} + Z^{(r)}\boldsymbol{\theta}_w^{(r)} + \boldsymbol{\nu}_w^* + \boldsymbol{\epsilon} \end{align*} where $\boldsymbol{\epsilon}|\sigma^2_{\epsilon} \sim \text{MVN}\left(\boldsymbol{0}_{n^*}, \sigma^2_{\epsilon}I_{n^*}\right)$; $X$ is an $n^*$ by $(p_x + 2p_d)$ matrix of covariates with $i^{th}$ row (corresponding to pair $(j,k)$ of data for example) equal to $\left\{\textbf{x}_{jk}^{\text{T}},\ \textbf{d}_k^{\text{T}},\ \textbf{d}_j^{\text{T}}\right\}$; $\boldsymbol{\delta}_z^{\text{T}} = \left(\boldsymbol{\beta}_z^{\text{T}}, \ \boldsymbol{\gamma}_z^{(g)\text{T}},\ \boldsymbol{\gamma}_z^{(r)\text{T}}\right)$ with $\boldsymbol{\delta}_w$ defined similarly; $Z^{(g)}$ is an $n^*$ by $n$ matrix with $i^{th}$ row (corresponding to pair $(j,k)$ of data for example) equal to a vector of all zeros other than entry $k$ which is equal to one; $Z^{(r)}$ is defined similarly to $Z^{(g)}$ where entry $j$ is equal to one instead of entry $k$; $\boldsymbol{\theta}_z^{(g)\text{T}} = \left(\theta_{z1}^{(g)}, \hdots, \theta_{zn}^{(g)}\right)$ with $\boldsymbol{\theta}_z^{(r)}$, $\boldsymbol{\theta}_w^{(g)}$, and $\boldsymbol{\theta}_w^{(r)}$ defined similarly; and $\boldsymbol{\nu}^{*\text{T}}_z = \left(\nu_{z1}\nu_{z2}, \hdots, \nu_{z1}\nu_{zn}, \nu_{z2}\nu_{z1}, \nu_{z2}\nu_{z3}, \hdots, \nu_{zn}\nu_{z,n-1}\right)$ is the vector of interaction terms with $\boldsymbol{\nu}^*_w$ defined similarly.
The prior distributions match those from Section 3.2.1 of the main text with \newline $\boldsymbol{\delta}_z, \boldsymbol{\delta}_w \stackrel{\text{iid}}{\sim} \text{MVN}\left(\boldsymbol{0}_{\left(p_x + 2p_d\right)}, 100^2 I_{\left(p_x + 2p_d\right)}\right)$; $\boldsymbol{\theta}_z^{(g)} | \boldsymbol{\eta}_z^{(g)}, \sigma^2_{\zeta_z^{(g)}} \sim \text{MVN}\left(V \boldsymbol{\eta}_z^{(g)}, \sigma^2_{\zeta_z^{(g)}} I_n\right)$ where $\boldsymbol{\theta}_z^{(r)}$, $\boldsymbol{\theta}_w^{(g)}$, and $\boldsymbol{\theta}_w^{(r)}$ are defined similarly; $\sigma^2_{\epsilon}, \sigma^2_{\zeta_z^{(g)}}, \sigma^2_{\zeta_z^{(r)}}, \sigma^2_{\zeta_w^{(g)}}, \sigma^2_{\zeta_w^{(r)}}, \sigma^2_{\nu_z}, \sigma^2_{\nu_w} \stackrel{\text{iid}}{\sim} \text{Inverse Gamma}\left(0.01, 0.01\right)$; \newline $\left(\boldsymbol{\eta}_z^{(g)\text{T}}, \boldsymbol{\eta}_z^{(r)\text{T}}, \boldsymbol{\eta}_w^{(g)\text{T}}, \boldsymbol{\eta}_w^{(r)\text{T}} \right)^{\text{T}} | \phi, \Omega \sim \text{MVN}\left\{\boldsymbol{0}_{4m}, \Omega \otimes \Sigma\left(\phi\right)\right\}$; $\phi \sim \text{Gamma}\left(1.00, 1.00\right)$; and $\Omega^{-1} \sim \text{Wishart}\left(5, I_4\right)$. For complete details on the definitions of each of these variables/parameters, please see Section 3.2.
Because we are working in a logistic regression framework, we use the P\'olya-Gamma latent variable approach for posterior sampling developed by \cite{polson2013bayesian}. Specifically, for each $T_{ij}$ outcome, we introduce a corresponding P\'olya-Gamma distributed latent variable such that $$\omega^*_{ij}| \boldsymbol{\delta}_z, \theta_{zj}^{(g)}, \theta_{zi}^{(r)} \stackrel{\text{ind}}{\sim} \text{P\'olya-Gamma}\left(1, \textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta}_z + \textbf{d}_j^{\text{T}}\boldsymbol{\gamma}_z^{(g)} + \textbf{d}_i^{\text{T}}\boldsymbol{\gamma}_z^{(r)} + \theta_{zj}^{(g)} + \theta_{zi}^{(r)} + \nu_{zi}\nu_{zj}\right).$$ These latent variables allow for closed form full conditional distributions for several of the parameter sets and we sample from their distribution using the \texttt{pgdraw} package in R \citep{pgdraw}.
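For concreteness, one P\'olya-Gamma-augmented Gibbs sweep for the logistic regression coefficients can be sketched as below; \texttt{draw\_pg} stands in for a $\text{PG}(1, z)$ sampler (the role played by \texttt{pgdraw} in our R implementation), and all other names are illustrative.
\begin{verbatim}
import numpy as np

def pg_gibbs_sweep(X, y01, offset, delta, draw_pg, prior_var=100.0**2,
                   rng=np.random.default_rng()):
    # y01 is the indicator 1(T_ij > 0); offset collects the random effect
    # terms (theta and nu contributions) held fixed during this update.
    psi = X @ delta + offset                     # full linear predictor
    omega = np.array([draw_pg(z) for z in psi])  # omega*_ij | rest
    lam = (y01 - 0.5) / omega                    # working response lambda
    prec = X.T @ (omega[:, None] * X) + np.eye(X.shape[1]) / prior_var
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ (omega * (lam - offset)))
    return rng.multivariate_normal(mean, cov), omega
\end{verbatim}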
Based on these specifications, the full conditional distributions are given as:
\begin{itemize}
\item $\omega^*_{ij} | \boldsymbol{\Theta}_{-\omega^*_{ij}}, \boldsymbol{T} \stackrel{\text{ind}}{\sim} \text{P\'olya-Gamma}\left(1, \textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta}_z + \textbf{d}_j^{\text{T}}\boldsymbol{\gamma}_z^{(g)} + \textbf{d}_i^{\text{T}}\boldsymbol{\gamma}_z^{(r)} + \theta_{zj}^{(g)} + \theta_{zi}^{(r)} + \nu_{zi}\nu_{zj}\right)$;
\item $\boldsymbol{\delta}_z|\Theta_{-\boldsymbol{\delta}_z}, \boldsymbol{T} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\delta}_z}, \Sigma_{\boldsymbol{\delta}_z}\right)$ with $\Sigma_{\boldsymbol{\delta}_z} = \left(X^{\text{T}} \Omega^* X + \frac{I_{\left(p_x + 2p_d\right)}}{100^2}\right)^{-1}$ and \\ $\boldsymbol{\mu}_{\boldsymbol{\delta}_z} = \Sigma_{\boldsymbol{\delta}_z} X^{\text{T}} \Omega^* \left(\boldsymbol{\lambda} - Z^{(g)}\boldsymbol{\theta}_z^{(g)} - Z^{(r)}\boldsymbol{\theta}_z^{(r)} - \boldsymbol{\nu}^*_z\right)$, where $\Omega^*$ is an $n^*$ by $n^*$ diagonal matrix with the diagonal vector equal to $\boldsymbol{\omega}^{*\text{T}} = \left(\omega^*_{12}, \hdots, \omega^*_{n, n-1}\right)$ and \newline $\boldsymbol{\lambda}^{\text{T}} = \left(\frac{1\left(T_{12} > 0\right) - 0.50}{\omega^*_{12}}, \hdots, \frac{1\left(T_{n,n-1} > 0\right) - 0.50}{\omega^*_{n,n-1}}\right)$;
\item $f\left(\nu_{zi} | \Theta_{-\nu_{zi}}, \boldsymbol{T}\right) \propto f\left(\boldsymbol{T}|\boldsymbol{\Theta}\right)f\left(\nu_{zi} | \sigma^2_{\nu_z}\right)$, where $f\left(\boldsymbol{T}|\boldsymbol{\Theta}\right)$ is the joint distribution of the data and we use a Metropolis algorithm to sample from this distribution;
\item $\sigma^2_{\nu_z} | \Theta_{-\sigma^2_{\nu_z}}, \boldsymbol{T} \sim \text{Inverse Gamma}\left(\frac{n}{2} + 0.01, \frac{\boldsymbol{\nu}_z^{\text{T}} \boldsymbol{\nu}_z}{2} + 0.01\right)$, where $\boldsymbol{\nu}_z^{\text{T}} = \left(\nu_{z1}, \hdots, \nu_{zn}\right)$;
\item $\boldsymbol{\theta}_z^{(g)}|\Theta_{-\boldsymbol{\theta}_z^{(g)}}, \boldsymbol{T} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\theta}_z^{(g)}}, \Sigma_{\boldsymbol{\theta}_z^{(g)}}\right)$ with $\Sigma_{\boldsymbol{\theta}_z^{(g)}} = \left(Z^{(g)\text{T}} \Omega^* Z^{(g)} + \frac{I_n}{\sigma^2_{\zeta_z^{(g)}}}\right)^{-1}$ and $\boldsymbol{\mu}_{\boldsymbol{\theta}_z^{(g)}} = \Sigma_{\boldsymbol{\theta}_z^{(g)}} \left\{Z^{(g)\text{T}} \Omega^* \left(\boldsymbol{\lambda} - X\boldsymbol{\delta}_z - Z^{(r)}\boldsymbol{\theta}_z^{(r)} - \boldsymbol{\nu}^*_z\right) + \frac{V \boldsymbol{\eta}_z^{(g)}}{\sigma^2_{\zeta_z^{(g)}}}\right\}$; we implement a sum-to-zero constraint \textit{on the fly} for these parameters \citep{besag1995bayesian, berrocal2012space, warren2021spatial};
\item $f\left(\boldsymbol{\theta}_z^{(r)}|\Theta_{-\boldsymbol{\theta}_z^{(r)}}, \boldsymbol{T}\right)$ has similar form to $f\left(\boldsymbol{\theta}_z^{(g)}|\Theta_{-\boldsymbol{\theta}_z^{(g)}}, \boldsymbol{T}\right)$ with $(g)$ replaced by $(r)$;
\item $\sigma^2_{\zeta_z^{(g)}} | \Theta_{-\sigma^2_{\zeta_z^{(g)}}}, \boldsymbol{T} \sim \text{Inverse Gamma}\left(\frac{n}{2} + 0.01, \frac{\left(\boldsymbol{\theta}_z^{(g)} - V \boldsymbol{\eta}_z^{(g)}\right)^{\text{T}} \left(\boldsymbol{\theta}_z^{(g)} - V \boldsymbol{\eta}_z^{(g)}\right)}{2} + 0.01\right)$;
\item $f\left(\sigma^2_{\zeta_z^{(r)}} | \Theta_{-\sigma^2_{\zeta_z^{(r)}}}, \boldsymbol{T}\right)$, $f\left(\sigma^2_{\zeta_w^{(g)}} | \Theta_{-\sigma^2_{\zeta_w^{(g)}}}, \boldsymbol{T}\right)$, and $f\left(\sigma^2_{\zeta_w^{(r)}} | \Theta_{-\sigma^2_{\zeta_w^{(r)}}}, \boldsymbol{T}\right)$ each have the same form as $f\left(\sigma^2_{\zeta_z^{(g)}} | \Theta_{-\sigma^2_{\zeta_z^{(g)}}}, \boldsymbol{T}\right)$ with $(g)$ replaced by $(r)$ and $z$ replaced by $w$ when appropriate;
\item $\boldsymbol{\eta}_z^{(g)}|\Theta_{-\boldsymbol{\eta}_z^{(g)}}, \boldsymbol{T} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\eta}_z^{(g)}}, \Sigma_{\boldsymbol{\eta}_z^{(g)}}\right)$ with $\Sigma_{\boldsymbol{\eta}_z^{(g)}} = \left\{\frac{V^{\text{T}}V}{\sigma^2_{\zeta_z^{(g)}}} + \left(\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)^{-1}\right\}^{-1}$ and $\boldsymbol{\mu}_{\boldsymbol{\eta}_z^{(g)}} = \Sigma_{\boldsymbol{\eta}_z^{(g)}} \left\{\frac{V^{\text{T}}\boldsymbol{\theta}_z^{(g)}}{\sigma^2_{\zeta_z^{(g)}}} + \left(\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)^{-1} \Sigma_{12}\Sigma_{22}^{-1} \begin{bmatrix}
\boldsymbol{\eta}_z^{(r)} \\
\boldsymbol{\eta}_w^{(g)} \\
\boldsymbol{\eta}_w^{(r)}
\end{bmatrix}\right\}$, where $\Sigma_{11} = \Omega_{11} \Sigma\left(\phi\right)$, $\Sigma_{12} = \left\{\Omega_{12}\Sigma\left(\phi\right), \Omega_{13}\Sigma\left(\phi\right), \Omega_{14}\Sigma\left(\phi\right)\right\}$, $\Sigma_{21} = \Sigma_{12}^{\text{T}}$, and $\Sigma_{22} = \Omega_{2:4,2:4} \otimes \Sigma\left(\phi\right)$;
\item $f\left(\boldsymbol{\eta}_z^{(r)}|\Theta_{-\boldsymbol{\eta}_z^{(r)}}, \boldsymbol{T}\right)$, $f\left(\boldsymbol{\eta}_w^{(g)}|\Theta_{-\boldsymbol{\eta}_w^{(g)}}, \boldsymbol{T}\right)$, and $f\left(\boldsymbol{\eta}_w^{(r)}|\Theta_{-\boldsymbol{\eta}_w^{(r)}}, \boldsymbol{T}\right)$ each have a similar form to $f\left(\boldsymbol{\eta}_z^{(g)}|\Theta_{-\boldsymbol{\eta}_z^{(g)}}, \boldsymbol{T}\right)$ with the appropriate $(g)$, $(r)$, $z$, $w$ changes, along with the corresponding updates to the subscripts on the entries of $\Omega$ and to the ordering of the vectors;
\item $w_{ij} | \Theta_{-w_{ij}}, \boldsymbol{T}_{\left\{T_{ij} > 0\right\}} \equiv \text{logit}\left(T_{ij}\right)$;
\item $w_{ij} | \Theta_{-w_{ij}}, \boldsymbol{T}_{\left\{T_{ij} = 0\right\}} \stackrel{\text{ind}}{\sim} \text{N}\left(\textbf{x}_{ij}^{\text{T}}\boldsymbol{\beta}_w + \textbf{d}_j^{\text{T}}\boldsymbol{\gamma}_w^{(g)} + \textbf{d}_i^{\text{T}}\boldsymbol{\gamma}_w^{(r)} + \theta_{wj}^{(g)} + \theta_{wi}^{(r)} + \nu_{wi}\nu_{wj}, \sigma^2_{\epsilon}\right)$;
\item $\boldsymbol{\delta}_w|\Theta_{-\boldsymbol{\delta}_w}, \boldsymbol{T} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\delta}_w}, \Sigma_{\boldsymbol{\delta}_w}\right)$ with $\Sigma_{\boldsymbol{\delta}_w} = \left(\frac{X^{\text{T}}X}{\sigma^2_{\epsilon}} + \frac{I_{\left(p_x + 2p_d\right)}}{100^2}\right)^{-1}$ and \\ $\boldsymbol{\mu}_{\boldsymbol{\delta}_w} = \frac{\Sigma_{\boldsymbol{\delta}_w} X^{\text{T}} \left(\boldsymbol{w} - Z^{(g)}\boldsymbol{\theta}_w^{(g)} - Z^{(r)}\boldsymbol{\theta}_w^{(r)} - \boldsymbol{\nu}^*_w\right)}{\sigma^2_{\epsilon}}$;
\item $\sigma^2_{\epsilon} | \Theta_{-\sigma^2_{\epsilon}}, \boldsymbol{T} \sim \newline \text{Inverse Gamma}\left(\frac{n^*}{2} + 0.01, \frac{\left(\boldsymbol{w} - X\boldsymbol{\delta}_w - Z^{(g)}\boldsymbol{\theta}_w^{(g)} - Z^{(r)}\boldsymbol{\theta}_w^{(r)} - \boldsymbol{\nu}^*_w\right)^{\text{T}} \left(\boldsymbol{w} - X\boldsymbol{\delta}_w - Z^{(g)}\boldsymbol{\theta}_w^{(g)} - Z^{(r)}\boldsymbol{\theta}_w^{(r)} - \boldsymbol{\nu}^*_w\right)}{2} + 0.01\right)$;
\item $f\left(\nu_{wi} | \Theta_{-\nu_{wi}}, \boldsymbol{T}\right) \propto f\left(\boldsymbol{T}|\boldsymbol{\Theta}\right)f\left(\nu_{wi} | \sigma^2_{\nu_w}\right)$, where $f\left(\boldsymbol{T}|\boldsymbol{\Theta}\right)$ is the joint distribution of the data and we use a Metropolis algorithm to sample from this distribution;
\item $\sigma^2_{\nu_w} | \Theta_{-\sigma^2_{\nu_w}}, \boldsymbol{T} \sim \text{Inverse Gamma}\left(\frac{n}{2} + 0.01, \frac{\boldsymbol{\nu}_w^{\text{T}} \boldsymbol{\nu}_w}{2} + 0.01\right)$, where $\boldsymbol{\nu}_w^{\text{T}} = \left(\nu_{w1}, \hdots, \nu_{wn}\right)$;
\item $\boldsymbol{\theta}_w^{(g)}|\Theta_{-\boldsymbol{\theta}_w^{(g)}}, \boldsymbol{T} \sim \text{MVN}\left(\boldsymbol{\mu}_{\boldsymbol{\theta}_w^{(g)}}, \Sigma_{\boldsymbol{\theta}_w^{(g)}}\right)$ with $\Sigma_{\boldsymbol{\theta}_w^{(g)}} = \left(\frac{Z^{(g)\text{T}}Z^{(g)}}{\sigma^2_{\epsilon}} + \frac{I_n}{\sigma^2_{\zeta_w^{(g)}}}\right)^{-1}$ and \newline $\boldsymbol{\mu}_{\boldsymbol{\theta}_w^{(g)}} = \Sigma_{\boldsymbol{\theta}_w^{(g)}} \left\{ \frac{Z^{(g)\text{T}}\left(\boldsymbol{w} - X\boldsymbol{\delta}_w - Z^{(r)}\boldsymbol{\theta}_w^{(r)} - \boldsymbol{\nu}^*_w\right)}{\sigma^2_{\epsilon}} + \frac{V\boldsymbol{\eta}_w^{(g)}}{\sigma^2_{\zeta_w^{(g)}}}\right\}$; we implement a sum-to-zero constraint \textit{on the fly} for these parameters;
\item $f\left(\boldsymbol{\theta}_w^{(r)}|\Theta_{-\boldsymbol{\theta}_w^{(r)}}, \boldsymbol{T}\right)$ has the same form as $f\left(\boldsymbol{\theta}_w^{(g)}|\Theta_{-\boldsymbol{\theta}_w^{(g)}}, \boldsymbol{T}\right)$ with $(g)$ replaced by $(r)$;
\item $\Omega^{-1}|\Theta_{-\Omega^{-1}}, \boldsymbol{T} \sim \text{Wishart}\left\{m + 5, \left(\sum_{j=1}^m \sum_{i=1}^m \boldsymbol{\eta}\left(\textbf{s}^*_j\right) \boldsymbol{\eta}\left(\textbf{s}^*_i\right)^{\text{T}} \Sigma\left(\phi\right)^{-1}_{ij} + I_4\right)^{-1}\right\}$, following the same degrees of freedom/scale ordering as the prior; and
\item $f\left\{\ln\left(\phi\right)| \Theta_{-\phi}, \boldsymbol{T}\right\} \propto \left|\Sigma\left(\phi\right)^{-1}\right|^{2} \exp\left[-\frac{1}{2} \boldsymbol{\eta}^{\text{T}} \left\{\Omega^{-1} \otimes \Sigma\left(\phi\right)^{-1}\right\} \boldsymbol{\eta}\right]f\left\{\ln\left(\phi\right)\right\}$, where $\boldsymbol{\eta}^{\text{T}} = \left(\boldsymbol{\eta}_z^{(g)\text{T}}, \boldsymbol{\eta}_z^{(r)\text{T}}, \boldsymbol{\eta}_w^{(g)\text{T}}, \boldsymbol{\eta}_w^{(r)\text{T}}\right)$; as in Section S3.1, we work with $\ln\left(\phi\right) \in \mathbb{R}$ instead of $\phi$ so that a symmetric proposal density can be used in the Metropolis algorithm.
\end{itemize}
\clearpage
\section{Random effect parameter analyses}
\subsection{Summarizing the estimates}
We make inference on the random effect parameters and hyperparameters from both the patristic distance and transmission probability models and summarize them with respect to the raw genetic distance outcomes. Results are shown in Table S2. For the patristic distance results, we observe that 57 of the 99 $\theta_i$ parameters have CIs that exclude zero; 29 are positive and 28 are negative. Recall that individuals with positive $\theta_i$ are more likely to be in pairs with larger patristic distances (i.e., less likely transmission) while those with negative $\theta_i$ are more likely to be in pairs with small patristic distances (i.e., more likely to form a transmission pair). To test this, we calculate the average patristic distance for all outcome pairs that involve the individuals with significantly positive $\theta_i$, and repeat the calculation for those with significantly negative $\theta_i$. For reference, the overall average patristic distance (multiplied by 10,000 for presentation purposes) is 3.12, while the averages for the positive and negative random effect parameter pairs are 3.96 and 2.47, respectively. This shows the intuitive connection between the raw data and the estimated values of these parameters, and the ability of the framework to identify key drivers of transmission in the population. Around 23\% (5\%, 68\%) of the total variability in the $\theta_i$ parameters is spatially structured, based on the posterior distribution of $\tau^2/\left(\tau^2 + \sigma^2_{\zeta}\right)$.
For the transmission probability outcome results, as expected, individuals identified with significant, positive random effect parameters (both giver and receiver types) are more likely to have a larger, non-zero transmission probability. Also, the proportion of total variability in the random effects due to the multivariate spatial process is much higher, with point estimates ranging from 74\% to 98\%.
\subsection{Spatial prediction}
In Figures S2 and S3, we display posterior means and standard deviations, respectively, for the random effect parameters predicted across the Republic of Moldova. To produce these maps, we create an equally spaced grid over the domain containing 500 new locations (i.e., where data were not previously observed) and then collect samples from the posterior predictive distribution of the random effect parameters at each location separately (e.g., samples from $f\left\{\theta_{0}|\ln\left(P_{12}\right),\hdots,\ln\left(P_{n-1,n}\right)\right\}$ for the patristic distances model) using techniques from \cite{banerjee2014hierarchical}. These samples are then summarized using posterior means and standard deviations and plotted across the domain using inverse distance weighting to fill in the remaining gaps (for visualization purposes only). Areas in red from the posterior mean map suggest that patristic distances/transmission probabilities involving individuals residing in these locations tend to be smaller/larger on average than areas in blue, possibly indicating geographic areas of increased residual transmission activity. For patristic distances, however, the variability in these values overall is not large in this setting, suggesting that the risk does not differ substantially across the map.
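For reference, the inverse distance weighting step used purely for visualization can be sketched as follows (grid construction and the power parameter are illustrative choices).
\begin{verbatim}
import numpy as np

def idw(grid_xy, obs_xy, obs_val, power=2.0, eps=1e-12):
    # Weight each of the observed (posterior-summarized) locations by
    # inverse distance to fill in the gaps between grid points.
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ obs_val) / w.sum(axis=1)
\end{verbatim}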
\clearpage
\section{Additional figures and tables}
\begin{figure}[h!]
\begin{center}
\includegraphics[trim={0.0cm 0.0cm 3.8cm 0.0cm}, clip, scale = 0.29]{Figures/TP_p_ijji.pdf}
\includegraphics[trim={0.0cm 0.0cm 3.8cm 0.0cm}, clip, scale = 0.29]{Figures/TP_p_ijik.pdf}
\includegraphics[trim={0.0cm 0.0cm 3.8cm 0.0cm}, clip, scale = 0.29]{Figures/TP_p_ijki.pdf}
\includegraphics[scale = 0.29]{Figures/TP_p_ijkl.pdf}
\caption{Simulation-based correlation estimates between the $\pi_{ij}$ parameters from the transmission probabilities model for different pairs of individuals.}
\end{center}
\end{figure}
\clearpage
\begin{figure}[ht]
\centering
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_patristic_mean.pdf}\\
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_binary_giver_mean.pdf}
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_binary_receiver_mean.pdf}\\
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_cont_giver_mean.pdf}
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_cont_receiver_mean.pdf}
\caption{Predicted random effect parameter results (posterior means) from the Republic of Moldova data analyses. Patristic distance results (top row); transmission probability results (rows two and three). $z$: binary component; $w$: continuous component; $g$: giver role; $r$: receiver role.}
\end{figure}
\clearpage
\begin{figure}[ht]
\centering
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_patristic_sd.pdf}\\
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_binary_giver_sd.pdf}
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_binary_receiver_sd.pdf}\\
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_cont_giver_sd.pdf}
\includegraphics[trim={0.0cm 0.0cm 0.0cm 1.0cm}, clip, scale = 0.30]{Figures/theta_tp_cont_receiver_sd.pdf}
\caption{Predicted random effect parameter results (posterior standard deviations) from the Republic of Moldova data analyses. Patristic distance results (top row); transmission probability results (rows two and three). $z$: binary component; $w$: continuous component; $g$: giver role; $r$: receiver role.}
\end{figure}
\clearpage
\begin{landscape}
\begin{table}[ht!]
\centering
\caption{Simulation study true parameter values obtained from the Republic of Moldova data analyses (posterior means). PD: patristic distances; TP: transmission probabilities; $z$: binary component; $w$: continuous component; $g$: giver role; $r$: receiver role.}
\begin{tabular}{lrrrrr}
\hline
Parameter & PD & TP $(z, g)$ & TP $(z, r)$ & TP $(w, g)$ & TP $(w, r)$ \\
\hline
Intercept & -8.1265 & \multicolumn{2}{c}{-5.1612} & \multicolumn{2}{c}{-7.6490} \\
Distance Between Villages & 0.0026 & \multicolumn{2}{c}{-0.0153} & \multicolumn{2}{c}{-0.0785} \\
Same Village & -0.5921 & \multicolumn{2}{c}{ 1.6007} & \multicolumn{2}{c}{ 2.3335} \\
Date of Diagnosis Distance & 0.0039 & \multicolumn{2}{c}{-0.2310} & \multicolumn{2}{c}{-0.2468} \\
Age Difference & 0.0003 & \multicolumn{2}{c}{ 0.0237} & \multicolumn{2}{c}{-0.0697} \\
Age & -0.0088 & 0.2840 & 0.2875 & -0.0707 & -0.0058 \\
Sex (Male vs.\ Female): & & -0.2443 & -0.4785 & 0.1283 & 0.1726 \\
\ \ \ Mixed Pair vs.\ Both Female & -0.0171 & & & & \\
\ \ \ Both Male vs.\ Both Female & -0.0215 & & & & \\
Residence Location (Urban vs.\ Rural): & & -1.1754 & -1.5636 & 0.1929 & 0.2231 \\
\ \ \ Mixed Pair vs.\ Both Rural & -0.0224 & & & & \\
\ \ \ Both Urban vs.\ Both Rural & -0.0348 & & & & \\
Working Status (Employed vs.\ Unemployed): & & 1.3725 & 1.5920 & 0.0264 & 0.1653 \\
\ \ \ Mixed Pair vs.\ Both Unemployed & -0.0216 & & & & \\
\ \ \ Both Employed vs.\ Both Unemployed & -0.0170 & & & & \\
Education ($<$ Secondary vs.\ $\geq$ Secondary): & & 1.0456 & 1.2273 & 0.0684 & 0.1255 \\
\ \ \ Mixed Pair vs.\ Both $\geq$ Secondary & 0.0503 & & & & \\
\ \ \ Both $<$ Secondary vs.\ Both $\geq$ Secondary & 0.1058 & & & & \\
\hline
$\sigma^2_{\epsilon}$ & 0.0548 & & & \multicolumn{2}{c}{2.3799} \\
$\sigma^2_{\nu_z}$ & & \multicolumn{2}{c}{7.4912} & & \\
$\sigma^2_{\nu_w}$ & & & & \multicolumn{2}{c}{1.5168} \\
$\tau^2 + \sigma^2_{\zeta}$ & 0.0784 & & & & \\
$\Omega_{11} + \sigma^2_{\zeta^{(g)}_{z}}$ & & 10.6231 & & & \\
$\Omega_{22} + \sigma^2_{\zeta^{(r)}_{z}}$ & & & 14.9655 & & \\
$\Omega_{33} + \sigma^2_{\zeta^{(g)}_{w}}$ & & & & 0.2945 & \\
$\Omega_{44} + \sigma^2_{\zeta^{(r)}_{w}}$ & & & & & 0.2958 \\
$\Omega_{12}$ & & \multicolumn{4}{c}{ 12.3052} \\
$\Omega_{13}$ & & \multicolumn{4}{c}{-0.7456} \\
$\Omega_{14}$ & & \multicolumn{4}{c}{-0.1850} \\
$\Omega_{23}$ & & \multicolumn{4}{c}{-0.8779} \\
$\Omega_{24}$ & & \multicolumn{4}{c}{-0.2136} \\
$\Omega_{34}$ & & \multicolumn{4}{c}{ 0.1324} \\
\hline
\end{tabular}
\end{table}
\end{landscape}
\clearpage
\begin{table}[ht!]
\centering
\begin{threeparttable}[b]
\caption{Random effect parameter results from all analyses in the Republic of Moldova. The table presents the number of parameters with 95\% quantile-based equal-tailed credible intervals (CIs) that exclude zero (positive and negative), summaries of the outcome variables (i.e., patristic distances and transmission probabilities (TP)) overall and for each significance group, and the proportion of total variability in the parameters that is spatially structured.}
\begin{tabular}{lrrrrrrrr}
\hline
& \multicolumn{2}{c}{\# of Significant $\theta_i$} & & \multicolumn{3}{c}{Outcome Summary\tnote{1}} & & \\
\cline{2-3} \cline{5-7}
Model & $> 0$ & $< 0$ & & Overall & $\theta_i > 0$ & $\theta_i < 0$ & & Spatial Variability\tnote{2} \\
\hline
Patristic & 29 & 28 & & 3.12 & 3.96 & 2.47 & & 0.23 (0.05, 0.68) \\
TP: Binary, Giver & 45 & 26 & & 71.72 & 70.74 & 73.41 & & 0.98 (0.96, 1.00) \\
TP: Binary, Receiver & 44 & 29 & & 71.72 & 70.70 & 73.57 & & 0.98 (0.96, 0.99) \\
TP: Continuous, Giver & 9 & 7 & & 0.86 & 2.29 & 0.42 & & 0.74 (0.50, 0.93) \\
TP: Continuous, Receiver & 8 & 6 & & 0.86 & 0.75 & 0.65 & & 0.76 (0.49, 0.95) \\
\hline
\end{tabular}
\begin{tablenotes}
\item[1] Patristic: Average patristic distance multiplied by 10,000; TP: Binary: Percentage of transmission probabilities equal to zero; TP: Continuous: Average of the positive transmission probabilities multiplied by 100.
\item[2] Patristic: Posterior mean and 95\% CI of $\tau^2/(\tau^2 + \sigma^2_{\zeta})$; TP: Binary, Giver: Posterior mean and 95\% CI for $\Omega_{11}/(\Omega_{11} + \sigma^2_{\zeta_z^{(g)}})$, where $\Omega_{11}$ is the $(1,1)$ entry of $\Omega$; similar definitions apply for TP: Binary, Receiver; TP: Continuous, Giver; and TP: Continuous, Receiver.
\end{tablenotes}
\end{threeparttable}
\end{table}
\clearpage
\bibliographystyle{chicago}
\section{Introduction}\label{intro}
3D pose estimation is a fundamental technology that has become very important in recent years for computer vision-based tasks, primarily due to advances in several practical applications such as virtual reality (VR) \citep{alam2022unified}, augmented reality (AR) \citep{makar2014interframe}, sign language recognition \citep{avola2018exploiting} and, more generally, gesture recognition \citep{guo2021normalized}. In these fields, much of the current effort is directed towards the pose estimation of hands since, due to the high number of joints, they are one of the most complex components of the human body \citep{rehg1994visual}. To address this complexity, researchers generally follow either a model-driven or data-driven strategy. The former uses articulated hand models to describe bones, muscles, and tendons, for example, through kinematic constraints \citep{de2010variational,de2011model}, whereas the latter directly exploits depth, RGB-D, or RGB images \citep{zhao2017simple,dibra2018monocular,mueller2018ganerated} to extract keypoints that represent a hand. Although both strategies have their merits, data-driven approaches have allowed various systems to achieve significant performance while remaining more straightforward to implement, and are therefore usually the preferred choice of the two.
Although early data-driven works were based on machine learning or computer vision algorithms such as random forest \citep{keskin2012hand} and geodesic distance-based systems \citep{tang2014latent}, attention has recently shifted towards deep learning methods \citep{li2019survey}. This is due to the high performance obtained in a heterogeneous range of fields such as emotion recognition \citep{sheng2021multi,avola2020deep}, medical image analysis \citep{yan2021development,avola2021multimodal}, and person re-identification \citep{prasad2021spatio,wu2022learning}, as well as the availability of commodity hardware for capture systems \citep{yuan2017bighand2} that can provide different types of input (e.g., depth maps).
For the task of 3D hand pose estimation, methods based on deep learning architecture configurations such as multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and autoencoders have been proposed. These methods usually analyze hand keypoints via 2D heatmaps, which represent hand skeletons and are extrapolated from depth \citep{zhao2017simple}, RGB-D \citep{dibra2018monocular}, or RGB \citep{iqbal2018hand} input images. While the first two of these input options provide useful information for estimating the 3D pose through the depth component, near state-of-the-art results have been obtained by exploiting single RGB images \citep{cai2018weakly}.
The reasons for this are twofold. Firstly, even though commodity sensors are available, it is hard to acquire and correctly label a depth dataset due to the intrinsic complexity of the hand skeleton, which has resulted in a lack of such datasets. Secondly, RGB images can easily be exploited in conjunction with data augmentation strategies, thus allowing a network to be trained more easily \citep{tanner1987calculation}.
To further improve the estimation of 3D hand pose from 2D images, several recent works have exploited more complex architectures (e.g., residual network) and the multi-task learning paradigm by generating 3D hand shapes together with their estimated pose.
By leveraging these approaches together with hand models, such as the model with articulated and non-rigid deformations (MANO) \citep{romero2017embodied} and graph CNNs \citep{ge20193d}, various systems have achieved state-of-the-art performance. In particular, they can produce correct hand shapes and obtain more accurate pose estimations from an analysis of 2D hand heatmaps and depth maps generated from a single RGB input image \citep{zhang2019end}.
Inspired by the results reported in other works, we propose a keypoint-based end-to-end framework that can give state-of-the-art performance for both 3D hand pose and shape estimation; we also show that this system can be successfully applied to the task of hand gesture recognition and can outperform other keypoint-based works.
In more detail, our framework first applies a pre-processing phase to normalize RGB images containing hands.
A semantic feature extractor (SFE) with a multi-task stacked hourglass network is employed for the first time in the literature to simultaneously generate 2D heatmaps and hand silhouettes starting from an RGB image.
A novel viewpoint encoder (VE) is used to reduce the number of parameters required to encode the feature space representing the camera view during the computation of the viewpoint vector.
A stable hand pose/shape estimator (HE) based on a fine-tuned MANO layer is employed in conjunction with an improved version of a neural 3D mesh renderer \citep{kato2018neural}. The renderer is extended with a custom weak perspective projection that re-projects the generated 3D joint positions and meshes into 2D. Finally, a multi-task loss function is used to train the various framework components to carry out the 3D hand pose and shape estimation.
The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We present a comprehensive end-to-end framework based on keypoints that combines and improves upon several different technologies to generate 3D hand pose and shape estimations;
\item We propose a multi-task SFE, design an optimized VE, and introduce a re-projection procedure for more stable outputs;
\item We evaluate the generalization capabilities of our model on the task of hand gesture recognition and show that it outperforms other relevant keypoint-based approaches developed for 3D hand estimation.
\end{itemize}
The rest of this paper is organized as follows. Section~\ref{sec:related} introduces relevant work that inspired this study. Section~\ref{sec:method} presents an exhaustive description of the components of our framework. Section~\ref{sec:results} describes the experiments performed to validate the proposed approach and presents a comparison with other state-of-the-art methods for hand pose, shape estimation, and hand-gesture recognition tasks. Finally, Section~\ref{sec:conclusions} draws some conclusions from this study.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/section3/flow.pdf}
\caption{Proposed framework flowchart.}
\label{fig:flowchart}
\end{figure}
\section{Related Work}\label{sec:related}
Methods of 3D pose and shape estimation generally exploit depth, RGB-D, or RGB images. The last of these is usually the preferred solution due to the availability of datasets; however, approaches that leverage depth information have provided solutions and ideas that can also be applied to standalone RGB images.
For instance, the study in \cite{sun2015cascaded} introduces a hierarchical regression algorithm that starts from depth maps, describes hand keypoints based on their geometric properties, and divides the hand into meaningful components such as the fingers and palm to obtain 3D poses.
In contrast, the study in \cite{malik2020handvoxnet} uses depth maps to build both 3D hand shapes and surfaces by defining 3D voxelized depth maps that can mitigate possible depth artifacts.
The representation of meaningful hand components and reductions in input noise are also relevant problems for RGB and RGB-D images.
For example, when considering RGB-D inputs for the task of 3D hand pose estimation, \cite{oikonomidis2011efficient} and \cite{qian2014realtime} use the depth component to define hand characteristics through geometric primitives that are later matched with the RGB information to generate 3D poses, and hence to track the hands.
Specifically, spheres, cones, cylinders, and ellipsoids are used to describe the palm and fingers in \cite{oikonomidis2011efficient}, while the approach in \cite{qian2014realtime} employs only sphere primitives for faster computation.
Using a different technique, \cite{dibra2018monocular} focuses on handling input noise by using synthetic RGB-D images to train a CNN. In particular, the use of artificial RGB/depth image pairs is shown by the authors to alleviate the effects of missing or unlabeled depth datasets.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/section3/archi.pdf}
\caption{Proposed framework architecture overview.}
\label{fig:archi}
\end{figure*}
Approaches that exploit depth information are inherently more suitable for the 3D pose estimation task since they suffer less from image ambiguities compared to systems based exclusively on RGB images.
In view of this, \cite{iqbal2018hand} introduces a 2.5D representation by building a latent depth space via an autoencoder that retains depth information without directly using this type of data. The authors further refine this latent space through an element-wise multiplication with 2D heatmaps to increase the depth consistency and obtain realistic 3D hand poses.
Another work that focuses on depth retention without using the extra information at test time is presented in \cite{cai20203d}. This scheme first employs a conditional variational autoencoder (VAE) to build the latent distribution of joints via the 2D heatmaps extracted from the input RGB image, and then exploits a weak-supervision approach through a depth regularizer that forces the autoencoder to consider automatically generated depth information in its latent space at training time.
A similar weak-supervision rationale is also applied in \cite{zhang2019end} where, in addition to depth information, the hand shape consistency is evaluated through a neural renderer.
More specifically, by exploiting the outputs of the MANO layer (i.e., the 3D hand pose and mesh), the authors project the 3D joint coordinates defining the hand pose into a 2D space to account for depth information, and implement a neural renderer to generate silhouettes from hand shapes to increase the consistency of the results. In the present work, we refine this procedure further via a weak re-projection that is applied to both 3D joint locations and mesh so that the proposed framework can also be applied to a different task.
Accounting for depth information without directly using such data enables higher performance when only RGB images are analyzed.
However, this image format introduces several challenges that must be addressed, such as different camera view parameters, background clutter, occlusions, and hand segmentation.
In general, to handle these problems, RGB-based methods define a pipeline that includes feature extraction from the input image (usually in the form of 2D heatmaps), a latent space representation of such features to allow for the extrapolation of meaningful view parameters, and the 3D hand pose estimation based on the computed view parameters.
For instance, the study in \cite{zimmermann2017learning} implements a CNN called HandSegNet to identify hand silhouettes so that the input images can be cropped and resized around the hand. A second CNN (PoseNet) is then used to extract the features, i.e., 2D heatmaps, allowing the network to estimate the 3D pose via symmetric streams and to analyze the prior pose and the latent pose representation derived by the network.
The authors of \cite{baek2020weakly} instead devise a domain adaptation strategy in which a generative adversarial network (GAN), driven by the 2D heatmaps extracted from the input by a convolutional pose machine, automatically outputs hand-only images from hand-object images. The resulting hand-only images are then used to estimate the correct 3D pose, even in the case of occlusions (e.g., from the object being held). Object shapes are also exploited in \cite{hasson2019learning} to handle occlusions that arise during the task of 3D hand pose estimation. The authors describe the use of two parallel encoders to obtain latent representations of both hand and object, which are in turn employed to define meaningful hand-object constellations via a custom contact loss, so that consistent 3D hand poses can be generated. In contrast, the authors of \cite{yang2019disentangling} directly address the issues of background clutter and different camera view parameters by designing a disentangling VAE (dVAE) to decouple hand, background, and camera view, using a latent variable model, so that a MANO layer can receive the correct input to generate the 3D hand pose.
Furthermore, through the use of the dVAE, the authors are able to synthesize realistic hand images in a given 3D pose, which may also alleviate the issue of low numbers of available datasets.
The use of the pipeline described above allows models to achieve state-of-the-art performance on the 3D hand pose estimation task. Nevertheless, depending on the 3D joint position generation procedure, a good latent space representation is necessary to ensure an effective system. For example, the scheme in \cite{ge20193d} utilizes a stacked hourglass to retrieve 2D heatmaps from the input RGB image. A residual network is then implemented to generate a meaningful latent space, and a graph CNN is built to define both the 3D pose and the shape of the input hand. To further improve the results, the authors pre-train all networks on a synthetic dataset before fine-tuning them on the estimation task.
Unlike the model introduced in \cite{ge20193d}, which is used as a starting point, the framework presented here extends the use of a stacked hourglass and residual network by defining a multi-task SFE and VE, respectively. Moreover, only the multi-task feature extractor is pre-trained on synthetic data so that the VE is able to obtain a hand abstraction that can be utilized in different tasks such as hand gesture recognition.
Finally, the latent space representation is also relevant when using other procedures for 3D pose estimation, such as the MANO layer, as discussed in \cite{boukhayma20193d} and \cite{baek2019pushing}. In more detail, the former of these schemes extracts 2D heatmaps via a CNN, then employs an encoder to generate the MANO layer view parameters directly, while in the latter approach, a latent space is built by an evidence estimator module that leverages a convolutional pose machine, which generates the required parameters for the MANO layer. Both methods implement a re-projection procedure for the 3D joints, and this was further extended in \cite{baek2019pushing} via iterative re-projection to improve the final 3D hand pose and hence the shape estimations. In the framework presented here, we exploit this interesting strategy in a different way from the two approaches described above by also applying the re-projection procedure to the mesh generated by the MANO layer so that the estimation can benefit from both outputs rather than only the 3D locations.
\section{Method}\label{sec:method}
The proposed framework for 3D hand pose and shape estimation starts from a pre-processed hand image input, and first generates 2D heatmaps and hand silhouettes through the use of a multi-task SFE. Secondly, it estimates the camera, hand pose, and hand shape view parameters by exploiting the semantic features using a VE. Finally, it computes the hand 2D and 3D joints, mesh and silhouette by feeding the estimated view parameters to the HE, which consists of a MANO layer, weak perspective projection, and neural renderer components. We note that a single compound loss function is employed to drive the learning phase of the various modules jointly.
A flowchart for the framework is shown in Fig.~\ref{fig:flowchart} and the high-level pipeline is illustrated in Fig.~\ref{fig:archi}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/section3/sfe.pdf}
\caption{Modified stacked hourglass network used as a multi-task semantic feature extractor.}
\label{fig:sfe}
\end{figure*}
\subsection{Pre-processing}
Pre-processing of the input image is a necessary step to handle different image sizes and to reduce the amount of background clutter in the samples of a given dataset. More specifically, to allow the proposed framework to focus on hands, each image is modified so that the hand is always centered and there is as little background as possible, while still retaining all 21 hand joint keypoints. The hand is centered by selecting the metacarpophalangeal joint (i.e., the base knuckle) of the middle finger as a center crop point $p_c$. The crop size $l$ is then computed as follows:
\begin{equation}
l = 2\max\left(p_{max}-p_c,\, p_c-p_{min}\right),
\end{equation}
where $p_{max}$ and $p_{min}$ are the joint keypoint coordinates with the largest and smallest $(x, y)$ distances with respect to $p_c$. The crop size $l$ is then enlarged by a further $20\%$ (i.e., $\sim$20 px of padding in all directions) to ensure that all hand joints are fully visible inside the cropped area.
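A minimal sketch of this cropping step is given below; it assumes a $(21, 2)$ array of $(x, y)$ pixel keypoints and that index 9 corresponds to the middle-finger base knuckle (the joint ordering is an assumption), with image-boundary clipping omitted for brevity.
\begin{verbatim}
import numpy as np

def crop_hand(image, keypoints):
    p_c = keypoints[9]                      # assumed middle-finger MCP joint
    l = 2 * max((keypoints.max(axis=0) - p_c).max(),
                (p_c - keypoints.min(axis=0)).max())
    l = int(round(1.2 * l))                 # ~20% padding in all directions
    x0, y0 = (p_c - l / 2).astype(int)
    return image[y0:y0 + l, x0:x0 + l]
\end{verbatim}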
\subsection{Semantic Feature Extractor}
Inspired by the results obtained in \cite{ge20193d}, we implemented a modified version of a stacked hourglass network \citep{newell2016stacked} to take advantage of the multi-task learning approach. In particular, 2D heatmaps and hand silhouette estimates are generated based on a $256\times256\times3$ normalized (i.e., with zero-mean and unit variance) image $I_{normalized}$. The hourglass architecture was selected as it can capture many features, such as hand orientation, articulation structure, and joint relationships, by analyzing the input image at different scales. Four convolutional layers are employed in the proposed architecture to reduce the input image to a size of $64\times64$ via two max pooling operations in the first and third layers. The downsized images are then fed to the hourglass module and intermediate heatmaps and silhouettes are generated by processing local and global contexts in a multi-task learning scenario.
These two outputs, of size $64\times64\times21$ (i.e., one channel per hand joint) and $64\times64\times2$ (i.e., back and foreground channels) for 2D heatmaps and silhouette, respectively, are then mapped to a larger number of channels via a $1\times1$ convolution to reintegrate the intermediate feature predictions into the feature space.
These representations are then summed with the hourglass input into a single vector $\hat{f}$, thus effectively introducing long skip connections to reduce data loss for the second hourglass module. Finally, this second module is employed to extract the semantic feature vector $f$ that contains the effective 2D heatmaps and hand silhouette used by the VE to regress camera view, hand pose, and hand shape parameters. Note that unlike $\hat{f}$, the vector $f$ is computed via concatenation of 2D heatmaps, hand silhouette, and $\hat{f}$, which provides the VE with a comprehensive representation of the input.
In the hourglass components, each layer employs two residual modules \citep{he2016deep} for both downsampling and upsampling sequences. In the former, the features are processed down to a very low resolution (i.e., $4\times4$), whereas in the latter, images are upsampled and combined with the extracted features across all scales.
In more detail, the information on two adjacent resolutions is merged by first increasing the lower resolution through nearest neighbor interpolation and then performing an element-wise addition. After the upsampling sequence, three consecutive $1\times1$ convolutions are applied to obtain the multi-task output (i.e., 2D heatmaps and silhouette) used by the VE. The presented SFE architecture with the corresponding layer sizes is illustrated in Fig.~\ref{fig:sfe}.
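The recursive downsampling/upsampling structure with element-wise merging of adjacent resolutions can be sketched as follows (PyTorch); plain convolutions stand in for the residual modules, and the channel count is illustrative.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class Hourglass(nn.Module):
    def __init__(self, depth, ch=256):
        super().__init__()
        self.skip = nn.Conv2d(ch, ch, 3, padding=1)   # stand-in residual
        self.down = nn.Conv2d(ch, ch, 3, padding=1)
        self.inner = (Hourglass(depth - 1, ch) if depth > 1
                      else nn.Conv2d(ch, ch, 3, padding=1))
        self.up = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        skip = self.skip(x)                           # same-resolution branch
        y = self.inner(self.down(F.max_pool2d(x, 2))) # process lower scale
        y = F.interpolate(self.up(y), scale_factor=2, mode="nearest")
        return skip + y                               # merge two resolutions
\end{verbatim}
With a depth of four, a $64\times64$ input is processed down to the $4\times4$ resolution mentioned above.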
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/section3/wpe.pdf}
\caption{Viewpoint encoder architecture.}
\label{fig:wpe}
\end{figure}
\subsection{Viewpoint Encoder}
The extracted semantic feature vector $f$ is resized to dimensions of $256\times256$ and is then used as input for the second framework component, i.e., the VE, which has two main objectives. Firstly, this unit generates a set of parameters $v$, which are employed by the last component of the pipeline to produce the 3D hand pose and shape.
The vector $v$ contains the camera view translation $t\in \mathbb{R}^2$, scaling $s\in \mathbb{R}^+$, and rotation $R \in \mathbb{R}^3$, as well as the hand pose $\theta\in \mathbb{R}^{45}$ and shape $\beta \in \mathbb{R}^{10}$ values necessary to move from a 2D space to a 3D one.
Secondly, the VE also needs to substantially reduce the number of trainable parameters of the architecture to satisfy the hardware constraints.
In more detail, given the semantic feature vector $f$, a flattened latent viewpoint feature space $l_v$ that encodes semantic information is obtained by using four abstraction blocks, each containing two residual layers \citep{he2016deep}, to analyze the input, and a max-pooling layer, to both consolidate the viewpoint representation and reduce the input vector down to a $64\times64$ size.
Subsequently, to obtain the set $v=\{t, s, R, \theta, \beta\}$, the VE transforms the latent space representation $l_v$ into $v\in\mathbb{R}^{61}$, by employing three dense layers to elaborate on the representation derived by the residual layers.
We note that by employing a max-pooling layer inside each abstraction block, a smaller latent space is obtained, meaning that fewer parameters need to be trained; hence, both of the objectives for this module are achieved.
The architecture of the VE is shown in Fig.~\ref{fig:wpe}.
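A minimal sketch of how the 61-dimensional VE output can be split into the parameter set $v$ follows; the ordering of the components and the positivity transform for $s$ are illustrative assumptions.
\begin{verbatim}
import torch

def split_view_params(v):
    # t (2), s (1), R (3), theta (45), beta (10): 61 values in total.
    t, s, R, theta, beta = torch.split(v, [2, 1, 3, 45, 10], dim=-1)
    return t, s.exp(), R, theta, beta  # exp keeps the scale positive
\end{verbatim}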
\subsection{Hand Pose/Shape Estimator}
The last component of the framework utilizes the parameter set $v=\left\{t,s,R,\theta,\beta\right\}$ to generate the hand pose and shape. The estimator applies a MANO layer for the 3D joint positions and hand mesh generation. These outputs are then improved during training by leveraging a weak perspective projection procedure and a neural renderer for more accurate estimations.
\subsubsection{MANO Layer}
This layer models the properties of the hand, such as the slenderness of the fingers, the thickness of the palm, as well as the hand pose, and controls the 3D surface deformation defined from articulations. More formally, given the pose $\theta$ and shape $\beta$ parameters, the MANO hand model $M$ is defined as follows:
\begin{equation}
M(\beta,\theta)= W(T_p(\beta,\theta),J(\beta),\theta,\mathcal{W}),
\end{equation}
where $W$ is a linear blend skinning (LBS) function \citep{loper2015smpl}; $T_p$ corresponds to the articulated mesh template to blend, consisting of $K$ joints; $J$ represents the joint locations learned from the mesh vertices via a sparse linear regressor; and $\mathcal{W}$ indicates the blend weights.
To avoid the common problems of LBS models, such as overly smooth outputs or mesh collapse near joints, the template $T_p$ is obtained by deforming a mean mesh $\hat{T}$ with the shape and pose blend functions $B_S$ and $B_P$, as follows:
\begin{equation}
T_p = \hat{T} + B_S(\beta) + B_P(\theta),
\end{equation}
where $B_S$ and $B_P$ allow us to vary the hand shape and to capture deformations derived from bent joints, respectively, and are computed as follows:
\begin{equation}
B_S(\beta) = \sum_{n=1}^{|\beta|}\beta_nS_n,
\end{equation}
\begin{equation}
B_P(\theta) = \sum_{n=1}^{9K}\left(R_n(\theta)-R_n(\theta^*)\right)P_n,
\end{equation}
where $S_n\in S$ are the blend shapes computed by applying principal component analysis (PCA) to a set of registered hand shapes, normalized by a zero pose $\theta^*$; $9K$ is the total number of rotation matrix elements (nine for each of the $K$ hand articulations); $R_n(\theta)$ indicates the $n$-th rotation matrix coefficient; while $P_n\in P$ corresponds to the blend poses.
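Both blend functions reduce to weighted sums of learned displacement bases over the $V = 778$ mesh vertices; a minimal sketch (array shapes are illustrative) is given below.
\begin{verbatim}
import numpy as np

def shape_blend(beta, S):
    # B_S(beta) = sum_n beta_n * S_n, with S of shape (|beta|, V, 3).
    return np.einsum("n,nvc->vc", beta, S)

def pose_blend(rotmats, rest_rotmats, P):
    # B_P(theta): weight each pose basis P_n, of shape (9K, V, 3), by the
    # deviation of the K joint rotation matrices from the zero pose.
    diff = (rotmats - rest_rotmats).reshape(-1)  # length 9K
    return np.einsum("n,nvc->vc", diff, P)
\end{verbatim}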
Finally, there is a natural variability among hand shapes in a human population, meaning that possible skeleton mismatches might be found in the MANO layer 3D joint output. To address this issue, we implemented a skeleton adaptation procedure following the solution presented in \cite{hasson2019learning}. Skeleton adaptation is achieved via a linear layer initialized to the identity function, which maps the MANO joints to the final joint annotations.
\subsubsection{Weak Perspective Projection}
The weak perspective projection procedure takes as inputs the translation vector $t$, the scalar parameter $s$, and the 3D hand pose $RJ(\beta,\theta)$ derived from the MANO model $M(\beta,\theta)$, and re-projects the generated 3D keypoints back onto a 2D space to allow for identification of the 2D hand joint locations. This approximation allows us to train the model without defining intrinsic camera parameters, since consistency between the input and the projected 2D locations can be enforced, thus avoiding issues arising from the different device calibrations found across datasets. Formally, the re-projection $w$ is computed as follows:
\begin{equation}
w_{2D} = s\Pi(RJ(\beta,\theta)) + t,
\end{equation}
where $\Pi$ corresponds to the orthographic projection.
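This re-projection can be written compactly as in the sketch below; the batch layout and the shapes assumed for $s$ and $t$ are our own illustrative choices.
\begin{verbatim}
import torch

def weak_perspective(J, R, s, t):
    # w_2D = s * Pi(R J) + t, with Pi the orthographic projection.
    # J: (B, K, 3) joints; R: (B, 3, 3); s: (B, 1); t: (B, 2).
    rotated = torch.einsum('bij,bkj->bki', R, J)  # apply R per joint
    projected = rotated[..., :2]                  # Pi: drop the z axis
    return s.unsqueeze(1) * projected + t.unsqueeze(1)
\end{verbatim}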
\begin{figure*}[t]
\subfloat[$v_i=(x_i,y_i)$ is a face vertex. $I_j$ is the color of pixel $P_j$. $x_0$ is the current $x_i$ position. $x_1$ is the $x_i$ position at which a face edge collides with the center of $P_j$, with $x_i$ moving to its right. When $x_i=x_1$, then $I_j=I_{ij}$.]{\includegraphics[width=.485\textwidth]{images/section3/outside_raster.png}}
\hfill
\subfloat[$v_i=(x_i,y_i)$ is a face vertex. $I_j$ is the color of pixel $P_j$. $x_0$ is the current $x_i$ position. $x_1^a$ and $x_1^b$ are the $x_i$ positions at which a face edge collides with the center of $P_j$, with $x_i$ moving to its left or right. When $x_i=x_1^a|x_1^b$, then $I_j=I_{ij}^a|I_{ij}^b$.]{\includegraphics[width=.485\textwidth]{images/section3/inside_raster.png}}
\caption{Rasterization process for pixels residing outside a given face (i) and inside it (ii). Image courtesy of \cite{kato2018neural}.}
\label{fig:rasterization}
\end{figure*}
\subsubsection{Neural Renderer}
To improve the mesh generated by the MANO layer, the differentiable neural renderer devised in \cite{kato2018neural}, which is trainable via back-propagation, is employed to rasterize the 3D shape into a hand silhouette. This silhouette, analogously to the re-projected hand joint coordinates, is then used to improve the mesh generation. Formally, given a 3D mesh composed of $N=778$ vertices $\{v_1^o, v_2^o, \dots, v_N^o\}$, with $v_i^o\in\mathbb{R}^{3}$, and $M=1538$ faces $\{f_1, f_2, \dots, f_M\}$, with $f_j\in\mathbb{N}^{3}$, the vertices are first projected onto the 2D screen space using the weak perspective projection via the following equation:
\begin{equation}
v_i^s = s\Pi(Rv_i^o) + t,
\end{equation}
where $R$ corresponds to the rotation matrix used to build the MANO hand model $M(\beta,\theta)$. Rasterization is then applied to generate an image from the projected vertices $v_i^s$ and faces $f_j$ (where $i\in\{1,\dots,N\}$ and $j\in\{1,\dots,M\}$) via sampling, as explained in \cite{kato2018neural}.
Different gradient flow rules are used to handle vertices residing outside or inside a given face. For ease of explanation, only the $x$ coordinate of a single vertex $v_i^s$ and a pixel $P_j$ are considered in each scenario. Note that the color of $P_j$ corresponds to a function $I_j(x_i)$, which is defined by freezing all variables except $x_i$ to compute the partial derivatives.
As shown in Fig.~\ref{fig:rasterization}, during the rasterization process, the color $I_j(x_i)$ of a pixel $P_j$ changes instantly from $I(x_0)$ to $I_{ij}$ (i.e., starting and hitting point, respectively) when $x_i$ reaches a point $x_1$ where an edge of the face collides with the center of $P_j$.
Note that such a change can be observed for pixels located either outside the face (Fig.~\ref{fig:rasterization}.i.b) or inside it (Fig.~\ref{fig:rasterization}.ii.b).
If we then let $\delta_i^x=x_1-x_0$ be the distance traveled from a starting point $x_0$, and $\delta_j^I=I(x_1)-I(x_0)$ be the color change, it is straightforward to see that when computing the partial derivatives $\partial I_j(x_i)/\partial x_i$ during the rasterization process, they will be zero almost everywhere due to the sudden color change (Fig.~\ref{fig:rasterization}.i/ii.c).
To address this issue, the authors of \cite{kato2018neural} introduce a gradual change between $x_0$ and $x_1$ via linear interpolation (Fig.~\ref{fig:rasterization}.i/ii.d), which allows for transformation of the partial derivatives $\partial I_j(x_i)/\partial x_i$ into $\delta_j^I/\delta_i^x$, since the gradual change yields non-zero derivatives (Fig.~\ref{fig:rasterization}.i/ii.e).
However, $I_j(x_i)$ has different left and right derivatives at $x_0$. Hence, the error signal $\delta_j^P$, which is back-propagated to pixel $P_j$ and indicates whether $P_j$ should become brighter or darker, is also used to handle the case where $x_i=x_0$. To deal with this situation, the authors of \cite{kato2018neural} define gradient flow rules that correctly modify pixels residing either outside or inside a given face during backpropagation.
For $P_j$ residing outside the face, the gradient flow rule is defined as follows:
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0} = \begin{cases} \frac{\delta_j^I}{\delta_i^x}, & \mbox{if } \delta_j^P\delta_j^I<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^I\ge0, \end{cases}
\label{eq:pj_out}
\end{equation}
where $\delta_i^x=x_1-x_0$ indicates the distance traveled by $x_i$ during the rasterization procedure; $\delta_j^I=I(x_1)-I(x_0)$ represents the color change; and $\delta_j^P$ corresponds to the error signal backpropagated to pixel $P_j$.
As described in \cite{kato2018neural}, to minimize the loss during training, pixel $P_j$ must become darker when $\delta_j^P>0$. Since the sign of $\delta_j^I$ denotes whether $P_j$ can become brighter or darker, for $\delta_j^I>0$, pixel $P_j$ becomes brighter when $x_i$ is pulled toward the face but cannot become darker however $x_i$ moves, as that would require a negative $\delta_j^I$. Thus, the gradient should not flow if $\delta_j^P>0\:\wedge\:\delta_j^I>0$.
In addition, the face and $P_j$ might also not overlap, regardless of where $x_i$ moves, as the hitting point $x_1$ may not exist.
In this case, the derivatives are defined as $\partial I_j(x_i)/\partial x_i\rvert_{x_i=x_0}=0$.
This means that the gradient should never flow when $\delta_j^P\delta_j^I\ge0$, in accordance with Eq.~\ref{eq:pj_out}.
Finally, we note that the derivative with respect to $y_i$ is obtained by applying Eq.~\ref{eq:pj_out} along the $y$-axis.
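In scalar form, the outside-face rule of Eq.~\ref{eq:pj_out} reduces to the sketch below; this is a didactic restatement, not the batched implementation of \cite{kato2018neural}, and the inside-face case discussed next simply sums the two analogous left/right terms.
\begin{verbatim}
def grad_outside(delta_P, delta_I, delta_x):
    # dI_j/dx_i at x_0 for a pixel outside the face: the gradient
    # flows only when the error signal and the color change have
    # opposite signs, i.e. when moving the vertex can reduce the loss.
    if delta_x == 0 or delta_P * delta_I >= 0:
        return 0.0
    return delta_I / delta_x
\end{verbatim}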
For a point $P_j$ residing inside the face, the left and right derivatives at $x_0$ are first defined \cite{kato2018neural} as $\delta_j^{I^a}=I(x_1^a)-I(x_0)$, $\delta_j^{I^b}=I(x_1^b)-I(x_0)$, $\delta_x^a=x_1^a-x_0$, and $\delta_x^b=x_1^b-x_0$. Then, in a similar way to a point $P_j$ residing outside the face, the gradient flow rules are defined as follows:
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0} = \left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^a + \left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^b,
\end{equation}
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^a = \begin{cases} \frac{\delta_j^{I^a}}{\delta_x^a}, & \mbox{if } \delta_j^P\delta_j^{I^a}<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^{I^a}\ge0, \end{cases}
\end{equation}
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^b = \begin{cases} \frac{\delta_j^{I^b}}{\delta_x^b}, & \mbox{if } \delta_j^P\delta_j^{I^b}<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^{I^b}\ge0. \end{cases}
\end{equation}
\subsection{Multi-task Loss}\label{subsec:multi_task_loss}
The final outputs of the proposed framework are the 3D joint positions and the hand mesh. However, in real-world datasets, the ground truths for 3D hand mesh, pose, shape, and view parameters, which are unknown in unconstrained situations, are hard to collect. Our framework therefore automatically generates intermediate representations (i.e., 2D heatmaps, hand silhouettes, and 2D re-projected joint positions and meshes), which are then exploited to train the whole system jointly using a single loss function.
The ground truths in this case are defined as follows:
\begin{itemize}
\item 2D heatmaps (one per joint) are built using a 2D Gaussian with a standard deviation of $2.5$ pixels, centered on the 2D joint location annotations provided by the datasets, to describe the likelihood of a given joint residing in that specific area (a sketch of this construction is given after this list);
\item Hand silhouettes are computed from the input image with the GrabCut algorithm, implemented using the OpenCV library, and 2D joint annotations from the datasets are used to initialize the foreground, background, and probable foreground/background regions, following \cite{boukhayma20193d};
\item The 3D joint positions and 2D re-projected joint positions are compared directly with the 3D and 2D joint positions provided by the various datasets;
\item The 2D re-projected meshes are compared with the same hand silhouette masks built with the GrabCut algorithm.
\end{itemize}
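As anticipated in the first item above, a heatmap ground truth can be generated as in the following sketch; the $64\times64$ heatmap resolution matches the SFE output, while the function name and coordinate convention are our own assumptions.
\begin{verbatim}
import numpy as np

def joint_heatmap(center, size=64, sigma=2.5):
    # 2D Gaussian with std 2.5 px centered on the annotated
    # (x, y) joint location, expressed in heatmap coordinates.
    xs = np.arange(size)
    ys = np.arange(size)[:, None]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
\end{verbatim}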
Formally, the multi-task loss function employed here is defined through the following equation:
\begin{equation}
L = L_{SFE} + L_{3D} + L_{2D} + L_{M} + L_{Reg},
\end{equation}
where $L_{SFE}$, $L_{3D}$, $L_{2D}$, $L_{M}$, and $L_{Reg}$, represent the semantic feature extractor (i.e., 2D heatmaps and hand silhouette), 3D joint positions, 2D joint re-projection, hand silhouette mask (i.e., the re-projected mesh), and model parameter regularization losses, respectively.
\subsubsection{$L_{SFE}$}
The semantic feature extractor estimates 2D heatmaps and the hand silhouette. The loss for a given hourglass module $h_i$ is defined as the sum of an $L_2$ loss on the heatmaps and a pixel-wise binary cross-entropy (BCE) loss on the silhouette, as follows:
\begin{equation}
L_{h_i} = \lVert H - \hat{H} \rVert_2^2 + \lVert M - \hat{M} \rVert_{BCE}^2,
\end{equation}
where $\hat{\cdot}$ is the hourglass output for the 2D heatmaps $H$ and the silhouette mask $M$, for which the ground truths are derived via the 2D Gaussian and GrabCut algorithm, respectively.
The two stacked hourglass network losses are then summed to apply intermediate supervision since, as demonstrated in \cite{newell2016stacked}, this improves the final estimates. Thus, the SFE loss is defined as follows:
\begin{equation}
L_{SFE} = L_{h_1} + L_{h_2}.
\end{equation}
\subsubsection{$L_{3D}$}
An $L_2$ loss is also used to measure the distance between the estimated 3D joint positions $RJ(\beta,\theta)$ and the ground truth coordinates $w_{3D}$ provided by the datasets, as follows:
\begin{equation}
L_{3D} = \lVert w_{3D} - RJ(\beta,\theta) \rVert_2^2.
\end{equation}
\subsubsection{$L_{2D}$}
The 2D re-projected hand joint positions loss is used to refine the view parameters $t$, $s$, and $R$, for which the ground truths are generally unknown. It is computed as follows:
\begin{equation}
L_{2D} = \lVert w_{2D} - \hat{w}_{2D} \rVert_1,
\end{equation}
where $\hat{w}_{2D}$ indicates the network 2D re-projected positions, and $w_{2D}$ are the ground truths of 2D joint positions annotated in a given dataset.
Notice that an $L_1$ loss is used since it is less sensitive to outliers, and therefore more robust, than the $L_2$ loss.
\subsubsection{$L_{M}$}
The silhouette mask loss is introduced as a form of weak supervision, since the hand mesh should be consistent with its silhouette \citep{zhang2019end} or depth map \citep{kato2018neural}. This $L_2$ loss therefore helps to refine both the hand shape and the camera view parameters, via the following equation:
\begin{equation}
L_M = \lVert M - \hat{M} \rVert_2^2,
\end{equation}
where $\hat{M}$ is the 2D re-projected mesh, and $M$ corresponds to the hand silhouette mask used as ground truth, extracted using the GrabCut algorithm.
\subsubsection{$L_{Reg}$}
The last loss component is a regularization term applied to reduce the magnitudes of the hand model parameters $\beta$ and $\theta$, and thus avoid unrealistic mesh representations. Focusing only on the 2D and 3D joint positions would let the mesh fit the joint locations while completely disregarding the actual anatomy of the hand. Hence, to prevent such extreme mesh deformations, the following regularization term is used:
\begin{equation}
L_{Reg} = \lVert \beta \rVert_2^2 + \lVert \theta \rVert_2^2.
\end{equation}
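Putting the five terms together, a schematic PyTorch version of the multi-task loss reads as follows; the dictionary keys are illustrative, a plain (unsquared) BCE is used for the silhouette term, and no claim is made that the reductions match our actual implementation.
\begin{verbatim}
import torch.nn.functional as F

def multi_task_loss(pred, gt):
    # L = L_SFE + L_3D + L_2D + L_M + L_Reg
    # L_SFE: L2 on heatmaps + BCE on silhouettes, summed over the
    # two hourglass modules for intermediate supervision.
    l_sfe = sum(F.mse_loss(h, gt['heatmaps'], reduction='sum')
                + F.binary_cross_entropy(m, gt['mask'])
                for h, m in zip(pred['heatmaps'], pred['masks']))
    l_3d = F.mse_loss(pred['joints_3d'], gt['joints_3d'],
                      reduction='sum')
    l_2d = F.l1_loss(pred['joints_2d'], gt['joints_2d'],
                     reduction='sum')
    l_m = F.mse_loss(pred['rendered_mask'], gt['mask'],
                     reduction='sum')
    l_reg = pred['beta'].pow(2).sum() + pred['theta'].pow(2).sum()
    return l_sfe + l_3d + l_2d + l_m + l_reg
\end{verbatim}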
\section{Experimental Results}\label{sec:results}
In this section, we first introduce the benchmark datasets for 3D hand pose and shape estimation and hand gesture recognition that are used to validate our framework, and the data augmentation strategy that is employed to better exploit all the available samples.
We then present a comprehensive performance evaluation. We report the results of ablation studies on each component of the framework to highlight the effectiveness of the proposed approach both quantitatively and qualitatively. We conduct a comparison with state-of-the-art alternatives for 3D hand pose and shape estimation, and present the results obtained on a different task (i.e., hand gesture recognition), so that the abstraction capabilities of our framework can be fully appreciated.
\subsection{Datasets}
The following benchmark datasets are exploited to evaluate our framework: the synthetic object manipulation (ObMan) \citep{hasson2019learning}, stereo hand dataset (STB) \citep{zhang20163d}, Rhenish-Westphalian Technical University gesture (RWTH) \citep{dreuw2006modeling}, and Creative Senz3D (Senz3D) \citep{minto2015exploiting} datasets. Specifically, ObMan is used to pre-train the SFE to generate 2D heatmaps and hand silhouette estimations that are as accurate as possible; STB is employed to evaluate the 3D hand pose and shape estimations through ablation studies and comparisons with state-of-the-art methods; and RWTH and Senz3D are utilized to assess the generalization capabilities of our framework on the task of hand gesture recognition.
\subsubsection{ObMan}
This is a large-scale synthetic dataset containing images of hands grasping different objects such as bottles, bowls, cans, jars, knives, cellphones, cameras, and remote controls. Realistic images of embodied hands are built by transferring different poses to hands via the SMPL+H model \citep{romero2017embodied}. Several rotation and translation operations are applied to maximize the viewpoint variability to provide natural occlusions and coherent backgrounds. For each hand-object configuration, object-only, hand-only, and hand-object images are generated with the corresponding segmentation, depth map, and 2D/3D locations of 21 joint keypoints. From this dataset, we selected 141,550 RGB images with dimensions $256\times256$, showing either hand-only or hand-object configurations, to train the semantic feature extractor.
\subsubsection{STB}
This dataset contains stereo image pairs (STB-BB) and depth images (STB-SK), and was created for the evaluation of hand pose tracking/estimation difficulties in real-world scenarios.
Twelve different sequences of hand poses were collected with six different backgrounds representing static or dynamic scenes.
The hand and fingers are either moved slowly or randomly to give both simple and complex self-occlusions and global rotations.
Images in both collections have the same resolution of $640\times480$, identical camera pose, and similar viewpoints.
Furthermore, both subsets contain 2D/3D joint locations of 21 keypoints.
From this collection, we used only the STB-SK subset to evaluate the proposed network, and divided it into 15,000 and 3,000 samples for the training and test sets, respectively.
\subsubsection{RWTH}
This dataset includes fingerspelling gestures from the German sign language. It consists of RGB video sequences for 35 signs representing the letters from A to Z, the ``SCH'' character, the umlauts \"A, \"O, and \"U, and the numbers from one to five. For each gesture, 20 different individuals were recorded twice, using two distinct cameras with different viewpoints, at resolutions of $320\times240$ and $352\times288$, giving a total of 1,400 samples. From this collection, we excluded all gestures requiring motion, i.e., the letters J, Z, \"A, \"O, and \"U. The final subset contained 30 static gestures over 1,160 images. This collection was divided into disjoint training and test sets containing 928 and 232 images, respectively, in accordance with \citep{zimmermann2017learning}.
\subsubsection{Senz3D}
This dataset contains 11 different gestures performed by four different individuals. To increase the complexity of the samples, the authors collected gestures with similar characteristics (e.g., the same number of raised fingers, low distances between fingers, and touching fingertips). All gestures were captured using an RGB camera and a time-of-flight (ToF) depth sensor at a resolution of $320\times240$. Moreover, each gesture was repeated by each person 30 times, for a total of 1,320 acquisitions. All of the available samples from this collection were employed in the experiments.
\subsection{Data Augmentation}
Data augmentation is a common practice that can help a model generalize over the input data, making it more robust. In this work, up to four different groups of transformations, randomly selected during each iteration of the training phase, were applied to an input image to further increase the dissimilarities between samples. These transformations were as follows:
\begin{itemize}
\item \textit{blur:} This is obtained by applying a Gaussian filter of varying strength, with kernel $\sigma \in [1,3]$, or by computing the mean over a neighborhood using a kernel of size between $3\times3$ and $9\times9$;
\item \textit{random noise:} This is achieved by adding Gaussian noise to an image, either sampled randomly per pixel channel or once per pixel from a normal distribution $\mathcal{N}(0,0.05\cdot255)$;
\item \textit{artificial occlusion:} This can be attained either by dropping (i.e., setting to black) up to 30\% of the contiguous pixels, or by replacing up to 20\% of the pixels using a salt-and-pepper strategy;
\item \textit{photometric adjustments:} These are derived from arithmetic operations applied to the image matrix, for example by adding a value in the range $[-20, 20]$ to each pixel, by improving or worsening the image contrast, or by changing the brightness through multiplication of the image matrix by a value in the range $[0.5, 1.5]$.
\end{itemize}
Note that all transformations only affect the appearance of an image, and leave the 2D/3D coordinates unaltered.
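For reference, the four transformation groups can be reproduced with a library such as imgaug, as in the sketch below; the exact parameterization of our training code is not restated here, so the ranges should be read as mirroring the list above rather than as the definitive configuration.
\begin{verbatim}
import imgaug.augmenters as iaa

# Up to four groups, randomly selected at each training iteration.
augmenter = iaa.SomeOf((0, 4), [
    iaa.OneOf([iaa.GaussianBlur(sigma=(1.0, 3.0)),
               iaa.AverageBlur(k=(3, 9))]),            # blur
    iaa.AdditiveGaussianNoise(scale=0.05 * 255,
                              per_channel=0.5),        # random noise
    iaa.OneOf([iaa.CoarseDropout(0.3, size_percent=0.1),
               iaa.SaltAndPepper(0.2)]),               # occlusion
    iaa.OneOf([iaa.Add((-20, 20)),
               iaa.LinearContrast((0.75, 1.25)),
               iaa.Multiply((0.5, 1.5))]),             # photometric
], random_order=True)

# images: a batch of HxWx3 uint8 arrays; 2D/3D labels stay untouched.
# augmented = augmenter(images=images)
\end{verbatim}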
\subsection{Performance Evaluation}
The proposed system was developed using the Pytorch framework. All experiments were performed using an Intel Core i9-9900K @3.60GHz CPU, 32GB of RAM, and an Nvidia GTX 1080 GPU with 8GB GDDR5X RAM.
With this configuration, at inference time, the SFE, VE, and HE components required $64.9$, $9.6$, and $15.04$ ms, respectively, giving a total of $89.54$ ms per input image. The proposed system can therefore analyze approximately 11 images per second, regardless of the dataset used. Moreover, we note that this speed could be further improved with higher-performance hardware. In fact, when using a more recent GPU model such as the Nvidia GeForce RTX 3080 with 10GB GDDR6X RAM, the total time required to analyze a single input image is reduced to $37.02$ ms, enabling about 27 images to be examined per second, i.e., a speed increase of roughly 2.4x with respect to our configuration.
To evaluate the proposed framework, we employ three metrics that are commonly used for 3D hand pose and shape estimation, and for hand gesture recognition. These are the 3D end-point-error (EPE) and area under the curve (AUC) for the former task, and accuracy for the latter.
The EPE used in the ablation studies is defined as the average Euclidean distance, measured in millimeters (mm), between predicted and ground truth keypoints. The AUC is computed from the percentage of correct 3D keypoints (3D PCK) evaluated at thresholds ranging from 20 to 50 mm. Finally, for both metrics, the public implementation of \cite{zimmermann2017learning} is employed for a fair comparison.
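For clarity, the two estimation metrics can be restated as in the following sketch; this is a simplified re-implementation for illustration, not the public evaluation code of \cite{zimmermann2017learning} that we actually employ.
\begin{verbatim}
import numpy as np

def epe(pred, gt):
    # Mean per-joint Euclidean distance in mm; pred/gt: (N, 21, 3).
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pck_auc(pred, gt, lo=20.0, hi=50.0, steps=100):
    # Area under the 3D PCK curve, thresholds in [lo, hi] mm,
    # normalized to [0, 1].
    dist = np.linalg.norm(pred - gt, axis=-1)
    thresholds = np.linspace(lo, hi, steps)
    pck = [(dist <= t).mean() for t in thresholds]
    return np.trapz(pck, thresholds) / (hi - lo)
\end{verbatim}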
\begin{table}[t]
\centering
\caption{Semantic feature extractor ablation study.}
\label{tab:sfe_ablation}
\begin{tabular}{p{.8\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
No semantic feature extractor (SFE) & 13.06 \\
SFE (heatmaps only) & 11.69 \\
SFE (heatmaps + silhouette) & 11.52 \\
ObMan pre-trained SFE (heatmaps + silhouette) & 11.12 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Viewpoint encoder ablation study.}
\label{tab:ve_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
2*ResNet modules VE & 28.77 \\
4*ResNet modules VE & 11.12 \\
5*ResNet modules VE & 12.25 \\
\hline
3*1024 dense layers VE (VE$_1$) & 10.79 \\
2*2048/1*1024 dense layers VE (VE$_2$) & 11.12 \\
3*2048 dense layers VE & 11.86 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Hand pose/shape estimator ablation study.}
\label{tab:hpse_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
15 PCA parameters MANO layer & 12.31 \\
30 PCA parameters MANO layer & 11.47 \\
45 PCA parameters MANO layer & 11.12 \\
\hline
No 2D re-projection & 11.56 \\
With 2D re-projection & 11.12 \\
\hline
No $L_{Reg}$ loss & 10.10 \\
With $L_{Reg}$ loss & 11.12 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Advanced components ablation study.}
\label{tab:adv_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
No adapt skeleton \& hourglass summation & 11.12 \\
With adapt skeleton \& hourglass summation & 9.10 \\
With adapt skeleton \& hourglass concatenation \& (VE$_1$) & 9.03 \\
With adapt skeleton \& hourglass concatenation \& (VE$_2$) & 8.79 \\
\hline
\end{tabular}
\end{table}
\subsubsection{Framework Quantitative and Qualitative Results}\label{subsubsec:framework_quantitative_and_qualitative}
The proposed framework contains several design choices that were made in order to obtain stable 3D hand pose and shape estimations.
We therefore performed ablation studies to assess the effectiveness of each of these decisions. The obtained results are summarized in Tables \ref{tab:sfe_ablation}, \ref{tab:ve_ablation}, \ref{tab:hpse_ablation}, and \ref{tab:adv_ablation}, where each table shows the results for a given component, i.e., the SFE, VE, HE, and advanced framework components.
All of the reported EPE scores were computed for the STB dataset, while pre-training of the SFE unit was carried out exclusively on the ObMan dataset, since it contains a high number of synthetic images under various conditions.
For both collections, mini-batches of size six and an Adam optimizer with a learning rate of $10^{-4}$ and a weight decay of $10^{-5}$ were used to train the system. The framework was trained for 60 and 80 epochs on the ObMan and STB datasets, respectively, as the former contained substantially more samples.
In relation to the STB training time, which involves the entire framework, each mini-batch required $\sim$0.5 seconds to be analyzed by the specified hardware configuration, giving a total of $\sim$1278 s per training epoch.
The first experiment quantitatively evaluated the usefulness of the SFE component in terms of producing more accurate 3D hand pose and shape estimations. As shown in Table~\ref{tab:sfe_ablation}, while it is still possible to achieve estimations by feeding the input image directly to the VE (i.e., with no SFE), an improvement of $\sim$1.5 mm in the estimation can be obtained by extracting the semantic features. This indicates that the 2D heatmaps allow the network to focus on the positions of the hand joints, whereas generating the heatmaps and silhouette simultaneously enables a comprehensive view of the hand that forces the joints to be placed in the right position when moving to a 3D plane. This behavior is further supported by the results of pre-training the SFE component on the ObMan dataset, where the occlusions force the network to create meaningful abstractions for both the 2D heatmaps and silhouette.
The second test gauged the quality of the VE in terms of estimating the view parameters $v$. As shown in Table~\ref{tab:ve_ablation}, using either a low or a high number of ResNet modules (i.e., two in the former case, five or more in the latter) to produce the latent space representation $l_v$ results in an increased EPE score, symptomatic of underfitting and overfitting, respectively. Slightly better performance can be achieved by reducing the sizes of the dense layers used to build up the vector $v$. However, although the smaller VE (i.e., $VE_1$) can perform better than a larger one (i.e., $VE_2$), this result does not hold when extra steps are included, such as skeleton adaptation and hourglass output concatenation (shown in Table~\ref{tab:adv_ablation}), suggesting that some information can still be lost.
The third experiment, which is summarized in Table~\ref{tab:hpse_ablation}, focused on the hand pose and shape estimator. Tests were performed on the number of parameters of the MANO layer, the proposed 2D re-projection, and the regularization loss. Increasing the number of hand articulations allowed us, as expected, to obtain more realistic hands and consequently more precise estimations when all 45 values were used. Applying the proposed 2D re-projection further reduced the EPE score by providing the MANO layer with direct feedback on its output. Employing the regularization loss resulted in a slightly higher keypoint distance; however, without it the hand shape collapses onto itself, as shown in the bottom row of Fig.~\ref{fig:ablation_qualitative}.
The fourth and last test, reported in Table~\ref{tab:adv_ablation}, dealt with advanced strategies, i.e., skeleton adaptation (adapt skeleton) and hourglass output concatenation rather than summation. The former strategy allowed for a significant performance boost (a 2 mm lower EPE) since it directly refines the 3D joints produced by the MANO layer. Stacked hourglass output concatenation improved the precision of the system by a further 0.31 mm, since it provided the VE with a fine-grained input representation. However, this detailed input description requires larger dense layers (i.e., $VE_2$) to avoid losing any information. Consequently, using smaller dense layers (i.e., $VE_1$) results in an increase in the EPE score.
\begin{figure*}[t]
\begin{overpic}[width=\textwidth]{images/section4/ablation.pdf}
\put(5.6,0){(a)}
\put(19.1,0){(b)}
\put(32.9,0){(c)}
\put(51.2,0){(d)}
\put(82.2,0){(e)}
\end{overpic}
\caption{STB dataset 3D pose and shape estimation outputs. From top to bottom, the presented framework, framework without 2D re-projection, framework without SFE module, and full framework without regularization loss. Input image, 2D joints, silhouette, 3D joints, and mesh, are shown in (a), (b), (c), (d), and (e), respectively.}
\label{fig:ablation_qualitative}
\end{figure*}
\begin{figure*}[t]
\begin{overpic}[width=\textwidth]{images/section4/failed_ablation.pdf}
\put(5.6,0){(a)}
\put(19.1,0){(b)}
\put(32.9,0){(c)}
\put(51.2,0){(d)}
\put(82.2,0){(e)}
\end{overpic}
\caption{STB dataset failed 3D pose and shape estimation outputs for the presented framework. Input image, 2D joints, silhouette, 3D joints, and mesh, are shown in (a), (b), (c), (d), and (e), respectively.}
\label{fig:failure_qualitative}
\end{figure*}
\begin{table*}[t]
\centering
\caption{AUC state-of-the-art comparison on STB dataset. Works are subdivided according to their input and output types.}
\label{tab:auc_comp}
\begin{tabular}{l ccc}
\hline
\textbf{Model} & \textbf{Input} & \textbf{Output} & \textbf{AUC} \\
\hline
CHPR \citep{sun2015cascaded} & Depth & 3D skeleton & 0.839 \\
ICPPSO \citep{qian2014realtime} & RGB-D & 3D skeleton & 0.748 \\
PSO \citep{oikonomidis2011efficient} & RGB-D & 3D skeleton & 0.709 \\
Dibra et al. \cite{dibra2018monocular} & RGB-D & 3D skeleton & 0.923 \\
\hline
Cai et al. \cite{cai20203d} & RGB & 3D skeleton & 0.996 \\
Iqbal et al. \cite{iqbal2018hand} & RGB & 3D skeleton & 0.994 \\
Hasson et al. \cite{hasson2019learning} & RGB & 3D skeleton & 0.992 \\
Yang and Yao \cite{yang2019disentangling} & RGB & 3D skeleton & 0.991 \\
Spurr et al. \cite{spurr2018cross} & RGB & 3D skeleton & 0.983 \\
Zimmermann and Brox \cite{zimmermann2017learning} & RGB & 3D skeleton & 0.986 \\
Mueller et al. \cite{mueller2018ganerated} & RGB & 3D skeleton & 0.965 \\
Panteleris et al. \cite{panteleris2018using} & RGB & 3D skeleton & 0.941 \\
\hline
Ge et al. \cite{ge20193d} & RGB & 3D skeleton+mesh & 0.998 \\
Baek et al. \cite{baek2020weakly} & RGB & 3D skeleton+mesh & 0.995 \\
Zhang et al. \cite{zhang2019end} & RGB & 3D skeleton+mesh & 0.995 \\
Boukhayma et al. \cite{boukhayma20193d} & RGB & 3D skeleton+mesh & 0.993 \\
ours & RGB & 3D skeleton+mesh & 0.995 \\
\hline
\end{tabular}
\end{table*}
The differences in the qualitative results for different framework configurations are shown in Fig.~\ref{fig:ablation_qualitative}. From the top row to the bottom, the outputs correspond to the proposed framework, the framework without 2D re-projection, the framework without the SFE module, and the full framework without the regularization loss. As can be seen, the most important component in terms of obtaining coherent hand shapes is the regularization loss, since otherwise the mesh collapses onto itself in order to satisfy the 3D joint locations during training time (bottom row in Fig.~\ref{fig:ablation_qualitative}.c and Fig.~\ref{fig:ablation_qualitative}.e).
When we employ the SFE module (first two rows in Fig.~\ref{fig:ablation_qualitative}), more accurate 3D joints and shapes are generated since the SFE enforces both the correct localization of joints and the generation of a more realistic silhouette (Figs.~\ref{fig:ablation_qualitative}.b, c, and d).
When we re-project the generated 3D coordinates and mesh, the final 3D joint locations and hand shape (Figs.~\ref{fig:ablation_qualitative}.d and e) are more consistent with both the estimated 2D locations and the input image (Figs.~\ref{fig:ablation_qualitative}.b and a). To conclude this qualitative evaluation, some examples of failed 3D pose and shape estimations are shown in Fig.~\ref{fig:failure_qualitative}. It can be seen that although coherent hand poses and shapes are generated, the framework is unable to produce the correct output due to wrong estimations of both the 2D joints and the silhouette by the SFE. This preliminary error is then amplified by the subsequent framework modules, as can be seen from the discrepancy between the 2D and 3D joint locations shown in Figs.~\ref{fig:failure_qualitative}.b and d. Although the loss described in Section~\ref{subsec:multi_task_loss} ultimately forces the MANO layer to produce consistent hands, as also discussed for the third experiment, it can also result in greater inaccuracy in the 3D joint position estimation to guarantee such consistency. This outcome has two implications: firstly, it indicates that there is still room for improvement, particularly for the 2D joint and silhouette estimations that represent the first step in the proposed pipeline; and secondly, it highlights the effectiveness of the proposed framework, which can generate stable 3D representations from RGB images.
\subsubsection{Comparison of 3D Hand Pose/Shape Estimation}
To demonstrate the effectiveness of the proposed framework, a state-of-the-art 3D PCK AUC comparison was carried out, as shown in Table~\ref{tab:auc_comp}. As can be seen, the presented system is competitive with other successful schemes while using only RGB images and outputting both 3D skeleton locations and mesh, indicating that all of our design choices allow the framework to generate good estimates using only RGB information. It is particularly interesting that the proposed method was able to easily outperform systems that exploited depth data, suggesting that the simultaneous use of the multi-task SFE, VE, and 2D re-projection can help to produce correct estimations by compensating for the missing depth information.
Specifically, the multi-task SFE enables the implementation of a customized VE that, unlike the scheme in \cite{boukhayma20193d}, does not need to be pre-trained on a synthetic dataset in order to perform well; there is also no need to normalize the latent feature space representation by using a VAE to disentangle the different factors influencing the hand representation in order to obtain accurate 3D hand poses, as described in \cite{yang2019disentangling}.
Furthermore, thanks to the re-projection module, these results are obtained without applying an iterative regression module to the MANO layer, unlike in \cite{zhang2019end} and \cite{baek2020weakly}, where progressive changes are carried out recurrently to refine the estimation parameters; this simplifies the training procedure. In addition, the solutions implemented in the proposed framework allow us, firstly, to avoid input assumptions and post-processing operations, unlike the majority of schemes in the literature where some parameter (e.g., the global hand scale or the root joint depth) is assumed to be known at test time; and secondly, to achieve similar performance to the best model devised in \cite{ge20193d}, even though the latter scheme employs a more powerful solution (i.e., a graph CNN instead of a fixed MANO layer) for the 3D hand pose and shape estimation.
To conclude this comparison, the 3D PCK curve computed at different thresholds for several state-of-the-art works is shown in Fig.~\ref{fig:3d_pck}. It can be seen that performance on the 3D hand pose and shape estimation task is becoming saturated, and newer works can consistently achieve high performance with low error thresholds. In this context, the proposed method is on par with the top works in the literature, further supporting the view that all of our design choices allowed the framework to generate good 3D poses and shapes from monocular RGB images.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/section4/3dpck.pdf}
\caption{3D PCK state-of-the-art comparison on STB dataset.}
\label{fig:3d_pck}
\end{figure}
\subsubsection{Comparison on Hand Gesture Recognition}
To assess the generalizability of the proposed framework, experiments were performed on the RWTH and Senz3D datasets. Since our architecture does not include a classification component, it was extended by attaching the same classifier described in \cite{zimmermann2017learning} to handle the new task. This classifier takes as input the 3D joint coordinates generated by the MANO layer, and consists of three fully connected layers with a ReLU activation function.
Note that all of the weights (except those used for the classifier) are frozen when training the system on the hand gesture recognition task, so that it is possible to correctly evaluate the generalizability of the framework.
Moreover, although weights are frozen, the entire framework still needs to be executed. Hence, the majority of the training time is spent on the generation of the 3D joint coordinates, since this is where most of the computation is performed; as a result, each mini-batch is analyzed by the specified hardware configuration in $\sim$222 ms (i.e., $\sim$37 ms per image), giving total times of $\sim$206 s and $\sim$235 s per epoch for the RWTH and Senz3D datasets, respectively.
Our experiments followed the testing protocol devised in \cite{dibra2018monocular} in order to present a fair comparison; this consisted of 10-fold cross-validation with non-overlapping 80/20 splits for the training and test sets, respectively. In a similar way to other state-of-the-art works, all images were cropped close to the hand to remove as much background as possible and meet the requirements for the input size, i.e., $256\times256$.
The results are shown in Table~\ref{tab:gesture_recog_comp}, and a comparison with other schemes in the literature is presented.
It can be seen that our framework consistently outperformed another work focusing on 3D pose and shape estimation (i.e., \cite{zimmermann2017learning}) on both datasets, meaning that it generates more accurate joint coordinates from the RGB image; the same result was also obtained for the estimation task, as shown in Table~\ref{tab:auc_comp}.
However, methods that exploit depth information (i.e., \cite{dibra2018monocular}) or concentrate on hand gesture classification (i.e., \cite{papadimitriou2019fingerspelled}) can still achieve slightly higher performance. There are two reasons for this.
Firstly, methods concentrating on the hand gesture classification task achieve lower performance on the estimation task, even though similar information, such as the 3D joint locations, is used. As a matter of fact, even though they exploit depth information in their work, the authors of \cite{dibra2018monocular} obtained an AUC score of 0.923, while the scheme in \cite{zimmermann2017learning} and the proposed framework achieved AUC scores of 0.986 and 0.994 on the STB dataset for 3D hand pose estimation, respectively. Secondly, as discussed in Section~\ref{subsubsec:framework_quantitative_and_qualitative} and shown by the qualitative results in Fig.~\ref{fig:failure_qualitative}, the proposed architecture could be improved further by increasing the estimation accuracy of the 2D joints and silhouette, indicating that if a good hand abstraction is used to derive the 3D hand pose and shape, this can be effective for the hand gesture recognition task.
In summary, the proposed method achieves state-of-the-art performance on the 3D hand pose and shape estimation task, can outperform other existing estimation approaches when applied to the hand gesture recognition task, and behaves in a comparable way to other specifically designed hand gesture recognition systems. This indicates that the proposed pipeline outputs stable hand pose estimations that can be effectively used to recognize hand-gestures.
\begin{table}[t]
\centering
\caption{Hand-gesture recognition accuracy comparison.}
\label{tab:gesture_recog_comp}
\begin{tabular}{l cc}
\hline
\textbf{Model} & \textbf{RWTH} & \textbf{Senz3D} \\
\hline
Papadimitriou and Potamianos \cite{papadimitriou2019fingerspelled} & 73.92\% & - \\
Memo and Zanuttigh \cite{memo2018head} & - & 90.00\% \\
Dibra et al. \cite{dibra2018monocular} & 73.60\% & 94.00\% \\
Dreuw et al. \cite{dreuw2006modeling} & 63.44\% & - \\
Zimmermann and Brox \cite{zimmermann2017learning}* & 66.80\% & 77.00\% \\
ours* & 72.03\% & 92.83\% \\
\hline
\end{tabular}\\
\footnotesize{$^*$method focusing on 3D hand pose and shape estimation}
\end{table}
\section{Conclusion}\label{sec:conclusions}
In this paper, we have presented an end-to-end framework for the estimation of 3D hand pose and shape, and successfully applied it to the task of hand gesture recognition.
Our model comprises three modules, a multi-task SFE, a VE, and an HE with weak re-projection, each of which has certain strengths and weaknesses.
More specifically, the SFE, thanks to the multi-task context provided by its pre-training, enables our architecture to achieve similar performance to more powerful schemes in the literature that exploit, for instance, graph representations. However, the 2D joint/silhouette estimates could be improved through the use of more accurate algorithms for the generation of ground truths.
The VE did not require any form of pre-training, due to the fine-grained input from the SFE; this module was able to generate the parameters necessary to move from a 2D space to a 3D one, which is more easily applied in a diverse range of tasks and datasets.
It was important to experiment in order to find the right design for the VE, since an over- or underfitted module could have broken the entire system by producing viewpoint parameters that do not allow for a correct estimation of the hand pose/shape.
Finally, unlike other existing works, the HE was able to output accurate estimations without requiring an iterative process, as its re-projection procedure allowed for closer correlation between the 3D and 2D hand representations during training. However, a regularization term was still required, as without this the meshes completely collapsed onto themselves when the system tried to generate near-perfect spatial 3D joint estimations.
We further improved our architecture through the use of advanced strategies such as skeleton adaptation and hourglass output concatenation, to obtain both more refined 3D joint locations and finer grained input representations.
Our experimental results demonstrate that the multi-task SFE, VE, HE with weak re-projection, and the use of advanced strategies, which were designed by exploiting and extending schemes in the literature, achieved state-of-the-art performance on the task of 3D hand pose and shape estimation.
Moreover, when applied to hand gesture recognition on both benchmark datasets, our framework outperformed other schemes that were devised for the estimation task and later employed to recognize hand gestures.
In future work, to address the current weaknesses of our system, we plan to upgrade the SFE, first by increasing the accuracy of its 2D heatmap and silhouette estimations through ground truths generated via deep learning-based segmentation algorithms. We will also design other meaningful features to extend the multi-task strategy.
In addition, we will explore solutions for the HE that are not based on the MANO layer but on other approaches such as graph representations of the hand, in order to increase the abstraction capabilities of the model. In view of this, additional experiments will be performed in which we will retain the 3D shape when moving to the hand gesture recognition task, with the aim of improving the final results.
Although the proposed model currently addresses the pose/shape estimation of single hands by design, it could be extended to simultaneously handle inputs containing both hands. Thus, another possible avenue for future work would involve the exploration of alternative architectural extensions or design modifications to handle input images containing two hands.
Moreover, since the proposed architecture can process roughly 27 images per second, we will design an adaptation to try to achieve real-time gesture recognition from video sequences.
\section*{Acknowledgment}
This work was supported in part by the MIUR under grant “Departments of Excellence 2018–2022” of the Department of Computer Science of Sapienza University.
\section{Introduction}\label{intro}
3D pose estimation is a fundamental technology that has become very important in recent years for computer vision-based tasks, primarily due to advances in several practical applications such as virtual reality (VR) \citep{alam2022unified}, augmented reality (AR) \citep{makar2014interframe}, sign language recognition \citep{avola2018exploiting} and, more generally, gesture recognition \citep{guo2021normalized}. In these fields, much of the current effort is directed towards the pose estimation of hands since, due to the high number of joints, they are one of the most complex components of the human body \citep{rehg1994visual}. To address this complexity, researchers generally follow either a model-driven or a data-driven strategy. The former uses articulated hand models to describe bones, muscles, and tendons, for example through kinematics constraints \citep{de2010variational,de2011model}, whereas the latter directly exploits depth, RGB-D, or RGB images \citep{zhao2017simple,dibra2018monocular,mueller2018ganerated} to extract keypoints that represent a hand. Although both strategies have their merits, data-driven approaches have allowed various systems to achieve significant performance while remaining more straightforward to implement, and are therefore usually the preferred of the two strategies.
Although early data-driven works were based on machine learning or computer vision algorithms such as random forest \citep{keskin2012hand} and geodesic distance-based systems \citep{tang2014latent}, attention has recently shifted towards deep learning methods \citep{li2019survey}. This is due to the high performance obtained in a heterogeneous range of fields such as emotion recognition \citep{sheng2021multi,avola2020deep}, medical image analysis \citep{yan2021development,avola2021multimodal}, and person re-identification \citep{prasad2021spatio,wu2022learning}, as well as the availability of commodity hardware for capture systems \citep{yuan2017bighand2} that can provide different types of input (e.g., depth maps).
For the task of 3D hand pose estimation, methods based on deep learning architecture configurations such as multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and autoencoders have been proposed. These methods usually analyze hand keypoints via 2D heatmaps, which represent hand skeletons and are extrapolated from depth \citep{zhao2017simple}, RGB-D \citep{dibra2018monocular}, or RGB \citep{iqbal2018hand} input images. While the first two of these input options provide useful information for estimating the 3D pose through the depth component, near state-of-the-art results have been obtained by exploiting single RGB images \citep{cai2018weakly}.
The reasons for this are twofold. Firstly, even though commodity sensors are available, it is hard to acquire and correctly label a depth dataset due to the intrinsic complexity of the hand skeleton, which has resulted in a lack of such datasets. Secondly, RGB images can easily be exploited in conjunction with data augmentation strategies, thus allowing a network to be trained more easily \citep{tanner1987calculation}.
To further improve the estimation of 3D hand pose from 2D images, several recent works have exploited more complex architectures (e.g., residual network) and the multi-task learning paradigm by generating 3D hand shapes together with their estimated pose.
By leveraging these approaches together with hand models, such as the model with articulated and non-rigid deformations (MANO) \citep{romero2017embodied} and graph CNNs \citep{ge20193d}, various systems have achieved state-of-the-art performance. In particular, they can produce correct hand shapes and obtain more accurate pose estimations from an analysis of 2D hand heatmaps and depth maps generated from a single RGB input image \citep{zhang2019end}.
Inspired by the results reported in other works, we propose a keypoint-based end-to-end framework that can give state-of-the-art performance for both 3D hand pose and shape estimation; we also show that this system can be successfully applied to the task of hand gesture recognition and can outperform other keypoint-based works.
In more detail, our framework first applies a pre-processing phase to normalize RGB images containing hands.
A semantic feature extractor (SFE) with a multi-task stacked hourglass network is employed for the first time in the literature to simultaneously generate 2D heatmaps and hand silhouettes starting from an RGB image.
A novel viewpoint encoder (VE) is used to reduce the number of parameters required to encode the feature space representing the camera view during the computation of the viewpoint vector.
A stable hand pose/shape estimator (HE) based on a fine-tuned MANO layer is employed in conjunction with an improved version of a neural 3D mesh renderer \citep{kato2018neural}. This is extended via a custom weak perspective projection through the 2D re-projection of the generated 3D joint positions and meshes. Finally, a multi-task loss function is used to train the various framework components to carry out the 3D hand pose and shape estimation.
The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We present a comprehensive end-to-end framework based on keypoints that combines and improves upon several different technologies to generate 3D hand pose and shape estimations;
\item We propose a multi-task SFE, design an optimized VE, and introduce a re-projection procedure for more stable outputs;
\item We evaluate the generalization capabilities of our model on the task of hand gesture recognition and show that it outperforms other relevant keypoint-based approaches developed for 3D hand estimation.
\end{itemize}
The rest of this paper is organized as follows. Section~\ref{sec:related} introduces relevant work that inspired this study. Section~\ref{sec:method} presents an exhaustive description of the components of our framework. Section~\ref{sec:results} describes the experiments performed to validate the proposed approach and presents a comparison with other state-of-the-art methods for hand pose, shape estimation, and hand-gesture recognition tasks. Finally, Section~\ref{sec:conclusions} draws some conclusions from this study.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/section3/flow.pdf}
\caption{Proposed framework flowchart.}
\label{fig:flowchart}
\end{figure}
\section{Related Work}\label{sec:related}
Methods of 3D pose and shape estimation generally exploit depth, RGB-D, or RGB images. The last of these is usually the preferred solution due to the availability of datasets; however, approaches that leverage depth information have provided solutions and ideas that can also be applied to standalone RGB images.
For instance, the study in \cite{sun2015cascaded} introduces a hierarchical regression algorithm that starts from depth maps, describes hand keypoints based on their geometric properties, and divides the hand into meaningful components, such as the fingers and palm, to obtain 3D poses.
In contrast, the study in \cite{malik2020handvoxnet} uses depth maps to build both 3D hand shapes and surfaces by defining 3D voxelized depth maps that can mitigate possible depth artifacts.
The representation of meaningful hand components and reductions in input noise are also relevant problems for RGB and RGB-D images.
For example, when considering RGB-D inputs for the task of 3D hand pose estimation, \cite{oikonomidis2011efficient} and \cite{qian2014realtime} use the depth component to define hand characteristics through geometric primitives that are later matched with the RGB information to generate 3D poses, and hence to track the hands.
Specifically, spheres, cones, cylinders, and ellipsoids are used to describe the palm and fingers in \cite{oikonomidis2011efficient}, while the approach in \cite{qian2014realtime} employs only sphere primitives for faster computation.
Using a different technique, \cite{dibra2018monocular} focuses on handling input noise by using synthetic RGB-D images to train a CNN. In particular, the use of artificial RGB/depth image pairs is shown by the authors to alleviate the effects of missing or unlabeled depth datasets.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/section3/archi.pdf}
\caption{Proposed framework architecture overview.}
\label{fig:archi}
\end{figure*}
Approaches that exploit depth information are inherently more suitable for the 3D pose estimation task since they suffer less from image ambiguities compared to systems based exclusively on RGB images.
In view of this, \cite{iqbal2018hand} introduces a 2.5D representation by building a latent depth space via an autoencoder that retains depth information without directly using this type of data. The authors further refine this latent space through an element-wise multiplication with 2D heatmaps to increase the depth consistency and obtain realistic 3D hand poses.
Another work that focuses on depth retention without using the extra information at test time is presented in \cite{cai20203d}. This scheme first employs a conditional variational autoencoder (VAE) to build the latent distribution of joints via the 2D heatmaps extracted from the input RGB image, and then exploits a weak-supervision approach through a depth regularizer that forces the autoencoder to consider automatically generated depth information in its latent space at training time.
A similar weak-supervision rationale is also applied in \cite{zhang2019end} where, in addition to depth information, the hand shape consistency is evaluated through a neural renderer.
More specifically, by exploiting the outputs of the MANO layer (i.e., the 3D hand pose and mesh), the authors project the 3D joint coordinates defining the hand pose into a 2D space to account for depth information, and implement a neural renderer to generate silhouettes from hand shapes to increase the consistency of the results. In the present work, we refine this procedure further via a weak re-projection that is applied to both 3D joint locations and mesh so that the proposed framework can also be applied to a different task.
Accounting for depth information without directly using such data enables higher performance when only RGB images are analyzed.
However, this image format introduces several challenges that must be addressed, such as different camera view parameters, background clutter, occlusions, and hand segmentation.
In general, to handle these problems, RGB-based methods define a pipeline that includes feature extraction from the input image (usually in the form of 2D heatmaps), a latent space representation of such features to allow for the extrapolation of meaningful view parameters, and the 3D hand pose estimation based on the computed view parameters.
For instance, the study in \cite{zimmermann2017learning} implements a CNN called HandSegNet to identify hand silhouettes so that the input images can be cropped and resized around the hand. A second CNN (PoseNet) is then used to extract the features, i.e., 2D heatmaps, allowing the network to estimate the 3D pose via symmetric streams and to analyze the prior pose and the latent pose representation derived by the network.
The authors of \cite{baek2020weakly} instead devise a domain adaptation strategy in which a generative adversarial network (GAN), driven by the 2D heatmaps extracted from the input by a convolutional pose machine, automatically outputs hand-only images from hand-object images. The resulting hand-only images are then used to estimate the correct 3D pose, even in the case of occlusions (e.g., from the object being held). Object shapes are also exploited in \cite{hasson2019learning} to handle occlusions that arise during the task of 3D hand pose estimation. The authors describe the use of two parallel encoders to obtain latent representations of both hand and object, which are in turn employed to define meaningful hand-object constellations via a custom contact loss, so that consistent 3D hand poses can be generated. In contrast, the authors of \cite{yang2019disentangling} directly address the issues of background clutter and different camera view parameters by designing a disentangling VAE (dVAE) to decouple hand, background, and camera view, using a latent variable model, so that a MANO layer can receive the correct input to generate the 3D hand pose.
Furthermore, through the use of the dVAE, the authors are able to synthesize realistic hand images in a given 3D pose, which may also alleviate the issue of low numbers of available datasets.
The use of the pipeline described above allows models to achieve state-of-the-art performance on the 3D hand pose estimation task. Nevertheless, depending on the 3D joint position generation procedure, a good latent space representation is necessary to ensure an effective system. For example, the scheme in \cite{ge20193d} utilizes a stacked hourglass to retrieve 2D heatmaps from the input RGB image. A residual network is then implemented to generate a meaningful latent space, and a graph CNN is built to define both the 3D pose and the shape of the input hand. To further improve the results, the authors pre-train all networks on a synthetic dataset before fine-tuning them on the estimation task.
Unlike the model introduced in \cite{ge20193d}, which is used as a starting point, the framework presented here extends the use of a stacked hourglass and residual network by defining a multi-task SFE and VE, respectively. Moreover, only the multi-task feature extractor is pre-trained on synthetic data so that the VE is able to obtain a hand abstraction that can be utilized in different tasks such as hand gesture recognition.
Finally, the latent space representation is also relevant when using other procedures for 3D pose estimation, such as the MANO layer, as discussed in \cite{boukhayma20193d} and \cite{baek2019pushing}. In more detail, the former of these schemes extracts 2D heatmaps via a CNN, then employs an encoder to generate the MANO layer view parameters directly, while in the latter approach, a latent space is built by an evidence estimator module that leverages a convolutional pose machine, which generates the required parameters for the MANO layer. Both methods implement a re-projection procedure for the 3D joints, and this was further extended in \cite{baek2019pushing} via iterative re-projection to improve the final 3D hand pose and hence the shape estimations. In the framework presented here, we exploit this interesting strategy in a different way from the two approaches described above by also applying the re-projection procedure to the mesh generated by the MANO layer so that the estimation can benefit from both outputs rather than only the 3D locations.
\section{Method}\label{sec:method}
The proposed framework for 3D hand pose and shape estimation starts from a pre-processed hand image input, and first generates 2D heatmaps and hand silhouettes through the use of a multi-task SFE. Secondly, it estimates the camera, hand pose, and hand shape view parameters by exploiting the semantic features using a VE. Finally, it computes the hand 2D and 3D joints, mesh and silhouette by feeding the estimated view parameters to the HE, which consists of a MANO layer, weak perspective projection, and neural renderer components. We note that a single compound loss function is employed to drive the learning phase of the various modules jointly.
A flowchart for the framework is shown in Fig.~\ref{fig:flowchart} and the high-level pipeline is illustrated in Fig.~\ref{fig:archi}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/section3/sfe.pdf}
\caption{Modified stacked hourglass network used as a multi-task semantic feature extractor.}
\label{fig:sfe}
\end{figure*}
\subsection{Pre-processing}
Pre-processing of the input image is a necessary step in order to handle different image sizes and to reduce the amount of background clutter in the samples of a given dataset. More specifically, to allow the proposed framework to focus on hands, each image is modified so that the hand is always centered and there is as little background as possible, while still retaining all 21 hand joint keypoints. The hand is centered by selecting the metacarpophalangeal joint (i.e., the base knuckle) of the middle finger as the center crop point $p_c$. The crop size $l$ is then computed as follows:
\begin{equation}
l = 2\max\left(p_{max}-p_c,\; p_c-p_{min}\right),
\end{equation}
where $p_{max}$ and $p_{min}$ are the joint keypoint coordinates at the largest and smallest $(x, y)$ distances from $p_c$. The crop size $l$ is then enlarged by a further $20\%$ (i.e., $\sim$20 px of padding in all directions) to ensure that all hand joints remain fully visible inside the cropped area.
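A minimal sketch of this cropping procedure follows; the joint ordering (index 9 for the middle-finger metacarpophalangeal joint) and the border handling are illustrative assumptions.
\begin{verbatim}
import numpy as np

def crop_hand(image, joints_2d, pad=0.2):
    """Center-crop around the hand, keeping all 21 joint keypoints visible.

    joints_2d: (21, 2) array of (x, y) pixel coordinates; index 9 is assumed
    to be the middle-finger metacarpophalangeal joint (dataset-dependent).
    """
    p_c = joints_2d[9]                                    # crop center
    l = 2 * max((joints_2d - p_c).max(), (p_c - joints_2d).max())
    l = int(l * (1 + pad))                                # ~20% padding
    x0, y0 = int(p_c[0] - l / 2), int(p_c[1] - l / 2)
    return image[max(y0, 0):y0 + l, max(x0, 0):x0 + l]
\end{verbatim}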
\subsection{Semantic Feature Extractor}
Inspired by the results obtained in \cite{ge20193d}, we implemented a modified version of a stacked hourglass network \citep{newell2016stacked} to take advantage of the multi-task learning approach. In particular, 2D heatmaps and hand silhouette estimates are generated based on a $256\times256\times3$ normalized (i.e., with zero-mean and unit variance) image $I_{normalized}$. The hourglass architecture was selected as it can capture many features, such as hand orientation, articulation structure, and joint relationships, by analyzing the input image at different scales. Four convolutional layers are employed in the proposed architecture to reduce the input image to a size of $64\times64$ via two max pooling operations in the first and third layers. The downsized images are then fed to the hourglass module and intermediate heatmaps and silhouettes are generated by processing local and global contexts in a multi-task learning scenario.
These two outputs, of size $64\times64\times21$ (i.e., one channel per hand joint) and $64\times64\times2$ (i.e., back and foreground channels) for 2D heatmaps and silhouette, respectively, are then mapped to a larger number of channels via a $1\times1$ convolution to reintegrate the intermediate feature predictions into the feature space.
These representations are then summed with the hourglass input into a single vector $\hat{f}$, thus effectively introducing long skip connections to reduce data loss for the second hourglass module. Finally, this second module is employed to extract the semantic feature vector $f$ that contains the effective 2D heatmaps and hand silhouette used by the VE to regress camera view, hand pose, and hand shape parameters. Note that unlike $\hat{f}$, the vector $f$ is computed via concatenation of 2D heatmaps, hand silhouette, and $\hat{f}$, which provides the VE with a comprehensive representation of the input.
In the hourglass components, each layer employs two residual modules \citep{he2016deep} for both downsampling and upsampling sequences. In the former, the features are processed down to a very low resolution (i.e., $4\times4$), whereas in the latter, images are upsampled and combined with the extracted features across all scales.
In more detail, the information from two adjacent resolutions is merged by first increasing the lower resolution through nearest-neighbor interpolation and then performing an element-wise addition. After the upsampling sequence, three consecutive $1\times1$ convolutions are applied to obtain the multi-task output (i.e., 2D heatmaps and silhouette) used by the VE. The presented SFE architecture, with the corresponding layer sizes, is illustrated in Fig.~\ref{fig:sfe}.
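To make the reintegration step concrete, the following PyTorch sketch shows how the $1\times1$ remapping and the long skip connection between the two hourglass modules can be combined; the feature channel width is an assumption, not the exact configuration of our network.
\begin{verbatim}
import torch.nn as nn

class IntermediateRemap(nn.Module):
    """Sketch: reinject intermediate predictions between hourglass modules."""
    def __init__(self, feat_ch=256, n_joints=21):
        super().__init__()
        self.remap_hm = nn.Conv2d(n_joints, feat_ch, kernel_size=1)
        self.remap_sil = nn.Conv2d(2, feat_ch, kernel_size=1)

    def forward(self, hg_in, heatmaps, silhouette):
        # Long skip connection: sum the remapped intermediate predictions
        # with the first hourglass input to obtain the vector f_hat.
        return hg_in + self.remap_hm(heatmaps) + self.remap_sil(silhouette)
\end{verbatim}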
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/section3/wpe.pdf}
\caption{Viewpoint encoder architecture.}
\label{fig:wpe}
\end{figure}
\subsection{Viewpoint Encoder}
The extracted semantic feature vector $f$ is resized to dimensions of $256\times256$ and is then used as input for the second framework component, i.e., the VE, which has two main objectives. Firstly, this unit generates a set of parameters $v$, which are employed by the last component of the pipeline to produce the 3D hand pose and shape.
The vector $v$ contains the camera view translation $t\in \mathbb{R}^2$, scaling $s\in \mathbb{R}^+$, and rotation $R \in \mathbb{R}^3$, as well as the hand pose $\theta\in \mathbb{R}^{45}$ and shape $\beta \in \mathbb{R}^{10}$ values necessary to move from a 2D space to a 3D one.
Secondly, the VE also needs to substantially reduce the number of trainable parameters of the architecture to satisfy the hardware constraints.
In more detail, given the semantic feature vector $f$, a flattened latent viewpoint feature space $l_v$ that encodes semantic information is obtained through four abstraction blocks. Each block contains two residual layers \citep{he2016deep}, which analyze the input, and a max-pooling layer, which both consolidates the viewpoint representation and reduces the input down to a $64\times64$ size.
Subsequently, to obtain the set $v=\{t, s, R, \theta, \beta\}$, the VE transforms the latent space representation $l_v$ into $v\in\mathbb{R}^{61}$, by employing three dense layers to elaborate on the representation derived by the residual layers.
We note that by employing a max-pooling layer inside each abstraction block, a smaller latent space is obtained, meaning that fewer parameters need to be trained; hence, both of the objectives for this module are achieved.
The architecture of the VE is shown in Fig.~\ref{fig:wpe}.
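A minimal PyTorch sketch of this encoder is given below. The input channel count, convolutional widths, and dense layer sizes are assumptions chosen for illustration (the $VE_1$/$VE_2$ variants discussed later differ precisely in the dense layer sizes), so the sketch conveys the structure rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Basic residual layer (a simplification of He et al., 2016)."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
            nn.Conv2d(cout, cout, 3, padding=1))
        self.skip = nn.Conv2d(cin, cout, 1) if cin != cout else nn.Identity()

    def forward(self, x):
        return torch.relu(self.conv(x) + self.skip(x))

class ViewpointEncoder(nn.Module):
    def __init__(self, in_ch=24, width=64):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(4):  # four abstraction blocks
            blocks += [ResBlock(ch, width), ResBlock(width, width),
                       nn.MaxPool2d(2)]
            ch = width
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Sequential(  # three dense layers -> v
            nn.Flatten(),
            nn.Linear(width * 16 * 16, 2048), nn.ReLU(),
            nn.Linear(2048, 2048), nn.ReLU(),
            nn.Linear(2048, 61))

    def forward(self, f):           # f: (B, in_ch, 256, 256)
        v = self.head(self.blocks(f))
        t, s = v[:, :2], v[:, 2:3]  # camera translation and scale
        R, theta, beta = v[:, 3:6], v[:, 6:51], v[:, 51:]
        return t, s, R, theta, beta
\end{verbatim}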
\subsection{Hand Pose/Shape Estimator}
The last component of the framework utilizes the parameter set $v=\left\{t,s,R,\theta,\beta\right\}$ to generate the hand pose and shape. The estimator applies a MANO layer for the 3D joint positions and hand mesh generation. These outputs are then improved during training by leveraging a weak perspective projection procedure and a neural renderer for more accurate estimations.
\subsubsection{MANO Layer}
This layer models the properties of the hand, such as the slenderness of the fingers, the thickness of the palm, as well as the hand pose, and controls the 3D surface deformation defined from articulations. More formally, given the pose $\theta$ and shape $\beta$ parameters, the MANO hand model $M$ is defined as follows:
\begin{equation}
M(\beta,\theta)= W(T_p(\beta,\theta),J(\beta),\theta,\mathcal{W}),
\end{equation}
where $W$ is a linear blend skinning (LBS) function \citep{loper2015smpl}; $T_p$ corresponds to the articulated mesh template to blend, consisting of $K$ joints; $J$ represents the joint locations learned from the mesh vertices via a sparse linear regressor; and $\mathcal{W}$ indicates the blend weights.
To avoid the common problems of LBS models, such as overly smooth outputs or mesh collapse near joints, the template $T_p$ is obtained by deforming a mean mesh $\hat{T}$ using the shape and pose blend functions $B_S$ and $B_P$, via the following equation:
\begin{equation}
T_p = \hat{T} + B_S(\beta) + B_P(\theta),
\end{equation}
where $B_S$ and $B_P$ allow us to vary the hand shape and to capture deformations derived from bent joints, respectively, and are computed as follows:
\begin{equation}
B_S(\beta) = \sum_{n=1}^{|\beta|}\beta_nS_n,
\end{equation}
\begin{equation}
B_P(\theta) = \sum_{n=1}^{9K}\left(R_n(\theta)-R_n(\theta^*)\right)P_n,
\end{equation}
where $S_n\in S$ are the blend shapes computed by applying principal component analysis (PCA) to a set of registered hand shapes, normalized by a zero pose $\theta^*$; $9K$ is the total number of rotation matrix elements (nine for each of the $K$ hand articulations); $R_n$ indicates the $n$-th rotation matrix element; and $P_n\in P$ corresponds to the blend poses.
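For concreteness, the two blend corrections can be evaluated directly from the definitions above, as in the following sketch; the tensor shapes are assumptions inferred from the equations rather than those of the reference MANO implementation.
\begin{verbatim}
import torch

def blend_corrections(beta, pose_rotmats, rest_rotmats, S, P):
    """Sketch of the MANO shape/pose blend terms.

    beta:         (10,)       shape coefficients
    pose_rotmats: (K, 3, 3)   per-articulation rotations, current pose
    rest_rotmats: (K, 3, 3)   rotations of the zero pose theta*
    S:            (10, V, 3)  PCA shape blend shapes
    P:            (9K, V, 3)  pose blend shapes
    """
    B_S = torch.einsum('n,nvc->vc', beta, S)        # sum_n beta_n * S_n
    dR = (pose_rotmats - rest_rotmats).reshape(-1)  # 9K rotation scalars
    B_P = torch.einsum('n,nvc->vc', dR, P)          # sum_n (R_n - R_n*) P_n
    return B_S, B_P        # both are added to the mean mesh T_hat
\end{verbatim}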
Finally, there is a natural variability among hand shapes in a human population, meaning that possible skeleton mismatches might be found in the MANO layer 3D joint output. To address this issue, we implemented a skeleton adaptation procedure following the solution presented in \cite{hasson2019learning}. Skeleton adaptation is achieved via a linear layer initialized to the identity function, which maps the MANO joints to the final joint annotations.
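This adaptation step admits a very small sketch; the joint count of 21 and the transpose convention are illustrative assumptions.
\begin{verbatim}
import torch.nn as nn

# A linear layer over the 21 MANO joints, initialized to the identity so
# that training starts from the unmodified MANO skeleton.
adapt = nn.Linear(21, 21, bias=False)
nn.init.eye_(adapt.weight)

# joints: (B, 21, 3); transpose so the layer mixes joints, not coordinates:
# adapted = adapt(joints.transpose(1, 2)).transpose(1, 2)
\end{verbatim}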
\subsubsection{Weak Perspective Projection}
The weak perspective projection procedure takes as inputs the translation vector $t$, the scalar parameter $s$, and the 3D hand pose $RJ(\beta,\theta)$ derived from the MANO model $M(\theta,\beta)$, and re-projects the generated 3D keypoints back onto a 2D space to allow for identification of the 2D hand joint locations. This approximation allows us to train the model without defining intrinsic camera parameters, since consistency between the input and the projected 2D locations can be enforced, thus avoiding issues arising from the different device calibrations that are typical of different datasets. Formally, the re-projection $w$ is computed as follows:
\begin{equation}
w_{2D} = s\Pi(RJ(\beta,\theta)) + t,
\end{equation}
where $\Pi$ corresponds to the orthographic projection.
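Assuming that the orthographic projection $\Pi$ simply discards the depth coordinate, the re-projection can be sketched as follows.
\begin{verbatim}
import torch

def weak_perspective(joints_3d, s, t):
    """w_2D = s * Pi(R J(beta, theta)) + t  (joints already rotated by R).

    joints_3d: (B, 21, 3); s: (B, 1) scale; t: (B, 2) translation.
    """
    pi = joints_3d[..., :2]       # orthographic projection: drop z
    return s[:, None, :] * pi + t[:, None, :]  # broadcast over the joints
\end{verbatim}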
\begin{figure*}[t]
\subfloat[$v_i=(x_i,y_i)$ is a face vertex. $I_j$ is the color of pixel $P_j$. $x_0$ is the current $x_i$ position. $x_1$ is the $x_i$ position at which a face edge collides with the center of $P_j$, with $x_i$ moving to its right. When $x_i=x_1$, then $I_j=I_{ij}$.]{\includegraphics[width=.485\textwidth]{images/section3/outside_raster.png}}
\hfill
\subfloat[$v_i=(x_i,y_i)$ is a face vertex. $I_j$ is the color of pixel $P_j$. $x_0$ is the current $x_i$ position. $x_1^a$ and $x_1^b$ are the $x_i$ positions at which a face edge collides with the center of $P_j$, with $x_i$ moving to its left or right. When $x_i=x_1^a|x_1^b$, then $I_j=I_{ij}^a|I_{ij}^b$.]{\includegraphics[width=.485\textwidth]{images/section3/inside_raster.png}}
\caption{Rasterization process for pixels residing outside a given face (i) and inside it (ii). Image courtesy of \cite{kato2018neural}.}
\label{fig:rasterization}
\end{figure*}
\subsubsection{Neural Renderer}
To improve the mesh generated by the MANO layer, the differentiable neural renderer devised in \cite{kato2018neural}, which is trainable via back-propagation, is employed to rasterize the 3D shape into a hand silhouette. This silhouette, analogously to the re-projected hand joint coordinates, is then used to improve the mesh generation. Formally, given a 3D mesh composed of $N=778$ vertices $\{v_1^o, v_2^o, \dots, v_N^o\}$, with $v_i^o\in\mathbb{R}^{3}$, and $M=1538$ faces $\{f_1, f_2, \dots, f_M \}$, with $f_j\in\mathbb{N}^{3}$, the vertices are first projected onto the 2D screen space using the weak perspective projection via the following equation:
\begin{equation}
v_i^s = s\Pi(Rv_i^o) + t,
\end{equation}
where $R$ corresponds to the rotation matrix used to build the MANO hand model $M(\beta,\theta)$. Rasterization is then applied to generate an image from the projected vertices $v_i^s$ and faces $f_j$ (where $i\in\{1,\dots,N\}$ and $j\in\{1,\dots,M\}$) via sampling, as explained in \cite{kato2018neural}.
Different gradient flow rules are used to handle vertices residing outside or inside a given face. For ease of explanation, only the $x$ coordinate of a single vertex $v_i^s$ and a pixel $P_j$ are considered in each scenario. Note that the color of $P_j$ corresponds to a function $I_j(x_i)$, which is defined by freezing all variables except $x_i$ to compute the partial derivatives.
As shown in Fig.~\ref{fig:rasterization}, during the rasterization process, the color $I_j(x_i)$ of a pixel $P_j$ changes instantly from $I(x_0)$ to $I_{ij}$ (i.e., starting and hitting point, respectively) when $x_i$ reaches a point $x_1$ where an edge of the face collides with $P_j$ center.
Note that such a change can be observed for pixels located either outside the face (Fig.~\ref{fig:rasterization}.i.b) or inside it (Fig.~\ref{fig:rasterization}.ii.b).
If we then let $\delta_i^x=x_1-x_0$ be the distance traveled from a starting point $x_0$, and $\delta_j^I=I(x_1)-I(x_0)$ be the color change, it is straightforward to see that when computing the partial derivatives $\partial I_j(x_i)/\partial x_i$ during the rasterization process, they will be zero almost everywhere due to the sudden color change (Fig.~\ref{fig:rasterization}.i/ii.c).
To address this issue, the authors of \cite{kato2018neural} introduce a gradual change between $x_0$ and $x_1$ via linear interpolation (Fig.~\ref{fig:rasterization}.i/ii.d), which allows for transformation of the partial derivatives $\partial I_j(x_i)/\partial x_i$ into $\delta_j^I/\delta_i^x$, since the gradual change allows for non-zero derivatives (Fig.~\ref{fig:rasterization}.i/ii.e).
However, $I_j(x_i)$ has different left and right derivatives at $x_0$. The error signal $\delta_j^P$, which is back-propagated to pixel $P_j$ and indicates whether $P_j$ should become brighter or darker, is therefore also used to handle the case where $x_i=x_0$. Specifically, the authors of \cite{kato2018neural} define gradient flow rules to correctly modify pixels residing either outside or inside a given face during backpropagation.
For $P_j$ residing outside the face, the gradient flow rule is defined as follows:
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0} = \begin{cases} \frac{\delta_j^I}{\delta_i^x}, & \mbox{if } \delta_j^P\delta_j^I<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^I\ge0, \end{cases}
\label{eq:pj_out}
\end{equation}
where $\delta_i^x=x_1-x_0$ indicates the distance traveled by $x_i$ during the rasterization procedure; $\delta_j^I=I(x_1)-I(x_0)$ represents the color change; and $\delta_j^P$ corresponds to the error signal backpropagated to pixel $P_j$.
As described in \cite{kato2018neural}, to minimize the loss during training, pixel $P_j$ must become darker when $\delta_j^P>0$. Since the sign of $\delta_j^I$ denotes whether $P_j$ can become brighter or darker, for $\delta_j^I>0$ pixel $P_j$ becomes brighter when pulling $x_i$ toward the face but cannot become darker by moving $x_i$, as that would require a negative $\delta_j^I$. Thus, the gradient should not flow if $\delta_j^P>0\:\wedge\:\delta_j^I>0$.
In addition, the face and $P_j$ might also not overlap, regardless of where $x_i$ moves, as the hitting point $x_1$ may not exist.
In this case, the derivatives are defined as $\partial I_j(x_i)/\partial x_i\rvert_{x_i=x_0}=0$.
This means that the gradient should never flow when $\delta_j^P\delta_j^I\ge0$, in accordance with Eq.~\ref{eq:pj_out}.
Finally, we note that the derivative with respect to $y_i$ is obtained by applying Eq.~\ref{eq:pj_out} along the $y$-axis.
For a point $P_j$ residing inside the face, the left and right derivatives are first defined \cite{kato2018neural} at $x_0$ as $\delta_j^{Ia}=I(x_1^a)-I(x_0)$, $\delta_j^{Ib}=I(x_1^b)-I(x_0)$, $\delta_x^a=x_1^a-x_0$, and
$\delta_x^b=x_1^b-x_0$. Then, in a similar way to a point $P_j$ residing outside the face, they define the gradient flow rules as follows:
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0} = \left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^a + \left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^b,
\end{equation}
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^a = \begin{cases} \frac{\delta_j^{I^a}}{\delta_x^a}, & \mbox{if } \delta_j^P\delta_j^{I^a}<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^{I^a}\ge0, \end{cases}
\end{equation}
\begin{equation}
\left. \frac{\partial I_j(x_i)}{\partial x_i}\right\rvert_{x_i=x_0}^b = \begin{cases} \frac{\delta_j^{I^b}}{\delta_x^b}, & \mbox{if } \delta_j^P\delta_j^{I^b}<0; \\ 0, & \mbox{if } \delta_j^P\delta_j^{I^b}\ge0. \end{cases}
\end{equation}
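These rules admit a compact sketch as plain scalar functions (the actual renderer of \cite{kato2018neural} applies them per pixel-vertex pair inside the backward pass):
\begin{verbatim}
def grad_outside(delta_I, delta_x, delta_P):
    """Eq. (pj_out): let the gradient flow only when it would reduce the
    loss, i.e., when the error signal and the color change have opposite
    signs; otherwise the derivative is set to zero."""
    return delta_I / delta_x if delta_P * delta_I < 0 else 0.0

def grad_inside(dIa, dxa, dIb, dxb, delta_P):
    """Inside a face, the left and right contributions are summed."""
    return grad_outside(dIa, dxa, delta_P) + grad_outside(dIb, dxb, delta_P)
\end{verbatim}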
\subsection{Multi-task Loss}\label{subsec:multi_task_loss}
The final outputs of the proposed framework are the 3D joint positions and the hand mesh. However, in real-world datasets, the ground truths for 3D hand mesh, pose, shape, and view parameters, which are unknown in unconstrained situations, are hard to collect. Our framework therefore automatically generates intermediate representations (i.e., 2D heatmaps, hand silhouettes, and 2D re-projected joint positions and meshes), which are then exploited to train the whole system jointly using a single loss function.
The ground truths in this case are defined as follows:
\begin{itemize}
\item 2D heatmaps (one per joint) are built using a 2D Gaussian with a standard deviation of $2.5$ pixels, centered on the 2D joint location annotations provided in the datasets, to describe the likelihood of a given joint residing in that specific area (see the sketch after this list);
\item Hand silhouettes are computed from the input image with the GrabCut algorithm, implemented using the OpenCV library, and 2D joint annotations from the datasets are used to initialize the foreground, background, and probable foreground/background regions, following \cite{boukhayma20193d};
\item The 3D joint positions and 2D re-projected joint positions are compared directly with the 3D and 2D joint positions provided by the various datasets;
\item The 2D re-projected meshes are compared with the same hand silhouette masks built with the GrabCut algorithm.
\end{itemize}
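As an illustration of the first item, the heatmap ground-truth generation can be sketched as follows, assuming the joint coordinates have already been rescaled to the $64\times64$ heatmap resolution.
\begin{verbatim}
import numpy as np

def gaussian_heatmap(joint_xy, size=64, sigma=2.5):
    """One ground-truth heatmap: a 2D Gaussian (std 2.5 px) centered on
    the annotated joint location, encoding the joint location likelihood."""
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    d2 = (xs - joint_xy[0]) ** 2 + (ys - joint_xy[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))
\end{verbatim}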
Formally, the multi-task loss function employed here is defined through the following equation:
\begin{equation}
L = L_{SFE} + L_{3D} + L_{2D} + L_{M} + L_{Reg},
\end{equation}
where $L_{SFE}$, $L_{3D}$, $L_{2D}$, $L_{M}$, and $L_{Reg}$, represent the semantic feature extractor (i.e., 2D heatmaps and hand silhouette), 3D joint positions, 2D joint re-projection, hand silhouette mask (i.e., the re-projected mesh), and model parameter regularization losses, respectively.
\subsubsection{$L_{SFE}$}
The semantic feature extractor estimates 2D heatmaps and the hand silhouette. The loss for a given hourglass module $h_i$ is therefore defined as the sum of an $L_2$ loss on the heatmaps and a pixel-wise binary cross-entropy (BCE) loss on the silhouette, as follows:
\begin{equation}
L_{h_i} = \lVert H - \hat{H} \rVert_2^2 + L_{BCE}(M, \hat{M}),
\end{equation}
where $\hat{\cdot}$ is the hourglass output for the 2D heatmaps $H$ and the silhouette mask $M$, for which the ground truths are derived via the 2D Gaussian and GrabCut algorithm, respectively.
The two stacked hourglass network losses are then summed to apply intermediate supervision since, as demonstrated in \cite{newell2016stacked}, this improves the final estimates. Thus, the SFE loss is defined in the following equation:
\begin{equation}
L_{SFE} = L_{h_1} + L_{h_2},
\end{equation}
\subsubsection{$L_{3D}$}
An $L_2$ loss is also used to measure the distance between the estimated 3D joint positions $RJ(\beta,\theta)$ and the ground truth coordinates $w_{3D}$ provided by the datasets, as follows:
\begin{equation}
L_{3D} = \lVert w_{3D} - RJ(\beta,\theta) \rVert_2^2.
\end{equation}
\subsubsection{$L_{2D}$}
The 2D re-projected hand joint positions loss is used to refine the view parameters $t$, $s$, and $R$, for which the ground truths are generally unknown. It is computed as follows:
\begin{equation}
L_{2D} = \lVert w_{2D} - \hat{w}_{2D} \rVert_1,
\end{equation}
where $\hat{w}_{2D}$ indicates the network 2D re-projected positions, and $w_{2D}$ are the ground truths of 2D joint positions annotated in a given dataset.
Notice that an $L_1$ loss is used here since it is less sensitive to outliers than the $L_2$ loss.
\subsubsection{$L_{M}$}
The silhouette mask loss $L_M$ provides weak supervision, since the hand mesh should be consistent with its silhouette \citep{zhang2019end} or depth map \citep{kato2018neural}. This $L_2$ loss therefore helps to refine both the hand shape and the camera view parameters, via the following equation:
\begin{equation}
L_M = \lVert M - \hat{M} \rVert_2^2,
\end{equation}
where $\hat{M}$ is the 2D re-projected mesh, and $M$ corresponds to the hand silhouette mask used as ground truth, extracted using the GrabCut algorithm.
\subsubsection{$L_{Reg}$}
The last loss component is a regularization term that is applied with the aim of reducing the magnitudes of the hand model parameters $\beta$ and $\theta$, in order to avoid unrealistic mesh representations. Focusing only on the 2D and 3D joint positions would result in a mesh that fits the joint locations while completely ignoring the actual anatomy of the hand. Hence, to avoid possible extreme mesh deformations, the following regularization term is used:
\begin{equation}
L_{Reg} = \lVert \beta \rVert_2^2 + \lVert \theta \rVert_2^2.
\end{equation}
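Putting the pieces together, the compound objective can be sketched as below; the dictionary layout and the BCE formulation (probabilities rather than logits) are simplifying assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def multi_task_loss(out, gt):
    """Sketch of L = L_SFE + L_3D + L_2D + L_M + L_Reg."""
    # Intermediate supervision: one (heatmaps, silhouette) pair per
    # hourglass module; silhouettes are assumed to be probabilities.
    L_sfe = sum(F.mse_loss(h, gt['heatmaps']) +
                F.binary_cross_entropy(m, gt['mask'])
                for h, m in out['hourglass'])
    L_3d = F.mse_loss(out['joints_3d'], gt['joints_3d'])  # L2, 3D joints
    L_2d = F.l1_loss(out['joints_2d'], gt['joints_2d'])   # L1: robust
    L_m = F.mse_loss(out['mesh_mask'], gt['mask'])        # re-projected mesh
    L_reg = out['beta'].square().sum() + out['theta'].square().sum()
    return L_sfe + L_3d + L_2d + L_m + L_reg
\end{verbatim}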
\section{Experimental Results}\label{sec:results}
In this section, we first introduce the benchmark datasets for 3D hand pose and shape estimation and hand gesture recognition that are used to validate our framework, and the data augmentation strategy that is employed to better exploit all the available samples.
We then present a comprehensive performance evaluation. We report the results of ablation studies on each component of the framework to highlight the effectiveness of the proposed approach both quantitatively and qualitatively. We conduct a comparison with state-of-the-art alternatives for the 3D hand pose and shape estimation, and present the results obtained from a different task (i.e., hand-gesture recognition), so that the abstraction capabilities of our framework can be fully appreciated.
\subsection{Datasets}
The following benchmark datasets are exploited to evaluate our framework: the synthetic object manipulation (ObMan) \citep{hasson2019learning}, stereo hand dataset (STB) \citep{zhang20163d}, Rhenish-Westphalian Technical University gesture (RWTH) \citep{dreuw2006modeling}, and the creative Senz3D (Senz3D) \citep{minto2015exploiting} datasets. Specifically, ObMan is used to pre-train the SFE to generate 2D heatmaps and hand silhouette estimations that are as accurate as possible; STB is employed to evaluate the 3D hand pose and shape estimations through ablation studies and comparisons with state-of-the-art methods; and RWTH and Senz3D are utilized to assess the generalization capabilities of our framework to the task of hand gesture recognition.
\subsubsection{ObMan}
This is a large-scale synthetic dataset containing images of hands grasping different objects such as bottles, bowls, cans, jars, knives, cellphones, cameras, and remote controls. Realistic images of embodied hands are built by transferring different poses to hands via the SMPL+H model \citep{romero2017embodied}. Several rotation and translation operations are applied to maximize the viewpoint variability, and to provide natural occlusions and coherent backgrounds. For each hand-object configuration, object-only, hand-only, and hand-object images are generated with the corresponding segmentation, depth map, and 2D/3D locations of 21 joint keypoints. From this dataset, we selected 141,550 RGB images with dimensions $256\times256$, showing either hand-only or hand-object configurations, to train the semantic feature extractor.
\subsubsection{STB}
This dataset contains stereo image pairs (STB-BB) and depth images (STB-SK), and was created for the evaluation of hand pose tracking/estimation difficulties in real-world scenarios.
Twelve different sequences of hand poses were collected with six different backgrounds representing static or dynamic scenes.
The hand and fingers are either moved slowly or randomly to give both simple and complex self-occlusions and global rotations.
Images in both collections have the same resolution of $640\times480$, identical camera pose, and similar viewpoints.
Furthermore, both subsets contain 2D/3D joint locations of 21 keypoints.
From this collection, we used only the STB-SK subset to evaluate the proposed network, and divided it into 15,000 and 3,000 samples for the training and test sets, respectively.
\subsubsection{RWTH}
This dataset includes fingerspelling gestures from the German sign language. It consists of RGB video sequences for 35 signs representing letters from A to Z, the 'SCH' character, umlauts \"A, \"O and \"U, and numbers from one to five. For each gesture, 20 different individuals were recorded twice, using two distinct cameras with different viewpoints, at resolutions of $320\times240$ and $352\times288$, giving a total of 1,400 samples. From this collection, we excluded all gestures requiring motion, i.e., the letters J, Z, \"A, \"O and \"U. The final subset contained 30 static gestures over 1,160 images. This collection was divided into disjoint training and test sets containing 928 and 232 images, respectively, in accordance with \citep{zimmermann2017learning}.
\subsubsection{Senz3D}
This dataset contains 11 different gestures performed by four different individuals. To increase the complexity of the samples, the authors collected gestures with similar characteristics (e.g., the same number of raised fingers, low distances between fingers, and touching fingertips). All gestures were captured using an RGB camera and a time-of-flight (ToF) depth sensor at a resolution of $320\times240$. Moreover, each gesture was repeated by each person 30 times, for a total of 1,320 acquisitions. All of the available samples from this collection were employed in the experiments.
\subsection{Data Augmentation}
Data augmentation is a common practice that can help a model generalize, making it more robust to input variations. In this work, up to four different groups of transformations, randomly selected during each iteration of the training phase, were applied to an input image to further increase the dissimilarities between samples (a sketch of this pipeline follows the list). These transformations were as follows:
\begin{itemize}
\item \textit{blur:} This is obtained by applying a Gaussian filter with varying strength, via a standard deviation $\sigma \in [1,3]$, or by computing the mean over a neighborhood using a kernel with a size of between $3\times3$ and $9\times9$;
\item \textit{random noise:} This is achieved by adding Gaussian noise to an image, either sampled randomly per pixel channel or once per pixel from a normal distribution $\mathcal{N}(0,0.05\cdot255)$;
\item \textit{artificial occlusion:} This can be attained either by dropping (i.e., setting to black) up to 30\% of the contiguous pixels, or by replacing up to 20\% of the pixels using a salt-and-pepper strategy;
\item \textit{photometric adjustments:} These are derived from arithmetic operations applied to the image matrix, for example by adding a value in the range $[-20, 20]$ to each pixel, by improving or worsening the image contrast, or by changing the brightness by multiplying the image matrix with a value in the range $[0.5, 1.5]$.
\end{itemize}
Note that all transformations only affect the appearance of an image, and leave the 2D/3D coordinates unaltered.
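Such a pipeline can be sketched with an off-the-shelf augmentation library; we assume the imgaug library here, and the parameter values below mirror the list above only approximately.
\begin{verbatim}
import imgaug.augmenters as iaa

# Up to four transformation groups, randomly selected per iteration.
augment = iaa.SomeOf((0, 4), [
    iaa.OneOf([iaa.GaussianBlur(sigma=(1.0, 3.0)),            # blur
               iaa.AverageBlur(k=(3, 9))]),
    iaa.AdditiveGaussianNoise(scale=0.05 * 255,               # noise
                              per_channel=0.5),
    iaa.OneOf([iaa.CoarseDropout(0.3, size_percent=0.1),      # occlusion
               iaa.SaltAndPepper(0.2)]),
    iaa.OneOf([iaa.Add((-20, 20)),                            # photometric
               iaa.LinearContrast((0.75, 1.25)),
               iaa.Multiply((0.5, 1.5))]),
], random_order=True)

# images_aug = augment(images=images)  # 2D/3D annotations stay unchanged
\end{verbatim}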
\subsection{Performance Evaluation}
The proposed system was developed using the PyTorch framework. All experiments were performed using an Intel Core i9-9900K @3.60GHz CPU, 32GB of RAM, and an Nvidia GTX 1080 GPU with 8GB GDDR5X RAM.
With this configuration, at inference time, the SFE, VE, and HE components required $64.9$, $9.6$, and $15.04$ ms, respectively, giving a total of $89.54$ ms per input image. The proposed system can therefore analyze approximately 11 images per second, regardless of the dataset used. Moreover, we note that this speed could be further improved with higher performance hardware. In fact, when using a more recent GPU model such as the Nvidia GeForce RTX 3080 with 10GB GDDR6X RAM, the total time required to analyze a single input image is reduced to $37.02$ ms, enabling about 27 images to be examined per second, i.e., a speed increase of roughly 2.4x with respect to our configuration.
To evaluate the proposed framework, we employ three metrics that are commonly used for 3D hand pose and shape estimation, and for hand gesture recognition. These are the 3D end-point-error (EPE) and area under the curve (AUC) for the former task, and accuracy for the latter.
The EPE used in the ablation studies is defined as the average Euclidean distance, measured in millimeters (mm), between the predicted and ground truth keypoints. The AUC is computed from the percentage of correct 3D keypoints (3D PCK) at different thresholds, over a range of 20--50 mm. For both metrics, the public implementation in \cite{zimmermann2017learning} is employed for a fair comparison.
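For reference, both metrics admit the following simple sketch (a simplified stand-in for the cited public implementation, not a reproduction of it).
\begin{verbatim}
import numpy as np

def epe(pred, gt):
    """Mean Euclidean distance (mm) between predicted and GT keypoints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pck_auc(pred, gt, thresholds=np.linspace(20, 50, 100)):
    """Area under the 3D PCK curve over the 20-50 mm threshold range."""
    d = np.linalg.norm(pred - gt, axis=-1)
    pck = [(d <= th).mean() for th in thresholds]
    return np.trapz(pck, thresholds) / (thresholds[-1] - thresholds[0])
\end{verbatim}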
\begin{table}[t]
\centering
\caption{Semantic feature extractor ablation study.}
\label{tab:sfe_ablation}
\begin{tabular}{p{.8\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
No semantic feature extractor (SFE) & 13.06 \\
SFE (heatmaps only) & 11.69 \\
SFE (heatmaps + silhouette) & 11.52 \\
ObMan pre-trained SFE (heatmaps + silhouette) & 11.12 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Viewpoint encoder ablation study.}
\label{tab:ve_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
2*ResNet modules VE & 28.77 \\
4*ResNet modules VE & 11.12 \\
5*ResNet modules VE & 12.25 \\
\hline
3*1024 dense layers VE (VE$_1$) & 10.79 \\
2*2048/1*1024 dense layers VE (VE$_2$) & 11.12 \\
3*2048 dense layers VE & 11.86 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Hand pose/shape estimator ablation study.}
\label{tab:hpse_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
15 PCA parameters MANO layer & 12.31 \\
30 PCA parameters MANO layer & 11.47 \\
45 PCA parameters MANO layer & 11.12 \\
\hline
No 2D re-projection & 11.56 \\
With 2D re-projection & 11.12 \\
\hline
No $L_{Reg}$ loss & 10.10 \\
With $L_{Reg}$ loss & 11.12 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Advanced components ablation study.}
\label{tab:adv_ablation}
\begin{tabular}{p{.75\columnwidth} c}
\hline
\textbf{Design choice} & \textbf{EPE} \\
\hline
No adapt skeleton \& hourglass summation & 11.12 \\
With adapt skeleton \& hourglass summation & 9.10 \\
With adapt skeleton \& hourglass concatenation \& (VE$_1$) & 9.03 \\
With adapt skeleton \& hourglass concatenation \& (VE$_2$) & 8.79 \\
\hline
\end{tabular}
\end{table}
\subsubsection{Framework Quantitative and Qualitative Results}\label{subsubsec:framework_quantitative_and_qualitative}
The proposed framework contains several design choices that were made in order to obtain stable 3D hand pose and shape estimations.
We therefore performed ablation studies to assess the effectiveness of each of these decisions. The obtained results are summarized in Tables \ref{tab:sfe_ablation}, \ref{tab:ve_ablation}, \ref{tab:hpse_ablation}, and \ref{tab:adv_ablation}, where each table shows the results for a given component, i.e., the SFE, VE, HE, and advanced framework components.
All of the reported EPE scores were computed for the STB dataset, while pre-training of the SFE unit was carried out exclusively on the ObMan dataset, since it contains a high number of synthetic images under various conditions.
For both collections, mini-batches of size six and an Adam optimizer with a learning rate of $10^{-4}$ and a weight decay of $10^{-5}$ were used to train the system. The framework was trained for 60 and 80 epochs on the ObMan and STB datasets, respectively, as the former contained substantially more samples.
In relation to the STB training time, which involves the entire framework, each mini-batch required $\sim$0.5 seconds to be analyzed by the specified hardware configuration, giving a total of $\sim$1278 s per training epoch.
The first experiment quantitatively evaluated the usefulness of the SFE component in terms of producing more accurate 3D hand pose and shape estimations. As shown in Table~\ref{tab:sfe_ablation}, while it is still possible to achieve estimations by feeding the input image directly to the VE (i.e., with no SFE), an improvement of $\sim$1.5 mm in the estimation can be obtained by extracting the semantic features. This indicates that the 2D heatmaps allow the network to focus on the positions of the hand joints, whereas generating the heatmaps and silhouette simultaneously enables a comprehensive view of the hand that forces the joints to be placed in the right position when moving to a 3D plane. This behavior is further supported by the results of pre-training the SFE component on the ObMan dataset, where the occlusions force the network to create meaningful abstractions for both the 2D heatmaps and silhouette.
The second test gauged the quality of the VE in terms of estimating the view parameters $v$. As shown in Table~\ref{tab:ve_ablation}, using either a low or a high number of ResNet modules (i.e., two in the former case, five or more in the latter) to produce the latent space representation $l_v$ results in an increased EPE score, associated with underfitting and overfitting, respectively. Slightly better performance can be achieved by reducing the sizes of the dense layers used to build up the vector $v$. However, although the smaller VE (i.e., $VE_1$) can perform better than a larger one (i.e., $VE_2$), this result does not hold when extra steps are included, such as skeleton adaptation and hourglass output concatenation (shown in Table~\ref{tab:adv_ablation}), suggesting that some information can still be lost.
The third experiment, which is summarized in Table~\ref{tab:hpse_ablation}, focused on the hand pose and shape estimator. Tests were performed on the number of parameters of the MANO layer, the proposed 2D re-projection, and the regularization loss. Increasing the number of hand articulations allowed us, as expected, to obtain more realistic hands and consequently more precise estimations when all 45 values were used. Applying the proposed 2D re-projection further reduced the EPE score by providing the MANO layer with direct feedback on its output. Employing the regularization loss resulted in a slightly higher keypoint distance; without it, however, the hand shape collapses onto itself, as shown in the bottom row of Fig.~\ref{fig:ablation_qualitative}.
The fourth and last test, reported in Table~\ref{tab:adv_ablation}, dealt with advanced strategies, i.e., skeleton adaptation (adapt skeleton) and hourglass output concatenation rather than summation. The former strategy allowed for a significant performance boost (a 2 mm lower EPE) since it directly refines the 3D joints produced by the MANO layer. Stacked hourglass output concatenation improved the precision of the system by a further 0.31 mm, since it provided the VE with a fine-grained input representation. However, this detailed input description requires larger dense layers (i.e., $VE_2$) to avoid losing information. Consequently, using smaller dense layers (i.e., $VE_1$) results in an increase in the EPE score.
\begin{figure*}[t]
\begin{overpic}[width=\textwidth]{images/section4/ablation.pdf}
\put(5.6,0){(a)}
\put(19.1,0){(b)}
\put(32.9,0){(c)}
\put(51.2,0){(d)}
\put(82.2,0){(e)}
\end{overpic}
\caption{STB dataset 3D pose and shape estimation outputs. From top to bottom, the presented framework, framework without 2D re-projection, framework without SFE module, and full framework without regularization loss. Input image, 2D joints, silhouette, 3D joints, and mesh, are shown in (a), (b), (c), (d), and (e), respectively.}
\label{fig:ablation_qualitative}
\end{figure*}
\begin{figure*}[t]
\begin{overpic}[width=\textwidth]{images/section4/failed_ablation.pdf}
\put(5.6,0){(a)}
\put(19.1,0){(b)}
\put(32.9,0){(c)}
\put(51.2,0){(d)}
\put(82.2,0){(e)}
\end{overpic}
\caption{STB dataset failed 3D pose and shape estimation outputs for the presented framework. Input image, 2D joints, silhouette, 3D joints, and mesh, are shown in (a), (b), (c), (d), and (e), respectively.}
\label{fig:failure_qualitative}
\end{figure*}
\begin{table*}[t]
\centering
\caption{AUC state-of-the-art comparison on STB dataset. Works are subdivided according to their input and output types.}
\label{tab:auc_comp}
\begin{tabular}{l ccc}
\hline
\textbf{Model} & \textbf{Input} & \textbf{Output} & \textbf{AUC} \\
\hline
CHPR \citep{sun2015cascaded} & Depth & 3D skeleton & 0.839 \\
ICPPSO \citep{qian2014realtime} & RGB-D & 3D skeleton & 0.748 \\
PSO \citep{oikonomidis2011efficient} & RGB-D & 3D skeleton & 0.709 \\
Dibra et al. \cite{dibra2018monocular} & RGB-D & 3D skeleton & 0.923 \\
\hline
Cai et al. \cite{cai20203d} & RGB & 3D skeleton & 0.996 \\
Iqbal et al. \cite{iqbal2018hand} & RGB & 3D skeleton & 0.994 \\
Hasson et al. \cite{hasson2019learning} & RGB & 3D skeleton & 0.992 \\
Yang and Yao \cite{yang2019disentangling} & RGB & 3D skeleton & 0.991 \\
Zimmermann and Brox \cite{zimmermann2017learning} & RGB & 3D skeleton & 0.986 \\
Spurr et al. \cite{spurr2018cross} & RGB & 3D skeleton & 0.983 \\
Mueller et al. \cite{mueller2018ganerated} & RGB & 3D skeleton & 0.965 \\
Panteleris et al. \cite{panteleris2018using} & RGB & 3D skeleton & 0.941 \\
\hline
Ge et al. \cite{ge20193d} & RGB & 3D skeleton+mesh & 0.998 \\
Baek et al. \cite{baek2020weakly} & RGB & 3D skeleton+mesh & 0.995 \\
Zhang et al. \cite{zhang2019end} & RGB & 3D skeleton+mesh & 0.995 \\
Boukhayma et al. \cite{boukhayma20193d} & RGB & 3D skeleton+mesh & 0.993 \\
ours & RGB & 3D skeleton+mesh & 0.995 \\
\hline
\end{tabular}
\end{table*}
The differences in the qualitative results for different framework configurations are shown in Fig.~\ref{fig:ablation_qualitative}. From the top row to the bottom, the outputs correspond to the proposed framework, the framework without 2D re-projection, the framework without the SFE module, and the full framework without the regularization loss. As can be seen, the most important component in terms of obtaining coherent hand shapes is the regularization loss, since otherwise the mesh collapses onto itself in order to satisfy the 3D joint locations during training time (bottom row in Fig.~\ref{fig:ablation_qualitative}.c and Fig.~\ref{fig:ablation_qualitative}.e).
When we employ the SFE module (first two rows in Fig.~\ref{fig:ablation_qualitative}), more accurate 3D joints and shapes are generated since the SFE enforces both the correct localization of joints and the generation of a more realistic silhouette (Figs.~\ref{fig:ablation_qualitative}.b, c, and d).
When we re-project the generated 3D-coordinates and mesh, the final 3D joint locations and hand shape (Figs.~\ref{fig:ablation_qualitative}.d and e) are more consistent with both the estimated 2D locations and the input image (Figs.~\ref{fig:ablation_qualitative}.b and a). To conclude this qualitative evaluation, some examples of failed 3D pose and shape estimations are shown in Fig.~\ref{fig:failure_qualitative}. It can be seen that although coherent hand poses and shapes are generated, the framework is unable to produce the correct output due to wrong estimations of both the 2D joints and the silhouette by the SFE. This preliminary error is then amplified by the subsequent framework modules, as can be seen from the discrepancy between the 2D and 3D joint locations shown in Figs.~\ref{fig:failure_qualitative}.b and d. Although the loss described in Section~\ref{subsec:multi_task_loss} ultimately forces the MANO layer to produce consistent hands, as also discussed for the third experiment, it can also result in greater inaccuracy in the 3D joint position estimation to guarantee such a consistency. This outcome has two implications: firstly, it indicates that there is still room for improvement, particularly for the 2D joints and silhouette estimations that represent the first step in the proposed pipeline; and secondly, it highlights the effectiveness of the proposed framework, which can generate stable 3D representations from RGB images.
\subsubsection{Comparison of 3D Hand Pose/Shape Estimation}
To demonstrate the effectiveness of the proposed framework, a state-of-the-art 3D PCK AUC comparison was carried out, as shown in Table~\ref{tab:auc_comp}. As can be seen, the presented system is competitive with other successful schemes while using only RGB images and outputting both 3D skeleton locations and mesh, indicating that all of our design choices allow the framework to generate good estimates using only RGB information. It is particularly interesting that the proposed method was able to easily outperform systems that exploited depth data, suggesting that the simultaneous use of the multi-task SFE, VE, and 2D re-projection can help to produce correct estimations by compensating for the missing depth information.
Specifically, the multi-task SFE enables the implementation of a customized VE that, unlike the scheme in \cite{boukhayma20193d}, does not need to be pre-trained on a synthetic dataset in order to perform well; there is also no need to normalize the latent feature space representation by using a VAE to disentangle the different factors influencing the hand representation in order to obtain accurate 3D hand poses, as described in \cite{yang2019disentangling}.
Furthermore, thanks to the re-projection module, these results are obtained without applying an iterative regression module to the MANO layer, unlike in \cite{zhang2019end} and \cite{baek2020weakly}, where progressive changes are carried out recurrently to refine the estimation parameters; this simplifies the training procedure. In addition, the solutions implemented in the proposed framework allow us, firstly, to avoid input assumptions and post-processing operations, unlike the majority of schemes in the literature, where some parameter (e.g., the global hand scale or root joint depth) is assumed to be known at test time; and secondly, to achieve similar performance to the best model devised in \cite{ge20193d}, even though the latter scheme employs a more powerful solution (i.e., a graph CNN instead of a fixed MANO layer) for the 3D hand pose and shape estimation.
To conclude this comparison, the 3D PCK curve computed at different thresholds for several state-of-the-art works is shown in Fig.~\ref{fig:3d_pck}. It can be seen that performance on the 3D hand pose and shape estimation task is becoming saturated, and newer works can consistently achieve high performance with low error thresholds. In this context, the proposed method is on par with the top works in the literature, further supporting the view that all of our design choices allowed the framework to generate good 3D poses and shapes from monocular RGB images.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/section4/3dpck.pdf}
\caption{3D PCK state-of-the-art comparison on STB dataset.}
\label{fig:3d_pck}
\end{figure}
\subsubsection{Comparison on Hand Gesture Recognition}
To assess the generalizability of the proposed framework, experiments were performed on the RWTH and Senz3D datasets. Since our architecture does not include a classification component, it was extended by attaching the same classifier described in \cite{zimmermann2017learning} to handle the new task. This classifier takes as input the 3D joint coordinates generated by the MANO layer, and consists of three fully connected layers with a ReLU activation function.
Note that all of the weights (except those used for the classifier) are frozen when training the system on the hand gesture recognition task, so that it is possible to correctly evaluate the generalizability of the framework.
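A minimal sketch of this classifier head follows; the hidden layer sizes are assumptions made for illustration.
\begin{verbatim}
import torch.nn as nn

num_classes = 30  # RWTH static gestures; 11 for Senz3D

# Three fully connected layers with ReLU, fed with the MANO 3D joints.
classifier = nn.Sequential(
    nn.Flatten(),                  # (B, 21, 3) 3D joints -> (B, 63)
    nn.Linear(63, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, num_classes))
\end{verbatim}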
Moreover, although weights are frozen, the entire framework still needs to be executed. Hence, the majority of the training time is spent on the generation of the 3D joint coordinates, since this is where most of the computation is performed; as a result, each mini-batch is analyzed by the specified hardware configuration in $\sim$222 ms (i.e., $\sim$37 ms per image), giving total times of $\sim$206 s and $\sim$235 s per epoch for the RWTH and Senz3D datasets, respectively.
Our experiments followed the testing protocol devised in \cite{dibra2018monocular} in order to present a fair comparison; this consisted of 10-fold cross-validation with non-overlapping 80/20 splits for the training and test sets, respectively. In a similar way to other state-of-the-art works, all images were cropped close to the hand to remove as much background as possible and meet the requirements for the input size, i.e., $256\times256$.
The results are shown in Table~\ref{tab:gesture_recog_comp}, and a comparison with other schemes in the literature is presented.
It can be seen that our framework consistently outperformed another work focusing on 3D pose and shape estimation (i.e., \cite{zimmermann2017learning}) on both datasets, meaning that it generates more accurate joint coordinates from the RGB image; the same result was also obtained for the estimation task, as shown in Table~\ref{tab:auc_comp}.
However, methods that exploit depth information (i.e., \cite{dibra2018monocular}) or concentrate on hand gesture classification (i.e., \cite{papadimitriou2019fingerspelled}) can still achieve slightly higher performance. There are two reasons for this.
Firstly, by concentrating on the hand gesture classification task, lower performance is achieved on the estimation task, even though similar information, such as the 3D joint locations, is used. As a matter of fact, even though they exploit depth information in their work, the authors of \cite{dibra2018monocular} obtained an AUC score of 0.923, while the scheme in \cite{zimmermann2017learning} and the proposed framework achieved AUC scores of 0.986 and 0.995 on the STB dataset for 3D hand pose estimation, respectively. Secondly, as discussed in Section~\ref{subsubsec:framework_quantitative_and_qualitative} and shown by the qualitative results in Fig.~\ref{fig:failure_qualitative}, the proposed architecture could be improved further by increasing the estimation accuracy of the 2D joints and silhouette, indicating that if a good hand abstraction is used to derive the 3D hand pose and shape, this can be effective for the hand gesture recognition task.
In summary, the proposed method achieves state-of-the-art performance on the 3D hand pose and shape estimation task, can outperform other existing estimation approaches when applied to the hand gesture recognition task, and behaves in a comparable way to other specifically designed hand gesture recognition systems. This indicates that the proposed pipeline outputs stable hand pose estimations that can be effectively used to recognize hand-gestures.
\begin{table}[t]
\centering
\caption{Hand-gesture recognition accuracy comparison.}
\label{tab:gesture_recog_comp}
\begin{tabular}{l cc}
\hline
\textbf{Model} & \textbf{RWTH} & \textbf{Senz3D} \\
\hline
Papadimitriou and Potamianos \cite{papadimitriou2019fingerspelled} & 73.92\% & - \\
Memo and Zanuttigh \cite{memo2018head} & - & 90.00\% \\
Dibra et al. \cite{dibra2018monocular} & 73.60\% & 94.00\% \\
Dreuw et al. \cite{dreuw2006modeling} & 63.44\% & - \\
Zimmermann and Brox \cite{zimmermann2017learning}* & 66.80\% & 77.00\% \\
ours* & 72.03\% & 92.83\% \\
\hline
\end{tabular}\\
\footnotesize{$^*$method focusing on 3D hand pose and shape estimation}
\end{table}
\section{Conclusion}\label{sec:conclusions}
In this paper, we have presented an end-to-end framework for the estimation of 3D hand pose and shape, and successfully applied it to the task of hand gesture recognition.
Our model comprises three modules, a multi-task SFE, a VE, and an HE with weak re-projection, each of which has certain strengths and weaknesses.
More specifically, the SFE, thanks to its pre-training providing multi-task context, enables our architecture to achieve similar performance to more powerful schemes in the literature that exploit graph representations, for instance. However, the 2D joints/silhouettes estimates could be improved through the use of more accurate algorithms for the generation of ground truths.
The VE did not require any form of pre-training, due to the fine-grained input from the SFE; this module was able to generate the parameters necessary to move from a 2D space to a 3D one, making it more easily applicable to a diverse range of tasks and datasets.
It was important to experiment in order to find the right design for the VE, since this could have broken the entire system in scenarios of over- or underfitting where the produced viewpoint parameters do not allow for a correct estimation of the hand pose/shape.
Finally, unlike other existing works, the HE was able to output accurate estimations without requiring an iterative process, as its re-projection procedure allowed for closer correlation between the 3D and 2D hand representations during training. However, a regularization term was still required, as without this the meshes completely collapsed onto themselves when the system tried to generate near-perfect spatial 3D joint estimations.
We further improved our architecture through the use of advanced strategies such as skeleton adaptation and hourglass output concatenation, to obtain both more refined 3D joint locations and finer grained input representations.
Our experimental results demonstrate that the multi-task SFE, VE, HE with weak re-projection, and the use of advanced strategies, which were designed by exploiting and extending schemes in the literature, achieved state-of-the-art performance on the task of 3D hand pose and shape estimation.
Moreover, when applied to hand gesture recognition on both benchmark datasets, our framework outperformed other schemes that were devised for the estimation task and later employed to recognize hand gestures.
In future work, to address the current weaknesses of our system, we plan to upgrade the SFE by first increasing the accuracy of its 2D heatmaps and silhouette estimation by generating the corresponding ground truths via deep learning-based segmentation algorithms. We will also design other meaningful features to extend the multi-task strategy.
In addition, we will explore solutions for the HE that are not based on the MANO layer but on other approaches such as graph representations of the hand, in order to increase the abstraction capabilities of the model. In view of this, additional experiments will be performed in which we will retain the 3D shape when moving to the hand gesture recognition task, with the aim of improving the final results.
Although the proposed model currently addresses the pose/shape estimation of single hands by design, it could be extended to simultaneously handle inputs containing both hands. Thus, another possible avenue for future work would involve the exploration of alternative architectural extensions or design modifications to handle input images containing two hands.
Moreover, since the proposed architecture can analyze roughly 27 images per second on recent hardware, we will design an adaptation to try to achieve real-time gesture recognition from video sequences.
\section*{Acknowledgment}
This work was supported in part by the MIUR under grant “Departments of Excellence 2018–2022” of the Department of Computer Science of Sapienza University.